Article

Three-Dimensional Borehole–Surface TEM Forward Modeling with a Time-Parallel Method

1 College of Geophysics, Chengdu University of Technology, Chengdu 610059, China
2 Yellow River Engineering Consulting Co., Ltd., Zhengzhou 450003, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(3), 1161; https://doi.org/10.3390/app16031161
Submission received: 5 December 2025 / Revised: 20 January 2026 / Accepted: 20 January 2026 / Published: 23 January 2026
(This article belongs to the Special Issue Exploration Geophysics and Seismic Surveying)

Abstract

The three-dimensional borehole-to-surface transient electromagnetic (BSTEM) method plays a critical role in resolving subsurface conductivity structures under complex geological conditions. However, its application is often constrained by the high computational costs associated with large-scale simulations and fine temporal resolution. In this study, a time-parallel forward modeling strategy is developed by integrating the finite volume method (FVM) with the Multigrid Reduction-in-Time (MGRIT) algorithm. Maxwell's equations are discretized in space using unstructured octree meshes, while the MGRIT algorithm enables parallelism along the time axis through a coarse–fine temporal grid hierarchy and multilevel iterative correction. Numerical experiments on synthetic and field-scale models demonstrate that the MGRIT-based solver significantly reduces computational time compared to conventional direct solvers, particularly when a large number of processors is utilized. In a field-scale hematite mine model, the MGRIT-based solver reduces the total runtime by more than 40% while maintaining numerical accuracy. The method exhibits good parallel scalability and is especially advantageous for problems involving a large number of time channels, where simultaneous time-step updates offer substantial performance gains. These results confirm the effectiveness and robustness of the proposed approach for large-scale 3D TEM simulations under complex conditions and provide a practical foundation for future applications in high-resolution electromagnetic modeling and imaging.

1. Introduction

The borehole-to-surface transient electromagnetic (BSTEM) method places the transmitter in a borehole and records the secondary electromagnetic fields at the surface. Compared with the ground transient electromagnetic method, BSTEM observes stronger signals and suffers less interference [1,2]. Over decades of development, the borehole transient electromagnetic method has been applied to underground mine fracturing construction [3,4], borehole reservoir monitoring [5], and determination of the location and size of tunnels [6,7], among other applications. The borehole transient electromagnetic method aims to provide information on geologic bodies in the subsurface. It is therefore frequently required to simulate complex topography and background resistivity heterogeneity in 3D, which demands substantial computing resources. So far, the speed of forward modeling has been the major obstacle to 3D TEM inversion [8,9,10,11].
At present, the main strategies for spatial discretization in 3D transient electromagnetic modeling include the finite element method (FEM), the finite-difference time-domain (FDTD) method, and the finite volume method (FVM). Among these, the finite element method, the most commonly used in 3D TEM forward modeling, can improve simulation accuracy through high-order basis functions [12,13]. Although the finite element method is generally considered more accurate, the FV method is advantageous in terms of computational cost and flexibility [14,15]. Time-domain electromagnetic responses have been simulated on octree grids using the finite volume method [16]. The finite volume method provides an efficient and physically consistent framework for electromagnetic modeling: it naturally supports unstructured meshes, enables accurate representation of complex geological interfaces, allows local grid refinement, and retains the simplicity of finite difference schemes while offering improved geometric flexibility [17]. It should be noted that the system matrix generated by the FV method contains fewer non-zero entries than that produced by the FE method, which may lead to reduced computational costs. A good level of agreement between FV and FE modeling results has been reported [18].
Parallel computing has become the primary technology for improving the computational efficiency of 3D TEM modeling [8,9,10,11,19]. Several parallelization strategies have been developed for 3D transient electromagnetic simulations. Parallel finite difference schemes based on staggered grids and modified DuFort–Frankel methods have been proposed to accelerate diffusive electromagnetic modeling [19]. Parallel finite element time-domain frameworks have been introduced, in which multithreading speeds up preconditioner construction and matrix–vector multiplication [8]. Frequency–time domain transformation techniques have also been explored to exploit parallelism in the frequency domain and thereby reduce the overall computational cost [9]. In addition, GPU-based parallelization using user-friendly programming frameworks has been applied to enable efficient large-scale simulations without extensive low-level implementation effort [10]. Another approach relies on locally optimized meshes for each electromagnetic sounding, allowing forward modeling and sensitivity calculations to be performed independently and then mapped onto a global mesh through fast interpolation, which further enhances parallel efficiency [11]. Unlike frequency-domain EM, time-domain EM requires the discretization of time. An adaptive time-stepping scheme has the potential to decrease both the number of unknowns and the number of time steps [20,21,22,23]. To reduce computational costs, a common strategy is a "stepped" time discretization: holding Δt constant within each block reduces the number of times the system matrix must be refactorized, while the small time steps retained at early times are necessary to resolve the high-frequency components of the TEM response [20].
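As an illustration of this stepped discretization, the short Python sketch below builds a time grid whose step is constant within each block and grows between blocks; the block length and growth factor are illustrative assumptions, not the settings used later in this paper.

```python
import numpy as np

def stepped_time_grid(t_end, dt0, steps_per_block, growth=2.0):
    """Build a 'stepped' time grid: dt is constant within each block
    (so the system matrix is factorized once per block) and grows by
    `growth` between blocks. The block layout here is illustrative."""
    times = [0.0]
    dt = dt0
    while times[-1] < t_end:
        for _ in range(steps_per_block):
            times.append(times[-1] + dt)
            if times[-1] >= t_end:
                break
        dt *= growth  # coarser steps for the smooth late-time response
    return np.array(times)

# Example: fine early steps resolve high-frequency content after shut-off,
# while late times use progressively larger steps.
grid = stepped_time_grid(t_end=0.1, dt0=1e-5, steps_per_block=10)
print(len(grid), grid[:3], grid[-1])
```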
To introduce parallelism in time into transient electromagnetic (TEM) forward modeling, we apply the Multigrid Reduction-in-Time (MGRIT) algorithm, a reduction-based time-multigrid method for solving time-dependent problems [24]. It has been employed in the time integration of partial differential equations, including the heat equation, wave equation, and convection–diffusion equation, achieving significant parallel efficiency while preserving numerical stability [25]. MGRIT has also been applied in electromagnetics and circuit simulation, where it accelerates the computation of transient electromagnetic fields and circuit networks. In time-domain electromagnetic simulations, MGRIT alleviates the strict stability and computational constraints typically associated with conventional explicit or implicit time-stepping schemes. By exploiting multilevel parallelism in time, MGRIT enables the efficient simulation of large-scale electromagnetic wave propagation and diffusion problems [26,27]. These studies suggest that MGRIT can effectively reduce the sequential limitations of time-domain solvers in computational electromagnetics.
Unlike conventional serial time-stepping schemes, the algorithm iteratively updates an initial space–time solution across all time levels simultaneously, thereby enabling parallel computation in the temporal dimension. Essentially, the algorithm functions as a time-domain multigrid method: the time grid is divided into coarse and fine levels, with exact solutions computed on the coarse grid and interpolated to the fine grid. The algorithm accelerates the solution of the time-domain electromagnetic problem as a whole, at the cost of additional computational resources. The main objectives of this study are (1) to develop a time-parallel forward modeling framework for 3D borehole-to-surface TEM based on the MGRIT algorithm; (2) to evaluate its computational efficiency and scalability for large-scale models; and (3) to validate its numerical accuracy against conventional time-stepping methods using field-scale examples. The paper is structured as follows. We first briefly review forward modeling for TEM with the finite volume method. We then introduce the time-parallel algorithm. We discuss the performance of the Multigrid Reduction-in-Time parallelization on a synthetic example. Finally, we verify the parallelism of the algorithm on a field borehole–surface TEM example.

2. Data

2.1. Synthetic BSTEM Model Parameters

To demonstrate the performance of the MGRIT algorithm, a synthetic borehole-to-surface transient electromagnetic model is constructed, as illustrated in Figure 1. In this model, the air resistivity is set to 10^8 Ω·m, while the background resistivity is 500 Ω·m. A conductive anomaly with a resistivity of 0.5 Ω·m is embedded at a depth of 400 m; its dimensions are 200 m × 200 m × 150 m. The borehole-to-surface transmitter has a total length of 100 m and is centered at the coordinates (200 m, 200 m, −300 m); the transmitting current is 1 A, and the source is excited by a transient pulse. The computational domain is discretized using an octree grid with a minimum cell size of 40 m. To improve numerical accuracy, regions around the electromagnetic source and the surface receiver array are locally refined. The final grid comprises 86,215 elements. A receiver array of 121 stations is deployed directly above the anomaly, with a station spacing of 100 m. The data acquisition spans a time range from 0 ms to 100 ms, and the impulse response of the electric field component Ex is computed at 40 discrete time channels. Table 1 summarizes the numerical and MGRIT-related parameters used in the synthetic model experiments. These parameters define the spatial discretization and coarsening configuration employed in the parallel simulations. The corresponding numerical results are presented and discussed in Section 4.1.

2.2. Field Study Site: Hubei Hematite Mine

To verify the parallelization performance of the MGRIT algorithm on field-scale models, we constructed a complex 3D model with realistic time-step settings. The model is based on an active hematite mining area in Hubei Province, China, where borehole and surface electromagnetic surveys are commonly used for deep ore body exploration. The local geology is dominated by sedimentary and metamorphic rocks, with hematite ore bodies mainly occurring in structurally controlled zones. The ore bodies typically exhibit strong electrical conductivity contrasts relative to the surrounding host rocks, which makes the site well suited for BSTEM investigations, and its geological complexity provides a representative field case for validating the proposed modeling method. To describe the study area more clearly, the relative positions of the borehole transmitter and surface receiver electrodes used in the field survey are shown in Figure 2: the red circles denote the borehole locations, while the yellow dots represent the surface receiver electrodes. The surface observation system follows a regular grid layout with 10 survey lines spaced 200 m apart, each carrying 20 measurement stations spaced 100 m apart; this yields 200 measurement locations for recording the horizontal electric field components over a total coverage area of approximately 4 km². The mine hosts a complex hematite body at a depth of 600 m with a resistivity of 0.5 Ω·m. In this model, the air resistivity is set to 10^8 Ω·m, while the background resistivity is 100 Ω·m. The electromagnetic source used in the simulations is a vertical line current source positioned within the borehole, with its orientation aligned along the vertical direction. The total length of the source is 554 m, the transmitting current is 10 A, and the source is excited by a transient pulse. The line source extends from (732 m, −561 m, −20 m) to (732 m, −561 m, −574 m); its position and orientation are illustrated by the pink line in Figure 3, representing the borehole-to-surface transmitter. The model is discretized using octree meshes, as shown in Figure 3. The three-dimensional computational domain, spanning 6000 m laterally, is discretized using an octree grid with a minimum cell size of 20 m. To improve numerical accuracy, regions around the electromagnetic source and the surface receiver array are locally refined. Additionally, the subsurface region encompassing the anomaly is further refined, spanning approximately 1000 m in both the X and Y directions and extending from a depth of about 400 m to 1500 m below the surface. Each cell is cubic, with electric field components defined at the centers of the edges. This mesh structure captures the complex geometry of the anomalous bodies with a total of 232,105 subdivided elements. The x component of the electric field is computed at each receiver. The number of time steps is 60 in this example, and the coarsening factor of the time grid is set to 3 to reduce communication costs between processors. Table 2 presents the mesh configuration and numerical parameters for the field example, including the discretization settings and MGRIT-related configurations.
The corresponding numerical results are presented and discussed in Section 4.2.

2.3. Octree Mesh and Discretization Strategy

An octree mesh is employed to efficiently discretize the computational domain with variable resolution. In this approach, each cubic control volume can be recursively subdivided into smaller cubes where higher resolution is required, allowing fine discretization in regions of interest while keeping the total number of cells manageable. In the finite volume discretization, the electric field components are located at the centers of octree cell edges, whereas the magnetic field components are placed at the centers of the cell faces. In this study, the mesh is refined around the downhole source, the near-surface observation points, and within the conductive anomaly body to accurately capture the spatial variations in the electromagnetic fields. In less critical regions, coarser cells are used to reduce computational costs. This strategy ensures accurate representation of complex topography and localized features while maintaining overall computational efficiency.
The computational domain is discretized using an adaptive octree mesh, in which cell sizes are controlled by a hierarchy of refinement levels relative to geological features, sources, receivers, and topography. The smallest cell width is denoted by $h$, and refinement is specified through a list of octree levels $[n_1, n_2, n_3, \dots]$. Cells whose centers fall within a distance of $n_1 h$ from any of these features are assigned the finest resolution $h$. Cells located within a distance of $n_2 \cdot 2h$ are assigned a cell width of $2h$, and those within a distance of $n_3 \cdot 4h$ use a width of $4h$. This graded refinement ensures a smooth transition from fine to coarse resolution while preserving high accuracy in regions of interest. The field example is discretized using an adaptive octree mesh, in which cell sizes are controlled by a hierarchy of refinement levels relative to topography, the borehole source, surface receivers, and the hematite ore body. The surface topography is represented by a triangulated surface constructed from the supplied elevation points, and refinement is applied only below this surface. Using the octree levels $[2, 4]$, the smallest cell size $h$ is enforced in a layer extending downward from the topographic surface. Beneath this layer, cell widths increase gradually, following the octree hierarchy ($2h$, $4h$), ensuring that near-surface electromagnetic fields and topographic effects are resolved accurately while maintaining computational efficiency at depth. No refinement is applied above the topographic surface. The borehole transmitter and surface receivers are treated as point and line features that define local refinement zones. An octree level configuration of $[2, 4]$ is used, which enforces the smallest cell size $h$ in the vicinity of the electrodes and then progressively coarsens the mesh with increasing distance. This ensures that the rapid spatial variation in the electric field near the source and observation locations is captured with sufficient resolution. To accurately resolve the geometry and electromagnetic response of the ore body, an additional localized refinement region is defined around the anomalous target, spanning −500 to 500 m in the x-direction, −250 to 250 m in the y-direction, and −400 to 1500 m in the z-direction. Inside this region, octree levels $[0, 2, 4]$ are applied, meaning that the $2h$ cell size is used in the core of the anomaly, followed by gradually increasing cell sizes away from it. This strategy ensures both the geometric fidelity of the ore body and accurate representation of strong conductivity contrasts.
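To illustrate the graded-refinement rule, the following Python sketch (our own helper, not part of any meshing library) maps the distance between a cell center and the nearest refinement feature to a target cell width for a level list such as $[2, 4]$.

```python
def target_cell_width(dist, h, levels):
    """Return the cell width implied by the graded octree rule:
    within n1*h of a feature -> width h, within n2*2h -> 2h,
    within n3*4h -> 4h, and so on; `levels` = [n1, n2, n3, ...].
    Returns None when the feature does not constrain this cell."""
    width = h
    for n in levels:
        if dist <= n * width:
            return width
        width *= 2.0  # each octree level doubles the cell width
    return None

# Example with levels [2, 4] and h = 20 m: cells within 2h = 40 m get
# width h; cells within 4*(2h) = 160 m get width 2h; beyond that the
# feature imposes no constraint.
h = 20.0
for d in (30.0, 100.0, 400.0):
    print(d, target_cell_width(d, h, [2, 4]))
```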

3. Materials and Methods

3.1. Time-Domain Diffusion Equation

For quasi-static time-domain electromagnetic problems, and considering only electrical sources, we have [15,16]
$\nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = \mathbf{0}$, (1)
$\nabla \times \left( \mu^{-1} \mathbf{B} \right) - \sigma \mathbf{E} = \mathbf{s}_e$, (2)
$\mathbf{n} \times \mathbf{E} = \mathbf{0}, \quad \mathbf{B} \times \mathbf{n} = \mathbf{0}$, (3)
where $\mathbf{E}$ is the electric field vector, $\mathbf{B}$ is the magnetic flux density vector, $\mathbf{s}_e$ is the electrical source term, $\mu$ is the magnetic permeability in the computational domain, and $\mathbf{n}$ is the outward normal vector on the boundary of the computational domain; the natural boundary conditions, defined by Equation (3), are applied along the computational boundary. A first-order backward Euler scheme is used for the time discretization. Eliminating the magnetic flux density from Equations (1) and (2) yields the governing equation for the electric field,
$\nabla \times \left( \mu^{-1} \nabla \times \mathbf{E} \right) + \sigma \frac{\partial \mathbf{E}}{\partial t} = -\frac{\partial \mathbf{s}_e}{\partial t}$, (4)
where σ is the electrical conductivity in the computational domain. A three-dimensional BSTEM method under complex topography is developed using an octree grid to accurately represent the long downhole conductor, deep ore bodies, and complex terrain. An implicit Euler finite volume scheme is employed to ensure stable and efficient 3D BSTEM simulations under these conditions.
To define a weak-form solution of the partial differential equation, the inner product is taken as $\langle \mathbf{a}, \mathbf{b} \rangle = \int_{\Omega} \mathbf{a} \cdot \mathbf{b} \, dv$. Next, a test function is defined within the same Sobolev space as $\mathbf{E}$ and is employed to construct the weak form of the finite volume equations. After integrating both sides of Equation (4) and applying integration by parts together with the natural boundary conditions, the weak form of the time-domain Maxwell equations is obtained. Finally, this weak form is written as the semi-discrete Maxwell diffusion equation (5), with discrete spatial operators and a continuous time derivative:
$\mathbf{C}^{\mathrm{T}} \mathbf{M}_{1/\mu} \mathbf{C}\, \mathbf{E} + \mathbf{M}_{\sigma} \frac{\partial \mathbf{E}}{\partial t} = -\frac{\partial \mathbf{s}_e}{\partial t}$, (5)
where $\mathbf{C}$ is the discrete curl operator, $\mathbf{M}_{1/\mu}$ is the face inner-product matrix of the reciprocal of the magnetic permeability, and $\mathbf{M}_{\sigma}$ is the inner-product matrix of the electrical conductivity acting on the edge-based electric field.
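For concreteness, the following Python sketch shows one backward-Euler step of the semi-discrete system (5) using SciPy's SPLU factorization, the solver employed in this study. The sparse operators here are random placeholders standing in for the discrete curl and inner-product matrices, so the example only exercises the linear-algebra pattern, not a physical mesh; with a constant Δt block of the stepped time grid, the factorization is computed once and reused.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def backward_euler_factor(C, M_mu_inv, M_sigma, dt):
    """Factorize A = C^T M_{1/mu} C + M_sigma/dt once per constant-dt
    block; C, M_mu_inv, M_sigma are placeholders for the operators
    appearing in Equation (5)."""
    A = (C.T @ M_mu_inv @ C + M_sigma / dt).tocsc()
    return splu(A)

def backward_euler_step(lu, M_sigma, dt, e_old, dsdt):
    """Advance E by one implicit Euler step:
    (C^T M_{1/mu} C + M_sigma/dt) e_new = (M_sigma/dt) e_old - ds/dt."""
    rhs = (M_sigma / dt) @ e_old - dsdt
    return lu.solve(rhs)

# Tiny synthetic system just to exercise the routines (not a real mesh)
n_edges, n_faces = 50, 40
rng = np.random.default_rng(0)
C = sp.random(n_faces, n_edges, density=0.1, random_state=0).tocsr()
M_mu_inv = sp.eye(n_faces)
M_sigma = sp.eye(n_edges)
lu = backward_euler_factor(C, M_mu_inv, M_sigma, dt=1e-4)
e = backward_euler_step(lu, M_sigma, 1e-4,
                        np.zeros(n_edges), rng.normal(size=n_edges))
```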

3.2. Multigrid Reduction-in-Time Algorithm

The MGRIT (Multigrid Reduction-in-Time) algorithm is a reduction-based iterative method that enables parallel-in-time simulation by computing multiple time steps simultaneously. In practical applications of MGRIT, the time grid can be divided into several grids with different coarsening levels. Here, we briefly describe only the two-grid variant of MGRIT [24,28].
An initial guess must first be specified for the solution at all time steps and can be chosen arbitrarily. We denote the initial TEM solution by $e_0$, the time-propagation operator by $\Phi$, and the initial right-hand side by $g_0$. In this study, the time-parallel algorithm is implemented using the pyMGRIT library, and the solution at the beginning of each temporal sub-interval is initialized with Gaussian random values. The MGRIT iterations then enforce consistency between the fine and coarse time grids until convergence is achieved. Sequential single-step time-stepping is given by
$e_0 = g_0$, (6)
$e_i = \Phi_i(e_{i-1}) + g_i, \quad i = 1, 2, \dots, N$. (7)
The conventional time-stepping method requires the electric field to be solved sequentially in time. In contrast, the MGRIT algorithm achieves time parallelism through an iterative framework that couples the original time-stepping problem with a coarser approximate representation. MGRIT first partitions the temporal grid into coarse and fine levels. The time domain is coarsened by a factor $m > 1$, resulting in a coarse grid with $N_{\Delta} = N/m$ time points and a coarse time step $\Delta T = m\,\delta t$. A schematic of the time grids is shown in Figure 4. Grid points on the coarse and fine levels are referred to as C-points and F-points, respectively. The corresponding coarse-grid problem can be written as
$e_0 = g_0$, (8)
$e_{km} = \Phi^m e_{(k-1)m} + \tilde{g}_{km}, \quad k = 1, 2, \dots, N_{\Delta}$, (9)
$\tilde{g}_{km} = g_{km} + \Phi\, g_{km-1} + \dots + \Phi^{m-1} g_{(k-1)m+1}$, (10)
$\Phi_{\Delta} = \Phi^m$, (11)
where $e_{km}$ denotes the solution of the TEM problem on the coarse time grid and $\Phi^m$ is the time-stepping operator on the coarse grid; for brevity we write $\Phi_{\Delta} = \Phi^m$. With the coarse time step $\Delta T = m\,\delta t$, the residual on the coarse grid is defined as
$r_0^{[c]} = g_0^{[c]} - e_0^{[c]}$, (12)
$r_{km}^{[c]} = g_{km}^{[c]} - e_{km}^{[c]} + \Phi\, e_{km-1}^{[f]}, \quad k = 1, 2, \dots, N_{\Delta}$, (13)
where $r_0^{[c]}$ is the residual of the initial right-hand side and initial solution at the first C-point, and $r_{km}^{[c]}$ is the residual of the right-hand side and current solution on the coarse grid. The superscripts $[c]$ and $[f]$ denote quantities associated with coarse-grid (C-point) and fine-grid (F-point) time points, respectively. To approximate the error, we define a coarse-grid correction and solve the correction problem using $\Phi_{\Delta}$:
$c_0^{[c]} = r_0^{[c]}$, (14)
$c_{km}^{[c]} = \Phi_{\Delta}\, c_{(k-1)m}^{[c]} + r_{km}^{[c]}, \quad k = 1, 2, \dots, N_{\Delta}$, (15)
where $c_{km}^{[c]}$ is the coarse-grid correction at the C-points. Finally, the C-point values of the approximate solution are updated with the coarse-grid correction:
$e_{km}^{[c]} \leftarrow e_{km}^{[c]} + c_{km}^{[c]}, \quad k = 1, 2, \dots, N_{\Delta}$. (16)
We briefly outline the two-grid version of MGRIT. For multilevel implementations, corrections are obtained from progressively coarser grids; however, those details are beyond the scope of this description. We note that error correction and relaxation can be performed in parallel, which is fundamental to the ability of the MGRIT algorithm to accelerate the forward modeling of borehole–surface TEM.
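To make the two-grid cycle concrete, the following self-contained Python sketch implements Equations (6)–(16) for a generic linear propagator. It is a serial reference version: the per-block F-relaxation loops are exactly the work that MGRIT distributes across processors, and the matrix Φ and test problem are illustrative rather than a TEM system.

```python
import numpy as np

def two_grid_mgrit(phi, g, m, n_iter, e0):
    """Two-grid MGRIT for the linear recurrence e_i = phi @ e_{i-1} + g_i.
    phi : (d, d) fine-level time-stepping matrix (one Phi for all steps)
    g   : (N+1, d) right-hand sides; m : coarsening factor (m divides N).
    """
    N = g.shape[0] - 1
    e = np.random.default_rng(1).normal(size=g.shape)  # arbitrary guess
    e[0] = e0                                  # Eq. (6): e_0 = g_0
    phi_c = np.linalg.matrix_power(phi, m)     # Eq. (11): Phi_Delta = Phi^m

    def f_relax():
        # Propagate from each C-point across its F-points (Eq. (7));
        # these blocks are independent and run in parallel in MGRIT.
        for k in range(N // m):
            for i in range(k * m + 1, (k + 1) * m):
                e[i] = phi @ e[i - 1] + g[i]

    for _ in range(n_iter):
        f_relax()
        # Residual at C-points (Eqs. (12)-(13))
        r = np.zeros((N // m + 1, g.shape[1]))
        for k in range(1, N // m + 1):
            r[k] = g[k * m] - e[k * m] + phi @ e[k * m - 1]
        # Coarse correction solved serially with Phi_Delta (Eqs. (14)-(15))
        c = np.zeros_like(r)
        for k in range(1, N // m + 1):
            c[k] = phi_c @ c[k - 1] + r[k]
        # Update the C-points (Eq. (16))
        for k in range(1, N // m + 1):
            e[k * m] += c[k]
    f_relax()  # make F-points consistent with the corrected C-points
    return e

# Scalar decay test: with Phi_Delta = Phi^m the two-grid cycle is exact,
# so one iteration already reproduces sequential time-stepping.
phi = np.array([[0.9]])
g = np.zeros((17, 1)); g[0, 0] = 1.0
e = two_grid_mgrit(phi, g, m=4, n_iter=1, e0=np.array([1.0]))
print(np.allclose(e[:, 0], 0.9 ** np.arange(17)))
```

In practice the exact coarse operator $\Phi^m$ is replaced by a cheaper re-discretization with the larger step $\Delta T$, which is what makes the coarse solve inexpensive and the method iterative rather than exact.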

3.3. Computational Setup

In this study, the MGRIT algorithm was implemented using the pyMGRIT library [24]. The pyMGRIT framework provides a non-intrusive realization of the MGRIT method, which enables time parallelism to be introduced without modifying the underlying time-stepping scheme of the forward solver. In our implementation, the finite-volume-based BSTEM simulation code is coupled with pyMGRIT through its time integration interface, allowing the original sequential time-stepping solver to be wrapped as a black-box time propagator. This non-intrusive coupling preserves the physical and numerical consistency of the original forward modeling code, while enabling multiple time steps to be computed simultaneously across different processors. The time domain is decomposed across MPI ranks, where each rank is assigned a contiguous block of time steps while holding a full spatial octree mesh. Communication occurs only at C-points between neighboring time subdomains. During relaxation, the solution at the last C-point of each time block is sent to the next MPI rank to ensure causality. At the coarsest temporal level, all C-points are gathered on rank 0, which performs a serial time integration and distributes the corrected solution back to all ranks, realizing the multigrid-in-time coarse-grid correction.
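The sketch below illustrates this non-intrusive coupling pattern using pyMGRIT's documented Vector/Application interface [24]. The class skeletons follow the library's published examples, but `fv_solver.advance` is a hypothetical stand-in for the FV code's single-step routine, the commented driver shows only a plausible two-level setup, and the exact abstract-method set should be checked against the pyMGRIT version in use.

```python
import numpy as np
from pymgrit.core.application import Application
from pymgrit.core.vector import Vector
from pymgrit.core.mgrit import Mgrit
from pymgrit.core.simple_setup_problem import simple_setup_problem

class VectorE(Vector):
    """Flattened electric-field DOFs on the octree edges."""
    def __init__(self, size):
        super().__init__()
        self.size = size
        self.values = np.zeros(size)
    def set_values(self, values): self.values = values
    def get_values(self): return self.values
    def clone(self):
        v = VectorE(self.size); v.set_values(self.values.copy()); return v
    def clone_zero(self): return VectorE(self.size)
    def clone_rand(self):
        v = VectorE(self.size); v.set_values(np.random.randn(self.size)); return v
    def __add__(self, other):
        v = self.clone(); v.values = self.values + other.get_values(); return v
    def __sub__(self, other):
        v = self.clone(); v.values = self.values - other.get_values(); return v
    def __mul__(self, other):
        v = self.clone(); v.values = self.values * other; return v
    def norm(self): return np.linalg.norm(self.values)
    def pack(self): return self.values        # buffer exchanged between ranks
    def unpack(self, values): self.values = values

class BstemApp(Application):
    """Wraps the sequential FV solver as a black-box time propagator."""
    def __init__(self, fv_solver, size, *args, **kwargs):
        super().__init__(*args, **kwargs)     # t_start, t_stop, nt
        self.fv_solver = fv_solver            # existing BSTEM forward code
        self.vector_template = VectorE(size)
        self.vector_t_start = VectorE(size)   # zero field before the pulse
    def step(self, u_start, t_start, t_stop):
        # One implicit-Euler solve from t_start to t_stop; `advance` is a
        # hypothetical name for the FV solver's single-step routine.
        e_new = self.fv_solver.advance(u_start.get_values(), t_start, t_stop)
        u = VectorE(u_start.size); u.set_values(e_new); return u

# Hypothetical driver: two temporal levels with coarsening factor 3,
# stopping at the residual tolerance used in the field example.
# bstem = BstemApp(fv_solver, size, t_start=0.0, t_stop=0.1, nt=61)
# mgrit = Mgrit(problem=simple_setup_problem(bstem, level=2, coarsening=3),
#               tol=1e-11)
# info = mgrit.solve()
```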
The numerical experiments are carried out on a Dell high-performance workstation equipped with an Intel® Xeon® Gold 6138 CPU (80 cores at 2.00 GHz) (Intel, Chengdu, China) and 251.5 GiB of RAM. The operating system is Ubuntu 20.04.6 LTS, and all codes are compiled using LLVM 12.0.0. The simulation code is written primarily in Python (version 3.8.8), with the core computational components implemented in compiled C++ for computational efficiency. The linear systems are solved using the SPLU solver from the SciPy library. The algorithmic workflow and the main computational libraries relied upon in this study are presented in Figure A1 and Appendix A, respectively.

4. Results

4.1. Synthetic Examples

To assess the parallel performance of the MGRIT algorithm, we compare its time-to-solution with that of the Sparse LU decomposition (SPLU) direct solver in Figure 5. Given the number of time steps, the coarsening factor of the time grid is set to 2. We perform parallel tests using between 1 and 32 processors for parallelization in time. With a small number of processors, the computational time of the MGRIT algorithm shows no significant advantage over the direct SPLU method, but as the number of processors increases, the results show significant speedups over SPLU. It is worth noting that for processor counts from 1 to 8, the MGRIT solver exhibits near-linear parallel scalability. When the number of processors exceeds 8, the acceleration becomes less pronounced. This behavior is primarily attributed to the additional overhead introduced by multilevel time-grid transfers and inter-processor communication, which gradually offsets the benefit of increased temporal concurrency. As more processors participate in the time-parallel solve, the proportion of time spent on communication, restriction, and interpolation operations grows, eventually dominating the total runtime. Consequently, the achievable speedup no longer scales linearly with the number of processors. This indicates that the linear-speedup limit for the proposed BSTEM problem is reached at approximately 8–16 processors under the current configuration.

4.2. Field Example

We perform parallel tests using between 1 and 48 processors for parallelization in time; note that the number of processors must not exceed the number of time steps in the simulation. To determine an appropriate number of iterations, the residuals corresponding to different iteration counts are analyzed (Figure 6). The stopping criterion of the MGRIT iteration is based on the relative residual of the time-parallel solution. At present, there is no universally established convergence threshold for MGRIT in large-scale transient electromagnetic simulations, so an empirical criterion is adopted in this study. By comparing the MGRIT solution with the reference sequential solution, we found that once the residual drops below 10^{-11}, the accuracy of the time-parallel solution satisfies the required numerical precision. Consequently, this threshold is used as the convergence tolerance for the field example and, based on the residual histories in Figure 6, the default number of MGRIT iterations is set to 5. Table 3 summarizes the convergence time of MGRIT under different coarsening factors; these times are obtained under single-core conditions. Figure 7 compares the runtime of the MGRIT algorithm under different coarsening factors as the number of computational cores increases. The results show that the choice of coarsening factor has a significant influence on parallel efficiency. For smaller coarsening factors, such as 2, the coarse-grid problem remains relatively large, which limits the efficiency of temporal aggregation, increases the cost of coarse-level relaxation, and leads to the highest runtimes, with only moderate reductions as the number of cores increases. As the coarsening factor increases, the coarse-grid problem becomes smaller, allowing error corrections to propagate more efficiently across temporal levels and enhancing the overall acceleration. This trend is clearly observed for coarsening factors between 3 and 5, where the runtime decreases substantially compared to smaller factors. In particular, factors 4 and 5 achieve the best performance, reducing the total runtime to approximately 2000 s and 1800 s, respectively, while remaining relatively stable as the number of processor cores increases. However, excessively large coarsening factors can degrade convergence because the coarse-grid approximation becomes less accurate. There is therefore a trade-off between coarse-grid problem size and convergence stability, and selecting an optimal coarsening factor is crucial for balancing computational efficiency and solution accuracy. In this study, factors 4 and 5 provide a practical balance, offering both significant speedup and stable convergence across the tested processor configurations.
When the coarsening factor becomes too large, the coarsened problem no longer adequately represents the original system: the multigrid structure fails to provide proper error damping, leading to slow convergence or complete divergence of the iterative process. Tests on the Daye hematite mine case indicate that MGRIT does not converge when the coarsening factor exceeds 6. On the other hand, when the processor count exceeds 20, further reductions in runtime are not consistently observed, and slight increases may occur in some cases. This phenomenon is closely related to the internal structure of the MGRIT algorithm: optimal parallel efficiency is achieved when the number of processors is an integer multiple of the number of time-grid levels (or time channels), allowing computational resources to be utilized more effectively. As more processors are used, the computational workload per processor becomes too small to offset the increasing communication and synchronization overhead. In particular, MPI point-to-point and collective communications associated with the restriction and interpolation steps begin to dominate the runtime, causing fluctuations or even increases in total computation time. Figures 8 and 9 compare the conventional time-stepping scheme and the MGRIT algorithm at t = 10 ms and t = 100 ms, respectively. The distributions of the electric field component Ex obtained by the two methods exhibit a high level of consistency in spatial pattern and amplitude, and no significant discrepancies or numerical instabilities are observed in the MGRIT results. At the later stage of the transient response (t = 100 ms), where the diffusive term dominates, the agreement between the conventional time-stepping solution and the MGRIT result remains high, demonstrating that temporal coarsening does not introduce noticeable numerical diffusion or compromise solution stability. In addition to the visual comparison in Figures 8 and 9, a quantitative assessment is performed by computing the mean squared error at each observation point between the MGRIT results and the time-stepping solutions. The analysis shows that the errors are all below 7%, demonstrating good agreement between the two methods. For the BSTEM configuration, the source-to-receiver offsets are relatively small, so slight differences between the time-stepping and MGRIT simulations are expected. Although Figure 9 visually shows minor discrepancies, these differences are normal under the given configuration, and the results remain quantitatively consistent and reliable.
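A minimal sketch of this per-receiver comparison is given below; the array names and the 7% threshold check are illustrative of the assessment described above, not the exact script used in this study.

```python
import numpy as np

def per_receiver_relative_error(e_mgrit, e_seq):
    """Relative L2 error per receiver between the MGRIT and sequential
    time-stepping solutions. Inputs are (n_receivers, n_time_channels)
    arrays; names are illustrative."""
    num = np.linalg.norm(e_mgrit - e_seq, axis=1)
    den = np.linalg.norm(e_seq, axis=1)
    return num / den

# err = per_receiver_relative_error(e_mgrit, e_seq)
# assert err.max() < 0.07  # the field example stays below 7%
```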
These results confirm that the MGRIT algorithm produces solutions consistent with those of the standard time-stepping solver. However, the improvement in computational speed is accompanied by a corresponding increase in memory requirements: for the parameters and hardware configuration used in this study, each additional processor requires approximately 2–3 GB of extra memory, so achieving the level of acceleration reported in this paper requires an additional 30–45 GB of RAM. Furthermore, the relative error remains uniformly small across all receivers and time channels, indicating that no error growth occurs over successive time steps. The proposed MGRIT implementation, which provides an efficient framework for time parallelization, is therefore validated as a reliable approach for transient electromagnetic simulations.

5. Discussion

The numerical results presented in this study demonstrate that the MGRIT-based time-parallel strategy can effectively accelerate BSTEM modeling, especially for problems with many time steps. In the synthetic model, the MGRIT algorithm shows clear speedup compared with the conventional Sparse LU (SPLU) time-stepping method when the number of processors increases. This improvement is mainly due to the fact that MGRIT breaks the sequential dependence of traditional time-stepping and allows multiple time levels to be processed in parallel. As the number of processors increases, more time steps can be computed simultaneously, resulting in significant reductions in runtime.
The parallel scaling results also reveal several important characteristics of the MGRIT algorithm. When only a small number of processors are employed, the available temporal concurrency is limited, and the additional computational overhead associated with relaxation steps, inter-processor communication, and coarse-grid corrections reduces the achievable speedup compared with sequential time-stepping. As the number of processors increases, the degree of temporal parallelism improves, allowing multiple time sub-intervals to be processed concurrently. This leads to a clear reduction in time-to-solution and demonstrates the effectiveness of the MGRIT framework for accelerating large-scale transient electromagnetic simulations. However, when the processor count becomes large relative to the number of time steps, the computational workload per processor decreases substantially. In this regime, communication overhead, synchronization costs between different levels, and increased memory usage required to store fine- and coarse-level solution vectors begin to dominate the overall runtime. Consequently, the parallel efficiency saturates and may even slightly degrade as additional processors are introduced. These results highlight an inherent trade-off in time-parallel algorithms such as MGRIT, where improved concurrency is achieved at the expense of higher communication frequency and memory consumption. An optimal balance between processor count, coarsening strategy, and problem size is therefore essential to maximize parallel efficiency.
In the MGRIT framework, the coarsening factor m controls the ratio between the fine-grid and coarse-grid time steps. Specifically, one coarse time step corresponds to m fine time steps, so that only every m-th fine time point is retained on the coarse time grid. This parameter therefore determines how strongly the time domain is compressed when moving to coarser levels in the multilevel hierarchy. Physically, the coarsening factor governs how much temporal information is preserved when approximating the fine-grid transient electromagnetic response on coarser time levels. Numerically, a larger coarsening factor reduces the size of the coarse problem and lowers computational costs, but it also decreases the accuracy of the coarse-grid approximation, which may slow convergence or even lead to instability. Conversely, a smaller coarsening factor improves the quality of the coarse-grid correction but increases computational overhead. The choice of the temporal coarsening factor thus plays an important role in the overall performance of the MGRIT solver. Small coarsening factors produce coarse-grid problems that are still relatively large, limiting the effect of temporal coarsening and reducing efficiency. In contrast, moderate coarsening factors reduce the fine-grid workload more effectively and allow faster propagation of error corrections across temporal levels, which improves runtime performance, as seen in the field example. However, when the coarsening factor becomes too large, the coarse-grid problem may no longer represent the behavior of the original system well; in this situation, the multilevel process may fail to converge, as observed when the coarsening factor exceeds 6 in the hematite model. The convergence behavior of the MGRIT algorithm in transient electromagnetic simulations depends on the stability of the time-stepping scheme and the accuracy of the coarse-grid approximation. For sufficiently small time steps, the time-propagation operator is contractive, which leads to stable and fast convergence: the coarse-grid correction captures the main low-frequency temporal errors, while relaxation reduces high-frequency errors. In this work, a stepped time-grid partitioning strategy is used to avoid repeated matrix factorizations. For the field computational settings considered here, good convergence is observed when the coarsening factor is no larger than about six; when the coarsening becomes too large compared with the stepped time step size, the coarse-grid problem can no longer approximate the fine-grid problem well, which may lead to slow convergence or divergence. These results show that selecting an appropriate coarsening factor is important for both efficiency and stability. Accuracy comparisons between MGRIT and the conventional time-stepping approach show that the electric field responses produced by the two methods agree closely at different time channels. The MGRIT solutions do not exhibit numerical oscillations or error accumulation, even at late times when diffusive effects dominate, indicating that the algorithm provides stable solutions for large-scale 3D TEM simulations. This stability is supported by the smoothing nature of diffusive electromagnetic fields and the robustness of the implicit time-stepping scheme used in the MGRIT framework.
Although the MGRIT method achieves significant speedup, its time-parallel efficiency is inherently limited by the number of available time steps and the structure of the multilevel time grid. When the number of processors exceeds the number of effective time partitions, additional computational resources cannot be fully utilized, leading to a saturation of speedup, as observed in our experiments. Moreover, the performance of MGRIT depends on parameters such as the coarsening factor and the number of iterations. In this study, these parameters are selected based on the residual convergence behavior and numerical stability of the considered models. While this manual tuning is effective for the investigated cases, automated or adaptive parameter selection strategies would further improve robustness and usability for different geological settings. In addition, communication overhead, coarse-grid operator construction, and multilevel transfer operations introduce extra computational costs, especially at large processor counts. Despite these limitations, the numerical results demonstrate that the proposed approach is effective and practical for realistic field-scale transient electromagnetic problems. Future work will focus on adaptive coarsening strategies and the integration of MGRIT with inversion algorithms to further enhance performance for large-scale TEM applications. In large-scale 3D inversions, each iteration of the optimization algorithm requires many forward and adjoint simulations to evaluate data misfit and sensitivities. By reducing the wall-clock time of each forward solve through time parallelization, the overall inversion cycle can be significantly shortened. This allows more inversion iterations to be performed within a fixed time budget.

6. Conclusions

In this study, a time-parallel forward modeling strategy for three-dimensional BSTEM simulations was developed by coupling the FVM with the MGRIT algorithm. The spatial discretization based on an unstructured octree mesh enables the accurate representation of complex geological structures and large conductivity contrasts, while the incorporation of time parallelism overcomes the inherent sequential limitations of conventional time-stepping schemes. Numerical experiments demonstrate that the proposed method achieves a substantial reduction in computational time compared with the traditional Sparse LU (SPLU)-based time-stepping approach, particularly for large-scale models and long transient responses. Validation against conventional time-stepping results confirms that MGRIT preserves numerical accuracy, producing consistent transient electromagnetic fields across all time steps without introducing numerical oscillations or instabilities.
In a field-scale hematite model from Hubei Province, the MGRIT-based solver reduced the total runtime by more than 40%, illustrating its efficiency and scalability on parallel computing platforms. The numerical experiments further demonstrate that the choice of the coarsening factor has a significant impact on the efficiency and robustness of the MGRIT algorithm. Relatively small coarsening factors provide limited temporal acceleration due to insufficient reduction in the fine time grid, whereas excessively large coarsening factors may adversely affect convergence behavior. Among the tested configurations, moderate coarsening factors offer the most favorable trade-off between computational efficiency and numerical stability.
Overall, the proposed approach provides an efficient and accurate computational framework for large-scale transient electromagnetic modeling. The results demonstrate that the method can significantly improve computational efficiency without compromising numerical accuracy, highlighting its potential for large-scale transient electromagnetic applications.

Author Contributions

S.W. was responsible for the conceptualization and development of the methodology, software implementation, validation, formal analysis, visualization, and writing—original draft. H.C. supervised the research, administered the project, acquired funding, and contributed to the writing—review and editing manuscript. R.M. contributed to data curation, investigation, and assisted in the writing—review and editing manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sichuan Provincial Natural Science Foundation Key Project “Study on Borehole-to-Surface Electromagnetic and Ground Electromagnetic Methods for Deep Oil and Gas Exploration in Northeast Sichuan”, grant number 25ZNSFSC0026; the National Key Science and Technology Major Project “Borehole-to-Surface Electromagnetic Method for Deep Oil and Gas Evaluation Demonstration”, grant number 2024ZD1000206.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

Ruolong Ma was employed by the company Yellow River Engineering Consulting Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MGRIT    Multigrid Reduction-in-Time
BSTEM    borehole-to-surface transient electromagnetic
FEM      finite element method
FDTD     finite-difference time-domain
FVM      finite volume method

Appendix A

Figure A1. Flowchart of the MGRIT algorithm used in this study.
This study primarily relies on the following software libraries:
Table A1. Software libraries used in this study.

Library     Version
Python      3.8.8
mpi4py      3.1.1
pyMGRIT     1.0.6
SciPy       1.6.2

References

1. Dai, Q.; Cai, Y.; Zhang, B. A Novel Finite-Infinite Coupling Approach for 3D Simulation of Borehole-to-Surface Electrical Potential. J. Appl. Geophys. 2022, 206, 104834.
2. Sun, Z.; Zhang, Y.; Zhao, Y.; Liu, Y.; Zhao, H. Present Study Situation Review of Borehole Transient Electromagnetic Method. Coal Geol. Explor. 2022, 50, 85–97.
3. West, R.C.; Ward, S.H. The Borehole Transient Electromagnetic Response of a Three-dimensional Fracture Zone in a Conductive Half-space. Geophysics 1988, 53, 1469–1478.
4. Zhao, R.; Fan, T.; Li, Y.; Wang, J.; Ma, Y.; Wang, B.; Lu, L.; Fang, Z. Application of Borehole Transient Electromagnetic Detection in the Test of Hydraulic Fracturing Effect. Coal Geol. Explor. 2020, 48, 41–45.
5. Dutta, S.M.; Reiderman, A.; Schoonover, L.G.; Rabinovich, M.B. New Borehole Transient Electromagnetic System for Reservoir Monitoring. Petrophysics 2012, 53, 222–232.
6. Xue, G.Q.; Yan, Y.J.; Li, X.; Di, Q.Y. Transient Electromagnetic S-Inversion in Tunnel Prediction. Geophys. Res. Lett. 2007, 34, L18403.
7. Wang, P.; Li, M.; Yao, W.; Su, C.; Wang, Y.; Wang, Q. Detection of Abandoned Water-Filled Mine Tunnels Using the Downhole Transient Electromagnetic Method. Explor. Geophys. 2020, 51, 667–682.
8. Fu, H.; Wang, Y.; Um, E.; Fang, J.; Wei, T.; Huang, X.; Yang, G. A Parallel Finite-Element Time-Domain Method for Transient Electromagnetic Simulation. Geophysics 2015, 80, E213–E224.
9. Liu, C.; Cheng, L.; Abbassi, B. 3D Parallel Surface-Borehole TEM Forward Modeling with Multiple Meshes. J. Appl. Geophys. 2020, 172, 103916.
10. Liu, S.; Chen, C.; Sun, H. Fast 3D Transient Electromagnetic Forward Modeling Using BEDS-FDTD Algorithm and GPU Parallelization. Geophysics 2022, 87, E359–E375.
11. Yang, X.; Wu, X.; Yue, M. A Fast 3-D Finite Element Modeling Algorithm for Land Transient Electromagnetic Method with OneAPI Acceleration. Comput. Geosci. 2022, 166, 105186.
12. Li, J.; Farquharson, C.; Hu, X. 3D Vector Finite-Element Electromagnetic Forward Modeling for Large Loop Sources Using a Total-Field Algorithm and Unstructured Tetrahedral Grids. Geophysics 2017, 82, E1–E16.
13. Li, J.; Lu, X.; Farquharson, C.G.; Hu, X. A Finite-Element Time-Domain Forward Solver for Electromagnetic Methods with Complex-Shaped Loop Sources. Geophysics 2018, 83, E117–E132.
14. Lu, X.; Farquharson, C.G. 3D Finite-Volume Time-Domain Modeling of Geophysical Electromagnetic Data on Unstructured Grids Using Potentials. Geophysics 2020, 85, E221–E240.
15. Jing, X.; Cao, H.; Zhou, J.; Liu, W.; Li, X.; Wen, Y. 3D Finite-Volume Forward Modeling of Transient Electromagnetic Using Octree Meshes. Chin. J. Geophys. 2023, 66, 3524–3539.
16. Heagy, L.J.; Cockett, R.; Kang, S.; Rosenkjaer, G.K.; Oldenburg, D.W. A Framework for Simulation and Inversion in Electromagnetics. Comput. Geosci. 2017, 107, 1–19.
17. Jahandari, H.; Farquharson, C.G. A Finite-Volume Solution to the Geophysical Electromagnetic Forward Problem Using Unstructured Grids. Geophysics 2014, 79, E287–E302.
18. Jahandari, H.; Ansari, S.; Farquharson, C.G. Comparison between Staggered Grid Finite-Volume and Edge-Based Finite-Element Modelling of Geophysical Electromagnetic Data on Unstructured Grids. J. Appl. Geophys. 2017, 138, 185–197.
19. Commer, M.; Newman, G. A Parallel Finite-Difference Approach for 3D Transient Electromagnetic Modeling with Galvanic Sources. Geophysics 2004, 69, 1192–1202.
20. Um, E.S.; Harris, J.M.; Alumbaugh, D.L. 3D Time-Domain Simulation of Electromagnetic Diffusion Phenomena: A Finite-Element Electric-Field Approach. Geophysics 2010, 75, F115–F126.
21. Um, E.S.; Commer, M.; Newman, G.A.; Hoversten, G.M. Finite Element Modelling of Transient Electromagnetic Fields near Steel-Cased Wells. Geophys. J. Int. 2015, 202, 901–913.
22. Cai, H.; Liu, M.; Zhou, J.; Li, J.; Hu, X. Effective 3D-Transient Electromagnetic Inversion Using Finite-Element Method with a Parallel Direct Solver. Geophysics 2022, 87, E377–E392.
23. Wang, X.; Cai, H.; Liu, L.; Revil, A.; Hu, X. Three-Dimensional Inversion of Long-Offset Transient Electromagnetic Method over Topography. Minerals 2023, 13, 908.
24. Hahne, J.; Friedhoff, S.; Bolten, M. Algorithm 1016: PyMGRIT: A Python Package for the Parallel-in-Time Method MGRIT. ACM Trans. Math. Softw. 2021, 47, 19.
25. Falgout, R.D.; Friedhoff, S.; Kolev, T.V.; MacLachlan, S.P.; Schroder, J.B. Parallel Time Integration with Multigrid. SIAM J. Sci. Comput. 2014, 36, C635–C661.
26. Bolten, M.; Friedhoff, S.; Hahne, J.; Schöps, S. Parallel-in-Time Simulation of an Electrical Machine Using MGRIT. Comput. Vis. Sci. 2020, 23, 14.
27. Strake, J.; Döhring, D.; Benigni, A. MGRIT-Based Multi-Level Parallel-in-Time Electromagnetic Transient Simulation. Energies 2022, 15, 7874.
28. Dobrev, V.; Kolev, T.; Petersson, N.A.; Schroder, J.B. Two-Level Convergence Theory for Parallel Time Integration with Multigrid. SIAM J. Sci. Comput. 2016, 39, S501–S527.
Figure 1. Schematic diagram of a single anomalous body model, showing (a) the top view and (b) the front view. The red dot and red line denote the source, and the line in panel (a) represents the receiver electrode location.
Figure 2. BSTEM survey location in Hubei Province with source and receiver positions. The red circles denote the borehole locations, while the yellow dots represent the surface receiver electrodes.
Figure 3. A hematite mine in Hubei discretized with an octree mesh. The position and orientation of the line source are indicated by the pink line.
Figure 4. Schematic diagram of multigrid-in-time.
Figure 5. Comparison of time-to-solution for the synthetic borehole-to-surface TEM forward modeling using the MGRIT algorithm and the SPLU direct solver (runtime in seconds).
Figure 6. Convergence history of MGRIT showing the number of iterations and corresponding L2 residual norms.
Figure 7. Comparison of time-to-solution (s) of the MGRIT algorithm for different coarsening factors.
Figure 8. Comparison of simulation results obtained using the MGRIT method and the conventional time-stepping method at t = 10 ms. The plotted quantity is the electric field magnitude (V/m). (a) Time-stepping simulation result; (b) MGRIT simulation result.
Figure 9. Comparison of simulation results obtained using the MGRIT method and the conventional time-stepping method at t = 100 ms. The plotted quantity is the electric field magnitude (V/m). (a) Time-stepping simulation result; (b) MGRIT simulation result.
Table 1. Numerical and computational parameters for the synthetic BSTEM model.

Parameter                       Value
Minimum cell size               40 m
Coarsening factor               2
Number of MGRIT levels          2
Total number of cells           86,215
Transmitter current strength    1 A
Number of time steps            40
Table 2. Numerical and computational parameters for the field BSTEM model.

Parameter                       Value
Minimum cell size               20 m
Coarsening factor               2, 3, 4, 5
Number of MGRIT levels          2
Total number of cells           232,105
Transmitter current strength    10 A
Number of time steps            60
Table 3. Convergence time (s) comparison of different coarsening factors.

Coarsening Factor    Number of Iterations    Convergence Time (s)
2                    5                       3829
3                    5                       3875
4                    5                       3975
5                    5                       3012
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
