Article

Efficient Sparse Quasi-Newton Algorithm for Multi-Physics Coupled Acid Fracturing Model in Carbonate Reservoirs

1 School of Information and Mathematics, Yangtze University, Jingzhou 434023, China
2 Institute of Machine Learning and Simulation Computing, Yangtze University, Jingzhou 434023, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10436; https://doi.org/10.3390/app151910436
Submission received: 2 September 2025 / Revised: 22 September 2025 / Accepted: 24 September 2025 / Published: 26 September 2025

Abstract

Acid stimulation is a widely used technique for enhancing hydrocarbon recovery from carbonate reservoir formations. In this study, a mathematical model is developed to describe acidizing-induced pressure behavior in carbonate rocks, based on fluid dynamics and acid transport equations. The model is discretized using the finite volume method, resulting in a numerical framework suitable for simulating acidizing processes in carbonate reservoirs. Due to the model’s inherent characteristics—strong nonlinearity and the presence of high-dimensional sparse systems of equations—a sparse quasi-Newton method is proposed to efficiently solve the resulting system. Numerical experiments confirm the practicality and effectiveness of the proposed approach.

1. Introduction

Carbonate rock reservoirs are influenced by a combination of original rock properties, tectonics, and karstification, resulting in complex spatial pore structures and strong heterogeneity [1]. High-resolution grids (millions to billions of cells) are required to accurately capture their complex structures, which traditional models struggle to represent properly. The complex flow physics in carbonate rocks (multiphase flow, compositional changes, phase transitions, and complex relative permeability and capillary pressure curves) lead to discrete equations with strong nonlinearity and tight coupling. Acidizing is a primary method for geological modification of fracture-cavity carbonate reservoirs [2]. Research on numerical modeling methods for acidizing pressure in carbonate reservoirs can reveal how fractures and cavities influence the acidizing process, thereby providing theoretical guidance for optimizing acidizing treatments and improving stimulation effectiveness [3].
The carbonate rock acidizing pressure model incorporates multiple physical processes such as fracture flow, acid transport and reaction kinetics, and rock matrix deformation. Acidizing pressure numerical simulation is an effective method for reconstructing the multi-physics processes during acidizing stimulation, including formation fracturing induced by fluid pressure, acid mass transfer and chemical reactions, and the evolution of acid-etched fracture conductivity under confining pressure conditions [4]. Current carbonate rock acidizing pressure numerical models primarily focus on multi-scale fracture characterization, coupled multi-physics field solutions, and the integration of high-performance computational methods.
Compared with the finite difference method, the finite volume method combines the geometric flexibility of finite elements with rigorous conservation properties at both local and global scales. Forsyth applied the finite volume method for numerical simulation of multiphase flow in porous media [5]. After discretization using the finite volume method, the carbonate rock acidizing pressure model generates a strongly coupled, nonlinear, high-dimensional, sparse system of equations.
A well-known sparse quasi-Newton update was presented independently by Schubert [6], Toint [7,8,9], and Broyden [10] and analyzed by Marwil [11]. Unlike Broyden's and Schubert's methods, the quasi-Newton update proposed by Bogle [12] is based on the least relative change in the updated matrix. The works in [10,11,12] focus on Broyden's rank-one update. These methods have proved quite successful for nonlinear equations, but for optimization problems they have the drawback that the updated matrix $B^*$ is nonsymmetric.
This paper develops a sparse quasi-Newton algorithm for solving multi-physics coupled models of carbonate rock acidizing pressure. By exploiting the inherent sparsity of the model, only the non-zero elements in the quasi-Newton matrix are updated. This approach overcomes computational challenges in large-scale high-resolution simulations, enhances computational efficiency and feasibility, improves the capability to solve complex nonlinear systems, and reduces computational costs. Numerical experiments demonstrate that the method provides superior convergence characteristics, proving its significance for the efficient development of carbonate reservoirs with complex heterogeneity.
The paper is organized as follows: Section 2 presents the mathematical model of carbonate rock acidizing pressure and the nonlinear system of equations obtained after the finite volume method discretization. Section 3 introduces the proposed sparse quasi-Newton method. Section 4 conducts numerical experiments to validate the method’s efficiency. The final section provides concluding remarks.

2. Mathematical Model and Discretization

2.1. Hydrodynamic Model of Carbonate Acid Fracturing

The Navier–Stokes–Darcy equations are used to simulate the fluid flow behavior in carbonate rocks, and a mathematical model describing the acid dissolution process in carbonate rock reservoirs is established [13].

2.1.1. Mass Conservation Equation

The mass conservation principle for a compressible fluid is expressed by the continuity equation:
$$\frac{\partial (\rho_f \phi_f)}{\partial t} + \nabla \cdot (\rho_f \phi_f \mathbf{U}_f) = 0, \qquad (1)$$
where $\rho_f$ is the acid fluid density, which varies with concentration as $\rho_f = \rho_0 (1 + \gamma c)$; $\phi_f$ is the dynamic porosity, updated by the chemical reaction as $\partial \phi_f / \partial t = k_r c (1 - \phi_f)$; and $\mathbf{U}_f$ is the velocity vector.

2.1.2. Momentum Conservation Equation

During acid fracturing, fluid flows through the pore–fracture system. The modified Navier–Stokes equations account for Darcy drag, Forchheimer inertial effects at high flow rates, and time-dependent porosity and permeability changes due to chemical reactions. The momentum conservation equation takes the following form:
$$\frac{\partial (\rho_f \phi_f \mathbf{U}_f)}{\partial t} + \nabla \cdot (\rho_f \phi_f \mathbf{U}_f \mathbf{U}_f) = -\phi_f \nabla p + \nabla \cdot (\phi_f \mu \boldsymbol{\tau}_f) + \phi_f \rho_f \mathbf{g} - \left( \frac{\mu \phi_f}{K} + \beta \rho_f \phi_f |\mathbf{U}_f| \right) \mathbf{U}_f, \qquad (2)$$
where $\boldsymbol{\tau}_f = \nabla \mathbf{U}_f + (\nabla \mathbf{U}_f)^T - \frac{2}{3} (\nabla \cdot \mathbf{U}_f) \mathbf{I}$ is the viscous stress tensor for Newtonian fluids, $\mu$ is the dynamic viscosity of the fluid, which characterizes the internal friction that resists shear deformation, $K = K_0 \phi_f^3 / (1 - \phi_f)^2$ is the porosity-dependent permeability (Kozeny–Carman relationship), and $\beta$ is the Forchheimer coefficient.

2.1.3. Acid Transport Equation

The transport of acid species in the porous medium is governed by the following advection–dispersion–reaction equation:
$$\frac{\partial (\phi_f \rho_f C_f)}{\partial t} + \nabla \cdot (\rho_f \mathbf{U}_f C_f) - \nabla \cdot (\phi_f \rho_f D_{\mathrm{eff}} \nabla C_f) = -\alpha_s R(C_s), \qquad (3)$$
where $C_f$ is the average acid concentration in the fluid phase (%), and $R(C_s)$ is the rate of acid consumption per unit volume and time due to the chemical reaction with minerals, which depends on the surface acid concentration $C_s$. $D_{\mathrm{eff}}$ is the effective diffusion coefficient, a macroscopic parameter describing the combined effects of molecular diffusion and pore structure on acid transport in porous media. It depends on the acid's molecular diffusivity ($D_m$), the medium's porosity ($\phi$) and tortuosity ($\tau$), and the hydrodynamic conditions. A common relation is $D_{\mathrm{eff}} = \frac{\phi}{\tau} D_m$, showing proportionality to porosity and inverse proportionality to tortuosity. $\alpha_s$ represents the specific reactive surface area, i.e., the effective mineral surface area per unit volume of porous medium that participates in the acid–rock chemical reactions.
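To make the constitutive relations above concrete, the following minimal Python sketch updates the porosity explicitly from the reaction law of Section 2.1.1 and then evaluates the Kozeny–Carman permeability and the effective diffusivity; the parameter values are illustrative placeholders, not data from this paper.

```python
import numpy as np

def update_properties(phi, c, dt, k_r=1.0, K0=1e-12, D_m=1e-9, tau=2.0):
    """Explicit update of the constitutive relations of Section 2.1.

    phi : porosity field, c : local acid concentration, dt : time step.
    k_r, K0, D_m, tau are illustrative values, not the paper's data.
    """
    # Porosity evolution: d(phi)/dt = k_r * c * (1 - phi)
    phi_new = phi + dt * k_r * c * (1.0 - phi)
    # Kozeny-Carman permeability: K = K0 * phi^3 / (1 - phi)^2
    K = K0 * phi_new**3 / (1.0 - phi_new) ** 2
    # Effective diffusivity: D_eff = (phi / tau) * D_m
    D_eff = phi_new / tau * D_m
    return phi_new, K, D_eff

phi, K, D_eff = update_properties(np.full(5, 0.15), np.full(5, 0.10), dt=1e-2)
```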

2.2. Finite Volume Method Discretization

2.2.1. Discretization of the Mass Conservation Equation

Integrating the mass conservation Equation (1) over the control volume P and applying Gauss’s divergence theorem, we derive the following:
$$\int_{V_P} \frac{\partial (\rho_f \phi_f)}{\partial t} \, dV + \int_{V_P} \nabla \cdot (\rho_f \phi_f \mathbf{U}_f) \, dV = 0. \qquad (4)$$
After discretization, we obtain the following:
$$V_P \frac{(\rho_f \phi_f)_P^{n+1} - (\rho_f \phi_f)_P^{n}}{\Delta t} + \sum_f (\rho_f \phi_f)_f^{n+1} (\mathbf{U}_f)_f^{n+1} \cdot \mathbf{A}_f = 0, \qquad (5)$$
where $f$ represents the faces of the control volume $P$, and $\mathbf{A}_f$ is the area vector of face $f$ (outward normal).

2.2.2. Discretization of the Momentum Conservation Equation

Integrating the momentum conservation Equation (2) over the control volume $P$, we obtain the following:
$$\int_{V_P} \frac{\partial (\rho_f \phi_f \mathbf{U}_f)}{\partial t} \, dV + \int_{V_P} \nabla \cdot (\rho_f \phi_f \mathbf{U}_f \mathbf{U}_f) \, dV = -\int_{V_P} \phi_f \nabla p \, dV + \int_{V_P} \nabla \cdot (\phi_f \mu \boldsymbol{\tau}_f) \, dV + \int_{V_P} \phi_f \rho_f \mathbf{g} \, dV - \int_{V_P} \left( \frac{\mu \phi_f}{K} + \beta \rho_f \phi_f |\mathbf{U}_f| \right) \mathbf{U}_f \, dV. \qquad (6)$$
Discretizing each term yields the following nonlinear equations:
$$V_P \frac{(\rho_f \phi_f \mathbf{U}_f)_P^{n+1} - (\rho_f \phi_f \mathbf{U}_f)_P^{n}}{\Delta t} + \sum_f (\rho_f \phi_f)_f^{n+1} (\mathbf{U}_f)_f^{n+1} \left( (\mathbf{U}_f)_f^{n+1} \cdot \mathbf{A}_f \right) = -(\phi_f)_P^{n+1} \sum_f p_f^{n+1} \mathbf{A}_f + \sum_f (\phi_f \mu)_f^{n+1} (\boldsymbol{\tau}_f)_f^{n+1} \cdot \mathbf{A}_f + (\phi_f)_P^{n+1} (\rho_f)_P^{n+1} \mathbf{g} \, V_P - V_P \left( \left( \frac{\mu \phi_f}{K} \right)_P^{n+1} + \beta (\rho_f \phi_f)_P^{n+1} \left| (\mathbf{U}_f)_P^{n+1} \right| \right) (\mathbf{U}_f)_P^{n+1}. \qquad (7)$$
Similarly, the acid transport Equation (3) can be discretized using the finite volume method. The nonlinear equation system obtained from the discretization has a five-diagonal or block five-diagonal structure for two-dimensional cases and a seven-diagonal or block seven-diagonal structure for three-dimensional cases.
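As an illustration of the resulting sparsity, the sketch below (an illustrative toy, not the paper's code) assembles the five-diagonal coupling pattern produced by a two-dimensional structured finite-volume grid; in three dimensions the analogous pattern has seven diagonals.

```python
import numpy as np
import scipy.sparse as sp

def pentadiagonal_pattern(nx, ny):
    """Sparsity pattern of a 2D structured finite-volume stencil.

    Each cell couples to itself and its four face neighbours, which gives
    the five-diagonal structure mentioned in Section 2.2.2.
    """
    n = nx * ny
    main = np.ones(n)
    east = np.ones(n - 1)
    east[np.arange(1, n) % nx == 0] = 0.0      # no coupling across row ends
    north = np.ones(n - nx)
    A = sp.diags([north, east, main, east, north],
                 offsets=[-nx, -1, 0, 1, nx], format="csr")
    A.eliminate_zeros()
    return A

A = pentadiagonal_pattern(4, 3)
print(A.nnz, "structural nonzeros out of", A.shape[0] ** 2, "entries")
```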

3. Sparse Quasi-Newton Methods

After spatial discretization of the acid fracturing model in carbonate reservoirs, the result is a large and sparse nonlinear system, which can be written as follows:
$$F(x) = 0, \quad x \in \mathbb{R}^n, \qquad (8)$$
where $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a continuously differentiable function. We denote by $J(x)$ the Jacobian matrix of $F(x)$ at $x$. Specifically, we have the following:
$$F(x) = (F_1(x), F_2(x), \ldots, F_n(x)), \qquad (9)$$
and
$$J(x) = (\nabla F_1(x), \nabla F_2(x), \ldots, \nabla F_n(x))^T. \qquad (10)$$
By linearizing the nonlinear Equation (8) at an iterate $x_k$, we obtain the Newton–Raphson method:
$$x_{k+1} = x_k - J(x_k)^{-1} F(x_k). \qquad (11)$$
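For reference, a bare-bones Newton–Raphson loop with a sparse direct solve might look as follows (a generic sketch, not the solver used in this paper; J is assumed to return a SciPy sparse matrix).

```python
import numpy as np
import scipy.sparse.linalg as spla

def newton_raphson(F, J, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration (11): x_{k+1} = x_k - J(x_k)^{-1} F(x_k).

    Requires the exact Jacobian at every iteration, which is exactly the
    cost that the quasi-Newton methods below are designed to avoid.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - spla.spsolve(J(x), r)   # solve J(x) d = F(x), then step x - d
    return x
```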
Newton’s method needs to compute the exact Jacobian matrix. To avoid computing the Jacobian matrix, quasi-Newton methods have been proposed, where J ( x k ) is approximated by a quasi-Newton matrix B k R n × n .
The quasi-Newton methods generate an iteration as follows:
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (12)$$
where the step length $\alpha_k > 0$ is determined by a line search strategy, and $d_k$ is the quasi-Newton direction obtained by solving the subproblem
$$F(x_k) + B_k d_k = 0. \qquad (13)$$
Usually, the quasi-Newton matrix $B_{k+1}$ is required to satisfy the secant condition $B_{k+1} s_k = y_k$, where
$$s_k = x_{k+1} - x_k = \alpha_k d_k, \qquad (14)$$
$$y_k = F(x_{k+1}) - F(x_k). \qquad (15)$$
The quasi-Newton matrix B k can be updated by various quasi-Newton update formulae, such as the Broyden family of update formulae.
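For instance, Broyden's rank-one ("good") update, the dense prototype that Section 3.1 modifies for sparsity, can be sketched as follows; it satisfies the secant condition (14)–(15) by construction.

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden's rank-one update: B+ = B + (y - B s) s^T / (s^T s).

    The updated matrix satisfies the secant condition B+ s = y exactly,
    but it does not preserve any sparsity pattern of B.
    """
    r = y - B @ s
    return B + np.outer(r, s) / (s @ s)
```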
Theorem 1
[14]. Suppose $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ satisfies the following:
(1) $F$ is continuously differentiable in an open convex set $D$;
(2) there exists an $x^* \in D$ such that $F(x^*) = 0$ and $J(x^*)$ is nonsingular;
(3) there exists a constant $L > 0$ such that
$$\| J(x) - J(y) \|_F \le L \| x - y \|_2 \quad \text{for all } x, y \in D. \qquad (16)$$
Then, there exist $\varepsilon, \delta > 0$ such that, if $x_0 \in D$ satisfies $\| x_0 - x^* \|_2 < \varepsilon$ and $\| B_0 - J(x^*) \|_F < \delta$ for nonsingular $B_0$:
1. Broyden's method generates $(B_k)$ with each $B_k$ nonsingular,
2. $(x_k)$ converges to $x^*$,
3. the convergence is superlinear:
$$\lim_{k \to \infty} \frac{\| x_{k+1} - x^* \|_2}{\| x_k - x^* \|_2} = 0. \qquad (17)$$

3.1. Sparse Quasi-Newton Update Methods

The difficulty with large problems is that the quasi-Newton update formulae require n 2 / 2 memory locations, which often becomes impractical as n increases. If the Jacobian matrix of F ( x ) has a sparsity pattern, and an approximation to the Jacobian matrix is stored with only the non-zero elements approximated, the problem may become computationally tractable.
We know that even if $B^k$ has the sparsity pattern, the matrix $B^{k+1}$ defined by the Broyden update formulae will not. In order to modify $B^{k+1}$ so that it has the desired sparsity, we present, similarly to reference [15], a sparse quasi-Newton update method. We first denote by $K$ the set of ordered pairs of integers $(i, j)$ such that
$$(J(x))_{ij} = 0, \quad (i, j) \in K, \qquad (18)$$
where $K \subset \{1, \ldots, n\} \times \{1, \ldots, n\}$. Then, it is natural to require that, for all $k$, $(B^k)_{ij} = 0$ for $(i, j) \in K$. We consider the system of linear equations
$$Q^k \lambda = y^k - B^k s^k, \qquad (19)$$
where
$$s_{(i)j}^k = \begin{cases} s_j^k, & (i, j) \notin K, \\ 0, & (i, j) \in K, \end{cases} \qquad Q_{ij}^k = s_{(i)j}^k s_{(j)i}^k + \delta_{ij} \| s_{(i)}^k \|^2, \qquad (20)$$
and where $s_{(i)}^k$ is the vector whose components are $\{ s_{(i)j}^k;\ j = 1, 2, \ldots, n \}$.
The sparse quasi-Newton update formula is as follows:
$$B_{ij}^{k+1} = B_{ij}^k + s_{(i)j}^k \lambda_i + \lambda_j s_{(j)i}^k. \qquad (21)$$
Clearly, $B^{k+1}$ is symmetric and possesses the desired sparseness.
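A dense reference implementation of formulas (19)–(21), following the reconstruction above, is sketched below; it is written for readability, whereas a practical code would store and update only the entries allowed by the sparsity pattern.

```python
import numpy as np

def sparse_symmetric_update(B, s, y, pattern):
    """Reference implementation of the sparse update (19)-(21).

    pattern is a symmetric boolean matrix with True where the Jacobian may
    be non-zero (the complement of the index set K); Q is assumed nonsingular.
    """
    S = np.where(pattern, s[np.newaxis, :], 0.0)        # row i of S is s_(i), Eq. (20)
    Q = S * S.T + np.diag(np.sum(S**2, axis=1))         # Q_ij = s_(i)j s_(j)i + delta_ij ||s_(i)||^2
    lam = np.linalg.solve(Q, y - B @ s)                 # Eq. (19)
    return B + S * lam[:, np.newaxis] + S.T * lam[np.newaxis, :]   # Eq. (21)
```

For the tridiagonal test problems of Section 4, pattern is True only on the main diagonal and the two adjacent diagonals, so each row of S has at most three non-zero entries.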

3.2. Computation of the Step Length

We define the merit function:
We define the merit function
$$f(x) = \frac{1}{2} F(x)^T F(x). \qquad (22)$$
The search step length $\alpha_k$ is required to satisfy the Wolfe conditions:
$$f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k g_k^T d_k, \qquad (23)$$
$$\nabla f(x_k + \alpha_k d_k)^T d_k \ge c_2 g_k^T d_k. \qquad (24)$$
The parameters $c_1$ and $c_2$ are the constants of the Wolfe conditions used in the line search procedure. They are chosen such that $0 < c_1 < c_2 < 1$, where $c_1$ controls the sufficient decrease condition and $c_2$ ensures a reasonable step size; here $g_k = B_k^T F(x_k)$ approximates the gradient of the merit function. In practical computation, the directional derivative in (24) is approximated as follows:
$$\nabla f(x_k + \alpha_k d_k)^T d_k \approx \frac{f(x_k + (\alpha_k + h) d_k) - f(x_k + (\alpha_k - h) d_k)}{2h}. \qquad (25)$$
The parameter $h$ in Equation (25) is chosen to balance the truncation error $O(h^2)$ and the round-off error $O(\varepsilon / h)$, where $\varepsilon$ is the machine epsilon. The theoretically optimal value is of the order $\varepsilon^{1/3}$, which corresponds to $h \approx 10^{-5}$ to $10^{-6}$ in double-precision arithmetic. This choice ensures that the approximation maintains the global convergence properties while having minimal impact on the local convergence rate in practical applications.
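A minimal sketch of the approximation (25), with h defaulting to roughly the machine-epsilon cube root discussed above, is as follows.

```python
import numpy as np

def directional_derivative(f, x, d, alpha, h=None):
    """Central-difference approximation (25) of grad f(x + alpha d)^T d."""
    if h is None:
        h = np.finfo(float).eps ** (1.0 / 3.0)   # about 6e-6: balances truncation and round-off
    return (f(x + (alpha + h) * d) - f(x + (alpha - h) * d)) / (2.0 * h)
```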

3.3. Sparse Quasi-Newton Algorithm

The following is the sparse quasi-Newton algorithm for solving nonlinear equation systems; a compact code sketch of these steps is given after the list.
  • Step 1: Given an initial point $x_0$, a symmetric positive definite initial matrix $B_0$, line search parameters $0 < c_1 < c_2 < 1$, and a convergence tolerance $\epsilon > 0$, set the iteration counter $k = 0$.
  • Step 2: Compute $g_k = B_k^T F(x_k)$. If $\| g_k \| < \epsilon$, stop; otherwise, go to Step 3.
  • Step 3: Compute the search direction $d_k$ via (13).
  • Step 4: Compute the step size $\alpha_k$ via (23)–(25).
  • Step 5: Update $x_{k+1}$ using (12).
  • Step 6: Update $B_{k+1}$ via (21), increment $k \leftarrow k + 1$, and return to Step 2.
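The steps above can be assembled into a compact driver. The sketch below is only an illustration under simplifying assumptions: a plain Armijo backtracking search stands in for the full Wolfe search of Section 3.2, and the quasi-Newton update is passed in as a function (for example, the sparse_symmetric_update sketch from Section 3.1 with the sparsity pattern bound to it).

```python
import numpy as np

def sparse_quasi_newton(F, B0, x0, update_fn, tol=1e-6, max_iter=200):
    """Illustrative driver for Steps 1-6 of Section 3.3.

    update_fn(B, s, y) returns the updated quasi-Newton matrix, e.g. a
    sparsity-preserving update; the line search here is a simplified
    Armijo backtracking rule rather than a full Wolfe search.
    """
    x, B = np.asarray(x0, dtype=float), np.array(B0, dtype=float)
    merit = lambda z: 0.5 * F(z) @ F(z)                   # Eq. (22)
    for _ in range(max_iter):
        Fx = F(x)
        g = B.T @ Fx                                      # Step 2
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B, -Fx)                       # Step 3, Eq. (13)
        alpha, m0 = 1.0, merit(x)
        while merit(x + alpha * d) > m0 + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5                                  # Step 4 (simplified line search)
        x_new = x + alpha * d                             # Step 5, Eq. (12)
        B = update_fn(B, x_new - x, F(x_new) - Fx)        # Step 6, Eq. (21)
        x = x_new
    return x
```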

3.4. Parallel Computation of Quasi-Newton Matrix and GPU Acceleration Techniques

We note that the BFGS update (Broyden [16], Fletcher [17], Goldfarb [18], and Shanno [19]) can be decomposed into the sum of its rows. Dropping the subscript k , and replacing the subscript k + 1 with the superscript *, the BFGS update is defined by the following:
$$B^* = B - \frac{B s s^T B}{s^T B s} + \frac{y y^T}{y^T s}; \qquad (26)$$
then, the BFGS update can be written as follows:
$$B^* = B - \sum_{i=1}^n \frac{e_i e_i^T B s s^T B}{s^T B s} + \sum_{i=1}^n \frac{e_i e_i^T y y^T}{y^T s} = B - \sum_{i=1}^n \left( \frac{e_i e_i^T B s s^T B}{s^T B s} - \frac{e_i e_i^T y y^T}{y^T s} \right). \qquad (27)$$
The efficient parallel computation of (27) can be achieved using multiple processors. GPU acceleration [20,21,22,23,24] is implemented as follows: during initialization, GPU parallel scanning computes the initial Jacobian and establishes the index set K ; each component of the nonlinear system is mapped to GPU threads; B k is stored in row-major format; f ( x k + α k d k ) is evaluated in parallel for multiple candidate step lengths.
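The row decomposition (27) is what makes the update easy to distribute: each row of B* can be formed independently. The sketch below shows the per-row form in plain NumPy; mapping the loop to multiple processors or GPU thread blocks is left implicit, so this is an illustration rather than the GPU code referred to above.

```python
import numpy as np

def bfgs_update_by_rows(B, s, y):
    """Row-wise form (27) of the BFGS update (26) for a symmetric B.

    Every row i of the result depends only on row-independent quantities
    (Bs, s^T B s, y^T s), so the loop body can run in parallel per row.
    """
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    B_new = B.copy()
    for i in range(B.shape[0]):                    # independent per-row work
        B_new[i, :] += -Bs[i] * Bs / sBs + y[i] * y / ys
    return B_new
```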

4. Numerical Experiments

4.1. Acid Fracturing Coupling Model

To evaluate the performance of sparse quasi-Newton algorithms in solving carbonate rock acid pressure multi-field coupling models, this section designs a comparative experiment between the sparse Broyden–Schubert method and the BFGS method. The testing environment is a Windows 10 operating system equipped with 16.00 GB of memory.
The test function selected for the experiment is a simplified carbonate rock acid pressure multi-field coupling model, whose Jacobian matrix has a tridiagonal structure, consistent with the high-dimensional sparse characteristics of the actual acid pressure numerical model after discretization.
$$\begin{aligned} F_1 &= 2 x_1 + \exp(x_1) + x_2 - 5 = 0, \\ F_i &= x_{i-1} + 3 x_i + \exp(x_{i+1}) + x_i + 2 \sin(x_{i+1}) - 4 = 0, \quad i = 2, \ldots, n-1, \\ F_n &= x_{n-1} + 2 x_n + \exp(x_n) \cos(x_n) - 3 = 0, \end{aligned} \qquad (28)$$
with the initial point $x_0 = (1, 1, \ldots, 1)^T$.
The algorithm parameters are set as follows: the convergence tolerance is $\varepsilon = 10^{-6}$, i.e., the algorithm is considered to have converged when the current gradient norm $\| g_k \|$ is less than $\varepsilon$; a standard Wolfe line search is used, with line search parameters set to $c_1 = 10^{-4}$ and $c_2 = 0.9$; the initial matrix is set to $B_0 = J(x_0)$, where $J(x_0)$ is the value of the Jacobian matrix of the test function at the initial point $x_0$. This strategy provides a high-quality initial guess that captures the problem's inherent structure, significantly accelerating early convergence. To ensure robustness, we apply minimal regularization: $B_0 = J(x_0) + \varepsilon I$, with regularization parameter $\varepsilon = 10^{-6}$. Although this requires one Jacobian evaluation, it reduces total iterations and overall computation time substantially.
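The regularized initial matrix can be formed once by finite differences. The sketch below is a generic illustration (plain column-by-column differencing); for the tridiagonal problems considered here, a graph-colouring scheme would need only three extra residual evaluations instead of n.

```python
import numpy as np

def initial_matrix(F, x0, eps_reg=1e-6, h=1e-7):
    """Form B_0 = J(x_0) + eps*I by one-sided finite differences."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    F0 = F(x0)
    J = np.empty((n, n))
    for j in range(n):                 # perturb one coordinate at a time
        xp = x0.copy()
        xp[j] += h
        J[:, j] = (F(xp) - F0) / h
    return J + eps_reg * np.eye(n)
```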
To evaluate the performance of the algorithm, multiple quantitative metrics are used. The dimension of the problem is represented by Dim. Iteration represents the total number of iterations performed to satisfy the convergence condition. $R$ represents the average convergence rate, which is used to quantify the average convergence speed of an algorithm throughout the iteration process. Its calculation formula is as follows:
$$R = \frac{1}{m} \ln \frac{N_1}{N_m}, \qquad (29)$$
where $N_1 = \| F(x_0) \|$ is the Euclidean norm of the initial residual, $N_m = \| F(x_k) \|$ is the residual norm at the final iteration point, and $m = N_{\mathrm{fun}} / n$ is the total number of function component evaluations normalized by the problem dimension $n$, with $N_{\mathrm{fun}}$ being the total number of function component evaluations. This metric quantifies the computational efficiency in terms of function evaluations. A higher $R$ value indicates better convergence efficiency per unit of function evaluation cost.
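As a worked check of Equation (29), the n = 50 rows of Table 2 can be reproduced directly from the tabulated N1, Nm, and m values.

```python
import math

# Worked check of Equation (29) with the n = 50 rows of Table 2.
N1, Nm, m = 21.3780, 2.0820e-7, 27            # BFGS row
print(math.log(N1 / Nm) / m)                   # ~0.683, as reported for R

N1, Nm, m = 21.3780, 4.1610e-8, 27             # sparse Broyden-Schubert row
print(math.log(N1 / Nm) / m)                   # ~0.743
```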
Tcpu denotes the total computation time consumed by the algorithm, measured in seconds. The comparison results between the sparse Broyden–Schubert method and the BFGS method for different problem dimensions are shown in Table 1 and Table 2.
From the experimental results, it can be seen that under all problem dimensions, both the number of iterations and the number of equivalent iterations ($m$) of the sparse Broyden–Schubert method are fewer than those of the BFGS method, indicating higher convergence efficiency and lower total cost in function evaluations. The average convergence rate ($R$) of the sparse Broyden–Schubert method is consistently higher than that of the BFGS method. A higher $R$ value indicates faster residual decay per unit computational cost, suggesting that this method achieves faster convergence by exploiting the tridiagonal sparsity pattern of the Jacobian matrix.
As the number of problem dimensions increases, the time advantage of the sparse Broyden–Schubert method becomes increasingly evident. At low dimensions ( n   =   100 ), the time difference is within 5%; however, at higher dimensions ( n   =   4000 ), the sparse Broyden–Schubert method saves approximately 31% in computation time compared to the BFGS method. This characteristic makes it more advantageous for handling large-scale numerical computation problems, particularly suitable for solving high-dimensional sparse equation systems such as carbonate rock acid pressure models.
To clearly compare the efficiency of the BFGS and sparse Broyden–Schubert methods in solving this problem, we selected a problem dimension of $n = 3000$ and plotted the objective function value $\frac{1}{2} \| F(x) \|^2$ against the number of iterations and the computation time.
The sparse Broyden–Schubert method demonstrates significant computational efficiency advantages when solving large-scale nonlinear systems of equations. As shown in Figure 1, subfigure (a) (the relationship between $\log \frac{1}{2} \| F(x) \|^2$ and the number of iterations) demonstrates that the sparse Broyden–Schubert method exhibits excellent convergence from the early stages of iteration, with the objective function value decreasing by several orders of magnitude after only four iterations and rapidly approaching the neighborhood of the optimal solution; subfigure (b) (the relationship with computation time) further validates its time efficiency: when the objective function reaches a practical accuracy of $10^{-6}$, the sparse Broyden–Schubert method requires fewer iterations and reduces the computation time by 25–30% compared to the BFGS method.
The core of this advantage lies in a balance between update frequency and computational cost achieved by the sparse Broyden–Schubert method. Additionally, the sparsity-preserving technique adopted by the sparse Broyden–Schubert method results in lower memory usage, and this memory efficiency is highly valuable in practical engineering applications for large-scale problems.
Analysis of the time complexity in Figure 2a,b shows that the sparse Broyden–Schubert method demonstrates a significant scalability advantage when handling problems of varying sizes. Specifically, in subplot (a), which uses a double-logarithmic coordinate system, the time complexity curves of both the BFGS and the sparse Broyden–Schubert methods exhibit a stable, approximately linear upward trend. This linear trend indicates that both methods have polynomial time complexity of the same order. However, the relative positions of the curves show that the sparse Broyden–Schubert method's curve consistently remains below the BFGS method's curve. This consistent offset indicates that, even at the same order of complexity, the actual computational cost of the sparse Broyden–Schubert method is significantly lower than that of the BFGS method, confirming its superior computational efficiency.
Subplot (b), which uses a linear coordinate system, more clearly illustrates the scaling behavior of the two methods. When the problem dimension n is less than 2000, the distance between the two curves is small, indicating that the computational time of both methods is similar and the advantage of the sparse Broyden–Schubert method is not yet pronounced. However, when the problem size exceeds the critical value of n = 2000 , the performance gap between the methods widens sharply: the BFGS method exhibits a sharp increase in computational time, and its curve’s slope increases significantly.
In contrast, the sparse Broyden–Schubert method’s time consumption curve maintains a relatively gentle growth slope, and even as the problem size continues to increase, its computational time increases at a much slower rate. This clear divergence demonstrates that the sparse Broyden–Schubert method exhibits superior scalability in large-scale problem-solving scenarios and can better adapt to increasing problem sizes.
Overall, the sparse Broyden–Schubert method demonstrates significant advantages in three key aspects: computational efficiency (Tcpu), computational cost ( m ), and convergence performance ( R ). These advantages become more pronounced as the problem dimension increases ( n 1000 ), validating its practicality and efficiency in solving carbonate rock acid pressure multi-field coupling models.

4.2. Nonlinear Heat Conduction Multi-Physics Coupling Model

To validate the efficacy of the sparse quasi-Newton algorithm across a broader spectrum of application problems, this section introduces a new test model: a simplified nonlinear multi-field coupled heat transfer model. This model describes a steady-state temperature field problem incorporating radiation and convective heat dissipation. Its governing equations, after discretization via the finite difference method, yield a high-dimensional nonlinear system characterized by a tridiagonal Jacobian matrix structure. Both in physical background and mathematical formulation, this model differs significantly from the acid fracturing model presented in Section 4.1 (28), thereby serving to evaluate the algorithm’s universal performance across diverse application scenarios.
The discrete form of the simplified heat conduction model is as follows:
$$\begin{aligned} F_1 &= k (T_2 - 2 T_1 + T_{\mathrm{left}}) - \sigma (T_1^4 - T_\theta^4) - h (T_1 - T_\theta) = 0, \\ F_i &= k (T_{i+1} - 2 T_i + T_{i-1}) - \sigma (T_i^4 - T_\theta^4) - h (T_i - T_\theta) = 0, \quad i = 2, \ldots, n-1, \\ F_n &= k (T_{\mathrm{right}} - 2 T_n + T_{n-1}) - \sigma (T_n^4 - T_\theta^4) - h (T_n - T_\theta) = 0, \end{aligned} \qquad (30)$$
with the initial guess $T_0 = (1, 1, \ldots, 1)^T$.
Here, T i denotes the temperature at the i-th node (the variable to be solved), k is the thermal conductivity, σ is the radiation coefficient, h is the convective heat transfer coefficient, T θ is the ambient temperature, and T l e f t and T r i g h t are the fixed temperatures at the left and right boundaries, respectively. All physical quantities have been nondimensionalized to improve numerical stability. The Jacobian matrix of this model is not only sparse (tridiagonal) but also exhibits stronger nonlinearity due to the nonlinear radiation term T i 4 , making it a more challenging case for testing algorithms.
To ensure a fair comparison, we maintain exactly the same algorithm parameter settings as in Section 4.1 (convergence tolerance $\varepsilon = 10^{-6}$, Wolfe line search conditions, and initial matrix $B_0 = J(x_0) + \varepsilon I$) and the same performance evaluation metrics. In the model, the parameters are selected as $k = 0.1$, $\sigma = 0.01$, $h = 0.05$, $T_\theta = 0.5$, $T_{\mathrm{left}} = 1.0$, and $T_{\mathrm{right}} = 0.8$.
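Under the reconstruction of Equation (30) given above and the parameter values just listed, the residual of the heat-conduction test model can be evaluated with a few lines of code; this sketch illustrates the model's structure and is not the authors' implementation.

```python
import numpy as np

k, sigma, h_c, T_theta, T_left, T_right = 0.1, 0.01, 0.05, 0.5, 1.0, 0.8

def heat_residual(T):
    """Residual of the discretized heat-conduction model (30)."""
    Tpad = np.concatenate(([T_left], T, [T_right]))     # fixed boundary temperatures
    conduction = k * (Tpad[2:] - 2.0 * T + Tpad[:-2])
    radiation = sigma * (T**4 - T_theta**4)
    convection = h_c * (T - T_theta)
    return conduction - radiation - convection

r = heat_residual(np.ones(500))                          # residual at T0 = (1, ..., 1)
print(np.linalg.norm(r))
```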
Table 3 and Table 4 present a performance comparison between the sparse Broyden–Schubert method and the BFGS method for solving the nonlinear heat conduction model (30). The results demonstrate that the sparse Broyden–Schubert method exhibits significant advantages across multiple metrics.
In terms of iterative efficiency, this method achieves fewer iterations across all tested dimensions. In the 2000-dimensional problem, it converges in only four iterations, showing a notable improvement compared to the six iterations required by the BFGS method. This advantage primarily stems from the algorithm’s use of a precisely constructed initial Jacobian matrix. This initial matrix accurately captures the tridiagonal dominant structure resulting from the discretization of the heat conduction term, providing a high-quality structured initial guess for the iterative process and thereby significantly enhancing the accuracy of the search direction.
In terms of function evaluation efficiency, the sparse Broyden–Schubert method requires a significantly lower total number of component function evaluations than the BFGS method, reflecting a substantial computational advantage. This indicates that each function evaluation contributes more effectively to error reduction, resulting in a faster convergence rate. The underlying reason lies in the sparse update mechanism, which strictly preserves the sparsity pattern of the Jacobian matrix and aligns closely with the inherent mathematical structure of the problem.
Regarding computational time, the advantage of the sparse Broyden–Schubert method becomes more pronounced as the problem size increases. In the 2000-dimensional case, it reduces computation time by approximately 17% compared to the BFGS method. This performance gap widens further in higher-dimensional problems, reaching approximately 24% in the 10,000-dimensional case. For large-scale problems, the use of sparse matrix operations effectively reduces memory access overhead and cache miss rates, thereby enhancing practical computational efficiency.
Particularly noteworthy is that this experiment validates the generality and robustness of the sparse Broyden–Schubert method when applied to problems with diverse physical backgrounds and mathematical properties. Although the heat conduction model differs significantly from the previously studied acid fracturing model in both physical mechanisms and sources of nonlinearity, the method still demonstrates superior performance. This indicates that its advantages do not stem from adaptation to a specific problem but rather from a universal utilization of the sparse structure of the Jacobian matrix. Even when confronted with numerical perturbations caused by the strongly nonlinear radiation term $T_i^4$, the algorithm maintains strong stability, owing to its high-quality initialization strategy and moderate regularization mechanism, which together ensure a reasonable starting point for iterations and robustness throughout the numerical process.
In summary, as a quasi-Newton algorithm that fully exploits the sparse nature of the Jacobian matrix, the sparse Broyden–Schubert method provides an efficient, stable, and general numerical framework for solving nonlinear systems with sparse structures. It demonstrates broad application prospects in addressing multi-physics coupling problems.

5. Discussion

This study addresses the computational challenges in simulating the multi-physics coupling of permeability, acid concentration, pressure, and velocity fields during carbonate rock acid fracturing. A mathematical model is established and discretized using the finite volume method, resulting in a large-scale system of nonlinear equations. This system exhibits a clear structured sparsity, and to improve solution efficiency, the sparse Broyden–Schubert method—a quasi-Newton algorithm designed for sparse structures—is adopted. Results show that this method significantly outperforms the classical BFGS algorithm in terms of computational speed and numerical stability, enabling more efficient solutions for such high-dimensional nonlinear problems.
From a theoretical perspective, although the superlinear convergence of quasi-Newton methods (Theorem 1) is an asymptotic property independent of dimension n , the practical impact of n lies in how long it takes to reach this regime. For classical BFGS, which updates a dense n × n matrix, superlinear convergence typically emerges after O ( n ) iterations, once sufficient directional information has been gathered to form a high-quality Hessian approximation. The sparse Broyden–Schubert method used here exploits the known sparsity pattern of the Jacobian, drastically reducing the number of free parameters that need to be estimated. As a result, it accumulates the necessary curvature information much faster than dense methods, often entering the superlinear phase in far fewer than O ( n ) iterations. This explains the improved convergence rates in Table 1 and Table 2 and highlights a broader advantage: sparsity not only reduces per-iteration cost but also accelerates the overall convergence process by shortening the initial learning phase.
The key advantage of the method lies in its effective utilization of the Jacobian matrix structure. Under the finite volume discretization, variables interact locally, leading to a regular tridiagonal sparse pattern in the Jacobian. The sparse Broyden–Schubert method takes full advantage of this feature by restricting updates to non-zero entries only, avoiding the expensive full-matrix operations required in BFGS. This not only reduces the computational cost per iteration but also significantly lowers memory usage, thereby accelerating convergence. Specifically, it requires fewer iterations, fewer function evaluations, and achieves a faster average convergence rate. Since nonlinear solvers typically dominate the overall simulation time, this improvement is of practical significance.
The performance trend in Figure 2 demonstrates that the sparse quasi-Newton method exhibits superior scalability. While both algorithms perform comparably on small-scale instances (e.g., n   =   100 ), their behaviors diverge significantly as the system size increases—particularly beyond n   =   2000 . The sharp rise in BFGS computation time reflects its inherent limitation: full-rank updates lead to cubic complexity in matrix operations, which becomes prohibitive at scale. In contrast, the sparse Broyden–Schubert method maintains a nearly linear growth in computational cost, thanks to its restricted update pattern that aligns with the underlying tridiagonal structure. This structural compatibility not only reduces arithmetic operations per iteration but also minimizes memory bandwidth pressure, resulting in a 31% CPU time saving at n   =   4000 . Such scalability is essential for practical deployment in large-scale reservoir simulations, where model fidelity often demands high spatial resolution.
Similarly, the convergence profile in Figure 1 reveals more than just speed; it reflects algorithmic robustness. The rapid drop in residual by several orders of magnitude within the initial iterations suggests that the sparse method captures the dominant curvature directions of the nonlinear system more effectively. This qualitative behavior aligns with the aforementioned theoretical analysis: because the method enters the superlinear convergence phase earlier, the residual declines faster within fewer iterations. In wall-clock terms, this corresponds to a 25–30% reduction in time-to-solution to reach $10^{-6}$ accuracy. This advantage directly impacts workflows in reservoir engineering, enabling faster turnaround for iterative tasks such as history matching, sensitivity studies, and parametric design optimization.
To further validate the method’s general applicability beyond acid fracturing contexts, we conducted additional tests using a nonlinear multi-field coupled heat transfer model, which also yields a tridiagonal Jacobian structure. The sparse Broyden–Schubert method again demonstrated consistent superiority, reducing iteration counts by 39% and computation time by 28% on average compared to BFGS at dimensions ranging from 500 to 30,000. This confirms that the algorithm’s efficiency is not problem-specific but rather stems from its ability to exploit structured sparsity common in many discretized PDE systems.
The main contribution of this work is the successful application of a sparse quasi-Newton method—less used in engineering simulations previously—to the complex, multi-physics problem of acid fracturing in carbonate rocks, along with a systematic evaluation of its performance. We observe that the sparse structure generated by the finite volume discretization aligns well with the algorithm’s update mechanism, forming a smooth workflow from “modeling” to “discretization” to “numerical solution.” This highlights an important insight: in numerical simulation, selecting a solver that matches the inherent structure of the problem can be more effective than simply pursuing algorithmic complexity.
In summary, the sparse Broyden–Schubert method demonstrates strong computational efficiency, stable convergence, and excellent scalability when solving multi-physics coupled nonlinear systems in acid-fracturing simulations. This study not only validates its practical value in engineering applications but also theoretically elucidates the source of its efficiency advantages: sparsity simultaneously reduces per-iteration cost and accelerates the overall convergence process. It provides a feasible and efficient numerical solution for large-scale acid fracturing modeling. Future work will explore its extension to more complex scenarios, including intricate fracture geometries, unstructured grids, and parallel computing environments.

6. Conclusions

Based on the evolutionary characteristics of carbonate rock acid fracturing, a mathematical model was established that considers the multi-physics coupling among permeability, acid concentration, pressure, and velocity fields. The model was discretized using the finite volume method, resulting in a large-scale nonlinear system of equations. To solve this system, we used the sparse Broyden–Schubert method and performed corresponding numerical experiments.
Numerical results from both the acid fracturing model and the additional thermal conduction coupling model demonstrate the significant advantages of the proposed algorithm over the conventional BFGS method. For the acid fracturing model, the sparse Broyden–Schubert method achieves a reduction in the number of iterations across all dimensions and shortens the average computation time by 30% compared to the BFGS method. Similarly, in the thermal conduction model, the proposed method maintains a 30% improvement in iteration efficiency, reduces computation time by 20–35%, and consistently exhibits superior average convergence rates across all tested dimensions.
While the proposed sparse quasi-Newton method demonstrates significant computational advantages for acid-fracturing simulations, its applicability is subject to certain limitations. The model currently omits thermal and geomechanical couplings, which may restrict its use in environments where these effects are pronounced. Additionally, the dependency on an initial Jacobian evaluation and limited validation under realistic field conditions warrants further research. Future work will focus on extending the model to incorporate additional physics, developing adaptive Jacobian initialization strategies, and validating the method with field–data integration.
The results demonstrate that the adopted sparse quasi-Newton method is both computationally efficient and practically effective for acid-fracturing simulations in carbonate formations, while also exhibiting strong potential for extension to other multi-physics coupling problems in reservoir engineering.

Author Contributions

Conceptualization, Z.C.; methodology, Z.C.; software, M.L.; validation, Z.C. and M.L.; formal analysis, Z.C.; investigation, M.L.; resources, Z.C.; data curation, M.L.; writing—original draft preparation, Z.C. and M.L.; writing—review and editing, Z.C.; visualization, M.L.; supervision, Z.C.; project administration, Z.C.; funding acquisition, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62273060).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chang, L.; Long, W.; Wu, Y.; Chen, F.; Liu, Z.; Wang, G. Gas condensate occurrence during the pressure depletion of fracture-vuggy carbonate condensate gas reservoir. Sci. Technol. Eng. 2021, 21, 11136–11143. [Google Scholar]
  2. Zhou, S.; Li, X.; Su, G.; Zhou, C.; Li, X. The research of acid rock reaction kinetics experiment of different acid systems for carbonate gas reservoirs. Sci. Technol. Eng. 2014, 14, 211–214. [Google Scholar]
  3. Cai, J.; Fang, H.; Su, J.; Wang, L. Numerical simulation of acidizing in fracture-vug type carbonate reservoir. Sci. Technol. Eng. 2022, 22, 15131–15141. [Google Scholar]
  4. Aljawad, M.S.; Aljuliah, H.; Mahmoud, M.; Desouky, M. Integration of field, laboratory, and modeling aspects of acid fracturing: A comprehensive review. J. Pet. Sci. Eng. 2019, 181, 106158. [Google Scholar] [CrossRef]
  5. Forsyth, P.A. A control-volume, finite-element method for local mesh refinement in thermal reservoir simulation. SPE Reserv. Eng. 1990, 5, 561–566. [Google Scholar] [CrossRef]
  6. Schubert, L.K. Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian. Math. Comput. 1970, 24, 27–30. [Google Scholar] [CrossRef]
  7. Toint, P.L. On sparse and symmetric matrix updating subject to a linear equation. Math. Comput. 1977, 31, 954–961. [Google Scholar] [CrossRef]
  8. Toint, P.L. A sparse quasi-Newton update derived variationally with a nondiagonally weighted Frobenius norm. Math. Comput. 1981, 37, 425–433. [Google Scholar] [CrossRef]
  9. Toint, P.L. A note about sparsity exploiting quasi-Newton updates. Math. Program. 1981, 21, 172–181. [Google Scholar] [CrossRef]
  10. Broyden, C.G. The convergence of an algorithm for solving sparse nonlinear systems. Math. Comput. 1971, 25, 285–294. [Google Scholar] [CrossRef]
  11. Marwil, E. Convergence results for Schubert's method for solving sparse nonlinear equations. SIAM J. Numer. Anal. 1979, 16, 588–604. [Google Scholar] [CrossRef]
  12. Bogle, I.; Perkins, J. A new sparsity preserving quasi-Newton update for solving nonlinear equations. SIAM J. Sci. Stat. Comput. 1990, 11, 621–630. [Google Scholar] [CrossRef]
  13. Soulaine, C.; Tchelepi, H.A. Micro-continuum approach for pore-scale simulation of subsurface processes. Transp. Porous Media 2016, 113, 431–456. [Google Scholar] [CrossRef]
  14. Dennis, J.E., Jr.; Moré, J.J. Quasi-Newton methods, motivation and theory. SIAM Rev. 1977, 19, 46–89. [Google Scholar] [CrossRef]
  15. Fletcher, R. An optimal positive definite update for sparse Hessian matrices. SIAM J. Optim. 1995, 5, 192–218. [Google Scholar] [CrossRef]
  16. Broyden, C.G. The convergence of a class of double-rank minimization algorithms: 2. The new algorithm. IMA J. Appl. Math. 1970, 6, 222–231. [Google Scholar] [CrossRef]
  17. Fletcher, R. A new approach to variable metric algorithms. Comput. J. 1970, 13, 317–322. [Google Scholar] [CrossRef]
  18. Goldfarb, D. A family of variable-metric methods derived by variational means. Math. Comput. 1970, 24, 23–26. [Google Scholar] [CrossRef]
  19. Shanno, D.F. Conditioning of quasi-Newton methods for function minimization. Math. Comput. 1970, 24, 647–656. [Google Scholar] [CrossRef]
  20. Daokun, C.; Chao, Y.; Fangfang, L.; Wenjing, M. Parallel structured sparse triangular solver for GPU platform. J. Softw. 2023, 34, 4941–4951. [Google Scholar]
  21. Saltz, J.H. Aggregation methods for solving sparse triangular systems on multiprocessors. SIAM J. Sci. Stat. Comput. 1990, 11, 123–144. [Google Scholar] [CrossRef]
  22. Alvarado, F.L.; Schreiber, R. Optimal parallel solution of sparse triangular systems. SIAM J. Sci. Comput. 1993, 14, 446–460. [Google Scholar] [CrossRef]
  23. Raghavan, P.; Teranishi, K. Parallel hybrid preconditioning: Incomplete factorization with selective sparse approximate inversion. SIAM J. Sci. Comput. 2010, 32, 1323–1345. [Google Scholar] [CrossRef]
  24. Anderson, E.; Saad, Y. Solving sparse triangular linear systems on parallel computers. Int. J. High Speed Comput. 1989, 1, 73–95. [Google Scholar] [CrossRef]
Figure 1. (a) $\log \| F(x) \|^2$ vs. iteration count ($n = 3000$); (b) $\log \| F(x) \|^2$ vs. computation time ($n = 3000$).
Figure 2. (a) Computation time as a function of problem size (logarithmic scale); (b) computation time as a function of problem size (linear scale).
Table 1. Optimization performance comparison.

Dim | BFGS Iteration | BFGS Tcpu (s) | Sparse Broyden–Schubert Iteration | Sparse Broyden–Schubert Tcpu (s)
50 | 14 | 0.0020 | 13 | 0.0011
100 | 15 | 0.0052 | 13 | 0.0035
200 | 15 | 0.0186 | 13 | 0.0109
300 | 15 | 0.0390 | 13 | 0.0292
400 | 15 | 0.0960 | 13 | 0.0731
500 | 15 | 0.1450 | 13 | 0.1074
600 | 15 | 0.2264 | 13 | 0.1743
700 | 15 | 0.3456 | 13 | 0.2345
800 | 15 | 0.4504 | 13 | 0.3022
900 | 15 | 0.5803 | 13 | 0.3694
1000 | 15 | 0.7275 | 13 | 0.4533
2000 | 15 | 3.9113 | 13 | 2.4461
2500 | 15 | 6.5859 | 13 | 4.2408
4000 | 14 | 20.6561 | 13 | 14.2332
5000 | 14 | 37.6264 | 13 | 25.7713
6000 | 14 | 61.3392 | 13 | 41.9309
7000 | 14 | 96.5590 | 13 | 65.1102
8000 | 14 | 136.8498 | 13 | 92.5859
9000 | 14 | 189.5147 | 13 | 127.4359
10,000 | 14 | 259.5736 | 13 | 172.7991
Table 2. Comparative optimization results.

n | Method | N1 | Nm | m | R
50 | BFGS | 21.3780 | 2.0820 × 10^-7 | 27 | 0.6830
50 | Broyden–Schubert | 21.3780 | 4.1610 × 10^-8 | 27 | 0.7430
100 | BFGS | 30.1160 | 1.5070 × 10^-7 | 29 | 0.6590
100 | Broyden–Schubert | 30.1160 | 3.9490 × 10^-8 | 27 | 0.7570
200 | BFGS | 42.5090 | 1.6970 × 10^-7 | 29 | 0.6670
200 | Broyden–Schubert | 42.5090 | 3.7350 × 10^-8 | 27 | 0.7720
300 | BFGS | 52.0290 | 1.7890 × 10^-7 | 29 | 0.6720
300 | Broyden–Schubert | 52.0290 | 3.6850 × 10^-8 | 27 | 0.7800
400 | BFGS | 60.0580 | 1.8650 × 10^-7 | 29 | 0.6760
400 | Broyden–Schubert | 60.0580 | 3.7110 × 10^-8 | 27 | 0.7850
500 | BFGS | 67.1340 | 1.9120 × 10^-7 | 29 | 0.6790
500 | Broyden–Schubert | 67.1340 | 3.7730 × 10^-8 | 27 | 0.7890
600 | BFGS | 73.5320 | 1.9330 × 10^-7 | 29 | 0.6810
600 | Broyden–Schubert | 73.5320 | 3.8510 × 10^-8 | 27 | 0.7910
700 | BFGS | 79.4170 | 1.9340 × 10^-7 | 29 | 0.6840
700 | Broyden–Schubert | 79.4170 | 3.9340 × 10^-8 | 27 | 0.7940
800 | BFGS | 84.8940 | 1.9220 × 10^-7 | 29 | 0.6860
800 | Broyden–Schubert | 84.8940 | 4.0170 × 10^-8 | 27 | 0.7950
900 | BFGS | 90.0390 | 1.9010 × 10^-7 | 29 | 0.6890
900 | Broyden–Schubert | 90.0390 | 4.0960 × 10^-8 | 27 | 0.7970
1000 | BFGS | 94.9050 | 1.8740 × 10^-7 | 29 | 0.6910
1000 | Broyden–Schubert | 94.9050 | 4.1700 × 10^-8 | 27 | 0.7980
2000 | BFGS | 134.1900 | 1.4810 × 10^-7 | 29 | 0.7110
2000 | Broyden–Schubert | 134.1900 | 4.6820 × 10^-8 | 27 | 0.8070
2500 | BFGS | 150.0230 | 1.2660 × 10^-7 | 29 | 0.7200
2500 | Broyden–Schubert | 150.0230 | 4.8330 × 10^-8 | 27 | 0.8090
4000 | BFGS | 189.7550 | 1.5590 × 10^-7 | 27 | 0.7750
4000 | Broyden–Schubert | 189.7550 | 5.1050 × 10^-8 | 27 | 0.8160
5000 | BFGS | 212.1490 | 1.1350 × 10^-7 | 27 | 0.7910
5000 | Broyden–Schubert | 212.1490 | 5.2090 × 10^-8 | 27 | 0.8200
6000 | BFGS | 232.3940 | 9.3630 × 10^-8 | 27 | 0.8010
6000 | Broyden–Schubert | 232.3940 | 5.2830 × 10^-8 | 27 | 0.8220
7000 | BFGS | 251.0120 | 8.4680 × 10^-8 | 27 | 0.8080
7000 | Broyden–Schubert | 251.0120 | 5.3380 × 10^-8 | 27 | 0.8250
8000 | BFGS | 268.3410 | 8.0460 × 10^-8 | 27 | 0.8120
8000 | Broyden–Schubert | 268.3410 | 5.3810 × 10^-8 | 27 | 0.8270
9000 | BFGS | 284.6170 | 7.8430 × 10^-8 | 27 | 0.8150
9000 | Broyden–Schubert | 284.6170 | 5.4140 × 10^-8 | 27 | 0.8290
10,000 | BFGS | 300.0120 | 7.7540 × 10^-8 | 27 | 0.8180
10,000 | Broyden–Schubert | 300.0120 | 5.4420 × 10^-8 | 27 | 0.8310
Table 3. Performance comparison on the heat conduction model.

Dim | BFGS Iteration | BFGS Tcpu (s) | Sparse Broyden–Schubert Iteration | Sparse Broyden–Schubert Tcpu (s)
500 | 6 | 0.0124 | 3 | 0.0076
1000 | 6 | 0.0544 | 3 | 0.0349
2000 | 6 | 0.2532 | 4 | 0.2096
2500 | 6 | 0.4003 | 4 | 0.3378
4000 | 6 | 1.2043 | 4 | 1.0850
5000 | 7 | 2.4818 | 4 | 1.8392
6000 | 7 | 3.7230 | 4 | 2.8919
7500 | 7 | 5.4478 | 4 | 4.3431
8000 | 7 | 7.6342 | 4 | 6.1400
9000 | 7 | 10.0049 | 4 | 8.1452
10,000 | 7 | 13.4487 | 4 | 10.7674
20,000 | 7 | 81.1970 | 4 | 69.0125
30,000 | 7 | 275.0223 | 4 | 221.1731
Table 4. Comparative results for the additional heat conduction model.

n | Method | N1 | Nm | m | R
500 | BFGS | 0.2630 | 6.279 × 10^-6 | 11 | 0.968
500 | Broyden–Schubert | 0.2630 | 1.163 × 10^-5 | 7 | 1.433
1000 | BFGS | 0.3720 | 8.932 × 10^-6 | 11 | 0.967
1000 | Broyden–Schubert | 0.3720 | 1.568 × 10^-5 | 7 | 1.439
2000 | BFGS | 0.5270 | 1.267 × 10^-5 | 11 | 0.967
2000 | Broyden–Schubert | 0.5270 | 5.633 × 10^-7 | 9 | 1.528
2500 | BFGS | 0.5890 | 1.417 × 10^-5 | 11 | 0.967
2500 | Broyden–Schubert | 0.5890 | 5.657 × 10^-7 | 9 | 1.540
4000 | BFGS | 0.7450 | 1.794 × 10^-5 | 11 | 0.967
4000 | Broyden–Schubert | 0.7450 | 5.726 × 10^-7 | 9 | 1.564
5000 | BFGS | 0.8330 | 1.8740 × 10^-7 | 13 | 0.971
5000 | Broyden–Schubert | 0.8330 | 5.771 × 10^-7 | 9 | 1.576
6000 | BFGS | 0.9120 | 3.008 × 10^-6 | 13 | 0.971
6000 | Broyden–Schubert | 0.9120 | 5.814 × 10^-7 | 9 | 1.585
7000 | BFGS | 0.9850 | 3.250 × 10^-6 | 13 | 0.971
7000 | Broyden–Schubert | 0.9850 | 5.858 × 10^-7 | 9 | 1.593
8000 | BFGS | 1.0530 | 3.474 × 10^-6 | 13 | 0.971
8000 | Broyden–Schubert | 1.0530 | 5.901 × 10^-7 | 9 | 1.599
9000 | BFGS | 1.1170 | 3.686 × 10^-6 | 13 | 0.971
9000 | Broyden–Schubert | 1.1170 | 5.943 × 10^-7 | 9 | 1.605
10,000 | BFGS | 1.1780 | 3.885 × 10^-6 | 13 | 0.971
10,000 | Broyden–Schubert | 1.1780 | 5.986 × 10^-7 | 9 | 1.610
20,000 | BFGS | 1.6650 | 5.496 × 10^-6 | 13 | 0.971
20,000 | Broyden–Schubert | 1.6650 | 6.391 × 10^-7 | 9 | 1.641
30,000 | BFGS | 2.0400 | 6.732 × 10^-6 | 13 | 0.971
30,000 | Broyden–Schubert | 2.0400 | 6.772 × 10^-7 | 9 | 1.658