New Preconditioned Iteration Method Solving the Special Linear System from the PDE-Constrained Optimal Control Problem

Abstract: Partial differential equation (PDE) constrained optimal control problems are widely used in many fields of science and engineering. In this paper, we mainly solve the optimization problem constrained by the time-periodic eddy current equation. We propose the three-block splitting (TBS) iterative method and prove that it is unconditionally convergent. The corresponding TBS preconditioner is then derived from the TBS iteration method, and the spectral properties of the preconditioned matrix are studied. Finally, two-dimensional numerical examples are presented to demonstrate the advantages of the TBS iterative method and of the TBS preconditioner combined with the Krylov subspace method.


Introduction
We consider the time-periodic eddy current optimization problem: find the control u(x, t) and the state y(x, t) that minimize the cost functional, subject to the time-periodic problem

σ ∂y(x, t)/∂t + curl(ν curl(y(x, t))) + εy(x, t) = u(x, t)  in Q_T,
y(x, t) × n = 0  on Σ_T,
u(x, 0) = u(x, T)  in Ω,
y(x, 0) = y(x, T)  in Ω,

where Ω is a bounded domain in R^d for d ∈ {2, 3} whose boundary Γ is Lipschitz continuous. The unit outer normal vector of Ω is denoted by n, and Q_T = Ω × (0, T) is the space-time cylinder whose lateral surface is Σ_T = Γ × (0, T), with T > 0. Furthermore, y_d(x, t) denotes the given target state function, the magnetic reluctivity ν ∈ L^∞(Ω) is uniformly positive, the conductivity σ ∈ L^∞(Ω) is a positive constant, β > 0 is a regularization (or cost) parameter, and ε > 0 is an additional regularization parameter; in some practical computations we may take ε = 0. The PDE-constrained optimization problem was first proposed by J. L. Lions [1]. Over the past 50 years, research on optimal control problems with PDE constraints has developed considerably, with important applications in modern industry, medicine, economics and other areas, such as semiconductor material design, water pressure control in oil exploitation, image registration, and parameter determination in option pricing [1,2]. The eddy current equation originates from time-periodic electromagnetic field problems, in which the time-periodic current is usually the driving control force; it is widely used in areas such as non-destructive testing of conductive materials [3].
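The cost functional itself did not survive extraction above. Under the standard tracking-type formulation implied by the target state y_d and the cost parameter β, it can be written as follows (a reconstruction consistent with the surrounding definitions, not a verbatim quote of the original):

```latex
\min_{y,u}\; J(y,u)
  = \frac{1}{2}\int_0^T\!\!\int_\Omega \lvert y(x,t)-y_d(x,t)\rvert^2 \,dx\,dt
  + \frac{\beta}{2}\int_0^T\!\!\int_\Omega \lvert u(x,t)\rvert^2 \,dx\,dt ,
```

subject to the time-periodic constraints stated above.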
For simplicity, we make the restrictive assumption that Ω_0 = Ω. Since the problem operator of (5) is Hermitian [6], the discretize-then-optimize approach [7-9] is adopted to obtain the finite-dimensional optimization problem (6), where K = (K_ij)_{n×n} is the stiffness matrix and M = (M_ij)_{n×n} is the mass matrix; y, y_d, u denote the discrete forms of ŷ, ŷ_d, û, respectively, and (·)* denotes the conjugate transpose. The Lagrangian function of (6) is formed with p as the Lagrange multiplier, from which the complex linear system (8) can be derived, where ω̃ = σω. Using the relationship u = (1/β)p obtained from the second row of (8) and the scaling û = −√β u [6], we reduce (8) to (9), where K ∈ R^{n×n} is symmetric and M ∈ R^{n×n} is symmetric positive definite. Obviously, (9) is a kind of generalized saddle point problem, and iterative methods are an effective way to solve such problems. The Hermitian and skew-Hermitian splitting (HSS) iterative method was proposed by Bai et al. in 2003 [10], which greatly accelerated the solution of such equations. In 2013 [11], in order to avoid the dense matrices generated by the skew-Hermitian part of the HS splitting during the iteration, the coefficient matrix was rotated with the matrix [0, I; −I, 0] and the HS splitting was carried out again; the Hermitian part of the new coefficient matrix then serves as the coefficient matrix of the second iteration equation, which greatly improves computational efficiency and also induces a new preconditioner. However, the rotation matrix is involved in both the preconditioner and the iteration matrix, which adds to the computational complexity. In 2015 [12], based on the idea of matrix rotation and on [11], Zheng, Zhang et al. used two matrices to rotate the coefficient matrix of the equation and then applied the idea of [11] to construct the block alternating splitting (BAS) iteration method, which induces the BAS preconditioner and generalizes the preconditioner P_BD of [13].
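To make the HS splitting concrete, the following sketch verifies A = H + S and runs the HSS iteration of [10] on a toy problem. The matrix A here is a hypothetical stand-in (not the matrix of (9)), built so that its Hermitian part is positive definite, as HSS convergence requires; the parameter value is an ad hoc choice for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
# Hypothetical complex test matrix: Hermitian part B@B.T + n*I is SPD,
# skew-Hermitian part is i*(C + C.T).
A = (B @ B.T + n * np.eye(n)) + 1j * (C + C.T)

# HS splitting: A = H + S with H Hermitian and S skew-Hermitian.
H = (A + A.conj().T) / 2
S = (A - A.conj().T) / 2

# HSS iteration of Bai, Golub and Ng [10] for A x = b with alpha > 0:
#   (alpha*I + H) x^{k+1/2} = (alpha*I - S) x^k       + b
#   (alpha*I + S) x^{k+1}   = (alpha*I - H) x^{k+1/2} + b
alpha = 10.0                      # ad hoc parameter for this toy problem
b = rng.standard_normal(n) + 0j
I = np.eye(n)
x = np.zeros(n, dtype=complex)
for _ in range(100):
    x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
```

Since the Hermitian part is positive definite, the iteration converges for any α > 0, and the final x satisfies A x ≈ b.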
When the original equation is discretized, the spectrum is usually scattered and the condition number is large, so the convergence of Krylov subspace iteration methods is usually slow; an effective preconditioner is needed to improve the spectral properties of the coefficient matrix and thereby improve the efficiency and accuracy of the computation. In 2014 [14], Axelsson, Neytcheva et al. rewrote the complex linear system in real block form. Reference [15] gives a new preconditioner C_α = (A + αB)A^{-1}(A + αB), whose application involves a solve with A at every step, while [14] works with an approximation that replaces the Schur complement (1 + βω̃²)M + βKM^{-1}K of the coefficient matrix.
After preconditioning, the spectrum of the system is clustered in [1/2, 1], which effectively accelerates Krylov subspace iteration methods. In 2018 [18], for the coefficient matrix derived from optimization problems constrained by time-periodic parabolic problems, a new preconditioner was obtained based on the idea of [16] and on the fact that the coefficient matrix is complex; it performs better when preconditioning the Krylov subspace method. In [6], Axelsson added an adjustable parameter α on the basis of [18] and obtained a parameterized block preconditioner such that, under the condition of optimal parameters, even if the stiffness matrix K is only positive semidefinite, the spectrum of the preconditioned system lies within the range [1/2, 1]. The purpose of this article is to solve Equation (9) efficiently. In this paper, a three-block splitting (TBS) scheme for the coefficient matrix in (9) is proposed. According to this new splitting, the TBS iteration method is established to solve (9), and we prove theoretically that the TBS iterative method is unconditionally convergent. Meanwhile, the TBS iterative method naturally induces a preconditioner, and we prove that the spectrum of the preconditioned matrix is concentrated in a certain interval. In addition, in actual computations, the iteration steps and running time of the iteration method are greatly improved, and the application of the preconditioner greatly reduces the number of iteration steps of the Krylov subspace method and improves computational efficiency.
The structure of this paper is as follows. In Section 2, we propose the TB splitting, give the corresponding TBS iteration method, and prove that it is convergent. In Section 3, we derive the TBS preconditioner from the TBS iteration method and analyze its spectral properties. Numerical experiments in Section 4 illustrate the efficiency and feasibility of the TBS iteration method and the TBS preconditioner. Finally, conclusions are given in Section 5.

TBS Iteration Method
First, following the HS splitting [10], we divide the coefficient matrix A of (9) into two parts and then split A_2 further; this yields the three-block (TB) splitting of the coefficient matrix given in (13), where I ∈ R^{n×n} is the identity matrix and α > 0. According to the splitting in (13), we obtain the following TBS iterative method.

TBS Iteration
Choose x^(0) ∈ C^{2n} as an arbitrary initial guess and compute x^(k+1) for k = 0, 1, 2, … by the iteration scheme with α > 0. Using Gaussian elimination, the steps of the TBS iteration method are described in Algorithm 1: given x^(k) ∈ C^{2n}, follow the procedure below to calculate x^(k+1). As can be seen from Algorithm 1, the first three equations, whose coefficient matrices are real symmetric positive definite, can be solved with the conjugate gradient (CG) method or with a Cholesky decomposition. The fourth step requires only direct calculation, without any additional solution steps. Moreover, the second step uses û^(k+1/2) obtained in the first step, and the fourth step uses y^(k+1) from the third step, which greatly improves computational efficiency and shortens the computing time of the TBS iteration method. This will also be demonstrated in the numerical experiments.
By direct substitution, we can rewrite (14) in fixed-point form, where L(α) is the iteration matrix of the TBS iterative method. Next, we prove the unconditional convergence of the TBS iteration method. Moreover, the upper bound of the spectral radius of L(α) depends only on the parameter α, which provides great convenience for further finding the optimal parameter. Theorem 1 (Convergence Theorem). Define A ∈ C^{2n×2n} as in (9), where M ∈ R^{n×n} is symmetric positive definite, K ∈ R^{n×n} is symmetric, and α is a positive constant; ρ(L(α)) denotes the spectral radius of L(α) and sp(M) denotes the spectrum of the matrix M. Then for any α > 0 and any initial vector, the TBS iterative method defined in Section 2 is unconditionally convergent. Note that the convergence factor σ(α) → 1 as λ_j → 0.
Proof. By matrix similarity we obtain a transformed iteration matrix L̃(α), and obviously ρ(L̃(α)) = ρ(L(α)). Since the matrices M and K are real symmetric, there exist orthogonal matrices Q_M and Q_K that diagonalize them. In the following we consider finding the optimal parameter that minimizes σ(α).
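The first step of the proof relies on the standard fact that similar matrices share the same spectrum, and hence the same spectral radius. A quick numerical sanity check of this fact on a random toy matrix (unrelated to the actual L(α)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
L_mat = rng.standard_normal((n, n))                  # toy stand-in for L(alpha)
T = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # well-conditioned transform

L_tilde = T @ L_mat @ np.linalg.inv(T)               # similar matrix

rho = max(abs(np.linalg.eigvals(L_mat)))
rho_tilde = max(abs(np.linalg.eigvals(L_tilde)))
```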

Theorem 2.
(The optimal parameter theorem). Under the assumptions of Theorem 1, the optimal parameter α* is given in terms of λ_max and λ_min, the maximum and minimum eigenvalues of the matrix M, respectively.

Proof.
Let σ'(λ) denote the derivative of σ(λ). Using the relation between σ'(λ) and the monotonicity of σ(λ), we take the derivatives of σ_1(α) and σ_2(α), respectively. By comparing the monotonicity and the maxima of these functions, we find that if α* minimizes σ(α), then it satisfies the stated equation; solving it, we obtain the result. Theorem 2 is proved.
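Theorem 2 expresses the optimal parameter through the extreme eigenvalues of the mass matrix M. These are cheap to compute for moderate problem sizes; a sketch on a hypothetical SPD stand-in for M:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
B = rng.standard_normal((n, n))
M = B @ B.T + np.eye(n)        # hypothetical SPD stand-in for the mass matrix M

eigs = np.linalg.eigvalsh(M)   # eigenvalues in ascending order (symmetric input)
lam_min, lam_max = eigs[0], eigs[-1]
```

For large sparse M, only the two extreme eigenvalues are needed, so a Lanczos-type eigensolver would be used instead of a full eigendecomposition.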

TBS Preconditioner
As shown in [12], there is a unique splitting A = P(α) − G(α) for Equation (9), where P(α) is nonsingular, such that the iteration matrix L(α) can be expressed as L(α) = P^{-1}(α)G(α). Obviously, we can regard the splitting matrix P(α) given in (16) as the TBS preconditioner of A in (9), i.e., as the preconditioner derived from the TBS iteration method.
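The correspondence between a splitting A = P − G and its stationary iteration is standard: the iteration matrix is P^{-1}G, and whenever its spectral radius is below one the iterates converge to A^{-1}b. A small sketch with a toy Gauss-Seidel-type splitting standing in for the TBS splitting:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
# Toy strictly diagonally dominant matrix standing in for A.
A = rng.standard_normal((n, n)) + 4 * n * np.eye(n)
P = np.tril(A)                 # toy nonsingular splitting matrix (Gauss-Seidel)
G = P - A                      # so that A = P - G

L_mat = np.linalg.solve(P, G)  # iteration matrix L = P^{-1} G
assert max(abs(np.linalg.eigvals(L_mat))) < 1   # convergent splitting here

b = rng.standard_normal(n)
x = np.zeros(n)
for _ in range(200):
    x = L_mat @ x + np.linalg.solve(P, b)       # x_{k+1} = P^{-1}(G x_k + b)
```

The limit of the iteration solves A x = b, which is exactly why the splitting matrix P(α) can double as a preconditioner.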
When preconditioning a Krylov subspace iteration method with the TBS preconditioner P(α), one usually has to solve P(α)z = r at each step, where r = (r_1; r_2) ∈ C^{2n} is the current residual vector and z = (z_1; z_2) ∈ C^{2n} is the generalized residual vector. Using the structure of the matrix P(α), we obtain the following algorithm to compute the vector z.
Algorithm 2. Given r = (r_1; r_2) ∈ C^{2n}, the following steps compute z = (z_1; z_2) ∈ C^{2n}: (1) solve c_2 from [(αI + M)² + βω̃²M²] c_2 = 2α(αI + M)r_2 − i2α√β ω̃Mr_1. In the implementation of Algorithm 2, the first three equations, whose coefficient matrices are real symmetric positive definite, can be solved with the conjugate gradient (CG) method or with a Cholesky decomposition, and the fourth step requires only direct calculation without any additional solution steps. In the meantime, the second step uses c_2 obtained in the first step, and the fourth step uses z_1 from the third step, which greatly improves computational efficiency and shortens the computing time of the preconditioned Krylov subspace method. This will also be demonstrated in the numerical experiments.
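In scipy terms, supplying a preconditioner to restarted GMRES means providing an operator that applies P^{-1}, i.e. solves P z = r for each residual r. A hedged sketch with toy stand-ins (a diagonal preconditioner instead of the actual P(α), and an SPD matrix instead of the matrix of (9)):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
n = 40
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)     # toy SPD stand-in for the coefficient matrix
b = rng.standard_normal(n)

# Toy preconditioner: the diagonal of A. Applying it means solving
# P z = r for the current residual r, as in Algorithm 2.
P_diag = np.diag(A)
apply_P_inv = LinearOperator((n, n), matvec=lambda r: r / P_diag)

# Restarted GMRES(5), as used in the experiments, with the preconditioner
# supplied through the M argument (scipy applies M ~ P^{-1} to residuals).
x, info = gmres(A, b, M=apply_P_inv, restart=5, maxiter=1000)
```

A better preconditioner (such as P(α)) clusters the spectrum and cuts the number of GMRES(5) sweeps; the diagonal choice here is only for illustration.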

Numerical Experiments
Some numerical experiments were used to illustrate the performance of the TBS iterative method and the TBS preconditioner in solving Equation (9). We used the iteration time in seconds and the number of iteration steps (denoted CPU and IT, respectively) to show the advantages of the TBS iteration method and preconditioner over other iteration methods and preconditioners.
Triangular and tetrahedral meshes in two and three dimensions with the lowest-order Nédélec edge elements [19,20] were used to discretize the variables. We used the MATLAB toolbox in [21] to assemble the coefficient matrix. All computations were carried out in MATLAB R2018a on a computer with an Intel(R) Core(TM) i5-8250U CPU (1.60 GHz) and 4 GB RAM. Example 1 [21]. We consider problem (1) and (2), where Ω = [0, 1] × [0, 1] ⊂ R² and σ = 1, ν = 1, β = 10⁻⁸. We divide the domain Ω into two parts, Ω_1 = {x ∈ Ω | x_1 > x_2} and Ω_2 = Ω \ Ω_1. Table 1 shows the relationship between the level of mesh refinement and the order of the matrices M and K in the 2D case. Tables 2 and 3 report the results of the TBS method of this paper and the BAS iteration method of [12] when the parameter α is the optimal parameter, for different values of ω and ε and different mesh refinement levels. In the experiments, the inner iteration used the conjugate gradient (CG) method, all initial vectors involved in the iteration methods were x^(0) = 0, and the outer iteration termination condition was ‖b − Ax^(k)‖_2/‖b‖_2 ≤ 10⁻⁶, where x^(k) denotes the value at the k-th iteration step. Tables 2 and 3 show that, in terms of iteration steps, whether the parameter or the matrix order changes, the TBS iterative method performs better; as the order increases, the number of TBS steps remains stable and grows only slightly. Its running time is also less than that of the previous method. Even when the matrix order exceeds 10,000, the iteration time of TBS is shorter, which demonstrates the advantages of the TBS iteration method in actual computations. Tables 4 and 5 report the results of the preconditioned Generalized Minimal Residual method, GMRES(5), with P_TBS, with P_BAS from [12], and with P_str−II from [6], when the parameters α are the optimal parameters, for different grid refinement levels and ω.
In the actual computations, the CG method was used as the inner iteration method, the initial vectors of all the iteration methods involved were x^(0) = 0, and the iteration termination condition was ‖b − Ax^(k)‖_2/‖b‖_2 ≤ 10⁻⁶, where x^(k) is the value obtained at the k-th iteration. We use the marks OUT and IN to denote the number of outer and inner iteration steps, respectively; CPU again denotes the computing time. Table 3. Results of the iteration methods in the 2D case for different mesh refinement levels and ω, with ε = 10⁻⁴. Tables 4 and 5 show that, in terms of the number of steps, regardless of whether the parameter or the order changes, P_TBS performs better than P_BAS and P_str−II. In terms of iteration time (CPU), P_TBS needs less preconditioning time than P_BAS, and the CPU of P_TBS is similar to that of P_str−II. Therefore, we conclude that P_TBS achieves a certain improvement in both the number of steps and the time, and performs well.

Example 2 [21]. We consider problem (1) and (2), where Ω = [0, 1] × [0, 1] ⊂ R² and σ = 1, ν = 1, β = 10⁻⁸. We divide the domain Ω into two parts, Ω_1 = {x ∈ Ω | x_2 < 1/2} and Ω_2 = Ω \ Ω_1. First, Table 6 shows the relationship between the level of mesh refinement and the order of the matrices M and K in the 2D case. Table 5. Results of the preconditioned GMRES(5) in the 2D case for different mesh refinement levels and ω, with ε = 10⁻⁴. Tables 7 and 8 report the results of the TBS and BAS iteration methods when the parameters α are the optimal parameters of the iteration methods, under different mesh refinement levels and ω. The inner iteration method and the iteration termination condition are the same as in Example 1. Table 7. Results of the iteration methods in the 2D case for different mesh refinement levels and ω, with ε = 10⁻². Tables 7 and 8 show that the selection of the parameter and the change of the order have little influence on the iteration steps of TBS; as the order increases, the number of iteration steps does not increase significantly. In terms of CPU time, whether the order increases or not, the iteration time of the algorithm is less than that of the BAS iteration method. Tables 9 and 10 report the results of the preconditioned GMRES(5) with P_TBS of this paper, P_BAS of [12], and P_str−II of [6] when the parameters α are the optimal parameters, for different grid refinement levels and ω, ε. The marks, inner iteration algorithm, and iteration termination condition in the following tables are the same as those in Tables 4 and 5. Table 9. Results of the preconditioned GMRES(5) in the 2D case for different mesh refinement levels and ω, with ε = 10⁻². From Tables 9 and 10 we can see that, compared with P_BAS and P_str−II, P_TBS obviously improves the iteration numbers. For the running time (CPU), as the order of the matrices M and K increases, the CPU of P_TBS becomes larger than that of P_str−II and P_BAS.

Conclusions
In this paper, we mainly solved the time-periodic eddy current optimization problem (1) and (2). The methods previously put forward for generalized saddle point problems are not fully applicable here, since the coefficient matrix of the equation is complex. Based on this, this paper proposed a new splitting iteration method (the TBS iteration method) and a new preconditioner (the TBS preconditioner) to solve Equation (9). The iteration method is unconditionally convergent, and the optimal parameter that minimizes the upper bound of the spectral radius of the iteration matrix was obtained. Numerical experiments demonstrated that the TBS method is more feasible and efficient than the previous one, and that the iteration time and steps of the GMRES(5) method preconditioned with the TBS preconditioner also show significant improvement.

Conflicts of Interest:
The authors declare no conflict of interest.