Abstract
Partial differential equation (PDE) constrained optimal control problems arise widely in science and engineering. In this paper we solve the optimization problem constrained by the time-periodic eddy current equation. We propose the three-block splitting (TBS) iterative method and prove that it is unconditionally convergent. The corresponding TBS preconditioner is derived from the TBS iteration method, and we study the spectral properties of the preconditioned matrix. Finally, numerical examples in two dimensions demonstrate the advantages of the TBS iterative method and of the TBS preconditioner combined with Krylov subspace methods.
1. Introduction
We consider the time-periodic eddy current optimization problem: find the control and the state that minimize the cost functional, subject to the time-periodic problem
where is a bounded domain in for and its boundary is Lipschitz continuous. The unit outward normal vector of is denoted by . is the space-time cylinder, whose lateral surface is , with . In addition, we denote the given target state by ; the magnetic reluctivity is uniformly positive, the conductivity is a positive constant, is a regularization (cost) parameter, and is an additional regularization parameter. In some practical computations we may take .
PDE-constrained optimization problems were first studied by J.L. Lions []. Over the past 50 years, research on PDE-constrained optimal control problems has developed considerably and has found important applications in modern industry, medicine, economics, and other areas, such as semiconductor material design, water pressure control in oil exploitation, image registration, and parameter determination for option pricing [,]. The eddy current equation originates from time-periodic electromagnetic field problems, in which the time-periodic current is usually the driving control force. It is widely used in areas such as non-destructive testing of conductive materials [].
In actual computations, for such problems, is assumed to be time-periodic [,], that is,
with angular frequency . Then the solutions of the original control problems (1) and (2) take the form
where and are the solutions of the equations
For simplicity, we make the restrictive assumption that . Since the problem operator of (5) is Hermitian [], the discretize-then-optimize approach [,,] is adopted to obtain the following finite-dimensional optimization problem:
where is the stiffness matrix, is the mass matrix, and denote the discrete forms of , respectively; denotes the conjugate transpose.
The Lagrangian function of (6) is
where is the Lagrange multiplier. Then the following complex linear system can be derived:
where . We use the relationship obtained from the second row of (8) together with a scaling [] to reduce (8) as follows:
where is symmetric and is symmetric positive definite.
Obviously, (9) is a kind of generalized saddle point problem, for which iterative methods are an effective solution approach. The Hermitian and skew-Hermitian splitting (HSS) iterative method, proposed by Bai et al. in 2003 [], greatly accelerated the solution of such equations. In 2013 [], for the system , in order to avoid the dense matrix generated by the skew-Hermitian part of the HS splitting during the iteration, the coefficient matrix was rotated by a matrix before the HS splitting was applied again; the Hermitian part of the new coefficient matrix can then serve as the left-hand matrix of the second iteration equation, which greatly improves computational efficiency and also induces a new preconditioner. However, the rotation matrix appears in both the preconditioner and the iteration matrix, which adds computational complexity. In 2015 [], based on the idea of matrix rotation and on [], Zheng, Zhang et al. used two matrices to rotate the coefficient matrix of the equation and then applied the idea of [] to construct the block alternating splitting (BAS) iteration method and the induced BAS preconditioner, which generalizes the preconditioner of [].
When the original equation is discretized, the spectrum is usually scattered and the condition number is large, so the Krylov subspace iteration algorithms converge slowly; an effective preconditioner is needed to improve the spectral properties of the coefficient matrix and thereby the efficiency and accuracy of the computation. In 2014 [], Axelsson, Neytcheva et al. rewrote the complex linear system as , and further as . Reference [] gives a new preconditioner in which every step of the computation involves ; [] rewrites the coefficient matrix in the form , so as to avoid the case in which the condition number of is poor. In 2016, Axelsson, Farouq et al. [] applied the method of [] to the coefficient matrix obtained by discretizing the Poisson-constrained optimization problem, obtaining , avoiding the problem of solving , and reducing the computational complexity. In 2013 [], Krendl, Simoncini et al. proposed a block diagonal preconditioner for the coefficient matrix of the optimization problem constrained by the time-periodic parabolic problem. In [], Zheng, Zhang et al. proposed a new block triangular preconditioner , in which the Schur complement of the coefficient matrix is replaced approximately by ; after preconditioning, the spectrum of the system is clustered near , which effectively accelerates Krylov subspace iteration methods. In 2018 [], for the coefficient matrix derived from optimization problems constrained by time-periodic parabolic problems, a new preconditioner was obtained based on the idea of [] and on the fact that the coefficient matrix is complex; it performs better when preconditioning Krylov subspace methods.
In [], Axelsson added adjustable parameters to the approach of [] and showed that, with optimal parameters, the eigenvalues of the preconditioned system lie within the range even if the stiffness matrix is only positive semidefinite.
The purpose of this article is to solve Equation (9) efficiently. We propose a three-block splitting (TBS) of the coefficient matrix in (9), establish the corresponding TBS iteration method for solving (9), and prove theoretically that the TBS iterative method converges unconditionally. Meanwhile, the TBS iterative method naturally induces a preconditioner, and it can be proved that the spectrum of the preconditioned matrix is concentrated in a certain interval. In addition, in practical computations, the iteration counts and running times of the iteration method are greatly reduced, and the preconditioner greatly reduces the number of Krylov subspace iterations and improves computational efficiency.
The paper is organized as follows. We propose the TB splitting and the corresponding TBS iteration method in Section 2, and prove its convergence. In Section 3, we derive the TBS preconditioner from the TBS iteration method and analyze its spectral properties. Numerical experiments in Section 4 illustrate the efficiency and feasibility of the TBS iteration method and the TBS preconditioner. Finally, conclusions are given in Section 5.
2. TBS Iteration Method
First, following the HS splitting [], we split the coefficient matrix of (9) as
then split into
let ; then we obtain the three-block (TB) splitting of the coefficient matrix
Further, we have
where is an identity matrix, .
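The general mechanism behind such splitting iterations can be sketched numerically. The toy code below uses a hypothetical 2x2 matrix, not the actual blocks of (13); it illustrates that any splitting A = M - N of a nonsingular matrix induces the stationary iteration x_{k+1} = M^{-1}(N x_k + b), which converges to the solution of A x = b whenever the spectral radius of M^{-1}N is below 1.

```python
import numpy as np

# Hypothetical 2x2 SPD stand-in for the coefficient matrix in (9).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# A generic splitting A = M - N (here M is a shifted diagonal part).
alpha = 1.0
M = np.diag(np.diag(A)) + alpha * np.eye(2)
N = M - A

# Stationary iteration x_{k+1} = M^{-1}(N x_k + b); it converges for any
# initial guess if and only if rho(M^{-1} N) < 1.
T = np.linalg.solve(M, N)               # iteration matrix M^{-1} N
rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1.0

x = np.zeros(2)
for _ in range(200):
    x = np.linalg.solve(M, N @ x + b)

print(np.allclose(x, np.linalg.solve(A, b)))   # the fixed point solves A x = b
```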
According to the splitting in (13), we can get the following TBS iterative method.
TBS Iteration Method. Let be an arbitrary initial guess. For , compute the next as follows until the iteration converges:
with .
By using the Gauss elimination method, the steps of the TBS iteration method are described in Algorithm 1.
Algorithm 1 For the given vector , follow the procedure below to calculate .
As can be seen from Algorithm 1, the first three equations, whose coefficient matrices are real symmetric positive definite, can be solved with the conjugate gradient (CG) method or the Cholesky decomposition. The fourth step requires only a direct calculation with no additional solve. Moreover, the second step reuses obtained in the first step, and the fourth step reuses from the third step, which greatly improves the efficiency and shortens the computing time of the TBS iteration method. This is also demonstrated in the numerical experiments.
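As a concrete illustration of the two solution options named above, the sketch below solves a hypothetical symmetric positive definite system both by a Cholesky factorization and by CG, using SciPy; the random matrix is a stand-in, not one of the paper's actual subsystem matrices.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

# Hypothetical SPD matrix standing in for the subsystem matrices of Algorithm 1.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)     # SPD by construction
rhs = rng.standard_normal(50)

# Direct option: one Cholesky factorization, reusable for many right-hand sides.
c_and_low = cho_factor(A)
x_chol = cho_solve(c_and_low, rhs)

# Iterative option: conjugate gradients, the natural choice for large sparse systems.
x_cg, info = cg(A, rhs)
assert info == 0                    # info == 0 means CG converged

print(np.allclose(x_chol, x_cg, atol=1e-4))
```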
By direct substitution, we can rewrite (14) as follows
where
is the iteration matrix of TBS iterative method, and
Next, we prove the unconditional convergence of the TBS iteration method. Moreover, the upper bound on the spectral radius of depends only on the parameters , which is convenient for the further search for optimal parameters.
Theorem 1.
(Convergence Theorem). Let be defined as in (9), where is symmetric positive definite, is symmetric, and is a positive constant. Then holds for any , where
denotes the spectral radius of and denotes the eigenvalues of the matrix . That is, for any initial vector, the TBS iterative method defined above is unconditionally convergent. Note that the convergence factor as .
Proof.
From the matrix similarity, we get
and obviously . Since the matrices and are real symmetric, there exist orthogonal matrices , such that , , where and . Then
because
we have
Then
Note that , so is always true for any .
In the following, we determine the optimal parameters that minimize . □
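Since the TBS blocks themselves are only sketched above, the flavor of the unconditional-convergence statement can be illustrated with the closely related classical HSS iteration of Bai, Golub, and Ng, whose spectral radius is also below 1 for every positive parameter. All matrices in this toy check are hypothetical stand-ins.

```python
import numpy as np

# Small hypothetical positive definite model matrix (not the paper's system).
A = np.array([[2.0, 1.0], [-1.0, 3.0]])
H = 0.5 * (A + A.T)     # Hermitian (here: symmetric) part
S = 0.5 * (A - A.T)     # skew-Hermitian part
I = np.eye(2)

def hss_spectral_radius(alpha):
    # Iteration matrix of the classical HSS method (Bai, Golub, Ng, 2003):
    # T(alpha) = (alpha*I + S)^{-1} (alpha*I - H) (alpha*I + H)^{-1} (alpha*I - S)
    inner = (alpha * I - H) @ np.linalg.solve(alpha * I + H, alpha * I - S)
    T = np.linalg.solve(alpha * I + S, inner)
    return max(abs(np.linalg.eigvals(T)))

# Unconditional convergence: the radius stays below 1 for every alpha tested.
radii = [hss_spectral_radius(a) for a in np.linspace(0.1, 10.0, 50)]
assert all(r < 1.0 for r in radii)
print(round(min(radii), 3))
```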
Theorem 2.
(The optimal parameter theorem). Assume that the conditions of Theorem 1 hold. Then
where and are the maximum and minimum eigenvalues of the matrix , respectively.
Proof.
Let denote the derivative of . Then, according to the relation between and the monotonicity of , we have
Differentiating with respect to and , respectively, and comparing the monotonicity and the maximum of the function, we find that if makes , then it satisfies
Solving this, we obtain
Theorem 2 is proved. □
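As an illustrative analogue of Theorem 2 (the actual TBS formulas are not reproduced here), the sketch below minimizes the classical HSS-type radius bound, whose minimizer is the geometric mean of the extreme eigenvalues of the relevant SPD block; the eigenvalue bounds lam_min and lam_max are hypothetical values.

```python
import numpy as np

# Hypothetical extreme eigenvalues of the relevant SPD block (illustrative values).
lam_min, lam_max = 0.5, 8.0

def radius_bound(alpha):
    # HSS-type upper bound on the spectral radius as a function of the parameter:
    # max over the extreme eigenvalues of |alpha - lam| / (alpha + lam).
    return max(abs(alpha - lam_min) / (alpha + lam_min),
               abs(alpha - lam_max) / (alpha + lam_max))

# The analytic minimizer is the geometric mean of the extreme eigenvalues.
alpha_star = np.sqrt(lam_min * lam_max)

# Cross-check against a brute-force grid search.
alphas = np.linspace(0.01, 20.0, 2000)
best = min(alphas, key=radius_bound)
assert abs(best - alpha_star) < 0.05
print(alpha_star)    # 2.0
```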
3. TBS Preconditioner
As shown in [], there is a unique splitting
for Equation (9), where is nonsingular, such that the iteration matrix can be expressed as (i.e., ); then we have
Obviously, we can regard the splitting matrix given in (16) as the TBS preconditioner of in (9), that is, the preconditioner derived from the TBS iteration method.
When preconditioning the Krylov subspace iteration method with the TBS preconditioner , it usually involves solving
where denotes the current residual vector and the generalized residual vector. Using the structure of the matrix , we obtain the following algorithm for computing the vector .
In the implementation of Algorithm 2, the first three equations, whose coefficient matrices are real symmetric positive definite, can be solved with the conjugate gradient (CG) method or the Cholesky decomposition. The fourth step requires only a direct calculation with no additional solve. Moreover, the second step reuses obtained in the first step, and the fourth step reuses from the third step, which greatly improves the efficiency and shortens the computing time of the preconditioned Krylov subspace method. This is also demonstrated in the numerical experiments.
Algorithm 2 Given , you can use the following steps to solve .
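The way such a splitting-based preconditioner enters a Krylov solver can be sketched with SciPy. The diagonal matrix P below is a deliberately simple, hypothetical stand-in for the TBS preconditioner; applying its inverse via a LinearOperator plays the role that one sweep of Algorithm 2 plays in the paper's setting.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import gmres, LinearOperator

# Hypothetical ill-conditioned model system standing in for (9).
n = 100
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.001 * np.triu(np.ones((n, n)), 1)
b = np.ones(n)

# A simple splitting matrix P plays the role of the TBS preconditioner; its
# inverse is applied through a factorization, as Algorithm 2 would be.
P = np.diag(np.diag(A))
piv = lu_factor(P)
M = LinearOperator((n, n), matvec=lambda r: lu_solve(piv, r))

# Restarted GMRES(5), preconditioned by P^{-1} via the LinearOperator M.
x, info = gmres(A, b, M=M, restart=5, maxiter=1000)
assert info == 0
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b) < 1e-2)
```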
Next, Theorem 3 shows the eigenvalue distribution of .
Theorem 3.
Suppose that the conditions of Theorem 1 are satisfied. Then the eigenvalues of are distributed in , and the smaller the eigenvalues of are, the more the eigenvalues of cluster near 1.
Proof.
As shown in (15), . We also know that . Then we can obtain
According to Theorem 1, . Therefore □
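The clustering statement of Theorem 3 reflects a general fact about splitting-based preconditioners: if A = P - N with rho(P^{-1}N) < 1, then the eigenvalues of P^{-1}A = I - P^{-1}N lie within that radius of 1. A toy check with hypothetical matrices:

```python
import numpy as np

# Hypothetical SPD matrix and a convergent splitting A = P - N (stand-ins).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
P = np.diag(np.diag(A)) + np.eye(2)        # splitting / preconditioner matrix
T = np.linalg.solve(P, P - A)              # iteration matrix P^{-1} N
rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1.0

# Since P^{-1} A = I - T, every eigenvalue of the preconditioned matrix lies
# within distance rho of 1: the smaller rho is, the tighter the cluster.
eigs = np.linalg.eigvals(np.linalg.solve(P, A))
print(all(abs(lam - 1.0) <= rho + 1e-12 for lam in eigs))
```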
4. Numerical Experiments
Numerical experiments were used to illustrate the performance of the TBS iterative method and the TBS preconditioner for solving Equation (9). We used the iteration time in seconds and the number of iteration steps (denoted CPU and IT, respectively) to show the advantages of the TBS iteration method and preconditioner over other iteration methods and preconditioners.
The variables were discretized with the lowest-order Nédélec edge elements [,] on triangles in two dimensions and tetrahedra in three dimensions. We used the MATLAB toolbox of [] to assemble the coefficient matrices. All computations were carried out in MATLAB R2018a on a computer with an Intel(R) Core(TM) i5-8250U CPU (1.60 GHz) and 4 GB RAM.
Example 1
[]. We consider problem (1) and (2), where , , and . We divide the domain into two parts, and . The target in (3) is defined as
We first used Table 1 to show the relationship between the level of mesh refinement and the order of the matrices and in the 2D case.
Table 1.
Relationship between matrix order and level of mesh refinement (2D).
Table 2 and Table 3 show the results of the TBS method of this paper and the BAS iteration method of [] when the parameter was taken as the optimal parameter and took different values under different mesh refinement levels. In the experiments, the inner iteration used the conjugate gradient (CG) method, all initial vectors were , the outer iteration termination condition was , and the inner iteration termination condition was , where denotes the value at the -th iteration step.
Table 2.
Different mesh refinement levels and , the results of the iteration methods in the 2D case, with ε = 10⁻².
Table 3.
Different mesh refinement levels and , the results of the iteration methods in the 2D case, with ε = 10⁻⁴.
Table 2 and Table 3 show that, in terms of iteration steps, regardless of changes in the parameter or in the matrix order, the TBS iterative method performed better; as the order increased, the step counts of the TBS method remained stable with only a small increase. Its running time was also less than that of the previous method. Even when the matrix order exceeded 10,000, the iteration time of TBS remained shorter, which demonstrates the advantages of the TBS iteration method in practice.
Table 4 and Table 5 show the results of the restarted Generalized Minimal Residual method GMRES(5) preconditioned with of this paper, of [], and of [] when the parameters were optimal, for different grid refinement levels and . In the computation, the CG method was used as the inner iteration. The initial vectors of all methods were , and the termination condition was , where is the value obtained at the -th iteration. We use the labels OUT and IN for the numbers of outer and inner iteration steps, respectively; as before, CPU denotes the computing time.
Table 4.
Different mesh refinement levels and , the result of the preconditioned GMRES (5) in the 2D case, with .
Table 5.
Different mesh refinement levels and , the result of the preconditioned GMRES (5) in the 2D case, with .
Table 4 and Table 5 show that, in terms of step counts, regardless of changes in the parameters or in the matrix order, performed better than PBAS and . In terms of iteration time (CPU), required less preconditioning time than , and its CPU time was similar to that of . We therefore conclude that offers an improvement in both step counts and time, and performs well.
Example 2
[]. We consider problem (1) and (2), where , , and . We divide the domain into two parts, and . The target in (3) is defined as
First, we used Table 6 to show the relationship between the level of mesh refinement and the order of matrix and in the 2D case.
Table 6.
The relationship between matrix order and mesh refinement level (2D).
Table 7 and Table 8 show the results of the TBS and BAS iteration methods with the optimal parameters, under different mesh refinement levels and . The inner iteration method and the termination conditions were the same as in Example 1.
Table 7.
Different mesh refinement levels and , the results of the iteration method in the 2D case, with .
Table 8.
Different mesh refinement levels and , the results of the iteration method in the 2D case, with .
Table 7 and Table 8 show that the choice of parameters and the change of order had little influence on the iteration steps of TBS; as the order increased, the iteration steps did not grow significantly. In terms of CPU time, whether or not the order increased, the TBS iteration time was less than that of the BAS iteration method.
Table 9 and Table 10 show the results of GMRES(5) preconditioned with of this paper, of [], and of [] with optimal parameters, for different grid refinement levels and , . The notation, inner iteration method, and termination conditions are the same as in Table 4 and Table 5.
Table 9.
Different mesh refinement levels and , the result of the preconditioned GMRES (5) in the 2D case, with .
Table 10.
Different mesh refinement levels and , the result of the preconditioned GMRES (5) in the 2D case, with .
5. Conclusions
In this paper we solved the time-periodic eddy current optimization problems (1) and (2). Methods previously proposed for generalized saddle point problems are not fully applicable here, since the coefficient matrix of the equation is complex. Motivated by this, this paper proposes a new splitting iteration method (the TBS iteration method) and a new preconditioner (the TBS preconditioner) for solving Equation (9). The iteration method is unconditionally convergent, and the optimal parameter minimizing the upper bound on the spectral radius of the iteration matrix is obtained. Numerical experiments demonstrate that the TBS method is more feasible and efficient than the previous method, and that the iteration time and step counts of GMRES(5) with the TBS preconditioner are also significantly improved.
Author Contributions
Formal analysis, Y.-R.L.; funding acquisition, X.-H.S.; investigation, S.-Y.L.; project administration, X.-H.S.; software, Y.-R.L.; supervision, X.-H.S.; validation, S.-Y.L.; writing—original draft, Y.-R.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Natural Science Foundation of Liaoning Province, grant number 20170540323 and the Central University Basic Scientific Research Business Expenses Special Funds, grant number N2005013.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Lions, J.L. Optimal Control of Systems Governed by Partial Differential Equations; Springer: Berlin/Heidelberg, Germany, 1971; ISBN 3340051155.
- Hinze, M.; Pinnau, R.; Ulbrich, M.; Ulbrich, S. Optimization with PDE Constraints; Springer: Dordrecht, The Netherlands, 2009; ISBN 9781402088384.
- Ida, N. Numerical Modeling for Electromagnetic Non-Destructive Evaluation. Nondestruct. Test. Eval. 1996, 12, 283.
- Gunzburger, M.; Trenchea, C. Optimal control of the time-periodic MHD equations. Nonlinear Anal. Theory Methods Appl. 2005, 63.
- Kolmbauer, M.; Langer, U. A robust preconditioned MINRES solver for distributed time-periodic eddy current optimal control problems. SIAM J. Sci. Comput. 2012, 34.
- Axelsson, O.; Liang, Z.Z. A note on preconditioning methods for time-periodic eddy current optimal control problems. J. Comput. Appl. Math. 2019, 352, 262–277.
- Rees, T.; Dollar, H.S.; Wathen, A.J. Optimal solvers for PDE-constrained optimization. SIAM J. Sci. Comput. 2010, 32, 271–298.
- Bai, Z.-Z. Block preconditioners for elliptic PDE-constrained optimization problems. Computing 2011, 91, 379–395.
- Zhang, G.; Zheng, Z. Block-symmetric and block-lower-triangular preconditioners for PDE-constrained optimization problems. J. Comput. Math. 2013, 31, 370–381.
- Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626.
- Bai, Z.Z.; Benzi, M.; Chen, F.; Wang, Z.Q. Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 2013, 33, 343–369.
- Zheng, Z.; Zhang, G.F.; Zhu, M.Z. A block alternating splitting iteration method for a class of block two-by-two complex linear systems. J. Comput. Appl. Math. 2015, 288, 203–214.
- Krendl, W.; Simoncini, V.; Zulehner, W. Stability estimates and structural spectral properties of saddle point problems. Numer. Math. 2013, 124, 183–213.
- Axelsson, O.; Neytcheva, M.; Ahmad, B. A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algorithms 2014, 66, 811–841.
- Axelsson, O.; Kucherov, A. Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7, 197–218.
- Axelsson, O.; Farouq, S.; Neytcheva, M. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems: Poisson and convection-diffusion control. Numer. Algorithms 2016, 73, 631–663.
- Zheng, Z.; Zhang, G.F.; Zhu, M.Z. A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems. Appl. Math. Lett. 2016, 61, 114–121.
- Liang, Z.Z.; Axelsson, O.; Neytcheva, M. A robust structured preconditioner for time-harmonic parabolic optimal control problems. Numer. Algorithms 2018, 79, 575–596.
- Nédélec, J.C. A new family of mixed finite elements in R3. Numer. Math. 1986, 50, 57–81.
- Nédélec, J.C. Mixed finite elements in R3. Numer. Math. 1980, 35, 315–341.
- Anjam, I.; Valdman, J. Fast MATLAB assembly of FEM matrices in 2D and 3D: Edge elements. Appl. Math. Comput. 2015, 267, 252–263.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).