#### 6.1. Automated Multi-Level Sub-Structuring for Hermitian Problems

Over the last twenty years, a new method for huge linear eigenvalue problems

$Kx=\lambda Mx,\qquad (26)$

where $K\in {\mathbb{R}}^{n\times n}$ and $M\in {\mathbb{R}}^{n\times n}$ are Hermitian and positive definite, known as automated multi-level sub-structuring (AMLS), has been developed by Bennighof and co-authors and has been applied to the frequency response analysis of complex structures [34,35,36,44,45,46,47]. Here, the large finite element model is recursively divided into many sub-structures on several levels, based on the sparsity structure of the system matrices. Assuming that the interior degrees of freedom of the sub-structures depend quasistatically on the interface degrees of freedom, and modeling the deviation from this quasistatic dependence in terms of a small number of selected sub-structure eigenmodes, the size of the finite element model is reduced substantially while still yielding satisfactory accuracy over a wide frequency range of interest.

Recent studies in the vibro-acoustic analysis of passenger car bodies (e.g., [34,45]), where very large FE models with more than six million degrees of freedom appear and several hundred eigenfrequencies and eigenmodes are needed, have shown that for this type of problem, AMLS is considerably faster than Lanczos-type approaches.

We briefly sketch the component mode synthesis (CMS) method for the general linear eigenvalue problem $Kx=\lambda Mx$, which is the essential building block of the AMLS method. CMS assumes that the graph of the matrix $\left|K\right|+\left|M\right|$ is partitioned into sub-structures. This can be done efficiently by graph partitioners, like METIS [48] or CHACO [49], based on the sparsity pattern of the matrices.
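As a small illustration (not taken from the paper; the matrices are hypothetical, and the call to an external partitioner such as METIS is omitted), the graph of $\left|K\right|+\left|M\right|$ can be assembled from the sparsity patterns alone:

```python
import numpy as np
import scipy.sparse as sp

def matrix_graph(K, M):
    """Adjacency structure of the graph of |K| + |M|.

    Vertices i and j are connected iff K[i, j] != 0 or M[i, j] != 0
    for i != j; this CSR pattern is what a graph partitioner such as
    METIS or CHACO takes as input."""
    S = (abs(K) + abs(M)).tocsr()
    S.setdiag(0)            # a vertex is not adjacent to itself
    S.eliminate_zeros()     # drop the explicit diagonal zeros
    return S

# toy example: tridiagonal stiffness matrix, diagonal mass matrix
n = 6
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
M = sp.identity(n, format="csr")
G = matrix_graph(K, M)
# the resulting graph is the chain 0-1-2-3-4-5
```

The arrays `G.indptr` and `G.indices` are then exactly the CSR-style adjacency lists a METIS-like partitioner expects.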

We distinguish only between local (i.e., interior) and interface degrees of freedom. Then, K and M (after reordering) have the following form:

$K=\left(\begin{array}{cc}{K}_{\ell \ell} & {K}_{\ell i}\\ {K}_{i\ell} & {K}_{ii}\end{array}\right),\qquad M=\left(\begin{array}{cc}{M}_{\ell \ell} & {M}_{\ell i}\\ {M}_{i\ell} & {M}_{ii}\end{array}\right),\qquad (27)$

where ${K}_{\ell \ell}$ and ${M}_{\ell \ell}$ are block diagonal.

Annihilating ${K}_{\ell i}$ by block Gaussian elimination and transforming the local coordinates to modal degrees of freedom of the sub-structures, i.e., applying a congruence transformation with a suitable matrix P, one obtains the equivalent pencil:

$\left({P}^{T}KP,\;{P}^{T}MP\right)=\left(\left(\begin{array}{cc}\Omega & 0\\ 0 & {\tilde{K}}_{ii}\end{array}\right),\;\left(\begin{array}{cc}I & {\tilde{M}}_{\ell i}\\ {\tilde{M}}_{i\ell} & {\tilde{M}}_{ii}\end{array}\right)\right),\qquad (28)$

where the tilded blocks denote the corresponding blocks after the transformation. Here, Ω is a diagonal matrix containing the sub-structure eigenvalues, i.e., ${K}_{\ell \ell}\Phi ={M}_{\ell \ell}\Phi \Omega$, ${\Phi}^{T}{M}_{\ell \ell}\Phi =I$, and Φ contains in its columns the corresponding eigenvectors. In structural dynamics, Equation (28) is called the Craig–Bampton form of the eigenvalue Problem (26) corresponding to the partitioning in Equation (27).
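The transformation can be sketched numerically as follows (a minimal dense sketch with hypothetical random s.p.d. blocks; in the method itself the blocks are large and sparse). The congruence leaves the spectrum of the pencil unchanged, the local block of the transformed stiffness matrix becomes Ω, the local block of the transformed mass matrix becomes the identity, and the local–interface coupling in the stiffness matrix vanishes:

```python
import numpy as np
from scipy.linalg import eigh, solve

rng = np.random.default_rng(0)

# hypothetical partitioned pencil: 4 local (l) and 2 interface (i) dofs
nl, ni = 4, 2
A = rng.standard_normal((nl + ni, nl + ni))
K = A @ A.T + (nl + ni) * np.eye(nl + ni)   # s.p.d. stiffness matrix
B = rng.standard_normal((nl + ni, nl + ni))
M = B @ B.T + (nl + ni) * np.eye(nl + ni)   # s.p.d. mass matrix

Kll, Kli = K[:nl, :nl], K[:nl, nl:]

# local eigenmodes: Kll @ Phi = Mll @ Phi @ diag(omega), Phi^T Mll Phi = I
omega, Phi = eigh(Kll, M[:nl, :nl])

# congruence: block Gaussian elimination followed by the modal transform
P = np.eye(nl + ni)
P[:nl, nl:] = -solve(Kll, Kli)              # annihilates the K_li block
T = np.zeros_like(P)
T[:nl, :nl] = Phi
T[nl:, nl:] = np.eye(ni)
P = P @ T

Kt = P.T @ K @ P                            # Craig-Bampton form of K
Mt = P.T @ M @ P                            # Craig-Bampton form of M

# the congruence transformation preserves the spectrum of the pencil
lam = eigh(K, M, eigvals_only=True)
lam_t = eigh(Kt, Mt, eigvals_only=True)
```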

Selecting some eigenmodes of the eigenvalue problem ${P}^{T}KPy=\lambda {P}^{T}MPy$, usually the ones associated with eigenvalues below a cut-off threshold γ, and dropping the rows and columns in Equation (28) corresponding to the other modes, one arrives at the component mode synthesis method (CMS) introduced by Hurty [50] and Craig and Bampton [51]. The corresponding matrices still have the structure given in Equation (28) with curtailed matrices.
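Continuing the dense sketch above (again with hypothetical random matrices), modal truncation keeps only the local modes below the cut-off; since CMS projects the definite pencil onto a subspace, each retained Ritz value is an upper bound for the corresponding exact eigenvalue:

```python
import numpy as np
from scipy.linalg import eigh, solve

rng = np.random.default_rng(1)

# hypothetical pencil with 8 local and 3 interface dofs
nl, ni = 8, 3
n = nl + ni
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)

# Craig-Bampton congruence (block elimination + local modal transform)
omega, Phi = eigh(K[:nl, :nl], M[:nl, :nl])
P = np.eye(n); P[:nl, nl:] = -solve(K[:nl, :nl], K[:nl, nl:])
T = np.eye(n); T[:nl, :nl] = Phi
P = P @ T
Kt, Mt = P.T @ K @ P, P.T @ M @ P

# CMS: keep only local modes with eigenvalue below the cut-off gamma,
# retain all interface dofs
gamma = np.median(omega)                    # illustrative threshold
keep = np.concatenate([np.flatnonzero(omega < gamma), np.arange(nl, n)])
Kr, Mr = Kt[np.ix_(keep, keep)], Mt[np.ix_(keep, keep)]

lam = eigh(K, M, eigvals_only=True)         # exact eigenvalues
lam_cms = eigh(Kr, Mr, eigvals_only=True)   # CMS approximations
# Rayleigh-Ritz: lam_cms[j] >= lam[j] for every retained index j
```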

For medium-sized eigenvalue problems, this approach is very efficient. Since ${K}_{\ell \ell}$ and ${M}_{\ell \ell}$ are block diagonal, it is quite inexpensive to eliminate ${K}_{\ell i}$ and to solve the interior eigenproblems ${K}_{\ell \ell}\Phi ={M}_{\ell \ell}\Phi \Omega$. However, with increasing size of Problem (26), CMS suffers from some drawbacks: coarse partitioning leads to huge sub-structures, such that the decoupling and modal reduction become costly, whereas fine partitioning yields a large projected eigenvalue problem ${P}^{T}KPy=\lambda {P}^{T}MPy$, which is dense, and therefore, its numerical solution is time-consuming.

A remedy for this dilemma is the AMLS method, which generalizes CMS in the following way. Again, the graph of $\left|K\right|+\left|M\right|$ is partitioned into a small number of subgraphs, but more generally than in CMS, these subgraphs are in turn sub-structured on a number p of levels. This induces the following partitioning of the index set $I=\{1,\dots ,n\}$ of degrees of freedom: ${I}_{1}$ is the set of indices corresponding to interface degrees of freedom on the coarsest level; for $j=2,\dots ,p$, ${I}_{j}$ is the set of indices of interface degrees of freedom on the j-th level that are not contained in ${I}_{j-1}$; and ${I}_{p+1}$ is the set of interior degrees of freedom on the finest level.
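As a toy illustration of these index sets (a hand-rolled nested dissection of a chain graph, not the paper's partitioner), with $p=2$ levels, the chain of 15 degrees of freedom is split by the separator ${I}_{1}=\{7\}$ on the coarsest level, then by ${I}_{2}=\{3,11\}$, with ${I}_{3}$ containing the remaining interior degrees of freedom:

```python
def nested_dissection_levels(n, p):
    """Toy nested dissection of the path graph 0-1-...-(n-1).

    Returns index sets I[0..p]: I[j] are the separator (interface)
    dofs introduced on level j+1 (i.e., I[0] corresponds to I_1 in
    the text), and I[p] are the interior dofs of the finest
    sub-structures (I_{p+1} in the text)."""
    levels = [[] for _ in range(p + 1)]

    def dissect(lo, hi, level):
        if level == p or hi - lo <= 2:
            levels[p].extend(range(lo, hi))   # interior dofs
            return
        mid = (lo + hi) // 2
        levels[level].append(mid)             # separator vertex
        dissect(lo, mid, level + 1)
        dissect(mid + 1, hi, level + 1)

    dissect(0, n, 0)
    return [sorted(s) for s in levels]

I = nested_dissection_levels(15, 2)
# I[0] = [7] separates the chain into two halves, I[1] = [3, 11]
# separates the halves again, I[2] holds the interior dofs
```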

With these notations, the first step of AMLS is CMS with cut-off frequency γ applied to the finest sub-structuring. After j steps, $1\le j\le p-1$, one derives a reduced pencil:

$\left(\left(\begin{array}{ccc}{\Omega}^{(j)} & 0 & 0\\ 0 & {K}_{\ell \ell}^{(j)} & {K}_{\ell i}^{(j)}\\ 0 & {K}_{i\ell}^{(j)} & {K}_{ii}^{(j)}\end{array}\right),\;\left(\begin{array}{ccc}{M}_{pp}^{(j)} & {M}_{p\ell}^{(j)} & {M}_{pi}^{(j)}\\ {M}_{\ell p}^{(j)} & {M}_{\ell \ell}^{(j)} & {M}_{\ell i}^{(j)}\\ {M}_{ip}^{(j)} & {M}_{i\ell}^{(j)} & {M}_{ii}^{(j)}\end{array}\right)\right),$

where p denotes the degrees of freedom obtained in the spectral reduction in the previous steps, ℓ collects the indices in ${I}_{p+1-j}$, and i corresponds to the index set ${\cup}_{k=1}^{p-j}{I}_{k}$ of interface degrees of freedom on levels that are not yet treated. Applying the CMS method to the southeast $2\times 2$ blocks of the matrices, i.e., annihilating the off-diagonal block ${K}_{\ell i}^{(j)}$ by block Gaussian elimination and reducing the set of ℓ-indices by spectral truncation with cut-off frequency γ, one arrives at the next level. After p CMS steps and a final spectral truncation of the lower-right blocks, one obtains the reduction of Equation (26) by AMLS.

Hence, on each level of the hierarchical sub-structuring, AMLS consists of two steps. First, for every sub-structure of the current level, a congruence transformation is applied to the matrix pencil to decouple, in the stiffness matrix, the sub-structure from the degrees of freedom of higher levels. Second, the dimension of the problem is reduced by modal truncation of the corresponding diagonal blocks, discarding eigenmodes whose eigenfrequencies exceed a predetermined cut-off frequency. Thus, AMLS is nothing but a projection method in which the large problem under consideration is projected onto a search space spanned by a smaller number of eigenmodes of clamped sub-structures on several levels.

To ensure computational efficiency, AMLS must be implemented differently from the description above. First, it is important to handle sub-structures on the same partitioning level separately to profit from the decoupling. Furthermore, sub-structures must be handled in an appropriate order: once all sub-structures that are connected to the same interface on the superior level have been condensed, that interface should be reduced as well, to avoid storing large dense matrices.

If all sub-structures have been handled, the reduction process terminates with a diagonal matrix ${V}^{T}KV$ with the eigenvalues of the sub-structures on its diagonal, while the projected mass matrix ${V}^{T}MV$ is block-wise dense or zero with a generalized arrowhead structure, as shown in Figure 6.

#### 6.2. AMLS Reduction for Fluid–Solid Interaction Problems

To apply AMLS (which requires the system matrices to be symmetric) to the fluid–solid interaction Problem (15), we consider the symmetric eigenproblem of Equation (30), whose eigenpairs correspond to those of Equation (15) in the following way: if $({\lambda}^{2},{({x}_{s}^{T},{x}_{f}^{T})}^{T})$ solves Equation (15), then it yields corresponding solutions of Equation (30), unless $\lambda =0$.

If $\lambda =0$ is an eigenvalue of Problem (15), then the unphysical constant eigenmode leads to a singular mass matrix in the extended Equation (30). Problems arising from the singularity of the mass matrix can be overcome by choosing an appropriate sub-structuring.

We have rewritten the non-symmetric eigenvalue problem as a symmetric one of doubled dimension, whose desired eigenvalues are located at neither end of the spectrum. This seems to entail several disadvantages with respect to computational cost and approximation properties. However, the standard AMLS algorithm can be modified, without much additional computational effort, so that the eigenvalue errors can still be bounded.

The graph partitioning is again based on the union of the sparsity structures of the matrices K and M in Equation (15). This gives an $(s+f)$-dimensional partitioning, which can be expanded to a $2(s+f)$-dimensional partitioning such that, for $i=1,\dots ,s+f$, the i-th and the $(i+s+f)$-th degree of freedom belong to the same sub-structure or interface.
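This expansion of the partition is a mere index duplication and can be sketched as follows (the membership vector is hypothetical):

```python
import numpy as np

def expand_partition(membership):
    """Expand a partition of the s+f original dofs to the doubled
    2(s+f)-dimensional problem: dof i and dof i + s + f always
    belong to the same sub-structure or interface."""
    membership = np.asarray(membership)
    return np.concatenate([membership, membership])

# hypothetical membership of 5 dofs: sub-structures 0/1, interface 2
part = expand_partition([0, 0, 2, 1, 1])
# part[i] == part[i + 5] for every i
```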

The modified AMLS algorithm consists of two steps on each sub-structure i, which are basically the same as in the standard AMLS algorithm. The first step is to transform the current approximating pencil into an equivalent one by symmetric block Gaussian elimination, eliminating all off-diagonal blocks ${K}_{ij}$, $j\ne i$, corresponding to the current sub-structure. Due to the special block structure of ${K}_{ii}$, the computational effort is approximately the same as for real matrices of half the size of ${K}_{ii}$. The off-diagonal submatrices ${K}_{jk}$ and ${M}_{jk}$, $j,k<i$, which couple the current sub-structure to higher levels, preserve the block structure of Equation (30) and are blockwise dense or zero.

The second step requires solving the sub-structure eigenvalue problem for the pencil $({K}_{ii},{M}_{ii})$. This problem is known to have a symmetric spectrum, because it has (after reordering) the same block structure as Equation (30). As most of the sub-structures involve either only fluid or only solid degrees of freedom, the coupling matrix vanishes locally, and we can halve the size of the eigenproblem in these cases. Since we are interested in eigenpairs at the lower end of the spectrum of the original eigenvalue Problem (15), i.e., in eigenpairs of the symmetric eigenvalue Equation (30) corresponding to eigenvalues that are small in modulus, the current pencil is projected onto the space spanned by all modes whose eigenfrequency is smaller in modulus than a prescribed cut-off frequency $\gamma >0$. The reduction process then terminates with a pencil of symmetric matrices, which has a symmetric spectrum.

Unlike the representation above, AMLS should be implemented structure-wise instead of level-wise to benefit from decoupled sub-structures. A precise description is given in [25].

In [25], we proved the following error bounds. We first consider the CMS method for the symmetrized eigenproblem in Equation (30).

**Theorem 5.** Denote by $0={\lambda}_{+1}<{\lambda}_{+2}\le {\lambda}_{+3}\le \cdots <\gamma $ one zero eigenvalue and the leading nonnegative eigenvalues of the original Problem (15) and by $0={\tilde{\lambda}}_{+1}<{\tilde{\lambda}}_{+2}\le {\tilde{\lambda}}_{+3}\le \cdots <\gamma $ the corresponding eigenvalues of the truncated linear eigenproblem. Then, for every $j\ge 2$ such that ${\lambda}_{+j},{\tilde{\lambda}}_{+j}\in J:=(-\gamma ,\gamma )$, the two-sided bound on the relative error in Equation (32) holds.

The upper bound in Equation (32) on the relative error has the same structure as the error bound given in [52] for CMS applied to a definite eigenvalue problem $Kx=\lambda Mx$. In the definite case, the lower bound is zero, due to the fact that CMS is a projection method and the eigenvalues under consideration lie at the lower end of the spectrum.

The bounds in Equation (32) can be shown to be sharp by an example [53]; for practical problems, however, the relative errors are overestimated by two to four orders of magnitude (cf. Figure 7 for Example 1).

AMLS on p partitioning levels is mathematically equivalent to p CMS steps, such that in the CMS step on level $k=p,\dots ,1$, eigenmodes on level k are truncated and eigenmodes on all other levels are retained. We denote by ${\lambda}_{+j}^{(k)}$ the approximation of the j-th nonnegative eigenvalue when the lowest k partitioning levels have been handled, i.e., ${\lambda}_{+j}^{(0)}$ denotes the exact eigenvalue and ${\lambda}_{+j}^{(p)}$ the approximation when the reduction process has terminated. Then, we apply the CMS bound in Theorem 5 recursively and obtain the following error bound for AMLS.

**Theorem 6.** Consider the AMLS algorithm for fluid–solid interaction eigenproblems on p levels. Denote by ${\lambda}_{+j}^{(k)}$ the j-th nonnegative eigenvalue after the k lowest partitioning levels have been handled ($k=0,\dots ,p$), and assume that the cut-off frequency satisfies $\gamma >p{\lambda}_{+j}^{(0)}\ge 0$. Then, the eigenvalues can be bounded by: