Fast computation of optimal damping parameters for linear vibrational systems

We formulate the quadratic eigenvalue problem underlying the mathematical model of a linear vibrational system as an eigenvalue problem of a diagonal-plus-low-rank matrix $A$. The eigenvector matrix of $A$ has a Cauchy-like structure. Optimal viscosities are those for which $\operatorname{trace}(X)$ is minimal, where $X$ is the solution of the Lyapunov equation $AX+XA^{*}=-GG^{*}$. Here $G$ is a low-rank matrix which depends on the eigenfrequencies that need to be damped. After the initial eigenvalue decomposition of the linearized problem, which requires $O(n^3)$ operations, our algorithm computes optimal viscosities for each choice of external dampers in $O(n^2)$ operations, provided that the number of dampers is small. Hence, the subsequent optimization is an order of magnitude faster than the standard approach, which solves a Lyapunov equation in each step and thus requires $O(n^3)$ operations per step. Our algorithm is based on an $O(n^2)$ eigensolver for complex symmetric diagonal-plus-rank-one matrices and fast $O(n^2)$ multiplication of linked Cauchy-like matrices.


Introduction
We consider determination of optimal damping for the linear vibrational system described by
$$M\ddot q(t) + D\dot q(t) + Kq(t) = 0, \qquad (1)$$
where $M$ and $K$ (called mass and stiffness, respectively) are real, symmetric positive definite matrices of order $n$. The damping matrix is defined as
$$D = D_{int} + D_{ext}, \qquad (2)$$
where the matrices $D_{int}$ and $D_{ext}$ correspond to internal and external damping, respectively. Internal damping can be modeled as a small multiple of the critical damping,
$$D_{int} = \alpha\, M^{1/2}\left(M^{-1/2} K M^{-1/2}\right)^{1/2} M^{1/2}, \qquad (3)$$
or as $D_{int} = \alpha M + \beta K$, $\alpha, \beta \in (0.005, 0.1)$.
From the optimization point of view we are more interested in the external viscous damping, which can be modeled as
$$D_{ext} = \sum_{i=1}^{k} \rho_i D_i, \qquad (5)$$
where $\rho_i$ is the viscosity and $D_i$ describes the geometry of the corresponding damping position for $i = 1, \ldots, k$. Typically, the system has very few dampers compared to the full dimension, that is, $k \ll n$. More details on the structure of internal and external damping can be found in e.g. [6,7,19,32,33]. The model of the linear vibrational system (1) corresponds to the quadratic eigenvalue problem
$$(\lambda^2 M + \lambda D + K)\, x = 0, \qquad x \neq 0. \qquad (6)$$
The damping optimization problem, in general, can be stated as follows: determine the "best" damping matrix $D$ which ensures optimal evanescence of each component of $x$. There exist several optimization criteria for this problem. One is the so-called spectral abscissa criterion, which requires that the maximal real part of the eigenvalues of the quadratic eigenvalue problem (6) is minimized (see e.g. [11,24]).
In this paper we use a criterion based on the total average energy of the considered system. This criterion minimizes the total energy of the system (the sum of kinetic and potential energy) averaged over all initial states of unit total energy and a given frequency range. Benefits of this criterion are discussed in [10,33]. Moreover, it can be shown (see e.g. [10,23,33]) that this minimization criterion is equivalent to
$$\operatorname{trace}(X) \to \min,$$
where $X$ is the solution of the Lyapunov equation
$$AX + XA^{*} = -GG^{*}, \qquad (7)$$
where the matrix $A$ comes from the linearization of the system (1) and the matrix $G$ determines which of the undamped eigenfrequencies need to be damped.
In practice, one can usually influence only the external damping. The problem is therefore to determine the damping matrix $D$ which minimizes the total average energy of the system, that is, to determine the optimal matrix $D_{ext}$ for which $\operatorname{trace}(X)$ is minimal, with $A$ as in (10).
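For a concrete feel for the criterion, the following sketch evaluates $\operatorname{trace}(X)$ for a small illustrative mass-spring system. The system, the damper position, and all parameter values are invented for illustration only; the modal linearization used to form $A$ and $G$ is the one detailed in Section 2.1.

```python
import numpy as np
from scipy.linalg import eigh, solve_continuous_lyapunov

# Hypothetical small test system (n = 4 masses): M = I, tridiagonal K.
n = 4
M = np.eye(n)
K = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

# Modal coordinates: K Phi = M Phi Omega^2, Phi^T M Phi = I.
w2, Phi = eigh(K, M)
Omega = np.diag(np.sqrt(w2))

# Internal damping 0.02*K plus one external damper at mass 1 (illustrative).
rho = 0.5
e1 = np.zeros(n); e1[0] = 1.0
D = 0.02 * K + rho * np.outer(e1, e1)

# Linearized matrix A and matrix G damping the s lowest eigenfrequencies.
A = np.block([[np.zeros((n, n)), Omega],
              [-Omega, -Phi.T @ D @ Phi]])
s = 2
G = np.zeros((2 * n, 2 * s))
G[:s, :s] = np.eye(s)          # columns e_1, ..., e_s
G[n:n + s, s:] = np.eye(s)     # columns e_{n+1}, ..., e_{n+s}

# Solve A X + X A^* = -G G^* and evaluate the criterion trace(X).
X = solve_continuous_lyapunov(A, -G @ G.T)
print(np.trace(X).real)
```

Minimizing this quantity over the viscosity $\rho$ (and over damper positions) is the optimization problem considered in the rest of the paper.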
The paper is organized as follows: the symmetric linearization and an overview of the existing $O(n^3)$ solution method, which minimizes the trace of the solution of the respective Lyapunov equation, are presented in Section 2. A new $O(n^2)$ algorithm, which reduces the problem to a sequence of eigenproblems of complex symmetric diagonal-plus-rank-one (CSymDPR1) matrices and uses fast multiplication of linked Cauchy-like matrices, is presented in Section 3. Numerical examples and timings are given in Section 4.

Standard solution
In this section, we describe the standard solution method for finding the optimal damping of the linear vibrational system (1). In Section 2.1, we describe a linearization of the quadratic eigenvalue problem (6), which is, by a change of basis, further reduced to an eigenvalue problem of a simpler matrix. In Section 2.2, we present an existing direct approach (see e.g. [10,33]), which is an $O(n^3)$ solution of the problem.
For structured systems this problem was considered in [6,7], where the authors proposed dimension reduction to accelerate the optimization process. However, to be efficient, dimension reduction requires a specific structure, so this approach cannot be applied efficiently in a general setting.

Symmetric linearization
By symmetric linearization we transform the quadratic eigenvalue problem (6) into a generalized eigenvalue problem (GEVP) (see e.g. [30]). Let
$$K\Phi = M\Phi\,\Omega^2, \qquad \Phi^T M\Phi = I, \qquad \Omega = \operatorname{diag}(\omega_1,\ldots,\omega_n), \quad \omega_i > 0,$$
be the generalized eigenvalue decomposition of the pair $(K, M)$. Since the calculation of the matrices $\Phi$ and $\Omega$ does not depend on the damping matrix $D$, they can be calculated prior to the optimization procedure. In the basis determined by $\Phi$ we have the hyperbolic generalized eigenvalue problem
$$\left(\lambda^2 I + \lambda\left(\Gamma + \Phi^T D_{ext}\Phi\right) + \Omega^2\right)u = 0, \qquad \Gamma = \Phi^T D_{int}\Phi. \qquad (9)$$
For internal damping of the form (3), $\Gamma = \alpha\Omega$ is diagonal. Now we can write the linearized system in the so-called modal coordinates: by a simple transformation, (9) is equivalent to the eigenvalue problem for the matrix
$$A = \begin{bmatrix} 0 & \Omega \\ -\Omega & -\Phi^T D\Phi \end{bmatrix}. \qquad (10)$$
This is also the matrix $A$ from the Lyapunov equation (7). The matrix $G$ from (7) is equal to
$$G = \begin{bmatrix} e_1 & \cdots & e_s & e_{n+1} & \cdots & e_{n+s}\end{bmatrix}, \qquad (11)$$
where $e_i$ is the $i$-th canonical vector of $\mathbb{R}^{2n}$ and the parameter $s$ determines the number of eigenfrequencies of the undamped system which have to be damped (for more details see e.g. [6,31,33]).
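The consistency of the linearization can be checked numerically: every eigenvalue $\lambda$ of $A$ must make $\lambda^2 M + \lambda D + K$ singular. A small sketch with invented matrices:

```python
import numpy as np
from scipy.linalg import eigh, eig

# Hypothetical 3-mass system used only to check the linearization.
n = 3
M = np.diag([1.0, 2.0, 3.0])
K = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
D = 0.05 * K

# Generalized eigendecomposition of (K, M): K Phi = M Phi Omega^2, Phi^T M Phi = I.
w2, Phi = eigh(K, M)
Omega = np.diag(np.sqrt(w2))

# Linearized matrix in modal coordinates (the matrix A of (10)).
A = np.block([[np.zeros((n, n)), Omega],
              [-Omega, -Phi.T @ D @ Phi]])

# Each eigenvalue lam of A makes lam^2 M + lam D + K (numerically) singular.
for lam in eig(A, right=False):
    Q = lam**2 * M + lam * D + K
    assert np.linalg.svd(Q, compute_uv=False)[-1] < 1e-8
print("linearization consistent with the QEP")
```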

An existing $O(n^3)$ solution
This is a demanding problem, both from the computational point of view and from the point of view of optimizing damping positions. The main reason lies in the fact that the criterion of total average energy has many local minima, so the viscosity parameters usually need to be optimized for many different damping positions.
Standard direct approaches calculate the solution of the Lyapunov equation (7) by using the Schur form, for example, the Hammarling algorithm [14,20] and the Bartels-Stewart algorithm [2]. The computation of the Schur form requires $O(n^3)$ operations, so these algorithms are $O(n^3)$ solutions. The algorithms are implemented in the SLICOT library [29] and are used in Matlab. Timings for some examples are given in Section 4.

The new algorithm: an $O(n^2)$ solution
In our approach, instead of using the Schur form, we use a diagonalization of the matrix $A$ from (10). The eigenvalue problem for the matrix $A$ is reduced to $k$ eigenvalue problems for complex symmetric diagonal-plus-rank-one (CSymDPR1) matrices, $k$ being the number of dampers. Each of those eigenproblems can be solved efficiently in $O(n^2)$ operations. Importantly, the updating of the eigenvectors can also be performed in $O(n^2)$ operations, due to the Cauchy-like structure of the eigenvector matrices. In this way, after the preparatory steps from Sections 2.1 and 3.3 below, which require $O(n^3)$ operations, each computation of $\operatorname{trace}(X)$, where $X$ is from (7), requires only $O(n^2)$ operations. This makes the trace optimization considerably faster.
The section is organized as follows. In Section 3.1, we present existing results about Cauchy-like matrices and their fast multiplication. In Section 3.2, we develop an efficient $O(n^2)$ method for the solution of the CSymDPR1 eigenvalue problem. In Section 3.3, we describe the reduction to CSymDPR1 eigenvalue problems. In Section 3.4, we develop a fast $O(n^2)$ algorithm for the final trace computation, based on the fast multiplication of linked Cauchy-like matrices.

Cauchy-like matrices
A Cauchy-like matrix $C = C(X, Y, P, Q)$ is a matrix which satisfies the Sylvester-type displacement equation (see e.g. [25])
$$XC - CY = PQ^T, \qquad (12)$$
where $X = \operatorname{diag}(x_l)$, $Y = \operatorname{diag}(y_l)$, $l = 1, \ldots, n$, and $P, Q \in \mathbb{R}^{n\times k}$. The matrices $X$, $Y$, $P$ and $Q$ are called the generators of $C$. Elementwise, $C_{ij} = (PQ^T)_{ij}/(x_i - y_j)$; for example, the standard Cauchy matrix $C_{ij} = 1/(x_i - y_j)$ is obtained for $P = Q = \begin{bmatrix} 1 & \cdots & 1\end{bmatrix}^T$. Clearly, given the generators, all elements of a Cauchy-like matrix can be computed in $O(kn^2)$ operations. For multiplication by Cauchy-like matrices, we have the following results. Given an $n\times n$ Cauchy-like matrix $A$ from (12) and an $n$-dimensional vector $v$, the product $Av$ can be computed in $O(nk\log^2 n)$ operations [25, Lemma 2.2].
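A short sketch illustrating the definition: it builds a Cauchy-like matrix entrywise from randomly chosen generators and verifies the displacement equation (the sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

# Generators: X = diag(x), Y = diag(y) with disjoint spectra, P, Q of rank k.
x = rng.standard_normal(n) + 3.0          # keep x_i - y_j away from zero
y = rng.standard_normal(n) - 3.0
P = rng.standard_normal((n, k))
Q = rng.standard_normal((n, k))

# Entrywise formula: C_ij = (P Q^T)_ij / (x_i - y_j), O(k n^2) operations.
C = (P @ Q.T) / (x[:, None] - y[None, :])

# C satisfies the displacement equation diag(x) C - C diag(y) = P Q^T.
assert np.allclose(np.diag(x) @ C - C @ np.diag(y), P @ Q.T)

# The standard Cauchy matrix is the special case P = Q = vector of ones.
C0 = 1.0 / (x[:, None] - y[None, :])
assert np.allclose(np.diag(x) @ C0 - C0 @ np.diag(y), np.ones((n, n)))
print("displacement equations verified")
```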
Given two linked Cauchy-like matrices $A$ and $B$ such that
$$XA - AY = EF^T, \qquad YB - BZ = NH^T, \qquad (13)$$
where $E, F \in \mathbb{R}^{n\times k_1}$ and $N, H \in \mathbb{R}^{n\times k_2}$, the product $C = A\cdot B$ is a Cauchy-like matrix which satisfies [25]
$$XC - CZ = \begin{bmatrix} E & AN\end{bmatrix}\begin{bmatrix} B^T F & H\end{bmatrix}^T. \qquad (14)$$
The generators of $C$ can be computed in $O(nk_1k_2\log^2 n)$ operations.
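The generator formula (14) can be verified directly; here dense products are used only for checking, whereas the point of [25] is that these generators can be formed in $O(nk_1k_2\log^2 n)$ operations without forming $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k1, k2 = 5, 1, 2

# Three mutually disjoint node sets so all denominators are nonzero.
x = rng.standard_normal(n) + 6.0
y = rng.standard_normal(n)
z = rng.standard_normal(n) - 6.0
E, F = rng.standard_normal((n, k1)), rng.standard_normal((n, k1))
N, H = rng.standard_normal((n, k2)), rng.standard_normal((n, k2))

A = (E @ F.T) / (x[:, None] - y[None, :])   # diag(x) A - A diag(y) = E F^T
B = (N @ H.T) / (y[:, None] - z[None, :])   # diag(y) B - B diag(z) = N H^T

# Product of linked Cauchy-like matrices: generators of C = A B are
# [E, A N] and [B^T F, H], since
# diag(x) C - C diag(z) = (diag(x)A - A diag(y))B + A(diag(y)B - B diag(z)).
C = A @ B
Pc = np.hstack([E, A @ N])
Qc = np.hstack([B.T @ F, H])
assert np.allclose(np.diag(x) @ C - C @ np.diag(z), Pc @ Qc.T)
print("product generators verified")
```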
Eigensolver for CSymDPR1 matrices

Consider the complex symmetric diagonal-plus-rank-one matrix
$$A = \Xi + \rho\, y y^T, \qquad \Xi = \operatorname{diag}(\xi_1, \ldots, \xi_n), \quad y \in \mathbb{C}^n, \quad \rho \in \mathbb{C}. \qquad (15)$$
Without loss of generality, we assume that $A$ is irreducible, that is, $y_i \neq 0$ for all $i$, and $\xi_i \neq \xi_j$ for $i \neq j$. Indeed, if $y_i = 0$ for some $i$, then the diagonal element $\xi_i$ is an eigenvalue whose corresponding eigenvector is the $i$-th canonical vector, and if $\xi_i = \xi_j$, then $\xi_i$ is an eigenvalue of the matrix $A$.
Let
$$A = V\Lambda V^{-1}$$
be the eigenvalue decomposition of $A$, where $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ contains the eigenvalues and $V = \begin{bmatrix} v_1 & \cdots & v_n\end{bmatrix}$ is a matrix whose columns are the corresponding eigenvectors. Notice that the eigenvector matrix of a complex symmetric matrix satisfies the relation $V^{-1} = V^T$. The eigenvalue problem for $A$ can be solved by any of the standard methods (see [34] and the references therein). However, due to the special structure of $A$, we can use the following approach (see [9] and [13, Section 8.5.3]): the eigenvalues of $A$ are the zeros of the secular equation
$$f(\lambda) = 1 + \rho\sum_{i=1}^{n}\frac{y_i^2}{\xi_i - \lambda} = 0, \qquad (16)$$
and the corresponding eigenvectors are given by
$$x = (\Xi - \lambda I)^{-1} y, \qquad v = \frac{x}{\sqrt{x^T x}}. \qquad (17)$$
It is important to notice that $V$ is a Cauchy-like matrix,
$$\Xi V - V\Lambda = y\,\Psi^T, \qquad \text{that is,} \qquad V = C(\Xi, \Lambda, y, \Psi), \qquad (18)$$
where $\psi_j = 1/\sqrt{x_j^T x_j}$ with $x_j$ from (17). The equation (16) can, for example, be solved by the secular equation solver from the package MPSolve [8,21]. If $A$ is real, the eigenvalues interlace the diagonal elements of $\Xi$ and can be computed to high accuracy by bisection [15]. In this case, the orthogonality of the computed eigenvectors follows from the accuracy of the computed $\lambda$s. In the complex symmetric case there is no interlacing, but orthogonality is not an issue, so we developed a version of the Rayleigh quotient iteration.
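For the real symmetric case, the secular-equation approach can be sketched as follows; the roots are bracketed by the interlacing property and found with a generic root finder standing in for the bisection of [15] (the matrix entries are invented):

```python
import numpy as np
from scipy.optimize import brentq

# Real symmetric DPR1 matrix A = Xi + rho * y y^T with rho > 0.
xi = np.array([1.0, 2.0, 3.5, 5.0])
y = np.array([0.5, -0.3, 0.8, 0.2])
rho = 1.0
A = np.diag(xi) + rho * np.outer(y, y)

def secular(lam):
    # f(lam) = 1 + rho * sum_i y_i^2 / (xi_i - lam)
    return 1.0 + rho * np.sum(y**2 / (xi - lam))

# For rho > 0 the eigenvalues interlace: xi_i < lam_i < xi_{i+1},
# and the largest one lies in (xi_n, xi_n + rho * y^T y).
brackets = list(zip(xi[:-1], xi[1:])) + [(xi[-1], xi[-1] + rho * (y @ y))]
lams = np.array([brentq(secular, a + 1e-12, b - 1e-12) for a, b in brackets])
assert np.allclose(lams, np.linalg.eigvalsh(A))

# Eigenvectors x = (Xi - lam I)^{-1} y, normalized: a Cauchy-like matrix.
V = y[:, None] / (xi[:, None] - lams[None, :])
V /= np.linalg.norm(V, axis=0)
assert np.allclose(A @ V, V * lams, atol=1e-8)
print("secular-equation eigensolver matches eigvalsh")
```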

The standard Rayleigh quotient iteration (RQI) is as follows [27]: given a starting vector $x_0$ with $\|x_0\|_2 = 1$, repeat
$$\mu_k = x_k^{*} A x_k, \qquad \tilde x = (A - \mu_k I)^{-1} x_k, \qquad x_{k+1} = \tilde x/\|\tilde x\|_2. \qquad (19)$$
Then $\mu_k \to \lambda$. In our case $A - \mu_k I$ is again a CSymDPR1 matrix, so $\tilde x$ is computed in $O(n)$ operations. For real symmetric or Hermitian matrices, RQI converges rapidly (locally cubically). In the complex symmetric case, the convergence of RQI is slow and it is better to use the modified Rayleigh quotient iteration (MRQI), which is as follows: given a starting vector $x_0$, repeat
$$\mu_k = \frac{x_k^T A x_k}{x_k^T x_k}, \qquad \tilde x = (A - \mu_k I)^{-1} x_k, \qquad x_{k+1} = \frac{\tilde x}{\sqrt{\tilde x^T\tilde x}}. \qquad (20)$$
The MRQI method converges quadratically [1,27]. For a CSymDPR1 matrix, having in mind the eigenvector formulas (17), we further modify the method as follows: given a starting vector $x_0$, repeat
$$\mu_k = \frac{x_k^T A x_k}{x_k^T x_k}, \qquad x_{k+1} = (\Xi - \mu_k I)^{-1} y. \qquad (21)$$
This modified method showed very good convergence properties in all our large damping problems. Once $\mu$ has converged to an eigenvalue, this eigenvalue can be deflated [26, Section 7.2]. In particular, if for some $l < n$ we have computed the eigenvalues $\lambda_{n-l+1}, \ldots, \lambda_n$ of $A$, then we can compute the remaining $n-l$ eigenvalues $\lambda_1, \ldots, \lambda_{n-l}$ as the eigenvalues of the $(n-l)\times(n-l)$ CSymDPR1 matrix $\hat A = \hat\Xi + \hat\rho\,\hat y\hat y^T$, where $\hat\Xi = \operatorname{diag}(\xi_1, \xi_2, \ldots, \xi_{n-l})$ and $\hat\rho$, $\hat y$ are obtained by the deflation formulas from [26, Section 7.2]. In our implementation, the first steps use RQI from (19) and, after that, MRQI from (21) is used until convergence.
The operation count to compute all eigenvalues of $A$ is $O(n^2)$; the construction of the generators of the eigenvector matrix $V$ from (18) takes $O(n^2)$ operations (computing $\Psi$), and the reconstruction of $V$ from its generators, if needed, takes another $O(n^2)$ operations. This amounts to $O(n^2)$ operations for the complete eigenvalue decomposition of $A$.
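A minimal sketch of the iteration on a randomly generated CSymDPR1 matrix. The $O(n)$ solve is realized here with the Sherman-Morrison formula (one possible choice; the text only requires that the solve costs $O(n)$), and the iteration is started near a dense-solver eigenvector so that the run is reproducible:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Complex symmetric DPR1 matrix A = Xi + rho * y y^T (no conjugation anywhere).
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
rho = 0.7 + 0.2j
A = np.diag(xi) + rho * np.outer(y, y)
normA = np.linalg.norm(A, 1)

def dpr1_solve(mu, v):
    """O(n) solution of (Xi - mu I + rho y y^T) w = v via Sherman-Morrison."""
    d = xi - mu
    u, t = v / d, y / d
    return u - (rho * (y @ u) / (1.0 + rho * (y @ t))) * t

def mrqi(x, iters=50, tol=1e-13):
    """Rayleigh-quotient-type iteration with the transpose quotient
    x^T A x / x^T x (not the conjugate transpose), as suits complex
    symmetric matrices; each step costs O(n)."""
    mu = (x @ (A @ x)) / (x @ x)
    for _ in range(iters):
        x = dpr1_solve(mu, x)
        x = x / np.linalg.norm(x)
        mu = (x @ (A @ x)) / (x @ x)
        if np.linalg.norm(A @ x - mu * x) < tol * normA:
            break
    return mu, x

# Start near a true eigenvector (from a dense solver) for reproducibility.
lams, V = np.linalg.eig(A)
mu, _ = mrqi(V[:, 0] + 0.01 * rng.standard_normal(n))
print(np.min(np.abs(lams - mu)))
```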

Reduction to CSymDPR1 eigenproblems
Let $\Xi$ and $Q$ denote the solution of the hyperbolic GEVP
$$\left(\lambda^2 I + \lambda\Gamma + \Omega^2\right)u = 0, \qquad (22)$$
where $\Omega$ and $\Gamma$ are defined by (9), such that $\Xi$ is the diagonal matrix of the eigenvalues and $Q^{-1} = Q^T$. The problem (22) is equal to the problem (9), but without external damping. Due to the sparse structure of the problem (22) (both $\Gamma$ and $\Omega$ are diagonal), the matrices $Q$ and $\Xi$ are computed by solving $n$ eigenvalue problems for $2\times 2$ matrices, one for each pair $(\omega_i, \gamma_i)$, $i = 1, 2, \ldots, n$; all other elements of $Q$ and $\Xi$ are zero. Instead of solving (9), we now compute the eigenvalue decomposition of the complex symmetric diagonal-plus-low-rank matrix
$$A = \Xi + \sum_{i=1}^{k}\rho_i\, y_i y_i^T. \qquad (23)$$
Assume, for example, that there is only one damper with viscosity $\rho$ positioned at the mass $l$. Instead of solving (9), we compute the eigenvalue decomposition of the CSymDPR1 matrix $A = \Xi + \rho y y^T$, where $y = Q_{n+1:2n,:}^T\,\Phi_{l,:}^T$. In the case of $k$ dampers, the procedure is repeated. For example, in the case of two dampers we need to solve the eigenproblem for the matrix $A = \Xi + \rho_1 y_1 y_1^T + \rho_2 y_2 y_2^T$. First, we find the eigendecomposition of the matrix
$$A_1 = \Xi + \rho_1 y_1 y_1^T = S_1\Lambda_1 S_1^T.$$

Then the eigendecomposition of $A$ is computed as
$$A = S_1\left(\Lambda_1 + \rho_2\, z z^T\right)S_1^T = S_1 S_2\,\Lambda\,(S_1 S_2)^T = S\Lambda S^T, \qquad z = S_1^T y_2,$$
where $\Lambda_1 + \rho_2 z z^T = S_2\Lambda S_2^T$ is again a CSymDPR1 eigenproblem and $S = S_1 S_2$.
Since $A$ is complex symmetric, we also have $S^{-1} = S^T$. From (18) it follows that $S_1$ and $S_2$ are Cauchy-like matrices,
$$S_1 = C(\Xi, \Lambda_1, y_1, \Psi_1), \qquad S_2 = C(\Lambda_1, \Lambda, z, \Psi_2),$$
where the elements of the vectors $\Psi_1$ and $\Psi_2$ are the reciprocals of the norms $\sqrt{x_i^T x_i}$ of the unnormalized eigenvectors $x_i$ from (17). The matrices $S_1$ and $S_2$ are linked, so according to (13) and (14), $S = S_1\cdot S_2$ is a Cauchy-like matrix,
$$S = C(\Xi, \Lambda, P, Q), \qquad P = \begin{bmatrix} y_1 & S_1 z\end{bmatrix}, \qquad Q = \begin{bmatrix} S_2^T\Psi_1 & \Psi_2\end{bmatrix}.$$
This procedure is easily generalized to $k > 2$ dampers.
For small $k$, the computation of $\Lambda$, $P$ and $Q$ requires $O(n^2)$ operations.
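The two-damper updating procedure can be checked with dense eigensolvers standing in for the fast ones; the eigenvectors are renormalized so that $S^TS = I$, as required for complex symmetric matrices, and all data are random:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# Diagonal Xi (eigenvalues of the internally damped problem) and two dampers.
Xi = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
y1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
rho1, rho2 = 0.4, 0.9

def csym_eig(B):
    """Eigendecomposition B = S Lam S^T with complex-orthogonal S
    (columns scaled so that s^T s = 1, no conjugation)."""
    lam, S = np.linalg.eig(B)
    S = S / np.sqrt(np.sum(S * S, axis=0))
    return lam, S

# First update: A1 = Xi + rho1 y1 y1^T = S1 Lam1 S1^T.
lam1, S1 = csym_eig(Xi + rho1 * np.outer(y1, y1))

# Second update acts on Lam1 with the transformed vector z = S1^T y2.
z = S1.T @ y2
lam, S2 = csym_eig(np.diag(lam1) + rho2 * np.outer(z, z))

# Accumulated S = S1 S2 diagonalizes A = Xi + rho1 y1 y1^T + rho2 y2 y2^T.
S = S1 @ S2
A = Xi + rho1 * np.outer(y1, y1) + rho2 * np.outer(y2, y2)
assert np.allclose(S @ np.diag(lam) @ S.T, A, atol=1e-8)
assert np.allclose(S.T @ S, np.eye(n), atol=1e-8)
print("two-damper reduction verified")
```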

Trace computation
Let $A$ be given by (23) and let $A = S\Lambda S^T$ be its eigenvalue decomposition, computed with the method from Section 3.2. Then $S$ is a Cauchy-like matrix, $S = C(\Xi, \Lambda, P, Q)$ for some $P, Q \in \mathbb{C}^{n\times k}$, and $S^{-1} = S^T$, where $k$ is the number of dampers.
Let $\bar A$ denote the elementwise conjugate of $A$. Inserting the eigenvalue decomposition of $A$ into the Lyapunov equation (7) gives
$$S\Lambda S^T X + X\,\bar S\bar\Lambda\bar S^T = -GG^{*}.$$
Premultiplying this equation by $S^T = S^{-1}$, postmultiplying by $\bar S = S^{-*}$, and setting $Y = S^T X\bar S$ gives the displacement equation
$$\Lambda Y + Y\bar\Lambda = -\left(S^T G\right)\overline{\left(S^T G\right)}^{\,T}.$$
Therefore, $Y$ is a Cauchy-like matrix, $Y = C(\Lambda, -\bar\Lambda, -S^T G, \overline{S^T G})$. Notice that $S^T G$ is not an actual matrix multiplication: due to the special form of $G$ from (11), this is just a selection of columns of $S^T$. Generating the full $Y$, if needed, requires $O(sn^2)$ operations.
From $Y = S^T X\bar S$ it follows that $X = SY\bar S^T$, so $\operatorname{trace}(X) = \operatorname{trace}(Z\bar S^T)$ with $Z = S\cdot Y$. Since $S$ and $Y$ are linked Cauchy-like matrices, according to (14), $Z$ is a Cauchy-like matrix,
$$Z = C(\Xi, -\bar\Lambda, P', Q'), \qquad P' = \begin{bmatrix} P & -G\end{bmatrix}, \qquad Q' = \begin{bmatrix} Y^T Q & \overline{S^T G}\end{bmatrix},$$
where we used $S(-S^TG) = -G$. The computation of $Q'$ requires $O(nks\log^2 n)$ operations (see Section 3.1). Finally, $\operatorname{trace}(X)$ is computed using scalar products:
$$\operatorname{trace}(X) = \sum_{i=1}^{n}\sum_{j=1}^{n} Z_{ij}\,\bar S_{ij}.$$
This step requires $O(kn^2)$ operations to compute the elements of $\bar S$, $O((k+s)n^2)$ operations to compute the elements of $Z$, and $O(n^2)$ operations to compute the scalar products.
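The trace formula can be verified against a dense Lyapunov solve on a small random stable complex symmetric matrix; the reference solution is obtained here via Kronecker products, purely for checking:

```python
import numpy as np

rng = np.random.default_rng(4)
n, s = 6, 2

# Random stable complex symmetric A = S Lam S^T with S^{-1} = S^T.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2
A = B - (np.linalg.norm(B, 2) + 1.0) * np.eye(n)   # shift spectrum into Re < 0
lam, S = np.linalg.eig(A)
S = S / np.sqrt(np.sum(S * S, axis=0))             # complex-orthogonal columns

G = np.zeros((n, s))
G[:s, :s] = np.eye(s)                              # G selects canonical columns

# Y = S^T X conj(S) satisfies Lam Y + Y conj(Lam) = -(S^T G) conj(S^T G)^T,
# so Y is Cauchy-like and can be written down entrywise.
W = S.T @ G
Y = -(W @ W.conj().T) / (lam[:, None] + lam.conj()[None, :])

# trace(X) = sum_ij Z_ij * conj(S)_ij with Z = S Y.
Z = S @ Y
tr_fast = np.sum(Z * S.conj()).real

# Reference: solve A X + X A^* = -G G^* via vec(AX + X A^*) = ... vec(X).
M_lyap = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))
rhs = (-G @ G.conj().T).flatten(order='F')
X = np.linalg.solve(M_lyap, rhs).reshape((n, n), order='F')
print(abs(tr_fast - np.trace(X).real))
```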

Numerical examples
In this section we present three examples of an $n$-mass oscillator. The size of the problem is $n = 801$ for the "small" example, $n = 1601$ for the "large" example, and $n = 2001$ for the "homogeneous" example with more homogeneous masses. We compare our algorithm with the $O(n^3)$ algorithm from [6]. The computations were performed on an Intel i7-8700K CPU running at 3.70GHz with 12 cores.
Let us describe the large example; the small example is similar. We consider the mechanical system shown in Figure 1. The oscillator contains two rows of $d$ masses that are grounded on one side, while on the other side the masses are connected to one mass which is then grounded. Therefore, we consider $2d+1$ masses and $2d+3$ springs, while the system has three dampers with different viscosities $\rho_1$, $\rho_2$ and $\rho_3$. The mass matrix is
$$M = \operatorname{diag}(m_1, m_2, \ldots, m_n), \qquad (26)$$
and the stiffness matrix $K$ is assembled from the spring stiffnesses according to the configuration in Figure 1. We take $d = 800$, that is, $n = 1601$ masses. The internal damping is determined by (3) with $\alpha = 0.02$. As shown in Figure 1, we consider three dampers: the first two dampers are grounded, while the third damper connects the two rows of masses. This means that the external damping is determined by (5), with the damping geometry matrices $D_1$, $D_2$ and $D_3$ built from the canonical vectors $e_{i_1}$, $e_{i_2}$ and $e_{i_3}$ corresponding to the chosen damping positions, where $e_i$ denotes the $i$-th canonical vector.
In the definition of $D_{ext}$ we use indices since, in general, one needs to optimize the damping positions, that is, we would like to optimize the viscosities $\rho_1$, $\rho_2$ and $\rho_3$ over different damping configurations. Here we present results for only one configuration $(i_1, i_2, i_3)$. We would like to damp the 27 smallest eigenfrequencies of the undamped system, that is, the matrix $G$ is defined by (11) with $s = 27$.
For the homogeneous example we choose $s = 20$ in (11). Our problems and solutions are described in Table 1. The timings of the standard algorithm (Matlab) and our new algorithm (Julia) are given in Tables 2 and 3, respectively; Table 3 shows the run times in seconds for the new $O(n^2)$ algorithm using Julia with 12 cores. The individual eigenvalue problems are solved with our eigensolver eigen() for CSymDPR1 matrices, followed by the fast computation of the trace with the function traceX(). The optimization is performed using the function ConjugateGradient() from the Julia package Optim.jl [17]. Our Julia programs are available on GitHub [12].