Regularized Normalization Methods for Solving Linear and Nonlinear Eigenvalue Problems

Abstract: To solve linear and nonlinear eigenvalue problems, we develop a simple method that directly solves a nonhomogeneous system obtained by supplementing a normalization condition to the eigen-equation for the uniqueness of the eigenvector. The novelty of the present paper is that we transform the original homogeneous eigen-equation into a nonhomogeneous one by a normalization technique and introduce a simple merit function, the minimum of which leads to a precise eigenvalue. For complex eigenvalue problems, two normalization equations are derived from two different normalization conditions. Golden section search algorithms are employed to minimize the merit functions to locate real and complex eigenvalues and, simultaneously, to obtain precise eigenvectors satisfying the eigen-equation. Two regularized normalization methods accelerate the convergence of two extensions of the simple method, and a derivative-free fixed-point Newton iterative scheme is developed to compute real eigenvalues, whose convergence is about ten times faster than the golden section search algorithm. Newton methods are developed for solving two systems of nonlinear regularized equations, and the efficiency and accuracy are significantly improved. Over ten examples demonstrate the high performance of the proposed methods; among them, the two regularization methods outperform the simple method.


Introduction
It is well known that the Rayleigh quotient [1,2] R(x) = x^T A x/(x^T x), (1) can be used to determine the real eigenvalues of a symmetric matrix A.
In this paper, we derive a simple normalization-condition solver to obtain the eigenvalues of Ax = λx, (2) where A ∈ R^{n×n} is a given matrix, x ∈ R^n is an unknown vector, and λ is an unknown eigenvalue in the standard linear eigen-equation. When A is not a symmetric matrix, the Rayleigh quotient (1) cannot be used to determine the eigenvalues. Liu et al. [3] developed a new quotient to determine the eigenvalues of Equation (2). As noticed by Liu et al. [4], it is hard to directly determine the eigenvalue and eigenvector from Equation (2) by a numerical method. In fact, from Ax − λx = 0, a numerical method always returns x = 0, since the right-hand side is a zero vector. In [4], a new strategy to overcome this difficulty uses a variable transformation to a new nonhomogeneous linear system. It possesses a nonzero external excitation term on the right-hand side, such that one can obtain a nonzero eigenvector when the eigen-parameter λ is an eigenvalue. We propose a simple method to nonhomogenize the eigen-equation into a nonhomogeneous linear system, after which the eigenvalue and eigenvector are easily found by a minimization technique.
The standard free vibration model of elastic structural elements is M q''(t) + C q'(t) + K q(t) = 0, (3) which, upon inserting q(t) = e^{λt} x, renders a quadratic eigenvalue problem [5]: (λ^2 M + λC + K) x = 0. (4) A lot of applications and solvers of quadratic eigenvalue problems have been proposed, e.g., stability analysis of time-delay systems [6], free vibrations of fluid-solid structures [7], a modified second-order Arnoldi method [8], the inexact residual iteration method [9], the homotopy perturbation technique [10], electromagnetic wave propagation and analysis of an acoustic fluid contained in a cavity with absorbing walls [11], and a friction-induced vibration problem under variability [12]. In addition, several applications and solvers of generalized eigenvalue problems have been addressed, e.g., the block Arnoldi-type contour integral spectral projection method [13], small-sample statistical condition estimation [14], matrix perturbation methods [15], the overlapping finite element method [16], the complex HZ method [17], the context of sensor selection [18], and a generalized Arnoldi method [19].
As done in [4], we can take y = λx, (5) and combine Equations (5) and (4) to obtain Kx + Cy + λMy = 0. (6) Upon defining X = (x^T, y^T)^T, A = [0, I; −K, −C], B = [I, 0; 0, M], (7) Equation (6) becomes a generalized eigenvalue problem for the 2n-vector X: AX = λBX, (8) where A, B ∈ R^{2n×2n}. Equation (8) is used to determine the eigen-pair (λ, X); it is a linear eigen-equation associated with the pencil A − λB, where λ is an eigen-parameter. A main drawback of this formulation is that the dimension is doubled from n to 2n. Equation (8) can be written as (A − λB)X = 0. (9) Because the right-hand side is a zero vector, solving it by a numerical method yields only the trivial solution X = 0. To avoid this situation, Liu et al. [4] introduced an external excitation method by letting X = Y + X_0, such that (A − λB)Y = −(A − λB)X_0. (10) Solving this equation for Y, the eigenvector X = Y + X_0 is obtained. However, how to select a proper exciting vector X_0 is a problem. The basic idea is the transformation from the homogeneous Equation (9) to the nonhomogeneous Equation (10). Whether a simpler method can realize such a transformation, but without introducing an extra exciting vector X_0, is an interesting problem. The present paper attempts to make this type of transformation very easy, which is the main motivation and the major novelty: to realize this transformation by a simple normalization technique. The present idea is simpler than that in [4], so the new technique is called a simple method and is introduced in Section 2.

Nonlinear eigenvalue problems are important and find many real applications in engineering and applied fields. Betcke et al. [20] collected 52 nonlinear eigenvalue problems in the form of a MATLAB toolbox, which contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. Recently, El-Guide et al.
[21] presented two approximation methods for computing eigenfrequencies and eigenmodes of large-scale nonlinear eigenvalue problems resulting from boundary element method solutions of some types of acoustic eigenvalue problems in 3D space. We extend Equation (4) to a general nonlinear eigenvalue problem [20]: N(λ)x = 0, (11) which is a nonlinear eigen-equation in λ used to solve the eigen-pair (λ, x), where N(λ) ∈ R^{n×n}. Equation (11) is a nonlinear eigenvalue problem because N(λ) is a nonlinear matrix function of the eigen-parameter λ. In Equation (9), N(λ) = A − λB is a linear matrix function of λ, so that it is a linear eigenvalue problem.
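The linearization in Equations (5)-(8) can be sketched numerically. Below is a minimal check of one common companion form, assuming the arrangement A = [0, I; −K, −C], B = [I, 0; 0, M]; the paper's A and B in Equation (7) may differ by an equivalent rearrangement. The decoupled test data are illustrative choices.

```python
import numpy as np

# Linearize the quadratic eigenvalue problem (lam^2*M + lam*C + K)x = 0
# via y = lam*x, giving A*X = lam*B*X with X = [x; y].
def linearize_qep(M, C, K):
    n = M.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return A, B

# Decoupled example: each scalar equation lam^2 + 3*lam + 2 = 0
# has roots -1 and -2, each appearing twice.
M = np.eye(2)
C = 3.0 * np.eye(2)
K = 2.0 * np.eye(2)
A, B = linearize_qep(M, C, K)
lams = np.linalg.eigvals(np.linalg.solve(B, A))
print(np.sort(lams.real))   # -> [-2. -2. -1. -1.]
```

As the check confirms, the 2n eigenvalues of the pencil A − λB reproduce the roots of the quadratic eigen-equation, at the cost of doubling the dimension.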
Most numerical methods that deal with nonlinear eigenvalue problems are Newton-type methods [22-25]. In [26], some available solution techniques for nonlinear eigenvalue problems using the Jacobi-Davidson, Arnoldi, and rational Krylov methods were presented. Zhou [27] used the Leray-Schauder fixed-point theorem to establish the existence of positive solutions of a nonlinear eigenvalue problem. El-Ajou [28] demonstrated the general exact and numerical solutions of four significant matrix fractional differential equations, and a new computational technique was applied for obtaining the general solutions of the nonlinear problem in the Caputo sense. Jadamba et al. [29] addressed the nonlinear inverse problem of estimating the stochasticity of a random parameter in stochastic partial differential equations by using a regularized projected stochastic gradient scheme. Later, Harcha et al. [30] tackled the nonlinear eigenvalue problem with the fractional p-Laplacian involving singular weights and obtained the nonexistence of solutions by utilizing a typical version of Picone's identity.
The nonlinear eigenvalue problem is a great challenge for developing efficient and accurate methods [31]. Even for polynomial nonlinear eigenvalue problems, the linearizations to linear eigenvalue problems in a larger space are quite complicated and in general not unique. The present paper intends to overcome these challenges: we directly solve the nonlinear eigenvalue problem in its nonhomogeneous form by incorporating a normalization condition in the original space.
In this paper, we will encounter the problem of solving a nonlinear equation f(x) = 0 when an explicit expression for f(x) is not available. The Newton method for iteratively solving f(x) = 0 is given by x_{n+1} = x_n − f(x_n)/f'(x_n), (12) which needs the point-wise derivative f'(x_n) in the iteration. For some problems, the explicit function f(x) might not be available, which makes it very difficult to use the Newton method to solve the nonlinear equation. To overcome this inefficiency, Liu [32] derived a derivative-free iterative scheme based on a new splitting technique, f'(x) ≈ a + b f(x), which leads to x_{n+1} = x_n − f(x_n)/[a + b f(x_n)], (13) where a and b are constants. In Section 6.3, we will develop a derivative-free fixed-point Newton method to determine a and b. With regard to derivative-free fixed-point Newton methods, one can refer to [33] and the references therein.
In addition to the derivative-free fixed-point Newton method and the minimization techniques, we will also develop the Newton method for the system of nonlinear equations obtained by incorporating the normalization condition into the eigen-equation. Arnoldi [34] proposed the method of minimized iterations as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix. After that, many iterative methods were surveyed at length in [35]. Argyros et al. [36] addressed a semilocal analysis of the Newton-Kurchatov method for solving nonlinear equations involving the splitting of an operator. They also acquired weaker sufficient semilocal convergence criteria and tighter error estimates than in earlier works. Argyros and Shakhno [37] studied local convergence of the combined Newton-Kurchatov method for solving Banach-space-valued equations. Further, they mentioned that these modifications of earlier conditions result in a tighter convergence analysis and more precise information on the location of the solution.
This paper develops several simple approaches, including two regularization methods, to solve nonlinear eigenvalue problems. The contributions and innovation points of this paper are given as follows:

• When solving nonlinear eigenvalue problems, they can be transformed into minimization problems for both real and complex eigenvalues.

• For solving linear and nonlinear eigenvalue problems, this paper presents normalization techniques to create new nonhomogeneous systems and merit functions.

• Two simple regularization methods are combined with the Newton iteration method, which results in very fast convergence when solving nonlinear eigenvalue problems.

• Using the derivative-free fixed-point Newton method to directly solve the regularized scalar equation for nonlinear eigenvalue problems, we can quickly obtain high-precision eigenvalues.
The remainder of the paper is arranged as follows. In Section 2, we consider a normalization condition for the uniqueness of the eigenvector and derive a simple nonhomogeneous linear system; the residual of the eigen-equation is minimized by using the 1D golden section search algorithm (1D GSSA) to determine the real eigenvalue, which results in a simple method (SM). Some examples of linear eigenvalue problems in Section 3 exhibit the advantages of the SM for finding approximate solutions of Equation (2). A simple method for the nonlinear eigenvalue problem (11) is presented in Section 4, which is combined with the golden section search algorithm to form a stable solver for eigenvalues and eigenvectors. For complex eigenvalue problems, we propose two normalization equations with nonhomogeneous terms appearing on the right-hand side. Section 5 displays some examples of nonlinear eigenvalue problems solved by the SM and GSSA. In Section 6, we discuss two simple regularization methods and provide a derivative-free fixed-point Newton method for quickly finding the real eigenvalues. The combination of Newton's method and the regularized equations is also carried out in this section. Finally, the conclusions are drawn in Section 7.

A Simple Method for Standard Linear Eigenvalue Problems
We can observe that the solution x ≠ 0 of Equation (2) is not unique, because βx with β ≠ 0 is also a solution if x is a solution of Equation (2). Therefore, for the uniqueness of the eigenvector of Equation (2), an extra normalization condition b^T x = 1 (14) can be imposed, where b is a given nonzero vector. For example, if we take b = (1, 0, . . ., 0)^T, then the first component of x is normalized to x_1 = 1. If b_J = 1 for a given J and b_k = 0, k ≠ J, we normalize the J-th component of x to be x_J = 1.

Theorem 1. If Equation (2) is constrained by the normalization condition (14) for the uniqueness of x, we can derive a nonhomogeneous system to determine x: [(A − λI)^T (A − λI) + b b^T] x = b. (15)

Proof. Equations (2) and (14) yield an over-determined system: [A − λI; b^T] x = [0; 1]. (16) Multiplying both sides by the transposed matrix [(A − λI)^T, b], (17) we have [(A − λI)^T, b][A − λI; b^T] x = [(A − λI)^T, b][0; 1]. Expanding this, we prove Equation (15).
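A small numerical check of the normal-equations form (15): at a true eigenvalue, its solution satisfies both the eigen-equation and the normalization condition. The 2x2 matrix and the vector b below are illustrative choices, not taken from the paper's examples.

```python
import numpy as np

# Verify Equation (15): for an eigenvalue lam of A, the solution x of
# [(A - lam*I)^T (A - lam*I) + b b^T] x = b satisfies A x = lam*x and
# b^T x = 1 simultaneously.
A = np.array([[4.0, 1.0], [1.0, 4.0]])   # symmetric, eigenvalues 3 and 5
b = np.array([1.0, 0.0])
lam = 3.0

R = A - lam * np.eye(2)
x = np.linalg.solve(R.T @ R + np.outer(b, b), b)
print(np.linalg.norm(A @ x - lam * x))   # eigen-equation residual, ~0
print(b @ x)                             # normalization b^T x, ~1
```

Here x comes out proportional to the eigenvector for λ = 3 and is automatically scaled so that its first component equals one.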

Remark 1.
If the coefficient matrix A is highly ill-conditioned, it is better to replace Equation (15) with (A − λI + b b^T) x = b, (18) whose condition number is roughly the square root of that of Equation (15), i.e., its order is reduced to one-half. Equation (18) is easily derived by adding b^T x b to both sides of Equation (2) and taking Equation (14) into account.

If λ is an eigenvalue and x ≠ 0 is the corresponding eigenvector, ‖Ax − λx‖ = 0 by Equation (2). In other cases, ‖Ax − λx‖ > 0 for all x ≠ 0. As a consequence, ‖Ax − λx‖ ≥ 0 for all x ∈ R^n. For a given λ ∈ R, if x is solved from Equation (18), then we can determine the correct value of λ by minimizing the following merit function: min_{λ∈[a,b]} f(λ) = ‖Ax − λx‖, (19) where ‖Ax − λx‖ denotes the Euclidean norm of Ax − λx, and λ ∈ [a, b] is a real eigenvalue to be sought. The numerical procedures for solving Equation (2) are given by (i) selecting [a, b] and b, (ii) solving Equation (18) for each required λ_i ∈ [a, b], and (iii) applying the one-dimensional golden section search algorithm (1D GSSA) to Equation (19) to pick up the eigenvalue.
Remark 2. This method, involving merely two equations, (18) and (19), is the simplest method to find the real eigenvalues of Equation (2) and is thus labeled a simple method (SM). When the SM is used to solve the nonhomogeneous linear system in Equation (18) for each eigen-parameter λ, we can easily compute the real eigenvalue λ and then the corresponding eigenvector x with the aid of Equation (19).
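The whole procedure of the SM fits in a few lines. The sketch below, with an illustrative 2x2 matrix, b, and search bracket of our own choosing, solves Equation (18) for each trial λ and minimizes the merit function (19) by a 1D golden section search.

```python
import math
import numpy as np

def gssa(f, a, c, tol=1e-12):
    """1D golden section search for a minimum of f on [a, c]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while c - a > tol:
        p = c - g * (c - a)
        q = a + g * (c - a)
        if f(p) < f(q):
            c = q          # minimum lies in [a, q]
        else:
            a = p          # minimum lies in [p, c]
    return 0.5 * (a + c)

def merit(A, lam, b):
    # Equation (18): solve (A - lam*I + b b^T) x = b, then
    # Equation (19): return ||A x - lam*x||.
    n = A.shape[0]
    x = np.linalg.solve(A - lam * np.eye(n) + np.outer(b, b), b)
    return np.linalg.norm(A @ x - lam * x)

A = np.array([[4.0, 1.0], [1.0, 4.0]])   # symmetric, eigenvalues 3 and 5
b = np.array([1.0, 0.0])
lam = gssa(lambda t: merit(A, t, b), 2.5, 3.3)
print(lam)   # converges to the eigenvalue 3
```

The bracket must isolate a single minimum of f(λ); here [2.5, 3.3] contains only the eigenvalue 3, so the GSSA converges to it.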

Examples of Linear Eigenvalue Problems
In order to assess the performance of the newly developed SM, we test some linear eigenvalue problems.
Example 1. Although this example is very simple, we adopt it to test the accuracy and efficiency of the proposed SM since the exact eigenvalues are known. By applying the SM, we take J = 1 and ε = 10^{−15} in the 1D GSSA. We plot f(λ) with respect to the eigen-parameter over an interval, as shown in Figure 1, for which the three minimal points are the corresponding eigenvalues λ = 3, 6, 9, the corresponding eigenvectors of which are given as follows: Table 1 lists some results obtained by the SM, where EE means the error of the eigenvalue and NI denotes the number of iterations.

Example 5.
Let A = [a_{ij}], i, j = 1, . . ., n. In Equation (2), we take the Hilbert matrix: a_{ij} = 1/(i + j − 1). Since the Hilbert matrix is highly ill-conditioned, we take Equation (18) instead of Equation (15) to compute the eigenvalues. In Figure 5, with b = 1_n (the vector of ones) and n = 7, we plot f(λ) with respect to the eigen-parameter over an interval, for which the seven minimal points are the corresponding eigenvalues. The largest eigenvalue lies between 1.5 and 2. In Table 4, we list the largest eigenvalues for n = 7, . . ., 10, obtained in [41] using the cyclic Jacobi method [42] and in [3] using the external excitation method. Due to the highly ill-conditioned nature of the Hilbert matrix with n = 100, this is a quite difficult linear eigenvalue problem. For this problem, we take J = 1 to compute the largest eigenvalue, which is given by λ = 2.182696097757424. The SM converges very fast with 69 iterations under ε = 10^{−15}, and the error of the eigen-equation is ‖Ax − λx‖ = 2.4 × 10^{−15}. Notice that the smallest eigenvalue of the Hilbert matrix with a large n is very difficult to compute since it is very close to zero. However, for n = 100 and b_{49} = b_{50} = 1, we can obtain the smallest eigenvalue 6.18 × 10^{−30}, whose error is ‖Ax − λx‖ = 1.41 × 10^{−16}.
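The n = 7 case of this example is easy to reproduce in the spirit of the SM: minimize f(λ) = ‖Hx − λx‖ over the bracket [1.5, 2] stated in the text, with x solved from Equation (18) and b the vector of ones. The result can be checked against a standard symmetric eigensolver.

```python
import math
import numpy as np

# Largest eigenvalue of the 7x7 Hilbert matrix via Equation (18) and a
# golden section search on the bracket [1.5, 2] from the text.
n = 7
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = np.ones(n)   # b = 1_n as in the text

def merit(lam):
    x = np.linalg.solve(H - lam * np.eye(n) + np.outer(b, b), b)
    return np.linalg.norm(H @ x - lam * x)

def gssa(f, a, c, tol=1e-12):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while c - a > tol:
        p = c - g * (c - a)
        q = a + g * (c - a)
        if f(p) < f(q):
            c = q
        else:
            a = p
    return 0.5 * (a + c)

lam = gssa(merit, 1.5, 2.0)
print(lam, abs(lam - np.linalg.eigvalsh(H).max()))   # agreement to ~1e-12
```

Despite the ill-conditioning of H, the regularized system (18) stays solvable on the whole bracket, and the SM value matches `eigvalsh` closely.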

A Simple Method for Nonlinear Eigenvalue Problems
If x is an eigenvector of Equation (11), then αx with α ≠ 0 is also an eigenvector, which means that the eigenvector of Equation (11) is not unique. Therefore, we can impose on Equation (11) the extra normalization condition (14).

Theorem 2. If Equation (11) is subjected to the normalization condition (14) for the uniqueness of x ∈ R^n, we can derive a nonhomogeneous equation system to determine x: [N(λ) + b b^T] x = b. (29)

Proof. Equation (29) is easily derived by adding b^T x b to both sides of Equation (11): N(λ)x + b b^T x = b b^T x, which, with b^T x = 1 by Equation (14) being used on the right-hand side, yields [N(λ) + b b^T] x = b. Thus, we prove Equation (29).
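A minimal numerical sketch of Theorem 2: solve the nonhomogeneous system (29) for each trial λ and minimize ‖N(λ)x‖. The quadratic matrix function N(λ) = λ²I − A below is an illustrative choice whose nonlinear eigenvalues are the square roots of the eigenvalues of A.

```python
import math
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
b = np.array([1.0, 0.0])

def N(lam):
    # illustrative nonlinear matrix function; eigenvalues lam = +/-1, +/-sqrt(3)
    return lam**2 * np.eye(2) - A

def merit(lam):
    # Equation (29): (N(lam) + b b^T) x = b, then return ||N(lam) x||
    x = np.linalg.solve(N(lam) + np.outer(b, b), b)
    return np.linalg.norm(N(lam) @ x)

def gssa(f, a, c, tol=1e-12):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while c - a > tol:
        p = c - g * (c - a)
        q = a + g * (c - a)
        if f(p) < f(q):
            c = q
        else:
            a = p
    return 0.5 * (a + c)

lam = gssa(merit, 1.65, 1.9)   # bracket isolating sqrt(3) ~ 1.732
print(lam)
```

The only change from the linear SM is that A − λI is replaced by the general matrix function N(λ); the normalization and the merit function are identical.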
The numerical procedures for determining the real eigenvalues of Equation (11) are the same as those for Equation (2): solve Equation (29) for each trial λ and apply the 1D GSSA to Equation (19) with f(λ) = ‖N(λ)x‖ to pick up the eigenvalue. This method is the simplest method to find the real eigenvalues of Equation (11) and is labeled a simple method (SM).

Theorem 3. Suppose that Equation (11) is subjected to the normalization condition (14) for the uniqueness of x, and that λ is a complex eigenvalue; writing λ = λ_R + iλ_I, x = u + iv, N(λ) = N_R + iN_I, (33) we can derive a nonhomogeneous equation system to determine x: [N_R + b b^T, −N_I; N_I, N_R + b b^T][u; v] = [b; 0]. (32)

Proof. Inserting Equation (33) into Equation (29) yields (N_R + iN_I + b b^T)(u + iv) = b. (34) Equating the real and imaginary parts of Equation (34), we have (N_R + b b^T)u − N_I v = b, N_I u + (N_R + b b^T)v = 0, (35) which can be recast to that in Equation (32).
When x is solved from Equation (32), we can employ the following minimization: min_{(λ_R, λ_I)} f(λ_R, λ_I) = ‖N(λ)x‖ (36) to determine the complex eigenvalue.
For the complex eigenvalue problem, we can also derive another normalization equation.
Theorem 4. For x = u + iv ∈ C^n in Equation (11), suppose that the normalization condition c^T y = 1, y := (u^T, v^T)^T, (37) is imposed for the uniqueness of x, and that λ is a complex eigenvalue. Then we can derive a nonhomogeneous equation system to determine x = u + iv: (D + c c^T) y = c, (38) where c is a 2n-dimensional constant vector and D = [N_R, −N_I; N_I, N_R]. (39)

Proof. Inserting Equation (33) into Equation (11) and equating the real and imaginary parts, we can derive Dy = 0.
Adding c c^T y to both sides yields Dy + c c^T y = c c^T y, which, by using Equation (37) on the right-hand side, generates Equation (38).
Therefore, the numerical procedures for determining the complex eigenvalues of a nonlinear eigenvalue problem (11) are as follows: (i) select the search ranges of (λ_R, λ_I) and the vector b or c; (ii) solve Equation (32) (or Equation (38)); (iii) apply the two-dimensional GSSA (2D GSSA) to Equation (36). With regard to the two-dimensional golden section search algorithm, one may refer to [43].
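A consistency check of the real/imaginary split behind Equation (32): with λ = λ_R + iλ_I, x = u + iv, and N = N_R + iN_I, the complex system (29) becomes a 2n-dimensional real block system. The block arrangement below follows our reconstruction of (32), and the rotation-generator matrix with eigenvalues ±i is an illustrative choice.

```python
import numpy as np

# Check: at the known complex eigenvalue lam = i of A, the real block
# system recovers the normalized complex eigenvector x = u + i*v.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # eigenvalues +i and -i
b = np.array([1.0, 0.0])
lr, li = 0.0, 1.0                          # trial eigenvalue lam = i

NR = A - lr * np.eye(2)                    # real part of N(lam) = A - lam*I
NI = -li * np.eye(2)                       # imaginary part

Bl = np.block([[NR + np.outer(b, b), -NI],
               [NI, NR + np.outer(b, b)]])
uv = np.linalg.solve(Bl, np.concatenate([b, np.zeros(2)]))
x = uv[:2] + 1j * uv[2:]
print(np.linalg.norm(A @ x - 1j * x))      # complex eigen-residual, ~0
print(x[0])                                # first component normalized, ~1
```

The 2n real unknowns (u, v) thus play the role of x in the real case, and the 2D GSSA simply searches over the two parameters (λ_R, λ_I) instead of one.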
Remark 3. Even though the proofs of Theorems 2-4 are simple and straightforward, they are crucial for the development of the proposed numerical methods for effectively and accurately solving nonlinear eigenvalue problems.

Examples of Nonlinear Eigenvalue Problems
Example 6. To demonstrate the new idea in Equation (29), we consider a generalized eigenvalue problem Ax = λBx endowed by [44]. By using the SM with b = (1, 0, 0, 0, 0)^T and ε = 10^{−15} for the 1D GSSA, f(λ) is plotted in Figure 6, for which five eigenvalues appear as minimums.

Example 7. To display the advantage of Equation (32), we consider a standard eigenvalue problem which possesses complex eigenvalues. By using the SM with b = (1, 1, 1)^T and ε = 10^{−15}, we plot f(λ) in Figure 7a with respect to the eigen-parameter over an interval, for which two real eigenvalues are λ = 0.04080314176866112 and λ = 0.7425972620277184, where ‖Nx‖ = 1.22 × 10^{−15} and ‖Nx‖ = 2.53 × 10^{−15} are obtained, respectively.
Let √λ = µ with µ = µ_R + iµ_I, so that λ_R = µ_R² − µ_I² and λ_I = 2µ_R µ_I. There are a total of 24 eigenvalues, as shown in Figure 7b.

Example 9. As an application, we consider a time-delay linear system of first-order ordinary differential equations: q'(t) = Aq(t) + Bq(t − 1), (49) where Bq(t − 1) is a time-delay external force. Inserting q(t) = e^{λt} x into Equation (49) renders λe^{λt} x = Ae^{λt} x + Be^{λ(t−1)} x. (50) By canceling e^{λt} on both sides, we obtain a time-delay nonlinear eigenvalue problem: (λI − A − e^{−λ} B) x = 0, (51) where A, B ∈ R^{n×n}. The eigenvalues of this system are very important as they reflect the stability of the time-delay system.
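The delay eigen-equation N(λ)x = (λI − A − e^{−λ}B)x = 0 fits directly into the simple method: solve (N + b b^T)x = b and minimize ‖N(λ)x‖. The diagonal A and B below are illustrative stand-ins (not the matrices of [25]), chosen so that the first eigenvalue solves the scalar relation λ = −1 + 0.5 e^{−λ}.

```python
import math
import numpy as np

A = np.diag([-1.0, -2.0])
B = np.diag([0.5, 0.3])
b = np.array([1.0, 0.0])

def N(lam):
    # time-delay eigen-matrix N(lam) = lam*I - A - exp(-lam)*B
    return lam * np.eye(2) - A - math.exp(-lam) * B

def merit(lam):
    x = np.linalg.solve(N(lam) + np.outer(b, b), b)
    return np.linalg.norm(N(lam) @ x)

def gssa(f, a, c, tol=1e-12):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while c - a > tol:
        p = c - g * (c - a)
        q = a + g * (c - a)
        if f(p) < f(q):
            c = q
        else:
            a = p
    return 0.5 * (a + c)

lam = gssa(merit, -0.5, 0.0)
print(lam, abs(lam + 1.0 - 0.5 * math.exp(-lam)))   # residual ~0
```

Since the exponential makes N(λ) transcendental, the problem has infinitely many eigenvalues; each search bracket picks out one of them.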
We take n = 3 and consider [25] where This describes a time-delay system.

Two Regularization Methods
In the first regularization method (FRM), we take b = αd, (61) where α ≠ 0 is a regularization parameter and d is a constant vector. Inserting Equation (61) into Equations (29) and (14) generates the first regularization equation: [N(λ) + α² d d^T] x = αd, α d^T x = 1. (62) In the second regularization method (SRM), we consider another normalization condition instead of Equation (14): d^T x = α. (63) If α = 1, we recover Equation (14) with b = d. Then, as done in the proof of Theorem 2, we can derive the second regularization equation: [N(λ) + d d^T] x = αd. (64) Equations (62) and (64) differ in that the α² d d^T in the first becomes d d^T in the second. The second regularization method is simpler than the first regularization method.
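The two regularized systems can be compared side by side for the linear case N(λ) = A − λI. At a true eigenvalue, both solutions annihilate N(λ); the matrix, d, and α below are illustrative choices.

```python
import numpy as np

# FRM, Equation (62): (N + alpha^2 d d^T) x = alpha*d
# SRM, Equation (64): (N + d d^T) x = alpha*d
A = np.array([[4.0, 1.0], [1.0, 4.0]])   # eigenvalues 3 and 5
d = np.array([1.0, 0.0])
alpha = 0.01
lam = 3.0
Nmat = A - lam * np.eye(2)

x_frm = np.linalg.solve(Nmat + alpha**2 * np.outer(d, d), alpha * d)
x_srm = np.linalg.solve(Nmat + np.outer(d, d), alpha * d)
print(np.linalg.norm(Nmat @ x_frm))   # ~0: x_frm is an eigenvector
print(np.linalg.norm(Nmat @ x_srm))   # ~0: x_srm is an eigenvector
```

Note the different scalings: the FRM solution satisfies α d^T x = 1, while the SRM solution satisfies d^T x = α, so α sets the size of the computed eigenvector and thereby influences the conditioning of the iteration.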

Newton Iterative Methods
The regularized Equation (62) constitutes a system of nonlinear equations for x and x_m := λ, m = n + 1, the Jacobian matrix of which at the k-th step is J_k = [N(λ_k) + α² d d^T, N'(λ_k) x_k; α d^T, 0]. (65) Thus, the Newton iterative method together with the FRM is given by (x_{k+1}, λ_{k+1}) = (x_k, λ_k) − J_k^{−1} F_k, (66) where F_k collects the residuals of Equation (62) at the k-th step. To construct the Newton method for the SRM, letting x_{n+1} = λ and using Equations (64) and (63) yields the following nonlinear regularized equations with dimension m = n + 1: [N(λ) + d d^T] x − αd = 0, d^T x − α = 0. (67) At the k-th step, the Jacobian matrix reads as J_k = [N(λ_k) + d d^T, N'(λ_k) x_k; d^T, 0]. (68) Then, the Newton iterative method together with the SRM is given by (x_{k+1}, λ_{k+1}) = (x_k, λ_k) − J_k^{−1} F_k. (69) The iteration is terminated when a given convergence criterion with ε = 10^{−15} is fulfilled.
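The Newton iteration on the SRM system (67)-(68) can be sketched for the linear case N(λ) = A − λI, where N'(λ) = −I. The matrix, d, α, and the starting point are illustrative choices, not the paper's data.

```python
import numpy as np

# Newton iteration on G(x, lam) = [(N + d d^T) x - alpha*d; d^T x - alpha]
# with Jacobian J = [[N + d d^T, N'(lam) x], [d^T, 0]], N'(lam) = -I.
A = np.array([[4.0, 1.0], [1.0, 4.0]])   # eigenvalues 3 and 5
d = np.array([1.0, 0.0])
alpha = 0.01
x = np.array([0.02, -0.02])              # initial guess for x
lam = 2.9                                # initial guess for lam

for _ in range(100):
    Nmat = A - lam * np.eye(2)
    G = np.concatenate([(Nmat + np.outer(d, d)) @ x - alpha * d,
                        [d @ x - alpha]])
    J = np.block([[Nmat + np.outer(d, d), (-x).reshape(2, 1)],
                  [d.reshape(1, 2), np.zeros((1, 1))]])
    step = np.linalg.solve(J, G)
    x = x - step[:2]
    lam = lam - step[2]
    if np.linalg.norm(step) < 1e-14:
        break

print(lam)   # converges to the eigenvalue 3
```

Because the augmented Jacobian is nonsingular at a simple eigenvalue, the iteration converges quadratically from a reasonable starting point, here in only a few steps.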
As usual, the Newton iterative method needs to invert the Jacobian matrix at each iteration step, which may consume much computational time for a large-scale eigenvalue problem. In order to save computational time, we turn to solving a scalar equation F(λ) = 0 derived from N(λ)x = 0, as given below.

A Derivative-Free Fixed-Point Newton Method
The NI of the 1D GSSA is usually over 70, as shown by Examples 6-10 in Section 5. To reduce the computational burden, a derivative-free fixed-point Newton method (DFFPNM) for solving a scalar equation F(λ) = 0 can be derived as follows [46,47]. Mathematically speaking, solving Equation (11) is equivalent to solving F(λ) = 0, which, however, after inserting the solution x obtained from Equation (62) or Equation (64), is a highly nonlinear and implicit function of λ. Suppose that λ* is a root with F(λ*) = 0. In order to get rid of the derivative term in the Newton method, we consider the splitting F'(λ) ≈ a + bF(λ). Neglecting the higher-order terms and inserting this into the Newton iterative scheme, we obtain λ_{n+1} = λ_n − F(λ_n)/[a + bF(λ_n)]. (75) To determine a and b by a fixed-point estimation, the first step is to choose two initial guesses λ_0 and λ_2. Then, we take λ_1 = (λ_0 + λ_2)/2. As approximations of a and b, we can evaluate them by the technique of finite differences: a + bF(λ_0) ≈ [F(λ_1) − F(λ_0)]/(λ_1 − λ_0), a + bF(λ_2) ≈ [F(λ_2) − F(λ_1)]/(λ_2 − λ_1). (77) The resulting iterative scheme (75), together with a and b above, is a derivative-free fixed-point Newton method (DFFPNM).
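A plausible realization of the DFFPNM is sketched below, with a and b estimated once from three points by the finite-difference relations above. The paper obtains F(λ) by inserting the regularized solution into the eigen-equation; as an illustrative stand-in we take the signed normalization defect F(λ) = d^T x(λ) − α of the SRM solution x(λ), which also changes sign at an eigenvalue. All concrete data are our own choices.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 4.0]])   # eigenvalues 3 and 5
d = np.array([1.0, 0.0])
alpha = 0.01

def F(lam):
    # SRM solve (Equation (64)), then the signed normalization defect
    Nmat = A - lam * np.eye(2)
    x = np.linalg.solve(Nmat + np.outer(d, d), alpha * d)
    return d @ x - alpha

# estimate a and b from lam0, lam1 = midpoint, lam2 (Equation (77))
lam0, lam2 = 2.7, 3.2
lam1 = 0.5 * (lam0 + lam2)
D0 = (F(lam1) - F(lam0)) / (lam1 - lam0)
D2 = (F(lam2) - F(lam1)) / (lam2 - lam1)
b = (D2 - D0) / (F(lam2) - F(lam0))
a = D0 - b * F(lam0)

# derivative-free iteration (75): lam <- lam - F/(a + b*F)
lam = lam1
for _ in range(30):
    Flam = F(lam)
    if abs(Flam) < 1e-15:
        break
    lam = lam - Flam / (a + b * Flam)

print(lam)   # converges to the eigenvalue 3
```

Each iteration costs one linear solve and no Jacobian inversion, which is where the tenfold saving over the 1D GSSA reported in the paper comes from.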
Comparing the two curves in Figure 9 with Figure 6, we see that it is easier to find the zero points of F(λ) = 0 by the Newton-like method. Next, we solve Example 6 by the second regularization method (SRM). Table 6 lists the eigenvalue, α, ‖Nx‖, NI, and [λ_0, λ_2] used in the DFFPNM. In Table 7, we list the results computed from the Newton method together with the FRM, for which the initial guess is x_0 = 1, λ_0 = c_0, and α = 3. In Table 8, we list the results computed from the Newton method together with the SRM, for which the initial guess is x_0 = 1, λ_0 = c_0, and α = 0.01. We solve Example 8 by the FRM with a fixed d = (1, 1, 1)^T and two values α = 1 and α = 2. The two curves are compared in Figure 10: the curve with α = 2 is better than that with α = 1. Table 9 reveals that α = 2 converges faster than α = 1. Here, we apply the FRM to solve Example 10 with d = (0, 1, 0)^T and ε = 10^{−15}. For α = 5, the first real eigenvalue is found to be λ = −0.2328574586400297, with NI = 7 and ‖Nx‖ = 3.14 × 10^{−16}. Then, with d = (1, 1, 1)^T and α = 10, the second real eigenvalue is λ = 2.355885632295364, with NI = 6 and ‖Nx‖ = 9.89 × 10^{−16}. The two curves of F(λ) are compared in Figure 11. Compared with the SM together with the 1D GSSA, the NI is reduced by about a factor of ten.
Here we consider the free vibration model (3) with C = 0. By inserting q = e^{iωt} x into Equation (3), we can obtain a nonlinear eigenvalue problem: (ω² M − K) x = 0. For the design of engineering structures, knowing the frequencies ω of the free vibration modes x is of utmost importance.
In Table 12, we list the results computed from the Newton method and the FRM, where d = 1 and the initial guess is x_0 = 1, ω_0 = c_0, and α = 2. It can be seen that all ‖Nx‖, where N = ω²M − K, are very small. In Table 13, we list the results computed from the Newton method and the SRM, where d_1 = 1, d_j = 0, j = 2, . . ., n, and the initial guess is x_0 = 1, ω_0 = c_0, and α = 0.01. The corresponding five modes of free vibration are plotted in Figure 12, wherein all the first components are normalized to one. It can be seen that all ‖Nx‖ are very small, which indicates the high accuracy of the proposed Newton method based on the SRM; it is slightly more accurate than the FRM in Table 12. Indeed, the regularization parameter α controls the convergence speed and accuracy. In Table 14, we list NI and ‖Nx‖ for different values of α. When α ≥ 6, the iteration does not converge within 1000 steps, and the accuracy is reduced to 5.21 × 10^{−13}. The best value is α = 0.01. When α = 1, the normalization condition (63) in the SRM recovers the normalization condition (14) in the SM. However, α = 1 is not the best one, as shown in Table 14. When proper values of α are taken, the FRM and SRM are better than the SM.

Conclusions
Fast and accurate iterative solution methods for linear and nonlinear eigenvalue problems were studied in this paper. We transformed the original homogeneous eigen-equation into a nonhomogeneous linear system by imposing an extra normalization condition for the uniqueness of the eigenvector. Over a given range, the curve of the merit function is constructed, for which the real eigenvalues are local minimums. In the merit function, the vector variable is solved from the newly derived nonhomogeneous linear system. Real eigenvalues can be obtained quite fast by using the 1D golden section search algorithm, and complex eigenvalues by using the 2D golden section search algorithm. Very accurate eigenvalues and eigenvectors, as reflected by the very small errors of orders 10^{−15} and 10^{−16} in satisfying the eigen-equation, were obtained within a few iterations, and the computations of the merit functions are quite economical. Complex eigenvalue problems are more difficult than real eigenvalue problems; in Theorems 3 and 4, we explored two normalization equations for them. For real eigenvalue problems, two regularization methods were constructed, which, upon combination with the derivative-free fixed-point Newton method, can find the real eigenvalues about ten times more quickly than the 1D GSSA used in the simple method. Moreover, the combination of Newton's iterative technique with the two regularized normalization methods confirmed that suitable values of the regularization parameters can enhance the convergence speed and the accuracy of the solutions. Compared to the two Newton methods, which need to invert the Jacobian matrices at each iteration step, the derivative-free fixed-point Newton method only solves a scalar equation; hence, it saves much computational time while retaining the same good convergence rate, with few iterations needed for the tested examples.

Figure 1 .
Figure 1.Plotting the merit function with respect to the eigen-parameter, the three minimal points of which are eigenvalues 3, 6, and 9 for Example 1.

Figure 2 .
Figure 2. Plotting the merit function with respect to the eigen-parameter, the four minimal points of which are eigenvalues −4, −2, 8, 12 for Example 2.

Figure 3 .
Figure 3. Plotting the merit function with respect to the eigen-parameter, the four minimal points of which are eigenvalues 1, 2, 6, 30 for Example 3.

Figure 4 .
Figure 4. Plotting the merit function with respect to the eigen-parameter, for which the minimal points are the last nine eigenvalues for Example 4.

Figure 5 .
Figure 5. Plotting the merit function with respect to the eigen-parameter, for which the minimal points are seven eigenvalues of Example 5 with n = 7.

Figure 6 .
Figure 6.A generalized eigenvalue problem of Example 6 showing five minima in a merit function obtained by a simple method.

Figure 7 .
Figure 7.A nonlinear eigenvalue problem of Example 8 showing two minima in a merit function obtained by a simple method for (a) real eigenvalues and (b) complex eigenvalues.

Figure 8 .
Figure 8.A nonlinear eigenvalue problem of Example 10 showing two minima in a merit function obtained by a simple method for real eigenvalues.

Figure 9 .
Figure 9. A generalized eigenvalue problem of Example 6 showing five minima in (a) the first regularization method and (b) the second regularization method. Now, we solve Example 6 in Section 5 by the first regularization method (FRM). Table 5 lists the eigenvalue, α, error ‖Nx‖, number of iterations (NI), and [λ_0, λ_2] used in the DFFPNM.

Figure 10 .
Figure 10.Example 8 showing two minima obtained by the first regularization method with different regularization parameters.

Figure 12 .
Figure 12.Example 13 of a five-degree free vibration system displaying the five vibration modes.

Table 1 .
Example 1 solved by SM, listing EE, the error Ax − λx , and NI.

Table 2 .
Example 2 solved by SM, listing EE, the error Ax − λx , and NI.

Table 3 .
Example 3 solved by SM, listing EE, the error Ax − λx , and NI.

Table 5 .
Results of Example 6 solved by FRM and DFFPNM.

Table 6 .
Results of Example 6 solved by SRM and DFFPNM.

Table 7 .
Results of Example 6 solved by the Newton method and FRM.

Table 8 .
Results of Example 6 solved by the Newton method and SRM.

Table 9 .
Results of Example 8 solved by FRM and DFFPNM.

Table 10 .
Results of Example 12 solved by the combination of FRM and the Newton method.

Table 11 .
Results of Example 12 solved by the combination of SRM and the Newton method.
Example 13. As a practical application, we consider a five-story shear building with the data given in [49].

Table 12 .
Results of Example 13 solved by the combination of FRM and the Newton method.

Table 13 .
Results of Example 13 solved by the combination of SRM and the Newton method.

Table 14 .
Example 13 solved by the combination of SRM and the Newton method, listing NI and Nx for different values of α.