# Pattern-Multiplicative Average of Nonnegative Matrices Revisited: Eigenvalue Approximation Is the Best of Versatile Optimization Tools

## Abstract

Given several nonnegative matrices of one and the same fixed pattern of zero/nonzero allocation, their pattern-multiplicative average (PMA) is a matrix **G** of the same pattern whose Mth power equals the product of the M given matrices. Finding **G** therefore reduces to extracting a pattern-preserving matrix root of the product **G**^{M}, and hence to a nonlinear constrained minimization problem for the matrix norm. Former practical solutions faced certain technical problems, which required sophisticated algorithms but returned acceptable estimates. Now, we formulate (for the first time in ecological modeling and nonnegative matrix theory) the PMA problem as an eigenvalue approximation one and reduce it to a standard problem of linear programming (LP). The basic equation of averaging also determines the exact value of λ_{1}(**G**), the dominant eigenvalue of matrix **G**, and the corresponding eigenvector. These are bound by the well-known linear equations, which enable an LP formulation of the former nonlinear problem. The LP approach is realized for 13 fixed-pattern matrices gained in a case study of *Androsace albana*, an alpine short-lived perennial, monitored on permanent plots over 14 years. A standard software routine reveals the unique exact solution, rather than an approximate one, to the PMA problem, which turns the LP approach into "the best of versatile optimization tools". The exact solution turns out to be peculiar in reaching zero bounds for certain nonnegative entries of **G**, which deserves a modified problem formulation separating the lower bounds from zero.

## 1. Introduction

Matrix population models (MPMs) rest on a vector **x**(t) ∈ ${\mathbb{R}}_{+}^{n}$ (n ≥ 2) representing the (absolute or relative) abundances of n class-specific groups of individuals, whereafter the matrix population model is a system of first-order difference equations,

**x**(t + 1) = **L**(t) **x**(t), t = 0, 1, 2, …. (1)

**L**(t) is therefore called the population projection matrix (PPM) [4], despite the projection term having quite another meaning in matrix algebra [5]. All the matrix entries are nonnegative and called vital rates, as they have a certain demographic sense [4]. The pattern of how zero/nonzero elements are arranged within the matrix is associated, in a unique way, with what is called the life cycle graph (ibidem), as it bears the biological knowledge of how the individuals grow/develop over one time step and provide for the population recruitment.

The dominant eigenvalue, λ_{1} > 0, of matrix **L**(t) serves as a "measure of how the local population is adapted to the environment where and when the data have been mined to calibrate the PPM" [8] (p. 1). When the data are of the "identified individuals" type [4], a great advantage of MPMs is that the calibration can be performed just from two successive observations, thus meeting Equation (1) and yielding a time-dependent **L**(t). Correspondingly, the adaptation measure, λ_{1}, inherits the time dependence, and the great advantage turns dialectically into a great problem, or challenge, when the task is to assess the adaptation from a time series of observation data, hence from a finite set of one-step PPMs.

One classical approach to the challenge deals with the asymptotics of **x**(t) = **L**(t − 1) ⋯ **L**(0) **x**(0) as t → ∞ [9,10,11,12], and it has led to what is now called the stochastic growth rate of a population in a randomly changing environment (see [4] and refs therein). Another, more recent, approach reverts the task just to averaging a given number, say M (M ≥ 2), of one-step PPMs and calculating the dominant eigenvalue, λ_{1}(**G**), of the average matrix **G**. The letter "G" here bears a hint of the geometric (or multiplicative) nature of averaging, and the hint will be revealed further in Section 2.2. This approach has led to a concept that was proposed 5 years ago with regard to matrix population models [13], and the task of averaging was reduced to a constrained nonlinear minimization problem for the matrix norm [8,13] (see Section 2.3). The norm has, however, not appeared quite fit for global optimization but has required versatile optimization tools [14] to solve the problem.

The present approach minimizes, instead, the deviation of λ_{1}(**G**) from the ideal, calculable value λ^{opt}_{1}(**G**) (Section 2.4). The approach originates from the fact that the ideal λ_{1}(**G**) is known:

λ_{1}(**G**) = (λ_{1}(**Prod**))^{1/M},

where **Prod** denotes the product of the M one-step PPMs. This leads to a problem that appears to be formalizable as a standard problem of linear programming (LP), and hence solvable by standard routines. The solution is exemplified by a case study of monitoring a local population of an alpine short-lived perennial for 13 years [15] (Section 2.1). Section 2.3 and Section 2.4 contain, respectively, a short presentation of the published "versatile optimization tools" and an in-depth one of the original LP approach. The outcomes of both are presented in Section 3, enabling the comparison of results, which reveals a surprising contrast and induces certain comments (Section 4), in particular, on how general the proposed LP technique might be.

## 2. Materials and Methods

#### 2.1. Case Study

The life cycle graph of *A. albana* (Figure 1) distinguishes five stage-specific groups, so that the population vector is **x**(t) ∈ ${\mathbb{R}}_{+}^{5}$, and the natural order of components in **x**(t) leads to the following PPM pattern:

**L**(t) = $\left[\begin{array}{ccccc}0& 0& 0& 0& a\\ d& 0& 0& 0& b\\ e& f& h& 0& c\\ 0& 0& k& l& 0\\ 0& 0& 0& m& 0\end{array}\right]$. (3)

The monitoring data have been calibrated into 13 annual PPMs, **L**(t) = [l_{ij}(t)], presented in Table 1. Each of them obeys Equation (1). There are 6 annual PPMs with λ_{1}(**L**(t)) > 1 and 7 of those with λ_{1}(**L**(t)) < 1, so that even an "educated guess" of what should be λ_{1}(**G**), the dominant eigenvalue of the average matrix, remains uncertain.

#### 2.2. Pattern-Multiplicative Average of Annual PPMs

In terms of model (1), the total outcome of the monitoring takes the form

**x**(2022) = **L**(2021) **L**(2020) ⋯ **L**(2009) **x**(2009), (2)

so that, denoting the product of the 13 annual PPMs by

**Prod** = **L**(2021) **L**(2020) ⋯ **L**(2009), (4)

we have **x**(2022) = **Prod x**(2009). The pattern-multiplicative average of the 13 annual PPMs is a matrix **G** of the same pattern (3) that obeys the basic equation of averaging,

**G**^{13} = **Prod**,

i.e., a patterned 13th root of the product. However, extracting a root of **Prod** by the standard tools of matrix analysis can hardly return a nonnegative matrix, nor guarantee the fixed pattern (3) for the root.
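Why direct root extraction fails can be seen even in a toy case. The Python sketch below (made-up 2 × 2 Leslie-type matrices, not the paper's data) computes the principal matrix root via eigendecomposition and shows that the root violates the fixed zero/nonzero pattern of the averaged matrices:

```python
import numpy as np

# The principal M-th root of Prod, computed by eigendecomposition, need not
# inherit the fixed zero/nonzero pattern of the averaged matrices -- the
# motivation for posing PMA as an approximation problem rather than exact
# root extraction. Matrices below are illustrative.
def principal_root(P, M):
    w, V = np.linalg.eig(P)
    R = (V * w.astype(complex) ** (1.0 / M)) @ np.linalg.inv(V)
    return R.real if np.allclose(R.imag, 0) else R

L1 = np.array([[0.0, 2.0], [0.5, 0.0]])   # shared pattern: zero diagonal
L2 = np.array([[0.0, 3.0], [0.3, 0.0]])
Prod = L2 @ L1                            # = diag(1.5, 0.6)
R = principal_root(Prod, 2)               # diagonal root: pattern is violated
```

Here `R @ R` reproduces `Prod` exactly, yet `R` is diagonal, the opposite of the averaged matrices' zero-diagonal pattern.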

#### 2.3. Approximate PMA as a Nonlinear Constrained Minimization Problem

Rewriting the basic equation of averaging as

**G**^{13} − **Prod** = **0**,

and denoting **G** = [g_{ij}] for the approximate average matrix, we can see the elements of **G**^{13} as certain polynomials of the 5 × 13 = 65th degree. If we consider the quality of approximation to be a scalar value, then it should be the error of approximation. The error can be measured as the (squared) matrix norm, whereby we come to the following nonlinear minimization problem:

||**G**^{13} − **Prod**||^{2} → min

over the matrices **G** of pattern (3) whose entries lie within the bounds of Table 2.

**Table 2.** Bounds for the 10 unknown variables ensuing from 13 annual PPMs (Table 1).

| Minimal | Vital Rate | Maximal |
|---|---|---|
| 4/3 | a | 49 |
| 3 | b | 85 |
| 0 | c | 25 |
| 0 | d | 7/15 |
| 0 | e | 1 |
| 1/49 | f | 7/9 |
| 0 | h | 2/3 |
| 6/95 | k | 5/6 |
| 8/25 | l | 22/23 |
| 1/35 | m | 5/15 |
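The norm-minimization formulation of Section 2.3 can be sketched in Python on a toy 2 × 2 pattern (made-up data, not the 5 × 5 case; a crude grid search stands in for the published global-optimization tools):

```python
import numpy as np
from itertools import product

# G keeps the fixed pattern (zero diagonal in this toy), and the loss is the
# squared norm of G^M - Prod, as in the nonlinear minimization problem of
# Section 2.3. All numbers are illustrative.
def G_of(p, q):
    return np.array([[0.0, p], [q, 0.0]])   # fixed zero/nonzero pattern

def loss(p, q, Prod, M=2):
    return np.linalg.norm(np.linalg.matrix_power(G_of(p, q), M) - Prod) ** 2

Prod = np.diag([1.5, 0.6])                  # product of two patterned matrices
grid = np.linspace(0.01, 3.0, 150)
best = min(((loss(p, q, Prod), p, q) for p, q in product(grid, repeat=2)))
# best[0] > 0: no patterned G solves G^2 = Prod exactly, only approximately
```

The strictly positive minimum illustrates why only approximate solutions were attainable along this route.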

#### 2.4. Approximate PMA as an Eigenvalue-Constrained Optimization Problem

Suppose **v** is a dominant eigenvector of the average matrix **G**, i.e., we have

**G v** = λ_{1}(**G**) **v**. (10)

Applying **G** successively on both sides of Equation (10), we obtain

**G**^{2} **v** = λ_{1}(**G**)^{2} **v**,
**G**^{3} **v** = λ_{1}(**G**)^{3} **v**,
…,
**G**^{13} **v** = λ_{1}(**G**)^{13} **v**.
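The chain of identities above is easy to verify numerically; a Python sketch with a made-up nonnegative matrix (not the case-study one):

```python
import numpy as np

# If G v = lam * v for the dominant (Perron) pair, then G^k v = lam^k * v for
# every power k, so v is also a dominant eigenvector of G^13. The matrix
# below is an illustrative nonnegative example.
G = np.array([[0.0, 0.0, 1.2],
              [0.7, 0.0, 0.0],
              [0.0, 0.4, 0.5]])
w, V = np.linalg.eig(G)
i = np.argmax(w.real)                  # dominant (Perron) eigenvalue index
lam, v = w[i].real, V[:, i].real
ok = np.allclose(np.linalg.matrix_power(G, 13) @ v, lam ** 13 * v)
```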

Since **G**^{13} = **Prod**, **v** is a dominant eigenvector of the matrix product, **Prod**, too. Hereby, the other measure of an approximation error is |λ_{1}(**G**) **−** ρ_{0}|, where ρ_{0} = ρ(**Prod**)^{1/13}, the error to be minimized among the matrices **G** that have the same eigenvector **v**. To exemplify how matrices of a single pattern with different eigenvalues may have the same dominant eigenvector is a student-level exercise (a hint to the solution in Appendix A).

The eigenvector, **v***, can, for instance, be expressed in percent and calculated like the vectors **x*** presented in Table 1. It can thereafter serve as a constraint in the search of approximation among various λ_{1} values associated with a single **v***. (To give an example of a matrix eigenvalue being nonlinear as a function of matrix elements is another exercise.) Hence, we have a nonlinear minimization problem again. However, the change of variables, a beloved mathematical trick, reduces minimizing the error |λ_{1}(**G**) **−** ρ_{0}| to a pair of linear problems: to maximize λ_{1}(**G**) when ρ_{0} ≥ λ_{1}(**G**) and to minimize it when λ_{1}(**G**) ≥ ρ_{0}.

The vector **v*** = [v_{1}, v_{2}, …, v_{5}]^{T}, the known positive eigenvector of **G**, and its dominant eigenvalue λ_{1}(**G**) are bound by definition to the matrix equation

**Gv*** = λ_{1}(**G**) **v***.

Denoting λ = λ_{1}, this matrix equation reduces to the five scalar linear equations for a, b, …, l, m, λ:

av_{5} − λv_{1} = 0,
bv_{5} + dv_{1} − λv_{2} = 0,
cv_{5} + ev_{1} + fv_{2} + hv_{3} − λv_{3} = 0, (14)
kv_{3} + lv_{4} − λv_{4} = 0,
mv_{4} − λv_{5} = 0,

i.e., for the ten unknown entries of **G** and the 11th formal variable λ.
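The five equality constraints (14) can be assembled into a matrix A_eq acting on the vector of unknowns, A_eq x = 0. A Python sketch (the eigenvector values below are illustrative, not the case-study ones):

```python
import numpy as np

# Equality constraints (14) as A_eq x = 0 for
# x = [a, b, c, d, e, f, h, k, l, m, lambda]^T, given a known positive
# eigenvector v*. Each row encodes one scalar equation; since v* is fixed,
# every equation is linear in x, including the lambda terms.
def build_Aeq(v):
    v1, v2, v3, v4, v5 = v
    A = np.zeros((5, 11))
    #        a   b   c   d   e   f   h   k   l   m  lam
    A[0, [0, 10]] = [v5, -v1]                     # a*v5 - lam*v1 = 0
    A[1, [1, 3, 10]] = [v5, v1, -v2]              # b*v5 + d*v1 - lam*v2 = 0
    A[2, [2, 4, 5, 6, 10]] = [v5, v1, v2, v3, -v3]
    A[3, [7, 8, 10]] = [v3, v4, -v4]              # k*v3 + l*v4 - lam*v4 = 0
    A[4, [9, 10]] = [v4, -v5]                     # m*v4 - lam*v5 = 0
    return A

v_star = np.array([29.3, 45.5, 5.0, 19.2, 1.0])  # percent, illustrative
A_eq = build_Aeq(v_star)
```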

The vector of unknowns is **x** = [a, b, …, l, m, λ]^{T}, while the polygons ${\mathbb{B}}_{p}\subset {\mathbb{R}}_{+}^{11}$ (p = 1, 2) are defined by the bounds (15) and the equality-type constraints (14). The two inner problems (16) represent the standard LP ones to be solved by standard software routines, such as the function linprog in MATLAB^{®} [19] (technical details in Appendix B).
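In Python, an inner LP problem of this kind can be sketched with scipy.optimize.linprog in place of MATLAB's linprog. The example below is a toy 2 × 2 analogue (made-up matrices, not the paper's data); the B₁-type bounds push λ toward ρ₀ from below:

```python
import numpy as np
from scipy.optimize import linprog

# Toy analogue of an inner LP problem (16): unknowns x = [p, q, lam] for the
# patterned matrix G = [[0, p], [q, 0]]; the equality constraints tie G's
# action on a fixed dominant eigenvector v to lam, and lam is maximized
# subject to lam <= rho0. All numbers are illustrative.
Prod = np.array([[1.5, 1.2], [0.3, 0.64]])
w, V = np.linalg.eig(Prod)
i = np.argmax(w.real)
rho0 = w[i].real ** 0.5                  # ideal lambda_1(G) for M = 2
v = np.abs(V[:, i].real)                 # positive dominant eigenvector

# Equality constraints (cf. (14)): p*v2 - lam*v1 = 0, q*v1 - lam*v2 = 0
A_eq = np.array([[v[1], 0.0, -v[0]],
                 [0.0, v[0], -v[1]]])
res = linprog(c=[0.0, 0.0, -1.0],        # maximize lam <=> minimize -lam
              A_eq=A_eq, b_eq=[0.0, 0.0],
              bounds=[(0, 10), (0, 10), (0, rho0)],
              method="highs")
```

With generous entry bounds, the LP reaches λ = ρ₀ exactly, which is the toy counterpart of the exact solution reported in Section 3.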

## 3. Results

#### 3.1. Case Study Outcome

#### 3.2. Pattern-Multiplicative Averaging as an Approximation Problem

Matrix **Prod** (4), the product of 13 annual PPMs, is too cumbersome to be presented here in its original rational form. Its numerical form up to the 15th significant digit is shown in Table 3 together with the dominant eigenvalue and the corresponding eigenvector.

#### 3.3. Minimizing the Approximation Error as a Matrix Norm

#### 3.4. Eigenvalue-Constrained Optimization Problem

Table 5 presents the solutions to the two inner LP problems (16) over the respective polygons B_{1} and B_{2}.

## 4. Discussion and Conclusions

The major outcome is that both inner LP problems (16) have returned one and the same solution, with zero error of approximating λ_{1}(**G**) (Table 5). On the one hand, the theory of linear programming is known to admit nonunique solutions [23]. On the other, different matrices might well have the same dominant eigenvector (see Appendix A) in our practical case, too. Fortunately, the solution returned by the routine has turned out to be unique, hence the unique, best solution to the averaging problem.

Another notable outcome concerns the zero entries of the average matrix **G**. While there are in total 4 vital rates that have zero minimal values among the 13 annual ones (Table 2), 3 of them, namely, for c, d, e (Figure 1), are reached by the "ideal" solution (Table 5). The shares of c = 0, d = 0, and e = 0 in the set of 13 annual PPMs (Table 1) are equal, respectively, to 9/13, 7/13, and 1/13 (Appendix C). Whether the average matrix should take into account the most frequent zeros or all of them is a dialectical issue. The paradigm of biological diversity in general and the concept of polyvariant ontogeny [24,25] in particular dictate retaining all the links in the life cycle graph (Figure 1), hence the absence of occasional zeros in the average matrix.

## Funding

## Data Availability Statement

## Acknowledgments

All computations were performed in MATLAB^{®} R2021b.

## Conflicts of Interest

## Appendix A

## Appendix B

To apply the LP machinery of MATLAB^{®}, we have logically reformulated the LP problem in matrix terms and in the terms of problems (14)–(16). The corresponding solver linprog [19] finds the minimum in a problem specified by the vector of unknowns **x** = [a, b, …, l, m, λ]^{T} and the cost vector **y**^{T} = [0, 0, …, 0, 1]; matrix **A** and vector **b** represent the inequality-type constraints (empty in our case), **A**_{eq} and **b**_{eq} the equality-type constraints (14); **lb** and **ub**, the lower and upper bounds, respectively, for **x**, are given in Table 2, lacking them for λ, the last component. Extracted from Table 1, these bounds for λ_{1}(**G**) are

0.3988 ≤ λ ≤ ρ_{0} and ρ_{0} ≤ λ ≤ 1.5779,

where ρ_{0} is given in Table 3. Correspondingly, the pair of **lb** and **ub** split to the following ones, **lb**, **ub1** and **lb2**, **ub**, where

**ub1**_{11} = **lb2**_{11} = ρ_{0}.

## Appendix C

First, the 13 annual PPMs are stacked into a three-dimensional MATLAB^{®} array [26] named All13L. Second, we calculate the frequency of zeros at that place by means of the built-in MATLAB^{®} function nnz [27] and the following line:

- Similar lines for the places of (2, 1) and (3, 1) return, respectively, 7/13 and 1/13.
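The same zero-frequency count can be sketched in Python (a toy stack of made-up 2 × 2 matrices; the paper's actual array is the 13-layer All13L):

```python
import numpy as np

# Stack the annual PPMs into a 3-D array and count, for a given (i, j)
# place, in how many of the M matrices the entry is zero -- a Python
# counterpart of counting with MATLAB's nnz over one "tube" of the array.
def zero_share(stack, i, j):
    layer = stack[:, i, j]               # entry (i, j) across all M matrices
    return np.count_nonzero(layer == 0) / stack.shape[0]

stack = np.array([[[0.0, 2.0], [0.5, 0.0]],
                  [[0.0, 3.0], [0.0, 0.0]],
                  [[1.0, 1.0], [0.2, 0.0]]])
share = zero_share(stack, 1, 0)          # zeros at place (2, 1): 1 of 3
```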

## References

- COMADRE. Available online: https://compadre-db.org/Data/Comadre (accessed on 30 May 2023).
- COMPADRE. Available online: https://compadre-db.org/Data/Compadre (accessed on 30 May 2023).
- Berman, A.; Plemmons, R.J. Nonnegative Matrices in the Mathematical Sciences; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1994. [Google Scholar]
- Caswell, H. Matrix Population Models: Construction, Analysis and Interpretation, 2nd ed.; Sinauer Associates: Sunderland, MA, USA, 2001. [Google Scholar]
- Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
- Gantmacher, F.R. Matrix Theory; Chelsea Publ.: New York, NY, USA, 1959. [Google Scholar]
- Logofet, D.O.; Salguero-Gómez, R. Novel challenges and opportunities in the theory and practice of matrix population modelling: An editorial for the special feature “Theory and Practice in Matrix Population Modelling”. Ecol. Model.
**2021**, 443, 109457. [Google Scholar] [CrossRef] - Logofet, D.O. Does averaging overestimate or underestimate population growth? It depends. Ecol. Model.
**2019**, 411, 108744. [Google Scholar] [CrossRef] - Cohen, J.E. Comparative statics and stochastic dynamics of age-structured populations. Theor. Popul. Biol.
**1979**, 16, 159–171. [Google Scholar] [CrossRef] [PubMed] - Tuljapurkar, S.D.; Orzack, S.H. Population dynamics in variable environments I. Long-run growth rates and extinction. Theor. Popul. Biol.
**1980**, 18, 314–342. [Google Scholar] [CrossRef] - Tuljapurkar, S.D. Demography in stochastic environments. II. Growth and convergence rates. J. Math. Biol.
**1986**, 24, 569–581. [Google Scholar] [CrossRef] [PubMed] - Tuljapurkar, S.D. Population Dynamics in Variable Environments; Lecture Notes in Biomathematics; Springer: Berlin, Germany, 1990; Volume 85. [Google Scholar] [CrossRef]
- Logofet, D.O. Averaging the population projection matrices: Heuristics against uncertainty and nonexistence. Ecol. Complex.
**2018**, 33, 66–74. [Google Scholar] [CrossRef] - Protasov, V.Y.; Zaitseva, T.I.; Logofet, D.O. Pattern-multiplicative average of nonnegative matrices: When a constrained minimization problem requires versatile optimization tools. Mathematics
**2022**, 10, 4417. [Google Scholar] [CrossRef] - Logofet, D.O.; Golubyatnikov, L.L.; Kazantseva, E.S.; Belova, I.N.; Ulanova, N.G. Thirteen years of monitoring an alpine short-lived perennial: Novel methods disprove the former assessment of population viability. Ecol. Model.
**2023**, 477, 110208. [Google Scholar] [CrossRef] - Logofet, D.O.; Kazantseva, E.S.; Belova, I.N.; Onipchenko, V.G. How long does a short-lived perennial live? A modelling approach. Biol. Bull. Rev.
**2018**, 8, 406–420. [Google Scholar] [CrossRef] - Logofet, D.O.; Kazantseva, E.S.; Onipchenko, V.G. Seed bank as a persistent problem in matrix population models: From uncertainty to certain bounds. Ecol. Model.
**2020**, 438, 109284. [Google Scholar] [CrossRef] - MathWorks. Documentation. Available online: https://www.mathworks.com/help/gads/globalsearch.html?s_tid=doc_ta (accessed on 30 May 2023).
- MathWorks. Documentation. Available online: https://www.mathworks.com/help/optim/ug/linprog.html?s_tid=doc_ta (accessed on 30 May 2023).
- Deb, K.; Roy, P.C.; Hussein, R. Surrogate modeling approaches for multiobjective optimization: Methods, taxonomy, and results. Math. Comput. Appl.
**2021**, 26, 5. [Google Scholar] [CrossRef] - Krantz, S.G. Real Analysis and Foundations, 5th ed.; CRC Press: Boca Raton, FL, USA, 2022; pp. 325–413. [Google Scholar]
- Robert, C.P.; Casella, G. Monte Carlo Statistical Methods, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar] [CrossRef]
- Luenberger, D.G. Introduction to Linear and Non-linear Programming; Addison-Wesley: Reading, MA, USA, 1973. [Google Scholar]
- Zhukova, L.A. Polyvariance of the meadow plants. In Zhiznennye Formy v Ekologii i Sistematike Rastenii (Life Forms in Plant Ecology and Systematics); Mosk. Gos. Pedagog. Inst.: Moscow, Russia, 1986; pp. 104–114. [Google Scholar]
- Zhukova, L.A. Populyatsionnaya Zhizn’ Lugovykh Rastenii (Population Life of Meadow Plants); Lanar: Yoshkar-Ola, Russia, 1995. [Google Scholar]
- MathWorks. Documentation. Available online: https://www.mathworks.com/help/matlab/math/multidimensional-arrays.html (accessed on 30 May 2023).
- MathWorks. Documentation. Available online: https://www.mathworks.com/help/matlab/ref/nnz.html?s_tid=doc_ta (accessed on 30 May 2023).

**Figure 1.** *A. albana* life cycle graph by ontogenetic stages: **pl**, plantules; **j**, juvenile plants; **im**, immature plants; **v**, adult vegetative plants; and **g**, generative plants. Dashed arrows correspond to "virtual" reproduction (adapted from [15]).

**Table 1.**Outcome of PPM calibration for A. albana based on 2009–2022 data (updating Table 2 from [15]).

Census Year, t | Matrix L(t): t → t + 1 | λ_{1}(L(t)) | Vector x *, % |
---|---|---|---|

2009 j = 0 | L_{0}$=\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$30$}\!\left/ \!\raisebox{-1ex}{$13$}\right.\\ \raisebox{1ex}{$8$}\!\left/ \!\raisebox{-1ex}{$37$}\right.& 0& 0& 0& \raisebox{1ex}{$40$}\!\left/ \!\raisebox{-1ex}{$13$}\right.\\ \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$37$}\right.& \raisebox{1ex}{$22$}\!\left/ \!\raisebox{-1ex}{$110$}\right.& \raisebox{1ex}{$28$}\!\left/ \!\raisebox{-1ex}{$99$}\right.& 0& \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$13$}\right.\\ 0& 0& \raisebox{1ex}{$7$}\!\left/ \!\raisebox{-1ex}{$99$}\right.& \raisebox{1ex}{$19$}\!\left/ \!\raisebox{-1ex}{$35$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$35$}\right.& 0\end{array}\right]$ | 0.5661 | $\left[\begin{array}{c}10.61\\ 18.20\\ 16.99\\ 51.59\\ 2.60\end{array}\right]$ |

2010 j = 1 | L_{1} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$19$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$14$}\!\left/ \!\raisebox{-1ex}{$30$}\right.& 0& 0& 0& \raisebox{1ex}{$31$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$4$}\!\left/ \!\raisebox{-1ex}{$30$}\right.& \raisebox{1ex}{$22$}\!\left/ \!\raisebox{-1ex}{$48$}\right.& \raisebox{1ex}{$17$}\!\left/ \!\raisebox{-1ex}{$55$}\right.& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ 0& 0& \raisebox{1ex}{$34$}\!\left/ \!\raisebox{-1ex}{$55$}\right.& \raisebox{1ex}{$23$}\!\left/ \!\raisebox{-1ex}{$26$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$26$}\right.& 0\end{array}\right]$ | 1.2283 | $\left[\begin{array}{c}15.90\\ 31.99\\ 18.25\\ 32.83\\ 1.03\end{array}\right]$ |

2011 j = 2 | L_{2} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$49$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& 0& 0& 0& \raisebox{1ex}{$85$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$6$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& \raisebox{1ex}{$35$}\!\left/ \!\raisebox{-1ex}{$45$}\right.& \raisebox{1ex}{$21$}\!\left/ \!\raisebox{-1ex}{$43$}\right.& 0& \raisebox{1ex}{$25$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ 0& 0& \raisebox{1ex}{$10$}\!\left/ \!\raisebox{-1ex}{$43$}\right.& \raisebox{1ex}{$48$}\!\left/ \!\raisebox{-1ex}{$57$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$4$}\!\left/ \!\raisebox{-1ex}{$57$}\right.& 0\end{array}\right]$ | 1.5779 | $\left[\begin{array}{c}17.20\\ 30.40\\ 39.39\\ 12.45\\ 0.55\end{array}\right]$ |

2012 j = 3 | L_{3} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$19$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$49$}\right.& 0& 0& 0& \raisebox{1ex}{$136$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ \raisebox{1ex}{$10$}\!\left/ \!\raisebox{-1ex}{$49$}\right.& \raisebox{1ex}{$45$}\!\left/ \!\raisebox{-1ex}{$86$}\right.& \raisebox{1ex}{$39$}\!\left/ \!\raisebox{-1ex}{$87$}\right.& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ 0& 0& \raisebox{1ex}{$28$}\!\left/ \!\raisebox{-1ex}{$87$}\right.& \raisebox{1ex}{$45$}\!\left/ \!\raisebox{-1ex}{$58$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$6$}\!\left/ \!\raisebox{-1ex}{$58$}\right.& 0\end{array}\right]$ | 1.2641 | $\left[\begin{array}{c}6.01\\ 43.15\\ 29.67\\ 19.56\\ 1.60\end{array}\right]$ |

2014 j = 5 | L_{5} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$4$}\!\left/ \!\raisebox{-1ex}{$3$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$19$}\!\left/ \!\raisebox{-1ex}{$3$}\right.\\ \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$16$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$98$}\right.& \raisebox{1ex}{$6$}\!\left/ \!\raisebox{-1ex}{$34$}\right.& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$3$}\right.\\ 0& 0& \raisebox{1ex}{$4$}\!\left/ \!\raisebox{-1ex}{$34$}\right.& \raisebox{1ex}{$16$}\!\left/ \!\raisebox{-1ex}{$50$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$4$}\!\left/ \!\raisebox{-1ex}{$50$}\right.& 0\end{array}\right]$ | 0.3988 | $\left[\begin{array}{c}11.71\\ 56.64\\ 11.69\\ 17.46\\ 3.50\end{array}\right]$ |

2015 j = 6 | L_{6} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$10$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$29$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ 0& \raisebox{1ex}{$10$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$10$}\right.& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$4$}\right.\\ 0& 0& \raisebox{1ex}{$5$}\!\left/ \!\raisebox{-1ex}{$10$}\right.& \raisebox{1ex}{$17$}\!\left/ \!\raisebox{-1ex}{$20$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$10$}\right.& 0\end{array}\right]$ | 1.0679 | $\left[\begin{array}{c}9.19\\ 26.66\\ 18.28\\ 41.94\\ 3.93\end{array}\right]$ |

2016 j = 7 | L_{7} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$8$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$10$}\right.& \raisebox{1ex}{$5$}\!\left/ \!\raisebox{-1ex}{$29$}\right.& \raisebox{1ex}{$5$}\!\left/ \!\raisebox{-1ex}{$13$}\right.& 0& 0\\ 0& 0& \raisebox{1ex}{$8$}\!\left/ \!\raisebox{-1ex}{$13$}\right.& \raisebox{1ex}{$20$}\!\left/ \!\raisebox{-1ex}{$22$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$22$}\right.& 0\end{array}\right]$ | 0.9611 | $\left[\begin{array}{c}5.26\\ 14.04\\ 6.02\\ 71.30\\ 3.37\end{array}\right]$ |

2017 j = 8 | L_{8} =$\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$12$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$23$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$3$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$8$}\right.& \raisebox{1ex}{$8$}\!\left/ \!\raisebox{-1ex}{$12$}\right.& 0& 0\\ 0& 0& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$12$}\right.& \raisebox{1ex}{$21$}\!\left/ \!\raisebox{-1ex}{$28$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$28$}\right.& 0\end{array}\right]$ | 1.1206 | $\left[\begin{array}{c}12.93\\ 24.78\\ 42.13\\ 18.95\\ 1.21\end{array}\right]$ |

2018 j = 9 | L_{9} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$13$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$38$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$12$}\right.& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$23$}\right.& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$13$}\right.& 0& 0\\ 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$13$}\right.& \raisebox{1ex}{$22$}\!\left/ \!\raisebox{-1ex}{$23$}\right.& 0\\ 0& 0& 0& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$23$}\right.& 0\end{array}\right]$ | 0.9617 | $\left[\begin{array}{c}13.23\\ 38.65\\ 2.89\\ 43.27\\ 1.96\end{array}\right]$ |

2019 j = 10 | L_{10} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ 0& 0& 0& 0& \raisebox{1ex}{$3$}\!\left/ \!\raisebox{-1ex}{$1$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$13$}\right.& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& 0& 0\\ 0& 0& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& \raisebox{1ex}{$18$}\!\left/ \!\raisebox{-1ex}{$23$}\right.& 0\\ 0& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$23$}\right.& 0\end{array}\right]$ | 0.8496 | $\left[\begin{array}{c}18.45\\ 18.45\\ 6.84\\ 51.04\\ 5.22\end{array}\right]$ |

2020 j = 11 | L_{11} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$8$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$3$}\right.& 0& 0& 0& \raisebox{1ex}{$9$}\!\left/ \!\raisebox{-1ex}{$2$}\right.\\ \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$3$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$3$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& 0& 0\\ 0& 0& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& \raisebox{1ex}{$13$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& 0\\ 0& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$4$}\right.& \raisebox{1ex}{$5$}\!\left/ \!\raisebox{-1ex}{$19$}\right.& 0\end{array}\right]$ | 1.3008 | $\left[\begin{array}{c}15.88\\ 21.94\\ 31.49\\ 25.53\\ 5.16\end{array}\right]$ |

2021 j = 12 | L_{12} = $\left[\begin{array}{ccccc}0& 0& 0& 0& \raisebox{1ex}{$29$}\!\left/ \!\raisebox{-1ex}{$5$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$8$}\right.& 0& 0& 0& \raisebox{1ex}{$44$}\!\left/ \!\raisebox{-1ex}{$5$}\right.\\ \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$8$}\right.& \raisebox{1ex}{$2$}\!\left/ \!\raisebox{-1ex}{$5$}\right.& 0& 0& 0\\ 0& 0& \raisebox{1ex}{$5$}\!\left/ \!\raisebox{-1ex}{$6$}\right.& \raisebox{1ex}{$14$}\!\left/ \!\raisebox{-1ex}{$15$}\right.& 0\\ 0& 0& \raisebox{1ex}{$0$}\!\left/ \!\raisebox{-1ex}{$6$}\right.& \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$15$}\right.& 0\end{array}\right]$ | 1.1143 | $\left[\begin{array}{c}14.86\\ 24.21\\ 10.36\\ 47.72\\ 2.85\end{array}\right]$ |

**Table 3.** Matrix **Prod** (4) up to the 15th significant digit, its dominant eigenvalue λ_{1}(**G**^{13}) = λ_{1}(**Prod**), the corresponding ρ_{0} = λ_{1}(**Prod**)^{1/13}, and the dominant eigenvector **v***, %.

| Matrix Prod | | | | | λ_{1}(G^{13}), ρ_{0} | Vector v*, % |
|---|---|---|---|---|---|---|
| 0.021185585295608 | 0.039538528446472 | 0.086369212318887 | 0.321576284325812 | 0.312397941844407 | λ_{1} = 0.31893645391 | 29.2923 |
| 0.032875920640909 | 0.061354070510449 | 0.134019914355007 | 0.498983075380778 | 0.484789540051083 | ρ_{0} = 0.91584799085 | 45.4532 |
| 0.003661007803335 | 0.006824023057585 | 0.014887199373520 | 0.055368303504470 | 0.054013601100565 | | 5.0480 |
| 0.013845526439184 | 0.025919668563603 | 0.056669501627721 | 0.210813605518226 | 0.203676548113200 | | 19.1962 |
| 0.000729124010650 | 0.001363464687049 | 0.002980796589266 | 0.011096313418442 | 0.010736266465211 | | 1.0103 |

**Table 4.**Solutions to the PMA problem by various optimization techniques (adapted from [14]).

| Optimization Method, Loss Function | Matrix G | | | | | λ_{1}(G) | Approximation Error |
|---|---|---|---|---|---|---|---|
| Basin hopping, $\Phi(\mathbf{G})={\Vert \mathbf{G}^{12}-\mathbf{Prod}\Vert}^{2}$ | 0 | 0 | 0 | 0 | 3.3309 | 0.8585 | 0.002374 |
| | 0.4530 | 0 | 0 | 0 | 7.8767 | | |
| | 0.0288 | 0.2936 | 0.1474 | 0 | 0 | | |
| | 0 | 0 | 0.1726 | 0.7589 | 0 | | |
| | 0 | 0 | 0 | 0.1034 | 0 | | |
| Basin hopping, $\Phi(\mathbf{G})={\Vert \mathbf{G}^{12}-\mathbf{Prod}\Vert}^{2}$ and penalty for constraint violations | 0 | 0 | 0 | 0 | 3.3309 | 0.8585 | 0.002374 |
| | 0.4533 | 0 | 0 | 0 | 7.8757 | | |
| | 0.0287 | 0.22936 | 0.1474 | 0 | 0 | | |
| | 0 | 0 | 0.1726 | 0.7589 | 0 | | |
| | 0 | 0 | 0 | 0.1034 | 0 | | |
| Basin hopping, S(G) = Φ(G)/Ψ(G), where $\Psi(\mathbf{G})={\Vert {\sum}_{j=0}^{11}{\mathbf{L}}_{12}\dots {\mathbf{L}}_{j+1}\Vert}^{2}$ | 0 | 0 | 0 | 0 | 3.3348 | 0.8584 | 0.002379 |
| | 0.4322 | 0 | 0 | 0 | 7.9666 | | |
| | 0.0363 | 0.28976 | 0.1485 | 0 | 0.0022 | | |
| | 0 | 0 | 0.1728 | 0.7587 | 0 | | |
| | 0 | 0 | 0 | 0.1034 | 0 | | |

**Table 5.** Solutions to the two inner LP problems (16).

| Inner Problem (16) | Matrix G(x) | | | | | λ_{1}(G) | Approximation Error |
|---|---|---|---|---|---|---|---|
| ${y}_{1}\to \underset{\mathit{x}\in {\mathbb{B}}_{1}}{\mathrm{min}}{y}_{1}\left(\mathit{x}\right)$, ${y}_{1}\left(\mathit{x}\right)={\rho}_{0}-{\lambda}_{1}\left(\mathit{G}\left(\mathit{x}\right)\right)$ | 0 | 0 | 0 | 0 | 26.5530 | 0.915847990853247 | 0 |
| | 0 | 0 | 0 | 0 | 41.2023 | | |
| | 0 | 0.2777 | 0.6667 | 0 | 0 | | |
| | 0 | 0 | 0.0632 | 0.8992 | 0 | | |
| | 0 | 0 | 0 | 0.0482 | 0 | | |
| ${y}_{2}\to \underset{\mathit{x}\in {\mathbb{B}}_{2}}{\mathrm{min}}{y}_{2}\left(\mathit{x}\right)$, ${y}_{2}\left(\mathit{x}\right)={\lambda}_{1}\left(\mathit{G}\left(\mathit{x}\right)\right)-{\rho}_{0}$ | 0 | 0 | 0 | 0 | 26.5530 | 0.915847990853247 | 1.1102 × 10^{−16} |
| | 0 | 0 | 0 | 0 | 41.2023 | | |
| | 0 | 0.2777 | 0.6667 | 0 | 0 | | |
| | 0 | 0 | 0.0632 | 0.8992 | 0 | | |
| | 0 | 0 | 0 | 0.0482 | 0 | | |


© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Logofet, D.O.
Pattern-Multiplicative Average of Nonnegative Matrices Revisited: Eigenvalue Approximation Is the Best of Versatile Optimization Tools. *Mathematics* **2023**, *11*, 3237.
https://doi.org/10.3390/math11143237
