Constrained Eigenvalue Minimization of Incomplete Pairwise Comparison Matrices by Nelder-Mead Algorithm

Abstract: Pairwise comparison matrices play a prominent role in multiple-criteria decision-making, particularly in the analytic hierarchy process (AHP). Another form of preference modeling, called an incomplete pairwise comparison matrix, is considered when one or more elements are missing. In this paper, an algorithm is proposed for the optimal completion of an incomplete matrix. Our intention is to numerically minimize a maximum eigenvalue function, which is difficult to write explicitly in terms of the variables, subject to interval constraints. Numerical simulations are carried out in order to examine the performance of the algorithm. The results of our simulations show that the proposed algorithm is able to solve the constrained eigenvalue minimization problem. We provide illustrative examples to show the simplex procedures obtained by the proposed algorithm, and how well it fills in the given incomplete matrices.


Introduction
Increasingly complex decisions are being made every day and, especially in positions of high responsibility, they must be "defensible" in front of stakeholders. In this context, decision analysis can be seen as a set of formal tools which can help decision makers articulate their decisions in a more transparent and justifiable way. Due to its nature, in decision analysis, much of the information required by various models is represented by subjective judgments (very often representing preferences) expressed by one or more experts on a reference set. In complex problems where the reference set can hardly be considered in its entirety, a divide-and-conquer approach can be helpful, so that judgments are expressed on pairs of alternatives or attributes and can then be combined to reach a global result (see, e.g., [1]). In this way, an originally complex problem is decomposed into many smaller and more tractable subproblems. Such judgments on pairs are called pairwise comparisons and are widely used in a number of multiple-criteria decision-making methods as, for instance, the analytic hierarchy process (AHP) by Saaty [2,3]. In addition to having been employed in the AHP, the technique of pairwise comparisons and some of its variants are currently used in other multi-criteria decision-making techniques such as, for instance, multi-attribute utility theory [4]; the best-worst method [5]; ELECTRE [6]; PROMETHEE [7]; MACBETH [8]; and PAPRIKA [9]. Hence, it is hard to overestimate the importance of pairwise comparisons in multi-criteria decision analysis.
A pairwise comparison matrix (PCM) is utilized to obtain a priority vector over a set of alternatives for a given criterion or quantify the priorities of the criteria. There are several methods for deriving priority vectors when all elements of the PCM are already known (see, e.g., [10]).
In contrast to the elegance of the approach based on PCMs, it must be said that it is often impractical (and sometimes even undesirable) to ask an expert to express their judgments on each possible pair of alternatives. Hence, incomplete pairwise comparison matrices [11] appeared, due to decision makers' limited knowledge about some alternatives or criteria, lack of time in the presence of a large number of alternatives, lack of data, or simply to avoid overloading the decision maker with too many questions.
In decision-making contexts, it is often important to infer the values of the missing comparisons starting from the knowledge of the existing ones. This procedure can be automatic or, even better, the inferred values can act as simple suggestions, and not impositions, to help and guide the expert during the elicitation process. In this paper, we consider an optimal completion algorithm for incomplete PCMs.
The first step in this direction consists of considering the missing entries as variables. Let R^k_+ be the positive orthant of the k-dimensional Euclidean space and R^{n×n}_+ be the set of n × n positive matrices. Then, for a given incomplete matrix A, let x = (x_1, x_2, . . . , x_k) ∈ R^k_+ be a vector of missing comparisons expressed as variables x_1, x_2, . . . , x_k. At this point, it is reasonable to assume that the missing entries are estimated so that they fit as well as possible with the existing ones. That is, all the entries, known and estimated, should minimize the global inconsistency of the matrix A. In other words, the estimated missing entries must be as coherent as possible with the already elicited judgments.
If we consider Saaty's CR index [2,3], the optimization problem is the following:

$$\min_{x \in \mathbb{R}^k_+} \lambda_{\max}(A(x)), \tag{1}$$

where A(x) is the incomplete PCM whose k missing upper-triangular entries are replaced by the variables x_1, x_2, . . . , x_k, with the mirrored positions holding their reciprocals (2k missing entries in total), and λ_max(A(x)) represents the maximum eigenvalue function of A(x). In addition, an optimal solution x* = (x*_1, x*_2, . . . , x*_k) to problem (1) is known as an optimal completion of A. Such a problem was originally considered in [12,13]. However, the focus of early research was a specific term of the characteristic polynomial and not λ_max, which, unlike that term, cannot be represented in closed form. More recently, the minimization of the Perron eigenvalue (1) of incomplete PCMs has been the object of further studies [14-16].
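Although λ_max(A(x)) admits no closed form in the variables, each evaluation of the objective of problem (1) is a single numerical eigenvalue computation. A minimal sketch (the 4 × 4 matrix below is a hypothetical placeholder, not one of the paper's examples):

```python
import numpy as np

def lambda_max(A):
    # Perron eigenvalue of a positive matrix: the eigenvalue of largest real part.
    return max(np.linalg.eigvals(A).real)

def A_of(x1, x2):
    # Hypothetical 4x4 PCM with two missing entries x1 = a_12 and x2 = a_24;
    # the mirrored positions hold the reciprocals (2k = 4 missing entries in total).
    return np.array([
        [1.0,      x1,      3.0,  1.0],
        [1.0/x1,   1.0,     1.0,  x2],
        [1.0/3.0,  1.0,     1.0,  1.0/3.0],
        [1.0,      1.0/x2,  3.0,  1.0],
    ])
```

For a completion that restores full consistency, λ_max equals the matrix order n; any other completion of a reciprocal matrix gives a strictly larger value.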
The reason for minimizing λ max , or equivalently, adopting Saaty's inconsistency index as an objective function instead of alternative inconsistency indices, was discussed in a survey study [17] (p. 761). It was chosen because of its widespread use.
An equal or even greater amount of interest has been generated by the optimal completion of incomplete PCMs by optimizing quantities other than λ_max. For example, Fedrizzi and Giove [18] proposed a method that minimizes a global inconsistency index in order to compute the optimal values for the missing comparisons. Benítez et al. [19] provided a method based on a linearization process to obtain a consistent completion of the incomplete PCM provided by a specific actor. Ergu et al. [20] completed incomplete PCMs by extending the geometric mean-induced bias matrix. Zhou et al. [21] proposed a DEMATEL-based completion method that estimates the missing values by deriving the total relation matrix from the direct relation matrix. Kułakowski [22] extended the geometric mean method (GMM) to incomplete PCMs, leading to the same results obtained using the logarithmic least squares method (LLSM) for incomplete PCMs by Bozóki et al. [15], where the weight ratio w_i/w_j substitutes the missing comparisons in the incomplete PCM. In addition, other researchers have studied several completion methods for incomplete PCMs [23-26].
The goal of this paper is to propose an efficient and scalable algorithm to solve the constrained problem, where the objective function is a maximum eigenvalue (Perron eigenvalue) function and the constraints are intervals. Due to the unavoidable uncertainty in expressing preferences, it often happens that a decision maker considers it more suitable to state their pairwise comparisons as intervals, rather than as a precise numerical value (see, e.g., [27][28][29][30]). Interval judgments therefore indicate a range for the relative importance of the attributes giving a necessary flexibility to the preference assessment. For this reason, we consider it particularly important that the proposed algorithm, described in Section 3, is able to solve a constrained optimization problem, where variables are subject to interval constraints.
All entries of each matrix in the paper consist of numerical values restricted to the interval [1/9, 9] in order to meet Saaty's proposal and comply with AHP formulation. It must, however, be said that our choice is not binding and that this requirement can be relaxed by removing some constraints.
The paper is organized as follows. Section 2 deals with basic definitions and illustrations regarding the purpose of the work. In Section 3, an algorithm is proposed to solve the constrained eigenvalue problem. In Section 4, illustrative examples are provided to demonstrate the simplex procedure and verify how the algorithm provides an optimal completion. In Section 5, numerical simulations are performed to validate the performance of the proposed algorithm. Finally, Section 6 concludes the paper.

Technical Background
In this section, we present the background terminology and some fundamental properties related to the goal of the paper.
Considering a reference set R = {r 1 , r 2 , . . . , r n } with cardinality n, pairwise comparisons in the form of ratios between the weights of the elements of the reference set can be collected into a mathematical structure called the pairwise comparison matrix (PCM).

Definition 1 (Pairwise comparison matrix).
A real matrix A = [a ij ] n×n is said to be a pairwise comparison matrix (PCM) if it is reciprocal and positive, i.e., a ji = 1/a ij and a ij > 0 for all i, j = 1, 2, . . . , n.
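As a sketch, a PCM built from a known positive weight vector satisfies Definition 1 by construction (and, in fact, also the consistency condition of Definition 2 below); the helper name is ours:

```python
import numpy as np

def pcm_from_weights(w):
    # a_ij = w_i / w_j: positive and reciprocal by construction,
    # since (w_i / w_j) * (w_j / w_i) = 1.
    w = np.asarray(w, dtype=float)
    return np.outer(w, 1.0 / w)

A = pcm_from_weights([4.0, 2.0, 1.0])
```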
Semantically, each entry of A is an estimation of the ratio between two positive weights, i.e., a_ij ≈ w_i/w_j, where w_i and w_j are the weights associated with the ith and the jth elements of the reference set, respectively.
The reciprocity of A is a minimal coherence condition which is always required. Nevertheless, as it is desirable to ask experts to discriminate with a sufficient level of rationality, it is important to determine whether the pairwise comparisons contained in a PCM represent rational preferences. The consistency condition corresponds to the condition of rationality.

Definition 2 (Consistency).
A pairwise comparison matrix A = [a ij ] n×n is said to be consistent if and only if the transitivity property a ik = a ij a jk holds for all i, j, k = 1, 2, . . . , n. Otherwise, it is called inconsistent.
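Definition 2 can be checked directly, up to a numerical tolerance, by testing every index triple; a brute-force sketch:

```python
import numpy as np
from itertools import product

def is_consistent(A, tol=1e-9):
    # Transitivity a_ik = a_ij * a_jk for all i, j, k (Definition 2).
    n = A.shape[0]
    return all(abs(A[i, k] - A[i, j] * A[j, k]) <= tol
               for i, j, k in product(range(n), repeat=3))
```

A matrix built from a weight vector as a_ij = w_i/w_j passes the test; perturbing a single off-diagonal entry (and its reciprocal) breaks transitivity.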
However, in light of our cognitive limits, it is hardly ever possible for an expert to express perfectly consistent preferences. When Saaty's discrete scale is used, consistency is extremely rare: the proportion of consistent matrices among all 4 × 4 PCMs is 0.001421% [31]. For this reason, as discussed in a recent survey [17], proposals of inconsistency indices are abundant in the literature. In spite of the variety of proposals, the inconsistency ratio CR proposed by Saaty [2] in his works on the AHP has gained and maintained prominence and to date continues to represent the standard in the field.

Definition 3 (Inconsistency Ratio).
Saaty's inconsistency ratio (CR) [2,3] is defined by

$$CR(A) = \frac{CI(A)}{RI_n}, \qquad CI(A) = \frac{\lambda_{\max}(A) - n}{n - 1},$$

where λ_max(A) is the Perron eigenvalue of the complete PCM A, and RI_n is the random index value associated with the matrix size n, reported in Table 1. Indeed, as a quantification of inconsistency, the greater the value of CR, the greater the estimated inconsistency of the judgments contained in the pairwise comparison matrix. Moreover, as a certain level of inconsistency is unavoidable and must be tolerated, Saaty [2] proposed the cut-off rule CR < 0.1 to define the set of acceptable PCMs.
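Definition 3 translates directly into code. The RI_n values below are the commonly tabulated ones and stand in for the paper's Table 1 (an assumption on our part; Table 1 takes precedence where it differs):

```python
import numpy as np

# Commonly tabulated random index values RI_n for n = 3..10 (assumed values).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    # CR = CI / RI_n, with CI = (lambda_max - n) / (n - 1).
    n = A.shape[0]
    lam = max(np.linalg.eigvals(A).real)
    return (lam - n) / (n - 1) / RI[n]
```

A consistent PCM has λ_max = n and hence CR = 0; Saaty's rule accepts matrices with CR < 0.1.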
Note that the inconsistency of a PCM may arise due to the redundancy of the judgments in the PCM itself.
Definition 4 (Incomplete pairwise comparison matrix). A pairwise comparison matrix A = [a_ij]_{n×n} is called an incomplete pairwise comparison matrix if one or more elements are missing, i.e., a_ji = 1/a_ij if a_ij is known, and a_ji = a_ij = * otherwise, where * represents the unknown elements. In short, it can be represented in the form

$$A = \begin{pmatrix} 1 & a_{12} & \cdots & * \\ 1/a_{12} & 1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ * & 1/a_{2n} & \cdots & 1 \end{pmatrix}.$$

It is conventional to visualize the structure of an incomplete pairwise comparison matrix using an associated directed or undirected graph. Given the aims of this paper, we will use the concept of an undirected graph.
Definition 5 (Undirected graph). The undirected graph G associated with a given n × n incomplete pairwise comparison matrix is defined as

$$G = (V, E), \tag{3}$$

where V = {1, 2, . . . , n} denotes the set of vertices (nodes) and E denotes the set of undirected edges {i, j} (pairs of vertices) corresponding to the already assigned comparisons, E = {{i, j} | a_ij is known, i, j = 1, 2, . . . , n; i ≠ j}.
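Connectivity of the graph G of Definition 5 can be tested by a simple graph traversal; a sketch that marks missing entries with NaN (a representation assumption of ours):

```python
import numpy as np

def is_connected(A):
    # Depth-first search on G = (V, E): vertices are the indices 0..n-1, and
    # {i, j} is an edge whenever a_ij is known (here: not NaN).
    n = A.shape[0]
    seen, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for j in range(n):
            if j != i and j not in seen and not np.isnan(A[i, j]):
                seen.add(j)
                frontier.append(j)
    return len(seen) == n
```

For instance, a 4 × 4 matrix whose known entries form a path over the four indices is connected; dropping the last comparison of the path isolates a vertex.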
This means that if the matrix entry a_ij is already known, an edge is allocated between node i and node j, with the exception of the diagonal entries. No edge is assigned to unknown entries.

Theorem 1 ([15], Theorem 2). The optimal completion of the incomplete PCM A is unique if and only if the graph G corresponding to the incomplete PCM A is connected.
Note that the matrix A plays a role which is similar to that of the adjacency matrix of the graph G. The connectedness of the graph (3) will be an important property for our study. In fact, we only consider incomplete PCMs corresponding to connected graphs. Let us briefly justify this assumption. If an incomplete n × n PCM corresponds to a non-connected graph, this simply means that there exist at least two non-empty subsets of elements of the reference set such that no element of the first subset is compared (directly or indirectly) with any element of the second subset. For practical purposes, this is clearly an irrelevant problem, and we do not consider it.

Definition 6 (Simplex). A simplex S in R^k_+ is defined as the convex hull of k + 1 vertices x_1, . . . , x_k, x_{k+1} ∈ R^k_+. For example, a simplex in R^1_+ is a line segment, a simplex in R^2_+ is a triangle, and a simplex in R^3_+ is a tetrahedron.

Nelder-Mead Algorithm for the Optimal Completion of Incomplete PCMs
In this section, a Nelder-Mead algorithm [33] is implemented for the 'optimal completion' of an incomplete pairwise comparison matrix. Let A(x) denote the incomplete PCM whose missing entries are replaced by the variables x = (x_1, x_2, . . . , x_k) ∈ R^k_+. The constrained eigenvalue minimization problem is defined by

$$\min \lambda_{\max}(A(x)) \quad \text{s.t.} \quad l_i \le x_i \le u_i, \quad i = 1, 2, \ldots, k, \tag{4}$$

where l_i and u_i are the lower and upper bounds for the variable x_i, respectively. From now on, we shall consider the restriction 1/9 ≤ l_i, u_i ≤ 9 for all i = 1, 2, . . . , k.
In general, a constrained minimization problem cannot be directly solved by the Nelder-Mead algorithm. However, the constrained problem can be transformed into an unconstrained problem by applying coordinate transformations and penalty functions. Then, the unconstrained minimization problem is solved using the Nelder-Mead algorithm or MATLAB's built-in function fminsearch [34][35][36]. Since our optimization problem (4) is constrained with scalar bounds, it is sufficient to use the coordinate transformation techniques [37] detailed in Section 3.1. It should be noted that our goal is to minimize the maximum eigenvalue (Perron eigenvalue) function, which, in general, cannot be expressed as an analytic function of the variables x 1 , x 2 , . . . , x k .
The procedure to specify the objective function (the Perron eigenvalue) to be minimized numerically is given as follows:
(1) Fill in the missing positions of the incomplete PCM A with zeros;
(2) Set the initial value x_0 = (x_1, x_2, . . . , x_k), where k is the number of missing entries in the upper triangle of A;
(3) Let t_0 = (t_1, t_2, . . . , t_k) such that t_s = log(x_s) for s = 1, 2, . . . , k;
(4) Let i, j = 1, 2, . . . , n and A(i, j) be the entry of A in row i and column j. For i < j, put the exponential function e^{t_s} in place of A(i, j) = 0, and e^{-t_s} in place of A(j, i) = 0, for all s = 1, 2, . . . , k;
(5) Calculate all eigenvalues of A;
(6) Return the Perron eigenvalue from step (5).
Note that the initial value x_0 in step (2) can be replaced by any vertex x_m = (x_1, . . . , x_k), m = 1, 2, . . . , k + 1, of a simplex in the Nelder-Mead algorithm. This is due to the fact that the exponential parameterization of x_m ∈ R^k_+ is x_m = (e^{t_1}, e^{t_2}, . . . , e^{t_k}), and hence t_m = (log(x_1), . . . , log(x_k)). Thus, f(x_m) is obtained from step (6).
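Steps (1)-(6) can be sketched as a single objective function of t = (t_1, . . . , t_k). In this sketch the incomplete matrix holds zeros at the missing positions, as in step (1), and the upper-triangular positions of the missing entries are passed explicitly:

```python
import numpy as np

def perron_objective(t, A_incomplete, missing):
    # Place e^{t_s} at the s-th missing upper-triangular position (i, j) and
    # e^{-t_s} at the mirrored position (j, i), then return the Perron
    # eigenvalue of the completed matrix (steps (4)-(6)).
    A = A_incomplete.copy()
    for s, (i, j) in enumerate(missing):
        A[i, j] = np.exp(t[s])
        A[j, i] = np.exp(-t[s])
    return max(np.linalg.eigvals(A).real)
```

Because x_s = e^{t_s} > 0 for any real t_s, positivity and reciprocity of the completion are automatic.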

The Coordinate Transformation Method
Here, we use a simple coordinate transformation technique [38] (pp. 23-24); [34,36] derived from the trigonometric function sin(z) for dual bound constraints (lower and upper bounds). Let x = (x_1, x_2, . . . , x_k) ∈ R^k_+ be the original k-dimensional variable vector (or, equivalently, the k missing comparisons), and z = (z_1, z_2, . . . , z_k) ∈ R^k be the new search vector. Let

$$x_i = l_i + (u_i - l_i)\,\frac{\sin(z_i) + 1}{2}, \qquad i = 1, 2, \ldots, k. \tag{6}$$

From the initial values x_{0,i}, where l_i ≤ x_{0,i} ≤ u_i, the initial values z_{0,i} are calculated as

$$z_{0,i} = \arcsin\!\left(\frac{2(x_{0,i} - l_i)}{u_i - l_i} - 1\right).$$

In addition, the diameter of the initial simplex region may be vanishingly small. In order to avoid this problem, it is recommended to shift the initial coordinate values by 2π [34], meaning that

$$z_{0,i} \leftarrow z_{0,i} + 2\pi.$$

Note that z_{0,i} is the ith component of z_0, and x_{0,i} is the ith component of x_0.
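A sketch of this transformation as it is commonly implemented in bound-constrained fminsearch wrappers (where our sketch differs from the transformation (6) of the paper, the paper takes precedence):

```python
import numpy as np

def z_to_x(z, l, u):
    # Map the unconstrained search vector z onto the box [l, u] component-wise.
    return l + (u - l) * (np.sin(z) + 1.0) / 2.0

def initial_z(x0, l, u):
    # Invert the transform at the initial point and shift by 2*pi to avoid a
    # vanishingly small initial simplex, as recommended in [34].
    return np.arcsin(2.0 * (x0 - l) / (u - l) - 1.0) + 2.0 * np.pi
```

The round trip z_to_x(initial_z(x0, l, u), l, u) recovers x0, and any z produced by the Nelder-Mead iterations maps back into [l, u].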

Nelder-Mead Algorithm
The Nelder-Mead algorithm (also known as the simplex search algorithm) has been one of the most popular direct search methods for unconstrained optimization problems since it originally appeared in 1965 [33], and it is suitable for the minimization of functions of several variables. It does not require derivative information, which makes it particularly appropriate for minimizing functions whose derivatives are unknown or discontinuous [39].
The Nelder-Mead algorithm uses a simplex with k + 1 vertices (points) for k-dimensional vectors (for a function of k variables) in finding the optimum point. In each iteration of the algorithm, the k + 1 vertices are updated and ordered according to the increasing values of the objective function on the k + 1 vertices (see, e.g., [39,40]).
The algorithm has four possible steps in a single iteration: reflection, expansion, contraction and shrink. The associated scalar parameters are the coefficients of reflection (ρ), expansion (χ), contraction (γ) and shrink (σ). They must satisfy the following constraints:

$$\rho > 0, \qquad \chi > 1, \qquad \chi > \rho, \qquad 0 < \gamma < 1, \qquad 0 < \sigma < 1.$$

In most circumstances, the Nelder-Mead algorithm achieves substantial improvements and produces very satisfactory results in the first few iterations. In addition, except for the shrink transformation, which is exceedingly unusual in practice, the algorithm normally requires only one or two function evaluations per iteration. This is very useful in real applications where a function evaluation takes a long time or is costly. In such situations, the method is frequently faster than other existing derivative-free optimization methods [41,42].
The Nelder-Mead algorithm is popular and widely used in practice. The fundamental reason for its popularity, aside from being simple to understand and code, is its ability to achieve a significant reduction in function value with a small number of function evaluations [41]. The method has been used extensively in practical applications, especially in chemistry, medicine and chemical engineering [42]. It has also been implemented and included in different libraries and software packages, for instance, Numerical Recipes in C [43]; MATLAB's fminsearch [44]; Python [45]; and MATHEMATICA [46].
The practical implementation of the Nelder-Mead algorithm is often reasonable, although it may, in some rare cases, get stuck in a non-stationary point. Restarting the algorithm several times can be used as a heuristic approach when stagnation occurs [47]. The convergence properties of the algorithm for low and high dimensions were studied by [39,[48][49][50]. Furthermore, the complexity analysis for a single iteration of the Nelder-Mead algorithm has been reported in the literature [51,52].
Another version of the standard Nelder-Mead algorithm [48] has been implemented using adaptive parameters; the authors found that it performs better than MATLAB's fminsearch on several benchmark test functions for higher-dimensional problems, but they did not clearly state the convergence properties of the method.
A modified Nelder-Mead algorithm [35] has also been proposed for solving general nonlinear constrained optimization problems with linear and nonlinear (in)equality constraints. Several benchmark problems were examined and compared with various methods (the α constrained method with mutation [53]; the genetic algorithm [54]; and the bees algorithm [55], to mention a few) to evaluate the performance of that algorithm. Regarding effectiveness and efficiency, the authors found it competitive with such algorithms. Nonetheless, our approach to handling the interval constraints is different.
The algorithm starts with an initial simplex with k + 1 non-degenerate vertices x_1, x_2, . . . , x_{k+1} around a given initial point x_0. Vertex x_1 can be chosen arbitrarily; however, the most common choice in implementations is x_1 = x_0, in order to allow proper restarts of the algorithm [41]. The remaining k vertices are then generated with step size 0.05 in the directions of the unit vectors e_j = (0, 0, . . . , 1, . . . , 0) ∈ R^k [44]:

$$x_{j+1} = x_1 + 0.05\, e_j, \qquad j = 1, 2, \ldots, k.$$

The initial simplex S_0 is the convex hull of the k + 1 vertices x_1, . . . , x_k, x_{k+1} ∈ R^k_+. The vertices of S_0 are ordered so as to satisfy increasing function values:

$$f(x_1) \le f(x_2) \le \cdots \le f(x_{k+1}).$$

We consider x_1 the best vertex (the vertex with the minimum function value) and x_{k+1} the worst vertex (the vertex with the maximum function value). The centroid is calculated as $\bar{x} = \frac{1}{k}\sum_{j=1}^{k} x_j$, which is the average of the k non-worst vertices, i.e., all vertices (points) except for x_{k+1}.
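The initial simplex construction can be sketched as follows (MATLAB's fminsearch actually perturbs nonzero components of x_0 by 5% rather than by a fixed 0.05; the fixed step here matches the simplified description above):

```python
import numpy as np

def initial_simplex(x0, step=0.05):
    # x_1 = x_0, and x_{j+1} = x_1 + step * e_j for j = 1, ..., k.
    x1 = np.asarray(x0, dtype=float)
    vertices = [x1]
    for j in range(len(x1)):
        v = x1.copy()
        v[j] += step
        vertices.append(v)
    return vertices
```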
According to [39,56], the description of one simplex iteration of the standard Nelder-Mead algorithm is presented in Algorithm 1. At each iteration, the simplex vertices are ordered as x 1 , . . . , x k , x k+1 according to the increasing values of the objective function.
Note that the new working simplex in a non-shrink iteration of the algorithm has only one new vertex, which replaces the worst vertex x_{k+1} of the former simplex S. In the case of a shrink step, the new simplex S contains k new vertices x_2, x_3, . . . , x_{k+1}; only vertex x_1 is retained from the former simplex S. We can now solve the constrained eigenvalue minimization problem (4) using the standard Nelder-Mead Algorithm 1 (equivalent to MATLAB's fminsearch algorithm) in connection with the coordinate transformation (6) at each simplex iteration; henceforth, we shall simply call it the Nelder-Mead algorithm.
For the given values of TolX, TolFun, MaxIter, and MaxFunEvals, the Nelder-Mead algorithm terminates when one of the following three conditions is satisfied: (i) the maximum distance between the best vertex and the remaining vertices falls below TolX and, simultaneously, the spread of the corresponding function values falls below TolFun; (ii) the number of iterations exceeds MaxIter; (iii) the number of function evaluations exceeds MaxFunEvals. It is important to note that problem (4) is a non-convex problem, but it can be transformed into a convex minimization problem by using the exponential parameterization [15]. From now on, we consider our optimization problem to be convex and constrained with scalar bounds.

Algorithm 1 One iteration of the standard Nelder-Mead algorithm.
while the stopping criteria are not satisfied do
    Sort the vertices so that f(x_1) ≤ f(x_2) ≤ · · · ≤ f(x_{k+1}) and compute the centroid x̄ of x_1, . . . , x_k.
    Reflection: compute x_r = x̄ + ρ(x̄ − x_{k+1}).
    if f(x_1) ≤ f(x_r) < f(x_k) then
        Accept x_r and replace the worst vertex: x_{k+1} ← x_r.
    else if f(x_r) < f(x_1) then
        Expansion: compute x_e = x̄ + χ(x_r − x̄).
        if f(x_e) < f(x_r) then accept x_e and replace the worst vertex x_{k+1} with x_e
        else x_{k+1} ← x_r (accept x_r and replace the worst vertex).
    else
        Contraction: compute x_c = x̄ + γ(x_r − x̄) (outside, if f(x_k) ≤ f(x_r) < f(x_{k+1})) or x_c = x̄ + γ(x_{k+1} − x̄) (inside, otherwise).
        if the contraction point improves on x_r (respectively, x_{k+1}) then accept x_c
        else Shrink: x_j ← x_1 + σ(x_j − x_1) for j = 2, . . . , k + 1.
    Apply the coordinate transformation (6) for each new (accepted) vertex x_j, j = 1, 2, . . . , k + 1. S ← {x_j}_{j=1,...,k+1}. Compute f_j = f(x_j), j = 1, 2, . . . , k + 1. Sort the k + 1 vertices of the simplex S by increasing objective function values.
end while
Here, we apply the proposed algorithm for solving the constrained eigenvalue minimization (4) for the given incomplete PCM A. Moreover, we use Saaty's inconsistency ratio (CR) as our inconsistency measure hereinafter.
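Putting the pieces together, the constrained problem (4) can also be realized with an off-the-shelf Nelder-Mead implementation by searching over the unconstrained z and mapping back through the sin transform; a sketch using scipy (the helper name and the 4 × 4 test matrix in the usage below are hypothetical placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def solve_completion(A0, missing, l, u, x0):
    # A0: incomplete PCM with zeros at the missing positions; missing: the
    # upper-triangular (i, j) positions; l, u: bounds; x0: initial point.
    def x_of(z):
        return l + (u - l) * (np.sin(z) + 1.0) / 2.0

    def objective(z):
        A = A0.copy()
        for s, (i, j) in enumerate(missing):
            A[i, j] = x_of(z)[s]
            A[j, i] = 1.0 / A[i, j]
        return max(np.linalg.eigvals(A).real)

    z0 = np.arcsin(2.0 * (x0 - l) / (u - l) - 1.0) + 2.0 * np.pi
    res = minimize(objective, z0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12})
    return x_of(res.x), res.fun
```

On a matrix whose missing entries admit a fully consistent completion, the returned λ_max is (numerically) the matrix order n.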

Illustrative Examples
In this section, examples are given to illustrate the optimal completion of the incomplete PCMs using the Nelder-Mead algorithm. We are interested in matrices of order 4 and above, because matrices of order 3 have an analytic formula [58], and hence the optimal completion will be trivial.

Example 1. Consider the 4 × 4 incomplete pairwise comparison matrix A :
where * represents the missing comparisons. Clearly, by using the consistency condition, one obtains a_12 = a_13 a_32 = 3 and a_24 = a_23 a_34 = 1/3. Equivalently, the above pairwise comparison matrix with unknown variables x_1 and x_2 can be rewritten as A(x). As mentioned before, it may be worthwhile for the expert to express their preferences in the form of intervals. Therefore, we can formulate, as examples, two instances of the eigenvalue minimization problem with interval constraints:

$$\min \lambda_{\max}(A(x)) \quad \text{s.t.} \quad 1/9 \le x_1 \le 9, \quad 1/9 \le x_2 \le 9, \tag{11}$$

and

$$\min \lambda_{\max}(A(x)) \quad \text{s.t.} \quad 5 \le x_1 \le 7, \quad 1/9 \le x_2 \le 9. \tag{12}$$
With the initial value x_0 = (1, 1), applying the proposed algorithm to minimization (11), the optimal solution is x* = (3, 1/3), and hence λ_max(A(x*)) = 4. In other words, it rebuilds a consistent matrix with entries in Saaty's discrete scale. Moreover, the iterations and the change of the function values are reported in Table 2 and in Figure 1. The red point on the contour plot depicted in Figure 1 indicates the constrained minimum (3, 1/3).
Again, solving the optimization problem (12) with the initial value x_0 = (6, 1), the algorithm reaches the solution x_1 = 5 and x_2 = 0.2582 with λ_max = 4.0246. Here, we omit the table and figure for this problem, as the simplex procedure is similar to that in Table 2. Note that, due to the constraint on x_1, it is no longer possible to obtain a consistent matrix. Again, if we change the constraint to 1/7 ≤ x_1 ≤ 1/5 in the same problem, the solution becomes x* = (0.2000, 1.2910) with CR = 0.2887.
Applying the method of cyclic coordinates [15] to minimization (12), the optimal solution is x* = (5.0003, 0.2582). The solution is very similar to the previous optimal solution except for x_1 = 5.0003. MATLAB's built-in function fminbnd, used within the method of cyclic coordinates, actually returns an optimal point in the interior of the interval (5, 7), even though the exact solution lies on the boundary. This is due to slow convergence when the optimal solution saturates some constraints. Conversely, in the case of our algorithm (using the coordinate transformation technique), the optimal point returned can be a boundary point. Furthermore, the search performed by the cyclic coordinate method is "blind", whereas the Nelder-Mead algorithm does a better job of interpreting the topology of the function, even in the absence of information on the derivatives.

Table 2. Iteration number, incumbent optimal value of λ_max, and simplex procedure for minimization (11).
Then, solving minimization (13) with the initial value x_0 = (1, 1, 1, 1, 1, 1) using the proposed algorithm yields the optimal solution reported in Table 3, with the corresponding iterations shown in Figure 2. The first graph in Figure 2 shows the evolution of the variables with respect to the number of iterations. Moreover, the convergence of the variables is not monotone; this is mainly due to the contraction step. For example, the value of x_2 fluctuates at iterations 11, 18, and 27. Furthermore, the value of λ_max in Table 3 drops significantly at iterations 2 and 6 due to the expansion step.


Numerical Simulations
In this section, we validate the performance of the proposed algorithm by numerical simulations, for the following reasons. If the optimization problem is strictly convex, then global convergence is guaranteed in one dimension (one missing comparison). In dimension two, the algorithm may converge to a non-stationary point even though the problem is strictly convex [50]: McKinnon constructed a family of strictly convex functions of two variables, up to three times continuously differentiable, for which the algorithm converges to a non-stationary point. Furthermore, another version of the algorithm (the adaptive Nelder-Mead algorithm [48]), among several variants of the Nelder-Mead algorithm (see, for instance, [60,61]), has been studied for high-dimensional problems, but its global convergence is not well examined.
In general, the convergence properties of the proposed algorithm lack a precise statement. However, numerical simulations can help clarify the performance of the algorithm with respect to the positive interval constraints [37]. The simulation results can provide information on how well the proposed algorithm fills an incomplete matrix, and validate its performance for a large number of missing comparisons.
In the introduction, we specified that only incomplete PCMs corresponding to connected graphs are taken into account. In fact, it is also worth noting that, if the undirected graph associated with the incomplete matrix is connected, then the parameterized eigenvalue function becomes strictly convex, and therefore the optimal solution will be unique [15]. We stress again that we only consider the case of connected undirected graphs for examining the results of our simulation.
Here, it should be recalled that the Nelder-Mead algorithm directly provides the values of the missing entries. These values fill the gaps and thus yield the reconstructed (complete) PCMs. The simulation results obtained by the algorithm are measured by Saaty's inconsistency ratio (CR).
To be more precise, the consistent matrix is modified by a multiplicative perturbation U (Hadamard, i.e., component-wise, multiplication by U, respecting the interval [1/9, 9], after which the lower triangular part is rebuilt through reciprocity), where U is an upper triangular matrix generated from the log-normal distribution [62] (pp. 131-134).
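This multiplicative perturbation can be sketched as follows (the spread parameter sigma and the random generator seed are assumptions of ours; the paper draws the factors from the log-normal distribution of [62]):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(A, sigma=0.3):
    # Hadamard (component-wise) multiplicative perturbation of a consistent
    # PCM: scale each upper-triangular entry by a log-normal factor, clip to
    # [1/9, 9], and rebuild the lower triangle through reciprocity.
    n = A.shape[0]
    B = np.ones_like(A)
    for i in range(n):
        for j in range(i + 1, n):
            B[i, j] = np.clip(A[i, j] * rng.lognormal(0.0, sigma), 1.0/9.0, 9.0)
            B[j, i] = 1.0 / B[i, j]
    return B
```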
Note that type (ii) matrices correspond to more reasonable real-life cases with respect to the very general type (i) matrices.
In order to construct an incomplete PCM, the 'eliminating strategy' for both classes of matrices, with matrix sizes n = 4, 5, 6, 7, 8, 9, 10, is as follows. First, a complete pairwise comparison matrix is generated. Then, we remove one or more entries of the upper triangle at random, using a uniform distribution, to produce a varying number of missing entries for each matrix size. Subsequently, we reconstruct the lower triangle from the upper one, so that the reciprocals of the unknowns are also unknown. An incomplete PCM is thus constructed. Throughout this process, a test checks whether the associated graph is connected.
In the end, a complete PCM is reconstructed on the basis of a connected graph by applying the proposed algorithm with bound constraints on [1/9, 9]. Then, the average CR over 10,000 simulations is calculated for a fixed number of missing entries (k) and a given matrix size n (a similar procedure was applied in [63] (pp. 7-8)). More precisely, the steps for calculating the average CR of the reconstructed matrices on the basis of connected graphs, for both matrix types (i) and (ii), are as follows: (1) Fix k and n; (2) Generate a random complete PCM on [1/9, 9]; (3) Make the matrix incomplete at random positions using a uniform distribution; (4) Identify whether the graph associated with the incomplete matrix is connected or disconnected; if it is connected, apply the proposed algorithm for the optimal completion of the incomplete PCM using the same interval constraint [1/9, 9]; (5) Compute and save the CR value of the reconstructed matrix; (6) Repeat steps (2)-(5) until 10,000 CR values are obtained; (7) Calculate the average CR.
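The steps above can be sketched end to end. For brevity, this sketch runs far fewer than 10,000 repetitions, fixes n = 4 with RI_4 = 0.90 (an assumed random index value standing in for Table 1), and optimizes over the unconstrained log-parameterization, omitting the interval constraints (a simplification of the paper's procedure):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def connected(A):
    # Traversal over the graph whose edges are the known (nonzero) off-diagonal entries.
    n = A.shape[0]
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if j != i and j not in seen and A[i, j] != 0.0:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

def average_cr(n=4, k=2, runs=5, ri=0.90):
    crs = []
    while len(crs) < runs:
        # (2) random complete reciprocal PCM with entries in [1/9, 9]
        A = np.ones((n, n))
        iu = [(i, j) for i in range(n) for j in range(i + 1, n)]
        for i, j in iu:
            A[i, j] = np.exp(rng.uniform(np.log(1.0/9.0), np.log(9.0)))
            A[j, i] = 1.0 / A[i, j]
        # (3) remove k upper-triangular entries at random (marked by 0)
        missing = [iu[m] for m in rng.choice(len(iu), size=k, replace=False)]
        for i, j in missing:
            A[i, j] = A[j, i] = 0.0
        # (4) connectivity check, then optimal completion via Nelder-Mead
        if not connected(A):
            continue
        def obj(t):
            B = A.copy()
            for s, (i, j) in enumerate(missing):
                B[i, j], B[j, i] = np.exp(t[s]), np.exp(-t[s])
            return max(np.linalg.eigvals(B).real)
        lam = minimize(obj, np.zeros(k), method="Nelder-Mead").fun
        # (5) CR of the reconstructed matrix
        crs.append((lam - n) / (n - 1) / ri)
    # (7) average CR
    return float(np.mean(crs))
```

Since λ_max ≥ n for any positive reciprocal matrix, every CR value collected this way is non-negative.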

Simulation Results
The results of the simulations for matrix type (i) are reported in Table 4 and in Figure 3. The numbers reported in boldface font are calculated from spanning trees; this happens when k = n(n − 1)/2 − (n − 1) = (n − 1)(n − 2)/2. Such numbers do not appear in Table 4 if n ≥ 7. Due to excessive processing time, we did not compute the average CR for k > 12, except for some large values of k shown in Tables 6 and 7.
The first row in Table 4 represents the average CR of the original matrices when the number of missing entries is null, i.e., k = 0. It can be observed that the average CR decreases down each column as the number of missing entries rises for each matrix size n. On the contrary, the sequence is increasing across all rows, with the exception of k = 0. Note that our initial complete matrices are, on average, inconsistent.
The relations between the number of missing comparisons (k) and the respective average CR are depicted in Figure 3. As can be seen from all the line graphs, the average CR decreases for each n as the number of missing comparisons (k) increases.
Following the same simulation procedure, the average CR values obtained from matrix type (ii) are presented in Table 5 and in Figure 4. The numbers that appear in boldface font are again calculated from spanning trees. The results are similar to those of Table 4 and Figure 3 in terms of monotonicity. An interesting result in this case is that the average CR values of the last row in each column are below Saaty's threshold of 0.1, even though the initial complete matrices had a CR, on average, greater than 0.1.
In our simulations, we observed the performance of the algorithm by taking the matrix size n from 4 up to 10 and the number of missing comparisons (k) up to 12, because of excessive computational time. However, the average CR for some larger numbers of missing comparisons (k) is reported in Tables 6 and 7, in order to examine the efficiency of the algorithm for large k. Since our simulation results are based on connected graphs, the eigenvalue function is strictly convex, and therefore the optimal solutions obtained by the proposed algorithm are unique. Moreover, the algorithm provides more consistent PCMs with more incomplete information, that is, when k is closer to n(n − 1)/2.
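The uniqueness claim can be illustrated numerically: when the graph of known comparisons is connected, Nelder-Mead runs started from very different initial points should reach the same completion. The sketch below (our own Python illustration with a hypothetical 4x4 matrix, using clipping to mimic the interval constraint) completes one missing entry twice, from starting values 0.2 and 5.0.

```python
import numpy as np
from scipy.optimize import minimize

def lam(A):
    return max(np.linalg.eigvals(A).real)

def complete_from(A, i, j, x0):
    """Fill the single missing entry (i, j), starting Nelder-Mead at x0;
    the constraint [1/9, 9] is enforced by clipping."""
    def obj(x):
        B = A.copy()
        v = float(np.clip(x[0], 1 / 9, 9))
        B[i, j], B[j, i] = v, 1.0 / v
        return lam(B)
    r = minimize(obj, [x0], method='Nelder-Mead', options={'xatol': 1e-9})
    return float(np.clip(r.x[0], 1 / 9, 9))

# A reciprocal 4x4 PCM with entry (0, 3) treated as missing
# (the 1s at positions (0, 3) and (3, 0) are placeholders).
A = np.array([[1,   2,   4,   1],
              [1/2, 1,   2,   3],
              [1/4, 1/2, 1,   2],
              [1,   1/3, 1/2, 1]], float)

v1 = complete_from(A, 0, 3, 0.2)   # start far below
v2 = complete_from(A, 0, 3, 5.0)   # start far above
```

Since the remaining comparisons form a connected graph, both runs converge to the same value, consistent with the strict convexity of the eigenvalue function in the log-transformed variables.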
We conclude that the algorithm performs well and is capable of providing an optimal completion for the incomplete PCM up to k = (n − 1)(n − 2)/2. Furthermore, due to the connectedness of the associated undirected graphs used in our numerical simulations, the optimal solutions obtained by the method are unique (from Theorem 1).
The computation time of the proposed algorithm to reconstruct complete PCMs from the incomplete PCMs, and then to calculate the average CR of 10,000 completed PCMs, versus the number of missing comparisons k corresponding to Table 4, is shown in Figure 5 for matrices of size n = 4, 5, 6, 7, 8, 9, 10. The computation time is measured using MATLAB's tic-toc function. As can be seen in Figure 5, for each matrix size n, the computation time increases as the number of missing comparisons (k) increases. Note that the execution time excludes the formation of the initial and incomplete matrices.
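The timing setup can be mimicked outside MATLAB as follows; this Python sketch uses `time.perf_counter` in place of tic-toc, a hypothetical 6x6 random matrix, and our own clipping-based completion, so the absolute times are not comparable with Figure 5.

```python
import time
import numpy as np
from scipy.optimize import minimize

def lam(A):
    return max(np.linalg.eigvals(A).real)

def timed_completion(A, missing):
    """Time one optimal completion, analogous to wrapping the
    reconstruction step in MATLAB's tic ... toc."""
    def obj(x):
        B = A.copy()
        for (i, j), v in zip(missing, np.clip(x, 1 / 9, 9)):
            B[i, j], B[j, i] = v, 1.0 / v
        return lam(B)
    t0 = time.perf_counter()
    minimize(obj, np.ones(len(missing)), method='Nelder-Mead')
    return time.perf_counter() - t0

# A random 6x6 reciprocal matrix with entries in [1/9, 9].
rng = np.random.default_rng(1)
n = 6
A = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a = np.exp(rng.uniform(-np.log(9), np.log(9)))
        A[i, j], A[j, i] = a, 1.0 / a

# Treat increasing prefixes of these pairs as missing; removing them
# still leaves the graph of known comparisons connected.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
elapsed = {k: timed_completion(A, pairs[:k]) for k in (1, 3, 5)}
```

As k grows, the Nelder-Mead simplex lives in a higher-dimensional space and each eigenvalue evaluation is repeated more often, which is consistent with the increasing trend in Figure 5.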
All simulation results were obtained on a laptop (Intel(R) Core(TM) i5-8250U CPU, 1.80 GHz, 16 GB RAM) using MATLAB (R2020b).

Table 5. Average CR for n = 7, 8, 9, 10.

Table 7. Average CR of 10,000 modified consistent matrices for an arbitrarily large number of missing comparisons k with respect to matrix size n (continued).

Conclusions
In this paper, we studied an application of the Nelder-Mead algorithm to the constrained λ_max-optimal completion problem and provided numerical simulations to study its performance. Our simulation results indicate that the proposed algorithm is capable of estimating the missing values in incomplete PCMs, and that it is simple, adaptable and efficient. Furthermore, the obtained solution is unique if and only if the undirected graph underlying the incomplete PCM is connected (by Theorem 1). It should be noted that the associated graph is necessarily connected if k ≤ n − 2, and possibly connected if there are at most k = (n − 1)(n − 2)/2 missing comparisons. If k > (n − 1)(n − 2)/2, the graph cannot be connected, because the number of known entries in the incomplete matrix is then less than n − 1 (see, e.g., [63] (p. 7)). Most importantly, the average CR values in Tables 4 and 5 are calculated on the basis of connected undirected graphs.
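The counting argument behind these thresholds can be checked directly: a matrix of size n has n(n − 1)/2 comparisons in its upper triangle, and a connected graph on n vertices needs at least n − 1 edges (a spanning tree). The short sketch below (with a hypothetical helper `known_pairs`) verifies that k = (n − 1)(n − 2)/2 leaves exactly n − 1 known comparisons, while one more missing entry makes connectedness impossible.

```python
def known_pairs(n, k):
    """Number of known upper-triangular comparisons when k are missing."""
    return n * (n - 1) // 2 - k

for n in range(4, 11):
    # Largest k for which the associated graph can still be connected.
    k_max = (n - 1) * (n - 2) // 2
    # With k = k_max, exactly n - 1 comparisons remain: a spanning tree is possible.
    assert known_pairs(n, k_max) == n - 1
    # One more missing entry leaves fewer than n - 1 edges: no spanning tree.
    assert known_pairs(n, k_max + 1) < n - 1
```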
Our proposal has its roots in the most widely used inconsistency index, the CR proposed by Saaty. If, on the one hand, the CR has been considered the standard for the quantification of inconsistency, on the other hand, its role has largely been limited to this task. Its use for other purposes, such as the optimal completion of pairwise comparison matrices, has been impaired by its perception as a function that is difficult to treat. One of the purposes of this paper is to help demystify this view.
Future research could include a comparative analysis of the algorithm with other optimal completion methods (see, for instance, refs. [13,15,18,20,64]).