AHP-Group Decision Making Based on Consistency

: The Precise Consistency Consensus Matrix (PCCM) is a consensus matrix for AHP-group decision making in which the value of each entry belongs, simultaneously, to all the individual consistency stability intervals. This new consensus matrix has shown significantly better behaviour with regard to consistency than other group consensus matrices, but it is slightly worse in terms of compatibility, understood as the discrepancy between the individual positions and the collective position that synthesises them. This paper includes an iterative algorithm for improving the compatibility of the PCCM. The sequence followed to modify the judgements of the PCCM is given by the entries that most contribute to the overall compatibility of the group. The procedure is illustrated by means of its application to a real-life situation (a local context) with three decision makers and five alternatives. The paper also offers, for the first time in the scientific literature, a detailed explanation of the process followed to solve the optimisation problem proposed for the consideration of different weights for the decision makers in the calculation of the PCCM.


Introduction
One of the multicriteria decision making techniques that best responds to the challenges and needs of the Knowledge Society [1], especially the consideration of intangible aspects and decision making with multiple actors, is the Analytic Hierarchy Process (AHP). AHP was proposed by Thomas L. Saaty in the early 1970s [2]. This multicriteria technique incorporates the intangible aspects associated with the human factor through the use of pairwise comparisons. In group decision making, where all the actors work as a single unit, AHP usually follows one of the two most traditional approaches [3][4][5]: the Aggregation of Individual Judgements (AIJ) and the Aggregation of Individual Priorities (AIP).
Both methods present two important limitations that have been addressed in some of the most recent proposals: the certainty of the data and the use of the geometric mean as the synthesising procedure of the considered values (judgments in AIJ and priorities in AIP). Escobar and Moreno-Jiménez [6] consider the principle of certainty and incorporate the context effect through the procedure called the Aggregation of Individual Preference Structures (AIPS). Altuzarra et al. [7] advance a Bayesian approach as a prioritisation procedure and a group decision-making aggregation procedure.
The concept of consistency [2] is one of the characteristics that distinguishes AHP from the other multicriteria techniques and gives coherence to the method; Moreno-Jiménez et al. [8][9][10] used this to design a new procedure for group decision making: the Consistency Consensus Matrix (CCM). Under certain conditions, the CCM automatically provides an interval judgement matrix where each entry reflects the range of values in which all decision makers would simultaneously be consistent in their initial matrices.
One limitation of this new decision-making tool is that the CCM is sometimes incomplete. The Precise Consistency Consensus Matrix (PCCM) has been proposed [11,12] to respond to this limitation by including more judgments in the group consensus matrix and allowing decision makers to have different weights assigned in the resolution of the problem. This new consensus matrix has, by construction, demonstrated good behaviour with respect to consistency, but it can be improved with respect to compatibility, understood as the discrepancy between the individual positions and the collective position that synthesises them.
This work presents a procedure to improve the compatibility of the PCCM while guaranteeing that the consistency does not exceed a predetermined level. Compatibility is improved by modifying those judgements of the PCCM that most contribute to the global compatibility, with the idea of reducing this contribution. The joint behaviour of consistency and compatibility makes it possible to select, as the preferred option, the one that most improves the cumulative relative changes of the two criteria (consistency and compatibility).
The paper is structured as follows: Section 2 gives the background to the developments; Section 3 describes the PCCM and the algorithm that solves the optimisation problem that aims to find the precise value that maximises the slack of consistency that remains free for the following steps when the actors have different weights; Section 4 explains the proposal for improving the compatibility of the PCCM and applies it to a case study; Section 5 highlights the most important conclusions of the study.

Multiactor Decision Making (MADM)
As previously mentioned, consistency (the coherence of decision makers when eliciting their judgements) and good behaviour in decision making with multiple actors are two of the most important properties of multicriteria decision making techniques. The authors of [6,13] distinguish three areas in multi-actor decision making: (i) Group Decision Making (GDM); (ii) Negotiated Decision Making (NDM); and (iii) Systemic Decision Making (SDM).
In GDM, individuals work together in pursuit of a common goal under the principle of consensus. Consensus refers to the approach, model, tools, and procedures for deriving the collective position or final group priority vector.
NDM is based on the principle of agreement and the assumption that all the actors follow the same scientific approach. The individuals resolve the problem separately, the zones of agreement and disagreement between the actors are identified and agreement paths (sometimes known as consensus paths) are constructed by changing, in a personal, semiautomatic or automatic way, one or several judgements.
SDM follows the principle of tolerance: the individual acts independently and the individual preferences, expressed as probability distributions, are aggregated to form a collective one, the tolerance distribution. This new approach integrates all the preferences, even if they are provided from different 'individual theoretical models'; the only requirement is that they must be expressed as some kind of probability distribution.
The systemic approach to multiactor decision making makes it possible to capture the holistic vision of reality and the underlying ideas of lateral thinking [14]. The information provided by the tolerance distribution can be used to construct tolerance paths that produce a more democratic and representative final decision; in other words, a decision that will be accepted by a greater number of actors or by a number of actors with greater weighting in the decisional process [15,16].

Analytic Hierarchy Process
The Analytic Hierarchy Process is one of the most widely utilised multicriteria decision making techniques. Its methodology consists of three phases [2]: (a) modelling, (b) valuation and (c) prioritisation and synthesis.
(a) Modelling refers to the construction of a hierarchy with different levels that represent the relevant aspects of the problem (scenarios, actors, criteria, alternatives). The mission or goal hangs from the highest level. The subsequent levels contain the criteria, the first-order subcriteria, the second-order subcriteria, etc., down to the last-order subcriteria or attributes (characteristics of the reality that can be measured for the alternatives); the alternatives hang from the lowest subcriteria level (attributes).
(b) Valuation involves the incorporation of the preferences of the decision makers via pairwise comparisons of the elements that hang from the nodes of the hierarchy in relation to the common node. The judgements follow Saaty's fundamental scale [2] and reflect the relative importance of one element with respect to another with regard to the criterion under consideration. They are expressed in reciprocal pairwise comparison matrices.
(c) Prioritisation and synthesis determines the local, global and total priorities. Local priorities (the priorities of the elements of the hierarchy with regard to the node from which they hang) are obtained from the pairwise comparison matrices using any of the existing prioritisation procedures; the Eigenvector (EGV) and the Row Geometric Mean (RGM) are the two most commonly employed. Global priorities (the priorities of the elements of the hierarchy with regard to the mission) are obtained through the principle of hierarchical composition, whilst total priorities (the priorities of the alternatives with regard to the mission) are obtained by a multiadditive aggregation of the global priorities of each alternative.
In the AHP-group decision making context, the two techniques traditionally used are: (i) the Aggregation of Individual Judgements (AIJ) and (ii) the Aggregation of Individual Priorities (AIP). Firstly, it is necessary to specify the notation that will be utilised. Given a local context (one criterion of the hierarchy) with n alternatives (A_1,…,A_n) and r decision makers (D_1,…,D_r), let A^(k) = (a_ij^(k)) be the pairwise comparison matrix of decision maker D_k (k = 1,…,r; i, j = 1,…,n) and π_k its relative importance in the group (π_k ≥ 0, ∑_{k=1}^{r} π_k = 1).
The priorities following the two approaches are obtained as follows. In AIJ, the individual judgements are first aggregated into a group judgement matrix A^(G/J) = (a_ij^(G/J)), from which the group priority vector w^(G/J) is derived. In AIP, the individual priorities w^(k) are first obtained from each A^(k) (k = 1,…,r), using one of the existing prioritisation methods, and are then aggregated to obtain the priorities of the group w^(G/P) = (w_i^(G/P)). Using the Weighted Geometric Mean Method (WGMM) as the aggregation procedure, the group judgement matrix and the group priority vector are given by:

a_ij^(G/J) = ∏_{k=1}^{r} (a_ij^(k))^{π_k},    w_i^(G/P) ∝ ∏_{k=1}^{r} (w_i^(k))^{π_k}

When the WGMM aggregation procedure is employed and the priorities are obtained using the RGM, the two approaches, AIJ and AIP, provide the same solution [17,18].
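The two aggregation rules can be sketched in a few lines of code. The following is an illustration (not the authors' implementation) that assumes the RGM as the prioritisation method and the WGMM as the aggregation procedure:

```python
import numpy as np

def rgm_priorities(A):
    """Row Geometric Mean prioritisation: w_i ∝ (∏_j a_ij)^(1/n), normalised."""
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)
    return w / w.sum()

def aij_matrix(matrices, weights):
    """AIJ with the WGMM: a_ij^(G/J) = ∏_k (a_ij^(k))^π_k."""
    logs = sum(p * np.log(A) for p, A in zip(weights, matrices))
    return np.exp(logs)

def aip_priorities(matrices, weights):
    """AIP with the WGMM: w_i^(G/P) ∝ ∏_k (w_i^(k))^π_k, normalised."""
    logw = sum(p * np.log(rgm_priorities(A)) for p, A in zip(weights, matrices))
    w = np.exp(logw)
    return w / w.sum()
```

For reciprocal input matrices, `rgm_priorities(aij_matrix(...))` and `aip_priorities(...)` coincide, which is precisely the equality between AIJ and AIP stated in [17,18].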

Consistency and Compatibility in AHP
AHP allows for the evaluation of the consistency of the decision maker when the judgements are introduced into the pairwise comparison matrices. Saaty [2] defined consistency in AHP as the cardinal transitivity of the judgements included in the pairwise comparison matrices; that is to say, the reciprocal pairwise comparison matrix A_{n×n} = (a_ij) is consistent if a_ij · a_jk = a_ik for all i, j, k = 1,…,n.
Consistency is associated with the (internal) coherence of the decision makers when their judgements are considered in the pairwise comparison matrices. Consistency is usually evaluated, depending on the prioritisation procedure that is used, as the 'representativeness' of the local priority vector derived from the pairwise comparison matrices (a_ij is an estimation of w_i/w_j).
In the case of the EGV and the RGM, the inconsistency indicators are given, respectively, by the Consistency Index (CI) and the Geometric Consistency Index (GCI) [19]:

CI = (λ_max − n)/(n − 1),    GCI = 2/((n − 1)(n − 2)) ∑_{i<j} log² e_ij

where e_ij = a_ij (w_j/w_i). Obviously, if the matrix is consistent, both indicators of inconsistency are null, since all the errors e_ij = 1 (a_ij = w_i/w_j). The Consistency Interval Judgement matrix for the group (GCIJA) is an interval matrix GCIJA = ([a̲_ij, ā_ij]) whose entries correspond to the range of values for which none of the decision makers exceeds the maximum inconsistency allowed and which belong to the range of values of Saaty's fundamental scale, [1/9, 9].
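As an illustration, the GCI can be computed as follows. This is a minimal sketch for the RGM case; the EGV-based CI additionally requires the principal eigenvalue λ_max and is omitted here:

```python
import numpy as np

def gci(A, w=None):
    """Geometric Consistency Index: GCI = 2/((n-1)(n-2)) * Σ_{i<j} log² e_ij,
    with e_ij = a_ij * w_j / w_i and w the RGM priority vector."""
    n = A.shape[0]
    if w is None:
        w = np.prod(A, axis=1) ** (1.0 / n)   # Row Geometric Mean priorities
        w = w / w.sum()
    E = A * np.outer(1.0 / w, w)              # E[i, j] = a_ij * w_j / w_i
    iu = np.triu_indices(n, k=1)              # entries above the diagonal (i < j)
    return 2.0 / ((n - 1) * (n - 2)) * np.sum(np.log(E[iu]) ** 2)
```

A fully consistent matrix (a_ij = w_i/w_j) yields a GCI of zero; any perturbation of a judgement makes the index strictly positive.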
The limits of each entry of the GCIJA are obtained by intersecting the individual consistency stability intervals [a̲_ij^(k), ā_ij^(k)] with [1/9, 9], that is, a̲_ij = max{1/9, max_k a̲_ij^(k)} and ā_ij = min{9, min_k ā_ij^(k)}, where the individual limits depend on the slack Δ^(k) = GCI* − GCI^(k), GCI* being the maximum inconsistency allowed for the problem and GCI^(k) the Geometric Consistency Index of the individual matrix A^(k) [20].
Compatibility refers to the (internal) coherence of the group when selecting its priority vector w^(G) = (w_1^(G),…,w_n^(G)), that is to say, its representativeness in relation to the individual positions w^(k) = (w_1^(k),…,w_n^(k)). To evaluate the compatibility of an individual k (w^(k)), k = 1,…,r, with the collective position or group priority vector w^(G), it is sufficient to adapt the previous expression of the GCI, taking as errors e_ij = (w_i^(k)/w_j^(k))(w_j^(G)/w_i^(G)). The concept of compatibility reflects the distance between the individual and the collective positions and is calculated automatically, without the express intervention of the individuals beyond the emission of the initial judgements of the pairwise comparison matrices. The authors of [21] advanced the Geometric Compatibility Index (GCOMPI) in order to evaluate the compatibility of the individual positions with respect to the collective position provided by any of the existing procedures. The expression of the GCOMPI for a decision maker k in a local context (one criterion) is given by:

GCOMPI(w^(k), w^(G)) = 2/(n(n − 1)) ∑_{i<j} log²((w_i^(k)/w_j^(k)) · (w_j^(G)/w_i^(G)))

In a global context (hierarchy), the local values are aggregated across the criteria of the hierarchy. The GCOMPI for the group is given by the weighted sum of the individual indices:

GCOMPI^(G) = ∑_{k=1}^{r} π_k GCOMPI(w^(k), w^(G))

In addition to the GCI and the GCOMPI, two more indicators are used in the literature to compare the behaviour of the different procedures with respect to consistency and compatibility [11,12]: the Consistency Violation Number (CVN) for consistency, and the Priority Violation Number (PVN) for compatibility.
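A sketch of the GCOMPI in a local context, assuming the 2/(n(n−1)) normalisation of the expression above:

```python
import numpy as np

def gcompi(w_k, w_g):
    """Geometric Compatibility Index between an individual priority vector w_k
    and the group vector w_g (local context):
    GCOMPI = 2/(n(n-1)) * Σ_{i<j} log²((w_k_i/w_k_j) · (w_g_j/w_g_i))."""
    n = len(w_k)
    # R[i, j] = (w_k_i / w_k_j) * (w_g_j / w_g_i)
    R = np.outer(w_k, 1.0 / w_k) * np.outer(1.0 / w_g, w_g)
    iu = np.triu_indices(n, k=1)
    return 2.0 / (n * (n - 1)) * np.sum(np.log(R[iu]) ** 2)

def gcompi_group(w_inds, pis, w_g):
    """Group GCOMPI as the weighted sum of the individual indices."""
    return sum(pi * gcompi(wk, w_g) for pi, wk in zip(pis, w_inds))
```

The index is null when the individual and the group vectors coincide, and it is symmetric in its two arguments, since only the squared logarithms of the priority ratios intervene.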
The CVN considers the mean number of entries of the group pairwise comparison matrix that do not belong to the corresponding consistency stability interval of each individual, calculated for the inconsistency threshold considered in the problem; the value for the group is obtained as the weighted mean of the individual values. The PVN measures the ordinal compatibility of each AHP-GDM procedure by means of the minimum number of priority violations [22]; again, the value for the group is obtained as the weighted mean of the individual values.
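The two indicators can be illustrated as follows. This is a simplified sketch: the individual stability intervals are supplied as inputs, and the PVN variant below counts strict rank reversals between an individual and the group, which may differ in detail from the exact definitions in [11,12,22]:

```python
import numpy as np

def cvn(G, lower, upper, pis):
    """Weighted mean fraction of group-matrix entries falling outside each
    decision maker's consistency stability interval [lower_k, upper_k]."""
    n = G.shape[0]
    iu = np.triu_indices(n, k=1)
    out = 0.0
    for pi, lo, up in zip(pis, lower, upper):
        viol = ((G < lo) | (G > up))[iu]   # entries outside this DM's interval
        out += pi * viol.mean()
    return out

def pvn(w_inds, pis, w_g):
    """Weighted mean number of ordinal violations: pairs ranked one way by an
    individual and the opposite way by the group (one common variant)."""
    total = 0.0
    for pi, wk in zip(pis, w_inds):
        n = len(wk)
        v = sum(1 for i in range(n) for j in range(i + 1, n)
                if (wk[i] - wk[j]) * (w_g[i] - w_g[j]) < 0)
        total += pi * v
    return total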

The Precise Consistency Consensus Matrix (PCCM)
Moreno-Jiménez et al. [9,10] proposed a decisional tool, the Consistency Consensus Matrix (CCM), which identifies the core of consistency of the group decision using an interval matrix that may not be complete or connected. In [12], the same authors refined this tool and introduced the PCCM, which selects a precise value for each interval judgement in such a way that the quantity of slack that remains free for successive algorithm iterations is the maximum possible.
Escobar et al. [11] extended the PCCM to allow the assignment of different weights to the decision makers and to guarantee that the group consensus values were acceptable to the individuals in terms of inconsistency. In the same work, these authors put forward a number of methods for completing the PCCM matrix if it were incomplete.
The improved version of the algorithm for constructing the PCCM proposed in [11] starts by calculating the variance of the logarithms of the corresponding judgements, taking into account the fact that decision makers may have different weights. It also provides (Step 1) the initial Consistency Stability Intervals [20] for the individuals and for the group (GCIJA). The judgement with the least variance (Step 2) that has a non-null intersection of the initial individual consistency stability intervals is selected. The consistency stability intervals for each decision maker are calculated for this judgement (Step 3) and the intersection of all these intervals is obtained (Step 4). In this common interval, it is guaranteed that the individual judgements can oscillate without the GCI exceeding a previously fixed level of inconsistency. The intersection of the previous interval with the range of values [1/9, 9] and the initial consistency stability intervals is then calculated (Step 5). This avoids taking a value that is further from the initial judgements of the decision makers than the amount allowed for the fixed inconsistency level. The algorithm then determines a precise value that belongs to the common interval (Step 6). Any judgement in this interval will have an acceptable inconsistency; however, some of the matrices will be more inconsistent than others and will therefore admit less slack for the following iterations. To address this point, the algorithm selects the value that provides the greatest slack for the most inconsistent matrix (the value that minimises the GCI of the most inconsistent matrix). Finally, the value obtained is included as an entry of the PCCM and serves to update the initial individual judgement matrices (Step 7). The detailed version of the algorithm can be seen in [11].
The consideration of different weights for the decision makers notably increases the difficulty of the optimisation model (9) solved in Step 6:

min max_{k=1,…,r} GCI(A^(k)(α_rs)),  with α_rs ∈ [log a̲_rs^t, log ā_rs^t], where α_rs = log a_rs

This non-trivial optimisation problem is solved using an iterative procedure which searches for the intersection points of the parabolas (the second-order polynomials associated with the GCI(A^(k)) functions). When all decision makers have the same weight (initial version of the algorithm [12]), all the parabolas have the same 'width' (the same coefficient of the quadratic term); in that situation, each pair of parabolas may intersect in one point or none. But when the decision makers have different weights [11], the parabolas may have different coefficients for their respective quadratic terms. Each pair of parabolas may then intersect in one or two points, or none; moreover, in this case, some parabolas can be tangential. The resolution of the optimisation model (9) should consider all these possibilities and carefully analyse each intersection point. A more detailed explanation of the procedure followed to solve this optimisation model (9) can be seen in Appendix A.
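Finding the intersection points of two parabolas reduces to solving a quadratic (or, when the widths coincide, linear) equation in the difference of their coefficients. A small helper illustrating the three situations just described (equal widths, tangency, two cut points):

```python
import numpy as np

def parabola_intersections(p, q, tol=1e-12):
    """Real intersection abscissas of p(x) = a1x²+b1x+c1 and q(x) = a2x²+b2x+c2.
    Equal quadratic coefficients -> at most one point (linear equation);
    different coefficients -> zero, one (tangency) or two points."""
    a, b, c = (p[0] - q[0], p[1] - q[1], p[2] - q[2])
    if abs(a) < tol:                      # same 'width': linear case
        return [] if abs(b) < tol else [-c / b]
    disc = b * b - 4 * a * c
    if disc < -tol:                       # no real intersection
        return []
    if disc <= tol:                       # tangential parabolas
        return [-b / (2 * a)]
    r = np.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```

With equal widths (the equal-weights case of [12]) at most one intersection is returned; with different widths (the weighted case of [11]) zero, one or two points are possible.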

Iterative Procedure
The PCCM decisional tool has been applied to several decisional problems [11,12]; the consistency values obtained are significantly better than those of other GDM approaches (AIJ, the Dong procedure [23]), but they are slightly worse in terms of compatibility.
This paper suggests an iterative procedure to improve compatibility without significantly worsening consistency (keeping it below a preset threshold). Whereas the PCCM is constructed by sequentially considering the judgements from the least to the greatest variance, the proposed improvement of compatibility sequentially considers the judgements with the greatest contribution (participation) to the global compatibility measure employed (4). This value corresponds to the entry p_rs of the PCCM for which the contribution

d_rs = ∑_{k=1}^{r} π_k log²((w_r^(k)/w_s^(k)) · (v_s^(G)/v_r^(G)))

is maximum, where v^(G) is the priority vector derived from the PCCM using the RGM method.
The procedure modifies the selected judgement p_rs, moving it towards the ratio w_r^(G)/w_s^(G) of the priorities derived from the AIJ matrix, following an idea similar to that employed in the Dong procedure [23].
In any case, the modified value would never exceed the limits of the consistency stability intervals for this judgment, guaranteeing that the level of inconsistency for each decision maker is acceptable.
In what follows, the new iterative procedure for improving compatibility is explained in detail. It is described for a generic judgement matrix P; here it is applied to P = PCCM, as shown in Algorithm 1.

Algorithm 1
Let A^(k) = (a_ij^(k)) be the pairwise comparison matrix of decision maker D_k (k = 1,…,r; i, j = 1,…,n) and π_k its relative importance in the group (π_k ≥ 0, ∑_{k=1}^{r} π_k = 1); let [a̲_ij, ā_ij] (i, j = 1,…,n) be the limits of the intervals of the Consistency Interval Judgement matrix for the group; θ ∈ [0, 1]; w^(G) the priority vector obtained when applying the RGM to the AIJ matrix; P a judgement matrix; and v the priority vector derived from P using the RGM method.
Step 0 (Initialisation): Let t = 0, P^(0) = P and J = {(i, j), with i < j}; calculate, for all (i, j) ∈ J, the contribution d_ij of the entry to the group compatibility.
Step 1 (Selection of the judgement): Let (r, s) be the entry for which d_rs = max_{(i,j)∈J} d_ij.
Step 2 (Obtaining a PCCM entry): Replace p_rs by the combination, given by expression (11), of its current value and the AIJ ratio, p_rs^(t+1) = (p_rs^(t))^θ (w_r^(G)/w_s^(G))^(1−θ), provided that the new value does not exceed the limits [a̲_rs, ā_rs]; set p_sr = 1/p_rs and remove (r, s) from J.
Step 3 (Finalisation): If J = ∅, then Stop. Else let t = t + 1 and go to Step 1.
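Algorithm 1 can be sketched as follows. This is an illustrative implementation, not the authors' code: the update rule is assumed to be the weighted geometric mean of the current entry and the AIJ ratio (expression (11)), the contribution d_ij is computed as the weighted sum of the individual GCOMPI terms, and a modification is skipped when it would leave the group interval:

```python
import numpy as np

def rgm(P):
    """Row Geometric Mean priorities of a positive reciprocal matrix."""
    w = np.prod(P, axis=1) ** (1.0 / P.shape[0])
    return w / w.sum()

def improve_compatibility(P, w_g, lower, upper, pis, w_inds, theta):
    """Sketch of Algorithm 1 under the assumptions stated above."""
    P = P.copy()
    n = P.shape[0]
    J = {(i, j) for i in range(n) for j in range(i + 1, n)}
    history = []
    while J:
        v = rgm(P)                                     # Step 0: current group vector
        # d_ij: contribution of entry (i, j) to the group compatibility
        d = {(i, j): sum(pi * np.log((wk[i] / wk[j]) * (v[j] / v[i])) ** 2
                         for pi, wk in zip(pis, w_inds))
             for (i, j) in J}
        r, s = max(d, key=d.get)                       # Step 1: select judgement
        target = P[r, s] ** theta * (w_g[r] / w_g[s]) ** (1 - theta)
        if lower[r, s] <= target <= upper[r, s]:       # Step 2: keep consistency acceptable
            P[r, s], P[s, r] = target, 1.0 / target
        history.append((r, s))                         # Step 3: remove entry, iterate
        J.remove((r, s))
    return P, history
```

With θ = 1 the matrix is left unchanged (the original PCCM); with θ = 0 every admissible entry is moved all the way to the AIJ ratio, so the resulting matrix becomes consistent with respect to w^(G).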

Case Study
The previous procedure was applied to a case study which has been widely employed in the literature [11,12,24,25]: three decision makers (DM1, DM2 and DM3) must compare five alternatives (A1,…,A5). The individual pairwise comparison matrices are given in Table 1. The decision makers were given different weights, proportional to 5, 4 and 2 (π_1 = 5/11, π_2 = 4/11 and π_3 = 2/11).
Table 2 gives the resulting priorities using the RGM for each of the three individual matrices and their corresponding rankings. The PCCM matrix obtained by applying the procedure explained in [11,12] is shown in Table 3. Two other AHP-GDM procedures have been applied: the AIJ, explained in Section 2, and the Dong procedure [23]. Table 4 shows the priority vectors obtained with the three AHP-GDM procedures; it can be observed that the ranking of the alternatives is the same for the three procedures.
Table 5 shows the consistency and compatibility indicator values for the three AHP-GDM procedures. With respect to the indicators that measure consistency (GCI and CVN), the values obtained with the PCCM are considerably better than those obtained with the other two approaches. The values of the GCI for the AIJ procedure (0.122) and for the Dong procedure (0.069) are, respectively, more than five times (535.7%) and three times (304.2%) greater than that of the PCCM (0.023). The behaviour of the CVN is also better for the PCCM (CVN(PCCM) = 0, while CVN(AIJ) = CVN(Dong) = 0.018).
With respect to the compatibility, the value of the GCOMPI for the AIJ procedure (0.464) is better than those of the PCCM (the value 0.529 is 14% greater than the AIJ) and the Dong procedure (the value 0.472 is 1.7% greater). Finally, in the analysis of the number of violations (ordinal compatibility), the three methods gave the same result (0.136).
Having observed that the PCCM is the procedure (among the three being compared) with the highest value of the GCOMPI indicator, the iterative procedure proposed in Section 4.1 is applied with the aim of improving the compatibility of the PCCM.
The iterative procedure was applied with different values of θ (θ = 0.75; θ = 0.5; θ = 0.25; and θ = 0); the PCCM corresponds to θ = 1. In order to compare the results obtained for the combinations considered, the focus is on the two cardinal indicators: the GCI for consistency and the GCOMPI for compatibility. Tables 6-9 show the sequence of iterations followed when applying the procedure (each column) and the values obtained for the two indicators at each iteration. The second row specifies the judgement that is modified in the corresponding iteration. The values for the original PCCM are shown in the first column, as it corresponds to the starting point of the iterative procedure (t = 0). The modified values for each entry can be seen in Table 10. The values of the GCI and GCOMPI for the judgement (1,4), t = 8, are empty because modifying this judgement would lead to a value outside the corresponding interval of the GCIJA matrix; the initial value is maintained and the procedure continues, selecting the following judgement. Table 6. Results for the iterative procedure with θ = 0.75 (the best value of the methods is in bold, for each indicator).

Table 7. Results for the iterative procedure with θ = 0.5 (the best value of the methods is in bold, for each indicator). Table 9. Results for the iterative procedure with θ = 0 (the best value of the methods is in bold, for each indicator).
From Tables 6-9, it is possible to make the following observations:

• The order of entrance of the judgements is the same for all the values considered for θ; this does not always have to happen.
• According to the proposal followed in expression (11), the values of the compatibility indicator improve when the value of the parameter θ decreases.
• In addition to improving compatibility, the final result also improves consistency.
• The value of the compatibility indicator almost always decreases with the iterations. In just a few cases, for judgements (3,4) and (2,4), compatibility is slightly worse. The lowest value of the GCOMPI (0.477) is obtained with θ = 0, applying the iterative procedure until the penultimate iteration; this value is only 2.8% greater than the one obtained with the AIJ.
• The consistency indicator oscillates a little until reaching its highest value at iteration t = 4 (modifying judgement (1,2)). The next iteration (modifying judgement (1,5)) is a turning point, and from there the GCI reduces its value significantly (this can also be seen in Figures 1 and 2). It can also be observed that the lower values of the GCI tend to be those obtained with high and intermediate values of θ. The lowest value of the GCI (0.018) is obtained with θ = 0.5, applying the iterative procedure until the last iteration; this value is 15.8% lower than that obtained with the PCCM procedure.
• The fact that in iteration t = 8 the judgement (1,4) is not modified means that the value of the GCI is not null at the last iteration of the procedure for θ = 0.

Figures 1 and 2 give the information provided in Tables 6-9 in the form of graphs; they illustrate the relationship between the two indicator values, GCI and GCOMPI. Figure 1 shows the paths that these values follow in the iterative procedure (as new judgements are modified) for each value of θ separately, while Figure 2 shows all the paths in the same graphic presentation. The graphic visualisations help us to understand the relationship between the two indicators and to identify the steps of the algorithm that provide the biggest changes.
It can be observed that, for all the values of θ, there is a turning point where the value of the GCI begins to decrease significantly; it corresponds to iteration t = 5, when judgement (1,5) is modified. Moreover, when θ decreases, the value of the GCOMPI also decreases and the variability of the GCI increases (Figure 2). Table 10 includes the modified PCCMs corresponding to the last iteration for each value of θ. Table 11 gives the priorities associated with these matrices; all the priority vectors have the same ranking and their range increases when the value of θ decreases.

Conclusions
This paper has addressed issues related to the calculation and exploitation of the PCCM decisional tool employed in group decision making with AHP.
There are two particular contributions to the literature: (i) For the first time, the procedure followed to solve the optimisation problem that arises in each iteration of the calculation algorithm of the PCCM has been explained in detail. The consideration of different weights for decision makers greatly increases the difficulty of the optimisation problem, and it has been necessary to study all of the possible situations that could occur. (ii) The work presents a proposal to improve the compatibility of PCCM matrices. As previously mentioned, whilst the PCCM gives much better values than other procedures with regard to consistency, its behaviour in terms of compatibility is worse. Following a sequential procedure in line with their contribution to the GCOMPI, the judgements of the PCCM are modified using a combination of the initial value of the PCCM and the ratio of the priorities obtained with the AIJ procedure.
The case study proved that compatibility substantially improves, reaching values close to those of the AIJ procedure. Consistency also improved, guaranteeing that the judgments of the consensus matrix belong to the consistency stability intervals of all decision makers.
Although the proposal made in this paper has been focused on improving the compatibility of the PCCM, the procedure can be adapted and applied to any consensus matrix.
Future research will seek to establish other criteria that determine the sequence in which the judgments of the group consensus matrix are selected for modification. At the same time, future extensions of this research will include a comparison of the proposal set out in this paper with the recently published improvements made by the authors of [23] to their methodology.
Author Contributions: The paper has been elaborated jointly by the four authors.

Appendix A

The optimisation problem (9) solved in Step 6 of the algorithm consists of minimising, over the interval [l, u] under study, the upper envelope of the parabolas p_k(x) = a_k x² + b_k x + c_k (a_k > 0) associated with the GCI(A^(k)) functions. Three cases can occur:

a. The polynomial that is dominant in x = l is an increasing function at this point. In this case, the solution to the optimisation problem (9) is x* = l (Figure A1a).
b. The polynomial that is dominant in x = u is a decreasing function at this point. In this case, the solution to the optimisation problem (9) is x* = u (Figure A1b).
c. The polynomial that is dominant in x = l is a decreasing function at this point. In this case, we start from this initial point and move forward until we find a section/segment in which the dominant polynomial is an increasing function (Figure A1c).

Case a. When we say that a polynomial p_r is dominant in x = l, we refer to the situation in which the polynomial p_r provides the maximum value in a neighbourhood [l, l + ε). In order to obtain this dominant polynomial, we start by analysing the values of the polynomials at x = l; the polynomial p_r(x) that provides the maximum value at this point is the polynomial that we were looking for. Nevertheless, it is possible that some ties exist; in this case, we should look for the polynomial that is dominant in the neighbourhood [l, l + ε).

If p_i(l) = p_j(l), in order to determine which of the two polynomials is dominant in [l, l + ε), we examine the first derivative: if p_i'(l) > p_j'(l), the polynomial p_i is dominant. If p_i(l) = p_j(l) and p_i'(l) = p_j'(l), the dominant polynomial will be the one with the maximum value of the second derivative.

In Figure A2 it can be appreciated that p_1(l) = 1 and p_2(l) = p_3(l) = p_4(l) = 4. It can also be observed that p_2'(l) < p_3'(l) and p_2'(l) < p_4'(l), thus the polynomial p_2(x) is not dominant. Polynomials p_3(x) and p_4(x) have the same value and the same derivative in x = l, but the polynomial p_4(x) has a greater value of the second derivative (greater coefficient of x²); it is therefore the dominant polynomial.

In short, if p_r is the polynomial that is dominant in x = l, the following condition should be fulfilled: (p_r(l), p_r'(l), p_r''(l)) = lex max_k (p_k(l), p_k'(l), p_k''(l)).

Case b. The polynomial that is dominant in x = u can be determined in an analogous way: (p_s(u), −p_s'(u), p_s''(u)) = lex max_k (p_k(u), −p_k'(u), p_k''(u)). By determining the polynomials which are dominant in l and u, and calculating their respective derivatives, we are able to identify cases a and b.
Case c. Starting from the point x_0 = l and from the corresponding polynomial that is dominant at this point, we determine the point x_1 in which this polynomial becomes dominated. If the next dominant polynomial is a decreasing function at this point, we continue the process, updating x_0 = x_1. At the moment in which we leave the interval under study, or when the new dominant polynomial is an increasing function, the iterative stage is finished and we only have to calculate the minimum of the current dominant polynomial in the interval under study.
In Figure A1c we can see that p_1 is the polynomial that is dominant in x = l. It continues to be the dominant polynomial until x_1, where the new dominant polynomial is p_2. Again, this polynomial is dominant until the point x_2. But the polynomial that is dominant from this point onwards is an increasing function at x_2, and it is therefore not necessary to continue. The solution to the optimisation problem is given by the minimum of the polynomial p_2 in the interval [x_1, x_2].
The points where a polynomial ceases to be dominant are found by exploring the possible cut points with the rest of the polynomials of the problem (those that lie within the interval under study), as shown in Algorithm A1.

Algorithm A1

min_{x ∈ [l, u]} max_k p_k(x), where p_k(x) = a_k x² + b_k x + c_k with a_k > 0

Step 1: Find r and s with
(p_r(l), p_r'(l), p_r''(l)) = lex max_k (p_k(l), p_k'(l), p_k''(l))
(p_s(u), −p_s'(u), p_s''(u)) = lex max_k (p_k(u), −p_k'(u), p_k''(u))
If p_r'(l) ≥ 0 then x* = l. STOP.
If p_s'(u) ≤ 0 then x* = u. STOP.
Step 2: Start from the point x_0 = l, where the polynomial that is dominant, p_r(x), is a decreasing function.
Step 3: Calculate I = {i such that ∃ t_i ∈ (x_0, u] with p_i(t_i) = p_r(t_i) and p_i'(t_i) > p_r'(t_i)}.
Step 4: If I = ∅, the optimal solution is given by min_{x ∈ [x_0, u]} p_r(x).
Step 5: Let j be the value such that (−t_j, p_j'(t_j), p_j''(t_j)) = lex max_{i∈I} (−t_i, p_i'(t_i), p_i''(t_i)). If p_j'(t_j) > 0, the optimal solution is given by min_{x ∈ [x_0, t_j]} p_r(x). Otherwise, update x_0 = t_j and r = j and go to Step 3.
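Since every p_k is an upward parabola, the upper envelope max_k p_k(x) is a convex function, so its minimum on [l, u] can also be located numerically. The sketch below uses ternary search as a simple alternative to the segment-by-segment exploration of Algorithm A1:

```python
import numpy as np

def minmax_parabolas(parabolas, l, u, iters=200):
    """Minimise q(x) = max_k (a_k x² + b_k x + c_k) on [l, u].
    With all a_k > 0 each p_k is convex, hence q is convex and ternary
    search converges to the global minimiser."""
    q = lambda x: max(a * x * x + b * x + c for a, b, c in parabolas)
    lo, hi = l, u
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if q(m1) <= q(m2):   # minimiser lies in [lo, m2]
            hi = m2
        else:                # minimiser lies in [m1, hi]
            lo = m1
    x = (lo + hi) / 2
    return x, q(x)
```

The boundary cases a and b of the appendix are recovered automatically: when the envelope is increasing at l (or decreasing at u), the search converges to the corresponding endpoint.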