Distance-Based Decision Making, Consensus Building, and Preference Aggregation Systems: A Note on the Scale Constraints

Distance metrics and their extensions are widely accepted tools for supporting distance-based decision making, consensus building, and preference aggregation systems. For several models of this nature, it may be necessary to express the problem output in the original input domain. When a particular parameter of interest must be produced in this original domain, i.e., the scale, decision makers simply resort to constraints that serve this goal. However, there exist cases where such membership is already guaranteed by the mathematical properties of the distance metric utilized. In this paper, we argue that the scale constraints utilized in this manner under the distance-metric optimization framework are, in some cases, completely redundant. We provide the necessary mathematical proofs and illustrate our arguments through an abstract physical system, examples, a case study, and a brief computational experiment.


Introduction and Related Literature
Decision making is concerned with finding the best alternatives from a finite choice set with regard to their total performances over several relevant criteria. When there is a single individual to make the decision, or a committee chooses to act as a solitary decision-making unit, a single decision maker setting emerges [1][2][3][4]. Otherwise, each member of the committee delivers his/her preferences separately, and such assessments have to be aggregated into a group decision, calling for a group consensus or a group decision-making scenario [5][6][7][8][9][10]. The preference information, so crucial to reaching a valid decision, is highly dependent on the decision situation and on the decision maker's or the committee's knowledge, experience, perception, and comprehension of the situation. The theory of decision making is well able to support a variety of preference information categories. Such information can be provided in cardinal [11,12], ordinal [13,14], linguistic [5,15], fuzzy [16,17], interval [18][19][20], multi-granular [21,22], multiplicative [23,24], pair-wise [25,26], and incomplete [27,28] arrangements. The relative importance of the criteria has to be combined with this information in some way to obtain the total performances of the decision alternatives. Usually, it is stated as suitable criteria weights, which can either be pre-specified [14] due to some previous experience, separately obtained by some method [26], or extracted from partial information [29].
Various systems were studied and practiced in the past to synthesize the above information. Of particular interest to us in this paper are those relying on distance metrics to support decision making, consensus building, and preference aggregation, namely the distance-based systems (DBS). In mathematics, a metric is a function that assigns a nonnegative distance between the elements of a space. A subset of these functions, the well-known ℓp (or, alternatively, the order-p Minkowski) metrics, is extensively used in DBS due to its desirable properties, such as non-negativity, symmetry, and the triangle inequality. For detailed information on ℓp metrics, we refer the reader to the body of work [30][31][32].
To illustrate, let V be a vector space and v, u ∈ V be two vectors of order n, such that v = (v_1, v_2, . . . , v_n) and u = (u_1, u_2, . . . , u_n). This subset of distance functions is characterized by a single order parameter p ≥ 1 and assumes the general form:

d(v, u) = ( Σ_{i=1}^{n} |v_i − u_i|^p )^{1/p},   (1)

where d(v, u) is the distance between vectors v and u, and i is an index over the vector components. Due to the order parameter p, the general ℓp form in Equation (1) provides a useful family of metrics to measure the distance where applicable, depending on the specific problem domain and characteristics.
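As a concrete illustration, the following Python sketch evaluates Equation (1) for a few common choices of p; the function name minkowski_distance is our own, and the p → ∞ branch is included as the familiar Chebyshev limiting case.

```python
import math

def minkowski_distance(v, u, p):
    """Order-p Minkowski distance between two vectors of equal length."""
    if p < 1:
        raise ValueError("the metric properties require p >= 1")
    if math.isinf(p):
        # Limiting case p -> infinity: the Chebyshev metric.
        return max(abs(vi - ui) for vi, ui in zip(v, u))
    return sum(abs(vi - ui) ** p for vi, ui in zip(v, u)) ** (1.0 / p)

v, u = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(minkowski_distance(v, u, 1))         # rectilinear: 7.0
print(minkowski_distance(v, u, 2))         # Euclidean: 5.0
print(minkowski_distance(v, u, math.inf))  # Chebyshev: 4.0
```

Setting p = 1, 2, and ∞ recovers the rectilinear, Euclidean, and Chebyshev metrics that recur throughout the DBS literature surveyed below.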
A collection of methodologies hinges on this family of metrics for the minimization of distance to some form of a pre-specified and desired configuration, sometimes called the ideal or the utopia. Some typical examples are goal programming [33], reference point methods [34], compromise programming [35], and composite programming [36]. Other nonstandard procedures are tailored to employ such metrics in some stage of their execution. Our review of the subject literature revealed that not only this family of metrics but also their extensions and special definitions are widely used as distance measures to support DBS. For the convenience of the reader, we summarized our detailed overview of these studies in Table 1. In order to disclose significant characteristics of each study, we organized four columns after the source information. The domain column indicates in which setting, or for what purpose, the distance framework is used (i.e., in the single decision maker setting [37], in the group consensus setting [38], for screening the decision alternatives [11], etc.). The distance notion column shows which metric, or which specially defined metric-based distance measure, is employed to support the method described in that paper (i.e., general ℓp [25], rectilinear [15], Euclidean [39], distance between pre-orders [13], fuzzy distance [40], signed distance [6], etc.). The features column lists which well-established method, relevant concept, or theory is utilized, or whose principles were engaged, during implementation (i.e., utility theory [41], feature extraction [42], case- and rule-based reasoning [1], the Lagrangian function [12], induced aggregation operators [43], quadratic optimization [2], etc.).
Finally, the illustration column is devoted to indicating the scheme preferred by the authors to verify their findings along with the case, or the system, where the corresponding study is piloted (i.e., the case study [44], numerical example [45], computational experiment [46], experimental study [42], etc.).
While the preference of a specific distance measure is usually intended for various goals, such as consensus, ranking, screening, consistency adjustment, etc., implementation of some form of the general p metric or its custom modification is central to DBS. Hence, the principles of distance-metric optimization indeed apply to this group of procedures. In distance-metric optimization, sometimes a parameter of concern does not need to be necessarily constrained to have its final value fall within a target set. This is because such an association may be guaranteed by some gratifying mathematical property of the metric used. As will be clear later in this paper, what we have just noted is exactly the case in DBS.
In order to obtain the preference information from a decision maker or a committee, the typical practice in DBS is to utilize a common scale, including numbers or labels representing the preferences of the decision maker(s). When the final value of a parameter of interest needs to be expressed in this original domain, we constrain this parameter to lie within the limits prescribed by such a scale. These constraints are called the scale constraints in DBS.

Table 1. Overview of the reviewed DBS studies.

Source | Domain | Distance notion | Features | Illustration
[47] | Group consensus | General ℓp metric | Goal programming | Voting
[14] | Single decision maker | Distance between pre-orders | Order theory | Numerical example
[44] | Group consensus | Separation distance | Compromise programming | Conservation planning
[41] | Single decision maker | Profile distance | Utility theory | ERP software selection
[13] | Screening, group consensus | Distance between pre-orders | Order theory | Numerical example
[5] | Group consensus | Fuzzy distance | Clustering | Software selection
[10] | Group consensus | Euclidean | Decision support systems | Facility planning
[25] | Group consensus | General ℓp metric | Goal programming | Numerical examples
[48] | Single decision maker | Euclidean | Decision balls | Numerical example
[37] | Single decision maker | Rectilinear, Chebyshev | Fuzzy multi-objective program | Numerical example
[11] | Screening | Case-based distance | Case-based reasoning | Water resources planning
[9] | Group consensus | Rectilinear | Assignment model | Evaluation of IT security
[42] | Screening | Rectilinear | Feature extraction | Classification of signals
[1] | Single decision maker | Euclidean, Mahalanobis | Rule-based reasoning | Decision support system
[17] | Group consensus | Euclidean | Intuitionistic fuzzy sets | Merger strategy evaluation
[38] | Group consensus | General ℓp metric | Goal programming | Performance appraisal
[22] | Group consensus | General ℓp metric | Fuzzy sets | Recruiting
[15] | Group consensus | Rectilinear | Fuzzy sets | Project selection
[8] | Group consensus | Rectilinear, Euclidean | Consistency indices | Numerical example
[40] | Group consensus | Fuzzy distance | Fuzzy sets | Performance evaluation
[45] | Group consensus | Similarity measure | Intuitionistic fuzzy sets | Evaluation of air quality
[12] | Group consensus | Squared Euclidean | Lagrangian function | Emergency decision support
[49] | Group consensus | Similarity measure | Fuzzy sets | Site selection
[46] | Group consensus | Profile distance | Bisection algorithm | Computational experiment
[50] | Group consensus | Rectilinear | Induced aggregation operators | Investment planning
[51] | Group consensus | Rectilinear | Intuitionistic fuzzy sets | Investment planning
[7] | Group consensus | Rectilinear | Linear orders | Numerical example
[27] | Group consensus | Rectilinear | Logarithmic goal programming | Numerical example
[28] | Single decision maker | Euclidean | Nonlinear programming | Investment planning
[52] | Group consensus | General ℓp metric | Hesitant fuzzy sets | Energy policy assessment
[43] | Single decision maker | Rectilinear | Induced aggregation operators | Football player selection
[53] | Single decision maker | Affinity distance | Co-evolutionary algorithm | Garment manufacturing
[39] | Group consensus | Euclidean | Interval-valued fuzzy sets | Manager selection
[2] | Single decision maker | General ℓp metric | Quadratic optimization | Credit risk analysis
[4] | Single decision maker | Euclidean | Data mining | Incident risk analysis
[3] | Single decision maker | Dissimilarity measure | Belief functions | Numerical examples
[6] | Group consensus | Signed distance | Interval-valued fuzzy sets | Supplier selection
[54] | Single decision maker | Euclidean | Lagrangian function | Machine selection
[55] | Group consensus | Fuzzy max and min distance | Induced aggregation operators | Strategy development
[56] | Group consensus | Rectilinear | Goal programming | Investment planning
[57] | Group consensus | Hausdorff | Sensitivity analysis | Forecasting method selection
[58] | Single decision maker | General ℓp metric | Multi-objective programming | Project selection
[59] | Single decision maker | General ℓp metric | Goal programming | Indicator ranking
[60] | Single decision maker | Similarity measure | Hesitant fuzzy sets | ERP system selection
[61] | Single decision maker | Belief interval distance | Belief functions | Numerical example
[62] | Group consensus | Hausdorff | Hesitant fuzzy sets | Investment planning
[63] | Group consensus | Euclidean and Hausdorff | Type-2 fuzzy sets | Numerical example
[64] | Single decision maker | Z-number distance | Choquet integral | Inquiry selection
[65] | Single decision maker | Euclidean and Hausdorff | Set pair analysis | Case study
[66] | Single decision maker | Fuzzy distance | Nonlinear optimization | Disaster decision making
[67] | Group consensus | Fuzzy similarity measure | Type 1 and 2 fuzzy sets | Illustrative examples
[68] | Single decision maker | Distance to target | Sustainability patterns | Assessment of indicators
[69] | Group consensus | Euclidean | Preference relations | Project selection
[70] | Single decision maker | Similarity measure | Shannon entropy | Document identity identification
[71] | Single decision maker | Context-based distance | Intuitionistic fuzzy sets | Route decision making
[72] | Group consensus | Pair distance | Belief functions | Numerical example
[73] | Single decision maker | Divergence measure | Dempster-Shafer theory | Pattern classification
[74] | Single decision maker | Negation measure | Quantum information fusion | Pattern classification
[75] | Single decision maker | Belief Rényi divergence | Belief entropy | EEG data analysis
[76] | Single decision maker | Belief divergence measure | Dempster-Shafer theory | Pattern classification

In this paper, we argue for some cases in DBS that, due to the utilization of the family of metrics (1), the scale constraints that are put in practice to obtain the final values of parameters as elements of the original scale are actually not needed. Hence, it is interesting to observe that the scale constraints become redundant in practice. Through our discussion, we first introduce this idea to the reader on an abstract physical system; then, we prove the redundancy result on some existing models. Finally, we illustrate our argument on some published examples, a case study, and a brief computational experiment.
On that account, our contribution is two-fold. First, due to the arguments of the paper, decision makers utilizing the distance-metric framework in their objective functions are informed that they do not need additional scale constraints to keep the final values of the parameters of interest within their initial scale. Second, given the elimination of the scale constraints, it is suggested that they may not need to resort to optimization at all when solving such models. Moreover, the removal of such redundant constraints leads to simpler problem representations.
We organized the remainder of this paper as follows. In Section 2, we discuss a mechanical apparatus which is an excellent abstraction of our entire discussion. Section 3 is reserved for proving our argument mathematically. In Section 4, we test our outcomes on some examples and present the case study and the experiment. Subsequently, we close with our conclusions in Section 5.

An Abstract Physical System
Consider a finite number of points located in a bounded two-dimensional region R and associate a nonnegative scalar with each point, representing some form of weight assigned to it. Suppose that we measure the distances between the point locations using the Euclidean metric ℓ2. Furthermore, suppose that we aim to determine a new point in R, say v, such that the sum of the weighted Euclidean distances from v to these points is minimal. This problem has a long history and is called the Weber, min-sum, or 1-median problem (see [77] for a complete treatment of the problem and [78] for a connection to other location problems). In an industrial setting, each distinctive weight usually represents the cost per unit distance of delivering some commodity from v to the respective point. In this scenario, the sum of the resulting weighted Euclidean distances is a function representing the total transportation cost. The location of point v is the minimizer of this function and is called the point of minimum aggregate travel, the Weber point, or simply the spatial median.
The above-stated problem is particularly important for our discussion in this paper. A surprisingly simple solution method for this problem is based on a mechanical device, as follows. Suppose we have a plane representing the two-dimensional region R. First, we drill holes through this plane at the coordinates of the point locations. Then, for each point, we bring out a string and a physical mass of the same magnitude as its specific weight. We tie the ends of the strings together to obtain a knot. Leaving the knot on one surface of the plane, we steer each string through a hole to the other surface. After that, we position the plane horizontally and let it come to rest, such that the free ends of the strings hang down from the holes. Finally, we attach the physical mass associated with each point to its assigned string. The apparatus obtained by following these guidelines is called the Varignon frame. An illustration of a frame designed to find the spatial median of four given points is provided in Figure 1a. Now, suppose that the device we prepared is isolated from the drawbacks of a physical environment; for example, the plane is frictionless, the holes are very small and also frictionless, the strings are so thin that their weights can be ignored, etc. What happens if we leave the masses to gravity and allow the knot to move freely on the surface? It is easy to see that, due to the masses, each string imposes a force component on the knot and tries to pull it towards its hole as much as possible. Eventually, the knot moves to a stable position and stops. This ultimate knot location is the anticipated spatial median of the given points.
To gain valuable insights from this solution, let us now construct the convex hull associated with these points on the surface. Figure 1b is a depiction of the surface where the convex hull, denoted here by C, is shown as a shaded area. It is important to observe at this point that the spatial median cannot occur outside the convex hull, because there is no force component pulling the knot outside the convex hull. What happens, then, when one mass outweighs the sum of all the other masses? As there is no friction, the weight of this particular mass pulls the knot down its own hole. This is the most extreme case that can happen, where the optimal solution of the underlying Weber problem occurs exactly at one of the given points. To see it alternatively, recall the Weber problem under the industrial setting and observe that moving the knot away from this hole to any location in its neighborhood only increases the cost.
To summarize, the optimal solution of the above Weber problem can only occur within a non-dominated subset of feasible solutions. Moreover, this set is exactly the convex hull of the given points. While we aimed to generate a point v in R at the beginning, we came to see that its membership in an even more restricted set is guaranteed. Evidently, we did not need a constraint to impose v ∈ R once we found out that v ∈ C was guaranteed.
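The behaviour of the frame can be reproduced numerically. The sketch below is a minimal Python illustration that approximates the weighted spatial median using Weiszfeld's classical fixed-point iteration, the standard iterative scheme for the Weber problem; the point set, the weights, and the helper name weiszfeld are our own illustrative choices.

```python
import math

def weiszfeld(points, weights, iters=1000, tol=1e-9):
    """Approximate the weighted spatial median (Weber point) of planar
    points by Weiszfeld's fixed-point iteration."""
    # Start from the weighted centroid, which already lies in the hull.
    W = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, points)) / W
    y = sum(w * py for w, (_, py) in zip(weights, points)) / W
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < tol:  # the iterate landed on a given point
                return (px, py)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        nx, ny = num_x / den, num_y / den
        if math.hypot(nx - x, ny - y) < tol:
            break
        x, y = nx, ny
    return (x, y)

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]  # corners of the unit square
x, y = weiszfeld(pts, [1, 1, 1, 1])     # equal masses: knot at the centre
xh, yh = weiszfeld(pts, [10, 1, 1, 1])  # dominant mass: knot pulled to (0, 0)
```

With equal weights on the unit square, the iteration settles at the centre of the hull; with one dominant weight, the "knot" collapses onto the heavy point, mirroring the extreme case discussed above.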

Analysis
For our purposes, consider the following consensus-building problem.

Problem 1. Given n decision alternatives and m committee members, each providing a judgment matrix A_k = (a_ij^k) of order n, compute a consensus matrix C = (c_ij) from the judgment set A = {A_k : k = 1, . . . , m} such that the collective error due to the individual displacements |c_ij − a_ij^k| between the consensus judgment and the personal judgments is as small as possible.
The above problem arises when the members of a committee settle upon acting as an individual decision-making unit. Hence, a consensus matrix representing the members' aggregated preferences in the form of consensus judgments has to be computed. This problem is addressed by the adoption of a metric-based approach that computes a C which differs from A to the smallest possible extent under a pair-wise comparison scheme [25]. In this approach, the collective error originates from a series of displacements of each c_ij from the corresponding m entries a_ij^k arranged in the same position (ij) of the m pair-wise comparison matrices A_k, and it is measured with the general ℓp metric. Though we selected this particular DBS and some related examples to illustrate our arguments, one very important clarification is needed at this point. We stress that our aim is neither to criticize the valuable mathematical models presented nor to judge the usefulness of the results in [25]. Our only aim is to illustrate our argument concerning the scale constraints. These models were selected only because their simplicity enhances the demonstration of our idea and the reader's comprehension of the argument.
The optimization model associated with Problem 1 is as follows:

(M1)  min f(C) = Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j≠i} |c_ij − a_ij^k|^p   (2)

s.t.  0.111 ≤ c_ij ≤ 9,  i, j = 1, . . . , n; j ≠ i.   (3)

It was suggested that the constraint set must satisfy some scale conditions to anchor the consensus judgments to the original scale domain. Hence, the constraints (3) were appended to establish those of Saaty's pair-wise comparison scale (Saaty, 1980). This is an uncomplicated model and a good basis on which to illustrate the redundancy of scale constraints.
Consider a general-purpose pair-wise comparison scale S = (1/s, . . . , 1, . . . , s), where s > 0, and suppose that the personal judgments a_ij^k are selected from S and arranged in m pair-wise comparison matrices A_k. Thus, the scale constraints under S are of the form:

1/s ≤ c_ij ≤ s,  i, j = 1, . . . , n; j ≠ i.

To establish the redundancy result, first let the a_ij^k be arranged on the real line and consider the sets A_ij = {a_ij^k : k = 1, . . . , m}. We introduce the following definition.

Definition 1. The linear hull H(A_ij) of the set A_ij is the intersection of all real line segments containing A_ij.
On the real line, we simply have H(A_ij) = [a_ij^−, a_ij^+], where a_ij^− = min_k {a_ij^k : a_ij^k ∈ A_ij} and a_ij^+ = max_k {a_ij^k : a_ij^k ∈ A_ij}. For convenience, denote the elements of the unknown optimal consensus matrix C* by c*_ij and refer to this matrix as the optimal solution to Problem 1.
Theorem 1. Each entry c*_ij of the optimal solution C* is a member of the corresponding linear hull H(A_ij).

Proof of Theorem 1. To prove Theorem 1 for M1, we create two solutions, C′ and C″, as follows. First, we isolate the entries c′_hl and c″_hl at the position (hl) of both matrices. Then, we fill the matrices by picking an element from each set A_ij and assigning it to the position (ij) of both matrices. Hence, we have c′_ij = c″_ij ∀ i ≠ h, ∀ j ≠ l. Yet, for c′_hl we assign c′_hl = a_hl^+. Then, since c′_ij ∈ A_ij ⊆ H(A_ij) ⊆ H(S) and c′_hl ∈ A_hl ⊆ H(A_hl) ⊆ H(S), clearly C′ is a feasible solution satisfying f(C*) ≤ f(C′). Now, suppose that we select c″_hl ∈ H(S)\H(A_hl), where c″_hl is arranged at a distance ε > 0 from the closest judgment term c′_hl = a_hl^+, as shown in Figure 2a. Observe that only the displacements associated with c′_ij and the corresponding m judgment terms a_ij^k arranged in the same position (ij) of the m pair-wise comparison matrices are considered in the objective function f(C′). A similar argument is also true for f(C″). As c′_ij = c″_ij ∀ i ≠ h, ∀ j ≠ l, the difference between f(C′) and f(C″) is solely due to the terms |c′_hl − a_hl^k|^p and |c″_hl − a_hl^k|^p. For convenience, we bring in the following term:

z(p) = Σ_{k=1}^{m} |c_hl − a_hl^k|^p.   (5)

Note that the judgment terms a_hl^k on the real line are in total order. Hence, without loss of generality, they can be re-indexed, say by q, leading to a_hl^1 ≤ a_hl^2 ≤ . . . ≤ a_hl^m, as shown in Figure 2b. Denote the absolute displacement between each adjacent pair (a_hl^q, a_hl^(q+1)) by d_q, where d_q = a_hl^(q+1) − a_hl^q. Then, in particular, the objective functions f(C′) and f(C″) are given by the following:

f(C′) = F_0 + Σ_{q=1}^{m} ( Σ_{r=q}^{m−1} d_r )^p,
f(C″) = F_0 + Σ_{q=1}^{m} ( ε + Σ_{r=q}^{m−1} d_r )^p,

where F_0 denotes the identical terms contributed by the positions (ij) ≠ (hl). As d_q ≥ 0, ε > 0, and p ≥ 1, a member-to-member comparison of the above objective functions shows that f(C′) < f(C″). Therefore, we have f(C*) ≤ f(C′) < f(C″), which proves Theorem 1 for M1. Theorem 1 clearly shows that the optimal solution of M1 can only occur in a subset of feasible solutions where each entry c*_ij of the optimal solution is a member of the corresponding linear hull H(A_ij). Then, the following results apply.
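The single-entry mechanism behind the proof can be checked numerically. The following Python sketch uses a hypothetical judgment set for one position (hl) and a brute-force grid scan over a Saaty-type scale, both our own illustrative choices; it confirms that the per-entry minimizer never leaves the linear hull of the judgments.

```python
def entry_objective(c, judgments, p):
    """Per-entry contribution to f: the sum of p-th power displacements."""
    return sum(abs(c - a) ** p for a in judgments)

# Hypothetical judgments for one position (hl) on a [1/9, 9] scale:
a_hl = [1/3, 1, 2, 5]
p = 2

# Scan the entire scale; the minimizer stays inside the hull [1/3, 5].
grid = [1/9 + i * (9 - 1/9) / 100000 for i in range(100001)]
best = min(grid, key=lambda c: entry_objective(c, a_hl, p))
print(min(a_hl) <= best <= max(a_hl))  # True
```

For p = 2 the scan homes in on the arithmetic mean of the judgments (approximately 2.083), comfortably inside H(A_hl); repeating the scan for other p ≥ 1 yields the same hull membership.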
Having formalized our previous observation, we now argue that the scale constraints may remain redundant on possible extensions of M1 for solving Problem 1. Note that M1 is a non-linear optimization program due to the order parameter p. Therefore, it may be found complex from a computational perspective. When the rectilinear metric ℓ1 is imposed on M1, the displacements between the consensus judgment and the personal judgments are of the form |c_ij − a_ij^k|. In this case, observe that the resulting model can be converted to a goal program. Along these lines, let us now analyze the following lexicographic goal program substituted for M1 as an alternative solution method [25] for the same problem:

(M2)  min g = λδ + (1 − λ) Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j≠i} (u_ij^k + ϑ_ij^k)

s.t.
c_ij − a_ij^k + u_ij^k − ϑ_ij^k = 0,  i, j = 1, . . . , n; j ≠ i; k = 1, . . . , m,   (10)
Σ_{i=1}^{n} Σ_{j≠i} (u_ij^k + ϑ_ij^k) − δ ≤ 0,  k = 1, . . . , m,   (11)
0.111 ≤ c_ij ≤ 9,  i, j = 1, . . . , n; j ≠ i,   (12)
u_ij^k, ϑ_ij^k ≥ 0,  i, j = 1, . . . , n; j ≠ i; k = 1, . . . , m,

where u_ij^k and ϑ_ij^k are the deviational variables associated with the displacements c_ij − a_ij^k. In the above formulation, g is a convex combination of two essential objectives: (i) the min-max objective, employed for minimizing the maximum total displacement from any pair-wise comparison matrix A_k; hence, δ is a free variable utilized for carrying this objective to the objective function g; and (ii) the min-sum objective, employed for minimizing the total displacement from the judgment set A.
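The role of the deviational variables can be seen in isolation. The small Python sketch below splits a single displacement c − a into a nonnegative pair (u, ϑ), assuming the sign convention c − a + u − ϑ = 0 implied by the proof of Corollary 4; the helper name deviations is our own.

```python
def deviations(c, a):
    """Split the displacement c - a into nonnegative deviational
    variables (u, theta) satisfying c - a + u - theta = 0, mirroring
    constraint (10)."""
    u = max(a - c, 0.0)
    theta = max(c - a, 0.0)
    return u, theta

u, theta = deviations(2.0, 5.0)  # consensus entry below the judgment
# Constraint (10) holds and u + theta recovers the displacement |c - a|:
assert 2.0 - 5.0 + u - theta == 0.0
assert u + theta == abs(2.0 - 5.0)
```

At optimality only one of the pair is nonzero, so the sum u + ϑ linearizes the absolute displacement that the ℓ1 objective needs.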
Finally, λ is a user-defined control parameter utilized for achieving a convex combination of these two objectives. Similarly to M1, the constraint set (12) of this program is found to be redundant. The related construction is as follows.

Corollary 4. Theorem 1 is valid for M2.
Proof of Corollary 4. Recall the solutions C′ and C″. On this occasion, the difference between g(C′) and g(C″) is solely due to the terms |c′_hl − a_hl^k| and |c″_hl − a_hl^k|. Thereby, we utilize z(1) according to (5). We also let the a_hl^k be re-indexed, as shown in Figure 2b. As we selected c′_hl = a_hl^+ and c″_hl = a_hl^+ + ε, due to (10) we must have u_hl^q = 0 for C′ and C″. For C′, we also obtain:

ϑ_hl^q = Σ_{r=q}^{m−1} d_r,  q = 1, . . . , m.

Yet, for C″ we have:

ϑ_hl^q = ε + Σ_{r=q}^{m−1} d_r,  q = 1, . . . , m.

Then, in particular, the objective functions g(C′) and g(C″) are given by the following:

g(C′) = λδ′ + (1 − λ)[ D_0 + Σ_{q=1}^{m} Σ_{r=q}^{m−1} d_r ],
g(C″) = λ(δ′ + ε) + (1 − λ)[ D_0 + Σ_{q=1}^{m} ( ε + Σ_{r=q}^{m−1} d_r ) ],

where D_0 collects the identical deviational terms at the positions (ij) ≠ (hl), and the total displacement of every matrix grows by exactly ε in moving from C′ to C″, so that δ″ = δ′ + ε. Upon simplification and comparison, we obtain:

g(C″) − g(C′) = λε + (1 − λ)mε = ε(λ + (1 − λ)m) > 0,

which shows that g(C*) ≤ g(C′) < g(C″). This proves Theorem 1 for M2.

Remark 2.
H(S)\H(A_hl) = [1/s, a_hl^−) ∪ (a_hl^+, s], and we have proved Theorem 1 by selecting two solutions, C′ and C″, such that c′_hl = a_hl^+ and c″_hl ∈ (a_hl^+, s]. Alternatively, one can prove Theorem 1 by selecting c′_hl = a_hl^− and c″_hl ∈ [1/s, a_hl^−).
In sum, our analysis with M1 shows that its underlying model is in fact the following unconstrained nonlinear convex optimization program:

(M3)  min f(C) = Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j≠i} |c_ij − a_ij^k|^p.

Corollary 4 implies that Corollaries 2 and 3 are valid for M2, and therefore, its core is the following linear goal program:

(M4)  min g = λδ + (1 − λ) Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j≠i} (u_ij^k + ϑ_ij^k)  s.t. (10), (11), and u_ij^k, ϑ_ij^k ≥ 0,

i.e., M2 without the scale constraints (12).
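This reduction suggests that, for common choices of p, no optimization run is needed at all: since M3 has no coupling constraints, its objective separates entry by entry, and each entry can be computed in closed form. The Python sketch below (the helper name consensus_entry is our own illustrative choice) returns the arithmetic mean for p = 2 and a median for p = 1, both of which lie in the linear hull of the judgments by construction.

```python
def consensus_entry(judgments, p):
    """Closed-form per-entry consensus for the unconstrained model M3.
    p = 2 gives the arithmetic mean; p = 1 gives a median. Both lie in
    the linear hull of the judgments, so no scale constraint is needed."""
    xs = sorted(judgments)
    if p == 2:
        return sum(xs) / len(xs)
    if p == 1:
        return xs[(len(xs) - 1) // 2]  # a lower median minimizes L1 error
    raise NotImplementedError("general p requires a 1-D numerical search")

print(consensus_entry([1/3, 1, 2, 5], 2))  # 2.0833... (the mean)
print(consensus_entry([1/3, 1, 2, 5], 1))  # 1 (a median)
```

For other p ≥ 1, each entry still reduces to a one-dimensional convex minimization over the hull, which any line-search routine handles without constraints.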

Illustration of the Arguments
In this section, we will present our arguments numerically, with the aid of four studies.

Study 1-Illustration of M1 and M3: Analysis of a Published Example
To illustrate the redundancy of the scale constraints under a representative DBS, we consider the following example.

Example 1. This is a slight modification of an original example [25] (p. 128), where consensus among four committee members, each providing judgments in the form of a pair-wise comparison matrix, is sought. In the original example, a few illogical personal judgments are purposely tolerated. For the moment, we assume that violations of the reciprocity condition a_ij = 1/a_ji ∀ i, j are not allowed in this modified example. Hence, we adjusted those judgments where reciprocity was violated in order to obtain the following judgment set:

The aim in this example is to compute a consensus matrix out of this judgment set under the distance-metric optimization framework, using the Euclidean metric. Suppose M1 and M3 are utilized for this aggregation task. Comparing the results obtained separately by M1 and M3 will be of key assistance in recognizing the role of the scale constraints. We utilized the Euclidean metric by imposing p = 2 on M1 and M3. We coded both M1, with its scale constraints, and its unconstrained counterpart M3 under the GAMS modelling system. In order to solve the resulting models, we utilized BARON as the global optimization solver. The model statistics of these two formulations for a general case of n decision alternatives and m committee members, i.e., for computing an order n consensus matrix, are shown in Table 2.

Table 2. Model statistics of the formulations.

Statistic | M1 | M2 | M3 | M4
Continuous variables | n(n − 1) | n(n − 1)(2m + 1) + 1 | n(n − 1) | n(n − 1)(2m + 1) + 1
Scale constraints | 2n(n − 1) | 2n(n − 1) | 0 | 0

When the problem is solved separately with formulations M1 and M3, the objective function values and the consensus matrices attained by the two programs under this setting are found to be exactly the same. This result confirms the redundancy of the 2n(n − 1) scale constraints in M1.
The consensus matrix and the objective function value we obtained by the two programs are as follows: Observe that the existence of the scale constraints has no effect on the objective function value or the consensus matrix. We may check whether the consensus judgments in this matrix are elements of the corresponding linear hulls, as indicated by Theorem 1. A summary is as follows: Note that Theorem 1 is validated for M1 with the above results.

Study 2-Illustration of M2 and M4: Analysis of Displacements with Performance Metrics
To illustrate that our reasoning holds true for M2, and in order to analyze the displacements from a given judgment set, we utilize the original set of judgments of Example 1 for this study. This set is the same as A_1–A_4 except that the following pairs, though each contradicts the notion of a pair-wise comparison, are tolerated throughout the process: (a_12^1, a_21^1) = (1/5, 3), (a_14^2, a_41^2) = (1/3, 5), (a_34^2, a_43^2) = (7, 1/5), (a_14^3, a_41^3) = (7, 1/5), and (a_12^4, a_21^4) = (7, 5). Nevertheless, we stick to this set so that our solution is comparable with the results of the original example. Our aim now is to compute a consensus matrix by employing the formulations M2 and M4 and to compare their displacements from the judgments against each other to find out the real significance of the scale constraints.
For this purpose, we coded the linear goal programs M2 and M4 under the same setting and utilized CPLEX as the linear programming solver. We again refer the reader to Table 2 for the model statistics of these two formulations concerning the general case of n decision alternatives and m committee members. Suppose we prefer a balanced solution between the basic min-sum and min-max objectives introduced above. That is to say, our purpose on one hand is to minimize the total displacement from the judgment set; on the other hand, we aim to be as close as possible to the most displaced pair-wise comparison matrix. Such a balance may be captured by equalizing the contributions of these two fundamental objectives to the objective function, which requires imposing λ = 0.5 on M2 and M4. When we solved the problem by using formulations M2 and M4 separately under this setting, the resultant objective function values and consensus matrices were again found to be exactly the same, confirming that the 2n(n − 1) scale constraints in M2 are not functioning. The consensus matrix, hereafter augmented with a subscript to denote the choice of λ, and the objective function value we obtained using these programs are given by: Note that Theorem 1 is validated for M2 with the above results. It is also worth noting that M2 and M4 have a large number of compensatory solutions due to different choices of λ. These solutions reflect different attitudes of the decision maker(s) towards balancing the minimization of the total displacement and of the displacement associated with the most displaced matrix, respectively. For instance, under λ = 1, the following consensus matrix is reported [25] (p. 129): There was no information on the objective function value, but we note that the corresponding value should be g(C_1) = 92.6. To uncover the compensatory mechanism here, let us compare this solution with our findings for λ = 0.5.
First, observe that the objective function values g(C_0.5) = 58.43 and g(C_1) = 92.6 are not cardinally comparable with each other, because they are composed of different contributions of the min-sum and min-max objectives. On the other hand, the mechanism of the deviational variables u_ij^k and ϑ_ij^k suggests that if these two solutions were alternative optima of each other, the total absolute displacements of the resultant consensus matrices from the judgment set should be equal. Along this line, we provide an analysis of the absolute displacements associated with the two solutions in Table 3. For a sensible comparison between them, three practical performance metrics are defined.
The total absolute displacement, denoted by T(C), is the measure of the overall absolute displacement of a consensus matrix C from the entire judgment set A. This totality is given by:

T(C) = Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j≠i} |c_ij − a_ij^k|.

The maximum absolute displacement from any matrix, denoted by δ(C) to ensure consistency with the goal program, is the measure of the total absolute displacement of a consensus matrix C from the most displaced matrix. As the optimization direction in M2 and M4 is through minimization, at optimality, due to constraint (11), we have:

δ(C) = max_k Σ_{i=1}^{n} Σ_{j≠i} |c_ij − a_ij^k|.

The mean absolute displacement, denoted by MAD(C), is the measure of the average absolute displacement of the consensus judgments c_ij from the judgment terms a_ij^k. Provided that there are mn(n − 1) individual displacement terms |c_ij − a_ij^k| to consider in total, we must have:

MAD(C) = T(C) / (mn(n − 1)).
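The three performance metrics are straightforward to compute. The following Python sketch evaluates T(C), δ(C), and MAD(C) for a consensus matrix and a judgment set; the toy matrices are our own choosing, not the case data.

```python
def displacement_metrics(C, A):
    """Compute T(C), delta(C), and MAD(C) for a consensus matrix C and
    a judgment set A (a list of m matrices), skipping diagonal entries."""
    n, m = len(C), len(A)
    per_matrix = []  # total absolute displacement from each matrix A_k
    for Ak in A:
        total_k = sum(abs(C[i][j] - Ak[i][j])
                      for i in range(n) for j in range(n) if i != j)
        per_matrix.append(total_k)
    T = sum(per_matrix)          # total absolute displacement
    delta = max(per_matrix)      # displacement of the most displaced matrix
    mad = T / (m * n * (n - 1))  # mean absolute displacement
    return T, delta, mad

# Toy 2x2 consensus matrix and two hypothetical judgment matrices:
C = [[1, 2], [0.5, 1]]
A = [[[1, 3], [1/3, 1]], [[1, 1], [1, 1]]]
T, delta, mad = displacement_metrics(C, A)
```

Comparing T(C) and MAD(C) across candidate solutions reveals alternative optima, while δ(C) isolates the min-max component, exactly as in the analysis below.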
The values of these metrics for λ = 0.5 and λ = 1 are summarized in Table 4. Observe that we obtained exactly the same values for the total and mean absolute displacements; hence, the solutions for λ = 0.5 and λ = 1 are alternative optimal solutions to the consensus problem. This result numerically confirms that the scale constraints are inessential. Nevertheless, our consensus matrix turned out to be closer to the most displaced matrix, as we obtained δ(C_0.5) = 24.26 versus δ(C_1) = 27.05. This is because our choice of λ = 0.5 enhanced the contribution of the min-max objective to g(C_λ). We were therefore able to attain a slightly better fit to the m individual matrices, while the total displacement from the entire judgment set was unaffected.

Study 3-Treating the Non-Linear Objective Function of M1 and M3: A Real-Life Case Study
In this study, we show how our argument might work in practice by analyzing a real-life case, which also illustrates a surprisingly practical treatment of the non-linear objective function (2). The case was implemented at a private university's newly established Centre for Distance Learning (CDL), located in İzmir, Turkey. The CDL coordinators were concerned with determining which additional servers to utilize in order to cope with the capacity requirements of an increasing number of online night-training programs. Several important performance criteria were relevant to this decision. In an ex ante assessment of the problem, we identified the following criteria: storage capacity, clock speed, connection interface bandwidth capacity, processor frequency, number of cores, the set of commands supported by the processor, number of RAM types supported, processing capabilities of the RAID controller, number of RAID functions, number of ports, RAM capacity, technical support warranties from the provider, and the length of the guarantee period. To finalize the criteria set from these options, a preliminary round of visits to the coordinators and technical staff of the center took place. Once the criteria set was established, the staff conducted an internal screening process to produce a shortlist of available servers that supported the selected technical specifications and fit the budget approved by the university. At the end of this process, three feasible alternatives were identified: the HP S***, IBM X***, and DELL R*** servers, hereafter called Brand 1, Brand 2, and Brand 3, respectively. We chose to hide the model codes at the center's request, as well as the association between the brands and their dummy names, since our local application should not be generalized to conclude that one brand would outperform the others in any similar case.
After profiling the criteria and choice sets, a second round of visits was conducted to reveal the preferences of the coordinators and the technical staff. They were asked to evaluate the performances of the three servers with respect to the established criteria using Saaty's scale [26] and its original definitions. We chose this scale because the staff disclosed that they were familiar with it and felt much more at ease with this comparison style. Five questionnaires were collected during those visits, which resulted in the following pair-wise comparison matrices: The center's aim was to build a group consensus based on the individual assessments of the technical staff. To this end, these valuations first need to be aggregated into a consensus judgment set. One way of processing this information is to use the non-linear convex optimization program M1 with this input.
Nevertheless, based on the arguments in this paper, this aggregation problem can be solved without resorting to constrained optimization, as follows. According to our analysis, we replace the model M1 with the unconstrained non-linear convex optimization program M3 and eliminate the scale constraints. We then construct the sets A_ij, where j ≠ i, and hence do not consider the entries on the main diagonals of the pair-wise comparison matrices A^k. Note that all these entries are simply equal to one, both in the matrices A^k and, consequently, in the consensus matrix. We thus obtain n(n − 1) sets with m elements each. We recognize that model M3 with this input is an n(n − 1)-dimensional Weber problem with m data points, which can be solved with a generalized Weiszfeld procedure [79] without requiring a general-purpose optimization solver. This is the case for p ≥ 2. Had we used p = 1, we could have done even better: considering the absolute displacements from the judgment terms and thus imposing the rectilinear metric ℓ1, the model M3 becomes a separable program that further decomposes into n(n − 1) independent univariate Weber problems, all of which are very easy to solve.
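To make this shortcut concrete, the following sketch implements both routes. The function names are ours, and the p = 2 variant is a plain fixed-point Weiszfeld iteration rather than the specific generalized procedure of [79]. An equally weighted univariate Weber problem is solved by the median, which is why the ℓ1 route is so easy; in both cases the solution lies in the convex hull of the judgment terms, so it respects the scale without explicit constraints.

```python
import numpy as np

def l1_consensus(matrices):
    """p = 1: the problem separates into n(n-1) univariate Weber problems,
    each solved by the median of the corresponding judgment terms."""
    C = np.median(np.stack(matrices), axis=0)
    np.fill_diagonal(C, 1.0)
    return C

def weiszfeld_consensus(matrices, tol=1e-9, max_iter=1000):
    """p = 2: fixed-point (Weiszfeld-type) iteration on the n(n-1)-dimensional
    Weber problem whose m data points are the stacked off-diagonal entries."""
    n = matrices[0].shape[0]
    mask = ~np.eye(n, dtype=bool)
    pts = np.stack([A[mask] for A in matrices])   # shape (m, n(n-1))
    x = pts.mean(axis=0)                          # start at the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                     # iterate landed on a data point
            break
        w = 1.0 / d
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    C = np.eye(n)
    C[mask] = x
    return C
```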
Turning our attention to the case study at hand, recall that we utilize the Euclidean metric with p = 2; following the above process, we obtain the following six sets, with five elements each: Finally, to determine the rank order of the server alternatives, the consensus judgments should be prioritized. Using the row-geometric-mean method [80], we obtain the final priorities 0.464, 0.327, and 0.208 for Brands 1, 2, and 3, respectively. These findings show that the evaluations of the technical staff suggest installing server Brand 1, which was indeed the final decision of the CDL coordinators.
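The row-geometric-mean prioritization used above is a standard computation and can be sketched as follows (the function name is ours):

```python
import numpy as np

def row_geometric_mean_priorities(C):
    """Derive priority weights from a consensus pairwise comparison matrix
    via the row-geometric-mean method, normalized to sum to one."""
    gm = np.exp(np.log(C).mean(axis=1))   # geometric mean of each row
    return gm / gm.sum()
```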

Study 4-Scale Constraints as Auxiliary? A Computational Experiment
Suppose now that Theorem 1 is not known to the decision maker(s), so it is not applied in this study. For rational decision maker(s), appending auxiliary constraints that restrict the solution space as much as possible is a sound idea resorted to in many optimization models in the interest of computational efficiency. In this section, we accordingly investigate whether the scale constraints are essential to the distance-based consensus searching of model M1 in this sense, i.e., whether they lead to a reduction in computation times when compared to the performance of its unconstrained counterpart M3.
For this purpose, we considered reasonable numbers of decision alternatives and committee members in a sensible decision situation and augmented the problem size gradually by increasing both n and m simultaneously from 3 to 15. This sufficed to obtain a set of test instances for our practical purposes. For each instance, the preference information was randomly generated according to Saaty's scale [26] and arranged in matrices of the appropriate number and dimension, with each matrix forced to comply with the reciprocity condition to ensure authenticity. We coded the respective models in the GAMS modelling system and utilized BARON as the global optimization solver. The computation was carried out on a 2.27 GHz CPU with 4 GB RAM. Our findings are reported in Table 5, where the abbreviation Nscale denotes the number of scale constraints utilized and Nnzero the number of non-zero elements required in the relevant model. The CPU times are decoupled into compilation (Tcom), model generation (Tgen), and execution (Texe) efforts and measured in computer-seconds under the above-mentioned setting. Our experience with the instances under this setting revealed no meaningful difference between the constrained and unconstrained models in terms of CPU effort. Clearly, the scale constraints do not function positively from the point of view of computational efficiency either.
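The instance generation step can be sketched as follows. The exact sampling scheme used in the experiment is not detailed here, so uniform sampling over the 17 values of Saaty's scale is our assumption; reciprocity is enforced by construction.

```python
import numpy as np

def random_reciprocal_matrix(n, rng):
    """Generate a random n x n pairwise comparison matrix whose upper
    triangle is drawn from Saaty's scale and whose lower triangle is
    filled in to satisfy the reciprocity condition a_ji = 1 / a_ij."""
    # Saaty's scale: 1/9, 1/8, ..., 1/2, 1, 2, ..., 9 (17 values)
    scale = np.concatenate(([1.0 / k for k in range(9, 1, -1)],
                            np.arange(1, 10, dtype=float)))
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = rng.choice(scale)
            A[j, i] = 1.0 / A[i, j]
    return A
```

An instance with m committee members is then simply a list of m such matrices, e.g. `[random_reciprocal_matrix(n, rng) for _ in range(m)]`.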
Lastly, we would like to highlight that we are not trying to beat the computation times of the constrained model with the unconstrained model in this table; this study should not be understood in that manner. We simply point out to the decision makers that the 2n(n − 1) scale constraints are mathematically redundant; most probably, all of them will be pruned by powerful solvers at the pre-processing stage, before the real optimization phase, and hence never become active. Therefore, the true underlying model is the unconstrained one, as we have shown.

Conclusions
The DBS literature accommodates a considerable variety of procedures and cases that rest upon novel applications of distance measures. It is natural, as well as convenient for the decision maker(s), to enforce the well-practiced and well-received ℓp metrics and their purposeful extensions when implementing a procedure, and then to describe the output in the original input domain of a specific problem. When a parameter of interest is to be elucidated in this original domain, the decision maker(s) usually resort to constraints that serve this purpose.
On the other hand, what is occasionally overlooked is that such a membership may already be ensured by the mathematical properties of the distance metric utilized. In this paper, we introduced this idea. We first presented the implications of a completely different physical system whose behavior is interestingly in line with our arguments. We then constructed the necessary mathematical proofs by considering a group consensus scenario, a well-represented and well-studied problem in DBS, and two existing mathematical models. We showed that our mathematical construction holds true through four illustrative studies. Our findings indicate the existence of situations in DBS where the final value of a variable of concern cannot fall outside a non-dominated subset of the feasible solutions associated with the problem under study. In this regard, the scale constraints used in DBS to delimit such variables are found to be redundant in those situations.
We would like to note that there exist other DBS settings where our arguments apply in some way. Examples include, but are not limited to, approximation scenarios towards a consistent judgment matrix, the inference of priorities for chosen set elements, consensus searching between minority and majority principles, and the analysis of bargaining scenarios. Under the principles provided in this paper, it can be shown that scale constraints adopted in a similar fashion are also inessential. This requires further analysis of the mathematical properties and the real meaning of similar constraints under such scenarios.
In the future, we propose that researchers apply a similar analysis to extensions of distance metrics, as well as to their modifications based on the specific theory under study, to see whether the same arguments hold. In this manner, one can explore similarity and dissimilarity measures, fuzzy distances, and other context-based distance definitions in the literature.