An Efficient Approach to Solve the Constrained OWA Aggregation Problem

Abstract: Constrained ordered weighted averaging (OWA) aggregation attempts to solve the OWA optimization problem subject to multiple constraints. The problem is nonlinear because the variables of the arguments are reordered in the objective function, and the solution approach via mixed integer linear programming is quite complex even for the problem with a single constraint whose coefficients are all one. Recently, this has been relaxed to allow a constraint with variable coefficients, but the solution approach is still abstruse. In this paper, we present a new, intuitive method that constructs a problem with auxiliary symmetric constraints in order to convert it into a linear programming problem. The side effect is that many small sub-problems must be solved. Interestingly, however, we discover that they share common symmetric features in the extreme points of the feasible region of each sub-problem. Consequently, we show that the structure of the extreme points and the reordering process of input arguments peculiar to the OWA operator lead to a closed-form optimal solution to the constrained OWA optimization problem. Further, we extend our findings to the OWA optimization problem constrained by a range of order-preserving constraints and present the closed-form optimal solutions.


Introduction
Yager [1] introduced the ordered weighted averaging (OWA) operator to provide a parameterized class of mean-like operators that can be used to aggregate a collection of input arguments. A unique feature of the OWA operator is that the operator weights are associated with the arguments that are ordered by their magnitudes. Therefore, the OWA operator is the inner product of an ordered input vector and a weighting vector. In the short time since its first appearance, the OWA operator has been applied in diverse fields such as neural networks, database systems, fuzzy logic controllers, multi-criteria decision making, data mining, location-based services (LBS), and geographical information systems (GIS).
Yager [2] presented a new class of OWA aggregation problems, the so-called constrained OWA aggregation problem, and exemplified a simplified problem with a single constraint to show that it can be formulated as a mixed integer linear programming problem. Later, Carlsson et al. [3] presented a simple algorithm for the exact computation of optimal solutions to a single constrained OWA aggregation problem. Ahn [4] presented a solution to the same problem through a linear transformation that is accomplished by incorporating weak inequality constraints representing order relations of the variables. Recently, Coroianu and Fullér [5] presented a proof for the constrained OWA aggregation problem with a single constraint having variable coefficients, which was later extended to co-monotone constraints that share the same ordering permutation of the coefficients [6]. Chen and Tang [7] considered a constrained OWA aggregation problem with a single constraint and lower bounded variables. In particular, they presented four types of solutions, depending on the number of zero elements, for the three-dimensional constrained OWA aggregation problem with lower bounded variables. They also dealt with the maximization constrained OWA problem with lower bounded variables and the minimization constrained OWA problem with upper bounded variables; for a three-dimensional case of these models, they presented the explicit optimal solutions theoretically and empirically [8].
In this paper, we deal with a single constrained OWA aggregation problem having variable coefficients in a completely different manner on the basis of the development by Ahn [4]. The proposed method is very intuitive, and thus easy to understand, and can be readily extended to include order-preserving constraints. Specifically, we incorporate auxiliary weak inequality constraints of the variables in an attempt to transform the problem into linear programming (LP) problems, which inevitably leads to as many LPs as the factorial of the number of variables. It is evident that the reordering property of the OWA aggregation instantly leads to the LP equivalent once we add the weak inequalities to the set of original constraints. Instead of solving each LP and selecting the largest objective function value as the optimal solution (in fact, the number of problems to be solved grows factorially with the number of variables), we determine the extreme points of the feasible region of each LP. Interestingly, we find that the set of extreme points consists of a vector with one positive element, a vector with two equal positive elements, etc.; moreover, the places where the positive elements appear solely depend on the order of the variables in the incorporated weak inequalities. Consequently, we show that the features of the extreme points and the reordering process of input arguments peculiar to the OWA operator lead to a closed-form optimal solution to the constrained OWA optimization problem. Above all, we notice that this can be achieved by a novel idea of incorporating symmetric weak inequalities into the set of original constraints.
The paper is organized as follows: in Section 2, we present a solution to a single constrained OWA optimization problem with variable coefficients; in Section 3, we deal with a multiple constrained OWA optimization problem, followed by concluding remarks in Section 4.

The OWA Operator
Definition 1 (Yager [1]). An OWA operator of dimension n is a mapping F : R^n → R that has an associated weighting vector w = (w_1, w_2, ..., w_n) such that ∑_{j=1}^n w_j = 1 and w_j ≥ 0, j = 1, ..., n. The function value F of the input arguments x = (x_1, x_2, ..., x_n) determines the aggregated value of the arguments in such a manner that F(x_1, x_2, ..., x_n) = ∑_{j=1}^n w_j y_j, with y_j being the jth largest element of x = (x_1, x_2, ..., x_n).
The OWA operator is characterized by two distinguished procedures, namely, (a) reordering the input arguments and (b) associating weights with the reordered arguments. The reordering process, which is substantially different from that of other aggregation operators, has a crucial impact on the final aggregation outcomes. A proper determination of operator weights based on the mathematical programming approach is governed principally by two criteria: the attitudinal character (or orness) and the degree of information (or input argument) usage. Various objective functions are used to maximize information usage, whereas the attitudinal character is consistently imposed as a constraint in the optimization problem. The nonlinear objective functions in the mathematical programming approach include the maximum entropy, the minimal variability, the maximal Rényi entropy, the least square method, and the chi-square method. The linear objective functions, relatively few in number, include the minimax (or improved) disparity [9,10] and the parametric approach [11]. Readers may refer to the surveys of recent developments in the determination of the OWA operator weights [12,13]. The OWA operator weights determined by any one of the aforementioned methods serve as the coefficients of the objective function in the OWA aggregation problem.
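To make the two-step procedure concrete, the reordering-then-weighting evaluation can be sketched in a few lines of Python (the function name `owa` is ours, introduced only for illustration):

```python
def owa(w, x):
    """Evaluate an OWA operator: sort the input arguments in
    descending order, then take the inner product with the
    weighting vector (Definition 1)."""
    if abs(sum(w) - 1.0) > 1e-9 or any(wj < 0 for wj in w):
        raise ValueError("weights must be nonnegative and sum to 1")
    y = sorted(x, reverse=True)  # step (a): reorder by magnitude
    return sum(wj * yj for wj, yj in zip(w, y))  # step (b): weight

# The weights attach to magnitudes, not positions: permuting the
# input arguments never changes the aggregated value.
value = owa([0.1, 0.4, 0.3, 0.2], [0.2, 0.5, 0.1, 0.3])
```

The sorting step is exactly what makes the constrained aggregation problem of the next section nonlinear: the pairing of weights with arguments depends on the (unknown) ordering of the variables.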

The Constrained OWA Aggregation
Yager [2] considered the problem of maximizing an OWA aggregation constrained by a collection of linear inequalities:

maximize F(x_1, ..., x_n) = ∑_{j=1}^n w_j y_j
subject to Ax ≤ b, x_j ≥ 0, j = 1, ..., n,

where y_j is the jth largest element of x = (x_1, x_2, ..., x_n), A denotes an m × n matrix whose elements are a_ij, i = 1, ..., m, j = 1, ..., n, and b denotes an m-vector whose elements are b_i, i = 1, ..., m. Note that in the above constrained OWA aggregation problem, the input arguments are not specific values but variables that can take on a range of values within the feasible region of the constraints. Further, Yager [2] exemplified a simple constrained OWA aggregation problem with a single constraint, shown in (3), to demonstrate that it can be formulated as a mixed integer linear programming problem:

maximize F(x_1, ..., x_n) = ∑_{j=1}^n w_j y_j
subject to x_1 + x_2 + ··· + x_n ≤ 1, x_j ≥ 0, j = 1, ..., n, (3)

where y_j denotes the jth largest element of the bag {x_1, ..., x_n}.
Here, we first deal with the same constrained OWA aggregation problem but from a totally different perspective, and later we extend it to one with more constraints. Specifically, we consider the single constrained problem (4) with variable coefficients:

maximize F(x_1, ..., x_n) = ∑_{j=1}^n w_j y_j
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n ≤ 1, x_j ≥ 0, j = 1, ..., n. (4)

Before proceeding to the general n-variable case, we briefly introduce a constrained OWA optimization problem with three variables in Figure 1, which helps to grasp the concept behind our proposed method.

Layer 1 (Formulation). The optimal solution to the OWA optimization problem with variable coefficients in three variables is to be sought.
Layer 2 (Decomposition). The original problem in Layer 1 is decomposed into six sub-problems, each of which has weak inequalities over the three variables.

Layer 3 (Linearization). Each (non-linear) sub-problem in Layer 2 becomes a linear programming problem due to the incorporation of the weak inequalities and the definition of the variables y_i, the ith largest of {x_1, x_2, x_3}.
Layer 4 (Extreme Points). The set of extreme points for each sub-problem is determined. Each linearized sub-problem shares common properties in the extreme points of its feasible region. First, the set of extreme points consists of a vector with one positive element, a vector with two equal positive elements, and a vector with three equal positive elements. Moreover, the places where the positive elements appear only depend on the order of permutations of the incorporated weak inequalities.
Layer 5 (Local Optimal Solution). The local optimal objective value F*_i, i = 1, ..., 6, for each sub-problem can be determined by multiplying the coefficients of the objective function by the elements of the extreme points according to the order of the variables in the weak inequalities, since the ordered arguments are associated with the operator weights.
Layer 6 (Global Optimal Solution). The global optimal objective value is the largest among the six local optimal objective values. These values can be grouped by the number of positive elements (one, two, or three) across the six sub-problems, and the largest among them is then selected.
In general, the nonnegative real space R^n_+ is the union of the n! sets defined by the weak inequality chains, each of which has an equal volume by symmetry, and thus R^n_+ can be equivalently expressed as:

R^n_+ = ∪_σ { x ∈ R^n : x_{σ(1)} ≥ x_{σ(2)} ≥ ··· ≥ x_{σ(n)} ≥ 0 }.

Incorporating this representation of R^n_+ into Problem (4) does not change its feasible region, and thus Problem (4) is equivalent to (5):

maximize F = ∑_{j=1}^n w_j y_j
subject to α_1 x_1 + ··· + α_n x_n ≤ 1, x ∈ ∪_σ { x : x_{σ(1)} ≥ x_{σ(2)} ≥ ··· ≥ x_{σ(n)} ≥ 0 }. (5)

Loosely speaking, the optimal solution to (4) corresponds to the largest objective function value of the n! sub-problems, F* = max_{i=1,...,n!} F^(i)*, where F^(i)* is obtained by solving the following problem (Ahn [4] used a similar reasoning when solving the OWA optimization problem with a single constraint, the coefficients of which are all equal to one):

maximize F = ∑_{j=1}^n w_j y_j
subject to α_1 x_1 + ··· + α_n x_n ≤ 1, x_{σ(1)} ≥ x_{σ(2)} ≥ ··· ≥ x_{σ(n)} ≥ 0, (6)

where σ(·) denotes a permutation of {1, 2, ..., n}.
For convenience, consider the constraint chain {x_1 ≥ x_2 ≥ ··· ≥ x_n}, where σ(1) = 1, σ(2) = 2, ..., σ(n) = n, among the n! different permutations. Incorporating this into Problem (4) immediately leads to the LP counterpart (7), since y_j corresponds to x_j for all j:

maximize F = ∑_{j=1}^n w_j x_j
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n = 1, x_1 ≥ x_2 ≥ ··· ≥ x_n ≥ 0. (7)

Note that the inequality constraint in (4) is transformed into an equality in (7). If we instead add the constraint chain {x_n ≥ x_{n−1} ≥ ··· ≥ x_1} to (4), the resulting linearized problem will be:

maximize F = ∑_{j=1}^n w_j x_{n+1−j}
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n = 1, x_n ≥ x_{n−1} ≥ ··· ≥ x_1 ≥ 0.

Instead of solving many LP problems one by one (in fact, we would have to solve n! LP problems), we attempt to find properties of the extreme points of the feasible region of the constraints that are relevant to all the LP problems. To do so, we start by finding the extreme points of the equality constraint in (7) alone:

{(1/α_1, 0, ..., 0), (0, 1/α_2, 0, ..., 0), ..., (0, ..., 0, 1/α_n)}. (8)

Then, we determine the new extreme points of the feasible region restricted by adding each weak inequality constraint in (7) to the set (8) one by one. Therefore, the procedure requires determining the extreme points whenever incorporating {x_i − x_{i+1} ≥ 0}, i = 1, ..., n − 1.
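Before exploiting this structure analytically, the n!-sub-problem decomposition can be checked numerically. The sketch below (the helper names are ours, and it assumes the RHS equals 1) enumerates, for every ordering cone, the candidate extreme points in which the first k variables of the cone's order are equal and the rest are zero, and evaluates the OWA objective at each:

```python
from itertools import permutations

def owa_value(w, x):
    """OWA objective: weights paired with arguments sorted descending."""
    y = sorted(x, reverse=True)
    return sum(wj * yj for wj, yj in zip(w, y))

def brute_force_max(w, alpha):
    """Scan all n! ordering cones; in each cone, the candidate extreme
    points set the first k coordinates (in the cone's order) equal so
    that alpha . x = 1 holds, with the remaining coordinates zero."""
    n = len(w)
    best = 0.0
    for sigma in permutations(range(n)):
        for k in range(1, n + 1):
            denom = sum(alpha[sigma[i]] for i in range(k))
            x = [0.0] * n
            for i in range(k):
                x[sigma[i]] = 1.0 / denom
            best = max(best, owa_value(w, x))
    return best
```

For instance, with w = (0.5, 0.3, 0.2) and α = (2, 1, 1), the scan returns 0.5, attained by placing all mass on a variable with the smallest coefficient. The factorial growth of this scan is exactly what the closed-form characterization via extreme points avoids.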

Corollary 1. If the optimal solution occurs at the kth place (1 ≤ k ≤ n), the optimal extreme point is determined by

x* = 1/∑_{l=1}^k α(l) at the µ(i) place, i = 1, ..., k, and 0 elsewhere, (11)

where µ(i) = j such that α(i) = α_j, i.e., µ(i) is the original position of the ith smallest coefficient. Further, the set of constraints incorporated for the corresponding linearized problem places the variables x_{µ(1)}, ..., x_{µ(k)} first in the ordering.

Proof. The proof follows directly from Theorem 1.
Proof. Corollary 2 follows directly from the proof of Theorem 1, since the extreme points of (4) are equal to those of (12).

Corollary 3. Consider the special case of α_i = 1 for all i in (4). Then, the optimal objective function value is determined by

F* = max_{1≤k≤n} (1/k) ∑_{i=1}^k w_i. (14)

Proof. We simply obtain the result (14) since α(i) = 1 for all i and thus ∑_{i=1}^k α(i) = k for 1 ≤ k ≤ n.
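In the all-ones case of Corollary 3, the optimum reduces to the best running average of the OWA weights; a minimal sketch (the function name is ours):

```python
def owa_max_unit_coeffs(w):
    """Corollary 3 special case: with every constraint coefficient
    equal to one, the optimum is the largest running average of the
    OWA weights, max_k (w_1 + ... + w_k) / k."""
    best, acc = 0.0, 0.0
    for k, wk in enumerate(w, start=1):
        acc += wk
        best = max(best, acc / k)  # running average of the first k weights
    return best
```

For w = (0.1, 0.4, 0.3, 0.2) the running averages are 0.1, 0.25, 0.8/3, and 0.25, so the optimum is 0.8/3, attained at k = 3.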
For convenience, we denote by E the set of extreme points of the feasible region formed by the constraints {x_1 + x_2 + ··· + x_n = 1, x_1 ≥ x_2 ≥ ··· ≥ x_n ≥ 0}:

E = {(1, 0, ..., 0), (1/2, 1/2, 0, ..., 0), ..., (1/n, 1/n, ..., 1/n)}. (15)

Finally, consider the following constrained OWA optimization problem (16), which differs from (4) only in the right-hand side (RHS):

maximize F(x_1, ..., x_n) = ∑_{j=1}^n w_j y_j
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n ≤ b, x_j ≥ 0, j = 1, ..., n, (16)

where y_j denotes the jth largest element of the bag {x_1, ..., x_n}. We conclude, on the basis of the proof of Theorem 1, that the optimal objective function value of Problem (16) simply corresponds to:

F* = max_{1≤k≤n} b (∑_{i=1}^k w_i) / (∑_{i=1}^k α(i)). (17)

Example 1. Consider the following constrained OWA optimization problem presented by Coroianu and Fullér [5]:

maximize F = (1/10) y_1 + (4/10) y_2 + (3/10) y_3 + (2/10) y_4
subject to x_1 + 3x_2 + 2x_3 + x_4 ≤ 1, x_j ≥ 0, j = 1, ..., 4,

where y_j denotes the jth largest element of the bag {x_1, ..., x_4}.
It follows that α(1) = 1, α(2) = 1, α(3) = 2, and α(4) = 3 from the set of coefficients C = {1, 3, 2, 1}. Note that we break the tie arbitrarily and choose α(1) = 1. The extreme point (1, 0, 0, 0), among the extreme points with one positive element (zeros elsewhere), yields the highest objective function value of (1/10) · 1/α(1) = 1/10. Considering the extreme points with two or three positive elements, we obtain the highest objective values

(1/10 + 4/10)/(α(1) + α(2)) = 1/4 and (1/10 + 4/10 + 3/10)/(α(1) + α(2) + α(3)) = 1/5.

Finally, we obtain the following objective function value for the extreme points with all four positive elements:

(1/10 + 4/10 + 3/10 + 2/10)/(α(1) + α(2) + α(3) + α(4)) = 1/7.

Therefore, the optimal objective function value of the constrained OWA optimization problem is F* = max{1/10, 1/4, 1/5, 1/7} = 1/4. To illustrate the linearization, if we incorporate {x_1 ≥ x_4 ≥ x_2 ≥ x_3} into the problem, it will be linearized as follows:

maximize F = (1/10) x_1 + (4/10) x_4 + (3/10) x_2 + (2/10) x_3
subject to x_1 + 3x_2 + 2x_3 + x_4 = 1, x_1 ≥ x_4 ≥ x_2 ≥ x_3 ≥ 0,

since y_1 = x_1, y_2 = x_4, y_3 = x_2, and y_4 = x_3.
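The computation of Example 1 generalizes directly: sort the constraint coefficients in ascending order and take the best ratio of partial weight sums to partial coefficient sums, scaled by the RHS b. A sketch under our own naming (a reading of the closed-form result, not the authors' code):

```python
def owa_max_closed_form(w, alpha, b=1.0):
    """Closed-form optimum of: maximize sum_j w_j y_j subject to
    alpha . x <= b, x >= 0, where y_j is the jth largest x_j.
    The coefficients are sorted ascending (the alpha(i) of the text)
    and the best prefix ratio b * (w_1 + ... + w_k) /
    (alpha(1) + ... + alpha(k)) over k = 1, ..., n is returned."""
    a = sorted(alpha)                  # alpha(1) <= ... <= alpha(n)
    best = wsum = asum = 0.0
    for wk, ak in zip(w, a):
        wsum += wk                     # partial sum of weights
        asum += ak                     # partial sum of sorted coefficients
        best = max(best, b * wsum / asum)
    return best
```

For Example 1 (w = (1/10, 4/10, 3/10, 2/10), α = (1, 3, 2, 1), b = 1), the prefix ratios are 1/10, 1/4, 1/5, and 1/7, so the function returns 1/4, matching the value computed above.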
On the other hand, Corollary 2 indicates that the optimal solution to the minimized version of the example is:

An Extension of a Single Constrained OWA Aggregation Problem
In this section, we extend the development of Section 2 to solve the OWA aggregation problem constrained by a range of incompletely specified arguments. Ahn [4] has dealt with a similar topic in the context of OWA optimization subject to a sum-to-unity constraint, and here we extend it to the OWA optimization problem with variable coefficients. To illustrate, consider the following constrained OWA optimization problem, which differs from (16) in that constraints representing order relations among the variables x_j are incorporated:

maximize F = ∑_{j=1}^n w_j y_j
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n ≤ b, x_1 ≥ 2x_2 ≥ ··· ≥ nx_n, x_j ≥ 0, j = 1, ..., n.

These order relation constraints ensure that x_1 is the largest, x_2 the second largest, ..., and x_n the least (ties can occur among the x_j), which leads to an equivalent LP problem:

maximize F = ∑_{j=1}^n w_j x_j
subject to α_1 x_1 + α_2 x_2 + ··· + α_n x_n ≤ b, x_1 ≥ 2x_2 ≥ ··· ≥ nx_n, x_j ≥ 0, j = 1, ..., n. (18)
(The order relation constraints of (18) are a special case of the more general ones {β_1 x_1 ≥ β_2 x_2 ≥ ··· ≥ β_n x_n}.) Instead of obtaining a solution via an LP package, we first determine the extreme points of the feasible region formed by the constraints and then find a closed-form solution. The feasible region formed by the order relation constraints of (18) is characterized by the non-normalized extreme directions emanating from the origin, D = (d_1, d_2, d_3, ..., d_n), where:

d_i = (1/β_1, 1/β_2, ..., 1/β_i, 0, ..., 0), i = 1, ..., n.

Therefore, the extreme points of the feasible region formed by the constraints in (18) are simply the intersecting points of the halfspace and the non-normalized directions; to find them, we solve

α_1 (c/β_1) + α_2 (c/β_2) + ··· + α_i (c/β_i) = b for c, i = 1, ..., n.

The extreme points are determined by substituting each derived c into the corresponding direction vector d_i, i = 1, ..., n, which finally leads to E_1 = (v_1, v_2, ..., v_n):

v_i = (b / ∑_{j=1}^i α_j/β_j) · (1/β_1, 1/β_2, ..., 1/β_i, 0, ..., 0), i = 1, ..., n. (19)

Remark 1. Let us reconsider the OWA optimization problem in (7) that was formulated to transform the single constrained OWA optimization problem into its LP equivalent. Obviously, the OWA optimization problem (7) is a special case of (18) with b = 1 and β_j = 1 for all j. Therefore, the extreme points of the constraints in (7) can be easily determined by replacing b with 1 and β_j with 1 for all j in E_1 in (19), which leads to the equivalent extreme points of (10).
The extreme points of the constraints in the example are determined by E_1:

v_1 = (1, 0, 0, 0), v_2 = (2/5, 1/5, 0, 0), v_3 = (6/19, 3/19, 2/19, 0), v_4 = (12/41, 6/41, 4/41, 3/41).

Each extreme point clearly represents the order relations of the variables such that x_1 is the largest, x_2 is the second largest, x_3 is the third largest, and x_4 is the least, which thus leads to the equivalent linearized objective function F = (1/10) x_1 + (4/10) x_2 + (3/10) x_3 + (2/10) x_4, since y_1 = x_1, y_2 = x_2, y_3 = x_3, and y_4 = x_4. The optimal objective function value is simply determined by:

F* = max{1/10, 3/25, 12/95, 27/205} = 27/205.
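The scan over the extreme points E_1 can be sketched as follows (our names; the sketch assumes β is positive and nondecreasing, as with β_j = j in (18), so each v_i is already sorted in descending order and y_j = x_j):

```python
def owa_max_order_constrained(w, alpha, beta, b=1.0):
    """Scan the extreme points v_i = c_i * (1/beta_1, ..., 1/beta_i, 0, ..., 0),
    where c_i makes the resource constraint alpha . x = b tight, and
    return the largest value of the linearized objective sum_j w_j x_j."""
    n = len(w)
    best = 0.0
    for i in range(1, n + 1):
        c = b / sum(alpha[j] / beta[j] for j in range(i))
        x = [c / beta[j] if j < i else 0.0 for j in range(n)]
        # within this cone x is already sorted descending, so y_j = x_j
        best = max(best, sum(wj * xj for wj, xj in zip(w, x)))
    return best
```

With w = (1/10, 4/10, 3/10, 2/10), α = (1, 3, 2, 1), β = (1, 2, 3, 4), and b = 1, the four extreme points evaluate to 1/10, 3/25, 12/95, and 27/205, and the scan returns 27/205 ≈ 0.1317, in agreement with the computation above.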

Concluding Remarks
The constrained OWA aggregation attempts to solve the OWA optimization problem subject to multiple constraints. In this paper, we present a new, intuitive approach to solving the OWA optimization problem with a single constraint having variable coefficients. Although this problem was previously dealt with by Coroianu and Fullér [5], our approach is distinct from theirs in that we linearize the nonlinear problem by additionally incorporating symmetric weak inequalities. Exploiting the extreme points of the feasible region of each linearized problem reveals interesting properties that eventually lead to the theorems and corollaries presented here. These results are easy to understand compared to theirs and are readily extendable to other order-preserving constraints. To this end, we deal with the OWA optimization problem constrained by a range of incompletely specified variables and find closed-form optimal solutions that can be readily applied to solve the constrained OWA optimization problem.
Future research topics include the constrained OWA optimization problem with various types of constraints, for example, bounded variables, a convex sequence of variables, etc.