An Iterative Approach for the Solution of the Constrained OWA Aggregation Problem with Two Comonotone Constraints

In this paper, first, we extend the analytical expression of the optimal solution of the constrained OWA aggregation problem with two comonotone constraints by also including the case when the OWA weights are arbitrary non-negative numbers. Then, we indicate an iterative algorithm that precisely determines whether a constraint in an auxiliary problem is either binding or strictly redundant. Actually, the binding constraint (or two binding constraints, as this case may also occur) is essential in expressing the solution of the initial problem.


Introduction
Since the introduction of ordered weighted averaging operators (OWA operators) by Yager in [1], this topic has attracted huge interest in both theoretical and practical directions. For detailed accounts on the state of the art, we recommend, for example, the works [2][3][4][5]. In this paper, our goal is to continue the investigation of the so-called constrained OWA aggregation problem, where one seeks to optimize the OWA operator under linear constraints. Yager started this research in [6] by proposing an algorithm for the maximization problem that can solve the problem in some special cases. Then, in [7], the authors solved the problem in the special case when we have one restriction and all coefficients in the constraint are equal. In paper [8], the result is generalized; this time, the coefficients in the single constraint are arbitrary. Another approach for this case can also be found in the recent paper [9]. The minimization problem in the case of a single constraint is solved in [10]. Recently, in the paper [3], the authors found a way to solve the maximization and minimization problems in the case when we have two comonotone constraints. In this contribution, we continue the work started in [3]. We will discuss the maximization problem since the results for the minimization problem can be easily deduced using the patterns from the papers [3,10]. First, we find a simple way to generalize the main results in [3] to the case when the OWA weights are arbitrary non-negative numbers. In [3], we assumed these weights to be strictly positive to avoid division by zero in some cases. However, these cases can be eliminated as they will give some redundant constraints. In this way, the results apply to OWA operators having some weights equal to zero, such as, for example, the Olympic weights (see, e.g., [2]).
We reiterate, as in our other papers, that for solving such problems, one effective approach is to use the dual of some linear programs derived from the initial constrained OWA aggregation problem. Other optimization problems also use the dual of linear programs (see [11,12]). In the papers mentioned earlier, the idea is to optimize the OWA operator under some linear constraints. Another problem of great interest among researchers is to optimize the OWA weights under some additional constraints (see, e.g., [2,5,13,14]). Finally, let us also mention that the study of OWA-type operators and their generalizations is a dynamic process, and numerous interesting directions have opened in recent years (see, e.g., [15][16][17][18][19]).
In Section 2, we present the constrained OWA aggregation problem with comonotone constraints and, in the special case of two comonotone constraints, we extend the main results proved in [3] to the case when the OWA weights are arbitrary non-negative numbers. In Section 3, we present the iterative algorithm for an auxiliary problem associated with the initial problem, which finds at every step a constraint that is either binding or strictly redundant. In Section 4, we test this algorithm on concrete examples, and we also discuss its efficiency. The paper ends with conclusions that sum up the main contributions as an important step towards the general setting of an arbitrary number of constraints.

Constrained OWA Aggregation with Comonotone Constraints
In this section, we recall briefly the basics on the constrained OWA aggregation problem. These details can be found in numerous papers, and we use here similar arguments as in [3].
Suppose we have the non-negative weights w_1, ..., w_n such that w_1 + ... + w_n = 1, and define the mapping F : R^n → R,

F(x_1, ..., x_n) = w_1 y_1 + ... + w_n y_n,

where y_i is the i-th largest element of the sample x_1, ..., x_n. Then consider a matrix A of type (m, n) with real entries and a vector b ∈ R^m. A constrained maximum OWA aggregation problem corresponding to the above data is the problem (see [6])

max F(x_1, ..., x_n) subject to Ax ≤ b, x ≥ 0. (1)

Let us recall now two particular problems where the coefficients in the constraints can be rearranged to satisfy certain monotonicity properties. The maximization problem is

max F(x_1, ..., x_n) subject to α_1 x_1 + ... + α_n x_n ≤ 1, β_1 x_1 + ... + β_n x_n ≤ 1, x ≥ 0, (2)

and there exists a permutation σ ∈ S_n such that

α_{σ_1} ≤ α_{σ_2} ≤ ... ≤ α_{σ_n} and β_{σ_1} ≤ β_{σ_2} ≤ ... ≤ β_{σ_n}. (3)

Here, S_n denotes the set of all permutations of {1, ..., n}, and for some σ ∈ S_n, we use the notation σ_k for the value σ(k) for any k ∈ {1, ..., n}. From now on, we will say that the constraints in problem (2) are comonotone whenever condition (3) is satisfied. The minimization problem is

min F(x_1, ..., x_n) subject to α_1 x_1 + ... + α_n x_n ≥ 1, β_1 x_1 + ... + β_n x_n ≥ 1, x ≥ 0, (4)

and, again, there exists σ ∈ S_n such that

α_{σ_1} ≤ α_{σ_2} ≤ ... ≤ α_{σ_n} and β_{σ_1} ≤ β_{σ_2} ≤ ... ≤ β_{σ_n}. (5)

Obviously, in the minimization problem above, the constraints are comonotone as well.
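As an informal illustration (our own sketch, not part of the paper; the function name and the tolerance check are ours), the OWA value is obtained by sorting the sample in decreasing order and taking the weighted sum:

```python
def owa(weights, sample):
    """OWA operator F: weighted sum of the sample sorted in decreasing order."""
    if any(w < 0 for w in weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must be non-negative and sum to 1")
    y = sorted(sample, reverse=True)  # y[i-1] is the i-th largest element
    return sum(w * yi for w, yi in zip(weights, y))

# Olympic weights for n = 4 discard the largest and the smallest value
print(owa([0.0, 0.5, 0.5, 0.0], [7.0, 1.0, 4.0, 2.0]))  # prints 3.0
```

The Olympic-weight call shows why allowing zero weights matters: the extreme values of the sample simply do not contribute to the aggregated value.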
Considering the general problem (1), Yager used a method based on mixed integer linear programming to approach the optimal solution. The method is quite complex since it requires introducing auxiliary variables, sometimes causing difficulties in calculations. When the single constraint is particularized to x_1 + ... + x_n = 1, the problem was solved completely in [7] by providing an analytical solution as a function of the weights. Furthermore, considering arbitrary coefficients in the single constraint, in paper [8], the analytical solution was obtained as a function depending on the weights and on the coefficients in the constraint. This problem can be formulated as max F(x_1, ..., x_n) subject to α_1 x_1 + ... + α_n x_n ≤ 1, x ≥ 0.
The following theorem recalls the main result from [8].
To solve (8), we need its dual, problem (9). Furthermore, we can simplify this problem by introducing problem (10). Here, we make the first improvement with respect to the reasoning used in [3]. Namely, if w_k = 0 for some k ∈ {1, ..., n}, then constraint number k is redundant in (10). Therefore, problem (10) is equivalent to problem (11), where I_n = {k ∈ {1, ..., n} : w_k > 0}. This improvement will allow us to investigate problems where some of the weights can be equal to 0, such as, for example, the Olympic weights (see, e.g., [2]). As we mentioned in [3], if t_1^*, t_2^*, ..., t_{n+1}^* is a solution of problem (9), then t_1^*, t_2^* is a feasible solution for problem (10), and consequently, it is feasible for problem (11). Now, suppose that t_1, t_2 is a solution of problem (11). One can easily prove (see again [3]) that t_1, t_2 extends to a feasible solution t_1, t_2, t_3^*, ..., t_{n+1}^* of problem (9). Thus, considering only the first two components of the feasible solutions of problem (9), we obtain the same set as the feasible set of problem (11). Obviously, both problems will have the same minimal value, which in addition is finite.
In order to find a solution for problem (11), we need to investigate a problem given as

min t_1 + t_2 subject to a_k t_1 + b_k t_2 ≥ 1, k ∈ I_n, t_1, t_2 ≥ 0. (12)

It will suffice to consider only the case when a_k > 0 and b_k > 0 for all k ∈ {1, ..., n}. Taking a_k and b_k as in (13), problem (11) becomes exactly a problem of type (12). Therefore, solving problem (12) will result in solving problem (11) as well.
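Since problem (12) has only two variables, it can be solved, for illustration, by enumerating the candidate vertices of the feasible region. The sketch below is our own brute-force baseline (not the paper's algorithm), assuming all a_k > 0 and b_k > 0:

```python
def solve_aux(a, b, eps=1e-9):
    """Brute-force solver for: min t1 + t2  s.t.  a[k]*t1 + b[k]*t2 >= 1,
    t1, t2 >= 0.  Assumes all a[k] > 0 and b[k] > 0 (so it is feasible)."""
    def feasible(t1, t2):
        return t1 >= -eps and t2 >= -eps and all(
            ak * t1 + bk * t2 >= 1 - eps for ak, bk in zip(a, b))

    # Optimal vertices lie on an axis or at the intersection of two
    # constraint boundaries a_i t1 + b_i t2 = 1, a_j t1 + b_j t2 = 1.
    cands = [(1.0 / min(a), 0.0), (0.0, 1.0 / min(b))]
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            det = a[i] * b[j] - a[j] * b[i]
            if abs(det) > eps:  # Cramer's rule for the 2x2 system
                cands.append(((b[j] - b[i]) / det, (a[i] - a[j]) / det))
    best = min((t for t in cands if feasible(*t)), key=lambda t: t[0] + t[1])
    return best, best[0] + best[1]
```

On the coefficients of problem (15) further below, this baseline returns the optimal value 2/13, which is useful for cross-checking the iterative algorithm of the next section.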
We need the following auxiliary result proved in [3].
Lemma 1. (see [3], Lemma 7) Consider problem (12), where a_k and b_k are given in (13), k ∈ I_n. Suppose that k_1, k_2 ∈ I_n are such that a_{k_1} ≥ b_{k_1} and a_{k_2} ≤ b_{k_2}. If (t_1^*, t_2^*) is a solution of the system

a_{k_1} t_1 + b_{k_1} t_2 = 1, a_{k_2} t_1 + b_{k_2} t_2 = 1,

and if (t_1^*, t_2^*) is feasible for problem (12), then (t_1^*, t_2^*) is an optimal solution for problem (12).
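A direct computational reading of Lemma 1 (our own sketch; the helper name is ours) solves the two equality constraints by Cramer's rule and accepts the point only if it satisfies every constraint:

```python
def lemma1_candidate(a, b, k1, k2, eps=1e-9):
    """Solve a[k1]*t1 + b[k1]*t2 = 1, a[k2]*t1 + b[k2]*t2 = 1 and return the
    point if it is feasible for problem (12), else None.  Requires
    a[k1] >= b[k1] and a[k2] <= b[k2], as in Lemma 1."""
    assert a[k1] >= b[k1] and a[k2] <= b[k2]
    det = a[k1] * b[k2] - a[k2] * b[k1]
    if abs(det) < eps:
        return None  # parallel boundary lines, no unique intersection
    t1 = (b[k2] - b[k1]) / det
    t2 = (a[k1] - a[k2]) / det
    ok = t1 >= -eps and t2 >= -eps and all(
        ak * t1 + bk * t2 >= 1 - eps for ak, bk in zip(a, b))
    return (t1, t2) if ok else None
```

When the intersection point is infeasible, the lemma simply does not apply to that pair of constraints, and another pair must be tried; this is exactly the situation the iterative algorithm of the next section handles systematically.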
All the information above will be very useful in the next section, where we will propose an iterative algorithm to approach the solution of problem (7).
Actually, we can now characterize the optimal solution of problem (7) by slightly improving Theorem 8 in [3]. We omit the proof since one can easily deduce the necessary modifications by comparing with the statement of Theorem 8 in [3], coupled with Lemma 1 from above, which lies at the basis of the next theorem.
(iv) If in problem (12) the optimal solution satisfies with equality the constraints C_{k_1} and C_{k_2}, where k_1 < k_2 and a_{k_1} In addition, the optimal value of problem (7) is equal to

Remark 1.
Using the above theorem, we can also generalize the results given in Theorems 9 and 10, respectively, in paper [3]. In Theorem 9, we considered the case when there exists a permutation σ ∈ S_n such that 0 < α_{σ_1} ≤ α_{σ_2} ≤ ... ≤ α_{σ_n} and 0 < β_{σ_1} ≤ β_{σ_2} ≤ ... ≤ β_{σ_n}. Then, in Theorem 10, we considered the case when 0 < α_1 = α_2 = ... = α_n = α. Obviously, in view of Theorem 2 from above, all these results can be extended to the more general case when the weights in problem (7) are only assumed to be non-negative. Then, of course, we can state similar refinements for the case of the minimization problem. They are easily deduced from the corresponding results obtained in paper [3] (see Theorems 11-13, respectively, in [3]).

An Iterative Algorithm to Achieve the Optimal Solution
In this section, we propose an iterative algorithm to obtain the optimal solution of Problem (7). Although the computer implementation of Theorem 2 can be done in a very convenient way based on the Simplex algorithm (see the examples in [3]), the following algorithm has an interesting particularity; namely, it can identify a constraint that is either binding or redundant. In this way, we can eliminate the constraints one by one until we obtain the optimal solution. We also hope this algorithm can be generalized to the case when we have more than two comonotone constraints; however, this remains an interesting open question in our opinion.
To construct our algorithm, we need to investigate Problem (12), where a_k and b_k are given in (13), k ∈ I_n. We also need some concepts and notations that are well known in linear programming. Let us denote by C_k constraint number k of problem (12), k ∈ I_n. Then, we denote by U the feasible region of problem (12). Next, for some k ∈ I_n, let P_k = I_n \ {k}, and let U_k be the feasible region of any optimization problem which keeps all the constraints from Problem (12) except for constraint C_k. The constraint C_k is called redundant if U = U_k. In other words, the solution set of the optimization problem with feasible region U coincides with the solution set of the optimization problem that has the same objective function and all the constraints except for C_k. This means that constraint C_k can be removed when solving the given problem. The constraint C_k is called strongly redundant if it is redundant and the segment which corresponds to the solutions of the equation a_k t_1 + b_k t_2 = 1, t_1 ≥ 0, t_2 ≥ 0, does not intersect U. A redundant constraint that is not strongly redundant is called weakly redundant. The constraint C_k is called binding if there exists at least one optimal point which satisfies this constraint with equality. This means that the segment corresponding to the equation a_k t_1 + b_k t_2 = 1, t_1 ≥ 0, t_2 ≥ 0, contains an optimal point of the problem. Note that it is possible for a weakly redundant constraint to be binding as well. All these concepts were discussed with respect to our Problem (12), but of course, they can be defined accordingly for any kind of optimization problem. Now, with these new tools, we can investigate Problem (12). As we said in the introduction, searching for binding constraints may be just as difficult as solving the program. However, we can easily spot some binding constraints in Problem (12).
We also believe it is worthwhile to do that, as otherwise all constraints are needed when performing the algorithm, and therefore, this calculation will be more complex than those used to eliminate the redundant constraints. In general, if k_1, k_2 ∈ I_n are such that a_{k_1} ≤ a_{k_2} and b_{k_1} ≤ b_{k_2}, then constraint C_{k_2} is redundant, and it can be eliminated. Using this fact, we propose a simple method to eliminate some redundant constraints. First, we set M_1 = I_n and let N_1 be the subset of M_1 such that a_l = min{a_k : k ∈ M_1} for all l ∈ N_1. Then, let p_1 be the index with the minimum value (just to make a choice if more indices would satisfy the following property) in N_1 such that b_{p_1} = min{b_k : k ∈ N_1}. We keep constraint C_{p_1} and eliminate all the other constraints indexed in N_1 since they all are redundant. Next, for any k ∈ M_1 \ N_1, we compute (a_{p_1} − a_k) · (b_{p_1} − b_k). If this value is strictly negative, then we keep constraint C_k; if not, then we eliminate C_k because it is redundant. Let M_2 be the set of indices that correspond only to the constraints that were not eliminated and which does not contain p_1. We continue with the same reasoning, with the only difference that now we take M_2 instead of M_1, then define N_2 with respect to M_2 the same way we defined N_1 with respect to M_1. We then define p_2 the same way we defined p_1, and so on with M_3, N_3, and p_3, until we get that M_k is the empty set. Note that k is at most equal to n − 1. At every step k, we collect in a set that we denote by J the index of each constraint that was not eliminated from N_k, that is, p_1, p_2, and so on. Thus, the constraints indexed by J will give the same feasible region as the initial set of n constraints. In addition, if k_1, k_2 ∈ J, k_1 ≠ k_2, then (a_{k_1} − a_{k_2}) · (b_{k_1} − b_{k_2}) < 0. Then, there exists at most one index k ∈ J such that a_k = b_k.
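The elimination procedure above can be sketched as follows (our own compact implementation; the function name is ours). Picking the minimizer by the tuple (a_k, b_k, k) reproduces the tie-breaking rule for p_1, p_2, ..., and the strict product test matches the sign condition in the text:

```python
def reduce_constraints(a, b, I):
    """Return the index set J: repeatedly keep, among the remaining indices,
    the one with minimal a (ties broken by minimal b, then smallest index),
    and drop every k with (a_p - a_k)*(b_p - b_k) >= 0 (dominated/duplicate)."""
    M, J = set(I), []
    while M:
        p = min(M, key=lambda k: (a[k], b[k], k))
        J.append(p)
        M = {k for k in M if k != p and (a[p] - a[k]) * (b[p] - b[k]) < 0}
    return sorted(J)
```

For any two surviving indices, one has the smaller a and the other the smaller b, so the product of differences is strictly negative, as stated in the text.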
There are other methods to obtain the set J, but in our opinion, this one is among the fastest in computer implementations. In all that follows in this paper, the set J will be the one obtained with the above technique. Please note that it still may be possible to have redundant constraints among those indexed in J. Just before our first key result, we explain why it is not useful to search for such constraints outside the proposed algorithm.
We are now in position to present a key result that will then give us a fast algorithm to solve Problem (12). What is really interesting in the following theorem is that we present a precise and simple method to search for a constraint that will prove to be either a binding constraint or a strongly redundant constraint. This happens rarely in linear programming, and it also explains why it is not necessary to search for redundant constraints separately. This is indeed an important advantage, mainly because the techniques to find redundant inequalities involve a lot of computation.

Theorem 3. Consider Problem (12), where a_k and b_k are given in (13), k ∈ I_n. Then, let J ⊆ I_n be the set of indices obtained by the technique described just before this theorem. Let J_1 = {k ∈ J : a_k ≥ b_k} and J_2 = {k ∈ J : a_k ≤ b_k}. In addition, suppose that both J_1 and J_2 are nonempty. Then, let k_1 ∈ J_1 be such that a_{k_1} = min{a_k : k ∈ J_1} and let k_2 ∈ J_2 be such that b_{k_2} = min{b_k : k ∈ J_2} (by the construction of J, it follows that a_{k_1} and b_{k_2} are unique minimizers). Then, constraint C_{k_1} is either binding, or it is strongly redundant. Similarly, constraint C_{k_2} is either binding, or it is strongly redundant.
Proof. Since the reasoning for the two assertions is absolutely similar, we prove only the first one. Suppose that C_{k_1} is not strongly redundant. Again, let f(t_1, t_2) = t_1 + t_2 be the objective function. We have two cases: (i) the point (1/a_{k_1}, 0) belongs to the feasible region, and (ii) (1/a_{k_1}, 0) does not belong to the feasible region.
For case (i), since a_{k_1} ≥ b_{k_1}, with reasoning as in the proof of Lemma 1, it follows that f(1/a_{k_1}, 0) ≤ f(t_1, t_2) ≤ f(0, 1/b_{k_1}) for any feasible point (t_1, t_2) that satisfies constraint C_{k_1} with equality. Now, if (t_1, t_2) is an arbitrary feasible point, it is clear that the intersection of the segments [(0, 0), (t_1, t_2)] and [(1/a_{k_1}, 0), (0, 1/b_{k_1})] is nonempty. Let (u_1, u_2) be in this intersection. With reasoning as in the proof of Lemma 1, we obtain that f(t_1, t_2) ≥ f(u_1, u_2) ≥ f(1/a_{k_1}, 0). This means that (1/a_{k_1}, 0) is an optimal solution of Problem (12).
For case (ii), from all the feasible points that satisfy constraint C_{k_1} with equality, we choose the one for which the first component has the maximum value. In other words, considering the intersection of [(1/a_{k_1}, 0), (0, 1/b_{k_1})] with the feasible region, we take the point which is nearest to (1/a_{k_1}, 0) with respect to the usual Euclidean metric in R^2. Let us denote this point by (t_1^*, t_2^*). Note that since the feasible region is a closed and convex subset of R^2, this intersection is a closed segment; therefore, the construction of (t_1^*, t_2^*) is correct. Let a_0 = min{a_k : k ∈ J}. It is immediate that (1/a_0, 0) is a feasible point of Problem (12). As (1/a_{k_1}, 0) is not, it necessarily follows that a_{k_1} > a_0. From this property, it results that there exists a constraint C_l such that (t_1^*, t_2^*) satisfies this constraint with equality and such that a_l < a_{k_1}. This property can be easily deduced using some elementary geometrical reasoning. For the sake of correctness, let us give a rigorous proof. Let us choose an arbitrary k ∈ J such that a_k < a_{k_1} (such an element exists since a_{k_1} > a_0). In this case, the system a_{k_1} t_1 + b_{k_1} t_2 = 1, a_k t_1 + b_k t_2 = 1 must have its (unique) solution on the segment [(t_1^*, t_2^*), (1/a_{k_1}, 0)]. Otherwise, (0, 0) and (t_1^*, t_2^*) would lie in the same half-plane with respect to the separating line a_k x + b_k y = 1; hence, a_k t_1^* + b_k t_2^* < 1. This implies that (t_1^*, t_2^*) is not feasible, which is a contradiction. Now, let us choose an arbitrary k ∈ J such that a_k > a_{k_1}. In this case, the unique solution of the system a_{k_1} t_1 + b_{k_1} t_2 = 1, a_k t_1 + b_k t_2 = 1, t_1 ≥ 0, t_2 ≥ 0, lies on the segment [(t_1^*, t_2^*), (0, 1/b_{k_1})]. Otherwise, we obtain the same contradiction as above. Now, by way of contradiction, suppose that for any k ∈ J such that a_k < a_{k_1}, we have a_k t_1^* + b_k t_2^* ≠ 1.
Using the properties mentioned just above, we obtain that a_k t_1^* + b_k t_2^* > 1 for every such k. Then, if k ∈ J is such that a_k > a_{k_1}, we obtain that a_k t_1 + b_k t_2 ≥ 1 for all (t_1, t_2) ∈ [(t_1^*, t_2^*), (1/a_{k_1}, 0)]. All these imply that there exists (t_1, t_2) ∈ [(t_1^*, t_2^*), (1/a_{k_1}, 0)] sufficiently close to (t_1^*, t_2^*), such that t_1 > t_1^* and such that a_k t_1 + b_k t_2 ≥ 1 for all k ∈ J. This means that (t_1, t_2) is a feasible point for Problem (12). On the other hand, (t_1, t_2) satisfies constraint C_{k_1} with equality and t_1 > t_1^*. This contradicts the construction of t_1^*. Therefore, there exists a constraint C_l such that (t_1^*, t_2^*) satisfies this constraint with equality and such that a_l < a_{k_1}. By the construction of a_{k_1}, it follows that a_l < b_l. Therefore, (t_1^*, t_2^*) is a feasible point which is a solution of the system a_{k_1} t_1 + b_{k_1} t_2 = 1, a_l t_1 + b_l t_2 = 1, where a_{k_1} ≥ b_{k_1} and a_l < b_l. By Lemma 1, it follows that (t_1^*, t_2^*) is an optimal solution for Problem (12). The proof is now complete. In the case of constraint C_{k_2}, the reasoning is identical. In this case, if C_{k_2} is binding, then the optimal solution is the feasible point that satisfies C_{k_2} with equality and which is the nearest to (0, 1/b_{k_2}); equivalently, from all feasible points that satisfy C_{k_2} with equality, it has the minimum value for the first component.
From the above theorem, we can actually describe precisely the optimal point. We did that in the proof, but it is worthwhile to highlight this fact in the following corollary given without proof since it is nothing else but an analytical characterization of the optimal solution obtained in the previous theorem.

Corollary 1.
Consider all hypotheses and notations from Theorem 3. If C_{k_1} is binding, then let [α, β] be the solution set of variable t_1 obtained after we solve the system of the constraints with the substitution t_2 = (1 − a_{k_1} t_1)/b_{k_1} (as C_{k_1} is binding, this solution set is always nonempty). Then, an optimal solution of problem (12) is (β, (1 − a_{k_1} β)/b_{k_1}), and the optimal value is (1 − a_{k_1}/b_{k_1}) β + 1/b_{k_1}. Then, if C_{k_2} is binding, denote again by [α, β] the solution set of variable t_1 using the substitution t_2 = (1 − a_{k_2} t_1)/b_{k_2}. Then, an optimal solution of problem (12) is (α, (1 − a_{k_2} α)/b_{k_2}), and the optimal value is (1 − a_{k_2}/b_{k_2}) α + 1/b_{k_2}.

Now, we are in position to present an algorithm that will give us a solution of Problem (12). Obviously, there are two methods to approach the solution. Either we search for the binding constraint in the set J_1, or we search in the set J_2. If we made simulations on very large numbers of such problems, we believe that we would obtain on average the same running time. In general, we will apply the algorithm using J_1 if its cardinality is less than or equal to the cardinality of J_2; otherwise, we will apply the algorithm using J_2. We can also propose an algorithm that takes into consideration both sets J_1 and J_2. First, we search for the binding constraint on J_1, taking the constraint C_{k_1} as described in the statement of Theorem 3. If C_{k_1} is binding, then we find the optimal solution as described in the previous corollary. If C_{k_1} is redundant, then we check the constraint C_{k_2} in J_2, selected exactly as in the statement of Theorem 3. If C_{k_2} is binding, then we find the optimal solution as described in the previous corollary. If not, then we update J_1 := J_1 \ {k_1} and search again for the constraint in J_1 according to the construction in Theorem 3, and so on. We omit this second variant because we think it is slower in general.

Algorithm 1
In this algorithm, we search for the binding constraint from beginning to end either in J_1 or in J_2. The first two steps are essential in our choice. For that, we need to compute a_{k^*} = min{a_k : k ∈ J} and b_{k^{**}} = min{b_k : k ∈ J}. Note that besides computing a_{k^*} and b_{k^{**}}, we will also need to identify the indices k^* and k^{**}.
Step 1. If a_{k^*} ≥ b_{k^*}, then (1/a_{k^*}, 0) is an optimal solution of Problem (12), and 1/a_{k^*} is the optimal value of Problem (12). If a_{k^*} < b_{k^*}, then go to Step 2.
Step 2. If a_{k^{**}} ≤ b_{k^{**}}, then (0, 1/b_{k^{**}}) is the optimal solution of Problem (12), and 1/b_{k^{**}} is the optimal value of Problem (12). If a_{k^{**}} > b_{k^{**}}, then go to Step 3.
Step 3. If we reached this step of the algorithm, it means that both J_1 and J_2 are nonempty. What is more, both of them contain at least one index corresponding to a binding constraint. Let us explain for J_1, since for J_2, the explanation is identical. As a_{k^{**}} > b_{k^{**}}, it follows that k^{**} is in J_1. If C_k were strongly redundant for any k ∈ J_1 such that a_k < a_{k^{**}}, then by Theorem 3, it easily follows that constraint C_{k^{**}} is binding. Here, we need to decide if we search for the binding constraint considering the set J_1 or the set J_2. We can impose a selection criterion. For example, we choose to go with J_1 if its cardinality is less than or equal to the cardinality of J_2, and with J_2 otherwise. In what follows, we explain the algorithm when the option is J_1, and at the end of it, we explain in a remark the very small differences that occur in the case when the option is J_2.
Take l_1 such that a_{l_1} = min{a_k : k ∈ J_1}. We solve the system of the constraints indexed in J with the substitution t_2 = (1 − a_{l_1} t_1)/b_{l_1}. If we obtain for variable t_1 the solution [α, β], then an optimal solution of problem (12) is (β, (1 − a_{l_1} β)/b_{l_1}), and the optimal value is (1 − a_{l_1}/b_{l_1}) β + 1/b_{l_1}. If this system has no solution, then go to Step 4.
Step 4. We set J := J \ {l_1} and J_1 := J_1 \ {l_1}, and we repeat Steps 1-3 for the newly obtained J and J_1.
We observe that Step 3 is repeated until we get the first binding constraint, and we know that in the worst case this binding constraint is C_{k^{**}}. We also notice that we have at most n − 1 iterations in terms of repeatedly applying Step 3. What is more, if we chose to go with J_1 because its cardinality does not exceed the cardinality of J_2, then we have at most [|J|/2] + 1 iterations (here, [·] stands for the integer part of a real number). The most important utility of Algorithm 1 is that it helps us to identify the binding constraint or the binding constraints, respectively, in Theorem 2, corresponding to case (iii) or (iv), respectively.
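For concreteness, Steps 1-4 can be sketched as follows (our own implementation, with names of our choosing; it assumes the constraint list has already been reduced to the set J and always scans J_1, without the cardinality-based choice between J_1 and J_2):

```python
def algorithm1(a, b, eps=1e-9):
    """Sketch of Algorithm 1 for: min t1 + t2  s.t.  a[k]*t1 + b[k]*t2 >= 1,
    t1, t2 >= 0, assuming the constraints are already reduced to J, i.e. for
    any two indices (a_i - a_j)*(b_i - b_j) < 0."""
    ks = min(range(len(a)), key=lambda k: a[k])   # index k* of a_{k*}
    kss = min(range(len(a)), key=lambda k: b[k])  # index k** of b_{k**}
    if a[ks] >= b[ks]:                  # Step 1
        return (1.0 / a[ks], 0.0), 1.0 / a[ks]
    if a[kss] <= b[kss]:                # Step 2
        return (0.0, 1.0 / b[kss]), 1.0 / b[kss]
    J1 = [k for k in range(len(a)) if a[k] >= b[k]]
    while True:                         # Steps 3-4; Theorem 3 ensures success
        l1 = min(J1, key=lambda k: a[k])
        # substitute t2 = (1 - a[l1]*t1)/b[l1] into every other constraint
        lo, hi = 0.0, 1.0 / a[l1]       # t1-range keeping t1 >= 0, t2 >= 0
        for k in range(len(a)):
            if k == l1:
                continue
            c = a[k] - b[k] * a[l1] / b[l1]   # coefficient of t1
            r = 1.0 - b[k] / b[l1]            # constraint becomes c*t1 >= r
            if c > eps:
                lo = max(lo, r / c)
            elif c < -eps:
                hi = min(hi, r / c)
            elif r > eps:
                lo, hi = 1.0, 0.0             # no t1 satisfies this one
        if lo <= hi + eps:              # binding: optimum at t1 = beta = hi
            t1 = hi
            return (t1, (1.0 - a[l1] * t1) / b[l1]), t1 + (1.0 - a[l1] * t1) / b[l1]
        J1.remove(l1)                   # strongly redundant: Step 4
```

Since a_{l_1} ≥ b_{l_1}, the objective (1 − a_{l_1}/b_{l_1}) t_1 + 1/b_{l_1} is non-increasing in t_1, so taking the right endpoint β of the interval matches Corollary 1.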
By simple calculations, the above system has the solution t_1 ∈ [1/4, 1/2]. By using the first algorithm, we get that t_1^* = 1/4, and by the substitutions used here, we get that t_2^* = (1 − 2t_1^*)/4 = 1/8. Therefore, (1/4, 1/8) is the optimal solution of our problem, and t_1^* + t_2^* = 3/8 is the optimal value. Even if J_1 has three elements, we do not need more iterations. Indeed, a_{l_1} = min{a_k : k ∈ J_1} = 3. Thus, we need to solve the system of the constraints under the substitution t_2 = (1 − 3t_1)/2. By simple calculations, we obtain t_1 ∈ [1/5, 1/4]; hence, we obtain the optimal solution at the first attempt. Note that with the second algorithm, no additional iterations are needed.
The following example shows that sometimes we may need step 4 of the first algorithm.

Example 2.
Let us consider the problem. Again, we cannot obtain the solution in the first two steps of Algorithm 1. Therefore, we move again to the third step. We have J_1 = {1, 2, 3} and J_2 = {3, 4, 5}. First, let us apply the algorithm for J_1. We observe that min{a_k : k ∈ J_1} = a_3 = 4. Therefore, we have to solve the system of the constraints in the special case when t_2 = 4 − t_1. After simple calculations, we get that this system has no solution. This means that constraint C_3 is redundant, and we have to go to Step 4. We set J_1 := J_1 \ {3}. Now, min{a_k : k ∈ J_1} = a_2 = 1/3. We have to solve the system of constraints in the special case when t_2 = 12 − 4t_1. We obtain the system.

Clearly, we have two comonotone constraints here; therefore, we need the auxiliary problem (12), and by simple calculations, this problem is

(13/3) t_1 + 13 t_2 ≥ 1, 13 t_1 + (13/3) t_2 ≥ 1, (13/2) t_1 + (13/2) t_2 ≥ 1, t_1, t_2 ≥ 0. (15)

We have a_{k^*} = min{a_k : k ∈ {1, 2, 3}} = a_1 = 13/3. This means that we need Step 3 in Algorithm 1. We have J_1 = {2, 3} and min{a_k : k ∈ J_1} = a_3 = 13/2. Therefore, we solve the system of constraints in (15) in the special case when t_2 = 2/13 − t_1, that is, when constraint 3 is satisfied with equality. After simple calculations, we obtain the solution of this system as the interval [1/26, 3/26]. This means that case (iii) in Theorem 2 is applicable, and by applying the formula for this case, we obtain an optimal solution for problem (14); the optimal value of this problem is 2/13.
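The final computation for problem (15) can be checked numerically. The snippet below (our own verification, not part of the paper) substitutes t_2 = 2/13 − t_1, coming from the third constraint of (15), into the other two constraints and recovers the interval [1/26, 3/26] and the optimal value 2/13:

```python
# Constraints of problem (15): a[k]*t1 + b[k]*t2 >= 1, t1, t2 >= 0
a = [13/3, 13.0, 13/2]
b = [13.0, 13/3, 13/2]

# Constraint 3 with equality: (13/2)*t1 + (13/2)*t2 = 1  =>  t2 = 2/13 - t1
lo, hi = 0.0, 2/13          # bounds coming from t1 >= 0 and t2 >= 0
for ak, bk in zip(a[:2], b[:2]):
    c = ak - bk * a[2] / b[2]          # coefficient of t1 after substitution
    r = 1 - bk / b[2]                  # the constraint becomes c*t1 >= r
    lo, hi = (max(lo, r / c), hi) if c > 0 else (lo, min(hi, r / c))

print(lo, hi)               # the interval [1/26, 3/26] found in the text
print(lo + (2/13 - lo))     # objective t1 + t2 is constant 2/13 on this edge
```

Note that the objective is constant along the whole edge of constraint 3, so every t_1 in [1/26, 3/26] yields the same optimal value 2/13.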

Conclusions
In this paper, we extended the solution of the constrained OWA aggregation problem with two comonotone constraints to the case when the OWA weights are arbitrary non-negative numbers. Moreover, we proposed an iterative algorithm to approach the optimal solution. This algorithm identifies a constraint that is either binding or strictly redundant. We hope this is a first step towards the solution of constrained OWA aggregation problems with an arbitrary number of constraints.
Author Contributions: Conceptualization, formal analysis, writing-review and editing L.C.; Conceptualization, formal analysis, writing-review and editing R.F. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: The study did not report any data. Acknowledgments: Lucian Coroianu was supported by a grant awarded by the University of Oradea and titled "Approximation and optimization methods with applications".

Conflicts of Interest:
The authors declare no conflict of interest.