The General Least Square Deviation OWA Operator Problem

A crucial issue in applying the ordered weighted averaging (OWA) operator to decision making is the determination of the associated weights. This paper proposes a general least convex deviation model for OWA operators, which seeks the OWA weight vector that, under a given orness level, minimizes the convex deviation obtained by applying a monotone convex function to the absolute deviations between adjacent weights. The model includes as a special case the least square deviation (LSD) OWA operators model suggested by Wang, Luo and Liu in Computers & Industrial Engineering, 2007. We solve this constrained optimization problem analytically in full. Using this result, we also give the complete solution of the LSD model of Wang, Luo and Liu as a function of n and α. We revisit two numerical examples presented by Wang, Luo and Liu, 2007, and by Sang and Liu, Fuzzy Sets and Systems, 2014, and consider a further model of a different type to illustrate our results.


Introduction
Yager [1,2] introduced the concept of the ordered weighted averaging (OWA) operator. Determining the weights of the operator is an important issue for both the theory and the application of OWA operators. Previous studies have proposed a number of approaches for obtaining the associated weights in different areas such as data mining, decision making, neural networks, approximate reasoning, expert systems, fuzzy systems and control [1–20]. Proposed approaches for identifying the associated weights include exponential smoothing [6], quantifier guided aggregation [19,20] and learning [20]. O'Hagan [9] proposed another approach that determines a special class of OWA operators having maximal entropy for the OWA weights; this approach is algorithmically based on the solution of a constrained optimization problem. Hong [10] provided a new method for the minimum variance problem. Fullér and Majlender [7,8] suggested a minimum variance approach to obtain the minimal variability OWA weights and proved that the maximum entropy model could be transformed into a polynomial equation that can be solved analytically. Liu and Chen [13] proposed a parametric geometric approach that can be used to obtain maximum entropy weights. Wang and Parkan [18] suggested a new method that generates the OWA operator weights by minimizing the maximum difference between any two adjacent weights. They transformed the minimax disparity problem into a linear programming problem, obtained weights for some special values of orness, and proved the dual property of OWA. Liu [12] proved that the minimax disparity OWA problem of Wang and Parkan [18] and the minimum variance problem of Fullér and Majlender [7] always produce the same weight vector. Emrouznejad and Amin [5] gave an alternative disparity problem that identifies the OWA operator weights by minimizing the sum of the deviations between distinct OWA weights. Amin and Emrouznejad [3,4] proposed an extended minimax disparity model. Hong [11] proved this open problem in a mathematical sense. Recently, Wang et al. [18] suggested a least square deviation model for obtaining OWA operator weights, which is nonlinear and was solved by using the LINGO software package for a given degree of orness. Sang and Liu [17] solved this constrained optimization problem analytically, using the method of Lagrange multipliers. Liu [14] studied the general minimax disparity OWA operator optimization problem, which includes a minimax disparity OWA operator optimization model, and a general convex OWA operator optimization problem, which includes the maximum entropy [7] and minimum variance OWA problems [8,10,15]. Liu [15] suggested a general optimization model for determining ordered weighted averaging (OWA) operators and three specific models for generating monotonic and symmetric OWA operators.
In this paper, we propose a general least convex deviation model for OWA operators, which seeks the OWA weight vector that, under a given orness level, minimizes the convex deviation obtained by applying a monotone convex function to the absolute deviations between adjacent weights. The model includes the least square deviation (LSD) OWA operators model suggested by Wang et al. [1] as a special case. We prove the optimization problem mathematically in full and consider the same numerical examples that Wang et al. [1] and Sang and Liu [17] presented in their illustrations of the application of the least square deviation model. We also determine the solution OWA operator weights not just for some discrete values of α but for all orness levels 0 ≤ α ≤ 1, as a function of α.

The Least Convex Deviation Model
Yager [2] introduced an aggregation technique based on the ordered weighted averaging (OWA) operators. An OWA operator of dimension n is a mapping F : R^n → R that has an associated weighting vector W = (w_1, w_2, …, w_n)^T, with w_j ∈ [0, 1] and w_1 + w_2 + … + w_n = 1, such that

F(a_1, …, a_n) = w_1 b_1 + w_2 b_2 + … + w_n b_n,

where b_j is the jth largest element of the collection of aggregated objects {a_1, …, a_n}. In [2], Yager introduced a measure of "orness" associated with the weighting vector W of an OWA operator, which is defined as

orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i.

Wang and Parkan [17] proposed a minimax disparity OWA operator optimization problem: the minimax disparity approach obtains OWA operator weights based on the minimization of the maximum difference between any two adjacent weights. Recently, Liu [14] considered the general minimax disparity OWA operator optimization problem as follows.

Minimize max_{1 ≤ i ≤ n−1} |F(w_i) − F(w_{i+1})|
subject to orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i = α, 0 ≤ α ≤ 1,
           w_1 + w_2 + … + w_n = 1, 0 ≤ w_i, i = 1, …, n,

where F is a strictly convex function on [0, ∞) that is at least twice differentiable.
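Before moving on, the basic definitions (the OWA aggregation and the orness measure) can be sketched in a few lines of code; the function names `owa` and `orness` are illustrative, not from the paper.

```python
# A minimal sketch of the OWA aggregation and the orness measure
# defined above; the names `owa` and `orness` are illustrative.

def owa(weights, values):
    """Aggregate `values` by an OWA operator: w_j is applied to the
    j-th largest element b_j of the input collection."""
    b = sorted(values, reverse=True)
    return sum(w_j * b_j for w_j, b_j in zip(weights, b))

def orness(weights):
    """orness(W) = (1/(n-1)) * sum_{i=1}^{n} (n - i) * w_i."""
    n = len(weights)
    return sum((n - i) * w_i for i, w_i in enumerate(weights, start=1)) / (n - 1)

# W = (1, 0, ..., 0) acts as "max" and has orness 1; the arithmetic mean
# W = (1/n, ..., 1/n) has orness 1/2; W = (0, ..., 0, 1) acts as "min".
```

The extreme cases above are the standard sanity checks for any weight-determination method.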
Liu [14] also considered a general convex OWA operator optimization problem with a given orness level:

Minimize ∑_{i=1}^{n} F(w_i)
subject to orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i = α, 0 ≤ α ≤ 1,     (1)
           w_1 + w_2 + … + w_n = 1, 0 ≤ w_i, i = 1, …, n,

where F is a strictly convex function on [0, 1] that is at least twice differentiable. When F(x) = x ln x, (1) becomes the maximum entropy OWA operator problem that was discussed in [7,12]. F(x) = x^2 in (1) corresponds to the minimum variance OWA operator problem [8,10]. When F(x) = x^p, p > 1, (1) becomes the OWA problem of Rényi entropy [15].
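As an aside, for the minimum variance case F(x) = x² the method of Lagrange multipliers makes each weight affine in (n − i)/(n − 1), so when the nonnegativity constraints are inactive (orness levels near 1/2) the problem collapses to a 2×2 linear system. The following is only a sketch under that assumption; `min_variance_weights` is an illustrative name, not notation from the paper.

```python
def min_variance_weights(n, alpha):
    """Minimum variance OWA weights (F(x) = x^2) for orness level alpha,
    assuming the nonnegativity constraints are inactive.  Stationarity
    then gives 2*w_i = lam + mu*(n-i)/(n-1), i.e. w_i = a + b*c_i with
    c_i = (n-i)/(n-1), and the two equality constraints
        sum_i w_i = 1   and   orness(W) = alpha
    become a 2x2 linear system in (a, b)."""
    c = [(n - i) / (n - 1) for i in range(1, n + 1)]
    s1 = sum(c)                      # equals n/2
    s2 = sum(ci * ci for ci in c)    # equals n*(2n-1)/(6*(n-1))
    # Solve:  n*a + s1*b = 1  and  s1*a + s2*b = alpha.
    det = n * s2 - s1 * s1
    a = (s2 - s1 * alpha) / det
    b = (n * alpha - s1) / det
    w = [a + b * ci for ci in c]
    # Guard: the sketch is only valid where the weights stay nonnegative.
    assert all(wi >= -1e-12 for wi in w), "inactive-constraint assumption failed"
    return w
```

For example, `min_variance_weights(3, 0.6)` yields (13/30, 1/3, 7/30), and any orness level 1/2 returns the uniform vector, consistent with the extreme-case remarks above.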
Wang et al. [1] introduced the following least squares deviation (LSD) method as an alternative approach to determining the OWA operator weights.

Minimize ∑_{i=1}^{n−1} (w_i − w_{i+1})^2
subject to orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i = α, 0 ≤ α ≤ 1,     (2)
           w_1 + w_2 + … + w_n = 1, 0 ≤ w_i, i = 1, …, n.
They solved this problem by using the LINGO or MATLAB software packages. Recently, Sang and Liu [17] solved this constrained optimization problem analytically by using the method of Lagrange multipliers. The general least convex deviation model for OWA operators attempts to obtain the desired OWA weight vector under a given orness level by minimizing the convex deviation after a monotone convex function transformation of the absolute deviation; it includes the least square deviation (LSD) problem as a special case.
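Since the LSD objective is quadratic, a convenient numerical cross-check of such solutions is to drop the inequality constraints w_i ≥ 0 (valid whenever the resulting weights come out nonnegative, i.e., for orness levels not too far from 1/2) and solve the resulting equality-constrained quadratic program via its KKT linear system. The sketch below is only an illustrative numerical device, not the analytic method of Sang and Liu; all function names are made up for the example.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting on a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def lsd_weights(n, alpha):
    """Stationary point of sum_{i<n} (w_i - w_{i+1})^2 under sum(w) = 1 and
    orness(w) = alpha, ignoring w_i >= 0 (valid when the result is
    nonnegative).  Unknowns: w_1..w_n plus two multipliers lam, mu."""
    c = [(n - i) / (n - 1) for i in range(1, n + 1)]
    size = n + 2
    A = [[0.0] * size for _ in range(size)]
    b = [0.0] * size
    for i in range(n):            # stationarity: dJ/dw_i - lam - mu*c_i = 0
        if i > 0:                 # term (w_{i-1} - w_i)^2
            A[i][i - 1] -= 2.0
            A[i][i] += 2.0
        if i < n - 1:             # term (w_i - w_{i+1})^2
            A[i][i] += 2.0
            A[i][i + 1] -= 2.0
        A[i][n] = -1.0            # -lam
        A[i][n + 1] = -c[i]       # -mu * c_i
    A[n][:n] = [1.0] * n          # sum of weights = 1
    b[n] = 1.0
    A[n + 1][:n] = list(c)        # orness constraint
    b[n + 1] = alpha
    return solve_linear(A, b)[:n]
```

At α = 1/2 this recovers the uniform vector (the unconstrained minimizer), and reversing the solution for α gives the solution for 1 − α, consistent with the dual property discussed below.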
We now propose the general least convex deviation model with a given orness level as follows:

Minimize ∑_{i=1}^{n−1} F(|w_i − w_{i+1}|)
subject to orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i = α, 0 ≤ α ≤ 1,     (3)
           w_1 + w_2 + … + w_n = 1, 0 ≤ w_i, i = 1, …, n,

where F is a strictly convex function on [0, 1] and F′ is continuous on [0, 1) such that F′(0) = 0. The following are well-known propositions, which can be easily checked.
If W = (w_1, …, w_n) is an optimal solution of the model (3) for orness(W) = α, then its reverse W′ = (w_n, …, w_1) is an optimal solution of the model (3) for orness(W′) = 1 − α, and vice versa. Hence, for any α > 1/2, we can consider the model (3) for degree of orness 1 − α, and then take the reverse of that optimal solution.
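This reversal property can be verified numerically: reversing a weighting vector maps orness level α to 1 − α, while the objective of model (3) is unchanged, since reversal only reverses the sequence of absolute adjacent differences. A minimal sketch (`orness` and `convex_deviation` are illustrative names, and `w` is an arbitrary weighting vector, not an optimal solution):

```python
def orness(weights):
    """orness(W) = (1/(n-1)) * sum_{i=1}^{n} (n - i) * w_i."""
    n = len(weights)
    return sum((n - i) * w_i for i, w_i in enumerate(weights, start=1)) / (n - 1)

def convex_deviation(weights, F):
    """Objective of model (3): sum of F(|w_i - w_{i+1}|) over adjacent pairs."""
    return sum(F(abs(a - b)) for a, b in zip(weights, weights[1:]))

F = lambda t: t ** 2            # the LSD choice F(t) = t^2
w = [0.4, 0.3, 0.2, 0.1, 0.0]   # arbitrary example, sums to 1
w_rev = w[::-1]

# orness values of a vector and its reverse sum to 1 ...
assert abs(orness(w) + orness(w_rev) - 1.0) < 1e-9
# ... while the convex deviation objective is reversal-invariant.
assert abs(convex_deviation(w, F) - convex_deviation(w_rev, F)) < 1e-12
```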

Optimal Solution of the Least Convex Deviation Problem
In this section, we give the mathematical proof of the optimization problem (3). We need the following lemmas to find the optimal solution of the model (3).

Lemma 1. Let {w_i} be the set of nonnegative weighting vectors where w

Proof. We note that and define a function H(ε) on ε ≥ 0 by Then H(ε) is continuous and Let a + ε = b − δ for some ε > 0 and δ > 0. Then we have and then there exist ε* and δ* such that 0 < ε* < ε and 0 < δ* < δ and and, by (4), Then since a < a + ε* < b − δ* < b and F′ is strictly increasing, we have This completes the proof.

Lemma 2.
Let {w_i} be the set of nonnegative weighting vectors such that

Proof. Let w_(i) be the i-th smallest weighting vector of {w_i}. Then we have Hence there exists some w_(k_0) such that w we consider two possible cases. First we suppose that and let Now we suppose that We note that, for 0 ≤ ε ≤ 1, there exists 0 ≤ h(ε) = δ ≤ 1 such that Then h is an increasing continuous function of ε and we have three possible cases as ε ↑ 1. We define a function H(ε) on 0 ≤ ε ≤ 1 by such that H_1(ε, δ) = α. Then H is continuous and, then by (6), we have

(Case 1) H_1(ε_0, 1) = α for some 0 < ε_0 < 1. From (7), we have There are two possible cases, that is, or First, suppose that Then, from (8) and (9), there exist 0 < ε* ≤ ε_0 and 0 < δ* ≤ 1 such that Put Then we have Second, suppose that and let a = (1 We note that Since H(0) > 1 and H(1) < 1 from (8) and (11), there exist 0 < ε* < 1 and 0 < δ* < 1 such that Hence we obtain w*_i, i = 1, 2, …, n by putting And, just like (Case 1), we have From (7), we have There are two possible cases, that is, But if H(1) ≤ 1, then it is easy to obtain the desired w*_i, i = 1, 2, …, n by arguments similar to the above. Hence we consider the case Now (12) and (13) are exactly the same as (5) and (6), regarding w_(k_0+1) as w_(k_0) in (5) and (6). If we use the same arguments as above a finite number of times, then we finally have the following situation: there exist w as in Lemma 1, and then we obtain the desired result of w*_i, i = 1, 2, …, n, by using Lemma 1 again. This completes the proof.
The following result follows immediately from Lemma 2.
Lemma 3. The model (3) is equivalent to the following model (14): where F is a strictly convex function on [0, ∞), and F′ is continuous on [0, 1) such that F′(0) = 0.
Lemma 4. If we put w_i = ∑_{k=1}^{i} x_k, i = 1, …, n, then the model (14) is transformed into the following model (15), where F is a strictly convex function on [0, 1] with continuous first derivative F′ such that F′(0) = 0.
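The change of variables in Lemma 4 is simply a cumulative sum of increments x_k, which is natural once the weights may be taken nondecreasing for α ≤ 1/2: the increments are then nonnegative. A small sketch of the transform and its inverse; the numeric values are made up for illustration only.

```python
from itertools import accumulate

# Hypothetical nonnegative increments x_k (illustrative values only).
x = [0.05, 0.0, 0.1, 0.15, 0.2]

# Forward transform of Lemma 4: w_i = x_1 + ... + x_i, a nondecreasing vector.
w = list(accumulate(x))

# Inverse transform: first differences recover the increments.
x_back = [w[0]] + [w[i] - w[i - 1] for i in range(1, len(w))]

assert all(a <= b + 1e-15 for a, b in zip(w, w[1:]))       # w is nondecreasing
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_back))  # round trip
```

Under this substitution the constraint ∑_i w_i = 1 becomes ∑_k (n − k + 1) x_k = 1, and the orness constraint transforms into a linear constraint on the x_k in the same way.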
We now prove the optimization problem of model (3). We note that F is strictly convex if and only if F′ is strictly increasing.

Theorem 1. Let F be a strictly convex function on [0, 1] and let F′ be continuous on [0, 1) such that F′(0) = 0. Then the optimal solution for the model (3) with given orness level 0 < α < 1/2 is as follows: In the case of w* where a*, b* are determined by the constraints: and and where c* is determined by the constraints such that

Proof. By Lemma 4, we consider the model (15). There are two possible cases, such as (Case 1) and let x_k for k = 1, …, n be a vector such that We also note that and we put from (22) and (24) because We also have, from (21) and (23), We now show that Since F(y) − F(y_0) ≥ F′(y_0)(y − y_0) (the equality holds if and only if y = y_0), we have that where the second equality comes from the fact that F′(x*_1) = F′(0) = 0, the third equality comes from (25), the fifth equality comes from (26) and (27), and the second inequality comes from the fact that a The equality holds if and only if and where c* is determined by the constraints such that Then from (29), We note that and then x*_k for k = 1, 2, …, n satisfies the constraints of the model (15). We now show that Then from (33) and (34), where the first equality comes from (35) and the last equality comes from (30). Hence we have where the second equality comes from (28) and the fourth equality comes from (36). The equality holds if and only if By Lemma 2, the solution OWA operator weights for 0 ≤ α ≤ 1/2 have the form

As a special case of model (3), we consider the following model for p > 1:

Minimize ∑_{i=1}^{n−1} |w_i − w_{i+1}|^p
subject to orness(W) = (1/(n − 1)) ∑_{i=1}^{n} (n − i) w_i = α, 0 ≤ α ≤ 1,     (37)
           w_1 + w_2 + … + w_n = 1, 0 ≤ w_i, i = 1, …, n.

Note 2.
Let S_m(α) be a subset of 0 < α < 1/2 on which the optimal solution for the model (37) with given orness level 0 < α < 1/2 has the form of (0, If x*_m is a linear function of α with positive slope, then we define J_n(m) by {J_n(m) We also have From this, we have the closed form of the exact optimal solutions of the LSD OWA model, specifically as a function of n and α.
From Corollary 1, x*_m is a linear function of α on each interval (J_n(i), J_n(i − 1)]. It is also easy to check that x*_m is continuous as a function of α. Hence we have the following property.
Let w*_m = f_m(α), as a function of α, be the optimal solution for the model (37) with given orness level 0 ≤ α ≤ 1 when p = 2. Then w*_m = f_m(α) is continuous and piecewise linear.

Numerical Examples
We consider the same numerical example that Wang et al. [1] presented in their illustration of the application of the least square deviation model for n = 5. Wang et al. [18] determined the OWA operator weights satisfying discrete degrees of orness: α = 0, 0.1, …, 0.9, 1. In this example, however, we determine the solution OWA operator weights as a continuous function of α for all orness levels 0 ≤ α ≤ 1 using our results. In the case of (J_5(1), J_5(0)] = ( In the case of (J_5(2), J_5(1)] = ( We next consider the same numerical example that Sang and Liu [17] presented in their illustration of the application of the least square deviation model for n = 10. Sang and Liu [17] determined the OWA operator weights satisfying discrete degrees of orness: α = 0, 0.1, …, 0.9, 1. In this example, however, we determine the solution OWA operator weights w*_k, k = 1, 2, …, 10, as a function of α for all orness levels 0 ≤ α ≤ 1.