Article

The General Least Square Deviation OWA Operator Problem

Dug Hun Hong 1 and Sangheon Han 2
1 Department of Mathematics, Myongji University, Yongin 449-728, Kyunggido, Korea
2 Department of Management, Nagoya University of Commerce & Business, 4-4 Sagamine Komenoki, Nisshin 470-0193, Aichi, Japan
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(4), 326; https://doi.org/10.3390/math7040326
Submission received: 8 March 2019 / Revised: 25 March 2019 / Accepted: 1 April 2019 / Published: 3 April 2019
(This article belongs to the Special Issue Special Polynomials)

Abstract

A crucial issue in applying the ordered weighted averaging (OWA) operator for decision making is the determination of the associated weights. This paper proposes a general least convex deviation model for OWA operators which attempts to obtain the desired OWA weight vector under a given orness level by minimizing the least convex deviation after a monotone convex function transformation of the absolute deviation. The model includes the least square deviation (LSD) OWA operators model suggested by Wang, Luo and Liu in Computers & Industrial Engineering, 2007, as a special class. We solve this constrained optimization problem analytically. Using this result, we also give the complete solution of the LSD model suggested by Wang, Luo and Liu as a function of $n$ and $\alpha$. We reconsider the two numerical examples given by Wang, Luo and Liu, 2007, and by Sang and Liu in Fuzzy Sets and Systems, 2014, and consider another type of the model to illustrate our results.

1. Introduction

Yager [2] introduced the concept of the ordered weighted averaging (OWA) operator. Determining the weights of the operators is an important issue in both the application and the theory of OWA operators. Previous studies have proposed a number of approaches for obtaining the associated weights in different areas such as data mining, decision making, neural networks, approximate reasoning, expert systems, fuzzy systems and control [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. A number of approaches have been proposed for the identification of associated weights, including exponential smoothing [6], quantifier guided aggregation [19,20] and learning [20]. O'Hagan [9] proposed another approach that determines a special class of OWA operators having maximal entropy for the OWA weights; this approach is algorithmically based on the solution of a constrained optimization problem. Hong [10] provided a new method for the minimum variance problem. Fullér and Majlender [7,8] suggested a minimum variance approach to obtain the minimal variability OWA weights and proved that the maximum entropy model can be transformed into a polynomial equation that can be solved analytically. Liu and Chen [13] proposed a parametric geometric approach that can be used to obtain maximum entropy weights. Wang and Parkan [18] suggested a new method which generates the OWA operator weights by minimizing the maximum difference between any two adjacent weights. They transformed the minimax disparity problem into a linear programming problem, obtained weights for some special values of orness, and proved the dual property of OWA operators. Liu [12] proved that the minimax disparity OWA problem of Wang and Parkan [18] and the minimum variance problem of Fullér and Majlender [7] always produce the same weight vector. Emrouznejad and Amin [5] gave an alternative disparity problem to identify the OWA operator weights by minimizing the sum of the deviations between distinct OWA weights. Amin and Emrouznejad [3,4] proposed an extended minimax disparity model, and Hong [11] proved the corresponding open problem in a mathematical sense. Recently, Wang et al. [1] suggested a least square deviation model for obtaining OWA operator weights, which is nonlinear and was solved using the LINGO program for given degrees of orness. Sang and Liu [17] solved this constrained optimization problem analytically, using the method of Lagrange multipliers. Liu [14] studied the general minimax disparity OWA operator optimization problem, which includes the minimax disparity OWA operator optimization model, and a general convex OWA operator optimization problem, which includes the maximum entropy [7] and minimum variance OWA problems [8,10,15]. Liu [15] suggested a general optimization model for determining ordered weighted averaging (OWA) operators and three specific models for generating monotonic and symmetric OWA operators.
In this paper, we propose a general least convex deviation model for OWA operators which attempts to obtain the desired OWA weight vector under a given orness level by minimizing the least convex deviation after a monotone convex function transformation of the absolute deviation. The model includes the least square deviation (LSD) OWA operators model suggested by Wang et al. [1]. We completely solve the optimization problem mathematically and consider the same numerical examples that Wang et al. [1] and Sang and Liu [17] presented in their illustrations of the application of the least square deviation model. We also determine the solution OWA operator weights not only for some discrete values of $\alpha$ but for all orness levels $0 \le \alpha \le 1$ as a function of $\alpha$.

2. The Least Convex Deviation Model

Yager [2] introduced an aggregation technique based on the ordered weighted averaging (OWA) operators. An OWA operator of dimension $n$ is a mapping $F : \mathbb{R}^n \to \mathbb{R}$ that has an associated weighting vector $W = (w_1, \ldots, w_n)^T$ with the properties $w_1 + \cdots + w_n = 1$, $0 \le w_i \le 1$, $i = 1, \ldots, n$, and
$$F(a_1, \ldots, a_n) = \sum_{i=1}^{n} w_i b_i,$$
where $b_j$ is the $j$th largest element of the collection of aggregated objects $\{a_1, \ldots, a_n\}$. In [2], Yager introduced a measure of "orness" associated with the weighting vector $W$ of an OWA operator, which is defined as
$$\mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i.$$
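As a quick illustration of these two definitions, the following minimal Python sketch (the helper names owa and orness are ours, not from the paper) aggregates a list of values and computes the orness of a weighting vector:

```python
def owa(weights, values):
    """OWA aggregation: pair the weights with the sorted (descending) values."""
    b = sorted(values, reverse=True)            # b_j = j-th largest argument
    return sum(w * x for w, x in zip(weights, b))

def orness(weights):
    """orness(W) = sum_i ((n - i) / (n - 1)) w_i, with i starting at 1."""
    n = len(weights)
    return sum((n - i) / (n - 1) * w for i, w in enumerate(weights, start=1))

W = [0.4, 0.3, 0.2, 0.1]
print(owa(W, [7, 1, 5, 3]))   # 0.4*7 + 0.3*5 + 0.2*3 + 0.1*1 = 5.0
print(orness(W))              # 2/3: this W leans toward the "or" (max) side
```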
Wang and Parkan [18] proposed the minimax disparity OWA operator optimization problem:
$$\begin{aligned} \text{Minimize } \ & \max_{i \in \{1, \ldots, n-1\}} |w_i - w_{i+1}| \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n. \end{aligned}$$
The minimax disparity approach obtains OWA operator weights based on the minimization of the maximum difference between any two adjacent weights. Recently, Liu [14] considered the following general minimax disparity OWA operator optimization problem:
$$\begin{aligned} \text{Minimize } \ & \max_{i \in \{1, \ldots, n-1\}} |F(w_i) - F(w_{i+1})| \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n, \end{aligned}$$
where $F$ is a strictly convex function on $[0, \infty)$ that is at least twice differentiable.
Liu [14] also considered a general convex OWA operator optimization problem with given orness level:
$$\begin{aligned} \text{Minimize } \ & V_W = \sum_{i=1}^{n} F(w_i) \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 < \alpha < 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n, \end{aligned} \tag{1}$$
where $F$ is a strictly convex function on $[0, 1]$ that is at least twice differentiable.
When $F(x) = x \ln x$, (1) becomes the maximum entropy OWA operator problem discussed in [7,12]. $F(x) = x^2$ in (1) corresponds to the minimum variance OWA operator problem [8,10]. When $F(x) = x^p$, $p > 1$, (1) becomes the OWA problem of Rényi entropy [16].
Wang et al. [1] introduced the following least squares deviation (LSD) method as an alternative approach to determine the OWA operator weights:
$$\begin{aligned} \text{Minimize } \ & \sum_{i=1}^{n-1} (w_{i+1} - w_i)^2 \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n. \end{aligned} \tag{2}$$
They solved this problem using the LINGO or MATLAB software packages. Recently, Sang and Liu [17] solved this constrained optimization problem analytically by using the method of Lagrange multipliers. The general least convex deviation model for OWA operators attempts to obtain the desired OWA weight vector under a given orness level by minimizing the least convex deviation after a monotone convex function transformation of the absolute deviation; it includes the least square deviation (LSD) problem as a special case.
We now propose the general least convex deviation model with a given orness level as follows:
$$\begin{aligned} \text{Minimize } \ & F(W) = \sum_{i=1}^{n-1} F\big(|w_{i+1} - w_i|\big) \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n, \end{aligned} \tag{3}$$
where $F$ is a strictly convex function on $[0, 1]$, and $F'$ is continuous on $[0, 1)$ such that $F'(0) = 0$.
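Before solving model (3) analytically, candidate solutions can be cross-checked numerically. The sketch below is our own illustration (not part of the paper): it feeds the objective of model (3) to SciPy's general-purpose SLSQP solver for a given convex $F$.

```python
import numpy as np
from scipy.optimize import minimize

def solve_model3(n, alpha, F=lambda x: x ** 2):
    """Numerically solve model (3); F(x) = x**2 gives the LSD model (2)."""
    coef = np.array([(n - i) / (n - 1) for i in range(1, n + 1)])
    obj = lambda w: sum(F(abs(w[i + 1] - w[i])) for i in range(n - 1))
    cons = ({'type': 'eq', 'fun': lambda w: coef @ w - alpha},   # orness level
            {'type': 'eq', 'fun': lambda w: w.sum() - 1.0})      # normalization
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                   constraints=cons)
    return res.x

print(solve_model3(5, 0.4).round(4))   # compare with Example 1 at alpha = 0.4
```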
The following propositions are well known and easily checked.
Proposition 1.
If $\mathrm{orness}(W) = 1$, then $W = (1, 0, \ldots, 0)$ is the only feasible solution of model (3). For $\mathrm{orness}(W) = 0$, $W = (0, \ldots, 0, 1)$ is the only feasible solution of model (3). Since $F(W) = 0$ if and only if $W = (1/n, \ldots, 1/n)$, we have that if $\mathrm{orness}(W) = 1/2$, then $W = (1/n, \ldots, 1/n)$ is the only optimal solution of model (3).
Proposition 2.
If $W^* = (w_1^*, \ldots, w_n^*)$ is an optimal solution of model (3) for a given level of $\mathrm{orness}(W) = \alpha$, then $\hat{W}^* = (\hat{w}_1^*, \ldots, \hat{w}_n^*)$, where $w_i^* = \hat{w}_{n-i+1}^*$, $i = 1, \ldots, n$, is an optimal solution of model (3) for $\mathrm{orness}(W) = 1 - \alpha$, and vice versa. Hence, for any $\alpha > 1/2$, we can solve model (3) for the orness level $1 - \alpha$ and then take the reverse of that optimal solution.
By Propositions 1 and 2, without loss of generality, we may assume that $\alpha \in (0, 1/2)$.
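Proposition 2 can also be checked numerically with the solve_model3 sketch above: the weights obtained at orness $\alpha$ should be the reverse of those obtained at $1 - \alpha$, up to solver tolerance.

```python
# Duality (Proposition 2): w*(alpha) equals w*(1 - alpha) reversed.
w_high = solve_model3(5, 0.7)
w_low = solve_model3(5, 0.3)
print(w_high.round(4))
print(w_low[::-1].round(4))   # should match the line above
```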

3. Optimal Solution of the Least Convex Deviation Problem

In this section, we give the mathematical solution of the optimization problem (3). We need the following lemmas to find the optimal solution of model (3).
Lemma 1.
Let $\{w_i\}$ be a set of nonnegative weights with $w_i = a$ for $i = 1, \ldots, k_0$ and $w_i = b$ for $i = k_0 + 1, \ldots, n-1$, where $a < b = w_{n-1} > w_n$, such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i = \alpha$ and $\sum_{i=1}^{n} w_i = 1$. If $0 < \alpha < 1/2$, then there exists a set $\{w_i^*\}$ of nonnegative weights such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$, $i = 1, \ldots, n-1$, $\sum_{i=1}^{n} w_i^* = 1$, and
$$\sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|) \le \sum_{i=1}^{n-1} F(|w_{i+1} - w_i|).$$
Proof. 
We note that
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1}\, a + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1}\, b = \alpha$$
and
$$k_0 a + (n - k_0 - 1) b + w_n = 1.$$
Consider $\epsilon > 0$ and $\delta > 0$ ($\delta$ depending on $\epsilon$) such that
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1} (a + \epsilon) + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1} (b - \delta) = \alpha, \tag{4}$$
and define a function $H(\epsilon)$ for $\epsilon \ge 0$ by
$$H(\epsilon) = k_0 (a + \epsilon) + (n - k_0)(b - \delta).$$
Then $H(\epsilon)$ is continuous and
$$H(0) = k_0 a + (n - k_0) b > k_0 a + (n - k_0 - 1) b + w_n = 1.$$
Let $a + \bar{\epsilon} = b - \bar{\delta} = \bar{a}$ for some $\bar{\epsilon} > 0$ and $\bar{\delta} > 0$. Then we have
$$\sum_{i=1}^{n} \frac{n-i}{n-1}\, \bar{a} = \frac{n \bar{a}}{2} = \alpha,$$
so that $\bar{a} = 2\alpha/n$. Now, since $0 < \alpha < 1/2$,
$$H(\bar{\epsilon}) = k_0 (a + \bar{\epsilon}) + (n - k_0)(b - \bar{\delta}) = k_0 \bar{a} + (n - k_0) \bar{a} = n \bar{a} = 2\alpha < 1,$$
and then there exist $\epsilon^*$ and $\delta^*$ with $0 < \epsilon^* < \bar{\epsilon}$ and $0 < \delta^* < \bar{\delta}$ such that
$$H(\epsilon^*) = k_0 (a + \epsilon^*) + (n - k_0)(b - \delta^*) = 1,$$
and, by (4),
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1} (a + \epsilon^*) + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1} (b - \delta^*) = \alpha.$$
Let
$$w_i^* = \begin{cases} a + \epsilon^*, & i = 1, \ldots, k_0, \\ b - \delta^*, & i = k_0 + 1, \ldots, n. \end{cases}$$
Then, since $a < a + \epsilon^* < b - \delta^* < b$ and $F$ is strictly increasing, we have
$$\sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|) = F\big((b - a) - (\epsilon^* + \delta^*)\big) < F(b - a) = F(|w_{k_0+1} - w_{k_0}|) \le \sum_{i=1}^{n-1} F(|w_{i+1} - w_i|).$$
This completes the proof. □
Lemma 2.
Let $\{w_i\}$ be a set of nonnegative weights such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i = \alpha$ and $\sum_{i=1}^{n} w_i = 1$. If $0 < \alpha < 1/2$, then there exists a set $\{w_i^*\}$ of nonnegative weights such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$, $\sum_{i=1}^{n} w_i^* = 1$, and
$$\sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|) \le \sum_{i=1}^{n-1} F(|w_{i+1} - w_i|).$$
Proof. 
Let $w_{(i)}$ be the $i$-th smallest element of $\{w_i\}$. Then we have
$$\alpha = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i \ge \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_{(i)}.$$
Hence there exist some $k_0$ with $1 \le k_0 \le n$ and some $\overline{w}_{(k_0)}$ with $w_{(k_0)} \le \overline{w}_{(k_0)} \le w_{(k_0+1)}$ such that
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1}\, \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1}\, w_{(i)} = \alpha. \tag{5}$$
Since
$$1 = \sum_{i=1}^{n} w_{(i)} \le k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)},$$
we consider two possible cases:
$$k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)} = 1$$
or
$$k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)} > 1. \tag{6}$$
First, we suppose that
$$k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)} = 1$$
and let
$$w_i^* = \begin{cases} \overline{w}_{(k_0)}, & i = 1, \ldots, k_0, \\ w_{(i)}, & i = k_0 + 1, \ldots, n. \end{cases}$$
Since $w_{(i)} \le w_i^*$, $i = 1, 2, \ldots, n$, and $1 = \sum_{i=1}^{n} w_{(i)} = \sum_{i=1}^{n} w_i^*$, we have $w_{(i)} = w_i^*$, $i = 1, 2, \ldots, n$, and then $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$ and $\sum_{i=1}^{n} w_i^* = 1$. Since $F$ is nondecreasing on $[0, \infty)$,
$$\sum_{i=1}^{n-1} F(|w_{i+1} - w_i|) \ge \sum_{i=1}^{n-1} F(|w_{(i+1)} - w_{(i)}|) = \sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|).$$
Now we suppose that
$$k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)} > 1.$$
We note that for $0 \le \epsilon \le 1$ there exists $0 \le h(\epsilon) = \delta \le 1$ such that
$$H_1(\epsilon, \delta) = \sum_{i=1}^{k_0} \frac{n-i}{n-1}\big[(1-\epsilon)\overline{w}_{(k_0)} + \epsilon w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1}\big[(1-\delta) w_{(i)} + \delta w_{(k_0+1)}\big] = \alpha. \tag{7}$$
Then $h$ is an increasing continuous function of $\epsilon$, and we have three possible cases as $\epsilon \to 1$: (Case 1) $h(\epsilon_0) = 1$, i.e., $H_1(\epsilon_0, 1) = \alpha$ for some $0 < \epsilon_0 < 1$; (Case 2) $h(1) = 1$, i.e., $H_1(1, 1) = \alpha$; and (Case 3) $h(1) = \delta_0$, i.e., $H_1(1, \delta_0) = \alpha$ for some $0 < \delta_0 < 1$.
We define a function $H(\epsilon)$ on $0 \le \epsilon \le 1$ by
$$H(\epsilon) = \sum_{i=1}^{k_0} \big[(1-\epsilon)\overline{w}_{(k_0)} + \epsilon w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} \big[(1-\delta) w_{(i)} + \delta w_{(k_0+1)}\big],$$
where $\delta = h(\epsilon)$, so that $H_1(\epsilon, \delta) = \alpha$. Then $H$ is continuous and, by (6), we have
$$H(0) = k_0 \overline{w}_{(k_0)} + \sum_{i=k_0+1}^{n} w_{(i)} > 1. \tag{8}$$
(Case 1) $H_1(\epsilon_0, 1) = \alpha$ for some $0 < \epsilon_0 < 1$.
From (7), we have
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1}\big[(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1}\, w_{(k_0+1)} = \alpha.$$
There are two possible cases, that is,
$$H(\epsilon_0) = \sum_{i=1}^{k_0} \big[(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} w_{(k_0+1)} \le 1$$
or
$$H(\epsilon_0) = \sum_{i=1}^{k_0} \big[(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} w_{(k_0+1)} > 1.$$
First, suppose that
$$H(\epsilon_0) = \sum_{i=1}^{k_0} \big[(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} w_{(k_0+1)} \le 1. \tag{9}$$
Then, from (8) and (9), there exist $0 < \epsilon^* \le \epsilon_0$ and $0 < \delta^* \le 1$ such that
$$H(\epsilon^*) = \sum_{i=1}^{k_0} \big[(1-\epsilon^*)\overline{w}_{(k_0)} + \epsilon^* w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} \big[(1-\delta^*) w_{(i)} + \delta^* w_{(k_0+1)}\big] = 1. \tag{10}$$
Put
$$w_i^* = \begin{cases} (1-\epsilon^*)\overline{w}_{(k_0)} + \epsilon^* w_{(k_0+1)}, & i = 1, \ldots, k_0, \\ (1-\delta^*) w_{(i)} + \delta^* w_{(k_0+1)}, & i = k_0 + 1, \ldots, n. \end{cases}$$
Then we have $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$ and $\sum_{i=1}^{n} w_i^* = 1$. And since $F$ is nondecreasing on $[0, \infty)$, by the construction of the $w_i^*$, $i = 1, 2, \ldots, n$,
$$\sum_{i=1}^{n-1} F(|w_{i+1} - w_i|) \ge \sum_{i=1}^{n-1} F(|w_{(i+1)} - w_{(i)}|) \ge F\big(|w_{(k_0+1)} - w_{(k_0)}|\big) + \sum_{i=k_0+1}^{n-1} F\big((1-\delta^*)|w_{(i+1)} - w_{(i)}|\big) \ge \sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|).$$
Second, suppose that
$$H(\epsilon_0) = \sum_{i=1}^{k_0} \big[(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} w_{(k_0+1)} > 1,$$
and let $a = (1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}$, $b = w_{(k_0+1)}$ and $w_n = 1 - \big(\sum_{i=1}^{k_0} [(1-\epsilon_0)\overline{w}_{(k_0)} + \epsilon_0 w_{(k_0+1)}] + \sum_{i=k_0+1}^{n-1} w_{(k_0+1)}\big)$. Then $a < b > w_n$ and, from Lemma 1, we obtain $w_i^*$, $i = 1, 2, \ldots, n$, such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$, $\sum_{i=1}^{n} w_i^* = 1$, and $\sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|) \le \sum_{i=1}^{n-1} F(|w_{i+1} - w_i|)$.
(Case 2) $H_1(1, 1) = \alpha$.
From (7),
$$\sum_{i=1}^{k_0} \frac{n-i}{n-1}\, w_{(k_0+1)} + \sum_{i=k_0+1}^{n} \frac{n-i}{n-1}\, w_{(k_0+1)} = \alpha,$$
hence
$$w_{(k_0+1)} = \frac{2\alpha}{n} < \frac{1}{n}.$$
We note that
$$H(1) = \sum_{i=1}^{n} w_{(k_0+1)} = 2\alpha < 1. \tag{11}$$
Since $H(0) > 1$ and $H(1) < 1$ by (8) and (11), there exist $0 < \epsilon^* < 1$ and $0 < \delta^* < 1$ such that
$$H(\epsilon^*) = \sum_{i=1}^{k_0} \big[(1-\epsilon^*)\overline{w}_{(k_0)} + \epsilon^* w_{(k_0+1)}\big] + \sum_{i=k_0+1}^{n} \big[(1-\delta^*) w_{(i)} + \delta^* w_{(k_0+1)}\big] = 1.$$
Hence we obtain $w_i^*$, $i = 1, 2, \ldots, n$, by putting
$$w_i^* = \begin{cases} (1-\epsilon^*)\overline{w}_{(k_0)} + \epsilon^* w_{(k_0+1)}, & i = 1, \ldots, k_0, \\ (1-\delta^*) w_{(i)} + \delta^* w_{(k_0+1)}, & i = k_0 + 1, \ldots, n, \end{cases}$$
such that $\sum_{i=1}^{n} \frac{n-i}{n-1} w_i^* = \alpha$, $0 \le w_i^* \le w_{i+1}^* \le 1$ and $\sum_{i=1}^{n} w_i^* = 1$. And, just as in (Case 1), we have
$$\sum_{i=1}^{n-1} F(|w_{i+1} - w_i|) \ge \sum_{i=1}^{n-1} F(|w_{i+1}^* - w_i^*|).$$
(Case 3) $H_1(1, \delta_0) = \alpha$ for some $0 < \delta_0 < 1$.
From (7), we have
$$\sum_{i=1}^{k_0+1} \frac{n-i}{n-1}\, w_{(k_0+1)} + \sum_{i=k_0+2}^{n} \frac{n-i}{n-1}\big[(1-\delta_0) w_{(i)} + \delta_0 w_{(k_0+1)}\big] = \alpha. \tag{12}$$
There are two possible cases, that is,
$$H(1) = (k_0 + 1) w_{(k_0+1)} + \sum_{i=k_0+2}^{n} \big[(1-\delta_0) w_{(i)} + \delta_0 w_{(k_0+1)}\big] \le 1$$
or
$$H(1) = (k_0 + 1) w_{(k_0+1)} + \sum_{i=k_0+2}^{n} \big[(1-\delta_0) w_{(i)} + \delta_0 w_{(k_0+1)}\big] > 1.$$
If $H(1) \le 1$, then it is easy to obtain the desired $w_i^*$, $i = 1, 2, \ldots, n$, by arguments similar to the above. Hence we consider the case
$$H(1) = (k_0 + 1) w_{(k_0+1)} + \sum_{i=k_0+2}^{n} \big[(1-\delta_0) w_{(i)} + \delta_0 w_{(k_0+1)}\big] > 1. \tag{13}$$
Now (12) and (13) are exactly of the same form as (5) and (6), regarding $w_{(k_0+1)}$ as $\overline{w}_{(k_0)}$ and $(1-\delta_0) w_{(i)} + \delta_0 w_{(k_0+1)}$ as $w_{(i)}$, $i = k_0 + 2, \ldots, n$, in (5) and (6). If we apply the same arguments as above a finite number of times, then we finally arrive at the following situation: there exist $w_i$, $i = 1, \ldots, n$, such that
$$\sum_{i=1}^{n-2} \frac{n-i}{n-1}\, w_{(n-2)} + \frac{1}{n-1}\, w_{(n-1)} = \alpha$$
and
$$(n-2) w_{(n-2)} + w_{(n-1)} + w_{(n)} > 1.$$
If we put $a = w_{(n-2)}$, $b = w_{(n-1)}$ and $w_n = 1 - [(n-2) w_{(n-2)} + w_{(n-1)}]$ in Lemma 1, then we obtain the desired $w_i^*$, $i = 1, 2, \ldots, n$, by using Lemma 1 again. This completes the proof. □
The following result follows immediately from Lemma 2.
Lemma 3.
The model (3) is equivalent to the following model:
$$\begin{aligned} \text{Minimize } \ & \sum_{i=1}^{n-1} F(w_{i+1} - w_i) \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1/2, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n, \\ & w_i \le w_{i+1}, \ i = 1, \ldots, n-1, \end{aligned} \tag{14}$$
where $F$ is a strictly convex function on $[0, \infty)$, and $F'$ is continuous on $[0, 1)$ such that $F'(0) = 0$.
Lemma 4.
If we put $w_i = \sum_{k=1}^{i} x_k$, $i = 1, \ldots, n$, then the model (14) is transformed into the following model:
$$\begin{aligned} \text{Minimize } \ & V_W = \sum_{k=2}^{n} F(x_k) \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{k=1}^{n} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k = \alpha, \quad 0 \le \alpha \le 1/2, \\ & \sum_{k=1}^{n} (n-k+1) x_k = 1, \quad 0 \le x_k, \ k = 1, \ldots, n, \end{aligned} \tag{15}$$
where $F$ is a strictly convex function on $[0, 1]$ with continuous first derivative such that $F'(0) = 0$.
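The transformed constraints in (15) can be checked by interchanging the order of summation. With $w_i = \sum_{k=1}^{i} x_k$,
$$\sum_{i=1}^{n} w_i = \sum_{i=1}^{n} \sum_{k=1}^{i} x_k = \sum_{k=1}^{n} (n-k+1)\, x_k, \qquad \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \sum_{k=1}^{n} \Big(\sum_{i=k}^{n} \frac{n-i}{n-1}\Big) x_k = \sum_{k=1}^{n} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k,$$
since $\sum_{i=k}^{n} (n-i) = (n-k)(n-k+1)/2$. The objective becomes $\sum_{k=2}^{n} F(x_k)$ because $x_k = w_k - w_{k-1} \ge 0$ for $k = 2, \ldots, n$.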
We now solve the optimization problem of model (3). We note that $F$ is strictly convex if and only if $F'$ is strictly increasing.
Theorem 1.
Let $F$ be a strictly convex function on $[0, 1]$ and let $F'$ be continuous on $[0, 1)$ such that $F'(0) = 0$. Then the optimal solution for the model (3) with given orness level $0 < \alpha < 1/2$ is as follows.
In the case $w_1^* = x_1^* = 0$, it is the weighting function $w_i^* = \sum_{k=1}^{i} x_k^*$, $i = 1, 2, \ldots, n$, with
$$x_k^* = \begin{cases} (F')^{-1}\big(a^*(n-k)(n-k+1) + b^*(n-k+1)\big), & k \in H, \\ 0, & k \notin H, \end{cases} \tag{16}$$
where $a^*$, $b^*$ are determined by the constraints
$$\sum_{k \in H} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k^* = \alpha, \qquad \sum_{k \in H} (n-k+1)\, x_k^* = 1, \tag{17}$$
and $H = \{k \mid a^*(n-k)(n-k+1) + b^*(n-k+1) > 0\}$.
In the case $w_1^* = x_1^* > 0$, it is the weighting function $w_i^* = \sum_{k=1}^{i} x_k^*$, $i = 1, 2, \ldots, n$, with
$$x_k^* = (F')^{-1}\Big(c^* \frac{(k-1)(n-k+1)}{n-1}\Big), \quad k = 2, 3, \ldots, n, \tag{18}$$
and
$$x_1^* = \frac{1}{n}\Big(1 - \sum_{k=2}^{n} (n-k+1)\, x_k^*\Big), \tag{19}$$
where $c^*$ is determined by the constraint
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, x_k^*. \tag{20}$$
Proof. 
By Lemma 4, we consider the following model (15) to obtain $x_k^*$ for $k = 1, 2, \ldots, n$:
$$\begin{aligned} \text{Minimize } \ & V_W = \sum_{k=2}^{n} F(x_k) \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{k=1}^{n} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k = \alpha, \quad 0 < \alpha < 1/2, \\ & \sum_{k=1}^{n} (n-k+1) x_k = 1, \quad 0 \le x_k, \ k = 1, \ldots, n. \end{aligned}$$
There are two possible cases: (Case 1) $w_1^* = x_1^* = 0$ or (Case 2) $w_1^* = x_1^* > 0$.
(Case 1) $w_1^* = x_1^* = 0$.
Let $x_k^* = \max\big\{(F')^{-1}\big(a^*(n-k)(n-k+1) + b^*(n-k+1)\big),\, 0\big\}$ be such that
$$\sum_{k=1}^{n} (n-k)(n-k+1)\, x_k^* = 2(n-1)\alpha \tag{21}$$
$$\sum_{k=1}^{n} (n-k+1)\, x_k^* = 1, \tag{22}$$
and let $x_k$, $k = 1, \ldots, n$, be a vector such that
$$\sum_{k=1}^{n} (n-k)(n-k+1)\, x_k = 2(n-1)\alpha \tag{23}$$
$$\sum_{k=1}^{n} (n-k+1)\, x_k = 1, \quad 0 \le x_k, \ k = 1, \ldots, n. \tag{24}$$
We also note that
$$F'(x_k^*) = \begin{cases} 0, & k \notin H, \\ a^*(n-k)(n-k+1) + b^*(n-k+1), & k \in H, \end{cases} \tag{25}$$
and we put $x_k = x_k^* + \beta_k$ for $k = 1, \ldots, n$. Then, noting that $x_k = \beta_k$ for $k \notin H$, we have
$$\sum_{k \notin H} (n-k+1)\, x_k + \sum_{k \in H} (n-k+1)\, \beta_k = \sum_{k=1}^{n} (n-k+1)\, \beta_k = 0 \tag{26}$$
from (22) and (24), because
$$1 = \sum_{k=1}^{n} (n-k+1)\, x_k = \sum_{k=1}^{n} (n-k+1)(x_k^* + \beta_k) = \sum_{k=1}^{n} (n-k+1)\, x_k^* + \sum_{k=1}^{n} (n-k+1)\, \beta_k = 1 + \sum_{k=1}^{n} (n-k+1)\, \beta_k.$$
We also have, from (21) and (23),
$$\sum_{k \notin H} (n-k)(n-k+1)\, x_k + \sum_{k \in H} (n-k)(n-k+1)\, \beta_k = \sum_{k=1}^{n} (n-k)(n-k+1)\, \beta_k = 0, \tag{27}$$
because
$$2(n-1)\alpha = \sum_{k=1}^{n} (n-k)(n-k+1)\, x_k = \sum_{k=1}^{n} (n-k)(n-k+1)(x_k^* + \beta_k) = 2(n-1)\alpha + \sum_{k=1}^{n} (n-k)(n-k+1)\, \beta_k.$$
We now show that
$$\sum_{k=2}^{n} F(x_k) \ge \sum_{k=2}^{n} F(x_k^*).$$
Since $F(y) - F(y_0) \ge F'(y_0)(y - y_0)$ (with equality if and only if $y = y_0$), we have
$$\begin{aligned} \sum_{k=2}^{n} F(x_k) - \sum_{k=2}^{n} F(x_k^*) &= \sum_{k=2}^{n} F(x_k^* + \beta_k) - \sum_{k=2}^{n} F(x_k^*) \ge \sum_{k=2}^{n} F'(x_k^*)\, \beta_k = \sum_{k=1}^{n} F'(x_k^*)\, \beta_k \\ &= \sum_{k \in H} \beta_k \big[a^*(n-k)(n-k+1) + b^*(n-k+1)\big] \\ &= a^* \sum_{k \in H} (n-k)(n-k+1)\, \beta_k + b^* \sum_{k \in H} (n-k+1)\, \beta_k \\ &= -a^* \sum_{k \notin H} (n-k)(n-k+1)\, x_k - b^* \sum_{k \notin H} (n-k+1)\, x_k \\ &= -\sum_{k \notin H} x_k \big[a^*(n-k)(n-k+1) + b^*(n-k+1)\big] \ge 0, \end{aligned}$$
where the second equality comes from the fact that $F'(x_1^*) = F'(0) = 0$, the third equality comes from (25), the fifth equality comes from (26) and (27), and the final inequality comes from the fact that $a^*(n-k)(n-k+1) + b^*(n-k+1) \le 0$ and $x_k \ge 0$ for $k \notin H$. The equality holds if and only if $\beta_i = 0$, $i = 2, \ldots, n$. This completes Case 1.
(Case 2) $w_1^* = x_1^* > 0$.
Let
$$x_k^* = (F')^{-1}\Big(c^* \frac{(k-1)(n-k+1)}{n-1}\Big), \quad k = 2, 3, \ldots, n, \tag{28}$$
and
$$x_1^* = \frac{1}{n}\Big(1 - \sum_{k=2}^{n} (n-k+1)\, x_k^*\Big), \tag{29}$$
where $c^*$ is determined by the constraint
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, x_k^*. \tag{30}$$
Then, from (29),
$$\sum_{k=1}^{n} (n-k+1)\, x_k^* = 1. \tag{31}$$
We note that
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, x_k^* = \sum_{k=1}^{n} (n-k+1)\, x_k^* - 2 \sum_{k=1}^{n} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k^*. \tag{32}$$
Since $\sum_{k=1}^{n} (n-k+1)\, x_k^* = 1$, we have
$$\sum_{k=1}^{n} (n-k)(n-k+1)\, x_k^* = 2(n-1)\alpha,$$
and then $x_k^*$, $k = 1, 2, \ldots, n$, satisfies the constraints of the model (15). We now show that $x_k^*$, $k = 1, 2, \ldots, n$, is the optimal solution of the model (15). Let $x_k$, $k = 1, 2, \ldots, n$, be a vector such that
$$\sum_{k=1}^{n} (n-k)(n-k+1)\, x_k = 2(n-1)\alpha \tag{33}$$
$$\sum_{k=1}^{n} (n-k+1)\, x_k = 1, \quad x_k \ge 0. \tag{34}$$
Then, from (33) and (34),
$$1 - 2\alpha = \sum_{k=1}^{n} (n-k+1)\, x_k - 2 \sum_{k=1}^{n} \frac{(n-k)(n-k+1)}{2(n-1)}\, x_k = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, x_k. \tag{35}$$
If we put $x_k = x_k^* + \beta_k$, $k = 1, 2, \ldots, n$, then we have
$$\sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, \beta_k = 0 \tag{36}$$
because
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, x_k = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, (x_k^* + \beta_k) = 1 - 2\alpha + \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, \beta_k,$$
where the first equality comes from (35) and the last equality comes from (30). Hence we have
$$\sum_{k=2}^{n} F(x_k) - \sum_{k=2}^{n} F(x_k^*) = \sum_{k=2}^{n} F(x_k^* + \beta_k) - \sum_{k=2}^{n} F(x_k^*) \ge \sum_{k=2}^{n} F'(x_k^*)\, \beta_k = c^* \sum_{k=2}^{n} \frac{(k-1)(n-k+1)}{n-1}\, \beta_k = c^* \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, \beta_k = 0,$$
where the second equality comes from (28) and the fourth equality comes from (36). The equality holds if and only if $\beta_i = 0$ for $i = 2, \ldots, n$. This completes the proof. □
Note 1. Observe that $H = \{k \mid a^*(n-k) + b^* > 0\}$, since $a^*(n-k)(n-k+1) + b^*(n-k+1) = (n-k+1)[a^*(n-k) + b^*]$ and $(n-k+1) > 0$; hence $H$ is either $\{1, 2, \ldots, m-1\}$ or $\{m, m+1, \ldots, n\}$ for some $m \in \{1, 2, \ldots, n\}$. By Lemma 2, the solution OWA operator weights for $0 \le \alpha \le 1/2$ have the form
$$W^* = \big(0, 0, \ldots, 0, w_m^*, w_{m+1}^*, \ldots, w_n^*\big).$$
Then $H = \{m, m+1, \ldots, n\}$ and, by (16), $w_m^* < w_{m+1}^* < \cdots < w_n^*$. We also note that $w_1^* = x_1^* > 0 \Leftrightarrow H = \{1, 2, \ldots, n\}$, and $w_1^* = 0 \Leftrightarrow H = \{m, m+1, \ldots, n\}$ for some $m \ge 2$.
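In the case $w_1^* > 0$, Theorem 1 reduces the whole problem to the single scalar Equation (20) in $c^*$, whose left-hand side is increasing in $c^*$. The following sketch is our own illustration (the helper name theorem1_case2 is ours; it assumes $(F')^{-1}$ is supplied): it solves (20) by bisection and assembles the weights via (18) and (19).

```python
from scipy.optimize import brentq

def theorem1_case2(n, alpha, Finv):
    """Finv is (F')^{-1}; returns w* when w_1* > 0, otherwise None."""
    coef = lambda k: (k - 1) * (n - k + 1) / (n - 1)
    # Equation (20): an increasing scalar equation in c, solved by bisection.
    g = lambda c: sum(coef(k) * Finv(c * coef(k)) for k in range(2, n + 1))
    c = brentq(lambda c: g(c) - (1 - 2 * alpha), 0.0, 1e6)
    x = [0.0] + [Finv(c * coef(k)) for k in range(2, n + 1)]       # (18)
    x[0] = (1 - sum((n - k + 1) * x[k - 1] for k in range(2, n + 1))) / n  # (19)
    if x[0] <= 0:            # alpha <= J_n(1): Case 1 of Theorem 1 applies
        return None
    return [sum(x[:i + 1]) for i in range(n)]                      # w_i* = sum of x_k

# LSD case F(x) = x**2, so (F')^{-1}(y) = y / 2; n = 5, alpha = 0.3:
print(theorem1_case2(5, 0.3, lambda y: y / 2))   # (3/65, 7/65, 1/5, 19/65, 23/65)
```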
As a special case of model (3), we consider the following model for $p > 1$:
$$\begin{aligned} \text{Minimize } \ & \sum_{i=1}^{n-1} |w_{i+1} - w_i|^p \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \ldots, n. \end{aligned} \tag{37}$$
Note 2. Let $S_m(\alpha)$ be the subset of $0 < \alpha < 1/2$ on which the optimal solution for the model (37) with given orness level $\alpha$ has the form $(0, \ldots, 0, w_m^*, w_{m+1}^*, \ldots, w_n^*)$ with $0 < w_m^*, \ldots, w_n^*$. If $x_m^* = w_m^*$ is a linear function of $\alpha$ with positive slope, then we define $J_n(m)$ by $\{J_n(m) < \alpha\} = \{\alpha \mid x_m^* = w_m^* > 0\}$. We then have
$$S_m(\alpha) = \{\alpha \mid x_m^* = w_m^* > 0\} \cap \{\alpha \mid x_{m-1}^* = w_{m-1}^* > 0\}^c = \{J_n(m) < \alpha \le J_n(m-1)\}.$$
We now give the closed form of the exact optimal solutions of the LSD OWA model as a function of $n$ and $\alpha$.
Corollary 1
([17]). The optimal solution for the model (37) with given orness level $0 < \alpha < 1/2$ when $p = 2$ and $w_1^* = x_1^* > 0$ is the weighting function $w_i^* = \sum_{k=1}^{i} x_k^*$, $i = 1, 2, \ldots, n$, where
$$x_1^* = \frac{10(n^2 - n)\alpha - 3n^2 + 5n + 2}{2n(n^2 + 1)}$$
and
$$x_k^* = \frac{30(1 - 2\alpha)(k-1)(n-k+1)}{n(n^3 + n^2 + n + 1)}, \quad k = 2, \ldots, n,$$
on $J_n(1) = \frac{3n^2 - 5n - 2}{10n(n-1)} < \alpha < 1/2$.
Proof. 
By Equation (20) of Theorem 1 with $F(x) = x^2$ and $(F')^{-1}(x) = \frac{1}{2} x$,
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, (F')^{-1}\Big(c^* \frac{(k-1)(n-k+1)}{n-1}\Big) = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1} \cdot \frac{1}{2}\, c^* \frac{(k-1)(n-k+1)}{n-1} = \frac{c^* n(n^3 + n^2 + n + 1)}{60(n-1)},$$
so we have
$$c^* = \frac{60(n-1)(1 - 2\alpha)}{n(n^3 + n^2 + n + 1)}.$$
Then, by Equation (18),
$$x_k^* = (F')^{-1}\Big(c^* \frac{(k-1)(n-k+1)}{n-1}\Big) = \frac{1}{2}\, c^* \frac{(k-1)(n-k+1)}{n-1} = \frac{30(1 - 2\alpha)(k-1)(n-k+1)}{n(n^3 + n^2 + n + 1)}$$
for $k = 2, \ldots, n$, and hence, by Equation (19),
$$x_1^* = \frac{1}{n}\Big(1 - \sum_{k=2}^{n} (n-k+1)\, x_k^*\Big) = \frac{1}{n}\Big(1 - \sum_{k=2}^{n} \frac{30(1 - 2\alpha)(k-1)(n-k+1)^2}{n(n^3 + n^2 + n + 1)}\Big) = \frac{10(n^2 - n)\alpha - 3n^2 + 5n + 2}{2n(n^2 + 1)}.$$
Since $x_1^* = w_1^* > 0$ and $x_1^*$ is a linear function of $\alpha$ with positive slope, this holds exactly when
$$\frac{(n-2)(3n+1)}{10n(n-1)} < \alpha < \frac{1}{2}.$$
Thus $w_i^* = \sum_{k=1}^{i} x_k^*$, $i = 1, 2, \ldots, n$, is the optimal solution for the model (37) for $J_n(1) = \frac{3n^2 - 5n - 2}{10n(n-1)} < \alpha < \frac{1}{2}$. □
Corollary 2
([17]). The optimal solution for the model (37) with given orness level $0 < \alpha < 1/2$ when $p = 2$ and $H = \{m, m+1, \ldots, n\}$ for $m \in \{2, \ldots, n\}$ is the weighting function $w_i^* = \sum_{k=m}^{i} x_k^*$, $i = m, m+1, \ldots, n$,
with
$$x_1^* = x_2^* = \cdots = x_{m-1}^* = 0,$$
$$x_k^* = \frac{a^*(n-k)(n-k+1) + b^*(n-k+1)}{2}, \quad k = m, \ldots, n, \tag{38}$$
where
$$a^* = \frac{A(n, m, \alpha)}{B(n, m)} \quad \text{and} \quad b^* = \frac{C(n, m, \alpha)}{D(n, m)},$$
$$\begin{aligned} A(n, m, \alpha) &= 480\alpha(n-1)(2n - 2m + 3) - 120(n-m)(3n - 3m + 5), \\ B(n, m) &= (n-m)(n-m+1)(n-m+2)(n-m+3)\big[3(n-m)^2 + 9(n-m) + 8\big], \\ C(n, m, \alpha) &= 96\big[3(n-m)^2 + 6(n-m) + 1\big] - 240\alpha(n-1)(3n - 3m + 5), \\ D(n, m) &= (n-m+1)(n-m+2)(n-m+3)\big[3(n-m)^2 + 9(n-m) + 8\big], \end{aligned}$$
on $J_n(m) < \alpha \le J_n(m-1)$, $m = 2, \ldots, n-1$,
with
$$J_n(0) = \frac{1}{2}, \qquad J_n(m) = \frac{(n-m-1)(3n - 3m + 4)}{10(n-m+1)(n-1)}. \tag{39}$$
Proof. 
Let $H = \{m, m+1, \ldots, n\}$ be given for $m \in \{2, \ldots, n\}$ and take $F(x) = x^2$, $(F')^{-1}(x) = \frac{1}{2} x$ in Equation (16) of Theorem 1. If
$$\sum_{k=m}^{n} \frac{(n-k)(n-k+1)}{2(n-1)} \cdot \frac{1}{2}\big(a^*(n-k)(n-k+1) + b^*(n-k+1)\big) = \alpha$$
and
$$\sum_{k=m}^{n} (n-k+1) \cdot \frac{1}{2}\big(a^*(n-k)(n-k+1) + b^*(n-k+1)\big) = 1,$$
then, solving this linear system for $a^*$ and $b^*$, we obtain
$$a^* = \frac{A(n, m, \alpha)}{B(n, m)}, \qquad b^* = \frac{C(n, m, \alpha)}{D(n, m)},$$
with $A$, $B$, $C$, $D$ as given in the statement. Hence we have
$$x_k^* = \frac{a^*(n-k)(n-k+1) + b^*(n-k+1)}{2}, \quad m \le k \le n.$$
Since $x_m^* = w_m^*$ is a linear function of $\alpha$ with positive slope, we have $\{J_n(m) < \alpha\} = \{\alpha \mid x_m^* > 0\}$, so that
$$J_n(m) = \frac{(n-m-1)(3n - 3m + 4)}{10(n-m+1)(n-1)}.$$
This completes the proof. □
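The two corollaries combine into a complete closed-form procedure for the LSD ($p = 2$) weights: locate the interval $(J_n(m), J_n(m-1)]$ containing $\alpha$ and apply the corresponding formula. A sketch in exact rational arithmetic (our own illustration; the names J and lsd_weights are hypothetical):

```python
from fractions import Fraction

def J(n, m):
    """J_n(m) of Equation (39), with J_n(0) = 1/2."""
    if m == 0:
        return Fraction(1, 2)
    return Fraction((n - m - 1) * (3 * n - 3 * m + 4),
                    10 * (n - m + 1) * (n - 1))

def lsd_weights(n, alpha):
    """Closed-form LSD weights (Corollaries 1 and 2) for 0 < alpha <= 1/2."""
    a = alpha if isinstance(alpha, Fraction) else Fraction(alpha).limit_denominator()
    m = next(m for m in range(1, n) if J(n, m) < a <= J(n, m - 1))
    x = [Fraction(0)] * (n + 1)            # x[k] = x_k*; index 0 unused
    if m == 1:                             # Corollary 1: w_1* > 0
        for k in range(2, n + 1):
            x[k] = Fraction(30 * (k - 1) * (n - k + 1),
                            n * (n**3 + n**2 + n + 1)) * (1 - 2 * a)
        x[1] = (1 - sum((n - k + 1) * x[k] for k in range(2, n + 1))) / n
    else:                                  # Corollary 2: w_1* = ... = w_{m-1}* = 0
        u = n - m
        a_star = (480 * a * (n - 1) * (2 * u + 3) - 120 * u * (3 * u + 5)) / (
            u * (u + 1) * (u + 2) * (u + 3) * (3 * u * u + 9 * u + 8))
        b_star = (96 * (3 * u * u + 6 * u + 1) - 240 * a * (n - 1) * (3 * u + 5)) / (
            (u + 1) * (u + 2) * (u + 3) * (3 * u * u + 9 * u + 8))
        for k in range(m, n + 1):
            x[k] = (a_star * (n - k) * (n - k + 1) + b_star * (n - k + 1)) / 2
    w, acc = [], Fraction(0)
    for k in range(1, n + 1):
        acc += x[k]
        w.append(acc)
    return w

print(lsd_weights(5, Fraction(1, 5)))   # alpha = 0.2: (0, 6/155, 27/155, 52/155, 70/155)
```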
From Corollaries 1 and 2, $w_m^*$ is a linear function of $\alpha$ on each interval $(J_n(i), J_n(i-1)]$, $i = 1, 2, \ldots, n-1$. It is also easy to check that $w_m^*$ is continuous as a function of $\alpha$. Hence we have the following property.
Proposition 3.
Let $w_m^* = f_m(\alpha)$, $m = 1, 2, \ldots, n$, as a function of $\alpha$, be the optimal solution for the model (37) with given orness level $0 \le \alpha \le 1$ when $p = 2$. Then $w_m^* = f_m(\alpha)$ is continuous and piecewise linear.
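The continuity claim can be spot-checked with the lsd_weights sketch above at a breakpoint, e.g., $\alpha = J_5(1) = 6/25$, where the Corollary 2 branch ($m = 2$) meets the Corollary 1 branch ($m = 1$):

```python
from fractions import Fraction
eps = Fraction(1, 10**9)
w_left = lsd_weights(5, Fraction(6, 25))         # m = 2 branch at the breakpoint
w_right = lsd_weights(5, Fraction(6, 25) + eps)  # m = 1 branch just above it
print([float(l - r) for l, r in zip(w_left, w_right)])   # all entries ~ 0
```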

4. Numerical Examples

We first consider the same numerical example that Wang et al. [1] presented in their illustration of the application of the least square deviation model for $n = 5$. Wang et al. [1] determined the OWA operator weights satisfying the discrete degrees of orness $\alpha = 0, 0.1, \ldots, 0.9, 1$. In this example, by contrast, we determine the solution OWA operator weights as a continuous function of $\alpha$ for all orness levels $0 \le \alpha \le 1$ using our results.
Example 1
([1]). Suppose that $p = 2$ and $n = 5$. Then, from Theorem 1 and Equation (39) of Corollary 2,
$$J_5(0) = \frac{1}{2}, \quad J_5(1) = \frac{6}{25}, \quad J_5(2) = \frac{13}{80}, \quad J_5(3) = \frac{1}{12}, \quad J_5(4) = 0.$$
In the case of $(J_5(1), J_5(0)] = (\frac{6}{25}, \frac{1}{2}]$, we substitute $n = 5$ and $k = 1, 2, \ldots, 5$ in the equations of Corollary 1. Then
$$x_1^* = \frac{-12 + 50\alpha}{65}, \quad x_2^* = \frac{2 - 4\alpha}{13}, \quad x_3^* = \frac{3 - 6\alpha}{13}, \quad x_4^* = \frac{3 - 6\alpha}{13}, \quad x_5^* = \frac{2 - 4\alpha}{13}.$$
Thus the optimal solution of the problem is
$$w_1^* = \frac{-12 + 50\alpha}{65}, \quad w_2^* = \frac{-2 + 30\alpha}{65}, \quad w_3^* = \frac{1}{5}, \quad w_4^* = \frac{28 - 30\alpha}{65}, \quad w_5^* = \frac{38 - 50\alpha}{65}.$$
In the case of $(J_5(2), J_5(1)] = (\frac{13}{80}, \frac{6}{25}]$, we substitute $n = 5$ and $k = 2, \ldots, 5$ in Equation (38) of Corollary 2. Then
$$x_1^* = 0, \quad x_2^* = \frac{-26 + 160\alpha}{155}, \quad x_3^* = \frac{33 - 60\alpha}{155}, \quad x_4^* = \frac{57 - 160\alpha}{155}, \quad x_5^* = \frac{46 - 140\alpha}{155}.$$
Thus the optimal solution of the problem is
$$w_1^* = 0, \quad w_2^* = \frac{-26 + 160\alpha}{155}, \quad w_3^* = \frac{7 + 100\alpha}{155}, \quad w_4^* = \frac{64 - 60\alpha}{155}, \quad w_5^* = \frac{110 - 200\alpha}{155}.$$
Similarly, we can obtain the optimal solutions as linear functions of $\alpha$ on the intervals $(J_5(3), J_5(2)] = (\frac{1}{12}, \frac{13}{80}]$ and $(J_5(4), J_5(3)] = (0, \frac{1}{12}]$: on $(J_5(3), J_5(2)] = (\frac{1}{12}, \frac{13}{80}]$, the optimal solution is
$$w_1^* = 0, \quad w_2^* = 0, \quad w_3^* = \frac{-3 + 36\alpha}{19}, \quad w_4^* = \frac{6 + 4\alpha}{19}, \quad w_5^* = \frac{16 - 40\alpha}{19},$$
and on $(J_5(4), J_5(3)] = (0, \frac{1}{12}]$, the optimal solution is
$$w_1^* = 0, \quad w_2^* = 0, \quad w_3^* = 0, \quad w_4^* = 4\alpha, \quad w_5^* = 1 - 4\alpha.$$
In terms of Proposition 2, if the orness level $\alpha \in (\frac{1}{2}, 1)$, the optimal solution $\hat{W}^* = (\hat{w}_1^*, \ldots, \hat{w}_n^*)$ is the dual of the optimal solution $W^* = (w_1^*, \ldots, w_n^*)$ with $1 - \alpha \in (0, \frac{1}{2})$, that is, $\hat{w}_i^* = w_{n-i+1}^*$.
Table 1 shows the OWA operator weights determined by model (37) with $n = 5$ and $p = 2$ as a continuous piecewise linear function of $0 \le \alpha \le 1/2$.
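Table 1 can be spot-checked with the orness sketch from Section 2; for instance, at $\alpha = 0.2$ the third column gives $W^* = (0, 6/155, 27/155, 52/155, 70/155)$:

```python
W = [0, 6 / 155, 27 / 155, 52 / 155, 70 / 155]   # Table 1 column at alpha = 0.2
print(sum(W), orness(W))                          # 1.0 and 0.2, as required
```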
We next consider the same numerical example that Sang and Liu [17] presented in their illustration of the application of the least square deviation model for $n = 10$. Sang and Liu [17] determined the OWA operator weights satisfying the discrete degrees of orness $\alpha = 0, 0.1, \ldots, 0.9, 1$. In this example, by contrast, we determine the solution OWA operator weights $w_k^*$, $k = 1, 2, \ldots, 10$, as a function of $\alpha$ for all orness levels $0 \le \alpha \le 1$.
Example 2
([17]). Suppose that $p = 2$ and $n = 10$. Then, from Corollary 1 and Equation (39) of Corollary 2, we have
$$J_{10}(0) = \frac{1}{2}, \quad J_{10}(1) = \frac{62}{225}, \quad J_{10}(2) = \frac{98}{405}, \quad J_{10}(3) = \frac{5}{24}, \quad J_{10}(4) = \frac{11}{63},$$
$$J_{10}(5) = \frac{19}{135}, \quad J_{10}(6) = \frac{8}{75}, \quad J_{10}(7) = \frac{13}{180}, \quad J_{10}(8) = \frac{1}{27}, \quad J_{10}(9) = 0.$$
In the case of $(J_{10}(1), J_{10}(0)] = (\frac{62}{225}, \frac{1}{2}]$, we substitute $k = 1, 2, \ldots, 10$ in the equations of Corollary 1. Then
$$x_1^* = \frac{-62 + 225\alpha}{505}, \quad x_2^* = \frac{27 - 54\alpha}{1111}, \quad x_3^* = \frac{48 - 96\alpha}{1111}, \quad x_4^* = \frac{63 - 126\alpha}{1111}, \quad x_5^* = \frac{72 - 144\alpha}{1111},$$
$$x_6^* = \frac{75 - 150\alpha}{1111}, \quad x_7^* = \frac{72 - 144\alpha}{1111}, \quad x_8^* = \frac{63 - 126\alpha}{1111}, \quad x_9^* = \frac{48 - 96\alpha}{1111}, \quad x_{10}^* = \frac{27 - 54\alpha}{1111}.$$
Thus the optimal solution of the problem is
$$w_1^* = -\frac{62}{505} + \frac{45\alpha}{101}, \quad w_2^* = -\frac{547}{5555} + \frac{441\alpha}{1111}, \quad w_3^* = -\frac{307}{5555} + \frac{345\alpha}{1111}, \quad w_4^* = \frac{8}{5555} + \frac{219\alpha}{1111},$$
$$w_5^* = \frac{368}{5555} + \frac{75\alpha}{1111}, \quad w_6^* = \frac{743}{5555} - \frac{75\alpha}{1111}, \quad w_7^* = \frac{1103}{5555} - \frac{219\alpha}{1111}, \quad w_8^* = \frac{1418}{5555} - \frac{345\alpha}{1111},$$
$$w_9^* = \frac{1658}{5555} - \frac{441\alpha}{1111}, \quad w_{10}^* = \frac{163}{505} - \frac{45\alpha}{101}.$$
In the case of $(J_{10}(2), J_{10}(1)] = (\frac{98}{405}, \frac{62}{225}]$, we substitute $k = 2, \ldots, 10$ in Equation (38) of Corollary 2. Then
$$x_1^* = 0, \quad x_2^* = \frac{243\alpha}{748} - \frac{147}{1870}, \quad x_3^* = \frac{3\alpha}{22} - \frac{1}{55}, \quad x_4^* = -\frac{21\alpha}{1496} + \frac{329}{11220},$$
$$x_5^* = -\frac{189\alpha}{1496} + \frac{239}{3740}, \quad x_6^* = -\frac{75\alpha}{374} + \frac{16}{187}, \quad x_7^* = -\frac{177\alpha}{748} + \frac{529}{5610},$$
$$x_8^* = -\frac{351\alpha}{1496} + \frac{337}{3740}, \quad x_9^* = -\frac{291\alpha}{1496} + \frac{273}{3740}, \quad x_{10}^* = -\frac{87\alpha}{748} + \frac{241}{5610}.$$
Thus the optimal solution of the problem is
$$w_1^* = 0, \quad w_2^* = \frac{243\alpha}{748} - \frac{147}{1870}, \quad w_3^* = \frac{345\alpha}{748} - \frac{181}{1870}, \quad w_4^* = \frac{669\alpha}{1496} - \frac{757}{11220},$$
$$w_5^* = \frac{60\alpha}{187} - \frac{2}{561}, \quad w_6^* = \frac{45\alpha}{374} + \frac{46}{561}, \quad w_7^* = -\frac{87\alpha}{748} + \frac{989}{5610},$$
$$w_8^* = -\frac{525\alpha}{1496} + \frac{2989}{11220}, \quad w_9^* = -\frac{6\alpha}{11} + \frac{56}{165}, \quad w_{10}^* = -\frac{45\alpha}{68} + \frac{13}{34}.$$
Similarly, we can obtain the optimal solutions as linear functions of $\alpha$ on each of the intervals $(J_{10}(3), J_{10}(2)] = (\frac{5}{24}, \frac{98}{405}]$, $(J_{10}(4), J_{10}(3)] = (\frac{11}{63}, \frac{5}{24}]$, $(J_{10}(5), J_{10}(4)] = (\frac{19}{135}, \frac{11}{63}]$, $(J_{10}(6), J_{10}(5)] = (\frac{8}{75}, \frac{19}{135}]$, $(J_{10}(7), J_{10}(6)] = (\frac{13}{180}, \frac{8}{75}]$, $(J_{10}(8), J_{10}(7)] = (\frac{1}{27}, \frac{13}{180}]$ and $(J_{10}(9), J_{10}(8)] = (0, \frac{1}{27}]$.
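As a spot-check of the second case above (using the lsd_weights sketch from Section 3), take $\alpha = 13/50 = 0.26 \in (\frac{98}{405}, \frac{62}{225}]$; the printed $w_2^*$ should equal $243\alpha/748 - 147/1870$:

```python
from fractions import Fraction
a = Fraction(13, 50)
print(Fraction(243, 748) * a - Fraction(147, 1870))   # 219/37400
print(lsd_weights(10, a)[1])                          # same value
```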
Example 3.
In this example we consider a different type of the model (37), with $p = 3/2$ and $n = 10$:
$$\begin{aligned} \text{Minimize } \ & \sum_{i=1}^{9} |w_{i+1} - w_i|^{\frac{3}{2}} \\ \text{subject to } \ & \mathrm{orness}(W) = \sum_{i=1}^{10} \frac{10-i}{9}\, w_i = \alpha, \quad 0 \le \alpha \le 1, \\ & w_1 + \cdots + w_{10} = 1, \quad 0 \le w_i, \ i = 1, \ldots, 10. \end{aligned} \tag{40}$$
We determine the solution OWA operator weights $w_k^*$, $k = 1, 2, \ldots, 10$, as a function of $\alpha$ on $(J_{10}(1), 1/2]$. If $p = 3/2$, then $F(x) = x^{\frac{3}{2}}$ and $(F')^{-1}(x) = \frac{4}{9} x^2$. By Equation (20) of Theorem 1 with $F(x) = x^{\frac{3}{2}}$ and $(F')^{-1}(x) = \frac{4}{9} x^2$, we have
$$1 - 2\alpha = \sum_{k=1}^{n} \frac{(k-1)(n-k+1)}{n-1}\, (F')^{-1}\Big(c^* \frac{(k-1)(n-k+1)}{n-1}\Big) = \sum_{k=1}^{10} \frac{(k-1)(11-k)}{9} \cdot \frac{4}{9}\Big(c^* \frac{(k-1)(11-k)}{9}\Big)^2.$$
Since $c^* = \frac{27}{47630}\sqrt{71445 - 142890\alpha}$, we have, by Equation (18) of Theorem 1,
$$x_k^* = \frac{4}{9}\Big(c^* \frac{(k-1)(11-k)}{9}\Big)^2 = \frac{3}{23815}(1 - 2\alpha)(k-1)^2 (11-k)^2$$
for $k = 2, \ldots, 10$, and, by Equation (19) of Theorem 1,
$$x_1^* = \frac{1}{10}\Big(1 - \sum_{k=2}^{10} (11-k)\, x_k^*\Big) = -\frac{238}{2165} + \frac{909\alpha}{2165}.$$
Since $x_1^* = w_1^* > 0$,
$$J_{10}(1) = \frac{238}{909} < \alpha < \frac{1}{2}.$$
Thus the optimal solution of the problem (40) in the case of $(J_{10}(1), \frac{1}{2}] = (\frac{238}{909}, \frac{1}{2}]$ is
$$w_1^* = -\frac{238}{2165} + \frac{909\alpha}{2165}, \quad w_2^* = -\frac{475}{4763} + \frac{9513\alpha}{23815}, \quad w_3^* = -\frac{1607}{23815} + \frac{7977\alpha}{23815}, \quad w_4^* = -\frac{284}{23815} + \frac{5331\alpha}{23815},$$
$$w_5^* = \frac{1444}{23815} + \frac{375\alpha}{4763}, \quad w_6^* = \frac{3319}{23815} - \frac{375\alpha}{4763}, \quad w_7^* = \frac{5047}{23815} - \frac{5331\alpha}{23815}, \quad w_8^* = \frac{1274}{4763} - \frac{7977\alpha}{23815},$$
$$w_9^* = \frac{7138}{23815} - \frac{9513\alpha}{23815}, \quad w_{10}^* = \frac{671}{2165} - \frac{909\alpha}{2165}.$$
By a method similar to the proof of Corollary 2, we have
$$J_{10}(m) = \frac{(9-m)(4m^3 - 133m^2 + 1480m - 5516)}{126(m-11)(m^2 - 22m + 122)}, \quad m = 1, 2, \ldots, 9.$$
Since $0.2 \in (J_{10}(3), J_{10}(2)] = (0.198, 0.230]$, we have
$$a^* = -0.022, \quad b^* = 0.157,$$
and, from Equation (16) of Theorem 1,
$$x_k^* = \frac{4}{9}\big(a^*(10-k)(11-k) + b^*(11-k)\big)^2, \quad k = 3, 4, \ldots, 10,$$
that is,
$$x_1^* = x_2^* = 0, \quad x_3^* = 0.0001, \quad x_4^* = 0.012, \quad x_5^* = 0.033, \quad x_6^* = 0.051,$$
$$x_7^* = 0.058, \quad x_8^* = 0.050, \quad x_9^* = 0.032, \quad x_{10}^* = 0.011,$$
so that the optimal solution is
$$w_1^* = w_2^* = 0, \quad w_3^* = 0.0001, \quad w_4^* = 0.012, \quad w_5^* = 0.046, \quad w_6^* = 0.097,$$
$$w_7^* = 0.155, \quad w_8^* = 0.205, \quad w_9^* = 0.237, \quad w_{10}^* = 0.248.$$
Similarly, for $0.1 \in (J_{10}(7), J_{10}(6)] = (0.070, 0.103]$, we have
$$a^* = -0.099, \quad b^* = 0.385,$$
and, from Equation (16) of Theorem 1,
$$x_1^* = \cdots = x_6^* = 0, \quad x_7^* = 0.056, \quad x_8^* = 0.140, \quad x_9^* = 0.145, \quad x_{10}^* = 0.066,$$
so that the optimal solution is
$$w_1^* = \cdots = w_6^* = 0, \quad w_7^* = 0.056, \quad w_8^* = 0.196, \quad w_9^* = 0.341, \quad w_{10}^* = 0.407.$$
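The closed form on $(\frac{238}{909}, \frac{1}{2}]$ can be confirmed with the theorem1_case2 sketch from Section 3, since for $p = 3/2$ we have $(F')^{-1}(y) = (2y/3)^2$:

```python
w = theorem1_case2(10, 0.4, lambda y: (2 * y / 3) ** 2)
print(w[0])                          # ~ 0.05801
print((909 * 0.4 - 238) / 2165)      # same: w_1* = (909 alpha - 238)/2165
```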

5. Conclusions

This paper proposes a general least convex deviation model for obtaining OWA operator weights, with orness as its control parameter. This general model includes the least squares deviation (LSD) method of Wang et al. [1] as a special class. We completely solved this constrained optimization problem mathematically. Using this result, we also gave the solution of the LSD model suggested by Wang, Luo and Liu as a function of $n$ and $\alpha$. We considered the same numerical examples that Wang et al. [1] and Sang and Liu [17] presented, and gave the exact optimal solutions as functions of $n$ and $\alpha$.

Author Contributions

Conceptualization, D.H.H.; methodology, D.H.H.; formal analysis, D.H.H.; investigation, D.H.H. and S.H.; writing–review and editing, D.H.H. and S.H.; funding acquisition, D.H.H.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03027869).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y.M.; Luo, Y.; Liu, X. Two new models for determining OWA operator weights. Comput. Ind. Eng. 2007, 52, 203–209. [Google Scholar] [CrossRef]
  2. Yager, R.R. Ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  3. Amin, G.R.; Emrouznejad, A. An extended minimax disparity to determine the OWA operator weights. Comput. Ind. Eng. 2006, 50, 312–316. [Google Scholar] [CrossRef]
  4. Amin, G.R. Notes on properties of the OWA weights determination model. Comput. Ind. Eng. 2007, 52, 533–538. [Google Scholar] [CrossRef]
  5. Emrouznejad, A.; Amin, G.R. Improving minimax disparity model to determine the OWA operator weights. Inf. Sci. 2010, 180, 1477–1485. [Google Scholar] [CrossRef]
  6. Filev, D.; Yager, R.R. On the issue of obtaining OWA operator weights. Fuzzy Sets Syst. 1998, 94, 157–169. [Google Scholar] [CrossRef]
  7. Fullér, R.; Majlender, P. An analytic approach for obtaining maximal entropy OWA operators weights. Fuzzy Sets Syst. 2001, 124, 53–57. [Google Scholar] [CrossRef]
  8. Fullér, R.; Majlender, P. On obtaining minimal variability OWA operator weights. Fuzzy Sets Syst. 2003, 136, 203–215. [Google Scholar] [CrossRef]
  9. O’Hagan, M. Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic. In Proceedings of the 22nd Annual IEEE Asilomar Conference on Signals, Systems, Computers, Pacific Grove, CA, USA, 31 October–2 November 1988; pp. 681–689. [Google Scholar]
  10. Hong, D.H. A note on the minimal variability OWA operator weights. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2006, 14, 747–752. [Google Scholar] [CrossRef]
  11. Hong, D.H. On proving the extended minimax disparity OWA problem. Fuzzy Sets Syst. 2011, 168, 35–46. [Google Scholar] [CrossRef]
  12. Liu, X. The solution equivalence of minimax disparity and minimum variance problems for OWA operators. Int. J. Approx. Reason. 2007, 45, 68–81. [Google Scholar] [CrossRef]
  13. Liu, X.; Chen, L. On the properties of parametric geometric OWA operator. Int. J. Approx. Reason. 2004, 35, 163–178. [Google Scholar] [CrossRef]
  14. Liu, X. A general model of parameterized OWA aggregation with given orness level. Int. J. Approx. Reason. 2008, 48, 598–627. [Google Scholar] [CrossRef]
  15. Liu, X. Models to determine parameterized ordered weighted averaging operators using optimization criteria. Inf. Sci. 2012, 190, 27–55. [Google Scholar] [CrossRef]
  16. Majlender, P. OWA operators with maximal Rényi entropy. Fuzzy Sets Syst. 2005, 155, 340–360. [Google Scholar] [CrossRef]
  17. Sang, X.; Liu, X. An analytic approach to obtain the least square deviation OWA operator weights. Fuzzy Sets Syst. 2014, 240, 103–116. [Google Scholar] [CrossRef]
  18. Wang, Y.M.; Parkan, C. A minimax disparity approach for obtaining OWA operator weights. Inf. Sci. 2005, 175, 20–29. [Google Scholar] [CrossRef]
  19. Yager, R.R. Families of OWA operators. Fuzzy Sets Syst. 1993, 59, 125–148. [Google Scholar] [CrossRef]
  20. Yager, R.R.; Filev, D. Induced ordered weighted averaging operators. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 141–150. [Google Scholar] [CrossRef] [PubMed]
Table 1. The LSD solution OWA operator weights.

| $W$ | $0 \le \alpha \le \frac{1}{12}$ | $\frac{1}{12} < \alpha \le \frac{13}{80}$ | $\frac{13}{80} < \alpha \le \frac{6}{25}$ | $\frac{6}{25} < \alpha \le \frac{1}{2}$ |
|---|---|---|---|---|
| $w_1^*$ | $0$ | $0$ | $0$ | $\frac{-12 + 50\alpha}{65}$ |
| $w_2^*$ | $0$ | $0$ | $\frac{-26 + 160\alpha}{155}$ | $\frac{-2 + 30\alpha}{65}$ |
| $w_3^*$ | $0$ | $\frac{-3 + 36\alpha}{19}$ | $\frac{7 + 100\alpha}{155}$ | $\frac{1}{5}$ |
| $w_4^*$ | $4\alpha$ | $\frac{6 + 4\alpha}{19}$ | $\frac{64 - 60\alpha}{155}$ | $\frac{28 - 30\alpha}{65}$ |
| $w_5^*$ | $1 - 4\alpha$ | $\frac{16 - 40\alpha}{19}$ | $\frac{110 - 200\alpha}{155}$ | $\frac{38 - 50\alpha}{65}$ |
