Article

The General Model for Least Convex Disparity RIM Quantifier Problems

Department of Mathematics, Myongji University, Yongin Kyunggido 449-728, Korea
Mathematics 2019, 7(7), 576; https://doi.org/10.3390/math7070576
Submission received: 24 May 2019 / Revised: 24 June 2019 / Accepted: 26 June 2019 / Published: 28 June 2019
(This article belongs to the Special Issue Special Polynomials)

Abstract

Hong (Mathematics 2019, 7, 326) recently introduced the general least squares deviation (LSD) model for ordered weighted averaging (OWA) operator weights. In this paper, we propose the corresponding generalized least squares disparity model for regular increasing monotone (RIM) quantifier determination under a given orness level, and we solve this constrained optimization problem analytically. Using this result, we provide the complete solution of the least squares disparity RIM quantifier model as an illustrative example.

1. Introduction

One of the important topics in the theory of ordered weighted averaging (OWA) operators is the determination of the associated weights. Several authors have suggested methods for obtaining associated weights in many areas, such as decision-making, approximate reasoning, expert systems, data mining, fuzzy systems, and control [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]. Yager [15] proposed regular increasing monotone (RIM) quantifiers as a method for obtaining OWA weight vectors through fuzzy linguistic quantifiers. RIM quantifiers provide information aggregation procedures guided by verbally expressed concepts and a dimension-independent description of the desired aggregation. Liu [11] and Liu and Da [12] gave solutions to the maximum-entropy RIM quantifier model for the case in which the generating functions are differentiable. Taking a theoretical approach, Liu and Lou [19] proved the solution equivalence of the maximum-entropy and minimax ratio RIM quantifier problems, as well as the solution equivalence of the minimum-variance and minimax disparity RIM quantifier problems. Hong [20] provided proofs for the minimum variance and minimax disparity RIM quantifier problems, and Hong [21] provided generalized solutions to the maximum entropy and minimax ratio RIM quantifier problems. Liu [22] suggested a general RIM quantifier determination model, solved it analytically using the optimal control method, and proved the solution equivalence to the minimax problem for the RIM quantifier. However, Hong [23] recently proposed a modified version of the general RIM quantifier model and gave the correct formulation of Liu's result.
Hong and Han [10] recently provided the following general model for the least squares deviation (LSD) method as an alternative approach to determine the OWA operator weights:
$$\begin{aligned}
\text{Minimize}\quad & F(W) = \sum_{i=1}^{n-1} F\big(|w_{i+1} - w_i|\big)\\
\text{subject to}\quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha,\quad 0 \le \alpha \le 1,\\
& w_1 + \cdots + w_n = 1,\quad 0 \le w_i,\ i = 1, \ldots, n,
\end{aligned}$$
where $F$ is a strictly convex function on $[0,1]$ and $F'$ is continuous on $[0,1)$, such that $F'(0) = 0$.
In this paper, the corresponding generalized least convex disparity model for RIM quantifier determination under a given orness level is proposed and solved analytically. This paper is organized as follows: Section 2 presents the preliminaries, and Section 3 reviews some general models for the RIM quantifier problem. In Section 4, we propose the general least convex disparity model for the RIM quantifier problem and prove it mathematically for the case in which the generating functions are absolutely continuous and $F$ is a strictly convex function on $[0,\infty)$ such that $F'(0) = 0$. We also provide the least squares disparity (LSD) RIM quantifier model as an illustrative example.

2. Preliminaries

Yager [15] introduced a new aggregation technique based on OWA operators. An OWA operator of dimension $n$ is a mapping $F: \mathbb{R}^n \to \mathbb{R}$ that has an associated weight vector $W = (w_1, \ldots, w_n)^T$ with the properties $w_1 + \cdots + w_n = 1$, $0 \le w_i \le 1$, $i = 1, \ldots, n$, such that
$$F(a_1, \ldots, a_n) = \sum_{i=1}^{n} w_i b_i,$$
where $b_i$ is the $i$th largest element of the collection of aggregated objects $\{a_1, \ldots, a_n\}$.
A measure of "$\mathit{orness}$" associated with the weight vector $W$ of an OWA operator was introduced by Yager [15]:
$$\mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i.$$
This measure characterizes the degree to which the aggregation is like an $OR$ operation.
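As a quick numerical illustration (not part of the original paper), the orness of a discrete weight vector can be computed directly from this formula. The following Python sketch is illustrative only; the function name and the example weights are arbitrary choices.

```python
def orness(w):
    """Orness of an OWA weight vector w = (w_1, ..., w_n), following Yager's formula."""
    n = len(w)
    return sum((n - i) * w_i for i, w_i in enumerate(w, start=1)) / (n - 1)

# Equal weights (the arithmetic mean) give orness 0.5; putting all weight on the
# largest argument gives orness 1 (a pure OR-like aggregation).
print(orness([0.25, 0.25, 0.25, 0.25]))  # 0.5
print(orness([1.0, 0.0, 0.0, 0.0]))      # 1.0
```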
RIM quantifiers, as a method for obtaining OWA weight vectors through fuzzy linguistic quantifiers, were introduced by Yager [16]. Information aggregation procedures guided by verbally expressed concepts and a dimension-independent description of the desired aggregation can be provided by RIM quantifiers.
Definition 1
([16]). A fuzzy subset $Q$ on the real number line is called a RIM quantifier if $Q(0) = 0$, $Q(1) = 1$, and $Q(x) \ge Q(y)$ for $x > y$.
Definition 2.
For $f(t)$ on $[0,1]$ and a RIM quantifier $Q(x)$, $f(t)$ is called a generating function of $Q(x)$ if it satisfies
$$Q(x) = \int_0^x f(t)\,dt,$$
where $f(t) \ge 0$ and $\int_0^1 f(t)\,dt = 1$.
Yager extended the $\mathit{orness}$ measure of the OWA operator and defined the $\mathit{orness}$ of a RIM quantifier [16] as
$$\mathrm{orness}(Q) = \int_0^1 Q(x)\,dx = \int_0^1 (1 - t)\, f(t)\,dt.$$
Since the RIM quantifier can be regarded as a continuous form of the OWA operator with a generating function, the OWA optimization problem can be extended to the RIM quantifier case.
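To make the continuous analogue concrete, the sketch below (again illustrative, not taken from the paper) evaluates $\mathrm{orness}(Q) = \int_0^1 (1-t) f(t)\,dt$ numerically for a sample generating function; the choice $Q(x) = x^2$, with $f(t) = 2t$, is arbitrary.

```python
from scipy.integrate import quad

def rim_orness(f):
    """orness(Q) = integral_0^1 (1 - t) f(t) dt for a generating function f of Q."""
    value, _ = quad(lambda t: (1.0 - t) * f(t), 0.0, 1.0)
    return value

# Q(x) = x^2 has generating function f(t) = 2t; its orness is 1/3.
print(rim_orness(lambda t: 2.0 * t))  # ~0.3333
```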

3. The General Model for the Minimax RIM Quantifier Problem

The essential supremum and the essential infimum of f are defined as follows [24]:
$$\operatorname{ess\,sup} f = \inf\big\{\, t : |\{x \in [0,1] : f(x) > t\}| = 0 \,\big\},$$
$$\operatorname{ess\,inf} f = \sup\big\{\, t : |\{x \in [0,1] : f(x) < t\}| = 0 \,\big\},$$
where $|E|$ is the Lebesgue measure of the Lebesgue measurable set $E$.
A property is said to hold almost everywhere (a.e.) in $[0,1]$ if it holds in $[0,1]$ except on a subset of $[0,1]$ of Lebesgue measure zero.
Here, we introduce the known concept of absolute continuity to prove Theorem 1.
Definition 3
([24]). A function $f$ on a finite interval $[a,b]$ is said to be absolutely continuous on $[a,b]$ if, given $\epsilon > 0$, there exists $\delta > 0$ such that, for any collection $\{[a_i, b_i]\}_{i \in I}$ of non-overlapping subintervals of $[a,b]$,
$$\sum_i |f(b_i) - f(a_i)| < \epsilon \quad \text{whenever} \quad \sum_i (b_i - a_i) < \delta.$$
The following proposition is well known.
Proposition 1
([24]). A function $f$ on a finite interval $[a,b]$ is absolutely continuous on $[a,b]$ if, and only if, $f'(x)$ exists a.e. in $(a,b)$, $f'(x)$ is integrable, and
$$f(x) - f(a) = \int_a^x f'(t)\,dt \quad (a \le x \le b).$$
Wang and Parkan [13] suggested the following minimax disparity OWA problem:
$$\begin{aligned}
\text{Minimize}\quad & \max_{i \in \{1, \ldots, n-1\}} |w_i - w_{i+1}|\\
\text{subject to}\quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha,\quad 0 \le \alpha \le 1,\\
& w_1 + \cdots + w_n = 1,\quad 0 \le w_i,\ i = 1, \ldots, n.
\end{aligned}$$
The corresponding minimax disparity RIM quantifier problem [20] with a given orness level $0 < \alpha < 1$ consists of finding a generating function $f: [0,1] \to [0,\infty)$ that solves the following optimization problem:
$$\begin{aligned}
\text{Minimize}\quad & \operatorname*{ess\,sup}_{t \in (0,1)} |f'(t)|\\
\text{subject to}\quad & \int_0^1 (1-r)\, f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0,
\end{aligned}$$
where the generating functions are absolutely continuous.
The minimax ratio RIM quantifier problem [21] with a given orness level $0 < \alpha < 1$ consists of finding a generating function $f: [0,1] \to [0,\infty)$ that solves the following optimization problem:
$$\begin{aligned}
\text{Minimize}\quad & \operatorname*{ess\,sup}_{t \in (0,1)} \left|\frac{f'(t)}{f(t)}\right|\\
\text{subject to}\quad & \int_0^1 (1-r)\, f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0,
\end{aligned}$$
where the generating functions are absolutely continuous.
The general model for the minimax RIM quantifier problem [23] is as follows:
$$\begin{aligned}
\text{Minimize}\quad & M_f = \operatorname*{ess\,sup}_{r \in (0,1)} \big|F'(f(r))\, f'(r)\big|\\
\text{subject to}\quad & \int_0^1 r f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0,
\end{aligned}$$
where $F$ is a strictly convex function on $[0,\infty)$ that is differentiable to at least the second order, and the generating functions are absolutely continuous.

4. The General Model for the Least Convex Disparity RIM Quantifier Problem

Wang et al. [14] proposed the following least squares deviation (LSD) method as an alternative approach to determine the OWA operator weights:
$$\begin{aligned}
\text{Minimize}\quad & \sum_{i=1}^{n-1} (w_i - w_{i+1})^2\\
\text{subject to}\quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha,\quad 0 \le \alpha \le 1,\\
& w_1 + \cdots + w_n = 1,\quad 0 \le w_i,\ i = 1, \ldots, n.
\end{aligned}$$
The corresponding least squares disparity RIM quantifier problem under a given orness level can be formulated as
$$\begin{aligned}
\text{Minimize}\quad & D_f = \int_0^1 \big(f'(r)\big)^2\,dr\\
\text{subject to}\quad & \int_0^1 (1-r)\, f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0,
\end{aligned}$$
where the generating functions are absolutely continuous.
Hong and Han [10] recently proposed the general least convex deviation model with a given orness level, as follows:
$$\begin{aligned}
\text{Minimize}\quad & F(W) = \sum_{i=1}^{n-1} F\big(|w_{i+1} - w_i|\big)\\
\text{subject to}\quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i = \alpha,\quad 0 \le \alpha \le 1,\\
& w_1 + \cdots + w_n = 1,\quad 0 \le w_i,\ i = 1, \ldots, n,
\end{aligned}$$
where $F$ is a strictly convex function on $[0,1]$ and $F'$ is continuous on $[0,1)$, such that $F'(0) = 0$.
We now propose the general least convex disparity model (Model 3) for the RIM quantifier problem under a given orness level, as follows:
$$\begin{aligned}
\text{Minimize}\quad & V_F(f) = \int_0^1 F\big(|f'(r)|\big)\,dr\\
\text{subject to}\quad & \int_0^1 (1-r)\, f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0,
\end{aligned}$$
where $f$ is absolutely continuous and $F$ is a strictly convex function on $[0,\infty)$ such that $F'(0) = 0$.
The least squares disparity RIM quantifier problem is the special case of Model 3 in which $F(x) = x^2$. We now solve Model 3, the least convex disparity RIM quantifier problem.
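For a concrete feel of the objective (an illustrative sketch, not part of the paper), $V_F(f) = \int_0^1 F(|f'(r)|)\,dr$ can be evaluated numerically for any differentiable candidate generating function. Here $F(x) = x^2$ recovers the least squares disparity objective $D_f$, and the feasible choice $f(t) = 2t$ is arbitrary.

```python
from scipy.integrate import quad

def disparity(F, f_prime):
    """V_F(f) = integral_0^1 F(|f'(r)|) dr for a differentiable generating function f."""
    value, _ = quad(lambda r: F(abs(f_prime(r))), 0.0, 1.0)
    return value

# Least squares disparity (F(x) = x^2) of f(t) = 2t, whose orness is 1/3:
print(disparity(lambda x: x**2, lambda r: 2.0))  # 4.0
```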
Note 1. It is clear that the optimal solution for $\alpha = 1/2$ is $f^*(s) = 1$ a.e., with $V_{f^*} = 0$. Since $V_f = V_{f^R}$, $\int_0^1 f(r)\,dr = \int_0^1 f^R(r)\,dr$, and $\int_0^1 r f^R(r)\,dr = 1 - \int_0^1 r f(r)\,dr$, where $f^R(r) = f(1-r)$, if a weighting function $f^*(r)$ is optimal for Model 3 for some given level of preference $\alpha \in (0, 1/2)$, then $f^*(1-r)$ is optimal for Model 3 for the level of preference $1 - \alpha$. Therefore, we may assume that $\alpha \in (0, 1/2)$ without loss of generality. That is, for $\alpha > 1/2$, we can consider Model 3 for the level of preference $1 - \alpha$ and take the reverse of that optimal solution.
Lemma 1.
Let $f$ be a nonnegative, absolutely continuous function such that $\int_0^1 (1-r)\, f(r)\,dr = \alpha$ and $\int_0^1 f(r)\,dr = 1$. Then, for $\alpha \in (0, 1/2)$, there exists a nonnegative, nondecreasing continuous function $h$ such that $\int_0^1 (1-r)\, h(r)\,dr = \alpha$, $\int_0^1 h(r)\,dr = 1$, and $V_h \le V_f$.
Proof. 
Let $h_1(x) = \int_0^x \{f'(t)\}^+\,dt + f(0)$. Then,
$$f(x) = \int_0^x f'(t)\,dt + f(0) \le h_1(x).$$
Also, since $h_1(0) = f(0)$, $h_1(x) \ge f(x)$ and $h_1(x)$ is nondecreasing,
$$h_1'(x) = \{f'(x)\}^+, \qquad h_1(1) \ge 1,$$
and $|f'(x)| \ge h_1'(x)$. If $\int_0^1 h_1(x)\,dx = c \ge 1$ and $h_2(x) = (1/c)\, h_1(x)$, then $\int_0^1 x f(x)\,dx \le \int_0^1 x h_2(x)\,dx$. Hence, $\alpha = \int_0^1 (1-x)\, f(x)\,dx \ge \int_0^1 (1-x)\, h_2(x)\,dx$. Let $a \in (0,1)$ be such that $h_2(a) = 1$. If $h_3(x) = h_2(x+a) - 1$ on $[-a, 1-a]$, then for any $\epsilon \in [0,1]$,
$$\int_{-a}^{1-a} \epsilon\, h_3(x)\,dx = 0.$$
Let $h_\epsilon(x) = \epsilon\, h_3(x - a) + 1$, $x \in [0,1]$. Then $h_\epsilon(x) \ge 0$ is nondecreasing, and
$$\int_0^1 h_\epsilon(x)\,dx = 1.$$
We also note that if $\epsilon = 1$, then $h_\epsilon(x) = h_2(x)$, and if $\epsilon \to 0$, then
$$\int_0^1 (1-x)\, h_\epsilon(x)\,dx \to 1/2.$$
Hence, since $\int_0^1 (1-x)\, h_\epsilon(x)\,dx$ is continuous in $\epsilon$, for $\alpha \in (0, 1/2)$ there exists $0 < \epsilon \le 1$ such that
$$\int_0^1 (1-x)\, h_\epsilon(x)\,dx = \alpha.$$
Noting that $(h_\epsilon)'(x) = (\epsilon/c)\, h_1'(x)$ for $x \in [0,1]$, we have $(h_\epsilon)'(x) \le |f'(x)|$, so that $V_{h_\epsilon} \le V_f$. This completes the proof. □
Note 2. We note that
$$\alpha = \int_0^1 (1-r)\, f(r)\,dr = \int_0^1 (1-r) \int_0^r f'(s)\,ds\,dr + \frac{1}{2} f(0) = \int_0^1 \int_s^1 (1-r)\, f'(s)\,dr\,ds + \frac{1}{2} f(0) = \int_0^1 \frac{1}{2} (1-s)^2 f'(s)\,ds + \frac{1}{2} f(0)$$
and
$$1 = \int_0^1 f(r)\,dr = \int_0^1 \int_0^r f'(s)\,ds\,dr + f(0) = \int_0^1 \int_s^1 f'(s)\,dr\,ds + f(0) = \int_0^1 (1-s)\, f'(s)\,ds + f(0).$$
Hence, by Lemma 1, Model 3 is equivalent to the following model (Model 4):
$$\begin{aligned}
\text{Minimize}\quad & V_F(f) = \int_0^1 F\big(f'(s)\big)\,ds\\
\text{subject to}\quad & \int_0^1 \frac{1}{2} (1-s)^2 f'(s)\,ds + \frac{1}{2} f(0) = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 (1-s)\, f'(s)\,ds + f(0) = 1,\quad f'(s) \ge 0,
\end{aligned}$$
where $f$ is absolutely continuous and $F$ is a strictly convex function on $[0,\infty)$ such that $F'(0) = 0$.
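As a quick numerical sanity check of the identities in Note 2 (illustrative only, not from the paper), both sides can be compared for a concrete absolutely continuous generating function; $f(t) = 2t$ is again an arbitrary choice.

```python
from scipy.integrate import quad

f = lambda t: 2.0 * t        # an arbitrary generating function with f(0) = 0
f_prime = lambda t: 2.0      # its derivative

alpha = quad(lambda r: (1.0 - r) * f(r), 0, 1)[0]   # orness of f, equals 1/3 here
lhs1 = quad(lambda s: 0.5 * (1.0 - s) ** 2 * f_prime(s), 0, 1)[0] + 0.5 * f(0.0)
lhs2 = quad(lambda s: (1.0 - s) * f_prime(s), 0, 1)[0] + f(0.0)

print(abs(lhs1 - alpha) < 1e-10, abs(lhs2 - 1.0) < 1e-10)  # True True
```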
We now solve Model 4, which is equivalent to Model 3.
Theorem 1.
Suppose that $F$ is a strictly convex function on $[0,\infty)$ and $F'$ is continuous with $F'(0) = 0$. Then, the optimal solution of Model 4 for $\alpha \in (0, 1/2)$ is as follows:
In the case of $f^*(0) = 0$, the solution has the form $f^*(s) = \int_0^s g^*(1-r)\,dr$ with
$$g^*(r) = \begin{cases} (F')^{-1}\!\left(\frac{1}{2} r^2 a^* + r b^*\right) & \text{a.e. if } (F')^{-1}\!\left(\frac{1}{2} r^2 a^* + r b^*\right) > 0,\\[4pt] 0 & \text{a.e. elsewhere,} \end{cases}$$
and $a^*/2 + b^* \le 0$.
In the case of $f^*(0) > 0$, the solution has the form $f^*(s) = \int_0^s g^*(1-r)\,dr + f^*(0)$, where $f^*(0) = 1 - \int_0^1 r g^*(r)\,dr$, with
$$g^*(r) = (F')^{-1}\!\big(c^* r (1-r)\big) \quad \text{a.e.},$$
where $c^*$ is determined by the constraint
$$1 - 2\alpha = \int_0^1 r(1-r)\, g^*(r)\,dr.$$
Proof. 
Model 4 is equivalent to the following model (Model 5), obtained by taking $f'(1-r) = g(r)$:
$$\begin{aligned}
\text{Minimize}\quad & V_g = \int_0^1 F\big(g(r)\big)\,dr\\
\text{subject to}\quad & \int_0^1 \frac{1}{2} r^2 g(r)\,dr = \alpha - \frac{1}{2} f(0),\\
& \int_0^1 r g(r)\,dr = 1 - f(0),\quad g(r) \ge 0.
\end{aligned}$$
Case 1) $f^*(0) = 0$. Let $g^*$ be the function such that
$$g^*(r) = \begin{cases} (F')^{-1}\!\left(\frac{1}{2} r^2 a^* + r b^*\right) & \text{a.e. if } (F')^{-1}\!\left(\frac{1}{2} r^2 a^* + r b^*\right) > 0,\\[4pt] 0 & \text{a.e. elsewhere,} \end{cases}$$
where $a^*$, $b^*$ are determined by the constraints:
$$\int_0^1 \frac{1}{2} r^2 g^*(r)\,dr = \alpha, \qquad \int_0^1 r g^*(r)\,dr = 1, \qquad g^*(r) \ge 0.$$
Also, let $g$ be a function satisfying the constraints:
$$\int_0^1 \frac{1}{2} r^2 g(r)\,dr = \alpha - \frac{1}{2} f(0), \qquad \int_0^1 r g(r)\,dr = 1 - f(0), \qquad g(r) \ge 0.$$
Assume that $\{r : g^*(r) > 0\} = A$ and $\{r : g^*(r) = 0\} = B$. We also note that
$$F'\big(g^*(r)\big) = \begin{cases} \frac{1}{2} r^2 a^* + r b^* & \text{a.e. if } r \in A,\\[4pt] 0 & \text{a.e. elsewhere,} \end{cases}$$
and $\frac{1}{2} r^2 a^* + r b^* \le 0$ for $r \in B$, since $F'(0) = 0$ and $F'(x)$ is a strictly increasing function. We put $g(r) = g^*(r) + h(r)$, $r \in [0,1]$. Then, noting that $g(r) = h(r)$ for $r \in B$, we have
$$\int_A r h(r)\,dr + \int_B r g(r)\,dr = \int_0^1 r h(r)\,dr = -f(0),$$
since
$$1 - f(0) = \int_0^1 r g(r)\,dr = \int_0^1 r g^*(r)\,dr + \int_0^1 r h(r)\,dr = 1 + \int_0^1 r h(r)\,dr.$$
We also have
$$\int_A \frac{1}{2} r^2 h(r)\,dr + \int_B \frac{1}{2} r^2 g(r)\,dr = \int_0^1 \frac{1}{2} r^2 h(r)\,dr = -\frac{1}{2} f(0),$$
since
$$\alpha - \frac{1}{2} f(0) = \int_0^1 \frac{1}{2} r^2 g(r)\,dr = \int_0^1 \frac{1}{2} r^2 g^*(r)\,dr + \int_0^1 \frac{1}{2} r^2 h(r)\,dr = \alpha + \int_0^1 \frac{1}{2} r^2 h(r)\,dr.$$
We now show that
$$\int_0^1 F\big(g(r)\big)\,dr \ge \int_0^1 F\big(g^*(r)\big)\,dr.$$
Since $F(x) - F(x_0) \ge F'(x_0)(x - x_0)$ (with equality if, and only if, $x = x_0$), we have
$$\begin{aligned}
\int_0^1 F\big(g(r)\big)\,dr - \int_0^1 F\big(g^*(r)\big)\,dr
&= \int_0^1 F\big(g^*(r) + h(r)\big)\,dr - \int_0^1 F\big(g^*(r)\big)\,dr\\
&\ge \int_0^1 F'\big(g^*(r)\big)\, h(r)\,dr\\
&= \int_A \left(\frac{1}{2} r^2 a^* + r b^*\right) h(r)\,dr\\
&= a^* \int_A \frac{1}{2} r^2 h(r)\,dr + b^* \int_A r h(r)\,dr\\
&= a^* \left(-\int_B \frac{1}{2} r^2 g(r)\,dr - \frac{1}{2} f(0)\right) + b^* \left(-\int_B r g(r)\,dr - f(0)\right)\\
&= -\int_B \left(\frac{1}{2} r^2 a^* + r b^*\right) g(r)\,dr - f(0)\left(\frac{a^*}{2} + b^*\right)\\
&\ge 0,
\end{aligned}$$
where the second equality comes from the expression for $F'(g^*(r))$ above, the fourth equality comes from the two identities just established, and the second inequality comes from the fact that $\frac{1}{2} r^2 a^* + r b^* \le 0$ for $r \in B$ and $a^*/2 + b^* \le 0$. In particular, both inequalities hold with equality whenever $f = f^*$ a.e. This completes Case 1 of the proof.
Case 2) $f^*(0) > 0$. Let $g^*$ be the function such that
$$g^*(r) = (F')^{-1}\!\big(c^* r (1-r)\big) \quad \text{a.e.},$$
where $c^*$ is determined by the constraint
$$1 - 2\alpha = \int_0^1 r(1-r)\, g^*(r)\,dr.$$
Then,
$$F'\big(g^*(r)\big) = c^* r (1-r) \quad \text{a.e.}$$
We note that
$$1 - 2\alpha = \int_0^1 r(1-r)\, g^*(r)\,dr = \int_0^1 r g^*(r)\,dr - 2 \int_0^1 \frac{1}{2} r^2 g^*(r)\,dr.$$
Since $(f^*)'(1-r) = g^*(r)$ and $\int_0^1 f^*(r)\,dr = 1$, we have $f^*(s) = \int_0^s g^*(1-r)\,dr + c$ for some $c > 0$, where $c = 1 - \int_0^1 \int_0^t g^*(1-r)\,dr\,dt$. Hence, $f^*(0) = 1 - \int_0^1 \int_0^t g^*(1-r)\,dr\,dt$. We also note that
$$\int_0^1 \int_0^t g^*(1-r)\,dr\,dt = \int_0^1 \int_r^1 g^*(1-r)\,dt\,dr = \int_0^1 (1-r)\, g^*(1-r)\,dr = \int_0^1 r g^*(r)\,dr.$$
Since $\int_0^1 r g^*(r)\,dr = 1 - f^*(0)$, the identity above gives $\int_0^1 \frac{1}{2} r^2 g^*(r)\,dr = \alpha - \frac{1}{2} f^*(0)$. Hence, $g^*$ satisfies the constraints of Model 5. Now, let $g$ be a function satisfying the constraints:
$$\int_0^1 \frac{1}{2} r^2 g(r)\,dr = \alpha - \frac{1}{2} f(0), \qquad \int_0^1 r g(r)\,dr = 1 - f(0), \qquad g(r) \ge 0.$$
Then, from these constraints,
$$1 - 2\alpha = \int_0^1 r(1-r)\, g(r)\,dr.$$
We put $g(r) = g^*(r) + h(r)$, $r \in [0,1]$. Then, combining the two preceding identities,
$$0 = \int_0^1 r(1-r)\, h(r)\,dr.$$
We now show that
$$\int_0^1 F\big(g(r)\big)\,dr \ge \int_0^1 F\big(g^*(r)\big)\,dr.$$
Since $F(x) - F(x_0) \ge F'(x_0)(x - x_0)$ (with equality if, and only if, $x = x_0$), we have
$$\begin{aligned}
\int_0^1 F\big(g(r)\big)\,dr - \int_0^1 F\big(g^*(r)\big)\,dr
&= \int_0^1 F\big(g^*(r) + h(r)\big)\,dr - \int_0^1 F\big(g^*(r)\big)\,dr\\
&\ge \int_0^1 F'\big(g^*(r)\big)\, h(r)\,dr\\
&= \int_0^1 c^* r(1-r)\, h(r)\,dr = 0,
\end{aligned}$$
where the second equality uses $F'(g^*(r)) = c^* r(1-r)$ and the last equality uses $\int_0^1 r(1-r)\, h(r)\,dr = 0$. This completes the proof. □
Remark 1.
Theorem 1 does not remain valid if $f(x)$ is only assumed to be continuous rather than absolutely continuous. To show this, we consider the following example.
Example 1.
If $F(x) = \frac{1}{2} x^2$, then $F'(x) = x$ and $F''(x) = 1$. Let $C(x)$ be the Cantor function, that is, a nondecreasing continuous function with $C'(x) = 0$ a.e. (see [24]). Let $f^*(x) = 2C(x)$; then $\int_0^1 f^*(r)\,dr = 1$, since $\int_0^1 C(r)\,dr = 1/2$, and let $\int_0^1 (1-r)\, f^*(r)\,dr = \alpha_0$. Then, since $(f^*)'(r) = 0$ a.e., we have
$$V_F(f^*) = \int_0^1 F\big(|(f^*)'(r)|\big)\,dr = 0.$$
Hence, $f^*(x) = 2C(x)$ is an optimal solution of Model 3 under the given orness level $\alpha_0$. This example shows that Theorem 1 is incorrect if we do not assume that $f(x)$ is absolutely continuous. Indeed, $f^*(x) = 2C(x)$ differs from the optimal solution given in Theorem 1.

5. Numerical Example

If $F(x) = \frac{1}{2} x^2$, then $F'(x) = x = (F')^{-1}(x)$. Hence, Model 3 becomes the least squares deviation (LSD) RIM quantifier problem under a given orness level:
$$\begin{aligned}
\text{Minimize}\quad & D_f = \int_0^1 \big(f'(r)\big)^2\,dr\\
\text{subject to}\quad & \int_0^1 (1-r)\, f(r)\,dr = \alpha,\quad 0 < \alpha < 1,\\
& \int_0^1 f(r)\,dr = 1,\quad f(r) \ge 0.
\end{aligned}$$
The optimal solution of this model can be derived using Theorem 1, as follows:
Case 1) $f^*(0) = 0 \iff 0 < \alpha \le 0.3$:
$$\int_0^{-2b^*/a^*} \left(\frac{1}{4} r^4 a^* + \frac{1}{2} r^3 b^*\right) dr = \alpha, \qquad \int_0^{-2b^*/a^*} \left(\frac{1}{2} r^3 a^* + r^2 b^*\right) dr = 1$$
imply that
$$a^* = -\frac{243}{1250\,\alpha^4}, \qquad b^* = \frac{81}{250\,\alpha^3}.$$
The condition $a^*/2 + b^* \le 0$ implies $0 < \alpha \le \frac{243}{810} = 0.3$. Hence, by Theorem 1, the optimal solution of the LSD RIM quantifier model for $\alpha \in (0, 0.3]$ is
$$g^*(r) = \max\left(-\frac{243}{2500\,\alpha^4}\, r^2 + \frac{81}{250\,\alpha^3}\, r,\ 0\right),$$
$$f^*(s) = \int_0^s g^*(1-r)\,dr =
\begin{cases}
0, & \text{if } s \in \left[0,\ 1 - \frac{10\alpha}{3}\right],\\[6pt]
\dfrac{3\left(-27 s^3 + (81 - 135\alpha) s^2 + (-81 + 270\alpha) s + 500\alpha^3 - 135\alpha + 27\right)}{2500\,\alpha^4}, & \text{if } s \in \left[1 - \frac{10\alpha}{3},\ 1\right].
\end{cases}$$
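The closed-form solution above can be checked numerically; the sketch below is purely illustrative (it is not part of the paper), and the value $\alpha = 0.25$ is an arbitrary choice in $(0, 0.3]$. It verifies that $g^*$ satisfies the two constraints of Model 5 with $f^*(0) = 0$ and that the recovered $f^*$ has the prescribed orness.

```python
from scipy.integrate import quad

alpha = 0.25  # any value in (0, 0.3]

a_star = -243.0 / (1250.0 * alpha ** 4)
b_star = 81.0 / (250.0 * alpha ** 3)
g_star = lambda r: max(0.5 * a_star * r ** 2 + b_star * r, 0.0)

# Constraints of the transformed problem (Case 1, f*(0) = 0):
print(quad(lambda r: 0.5 * r ** 2 * g_star(r), 0, 1)[0])  # ~ alpha
print(quad(lambda r: r * g_star(r), 0, 1)[0])             # ~ 1

# Recover f*(s) = int_0^s g*(1 - r) dr and check its orness:
f_star = lambda s: quad(lambda r: g_star(1.0 - r), 0, s)[0]
print(quad(lambda s: (1.0 - s) * f_star(s), 0, 1)[0])     # ~ alpha
```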
Case 2) $f^*(0) > 0 \iff 0.3 < \alpha < 0.5$:
$$1 - 2\alpha = \int_0^1 r(1-r)\, g^*(r)\,dr = \int_0^1 r(1-r) \cdot c^* r(1-r)\,dr$$
implies that
$$c^* = 30 - 60\alpha \qquad \text{and} \qquad f^*(0) = -\frac{3}{2} + 5\alpha.$$
The condition $f^*(0) > 0$ implies $0.3 < \alpha < 0.5$. Hence, by Theorem 1, the optimal solution of the LSD RIM quantifier model for $\alpha \in (0.3, 0.5)$ is
$$g^*(r) = (30 - 60\alpha)\, r(1-r), \quad r \in [0,1],$$
$$f^*(s) = \int_0^s g^*(1-r)\,dr + f^*(0) = (-10 + 20\alpha)\, s^3 + (15 - 30\alpha)\, s^2 - \frac{3}{2} + 5\alpha, \quad s \in [0,1].$$
We also note that $f_\alpha^*(s) = 1$ for $\alpha = 0.5$, and $f_\alpha^*(s) = f_{1-\alpha}^*(1-s)$ for $\alpha \in (0.5, 1)$.
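A similar check applies to Case 2 (again an illustrative sketch, with $\alpha = 0.4$ chosen arbitrarily in $(0.3, 0.5)$): the closed-form $f^*$ should integrate to 1, have the prescribed orness, and take the stated positive value at $0$.

```python
from scipy.integrate import quad

alpha = 0.4  # any value in (0.3, 0.5)

f_star = lambda s: (-10 + 20 * alpha) * s ** 3 + (15 - 30 * alpha) * s ** 2 - 1.5 + 5 * alpha

print(f_star(0.0))                                      # f*(0) = 5*alpha - 3/2 > 0
print(quad(f_star, 0, 1)[0])                            # ~ 1
print(quad(lambda s: (1.0 - s) * f_star(s), 0, 1)[0])   # ~ alpha
```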

6. Conclusions

This paper proposed a general least convex disparity model for the RIM quantifier determination problem and solved this constrained optimization problem completely and analytically. Using this result, we provided the solution of the least squares deviation (LSD) RIM quantifier model as an illustrative example.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03027869).

Conflicts of Interest

The author declares no conflict of interest.

References

1. Amin, G.R.; Emrouznejad, A. An extended minimax disparity to determine the OWA operator weights. Comput. Ind. Eng. 2006, 50, 312–316.
2. Amin, G.R. Notes on properties of the OWA weights determination model. Comput. Ind. Eng. 2007, 52, 533–538.
3. Emrouznejad, A.; Amin, G.R. Improving minimax disparity model to determine the OWA operator weights. Inf. Sci. 2010, 180, 1477–1485.
4. Filev, D.; Yager, R.R. On the issue of obtaining OWA operator weights. Fuzzy Sets Syst. 1998, 94, 157–169.
5. Fullér, R.; Majlender, P. An analytic approach for obtaining maximal entropy OWA operator weights. Fuzzy Sets Syst. 2001, 124, 53–57.
6. Fullér, R.; Majlender, P. On obtaining minimal variability OWA operator weights. Fuzzy Sets Syst. 2003, 136, 203–215.
7. O'Hagan, M. Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic. In Proceedings of the Twenty-Second Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 31 October–2 November 1988.
8. Hong, D.H. A note on the minimal variability OWA operator weights. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2006, 14, 747–752.
9. Hong, D.H. On proving the extended minimax disparity OWA problem. Fuzzy Sets Syst. 2011, 168, 35–46.
10. Hong, D.H.; Han, S. The general least square deviation OWA operator problem. Mathematics 2019, 7, 326.
11. Liu, X. On the maximum entropy parameterized interval approximation of fuzzy numbers. Fuzzy Sets Syst. 2006, 157, 869–878.
12. Liu, X.; Da, Q. On the properties of regular increasing monotone (RIM) quantifiers with maximum entropy. Int. J. Gen. Syst. 2008, 37, 167–179.
13. Wang, Y.M.; Parkan, C. A minimax disparity approach for obtaining OWA operator weights. Inf. Sci. 2005, 175, 20–29.
14. Wang, Y.M.; Luo, Y.; Liu, X. Two new models for determining OWA operator weights. Comput. Ind. Eng. 2007, 52, 203–209.
15. Yager, R.R. Ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190.
16. Yager, R.R. OWA aggregation over a continuous interval argument with application to decision making. IEEE Trans. Syst. Man Cybern. Part B 2004, 34, 1952–1963.
17. Yager, R.R. Families of OWA operators. Fuzzy Sets Syst. 1993, 59, 125–148.
18. Yager, R.R.; Filev, D. Induced ordered weighted averaging operators. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1999, 29, 141–150.
19. Liu, X.; Lou, H. On the equivalence of some approaches to the OWA operator and RIM quantifier determination. Fuzzy Sets Syst. 2007, 159, 1673–1688.
20. Hong, D.H. The relationship between the minimum variance and minimax disparity RIM quantifier problems. Fuzzy Sets Syst. 2011, 181, 50–57.
21. Hong, D.H. The relationship between the maximum entropy and minimax ratio RIM quantifier problems. Fuzzy Sets Syst. 2012, 202, 110–117.
22. Liu, X. A general model of parameterized OWA aggregation with given orness level. Int. J. Approx. Reason. 2008, 48, 598–627.
23. Hong, D.H. A note on solution equivalence to general models for RIM quantifier problems. Fuzzy Sets Syst. 2018, 332, 25–28.
24. Wheeden, R.L.; Zygmund, A. Measure and Integral: An Introduction to Real Analysis; Marcel Dekker: New York, NY, USA, 1977.
