Article

On Optimal and Asymptotic Properties of a Fuzzy L2 Estimator

by
Jin Hee Yoon
1,* and
Przemyslaw Grzegorzewski
2,3
1
Department of Mathematics and Statistics, Sejong University, Seoul 05006, Korea
2
Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland
3
Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 1956; https://doi.org/10.3390/math8111956
Submission received: 31 August 2020 / Revised: 14 October 2020 / Accepted: 24 October 2020 / Published: 4 November 2020

Abstract

A fuzzy least squares estimator in the multiple fuzzy-input–fuzzy-output linear regression model is considered. The paper provides a formula for the $L_2$ estimator of this fuzzy regression model. Several operations for fuzzy numbers and fuzzy matrices with fuzzy components are proposed, and some algebraic properties needed to prove the main theorems are discussed. Using the proposed operations, a formula for the variance of the estimator is provided, and it is proved that the estimator enjoys several important optimal and asymptotic properties: it is the best linear unbiased estimator (BLUE), asymptotically normal and strongly consistent. The confidence regions of the coefficient parameters and the asymptotic relative efficiency (ARE) are also discussed. In addition, several examples are provided, including a Monte Carlo simulation study showing the validity of the proposed theorems.

1. Introduction

Regression analysis is commonly perceived as one of the most useful tools of statistical modeling. If the data can be observed precisely, classical regression usually appears to be a sufficient solution. However, we encounter many situations where the observations cannot be obtained precisely. In such cases we need a framework to handle uncertainty coming from two sources: randomness and imprecision. While randomness can be satisfactorily managed by probability theory, one has to adopt a suitable approach for modeling imprecise data. Applying fuzzy sets, proposed by Zadeh [1], Tanaka et al. [2] introduced fuzzy regression analysis. On the other hand, Diamond [3] generalized the main technique of regression analysis, i.e., the least squares method, to fuzzy numbers. Subsequently, numerous researchers considered different fuzzy regression models, known as crisp-input–fuzzy-output (CIFO) or fuzzy-input–fuzzy-output (FIFO), both with crisp or fuzzy parameters (see, e.g., [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]). Far fewer efforts have been dedicated to exploring statistical properties of the estimators in fuzzy regression models. For situations where the data have an assumed error structure, Diamond [25] and Näther [26,27,28,29] discussed fuzzy best linear unbiased estimators (FBLUEs). Kim et al. [16] proved the asymptotic properties of fuzzy least squares estimators (FLSEs) for a fuzzy simple linear regression model. Due to the complexity of the mathematical formulas describing fuzzy least squares estimators, some authors use $\alpha$-cuts of fuzzy numbers to express the estimates (e.g., [21,30]), while others separate the estimators into parts, e.g., corresponding to the mode and the two spreads in the case of triangular fuzzy numbers (see, e.g., [6,26,31,32]). Moreover, many authors do not give any analytic formulas for the desired estimators but determine the estimates from the normal equations directly (see [3,19]).
To overcome these problems, Yoon et al. [33,34,35] redefined the mathematical model of fuzzy linear regression using the so-called triangular fuzzy matrix and suitable operations defined on both triangular fuzzy numbers and triangular fuzzy matrices. Even though this paper deals with triangular fuzzy numbers, the approach can be extended to the general case [36]. Moreover, the importance of triangular and trapezoidal fuzzy numbers has been emphasized in [37]. This approach enables us to determine the fuzzy least squares estimator of the regression parameters in a concise form which is also useful for exploring the statistical properties of the estimator. The asymptotic theory for the fuzzy multiple regression model has hardly been discussed in the literature so far. In this contribution we continue the examination of the fuzzy least squares estimator obtained there, focusing on its fundamental finite-sample and asymptotic properties.
The paper is organized as follows: in Section 2, we introduce basic notation related to triangular fuzzy numbers and triangular fuzzy matrices, together with operations defined on them which will be used later in the contribution. Some algebraic properties needed to prove the theorems are also discussed there. In Section 3, we describe the fuzzy linear regression model based on the authors' previous studies [33,35]. Next, in Section 4 we prove that the fuzzy least squares estimator presented in the previous section is the best linear unbiased estimator (BLUE). Then we present the next important results of the paper, showing that under some general assumptions the aforementioned estimator is asymptotically normal (Section 5) and strongly consistent (Section 6). In addition, the confidence regions of the coefficient parameters and the asymptotic relative efficiency (ARE) are discussed in Section 6. Several examples are provided, including a Monte Carlo simulation study. Section 7 concludes the paper.

2. Preliminaries

Each triangular fuzzy number $A$ can be represented by an ordered triple $A = (l_a, a, r_a)$, where $a$ is the mode of $A$, while $l_a$ and $r_a$ denote the lower and the upper bound of the support of $A$, respectively. Further on, let $\mathbb{F}_T$ denote the family of all triangular fuzzy numbers defined on the real line $\mathbb{R}$.
Besides the well-known basic operations on fuzzy numbers, like the addition
$$X \oplus Y = (l_x + l_y,\; x + y,\; r_x + r_y), \qquad (1)$$
where $X = (l_x, x, r_x), Y = (l_y, y, r_y) \in \mathbb{F}_T$, and the scalar multiplication
$$kX = \begin{cases} (k l_x,\; k x,\; k r_x) & \text{if } k \geq 0, \\ (k r_x,\; k x,\; k l_x) & \text{if } k < 0, \end{cases} \qquad (2)$$
some other operations defined on $\mathbb{F}_T$ are sometimes useful. Let us recall here two concepts proposed in [33]:
$$X \odot Y = l_x l_y + x y + r_x r_y, \qquad (3)$$
$$X \otimes Y = (l_x l_y,\; x y,\; r_x r_y). \qquad (4)$$
Clearly, $X \oplus Y, X \otimes Y \in \mathbb{F}_T$, but the output of (3) is a crisp number (i.e., it is isomorphic with the corresponding real value).
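To make the arithmetic concrete, here is a minimal Python sketch of operations (1)–(4), representing a triangular fuzzy number as a plain (l, m, r) tuple; the function names add, scal, dot and times are illustrative choices, not taken from the paper.

```python
# A minimal sketch of the triangular-fuzzy-number operations (1)-(4);
# a fuzzy number X = (l_x, x, r_x) is stored as a plain tuple.

def add(X, Y):                      # X (+) Y, Equation (1)
    return tuple(a + b for a, b in zip(X, Y))

def scal(k, X):                     # kX, Equation (2): swaps bounds if k < 0
    lx, x, rx = X
    return (k * lx, k * x, k * rx) if k >= 0 else (k * rx, k * x, k * lx)

def dot(X, Y):                      # X (.) Y, Equation (3): crisp output
    return sum(a * b for a, b in zip(X, Y))

def times(X, Y):                    # X (x) Y, Equation (4): fuzzy output
    return tuple(a * b for a, b in zip(X, Y))

X, Y = (1.0, 2.0, 3.0), (0.5, 1.0, 1.5)
print(add(X, Y))    # (1.5, 3.0, 4.5)
print(scal(-2, X))  # (-6.0, -4.0, -2.0)
print(dot(X, Y))    # 0.5 + 2.0 + 4.5 = 7.0
print(times(X, Y))  # (0.5, 2.0, 4.5)
```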
Further on, we will also need some operations defined on matrices of fuzzy numbers. We denote by $\mathbb{M}_{\mathbb{R}^*}$ the set of all $n \times n$ real crisp matrices with nonnegative elements and by $\mathbb{M}_{\mathbb{F}_T}$ the set of all triangular fuzzy matrices (t.f.m.), i.e., matrices whose elements belong to $\mathbb{F}_T$. For any two $n \times n$ triangular fuzzy matrices $\tilde{\Gamma} = [X_{ij}]$ and $\tilde{\Lambda} = [Y_{ij}]$, a crisp matrix $\tilde{A} = [a_{ij}] \in \mathbb{M}_{\mathbb{R}^*}$ and a constant $k > 0$ we have
$$\tilde{\Gamma} \oplus \tilde{\Lambda} = [X_{ij} \oplus Y_{ij}], \qquad (5)$$
$$\tilde{\Gamma} \odot \tilde{\Lambda} = \Big[\sum_{k=1}^n X_{ik} \odot Y_{kj}\Big], \qquad (6)$$
$$\tilde{\Gamma} \otimes \tilde{\Lambda} = \Big[\bigoplus_{k=1}^n X_{ik} \otimes Y_{kj}\Big], \qquad (7)$$
$$\tilde{A}\tilde{\Gamma} = \Big[\bigoplus_{k=1}^n a_{ik} X_{kj}\Big], \qquad (8)$$
$$k\tilde{\Gamma} = [k X_{ij}]. \qquad (9)$$
Of course, $\tilde{\Gamma} \oplus \tilde{\Lambda},\ \tilde{\Gamma} \otimes \tilde{\Lambda},\ \tilde{A}\tilde{\Gamma} \in \mathbb{M}_{\mathbb{F}_T}$, while $\tilde{\Gamma} \odot \tilde{\Lambda} \in \mathbb{M}_{\mathbb{R}^*}$.
We can also define the following three types of scalar multiplication by a fuzzy number:
$$X \tilde{A} = [a_{ij} X], \qquad (10)$$
$$X \odot \tilde{\Gamma} = [X \odot X_{ij}], \qquad (11)$$
$$X \otimes \tilde{\Gamma} = [X \otimes X_{ij}], \qquad (12)$$
where $X \in \mathbb{F}_T$, $\tilde{A} = [a_{ij}] \in \mathbb{M}_{\mathbb{R}^*}$ and $\tilde{\Gamma} = [X_{ij}] \in \mathbb{M}_{\mathbb{F}_T}$.
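The matrix-level operations can be sketched in the same spirit; the following hedged Python sketch implements (5)–(8), reusing add, scal, dot and times from the previous sketch (so it is not standalone), with fuzzy matrices stored as lists of tuples.

```python
# A sketch of the fuzzy-matrix operations (5)-(8); reuses add/scal/dot/times
# from the previous sketch. Function names are illustrative.
from functools import reduce

ZERO = (0.0, 0.0, 0.0)

def mat_add(G, Lm):                 # Gamma (+) Lambda, Equation (5)
    return [[add(x, y) for x, y in zip(gr, lr)] for gr, lr in zip(G, Lm)]

def mat_dot(G, Lm):                 # Gamma (.) Lambda, Equation (6): crisp entries
    return [[sum(dot(G[i][k], Lm[k][j]) for k in range(len(Lm)))
             for j in range(len(Lm[0]))] for i in range(len(G))]

def mat_times(G, Lm):               # Gamma (x) Lambda, Equation (7): fuzzy entries
    return [[reduce(add, (times(G[i][k], Lm[k][j]) for k in range(len(Lm))), ZERO)
             for j in range(len(Lm[0]))] for i in range(len(G))]

def crisp_fuzzy(A, G):              # A Gamma, Equation (8): crisp A, fuzzy Gamma
    return [[reduce(add, (scal(A[i][k], G[k][j]) for k in range(len(G))), ZERO)
             for j in range(len(G[0]))] for i in range(len(A))]
```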
The proofs of the following properties of the foregoing operations in $\mathbb{F}_T$ and in $\mathbb{M}_{\mathbb{F}_T}$ are straightforward.
Proposition 1.
For any $X, Y, Z \in \mathbb{F}_T$ and $k \in \mathbb{R}^*$ the following properties hold:
1. $X \odot Y = Y \odot X$ and $X \otimes Y = Y \otimes X$;
2. $(X \otimes Y) \otimes Z = X \otimes (Y \otimes Z)$;
3. $(X \otimes Y) \odot Z = (X \otimes Z) \odot Y = X \odot (Y \otimes Z)$;
4. $(kX) \odot Y = k(X \odot Y) = X \odot (kY)$;
5. $(kX) \otimes Y = k(X \otimes Y) = X \otimes (kY)$.
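A quick numerical spot-check of Proposition 1 (under the operator assignments reconstructed above) can be run with the helpers from the first sketch; the generator of random nonnegative fuzzy numbers below is an illustrative assumption.

```python
# Spot-check of Proposition 1; reuses dot/times/scal from the first sketch.
import random

def rand_tfn():                      # a random nonnegative triangular fuzzy number
    l = random.uniform(0, 1)
    m = l + random.uniform(0, 1)
    return (l, m, m + random.uniform(0, 1))

X, Y, Z, k = rand_tfn(), rand_tfn(), rand_tfn(), 1.7

assert abs(dot(X, Y) - dot(Y, X)) < 1e-12                      # property 1
assert times(X, Y) == times(Y, X)                              # property 1
assert abs(dot(times(X, Y), Z) - dot(times(X, Z), Y)) < 1e-12  # property 3
assert abs(dot(scal(k, X), Y) - k * dot(X, Y)) < 1e-12         # property 4
print("Proposition 1 spot-checks passed")
```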
Proposition 2.
Let $\tilde{\Gamma}, \tilde{\Lambda}, \tilde{\Omega} \in \mathbb{M}_{\mathbb{F}_T}$. Then we have
1. $\tilde{\Gamma} \odot \tilde{\Lambda} \neq \tilde{\Lambda} \odot \tilde{\Gamma}$ and $\tilde{\Gamma} \otimes \tilde{\Lambda} \neq \tilde{\Lambda} \otimes \tilde{\Gamma}$ in general;
2. $(\tilde{\Gamma} \otimes \tilde{\Lambda}) \otimes \tilde{\Omega} = \tilde{\Gamma} \otimes (\tilde{\Lambda} \otimes \tilde{\Omega})$;
3. $(\tilde{\Gamma} \otimes \tilde{\Lambda}) \odot \tilde{\Omega} = \tilde{\Gamma} \odot (\tilde{\Lambda} \otimes \tilde{\Omega})$;
4. $(\tilde{\Gamma} \oplus \tilde{\Lambda}) \odot \tilde{\Omega} = (\tilde{\Gamma} \odot \tilde{\Omega}) + (\tilde{\Lambda} \odot \tilde{\Omega})$;
5. $(\tilde{\Gamma} \oplus \tilde{\Lambda}) \otimes \tilde{\Omega} = (\tilde{\Gamma} \otimes \tilde{\Omega}) \oplus (\tilde{\Lambda} \otimes \tilde{\Omega})$;
6. $(\tilde{\Gamma} \odot \tilde{\Lambda})^T = \tilde{\Lambda}^T \odot \tilde{\Gamma}^T$ and $(\tilde{\Gamma} \otimes \tilde{\Lambda})^T = \tilde{\Lambda}^T \otimes \tilde{\Gamma}^T$.
Proposition 3.
For $\tilde{A}, \tilde{B} \in \mathbb{M}_{\mathbb{R}^*}$ and $\tilde{\Gamma}, \tilde{\Lambda} \in \mathbb{M}_{\mathbb{F}_T}$ the following properties hold:
1. $\tilde{A}\tilde{\Gamma} \neq \tilde{\Gamma}\tilde{A}$ in general;
2. $(\tilde{A}\tilde{\Gamma}) \odot \tilde{\Lambda} = \tilde{A}(\tilde{\Gamma} \odot \tilde{\Lambda})$ and $(\tilde{A}\tilde{\Gamma}) \otimes \tilde{\Lambda} = \tilde{A}(\tilde{\Gamma} \otimes \tilde{\Lambda})$;
3. $\tilde{\Gamma} \odot (\tilde{\Lambda}\tilde{A}) = (\tilde{\Gamma} \odot \tilde{\Lambda})\tilde{A}$ and $\tilde{\Gamma} \otimes (\tilde{\Lambda}\tilde{A}) = (\tilde{\Gamma} \otimes \tilde{\Lambda})\tilde{A}$;
4. $(\tilde{A}\tilde{\Gamma})^T = \tilde{\Gamma}^T\tilde{A}^T$ and $(\tilde{\Gamma}\tilde{A})^T = \tilde{A}^T\tilde{\Gamma}^T$;
5. $(\tilde{A} + \tilde{B})\tilde{\Gamma} = \tilde{A}\tilde{\Gamma} \oplus \tilde{B}\tilde{\Gamma}$;
6. $(\tilde{A}\tilde{B})\tilde{\Gamma} = \tilde{A}(\tilde{B}\tilde{\Gamma})$.
Proposition 4.
For $\tilde{A} \in \mathbb{M}_{\mathbb{R}^*}$, $X \in \mathbb{F}_T$ and $\tilde{\Gamma}, \tilde{\Lambda} \in \mathbb{M}_{\mathbb{F}_T}$ we have
1. $X \odot \tilde{\Gamma} = \tilde{\Gamma} \odot X$ and $X \otimes \tilde{\Gamma} = \tilde{\Gamma} \otimes X$;
2. $(X \otimes \tilde{\Gamma}) \odot \tilde{\Lambda} = X \odot (\tilde{\Gamma} \otimes \tilde{\Lambda}) = \tilde{\Gamma} \odot (X \otimes \tilde{\Lambda})$;
3. $(X \otimes \tilde{\Gamma}) \otimes \tilde{\Lambda} = X \otimes (\tilde{\Gamma} \otimes \tilde{\Lambda}) = \tilde{\Gamma} \otimes (X \otimes \tilde{\Lambda})$;
4. $(\tilde{A}\tilde{\Gamma}) \odot X = \tilde{A}(\tilde{\Gamma} \odot X) = X \odot (\tilde{A}\tilde{\Gamma}) = (X\tilde{A}) \odot \tilde{\Gamma}$;
5. $(\tilde{A}\tilde{\Gamma}) \otimes X = \tilde{A}(\tilde{\Gamma} \otimes X) = X \otimes (\tilde{A}\tilde{\Gamma}) = (X\tilde{A}) \otimes \tilde{\Gamma}$;
6. $\tilde{\Gamma} \odot (X\tilde{A}) = (\tilde{\Gamma} \odot X)\tilde{A} = (\tilde{\Gamma}\tilde{A}) \odot X = X \odot (\tilde{\Gamma}\tilde{A})$;
7. $\tilde{\Gamma} \otimes (X\tilde{A}) = (\tilde{\Gamma} \otimes X)\tilde{A} = (\tilde{\Gamma}\tilde{A}) \otimes X = X \otimes (\tilde{\Gamma}\tilde{A})$.

3. Fuzzy Least Squares Estimation

Consider the classical linear model
$$\mathbf{Y} = \mathbf{X}\beta + \epsilon, \qquad (13)$$
where $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ is a vector of observed responses, $\mathbf{X}$ is an $n \times (p+1)$ design matrix of explanatory variables $x_{ij}$, $\beta = (\beta_0, \beta_1, \ldots, \beta_p)^T$ denotes a $(p+1)$-dimensional vector of unknown parameters, and $\epsilon = (\epsilon_1, \ldots, \epsilon_n)^T$ is a vector of errors. We usually assume that $E\epsilon = 0$ and $Var\,\epsilon = \sigma^2 I_n$, where $\sigma^2 < \infty$. It is also usual to take $x_{i0} \equiv 1$, $i = 1, \ldots, n$.
The most common estimator in this regression model is the least squares estimator (LSE) given by
$$\hat{\beta}_n = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}, \qquad (14)$$
where the design matrix $\mathbf{X}$ is assumed to have full rank. By the Gauss–Markov theorem, (14) is the best linear unbiased estimator (BLUE) of the parameters, where "best" means having the lowest variance among all linear unbiased estimators. Moreover, estimator (14) is strongly consistent under a certain condition on the design matrix, namely $(\mathbf{X}_n^T\mathbf{X}_n)^{-1} \to 0$.
In this paper, we consider a fuzzy generalization of (13), i.e., the following linear regression model with fuzzy inputs and fuzzy outputs
$$Y_i = \beta_0 \oplus \beta_1 X_{i1} \oplus \cdots \oplus \beta_p X_{ip} \oplus \Phi_i, \quad i = 1, \ldots, n, \qquad (15)$$
where $X_{ij} = (l_{x_{ij}}, x_{ij}, r_{x_{ij}})$ and $Y_i = (l_{y_i}, y_i, r_{y_i})$, while $\beta_0, \beta_1, \ldots, \beta_p$ denote unknown crisp regression parameters to be estimated. Moreover, let $\Phi_i$, $i = 1, \ldots, n$, denote fuzzy error terms which express both randomness and fuzziness, allowing negative spreads [8,16,38], i.e., $\Phi_i = (\theta_i^l, \epsilon_i, \theta_i^r)$, where $\theta_i^l, \epsilon_i, \theta_i^r$ are crisp random variables which satisfy the following assumptions:
Assumption A.
(A1) $\epsilon_1, \ldots, \epsilon_n$ are i.i.d. random variables such that $E(\epsilon_i) = 0$ and $Var(\epsilon_i) = \sigma_\epsilon^2 < \infty$;
(A2) $\theta_1^r, \ldots, \theta_n^r$ are i.i.d. random variables such that $E(\theta_i^r) = 0$ and $Var(\theta_i^r) = \sigma_r^2 < \infty$;
(A3) $\theta_1^l, \ldots, \theta_n^l$ are i.i.d. random variables such that $E(\theta_i^l) = 0$ and $Var(\theta_i^l) = \sigma_l^2 < \infty$;
(A4) $\epsilon_i$, $\theta_i^r$ and $\theta_i^l$ are mutually uncorrelated.
It can be shown (see [33]) that, defining the design matrix
$$\tilde{X} = \begin{bmatrix} (1,1,1) & (l_{x_{11}}, x_{11}, r_{x_{11}}) & \cdots & (l_{x_{1p}}, x_{1p}, r_{x_{1p}}) \\ \vdots & \vdots & \ddots & \vdots \\ (1,1,1) & (l_{x_{n1}}, x_{n1}, r_{x_{n1}}) & \cdots & (l_{x_{np}}, x_{np}, r_{x_{np}}) \end{bmatrix}$$
and the vector $\tilde{y} = [(l_{y_i}, y_i, r_{y_i})]_{n \times 1} = [(l_{y_1}, y_1, r_{y_1}), \ldots, (l_{y_n}, y_n, r_{y_n})]^T$, and assuming that $\det(\tilde{X}^T \odot \tilde{X}) \neq 0$, we obtain the following least squares estimator
$$\hat{\beta} = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y}. \qquad (16)$$
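Computationally, (16) reduces to ordinary linear algebra: stacking the lower/mode/upper layers of $\tilde{X}$ as crisp matrices $L, M, R$, the definition of $\odot$ gives $\tilde{X}^T \odot \tilde{X} = L^T L + M^T M + R^T R$ and $\tilde{X}^T \odot \tilde{y} = L^T l_y + M^T y + R^T r_y$. The following numpy sketch computes (16) for the data of Table 1 (one explanatory variable); the variable names are illustrative assumptions.

```python
# A sketch of the fuzzy least squares estimator (16) on the Table 1 data.
import numpy as np

ly = np.array([3.40, 2.70, 3.15, 1.60, 2.70, 2.97, 2.25, 2.00])
y  = np.array([4.00, 3.00, 3.50, 2.00, 3.00, 3.50, 2.50, 2.50])
ry = np.array([4.80, 3.30, 3.85, 2.40, 3.45, 4.20, 2.88, 3.00])
lx = np.array([16.80, 12.75, 13.50, 7.65, 10.80, 14.40, 5.40, 10.20])
x  = np.array([21.00, 15.00, 15.00, 9.00, 12.00, 18.00, 5.50, 12.00])
rx = np.array([23.10, 17.25, 17.25, 10.35, 13.20, 19.80, 7.20, 14.40])

ones = np.ones_like(x)
L = np.column_stack([ones, lx])   # lower-bound layer of X~
M = np.column_stack([ones, x])    # mode layer of X~
R = np.column_stack([ones, rx])   # upper-bound layer of X~

G = L.T @ L + M.T @ M + R.T @ R           # X~^T (.) X~  (crisp matrix)
g = L.T @ ly + M.T @ y + R.T @ ry         # X~^T (.) y~  (crisp vector)
beta_hat = np.linalg.solve(G, g)          # Equation (16)
print(beta_hat)
```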
In the next section, we explore basic properties of (16) and show that it is the BLUE. Then, in Section 5, we examine the asymptotic behavior of (16).

4. BLUE in Fuzzy Regression Model

The following lemmas provide formulas for the expectation and the variance of our estimator.
Lemma 1.
Let $\tilde{\Gamma}_{m \times n} \in \mathbb{M}_{\mathbb{F}_T}$ and $\tilde{y}_{n \times 1} = [Y_j] = [(l_{y_j}, y_j, r_{y_j})] \in \mathbb{M}_{\mathbb{F}_T}$, where $j = 1, \ldots, n$. Then
$$E(\tilde{\Gamma} \odot \tilde{y}) = \tilde{\Gamma} \odot E(\tilde{y}). \qquad (17)$$
Proof. 
Let $\tilde{\Gamma} = [Z_{ij}] = [(l_{z_{ij}}, z_{ij}, r_{z_{ij}})]$. Then
$$E(\tilde{\Gamma} \odot \tilde{y}) = E\Big[\sum_{j=1}^n Z_{ij} \odot Y_j\Big] = E\Big[\sum_{j=1}^n (l_{z_{ij}} l_{y_j} + z_{ij} y_j + r_{z_{ij}} r_{y_j})\Big] = \Big[\sum_{j=1}^n \big(l_{z_{ij}} E(l_{y_j}) + z_{ij} E(y_j) + r_{z_{ij}} E(r_{y_j})\big)\Big] = \Big[\sum_{j=1}^n Z_{ij} \odot E(Y_j)\Big] = \tilde{\Gamma} \odot [E(Y_j)],$$
where $E(Y_j) = \big(E(l_{y_j}), E(y_j), E(r_{y_j})\big)$ for $j = 1, \ldots, n$. □
Let $Var(l_{y_j}) = \sigma_{l_{y_j}}^2$, $Var(y_j) = \sigma_{y_j}^2$ and $Var(r_{y_j}) = \sigma_{r_{y_j}}^2$. Let $\tilde{\Sigma}_{n \times n} \in \mathbb{M}_{\mathbb{F}_T}$ be the diagonal matrix with diagonal elements $\tilde{\Sigma}_{jj} = (\sigma_{l_{y_j}}^2, \sigma_{y_j}^2, \sigma_{r_{y_j}}^2) = (\sigma_l^2, \sigma_\epsilon^2, \sigma_r^2) = \sigma_\Phi^2$, while $\tilde{\Sigma}_{jl} = (0,0,0)$ for $j, l = 1, \ldots, n$ with $j \neq l$.
Lemma 2.
Let $\tilde{\Gamma}_{m \times n} \in \mathbb{M}_{\mathbb{F}_T}$ and $\tilde{y}_{n \times 1} = [Y_j] = [(l_{y_j}, y_j, r_{y_j})] \in \mathbb{M}_{\mathbb{F}_T}$, where $j = 1, \ldots, n$. Then
$$Var(\tilde{\Gamma} \odot \tilde{y}) = (\tilde{\Gamma} \otimes \tilde{\Sigma}) \odot \tilde{\Gamma}^T = \sigma_\Phi^2 \odot (\tilde{\Gamma} \otimes \tilde{\Gamma}^T). \qquad (18)$$
Proof. 
Let $\tilde{\Gamma} = [Z_{ij}] = [(l_{z_{ij}}, z_{ij}, r_{z_{ij}})]$ and $\tilde{y} = [Y_j] = [(l_{y_j}, y_j, r_{y_j})]$, where $i = 1, \ldots, m$, $j = 1, \ldots, n$. Since $\tilde{\Gamma} \odot \tilde{y} = [\sum_{j=1}^n Z_{ij} \odot Y_j]$, we have
$$Var(\tilde{\Gamma} \odot \tilde{y}) = \Big[Cov\Big(\sum_{j=1}^n Z_{ij} \odot Y_j,\ \sum_{j=1}^n Z_{kj} \odot Y_j\Big)\Big],$$
where $i, k = 1, \ldots, m$. The diagonal elements of $Var(\tilde{\Gamma} \odot \tilde{y})$, i.e., those for $i = k$, are $Var\big(\sum_{j=1}^n Z_{ij} \odot Y_j\big)$, where $\sum_{j=1}^n Z_{ij} \odot Y_j = \sum_{j=1}^n (l_{z_{ij}}, z_{ij}, r_{z_{ij}}) \odot (l_{y_j}, y_j, r_{y_j}) = \sum_{j=1}^n (l_{z_{ij}} l_{y_j} + z_{ij} y_j + r_{z_{ij}} r_{y_j})$. Hence, by (A4), we obtain
$$Var\Big(\sum_{j=1}^n Z_{ij} \odot Y_j\Big) = \sum_{j=1}^n (l_{z_{ij}}^2 \sigma_{l_{y_j}}^2 + z_{ij}^2 \sigma_{y_j}^2 + r_{z_{ij}}^2 \sigma_{r_{y_j}}^2) = \sum_{j=1}^n \big((Z_{ij} \otimes Z_{ij}) \odot (\sigma_l^2, \sigma_\epsilon^2, \sigma_r^2)\big) = \sum_{j=1}^n (Z_{ij} \otimes Z_{ij}) \odot \sigma_\Phi^2.$$
Note that $\sigma_\Phi^2$ is a crisp vector, not a triangular fuzzy number.
If $i \neq k$, then by (A1)–(A3) the off-diagonal elements of $Var(\tilde{\Gamma} \odot \tilde{y})$ are
$$Cov\Big(\sum_{j=1}^n Z_{ij} \odot Y_j,\ \sum_{j=1}^n Z_{kj} \odot Y_j\Big) = \sum_{j=1}^n (l_{z_{ij}} l_{z_{kj}} \sigma_{l_{y_j}}^2 + z_{ij} z_{kj} \sigma_{y_j}^2 + r_{z_{ij}} r_{z_{kj}} \sigma_{r_{y_j}}^2) = \sum_{j=1}^n (Z_{ij} \otimes Z_{kj}) \odot \sigma_\Phi^2.$$
On the other hand,
$$(\tilde{\Gamma} \otimes \tilde{\Sigma}) \odot \tilde{\Gamma}^T = [Z_{ij} \otimes \sigma_\Phi^2] \odot \tilde{\Gamma}^T = \Big[\sum_{j=1}^n (Z_{ij} \otimes \sigma_\Phi^2) \odot Z_{kj}\Big] = \Big[\sum_{j=1}^n (Z_{ij} \otimes Z_{kj}) \odot \sigma_\Phi^2\Big],$$
where $i, k = 1, \ldots, m$ and $j = 1, \ldots, n$. Thus we obtain $Var(\tilde{\Gamma} \odot \tilde{y}) = (\tilde{\Gamma} \otimes \tilde{\Sigma}) \odot \tilde{\Gamma}^T$.
Keeping also in mind that $\tilde{\Sigma} = \sigma_\Phi^2 \tilde{I}$, where $\tilde{I}$ is the identity matrix, we obtain
$$(\tilde{\Gamma} \otimes \tilde{\Sigma}) \odot \tilde{\Gamma}^T = \big(\tilde{\Gamma} \otimes (\sigma_\Phi^2 \tilde{I})\big) \odot \tilde{\Gamma}^T = \big(\sigma_\Phi^2 \otimes (\tilde{\Gamma}\tilde{I})\big) \odot \tilde{\Gamma}^T = (\sigma_\Phi^2 \otimes \tilde{\Gamma}) \odot \tilde{\Gamma}^T = \sigma_\Phi^2 \odot (\tilde{\Gamma} \otimes \tilde{\Gamma}^T).$$
Thus the proof is completed. □
Corollary 1.
Let $\hat{\beta}$ be the least squares estimator (16) of β. Then
$$Var(\hat{\beta}) = \sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1} (\tilde{X}^T \otimes \tilde{X}) (\tilde{X}^T \odot \tilde{X})^{-1}\big].$$
Proof. 
By (16) we have $\hat{\beta} = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y} = \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \odot \tilde{y}$, where $(\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \in \mathbb{M}_{\mathbb{F}_T}$. Hence, by Lemma 2, we get
$$Var(\hat{\beta}) = \sigma_\Phi^2 \odot \Big[\big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \otimes \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big)^T\Big] = \sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1} (\tilde{X}^T \otimes \tilde{X}) (\tilde{X}^T \odot \tilde{X})^{-1}\big],$$
which completes the proof. □
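The covariance formula can also be sketched numerically, continuing the previous sketch (G, L, M, R). Under the componentwise (layerwise) reading of the operations used in the proofs, each layer of $\tilde{X}^T \otimes \tilde{X}$ is contracted with the corresponding component of $\sigma_\Phi^2$; the variance values below are assumed purely for illustration.

```python
# A sketch of Var(beta_hat) from Corollary 1; continues the previous sketch.
sl2, se2, sr2 = 1.0, 1.0, 1.0          # assumed sigma_l^2, sigma_eps^2, sigma_r^2
Ginv = np.linalg.inv(G)                # (X~^T (.) X~)^{-1}
middle = sl2 * (L.T @ L) + se2 * (M.T @ M) + sr2 * (R.T @ R)   # sigma_Phi^2 (.) (X~^T (x) X~)
cov_beta = Ginv @ middle @ Ginv        # Corollary 1 sandwich formula
print(cov_beta)
```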
Theorem 1.
Let $\hat{\beta}$ be the least squares estimator given by (16). Then $\hat{\beta}$ is an unbiased estimator of β.
Proof. 
Firstly, we show that $E(\tilde{y}) = \tilde{X}\beta$. Since, with $l_{x_{i0}} = x_{i0} = r_{x_{i0}} = 1$,
$$l_{y_i} = \beta_0 + \beta_1 l_{x_{i1}} + \cdots + \beta_p l_{x_{ip}} + \theta_i^l = \sum_{j=0}^p \beta_j l_{x_{ij}} + \theta_i^l,$$
$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \epsilon_i = \sum_{j=0}^p \beta_j x_{ij} + \epsilon_i,$$
$$r_{y_i} = \beta_0 + \beta_1 r_{x_{i1}} + \cdots + \beta_p r_{x_{ip}} + \theta_i^r = \sum_{j=0}^p \beta_j r_{x_{ij}} + \theta_i^r,$$
by Assumption A we obtain
$$E(\tilde{y}) = [E(Y_i)] = \big[\big(E(l_{y_i}), E(y_i), E(r_{y_i})\big)\big] = \Big[\Big(\sum_{j=0}^p \beta_j l_{x_{ij}},\ \sum_{j=0}^p \beta_j x_{ij},\ \sum_{j=0}^p \beta_j r_{x_{ij}}\Big)\Big] = \tilde{X}\beta.$$
Now, by Lemma 1, we conclude that
$$E(\hat{\beta}) = E\big[\big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \odot \tilde{y}\big] = \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \odot E(\tilde{y}) = \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \odot (\tilde{X}\beta) = (\tilde{X}^T \odot \tilde{X})^{-1} (\tilde{X}^T \odot \tilde{X}) \beta = \beta,$$
which means that $\hat{\beta}$ is an unbiased estimator of β. □
Now we are able to state the main theorem of this section.
Theorem 2.
The least squares estimator $\hat{\beta}$ given by (16) is the best linear unbiased estimator of β.
Proof. 
It is clear that $\hat{\beta} = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y}$ is a linear estimator with respect to the fuzzy operation ⊙, i.e., a fuzzy-type linear estimator. By Theorem 1 we also know that $\hat{\beta}$ is unbiased. Therefore, it suffices to prove that $\hat{\beta}$ has the minimum variance among all linear unbiased estimators of β.
Let $\hat{\beta}^*$ be an arbitrary linear unbiased estimator of β. Then
$$\hat{\beta}^* = \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T\big) \odot \tilde{y} + \tilde{\Lambda} \odot \tilde{y} = \big((\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \oplus \tilde{\Lambda}\big) \odot \tilde{y}$$
for some $\tilde{\Lambda} \in \mathbb{M}_{\mathbb{F}_T}$. By Lemma 2 we have
$$Var(\hat{\beta}^*) = \sigma_\Phi^2 \odot \Big[\big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T \oplus \tilde{\Lambda}\big) \otimes \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T \oplus \tilde{\Lambda}\big)^T\Big] = \sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1}\big] + \sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{\Lambda}^T) + (\tilde{\Lambda} \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1} + (\tilde{\Lambda} \otimes \tilde{\Lambda}^T)\big]. \qquad (19)$$
Since
$$E(\hat{\beta}^*) = \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T \oplus \tilde{\Lambda}\big) \odot E(\tilde{y}) = \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T \oplus \tilde{\Lambda}\big) \odot (\tilde{X}\beta) = (\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \odot \tilde{X})\beta + (\tilde{\Lambda} \odot \tilde{X})\beta = \beta + (\tilde{\Lambda} \odot \tilde{X})\beta,$$
the unbiasedness of $\hat{\beta}^*$ implies $\tilde{\Lambda} \odot \tilde{X} = \tilde{O}$, where $\tilde{O}$ is the zero matrix in $\mathbb{M}_{\mathbb{R}}$. Let $\tilde{\Lambda} = [Z_{ij}] = [(l_{z_{ij}}, z_{ij}, r_{z_{ij}})]$, where $i = 1, \ldots, p+1$ and $j = 1, \ldots, n$. Then
$$\tilde{\Lambda} \odot \tilde{X} = \Big[\sum_{k=1}^n Z_{ik} \odot X_{kl}\Big] = \Big[\sum_{k=1}^n (l_{z_{ik}} l_{x_{kl}} + z_{ik} x_{kl} + r_{z_{ik}} r_{x_{kl}})\Big] = \tilde{O},$$
so $\sum_{k=1}^n (l_{z_{ik}} l_{x_{kl}} + z_{ik} x_{kl} + r_{z_{ik}} r_{x_{kl}}) = 0$ for all $i, l = 1, \ldots, p+1$. Since we assumed $\tilde{\Lambda}, \tilde{X} \in \mathbb{M}_{\mathbb{F}_T} = \mathbb{M}_{\mathbb{F}_T}(\mathbb{R}^+)$, all components are nonnegative, so every summand must vanish: $l_{z_{ik}} l_{x_{kl}} = z_{ik} x_{kl} = r_{z_{ik}} r_{x_{kl}} = 0$ for all $k = 1, \ldots, n$. Hence $Z_{ik} \otimes X_{kl} = (l_{z_{ik}} l_{x_{kl}},\ z_{ik} x_{kl},\ r_{z_{ik}} r_{x_{kl}}) = (0, 0, 0)$ for all $k = 1, \ldots, n$. Thus
$$\tilde{\Lambda} \otimes \tilde{X} = \Big[\bigoplus_{k=1}^n Z_{ik} \otimes X_{kl}\Big] = [(0,0,0)] = \tilde{\Theta},$$
where $\tilde{\Theta}$ is the zero triangular fuzzy matrix in $\mathbb{M}_{\mathbb{F}_T}$ whose elements are all $(0,0,0) \in \mathbb{F}_T$. Of course, $\tilde{X}^T \otimes \tilde{\Lambda}^T = (\tilde{\Lambda} \otimes \tilde{X})^T = \tilde{\Theta}^T$. Therefore, Equation (19) reduces to
$$Var(\hat{\beta}^*) = \sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1} \oplus (\tilde{\Lambda} \otimes \tilde{\Lambda}^T)\big] = Var(\hat{\beta}) + \sigma_\Phi^2 \odot (\tilde{\Lambda} \otimes \tilde{\Lambda}^T).$$
Let $\hat{\beta}^* = [\hat{\beta}_0^*, \hat{\beta}_1^*, \ldots, \hat{\beta}_p^*]^T$. Then $Var(\hat{\beta}_k^*)$, $k = 0, \ldots, p$, appear on the diagonal of $Var(\hat{\beta}^*)$. The diagonal elements of $\tilde{\Lambda} \otimes \tilde{\Lambda}^T$ are $\bigoplus_{j=1}^n Z_{ij} \otimes Z_{ij} = \big(\sum_{j=1}^n l_{z_{ij}}^2,\ \sum_{j=1}^n z_{ij}^2,\ \sum_{j=1}^n r_{z_{ij}}^2\big)$, where $i = 1, \ldots, p+1$. Thus the diagonal elements of $\sigma_\Phi^2 \odot (\tilde{\Lambda} \otimes \tilde{\Lambda}^T)$ are $\sum_{j=1}^n \sigma_l^2 l_{z_{ij}}^2 + \sum_{j=1}^n \sigma_\epsilon^2 z_{ij}^2 + \sum_{j=1}^n \sigma_r^2 r_{z_{ij}}^2$. Hence, indexing the rows of $\tilde{\Lambda}$ by $k = 0, \ldots, p$,
$$Var(\hat{\beta}_k^*) = Var(\hat{\beta}_k) + \sum_{j=1}^n \sigma_l^2 l_{z_{kj}}^2 + \sum_{j=1}^n \sigma_\epsilon^2 z_{kj}^2 + \sum_{j=1}^n \sigma_r^2 r_{z_{kj}}^2.$$
Therefore, for any linear unbiased estimator $\hat{\beta}^*$ we get $Var(\hat{\beta}_k^*) \geq Var(\hat{\beta}_k)$ for all $k = 0, \ldots, p$, i.e., $\hat{\beta}$ has the minimum variance among all linear unbiased estimators. This proves the theorem. □
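The variance decomposition above is easy to illustrate numerically. Continuing the earlier sketches (sl2, se2, sr2, cov_beta), the following lines add the nonnegative extra-variance term of an arbitrary illustrative perturbation matrix (not the $\tilde{\Lambda}$ of the proof, which must vanish, and not the matrices of Example 1 below) to the diagonal of $Var(\hat{\beta})$:

```python
# Illustration of the extra-variance term in the proof of Theorem 2;
# Lam_l/Lam_m/Lam_r are an arbitrary illustrative perturbation Lambda~.
rng = np.random.default_rng(0)
Lam_l, Lam_m, Lam_r = rng.uniform(0, 0.1, (3, 2, 8))
extra = (sl2 * Lam_l**2 + se2 * Lam_m**2 + sr2 * Lam_r**2).sum(axis=1)
print(np.diag(cov_beta))            # Var(beta_hat_k)
print(np.diag(cov_beta) + extra)    # Var(beta*_k): never smaller
```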
Example 1.
Using the modified data of [7] given in Table 1, we evaluate the covariance matrices of some fuzzy-type linear estimators in order to compare their variances. The data involve student grades and family income and were fuzzified as triangular fuzzy numbers. We compare four fuzzy-type linear estimators: $\hat{\beta}$ is our estimator, while $\hat{\beta}_1, \hat{\beta}_2, \hat{\beta}_3$ are modified fuzzy-type linear estimators defined as follows:
$$\hat{\beta} = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y},$$
$$\hat{\beta}_1 = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y} + (\tilde{A} \odot \tilde{y}),$$
$$\hat{\beta}_2 = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y} + (\tilde{B} \odot \tilde{y}),$$
$$\hat{\beta}_3 = (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{X}^T \odot \tilde{y} + (\tilde{C} \odot \tilde{y}),$$
where $\tilde{A} = [A_{ij}]$, $\tilde{B} = [B_{ij}]$ and $\tilde{C} = [C_{ij}]$ with $A_{ij} = (a_{ij}^l, a_{ij}^m, a_{ij}^r)$, $B_{ij} = (b_{ij}^l, b_{ij}^m, b_{ij}^r)$ and $C_{ij} = (c_{ij}^l, c_{ij}^m, c_{ij}^r)$ ($i = 1, 2, 3$ and $j = 1, \ldots, 10$). Here $a_{ij}^m$ is the mode of $A_{ij}$, while $a_{ij}^l$ and $a_{ij}^r$ are its left and right spreads, respectively. We set all $a_{ij}^l = a_{ij}^m = a_{ij}^r = 0$ except $a_{19}^l = 0.1$, $a_{25}^r = 0.1$ and $a_{32}^m = 0.1$; all $b_{ij}^l = b_{ij}^m = b_{ij}^r = 0$ except $b_{12}^m = 0.01$, $b_{25}^r = 0.01$ and $b_{39}^l = 0.01$; and all $c_{ij}^l = c_{ij}^m = c_{ij}^r = 0$ except $c_{14}^l = 0.1$, $c_{27}^m = 0.1$ and $c_{3,10}^r = 0.1$. We get the estimates $\hat{\beta} = [11.9986,\ 4.0970,\ 2.9970]^T$, $\hat{\beta}_1 = [11.5416,\ 4.8970,\ 2.4650]^T$, $\hat{\beta}_2 = [12.5306,\ 5.3540,\ 2.9970]^T$ and $\hat{\beta}_3 = [18.1686,\ 9.1170,\ 8.8870]^T$.
By applying the covariance formulas of Lemma 2 and Corollary 1, we obtain the covariance matrices of the estimators as follows:
$$\Sigma = \begin{bmatrix} 1.2366 & -0.0831 & -0.1110 \\ -0.0831 & 0.0090 & 0.0035 \\ -0.1110 & 0.0035 & 0.0155 \end{bmatrix}, \qquad \Sigma_1 = \begin{bmatrix} 1.2367 & -0.0831 & -0.1110 \\ -0.0831 & 0.0091 & 0.0035 \\ -0.1110 & 0.0035 & 0.0156 \end{bmatrix},$$
$$\Sigma_2 = \begin{bmatrix} 1.2367 & -0.0831 & -0.1110 \\ -0.0831 & 0.0092 & 0.0035 \\ -0.1110 & 0.0035 & 0.0155 \end{bmatrix}, \qquad \Sigma_3 = \begin{bmatrix} 1.2466 & -0.0831 & -0.1110 \\ -0.0831 & 0.0190 & 0.0035 \\ -0.1110 & 0.0035 & 0.0255 \end{bmatrix}.$$
Here $Var(\hat{\beta}_0) = 1.2366$ is smaller than 1.2367, 1.2367 and 1.2466; $Var(\hat{\beta}_1) = 0.0090$ is smaller than 0.0091, 0.0092 and 0.0190; finally, $Var(\hat{\beta}_2) = 0.0155$ is smaller than 0.0156 and 0.0255. Hence we conclude that the estimator $\hat{\beta}$ has the minimum variances among $\hat{\beta}, \hat{\beta}_1, \hat{\beta}_2$ and $\hat{\beta}_3$.

5. Asymptotic Normality

We start this section by citing some classical results, a Strong Law of Large Numbers (SLLN) for martingales and a Central Limit Theorem (CLT), which will be useful in the proof of the main result.
Theorem 3
(SLLN for martingales). Let $S_n = \sum_{i=1}^n X_i$, $n \geq 1$, be a martingale such that $E|X_k|^p < \infty$ for $k \geq 1$ and $1 \leq p \leq 2$. Suppose that $\{b_n\}$ is a sequence of positive constants increasing to ∞ as $n \to \infty$, and that $\sum_{i=1}^\infty E[X_i^2]/b_i^2 < \infty$. Then $S_n/b_n \xrightarrow{a.s.} 0$, where $\xrightarrow{a.s.}$ denotes almost sure convergence.
Theorem 4
(Hájek–Šidák CLT). Let $\{X_n\}$ be a sequence of i.i.d. random variables (r.v.'s) with mean μ and a finite variance σ². Let $\{\mathbf{c}_n\}$ be a sequence of real vectors $\mathbf{c}_n = (c_{n1}, \ldots, c_{nn})^T$. If
$$\frac{\max_{1 \leq i \leq n} c_{ni}^2}{\sum_{i=1}^n c_{ni}^2} \to 0 \quad \text{as } n \to \infty,$$
then
$$Z_n = \frac{\sum_{i=1}^n c_{ni}(X_i - \mu)}{\sqrt{\sigma^2 \sum_{i=1}^n c_{ni}^2}} \xrightarrow{L} N(0, 1),$$
where $\xrightarrow{L}$ stands for convergence in law.
The proof of Theorem 4 can be found in [39].
Theorem 5
(Courant–Fischer minimax theorem). For any $n \times n$ real symmetric matrix $A$, its eigenvalues $\lambda_1 \leq \cdots \leq \lambda_n$ satisfy
$$\lambda_k = \min_{\dim(C) = k}\ \max_{\|x\| = 1,\ x \in C} \langle Ax, x \rangle,$$
where $C$ ranges over subspaces of $\mathbb{R}^n$.
To obtain the asymptotic properties of our least squares estimator in the generalized fuzzy regression model, the following additional assumptions are required besides Assumption A given in Section 3:
Assumption B.
(B1) $\max_{1 \leq i \leq n} \tilde{x}_i^T \odot (\tilde{X}^T \odot \tilde{X})^{-1} \tilde{x}_i \to 0$ as $n \to \infty$, where $\tilde{x}_i^T$ denotes the i-th row of the fuzzy matrix $\tilde{X}$;
(B2) $n(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1} \to \tilde{\Pi}$ as $n \to \infty$ for some $\tilde{\Pi} \in \mathbb{M}_{\mathbb{F}_T}$.
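For a fixed design, the quantity in (B1) is a fuzzy analogue of the leverage. Continuing the earlier sketches (L, M, R, Ginv), it can be computed layerwise as below; this is an illustrative check on the Table 1 design, not a statement from the paper.

```python
# Fuzzy leverages for (B1); continues the earlier sketches.
lev = np.einsum('ij,jk,ik->i', L, Ginv, L) \
    + np.einsum('ij,jk,ik->i', M, Ginv, M) \
    + np.einsum('ij,jk,ik->i', R, Ginv, R)
print(lev.max())    # (B1) requires this maximum to shrink to 0 as n grows
```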
Now we are able to formulate one of the main results of this contribution. Let $\hat{\beta}_n$ denote the estimator of β based on a sample of size n.
Theorem 6.
If model (15) satisfies Assumptions A and B, then the least squares estimator $\hat{\beta}_n$ is asymptotically normal, i.e.,
$$\sqrt{n}\,\big(\hat{\beta}_n - \beta\big) \xrightarrow{L} N_{p+1}\big(0,\ \sigma_\Phi^2 \odot \tilde{\Pi}\big),$$
where $\sigma_\Phi^2 = (\sigma_l^2, \sigma_\epsilon^2, \sigma_r^2)$.
Proof. 
By (16) one finds that
$$\hat{\beta}_n = (\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \odot \tilde{y}) = \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T\big) \odot \tilde{y} = \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T\big) \odot (\tilde{X}\beta \oplus \Phi) = \beta + \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T\big) \odot \Phi,$$
so, consequently, $\hat{\beta}_n - \beta = \big((\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T\big) \odot \Phi$, where $\Phi = (\Phi_1, \ldots, \Phi_n)^T$.
Let $\lambda_n \in \mathbb{R}^{p+1}$ ($\lambda_n \neq 0$) be an arbitrary but fixed vector. Moreover, let $Z_n = \lambda_n^T(\hat{\beta}_n - \beta) = \tilde{C}_n^T \odot \Phi \in \mathbb{R}$, where $\tilde{C}_n = \tilde{X}(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n \in \mathbb{M}_{\mathbb{F}_T}$. If we denote $\tilde{C}_n^T = [C_{n1}, \ldots, C_{nn}]$, where $C_{n1}, \ldots, C_{nn} \in \mathbb{F}_T$, then (see [33])
$$\sum_{i=1}^n C_{ni} \odot C_{ni} = \tilde{C}_n^T \odot \tilde{C}_n = \lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \odot \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n = \lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n.$$
We claim that $\tilde{C}_n$ satisfies the regularity condition of Theorem 4, from which we can obtain the asymptotic distribution of $Z_n$. Let $\tilde{x}_i^T$ be the i-th row of $\tilde{X}$. Then $C_{ni} = \tilde{x}_i^T(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n$. Since $C_{ni} \in \mathbb{F}_T$, we have $C_{ni}^T = C_{ni}$. Hence
$$C_{ni} \odot C_{ni} = C_{ni}^T \odot C_{ni} = \big(\lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}\tilde{x}_i\big) \odot \big(\tilde{x}_i^T(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n\big) = \lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{x}_i \odot \tilde{x}_i^T)(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n.$$
Therefore, by Theorem 5 (see [33]),
$$\sup_{\lambda_n} \frac{\lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{x}_i \odot \tilde{x}_i^T)(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n}{\lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n}$$
becomes
$$\mathrm{ch}_{max}\big[(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{x}_i \odot \tilde{x}_i^T)\big] = \tilde{x}_i^T \odot (\tilde{X}^T \odot \tilde{X})^{-1}\tilde{x}_i,$$
where $\mathrm{ch}_{max}(Q)$ stands for the largest characteristic root of the matrix $Q$. Thus
$$\sup_{\lambda_n} \max_i \frac{C_{ni} \odot C_{ni}}{\sum_{i=1}^n C_{ni} \odot C_{ni}} = \max_i\ \sup_{\lambda_n} \frac{\lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{x}_i \odot \tilde{x}_i^T)(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n}{\lambda_n^T(\tilde{X}^T \odot \tilde{X})^{-1}\lambda_n} = \max_i\ \tilde{x}_i^T \odot (\tilde{X}^T \odot \tilde{X})^{-1}\tilde{x}_i,$$
which, by Assumption (B1), converges to 0 as $n \to \infty$. It means that
$$\frac{\max_{1 \leq i \leq n} C_{ni} \odot C_{ni}}{\sum_{i=1}^n C_{ni} \odot C_{ni}} \to 0$$
as $n \to \infty$. Consequently, the suitably standardized $Z_n = \lambda_n^T(\hat{\beta}_n - \beta)$ converges in law to $N(0, 1)$ by the Hájek–Šidák CLT (Theorem 4).
On the other hand, one may notice that
$$Var\big(\sqrt{n}(\hat{\beta}_n - \beta)\big) = n\,\sigma_\Phi^2 \odot \big[(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1}\big] \to \sigma_\Phi^2 \odot \tilde{\Pi}$$
as $n \to \infty$, by Assumption (B2). Thus
$$\sqrt{n}\,\big(\hat{\beta}_n - \beta\big) \xrightarrow{L} N_{p+1}\big(0,\ \sigma_\Phi^2 \odot \tilde{\Pi}\big),$$
which completes the proof. □

6. Strong Consistency and Confidence Region

The weak consistency is a direct consequence of the asymptotic normality.
Theorem 7.
For model (15), under Assumptions A and B, $\hat{\beta}_n$ is a weakly consistent estimator of β, that is,
$$\hat{\beta}_n \xrightarrow{P} \beta,$$
where $\xrightarrow{P}$ denotes convergence in probability.
Proof. 
Since $\sqrt{n}(\hat{\beta}_n - \beta)$ converges in law to a non-degenerate random variable, we have $\sqrt{n}(\hat{\beta}_n - \beta) = O_P(1)$, where $O_P(1)$ stands for boundedness in probability. This implies that each $\hat{\beta}_i$ is weakly consistent, and hence $\hat{\beta}_n - \beta \xrightarrow{P} 0$. □
One of the main results of this section is the following theorem on the strong consistency of the fuzzy least squares estimator. Moreover, the theorem shows that asymptotic normality is not needed for strong consistency, so some of the foregoing assumptions may be relaxed.
Theorem 8.
For model (15), suppose that Assumptions A and B are fulfilled. Furthermore, assume that $s_j^2 = \sum_{i=1}^n (l_{x_{ij}} + x_{ij} + r_{x_{ij}})^2 \to \infty$ ($j = 0, 1, \ldots, p$) as $n \to \infty$, and that the sequence of matrices $(\tilde{X}^T \odot \tilde{X})^{-1}\,\mathrm{diag}(s_0^2, \ldots, s_p^2)$ is bounded. Then $\hat{\beta}_n$ defined in (16) is strongly consistent for β, that is,
$$\hat{\beta}_n \xrightarrow{a.s.} \beta,$$
where $\xrightarrow{a.s.}$ denotes almost sure convergence.
Proof. 
Since $\hat{\beta}_n - \beta = (\tilde{X}^T \odot \tilde{X})^{-1}\tilde{X}^T \odot \Phi$ and $s_j^2 \to \infty$ for all j, it follows from Theorem 3 that
$$\mathrm{diag}(1/s_0^2, \ldots, 1/s_p^2)\,(\tilde{X}^T \odot \Phi) \to 0$$
almost surely. The result of the theorem is now an obvious consequence of the boundedness assumption. This completes the proof. □
Next, we provide an approximate confidence region for β based on the large-sample normality of the FLSE. The asymptotic normality of $\sqrt{n}(\hat{\beta}_n - \beta)$ derived in Theorem 6 under the regularity conditions suggests the use of the pivotal quantity
$$Q_n(\hat{\beta}_n) = n(\hat{\beta}_n - \beta)^T \big(\sigma_\Phi^2 \odot \tilde{\Pi}_n\big)^{-1} (\hat{\beta}_n - \beta),$$
where $\tilde{\Pi}_n = n(\tilde{X}^T \odot \tilde{X})^{-1}(\tilde{X}^T \otimes \tilde{X})(\tilde{X}^T \odot \tilde{X})^{-1}$.
The following corollary gives the large-sample distribution of $Q_n(\hat{\beta}_n)$.
Corollary 2.
Under the conditions of Theorem 6, $Q_n(\hat{\beta}_n)$ has asymptotically a chi-squared distribution with $p+1$ degrees of freedom.
Proof. 
Corollary 2 follows immediately from Theorem 6. □
Corollary 3.
We can define a confidence region $C_{1-\alpha}(\beta)$ as the set of all β such that
$$(\hat{\beta}_n - \beta)^T \big(\sigma_\Phi^2 \odot \tilde{\Pi}_n\big)^{-1} (\hat{\beta}_n - \beta) \leq \delta,$$
where $\delta = \frac{1}{n}\chi_{1-\alpha}^2(p+1)$ and $\chi_{1-\alpha}^2(p+1)$ is the $(1-\alpha)$-quantile of the chi-squared distribution with $p+1$ degrees of freedom. Then, for large n, $C_{1-\alpha}(\beta)$ provides an approximate $100(1-\alpha)$ percent confidence region for β.
Proof. 
It is directly obtained from Corollary 2. □
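The region of Corollary 3 can be evaluated directly, continuing the earlier sketches (beta_hat, cov_beta): since $\sigma_\Phi^2 \odot \tilde{\Pi}_n = n \cdot Var(\hat{\beta})$, the pivotal quantity reduces to $d^T\,Var(\hat{\beta})^{-1}\,d$ with $d = \hat{\beta}_n - \beta$. The sketch below assumes scipy is available and $\alpha = 0.05$.

```python
# Membership test for the approximate confidence region of Corollary 3.
from scipy.stats import chi2

n, p, alpha = 8, 1, 0.05               # Table 1 design: n = 8 rows, p = 1
Qmat = np.linalg.inv(cov_beta)         # since sigma_Phi^2 (.) Pi~_n = n * cov_beta,
                                       # Q_n reduces to d^T cov_beta^{-1} d
def in_region(beta0):
    d = beta_hat - np.asarray(beta0)
    return float(d @ Qmat @ d) <= chi2.ppf(1 - alpha, df=p + 1)

print(in_region(beta_hat))             # True: the center always lies in the region
print(in_region([0.0, 0.0]))           # typically False unless beta_hat is near 0
```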
Remark 1.
It is well known that, under certain regularity conditions, the sequence of crisp LSEs $\breve{\beta}_n$ is asymptotically normal in the sense that
$$\sqrt{n}\,\big(\breve{\beta}_n - \beta\big) \xrightarrow{L} N_{p+1}\big(0,\ \sigma_{\epsilon^*}^2 V^{-1}\big),$$
where $\sigma_{\epsilon^*}^2 = Var[\epsilon]$ is the variance of the errors $\epsilon_i$ in model (13) and $V$ is given by $V_n = \frac{1}{n}(\mathbf{X}^T\mathbf{X}) \to V$ as $n \to \infty$. Thus, a $100(1-\alpha)$ percent approximate confidence region based on the LSE, denoted by $C_{1-\alpha}^*(\beta)$, is the set of all β such that
$$(\breve{\beta}_n - \beta)^T V (\breve{\beta}_n - \beta) \leq \delta^*,$$
where $\delta^* = \frac{\sigma_{\epsilon^*}^2}{n}\chi_{1-\alpha}^2(p+1)$. Then, for large n, $C_{1-\alpha}^*(\beta)$ provides a $100(1-\alpha)$ percent confidence region for β.
Now we compare the sequence of FLSEs $\{\hat{\beta}_n\}$ with the sequence of classical crisp LSEs $\{\breve{\beta}_n\}$. A numerical measure of the asymptotic relative efficiency (ARE) of $\{\hat{\beta}_n\}$ with respect to $\{\breve{\beta}_n\}$, based on the inverse ratio of their generalized limiting variances [40], is given by
$$e(F, C) = \left(\frac{|\sigma_{\epsilon^*}^2 V^{-1}|}{|\sigma_\Phi^2 \odot \tilde{\Pi}|}\right)^{\frac{1}{p+1}}.$$
If the ARE of the FLSEs $\{\hat{\beta}_n\}$ with respect to the crisp LSEs $\{\breve{\beta}_n\}$ is greater than 1, which implies a strictly smaller asymptotic confidence region for $\{\hat{\beta}_n\}$, we say that $\{\hat{\beta}_n\}$ is more efficient than $\{\breve{\beta}_n\}$.
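A sketch of this ARE computation, continuing the earlier sketches, is given below; here $V$ is replaced by its finite-sample stand-in $(M^T M)/n$ and $\sigma_{\epsilon^*}^2 = 1$ is assumed, so the resulting number is only indicative.

```python
# Indicative ARE computation on the Table 1 design; continues earlier sketches.
se2_star = 1.0                          # assumed sigma_eps*^2
V = (M.T @ M) / n                       # finite-sample stand-in for lim (1/n) X^T X
crisp = se2_star * np.linalg.inv(V)     # limiting covariance of the crisp LSE
fuzzy = n * cov_beta                    # stand-in for sigma_Phi^2 (.) Pi~
are = (np.linalg.det(crisp) / np.linalg.det(fuzzy)) ** (1.0 / (p + 1))
print(are)                              # e(F, C) > 1 favors the fuzzy LSE
```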
Example 2.
The dataset in Table 1 is used as an example of the ARE computation. If we put $\sigma_l^2 = \sigma_r^2 = \sigma_\epsilon^2 = 1$ and $\sigma_{\epsilon^*}^2 = 1$, then we obtain
$$e(F, C) = \left(\frac{|\sigma_{\epsilon^*}^2 V^{-1}|}{|\sigma_\Phi^2 \odot \tilde{\Pi}|}\right)^{\frac{1}{p+1}} = \left(\frac{0.0494}{0.0058}\right)^{\frac{1}{2}} = 2.9101.$$
In this case, we conclude that our estimator $\{\hat{\beta}_n\}$ is more efficient than $\{\breve{\beta}_n\}$. Now let us regard the data as crisp numbers, i.e., $Y_i = (y_i, y_i, y_i)$ and $X_i = (x_i, x_i, x_i)$, and take $\sigma_l^2 = \sigma_r^2 = 0$, $\sigma_\epsilon^2 = 1$ and $\sigma_{\epsilon^*}^2 = 1$. Then we have
$$e(F, C) = \left(\frac{|\sigma_{\epsilon^*}^2 V^{-1}|}{|\sigma_\Phi^2 \odot \tilde{\Pi}|}\right)^{\frac{1}{p+1}} = \left(\frac{0.0494}{0.0555}\right)^{\frac{1}{2}} = 0.9435.$$
We can verify that the efficiency is then approximately 1, as expected.
Example 3
(Monte Carlo simulation). We performed Monte Carlo simulations to examine the performance of the proposed estimator with fuzzy observations. The asymptotic behavior and the accuracy for some finite-sample datasets are investigated. The true values of the parameters are chosen as $(\beta_0, \beta_1) = (1.50, 0.20)$. The modes and the spreads of the independent variable are selected from the normal distributions $N(15, 4^2)$ and $N(2, 0.5^2)$, respectively. In addition, the measurement errors of the modes and spreads are chosen to be Gaussian white noise with mean zero and variance 0.25: $\Phi_i = (\theta_i^l, \epsilon_i, \theta_i^r)$ with $\epsilon_i, \theta_i^l, \theta_i^r \sim NID(0, 0.5^2)$, $i = 1, \ldots, n$. Sample sizes $n = 10, 50, 100$ have been used for small, moderate and large samples, and 1000 different datasets were generated for each sample size. For each dataset we estimated the parameters $\beta_0, \beta_1$ by the proposed estimator, and we report the average estimates and the average mean squared errors (MMSE) over the 1000 replications. These results are shown in Table 2. In addition, the minimum, 1st quartile, median, 3rd quartile, 95 percent point, and maximum of the 1000 estimation errors are given in Table 3 and Table 4.
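A compact sketch of the simulation design just described follows; where the text is terse (e.g., exactly how the spreads are attached to the modes), the choices below are assumptions, not the authors' exact protocol.

```python
# A sketch of the Monte Carlo study: simple fuzzy regression, Equation (16).
import numpy as np

rng = np.random.default_rng(42)
b0, b1 = 1.50, 0.20                          # true parameters from the text

def one_run(n):
    m = rng.normal(15, 4, n)                 # modes of the regressor
    s = np.abs(rng.normal(2, 0.5, n))        # spreads (assumed symmetric around the mode)
    lx, rx = m - s, m + s
    th_l, eps, th_r = (rng.normal(0, 0.5, n) for _ in range(3))
    ly = b0 + b1 * lx + th_l                 # lower bounds of the response
    y  = b0 + b1 * m + eps                   # modes of the response
    ry = b0 + b1 * rx + th_r                 # upper bounds of the response
    ones = np.ones(n)
    L, M, R = (np.column_stack([ones, c]) for c in (lx, m, rx))
    G = L.T @ L + M.T @ M + R.T @ R          # X~^T (.) X~
    g = L.T @ ly + M.T @ y + R.T @ ry        # X~^T (.) y~
    return np.linalg.solve(G, g)             # Equation (16)

for n in (10, 50, 100):
    est = np.array([one_run(n) for _ in range(1000)])
    bias = est.mean(axis=0) - (b0, b1)
    mse = ((est - (b0, b1)) ** 2).mean(axis=0)
    print(n, bias.round(4), mse.round(4))    # bias and MSE shrink with n
```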
The accuracy of the estimators was thus assessed for several sample sizes; our simulation results indicate that the proposed estimation procedure yields smaller mean bias and smaller mean squared errors as the sample size increases.

7. Conclusions

In this paper we considered a multiple fuzzy-input–fuzzy-output regression model with fuzzy error terms, described by a suitable matrix called the triangular fuzzy matrix. We proposed a simple formula for the fuzzy least squares estimator and examined its fundamental properties. It appears that the suggested estimator is the BLUE, so it is optimal in this sense. Moreover, its asymptotic normality under quite general assumptions opens a new perspective for constructing statistical tests and confidence intervals useful both in model validation and in forecasting. These topics will be considered in further research. Another open problem is to establish analogous results for more general fuzzy regression models based on trapezoidal fuzzy numbers, LR fuzzy numbers, and so on.

Author Contributions

J.H.Y. developed the conceptualization, proved the theorems, performed the data analysis and drafted the manuscript. Investigation, reviewing and editing were done by P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1A01011131).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  2. Tanaka, H.; Uejima, S.; Asai, K. Linear regression analysis with fuzzy model. IEEE Trans. Syst. Man Cybern. 1982, 12, 903–907.
  3. Diamond, P. Fuzzy least squares. Inform. Sci. 1988, 46, 141–157.
  4. Bargiela, A.; Pedrycz, W.; Nakashima, T. Multiple regression with fuzzy data. Fuzzy Sets Syst. 2007, 158, 2169–2188.
  5. Celminš, A. Least squares model fitting to fuzzy vector data. Fuzzy Sets Syst. 1987, 22, 245–269.
  6. Choi, S.H.; Yoon, J.H. General fuzzy regression using least squares method. Int. J. Syst. Sci. 2010, 4, 477–485.
  7. Choi, S.H.; Jung, H.Y.; Lee, W.J.; Yoon, J.H. Fuzzy regression model with monotonic response function. Commun. Korean Math. Soc. 2018, 3, 973–983.
  8. Diamond, P.; Körner, R.K. Extended fuzzy linear models and least squares estimates. Fuzzy Sets Syst. 1997, 33, 15–32.
  9. D'Urso, P.; Gastaldi, T. A least-squares approach to fuzzy linear regression analysis. Comput. Stat. Data Anal. 2000, 34, 427–440.
  10. D'Urso, P. Linear regression analysis for fuzzy/crisp input and fuzzy/crisp output data. Comput. Stat. Data Anal. 2003, 42, 47–72.
  11. González-Rodríguez, G.; Blanco, A.; Colubi, A.; Lubiano, M.A. Estimation of a simple linear regression model for fuzzy random variables. Fuzzy Sets Syst. 2009, 160, 357–370.
  12. Jung, H.; Yoon, J.H.; Choi, S.H. Fuzzy linear regression using rank transform method. Fuzzy Sets Syst. 2015, 274, 97–108.
  13. Grzegorzewski, P.; Mrowka, E. Linear Regression Analysis for Fuzzy Data. In Proceedings of the 10th IFSA World Congress-IFSA, Istanbul, Turkey, 29 June–2 July 2003; pp. 228–231.
  14. Grzegorzewski, P.; Mrowka, E. Regression Analysis with Fuzzy Data. In Soft Computing-Tools, Techniques and Applications; Grzegorzewski, P., Krawczak, M., Eds.; Zadro Exit: Warszawa, Poland, 2004; pp. 65–76.
  15. Hong, D.H.; Hwang, C. Extended fuzzy regression models using regularization method. Inform. Sci. 2004, 164, 31–46.
  16. Kim, H.K.; Yoon, J.H.; Li, Y. Asymptotic properties of least squares estimation with fuzzy observations. Inform. Sci. 2008, 178, 439–451.
  17. Kim, I.K.; Lee, W.-J.; Yoon, J.H.; Choi, S.H. Fuzzy regression model using trapezoidal fuzzy numbers for re-auction data. Int. J. Fuzzy Log. Intell. Syst. 2016, 16, 72–80.
  18. Lee, W.-J.; Jung, H.; Yoon, J.H.; Choi, S.H. The statistical inferences of fuzzy regression based on bootstrap techniques. Soft Comput. 2015, 19, 883–890.
  19. Ming, M.; Friedman, M.; Kandel, A. General fuzzy least squares. Fuzzy Sets Syst. 1997, 88, 107–118.
  20. Nasrabadi, M.M.; Nasrabadi, E. A mathematical-programming approach to fuzzy linear regression analysis. Appl. Math. Comput. 2004, 155, 873–881.
  21. Sakawa, M.; Yano, H. Multiobjective fuzzy linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst. 1992, 47, 173–181.
  22. Yang, M.; Lin, T. Fuzzy least-squares linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst. 2002, 126, 389–399.
  23. Yoon, J.H.; Choi, S.H. Componentwise fuzzy linear regression using least squares estimation. J. Mult.-Valued Log. Soft Comput. 2009, 55, 137–153.
  24. Yoon, J.H.; Choi, S.H. Fuzzy rank linear regression model. In AI 2009: Advances in Artificial Intelligence; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5866, pp. 617–626.
  25. Diamond, P. Fuzzy kriging. Fuzzy Sets Syst. 1989, 46, 315–332.
  26. Körner, R.; Näther, W. Linear regression with random fuzzy variables: Extended classical estimates, best linear estimates, least squares estimates. Inform. Sci. 1998, 109, 95–118.
  27. Näther, W. Linear statistical inference for random fuzzy data. Statistics 1997, 29, 221–240.
  28. Näther, W. On random fuzzy variables of second order and their application to linear statistical inference with fuzzy data. Metrika 2000, 51, 201–221.
  29. Näther, W. Regression with fuzzy random data. Comput. Stat. Data Anal. 2006, 51, 235–252.
  30. Parchami, A.; Mashinchi, M. Fuzzy estimation for process capability indices. Inform. Sci. 2006, 177, 1452–1462.
  31. Chang, P.-T.; Lee, E.S. Fuzzy least absolute deviations regression and the conflicting trends in fuzzy parameters. Comput. Math. Appl. 1994, 28, 89–101.
  32. Yoon, J.H.; Choi, S.H. Separate fuzzy regression with crisp input and fuzzy output. J. Korean Data Inform. Sci. 2007, 18, 301–314. (In Korean)
  33. Yoon, J.H.; Choi, S.H. Fuzzy least squares estimation with new fuzzy operations. Adv. Intell. Syst. Comput. 2013, 190, 193–202.
  34. Yoon, J.H.; Choi, S.H.; Grzegorzewski, P. On asymptotic properties of the multiple fuzzy least squares estimator. In Soft Methods for Data Science; Ferraro, M.B., Giordani, P., Vantaggi, B., Gagolewski, M., Ángeles Gil, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 525–532.
  35. Yoon, J.H.; Jung, H.Y.; Lee, W.J.; Choi, S.H. Optimal properties of a fuzzy least squares estimator based on new operations. In Proceedings of the SCIS&ISIS 2014, Kitakyushu, Japan, 3–6 December 2014; pp. 36–41.
  36. Yoon, J.H. Fuzzy regression analysis for generalized fuzzy data based on L2-distance. Int. J. Fuzzy Syst. 2020, submitted.
  37. Pedrycz, W. Why triangular membership functions? Fuzzy Sets Syst. 1994, 64, 21–30.
  38. Chang, P.-T.; Lee, E.S. Fuzzy linear regression with spreads unrestricted in sign. Comput. Math. Appl. 1994, 28, 61–70.
  39. Drygas, H. Weak and strong consistency of the least squares estimates in regression models. Z. Wahrscheinlichkeitstheorie Verw. Geb. 1976, 34, 119–127.
  40. Serfling, R.J. Approximation Theorems of Mathematical Statistics; John Wiley & Sons, Inc.: New York, NY, USA; London, UK; Sydney, Australia; Toronto, ON, Canada, 1980.
Table 1. Numerical data for an example.
ID     $Y_i = (l_{y_i}, y_i, r_{y_i})$        $X_i = (l_{x_i}, x_i, r_{x_i})$
1     (3.40, 4.00, 4.80)      (16.80, 21.00, 23.10)
2     (2.70, 3.00, 3.30)      (12.75, 15.00, 17.25)
3     (3.15, 3.50, 3.85)      (13.50, 15.00, 17.25)
4     (1.60, 2.00, 2.40)      ( 7.65,    9.00, 10.35)
5     (2.70, 3.00, 3.45)      (10.80, 12.00, 13.20)
6     (2.97, 3.50, 4.20)      (14.40, 18.00, 19.80)
7     (2.25, 2.50, 2.88)      ( 5.40,    5.50,    7.20)
8     (2.00, 2.50, 3.00)      (10.20, 12.00, 14.40)
Table 2. Average estimates and average mean squared errors.
          n = 10 (MMSE)     n = 50 (MMSE)     n = 100 (MMSE)
$\hat{\beta}_0$    1.50   (0.0124)     1.50   (0.0124)   1.50   (0.0124)
$\hat{\beta}_1$    0.20   (0.0124)     0.20   (0.0124)   0.20   (0.0124)
Table 3. Characteristics of estimate errors for $\hat{\beta}_0$.
n     Min.     1st Quar.     Median     3rd Quar.     95%     Max.
10   1.50     0.0124   1.50   0.0124   1.50   0.0124
50   0.20     0.0124   0.20   0.0124   0.20   0.0124
100   0.20     0.0124   0.20   0.0124   0.20   0.0124
Table 4. Characteristics of estimate errors for $\hat{\beta}_1$.
n     Min.     1st Quar.     Median     3rd Quar.     95%     Max.
10 1.50   0.0124   1.50   0.0124   1.50   0.0124
50 0.20   0.0124   0.20   0.0124   0.20   0.0124
100 0.20   0.0124   0.20   0.0124   0.20   0.0124
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
