Abstract
A fuzzy least squares estimator in a multiple fuzzy-input–fuzzy-output linear regression model is considered. The paper provides an explicit formula for the estimator of the fuzzy regression coefficients. Several operations for fuzzy numbers and for fuzzy matrices with fuzzy components are proposed, and the algebraic properties needed to prove the main theorems are discussed. Using the proposed operations, a formula for the variance of the estimator is derived, and the estimator is shown to possess several important finite-sample and asymptotic properties: it is the Best Linear Unbiased Estimator (BLUE), asymptotically normal and strongly consistent. Confidence regions for the coefficient parameters and the asymptotic relative efficiency (ARE) are also discussed. In addition, several examples are provided, including a Monte Carlo simulation study showing the validity of the proposed theorems.
1. Introduction
Regression analysis is commonly perceived as one of the most useful tools in statistical modeling. If the data can be observed precisely, classical regression is usually a sufficient solution. However, we encounter many situations where the observations cannot be obtained precisely. In such cases we need a framework to handle uncertainty coming from two sources: randomness and imprecision. While randomness can be satisfactorily managed by probability theory, one has to adopt a suitable approach for modeling imprecise data. Applying fuzzy sets proposed by Zadeh [1], Tanaka et al. [2] introduced fuzzy regression analysis. On the other hand, Diamond [3] generalized the main technique of regression analysis, i.e., the least squares method, to fuzzy numbers. Subsequently, numerous researchers considered different fuzzy regression models, known as crisp-input–fuzzy-output (CIFO) or fuzzy-input–fuzzy-output (FIFO) models, both with crisp or fuzzy parameters (see, e.g., [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]). Far fewer efforts have been dedicated to exploring the statistical properties of the estimators in fuzzy regression models. For situations where the data have an assumed error structure, Diamond [25] and Näther [26,27,28,29] discussed fuzzy best linear unbiased estimators (FBLUEs). Kim et al. [16] proved asymptotic properties of fuzzy least squares estimators (FLSEs) for a fuzzy simple linear regression model. Due to the complexity of the mathematical formulas describing fuzzy least squares estimators, some authors use α-cuts of fuzzy numbers to express the estimates (e.g., [21,30]), while others separate the estimators into several parts, e.g., corresponding to the mode and the two spreads in the case of triangular fuzzy numbers (see, e.g., [6,26,31,32]). Moreover, many authors do not give any analytic formulas for the desired estimators but determine the estimates directly from the normal equations (see [3,19]).
To overcome these problems, Yoon et al. [33,34,35] redefined the mathematical model of fuzzy linear regression using the so-called triangular fuzzy matrix and suitable operations defined both on triangular fuzzy numbers and on triangular fuzzy matrices. Even though this paper deals with triangular fuzzy numbers, the approach can be extended to the general case [36]. Moreover, the importance of triangular and trapezoidal fuzzy numbers has been emphasized in [37]. This approach enables us to determine the fuzzy least squares estimators of the regression parameters in a concise form which is also useful for exploring the statistical properties of the estimators. The asymptotic theory for the fuzzy multiple regression model has hardly been discussed in the literature so far. In this contribution we continue the examination of the fuzzy least squares estimator obtained there, focusing on its fundamental finite-sample and asymptotic properties.
The paper is organized as follows: In Section 2, we introduce basic notation related to triangular fuzzy numbers, triangular fuzzy matrices and the operations defined on them which are used later in the contribution. Some algebraic properties needed to prove the theorems are also discussed in Section 2. In Section 3, we describe the fuzzy linear regression model based on the authors’ previous studies [33,35]. Next, in Section 4 we prove that the fuzzy least squares estimator given in the previous section is the Best Linear Unbiased Estimator (BLUE). Then we present the next important results of the paper, showing that under some general assumptions the aforementioned estimator is asymptotically normal (Section 5) and strongly consistent (Section 6). In addition, confidence regions for the coefficient parameters and the asymptotic relative efficiency (ARE) are also discussed in Section 6. Several examples are provided, including a Monte Carlo simulation study. Section 7 concludes.
2. Preliminaries
Each triangular fuzzy number A can be represented by an ordered triple, i.e., , where a is the mode of A, while and denote the lower and the upper bound of the support of A, respectively. Further on, let denote the family of all triangular fuzzy numbers defined on the real numbers .
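For reference, a triangular fuzzy number given by such a triple (mode, lower bound, upper bound) is usually identified with the following membership function; the notation A = (a, a^L, a^U) below is only a stand-in for the paper's (unrendered) triple and follows the standard convention:

```latex
% Standard membership function of a triangular fuzzy number A = (a, a^L, a^U),
% with a^L < a < a^U (a sketch of the usual convention, not the paper's exact notation):
\mu_A(x) =
\begin{cases}
\dfrac{x - a^{L}}{a - a^{L}}, & a^{L} \le x \le a,\\[4pt]
\dfrac{a^{U} - x}{a^{U} - a}, & a < x \le a^{U},\\[4pt]
0, & \text{otherwise}.
\end{cases}
```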
Besides the well-known basic operations on fuzzy numbers, like addition
where , or scalar multiplication
some other operations defined in are sometimes useful. Let us recall here some concepts proposed in [33]:
Clearly, , but the output of (3) is a crisp number (i.e., it is isomorphic with the corresponding real value).
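As an illustration of how such operations can be handled computationally, the sketch below encodes a triangular fuzzy number as a (mode, lower, upper) triple and implements the standard addition and scalar multiplication; the `dot_crisp` function is only a plausible reading of an inner-product-type operation returning a crisp number, not the exact definition (3) from [33].

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number stored as (mode, lower bound, upper bound)."""
    mode: float
    low: float
    up: float

    def __add__(self, other: "TFN") -> "TFN":
        # Standard componentwise addition of triangular fuzzy numbers.
        return TFN(self.mode + other.mode, self.low + other.low, self.up + other.up)

    def scale(self, c: float) -> "TFN":
        # Scalar multiplication; for c >= 0 the endpoints keep their roles,
        # for c < 0 the lower and upper bounds are swapped.
        if c >= 0:
            return TFN(c * self.mode, c * self.low, c * self.up)
        return TFN(c * self.mode, c * self.up, c * self.low)

def dot_crisp(a: TFN, b: TFN) -> float:
    """A hypothetical crisp-valued product combining modes and spreads;
    it only mimics the idea that some operations in [33] return crisp numbers."""
    return a.mode * b.mode + (a.mode - a.low) * (b.mode - b.low) + (a.up - a.mode) * (b.up - b.mode)

# Example usage:
A, B = TFN(2.0, 1.5, 2.5), TFN(3.0, 2.0, 4.5)
print(A + B, A.scale(2.0), dot_crisp(A, B))
```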
Further on we will also need some operations defined on matrices of fuzzy numbers. We denote by the set of all real crisp matrices with nonnegative elements and let be the set of all triangular fuzzy matrices (t.f.m.), i.e., matrices whose elements belong to . For any two triangular fuzzy matrices and , a crisp matrix and a constant we have
Of course, and .
We can also define the following three types of fuzzy scalar multiplications of a crisp matrix:
where , and .
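A convenient way to work with such fuzzy matrices numerically is to store the modes and the two bounds as three ordinary matrices; the following sketch (a minimal convention of ours, not the paper's formal definition) shows componentwise addition and left multiplication by a nonnegative crisp matrix.

```python
import numpy as np

class TFMatrix:
    """Triangular fuzzy matrix stored as three crisp matrices: modes, lower and upper bounds."""
    def __init__(self, mode: np.ndarray, low: np.ndarray, up: np.ndarray):
        self.mode, self.low, self.up = map(np.asarray, (mode, low, up))

    def __add__(self, other: "TFMatrix") -> "TFMatrix":
        # Componentwise addition of two triangular fuzzy matrices.
        return TFMatrix(self.mode + other.mode, self.low + other.low, self.up + other.up)

    def left_mul(self, C: np.ndarray) -> "TFMatrix":
        # Multiplication by a crisp matrix C with nonnegative entries keeps the roles
        # of the lower and upper bounds (a sketch of one natural convention).
        C = np.asarray(C)
        assert (C >= 0).all(), "this sketch assumes a nonnegative crisp matrix"
        return TFMatrix(C @ self.mode, C @ self.low, C @ self.up)
```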
The proofs of the following properties of the foregoing operations in and in are straightforward.
Proposition 1.
For any and the following properties hold:
- 1.
- and
- 2.
- 3.
- 4.
- 5.
- .
Proposition 2.
Let . Then we have
- 1.
- and
- 2.
- 3.
- 4.
- 5.
- 6.
- and .
Proposition 3.
For and the following properties hold:
- 1.
- 2.
- and
- 3.
- and
- 4.
- and
- 5.
- 6.
- .
Proposition 4.
For and we have
- 1.
- and
- 2.
- 3.
- 4.
- 5.
- 6.
- 7.
- .
3. Fuzzy Least Squares Estimation
Consider a classical simple linear model
where is a vector of observed responses, X is a design matrix with size of explanatory variables , denotes a p-dimensional vector of unknown parameters, and is a vector of errors. We usually assume that and , where . It is also usual to take , .
The most common estimator in the simple regression model is the least squares estimator (LSE) given by
where the design matrix X is supposed to have full rank. By the Gauss–Markov theorem, (14) is the best linear unbiased estimator (BLUE) of the parameters, where “best” means having the lowest variance among all linear unbiased estimators. Moreover, estimator (14) is strongly consistent under certain conditions on the design matrix, i.e., .
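For concreteness, the classical least squares estimator referred to above can be computed as in the following sketch; the variable names are ours and the formula is the standard (X'X)^{-1} X'y.

```python
import numpy as np

def ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Classical least squares estimator beta_hat = (X'X)^{-1} X'y,
    assuming the design matrix X has full column rank."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Example usage with a small simulated design:
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
print(ols(X, y))  # close to [1.0, 2.0]
```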
In this paper, we consider a fuzzy generalization of (13), i.e., the following linear regression model with fuzzy inputs and fuzzy outputs
where and , while denote unknown crisp regression parameters to be estimated. Moreover, let , , denote fuzzy error terms which express both randomness and fuzziness allowing negative spreads [8,16,38], i.e., , where are crisp random variables which satisfy the following assumptions:
Assumption A.
- (A1)
- are i.i.d. r.v.’s such that and
- (A2)
- are i.i.d. r.v.’s such that and
- (A3)
- are i.i.d. r.v.’s such that and
- (A4)
- , and are mutually uncorrelated.
It can be shown (see [33]) that defining the following design matrix
and a vector and assuming that , we obtain the following least squares estimator
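One common computational device for fuzzy least squares with triangular data (not necessarily the paper's exact construction, whose design matrix and estimator (16) are not reproduced above) is to stack the mode and spread components and solve a single crisp least squares problem, as in the following hedged sketch.

```python
import numpy as np

def fuzzy_lse(X_mode, X_low, X_up, Y_mode, Y_low, Y_up):
    """A hedged sketch of a fuzzy least squares estimate for triangular data.

    Each X_* is an (n, p) crisp matrix and each Y_* an (n,) crisp vector holding the
    modes / lower bounds / upper bounds of the observations.  The three components
    are stacked and one crisp coefficient vector is fitted to all of them at once;
    the paper's estimator (16) is defined via its own fuzzy-matrix operations and
    need not coincide with this construction.
    """
    X = np.vstack([X_mode, X_low, X_up])
    y = np.concatenate([Y_mode, Y_low, Y_up])
    return np.linalg.solve(X.T @ X, X.T @ y)
```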
4. BLUE in Fuzzy Regression Model
The following theorems provide formulas to find the expectation and variance of our estimator.
Lemma 1.
Let where . Then
Proof.
Let . Then
where for . □
Let , and . The matrix is the diagonal matrix with the diagonals , while for and .
Lemma 2.
Let where . Then
Proof.
Let and , where , . Since , we have
where . The diagonal elements of , i.e., for , are , where . Hence, by (A4), we obtain
Note that is a crisp vector, not a triangular fuzzy number.
If by (A1)–(A3), the off-diagonal elements of are
On the other hand
where . Thus, we obtain .
Keeping also in mind that where is the identity in , we obtain
Thus the proof is completed. □
Corollary 1.
Let be the least squares estimator of β. Then
Proof.
By (16) we have , where . Hence, by Lemma 2, we get
which completes the proof. □
Theorem 1.
Let be the least squares estimator given by (16). Then is the unbiased estimator of β.
Proof.
Firstly, we want to show that . Since
thus, by Assumptions A we obtain
Now, by Lemma 1, we conclude that
which means that is the unbiased estimator of . □
Now we are able to state the main theorem in this section.
Theorem 2.
The least squares estimator , given by (16), is the best linear unbiased estimator of β.
Proof.
It is clear that is a linear estimator with respect to the fuzzy operation ⊙, i.e., a fuzzy-type linear estimator. By Theorem 1 we also know that is unbiased. Therefore, it is enough to prove that has the minimum variance among the linear unbiased estimators of .
Let be an arbitrary linear unbiased estimator of . Then
for some . By Corollary 1 we have
Since
hence, by the unbiasedness of , it follows that , where is the zero matrix in . Let , where and . Then
where . So for all . Since we assumed , all components are non-negative, i.e., for all . Thus we conclude that for all . Moreover, since , and , we have and , so for all . Thus
where is the zero triangular fuzzy matrix in which has all elements . Of course, . Therefore, Equation (19) reduces to
Let . Then for , appear on the diagonal of . The diagonal elements of are , where . Thus the diagonal elements of are , where . Hence,
For any linear unbiased estimator we get for all . Thus, has the minimum variance among the linear unbiased estimators for all . This proves the theorem. □
Example 1.
With the modified data of [7] in Table 1, we evaluate the covariance matrices of several fuzzy-type linear estimators in order to compare their variances. The data involve student grades and family income. The data were fuzzified as triangular fuzzy numbers. We compare four fuzzy-type linear estimators.
Table 1.
Numerical data for an example.
is our estimator and , , are modified fuzzy-type linear estimators. We define the estimators as follows:
and
where , and and , and is the mode of , are the left and right spreads of , respectively. We define all except , and . All except , and , and for all , except , and . We get the estimates: and
By applying Theorems 1 and 2, we obtain the covariance matrices of the estimators as follows:
and
is smaller than 1.2367 and 1.2466. is the smallest among 0.0090, 0.0091, 0.0092 and 0.0190. Finally, is smaller than 0.0156 and 0.0255. Hence we conclude that our estimator has the minimum variance among , , and .
5. Asymptotic Normality
We start this section by citing some theorems concerning the Central Limit Theorem (CLT) and Strong Law of Large Numbers (SLLN) for martingales which will be useful in the proof of the main result.
Theorem 3
(SLLN for martingales). Let , , be a martingale such that for and . Suppose that is a sequence of positive constants increasing to ∞ as , and . Then , where the notation denotes almost sure convergence.
Theorem 4
(Hájek–Šidák CLT). Let be a sequence of i.i.d. random variables (r.v.’s) with mean μ and a finite variance . Let be a sequence of real vectors . If
then
where the notation stands for the convergence in law.
The proof of Theorem 4 can be found in [39].
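Since the condition and the limit statement above did not render, the standard statement of this result (written in our own notation, which may differ slightly from the paper's) reads:

```latex
% Hajek–Sidak CLT in standard notation: X_1, X_2, \dots i.i.d. with mean \mu and
% variance \sigma^2 < \infty, and c_n = (c_{n1},\dots,c_{nn}) real constants.
\text{If } \quad
\frac{\max_{1\le i\le n} c_{ni}^{2}}{\sum_{j=1}^{n} c_{nj}^{2}} \longrightarrow 0
\quad (n \to \infty),
\qquad \text{then} \qquad
\frac{\sum_{i=1}^{n} c_{ni}\,(X_i - \mu)}{\sigma \sqrt{\sum_{j=1}^{n} c_{nj}^{2}}}
\;\xrightarrow{\;\mathcal{L}\;}\; N(0,1).
```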
Theorem 5
(Courant–Fischer minimax theorem). For any real symmetric matrix A its eigenvalues satisfy
where C is a subspace of .
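In standard notation (ours, since the displayed formula is missing), the Courant–Fischer characterization of the ordered eigenvalues λ₁ ≤ … ≤ λ_p of a real symmetric p × p matrix A is:

```latex
% Courant–Fischer minimax characterization of the k-th smallest eigenvalue of a
% real symmetric p x p matrix A, where C ranges over subspaces of R^p:
\lambda_k \;=\; \min_{\dim C = k}\; \max_{\substack{x \in C \\ x \neq 0}}
\frac{x^{\top} A\, x}{x^{\top} x}
\;=\; \max_{\dim C = p-k+1}\; \min_{\substack{x \in C \\ x \neq 0}}
\frac{x^{\top} A\, x}{x^{\top} x}.
```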
To obtain some asymptotic properties of our least squares estimator in the generalized fuzzy regression model, the following additional assumption is required besides Assumption A given in Section 3:
Assumption B.
- (B1)
- as , where denotes the i-th row in a fuzzy matrix .
- (B2)
- as for some .
Now we are able to formulate one of the main results of this contribution. Let be the estimator of based on a sample of size n.
Theorem 6.
If model (15) satisfies Assumptions A and B, then the least squares estimator is asymptotically normal, i.e.,
where .
Proof.
By (16) one can find that
so, consequently, .
Let be an arbitrary but fixed vector. Moreover, let , where . If we denote , where , then (see [33])
We claim that satisfies the regularity condition of Theorem 4. Then we can obtain the asymptotic distribution of . Let be the i-th row of . Then we get . Since we have . Hence
Therefore, by Theorem 5 (see [33])
becomes
where stands for the largest characteristic value of matrix Q. Thus
which, by Assumption (B1), converges to 0 as . It means that
as . So, consequently,
by the Hájek–Šidák Central Limit Theorem (Theorem 4).
On the other hand, one may notice that
as (by Assumption B2). Thus
which completes the proof. □
6. Strong Consistency and Confidence Region
The weak consistency is a direct consequence of the asymptotic normality.
Theorem 7.
For the model (15), under Assumptions A and B, is a weakly consistent estimator of β, that is
where the notation denotes convergence in probability.
Proof.
From the fact that converges in law to a non-degenerate random variable, it follows that , where stands for boundedness in probability. This implies that each component is weakly consistent, and the claim follows. □
One of the main results of this section is the following theorem, establishing the strong consistency of the fuzzy least squares estimator. Moreover, this theorem shows that asymptotic normality is not needed for strong consistency, and hence some of the foregoing assumptions may be relaxed.
Theorem 8.
For the model (15), suppose that Assumptions A and B are fulfilled. Furthermore, assume that () as , and the sequence of matrices is bounded. Then defined in (16) is strongly consistent for β, that is
where the notation denotes almost sure convergence.
Proof.
From the fact that and under the assumption for all j, it can be proved by Theorem 3 that
almost surely. The result of the theorem is now an obvious consequence of the assumption. The proof is completed. □
Next, we provide an approximate confidence region for based on the large-sample normality of the FLSE. The asymptotic normality of , derived in Theorem 6, under the regularity conditions, suggests the use of the pivotal quantity of the form
where .
The following corollary gives the large-sample distribution of .
Corollary 2.
Under the conditions of Theorem 6, has asymptotically a chi-squared distribution with degrees of freedom.
Proof.
Corollary 2 follows immediately from Theorem 6. □
Corollary 3.
We can define a confidence region as the set of such that
where and is the th quantile of the chi-squared distribution. Then, for large n, provides a percent confidence region for β.
Proof.
It is directly obtained from Corollary 2. □
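To make the corollary concrete, the following sketch checks whether a candidate coefficient vector lies in the asymptotic confidence region; the pivotal quantity is written in the usual quadratic form with a plug-in variance estimate, which is our reading of the (unrendered) definition above rather than its exact formula.

```python
import numpy as np
from scipy.stats import chi2

def in_confidence_region(beta_hat, beta0, XtX, sigma2_hat, alpha=0.05):
    """Return True if beta0 lies in the approximate (1 - alpha) confidence region.

    Uses the usual large-sample quadratic form
        W = (beta_hat - beta0)' (X'X) (beta_hat - beta0) / sigma2_hat,
    compared with the chi-squared quantile with p degrees of freedom.  This mirrors
    the structure of the region in Corollary 3 under our assumptions about the
    unrendered pivotal quantity.
    """
    d = np.asarray(beta_hat) - np.asarray(beta0)
    W = float(d @ XtX @ d) / sigma2_hat
    p = len(d)
    return W <= chi2.ppf(1 - alpha, df=p)
```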
Remark 1.
Note that it is well known that, under certain regularity conditions, the sequence of crisp LSEs has asymptotically a normal distribution in the sense that
where is the variance of errors in model and V is given by as . Thus, a percent approximate confidence region based on the LSE, denoted by , is the set of such that
where and . Then, for large n, provides a percent confidence region for β.
Now we compare the sequences of the FLSE and the classical crisp LSE . A numerical measure of the asymptotic relative efficiency (ARE) of with respect to , denoted by , is based on the inverse ratio of their generalized limiting variances (an ARE greater than one implies a strictly smaller asymptotic confidence region) and is given by [40]
If the ARE of the FLSE with respect to the crisp LSE is greater than 1, we can say that is more efficient than .
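The generalized variance of a limiting covariance matrix is commonly taken as its determinant, so one conventional way to compute such an ARE numerically (our convention, since the displayed formula is not visible) is:

```python
import numpy as np

def are_generalized_variance(cov_flse: np.ndarray, cov_lse: np.ndarray) -> float:
    """Asymptotic relative efficiency of the FLSE with respect to the crisp LSE,
    computed as the inverse ratio of generalized (determinant) limiting variances.
    Values greater than 1 favour the FLSE.  This is one common convention and
    need not match the exact expression used in the paper."""
    return float(np.linalg.det(cov_lse) / np.linalg.det(cov_flse))
```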
Example 2.
The dataset in Table 1 is used as an example of the ARE. If we put and , then we obtain
In this case, it can be concluded that our estimator is more efficient than . Let us regard the data as crisp numbers, i.e., and . We take , and . Then we have
We can verify that the efficiency is approximately 1, as expected.
Example 3
(Monte Carlo Simulation). We performed Monte Carlo simulations to examine the performance of the proposed estimator with fuzzy observations discussed in this paper. The asymptotic behavior and the accuracy for some finite-sample datasets are investigated. Two independent variables with parameters are chosen for this simulation: . For the two independent variables, the spreads and the modes are drawn from the normal distributions , , respectively. In addition, the measurement errors of the modes and spreads are chosen to be Gaussian white noise with mean zero and variance 0.25: , . Sample sizes of have been used to represent small, moderate and large samples. For each sample size n, 1000 different datasets were generated. For each dataset we estimated the parameters , by the proposed estimator, and report the average estimates and the average mean squared error (MSE) over the 1000 simulations. These results are shown in Table 2. In addition, the minimum, 1st quartile, median, 3rd quartile, 95th percentile, and maximum of the 1000 estimation errors are given in Table 3 and Table 4.
Table 2.
Average estimates and average mean squared errors.
Table 3.
Characteristics of estimate errors for .
Table 4.
Characteristics of estimate errors for .
Thus, the accuracy of the estimators was assessed for several sample sizes; our simulation results indicate that the proposed estimation procedure yields smaller mean bias and smaller mean squared error in the parameter estimates as the sample size increases.
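A minimal sketch of such a simulation is given below; the true coefficients, the distributions of the modes and spreads, and the way the estimate is computed are placeholders (the paper's exact settings are not all visible above), but the loop structure mirrors the described study: generate 1000 datasets per sample size, estimate the coefficients, and average the estimates and squared errors.

```python
import numpy as np

rng = np.random.default_rng(2020)
beta_true = np.array([1.0, 2.0, -1.0])          # placeholder intercept and two slopes
noise_sd = 0.5                                   # variance 0.25 for the white-noise errors

def simulate_once(n: int) -> np.ndarray:
    # Modes of the two explanatory variables (placeholder normal distributions).
    X = np.column_stack([np.ones(n), rng.normal(5, 2, n), rng.normal(3, 1, n)])
    # Response modes with Gaussian white noise; the spreads are handled analogously
    # in the full fuzzy procedure, here only the mode component is sketched.
    y = X @ beta_true + rng.normal(0, noise_sd, n)
    return np.linalg.solve(X.T @ X, X.T @ y)

def monte_carlo(n: int, reps: int = 1000):
    estimates = np.array([simulate_once(n) for _ in range(reps)])
    avg_est = estimates.mean(axis=0)
    mse = ((estimates - beta_true) ** 2).mean(axis=0)
    return avg_est, mse

for n in (30, 100, 500):                         # placeholder small/moderate/large sizes
    avg_est, mse = monte_carlo(n)
    print(n, np.round(avg_est, 3), np.round(mse, 4))
```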
7. Conclusions
In this paper we considered a multiple fuzzy-input–fuzzy-output regression model with fuzzy error terms, described by a suitable matrix called the triangular fuzzy matrix. We proposed a simple formula for the fuzzy least squares estimator and examined its fundamental properties. It appears that the suggested estimator is BLUE, so it is in some sense optimal. Moreover, its asymptotic normality under quite general assumptions opens a new perspective for constructing statistical tests and confidence intervals useful both in model validation and in forecasting. These topics will be considered in further research. Another open problem is to establish analogous results in more general fuzzy regression models based on trapezoidal fuzzy numbers, LR-fuzzy numbers and so on.
Author Contributions
J.H.Y. developed the conceptualization, proving the theorems, data analysis and drafted the manuscript. Moreover, investigation, reviewing, and editing were done by P.G. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1A01011131).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
- Tanaka, H.; Uejima, S.; Asai, K. Linear regression analysis with fuzzy model. IEEE Trans. Syst. Man Cybernet 1982, 12, 903–907.
- Diamond, P. Fuzzy Least Squares. Inform. Sci. 1988, 46, 141–157.
- Bargiela, A.; Pedrycz, W.; Nakashima, T. Multiple regression with fuzzy data. Fuzzy Sets Syst. 2007, 158, 2169–2188.
- Celminš, A. Least squares model fitting to fuzzy vector data. Fuzzy Sets Syst. 1987, 22, 245–269.
- Choi, S.H.; Yoon, J.H. General fuzzy regression using least squares method. Int. J. Syst. Sci. 2010, 4, 477–485.
- Choi, S.H.; Jung, H.Y.; Lee, W.J.; Yoon, J.H. Fuzzy regression model with monotonic response function. Commun. Korean Math. Soc. 2018, 3, 973–983.
- Diamond, P.; Körner, R.K. Extended fuzzy linear models and least squares estimates. Fuzzy Sets Syst. 1997, 33, 15–32.
- D’Urso, P.; Gastaldi, T. A least-squares approach to fuzzy linear regression analysis. Comput. Stat. Data Anal. 2000, 34, 427–440.
- D’Urso, P. Linear regression analysis for fuzzy/crisp input and fuzzy/crisp output data. Comput. Stat. Data Anal. 2003, 42, 47–72.
- González-Rodríguez, G.; Blanco, A.; Colubi, A.; Lubiano, M.A. Estimation of a simple linear regression model for fuzzy random variables. Fuzzy Sets Syst. 2009, 160, 357–370.
- Jung, H.; Yoon, J.H.; Choi, S.H. Fuzzy linear regression using rank transform method. Fuzzy Sets Syst. 2015, 274, 97–108.
- Grzegorzewski, P.; Mrowka, E. Linear Regression Analysis for Fuzzy Data. In Proceedings of the 10th IFSA World Congress-IFSA, Istanbul, Turkey, 29 June–2 July 2003; pp. 228–231.
- Grzegorzewski, P.; Mrowka, E. Regression Analysis with Fuzzy Data. In Soft Computing-Tools, Techniques and Applications; Grzegorzewski, P., Krawczak, M., Eds.; Zadro Exit: Warszawa, Poland, 2004; pp. 65–76.
- Hong, D.H.; Hwang, C. Extended fuzzy regression models using regularization method. Inform. Sci. 2004, 164, 31–46.
- Kim, H.K.; Yoon, J.H.; Li, Y. Asymptotic properties of least squares estimation with fuzzy observations. Inform. Sci. 2008, 178, 439–451.
- Kim, I.K.; Lee, W.-J.; Yoon, J.H.; Choi, S.H. Fuzzy Regression Model Using Trapezoidal Fuzzy Numbers for Re-auction Data. Int. J. Fuzzy Log. Intell. Syst. 2016, 16, 72–80.
- Lee, W.-J.; Jung, H.; Yoon, J.H.; Choi, S.H. The statistical inferences of fuzzy regression based on bootstrap techniques. Soft Comput. 2015, 19, 883–890.
- Ming, M.; Friedman, M.; Kandel, A. General fuzzy least squares. Fuzzy Sets Syst. 1997, 88, 107–118.
- Nasrabadi, M.M.; Nasrabadi, E. A mathematical-programming approach to fuzzy linear regression analysis. Appl. Math. Comput. 2004, 155, 873–881.
- Sakawa, M.; Yano, H. Multiobjective fuzzy linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst. 1992, 47, 173–181.
- Yang, M.; Lin, T. Fuzzy least-squares linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst. 2002, 126, 389–399.
- Yoon, J.H.; Choi, S.H. Componentwise fuzzy linear regression using least squares estimation. J. Mult.-Valued Log. Soft Comp. 2009, 55, 137–153.
- Yoon, J.H.; Choi, S.H. Fuzzy rank linear regression model. In AI 2009: Advances in Artificial Intelligence; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5866, pp. 617–626.
- Diamond, P. Fuzzy Kriging. Fuzzy Sets Syst. 1989, 46, 315–332.
- Körner, R.; Näther, W. Linear regression with random fuzzy variables: Extended classical estimates, best linear estimates, least squares estimates. Inform. Sci. 1998, 109, 95–118.
- Näther, W. Linear statistical inference for random fuzzy data. Statistics 1997, 29, 221–240.
- Näther, W. On random fuzzy variables of second order and their application to linear statistical inference with fuzzy data. Metrika 2000, 51, 201–221.
- Näther, W. Regression with fuzzy random data. Comput. Stat. Data Anal. 2006, 51, 235–252.
- Parchami, A.; Mashinchi, M. Fuzzy estimation for process capability indices. Inform. Sci. 2006, 177, 1452–1462.
- Chang, P.-T.; Lee, E.S. Fuzzy least absolute deviations regression and the conflicting trends in fuzzy parameters. Comput. Math. Appl. 1994, 28, 89–101.
- Yoon, J.H.; Choi, S.H. Separate fuzzy regression with crisp input and fuzzy output. J. Korean Data Inform. Sci. 2007, 18, 301–314. (In Korean)
- Yoon, J.H.; Choi, S.H. Fuzzy least squares estimation with new fuzzy operations. Adv. Intell. Syst. Comput. 2013, 190, 193–202.
- Yoon, J.H.; Choi, S.H.; Grzegorzewski, P. On asymptotic properties of the multiple fuzzy least squares estimator. In Soft Methods for Data Science; Ferraro, M.B., Giordani, P., Vantaggi, B., Gagolewski, M., Ángeles Gil, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 525–532.
- Yoon, J.H.; Jung, H.Y.; Lee, W.J.; Choi, S.H. Optimal properties of a fuzzy least estimator based on new operations. In Proceedings of the SCIS&ISIS 2014, Kitakyushu, Japan, 3–6 December 2014; pp. 36–41.
- Yoon, J.H. Fuzzy regression analysis for generalized fuzzy data based on L2-distance. Int. J. Fuzzy Syst. 2020, submitted.
- Pedrycz, W. Why triangular membership functions? Fuzzy Sets Syst. 1994, 64, 21–30.
- Chang, P.-T.; Lee, E.S. Fuzzy linear regression with spreads unrestricted in sign. Comput. Math. Appl. 1994, 28, 61–70.
- Drygas, H. Weak and strong consistency of the least square estimates in regression models. Z. Wahrscheinlichkeitstheorie Verw. Geb. 1976, 34, 119–127.
- Serfling, R.J. Approximation Theorems of Mathematical Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1980.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).