Article

Principal Component Analysis and Factor Analysis for an Atanassov IF Data Set

by Viliam Ďuriš 1,*, Renáta Bartková 2 and Anna Tirpáková 1,3

1 Department of Mathematics, Constantine the Philosopher University in Nitra, Tr. A. Hlinku 1, 94974 Nitra, Slovakia
2 Podravka International s.r.o., Janka Jesenského 1486, 96001 Zvolen, Slovakia
3 Department of School Education, Faculty of Humanities, Tomas Bata University in Zlín, Štefánikova 5670, 76000 Zlín, Czech Republic
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(17), 2067; https://doi.org/10.3390/math9172067
Submission received: 29 May 2021 / Revised: 7 August 2021 / Accepted: 24 August 2021 / Published: 26 August 2021
(This article belongs to the Special Issue Fuzzy Systems and Optimization)

Abstract

The present contribution is devoted to the theory of fuzzy sets, especially Atanassov Intuitionistic Fuzzy sets (IF sets), and their use in practice. We define the correlation between IF sets and the correlation coefficient, and we bring a new perspective to the problem of reducing the size of a data file in cases where the input data come from IF sets. We present specific applications of the two best-known methods used for this purpose, Principal Component Analysis and Factor Analysis. We examine input data from IF sets from three perspectives: through the membership function, the non-membership function and the hesitation margin. This examination better reflects the character of the input data and also better captures and preserves the information that the input data carry. In the article, we also present and solve a specific example from practice in which we show the behavior of these methods on data from IF sets. The example is solved using the R programming language, which is well suited to the statistical analysis of data and their graphical representation.

1. Introduction

In mathematics, just as in other scientific disciplines, there is a shift from purely theoretical mathematics towards mathematics that is applicable in practice. Such applicable mathematics includes the fields of statistics and probability. Probability theory is a relatively young mathematical discipline whose axiomatic construction was built by the Russian mathematician Kolmogorov in 1933 [1]. For the first time in history, the basic concepts of probability theory were defined precisely but simply: a random event as a subset of a space, a random variable as a measurable function, and its mean value as an integral (the abstract Lebesgue integral). Just as the Kolmogorov theory of probability played an important role in the first half of the 20th century, the Zadeh fuzzy set did so in the second half of the 20th century [2,3,4,5]. Zadeh's concept of a fuzzy set was generalized by Atanassov. In May 1983 it turned out that the new sets allow the definition of operators which are, in a sense, analogous to the modal ones (in the case of ordinary fuzzy sets such operators are meaningless, since they reduce to the identity). It was then that the author realized that he had found a promising direction of research and published the results in [6]. Atanassov defined Intuitionistic Fuzzy sets (IF sets) and described them in terms of a membership value, a non-membership value and a hesitation margin [7,8]. An IF set is a pair of functions $A = (\mu_A, \nu_A)$, where $\mu_A: \Omega \to [0, 1]$ is called the membership function and $\nu_A: \Omega \to [0, 1]$ is called the non-membership function, subject to the condition $\mu_A + \nu_A \le 1$. Many authors have attempted to prove some known assertions from classical probability theory in the theory of IF sets [9,10,11,12] and to apply known statistical methods to these sets.
In 2010, Bujnowski P., Kacprzyk J. and Szmidt E. [13] defined a correlation coefficient for IF sets (more in Section 3) and later presented a novel approach to reducing the dimensionality of data sets through Principal Component Analysis on IF sets [14]. This practical use of IF sets for solving the problem of dimensionality reduction motivated us to continue this idea.
One of the main problems in data analysis is to reduce the number of variables while retaining the maximum amount of information that the data carry. Among the most widely used methods for reducing the dimension of data are Principal Component Analysis (PCA) and Factor Analysis (FA) (more in Section 2). The source data from an IF set accurately reflect the nature of the component under investigation. In the classical use of the PCA and FA methods, we examine the sample from a one-sided view only. In the case of data from an IF set, the sample is examined from two views: the membership function and the non-membership function. Alternatively, we can talk about up to three views if we also include the degree of uncertainty of the IF set of a given data sample. The degree of uncertainty can be defined for each IF set $A$ in $\Omega$ by the formula

$$\pi_A(\omega) = 1 - \mu_A(\omega) - \nu_A(\omega), \tag{1.1}$$

where $0 \le \pi_A(\omega) \le 1$ for each $\omega \in \Omega$ [15].
Based on the above, an IF set better describes the character of the studied objects. The paper aims to show the use of data from an IF set in a specific example for the known methods used to reduce the dimension of a data set, comparing these methods with the classical approach and with each other. The rest of the paper is organized as follows: Section 2 contains the description of the methods. Section 3 defines the correlation between IF sets. Section 4 contains a specific example of the use of the Principal Component Analysis and Factor Analysis methods. Section 5 contains the conclusion, a comparison of the methods and a discussion.

2. Methods’ Description

Principal Component Analysis (PCA) was introduced in 1901 by Karl Pearson [16]. The method aims to transform the input multi-dimensional data so that the most important linear directions are retained, while the least significant directions are ignored. Thus, we extract the characteristic directions (features) from the original data and at the same time reduce the data dimension. The method is one of the basic methods of data compression: the original n variables can be represented by a smaller number m of variables while still explaining a sufficiently large part of the variability of the original data set. The system of new variables (the so-called main components) consists of linear combinations of the original variables. The first main component describes the largest part of the variability of the original data set; the other main components contribute to the overall variance with successively smaller proportions. All pairs of main components are perpendicular to each other [17].
The basic steps of PCA include the construction of a correlation matrix from the source data, the calculation of the eigenvalues of the correlation matrix and their ordering from the largest, $\lambda_1 \ge \ldots \ge \lambda_n$, the calculation of the eigenvectors $v_1, \ldots, v_n$ of the correlation matrix corresponding to these eigenvalues, the calculation of the variability of the original data $\sigma^2$, the determination of the number of main components sufficient to represent the original variables based on this variability, and the transfer of the original data to the new basis. The number of main components (MC) is determined either by our consideration of the need to maintain information (taking the eigenvalues which explain, e.g., 90% of the variability); by Kaiser's Rule, using those MC whose eigenvalue is greater than the average of all eigenvalues (with standardized data the average is 1, i.e., taking the MC whose eigenvalue is greater than 1); by using the MC which together account for at least 70% of the total variance; or based on a graphical display, the so-called Scree Plot, where we find the turning point in the chart and take into account the MC up to this turning point.
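A minimal R sketch of these steps, assuming the source observations are stored in a numeric matrix X with the variables in columns (the object names are illustrative, not taken from the paper):

# X: n x p matrix of source data (p variables in columns)
R <- cor(X)                             # correlation matrix of the source data
e <- eigen(R)                           # eigendecomposition of R
lambda <- e$values                      # eigenvalues, already ordered from the largest
V <- e$vectors                          # corresponding eigenvectors (columns)

explained  <- lambda / sum(lambda)      # share of the total variability per component
cumulative <- cumsum(explained)         # cumulative share of the variability

m_kaiser <- sum(lambda > mean(lambda))    # Kaiser's Rule: eigenvalues above the average
m_70     <- which(cumulative >= 0.70)[1]  # smallest number of MC covering at least 70 %

plot(lambda, type = "b", xlab = "Component", ylab = "Eigenvalue")  # Scree Plot

scores <- scale(X) %*% V[, 1:m_70]      # transfer the standardized data to the new basis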
Factor Analysis (FA) was introduced in 1904 by Charles Edward Spearman; its origin and development were described in 1995 by Bartholomew D. J. [18]. This method allows new variables to be created from the set of original variables. It makes it possible to find hidden (latent) causes that are a source of the data variability. With latent variables, it is possible to reduce the number of variables while keeping the maximum amount of information, and to establish a link between the observable causes and the new variables (factors). If we assume that the input variables are correlated, then the same amount of information can be described by fewer variables. In the resulting solution, each original variable should be correlated with as few factors as possible, and the number of factors should be minimal. The factor saturations reflect the influence of the k-th common factor on the j-th random variable. Several methods, the so-called factor extraction methods, are used to estimate the factor saturations. In our paper, we used the method of the main components; other known methods include the maximum likelihood method and the least-squares method.
The number of common factors can be determined by the eigenvalue criterion (the so-called Kaiser's Rule), in which factors with eigenvalues $\lambda > 1$ are considered significant. The reliability of this rule depends on the number of input variables: if the number of variables is between 20 and 50, the rule is reliable; if it is less than 20, there is a tendency to determine too few factors; and if it is greater than 50, the rule tends to indicate too many factors. Another option is the criterion of the percentage of explained variability, according to which the common factors should explain as much of the total variability as possible. Alternatively, the Scree Plot of the eigenvalues can be used (it is recommended to use the factors located before the turning point on the chart). The basic steps of FA are the selection of the input data (assumption of correlation), the determination of the number of common factors, the estimation of the parameters (if a communality is less than 0.5, it is appropriate to exclude the given indicator from the analysis), the rotation of the factors (the Varimax Method, an orthogonal rotation) and the factor matrix (factor saturation matrix). A high factor saturation means that the factor significantly influences the indicator. Factor saturations whose absolute value is greater than 0.3 are considered statistically significant, those greater than 0.4 medium significant, and those greater than 0.5 very significant [17]. The main idea of both methods is to reduce the number of variables (reduce the dimension of the data file) while maintaining the highest possible variability of the original data. Both methods require the construction of a correlation matrix from the source data; therefore, we first need to define the correlation coefficient for IF sets.
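The paper states that the analysis was carried out in R with main-component extraction and Varimax rotation but does not publish the script; a hedged sketch of such a call, using the principal() function from the psych package (an assumed choice) and an illustrative data object X:

library(psych)

# X: data frame or matrix of the (assumed correlated) input variables
fa_fit <- principal(X, nfactors = 3, rotate = "varimax")

fa_fit$loadings      # factor saturation matrix after rotation (columns RC1, RC2, ...)
fa_fit$communality   # communalities h2; indicators below 0.5 are candidates for exclusion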

3. Correlation between IF Sets

The correlation between IF sets was introduced by Szmidt and Kacprzyk in 2010 [13]. Let $A, B$ be IF sets defined on $\Omega = \{\omega_1, \omega_2, \ldots, \omega_n\}$. The sets $A, B$ are characterized by the sequences of triples

$$(\mu_A(\omega_1), \nu_A(\omega_1), \pi_A(\omega_1)), (\mu_A(\omega_2), \nu_A(\omega_2), \pi_A(\omega_2)), \ldots, (\mu_A(\omega_n), \nu_A(\omega_n), \pi_A(\omega_n)),$$
$$(\mu_B(\omega_1), \nu_B(\omega_1), \pi_B(\omega_1)), (\mu_B(\omega_2), \nu_B(\omega_2), \pi_B(\omega_2)), \ldots, (\mu_B(\omega_n), \nu_B(\omega_n), \pi_B(\omega_n)),$$

where the components of each triple are the values of the competence (membership) function, the incompetence (non-membership) function and the degree of uncertainty of the sets $A$ and $B$.
Definition 1.
(Szmidt, Kacprzyk, Bujnowski [14]) The correlation coefficient $r_{AIFS}(A, B)$ between two IF sets $A$ and $B$ in $\Omega$ is:

$$r_{AIFS}(A, B) = \frac{1}{3}\left(r_1(A, B) + r_2(A, B) + r_3(A, B)\right), \tag{1.2}$$

where

$$r_1(A, B) = \frac{\sum_{i=1}^{n}\left(\mu_A(\omega_i) - \overline{\mu_A}\right)\left(\mu_B(\omega_i) - \overline{\mu_B}\right)}{\left(\sum_{i=1}^{n}\left(\mu_A(\omega_i) - \overline{\mu_A}\right)^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^{n}\left(\mu_B(\omega_i) - \overline{\mu_B}\right)^2\right)^{\frac{1}{2}}}, \tag{1.3}$$

$$r_2(A, B) = \frac{\sum_{i=1}^{n}\left(\nu_A(\omega_i) - \overline{\nu_A}\right)\left(\nu_B(\omega_i) - \overline{\nu_B}\right)}{\left(\sum_{i=1}^{n}\left(\nu_A(\omega_i) - \overline{\nu_A}\right)^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^{n}\left(\nu_B(\omega_i) - \overline{\nu_B}\right)^2\right)^{\frac{1}{2}}}, \tag{1.4}$$

$$r_3(A, B) = \frac{\sum_{i=1}^{n}\left(\pi_A(\omega_i) - \overline{\pi_A}\right)\left(\pi_B(\omega_i) - \overline{\pi_B}\right)}{\left(\sum_{i=1}^{n}\left(\pi_A(\omega_i) - \overline{\pi_A}\right)^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^{n}\left(\pi_B(\omega_i) - \overline{\pi_B}\right)^2\right)^{\frac{1}{2}}}. \tag{1.5}$$

At the same time,

$$\overline{\mu_A} = \frac{1}{n}\sum_{i=1}^{n}\mu_A(\omega_i), \quad \overline{\nu_A} = \frac{1}{n}\sum_{i=1}^{n}\nu_A(\omega_i), \quad \overline{\pi_A} = \frac{1}{n}\sum_{i=1}^{n}\pi_A(\omega_i),$$
$$\overline{\mu_B} = \frac{1}{n}\sum_{i=1}^{n}\mu_B(\omega_i), \quad \overline{\nu_B} = \frac{1}{n}\sum_{i=1}^{n}\nu_B(\omega_i), \quad \overline{\pi_B} = \frac{1}{n}\sum_{i=1}^{n}\pi_B(\omega_i).$$
The correlation coefficient (1.2) depends on the amount of information expressed by the competence and incompetence functions (1.3), (1.4), as well as on the reliability of the information expressed by the degree of uncertainty (1.5). The following properties apply to the correlation coefficient (1.2) [14]:
  • $r_{AIFS}(A, B) = r_{AIFS}(B, A)$
  • If $A = B$, then $r_{AIFS}(A, B) = 1$
  • $\left|r_{AIFS}(A, B)\right| \le 1$
These properties also apply to each of the components (1.3)–(1.5). The correlation coefficient $r_{AIFS}(A, B) = 1$ not only for $A = B$ but also in the case of a perfect linear correlation of the data [5].
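Since (1.3)–(1.5) are ordinary Pearson correlation coefficients of the membership, non-membership and uncertainty values, the coefficient (1.2) is straightforward to compute. A minimal R sketch, assuming the two IF sets are given as numeric vectors muA, nuA, muB, nuB of equal length (the function and object names are illustrative):

# correlation coefficient r_AIFS(A, B) of two IF sets according to Definition 1
r_aifs <- function(muA, nuA, muB, nuB) {
  piA <- 1 - muA - nuA       # hesitation margins of A, Equation (1.1)
  piB <- 1 - muB - nuB       # hesitation margins of B
  r1 <- cor(muA, muB)        # Pearson correlation of the membership values, (1.3)
  r2 <- cor(nuA, nuB)        # Pearson correlation of the non-membership values, (1.4)
  r3 <- cor(piA, piB)        # Pearson correlation of the hesitation margins, (1.5)
  (r1 + r2 + r3) / 3         # Equation (1.2)
}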

4. Use of PCA and FA Methods

We selected the 20 best-selling car brands for 2020 (we tracked sales over a period of 12 months). The data come from our own survey, in which we asked car dealers in two cities (Nitra and Žilina, Slovak Republic) about the best-selling car brands in 2020. There were 20 brands listed and 5 criteria were assessed (the criteria were not specifically selected; they were created on the basis of the most common questions that buyers ask when buying a car): A (power), B (equipment), C (price), D (driving properties), E (consumption). Each criterion was evaluated twice: by the percentage to which the criterion is met for the given brand and by the percentage to which it is not met. The results are in Table 1 below.
The data A, B, C, D and E from Table 1 are assigned the competence and incompetence functions. Since the values in Table 1 are expressed as percentages, we can easily assign the competence function to the values in the "met" column and the incompetence function to the values in the "not met" column, provided that $\mu, \nu \in [0, 1]$ and $\mu + \nu \le 1$ for A, B, C, D and E. These values are then IF data. From relationship (1.1) we calculate the degree of uncertainty for A, B, C, D and E (Table 2).
First, we conduct the Principal Component Analysis. We start by calculating the correlation matrices of the input variables: $R_\mu$ for the competence function, $R_\nu$ for the incompetence function and $R_\pi$ for the degree of uncertainty. The values of the correlation matrices are calculated from Equations (1.3)–(1.5); a short R sketch of this computation is given after the matrices.
$$R_\mu = \begin{pmatrix}
1.00000000 & 0.17614030 & 0.15817266 & 0.26372330 & 0.06097966 \\
0.17614030 & 1.00000000 & 0.25863849 & 0.41187539 & 0.05797482 \\
0.15817266 & 0.25863849 & 1.00000000 & 0.05248103 & 0.22512182 \\
0.26372330 & 0.41187539 & 0.05248103 & 1.00000000 & 0.18133075 \\
0.06097966 & 0.05797482 & 0.22512182 & 0.18133075 & 1.00000000
\end{pmatrix}$$

$$R_\nu = \begin{pmatrix}
1.0000000 & 0.61169656 & 0.06464770 & 0.25152916 & 0.2390998 \\
0.6116966 & 1.00000000 & 0.09219544 & 0.45217323 & 0.4269612 \\
0.0646477 & 0.09219544 & 1.00000000 & 0.03038455 & 0.3267094 \\
0.2515292 & 0.45217323 & 0.03038455 & 1.00000000 & 0.2221003 \\
0.2390998 & 0.42696123 & 0.32670936 & 0.22210033 & 1.0000000
\end{pmatrix}$$

$$R_\pi = \begin{pmatrix}
1.00000000 & 0.01429969 & 0.18037422 & 0.02824073 & 0.06232790 \\
0.01429969 & 1.00000000 & 0.25376953 & 0.04773643 & 0.25863891 \\
0.18037422 & 0.25376953 & 1.00000000 & 0.26175490 & 0.04768127 \\
0.02824073 & 0.04773643 & 0.26175490 & 1.00000000 & 0.06177976 \\
0.06232790 & 0.25863891 & 0.04768127 & 0.06177976 & 1.00000000
\end{pmatrix}$$
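A sketch of how such matrices can be obtained in R, assuming the values of Table 2 are stored in 20 × 5 matrices mu, nu and pi_m (named pi_m so as not to mask the built-in constant pi) whose columns correspond to the criteria A–E; the object names are illustrative:

# mu, nu: 20 x 5 matrices of the membership / non-membership values (Table 1 divided by 100)
pi_m <- 1 - mu - nu    # degrees of uncertainty, Equation (1.1) (the values of Table 2)

R_mu <- cor(mu)        # correlation matrix of the competence function, cf. Equation (1.3)
R_nu <- cor(nu)        # correlation matrix of the incompetence function, cf. Equation (1.4)
R_pi <- cor(pi_m)      # correlation matrix of the degree of uncertainty, cf. Equation (1.5)

eigen(R_mu)$values     # eigenvalues lambda_1 >= ... >= lambda_5 of R_mu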
The eigenvalues of the correlation matrix $R_\mu$ are $\lambda_1 = 1.6328865$, $\lambda_2 = 1.3278224$, $\lambda_3 = 0.9150624$, $\lambda_4 = 0.6406398$, $\lambda_5 = 0.4835889$. The variability of the input variables (the sum of the elements on the main diagonal, which equals the sum of the eigenvalues of the correlation matrix) is $\sigma^2 = 5$. The eigenvalues are displayed in Figure 1 and Table 3. From the graph, we can see that the turning point is behind the third component. According to Kaiser's Rule, the first two components would be considered.
In the row "Standard deviation" are the standard deviations of the main components, i.e., $\sqrt{\lambda_i}$, $i = 1, 2, 3, 4, 5$. In the row "Proportion of Variance" are the shares of variability $\lambda_i / \sigma^2$, $i = 1, 2, 3, 4, 5$, and in the row "Cumulative Proportion" are the cumulative shares of variability. We can see that the first three components account for 77.52% of the input data variability.
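The rows of Tables 3–5 can be reproduced directly from the corresponding correlation matrix; a minimal sketch, continuing with R_mu from the sketch above and using the covmat argument of princomp():

pca_mu <- princomp(covmat = R_mu)  # PCA based on the correlation matrix of the competence function
summary(pca_mu)                    # "Standard deviation"     = sqrt(lambda_i)
                                   # "Proportion of Variance" = lambda_i / sigma^2
                                   # "Cumulative Proportion"  = cumulative shares (Table 3)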
We will do the same for the values of the incompetence function input variables and the degree of uncertainty.
The eigenvalues of the correlation matrix $R_\nu$ are $\lambda_1 = 2.1645080$, $\lambda_2 = 1.1884539$, $\lambda_3 = 0.7637763$, $\lambda_4 = 0.5663754$, $\lambda_5 = 0.3168864$. The variability of the input variables is $\sigma^2 = 5$. The eigenvalues are displayed in Figure 2 and Table 4. According to the graph, the turning point could be located after the second component. Both the graph and Kaiser's Rule therefore suggest considering the first two components.
The first two components meet 67.06% of the input data variability, which is insufficient. The first three components meet 82.33% of the input data variability, which is permissible.
The eigenvalues of the correlation matrix $R_\pi$ are $\lambda_1 = 1.4352248$, $\lambda_2 = 1.2553099$, $\lambda_3 = 0.9695103$, $\lambda_4 = 0.7644114$, $\lambda_5 = 0.5755435$. The variability of the input variables is $\sigma^2 = 5$. The eigenvalues are displayed in Figure 3 and Table 5. According to the graph, the turning point would be located behind the second component. According to Kaiser's Rule, the first two components are considered.
The first two components meet 53.81% of the input data variability, which is insufficient. The first three components meet 73.2% of the input data variability, which is permissible.
Now we calculate the correlation matrix R for the complete correlation of components according to (1.2).
$$R = \begin{pmatrix}
1.00000000 & 0.2673789 & 0.01414871 & 0.16233724 & 0.03859740 \\
0.2673789 & 1.0000000 & 0.03235480 & 0.30392835 & 0.03678250 \\
0.01414871 & 0.0323548 & 1.00000000 & 0.09461713 & 0.16804997 \\
0.16233724 & 0.3039284 & 0.09461713 & 1.00000000 & 0.03418311 \\
0.03859740 & 0.0367825 & 0.16804997 & 0.03418311 & 1.00000000
\end{pmatrix}$$
The eigenvalues of the correlation matrix $R$ are $\lambda_1 = 1.5023867$, $\lambda_2 = 1.1774258$, $\lambda_3 = 0.8652105$, $\lambda_4 = 0.8134299$, $\lambda_5 = 0.6415472$. We display them on the chart (Figure 4).
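Because (1.2) averages the three partial coefficients, the overall matrix $R$ is the element-wise mean of the three partial correlation matrices; a short R sketch, continuing with the objects introduced above:

R_total <- (R_mu + R_nu + R_pi) / 3   # element-wise average, cf. Equation (1.2); should reproduce R above
lambda  <- eigen(R_total)$values      # eigenvalues of the overall correlation matrix
plot(lambda, type = "b", xlab = "Component", ylab = "Eigenvalue")   # Figure 4
cumsum(lambda) / sum(lambda)          # cumulative share of the explained variability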
From the graph, it is visible that the turning point is located behind the third component. According to Kaiser's Rule, the first two components would be considered; however, they account for only 53.6% of the input data variability, which is insufficient. Therefore, we consider the first three components, which account for 70.9% of the input data variability, which is sufficient.
From the results obtained so far, we can set the number of main components to three. The results of the overall correlation also allow a reduction in the dimension from five to three, i.e., the original five variables can be replaced by three main components while maintaining 70.9% of the original data variability.
We denote the eigenvectors of the correlation matrix $R_\mu$ by $V_{\mu i}$, $i = 1, 2, 3, 4, 5$. Similarly, we denote the eigenvectors of the correlation matrix $R_\nu$ by $V_{\nu i}$ and the eigenvectors of the correlation matrix $R_\pi$ by $V_{\pi i}$ for $i = 1, 2, 3, 4, 5$. The results of the PCA are summarized in Table 6. The columns of the table contain the first three eigenvectors of the correlation matrices $R_\mu$, $R_\nu$, $R_\pi$. The main components are obtained by multiplying the eigenvectors with the original data.
In this way, we will gain a reduction in the dimension of the original data from five to three.
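A sketch of this projection step in R, standardizing the data first because the analysis is based on correlation matrices (the object names continue from the sketches above):

V_mu <- eigen(R_mu)$vectors[, 1:3]   # first three eigenvectors of R_mu (cf. Table 6)
scores_mu <- scale(mu) %*% V_mu      # main-component scores of the membership data

# analogously for the incompetence function and the degree of uncertainty
scores_nu <- scale(nu)   %*% eigen(R_nu)$vectors[, 1:3]
scores_pi <- scale(pi_m) %*% eigen(R_pi)$vectors[, 1:3]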
We now also address Factor Analysis based on the PCA method. The input data are shown in Table 2. The correlation matrices and their eigenvalues were calculated in the PCA above. As the number of input values is 20, at least two criteria are available to determine the number of factors. The eigenvalue criterion indicates two important factors (for the eigenvalues of the correlation matrices $R_\mu$, $R_\nu$, $R_\pi$, only the first two values are greater than 1 in each case).
From the chart of eigenvalues in Figure 1, we can see that the turning point is behind the third component. From Figure 2 and Figure 3, we can see that the turning point is at the second component in both cases. Let us have a look at the variability of the data: if only two factors are considered, the retained data variability is too low in all cases, whereas with three components it is greater than 70%, which is sufficient. Hence, we will further consider three factors. We first solve the case of Factor Analysis (hereinafter FA) for the input data of the competence functions $\mu_A$, $\mu_B$, $\mu_C$, $\mu_D$, $\mu_E$. The first three factors represent 77.52% of the input data variability. We perform the FA using the R program (Table 7), and we use the Varimax method to rotate the factors.
At the output, we have the matrix of factor saturations after rotation (columns RC1–RC3). The column h2 contains the values of the communalities. We can see that the first factor (column RC1) is highly saturated in the second and fourth variables. The second factor (column RC2) is highly saturated in the fifth variable. The third factor (column RC3) is highly saturated in the first variable. For the third variable, the saturation is not high enough for any factor. The values of the communalities are sufficiently high, so we can consider representing the original five variables by three variables.
Let us try to exclude the third variable from the original data and repeat the FA (Table 8) without this variable. In this case, the first three factors represent 86.05% of the variability.
From the output, we can see that the factor saturation matrix is factorially clean because it has high factor saturation with just one factor. The values of communalities are sufficiently high. It is confirmed that we can use three factors instead of the original five variables.
Next, we address the case of FA (Table 9) for the input data of the incompetence functions $\nu_A$, $\nu_B$, $\nu_C$, $\nu_D$, $\nu_E$. The first three factors represent 82.33% of the variability of the input data.
We can see that the values of the communalities are sufficiently high. For the first four variables, the matrix has a high factor saturation with just one factor, but the fifth variable is highly saturated with the second factor. Additionally, its saturation with the first factor is greater than 0.4 and is thus statistically significant.
We will try to exclude one variable. Let us delete the third variable as in the previous case. In this case, the first three factors represent 92.04% of the variability (Table 10).
The factor saturation matrix is factorially clean. The values of communalities are sufficiently high. It is confirmed that we can use three factors instead of the original five variables.
We will still solve the case of FA (Table 11) for the input data of the degree of uncertainty $\pi_A$, $\pi_B$, $\pi_C$, $\pi_D$, $\pi_E$. The first three factors represent 73.2% of the variability of the input data.
We can see that the values of communalities are sufficiently high. The matrix is not completely factorially clean. We will, therefore, try to exclude the third variable, as in the previous cases. Then, the first three factors represent 82.06% of the input data variability (Table 12).
The factor saturation matrix is factorially clean. The values of communalities are sufficiently high. It is confirmed that we can use three factors instead of the original five variables.
In this way, we will gain a reduction in the dimension of the original data from five to three.

5. Conclusions, Comparison of Methods

The aim of our work was to extend the use of IF sets in probability theory and statistics and to verify the behavior of IF data in solving the problem of multidimensional data analysis. We dealt with the issue of reducing the size of a data file while maintaining sufficient variability of the data (that is, preserving sufficient information that the data carry). We applied the methods to IF sets and then interpreted them on a specific example from common practice. In our example, we described in detail the behavior of the given methods on IF sets in three directions: through the membership function, the non-membership function and the hesitation margin.
If we examine the data from three perspectives (membership function, non-membership function and hesitation margin) using the PCA method and Kaiser's Rule, we are able to reduce the dimension of the data from five to two; with such a reduction, however, the retained variability is too low. We achieve sufficient variability when reducing the dimension from five to three, which we also confirmed by the FA method. Thus, both methods allow the dimension of the original data set to be reduced from five to three while maintaining sufficient variability of the original data.
Similarly, in the classical case, when using the PCA and FA methods, a reduction of the dimension from five to three is permissible. In this case, however, the variability of the data is lower, i.e., less information of the original data is retained. Thus, based on the solved example, we came to the following conclusion: the reduction of the data dimension proposed by the PCA and FA methods is the same in the classical case as when using data from IF sets, but when examining data from IF sets in three directions, a higher variability of the data is retained.
In this paper, we presented a new approach in solving PCA and FA methods using three data perspectives from IF sets (membership function, non-membership function and hesitation margin), which better describe the sample and maintain higher data variability when reducing the dimension.

Author Contributions

All authors contributed equally and significantly in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Kolmogorov, A.N. Osnovnyje Ponjatija Teorii Verojatnostej; Nauka: Moskva, Russia, 1974; 119p.
  2. Zadeh, L.A. Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers, 1st ed.; World Scientific Pub Co Inc.: Singapore, 1996; ISBN 978-9810224219.
  3. Zadeh, L. Probability measures of Fuzzy events. J. Math. Anal. Appl. 1968, 23, 421–427.
  4. Dvurečenskij, A.; Riečan, B. Fuzzy quantum models. Int. J. Gen. Syst. 1991, 20, 39–54.
  5. Tirpáková, A.; Markechová, D. The fuzzy analogies of some ergodic theorems. Adv. Differ. Equ. 2015, 2015, 171.
  6. Atanassov, K. Intuitionistic Fuzzy Sets. In Fuzzy Sets and Systems; Elsevier: Amsterdam, The Netherlands, 1986; Volume 20, pp. 87–96.
  7. Atanassov, K. Intuitionistic Fuzzy Sets; Springer: Berlin/Heidelberg, Germany, 1999; ISBN 978-3-7908-2463-6.
  8. Atanassov, K. On Intuitionistic Fuzzy Sets Theory; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 978-3-6424-4259-9.
  9. Riečan, B. On Finitely Additive IF-States; Springer Science and Business Media LLC: Secaucus, NJ, USA, 2015; Volume 322, pp. 149–156.
  10. Riečan, B.; Atanassov, K. Some properties of operations conjunction and disjunction from Lukasiewicz type on intuitionistic fuzzy sets. Part 1. Notes Intuit. Fuzzy Sets 2014, 20, 1–6.
  11. Riečan, B. On the Atanassov Concept of Fuzziness and One of Its Modification. In Soft Computing Applications for Group Decision-Making and Consensus Modeling; Springer Science and Business Media LLC: Secaucus, NJ, USA, 2015; Volume 332, pp. 27–40.
  12. Riečan, B. Probability theory and the operations with IF-sets. In Proceedings of the 2008 IEEE International Conference on Fuzzy Systems (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 1250–1252.
  13. Kacprzyk, J.; Szmidt, E. Correlation of Intuitionistic Fuzzy Sets; Lecture Notes in AI; Springer: Cham, Switzerland, 2010; pp. 169–177.
  14. Szmidt, E.; Kacprzyk, J.; Bujnowski, P. Advances in principal component analysis for intuitionistic fuzzy data sets. In Proceedings of the 2012 6th IEEE International Conference Intelligent Systems, Sofia, Bulgaria, 6–8 September 2012; pp. 194–199.
  15. Bartková, R. Principal Component Analysis and Factor Analysis for IF data sets. In New Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics; IBS PAN—SRIPAS: Warsaw, Poland, 2013; Volume 1, pp. 17–30.
  16. Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572.
  17. Kráľ, P.; Kanderová, M.; Kaščáková, A.; Nedelavá, G.; Valenčáková, V. Viacrozmerné Štatistické Metódy so Zameraním na Riešenie Problémov Ekonomickej Praxe; Ekonomická Fakulta UMB: Banská Bystrica, Slovakia, 2009.
  18. Bartholomew, D.J. Spearman and the origin and development of factor analysis. Br. J. Math. Stat. Psychol. 1995, 48, 211–220.
Figure 1. Eigenvalues of the correlation matrix $R_\mu$.
Figure 2. Eigenvalues of the correlation matrix $R_\nu$.
Figure 3. Eigenvalues of the correlation matrix $R_\pi$.
Figure 4. Eigenvalues of the correlation matrix $R$.
Table 1. The competence and the incompetence functions (m = criterion met, nm = criterion not met; values in %).

Brand   A m   A nm   B m   B nm   C m   C nm   D m   D nm   E m   E nm
1       84    7      81    11     31    49     90    6      70    20
2       73    13     53    30     20    63     53    21     60    27
3       56    13     65    14     4     76     59    14     55    18
4       77    11     63    4      11    60     41    20     62    10
5       93    4      71    11     29    47     76    14     63    33
6       53    35     57    38     38    50     47    46     44    37
7       91    3      85    11     24    56     75    10     25    21
8       88    6      48    12     45    26     58    40     51    40
9       69    15     62    11     5     90     74    16     78    8
10      36    25     82    6      11    77     70    22     61    22
11      62    31     71    17     17    76     48    22     79    16
12      62    27     49    16     18    57     76    2      56    34
13      62    31     60    24     44    38     47    27     81    18
14      55    30     50    33     38    57     76    17     48    26
15      61    38     58    18     56    41     62    28     35    24
16      71    4      55    16     20    76     42    49     90    5
17      50    39     60    27     6     91     48    36     54    36
18      75    11     88    1      20    61     74    7      56    4
19      74    6      67    6      23    67     84    8      67    24
20      59    27     64    21     39    51     66    12     78    8
Table 2. Degree of uncertainty.

Brand   μA     νA     πA     μB     νB     πB     μC     νC     πC     μD     νD     πD     μE     νE     πE
1       0.84   0.07   0.09   0.81   0.11   0.08   0.31   0.49   0.20   0.90   0.06   0.04   0.70   0.20   0.10
2       0.73   0.13   0.14   0.53   0.30   0.17   0.20   0.63   0.17   0.53   0.21   0.26   0.60   0.27   0.13
3       0.56   0.13   0.31   0.65   0.14   0.21   0.04   0.76   0.20   0.59   0.14   0.27   0.55   0.18   0.27
4       0.77   0.11   0.12   0.63   0.04   0.33   0.11   0.60   0.29   0.41   0.20   0.39   0.62   0.10   0.28
5       0.93   0.04   0.03   0.71   0.11   0.18   0.29   0.47   0.24   0.76   0.14   0.10   0.63   0.33   0.04
6       0.53   0.35   0.12   0.57   0.38   0.05   0.38   0.50   0.12   0.47   0.46   0.07   0.44   0.37   0.19
7       0.91   0.03   0.06   0.85   0.11   0.04   0.24   0.56   0.20   0.75   0.10   0.15   0.25   0.21   0.54
8       0.88   0.06   0.06   0.48   0.12   0.40   0.45   0.26   0.29   0.58   0.40   0.02   0.51   0.40   0.09
9       0.69   0.15   0.16   0.62   0.11   0.27   0.05   0.90   0.05   0.74   0.16   0.10   0.78   0.08   0.14
10      0.36   0.25   0.39   0.82   0.06   0.12   0.11   0.77   0.12   0.70   0.22   0.08   0.61   0.22   0.17
11      0.62   0.31   0.07   0.71   0.17   0.12   0.17   0.76   0.07   0.48   0.22   0.30   0.79   0.16   0.05
12      0.62   0.27   0.11   0.49   0.16   0.35   0.18   0.57   0.25   0.76   0.02   0.22   0.56   0.34   0.10
13      0.62   0.31   0.07   0.60   0.24   0.16   0.44   0.38   0.18   0.47   0.27   0.26   0.81   0.18   0.01
14      0.55   0.30   0.15   0.50   0.33   0.17   0.38   0.57   0.05   0.76   0.17   0.07   0.48   0.26   0.26
15      0.61   0.38   0.01   0.58   0.18   0.24   0.56   0.41   0.03   0.62   0.28   0.10   0.35   0.24   0.41
16      0.71   0.04   0.25   0.55   0.16   0.29   0.20   0.76   0.04   0.42   0.49   0.09   0.90   0.05   0.05
17      0.50   0.39   0.11   0.60   0.27   0.13   0.06   0.91   0.03   0.48   0.36   0.16   0.54   0.36   0.10
18      0.75   0.11   0.14   0.88   0.01   0.11   0.20   0.61   0.19   0.74   0.07   0.19   0.56   0.04   0.40
19      0.74   0.06   0.20   0.67   0.06   0.27   0.23   0.67   0.10   0.84   0.08   0.08   0.67   0.24   0.09
20      0.59   0.27   0.14   0.64   0.21   0.15   0.39   0.51   0.10   0.66   0.12   0.22   0.78   0.08   0.14
Table 3. The PCA results calculated using the R program are as follows.

Importance of Components:
                         Comp.1      Comp.2      Comp.3      Comp.4      Comp.5
Standard deviation       1.2778445   1.1523118   0.9565889   0.8003998   0.69540557
Proportion of Variance   0.3265773   0.2655645   0.1830125   0.1281280   0.09671778
Cumulative Proportion    0.3265773   0.5921418   0.7751543   0.9032822   1.00000000
Table 4. The PCA results calculated using the R program are as follows.

Importance of Components:
                         Comp.1      Comp.2      Comp.3      Comp.4      Comp.5
Standard deviation       1.4712267   1.0901623   0.8739429   0.7525792   0.56292667
Proportion of Variance   0.4329016   0.2376908   0.1527553   0.1132751   0.06337729
Cumulative Proportion    0.4329016   0.6705924   0.8233476   0.9366227   1.00000000
Table 5. The PCA results calculated using the R program are as follows.

Importance of Components:
                         Comp.1      Comp.2      Comp.3      Comp.4      Comp.5
Standard deviation       1.198009    1.1204061   0.9846371   0.8743063   0.7586459
Proportion of Variance   0.287045    0.2510620   0.1939021   0.1528823   0.1151087
Cumulative Proportion    0.287045    0.5381069   0.7320090   0.8848913   1.0000000
Table 6. PCA results.

     Vμ1     Vμ2     Vμ3     Vν1     Vν2     Vν3     Vπ1     Vπ2     Vπ3
A    −0.46   −0.19   0.69    0.49    −0.36   0.48    −0.27   0.35    0.81
B    −0.55   0.44    −0.14   0.60    −0.14   0.10    0.50    0.51    −0.04
C    −0.08   −0.75   0.16    −0.16   −0.80   0.02    0.67    −0.17   −0.01
D    −0.63   0.03    −0.06   0.43    −0.15   −0.85   0.46    −0.29   0.57
E    0.29    0.45    0.69    0.44    0.44    0.16    −0.12   −0.71   0.14
Table 7. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    0.22    0.08    0.87    0.82
B    0.87    0.04    0.02    0.76
C    −0.46   −0.52   0.54    0.77
D    0.68    −0.28   0.34    0.66
E    −0.14   0.91    0.06    0.86
Table 8. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    0.13    −0.02   0.98    0.98
B    0.89    0.07    0.00    0.80
C    0.76    −0.22   0.24    0.69
E    −0.07   0.99    −0.02   0.98
Table 9. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    0.92    −0.06   0.04    0.85
B    0.79    0.21    0.37    0.80
C    0.14    −0.59   0.00    0.81
D    0.18    0.05    0.97    0.98
E    0.41    0.70    0.12    0.68
Table 10. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    0.95    0.05    0.05    0.91
B    0.74    0.38    0.34    0.81
C    0.16    0.97    0.09    0.98
D    0.15    0.09    0.98    0.98
Table 11. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    −0.01   0.07    0.94    0.90
B    0.27    0.79    −0.06   0.69
C    0.71    0.21    −0.37   0.68
D    0.83    −0.11   0.16    0.72
E    0.22    −0.78   −0.11   0.67
Table 12. Factor Analysis.

Standardized Loadings (Pattern Matrix) Based upon Correlation Matrix
     RC1     RC2     RC3     h2
A    0.02    −0.01   1.00    0.99
B    0.80    0.18    −0.05   0.68
D    0.00    0.98    −0.01   0.95
E    −0.78   0.19    −0.09   0.66
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

