Extending Fuzzy Cognitive Maps with Tensor-Based Distance Metrics
Abstract
1. Introduction
2. Previous Work
3. Myers-Briggs Type Indicator
 Approach to socialization: Introvert (I) vs Extrovert (E). As its name suggests, this variable denotes the degree to which a person is open to others. Introverts tend to work mentally in isolation and rely on indirect cues from others. On the contrary, extroverts share their thoughts frequently with others and ask for explicit feedback.
 Approach to information gathering: Sensing (S) vs Intuition (N). Persons in the S group frequently resort to sensory functions, observing the outside world, whether the physical or the social environment, in order to collect information about open problems or to improve situational awareness. On the other hand, persons labeled as N rely on a less concrete form of information representation for reaching insight.
 Approach to decision making: Thinking (T) vs Feeling (F). This variable indicates the primary means by which an individual makes a decision. This may be rational thinking with clearly outlined processes, perhaps in the form of corporate policies or formal problem-solving methods such as 5W or TRIZ, or a more abstract and empathy-oriented way based on external influences and the emotional implications of past decisions.
 Approach to lifestyle: Judging (J) vs Perceiving (P). This psychological function pertains to how a lifestyle is led. Perceiving persons show more understanding of other lifestyles and may not object to open ended evolution processes over a long period of time. On the contrary, judging persons tend to close open matters as soon as possible and are more likely to apply old solutions to new problems.
4. Cognitive Maps
 All neurons are eventually activated and assigned to clusters, thus leaving no gaps in the topological map; every available neuron is utilized.
 Moreover, in the long run the number of activations is roughly the same for each neuron; for a sufficiently large number of epochs each neuron is activated with equal probability.
 Random order. In each epoch the data points are selected based on a random permutation of their original order.
 Reverse order. In each epoch the previous order is reversed.
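The two presentation orders above can be sketched as follows; this is an illustrative Python snippet, not code from the paper, and the function and parameter names are assumptions:

```python
import random

def epoch_order(n_points, mode="random", prev=None):
    """Return the index order in which data points are presented in one epoch.

    mode="random": a fresh random permutation of the original order each epoch.
    mode="reverse": the previous epoch's order reversed.
    """
    if mode == "random":
        order = list(range(n_points))
        random.shuffle(order)  # random permutation of the original order
        return order
    if mode == "reverse":
        base = prev if prev is not None else list(range(n_points))
        return list(reversed(base))
    raise ValueError(f"unknown mode: {mode}")
```

Either policy guarantees that every data point is presented exactly once per epoch; only the order differs.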
4.1. Training
 The ${\ell}_{1}$ norm or Manhattan distance.
 The ${\ell}_{2}$ norm or Euclidean distance.
 Square.
 Hexagon.
 Cross.
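For concreteness, the standard SOM search for the best matching unit under the two norms listed above might look like the following sketch (the function names are illustrative, not from the paper):

```python
def best_matching_unit(x, weights, norm="l2"):
    """Return the index of the neuron whose weight vector is closest to x.

    weights: one weight vector per neuron; norm: "l1" (Manhattan) or "l2" (Euclidean).
    """
    def dist(w):
        if norm == "l1":
            # Manhattan distance: sum of absolute coordinate differences
            return sum(abs(a - b) for a, b in zip(x, w))
        # squared Euclidean distance; the square root is monotone,
        # so the argmin is the same as for the Euclidean distance
        return sum((a - b) ** 2 for a, b in zip(x, w))
    return min(range(len(weights)), key=lambda i: dist(weights[i]))
```

The choice of norm changes the geometry of the decision boundaries between neurons, which is exactly where the tensor-based metrics of Section 5.2 differ from the baselines.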
 Constant rate: This is the simplest case, as $\eta[r]$ has a constant positive value of $\eta_0$. This implies $\eta_0$ should be carefully chosen in order to avoid both slow synaptic weight convergence and overshooting, which prevents convergence altogether. In some cases a theoretical value of $\eta_0$ is given by (9), where $\lambda^{\dagger}$ is the maximum eigenvalue of the input autocorrelation matrix:$$\eta_0 \;=\; \frac{2}{\lambda^{\dagger}}.$$
 Cosine rate: A common option for the learning rate is the cosine decay rate shown in (10), which is generally considered flexible and efficient in the sense that the learning rate is initially large enough that convergence is approached quickly, but later becomes small enough that no overshoot occurs.$$\eta[r] \;\triangleq\; \cos\left(\frac{\pi r}{2 r_0}\right), \qquad 0 \,\le\, r \,\le\, r_0 - 1.$$In (10) the argument stays in the first quadrant, meaning that $\eta[r]$ is always positive. However, the maximum number of epochs $r_0$ must be known in advance. This specific learning rate has the advantage that it is initially relatively high but gradually drops at a quadratic rate, as seen in Equation (11):$$\cos\vartheta \;=\; \sum_{k=0}^{+\infty} (-1)^k \, \frac{\vartheta^{2k}}{(2k)!} \;=\; 1 - \frac{\vartheta^2}{2} + \frac{\vartheta^4}{4!} - \frac{\vartheta^6}{6!} + \dots \;\approx\; 1 - \frac{\vartheta^2}{2}.$$To see what this means in practice, let us check when $\eta[r]$ drops below $0.5$:$$\eta[r] \;\le\; \frac{1}{2} \;\iff\; \cos\left(\frac{\pi r}{2 r_0}\right) \;\le\; \cos\left(\frac{\pi}{3}\right) \;\Rightarrow\; \frac{\pi r}{2 r_0} \;\ge\; \frac{\pi}{3} \;\iff\; r \;\ge\; \frac{2 r_0}{3}.$$Thus, only for the last third of the total available iterations is the learning rate below $0.5$; equivalently, for each iteration where the learning rate is below that threshold there are two where it is above it, provided that the number of iterations approaches the limit $r_0$. Seen another way, the learning rate decays at the rate given by (13):$$\left| \frac{\partial \eta[r]}{\partial r} \right| \;\triangleq\; \left| \frac{\partial}{\partial r} \cos\left(\frac{\pi r}{2 r_0}\right) \right| \;=\; \frac{\pi}{2 r_0} \left| \sin\left(\frac{\pi r}{2 r_0}\right) \right|.$$
 Inverse linear: The learning rate scheme of Equation (14) is historically among the first. It has a slow decay, which in the general case translates to a slow convergence rate, implying that more epochs are necessary in order for the SOM to achieve truly satisfactory performance.$$\eta[r; \gamma_0, \gamma_1, \gamma_2] \;\triangleq\; \frac{\gamma_2}{\gamma_1 r + \gamma_0}.$$Now the learning rate decays at a rate of:$$\left| \frac{\partial \eta[r]}{\partial r} \right| \;\triangleq\; \frac{\gamma_2 \gamma_1}{\left(\gamma_1 r + \gamma_0\right)^2} \;=\; \mathrm{O}\!\left(\frac{1}{r^2}\right).$$In order for the learning rate to drop below $0.5$ it suffices that:$$\eta[r; \gamma_0, \gamma_1, \gamma_2] \;\le\; \frac{1}{2} \;\iff\; r \;\ge\; \frac{2\gamma_2 - \gamma_0}{\gamma_1}.$$From the above equation it follows that $\gamma_1$ determines convergence to a great extent.
 Inverse polynomial: Equation (17) generalizes the inverse linear learning rate to a higher degree. In this case there is no simple way to predict its behavior, which may well fluctuate before the dominant term takes over. Also, the polynomial coefficients should be carefully selected in order to avoid negative values. Moreover, although the value at each iteration can be efficiently computed, numerical stability may be an issue, especially for large values of $p$ or when $r$ is close to a root. If possible, the polynomial should be given in factored form. Ideally, polynomials with roots of even moderate multiplicity should be avoided if $r$ can reach their region, as the lower order derivatives of the polynomial vanish locally. To this end, algorithmic techniques such as Horner's scheme [82] should be employed. In this case:$$\eta\left[r; \{\gamma_j\}_{j=0}^{p+1}, p\right] \;\triangleq\; \frac{\gamma_{p+1}}{\sum_{j=0}^{p} \gamma_j r^j} \;=\; \frac{\gamma_{p+1}}{\gamma_p \prod_{j=1}^{p} \left(r - \xi_j\right)}.$$For this option the decay rate of the learning rate is more complicated than in the other cases, as:$$\left| \frac{\partial \eta[r]}{\partial r} \right| \;\triangleq\; \frac{\gamma_{p+1} \left| \sum_{j=1}^{p} j \gamma_j r^{j-1} \right|}{\left( \sum_{j=0}^{p} \gamma_j r^j \right)^2} \;=\; \mathrm{O}\!\left(\frac{1}{r^{p+1}}\right).$$
 Inverse logarithmic: A more adaptive choice for the learning rate, and an intermediate selection between the constant and the inverse linear options, is the inverse logarithmic of Equation (19). The logarithm base can vary depending on the application; here natural (Neperian) logarithms will be used. Although all logarithm bases yield essentially the same order of magnitude, local differences between iterations may well be observed. In this case:$$\eta[r; \gamma_0, \gamma_1, \gamma_2] \;\triangleq\; \frac{\gamma_2}{\gamma_1 \ln(1 + r) + \gamma_0}.$$As $r$ grows, the logarithm behaves approximately like an increasing piecewise constant over increasingly large intervals of $r$. Thus, the learning rate adapts to the number of iterations and does not require a maximum value $r_0$. Equation (20) gives the decay rate of this learning rate:$$\left| \frac{\partial \eta[r]}{\partial r} \right| \;=\; \left| \frac{\gamma_2 \gamma_1}{(1 + r)\left(\gamma_1 \ln(1 + r) + \gamma_0\right)^2} \right| \;=\; \mathrm{O}\!\left(\frac{1}{r \ln^2 r}\right).$$In order for the learning rate to drop below $0.5$ it suffices that:$$\eta[r; \gamma_0, \gamma_1, \gamma_2] \;\le\; \frac{1}{2} \;\iff\; r \;\ge\; \exp\!\left(\frac{2\gamma_2 - \gamma_0}{\gamma_1}\right) - 1.$$Due to the nature of the exponential function, all three parameters play a role in determining the number of epochs.
 Exponential decay: Finally, the learning rate diminishes more sharply when the scheme of Equation (22) is chosen, although the sharpness depends mainly on the parameter $\gamma_1$:$$\eta[r; \gamma_0, \gamma_1] \;\triangleq\; \gamma_0 \exp\left(-\gamma_1 r\right).$$The learning rate in this case decays according to:$$\left| \frac{\partial \eta[r]}{\partial r} \right| \;\triangleq\; \gamma_0 \gamma_1 \exp\left(-\gamma_1 r\right) \;=\; \gamma_1 \, \eta[r].$$Therefore the learning rate decays at a rate proportional to its current value, a well known property of the exponential function, implying a rapid initial decay. Additionally, in order for the learning rate to drop below $0.5$ it suffices that:$$\eta[r; \gamma_0, \gamma_1] \;\le\; \frac{1}{2} \;\iff\; r \;\ge\; \frac{\ln(2\gamma_0)}{\gamma_1}.$$
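The learning rate schedules above can be summarized in code. The following sketch mirrors Equations (9), (10), (14), (17), (19), and (22); the function names are illustrative assumptions, not identifiers from the paper:

```python
import math

def constant_rate(r, eta0):
    # Equation (9) region: a fixed positive value eta0 for every epoch r
    return eta0

def cosine_rate(r, r0):
    # Equation (10): cos(pi * r / (2 * r0)) for 0 <= r <= r0 - 1
    return math.cos(math.pi * r / (2 * r0))

def inverse_linear_rate(r, g0, g1, g2):
    # Equation (14): gamma2 / (gamma1 * r + gamma0)
    return g2 / (g1 * r + g0)

def inverse_polynomial_rate(r, gammas):
    # Equation (17): gammas = [g0, ..., gp, g_{p+1}];
    # the denominator polynomial is evaluated with Horner's scheme
    denom = 0.0
    for g in reversed(gammas[:-1]):
        denom = denom * r + g
    return gammas[-1] / denom

def inverse_log_rate(r, g0, g1, g2):
    # Equation (19): gamma2 / (gamma1 * ln(1 + r) + gamma0)
    return g2 / (g1 * math.log(1 + r) + g0)

def exponential_rate(r, g0, g1):
    # Equation (22): gamma0 * exp(-gamma1 * r)
    return g0 * math.exp(-g1 * r)
```

As a sanity check, the cosine rate crosses the $0.5$ threshold exactly at $r = 2 r_0 / 3$, in agreement with the derivation above.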
 Constant ${\alpha}_{0}$
 Rectangular with rectangle side ${\alpha}_{0}$
 Circular with radius ${\rho}_{0}$
 Triangular with height ${h}_{0}$ and base ${h}_{b}$.
 Gaussian with mean ${\mu}_{0}$ and variance ${\sigma}_{0}^{2}$
 Rectangular with rectangle side ${\alpha}_{0}$.
 Circular with radius ${\rho}_{0}$.
 Gaussian with mean ${\mu}_{0}$ and variance ${\sigma}_{0}^{2}$
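As an illustration of the shapes listed above, the Gaussian and rectangular options can be written as weight functions of the grid distance to the winning neuron; this is a sketch with assumed names (and, for the Gaussian, an assumed zero mean), since the paper provides no code:

```python
import math

def gaussian_neighborhood(grid_dist, sigma0):
    """Gaussian neighborhood weight for a neuron at grid distance grid_dist
    from the best matching unit (mean mu0 = 0 assumed for this sketch)."""
    return math.exp(-grid_dist ** 2 / (2 * sigma0 ** 2))

def rectangular_neighborhood(grid_dist, a0):
    """Rectangular neighborhood: full update within side a0, none outside."""
    return 1.0 if grid_dist <= a0 else 0.0
```

The smooth roll-off of the Gaussian, compared with the hard cutoff of the rectangle, is the property invoked later in the discussion of Section 5.8.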
4.2. Error Metrics
Algorithm 1 SOM training. 

5. Results
5.1. Dataset and Data Point Representation
 A point or even an entire class may be better represented by more than one vector. Thus, these vectors may be concatenated to yield a matrix.
 Higher order relationships between vectors cannot be represented by other vectors.
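A minimal sketch of the first point, collecting several representation vectors of equal length into a matrix with one column per vector (the function name is an illustrative assumption):

```python
def stack_representation(vectors):
    """Concatenate the representation vectors of a data point (or class)
    column-wise into a matrix, stored as a list of rows."""
    if not vectors:
        raise ValueError("at least one vector is required")
    length = len(vectors[0])
    if any(len(v) != length for v in vectors):
        raise ValueError("all vectors must have the same length")
    # row i of the matrix collects component i across all vectors
    return [[v[i] for v in vectors] for i in range(length)]
```

Stacking further such matrices along a third direction yields the tensors on which the proposed distance metrics of Section 5.2 operate.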
5.2. Proposed Metrics
5.3. Experimental Setup
 Clustering quality: As SOMs perform clustering, general clustering metrics can be used, especially since the dataset contains ground truth classes.
 Topological map: It is possible to construct figures of merit based on the SOM operating principles. Although they are by definition SOM-specific, they nonetheless provide insight into how the self-organization of the neurons takes place while adapting to the dataset topology.
 MBTI permutations: Finally, the dataset itself provides certain insight. Although no specific formulas can be derived, a qualitative analysis can be carried out based on findings from the scientific literature.
5.4. Topological Error
 In each case the variance is relatively small, implying that there is a strong concentration of the number of epochs around the respective mean value. In other words, ${I}_{0}$ is a reliable estimator of the true number of epochs of the respective combination of distance metric and learning rate.
 For the same learning rate the fuzzy version of the tensor distance metric consistently requires a lower number of epochs. It is followed closely by the tensor distance metric, whereas the ${\ell}_{2}$ and ${\ell}_{1}$ norms lag considerably behind, with the former being somewhat better than the latter.
 Conversely, for the same metric the cosine decay rate systematically outperforms the other two options. The inverse linear decay rate may be a viable alternative, although there is a significant gap in the number of epochs. The exponential decay rate results in very slow convergence, requiring almost twice the number of epochs compared to the cosine decay rate.
5.5. Clustering Quality
5.6. MBTI Permutations
5.7. Complexity
5.8. Discussion
 The cosine decay rate outperforms the inverse linear and the exponential ones. This can be explained by the adaptive nature of the cosine as well as by the fact that the exponential function decays too fast, before convergence is truly achieved.
 Partitioning clusters into Gaussian regions results in lower error in every test case. This is explained by the less sharp shape of these regions compared to cubes or domes. Moreover, with the tensor distance metrics, which can in the general case approximate smoother shapes, the cluster boundaries can better adapt to the topological properties of the dataset.
 The fuzzy version of the tensor distance metric results in better performance, even if only slightly, in all cases. The reason for this may be the additional flexibility, since personalities sharing traits from two categories can belong to both to an extent. On the contrary, all the other distance metrics assign a particular personality to a single cluster.
 The complexity of the tensor metrics in terms of the number of floating point operations involved is clearly higher than that of either the ${\ell}_{1}$ or the ${\ell}_{2}$ norm. However, because of the lower number of iterations, that difference is not evident in the total execution time.
 The interpretability of the resulting cognitive map is limited by the texts of the original dataset, which in turn are answers to specific questions. Adding more cognitive dimensions to these texts would improve personality clustering quality.
 Although the MBTI map is small, for each cognitive map there is a large number of equivalent permutations. Finding them is a critical step before any subsequent analysis takes place.
 The current version of the proposed methodology does not utilize neuron bias.
5.9. Recommendations
 Text, despite being an invaluable source of information about human traits, is not the only one. Cross-checking with methods utilizing other modalities is highly advisable.
 In cases where the personalities of two or more group members are evaluated, it is advisable to check their compatibility against the group tasks in order to discover potential conflict or communication points as early as possible.
6. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
 Kangas, J.; Kohonen, T.; Laaksonen, J. Variants of self-organizing maps. IEEE Trans. Neural Netw. 1990, 1, 93–99. [Google Scholar] [PubMed]
 Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Lagani, G. Hebbian learning meets deep convolutional neural networks. In Proceedings of the International Conference on Image Analysis and Processing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 324–334. [Google Scholar]
 Myers, S. Myers-Briggs typology and Jungian individuation. J. Anal. Psychol. 2016, 61, 289–308. [Google Scholar] [PubMed] [Green Version]
 Isaksen, S.G.; Lauer, K.J.; Wilson, G.V. An examination of the relationship between personality type and cognitive style. Creat. Res. J. 2003, 15, 343–354. [Google Scholar]
 Poria, S.; Majumder, N.; Mihalcea, R.; Hovy, E. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access 2019, 7, 100943–100953. [Google Scholar]
 Batbaatar, E.; Li, M.; Ryu, K.H. Semantic-emotion neural network for emotion recognition from text. IEEE Access 2019, 7, 111866–111878. [Google Scholar]
 Beliy, R.; Gaziv, G.; Hoogi, A.; Strappini, F.; Golan, T.; Irani, M. From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI. In Proceedings of the 2019 Conference on Neural Information Processing Systems NIPS, Vancouver, BC, Canada, 8–14 December 2019; pp. 6517–6527. [Google Scholar]
 Sidhu, G. Locally Linear Embedding and fMRI feature selection in psychiatric classification. IEEE J. Transl. Eng. Health Med. 2019, 7, 1–11. [Google Scholar]
 Sun, X.; Pei, Z.; Zhang, C.; Li, G.; Tao, J. Design and Analysis of a Human-Machine Interaction System for Researching Human’s Dynamic Emotion. IEEE Trans. Syst. Man Cybern. Syst. 2019. [Google Scholar] [CrossRef]
 Vesanto, J.; Alhoniemi, E. Clustering of the self-organizing map. IEEE Trans. Neural Netw. 2000, 11, 586–600. [Google Scholar]
 Kohonen, T. Exploration of very large databases by self-organizing maps. In Proceedings of the International Conference on Neural Networks (ICNN’97), Houston, TX, USA, 12 June 1997; Volume 1, pp. PL1–PL6. [Google Scholar]
 Kosko, B. Fuzzy cognitive maps. Int. J. Man-Mach. Stud. 1986, 24, 65–75. [Google Scholar]
 Taber, R. Knowledge processing with fuzzy cognitive maps. Expert Syst. Appl. 1991, 2, 83–87. [Google Scholar]
 Stach, W.; Kurgan, L.; Pedrycz, W.; Reformat, M. Genetic learning of fuzzy cognitive maps. Fuzzy Sets Syst. 2005, 153, 371–401. [Google Scholar]
 Yang, Z.; Liu, J. Learning of fuzzy cognitive maps using a niching-based multimodal multiagent genetic algorithm. Appl. Soft Comput. 2019, 74, 356–367. [Google Scholar]
 Salmeron, J.L.; Mansouri, T.; Moghadam, M.R.S.; Mardani, A. Learning fuzzy cognitive maps with modified asexual reproduction optimisation algorithm. Knowl.-Based Syst. 2019, 163, 723–735. [Google Scholar]
 Wu, K.; Liu, J. Robust learning of large-scale fuzzy cognitive maps via the lasso from noisy time series. Knowl.-Based Syst. 2016, 113, 23–38. [Google Scholar]
 Wu, K.; Liu, J. Learning large-scale fuzzy cognitive maps based on compressed sensing and application in reconstructing gene regulatory networks. IEEE Trans. Fuzzy Syst. 2017, 25, 1546–1560. [Google Scholar]
 Liu, Y.c.; Wu, C.; Liu, M. Research of fast SOM clustering for text information. Expert Syst. Appl. 2011, 38, 9325–9333. [Google Scholar]
 Drakopoulos, G.; Giannoukou, I.; Mylonas, P.; Sioutas, S. On tensor distances for self-organizing maps: Clustering cognitive tasks. In Proceedings of the International Conference on Database and Expert Systems Applications Part II; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12392, pp. 195–210. [Google Scholar] [CrossRef]
 Nam, T.M.; Phong, P.H.; Khoa, T.D.; Huong, T.T.; Nam, P.N.; Thanh, N.H.; Thang, L.X.; Tuan, P.A.; Dung, L.Q.; Loi, V.D. Self-organizing map-based approaches in DDoS flooding detection using SDN. In Proceedings of the 2018 International Conference on Information Networking (ICOIN), Chiang Mai, Thailand, 10–12 January 2018; pp. 249–254. [Google Scholar]
 Hawer, S.; Braun, N.; Reinhart, G. Analyzing interdependencies between factory change enablers applying fuzzy cognitive maps. Procedia CIRP 2016, 52, 151–156. [Google Scholar]
 Zhu, S.; Zhang, Y.; Gao, Y.; Wu, F. A Cooperative Task Assignment Method of Multi-UAV Based on Self-Organizing Map. In Proceedings of the 2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Zhengzhou, China, 18–20 October 2018; pp. 437–4375. [Google Scholar]
 Ladeira, M.J.; Ferreira, F.A.; Ferreira, J.J.; Fang, W.; Falcão, P.F.; Rosa, Á.A. Exploring the determinants of digital entrepreneurship using fuzzy cognitive maps. Int. Entrep. Manag. J. 2019, 15, 1077–1101. [Google Scholar]
 Herrero, J.; Dopazo, J. Combining hierarchical clustering and self-organizing maps for exploratory analysis of gene expression patterns. J. Proteome Res. 2002, 1, 467–470. [Google Scholar]
 Imani, M.; Ghoreishi, S.F. Optimal Finite-Horizon Perturbation Policy for Inference of Gene Regulatory Networks. IEEE Intell. Syst. 2020. [Google Scholar] [CrossRef]
 Drakopoulos, G.; Gourgaris, P.; Kanavos, A. Graph communities in Neo4j: Four algorithms at work. Evol. Syst. 2019. [Google Scholar] [CrossRef]
 Gutiérrez, I.; Gómez, D.; Castro, J.; Espínola, R. A new community detection algorithm based on fuzzy measures. In Proceedings of the International Conference on Intelligent and Fuzzy Systems, Istanbul, Turkey, 23–25 July 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 133–140. [Google Scholar]
 Luo, W.; Yan, Z.; Bu, C.; Zhang, D. Community detection by fuzzy relations. IEEE Trans. Emerg. Top. Comput. 2017, 8, 478–492. [Google Scholar] [CrossRef]
 Drakopoulos, G.; Gourgaris, P.; Kanavos, A.; Makris, C. A fuzzy graph framework for initializing k-means. IJAIT 2016, 25, 1650031:1–1650031:21. [Google Scholar] [CrossRef]
 Yang, C.H.; Chuang, L.Y.; Lin, Y.D. Epistasis Analysis using an Improved Fuzzy C-means-based Entropy Approach. IEEE Trans. Fuzzy Syst. 2019, 28, 718–730. [Google Scholar] [CrossRef]
 Tang, Y.; Ren, F.; Pedrycz, W. Fuzzy C-means clustering through SSIM and patch for image segmentation. Appl. Soft Comput. 2020, 87, 105928. [Google Scholar] [CrossRef]
 Felix, G.; Nápoles, G.; Falcon, R.; Froelich, W.; Vanhoof, K.; Bello, R. A review on methods and software for fuzzy cognitive maps. Artif. Intell. Rev. 2019, 52, 1707–1737. [Google Scholar] [CrossRef]
 Etingof, P.; Gelaki, S.; Nikshych, D.; Ostrik, V. Tensor Categories; American Mathematical Soc.: Providence, RI, USA, 2016; Volume 205. [Google Scholar]
 Batselier, K.; Chen, Z.; Liu, H.; Wong, N. A tensor-based Volterra series black-box nonlinear system identification and simulation framework. In Proceedings of the 2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Austin, TX, USA, 7–10 November 2016; pp. 1–7. [Google Scholar]
 Batselier, K.; Chen, Z.; Wong, N. Tensor Network alternating linear scheme for MIMO Volterra system identification. Automatica 2017, 84, 26–35. [Google Scholar] [CrossRef] [Green Version]
 Batselier, K.; Ko, C.Y.; Wong, N. Tensor network subspace identification of polynomial state space models. Automatica 2018, 95, 187–196. [Google Scholar] [CrossRef] [Green Version]
 Battaglino, C.; Ballard, G.; Kolda, T.G. A practical randomized CP tensor decomposition. SIAM J. Matrix Anal. Appl. 2018, 39, 876–901. [Google Scholar] [CrossRef]
 Sidiropoulos, N.D.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
 Ragusa, E.; Gastaldo, P.; Zunino, R.; Cambria, E. Learning with similarity functions: A tensor-based framework. Cogn. Comput. 2019, 11, 31–49. [Google Scholar] [CrossRef]
 Lu, W.; Chung, F.L.; Jiang, W.; Ester, M.; Liu, W. A deep Bayesian tensor-based system for video recommendation. ACM Trans. Inf. Syst. 2018, 37, 1–22. [Google Scholar] [CrossRef]
 Drakopoulos, G.; Stathopoulou, F.; Kanavos, A.; Paraskevas, M.; Tzimas, G.; Mylonas, P.; Iliadis, L. A genetic algorithm for spatio-social tensor clustering: Exploiting TensorFlow potential. Evol. Syst. 2020, 11, 491–501. [Google Scholar] [CrossRef]
 Bao, Y.T.; Chien, J.T. Tensor classification network. In Proceedings of the 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, USA, 17–20 September 2015; pp. 1–6. [Google Scholar]
 Yu, D.; Deng, L.; Seide, F. The deep tensor neural network with applications to large vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 2012, 21, 388–396. [Google Scholar] [CrossRef]
 Drakopoulos, G.; Mylonas, P. Evaluating graph resilience with tensor stack networks: A Keras implementation. Neural Comput. Appl. 2020, 32, 4161–4176. [Google Scholar] [CrossRef]
 Hore, V.; Viñuela, A.; Buil, A.; Knight, J.; McCarthy, M.I.; Small, K.; Marchini, J. Tensor decomposition for multiple-tissue gene expression experiments. Nat. Genet. 2016, 48, 1094. [Google Scholar] [CrossRef] [PubMed] [Green Version]
 Zhang, C.; Fu, H.; Liu, S.; Liu, G.; Cao, X. Low-rank tensor constrained multiview subspace clustering. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1582–1590. [Google Scholar]
 Cao, X.; Wei, X.; Han, Y.; Lin, D. Robust face clustering via tensor decomposition. IEEE Trans. Cybern. 2014, 45, 2546–2557. [Google Scholar] [CrossRef]
 Zaharia, M.; Xin, R.S.; Wendell, P.; Das, T.; Armbrust, M.; Dave, A.; Meng, X.; Rosen, J.; Venkataraman, S.; Franklin, M.J.; et al. Apache Spark: A unified engine for big data processing. Commun. ACM 2016, 59, 56–65. [Google Scholar] [CrossRef]
 Alexopoulos, A.; Drakopoulos, G.; Kanavos, A.; Mylonas, P.; Vonitsanos, G. Two-step classification with SVD preprocessing of distributed massive datasets in Apache Spark. Algorithms 2020, 13, 71. [Google Scholar] [CrossRef] [Green Version]
 Yang, H.K.; Yong, H.S. SPARAFAC: Distributed tensor decomposition using Apache Spark. J. KIISE 2018, 45, 280–287. [Google Scholar] [CrossRef]
 Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef] [Green Version]
 Bezanson, J.; Chen, J.; Chung, B.; Karpinski, S.; Shah, V.B.; Vitek, J.; Zoubritzky, L. Julia: Dynamism and performance reconciled by design. Proc. ACM Program. Lang. 2018, 2, 1–23. [Google Scholar] [CrossRef] [Green Version]
 Lee, J.; Kim, Y.; Song, Y.; Hur, C.K.; Das, S.; Majnemer, D.; Regehr, J.; Lopes, N.P. Taming undefined behavior in LLVM. ACM SIGPLAN Not. 2017, 52, 633–647. [Google Scholar] [CrossRef]
 Innes, M. Flux: Elegant machine learning with Julia. J. Open Source Softw. 2018, 3, 602. [Google Scholar] [CrossRef] [Green Version]
 Besard, T.; Foket, C.; De Sutter, B. Effective extensible programming: Unleashing Julia on GPUs. IEEE Trans. Parallel Distrib. Syst. 2018, 30, 827–841. [Google Scholar] [CrossRef] [Green Version]
 Mogensen, P.K.; Riseth, A.N. Optim: A mathematical optimization package for Julia. J. Open Source Softw. 2018, 3. [Google Scholar] [CrossRef]
 Ruthotto, L.; Treister, E.; Haber, E. jInv–A flexible Julia package for PDE parameter estimation. SIAM J. Sci. Comput. 2017, 39, S702–S722. [Google Scholar] [CrossRef] [Green Version]
 Krämer, S.; Plankensteiner, D.; Ostermann, L.; Ritsch, H. QuantumOptics.jl: A Julia framework for simulating open quantum systems. Comput. Phys. Commun. 2018, 227, 109–116. [Google Scholar] [CrossRef] [Green Version]
 Witte, P.A.; Louboutin, M.; Kukreja, N.; Luporini, F.; Lange, M.; Gorman, G.J.; Herrmann, F.J. A large-scale framework for symbolic implementations of seismic inversion algorithms in Julia. Geophysics 2019, 84, F57–F71. [Google Scholar] [CrossRef]
 Pittenger, D.J. The utility of the Myers-Briggs type indicator. Rev. Educ. Res. 1993, 63, 467–488. [Google Scholar] [CrossRef]
 Gordon, A.M.; Jackson, D. A Balanced Approach to ADHD and Personality Assessment: A Jungian Model. N. Am. J. Psychol. 2019, 21, 619–646. [Google Scholar]
 Lake, C.J.; Carlson, J.; Rose, A.; Chlevin-Thiele, C. Trust in name brand assessments: The case of the Myers-Briggs type indicator. Psychol.-Manag. J. 2019, 22, 91. [Google Scholar] [CrossRef]
 Stein, R.; Swan, A.B. Evaluating the validity of Myers-Briggs Type Indicator theory: A teaching tool and window into intuitive psychology. Soc. Personal. Psychol. Compass 2019, 13, e12434. [Google Scholar] [CrossRef]
 Plutchik, R.E.; Conte, H.R. Circumplex Models of Personality and Emotions; American Psychological Association: Washington, DC, USA, 1997. [Google Scholar]
 Ekman, P. Darwin, deception, and facial expression. Ann. N. Y. Acad. Sci. 2003, 1000, 205–221. [Google Scholar] [CrossRef] [PubMed] [Green Version]
 Furnham, A. Myers-Briggs type indicator (MBTI). In Encyclopedia of Personality and Individual Differences; Springer: Berlin/Heidelberg, Germany, 2020; pp. 3059–3062. [Google Scholar]
 Xie, Y.; Liang, R.; Liang, Z.; Huang, C.; Zou, C.; Schuller, B. Speech emotion classification using attention-based LSTM. IEEE/ACM Trans. Audio Speech Lang. Process. 2019, 27, 1675–1685. [Google Scholar] [CrossRef]
 Kim, Y.; Moon, J.; Sung, N.J.; Hong, M. Correlation between selected gait variables and emotion using virtual reality. J. Ambient. Intell. Humaniz. Comput. 2019, 1–8. [Google Scholar] [CrossRef]
 Zheng, W.; Yu, A.; Fang, P.; Peng, K. Exploring collective emotion transmission in facetoface interactions. PLoS ONE 2020, 15, e0236953. [Google Scholar] [CrossRef]
 Nguyen, T.L.; Kavuri, S.; Lee, M. A multimodal convolutional neuro-fuzzy network for emotion understanding of movie clips. Neural Netw. 2019, 118, 208–219. [Google Scholar] [CrossRef]
 Mishro, P.K.; Agrawal, S.; Panda, R.; Abraham, A. A novel type-2 fuzzy C-means clustering for brain MR image segmentation. IEEE Trans. Cybern. 2020. [Google Scholar] [CrossRef]
 Sheldon, S.; ElAsmar, N. The cognitive tools that support mentally constructing event and scene representations. Memory 2018, 26, 858–868. [Google Scholar] [CrossRef]
 Zap, N.; Code, J. Virtual and augmented reality as cognitive tools for learning. In EdMedia+ Innovate Learning; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2016; pp. 1340–1347. [Google Scholar]
 Spevack, S.C. Cognitive Tools and Cognitive Styles: Windows into the Culture-Cognition System. Ph.D. Thesis, UC Merced, Merced, CA, USA, 2019. [Google Scholar]
 Lajoie, S.P. Computers As Cognitive Tools: Volume II, No More Walls; Routledge: London, UK, 2020. [Google Scholar]
 Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef]
 Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: Control signals review. Neurocomputing 2017, 223, 26–44. [Google Scholar] [CrossRef]
 Sakhavi, S.; Guan, C.; Yan, S. Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5619–5629. [Google Scholar] [CrossRef] [PubMed]
 Kaur, B.; Singh, D.; Roy, P.P. Age and gender classification using brain–computer interface. Neural Comput. Appl. 2019, 31, 5887–5900. [Google Scholar] [CrossRef]
 Beale, M.H.; Hagan, M.T.; Demuth, H.B. Neural Network Toolbox User’s Guide; The Mathworks Inc.: Natick, MA, USA, 2010. [Google Scholar]
 Graillat, S.; Ibrahimy, Y.; Jeangoudoux, C.; Lauter, C. A Parallel Compensated Horner Scheme. In Proceedings of the SIAM Conference on Computational Science and Engineering (CSE), Atlanta, GA, USA, 27 February–3 March 2017. [Google Scholar]
 Amirhosseini, M.H.; Kazemian, H. Machine Learning Approach to Personality Type Prediction Based on the Myers–Briggs Type Indicator®. Multimodal Technol. Interact. 2020, 4, 9. [Google Scholar] [CrossRef] [Green Version]
| Symbol | Meaning |
| --- | --- |
| $\triangleq$ | Definition or equality by definition |
| $\{s_1,\dots,s_n\}$ or $\{s_k\}_{k=1}^{n}$ | Set with elements $s_1,\dots,s_n$ |
| $\lvert S \rvert$ or $\lvert \{s_1,\dots,s_n\} \rvert$ | Set cardinality |
| $\times_k$ | Tensor multiplication along the $k$-th direction |
| $\mathrm{vec}(\cdot)$ | Vectorize operation for matrices and tensors |
| $\mathrm{loc}(\cdot)$ | Location function for data points |
| $\mathrm{invloc}(\cdot)$ | Inverse location relationship for neurons |
| $\mathrm{weight}(u)$ | Synaptic weights of neuron $u$ |
| $\mathrm{bias}(u)$ | Bias of neuron $u$ |
| $\Gamma(u)$ | Neighborhood of neuron $u$ |
| $\Delta(u)$ | Cover of neuron $u$ |
| $\langle p_1 \Vert p_2 \rangle$ | Kullback–Leibler divergence between discrete distributions $p_1$ and $p_2$ |
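As a concrete illustration of two entries in the notation table, the following Python sketch implements a plain-list version of $\mathrm{vec}(\cdot)$ and of the Kullback–Leibler divergence $\langle p_1 \Vert p_2 \rangle$. Both are generic textbook definitions, not code from the paper.

```python
import math

def vec(tensor):
    """Flatten a nested list (matrix or tensor) into a single vector,
    visiting entries in row-major order."""
    if not isinstance(tensor, list):
        return [tensor]
    out = []
    for t in tensor:
        out.extend(vec(t))
    return out

def kl_divergence(p1, p2):
    """Kullback-Leibler divergence <p1 || p2> between two discrete
    distributions given as equal-length probability vectors."""
    return sum(a * math.log(a / b) for a, b in zip(p1, p2) if a > 0)
```

The divergence is zero when the two distributions coincide and strictly positive otherwise, which is what makes it usable as a dissimilarity measure.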
| Type | Attributes | Type | Attributes |
| --- | --- | --- | --- |
| ISTJ | Introversion, Sensing, Thinking, Judging | INFJ | Introversion, Intuition, Feeling, Judging |
| ISTP | Introversion, Sensing, Thinking, Perceiving | INFP | Introversion, Intuition, Feeling, Perceiving |
| ESTP | Extraversion, Sensing, Thinking, Perceiving | ENFP | Extraversion, Intuition, Feeling, Perceiving |
| ESTJ | Extraversion, Sensing, Thinking, Judging | ENFJ | Extraversion, Intuition, Feeling, Judging |
| ISFJ | Introversion, Sensing, Feeling, Judging | INTJ | Introversion, Intuition, Thinking, Judging |
| ISFP | Introversion, Sensing, Feeling, Perceiving | INTP | Introversion, Intuition, Thinking, Perceiving |
| ESFP | Extraversion, Sensing, Feeling, Perceiving | ENTP | Extraversion, Intuition, Thinking, Perceiving |
| ESFJ | Extraversion, Sensing, Feeling, Judging | ENTJ | Extraversion, Intuition, Thinking, Judging |
| Neighborhood | Weight | Shape | Neighborhood | Weight | Shape |
| --- | --- | --- | --- | --- | --- |
| Square | Square | Cube | Triangular | Triangular | Pyramid |
| Square | Triangular | Pyramid | Circular | Semicircular | Dome |
| Square | Semicircular | Dome | Gaussian | Gaussian | 3D Gaussian |
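The weight shapes listed above can be sketched as one-dimensional functions of the grid distance $d$ from the winning neuron, using the parameter values quoted later in the configuration table ($\mu_0 = 0$, $\sigma_0^2 = 8$, $\rho_0 = 4$, $a_0 = 4$). The paper's exact formulas are not reproduced here, so the Gaussian, semicircular (dome), and rectangular (boxcar) forms below are standard textbook choices and should be read as assumptions.

```python
import math

def gaussian_weight(d, mu0=0.0, var0=8.0):
    """Gaussian weight of a neuron at grid distance d (assumed form)."""
    return math.exp(-((d - mu0) ** 2) / (2.0 * var0))

def semicircular_weight(d, rho0=4.0):
    """Semicircular (dome) weight: positive inside radius rho0, zero outside."""
    return math.sqrt(max(rho0 ** 2 - d ** 2, 0.0)) / rho0

def rectangular_weight(d, a0=4.0):
    """Rectangular (boxcar) weight: 1 inside half-width a0, 0 outside."""
    return 1.0 if abs(d) <= a0 else 0.0
```

All three peak at the winning neuron ($d = 0$) and decay with distance, which is the property the table's "Shape" column describes in two grid dimensions.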
| Attribute | Position in (31) |
| --- | --- |
| Normalized number of words | $m_{1,1}$ |
| Normalized number of characters | $m_{1,2}$ |
| Normalized number of punctuation marks | $m_{1,3}$ |
| Normalized number of question marks | $m_{1,4}$ |
| Normalized number of exclamation points | $m_{1,5}$ |
| Normalized number of occurrences of two or more '.' | $m_{1,6}$ |
| Normalized number of positive words | $m_{2,1}$ |
| Normalized number of negative words | $m_{2,2}$ |
| Normalized number of self-references | $m_{2,3}$ |
| Normalized number of references to others | $m_{2,4}$ |
| Normalized number of words pertaining to emotion | $m_{2,5}$ |
| Normalized number of words pertaining to reason | $m_{2,6}$ |
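A minimal sketch of how the $2 \times 6$ attribute matrix of Equation (31) could be computed from a text follows. The mini-lexicons and the normalizing constant `max_len` are illustrative placeholders, not the word lists or normalizers actually used in the paper.

```python
import re

# Hypothetical mini-lexicons; the paper's actual word lists are not reproduced here.
POSITIVE = {"good", "great", "happy"}
NEGATIVE = {"bad", "sad", "awful"}
SELF_REF = {"i", "me", "my", "mine", "myself"}
OTHER_REF = {"you", "he", "she", "they", "we"}
EMOTION = {"love", "hate", "fear", "joy"}
REASON = {"because", "therefore", "hence", "thus"}

def attribute_matrix(text, max_len=280):
    """Build the 2 x 6 attribute matrix of Equation (31): row 1 holds
    surface statistics, row 2 holds lexicon-based counts. All entries
    are normalized; the normalizers used here are illustrative choices."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    n = max(len(words), 1)
    row1 = [
        len(words) / max_len,                        # m_{1,1}: words
        len(text) / max_len,                         # m_{1,2}: characters
        sum(c in ".,;:!?" for c in text) / max_len,  # m_{1,3}: punctuation
        text.count("?") / max_len,                   # m_{1,4}: question marks
        text.count("!") / max_len,                   # m_{1,5}: exclamation points
        len(re.findall(r"\.{2,}", text)) / max_len,  # m_{1,6}: two or more '.'
    ]
    row2 = [
        sum(w in POSITIVE for w in words) / n,       # m_{2,1}: positive words
        sum(w in NEGATIVE for w in words) / n,       # m_{2,2}: negative words
        sum(w in SELF_REF for w in words) / n,       # m_{2,3}: self-references
        sum(w in OTHER_REF for w in words) / n,      # m_{2,4}: references to others
        sum(w in EMOTION for w in words) / n,        # m_{2,5}: emotion words
        sum(w in REASON for w in words) / n,         # m_{2,6}: reason words
    ]
    return [row1, row2]
```

Splitting the attributes into a surface-statistics row and a lexicon row is what gives the matrix its second dimension, which the tensor-based distance metrics then exploit.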
| Parameter | Options |
| --- | --- |
| Synaptic weight initialization | Random |
| Bias mechanism | Not implemented |
| Neighborhood $\Gamma(u)$ shape | Cross |
| Distance function $g(\cdot,\cdot)$ | Tensor (T), fuzzy tensor (F), $\ell_1$ norm (L1), $\ell_2$ norm (L2) |
| Proximity function $h(\cdot,\cdot)$ | Gaussian (G), circular (C), rectangular (R) |
| Cover threshold $\eta_0$, Equation (27) | $0.5$ |
| Weight function in $\mathcal{C}$ | Gaussian, circular, rectangular (as above) |
| Gaussian | $\mu_0 = 0$, $\sigma_0^2 = 8$ |
| Circular | $\rho_0 = 4$ |
| Rectangular | $a_0 = 4$ |
| Learning rate parameter | Cosine (S), inverse linear (L), inverse quadratic (Q), exponential (E) |
| Cosine | $r_0 = 40$ |
| Inverse linear | $\gamma_2 = 1$, $\gamma_1 = 0.025$, $\gamma_0 = 1$ |
| Exponential | $\gamma_0 = 1$, $\gamma_1 = 0.125$ |
| Grid size $b_0$ and $b_1$, Equation (30) | $b_0 \in \{2,\dots,8\}$, $b_1 = 0$ |
| Number of classes $C_0$ | 16 |
| Number of rows per class $N_0$ | 256 |
| Number of attributes | $2Q_0$ |
| Number of runs $R_0$ | 100 |
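The three learning rate schedules parameterized above can be sketched as functions of the epoch index $n$. The paper's numbered equations for these schedules are not reproduced in this excerpt, so the functional forms below are common choices consistent with the parameter names and should be treated as assumptions.

```python
import math

def cosine_rate(n, r0=40):
    """Cosine decay over epochs 0..r0, reaching zero at n = r0 (assumed form)."""
    return math.cos(math.pi * min(n, r0) / (2.0 * r0))

def inverse_linear_rate(n, g0=1.0, g1=0.025, g2=1.0):
    """Inverse linear decay g0 / (g1 * n + g2) (assumed form)."""
    return g0 / (g1 * n + g2)

def exponential_rate(n, g0=1.0, g1=0.125):
    """Exponential decay g0 * exp(-g1 * n) (assumed form)."""
    return g0 * math.exp(-g1 * n)
```

All three start near 1 and decay monotonically, trading early plasticity for late-stage stability; they differ only in how aggressively the rate shrinks, which is what the results table compares.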
| # | Configuration | # | Configuration | # | Configuration | # | Configuration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | $(p_0, p_0, \mathrm{L1}, \mathrm{C}, \mathrm{C}, \mathrm{S})$ | 10 | $(p_0, p_0, \mathrm{L2}, \mathrm{C}, \mathrm{C}, \mathrm{S})$ | 19 | $(p_0, p_0, \mathrm{T}, \mathrm{C}, \mathrm{C}, \mathrm{S})$ | 28 | $(p_0, p_0, \mathrm{F}, \mathrm{C}, \mathrm{C}, \mathrm{S})$ |
| 2 | $(p_0, p_0, \mathrm{L1}, \mathrm{R}, \mathrm{R}, \mathrm{S})$ | 11 | $(p_0, p_0, \mathrm{L2}, \mathrm{R}, \mathrm{R}, \mathrm{S})$ | 20 | $(p_0, p_0, \mathrm{T}, \mathrm{R}, \mathrm{R}, \mathrm{S})$ | 29 | $(p_0, p_0, \mathrm{F}, \mathrm{R}, \mathrm{R}, \mathrm{S})$ |
| 3 | $(p_0, p_0, \mathrm{L1}, \mathrm{G}, \mathrm{G}, \mathrm{S})$ | 12 | $(p_0, p_0, \mathrm{L2}, \mathrm{G}, \mathrm{G}, \mathrm{S})$ | 21 | $(p_0, p_0, \mathrm{T}, \mathrm{G}, \mathrm{G}, \mathrm{S})$ | 30 | $(p_0, p_0, \mathrm{F}, \mathrm{G}, \mathrm{G}, \mathrm{S})$ |
| 4 | $(p_0, p_0, \mathrm{L1}, \mathrm{C}, \mathrm{C}, \mathrm{L})$ | 13 | $(p_0, p_0, \mathrm{L2}, \mathrm{C}, \mathrm{C}, \mathrm{L})$ | 22 | $(p_0, p_0, \mathrm{T}, \mathrm{C}, \mathrm{C}, \mathrm{L})$ | 31 | $(p_0, p_0, \mathrm{F}, \mathrm{C}, \mathrm{C}, \mathrm{L})$ |
| 5 | $(p_0, p_0, \mathrm{L1}, \mathrm{R}, \mathrm{R}, \mathrm{L})$ | 14 | $(p_0, p_0, \mathrm{L2}, \mathrm{R}, \mathrm{R}, \mathrm{L})$ | 23 | $(p_0, p_0, \mathrm{T}, \mathrm{R}, \mathrm{R}, \mathrm{L})$ | 32 | $(p_0, p_0, \mathrm{F}, \mathrm{R}, \mathrm{R}, \mathrm{L})$ |
| 6 | $(p_0, p_0, \mathrm{L1}, \mathrm{G}, \mathrm{G}, \mathrm{L})$ | 15 | $(p_0, p_0, \mathrm{L2}, \mathrm{G}, \mathrm{G}, \mathrm{L})$ | 24 | $(p_0, p_0, \mathrm{T}, \mathrm{G}, \mathrm{G}, \mathrm{L})$ | 33 | $(p_0, p_0, \mathrm{F}, \mathrm{G}, \mathrm{G}, \mathrm{L})$ |
| 7 | $(p_0, p_0, \mathrm{L1}, \mathrm{C}, \mathrm{C}, \mathrm{E})$ | 16 | $(p_0, p_0, \mathrm{L2}, \mathrm{C}, \mathrm{C}, \mathrm{E})$ | 25 | $(p_0, p_0, \mathrm{T}, \mathrm{C}, \mathrm{C}, \mathrm{E})$ | 34 | $(p_0, p_0, \mathrm{F}, \mathrm{C}, \mathrm{C}, \mathrm{E})$ |
| 8 | $(p_0, p_0, \mathrm{L1}, \mathrm{R}, \mathrm{R}, \mathrm{E})$ | 17 | $(p_0, p_0, \mathrm{L2}, \mathrm{R}, \mathrm{R}, \mathrm{E})$ | 26 | $(p_0, p_0, \mathrm{T}, \mathrm{R}, \mathrm{R}, \mathrm{E})$ | 35 | $(p_0, p_0, \mathrm{F}, \mathrm{R}, \mathrm{R}, \mathrm{E})$ |
| 9 | $(p_0, p_0, \mathrm{L1}, \mathrm{G}, \mathrm{G}, \mathrm{E})$ | 18 | $(p_0, p_0, \mathrm{L2}, \mathrm{G}, \mathrm{G}, \mathrm{E})$ | 27 | $(p_0, p_0, \mathrm{T}, \mathrm{G}, \mathrm{G}, \mathrm{E})$ | 36 | $(p_0, p_0, \mathrm{F}, \mathrm{G}, \mathrm{G}, \mathrm{E})$ |
| Distance function | Cosine | Inv. linear | Exponential |
| --- | --- | --- | --- |
| $\ell_1$ norm | $I_0 = 26.4417$ / $\sigma_0^2 = 12.3873$ | $I_0 = 27.500$ / $\sigma_0^2 = 16.8865$ | $I_0 = 33.1125$ / $\sigma_0^2 = 14.8873$ |
| $\ell_2$ norm | $I_0 = 22.3334$ / $\sigma_0^2 = 13.0228$ | $I_0 = 24.667$ / $\sigma_0^2 = 14.3098$ | $I_0 = 31.8333$ / $\sigma_0^2 = 15.5642$ |
| Tensor | $I_0 = 18.8731$ / $\sigma_0^2 = 11.6686$ | $I_0 = 20.2504$ / $\sigma_0^2 = 12.7633$ | $I_0 = 26.0021$ / $\sigma_0^2 = 14.6574$ |
| Fuzzy | $I_0 = 14.4457$ / $\sigma_0^2 = 12.1282$ | $I_0 = 18.3333$ / $\sigma_0^2 = 12.6645$ | $I_0 = 25.3333$ / $\sigma_0^2 = 14.0995$ |
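The mean iteration counts $I_0$ above can be compared programmatically. The snippet below simply copies the table's values and picks the distance/schedule combination that converges in the fewest iterations on average.

```python
# Mean iteration counts I_0 from the results table,
# keyed by (distance function, learning rate schedule).
iterations = {
    ("l1", "cosine"): 26.4417, ("l1", "inv_linear"): 27.500, ("l1", "exponential"): 33.1125,
    ("l2", "cosine"): 22.3334, ("l2", "inv_linear"): 24.667, ("l2", "exponential"): 31.8333,
    ("tensor", "cosine"): 18.8731, ("tensor", "inv_linear"): 20.2504, ("tensor", "exponential"): 26.0021,
    ("fuzzy", "cosine"): 14.4457, ("fuzzy", "inv_linear"): 18.3333, ("fuzzy", "exponential"): 25.3333,
}

# The fuzzy tensor distance paired with the cosine schedule attains the minimum.
best = min(iterations, key=iterations.get)
print(best)  # -> ('fuzzy', 'cosine')
```

This ordering holds within every schedule column as well: fuzzy beats tensor, which beats $\ell_2$, which beats $\ell_1$.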
ISTJ  ISFJ  INFJ  INTJ 
ISTP  ISFP  INFP  INTP 
ESTP  ESFP  ENFP  ENTP 
ESTJ  ESFJ  ENFJ  ENTJ 
ENFJ  ISFP  ENFJ  ESFP 
ISTJ  INTP  ESTJ  ISFJ 
INTJ  INFJ  ENTP  ISTP 
ESFJ  ENTJ  ESTP  ISFP 
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Drakopoulos, G.; Kanavos, A.; Mylonas, P.; Pintelas, P. Extending Fuzzy Cognitive Maps with Tensor-Based Distance Metrics. Mathematics 2020, 8, 1898. https://doi.org/10.3390/math8111898