Multi-Dimensional Data Analysis Platform (MuDAP): A Cognitive Science Data Toolbox

Abstract: Researchers in cognitive science have long been interested in modeling human perception with statistical methods. This is challenging because multi-dimensional perception data are typically intertwined with complex inner structures. Previous studies in cognitive science have commonly applied principal component analysis (PCA) to truncate data dimensions, not necessarily because of its mathematical merit but partly because it is easy to conduct with commonly accessible statistical software. However, dimension reduction may not be the best analysis when modeling data with no more than 20 dimensions. Using state-of-the-art techniques, researchers in other disciplines (e.g., computer vision) have classified data with hundreds of dimensions using neural networks and revealed the inner structure of those data. It may therefore be more appropriate to process human perception data directly with neural networks. In this paper, we introduce the multi-dimensional data analysis platform (MuDAP), a powerful toolbox for data analysis in cognitive science. It utilizes artificial intelligence as well as network analysis, a method that takes advantage of data symmetry. With its graphic user interface, a researcher, with or without prior programming experience, can analyze multi-dimensional data with ease.


Introduction
The main challenge of cognitive science is not only revealing the apparent facts of perception but also clarifying the mechanisms behind perception and cognition. However, researchers examining human cognition are often overwhelmed by its complexity. For instance, when perceiving a given face, a viewer is able to extract facial identity, expression, and social characteristics (such as attractiveness and competence) accurately and effortlessly [1][2][3].
The modeling of human perception depends on the repertoire of data analysis methods. Many researchers in cognitive science are equipped only with classical statistical analysis tools such as analysis of variance (ANOVA) and linear regression. These researchers are challenged when studying face perception, as many of the dimensions in face perception are intertwined. For instance, facial identity ("Who is the person?") and facial expression ("What is the emotion?") are widely believed to be distinct. Facial identity is generally regarded as invariant information that remains consistent over a short period of time, while facial expression is regarded as variant information that changes with even tiny muscle movements from moment to moment [4,5]. Conversely, converging evidence suggests that the perceived emotional expression of a face is affected by the facial identity of that person; thus, these two aspects of a face are interdependent [6,7]. Similarly, although one may perceive multiple social characteristics from a face, many of these characteristics are heavily correlated with each other [8]. Several past studies, such as [8,9], even suggested that seemingly complicated social characteristics can be represented within a two- or three-dimensional framework. For example, most frameworks hold that the dominance and the trustworthiness of a face are perceived via orthogonal mechanisms.
So far, researchers in cognitive science have tended to reduce data dimensions when dealing with face perception data. A common method for dimension reduction in multi-dimensional data is principal component analysis (PCA), a multivariate method. Briefly, in a typical PCA, the original data matrix (in which variables are largely dependent on each other) is transformed into a new matrix formed by principal components (PCs) calculated via linear combinations of the original data [10]. The first PC explains most of the data variance, and each following PC explains most of the remaining variance. PCA has been widely used in modeling the perception of social characteristics, but there are several concerns about relying only on PCA for this kind of multi-dimensional data.
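As a concrete reference point, the PCA transformation described above can be sketched in a few lines of NumPy. The toy ratings matrix and the choice of two components here are illustrative assumptions, not values from the paper's data.

```python
# Minimal PCA sketch via SVD (toy data; a real analysis would use the
# measured perception ratings).
import numpy as np

def pca(X, n_components):
    """Project X onto its first n_components principal components."""
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T         # PC scores (linear combinations)
    explained = (S ** 2) / np.sum(S ** 2)     # variance ratio per PC, descending
    return scores, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # 100 ratings x 8 traits (toy)
scores, ratio = pca(X, 2)
```

Because the singular values are sorted in descending order, the first component always carries the largest share of the variance, matching the description above.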
First of all, PCA is not easy to conduct properly. Though some researchers in face perception have used PCA to reduce data dimensions [11,12], its operating procedures are not standardized across labs. In a typical principal component analysis, the original data are supposed to be continuous variables, whereas in many human perception studies the data are measured on discrete Likert scales (e.g., the 9-point scale in [8]). Although many treat such data as continuous, some argue that this approximation may jeopardize the rigor of the data analysis. Furthermore, the implications of principal component analysis involve subtleties that many researchers may not fully understand or command. For example, the data rotation operation is generally recommended but is sometimes left undone [8]. In a recent large-scale replication study [13], the authors argued that the rotation procedure is vital but has been ignored in some seminal works. Second, PCA is not ideal for all research questions. It operates under the assumption that samples follow specific distributions, which can lead to meaningless reductions when dealing with data that are not uniformly distributed. For example, real human perception data may contain multiple clusters, but these clusters are not evenly distributed in some unknown dimensions. Furthermore, reducing data dimensions in human perception data is not a necessity. The general logic behind PCA is to reduce the data dimensions with a mathematical algorithm, but data reduction may not be the omnipotent solution for modeling face perception with a small number of dimensions [14]. Though computer science researchers utilize PCA as well, the number of output dimensions (the number of PCs) in their typical studies is much greater than the original number of dimensions dealt with in human cognition studies. For example, in one of the pioneering works using computer vision techniques in face perception [15], the authors used 50 PCs to reduce the data dimensions of real face images (with 54,150 dimensions).
In considering the suitability of PCA as a dimension-reduction technique, t-distributed stochastic neighbor embedding (t-SNE) emerges as a compelling alternative [16,17]. In distinction from PCA, t-SNE has several advantageous properties. As a nonlinear algorithm, t-SNE excels at preserving the local intricacies of data structures [18,19]. Tailored specifically for visualizing complex, high-dimensional datasets, t-SNE generates two- or three-dimensional representations that illuminate data clusters and relationships, which may remain obscured by linear methods such as PCA. Furthermore, the robustness of t-SNE to diverse data characteristics sets it apart: it eschews the assumption of a global Gaussian distribution in favor of a more adaptable probabilistic model, capable of flexibly accommodating a spectrum of data distributions. Moreover, as a projection-based method [20], t-SNE does not cause serious data loss, so its clustering results on the two-dimensional plane can reveal hidden information that PCA may relegate to non-principal components, including newly emerging clusters, the internal structure of the data, and so on.
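The two-dimensional embedding described above can be sketched with scikit-learn's t-SNE implementation. The toy two-cluster data, the perplexity value, and the random seed are illustrative assumptions for the sketch, not settings taken from MuDAP.

```python
# Sketch of a 2D t-SNE embedding (toy data mimicking trait-rating vectors).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two toy clusters in 8 dimensions.
X = np.vstack([rng.normal(0, 1, size=(30, 8)),
               rng.normal(5, 1, size=(30, 8))])

# perplexity and random_state are illustrative choices.
emb = TSNE(n_components=2, perplexity=10, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # one 2D point per original element
```

Plotting `emb` with the elements' labels as colors then reveals whether the high-dimensional data form visible clusters, as in the 'Dimension Reduction' function described later.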
To validate the reliability of the dimension-reduction results, network analysis, a method that takes advantage of data symmetry, is necessary [21][22][23]. Network analysis, especially in the fields of cognition and psychology [24][25][26], is a powerful tool for assessing the reliability of clustering results by leveraging the inherent symmetries within the data. By examining the strength of connections within the network, this method reflects the degree of correlation between clustering results, capitalizing on the symmetry of the pairwise clustering correlation data. By identifying where these symmetries hold, network analysis can confirm that the clustering results are not merely a product of random chance but reflect genuine, underlying groupings within the dataset.
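The paper's own network analysis follows [24]; as a generic sketch of the underlying idea, one can build a symmetric pairwise correlation matrix between cluster profiles and threshold it into an adjacency matrix. The profile data and the threshold value below are illustrative assumptions.

```python
# Sketch of the thresholded-correlation network idea (toy data).
import numpy as np

def correlation_network(profiles, threshold=0.8):
    """profiles: (n_clusters, n_features) mean feature vector per cluster."""
    C = np.corrcoef(profiles)                 # symmetric pairwise correlations
    A = (np.abs(C) >= threshold).astype(int)  # keep only strong connections
    np.fill_diagonal(A, 0)                    # ignore self-connections
    return C, A

rng = np.random.default_rng(1)
profiles = rng.random((9, 8))                 # 9 clusters x 8 traits (toy)
C, A = correlation_network(profiles)
```

Because the correlation matrix is symmetric by construction, any asymmetry in the resulting network would signal an error in the clustering pipeline, which is precisely the reliability check described above.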
It is challenging and unrealistic for scholars who do not specialize in computer science to use complex dimension-reduction tools, even though in some circumstances reducing the data dimension is unnecessary. With this in mind, the neural network approach might be the solution. Using state-of-the-art techniques, researchers in other disciplines (specifically, computer vision) have classified data with hundreds of dimensions using neural networks and revealed the inner structure of the data. This cutting-edge technique has been widely utilized in various applications, such as image classification [27][28][29][30], system identification [31][32][33], natural language processing [34][35][36], autonomous driving [37][38][39][40], and fault diagnosis [41][42][43][44]. Thus, neural networks might be ideal candidates for further classifying perception data. Although some researchers are aware of neural networks, they have difficulties applying these techniques. Specifically, it may take a long time to learn the programming language and build the environment for the neural network. Thus, it is reasonable to introduce an easy-to-use platform with a graphic user interface (similar to the statistical software such researchers already use) for researchers in cognitive science who are not familiar with coding.
Considering the aforementioned concerns, this paper offers the multi-dimensional data analysis platform (MuDAP) for researchers in cognitive science. MuDAP is designed with a standardized pipeline and equipped with state-of-the-art neural network techniques based on existing machine learning libraries in Python. The contributions of this paper are listed as follows:
1. The framework structure of the multi-dimensional data analysis platform (MuDAP) is introduced.
2. A graphic visualization dimension-reduction algorithm based on t-SNE is utilized to uncover the real clusters underlying the inner structure of the data.
3. A network analysis, taking advantage of the symmetric structure of the pairwise correlations between predicted clusters, is performed to verify the reliability of the clustering results.
4. An embedded neural network training algorithm is proposed to solve the corresponding regression and classification problems using the cluster results as labels.
5. A step-by-step illustration of how to use MuDAP to analyze the introduced face perception experiment data is shown, verifying the functions of MuDAP.

Framework Structure
The multi-dimensional data analysis platform (MuDAP) is built within Python, and its framework structure is first introduced in this section.

Dataset Import
MuDAP investigates the causal relationships within classes of high-dimensional data in cognitive science. The imported original data should be obtained from real sources, such as decision-making or perception data collected from a questionnaire survey. In this sense, an element denotes a set of collected high-dimensional data, where a_i^j, j = 1, …, m, is the value at a featured dimension, m is the total number of element features, and l_i is the labeled value of that element. All collected elements form the dataset Λ and are, hence, stored in the directory 'MuDAP/LoadDataFile/…' with the file name 'data' in CSV format.

Graphic User Interface
As shown in Figure A1, MuDAP has six function buttons on its main welcome screen, and these buttons correspond to the procedures explained below.
1. Dimension Reduction: This function employs the t-SNE method to reduce the high-dimensional data onto a 2D plane and plots all the data with their labels in this plane. It verifies the data structure to confirm whether any clusters are formed, so that later procedures such as regression or classification can be carried out. If so, the user can manually insert the center points of the obtained clusters into memory.
2. Network Analysis: This function can only be performed after the center points of the clusters are stored. It explores the relationship between all elements in a cluster and each original featured dimension via network analysis.
3. User Configuration: This function enables the user to tune the training parameters shown in Figure A2 according to their particular data structure for the later DNN training process.
4. Regression Analysis with DNN: This function trains a deep neural network to predict the distances between any given new element and all existing cluster centers in the 2D plane.
5. Classification Analysis with DNN: This function trains a deep neural network to predict the closest cluster to any given new element in the 2D plane.
6. Contact Information: This function shows the developer contact information for MuDAP so that users can make direct queries.
The toolbox we have developed operates through a three-step process. Initially, the t-SNE algorithm is utilized to capture the spatial structure of the data without serious loss of information. This step is crucial, as it provides a comprehensive overview of the data's inherent dimensions. Second, network analysis is employed to establish direct connections between the various clusters. This analysis is instrumental in validating the relationships within the data and ensuring the reliability of the clustering results. Finally, a neural network is trained to perform regression and classification on the data, which is essential for drawing meaningful insights and predictions. After completing these three steps, the complex and cumbersome high-dimensional cognitive data can be objectively and quantitatively described and analyzed.

User Instructions for MuDAP
Before conducting any further data analysis, the user must check the structure of the imported data using the 'Dimension Reduction' function and double-check whether the formed clusters show any consistency with the given labels. If several clusters exist, the user can insert their centers into storage and perform the following steps. Then, the 'Network Analysis' function is performed to further analyze the relationships between the individual clusters.
After that, the tuning parameters are set in the 'User Configuration' function before any neural network training procedures. The DNN is then trained to solve the respective regression and classification problems via the 'Regression Analysis with DNN' and 'Classification Analysis with DNN' functions. In this sense, MuDAP is capable of discovering the causal relationship between the data structure and the data type.

Neural-Network-Based Training Procedure Description
This section describes the behind-the-scenes mechanism of MuDAP by explaining how to train the corresponding neural network models for both regression and classification problems.

Problem Formulation
First of all, if the 'Dimension Reduction' function confirms that the imported data structure appears to contain clusters, the user can enter the center point values of these clusters manually. In this sense, MuDAP is capable of classifying the data type by calculating the distance between an obtained element point and each cluster center in the 2D plane. The neural network model takes the m imported data features as input and outputs n results, one associated with each cluster, so the dimension n denotes the total number of clusters. For each element λ_i ∈ Λ, the distances z_{ij}, j = 1, …, n, from its t-SNE plot position to all cluster centers are measured to generate the regression benchmark vector z_i = [z_{i1}, …, z_{in}]^⊤. Then, denote by l_i the index of the cluster with the shortest distance to each t-SNE plot. Using these values, the classification benchmark vector is the one-hot vector whose l_i-th element is 1 and whose other elements are 0. After defining the above regression and classification benchmark vectors, the original dataset Λ is randomly split into a training set Λ_train and a testing set Λ_test, with a given proportion, for the later training procedures. Using the above notation, the corresponding regression and classification problems for high-dimensional data are stated below.
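The benchmark vectors described above can be sketched directly: for each element's 2D t-SNE position, the regression target is its distance to every cluster center, and the classification target is a one-hot vector for the nearest center. The points and centers below are toy values for illustration.

```python
# Sketch of the regression and classification benchmark vectors.
import numpy as np

def benchmarks(points, centers):
    # points: (N, 2) t-SNE positions; centers: (n, 2) cluster centers
    z = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = z.argmin(axis=1)                  # index l_i of the nearest center
    onehot = np.eye(centers.shape[0])[labels]  # classification benchmark vector
    return z, labels, onehot

points = np.array([[0.0, 0.0], [4.0, 4.0]])
centers = np.array([[0.0, 1.0], [5.0, 5.0], [9.0, 0.0]])
z, labels, onehot = benchmarks(points, centers)
```

Here `z` plays the role of the regression benchmark vectors z_i, and each row of `onehot` is the corresponding classification benchmark.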
Definition 1. The High Dimensional Data Regression Problem is defined as updating the essential parameters of the neural network iteratively alongside the training epochs based on the data from the training set Λ_train, so that the mean square error loss function is minimized for all elements in Λ_train.
Definition 2. The High Dimensional Data Classification Problem is defined as updating the essential parameters of the neural network repeatedly alongside the training epochs based on the data from the training set Λ_train, such that the cross entropy loss function is minimized for every element in Λ_train.

Neural Network Structure
MuDAP employs neural network models to solve the above-mentioned regression and classification problems in Definitions 1 and 2, and an exemplary neural network model is shown in Figure 1 for a direct view. As seen in this figure, the general structure of the embedded model within MuDAP has m units in its input layer and n units in its output layer, respectively. There are k hidden layers in the dashed red box, each of which has r_i units. The layer-to-layer data transformation is described by z_{i+1}^{in} = W_{i+1} z_i^{out}, where z_i^{out} ∈ R^{r_i} is the i-th layer output, z_{i+1}^{in} ∈ R^{r_{i+1}} is the (i+1)-th layer input, and W_{i+1} ∈ R^{r_{i+1}×r_i} is a transfer matrix. Moreover, each unit embeds the activation function f_{i,j}(·) = RReLU(·), where RReLU(·) is a randomized leaky rectified linear unit function that returns x for x ≥ 0 and ax for x < 0, with a being a small scalar. The RReLU(·) activation endows the network model with the non-linear properties needed to approximate non-linear logistic functions in practice. The output layer computes its result directly from the final hidden layer for the regression problem and applies g(·), a softmax function, for the classification problem.
Figure 1. The structure of the embedded neural network model.
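The forward pass of this structure can be sketched in NumPy. A fixed-slope leaky ReLU stands in for RReLU here (RReLU randomizes the negative slope during training), and the layer widths and weights are illustrative, not MuDAP's actual configuration.

```python
# Sketch of the embedded model's forward pass (toy weights).
import numpy as np

def leaky_relu(x, a=0.01):
    # fixed-slope stand-in for RReLU's randomized slope a
    return np.where(x >= 0, x, a * x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def forward(X, layers, classify=False):
    Z = X
    for W, b in layers[:-1]:
        Z = leaky_relu(Z @ W + b)   # hidden layers: activation of W z + b
    W, b = layers[-1]
    Z = Z @ W + b                    # output layer
    return softmax(Z) if classify else Z   # g(.) for classification

rng = np.random.default_rng(0)
m, r, n = 9, 16, 9                   # input dim, hidden width, clusters (toy)
layers = [(rng.normal(size=(m, r)) * 0.1, np.zeros(r)),
          (rng.normal(size=(r, r)) * 0.1, np.zeros(r)),
          (rng.normal(size=(r, n)) * 0.1, np.zeros(n))]
X = rng.normal(size=(4, m))
probs = forward(X, layers, classify=True)
```

For the regression problem the same network is called with `classify=False`, so the outputs are unbounded distance predictions rather than class probabilities.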

A Training Algorithm Description
After generating an appropriate neural network model, the embedded neural network training algorithm shown in Algorithm 1 is proposed to handle the problems in Definitions 1 and 2. This algorithm first divides the training set Λ_train into a series of batches with an equal number of elements, and it employs all elements in each batch to train the parameters of the neural network model. Any batch size larger than 1 is capable of compensating for the negative effect of a single poor/wrong data sample using the other good data in the same batch.
In addition, the estimation accuracy rates over Λ_train and Λ_test are denoted as ϕ_train and ϕ_test, respectively, and are considered the performance indices of the classification problem; each rate is the proportion of elements λ_i whose predicted cluster matches the benchmark label.

Algorithm 1 The embedded neural network training algorithm.
Input: The dataset Λ with high dimensional data structure, the mean square error loss function (7), the cross entropy loss function (8), the initial model parameters W_i and b_{i,j}, the training element number η, the training batch size n_batch, and the total training epoch number n_epoch.
Output: W_i and b_{i,j} (the updated model parameters).
1: Initialization: set the training epoch number to i = 1.
2: Randomly split η elements of Λ into Λ_train and the rest into Λ_test.
3: Form the int(η/n_batch) batches from Λ_train.
4: while i ⩽ n_epoch do
5: for each batch j do
6: Calculate the summation of (7) or (8) over the j-th batch.
7: Calculate the gradients of W_i and b_{i,j} with respect to the obtained sum.
8: Conduct adaptive moment estimation (Adam) gradient descent training to update W_i and b_{i,j} using the obtained gradients.
9: end for
10: Obtain the estimation accuracy rates of the updated model from the data in Λ_train and Λ_test.
11: Set i = i + 1 for the next training epoch.
12: end while
13: return W_i and b_{i,j} (the updated model parameters).
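The batch-plus-Adam loop of Algorithm 1 can be sketched in NumPy. To keep the gradients short, a simple linear model with a mean square error loss stands in for the DNN; all hyperparameters are toy values, not MuDAP's settings.

```python
# Minimal sketch of Algorithm 1: mini-batches, MSE gradients, Adam updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # toy training set
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)                               # parameters to train
m = np.zeros(3)                               # Adam first-moment estimate
v = np.zeros(3)                               # Adam second-moment estimate
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
t, n_batch = 0, 16

losses = []
for epoch in range(50):                       # while i <= n_epoch
    for j in range(0, len(X), n_batch):       # for each batch j
        xb, yb = X[j:j + n_batch], y[j:j + n_batch]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)   # gradient of the MSE sum
        t += 1                                 # Adam update of the parameters
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
        w -= lr * mhat / (np.sqrt(vhat) + eps)
    losses.append(np.mean((X @ w - y) ** 2))  # track the epoch loss
```

Swapping the linear model for the MLP forward pass and the MSE gradient for a cross-entropy gradient gives the classification variant of the same loop.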

Data Analysis
Here is a step-by-step illustration of using MuDAP. In this case, we analyze data from a previous study. We first introduce the experimental design and the data structure, and then show how to analyze the data via MuDAP.

Design Parameter Specifications
To obtain the perception data, a total of 32 independent participants were asked to make individual judgments on 40 different faces. Each participant rated these faces from 1 to 7 on 8 common social traits and also expressed their subjective feeling about the corresponding possible academic major, i.e., Science and Engineering or Humanities and Social Sciences. In addition, the authors asked the participants to rate the perceived emotion of each face from 1 to 7, with 1 denoting least emotional and 7 most emotional. Therefore, 32 × 40 = 1280 human perception elements were collected to form a dataset Λ, with each element denoted as λ_i = [a_i^1, …, a_i^8, b_i, c_i]^⊤, where a_i^j, j = 1, …, 8, is the subjective perception value for each social trait, b_i is the perceived emotion of that face, and c_i represents the binary feeling concerning academic major. To avoid subjective interference in decision making, the authors further converted these values into either 0 or 1 to yield a sufficiently sparse data structure. The data are publicly available at the Open Science Framework (OSF) at https://osf.io/4zf8t/ (accessed on 22 February 2022).
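The binarization step described above can be sketched as follows. The paper does not state the exact conversion rule, so the midpoint threshold used here is an assumption, and the ratings matrix is randomly generated rather than loaded from the OSF dataset.

```python
# Sketch of converting 1-7 ratings into a sparse 0/1 structure
# (threshold rule is an assumption; data are toy values).
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(1280, 9))   # 32 raters x 40 faces; 8 traits + emotion
binary = (ratings > 4).astype(int)             # above the scale midpoint -> 1, else 0
```

A real analysis would replace `ratings` with the CSV loaded through MuDAP's dataset-import step.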

t-SNE Plot Analysis
The first step is to examine whether the obtained 8 social trait ratings plus the perceived emotion are capable of determining the perceived academic major of a given face. For this case, [a_i^1 a_i^2 … a_i^8, b_i]^⊤ and c_i are treated as a data tuple and its label, respectively.
The t-SNE method is then utilized to project the original high-dimensional data onto a 2D plane, and the results are shown in Figure 2. It is clear from this figure that the data form nine clusters; however, the corresponding labels are rather random and mixed.
We conclude that a neural network cannot be trained to accurately predict the subjective feelings about academic majors from [a_i^1 a_i^2 … a_i^8, b_i]^⊤. Based on the previous model of face perception proposed in [5], the social traits belong to the invariant aspect of the face, while the emotional expression belongs to the variant aspect. To model face perception better, we would like to elucidate the data structure of [a_i^1 a_i^2 … a_i^8]^⊤ without the disturbance of the emotion effect (i.e., the variant facial information). Hence, we used the same t-SNE method to model these data (see Figure 3) with b_i as the label, to see whether any relationship exists between the social traits and the perceived emotion of the face. Unsurprisingly, there was no relationship between them. In addition, we further checked the data structure of [a_i^1 a_i^2 … a_i^8, b_i, c_i]^⊤ without applying any labels; the results are plotted in Figure 4. There were now 12 clusters of data in total, which means the subjective feelings certainly extend the potential recognition types for the faces. To validate whether the recorded data have any causal relationships, we checked the data structure of [a_i^1 a_i^2 … a_i^8]^⊤ with a_i^1 as the label. The obtained results are plotted in Figure 5, and no causal relationship can be observed from the figure. Similar procedures were performed with each a_i^j, j = 1, …, 8, as the label, but no causal relationship could be found either. Therefore, we conclude that the current data do not contain causal relationships between these variables.

Perception Data Analysis
For further analysis, we use the results from the above subsection with respect to [a_i^1 a_i^2 … a_i^9]^⊤. We insert each cluster center into the 'x-axis' and 'y-axis' textboxes on the 'Dimension Reduction' function screen and then click the 'Add' button to save the results into memory.
Then, we analyze the relationship between each facial feature and face cluster using the 'Network Analysis' function. For each type of face cluster, we count how many of a_i^j, j = 1, …, 8, and b_i within that type have the value 1, and the corresponding percentage values are listed in Table 1. The results suggest that the perception of the faces can be summarized as nine types. In this table, each value represents the weight of that social characteristic in that type of face. For instance, the Type 1 face belongs to people who are perceived as neither attractive nor dominant but trustworthy, competent, moral, masculine, mature, sociable, and expressive. Moreover, for each type of personality, we check each face type for the corresponding number of '1's for this personality and then divide by the sum to obtain the percentage; the outputs are listed in Table 2. With this table, we are able to quantify to what extent each facial characteristic determines each type of personality. The 5th type of personality encompasses all nine characteristics and is the type with the highest probability. 'Attractiveness' largely determines the 5th, 6th, 7th, and 8th types of personality; 'trustworthiness' largely determines the 3rd, 5th, and 9th; 'dominance' largely determines the 2nd, 3rd, 4th, 5th, and 6th; and 'masculinity' largely determines the 1st, 3rd, 4th, 5th, and 7th. Using the network analysis explained in [24] (with the threshold predetermined by a sigmoid function), we can better clarify the relationships among the facial characteristics we tested (Table 3). First, it is obvious that three characteristics, attractiveness, dominance, and masculinity, are determined only by themselves. This finding is coherent with the theory from [9] (using PCA) that the various facial characteristics are driven by three PCs. The other facial characteristics are more likely to be secondary characteristics driven by the aforementioned three. Interestingly, expressiveness, representing emotional valence, is not driven by any characteristic directly, indicating its uniqueness in face perception compared with the other social characteristics. For a visual view, the relationships between the nine features obtained from network analysis are shown in Figure 6.
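The per-cluster percentage computation behind Table 1 can be sketched as follows. The binary features and cluster assignments here are toy values; a real run would use the binarized ratings and the cluster labels from the t-SNE step.

```python
# Sketch of the Table 1 computation: per cluster, the percentage of
# elements whose binary feature value is 1 (toy data).
import numpy as np

def cluster_percentages(binary, labels, n_clusters):
    # binary: (N, F) 0/1 features; labels: (N,) cluster index per element
    table = np.zeros((n_clusters, binary.shape[1]))
    for c in range(n_clusters):
        members = binary[labels == c]
        if len(members):
            table[c] = 100.0 * members.mean(axis=0)
    return table

rng = np.random.default_rng(0)
binary = rng.integers(0, 2, size=(100, 9))   # 9 traits per element (toy)
labels = rng.integers(0, 9, size=100)        # 9 face clusters (toy)
table = cluster_percentages(binary, labels, 9)
```

Each row of `table` then corresponds to one face type, with each entry giving the weight of a social characteristic in that type.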
Table 3. The outcome matrix of network analysis on the 9 facial characteristics. Each value represents the degree to which one characteristic (row) is predicted by another characteristic (column); e.g., the value 92.97 indicates that trustworthiness is largely determined by attractiveness.
We then applied the same analysis to the relationships among the nine observed types of personality (Table 4). The 5th type of personality is linked with all types of personality except the 1st and the 8th. Therefore, one may conclude that the 5th type of personality might be the most typical male professor. However, the 1st and the 8th types of personality do not originate from any other type. Considering the small numbers of these two types, it is possible that they are simply collections of undefined personalities. Similarly, the relationships between the nine types obtained from network analysis are illustrated in Figure 7.
Table 4. The outcome matrix of network analysis on the 9 observed types of personality. Each value represents the degree to which one type (row) is predicted by another type (column); e.g., the value 92.90 indicates that Type 4 is largely determined by Type 1.

Neural Network Analysis
Based on the clustering results obtained in the previous two subsections, the following two neural-network-based regression and classification tasks naturally arise.
The neural network model used in this paper has a total of five hidden layers, with the number of units r_1, …, r_5 in these layers set accordingly. The regression problem is solved using the 'Regression Analysis with DNN' function by setting the corresponding training configurations, where γ is the learning rate of the Adam gradient descent training procedure. The training loss and the testing loss over the training timeline are shown in Figure 8 in different colors.
The mean square training loss declines from over 1000 to below 1, which indicates that the trained neural network accurately predicts the distances between the t-SNE plot of a new element and the nine cluster centers. After that, the classification problem is solved using the 'Classification Analysis with DNN' function by setting the corresponding configurations. The training results are shown in Figure 9, with blue denoting the training accuracy and orange denoting the testing accuracy. The training accuracy rate ϕ_train rises from an initial value of 11% to approximately 80% after 100 epochs, and the testing accuracy rate ϕ_test finally reaches 83%. This confirms that the trained neural network can accurately predict which cluster is closest to the t-SNE plot of a new element. With nine social characteristics (or even with fewer PCs), it is mathematically possible to form many more combinations of face types. However, the DNN analysis shows that not every combination occurs in real life, something that classical PCA cannot reveal. In this case, MuDAP (powered by a DNN) is able to illustrate the inner structure of social characteristics and may benefit future research on personality and social traits. For instance, future researchers may study why certain types are formed and why certain combinations do not exist.

Conclusions and Future Work
Researchers in cognitive science have long been interested in modeling and classifying human perception data. This task requires a powerful and easy-to-use analysis platform. Although classical dimension-reduction methods like PCA are powerful for building an initial framework, they are not capable of revealing the inner structure and clustering of the data. The DNN, on the other hand, has been well proven to be an ideal method for this kind of research question. However, a DNN is not easy to train and implement. Here, the multi-dimensional data analysis platform (MuDAP), with a graphical user interface, has been developed to assist cognitive science researchers in handling complex human perception data and classifying its potential structures. The operations of this toolbox are structured into three steps. Initially, dimension reduction based on the t-SNE algorithm captures the spatial structure of the data, identifying nine cluster centers without serious information loss. Subsequently, network analysis is employed to establish direct connections between the clusters, thereby verifying the reliability of the aforementioned results. Finally, the nine cluster centers are designated as labels, and a neural network is trained to perform both regression and classification on the data. MuDAP facilitates the objective, qualitative, and quantitative analysis of complex, high-dimensional cognitive data, simplifying the research process for cognitive scientists outside of computer science.
In this paper, analyzing the experimental data with MuDAP demonstrated that the platform is capable of elucidating the inner structures of various social characteristics (via the DNN) and can show the relationships among the different types of personality (via the network analysis). Moreover, using MuDAP, we were able to show that the facial characteristics of male faces can be summarized into nine types determined mainly by attractiveness, dominance, and masculinity, but not expressiveness. This finding is inaccessible through dimension reduction alone, which only shows the essential components of the social characteristics. Finally, with the help of our trained DNN, newly arriving test data are classified into the correct clusters, and their distances to each cluster center are predicted accurately.
MuDAP is an easy-to-use and powerful data analysis platform for cognitive scientists dealing with multi-dimensional data. With it, researchers without coding expertise can process large amounts of data with great ease through the GUI. Its output provides data classification, which is useful for multi-dimensional data analysis. Therefore, analysis with MuDAP complements current data analysis methods.

Figure 3. 2D plot of [a_i^1 a_i^2 … a_i^8]^⊤ with marked label b_i using t-SNE.

Figure 4. 2D plot of [a_i^1 a_i^2 … a_i^8, b_i, c_i]^⊤ without any marked labels using t-SNE.

Figure 6. A visual view of the relationships between the 9 features using network analysis, where solid lines denote values 90-100, dashed lines denote values 80-90, and arrows indicate the pointing directions.

Figure 8. The training results of the neural network for the regression problem.

Figure 9. The training results of the neural network for the classification problem.
Figure 2. 2D plot of [a_i^1 a_i^2 … a_i^8, b_i]^⊤ with marked label c_i using t-SNE.

Table 1. The summary of the MuDAP output. Each value indicates the percentage of the social characteristic within that type of face. For instance, the Type 1 face is not regarded as attractive.

Table 2. The summary of the percentage of '1's at each facial characteristic involved in each type of observed personality.