Article

HELPFuL: Human Emotion Label Prediction Based on Fuzzy Learning for Realizing Artificial Intelligent in IoT

1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 300018, China
2 School of Cyberspace Security, Hangzhou Dianzi University, Hangzhou 300018, China
3 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
4 Hebei Provincial Key Laboratory of Big Data Computing, Hebei University of Technology, Tianjin 300401, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(13), 7799; https://doi.org/10.3390/app13137799
Submission received: 11 May 2023 / Revised: 14 June 2023 / Accepted: 25 June 2023 / Published: 1 July 2023

Abstract

Human emotion label prediction is crucial to realizing artificial intelligence in the Internet of Things (IoT). Facial expression recognition is the main technique for predicting human emotion labels. Existing facial expression recognition methods consider neither compound emotions nor the fuzziness of emotion labels. Fuzzy learning is a mathematical tool for dealing with fuzzy and uncertain information; its advantage for human emotion recognition is that multiple fuzzy sentiment labels can be processed simultaneously. This paper proposes a fuzzy learning-based expression recognition method for human emotion label prediction. First, a fuzzy label distribution system is constructed using fuzzy sets to represent facial expressions. Then, two fuzzy label distribution prediction methods based on fuzzy rough sets are proposed to solve compound emotion prediction. The degrees to which a sample possibly and necessarily belongs to an emotion are obtained by calculating the fuzzy upper and lower approximations. Experiments show that the proposed algorithm not only performs well on human emotion label prediction but can also be used for other label distribution prediction tasks, and that it is more accurate and more general than other methods. By improving emotion recognition, the method extends the application scope of artificial intelligence in the IoT.

1. Introduction

Human emotion label prediction can play a critical role in realizing artificial intelligence (AI) in the Internet of Things (IoT) domain [1,2,3]. IoT refers to a network of physical devices, vehicles, home appliances, and other objects that are embedded with sensors, software, and network connectivity to exchange data with other devices and systems over the Internet [4,5,6,7]. Facial expression recognition is one of the main AI technologies for human emotion label prediction [8,9]. The integration of facial expression recognition into IoT systems can enhance the intelligence of IoT and make it more responsive to human needs and emotions [10,11,12].
There are several possible applications of facial expression recognition in IoT [13], as shown in Figure 1:
1. Smart homes: The system could use facial expression recognition to detect the mood of the occupants and adjust the lighting, temperature, and music to create a more relaxing environment.
2. Healthcare: If a patient is feeling anxious, the system could provide calming music or guided meditation to help them relax.
3. Marketing: A retailer could use the system to track which products or promotions generate the most positive emotional responses from customers.
4. Automotive: The system could detect signs of driver fatigue or distraction and provide alerts or take other actions to keep the driver alert and focused on the road.
5. Education: The system could help teachers identify when students are struggling or disengaged and adjust their teaching strategies accordingly.
Facial expression recognition is challenging due to the complexity of human emotions, which may involve a combination of basic emotions or be difficult to label with a single emotion label [14]. For example, a person may feel both surprised and happy at the same time. Furthermore, a compound emotion may be ambiguous; that is, the degree of each constituent emotion also varies. Thus, the main challenges are feature learning, compound emotion expression, and ambiguous emotion expression.
Many studies have sought to improve expression recognition methods in terms of feature learning. Lopes et al. [15] apply a convolutional neural network (CNN) to extract only expression-specific features from a face image. Li et al. [16] review deep-neural-network-based facial expression feature learning for both static images and dynamic image sequences and discuss these methods' advantages and limitations. Wang et al. [17] embed a varied number of region features produced by a CNN into a compact fixed-length representation. Rouast et al. [18] survey the use of deep neural networks for multi-modal feature learning, including spatial, temporal, and joint feature representations. However, these approaches do not consider more complex emotion recognition tasks, such as compound and ambiguous emotions.
Fuzzy learning is an effective mathematical tool for dealing with ambiguous problems [19]. Zadeh proposed that the imprecision inherent in natural language is, by nature, possibilistic rather than probabilistic. The fuzzy rough set is an effective mathematical tool that combines the advantages of rough and fuzzy sets [20]. Rough sets provide a sound model for handling uncertainty in classification [21]. They have been used not only in models [22,23,24] but also in several applications [25,26,27]. In fuzzy rough sets, a family of fuzzy sets is used to approximate a fuzzy class, and the difference between the fuzzy upper and lower approximations describes the inconsistency of the instances in fuzzy classification [28]. To deal with diverse attribute types, researchers have proposed different extensions of fuzzy rough sets [29,30,31]. To alleviate the sensitivity of fuzzy rough sets to noisy samples, robust fuzzy rough sets were developed [32]. Chai et al. proposed a new intuitionistic fuzzy rough set approach [33]. Quran et al. proposed a neutrosophic soft rough set to handle uncertainty [34]. However, these extended models and algorithms focus only on mapping the conditional attributes.
In fuzzy learning, emotions can be represented as fuzzy sets that capture the uncertainty and ambiguity of emotion. The membership function of a fuzzy set represents the degree to which an emotion belongs to a particular category [35]. For example, the membership function of a fuzzy set representing a complex emotion such as "bittersweet" may assign a high degree of membership to both the happiness and sadness categories. However, most facial expression recognition methods are based on deep learning [36,37], and few studies have applied fuzzy learning to facial expression recognition.
In this paper, a facial expression recognition method based on fuzzy learning for human emotion label prediction is proposed to address the above problems. The contributions of this paper are summarized as follows:
1. constructing a fuzzy label distribution system using fuzzy sets for representing human emotions;
2. proposing two fuzzy label distribution prediction methods based on fuzzy rough sets to solve the compound fuzzy human emotion prediction problem;
3. showing that the proposed method can also solve single-label, multi-label, and probability-based label distribution prediction problems through degradation and transformation.

2. Fuzzy Sets for Label Distribution

2.1. Fuzzy Set

Natural language is full of uncertainty; when we express the fuzziness of words such as young, old, good-looking, small, or big, we tend to use fuzzy sets. Unlike a classical set, a fuzzy set has no definite elements and can only be expressed by means of a membership function. Therefore, the definition is often written as follows. Let $FS = \langle U, C, D \rangle$ be a fuzzy label distribution system, where $U = \{x_1, x_2, \dots, x_N\}$ is the instance set, $C = \{b_1, b_2, \dots, b_M\}$ is the feature set, and $D = \{d_1, d_2, \dots, d_L\}$ is the label set; $d_j(x_i) \in [0, 1]$ is regarded as the fuzzy membership degree of $x_i$ in $d_j$. All the labels in the label set are used to describe the instance; in other words, the label set is complete.
For an unseen instance $x$, the task of fuzzy learning for label distribution is to predict the fuzzy membership degree $d_j(x)$ for each $d_j \in D$. The fuzzy membership degrees of all labels form the complete set $D(x) = \{d_1(x), d_2(x), \dots, d_L(x)\}$.
The fuzzy label distribution system reduces to a multi-label or single-label system when every label in the label set satisfies $d_j(x) \in \{0, 1\}$. In that case, the label of an instance is defined as
$$d_j(x) = \begin{cases} 0, & x \notin d_j, \\ 1, & x \in d_j. \end{cases}$$
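As a minimal illustration (with hypothetical labels and membership degrees, not data from the paper), the degradation condition above can be expressed in a few lines of Python:

```python
# A fuzzy label distribution assigns each label a membership degree in [0, 1].

def is_crisp(distribution):
    """A fuzzy label distribution degenerates to a classical multi-/single-label
    assignment when every membership degree is exactly 0 or 1."""
    return all(deg in (0.0, 1.0) for deg in distribution.values())

# Hypothetical instance: a face showing mostly happiness with a hint of surprise.
x1 = {"happy": 0.8, "surprised": 0.3, "sad": 0.0}
# Crisp instance: the classical single-label case.
x2 = {"happy": 1.0, "surprised": 0.0, "sad": 0.0}

assert not is_crisp(x1)   # genuinely fuzzy
assert is_crisp(x2)       # reduces to a classical label set
```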

2.2. Fuzzy Rough Sets (FRS)

Let $U$ be a finite set of instances with $x, y, z \in U$. An equivalence relation $R$ satisfies $R(x, x) = 1$; $R(x, y) = R(y, x)$; and $R(x, y) = 1 \wedge R(y, z) = 1 \Rightarrow R(x, z) = 1$. $U$ is partitioned into a family of equivalence classes $[x]_R$ by the equivalence relation $R$.
Given $X \subseteq U$, the lower and upper approximations of $X$ are defined as $\underline{R}X = \bigcup \{[x]_R \mid [x]_R \subseteq X\}$ and $\overline{R}X = \bigcup \{[x]_R \mid [x]_R \cap X \neq \emptyset\}$, respectively. Rough sets are a pair of lower and upper approximations that approach $X$ as closely as possible; $BND_R X = \overline{R}X - \underline{R}X$ is called the boundary of $X$.
Rough sets are extended by equipping $U$ with a fuzzy equivalence relation $R$, which satisfies $R(x, x) = 1$; $R(x, y) = R(y, x)$; and $\min(R(x, y), R(y, z)) \leq R(x, z)$. A family of fuzzy classes makes a fuzzy partition of $U$.
Let R be a fuzzy equivalence relation and X be a fuzzy set on U, then the lower and upper fuzzy approximations of X are defined as follows:
$$\underline{R}X(x) = \inf_{y \in U} \max(1 - R(x, y), X(y)), \qquad \overline{R}X(x) = \sup_{y \in U} \min(R(x, y), X(y)).$$
Fuzzy rough sets are a pair of fuzzy upper and lower approximations that approach as close as possible to X.
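The two approximation formulas can be sketched directly with NumPy. The relation matrix and fuzzy set below are toy values chosen for illustration, not data from the paper:

```python
import numpy as np

def fuzzy_lower(R, X):
    # lower approximation: inf_y max(1 - R(x, y), X(y))
    return np.min(np.maximum(1.0 - R, X[None, :]), axis=1)

def fuzzy_upper(R, X):
    # upper approximation: sup_y min(R(x, y), X(y))
    return np.max(np.minimum(R, X[None, :]), axis=1)

# Toy fuzzy equivalence relation (reflexive, symmetric) and fuzzy set X on U.
R = np.array([[1.0, 0.7, 0.2],
              [0.7, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
X = np.array([0.9, 0.5, 0.1])

low, up = fuzzy_lower(R, X), fuzzy_upper(R, X)
assert np.all(low <= up)  # the lower approximation never exceeds the upper
```

The gap `up - low` is exactly the inconsistency of each instance in fuzzy classification described above.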
Given a fuzzy approximation space $IS = \langle U, R \rangle$, where $X$ is a crisp subset of $U$ and $R$ is a fuzzy relation, the lower and upper fuzzy approximations are defined as
$$\underline{R}X(x) = \inf_{y \notin X} (1 - R(x, y)), \qquad \overline{R}X(x) = \sup_{y \in X} R(x, y).$$

2.3. Advantages of Fuzzy Label Distribution Learning

Label distribution learning (LDL) [38] has been widely studied to deal with the overall label distribution. However, LDL does not fit applications involving fuzzy concepts, which are subjective and have no clear boundaries, such as complex emotions: each instance may be associated with multiple emotions simultaneously, and each emotion is itself a fuzzy concept. Fuzzy label distribution learning (FLDL) is therefore a more appropriate model than a probability distribution for describing the distribution of all labels of an instance.
The following two examples illustrate the difference between them. (1) Consider the video emotion recognition task shown in Figure 2, where the label set contains four basic emotions (angry, worried, sad, and happy). The value of each label is represented by a fuzzy membership degree; the higher the label value, the larger the membership degree. From the series of facial images in the video, we can see that the man has a compound emotion containing several basic emotions simultaneously: he looks very angry, not very worried, rather sad, and not happy. Therefore, the fuzzy membership degrees of these labels are 0.8, 0.2, 0.7, and 0.05, respectively. (2) Consider the natural image classification task shown in Figure 3, where the label set contains four basic labels (sky, cloud, building, and water). The value of each label is proportional to its share of the entire description of the instance. The natural scene image consists of sky, mostly cloudy, with some buildings and a little water. The proportions of these labels are 0.25, 0.45, 0.2, and 0.1, respectively, and they sum to 1.
The task of FLDL can be transformed into the tasks of SLL and MLL. In SLL, the label of an instance corresponds to the maximum fuzzy membership degree. The predicted process of SLL can be formulated as follows:
$$D(x) \leftarrow \arg\max_{d_j} \{d_j(x)\}_{j=1}^{L}.$$
In MLL, the label of an instance is the subset where the fuzzy membership degrees of instance to these labels are larger than  λ . The predicted process of MLL can be formulated as follows:
$$D(x) \leftarrow \{d_j \mid d_j(x) \geq \lambda\}_{j=1}^{L}.$$
Examples of prediction labels of SLL, MLL, and FLDL are shown in Figure 4, where the triangle represents the corresponding label associated with an instance and the dotted line represents the constant  λ . Fuzzy membership degrees of an instance to all labels are computed. In SLL, the third label is the maximum fuzzy membership degree. In MLL, the first and third labels are larger than  λ . SLL and MLL can be regarded as special cases of FLDL. Therefore, FLDL is considered the more general model.
The output space of FLDL is larger than those of SLL and MLL. There are $L$ possible outputs for SLL and $2^L - 1$ possible outputs for MLL. In FLDL, each label satisfies $d_j(x) \in [0, 1]$; therefore, there are infinitely many possible outputs for FLDL.
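The two reduction rules can be sketched as follows, using a hypothetical FLDL output for one instance (degrees chosen to match the video example above):

```python
def to_single_label(D):
    """SLL reduction: pick the label with the maximum fuzzy membership degree."""
    return max(D, key=D.get)

def to_multi_label(D, lam=0.5):
    """MLL reduction: keep every label whose membership degree reaches λ."""
    return {label for label, deg in D.items() if deg >= lam}

# Hypothetical FLDL output for one instance.
D = {"angry": 0.8, "worried": 0.2, "sad": 0.7, "happy": 0.05}

assert to_single_label(D) == "angry"
assert to_multi_label(D, lam=0.5) == {"angry", "sad"}
```

Both reductions discard information (the exact degrees), which is why FLDL is the more general model.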

3. Human Emotion Label Prediction Based on Fuzzy Learning (HELPFuL)

In this section, we first describe our algorithm in detail, and then we demonstrate the algorithm with an example.

3.1. Algorithm

In this subsection, we propose two FLDL algorithms based on fuzzy rough sets. The first is an un-weighted algorithm that follows a lazy learning approach; the second is a weighted algorithm whose weights are learned from the training data.
Given a fuzzy label distribution system FMLS $= \langle U, C, D \rangle$, the fuzzy upper and lower approximations of a test instance $x$ to the fuzzy label $d_j \in D$ are defined as follows:
$$\overline{R}d_j(x) = \sup_{y \in U} \min(R(x, y), d_j(y)), \qquad \underline{R}d_j(x) = \inf_{y \in U} \max(1 - R(x, y), d_j(y)).$$
The fuzzy upper approximation of the fuzzy label represents the possibility of the instance belonging to this label. The fuzzy lower approximation of the fuzzy label represents the necessity of the instance belonging to this label. Fuzzy rough sets are regarded as the measures of possibility and necessity. In modeling the uncertainty of a judgment, it is natural to expect that the boundary between an event and its contrary is not too rigid. In order to characterize the uncertainty, both possibility and necessity are required. Therefore, we use fuzzy upper and lower approximations to confirm the label of an instance.
Given a fuzzy label distribution system FMLS $= \langle U, C, D \rangle$, for a test instance $x$, the fuzzy membership degree of $x$ to label $d_j \in D$ is defined as follows:
$$d_j(x) = u_j \overline{R}d_j(x) + l_j \underline{R}d_j(x),$$
where $u_j$ and $l_j$ are the weights of the fuzzy upper and lower approximations of $d_j$, satisfying $u_j \geq 0$, $l_j \geq 0$, and $u_j + l_j = 1$.
There are three special methods for assigning a label to an instance: the optimistic assignment method, the pessimistic assignment method, and the mean assignment method.
1. The optimistic assignment method $D_{upper}(x) = \{d_j(x)\}_{j=1}^{L}$: if $u_j = 1$ and $l_j = 0$, then $d_j(x) = \overline{R}d_j(x)$, i.e., the fuzzy upper approximation is regarded as the fuzzy membership degree of $x$ to $d_j$.
2. The pessimistic assignment method $D_{lower}(x) = \{d_j(x)\}_{j=1}^{L}$: if $u_j = 0$ and $l_j = 1$, then $d_j(x) = \underline{R}d_j(x)$, i.e., the fuzzy lower approximation is regarded as the fuzzy membership degree of $x$ to $d_j$.
3. The mean assignment method $D_{mean}(x) = \{d_j(x)\}_{j=1}^{L}$: if $u_j = l_j = \frac{1}{2}$, then $d_j(x) = \frac{1}{2}(\overline{R}d_j(x) + \underline{R}d_j(x))$, i.e., the mean of the fuzzy upper and lower approximations is regarded as the fuzzy membership degree of $x$ to $d_j$.
The optimistic assignment method may overestimate the fuzzy membership degree, while the pessimistic assignment method easily underestimates it. The mean assignment method avoids both estimation problems, but a simple average can easily ignore important information contained in the training data. Therefore, we propose the weighted average assignment method, which learns the weights of the fuzzy upper and lower approximations of each label.
The label set of  x , assigned by the weighted average method, is denoted as  D w e i g h t ( x ) = { d j ( x ) } j = 1 L , where  d j ( x ) = u j R ¯ d j ( x ) + l j R ̲ d j ( x ) . The fuzzy upper and lower approximation of each training instance to each label are computed. The label of each training instance is known. The weights  u j  and  l j  are learned by the least squares method. The optimization objective is defined as follows:
$$\min_{u_j, l_j} \sum_{i=1}^{n} \left( d_j(x_i) - u_j \overline{R}d_j(x_i) - l_j \underline{R}d_j(x_i) \right)^2 \quad \text{s.t.} \quad u_j \geq 0, \; l_j \geq 0, \; u_j + l_j = 1.$$
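Because $u_j + l_j = 1$, substituting $l_j = 1 - u_j$ turns the objective into a one-dimensional convex quadratic in $u_j$, so the constrained optimum is simply the unconstrained least-squares solution clipped to $[0, 1]$. Below is a sketch of this closed-form solver; the training values are hypothetical, not from the paper:

```python
import numpy as np

def learn_weights(d, upper, lower):
    """Least-squares fit of d ≈ u*upper + l*lower subject to u + l = 1, u, l >= 0.
    With l = 1 - u the objective is a 1-D convex quadratic in u, so the
    constrained optimum is the unconstrained solution clipped to [0, 1]."""
    num = np.dot(d - lower, upper - lower)
    den = np.dot(upper - lower, upper - lower)
    u = 0.5 if den == 0 else float(np.clip(num / den, 0.0, 1.0))
    return u, 1.0 - u

# Hypothetical training approximations for one label d_j.
upper = np.array([0.9, 0.7, 0.8])   # fuzzy upper approximations
lower = np.array([0.5, 0.3, 0.4])   # fuzzy lower approximations
d     = np.array([0.9, 0.7, 0.8])   # true degrees coincide with the upper bound

u, l = learn_weights(d, upper, lower)
assert abs(u - 1.0) < 1e-9 and abs(l) < 1e-9   # all weight goes to the upper bound
```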
We now formulate the FLDL algorithm based on fuzzy rough sets (FFRS), as shown in Algorithms 1 and 2.
Algorithm 1: Learning weights of upper and lower approximations
Input: fuzzy label distribution system $U = \{x_1, x_2, \dots, x_N\}$, label set $D$
for  d j D  do
(loop body provided as an image in the published version)
end
Output: $u = [u_j]_{j=1}^{L}$ and $l = [l_j]_{j=1}^{L}$
Algorithm 2: Fuzzy-rough-based fuzzy label distribution classification
Input: fuzzy label distribution system $U = \{x_1, x_2, \dots, x_N\}$, label set $D$, test instance $x$, $u = [u_j]_{j=1}^{L}$, and $l = [l_j]_{j=1}^{L}$
for  d j D  do
(loop body provided as an image in the published version)
end
Output: label set $D(x) = \{d_j(x)\}_{j=1}^{L}$
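Putting the pieces together, a minimal sketch of the prediction step (Algorithm 2) for a single label might look as follows. The Gaussian kernel form and all data values are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def gaussian_relation(x, xi, sigma=0.2):
    """Fuzzy equivalence relation via a Gaussian kernel (a common choice;
    the paper's exact kernel form is an assumption here)."""
    return np.exp(-(x - xi) ** 2 / (2 * sigma ** 2))

def predict_degree(x, train_x, train_d, u, l, sigma=0.2):
    """For one label: compute the fuzzy upper and lower approximations of the
    test instance over the training set, then blend them with weights u, l."""
    R = gaussian_relation(x, train_x, sigma)
    upper = np.max(np.minimum(R, train_d))        # possibility
    lower = np.min(np.maximum(1.0 - R, train_d))  # necessity
    return u * upper + l * lower

train_x = np.array([0.1, 0.5, 0.9])   # hypothetical condition attribute values
train_d = np.array([0.2, 0.8, 0.4])   # hypothetical degrees for one label
deg = predict_degree(0.5, train_x, train_d, u=0.5, l=0.5)
assert 0.0 <= deg <= 1.0
```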

3.2. Example

Example 1. Consider an FLDL task with four training instances. The detailed information is shown in Table 1, where $C$ is the condition attribute and $D = \{d_1, d_2, d_3\}$ is a complete label set.
We use the Gaussian kernel function to compute the fuzzy equivalence relation between instances, with the kernel parameter set to 0.2. Given the fuzzy label distribution training dataset and a test instance $x$ with condition attribute value $C(x) = 0.65$, the label set $D(x)$ of $x$ is computed by FFRS.
First, the fuzzy equivalence relation between the training and test data is computed by the Gaussian kernel function.
$$R(x, x_1) = 0.7132, \quad 1 - R(x, x_1) = 0.2868; \qquad R(x, x_2) = 0.3314, \quad 1 - R(x, x_2) = 0.6686;$$
$$R(x, x_3) = 0.6376, \quad 1 - R(x, x_3) = 0.3624; \qquad R(x, x_4) = 0.2085, \quad 1 - R(x, x_4) = 0.7915.$$
Then, the fuzzy upper and lower approximations are computed by Formula (5):
$$\begin{aligned}
\overline{R}d_1(x) &= \sup_{x_i \in U} \min(R(x, x_i), d_1(x_i)) = \sup\{0.64, 0.3314, 0.6376, 0.2085\} = 0.64; \\
\overline{R}d_2(x) &= \sup_{x_i \in U} \min(R(x, x_i), d_2(x_i)) = \sup\{0.31, 0.3314, 0.42, 0.2085\} = 0.42; \\
\overline{R}d_3(x) &= \sup_{x_i \in U} \min(R(x, x_i), d_3(x_i)) = \sup\{0.7132, 0.23, 0.55, 0.19\} = 0.7132; \\
\underline{R}d_1(x) &= \inf_{x_i \in U} \max(1 - R(x, x_i), d_1(x_i)) = \inf\{0.64, 0.6686, 0.76, 0.7915\} = 0.64; \\
\underline{R}d_2(x) &= \inf_{x_i \in U} \max(1 - R(x, x_i), d_2(x_i)) = \inf\{0.31, 0.74, 0.42, 0.7915\} = 0.31; \\
\underline{R}d_3(x) &= \inf_{x_i \in U} \max(1 - R(x, x_i), d_3(x_i)) = \inf\{0.72, 0.6686, 0.55, 0.7915\} = 0.55.
\end{aligned}$$
Finally, the label set of  x  can be presented by three methods.
$$D_{upper}(x) = \{0.64, 0.42, 0.7132\}; \quad D_{lower}(x) = \{0.64, 0.31, 0.55\}; \quad D_{mean}(x) = \{0.64, 0.365, 0.6316\}.$$
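The mean assignment values above can be checked directly from the computed upper and lower approximations:

```python
# Reproducing the three un-weighted assignment methods of Example 1
# from the fuzzy upper and lower approximations computed above.
uppers = [0.64, 0.42, 0.7132]   # R̄ d_j(x) for j = 1, 2, 3
lowers = [0.64, 0.31, 0.55]     # R̲ d_j(x) for j = 1, 2, 3

d_upper = uppers                                          # optimistic
d_lower = lowers                                          # pessimistic
d_mean = [(u + l) / 2 for u, l in zip(uppers, lowers)]    # mean

expected = [0.64, 0.365, 0.6316]
assert all(abs(a - b) < 1e-9 for a, b in zip(d_mean, expected))
```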
The Gaussian kernel matrix over the training instances is as follows:
$$R = \begin{pmatrix} 1 & 0.8021 & 0.9992 & 0.6376 \\ 0.8021 & 1 & 0.8655 & 0.9603 \\ 0.9992 & 0.8655 & 1 & 0.7132 \\ 0.6376 & 0.9603 & 0.7132 & 1 \end{pmatrix}.$$
The fuzzy upper and lower approximations and real label of each training instance are used to learn the weights.
$$u_1 = 0.8489, \; u_2 = 0.5543, \; u_3 = 0.4908; \qquad l_1 = 0.1503, \; l_2 = 0.4457, \; l_3 = 0.5092.$$
The fuzzy upper and lower approximations of the test instance  x  are computed by Formula (5). We obtain the label set of  x  by the weighted average assignment method.
$$D_{weight}(x) = \{d_j(x)\}_{j=1}^{L} = \{0.64, 0.3710, 0.6301\}.$$
The real label is denoted as Original, and the four methods are denoted as FFRS-u, FFRS-l, FFRS-m, and FFRS-w. The top and bottom lines correspond to the upper and lower approximations, respectively, while FFRS-m and FFRS-w, the two lines in the middle, are close to the real label.

4. Experiments

In this section, we report some experiments to demonstrate the effectiveness of the proposed method over several fuzzy label distribution classification datasets.

4.1. Datasets

The CHEAVD dataset (http://www.chineseldc.org/emotion.html (accessed on 1 March 2023)), published by the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, is a Chinese natural emotional visual-audio database. This dataset is a multi-modal fuzzy label distribution dataset. It contains 1981 samples, each annotated with eight emotions (happy, angry, surprised, disgusted, neutral, worried, anxious, sad). Each sample is described by 18 attributes: the first 10 are audio attributes, and the other eight are visual. Each video is associated with eight membership degrees for the different emotions. We recognize the emotions of the videos according to the audio and image information. Some examples of the video faces in the dataset are shown in Figure 5.
Our algorithms are also evaluated on fifteen label distribution classification datasets, including biological, image, and text data. These datasets are available in the Library of Label Distribution Learning (http://cse.seu.edu.cn/PersonalPage/xgeng/LDL/index.htm (accessed on 1 March 2023)). Brief introductions to these datasets are listed in Table 2.
The thirteenth and fourteenth datasets are two widely used facial expression image databases. The JAFFE database [39] contains 213 grayscale images, and the larger BU-3DFE database [40] contains 2500 images. In both databases, the instances are represented by 243-dimensional LBP features [41]. Each facial expression image is associated with six basic emotions, i.e., happy, sad, surprised, afraid, angry, and disgusted. Sixty persons were employed to score each image on a five-level scale [42], and the average score of each emotion was used to represent the intensity of that emotion. In the SJAFFE (Scored JAFFE) dataset, each score indicates the degree of the corresponding emotion. Similarly, each image of the Scored BU-3DFE was scored by 23 persons in the same way as JAFFE. The other datasets were used for the LDL task.

4.2. Evaluation Measures for FLDL

There are many ways to measure the distance between possibility distributions, which in turn can be used to determine the distance between label sets. Distance measures reflect the difference between label sets and are complementary to similarity measures: the greater the distance between two label sets, the lower their similarity. Given two label sets $D = \{d_1, d_2, \dots, d_L\}$ and $\tilde{D} = \{\tilde{d}_1, \tilde{d}_2, \dots, \tilde{d}_L\}$ in FLDL, several distance measures are available.
In particular, Chebyshev distance (Cheb) and Clark distance (Clark) are the distance measurements of the probability distribution, and the smaller their value, the better the performance. Cosine coefficient (Cosine) and intersection similarity (Intersection) are the similarity measurements; the larger their results, the better the performance [43]. In this experiment, we use a 10-fold cross-validation method on each algorithm.
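Under the standard definitions of these four measures (an assumption, since the paper's exact formulas are given in its tables rather than reproduced here), they can be implemented as:

```python
import numpy as np

def chebyshev(d, dt):
    return np.max(np.abs(d - dt))            # distance: smaller is better

def clark(d, dt):
    return np.sqrt(np.sum((d - dt) ** 2 / (d + dt) ** 2))  # distance

def cosine(d, dt):
    return np.dot(d, dt) / (np.linalg.norm(d) * np.linalg.norm(dt))  # similarity

def intersection(d, dt):
    return np.sum(np.minimum(d, dt))         # similarity: larger is better

d  = np.array([0.25, 0.45, 0.20, 0.10])      # predicted distribution
dt = np.array([0.25, 0.45, 0.20, 0.10])      # identical ground truth

assert chebyshev(d, dt) == 0.0               # zero distance
assert abs(cosine(d, dt) - 1.0) < 1e-12      # maximal similarity
assert abs(intersection(d, dt) - 1.0) < 1e-12
```

For identical distributions, the distance measures attain their minimum and the similarity measures their maximum, matching the interpretation above.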

4.3. Experimental Setup

We design two experiments to verify the effectiveness of the proposed methods. One is on the fuzzy label distribution dataset, and the other is on the label distribution datasets. The proposed methods can not only deal with the fuzzy label distribution task but also with the label distribution task when the output of our algorithm is normalized to satisfy the probability distribution.
In the multi-modal fuzzy label distribution dataset, since the kernel matrices are generated first, we compared only three un-weighted methods (FFRS-u, FFRS-l, FFRS-m) and one weighted method (FFRS-w) with the AA-kNN algorithm. The RBF kernel was used to compute the kernel matrices for the audio features with different parameters, and the hyper-parameter of the Gaussian function was set to the number of features. For the visual attributes, we extracted features from each video frame and recorded each video clip as a set of feature vectors. Based on these feature vectors, we introduce three types of models to describe a video: the covariance matrix, the Gaussian distribution, and the linear subspace of the visual attributes. The Riemannian kernel was then used to compute the kernel matrices, which were normalized to a unit diagonal. Four evaluation measures based on the possibility distribution, introduced in Section 4.2, were used to verify the effectiveness of the FLDL algorithms.
In label distribution datasets, we compare the proposed algorithm FFRS-w with four LDL algorithms, i.e., PT-SVM, PT-Bayes, AA-kNN, and AA-BP. The results are shown in Table 2. Four criteria, listed in Table 3, were used to verify the effectiveness of LDL algorithms.

4.4. Results and Discussion

The results of the four evaluation measures on the multi-modality FLDL dataset are shown in Figure 6. The four evaluation measures for FLDL, listed in Table 4, are distance measurements of the possibility distribution; the smaller their value, the better the performance.
The results indicate that FFRS-w outperformed all other algorithms, while FFRS-u and FFRS-l exhibited the worst performance on all four evaluation measures. These results are consistent with the previous analysis: FFRS-u and FFRS-l may lead to overestimation and underestimation, respectively. FFRS-m and AA-kNN performed similarly; both avoid the estimation problems by averaging the outputs. However, the simple average can easily ignore important information contained in the data. Therefore, FFRS-w exhibited the best performance.
The means and standard deviations of the results are shown in Tables 5–8. The best results are highlighted in boldface. To evaluate the performance of each algorithm across all datasets, the average rank of each algorithm is listed in the bottom row.
The results demonstrate that the proposed FFRS-w algorithm exhibited the best performance. AA-kNN outperformed the remaining algorithms, while FFRS outperformed AA-kNN on most datasets. The average ranks of these algorithms are shown in Figure 7, where FFRS-w always ranks at the top on all four measurements.
AA-kNN preserves the structural information of the labels, whereas PT-SVM destroys the distribution structure during data transformation; therefore, PT-SVM performed worse than AA-kNN. PT-Bayes performed well when the data matched the Gaussian assumption required by the algorithm, but its performance degraded on other real-world datasets. The large number of parameters in AA-BP led to poor performance because it is difficult to find the optimal settings.
The FFRS-w algorithm learns more information from the data. The lower and upper approximations are the measures of necessity and possibility. They are combined to form the interval, of which the weights of each label are trained and used to evaluate the test data. This process improves the robustness of the algorithm. Therefore, FFRS-w exhibits better performances than AA-kNN.

5. Conclusions

In this paper, we propose a fuzzy learning-based expression recognition method for human emotion prediction. We construct a fuzzy label distribution system using fuzzy sets and present two fuzzy label distribution prediction methods based on fuzzy rough sets. To compare our algorithm with the label distribution approach, we conduct experiments on the fuzzy label distribution task and the label distribution task, respectively. The experimental results show that the accuracy of the proposed method is superior to that of the other methods on both tasks. The other methods cannot directly deal with the human emotion fuzzy label distribution task; therefore, the proposed method is a more general label distribution prediction method that is better suited for deployment in AI-enabled IoT.
This work only proposes a framework of fuzzy learning for fuzzy label distribution, and some drawbacks remain: firstly, the accuracy of the proposed method, although better than that of label distribution learning, still has room for improvement; secondly, the efficiency of the proposed method needs attention, in particular how to quickly output fuzzy predictions for multiple compound labels; finally, the scalability of the proposed method is limited, as it does not yet extend to multi-modality data.
Therefore, in future work, we will try to improve the following aspects: firstly, the fuzzy rough set can be replaced by a more advanced fuzzy learning method, such as the intuitionistic fuzzy rough set or the neutrosophic rough set, to further improve the accuracy of the method; secondly, a parallel computing framework will be added to improve the efficiency of the framework; finally, we will study how to extend the fuzzy relation computation between samples so that it can be applied to multi-modality data.

Author Contributions

Conceptualization, L.Z.; Methodology, L.Z.; Software, L.Z.; Resources, Y.X., T.Y. and L.L.; Writing—original draft, L.Z.; Writing—review & editing, H.Z. and M.M.; Funding acquisition, H.Z., Y.W. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Key Research and Development Projects of Zhejiang Province (under Grant 2022C01200), Natural Science Foundation of Hebei Province (under Grant F2020202028), and Natural Science Foundation of China (under Grants 62232005).

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Facial expression recognition used in AIIoT.
Figure 2. Fuzzy label distribution task.
Figure 3. Label distribution task.
Figure 4. Example for SLL, MLL, and FLDL.
Figure 5. Example of the video faces in the dataset.
Figure 6. Results of four evaluation measures on CHEAD dataset.
Figure 7. Example of the video emotion recognition.
Table 1. Fuzzy label distribution dataset.

| U  | C    | d1   | d2   | d3   |
|----|------|------|------|------|
| x1 | 0.39 | 0.64 | 0.31 | 0.72 |
| x2 | 0.18 | 0.35 | 0.74 | 0.23 |
| x3 | 0.35 | 0.76 | 0.42 | 0.55 |
| x4 | 0.09 | 0.41 | 0.65 | 0.19 |
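Table 1 can be read as follows: each sample x_i carries a feature value (column C) and a vector of fuzzy label membership degrees d_1 to d_3 in [0, 1]. The snippet below is a minimal Python sketch of this representation (not code from the paper); reading column C as a single feature value per sample is an assumption. Note that, unlike a probability distribution, a sample's fuzzy memberships need not sum to 1.

```python
# Sketch of the fuzzy label distribution data in Table 1.
# Assumption: column "C" is a single feature value per sample.
data = {
    "x1": {"C": 0.39, "d": [0.64, 0.31, 0.72]},
    "x2": {"C": 0.18, "d": [0.35, 0.74, 0.23]},
    "x3": {"C": 0.35, "d": [0.76, 0.42, 0.55]},
    "x4": {"C": 0.09, "d": [0.41, 0.65, 0.19]},
}

for name, sample in data.items():
    # Every membership degree is a valid fuzzy value in [0, 1] ...
    assert all(0.0 <= m <= 1.0 for m in sample["d"])
    # ... but the row sums are not constrained to equal 1.
    print(name, sum(sample["d"]))
```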
Table 2. Database information.

| No. | Dataset       | Instances | Features | Labels |
|-----|---------------|-----------|----------|--------|
| 1   | Yeast-alpha   | 2465      | 24       | 18     |
| 2   | Yeast-cdc     | 2465      | 24       | 15     |
| 3   | Yeast-elu     | 2465      | 24       | 14     |
| 4   | Yeast-diau    | 2465      | 24       | 7      |
| 5   | Yeast-heat    | 2465      | 24       | 6      |
| 6   | Yeast-spo     | 2465      | 24       | 6      |
| 7   | Yeast-cold    | 2465      | 24       | 4      |
| 8   | Yeast-dtt     | 2465      | 24       | 4      |
| 9   | Yeast-spo5    | 2465      | 24       | 3      |
| 10  | Yeast-spoem   | 2465      | 24       | 2      |
| 11  | Human Gene    | 30,542    | 36       | 68     |
| 12  | Natural Scene | 2000      | 294      | 9      |
| 13  | SJAFFE        | 213       | 243      | 6      |
| 14  | SBU_3DFE      | 2500      | 243      | 6      |
| 15  | Movie         | 7755      | 1869     | 5      |
Table 3. Evaluation measures of LDL.

| No. | Name                    | Formula |
|-----|-------------------------|---------|
| 1   | Hamming distance        | $Dis(D,\tilde{D})=\frac{1}{L}\sum_{l=1}^{L}\lvert d_l-\tilde{d}_l\rvert$ |
| 2   | Euclid distance         | $Dis(D,\tilde{D})=\left(\frac{1}{L}\sum_{l=1}^{L}(d_l-\tilde{d}_l)^2\right)^{\frac{1}{2}}$ |
| 3   | Lambert distance        | $Dis(D,\tilde{D})=\frac{1}{L}\sum_{l=1}^{L}\frac{\lvert d_l-\tilde{d}_l\rvert}{d_l+\tilde{d}_l}$ |
| 4   | Fuzzy absolute distance | $Dis(D,\tilde{D})=\frac{\sum_{l=1}^{L}\lvert d_l-\tilde{d}_l\rvert}{\sum_{l=1}^{L}(d_l+\tilde{d}_l)}$ |
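The four LDL distance measures in Table 3 can be sketched directly in code. The following Python functions are an illustrative implementation, not code from the paper; the example distributions `d` and `dt` (standing for D and D-tilde) are made-up values.

```python
from math import sqrt

def hamming(d, dt):
    # Mean absolute difference over the L labels.
    return sum(abs(a - b) for a, b in zip(d, dt)) / len(d)

def euclid(d, dt):
    # Root of the mean squared difference.
    return sqrt(sum((a - b) ** 2 for a, b in zip(d, dt)) / len(d))

def lambert(d, dt):
    # Mean absolute difference normalized per label by d_l + d~_l.
    return sum(abs(a - b) / (a + b) for a, b in zip(d, dt)) / len(d)

def fuzzy_absolute(d, dt):
    # Total absolute difference normalized by the total mass of both vectors.
    return (sum(abs(a - b) for a, b in zip(d, dt))
            / sum(a + b for a, b in zip(d, dt)))

# Hypothetical example distributions (not taken from the paper's data).
d  = [0.39, 0.64, 0.31]
dt = [0.35, 0.76, 0.42]
print(hamming(d, dt), euclid(d, dt), lambert(d, dt), fuzzy_absolute(d, dt))
```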
Table 4. Evaluation measures of FLDL.

| No. | Name         | Formula |
|-----|--------------|---------|
| 1   | Cheb         | $Dis(D,\tilde{D})=\max_{1\le l\le L}\lvert d_l-\tilde{d}_l\rvert$ |
| 2   | Clark        | $Dis(D,\tilde{D})=\sqrt{\sum_{l=1}^{L}\frac{(d_l-\tilde{d}_l)^2}{(d_l+\tilde{d}_l)^2}}$ |
| 3   | Cosine       | $Sim(D,\tilde{D})=\frac{\sum_{l=1}^{L}d_l\tilde{d}_l}{\sqrt{\sum_{l=1}^{L}d_l^2}\sqrt{\sum_{l=1}^{L}\tilde{d}_l^2}}$ |
| 4   | Intersection | $Sim(D,\tilde{D})=\sum_{l=1}^{L}\min(d_l,\tilde{d}_l)$ |
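Likewise, the four FLDL evaluation measures in Table 4 (Cheb and Clark are distances, the smaller the better; Cosine and Intersection are similarities, the larger the better) can be sketched as follows. This is an illustrative Python implementation with made-up example data; the square root in the Clark distance follows the standard definition of that measure.

```python
from math import sqrt

def cheb(d, dt):
    # Chebyshev distance: largest per-label absolute difference.
    return max(abs(a - b) for a, b in zip(d, dt))

def clark(d, dt):
    # Clark distance: root of summed squared relative differences.
    return sqrt(sum((a - b) ** 2 / (a + b) ** 2 for a, b in zip(d, dt)))

def cosine(d, dt):
    # Cosine similarity between the two distribution vectors.
    num = sum(a * b for a, b in zip(d, dt))
    return num / (sqrt(sum(a * a for a in d)) * sqrt(sum(b * b for b in dt)))

def intersection(d, dt):
    # Intersection similarity: overlap of the two distributions.
    return sum(min(a, b) for a, b in zip(d, dt))

# Hypothetical example distributions (not taken from the paper's data).
d  = [0.2, 0.5, 0.3]
dt = [0.3, 0.4, 0.3]
print(cheb(d, dt), clark(d, dt), cosine(d, dt), intersection(d, dt))
```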
Table 5. Results (mean ± std (rank)) of the Chebyshev distance (the smaller the better).

| No. | PT-Bayes      | PT-SVM        | AA-BP         | AA-kNN        | FFRS-w        |
|-----|---------------|---------------|---------------|---------------|---------------|
| 1   | 0.174 ± 0.011 | 0.017 ± 0.002 | 0.036 ± 0.003 | 0.015 ± 0.001 | 0.014 ± 0.001 |
| 2   | 0.172 ± 0.016 | 0.020 ± 0.003 | 0.037 ± 0.002 | 0.018 ± 0.001 | 0.016 ± 0.001 |
| 3   | 0.166 ± 0.014 | 0.019 ± 0.001 | 0.036 ± 0.002 | 0.018 ± 0.001 | 0.016 ± 0.001 |
| 4   | 0.167 ± 0.007 | 0.046 ± 0.004 | 0.048 ± 0.003 | 0.039 ± 0.001 | 0.038 ± 0.002 |
| 5   | 0.176 ± 0.018 | 0.046 ± 0.001 | 0.052 ± 0.003 | 0.045 ± 0.001 | 0.043 ± 0.001 |
| 6   | 0.178 ± 0.009 | 0.065 ± 0.006 | 0.067 ± 0.005 | 0.064 ± 0.002 | 0.059 ± 0.004 |
| 7   | 0.177 ± 0.011 | 0.057 ± 0.003 | 0.057 ± 0.003 | 0.055 ± 0.002 | 0.052 ± 0.002 |
| 8   | 0.177 ± 0.010 | 0.040 ± 0.001 | 0.043 ± 0.002 | 0.039 ± 0.001 | 0.037 ± 0.001 |
| 9   | 0.211 ± 0.011 | 0.093 ± 0.006 | 0.094 ± 0.006 | 0.096 ± 0.005 | 0.093 ± 0.006 |
| 10  | 0.190 ± 0.017 | 0.091 ± 0.005 | 0.089 ± 0.005 | 0.093 ± 0.004 | 0.088 ± 0.005 |
| 11  | 0.195 ± 0.085 | 0.054 ± 0.004 | 0.059 ± 0.004 | 0.065 ± 0.005 | 0.053 ± 0.004 |
| 12  | 0.407 ± 0.027 | 0.414 ± 0.036 | 0.335 ± 0.016 | 0.374 ± 0.013 | 0.350 ± 0.015 |
| 13  | 0.121 ± 0.016 | 0.127 ± 0.017 | 0.130 ± 0.017 | 0.114 ± 0.017 | 0.120 ± 0.016 |
| 14  | 0.116 ± 0.004 | 0.119 ± 0.006 | 0.113 ± 0.005 | 0.103 ± 0.003 | 0.135 ± 0.005 |
| 15  | 0.199 ± 0.009 | 0.213 ± 0.039 | 0.157 ± 0.013 | 0.154 ± 0.005 | 0.129 ± 0.008 |
Table 6. Results (mean ± std (rank)) of the Clark distance (the smaller the better).

| No. | PT-Bayes      | PT-SVM        | AA-BP         | AA-kNN        | FFRS-w        |
|-----|---------------|---------------|---------------|---------------|---------------|
| 1   | 6.382 ± 0.305 | 0.921 ± 0.107 | 2.352 ± 0.173 | 0.758 ± 0.040 | 0.685 ± 0.047 |
| 2   | 4.987 ± 0.222 | 0.785 ± 0.084 | 1.718 ± 0.110 | 0.717 ± 0.041 | 0.649 ± 0.042 |
| 3   | 4.461 ± 0.282 | 0.691 ± 0.047 | 1.488 ± 0.098 | 0.644 ± 0.016 | 0.587 ± 0.019 |
| 4   | 1.744 ± 0.071 | 0.528 ± 0.031 | 0.568 ± 0.033 | 0.455 ± 0.011 | 0.439 ± 0.020 |
| 5   | 1.415 ± 0.102 | 0.396 ± 0.016 | 0.459 ± 0.031 | 0.392 ± 0.010 | 0.371 ± 0.008 |
| 6   | 1.473 ± 0.069 | 0.565 ± 0.049 | 0.599 ± 0.043 | 0.559 ± 0.024 | 0.515 ± 0.033 |
| 7   | 0.845 ± 0.059 | 0.267 ± 0.014 | 0.268 ± 0.015 | 0.260 ± 0.013 | 0.243 ± 0.009 |
| 8   | 0.846 ± 0.051 | 0.186 ± 0.008 | 0.204 ± 0.012 | 0.182 ± 0.007 | 0.172 ± 0.006 |
| 9   | 0.681 ± 0.038 | 0.287 ± 0.019 | 0.291 ± 0.018 | 0.297 ± 0.016 | 0.285 ± 0.019 |
| 10  | 0.424 ± 0.038 | 0.187 ± 0.011 | 0.184 ± 0.011 | 0.191 ± 0.008 | 0.183 ± 0.010 |
| 11  | 34.24 ± 3.634 | 14.63 ± 0.647 | 22.79 ± 1.841 | 16.28 ± 0.818 | 14.51 ± 0.650 |
| 12  | 7.149 ± 0.109 | 7.208 ± 0.205 | 6.767 ± 0.095 | 3.044 ± 0.137 | 6.663 ± 0.113 |
| 13  | 0.904 ± 0.086 | 0.935 ± 0.074 | 1.046 ± 0.124 | 0.843 ± 0.113 | 0.889 ± 0.085 |
| 14  | 1.116 ± 0.020 | 1.147 ± 0.064 | 1.051 ± 0.064 | 0.841 ± 0.014 | 0.900 ± 0.041 |
| 15  | 1.547 ± 0.075 | 1.537 ± 0.216 | 1.269 ± 0.089 | 1.276 ± 0.046 | 1.120 ± 0.050 |
Table 7. Results (mean ± std (rank)) of the Cosine coefficient (the larger the better).

| No. | PT-Bayes      | PT-SVM        | AA-BP         | AA-kNN        | FFRS-w        |
|-----|---------------|---------------|---------------|---------------|---------------|
| 1   | 0.743 ± 0.015 | 0.991 ± 0.002 | 0.949 ± 0.006 | 0.994 ± 0.001 | 0.995 ± 0.001 |
| 2   | 0.766 ± 0.017 | 0.991 ± 0.002 | 0.960 ± 0.004 | 0.992 ± 0.001 | 0.993 ± 0.001 |
| 3   | 0.780 ± 0.018 | 0.992 ± 0.001 | 0.965 ± 0.004 | 0.993 ± 0.001 | 0.994 ± 0.001 |
| 4   | 0.856 ± 0.007 | 0.982 ± 0.002 | 0.979 ± 0.002 | 0.986 ± 0.001 | 0.988 ± 0.001 |
| 5   | 0.866 ± 0.015 | 0.986 ± 0.001 | 0.981 ± 0.003 | 0.986 ± 0.001 | 0.988 ± 0.001 |
| 6   | 0.859 ± 0.008 | 0.971 ± 0.005 | 0.969 ± 0.004 | 0.972 ± 0.002 | 0.977 ± 0.003 |
| 7   | 0.898 ± 0.008 | 0.986 ± 0.001 | 0.986 ± 0.002 | 0.987 ± 0.001 | 0.988 ± 0.001 |
| 8   | 0.897 ± 0.008 | 0.993 ± 0.001 | 0.991 ± 0.001 | 0.993 ± 0.001 | 0.994 ± 0.001 |
| 9   | 0.893 ± 0.008 | 0.973 ± 0.003 | 0.973 ± 0.003 | 0.970 ± 0.003 | 0.974 ± 0.003 |
| 10  | 0.914 ± 0.011 | 0.976 ± 0.003 | 0.978 ± 0.002 | 0.975 ± 0.002 | 0.978 ± 0.002 |
| 11  | 0.456 ± 0.089 | 0.832 ± 0.011 | 0.726 ± 0.026 | 0.766 ± 0.020 | 0.834 ± 0.011 |
| 12  | 0.559 ± 0.014 | 0.490 ± 0.082 | 0.697 ± 0.011 | 0.624 ± 0.016 | 0.678 ± 0.010 |
| 13  | 0.930 ± 0.013 | 0.920 ± 0.014 | 0.908 ± 0.019 | 0.934 ± 0.018 | 0.931 ± 0.013 |
| 14  | 0.924 ± 0.004 | 0.914 ± 0.006 | 0.926 ± 0.006 | 0.938 ± 0.002 | 0.920 ± 0.005 |
| 15  | 0.850 ± 0.008 | 0.806 ± 0.061 | 0.895 ± 0.014 | 0.880 ± 0.006 | 0.926 ± 0.009 |
Table 8. Results (mean ± std (rank)) of the intersection similarity (the larger the better).

| No. | PT-Bayes      | PT-SVM        | AA-BP         | AA-kNN        | FFRS-w        |
|-----|---------------|---------------|---------------|---------------|---------------|
| 1   | 0.660 ± 0.016 | 0.949 ± 0.006 | 0.877 ± 0.008 | 0.958 ± 0.002 | 0.962 ± 0.003 |
| 2   | 0.681 ± 0.015 | 0.948 ± 0.006 | 0.891 ± 0.007 | 0.953 ± 0.003 | 0.957 ± 0.003 |
| 3   | 0.695 ± 0.019 | 0.951 ± 0.003 | 0.899 ± 0.006 | 0.955 ± 0.001 | 0.959 ± 0.001 |
| 4   | 0.764 ± 0.008 | 0.926 ± 0.004 | 0.922 ± 0.004 | 0.937 ± 0.002 | 0.939 ± 0.003 |
| 5   | 0.773 ± 0.018 | 0.935 ± 0.003 | 0.925 ± 0.005 | 0.936 ± 0.002 | 0.939 ± 0.001 |
| 6   | 0.765 ± 0.010 | 0.906 ± 0.008 | 0.902 ± 0.007 | 0.908 ± 0.004 | 0.915 ± 0.005 |
| 7   | 0.802 ± 0.012 | 0.934 ± 0.004 | 0.934 ± 0.004 | 0.936 ± 0.003 | 0.940 ± 0.002 |
| 8   | 0.801 ± 0.011 | 0.954 ± 0.002 | 0.950 ± 0.003 | 0.955 ± 0.002 | 0.958 ± 0.001 |
| 9   | 0.789 ± 0.011 | 0.907 ± 0.006 | 0.906 ± 0.006 | 0.904 ± 0.005 | 0.909 ± 0.006 |
| 10  | 0.810 ± 0.017 | 0.909 ± 0.005 | 0.911 ± 0.005 | 0.907 ± 0.004 | 0.912 ± 0.005 |
| 11  | 0.470 ± 0.062 | 0.781 ± 0.010 | 0.671 ± 0.025 | 0.742 ± 0.014 | 0.784 ± 0.009 |
| 12  | 0.350 ± 0.014 | 0.364 ± 0.055 | 0.499 ± 0.012 | 0.544 ± 0.018 | 0.556 ± 0.014 |
| 13  | 0.846 ± 0.016 | 0.839 ± 0.015 | 0.824 ± 0.022 | 0.855 ± 0.021 | 0.848 ± 0.016 |
| 14  | 0.834 ± 0.003 | 0.827 ± 0.009 | 0.847 ± 0.008 | 0.872 ± 0.002 | 0.839 ± 0.007 |
| 15  | 0.725 ± 0.011 | 0.711 ± 0.052 | 0.788 ± 0.015 | 0.780 ± 0.007 | 0.864 ± 0.015 |

Share and Cite

Zhang, L.; Zhang, H.; Wu, Y.; Xu, Y.; Ye, T.; Ma, M.; Li, L. HELPFuL: Human Emotion Label Prediction Based on Fuzzy Learning for Realizing Artificial Intelligent in IoT. Appl. Sci. 2023, 13, 7799. https://doi.org/10.3390/app13137799