Search Results (17)

Search Parameters:
Keywords = emotional granularity

40 pages, 12261 KiB  
Article
Integrating Reliability, Uncertainty, and Subjectivity in Design Knowledge Flow: A CMZ-BENR Augmented Framework for Kansei Engineering
by Haoyi Lin, Pohsun Wang, Jing Liu and Chiawei Chu
Symmetry 2025, 17(5), 758; https://doi.org/10.3390/sym17050758 - 14 May 2025
Viewed by 405
Abstract
As a knowledge-intensive activity, the Kansei engineering (KE) process encounters numerous challenges in the design knowledge flow, primarily due to issues related to information reliability, uncertainty, and subjectivity. Bridging this gap, this study introduces an advanced KE framework integrating a cloud model with Z-numbers (CMZ) and Bayesian elastic net regression (BENR). In stage-I of this KE, data mining techniques are employed to process online user reviews, coupled with a similarity analysis of affective word clusters to identify representative emotional descriptors. During stage-II, the CMZ algorithm refines K-means clustering outcomes for market-representative product forms, enabling precise feature characterization and experimental prototype development. Stage-III addresses linguistic uncertainties in affective modeling through CMZ-augmented semantic differential questionnaires, achieving a multi-granular representation of subjective evaluations. Subsequently, stage-IV employs BENR for automated hyperparameter optimization in design knowledge inference, eliminating manual intervention. The framework’s efficacy is empirically validated through a domestic cleaning robot case study, demonstrating superior performance in resolving multiple information processing challenges via comparative experiments. Results confirm that this KE framework significantly improves uncertainty management in design knowledge flow compared to conventional implementations. Furthermore, by leveraging the intrinsic symmetry of the normal cloud model with Z-numbers distributions and the balanced ℓ1/ℓ2 regularization of BENR, CMZ–BENR framework embodies the principle of structural harmony. Full article
(This article belongs to the Special Issue Fuzzy Set Theory and Uncertainty Theory—3rd Edition)
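
The clustering step named in this abstract can be pictured with an ordinary K-means pass over product-form feature vectors. The sketch below is a minimal scikit-learn illustration on made-up feature data; it is not the authors' CMZ-refined clustering or their BENR inference stage.

```python
# Minimal sketch: pick "market-representative" product forms as the samples
# closest to K-means centroids. The feature matrix and cluster count are invented;
# this is plain K-means, not the CMZ-refined procedure described in the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # 200 hypothetical products x 6 form features

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# For each cluster, take the product nearest its centroid as the representative form.
representatives = []
for c, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == c)[0]
    nearest = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
    representatives.append(int(nearest))
print("Representative product indices:", representatives)
```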

23 pages, 14629 KiB  
Article
Multi-Stage Simulation of Residents’ Disaster Risk Perception and Decision-Making Behavior: An Exploratory Study on Large Language Model-Driven Social–Cognitive Agent Framework
by Xinjie Zhao, Hao Wang, Chengxiao Dai, Jiacheng Tang, Kaixin Deng, Zhihua Zhong, Fanying Kong, Shiyun Wang and So Morikawa
Systems 2025, 13(4), 240; https://doi.org/10.3390/systems13040240 - 31 Mar 2025
Viewed by 1355
Abstract
The escalating frequency and complexity of natural disasters highlight the urgent need for deeper insights into how individuals and communities perceive and respond to risk information. Yet, conventional research methods—such as surveys, laboratory experiments, and field observations—often struggle with limited sample sizes, external validity concerns, and difficulties in controlling for confounding variables. These constraints hinder our ability to develop comprehensive models that capture the dynamic, context-sensitive nature of disaster decision-making. To address these challenges, we present a novel multi-stage simulation framework that integrates Large Language Model (LLM)-driven social–cognitive agents with well-established theoretical perspectives from psychology, sociology, and decision science. This framework enables the simulation of three critical phases—information perception, cognitive processing, and decision-making—providing a granular analysis of how demographic attributes, situational factors, and social influences interact to shape behavior under uncertain and evolving disaster conditions. A case study focusing on pre-disaster preventive measures demonstrates its effectiveness. By aligning agent demographics with real-world survey data across 5864 simulated scenarios, we reveal nuanced behavioral patterns closely mirroring human responses, underscoring the potential to overcome longstanding methodological limitations and offer improved ecological validity and flexibility to explore diverse disaster environments and policy interventions. While acknowledging the current constraints, such as the need for enhanced emotional modeling and multimodal inputs, our framework lays a foundation for more nuanced, empirically grounded analyses of risk perception and response patterns. By seamlessly blending theory, advanced LLM capabilities, and empirical alignment strategies, this research not only advances the state of computational social simulation but also provides valuable guidance for developing more context-sensitive and targeted disaster management strategies. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
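
The three simulated phases (information perception, cognitive processing, decision-making) can be pictured as a simple agent loop. The sketch below uses a placeholder `call_llm` function, an invented `Agent` dataclass, and made-up prompts; it is only a schematic of the idea, not the framework or prompts used in the paper.

```python
# Illustrative three-phase agent step. `call_llm` is a stand-in for an actual LLM
# backend; the prompts and agent attributes are invented for this sketch.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder LLM call; replace with a real client."""
    return f"[model response to: {prompt[:40]}...]"

@dataclass
class Agent:
    age: int
    household: str
    risk_tolerance: str

def simulate_step(agent: Agent, warning: str) -> str:
    perceived = call_llm(f"As a {agent.age}-year-old in a {agent.household} household, "
                         f"summarize what you notice in this warning: {warning}")
    appraisal = call_llm(f"Given your {agent.risk_tolerance} risk tolerance, "
                         f"assess the personal threat in: {perceived}")
    decision = call_llm(f"Decide on one preventive action (or none) based on: {appraisal}")
    return decision

print(simulate_step(Agent(34, "two-person", "low"), "Heavy rainfall expected tonight."))
```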

20 pages, 1931 KiB  
Article
Information-Theoretic Measures of Metacognitive Efficiency: Empirical Validation with the Face Matching Task
by Daniel Fitousi
Entropy 2025, 27(4), 353; https://doi.org/10.3390/e27040353 - 28 Mar 2025
Cited by 1 | Viewed by 602
Abstract
The ability of participants to monitor the correctness of their own decisions by rating their confidence is a form of metacognition. This introspective act is crucial for many aspects of cognition, including perception, memory, learning, emotion regulation, and social interaction. Researchers assess the quality of confidence ratings according to bias, sensitivity, and efficiency. To do so, they deploy quantities such as meta-d′ or the M-ratio. These measures compute the expected accuracy level of performance in the primary task (Type 1) from the secondary confidence rating task (Type 2). However, these measures have several limitations. For example, they are based on unwarranted parametric assumptions, and they fall short of accommodating the granularity of confidence ratings. Two recent papers by Dayan and by Fitousi have proposed information-theoretic measures of metacognitive efficiency that can address some of these problems. Dayan suggested metaI and Fitousi proposed metaU, metaKL, and metaJ. These authors demonstrated the convergence of their measures on the notion of metacognitive efficiency using simulations, but did not apply their measures to real empirical data. The present study set out to test the construct validity of these measures in a concrete behavioral task—the face-matching task. The results supported the viability of these novel indexes of metacognitive efficiency, and provided substantial empirical evidence for their convergence. The results also adduced considerable evidence that participants in the face-matching task acquire valuable metaknowledge about the correctness of their own decisions in the task. Full article
(This article belongs to the Special Issue Information-Theoretic Principles in Cognitive Systems)
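
One way to get a feel for information-theoretic metacognition measures is to compute the mutual information between decision correctness and confidence ratings from a joint count table. The snippet below does this on fabricated counts; it is a generic estimate in the spirit of these measures, not the exact definitions of metaI, metaU, metaKL, or metaJ.

```python
# Mutual information I(correctness; confidence) from a joint count table.
# The counts are fabricated; this is a generic estimate, not Dayan's metaI or
# Fitousi's metaU/metaKL/metaJ as defined in the papers.
import numpy as np

# rows: incorrect / correct Type 1 decisions; columns: confidence ratings 1..4
counts = np.array([[30, 25, 15, 10],
                   [10, 20, 35, 55]], dtype=float)

p_joint = counts / counts.sum()
p_acc = p_joint.sum(axis=1, keepdims=True)      # P(correctness)
p_conf = p_joint.sum(axis=0, keepdims=True)     # P(confidence)

nonzero = p_joint > 0
mi = np.sum(p_joint[nonzero] * np.log2(p_joint[nonzero] /
                                       (p_acc @ p_conf)[nonzero]))
print(f"I(correctness; confidence) = {mi:.3f} bits")
```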

25 pages, 4755 KiB  
Article
MSDSANet: Multimodal Emotion Recognition Based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation
by Weitong Sun, Xingya Yan, Yuping Su, Gaihua Wang and Yumei Zhang
Sensors 2025, 25(7), 2029; https://doi.org/10.3390/s25072029 - 24 Mar 2025
Cited by 1 | Viewed by 592
Abstract
Aiming at the shortcomings of EEG emotion recognition models in feature representation granularity and spatiotemporal dependence modeling, a multimodal emotion recognition model integrating multi-scale feature representation and attention mechanism is proposed. The model consists of a feature extraction module, feature fusion module, and classification module. The feature extraction module includes a multi-stream network module for extracting shallow EEG features and a dual-scale attention module for extracting shallow EOG features. The multi-scale and multi-granularity feature fusion improves the richness and discriminability of multimodal feature representation. Experimental results on two datasets show that the proposed model outperforms the existing model. Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
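
The multi-stream idea (separate EEG and EOG feature extractors whose outputs are fused before classification) can be sketched with a generic two-stream PyTorch module. Channel sizes, layers, and the attention-weighted fusion below are invented for illustration; this is not the MSDSANet architecture itself.

```python
# Generic two-stream feature extractor with a simple attention-weighted fusion.
# Channel sizes and layer choices are invented; this is not the paper's MSDSANet.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, eeg_ch=32, eog_ch=4, hidden=64, n_classes=4):
        super().__init__()
        self.eeg_stream = nn.Sequential(nn.Conv1d(eeg_ch, hidden, 7, padding=3),
                                        nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.eog_stream = nn.Sequential(nn.Conv1d(eog_ch, hidden, 7, padding=3),
                                        nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.attn = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, eeg, eog):                      # inputs: (batch, channels, time)
        f_eeg = self.eeg_stream(eeg).squeeze(-1)      # (batch, hidden)
        f_eog = self.eog_stream(eog).squeeze(-1)
        w = self.attn(torch.cat([f_eeg, f_eog], dim=-1))   # (batch, 2) stream weights
        fused = w[:, :1] * f_eeg + w[:, 1:] * f_eog
        return self.classifier(fused)

logits = TwoStreamFusion()(torch.randn(8, 32, 512), torch.randn(8, 4, 512))
print(logits.shape)   # torch.Size([8, 4])
```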

14 pages, 769 KiB  
Article
Speech Emotion Recognition Using Multi-Scale Global–Local Representation Learning with Feature Pyramid Network
by Yuhua Wang, Jianxing Huang, Zhengdao Zhao, Haiyan Lan and Xinjia Zhang
Appl. Sci. 2024, 14(24), 11494; https://doi.org/10.3390/app142411494 - 10 Dec 2024
Cited by 1 | Viewed by 1480
Abstract
Speech emotion recognition (SER) is important in facilitating natural human–computer interactions. In speech sequence modeling, a vital challenge is to learn context-aware sentence expression and temporal dynamics of paralinguistic features to achieve unambiguous emotional semantic understanding. In previous studies, the SER method based on the single-scale cascade feature extraction module could not effectively preserve the temporal structure of speech signals in the deep layer, downgrading the sequence modeling performance. To address these challenges, this paper proposes a novel multi-scale feature pyramid network. The enhanced multi-scale convolutional neural networks (MSCNNs) significantly improve the ability to extract multi-granular emotional features. Experimental results on the IEMOCAP corpus demonstrate the effectiveness of the proposed approach, achieving a weighted accuracy (WA) of 71.79% and an unweighted accuracy (UA) of 73.39%. Furthermore, on the RAVDESS dataset, the model achieves an unweighted accuracy (UA) of 86.5%. These results validate the system’s performance and highlight its competitive advantage. Full article
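
The weighted accuracy (WA) and unweighted accuracy (UA) reported for IEMOCAP and RAVDESS correspond to overall accuracy and mean per-class recall, respectively. The toy labels below show how both are computed with scikit-learn; they are placeholders, not the paper's predictions.

```python
# WA = overall accuracy; UA = unweighted (macro-averaged) per-class recall,
# which sklearn exposes as balanced_accuracy_score. Labels here are toy data.
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = ["ang", "hap", "sad", "neu", "neu", "sad", "hap", "ang", "neu", "sad"]
y_pred = ["ang", "hap", "neu", "neu", "neu", "sad", "hap", "sad", "neu", "sad"]

wa = accuracy_score(y_true, y_pred)
ua = balanced_accuracy_score(y_true, y_pred)
print(f"WA = {wa:.3f}, UA = {ua:.3f}")
```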

33 pages, 531 KiB  
Article
The Limits of Words: Expanding a Word-Based Emotion Analysis System with Multiple Emotion Dictionaries and the Automatic Extraction of Emotive Expressions
by Lu Wang, Sho Isomura, Michal Ptaszynski, Pawel Dybala, Yuki Urabe, Rafal Rzepka and Fumito Masui
Appl. Sci. 2024, 14(11), 4439; https://doi.org/10.3390/app14114439 - 23 May 2024
Cited by 1 | Viewed by 2049
Abstract
Wide adoption of social media has caused an explosion of information stored online, with the majority of that information containing subjective, opinionated, and emotional content produced daily by users. The field of emotion analysis has helped effectively process such human emotional expressions expressed in daily social media posts. Unfortunately, one of the greatest limitations of popular word-based emotion analysis systems has been the limited emotion vocabulary. This paper presents an attempt to extensively expand one such word-based emotion analysis system by integrating multiple emotion dictionaries and implementing an automatic extraction mechanism for emotive expressions. We first leverage diverse emotive expression dictionaries to expand the emotion lexicon of the system. To do that, we solve numerous problems with the integration of various dictionaries collected using different standards. We demonstrate the performance improvement of the system with improved accuracy and granularity of emotion classification. Furthermore, our automatic extraction mechanism facilitates the identification of novel emotive expressions in an emotion dataset, thereby enriching the depth and breadth of emotion analysis capabilities. In particular, the automatic extraction method shows promising results for applicability in further expansion of the dictionary base in the future, thus advancing the field of emotion analysis and offering new avenues for research in sentiment analysis, affective computing, and human–computer interaction. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
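
The word-based approach being expanded here can be pictured as a dictionary lookup over tokenized text. The toy lexicon, emotion categories, and tokenizer below are invented and far cruder than the integrated multi-dictionary system described in the paper.

```python
# Toy word-based emotion tagging: count lexicon hits per emotion category.
# The lexicon, categories, and tokenizer are invented for illustration only.
from collections import Counter
import re

LEXICON = {
    "delighted": "joy", "thrilled": "joy", "furious": "anger",
    "annoyed": "anger", "terrified": "fear", "heartbroken": "sadness",
}

def tag_emotions(text: str) -> Counter:
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(LEXICON[t] for t in tokens if t in LEXICON)

print(tag_emotions("I was thrilled at first, then absolutely furious and heartbroken."))
# Counter({'joy': 1, 'anger': 1, 'sadness': 1})
```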

18 pages, 1359 KiB  
Article
Utilizing Latent Diffusion Model to Accelerate Sampling Speed and Enhance Text Generation Quality
by Chenyang Li, Long Zhang and Qiusheng Zheng
Electronics 2024, 13(6), 1093; https://doi.org/10.3390/electronics13061093 - 15 Mar 2024
Viewed by 2574
Abstract
Diffusion models have achieved tremendous success in modeling continuous data modalities, such as images, audio, and video, yet their application in discrete data domains (e.g., natural language) has been limited. Existing methods primarily represent discrete text in a continuous diffusion space, incurring significant computational overhead during training and resulting in slow sampling speeds. This paper introduces LaDiffuSeq, a latent diffusion-based text generation model incorporating an encoder–decoder structure. Specifically, it first employs a pretrained encoder to map sequences composed of attributes and corresponding text into a low-dimensional latent vector space. Then, without the guidance of a classifier, it performs the diffusion process for the sequence’s corresponding latent space. Finally, a pretrained decoder is used to decode the newly generated latent vectors, producing target texts that are relevant to themes and possess multiple emotional granularities. Compared to the benchmark model, DiffuSeq, this model achieves BERTScore improvements of 0.105 and 0.009 on two public real-world datasets (ChnSentiCorp and a debate dataset), respectively; perplexity falls by 3.333 and 4.562; and it effectively quadruples the text generation sampling speed. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
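
BERTScore, the metric used to compare LaDiffuSeq against DiffuSeq, can be computed with the bert-score package. The candidate/reference pairs below are toy sentences, and `lang="zh"` is an assumption made here because ChnSentiCorp is a Chinese corpus; this has no connection to the paper's actual evaluation pipeline.

```python
# BERTScore between generated candidates and references using the bert-score
# package (pip install bert-score). Sentences are toy examples, not the paper's data.
from bert_score import score

candidates = ["这家酒店的服务很周到，房间也很干净。", "电池续航太差，不推荐购买。"]
references = ["酒店服务周到，房间干净整洁。", "电池续航能力很差，不建议购买。"]

P, R, F1 = score(candidates, references, lang="zh", verbose=False)
print("BERTScore F1:", [round(f.item(), 4) for f in F1])
```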

32 pages, 1399 KiB  
Article
Arabic Emotion Recognition in Low-Resource Settings: A Novel Diverse Model Stacking Ensemble with Self-Training
by Maha Jarallah Althobaiti
Appl. Sci. 2023, 13(23), 12772; https://doi.org/10.3390/app132312772 - 28 Nov 2023
Cited by 1 | Viewed by 1913
Abstract
Emotion recognition is a vital task within Natural Language Processing (NLP) that involves automatically identifying emotions from text. As the need for specialized and nuanced emotion recognition models increases, the challenge of fine-grained emotion recognition with limited labeled data becomes prominent. Moreover, emotion recognition for some languages, such as Arabic, is a challenging task due to the limited availability of labeled data. This scarcity exists in both size and the granularity of emotions. Our research introduces a novel framework for low-resource fine-grained emotion recognition, which uses an iterative process that integrates a stacking ensemble of diverse base models and self-training. The base models employ different learning paradigms, including zero-shot classification, few-shot methods, machine learning algorithms, and transfer learning. Our proposed method eliminates the need for a large labeled dataset to initiate the training process by gradually generating labeled data through iterations. During our experiments, we evaluated the performance of each base model and our proposed method in low-resource scenarios. Our experimental findings indicate our approach outperforms the individual performance of each base model. It also outperforms the state-of-the-art Arabic emotion recognition models in the literature, achieving a weighted average F1-score equal to 83.19% and 72.12% when tested on the AETD and ArPanEmo benchmark datasets, respectively. Full article
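
The weighted-average F1 used to benchmark against AETD and ArPanEmo is the class-frequency-weighted mean of per-class F1 scores, which scikit-learn computes directly. The labels below are toy placeholders, not the benchmark data.

```python
# Weighted-average F1: per-class F1 scores averaged with class-frequency weights.
# Labels below are toy placeholders, not the AETD or ArPanEmo datasets.
from sklearn.metrics import f1_score

y_true = ["joy", "anger", "fear", "joy", "sadness", "joy", "anger", "fear"]
y_pred = ["joy", "anger", "joy",  "joy", "sadness", "joy", "fear",  "fear"]

print("weighted F1 =", round(f1_score(y_true, y_pred, average="weighted"), 3))
```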

25 pages, 5785 KiB  
Article
Investigating and Analyzing Self-Reporting of Long COVID on Twitter: Findings from Sentiment Analysis
by Nirmalya Thakur
Appl. Syst. Innov. 2023, 6(5), 92; https://doi.org/10.3390/asi6050092 - 12 Oct 2023
Cited by 5 | Viewed by 3964
Abstract
This paper presents multiple novel findings from a comprehensive analysis of a dataset comprising 1,244,051 Tweets about Long COVID, posted on Twitter between 25 May 2020 and 31 January 2023. First, the analysis shows that the average number of Tweets per month wherein individuals self-reported Long COVID on Twitter was considerably high in 2022 as compared to the average number of Tweets per month in 2021. Second, findings from sentiment analysis using VADER show that the percentages of Tweets with positive, negative, and neutral sentiments were 43.1%, 42.7%, and 14.2%, respectively. To add to this, most of the Tweets with a positive sentiment, as well as most of the Tweets with a negative sentiment, were not highly polarized. Third, the result of tokenization indicates that the tweeting patterns (in terms of the number of tokens used) were similar for the positive and negative Tweets. Analysis of these results also shows that there was no direct relationship between the number of tokens used and the intensity of the sentiment expressed in these Tweets. Finally, a granular analysis of the sentiments showed that the emotion of sadness was expressed in most of these Tweets. It was followed by the emotions of fear, neutral, surprise, anger, joy, and disgust, respectively. Full article
(This article belongs to the Section Medical Informatics and Healthcare Engineering)
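
The VADER scores behind the positive/negative/neutral percentages come from the vaderSentiment package's compound score. The snippet below classifies a few made-up tweets using the commonly cited ±0.05 thresholds, which may differ from the exact cutoffs used in the paper.

```python
# Classify tweets with VADER (pip install vaderSentiment) using the commonly cited
# compound-score thresholds (>= 0.05 positive, <= -0.05 negative, else neutral).
# The tweets are made up; thresholds may differ from those used in the paper.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweets = [
    "Month 8 of long covid and I finally walked a mile today, so grateful",
    "Brain fog and fatigue are ruining my ability to work, this is exhausting",
    "Posting my long covid symptom log for week 12",
]
for t in tweets:
    c = analyzer.polarity_scores(t)["compound"]
    label = "positive" if c >= 0.05 else "negative" if c <= -0.05 else "neutral"
    print(f"{label:8s} ({c:+.3f})  {t}")
```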

38 pages, 661 KiB  
Article
Development and Validation of an Ability Measure of Emotion Understanding: The Core Relational Themes of Emotion (CORE) Test
by James L. Floman, Marc A. Brackett, Matthew L. LaPalme, Annette R. Ponnock, Sigal G. Barsade and Aidan Doyle
J. Intell. 2023, 11(10), 195; https://doi.org/10.3390/jintelligence11100195 - 9 Oct 2023
Cited by 1 | Viewed by 5181
Abstract
Emotion understanding (EU) ability is associated with healthy social functioning and psychological well-being. Across three studies, we develop and present validity evidence for the Core Relational Themes of Emotions (CORE) Test. The test measures people’s ability to identify relational themes underlying 19 positive and negative emotions. Relational themes are consistencies in the meaning people assign to emotional experiences. In Study 1, we developed and refined the test items employing a literature review, expert panel, and confusion matrix with a demographically diverse sample. Correctness criteria were determined using theory and prior research, and a progressive (degrees of correctness) paradigm was utilized to score the test. In Study 2, the CORE demonstrated high internal consistency and a confirmatory factor analysis supported the unidimensional factor structure. The CORE showed evidence of convergence with established EU ability measures and divergent relationships with verbal intelligence and demographic characteristics, supporting its construct validity. Also, the CORE was associated with less relational conflict. In Study 3, the CORE was associated with more adaptive and less maladaptive coping and higher well-being on multiple indicators. A set of effects remained, accounting for variance from a widely used EU test, supporting the CORE’s incremental validity. Theoretical and methodological contributions are discussed. Full article
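
Internal consistency of an ability test like the CORE is commonly summarized with Cronbach's alpha. The function below implements the standard formula on a fabricated respondent-by-item score matrix; it has no connection to the CORE data.

```python
# Cronbach's alpha from a (respondents x items) score matrix, using the standard
# formula alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The response matrix is fabricated; it is not the CORE test data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))                       # shared trait
items = ability + rng.normal(scale=0.8, size=(300, 10))   # 10 correlated items
print(f"alpha = {cronbach_alpha(items):.3f}")
```
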
13 pages, 2797 KiB  
Article
Exercise Modulates Brain Glucose Utilization Response to Acute Cocaine
by Colin Hanna, John Hamilton, Kenneth Blum, Rajendra D. Badgaiyan and Panayotis K. Thanos
J. Pers. Med. 2022, 12(12), 1976; https://doi.org/10.3390/jpm12121976 - 30 Nov 2022
Cited by 6 | Viewed by 2320
Abstract
Exercise, a proven method of boosting health and wellness, is thought to act as a protective factor against many neurological and psychological diseases. Recent studies on exercise and drug exposure have pinpointed some of the neurological mechanisms that may characterize this protective factor. Using positron emission tomography (PET) imaging techniques and the glucose analog [18F]-Fluorodeoxyglucose (18F-FDG), our team sought to identify how chronic aerobic exercise modulates brain glucose metabolism (BGluM) after drug-naïve rats were exposed to an acute dose of cocaine. Using sedentary rats as a control group, we observed significant differences in regional BGluM. Chronic treadmill exercise treatment coupled with acute cocaine exposure induced responses in BGluM activity in the following brain regions: postsubiculum (Post), parasubiculum (PaS), granular and dysgranular insular cortex (GI and DI, respectively), substantia nigra reticular (SNR) and compact part dorsal tier (SNCD), temporal association cortex (TeA), entopenduncular nucleus (EP), and crus 1 of the ansiform lobule (crus 1). Inhibition, characterized by decreased responses due to our exercise, was found in the ventral endopiriform nucleus (VEn). These areas are associated with memory and various motor functions. They also include and share connections with densely dopaminergic areas of the mesolimbic system. In conclusion, these findings suggest that treadmill exercise in rats mediates brain glucose response to an acute dose of cocaine differently as compared to sedentary rats. The modulated brain glucose utilization occurs in brain regions responsible for memory and association, spatial navigation, and motor control as well as corticomesolimbic regions related to reward, emotion, and movement. Full article
(This article belongs to the Section Methodology, Drug and Device Discovery)

18 pages, 3735 KiB  
Article
A Hierarchical Heterogeneous Graph Attention Network for Emotion-Cause Pair Extraction
by Jiaxin Yu, Wenyuan Liu, Yongjun He and Bineng Zhong
Electronics 2022, 11(18), 2884; https://doi.org/10.3390/electronics11182884 - 12 Sep 2022
Cited by 4 | Viewed by 2389
Abstract
Recently, graph neural networks (GNN), due to their compelling representation learning ability, have been exploited to deal with emotion-cause pair extraction (ECPE). However, current GNN-based ECPE methods mostly concentrate on modeling the local dependency relation between homogeneous nodes at the semantic granularity of clauses or clause pairs, while they fail to take full advantage of the rich semantic information in the document. To solve this problem, we propose a novel hierarchical heterogeneous graph attention network to model global semantic relations among nodes. Especially, our method introduces all types of semantic elements involved in the ECPE, not just clauses or clause pairs. Specifically, we first model the dependency between clauses and words, in which word nodes are also exploited as an intermediary for the association between clause nodes. Secondly, a pair-level subgraph is constructed to explore the correlation between the pair nodes and their different neighboring nodes. Representation learning of clauses and clause pairs is achieved by two-level heterogeneous graph attention networks. Experiments on the benchmark datasets show that our proposed model achieves a significant improvement over 13 compared methods. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
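
The graph-attention building block this line of work relies on is available off the shelf in PyTorch Geometric. The sketch below runs a single homogeneous GATConv layer over a toy clause graph with invented features and edges; it is far simpler than the hierarchical heterogeneous attention network proposed in the paper.

```python
# A single graph-attention layer over a toy "clause" graph using PyTorch Geometric.
# This is a plain homogeneous GATConv, not the paper's hierarchical heterogeneous
# network; node features and edges are invented.
import torch
from torch_geometric.nn import GATConv

num_clauses, in_dim, out_dim = 6, 16, 8
x = torch.randn(num_clauses, in_dim)                  # clause node features
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5, 0, 2],  # source clauses
                           [1, 0, 3, 2, 5, 4, 2, 0]]) # target clauses

gat = GATConv(in_dim, out_dim, heads=4, concat=True)
out = gat(x, edge_index)
print(out.shape)   # torch.Size([6, 32])  (out_dim * heads)
```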

21 pages, 3401 KiB  
Article
A Study on Epidemic Information Screening, Prevention and Control of Public Opinion Based on Health and Medical Big Data: A Case Study of COVID-19
by Jinhai Li, Yunlei Ma, Xinglong Xu, Jiaming Pei and Youshi He
Int. J. Environ. Res. Public Health 2022, 19(16), 9819; https://doi.org/10.3390/ijerph19169819 - 9 Aug 2022
Cited by 7 | Viewed by 2499
Abstract
The outbreak of the coronavirus disease 2019 (COVID-19) represents an alert for epidemic prevention and control in public health. Offline anti-epidemic work is the main battlefield of epidemic prevention and control. However, online epidemic information prevention and control cannot be ignored. The aim of this study was to identify reliable information sources and false epidemic information, as well as early warnings of public opinion about epidemic information that may affect social stability and endanger the people’s lives and property. Based on the analysis of health and medical big data, epidemic information screening and public opinion prevention and control research were decomposed into two modules. Eight characteristics were extracted from the four levels of coarse granularity, fine granularity, emotional tendency, and publisher behavior, and another regulatory feature was added, to build a false epidemic information identification model. Five early warning indicators of public opinion were selected from the macro level and the micro level to construct the early warning model of public opinion about epidemic information. Finally, an empirical analysis on COVID-19 information was conducted using big data analysis technology. Full article

23 pages, 5116 KiB  
Article
Customer Perceived Risk Measurement with NLP Method in Electric Vehicles Consumption Market: Empirical Study from China
by Tao Shu, Zhiyi Wang, Ling Lin, Huading Jia and Jixian Zhou
Energies 2022, 15(5), 1637; https://doi.org/10.3390/en15051637 - 22 Feb 2022
Cited by 20 | Viewed by 4627
Abstract
In recent years, as people’s awareness of energy conservation, environmental protection, and sustainable development has increased, discussions related to electric vehicles (EVs) have aroused public debate on social media. At some point, most consumers face the possible risks of EVs—a critical psychological perception that invariably affects sales of EVs in the consumption market. This paper chooses to deconstruct customers’ perceived risk from third-party comment data in social media, which has better coverage and objectivity than questionnaire surveys. In order to analyze a large amount of unstructured text comment data, the natural language processing (NLP) method based on machine learning was applied in this paper. The measurement results show 15 abstracts in five consumer perceived risks to EVs. Among them, the largest number of comments is that of “Technology Maturity” (A13) which reached 25,329, and which belongs to the “Performance Risk” (PR1) dimension, indicating that customers are most concerned about the performance risk of EVs. Then, in the “Social Risk” (PR5) dimension, the abstract “Social Needs” (A51) received only 3224 comments and “Preference and Trust Rank” (A52) reached 22,324 comments; this noticeable gap indicated the changes in how consumers perceived EVs social risks. Moreover, each dimension’s emotion analysis results showed that negative emotions are more than 40%, exceeding neutral or positive emotions. Importantly, customers have the strongest negative emotions about the “Time Risk” (PR4), accounting for 54%. On a finer scale, the top three negative emotions are “Charging Time” (A42), “EV Charging Facilities” (A41), and “Maintenance of Value” (A33). Another interesting result is that “Social Needs” (A51)’s positive emotional comments were larger than negative emotional comments. The paper provides substantial evidence for perceived risk theory research by new data and methods. It can provide a novel tool for multi-dimensional and fine-granular capture customers’ perceived risks and negative emotions. Thus, it has the potential to help government and enterprises to adjust promotional strategies in a timely manner to reduce higher perceived risks and emotions, accelerating the sustainable development of EVs’ consumption market in China. Full article
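
The per-dimension emotion shares reported here (e.g., 54% negative comments for Time Risk) reduce to a grouped proportion over labeled comments. The pandas sketch below uses invented column names and rows, not the paper's EV comment dataset.

```python
# Share of negative comments per perceived-risk dimension via a simple groupby.
# Column names and rows are invented for illustration only.
import pandas as pd

comments = pd.DataFrame({
    "dimension": ["PR1", "PR1", "PR4", "PR4", "PR4", "PR5", "PR5"],
    "sentiment": ["negative", "neutral", "negative", "negative", "positive",
                  "positive", "negative"],
})
neg_share = (comments["sentiment"].eq("negative")
             .groupby(comments["dimension"]).mean())
print(neg_share.round(2))
```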

20 pages, 52068 KiB  
Article
The Meaning of Emoji to Describe Food Experiences in Pre-Adolescents
by Julia Sick, Erminio Monteleone, Lapo Pierguidi, Gastón Ares and Sara Spinelli
Foods 2020, 9(9), 1307; https://doi.org/10.3390/foods9091307 - 16 Sep 2020
Cited by 34 | Viewed by 8054
Abstract
Ongoing research has shown that emoji can be used by children to discriminate food products, but it is unclear if they express emotions and how they are linked to emotional words. Little is known about how children interpret emoji in terms of their emotional meaning in the context of food. This study aimed at investigating the emotional meaning of emoji used to describe food experiences in 9–13-year-old pre-adolescents and to measure related age and gender differences. The meaning of 46 emoji used to describe food experience was explored by: mapping emoji according to similarities and differences in their emotional meaning using the projective mapping technique, and linking emoji with emotion words using a check-all-that-apply (CATA) format. The two tasks gave consistent results and showed that emoji were discriminated along the valence (positive vs. negative) and power (dominant vs. submissive) dimension, and to a lower extent along the arousal dimension (high vs. low activation). In general, negative emoji had more distinct meanings than positive emoji in both studies, but differences in nuances of meaning were found also among positive emoji. Girls and older pre-adolescents (12–13 years old (y.o.)) discriminated positive emoji slightly better than boys and younger pre-adolescents (9–11 y.o.). This suggests that girls and older pre-adolescents may be higher in emotional granularity (the ability to experience and discriminate emotions), particularly of positive emotions. The results of the present work can be used for the development of an emoji-based tool to measure emotions elicited by foods in pre-adolescents. Full article
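
The check-all-that-apply (CATA) task yields an emoji-by-emotion count matrix, and similarity of emotional meaning between emoji can be read off as, for example, cosine similarity between rows. The tiny matrix below is fabricated, not the study's data.

```python
# Cosine similarity between emoji CATA profiles (rows = emoji, columns = emotion
# words, cells = how often the word was checked for that emoji). Counts are fabricated.
import numpy as np

emoji = ["smiling", "crying", "angry"]
emotion_words = ["happy", "sad", "angry", "calm"]
cata = np.array([[48,  2,  1, 20],
                 [ 1, 45,  6,  2],
                 [ 2,  8, 50,  1]], dtype=float)

norm = cata / np.linalg.norm(cata, axis=1, keepdims=True)
similarity = norm @ norm.T
print(np.round(similarity, 2))
```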