1. Introduction
A meme is a rapidly spreading idea, behavior, or style, generally in the form of images sourced from TV shows, movies, or user-created visuals accompanied by witty text [1]. Typically, memes are intentionally created to express opinions, ideas, and emotions relevant to a topic trending on social media. Memes are generally created when a particular topic gains widespread attention, a phenomenon amplified by the growing number of social media users. The increase in meme creation and distribution has prompted the application of sentiment analysis, an analytical method that can be used to quickly study public opinion towards digitally circulating memes [2]. Through sentiment analysis, the prevailing public mood can be discerned and future trends anticipated [3]. This analytical method has found applications in various domains, including politics [4,5], tourism [6,7], government intelligence [8], and academia [9]. Meme images may or may not contain human faces; for memes that do contain human faces, sentiment analysis can be carried out with the use of facial expression recognition (FER) images. FER images contain only a human face and show various facial expressions that depict different emotions.
Sentiment analysis using datasets of human facial images that are not memes has been conducted in [10,11,12,13,14,15], with resulting accuracies of more than 90%. These studies used publicly available datasets such as Facial Expression Recognition (FER2013), Japanese Female Facial Expression (JAFFE), and/or Extended Cohn-Kanade (CK+). The RAF-DB dataset was also used in [16,17] for robust facial emotion recognition. The primary objective of these studies was to categorize facial expressions into distinct emotions, including happiness, sadness, anger, and neutral. Deep learning was the most common approach for facial expression classification within these studies.
Despite extensive research on sentiment analysis, the number of studies that focus on memes is still limited [18]. Sentiment analysis has been frequently used to classify public comments and reviews regarding products, services, policies, and other subjects. Previous studies that have carried out sentiment analysis of memes using meme datasets include hateful meme classification [19,20], sentiment analysis-based emotion detection [21], sarcasm detection [18,22], and identification of offensive content within memes [23]. All these studies focus on meme datasets containing non-Indonesian text, indicating the scarcity of research on sentiment analysis using meme datasets that contain Indonesian text.
Previous studies have carried out sentiment analysis on image-based memes. Prakash and Aloysius [21] carried out sentiment analysis on meme images based on facial expressions using a CNN architecture. The emotions corresponding to the facial expressions within the meme images are identified using the proposed CNN architecture, and the meme images are then categorized into three polarity classes, namely positive, negative, and neutral, based on the identified emotion. Meanwhile, Kumar et al. [24] applied the Bag of Visual Words (BoVW) and SVM algorithms to classify meme images into five polarity classes, namely highly positive, positive, neutral, negative, and highly negative, achieving an accuracy of 75.4% for sentiment analysis of meme images. Furthermore, Elahi et al. [25] conducted sentiment analysis on Bengali memes using a multimodal approach; the proposed method produced an image accuracy of 73%.
Various techniques have been used to recognize features of objects within images, including the utilization of key points. Arwoko et al. [26] used key points to recognize finger gestures representing the numbers 1 through 5, in which the proposed Deep Neural Network (DNN) model achieved an accuracy of 99%. Hangaragi et al. [27] used facial key points to recognize the eyes, nose, mouth, and outer facial contour, which are then used to detect human faces; the goal of that study was to create a facial recognition system based on a DNN model, which achieved an accuracy of 94.23%. The use of key points for feature extraction and of DNN models for classification in [21,22] was proven to produce a high level of accuracy. Therefore, in this research, key points are used for feature extraction from meme images, while a DNN is used as one of the classification models. By using key points, the facial features needed for image sentiment classification can be extracted. This differs from the research by Aggarwal et al. [19] and Rosid et al. [22], in which sentiment classification was conducted using all the features within the image.
This study aims to develop a sentiment analysis model for meme images based on key points [27], using a dataset of meme images containing Indonesian text from digital media. The meme images analyzed in this study contain human faces. Sentiment analysis is conducted based on key points that represent facial features such as the eyebrows, eyes, and mouth. In meme image classification, key points can be focused on representing the important facial features needed to conduct sentiment analysis. These key points are represented in various forms, namely (x, y) coordinates, vectors centered on one point [26], and the proposed graph representations, namely directed graph, weighted graph, and weighted directed graph. The use of graphs can make classification easier, especially for complex systems [28].
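As an illustration of this key point extraction step, the minimal sketch below obtains facial landmarks from a meme image with the MediaPipe Face Mesh solution and keeps only eyebrow, eye, and mouth points; the landmark index subsets and the file name are illustrative assumptions, not the exact configuration used in this study.

```python
# Minimal sketch (assumed setup): extract eyebrow, eye, and mouth key points
# from a meme image with MediaPipe Face Mesh. The landmark index subsets and
# the image path are illustrative, not the exact ones used in this study.
import cv2
import mediapipe as mp

# Hypothetical selection of Face Mesh landmark indices per facial part.
PARTS = {
    "left_eyebrow": [70, 63, 105, 66, 107],
    "right_eyebrow": [336, 296, 334, 293, 300],
    "left_eye": [33, 160, 158, 133, 153, 144],
    "right_eye": [362, 385, 387, 263, 373, 380],
    "mouth": [61, 40, 37, 0, 267, 270, 291, 321, 314, 17, 84, 91],
}

def extract_keypoints(image_path):
    """Return {part_name: [(x, y), ...]} in pixel coordinates, or None if no face is found."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        result = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no detectable face in the meme image
    landmarks = result.multi_face_landmarks[0].landmark  # 468 normalized face points
    return {part: [(landmarks[i].x * w, landmarks[i].y * h) for i in idx]
            for part, idx in PARTS.items()}

keypoints = extract_keypoints("meme_example.jpg")  # hypothetical file name
```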
DNN models were fine-tuned and used to determine the sentiment polarity of the meme images, with the aim of achieving optimal accuracy. The use of key points in image sentiment analysis is still not widely developed; usually, sentiment analysis is carried out on images that are processed directly using deep learning methods. Key points were used in [22] to recognize hand gestures indicating the numbers 1 to 5, obtaining an accuracy of 98.5%. Meanwhile, the accuracy reported in [21] using machine learning methods is still around 75.4%, so further innovation is needed. By using key points, image detection can be focused on the required areas (corners, edges, unique patterns) so that the model does not process the entire image, which speeds up the process, reduces the computing load, and saves memory. Therefore, the contributions of this study are as follows: (1) developing a human facial sentiment detection model using key points, (2) representing key points as various graphs, and (3) constructing a meme dataset with Indonesian text.
The subsequent sections of the paper are organized as follows: Section 2 describes related work on sentiment analysis, Section 3 details the methodology of this study, Section 4 elaborates on the experiment and comparative performance analysis of the proposed model, Section 5 discusses the results of the experiment, and finally, Section 6 presents the conclusion and future works.
2. Related Work
Previous works have been conducted on sentiment analysis of meme images. Datasets of human facial images were used for facial emotion classification [9,10,11,12,13,14] and for building an attendance system based on facial data [22]. Meena et al. [10] analyzed the sentiment of non-meme images of human faces using a hybrid CNN-InceptionV3 model. The public datasets in this study were Facial Expression Recognition (FER2013), Japanese Female Facial Expression (JAFFE), and Extended Cohn-Kanade (CK+). The same datasets were analyzed in the study by Meena et al. [11] to detect facial sentiment using VGG19; the results indicated that, on the CK+ and JAFFE datasets, VGG19 produced better accuracy than InceptionV3. Moung et al. [12] compared the performance of ResNet50, InceptionV3, and CNN models for image sentiment analysis on the FER2013 dataset, and the experimental results showed that the ResNet50 model obtained the highest accuracy.
Prior research on sentiment analysis of multimodal data has been conducted by utilizing the Contrastive Language-Image Pre-training (CLIP) model, which connects text and image data to generate a unified understanding of both modalities. Liang et al. [29] used a CLIP-based multimodal feature extraction model to extract aligned features from different modalities, preventing significant feature differences caused by data heterogeneity. The experimental results showed that the proposed model achieved classification accuracy improvements of 9.57%, 3.87%, 3.63%, 3.14%, 0.77%, and 0.28% over baseline models across six classification tasks. Lu et al. [30] proposed a cross-modal sentiment model based on CLIP image-text attention interaction; the model utilizes pre-trained ResNet50 and RoBERTa to extract primary image and text features, and the experimental results indicate that it achieved accuracy rates of 75.38% and 73.95%.
Furthermore, previous studies on sentiment analysis have also been carried out on multimodal meme datasets to identify offensive content [19] and to analyze sentiment based on facial expressions [18,20,31], while text meme datasets have been used for sentiment analysis [32] and sarcasm detection [33]. Research related to sentiment analysis using text meme datasets has been carried out in various fields, including economics [34,35], tourism [36], academics [37], and customer satisfaction [38]. Almost all of these studies use datasets from Twitter. Various machine learning and deep learning methods were used to perform different tasks, and the deep learning methods were shown to produce accuracies of more than 85%.
Several prior studies have focused on conducting sentiment analysis of multimodal memes. Kumar et al. [24] aimed to analyze fine-grained sentiment from multimodal memes, with sentiment polarity categorized into five classes, namely very positive, positive, neutral, negative, and very negative. This study used a hybrid deep learning model, ConvNet-SVMBoVW, based on CNN and SVM, and the proposed model achieved an accuracy of 75.4%. Kumar and Garg [31] conducted multimodal meme sentiment analysis, focusing on meme images containing text. The sentiment score of the images was calculated using SentiBank, a visual sentiment ontology, and R-CNN, while the text sentiment score was calculated using a hybrid of a context-aware lexicon and ensemble learning. Finally, the polarity of the multimodal memes was determined by aggregating the image and text sentiment scores. Evaluation was carried out using randomly selected multimodal tweets related to the Indian Criminal Court verdict against the LGBT community. The proposed model achieved an accuracy of 91.32% for multimodal sentiment analysis.
Moreover, several previous studies focused on sentiment analysis based on facial expressions. Aksoy and Güney [14] explored sentiment analysis based on facial expressions using a dataset of non-meme human facial images. Facial expressions were classified into seven groups, namely happy, sad, surprised, disgusted, angry, afraid, and neutral. Several public datasets were used, including Facial Expression Recognition (FER2013), Japanese Female Facial Expression (JAFFE), and Extended Cohn-Kanade (CK+). The experimental results indicate that the highest accuracy was obtained with the use of Histograms of Oriented Gradients (HOG) for feature extraction and ResNet for classification. Moreover, sentiment analysis based on facial expressions is carried out by considering many facial features. Zhang et al. [39] stated that in a meme image, there exists a relationship between the object, emotion, and sentiment.
Datasets of human face images are slightly different from meme image datasets. In human face datasets, facial expressions are shown from several different angles of the face. The meme images used in this study are those in which a person's face can be observed; occasionally, the face in a meme image may appear small. Based on individual tests of the differences between sentiment groups at α = 5%, 88.4% of features differed between the sentiment groups in the face dataset (FER2013), while only 53% of features differed in the meme dataset. As a result, sentiment analysis on the human face dataset is expected to produce better accuracy than sentiment analysis on the meme dataset.
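A minimal sketch of such a per-feature test is given below, assuming the key point features are stored in an array with one row per image and that a one-way ANOVA across the three sentiment groups stands in for the individual tests mentioned above; the data shown are random placeholders, not the study's datasets.

```python
# Minimal sketch (assumed data layout): count how many features differ between
# sentiment groups at alpha = 5%, using a one-way ANOVA per feature.
# X: (n_images, n_features) key point features; y: sentiment label per image.
import numpy as np
from scipy import stats

def fraction_of_discriminative_features(X, y, alpha=0.05):
    groups = [X[y == label] for label in np.unique(y)]
    significant = 0
    for j in range(X.shape[1]):
        _, p_value = stats.f_oneway(*[g[:, j] for g in groups])
        if p_value < alpha:
            significant += 1
    return significant / X.shape[1]

# Illustrative random data only; real features would come from the key points.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.choice(["positive", "neutral", "negative"], size=300)
print(fraction_of_discriminative_features(X, y))
```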
In this research, sentiment analysis is carried out by extracting facial features within meme images using key points. The key points indicating eyes, eyebrows, and mouth are represented in five forms, namely (x, y) coordinates, vectors centered on one point, directed graph, weighted graph, and weighted directed graph. Model evaluation is carried out using several evaluation metrics, namely accuracy, recall, precision, and F1-score.
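A compact sketch of how these metrics could be computed with scikit-learn is shown below, assuming predicted and true sentiment labels are available; the labels are toy data and this is not the study's evaluation code.

```python
# Minimal sketch (illustrative labels): compute accuracy, precision, recall,
# and F1-score for three-class sentiment predictions with scikit-learn.
from sklearn.metrics import accuracy_score, classification_report

y_true = ["positive", "negative", "neutral", "negative", "positive"]  # toy data
y_pred = ["positive", "neutral", "neutral", "negative", "positive"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=3))  # per-class precision/recall/F1
```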
5. Discussion
The performance results presented in the previous section indicated that the DNN model with the directed graph key point representation achieved superior performance compared to the basic model and other deep learning models. In this section, the impact of several factors on the ability of the model to determine the sentiment of the meme images is investigated and presented.
There is a significant difference in the accuracy of the DNN model with the directed graph key point representation when carrying out sentiment analysis of meme images compared to non-meme images. The sentiment accuracy achieved by the proposed model on meme images was not as high as on publicly available human face images. There are several differences between the meme image dataset and the human facial image datasets. The publicly available human facial image datasets (FER2013, JAFFE, CK+) are facial expression datasets that were built specifically to show happy, sad, angry, neutral, or other facial expressions, and the expressions can therefore be clearly differentiated. This is different from the meme image dataset, in which facial expression labeling is done by annotators who are facial expression experts. Some meme images allow different facial expression interpretations, such that different experts labeled the same meme image differently. To reduce the bias arising from these labeling differences, only meme images that were labeled identically by all annotators were included in the dataset.
When compared to previous similar studies using meme datasets, the results of this study show an increase in accuracy. Kumar et al. [24] used a hybrid CNN-SVM model and achieved an accuracy of 75.4%, classifying the images directly with a CNN without any special treatment. This differs from the proposed method, which uses key points represented as a directed graph in the classification model: key points are extracted from the image dataset and represented as a directed graph, and these directed graph features are then classified into positive, negative, or neutral sentiment. This method produces an 8% increase in accuracy compared to Kumar et al.
The use of the MediaPipe library for determining key points from images greatly helps the classification process. The key points produced by the MediaPipe library are more precise than those produced by the OpenPose library: MediaPipe divides the face into 468 key points, whereas OpenPose uses 70. Even though MediaPipe provides a larger number of key points, in some images with extreme mouth shapes, MediaPipe fails to detect the mouth correctly.
Figure 9 shows a mouth expression in which the key point transformation does not match the expression in the original image. Inaccuracy in describing expressions can cause errors in sentiment classification by the system. In Figure 9, the annotator labeled the image as having a negative sentiment, while the system labeled it as neutral.
Key points are represented in five forms: (x, y) coordinates, vectors centered on one point, directed graph, weighted graph, and weighted directed graph. Based on the experimental results, the (x, y) coordinate representation of key points resulted in the lowest accuracy, because the raw positions of the key points depend on where the face appears in the image. Meanwhile, the vectors-centered-on-one-point representation produced higher accuracy than the (x, y) coordinate representation. In this representation, each key point is expressed by its coordinates relative to the point determined as the centroid of the key points of each eye, eyebrow, and mouth shape. A vector-based key point dataset, expressed in relative coordinates, reveals the distinctive characteristics of each object for a specific sentiment, resulting in better accuracy than the (x, y) coordinate representation.
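As a minimal sketch of this representation, the snippet below converts the (x, y) key points of one facial part into vectors relative to the part's centroid; the function name and toy data are illustrative, not taken from the study's code.

```python
# Minimal sketch (illustrative names): represent the key points of one facial
# part (e.g., the mouth) as vectors relative to the part's centroid.
import numpy as np

def centroid_relative_vectors(points):
    """points: (n, 2) array of (x, y) key points of one facial part.
    Returns an (n, 2) array of vectors from the part centroid to each point."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    return points - centroid

mouth = [(120.0, 210.0), (135.0, 205.0), (150.0, 212.0), (135.0, 220.0)]  # toy data
print(centroid_relative_vectors(mouth))
```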
Better accuracy is obtained with the use of key points in the form of graphs. In the directed graph representation, every pair of adjacent key points is expressed as a vector, so the data take the form of relative coordinates between two consecutive key points. In the weighted graph representation, the weight is expressed as the angle between two key points. Based on the experimental results, the directed graph representation produced better accuracy than the weighted graph representation. Although the weighted directed graph representation produced almost the same accuracy as the directed graph representation, its learning process took longer.
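The sketch below illustrates, under the same illustrative conventions as above, how these graph features could be derived: consecutive key points are connected by edge vectors (directed graph), and each edge weight is taken as the angle of the segment between two adjacent key points (weighted graph). The exact edge ordering and angle convention used in the study may differ.

```python
# Minimal sketch (illustrative conventions): build directed and weighted graph
# features from the ordered key points of one facial part. Edges connect
# consecutive key points; the weight of an edge is the angle of that segment.
import numpy as np

def directed_graph_features(points):
    """Edge vectors between consecutive key points (relative coordinates)."""
    pts = np.asarray(points, dtype=float)
    return pts[1:] - pts[:-1]  # shape: (n-1, 2)

def weighted_graph_features(points):
    """Angle (in degrees) of each edge between consecutive key points."""
    edges = directed_graph_features(points)
    return np.degrees(np.arctan2(edges[:, 1], edges[:, 0]))  # shape: (n-1,)

def weighted_directed_graph_features(points):
    """Concatenate each edge vector with its angle (weighted directed graph)."""
    edges = directed_graph_features(points)
    angles = weighted_graph_features(points).reshape(-1, 1)
    return np.hstack([edges, angles])

mouth = [(120.0, 210.0), (135.0, 205.0), (150.0, 212.0), (135.0, 220.0)]  # toy data
print(weighted_directed_graph_features(mouth))
```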
The performance of the DNN model with the directed graph representation is compared to that of other models incorporating the proposed directed graph representation. The purpose of this comparison is to determine whether the directed graph representation can improve the accuracy of the other models.
At the start of testing, parameter values were chosen based on the test results in [22], namely 3 hidden layers of sizes i = 13, j = 13, and k = 14, activation = relu, and solver = adam. To obtain optimal results, parameter tuning was carried out with the following settings:
Size of each hidden layer: 10 to 100;
Activation: tanh, relu, and logistic;
Solver: SGD, adam and L-BFGS;
Learning rate: constant, adaptive.
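A minimal sketch of this tuning procedure is given below, assuming the key point features and sentiment labels are already held in arrays X and y and that the DNN is a scikit-learn MLPClassifier; the grid mirrors the settings listed above but is illustrative, not the exact search space or code used in this study.

```python
# Minimal sketch (assumed setup): grid search over the listed DNN parameters
# with a scikit-learn MLPClassifier. X holds graph-based key point features,
# y the sentiment labels; random placeholder data are used here.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder feature matrix and labels (illustrative random data only).
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 40))
y = rng.choice(["positive", "neutral", "negative"], size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

param_grid = {
    "hidden_layer_sizes": [(13, 13, 14), (50, 50, 50), (100, 100, 100)],
    "activation": ["tanh", "relu", "logistic"],
    "solver": ["sgd", "adam", "lbfgs"],
    "learning_rate": ["constant", "adaptive"],  # used by the sgd solver
}

search = GridSearchCV(MLPClassifier(max_iter=500, random_state=42),
                      param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```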
Based on the results of parameter tuning trials, the accuracy value increased by 1.1–3.0%. The best accuracy value was obtained with the use of the following parameters:
Author Contributions
Conceptualization, E.A., A.S. and D.O.S.; methodology, E.A., A.S. and D.O.S.; software, E.A.; validation, E.A.; formal analysis, A.S. and D.O.S.; investigation, E.A., A.S. and D.O.S.; resources, E.A.; data curation, E.A. and A.S.; writing—original draft preparation, E.A.; writing—review and editing, A.S. and D.O.S.; visualization, E.A.; supervision, A.S. and D.O.S.; project administration, A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia and Universitas Surabaya under Grant Pakerti ITS-Ubaya-no 1777/PKS/ITS/2023.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are openly available in FigShare at doi: 10.6084/m9.figshare.28917455.
Acknowledgments
We want to thank Institut Teknologi Sepuluh Nopember and Universitas Surabaya for funding this research under Grant Pakerti ITS-Ubaya.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Tesaurus Meme. Available online: https://kbbi.kemdikbud.go.id/ (accessed on 13 July 2021).
- Mertiya, M.; Singh, A. Combining naive bayes and adjective analysis for sentiment detection on Twitter. In Proceedings of the 2016 International Conference on Inventive Computation Technologies, Coimbatore, India, 26–27 August 2016; Volume 2, pp. 1–6. [Google Scholar]
- Pozzi, F.A.; Fersini, E.; Messina, E.; Liu, B. Challenges of Sentiment Analysis in Social Networks: An Overview; Elsevier Inc.: Amsterdam, The Netherlands, 2017; ISBN 9780128044384. [Google Scholar]
- D’Andrea, A.; Ferri, F.; Grifoni, P.; Guzzo, T. Approaches, Tools and Applications for Sentiment Analysis Implementation. Int. J. Comput. Appl. 2015, 125, 26–33. [Google Scholar] [CrossRef]
- Chaudhry, H.N.; Javed, Y.; Kulsoom, F.; Mehmood, Z.; Khan, Z.I.; Shoaib, U.; Janjua, S.H. Sentiment analysis of before and after elections: Twitter data of U.S. election 2020. Electronics 2021, 10, 2082. [Google Scholar] [CrossRef]
- Hashim, R.; Omar, B.; Saeed, N.; Ba-Anqud, A.; Al-Samarraie, H. The Application of Sentiment Analysis in Tourism Research: A Brief Review. IJBTS Int. J. Bus. Tour. Appl. Sci. 2020, 8, 51–60. [Google Scholar]
- Hemamalini, U.; Perumal, S. Literature review on sentiment analysis. Int. J. Sci. Technol. Res. 2020, 9, 2009–2013. [Google Scholar] [CrossRef]
- Kumar, A.; Sharma, A. Systematic Literature Review on Opinion Mining of Big Data for Government Intelligence. Webology 2017, 14, 6–47. [Google Scholar]
- Almosawi, M.M.; Mahmood, S.A. Lexicon-Based Approach for Sentiment Analysis to Student Feedback. Webology 2022, 19, 6971–6989. [Google Scholar]
- Meena, G.; Mohbey, K.K.; Kumar, S. Sentiment analysis on images using convolutional neural networks based Inception-V3 transfer learning approach. Int. J. Inf. Manag. Data Insights 2023, 3, 100174. [Google Scholar] [CrossRef]
- Meena, G.; Mohbey, K.K.; Indian, A.; Kumar, S. Sentiment Analysis from Images using VGG19 based Transfer Learning Approach. Procedia Comput. Sci. 2022, 204, 411–418. [Google Scholar] [CrossRef]
- Moung, E.G.; Wooi, C.C.; Sufian, M.M.; On, C.K.; Dargham, J.A. Ensemble-based face expression recognition approach for image sentiment analysis. Int. J. Electr. Comput. Eng. 2022, 12, 2588–2600. [Google Scholar] [CrossRef]
- Patel, K.; Mehta, D.; Mistry, C.; Gupta, R.; Tanwar, S.; Kumar, N.; Alazab, M. Facial Sentiment Analysis Using AI Techniques: State-of-the-Art, Taxonomies, and Challenges. IEEE Access 2020, 8, 90495–90519. [Google Scholar] [CrossRef]
- Aksoy, O.E.; Güney, S. Sentiment Analysis from Face Expressions Based on Image Processing Using Deep Learning Methods. J. Adv. Res. Nat. Appl. Sci. 2022, 8, 736–752. [Google Scholar] [CrossRef]
- de Paula, D.; Alexandre, L.A. Facial Emotion Recognition for Sentiment Analysis of Social Media Data. In Proceedings of the 10th Iberian Conference, IbPRIA 2022, Aveiro, Portugal, 4–6 May 2022; Volume 13256, pp. 207–217. [Google Scholar] [CrossRef]
- Gan, Y.; Xu, L.; Song, S.; Tao, X. Context transformer with multiscale fusion for robust facial emotion recognition. Pattern Recognit. 2025, 167, 111720. [Google Scholar] [CrossRef]
- So, J.; Han, Y. Facial Landmark-Driven Keypoint Feature Extraction for Robust Facial Expression Recognition. Sensors 2025, 25, 3762. [Google Scholar] [CrossRef]
- Avvaru, A.; Vobilisetty, S. BERT at SemEval-2020 Task 8: Using BERT to analyse meme emotions. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona, Spain, 12–13 December 2020; pp. 1094–1099. [Google Scholar] [CrossRef]
- Aggarwal, A.; Sharma, V.; Trivedi, A.; Yadav, M.; Agrawal, C.; Singh, D.; Mishra, V.; Gritli, H. Two-Way Feature Extraction Using Sequential and Multimodal Approach for Hateful Meme Classification. Complexity 2021, 2021, 510253. [Google Scholar] [CrossRef]
- Thapa, S.; Shah, A.; Jafri, F.; Naseem, U.; Razzak, I. A Multi-Modal Dataset for Hate Speech Detection on Social Media: Case-study of Russia-Ukraine Conflict. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE), Abu Dhabi, United Arab Emirates, 7–December 2022; pp. 1–6. [Google Scholar]
- Prakash, T.N.; Aloysius, A. Hybrid Approaches Based Emotion Detection in Memes Sentiment Analysis. Int. J. Eng. Res. Technol. 2021, 14, 151–155. [Google Scholar]
- Rosid, M.A.; Siahaan, D.; Saikhu, A. Sarcasm Detection in Indonesian-English Code-Mixed Text Using Multihead Attention-Based Convolutional and Bi-Directional GRU. IEEE Access 2024, 12, 137063–137079. [Google Scholar] [CrossRef]
- Suryawanshi, S.; Chakravarthi, B.R.; Arcan, M.; Buitelaar, P. Multimodal Meme Dataset (MultiOFF) for Identifying Offensive Content in Image and Text. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, Marseille, France, 11–16 May 2020; pp. 32–41. [Google Scholar]
- Kumar, A.; Srinivasan, K.; Cheng, W.H.; Zomaya, A.Y. Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data. Inf. Process. Manag. 2020, 57, 102141. [Google Scholar] [CrossRef]
- Elahi, K.T.; Rahman, T.B.; Shahriar, S.; Sarker, S. Explainable Multimodal Sentiment Analysis on Bengali Memes. In Proceedings of the 26th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 13–15 December 2023; pp. 1–6. [Google Scholar]
- Arwoko, H.; Yuniarno, E.M.; Purnomo, M.H. Hand Gesture Recognition Based on Keypoint Vector. In Proceedings of the IES 2022 International Electronics Symposium: Energy Development for Climate Change Solution and Clean Energy Transition, Surabaya, Indonesia, 9–11 August 2022; pp. 530–533. [Google Scholar]
- Hangaragi, S.; Singh, T.; N, N. Face Detection and Recognition Using Face Mesh and Deep Neural Network. Procedia Comput. Sci. 2023, 218, 741–749. [Google Scholar] [CrossRef]
- Albadani, B.; Shi, R.; Dong, J.; Al-Sabri, R.; Moctard, O.B. Transformer-Based Graph Convolutional Network for Sentiment Analysis. Appl. Sci. 2022, 12, 1316. [Google Scholar] [CrossRef]
- Liang, Y.; Han, D.; He, Z.; Kong, B.; Wen, S. SceEmoNet: A Sentiment Analysis Model with Scene Construction Capability. Appl. Sci. 2025, 15, 8588. [Google Scholar] [CrossRef]
- Lu, X.; Ni, Y.; Ding, Z. Cross-Modal Sentiment Analysis Based on CLIP Image-Text Attention Interaction. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 895–903. [Google Scholar] [CrossRef]
- Kumar, A.; Garg, G. Sentiment analysis of multimodal twitter data. Multimed. Tools Appl. 2019, 78, 24103–24119. [Google Scholar] [CrossRef]
- Asmawati, E.; Saikhu, A.; Siahaan, D. Sentiment Analysis of Text Memes: A Comparison among Supervised Machine Learning Methods. In Proceedings of the 9th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Jakarta, Indonesia, 6–7 October 2022; pp. 349–354. [Google Scholar] [CrossRef]
- Rosid, M.A.; Siahaan, D.; Saikhu, A. Improving Sarcasm Detection in Mash-Up Language Through Hybrid Pretrained Word Embedding. In Proceedings of the IEEE 8th International Conference on Software Engineering and Computer Systems (ICSECS), Penang, Malaysia, 25–27 August 2023; pp. 58–63. [Google Scholar] [CrossRef]
- Nguyen, B.H.; Huynh, V.N. Textual analysis and corporate bankruptcy: A financial dictionary-based sentiment approach. J. Oper. Res. Soc. 2022, 73, 102–121. [Google Scholar] [CrossRef]
- Ghobakhloo, M.; Ghobakhloo, M. Design of a personalized recommender system using sentiment analysis in social media (case study: Banking system). Soc. Netw. Anal. Min. 2022, 12, 1–16. [Google Scholar] [CrossRef]
- George, O.A.; Ramos, C.M.Q. Sentiment analysis applied to tourism: Exploring tourist-generated content in the case of a wellness tourism destination. Int. J. Spa Wellness 2024, 7, 139–161. [Google Scholar] [CrossRef]
- Yan, W.; Zhou, L.; Qian, Z.; Xiao, L.; Zhu, H. Sentiment Analysis of Student Texts Using the CNN-BiGRU-AT Model. Sci. Program. 2021, 2021, 8405623. [Google Scholar] [CrossRef]
- Andrian, B.; Simanungkalit, T.; Budi, I.; Wicaksono, A.F. Sentiment Analysis on Customer Satisfaction of Digital Banking in Indonesia. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 466–473. [Google Scholar] [CrossRef]
- Zhang, J.; Liu, J.; Ding, W.; Wang, Z. Object aroused emotion analysis network for image sentiment analysis. Knowl.-Based Syst. 2024, 286, 111429. [Google Scholar] [CrossRef]
- Ekman, P. Membaca Emosi Orang; Think Jogjakarta: Yogyakarta, Indonesia, 2003; Available online: https://dlibrary.ittelkom-pwt.ac.id/index.php?p=show_detail&id=1896&keywords= (accessed on 13 July 2021).
- Guermazi, R.; Abdallah, T.B.; Hammami, M. Facial micro-expression recognition based on accordion spatio-temporal representation and random forests. J. Vis. Commun. Image Represent. 2021, 79, 103183. [Google Scholar] [CrossRef]
- Cruz, R.F.; Koch, S.C. Issues of Validity and Reliability in the Use of Movement Observations and Scales. In Dance/Movement Therapists in Action: A Working Guide to Research Options; Cruz, R.F., Berrol, C., Eds.; Charles C. Thomas: Springfield, IL, USA, 2004; pp. 45–68. [Google Scholar]
- Soon, H.F.; Amir, A.; Azemi, S.N. An Analysis of Multiclass Imbalanced Data Problem in Machine Learning for Network Attack Detections. J. Phys. Conf. Ser. 2021, 1755, 012030. [Google Scholar] [CrossRef]