A Review on MAS-Based Sentiment and Stress Analysis User-Guiding and Risk-Prevention Systems in Social Network Analysis
Abstract
1. Introduction
2. Problem Statement
- User state detection: refers to the automatic detection by the system of some aspect of a user's state. It has many variations, but since this work aims at risk prevention, we focus on detecting users' sentiment and stress levels, i.e., sentiment analysis and stress analysis. These two techniques automatically detect the sentiment and stress level of users by applying different methods (e.g., machine learning, natural language processing (NLP)) to different data sources (e.g., text, audio, images), using either one data modality or several (see the sketch after this list).
- Risk prevention: addresses the prevention of risks that users of a system can suffer. In our case, we focus on preventing the risks that users face while navigating online social environments such as SNSs. This can be done by detecting the user state and giving users feedback or recommendations when necessary, which is the focus of the present survey; it can also be addressed, for example, by analyzing relations between users and warning them about dangerous people.
- Recommendation: encompasses the techniques a system uses to give users recommendations about different matters (e.g., what to buy, when to invest, whom to trust). User state detection can be used by the system to make recommendations, and recommendations can in turn be used to prevent risks to which users could be exposed.
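As a simple illustration of the kind of user state detection surveyed here, the following sketch scores a post with a sentiment and a stress value using small hand-made lexicons. It is a minimal, assumed example: the lexicons, scores, and function names are illustrative and do not come from any of the reviewed systems.

```python
# Minimal sketch of lexicon-based user state detection on text.
# The lexicons and their scores are illustrative assumptions, not
# resources used by any of the surveyed works.

SENTIMENT_LEXICON = {"happy": 2, "great": 3, "sad": -2, "awful": -3}
STRESS_LEXICON = {"deadline": 2, "panic": 4, "relaxed": -3, "calm": -2}

def score_text(text: str, lexicon: dict) -> int:
    """Sum the lexicon scores of every known token in the text."""
    return sum(lexicon.get(token, 0) for token in text.lower().split())

def detect_user_state(post: str) -> dict:
    """Return a coarse sentiment and stress estimate for one post."""
    return {
        "sentiment": score_text(post, SENTIMENT_LEXICON),
        "stress": score_text(post, STRESS_LEXICON),
    }

print(detect_user_state("great day but the deadline causes panic"))
# {'sentiment': 3, 'stress': 6}
```

The systems reviewed in Section 3 replace such toy lexicons with trained models or published resources, and may fuse several data modalities.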
3. Detection Approaches Review
- Is the work relevant to the section reviewing it? (e.g., a work included in the subsection on sentiment analysis on audio data has to focus on sentiment analysis techniques applied to audio).
- Does the work use a technique or techniques to address the problem that differ from those used in the works previously reviewed in the same section? (e.g., dictionary-based methods differ from machine learning models for sentiment classification). Works using the same technique as a previously reviewed work are admissible if they address a different problem (e.g., one work addresses emotion detection using big data, which differs from performing emotion detection on stored data or on single users).
- Does the work provide experiments and data that give insight into the usefulness of the technique for the problem it addresses? (e.g., the accuracy, precision, and recall of the proposed technique or method on the addressed problem, or data on the significance of an effect, such as the effect of emotion on keystroke latency).
3.1. Sentiment Analysis on Text
3.2. Visual Sentiment Analysis
3.3. Sentiment Analysis on Audio Data
3.4. Multi-Modal Fusion Sentiment Analysis
3.5. CBR for Sentiment Analysis
3.6. Stress Analysis and Keystroke Dynamics
4. MAS-Based Prevention and Recommendation Systems Review
5. Discussion
6. Conclusions and Future Lines of Work
- Using current technologies in user state detection for creating improved user guiding systems in online environments.
- Combining different technologies compatible with the architecture of SNSs with emotion detection techniques and testing their effectiveness in real-life scenarios as guiding or recommendation systems.
- Improving user state detection techniques.
Author Contributions
Funding
Conflicts of Interest
References
- Vanderhoven, E.; Schellens, T.; Vanderlinde, R.; Valcke, M. Developing educational materials about risks on social network sites: A design based research approach. Educ. Technol. Res. Dev. 2016, 64, 459–480. [Google Scholar] [CrossRef]
- De Moor, S.; Dock, M.; Gallez, S.; Lenaerts, S.; Scholler, C.; Vleugels, C. Teens and ICT: Risks and Opportunities. Belgium: TIRO. 2008. Available online: http://www.belspo.be/belspo/fedra/proj.asp?l=en&COD=TA/00/08 (accessed on 25 April 2020).
- Livingstone, S.; Haddon, L.; Görzig, A.; Ólafsson, K. Risks and Safety on the Internet: The Perspective of European Children: Full Findings and Policy Implications From the EU Kids Online Survey of 9–16 Year Olds and Their Parents in 25 Countries. EU Kids Online, Deliverable D4. EU Kids Online Network, London, UK. 2011. Available online: http://eprints.lse.ac.uk/33731/ (accessed on 25 April 2020).
- Vanderhoven, E.; Schellens, T.; Valcke, M. Educating teens about the risks on social network sites. Media Educ. Res. J. 2014, 43, 123–131. [Google Scholar] [CrossRef] [Green Version]
- Christofides, E.; Muise, A.; Desmarais, S. Risky disclosures on Facebook: The effect of having a bad experience on online behavior. J. Adolesc. Res. 2012, 27, 714–731. [Google Scholar] [CrossRef]
- George, J.M.; Dane, E. Affect, emotion, and decision making. Organ. Behav. Hum. Decis. Process. 2016, 136, 47–55. [Google Scholar] [CrossRef]
- Thelwall, M. TensiStrength: Stress and relaxation magnitude detection for social media texts. Inf. Process. Manag. 2017, 53, 106–121. [Google Scholar] [CrossRef] [Green Version]
- Thelwall, M.; Buckley, K.; Paltoglou, G.; Cai, D.; Kappas, A. Sentiment strength detection in short informal text. J. Am. Soc. Inf. Sci. Technol. 2010, 61, 2544–2558. [Google Scholar] [CrossRef] [Green Version]
- Shoumy, N.J.; Ang, L.M.; Seng, K.P.; Rahaman, D.M.; Zia, T. Multimodal big data affective analytics: A comprehensive survey using text, audio, visual and physiological signals. J. Netw. Comput. Appl. 2020, 149, 102447. [Google Scholar] [CrossRef]
- Zhang, C.; Zeng, D.; Li, J.; Wang, F.Y.; Zuo, W. Sentiment analysis of Chinese documents: From sentence to document level. J. Am. Soc. Inf. Sci. Technol. 2009, 60, 2474–2487. [Google Scholar] [CrossRef]
- Hu, M.; Liu, B. Mining opinion features in customer reviews. In Proceedings of the 19th National Conference on Artificial Intelligence (AAAI’04), San Jose, CA, USA, 25–29 July 2004; AAAI Press: Menlo Park, CA, USA, 2004; pp. 755–760. [Google Scholar]
- Jakob, N.; Gurevych, I. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, Cambridge, MA, USA, 9–11 October 2010; Association for Computational Linguistics: Stroudsburg, PA, USA, 2010; pp. 1035–1045. [Google Scholar]
- Lu, B.; Ott, M.; Cardie, C.; Tsou, B.K. Multi-aspect sentiment analysis with topic models. In Proceedings of the 2011 11th IEEE International Conference on Data Mining Workshops, Vancouver, BC, Canada, 11 December 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 81–88. [Google Scholar] [CrossRef] [Green Version]
- Popescu, A.M.; Etzioni, O. Extracting product features and opinions from reviews. In Natural Language Processing and Text Mining; Springer: Berlin/Heidelberg, Germany, 2007; pp. 9–28. [Google Scholar]
- Nasukawa, T.; Yi, J. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture, Sanibel Island, FL, USA, 23–25 October 2003; ACM: New York, NY, USA, 2003; pp. 70–77. [Google Scholar] [CrossRef]
- Li, F.; Han, C.; Huang, M.; Zhu, X.; Xia, Y.J.; Zhang, S.; Yu, H. Structure-aware review mining and summarization. In Proceedings of the 23rd International Conference on Computational Linguistics, Beijing, China, 23–27 August 2010; Association for Computational Linguistics: Stroudsburg, PA, USA, 2010; pp. 653–661. [Google Scholar]
- Borth, D.; Ji, R.; Chen, T.; Breuel, T.; Chang, S.F. Large-scale visual sentiment ontology and detectors using adjective noun pairs. In Proceedings of the 21st ACM International Conference on Multimedia, Barcelona, Spain, 21–25 October 2013; ACM: New York, NY, USA, 2013; pp. 223–232. [Google Scholar] [CrossRef] [Green Version]
- You, Q.; Luo, J.; Jin, H.; Yang, J. Robust image sentiment analysis using progressively trained and domain transferred deep networks. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI’15), Austin, TX, USA, 25–30 January 2015; pp. 381–388. [Google Scholar]
- Kaushik, L.; Sangwan, A.; Hansen, J.H. Sentiment extraction from natural audio streams. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 8485–8489. [Google Scholar]
- Deb, S.; Dandapat, S. Emotion classification using segmentation of vowel-like and non-vowel-like regions. IEEE Trans. Affect. Comput. 2019, 10, 360–373. [Google Scholar] [CrossRef]
- Deng, J.; Zhang, Z.; Marchi, E.; Schuller, B. Sparse autoencoder-based feature transfer learning for speech emotion recognition. In Proceedings of the 2013 Humaine Association Conference on Affective Computing And Intelligent Interaction, Geneva, Switzerland, 2–5 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 511–516. [Google Scholar] [CrossRef] [Green Version]
- Mairesse, F.; Polifroni, J.; Di Fabbrizio, G. Can prosody inform sentiment analysis? experiments on short spoken reviews. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 5093–5096. [Google Scholar]
- Kanluan, I.; Grimm, M.; Kroschel, K. Audio-visual emotion recognition using an emotion space concept. In Proceedings of the 2008 16th European Signal Processing Conference, Lausanne, Switzerland, 25–29 August 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–5. [Google Scholar]
- Chen, J.; Hu, B.; Xu, L.; Moore, P.; Su, Y. Feature-level fusion of multimodal physiological signals for emotion recognition. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 395–399. [Google Scholar]
- Nicolaou, M.A.; Gunes, H.; Pantic, M. Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space. IEEE Trans. Affect. Comput. 2011, 2, 92–105. [Google Scholar] [CrossRef] [Green Version]
- Hossain, M.S.; Muhammad, G.; Alhamid, M.F.; Song, B.; Al-Mutib, K. Audio-visual emotion recognition using big data towards 5G. Mob. Netw. Appl. 2016, 21, 753–763. [Google Scholar] [CrossRef]
- Zhou, F.; Jianxin Jiao, R.; Linsey, J.S. Latent customer needs elicitation by use case analogical reasoning from sentiment analysis of online product reviews. J. Mech. Des. 2015, 137, 071401. [Google Scholar] [CrossRef]
- Ohana, B.; Delany, S.J.; Tierney, B. A case-based approach to cross domain sentiment classification. In Proceedings of the International Conference on Case-Based Reasoning, Lyon, France, 3–6 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 284–296. [Google Scholar]
- Ceci, F.; Goncalves, A.L.; Weber, R. A model for sentiment analysis based on ontology and cases. IEEE Lat. Am. Trans. 2016, 14, 4560–4566. [Google Scholar] [CrossRef]
- Vizer, L.M.; Zhou, L.; Sears, A. Automated stress detection using keystroke and linguistic features: An exploratory study. Int. J. Hum. Comput. Stud. 2009, 67, 870–886. [Google Scholar] [CrossRef]
- Feldman, R. Techniques and applications for sentiment analysis. Commun. ACM 2013, 56, 82–89. [Google Scholar] [CrossRef]
- Schouten, K.; Frasincar, F. Survey on aspect-level sentiment analysis. IEEE Trans. Knowl. Data Eng. 2016, 28, 813–830. [Google Scholar] [CrossRef]
- Ji, R.; Cao, D.; Zhou, Y.; Chen, F. Survey of visual sentiment prediction for social media analysis. Front. Comput. Sci. 2016, 10, 602–611. [Google Scholar] [CrossRef]
- Li, L.; Cao, D.; Li, S.; Ji, R. Sentiment analysis of Chinese micro-blog based on multi-modal correlation model. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 4798–4802. [Google Scholar] [CrossRef]
- Gunawardhane, S.D.; De Silva, P.M.; Kulathunga, D.S.; Arunatileka, S.M. Non invasive human stress detection using key stroke dynamics and pattern variations. In Proceedings of the 2013 International Conference on Advances in ICT for Emerging Regions (ICTer), Colombo, Sri Lanka, 11–15 December 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 240–247. [Google Scholar]
- Lee, P.M.; Tsui, W.H.; Hsiao, T.C. The influence of emotion on keyboard typing: An experimental study using auditory stimuli. PLoS ONE 2015, 10, e0129056. [Google Scholar] [CrossRef] [Green Version]
- Bradley, M.M.; Lang, P.J. The International Affective Digitized Sounds (2nd Edition; IADS-2): Affective Ratings of Sounds and Instruction Manual; Tech. Rep. B-3; University of Florida: Gainesville, FL, USA, 2007. [Google Scholar]
- Matsiola, M.; Dimoulas, C.; Kalliris, G.; Veglis, A.A. Augmenting user interaction experience through embedded multimodal media agents in social networks. In Information Retrieval and Management: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2018; pp. 1972–1993. [Google Scholar] [CrossRef]
- Rosaci, D. CILIOS: Connectionist inductive learning and inter-ontology similarities for recommending information agents. Inf. Syst. 2007, 32, 793–825. [Google Scholar] [CrossRef]
- Buccafurri, F.; Comi, A.; Lax, G.; Rosaci, D. Experimenting with certified reputation in a competitive multi-agent scenario. IEEE Intell. Syst. 2015, 31, 48–55. [Google Scholar] [CrossRef]
- Rosaci, D.; Sarnè, G.M. Multi-agent technology and ontologies to support personalization in B2C E-Commerce. Electron. Commer. Res. Appl. 2014, 13, 13–23. [Google Scholar] [CrossRef]
- Cissée, R.; Albayrak, S. An agent-based approach for privacy-preserving recommender systems. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, Honolulu, HI, USA, 14–18 May 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 1–8. [Google Scholar]
- Singh, A.; Sharma, A. MAICBR: A multi-agent intelligent content-based recommendation system. In Information and Communication Technology for Sustainable Development; Springer: Berlin/Heidelberg, Germany, 2018; pp. 399–411. [Google Scholar] [CrossRef]
- Villavicencio, C.; Schiaffino, S.; Diaz-Pace, J.A.; Monteserin, A.; Demazeau, Y.; Adam, C. A MAS approach for group recommendation based on negotiation techniques. In Proceedings of the International Conference on Practical Applications of Agents and Multi-Agent Systems, Sevilla, Spain, 1–3 June 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 219–231. [Google Scholar] [CrossRef]
- Rincon, J.; de la Prieta, F.; Zanardini, D.; Julian, V.; Carrascosa, C. Influencing over people with a social emotional model. Neurocomputing 2017, 231, 47–54. [Google Scholar] [CrossRef]
- Upadhyay, A.; Chaudhari, A.; Ghale, S.; Pawar, S. Detection and prevention measures for cyberbullying and online grooming. In Proceedings of the 2017 International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4.
- Aguado, G.; Julian, V.; Garcia-Fornes, A.; Espinosa, A. A Multi-Agent System for guiding users in on-line social environments. Eng. Appl. Artif. Intell. 2020, 94, 103740. [Google Scholar] [CrossRef]
- Aguado, G.; Julián, V.; García-Fornes, A.; Espinosa, A. Using Keystroke Dynamics in a Multi-Agent System for User Guiding in Online Social Networks. Appl. Sci. 2020, 10, 3754. [Google Scholar] [CrossRef]
- Camara, M.; Bonham-Carter, O.; Jumadinova, J. A multi-agent system with reinforcement learning agents for biomedical text mining. In Proceedings of the 6th ACM Conference on Bioinformatics, Computational Biology and Health Informatics, Atlanta, GA, USA, 9–12 September 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 634–643. [Google Scholar] [CrossRef]
- Almashraee, M.; Diaz, D.M.; Unland, R. Sentiment Classification of on-line Products Based on Machine Learning Techniques and Multi-agent Systems Technologies. In Proceedings of the Industrial Conference on Data Mining-Workshops, Berlin, Germany, 13–20 July 2012; IBaI Publishing: Fockendorf, Germany, 2012; pp. 128–136. [Google Scholar]
- Lombardo, G.; Fornacciari, P.; Mordonini, M.; Tomaiuolo, M.; Poggi, A. A multi-agent architecture for data analysis. Future Internet 2019, 11, 49. [Google Scholar] [CrossRef] [Green Version]
- Schweitzer, F.; Garcia, D. An agent-based model of collective emotions in online communities. Eur. Phys. J. B 2010, 77, 533–545. [Google Scholar] [CrossRef]
- El Fazziki, A.; Ennaji, F.Z.; Sadiq, A.; Benslimane, D.; Sadgal, M. A multi-agent based social CRM framework for extracting and analysing opinions. J. Eng. Sci. Technol. 2017, 12, 2154–2174. [Google Scholar]
- Bordera, J. PESEDIA. Red Social Para Concienciar en Privacidad. Master’s Thesis, Universitat Politècnica de València, Valencia, Spain, 2016. [Google Scholar]
Reference | Metrics | Values
---|---|---
[10] | Accuracy, precision, recall and F1 | 0.6778–0.7961, 0.628–0.8905, 0.5931–0.9231 and 0.5549–0.8591
[11] | Precision and recall | 0.56–0.79 and 0.67–0.80
[12] | Precision, recall and F1 | Single-domain approach: 0.622–0.792, 0.414–0.661 and 0.497–0.702. Cross-domain approach: 0.565–0.678, 0.273–0.435 and 0.36–0.518
[13] | Accuracy, precision, recall and F1 | Multi-aspect sentence labeling: 0.477–0.83, 0.126–0.969, 0.179–1 and 0.148–0.887
[13] | L1 error, aspect, review and MAP@10 | Multi-aspect rating prediction with indirect supervision: 0.238–0.645, −0.149–0.715, 0.454–0.846 and 0.129–0.429
[13] | L1 error | Supervised multi-aspect rating prediction: 0.554–1.071
[14] | Precision and recall | Explicit feature extraction task: 93–95% and 73–80%. Finding word semantic orientation: 72–88% and 55–83%. Extracting opinion phrases: 79% and 76%. Extracting opinion phrase polarity: 86% and 89%
[15] | Precision and recall | 0.75–0.955 and 0.2–0.286
[16] | Precision, recall and F1 | 0.761–0.918, 0.37–0.82 and 0.498–0.837
[17] | Accuracy | Using visual features: 0.49–0.83. Using visual and text features: 0.48–0.88
[18] | Precision, recall, accuracy and F1 | 0.691–0.797, 0.729–0.905, 0.667–0.783 and 0.722–0.846
[19] | Accuracy | 0.68–0.82
[20] | Accuracy | 0.452–0.851
[21] | Unweighted Average Recall (UAR) | 0.579–0.627
[22] | Accuracy (over automatic speech recognition output and human transcripts) | Using text and acoustic features: 0.825 and 0.81. Using only text: 0.75 and 0.844
[23] | Mean linear error and correlation between estimates and the reference, for valence, activation and dominance | Acoustic emotion estimation: 0.13, 0.16, 0.14 and 0.53, 0.82, 0.78. Visual emotion estimation (eyes region): 0.18, 0.19, 0.13 and 0.57, 0.58, 0.57. Visual emotion estimation (lips region): 0.18, 0.19, 0.14 and 0.58, 0.62, 0.53. Decision-level fusion of acoustic and visual emotion estimation: 0.14, 0.12, 0.09 and 0.7, 0.84, 0.8
[24] | Accuracy for arousal and valence | Unimodal (electroencephalogram): 0.65–0.7563 and 0.73–0.74. Unimodal (peripheral physiological signals): 0.6689–0.6905 and 0.6689–0.6905. Feature-level fusion: 0.7844–0.8563 and 0.8279–0.8398. Decision-level fusion: 0.66–0.73 and 0.5–0.64
[25] | Root Mean Squared Error (RMSE), Correlation Coefficient (COR) and Sign Agreement Metric (SAGR), for valence and arousal | Single-cue prediction: 0.17–0.22, 0.444–0.712, 0.648–0.841 and 0.24–0.29, 0.411–0.586, 0.681–0.764. Support vector regression: 0.21–0.25, 0.146–0.551, 0.538–0.740 and 0.26–0.27, 0.388–0.419, 0.667–0.716. Feature-level fusion: 0.19–0.21, 0.583–0.681, 0.733–0.856 and 0.24–0.28, 0.461–0.589, 0.685–0.763. Model-level fusion: 0.16–0.19, 0.653–0.782, 0.830–0.892 and 0.22–0.26, 0.479–0.639, 0.637–0.8. Output-associative fusion: 0.15–0.18, 0.664–0.796, 0.825–0.907 and 0.21–0.24, 0.536–0.642, 0.719–0.8
[26] | Accuracy and speed-up factor (big data tools) | 0.831 and 75.55
[27] | Precision and recall (sentiment analysis) | 0.751 and 0.758
[27] | Reduction of product attribute redundancy | 0.41
[28] | Accuracy | 0.62–0.7258
[29] | Accuracy, precision, recall and F1 | 0.85–0.91, 0.85–0.899, 0.847–0.92 and 0.85–0.91
[7] | Exact matches with the strength label, matches 1 level away (±1 of the label), correlation and MAD | Stress strength detection: 0.31–0.579, 0.791–0.939, 0.329–0.505 and 0.502–0.893. Relaxation strength detection: 0.482–0.717, 0.916–0.963, 0.332–0.466 and 0.338–0.515
[30] | Accuracy | Detecting cognitive stress: 0.521–0.615 (raw data), 0.641–0.75 (normalized data). Detecting physical stress: 0.541–0.625 (raw data), 0.542–0.625 (normalized data)
[30] | False Acceptance Rate (FAR) and False Rejection Rate (FRR) | Detecting cognitive stress: 0.375–0.75 and 0.208–0.479 (raw data), 0.187–0.333 and 0.229–0.479 (normalized data). Detecting physical stress: 0.437–0.771 and 0.125–0.375 (raw data), 0.25–0.437 and 0.333–0.5 (normalized data)
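The figures reported above are mostly standard classification metrics. As a reminder of how they are derived, here is a generic sketch (not the evaluation code of any reviewed work) computing them from the counts of a binary confusion matrix:

```python
# Generic computation of accuracy, precision, recall and F1 from binary
# confusion-matrix counts; the example counts below are made up.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# 80 true positives, 20 false positives, 10 false negatives, 90 true negatives.
print(classification_metrics(80, 20, 10, 90))
# {'accuracy': 0.85, 'precision': 0.8, 'recall': 0.888..., 'f1': 0.842...}
```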
Reference | Technique |
---|---|
[10] | Sentence-level sentiment analysis based on lexicon and syntactic structures. Document-level sentiment analysis using a weighted sum of the polarities of sentences within the document. |
[11] | Part-of-Speech (POS) tagging. Frequent feature generation using association rule mining. Feature pruning to remove meaningless and redundant features. Opinion word extraction, extracting words that are adjacent to frequent features and are adjectives that modify the feature. Infrequent feature identification using opinion words to find the nearest noun/noun phrase to the opinion word.
[12] | Conditional Random Fields (CRFs) for extracting aspects. Features: the token string, the POS tag of the token, a label for the existence of a short dependency path between the token and opinion expressions, a word distance label (whether the token appears at the closest word distance to an opinion expression), and an opinion sentence label (whether the token appears in a sentence with opinion expressions).
[13] | Multi-aspect sentence labeling: Latent Dirichlet Allocation (LDA), Multi-grain LDA (MG-LDA), Segmented Topic Model (STM) and Local LDA models weakly supervised using seed words, supervised Support Vector Machine (SVM) and a majority baseline that assigns the most common aspect label. For multi-aspect rating prediction with indirect supervision: LDA, MG-LDA, STM, and local LDA weakly supervised with seed words are used to label sentences with aspects, and a Support Vector Regression (SVR) model is trained with the combined vectors of each entity and their overall ratings. Supervised multi-aspect rating prediction: Perceptron Ranking (PRank) and linear SVR are used, trained with and without features derived from LDA, MG-LDA, STM, and local LDA, that do not make use of seed words, and trained with unigram baseline features. |
[14] | Feature extraction: the algorithm extracts frequent noun phrases from parsed reviews. It also examines opinion phrases associated with explicit features in order to extract implicit properties. Finding opinion phrases: if there is an explicit feature in a sentence, the algorithm applies extraction rules to find opinion phrases. Each head word, together with its modifiers, is returned as a potential opinion phrase. Opinion phrases polarity detection: relaxation labeling. |
[15] | POS tagging and shallow parser. Syntactic parsing and sentiment lexicon used for sentiment analysis and relating sentiment expressions to subjects. |
[16] | CRFs. Joint extraction of opinions and object features. |
[17] | Linear SVMs and Logistic Regression (LR) for learning Adjective Noun Pair (ANP) detectors for visual sentiment analysis. Visual Sentiment Ontology (VSO) based on ANPs. SentiBank, a visual concept detection library based on the VSO that can detect 1200 ANPs in images using the ANP detectors.
[18] | Progressive Convolutional Neural Network (PCNN) for image sentiment classification trained using weakly labeled data. Data from the output of the model is used to fine-tune it. Previously trained CNNs are fine-tuned with a small set of manually labeled images for addressing domain transfer. |
[19] | Automatic Speech Recognition (ASR) for obtaining transcriptions of videos. POS tagging for extracting text-based sentiment features. A Maximum Entropy (ME) model with feature tuning for sentiment classification. |
[20] | Algorithm for emotion classification using region switching between Vowel-like Regions (VLR) and non-VLRs from audio data. |
[21] | Sparse autoencoder-based feature transfer learning method, using a single-layer autoencoder to find a common structure in small target data and then applying such structure to reconstruct source data. |
[22] | ASR engine: The AT&T Watson speech recognizer was used to convert spoken review summaries to text. Linear interpolation of the three Katz’s backoff language models. In the experiments, Ada-boost with acoustic features combined with a text-based prediction feature was used and compared to LR, SVM, and C4.5 decision tree. |
[23] | Acoustic emotion estimation: SVR. Visual emotion estimation: SVR. Decision-level fusion: weighted linear combination of the acoustic and visual estimations for a given sentence using different weights for the estimation of valence, activation and dominance. |
[24] | Hidden Markov Models (HMMs) using multi-modal feature sets. Electroencephalogram (EEG) from the central nervous system and three kinds of Peripheral physiological signals (PERI) from the peripheral nervous system (Respiration or RSP, Electromyogram or EMG, and Skin Temperature or TMP) are used. Fusion at feature level is performed, and at decision level employing six different strategies, classification is performed using feature fusion, decision fusion, and non-fusion models. |
[25] | SVR and bidirectional Long Short-Term Memory Neural Networks (BLSTM-NN), both used for single-cue prediction of valence and arousal. BLSTM-NNs are used for feature-level and model-level fusion, as well as for output-associative fusion of different cues (facial expressions, shoulder cues, and audio cues). Model-level fusion fuses the output of BLSTM-NNs predicting valence or arousal using different cues (one cue per BLSTM-NN) and uses it as the input for another BLSTM-NN. Output-associative fusion fuses the output of BLSTM-NNs predicting valence and BLSTM-NNs predicting arousal using different cues (again one cue per BLSTM-NN).
[26] | Audio-visual big data emotion recognition system, using Multi-directional Regression (MDR) features for speech and Weber Local Descriptor (WLD) features for face images. SVM classifiers are used for each modality and decision-level fusion using Bayesian sum rule. |
[27] | Fuzzy SVMs for sentiment analysis on customer reviews of products. Case-based Reasoning (CBR) for generating ordinary and extraordinary use cases, from the sentiment labeled product attributes obtained from the SVMs. |
[28] | CBR to compare text documents and different sentiment lexicons for sentiment classification associated to the cases. |
[29] | Domain ontology and POS tagging for sentiment analysis, using CBR for reusing past cases of sentiment detection on text. |
[7] | Algorithmic approach using a lexicon of stress and relaxation terms to detect stress and relaxation magnitude in text. The value predicted for a sentence is based on the score of the highest stress or relaxation term found within that sentence. The strength of a text with more than one sentence is computed as the highest value from any constituent sentence. Corrections such as negation of stress terms and spelling correction are applied.
[30] | Decision Tree (DT), SVM, k-Nearest Neighbor (kNN), AdaBoost (using DecisionStump as a base classifier), and Artificial Neural Networks (ANNs) are used. A DT was used to select features as input for the other methods.
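To make the lexicon-based scoring scheme described for [7] concrete, the sketch below gives each sentence the score of its strongest stress term and gives a multi-sentence text the highest of its sentence scores. The term scores and the one-token negation rule are illustrative assumptions, not the published TensiStrength lexicon or its full correction pipeline.

```python
# Minimal sketch of the scoring scheme described for [7]: a sentence
# receives the score of its strongest stress term, and a multi-sentence
# text receives the highest sentence score. Terms and the toy negation
# rule are assumptions for illustration only.

STRESS_TERMS = {"stressed": 4, "overwhelmed": 5, "tense": 3, "worried": 3}

def sentence_stress(sentence: str) -> int:
    tokens = sentence.lower().replace(",", " ").split()
    best = 0
    for i, token in enumerate(tokens):
        score = STRESS_TERMS.get(token, 0)
        # Toy negation correction: a preceding "not" cancels the term.
        if score and i > 0 and tokens[i - 1] == "not":
            score = 0
        best = max(best, score)
    return best

def text_stress(text: str) -> int:
    """Highest stress score found in any constituent sentence."""
    sentences = [s for s in text.split(".") if s.strip()]
    return max((sentence_stress(s) for s in sentences), default=0)

print(text_stress("I am not stressed. Still, I feel overwhelmed."))  # 5
```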
Reference | Dataset or Datasets | Partitions |
---|---|---|
[10] | Euthanasia dataset: 851 Chinese articles on “euthanasia”, manually labeled into 502 positive and 349 negative articles. AmazonCN dataset: 458,522 reviews from six categories (book, music, movie, electrical appliance, digital product, and camera), labeled according to Amazon user’s five-star rating into 310,390 positive and 29,540 negative reviews. | Euthanasia dataset: Standard 10-fold cross-validation was performed. AmazonCN dataset: Up to 200 positive and 200 negative randomly selected reviews of each product category as the training dataset, and up to 500 positive and 500 negative randomly selected reviews of each product category as the test dataset. |
[11] | Customer reviews from Amazon.com and C|net.com about five products (2 digital cameras, 1 DVD player, 1 mp3 player, and 1 cellular phone). 100 reviews for each product. A person extracted features manually for evaluation, resulting in 79, 96, 67, 57, and 49 manual features for Digital camera1, Digital camera2, Cellular phone, Mp3 player, and DVD player, respectively. | The data was used in the proposed system to perform feature extraction and compare it to the manually extracted features. |
[12] | Four datasets annotated with individual opinion target instances at the sentence level. Movies: reviews for 20 movies from the Internet Movie Database (1829 documents containing 24,555 sentences), annotated with opinion target/opinion expression pairs. Web-services: reviews for two web services collected from epinions.com (234 documents containing 6091 sentences). Cars: reviews of cars (336 documents containing 10,969 sentences). Cameras: blog postings regarding digital cameras (234 documents containing 6091 sentences). For the movies, web-services, cars, and cameras datasets, in order: sentences with targets: 21.4%, 22.4%, 51.1% and 54.0%; sentences with opinions: 21.4%, 22.4%, 53.5% and 56.1%. | As development data for the CRF model, 29 documents from the movies dataset, 23 documents from the web-services dataset, and 15 documents from the cars and cameras datasets were used. 10-fold cross-validation was used in single-domain (single dataset) experiments. In the cross-domain experiments, the system is trained on the complete set of data from one or several datasets and tested on all the data of a dataset not used in training.
[13] | OpenTable: 73,495 reviews (29,596 after excluding excessively long and short reviews) and their associated overall, food, service, and ambiance aspect ratings for all restaurants in the New York/Tri-State area appearing on OpenTable.com. Not labeled; CitySearch: 652 restaurant reviews from CitySearch.com. Each sentence is manually labeled with one of six aspects: food, service, ambiance, price, anecdotes, or miscellaneous; TripAdvisor: 66,512 hotel reviews. Each review is labeled with an overall rating and ratings for 7 aspects: value, room, location, cleanliness, check-in/front desk, service, and business services. | Multi-aspect sentence labeling: for evaluation, 1490 singly-labeled sentences from the annotated portion of the CitySearch corpus were used. Inference is performed on all 652 documents of CitySearch; multi-aspect rating prediction with indirect supervision: the sentences of the OpenTable and TripAdvisor datasets are labeled with aspects using weakly supervised topic models. All reviews for each entity (hotel or restaurant) are combined into a single review, and aspect ratings are obtained by averaging the overall/aspect ratings for each combined review. 5-fold cross-validation is performed; supervised multi-aspect rating prediction: 5-fold cross-validation on subsets of the OpenTable and TripAdvisor data.
[14] | Two sets of 1307 reviews downloaded from tripadvisor.com for hotels and amazon.com for scanners. Two annotators labeled a set of 450 feature extractions from the algorithm as correct or incorrect. The annotators extracted explicit features from 800 review sentences (400 for each domain); Word semantic dataset: 13,841 sentences and 538 previously extracted features; Opinion phrase dataset: 550 sentences containing previously extracted features. The sentences were annotated with opinion phrases corresponding to the known features and with opinion polarity. | Explicit feature extraction: the algorithm was evaluated on the two sets from TripAdvisor and Amazon. Finding word semantic orientation: the algorithm was evaluated on the Word semantic dataset. Extracting opinion phrases and opinion phrase polarity detection: the algorithm was evaluated on the Opinion phrase dataset.
[15] | Benchmark Corpus: 175 samples of subject terms within context text. Contains 118 favorable sentiment samples and 58 unfavorable samples; Open Test Corpus: 2000 samples related to camera reviews. Half the samples are labeled favorable or unfavorable and the other half neutral; 6415 web pages with 16,862 subject references, 1618 news articles with 5600 subject references, 1198 pharmaceutical web pages with 3804 subject references. | The system was directly used with data from the datasets. |
[16] | Movies dataset: 500 reviews about 5 movies, containing 2207 sentences; Products dataset: 601 reviews about 4 products, containing 2533 sentences. Both datasets are labeled manually by humans. Labels for object features, positive opinions, negative opinions, and the object feature-opinion pairs are given for all sentences. | Each dataset is split into 5 parts; four are used for training and one for testing.
[17] | Flickr dataset: 150,034 images and videos with 3,138,795 tags; YouTube dataset: 166,342 images and videos with 3,079,526 tags; Amazon Mechanical Turk (AMT) experiment: randomly sampled images of 200 Adjective Noun Pair (ANP) concepts from the Flickr images, manually labeled by AMT crowdsource; Twitter Images dataset: Tweets containing images crawled using popular hashtags. Three labeling runs using AMT, namely image-based, text-based, and joint text-image based are performed. The dataset includes 470 positive tweets and 133 negative tweets over 21 hashtags; ArtPhotos dataset: ArtPhotos retrieved from DeviantArt.com. Contains 807 images from 8 emotion categories. | Training dataset with Flickr ANP labeled images: 80% of pseudo positive images of each ANP and twice as many negative images. Test datasets (full and reduced test sets): both use 20% of pseudo positive samples of a given ANP as positive test samples. The full test set includes 20% pseudo positive samples from each of the other ANPs (except those with the same adjective or noun) as negative samples. The reduced test set contains twice as many negative samples for each ANP as the positive samples. 5 versions of the reduced test set are created varying the negative samples. |
[18] | Half million Flickr images weakly labeled with one ANP; Image tweets dataset: Tweets that contain images. The total is 1269 images. AMT is used to generate sentiment labels. Three sub-datasets are created: 581 positive and 301 negative images where 5 labelers agree, 689 positive and 427 negative images where at least 4 labelers agree, and 769 positive and 500 negative images where at least 3 labelers agree. | Randomly chosen 90% of the images from Flickr are the training dataset. The remaining 10% images are used as the testing dataset in the experiments with CNN and PCNN without domain transfer. 5-fold cross-validation is performed with Twitter images, using the training images to fine-tune a pre-trained model on Flickr images and the testing images to validate this model. |
[19] | Amazon product reviews dataset: contains review comments about a large range of products including books, movies, electronic goods, apparel, and so forth; Pros and Cons and Comparative Sentence Set databases, containing lists of positive and negative sentiment words/phrases; 28 manually rated YouTube videos (16 negative and 12 positive sentiment) containing expressive speakers sharing their opinions on a wide variety of topics including movies, products, and social issues. | From the combination of the Amazon product reviews, Pros and Cons, and Comparative Sentence Set datasets, 800,000 reviews were extracted for training and 250,000 reviews were used for evaluation.
[20] | EMODB database: ten professional speakers for ten German sentences, 535 speech files, seven emotions (anger, anxiety, boredom, disgust, happiness, neutral and sadness), recorded at 48 kHz; IEMOCAP database: audio-visual data in English, with only the audio track considered for this work, five male and five female speakers, six emotions of the IEMOCAP database considered (anger, excited, frustration, happiness, neutral and sadness), recorded at 16 kHz; FAU AIBO database: spontaneous emotional speech, containing recordings of 51 German children (21 male and 30 female) aged 10–13 years interacting with a pet robot. Contains 9959 training chunks and 8257 testing chunks of approximately 1.7 s in length. Chunks are categorized into five emotions (anger, emphatic, neutral, positive, and rest). | A leave-one-speaker-out cross-validation protocol was used for the EMODB, IEMOCAP, and FAU AIBO databases. Additionally, with the FAU AIBO database, a predefined partition of one child's data is used for validation purposes, and the remaining children's data is used for training.
[21] | FAU AEC database: based on the FAU AIBO emotion corpus, which contains recordings of children interacting with a pet robot in German speech. The training set contains 6601 instances of positive and 3358 of negative valence, and the test set 5792 positive and 2465 negative; TUM Audio-Visual Interest Corpus (TUM AVIC), Berlin Emotional Speech Database (EMO-DB), eNTERFACE, Speech Under Simulated and Actual Stress (SUSAS), and the “Vera am Mittag” (VAM) database. The age, language, kind of speech, emotion type, number of positive and negative utterances, and sampling rate are: children, German, variable, natural, 5823, 12,393 and 16 kHz for FAU AIBO; adults, English, variable, natural, 553, 2449 and 44 kHz for TUM AVIC; adults, German, fixed, acted, 352, 142 and 16 kHz for EMO-DB; adults, English, fixed, induced, 855, 422 and 16 kHz for eNTERFACE; adults, English, fixed, natural, 1616, 1977 and 8 kHz for SUSAS; adults, German, variable, natural, 876, 71 and 16 kHz for VAM. | FAU AEC is chosen as the target set and the rest are used as source sets.
[22] | Corpus of 3268 textual review summaries produced by 384 annotators, resulting in 1055 rated as negative, 1600 as positive, and 613 as mixed; CitySearch dataset: 87,000 reviews describing more than 6000 restaurant businesses from the citysearch.com website; AMT text dataset: short text reviews summarized by Amazon turkers; GoodRec dataset: a set of short restaurant and bar recommendations mined from the goodrec.com website; short restaurant reviews dataset: 84 participants gave short reviews of restaurants by phone, answering questions, rating the restaurants, and giving a short free-form review, resulting in 52 positive and 32 negative reviews. | The text-based classification is done by training on the complete set of textual review summaries. The speech recognition models were trained using the CitySearch, AMT, and GoodRec datasets. The models for sentiment analysis from acoustic features were trained on the short restaurant reviews dataset, performing 10-fold cross-validation.
[23] | VAM corpus: consists of audio-visual spontaneous speech, labeled with emotion by human listeners. Signals were sampled at 16 kHz with 16-bit resolution, and facial image sequences were taken at a rate of 25 fps. | The VAM corpus was used for all the experiments. 245 utterances of 20 speakers were used for acoustic emotion estimation, performing 10-fold cross-validation. For the visual emotion estimation, 1600 images were used, again performing 10-fold cross-validation. For audio-visual fusion emotion estimation, 234 sentences and 1600 images were used.
[24] | Database for Emotion Analysis using Physiological Signals (DEAP): physiological signals Electroencephalogram (EEG) and Peripheral physiological signals (PERI) are used. EEG was recorded from 32 active electrodes (32 channels). PERI (8 channels) from Peripheral Nervous System (PNS) include Galvanic Skin Response (GSR), Skin Temperature (TMP), Blood Volume Pulse (BVP), Respiration (RSP), Electromyogram (EMG) collected from zygomaticus major and trapezius muscles, and horizontal and vertical Electrooculograms (hEOG and vEOG). The signals were recorded while playing 41 different music clips, and self-report of valence and arousal was done by the participants. Ten participants did 400 self-reports on valence and 400 on arousal. | For the feature-level fusion method, the DEAP database was used in the training and the most significant feature sets were selected for testing. Nested five-fold cross-validation was used in the testing phase. For decision-level, features were extracted in the training as well by performing nested five-fold cross-validation. DEAP database was used for all the experiments. |
[25] | Sensitive Artificial Listener Database (SAL-DB): spontaneous audio-visual interaction between a human and an operator with different personalities (happy, gloomy, angry, and pragmatic). The sampling rate is 25 fps for video and 16 kHz for audio. A set of coders annotated the recordings in the continuous valence-arousal 2D space, confined to [−1,1], although not all the data in the database has been labeled. | For validation, a subset of the SAL-DB consisting of 134 audiovisual segments (a total of 30,042 video frames) obtained by automatic segmentation was used. Subject-dependent leave-one-out cross-validation was used for the experiments.
[26] | eNTERFACE database: 42 non-professional subjects (81% male and 19% female) reacting to 5 sentences for each of the emotions anger, disgust, fear, happiness, sadness, and surprise. The average sample length was 3 seconds. The Berlin emotional speech database and the Kanade-Cohn emotional face database were used as single-modality databases; their subjects were well trained in acting out the emotions, and the Berlin database contains one extra emotion category. Additionally, a massive amount of continuous video (voice/speech and facial video), generated by a video camera or smartphone camera while a person uses a social network or smart health monitoring service, was compiled into five datasets of different sizes. | In the experiments without big data tools, four-fold validation was performed on the eNTERFACE, Berlin, and Kanade-Cohn databases. In the experiment with big data tools, the five datasets of continuous video were used, with a block replication number of three and a block size of 64 MB as the default Hadoop settings. The authors varied these settings in the experiments to examine how performance varies with respect to cluster size, block size, and block replication number.
[27] | Kindle Fire HD 7 reviews. Unstructured review data collected from 2 October 2012 to 20 November 2013. User-provided ratings. | A ten-fold cross-validation method is adopted for fine-tuning the parameters of the fuzzy Support Vector Machine (SVM) models and for sentiment prediction, using the Kindle Fire reviews data. |
[28] | 6 text user review datasets: IMDB dataset of film reviews (2000 reviews); hotel reviews (2874 reviews); Amazon apparel product reviews (2072 reviews); Amazon music product reviews (2034 reviews); Amazon book product reviews (566 reviews); Amazon electronic product reviews (5902 reviews). All of the datasets have an equal number of positive and negative labeled reviews. | 6 distinct case bases are created by training on the datasets of all but one of the domains; each case base is then used to classify documents from the held-out domain, which is the domain not used for populating the case base of the Case-Based Reasoning (CBR) module.
[29] | 1999 reviews about digital cameras labeled by users of Amazon with sentiment polarity. 1000 positive reviews and 999 negative reviews; 1991 reviews about DVD movies labeled by users of Amazon with sentiment polarity. 996 positive reviews and 995 negative reviews. | Leave-one-out cross-validation on the two datasets (cameras and movies) was performed. |
[7] | Development corpus: a collection of 3000 stress-related tweets, manually classified by the author for stress and relaxation. These tweets were identified by monitoring a set of stress and relaxation keywords over a week; six corpora of English short text messages extracted from Twitter.com (tweets) and coded by humans with stress and relaxation strengths. They were extracted from Twitter by monitoring certain keywords over a one-month period in July 2015. The corpora are: Common short words (608 tweets); Emotion terms (619 tweets); Insults (180 tweets); Opinions (476 tweets); Stress terms (655 tweets); Transport (528 tweets). | The development corpus was used for assigning term strengths, identifying missing terms, and refining the sentiment term scores. The performance of the supervised version of TensiStrength was evaluated using 10-fold cross-validation run 30 times on the six English short text datasets, with the average scores across the 30 iterations recorded.
[30] | 24 participants, with ages ranging from 18 to 56; 14 were female, 10 male, and 22 right-handed. They were asked to type on a keyboard after cognitive and physical stress tasks. Sessions were spread over at least 3 days, ranging from 3 to 22 per participant, with a median of 9 days. The data collected was information about the event (key up or down), the time stamp (10 ms resolution), and the key code. After each task, participants self-reported their stress level. | A baseline condition, a control condition, and 2 experimental conditions were used. Baseline: 10 samples under no stress. Control: two samples under no stress. Experimental: participants completed either a cognitively or physically challenging task prior to providing a typing sample. The performance of each machine learning model was evaluated with three-fold cross-validation.
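Many of the evaluation protocols in the table above rely on k-fold cross-validation. The sketch below shows the general procedure with scikit-learn on a synthetic toy dataset; the classifier and the data are stand-ins, not the setups of the cited works.

```python
# Generic k-fold cross-validation, as used by many of the protocols
# above; the classifier and synthetic dataset are illustrative stand-ins.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Made-up toy data in place of the corpora listed in the table.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

clf = LinearSVC()  # stand-in for any of the reviewed classifiers
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")  # 10-fold CV
print(scores.mean(), scores.std())
```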
Reference | Problem to Address | Technique | Contributions of the Proposal |
---|---|---|---|
[38] | Enhancing communication between users in Social Network Sites (SNSs). | Multi-agent System (MAS) and agents working as mediators. | Enhanced user engagement and collaboration in SNSs. |
[39] | Collaborative filtering recommendation. | User ontology created by monitoring user behavior, and calculation of inter-ontology similarities. MAS that integrates the previous tasks, representing users as agents with an ontology representing their behavior. | Automatically modeling users by building ontologies from monitored behavior, and computing similarities between those ontologies for recommendation.
[40] | Trust and reputation in MAS. | MAS implementing agents that perform certified recommendations. Certification is achieved by using signed transactions, or transactions witnessed by other agents, as certificates. | Certified recommendations, and the possibility for the MAS to determine how much agents can be trusted as experts.
[41] | Business-to-customer (B2C) e-commerce activities. | XML-based MAS architecture with personalized user profiles. The profiles are built and updated by weighting activities performed in B2C processes. | Implementation of B2C e-commerce support using user profiles built from information about user actions in previous transactions.
[42] | Privacy-preserving recommendation systems. | Privacy-preserving protocol for information filtering processes that makes use of a MAS architecture and suitable filtering techniques (feature-based approaches and knowledge-based filtering). | The proposed approach provides information filtering while preserving privacy. An application of the proposal supporting users in planning entertainment-related activities is presented. |
[43] | Content-based recommendation, aiming to solve the new-user and overspecialization problems. | MAS architecture as a recommendation system. Semantic enhancement of user preferences through a domain ontology and semantic association discovery in the user profile database. | Addresses two known problems of content-based recommendation; experimental results show an improvement in positive feedback rate.
[44] | Group recommendation. | MAS approach based on negotiation techniques. A multilateral monotonic concession protocol is used to combine individual recommendations into a group recommendation. | Implementation of group recommendation using a MAS architecture and a multilateral protocol. Testing the proposed approach in the movies domain, users were found more evenly satisfied in the groups than with ranking aggregation. |
[45] | Detecting the social emotion of a group of entities and influencing them. | Social-emotional model computed using an Artificial Neural Network (ANN), based on the pleasure, arousal, and dominance (PAD) three-dimensional emotional space. Application of the model in a group of human-immersed agents. | ANN that computes the social emotion of a group of agents. Experiments show that predicting the emotion of a group of agents with the proposed model, and selecting the system's action by computing the distance to a target emotion (happiness), makes that distance diminish after a few iterations.
[46] | Cyber-bullying and online grooming prevention in SNSs through the use of different techniques. | Sentiment analysis on text by using different text mining modules, adult image detection using Skin Tone Pixels detection and message classification using Natural Language Processing (NLP) algorithms, through keyword search in the text. | Combination of different data analysis techniques including text and image analysis for prevention of user negative behaviors such as bullying and grooming. |
[47] | Prevention of negative outcomes (negative sentiment and high stress levels) in SNSs through decision-level fusion of sentiment and stress analysis on text. | Sentiment analysis, stress analysis, and combined analysis of sentiment and stress through decision-level fusion on text, implemented with ANNs. MAS architecture with agents integrating the different unimodal analyses, and an agent performing decision-level fusion and generating feedback to users in SNSs. | Combination of different data analysis techniques and a fusion technique with a MAS architecture for preventing negative outcomes in SNSs. Experiments with data from Twitter showing significant differences between the analyzers when predicting negative outcomes, and experiments with a real-life SNS.
[48] | Prevention of negative outcomes in SNSs, negative sentiment, and high stress levels through decision-level fusion of sentiment and stress analysis on text and keystroke dynamics data. | Extension of a MAS architecture that employs ANNs for sentiment and stress analysis on text with new ANNs performing sentiment and stress analysis on keystroke dynamics data. Design of different decision-level fusion methods employing sentiment and stress analysis on text and keystroke dynamics data. | Addition of analyzers performing sentiment and stress analysis on keystroke dynamics data to a MAS with analysis on text data. Experiments performed with data from Twitter exploring different decision-level fusion methods, and proposal of a novel rule-based feedback generation agent in the MAS, in accordance with the results of the experiments. |
[49] | Learning the sentiment associated with specific keywords from different data sources. | MAS architecture with agents implementing reinforcement learning algorithms, learning the sentiment associated with keywords, with each agent analyzing data from a different source. | Implements sentiment analysis on keywords by applying collective learning from different data sources and reinforcement learning algorithms in a MAS architecture. |
[50] | Sentiment analysis on different SNSs using user opinion to construct a collective sentiment as the opinion of a product. | MAS architecture with agents implementing naïve Bayes classification for performing sentiment analysis on user opinions from different SNSs. A final sentiment is calculated using a common blackboard. | Collective sentiment or opinion about a product computed using sentiment analysis on different SNSs with a MAS architecture. |
[51] | Design and implementation of an actor-based software library for building distributed data analysis applications. | Prototype library implemented using the ActoDeS software framework for the development of concurrent and distributed systems. The library implemented includes a MAS architecture and different implementations of five agent types, which are acquirer, preprocessor, engine, controller, and master. | Prototype of a library that provides a MAS architecture and agent implementations that wrap the different tasks of a data analysis application. |
[52] | Framework for understanding and predicting the emergence of collective emotions. | Framework built as an agent-based model, with agents modeled with individual emotion states and communication between agents. | Proposal of a framework for understanding and predicting collective emotions, based on interactions between agents that have individual emotional states.
[53] | Product opinion mining from SNS data using big data analysis. | MAS architecture including data extraction, analysis, management, and manager agents. It is implemented using JADEX, an agent architecture for representing mental states in JADE agents. Agents make use of Hadoop MapReduce for data processing and analysis, and HBase for data storage. The influence of the poster, knowledge about the topic, and sentiment analysis are computed over text messages in MapReduce. | Implementation of distributed data analysis using big data tools for opinion mining from SNSs.
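The decision-level fusion used by the guiding systems in [47,48] can be pictured with a small sketch: independent unimodal analyzers emit normalized scores, and a fusion agent combines them into a single risk decision that drives user feedback. The weights and threshold below are illustrative assumptions, not the fusion rules of the cited systems.

```python
# Minimal sketch of decision-level fusion of unimodal analyzer outputs,
# in the spirit of [47,48]. Weights and threshold are assumed values
# for illustration, not the published fusion rules.

def fuse_decisions(text_sentiment: float, text_stress: float,
                   keystroke_stress: float) -> str:
    """Weighted decision-level fusion of unimodal analyzer outputs.

    Inputs are normalized to [0, 1], where higher means more negative
    sentiment or a higher stress level.
    """
    weights = {"sentiment": 0.4, "stress": 0.4, "keystroke": 0.2}
    risk = (weights["sentiment"] * text_sentiment
            + weights["stress"] * text_stress
            + weights["keystroke"] * keystroke_stress)
    return "warn user" if risk > 0.6 else "no action"

# Negative text sentiment and high stress on both modalities -> warning.
print(fuse_decisions(0.9, 0.8, 0.7))  # warn user (risk = 0.82)
```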
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).