Towards Implementation of Emotional Intelligence in Human–Machine Collaborative Systems
Abstract
1. Introduction
- Self-awareness.
- Self-management.
- Social awareness.
- Relationship management.
- Electrocardiography (ECG), electrodermal activity (EDA) [22,23,24,25,26], and skin temperature sensors [27], used to assess the body’s physiological reactions related to stress, anxiety, agitation, cognitive load, etc. Heart rate variability has also been shown to be detectable with photoplethysmography (PPG) sensors integrated into wearable devices [28,29], as well as with traditional electrode-based technology integrated into plasters [30] or smart textiles [31,32] (a minimal HRV sketch follows this list);
- Electroencephalography (EEG) [33,34,35,36,37,38], used to capture cognitive processes even when a specific stimulus produces no behavioral reaction. The potential of this technology is promising, primarily through the development of brain–computer interfaces (BCIs) [39,40,41,42], including those with a direct brain connection [43];
- Equipment-intensive technologies for human state assessment, such as functional near-infrared spectroscopy (fNIRS) [44,45], which captures hemodynamics in different parts of the brain; magnetic resonance imaging (MRI) [46], which enables the creation of a digital twin of the human brain [47]; and electromyography (EMG) sensors [48,49,50], used for detecting, as well as activating, controlled activity of particular muscle groups.
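To make the heart rate variability (HRV) point above concrete, here is a minimal, illustrative sketch that computes the RMSSD metric from a series of inter-beat intervals such as those obtainable from a PPG wearable. It is not part of the study's pipeline; the function name and the sample interval values are assumptions.

```python
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """Root mean square of successive differences between inter-beat intervals (ms)."""
    diffs = np.diff(ibi_ms)                      # beat-to-beat changes
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical inter-beat intervals (ms), e.g., obtained from PPG peak detection
ibi_ms = np.array([812, 798, 845, 830, 795, 810, 850, 805], dtype=float)
print(f"RMSSD: {rmssd(ibi_ms):.1f} ms")          # one common short-term HRV measure
```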
2. Materials and Methods
2.1. Background
2.2. Research Design
2.2.1. Conceptual Design
2.2.2. Experimental Setup
2.2.3. Dataset
- Timestamp/stimuli/answer (user response)/performance/speed/reaction time—from the application manager.
- Timestamp/arousal/valence/attention/angry/disgust/fear/happy/neutral/sad/surprise—from the MorphCast platform.
- Data fusion of the above, matched by timestamp at the moment of the user response (a minimal sketch of such a merge follows this list).
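As an illustration of the timestamp-based fusion step, the sketch below shows one plausible pandas implementation; the paper performs this matching with its own tooling, so the file names, the column layout, and the one-second matching tolerance are assumptions.

```python
import pandas as pd

# Hypothetical exports; actual file names and formats are not specified in the paper
app = pd.read_csv("application_manager.csv", parse_dates=["timestamp"])
morph = pd.read_csv("morphcast.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the matching key
app = app.sort_values("timestamp")
morph = morph.sort_values("timestamp")

# For each user response, attach the latest MorphCast estimate at or before that moment,
# discarding matches more than one second old (the tolerance is an assumed choice)
fused = pd.merge_asof(app, morph, on="timestamp",
                      direction="backward",
                      tolerance=pd.Timedelta("1s"))
fused.to_csv("fused_dataset.csv", index=False)
```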
2.2.4. Experimental Protocol
3. Results
3.1. Results in Person-Independent Scenario
- Data extraction from all participants in the experiment (the sample).
- Combining the data (in RapidMiner—through the “Append” operator).
- Sorting the data in ascending order by the attribute “arousal”.
- Setting the values of the “success” attribute as follows: 1 on success and −1 on failure.
- Creating a new attribute, “cumulative performance”, and applying a cumulative (running-sum) function to the “success” attribute (taking values −1 and 1).
- Using the “cumulative performance” values to establish the dependence of performance on arousal (a minimal pandas sketch of these steps follows this list).
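The steps above were performed in RapidMiner. For readers who prefer code, the following pandas sketch is an assumed equivalent of the person-independent procedure; the file layout and the encoding of the “success” attribute are hypothetical.

```python
import numpy as np
import pandas as pd
from pathlib import Path

# Steps 1-2: extract each participant's records and combine them
# (the counterpart of RapidMiner's "Append" operator); file layout is hypothetical
frames = [pd.read_csv(f) for f in sorted(Path("participants").glob("*.csv"))]
data = pd.concat(frames, ignore_index=True)

# Step 3: sort all records in ascending order of arousal
data = data.sort_values("arousal", ignore_index=True)

# Step 4: encode success as +1 and failure as -1
# (assumes the attribute is stored as boolean or 0/1; adapt to the actual encoding)
data["success"] = np.where(data["success"].astype(bool), 1, -1)

# Step 5: cumulative performance as a running sum over the arousal-ordered records
data["cumulative_performance"] = data["success"].cumsum()

# Step 6: the shape of the cumulative curve vs. arousal shows where
# performance gains flatten or decline (plotting requires matplotlib)
data.plot(x="arousal", y="cumulative_performance")
```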
3.2. Results in Person-Specific Scenario
3.2.1. Valence–Arousal Relationship
3.2.2. Arousal–Performance Relationship
- Data extraction from all participants in the experiment.
- Sorting each participant’s data in ascending order by the attribute “arousal”.
- Setting the values of the “performance” attribute as follows: 1 on success and −1 on failure.
- Creating a new attribute, “cumulative performance rate”, and applying a cumulative (running-sum) function to the “performance” attribute (taking values −1 and 1).
- Creating a new attribute “id_” and setting unique values for each participant.
- Combining the data (in RapidMiner—by using the “Append” operator).
- Using the cumulative performance rate values to establish the dependence on arousal (see the per-participant sketch after this list).
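Again, the original processing was carried out in RapidMiner; the sketch below is an assumed pandas equivalent of the person-specific variant, in which sorting and cumulation are done within each participant before the data are appended. File names and the encoding of the “performance” attribute are hypothetical.

```python
import numpy as np
import pandas as pd
from pathlib import Path

per_participant = []
for idx, path in enumerate(sorted(Path("participants").glob("*.csv"))):
    df = pd.read_csv(path)
    # Sort each participant's records by arousal before cumulating
    df = df.sort_values("arousal", ignore_index=True)
    # +1 on success, -1 on failure (assumed boolean/0-1 source encoding)
    df["performance"] = np.where(df["performance"].astype(bool), 1, -1)
    # Running sum computed within the participant only
    df["cumulative_performance_rate"] = df["performance"].cumsum()
    # Unique participant identifier
    df["id_"] = idx
    per_participant.append(df)

# Combine the per-participant frames (the counterpart of "Append")
data = pd.concat(per_participant, ignore_index=True)

# Each participant's cumulative curve vs. arousal can now be inspected separately
for pid, group in data.groupby("id_"):
    print(pid, group["cumulative_performance_rate"].iloc[-1])
```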
3.2.3. Arousal–Attention Relationship
4. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Liu, F.; Liu, Y.; Shi, Y. Three IQs of AI systems and their testing methods. J. Eng. 2020, 2020, 566–571.
2. Salovey, P.; Mayer, J.D. Emotional Intelligence. Imagin. Cogn. Personal. 1990, 9, 185–211.
3. Goleman, D. Emotional Intelligence; Bantam Books: New York, NY, USA, 2005.
4. Hazarika, D.; Poria, S.; Zimmermann, R.; Mihalcea, R. Conversational transfer learning for emotion recognition. Inf. Fusion 2021, 65, 1–12.
5. Mohammadi Baghmolaei, R.; Ahmadi, A. TET: Text emotion transfer. Knowl.-Based Syst. 2023, 262, 110236.
6. You, L.; Han, F.; Peng, J.; Jin, H.; Claramunt, C. ASK-RoBERTa: A pretraining model for aspect-based sentiment classification via sentiment knowledge mining. Knowl.-Based Syst. 2022, 253, 109511.
7. Zhang, X.; Ma, Y. An ALBERT-based TextCNN-Hatt hybrid model enhanced with topic knowledge for sentiment analysis of sudden-onset disasters. Eng. Appl. Artif. Intell. 2023, 123, 106136.
8. Vekkot, S.; Gupta, D. Fusion of spectral and prosody modelling for multilingual speech emotion conversion. Knowl.-Based Syst. 2022, 242, 108360.
9. Leippold, M. Sentiment spin: Attacking financial sentiment with GPT-3. Financ. Res. Lett. 2023, 55, 103957.
10. Gupta, A.; Singhal, A.; Mahajan, A.; Jolly, A.; Kumar, S. Empirical Framework for Automatic Detection of Neural and Human Authored Fake News. In Proceedings of the 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 25–27 May 2022; IEEE: Madurai, India, 2022; pp. 1625–1633.
11. Malhotra, A.; Jindal, R. Deep learning techniques for suicide and depression detection from online social media: A scoping review. Appl. Soft Comput. 2022, 130, 109713.
12. Mi, C.; Xie, L.; Zhang, Y. Improving data augmentation for low resource speech-to-text translation with diverse paraphrasing. Neural Netw. 2022, 148, 194–205.
13. Korzekwa, D.; Lorenzo-Trueba, J.; Drugman, T.; Kostek, B. Computer-assisted pronunciation training—Speech synthesis is almost all you need. Speech Commun. 2022, 142, 22–33.
14. Zhang, H.; Yang, X.; Qu, D.; Li, Z. Bridging the cross-modal gap using adversarial training for speech-to-text translation. Digit. Signal Process. 2022, 131, 103764.
15. Lim, Y.; Gardi, A.; Pongsakornsathien, N.; Sabatini, R.; Ezer, N.; Kistan, T. Experimental characterisation of eye-tracking sensors for adaptive human-machine systems. Measurement 2019, 140, 151–160.
16. Shi, L.; Bhattacharya, N.; Das, A.; Gwizdka, J. True or false? Cognitive load when reading COVID-19 news headlines: An eye-tracking study. In Proceedings of the CHIIR ’23: ACM SIGIR Conference on Human Information Interaction and Retrieval, Austin, TX, USA, 19–23 March 2023; ACM: Austin, TX, USA, 2023; pp. 107–116.
17. Erdogan, R.; Saglam, Z.; Cetintav, G.; Karaoglan Yilmaz, F.G. Examination of the usability of Tinkercad application in educational robotics teaching by eye tracking technique. Smart Learn. Environ. 2023, 10, 27.
18. Li, S.; Duffy, M.C.; Lajoie, S.P.; Zheng, J.; Lachapelle, K. Using eye tracking to examine expert-novice differences during simulated surgical training: A case study. Comput. Hum. Behav. 2023, 144, 107720.
19. Fernandes, A.S.; Murdison, T.S.; Proulx, M.J. Leveling the Playing Field: A Comparative Reevaluation of Unmodified Eye Tracking as an Input and Interaction Modality for VR. IEEE Trans. Visual. Comput. Graphics 2023, 29, 2269–2279.
20. Shadiev, R.; Li, D. A review study on eye-tracking technology usage in immersive virtual reality learning environments. Comput. Educ. 2023, 196, 104681.
21. Pan, H.; Xie, L.; Wang, Z. C3DBed: Facial micro-expression recognition with three-dimensional convolutional neural network embedding in transformer model. Eng. Appl. Artif. Intell. 2023, 123, 106258.
22. Sung, G.; Bhinder, H.; Feng, T.; Schneider, B. Stressed or engaged? Addressing the mixed significance of physiological activity during constructivist learning. Comput. Educ. 2023, 199, 104784.
23. Campanella, S.; Altaleb, A.; Belli, A.; Pierleoni, P.; Palma, L. A Method for Stress Detection Using Empatica E4 Bracelet and Machine-Learning Techniques. Sensors 2023, 23, 3565.
24. Chen, K.; Han, J.; Baldauf, H.; Wang, Z.; Chen, D.; Kato, A.; Ward, J.A.; Kunze, K. Affective Umbrella—A Wearable System to Visualize Heart and Electrodermal Activity, towards Emotion Regulation through Somaesthetic Appreciation. In Proceedings of the AHs ’23: Augmented Humans Conference, Glasgow, UK, 12–14 March 2023; ACM: Glasgow, UK, 2023; pp. 231–242.
25. Sagastibeltza, N.; Salazar-Ramirez, A.; Martinez, R.; Jodra, J.L.; Muguerza, J. Automatic detection of the mental state in responses towards relaxation. Neural Comput. Appl. 2023, 35, 5679–5696.
26. Stržinar, Ž.; Sanchis, A.; Ledezma, A.; Sipele, O.; Pregelj, B.; Škrjanc, I. Stress Detection Using Frequency Spectrum Analysis of Wrist-Measured Electrodermal Activity. Sensors 2023, 23, 963.
27. Castro-García, J.A.; Molina-Cantero, A.J.; Gómez-González, I.M.; Lafuente-Arroyo, S.; Merino-Monge, M. Towards Human Stress and Activity Recognition: A Review and a First Approach Based on Low-Cost Wearables. Electronics 2022, 11, 155.
28. Mach, S.; Storozynski, P.; Halama, J.; Krems, J.F. Assessing mental workload with wearable devices—Reliability and applicability of heart rate and motion measurements. Appl. Ergon. 2022, 105, 103855.
29. Ngoc-Thang, B.; Tien Nguyen, T.M.; Truong, T.T.; Nguyen, B.L.-H.; Nguyen, T.T. A dynamic reconfigurable wearable device to acquire high quality PPG signal and robust heart rate estimate based on deep learning algorithm for smart healthcare system. Biosens. Bioelectron. X 2022, 12, 100223.
30. Wang, Z.; Matsuhashi, R.; Onodera, H. Towards wearable thermal comfort assessment framework by analysis of heart rate variability. Build. Environ. 2022, 223, 109504.
31. Goumopoulos, C.; Stergiopoulos, N.G. Mental stress detection using a wearable device and heart rate variability monitoring. In Edge-of-Things in Personalized Healthcare Support Systems; Elsevier: Amsterdam, The Netherlands, 2022; pp. 261–290.
32. Chen, Y.; Wang, Z.; Tian, X.; Liu, W. Evaluation of cognitive performance in high temperature with heart rate: A pilot study. Build. Environ. 2023, 228, 109801.
33. Du, H.; Riddell, R.P.; Wang, X. A hybrid complex-valued neural network framework with applications to electroencephalogram (EEG). Biomed. Signal Process. Control 2023, 85, 104862.
34. Soni, S.; Seal, A.; Mohanty, S.K.; Sakurai, K. Electroencephalography signals-based sparse networks integration using a fuzzy ensemble technique for depression detection. Biomed. Signal Process. Control 2023, 85, 104873.
35. Zali-Vargahan, B.; Charmin, A.; Kalbkhani, H.; Barghandan, S. Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals. Biomed. Signal Process. Control 2023, 85, 104806.
36. Liu, S.; Zhao, Y.; An, Y.; Zhao, J.; Wang, S.-H.; Yan, J. GLFANet: A global to local feature aggregation network for EEG emotion recognition. Biomed. Signal Process. Control 2023, 85, 104799.
37. Gong, L.; Li, M.; Zhang, T.; Chen, W. EEG emotion recognition using attention-based convolutional transformer neural network. Biomed. Signal Process. Control 2023, 84, 104835.
38. Quan, J.; Li, Y.; Wang, L.; He, R.; Yang, S.; Guo, L. EEG-based cross-subject emotion recognition using multi-source domain transfer learning. Biomed. Signal Process. Control 2023, 84, 104741.
39. Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Automatic Emotion Recognition from EEG Signals Using a Combination of Type-2 Fuzzy and Deep Convolutional Networks. Electronics 2023, 12, 2216.
40. Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals. Electronics 2023, 12, 2232.
41. Cardona-Álvarez, Y.N.; Álvarez-Meza, A.M.; Cárdenas-Peña, D.A.; Castaño-Duque, G.A.; Castellanos-Dominguez, G. A Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments. Sensors 2023, 23, 3763.
42. Li, X.; Chen, J.; Shi, N.; Yang, C.; Gao, P.; Chen, X.; Wang, Y.; Gao, S.; Gao, X. A hybrid steady-state visual evoked response-based brain-computer interface with MEG and EEG. Expert Syst. Appl. 2023, 223, 119736.
43. Musk, E.; Neuralink. An Integrated Brain-Machine Interface Platform with Thousands of Channels. J. Med. Internet Res. 2019, 21, e16194.
44. Zhou, L.; Wu, B.; Deng, Y.; Liu, M. Brain activation and individual differences of emotional perception and imagery in healthy adults: A functional near-infrared spectroscopy (fNIRS) study. Neurosci. Lett. 2023, 797, 137072.
45. Karmakar, S.; Kamilya, S.; Dey, P.; Guhathakurta, P.K.; Dalui, M.; Bera, T.K.; Halder, S.; Koley, C.; Pal, T.; Basu, A. Real time detection of cognitive load using fNIRS: A deep learning approach. Biomed. Signal Process. Control 2023, 80, 104227.
46. Roberts, G.S.; Hoffman, C.A.; Rivera-Rivera, L.A.; Berman, S.E.; Eisenmenger, L.B.; Wieben, O. Automated hemodynamic assessment for cranial 4D flow MRI. Magn. Reson. Imaging 2023, 97, 46–55.
47. Paul, G. From the visible human project to the digital twin. In Digital Human Modeling and Medicine; Elsevier: Amsterdam, The Netherlands, 2023; pp. 3–17.
48. Bangaru, S.S.; Wang, C.; Busam, S.A.; Aghazadeh, F. ANN-based automated scaffold builder activity recognition through wearable EMG and IMU sensors. Autom. Constr. 2021, 126, 103653.
49. Nicholls, B.; Ang, C.S.; Kanjo, E.; Siriaraya, P.; Mirzaee Bafti, S.; Yeo, W.-H.; Tsanas, A. An EMG-based Eating Behaviour Monitoring system with haptic feedback to promote mindful eating. Comput. Biol. Med. 2022, 149, 106068.
50. Tian, H.; Li, X.; Wei, Y.; Ji, S.; Yang, Q.; Gou, G.-Y.; Wang, X.; Wu, F.; Jian, J.; Guo, H.; et al. Bioinspired dual-channel speech recognition using graphene-based electromyographic and mechanical sensors. Cell Rep. Phys. Sci. 2022, 3, 101075.
51. Markov, M.; Ganchev, T. Intelligent human-machine interface framework. Int. J. Adv. Electron. Comput. Sci. 2022, 9, 41–46.
52. Markov, M. Workflow adaptation for intelligent human-machine interfaces. Comput. Sci. Technol. J. Tech. Univ. Varna 2022, 1, 51–58.
53. Markov, M.; Kalinin, Y.; Ganchev, T. A Task-related Adaptation in Intelligent Human-Machine Interfaces. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 24–26 November 2022; IEEE: Veliko Tarnovo, Bulgaria, 2022; pp. 1–4.
54. Anon. Emotion AI Provider. Facial Emotion Recognition MorphCast. 2023. Available online: https://www.morphcast.com (accessed on 7 September 2023).
55. O’Keeffe, K.; Hodder, S.; Lloyd, A. A comparison of methods used for inducing mental fatigue in performance research: Individualised, dual-task and short duration cognitive tests are most effective. Ergonomics 2020, 63, 1–12.
56. Anon. RapidMiner | Amplify the Impact of Your People, Expertise & Data. Available online: https://www.rapidminer.com (accessed on 7 September 2023).
| Name | Type | Min | Max | Average | Deviation |
|---|---|---|---|---|---|
| Av. Perf | Real | 0.400 | 1 | 0.879 | 0.116 |
| speed | Integer | 600 | 1600 | 985.401 | 176.338 |
| reaction time | Integer | 4 | 1380 | 612.346 | 165.508 |
| arousal | Real | −0.775 | 0.389 | −0.391 | 0.223 |
| valence | Real | −0.866 | 0.462 | −0.311 | 0.101 |
| attention | Real | 0 | 1 | 0.495 | 0.401 |
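For completeness, summary statistics of this kind (Min, Max, Average, Deviation) can be reproduced from the fused dataset with a few lines of pandas; the file and column names below follow the table and the earlier sketches and are assumptions.

```python
import pandas as pd

# Hypothetical file and column names, following the fusion sketch and the table above
fused = pd.read_csv("fused_dataset.csv")
cols = ["Av. Perf", "speed", "reaction time", "arousal", "valence", "attention"]
summary = fused[cols].agg(["min", "max", "mean", "std"]).T   # one row per attribute
print(summary)
```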