Advances in Social Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (15 July 2023) | Viewed by 2600

Special Issue Editors


Guest Editor
Engineering, Science and Technology Faculty, Valencian International University, 46002 Valencia, Spain
Interests: big data; data science; parallel and distributed computing; semantic web; mobile and sensing computing

Co-Guest Editor
Department of Electronics & Communication Engineering, PES University, 100 Feet Ring Road, BSK III Stage, Bangalore 560085, India
Interests: emotion recognition/synthesis for HRI (multimodal and sensor based); signal processing and control for robotic applications; image/video/speech processing

Special Issue Information

Dear Colleagues,

Social robotics is an active research area in artificial intelligence concerned with robots that can interact and communicate with one another, with humans, and with their environment, respecting social and cultural constraints as far as possible while performing their function.

Challenges in this area include:

  • managing the sensory capacity of robots so that they can gather the information needed to recognize human emotions;
  • improving machine learning techniques for social robots;
  • handling different modalities with efficient fusion methods;
  • discovering how DILMO (Distance, Identity, Location, Movement, Orientation) dimensions can be used to improve human–robot interaction (HRI); and
  • elucidating how social robots can be taught social behaviors based on proxemics.

This Special Issue aims to create synergy among these aspects and to provide a space to analyze, present, and discuss the latest research and development, as well as to propose theoretical foundations for combining solutions into efficient and suitable approaches to emotion and sentiment detection, including proxemics-based HRI for social robots.

Dr. Yudith Cardinale
Prof. Dr. Shikha Tripathi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sentiment analysis
  • emotion recognition
  • multimodal sources
  • proxemics
  • HRI
  • social robots

Published Papers (1 paper)


Research

27 pages, 33898 KiB  
Article
An Assessment of In-the-Wild Datasets for Multimodal Emotion Recognition
by Ana Aguilera, Diego Mellado and Felipe Rojas
Sensors 2023, 23(11), 5184; https://doi.org/10.3390/s23115184 - 30 May 2023
Cited by 3 | Viewed by 2120
Abstract

Multimodal emotion recognition implies the use of different resources and techniques for identifying and recognizing human emotions. A variety of data sources, such as faces, speech, voice, and text, have to be processed simultaneously for this recognition task. However, most techniques, based mainly on deep learning, are trained using datasets designed and built under controlled conditions, which makes them harder to apply in real contexts under real conditions. For this reason, the aim of this work is to assess a set of in-the-wild datasets and show their strengths and weaknesses for multimodal emotion recognition. Four in-the-wild datasets are evaluated: AFEW, SFEW, MELD and AffWild2. A previously designed multimodal architecture is used to perform the evaluation, and classical metrics such as accuracy and F1-score are used to measure performance in training and to validate quantitative results. However, the strengths and weaknesses of these datasets for various uses indicate that, by themselves, they are not appropriate for multimodal recognition because of their original purpose, e.g., face or speech recognition. We therefore recommend combining multiple datasets in order to obtain better results when new samples are processed and a good balance in the number of samples per class.
(This article belongs to the Special Issue Advances in Social Robotics)
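The abstract above refers to classical metrics, accuracy and F1-score, for validating multimodal emotion recognition. As a minimal from-scratch sketch (the emotion labels and function names here are illustrative, not taken from the paper), accuracy and macro-averaged F1 over emotion classes can be computed as:

```python
def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the true label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro average)."""
    classes = set(y_true) | set(y_pred)
    f1_scores = []
    for c in classes:
        # Per-class counts of true positives, false positives, false negatives
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical predictions over four emotion labels
y_true = ["happy", "sad", "happy", "angry"]
y_pred = ["happy", "happy", "happy", "angry"]
print(accuracy(y_true, y_pred))  # 0.75
print(macro_f1(y_true, y_pred))  # 0.6
```

The macro average weights every class equally, which is why the paper's recommendation of a good balance in the number of samples per class matters: on imbalanced in-the-wild datasets, macro-F1 can diverge sharply from plain accuracy.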
