Special Issue "Multimodal Conversational Interaction and Interfaces"
A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).
Deadline for manuscript submissions: closed (30 April 2019)
Dr. Catharine Oertel
École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Interests: group interaction; social signal processing; social robots; educational technologies
In face-to-face interaction, multiple communicative behaviors, including verbal information and nonverbal signals such as gestures, facial expressions, and gaze, are exchanged between the conversation participants. Participants also interpret the combination and co-occurrence of verbal and nonverbal behaviors to understand the conversation. In multiparty communication, where more than two people participate in the conversation, patterns of multimodal information become even more complex. Aiming to shed light on this complex process of face-to-face communication, studies on multimodal interaction have employed a variety of machine learning techniques. From an application point of view, implementing aspects of multimodal interaction is indispensable for enhancing human-agent/robot communication and for supporting human-human communication in computer-mediated environments.
The purpose of this special issue is to solicit contributions from both theoretical and practical perspectives, and to envision future directions for research on multimodal interaction and its application to multimodal conversational interfaces. We encourage authors to submit original research articles on topics including, but not limited to:
- Theoretical and computational models that shed light on the process and the characteristics of multimodal interaction
- New data-driven methodologies for analyzing large-scale multimodal interaction data
- Virtual agents and humanoid robots with multimodal and/or multiparty conversational functionality
- Communication support systems that facilitate multimodal and/or multiparty conversation in computer-mediated communication
- Tools and platforms that contribute to research on multimodal interaction and building novel multimodal conversational interfaces
Guest Editors
Prof. Yukiko I. Nakano
Prof. Toyoaki Nishida
Assoc. Prof. Gabriel Murray
Dr. Catharine Oertel
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access quarterly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- Verbal and nonverbal information
- Multiparty interaction
- Computational and statistical models
- Conversational virtual agents
- Communication robots
- Multimodal interfaces for human-human communication
- Social signal processing