Topical Collection "Multimodal User Interfaces Modelling and Development"

Editors

Guest Editor
Prof. Dr. Denis Lalanne

Human-IST Institute & Department of Informatics, University of Fribourg, Switzerland
Interests: human-computer interaction; multimodal interaction; information visualization; tangible interaction; gestural interaction; affective user interfaces; adaptive user interfaces
Guest Editor
Prof. Dr. Bruno Dumas

Faculty of Computer Science, University of Namur, Belgium
Interests: human–computer interaction; multimodal interaction; adaptation to user and context; cross-media systems

Topical Collection Information

Dear Colleagues,

Multimodal interfaces, relying on input as well as output modalities such as speech, gestures, or emotions, are considered one of the most promising tracks for next-generation user interfaces. However, most multimodal user interfaces created nowadays still rely on ad hoc development with little thorough modelling. Beyond the software engineering challenge of having to “reinvent the wheel” every time a new multimodal user interface is designed and developed, multimodality can greatly benefit from research in fields such as fusion and fission of interactive modalities, modelling of user and task, multimodal interaction modelling, requirements engineering, or software architectures for interactive multimodal systems. Beyond these challenges, integrating multimodal interfaces with other interaction styles, such as tangible interfaces or adaptive systems, is also an active field. This Topical Collection aims to provide a collection of high-quality research articles that address broad challenges in both the theoretical and applied aspects of multimodal interface development.

We welcome contributions on the following topics related to multimodal interaction:

  • Multimodal interaction modelling
  • User and task modelling
  • Fusion and fission of interactive modalities
  • Software architectures for interactive multimodal systems
  • User interfaces for multimodal interface development
  • Multimodal systems and applications
  • Novel individual recognizers for multimodal interaction

Prof. Dr. Denis Lalanne
Prof. Dr. Bruno Dumas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this collection. Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)

2018

Open Access Article: Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
Multimodal Technologies Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081
Received: 14 October 2018 / Revised: 16 November 2018 / Accepted: 4 December 2018 / Published: 6 December 2018
Abstract
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with the rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution that fulfills the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, and master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
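
As an informal illustration of the procedural fusion style referred to in the abstract, the sketch below shows a minimal augmented transition network in Python that fuses a spoken "put" command with two pointing gestures into a single action frame. It is a hypothetical example, not the authors' cATN reference implementation; the Token, Transition, and ATN names and the four-second temporal constraint are assumptions made purely for illustration.

    # Minimal sketch only: a toy augmented transition network (ATN) for
    # semantic fusion. NOT the cATN; all names here are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Optional

    @dataclass
    class Token:
        modality: str       # e.g. "speech" or "gesture"
        value: str          # recognized word or gesture target
        timestamp: float    # seconds; used for temporal-relation checks

    @dataclass
    class Transition:
        source: str
        target: str
        guard: Callable[[Token, Dict], bool]    # condition on token + registers
        action: Callable[[Token, Dict], None]   # augments the semantic registers

    class ATN:
        """Tokens drive state transitions; registers accumulate the action frame."""
        def __init__(self, start: str, finals: List[str], transitions: List[Transition]):
            self.state = start
            self.finals = finals
            self.transitions = transitions
            self.registers: Dict[str, object] = {}

        def feed(self, token: Token) -> Optional[Dict]:
            for t in self.transitions:
                if t.source == self.state and t.guard(token, self.registers):
                    t.action(token, self.registers)
                    self.state = t.target
                    break
            return dict(self.registers) if self.state in self.finals else None

    # "put that there": speech anchors the command, gestures fill object/location.
    transitions = [
        Transition("s0", "s1",
                   lambda tok, reg: tok.modality == "speech" and tok.value == "put",
                   lambda tok, reg: reg.update(action="put", t0=tok.timestamp)),
        Transition("s1", "s2",
                   lambda tok, reg: tok.modality == "gesture"
                   and tok.timestamp - reg["t0"] < 4.0,   # assumed temporal constraint
                   lambda tok, reg: reg.update(object=tok.value)),
        Transition("s2", "s3",
                   lambda tok, reg: tok.modality == "gesture",
                   lambda tok, reg: reg.update(location=tok.value)),
    ]

    atn = ATN("s0", ["s3"], transitions)
    for tok in [Token("speech", "put", 0.0),
                Token("gesture", "red_cube", 1.2),
                Token("gesture", "table", 2.5)]:
        frame = atn.feed(tok)
        if frame:
            print("fused action frame:", frame)

Running this prints a single fused frame (action, object, location) once the network reaches its final state; a full system would of course add the probabilistic and chronologically unsorted input handling that the article identifies as missing from such simple procedural approaches.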