Open Access Article

Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks

Chair for Human–Computer Interaction, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
Multimodal Technologies Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081
Received: 14 October 2018 / Revised: 16 November 2018 / Accepted: 4 December 2018 / Published: 6 December 2018
(This article belongs to the Special Issue Multimodal User Interfaces Modelling and Development)
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven beneficial for implementing semantic fusion. They are compatible with the rapid development cycles common in user interface development, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that no current solution fulfills the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept's feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation has been and continues to be used in various student projects, theses, and master-level courses. It is openly available and demonstrates that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
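To illustrate the procedural style of fusion the abstract describes, the following is a minimal, hypothetical sketch of a transition network that fuses speech and pointing input into an action (the classic "put that there" pattern). All state names, input kinds, and the `TransitionNetwork` class are illustrative assumptions; this is not the cATN implementation from the article.

```python
class TransitionNetwork:
    """Toy transition network for multimodal fusion (illustrative only).

    Inputs of different modalities advance the network along declared
    transitions; once a final state is reached, an action is derived.
    """

    def __init__(self):
        self.state = "start"
        self.slots = {}  # collected input values, keyed by (kind, state)
        # Transition table: (current state, input kind) -> next state.
        self.transitions = {
            ("start", "speech_verb"): "await_object",
            ("await_object", "pointing"): "await_target",
            ("await_target", "pointing"): "done",
        }

    def feed(self, kind, value):
        """Offer one input event; return True if it advanced the network."""
        nxt = self.transitions.get((self.state, kind))
        if nxt is None:
            return False  # input does not match any outgoing transition
        self.slots[(kind, self.state)] = value
        self.state = nxt
        return True

    def action(self):
        """Derive an action once the network has reached its final state."""
        if self.state == "done":
            return ("move", dict(self.slots))
        return None


# Example: "put" + pointing at an object + pointing at a target location.
tn = TransitionNetwork()
tn.feed("speech_verb", "put")
tn.feed("pointing", "chair")
tn.feed("pointing", "corner")
print(tn.action())
```

Real systems like the cATN additionally handle the requirements named above (probabilistic and unsorted input, temporal relations, interaction context), which this linear sketch deliberately omits.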
Keywords: multimodal fusion; multimodal interface; semantic fusion; procedural fusion methods; natural interfaces; human–computer interaction
Figure 1
MDPI and ACS Style

Zimmerer, C.; Fischbach, M.; Latoschik, M.E. Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks. Multimodal Technologies Interact. 2018, 2, 81.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
