Open Access Article
Multimodal Technologies Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081

Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks

Chair for Human–Computer Interaction, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
*
Author to whom correspondence should be addressed.
Received: 14 October 2018 / Revised: 16 November 2018 / Accepted: 4 December 2018 / Published: 6 December 2018
(This article belongs to the Collection Multimodal User Interfaces Modelling and Development)
Abstract

Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven beneficial for implementing semantic fusion. In contrast to machine-learning approaches, which require time-costly training and optimization, they are compatible with the rapid development cycles common in user interface development. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context sensitivity, temporal relation support, access to the interaction context, and support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that no current solution fulfills the latter two requirements. As the main contribution of this article, we therefore present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the semantic queries it utilizes, and an abstraction layer for lexical information. Our reference implementation has been and continues to be used in various student projects, theses, and master-level courses. It is openly available and demonstrates that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
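To make the procedural, transition-network style of fusion concrete, the following is a minimal illustrative sketch (not the authors' cATN; all class and function names here are hypothetical). It fuses a "put that there" speech command with pointing gestures by temporal proximity, and tolerates chronologically unsorted input in the sense that a gesture may arrive before the deictic word it resolves:

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str    # "speech" or "gesture"
    value: str       # spoken token, or id of the pointed-at object
    timestamp: float # seconds

class SimpleFusionNetwork:
    """Toy transition network for 'put that there': the deictic words
    'that' and 'there' are resolved against nearby pointing gestures."""

    def __init__(self, max_gap=1.5):
        self.max_gap = max_gap        # max seconds between word and gesture
        self.state = "START"
        self.slots = {}               # semantic frame being filled
        self.pending_gestures = []    # gestures seen but not yet consumed

    def feed(self, event):
        """Feed one input event; returns a fused action tuple when complete."""
        if event.modality == "gesture":
            self.pending_gestures.append(event)
            return self._try_resolve()
        token = event.value
        if self.state == "START" and token == "put":
            self.state = "NEED_OBJECT"
        elif self.state == "NEED_OBJECT" and token == "that":
            self.slots["object_time"] = event.timestamp
            self.state = "RESOLVE_OBJECT"
        elif self.state == "NEED_TARGET" and token == "there":
            self.slots["target_time"] = event.timestamp
            self.state = "RESOLVE_TARGET"
        return self._try_resolve()

    def _nearest_gesture(self, t):
        # Closest unconsumed gesture within the allowed temporal window.
        candidates = [g for g in self.pending_gestures
                      if abs(g.timestamp - t) <= self.max_gap]
        return min(candidates, key=lambda g: abs(g.timestamp - t), default=None)

    def _try_resolve(self):
        if self.state == "RESOLVE_OBJECT":
            g = self._nearest_gesture(self.slots["object_time"])
            if g:
                self.slots["object"] = g.value
                self.pending_gestures.remove(g)
                self.state = "NEED_TARGET"
        if self.state == "RESOLVE_TARGET":
            g = self._nearest_gesture(self.slots["target_time"])
            if g:
                self.slots["target"] = g.value
                self.state = "DONE"
                return ("put", self.slots["object"], self.slots["target"])
        return None

net = SimpleFusionNetwork()
net.feed(Event("speech", "put", 0.0))
net.feed(Event("gesture", "red_cube", 0.4))  # gesture arrives before "that"
net.feed(Event("speech", "that", 0.5))
net.feed(Event("speech", "there", 1.2))
action = net.feed(Event("gesture", "table", 1.3))
# action == ("put", "red_cube", "table")
```

The sketch omits most of what the article addresses: probabilistic input, continuous feedback, recursion, concurrency, and access to the interaction context. It is intended only to convey why a declarative transition network maps naturally onto incremental, multimodal event streams.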
Keywords: multimodal fusion; multimodal interface; semantic fusion; procedural fusion methods; natural interfaces; human-computer interaction
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Zimmerer, C.; Fischbach, M.; Latoschik, M.E. Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks. Multimodal Technologies Interact. 2018, 2, 81.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Multimodal Technologies Interact. EISSN 2414-4088, published by MDPI AG, Basel, Switzerland.