
Table of Contents

Multimodal Technologies Interact., Volume 1, Issue 3 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format; papers are published in both HTML and PDF form. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.

Research


Open Access Article Evaluating Interactive Visualization of Multidimensional Data Projection with Feature Transformation
Multimodal Technologies Interact. 2017, 1(3), 13; doi:10.3390/mti1030013
Received: 31 May 2017 / Revised: 3 July 2017 / Accepted: 4 July 2017 / Published: 8 July 2017
PDF Full-text (458 KB) | HTML Full-text | XML Full-text
Abstract
There has been extensive research on dimensionality reduction techniques. While these make it possible to present high-dimensional data visually in 2D or 3D, it remains a challenge for users to make sense of such projected data. Recently, interactive techniques, such as Feature Transformation, have been introduced to address this. This paper describes a user study designed to understand how feature transformation techniques affect users' understanding of multidimensional data visualisation. Feature transformation was compared with traditional dimensionality reduction techniques, both unsupervised (PCA) and supervised (MCML). Thirty-one participants were recruited to detect visual clusters and outliers using visualisations produced by these techniques. Six datasets with a range of dimensionalities and data sizes were used in the experiment. Five of these are benchmark datasets, which makes it possible to compare with other studies using the same data. Both task accuracy and completion time were recorded for comparison. The results make a strong case for the feature transformation technique: participants performed best with the visualisations produced by high-level feature transformation, in terms of both accuracy and completion time. The improvements over the other techniques are substantial, particularly in the accuracy of the clustering task. However, visualising data of very high dimensionality (i.e., more than 100 dimensions) remains a challenge.
(This article belongs to the Special Issue Coupling Computation and Human Cognition through Interaction Design)
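The unsupervised baseline mentioned in the abstract, PCA, can be sketched in a few lines: project high-dimensional data onto its top two principal axes so that visual clusters become inspectable in 2D. The synthetic two-cluster dataset and all parameter choices below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative synthetic data: two Gaussian clusters in 10 dimensions.
X = np.vstack([rng.normal(0.0, 1.0, (50, 10)),
               rng.normal(4.0, 1.0, (50, 10))])

# PCA via the SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_2d = Xc @ Vt[:2].T          # project onto the top two principal axes

print(X_2d.shape)             # (100, 2)
```

Because the between-cluster separation dominates the within-cluster variance here, the two clusters end up clearly separated along the first projected axis, which is exactly the visual structure the study's participants were asked to detect.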

Open Access Article Exploration of the 3D World on the Internet Using Commodity Virtual Reality Devices
Multimodal Technologies Interact. 2017, 1(3), 15; doi:10.3390/mti1030015
Received: 27 April 2017 / Revised: 12 July 2017 / Accepted: 17 July 2017 / Published: 21 July 2017
PDF Full-text (6051 KB) | HTML Full-text | XML Full-text
Abstract
This article describes the technical basics and applications of a graphically interactive, online Virtual Reality (VR) framework. The framework automatically extracts and displays left and right stereo images from Internet search engines such as Google Image Search. Within a short waiting time, many 3D-related results, in the form of aligned left and right stereo photos, are returned to the user; these results are viewable through VR glasses. The system automatically filters different types of available 3D data from redundant pictorial datasets on public networks (the Internet). To reduce possible copyright issues, only images that are "labelled for reuse" are searched, meaning that the obtained pictures can be used for any purpose, in any area, without being modified. The system then automatically determines whether a picture is a side-by-side stereo pair, an anaglyph, a stereogram, or just a "normal" 2D image (not optically 3D viewable). It then generates a stereo pair from the collected dataset to seamlessly display 3D visualisation on state-of-the-art VR devices such as the low-cost Google Cardboard, Samsung Gear VR, or Google Daydream. These devices provide an immediate, controllable 3D display. In this article, we propose an image type classification technique that dynamically extracts co-aligned stereo pairs with rich 3D visualisation for VR viewers. The system is portable and simple to set up and operate. Initial experimental results show that it is relatively fast, accurate, and easy to implement. With such a system, Internet users all over the world could easily visualise millions of real-life stereo datasets publicly available on the Internet, which are believed to be useful for VR testing and learning purposes.
(This article belongs to the Special Issue Virtual Reality and Games)
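One building block of the image type classification described above can be illustrated with a toy heuristic: a side-by-side stereo pair has a left half that is strongly correlated with its right half, since the two views differ only by a small horizontal disparity. The function, threshold, and synthetic test images below are illustrative assumptions, not the paper's actual classifier.

```python
import numpy as np

def looks_like_side_by_side(img: np.ndarray, threshold: float = 0.8) -> bool:
    """Toy heuristic: treat an image as a side-by-side stereo pair if its
    left and right halves are strongly correlated. The threshold of 0.8 is
    an illustrative choice, not a value from the paper."""
    h, w = img.shape[:2]
    left, right = img[:, : w // 2], img[:, w // 2 : 2 * (w // 2)]
    l = (left - left.mean()) / (left.std() + 1e-9)
    r = (right - right.mean()) / (right.std() + 1e-9)
    corr = float((l * r).mean())  # normalised cross-correlation
    return corr > threshold

# A synthetic "stereo pair": a smooth image next to a 2-pixel-shifted copy.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
a = np.cos(np.linspace(0.0, 2.0 * np.pi, 64))
view = np.outer(a, x)                       # smooth synthetic "photo"
pair = np.hstack([view, np.roll(view, 2, axis=1)])

print(looks_like_side_by_side(pair))        # True: halves nearly identical
```

A real classifier would also have to distinguish anaglyphs and stereograms (e.g., via colour-channel statistics or autocorrelation), but the half-image comparison conveys the core idea of detecting co-aligned stereo pairs.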

Open Access Article Sense-making Strategies for the Interpretation of Visualizations—Bridging the Gap between Theory and Empirical Research
Multimodal Technologies Interact. 2017, 1(3), 16; doi:10.3390/mti1030016
Received: 31 May 2017 / Revised: 5 July 2017 / Accepted: 19 July 2017 / Published: 26 July 2017
PDF Full-text (4071 KB) | HTML Full-text | XML Full-text
Abstract
Making sense of visualizations is often an open and explorative process, and this process is still not well understood. On the one hand, it is an open question which theoretical models are appropriate for explaining these activities; heuristics and theories of everyday thinking probably describe this process better than more formal models. On the other hand, there are only a few detailed investigations of interaction processes with information visualizations. We attempt to relate approaches describing the use of heuristics and everyday thinking to existing empirical studies of sense-making with visualizations.
(This article belongs to the Special Issue Coupling Computation and Human Cognition through Interaction Design)

Open Access Article A Work Area Visualization by Multi-View Camera-Based Diminished Reality
Multimodal Technologies Interact. 2017, 1(3), 18; doi:10.3390/mti1030018
Received: 15 July 2017 / Revised: 12 August 2017 / Accepted: 31 August 2017 / Published: 5 September 2017
PDF Full-text (1488 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Hand-held tools are indispensable for efficient manual work in fields ranging from woodworking to surgery. In this paper, we present a diminished reality (DR) method to visualize areas occluded by hands and tools in various hand-working scenarios. We redesign an existing arbitrary-viewpoint image generation method for DR applications as the core DR background rendering, recovering views free of undesirable objects. We conducted quantitative and qualitative experiments on real data to validate the performance of the proposed method. The experimental results showed that our method runs in real time (40.1 fps) and surpasses conventional methods (including image-inpainting-based and geometry-based approaches) in terms of image similarity measures.
(This article belongs to the Special Issue Recent Advances in Augmented Reality)

Open Access Article Three-Dimensional, Kinematic, Human Behavioral Pattern-Based Features for Multimodal Emotion Recognition
Multimodal Technologies Interact. 2017, 1(3), 19; doi:10.3390/mti1030019
Received: 17 August 2017 / Revised: 3 September 2017 / Accepted: 8 September 2017 / Published: 11 September 2017
PDF Full-text (1427 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distances, and angles of joints), kinematic features such as the velocity and displacement of joints, and features extracted from daily behavioral patterns, such as the frequency of head nods, hand waves, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed from raw feature data in the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements, and postures associated with specific emotions. The features from each modality and the behavioral pattern-based features (e.g., head shake, arm retraction, and forward body movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross-validation and a support vector machine (SVM) to predict six basic emotions. The results showed improved emotion recognition accuracy (precision increased by 3.28% and recall by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.
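The classification stage described in the abstract, an SVM evaluated with 10-fold cross-validation, can be sketched with scikit-learn. The digits dataset and the hyper-parameters below are stand-ins for the paper's multimodal feature vectors and six emotion classes, which are not public.

```python
# Sketch of SVM classification with 10-fold cross-validation.
# Dataset and hyper-parameters are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)         # stand-in features and labels
clf = SVC(kernel="rbf", C=1.0, gamma="scale")

# Each fold trains on 9/10 of the data and tests on the held-out 1/10.
scores = cross_val_score(clf, X, y, cv=10)
print(len(scores))                          # 10, one accuracy per fold
print(scores.mean())                        # mean cross-validated accuracy
```

In the paper's setting, `X` would hold the concatenated 3D geometric, kinematic, and behavioral-pattern features per sample (feature-level fusion), and `y` the six emotion labels.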

Open Access Article Pictorial AR Tag with Hidden Multi-Level Bar-Code and Its Potential Applications
Multimodal Technologies Interact. 2017, 1(3), 20; doi:10.3390/mti1030020
Received: 15 July 2017 / Revised: 11 September 2017 / Accepted: 15 September 2017 / Published: 19 September 2017
PDF Full-text (7277 KB) | HTML Full-text | XML Full-text
Abstract
For decades, researchers have been trying to create intuitive virtual environments by blending reality and virtual reality, thus enabling general users to interact with the digital domain as easily as with the real world. The result is "augmented reality" (AR), which seamlessly superimposes virtual objects onto a real environment in three dimensions (3D) and in real time. One of the most important components that helps close the gap between virtuality and reality is the marker used in the AR system. While pictorial markers and bar-code markers are the two most commonly used marker types on the market, both have disadvantages in visual and processing performance. In this paper, we present a novel method that combines a bar-code with the original features of a colour picture (e.g., photos, trading cards, an advertisement's figure). Our method overlays the original pictorial image with a single stereogram image that optically conceals a multi-level (3D) bar-code; it therefore has a larger data-storage capacity than a general 1D bar-code. This new type of marker has the potential to address the issues that current marker types face: it not only keeps the original information of the picture but also contains encoded numeric information. In our limited evaluation, this pictorial bar-code shows relatively robust performance under various conditions and scalings; thus, it offers a promising AR approach for applications such as trading card games, education, and advertising.
(This article belongs to the Special Issue Recent Advances in Augmented Reality)

Review


Open Access Review A Systematic Review of Adaptivity in Human-Robot Interaction
Multimodal Technologies Interact. 2017, 1(3), 14; doi:10.3390/mti1030014
Received: 21 April 2017 / Revised: 14 July 2017 / Accepted: 14 July 2017 / Published: 20 July 2017
PDF Full-text (233 KB) | HTML Full-text | XML Full-text
Abstract
As the field of social robotics grows, a consensus has emerged on the design and implementation of robotic systems capable of adapting to user actions, where those actions may be based on the users' emotions, personality, or memory of past interactions. We therefore believe it is important to review past research on adaptive robots that have been utilised in various social environments. In this paper, we present a systematic review of the reported adaptive interactions across a number of domain areas in Human-Robot Interaction and give future directions that can guide the design of future adaptive social robots. We conjecture that this will help achieve the long-term applicability of robots in various social domains.
Open Access Review A State-of-the-Art Review of Augmented Reality in Engineering Analysis and Simulation
Multimodal Technologies Interact. 2017, 1(3), 17; doi:10.3390/mti1030017
Received: 18 July 2017 / Revised: 12 August 2017 / Accepted: 23 August 2017 / Published: 5 September 2017
PDF Full-text (2412 KB) | HTML Full-text | XML Full-text
Abstract
Augmented reality (AR) has recently become a worldwide research topic. AR technology renders intuitive, computer-generated content on users' physical surroundings. To improve process efficiency and productivity, researchers and developers have paid increasing attention to AR applications in engineering analysis and simulation. The integration of AR with numerical simulation, such as the finite element method, provides a cognitive and scientific way for users to analyze practical problems. By incorporating scientific visualization technologies, an AR-based system superimposes engineering analysis and simulation results directly on real-world objects. Engineering analysis and simulation involve diverse types of data that are normally processed using specific computer software; correct and effective visualization of these data on an AR platform can reduce misinterpretation in spatial and logical aspects. Moreover, the tracking performance of AR platforms in engineering analysis and simulation is crucial, as it influences the overall user experience: the operating environment requires robust tracking to deliver stable and accurate information to users. In addition, over the past several decades, AR has undergone a transition from desktop to mobile computing. The portability and spread of mobile platforms have given engineers convenient access to relevant information in situ; however, the on-site working environment imposes constraints on the development of mobile AR-based systems. This paper aims to provide a systematic overview of AR in engineering analysis and simulation. The visualization and tracking techniques, as well as the implementation on mobile platforms, are discussed. Each technique is analyzed with respect to its pros and cons, as well as its suitability to particular types of applications.
(This article belongs to the Special Issue Recent Advances in Augmented Reality)
