
Table of Contents

Multimodal Technologies Interact., Volume 2, Issue 4 (December 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-22
Open Access Article: Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective
Multimodal Technologies Interact. 2018, 2(4), 82; https://doi.org/10.3390/mti2040082
Received: 6 November 2018 / Revised: 8 December 2018 / Accepted: 12 December 2018 / Published: 16 December 2018
Viewed by 289 | PDF Full-text (3420 KB) | HTML Full-text | XML Full-text
Abstract
Social robots are expected to be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the features and quality requirements on human–robot interaction will increase, including the possibility to communicate emotions and establish a positive user experience, e.g., through touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao robot, as well as on identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions, how those influence the four dimensions of user experience present in the study, and the different parts of the touch behavior of conveying emotions.

Open Access Article: Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
Multimodal Technologies Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081
Received: 14 October 2018 / Revised: 16 November 2018 / Accepted: 4 December 2018 / Published: 6 December 2018
Viewed by 264 | PDF Full-text (15991 KB) | HTML Full-text | XML Full-text
Abstract
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with the rapid development cycles that are common for user interface development, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context sensitivity, temporal relation support, access to the interaction context, as well as support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution that fulfills the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation has been, and continues to be, used in various student projects, theses, and master-level courses. It is openly available and demonstrates that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
(This article belongs to the Special Issue Multimodal User Interfaces Modelling and Development)

Open Access Article: A Phenomenological Framework of Architectural Paradigms for the User-Centered Design of Virtual Environments
Multimodal Technologies Interact. 2018, 2(4), 80; https://doi.org/10.3390/mti2040080
Received: 31 October 2018 / Revised: 19 November 2018 / Accepted: 28 November 2018 / Published: 30 November 2018
Viewed by 355 | PDF Full-text (4679 KB) | HTML Full-text | XML Full-text
Abstract
In some circumstances, immersion in virtual environments with the aid of virtual reality (VR) equipment can create feelings of anxiety in users and be experienced as something “frightening”, “oppressive”, “alienating”, “dehumanizing”, or “dystopian”. Sometimes (e.g., in exposure therapy or VR gaming), a virtual environment is intended to have such psychological impacts on users; however, such effects can also arise unintentionally due to the environment’s poor architectural design. Designers of virtual environments may employ user-centered design (UCD) to incrementally improve a design and generate a user experience more closely resembling the type desired; however, UCD can yield suboptimal results if an initial design relied on an inappropriate architectural approach. This study developed a framework that can facilitate the purposeful selection of the most appropriate architectural approach by drawing on Norberg-Schulz’s established phenomenological account of real-world architectural modes. By considering the unique possibilities for structuring and experiencing space within virtual environments and reinterpreting Norberg-Schulz’s schemas in the context of virtual environment design, a novel framework was formulated that explicates six fundamental “architectural paradigms” available to designers of virtual environments. It was shown that the application of this framework could easily be incorporated as an additional step within the UCD process.
(This article belongs to the Special Issue New Directions in User-Centered Interaction Design)

Open Access Review: An Overview of Participatory Design Applied to Physical and Digital Product Interaction for Older People
Multimodal Technologies Interact. 2018, 2(4), 79; https://doi.org/10.3390/mti2040079
Received: 14 October 2018 / Revised: 5 November 2018 / Accepted: 12 November 2018 / Published: 14 November 2018
Viewed by 348 | PDF Full-text (1093 KB) | HTML Full-text | XML Full-text
Abstract
An understanding of the need for user-centred and participatory design continues to gain momentum in both academia and industry. It is essential that this momentum is maintained as the population changes and technology develops. The contribution of this work draws on research from different disciplines to provide the design community with new knowledge and an awareness of the diversity of user needs, particularly the needs and skills of older people. A collection of usability and accessibility guidelines is referenced in terms of their applicability to designing interfaces and interaction for an ageing population, in conjunction with results from studies that highlight the extent to which familiarity and successful interaction with contemporary products decrease with age and prior experience, and that identify the problems users experience during interaction with technology. The hope is that more widespread awareness of this knowledge will encourage greater understanding and assist in the development of better design methods and better on- and offline products and tools for those of any age, but particularly those within an increasingly ageing demographic.
(This article belongs to the Special Issue New Directions in User-Centered Interaction Design)

Open Access Article: Allocentric Emotional Affordances in HRI: The Multimodal Binding
Multimodal Technologies Interact. 2018, 2(4), 78; https://doi.org/10.3390/mti2040078
Received: 18 October 2018 / Accepted: 3 November 2018 / Published: 6 November 2018
Viewed by 310 | PDF Full-text (3188 KB) | HTML Full-text | XML Full-text
Abstract
The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human–robot interaction (HRI). In this paper we explore and discuss the idea of “emotional affordances” by proposing a viable model for implementation into HRI, which considers allocentric and multimodal perception. We consider “two-way” affordances: a perceived object triggering an emotion, and a perceived human emotional expression triggering an action. In order to make the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik’s Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.

Open Access Article: Using Game Design to Teach Informatics and Society Topics in Secondary Schools
Multimodal Technologies Interact. 2018, 2(4), 77; https://doi.org/10.3390/mti2040077
Received: 10 September 2018 / Revised: 15 October 2018 / Accepted: 1 November 2018 / Published: 6 November 2018
Viewed by 325 | PDF Full-text (894 KB) | HTML Full-text | XML Full-text
Abstract
This article discusses the use of game design as a method for interdisciplinary project-based teaching in secondary school education to convey informatics and society topics, which encompass the larger social context of computing. There is a large body of knowledge about learning games, but little background on using game design as a method for project-based teaching of social issues in informatics. We present the results of an analysis of student-created games and an evaluation of a student-authored database of learning contents found in commercial off-the-shelf games. We further contextualise these findings using a group discussion with teachers. The results underline the effectiveness of project-based teaching in raising awareness of informatics and society topics. We further outline informatics and society topics that are particularly interesting to students, genre preferences, and potentially engaging game mechanics stemming from our analyses.
(This article belongs to the Special Issue Human Computer Interaction in Education)

Open Access Article: “There Was No Green Tick”: Discovering the Functions of a Widget in a Joint Problem-Solving Activity and the Consequences for the Participants’ Discovering Process
Multimodal Technologies Interact. 2018, 2(4), 76; https://doi.org/10.3390/mti2040076
Received: 13 August 2018 / Revised: 25 September 2018 / Accepted: 9 October 2018 / Published: 26 October 2018
Viewed by 241 | PDF Full-text (5653 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, tangible user interfaces (TUIs) have gained in popularity in educational contexts, among others to implement problem-solving and discovery-learning science activities. In the context of an interdisciplinary and cross-institutional collaboration, we conducted a multimodal EMCA-based video user study involving a TUI-mediated bicycle mechanics simulation. This article focusses on the discovering work of a group of three students with regard to a particular tangible object (a red button) designed to support participants’ engagement with the underlying physics aspects, and on its consequences for their engagement with the targeted mechanics aspects.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article: A Survey to Understand Emotional Situations on the Road and What They Mean for Affective Automotive UIs
Multimodal Technologies Interact. 2018, 2(4), 75; https://doi.org/10.3390/mti2040075
Received: 1 August 2018 / Revised: 8 October 2018 / Accepted: 18 October 2018 / Published: 25 October 2018
Viewed by 308 | PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present the results of an online survey (N = 170) on emotional situations on the road. In particular, we asked potential early adopters to remember a situation where they felt either an intense positive or negative emotion while driving. Our research is motivated by imminent disruptions in the automotive sector due to automated driving and the accompanying switch to selling driving experiences over horsepower. This creates a need to focus on the driver’s emotion when designing in-car interfaces. As a result of our research, we present a set of propositions for affective car interfaces based on real-life experiences. With our work we aim to support the design of affective car interfaces and give designers a foundation to build upon. We find respondents often connect positive emotions with enjoying their independence, while negative experiences are associated mostly with traffic behavior. Participants who experienced negative situations wished for better information management and a higher degree of automation. Drivers with positive emotions generally wanted to experience the situation more genuinely, for example, by switching to a “back-to-basic” mode. We explore these statements and discuss recommendations for the design of affective interfaces in future cars.
(This article belongs to the Special Issue Automotive User Interfaces)

Open Access Article: Participatory Prototyping to Inform the Development of a Remote UX Design System in the Automotive Domain
Multimodal Technologies Interact. 2018, 2(4), 74; https://doi.org/10.3390/mti2040074
Received: 31 July 2018 / Revised: 14 October 2018 / Accepted: 18 October 2018 / Published: 24 October 2018
Viewed by 276 | PDF Full-text (1263 KB) | HTML Full-text | XML Full-text
Abstract
This study reports on the empirical findings of participatory design workshops for the development of a supportive automotive user experience (UX) design system. Identifying and addressing this area with traditional research methods is problematic due to potentially conflicting UX design perspectives and the related limitations of the automotive domain. To help resolve this problem, we conducted research with 12 UX designers through individual participatory prototyping activities to gain insights into their explicit, observable, tacit and latent needs. These activities allowed us to explore their motivation to use different technologies, the system’s architecture, and detailed features of interactivity, and to describe user needs including efficiency, effectiveness, engagement, naturalness, ease of use, information retrieval, self-image awareness, politeness, and flexibility. Our analysis led us to design implications that translate participants’ needs into UX design goals, informing practitioners on how to develop relevant systems further.
(This article belongs to the Special Issue Automotive User Interfaces)

Open Access Article: Designing to the Pattern: A Storytelling Prototype for Food Growers
Multimodal Technologies Interact. 2018, 2(4), 73; https://doi.org/10.3390/mti2040073
Received: 23 June 2018 / Revised: 11 October 2018 / Accepted: 15 October 2018 / Published: 18 October 2018
Viewed by 255 | PDF Full-text (3207 KB) | HTML Full-text | XML Full-text
Abstract
We present the design and pilot study of QuickTales, a mobile storytelling platform through which urban gardeners can share gardening experiences. QuickTales was built as a response to design patterns drawn from previous studies we conducted with residential gardeners and different gardening communities in a large Australian city. Given the diversity of needs and wants of urban gardeners, the intent was for QuickTales to serve as a multi-purpose tool for different individuals and groups across the local urban agriculture ecology. The evaluation provides initial insights into the use of storytelling in this context. We reflect on how design patterns were used to inform the design of QuickTales, and propose opportunities for further design pattern development.
(This article belongs to the Special Issue Human-Food Interaction)

Open Access Article: X-Reality System Architecture for Industry 4.0 Processes
Multimodal Technologies Interact. 2018, 2(4), 72; https://doi.org/10.3390/mti2040072
Received: 6 August 2018 / Revised: 22 September 2018 / Accepted: 2 October 2018 / Published: 15 October 2018
Viewed by 451 | PDF Full-text (1311 KB) | HTML Full-text | XML Full-text
Abstract
Information visualization has been widely adopted to represent and visualize data patterns, as it offers users fast access to data facts and can highlight specific points beyond plain figures and words. As data comes from multiple sources, in all types of formats, and in unprecedented volumes, the need intensifies for more powerful and effective data visualization tools. In the manufacturing industry, immersive technology can enhance the way users artificially perceive and interact with data linked to the shop floor. However, showcases of prototypes of such technology have shown limited results. The low level of digitalization, the complexity of the required infrastructure, the lack of knowledge about Augmented Reality (AR), and the calibration processes that are required whenever the shop floor configuration changes hinder the adoption of the technology. In this paper, we investigate the design of middleware that can automate the configuration of X-Reality (XR) systems and create tangible in-site visualizations of and interactions with industrial assets. The main contribution of this paper is a middleware architecture that enables communication and interaction across different technologies without manual configuration or calibration. This has the potential to turn shop floors into seamless interaction spaces that empower users with pervasive forms of data sharing, analysis and presentation that are not restricted to a specific hardware configuration. The novelty of our work lies in its autonomous approach to finding and communicating calibrations and data format transformations between devices, which does not require user intervention. Our prototype middleware has been validated with a test case in a controlled digital-physical scenario composed of a robot and industrial equipment.

Open Access Article: Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface
Multimodal Technologies Interact. 2018, 2(4), 71; https://doi.org/10.3390/mti2040071
Received: 6 August 2018 / Revised: 13 September 2018 / Accepted: 8 October 2018 / Published: 11 October 2018
Viewed by 355 | PDF Full-text (6277 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Broad access to automated cars (ACs) that can reliably and unconditionally drive in all environments is still some years away. Urban areas pose a particular challenge to ACs, since even perfectly reliable systems may be forced to execute sudden reactive driving maneuvers in hard-to-predict hazardous situations. This may negatively surprise the driver, possibly causing discomfort, anxiety or loss of trust, which might be a risk for the acceptance of the technology in general. To counter this, we suggest an explanatory windshield display interface with augmented reality (AR) elements to support driver situation awareness (SA). It provides the driver with information about the car’s perceptive capabilities and driving decisions. We created a prototype in a human-centered approach and implemented the interface in a mixed-reality driving simulation. We conducted a user study to assess its influence on driver SA. We collected objective SA scores and self-ratings, both of which yielded a significant improvement with our interface in good (medium effect) and in bad (large effect) visibility conditions. We conclude that explanatory AR interfaces could be a viable measure against unwarranted driver discomfort and loss of trust in critical urban situations by elevating SA.
(This article belongs to the Special Issue Automotive User Interfaces)

Open Access Article: Multimodal Technologies in LEGO House: A Social Semiotic Perspective
Multimodal Technologies Interact. 2018, 2(4), 70; https://doi.org/10.3390/mti2040070
Received: 21 August 2018 / Revised: 28 September 2018 / Accepted: 5 October 2018 / Published: 10 October 2018
Viewed by 706 | PDF Full-text (3584 KB) | HTML Full-text | XML Full-text
Abstract
Children’s playworlds are a complex interweaving of modes, with the border areas between the digital and non-digital often becoming increasingly blurred. Growing in popularity and prevalence, multimodal technologies blending digital and non-digital elements present novel opportunities for designers of toys and play-spaces, as well as being of interest to researchers of young children’s contemporary play and learning. Opened in Denmark in September 2017, LEGO House defines itself as the ‘Home of the Brick’, a public attraction aiming to support play, creativity and learning through multiple interactive LEGO experiences spanning digital and non-digital forms. Offering a rich context for considering multimodal perspectives on contemporary play, this article reports on a range of multimodal technologies featured in LEGO House, including digital cameras, scanners, and interactive tables used in combination with traditional LEGO bricks. Three LEGO House experiences are considered from a multimodal social semiotic perspective, focusing on the affordances of multimodal technologies for play and the process of transduction across modes, in order to explore the liminal border areas where digital and non-digital play are increasingly mixed. This article proposes that LEGO House presents an innovative ‘third space’ that creates opportunities for playful interaction with multimodal technologies. LEGO House can be seen as part of a growing recognition of the power of play, both in its own right and in relation to learning, acknowledging that meaning-making happens in informal times and places that are not positioned as direct acts of teaching. Furthermore, it is suggested that multimodal technologies embedded into the play-space expand opportunities for learning in new ways, whilst highlighting that movement between digital and non-digital forms always entails both gains and losses, a matter which needs to be explored. Highlighting the opportunities for meaning-making in informal, play-based settings such as LEGO House therefore has the potential to recognise and give value to playful meaning-making with multimodal technologies which may otherwise be taken for granted or go unnoticed. In this way, experiences such as those found in LEGO House can contribute towards conceptualisations of learning which support children to develop the playfully creative skills and knowledge required for the digital age.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article: Design and Evaluation of a Mixed-Reality Playground for Child-Robot Games
Multimodal Technologies Interact. 2018, 2(4), 69; https://doi.org/10.3390/mti2040069
Received: 16 August 2018 / Revised: 20 September 2018 / Accepted: 2 October 2018 / Published: 6 October 2018
Viewed by 336 | PDF Full-text (2944 KB) | HTML Full-text | XML Full-text
Abstract
In this article we present the Phygital Game project, a mixed-reality game platform in which children can play with or against a robot. The project was developed by adopting a human-centered design approach, characterized by the engagement of both children and parents in the design process, and situating the game platform in a real context—an educational center for children. We report the results of both the preliminary studies and the final testing session, which focused on the evaluation of usability factors. By providing a detailed description of the process and the results, this work aims at sharing the findings and the lessons learned about both the implications of adopting a human-centered approach across the whole design process and the specific challenges of developing a mixed-reality playground.
(This article belongs to the Special Issue Mixed Reality Interfaces)

Open Access Article: Takeover Requests in Highly Automated Truck Driving: How Do the Amount and Type of Additional Information Influence the Driver–Automation Interaction?
Multimodal Technologies Interact. 2018, 2(4), 68; https://doi.org/10.3390/mti2040068
Received: 1 August 2018 / Revised: 10 September 2018 / Accepted: 28 September 2018 / Published: 4 October 2018
Viewed by 340 | PDF Full-text (1987 KB) | HTML Full-text | XML Full-text
Abstract
Vehicle automation is linked to various benefits, such as increases in fuel and transport efficiency as well as in driving comfort. However, automation also comes with a variety of possible downsides, e.g., loss of situational awareness, loss of skills, and inappropriate trust levels regarding system functionality. Drawbacks differ across automation levels. As highly automated driving (HAD, level 3) requires the driver to take over the driving task in critical situations within a limited period of time, the need for an appropriate human–machine interface (HMI) arises. To foster adequate and efficient human–machine interaction, this contribution presents a user-centered, iterative approach for HMI evaluation in highly automated truck driving. For HMI evaluation, a driving simulator study (n = 32) using a dynamic truck driving simulator was conducted to let users experience the HMI in a semi-real driving context. Participants rated three HMI concepts, differing in their informational content for HAD, regarding acceptance, workload, user experience, and controllability. Results showed that all three HMI concepts achieved good to very good results in these measures. Overall, HMI concepts offering more information to the driver about the HAD system received significantly higher ratings, depicting the positive effect of additional information on the driver–automation interaction.
(This article belongs to the Special Issue Automotive User Interfaces)

Open Access Article: Modelling Adaptation through Social Allostasis: Modulating the Effects of Social Touch with Oxytocin in Embodied Agents
Multimodal Technologies Interact. 2018, 2(4), 67; https://doi.org/10.3390/mti2040067
Received: 18 July 2018 / Revised: 19 September 2018 / Accepted: 21 September 2018 / Published: 3 October 2018
Viewed by 336 | PDF Full-text (2147 KB) | HTML Full-text | XML Full-text
Abstract
Social allostasis is a mechanism of adaptation that permits individuals to dynamically adapt their physiology to changing physical and social conditions. Oxytocin (OT) is widely considered to be one of the hormones that drives and adapts social behaviours. While its precise effects remain unclear, two ways in which OT may promote adaptation are by affecting social salience and by modulating the internal responses to performing social behaviours. Working towards a model of dynamic adaptation through social allostasis in simulated embodied agents, and extending our previous work on OT-inspired modulation of social salience, we present a model and experiments that investigate the effects and adaptive value of allostatic processes based on hormonal (OT) modulation of affective elements of a social behaviour. In particular, we investigate and test the effects and adaptive value of modulating the degree of satisfaction derived from tactile contact in a social motivation context, in a small simulated agent society, across different environmental challenges (related to the availability of food) and under OT modulation of social salience as a motivational incentive. Our results show that these modulatory mechanisms have different (positive or negative) adaptive value across different groups and under different environmental circumstances, in a way that supports the context-dependent nature of OT put forward by the interactionist approach to OT modulation in biological agents. For simulation models, this means that OT modulation of the mechanisms we have described should be context-dependent in order to maximise the viability of our socially adaptive agents, illustrating the relevance of social allostasis mechanisms. Full article
Open AccessArticle A Two-Study Approach to Explore the Effect of User Characteristics on Users’ Perception and Evaluation of a Virtual Assistant’s Appearance
Multimodal Technologies Interact. 2018, 2(4), 66; https://doi.org/10.3390/mti2040066
Received: 30 July 2018 / Revised: 5 September 2018 / Accepted: 22 September 2018 / Published: 2 October 2018
Viewed by 320 | PDF Full-text (1852 KB) | HTML Full-text | XML Full-text
Abstract
This research investigates the effect of different user characteristics on the perception and evaluation of an agent’s appearance variables. To this end, two experiments were conducted. In a 3 × 3 × 5 within-subjects design (Study 1; N = 59), three target groups (students, elderly people, and cognitively impaired people) evaluated 30 agent appearances that varied in species (human, animal, and robot) and realism (high detail, low detail, stylized shades, stylized proportions, and stylized shades with stylized proportions). Study 2 (N = 792) focused on the effect of moderating variables regarding the same appearance variables and aimed to supplement the findings of Study 1 using a 3 × 5 between-subjects design. Results showed effects of species and realism on person perception, users’ liking, and intention to use. In a direct comparison, a higher degree of realism was perceived more positively, although these effects were not replicated in Study 2. Furthermore, the majority evaluated nonhumanoid agents more positively. Since no interaction effects of species and realism were found, stylization appears to influence perception equally for all species. Moreover, the importance of target-group preferences was demonstrated, since group differences in appearance evaluation were found. Full article
(This article belongs to the Special Issue Intelligent Virtual Agents)
Open AccessReview Gesture Elicitation Studies for Mid-Air Interaction: A Review
Multimodal Technologies Interact. 2018, 2(4), 65; https://doi.org/10.3390/mti2040065
Received: 8 September 2018 / Revised: 20 September 2018 / Accepted: 26 September 2018 / Published: 29 September 2018
Viewed by 401 | PDF Full-text (277 KB) | HTML Full-text | XML Full-text
Abstract
Mid-air interaction involves touchless manipulation of digital content or remote devices, based on sensor tracking of body movements and gestures. There are no established, universal gesture vocabularies for such interactions. On the contrary, it is widely acknowledged that the identification of appropriate gestures depends on the context of use; the identification of mid-air gestures is thus an important design decision. The method of gesture elicitation is increasingly applied by designers to help them identify appropriate gesture sets for mid-air applications. This paper presents a review of elicitation studies in mid-air interaction, based on a selected set of 47 papers published between 2011 and 2018. It reports on: (1) the application domains of the mid-air interactions examined; (2) the level of technological maturity of the systems at hand; (3) the gesture elicitation procedure and its variations; (4) the appropriateness criteria for a gesture; (5) the number and profile of participants; (6) user evaluation methods (of the gesture vocabulary); and (7) data analysis and related metrics. The paper confirms that the elicitation method has been applied extensively, but with variability and some ambiguity, and discusses under-explored research questions and potential improvements of related research. Full article
(This article belongs to the Special Issue Embodied and Spatial Interaction)
Open AccessArticle ERIKA—Early Robotics Introduction at Kindergarten Age
Multimodal Technologies Interact. 2018, 2(4), 64; https://doi.org/10.3390/mti2040064
Received: 27 July 2018 / Revised: 1 September 2018 / Accepted: 14 September 2018 / Published: 27 September 2018
Viewed by 366 | PDF Full-text (27662 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we report on our attempt to design and implement an early introduction to basic robotics principles for children at kindergarten age. One of the main challenges of this effort is to explain complex robotics content in a way that pre-school children can follow the basic principles and ideas, using examples from their world of experience. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself, and that a quiz at the end of the lesson is conducted using robots as well. The humanoid robot Pepper from SoftBank, a well-suited platform for human–robot interaction experiments, was used to present a lecture on robotics, reading out the content to the children using its speech synthesis capability. A quiz in the style of the Runaround game show after the lecture prompted the children to recap what they had learned about how mobile robots work in principle. In this quiz, two LEGO Mindstorms EV3 robots were used to implement a strongly interactive scenario. Beyond the thrill of being exposed to a mobile robot that reacted to them, the children were very excited and, at the same time, very focused. We received very positive feedback from the children as well as from their educators. To the best of our knowledge, this is one of only a few attempts to use a robot like Pepper not as a tele-teaching tool, but as the teacher itself, in order to engage pre-school children with complex robotics content. Full article
(This article belongs to the Special Issue Human Computer Interaction in Education)
Open AccessArticle Designing Behavioural Artificial Intelligence to Record, Assess and Evaluate Human Behaviour
Multimodal Technologies Interact. 2018, 2(4), 63; https://doi.org/10.3390/mti2040063
Received: 13 March 2018 / Revised: 28 August 2018 / Accepted: 28 August 2018 / Published: 25 September 2018
Viewed by 420 | PDF Full-text (1414 KB) | HTML Full-text | XML Full-text
Abstract
The context of the work presented in this article is the assessment and automated evaluation of human behaviour. To facilitate this, a formalism is presented that is unambiguous and can be implemented and interpreted in an automated manner. More broadly, comparable behaviour evaluation requires comparable assessment scenarios, and to this end, computer games are considered as controllable and abstract environments. Within this context, a model for behavioural AI is presented which was designed around three objectives: (a) being able to play rationally; (b) adhering to formally stated behaviour preferences; and (c) ensuring that very specific circumstances can be forced to arise within a game. The presented work is based on established models from behavioural psychology and formal logic, as well as approaches from game theory and related fields. The suggested model for behavioural AI has been used to implement and test a game, as well as AI players that exhibit specific behavioural preferences. The overall aim of this article is to enable readers to design their own AI implementation, using the formalisms and models they prefer, at the level of complexity they desire. Full article
Open AccessArticle Enhancing Trust in Autonomous Vehicles through Intelligent User Interfaces That Mimic Human Behavior
Multimodal Technologies Interact. 2018, 2(4), 62; https://doi.org/10.3390/mti2040062
Received: 19 July 2018 / Revised: 31 August 2018 / Accepted: 21 September 2018 / Published: 24 September 2018
Viewed by 534 | PDF Full-text (1456 KB) | HTML Full-text | XML Full-text
Abstract
Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving but are hesitant to relinquish control of the vehicle. Lack of trust seems to be the core reason for these concerns. To address this, an intelligent agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearance, the current approach uses anthropomorphism to shape the interaction, applying the Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and was perceived as more intelligent, than the graphical user interface. Additionally, an interface portrayed as confident in making decisions scored higher on all four constructs than one portrayed as having low confidence. Together, these results indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people’s trust in, and consequently their acceptance of, such vehicles. Full article
(This article belongs to the Special Issue Intelligent Virtual Agents)
Open AccessArticle Effect of Sensory Feedback on Turn-Taking Using Paired Devices for Children with ASD
Multimodal Technologies Interact. 2018, 2(4), 61; https://doi.org/10.3390/mti2040061
Received: 20 July 2018 / Revised: 28 August 2018 / Accepted: 18 September 2018 / Published: 20 September 2018
Viewed by 413 | PDF Full-text (577 KB) | HTML Full-text | XML Full-text
Abstract
Most children can naturally engage in play and, by doing so, develop skills while interacting with their peers and toys. However, children with Autism Spectrum Disorder (ASD) often show impairments in play skills, which result in limited opportunities for interaction with others and for learning social skills. In this regard, robotic devices/toys that can provide simple and attractive indications are advantageous for engaging children with ASD in play activities that require social and interaction skills. This project proposes a new interaction method using paired robotic devices called COLOLO to facilitate turn-taking, a fundamental exchange of intention in communication. These tangible devices are designed to sense the user’s manipulation, send a message to the paired device, and display visual cues that assist children in achieving turn-taking through play. In sessions with COLOLO, there are two devices, one held by the therapist and one by the child, and the two take turns manipulating the devices and changing their colors. In this article, two experimental conditions, or interaction rules, were introduced: the “two-sided lighting rule” and the “one-sided lighting rule”. The two interaction rules differ in the way the devices use visual cues to indicate the turn-holder. The effect of each interaction rule on children’s turn-taking behaviors was investigated in an experimental study with four children with ASD. From the results, we found that, under the one-sided lighting rule, participants tended to shift their gaze more and made fewer failed turn-taking attempts. The discussion covers the possibilities of using paired devices to quantitatively describe participants’ turn-taking-related behaviors. Full article
(This article belongs to the Special Issue Human Computer Interaction in Education)
Multimodal Technologies Interact. EISSN 2414-4088 Published by MDPI AG, Basel, Switzerland