
Table of Contents

Multimodal Technologies Interact., Volume 2, Issue 3 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-27
Open Access Article Perceptions on Authenticity in Chat Bots
Multimodal Technologies Interact. 2018, 2(3), 60; https://doi.org/10.3390/mti2030060
Received: 19 June 2018 / Revised: 6 September 2018 / Accepted: 12 September 2018 / Published: 17 September 2018
PDF Full-text (248 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In 1950, Alan Turing proposed his concept of universal machines, emphasizing their abilities to learn, think, and behave in a human-like manner. Today, the existence of intelligent agents imitating human characteristics is more relevant than ever. They have expanded to numerous aspects of daily life. Yet, while they are often seen as work simplifiers, their interactions usually lack social competence. In particular, they miss what one may call authenticity. In the study presented in this paper, we explore how characteristics of social intelligence may enhance future agent implementations. Interviews and an open question survey with experts from different fields have led to a shared understanding of what it would take to make intelligent virtual agents, in particular messaging agents (i.e., chat bots), more authentic. Results suggest that showcasing a transparent purpose, learning from experience, anthropomorphizing, human-like conversational behavior, and coherence, are guiding characteristics for agent authenticity and should consequently allow for and support a better coexistence of artificial intelligence technology with its respective users. Full article
(This article belongs to the Special Issue Intelligent Virtual Agents)
Open Access Article The Communicative Effectiveness of Education Videos: Towards an Empirically-Motivated Multimodal Account
Multimodal Technologies Interact. 2018, 2(3), 59; https://doi.org/10.3390/mti2030059
Received: 20 April 2018 / Revised: 15 August 2018 / Accepted: 4 September 2018 / Published: 12 September 2018
PDF Full-text (6386 KB) | HTML Full-text | XML Full-text
Abstract
Educational content of many kinds and from many disciplines is increasingly presented in the form of short videos made broadly accessible via platforms such as YouTube. We argue that understanding how such communicative forms function effectively (or not) demands a more thorough theoretical foundation in the principles of multimodal communication that is also capable of engaging with, and driving, empirical studies. We introduce the basic concepts adopted and discuss an empirical study showing how functional measures derived from the theory of multimodality we employ align with the results of a recipient-based study that we conducted. We situate these results with respect to the state of the art in cognitive research on multimodal learning and argue that the more complex multimodal interactions and artifacts become, the more a fine-grained view of multimodal communication of the kind we propose will be essential for engaging with such media, both theoretically and empirically. Full article
(This article belongs to the Special Issue Multimodal Learning)

Open Access Editorial Digital Cultural Heritage
Multimodal Technologies Interact. 2018, 2(3), 58; https://doi.org/10.3390/mti2030058
Received: 7 September 2018 / Accepted: 10 September 2018 / Published: 11 September 2018
PDF Full-text (198 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Digital Cultural Heritage)
Open Access Review Review of Deep Learning Methods in Robotic Grasp Detection
Multimodal Technologies Interact. 2018, 2(3), 57; https://doi.org/10.3390/mti2030057
Received: 31 May 2018 / Revised: 22 August 2018 / Accepted: 3 September 2018 / Published: 7 September 2018
PDF Full-text (877 KB) | HTML Full-text | XML Full-text
Abstract
For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection requires expert human knowledge to analytically form the task-specific algorithm, but this is an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The successful results of these methods have driven robotics researchers to explore the use of deep learning methods in task-generalised robotic applications. This paper reviews the current state of the art in the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved the overall performance of robotic grasp detection. Several of the most promising approaches are evaluated, and the most suitable for real-time grasp detection is identified as the one-shot detection method. The availability of suitable volumes of appropriate training data is identified as a major obstacle to effective utilisation of deep learning approaches, and the use of transfer learning techniques is proposed as a potential mechanism to address this. Finally, current trends in the field and future potential research directions are discussed. Full article
(This article belongs to the Special Issue Deep Learning)

Open Access Article Architecting 3D Interactive Educational Applications for the Web: The Case Study of CrystalWalk
Multimodal Technologies Interact. 2018, 2(3), 56; https://doi.org/10.3390/mti2030056
Received: 20 August 2018 / Revised: 1 September 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
PDF Full-text (2518 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes the technical development of CrystalWalk, a crystal editor and visualization software package designed for teaching materials science and engineering, aiming to provide an accessible and interactive platform to students, professors and researchers. Justified by the lack of proper didactic tools, an evaluation of the existing crystallographic software further revealed opportunities for the development of a new software package more focused on the educational approach. CrystalWalk’s development was guided by principles of free software, accessibility and democratization of knowledge, which was reflected in the application’s architecture strategy and the adoption of state-of-the-art technologies for the development of interactive web applications, such as HTML5/WebGL, service-oriented architecture (SOA) and responsive, resilient and elastic distributed systems. CrystalWalk’s architecture was successful in supporting the implementation of all specified software requirements proposed by state-of-the-art research and is deemed to exert a positive impact on building accessible 3D interactive educational applications for the web. Full article

Open Access Article Tele-Guidance System to Support Anticipation during Communication
Multimodal Technologies Interact. 2018, 2(3), 55; https://doi.org/10.3390/mti2030055
Received: 1 August 2018 / Revised: 29 August 2018 / Accepted: 29 August 2018 / Published: 6 September 2018
PDF Full-text (7215 KB) | HTML Full-text | XML Full-text
Abstract
Tele-guidance systems for the remote monitoring and maintenance of equipment have been extensively investigated. Such systems enable a remote helper to provide guidance to a local worker while perceiving local conditions. In this study, we propose a tele-guidance system that supports the anticipation of an interlocutor’s actions during communication. Our proposed system enables a helper and worker to anticipate each other’s actions by allowing them to move around in the workspace freely and observe each other’s non-verbal cues (e.g., body motions and other gestures) through a head-mounted display. We conducted an experiment to compare the effectiveness of our proposed method with that of existing methods (a simple tele-pointer) that support anticipation during communication. Full article
(This article belongs to the Special Issue Spatial Augmented Reality)

Open Access Article Tangible Representational Properties: Implications for Meaning Making
Multimodal Technologies Interact. 2018, 2(3), 54; https://doi.org/10.3390/mti2030054
Received: 22 June 2018 / Revised: 20 August 2018 / Accepted: 3 September 2018 / Published: 5 September 2018
PDF Full-text (1789 KB) | HTML Full-text | XML Full-text
Abstract
Tangible technologies are considered promising tools for learning, by enabling multimodal interaction through physical action and manipulation of physical and digital elements, thus facilitating representational concrete–abstract links. A key concept in a tangible system is that its physical components are objects of interest, with associated meanings relevant to the context. Tangible technologies are said to provide ‘natural’ mappings that employ spatial analogies and adhere to cultural standards, capitalising on people’s familiarity with the physical world. Students with intellectual disabilities particularly benefit from interaction with tangibles, given their difficulties with perception and abstraction. However, symbolic information does not always have an obvious physical equivalent, and meanings do not reside in the representations used in the artefacts themselves, but in the ways they are manipulated and interpreted. In educational contexts, meaning attached to artefacts by designers is not necessarily transparent to students, nor interpreted by them as the designer predicted. Using artefacts and understanding their significance is of utmost importance for the construction of knowledge within the learning process; hence the need to study the use of the artefacts in contexts of practice and how they are transformed by the students. This article discusses how children with intellectual disabilities conceptually interpreted the elements of four tangible artefacts, and which characteristics of these tangibles were key for productive, multimodal interaction, thus potentially guiding designers and educators. Analysis shows the importance of designing physical-digital semantic mappings that capitalise on conceptual metaphors related to children’s familiar contexts, rather than using more abstract representations. 
Such metaphorical connections, preferably building on physical properties, contribute to children’s comprehension and facilitate their exploration of the systems. Full article
(This article belongs to the Special Issue Multimodal Learning)

Open Access Review Reviews of Social Embodiment for Design of Non-Player Characters in Virtual Reality-Based Social Skill Training for Autistic Children
Multimodal Technologies Interact. 2018, 2(3), 53; https://doi.org/10.3390/mti2030053
Received: 6 July 2018 / Revised: 17 August 2018 / Accepted: 28 August 2018 / Published: 4 September 2018
PDF Full-text (470 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this paper is to review the scholarly works regarding social embodiment as it relates to the design of non-player characters in virtual reality (VR)-based social skill training for autistic children. VR-based social skill training provides a naturalistic environment that allows autistic children to shape socially appropriate behaviors for the real world. To build such a training environment, it is necessary to identify how to simulate social components in the training. In particular, the design of non-player characters (NPCs) is essential to determining the quality of the simulated social interactions during the training. Through this literature review, this study proposes multiple design themes that underline the nature of social embodiment in which interactions with NPCs in VR-based social skill training take place. Full article

Open Access Article Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot
Multimodal Technologies Interact. 2018, 2(3), 52; https://doi.org/10.3390/mti2030052
Received: 18 July 2018 / Revised: 17 August 2018 / Accepted: 28 August 2018 / Published: 3 September 2018
PDF Full-text (6815 KB) | HTML Full-text | XML Full-text
Abstract
Social robots are being designed to help support people’s well-being in domestic and public environments. To address increasing incidences of psychological and emotional difficulties such as loneliness, and a shortage of human healthcare workers, we believe that robots will also play a useful role in engaging with people in therapy, on an emotional and creative level, e.g., in music, drama, playing, and art therapy. Here, we focus on the latter case, on an autonomous robot capable of painting with a person. A challenge is that the theoretical foundations are highly complex; we are only just beginning ourselves to understand emotions and creativity in human science, which have been described as highly important challenges in artificial intelligence. To gain insight, we review some of the literature on robots used for therapy and art, potential strategies for interacting, and mechanisms for expressing emotions and creativity. In doing so, we also suggest the usefulness of the responsive art approach as a starting point for art therapy robots, describe a perceived gap between our understanding of emotions in human science and what is currently typically being addressed in engineering studies, and identify some potential ethical pitfalls and solutions for avoiding them. Based on our arguments, we propose a design for an art therapy robot, also discussing a simplified prototype implementation, toward informing future work in the area. Full article

Open Access Review Animals Make Music: A Look at Non-Human Musical Expression
Multimodal Technologies Interact. 2018, 2(3), 51; https://doi.org/10.3390/mti2030051
Received: 20 April 2018 / Revised: 17 August 2018 / Accepted: 28 August 2018 / Published: 2 September 2018
PDF Full-text (3353 KB) | HTML Full-text | XML Full-text
Abstract
The use of musical instruments and interfaces that involve animals in the interaction process is an emerging, yet not widespread practice. The projects that have been implemented in this unusual field are raising questions concerning ethical principles, animal-centered design processes, and the possible benefits and risks for the animals involved. Animal–Computer Interaction is a novel field of research that offers a framework (ACI manifesto) for implementing interactive technology for animals. Based on this framework, we have examined several projects focusing on the interplay between animals and music technology in order to arrive at a better understanding of animal-based musical projects. Building on this, we will discuss how the implementation of new musical instruments and interfaces could provide new opportunities for improving the quality of life for grey parrots living in captivity. Full article
(This article belongs to the Special Issue Multimodal Technologies in Animal–Computer Interaction)

Open Access Article Maker Literacies and Maker Citizenship in the MakEY (Makerspaces in the Early Years) Project
Multimodal Technologies Interact. 2018, 2(3), 50; https://doi.org/10.3390/mti2030050
Received: 24 May 2018 / Revised: 11 August 2018 / Accepted: 17 August 2018 / Published: 28 August 2018
PDF Full-text (4020 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the potential relationship between creative citizenship and what may be termed ‘maker literacies’ is examined in the light of emergent findings from an international project on the use of makerspaces in early childhood, “MakEY” (see http://makeyproject.eu). The paper outlines the concept of creative citizenship and considers the notion of maker literacies before moving on to examine how maker literacies might be developed in early-years curricula in ways that foster civic engagement. Three vignettes are offered of makerspaces in early-years settings and a museum in Finland, Norway, and the UK. The activities outlined in the vignettes might be conceived of as ‘maker citizenship’, a concept which draws together understandings of making, digital literacies, and citizenship. The paper considers the implications of this analysis for future research and practice. Full article
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article To Boldly Go: Feedback as Digital, Multimodal Dialogue
Multimodal Technologies Interact. 2018, 2(3), 49; https://doi.org/10.3390/mti2030049
Received: 22 May 2018 / Revised: 16 August 2018 / Accepted: 22 August 2018 / Published: 27 August 2018
PDF Full-text (784 KB) | HTML Full-text | XML Full-text
Abstract
This article is concerned with digital, multimodal feedback that supports learning and assessment within education. Drawing on the research literature alongside a case study from a postgraduate program in digital education, I argue that approaching feedback as an ongoing dialogue presented in richly multimodal and digital form can support opportunities for learning that are imaginative, critical, and in-tune with our increasingly digitally-mediated society. Using the examples of a reflective blogging exercise and an assignment built in the Second Life virtual world, I demonstrate how the tutor’s emphasis on providing feedback in multimodal form, alongside more conventional print-based approaches, inspired and emboldened students towards the creation of apt and sophisticated coursework. At the same time, the crafting of multimodal feedback carries resource implications and can sit uncomfortably with some deep-rooted assumptions around language-based representations of academic knowledge. This article should be seen in the context of a growing pedagogic and institutional interest in feedback around assessment, alongside the emergence of new ways of communicating and consuming academic content in richly multimodal ways. In this setting, multimodality, technology, and interaction refers to the digitally-mediated dialogue that takes place between the student and tutor around assessment. Full article
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article The Impact of Multimodal Communication on a Shared Mental Model, Trust, and Commitment in Human–Intelligent Virtual Agent Teams
Multimodal Technologies Interact. 2018, 2(3), 48; https://doi.org/10.3390/mti2030048
Received: 19 June 2018 / Revised: 26 July 2018 / Accepted: 15 August 2018 / Published: 18 August 2018
PDF Full-text (2099 KB) | HTML Full-text | XML Full-text
Abstract
There is an increasing interest in the use of intelligent virtual agents (IVAs) to work in teams with humans. To achieve successful outcomes for these heterogeneous teams, many of the aspects found in successful human teams will need to be supported. These aspects include the behavioural (i.e., multimodal communication), the cognitive (i.e., a shared mental model (SMM)), and the social (trust and commitment). This paper investigates the impact of an IVA’s multimodal communication on the development of an SMM between humans and IVAs. Moreover, it explores the impact of the developed SMM on a human’s trust in an IVA’s decisions and a human’s commitment to honour his/her promises to an IVA. The results from two studies involving a collaborative activity showed a significant positive correlation between team multimodal communication (i.e., the behavioural aspect) and an SMM between teammates (i.e., the cognitive aspect). Moreover, the results showed a significant positive correlation between the developed SMM and a human’s trust in the IVA’s decisions and the human’s commitment to honour his/her promises (the establishment of the social aspect of teamwork). Additionally, the results showed a cumulative effect of all of these aspects on human–agent team performance. These results can guide the design of human–agent teamwork multimodal communication models. Full article
(This article belongs to the Special Issue Intelligent Virtual Agents)

Open Access Review Deep Learning and Medical Diagnosis: A Review of Literature
Multimodal Technologies Interact. 2018, 2(3), 47; https://doi.org/10.3390/mti2030047
Received: 20 June 2018 / Revised: 10 August 2018 / Accepted: 14 August 2018 / Published: 17 August 2018
PDF Full-text (442 KB) | HTML Full-text | XML Full-text
Abstract
In this review, the application of deep learning to medical diagnosis is addressed. A thorough analysis of various scientific articles in the domain of deep neural network applications in the medical field was conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles are presented in more detail. The results indicate that convolutional neural networks (CNNs) are the most widely represented when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this article, it can be noted that the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis and other similar fields. Full article
(This article belongs to the Special Issue Deep Learning)

Open Access Article Analyzing Iterative Training Game Design: A Multi-Method Postmortem Analysis of CYCLES Training Center and CYCLES Carnivale
Multimodal Technologies Interact. 2018, 2(3), 46; https://doi.org/10.3390/mti2030046
Received: 18 June 2018 / Revised: 31 July 2018 / Accepted: 7 August 2018 / Published: 10 August 2018
PDF Full-text (3392 KB) | HTML Full-text | XML Full-text
Abstract
That games can be used to teach specific content has been demonstrated numerous times. However, although specific game features have been conjectured to have an impact on learning outcomes, little empirical research exists on the impact of iterative design on learning outcomes. This article analyzes two games that have been developed to train an adult audience to recognize and avoid relying on six cognitive biases (three per game) in their decision making. The games were developed iteratively and were evaluated through a series of experiments. Although the experimental manipulations did not find a significant impact of the manipulated game features on the learning outcomes, each game iteration proved more successful than its predecessors at training players. Here, we outline a mixed-methods approach to postmortem game design analysis that helps us understand what might account for the improvement across games, and to identify new variables for future experimental training game studies. Full article
(This article belongs to the Special Issue Human Computer Interaction in Education)

Open Access Article EyeSpot: Leveraging Gaze to Protect Private Text Content on Mobile Devices from Shoulder Surfing
Multimodal Technologies Interact. 2018, 2(3), 45; https://doi.org/10.3390/mti2030045
Received: 10 June 2018 / Revised: 23 July 2018 / Accepted: 25 July 2018 / Published: 9 August 2018
PDF Full-text (1345 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
As mobile devices allow access to an increasing amount of private data, using them in public can potentially leak sensitive information through shoulder surfing. This includes personal private data (e.g., in chat conversations) and business-related content (e.g., in emails). Leaking the former might infringe on users’ privacy, while leaking the latter is considered a breach of the EU’s General Data Protection Regulation as of May 2018. This creates a need for systems that protect sensitive data in public. We introduce EyeSpot, a technique that displays content through a spot that follows the user’s gaze while hiding the rest of the screen from an observer’s view through overlaid masks. We explore different configurations for EyeSpot in a user study in terms of users’ reading speed, text comprehension, and perceived workload. While our system is a proof of concept, we identify crystallized masks as a promising design candidate for further evaluation with regard to the security of the system in a shoulder surfing scenario. Full article
(This article belongs to the Special Issue Smart Devices Interaction)

Open Access Article Curating Inclusive Cities through Food and Art
Multimodal Technologies Interact. 2018, 2(3), 44; https://doi.org/10.3390/mti2030044
Received: 28 June 2018 / Revised: 26 July 2018 / Accepted: 31 July 2018 / Published: 4 August 2018
PDF Full-text (216 KB) | HTML Full-text | XML Full-text
Abstract
Flavours of Glenroy (2013–14) was an action research project in which artists imagined mobile edible gardens as a way to connect and engage with locals through project presentation and execution. As a socially engaged art project, it focused on developing ways to connect the mobile, diverse and transforming community of Glenroy, Victoria, Australia. The transnational, Australian dream suburb, reflecting the fluid and globalizing conditions of our cities, was emphasized through the strategy of growing and distributing plants using a mobile system that aligned with the mobility and diversity of the suburb. The project emphasized how social relations, encouraged through art, have the capacity to transform public spaces, providing a platform to introduce new voices and narratives of a community and encouraging inclusive participation in sustainable citizenship. Full article
(This article belongs to the Special Issue Human-Food Interaction)
Open Access Review Technology for Remote Health Monitoring in an Older Population: A Role for Mobile Devices
Multimodal Technologies Interact. 2018, 2(3), 43; https://doi.org/10.3390/mti2030043
Received: 31 May 2018 / Revised: 23 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
PDF Full-text (174 KB) | HTML Full-text | XML Full-text
Abstract
The impact of an aging population on healthcare and the sustainability of our healthcare system are pressing issues in contemporary society. Technology has the potential to address these challenges, alleviating pressures on the healthcare system and empowering individuals to have greater control over monitoring their own health. Importantly, mobile devices such as smartphones and tablets can allow older adults to have “on the go” access to health-related information. This paper explores mobile health apps that enable older adults and those who care for them to track health-related factors such as body readings and medication adherence, and it serves as a review of the literature on the usability and acceptance of mobile health apps in an older population. Full article
(This article belongs to the Special Issue Smart Devices Interaction)
Open AccessArticle Debugging in Programming as a Multimodal Practice in Early Childhood Education Settings
Multimodal Technologies Interact. 2018, 2(3), 42; https://doi.org/10.3390/mti2030042
Received: 21 May 2018 / Revised: 28 June 2018 / Accepted: 4 July 2018 / Published: 13 July 2018
PDF Full-text (1025 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this article is to broadly elaborate on how programming can be understood as a new teaching scope in preschools, focusing specifically on debugging as one of the phases involved in learning to program. The research question How can debugging as part of teaching and learning programming be understood as multimodal learning? has guided the analysis and the presentation of the data. In this study, and in its analysis process, we have combined a multimodal understanding of teaching and learning practices with understandings of programming and how it is practiced. Consequently, the multidisciplinary approach of this study, combining theories from the social sciences with theories and concepts from computer science, is central throughout the article. This is therefore also a creative, explorative process, as there are no clear norms to follow when conducting multidisciplinary analyses. The data consist of video recordings of teaching sessions in which children and a teacher were engaged in programming activities. The video material was recorded in a preschool setting during the school year 2017–2018 and consists of 25 sessions of programming activities with children aged four or five years. The results show how debugging in early childhood education is a multimodal activity socially established through the use of speech, pointing and gaze. Our findings also indicate that artefacts are central to learning debugging, and the term ‘instructional artefacts’ is therefore introduced. Finally, the material shows how basic programming concepts and principles can be explored with young children. Full article
(This article belongs to the Special Issue Multimodal Learning)
Open AccessArticle Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design
Multimodal Technologies Interact. 2018, 2(3), 41; https://doi.org/10.3390/mti2030041
Received: 23 May 2018 / Revised: 25 June 2018 / Accepted: 2 July 2018 / Published: 9 July 2018
PDF Full-text (2832 KB) | HTML Full-text | XML Full-text
Abstract
An increasing body of work provides evidence of the importance of bodily experience for cognition and the learning of mathematics. Sensor-based technologies have potential for guiding sensori-motor engagement with challenging mathematical ideas in new ways. Yet, designing environments that promote an appropriate sensori-motor interaction that effectively supports salient foundations of mathematical concepts is challenging, and requires an understanding of the opportunities and challenges that bodily interaction offers. This study aimed to better understand how young children can, and do, use their bodies to explore the geometrical concepts of angle and shape, and what contribution the different sensori-motor experiences make to the comprehension of mathematical ideas. Twenty-nine students aged 6–10 years participated in an exploratory study, with paired and group activities designed to elicit intuitive bodily enactment of angles and shape. Our analysis, focusing on moment-by-moment bodily interactions and attending to gesture, action, facial expression, body posture and talk, illustrated the ‘realms of possibilities’ of bodily interaction and highlighted challenges around ‘felt’ experience and egocentric vs. allocentric perception of the body during collaborative bodily enactment. These findings inform digital designs for sensory interaction that foreground salient geometric features and effectively support relevant forms of enactment to enhance the learning experience, supporting challenging aspects of interaction and exploiting the opportunities of the body. Full article
(This article belongs to the Special Issue Multimodal Learning)
Open AccessArticle Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone
Multimodal Technologies Interact. 2018, 2(3), 40; https://doi.org/10.3390/mti2030040
Received: 20 April 2018 / Revised: 21 June 2018 / Accepted: 22 June 2018 / Published: 3 July 2018
PDF Full-text (3059 KB) | HTML Full-text | XML Full-text
Abstract
We propose an animal-to-animal data sharing mechanism that employs wildlife-borne sensing devices to expand the size of monitoring areas in which electricity, information, and road infrastructures are either limited or nonexistent. With the proposed approach, monitoring information can be collected from remote areas in a safe and cost-effective manner. To substantially prolong the life of a sensor node, the proposed mechanism activates the communication capabilities only when there is a plurality of animals; otherwise, the sensor node remains in a sleep state. This study aimed to achieve three objectives. First, we intend to obtain knowledge based on the actual field operations within the Fukushima exclusion zone. Second, we attempt to realize an objective evaluation of the power supply and work base that is required to properly evaluate the proposed mechanism. Third, we intend to acquire data to support wildlife research, which is the objective of both our present and future research. Full article
(This article belongs to the Special Issue Multimodal Technologies in Animal–Computer Interaction)
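The duty-cycling rule described in the abstract can be sketched as a simple state transition; all names and the peer threshold below are illustrative assumptions, not details taken from the paper:

```python
from enum import Enum

class RadioState(Enum):
    SLEEP = 0    # default low-power state, prolonging the node's battery life
    ACTIVE = 1   # communication enabled for animal-to-animal data sharing

def next_state(detected_peers: int) -> RadioState:
    # A "plurality of animals" is assumed to mean the carrier plus at least
    # one other collared animal in radio range; only then is waking worthwhile,
    # since there is a peer to exchange monitoring data with.
    return RadioState.ACTIVE if detected_peers >= 1 else RadioState.SLEEP

alone = next_state(0)        # no peers detected: remain asleep
encounter = next_state(2)    # peers detected: wake and exchange data
```

The point of the rule is that the radio, the node's dominant power draw, is only powered when an exchange can actually occur.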
Open AccessArticle Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation
Multimodal Technologies Interact. 2018, 2(3), 39; https://doi.org/10.3390/mti2030039
Received: 22 May 2018 / Revised: 14 June 2018 / Accepted: 21 June 2018 / Published: 2 July 2018
PDF Full-text (3453 KB) | HTML Full-text | XML Full-text
Abstract
Theories of embodied cognition argue that human processes of thinking and reasoning are deeply connected with the actions and perceptions of the body. Recent research suggests that these theories can be successfully applied to the design of learning environments, and new technologies enable multimodal platforms that respond to students’ natural physical activity, such as their gestures. This study examines how students engaged with an embodied mixed-reality science learning simulation that uses advanced gesture recognition techniques to support full-body interaction. The simulation environment acts as a communication platform for students to articulate their understanding of non-linear growth within different science contexts. In particular, this study investigates the different multimodal interaction metrics that were generated as students attempted to make sense of cross-cutting science concepts using a personalized gesture scheme. Starting with video recordings of students’ full-body gestures, we examined the relationship between these embodied expressions and students’ subsequent success in reasoning about non-linear growth. We report the patterns that we identified and explicate our findings by detailing a few insightful cases of student interactions. Finally, we discuss implications for the design of multimodal interaction technologies and for the metrics used to investigate the different types of interactions students engage in while learning. Full article
(This article belongs to the Special Issue Multimodal Learning)
Open AccessArticle A Predictive Fingerstroke-Level Model for Smartwatch Interaction
Multimodal Technologies Interact. 2018, 2(3), 38; https://doi.org/10.3390/mti2030038
Received: 24 May 2018 / Revised: 19 June 2018 / Accepted: 25 June 2018 / Published: 2 July 2018
PDF Full-text (2477 KB) | HTML Full-text | XML Full-text
Abstract
The keystroke-level model (KLM) is commonly used to predict the time it will take an expert user to accomplish a task without errors when using an interactive system. The KLM was initially intended to predict interactions in conventional set-ups, i.e., mouse and keyboard interactions. However, it has since been adapted to predict interactions with smartphones, in-vehicle information systems, and natural user interfaces. The simplicity of the KLM and its extensions, along with their resource- and time-saving capabilities, has driven their adoption. In recent years, the popularity of smartwatches has grown, introducing new design challenges due to the small touch screens and bimanual interactions involved, which make current extensions to the KLM unsuitable for modelling smartwatches. Therefore, it is necessary to study these interfaces and interactions. This paper reports on three studies performed to modify the original KLM and its extensions for smartwatch interaction. First, an observational study was conducted to characterise smartwatch interactions. Second, the unit times for the observed interactions were derived through another study, in which the times required to perform the relevant physical actions were measured. Finally, a third study was carried out to validate the model for interactions with the Apple Watch and Samsung Gear S3. The results show that the new model can accurately predict the performance of smartwatch users with a percentage error of 12.07%, which falls below the ~21% error deemed acceptable for the original KLM. Full article
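A KLM-style prediction is, at its core, a sum of unit times over the sequence of operators a task requires, validated against observed times via percentage error. A minimal sketch of that arithmetic follows; the operator names and unit times are hypothetical placeholders, not the values derived in the paper:

```python
# Illustrative operator unit times in seconds (assumptions, not the paper's).
UNIT_TIMES = {
    "home": 0.40,     # bring the watch/hand into position (classic H operator)
    "mental": 1.35,   # mental preparation (classic M operator)
    "tap": 0.28,      # tap a target on the small touch screen
    "swipe": 0.40,    # swipe gesture
}

def predict_time(operators):
    """Predicted expert, error-free task time = sum of operator unit times."""
    return sum(UNIT_TIMES[op] for op in operators)

def percentage_error(predicted, observed):
    """Relative error used to validate KLM-style models against measurements."""
    return abs(predicted - observed) / observed * 100

# Hypothetical task: position hand, think, tap twice, swipe once.
task = ["home", "mental", "tap", "tap", "swipe"]
pred = predict_time(task)                      # 0.40 + 1.35 + 0.28 + 0.28 + 0.40 = 2.71 s
err = percentage_error(pred, observed=2.45)    # error vs. a measured task time
```

The model in the paper is validated the same way: predicted times are compared against measured user times, and the resulting percentage error (12.07%) is judged against the ~21% considered acceptable for the original KLM.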
Open AccessArticle What Characterizes the Polymodal Media of the Mobile Phone? The Multiple Media within the World’s Most Popular Medium
Multimodal Technologies Interact. 2018, 2(3), 37; https://doi.org/10.3390/mti2030037
Received: 1 June 2018 / Revised: 21 June 2018 / Accepted: 21 June 2018 / Published: 26 June 2018
PDF Full-text (1867 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
While the mobile phone is the world’s most popular media device, it is actually not one single medium, but is effectively used as a different medium by different user groups. The article characterizes polymodal differences in mobile app usage among different user groups, including gender, education, occupation, screen size, and price. We monitored the complete app usage of 10,725 smartphone users for one month each (56 million sessions, recording almost 1 million hours). Our key contribution is the development and analysis of a theoretical framework that classifies the more than 16,000 apps used into five categories. Exploring nine research questions, we provide a broad characterization by asking: who, with which characteristics, uses which kinds of apps, and with what extensity and intensity? For example, it is not the young and those in high occupational grades who use the mobile phone as a human-to-machine computer (including gaming and artificial intelligence tools). A large screen size is related to extensive long sessions, while a small screen size is related to intensive frequent usage. The results not only provide ample empirical evidence for the inherently polymodal nature of the mobile phone, but also propose a framework for dealing with it analytically. Full article
Open AccessArticle An Exploratory Study of the Uses of a Multisensory Map—With Visually Impaired Children
Multimodal Technologies Interact. 2018, 2(3), 36; https://doi.org/10.3390/mti2030036
Received: 21 May 2018 / Revised: 19 June 2018 / Accepted: 21 June 2018 / Published: 24 June 2018
PDF Full-text (10585 KB) | HTML Full-text | XML Full-text
Abstract
This paper reports an empirical study of a multisensory map used by visually impaired primary school pupils to study human habitats and the differences between urban, suburban and rural areas using a local example. Using multimodal analysis, we propose to examine how the use of smell and taste shapes pupils’ engagement and the development of a non-visual knowledge of geography. Our research questions include: How do pupils try to make sense of this unusual material, in conjunction with the tactile, audio and tangible material used in this lesson? How does the special education teacher support the development of these interpretations? Multisensory material has the potential to support experiential and embodied learning: were these promises achieved? Our findings show how this multisensory map reconfigures spatial occupation and interaction dynamics, and that it has the potential to make the classroom more pervasive to pupils’ social, spatial and emotional lives. In doing so, it provides opportunities for the teacher to develop citizenship education. The paper provides concrete examples of the uses of smell and taste in learning activities to support engagement, and has implications for pedagogical design beyond special education. Full article
(This article belongs to the Special Issue Multimodal Learning)
Open AccessArticle Documenting the Elusive and Ephemeral in Embodied Design Ideation Activities
Multimodal Technologies Interact. 2018, 2(3), 35; https://doi.org/10.3390/mti2030035
Received: 12 November 2017 / Revised: 15 June 2018 / Accepted: 18 June 2018 / Published: 24 June 2018
PDF Full-text (48186 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Documenting embodied ideation activities is challenging, as they often result in ephemeral design constructs and elusive design knowledge that are difficult to document and represent. Here, we explore documentation forms designers can use internally during the design process, in the domain of movement-based interaction in collocated, social settings. Drawing on previous work and our experience from embodied ideation workshops, we propose three documentation forms with complementary perspectives on embodied action from a first- and a third-person view. We discuss how they capture ephemeral embodied action and elusive design and experiential knowledge in relation to two interdependent aspects of documentation forms: their performativity and the medium they use. The novelty of these forms lies in what is being captured: ephemeral design constructs that emerge as designers engage with the embodied ideation activity; how it is portrayed: in aggregation forms that highlight elusive design knowledge; and their purpose: to clarify and augment analytical results, improving the designer-researchers’ understanding of key aspects of the embodied ideation process and its outcomes, which is useful for advancing the design process and for research dissemination. Full article
(This article belongs to the Special Issue Designing for the Body)
Open AccessArticle Wunderkammers: Powerful Metaphors for ‘Tangible’ Experiential Knowledge Building
Multimodal Technologies Interact. 2018, 2(3), 34; https://doi.org/10.3390/mti2030034
Received: 30 April 2018 / Revised: 8 June 2018 / Accepted: 19 June 2018 / Published: 22 June 2018
PDF Full-text (4917 KB) | HTML Full-text | XML Full-text
Abstract
Research problem: The paper identifies the need to support powerful metaphors that capture the innovations of newly emerging human-computer interaction (HCI) technologies and innovative question and answering (Q&A) systems in the context of spatial learning and inquiry-based learning in education. Aim/goals of the research: To explore the potential of the ‘Wunderkammer’ (curiosity cabinet) as a powerful metaphor for designing new types of learning experiences that cater for an ecology of artefacts (real or virtual objects), providing a holistic context for educators to share and extend learning in action. Conclusions: We provide insight into the emergence of smart interactive objects with different types of sensors that can potentially support everyday life, and into the increasing access to new visual experiences through augmented reality and virtual reality, enabling new types of tangible knowledge building that can be personalised and shared. This reshaping of human-centred design, in which tangible creations externalise the creative power of the ‘imaginations of movement’ in real time and through new materials, yields a new user-experience design thinking grounded in powerful metaphors, providing core design requirements for contexts where the blending of worlds is commonplace. Full article
(This article belongs to the Special Issue Human Computer Communications and Internet of Things)