Table of Contents

Multimodal Technologies Interact., Volume 2, Issue 3 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-9
Open Access Article Debugging in Programming as a Multimodal Practice in Early Childhood Education Settings
Multimodal Technologies Interact. 2018, 2(3), 42; https://doi.org/10.3390/mti2030042
Received: 21 May 2018 / Revised: 28 June 2018 / Accepted: 4 July 2018 / Published: 13 July 2018
PDF Full-text (1025 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this article is to broadly elaborate on how programming can be understood as a new teaching scope in preschools, focusing specifically on debugging as one of the phases involved in learning to program. The research question "How can debugging as part of teaching and learning programming be understood as multimodal learning?" has guided the analysis and the presentation of the data. In this study and its analysis process, we have combined a multimodal understanding of teaching and learning practices with understandings of programming and how it is practiced. Consequently, the multidisciplinary approach of this study, combining theories from the social sciences with theories and concepts from computer science, is central throughout the article. This is therefore also a creative, explorative process, as there are no clear norms to follow when conducting multidisciplinary analyses. The data consist of video recordings of teaching sessions with children and a teacher engaged in programming activities. The video material was recorded in a preschool setting during the 2017–2018 school year and comprises 25 sessions of programming activities with children who were four or five years old. The results show how debugging in early childhood education is a multimodal activity socially established through speech, pointing and gaze. Our findings also indicate that artefacts are central to learning debugging, and the term 'instructional artefacts' is therefore introduced. Finally, the material shows how basic programming concepts and principles can be explored with young children.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design
Multimodal Technologies Interact. 2018, 2(3), 41; https://doi.org/10.3390/mti2030041
Received: 23 May 2018 / Revised: 25 June 2018 / Accepted: 2 July 2018 / Published: 9 July 2018
PDF Full-text (2832 KB) | HTML Full-text | XML Full-text
Abstract
An increasing body of work provides evidence of the importance of bodily experience for cognition and the learning of mathematics. Sensor-based technologies have the potential to guide sensori-motor engagement with challenging mathematical ideas in new ways. Yet designing environments that promote sensori-motor interaction that effectively supports the salient foundations of mathematical concepts is challenging, and requires an understanding of the opportunities and challenges that bodily interaction offers. This study aimed to better understand how young children can, and do, use their bodies to explore the geometrical concepts of angle and shape, and what contribution the different sensori-motor experiences make to the comprehension of mathematical ideas. Twenty-nine students aged 6–10 years participated in an exploratory study, with paired and group activities designed to elicit intuitive bodily enactment of angles and shapes. Our analysis, focusing on moment-by-moment bodily interactions and attending to gesture, action, facial expression, body posture and talk, illustrated the 'realms of possibilities' of bodily interaction, and highlighted challenges around 'felt' experience and egocentric vs. allocentric perception of the body during collaborative bodily enactment. These findings inform digital designs for sensory interaction that foreground salient geometric features and effectively support relevant forms of enactment to enhance the learning experience, supporting challenging aspects of interaction and exploiting the opportunities of the body.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone
Multimodal Technologies Interact. 2018, 2(3), 40; https://doi.org/10.3390/mti2030040
Received: 20 April 2018 / Revised: 21 June 2018 / Accepted: 22 June 2018 / Published: 3 July 2018
PDF Full-text (3059 KB) | HTML Full-text | XML Full-text
Abstract
We propose an animal-to-animal data sharing mechanism that employs wildlife-borne sensing devices to expand the size of monitoring areas in which electricity, information, and road infrastructures are limited or nonexistent. With the proposed approach, monitoring information can be collected from remote areas in a safe and cost-effective manner. To substantially prolong the life of a sensor node, the proposed mechanism activates its communication capabilities only when a plurality of animals is present; otherwise, the sensor node remains in a sleep state. This study aimed to achieve three objectives. First, we intend to obtain knowledge based on actual field operations within the Fukushima exclusion zone. Second, we attempt to realize an objective evaluation of the power supply and work base required to properly evaluate the proposed mechanism. Third, we intend to acquire data to support wildlife research, which is the objective of both our present and future research.
(This article belongs to the Special Issue Multimodal Technologies in Animal–Computer Interaction)
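The core of the mechanism described in the abstract is a duty cycle: the node keeps its radio off and wakes it only when it detects that more than one tagged animal is nearby. A minimal Python sketch of that idea follows; the class, method names, and detection logic are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch of a duty-cycled wildlife-borne sensor node.
# All names here are hypothetical; the paper's actual protocol is not reproduced.

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.radio_on = False   # radio sleeps by default to conserve battery
        self.shared_log = []    # data records received from peer animals

    def on_proximity_scan(self, detected_peers):
        """Called after a low-power scan for nearby tagged animals.

        A 'plurality of animals' means this node plus at least one peer,
        so the radio is enabled only when a peer is detected.
        """
        self.radio_on = len(detected_peers) >= 1

    def exchange(self, peer):
        """Share monitoring data animal-to-animal while both radios are on."""
        if self.radio_on and peer.radio_on:
            self.shared_log.append(peer.node_id)
            peer.shared_log.append(self.node_id)

# Two tagged animals meet; their nodes wake up and exchange data.
a = SensorNode("boar-01")
b = SensorNode("boar-02")
a.on_proximity_scan(["boar-02"])
b.on_proximity_scan(["boar-01"])
a.exchange(b)

# A solitary animal's node stays asleep.
c = SensorNode("boar-03")
c.on_proximity_scan([])
```

The design choice this illustrates is that energy is spent on communication only during encounters, which is what lets such nodes operate for long periods in infrastructure-free areas.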

Open Access Article Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation
Multimodal Technologies Interact. 2018, 2(3), 39; https://doi.org/10.3390/mti2030039
Received: 22 May 2018 / Revised: 14 June 2018 / Accepted: 21 June 2018 / Published: 2 July 2018
PDF Full-text (3453 KB) | HTML Full-text | XML Full-text
Abstract
Theories of embodied cognition argue that human processes of thinking and reasoning are deeply connected with the actions and perceptions of the body. Recent research suggests that these theories can be successfully applied to the design of learning environments, and new technologies enable multimodal platforms that respond to students' natural physical activity, such as their gestures. This study examines how students engaged with an embodied mixed-reality science learning simulation that uses advanced gesture recognition techniques to support full-body interaction. The simulation environment acts as a communication platform for students to articulate their understanding of non-linear growth within different science contexts. In particular, this study investigates the different multimodal interaction metrics that were generated as students attempted to make sense of cross-cutting science concepts using a personalized gesture scheme. Starting with video recordings of students' full-body gestures, we examined the relationship between these embodied expressions and students' subsequent success in reasoning about non-linear growth. We report the patterns we identified and explicate our findings by detailing a few insightful cases of student interaction. Implications are discussed for the design of multimodal interaction technologies and for the metrics used to investigate different types of student interaction during learning.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article A Predictive Fingerstroke-Level Model for Smartwatch Interaction
Multimodal Technologies Interact. 2018, 2(3), 38; https://doi.org/10.3390/mti2030038
Received: 24 May 2018 / Revised: 19 June 2018 / Accepted: 25 June 2018 / Published: 2 July 2018
PDF Full-text (2477 KB) | HTML Full-text | XML Full-text
Abstract
The keystroke-level model (KLM) is commonly used to predict the time it will take an expert user to accomplish a task without errors when using an interactive system. The KLM was initially intended to predict interactions in conventional set-ups, i.e., mouse and keyboard interactions. However, it has since been adapted to predict interactions with smartphones, in-vehicle information systems, and natural user interfaces. The simplicity of the KLM and its extensions, along with their resource- and time-saving capabilities, has driven their adoption. In recent years, the popularity of smartwatches has grown, introducing new design challenges due to the small touch screens and bimanual interactions involved, which make current extensions to the KLM unsuitable for modelling smartwatches. Therefore, it is necessary to study these interfaces and interactions. This paper reports on three studies performed to modify the original KLM and its extensions for smartwatch interaction. First, an observational study was conducted to characterise smartwatch interactions. Second, the unit times for the observed interactions were derived through another study, in which the times required to perform the relevant physical actions were measured. Finally, a third study was carried out to validate the model for interactions with the Apple Watch and Samsung Gear S3. The results show that the new model can accurately predict the performance of smartwatch users with a percentage error of 12.07%, a value that falls below the acceptable error of approximately 21% dictated by the original KLM.
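A KLM-style model predicts expert, error-free task time as the sum of unit times for the elementary operators in the task, and validation compares that prediction against observed times as a percentage error. The following minimal sketch shows this calculation; the operator names and unit times are hypothetical placeholders, not the smartwatch unit times measured in the paper.

```python
# Minimal sketch of a keystroke-level-model (KLM) style time prediction.
# Operator names and unit times are hypothetical, for illustration only.

# Hypothetical unit times in seconds for elementary smartwatch operators.
UNIT_TIMES = {
    "tap": 0.20,     # single touch on the screen
    "swipe": 0.30,   # directional swipe gesture
    "mental": 1.35,  # mental preparation (the classic KLM "M" operator)
    "home": 0.40,    # bring the watch into viewing position
}

def predict_task_time(operators):
    """Predict expert, error-free task time as the sum of operator unit times."""
    return sum(UNIT_TIMES[op] for op in operators)

def percentage_error(predicted, observed):
    """Relative prediction error (%), as used to validate KLM extensions."""
    return abs(predicted - observed) / observed * 100

# Example: raise the watch, decide, tap a notification, swipe it away.
task = ["home", "mental", "tap", "swipe"]
predicted = predict_task_time(task)  # 0.40 + 1.35 + 0.20 + 0.30 = 2.25 s
```

With an observed task time of, say, 2.5 s, `percentage_error(predicted, 2.5)` gives 10%, which is how a figure such as the paper's 12.07% error would be computed against measured user performance.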

Open Access Article What Characterizes the Polymodal Media of the Mobile Phone? The Multiple Media within the World’s Most Popular Medium
Multimodal Technologies Interact. 2018, 2(3), 37; https://doi.org/10.3390/mti2030037
Received: 1 June 2018 / Revised: 21 June 2018 / Accepted: 21 June 2018 / Published: 26 June 2018
PDF Full-text (1867 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
While the mobile phone is the world’s most popular media device, it is actually not one single medium, but is effectively used as a different medium by different user groups. This article characterizes polymodal differences in mobile app usage among different user groups, considering gender, education, occupation, screen size, and price. We monitored the complete app usage of 10,725 smartphone users for one month each (56 million sessions, recording almost 1 million hours). Our key contribution consists in developing and analyzing a theoretical framework to classify the over 16,000 apps used into five categories. Exploring nine research questions, we provide a broad characterization by asking: who, with which characteristics, uses which kinds of apps, and with what extensity and intensity? For example, it is not the young and high occupational grades that use the mobile phone as a human-to-machine computer (including gaming and artificial intelligence tools). A large screen size is related to extensive, long sessions, while a small screen size is related to intensive, frequent usage. The results not only provide ample empirical evidence for the inherently polymodal nature of the mobile phone, but also propose a framework for dealing with it analytically.

Open Access Article An Exploratory Study of the Uses of a Multisensory Map—With Visually Impaired Children
Multimodal Technologies Interact. 2018, 2(3), 36; https://doi.org/10.3390/mti2030036
Received: 21 May 2018 / Revised: 19 June 2018 / Accepted: 21 June 2018 / Published: 24 June 2018
PDF Full-text (10585 KB) | HTML Full-text | XML Full-text
Abstract
This paper reports an empirical study of a multisensory map used by visually impaired primary school pupils to study human habitats and the differences between urban, suburban and rural areas using a local example. Using multimodal analysis, we examine how the use of smell and taste shapes pupils’ engagement and the development of a non-visual knowledge of geography. Our research questions include: How do pupils try to make sense of this unusual material, in conjunction with the tactile, audio and tangible material used in this lesson? How does the special education teacher support the development of these interpretations? Multisensory material has the potential to support experiential and embodied learning: were these promises achieved? Our findings show how this multisensory map reconfigures spatial occupation and interaction dynamics, and that it has the potential to make the classroom more permeable to pupils’ social, spatial and emotional lives. In doing so, it provides opportunities for the teacher to develop citizenship education. The paper provides concrete examples of uses of smell and taste in learning activities to support engagement, and has implications for pedagogical design beyond special education.
(This article belongs to the Special Issue Multimodal Learning)

Open Access Article Documenting the Elusive and Ephemeral in Embodied Design Ideation Activities
Multimodal Technologies Interact. 2018, 2(3), 35; https://doi.org/10.3390/mti2030035
Received: 12 November 2017 / Revised: 15 June 2018 / Accepted: 18 June 2018 / Published: 24 June 2018
PDF Full-text (48186 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Documenting embodied ideation activities is challenging, as they often result in ephemeral design constructs and elusive design knowledge that are difficult to document and represent. Here, we explore documentation forms that designers can use internally during the design process, in the domain of movement-based interaction in collocated, social settings. Drawing on previous work and our experience from embodied ideation workshops, we propose three documentation forms with complementary perspectives on embodied action, from first- and third-person views. We discuss how they capture ephemeral embodied action and elusive design and experiential knowledge in relation to two interdependent aspects of documentation forms: their performativity and the medium they use. The novelty of these forms lies in what is being captured: ephemeral design constructs that emerge as designers engage with the embodied ideation activity; how it is portrayed: in aggregation forms that highlight elusive design knowledge; and their purpose: to clarify and augment analytical results, improving the designer-researchers’ understanding of key aspects of the embodied ideation process and its outcomes, useful for advancing the design process and for research dissemination.
(This article belongs to the Special Issue Designing for the Body)

Open Access Article Wunderkammers: Powerful Metaphors for ‘Tangible’ Experiential Knowledge Building
Multimodal Technologies Interact. 2018, 2(3), 34; https://doi.org/10.3390/mti2030034
Received: 30 April 2018 / Revised: 8 June 2018 / Accepted: 19 June 2018 / Published: 22 June 2018
PDF Full-text (4917 KB) | HTML Full-text | XML Full-text
Abstract
Research problem: The paper identifies the need for powerful metaphors that capture the innovations of new and emerging human-computer interaction (HCI) technologies and innovative question-and-answering (Q&A) systems in the context of spatial learning and inquiry-based learning in education. Aim/goals of the research: To explore the potential of the ‘Wunderkammer’ (curiosity cabinet) as a powerful metaphor for designing new types of learning experiences that cater for an ecology of artefacts (real or virtual objects), providing a holistic context for educators to share and extend learning in action. Conclusions: We provide insight into the emergence of smart interactive objects with different types of sensors that can potentially support everyday life, and into the increasing access to new visual experiences through augmented reality and virtual reality, enabling new types of tangible knowledge building that can be personalised and shared. This reshaping of human-centred design, through tangible creations that externalize the creative power of the ‘imaginations of movement’ in real time and through new materials, suggests a new approach to user experience design thinking in which powerful metaphors provide core design requirements for settings where the blending of worlds is commonplace.
(This article belongs to the Special Issue Human Computer Communications and Internet of Things)
