- A Literature Survey of Transparency in Human–Robot Interaction
- Multimodal Communication and Peer Interaction during Equation-Solving Sessions with and without Tangible Technologies
- “AR The Gods of Olympus”: Design and Pilot Evaluation of an Augmented Reality Educational Game for Greek Mythology
- Does Augmented Reality Help to Understand Chemical Phenomena during Hands-On Experiments? – Implications for Cognitive Load and Learning
- Online Platforms for Remote Immersive Virtual Reality Testing: An Emerging Tool for Experimental Behavioral Research
Journal Description
Multimodal Technologies and Interaction is an international, scientific, peer-reviewed, open access journal of multimodal technologies and interaction published monthly online by MDPI.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Science Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.5 days after submission; the time from acceptance to publication is 5.5 days (median values for papers published in this journal in the second half of 2022).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
On the Effectiveness of Using Virtual Reality to View BIM Metadata in Architectural Design Reviews for Healthcare
Multimodal Technol. Interact. 2023, 7(6), 60; https://doi.org/10.3390/mti7060060 - 07 Jun 2023
Abstract
This article reports on a study that assessed whether Virtual Reality (VR) can be used to display Building Information Modelling (BIM) metadata alongside spatial data in a virtual environment and, in doing so, to determine whether this increases the effectiveness of the design review by improving participants’ understanding of the design. Previous research has illustrated the potential for VR to enhance design reviews, especially the ability to convey spatial information, but there has been limited research into how VR can convey additional BIM metadata. A user study with 17 healthcare professionals assessed participants’ performances and preferences for completing design reviews in VR or using a traditional design review system of PDF drawings and a 3D model. The VR condition had a higher task completion rate, a higher SUS score and generally faster completion times. The results indicate that VR increases the effectiveness of a design review conducted by healthcare professionals.
Full article
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
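Several entries in this issue report System Usability Scale (SUS) scores, including the design-review study above (higher SUS score in VR) and the hiking-trail application below (average SUS of 83). For readers unfamiliar with the instrument, here is a minimal sketch of the standard SUS scoring procedure; the function and the example responses are illustrative and are not taken from any of the listed papers.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 corresponds to item 1
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Illustrative example: a fairly positive participant
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 87.5
```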
Open Access Article
Location-Based Game for Thought-Provoking Evacuation Training
Multimodal Technol. Interact. 2023, 7(6), 59; https://doi.org/10.3390/mti7060059 - 07 Jun 2023
Abstract
Participation in evacuation training can aid survival in the event of an unpredictable disaster, such as an earthquake. However, conventional evacuation training is not well designed for provoking critical thinking in participants regarding the processes involved in a successful evacuation. To realize thought-provoking evacuation training, we developed a location-based game that presents digital materials depicting disaster situations at locations or times preset in a scenario and provides scenario-based multiple endings as a game element. The developed game motivates participants to make decisions by providing high situational and audiovisual realism. In addition, the game encourages the participants to think objectively about the evacuation process by working together with a reflection-support system. We conducted thought-provoking evacuation training with fifth-grade students, focusing on tsunami evacuation and lifesaving-related moral dilemmas. In this practice, we observed that the participants made decisions as if they were dealing with actual disaster situations and thought objectively about the evacuation process by reflecting on their decisions. However, we also found that lifesaving-related moral dilemmas are difficult to address in evacuation training.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
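The training game described above presents disaster materials when participants reach locations (or times) preset in a scenario. The paper does not include implementation details; the sketch below is a hypothetical illustration of such a location-based trigger, using the haversine distance between the player's GPS fix and scenario waypoints. The waypoint coordinates, radii, and material names are invented for illustration.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical scenario: (latitude, longitude, trigger radius in m, material id)
SCENARIO = [
    (34.7315, 137.3911, 30, "tsunami_warning_video"),
    (34.7330, 137.3950, 20, "blocked_route_photo"),
]

def triggered_materials(player_lat, player_lon, already_shown):
    """Return the scenario materials whose geofence the player has entered."""
    return [
        material
        for lat, lon, radius, material in SCENARIO
        if material not in already_shown
        and haversine_m(player_lat, player_lon, lat, lon) <= radius
    ]
```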
Open Access Article
Development and Evaluation of a Mobile Application with Augmented Reality for Guiding Visitors on Hiking Trails
Multimodal Technol. Interact. 2023, 7(6), 58; https://doi.org/10.3390/mti7060058 - 31 May 2023
Abstract
Tourism on the island of Santa Maria, Azores, has been increasing due to its characteristics in terms of biodiversity and geodiversity. This island has several hiking trails; the available information can be consulted in pamphlets and physical placards, whose maintenance and updating are difficult and expensive. This creates a need to improve the visitors’ experience, in this case by using a technological means currently available to everyone: the smartphone. This paper describes the development and evaluation of the user experience of a mobile application for guiding visitors on said hiking trails, as well as the design principles and main issues observed during this process. The application is based on an augmented reality interaction model, providing visitors with an interactive and recreational experience through Augmented Reality in outdoor environments (without additional markers in the physical space and using georeferenced information), helping with navigation during the route and providing updated information that is easy to maintain. For the design and evaluation of the application, two studies were carried out with users on-site (Santa Maria, Azores). The first, with 77 participants, analyzed users and defined the application’s characteristics, and the second, with 10 participants, evaluated the user experience. Feedback from participants was obtained through questionnaires, which yielded an average SUS (System Usability Scale) score of 83 (excellent) and positive results in the UEQ (User Experience Questionnaire).
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
A Digital Coach to Promote Emotion Regulation Skills
Multimodal Technol. Interact. 2023, 7(6), 57; https://doi.org/10.3390/mti7060057 - 29 May 2023
Abstract
There is growing awareness that effective emotion regulation is critical for health, adjustment and wellbeing. Emerging evidence suggests that interventions that promote flexible emotion regulation may have the potential to reduce the incidence and prevalence of mental health problems in specific at-risk populations. The challenge is how best to engage with at-risk populations, who may not be actively seeking assistance, to deliver this early intervention approach. One possible solution is digital technology, whose development has rapidly accelerated in this space. Such rapid growth has, however, occurred at the expense of developing a deep understanding of key elements of successful program design and of the specific mechanisms that influence health behavior change. This paper presents a detailed description of the design, development and evaluation of an emotion regulation intervention conversational agent (ERICA) that acts as a digital coach. ERICA uses interactive conversation to encourage self-reflection and to support and empower users to learn a range of cognitive emotion regulation strategies, including Refocusing, Reappraisal, Planning and Putting into Perspective. A pilot evaluation of ERICA was conducted with 138 university students and confirmed that ERICA provided a feasible and highly usable method for delivering an emotion regulation intervention. The results also indicated that ERICA was able to develop a therapeutic relationship with participants and increase their intent to use a range of cognitive emotion regulation strategies. These findings suggest that ERICA holds potential to be an effective approach for delivering an early intervention to support mental health and wellbeing. ERICA’s dialogue, embedded with interactivity, therapeutic alliance and empathy cues, provides a basis for the development of other psychoeducation interventions.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Systematic Review
Similarities and Differences between Immersive Virtual Reality, Real World, and Computer Screens: A Systematic Scoping Review in Human Behavior Studies
Multimodal Technol. Interact. 2023, 7(6), 56; https://doi.org/10.3390/mti7060056 - 27 May 2023
Abstract
In the broader field of human behavior studies, there are several trade-offs for on-site experiments. Being tied to a specific location can limit both the availability and diversity of participants. However, current and future technological advances make it possible to replicate real-world scenarios in a virtual environment up to a certain level of detail. How these differences add up and affect the cross-media validity of findings remains a topic of debate. How a virtual world is accessed, through a computer screen or a head-mounted display, may have a significant impact. Not surprisingly, the literature has presented various comparisons. However, while previous research has compared the different devices for specific research questions, a systematic review is lacking. To fill this gap, we conducted this review. We identified 1083 articles in accordance with the PRISMA guidelines. Following screening, 56 articles remained and were included in a qualitative synthesis to provide the reader with a summary of current research on the differences between head-mounted displays (HMDs), computer screens, and the real world. Overall, the data show that virtual worlds presented in an HMD are more similar to real-world situations than those presented on computer screens. This supports the thesis that HMDs are more suitable than computer screens for conducting experiments in the field of human behavioral studies.
Full article
Open Access Article
Hand-Controlled User Interfacing for Head-Mounted Augmented Reality Learning Environments
Multimodal Technol. Interact. 2023, 7(6), 55; https://doi.org/10.3390/mti7060055 - 26 May 2023
Abstract
With the rapid expansion of technology and hardware availability within the field of Augmented Reality, building and deploying Augmented Reality learning environments has become more logistically viable than ever before. In this paper, we focus on the development of a new mobile learning experience for a museum by combining multiple technologies to provide additional human–computer interaction possibilities. This both reduces barriers to entry for end-users and provides natural interaction methods. Using our method, we implemented a new approach to gesture-based interaction in Augmented Reality by combining two devices, a Leap Motion and a Microsoft HoloLens (1st Generation), via an intermediary device using local-area networking. This was carried out with the intention of comparing this method against alternative forms of Augmented Reality to determine which implementation has the largest impact on adult learners’ ability to retain information. A control group was used to establish data on memory retention without the use of Augmented Reality technology, along with three focus groups to explore the different methods and locations. Results found that adult learners retain the most overall information when being educated through a traditional lecture, with a statistically significant difference between the methods; however, the use of Augmented Reality resulted in a slower rate of knowledge decay between testing intervals. This contrasts with existing research, as adult learners did not respond to the technology in the same way that child and teenage audiences previously have, which suggests that prior research may not be generalisable to all audiences.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
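The study above relays Leap Motion hand-tracking data to a first-generation HoloLens through an intermediary device on a local-area network. The authors do not publish their message format or transport, so the following is only an assumed sketch of one common pattern: the intermediary serializes fingertip positions as JSON and sends them over UDP, and a headset-side client would parse the same structure. The address, port, and field names are placeholders.

```python
import json
import socket
import time

# Placeholder address/port where the headset-side client is assumed to listen.
HEADSET_ADDR = ("192.168.1.42", 9000)

def send_hand_frame(sock, fingertips):
    """Serialize one frame of fingertip positions and send it as a UDP datagram."""
    message = {
        "timestamp": time.time(),
        "fingertips": [{"x": x, "y": y, "z": z} for (x, y, z) in fingertips],
    }
    sock.sendto(json.dumps(message).encode("utf-8"), HEADSET_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Dummy frame; in a real pipeline these values would come from the Leap Motion SDK.
    send_hand_frame(sock, [(0.01, 0.12, 0.30), (0.03, 0.11, 0.29)])
```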
Open Access Article
Linking Personality and Trust in Intelligent Virtual Assistants
Multimodal Technol. Interact. 2023, 7(6), 54; https://doi.org/10.3390/mti7060054 - 25 May 2023
Abstract
In recent years, Intelligent Virtual Assistants (IVAs), such as Alexa and Siri, have increasingly gained in popularity. Yet, privacy advocates raise great concerns regarding the amount and type of data these systems collect and consequently process. Among many other things, it is technology trust which seems to be of high significance here, particularly when it comes to the adoption of IVAs, for they usually provide little transparency as to how they function and use personal and potentially sensitive data. While technology trust is influenced by many different socio-technical parameters, this article focuses on human personality and its connection to respective trust perceptions, which in turn may further impact the actual adoption of IVA products. To this end, we report on the results of an online survey ( ). Findings show that on a scale from 0 to , people trust IVAs on average. Furthermore, the data point to a significant positive correlation between people’s propensity to trust in general technology and their trust in IVAs. Yet, they also show that those who exhibit a higher propensity to trust in technology tend to also have a higher affinity for technology interaction and are consequently more likely to adopt IVAs.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
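The key quantitative result above is a positive correlation between general propensity to trust technology and trust in IVAs. As a reminder of how such a bivariate association is typically computed and tested, here is a minimal sketch using SciPy; the per-participant scale means are made up for illustration and are not the study's data.

```python
from scipy.stats import pearsonr

# Illustrative per-participant scale means (not the study's data)
propensity_to_trust = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.7, 4.2]
trust_in_ivas = [2.9, 3.8, 2.5, 3.6, 4.4, 3.1, 3.3, 4.0]

r, p = pearsonr(propensity_to_trust, trust_in_ivas)
print(f"r = {r:.2f}, p = {p:.3f}")
```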
Open Access Article
Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA
Multimodal Technol. Interact. 2023, 7(5), 53; https://doi.org/10.3390/mti7050053 - 17 May 2023
Abstract
Sidewalk delivery robots are being deployed as a form of last-mile delivery. While many such robots have been deployed on college campuses, fewer have been piloted on public sidewalks. Furthermore, there have been few observational studies of robots and their interactions with pedestrians. To better understand how sidewalk robots might integrate into public spaces, the City of Pittsburgh, Pennsylvania, conducted a pilot of sidewalk delivery robots to understand possible uses and the challenges that could arise in interacting with people in the city. Our team conducted ethnographic observations and intercept interviews to understand how residents perceived and interacted with sidewalk delivery robots over the course of the public pilot. We found that people with limited knowledge about the robots crafted stories about their purpose and function. We observed the robots causing distractions and obstructions for different sidewalk users (including children and dogs), witnessed people helping immobilized robots, and learned about potential accessibility issues that the robots may pose. Based on our findings, we contribute a set of recommendations for future pilots, as well as questions to guide future design for robots in public spaces.
Full article
(This article belongs to the Special Issue Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods)
Open Access Review
Impact of Screen Time on Children’s Development: Cognitive, Language, Physical, and Social and Emotional Domains
Multimodal Technol. Interact. 2023, 7(5), 52; https://doi.org/10.3390/mti7050052 - 16 May 2023
Abstract
Technology has become integral to children’s lives, impacting many aspects, from academics to socialization. Children of today’s generation are growing up with digital devices, such as mobile phones, iPads, computers, video games, and smart gadgets; therefore, screen time has become ubiquitous in children’s daily routines. This paper provides a review of screen time usage and its impact on children under eight years of age across multiple developmental domains: cognitive, language, physical, and socio-emotional. The cognitive domain considers factors such as attention span and memory; the language domain examines vocabulary, speech, and language development; the physical domain focuses on motor development, exercise, sleep, and diet; and the socio-emotional domain considers relationships, self-identity, and emotional behaviors/regulation. Our findings are mixed: there are both benefits and drawbacks to technology use, but children’s screen time requires controlled observation and monitoring for sustained progress across developmental domains. Specific recommendations advise that children’s daily screen time should be limited to zero minutes (min) (0–2 years), <60 min (3–5 years), and 60 min (6–8 years).
Full article
(This article belongs to the Topic Youth Engagement in Social Media in the Post COVID-19 Era)
Open Access Article
Revealing Unknown Aspects: Sparking Curiosity and Engagement with a Tourist Destination through a 360-Degree Virtual Tour
Multimodal Technol. Interact. 2023, 7(5), 51; https://doi.org/10.3390/mti7050051 - 16 May 2023
Abstract
Over the past decades, 360-degree virtual tours have been used to provide the public access to accurate representations of cultural heritage sites and museums. The COVID-19 pandemic has contributed to a rise in the popularity of virtual tours as a means of engaging with locations remotely and has raised an interesting question: How could we use such experiences to bring the public closer to locations that are otherwise unreachable in real life or not considered to be tourist destinations? In this study, we examine the effectiveness of promoting engagement with a city through the virtual presentation of unknown and possibly also inaccessible points of interest through a 360-degree panoramic virtual tour. The evaluation of the experience with 31 users through an online questionnaire confirms its potential to spark curiosity, promote engagement, foster reflection, and motivate users to explore the location and its attractions at their leisure, thus enabling them to experience it from their personal point of view. The outcomes highlight the need for further research to explore this potential and identify best practices for virtual experience design.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Review
Reviewing Simulation Technology: Implications for Workplace Training
Multimodal Technol. Interact. 2023, 7(5), 50; https://doi.org/10.3390/mti7050050 - 10 May 2023
Abstract
Organizations have maintained a commitment to using simulation technology for training purposes because it prepares employees for realistic work scenarios they may encounter and provides a relevant method for teaching hands-on skills. One challenge that simulation technology has faced is the persistent threat of obsolescence, where investment in an up-to-date solution can rapidly become irrelevant in a matter of months or years as technology progresses. This can be particularly challenging for organizations that seek out the best solutions to help develop and train employees while facing the constraints of limited resources and lengthy acquisition times for tools and equipment. Some industries and organizations may benefit from anticipating which technologies might best serve employees and stakeholders in the future. In this manuscript, we took a historical approach, looking at the history of training and the use of simulation-like experiences over time, which helped us identify historical themes in workplace training. Next, we carried out a systematic review of recent training research using simulation technology to examine how these findings relate to the identified historical themes. Lastly, we summarized the research literature on simulation technology used for training, highlighted future directions, and made recommendations for practitioners and researchers.
Full article
Open Access Systematic Review
Usability Assessments for Augmented Reality Head-Mounted Displays in Open Surgery and Interventional Procedures: A Systematic Review
Multimodal Technol. Interact. 2023, 7(5), 49; https://doi.org/10.3390/mti7050049 - 09 May 2023
Abstract
Augmented reality (AR) head-mounted displays (HMDs) are an increasingly popular technology. For surgical applications, the use of AR HMDs to display medical images or models may reduce invasiveness and improve task performance by enhancing understanding of the underlying anatomy. This technology may be particularly beneficial in open surgeries and interventional procedures for which the use of endoscopes, microscopes, or other visualization tools is insufficient or infeasible. While the capabilities of AR HMDs are promising, their usability for surgery is not well-defined. This review identifies current trends in the literature, including device types, surgical specialties, and reporting of user demographics, and provides a description of usability assessments of AR HMDs for open surgeries and interventional procedures. Assessments applied to other extended reality technologies are included to identify additional usability assessments for consideration when assessing AR HMDs. The PubMed, Web of Science, and EMBASE databases were searched through September 2022 for relevant articles that described user studies. User assessments most often addressed task performance. However, objective measurements of cognitive, visual, and physical loads, known to affect task performance and the occurrence of adverse events, were limited. There was also incomplete reporting of user demographics. This review reveals knowledge and methodology gaps for usability of AR HMDs and demonstrates the potential impact of future usability research.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
Team Success: A Mixed Methods Approach to Evaluating Virtual Team Leadership Behaviors
Multimodal Technol. Interact. 2023, 7(5), 48; https://doi.org/10.3390/mti7050048 - 05 May 2023
Abstract
Virtual organizational teams have gained interest and popularity in recent years and have become more prevalent amid the COVID-19 pandemic. Without an understanding of the dynamics of short-term, project-based virtual teams, organizational productivity and team relationship-building may suffer certain pitfalls in virtual communication and support. This manuscript aimed to expand what is currently known about short-term virtual team dynamics related to types of effective leadership behaviors. The present study employed a mixed-methods approach to understanding the dynamics of these teams at both the individual and team levels. Small teams were formed and instructed to collaborate on a virtual survival task. Team-related outcomes were measured at the individual level, such as team coordination, team support, and team success. Additionally, distinct latent profiles of leadership behaviors were developed and analyzed at the team level. Team support, more so than team coordination, significantly predicted team success at the individual level, with instrumental support having the strongest effect. Distinct leadership behaviors emerged in teams and were classified through a latent profile analysis, but none of the profiles were significantly related to team performance scores. Demonstrating instrumental support in short-term virtual teams may improve team success. It is important to understand that distinct leadership behaviors exist, and future research should explore the impact of these leadership behaviors on other team-related outcomes.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
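The leadership profiles above were identified with latent profile analysis (LPA). LPA is commonly implemented as a finite (Gaussian) mixture model over standardized indicator variables, with the number of profiles chosen by an information criterion such as the BIC. The sketch below illustrates that general approach with scikit-learn on synthetic data; it is not the authors' analysis code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratings = rng.normal(size=(60, 3))  # synthetic team-level ratings on three behavior dimensions

X = StandardScaler().fit_transform(ratings)

# Fit mixtures with 1-5 profiles and keep the solution with the lowest BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
profiles = models[best_k].predict(X)  # profile membership for each team
print(best_k, np.bincount(profiles))
```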
Open Access Article
Identifying Strategies to Mitigate Cybersickness in Virtual Reality Induced by Flying with an Interactive Travel Interface
Multimodal Technol. Interact. 2023, 7(5), 47; https://doi.org/10.3390/mti7050047 - 28 Apr 2023
Abstract
As Virtual Reality (VR) technology has improved in hardware, accessibility of development and availability of applications, interest in it has increased. However, the problem of Cybersickness (CS) still remains, causing uncomfortable symptoms in users. Therefore, this research seeks to identify and understand new CS mitigation strategies that can contribute to developer guidelines. Three hypotheses for strategies were devised and tested in an experiment. This involved a physical travel interface for flying through a Virtual Environment (VE) as a Control (CT) condition. On top of this, three manipulation conditions referred to as Gaze-tracking Vignette (GV), First-person Perspective with members representation (FP) and Fans and Vibration (FV) were applied. The experiment was between-subjects, with 37 participants randomly allocated across conditions. According to the Simulator Sickness Questionnaire (SSQ) scores, significant evidence was found that GV and FP made CS worse. Evidence was also found that FV did not have an effect on CS. However, from the physiological data recorded, an overall lowering of heart rate for FV indicated that it might have some effect on the experience, but this cannot be strongly linked with CS. Additionally, comments from some participants identified that they experienced symptoms consistent with CS. Amongst these, dizziness was the most common, with a few participants having issues with the usability of the travel interface. Despite some CS symptoms, most participants reported little negative impact of CS on the overall experience and feelings of immersion.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
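Cybersickness in this study is measured with the Simulator Sickness Questionnaire (SSQ). For reference, the sketch below applies the commonly cited Kennedy et al. (1993) subscale weights to raw (unweighted) subscale sums; the item-to-subscale assignment is omitted, the example sums are invented, and nothing here is taken from this paper's analysis.

```python
def ssq_scores(nausea_sum, oculomotor_sum, disorientation_sum):
    """Apply the standard SSQ weights to raw (unweighted) subscale sums.

    Each of the 16 SSQ symptoms is rated 0-3 and contributes to one or more
    of the three subscales; this function assumes those raw sums are given.
    """
    return {
        "nausea": nausea_sum * 9.54,
        "oculomotor": oculomotor_sum * 7.58,
        "disorientation": disorientation_sum * 13.92,
        "total": (nausea_sum + oculomotor_sum + disorientation_sum) * 3.74,
    }

# Illustrative raw sums for one participant
print(ssq_scores(nausea_sum=4, oculomotor_sum=6, disorientation_sum=3))
```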
Open Access Article
Forest Classroom: A Case Study of Educational Augmented Reality Design to Facilitate Classroom Engagement
Multimodal Technol. Interact. 2023, 7(5), 46; https://doi.org/10.3390/mti7050046 - 28 Apr 2023
Abstract
The transition from kindergarten to primary school involves preparing students for a more structured classroom-based learning environment, which is typically different from the play-based model in kindergartens. Building on the Forest Room concept, which connects restless and disengaged students to nature as a calming medium, this case study describes the design of a combined storybook and augmented reality application to provide a literacy primer that integrates this concept. The design case study is presented relative to three frameworks that review the support for educational content, motivation and engagement mechanisms, and features of the AR application. This serves to validate the design process relative to these criteria and identifies opportunities for enhancement, including opportunities for meaningful interaction. The resulting application demonstrates appropriate design strategies to support its target age group and focus. It provides a stimulating and flexible learning activity that can be readily integrated into the classroom and that supports the kindergarten transition to appropriate classroom behaviour by encouraging active engagement and collaboration, blending aspects of both outdoor and classroom-based activities.
Full article
Open Access Review
The Good News, the Bad News, and the Ugly Truth: A Review on the 3D Interaction of Light Field Displays
Multimodal Technol. Interact. 2023, 7(5), 45; https://doi.org/10.3390/mti7050045 - 27 Apr 2023
Abstract
Light field displays offer glasses-free 3D visualization, which means that multiple individuals may observe the same content simultaneously from a virtually infinite number of perspectives without the need for viewing devices. The practical utilization of such visualization systems includes various passive and active use cases. In the case of the latter, users often engage with the utilized system via human–computer interaction. Beyond conventional controls and interfaces, it is also possible to use advanced solutions such as motion tracking, which may seem seamless and highly convenient when paired with glasses-free 3D visualization. However, such solutions may not necessarily outperform conventional controls, and their true potential may fundamentally depend on the use case in which they are deployed. In this paper, we provide a review of the 3D interaction of light field displays. Our work takes into consideration the different requirements posed by passive and active use cases, discusses the numerous challenges, limitations and potentials, and proposes research initiatives that could advance the investigated field of science.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
Real-Time Flood Forecasting and Warning: A Comprehensive Approach toward HCI-Centric Mobile App Development
Multimodal Technol. Interact. 2023, 7(5), 44; https://doi.org/10.3390/mti7050044 - 26 Apr 2023
Abstract
This article discusses the design, development, and usability assessment of a mobile system for producing hydrological predictions and sending flood warnings, in response to the need for human-centered technology to improve the management of flood occurrences. Our work acts as a bibliographic reference for understanding what others have attempted and found, and it provides an integrated set of recommendations. Furthermore, our guidelines offer guidance for the design of GIS-based hydrological models for mobile devices. We concentrate on the full design of a human–computer interaction framework for an effective flood prediction and warning system. In addition, we methodically analyze and address current user needs and requirements for building a user interface for mobile real-time flood forecasting. Although a functional prototype was created, the primary objective of this research was to understand the complexity of potential users’ demands and actual use situations in order to address the problem of comparable systems being difficult to use. After consultation with potential users, application design standards were established and implemented in the initial prototype. Focusing on user demands and attitudes, special consideration was given to the usability of the mobile interface. A variety of assessment methods were applied in developing the application. The evaluation concluded that the system is efficient and effective.
Full article
Open Access Article
Building Community Resiliency through Immersive Communal Extended Reality (CXR)
Multimodal Technol. Interact. 2023, 7(5), 43; https://doi.org/10.3390/mti7050043 - 26 Apr 2023
Abstract
Situated and shared experiences can motivate community members to plan shared action, promoting community engagement. We deployed and evaluated a communal extended-reality (CXR) bus tour that depicts the possible impacts of flooding and climate change. This paper describes the results of seven community engagement sessions with a total of N = 74 members of the Roosevelt Island community. We conducted pre- and post-bus tour focus groups to understand how the tour affected these community members’ awareness and motivation to take action. We found that the unique qualities of immersive, situated, and geo-located virtual reality (VR) on a bus made climate change feel real, brought the consequences of climate change closer to home, and highlighted existing community resources to address the issue. Our results showed that the CXR experience helped to simulate a physical emergency state, which empowered the community to translate feelings of hopelessness into creative and actionable ideas. Our finding exemplifies that geo-located VR on a bus can be a powerful tool to motivate innovations and collective action. Our work is a first-of-its-kind empirical contribution showing that CXR experiences can inspire action. It offers a proof-of-concept of a large-scale community engagement process featuring simulated communal experiences, leading to creative ideas for a bottom-up community resiliency plan.
Full article
(This article belongs to the Special Issue Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods)
Open Access Article
Service Selection Using an Ensemble Meta-Learning Classifier for Students with Disabilities
Multimodal Technol. Interact. 2023, 7(5), 42; https://doi.org/10.3390/mti7050042 - 23 Apr 2023
Abstract
Students with special needs should be empowered to use assistive technologies and services that suit their individual circumstances and environments to maximize their learning attainment. Fortunately, modern distributed computing paradigms, such as the Internet of Things (IoT), cloud computing, and mobile computing, provide ample opportunities to create and offer a multitude of digital assistive services and devices for people with disabilities. However, choosing the appropriate services from a pool of competing services while satisfying the unique requirements of disabled learners remains a challenging research endeavor. In this article, we propose an ensemble meta-learning model that ranks and selects the best IoT services while considering the diverse needs of disabled students within the educational context. We train and test our deep ensemble meta-learning model using two synthetically generated assistive services datasets. The first dataset incorporates 50,000 records representing the possible use of 12 learning activities, fulfilled by 60 distinct assistive services. The second dataset includes a range of 120,000 service ratings of seven quality features, including response, availability, successibility, latency, cost, quality of service, and accessibility. Our deep learning model uses an ensemble of multiple input learners fused using a meta-classification network shared by all the outputs representing individual assistive services. The model achieves significantly better results than traditional machine learning models (i.e., support vector machine and random forest) and a simple feed-forward neural network model without the ensemble technique. Furthermore, we extended our model to utilize the accessibility rating of services to suggest appropriate educational services for disabled learners. The empirical results show the acceptability of our assistive service recommender for learners with disabilities.
Full article
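The model above fuses several base learners through a shared meta-classification network and is benchmarked against SVM and random-forest baselines. The exact architecture is not reproduced here; the sketch below shows the same general stacking idea in scikit-learn, with a small neural network as the meta-learner and synthetic data standing in for the assistive-service datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for service-quality features and service labels.
X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners whose predictions are fused by a neural-network meta-classifier.
ensemble = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    final_estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```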
Open Access Article
State of the Art of Mobile Learning in Jordanian Higher Education: An Empirical Study
Multimodal Technol. Interact. 2023, 7(4), 41; https://doi.org/10.3390/mti7040041 - 10 Apr 2023
Abstract
Mobile learning (m-learning) is a new approach to learning that makes use of the special features of mobile devices in the education sector. M-learning is becoming increasingly common in higher education institutions all around the world. The use of mobile devices for education and learning has also gained popularity in Jordan. While many studies thoroughly analyze the situation of m-learning in other countries, comparatively few have focused on Jordan. Thus, it is important to understand the current situation of m-learning at Jordanian universities, especially in light of the COVID-19 pandemic. While there have been some studies conducted prior to COVID-19 and a few studies after COVID-19, there is a need for a comprehensive study that provides an in-depth exploration of the current situation, student adoption, benefits, disadvantages, and challenges, particularly following COVID-19. Therefore, this study utilizes a sequential exploratory mixed research method to investigate the current state of the art of m-learning in Jordanian higher education, with a particular focus on student adoption, benefits, disadvantages, and challenges. Firstly, the study explores the existing literature on m-learning and conducts 15 interviews with educators and learners at three Jordanian universities to gain insights into their experiences with m-learning. The study then distributes a survey to students at four Jordanian universities, representing both public and private universities, to generalize the results from the qualitative study. Additionally, the study investigates the relationship between student enrollment in public/private universities and the adoption of m-learning. The study concludes that students have a positive opinion of m-learning and are willing to use it. However, there are a number of disadvantages and challenges to its adoption. Additionally, there is a relationship between student enrollment in public/private universities and the adoption of m-learning. These findings have important implications for institutions that want to incorporate m-learning into their undergraduate and graduate degree programs, as they aid decision-makers in these universities in creating frameworks that may be able to meet the needs of m-learning.
Full article
(This article belongs to the Special Issue Designing EdTech and Virtual Learning Environments)
Topics
Topic in AI, Algorithms, Information, MTI, Sensors
Lightweight Deep Neural Networks for Video Analytics
Topic Editors: Amin Ullah, Tanveer Hussain, Mohammad Farhad Bulbul
Deadline: 31 December 2023
Topic in Entropy, Future Internet, Algorithms, Computation, MAKE, MTI
Interactive Artificial Intelligence and Man-Machine Communication
Topic Editors: Christos Troussas, Cleo Sgouropoulou, Akrivi Krouska, Ioannis Voyiatzis, Athanasios Voulodimos
Deadline: 20 February 2024
Topic in Informatics, Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2024

Special Issues
Special Issue in MTI
Feature Papers in Multimodal Technologies and Interaction—Edition 2023
Guest Editor: Cristina Portalés Ricart
Deadline: 30 June 2023
Special Issue in MTI
Designing EdTech and Virtual Learning Environments
Guest Editors: Stephan Schlögl, Deepak Khazanchi, Peter Mirski, Teresa Spieß, Reinhard Bernsteiner, Christian Ploder, Pascal Schöttle, Matthias Janetschek
Deadline: 10 August 2023
Special Issue in MTI
3D User Interfaces and Virtual Reality
Guest Editors: Arun K. Kulshreshth, Kevin Pfeil
Deadline: 30 September 2023
Special Issue in MTI
Intricacies of Child–Robot Interaction - 2nd Edition
Guest Editors: Shruti Chandra, Marta Couto
Deadline: 20 October 2023