- Wellbeing at Work—Emotional Impact on Workers Using a Worker Guidance System Designed for Positive User Experience
- Effects of Cognitive Behavioral Stress Management Delivered by a Virtual Human, Teletherapy, and an E-Manual on Psychological and Physiological Outcomes in Adult Women: An Experimental Test
- Data as a Resource for Designing Digitally Enhanced Consumer Packaged Goods
- When Self-Driving Fails: Evaluating Social Media Posts Regarding Problems and Misconceptions about Tesla’s FSD Mode
- User Authentication Recognition Process Using Long Short-Term Memory Model
Journal Description
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction is an international, scientific, peer-reviewed, open access journal of multimodal technologies and interaction published monthly online by MDPI.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Science Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.5 days after submission; acceptance to publication takes 5.5 days (median values for papers published in this journal in the second half of 2022).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
Modeling “Stag and Hare Hunting” Behaviors Using Interaction Data from an mCSCL Application for Grade 5 Mathematics
Multimodal Technol. Interact. 2023, 7(4), 34; https://doi.org/10.3390/mti7040034 - 27 Mar 2023
Abstract
This study attempted to model the stag and hare hunting behaviors of students using their interaction data in a mobile computer-supported collaborative learning application for Grade 5 mathematics. Twenty-five male and 12 female Grade 5 students with an average age of 10.5 years participated in this study. Stag hunters are more likely to display personality dimensions characterized by Openness, while students belonging to hare hunters display personality dimensions characterized by Extraversion and Neuroticism. Students who display personality dimensions characterized by Agreeableness and Conscientiousness may tend to be either hare or stag hunters, depending on the difficulty, types of arithmetic problems solved, and the amount of time spent solving arithmetic problems. Students engaged in a stag hunting behavior performed poorly in mathematics. Decision tree modeling and lag sequential analysis revealed that stag and hare hunting behaviors could be identified based on personality dimensions, types of arithmetic problems solved, difficulty level of problems solved, time spent solving problems, and problem-solving patterns. Future research directions and practical implications are also discussed.
(This article belongs to the Special Issue Child–Computer Interaction and Multimodal Child Behavior Analysis)
Open Access Article
Evaluating Social Impact of Smart City Technologies and Services: Methods, Challenges, Future Directions
Multimodal Technol. Interact. 2023, 7(3), 33; https://doi.org/10.3390/mti7030033 - 22 Mar 2023
Abstract
This study examines motivations, definitions, methods and challenges of evaluating the social impacts of smart city technologies and services. It outlines concepts of social impact assessment and discusses how social impact has been included in smart city evaluation frameworks. Thematic analysis is used to investigate how social impact is addressed in eight smart city projects that prioritise human-centred design across a variety of contexts and development phases, from design research and prototyping to completed and speculative projects. These projects are notable for their emphasis on human, organisational and natural stakeholders; inclusion, participation and empowerment; new methods of citizen engagement; and relationships between sustainability and social impact. At the same time, there are gaps in the evaluation of social impact in both the smart city indexes and the eight projects. Based on our analysis, we contend that more coherent, consistent and analytical approaches are needed to build narratives of change and to comprehend impacts before, during and after smart city projects. We propose criteria for social impact evaluation in smart cities and identify new directions for research. This is of interest for smart city developers, researchers, funders and policymakers establishing protocols and frameworks for evaluation, particularly as smart city concepts and complex technologies evolve in the context of equitable and sustainable development.
(This article belongs to the Special Issue Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods)
Open Access Article
Online Platforms for Remote Immersive Virtual Reality Testing: An Emerging Tool for Experimental Behavioral Research
Multimodal Technol. Interact. 2023, 7(3), 32; https://doi.org/10.3390/mti7030032 - 21 Mar 2023
Abstract
Virtual Reality (VR) technology is gaining in popularity as a research tool for studying human behavior. However, the use of VR technology for remote testing is still an emerging field. This study aimed to evaluate the feasibility of conducting remote VR behavioral experiments that require millisecond timing. Participants were recruited via an online crowdsourcing platform and accessed a task on the classic cognitive phenomenon “Inhibition of Return” through a web browser using their own VR headset or desktop computer (68 participants in each group). The results confirm previous research that remote participants using desktop computers can be used effectively for conducting time-critical cognitive experiments. However, inhibition of return was only partially replicated for the VR headset group. Exploratory analyses revealed that technical factors, such as headset type, were likely to significantly impact variability and must be mitigated to obtain accurate results. This study demonstrates the potential for remote VR testing to broaden the research scope and reach a larger participant population. Crowdsourcing services appear to be an efficient and effective way to recruit participants for remote behavioral testing using high-end VR headsets.
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
Toward Creating Software Architects Using Mobile Project-Based Learning Model (Mobile-PBL) for Teaching Software Architecture
Multimodal Technol. Interact. 2023, 7(3), 31; https://doi.org/10.3390/mti7030031 - 15 Mar 2023
Abstract
Project-based learning (PBL) promotes increased levels of learning, deepens student understanding of acquired knowledge, and improves learning motivation. Students develop their ability to think and learn independently through depending on themselves in searching for knowledge, planning, exploration, and looking for solutions to practical problems. Information availability, student engagement, and motivation to learn all increase with mobile learning. The teaching process may be enhanced by combining the two styles. This paper proposes and evaluates a teaching model called Mobile Project-Based Learning (Mobile-PBL) that combines the two learning styles. The paper investigates how significantly Mobile-PBL can benefit students. The traditional lecture method used to teach the software architecture module in the classroom is not sufficient to provide students with the necessary practical experience to earn a career as software architects in the future. Therefore, the first author tested the use of the model for teaching the software architecture module at Philadelphia University’s Software Engineering Department on 62 students who registered for a software architecture course over three semesters. She compared the results of using the model for teaching with those results that were obtained when using the project-based learning (PBL) approach alone. The students’ opinions regarding the approach, any problems they had, and any recommendations for improvement were collected through a focus group session after finishing each semester and by distributing a survey to students to evaluate the effectiveness of the used model. Comments from the students were positive, according to the findings. The projects were well-received by the students, who agreed that it gave them a good understanding of several course ideas and concepts, as well as providing them with the required practical experience. 
The students also mentioned a few difficulties encountered while working on the projects, including distraction from social media and gaps in the skills expected of educators and learners in higher education institutions.
(This article belongs to the Special Issue Designing EdTech and Virtual Learning Environments)
Open Access Article
Higher Education in the Pacific Alliance: Descriptive and Exploratory Analysis of the Didactic Potential of Virtual Reality
Multimodal Technol. Interact. 2023, 7(3), 30; https://doi.org/10.3390/mti7030030 - 15 Mar 2023
Abstract
In this paper, we conducted descriptive quantitative research on the assessment of virtual reality (VR) technologies in higher education in the countries of the Pacific Alliance (PA). Specifically, differences between PA countries in terms of the above perceptions were identified and the behavior of the gender and knowledge area gaps in each of them was analyzed. A validated quantitative questionnaire was used for this purpose. As a result, we found that PA professors express high ratings of VR but point out strong disadvantages regarding its use in lectures; in addition, they have low self-concept of their digital competence. In this regard, it was identified that there are notable differences among the PA countries. Mexico is the country with the most marked gender gaps, while Chile has strong gaps by areas of knowledge. We give some recommendations towards favoring a homogeneous process of integration of VR in higher education in the PA countries.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
Location- and Physical-Activity-Based Application for Japanese Vocabulary Acquisition for Non-Japanese Speakers
Multimodal Technol. Interact. 2023, 7(3), 29; https://doi.org/10.3390/mti7030029 - 13 Mar 2023
Abstract
There are various mobile applications to support foreign-language learning. While providing interactive designs and playful games to keep learners interested, these applications do not focus on motivating learners to continue learning after a long time. Our goal for this study was to develop an application that guides learners to achieve small goals by creating small lessons that are related to their real-life situations, with a main focus on vocabulary acquisition. Therefore, we present MiniHongo, a smartphone application that recognizes learners’ current locations and activities to compose lessons that comprise words that are strongly related to the learners’ real-time situations and can be studied in a short time period, thereby improving user motivation. MiniHongo uses a cloud service for its database and public application programming interfaces for location tracking. A between-subject experiment was conducted to evaluate MiniHongo, which involved comparing it to two other versions of itself. One composed lessons without location recognition, and the other composed lessons without location and activity recognition. The experimental results indicate that users have a strong interest in learning Japanese with MiniHongo, and some difference was found in how well users could memorize what they learned via the application. It is also suggested that the application requires improvements.
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
Open Access Article
Learning about Victims of Holocaust in Virtual Reality: The Main, Mediating and Moderating Effects of Technology, Instructional Method, Flow, Presence, and Prior Knowledge
Multimodal Technol. Interact. 2023, 7(3), 28; https://doi.org/10.3390/mti7030028 - 06 Mar 2023
Abstract
The goal of the current study was to investigate the effects of a virtual reality (VR) simulation of Anne Frank’s hiding place on learning. In a 2 × 2 experiment, 132 middle school students learned about the living conditions of Anne Frank, a girl of Jewish heritage during the Second World War, through desktop VR (DVR) and head-mounted display VR (HMD-VR) (media conditions). Approximately half of each group engaged in an explorative vs. an expository learning approach (method condition). The exposition group received instructions on how to explore the hiding place stepwise, whereas the exploration group experienced it autonomously. Next to the main effects of media and methods, the mediating effects of the learning process variables of presence and flow and the moderating effects of contextual variables (e.g., prior technical knowledge) have been analyzed. The results revealed that the HMD-VR led to significantly improved evaluation, and—even if not statistically significant—perspective-taking in Anne, but less knowledge gain compared to DVR. Further results showed that adding instructions and segmentation within the exposition group led to significantly increased knowledge gain compared to the exploration group. For perspective-taking and evaluation, no differences were detected. A significant interaction between media and methods was not found. No moderating effects by contextual variables but mediating effects were observed: For example, the feeling of presence within VR can fully explain the relationships between media and learning. These results support the view that learning processes are crucial for learning in VR and that studies neglecting these learning processes may be confounded. Hence, the results pointed out that media comparison studies are limited because they do not consider the complex interaction structures of media, instructional methods, learning processes, and contextual variables.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
Need for UAI–Anatomy of the Paradigm of Usable Artificial Intelligence for Domain-Specific AI Applicability
Multimodal Technol. Interact. 2023, 7(3), 27; https://doi.org/10.3390/mti7030027 - 28 Feb 2023
Abstract
Data-driven methods based on artificial intelligence (AI) are powerful yet flexible tools for gathering knowledge and automating complex tasks in many areas of science and practice. Despite the rapid development of the field, the existing potential of AI methods to solve recent industrial, corporate and social challenges has not yet been fully exploited. Research shows the insufficient practicality of AI in domain-specific contexts as one of the main application hurdles. Focusing on industrial demands, this publication introduces a new paradigm in terms of applicability of AI methods, called Usable AI (UAI). Aspects of easily accessible, domain-specific AI methods are derived, which address essential user-oriented AI services within the UAI paradigm: usability, suitability, integrability and interoperability. The relevance of UAI is clarified by describing challenges, hurdles and peculiarities of AI applications in the production area, whereby the following user roles have been abstracted: developers of cyber–physical production systems (CPPS), developers of processes and operators of processes. The analysis shows that target artifacts, motivation, knowledge horizon and challenges differ for the user roles. Therefore, UAI shall enable domain- and user-role-specific adaptation of affordances accompanied by adaptive support of vertical and horizontal integration across the domains and user roles.
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
Developing Usability Guidelines for mHealth Applications (UGmHA)
Multimodal Technol. Interact. 2023, 7(3), 26; https://doi.org/10.3390/mti7030026 - 28 Feb 2023
Abstract
Mobile health (mHealth) is a branch of electronic health (eHealth) technology that provides healthcare services using smartphones and wearable devices. However, most mHealth applications were developed without applying mHealth specialized usability guidelines. Although many researchers have used various guidelines to design and evaluate mHealth applications, these guidelines have certain limitations. First, some of them are general guidelines. Second, others are specified for mHealth applications; however, they only cover a few features of mHealth applications. Third, some of them did not consider accessibility needs for the elderly and people with special needs. Therefore, this paper proposes a new set of usability guidelines for mHealth applications (UGmHA) based on Quinones et al.’s formal methodology, which consists of seven stages starting from the Exploratory stage and ending with the Refining stage. What distinguishes these proposed guidelines is that they are easy to follow, consider the feature of accessibility for the elderly and people with special needs and cover different features of mHealth applications. In order to validate UGmHA, an experiment was conducted on two applications in Saudi Arabia using UGmHA versus other well-known usability guidelines to discover usability issues. The experimental results show that the UGmHA discovered more usability issues than did the other guidelines.
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
A Literature Survey of How to Convey Transparency in Co-Located Human–Robot Interaction
Multimodal Technol. Interact. 2023, 7(3), 25; https://doi.org/10.3390/mti7030025 - 25 Feb 2023
Abstract
In human–robot interaction, transparency is essential to ensure that humans understand and trust robots. Understanding is vital from an ethical perspective and benefits interaction, e.g., through appropriate trust. While there is research on explanations and their content, the methods used to convey the explanations are underexplored. It remains unclear which approaches are used to foster understanding. To this end, we contribute a systematic literature review exploring how robot transparency is fostered in papers published in the ACM Digital Library and IEEE Xplore. We found that researchers predominantly rely on monomodal visual or verbal explanations to foster understanding. Commonly, these explanations are external, as opposed to being integrated in the robot design. This paper provides an overview of how transparency is communicated in human–robot interaction research and derives a classification with concrete recommendations for communicating transparency. Our results establish a solid base for consistent, transparent human–robot interaction designs.
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)
Open Access Article
Can AI-Oriented Requirements Enhance Human-Centered Design of Intelligent Interactive Systems? Results from a Workshop with Young HCI Designers
Multimodal Technol. Interact. 2023, 7(3), 24; https://doi.org/10.3390/mti7030024 - 25 Feb 2023
Abstract
In this paper, we show that the evolution of artificial intelligence (AI) and its increased presence within an interactive system pushes designers to rethink the way in which AI and its users interact and to highlight users’ feelings towards AI. For novice designers, it is crucial to acknowledge that both the user and artificial intelligence possess decision-making capabilities. Such a process may involve mediation between humans and artificial intelligence. This process should also consider the mutual learning that can occur between the two entities over time. Therefore, we explain how to adapt the Human-Centered Design (HCD) process to give centrality to AI as the user, further empowering the interactive system, and to adapt the interaction design to the actual capabilities, limitations, and potentialities of AI. This is to encourage designers to explore the interactions between AI and humans and focus on the potential user experience. We achieve such centrality by extracting and formalizing a new category of AI requirements. We have provocatively named this extension: “Intelligence-Centered”. A design workshop with MSc HCI students was carried out as a case study supporting this change of perspective in design.
(This article belongs to the Topic Interactive Artificial Intelligence and Man-Machine Communication)
Open Access Article
Comparison of Using an Augmented Reality Learning Tool at Home and in a Classroom Regarding Motivation and Learning Outcomes
Multimodal Technol. Interact. 2023, 7(3), 23; https://doi.org/10.3390/mti7030023 - 23 Feb 2023
Abstract
The recent pandemic brought on considerable changes in terms of learning activities, which were moved from in-person classroom-based lessons to virtual work performed at home in most world regions. One of the most considerable challenges faced by educators was keeping students motivated toward learning activities. Interactive learning environments in general, and augmented reality (AR)-based learning environments in particular, are thought to foster emotional and cognitive engagement when used in the classroom. This study aims to compare the motivation and learning outcomes of middle school students in two educational settings: in the classroom and at home. The study involved 55 middle school students using the AR application to practice basic chemistry concepts. The results suggested that students’ general motivation towards the activity was similar in both settings. However, students who worked at home reported better satisfaction and attention levels compared with those who worked in the classroom. Additionally, students who worked at home made fewer mistakes and achieved better grades compared with those who worked in the classroom. Overall, the study suggests that AR can be exploited as an effective learning environment for learning the basic principles of chemistry in home settings.
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Review
A Review of Design and Evaluation Practices in Mobile Text Entry for Visually Impaired and Blind Persons
Multimodal Technol. Interact. 2023, 7(2), 22; https://doi.org/10.3390/mti7020022 - 17 Feb 2023
Abstract
Millions of people with vision impairment or vision loss face considerable barriers in using mobile technology and services due to the difficulty of text entry. In this paper, we review related studies involving the design and evaluation of novel prototypes for mobile text entry for persons with vision loss or impairment. We identify the practices and standards of the research community and compare them against the practices in research for non-impaired persons. We find that there are significant shortcomings in the methodological and result-reporting practices in both population types. In highlighting these issues, we hope to inspire more and better quality research in the domain of mobile text entry for persons with and without vision impairment.
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces
Multimodal Technol. Interact. 2023, 7(2), 21; https://doi.org/10.3390/mti7020021 - 16 Feb 2023
Abstract
Augmented reality (AR) has the potential to fundamentally change how people engage with increasingly interactive urban environments. However, many challenges exist in designing and evaluating these new urban AR experiences, such as technical constraints and safety concerns associated with outdoor AR. We contribute to this domain by assessing the use of virtual reality (VR) for simulating wearable urban AR experiences, allowing participants to interact with future AR interfaces in a realistic, safe and controlled setting. This paper describes two wearable urban AR applications (pedestrian navigation and autonomous mobility) simulated in VR. Based on a thematic analysis of interview data collected across the two studies, we find that the VR simulation successfully elicited feedback on the functional benefits of AR concepts and the potential impact of urban contextual factors, such as safety concerns, attentional capacity, and social considerations. At the same time, we highlight the limitations of this approach in terms of assessing the AR interface’s visual quality and providing exhaustive contextual information. The paper concludes with recommendations for simulating wearable urban AR experiences in VR.
(This article belongs to the Special Issue Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods)
Open Access Article
How Can One Share a User’s Activity during VR Synchronous Augmentative Cooperation?
Multimodal Technol. Interact. 2023, 7(2), 20; https://doi.org/10.3390/mti7020020 - 14 Feb 2023
Abstract
Collaborative virtual environments allow people to work together while being distant. At the same time, empathic computing aims to create a deeper shared understanding between people. In this paper, we investigate how to improve the perception of distant collaborative activities in a virtual environment by sharing users’ activity. We first propose several visualization techniques for sharing the activity of multiple users. We selected one of these techniques for a pilot study and evaluated its benefits in a controlled experiment using a virtual reality adaptation of the NASA MATB-II (Multi-Attribute Task Battery). Results show (1) that instantaneous indicators of users’ activity are preferred to indicators that continuously display the progress of a task, and (2) that participants are more confident in their ability to detect users needing help when using activity indicators.
(This article belongs to the Special Issue 3D Human–Computer Interaction (Volume II))
Open Access Article
Assessing Heuristic Evaluation in Immersive Virtual Reality—A Case Study on Future Guidance Systems
Multimodal Technol. Interact. 2023, 7(2), 19; https://doi.org/10.3390/mti7020019 - 09 Feb 2023
Abstract
A variety of evaluation methods for user interfaces (UIs) exist, such as usability testing, cognitive walkthrough, and heuristic evaluation. However, UIs such as guidance systems at transit hubs must be evaluated in their intended application field to allow the effective and valid identification of usability flaws. But what if evaluations are not feasible in real environments, or laboratory conditions cannot be ensured? In the present study, the method of heuristic evaluation, based on adapted heuristics, is combined with immersive virtual reality (VR) to identify usability flaws in dynamic guidance systems (DGSs) at transit hubs. The study involved usability evaluations of nine DGS concepts using the newly proposed method. The results show that, compared to computer-based heuristic evaluations, the use of immersive VR led to the identification of a greater number of “severe” usability flaws, as well as more usability flaws overall. Within a qualitative assessment, immersive VR is validated as a suitable tool for conducting heuristic evaluations, offering significant advantages such as the creation of realistic experiences under laboratory conditions. Future work seeks to further establish the suitability of immersive VR for heuristic evaluations and to compare the proposed method against other evaluation methods.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
Designing to Leverage Presence in VR Rhythm Games
Multimodal Technol. Interact. 2023, 7(2), 18; https://doi.org/10.3390/mti7020018 - 09 Feb 2023
Abstract
Rhythm games are known for their engaging gameplay and have gained renewed popularity with the adoption of virtual reality (VR) technology. While VR rhythm games have achieved commercial success, there is little research on how and why they are engaging, or on the connection between that engagement and immersion or presence. This study aims to understand how the design of two popular VR rhythm games, Beat Saber and Ragnarock, leverages presence to immerse players. Through a mixed-methods approach, utilising the Multimodal Presence Scale and a thematic analysis of open-ended questions, we identified four mentalities that characterise user experiences: action, game, story, and musical. We discuss how these mentalities can mediate presence and immersion, suggesting considerations for how designers can leverage this mapping for similar or related games.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
Roadmap for the Development of EnLang4All: A Video Game for Learning English
Multimodal Technol. Interact. 2023, 7(2), 17; https://doi.org/10.3390/mti7020017 - 03 Feb 2023
Abstract
Nowadays, people are more predisposed to being self-taught due to the availability of online information. With digitalization, information appears not only in its conventional forms, such as blogs, articles, newspapers, or e-books, but also in more interactive and enticing ways. Video games have become a vehicle for transmitting information and knowledge, but they require specific treatment with respect to their presentation and the way in which users interact with them. This treatment includes usability guidelines and heuristics that identify video game properties favorable to a better user experience, helping to captivate users and support their assimilation of the content. In this research, usability guidelines and heuristics, complemented with recommendations from studies of educational video games, were gathered, analyzed, and applied to a video game for English language learning called EnLang4All, which was developed within the scope of this project and evaluated in terms of its reception by users.
Full article
(This article belongs to the Special Issue Feature Papers in Multimodal Technologies and Interaction—Edition 2023)
Open Access Article
Ranking Crossing Scenario Complexity for eHMIs Testing: A Virtual Reality Study
Multimodal Technol. Interact. 2023, 7(2), 16; https://doi.org/10.3390/mti7020016 - 02 Feb 2023
Abstract
External human–machine interfaces (eHMIs) have the potential to benefit AV–pedestrian interactions. The majority of studies investigating eHMIs have used relatively simple traffic environments, i.e., a single pedestrian crossing in front of a single eHMI on a one-lane straight road. While this approach has proved efficient in providing an initial understanding of how pedestrians respond to eHMIs, it over-simplifies interactions, which will be substantially more complex in real-life circumstances. This paper illustrates, in a small-scale study (N = 10), a process for ranking crossing scenarios by level of complexity. Traffic scenarios were first developed that varied in traffic density, visual complexity of the road scene, road geometry, weather and visibility conditions, and presence of distractions; these factors have previously been shown to increase the difficulty and riskiness of the crossing task. The scenarios were then tested in a motion-based virtual reality environment. Pedestrians’ perceived workload and objective crossing behaviour were measured as indirect indicators of scenario complexity. Sense of presence and simulator sickness were also recorded as measures of the ecological validity of the virtual environment. The results indicated that some crossing scenarios were more taxing for pedestrians than others, such as those with road geometries where traffic approached from multiple directions. Further, the presence scores showed that the virtual environments were perceived as realistic. The paper concludes by proposing a “complex” environment for testing eHMIs under more challenging crossing circumstances.
Full article
(This article belongs to the Special Issue Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods)
Open Access Review
Research in Computational Expressive Music Performance and Popular Music Production: A Potential Field of Application?
Multimodal Technol. Interact. 2023, 7(2), 15; https://doi.org/10.3390/mti7020015 - 31 Jan 2023
Abstract
In music, the interpreter manipulates performance parameters in order to offer a sonic rendition of the piece capable of conveying specific expressive intentions. Since the 1980s, there has been growing interest in expressive music performance (EMP) and its computational modeling. This research field has two fundamental objectives: understanding the phenomenon of human musical interpretation and the automatic generation of expressive performances. Rule-based, statistical, machine learning, and deep learning approaches have been proposed, most of them devoted to the classical repertoire, in particular piano pieces. In contrast, we consider the role of expressive performance within popular music and the contemporary ecology of pop music production, based on the use of digital audio workstations (DAWs) and virtual instruments. After an analysis of the expressiveness-related tools commonly available to modern producers, we present a detailed survey of research in computational EMP, highlighting the potential and the limits of the existing literature with respect to popular music, which by its nature cannot be completely mapped onto the classical repertoire. In the concluding discussion, we suggest possible lines of future research in computational expressiveness applied to pop music.
Full article
Topics
Topic in AI, Algorithms, Information, MTI, Sensors
Lightweight Deep Neural Networks for Video Analytics
Topic Editors: Amin Ullah, Tanveer Hussain, Mohammad Farhad Bulbul
Deadline: 31 December 2023
Topic in Informatics, Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2024

Special Issues
Special Issue in MTI
Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods
Guest Editors: Marius Hoggenmueller, Martin Tomitsch, Jessica R. Cauchard, Luke Hespanhol, Maria Luce Lupetti, Ronald Schroeter, Sharon Yavo-Ayalon, Alexander Wiethoff, Stewart Worrall
Deadline: 20 April 2023
Special Issue in MTI
Child–Computer Interaction and Multimodal Child Behavior Analysis
Guest Editors: Heysem Kaya, Maryam Najafian, Saeid Safavi
Deadline: 25 May 2023
Special Issue in MTI
Feature Papers in Multimodal Technologies and Interaction—Edition 2023
Guest Editor: Cristina Portalés Ricart
Deadline: 30 June 2023
Special Issue in MTI
Designing EdTech and Virtual Learning Environments
Guest Editors: Stephan Schlögl, Deepak Khazanchi, Peter Mirski, Teresa Spieß, Reinhard Bernsteiner, Christian Ploder, Pascal Schöttle, Matthias Janetschek
Deadline: 10 August 2023