
Multimodal Technol. Interact., Volume 5, Issue 8 (August 2021) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
Augmented Reality for Autistic Children to Enhance Their Understanding of Facial Expressions
Multimodal Technol. Interact. 2021, 5(8), 48; https://doi.org/10.3390/mti5080048 - 23 Aug 2021
Cited by 4 | Viewed by 1947
Abstract
Difficulty in understanding the feelings and behavior of other people is considered one of the main symptoms of autism. Computer technology, especially augmented reality, has increasingly been used in interventions for Autism Spectrum Disorder (ASD), either to treat or to alleviate ASD symptomatology. Augmented reality is an engaging type of technology that helps children interact easily, understand, and remember information, and it is not limited to one age group or level of education. This study utilized AR to display faces with six basic facial expressions—happiness, sadness, surprise, fear, disgust, and anger—to help children recognize facial features and associate each facial expression with a simultaneous emotional state. The most important feature of this system is that children can interact with it in a friendly and safe way. Additionally, our results showed that the system enhanced social interactions, talking, and facial expressions for both autistic and typically developing children. Therefore, AR may play a significant future role in addressing the therapeutic needs of children with ASD. This paper presents evidence for the feasibility of one such specialized AR system.
(This article belongs to the Special Issue Theoretical and Pedagogical Perspectives on Augmented Reality)

Article
Accuracy and Repeatability Tests on HoloLens 2 and HTC Vive
Multimodal Technol. Interact. 2021, 5(8), 47; https://doi.org/10.3390/mti5080047 - 23 Aug 2021
Cited by 4 | Viewed by 2366
Abstract
Augmented and virtual reality have been experiencing rapid growth in recent years, but there is still no deep knowledge of their capabilities or of the fields in which they could be exploited. In that sense, this paper presents a study on the accuracy and repeatability of Microsoft's HoloLens 2 (an augmented reality device) and the HTC Vive (a virtual reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, whereas for the HTC Vive, the object tracked was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results in the millimeter range, while the HoloLens 2 produced less accurate measurements (errors around 2 cm). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user's hand rather than the system's controller had a large impact. The results are a significant step for the ongoing project of developing a human–robot interface for demonstrating an industrial robot using extended reality, which, based on our data, shows great potential to succeed.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
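The accuracy and repeatability figures above can be summarized in a standard way: accuracy as the mean Euclidean error against the ground-truth positions, repeatability as the spread of those errors. The sketch below is illustrative only and is not the authors' pipeline; all coordinates are invented, chosen so the mean error lands near the ~2 cm reported for HoloLens 2 hand tracking.

```python
# Illustrative summary of tracking accuracy/repeatability against an
# OptiTrack-style ground truth. All positions are made-up sample data.
import math
import statistics

def euclidean(a, b):
    """3D Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# (ground-truth, measured) position pairs in metres -- hypothetical values
ground_truth = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0), (0.0, 0.1, 0.1)]
hand_tracked = [(0.018, 0.005, -0.011), (0.121, 0.009, 0.006),
                (0.083, 0.117, -0.014), (0.012, 0.092, 0.119)]

errors = [euclidean(gt, m) for gt, m in zip(ground_truth, hand_tracked)]
accuracy = statistics.mean(errors)        # mean error: the "accuracy" figure
repeatability = statistics.stdev(errors)  # spread of errors: "repeatability"

print(f"mean error: {accuracy * 100:.1f} cm, std: {repeatability * 100:.1f} cm")
```

Run per device (hand-tracked positions for HoloLens 2, controller poses for the Vive), the same two numbers allow the millimeter-range vs. ~2 cm comparison the abstract reports.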

Article
Acquisition and User Behavior in Online Science Laboratories before and during the COVID-19 Pandemic
Multimodal Technol. Interact. 2021, 5(8), 46; https://doi.org/10.3390/mti5080046 - 16 Aug 2021
Cited by 2 | Viewed by 1575
Abstract
The COVID-19 pandemic has resulted in the closure of schools at every level globally, forcing education to move online. Meeting the needs of students in online Science Lab classes, in particular, is a challenge, since the physical labs are not available to teachers or students. OLabs is a virtual Science Lab providing a complete learning environment of theory, experimental procedures, videos, animations, simulations, and assessments that captures real lab experiences with the relevant pedagogy. This study looks at user acquisition and behavior on the OLabs platform before and during the COVID-19 pandemic. Using Google Analytics, we observe that during the pandemic users increasingly adopted OLabs as a new learning pedagogy for performing experiments, as indicated by metrics such as the number of users, the number of unique pages viewed per session, time spent viewing content, bounce rate, and preference for content types such as theory, simulations, videos, and animations.
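Two of the Google Analytics metrics named in the abstract have simple definitions worth making explicit: bounce rate is the share of sessions that viewed exactly one page, and pages per session is the count of unique pages viewed divided by the number of sessions. The toy session log below is invented for illustration; the page names echo the OLabs content types but are not real URLs.

```python
# Definitions of two web-analytics metrics from the abstract,
# computed over a hypothetical session log (lists of pages viewed).
sessions = [
    ["theory"],                          # a bounce: exactly one page viewed
    ["theory", "simulation", "video"],
    ["simulation", "assessment"],
    ["animation"],                       # another bounce
]

bounces = sum(1 for pages in sessions if len(pages) == 1)
bounce_rate = bounces / len(sessions)
pages_per_session = sum(len(set(pages)) for pages in sessions) / len(sessions)

print(f"bounce rate: {bounce_rate:.0%}")
print(f"unique pages per session: {pages_per_session:.2f}")
```

A falling bounce rate together with a rising pages-per-session figure is the pattern one would read as deeper engagement with the platform's content.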

Article
Innovative Teacher Education with the Augmented Reality Device Microsoft HoloLens—Results of an Exploratory Study and Pedagogical Considerations
Multimodal Technol. Interact. 2021, 5(8), 45; https://doi.org/10.3390/mti5080045 - 13 Aug 2021
Cited by 4 | Viewed by 2333
Abstract
Augmented Reality (AR) tools are increasingly finding their way into education settings. Although their use is still not widespread in educational contexts, the research literature indicates their potential and effectiveness. However, overall and specifically for the education sector, there are still numerous research gaps. This study investigates how the use of head-mounted AR displays such as the Microsoft HoloLens can change learning and what needs to be considered from a didactic perspective. The sample consists of 18 student teachers with a nature and technology teaching profile at a German-speaking university of teacher education. The data collection included a written questionnaire, video recordings of a teaching unit in which the HoloLens was used to examine molecular structures, and one-to-one semi-structured interviews. The results of the questionnaires and interviews presented in this paper show that all students were highly motivated to work with this technology in teacher education. The usability of the HoloLens was rated as very satisfactory, although many students reported minor problems. Most students attributed a positive impact on learning to the AR device and stated that using the devices increased their motivation to learn the topic. Overall, the results show that the use of AR in teacher education is considered very valuable and should be employed more widely in the future.
(This article belongs to the Special Issue Theoretical and Pedagogical Perspectives on Augmented Reality)

Article
Multimodal Warnings in Remote Operation: The Case Study on Remote Driving
Multimodal Technol. Interact. 2021, 5(8), 44; https://doi.org/10.3390/mti5080044 - 12 Aug 2021
Viewed by 1620
Abstract
Developments in sensor technology, artificial intelligence, and network technologies such as 5G have made remote operation a valuable method of controlling various types of machinery. Among the benefits of remote operation is the ability to access hazardous environments. Its major limitation is the lack of proper sensory feedback from the machine, which in turn negatively affects situational awareness and, consequently, may put remote operations at risk. This article explores how to improve situational awareness via multimodal feedback (visual, auditory, and haptic) and studies how such feedback can be used to communicate warnings to remote operators. To reach our goals, we conducted a controlled, within-subjects experiment with eight conditions and twenty-four participants on a simulated remote driving system. Additionally, we gathered further insights with a UX questionnaire and semi-structured interviews. The gathered data showed that the use of multimodal feedback positively affected situational awareness when driving remotely. Our findings indicate that the combination of added haptic and visual feedback was considered the best combination for communicating the slipperiness of the road. We also found that the feeling of presence is an important aspect of remote driving tasks, and one requested especially by participants with more experience in operating real heavy machinery.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

Article
Security Issues in Shared Automated Mobility Systems: A Feminist HCI Perspective
Multimodal Technol. Interact. 2021, 5(8), 43; https://doi.org/10.3390/mti5080043 - 07 Aug 2021
Cited by 2 | Viewed by 1766
Abstract
The spread of automated vehicles (AVs) is expected to disrupt our mobility behavior. Currently, a male bias is prevalent in the technology industry in general, and in the automotive industry in particular, which mainly focuses on white men. This leads to an under-representation of groups of people with other social, physiological, and psychological characteristics. The advent of automated driving (AD) should be taken as an opportunity to mitigate this bias and to consider a diverse variety of people within the development process. We conducted a qualitative, exploratory study to investigate how shared automated vehicles (SAVs) should be designed from a pluralistic perspective, taking a holistic viewpoint on the whole passenger journey, including booking, pick-up, and drop-off points. Both men and women emphasized the importance of SAVs being flexible and clean, whereas security issues were mentioned exclusively by our female participants. We propose different potential solutions to mitigate these security concerns and discuss them through the lens of the feminist HCI framework.
(This article belongs to the Special Issue Interface and Experience Design for Future Mobility)

Article
Perspective-Taking in Virtual Reality and Reduction of Biases against Minorities
Multimodal Technol. Interact. 2021, 5(8), 42; https://doi.org/10.3390/mti5080042 - 31 Jul 2021
Cited by 1 | Viewed by 1806
Abstract
This study examines the effect of perspective-taking via embodiment in virtual reality (VR) on reducing biases against minorities. It tests theoretical arguments about the affective and cognitive routes underlying perspective-taking and examines the moderating role of self-presence in VR through experiments. In Study 1, participants embodied an ethnic minority avatar and experienced workplace microaggression from a first-person perspective in VR. They were randomly assigned to affective (focus on emotions) vs. cognitive (focus on thoughts) perspective-taking conditions. Results showed that ingroup bias improved comparably across both conditions and that this effect was driven by more negative perceptions of the majority rather than more positive perceptions of minorities. In Study 2, participants experienced the same VR scenario from a third-person perspective. Results replicated those from Study 1 and extended them by showing that the effect of condition on ingroup bias was moderated by self-presence. At high self-presence, participants in the affective condition reported higher ingroup bias than those in the cognitive condition. The study showed that, in VR, the embodiment of an ethnic minority is somewhat effective in improving perceptions of minority groups. It remains difficult to clearly distinguish between the effects of the affective and cognitive routes underlying the process of perspective-taking.
(This article belongs to the Special Issue Social Interaction and Psychology in XR)

Article
Students as Designers of Augmented Reality: Impact on Learning and Motivation in Computer Science
Multimodal Technol. Interact. 2021, 5(8), 41; https://doi.org/10.3390/mti5080041 - 24 Jul 2021
Cited by 2 | Viewed by 2469
Abstract
In this study, we report findings from the PCBuildAR project, in which students developed augmented reality (AR) artifacts following a guided design-based learning (DBL) approach. Sixty-two students participated in the study; they were either first-year computer science students or more experienced computer science students. In terms of learning performance, only the first-year students benefited from our guided DBL approach. In contrast, the experienced students were highly motivated to learn computer science not only immediately after the intervention but also in the long term; for the first-year students, this effect was evident only directly after the intervention. Overall, the guided DBL design proved effective for both motivation and learning, especially for younger students. For older learners, a better balance between guidance and autonomy is recommended.
(This article belongs to the Special Issue Theoretical and Pedagogical Perspectives on Augmented Reality)

Article
User Monitoring in Autonomous Driving System Using Gamified Task: A Case for VR/AR In-Car Gaming
Multimodal Technol. Interact. 2021, 5(8), 40; https://doi.org/10.3390/mti5080040 - 21 Jul 2021
Viewed by 1861
Abstract
Background: As Automated Driving Systems (ADS) technology is assimilated into the market, the driver's role will shift to a supervisory one. A key point to consider is the driver's engagement in a secondary task that keeps the driver/user in the control loop. This paper aims to monitor driver engagement with a game and identify any impact the task has on hazard recognition. Methods: We designed a driving simulation using Unity3D and incorporated three task conditions: No-Task, AR-Video, and AR-Game. The driver engaged in an AR object-interception game while monitoring the road for threatening road scenarios. Results: There was a significant difference between the tasks (F(2, 33) = 4.34, p = 0.0213), with the game task showing a significant effect on reaction time, making it well suited to the present investigation. Game scoring followed three profiles/phases: learning, saturation, and decline. From these profiles, it is possible to quantify and infer drivers' engagement with the game task. Conclusion: The paper proposes an alternative form of monitoring that also has utility for the user, i.e., entertainment. Further experiments with AR games focusing on the real-world car environment will be performed to confirm the performance, following the recommendations derived from the current test.
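A statistic of the form F(2, 33) corresponds to an analysis of variance over three conditions with 33 within-group degrees of freedom. The sketch below computes a plain one-way ANOVA over three invented reaction-time samples of 12 values each, which reproduces those degrees of freedom; the F value and the data are made up, and the authors' actual (within-subjects) analysis may differ.

```python
# One-way ANOVA sketch matching the shape of the reported F(2, 33).
# Reaction times (seconds) below are invented for illustration.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n
    # Between-group sum of squares: group size times squared mean offset
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

no_task  = [1.10, 1.25, 0.98, 1.30, 1.05, 1.18, 1.22, 1.07, 1.15, 1.28, 1.02, 1.12]
ar_video = [1.35, 1.42, 1.20, 1.50, 1.33, 1.45, 1.38, 1.27, 1.48, 1.31, 1.40, 1.25]
ar_game  = [1.60, 1.48, 1.72, 1.55, 1.65, 1.50, 1.58, 1.70, 1.45, 1.62, 1.53, 1.68]

f_stat, df_b, df_w = one_way_anova([no_task, ar_video, ar_game])
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

Three groups of 12 observations give k − 1 = 2 and n − k = 33 degrees of freedom, exactly the shape of the statistic in the abstract.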
