Multimodal Technol. Interact., Volume 8, Issue 12 (December 2024) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 776 KiB  
Systematic Review
Performance of Commercial Deep Learning-Based Auto-Segmentation Software for Breast Cancer Radiation Therapy Planning: A Systematic Review
by Curtise K. C. Ng
Multimodal Technol. Interact. 2024, 8(12), 114; https://doi.org/10.3390/mti8120114 - 20 Dec 2024
Cited by 1 | Viewed by 1108
Abstract
As yet, no systematic review of commercial deep learning-based auto-segmentation (DLAS) software for breast cancer radiation therapy (RT) planning has been published, although NRG Oncology has highlighted the need for one. The purpose of this systematic review is to investigate the performance of commercial DLAS software packages for breast cancer RT planning and the methods used to evaluate them. A literature search was conducted using electronic databases. Fifteen papers met the selection criteria and were included. The included studies evaluated eight software packages (Limbus Contour, Manteia AccuLearning, Mirada DLCExpert, MVision.ai Contour+, Radformation AutoContour, RaySearch RayStation, Siemens syngo.via RT Image Suite/AI-Rad Companion Organs RT, and Therapanacea Annotate). Their findings show that the DLAS software could contour ten organs at risk (body, contralateral breast, esophagus-overlapping area, heart, ipsilateral humeral head, left and right lungs, liver, sternum, and trachea) and three clinical target volumes (CTVp_breast, CTVp_chestwall, and CTVn_L1) to a clinically acceptable standard, contributing to a 45.4–93.7% reduction in contouring time per patient. Although NRG Oncology has suggested that every clinical center should conduct its own DLAS software evaluation before clinical implementation, such testing appears particularly crucial for Manteia AccuLearning, Mirada DLCExpert, and MVision.ai Contour+ because of methodological weaknesses in the corresponding studies, such as the use of small datasets collected retrospectively from single centers.
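As background for the evaluation methods the review surveys, the Dice similarity coefficient (DSC) is one widely used geometric metric for comparing an auto-segmented contour against a manual reference. The minimal sketch below illustrates it on toy masks; it is an illustrative assumption, not a restatement of which metrics the included studies actually used.

```python
# Minimal sketch of the Dice similarity coefficient (DSC), a common
# geometric metric for comparing an auto-segmented contour against a
# manual reference contour. The toy masks below are illustrative
# assumptions, not data from the reviewed studies.
import numpy as np

def dice(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean voxel masks."""
    a = auto_mask.astype(bool)
    b = ref_mask.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy 2D "contours" that mostly overlap (25 voxels each, 16 shared).
auto = np.zeros((10, 10), dtype=bool)
auto[2:7, 2:7] = True
ref = np.zeros((10, 10), dtype=bool)
ref[3:8, 3:8] = True
print(f"DSC = {dice(auto, ref):.3f}")  # 2*16/(25+25) = 0.640
```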

13 pages, 1025 KiB  
Article
Virtual Reality-Based Approach to Evaluate Emotional Everyday Scenarios for a Digital Health Application
by Valentin Wunsch, Effi Freya Picka, Hanna Schumm, Joshua Kopp, Tamer Abdulbaki Alshirbaji, Herag Arabian, Knut Möller and Verena Wagner-Hartl
Multimodal Technol. Interact. 2024, 8(12), 113; https://doi.org/10.3390/mti8120113 - 20 Dec 2024
Viewed by 979
Abstract
Social interactions are part of our everyday lives. They can be challenging for individuals who experience social interactions as demanding, such as persons with autism spectrum disorder (ASD). Therefore, different types of training exist to help individuals affected by ASD practice in challenging situations. Digital applications offer advantages over traditional training because they can better address the individual needs of people with ASD. The development of a therapeutic application initially requires identifying appropriate emotion-relevant scenarios of social interaction. Based on a previous study evaluating text-based scenarios with different levels of complexity, a virtual reality (VR) environment was developed to assess the applicability of the scenarios in VR. To this end, an experimental study was conducted. Two different scenarios of social interaction, each with four levels of complexity, were presented to and evaluated by 18 participants (10 male, 8 female). A multidimensional approach was used to combine subjective assessments with psychophysiological measures (ECG and EDA). The results showed that the implementation of the scenarios in VR was able to differentiate between the different levels of complexity. As the long-term target is to implement the findings in a therapeutic application for people with ASD, the results of the study are promising for achieving this goal.

26 pages, 10443 KiB  
Article
Metaverse-Based Evacuation Training: Design, Implementation, and Experiment Focusing on Earthquake Evacuation
by Hiroyuki Mitsuhara
Multimodal Technol. Interact. 2024, 8(12), 112; https://doi.org/10.3390/mti8120112 - 20 Dec 2024
Viewed by 1313
Abstract
Virtual reality (VR) can realize evacuation training in an immersive, interactive, safe, three-dimensional virtual world. Many VR-based evacuation training systems have been developed; however, they typically notify participants explicitly or implicitly before the training begins, so participants are mentally prepared for successful evacuation. To satisfy the prerequisite that participants lack such mental readiness, this study proposes a prototype metaverse-based evacuation training system called “Metavearthquake”. The main characteristic of the prototype is that evacuation training begins unexpectedly with a sudden earthquake in the metaverse (virtual world); participants must then evacuate to a safe place while making decisions under difficult earthquake-caused situations. The prototype introduces scenarios and non-playable characters to express difficult situations that may occur after an earthquake. To heighten training effects, it supports reflection (reflection-on-action) by replaying participants’ evacuations. An experiment indicated that a sudden earthquake is indispensable for realistic simulated evacuation experiences. In summary, Metavearthquake is a metaverse-based evacuation training system that provides realistic simulated earthquake evacuation experiences in terms of evacuation behaviors, emotions, and training effects.

22 pages, 7210 KiB  
Article
Unlocking Trust and Acceptance in Tomorrow’s Ride: How In-Vehicle Intelligent Agents Redefine SAE Level 5 Autonomy
by Cansu Demir, Alexander Meschtscherjakov and Magdalena Gärtner
Multimodal Technol. Interact. 2024, 8(12), 111; https://doi.org/10.3390/mti8120111 - 17 Dec 2024
Viewed by 1054
Abstract
As fully automated vehicles (FAVs) advance towards SAE Level 5 automation, the role of in-vehicle intelligent agents (IVIAs) in shaping the passenger experience becomes critical. Even at SAE Level 5, effective communication between the vehicle and the passenger will remain crucial to ensure a sense of safety, trust, and engagement. This study explores how different types and combinations of information provided by IVIAs influence user experience, acceptance, and trust. A sample of 25 participants experienced a fully automated ride in a driving simulator, interacting with Iris, an IVIA designed for voice-only communication. The study utilized both qualitative and quantitative methods to assess participants’ perceptions. Findings indicate that critical and vehicle-status-related information had the highest positive impact on trust and acceptance, while personalized information, though valued, raised privacy concerns. Participants showed high engagement with non-driving-related activities, reflecting a high level of trust in the FAV’s performance. Interaction with the anthropomorphic IVIA was generally well received, but concerns about system transparency and information overload were noted. The study concludes that IVIAs play a crucial role in fostering passenger trust in FAVs, with implications for future designs that emphasize emotional intelligence, personalization, and transparency. These findings contribute to the ongoing development of IVIAs and the broader adoption of automated driving technologies.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)

35 pages, 5660 KiB  
Article
“Warning!” Benefits and Pitfalls of Anthropomorphising Autonomous Vehicle Informational Assistants in the Case of an Accident
by Christopher D. Wallbridge, Qiyuan Zhang, Victoria Marcinkiewicz, Louise Bowen, Theodor Kozlowski, Dylan M. Jones and Phillip L. Morgan
Multimodal Technol. Interact. 2024, 8(12), 110; https://doi.org/10.3390/mti8120110 - 5 Dec 2024
Viewed by 1135
Abstract
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption, and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs), varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational), on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus, leaving the AV unable to stop in time during an overtaking manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although its communicated intentions and actions were perceived by some as assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style; however, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust increased significantly across each event, although it plummeted following the accident, with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)

27 pages, 15486 KiB  
Article
Mixed-Presence Collaboration with Wall-Sized Displays: Empirical Findings on the Benefits of Awareness Cues
by Valérie Maquil, Adrien Coppens, Lou Schwartz and Dimitra Anastasiou
Multimodal Technol. Interact. 2024, 8(12), 109; https://doi.org/10.3390/mti8120109 - 5 Dec 2024
Viewed by 999
Abstract
Collaborative decision-making increasingly involves wall-sized displays (WSDs), allowing teams to view, analyse, and discuss large amounts of data. To enhance workspace awareness in mixed-presence meetings, previous work has proposed digital cues that share gestures, gaze, or entire postures. While several isolated cues have been proposed and shown to be useful in different workspaces, it is unknown whether results from previous studies transfer to a mixed-presence WSD context and to what extent such cues can be used in combination. In this paper, we report on the results of a user study with 24 participants (six groups of four), testing a mixed-presence collaboration scenario on two different setups of connected WSDs: an audio-video link only vs. a full setup with seven complementary cues. Our results show that the version with cues enhances workspace awareness, user experience, team orientation, and coordination, and leads teams to make more correct decisions.

24 pages, 9111 KiB  
Review
Bi-Directional Gaze-Based Communication: A Review
by Björn Rene Severitt, Nora Castner and Siegfried Wahl
Multimodal Technol. Interact. 2024, 8(12), 108; https://doi.org/10.3390/mti8120108 - 4 Dec 2024
Viewed by 1508
Abstract
Bi-directional gaze-based communication offers an intuitive and natural way for users to interact with systems. This approach utilizes the user’s gaze not only to communicate intent but also to obtain feedback, which promotes mutual understanding and trust between the user and the system. In this review, we explore the state of the art in gaze-based communication, focusing on both directions: from user to system and from system to user. First, we examine how eye-tracking data are processed and utilized for communication from the user to the system. This includes a range of techniques for gaze-based interaction and the critical role of intent prediction, which enhances the system’s ability to anticipate the user’s needs. Next, we analyze the reverse pathway: how systems provide feedback to users via various channels, highlighting their advantages and limitations. Finally, we discuss the potential integration of these two communication streams, paving the way for more intuitive and efficient gaze-based interaction models, especially in the context of artificial intelligence. Our overview emphasizes the future prospects for combining these approaches to create seamless, trust-building communication between users and systems. Ensuring that these systems are designed with a focus on usability and accessibility will be critical to making them effective communication tools for a wide range of users.
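As a concrete illustration of the user-to-system direction the review covers, the sketch below implements dwell-time selection, a classic gaze-based interaction technique in which a target is triggered once the gaze rests on it for a threshold duration. The target names, threshold, and synthetic gaze stream are illustrative assumptions, not taken from any reviewed system.

```python
# Minimal sketch of dwell-time gaze selection, a classic user-to-system
# gaze interaction technique. All names, thresholds, and sample data are
# illustrative assumptions, not taken from any system in the review.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float       # centre x in screen pixels
    y: float       # centre y in screen pixels
    radius: float  # hit radius in pixels

def dwell_select(samples, targets, dwell_ms=600.0):
    """Yield (timestamp, target) each time gaze dwells on a target.

    samples: iterable of (timestamp_ms, gaze_x, gaze_y) tuples,
             e.g. from an eye tracker at a fixed sampling rate.
    """
    current, dwell_start = None, None
    for t, gx, gy in samples:
        hit = next((tg for tg in targets
                    if (gx - tg.x) ** 2 + (gy - tg.y) ** 2 <= tg.radius ** 2),
                   None)
        if hit is not current:           # gaze moved to a new target (or away)
            current, dwell_start = hit, t
        elif hit is not None and t - dwell_start >= dwell_ms:
            yield t, hit                 # dwell threshold reached: "click"
            dwell_start = t              # reset so we don't re-fire every sample

targets = [Target("OK", 200, 300, 60), Target("Cancel", 500, 300, 60)]
# Synthetic 60 Hz gaze stream fixating on the "OK" button for one second.
stream = [(i * 16.7, 205, 298) for i in range(60)]
for t, tg in dwell_select(stream, targets):
    print(f"{t:.0f} ms: selected {tg.name}")
```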

47 pages, 3641 KiB  
Review
Innovative and Interactive Technologies in Creative Product Design Education: A Review
by Ioanna Nazlidou, Nikolaos Efkolidis, Konstantinos Kakoulis and Panagiotis Kyratsis
Multimodal Technol. Interact. 2024, 8(12), 107; https://doi.org/10.3390/mti8120107 - 4 Dec 2024
Viewed by 2165
Abstract
When discussing the Education 4.0 concept and the role of technology-based learning systems alongside creativity, it is interesting to explore how these are reflected as educational innovations in design education. This study aims to provide an overview of the interactive technologies used in product design education and examine their integration into the learning process. A literature search was conducted, and relevant scientific papers were analyzed and reviewed. The findings highlight several categories of technologies utilized in design education, including virtual and augmented reality, robotics, interactive embedded systems, immersive technologies, and computational intelligence systems. These technologies are primarily integrated as supportive tools throughout different stages of the design process within learning environments. This study suggests that integrating such technologies alongside pedagogical methods positively impacts education, offering numerous opportunities for further research and innovation. In conclusion, this review contributes to ongoing research on technological advancements and innovations in design education, offering insights into the diverse applications of interactive technologies in enhancing learning environments.

22 pages, 3907 KiB  
Article
Science Mapping of AI as an Educational Tool Exploring Digital Inequalities: A Sociological Perspective
by Isotta Mac Fadden, Elena-María García-Alonso and Eloy López Meneses
Multimodal Technol. Interact. 2024, 8(12), 106; https://doi.org/10.3390/mti8120106 - 21 Nov 2024
Cited by 2 | Viewed by 3492
Abstract
This study aims to explore the evolution of the literature on the sociological implications of integrating artificial intelligence (AI) as an educational tool, particularly its influence on digital inequalities. While AI technologies, such as AI-based language models, have begun transforming educational practices by personalizing learning, fostering student autonomy, and supporting educators, concerns remain regarding access disparities, ethical implications, and the potential reinforcement of existing social inequalities. To address these issues, a bibliometric analysis employing science mapping was conducted on 1515 studies sourced from the Web of Science Core Collection. The analysis traces the thematic evolution of social science perspectives on AI’s role in education and its relationship with digital inequalities. The results indicate growing academic interest in AI in education, with a notable progression from understanding its basic impact to exploring complex themes such as vulnerability, disability, bias, and community. The studies show that AI’s application has expanded from isolated research on specific populations to broader discussions of inclusivity, equity, and the impact of AI on governance, policy, and community. However, the findings also reveal a significant gap in sociological perspectives, particularly regarding issues such as digital illiteracy and socio-economic disparities in access. Although AI holds promise for promoting more inclusive education, further research is essential to address these sociological concerns and to guide the ethical, equitable implementation of AI as its influence on governance, policy, and communities continues to grow.
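To make the science-mapping step concrete, the sketch below builds the keyword co-occurrence counts that underlie most science maps: keywords that frequently appear together across bibliographic records become the nodes and weighted edges of a thematic network. The toy records and frequency threshold are illustrative assumptions, not the study’s actual pipeline or data.

```python
# Minimal sketch of keyword co-occurrence counting, the core step behind
# science mapping. The toy records and threshold are illustrative
# assumptions; the study's actual pipeline and data are not reproduced here.
from collections import Counter
from itertools import combinations

# Each record stands in for one bibliographic entry's author keywords,
# e.g. as exported from the Web of Science Core Collection.
records = [
    ["artificial intelligence", "education", "digital divide"],
    ["artificial intelligence", "education", "equity"],
    ["digital divide", "equity", "policy"],
    ["artificial intelligence", "policy", "education"],
]

# Count each keyword once per record, and every within-record pair.
keyword_freq = Counter(kw for rec in records for kw in set(rec))
cooccurrence = Counter(
    tuple(sorted(pair))
    for rec in records
    for pair in combinations(set(rec), 2)
)

# Keep only keywords above a minimum frequency (a common thresholding
# step before laying out the co-occurrence map).
MIN_FREQ = 2
nodes = {kw for kw, n in keyword_freq.items() if n >= MIN_FREQ}
edges = {pair: n for pair, n in cooccurrence.items()
         if pair[0] in nodes and pair[1] in nodes}

for (a, b), n in sorted(edges.items(), key=lambda e: -e[1]):
    print(f"{a} -- {b}: {n} co-occurrence(s)")
```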
