Multimodal Technol. Interact., Volume 9, Issue 2 (February 2025) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1830 KiB  
Article
Application of 9-Channel Pseudo-Color Maps in Deep Learning for Intracranial Hemorrhage Detection
by Shimpei Sato, Daisuke Oura and Hiroyuki Sugimori
Multimodal Technol. Interact. 2025, 9(2), 17; https://doi.org/10.3390/mti9020017 - 14 Feb 2025
Viewed by 649
Abstract
[Background] In computed tomography (CT) for intracranial hemorrhage (ICH), the various window settings and the continuity of slices are critical factors for accurate diagnosis. However, traditional convolutional neural networks typically accept only single-slice images. Since ICH lesions often extend across multiple slices, using only single-slice images may reduce diagnostic accuracy by neglecting spatial continuity. To address this limitation, we explored a 9-channel pseudo-color map that integrates multi-slice information for the discrimination of ICH in CT. [Method] A total of 21,744 cases (normal controls: 12,862; abnormal cases: 8882) from an open dataset were used for model training and validation. Abnormal cases included a variety of ICHs. The 9-channel pseudo-color map was generated by combining three different window settings with three continuous slices. A ResNeXt50-32x4d architecture with five-fold cross-validation was used. A total of 956 clinical cases were used for model testing. [Result] A total of 558,738 images were included in the model training process. The optimal model performance metrics were as follows: accuracy, 95.92%; sensitivity, 96.37%; specificity, 95.24%. The average processing time per case was 3.29 s. [Conclusions] The 9-channel pseudo-color map achieves high accuracy in the discrimination of ICH in CT images using deep learning methodologies.
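The construction the abstract describes — three CT window settings applied to three consecutive slices, stacked into nine channels — can be sketched as follows. The window center/width values below are common illustrative choices (brain, subdural, bone), not the settings reported in the paper.

```python
import numpy as np

def apply_window(hu_slice, center, width):
    """Clip a Hounsfield-unit slice to a CT window and scale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu_slice, lo, hi) - lo) / (hi - lo)

def nine_channel_map(volume, index, windows):
    """Stack 3 consecutive slices x 3 window settings into a 9-channel array.

    volume  : (num_slices, H, W) array of Hounsfield units
    index   : index of the center slice
    windows : list of three (center, width) tuples
    """
    slices = volume[index - 1 : index + 2]               # previous, current, next
    channels = [apply_window(s, c, w) for s in slices for (c, w) in windows]
    return np.stack(channels, axis=0)                    # shape (9, H, W)

# Illustrative window settings only (brain, subdural, bone).
EXAMPLE_WINDOWS = [(40.0, 80.0), (80.0, 200.0), (600.0, 2800.0)]
```

The resulting (9, H, W) array can then be fed to a CNN whose first convolution accepts nine input channels instead of the usual one or three.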

10 pages, 8942 KiB  
Article
An Implementation of a Crime-Safety-Map Application Based on a Safety Index
by Seong-Cho Hong, Svetlana Kim and Sun-Young Ihm
Multimodal Technol. Interact. 2025, 9(2), 16; https://doi.org/10.3390/mti9020016 - 13 Feb 2025
Viewed by 681
Abstract
This paper presents the development of a crime-safety-map application and a safety index using heatmap and geofence methods. A tool that can satisfy safety needs has become more important than ever due to society's growing fear of crime. One way to meet the general public's safety needs is to inform them of crime data and the safety level of the surrounding environment, but such data are not disclosed by law enforcement agencies. Therefore, this study drew on crime prevention through environmental design to develop a user-friendly, publicly accessible crime-safety-map application. Nationwide safety- and crime-related data from the Republic of Korea's Open Government Data Portal were used, and the application was built with Android Studio. The developed application visualizes the characteristics of the surrounding environment and conveys the crime-safety level through a heatmap and the geofence technique. By informing users about crime-prone areas and warning them on entry, the application can help reduce both the general public's fear of crime and crime incidents.
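At its core, geofencing reduces to a point-in-region test against the user's location. A minimal sketch with a circular fence is shown below; the paper's actual fence geometry and its use of the Android geofencing APIs are not reproduced here, and the coordinates are purely illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0                                  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, fence_center, radius_m):
    """True if the user's (lat, lon) falls within a circular geofence."""
    return haversine_m(*user, *fence_center) <= radius_m
```

An app would evaluate this test (or register equivalent fences with the platform's geofencing service) whenever the location updates, and raise a warning on entering a crime-prone zone.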

25 pages, 9799 KiB  
Article
A Diamond Approach to Develop Virtual Object Interaction: Fusing Augmented Reality and Kinesthetic Haptics
by Alma Rodriguez-Ramirez, Osslan Osiris Vergara Villegas, Manuel Nandayapa, Francesco Garcia-Luna and María Cristina Guevara Neri
Multimodal Technol. Interact. 2025, 9(2), 15; https://doi.org/10.3390/mti9020015 - 13 Feb 2025
Viewed by 619
Abstract
Using the senses is essential to interacting with objects in real-world environments. However, not all the senses are available when interacting with virtual objects in virtual environments. This paper presents a diamond methodology that fuses two technologies to represent the senses of sight and touch when interacting with a virtual object. The sense of sight is represented through augmented reality, and the sense of touch is represented through kinesthetic haptics. The diamond methodology is centered on the user experience and comprises five general stages: (i) experience design, (ii) sensory representation, (iii) development, (iv) display, and (v) fusion. The first stage is the expected, proposed, or needed user experience. Then, each technology takes its homologous activities from the second to the fourth stage, diverging from the other along its development. Finally, the technologies converge in the fifth stage, fusing in the user experience. The diamond methodology was tested by generating a user's dual sensation when interacting with the elasticity of a virtual tension spring. The user can simultaneously perceive the visual and tactile change of the virtual spring during the interaction, representing the object's deformation. The experimental results demonstrated that an interactive experience can be felt and seen in augmented reality following the diamond methodology.

21 pages, 1913 KiB  
Article
Social Robot Interactions in a Pediatric Hospital Setting: Perspectives of Children, Parents, and Healthcare Providers
by Katarzyna Kabacińska, Katelyn A. Teng and Julie M. Robillard
Multimodal Technol. Interact. 2025, 9(2), 14; https://doi.org/10.3390/mti9020014 - 11 Feb 2025
Viewed by 925
Abstract
Socially assistive robots are embodied technological artifacts that can interact socially with people. These devices are increasingly investigated as a means of mental health support in different populations, especially for alleviating loneliness, depression, and anxiety. While the number of available, increasingly sophisticated social robots is growing, their adoption is slower than anticipated. There is much effort to determine the effectiveness of social robots in various settings, including healthcare; however, little is known about the acceptability of these devices by the following distinct user groups: healthcare providers, parents, and children. To better understand the priorities and attitudes of social robot users, we carried out (1) a survey of parents and children who have previously been admitted to a hospital and (2) a series of three modified focus group meetings with healthcare providers. The online survey (n = 71) used closed and open-ended questions as well as validated measures to establish the attitudes of children and parents towards social human–robot interaction and identify any potential barriers to the implementation of a robot intervention in a hospital setting. In the focus group meetings with healthcare providers (n = 10), we identified novel potential applications and interaction modalities of social robots in a hospital setting. Several concerns and barriers to the implementation of social robots were discussed. Overall, all user groups have positive attitudes towards interactions with social robots, provided that their concerns regarding robot use are addressed during interaction development. Our results reveal novel social robot application areas in hospital settings, such as rapport-building between patients and healthcare providers and fostering patient involvement in their own care. Healthcare providers highlighted the value of being included and consulted throughout the process of child–robot interaction development to ensure the acceptability of social robots in this setting and minimize potential harm.

17 pages, 408 KiB  
Article
Craft-Based Methodologies in Human–Computer Interaction: Exploring Interdisciplinary Design Approaches
by Arminda Guerra
Multimodal Technol. Interact. 2025, 9(2), 13; https://doi.org/10.3390/mti9020013 - 10 Feb 2025
Viewed by 1015
Abstract
Craft-based methodologies have emerged as a vital human–computer interaction (HCI) approach, bridging digital and physical materials in interactive system design. This study, born from a collaboration between two research networks focused on affective design and interaction design, investigates how diverse professionals use craft-based approaches to transform design processes. Through carefully curated workshops, participants from varied backgrounds worked to identify specific problems, select technologies, and consider contextual factors within a creative framework. The workshops served as a platform for observing participant behaviors and goals in real-world settings, with researchers systematically collecting data through material engagement and visual problem-solving exercises. Drawing inspiration from concepts like Chindogu (Japanese "unuseless" inventions), the research demonstrates how reframing interaction design through craft-based methodologies can lead to more intuitive and contextually aware solutions. The findings highlight how interdisciplinary collaboration and sustainable, socially responsible design principles generate innovative solutions that effectively address user requirements. This integration of creative frameworks with physical and digital materials advances our understanding of meaningful technological interactions while establishing more holistic approaches to interactive system design that can inform future research directions in the field.

28 pages, 4588 KiB  
Article
Modeling and Co-Simulation of Fuzzy Logic Controller for Artificial Cybernetic Hand
by Michal Miloslav Uličný, Ján Cigánek, Vladimír Kutiš, Erik Kučera and Ján Šedivý
Multimodal Technol. Interact. 2025, 9(2), 12; https://doi.org/10.3390/mti9020012 - 7 Feb 2025
Viewed by 665
Abstract
This work focuses on the design and fuzzy logic-based control of a cybernetic hand, which was developed as a communication tool for sign language demonstration. The hand model, initially created in SOLIDWORKS and refined in MSC Adams, was integrated into a MATLAB Simulink simulation. A fuzzy logic controller was used to control finger movements, ensuring efficient and precise motion control. Testing confirmed the system's ability to achieve rapid and accurate responses, enabling the robotic hand to present sign language gestures effectively.
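To make the control idea concrete: a fuzzy controller maps an error signal through overlapping membership functions, applies a small rule base, and defuzzifies to a crisp command. The toy single-input controller below (triangular memberships, singleton consequents, weighted-average defuzzification) illustrates the principle only; the membership ranges, rules, and function names are invented for this sketch and are not the authors' MATLAB/Simulink design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(error):
    """Map a joint-angle error (degrees) to a motor command in [-1, 1].

    Three illustrative rules: negative error -> reverse, near-zero -> hold,
    positive error -> forward.
    """
    mu = {
        "neg":  tri(error, -90.0, -45.0, 0.0),
        "zero": tri(error, -10.0, 0.0, 10.0),
        "pos":  tri(error, 0.0, 45.0, 90.0),
    }
    out = {"neg": -1.0, "zero": 0.0, "pos": 1.0}     # singleton consequents
    total = sum(mu.values())
    if total == 0.0:
        return 0.0                                   # no rule fires: hold
    return sum(mu[k] * out[k] for k in mu) / total   # weighted-average defuzzification
```

A per-finger controller of this shape, driven by the difference between target and simulated joint angles, is one common way such motion control is structured.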

29 pages, 7285 KiB  
Review
Combining Artificial Intelligence with Augmented Reality and Virtual Reality in Education: Current Trends and Future Perspectives
by Georgios Lampropoulos
Multimodal Technol. Interact. 2025, 9(2), 11; https://doi.org/10.3390/mti9020011 - 28 Jan 2025
Cited by 1 | Viewed by 2347
Abstract
The combination of artificial intelligence with extended reality technologies can significantly impact the educational domain. This study presents an overview of the combination of artificial intelligence with augmented reality and virtual reality technologies and their integration in education through an analysis of the existing literature, examining 201 documents from Scopus and the Web of Science (WoS). It characterizes the document collection, highlights the most prevalent themes, areas, and topics, traces the thematic evolution of the field, reveals current challenges and limitations, and identifies emerging topics and future research directions. A significant annual growth rate (60.58%) was observed, indicating increasing interest in the topic. The analysis also revealed the potential of combining artificial intelligence with virtual reality and augmented reality technologies to provide personalized, affective, interactive, and immersive learning experiences across educational levels, in both formal and informal settings, supporting both teachers and students. Through this combination, intelligent tutoring systems (ITSs) can be created that offer behavioral, cognitive, and social personalization, have a virtual presence, and can effectively act as tutors or peer learners. Such ITSs can be characterized as affective and social entities that increase students' learning performance, motivation, and engagement and promote both self-directed and collaborative learning. This study also highlights the need to examine how the physical presence of some new technologies compares with the virtual presence offered by extended reality technologies in terms of overall learning outcomes and students' development.

18 pages, 12564 KiB  
Article
The Presenter in the Browser: Design and Evaluation of Human Interactive Overlays with Web Content
by Maxime Cordeil, Anais Servais, Guillaume Truong, Tim Dwyer, Dhaval Vyas and Christophe Hurter
Multimodal Technol. Interact. 2025, 9(2), 10; https://doi.org/10.3390/mti9020010 - 27 Jan 2025
Viewed by 858
Abstract
This research explores the design and evaluation of a webcam-based presentation tool that enables presenters to directly interact with web content via free-hand gestures. Our approach consists of overlaying the webcam video feed on top of web browser content to enable live presentations of any webpage. To support interactive presentations, we designed free-hand gesture interactions with the webpage to enable pointing, clicking, panning, and zooming. We propose three alternatives to enable free-hand clicking: dwell time, modal key control, and a pinching interaction technique. We conducted an exploratory user study of these alternative designs to gather insights on the usability of such systems from a presenter's point of view, with a focus on understanding the impact of the three techniques on flow interruptions. The results indicate that the proposed system can be used to deliver presentations effectively and that natural gestures do not disturb the flow of the presentation.
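Of the three clicking alternatives, dwell time is the most self-contained to illustrate: a click fires once the tracked hand position stays within a small radius for longer than a threshold. The sketch below shows one common dwell-time design; the class name, thresholds, and reset behavior are assumptions for illustration, not the paper's implementation.

```python
import math
import time

class DwellClicker:
    """Emit a 'click' when the pointer dwells within a small radius
    for longer than a threshold (one common dwell-time design)."""

    def __init__(self, dwell_seconds=1.0, radius_px=20):
        self.dwell_seconds = dwell_seconds
        self.radius_px = radius_px
        self._anchor = None      # (x, y) where the current dwell started
        self._start = None       # timestamp of dwell start
        self._fired = False      # ensure at most one click per dwell

    def update(self, x, y, now=None):
        """Feed one pointer sample; return True exactly once per dwell."""
        now = time.monotonic() if now is None else now
        if self._anchor is None or math.dist(self._anchor, (x, y)) > self.radius_px:
            # Pointer moved out of the dwell zone: restart the dwell timer.
            self._anchor, self._start, self._fired = (x, y), now, False
            return False
        if not self._fired and now - self._start >= self.dwell_seconds:
            self._fired = True
            return True
        return False
```

Each frame of hand-tracking output would be passed to `update()`, and a `True` return dispatched as a click to the underlying page.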

13 pages, 874 KiB  
Article
Intention to Work with Social Robots: The Role of Perceived Robot Use Self-Efficacy, Attitudes Towards Robots, and Beliefs in Human Nature Uniqueness
by Jean-Christophe Giger, Nuno Piçarra, Grzegorz Pochwatko, Nuno Almeida and Ana Susana Almeida
Multimodal Technol. Interact. 2025, 9(2), 9; https://doi.org/10.3390/mti9020009 - 21 Jan 2025
Viewed by 1321
Abstract
Recent studies have highlighted the crucial role of perceived robot use self-efficacy in human–robot interaction. This paper investigates the interplay between perceived robot use self-efficacy, attitudes towards robots, and beliefs in human nature uniqueness (BHNU) on the intention to work with social robots. Participants (N = 117) first filled out a questionnaire measuring their BHNU and attitudes towards robots. Then, they were randomly exposed to a video displaying a humanoid social robot (either humanlike or mechanical). Finally, participants indicated their robot use self-efficacy and their intention to work with the displayed social robot. Regression and serial mediation analyses showed the following: (1) the intention to work with social robots was significantly predicted by robot use self-efficacy and attitudes towards robots; (2) BHNU has a direct influence on attitudes towards robots and an indirect influence on the intention to work with social robots through attitudes towards robots and robot use self-efficacy. Our findings expand the current research on the impact of perceived robot use self-efficacy on the intention to work with social robots. Implications for human–robot interaction and human resource management are discussed.
