Advances in Tangible and Embodied Interaction for Virtual and Augmented Reality

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 January 2023) | Viewed by 46014

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Jorge C. S. Cardoso
Guest Editor
Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, University of Coimbra, 3030-790 Coimbra, Portugal
Interests: human–computer interaction; virtual and augmented reality; tangible user interfaces; interaction techniques; interaction design

Dr. André Perrotta
Guest Editor
1. Research Center for Science and Technology of the Arts, School of Arts, Catholic University of Portugal, Portugal
2. Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal
Interests: human–computer interaction; brain–computer interaction; computer graphics; physical computing; digital fabrication

Dr. Paula Alexandra Silva
Guest Editor
Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal
Interests: human–computer interaction; active and healthy living; connected health; design and use of health and care technologies; virtual and augmented reality

Dr. Pedro Martins
Guest Editor
1. Department of Architecture, University of Coimbra, 3000-143 Coimbra, Portugal
2. Center for Studies in Architecture and Urbanism, Faculty of Architecture, University of Porto, 4150-564 Porto, Portugal
Interests: digital fabrication; generative design; augmented reality; human–robotic interaction; architecture; construction technology

Special Issue Information

Dear Colleagues,

Virtual and augmented reality technology has seen tremendous progress in recent years, enabling novel and exciting ways to interact inside virtual environments. An interesting approach to interaction in VR and AR is the use of tangible user interfaces, which leverage our natural understanding of the physical world, the way our bodies move and interact with it, and our learned ability to manipulate physical objects.

For this Special Issue, we are interested in gathering research that bridges atoms and bits in virtual and augmented environments. Basic research papers that demonstrate new interaction technologies or techniques, descriptions of applications that use tangibles in VR/AR in any domain, and survey papers that help structure and systematize current knowledge in this area are all suitable. Topics of interest include, but are not limited to:

  • Technologies for object detection/recognition in AR
  • Interaction techniques that use tangibles/embodied interactions
  • Haptic systems and interactions for VR/AR
  • Authoring systems
  • Programming libraries, toolkits, and frameworks
  • Applications in cultural heritage, medicine, rehabilitation, gaming, entertainment, teaching, training, visualization, etc.
  • Design methodologies
  • Evaluation methodologies

Dr. Jorge C. S. Cardoso
Dr. André Perrotta
Dr. Paula Alexandra Silva
Dr. Pedro Martins
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • VR/AR applications
  • brain science for VR/AR
  • VR/AR collaboration
  • computer graphics for VR/AR
  • tangible interaction
  • embodied interaction
  • multimodal VR/AR
  • haptics
  • human–computer interactions in VR/AR
  • human augmentations with VR/AR
  • human factors in VR/AR
  • programming libraries/toolkits for VR/AR
  • 360 video
  • 3D experiences
  • digital storytelling
  • mobile technologies
  • human–robot interactions with VR/AR

Published Papers (14 papers)


Editorial


4 pages, 170 KiB  
Editorial
Advances in Tangible and Embodied Interaction for Virtual and Augmented Reality
by Jorge C. S. Cardoso, André Perrotta, Paula Alexandra Silva and Pedro Martins
Electronics 2023, 12(8), 1893; https://doi.org/10.3390/electronics12081893 - 17 Apr 2023
Viewed by 1087
Abstract
Virtual Reality (VR) and Augmented Reality (AR) technologies have the potential to revolutionise the way we interact with digital content [...] Full article

Research


18 pages, 14556 KiB  
Article
Design and Implementation of Two Immersive Audio and Video Communication Systems Based on Virtual Reality
by Hanqi Zhang, Jing Wang, Zhuoran Li and Jingxin Li
Electronics 2023, 12(5), 1134; https://doi.org/10.3390/electronics12051134 - 26 Feb 2023
Cited by 3 | Viewed by 2213
Abstract
Due to the impact of the COVID-19 pandemic in recent years, remote communication has become increasingly common, which has also spawned many online solutions. Compared with in-person interaction, these solutions lack a feeling of immersion and participation, and the effect is thus not ideal. In this study, we focus on two typical virtual reality (VR) application scenarios with immersive audio and video experience: VR conferencing and panoramic live broadcast. We begin by introducing the core principles of traditional video conferencing, followed by existing research results on VR conferencing, along with the similarities, differences, pros, and cons of each solution. Then, we outline our view of what elements a virtual conferencing room should have. After that, a simple implementation scheme for VR conferencing is provided. Regarding panoramic video, we introduce the steps to produce and transmit a panoramic live broadcast and analyze several current mainstream encoding optimization schemes. By comparison with traditional video streams, the various development bottlenecks of panoramic live broadcast are identified and summarized. A simple implementation of a panoramic live broadcast is presented in this paper. To conclude, the main points are illustrated along with the possible future directions of the two systems. The simple implementation of the two immersive systems provides a research and application reference for VR audio and video transmission, which can guide subsequent relevant research studies. Full article
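Of the encoding optimization schemes surveyed here, tile-based viewport-dependent streaming is a representative example: only the equirectangular tiles the viewer can currently see are fetched at high quality. The sketch below illustrates that idea under stated assumptions (an 8×4 tile grid, a 90° field of view, and a rectangular-viewport approximation); it is not the authors' implementation.

```python
def visible_tiles(yaw_deg, pitch_deg, fov_h=90.0, fov_v=90.0, cols=8, rows=4):
    """Return (row, col) indices of the equirectangular tiles overlapping
    the viewport. Yaw in [-180, 180), pitch in [-90, 90]; grid size and
    field of view are illustrative choices."""
    tiles = set()
    tile_w = 360.0 / cols  # longitude span per tile (degrees)
    tile_h = 180.0 / rows  # latitude span per tile (degrees)
    steps = 16             # sample the viewport as an angular rectangle
    for i in range(steps + 1):
        for j in range(steps + 1):
            lon = yaw_deg + (i / steps - 0.5) * fov_h
            lat = pitch_deg + (j / steps - 0.5) * fov_v
            lon = (lon + 180.0) % 360.0 - 180.0   # wrap around +/-180
            lat = max(-90.0, min(90.0, lat))      # clamp at the poles
            col = int((lon + 180.0) / tile_w) % cols
            row = min(int((90.0 - lat) / tile_h), rows - 1)
            tiles.add((row, col))
    return sorted(tiles)

# Fetch these tiles in high quality; stream the rest at a low bitrate.
print(visible_tiles(yaw_deg=30.0, pitch_deg=10.0))
```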

16 pages, 3102 KiB  
Article
An Interactive Augmented Reality Graph Visualization for Chinese Painters
by Jingya Li and Zheng Wang
Electronics 2022, 11(15), 2367; https://doi.org/10.3390/electronics11152367 - 28 Jul 2022
Cited by 6 | Viewed by 2505
Abstract
Recent research in the area of immersive analytics has demonstrated the utility of augmented reality for data analysis. However, there is a lack of research on how to facilitate engaging, embodied, and interactive AR graph visualization. In this paper, we explored the design space for combining the capabilities of AR with node-link diagrams to create immersive data visualization. We first systematically described the design rationale and the design process of the mobile-based AR graph, including the layout, interactions, and aesthetics. Then, we validated the AR concept by conducting a user study with 36 participants to examine users' behaviors with an AR graph and a 2D graph. The results of our study showed the feasibility of using an AR graph to present data relations and also revealed interaction challenges in terms of effectiveness and usability on mobile devices. Third, we iterated on the AR graph by implementing embodied interactions with hand gestures and addressing the connection between physical objects and the digital graph. This study is the first step in our research, aiming to guide the design of immersive AR data visualization applications in the future. Full article
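As a rough illustration of the kind of pipeline such a mobile AR graph needs, the sketch below computes a conventional force-directed layout and lifts it into world-anchored 3D coordinates. The anchor point, scale, and use of networkx are illustrative assumptions, not details from the paper.

```python
import networkx as nx

def ar_graph_layout(graph, anchor=(0.0, 1.5, -1.0), scale=0.5):
    """Compute a force-directed 2D layout and lift it into 3D world
    coordinates (metres) around an AR anchor. Anchor and scale are
    illustrative; a real app would take them from a detected plane or
    a tracked physical object."""
    pos2d = nx.spring_layout(graph, seed=42)  # {node: (x, y)} in [-1, 1]
    ax, ay, az = anchor
    return {node: (ax + scale * x, ay + scale * y, az)  # flat billboard
            for node, (x, y) in pos2d.items()}

g = nx.les_miserables_graph()  # sample co-occurrence network
print(list(ar_graph_layout(g).items())[:3])
```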

22 pages, 11493 KiB  
Article
Situating Learning in AR Fantasy, Design Considerations for AR Game-Based Learning for Children
by Tengjia Zuo, Jixiang Jiang, Erik Van der Spek, Max Birk and Jun Hu
Electronics 2022, 11(15), 2331; https://doi.org/10.3390/electronics11152331 - 27 Jul 2022
Cited by 5 | Viewed by 2136
Abstract
(1) Background: Augmented reality (AR) game-based learning has received increased attention in recent years. Fantasy is a vital gaming feature that promotes engagement and immersive experiences for children. However, situating learning in AR fantasy so that it engages learners and fits pedagogical contexts requires structured analysis of educational scenarios for different subjects. (2) Methods: We present a combined study using two AR games we built: MathMythosAR2 for mathematics learning and FancyBookAR for learning English as a second language. For each game, we created a fantasy and a real-life narrative. We investigated player engagement and teachers' scaffolding through qualitative and quantitative research with 62 participants aged 7 to 11 years old. (3) Results: We discovered that fantasy narratives engage students in mathematics learning while disengaging them in second-language learning. Participants reported higher imagination with fantasy narratives and higher analogy with real-life narratives. We found that teachers' scaffolding for MathMythosAR2 focused on complex interactions, while for FancyBookAR it focused on story interpretation and knowledge explanation. (4) Conclusions: We recommend mixing fantasy and real-life settings and using simple AR interactions and pedagogical agents that enable teachers' scaffolding seamlessly. The design of AR fantasy should evaluate whether the story is intrinsically related to the learning subject, as well as the requirements for explicit explanation. Full article

17 pages, 5039 KiB  
Article
Development of a Virtual Object Weight Recognition Algorithm Based on Pseudo-Haptics and the Development of Immersion Evaluation Technology
by Eunjin Son, Hayoung Song, Seonghyeon Nam and Youngwon Kim
Electronics 2022, 11(14), 2274; https://doi.org/10.3390/electronics11142274 - 21 Jul 2022
Cited by 2 | Viewed by 2028
Abstract
In this work, we propose a qualitative immersion evaluation technique based on a pseudo-haptic, user-specific virtual object weight recognition algorithm and an immersive experience questionnaire (IEQ). The proposed weight recognition algorithm considers the moving speed of a user-customized virtual object, tracked via natural hand tracking with the camera of a VR headset, together with the realistic offset of the object's weight when it is lifted in real space. Customized speeds are defined to recognize customized weights. In addition, an experiment measuring the speed of lifting objects of different weights in real space is conducted to obtain the natural lifting speed for each weight. To evaluate the weight perception and immersion of the developed simulation content, the participants' qualitative immersion is evaluated through three IEQ-based surveys. Based on the analysis of the experimental results and participant interviews, this immersion evaluation technique shows whether a realistic tactile experience in VR content can be evaluated. It is expected that the proposed weight recognition algorithm and evaluation technology can be applied to various fields, such as content production and service support, in line with market demand in the rapidly growing VR, AR, and MR fields. Full article
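The pseudo-haptic principle underlying the algorithm can be sketched as a control-display gain: the virtual object visibly trails the tracked hand more as the assigned weight grows, so users infer mass from the mismatch. A minimal sketch, with an illustrative gain constant rather than the paper's calibrated speed-weight mapping:

```python
def display_speed(hand_speed, weight_kg, k=0.35):
    """Pseudo-haptic control-display gain: attenuate the virtual object's
    displayed speed as weight grows, so heavier objects feel heavier.
    hand_speed is in m/s from hand tracking; k is an illustrative tuning
    constant, not the paper's calibrated value."""
    gain = 1.0 / (1.0 + k * weight_kg)  # 1.0 at 0 kg, smaller when heavier
    return hand_speed * gain

# A 0.5 m/s lift is rendered slower for "heavier" virtual objects:
for w in (0.5, 2.0, 5.0):
    print(f"{w:>4} kg -> {display_speed(0.5, w):.3f} m/s displayed")
```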

11 pages, 2468 KiB  
Article
A 3D Image Registration Method for Laparoscopic Liver Surgery Navigation
by Donghui Li and Monan Wang
Electronics 2022, 11(11), 1670; https://doi.org/10.3390/electronics11111670 - 24 May 2022
Cited by 3 | Viewed by 2617
Abstract
At present, laparoscopic augmented reality (AR) navigation is applied to minimally invasive abdominal surgery, helping doctors to see the location of blood vessels and tumors in organs so as to perform precise surgical operations. Image registration is the process of optimally mapping one or more images to the target image, and it is also the core of laparoscopic AR navigation. The key is how to shorten the registration time and optimize the registration accuracy. We studied three-dimensional (3D) image registration for laparoscopic liver surgery navigation and propose a new registration method combining rough registration and fine registration. First, the adaptive fireworks algorithm (AFWA) is applied for rough registration, and then an optimized iterative closest point (ICP) algorithm is applied for fine registration. The proposed method is validated on the computed tomography (CT) dataset 3D-IRCADb-01. Experimental results show that our method is superior to other registration methods based on stochastic optimization algorithms in terms of registration time and accuracy. Full article
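For reference, the fine-registration stage rests on the classic ICP loop: find closest-point correspondences, solve the rigid transform in closed form (Kabsch/SVD), apply it, and repeat. A plain-vanilla NumPy/SciPy sketch follows; it omits the paper's AFWA initialization and their ICP optimizations:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Rigid ICP aligning source (N,3) to target (M,3) point clouds.
    Returns (R, t) such that source @ R.T + t approximates target."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        _, idx = tree.query(src)        # closest-point correspondences
        matched = target[idx]
        # Kabsch: closed-form rigid transform between matched sets.
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T              # reflection-safe rotation
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.linalg.norm(src - matched, axis=1))
        if abs(prev_err - err) < tol:   # converged
            break
        prev_err = err
    return R_total, t_total
```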

23 pages, 1343 KiB  
Article
Gaze-Based Interaction Intention Recognition in Virtual Reality
by Xiao-Lin Chen and Wen-Jun Hou
Electronics 2022, 11(10), 1647; https://doi.org/10.3390/electronics11101647 - 21 May 2022
Cited by 4 | Viewed by 2763
Abstract
With the increasing need for eye tracking in head-mounted virtual reality displays, the gaze-based modality has the potential to predict user intention and unlock intuitive new interaction schemes. In the present work, we explore whether gaze-based data and hand-eye coordination data can predict a user's interaction intention with the digital world, which could be used to develop predictive interfaces. We validate this on eye-tracking data collected from 10 participants in item selection and teleporting tasks in virtual reality. We demonstrate successful prediction of the onset of item selection and teleporting with a 0.943 F1-score using a Gradient Boosting Decision Tree, the best among the four classifiers compared, while the Support Vector Machine has the smallest model size. The results also show that hand-eye-coordination-related features improve interaction intention recognition in virtual reality environments. Full article
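The reported classifier setup can be reproduced in outline with scikit-learn: a gradient-boosted model over per-window gaze and hand-eye features, scored by F1. The synthetic features below are placeholders; the paper's actual feature set and hyperparameters are not given in this listing.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical per-window features: fixation duration, saccade amplitude,
# pupil-size change, hand-eye angular offset, hand speed.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))  # paper reports 0.943
```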

14 pages, 14941 KiB  
Article
Personalized Virtual Reality Environments for Intervention with People with Disability
by Manuel Lagos Rodríguez, Ángel Gómez García, Javier Pereira Loureiro and Thais Pousada García
Electronics 2022, 11(10), 1586; https://doi.org/10.3390/electronics11101586 - 16 May 2022
Cited by 3 | Viewed by 2431
Abstract
Background: Virtual reality (VR) is a technological resource that allows the generation of highly realistic environments while achieving user immersion. The purpose of this project is to use VR as a complementary tool in the rehabilitation process of people with physical and cognitive disabilities. An approach based on performing activities of daily living is proposed. Methods: Through joint work between health and IT professionals, the VR scenarios and skills to be trained were defined. We organized discussion groups in which health professionals and users with spinal injury, stroke, or cognitive impairment participated. A testing phase was then carried out and analyzed from a qualitative perspective. As materials, Unity was used as the development platform, HTC VIVE as the VR system, and Leap Motion as the hand-tracking device and means of interacting with the scenarios. Results: A VR application was developed, consisting of four scenarios that allow for practicing different activities of daily living. Three scenarios are focused on hand mobility rehabilitation, while the remaining scenario is intended to work on a cognitive skill related to identifying the elements needed to perform a task. Conclusions: Performing activities of daily living in VR environments provides an enjoyable, motivating, and safe means of rehabilitation for people with disabilities and is a valuable source of information for healthcare professionals to assess a patient's evolution. Full article

12 pages, 11013 KiB  
Article
Preoperative Virtual Reality Surgical Rehearsal of Renal Access during Percutaneous Nephrolithotomy: A Pilot Study
by Ben Sainsbury, Olivia Wilz, Jing Ren, Mark Green, Martin Fergie and Carlos Rossa
Electronics 2022, 11(10), 1562; https://doi.org/10.3390/electronics11101562 - 13 May 2022
Cited by 5 | Viewed by 2069
Abstract
Percutaneous Nephrolithotomy (PCNL) is a procedure used to treat kidney stones. In PCNL, a needle punctures the kidney through an incision in the patient's back, and thin tools are threaded through the incision to gain access to kidney stones for removal. Despite being one of the main endoscopic procedures for managing kidney stones, PCNL remains a difficult procedure to learn, with a long and steep learning curve. Virtual reality simulation with haptic feedback is emerging as a new method for PCNL training. It offers benefits for both novices and experienced surgeons: novices can practice gaining kidney access in a variety of simulation scenarios without posing any risk to patients, while experienced surgeons can use the simulator for preoperative surgical rehearsal. This paper proposes the first preliminary study of PCNL surgical rehearsal using the Marion Surgical PCNL simulator. Preoperative CT scans of a patient scheduled to undergo PCNL are used in the simulator to create a 3D model of the renal system. An experienced surgeon then planned and practiced the procedure in the simulator before performing the surgery in the operating room. This is the first study of surgical rehearsal using a combination of VR and haptic feedback in PCNL before surgery. Preliminary results confirm that surgical rehearsal using a combination of virtual reality and haptic feedback strongly affects decision making during the procedure. Full article

13 pages, 3171 KiB  
Article
Digital Taste in Mulsemedia Augmented Reality: Perspective on Developments and Challenges
by Angel Swastik Duggal, Rajesh Singh, Anita Gehlot, Mamoon Rashid, Sultan S. Alshamrani and Ahmed Saeed AlGhamdi
Electronics 2022, 11(9), 1315; https://doi.org/10.3390/electronics11091315 - 21 Apr 2022
Cited by 16 | Viewed by 2424
Abstract
Digitalization of human taste had been on the back burner of multi-sensory media until the beginning of the decade, with audio, video, and haptic input/output (I/O) taking over as the major sensory mechanisms. This article reviews the consolidated literature on augmented reality (AR) in the modulation and stimulation of the sensation of taste in humans using low-amplitude electrical signals. Describing the multiple factors that combine to produce a single taste, various techniques to stimulate or modulate taste artificially are described. The article explores techniques from prominent research pools with an inclination towards taste modulation. The goal is to seamlessly integrate gustatory augmentation into the commercial market. It highlights core benefits and limitations and proposes feasible extensions to the already established technological architecture for taste stimulation and modulation, namely from the Internet of Things, artificial intelligence, and machine learning. Past research on taste has had a more software-oriented approach, with a few exceptions presented as taste-modulation hardware. Using modern technological extensions, the medium of taste has the potential to merge with audio and video data streams as a viable multichannel medium for the transfer of sensory information. Full article

14 pages, 3222 KiB  
Article
Visual Positioning System Based on 6D Object Pose Estimation Using Mobile Web
by Ju-Young Kim, In-Seon Kim, Dai-Yeol Yun, Tae-Won Jung, Soon-Chul Kwon and Kye-Dong Jung
Electronics 2022, 11(6), 865; https://doi.org/10.3390/electronics11060865 - 9 Mar 2022
Cited by 2 | Viewed by 5001
Abstract
Recently, the demand for location-based services using mobile devices in indoor spaces without a global positioning system (GPS) has increased. However, to the best of our knowledge, no solution fully applicable to indoor positioning and navigation ensures real-time mobility on mobile devices the way global navigation satellite system (GNSS) solutions do outdoors. Indoor single-shot image positioning using smartphone cameras does not require a dedicated infrastructure and offers the advantages of low price and a large potential market owing to the popularization of smartphones. However, existing methods and systems based on smartphone cameras and image algorithms encounter various limitations when implemented in indoor environments. To address this, we designed an indoor visual positioning system (VPS) for mobile devices that can locate users in indoor scenes. The proposed method uses a smartphone camera to detect objects through a single image in a web environment and calculates the location of the smartphone to find users in an indoor space. The system is inexpensive because it integrates deep learning and computer vision algorithms and does not require additional infrastructure. We present a novel method of detecting 3D model objects from single-shot RGB data, estimating the 6D pose and position of the camera, and correcting errors based on voxels. To this end, a popular convolutional neural network (CNN) is adapted for real-time pose estimation, handling the entire 6D pose to estimate the location and orientation of the camera. The estimated camera position is mapped to a voxel address to determine a stable user position. Our VPS provides the user with indoor information as a 3D AR model. The voxel address optimization approach with 6D camera pose estimation from RGB images in a mobile web environment outperforms current state-of-the-art methods using RGB-depth or point-cloud data in real-time performance and accuracy. Full article
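The voxel-addressing step, snapping a noisy camera position estimate to a discrete cell so the reported user location stays stable, can be sketched in a few lines. The 0.5 m voxel size and the majority-vote smoothing window are illustrative assumptions, not values from the paper.

```python
from collections import Counter, deque

def voxel_address(x, y, z, size=0.5):
    """Quantize a camera position (metres) into a discrete voxel index.
    The 0.5 m edge length is an illustrative assumption."""
    return (int(x // size), int(y // size), int(z // size))

class StablePosition:
    """Majority vote over the last k voxel addresses: jittery pose
    estimates only move the reported user position once a new voxel
    dominates the window."""
    def __init__(self, k=8):
        self.history = deque(maxlen=k)

    def update(self, x, y, z):
        self.history.append(voxel_address(x, y, z))
        return Counter(self.history).most_common(1)[0][0]

pos = StablePosition()
for est in [(3.02, 1.40, 7.90), (3.05, 1.41, 7.88), (3.49, 1.40, 7.91)]:
    print(pos.update(*est))  # all three estimates stay in voxel (6, 2, 15)
```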

14 pages, 2923 KiB  
Article
Effects of Using Vibrotactile Feedback on Sound Localization by Deaf and Hard-of-Hearing People in Virtual Environments
by Mohammadreza Mirzaei, Peter Kán and Hannes Kaufmann
Electronics 2021, 10(22), 2794; https://doi.org/10.3390/electronics10222794 - 15 Nov 2021
Cited by 7 | Viewed by 2879
Abstract
Sound source localization is important for spatial awareness and immersive Virtual Reality (VR) experiences. Deaf and Hard-of-Hearing (DHH) persons have limitations in completing sound-related VR tasks efficiently because they perceive audio information differently. This paper presents and evaluates a special haptic VR suit that helps DHH persons efficiently complete sound-related VR tasks. Our proposed VR suit receives sound information from the VR environment wirelessly and indicates the direction of the sound source to the DHH user by using vibrotactile feedback. Our study suggests that using different setups of the VR suit can significantly improve VR task completion times compared to not using a VR suit. Additionally, the results of mounting haptic devices on different positions of users’ bodies indicate that DHH users can complete a VR task significantly faster when two vibro-motors are mounted on their arms and ears compared to their thighs. Our quantitative and qualitative analysis demonstrates that DHH persons prefer using the system without the VR suit and prefer mounting vibro-motors in their ears. In an additional study, we did not find a significant difference in task completion time when using four vibro-motors with the VR suit compared to using only two vibro-motors in users’ ears without the VR suit. Full article
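The core cue the suit delivers, turning a sound source's bearing into per-motor vibration intensity, can be sketched as an angular falloff across the mounted motors. The motor bearings and falloff width below are illustrative, not the authors' hardware layout.

```python
# Illustrative motor bearings in degrees clockwise from straight ahead:
MOTORS = {"left_arm": -90.0, "left_ear": -30.0,
          "right_ear": 30.0, "right_arm": 90.0}

def motor_intensities(source_bearing_deg, width=60.0):
    """Map a sound source bearing to a 0..1 vibration intensity per motor,
    fading linearly with angular distance from the source."""
    out = {}
    for name, bearing in MOTORS.items():
        diff = (source_bearing_deg - bearing + 180.0) % 360.0 - 180.0
        out[name] = max(0.0, 1.0 - abs(diff) / width)
    return out

# A source 40 degrees to the right drives mostly the right-ear motor:
print(motor_intensities(40.0))
```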

Other


22 pages, 1401 KiB  
Systematic Review
Virtual/Augmented Reality for Rehabilitation Applications Using Electromyography as Control/Biofeedback: Systematic Literature Review
by Cinthya Lourdes Toledo-Peral, Gabriel Vega-Martínez, Jorge Airy Mercado-Gutiérrez, Gerardo Rodríguez-Reyes, Arturo Vera-Hernández, Lorenzo Leija-Salas and Josefina Gutiérrez-Martínez
Electronics 2022, 11(14), 2271; https://doi.org/10.3390/electronics11142271 - 20 Jul 2022
Cited by 22 | Viewed by 4401
Abstract
Virtual reality (VR) and augmented reality (AR) are engaging interfaces that can benefit rehabilitation therapy. However, they are still not widely used, and the use of surface electromyography (sEMG) signals with them is not established. Our goal is to explore whether there is a standardized protocol for therapeutic applications, since there are few methodological reviews that focus on sEMG control/feedback. A systematic literature review using the PRISMA (preferred reporting items for systematic reviews and meta-analyses) methodology was conducted. A Boolean search of databases was performed applying inclusion/exclusion criteria; repeated articles and articles older than 5 years were excluded. A total of 393 articles were selected for screening, of which 66.15% were excluded; 131 records were eligible, but 69.46% of these used neither VR/AR interfaces nor sEMG control, leaving 40 articles. The categories were as follows. Application: neurological motor rehabilitation (70%), prosthesis training (30%). Processing algorithm: artificial intelligence (40%), direct control (20%). Hardware: Myo Armband (22.5%), Delsys (10%), proprietary (17.5%). VR/AR interface: training scene model (25%), videogame (47.5%), first-person (20%). Finally, applications are focused on motor neurorehabilitation after stroke or amputation; however, there is no consensus regarding signal processing or classification criteria. Future work should propose guidelines to standardize these technologies for their adoption in clinical practice. Full article

14 pages, 734 KiB  
Perspective
Virtual Reality for Safe Testing and Development in Collaborative Robotics: Challenges and Perspectives
by Sergi Bermúdez i Badia, Paula Alexandra Silva, Diogo Branco, Ana Pinto, Carla Carvalho, Paulo Menezes, Jorge Almeida and Artur Pilacinski
Electronics 2022, 11(11), 1726; https://doi.org/10.3390/electronics11111726 - 29 May 2022
Cited by 13 | Viewed by 4844
Abstract
Collaborative robots (cobots) could help humans in tasks that are mundane, dangerous, or where direct human contact carries risk. Yet, collaboration between humans and robots is severely limited by concerns about the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) for simulating collaboration scenarios and on the use of cobot digital twins. This is specifically useful in situations that are difficult or even impossible to safely test in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans at harm. We show how an XR setting enables combining human behavioral data, subjective self-reports, and biosignals signifying human comfort, stress, and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and provide them with augmented reality (AR) interfaces to enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations. Full article
