Human Empowerment through Mixed Reality

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 October 2022) | Viewed by 6561

Special Issue Editors


Guest Editor
Department of Industrial Engineering, MIRo Lab, University of Trento, via Sommarive 9, 38123 Trento, Italy
Interests: measurements; mobile robotics; mixed reality; 3D vision system; space applications

Guest Editor
MIRo (Measurement, Instrumentation and Robotics) laboratory, Department of Industrial Engineering (DII), University of Trento, 38123 Trento, Italy
Interests: mixed reality; human technologies; optimization algorithms; 3D vision system; human-machine interaction

Special Issue Information

Dear Colleagues,

Automation and robotics have fostered the strong growth of our society, taking over dirty, dangerous, and difficult jobs while increasing production rates. At the same time, an alarming societal challenge faces our communities: demand for human labor is decreasing, and power is concentrating in the hands of a few companies.

However, some emerging technologies have the potential to restore the centrality of the human role, and mixed reality (MR) is one of them. By adding virtual cues through animations of avatars or holograms in a real or virtual environment, by retrieving available information and superimposing it onto the current context, by showing predictive simulations, and by using augmented virtualized observations, to cite just a few examples, MR can empower human agents across a wide range of applications.

MR is thus a fertile research field, with applications ranging from Industry 4.0 scenarios such as smart manufacturing, collaborative robotics, location-based support for mobile devices, and assistance for industrial maintenance (e.g., inspection, diagnostics, and simulation) to rehabilitation and clinical practice, training, and teaching.

This Special Issue invites original papers on MR, with particular emphasis on the empowerment of human agents in the fields mentioned above.

Dr. Mariolino De Cecco
Dr. Alessandro Luchetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Augmented human
  • Mixed reality in Industry 4.0
  • Mixed reality in education
  • Mixed reality for training and support
  • Mixed reality in rehabilitation and clinics

Published Papers (2 papers)


Research

16 pages, 10485 KiB  
Article
Object Pose Detection to Enable 3D Interaction from 2D Equirectangular Images in Mixed Reality Educational Settings
by Matteo Zanetti, Alessandro Luchetti, Sharad Maheshwari, Denis Kalkofen, Manuel Labrador Ortega and Mariolino De Cecco
Appl. Sci. 2022, 12(11), 5309; https://doi.org/10.3390/app12115309 - 24 May 2022
Cited by 1 | Viewed by 1774
Abstract
In this paper, we address the challenge of estimating the 6DoF pose of objects in 2D equirectangular images. This solution allows the transition from an object’s current pose to its 3D model. In particular, it finds application in the educational use of 360° videos, where interaction with 3D virtual models makes the learning experience more engaging and immersive for students. We developed a general approach usable for any object and shape. The only requirement is an accurate CAD model of the item whose pose must be estimated, even without textures. The developed pipeline has two main steps: vehicle segmentation from the image background and estimation of the vehicle pose. For the first task, we used deep learning methods; for the second, we developed a 360° camera simulator in Unity to generate synthetic equirectangular images used for comparison. We conducted our tests using a miniature truck model whose CAD model was at our disposal. The developed algorithm was tested using a metrological analysis applied to real data. The results showed a mean difference from the ground truth of 1.5° with a standard deviation of 1° for rotations, and 1.4 cm with a standard deviation of 1.5 cm for translations, over a search range of ±20° and ±20 cm, respectively.
(This article belongs to the Special Issue Human Empowerment through Mixed Reality)
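As background to the abstract above: a 360° camera stores each frame as an equirectangular projection, in which pixel columns map linearly to longitude and rows to latitude, so every pixel corresponds to a viewing ray on the unit sphere. The following is a minimal sketch of that standard mapping; the function name and axis convention are our own illustration, not the authors' code.

```python
import math

def equirect_pixel_to_ray(u, v, width, height):
    """Map a pixel (u, v) in a width x height equirectangular image
    to a unit 3D viewing ray (x, y, z).

    Longitude spans [-pi, pi) left to right across the image width;
    latitude spans [pi/2, -pi/2] top to bottom down the image height.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    # Spherical-to-Cartesian conversion: y is up, z points along lon = 0.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image center looks straight ahead along +z.
print(equirect_pixel_to_ray(2048, 1024, 4096, 2048))
```

A simulator such as the Unity-based one described in the abstract effectively inverts this mapping: it renders the scene along each pixel's ray to produce synthetic equirectangular views for comparison against the real footage.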

18 pages, 2849 KiB  
Article
Virtual Reality as a Reflection Technique for Public Speaking Training
by Hangyu Zhou, Yuichiro Fujimoto, Masayuki Kanbara and Hirokazu Kato
Appl. Sci. 2021, 11(9), 3988; https://doi.org/10.3390/app11093988 - 28 Apr 2021
Cited by 13 | Viewed by 3659
Abstract
Video recording is one of the most commonly used techniques for reflection, because video allows people to see what they look like to others and how they could improve their performance. It is problematic, however, because some people easily fall into negative emotions and worry about their performance, resulting in low benefit. In this study, the possibility of applying a simple VR-based reflection method was explored. This method uses virtual reality (VR) and a head-mounted display (HMD) to allow presenters to watch their own presentations from the audience’s perspective, and it uses an avatar that hides the speaker's personal appearance (which has little relevance to presentation quality) to help reduce self-awareness during reflection. An experimental study was carried out considering four personal characteristics: gender, personal anxiety, personal confidence, and self-bias. The goal of this study is to discuss which populations can benefit more from this system and to assess the impact of the avatar and HMD-based VR. According to the results, individuals with low self-confidence in their public speaking skills benefited more in terms of self-evaluation from HMD-based VR reflection, while individuals with negative self-bias reduced more anxiety by using an avatar.
(This article belongs to the Special Issue Human Empowerment through Mixed Reality)
