Topic Editors

Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi 24, I-10129 Torino, Italy

Augmented and Mixed Reality

Abstract submission deadline
closed (31 March 2022)
Manuscript submission deadline
closed (31 May 2022)
Viewed by
41,494

Topic Information

Dear Colleagues,

The aim of this article collection is to contribute to the advancement of the Augmented and Mixed Reality fields and their impact on areas such as industry, entertainment, tourism, medicine, and architecture.

Augmented and mixed reality technologies allow researchers and practitioners to design and develop applications that augment the real world with computer-generated assets. Different paradigms can be employed: hand-held, see-through, and projected. Different devices support these paradigms, such as smartphones, tablets, headsets, and projectors. Although a great deal of work has already been carried out and many scientific publications exist in the literature, several open problems and exciting challenges still need to be addressed.

AR and MR technologies are advancing very quickly, and they are becoming part of our everyday lives. Technological improvements in smartphones and tablets have boosted the spread of applications able to augment the real world; moreover, AR is one of the pillars of Industry 4.0, and new interfaces supporting human–machine and human–robot interaction could strongly benefit from augmentation. In addition, the integration of AR/MR technologies with other input interfaces (speech, gesture, gaze, brain, etc.) can open new scenarios, allowing users to take advantage of novel interaction paradigms never experienced before.

Despite all of the exciting opportunities related to AR/MR, several problems need to be tackled. Content generation, tracking systems, limited field of view, usability, technology acceptance, and affordability are just some of the issues to be addressed. In this context, this Topic offers a framework for integrating augmented and mixed reality with other technologies, providing technological solutions, proposing new application areas, and defining new evaluation approaches. Topics of interest include, but are not limited to:

General topics:

Indoor and outdoor tracking systems;

Content generation for AR/MR;

AR/MR for industry;

AR/MR for medicine;

AR/MR for entertainment and tourism;

AR/MR for broadcasting;

AR/MR for architecture;

AR/MR for cultural heritage;

AR/MR for military applications;

AR/MR for education and training.

Particular Themes:

Augmented reality solutions to support assembly, repair and maintenance tasks;

Design of adaptive AR interfaces;

Human–robot interaction supported by AR;

Integration of AR/MR with other input technologies (e.g., speech, gaze, brain, gesture);

Augmented reality-based games;

Shared augmented and virtual environments;

Application of AR to support learning;

Augmented reality in surgery;

Advanced technologies in AR;

Combining artificial intelligence and augmented reality;

Augmented reality in advertising, shopping and e-commerce;

Augmented reality frameworks/libraries;

Augmented reality for situational awareness: tactical augmented reality;

Augmented reality in sport apps;

Augmented reality as a tool to foster multi-user collaboration.

Prof. Dr. Andrea Sanna
Dr. Federico Manuri
Dr. Francesco De Pace
Topic Editors

Keywords

  • augmented reality
  • mixed reality
  • tracking systems
  • Industry 4.0
  • augmented reality smart glasses
  • artificial intelligence
  • computer vision
  • data visualization
  • education

Participating Journals

| Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC |
|---|---|---|---|---|---|
| Sensors | 3.4 | 7.3 | 2001 | 16.8 Days | CHF 2600 |
| Future Internet | 2.8 | 7.1 | 2009 | 13.1 Days | CHF 1600 |
| Information | 2.4 | 6.9 | 2010 | 14.9 Days | CHF 1600 |
| Journal of Imaging | 2.7 | 5.9 | 2015 | 20.9 Days | CHF 1800 |
| Multimodal Technologies and Interaction | 2.4 | 4.9 | 2017 | 14.5 Days | CHF 1600 |

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

29 pages, 1586 KiB  
Article
Current Challenges and Future Research Directions in Augmented Reality for Education
by Muhammad Zahid Iqbal, Eleni Mangina and Abraham G. Campbell
Multimodal Technol. Interact. 2022, 6(9), 75; https://doi.org/10.3390/mti6090075 - 1 Sep 2022
Cited by 41 | Viewed by 9600
Abstract
The progression and adoption of innovative learning methodologies signify that a receptive part of society is open to new technologies and ideas and is thus advancing. The latest innovation in teaching is the use of Augmented Reality (AR). Applications using this technology have been deployed successfully in STEM (Science, Technology, Engineering, and Mathematics) education for delivering the practical and creative parts of teaching. Since AR technology already has a large volume of published studies about education reporting advantages, limitations, effectiveness, and challenges, classifying these projects allows for a review of its success in different educational settings and reveals current challenges and future research areas. Due to COVID-19, the landscape of technology-enhanced learning has shifted more toward blended learning, personalized learning spaces, and user-centered approaches with safety measures. The main findings of this paper include a review of the current literature, an investigation of the challenges, the identification of future research areas, and, finally, a report on the development of two case studies that highlight the first steps needed to address these research areas. The result of this research ultimately details the research gap that must be addressed to facilitate real-time touchless hand interaction, kinesthetic learning, and machine learning agents within a remote learning pedagogy.
(This article belongs to the Topic Augmented and Mixed Reality)

28 pages, 2727 KiB  
Article
AtAwAR Translate: Attention-Aware Language Translation Application in Augmented Reality for Mobile Phones
by Lisa-Marie Vortmann, Pascal Weidenbach and Felix Putze
Sensors 2022, 22(16), 6160; https://doi.org/10.3390/s22166160 - 17 Aug 2022
Cited by 4 | Viewed by 2153
Abstract
As lightweight, low-cost EEG headsets emerge, the feasibility of consumer-oriented brain–computer interfaces (BCI) increases. The combination of portable smartphones and easy-to-use EEG dry electrode headbands offers intriguing new applications and methods of human–computer interaction. In previous research, augmented reality (AR) scenarios have been identified to profit from additional user state information—such as that provided by a BCI. In this work, we implemented a system that integrates user attentional state awareness into a smartphone application for an AR written language translator. The attentional state of the user is classified in terms of internally and externally directed attention by using the Muse 2 electroencephalography headband with four frontal electrodes. The classification results are used to adapt the behavior of the translation app, which uses the smartphone’s camera to display translated text as augmented reality elements. We present the first mobile BCI system that uses a smartphone and a low-cost EEG device with few electrodes to provide attention awareness to an AR application. Our case study with 12 participants did not fully support the assumption that the BCI improves usability. However, we are able to show that the classification accuracy and ease of setup are promising paths toward mobile consumer-oriented BCI usage. For future studies, other use cases, applications, and adaptations will be tested for this setup to explore the usability.
(This article belongs to the Topic Augmented and Mixed Reality)

16 pages, 12282 KiB  
Article
Practical Application of Augmented/Mixed Reality Technologies in Surgery of Abdominal Cancer Patients
by Vladimir M. Ivanov, Anton M. Krivtsov, Sergey V. Strelkov, Anton Yu. Smirnov, Roman Yu. Shipov, Vladimir G. Grebenkov, Valery N. Rumyantsev, Igor S. Gheleznyak, Dmitry A. Surov, Michail S. Korzhuk and Valery S. Koskin
J. Imaging 2022, 8(7), 183; https://doi.org/10.3390/jimaging8070183 - 30 Jun 2022
Cited by 10 | Viewed by 2885
Abstract
The technology of augmented and mixed reality (AR/MR) is useful in various areas of modern surgery. We considered the use of augmented and mixed reality technologies as a method of preoperative planning and intraoperative navigation in abdominal cancer patients. Practical use of AR/MR raises a range of questions, which demand suitable solutions. The difficulties and obstacles we encountered in the practical use of AR/MR are presented, along with the ways we chose to overcome them. The most demonstrative case is covered in detail. The three-dimensional anatomical model obtained from the CT scan needed to be rigidly attached to the patient’s body; therefore, an invasive approach was developed, using an orthopedic pin fixed to the pelvic bones. The pin is used both as an X-ray contrast marker and as a marker for augmented reality. This solution made it possible not only to visualize the anatomical structures of the patient and the border zone of the tumor, but also to change the position of the patient during the operation. In addition, a noninvasive (skin-based) marking method was developed that allows the application of mixed and augmented reality during the operation. Both techniques were used (8 clinical cases) for preoperative planning and intraoperative navigation, which allowed surgeons to verify the radicality of the operation, to maintain visual control of all anatomical structures near the zone of interest, and to reduce the time of surgical intervention, thereby reducing the complication rate and improving the rehabilitation period.
(This article belongs to the Topic Augmented and Mixed Reality)

15 pages, 4173 KiB  
Article
A Multi-User Collaborative AR System for Industrial Applications
by Junyi Wang and Yue Qi
Sensors 2022, 22(4), 1319; https://doi.org/10.3390/s22041319 - 9 Feb 2022
Cited by 18 | Viewed by 4220
Abstract
Augmented reality (AR) applications are increasingly being used in various fields (e.g., design, maintenance, assembly, repair, and training), as AR techniques help improve efficiency and reduce costs. Moreover, collaborative AR systems extend this applicability by allowing collaborative environments for different roles. In this paper, we propose a multi-user collaborative AR system (called MUCSys), composed of three ends: MUCStudio, MUCView, and MUCServer. MUCStudio constructs industrial content with CAD model transformation, simplification, database updating, marker design, scene editing, and exportation, while MUCView handles sensor data analysis, real-time localization, scene loading, annotation editing, and virtual–real rendering. MUCServer, as the bridge between MUCStudio and MUCView, provides collaborative and database services. To achieve this, we implemented algorithms for local map establishment, global map registration, optimization, and network synchronization. The system provides AR services for diverse industrial processes via three collaboration modes: remote support, collaborative annotation, and editing. Using the system, applications for cutting machines were presented to improve efficiency and reduce costs, covering cutting head design, production line sales, and cutting machine inspection. Finally, a user study was performed to evaluate the usage experience of the system.
(This article belongs to the Topic Augmented and Mixed Reality)

20 pages, 24430 KiB  
Article
Multisensory Testing Framework for Advanced Driver Assistant Systems Supported by High-Quality 3D Simulation
by Paweł Jabłoński, Joanna Iwaniec and Michał Jabłoński
Sensors 2021, 21(24), 8458; https://doi.org/10.3390/s21248458 - 18 Dec 2021
Cited by 2 | Viewed by 4991
Abstract
ADAS and autonomous technologies in vehicles are becoming more and more complex, which increases development time and expense. This paper presents a new real-time ADAS multisensory validation system that can speed up the development and implementation processes while lowering their cost. The proposed test system integrates the high-quality 3D CARLA simulator with a real-time automation platform. We present experimental verifications of the system on several types of sensors and testing system architectures. The first, open-loop experiment demonstrates the real-time capabilities of the system based on Mobileye 6 camera sensor detections. The second experiment runs a real-time closed-loop test of a lane-keeping algorithm (LKA) based on Mobileye 6 line detection. The last experiment presents a simulation of a Velodyne VLP-16 lidar running a free-space detection algorithm. The simulated lidar output is compared with the real lidar’s performance. We show that the platform generates reproducible results and allows closed-loop operation, which, combined with real-time collection of event information, promises good scalability toward testing complex ADAS or autonomous functionalities.
(This article belongs to the Topic Augmented and Mixed Reality)

19 pages, 13007 KiB  
Article
An Augmented Reality Periscope for Submarines with Extended Visual Classification
by André Breitinger, Esteban Clua and Leandro A. F. Fernandes
Sensors 2021, 21(22), 7624; https://doi.org/10.3390/s21227624 - 17 Nov 2021
Cited by 5 | Viewed by 6499
Abstract
Submarines are considered extremely strategic for any navy due to their stealth capability. Periscopes are crucial sensors for these vessels, and rising to the surface or to periscope depth is required to identify visual contacts through this device. This maneuver involves many procedures and usually has to be fast and agile to avoid exposure. This paper presents and implements a novel architecture for real submarine periscopes developed for future Brazilian naval fleet operations. Our system consists of a probe that is connected to the craft and carries a 360° camera. We project the images inside the vessel using traditional VR/XR devices. We also propose and implement an efficient computer-vision-based MR technique to estimate and display detected vessels effectively and precisely. The vessel detection model is trained using synthetic images; to this end, we built and made available a dataset composed of 99,000 images. Finally, we also estimate the distances of the classified elements, showing all of the information in an AR-based interface. Although the probe is wire-connected, it allows the vessel to remain at depth, reducing its exposure and introducing a new approach to submarine maneuvers and operations. We validate our proposal through a user experience experiment with 19 experts in periscope operations.
(This article belongs to the Topic Augmented and Mixed Reality)

24 pages, 5514 KiB  
Article
CultReal—A Rapid Development Platform for AR Cultural Spaces, with Fused Localization
by Anca Morar, Maria-Anca Băluțoiu, Alin Moldoveanu, Florica Moldoveanu and Alex Butean
Sensors 2021, 21(19), 6618; https://doi.org/10.3390/s21196618 - 5 Oct 2021
Cited by 4 | Viewed by 2216
Abstract
Virtual and augmented reality technologies have seen impressive market growth due to their potential to provide immersive experiences. However, they still face significant difficulties in enabling fully fledged, consumer-ready applications that can handle complex tasks such as multi-user collaboration or time-persistent experiences. In this context, CultReal is a rapid creation and deployment platform for augmented reality (AR) that aims to revitalize cultural spaces. The platform’s content management system stores a representation of the environment, together with a database of multimedia objects that can be associated with a location. The localization component fuses data from beacons and from video cameras, providing an accurate estimate of the position and orientation of the visitor’s smartphone. A mobile application running the localization component displays the augmented content, which is seamlessly integrated with the real world. The paper focuses on the series of steps required to compute the position and orientation of the user’s mobile device, providing a comprehensive evaluation with both virtual and real data. Pilot implementations of the system are also described, revealing the platform’s potential to enable rapid deployment in new cultural spaces. With these functionalities, CultReal allows for the fast development of AR solutions in any location.
(This article belongs to the Topic Augmented and Mixed Reality)

14 pages, 3563 KiB  
Article
Programming Robots by Demonstration Using Augmented Reality
by Inês Soares, Marcelo Petry and António Paulo Moreira
Sensors 2021, 21(17), 5976; https://doi.org/10.3390/s21175976 - 6 Sep 2021
Cited by 21 | Viewed by 3878
Abstract
The world is experiencing the fourth industrial revolution, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some types of tasks are too complex or too expensive to be fully automated; it would be more efficient if machines were able to work with humans, not only by sharing the same workspace but also as useful collaborators. A possible solution to this problem lies in human–robot interaction systems, and in understanding the applications where they are worth implementing and the challenges they face. This work proposes the development of an industrial prototype of a human–machine interaction system based on Augmented Reality, the objective of which is to enable an industrial operator without any programming experience to program a robot. The system itself is divided into two parts: the tracking system, which records the operator’s hand movement, and the translator system, which writes the program to be sent to the robot that will execute the task. To demonstrate the concept, the user drew geometric figures, and the robot was able to replicate the operator’s recorded path.
(This article belongs to the Topic Augmented and Mixed Reality)
