Special Issue "Augmented Reality: Current Trends, Challenges and Prospects"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2020).

Special Issue Editor

Prof. Dr. Jiro Tanaka
Guest Editor
Graduate School of Information, Production and Systems, WASEDA University 2-7 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0135, Japan
Interests: human–computer interaction; augmented reality; gesture interface; fusion of real world and virtual world; next generation e-commerce service

Special Issue Information

Dear Colleagues,

Augmented Reality (AR) and Mixed Reality (MR), which add virtual content to our real-world environment, are expected to become a major part of our future lives. AR overlays digital information on real-world elements, while MR blends real and digital elements together. AR/MR technology bridges the gap between the cyber-physical IoT and the real world. With AR/MR, we can interact with both the physical and the virtual world using next-generation sensing and imaging technologies. AR/MR allows us to see and remain immersed in the world around us even as we interact with a virtual environment.

The purpose of this Special Issue is to bring together state-of-the-art achievements in augmented reality and its applications. We encourage authors to submit original research articles, case studies, reviews, theoretical and critical perspectives, and viewpoint articles on topics including, but not limited to:

  • Multimedia and sensory input, including affective computing and human behavior sensing for AR/MR, and multisensory analysis, integration, and synchronization;
  • Speech, gestures, and tracking techniques for AR/MR;
  • Multisensory experiences and improved immersion, including audio-visual installations, haptics/tactile feedback, etc.;
  • Interaction design and new approaches for interaction in AR/MR, including tangible interfaces, multimodal communication, and collaborative experiences;
  • Applications such as healthcare, virtual travel, lifelogging, e-sports, and games, including use cases, prototypes, and proofs of concept;
  • Social aspects of AR/MR interaction, etc.

Prof. Jiro Tanaka
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multimedia & sensory input for AR/MR;
  • Affective computing;
  • Human behavior sensing;
  • Gesture interface;
  • Machine learning techniques for input recognition;
  • New interaction design for AR/MR;
  • AR/MR applications: healthcare, virtual travel, lifelogging, e-sports, games, etc.;
  • Issues in real-world and virtual-world integration;
  • Social aspects of AR/MR interaction.

Published Papers (54 papers)


Research


Open Access Article
Augmented Reality as a Didactic Resource for Teaching Mathematics
Appl. Sci. 2020, 10(7), 2560; https://doi.org/10.3390/app10072560 - 08 Apr 2020
Abstract
This paper is an example of how to use new technologies to produce innovative didactic resources that ease the teaching and learning of mathematics. It focuses on augmented reality technology, with the aim of creating didactic resources related to the polyhedra taught in a course of compulsory secondary education in Spain. First, we introduce the basis of this technology and present the theoretical framework, in which we make an exhaustive analysis that justifies its use for educational purposes. Second, we explain how to build the polyhedra in augmented reality using the Unity game engine and the Vuforia software development kit (SDK), which enables the use of augmented reality. Using both tools, we create an augmented reality application and augmented reality notes to help in the visualization and comprehension of the three-dimensional geometry related to polyhedra. Finally, we design an innovative didactic proposal for teaching the polyhedra in the third course of compulsory secondary education in Spain, using the resources created with augmented reality technology. Full article
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
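The paper builds its polyhedra with Unity and the Vuforia SDK; language aside, the core data an engine needs for such a model is just vertex and face lists. A minimal Python sketch (illustrative only, not the authors' C# code) of a regular octahedron mesh, verified against Euler's polyhedron formula V − E + F = 2, which secondary-school geometry courses cover:

```python
# Hypothetical sketch: mesh data for a regular octahedron, the kind of
# polyhedron an AR engine would render from vertex and face lists.

def octahedron():
    """Return (vertices, faces) for a regular octahedron of circumradius 1."""
    vertices = [
        (1, 0, 0), (-1, 0, 0),
        (0, 1, 0), (0, -1, 0),
        (0, 0, 1), (0, 0, -1),
    ]
    # Each triangular face is a triple of vertex indices.
    faces = [
        (0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
        (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5),
    ]
    return vertices, faces

def edge_count(faces):
    """Count unique undirected edges across all faces."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return len(edges)

V_list, F_list = octahedron()
V, F = len(V_list), len(F_list)
E = edge_count(F_list)
# Euler's polyhedron formula, a natural correctness check for classroom models:
assert V - E + F == 2  # 6 - 12 + 8 == 2
```

A check like this is cheap to run on any polyhedron model before handing it to the rendering engine.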

Open Access Article
Performance and Usability of Smartglasses for Augmented Reality in Precision Livestock Farming Operations
Appl. Sci. 2020, 10(7), 2318; https://doi.org/10.3390/app10072318 - 28 Mar 2020
Abstract
In recent years, smartglasses for augmented reality have become increasingly popular in professional contexts. However, no commercial solutions are available for the agricultural field, despite the potential of this technology to help farmers. Many head-wearable devices in development possess a variety of features that may affect the wearing experience. Over the last decades, dairy farms have adopted new technologies to improve their productivity and profit. However, there remains a gap in the literature regarding the application of augmented reality on livestock farms. Head-wearable devices may offer invaluable benefits to farmers, allowing real-time information monitoring of each animal during on-farm activities. The aim of this study was to expand the knowledge base on how augmented reality devices (smartglasses) interact with farming environments, focusing primarily on human perception and usability. Research was conducted examining the GlassUp F4 smartglasses during the animal selection process. Sixteen participants performed identification and grouping trials in the milking parlor, reading different types of content on the augmented reality device's optical display. Two questionnaires were used to evaluate the perceived workload and usability of the device. Results showed that the information type could influence the perceived workload and the animal identification process. Smartglasses for augmented reality proved a useful tool in the animal genetic improvement program, offering promising opportunities for adoption in livestock operations in terms of data consultation and information about animals. Full article

Open Access Article
SARA: A Microservice-Based Architecture for Cross-Platform Collaborative Augmented Reality
Appl. Sci. 2020, 10(6), 2074; https://doi.org/10.3390/app10062074 - 19 Mar 2020
Abstract
Augmented Reality (AR) functionalities may be effectively leveraged in collaborative service scenarios (e.g., remote maintenance, on-site building, street gaming, etc.). Standard development cycles for collaborative AR require coding for each specific visualization platform and implementing the necessary control mechanisms over the shared assets. In order to face this challenge, this paper describes SARA, an architecture to support cross-platform collaborative Augmented Reality applications based on microservices. The architecture is designed to work over the concept of collaboration models, which regulate the interaction and permissions of each user over the AR assets. Five of these collaboration models were initially integrated in SARA (turn-, layer-, ownership-, hierarchy-based, and unconstrained examples), and the platform enables the definition of new ones. Thanks to the reusability of its components, during the development of an application, SARA enables focusing on the application logic while avoiding the implementation of the communication protocol, data model handling, and orchestration between the different, possibly heterogeneous, devices involved in the collaboration (i.e., mobile or wearable AR devices using different operating systems). To describe how to build an application based on SARA, a prototype for HoloLens and iOS devices has been implemented. The prototype is a collaborative voxel-based game in which several players work together in real time on a piece of land, adding or eliminating cubes in a collaborative manner to create buildings and landscapes. Turn-based and unconstrained collaboration models are applied to regulate the interaction. The development workflow for this case study shows how the architecture serves as a framework to support the deployment of collaborative AR services, enabling the reuse of collaboration model components and agnostically handling client technologies. Full article
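The abstract names turn-based, layer, ownership, hierarchy-based, and unconstrained collaboration models that gate each user's permissions over shared assets. A hedged Python sketch of how a turn-based model might work (the class and method names are ours, not SARA's API):

```python
# Illustrative turn-based collaboration model in the spirit of SARA's
# permission gating; names and structure are hypothetical.

class TurnBasedModel:
    """Grants write access to one user at a time, in a fixed rotation."""

    def __init__(self, users):
        self.users = list(users)
        self.turn = 0  # index of the user currently allowed to edit

    def may_edit(self, user):
        return user == self.users[self.turn]

    def apply(self, user, asset, change):
        """Apply a change to a shared asset only on the user's turn."""
        if not self.may_edit(user):
            return False            # rejected: not this user's turn
        asset.update(change)        # e.g., add a voxel to the shared scene
        self.turn = (self.turn + 1) % len(self.users)  # pass the turn
        return True

scene = {}  # shared asset: voxel position -> color
model = TurnBasedModel(["alice", "bob"])
assert model.apply("alice", scene, {(0, 0, 0): "red"})       # alice's turn
assert not model.apply("alice", scene, {(1, 0, 0): "blue"})  # now bob's turn
assert model.apply("bob", scene, {(1, 0, 0): "blue"})
```

An unconstrained model would simply let `apply` always succeed; the point of the model abstraction is that the gating policy can be swapped without touching the application logic.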

Open Access Article
Uses and Gratifications on Augmented Reality Games: An Examination of Pokémon Go
Appl. Sci. 2020, 10(5), 1644; https://doi.org/10.3390/app10051644 - 01 Mar 2020
Abstract
Users are attracted by augmented reality games to fulfil their needs. Two objectives are proposed: (1) to research the motivations of those using augmented reality mobile games; (2) to define a structural model based on Uses and Gratifications Theory for the adoption of augmented reality mobile games. The present study examines the case of Pokémon Go. The model is composed of eight constructs: enjoyment, fantasy, escapism, social interaction, social presence, achievement, self-presentation and continuance intention. The SEM model was empirically assessed based on 1183 responses from Pokémon Go users around the world. Results clearly confirmed the positive influence of almost all the proposed constructs on continuance intention for Pokémon Go. First, these findings may be helpful for the online gaming industry in identifying the game functions that retain more gamers and improve the user experience. Second, the online gaming industry might use these results in order to classify those players with behaviours that favour the use of online games. Full article

Open Access Feature Paper Article
Does Augmented Reality Affect Sociability, Entertainment, and Learning? A Field Experiment
Appl. Sci. 2020, 10(4), 1392; https://doi.org/10.3390/app10041392 - 19 Feb 2020
Abstract
Augmented reality (AR) applications have recently emerged for entertainment and educational purposes and have been proposed to have positive effects on social interaction. In this study, we investigated the impact of a mobile, indoor AR feature on sociability, entertainment, and learning. We conducted a field experiment using a quiz game in a Finnish science center exhibition. We divided participants (N = 372) into an experimental group (AR app users) and two control groups (non-AR app users; pen-and-paper participants); 28 of the AR users also took part in follow-up interviews. We used the Kruskal–Wallis rank test to compare the experimental groups and the content analysis method to explore AR users' experiences. Although interviewed AR participants recognized the entertainment value and learning opportunities of AR, we did not detect an increase in perceived sociability, social behavior, positive affect, or learning performance when comparing the experimental groups. Instead, AR interviewees experienced a strong conflict between the two different realities. Despite the engaging novelty value of new technology, performance and other improvements do not automatically emerge. We also discuss potential conditional factors. Future research and development of AR and related technologies should note the possible negative effects of dividing attention between the two realities. Full article
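The group comparison above uses the Kruskal–Wallis rank test. As a reminder of what that test computes, here is a self-contained Python sketch of the H statistic on toy data (not the study's data; it omits the tie correction that a real statistics package applies):

```python
# Kruskal-Wallis H statistic, pure-Python sketch without tie correction.
# Illustrative only; use a statistics library for real analyses.

def kruskal_wallis_h(groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), R_i = rank sum of group i."""
    pooled = sorted((x, g) for g, grp in enumerate(groups) for x in grp)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, g) in enumerate(pooled, start=1):
        rank_sums[g] += rank
    return 12.0 / (n_total * (n_total + 1)) * sum(
        r * r / len(grp) for r, grp in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)

# Three clearly separated toy groups yield a large H (strong group effect):
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
assert abs(h - 7.2) < 1e-9
```

Because the test works on ranks rather than raw values, it suits the ordinal questionnaire scales typical of such field experiments.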

Open Access Article
VES: A Mixed-Reality System to Assist Multisensory Spatial Perception and Cognition for Blind and Visually Impaired People
Appl. Sci. 2020, 10(2), 523; https://doi.org/10.3390/app10020523 - 10 Jan 2020
Abstract
In this paper, the Virtually Enhanced Senses (VES) System is described. It is an ARCore-based, mixed-reality system meant to assist the navigation of blind and visually impaired people. VES operates in indoor and outdoor environments without any previous in-situ installation. It provides users with specific, runtime-configurable stimuli according to their pose, i.e., position and orientation, and the information of the environment recorded in a virtual replica. It implements three output data modalities: wall-tracking assistance, an acoustic compass, and a novel sensory substitution algorithm, Geometry-based Virtual Acoustic Space (GbVAS). The multimodal output of this algorithm takes advantage of natural human perceptual encoding of spatial data. Preliminary experiments with GbVAS have been conducted with sixteen subjects in three different scenarios, demonstrating basic orientation and mobility skills after six minutes of training. Full article
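GbVAS maps scene geometry to acoustic stimuli according to the user's pose. The abstract does not spell out the encoding, so the following Python sketch shows only one plausible ingredient of such a mapping (our own simplification, not the published algorithm): an obstacle's bearing drives stereo panning and its distance drives gain.

```python
# Hypothetical geometry-to-audio mapping in the spirit of sensory
# substitution: bearing -> stereo pan, distance -> gain.
import math

def spatial_cue(user_xy, heading_rad, obstacle_xy, max_range=10.0):
    """Return (pan, gain): pan in [-1 left, +1 right], gain in [0, 1]."""
    dx = obstacle_xy[0] - user_xy[0]
    dy = obstacle_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    # Bearing relative to the user's heading (0 = straight ahead, +y forward).
    bearing = math.atan2(dx, dy) - heading_rad
    pan = math.sin(bearing)                  # left/right placement
    gain = max(0.0, 1.0 - dist / max_range)  # nearer obstacles sound louder
    return pan, gain

# Obstacle straight ahead: centered and loud.
pan, gain = spatial_cue((0, 0), 0.0, (0, 1))
assert abs(pan) < 1e-9 and abs(gain - 0.9) < 1e-12
# Obstacle directly to the right: panned fully right.
pan, gain = spatial_cue((0, 0), 0.0, (5, 0))
assert abs(pan - 1.0) < 1e-9 and abs(gain - 0.5) < 1e-12
```

In a real system these cues would modulate spatialized audio rendered per frame from the device's tracked pose against the virtual replica of the environment.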

Open Access Article
Augmented and Virtual Reality Evolution and Future Tendency
Appl. Sci. 2020, 10(1), 322; https://doi.org/10.3390/app10010322 - 01 Jan 2020
Cited by 1
Abstract
Augmented reality and virtual reality technologies are increasing in popularity. Augmented reality has thrived to date mainly on mobile applications, with games like Pokémon Go or the new Google Maps utility as some of its ambassadors. On the other hand, virtual reality has been popularized mainly thanks to the videogame industry and cheaper devices. However, what was initially a failure in the industrial field is resurfacing in recent years thanks to technological improvements in devices and processing hardware. In this work, an in-depth study of the different fields in which augmented and virtual reality have been used has been carried out. The study takes the form of a thorough scoping review of these new technologies, analyzing the evolution of each over recent years in the most important categories and in the countries most involved with them. Finally, we analyze the future trend of these technologies and the areas that require further research in order to integrate these technologies more fully into society. Full article

Open Access Article
Mobile AR: User Evaluation in a Cultural Heritage Context
Appl. Sci. 2019, 9(24), 5454; https://doi.org/10.3390/app9245454 - 12 Dec 2019
Abstract
The growing number of mobile augmented reality applications has been raising awareness and usage of the technology across diverse areas. Focusing on cultural heritage applications, this study presents an evaluation of a mobile augmented reality application tested at Conimbriga, an archaeological site. The prototype developed for this purpose, named DinofelisAR, allowed users to view, over 360 degrees, a majestic reconstruction of a Forum from the Roman Era superimposed on its current ruins. Thus, users were able to keep perceiving the present-day surroundings of a Roman city in ruins while, at the same time, having the possibility to explore the matching virtual model. The results presented, arising from the 90 participants involved in this evaluation, highlight the opportunity for new augmented reality solutions targeted at cultural heritage sites. Full article

Open Access Article
Methodologies of Learning Served by Virtual Reality: A Case Study in Urban Interventions
Appl. Sci. 2019, 9(23), 5161; https://doi.org/10.3390/app9235161 - 28 Nov 2019
Cited by 1
Abstract
A computer-simulated reality and the human-machine interactions facilitated by computer technology and wearable computers may be used as an educational methodology that transforms the way students deal with information. This turns the learning process into a more participative and active process, which fits both the practical part of subjects and the learner's profile, as students nowadays are more technology-savvy and familiar with current technological advances. This methodology is being used in architectural and urbanism degrees to support the design process and to help students visualize design alternatives in the context of existing environments. This paper proposes the use of virtual reality (VR) as a resource in the teaching of courses that focus on the design of urban spaces. A group of users, composed of architecture students and professionals related to the architecture field, participated in an immersive VR experience and had the opportunity to interact with the space that was being redesigned. Later, a quantitative tool was used in order to evaluate the effectiveness of virtual systems in the design of urban environments. The survey was designed using as a reference the competences required in the urbanism courses; this allowed the authors to identify positive and negative aspects in an objective way. The results prove that VR helps to expand digital abilities in complex representation and helps users in the evaluation and decision-making processes involved in the design of urban spaces. Full article

Open Access Article
Enhancing Interaction with Augmented Reality through Mid-Air Haptic Feedback: Architecture Design and User Feedback
Appl. Sci. 2019, 9(23), 5123; https://doi.org/10.3390/app9235123 - 26 Nov 2019
Abstract
Nowadays, Augmented Reality (AR) head-mounted displays (HMDs) deliver a more immersive visualization of virtual contents, but the available means of interaction, mainly based on gesture and/or voice, are still limited and lack the realism and expressivity of traditional physical means. In this sense, the integration of haptics within AR may help to deliver an enriched experience, while facilitating the performance of specific actions, such as repositioning or resizing tasks, that are still dependent on the user's skills. In this direction, this paper describes a flexible architecture designed to deploy haptically enabled AR applications for both mobile and wearable visualization devices. The haptic feedback may be generated through a variety of devices (e.g., wearable, graspable, or mid-air ones), and the architecture facilitates handling the specificity of each. For this reason, the paper discusses how to generate a haptic representation of a 3D digital object depending on the application and the target device. Additionally, the paper includes an analysis of practical, relevant issues that arise when setting up a system to work with specific devices like HMDs (e.g., HoloLens) and mid-air haptic devices (e.g., Ultrahaptics), such as the alignment between the real world and the virtual one. The applicability of the architecture is demonstrated through the implementation of two applications: (a) Form Inspector and (b) Simon Game, built for HoloLens and iOS mobile phones for visualization and for UHK for mid-air haptics delivery. These applications have been used to explore with nine users the efficiency, meaningfulness, and usefulness of mid-air haptics for form perception, object resizing, and push interaction tasks.
Results show that, although mobile interaction is preferred when this option is available, haptics turn out to be more meaningful in identifying shapes than users initially expect and in contributing to the execution of resizing tasks. Moreover, this preliminary user study reveals some design issues when working with haptic AR. For example, users may expect a tailored interface metaphor, not necessarily inspired by natural interaction. This was the case with our proposal of virtual pressable buttons, built to mimic real buttons using haptics but interpreted differently by the study participants. Full article
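The virtual pressable button discussed above mimics a real button through mid-air haptics. A hedged Python sketch of the press logic (our own simplification with made-up thresholds, not the authors' implementation): fingertip travel past a trigger depth fires the press, and haptic intensity grows with displacement.

```python
# Hypothetical press-detection logic for a haptic virtual button.
# The travel/trigger depths and linear intensity ramp are illustrative.

class VirtualButton:
    def __init__(self, travel_mm=8.0, trigger_mm=6.0):
        self.travel_mm = travel_mm    # maximum press depth
        self.trigger_mm = trigger_mm  # depth at which the press fires
        self.pressed = False

    def update(self, depth_mm):
        """Feed fingertip depth into the button; return haptic intensity [0, 1]."""
        depth = min(max(depth_mm, 0.0), self.travel_mm)
        self.pressed = depth >= self.trigger_mm
        return depth / self.travel_mm  # drives mid-air haptic amplitude

btn = VirtualButton()
assert btn.update(0.0) == 0.0 and not btn.pressed  # finger above button
assert btn.update(4.0) == 0.5 and not btn.pressed  # partial travel
assert btn.update(8.0) == 1.0 and btn.pressed      # full press
```

The study's finding that participants interpreted such buttons differently than intended suggests the ramp and trigger point are exactly the kind of parameters that need user tuning.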

Open Access Article
Haptic Hybrid Prototyping (HHP): An AR Application for Texture Evaluation with Semantic Content in Product Design
Appl. Sci. 2019, 9(23), 5081; https://doi.org/10.3390/app9235081 - 25 Nov 2019
Abstract
The manufacture of prototypes is costly in economic and temporal terms, and in order to carry it out it is necessary to accept certain deviations with respect to the final finishes. This article proposes Haptic Hybrid Prototyping (HHP), a haptic-visual product prototyping method created to help product design teams evaluate and select the semantic information conveyed between product and user through the texturing and ribs of a product in the early stages of conceptualization. To evaluate this tool, an experiment was carried out in which the haptic experience during interaction with final products was compared with that through the HHP. As a result, it was observed that the answers of the interviewees coincided in both situations in 81% of cases. It was concluded that the HHP enables us to know the semantic information transmitted through haptic-visual means between product and user, as well as to quantify the clarity with which this information is transmitted. Therefore, this new tool makes it possible to reduce the manufacturing lead time of prototypes as well as the conceptualization phase of the product, providing information on the future success of the product in the market and its economic return. Full article

Open Access Article
Application of Augmented Reality, Mobile Devices, and Sensors for a Combat Entity Quantitative Assessment Supporting Decisions and Situational Awareness Development
Appl. Sci. 2019, 9(21), 4577; https://doi.org/10.3390/app9214577 - 28 Oct 2019
Abstract
This paper presents advances in the development of specialized mobile applications for combat decision support utilizing augmented reality technologies used for the production of contextual data delivered to any tactical smartphone. Handhelds and decision support systems have been present in military operations since the 1990s. Due to the development of hardware and software platforms, smartphones are capable of running complex algorithms for individual soldiers and low-level commander support. The utilization of tactical data (force location, composition, and tasks) in dynamic mobile networks that are accessible anywhere during a mission provides means for the development of situational awareness and decision superiority. These two elements are key factors in 21st-century military operations, as they influence the efficiency of recognition, identification, and targeting. Combat support tools and their analytical capabilities can serve as recon data hubs, but most of all they can support and simplify complex analytical tasks for commanders. These tasks mainly include topographical and tactical orientation within the battlespace. This paper documents the ideas for and construction details of mobile support tools used for supporting the specific operational activities of military personnel during combat and crisis management. The presented augmented reality-based evaluation methods formulate new capabilities for the visualization and identification of military threats, mission planning characteristics, tasks, and checkpoints, which help individuals to orientate within their current situation. The developed software platform, mobile common operational picture (mCOP), demonstrates all research findings and delivers a personalized combat-oriented distributed mobile system, supporting blue-force tracking capabilities and reconnaissance data fusion as well as threat-level evaluations for military and crisis management scenarios. 
The mission data are further fused with Geographic Information System (GIS) topographical and vector data, supporting terrain evaluations for mission planning and execution. The application implements algorithms for path finding, movement task scheduling, assistance, and analysis, as well as military potential evaluation, threat-level estimation, and location tracking. The features of the mCOP mobile application were designed and organized as mission-critical functions. The presented research demonstrates and proves the usefulness of deploying mobile applications for combat support, situation awareness development, and the delivery of augmented reality-based threat-level analytical data to extend the capabilities and properties of software tools applied for supporting military and border protection operations. Full article
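Among mCOP's functions the abstract lists path finding and threat-level estimation over GIS data. A hedged Python sketch of one way those could combine (a toy of our own, not the mCOP implementation): Dijkstra's algorithm on a grid where each cell's step cost is inflated by its estimated threat level, so routes bend around dangerous areas.

```python
# Toy threat-aware path planner: Dijkstra over a 4-connected grid whose
# cell cost is 1 + threat level. Illustrative only; mCOP's algorithms differ.
import heapq

def plan_path(threat, start, goal):
    """Return the minimum-cost path from start to goal as a list of cells."""
    rows, cols = len(threat), len(threat[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + threat[nr][nc]  # step cost + threat penalty
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# A high-threat cell in the center pushes the route around it:
threat = [[0, 0, 0],
          [0, 9, 0],
          [0, 0, 0]]
path = plan_path(threat, (0, 0), (2, 2))
assert (1, 1) not in path and path[0] == (0, 0) and path[-1] == (2, 2)
```

In a real system the threat grid would be derived from fused reconnaissance data and GIS layers rather than hand-set constants.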

Open Access Article
An Automatic Marker–Object Offset Calibration Method for Precise 3D Augmented Reality Registration in Industrial Applications
Appl. Sci. 2019, 9(20), 4464; https://doi.org/10.3390/app9204464 - 22 Oct 2019
Cited by 1
Abstract
Industrial augmented reality (AR) applications place high demands on the visual consistency of virtual-real registration. At present, the marker-based registration method is the most popular because it is fast, robust, and convenient for obtaining the registration matrix. In practice, the registration matrix should be multiplied by an offset matrix that describes the transformation between the attaching position and the initial position of the marker relative to the object. However, the offset matrix is usually measured, calculated, and set manually, which is neither accurate nor convenient. This paper proposes an accurate and automatic marker–object offset matrix calibration method. First, the normal direction of the target object is obtained by searching and matching the top surface of the CAD model. Then, the spatial translation is estimated by aligning the projected and the imaged top surface. Finally, all six parameters of the offset matrix are iteratively optimized using a 3D image alignment framework. Experiments were performed on a public monocular rigid 3D tracking dataset and an automobile gearbox. The average translation and rotation errors of the optimized offset matrix are 2.10 mm and 1.56 degrees, respectively. The results validate that the proposed method is accurate and automatic, which contributes to a universal offset matrix calibration tool for marker-based industrial AR applications. Full article
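The core relation here is that the final registration is the marker pose composed with the marker–object offset transform. A minimal Python sketch of that composition with 4×4 homogeneous matrices (pure-translation example values of our own; the paper's contribution is estimating the offset automatically, which is not reproduced here):

```python
# Composing a camera-from-marker pose with a marker-to-object offset,
# both as 4x4 homogeneous transforms given as nested lists.

def matmul4(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous transform for a pure translation."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Example values (hypothetical): marker detected 50 cm in front of the
# camera, object sitting 2 cm above the marker.
T_marker = translation(0.10, 0.00, 0.50)  # camera-from-marker pose
T_offset = translation(0.00, 0.02, 0.00)  # calibrated marker-to-object offset
T_object = matmul4(T_marker, T_offset)    # camera-from-object registration

assert [row[3] for row in T_object] == [0.10, 0.02, 0.50, 1]
```

Any error in `T_offset` shifts every rendered overlay by the same amount, which is why the paper's automatic calibration of those six parameters matters for visual consistency.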
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
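The registration chain the abstract describes (a tracked marker pose multiplied by a fixed marker–object offset) and the reported translation/rotation error metrics can be sketched in a few lines. A minimal sketch with 4 × 4 homogeneous matrices; the function names and conventions are illustrative, not the authors' implementation:

```python
import numpy as np

def compose(camera_T_marker, marker_T_object):
    """Chain homogeneous transforms: camera<-marker followed by marker<-object."""
    return camera_T_marker @ marker_T_object

def offset_errors(T_est, T_gt):
    """Translation error (same units as T) and rotation error in degrees
    between an estimated and a ground-truth offset matrix."""
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    # Residual rotation between the two estimates; its trace gives the angle.
    R = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_a = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, np.degrees(np.arccos(cos_a))
```

With this decomposition, calibrating the offset once lets every later frame reuse the fast marker pose while keeping virtual content aligned with the object.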

Open Access Article
Indoor Localization for Augmented Reality Devices Using BIM, Point Clouds, and Template Matching
Appl. Sci. 2019, 9(20), 4260; https://doi.org/10.3390/app9204260 - 11 Oct 2019
Cited by 1
Abstract
Mobile devices are a common target for augmented reality applications, especially for showing contextual information in buildings or on construction sites. A prerequisite of contextual information display is the localization of objects and the device in the real world. In this paper, we present our approach to the problem of mobile indoor localization with a given building model. The approach does not use external sensors or input. Accurate external sensors such as stationary cameras may be expensive and difficult to set up and maintain. Relying on already existing external sources may also prove difficult, as Internet connections can be unreliable and GPS signals inaccurate, especially inside buildings. We therefore seek a localization solution for augmented reality devices that can accurately localize themselves using only data from internal sensors and preexisting information about the building. If a building has an accurate model of its geometry, we can use modern spatial mapping techniques and point-cloud matching to find a mapping between local device coordinates and global model coordinates. We use normal analysis and 2D template matching on an inverse distance map to determine this mapping. The proposed algorithm is designed for high speed and efficiency, as mobile devices are constrained by hardware limitations. We show an implementation of the algorithm on the Microsoft HoloLens, test the localization accuracy, and offer use cases for the technology.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
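The matching step the abstract names (template matching on a distance map derived from the building model) can be sketched as follows. A toy sketch, not the paper's optimized HoloLens implementation: it uses a brute-force distance transform and a plain distance map with a minimum-sum score, which plays the same role as an inverse distance map with a maximum score; grid sizes and cell semantics are assumptions:

```python
import numpy as np

def distance_map(occ):
    """Brute-force Euclidean distance from every cell to the nearest occupied
    (wall) cell of the building-model occupancy grid."""
    ys, xs = np.nonzero(occ)
    pts = np.stack([ys, xs], axis=1)            # (N, 2) wall cells
    gy, gx = np.indices(occ.shape)
    grid = np.stack([gy, gx], axis=-1)          # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)

def match_template(dmap, template):
    """Slide the sensed occupancy template over the map; the placement whose
    occupied cells sit closest to model walls (lowest summed distance) wins."""
    th, tw = template.shape
    best, best_pos = np.inf, None
    for y in range(dmap.shape[0] - th + 1):
        for x in range(dmap.shape[1] - tw + 1):
            score = (dmap[y:y+th, x:x+tw] * template).sum()
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

The returned grid position gives the device-to-model translation; normal analysis would supply the rotation candidates before this step.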

Open Access Article
Real-Time Augmented Reality Physics Simulator for Education
Appl. Sci. 2019, 9(19), 4019; https://doi.org/10.3390/app9194019 - 25 Sep 2019
Abstract
Physics education applications using augmented reality technology, which has developed extensively in recent years, face many restrictions in terms of performance and accuracy. The purpose of our research is to develop a real-time simulation system for physics education based on parallel processing. In this paper, we present a video see-through AR (Augmented Reality) system that includes an environment recognizer using depth images from Microsoft's Kinect V2 and a real-time soft body simulator based on parallel processing on the GPU (Graphics Processing Unit). Soft body simulation can provide more realistic results than rigid body simulation, so it can be more effective in systems for physics education. We have designed and implemented a system that provides the physical deformation and movement of 3D volumetric objects and uses them in education. To verify the usefulness of the proposed system, we conducted a questionnaire survey of 10 students majoring in physics education; 93% of respondents answered that they would like to use it for education. We plan to improve the system in the future using a stand-alone AR device with one or more cameras.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
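As a rough illustration of the kind of per-step update a GPU soft-body simulator parallelizes across particles, here is a minimal CPU mass-spring sketch with semi-implicit Euler integration. All parameters, the unit-mass assumption, and the mass-spring model itself are illustrative; the paper does not specify its simulation scheme:

```python
import numpy as np

def step(pos, vel, springs, rest, k=50.0, damping=0.98, dt=0.01, gravity=-9.8):
    """One semi-implicit Euler step of a mass-spring soft body (unit masses).

    pos, vel : (N, 3) particle positions and velocities
    springs  : (M, 2) particle index pairs, rest : (M,) rest lengths
    """
    force = np.zeros_like(pos)
    force[:, 1] += gravity                      # gravity on every particle
    d = pos[springs[:, 1]] - pos[springs[:, 0]]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each spring, applied equal and opposite.
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force) * damping          # integrate velocity, then position
    return pos + dt * vel, vel
```

On a GPU, the force accumulation and integration become one thread per spring/particle, which is what makes real-time soft bodies feasible for interactive education.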

Open Access Article
Deep-cARe: Projection-Based Home Care Augmented Reality System with Deep Learning for Elderly
Appl. Sci. 2019, 9(18), 3897; https://doi.org/10.3390/app9183897 - 17 Sep 2019
Cited by 1
Abstract
Developing innovative and pervasive smart technologies that provide medical support and improve the welfare of the elderly has become increasingly important as populations age. Elderly people frequently experience discomfort in their daily lives, including the deterioration of cognitive and memory abilities. To provide auxiliary functions and ensure the safety of the elderly in daily living situations, we propose a projection-based augmented reality (PAR) system equipped with a deep-learning module. In this study, we propose three-dimensional space reconstruction of a pervasive PAR space for the elderly, and we apply a deep-learning module to lay the foundation for contextual awareness. Performance experiments were conducted to graft the deep-learning framework (pose estimation, face recognition, and object detection) onto the PAR technology through the proposed hardware, verifying execution possibility, real-time execution, and applicability. Pose estimation yields particularly high precision for the face pose, which is used to determine abnormal user states. For face recognition across all classes, the average detection rate (DR) was 74.84% and the precision was 78.72%; with face occlusions, the average DR dropped to 46.83%. This confirms that face recognition can be performed properly when face occlusions are infrequent. In the object detection experiments, the DR increased as the distance from the system decreased for small objects, whereas for large objects the miss rate increased as the distance between the object and the system decreased. Scenarios for supporting the elderly, who experience degradation in movement and cognitive functions, were designed and realized on the proposed platform. In addition, several user interfaces (UIs) were implemented according to the scenarios, regardless of the distance between users and the proposed system.
In this study, we developed a bidirectional PAR system that provides relevant information by understanding the user's environment and action intentions, rather than a unidirectional PAR system for simple information provision. We discuss the possibility of care systems for the elderly through the fusion of PAR and deep-learning frameworks.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
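The reported detection rate and precision follow the standard counting definitions over true positives, false negatives, and false positives; the paper does not spell out its counting protocol, so this is the conventional reading:

```python
def detection_rate(tp, fn):
    """Fraction of ground-truth instances the detector found (a.k.a. recall)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of reported detections that were correct."""
    return tp / (tp + fp)

# e.g. 75 faces found out of 100 present, with 20 false alarms:
# detection_rate(75, 25) = 0.75, precision(75, 20) ~= 0.789
```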

Open Access Article
Real-World Oriented Smartphone AR Supported Learning System Based on Planetarium Contents for Seasonal Constellation Observation
Appl. Sci. 2019, 9(17), 3508; https://doi.org/10.3390/app9173508 - 26 Aug 2019
Abstract
A popular astronomical concept covered by projection learning programs in planetariums is the seasonal constellation. However, a planetarium's learning environment is limited to virtual scenes: learners can observe seasonal constellations there, but a significant gap remains between reality and the learners' imagination of the constellations. It is therefore important to create a real-world oriented observation learning environment for seasonal constellations. Augmented reality has proved to be a powerful tool for astronomical observation learning. In this paper, augmented reality (AR) contents and 2D contents are used to develop a smartphone-based learning system for seasonal constellation observation, called the Real-World Oriented Smartphone AR Learning System (R-WOSARLS), based on the planetarium contents of the Nagoya City Science Museum. Two experiments were conducted to evaluate the usefulness, usability, and learner satisfaction of our system at a university and a junior high school, respectively. The results show that R-WOSARLS is an effective learning tool for constellation observation and learning, and that it enhances learners' motivation to pursue seasonal constellation learning. Moreover, R-WOSARLS could serve as a teaching tool not only to help students learn more than with traditional instruction, but also to stimulate their interest in astronomical phenomena outside of school.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
A Subjective Study on User Perception Aspects in Virtual Reality
Appl. Sci. 2019, 9(16), 3384; https://doi.org/10.3390/app9163384 - 16 Aug 2019
Cited by 2
Abstract
Three hundred and sixty degree video is becoming more and more popular on the Internet. Using a head-mounted display, 360-degree video can render a virtual reality (VR) environment. However, understanding the Quality of Experience (QoE) of 360-degree video remains a big challenge, since the user experience while watching it is a very complex phenomenon. In this paper, we investigate four QoE aspects of 360-degree video: perceptual quality, presence, cybersickness, and acceptability. In addition, four key QoE-affecting factors are considered in our study: encoding parameters, content motion, rendering device, and rendering mode. To the best of our knowledge, this is the first work to cover such a large number of factors and QoE aspects of 360-degree video. A subjective experiment was conducted using 60 video versions generated from three original 360-degree videos. Based on statistical analysis of the obtained results, various findings on the impacts of the factors on the QoE aspects are provided. In particular, regarding the encoding parameters, the difference in QoE is negligible between video versions encoded at 4K and 2.5K resolutions. Also, 360-degree video should not be encoded at HD resolution or lower when watched in VR mode using a head-mounted display. In addition, the bitrate needed for good QoE varies widely across video contents. The impact of content motion is statistically significant on perceptual quality, presence, and cybersickness. Comparing the two rendering device sets used in this study, no statistically significant difference was found for acceptability and cybersickness, while the differences in perceptual quality and presence were statistically significant. Regarding the rendering mode, VR and non-VR modes were also compared.
Although the non-VR mode always achieves higher perceptual quality scores and higher acceptability rates, more than half of the viewers prefer the VR mode over the non-VR mode when watching versions encoded at resolutions of fHD or higher. By contrast, the non-VR mode is preferred at HD resolution.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Reflections on the Limited Pervasiveness of Augmented Reality in Industrial Sectors
Appl. Sci. 2019, 9(16), 3382; https://doi.org/10.3390/app9163382 - 16 Aug 2019
Cited by 6
Abstract
The paper aims to investigate why Augmented Reality (AR) has not yet fully broken into the industrial market or found wider application in industry. The main research question the paper tries to answer is: what are the factors (and to what extent) that are limiting AR? First, a reflection on the state of the art of AR applications in industry is proposed, to identify the sectors in which the technology has most commonly been deployed so far. Then, based on a survey conducted after three AR applications had been tested in the manufacturing, automotive, and railway sectors, the paper pinpoints key aspects that condition the technology's embedding in daily working life. A one-way analysis of variance (ANOVA) was used to compare whether the perceptions of employees from the railway, automotive, and manufacturing sectors differ significantly. Suggestions are then formulated to improve these aspects in the industrial world. Finally, the paper draws its main conclusions, highlighting possible directions for future research.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
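The one-way ANOVA used to compare sector perceptions reduces to an F statistic over between-group and within-group variance. A minimal plain-Python sketch of that computation (the same formula any stats package implements; the per-sector rating lists are hypothetical inputs):

```python
def one_way_anova(groups):
    """F statistic for a one-way ANOVA over lists of per-group ratings."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # k - 1 degrees of freedom
    ms_within = ss_within / (n - k)     # n - k degrees of freedom
    return ms_between / ms_within
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) means sector membership explains a significant share of the rating variance.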

Open Access Article
Holographic Mixed Reality System for Air Traffic Control and Management
Appl. Sci. 2019, 9(16), 3370; https://doi.org/10.3390/app9163370 - 15 Aug 2019
Cited by 1
Abstract
Based on a long-term prediction by the International Civil Aviation Organization indicating steady increases in air traffic demand throughout the world, the workloads of air traffic controllers are expected to increase continuously. Air traffic control and management (ATC/M) includes the processing of various unstructured composite data along with the real-time visualization of aircraft data. To prepare for future air traffic, research and development aimed at effectively presenting various complex navigation data to air traffic controllers is necessary. This paper presents a mixed reality-based air traffic control system that improves and supports air traffic controllers' workflow, using mixed reality technology that is effective for delivering information such as complex navigation data. Existing control systems make information access and interpretation difficult. Therefore, noting the need to integrate air traffic control systems, this study presents a mixed reality (MR) system, a new approach that enables the control of air traffic in interactive environments. The system is provided in a form usable in actual operational environments, with a head-mounted see-through display and a controller to enable more structured work support. In addition, since the system can be operated first-hand by air traffic controllers, it provides a new experience through improved work efficiency and productivity.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Comparison of Tracking Techniques on 360-Degree Videos
Appl. Sci. 2019, 9(16), 3336; https://doi.org/10.3390/app9163336 - 14 Aug 2019
Abstract
With the availability of 360-degree cameras, 360-degree videos have recently become popular. To attach a virtual tag to a physical object in 360-degree videos for augmented reality applications, automatic object tracking is required so the virtual tag can follow its corresponding physical object. Relative to ordinary videos, 360-degree videos in an equirectangular format have special characteristics such as viewpoint change, occlusion, deformation, lighting change, scale change, and camera shakiness. Tracking algorithms designed for ordinary videos may not work well on 360-degree videos. Therefore, we thoroughly evaluate the performance of eight modern trackers in terms of accuracy and speed on 360-degree videos. The pros and cons of these trackers on 360-degree videos are discussed, and possible improvements to adapt them to 360-degree videos are suggested. Finally, we provide a dataset containing nine 360-degree videos with ground-truth target positions as a benchmark for future research.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
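Tracker accuracy on such a benchmark is conventionally scored per frame by intersection-over-union against the ground-truth box. A minimal sketch of that metric; note it deliberately ignores the equirectangular seam, where a box crossing the ±180° boundary would need the special handling the abstract's "adaptations" point at:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def success_rate(pred, gt, threshold=0.5):
    """Fraction of frames where the tracker's box overlaps ground truth enough."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(pred, gt))
    return hits / len(gt)
```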

Open Access Article
Design of Interactions for Handheld Augmented Reality Devices Using Wearable Smart Textiles: Findings from a User Elicitation Study
Appl. Sci. 2019, 9(15), 3177; https://doi.org/10.3390/app9153177 - 05 Aug 2019
Abstract
Advanced developments in handheld devices' interactive 3D graphics capabilities, processing power, and cloud computing have provided great potential for handheld augmented reality (HAR) applications, which allow users to access digital information anytime, anywhere. Nevertheless, existing interaction methods are still confined to the touch display, device camera, and built-in sensors of these handheld devices, which make interactions with AR content obtrusive. Wearable fabric-based interfaces promote subtle, natural, and eyes-free interactions, which are needed when interacting in dynamic environments. Prior studies explored the possibilities of using fabric-based wearable interfaces for head-mounted AR display (HMD) devices. The interface metaphors of HMD AR devices are inadequate for handheld AR devices, as a typical HAR application requires users to perform interactions with only one hand. In this paper, we investigate the use of a fabric-based wearable device as an alternative interface option for performing interactions with HAR applications. We elicited user-preferred gestures for HAR devices which are socially acceptable and comfortable to use. We also derived an interaction vocabulary of wrist and thumb-to-index touch gestures, and present broader design guidelines for fabric-based wearable interfaces for handheld augmented reality applications.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
User Interactions for Augmented Reality Smart Glasses: A Comparative Evaluation of Visual Contexts and Interaction Gestures
Appl. Sci. 2019, 9(15), 3171; https://doi.org/10.3390/app9153171 - 04 Aug 2019
Cited by 3
Abstract
Smart glasses for wearable augmented reality (AR) are widely used in various applications, such as training and task assistance. However, as the field of view (FOV) of current AR smart glasses is narrow, it is difficult to visualize all the information on the AR display, and only simple interactions are supported. This paper presents a comparative and substantial evaluation of user interactions for wearable AR concerning visual contexts and gesture interactions using AR smart glasses. Based on the evaluation, it suggests new guidelines for visual augmentation focused on task assistance. Three different types of visual contexts for wearable AR were implemented and evaluated: stereo rendering with direct augmentation, and non-stereo rendering with indirect augmentation with and without a video background. Gesture interactions, such as multi-touch interaction and hand gesture-based interaction, were also implemented and evaluated. We performed quantitative and qualitative analyses, including performance measurement and questionnaire evaluation. The experimental assessment shows that both FOV and visual registration between virtual and physical artifacts are important and can complement each other, and that hand gesture-based interaction can be more intuitive and useful. By analyzing the advantages and disadvantages of visual context and gesture interaction in wearable AR, this study suggests more effective and user-centric guidance for task assistance.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
AR Displays: Next-Generation Technologies to Solve the Vergence–Accommodation Conflict
Appl. Sci. 2019, 9(15), 3147; https://doi.org/10.3390/app9153147 - 02 Aug 2019
Cited by 4
Abstract
Augmented reality (AR) holds many benefits for how people perceive information and use it in their workflow or leisure activities. A cohesive AR experience has many components; nevertheless, the key is display technology. The current industry standard for the core solution is still conventional stereoscopy, which has proven inadequate for near-work due to the vergence–accommodation conflict it causes and the inability to precisely overlay 3D content on the real world. To overcome this, next-generation technologies have been proposed. While the holographic method holds the highest potential of being the ultimate solution, its current level of maturity is not sufficient to yield a practical product. Consequently, the next solution for near-work-capable AR displays will be of another type. LightSpace Technologies has developed a static multifocal display architecture based on stacked liquid crystal-based optical diffuser elements and a synchronized high-refresh-rate image projector. A stream of 2D image depth planes comprising a 3D scene is projected onto the respective physically separated diffuser elements, causing the viewer to perceive the scene as continuous and having all relevant physical as well as psychological depth cues. A system with six image depth planes, yielding 6 cpd resolution and a 72° horizontal field of view, has been demonstrated to provide perceptually continuous accommodation over a 3.2-diopter range. Further optimization using a conventional image combiner resulted in a compact and practical AR display design.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
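Multifocal architectures typically space their depth planes uniformly in diopters rather than meters, because the eye's accommodation tolerance is roughly constant in dioptric terms. A sketch under that assumption: the abstract gives only the six-plane count and the 3.2-diopter range, so the 3.4 D near point below is a made-up example, not the paper's actual plane placement:

```python
def plane_distances(n_planes=6, d_range=3.2, nearest=3.4):
    """Metric distances (m) of depth planes spaced uniformly in diopters.

    Assumes the nearest plane sits at `nearest` diopters, so the farthest is
    at nearest - d_range diopters (0.2 D here, i.e. 5 m).
    """
    step = d_range / (n_planes - 1)
    diopters = [nearest - i * step for i in range(n_planes)]
    return [1.0 / d for d in diopters]
```

The resulting distances cluster near the viewer and spread out with range, matching why a fixed dioptric budget can still cover everything from arm's length to several meters.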

Open Access Article
A Reflective Augmented Reality Integral Imaging 3D Display by Using a Mirror-Based Pinhole Array
Appl. Sci. 2019, 9(15), 3124; https://doi.org/10.3390/app9153124 - 01 Aug 2019
Abstract
In this paper, we propose a reflective augmented reality (AR) display system based on integral imaging (II) using a mirror-based pinhole array (MBPA). The MBPA, obtained by punching pinholes in a mirror, functions as a three-dimensional (3D) imaging device as well as an image combiner. The pinhole array of the MBPA can realize a pinhole array-based II display, while its mirror surface images real objects, combining the images of the real objects with the reconstructed 3D images. The structure of the proposed reflective AR display is very simple: only a projection system or a two-dimensional display screen needs to be combined with the MBPA. In our experiment, a 25 cm × 14 cm AR display was built, and a combination of a 3D virtual image and a real 3D object was presented by the proposed AR 3D display. Thanks to its compact form factor and low weight, the proposed device could realize an AR display of large size.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Augmented Interaction Systems for Supporting Autistic Children. Evolution of a Multichannel Expressive Tool: The SEMI Project Feasibility Study
Appl. Sci. 2019, 9(15), 3081; https://doi.org/10.3390/app9153081 - 31 Jul 2019
Abstract
Background: Over the past ten years, the authors have been designing, developing, and testing pervasive technology to support children with autism spectrum disorder (ASD). Methods: In the present study, an integrated system based on multimedia and augmented interaction technologies was tested on young subjects with ASD and dyspraxia, aged 6–10 years and undergoing rehabilitation treatment; a team of clinical psychologists analyzed the results of the experimentation. The ten children involved in the project underwent an initial assessment of praxis skills and motor coordination. Subsequently, the subjects were divided into two subgroups: five children participated in the experimentation and five were evaluated as the control group (treatment as usual). Results: The evaluation showed increased scores in several of the aspects considered, particularly those related to motor coordination; improvements were found in balancing tests and in hand-movement testing. Conclusion: The children involved in the sessions showed greater ability to self-control their movements and to select specific motor areas. The methods used also seem promising for improving emotional and social skills in a motivating and enjoyable climate. A high level of acceptance by professionals was observed, and parents' feedback was also positive.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
AR Pointer: Advanced Ray-Casting Interface Using Laser Pointer Metaphor for Object Manipulation in 3D Augmented Reality Environment
Appl. Sci. 2019, 9(15), 3078; https://doi.org/10.3390/app9153078 - 30 Jul 2019
Abstract
In this paper, we propose AR Pointer, a new augmented reality (AR) interface that allows users to manipulate three-dimensional (3D) virtual objects in an AR environment. AR Pointer uses the built-in 6-degrees-of-freedom (DoF) inertial measurement unit (IMU) sensor of an off-the-shelf mobile device to cast a virtual ray that is used to accurately select objects. It is implemented using the simple touch gestures commonly used on smartphones for 3D object manipulation, so users can easily manipulate 3D virtual objects with AR Pointer without a long training period. To demonstrate the usefulness of AR Pointer, we introduce two use cases, AR furniture layout and AR education. We then conducted two experiments, performance tests and usability tests, to demonstrate the effectiveness of the designed interaction methods. We found that AR Pointer is more efficient than other interfaces, achieving 39.4% faster task completion times in object manipulation. In addition, the participants rated AR Pointer an average of 8.61 points (13.4%) higher in the usability test, conducted with the System Usability Scale (SUS) questionnaire, and 8.51 points (15.1%) higher in the fatigue test, conducted with the NASA Task Load Index (NASA-TLX) questionnaire. Previous AR applications have been implemented in passive AR environments where users simply view AR objects prepared in advance. If AR Pointer is used for AR object manipulation, however, it is possible to provide an immersive AR environment for users who wish to actively interact with the AR objects.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
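The laser-pointer metaphor reduces to casting a ray from the device's IMU orientation and intersecting it with the scene. A minimal sketch of that geometry; the coordinate convention (y up, -z forward at rest), the function names, and the floor-plane target are assumptions for illustration, not the paper's code:

```python
import math

def ray_direction(yaw, pitch):
    """Unit ray from device yaw/pitch in radians (y up, -z forward at rest)."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

def pick_on_floor(origin, direction, floor_y=0.0):
    """Intersect the ray with the horizontal plane y = floor_y, e.g. to place
    AR furniture. Returns None if the ray is parallel or points away."""
    oy, dy = origin[1], direction[1]
    if abs(dy) < 1e-9:
        return None
    t = (floor_y - oy) / dy
    if t < 0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Touch gestures on the screen would then translate, rotate, or scale whichever object the ray currently hits.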

Open Access Article
Virtual Object Manipulation by Combining Touch and Head Interactions for Mobile Augmented Reality
Appl. Sci. 2019, 9(14), 2933; https://doi.org/10.3390/app9142933 - 22 Jul 2019
Abstract
This paper proposes an interaction method for conveniently manipulating a virtual object by combining touch interaction and head movements on a head-mounted display (HMD) that provides mobile augmented reality (AR). A user can conveniently manipulate a virtual object with touch interaction, recognized from an inertial measurement unit (IMU) attached to the index fingernail, and head movements, tracked by the IMU embedded in the HMD. We design two interactions that combine touch and head movements to manipulate a virtual object on a mobile HMD. Each designed interaction method manipulates virtual objects by controlling ray casting and adjusting widgets. To evaluate the usability of the designed interaction methods, a user evaluation was performed in comparison with hand interaction on the HoloLens. The designed interaction methods received positive feedback: virtual objects can be manipulated easily in a mobile AR environment.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Augmented Reality Implementations in Stomatology
Appl. Sci. 2019, 9(14), 2929; https://doi.org/10.3390/app9142929 - 22 Jul 2019
Cited by 1
Abstract
Augmented reality has a wide range of applications in many areas that can extend the study of real objects into the digital world, including stomatology. Real dental objects that were previously examined using their plaster casts are often replaced by their digital models or three-dimensional (3D) prints in the cyber-physical world. This paper reviews a selection of digital methods that have been applied in dentistry, including the use of intra-oral scanning technology for data acquisition and evaluation of fundamental features of dental arches. The methodology includes the use of digital filters and morphological operations for spatial object analysis, their registration, and evaluation of changes during the treatment of specific disorders. The results include 3D models of selected dental arch objects, which allow a comparison of their shape and position during repeated observations. The proposed methods present digital alternatives to the use of plaster casts for semiautomatic evaluation of dental arch measures. This paper describes some of the advantages of 3D digital technology replacing real-world elements and plaster cast dental models in many areas of classical stomatology. Full article
Open Access Article
The Motivation of Technological Scenarios in Augmented Reality (AR): Results of Different Experiments
Appl. Sci. 2019, 9(14), 2907; https://doi.org/10.3390/app9142907 - 19 Jul 2019
Cited by 3
Abstract
Augmented Reality (AR) is an emergent technology that is acquiring more and more relevance in teaching every day. Together with mobile technology, this combination arises as one of the most effective pairings to support meaningful and ubiquitous learning. Nevertheless, this pairing can only prove valid if the student is motivated to use it during the learning process. An attempt was made, through the implementation of Keller's Instructional Material Motivational Survey model, to determine the degree of motivation of Pedagogy, Medicine and Art students from the University of Seville for using AR-enriched notes available on mobile devices in the classroom. Three applications designed for the subjects of Educational Technology, Anatomy and Art served to assess the experience positively, both in terms of the motivation raised by participation in the experiment and in terms of academic performance improvement. It can additionally be stated that our main finding was a link between students' motivation to use the enriched notes and the performance obtained in the subject in which they used them. Evidence was also found that the utilization of Augmented Reality benefits the learning process. Full article
Open Access Article
Wrist Rehabilitation System Using Augmented Reality for Hemiplegic Stroke Patient Rehabilitation: A Feasibility Study
Appl. Sci. 2019, 9(14), 2892; https://doi.org/10.3390/app9142892 - 19 Jul 2019
Abstract
Objective: Our objective was to investigate the effect of a rehabilitation system using augmented reality (AR) on the upper extremity motor performance of patients with stroke. Methods: The AR system, which applies the mirror therapy mechanism, provides the intervention protocol for patients with hemiplegia after stroke. The system consists of a patient positioning tool (a chair), a white-surface table, an image acquisition unit, an image processing unit, an image displaying unit, an arm holder, a Velcro strap, and two blue circle stickers. To assess the feasibility of our system for motor function recovery, a stroke patient was recruited to receive the AR intervention. The treatment was performed two times a day for ten minutes over two weeks (ten treatment days), excluding the time for installation, calibration, and three-minute breaks. The Jebsen Taylor hand function test and the Arm Motor Fugl-Meyer assessment were used as the primary and secondary outcome measures, respectively, to evaluate motor function recovery. Additionally, the stroke impact scale, the Korean version-Modified Barthel Index (K-MBI), the active range of motion of the wrist joint (ROM), and the grasp force in Newtons were measured. The participant's feedback and adverse effects were recorded as well. Results: Motor function improvements were exhibited in the wrist and hand subtest of the Arm Motor Fugl-Meyer (baseline: 19; post-intervention: 23), the proximal arm subtest of the Fugl-Meyer (baseline: 31; post-intervention: 34), ROM (extended by 10° and 3° for flexion and extension, respectively), the stroke impact scale (baseline: 46; post-intervention: 54), K-MBI (baseline: 92; post-intervention: 95), the nine-hole pegboard (baseline: 30 s; post-intervention: 25 s), and grasp force in Newtons (baseline: 12.7; post-intervention: 17.7). No adverse effects were reported after the intervention.
Conclusion: The AR system applying the mirror therapy mechanism demonstrated its feasibility for motor function recovery in a stroke patient. Full article
Open Access Article
A Novel Real-Time Match-Moving Method with HoloLens
Appl. Sci. 2019, 9(14), 2889; https://doi.org/10.3390/app9142889 - 19 Jul 2019
Abstract
With the advancement of media and computing technologies, video compositing techniques have improved to a great extent. These techniques have been used not only in the entertainment industry but also in advertisement and new media. Match-moving is a cinematic technology for virtual-real image synthesis that allows the insertion of computer graphics (virtual objects) into real-world scenes. To achieve realistic virtual-real image synthesis, it is important to obtain the internal parameters (such as focal length) and external parameters (position and rotation) of a red-green-blue (RGB) camera. Conventional methods recover these parameters by extracting feature points from recorded video frames to guide the virtual camera. These methods fail when there is occlusion or motion blur in the recorded scene. In this paper, we propose a novel method (system) for pre-visualization and virtual-real image synthesis that overcomes the limitations of conventional methods. This system uses the spatial understanding capability of the Microsoft HoloLens to perform match-moving of virtual-real video scenes. Experimental results demonstrate that our system is much more accurate and efficient than existing systems for video compositing. Full article
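Once the internal parameters (focal length, principal point) and external parameters (position, rotation) are known, inserting a virtual object reduces to projecting its world-space points into the frame. A minimal pinhole-projection sketch; the row-major rotation matrix and the parameter values are hypothetical, not taken from the paper.

```python
def project_point(point_cam, focal_px, cx, cy):
    """Pinhole projection of a camera-space point (x, y, z), z > 0,
    into pixel coordinates, given the focal length in pixels and the
    principal point (cx, cy)."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

def to_camera_space(point_world, cam_pos, r):
    """Rigid world-to-camera transform; r is a 3x3 rotation (row tuples)."""
    p = tuple(point_world[i] - cam_pos[i] for i in range(3))
    return tuple(sum(r[i][j] * p[j] for j in range(3)) for i in range(3))

identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
# Virtual object 2 m in front of a camera at the origin:
uv = project_point(to_camera_space((0.5, 0.0, 2.0), (0, 0, 0), identity),
                   focal_px=800, cx=640, cy=360)
```

When the recovered camera pose drifts (e.g. under occlusion or motion blur), this projection lands in the wrong pixels, which is exactly the failure mode the paper's HoloLens-based tracking is meant to avoid.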
Open Access Article
Augmented Reality in Heritage Apps: Current Trends in Europe
Appl. Sci. 2019, 9(13), 2756; https://doi.org/10.3390/app9132756 - 08 Jul 2019
Cited by 1
Abstract
Although augmented reality (AR) has come to play an increasingly important role in a wide range of areas, its use remains rather limited in the realm of heritage education. This paper sets out to analyze which heritage-related apps can be found in Europe that partly or wholly use AR as a tool to help users learn about different types of heritage. Our study only identified a limited number of such apps and we used this sample both to paint a portrait of the current state of the question and also to highlight certain observable trends. The results showed that most such apps used AR to reconstruct spaces and buildings, and to a lesser extent, objects. Many of these apps used an academic mode of communication to provide a temporal perspective of monumental and (mainly) historical heritage. The paper also outlines future lines of research dedicated to finding more apps that could be used to increase the current sample size. This would allow for a more comprehensive assessment of such apps from an educational point of view. Several case studies are proffered in order to highlight the keys to successful use of AR in heritage apps. Full article
Open Access Article
Medical Augmented-Reality Visualizer for Surgical Training and Education in Medicine
Appl. Sci. 2019, 9(13), 2732; https://doi.org/10.3390/app9132732 - 05 Jul 2019
Abstract
This paper presents a projection-based augmented-reality system (MARVIS) that supports the visualization of internal structures on the surface of a liver phantom. MARVIS is endowed with three key features: real-time tracking of the spatial relationship between the phantom and the operator's head, monoscopic projection of internal liver structures onto the phantom surface for 3D perception without additional head-mounted devices, and an internal electronic circuit in the phantom to assess the accuracy of a syringe guidance system. An initial validation was carried out by 25 medical students (12 males and 13 females; mean age, 23.12 years; SD, 1.27 years) and 3 male surgeons (mean age, 43.66 years; SD, 7.57 years). The validation results show that the ratio of failed syringe insertions was reduced from 50% to 30% by adopting the MARVIS projection. The proposed system suitably enhances a surgeon's spatial perception of the phantom's internal structure. Full article
Open Access Article
Hand Gestures in Virtual and Augmented 3D Environments for Down Syndrome Users
Appl. Sci. 2019, 9(13), 2641; https://doi.org/10.3390/app9132641 - 29 Jun 2019
Cited by 1
Abstract
Studies have revealed that applications using virtual and augmented reality provide immersion, motivation, fun and engagement. However, to date, few studies have researched how users with Down syndrome interact with these technologies. This research identified the most commonly used interactive 3D gestures according to the literature and tested eight of these using Oculus, Atheer and Leap Motion technologies. By applying MANOVAs to measurements of the time taken to complete each gesture and the success rate of each gesture when performed by participants with Down syndrome versus neurotypical participants, it was determined that no significant difference was shown for age or gender between these two sample groups. A difference was demonstrated only for the independent variable Down syndrome when analysed as a group. By using ANOVAs, it was determined that both groups found it easier to perform the gestures Stop, Point, Pan and Grab; thus, it is argued that these gestures should be used when programming software to create more inclusive AR and VR environments. The hardest gestures were Take, Pinch, Tap and Swipe; thus, these should be used to confirm critical actions, such as deleting data or cancelling actions. Lastly, the authors gather and make recommendations on how to develop inclusive 3D interfaces for individuals with Down syndrome. Full article
Open Access Article
AR Object Manipulation on Depth-Sensing Handheld Devices
Appl. Sci. 2019, 9(13), 2597; https://doi.org/10.3390/app9132597 - 27 Jun 2019
Cited by 1
Abstract
Recently released, depth-sensing-capable, and moderately priced handheld devices support the implementation of augmented reality (AR) applications without the requirement of tracking visually distinct markers. This relaxed constraint allows for applications with significantly increased augmentation space dimension, virtual object size, and user movement freedom. Being relatively new, there is currently a lack of study on issues concerning direct virtual object manipulation for AR applications on these devices. This paper presents the results from a survey of the existing object manipulation methods designed for traditional handheld devices and identifies potentially viable ones for newer, depth-sensing-capable devices. The paper then describes the following: a test suite that implements the identified methods, test cases designed specifically for the characteristics offered by the new devices, the user testing process, and the corresponding results. Based on the study, this paper concludes that AR applications on newer, depth-sensing-capable handheld devices should manipulate small-scale virtual objects by mapping directly to device movements and large-scale virtual objects by supporting separate translation and rotation modes. Our work and results are the first step in better understanding the requirements to support direct virtual object manipulation for AR applications running on a new generation of depth-sensing-capable handheld devices. Full article
Open Access Article
Psychophysiological Alteration After Virtual Reality Experiences Using Smartphone-Assisted Head Mount Displays: An EEG-Based Source Localization Study
Appl. Sci. 2019, 9(12), 2501; https://doi.org/10.3390/app9122501 - 19 Jun 2019
Abstract
Brain functional changes could be observed in people after an experience of virtual reality (VR). The present study investigated cyber sickness and changes of brain regional activity using electroencephalogram (EEG)-based source localization, before and after a VR experience involving a smartphone-assisted head mount display. Thirty participants (mean age = 25 years) were recruited. All were physically healthy and had no ophthalmological diseases. Their corrected vision was better than 20/20. Resting state EEG and the simulator sickness questionnaire (SSQ) were measured before and after the VR experience. Source activity of each frequency band was calculated using the sLORETA program. After the VR experience, the SSQ total score and subscores (nausea, oculomotor symptoms, and disorientation) were significantly increased, and brain source activations were significantly increased: alpha1 activity in the cuneus and alpha2 activity in the cuneus and posterior cingulate gyrus (PCG). The change of SSQ score (after–before) showed significant negative correlation with the change of PCG activation (after–before) in the alpha2 band. The study demonstrated increased cyber sickness and increased alpha band power in the cuneus and PCG after the VR experience. Reduced PCG activation in the alpha band may be associated with the symptom severity of cyber sickness. Full article
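Source localization with sLORETA is beyond a short snippet, but the underlying notion of band power (e.g. the alpha range, roughly 8–13 Hz) can be illustrated with a plain DFT over a synthetic signal. The O(n²) DFT, the band edges, and the 128 Hz sampling rate here are for illustration only; real EEG pipelines use FFTs, windowing, and dedicated source-localization software.

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Sum of squared DFT magnitudes for bins whose frequency lies in
    [lo, hi] Hz, normalized by the number of samples. Plain O(n^2) DFT
    for clarity; production code would use an FFT with windowing."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power / n

fs = 128                                                         # Hz (assumed)
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]   # 10 Hz tone
alpha = band_power(sig, fs, 8.0, 13.0)   # covers the alpha1/alpha2 range
theta = band_power(sig, fs, 4.0, 8.0)
```

A pure 10 Hz tone concentrates its power in the alpha band, so `alpha` dominates `theta`, which is the kind of band-wise comparison the study performs between pre- and post-VR recordings.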
Open Access Article
Model-Based 3D Pose Estimation of a Single RGB Image Using a Deep Viewpoint Classification Neural Network
Appl. Sci. 2019, 9(12), 2478; https://doi.org/10.3390/app9122478 - 18 Jun 2019
Cited by 1
Abstract
This paper presents a model-based approach for 3D pose estimation from a single RGB image to keep the 3D scene model up-to-date using a low-cost camera. A prelearned image model of the target scene is first reconstructed using a training RGB-D video. Next, the model is analyzed using the proposed multiple principal analysis to label the viewpoint class of each training RGB image and construct a training dataset for a deep viewpoint classification neural network (DVCNN). For all training images in a viewpoint class, the DVCNN estimates their membership probabilities and defines the template of the class as the image with the highest probability. To reconstruct the scene in 3D space using a camera, a pose estimation algorithm then uses the template information to estimate the pose parameters and depth map of a single RGB image captured by navigating the camera to a specific viewpoint. The pose estimation algorithm is thus the key to keeping the status of the 3D scene up-to-date. In contrast to conventional pose estimation algorithms, which use sparse features, our approach enhances the quality of the reconstructed 3D scene point cloud through template-to-frame registration. Finally, we verify the ability of the established reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in terms of pose estimation accuracy. Full article
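The template-selection rule described above — within each viewpoint class, the training image with the highest membership probability becomes the class template — reduces to an argmax per class. A small sketch with hypothetical class names, image identifiers, and probabilities:

```python
def pick_templates(memberships):
    """memberships: {class_name: [(image_id, probability), ...]}.
    Returns the template image of each viewpoint class, defined as the
    image with the highest membership probability in that class."""
    return {cls: max(images, key=lambda item: item[1])[0]
            for cls, images in memberships.items()}

# Hypothetical DVCNN outputs for two viewpoint classes:
templates = pick_templates({
    "front": [("img_01", 0.72), ("img_02", 0.91), ("img_03", 0.64)],
    "left":  [("img_11", 0.55), ("img_12", 0.49)],
})
```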
Open Access Article
A State Validation System for Augmented Reality Based Maintenance Procedures
Appl. Sci. 2019, 9(10), 2115; https://doi.org/10.3390/app9102115 - 24 May 2019
Cited by 1
Abstract
Maintenance has been one of the most important domains for augmented reality (AR) since its inception. AR applications enable technicians to receive visual and audio computer-generated aids while performing different activities, such as assembling, repairing, or maintenance procedures. These procedures are usually organized as a sequence of steps, each one involving an elementary action to be performed by the user. However, since it is not possible to automatically validate the users' actions, they might incorrectly execute or miss some steps. Thus, a relevant open problem is to provide users with some sort of automated verification tool. This paper presents a system, used to support maintenance procedures through AR, which tries to address the validation problem. The novel technology consists of a computer vision algorithm able to evaluate, at each step of a maintenance procedure, whether or not the user correctly completed the assigned task. The validation occurs by comparing an image of the final status of the machinery, after the user has performed the task, with a virtual 3D representation of the expected final status. Moreover, in order to avoid false positives, the system can identify both motions in the scene and changes in the camera's zoom and/or position, thus enhancing the robustness of the validation phase. Tests demonstrate that the proposed system can effectively help the user in detecting and avoiding errors during the maintenance process. Full article
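The comparison at the heart of such a validation step — captured final state versus rendered expected state — can be illustrated with a per-pixel difference and a mismatch threshold. This is only a toy sketch of the comparison step over flat grayscale frames; the thresholds are invented, and the paper's actual computer vision algorithm (including motion and zoom detection) is far more elaborate.

```python
def step_completed(captured, expected, pixel_tol=10, mismatch_ratio=0.02):
    """Declare a maintenance step complete when at most `mismatch_ratio`
    of pixels differ by more than `pixel_tol` grey levels between the
    captured frame and the rendered expected state (both flat lists)."""
    if len(captured) != len(expected):
        raise ValueError("frames must have the same size")
    mismatched = sum(1 for c, e in zip(captured, expected)
                     if abs(c - e) > pixel_tol)
    return mismatched / len(captured) <= mismatch_ratio

expected = [100] * 1000                                    # rendered state
ok = step_completed([100] * 990 + [200] * 10, expected)    # 1% of pixels off
bad = step_completed([100] * 900 + [200] * 100, expected)  # 10% of pixels off
```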
Open Access Article
User-Aware Audio Marker Using Low Frequency Ultrasonic Object Detection and Communication for Augmented Reality
Appl. Sci. 2019, 9(10), 2004; https://doi.org/10.3390/app9102004 - 16 May 2019
Abstract
In augmented reality (AR), audio markers can be alternatives to image markers for rendering virtual objects when an AR device camera fails to identify the image marker due to lighting conditions and/or the distance between the marker and device. However, conventional audio markers simply broadcast a rendering queue to anonymous devices, making it difficult to provide specific virtual objects of interest to the user. To overcome this limitation without relying on camera-based sensing, we propose a user-aware audio marker system using low-frequency ultrasonic signal processing. The proposed system detects users who stay within the marker's range using ultrasonic object detection, and then uses ultrasonic communication based on windowed differential phase shift keying modulation to send a rendering queue only to those users near the marker. Since the proposed system uses commercial microphones and speakers, conventional telecommunication systems can be employed to deliver the audio markers. The performance of the proposed audio marker system is evaluated in terms of object detection accuracy and communication robustness. First, the object detection accuracy of the proposed system is compared with that of a pyroelectric infrared (PIR) sensor-based system in indoor environments, and it is shown that the proposed system achieves a lower equal error rate than the PIR sensor-based system. Next, the successful transmission rate of the proposed system is measured for various distances and azimuths under noisy conditions, and it is also shown that the proposed audio marker system can successfully operate up to approximately 4 m without any transmission errors, even with 70 dBSPL ambient noise. Full article
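Differential phase shift keying, the modulation family named above, encodes each bit as a phase difference between consecutive symbols, so the receiver never needs an absolute phase reference. A minimal binary DPSK round trip over symbol phases (not actual audio samples, and without the windowing the paper applies) can be sketched as:

```python
import math

def dpsk_modulate(bits):
    """Binary DPSK: bit 1 flips the carrier phase by pi, bit 0 keeps it.
    Returns the absolute phase of each symbol, starting from a reference 0."""
    phases = [0.0]                      # reference symbol
    for bit in bits:
        shift = math.pi if bit else 0.0
        phases.append((phases[-1] + shift) % (2 * math.pi))
    return phases

def dpsk_demodulate(phases):
    """Recover bits from consecutive phase differences."""
    bits = []
    for prev, cur in zip(phases, phases[1:]):
        diff = (cur - prev) % (2 * math.pi)
        bits.append(1 if abs(diff - math.pi) < math.pi / 2 else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1]
recovered = dpsk_demodulate(dpsk_modulate(payload))
```

Because decoding depends only on phase differences, a constant phase offset introduced by the acoustic channel cancels out, which is why DPSK suits speaker-to-microphone links like this one.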
Open Access Article
Spatial Analysis of Navigation in Virtual Geographic Environments
Appl. Sci. 2019, 9(9), 1873; https://doi.org/10.3390/app9091873 - 07 May 2019
Cited by 1
Abstract
Human performance and navigation activity in virtual environments can be measured and assessed with the aim of drawing specific conclusions about human cognition. This paper presents an original virtual geographic environment (VGE) designed and used for this purpose. The presented research is rooted in an interdisciplinary approach combining knowledge and principles from the fields of psychology, cartography, and information technologies. The VGE was embedded with user-logging functionality to provide a basis from which conclusions about human cognitive processes in a VGE could be drawn. The scope of this solution is introduced, described, and discussed under a behavioral measurement framework. An exploratory research design was adopted to demonstrate the environment's utility in proof-of-concept user testing. Twenty participants were observed in interactive, semi-interactive and non-interactive tasks, and their performance and individual differences were analyzed. The behavioral measurements were supplemented by the Object-Spatial Imagery and Verbal Questionnaire to determine the participants' cognitive styles. In this sample, significant differences in exploration strategies between men and women were detected. Differences between experienced and non-experienced users were also found in their ability to identify spatial relations in virtual scenes. Finally, areas for future research and development were pinpointed. Full article
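User-logging of positions is the raw material for the behavioral measures mentioned above. Two simple examples of metrics derivable from a position log — total path length and coarse grid-cell coverage as an exploration proxy — can be sketched as follows; the metrics and the 5 m cell size are illustrative assumptions, not the paper's instruments.

```python
import math

def path_length(positions):
    """Total distance travelled across logged (x, y) positions."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def visited_cells(positions, cell=5.0):
    """Coarse exploration coverage: the set of distinct grid cells entered.
    A larger set suggests a broader exploration strategy."""
    return {(int(x // cell), int(y // cell)) for x, y in positions}

# Hypothetical log of a participant's movement through the VGE:
log = [(0.0, 0.0), (3.0, 4.0), (3.0, 9.0), (12.0, 9.0)]
dist = path_length(log)            # 5 + 5 + 9
coverage = len(visited_cells(log))
```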
Open Access Article
Trends and Research Issues of Augmented Reality Studies in Architectural and Civil Engineering Education—A Review of Academic Journal Publications
Appl. Sci. 2019, 9(9), 1840; https://doi.org/10.3390/app9091840 - 04 May 2019
Cited by 5
Abstract
Architectural and civil engineering (ACE) education is inextricably connected to real-world practice. The application of augmented reality (AR) technology can help to establish a link between virtual and real-world information for students. Studies of applying AR in ACE education have increased annually, and numerous studies have indicated that AR possesses immense application potential. To address and analyze pertinent research issues, published studies in the Scopus database were explored; this revealed problems that persist and are worthy of attention, such as the selection of system types and devices, the application of research methods, and appropriate learning strategies and teaching methods. Courses with objective grading standards should be given priority in AR experimental courses to allow a meticulous investigation of AR's influence on students' learning outcomes and, ultimately, improvement of classroom quality. Suitable types of AR systems should be selected based on course content prior to the design and development of the system. It is recommended to develop markerless systems, whose larger application range offers students additional convenience. Systems can also be accompanied by functions such as instant online assessments, synchronized assessments, and exchange capabilities to help consolidate what has been taught and develop critical thinking abilities. The combination of AR and building information modeling (BIM) in architectural and civil practice, which has immense application potential, has become an emerging research trend. Collaboration between academia and practice should be enhanced, with the roles and knowledge of instructors, engineers, designers, and computer experts integrated for an optimal connection between general pedagogy and domain-specific learning. Teaching methods that emphasize "locations" as well as "roles" can be adopted in order to create a superior reality-based learning environment with diversified learning methods.
Overall, these trends point to an issue of integration and collaboration: research should proceed interactively with pedagogical findings, with resources integrated across roles, fields, and university departments. Full article
Open Access Article
Virtual Object Replacement Based on Real Environments: Potential Application in Augmented Reality Systems
Appl. Sci. 2019, 9(9), 1797; https://doi.org/10.3390/app9091797 - 29 Apr 2019
Abstract
Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications. Full article
Open Access Article
Automatic Association of Scents Based on Visual Content
Appl. Sci. 2019, 9(8), 1697; https://doi.org/10.3390/app9081697 - 24 Apr 2019
Cited by 2
Abstract
Although olfaction can enhance the user’s experience in virtual environments, it is not widely utilized in virtual content. This is because olfactory displays are either not aware of the content in the virtual world or are application specific. Wide context awareness can be enabled through image recognition via machine learning: screenshots from the virtual world can be analyzed for the presence of virtual scent emitters, allowing the olfactory display to respond by generating the corresponding smells. A convolutional neural network (CNN) based on the Inception model for image recognition was used to train the system. To evaluate the accuracy of the model, we trained it on the computer game Minecraft. The model achieved 97% accuracy, reaching 99% in some cases.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
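Once a classifier has labelled a screenshot, the remaining step described above is a simple mapping from recognized labels to scent channels. The sketch below is illustrative; the labels, confidence threshold, and channel numbers are hypothetical, not taken from the paper.

```python
# Hypothetical mapping from classifier labels to (scent name, emitter channel)
SCENT_MAP = {
    "campfire": ("smoke", 0),
    "flower_field": ("floral", 1),
    "ocean": ("sea breeze", 2),
}

def scents_for_labels(predictions, threshold=0.8):
    """Keep labels the classifier is confident about and look up their scents."""
    return [SCENT_MAP[label] for label, p in predictions
            if p >= threshold and label in SCENT_MAP]

# e.g. classifier output for one screenshot: (label, confidence) pairs
preds = [("campfire", 0.97), ("ocean", 0.55), ("flower_field", 0.91)]
active = scents_for_labels(preds)
print(active)  # -> [('smoke', 0), ('floral', 1)]
```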

Open Access Article
Design and Analysis of Cloud Upper Limb Rehabilitation System Based on Motion Tracking for Post-Stroke Patients
Appl. Sci. 2019, 9(8), 1620; https://doi.org/10.3390/app9081620 - 18 Apr 2019
Cited by 2
Abstract
In order to improve the convenience and practicability of home rehabilitation training for post-stroke patients, this paper presents a cloud-based upper limb rehabilitation system based on motion tracking. A three-dimensional reachable workspace virtual game (3D-RWVG) was developed to achieve meaningful home rehabilitation training. Five movements were selected as the criteria for rehabilitation assessment, and four upper limb performance parameters were analyzed: relative surface area (RSA), mean velocity (MV), logarithm of dimensionless jerk (LJ), and logarithm of curvature (LC). A two-headed convolutional neural network (TCNN) model was established for the assessment. The experiment was carried out in a hospital. The results show that the RSA, MV, LJ, and LC graphs reflect upper limb motor function intuitively. The accuracy of the TCNN model for the five movements is 92.6%, 80.0%, 89.5%, 85.1%, and 87.5%, respectively. A therapist can check a patient’s training and assessment information through the cloud database and make a diagnosis. The system can realize home rehabilitation training and assessment without the supervision of a therapist, and has the potential to become an effective home rehabilitation system.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
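The MV and LJ parameters above can be computed directly from a tracked trajectory. The sketch below uses one common formulation of the log dimensionless jerk from the motor-control literature (it may differ in detail from the paper's); the two trajectories are synthetic.

```python
import numpy as np

def mean_velocity(pos, dt):
    """Mean speed of a sampled 3-D trajectory pos (N x 3) with timestep dt."""
    vel = np.gradient(pos, dt, axis=0)
    return np.linalg.norm(vel, axis=1).mean()

def log_dimensionless_jerk(pos, dt):
    """One common formulation of the log dimensionless jerk:
    -ln( (T^5 / v_peak^2) * integral ||d^3x/dt^3||^2 dt ); larger = smoother."""
    vel = np.gradient(pos, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    T = dt * (len(pos) - 1)
    v_peak = np.linalg.norm(vel, axis=1).max()
    dj = (T ** 5 / v_peak ** 2) * np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt
    return -np.log(dj)

# A smooth cubic reach vs. the same path with simulated sensor jitter
t = np.linspace(0.0, 1.0, 200)[:, None]
smooth = np.hstack([t, t ** 3, np.zeros_like(t)])
rng = np.random.default_rng(1)
jittery = smooth + 0.01 * rng.standard_normal(smooth.shape)
dt = t[1, 0] - t[0, 0]
mv = mean_velocity(smooth, dt)
ldj_smooth = log_dimensionless_jerk(smooth, dt)
ldj_jittery = log_dimensionless_jerk(jittery, dt)
print(mv, ldj_smooth, ldj_jittery)  # the jittery trace scores lower (less smooth)
```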

Open Access Article
Automatic Lip-Reading System Based on Deep Convolutional Neural Network and Attention-Based Long Short-Term Memory
Appl. Sci. 2019, 9(8), 1599; https://doi.org/10.3390/app9081599 - 17 Apr 2019
Cited by 2
Abstract
With the improvement of computer performance, virtual reality (VR), as a new mode of visual operation and interaction, gives automatic lip-reading technology based on visual features broad development prospects. In an immersive VR environment, the user’s state can be captured through lip movements, enabling analysis of the user’s thinking in real time. Due to complex image processing, hard-to-train classifiers, and long recognition processes, traditional lip-reading recognition systems struggle to meet the requirements of practical applications. In this paper, a convolutional neural network (CNN) for image feature extraction is combined with an attention-based recurrent neural network (RNN) for automatic lip-reading recognition. Our proposed method can be divided into three steps. First, we extract keyframes from our own independently established database (English pronunciation of the numbers zero to nine by three males and three females). Then, we use the Visual Geometry Group (VGG) network to extract lip image features; the extracted features prove fault-tolerant and effective. Finally, we compare two lip-reading models: (1) a fusion model with an attention mechanism and (2) a fusion model of the two networks without it. The accuracy of the proposed model is 88.2% on the test dataset, versus 84.9% for the contrastive model. Our proposed method is therefore superior to traditional lip-reading recognition methods and general neural networks.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
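The attention step in the fusion model above can be sketched as soft attention pooling over per-frame features: score each frame, softmax the scores, and take the weighted sum. This is an illustrative stand-in, not the paper's architecture; the feature dimension, frame count, and weight vector are made up, and in practice the weights would be learned.

```python
import numpy as np

def attention_pool(frames, w):
    """Soft attention over per-frame features (frames: T x D, w: D).
    Returns the attention weights and the attention-weighted feature."""
    scores = frames @ w                            # (T,) one score per frame
    scores -= scores.max()                         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights, sum to 1
    return alpha, frames.T @ alpha                 # (D,) pooled feature

rng = np.random.default_rng(0)
frames = rng.standard_normal((12, 64))  # 12 keyframes x 64-dim VGG-like features
w = rng.standard_normal(64)             # stand-in for a learned scoring vector
alpha, pooled = attention_pool(frames, w)
print(alpha.sum(), pooled.shape)
```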

Open Access Article
BIM-Based AR Maintenance System (BARMS) as an Intelligent Instruction Platform for Complex Plumbing Facilities
Appl. Sci. 2019, 9(8), 1592; https://doi.org/10.3390/app9081592 - 17 Apr 2019
Cited by 2
Abstract
The traditional architectural design of facilities requires maintenance workers to cross-reference plumbing layout drawings with the actual facilities in complex, hidden, sealed in-wall, or low-illumination environments. This study developed a building information modeling (BIM)-based augmented reality maintenance system (BARMS) to provide a smartphone-based platform and a new application scenario for a cooling tower and pipe shutdown protocol in a real-world old campus building. An intelligent instruction framework was built considering subject, path, and actions. Solutions were devised to the challenges of monitoring the subject and the maintenance protocol while moving between indoor and outdoor spaces, between bright and dark environments, and when crossing building enclosures at roof level. Animated AR instructions were interactive and followed the knowledge and management protocols of the associated instruction aids. The results demonstrated straightforward mapping of in-wall pipes and their connected valves, with practical auxiliary components for walking direction and path guidance. The suggested maintenance routes also ensured worker safety. Statistical analysis showed a positive user response.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Design of an Interactive Spatial Augmented Reality System for Stage Performance Based on UWB Positioning and Wireless Triggering Technology
Appl. Sci. 2019, 9(7), 1318; https://doi.org/10.3390/app9071318 - 29 Mar 2019
Abstract
In this research, the authors designed an interactive spatial augmented reality system for stage performance based on UWB positioning and Bluetooth® triggering technologies. The position of the actor is obtained through an antenna tag carried by the actor and signal base stations placed on the stage. Special effects can be triggered by the actor through the Bluetooth® module and rendered at the corresponding location on the screen, which keeps the triggering mechanism well concealed. The system offers a high degree of freedom in practical applications and can present interactive spatial augmented reality effects, thereby providing new possibilities for the application of spatial augmented reality in stage performance.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
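Estimating the actor's position from UWB anchor ranges, as described above, is typically done by trilateration. The sketch below uses the standard linearization (subtracting one range equation from the others to remove the quadratic term); the stage dimensions and anchor layout are made up, and the ranges are noise-free for illustration.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linear least-squares 2-D position from anchor positions and ranges.
    ||p - a_i||^2 = d_i^2, minus the i = 0 equation, gives A p = b."""
    a0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical 8 m x 6 m stage with UWB base stations at the corners
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
actor = np.array([3.0, 2.0])
dists = np.linalg.norm(anchors - actor, axis=1)  # ideal (noise-free) ranges
est = trilaterate(anchors, dists)
print(est)  # -> [3. 2.]
```

With real UWB ranges the same least-squares solve simply averages out the measurement noise across anchors.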

Open Access Article
Application of Virtual Reality for Learning the Material Properties of Shape Memory Alloys
Appl. Sci. 2019, 9(3), 580; https://doi.org/10.3390/app9030580 - 10 Feb 2019
Cited by 3
Abstract
A shape memory alloy (SMA) is an alloy that can be deformed at lower temperatures and restore its original shape upon heating. SMAs have been receiving considerable attention in materials science research, and their applications include the robotics, automotive, aerospace, and biomedical industries. Observing an SMA’s shaping and restoration processes is important for understanding its working principles and applications. However, the transformation of its crystal structure with temperature can only be seen using special equipment, such as a transmission electron microscope (TEM), which is expensive and requires professional skills to operate. In this study, a teaching module was designed using virtual reality (VR) technology and SMA research results to show the alloy’s shape memory properties, its shaping and restoration processes, and real-life applications in an immersive and interactive way. A teaching experiment was conducted to compare students’ learning effectiveness using the teaching module (the experimental group) with that of using real SMA materials as teaching aids (the control group). Two classes of students in the Department of Materials Science (one as the experimental group and the other as the control group) were selected by convenience sampling from a university in North Taiwan; the experimental group contained 52 students and the control group 70. A nonequivalent pretest-posttest design was adopted to explore whether the two groups differed significantly in learning effectiveness. The experimental results reveal that the teaching module improved learning effectiveness significantly (p = 0.001), and the questionnaire results show that a majority of the students had positive attitudes toward the teaching module. They believed that it could increase their learning motivation and help them understand the properties and applications of the SMA.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Article
Developing and Evaluating a Virtual Reality-Based Navigation System for Pre-Sale Housing Sales
Appl. Sci. 2018, 8(6), 952; https://doi.org/10.3390/app8060952 - 08 Jun 2018
Cited by 4
Abstract
Virtual reality (VR) technologies have advanced rapidly in the past few years, and many industries have adopted these cutting-edge technologies for diverse applications to improve their competitiveness. VR has also received considerable recognition in the architecture, engineering, and construction industries, because it can potentially reduce project costs, delivery time, and quality risks by allowing users to experience unbuilt spaces before breaking ground, resolve construction conflicts virtually, and review complex details in immersive environments. In the real estate market, VR can also play an important role in buyers’ housing purchasing decisions, especially in Asian housing markets, where the pre-sale system is extremely common. Applying VR to pre-sale housing is promising: pre-sale refers to a strategy whereby developers sell housing through agreements on residential units that have not yet been constructed, and VR at this stage could be a useful tool for visual communication in a true-to-scale environment. However, does VR really benefit sales in the housing market? Can clients accept using VR, instead of traditional materials (i.e., paper-based images and physical models), to navigate and experience housing projects? The objective of this study is to develop a VR-based navigation system for a pre-sale housing project in Taiwan. We invited 30 potential clients to test the system and explored the implications of using it for project navigation. The results reveal that VR enhances understanding of a project (perceived usefulness) and increases clients’ intention to purchase, while the operation of VR (perceived ease of use) remains the major challenge affecting clients’ satisfaction and the developer’s acceptance of applying it to future housing sales.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Review

Open Access Review
A Review on Mixed Reality: Current Trends, Challenges and Prospects
Appl. Sci. 2020, 10(2), 636; https://doi.org/10.3390/app10020636 - 16 Jan 2020
Cited by 1
Abstract
New technologies have enabled the design of smart applications that are used as decision-making tools in the problems of daily life. The key issue in designing such applications is the increasing level of user interaction. Mixed reality (MR) is an emerging technology that offers a higher level of user interaction with the real world than other similar technologies. Developing an MR application is complicated and depends on various components that have been addressed in previous literature. Beyond extracting such components, a comprehensive study is needed that presents a generic framework comprising all the components required to develop MR applications. This review synthesizes intensive research to derive such a comprehensive framework. The suggested framework comprises five layers: the first layer considers system components; the second and third layers focus on architectural issues for component integration; the fourth layer is the application layer that executes the architecture; and the fifth layer is the user interface layer that enables user interaction. The merits of this study are as follows: the review can act as a proper resource for MR basic concepts, and it introduces MR development steps and analytical models, a simulation toolkit, system types, and architecture types, in addition to practical issues for stakeholders, such as considering different MR domains.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Open Access Review
Potential of Augmented Reality and Virtual Reality Technologies to Promote Wellbeing in Older Adults
Appl. Sci. 2019, 9(17), 3556; https://doi.org/10.3390/app9173556 - 30 Aug 2019
Cited by 1
Abstract
Older adults face significant losses and limitations in terms of mobility, cognitive ability, and socialization. Augmented reality (AR) and virtual reality (VR) technologies have the potential to help them overcome such losses and limitations, and eventually to improve their quality of life. However, this group is often excluded from AR and VR deployment, and few studies address the challenges they face when using these technologies. Therefore, for a critical review of AR and VR for older adults, we developed a framework to evaluate related factors, including physical, social, and psychological wellbeing. Through the critical review, we identified that most AR and VR studies focus on the physical wellbeing of older adults but also make substantial efforts to increase their psychological wellbeing; fun factors that would motivate them are likewise extensively considered. Further, social isolation continues to be a significant issue for older adults, yet appropriate content to increase their social wellbeing remains insufficient, although many commercial products have been developed. The contribution of the present study is to provide a contextual framework and an evaluation framework for the critical review of AR and VR technologies to promote wellbeing in older adults. This study also suggests AR and VR research directions for this group by identifying the research gaps through the critical review process. Lastly, it investigates design directions of AR and VR for older adults by introducing the challenges and design issues that emerged through the critical review.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
Open Access Review
Survey of Finite Element Method-Based Real-Time Simulations
Appl. Sci. 2019, 9(14), 2775; https://doi.org/10.3390/app9142775 - 10 Jul 2019
Cited by 1
Abstract
The finite element method (FEM) has deservedly gained a reputation as the most powerful, highly efficient, and versatile numerical method in the field of structural analysis. Though typical use of FE programs implies so-called “off-line” computations, the rapid pace of hardware development over the past couple of decades has been the major impetus for numerous researchers to consider the possibility of real-time simulation based on FE models. Limitations of available hardware components in various phases of development demanded remarkable innovativeness in the quest for suitable solutions. Different approaches have been proposed depending on the demands of the specific field of application. Though this is still a relatively young field in global terms, an immense amount of work has already been done, calling for a representative survey. This paper aims to provide such a survey, which of course cannot be exhaustive.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
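A common pattern in the real-time FE simulations surveyed above is to assemble (and factor or invert) the stiffness matrix once offline, so that each per-frame solve reduces to a cheap matrix-vector product. A minimal 1-D bar sketch of this idea, not taken from the survey:

```python
import numpy as np

def assemble_stiffness(n_elems, EA=1.0, L=1.0):
    """Stiffness matrix of a bar with n_elems linear elements, left end fixed."""
    h = L / n_elems
    K = np.zeros((n_elems + 1, n_elems + 1))
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element matrix
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke  # scatter into the global matrix
    return K[1:, 1:]  # drop the fixed (Dirichlet) node

K = assemble_stiffness(10)
Kinv = np.linalg.inv(K)         # invert/factor once, offline
f = np.zeros(10); f[-1] = 1.0   # unit tip load
u = Kinv @ f                    # per-frame "real-time" solve is just a matmul
print(u[-1])                    # tip displacement; analytic value F*L/(EA) = 1
```

For a fixed load at a varying node, only `f` changes per frame, so the expensive assembly and inversion never re-run; large deformations or topology changes (cutting) are exactly where the surveyed methods become more elaborate.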

Open Access Review
Usability Measures in Mobile-Based Augmented Reality Learning Applications: A Systematic Review
Appl. Sci. 2019, 9(13), 2718; https://doi.org/10.3390/app9132718 - 05 Jul 2019
Cited by 1
Abstract
Usability in mobile augmented reality (MAR) learning applications has been implemented using a myriad of standards, methodologies, and techniques. The use and combination of techniques within research approaches are important in determining the quality of usability data collection. The purpose of this study is to identify, examine, and analyze existing usability metrics, methods, techniques, and areas in MAR learning. This study adapts systematic literature review techniques, using research questions and Boolean search strings to identify prospective studies from six established databases related to the research area. Seventy-two articles, consisting of 45 journal articles, 25 conference proceedings, and two book chapters, were selected through a systematic process; all underwent a rigorous selection protocol to ensure content quality according to the formulated research questions. After synthesis and analysis, this article discusses the significant factors in usability-based MAR learning applications. It identifies five gaps concerning the domain of study, modes of contribution, issues within usability metrics, technique approaches, and hybrid technique combinations. The paper concludes with five corresponding recommendations: exploiting the potential of usability-based MAR learning research domains, pursuing unexplored research types, validating emerging usability metrics, exploiting the potential of performance metrics, and discovering untapped correlational areas.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
