Special Issue "Applications of Virtual, Augmented, and Mixed Reality"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 15 July 2021.

Special Issue Editor

Prof. Jorge Martin-Gutierrez
Guest Editor
Técnicas y Proyectos en Ingeniería y Arquitectura, Universidad de La Laguna, Santa Cruz de Tenerife, Spain
Interests: augmented reality; virtual reality; mixed reality; human–computer interaction; wearable interaction; user experience; usability

Special Issue Information

Dear Colleagues,

The term XR (extended reality), which encompasses the technologies of virtual reality, augmented reality, and mixed reality, is becoming more widely known. In recent years, XR has made remarkable progress, and expectations for its use are very high. There is no doubt about the potential of this technology.

As basic research has come to fruition, expectations for XR have increased, as have opportunities for it to be applied in different fields. These technologies provide great opportunities for education, medicine, architecture, Industry 4.0, e-commerce, gaming, healthcare, the military, emergency response, entertainment, engineering, advertising, retail, etc., and we may be facing a technological change as significant as the mass adoption of the PC, the internet, or the smartphone was in its time.

The applications we can develop for smartphones, tablets, and new wearable XR devices (glasses and headsets), which free workers and users from having to hold a device, are more numerous than we can imagine; they can help save time, reduce production costs, and improve quality of life.

In the manufacturing field, the use of augmented reality has been a topic of conversation for years, but actual deployment has been slow. This is changing, however, as manufacturers explore the technology in their plants and move beyond pilots and trials to the wider, day-to-day use of AR. Although AR is still at an early stage in manufacturing, there is a great deal of innovation and industry movement around it. XR also provides opportunities in education and training that are not possible with traditional instruction methods and other technologies used in education. VR, AR, and MR allow learners to experience, in a safe way, environments and virtual scenarios that would normally be dangerous to learn in. Moreover, some infrastructures are difficult for academic institutions and companies to provide when teaching or training their learners and workers. Unlike some traditional instruction methods, VR, AR, and MR applications offer consistent education and training that do not vary from instructor to instructor. These virtual technologies also support the development of psychomotor skills through physical 3D interaction with virtual elements. This is especially important when resources for training purposes are limited.

This Special Issue calls for interesting studies, applications, and experiences that open up new uses of XR. In addition to research that steadily improves on existing issues, we welcome research papers that present new possibilities of VR, AR, and MR. Topics of interest include, but are not limited to, the following:

  • VR/AR/MR applications: manufacturing, healthcare, virtual travel, e-sports, games, cultural heritage, military, e-commerce, psychology, medicine, emergency response, entertainment, engineering, advertising, etc.
  • Brain science for VR/AR/MR
  • VR/AR/MR collaboration
  • Context awareness for VR/AR
  • Education with VR/AR/MR
  • Use of 360° video for VR
  • Display technologies for VR/AR/MR
  • Human–computer interactions in VR/AR/MR
  • Human factors in VR/AR/MR
  • Perception/presence in VR/AR/MR
  • Physiological sensing for VR/AR/MR
  • Cybersickness
  • User experience/usability in VR/AR/MR
  • Interfaces for VR/AR
  • Virtual humans/avatars in VR/AR/MR
  • Wellbeing with VR/AR/MR
  • Human behavior sensing
  • Gesture interfaces
  • Interactive simulation
  • New interaction design for VR/AR/MR
  • Integrated AR/VR devices and technologies
  • Issues in real-world and virtual-world integration
  • Social aspects of VR/AR/MR interaction

Prof. Jorge Martin-Gutierrez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual, augmented, and mixed reality
  • Interactive simulation
  • HCI (human–computer interaction)
  • Human-centered design

Published Papers (17 papers)


Research

Open Access Article
An Immersive Serious Game for the Behavioral Assessment of Psychological Needs
Appl. Sci. 2021, 11(4), 1971; https://doi.org/10.3390/app11041971 - 23 Feb 2021
Abstract
Motivation is an essential component of mental health and well-being. In this area, researchers have identified four psychological needs that drive human behavior: attachment, self-esteem, orientation and control, and maximization of pleasure and minimization of distress. Various self-report scales and interview tools have been developed to assess these dimensions. Despite their validity, these instruments show limitations in terms of abstraction, decontextualization, and biases, such as social desirability bias, that can affect the veracity of responses. Conversely, virtual serious games (VSGs), which are games with specific purposes, can potentially provide more ecologically valid and objective assessments than traditional approaches. Starting from these premises, the aim of this study was to investigate the feasibility of using a VSG to assess the four psychological needs. Sixty subjects participated in five VSG sessions. Results showed that the VSG was able to recognize attachment, self-esteem, and orientation and control needs with high accuracy, and, to a lesser extent, the maximization of pleasure and minimization of distress need. In conclusion, this study showed the feasibility of using a VSG to enhance the behavior-based assessment of psychological needs, overcoming the biases present in traditional assessments.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Spatial Skills and Perceptions of Space: Representing 2D Drawings as 3D Drawings inside Immersive Virtual Reality
Appl. Sci. 2021, 11(4), 1475; https://doi.org/10.3390/app11041475 - 06 Feb 2021
Abstract
Rapid freehand drawings are of great importance in the early years of university studies in architecture, because both the physical characteristics of spaces and their sensory characteristics can be communicated through them. Drawing architectural spaces requires the ability to visualize and manipulate them mentally, which leads us to the concept of spatial skills; it also requires developed spatial perception to express them in drawings. The purpose of this research is to analyze the improvement of spatial skills through the full-scale sketching of architectural spaces in immersive virtual environments, and to analyze spatial perception with reference to the capture of spatial sensations in such environments. Spatial skills training based on the freehand drawing of architectural spaces was delivered using head-mounted displays (HMDs), and the spatial sensations experienced were also registered using HMDs, but only in previously modeled realistic spaces. It was found that the training significantly improved orientation, rotation, and visualization, and that the sensory journey through, and experimentation with, architectural spaces realistically modeled in immersive virtual reality environments allows for the same sensations that the designer initially sought to convey.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Compassionate Embodied Virtual Experience Increases the Adherence to Meditation Practice
Appl. Sci. 2021, 11(3), 1276; https://doi.org/10.3390/app11031276 - 30 Jan 2021
Abstract
Virtual Reality (VR) could be useful for overcoming the imagery and somatosensory difficulties of compassion-based meditations, given that it helps generate empathy by facilitating the possibility of putting oneself into the mind of others. Thus, the aim of this study was to evaluate the effectiveness of an embodied-VR system in generating a compassionate response and increasing the quality of and adherence to meditation practice. Health professionals or healthcare students (n = 41) were randomly assigned to a regular audio-guided meditation or to a meditation supported by an embodied-VR system, “The machine to be another”. In both conditions, there was an initial in-person session and two weeks of meditation practice at home. An implicit measure was used to assess prosocial behavior, and self-report questionnaires were administered to assess compassion-related constructs, quality of meditation, and frequency of meditation. Results revealed that participants in the embodied-VR condition meditated at home for twice the amount of time as participants who only listened to the usual guided meditation. However, there were no significant differences in the overall quality of at-home meditation. In conclusion, this study confirms that embodied-VR systems are useful for increasing adherence to meditation practice.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Feature Paper Article
An Immersive Virtual Reality Game for Predicting Risk Taking through the Use of Implicit Measures
Appl. Sci. 2021, 11(2), 825; https://doi.org/10.3390/app11020825 - 17 Jan 2021
Abstract
Risk taking (RT) measurement constitutes a challenge for researchers and practitioners and has been addressed from different perspectives. Personality traits and temperamental aspects such as sensation seeking and impulsivity influence the individual’s approach to RT, prompting risk-seeking or risk-aversion behaviors. Virtual reality has emerged as a suitable tool for RT measurement, since it enables the exposure of a person to realistic risks, allowing embodied interactions, the application of stealth assessment techniques, and physiological real-time measurement. In this article, we present the assessment on decision making in risk environments (AEMIN) tool, an enhanced version of the spheres and shield maze task, a previous tool developed by the authors. The main aim of this article is to study whether it is possible to discriminate participants with high versus low scores on measures of personality, sensation seeking, and impulsivity through their behaviors and physiological responses while playing AEMIN. Applying machine learning methods to the dataset, we explored: (a) whether these data make it possible to discriminate between the two populations in each variable; and (b) which parameters better discriminate between the two populations in each variable. The results support the use of AEMIN as an ecological assessment tool to measure RT, since it brings to light behaviors that allow subjects to be classified into high/low risk-related psychological constructs. Regarding physiological measures, galvanic skin response seems to be less salient in prediction models.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
A Multi-Agent System for Data Fusion Techniques Applied to the Internet of Things Enabling Physical Rehabilitation Monitoring
Appl. Sci. 2021, 11(1), 331; https://doi.org/10.3390/app11010331 - 31 Dec 2020
Cited by 1
Abstract
There are more than 800 million people in the world with chronic diseases. Many of these people do not have easy access to healthcare facilities for recovery. Telerehabilitation seeks to provide a solution to this problem. In the literature, the topic has been treated as a form of medical aid that combines technologies such as the Internet of Things and virtual reality. The main objective of this work is to design a distributed platform to monitor the patient’s movements and status during rehabilitation exercises. This information can later be processed and analyzed remotely by the doctor assigned to the patient. In this way, the doctor can follow the patient’s progress, enhancing the improvement and recovery process. To achieve this, a case study has been carried out using a PANGEA-based multi-agent system that coordinates the different parts of the architecture using ubiquitous computing techniques. In addition, the system provides real-time feedback, which makes patients aware of their errors so that they can improve their performance in later executions. An evaluation was carried out with real patients, achieving promising results.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
An Aerial Mixed-Reality Environment for First-Person-View Drone Flying
Appl. Sci. 2020, 10(16), 5436; https://doi.org/10.3390/app10165436 - 06 Aug 2020
Cited by 1
Abstract
A drone must be able to fly without colliding, to preserve both its surroundings and its own safety. In addition, it must incorporate numerous features of interest for drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles, and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
The Effect of the Degree of Anxiety of Learners during the Use of VR on the Flow and Learning Effect
Appl. Sci. 2020, 10(14), 4932; https://doi.org/10.3390/app10144932 - 17 Jul 2020
Cited by 2
Abstract
Virtual reality (VR) learning content that provides negative experiences makes learners anxious. Thus, experimental research was conducted to determine how anxiety felt by learners using VR impacts learning. To measure the learning effects, flow, a leading element of learning effects, was measured. Flow, a measure of how immersed an individual is in the task he or she is currently performing, has a positive effect on learning. The evaluation used the empirical recognition scale by Kwon (2020) and the six-item short-form State-Trait Anxiety Inventory (STAI) from Marteau and Becker (1992), which were used in the preceding study. The difference in flow between high- and low-anxiety groups was explored by measuring the degree of anxiety the study participants felt using a VR-based Fire Safety Education Game that allows learners to feel the heat and wind of the fire site on their skin. As a result of the experiment, no difference in flow was found between the high- and low-anxiety groups that played the same VR game with cutaneous sensation. However, the high-anxiety group who played the VR game with cutaneous sensation showed higher flow than the group that played the basic fire safety education VR game. Based on these results, the following conclusions were drawn: the closer to reality the VR learning and training system for negative situations is reproduced, the more realistically the learner feels the anxiety. In other words, the closer to reality the virtual environment is reproduced, the more realistically the learner experiences feelings in the virtual space. In turn, through this realistic experience, the learner becomes more deeply immersed in the flow. In addition, considering that flow is a prerequisite for the learning effect, the anxiety that learners feel in the virtual environment will also have a positive effect on learning. As a result, it can be assumed that the more realistically VR is reproduced, the more effective experiential learning using VR can be.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Examining User Perception of the Size of Multiple Objects in Virtual Reality
Appl. Sci. 2020, 10(11), 4049; https://doi.org/10.3390/app10114049 - 11 Jun 2020
Cited by 1
Abstract
This article presents a user study into user perception of an object’s size when presented in virtual reality. Critical for users’ understanding of virtual worlds is their perception of the size of virtual objects. This article is concerned with virtual objects that are within arm’s reach of the user; examples of such virtual objects include virtual controls such as buttons, dials, and levers that users manipulate to control the virtual reality application. This article explores a user’s ability to judge the size of an object relative to a second object of a different colour. The results determined that the points of subjective equality for height and width judgement tasks ranging from 10 to 90 mm were all within an acceptable value; that is to say, participants were able to perceive heights and widths very close to the target values. The just-noticeable differences were all less than 1.5 mm for the height judgement task and less than 2.3 mm for the width judgement task.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Comparing Augmented Reality-Assisted Assembly Functions—A Case Study on Dougong Structure
Appl. Sci. 2020, 10(10), 3383; https://doi.org/10.3390/app10103383 - 14 May 2020
Cited by 3
Abstract
The Dougong structure is an ancient architectural innovation of the East. Its construction method is complex and challenging to understand from drawings. Scale models were developed to preserve this culturally unique architectural technique through learning by assembly. In this work, augmented reality (AR)-based systems that support the manual assembly of Dougong models with instant interactions were developed. The first objective was to design new AR-assisted functions that overcome the existing limitations of paper-based assembly instructions. The second was to clarify, through experiments, whether and how AR can improve the operational efficiency or quality of the manual assembly process. The experimental data were analyzed with both qualitative and quantitative measures to evaluate the assembly efficiency, accuracy, and workload of these functions. The results revealed essential requirements for improving the functional design of the systems. They also showed the potential of AR as an effective human-interfacing technology for assisting the manual assembly of complex objects.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Efficacy of Virtual Reality in Painting Art Exhibitions Appreciation
Appl. Sci. 2020, 10(9), 3012; https://doi.org/10.3390/app10093012 - 26 Apr 2020
Cited by 2
Abstract
Virtual reality (VR) technology has been employed in a wide range of fields, from entertainment to medicine and engineering. Advances in VR also provide new opportunities for art exhibitions. This study discusses the experience of art appreciation through desktop virtual reality (Desktop VR) or head-mounted display virtual reality (HMD VR) and compares it with appreciating a physical painting. Seventy-eight university students participated in the study. According to the findings of this study, painting evaluation and the emotions expressed during appreciation show no significant difference under these three conditions, indicating that participants perceived the paintings similarly regardless of whether they were viewed through VR. Owing to operational limitations, the participants considered HMD VR a tool that hinders the free appreciation of paintings. In addition, attention should be paid to the properly projected size of words and paintings for better reading and viewing. The above indicates that, through digital technology, we can shorten the gap between a virtual painting and a physical one; however, we must still improve the design of object size and interaction in the VR context so that a virtual exhibition can be as impressive as a physical one.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
SoundFields: A Virtual Reality Game Designed to Address Auditory Hypersensitivity in Individuals with Autism Spectrum Disorder
Appl. Sci. 2020, 10(9), 2996; https://doi.org/10.3390/app10092996 - 25 Apr 2020
Cited by 2
Abstract
Individuals with autism spectrum disorder (ASD) are characterised as having impairments in social-emotional interaction and communication, alongside displaying repetitive behaviours and interests. Additionally, they can frequently experience difficulties in processing sensory information, with particular prevalence in the auditory domain. Often triggered by everyday environmental sounds, auditory hypersensitivity can provoke self-regulatory fear responses such as crying and isolation from sounds. This paper presents SoundFields, an interactive virtual reality game designed to address this area by integrating exposure-based therapy techniques into game mechanics and delivering target auditory stimuli to the player rendered via binaural-based spatial audio. A pilot study was conducted with six participants diagnosed with ASD who displayed hypersensitivity to specific sounds, to evaluate the use of SoundFields as a tool to reduce levels of anxiety associated with identified problematic sounds. Over the course of the investigation, participants played the game weekly for four weeks, and all participants actively engaged with the virtual reality (VR) environment and enjoyed playing the game. Following this period, a comparison of pre- and post-study measurements showed a significant decrease in anxiety linked to the target auditory stimuli. The study results therefore suggest that SoundFields could be an effective tool for helping individuals with autism manage auditory hypersensitivity.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
ALCC-Glasses: Arriving Light Chroma Controllable Optical See-Through Head-Mounted Display System for Color Vision Deficiency Compensation
Appl. Sci. 2020, 10(7), 2381; https://doi.org/10.3390/app10072381 - 31 Mar 2020
Abstract
About 250 million people in the world suffer from color vision deficiency (CVD). Contact lenses and glasses with a color filter are available to partially improve the vision of people with CVD. Tinted glasses uniformly change the colors in a user’s field of view (FoV), which can improve the contrast of certain colors while making others hard to identify. On the other hand, an optical see-through head-mounted display (OST-HMD) provides a new alternative by applying a controllable overlay to a user’s FoV. Color calibration methods for people with CVD, such as the Daltonization process, need to make the calibrated color darker, a capability not yet featured on recent commercial OST-HMDs. We propose a new approach to realize light subtraction on OST-HMDs using a transmissive LCD panel, and present a prototype system, named ALCC-glasses, to validate and demonstrate this new arriving-light chroma controllable augmented reality technology for CVD compensation.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Real-Time Application for Generating Multiple Experiences from 360° Panoramic Video by Tracking Arbitrary Objects and Viewer’s Orientations
Appl. Sci. 2020, 10(7), 2248; https://doi.org/10.3390/app10072248 - 26 Mar 2020
Abstract
We propose a novel authoring and viewing system for generating multiple experiences with a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than normal videos. There can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view in virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system is aimed at generating multiple experiences based on interesting information in different regions of a 360° video and efficient transferring of these experiences to prospective users. The proposed system generates experiences by using two approaches: (1) Recording of the user’s experience when the user watches a panoramic video using a virtual reality head-mounted display, and (2) tracking of an arbitrary interesting object in a 360° video selected by the user. For tracking of an arbitrary interesting object, we have developed a pipeline around an existing simple object tracker to adapt it for 360° videos. This tracking algorithm was performed in real time on a CPU with high precision. Moreover, to the best of our knowledge, there is no such existing system that can generate a variety of different experiences from a single 360° video and enable the viewer to watch one 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we have provided an adaptive focus assistance technique for efficient transferring of the generated experiences to other users in virtual reality. In this study, technical evaluation of the system along with a detailed user study has been performed to assess the system’s application. Findings from evaluation of the system showed that a single 360° multimedia content has the capability of generating multiple experiences and transfers among users. 
Moreover, sharing 360° experiences enabled viewers to watch multiple points of interest with less effort.
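Any pipeline that adapts a planar object tracker to equirectangular 360° video needs a mapping between viewing directions and pixel coordinates. The sketch below illustrates that mapping only; the function names and frame resolution are our own illustration, not taken from the paper.

```python
def angles_to_equirect(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw, pitch in degrees) to pixel
    coordinates in an equirectangular 360-degree frame.
    yaw: -180..180 (0 = frame centre); pitch: -90..90 (0 = horizon)."""
    u = (yaw_deg + 180.0) / 360.0 * width
    v = (90.0 - pitch_deg) / 180.0 * height
    return int(u) % width, min(int(v), height - 1)

def equirect_to_angles(x, y, width, height):
    """Inverse mapping: pixel coordinates back to (yaw, pitch)."""
    yaw = x / width * 360.0 - 180.0
    pitch = 90.0 - y / height * 180.0
    return yaw, pitch
```

With a 3840×1920 frame, the frame centre maps to yaw 0°, pitch 0°; a tracker's bounding-box centre can be converted to angles the same way to drive focus assistance.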
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Open Access Article
Comparative Performance Characterization of Mobile AR Frameworks in the Context of AR-Based Grocery Shopping Applications
Appl. Sci. 2020, 10(4), 1547; https://doi.org/10.3390/app10041547 - 24 Feb 2020
Abstract
A number of augmented reality (AR) frameworks are now available to support the development of mobile AR applications. In this paper, we measure and compare the recognition performance of commercial AR frameworks and identify potential issues that can occur in real application environments. For the experiments, we assume a situation in which a consumer purchases food products in a grocery store and consider an application scenario in which AR content related to the products is displayed on a smartphone screen once the products are recognized. We use four performance metrics to compare the selected AR frameworks: Vuforia, ARCore, and MAXST. Experimental results show that Vuforia is relatively superior to the others. We also identify the limitations of the AR frameworks when they are used in a real grocery store environment.
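The abstract does not name the four metrics, but a comparison of this kind typically aggregates per-trial recognition logs into summary figures. A minimal sketch, assuming hypothetical trial data (the numbers below are illustrative, not the paper's results):

```python
from statistics import mean

def summarize_trials(trials):
    """Aggregate per-trial recognition logs into summary metrics.
    Each trial is a dict: {"recognized": bool, "time_ms": float}.
    Returns the recognition rate (0..1) and the mean recognition
    time over successful trials only."""
    successes = [t for t in trials if t["recognized"]]
    rate = len(successes) / len(trials) if trials else 0.0
    avg_time = mean(t["time_ms"] for t in successes) if successes else None
    return {"recognition_rate": rate, "mean_time_ms": avg_time}

# Hypothetical logs for two frameworks:
logs = {
    "Vuforia": [{"recognized": True, "time_ms": 120.0},
                {"recognized": True, "time_ms": 150.0},
                {"recognized": False, "time_ms": 0.0}],
    "ARCore":  [{"recognized": True, "time_ms": 200.0},
                {"recognized": False, "time_ms": 0.0},
                {"recognized": False, "time_ms": 0.0}],
}
summary = {name: summarize_trials(t) for name, t in logs.items()}
```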

Open Access Article
Towards Next Generation Technical Documentation in Augmented Reality Using a Context-Aware Information Manager
Appl. Sci. 2020, 10(3), 780; https://doi.org/10.3390/app10030780 - 22 Jan 2020
Abstract
Technical documentation is evolving from static content presented on paper or via digital publishing to real-time, on-demand content displayed via virtual and augmented reality (AR) devices. However, how best to provide a personalized and context-relevant presentation of technical information is still an open field of research. In particular, the systems described in the literature can manage only a limited number of modalities to convey technical information and do not consider the 'people' factor. In this work, we therefore present a Context-Aware Technical Information Management (CATIM) system that dynamically manages (1) what information is presented and (2) how it is presented in an augmented reality interface. The system was successfully implemented, and we carried out a first evaluation in the real industrial scenario of hydraulic valve maintenance. We also measured the time performance of the system; the results revealed that CATIM performs fast enough to support interactive AR.
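The what/how decision described above can be pictured as a rule that maps context attributes to a presentation. The rules, attribute names, and modalities below are illustrative stand-ins, not CATIM's actual rule base:

```python
def select_presentation(task, user_skill, device):
    """Pick what information to show and how to render it from
    context. Purely illustrative: a context-aware system would
    draw such rules from a user/task/environment model."""
    # What: novices get full step-by-step detail, experts a checklist.
    detail = "step-by-step" if user_skill == "novice" else "checklist"
    # How: spatially registered 3D overlays on a head-mounted display,
    # screen-fixed text/video panels on a handheld device.
    modality = "3d-overlay" if device == "hmd" else "text-panel"
    return {"task": task, "detail": detail, "modality": modality}
```

For example, a novice maintaining a valve through a headset would get step-by-step instructions as registered 3D overlays, while an expert on a tablet would get a checklist in a screen-fixed panel.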

Open Access Article
The Limited Effect of Graphic Elements in Video and Augmented Reality on Children’s Listening Comprehension
Appl. Sci. 2020, 10(2), 527; https://doi.org/10.3390/app10020527 - 10 Jan 2020
Cited by 1
Abstract
There is currently significant interest in the use of instructional strategies in learning environments, thanks to the emergence of new multimedia systems that combine text, audio, graphics, and video, such as augmented reality (AR). In this light, this study compares the effectiveness of AR and video for listening comprehension tasks. The sample consisted of thirty-two elementary school students with different reading comprehension levels. First, the experience, instructions, and objectives were introduced to all the students. Next, they were divided into two groups: one group watched an educational video story about the dog Laika and her space journey, available through the mobile app Blue Planet Tales, while the other viewed the same story in AR by means of the app Augment Sales. Once the activities were completed, participants answered a comprehension test. The results (p = 0.180) indicate no meaningful differences between lesson format and test performance, but there are differences among the participants of the AR group according to their reading comprehension level. With respect to the time taken to complete the comprehension test, there is no significant difference between the two groups, but there is a difference between participants with high and low comprehension levels. Finally, the System Usability Scale (SUS) questionnaire was used to measure the usability of the AR app on a smartphone. An average score of 77.5 out of 100 was obtained, which indicates that the app has a fairly good user-centered design.
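The SUS score mentioned above follows a standard, well-defined computation: ten items rated 1-5, with odd-numbered items contributing (response − 1) and even-numbered items (5 − response), the sum scaled by 2.5 to a 0-100 range. A minimal sketch of that scoring (not code from the study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    responses on a 1-5 Likert scale. Standard SUS scoring:
    odd-numbered items contribute (response - 1), even-numbered
    items contribute (5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5
```

A participant answering 5 to every odd item and 1 to every even item scores 100; all-neutral answers (3 throughout) score 50, which puts the study's 77.5 average comfortably above the commonly cited ~68 benchmark.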

Open Access Article
The Influence of Display Parameters and Display Devices over Spatial Ability Test Answers in Virtual Reality Environments
Appl. Sci. 2020, 10(2), 526; https://doi.org/10.3390/app10020526 - 10 Jan 2020
Cited by 2
Abstract
This manuscript analyzes the influence of display parameters and display devices on users' spatial skills in virtual reality environments. To this end, the authors developed a virtual reality application that tests the spatial skills of its users; 240 students took the tests on an LG desktop display and 61 students on the Gear VR. Statistical data are generated as users take the tests, and the following factors are logged by the application and evaluated in this manuscript: virtual camera type, virtual camera field of view, virtual camera rotation, contrast ratio parameters, the existence of shadows, and the device used. The probabilities of correct answers were analyzed with respect to these factors using the logistic regression (logit) analysis method, and the influences and interactions of all factors were examined. The perspective camera, a lighter contrast ratio, no or large camera rotations, and the use of the Gear VR greatly and positively influenced the probability of correct answers. Therefore, for the assessment of spatial ability in virtual reality, these parameters and this device represent the optimal user-centric human–computer interaction practice.
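Logit analysis of this kind models the probability of a correct answer as a sigmoid of a weighted sum of the logged factors. The following is a minimal stand-in fitted by plain gradient descent (the study would have used a statistics package; the single binary factor and data here are hypothetical):

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Fit P(correct) = sigmoid(w0 + w . x) by gradient descent on
    the log-loss. X is a list of factor vectors, y a list of 0/1
    outcomes (1 = correct answer). Returns [w0, w1, ...]."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)                      # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                     # gradient of log-loss
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, x):
    """Probability of a correct answer for factor vector x."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: x = [1] could encode "Gear VR used", x = [0] the desktop.
X = [[1], [1], [1], [0], [0], [0]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logit(X, y)
```

A positive fitted weight on a factor means that factor raises the probability of a correct answer, which is how effects such as "use of the Gear VR positively influenced correct answers" are read off the model.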
