Special Issue "Virtual and Augmented Reality Systems"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 October 2021.

Special Issue Editors

Prof. Dr. Chang-Hun Kim
Guest Editor
Department of Computer Science & Engineering, Korea University, Seoul 02841, Korea
Interests: VR/AR; CG; visual simulation
Prof. Soo Kyun Kim
Guest Editor
Department of Computer Engineering, Jeju National University, Jeju 63243, Korea
Interests: geometric modeling; VR/AR; MR

Special Issue Information

Dear Colleagues,

This Special Issue of the journal Applied Sciences, entitled “Virtual and Augmented Reality Systems”, aims to present recent advances in virtual and augmented reality. Virtual and augmented reality is a form of human–computer interaction in which a real or imaginary environment is simulated and users interact with and manipulate that world. Virtual and augmented reality systems have emerged as a disruptive technology to enhance the performance of existing computer graphics techniques and to tackle related intractable problems in human–computer interaction. Recently, growing developments in VR/AR technology and hardware such as the Oculus Rift, HTC Vive, and smartphone-based HMDs have increased public interest in virtual reality. Much related research has been published, covering graphical simulation, modeling, user interfaces, and AI technologies. This Special Issue is an opportunity for the scientific community to present recent high-quality research on any branch of virtual and augmented reality.

Topics of interest include but are not limited to:

  • Human–computer interactions in the virtual and augmented environment;
  • 3D user interaction in virtual and augmented space;
  • Shared VR and AR technologies;
  • Untact technologies in virtual and augmented space;
  • Graphical simulation for VR and AR systems;
  • AI-based technologies for VR and AR systems;
  • Immersive analytics and visualization;
  • Multiuser and distributed systems;
  • Graphical advancements in virtual and augmented space.

Prof. Chang-Hun Kim
Prof. Soo Kyun Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual reality
  • Virtual reality application
  • Virtual reality system
  • Augmented reality
  • Augmented reality application
  • Augmented reality system
  • Human–computer interaction
  • 3D user interfaces (3DUIs)
  • Multisensory experiences
  • Virtual environments
  • AI problems in VR and AR

Published Papers (20 papers)


Research

Article
MonoMR: Synthesizing Pseudo-2.5D Mixed Reality Content from Monocular Videos
Appl. Sci. 2021, 11(17), 7946; https://doi.org/10.3390/app11177946 - 27 Aug 2021
Abstract
MonoMR is a system that synthesizes pseudo-2.5D content from monocular videos for mixed reality (MR) head-mounted displays (HMDs). Unlike conventional systems that require multiple cameras, the MonoMR system can be used by casual end-users to generate MR content with a single camera only. To synthesize the content, the system detects people in the video sequence via a deep neural network, and the detected person's pseudo-3D position is then estimated through a homography matrix by our proposed algorithm. Finally, the person's texture is extracted using a background subtraction algorithm and placed at the estimated 3D position. The synthesized content can be played on an MR HMD, and users can freely change their viewpoint and the content's position. To evaluate the efficiency and interactive potential of MonoMR, we conducted performance evaluations and a user study with 12 participants. Moreover, we demonstrated the feasibility and usability of the MonoMR system for generating pseudo-2.5D content through three example application scenarios.
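
The placement step described in the abstract can be pictured with a short sketch: a detected person's bottom-centre image point is mapped through a homography to ground-plane coordinates, and the texture is cut out by background subtraction. The following is a minimal Python/OpenCV illustration, not the authors' implementation; the homography values, the detector supplying `bbox`, and all parameters are assumptions.

```python
import cv2
import numpy as np

# Hypothetical image-to-ground-plane homography (in a MonoMR-like
# pipeline it would be calibrated from reference points in the scene).
H = np.array([[0.010, 0.000, -3.20],
              [0.000, 0.012, -2.40],
              [0.000, 0.0005, 1.00]])

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def place_person(frame, bbox):
    """bbox = (x, y, w, h) from any person detector (e.g., a DNN)."""
    x, y, w, h = bbox
    # Bottom-centre of the detection box, assumed to touch the ground.
    foot = np.array([[[x + w / 2.0, y + h]]], dtype=np.float64)
    gx, gz = cv2.perspectiveTransform(foot, H)[0, 0]   # ground-plane (x, z)
    # Foreground mask -> person texture within the detection box.
    mask = subtractor.apply(frame)
    person = cv2.bitwise_and(frame, frame, mask=mask)[y:y + h, x:x + w]
    return (gx, gz), person
```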

Article
An ARCore-Based Augmented Reality Campus Navigation System
Appl. Sci. 2021, 11(16), 7515; https://doi.org/10.3390/app11167515 - 16 Aug 2021
Abstract
Currently, the route planning functions of 2D/3D campus navigation systems on the market cannot process indoor and outdoor localization information simultaneously, and the UI experiences are suboptimal because they are limited by the service platforms. An ARCore-based augmented reality campus navigation system is designed in this paper to address these problems. Firstly, the proposed campus navigation system uses ARCore to augment reality by presenting 3D information in real scenes. Secondly, a visual-inertial odometry algorithm is proposed for real-time localization and map generation on mobile devices. Finally, rich Unity3D scripts are designed to enhance users' autonomy and enjoyment during the navigation experience. In this paper, indoor and outdoor navigation experiments are carried out at the Lingang campus of Shanghai University of Electric Power. Compared with the AR outdoor navigation system of Gaode, the proposed AR system achieves more precise outdoor localization by deploying the visual-inertial odometer on the mobile phone and realizes the augmented-reality overlay of 3D information on the real scene, thus enriching the user's interactive experience. Furthermore, four groups of students were selected for system testing and evaluation. Compared with traditional systems, such as the Gaode map or Internet media, experimental results show that our system improves the effectiveness and usability of learning on campus.
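
The paper's visual-inertial odometer itself is not reproduced here; purely to illustrate the general idea of fusing drift-prone inertial dead reckoning with absolute visual position fixes, a toy complementary filter might look like the sketch below. All names and weights are invented, and real visual-inertial odometry is substantially more involved.

```python
import numpy as np

class ToyVioFusion:
    """Dead-reckon position from world-frame, gravity-compensated
    acceleration, then pull the estimate toward occasional visual fixes."""

    def __init__(self, visual_weight=0.9):
        self.visual_weight = visual_weight
        self.pos = np.zeros(3)   # metres
        self.vel = np.zeros(3)   # metres/second

    def imu_step(self, accel_world, dt):
        self.vel += accel_world * dt      # integrate acceleration
        self.pos += self.vel * dt         # integrate velocity (drifts)

    def visual_fix(self, pos_visual):
        # Trust the drift-free visual position heavily when available.
        w = self.visual_weight
        self.pos = w * pos_visual + (1 - w) * self.pos
```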

Article
A Novel Anatomy Education Method Using a Spatial Reality Display Capable of Stereoscopic Imaging with the Naked Eye
Appl. Sci. 2021, 11(16), 7323; https://doi.org/10.3390/app11167323 - 09 Aug 2021
Abstract
Several efforts have been made to use virtual reality (VR) and augmented reality (AR) for medical and dental education and surgical support. Current methods still require users to wear devices such as a head-mounted display (HMD) or smart glasses, which pose challenges for hygiene management and long-term use. Additionally, the user's inter-pupillary distance must be measured and reflected in the device settings each time in order to display 3D images accurately, a setup that is impractical for daily use. We developed and implemented a novel anatomy education method using a spatial reality display capable of stereoscopic viewing with the naked eye, without an HMD or smart glasses. In this study, we developed two new applications: (1) a head and neck anatomy education application, which can display 3D-CG models of the skeleton and blood vessels of the head and neck region using 3D human body data freely available from public research institutes, and (2) a DICOM image autostereoscopic viewer, which can automatically convert 2D CT/MRI/CBCT image data into 3D-CG models. In total, 104 students at the School of Dentistry experienced and evaluated the system, and the results suggest its usefulness. A stereoscopic display without a head-mounted display is highly useful and promising for anatomy education.
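
As a sketch of the 2D-to-3D conversion step such a DICOM viewer performs, a CT volume (already stacked into a NumPy array, e.g., from a DICOM series read with pydicom) can be turned into a surface mesh with marching cubes. This is not the authors' pipeline; the bone threshold, spacing, and the synthetic test volume are assumptions.

```python
import numpy as np
from skimage import measure

def ct_volume_to_mesh(volume, iso_hu=300.0, spacing=(1.0, 1.0, 1.0)):
    """volume: CT voxels in Hounsfield units; ~300 HU is a common
    threshold for extracting bone as an isosurface."""
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=iso_hu, spacing=spacing)
    return verts, faces, normals

# Synthetic stand-in volume: a bright sphere inside a dark cube.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.where(x**2 + y**2 + z**2 < 20**2, 1000.0, 0.0)
verts, faces, normals = ct_volume_to_mesh(volume)
```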

Article
Effects of the Weight and Balance of Head-Mounted Displays on Physical Load
Appl. Sci. 2021, 11(15), 6802; https://doi.org/10.3390/app11156802 - 24 Jul 2021
Abstract
To maximize user experience in VR environments, optimizing the comfort of head-mounted displays (HMDs) is essential. To date, few studies have investigated the fatigue induced by wearing commercially available HMDs. Here, we focus on the effects of HMD weight and balance on the physical load experienced by the user. We conducted an experiment in which participants completed a shooting game while wearing differently weighted and balanced HMDs. Afterwards, the participants completed questionnaires to assess levels of discomfort and fatigue. The results show that the weight of the HMD affects user fatigue, with the degree of fatigue varying depending on the position of the center of mass. Additionally, they suggest that the torque at the neck joint corresponds to the physical load imparted by the HMD. Our results therefore provide valuable insights, demonstrating that, to improve HMD comfort, both the balance and the reduction of weight must be considered during HMD design.
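
The link between mass distribution and neck-joint load can be made concrete with a static torque estimate; the numbers below are illustrative assumptions, not measurements from the study.

```latex
% Static moment at the neck joint contributed by the HMD:
%   \tau = m \, g \, d
% m: HMD mass, g: gravitational acceleration,
% d: horizontal offset of the HMD's centre of mass from the neck pivot.
% Example with assumed values m = 0.6\,\mathrm{kg}, d = 0.07\,\mathrm{m}:
\tau = m g d = 0.6 \times 9.81 \times 0.07 \approx 0.41\,\mathrm{N\,m}
% Moving the centre of mass back over the pivot (d \to 0) removes this
% moment, which is why balance matters in addition to total weight.
```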

Article
Study on Hand–Eye Coordination Area with Bare-Hand Click Interaction in Virtual Reality
Appl. Sci. 2021, 11(13), 6146; https://doi.org/10.3390/app11136146 - 01 Jul 2021
Abstract
In virtual reality, users' input and output interactions are carried out in a three-dimensional space, and bare-hand click interaction is one of the most common interaction methods. Apart from the limitations of the device, bare-hand click interaction in virtual reality involves head, eye, and hand movements; consequently, clicking performance varies among locations in the binocular field of view. In this study, we explored the optimal hand–eye coordination area within the binocular field of view in a 3D virtual environment (VE) and implemented a bare-hand click experiment in a VE, combining click performance data, namely click accuracy and click duration, following a gradient descent method. The experimental results show that click performance is significantly influenced by the area in which the target is located, and the performance data and subjective preferences for clicks show a high degree of consistency. Combining reaction time and click accuracy, the optimal operating area for bare-hand clicking in virtual reality extends from 20° left to 30° right horizontally and from 15° up to 20° down vertically. These results have implications for guidelines and applications for bare-hand click interaction interface design in the proximal space of virtual reality.
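
The reported optimal area translates directly into a screening predicate for interface layouts. The sign convention below (positive azimuth = right, positive elevation = up) is an assumption; the bounds are the angles from the abstract.

```python
def in_optimal_click_zone(azimuth_deg, elevation_deg):
    """True if a target direction lies in the optimal bare-hand click
    area reported above: 20 deg left to 30 deg right, 15 deg up to
    20 deg down."""
    return -20.0 <= azimuth_deg <= 30.0 and -20.0 <= elevation_deg <= 15.0

print(in_optimal_click_zone(10, -5))   # True: inside the zone
print(in_optimal_click_zone(-25, 0))   # False: too far left
```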

Article
Personalized Augmented Reality Based Tourism System: Big Data and User Demographic Contexts
Appl. Sci. 2021, 11(13), 6047; https://doi.org/10.3390/app11136047 - 29 Jun 2021
Abstract
A lack of required data resources is one of the challenges of adopting augmented reality (AR) to provide the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to users' demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction that interests the user is predicted from demographic contexts, including age, gender, and education level, using a machine learning method. Second, the relevant data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show that the decision tree outperforms the SVM method by about 6% in predicting the type of tourist attraction. In addition, the user study of the system shows overall participant satisfaction of about 55% in terms of ease of use and about 56% in terms of the system's usefulness.
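
The first step, predicting an attraction type from demographic context, maps onto a standard decision tree classifier. The sketch below is illustrative only: the feature encoding, labels, and data are synthetic stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoding: age (years), gender (0/1), education level (0-3).
X = np.array([[24, 0, 2], [31, 1, 3], [67, 0, 1], [45, 1, 2],
              [19, 1, 0], [52, 0, 3], [38, 0, 2], [73, 1, 1]])
y = np.array(["museum", "park", "historic", "museum",
              "park", "historic", "museum", "historic"])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[29, 1, 3]]))  # predicted attraction type
```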

Article
Augmented Reality and Machine Learning Incorporation Using YOLOv3 and ARKit
Appl. Sci. 2021, 11(13), 6006; https://doi.org/10.3390/app11136006 - 28 Jun 2021
Abstract
Augmented reality is one of the fastest-growing fields, receiving increased funding over the last few years as people realise the potential benefits of rendering virtual information in the real world. Most of today's marker-based augmented reality applications use local feature detection and tracking techniques. The disadvantage of these techniques is that the markers must be adapted to the specific classification algorithms, or they suffer from low detection accuracy. Machine learning is an ideal solution for overcoming the current drawbacks of image processing in augmented reality applications. However, traditional data annotation requires extensive time and labour, as it is usually done manually. This study incorporates machine learning to detect and track augmented reality marker targets in an application using deep neural networks. We first implement an auto-generated dataset tool, which is used to prepare the machine learning dataset. The final iOS prototype application incorporates object detection, object tracking, and augmented reality. The machine learning model is trained to recognise the differences between targets using YOLOv3, one of the best-known object detection methods. The final product makes use of ARKit, a valuable toolkit for developing augmented reality applications.
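
The detection stage of such a pipeline is shown below with YOLOv3 through OpenCV's DNN module rather than the paper's iOS/ARKit stack; it is a minimal sketch, and the config/weight file paths are placeholders.

```python
import cv2
import numpy as np

# Assumes standard Darknet YOLOv3 files; paths are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    boxes = []
    for output in outputs:
        for det in output:            # det = [cx, cy, bw, bh, obj, scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh), class_id, conf))
    return boxes
```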

Article
Recognition of Customers’ Impulsivity from Behavioral Patterns in Virtual Reality
Appl. Sci. 2021, 11(10), 4399; https://doi.org/10.3390/app11104399 - 12 May 2021
Abstract
Virtual reality (VR) in retailing (V-commerce) has been proven to enhance the consumer experience. This technology is thus beneficial for studying behavioral patterns, as it offers the opportunity to infer customers' personality traits from their behavior. This study aims to recognize impulsivity from behavioral patterns. To this end, 60 subjects performed three tasks—one exploration task and two planned tasks—in a virtual market. Four noninvasive signals (eye-tracking, navigation, posture, and interactions), all available in commercial VR devices, were recorded, and a set of features was extracted and categorized into zonal, general, kinematic, temporal, and spatial types. They were input into a support vector machine classifier to recognize the subjects' impulsivity based on the I-8 questionnaire, achieving an accuracy of 87%. The results suggest that, while the exploration task can reveal general impulsivity, subscales such as perseverance and sensation-seeking are more related to planned tasks. The results also show that posture and interaction are the most informative signals. Our findings validate the recognition of customer impulsivity using sensors incorporated into commercial VR devices. Such information could provide a personalized shopping experience in future virtual shops.
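
The classification step, behavioural features in and an impulsivity label out, follows a standard scaled-SVM pattern; in the sketch below the data are random placeholders matching only the study's subject count, not its features or labels.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))      # 60 subjects, 12 behavioural features
y = rng.integers(0, 2, size=60)    # impulsivity label (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```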

Article
Virtual Marker Technique to Enhance User Interactions in a Marker-Based AR System
Appl. Sci. 2021, 11(10), 4379; https://doi.org/10.3390/app11104379 - 12 May 2021
Abstract
In marker-based augmented reality (AR) systems, markers are usually relatively independent and predefined by the system creator in advance. Users can only use these predefined markers to construct certain specified content. Such systems usually lack flexibility and do not allow users to create content freely. In this paper, we propose a virtual marker technique to build a marker-based AR system framework in which multiple AR markers, both virtual and physical, work together. Information from multiple markers can be merged, and virtual markers are used to provide user-defined information. We conducted a pilot study of the multi-marker cooperation framework based on virtual markers. The pilot study shows that the virtual marker technique does not significantly increase the user's time or operational burden, while actively improving the user's cognitive experience.

Article
High-Speed Dynamic Projection Mapping onto Human Arm with Realistic Skin Deformation
Appl. Sci. 2021, 11(9), 3753; https://doi.org/10.3390/app11093753 - 21 Apr 2021
Abstract
Dynamic projection mapping onto a moving object according to its position and shape is fundamental for augmented reality that follows changes on a target surface. For instance, augmenting the human arm surface via dynamic projection mapping can enhance applications in fashion, user interfaces, prototyping, education, medical assistance, and other fields. For such applications, however, conventional methods neglect skin deformation and have a high latency between motion and projection, causing noticeable misalignment between the target arm surface and the projected images. These problems degrade the user experience and limit the development of further applications. We propose a system for high-speed dynamic projection mapping onto a rapidly moving human arm with realistic skin deformation, with which the user does not perceive any misalignment between the arm surface and the projected images. First, we combine a state-of-the-art parametric deformable surface model with efficient regression-based accuracy compensation to represent skin deformation. Through this compensation, we modify the texture coordinates to achieve fast and accurate image generation for projection mapping based on joint tracking. Second, we develop a high-speed system that keeps the latency between motion and projection below 10 ms, which is generally imperceptible to human vision. Compared with conventional methods, the proposed system provides more realistic experiences and increases the applicability of dynamic projection mapping.

Article
Augmented Reality, Virtual Reality and Artificial Intelligence in Orthopedic Surgery: A Systematic Review
Appl. Sci. 2021, 11(7), 3253; https://doi.org/10.3390/app11073253 - 05 Apr 2021
Abstract
Background: The application of virtual and augmented reality technologies to orthopaedic surgery training and practice aims to increase the safety and accuracy of procedures and to reduce complications and costs. The purpose of this systematic review is to summarise the present literature on this topic while providing a detailed analysis of its current flaws and benefits. Methods: A comprehensive search of the PubMed, Cochrane, CINAHL, and Embase databases was conducted from inception to February 2021. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to improve the reporting of the review. The Cochrane Risk of Bias Tool and the Methodological Index for Non-Randomized Studies (MINORS) were used to assess the quality and potential bias of the included randomized and non-randomized controlled trials, respectively. Results: Virtual reality has proven revolutionary for both resident training and preoperative planning. Thanks to augmented reality, orthopaedic surgeons can carry out procedures faster and more accurately, improving overall safety. Artificial intelligence (AI) is a promising technology with limitless potential, but its use in orthopaedic surgery is currently limited to preoperative diagnosis. Conclusions: Extended reality technologies have the potential to reform orthopaedic training and practice, providing an opportunity for unidirectional growth towards a patient-centred approach.

Article
Silhouettes from Real Objects Enable Realistic Interactions with a Virtual Human in Mobile Augmented Reality
Appl. Sci. 2021, 11(6), 2763; https://doi.org/10.3390/app11062763 - 19 Mar 2021
Abstract
Realistic interactions with real objects (e.g., animals, toys, robots) in an augmented reality (AR) environment enhance the user experience. Common AR apps on the market achieve realistic interactions by superimposing pre-modeled virtual proxies on the real objects in the AR environment; in this way, the user perceives interaction with the virtual proxies as interaction with the real objects. However, catering to environment changes, shape deformation, and view updates is not a trivial task. Our proposed method instead uses the dynamic silhouette of a real object to enable realistic interactions. The approach is practical, lightweight, and requires no additional hardware besides the device camera. As a case study, we designed a mobile AR application for interacting with real animal dolls, in which a virtual human performs four types of realistic interactions. The results demonstrate the stability of our method, which requires no pre-modeled virtual proxies under shape deformation and view updates. We also conducted a pilot study using our approach and report significant improvements in user perception of spatial awareness and presence for realistic interactions with a virtual human.

Article
BlocklyXR: An Interactive Extended Reality Toolkit for Digital Storytelling
Appl. Sci. 2021, 11(3), 1073; https://doi.org/10.3390/app11031073 - 25 Jan 2021
Abstract
Traditional in-app virtual reality (VR)/augmented reality (AR) applications pose a challenge of reaching users due to their dependency on operating systems (Android, iOS). Moreover, it is difficult for general users to create their own VR/AR applications and foster their creative ideas without advanced programming skills. This paper addresses these issues by proposing an interactive extended reality toolkit named BlocklyXR. The objective of this research is to provide general users with a visual programming environment in which to build an extended reality application for digital storytelling. The contextual design was generated from real-world map data retrieved from Mapbox GL. ThreeJS was used for setting up and rendering 3D environments and controlling animations. A block-based programming approach was adapted to let users design their own stories. The capability of BlocklyXR is illustrated with a use case in which users were able to replicate the existing PalmitoAR using the block-based authoring toolkit with less programming effort. The technology acceptance model was used to evaluate the adoption and use of the toolkit. The findings showed that visual design and task–technology fit had significantly positive effects on user motivation factors (perceived ease of use and perceived usefulness). In turn, perceived usefulness had statistically significant and positive effects on intention to use, while there was no significant impact of perceived ease of use on intention to use. Study implications and future research directions are discussed.

Article
Situated AR Simulations of a Lantern Festival Using a Smartphone and LiDAR-Based 3D Models
Appl. Sci. 2021, 11(1), 12; https://doi.org/10.3390/app11010012 - 22 Dec 2020
Abstract
A lantern festival was 3D-scanned to elucidate its unique complexity and cultural identity in terms of Intangible Cultural Heritage (ICH). Three augmented reality (AR) instancing scenarios were applied to the converted scan data: interaction with the entire site, forward additive instancing, and interactions with a pre-defined model layout. The novelty and contributions of this study are three-fold: documentation, development of an AR app for situated tasks, and AR verification. We present ready-made and customized smartphone apps for AR verification to extend the model's elaboration of different site contexts; both were applied to assess their feasibility for restructuring and managing the scene. The apps were implemented under homogeneous and heterogeneous combinations of contexts, from an as-built event description to a remote site, as a sustainable cultural effort. A second reconstruction from screenshots, in an AR loop of interaction, reconstruction, and confirmation verification, was also made to study the manipulated result in 3D prints.

Article
Voxel-Based Scene Representation for Camera Pose Estimation of a Single RGB Image
Appl. Sci. 2020, 10(24), 8866; https://doi.org/10.3390/app10248866 - 11 Dec 2020
Abstract
Deep learning has been utilized in end-to-end camera pose estimation. To improve its performance, we introduce a camera pose estimation method based on a 2D–3D matching scheme with two convolutional neural networks (CNNs). The scene is divided into voxels, whose size and number are computed according to the scene volume and the number of 3D points. We extract inlier points from the 3D point set in a voxel using random sample consensus (RANSAC)-based plane fitting to obtain a set of interest points lying on a major plane. These points are then reprojected onto the image using the ground-truth camera pose, after which a polygonal region is identified in each voxel using the convex hull. We designed a training dataset for 2D–3D matching, consisting of the inlier 3D points, correspondences across image pairs, and the voxel regions in the image. We trained the hierarchical learning structure with two CNNs on this dataset to detect the voxel regions and obtain the locations/descriptions of the interest points. Following successful 2D–3D matching, the camera pose is estimated using an n-point pose solver within RANSAC. The experimental results show that our method estimates the camera pose more precisely than previous end-to-end estimators.
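
Once 2D–3D matches are available, the final step the abstract describes (an n-point solver inside RANSAC) corresponds to OpenCV's solvePnPRansac. In this sketch the intrinsics and correspondences are synthetic, generated from a known pose so the solver has something consistent to recover; it is not the paper's implementation.

```python
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic 2D-3D correspondences projected from a ground-truth pose.
rvec_gt = np.array([0.1, 0.2, 0.05])
tvec_gt = np.array([0.0, 0.0, 2.0])
object_points = np.random.rand(50, 3)
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None,
    iterationsCount=200, reprojectionError=4.0)
R, _ = cv2.Rodrigues(rvec)             # recovered rotation matrix
print(ok, tvec.ravel())                # translation ~ tvec_gt
```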

Article
“Blurry Touch Finger”: Touch-Based Interaction for Mobile Virtual Reality with Clip-on Lenses
Appl. Sci. 2020, 10(21), 7920; https://doi.org/10.3390/app10217920 - 08 Nov 2020
Abstract
In this paper, we propose and explore a touchscreen-based interaction technique called the “Blurry Touch Finger” for EasyVR, a mobile VR platform with non-isolating flip-on glasses that leave the fingers able to reach the screen. We demonstrate that, with the proposed technique, the user is able to accurately select virtual objects seen under the lenses directly with the fingers, even though the fingers are blurred and physically block the target object. This is possible owing to the binocular rivalry that renders the fingertips semi-transparent. We carried out a first-stage evaluation assessing the object selection performance and general usability of Blurry Touch Finger. The study revealed that, for objects with screen-space sizes greater than about 0.5 cm, the selection performance and usability of Blurry Touch Finger, as applied in the EasyVR configuration, were comparable to or higher than those of both the conventional head-directed and the hand/controller-based ray-casting selection methods. For smaller objects, however, well below the size of the fingertip, touch-based selection performed worse and was less usable due to the usual fat-finger problem and difficulty with stereoscopic focus.

Article
Enhancing English-Learning Performance through a Simulation Classroom for EFL Students Using Augmented Reality—A Junior High School Case Study
Appl. Sci. 2020, 10(21), 7854; https://doi.org/10.3390/app10217854 - 05 Nov 2020
Abstract
In non-English-speaking countries, students learning EFL (English as a Foreign Language) without a “real” learning environment mostly show poor English-learning performance. To improve the English-learning effectiveness of EFL students, we propose the use of augmented reality (AR) to support situational classroom learning and conduct teaching experiments for situational English learning. The purpose of this study is to examine whether the learning performance of EFL students can be enhanced using augmented reality within a situational context. The learning performance of the experimental student group is validated by means of the attention, relevance, confidence, and satisfaction (ARCS) model. According to the statistical analysis, the experimental teaching method is much more effective than the traditional teaching method used with the control group. The learning performance of the experimental group is clearly enhanced, and the feedback from EFL students on using AR is positive. The experimental results reveal that (1) students can concentrate more on the practice of speaking English as a foreign language; (2) the real-life AR scenarios enhance student confidence in learning English; and (3) applying AR teaching materials in situational context classes can provide near real-life scenarios and improve students' learning satisfaction.

Review

Review
Review of Microsoft HoloLens Applications over the Past Five Years
Appl. Sci. 2021, 11(16), 7259; https://doi.org/10.3390/app11167259 - 06 Aug 2021
Abstract
Since it first appeared in 2016, Microsoft HoloLens has been used in various industries. This study reviews academic papers on the applications of HoloLens across industries. A review was performed to summarize the results of 44 papers (dated between January 2016 and December 2020) and to outline the research trends in applying HoloLens to different industries. This study determined that HoloLens is employed in medical and surgical aids and systems, medical education and simulation, industrial engineering, architecture, civil engineering, and other engineering fields. The findings of this study contribute to classifying the current uses of HoloLens in various industries and identifying the types of visualization techniques and functions employed.

Review
A Narrative Review of Virtual Reality Applications for the Treatment of Post-Traumatic Stress Disorder
Appl. Sci. 2021, 11(15), 6683; https://doi.org/10.3390/app11156683 - 21 Jul 2021
Abstract
Virtual reality (VR) technologies allow for the creation of 3D environments that can be exploited at the human level, maximizing the use of perceptual skills through the sensory channels and enabling users to actively influence the course of events that take place in the virtual environment (VE). As such, they constitute a significant asset in the treatment of post-traumatic stress disorder (PTSD) via exposure therapy. In this article, we review the VR tools that have been developed to date for the treatment of PTSD. The article aims to analyze how VR technologies can be exploited from a sensorimotor and interactive perspective. The findings from this analysis suggest a significant emphasis on sensory stimulation to the detriment of interaction. Finally, we propose new ideas for integrating sensorimotor activities and interaction more successfully into VR exposure therapy for PTSD.

Review
Multimodal Interaction Systems Based on Internet of Things and Augmented Reality: A Systematic Literature Review
Appl. Sci. 2021, 11(4), 1738; https://doi.org/10.3390/app11041738 - 16 Feb 2021
Abstract
Technology developments have expanded the diversity of interaction modalities that an agent (either a human or a machine) can use to interact with a computer system. This expansion has created the need for more natural and user-friendly interfaces in order to achieve an effective user experience and usability. To accomplish this goal, more than one modality can be provided to an agent for interaction with a system, which is referred to as a multimodal interaction (MI) system. The Internet of Things (IoT) and augmented reality (AR) are popular technologies that allow interaction systems to combine the agent's real-world context with immersive AR content. However, although MI systems have been extensively studied, only a few studies have reviewed MI systems that use IoT and AR. Therefore, this paper presents an in-depth review of studies that proposed various MI systems utilizing IoT and AR. A total of 23 studies were identified and analyzed through a rigorous systematic literature review protocol. The results of our analysis of MI system architectures, the relationships between system components, input/output interaction modalities, and open research challenges are presented and discussed to summarize the findings and identify future research and development avenues for researchers and MI developers.
