Special Issue "Virtual Reality and Scientific Visualization"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 18410

Special Issue Editors

Prof. Dr. Osvaldo Gervasi
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, Perugia, Italy
Interests: parallel and distributed systems; grid computing; cloud computing; virtual reality and scientific visualization; implementation of algorithms for molecular studies; multimedia and internet computing; e-learning
Prof. Dr. JungYoon Kim
Guest Editor
Graduate School of Game, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam 461-701, Gyeonggi-do, Korea
Interests: AR and VR; game design; game therapy; application design

Special Issue Information

Dear Colleagues,

In recent years, the availability of innovative, powerful, and low-cost immersive devices, together with increasingly high-performing computers and smartphones, has generated growing interest in virtual reality and augmented reality technologies.

Moreover, highly capable development environments, such as Web3D, Blender, Unity 3D, and Unreal Engine, allow applications to be developed quickly and efficiently.

Scientific visualization represents a fundamental field of research for uncovering properties often hidden in information and raw data. This sector has also benefited from the enormous development observed in both hardware and software technologies. This Special Issue aims to collect articles that represent the state of the art in virtual and augmented reality and in scientific visualization.

The Special Issue is focused on (but not limited to) the following themes:

  • Virtual reality systems;
  • Virtual reality tools and toolkits;
  • Virtual, augmented, and mixed reality;
  • Virtual reality languages (X3D, VRML, Collada, OpenGL, Swift);
  • Advances on game engines (Unity 3D, Unreal Engine, Amazon Lumberyard, AppGameKit VR, CryEngine, Godot);
  • Immersive virtual reality devices (digital gloves, motion trackers, body trackers, HMDs);
  • Virtual reality-based scientific visualization;
  • Multi-user and distributed virtual reality and games;
  • Immersive learning;
  • Molecular virtual reality techniques;
  • Virtual classes and practice;
  • Virtual laboratories;
  • Educational games;
  • Virtual reality applied to cultural heritage;
  • Virtual reality applied to medicine and surgery;
  • VR systems for telecare and disabilities treatments;
  • Advances in scientific visualization;
  • Virtual reality UI/UX;
  • Virtual reality sickness reduction;
  • Virtual reality content;
  • Virtual reality system developer training.

Prof. Dr. Osvaldo Gervasi
Prof. Dr. JungYoon Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual reality systems
  • Virtual reality tools and toolkits
  • Virtual, augmented, and mixed reality
  • Virtual reality languages (X3D, VRML, Collada, OpenGL, Swift)
  • Advances on game engines (Unity 3D, Unreal Engine, Amazon Lumberyard, AppGameKit VR, CryEngine, Godot)
  • Immersive virtual reality devices (digital gloves, motion trackers, body trackers, HMDs)
  • Virtual reality-based scientific visualization
  • Multi-user and distributed virtual reality and games
  • Immersive learning
  • Molecular virtual reality techniques
  • Virtual classes and practice
  • Virtual laboratories
  • Educational games
  • Virtual reality applied to cultural heritage
  • Virtual reality applied to medicine and surgery
  • VR systems for telecare and disabilities treatments
  • Advances in scientific visualization
  • Virtual reality UI/UX
  • Virtual reality sickness reduction
  • Virtual reality content
  • Virtual reality system developer training

Published Papers (16 papers)


Research


Article
Development and Application of a Virtual Reality Biphasic Separator as a Learning System for Industrial Process Control
Electronics 2022, 11(4), 636; https://doi.org/10.3390/electronics11040636 - 18 Feb 2022
Viewed by 347
Abstract
In this study, we propose a virtual reality biphasic separator methodology in an immersive industrial environment. It allows the training of students or engineers in process and automatic control. Moreover, the operating performance of a biphasic separator requires advanced automatic control strategies because this industrial process has multivariable and nonlinear characteristics. In this context, the virtual biphasic separator allows the testing of several control techniques. The methodology, involving the immersive virtualization of the biphasic separator, includes three stages. First, a multivariable mathematical model of the industrial process is obtained. The second stage corresponds to virtualization, in which the 3D modelling of the industrial process is undertaken; the process dynamics are then captured by the plant model implemented in the Unity software. In the third stage, the control strategies are designed. The interaction between the virtual biphasic separator and the control system is implemented using shared variables. Three control strategies are implemented and compared to validate the applicability: a classic control algorithm, namely the proportional integral derivative (PID) control method, as well as two advanced controllers: a numerical controller and model predictive control (MPC). The results demonstrate the usability of the virtual biphasic separator with respect to operating performance under the control techniques implemented. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
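As an illustrative sketch of the classic strategy this study compares against the advanced controllers (hypothetical code, not the authors' Unity implementation), a minimal discrete PID loop driving a toy first-order plant could look like:

```python
# Minimal discrete PID controller (hypothetical sketch, not the authors' code):
# illustrates the classic strategy the study compares against MPC.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (a stand-in for, e.g., separator liquid level)
# toward a unit setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
level = 0.0
for _ in range(400):
    u = pid.update(setpoint=1.0, measurement=level)
    level += 0.1 * (u - level)  # first-order plant response, time step 0.1
```

The integral term removes the steady-state offset a purely proportional controller would leave; in the paper's setup the plant model runs inside Unity and exchanges values with the controller via shared variables.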

Article
Virtual Reality Tool for Exploration of Three-Dimensional Cellular Automata
Electronics 2022, 11(3), 497; https://doi.org/10.3390/electronics11030497 - 08 Feb 2022
Viewed by 450
Abstract
We present a Virtual Reality (VR) tool for exploration of three-dimensional cellular automata. In addition to the traditional visual representation offered by other implementations, this tool allows users to aurally render the active (alive) cells of an automaton in sequence along one axis or simultaneously create melodic and harmonic textures, while preserving in all cases the relative locations of these cells to the user. The audio spatialization method created for this research can render the maximum number of audio sources specified by the underlying software (255) without audio dropouts. The accuracy of the achieved spatialization is unrivaled since it is based on actual distance measurements as opposed to coarse distance approximations used by other spatialization methods. A subjective evaluation (effectively, self-reported measurements) of our system (n=30) indicated no significant differences in user experience or intrinsic motivation between VR and traditional desktop versions (PC). However, participants in the PC group explored more of the universe than the VR group. This difference is likely to be caused by the familiarity of our cohort with PC-based games. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
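A hedged sketch of the kind of universe such a tool renders: one update step of a 3D outer-totalistic cellular automaton on a toroidal grid. The survive/born rule sets below are illustrative, not taken from the paper.

```python
from itertools import product

# One generation of a 3D cellular automaton: each cell's fate depends only on
# how many of its 26 Moore neighbours are alive (rule sets are illustrative).
def step(alive, survive=frozenset({4, 5}), born=frozenset({5}), size=8):
    """alive: set of (x, y, z) cells; returns the next generation."""
    counts = {}
    for (x, y, z) in alive:
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            if (dx, dy, dz) == (0, 0, 0):
                continue
            n = ((x + dx) % size, (y + dy) % size, (z + dz) % size)
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if (c in alive and k in survive) or (c not in alive and k in born)}

# A lone cell has no live neighbours and dies out.
assert step({(0, 0, 0)}) == set()
```

A VR front end would simply redraw (and, in this tool's case, sonify) the returned cell set each generation.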

Article
AViLab—Gamified Virtual Educational Tool for Introduction to Agent Theory Fundamentals
Electronics 2022, 11(3), 344; https://doi.org/10.3390/electronics11030344 - 24 Jan 2022
Viewed by 678
Abstract
The development and increased popularity of interactive computer games, metaverses, and virtual worlds in general have over the years attracted the attention of various researchers. It is therefore not surprising that the educational potential of these virtual environments (e.g., virtual laboratories) is of particular interest to the wider scientific community, with numerous successful examples coming from different fields, ranging from the social sciences to STEM disciplines. However, when it comes to agent theory, a highly important part of general AI (Artificial Intelligence) research, there is a noticeable absence of such educational tools; more precisely, there is a certain lack of virtual educational systems dedicated primarily to agents. This was the motivation for the development of the AViLab (Agents Virtual Laboratory) gamified system as a demonstration tool for educational purposes in agent theory. The developed system is thoroughly described in this paper. The current version of AViLab consists of several agents (developed according to the agenda elaborated in the manuscript) that aim to demonstrate certain insights into fundamental agent structures. Although the task imposed on our agents essentially represents a sort of “picking” or “collecting” task, the scenario in the system is gamified in order to be more immersive for potential users, spectators, or test subjects. This kind of task was chosen because of its wide applicability in both gaming and real-world everyday scenarios. In order to demonstrate how AViLab can be utilized, we conducted an exemplar experiment, described in the paper. Alongside its educational purpose, the AViLab system also has the potential to be used for research in agent theory, AI, and game AI, especially regarding future system extensions (including the introduction of new scenarios, more advanced agents, etc.).
Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
VoRtex Metaverse Platform for Gamified Collaborative Learning
Electronics 2022, 11(3), 317; https://doi.org/10.3390/electronics11030317 - 20 Jan 2022
Cited by 2 | Viewed by 2958
Abstract
Metaverse platforms are becoming an increasingly popular form of collaboration within virtual worlds. Such platforms provide users with the ability to build virtual worlds that can simulate real-life experiences through different social activities. In this paper, we introduce a novel platform that provides assistive tools for building an educational experience in virtual worlds and overcoming the boundaries caused by pandemic situations. To this end, the authors developed a high-level software architecture and design for a metaverse platform named VoRtex. VoRtex is primarily designed to support collaborative learning activities within the virtual environment. It is designed to support educational standards and represents an open-source, accessible solution developed using a modern technology stack and metaverse concepts. For this study, we conducted a comparative analysis of the implemented VoRtex prototype and some popular virtual world platforms using Mannien's matrix. Afterwards, based on the comparison, we evaluated the potential of the chosen virtual world platform and the VoRtex platform for online education. After an interactive demonstration of the VoRtex platform, participants were asked to fill out a questionnaire. The aim was to enable participants to identify the main advantages of online teaching using the VoRtex platform. Finally, the authors analyzed the benefits and disadvantages of collaborative learning on the metaverse platform compared with real-world classroom sessions. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
Persistent Postural-Perceptual Dizziness Interventions—An Embodied Insight on the Use of Virtual Reality for Technologists
Electronics 2022, 11(1), 142; https://doi.org/10.3390/electronics11010142 - 03 Jan 2022
Viewed by 300
Abstract
Persistent and inconsistent unsteadiness with nonvertiginous dizziness (persistent postural-perceptual dizziness (PPPD)) could negatively impact quality of life. This study highlights that the use of virtual reality (VR) systems offers bimodal benefits to PPPD, such as understanding symptoms and providing a basis for treatment. The aim is to develop an understanding of PPPD and its interventions, including current trends of VR involvement to extrapolate and re-evaluate VR design strategies. Therefore, recent virtual-reality-based research work that progressed in understanding PPPD is identified, collected, and analysed. This study proposes a novel approach to the understanding of PPPD, specifically for VR technologists, and examines the principles of effectively aligning VR development for PPPD interventions. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
Caffe2Unity: Immersive Visualization and Interpretation of Deep Neural Networks
Electronics 2022, 11(1), 83; https://doi.org/10.3390/electronics11010083 - 28 Dec 2021
Viewed by 490
Abstract
Deep neural networks (DNNs) dominate many tasks in the computer vision domain, but it is still difficult to understand and interpret the information contained within these networks. To gain better insight into how a network learns and operates, there is a strong need to visualize these complex structures, and this remains an important research direction. In this paper, we address the problem of how the interactive display of DNNs in a virtual reality (VR) setup can be used for general understanding and architectural assessment. We compiled a static library as a plugin for the Caffe framework in the Unity gaming engine. We used routines from this plugin to create and visualize a VR-based AlexNet architecture for an image classification task. Our layered interactive model allows the user to freely navigate back and forth within the network during visual exploration. To make the DNN model even more accessible, the user can select certain connections to understand the activity flow at a particular neuron. Our VR setup also allows users to hide the activation maps/filters or even interactively occlude certain features in an image in real-time. Furthermore, we added an interpretation module and reframed the Shapley values to give a deeper understanding of the different layers. Thus, this novel tool offers more direct access to network structures and results, and its immersive operation is especially instructive for both novices and experts in the field of DNNs. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
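The interpretation module above reframes Shapley values. As a hedged illustration of the underlying idea (not the authors' code), the exact Shapley value of a tiny cooperative game can be computed by averaging each player's marginal contribution over all orderings:

```python
from itertools import permutations

# Exact Shapley values for a small cooperative game: each player's value is
# its average marginal contribution over all player orderings. Illustrative
# only; the paper adapts this idea to DNN layers.
def shapley(players, value):
    """value: maps a frozenset of players to the coalition's worth."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: total / len(perms) for p, total in phi.items()}

# Toy game: a coalition's worth is the square of its size.
values = shapley(["a", "b", "c"], lambda s: len(s) ** 2)
# Symmetric players share v({a, b, c}) = 9 equally: each gets 3.0.
```

The factorial enumeration is only feasible for a handful of players; practical interpretability work approximates this average by sampling.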

Article
Synthetic Data Generation to Speed-Up the Object Recognition Pipeline
Electronics 2022, 11(1), 2; https://doi.org/10.3390/electronics11010002 - 21 Dec 2021
Viewed by 876
Abstract
This paper provides a methodology for the production of synthetic images for training neural networks to recognise shapes and objects. There are many scenarios in which it is difficult, expensive and even dangerous to produce a set of images that is satisfactory for the training of a neural network. The development of 3D modelling software has nowadays reached such a level of realism and ease of use that it seemed natural to explore this innovative path and to give an answer regarding the reliability of this method that bases the training of the neural network on synthetic images. The results obtained in the two proposed use cases, that of the recognition of a pictorial style and that of the recognition of men at sea, lead us to support the validity of the approach, provided that the work is conducted in a very scrupulous and rigorous manner, exploiting the full potential of the modelling software. The code produced, which automatically generates the transformations necessary for the data augmentation of each image, and the generation of random environmental conditions in the case of Blender and Unity3D software, is available under the GPL licence on GitHub. The results obtained lead us to affirm that through the good practices presented in the article, we have defined a simple, reliable, economic and safe method to feed the training phase of a neural network dedicated to the recognition of objects and features to be applied to various contexts. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
Analyzing Visual Attention of People with Intellectual Disabilities during Virtual Reality-Based Job Training
Electronics 2021, 10(14), 1652; https://doi.org/10.3390/electronics10141652 - 11 Jul 2021
Viewed by 905
Abstract
Virtual reality (VR) has proven an effective means of job training for people with intellectual disabilities who may experience difficulties in learning. However, it is unlikely for them to successfully complete a certain task using only VR-based job training contents without receiving supplemental help from others. Accordingly, to increase the effectiveness of virtual job training for people with intellectual disabilities in training situations in which they may experience difficulty and become unable to proceed further, the contents of the training program need to automatically identify such moments and provide support so that they may correctly perform the task. To identify the moment of intervention, we conducted an experiment (n = 21) to collect eye tracking data of people with intellectual disabilities while performing VR-based barista training. We measured eye scanning patterns to identify any difference between people with intellectual disabilities who complete a given step independently and those who request intervention. We found that the information about the types of fixated objects did not help to identify any difference, but the information about eye transitions, i.e., eye movements between two different areas of interest, was useful in identifying the difference. Our findings provide implications for identifying the moment of intervention for people with intellectual disabilities. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
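The eye-transition feature the study found informative can be sketched as counting movements between different areas of interest (AOIs) in an ordered fixation sequence. The AOI labels and data below are invented for illustration, not taken from the experiment:

```python
from collections import Counter

# Count transitions between *different* AOIs in an ordered fixation sequence;
# repeated fixations on the same AOI are ignored.
def transition_counts(fixations):
    """fixations: ordered list of AOI labels; returns (from, to) pair counts."""
    return Counter((a, b) for a, b in zip(fixations, fixations[1:]) if a != b)

seq = ["cup", "machine", "machine", "menu", "cup"]
counts = transition_counts(seq)  # ('machine', 'machine') is not counted
```

Comparing such transition distributions between participants who proceed independently and those who request help is one plausible way to operationalize the difference the authors report.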

Article
Virtual Reality Usability and Accessibility for Cultural Heritage Practices: Challenges Mapping and Recommendations
Electronics 2021, 10(12), 1430; https://doi.org/10.3390/electronics10121430 - 14 Jun 2021
Cited by 4 | Viewed by 1074
Abstract
In recent years, virtual reality (VR) has reached a level of maturity for real practical exercises across many fields of study, especially virtual walkthrough exploration systems for cultural heritage (CH). However, research in this area remains scattered and limited. This work presents a systematic review that maps out the usability and accessibility issues that make using VR in CH challenging. We identified 45 challenges, mapped into five problem groups: system design, development process, technology, assessment process and knowledge transfer. This mapping is then used to propose 58 recommendations to improve the usability and accessibility of VR in CH, categorized into three groups: discovery and planning, design and development, and assessment factors. The analysis identified persistent accessibility and usability problems, such as limits on navigating the view and space that constrain users' free movement, and navigation control that is not ideal with the keyboard arrow buttons. This work is important because it provides an overview of the usability and accessibility challenges faced in applying, developing, deploying and assessing VR for digitalizing CH, and proposes a substantial number of constructive recommendations to guide future studies. The main contribution of this paper is the mapping of usability and accessibility challenges into categories and the development of recommendations based on the identified problems. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
A Case Study of Educational Games in Virtual Reality as a Teaching Method of Lean Management
Electronics 2021, 10(7), 838; https://doi.org/10.3390/electronics10070838 - 01 Apr 2021
Cited by 7 | Viewed by 989
Abstract
(1) At present, it is important to bring the latest technologies from industrial practice into the teaching process of educational institutions, including universities. The presented case study addresses the application of educational games in virtual reality to the teaching process in a university environment. (2) The study took place at the Department of Industrial Engineering of the University of Žilina in Žilina and consisted of two phases. In the first phase, students’ satisfaction with current teaching methods was examined. The second phase focused on an educational game in virtual reality, which introduced a non-traditional approach for teaching lean management, namely the tool 5S. (3) This game was designed by the study authors and created in the Godot game engine. The educational game was provided to students during class. After completing the game, participants were asked to fill out a questionnaire. The aim was to enable students to express their opinion on the educational game and to identify the main benefits of this approach in the teaching process. (4) In the study’s final phase, based on the acquired knowledge, the authors examined the benefits and disadvantages of virtual reality educational games for the teaching process of industrial engineering tools. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
Rapid Prototyping of Virtual Reality Cognitive Exercises in a Tele-Rehabilitation Context
Electronics 2021, 10(4), 457; https://doi.org/10.3390/electronics10040457 - 13 Feb 2021
Cited by 2 | Viewed by 910
Abstract
In recent years, the need to contain healthcare costs due to the growing public debt of many countries, combined with the need to reduce costly travel by patients unable to move autonomously, has captured the attention of public administrators towards tele-rehabilitation. This trend has been consolidated overwhelmingly following the COVID-19 pandemic, which has made it precarious, difficult and even dangerous for patients to access hospital facilities. We present an approach based on the rapid prototyping of virtual reality cognitive tele-rehabilitation exercises, which reinforces the group of exercises available in the Nu!Reha platform. Patients who have experienced an injury or pathology need continuous training in order to recover functional abilities, and the therapist needs to monitor the outcomes of such practice. The group of new exercises based on the rapid prototyping approach becomes crucial, especially in this pandemic period. The virtual reality exercises are designed in Unity 3D to empower the therapist to set up personalized exercises in an easy way, enabling the patient to receive personalized stimuli, which are essential for a positive outcome in the practice. Furthermore, the reaction speed of the system is of fundamental importance, as the temporal evolution of the scene must proceed in parallel with the patient's movements to ensure an effective and efficient therapeutic response. We therefore optimized the virtual reality application to make the loading and startup phases as fast as possible, and we tested the results on many devices, in particular computers and smartphones with different operating systems and hardware. The implemented method powers up the Nu!Reha® system, a collection of tele-rehabilitation services that helps patients recover cognitive and functional capabilities. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
Augmented-Reality-Based 3D Emotional Messenger for Dynamic User Communication with Smart Devices
Electronics 2020, 9(7), 1127; https://doi.org/10.3390/electronics9071127 - 10 Jul 2020
Cited by 3 | Viewed by 1284
Abstract
With the development of Internet technologies, chat environments have migrated from PCs to mobile devices. Conversations have moved from phone calls and text messages to mobile messaging services or “messengers,” which has led to a significant surge in the use of mobile messengers such as Line and WhatsApp. However, because these messengers mainly use text as the communication medium, they have the inherent disadvantage of not effectively representing the user’s nonverbal expressions. In this context, we propose a new emotional communication messenger that improves upon the limitations of existing static expressions in current messenger applications. We develop a chat messenger based on augmented reality (AR) technology using smartglasses, which are a type of a wearable device. To this end, we select a server model that is suitable for AR, and we apply an effective emotional expression method based on 16 different basic emotions classified as per Russell’s model. In our app, these emotions can be expressed via emojis, animations, particle effects, and sound clips. Finally, we verify the efficacy of our messenger by conducting a user study to compare it with current 2D-based messenger services. Our messenger service can serve as a prototype for future AR-based messenger apps. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Article
A Mobile Augmented Reality System for the Real-Time Visualization of Pipes in Point Cloud Data with a Depth Sensor
Electronics 2020, 9(5), 836; https://doi.org/10.3390/electronics9050836 - 19 May 2020
Cited by 3 | Viewed by 2036
Abstract
Augmented reality (AR) is a useful visualization technology that displays information by adding virtual images to the real world. In AR systems that require three-dimensional information, point cloud data is easy to acquire in real time; however, it is difficult to measure and visualize objects in real time due to the large amount of data and the matching process. In this paper, we explore a method of estimating pipes from point cloud data and visualizing them in real time through augmented reality devices. In general, pipe estimation in a point cloud uses a Hough transform and is performed through a preprocessing process, such as noise filtering, normal estimation, or segmentation. However, this has the disadvantage of slow execution time due to a large amount of computation. Therefore, for real-time visualization in augmented reality devices, a fast cylinder matching method using random sample consensus (RANSAC) is required. In this paper, we propose parallel processing, multiple frames, adjustable scale, and error correction for real-time visualization. The real-time visualization method obtains a depth image from the sensor and constructs a uniform point cloud using a voxel grid algorithm. The constructed data is analyzed according to the fast cylinder matching method using RANSAC. With the spread of various AR devices, this method is expected to be used to identify problems, such as the sagging of pipes, through real-time measurements at plant sites. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
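The pipeline this abstract describes (voxel-grid downsampling followed by RANSAC cylinder matching) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes a pipe whose axis is vertical, so cylinder fitting reduces to RANSAC circle fitting on the XY projection, and the function names (`voxel_downsample`, `ransac_vertical_cylinder`) are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one point per occupied voxel to obtain a uniform point cloud."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def circle_from_3pts(p):
    """Circle through three 2-D points, via the linear system 2*c.x + k = |x|^2."""
    A = np.column_stack([2 * p, np.ones(3)])
    b = (p ** 2).sum(axis=1)
    sol = np.linalg.solve(A, b)          # raises LinAlgError if collinear
    c = sol[:2]
    r = np.sqrt(sol[2] + c @ c)          # k = r^2 - |c|^2
    return c, r

def ransac_vertical_cylinder(points, iters=200, tol=0.01, rng=None):
    """RANSAC fit of a z-aligned cylinder: circle fitting on the XY projection."""
    rng = np.random.default_rng(rng)
    xy = points[:, :2]
    best = (None, None, -1)
    for _ in range(iters):
        sample = xy[rng.choice(len(xy), 3, replace=False)]
        try:
            c, r = circle_from_3pts(sample)
        except np.linalg.LinAlgError:
            continue                     # degenerate (collinear) sample
        inliers = np.abs(np.linalg.norm(xy - c, axis=1) - r) < tol
        if inliers.sum() > best[2]:
            best = (c, r, int(inliers.sum()))
    return best                          # (center_xy, radius, inlier_count)
```

A full pipeline would additionally estimate the axis direction (e.g., from point normals) and run per frame, which is where the parallel processing and multi-frame strategies mentioned in the abstract come in.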

Article
Automated Spatiotemporal Classification Based on Smartphone App Logs
Electronics 2020, 9(5), 755; https://doi.org/10.3390/electronics9050755 - 4 May 2020
Cited by 3 | Viewed by 977
Abstract
In this paper, a framework for analyzing user app behavior with an automated supervised learning method in smartphone environments is proposed. The framework exploits users' collective location data and their smartphone app logs; based on these two datasets, it determines the apps with a high probability of usage in a geographic area. The framework extracts a mobile user's app-usage behavior data from an Android phone and transmits them to a server, which learns the user's representative trajectory patterns by combining the collected app-usage patterns with the trajectory data. The proposed method performs supervised learning on trajectory data that are labeled automatically from the user's app data, exploiting the per-area behavioral characteristics linked to app usage without incurring any labeling cost. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
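The core idea of this abstract — using app-usage logs as free labels for trajectory data — can be sketched briefly. The `(lat, lon, app)` tuple format, the grid-cell size, and the function names below are assumptions for illustration, not the paper's actual pipeline.

```python
from collections import Counter, defaultdict

def auto_label(app_logs, cell=0.01):
    """Label each geographic grid cell with its most-used app.
    app_logs: iterable of (lat, lon, app_name) tuples (hypothetical format)."""
    cells = defaultdict(Counter)
    for lat, lon, app in app_logs:
        key = (round(lat / cell), round(lon / cell))
        cells[key][app] += 1
    # The dominant app per cell serves as an automatically generated label.
    return {k: c.most_common(1)[0][0] for k, c in cells.items()}

def label_trajectory(trajectory, labels, cell=0.01):
    """Turn a raw (lat, lon) trajectory into an auto-labeled training sequence."""
    out = []
    for lat, lon in trajectory:
        key = (round(lat / cell), round(lon / cell))
        out.append(labels.get(key))  # None where no app log was observed
    return out
```

The labeled sequences produced this way could then feed any standard supervised classifier, which is what lets the framework avoid manual annotation of trajectories.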

Article
A Study on Design and Case Analysis of Virtual Reality Contents Developer Training based on Industrial Requirements
Electronics 2020, 9(3), 437; https://doi.org/10.3390/electronics9030437 - 5 Mar 2020
Cited by 4 | Viewed by 1528
Abstract
The fourth industrial revolution has evolved at an exponential pace, raising the level of development of the virtual reality (VR) technology industry. The VR-based content industry has grown significantly through convergence with other areas; however, its production suffers from a lack of skilled human resources. This paper therefore presents a study on educational courses dealing with VR content production. The study identifies the current status of the VR content industry and evaluates the training content currently used at relevant training institutions. On this basis, an educational model customized to industrial demand was designed, operated through a cooperative relationship between training institutions and industrial companies by means of cooperative projects and mentoring. The evaluation of the training courses' operation served as a basis for the design of the proposed educational model and indicates the effectiveness of the training. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

Other

Tutorial
Development and Application of an Augmented Reality Oyster Learning System for Primary Marine Education
Electronics 2021, 10(22), 2818; https://doi.org/10.3390/electronics10222818 - 17 Nov 2021
Cited by 4 | Viewed by 465
Abstract
Marine knowledge is an important part of education and has been integrated into various subjects and courses across educational levels. Previous research has indicated the importance of AR-assisted learning during the learning process. This study proposes the AR Oyster Learning System (AROLS), which integrates mobile AR with a marine education teaching strategy for primary-school teachers. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a primary school natural science course on oysters. The participants were 22 fourth-grade students: an experimental group of 11 students learned with the AROLS approach, while a control group of 11 students learned with a conventional multimedia approach. The results indicate that (1) students were interested in the AR learning approach, (2) the AR learning approach improved students' learning achievement and motivation, (3) students acquired the target knowledge through the oyster course, and (4) students learned the importance of sustainability when taking online courses at home during the pandemic. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
