New Advances in Visual Computing and Virtual Reality

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 29393

Special Issue Editors

Guest Editor
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: computer vision; virtual reality; digital twin

Guest Editor
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
Interests: computer graphics; computer vision; machine learning; biomedical imaging; deep learning

Special Issue Information

Dear Colleagues,

Visual computing and virtual reality are key technologies that will facilitate a major paradigm shift in the way users interact with data, and they have been recognized as viable solutions for many critical needs. Both fields handle images and 3D models, spanning computer graphics, image processing, visualization, computer vision, virtual and augmented reality, and video processing, while also drawing on pattern recognition, human–computer interaction, and machine learning. In particular, machine learning is ushering in a new wave of innovation in computer vision and computer graphics, gradually bringing visual computing and virtual reality to a whole new level.

This Special Issue of Electronics seeks high-quality submissions that highlight emerging applications and address recent breakthroughs in the broad area of visual computing and virtual reality, including virtual reality (VR), augmented reality (AR), mixed reality (MR), 3D interaction, visualization, computer graphics, computer vision, and deep learning. We invite researchers to contribute original research articles as well as comprehensive review articles. Topics include, but are not limited to, the following areas:

  • Visualization;
  • VR/AR/MR computer graphics;
  • 3D object reconstruction;
  • 3D deep learning;
  • Signal and image processing;
  • Deep learning for computer vision;
  • Image and video communication;
  • Tracking and sensing;
  • Human–computer interaction;
  • 3D display techniques and display devices;
  • Modeling, simulation and animation;
  • Emerging applications and systems, including techniques, performance, and implementation.

Dr. Hai Huang 
Prof. Dr. Ye Duan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Visualization
  • Computer graphics
  • 3D object reconstruction
  • Deep learning
  • Signal and image processing
  • Computer vision
  • Image and video communication
  • Tracking and sensing
  • Human–computer interaction
  • 3D display techniques and display devices
  • Modeling, simulation and animation

Published Papers (10 papers)

Research

19 pages, 3892 KiB  
Article
CAPNet: Context and Attribute Perception for Pedestrian Detection
by Yueyan Zhu, Hai Huang, Huayan Yu, Aoran Chen and Guanliang Zhao
Electronics 2023, 12(8), 1781; https://doi.org/10.3390/electronics12081781 - 10 Apr 2023
Cited by 1 | Viewed by 1460
Abstract
With a focus on practical applications in the real world, a number of challenges impede the progress of pedestrian detection. Scale variance, cluttered backgrounds, and ambiguous pedestrian features are the main culprits of detection failures. According to existing studies, consistent feature fusion, semantic context mining, and inherent pedestrian attributes appear to be feasible solutions. In this paper, to tackle these prevalent problems, we propose an anchor-free pedestrian detector named context and attribute perception network (CAPNet). First, we generate features with consistent, well-defined semantics and local details by introducing a feature extraction module with a multi-stage, parallel-stream structure. Then, a global feature mining and aggregation (GFMA) network is proposed to implicitly reconfigure, reassign, and aggregate features so as to suppress irrelevant features in the background. Finally, to bring more heuristic rules to the network, we improve the detection head with an attribute-guided multiple receptive field (AMRF) module, leveraging pedestrian shape as an attribute to guide learning. Experimental results demonstrate that introducing context and attribute perception greatly facilitates detection; CAPNet achieves new state-of-the-art performance on the Caltech and CityPersons datasets.
(This article belongs to the Special Issue New Advances in Visual Computing and Virtual Reality)
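The multiple-receptive-field idea behind the AMRF head can be illustrated with dilated convolution: increasing the dilation rate widens the receptive field without adding parameters. The 1D sketch below only demonstrates that principle and is not the authors' implementation.

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1D cross-correlation with a dilated kernel.

    A kernel of size k with dilation d covers a receptive field of
    (k - 1) * d + 1 input samples, with no extra parameters.
    """
    span = (len(w) - 1) * dilation  # extent of the dilated kernel minus 1
    return [
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(len(x) - span)
    ]

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
kernel = [1.0, 1.0, 1.0]

# Parallel branches with different dilation rates, as in a
# multi-receptive-field detection head.
branches = {d: dilated_conv1d(signal, kernel, d) for d in (1, 2)}
```

With dilation 1 each output sums three adjacent samples; with dilation 2 the same three-tap kernel spans five samples, so the two branches see the signal at different scales before aggregation.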

20 pages, 1523 KiB  
Article
AP Association Algorithm Based on VR User Behavior Awareness
by Jinjia Ruan, Yuchuan Wang, Zhenming Fan, Yongqiang Sun and Taoning Yang
Electronics 2022, 11(21), 3542; https://doi.org/10.3390/electronics11213542 - 30 Oct 2022
Viewed by 1077
Abstract
With the rapid development of virtual reality (VR) technology, this paper proposes an access point (AP) association method based on VR user behavior awareness, addressing the problem that current AP association methods focus only on performance improvements for ordinary users and ignore the impact of VR user behavior on service quality. This paper analyzes AP association in a multi-access-point (multi-AP) coverage scenario and, through association, controls the performance of the APs serving VR users under an access controller (AC). First, the VR network application scenario and system model are constructed; second, user behavior is sensed by analyzing users' viewing habits. The behavior-aware VR user association problem is then transformed into a "many-to-many" matching problem between VR user devices and APs, and a generalized multidimensional multiple-choice knapsack (GMMKP) model is established and solved using knapsack problem theory; a suboptimal solution algorithm is selected to obtain the best VR user AP association strategy. Simulation results show that the proposed algorithm outperforms the comparison algorithms in terms of AP load balancing and average network download latency.
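The "many-to-many" matching step can be approximated with a simple greedy heuristic: serve the most demanding users first and pack them into APs with spare capacity. The sketch below is a stand-in for the paper's GMMKP solver; the data layout and capacity model are illustrative assumptions.

```python
def associate(users, aps):
    """Greedily associate VR users with APs under per-AP capacity limits.

    users: list of (user_id, demand, {ap_id: rate}) -- demand and the
           achievable rate per AP would come from behavior-aware prediction.
    aps:   {ap_id: capacity}
    Returns {user_id: ap_id}. A greedy heuristic standing in for the
    paper's GMMKP solution algorithm.
    """
    load = {ap: 0.0 for ap in aps}
    assignment = {}
    # Serve users with the highest demand first.
    for uid, demand, rates in sorted(users, key=lambda u: -u[1]):
        # Among APs with spare capacity, pick the one with the best rate.
        feasible = [ap for ap in rates if load[ap] + demand <= aps[ap]]
        if feasible:
            best = max(feasible, key=lambda ap: rates[ap])
            assignment[uid] = best
            load[best] += demand
    return assignment

users = [("u1", 5, {"a": 10, "b": 8}),
         ("u2", 6, {"a": 9, "b": 20}),
         ("u3", 7, {"a": 4, "b": 5})]
result = associate(users, {"a": 10, "b": 10})
```

Note that the greedy pass can leave users unassigned (here "u1" finds no AP with spare capacity), which is exactly the kind of suboptimality the paper's knapsack formulation is meant to reduce.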

14 pages, 3012 KiB  
Article
Specific Emitter Identification Model Based on Improved BYOL Self-Supervised Learning
by Dongxing Zhao, Junan Yang, Hui Liu and Keju Huang
Electronics 2022, 11(21), 3485; https://doi.org/10.3390/electronics11213485 - 27 Oct 2022
Cited by 4 | Viewed by 1499
Abstract
Specific emitter identification (SEI) extracts features from received radio signals to determine the individual emitters that generated them. Although deep learning-based methods have been effectively applied to SEI, their performance declines dramatically with a smaller number of labeled training samples and in the presence of significant noise. To address this issue, we propose an improved Bootstrap Your Own Latent (BYOL) self-supervised learning scheme to fully exploit unlabeled samples, comprising a pretext task that adopts the contrastive learning concept and a downstream task. We designed three optimized data augmentation methods for communication signals in the pretext task to serve the contrastive concept, and built two neural networks, an online network and a target network, which interact and learn from each other. The proposed scheme generalizes across both small- and sufficient-sample cases, with labeled sample sizes ranging from 10 to 400 per group. The experiments also show promising accuracy and robustness, with recognition results improving by 3–8% at signal-to-noise ratios (SNRs) from 3 to 7. Our scheme can accurately identify individual emitters in a complicated electromagnetic environment.
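Two core mechanics of a BYOL-style scheme — the exponential-moving-average (EMA) update that slowly drags the target network toward the online network, and the normalized-similarity loss between the online prediction and the target projection — can be sketched in a few lines. This is a generic illustration with assumed names, not the paper's network.

```python
import numpy as np

def ema_update(target, online, tau=0.99):
    """EMA update of target-network weights toward the online network's,
    as in BYOL-style self-supervision (tau close to 1 = slow drift)."""
    return {k: tau * target[k] + (1 - tau) * online[k] for k in target}

def byol_loss(pred, target_proj):
    """2 - 2*cos_sim of L2-normalised vectors, i.e. BYOL's normalised MSE."""
    p = pred / np.linalg.norm(pred)
    t = target_proj / np.linalg.norm(target_proj)
    return 2.0 - 2.0 * float(p @ t)

online = {"w": np.array([1.0, 0.0])}
target = {"w": np.array([0.0, 1.0])}
target = ema_update(target, online, tau=0.9)  # target drifts toward online
```

The loss is 0 for perfectly aligned representations and 2 for orthogonal ones, so minimizing it pulls the online prediction toward the (stop-gradient) target projection of another augmented view.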

18 pages, 43034 KiB  
Article
Research on High-Resolution Face Image Inpainting Method Based on StyleGAN
by Libo He, Zhenping Qiang, Xiaofeng Shao, Hong Lin, Meijiao Wang and Fei Dai
Electronics 2022, 11(10), 1620; https://doi.org/10.3390/electronics11101620 - 19 May 2022
Cited by 11 | Viewed by 4302
Abstract
In face image recognition and other related applications, incomplete facial imagery due to obscuring factors during acquisition represents an issue that requires solving. Consequently, face image completion has become an important topic in the field of image processing. Face image completion methods must capture the semantics of facial expression, an ability deep learning networks have widely been shown to possess. However, for high-resolution face images, inpainting network training is difficult to converge, which makes high-resolution face image completion a hard problem. Based on the study of deep learning models for high-resolution face image generation, this paper proposes a high-resolution face inpainting method. First, our method extracts the latent vector of the face image to be repaired through ResNet, then feeds the latent vector to a pre-trained StyleGAN model to generate a face image. Next, it calculates the loss between the known part of the face image to be repaired and the corresponding part of the generated image. The latent vector is then adjusted and a new face image generated, iterating until the set number of iterations is reached. Finally, Poisson fusion is employed to blend the last generated face image with the face image to be repaired, eliminating differences in boundary color information. Through comparison with two classical face completion methods of recent years on the CelebA-HQ dataset, we found that our method achieves better completion results at 256×256 resolution. For 1024×1024 face image restoration, we have also conducted a large number of experiments that prove the effectiveness of our method. Our method can obtain a variety of repair results by editing the latent vector. In addition, it can be applied to face image editing, watermark removal, and other applications without retraining the network for the different masks used in these applications.
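The iterative loop described above — encode to a latent vector, generate, compare only the known pixels, update the latent — can be sketched with a toy linear "generator" standing in for StyleGAN. This is a deliberate simplification: all matrices and values are illustrative, and the real method optimizes through a deep generator.

```python
import numpy as np

# Toy linear "generator": image = G @ z (StyleGAN stand-in, illustrative only).
G = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
              [0, 1, 1], [1, 0, 1], [1, 1, 1], [0, 0, 0]], float)
z_true = np.array([1.0, -2.0, 0.5])
image = G @ z_true                          # ground-truth "face image"
mask = np.array([1] * 5 + [0] * 3, float)   # first 5 pixels known, last 3 occluded

z = np.zeros(3)                             # latent init (a ResNet encoder in the paper)
for _ in range(400):
    residual = mask * (G @ z - image)       # loss computed only on known pixels
    z -= 0.2 * G.T @ residual               # gradient step on 0.5 * ||residual||^2
restored = G @ z                            # inpainted image, missing pixels filled
```

Because the known pixels determine the latent uniquely here, the recovered latent also reproduces the occluded pixels; in the paper, a final Poisson fusion then hides any remaining boundary color mismatch.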

28 pages, 10997 KiB  
Article
Design of Three-Dimensional Virtual Simulation Experiment Platform for Integrated Circuit Course
by Ziliang Lai, Yansong Cui, Tonggang Zhao and Qiang Wu
Electronics 2022, 11(9), 1437; https://doi.org/10.3390/electronics11091437 - 29 Apr 2022
Cited by 2 | Viewed by 2682
Abstract
Integrated circuits (ICs) are a subject in which researchers need practical experience, but IC experiments are costly and risky and are not easy to carry out on a large scale. This paper designs a three-dimensional integrated circuit virtual experiment platform based on Unity3d. The platform uses Unity3d and 3ds Max to build three-dimensional models of the instruments, equipment, electronic components, and ultra-clean-room laboratory scenes in an integrated circuit experiment. In addition, it uses C# scripts to develop the functions and three-dimensional simulation of general virtual instruments and equipment, and it deploys the experimental website using the jspxcms open-source framework, hosting the three-dimensional WebGL build in the cloud. Students can use video and text materials to acquire basic IC knowledge at any time and conduct IC virtual experiments safely, efficiently, and without constraints. The platform has been tested and used in teaching and has received high praise and recognition from students.

22 pages, 8059 KiB  
Article
A Novel Method for Tunnel Digital Twin Construction and Virtual-Real Fusion Application
by Zhaohui Wu, Ying Chang, Qing Li and Rongbin Cai
Electronics 2022, 11(9), 1413; https://doi.org/10.3390/electronics11091413 - 28 Apr 2022
Cited by 16 | Viewed by 4506
Abstract
Tunnels play an important role in integrated transport infrastructure. A digital twin reproduces a real tunnel scene in virtual space and provides new means for digital tunnel maintenance. To address the existing problems of video fragmentation, separation of video and business data, and a lack of two- and three-dimensional linkage response methods in digital tunnel operations, in this paper we propose a novel method for tunnel digital twin construction and virtual-real fusion operation. First, the digital management requirements of tunnel operations are systematically analyzed to clarify the purpose of digital twin construction. Second, BIM technology is used to construct a static model of the tunnel scene that conforms to the real tunnel's main structure. Third, a three-dimensional registration and projection calculation method is proposed to integrate tunnel surveillance video into the three-dimensional virtual scene in real time. Fourth, multi-source sensing data are gathered and fused to form a digital twin scene that closely matches the real tunnel traffic operations scene. Finally, a management model suitable for digital twins is discussed to improve the efficiency of tunnel operations and management, and a tunnel in China is selected to verify the method. The results show that the proposed method helps realize two- and three-dimensional linkage applications for smooth tunnel traffic, accident rescue, facility management, and emergency response, and improves the efficiency of digital tunnel management.
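At its core, the registration/projection step that maps points of the 3D tunnel scene into surveillance-camera image coordinates is a pinhole-camera projection. A minimal sketch with assumed intrinsics follows; the paper's actual calibration and registration procedure is more involved.

```python
def project(point, f=800.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera coordinates, z forward) onto the image
    plane of a pinhole camera: focal length f in pixels, principal
    point (cx, cy). Intrinsic values here are illustrative assumptions."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z + cx, f * y / z + cy)

# A point on the camera's optical axis lands on the principal point;
# off-axis points are scaled by f / z (perspective division).
center = project((0.0, 0.0, 5.0))
offaxis = project((1.0, 2.0, 2.0))
```

In a virtual-real fusion system, the inverse of this mapping (plus the camera pose) is what lets each video pixel be textured onto the correct patch of the 3D tunnel model.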

14 pages, 25324 KiB  
Article
Convincing 3D Face Reconstruction from a Single Color Image under Occluded Scenes
by Dapeng Zhao, Jinkang Cai and Yue Qi
Electronics 2022, 11(4), 543; https://doi.org/10.3390/electronics11040543 - 11 Feb 2022
Cited by 3 | Viewed by 2814
Abstract
The last few years have witnessed the great success of generative adversarial networks (GANs) in synthesizing high-quality photorealistic face images. Many recent 3D facial texture reconstruction works pursue higher resolutions but ignore occlusion. We study the problem of detailed 3D facial reconstruction under occluded scenes. This is challenging, as collecting a large-scale, high-resolution 3D face dataset is still very costly. In this work, we propose a deep learning-based approach for detailed 3D face reconstruction that does not require large-scale 3D datasets. Motivated by generative face image inpainting and weakly supervised 3D deep reconstruction, we propose a contour-guided method for generating complete 3D face models. Our weakly supervised 3D reconstruction framework can generate convincing 3D models. We further test our method on the MICC Florence and LFW datasets, showing its strong generalization capacity and superior performance.

21 pages, 4819 KiB  
Article
An Interactive Self-Learning Game and Evolutionary Approach Based on Non-Cooperative Equilibrium
by Yan Li, Mengyu Zhao, Huazhi Zhang, Fuling Yang and Suyu Wang
Electronics 2021, 10(23), 2977; https://doi.org/10.3390/electronics10232977 - 29 Nov 2021
Cited by 3 | Viewed by 1602
Abstract
Most current studies on deep learning-based multi-agent evolution adopt a cooperative equilibrium strategy, and interactive self-learning is not always considered. We propose an interactive self-learning game and evolution method based on non-cooperative equilibrium (ISGE-NCE) that combines the benefits of game theory and interactive learning for multi-agent confrontation evolution. A generative adversarial network (GAN) is designed in combination with multi-agent interactive self-learning, and the non-cooperative equilibrium strategy is adopted within the interactive self-learning framework, aiming at high evolution efficiency. For assessment, three typical multi-agent confrontation experiments were designed and conducted. The results show that, first, in terms of training speed, ISGE-NCE achieves a training convergence rate at least 46.3% higher than that of a method without interactive self-learning. Second, the evolution rates of the interference and detection agents reach 60% and 80%, respectively, after training with our method. In the three experiment scenarios, compared with DDPG, ISGE-NCE improves multi-agent evolution effectiveness by 43.4%, 50%, and 20%, respectively, at low training cost. These results demonstrate the significant superiority of ISGE-NCE in swarm intelligence.

15 pages, 7083 KiB  
Article
Real-Time Application of Computer Graphics Improvement Techniques Using Hyperspectral Textures in a Virtual Reality System
by Francisco Díaz-Barrancas, Halina Cwierz and Pedro J. Pardo
Electronics 2021, 10(22), 2852; https://doi.org/10.3390/electronics10222852 - 19 Nov 2021
Cited by 2 | Viewed by 2368
Abstract
Virtual reality technology needs continual improvement and new techniques to enable rapid progress and innovative development. Today's virtual reality devices have not yet demonstrated the great potential they could reach in the future. One main reason is the lack of precision in representing three-dimensional scenes with fidelity comparable to what our visual system obtains from the real world. A central problem is the representation of images using the RGB color system, a digital colorimetry system with many limitations when it comes to representing images faithfully. In this work, we incorporate hyperspectral textures into a virtual reality environment. Based on these hyperspectral textures, our contribution aims to improve the fidelity of chromatic representation, especially when the lighting conditions of the scenes and their precision are relevant. We present the steps followed to render three-dimensional objects with hyperspectral textures within a virtual reality scenario, and we verify the results by calculating the chromaticity coordinates of known samples.
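The verification step — computing chromaticity coordinates from spectral data — amounts to integrating the spectral power distribution against color-matching functions and normalizing. A minimal sketch follows; the CMF samples in the example are placeholders, not the real CIE 1931 tables.

```python
def chromaticity(spd, cmf):
    """Compute CIE chromaticity (x, y) from a sampled spectral power
    distribution and colour-matching functions at the same wavelengths.

    spd: list of spectral power samples
    cmf: list of (xbar, ybar, zbar) samples (real use would take these
         from the CIE 1931 standard observer tables)
    """
    X = sum(p * x for p, (x, _, _) in zip(spd, cmf))
    Y = sum(p * y for p, (_, y, _) in zip(spd, cmf))
    Z = sum(p * z for p, (_, _, z) in zip(spd, cmf))
    s = X + Y + Z
    return (X / s, Y / s)

# Toy check: a spectrum whose tristimulus values come out equal must land
# on the white point (1/3, 1/3).
white = chromaticity([1.0, 1.0], [(1, 0, 0), (0, 1, 1)])
```

Comparing such computed coordinates for known samples against their reference values is one way to quantify the chromatic fidelity gain of hyperspectral textures over plain RGB ones.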

14 pages, 1671 KiB  
Article
FaceVAE: Generation of a 3D Geometric Object Using Variational Autoencoders
by Sungsoo Park and Hyeoncheol Kim
Electronics 2021, 10(22), 2792; https://doi.org/10.3390/electronics10222792 - 14 Nov 2021
Cited by 4 | Viewed by 3646
Abstract
Deep learning for 3D data has become a popular research theme in many fields. However, most research on 3D data is based on voxels, 2D images, and point clouds. At actual industrial sites, face-based geometry data are used, but their direct application remains limited due to a lack of existing research. In this study, to overcome these limitations, we present a face-based variational autoencoder (FVAE) model that generates 3D geometry directly from face-based geometric data using a variational autoencoder (VAE). Our model improves on the existing node- and edge-based adjacency matrix, optimizing it for geometric learning with a face- and edge-based adjacency matrix that follows the 3D geometry structure. In experiments, we generated adjacency matrix information with 72% precision and 69% recall through end-to-end learning of face-based 3D geometry. In addition, we present various structurization methods for unstructured 3D geometry, compare their performance, and experimentally verify that the method effectively reconstructs the learned structured data.
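A face- and edge-based adjacency matrix of the kind the paper builds (in place of a node/edge matrix) can be sketched by marking two triangles as adjacent when they share an edge. The construction below is a plausible reading of that idea, not the authors' exact code.

```python
from collections import defaultdict

def face_adjacency(faces):
    """Build a face-based adjacency matrix for a triangle mesh.

    faces: list of triangles given as vertex-index triples.
    Two faces are adjacent when they share an edge (a sketch of the
    face/edge adjacency used for face-based geometric learning).
    """
    edge_to_faces = defaultdict(list)
    for fi, tri in enumerate(faces):
        for k in range(3):
            # Canonical (sorted) vertex pair so (1,2) and (2,1) match.
            edge = tuple(sorted((tri[k], tri[(k + 1) % 3])))
            edge_to_faces[edge].append(fi)
    n = len(faces)
    adj = [[0] * n for _ in range(n)]
    for shared in edge_to_faces.values():
        for a in shared:
            for b in shared:
                if a != b:
                    adj[a][b] = 1
    return adj

# Two triangles sharing edge (1,2), plus one disconnected triangle.
adj = face_adjacency([(0, 1, 2), (1, 2, 3), (3, 4, 5)])
```

Such a matrix can then play the role the node/edge adjacency plays in graph-based autoencoders, with rows indexed by faces rather than vertices.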
