Article

Supervised Learning Based Peripheral Vision System for Immersive Visual Experiences for Extended Display

1 School of Electronics Engineering, IT College, Kyungpook National University, 1370 Sankyuk-dong, Buk-gu, Daegu 41566, Korea
2 Haptics, Human-Robotics and Condition Monitoring Lab (National Center of Robotics and Automation), NED University of Engineering and Technology, Karachi 75270, Pakistan
3 Department of Electrical Engineering, NED University of Engineering and Technology, Karachi 75270, Pakistan
4 Research Center for Neurosurgical Robotic System, Kyungpook National University, 1370 Sankyuk-dong, Buk-gu, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Academic Editor: Jiro Tanaka
Appl. Sci. 2021, 11(11), 4726; https://doi.org/10.3390/app11114726
Received: 27 March 2021 / Revised: 27 April 2021 / Accepted: 8 May 2021 / Published: 21 May 2021
(This article belongs to the Special Issue Deep Image Semantic Segmentation and Recognition)
Abstract: Video display content can be extended to the walls of the living room around the TV using projection. Automatically generating appropriate projection content is a hard problem, which we address with a deep neural network. We propose a peripheral vision system that provides immersive visual experiences to the user by extending the video content using deep learning and projecting the extended content around the TV screen. A user could manually create suitable peripheral content for the existing TV screen, but doing so is prohibitively expensive. The PCE (pixel context encoder) network takes the center of a video frame as input and the surrounding area as output, learning to extend the content through supervised learning. The proposed system is expected to pave a new road for the home appliance industry, transforming the living room into a new immersive experience platform.
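As a rough illustration of the supervised extrapolation idea described in the abstract, the sketch below trains a toy encoder-decoder to predict a full frame from its center crop. This is only a minimal assumption-laden example in PyTorch: the class name PCENet, the crop ratio, the network depth, and the L1 loss are illustrative choices and are not taken from the paper's implementation.

```python
# Minimal sketch of PCE-style supervised training: the center of each video
# frame is the input, the full frame (including the periphery) is the target.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class PCENet(nn.Module):
    """Toy encoder-decoder that maps a center crop to a full-size frame."""
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, center):
        # Upsample the center crop to the full output resolution first,
        # then let the encoder-decoder fill in the peripheral content.
        full = nn.functional.interpolate(
            center, scale_factor=2, mode="bilinear", align_corners=False)
        return self.decoder(self.encoder(full))

def center_crop(frames, ratio=0.5):
    """Keep only the central `ratio` fraction of height and width."""
    _, _, h, w = frames.shape
    ch, cw = int(h * ratio), int(w * ratio)
    top, left = (h - ch) // 2, (w - cw) // 2
    return frames[:, :, top:top + ch, left:left + cw]

# Supervised training step: the full frame is the label, the crop is the input.
model = PCENet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

frames = torch.rand(4, 3, 128, 128)           # stand-in for real video frames
optimizer.zero_grad()
pred = model(center_crop(frames, ratio=0.5))  # predict the full extended frame
loss = loss_fn(pred, frames)                  # compare against ground truth
loss.backward()
optimizer.step()
```

In deployment, only the predicted peripheral region would be projected around the physical TV, while the original frame remains on the screen itself.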
Keywords: augmented video; human vision; immersion; large field of view; spatial augmented reality; video extrapolation; neural network; AI
MDPI and ACS Style

Shirazi, M.A.; Uddin, R.; Kim, M.-Y. Supervised Learning Based Peripheral Vision System for Immersive Visual Experiences for Extended Display. Appl. Sci. 2021, 11, 4726. https://doi.org/10.3390/app11114726

AMA Style

Shirazi MA, Uddin R, Kim M-Y. Supervised Learning Based Peripheral Vision System for Immersive Visual Experiences for Extended Display. Applied Sciences. 2021; 11(11):4726. https://doi.org/10.3390/app11114726

Chicago/Turabian Style

Shirazi, Muhammad Ayaz, Riaz Uddin, and Min-Young Kim. 2021. "Supervised Learning Based Peripheral Vision System for Immersive Visual Experiences for Extended Display" Applied Sciences 11, no. 11: 4726. https://doi.org/10.3390/app11114726

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
