Search Results (2)

Search Parameters:
Keywords = novel view video synthesis

14 pages, 5182 KiB  
Article
Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints
by Baigan Zhao, Yingping Huang, Wenyan Ci and Xing Hu
Sensors 2022, 22(4), 1383; https://doi.org/10.3390/s22041383 - 11 Feb 2022
Cited by 8 | Viewed by 3084
Abstract
This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion estimation from monocular video. The framework exploits optical flow (OF) properties to jointly train the depth and ego-motion models. Unlike existing unsupervised methods, our method extracts features from the optical flow rather than from the raw RGB images, thereby enhancing unsupervised learning. In addition, we exploit a forward-backward consistency check on the optical flow to generate a mask of invalid image regions and thereby exclude outlier regions, such as occlusions and moving objects, from learning. Furthermore, in addition to the view-synthesis supervision signal, we impose optical flow consistency and depth consistency losses on the valid image region to further enhance training. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms other unsupervised methods.
(This article belongs to the Section Sensing and Imaging)
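
The forward-backward flow check and the composite loss described in this abstract can be illustrated with a short sketch. The snippet below is a minimal NumPy approximation, assuming nearest-neighbour flow warping, illustrative threshold constants, and made-up loss weights; it is not the authors' implementation.

```python
# Minimal sketch of a forward-backward optical flow consistency mask and a
# masked composite loss; constants and names are illustrative assumptions.
import numpy as np

def warp_flow(flow_bwd, flow_fwd):
    """Sample the backward flow at the positions reached by the forward flow
    (nearest-neighbour sampling for brevity)."""
    h, w, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    return flow_bwd[ys2, xs2]

def validity_mask(flow_fwd, flow_bwd, alpha=0.01, beta=0.5):
    """Pixels where forward and warped backward flow roughly cancel are kept;
    the rest (occlusions, moving objects) are excluded from the losses."""
    flow_bwd_w = warp_flow(flow_bwd, flow_fwd)
    diff = np.sum((flow_fwd + flow_bwd_w) ** 2, axis=-1)
    bound = alpha * (np.sum(flow_fwd ** 2, axis=-1) +
                     np.sum(flow_bwd_w ** 2, axis=-1)) + beta
    return diff < bound  # boolean (H, W) mask of valid pixels

def total_loss(l_photo, l_flow_consist, l_depth_consist, mask,
               w_flow=0.1, w_depth=0.1):
    """Combine per-pixel view-synthesis, flow-consistency, and
    depth-consistency losses over the valid region (weights assumed)."""
    m = mask.astype(np.float32)
    denom = m.sum() + 1e-8
    return ((l_photo * m).sum() +
            w_flow * (l_flow_consist * m).sum() +
            w_depth * (l_depth_consist * m).sum()) / denom
```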

22 pages, 32604 KiB  
Article
Object-Wise Video Editing
by Ashraf Siddique and Seungkyu Lee
Appl. Sci. 2021, 11(2), 671; https://doi.org/10.3390/app11020671 - 12 Jan 2021
Cited by 3 | Viewed by 3453
Abstract
Beyond time-frame editing of video data, object-level video editing, such as object removal or viewpoint change, is a challenging task. These tasks involve dynamic object segmentation, novel view video synthesis, and background inpainting. Background inpainting reconstructs the regions revealed by object removal or viewpoint change. In this paper, we propose a video editing method comprising foreground object removal, background inpainting, and novel view video synthesis under challenging conditions such as complex visual patterns, occlusion, overlaid clutter, and depth variation with a moving camera. Our method calculates a weighted confidence score based on the normalized difference between the observed depth and the predicted distance in 3D space. A set of candidate points along epipolar lines in neighboring frames is collected, refined, and weighted to select a small number of highly qualified observations for filling the desired region of interest in the current frame. Building on this background inpainting, novel view video synthesis is conducted from arbitrary viewpoints. Our method is evaluated on both a public dataset and our own video clips and compared with multiple state-of-the-art methods, showing superior performance.
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology Ⅱ)
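
The confidence weighting at the core of this abstract can likewise be sketched. The snippet below is a rough NumPy illustration, assuming a Gaussian squashing of the normalized depth difference and a simple top-k blend; the function names, parameters, and example data are hypothetical, not taken from the paper.

```python
# Rough sketch: weight candidate observations gathered along epipolar lines by
# how well their observed depth matches the geometrically predicted distance,
# then blend the most confident ones to fill one target pixel.
import numpy as np

def candidate_confidence(observed_depth, predicted_dist, sigma=0.05):
    """Higher confidence when observed depth agrees with the predicted
    distance; the normalized difference is squashed by a Gaussian (assumed)."""
    norm_diff = np.abs(observed_depth - predicted_dist) / (predicted_dist + 1e-8)
    return np.exp(-(norm_diff ** 2) / (2 * sigma ** 2))

def select_best_candidates(colors, observed_depth, predicted_dist, k=3):
    """Keep the k most trustworthy observations and blend their colors by
    confidence to fill one pixel of the inpainted background."""
    conf = candidate_confidence(observed_depth, predicted_dist)
    top = np.argsort(conf)[-k:]
    weights = conf[top] / (conf[top].sum() + 1e-8)
    return (colors[top] * weights[:, None]).sum(axis=0)

# Example with five synthetic candidate observations for one target pixel.
colors = np.random.rand(5, 3)                      # RGB samples from neighbor frames
observed = np.array([2.0, 2.1, 3.5, 2.05, 2.0])    # depths measured at candidates
predicted = np.full(5, 2.0)                        # distance expected from geometry
print(select_best_candidates(colors, observed, predicted, k=3))
```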
