Search Results (16)

Search Parameters:
Keywords = DIBR

22 pages, 2541 KiB  
Article
Channel Interaction Mamba-Guided Generative Adversarial Network for Depth-Image-Based Rendering 3D Image Watermarking
by Qingmo Chen, Zhongxing Sun, Rui Bai and Chongchong Jin
Electronics 2025, 14(10), 2050; https://doi.org/10.3390/electronics14102050 - 18 May 2025
Viewed by 457
Abstract
In the field of 3D technology, depth-image-based rendering (DIBR) has been widely adopted due to its inherent advantages including low data volume and strong compatibility. However, during network transmission of DIBR 3D images, both center and virtual views are susceptible to unauthorized copying and distribution. To protect the copyright of these images, this paper proposes a channel interaction mamba-guided generative adversarial network (CIMGAN) for DIBR 3D image watermarking. To capture cross-modal feature dependencies, a channel interaction mamba (CIM) is designed. This module enables lightweight cross-modal channel interaction through a channel exchange mechanism and leverages mamba for global modeling of RGB and depth information. In addition, a feature fusion module (FFM) is devised to extract complementary information from cross-modal features and eliminate redundant information, ultimately generating high-quality 3D image features. These features are used to generate an attention map, enhancing watermark invisibility and identifying robust embedding regions. Compared to the current state-of-the-art (SOTA) 3D image watermarking methods, the proposed watermark model shows superior performance in terms of robustness and invisibility while maintaining computational efficiency. Full article
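
The channel-exchange idea at the heart of the CIM module can be pictured as swapping a subset of channels between the RGB and depth feature maps before each branch continues. The following minimal NumPy sketch illustrates only that exchange step under the assumption of a fixed channel fraction; the paper's learned selection criterion, Mamba blocks, and GAN training are not reproduced here.

```python
import numpy as np

def channel_exchange(feat_rgb, feat_depth, ratio=0.25):
    """Swap the first `ratio` fraction of channels between two (C, H, W) feature maps.

    A toy stand-in for cross-modal channel interaction: the actual CIM module
    would select channels with a learned criterion rather than by fixed position.
    """
    assert feat_rgb.shape == feat_depth.shape
    k = max(1, int(feat_rgb.shape[0] * ratio))       # number of channels to exchange
    out_rgb, out_depth = feat_rgb.copy(), feat_depth.copy()
    out_rgb[:k], out_depth[:k] = feat_depth[:k].copy(), feat_rgb[:k].copy()
    return out_rgb, out_depth

# toy usage: 64-channel feature maps for a 32x32 image
rgb_feat = np.random.rand(64, 32, 32).astype(np.float32)
depth_feat = np.random.rand(64, 32, 32).astype(np.float32)
rgb_x, depth_x = channel_exchange(rgb_feat, depth_feat)
```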

15 pages, 292 KiB  
Article
Identity Formation in Individuals between 16 and 25 Years Old with Borderline Personality Disorder
by Anaïs Mungo, Marie Delhaye, Camille Blondiau and Matthieu Hein
J. Clin. Med. 2024, 13(11), 3221; https://doi.org/10.3390/jcm13113221 - 30 May 2024
Viewed by 2141
Abstract
Background/Objectives: Identity disruption is a key feature of borderline personality disorder (BPD), characterized by disturbances in self-image. This study aimed to use the Dimensions of Identity Development Scale (DIDS) in a population aged 16–25 to assess differences in identity status and their correlations with BPD features, as well as whether the BPD features correlate with the total DIDS score and with the scores of its different dimensions. Methods: We analyzed data from 132 individuals, 44 of whom had BPD according to the Diagnostic Interview for Borderlines-Revised (DIB-R). Statistical analyses included quantile regression to determine differences in the DIDS after adjusting for confounding factors identified during group comparisons, and Spearman correlations between the DIDS, the BPD features and the DIB-R. Results: Results indicated significantly lower DIDS scores in the BPD group, particularly in commitment making, exploration breadth (EB), identity with commitment (IM) and ruminative exploration (RE). After adjustment, only EB differed significantly between the two groups. All dimensions of the DIDS except exploration in depth (ED) were correlated with BPD features. Significant correlations were demonstrated between the cognitive dimension and ED, between the total DIDS and the number of suicide attempts (SA), and between IM and the number of SA. Conclusions: Our clinical sample showed distinct identity formation compared to controls, with lower EB associated with BPD. RE correlated with BPD, suggesting that these individuals engage in repetitive exploratory processes. SA was negatively associated with overall identity development and commitment, indicating that impulsive behaviors in BPD intersect with identity struggles.
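
For readers unfamiliar with the statistical tools named in the Methods (quantile regression adjusted for confounders and Spearman correlations), the sketch below shows how such an analysis is typically set up with statsmodels and SciPy. The data and column names are synthetic placeholders, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 132
# synthetic stand-in data; column names are illustrative, not the study's variables
df = pd.DataFrame({
    "bpd": rng.integers(0, 2, n),              # 1 = BPD group, 0 = control
    "age": rng.integers(16, 26, n),
})
df["dids_total"] = 60 - 8 * df["bpd"] + rng.normal(0, 10, n)   # lower scores in the BPD group
df["dib_r"] = 2 + 5 * df["bpd"] + rng.normal(0, 1.5, n)

# median (q = 0.5) quantile regression of the DIDS total on group, adjusted for age
fit = smf.quantreg("dids_total ~ bpd + age", data=df).fit(q=0.5)
print(fit.params)

# Spearman correlation between the DIDS total and the DIB-R score
rho, p = stats.spearmanr(df["dids_total"], df["dib_r"])
print(rho, p)
```
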
19 pages, 15710 KiB  
Article
Adaptable 2D to 3D Stereo Vision Image Conversion Based on a Deep Convolutional Neural Network and Fast Inpaint Algorithm
by Tomasz Hachaj
Entropy 2023, 25(8), 1212; https://doi.org/10.3390/e25081212 - 15 Aug 2023
Cited by 2 | Viewed by 5212
Abstract
Algorithms for converting 2D to 3D are gaining importance following the hiatus brought about by the discontinuation of 3D TV production; this is due to the high availability and popularity of virtual reality systems that use stereo vision. In this paper, several depth image-based rendering (DIBR) approaches using state-of-the-art single-frame depth generation neural networks and inpaint algorithms are proposed and validated, including a novel very fast inpaint (FAST). FAST significantly exceeds the speed of currently used inpaint algorithms by reducing computational complexity, without degrading the quality of the resulting image. The role of the inpaint algorithm is to fill in missing pixels in the stereo pair estimated by DIBR. Missing estimated pixels appear at the boundaries of areas that differ significantly in their estimated distance from the observer. In addition, we propose parameterizing DIBR using a singular, easy-to-interpret adaptable parameter that can be adjusted online according to the preferences of the user who views the visualization. This single parameter governs both the camera parameters and the maximum binocular disparity. The proposed solutions are also compared with a fully automatic 2D to 3D mapping solution. The algorithm proposed in this work, which features intuitive disparity steering, the foundational deep neural network MiDaS, and the FAST inpaint algorithm, received considerable acclaim from evaluators. The mean absolute error of the proposed solution does not contain statistically significant differences from state-of-the-art approaches like Deep3D and other DIBR-based approaches using different inpaint functions. Since both the source codes and the generated videos are available for download, all experiments can be reproduced, and one can apply our algorithm to any selected video or single image to convert it. Full article
(This article belongs to the Special Issue Deep Learning Models and Applications to Computer Vision)
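
The single adaptable parameter described in the abstract essentially scales how far each pixel is shifted horizontally as a function of its estimated depth. Below is a minimal, assumption-laden sketch of that forward-warping step: a relative depth map (for example, from a monocular network such as MiDaS) is turned into per-pixel disparities capped by one user parameter, and a hole mask is returned for a subsequent inpainting step. It is not the paper's implementation and omits the FAST inpaint.

```python
import numpy as np

def render_right_view(image, depth, max_disparity=24.0):
    """Forward-warp an (H, W, 3) image into a right-eye view from a relative depth map.

    `depth` is assumed larger-is-closer; `max_disparity` is the single adjustable
    parameter controlling the strength of the 3D effect. Returns the warped view
    and a boolean hole mask for later inpainting.
    """
    h, w = depth.shape
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    disparity = np.round(d * max_disparity).astype(np.int32)
    right = np.zeros_like(image)
    zbuf = np.full((h, w), -1, dtype=np.int32)        # keep the closest contributor per target pixel
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]                  # closer pixels shift further
            if 0 <= xr < w and disparity[y, x] > zbuf[y, xr]:
                right[y, xr] = image[y, x]
                zbuf[y, xr] = disparity[y, x]
    return right, zbuf < 0                            # holes are never-written pixels
```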

21 pages, 13912 KiB  
Article
Deep Learning-Based Synthesized View Quality Enhancement with DIBR Distortion Mask Prediction Using Synthetic Images
by Huan Zhang, Jiangzhong Cao, Dongsheng Zheng, Ximei Yao and Bingo Wing-Kuen Ling
Sensors 2022, 22(21), 8127; https://doi.org/10.3390/s22218127 - 24 Oct 2022
Cited by 6 | Viewed by 3074
Abstract
Recently, deep learning-based image quality enhancement models have been proposed to improve the perceptual quality of distorted synthesized views impaired by compression and the Depth Image-Based Rendering (DIBR) process in a multi-view video system. However, due to the lack of Multi-view Video plus Depth (MVD) data, the training data for quality enhancement models is small, which limits the performance and progress of these models. Augmenting the training data to enhance the synthesized view quality enhancement (SVQE) models is a feasible solution. In this paper, a deep learning-based SVQE model using more synthetic synthesized view images (SVIs) is suggested. To simulate the irregular geometric displacement of DIBR distortion, a random irregular polygon-based SVI synthesis method is proposed based on existing massive RGB/RGBD data, and a synthetic synthesized view database is constructed, which includes synthetic SVIs and the DIBR distortion mask. Moreover, to further guide the SVQE models to focus more precisely on DIBR distortion, a DIBR distortion mask prediction network which could predict the position and variance of DIBR distortion is embedded into the SVQE models. The experimental results on public MVD sequences demonstrate that the PSNR performance of the existing SVQE models, e.g., DnCNN, NAFNet, and TSAN, pre-trained on NYU-based synthetic SVIs could be greatly promoted by 0.51-, 0.36-, and 0.26 dB on average, respectively, while the MPPSNRr performance could also be elevated by 0.86, 0.25, and 0.24 on average, respectively. In addition, by introducing the DIBR distortion mask prediction network, the SVI quality obtained by the DnCNN and NAFNet pre-trained on NYU-based synthetic SVIs could be further enhanced by 0.02- and 0.03 dB on average in terms of the PSNR and 0.004 and 0.121 on average in terms of the MPPSNRr. Full article
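
The data-augmentation idea above, stamping DIBR-like local geometric displacement into ordinary RGB images inside random irregular polygons, can be sketched in a few lines with OpenCV. The polygon statistics, displacement model, and mask format below are assumptions for illustration, not the paper's construction of its synthetic database.

```python
import numpy as np
import cv2

def random_polygon_mask(h, w, n_vertices=8, radius=40, rng=None):
    """Binary (H, W) mask containing one random irregular polygon."""
    rng = rng or np.random.default_rng()
    cx, cy = rng.integers(radius, w - radius), rng.integers(radius, h - radius)
    angles = np.sort(rng.uniform(0, 2 * np.pi, n_vertices))
    radii = rng.uniform(0.3 * radius, radius, n_vertices)
    pts = np.stack([cx + radii * np.cos(angles), cy + radii * np.sin(angles)], axis=1)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [pts.astype(np.int32)], 1)
    return mask

def synthesize_svi(rgb, rng=None):
    """Simulate DIBR-like local geometric displacement inside a random polygon region."""
    mask = random_polygon_mask(*rgb.shape[:2], rng=rng)
    shifted = np.roll(rgb, shift=5, axis=1)            # crude horizontal displacement
    out = np.where(mask[..., None] == 1, shifted, rgb)
    return out, mask                                   # distorted image and its distortion mask
```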

16 pages, 3127 KiB  
Article
Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception
by Haozhi Shi, Lanmei Wang and Guibao Wang
Sensors 2022, 22(18), 7081; https://doi.org/10.3390/s22187081 - 19 Sep 2022
Cited by 3 | Viewed by 2045
Abstract
The quality of synthesized images directly affects the practical application of virtual view synthesis technology, which typically uses a depth-image-based rendering (DIBR) algorithm to generate a new viewpoint based on texture and depth images. Current view synthesis quality metrics commonly evaluate the quality of DIBR-synthesized images, where the DIBR process is computationally expensive and time-consuming. In addition, the existing view synthesis quality metrics cannot achieve robustness due to the shallow hand-crafted features. To avoid the complicated DIBR process and learn more efficient features, this paper presents a blind quality prediction model for view synthesis based on HEterogeneous DIstortion Perception, dubbed HEDIP, which predicts the image quality of view synthesis from texture and depth images. Specifically, the texture and depth images are first fused based on discrete cosine transform to simulate the distortion of view synthesis images, and then the spatial and gradient domain features are extracted in a Two-Channel Convolutional Neural Network (TCCNN). Finally, a fully connected layer maps the extracted features to a quality score. Notably, the ground-truth score of the source image cannot effectively represent the labels of each image patch during training due to the presence of local distortions in view synthesis image. So, we design a Heterogeneous Distortion Perception (HDP) module to provide effective training labels for each image patch. Experiments show that with the help of the HDP module, the proposed model can effectively predict the quality of view synthesis. Experimental results demonstrate the effectiveness of the proposed model. Full article
(This article belongs to the Section Intelligent Sensors)
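
The first stage described above, fusing the texture and depth images in the discrete cosine transform domain before feeding a two-channel CNN, can be illustrated with a simple coefficient blend. The weighting rule below is an assumption for demonstration; the HEDIP fusion rule, the TCCNN, and the HDP labeling module are not reproduced.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(a):
    return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(a):
    return idct(idct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

def fuse_texture_depth(texture, depth, alpha=0.7):
    """Fuse grayscale texture and depth maps by blending their 2D DCT coefficients.

    A simple weighted combination; the paper's exact fusion rule for simulating
    view-synthesis distortion is not reproduced here.
    """
    ct = dct2(texture.astype(np.float64))
    cd = dct2(depth.astype(np.float64))
    return idct2(alpha * ct + (1.0 - alpha) * cd)
```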

23 pages, 2823 KiB  
Article
Modification of the DIBR and MABAC Methods by Applying Rough Numbers and Its Application in Making Decisions
by Duško Tešić, Marko Radovanović, Darko Božanić, Dragan Pamucar, Aleksandar Milić and Adis Puška
Information 2022, 13(8), 353; https://doi.org/10.3390/info13080353 - 25 Jul 2022
Cited by 19 | Viewed by 3024
Abstract
This study considers the problem of selecting an anti-tank missile system (ATMS). The mentioned problem is solved by applying a hybrid multi-criteria decision-making model (MCDM) based on two methods: the DIBR (Defining Interrelationships Between Ranked criteria) and the MABAC (Multi-Attributive Border Approximation area Comparison) methods. The methods are modified by applying rough numbers, which present a very suitable area for considering uncertainty following decision-making processes. The DIBR method is a young method with a simple mathematical apparatus which is based on defining the relation between ranked criteria, that is, adjacent criteria, reducing the number of comparisons. This method defines weight coefficients of criteria, based on the opinion of experts. The MABAC method is used to select the best alternative from the set of the offered ones, based on the distance of the criteria function of every observed alternative from the border approximate area. The paper has two main innovations. With the presented decision-making support model, the ATMS selection problem is raised to a higher level, which is based on a proven mathematical apparatus. In terms of methodology, the main innovation is successful application of the rough DIBR method, which has not been treated in this way in the literature so far. Additionally, an analysis of the literature related to the research problem as well as to the methods used is carried out. After the application of the model, the sensitivity analysis of the output results of the presented model to the change of the weight coefficients of criteria is performed, as well as the comparison of the results of the presented model with other methods. Finally, the proposed model is concluded to be stable and multi-criteria decision-making methods can be a reliable tool to help decision makers in the selection process. The presented model has the potential of being applied in other case studies as it has proven to be a good means for considering uncertainty. Full article
(This article belongs to the Special Issue Intelligent Information Technology)
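
As a purely structural illustration of weighting from relations between adjacent ranked criteria, the snippet below assumes each expert relation is given directly as the ratio of the next criterion's weight to the current one and then normalizes the resulting chain. The actual DIBR relation definitions, their rough-number extension, and the MABAC ranking step used in the paper differ and are not reproduced here.

```python
import numpy as np

def weights_from_adjacent_ratios(ratios):
    """Derive normalized criterion weights from ratios r_i = w_{i+1} / w_i
    between adjacent ranked criteria (an illustrative simplification of
    adjacent-criteria weighting, not the DIBR formulas themselves)."""
    w = [1.0]
    for r in ratios:                 # chain the ratios: w_{i+1} = r_i * w_i
        w.append(w[-1] * r)
    w = np.array(w)
    return w / w.sum()               # normalize so the weights sum to 1

# five criteria ranked C1 >= ... >= C5, each next criterion 80% as important as the previous
print(weights_from_adjacent_ratios([0.8, 0.8, 0.8, 0.8]))
```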

40 pages, 4791 KiB  
Article
Palladium(II) Complexes of Substituted Salicylaldehydes: Synthesis, Characterization and Investigation of Their Biological Profile
by Ariadni Zianna, George Geromichalos, Augusta-Maria Fiotaki, Antonios G. Hatzidimitriou, Stavros Kalogiannis and George Psomas
Pharmaceuticals 2022, 15(7), 886; https://doi.org/10.3390/ph15070886 - 18 Jul 2022
Cited by 21 | Viewed by 3227
Abstract
Five palladium(II) complexes of substituted salicylaldehydes (X-saloH, X = 4-Et₂N (for 1), 3,5-diBr (for 2), 3,5-diCl (for 3), 5-F (for 4) or 4-OMe (for 5)) bearing the general formula [Pd(X-salo)₂] were synthesized and structurally characterized. The crystal structure of complex [Pd(4-Et₂N-salo)₂] was determined by single-crystal X-ray crystallography. The complexes can scavenge 1,1-diphenyl-picrylhydrazyl and 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) radicals and reduce H₂O₂. They are active against two Gram-positive (Staphylococcus aureus and Bacillus subtilis) and two Gram-negative (Escherichia coli and Xanthomonas campestris) bacterial strains. The complexes interact strongly with calf-thymus DNA via intercalation, as deduced by diverse techniques and via the determination of their binding constants. Complexes interact reversibly with bovine and human serum albumin. Complementary insights into their possible mechanisms of bioactivity at the molecular level were provided by molecular docking calculations, exploring in silico their ability to bind to calf-thymus DNA, Escherichia coli and Staphylococcus aureus DNA-gyrase, 5-lipoxygenase, and membrane transport lipid protein 5-lipoxygenase-activating protein, contributing to the understanding of the role complexes 1–5 can play both as antioxidant and antibacterial agents. Furthermore, in silico predictive tools have been employed to study the chemical reactivity, molecular properties and drug-likeness of the complexes, and also the drug-induced changes of gene expression profile (as protein- and mRNA-based prediction results), the sites of metabolism, the substrate/metabolite specificity, the cytotoxicity for cancer and non-cancer cell lines, the acute rat toxicity, the rodent organ-specific carcinogenicity, the anti-target interaction profiles, the environmental ecotoxicity, and finally the activity spectra profile of the compounds.
(This article belongs to the Special Issue Pd Derivatives in Drug Discovery)

10 pages, 7456 KiB  
Article
Quality Assessment of View Synthesis Based on Visual Saliency and Texture Naturalness
by Lijuan Tang, Kezheng Sun, Shuaifeng Huang, Guangcheng Wang and Kui Jiang
Electronics 2022, 11(9), 1384; https://doi.org/10.3390/electronics11091384 - 26 Apr 2022
Cited by 2 | Viewed by 2188
Abstract
Depth-Image-Based Rendering (DIBR) is one of the core techniques for generating new views in 3D video applications. However, the distortion characteristics of DIBR-synthesized views differ from those of 2D images. It is therefore necessary to study the unique distortion characteristics of DIBR views and to design effective and efficient algorithms that evaluate DIBR-synthesized images and guide DIBR algorithms. In this work, visual saliency and texture naturalness features are extracted to evaluate the quality of DIBR views. After extracting the features, we adopt a machine learning method to map them to a quality score for the DIBR views. Experiments conducted on two synthesized view databases, IETR and IRCCyN/IVC, show that the proposed algorithm performs better than the compared synthesized view quality evaluation methods.
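
The last step of this pipeline, mapping the extracted saliency and naturalness features to a quality score with a learned regressor, is commonly realized with support vector regression. The sketch below uses scikit-learn on random placeholder features and scores; the feature definitions and the regressor actually used in the paper are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((200, 36))            # placeholder feature vectors (e.g., saliency + naturalness statistics)
y = rng.random(200) * 5              # placeholder subjective quality scores (MOS/DMOS)

# scale the features, then fit an RBF support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
pred = model.predict(X[:5])          # predicted quality scores for five images
```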

20 pages, 7952 KiB  
Article
No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis
by Chongchong Jin, Zongju Peng, Wenhui Zou, Fen Chen, Gangyi Jiang and Mei Yu
Entropy 2021, 23(6), 770; https://doi.org/10.3390/e23060770 - 18 Jun 2021
Cited by 5 | Viewed by 2829
Abstract
Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, the inaccuracy of depth maps and imperfect DIBR techniques result in different geometric distortions that seriously deteriorate the users’ visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer features analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, bottom-up layer and top-down layer. The feature of salient distortion is measured by regional proportion plus transition threshold on a bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized attention and concentrated attention on top-down layers. By integrating the features of both bottom-up and top-down layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to the state-of-the-art in assessing the quality of 3D synthesized images. Full article
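
A simple way to picture the energy-entropy split described above is a block-wise entropy map: blocks whose intensity histograms carry high entropy are candidates for the bottom-up (salient) layer. The sketch below computes plain Shannon entropy per block as a stand-in; the paper's energy-entropy definition, transition threshold, and relative-total-variation step are not reproduced.

```python
import numpy as np

def block_entropy_map(gray, block=32, bins=64):
    """Shannon entropy of the intensity histogram in each non-overlapping block of a grayscale image."""
    h, w = gray.shape
    out = np.zeros((h // block, w // block))
    for by in range(out.shape[0]):
        for bx in range(out.shape[1]):
            patch = gray[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist[hist > 0] / hist.sum()              # probability mass of occupied bins
            out[by, bx] = -(p * np.log2(p)).sum()
    return out
```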

17 pages, 15764 KiB  
Article
Quality Assessment of 3D Synthesized Images Based on Textural and Structural Distortion Estimation
by Hafiz Muhammad Usama Hassan Alvi, Muhammad Shahid Farid, Muhammad Hassan Khan and Marcin Grzegorzek
Appl. Sci. 2021, 11(6), 2666; https://doi.org/10.3390/app11062666 - 17 Mar 2021
Cited by 2 | Viewed by 2585
Abstract
Emerging 3D-related technologies such as augmented reality, virtual reality, mixed reality, and stereoscopy have gained remarkable growth due to their numerous applications in the entertainment, gaming, and electromedical industries. In particular, 3D television (3DTV) and free-viewpoint television (FTV) enhance viewers’ television experience by providing immersion. They would need an infinite number of views to provide full parallax to the viewer, which is not practical due to various financial and technological constraints. Therefore, novel 3D views are generated from a set of available views and their depth maps using depth-image-based rendering (DIBR) techniques. The quality of a DIBR-synthesized image may be compromised for several reasons, e.g., inaccurate depth estimation. Since depth is important in this application, inaccuracies in depth maps lead to different textural and structural distortions that degrade the quality of the generated image and result in a poor quality of experience (QoE). Therefore, quality assessment of DIBR-generated images is essential to guarantee an acceptable QoE. This paper aims at estimating the quality of DIBR-synthesized images and proposes a novel 3D objective image quality metric. The proposed algorithm measures both textural and structural distortions in the DIBR image by exploiting the contrast sensitivity and the Hausdorff distance, respectively. The two measures are combined to estimate an overall quality score. The experimental evaluations performed on the benchmark MCL-3D dataset show that the proposed metric is reliable and accurate, and performs better than existing 2D and 3D quality assessment metrics.
(This article belongs to the Special Issue Advances in Perceptual Image Quality Metrics)
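
The structural term described above compares edge structures with the Hausdorff distance. A minimal sketch using Canny edges and SciPy's directed Hausdorff distance follows; the contrast-sensitivity texture term, any block-wise pooling, and the combination rule are left out and would be assumptions.

```python
import numpy as np
import cv2
from scipy.spatial.distance import directed_hausdorff

def edge_points(gray):
    """Coordinates (N, 2) of Canny edge pixels in a grayscale image."""
    edges = cv2.Canny(gray.astype(np.uint8), 100, 200)
    ys, xs = np.nonzero(edges)
    return np.stack([xs, ys], axis=1).astype(float)

def structural_distortion(ref_gray, syn_gray):
    """Symmetric Hausdorff distance between the edge maps of a reference and a synthesized view."""
    a, b = edge_points(ref_gray), edge_points(syn_gray)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```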

25 pages, 9089 KiB  
Article
Fast Hole Filling for View Synthesis in Free Viewpoint Video
by Hui-Yu Huang and Shao-Yu Huang
Electronics 2020, 9(6), 906; https://doi.org/10.3390/electronics9060906 - 29 May 2020
Cited by 8 | Viewed by 3843
Abstract
The recent emergence of three-dimensional (3D) movies and 3D television (TV) indicates an increasing interest in 3D content. Stereoscopic displays have enabled visual experiences to be enhanced, allowing the world to be viewed in 3D. Virtual view synthesis is the key technology to present 3D content, and depth image-based rendering (DIBR) is a classic virtual view synthesis method. With a texture image and its corresponding depth map, a virtual view can be generated using the DIBR technique. The depth and camera parameters are used to project the entire pixel in the image to the 3D world coordinate system. The results in the world coordinates are then reprojected into the virtual view, based on 3D warping. However, these projections will result in cracks (holes). Hence, we herein propose a new method of DIBR for free viewpoint videos to solve the hole problem due to these projection processes. First, the depth map is preprocessed to reduce the number of holes, which does not produce large-scale geometric distortions; subsequently, improved 3D warping projection is performed collectively to create the virtual view. A median filter is used to filter the hole regions in the virtual view, followed by 3D inverse warping blending to remove the holes. Next, brightness adjustment and adaptive image blending are performed. Finally, the synthesized virtual view is obtained using the inpainting method. Experimental results verify that our proposed method can produce a pleasant visibility of the synthetized virtual view, maintain a high peak signal-to-noise ratio (PSNR) value, and efficiently decrease execution time compared with state-of-the-art methods. Full article
(This article belongs to the Special Issue Multimedia Systems and Signal Processing)
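
One concrete piece of the pipeline above, cleaning small cracks left by 3D warping with a median filter applied only at hole pixels, can be written compactly. The sketch assumes a warped color view and a boolean hole mask are already available (for instance from a forward-warping step like the one sketched earlier in this list); the inverse-warping blending, brightness adjustment, and final inpainting stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def fill_cracks(warped, holes, size=3):
    """Fill small cracks in a warped view with a median filter applied only at hole pixels.

    `warped` is (H, W, 3); `holes` is a boolean (H, W) mask of missing pixels.
    Larger disocclusions survive this step and are left for inpainting.
    """
    out = warped.copy()
    for c in range(warped.shape[2]):
        med = median_filter(warped[:, :, c], size=size)
        chan = out[:, :, c]
        chan[holes] = med[holes]
    return out
```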

19 pages, 12068 KiB  
Article
Virtual View Synthesis Based on Asymmetric Bidirectional DIBR for 3D Video and Free Viewpoint Video
by Xiaodong Chen, Haitao Liang, Huaiyuan Xu, Siyu Ren, Huaiyu Cai and Yi Wang
Appl. Sci. 2020, 10(5), 1562; https://doi.org/10.3390/app10051562 - 25 Feb 2020
Cited by 13 | Viewed by 3045
Abstract
Depth image-based rendering (DIBR) plays an important role in 3D video and free viewpoint video synthesis. However, artifacts might occur in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes are usually out-of-field regions and disocclusions, and filling them appropriately becomes a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method is applied to detect and correct unreliable depth values around the foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only the selected regions are warped, which contain the contents that are not visible in the primary view. This approach reduces the computational cost and prevents irrelevant foreground pixels from being warped to the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with bidirectional DIBR, the proposed rendering method can reduce about 37% rendering time and achieve 97% hole reduction. In terms of visual quality and objective evaluation, our approach performs better than the previous methods. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

19 pages, 21424 KiB  
Article
Artifact Handling Based on Depth Image for View Synthesis
by Xiaodong Chen, Haitao Liang, Huaiyuan Xu, Siyu Ren, Huaiyu Cai and Yi Wang
Appl. Sci. 2019, 9(9), 1834; https://doi.org/10.3390/app9091834 - 3 May 2019
Cited by 6 | Viewed by 3240
Abstract
The depth image based rendering (DIBR) is a popular technology for 3D video and free viewpoint video (FVV) synthesis, by which numerous virtual views can be generated from a single reference view and its depth image. However, some artifacts are produced in the DIBR process and reduce the visual quality of virtual view. Due to the diversity of artifacts, effectively handling them becomes a challenging task. In this paper, an artifact handling method based on depth image is proposed. The reference image and its depth image are extended to fill the holes that belong to the out-of-field regions. A depth image preprocessing method is applied to project the ghosts to their correct place. The 3D warping process is optimized by an adaptive one-to-four method to deal with the cracks and pixel overlapping. For disocclusions, we calculate depth and background terms of the filling priority based on depth information. The search for the best matching patch is performed simultaneously in the reference image and the virtual image. Moreover, adaptive patch size is used in all hole-filling processes. Experimental results demonstrate the effectiveness of the proposed method, which has better performance compared with previous methods in subjective and objective evaluation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

21 pages, 2858 KiB  
Article
Robust Template-Based Watermarking for DIBR 3D Images
by Wook-Hyung Kim, Jong-Uk Hou, Han-Ul Jang and Heung-Kyu Lee
Appl. Sci. 2018, 8(6), 911; https://doi.org/10.3390/app8060911 - 1 Jun 2018
Cited by 8 | Viewed by 5035
Abstract
Several depth image-based rendering (DIBR) watermarking methods have been proposed, but they have various drawbacks, such as non-blindness, low imperceptibility and vulnerability to signal or geometric distortion. This paper proposes a template-based DIBR watermarking method that overcomes the drawbacks of previous methods. The proposed method exploits two properties to resist DIBR attacks: the pixel is only moved horizontally by DIBR, and the smaller block is not distorted by DIBR. The one-dimensional (1D) discrete cosine transform (DCT) and curvelet domains are adopted to utilize these two properties. A template is inserted in the curvelet domain to restore the synchronization error caused by geometric distortion. A watermark is inserted in the 1D DCT domain to insert and detect a message from the DIBR image. Experimental results of the proposed method show high imperceptibility and robustness to various attacks, such as signal and geometric distortions. The proposed method is also robust to DIBR distortion and DIBR configuration adjustment, such as depth image preprocessing and baseline distance adjustment. Full article
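
The property the method relies on, that DIBR moves pixels only horizontally, suggests embedding along columns so that warped pixels carry their coefficients with them. The toy embedder below uses quantization index modulation on one mid-frequency coefficient of each column's 1D DCT; it is a schematic stand-in only, not the paper's template, curvelet synchronization, or embedding rule.

```python
import numpy as np
from scipy.fftpack import dct, idct

def embed_bits_qim(gray, bits, coeff=10, step=12.0):
    """Toy 1D-DCT watermark: quantization index modulation of one mid-frequency
    coefficient of each column's 1D DCT (one payload bit per column)."""
    f = dct(gray.astype(float), axis=0, norm="ortho")
    for j in range(min(len(bits), f.shape[1])):
        q = np.round(f[coeff, j] / step)
        if int(q) % 2 != int(bits[j]):     # force the parity of the quantized value to match the bit
            q += 1
        f[coeff, j] = q * step
    return idct(f, axis=0, norm="ortho")

def extract_bits_qim(gray, n_bits, coeff=10, step=12.0):
    """Recover the embedded bits from the parity of the quantized coefficients."""
    f = dct(gray.astype(float), axis=0, norm="ortho")
    return np.round(f[coeff, :n_bits] / step).astype(int) % 2
```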

15 pages, 2968 KiB  
Article
Reliability-Based View Synthesis for Free Viewpoint Video
by Zengming Deng and Mingjiang Wang
Appl. Sci. 2018, 8(5), 823; https://doi.org/10.3390/app8050823 - 20 May 2018
Cited by 9 | Viewed by 3770
Abstract
View synthesis is a crucial technique for free viewpoint video and multi-view video coding because of its capability to render an unlimited number of virtual viewpoints from adjacent captured texture images and corresponding depth maps. The accuracy of depth maps is very important to the rendering quality, since depth image–based rendering (DIBR) is the most widely used technology among synthesis algorithms. There are some issues due to the fact that stereo depth estimation is error-prone. In addition, filling occlusions is another challenge in producing desirable synthesized images. In this paper, we propose a reliability-based view synthesis framework. A depth refinement method is used to check the reliability of depth values and refine some of the unreliable pixels, and an adaptive background modeling algorithm is utilized to construct a background image aiming to fill the remaining empty regions after a proposed weighted blending process. Finally, the proposed approach is implemented and tested on test video sequences, and experimental results indicate objective and subjective improvements compared to previous view synthesis methods. Full article
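
The reliability check on depth values described above can be approximated by a classic left-right consistency test on rectified views: a disparity is trusted only if the matching pixel in the other view points back to (nearly) the same value. The sketch below assumes horizontal disparities in pixels; the paper's actual refinement, adaptive background modeling, and weighted blending steps are not reproduced.

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1.0):
    """Boolean (H, W) mask that is True where the left-view disparity is reliable.

    A left pixel at column x corresponds to the right-view column x - d; the two
    disparities should agree within `tol` pixels for the value to be trusted.
    """
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xr = np.clip((xs - np.round(disp_left)).astype(int), 0, w - 1)
    return np.abs(disp_left - disp_right[ys, xr]) <= tol
```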