Search Results (2)

Search Parameters:
Keywords = DIBR watermarking

22 pages, 2541 KiB  
Article
Channel Interaction Mamba-Guided Generative Adversarial Network for Depth-Image-Based Rendering 3D Image Watermarking
by Qingmo Chen, Zhongxing Sun, Rui Bai and Chongchong Jin
Electronics 2025, 14(10), 2050; https://doi.org/10.3390/electronics14102050 - 18 May 2025
Viewed by 468
Abstract
In the field of 3D technology, depth-image-based rendering (DIBR) has been widely adopted due to its inherent advantages including low data volume and strong compatibility. However, during network transmission of DIBR 3D images, both center and virtual views are susceptible to unauthorized copying and distribution. To protect the copyright of these images, this paper proposes a channel interaction mamba-guided generative adversarial network (CIMGAN) for DIBR 3D image watermarking. To capture cross-modal feature dependencies, a channel interaction mamba (CIM) is designed. This module enables lightweight cross-modal channel interaction through a channel exchange mechanism and leverages mamba for global modeling of RGB and depth information. In addition, a feature fusion module (FFM) is devised to extract complementary information from cross-modal features and eliminate redundant information, ultimately generating high-quality 3D image features. These features are used to generate an attention map, enhancing watermark invisibility and identifying robust embedding regions. Compared to the current state-of-the-art (SOTA) 3D image watermarking methods, the proposed watermark model shows superior performance in terms of robustness and invisibility while maintaining computational efficiency.
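
The module the abstract describes hinges on a channel exchange between the RGB and depth feature branches. The Python sketch below illustrates the general channel-exchange idea only; the (C, H, W) tensor shapes, the fixed exchange ratio, and the function name are assumptions made for illustration, and the Mamba-based global modeling and the feature fusion module (FFM) are not reproduced.

import numpy as np

def channel_exchange(rgb_feat, depth_feat, ratio=0.25):
    # Swap the leading fraction of channels between two (C, H, W) feature maps.
    # Illustrative only: the paper's CIM module has its own selection rule and
    # Mamba-based global modeling, which are not reproduced here.
    assert rgb_feat.shape == depth_feat.shape
    c = rgb_feat.shape[0]
    k = max(1, int(c * ratio))  # number of channels to exchange (assumed fixed ratio)
    rgb_out, depth_out = rgb_feat.copy(), depth_feat.copy()
    rgb_out[:k] = depth_feat[:k]
    depth_out[:k] = rgb_feat[:k]
    return rgb_out, depth_out

# Toy usage on random 64-channel feature maps of a 32x32 image.
rgb = np.random.randn(64, 32, 32).astype(np.float32)
depth = np.random.randn(64, 32, 32).astype(np.float32)
rgb_x, depth_x = channel_exchange(rgb, depth)

Exchanging a fixed subset of channels is a parameter-free way to let the two branches see each other's features; per the abstract, the FFM then distills the interacted features into an attention map that guides where the watermark is embedded.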

21 pages, 2858 KiB  
Article
Robust Template-Based Watermarking for DIBR 3D Images
by Wook-Hyung Kim, Jong-Uk Hou, Han-Ul Jang and Heung-Kyu Lee
Appl. Sci. 2018, 8(6), 911; https://doi.org/10.3390/app8060911 - 1 Jun 2018
Cited by 8 | Viewed by 5042
Abstract
Several depth-image-based rendering (DIBR) watermarking methods have been proposed, but they have various drawbacks, such as non-blindness, low imperceptibility, and vulnerability to signal or geometric distortion. This paper proposes a template-based DIBR watermarking method that overcomes the drawbacks of previous methods. The proposed method exploits two properties to resist DIBR attacks: pixels are only moved horizontally by DIBR, and small blocks are not distorted by DIBR. The one-dimensional (1D) discrete cosine transform (DCT) and curvelet domains are adopted to exploit these two properties. A template is inserted in the curvelet domain to correct the synchronization error caused by geometric distortion, and a watermark is inserted in the 1D DCT domain so that a message can be embedded in and detected from the DIBR image. Experimental results show that the proposed method offers high imperceptibility and robustness to various attacks, such as signal and geometric distortions. The method is also robust to DIBR distortion and to DIBR configuration adjustments, such as depth-image preprocessing and baseline-distance adjustment.
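
The method rests on the geometric fact that DIBR view synthesis displaces pixels only along image rows, by a disparity derived from depth. The Python sketch below illustrates that horizontal-only warping; the linear depth-to-disparity mapping, the max_disparity value, and the absence of hole filling are simplifying assumptions for illustration, not the paper's rendering configuration, and the curvelet-domain template and 1D DCT embedding are not shown.

import numpy as np

def dibr_warp(center_view, depth, max_disparity=16):
    # Render a virtual view by shifting each pixel horizontally by a
    # depth-dependent disparity; the row index never changes.
    h, w = depth.shape
    virtual = np.zeros_like(center_view)
    d_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    disparity = np.round(d_norm * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            x_new = x + disparity[y, x]  # horizontal shift only
            if x_new < w:
                virtual[y, x_new] = center_view[y, x]
    return virtual  # disoccluded pixels are simply left at zero

# Toy usage: an 8-bit grayscale center view and its depth map.
center = (np.random.rand(240, 320) * 255).astype(np.uint8)
depth_map = (np.random.rand(240, 320) * 255).astype(np.uint8)
virtual_view = dibr_warp(center, depth_map)

Because only the column index changes, row-wise (1D) structures survive the warp, which is the property the abstract's 1D DCT embedding relies on, while the curvelet-domain template handles resynchronization after geometric distortion.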