Article

A Novel Deep Learning Zero-Watermark Method for Interior Design Protection Based on Image Fusion

1 Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Macau 999078, China
2 Faculty of Humanities and Arts, Macau University of Science and Technology, Avenida Wai Long, Macau 999078, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(6), 947; https://doi.org/10.3390/math13060947
Submission received: 24 February 2025 / Revised: 7 March 2025 / Accepted: 11 March 2025 / Published: 13 March 2025
(This article belongs to the Special Issue Mathematics Methods in Image Processing and Computer Vision)

Abstract: Interior design, which integrates art and science, is vulnerable to infringements such as copying and tampering. The unique and often intricate nature of these designs makes them susceptible to unauthorized replication and misuse, posing significant challenges for designers seeking to protect their intellectual property. To solve these problems, we propose a deep learning-based zero-watermark copyright protection method. The method embeds undetectable and unique copyright information through image fusion without degrading the interior design image. Specifically, the method fuses the interior design and a watermark image through deep learning to generate a highly robust zero-watermark image. This study also proposes a U-Net-based zero-watermark verification network to validate the watermark and extract the copyright information efficiently. This network can accurately restore watermark information from protected interior design images, thus effectively proving copyright ownership of the work. According to verification on an experimental dataset, the proposed zero-watermark method is robust against various image-oriented attacks and avoids the image quality loss that traditional watermarking techniques may cause. Therefore, this method can provide a strong means of copyright protection in the field of interior design.

1. Introduction

Interior design is a creative output of human society. As a discipline that integrates art and science, it produces a variety of media forms that carry designers’ unique creative concepts. The digital age has provided more possibilities for design innovation and application. Many researchers have tried to use human designers’ design data to batch-generate designs with specific decorative styles and spatial functions and have retrained diffusion models to create new datasets of interior decorative styles, thus further expanding the creative boundaries of design [1]. At the same time, the expanding channels for disseminating design works, while promoting design sharing and communication, also bring new challenges. Design works are more susceptible to theft, tampering, and illegal copying in digital environments and are exposed to the risk of copyright infringement [2,3,4]. These actions not only infringe upon the intellectual property rights of the original creators but may also lead to substantial economic losses. Therefore, protecting the rights and interests of original designs and establishing a sound mechanism for copyright protection and technology maintenance are of great significance for safeguarding designers’ rights and promoting the healthy development of the industry.
As an essential method of embedding copyright information into digital media, watermarking technology can achieve copyright protection and content verification. However, traditional watermarking technology has certain limitations in protecting interior design. Firstly, visible watermarking can reduce the visual quality and original expression of design works and weaken the aesthetic relationships in artistic communication. Secondly, to ensure robustness, traditional techniques often require a sizable embedding intensity; when a watermark is embedded in high-contrast areas of a design image, this can destroy the color hierarchy and texture details. For this reason, the development of zero-watermarking techniques brings a new perspective. Unlike traditional watermarking techniques that directly modify the carrier image, zero-watermarking techniques provide copyright protection by extracting the inherent features of an image [5]. However, existing zero-watermarking methods still have some problems. Firstly, most zero-watermarking techniques based on image transformations show poor robustness when an image undergoes geometric attacks because their handcrafted features are easily affected by the relative positions of image content. Secondly, interior design has rich color levels and texture details, but most zero-watermarking methods are limited to extracting features from gray-scale images. Such feature extraction captures only shallow information and cannot fully reflect the complex structure of an image. Especially when facing interior designs with complex colors and textures, the robustness of existing methods is insufficient, which limits their application in real scenarios.
To solve the above problems, we propose an interior design protection scheme based on image fusion through deep learning. The scheme first uses deep learning techniques to extract the higher-order color and texture features of the host image and the salient regions and edge features of the watermark image, respectively. Subsequently, to construct a more complex and robust feature space, we organically fuse these two types of features to generate a zero-watermark image for copyright protection. In the copyright determination stage, the host image is superimposed with the zero-watermark image and input into the zero-watermark authentication network to extract the copyright protection information and realize accurate copyright determination. This scheme effectively addresses the insufficient robustness of traditional methods in complex image scenarios and is vital for improving the copyright protection system for interior design and promoting the development of the industry.
The contributions of this study are as follows:
  • A novel zero-watermark method for interior design preservation based on image fusion through deep learning is proposed.
  • A zero-watermark authentication network for extracting copyright protection information for accurate copyright identification for interior design is proposed.
  • Our proposed method has good robustness against various types of attacks.
The remainder of this article is structured as follows: Section 2 introduces the related work on zero-watermark methods. Section 3 presents the proposed method. Section 4 describes the experimental validation of the method. Section 5 and Section 6 discuss and conclude this work, respectively.

2. Related Work

2.1. Watermark Protection Methods

Currently, watermarking technology, as an essential method of embedding copyright information into digital media, has evolved from simple visible watermarking to complex invisible watermarking [6]. Traditional watermarking achieves copyright protection and content verification by embedding information such as text, images, or digital data into the spatial or frequency domain [7,8]. Research has shown that an effective watermarking system needs to satisfy three basic metrics simultaneously: covertness, robustness, and security [9]. Watermarking for copyright protection should not affect the normal display of the disseminated creative content and should resist external theft and attacks [10]. In recent years, with the rapid development of digital media technology, practical applications have gradually revealed that traditional watermarking suffers from insufficient attack resistance and limited embedding capacity [11]. Since current watermarking for interior design image protection operates mainly in the visual domain [12], traditional watermarking faces two types of limitations in protecting interior design. Firstly, the direct embedding used by traditional watermarking can cause an irreversible loss of image quality, destroying an image’s color hierarchy and texture details [13]. As a result, visible watermarking can degrade the visual quality and original expression of a design work and even weaken the aesthetic relationships in artistic communication. For example, embedding a traditional watermark in a marble design image can distort the material’s gloss and texture details. Secondly, traditional watermarking often lacks robustness under complex image processing, and it is usually challenging to determine the optimal embedding region [14]. In addition, although watermarking is widely used as an effective protection method, the technology carries certain security risks: if the watermarking algorithm is not secure enough, an attacker may remove or tamper with the watermark information.
Therefore, as interior design ideas flow through the marketplace in the future, watermarking technology should be compatible with different image formats and processing flows to better enable watermarking algorithms to strike a balance between protecting information and maintaining image quality.

2.2. Zero-Watermark Protection Methods

In contrast to traditional watermarking methods, zero-watermarking techniques can protect intellectual property rights in design works without changing the original design, extracting image features for formal copyright authentication [5]. Based on the concept of zero-watermarking, Liu et al. designed a zero-watermarking scheme combining the dual-tree complex wavelet transform (DTCWT) and the discrete cosine transform (DCT) and experimentally verified that the scheme performs well under multiple image attacks [15]. However, in current practice, the extracted image features often lack robustness, which leads to weak performance in zero-watermarking methods. Therefore, more and more teams are trying to use deep neural networks to learn and build automatic image watermarking algorithms [16]. Such deep learning-based zero-watermarking techniques have attracted the interest of scholars concerned with digital image copyright protection due to their automated and efficient image feature extraction.
Several existing studies have shown that a fusion mechanism based on deep feature extraction not only enhances the robustness of the zero-watermarking method but also provides a protective barrier for the intellectual property of artistic creators [17]. Xiang et al. constructed image style features for zero-watermark construction [18]. In addition, Shi et al. proposed a zero-watermarking algorithm based on multiple features and chaotic encryption to improve the distinguishability of different zero-watermark images [19], and Li et al. pointed out that deep learning-based zero-watermarking technology is driving change in the field of copyright protection at an unprecedented speed [20]. Further, other studies have also applied deep feature extraction in zero-watermarking methods: Cao’s team [21] designed a multi-scale feature fusion mechanism that accurately extracts watermark information during image propagation even under malicious attacks such as geometric transformation, compression, or cropping. Such deep learning-based zero-watermarking methods perform well against various attacks while maintaining the image’s visual quality, providing new ideas for copyright protection for interior design works [22].

3. Method

3.1. Overview

The flowchart of the methodology proposed in this study is shown in Figure 1. It consists of two primary parts: zero-watermark construction based on an image fusion network and zero-watermark authentication utilizing an inspection network (ISN).
In the zero-watermark construction phase, the protected interior design and watermark images are processed through an encoder to extract their respective features. Subsequently, these extracted features are fused, and the fused feature representation is passed through the decoder to generate a robust zero-watermark image, which is then stored. The construction approach aims to leverage deep learning techniques to integrate the interior design with the watermark image, ensuring that the zero-watermark image contains the salient features of both the interior design and the watermark image. Our construction approach can significantly enhance the robustness of the watermark against various forms of image attacks.
An inspection network (ISN) is employed to verify the zero-watermark image. The ISN is trained to disentangle the zero-watermark image and reconstruct the embedded watermark image. During authentication, the interior design to be verified and the stored zero-watermark image are used as the network inputs, from which the originally embedded watermark information is extracted and authenticated.

3.2. Zero-Watermark Construction

Here, the interior design is defined as $ID = \{F(i,j)\}_{N \times M}$, and the watermark image is denoted by $WI = \{G(i,j)\}_{N \times M}$, where $F(i,j)$ and $G(i,j)$ denote the pixel values at position $(i,j)$ of $ID$ and $WI$, respectively.
Firstly, zero-watermark generation takes $ID$ and $WI$ as inputs and performs feature extraction based on the encoder.
Secondly, the features extracted from $ID$ and $WI$ are fused.
Thirdly, the decoder turns the fused features into a zero-watermark image.
Finally, the zero-watermark image is fed back into the feature extraction encoder, and the training loss is calculated.

3.2.1. Feature Extraction

Input: Interior design $ID = \{F(i,j)\}_{N \times M}$ and watermark image $WI = \{G(i,j)\}_{N \times M}$.
Output: The extracted features of the interior design $Fea_{ID} \in \mathbb{R}^{H \times W \times C}$ and the watermark image $Fea_{WI} \in \mathbb{R}^{H \times W \times C}$, where $H$ and $W$ represent the height and width of the feature, and $C$ represents the number of channels.
During zero-watermark construction, the encoder transforms the interior design and watermark image into high-dimensional feature representations, providing a reliable base for zero-watermark generation. Considering the efficiency and performance requirements of zero-watermark construction, MobileNet v2 [23], with its lightweight design, is introduced for the efficient extraction of features from image content. It is a lightweight model that employs depthwise-separable convolution and an inverted residual structure to achieve efficient feature extraction while reducing the amount of computation and the number of parameters. In this study, MobileNet v2 adopts a four-layer inverted residual structure, with a stride of 1 for the first and third layers and a stride of 2 for the second and fourth layers. The feature extraction network is denoted by $Enc$, and its structure is summarized in Table 1.
Firstly, the input interior design $ID$ and watermark image $WI$ undergo convolution operations with a kernel size of $3 \times 3$ to obtain the shallow features. The process can be represented as follows:

$$Fea_1 = Enc_1(ID) \quad (1)$$
$$Fea_2 = Enc_2(WI) \quad (2)$$
$$Enc(X) = ReLU6(BN(Conv_{3 \times 3}(X))) \quad (3)$$
$$BN(X) = \gamma \frac{X - \mu_X}{\sqrt{\sigma_X^2 + \epsilon}} + \beta \quad (4)$$
$$\mu_X = \frac{1}{m} \sum_{i=1}^{m} X_i \quad (5)$$
$$\sigma_X^2 = \frac{1}{m} \sum_{i=1}^{m} (X_i - \mu_X)^2 \quad (6)$$
$$ReLU6(x) = \min(\max(0, x), 6) \quad (7)$$

Here, $Enc_1$ and $Enc_2$ denote the feature extraction networks for the interior design and the watermark, respectively. $Fea_1$ and $Fea_2$ represent the shallow features. $Conv_{3 \times 3}$ denotes a convolution with a kernel size of $3 \times 3$, $BN$ denotes batch normalization, $ReLU6$ is the activation function, $\mu_X$ represents the mean of $X$, and $\sigma_X^2$ represents the variance of $X$. $\gamma$ and $\beta$ represent the scaling factor and offset, respectively.
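For illustration, the following is a minimal PyTorch sketch of the shallow-feature stem in Equations (1)-(7); the module name EncStem and the tensor shapes are our own assumptions rather than the paper's implementation, with channel counts taken from Table 1.

```python
import torch
import torch.nn as nn

class EncStem(nn.Module):
    """Conv3x3 -> BatchNorm -> ReLU6, i.e., Equation (3); BatchNorm2d
    internally computes Equations (4)-(6), and ReLU6 is Equation (7)."""
    def __init__(self, in_ch: int = 3, out_ch: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1,
                              padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# Two independent stems, one per input stream, as in Equations (1) and (2).
enc1, enc2 = EncStem(), EncStem()
fea1 = enc1(torch.randn(1, 3, 256, 256))  # shallow features of the interior design
fea2 = enc2(torch.randn(1, 3, 256, 256))  # shallow features of the watermark image
```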
Secondly, four inverted residual blocks are introduced to reduce the model parameters while maintaining model performance. Each block contains an expansion layer, depthwise-separable convolution, and residual connections. The expansion layer increases the number of channels through $1 \times 1$ convolution. After the inverted residual blocks, a convolution layer with a kernel size of $1 \times 1$ is used to obtain the features extracted from the images. The process can be represented as follows:
$$Fea_3 = IRB^{(4)}(Fea_1) \quad (8)$$
$$Fea_4 = IRB^{(4)}(Fea_2) \quad (9)$$
$$IRB(X) = \begin{cases} X + DSConv_{3 \times 3}(Conv_{1 \times 1}(X)), & stride = 1 \\ DSConv_{3 \times 3}(Conv_{1 \times 1}(X)), & stride = 2 \end{cases} \quad (10)$$
$$Fea_{ID} = Conv_{1 \times 1}(Fea_3) \quad (11)$$
$$Fea_{WI} = Conv_{1 \times 1}(Fea_4) \quad (12)$$

Here, $IRB^{(4)}$ represents four stacked inverted residual blocks, and $DSConv_{3 \times 3}$ denotes depthwise-separable convolution with a kernel size of $3 \times 3$. The extracted features of the interior design and watermark image are denoted by $Fea_{ID}$ and $Fea_{WI}$, and they are used for the subsequent feature fusion.
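As a sketch only, the inverted residual block of Equation (10) could be written as follows, following the MobileNet v2 design [23]; the expansion ratios, strides, and channel counts come from Table 1, while the class name and exact layer ordering are our assumptions.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Equation (10): a 1x1 expansion conv followed by a 3x3 depthwise-
    separable conv; the residual connection is applied only when
    stride = 1 and the channel count is unchanged."""
    def __init__(self, in_ch: int, out_ch: int, stride: int, expand: int):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),      # expansion layer
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,       # depthwise 3x3
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),     # pointwise projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

# The second block of Table 1: 8 -> 16 channels, stride 2, expansion ratio 6.
irb2 = InvertedResidual(8, 16, stride=2, expand=6)
```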

3.2.2. Feature Fusion

Input: The extracted feature of the interior design $Fea_{ID} \in \mathbb{R}^{H_1 \times W_1 \times C_1}$ and the extracted feature of the watermark image $Fea_{WI} \in \mathbb{R}^{H_1 \times W_1 \times C_1}$, where $H_1$ and $W_1$ represent the height and width of the extracted feature, and $C_1$ represents the number of channels.
Output: The fused feature $Fea_F \in \mathbb{R}^{H_2 \times W_2 \times C_2}$, where $H_2$ and $W_2$ represent the height and width of the feature, and $C_2$ represents the number of channels.
Generally, zero-watermark images constructed by relying only on a single feature are prone to loss or corruption in the face of common attacks, such as noise, compression, and cropping. To this end, we effectively improve the performance of zero-watermark features by fusing the features of protected and watermarked images to generate more complex and robust feature representations.
After feature extraction, we take $Fea_{ID}$ and $Fea_{WI}$ as inputs and fuse them with Equation (13) to obtain the fused feature $Fea_F$:

$$Fea_F = \alpha \times Fea_{ID} + (1 - \alpha) \times Fea_{WI} \quad (13)$$

Here, $\alpha$ is a control coefficient responsible for regulating the fusion ratio of $Fea_{ID}$ and $Fea_{WI}$.
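In code, the fusion step is a single weighted blend; a minimal sketch follows, where the default $\alpha = 0.5$ is an assumed value, since the paper treats the fusion ratio as a tunable coefficient.

```python
import torch

def fuse(fea_id: torch.Tensor, fea_wi: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Equation (13): element-wise convex combination of the two feature maps.
    return alpha * fea_id + (1.0 - alpha) * fea_wi
```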

3.2.3. Zero-Watermark Image Generation

Input: The fused feature $Fea_F \in \mathbb{R}^{H_2 \times W_2 \times C_2}$.
Output: A zero-watermark image Z.
The generation of zero-watermark images is based on the decoder, which maps the fused features back to a zero-watermark image, aiming to generate a robust zero-watermark image from the semantic features. This approach ensures that the generated zero-watermark image contains the fused features of both the interior design and the watermark image and makes it highly resistant to attacks, which improves the overall robustness and applicability of the zero-watermarking method.
Here, the decoder performs feature upsampling and reconstruction through transpose convolution and batch normalization to gradually generate the zero-watermark image from the fused feature maps. Meanwhile, the ReLU activation function enhances the nonlinear representation capability. The detailed structure is shown in Table 2.
The calculation process can be represented as follows:
$$Z = Decoder(Fea_F) \quad (14)$$
$$Decoder(X) = FinalConv(UpBlock^{(3)}(X)) \quad (15)$$
$$FinalConv(X) = Tanh(Conv_{3 \times 3}(X)) \quad (16)$$
$$Tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \quad (17)$$
$$UpBlock(X) = ReLU(BN(TConv_{3 \times 3}(X))) \quad (18)$$

Here, $Z$ denotes the generated zero-watermark image, and $FinalConv$ denotes the convolution operation that adjusts the number of channels. $Tanh$ and $ReLU$ are the activation functions. $UpBlock^{(3)}$ denotes three stacked upsampling blocks, and $TConv$ represents the transpose convolution operation.
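A minimal PyTorch sketch of Equations (14)-(18) is given below, with channel counts, kernel sizes, strides, and padding taken from Table 2; the output_padding=1 argument is our assumption so that each UpBlock exactly doubles the spatial size, which Table 2 does not specify.

```python
import torch.nn as nn

def up_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # UpBlock (Equation (18)): transpose conv -> BN -> ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2,
                           padding=1, output_padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

decoder = nn.Sequential(
    up_block(24, 32),   # three stacked UpBlocks, i.e., UpBlock^(3)
    up_block(32, 16),
    up_block(16, 8),
    nn.Conv2d(8, 3, kernel_size=3, stride=1, padding=1),  # FinalConv (Equation (16))
    nn.Tanh(),          # Equation (17)
)
```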
To ensure that the extracted interior design and watermark features maintain their integrity and their correlation with the generated zero-watermark image, we designed a two-way reconstruction mechanism. Specifically, the generated zero-watermark image is fed back into the encoders that extract the features of the interior design ($Enc_1$) and the watermark image ($Enc_2$) for further processing. This ensures the quality and feature correlation of the generated zero-watermark image. The process is expressed as follows:
$$Fea'_{ID} = Enc_1(Z) \quad (19)$$
$$Fea'_{WI} = Enc_2(Z) \quad (20)$$

Here, $Fea'_{WI}$ is the feature that $Z$ generates for $Fea_{WI}$, and $Fea'_{ID}$ is the feature that $Z$ generates for $Fea_{ID}$.
Thus, the overall loss can be defined as $L_{loss}$, and the calculation can be expressed as follows:

$$L_{loss}(WI, ID, Z) = \eta L_{Watermark}(WI, Z) + (1 - \eta) L_{Image}(ID, Z) \quad (21)$$

where $\eta$ is the control parameter.
The loss consists of two parts. Firstly, $L_{Watermark}(WI, Z)$ measures the watermark difference between the watermark image $WI$ and the zero-watermark image $Z$, and it is expressed as

$$L_{Watermark}(WI, Z) = \frac{1}{H_1 \times W_1} \sum_{i,j} \left\| Fea'_{WI}(i,j) - Fea_{WI}(i,j) \right\|^2 \quad (22)$$

Here, $Fea'_{WI}(i,j)$ represents the feature that $Z$ generates for $Fea_{WI}$ at position $(i,j)$, while $Fea_{WI}(i,j)$ represents the extracted feature of the watermark image at position $(i,j)$.
Secondly, $L_{Image}(ID, Z)$ measures the interior design difference between the interior design $ID$ and the zero-watermark image $Z$, and it is expressed as

$$L_{Image}(ID, Z) = \frac{1}{H_1 \times W_1} \sum_{i,j} \left\| Fea'_{ID}(i,j) - Fea_{ID}(i,j) \right\|^2 \quad (23)$$

Here, $Fea'_{ID}(i,j)$ represents the feature that $Z$ generates for $Fea_{ID}$ at position $(i,j)$, and $Fea_{ID}(i,j)$ represents the extracted feature of the interior design at position $(i,j)$.
Through network training and by minimizing the loss $L_{loss}$, the network constructs a zero-watermark image containing information on both the interior design and the watermark. The complexity of the fused features gives the zero-watermark image a certain degree of security, making it challenging to crack illegally. Furthermore, the zero-watermark image is robust because it is generated from the extraction and fusion of stable image features.
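The two loss terms reduce to mean squared errors in feature space; a minimal sketch, assuming $\eta = 0.5$ as the default (the paper leaves the control parameter unspecified), is shown below.

```python
import torch
import torch.nn.functional as F

def zero_watermark_loss(fea_wi: torch.Tensor, fea_wi_rec: torch.Tensor,
                        fea_id: torch.Tensor, fea_id_rec: torch.Tensor,
                        eta: float = 0.5) -> torch.Tensor:
    # Equations (21)-(23): a weighted sum of the feature-reconstruction
    # errors for the watermark stream and the interior design stream.
    l_watermark = F.mse_loss(fea_wi_rec, fea_wi)  # Equation (22)
    l_image = F.mse_loss(fea_id_rec, fea_id)      # Equation (23)
    return eta * l_watermark + (1.0 - eta) * l_image
```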

3.3. Zero-Watermark Verification

The zero-watermark verification process consists of two steps. First, the interior design to be tested (denoted by $ID'$) and the zero-watermark image are fed into the inspection network (ISN). Second, the ISN transforms the input to obtain the reconstructed copyright image $WI'$.

ISN

The ISN is based on the UNet [24] shown in Figure 1. The network consists of three parts: the encoder, bottleneck layer, and decoder. The encoder section consists of four convolutional blocks, and each is followed by a 2 × 2 max pooling layer, thus gradually reducing the size of the feature map and increasing the number of feature channels. The bottleneck layer contains a convolutional module that processes the feature map output by the encoder for input into the decoder. The decoder restores the feature map size through a stepwise upsampling operation (using transposed convolutional layers) and combines it with the encoder output of the corresponding layer for skip connections. This section contains four convolutional blocks that process the concatenated results of the upsampled feature map and the corresponding layer’s feature map in the encoder. Finally, the output layer uses a 1 × 1 convolution to map the number of channels to the target output channel number.
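For reference, a compact sketch of such a U-Net in PyTorch follows; the channel widths (64-512) and the 6-channel input (the attacked design concatenated with the zero-watermark image) are our assumptions, as the paper does not list these hyperparameters.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class ISN(nn.Module):
    """Four encoder blocks with 2x2 max pooling, a bottleneck, four
    decoder blocks with transposed-convolution upsampling and skip
    connections, and a 1x1 output convolution."""
    def __init__(self, in_ch: int = 6, out_ch: int = 3,
                 widths: tuple = (64, 128, 256, 512)):
        super().__init__()
        self.encs = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.encs.append(conv_block(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(widths[-1], widths[-1] * 2)
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        ch = widths[-1] * 2
        for w in reversed(widths):
            self.ups.append(nn.ConvTranspose2d(ch, w, 2, stride=2))
            self.decs.append(conv_block(w * 2, w))  # upsampled half + skip half
            ch = w
        self.out = nn.Conv2d(ch, out_ch, 1)  # 1x1 output convolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for enc in self.encs:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.out(x)

# Attacked design (3 channels) + zero-watermark image (3 channels) in,
# reconstructed copyright image out.
wi_rec = ISN()(torch.randn(1, 6, 256, 256))
```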
Firstly, we simulate various image attacks on image $ID$ and obtain the set of attacked images $ID_{att}$, which is expressed in Equation (24):

$$ID_{att} = \{ ID'_{att} \mid ID'_{att} = \tau(ID, \theta),\ \theta \in \Theta \} \quad (24)$$

Here, $\tau$ is the attack operation function, and $\theta \in \Theta$ represents the parameter combinations of different attack strategies or intensities.
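A sketch of this attack simulation is shown below; the attack list mirrors Tables 3 and 8, but the exact attacks and parameters used for training are not given in the paper, so the choices here (and the file path) are illustrative.

```python
import cv2
import numpy as np

def gaussian_noise(img: np.ndarray, var: float) -> np.ndarray:
    # Additive Gaussian noise with the given variance (on a [0, 1] scale).
    noisy = img.astype(np.float32) + np.random.normal(0.0, np.sqrt(var) * 255.0, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def rotate(img: np.ndarray, angle: float) -> np.ndarray:
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

# tau(ID, theta) over a small parameter grid Theta (Equation (24)).
attacks = [(gaussian_noise, v) for v in (0.005, 0.01, 0.05, 0.1)] \
        + [(rotate, a) for a in (5, 10, 15, 20)]
id_img = cv2.imread("design.png")  # hypothetical path to an interior design
id_att = [tau(id_img, theta) for tau, theta in attacks]
```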
Secondly, we take the generated zero-watermark image $Z$ and the set of attacked images $ID_{att}$ as the input and regard the watermark image $WI$ as the target output for ISN training. The training loss for verification (denoted by $L_v$) is defined in Equation (25):

$$L_v = \frac{1}{N \times M} \sum_{i=1}^{N} \sum_{j=1}^{M} \left( WI(i,j) - WI'(i,j) \right)^2 \quad (25)$$

Here, $WI'$ denotes the reconstructed copyright image, and $N \times M$ is the dimension of the copyright image.
After the ISN completes training, we take the generated zero-watermark image $Z$ and the detected image $ID'$ as the input for the ISN to obtain the reconstructed copyright image $WI'$ for copyright verification.
Compared with traditional verification methods, this method can still effectively recover the original watermark features from a damaged image when facing high-intensity image attacks, enhancing its overall robustness.

4. Experiments

4.1. Dataset and Evaluation Indexes

We selected two interior design datasets for testing, namely, interior_design https://www.kaggle.com/datasets/aishahsofea/interior-design (accessed on 24 February 2025) and a synthetic dataset for home interiors https://www.kaggle.com/datasets/luznoc/synthetic-dataset-for-home-interior/ (accessed on 24 February 2025). Both datasets are hosted on Kaggle. The interior_design (I_D) dataset includes 4147 interior design photos of different locations and styles. The synthetic dataset for home interiors (SHI) consists of diverse annotated composite data for computer vision projects and includes 85 high-quality composite RGB interior design images showcasing rich interior scenes.
In our experiment, five images were selected from the I_D dataset, and four were selected from the SHI dataset for performance evaluation. The image size was 256 × 256 × 3 . A copyrighted watermark image is shown in Figure 2, with a size of 256 × 256 × 3 . Adam was adopted in the model as the optimization method.
We adopted the peak signal-to-noise ratio (PSNR) and normalized coefficient (NC) to verify the performance of the proposed method.
Firstly, the PSNR is commonly used to measure the quality of an image—or, more precisely, the quality of a watermark image and reconstructed copyright image [25]. The higher the PSNR value, the better the image quality. The calculation is demonstrated in Equation (26).
$$PSNR = 10 \cdot \log_{10} \frac{MAX_{Img}^2}{MSE} \quad (26)$$

where $MAX_{Img}$ represents the maximum pixel value of the image, and $MSE$ denotes the mean squared error.
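A direct NumPy sketch of Equation (26), assuming 8-bit images so that $MAX_{Img} = 255$:

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, max_val: float = 255.0) -> float:
    # Equation (26); `img` and `ref` are arrays of identical shape.
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```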
Secondly, the NC verifies the robustness of the watermarking method by calculating the similarity between the reconstructed copyright image and the original embedded watermark image [26]. The range of NC is (−1,1), and the closer the NC value is to 1, the better the algorithm’s robustness. The calculation is demonstrated in Equation (27).
$$NC = \frac{\sum_{k=1}^{q} \sum_{i=1}^{m} \sum_{j=1}^{n} Y(i,j,k) \times Y'(i,j,k)}{\sum_{k=1}^{q} \sum_{i=1}^{m} \sum_{j=1}^{n} Y(i,j,k)^2} \quad (27)$$

Here, $Y(i,j,k)$ and $Y'(i,j,k)$ denote the pixel values of the watermark image and the reconstructed copyright image at position $(i,j,k)$, respectively.
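Equation (27) likewise reduces to a few array operations; a minimal sketch follows.

```python
import numpy as np

def nc(y: np.ndarray, y_rec: np.ndarray) -> float:
    # Equation (27): correlation of the watermark and the reconstructed
    # copyright image, normalized by the watermark's energy, summed over
    # all rows, columns, and channels.
    y = y.astype(np.float64)
    y_rec = y_rec.astype(np.float64)
    return float((y * y_rec).sum() / (y ** 2).sum())
```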

4.2. Robustness Evaluation

4.2.1. Conventional Attacks

To verify the robustness of the proposed method against conventional attacks, we applied different conventional attacks to the interior design images and compared the watermark image with the copyright images extracted from the attacked images. The conventional attacks comprised three types of noise attacks (Gaussian, salt-and-pepper, and speckle noise) and three types of filtering attacks (Gaussian filter, mean filter, and median filter). Table 3 shows the specific attack types and intensities. Table 4 and Table 5, respectively, show the PSNR and NC results of the proposed method for the I_D dataset, while Table 6 and Table 7, respectively, show the PSNR and NC results for the SHI dataset. Figure 2 shows the images subjected to the conventional attacks and the extracted copyright images.
The experimental results show that the proposed method is robust against conventional noise and filtering attacks. For the three noise types (Gaussian, salt-and-pepper, and speckle noise), the NC and PSNR values decrease slightly as the noise intensity increases, but the overall change is small; in the experiments, the average PSNR and NC values were 25.30 and 0.987, which shows that the method has good robustness. For the three filters (Gaussian blur, mean blur, and median blur), the effects of different filtering strengths on the NC and PSNR values are even more moderate; the average PSNR and NC values were 25.46 and 0.988, indicating that the method can effectively resist filtering interference. Traditional zero-watermark schemes generally suffer from degradation under intense noise or filtering attacks, whereas our method maintains high robustness under such attacks. This stability can be attributed to the advantage of deep learning in stable feature extraction. In addition, the experimental results indicate that although different types of noise and filtering cause defects in the original image, the overall structural integrity of the extracted copyright information remains largely unchanged. Overall, the technique demonstrates strong robustness against conventional attacks.

4.2.2. Geometric Attacks

To verify the robustness of the proposed method under geometric attacks, we applied several geometric attacks, as listed in Table 8, to the images; the PSNR and NC results for the I_D dataset are shown in Table 9 and Table 10, and those for the SHI dataset are shown in Table 11 and Table 12. In addition, Figure 3 shows the images subjected to the geometric attacks and the extracted copyright images.
According to the results in Table 9, Table 10, Table 11 and Table 12, the average PSNR and NC values of the proposed method exceed 24.874 and 0.986 for watermark extraction under geometric attacks, indicating that the method is robust against such attacks. Specifically, for rotation attacks, the average PSNR and NC values are greater than 24.716 and 0.985, respectively; for cropping attacks, they are greater than 25.032 and 0.987, respectively, which shows superior resistance to geometric attacks. Geometric attacks are inherently more challenging than conventional noise or filtering attacks, as they alter the spatial positions in the original image. However, the experimental results confirm that our proposed method outperforms traditional methods in handling these distortions, making it more suitable for real-world applications where geometric transformations are common.

4.2.3. Comparisons with Existing Methods

To further verify the robustness of our method, we compared it with several zero-watermark methods under both conventional and geometric attacks. Table 13 shows the results of the comparison. As shown in Table 13, our proposed method exhibits higher robustness than the other methods under different attack modes. It demonstrates significant advantages especially in resisting geometric attacks such as rotation: when the image is rotated by 20 degrees, many other methods show a significant decrease in performance, but our method still maintains a high NC of 0.9879 under the same conditions. Therefore, the zero-watermark algorithm proposed in this study has an advantage in overall robustness.

4.3. Uniqueness Evaluation

Since zero-watermark information is generated based on image features, it is necessary to ensure that the generated watermark information is unique to a specific image. Also, zero-watermark images generated from distinct datasets should be different [31].
Here, an experiment was conducted to confirm the uniqueness of the generated zero-watermark images. Specifically, we generated zero-watermark images for five tested images and calculated the pairwise NC values among them. Figure 4 shows the generated zero-watermark information, and the NC results are summarized in Table 14. The results show that the similarity between the zero-watermark images generated from the five images is low (less than 0.2). This indicates significant differences between zero-watermark images generated from different images, which verifies the uniqueness of the zero-watermark information generated by the proposed method. Since this method generates zero-watermark information by fusing the features of the watermark and the image, the generated zero-watermark content has high uniqueness and can effectively distinguish different images, providing strong technical support for copyright protection.

4.4. Efficiency Analysis

Efficiency analysis provides a deeper understanding of the model’s feasibility in practical applications. Here, we followed the experimental setup of Li et al. [32] and conducted tests using images of the same size. The proposed method was run 10 times, and the average time was calculated. Meanwhile, we compared the model parameters with those of three other methods. Table 15 shows the efficiency results for the different models. According to the results, the time required for zero-watermark generation by the proposed method is lower than that of the other methods. Although the compared methods are also based on convolutional neural networks, Liu’s method [33] requires learning additional style information for the host image, which demands longer training and inference times and increases memory requirements due to the model’s complexity. In addition, the ResNet101 network underlying Nawaz’s model [29] is relatively deep, thus requiring more parameters and a longer inference time. In contrast, the proposed method uses a lightweight MobileNet as the image encoder. Compared with standard CNNs, the lightweight model reduces the required parameters and inference time while retaining good feature extraction ability, improving the overall efficiency.

5. Discussion

Original interior design drafts are highly valuable and face significant risks of piracy and tampering [34]. Protecting these works is crucial to preserving designers’ intellectual property rights and fostering innovation within creative industries [35]. However, existing protection methods often lack robustness or practicality in real-world scenarios, as they suffer from insufficient resilience to complex attacks [36]. Our work addresses these challenges with a zero-watermark method that promises to enhance the security and usability of copyright protection for interior design.
Deep learning provides a robust foundation for zero-watermarking techniques to adapt to increasingly complex image types and diverse application scenarios [29]. By integrating image fusion and deep learning, this study effectively fused the features of interior design and watermark images in a high-dimensional space. The fused features not only contained the visual features of the interior design and the features of the watermarked image but also enhanced the robustness of the zero-watermark information. This also promises to improve the security and usability of copyright protection systems for interior design drafts.

5.1. Ambiguity Attack Analysis

Watermark methods often face the challenge of ambiguity attacks, and our method has certain advantages in resisting such attacks. On the one hand, our zero-watermark method is based on deep learning image fusion, which integrates complex information from the host and watermark images. The uniqueness experiments show that each generated zero-watermark image differs significantly from the others, which makes it difficult for attackers to approximate the original zero-watermark image with a similar one, thus increasing the difficulty of spoofing attacks. On the other hand, our deep learning-based zero-watermark method flexibly adjusts different hyperparameters when generating zero-watermark information, and these hyperparameter settings must also be known for verification. This further enhances the resistance of our method to ambiguity attacks. In summary, our zero-watermarking method has a certain degree of resistance against ambiguity attacks.

5.2. Model Application Analysis

With its demonstrated robustness against common attacks and the ability to maintain uniqueness, the proposed method is well suited for real-world applications in protecting the intellectual property of interior design. Figure 5 shows a copyright protection and verification scenario based on the proposed zero-watermark method.
Firstly, after completing an interior design, the designer combines the interior design with the copyright information to be embedded and generates a zero-watermark image using the proposed method. Secondly, the generated zero-watermark image is sent to an intellectual property protection agency for safekeeping. At this point, the interior design can be shared with clients online or used directly for spatial layout planning. If the designer discovers an interior design with a copyright dispute online, a verification application can be submitted to the intellectual property protection agency, which will provide the designer with the saved zero-watermark image. The designer can then extract the embedded copyright information through the proposed zero-watermark verification network and verify their copyright ownership.
To further verify the application of the model in actual scenarios, we selected five modern interior designs online https://www.decorilla.com/online-decorating/modern-interior-design-ideas/ (accessed on 24 February 2025) and verified them with the proposed zero-watermark method. The results are shown in Figure 6. For these selected samples, the model demonstrates good robustness: even under strong attacks, it remains feasible to extract relatively intact copyright information. This highlights the effectiveness of our method in preserving essential copyright ownership details under various challenging conditions. The proposed method generates robust and distinctive zero-watermark information, which can serve as reliable evidence for establishing copyright ownership and ensuring the integrity of interior designs. Furthermore, in cases of unauthorized use or tampering, the method enables the efficient extraction and verification of the embedded zero-watermark information, thus facilitating the rapid detection of infringement and providing robust legal support for copyright enforcement.
Beyond interior design, the proposed method has broad applicability in protecting intellectual property rights in various design-related fields. In architectural design, it can protect floor plans and 3D renderings, ensure the traceability of original designs, and prevent unauthorized use. In fashion design, this method can protect digital sketches and textile patterns, helping designers maintain ownership of their works. In addition, in product and industrial design, prototypes and conceptual models are often shared digitally, and the proposed method ensures that design ownership is verifiable and can prevent tampering. This method may provide creators with a broad solution, reducing the risk of unauthorized copying or abuse.
Despite its promising results, the method still faces some challenges. First, attacks on interior design are often more complex and varied, which can reduce the feasibility of extracting watermark images with our method. Second, the time and computational costs bring specific challenges. In the future, we will continue to explore zero-watermark approaches that can handle more challenging and advanced attack scenarios. In addition, we will consider employing knowledge distillation to reduce the time and computational costs.

6. Conclusions

This study presents a novel zero-watermark method designed to protect interior design. By leveraging deep learning, this method achieves robust feature extraction and the effective integration of watermarks and interior design through image fusion. The experimental results demonstrate that the proposed method is robust against conventional and geometric attacks. In addition, this work highlights the potential of deep learning and image fusion in advancing zero-watermark technology and provides a solid foundation for addressing complex copyright protection scenarios.

Author Contributions

Conceptualization, Y.P. and Q.H.; data curation, J.X.; formal analysis, Y.P.; investigation, Y.P.; methodology, Y.P.; project administration, J.C.; resources, Q.H.; software, Y.P.; supervision, K.U.; validation, K.U.; visualization, Y.P.; writing—original draft, Y.P. and J.X.; writing—review and editing, K.U. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available in a publicly accessible repository: https://www.kaggle.com/datasets/aishahsofea/interior-design (accessed on 24 February 2025); https://www.kaggle.com/datasets/luznoc/synthetic-dataset-for-home-interior (accessed on 24 February 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, J.; Shao, Z.; Hu, B. Generating Interior Design from Text: A New Diffusion Model-Based Method for Efficient Creative Design. Buildings 2023, 13, 1861. [Google Scholar] [CrossRef]
  2. Kirillova, E.A.; Koval’, V.; Zenin, S.; Parshin, N.M.; Shlyapnikova, O.V. Digital Right Protection Principles under Digitalization. Webology 2021, 18, 910–930. [Google Scholar] [CrossRef]
  3. Pasa, B. Industrial Design and Artistic Expression: The Challenge of Legal Protection. Brill Res. Perspect. Art Law 2020, 3, 1–137. [Google Scholar] [CrossRef]
  4. Chakraborty, D. Copyright Challenges in the Digital Age: Balancing Intellectual Property Rights and Data Privacy in India’s Online Ecosystem. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4647960 (accessed on 24 February 2025).
  5. Wen, Q.; Sun, T.F.; Wang, S.X. Concept and application of zero-watermark. Acta Electron. Sin. 2003, 31, 214–216. [Google Scholar]
  6. Panchal, U.H.; Srivastava, R. A comprehensive survey on digital image watermarking techniques. In Proceedings of the 2015 Fifth International Conference on Communication Systems and Network Technologies, Gwalior, India, 4–6 April 2015; pp. 591–595. [Google Scholar]
  7. Luo, Y.; Tan, X.; Cai, Z. Robust Deep Image Watermarking: A Survey. Comput. Mater. Contin. 2024, 81, 133. [Google Scholar] [CrossRef]
  8. Abraham, J.; Paul, V. An imperceptible spatial domain color image watermarking scheme. J. King Saud-Univ.-Comput. Inf. Sci. 2019, 31, 125–133. [Google Scholar] [CrossRef]
  9. Hu, R.; Xiang, S. Reversible data hiding by using CNN prediction and adaptive embedding. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 10196–10208. [Google Scholar] [CrossRef] [PubMed]
  10. Ray, A.; Roy, S. Recent trends in image watermarking techniques for copyright protection: A survey. Int. J. Multimed. Inf. Retr. 2020, 9, 249–270. [Google Scholar] [CrossRef]
  11. Kadian, P.; Arora, S.M.; Arora, N. Robust digital watermarking techniques for copyright protection of digital data: A survey. Wirel. Pers. Commun. 2021, 118, 3225–3249. [Google Scholar] [CrossRef]
  12. Zheng, L.; Zhang, Y.; Thing, V.L. A survey on image tampering and its detection in real-world photos. J. Vis. Commun. Image Represent. 2019, 58, 380–399. [Google Scholar] [CrossRef]
  13. Liu, H.; Chen, Y.; Shen, G.; Guo, C.; Cui, Y. Robust Image Watermarking Based on Hybrid Transform and Position-Adaptive Selection. In Circuits, Systems, and Signal Processing; Springer: Berlin/Heidelberg, Germany, 2024; pp. 1–28. [Google Scholar]
  14. Zhang, X.; Jiang, R.; Sun, W.; Song, A.; Wei, X.; Meng, R. LKAW: A robust watermarking method based on large kernel convolution and adaptive weight assignment. Comput. Mater. Contin. 2023, 75, 1–17. [Google Scholar] [CrossRef]
  15. Liu, J.; Li, J.; Cheng, J.; Ma, J.; Sadiq, N.; Han, B.; Geng, Q.; Ai, Y. A novel robust watermarking algorithm for encrypted medical image based on DTCWT-DCT and chaotic map. Comput. Mater. Contin. 2019, 61, 889–910. [Google Scholar] [CrossRef]
  16. Zhong, X.; Huang, P.C.; Mastorakis, S.; Shih, F.Y. An automated and robust image watermarking scheme based on deep neural networks. IEEE Trans. Multimed. 2020, 23, 1951–1961. [Google Scholar] [CrossRef]
  17. Downs, R.; Illangovan, D.; Alférez, G.H. Open Zero-Watermarking Approach to Prevent the Unauthorized Use of Images in Deep Learning. In Proceedings of the Intelligent Systems Conference, Guilin, China, 26–27 October 2024; Springer: Cham, Switzerland, 2024; pp. 395–405. [Google Scholar]
  18. Xiang, R.; Liu, G.; Li, K.; Liu, J.; Zhang, Z.; Dang, M. Zero-watermark scheme for medical image protection based on style feature and ResNet. Biomed. Signal Process. Control 2023, 86, 105127. [Google Scholar] [CrossRef]
  19. Shi, H.; Zhou, S.; Chen, M.; Li, M. A novel zero-watermarking algorithm based on multi-feature and DNA encryption for medical images. Multimed. Tools Appl. 2023, 82, 36507–36552. [Google Scholar] [CrossRef]
  20. Li, D.; Deng, L.; Gupta, B.B.; Wang, H.; Choi, C. A novel CNN based security guaranteed image watermarking generation scenario for smart city applications. Inf. Sci. 2019, 479, 432–447. [Google Scholar] [CrossRef]
  21. Cao, F.; Yao, S.; Zhou, Y.; Yao, H.; Qin, C. Perceptual authentication hashing for digital images based on multi-domain feature fusion. Signal Process. 2024, 223, 109576. [Google Scholar] [CrossRef]
  22. Taj, R.; Tao, F.; Kanwal, S.; Almogren, A.; Altameem, A.; Ur Rehman, A. A reversible-zero watermarking scheme for medical images. Sci. Rep. 2024, 14, 17320. [Google Scholar] [CrossRef]
  23. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  25. Thanh, T.M.; Tanaka, K. An image zero-watermarking algorithm based on the encryption of visual map feature with watermark information. Multimed. Tools Appl. 2017, 76, 13455–13471. [Google Scholar] [CrossRef]
  26. Dong, F.; Li, J.; Bhatti, U.A.; Liu, J.; Chen, Y.W.; Li, D. Robust zero watermarking algorithm for medical images based on improved NasNet-mobile and DCT. Electronics 2023, 12, 3444. [Google Scholar] [CrossRef]
  27. Shen, Z.; Kintak, U. A novel image zero-watermarking scheme based on non-uniform rectangular. In Proceedings of the 2017 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Ningbo, China, 9–12 June 2017; pp. 78–82. [Google Scholar]
  28. Gong, C.; Liu, J.; Gong, M.; Li, J.; Bhatti, U.A.; Ma, J. Robust medical zero-watermarking algorithm based on Residual-DenseNet. IET Biom. 2022, 11, 547–556. [Google Scholar] [CrossRef]
  29. Nawaz, S.A.; Li, J.; Shoukat, M.U.; Bhatti, U.A.; Raza, M.A. Hybrid medical image zero watermarking via discrete wavelet transform-ResNet101 and discrete cosine transform. Comput. Electr. Eng. 2023, 112, 108985. [Google Scholar] [CrossRef]
  30. Li, F.; Wang, Z.X. A Zero-Watermarking Algorithm Based on Scale-Invariant Feature Reconstruction Transform. Appl. Sci. 2024, 14, 4756. [Google Scholar] [CrossRef]
  31. Ren, N.; Guo, S.; Zhu, C.; Hu, Y. A zero-watermarking scheme based on spatial topological relations for vector dataset. Expert Syst. Appl. 2023, 226, 120217. [Google Scholar] [CrossRef]
  32. Li, C.; Sun, H.; Wang, C.; Chen, S.; Liu, X.; Zhang, Y.; Ren, N.; Tong, D. ZWNET: A deep-learning-powered zero-watermarking scheme with high robustness and discriminability for images. Appl. Sci. 2024, 14, 435. [Google Scholar] [CrossRef]
  33. Liu, G.; Xiang, R.; Liu, J.; Pan, R.; Zhang, Z. An invisible and robust watermarking scheme using convolutional neural networks. Expert Syst. Appl. 2022, 210, 118529. [Google Scholar] [CrossRef]
  34. Sadnyini, I.A.; Putra, I.G.P.A.W.; Gorda, A.N.S.R.; Gorda, A.N.T.R. Legal Protection of Interior Design in Industrial Design Intellectual Property Rights. NOTARIIL J. Kenotariatan 2021, 6, 27–37. [Google Scholar] [CrossRef]
  35. Wang, B.; Jiawei, S.; Wang, W.; Zhao, P. Image copyright protection based on blockchain and zero-watermark. IEEE Trans. Netw. Sci. Eng. 2022, 9, 2188–2199. [Google Scholar] [CrossRef]
  36. Anand, A.; Bedi, J.; Aggarwal, A.; Khan, M.A.; Rida, I. Authenticating and securing healthcare records: A deep learning-based zero watermarking approach. Image Vis. Comput. 2024, 145, 104975. [Google Scholar] [CrossRef]
Figure 1. The overall structure of the proposed zero-watermark method.
Figure 2. The interior designs after conventional attacks and the extracted copyright images.
Figure 3. The interior designs after geometric attacks and the extracted copyright images.
Figure 4. The generated zero-watermark results.
Figure 5. Copyright protection and verification scenarios based on the proposed zero-watermark method.
Figure 6. The practical effects of applying the copyright protection model to real-life scenarios.
Table 1. The structure of the feature extraction network.

Layer | Kernel Size | Input Channel | Output Channel | Stride | Expansion Ratio
Conv2d | 3 | 3 | 8 | 1 | 1
Inverted Residual Block | - | 8 | 8 | 1 | 1
Inverted Residual Block | - | 8 | 16 | 2 | 6
Inverted Residual Block | - | 16 | 16 | 1 | 6
Inverted Residual Block | - | 16 | 24 | 2 | 6
Conv2d | 1 | 24 | 24 | 1 | -
Table 2. The structure of the decoder.

Layer | Input Channel | Output Channel | Kernel Size | Stride | Padding
ConvTranspose2D + BN + ReLU | 24 | 32 | 3 | 2 | 1
ConvTranspose2D + BN + ReLU | 32 | 16 | 3 | 2 | 1
ConvTranspose2D + BN + ReLU | 16 | 8 | 3 | 2 | 1
Conv2D + Tanh | 8 | 3 | 3 | 1 | 1
Table 3. The types and intensities of conventional attacks.

Type | Intensity
Gaussian noise | 0.005, 0.01, 0.05, 0.1
Salt-and-pepper noise | 0.005, 0.01, 0.05, 0.1
Speckle noise | 0.01, 0.05, 0.1, 0.2
Gaussian blur | 3 × 3, 5 × 5, 7 × 7, 9 × 9
Mean blur | 3 × 3, 5 × 5, 7 × 7, 9 × 9
Median blur | 3 × 3, 5 × 5, 7 × 7, 9 × 9
Table 4. The PSNR results of conventional attacks for the I_D dataset.

Noise Type | Intensity | Img1 | Img2 | Img3 | Img4 | Img5
Gaussian noise | 0.005 | 27.25 | 24.71 | 27.42 | 22.99 | 18.86
Gaussian noise | 0.01 | 27.20 | 24.70 | 27.37 | 22.95 | 18.84
Gaussian noise | 0.05 | 26.76 | 24.48 | 27.10 | 22.74 | 18.78
Gaussian noise | 0.1 | 26.25 | 23.97 | 26.85 | 22.64 | 18.69
Salt-and-pepper noise | 0.005 | 27.35 | 24.76 | 27.44 | 23.01 | 18.86
Salt-and-pepper noise | 0.01 | 27.32 | 24.76 | 27.41 | 23.00 | 18.86
Salt-and-pepper noise | 0.05 | 27.12 | 24.61 | 27.29 | 22.91 | 18.81
Salt-and-pepper noise | 0.1 | 26.82 | 24.50 | 27.03 | 22.84 | 18.75
Speckle noise | 0.01 | 27.32 | 24.75 | 27.45 | 23.01 | 18.86
Speckle noise | 0.05 | 27.25 | 24.71 | 27.40 | 22.97 | 18.85
Speckle noise | 0.1 | 27.09 | 24.67 | 27.30 | 22.93 | 18.85
Speckle noise | 0.2 | 26.86 | 24.57 | 27.15 | 22.85 | 18.83
Gaussian blur | 3 | 27.36 | 24.78 | 27.48 | 23.02 | 18.87
Gaussian blur | 5 | 27.31 | 24.75 | 27.47 | 23.02 | 18.86
Gaussian blur | 7 | 27.23 | 24.69 | 27.46 | 23.01 | 18.86
Gaussian blur | 9 | 27.21 | 24.63 | 27.44 | 23.00 | 18.85
Mean blur | 3 | 27.33 | 24.77 | 27.46 | 23.02 | 18.86
Mean blur | 5 | 27.21 | 24.66 | 27.45 | 23.00 | 18.85
Mean blur | 7 | 27.15 | 24.47 | 27.39 | 22.98 | 18.84
Mean blur | 9 | 27.09 | 24.16 | 27.34 | 22.95 | 18.83
Median blur | 3 | 27.29 | 24.72 | 27.46 | 23.01 | 18.86
Median blur | 5 | 27.28 | 24.60 | 27.44 | 23.00 | 18.86
Median blur | 7 | 27.21 | 24.49 | 27.40 | 22.99 | 18.84
Median blur | 9 | 27.12 | 24.27 | 27.36 | 22.97 | 18.84
Table 5. The NC results of conventional attacks for the I_D dataset.

Noise Type | Intensity | Img1 | Img2 | Img3 | Img4 | Img5
Gaussian | 0.005 | 0.9921 | 0.9902 | 0.9928 | 0.9803 | 0.9717
Gaussian | 0.01 | 0.9920 | 0.9901 | 0.9927 | 0.9801 | 0.9713
Gaussian | 0.05 | 0.9909 | 0.9888 | 0.9922 | 0.9789 | 0.9698
Gaussian | 0.1 | 0.9895 | 0.9867 | 0.9916 | 0.9785 | 0.9673
Salt-and-pepper | 0.005 | 0.9923 | 0.9904 | 0.9929 | 0.9804 | 0.9720
Salt-and-pepper | 0.01 | 0.9923 | 0.9903 | 0.9928 | 0.9804 | 0.9719
Salt-and-pepper | 0.05 | 0.9918 | 0.9896 | 0.9926 | 0.9799 | 0.9707
Salt-and-pepper | 0.1 | 0.9911 | 0.9889 | 0.9920 | 0.9795 | 0.9691
Speckle | 0.01 | 0.9923 | 0.9904 | 0.9928 | 0.9804 | 0.9720
Speckle | 0.05 | 0.9921 | 0.9902 | 0.9927 | 0.9802 | 0.9719
Speckle | 0.1 | 0.9917 | 0.9900 | 0.9925 | 0.9800 | 0.9716
Speckle | 0.2 | 0.9911 | 0.9896 | 0.9921 | 0.9796 | 0.9712
Gaussian blur | 3 | 0.9924 | 0.9904 | 0.9929 | 0.9804 | 0.9720
Gaussian blur | 5 | 0.9922 | 0.9903 | 0.9929 | 0.9804 | 0.9720
Gaussian blur | 7 | 0.9920 | 0.9900 | 0.9928 | 0.9803 | 0.9718
Gaussian blur | 9 | 0.9919 | 0.9897 | 0.9928 | 0.9803 | 0.9717
Mean blur | 3 | 0.9923 | 0.9903 | 0.9929 | 0.9804 | 0.9720
Mean blur | 5 | 0.9919 | 0.9899 | 0.9928 | 0.9803 | 0.9717
Mean blur | 7 | 0.9917 | 0.9891 | 0.9926 | 0.9801 | 0.9715
Mean blur | 9 | 0.9915 | 0.9878 | 0.9925 | 0.9799 | 0.9713
Median blur | 3 | 0.9922 | 0.9901 | 0.9929 | 0.9804 | 0.9719
Median blur | 5 | 0.9921 | 0.9896 | 0.9928 | 0.9803 | 0.9718
Median blur | 7 | 0.9918 | 0.9891 | 0.9927 | 0.9802 | 0.9716
Median blur | 9 | 0.9916 | 0.9882 | 0.9926 | 0.9801 | 0.9714
Table 6. The PSNR results of conventional attacks for the SHI dataset.

Noise Type | Intensity | 792_0_10 | 792_0_20 | 792_0_30 | 792_0_40
Gaussian noise | 0.005 | 26.368 | 28.5459 | 25.1772 | 26.8649
Gaussian noise | 0.01 | 26.2916 | 28.5069 | 25.0331 | 26.8171
Gaussian noise | 0.05 | 25.923 | 28.2786 | 24.2669 | 26.5831
Gaussian noise | 0.1 | 25.4766 | 27.9902 | 23.3293 | 26.3135
Salt-and-pepper noise | 0.005 | 26.3923 | 28.5753 | 25.2254 | 26.8768
Salt-and-pepper noise | 0.01 | 26.3847 | 28.5876 | 25.1876 | 26.8667
Salt-and-pepper noise | 0.05 | 26.2468 | 28.4429 | 24.8377 | 26.7824
Salt-and-pepper noise | 0.1 | 26.0655 | 28.3914 | 24.289 | 26.6147
Speckle noise | 0.01 | 26.3646 | 28.5603 | 25.2329 | 26.8644
Speckle noise | 0.05 | 26.2371 | 28.4381 | 25.169 | 26.8023
Speckle noise | 0.1 | 26.0362 | 28.2711 | 24.8813 | 26.6255
Speckle noise | 0.2 | 25.5213 | 28.1337 | 24.3831 | 26.3736
Gaussian blur | 3 | 26.3773 | 28.573 | 25.135 | 26.8334
Gaussian blur | 5 | 26.3334 | 28.5761 | 25.1525 | 26.7931
Gaussian blur | 7 | 26.29 | 28.578 | 25.1315 | 26.7627
Gaussian blur | 9 | 26.253 | 28.5728 | 25.0954 | 26.7397
Mean blur | 3 | 26.3891 | 28.5858 | 25.1608 | 26.8557
Mean blur | 5 | 26.373 | 28.5823 | 25.1599 | 26.8361
Mean blur | 7 | 26.3455 | 28.5775 | 25.1626 | 26.8059
Mean blur | 9 | 26.3264 | 28.5776 | 25.1574 | 26.798
Median blur | 3 | 26.3837 | 28.5901 | 25.1485 | 26.832
Median blur | 5 | 26.3212 | 28.5836 | 25.1435 | 26.8021
Median blur | 7 | 26.2157 | 28.5918 | 25.0948 | 26.7724
Median blur | 9 | 26.1799 | 28.5936 | 25.0845 | 26.7587
Table 7. The NC results of conventional attacks for the SHI dataset.

Noise Type | Intensity | 792_0_10 | 792_0_20 | 792_0_30 | 792_0_40
Gaussian noise | 0.005 | 0.9913 | 0.9930 | 0.9852 | 0.9948
Gaussian noise | 0.01 | 0.9912 | 0.9929 | 0.9846 | 0.9946
Gaussian noise | 0.05 | 0.9899 | 0.9924 | 0.9809 | 0.9939
Gaussian noise | 0.1 | 0.9885 | 0.9918 | 0.9753 | 0.9930
Salt-and-pepper noise | 0.005 | 0.9915 | 0.9931 | 0.9854 | 0.9948
Salt-and-pepper noise | 0.01 | 0.9914 | 0.9931 | 0.9852 | 0.9948
Salt-and-pepper noise | 0.05 | 0.9910 | 0.9928 | 0.9837 | 0.9945
Salt-and-pepper noise | 0.1 | 0.9904 | 0.9927 | 0.9810 | 0.9940
Speckle noise | 0.01 | 0.9914 | 0.9930 | 0.9854 | 0.9948
Speckle noise | 0.05 | 0.9909 | 0.9928 | 0.9852 | 0.9946
Speckle noise | 0.1 | 0.9904 | 0.9925 | 0.9839 | 0.9941
Speckle noise | 0.2 | 0.9886 | 0.9922 | 0.9816 | 0.9933
Gaussian blur | 3 | 0.9914 | 0.9931 | 0.9850 | 0.9948
Gaussian blur | 5 | 0.9912 | 0.9931 | 0.9850 | 0.9947
Gaussian blur | 7 | 0.9911 | 0.9931 | 0.9849 | 0.9946
Gaussian blur | 9 | 0.9910 | 0.9931 | 0.9847 | 0.9945
Mean blur | 3 | 0.9914 | 0.9931 | 0.9851 | 0.9948
Mean blur | 5 | 0.9914 | 0.9931 | 0.9851 | 0.9948
Mean blur | 7 | 0.9913 | 0.9931 | 0.9851 | 0.9947
Mean blur | 9 | 0.9912 | 0.9931 | 0.9850 | 0.9947
Median blur | 3 | 0.9914 | 0.9931 | 0.9851 | 0.9948
Median blur | 5 | 0.9913 | 0.9931 | 0.9851 | 0.9947
Median blur | 7 | 0.9909 | 0.9931 | 0.9848 | 0.9947
Median blur | 9 | 0.9908 | 0.9931 | 0.9848 | 0.9947
Table 8. The types and intensities of geometric attacks.

Type | Intensity
Rotation (clockwise and counterclockwise) | 5°, 10°, 15°, 20°
Crop | 1/16, 1/8, 1/4, 1/2
Table 9. The PSNR results of geometric attacks for the I_D dataset.

Attack Type | Intensity | Img1 | Img2 | Img3 | Img4 | Img5
Counterclockwise | 5° | 26.14 | 23.60 | 26.75 | 22.80 | 18.78
Counterclockwise | 10° | 25.99 | 23.50 | 26.58 | 22.71 | 18.75
Counterclockwise | 15° | 26.02 | 23.42 | 26.51 | 22.64 | 18.70
Counterclockwise | 20° | 26.04 | 23.34 | 26.50 | 22.62 | 18.70
Clockwise | 5° | 26.10 | 23.67 | 26.76 | 22.79 | 18.78
Clockwise | 10° | 25.90 | 23.47 | 26.57 | 22.68 | 18.76
Clockwise | 15° | 25.86 | 23.45 | 26.50 | 22.64 | 18.75
Clockwise | 20° | 25.86 | 23.41 | 26.46 | 22.61 | 18.72
Crop | 1/16 | 27.11 | 24.54 | 27.22 | 22.99 | 18.86
Crop | 1/8 | 26.97 | 24.54 | 27.17 | 22.98 | 18.86
Crop | 1/4 | 26.88 | 24.45 | 26.82 | 22.95 | 18.83
Crop | 1/2 | 26.43 | 24.36 | 26.42 | 22.90 | 18.85
Table 10. The NC results of geometric attacks for the I_D dataset.

Attack Type | Intensity | Img1 | Img2 | Img3 | Img4 | Img5
Counterclockwise | 5° | 0.9889 | 0.9846 | 0.9911 | 0.9790 | 0.9699
Counterclockwise | 10° | 0.9883 | 0.9837 | 0.9905 | 0.9783 | 0.9688
Counterclockwise | 15° | 0.9885 | 0.9835 | 0.9903 | 0.9779 | 0.9677
Counterclockwise | 20° | 0.9884 | 0.9828 | 0.9901 | 0.9778 | 0.9674
Clockwise | 5° | 0.9888 | 0.9850 | 0.9910 | 0.9789 | 0.9700
Clockwise | 10° | 0.9882 | 0.9836 | 0.9905 | 0.9782 | 0.9697
Clockwise | 15° | 0.9880 | 0.9831 | 0.9903 | 0.9779 | 0.9690
Clockwise | 20° | 0.9879 | 0.9832 | 0.9901 | 0.9777 | 0.9681
Crop | 1/16 | 0.9917 | 0.9894 | 0.9923 | 0.9803 | 0.9720
Crop | 1/8 | 0.9913 | 0.9896 | 0.9922 | 0.9802 | 0.9720
Crop | 1/4 | 0.9909 | 0.9892 | 0.9914 | 0.9800 | 0.9718
Crop | 1/2 | 0.9895 | 0.9887 | 0.9904 | 0.9804 | 0.9717
Table 11. The PSNR results of geometric attacks for the SHI dataset.

Attack Type | Intensity | 792_0_10 | 792_0_20 | 792_0_30 | 792_0_40
Counterclockwise | 5° | 25.5953 | 27.5978 | 24.4165 | 26.2865
Counterclockwise | 10° | 25.5234 | 27.3850 | 24.2657 | 26.2495
Counterclockwise | 15° | 25.4684 | 27.2192 | 24.1623 | 26.2174
Counterclockwise | 20° | 25.8142 | 27.9167 | 24.5861 | 26.4263
Clockwise | 5° | 25.5708 | 27.5423 | 24.3065 | 26.3431
Clockwise | 10° | 25.4909 | 27.2814 | 24.2651 | 26.2974
Clockwise | 15° | 25.4595 | 27.1405 | 24.1981 | 26.2672
Clockwise | 20° | 25.7739 | 27.9125 | 24.5253 | 26.4627
Crop | 1/16 | 26.2977 | 27.8842 | 24.7494 | 26.8182
Crop | 1/8 | 26.2093 | 27.3478 | 24.6772 | 26.8672
Crop | 1/4 | 26.2054 | 26.5427 | 24.5023 | 26.8722
Crop | 1/2 | 26.0395 | 25.2697 | 23.7512 | 26.8912
Table 12. The NC results of geometric attacks for the SHI dataset.

Attack Type | Intensity | 792_0_10 | 792_0_20 | 792_0_30 | 792_0_40
Counterclockwise | 5° | 0.9890 | 0.9913 | 0.9821 | 0.9934
Counterclockwise | 10° | 0.9887 | 0.9910 | 0.9815 | 0.9932
Counterclockwise | 15° | 0.9883 | 0.9906 | 0.9810 | 0.9932
Counterclockwise | 20° | 0.9897 | 0.9919 | 0.9828 | 0.9938
Clockwise | 5° | 0.9889 | 0.9912 | 0.9816 | 0.9936
Clockwise | 10° | 0.9886 | 0.9907 | 0.9815 | 0.9935
Clockwise | 15° | 0.9884 | 0.9905 | 0.9812 | 0.9934
Clockwise | 20° | 0.9895 | 0.9919 | 0.9825 | 0.9939
Crop | 1/16 | 0.9911 | 0.9918 | 0.9836 | 0.9947
Crop | 1/8 | 0.9909 | 0.9908 | 0.9835 | 0.9949
Crop | 1/4 | 0.9909 | 0.9892 | 0.9829 | 0.9949
Crop | 1/2 | 0.9906 | 0.9863 | 0.9802 | 0.9950
Table 13. Comparative results for the NC under different attacks.

Method | Gaussian Noise (Intensity) | Median Blur (Intensity) | Rotation (Intensity) | Crop (Intensity)
Shen et al. [27] | 0.9012 (0.05) | 0.9746 (3 × 3) | 0.9609 (30°) | 0.9377 (25%)
Gong et al. [28] | 0.8900 (0.05) | - | 0.9400 (2°) | 0.9500 (8%)
Nawaz et al. [29] | 0.7300 (0.01) | 0.8200 (3 × 3) | 0.8600 (20°) | 0.9600 (20%)
Li et al. [30] | 0.9543 (0.1) | 0.9957 (3 × 3) | 0.9832 (3°) | 0.9688 (6.25%)
Our Proposed Method | 0.9867 (0.1) | 0.9929 (3 × 3) | 0.9879 (20°) | 0.9718 (25%)
Table 14. Verification of uniqueness using the NC values for zero-watermark images.

 | Img 1 | Img 2 | Img 3 | Img 4 | Img 5
Img 1 | 1.000 | 0.045 | 0.026 | 0.042 | 0.071
Img 2 | 0.045 | 1.000 | 0.114 | 0.036 | 0.020
Img 3 | 0.026 | 0.114 | 1.000 | 0.013 | 0.080
Img 4 | 0.042 | 0.036 | 0.013 | 1.000 | 0.052
Img 5 | 0.071 | 0.020 | 0.080 | 0.052 | 1.000
Table 15. Comparison of processing time and model parameters with other models.

Method | Liu et al. [33] | Nawaz et al. [29] | Our Proposed Method with CNN | Our Proposed Method
Average processing time | 2440 ms | 1384 ms | 49.6 ms | 46.8 ms
Model parameters | 2.51 M | 44.54 M | 0.039 M | 0.037 M