Electronics
  • Article
  • Open Access

20 January 2023

DNN-Based Forensic Watermark Tracking System for Realistic Content Copyright Protection

1 Protocol Engineering Lab., Sejong University, Seoul 143-747, Republic of Korea
2 Department of Computer and Information Security, Sejong University, Seoul 143-747, Republic of Korea
3 Department of Artificial Intelligence, Korea University, Seoul 143-747, Republic of Korea
4 Department of Computer and Information Security & Convergence Engineering for Intelligent Drone, Sejong University, Seoul 143-747, Republic of Korea
This article belongs to the Special Issue Feature Papers in Computer Science & Engineering

Abstract

The metaverse-related content market is active and the demand for immersive content is increasing. However, there is not yet a definition for granting copyright to content produced using artificial intelligence, and discussions are still ongoing. We expect that the need for copyright protection for immersive content used in the metaverse environment will emerge and that related copyright protection techniques will be required. In this paper, we present the idea of 3D-to-2D watermarking so that content creators can protect the copyright of immersive content available in the metaverse environment. We propose an immersive content copyright protection scheme that uses a deep neural network (DNN), a neural network composed of multiple hidden layers, together with a forensic watermark.

1. Introduction

With the development of information and computing technology, the metaverse, a digital virtual world, is attracting attention. It uses virtual reality (VR) and augmented reality (AR) technologies to allow users to experience various content in virtual space.
With the recent development of virtual reality and augmented reality technologies, the metaverse-related content market is expanding and the demand for immersive content such as 3D/4D media is increasing. In addition, as rendering technology based on artificial intelligence has developed rapidly, general users can easily model 2D content into 3D models and create content through rendering. As a result, content authors can easily create immersive content, but there are concerns about copyright disputes and infringement involving immersive content because there is currently no clear copyright definition for artificial intelligence creations.
In February 2022, in the United States, it was ruled that content created by artificial intelligence without human intervention cannot be copyrighted [1] and, in September 2022, an artwork generated by artificial intelligence was copyrighted for the first time, while the discussion on copyright protection for such content is still in progress [2]. Once the discussion on granting copyright to artificial intelligence-based immersive content is settled, copyright protection technology suitable for the metaverse environment is expected to become necessary.
In this paper, an immersive content protection system using a DNN and a forensic watermark is proposed to prevent the illegal copying and distribution of content produced with artificial intelligence [3,4,5,6] and rendering technology. Note that a DNN here means a neural network used for deep learning, with a multi-layer perceptron structure consisting of an input layer, one or more hidden layers, and an output layer.
In the proposed system, when immersive content created through a 3D mesh and rendering technology is illegally copied or distributed, a forensic watermark can be extracted from the content in use. In addition, through the user information stored in the forensic watermark, it is possible to track the suspect who distributed the illegal immersive content on the Internet.
This paper is an extension of our conference paper [7]; in it, we propose ideas for building a safe metaverse environment and developing the immersive content market. Section 2 explains the background of the technologies used in the proposed system and Section 3 explains the structure and operating process of the proposed system. Section 4 analyzes the performance of the proposed system using the results of an existing study [8] and discusses the pros and cons of the system. Finally, Section 5 concludes this paper.

3. Proposed DNN-Based Forensic Watermark Tracking System

In this section, we propose a DNN-based forensic watermark tracking system to prevent the illegal distribution of 3D content created with artificial intelligence. The system uses DNN technology to construct the forensic watermark embedding and extraction processes and to prevent the watermark from being removed or altered by artificial intelligence algorithms. It also adds an attack simulation layer to the forensic watermark embedding process so that the forensic watermark is robust against both malicious and non-malicious distortion. Therefore, even if 3D content carrying a forensic watermark is unintentionally modified by ordinary users during use or intentionally attacked by malicious users to create pirated copies, an intact forensic watermark can still be extracted from the 3D content. Forensic watermarks are used to track pirated copies and distributors of 3D content. To this end, during the embedding process, the forensic watermark is created to include the transaction details (e.g., buyer, purchase details, purchase price, etc.) associated with each content ID. Afterward, the robust watermark is extracted from 3D content that is reported or detected through monitoring and, based on the content ID, the user transaction history is retrieved to track the users who have illegally copied or distributed the content. This system is designed with the coming metaverse environment in mind and is a tracking model for determining and protecting against copyright infringement of 3D content. The system was inspired by the 3D-to-2D watermarking technology proposed by I. Yoo et al., and the components of their work [8] were used as a reference. The overall structure of the proposed system is shown in Figure 4. The proposed system consists of a total of 11 modules and the functions of each module are listed in Table 1 below.
Figure 4. Proposed architecture of the DNN-based forensic watermark tracking system.
Table 1. System modules.
Based on the modules defined in Table 1, the proposed system is described through two operational processes. The first is the procedure for embedding a forensic watermark with guaranteed robustness and the second is the procedure for extracting the forensic watermark from illegally distributed 3D content and tracking the illegal distributor. Table 2 lists the notation used to express the flow of messages between the modules in each procedure.
Table 2. Used notations.
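To make the message flow between modules concrete, the following Python sketch gives hypothetical interfaces for three elements of Table 1: the forensic watermark payload, the forensic watermark generator, and the forensic watermark database. The class and method names are our own illustrative assumptions, not part of the proposed system's specification.

```python
# Illustrative sketch only: hypothetical interfaces for part of Table 1.
# Class and method names are assumptions, not the authors' implementation.
from dataclasses import dataclass


@dataclass
class ForensicWatermark:
    content_id: str   # identifies the purchased 3D content
    user_id: str      # identifies the purchasing user
    bits: bytes       # binary message FW_b embedded into the mesh


class ForensicWatermarkGenerator:
    def generate(self, content_id: str, user_id: str) -> ForensicWatermark:
        """Create the watermark payload from transaction information."""
        payload = f"{content_id}:{user_id}".encode("utf-8")
        return ForensicWatermark(content_id, user_id, payload)


class ForensicWatermarkDatabase:
    """Stores watermarks in a table keyed by content ID (Section 3.1)."""

    def __init__(self) -> None:
        self._table: dict[str, ForensicWatermark] = {}

    def store(self, fw: ForensicWatermark) -> None:
        self._table[fw.content_id] = fw

    def lookup(self, content_id: str) -> ForensicWatermark | None:
        return self._table.get(content_id)
```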

3.1. Forensic Watermark Embedding Procedure

To create 3D content with a forensic watermark, the forensic watermark generator first creates an information template related to the purchase of the 3D content. The template includes the purchased 3D content ID, the user ID, the copyright holder ID, etc., and records the transaction date and transaction price. Based on this, the forensic watermark generator creates a forensic watermark, which is transmitted to the forensic watermark database. A forensic watermark is a textual representation of a content ID. Forensic watermarks are stored and managed in table form by the forensic watermark database and are later used to detect illegal 3D content. The generated forensic watermark is also passed as an input to the deep forensic watermark embedder. The deep forensic watermark embedder trains the embedding procedure, assigning the weights used to embed the forensic watermark. Next, after the trained forensic watermark is inserted into the 3D mesh, robustness is added by training the attack process. 3D content is then created by rendering the 3D mesh. Figure 5 below shows the process of embedding a DNN-based forensic watermark in 3D content, and a hypothetical code-level sketch of the pipeline follows the step list.
Figure 5. DNN-based forensic watermark embedding procedure on the proposed system.
1. A 3D mesh generator creates a 3D mesh using 2D images and 3D data.
2. A forensic watermark generator creates a forensic watermark using a content ID and a user ID.
3. The 3D mesh generator transfers the generated 3D mesh to a deep forensic watermark embedder.
4. The forensic watermark generator transfers the forensic watermark to the deep forensic watermark embedder.
5. The forensic watermark generator stores the generated forensic watermark in a forensic watermark database.
6. The deep forensic watermark embedder trains a procedure to embed the forensic watermark.
7. The deep forensic watermark embedder embeds the forensic watermark into the 3D mesh.
8. The deep forensic watermark embedder transfers the 3D mesh with the forensic watermark to an attack simulator.
9. The attack simulator trains possible attacks on the 3D mesh.
10. The attack simulator provides the robust 3D mesh to a renderer.
11. The renderer creates the 3D content by rendering the robust 3D mesh.
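The following is a minimal sketch of the embedding pipeline above; the comments map each call to the numbered steps of Figure 5. All function and method names, and the objects passed as arguments, are hypothetical placeholders rather than the authors' implementation.

```python
# Hypothetical orchestration of Figure 5 (steps 1-11); each argument is an
# object exposing the methods shown, e.g., the sketch classes defined earlier.
def embed_forensic_watermark(images_2d, data_3d, content_id, user_id,
                             mesh_generator, fw_generator, fw_database,
                             embedder, attack_simulator, renderer):
    """Produce watermarked, robust 3D content from 2D images and 3D data."""
    mesh = mesh_generator.build(images_2d, data_3d)     # step 1: 3D mesh generator
    fw = fw_generator.generate(content_id, user_id)     # step 2: watermark payload
    fw_database.store(fw)                               # step 5: keep FW for later lookup
    embedder.train(mesh, fw)                            # step 6: learn embedding weights
    marked_mesh = embedder.embed(mesh, fw)              # steps 3-4, 7-8: FW into the mesh
    robust_mesh = attack_simulator.train(marked_mesh)   # step 9: attack-layer training
    return renderer.render(robust_mesh)                 # steps 10-11: final 3D content
```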

3.1.1. Forensic Watermark Embedding

For the forensic watermark embedding of the system proposed in this paper, the deep forensic watermark embedder utilizes the encoder of I. Yoo et al. [8] to embed messages into the vertex components of the 3D mesh. In the deep forensic watermark embedder, the secret message, which is the forensic watermark, is replicated $N_v$ times and an $N_v \times FW_b$ tensor is constructed according to the dimension of the vertices. Here, $N_v$ denotes the number of vertices in the 3D mesh and $FW_b$ denotes the binary message inserted as the forensic watermark. A 3D mesh is expressed as $M(V, F, T, P)$ and the vertices are expressed as $V \in \mathbb{R}^{N_v \times C_v}$, where $C_v$ refers to per-vertex features such as 3D positions, normals, and vertex colors. We define $V_m \in \mathbb{R}^{N_v \times (C_v + FW_b)}$ by concatenating the vertices with the replicated watermark tensor. The watermarked vertices $V_e$ are obtained through the vertex message embedding neural network $E_G$, as in Equation (1). In the proposed system, this corresponds to the $Training_{insert}(FW)$ and $Ins_{3DMesh}(FW)$ procedures. Figure 6 below shows the DNN-based forensic watermark embedding architecture.
$$V_e = E_G(V_m) \in \mathbb{R}^{N_v \times C_v} \quad (1)$$
Figure 6. DNN-based forensic watermark embedding architecture.
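The following PyTorch sketch illustrates the vertex message embedding of Equation (1) in the spirit of the encoder of [8]: the watermark bits are replicated once per vertex, concatenated with the vertex features, and mapped back to $C_v$ features. The layer sizes and the use of a simple MLP are our assumptions, not the architecture of [8].

```python
# Minimal sketch of Equation (1); layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class VertexMessageEmbedder(nn.Module):
    def __init__(self, c_v: int, fw_bits: int, hidden: int = 64):
        super().__init__()
        # E_G maps the concatenated (C_v + FW_b) features back to C_v features.
        self.net = nn.Sequential(
            nn.Linear(c_v + fw_bits, hidden),
            nn.ReLU(),
            nn.Linear(hidden, c_v),
        )

    def forward(self, vertices: torch.Tensor, fw: torch.Tensor) -> torch.Tensor:
        # vertices: (N_v, C_v) vertex features; fw: (FW_b,) binary message.
        n_v = vertices.shape[0]
        fw_rep = fw.unsqueeze(0).expand(n_v, -1)     # replicate the message N_v times
        v_m = torch.cat([vertices, fw_rep], dim=1)   # V_m in R^{N_v x (C_v + FW_b)}
        return self.net(v_m)                         # V_e = E_G(V_m) in R^{N_v x C_v}


# Example: 5000 vertices, 5 vertex elements, a 64-bit watermark (assumed sizes).
vertices = torch.rand(5000, 5)
fw = torch.randint(0, 2, (64,)).float()
v_e = VertexMessageEmbedder(c_v=5, fw_bits=64)(vertices, fw)
```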

3.1.2. Attack Simulation and Rendering

The attack simulator iterates the procedure of applying possible attacks to the watermarked 3D mesh and extracting the watermark again, so that the forensic watermarked 3D mesh is created while the robustness of the forensic watermark is ensured. We consider Gaussian noise, rotation, random scaling, and 3D vertex cropping as simulated attacks and add them to the forensic watermark embedding pipeline; a hedged sketch of these distortions is given after Figure 7. To create 3D content, the 3D mesh is then used as the input to the renderer and, finally, 3D content with a forensic watermark is created. This corresponds to the $Training_{Attack}(3DMesh(FW))$ procedure in the proposed system.
Additionally, a differentiable rendering layer is required so that the forensic watermark can be extracted from the 2D image rendered from the immersive content. For immersive content, a rendered 2D image is generated from the 3D mesh $M$, the camera matrix $K$, and the lighting parameter $L$; this corresponds to the $Rendering(3DMesh_{Robust}(FW))$ procedure. In Equation (2), $H_r$ denotes the height of the rendered image and $W_r$ the width of the rendered image. Figure 7 shows the architecture that ensures the robustness of the forensic watermark.
$$I = R_D(M, K, L) \in \mathbb{R}^{H_r \times W_r \times 3} \quad (2)$$
Figure 7. DNN-based forensic watermark attack simulation architecture.
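The distortions named above can be sketched on the vertex positions as follows; the noise level, scaling range, and cropping ratio are illustrative assumptions, not values taken from [8].

```python
# Hedged sketch of the simulated distortions (Gaussian noise, rotation,
# random scaling, 3D vertex cropping); parameter values are assumptions.
import math
import random
import torch


def gaussian_noise(xyz: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Add zero-mean Gaussian noise to the (N, 3) vertex positions."""
    return xyz + sigma * torch.randn_like(xyz)


def random_rotation_z(xyz: torch.Tensor) -> torch.Tensor:
    """Rotate the mesh by a random angle around the z-axis."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return xyz @ rot.T


def random_scaling(xyz: torch.Tensor, low: float = 0.9, high: float = 1.1) -> torch.Tensor:
    """Uniformly scale all vertices by a random factor."""
    return xyz * random.uniform(low, high)


def vertex_cropping(xyz: torch.Tensor, keep_ratio: float = 0.9) -> torch.Tensor:
    """Randomly drop a fraction of the vertices (3D vertex cropping)."""
    n_keep = int(keep_ratio * xyz.shape[0])
    idx = torch.randperm(xyz.shape[0])[:n_keep]
    return xyz[idx]


def simulate_attack(xyz: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen distortion during attack-layer training."""
    attack = random.choice([gaussian_noise, random_rotation_z,
                            random_scaling, vertex_cropping])
    return attack(xyz)
```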

3.2. Procedure for Tracking Down Suspects of Illegal 3D Content Distribution

Illegal 3D content can be flagged by other users or detected by web crawlers that monitor 3D content on the Internet. When illegal distribution is detected in 3D content randomly fetched by the web crawler, the system's process of tracking the distributor is initiated. The detected illegal 3D content is transmitted to the deep forensic watermark extractor, which extracts the forensic watermark and transfers it to the forensic watermark tracker. The forensic watermark tracker then tracks the illegal 3D content based on the extracted forensic watermark. Figure 8 below shows the process of tracking illegal 3D content based on forensic watermarking, and a hypothetical code-level sketch of the flow follows the step list.
Figure 8. Illegal 3D content distributor tracking procedure on the proposed system.
1. A web crawler monitors 3D content with forensic watermarks on the Internet.
2. The web crawler randomly transfers 3D content uploaded on the Internet to a deep forensic watermark extractor for verification.
3. The deep forensic watermark extractor trains a forensic watermark extraction procedure.
4. The deep forensic watermark extractor extracts the forensic watermark from the 3D content.
5. The deep forensic watermark extractor transfers the forensic watermark to a forensic watermark tracker.
6. The forensic watermark tracker analyzes a content ID using the forensic watermark.
7. The forensic watermark tracker requests the content ID from a forensic watermark database.
8. When the content ID in the forensic watermark database matches the requested content ID, a response regarding whether the requested content ID exists is transferred to the forensic watermark tracker.
9. The forensic watermark tracker, on receiving the response about the content ID, analyzes a user ID using the forensic watermark.
10. The forensic watermark tracker requests the user ID from the forensic watermark database.
11. The forensic watermark database transfers the user information corresponding to the user ID to the forensic watermark tracker as a response.
12. The forensic watermark tracker tracks the user using the response from the forensic watermark database.
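The tracking flow above can be summarized in a short hypothetical routine; the `extractor` and `database` objects and their methods (including `lookup_user`) are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical summary of Figure 8 (steps 1-12); interfaces are assumed.
def track_distributor(crawled_content, extractor, database):
    """Return the suspected distributor's user record, or None if not registered."""
    fw = extractor.extract(crawled_content)            # steps 3-4: FW from the content
    content_id, user_id = fw.content_id, fw.user_id    # steps 6, 9: analyze the FW
    if database.lookup(content_id) is None:            # steps 7-8: content ID check
        return None                                    # not registered -> no case to track
    return database.lookup_user(user_id)               # steps 10-12: fetch user record
```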

3.2.1. Forensic Watermark Extraction

For the forensic watermark extraction of the system proposed in this paper, the deep forensic watermark extractor extracts the forensic watermark from the 2D rendered image using the decoder of I. Yoo et al. [8]. The deep forensic watermark extractor extracts the message $M_r$ from the 2D rendered image $I$ using a neural network $D$, as expressed in Equation (3). Here, the neural network $D$ uses multiple convolution layers and a global pooling layer so that input 2D images of various sizes can be handled. The final message bits $M_{rb}$ are extracted through Equation (4); $M_r$ can be used to calculate the message loss and $M_{rb}$ to calculate the bit accuracy. Figure 9 below shows the DNN-based forensic watermark extraction architecture.
$$M_r = D(I) \quad (3)$$
$$M_{rb} = \mathrm{clamp}(\mathrm{sign}(M_r - 0.5), 0, 1) \quad (4)$$
Figure 9. DNN-based forensic watermark extraction architecture.
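The decoder described above can be sketched in PyTorch as follows: a few convolution layers followed by global average pooling yield the message estimate $M_r$ of Equation (3), and the bit decision of Equation (4) is applied on top. The channel widths and depth are our assumptions, not those of [8].

```python
# Minimal sketch of the decoder D; channel widths and depth are assumptions.
import torch
import torch.nn as nn


class WatermarkDecoder(nn.Module):
    def __init__(self, fw_bits: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, fw_bits, 3, padding=1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global pooling over H_r x W_r

    def forward(self, image: torch.Tensor):
        # image: (B, 3, H_r, W_r) rendered 2D image I of arbitrary size.
        m_r = self.pool(self.conv(image)).flatten(1)      # Equation (3): M_r = D(I)
        m_rb = torch.clamp(torch.sign(m_r - 0.5), 0, 1)   # Equation (4): bit decision
        return m_r, m_rb                                  # M_r for loss, M_rb for accuracy


# Example: decode a 256x256 rendering into a 64-bit message estimate.
m_r, m_rb = WatermarkDecoder(64)(torch.rand(1, 3, 256, 256))
```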

3.2.2. Forensic Watermark-Based Tracking

Figure 10 shows the process of tracking users who illegally distributed content using forensic watermarks. The information contained in the forensic watermark can be used to track users. The forensic watermark consists of a transaction table containing content transaction information and three master tables containing information on the content, copyright holders, and users referenced by the transaction information. The transaction table holds the content ID, copyright holder ID, user ID, transaction date, transaction price, and transaction address. When a user purchases content, the purchased content, the copyright holder of the content, the user who purchased the content, and the date, price, and location of the purchase are recorded. In the content table, the content information is listed in detail and linked to the content ID in the transaction table. The content information is organized by content ID, content name, category, and registration date; it records the name of each item of content, the field of the content by category, and the date the content was registered for copyright. In the copyright holder table, the copyright holder information is listed in detail and linked to the copyright holder ID in the transaction table; it records the copyright holder's ID, name, and address. In the user table, detailed user information is listed and linked to the user ID in the transaction table; it records the user ID, username, and address.
Figure 10. Illegal user tracking procedure.
When the forensic watermark is inserted, the user receives content in which the transaction table information is embedded. If a user who legitimately purchases and downloads content then illegally copies it and uploads or distributes it on the Internet, this is regarded as copyright infringement. When content is copied and distributed illegally, the illegal distributor can be tracked using the content ID and user ID contained in the forensic watermark. First, a web crawler visits various sites on the Internet and fetches arbitrary content to verify forensic watermarks. Web crawlers usually browse content by visiting sites with many links, such as illegal sites. The content randomly selected by the web crawler is delivered to the forensic watermark extractor, which extracts the forensic watermark of the corresponding content. By comparing the content ID stored in the extracted forensic watermark with the content IDs stored in the forensic watermark database, it is determined whether the uploaded content is content already registered in the database. If the extracted content ID exists in the database, the upload is regarded as an illegal copy of content whose copyright has already been registered. Next, to track the user who illegally copied and distributed the content, the user ID stored in the forensic watermark is used. The corresponding user ID is retrieved from the database to identify the user who last purchased or downloaded the content; the suspect who copied and distributed the illegal content can be identified by searching for the user information in the user table mapped to that user ID. A simplified sketch of these table lookups is given below.
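The following sketch illustrates the lookups described above, using in-memory dictionaries in place of the transaction and user tables of Figure 10; the IDs and column values are invented for illustration.

```python
# Illustrative sketch of the table lookups; column names follow Figure 10,
# but the in-memory dictionaries and sample values are our own simplification.
transaction_table = {
    # content_id: transaction record carried by the forensic watermark
    "C-001": {"copyright_holder_id": "H-010", "user_id": "U-123",
              "date": "2022-11-01", "price": 50, "address": "market.example"},
}
user_table = {
    "U-123": {"name": "example user", "address": "user.example"},
}


def identify_suspect(extracted_content_id: str, extracted_user_id: str):
    """Map the IDs recovered from the watermark to the suspected distributor."""
    if extracted_content_id not in transaction_table:
        return None  # content not registered -> no infringement case to open
    # The last purchaser/downloader of the content is the suspected distributor.
    return user_table.get(extracted_user_id)
```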

4. Discussion

This paper aims to track the illegal piracy and distribution of immersive content using forensic watermarking. The proposed system has not yet been implemented; it is presented at the idea stage. Several papers on 3D watermarking have been proposed, but 3D-to-2D watermarking is the first approach to embed a watermark in a 3D model and extract it from a rendered 2D image, which is what a real application or user actually accesses. Since this study intends to utilize the 3D-to-2D watermarking method of I. Yoo et al. [8], its reported performance is summarized here. In their experiment [8], the ModelNet40 dataset was used to evaluate the encoder, the decoder, and robustness to distortion. The input mesh has 5000 vertices, 5000 faces, 5 vertex elements, and 3 mesh indexes. For the evaluation of the encoder, PointNet and fully convolutional PointNet (PointNet v2) were used to measure the best bit accuracy and the mean (μ) and standard deviation (σ) of bit accuracy, the $L_1$ difference in geometry was measured for Normal and Texcoord, and, finally, the PSNR and SSIM of the rendered 2D image were measured. The 3D-to-2D watermark encoder performance measurements are provided in Table 3. For the decoder, a four-layer simple CNN, a residual CNN, and HiDDeN, a deep learning-based image watermarking technique, were compared in terms of bit accuracy; the 3D-to-2D watermark decoder performance measurements are provided in Table 4. As an evaluation of robustness, well-known distortions were applied and the resulting bit accuracy was measured; this represents robustness against attacks and is shown in Table 5. A minimal sketch of two of the scalar metrics used above (bit accuracy and PSNR) follows Table 5.
Table 3. 3D-to-2D watermark encoder performance.
Table 4. 3D-to-2D watermark decoder performance.
Table 5. Effect on distortions.
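For reference, bit accuracy and PSNR can be computed as in the following sketch, assuming the decoded and true messages are 0/1 arrays and the images are 8-bit arrays; this is not the evaluation code of [8].

```python
# Minimal sketch of two evaluation metrics; array formats are assumptions.
import numpy as np


def bit_accuracy(decoded_bits: np.ndarray, true_bits: np.ndarray) -> float:
    """Fraction of watermark bits recovered correctly."""
    return float(np.mean(decoded_bits == true_bits))


def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between watermarked and original renderings."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```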
As far as we know, papers on techniques that can insert and extract watermarks in 3D models have been presented, but papers presenting systems or ideas that make use of the extracted watermarks are still scarce. In the future, discussion on methodologies that can utilize 3D watermarks should continue. Since the idea we propose extracts a forensic watermark from a rendered 2D image, it can potentially be used for the copyright protection of immersive content such as AR/VR. In addition, by tracking suspects who illegally copy and distribute immersive content through forensic watermarking, it can contribute to copyright protection technology by deterring illegal acts involving immersive content. However, because extracting a watermark from a 2D rendered image is a recently developed technique, the currently proposed system still requires a review of the requirements and detailed design for applying this technology and the forensic watermark. In addition, to verify the proposed system, its performance should be measured through an actual implementation and analyzed by comparison with existing deep learning-based watermarking systems. To this end, future research plans include producing an actual implementation of the proposed system.

5. Conclusions

As the metaverse industry develops and the demand for 3D content increases, copyright disputes over 3D content are expected to arise. Therefore, in this paper, we propose a system that can track suspects of illegal 3D content distribution by using DNN and forensic watermark technology to build a fair metaverse environment that is safe from copyright infringement. Through the proposed system, forensic watermarks with guaranteed robustness can be inserted into 3D content and suspects of illegal distribution can be tracked through the inserted forensic watermarks. In future research, we plan to supplement and implement the proposed system so that actual suspects of illegal 3D content distribution can be tracked.

Author Contributions

J.P. and J.K. wrote most of this paper using ideas from their research on DNNs and forensic watermarks. J.S. and S.K. reviewed this paper and presented their opinions on 3D content watermarking. J.-H.L. revised the paper and supervised all work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

This research was supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022. (Project Name: 4D Content Generation and Copyright Protection with Artificial Intelligence, Project Number: R2022020068).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. U.S. Copyright Off. Re: Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Correspondence ID 13ZPC6C3; SR # 17100387071). Available online: https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf (accessed on 7 January 2023).
  2. AI-Generated Artwork Is Copyrighted for the First Time. Available online: https://petapixel.com/2022/09/27/ai--generated--artwork--is--copyrighted--for--the--first--time/ (accessed on 7 January 2023).
  3. Madani, M.; Lin, K.; Tarakanova, A. DSResSol: A sequence-based solubility predictor created with Dilated Squeeze Excitation Residual Networks. Int. J. Mol. Sci. 2021, 22, 13555. [Google Scholar] [CrossRef] [PubMed]
  4. Roshani, G.H.; Hannus, R.; Khazaei, A.; Zych, M.; Nazemi, E.; Mosorov, V. Density and velocity determination for single-phase flow based on radiotracer technique and neural networks. Flow Meas. Instrum. 2018, 61, 9–14. [Google Scholar] [CrossRef]
  5. Mahmoodi, F.; Darvishi, P.; Vaferi, B. Prediction of coefficients of the Langmuir adsorption isotherm using various artificial intelligence (AI) techniques. J. Iran. Chem. Soc. 2018, 15, 2747–2757. [Google Scholar] [CrossRef]
  6. Roshani, G.H.; Nazemi, E.; Roshani, M.M. Intelligent recognition of gas-oil-water three-phase flow regime and determination of volume fraction using radial basis function. Flow Meas. Instrum. 2017, 54, 39–45. [Google Scholar] [CrossRef]
  7. Park, J.; Kim, J.; Seo, J.; Kim, S.; Lee, J.-H. Illegal 3D Content Distribution Tracking System based on DNN Forensic Watermarking. In Proceedings of the 2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Bali, Indonesia, 20–23 February 2023. [Google Scholar]
  8. Yoo, I.; Chang, H.; Luo, X.; Stava, O.; Liu, C.; Milanfar, P.; Yang, F. Deep 3D-to-2D Watermarking: Embedding Messages in 3D Meshes and Extracting Them from 2D Renderings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10031–10040. [Google Scholar]
  9. Kim, D.-H. Copyright Protection and Ownership Authentication of Video Using Watermarking. Ph.D. Thesis, Chosun University, Gwangju, Republic of Korea, 2006. [Google Scholar]
  10. Singh, P.; Chadha, R.S. A survey of digital watermarking techniques, applications and attacks. Int. J. Eng. Innov. Technol. 2013, 2, 165–175. [Google Scholar]
  11. Liu, F.; Liu, Y. A Watermarking Algorithm for Digital Image Based on DCT and SVD. In Proceedings of the Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; pp. 27–30. [Google Scholar]
  12. Obimbo, C.; Salami, B. Using digital watermarking for copyright protection. In Watermarking; IntechOpen: London, UK, 2012; Volume 2. [Google Scholar]
  13. Zhang, J.; Li, B.; Zhao, L.; Yang, S.-Q. License management scheme with anonymous trust for digital rights management. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6–8 July 2005; p. 4. [Google Scholar]
  14. Jang, H.W.; Kim, W.G.; Lee, S.H. An illegal contents tracing system based on web robot and fingerprinting scheme. In Proceedings of the Fifth IEEE Workshop on Mobile Computing Systems and Applications, Monterey, CA, USA, 9–10 October 2003; pp. 415–419. [Google Scholar]
  15. Lee, J.-S.; Yoon, K.-S. The system integration of DRM and fingerprinting. In Proceedings of the 2006 8th International Conference Advanced Communication Technology, Pyeongchang, Republic of Korea, 20–22 February 2006; p. 2183. [Google Scholar]
  16. Edwards, D.W. Circulation gatekeepers: Unbundling the platform politics of YouTube’s Content ID. Comput. Compos. 2018, 47, 61–74. [Google Scholar]
  17. Begum, M.; Uddin, M.S. Digital image watermarking techniques: A review. Information 2020, 11, 110. [Google Scholar]
  18. Wan, W.; Wang, J.; Zhang, Y.; Li, J.; Yu, H.; Sun, J. A comprehensive survey on robust image watermarking. Neurocomputing 2022, 488, 226–247. [Google Scholar] [CrossRef]
  19. Uccheddu, F.; Corsini, M.; Barni, M. Wavelet-based blind watermarking of 3D models. In Proceedings of the 2004 Workshop on Multimedia and Security, Magdeburg, Germany, 20–21 September 2004; pp. 143–154. [Google Scholar]
  20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  21. Wang, F.; Zhou, H.; Fang, H.; Zhang, W.; Yu, N. Deep 3D mesh watermarking with self-adaptive robustness. Cybersecurity 2022, 5, 1–14. [Google Scholar] [CrossRef]
  22. Bennour, J.; Dugelay, J. Toward a 3D watermarking benchmark. In Proceedings of the 2007 IEEE 9th Workshop on Multimedia Signal Processing, Crete, Greece, 1–3 October 2007; pp. 369–372. [Google Scholar]
  23. Wang, B.; Ding, J.; Wen, Q.; Liao, X.; Liu, C. An image watermarking algorithm based on DWT DCT and SVD. In Proceedings of the 2009 IEEE International Conference on Network Infrastructure and Digital Content, Beijing, China, 6–8 November 2009; pp. 1034–1038. [Google Scholar]
  24. Haj, A. Combined DWT-DCT digital image watermarking. J. Comput. Sci. 2007, 3, 740–746. [Google Scholar]
  25. Li, Y.; Wang, H.; Barni, M. A survey of Deep Neural Network watermarking techniques. Neurocomputing 2021, 461, 171–193. [Google Scholar] [CrossRef]
  26. Zhu, J.; Kaplan, R.; Johnson, J.; Fei-Fei, L. HiDDeN: Hiding data with deep networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 657–672. [Google Scholar]
  27. Ahmadi, M.; Norouzi, A.; Karimi, N.; Samavi, S. Redmark: Framework for residual diffusion watermarking based on deep networks. Expert Syst. Appl. 2020, 146, 113157. [Google Scholar] [CrossRef]
  28. Zhang, C.; Benz, P.; Karjauv, A.; Kweon, I.S. Universal adversarial perturbations through the lens of deep steganography: Towards a Fourier perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; pp. 3296–3304. [Google Scholar]
  29. Luo, X.; Zhan, R.; Chang, H.; Yang, F.; Milanfar, P. Distortion agnostic deep watermarking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 13548–13557. [Google Scholar]
