A Multifaceted Deepfake Prevention Framework Integrating Blockchain, Post-Quantum Cryptography, Hybrid Watermarking, Human Oversight, and Policy Governance
Abstract
1. Introduction
2. Literature Review
2.1. Deepfake Detection Approaches
2.2. Deepfake Prevention Techniques
2.3. Integration of Technical and Non-Technical Countermeasures
2.4. Research Gaps
2.5. Contribution of the Present Research
3. Deepfake Prevention Framework
3.1. Modules of Deepfake Prevention Framework
A. Module 1: Trusted Content Assurance (Technical)
This module provides the core security services that establish trust in content: authentication, integrity, and non-repudiation. It also employs blockchain and smart-contract technologies to enable content tracking, prevent manipulation, and enhance the transparency of the deepfake prevention environment. Consequently, this module guarantees that all media content has a verifiable origin before distribution. The key mechanisms involved in this module are as follows:
(1) Digital Watermarking and Cryptographic Signatures
To provide a more effective defense against deepfakes, it is essential to implement security mechanisms that ensure authentication, data integrity, and non-repudiation. Authentication is achieved through digital watermarking, which verifies the authenticity of the content's origin. Cryptographic digital signature algorithms provide integrity and non-repudiation, ensuring that the media content has not been altered since its creation and that the signer cannot deny responsibility for the media and the corresponding signature. This research adopts a cryptographic watermarking technique that uses the SHA-3 hash function to compute the hash code of the media content and its metadata. The hash code is then signed with the media creator's private key, and the resulting digital signature is embedded within the media as a watermark.

Any party wishing to verify the originality of the content can use the public key of the signer (assumed to be the media creator). The inputs to the verification operation are the signer's public key, the content, and the metadata. The output is either a valid signature, indicating that the media content is authentic and unmanipulated, or an invalid signature, indicating that the signature and media cannot be authenticated.

This study compares the time and resource consumption of various digital signature schemes, including classical algorithms, such as RSA and ECDSA, and modern post-quantum digital signature algorithms (PQDSAs), such as Dilithium, Falcon, and SPHINCS+. The aim is to understand how performance and resource requirements vary between classical and post-quantum DSAs.

The current study uses a hybrid watermark-embedding technique that combines the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). This hybrid technique offers a high degree of imperceptibility for the media content as well as robustness against different types of attacks. Notably, the metadata must include a pledge by the media creator (the signer) that the content is original and has not been manipulated by deepfake technology, and that the creator bears the legal consequences of violating the relevant legislation. The metadata also contains useful information about ownership, copyright, timestamp, content tracking, and authentication.
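The hash-then-sign flow above can be sketched in a few lines of Python. The sketch below is illustrative only: it uses the `cryptography` package with ECDSA over P-521 and SHA3-512, one of the classical configurations benchmarked in Section 4; in deployment, a PQDSA such as Falcon would be substituted via a post-quantum library. The media bytes and metadata fields are placeholders.

```python
import json
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

media_bytes = b"...raw image or video bytes..."   # placeholder content
metadata = {
    "creator": "RE-042",                          # hypothetical registered entity ID
    "timestamp": "2025-01-01T00:00:00Z",
    "pledge": "original content; not manipulated by deepfake technology",
}

# SHA-3 hash code over the media content and its metadata (Module 1, step 1).
payload = media_bytes + json.dumps(metadata, sort_keys=True).encode()
digest = hashlib.sha3_512(payload).hexdigest()    # can also be registered on-chain

# Sign with the media creator's private key; the signature (plus metadata)
# becomes the watermark payload embedded into the media.
private_key = ec.generate_private_key(ec.SECP521R1())
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA3_512()))

# A verifier repeats the computation with the creator's public key:
# an InvalidSignature exception here would mean the media or metadata was altered.
private_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA3_512()))
```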
(2) Blockchain and Smart Contract Technologies
Blockchain is employed to ensure transparency and immutability and to enable effective tracking of media content, its status, and its metadata. Every piece of media created is registered on a distributed ledger. This research suggests using a public permissioned blockchain, which offers strong security and public accessibility while maintaining a reliable user authentication method.

Original media files can be stored off-chain due to the storage limitations of blockchain; only the metadata of the media and its corresponding digital signature need to be stored on-chain, which improves performance and scalability. Private IPFS storage can hold the media files so that they remain accessible for verification.

Smart contracts provide governance and facilitate interaction among the involved parties, including media creators, authenticity verifiers, and monitoring systems. Before media content is registered on the blockchain, the media creator must insert a link to the media file, its metadata, and the associated signature into the smart contract and then sign the contract. The verification authority first verifies the signature using the signer's public key. If the signature is valid and deepfake detection confirms that the media is original, the verification authority signs the smart contract; the status of the media content then changes to "authentic" and the record is immediately registered on the blockchain. If verification fails, the authority rejects the media and notifies the concerned parties, including the media creator, monitors, and law enforcement agencies. The output of the blockchain is an authenticity token comprising the status and metadata. Any individual or organization wishing to check the authenticity and origin of the media content can look up its status on the blockchain and validate its authenticity, origin, time of creation, and so on.
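The dual-signature rule above (the creator signs first; the verification authority countersigns only after both checks succeed) can be modeled as a small state machine. The production logic would live in an on-chain smart contract (e.g., Solidity on Ethereum); the Python sketch below only illustrates the status transitions, and all field and status names are assumptions mirroring the text.

```python
from dataclasses import dataclass

@dataclass
class MediaRecord:
    ipfs_link: str            # off-chain location of the media file
    metadata: dict            # pledge, ownership, timestamp, ...
    signature: bytes          # creator's digital signature
    creator_signed: bool = False
    verifier_signed: bool = False
    status: str = "pending"   # pending -> authentic | fake

    def sign_by_creator(self) -> None:
        self.creator_signed = True

    def sign_by_verifier(self, signature_valid: bool, detector_says_real: bool) -> None:
        # The verification authority countersigns only if the cryptographic
        # signature is valid AND the deepfake detector confirms originality.
        if self.creator_signed and signature_valid and detector_says_real:
            self.verifier_signed = True
            self.status = "authentic"   # now eligible for on-chain registration
        else:
            self.status = "fake"        # rejected; concerned parties are notified

record = MediaRecord("ipfs://Qm.../video.mp4", {"creator": "RE-042"}, b"sig")
record.sign_by_creator()
record.sign_by_verifier(signature_valid=True, detector_says_real=True)
assert record.status == "authentic"
```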
B. Module 2: Detection and Monitoring (Technical)
This module employs sophisticated deepfake detection technologies to catch manipulated media that bypasses the authentication module. It is included in the deepfake prevention framework for three reasons. First, it detects deepfake content that might be uploaded by malicious users; in such cases, the deepfake detector triggers the smart contract to change the status to "fake", and the status and metadata are stored on the blockchain. Second, when media content carries no authentication data, such as a digital signature and metadata, the detection module can determine whether the media has been manipulated by deepfake technology and trigger a smart contract to store the status and hash code on the blockchain. Third, if the deepfake detector fails to identify the status of the media content, the status remains "unknown" and feedback is sought from the human-intervention and training module. The key components of this module are outlined below.
(1) Deep Learning Detector
This sub-module operates a deep learning detector based on GAN fingerprints to check media content, including audio and video, and classify it as fake, real, or uncertain. The module is designed to be flexible, allowing current deepfake detection technologies to be replaced with more powerful options as they emerge. Open-source tools such as FaceForensics++ and the DeepFake Detection Challenge (DFDC) models can be utilized at this stage to strengthen detection capabilities.

The aim of the detection sub-module is to identify deepfake content among media files uploaded by registered users and subsequently update the status in the smart contract to "fake". The sub-module can also serve guest users seeking to verify the authenticity of a media artifact, determining whether it is real or manipulated by deepfake technology. In this scenario, the hash code of the media, along with its metadata and status, is registered on the blockchain.
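A minimal sketch of the three-way decision rule follows, assuming the detector exposes a probability that the media is fake; the thresholds are hypothetical tuning parameters, not values reported in this paper.

```python
def classify(p_fake: float, t_fake: float = 0.9, t_real: float = 0.1) -> str:
    """Map a detector score to the framework's three statuses."""
    if p_fake >= t_fake:
        return "fake"       # triggers the smart contract to record status "fake"
    if p_fake <= t_real:
        return "real"
    return "uncertain"      # escalated to the human-in-the-loop module

assert classify(0.97) == "fake"
assert classify(0.03) == "real"
assert classify(0.55) == "uncertain"
```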
(2) Real-Time Monitoring
This sub-module operates an online monitoring system that scans social media and other platforms and receives reports from users about suspected media content. Countermeasures such as rate limiting (a generic sketch follows this subsection) and AI-based anomaly detection can be implemented to avoid overwhelming the system with false flags.

The monitoring system forwards suspected content to the deepfake detector, which checks it and updates the status to be stored on the blockchain. This helps to limit the threat of deepfake distribution, since users can easily look up the status and metadata of media content to determine its authenticity.

The output of the detection module indicates the status of the analyzed media content: "real", "fake", or "uncertain". A status of "uncertain" means the deepfake detector was unable to reach a decision; the suspected content is then forwarded to the next module, "awareness, training, and human-in-the-loop", which draws on human expertise to help identify the status of the media content.
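As a concrete illustration of the rate-limiting countermeasure, the sketch below implements a standard token-bucket limiter in Python. The rate and capacity are assumed values; the paper does not prescribe a specific algorithm.

```python
import time

class TokenBucket:
    """Classic token bucket: allows bursts up to `capacity`, refills at `rate`."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                      # report dropped/queued; curbs false-flag floods

reports = TokenBucket(rate_per_sec=5, capacity=20)   # e.g., 5 reports/s per source
print(reports.allow())
```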
C. Module 3: Awareness, Training, and Human-in-the-Loop (Non-Technical)
This module integrates non-technical countermeasures against deepfakes to complement the technical countermeasures implemented in Modules 1 and 2. Its purpose is to raise awareness among the various groups involved in deepfake prevention, including end-users, journalists, regulators, and others, about the threats posed by deepfakes and their consequences. It also aims to train individuals to recognize suspicious or fake content and to incorporate human oversight into technical detection systems. Human intervention strengthens deepfake defense: trained, experienced reviewers can identify anomalies or manipulated content and flag them even when an AI-based detection system fails to do so. This module contains the following sub-modules.
(1) Awareness Campaigns
Studies show that most people do not realize how convincing deepfakes have become; public awareness is therefore crucial to curbing the spread of fake media. This sub-module involves awareness initiatives directed at various parties. The initiatives include a brief explanation of deepfakes with simple examples and highlight the threats and consequences deepfakes pose. Campaigns can be customized to target specific groups, such as policymakers and regulatory organizations. Another important component is teaching attendees the basic red flags of manipulated artifacts, such as lip-sync mismatches, unnatural eye blinking, and inconsistent reflections. The campaigns can take the form of short, targeted learning materials or videos accompanied by an exam that participants must pass, enhancing both public awareness and people's ability to detect deepfakes.
(2) Training Programs
This sub-module involves designing targeted education and training courses for different stakeholders: for example, training for journalists to avoid spreading fake news, training for employees to spot and avoid fake content used in social engineering scams, and specialized courses for law enforcement and legal professionals on the evidentiary value of deepfake detection. The training can take the form of case studies, e-learning modules, and workshops. Well-trained humans can assist the deepfake detection module by investigating media content and confirming its status.
D. Module 4: Policy, Governance, and Regulation (Non-Technical)
Prior studies and real-world experience have shown that technical deepfake prevention countermeasures, while essential, are not sufficient on their own and must be supported by policies, regulations, and governance procedures. This module aims to provide enforcement mechanisms and ensure compliance with internationally recognized policies and regulations concerning the use of deepfake technology. It includes the following main components.
(1) Laws and Regulations
This component covers acts, regulations, and national and international cybersecurity laws related to the use of deepfake technology. Other categories of law relevant to deepfake creation are also considered, such as copyright, safety, and nondiscrimination laws [64]. These laws and regulations, along with their updates, must be publicly announced and communicated to all relevant parties.
(2) Policy Enforcement
This component provides effective ways to enforce the policies and laws governing the use of deepfake technology. The proposed framework assumes that all major stakeholders participate in the deepfake prevention ecosystem. When a violation is detected (e.g., the spread of deepfake content), the governance module ensures that notifications and informative reports are sent to law enforcement agencies and to the social media or other platforms where the deepfake content was found. The framework also guarantees that the evidence of violations of AI acts is securely stored and accessible via distributed ledger technology, facilitating future investigations and digital forensic efforts.
3.2. The Operating Model of the Deepfake Prevention Framework
3.2.1. Stakeholders and Their Roles
- Technical Stakeholders: This category includes AI researchers, developers, cybersecurity specialists, and forensic analysts. Their roles include improving deepfake detection and content authentication mechanisms. Forensic analysts can also assist regulatory institutions by providing the evidence needed to stop the distribution of deepfakes and prosecute violators.
- Operational Stakeholders: This category includes non-technical parties that are registered in the framework's ecosystem and are eligible to perform essential operations, such as creating digital signatures for their media content and assisting in the deepfake detection process with expert intervention. Examples include journalists and governmental or nongovernmental agencies that need to secure the media content they produce using the authentication and digital signature services offered by the framework. The category also includes non-profit independent organizations responsible for managing and operating the framework's platforms; their role includes registering other parties, granting privileges, and performing necessary maintenance of the system platform. Finally, it includes experts or trained groups who contribute to improving the accuracy of deepfake detection.
- Regulatory Stakeholders: This category comprises regulatory and law enforcement agencies responsible for updating rules and regulations relevant to deepfake issues, as well as taking legal actions against offenders. Additionally, domestic and international technical standardization bodies are considered in the framework.
- End-users: This category refers to the general public, including consumers, victims of manipulation, and other parties who can utilize the platform to verify media content or report suspected cases.
3.2.2. Operational Workflow and Module Interaction
- Content Upload and Metadata Capture: The process begins when a verified user, such as a journalist or any other Registered Entity (RE), uploads a digital image or video to the system. Upon upload, the Trusted Content Assurance module automatically captures metadata and generates a cryptographic hash and digital signature. These attributes are recorded temporarily in secure local storage pending verification.
- Authenticity Verification: The first module checks whether the uploaded media contains a valid watermark or digital signature embedded during content creation. If the signature is verified using the public key of an RE, the metadata and the cryptographic digital signature of the media proceed directly to blockchain registration. If not, the media is flagged for further investigation and verification by the second module.
- AI-Driven Deepfake Detection: The second module, detection and monitoring, applies AI-based detection methods to verify the authenticity of the media. If the second module fails to determine the authenticity status, the media content is automatically escalated to human verification.
- Human-in-the-Loop Verification: Expert reviewers receive flagged media content for additional verification. The reviewers check the AI decision, visual indicators, and metadata. Verified outcomes (authentic or fake) are digitally signed by the reviewer and returned to the previous module.
- Blockchain Registration and Provenance Tracking: The verified hash and metadata of each authentic media item are immutably recorded on the blockchain platform through a smart contract. This record includes necessary information, such as the source ID, timestamp, digital signature, metadata, and verification status. If the media is classified as manipulated, a “fake” flag is written on-chain to prevent re-uploads and provide traceability for future detection.
- Governance and Policy Enforcement: The fourth module, governance and policy management, ensures compliance with standards and with regulatory, ethical, and data-protection requirements. It also issues notifications to law enforcement agencies and media content platforms and triggers awareness or takedown procedures in accordance with applicable laws and organizational policies. A condensed sketch of this end-to-end workflow follows the list.
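To make the interaction between the four modules concrete, the following Python sketch condenses the workflow into a single dispatch function. Every function here is an illustrative stub standing in for a framework component; none are taken from the paper's implementation.

```python
def verify_signature(media: bytes, metadata: dict, sig: bytes) -> bool:
    return False          # stub: Module 1 cryptographic verification (see earlier sketch)

def detector_score(media: bytes) -> float:
    return 0.55           # stub: Module 2 AI detector returning P(fake)

def classify(p_fake: float, t_fake: float = 0.9, t_real: float = 0.1) -> str:
    return "fake" if p_fake >= t_fake else "authentic" if p_fake <= t_real else "uncertain"

def expert_review(media: bytes, metadata: dict) -> str:
    return "authentic"    # stub: Module 3 human-in-the-loop decision

def notify_authorities(metadata: dict) -> None:
    print("Module 4: law enforcement and platforms notified")

def register_on_chain(metadata: dict, status: str) -> str:
    print(f"blockchain record written: status={status}")
    return status

def process_upload(media: bytes, metadata: dict, signature: bytes | None) -> str:
    # Module 1: verify the embedded signature/watermark when present.
    if signature is not None and verify_signature(media, metadata, signature):
        return register_on_chain(metadata, "authentic")
    # Module 2: AI-driven detection for unsigned or unverified media.
    status = classify(detector_score(media))
    # Module 3: escalate uncertain cases to expert reviewers.
    if status == "uncertain":
        status = expert_review(media, metadata)
    # Module 4: governance actions for confirmed fakes.
    if status == "fake":
        notify_authorities(metadata)
    return register_on_chain(metadata, status)

process_upload(b"...media...", {"creator": "guest"}, None)
```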
3.2.3. Component and Stakeholder Interactions
- (1) An end-user seeks to verify the authenticity of a media artifact.
- (2) A registered entity (a governmental or nongovernmental agency) wants to upload media content it has produced and obtain the security services offered by the framework.
- In the first option, users are allowed to upload the media content along with a digital signature, enabling the first module of the prevention framework to verify the content and the signature using the creator's public key.
- Alternatively, users can upload media without a signature. In this case, the prevention system extracts the watermark that contains the digital signature and then performs signature verification.
4. Experimental Results and Discussion
4.1. Implementation Environment
4.1.1. Algorithms and Technologies
- Cryptographic Digital Signature Algorithms: DSAs provide valuable security services such as authentication, integrity, and non-repudiation. This research utilized PQDSAs to provide long-term security and resilience against cyberattacks that employ advanced quantum computing technology. In particular, the experiments implemented the PQCAs standardized by NIST, namely Dilithium, Falcon, and SPHINCS+. This research also compared them with classical DSAs, such as RSA and ECDSA, to find the most efficient DSA for the proposed framework [56].
- Cryptographic Hash Algorithms: Hash algorithms are essential for ensuring integrity and for the efficient implementation of digital signature algorithms. The experiments in this study used the secure hash algorithm SHA-3 (specifically SHA3-512, consistent with the signature configurations evaluated below) due to its strong security guarantees and high performance.
- Digital Watermarking: This research employed a hybrid digital watermark-embedding technique, combining DWT and SVD, to offer robustness against different types of attacks. The watermarking technique invisibly embeds the metadata and cryptographic signature within the media content, enabling later authentication by extracting the watermark and performing signature verification (a minimal embedding sketch follows this list).
- Blockchain and Smart Contracts: This study used a permissioned public blockchain built on Ethereum. This offers an enhanced level of security since it adds an access control module to ensure that only registered and authenticated entities can perform operations like signing or verifying digital content. It is also public, allowing end-users to access the blockchain and verify the media content. Smart contracts are used to improve governance and facilitate key operations on the blockchain.
- Deepfake Detector: This research uses open-source tools for deepfake detection such as FaceForensics++ and DeepFake Detection Challenge (DFDC) models. The detection stage comes after content authentication and offers an extra layer of defense.
- Likert-Scale Rating: This research applies a rating system on a scale from 1 to 10, where 10 indicates strong confidence that the content is authentic and not manipulated, while 1 signifies strong confidence that the content is a deepfake. If automated detection fails, experienced participants are asked to rate the authenticity of the media; if the average rating exceeds 75% (i.e., 7.5 on the 10-point scale), the content is considered authentic; otherwise, it is classified as deepfake content.
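To make the embedding step concrete, the following sketch implements a common DWT+SVD embedding scheme in Python, assuming NumPy and PyWavelets. It additively modulates the singular values of the LL subband with those of the payload; the strength parameter `alpha` is a hypothetical choice, and a practical extractor would additionally retain side information (e.g., the original singular values) to recover the payload.

```python
import numpy as np
import pywt

def embed_watermark(host: np.ndarray, payload: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Embed `payload` into `host` via 1-level DWT + SVD on the LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")        # low-frequency subband: robust carrier
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(payload, compute_uv=False)     # payload singular values
    LL_marked = U @ np.diag(S + alpha * Sw) @ Vt      # additive singular-value modulation
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.rand(256, 256)      # stand-in for the host image
payload = np.random.rand(128, 128)   # stand-in for the packed signature + metadata
watermarked = embed_watermark(host, payload)
print(np.abs(watermarked - host).max())   # small distortion, i.e., high imperceptibility
```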
4.1.2. Datasets
4.1.3. Evaluation Metrics
- Performance: This indicates the time delay of digital signature operations, the time consumed by the watermarking technique, and the time elapsed when appending data to the blockchain. For DSAs, performance covers the signing and verification operations; for watermarking, it covers the time required to generate the payload and embed it within the media. Payload generation time refers to the time taken to pack the metadata and digital signature together and convert them to a format suitable for embedding. Embedding time indicates the total time elapsed when embedding the payload into the host image using the hybrid DWT and SVD algorithms, including all transformation, modification, and reconstruction steps. In summary, the performance metric comprises three quantifiable measures: (1) time consumption of digital signature operations, (2) time consumption for watermark creation and embedding, and (3) latency in milliseconds for appending a data block to the blockchain (a small timing harness is sketched after this list).
- Accuracy: This refers to the accuracy of detecting deepfakes via the proposed framework. In particular, this metric is evaluated via the accuracy level of the deepfake detection methods adopted in this research.
- Cost: This metric is estimated by the gas cost consumed by blockchain operations. This metric varies according to the size of the data (signature and metadata) produced by each DSA.
- Resource Consumption: The resources are estimated by the RAM space in bytes used by the DSAs to store the signature and the corresponding metadata.
- Statistical Testing: Differences in detection accuracy and false-positive rate between the AI-based detection models are used to judge deepfake detection performance.
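A minimal harness of the kind used for such per-operation timing is sketched below, with `hashlib` as a stand-in workload; the harness itself is illustrative, not the paper's benchmarking code.

```python
import hashlib
import time

def time_ms(fn, *args, repeats: int = 100) -> float:
    """Average wall-clock time of fn(*args) in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats * 1000.0

# Example: time a SHA3-512 digest over a 1 MB buffer; signing, payload
# generation, and embedding would be measured the same way.
print(f"SHA3-512 over 1 MB: {time_ms(hashlib.sha3_512, b'x' * 2**20):.3f} ms")
```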
4.1.4. Comparisons
4.2. Evaluation Scenarios
- (1) A registered entity authenticates and stores a piece of media content: In this use-case scenario, a registered organization uploads media content to the system platform and utilizes the security services provided by the framework's modules, including creating a digital signature, applying digital watermarking, and signing the smart contract to store the signature and metadata permanently on the blockchain. This scenario also activates the second module to perform deepfake detection, and the human-intervention module as needed to help determine the status of the media content. Finally, if a deepfake is detected, this use case activates the fourth module to take the necessary governance and legal actions.
- (2) An end-user seeks to verify the authenticity of a piece of media content: In this use case, an end-user uploads media content to the system's platform, which activates the first module to verify the signature and the content. If the signature is missing, the detection module is activated to determine the status of the content; if the second module is unable to do so, the third module is activated. The fourth module is engaged whenever deepfake content is found.
4.3. Experimental Results and Analysis
4.3.1. Performance Analysis
| Digital Signature Algorithm | Signature Creation (ms) | Signature Verification (ms) | Payload Generation (ms) | Payload Embedding (ms) | Total Time Delay (ms) |
|---|---|---|---|---|---|
| RSA | 31.339 | 0.528 | 0.045 | 177.263 | 209.175 |
| ECDSA | 6.503 | 5.762 | 0.024 | 73.007 | 85.296 |
| Dilithium | 12.977 | 2.638 | 0.012 | 125.470 | 141.097 |
| Falcon | 9.529 | 0.708 | 0.009 | 50.438 | 60.684 |
| SLH-DSA | 1449.607 | 2.312 | 0.012 | 57.761 | 1509.692 |

| Deepfake Detection Method | Dataset | Inference Time per Image (ms) |
|---|---|---|
| FaceForensics++ [62] | FF++ raw | 5–20 |
| XceptionNet on DFDC [66] | DFDC test | 10–30 |
4.3.2. Resource Consumption Analysis
| Digital Signature Algorithm with Hash Function | Memory Consumed (Bytes) |
|---|---|
| RSA/SHA3-512 | 135.04 |
| ECDSA(SHA3-512)/P-521 | 74.88 |
| Dilithium-5 (PQDSA) | 1196.416 |
| Falcon-512 (PQDSA) | 203.264 |
| SLH-DSA (SPHINCS+ SHA3-128s) (PQDSA) | 7489.024 |
4.3.3. Cost Analysis
| Digital Signature Algorithm | Size of Signature + Metadata (Bytes) | Estimated Gas Cost (gas units) |
|---|---|---|
| RSA | 135.04 | 105,400 |
| ECDSA | 74.88 | 67,800 |
| Dilithium | 1196.416 | 768,760 |
| Falcon | 203.264 | 148,040 |
| SLH-DSA | 7489.024 | 4,701,640 |
4.3.4. Detection Accuracy Analysis
| Deepfake Detection Method | Dataset | Reported Accuracy (%) | False-Positive Rate (%) |
|---|---|---|---|
| FaceForensics++ [62] | FF++ raw | 95 | 4.8 |
| XceptionNet on DFDC [66] | DFDC test | 89.2 | 6.2 |
4.4. Security Analysis
4.4.1. Overview of Security Objectives
- (1) Authenticity: Verifying that a piece of media content originates from a legitimate source or creator.
- (2) Integrity: Ensuring that media content and its metadata remain unaltered.
- (3) Non-Repudiation: Ensuring that the creator of a piece of media content cannot deny authorship or responsibility.
- (4) Traceability and Accountability: Providing provenance information and enabling the tracing and authentication of media content and its metadata.
- (5) Robustness Against Quantum Computing Attacks: Resisting cyberattacks that may utilize quantum computing technology.
- (6) Governance: The ability to impose rules and regulations and enforce related laws, as well as to offer awareness and training that assist in deepfake prevention.
- (7) Deepfake Detection: The ability to detect unauthentic or manipulated media content.
4.4.2. Security Services Provided by the Framework
| Security Service | Framework Module | Security Mechanism | Description |
|---|---|---|---|
| Authentication | Module 1: Trusted Content Assurance | Digital Watermarking, Cryptographic Signatures, and Ethereum Blockchain | The hybrid watermark-embedding technique was used to create and embed a watermark. Cryptographic DSAs were used to sign the media and its metadata, and the signature and metadata were appended to the Ethereum blockchain. |
| Integrity | Module 1: Trusted Content Assurance | Cryptographic Hash Functions and Digital Signatures | SHA-3 and DSAs were employed to provide an integrity service. |
| Non-Repudiation | Module 1: Trusted Content Assurance | Digital Signature Algorithms | Cryptographic DSAs were implemented to provide a non-repudiation service. |
| Traceability and Accountability | Module 1: Trusted Content Assurance | Ethereum Blockchain and Smart Contracts | Blockchain and smart contracts enable tracing and verification of the status and metadata of media content. The use of smart contracts provides accountability for each transaction. |
| Quantum Attack Resilience | Module 1: Trusted Content Assurance | Post-Quantum Cryptographic Digital Signature Algorithms | The use of post-quantum algorithms offers long-term security and resistance against quantum computing attacks. |
| Governance | Module 4: Policy, Governance, and Regulation | Regulatory and Law Enforcement Agencies | The involvement of regulatory and law enforcement agencies ensures governance and supports the prevention of deepfakes. |
| Deepfake Detection | Module 2: Detection and Monitoring, and Module 3: Awareness, Training, and Human-in-the-Loop | Deepfake Detection Models and Human Intervention | Adopting deepfake detectors and human assistance contributes significantly to enhancing deepfake detection accuracy. |
| Deepfake Prevention | Module 3: Awareness, Training, and Human-in-the-Loop | Awareness and Training | Providing awareness and training in the third module supports resilience and enhances deepfake prevention. |
4.5. Research Findings and Contributions
- Developed an effective method for media content authentication by leveraging state-of-the-art watermarking and cryptographic techniques. In particular, the hybrid watermark-embedding technique using DWT and SVD was utilized to embed a payload consisting of the digital signature and the metadata of the media content. This approach offers robust authentication by leveraging the strong security features of cryptographic DSAs. Additionally, the hybrid watermarking method makes it difficult to manipulate the content or remove the authenticating watermark.
- This study investigated the implementation of various advanced cryptographic DSAs, both classical and post-quantum, and conducted a thorough analysis and benchmarking of the different signature schemes. The experimental results indicated that the Falcon DSA is particularly promising and is recommended for deepfake prevention due to its strong resilience against quantum computing attacks and its high performance in signing and verification. Moreover, Falcon uses a much smaller key size than the other PQDSAs, which reduces resource consumption such as memory space and gas costs for blockchain operations, making it the more efficient and cost-effective choice among the PQDSAs evaluated.
- The framework utilizes Ethereum blockchain technology and smart contracts to enable more efficient media content authentication and tracking. The metadata of the content and its corresponding digital signature are stored permanently on the blockchain. By utilizing the powerful features of this technology such as immutability and transparency, it becomes easy to trace media content and verify its authenticity.
- The second module of the proposed framework incorporates a deepfake detection process to add another layer of security and mitigate the spread of deepfake content. It identifies deepfake content that might be intentionally or mistakenly signed by an RE before the signature is stored on the blockchain, and it enables guest users to verify the authenticity of media that carries no signature. This research compared well-known deepfake detection methods in terms of accuracy and performance to find the most efficient one; the results indicated that FaceForensics++ is a strong candidate for adoption in the deepfake detection module.
- What makes the proposed framework unique is its combination of technical and non-technical countermeasures. The third module enables experienced individuals to help evaluate the authenticity of media content when the detection module (the second module) is unable to determine its status. This approach enhances detection accuracy, helps to identify undetected deepfakes, and allows the necessary corrective actions to be taken to prevent the distribution of deepfakes and mitigate their impact.
- The fourth module in the proposed framework, which focuses on policy, governance, and regulation, enhances the landscape for preventing the distribution of deepfake content by offering improved governance and compliance with policies and regulations. In particular, this module supports law enforcement agencies by enabling them to efficiently impose restrictions on perpetrators, enforce rules and regulations, collect forensic evidence, and take necessary legal action.
5. Conclusions and Future Work
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| PQCA | Post-Quantum Cryptography Algorithm |
| DSA | Digital Signature Algorithm |
| AI | Artificial Intelligence |
| DL | Deep Learning |
| ECC | Elliptic Curve Cryptography |
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 1–9. [Google Scholar]
- Mirsky, Y.; Lee, W. The creation and detection of Deepfake: A survey. ACM Comput. Surv. (CSUR) 2021, 54, 1–41. [Google Scholar] [CrossRef]
- Kaur, A.; Noori Hoshyar, A.; Saikrishna, V.; Firmin, S.; Xia, F. Deepfake video detection: Challenges and opportunities. Artif. Intell. Rev. 2024, 57, 159. [Google Scholar] [CrossRef]
- Kietzmann, J.; Lee, L.W.; McCarthy, I.P.; Kietzmann, T.C. Deepfake: Trick or treat? Bus. Horiz. 2020, 63, 135–146. [Google Scholar] [CrossRef]
- Chesney, R.; Citron, D. Deepfake and the new disinformation war: The coming age of post-truth geopolitics. Foreign Aff. 2019, 98, 147. [Google Scholar]
- Korshunov, P.; Marcel, S. Deepfake: A new threat to face recognition? assessment and detection. arXiv 2018, arXiv:1812.08685. [Google Scholar] [CrossRef]
- Mustak, M.; Salminen, J.; Mäntymäki, M.; Rahman, A.; Dwivedi, Y.K. Deepfake: Deceptions, mitigations, and opportunities. J. Bus. Res. 2023, 154, 113368. [Google Scholar] [CrossRef]
- Shahzad, H.F.; Rustam, F.; Flores, E.S.; Luis Vidal Mazon, J.; de la Torre Diez, I.; Ashraf, I. A review of image processing techniques for Deepfake. Sensors 2022, 22, 4556. [Google Scholar] [CrossRef]
- Khalil, H.A.; Maged, S.A. Deepfake creation and detection using deep learning. In Proceedings of the 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt, 26–27 May 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar]
- Wazid, M.; Mishra, A.K.; Mohd, N.; Das, A.K. A secure Deepfake mitigation framework: Architecture, issues, challenges, and societal impact. Cyber Secur. Appl. 2024, 2, 100040. [Google Scholar] [CrossRef]
- Ahmad, J.; Salman, W.; Amin, M.; Ali, Z.; Shokat, S. A Survey on Enhanced Approaches for Cyber Security Challenges Based on Deep Fake Technology in Computing Networks. Spectr. Eng. Sci. 2024, 2, 133–149. [Google Scholar]
- Sandotra, N.; Arora, B. A comprehensive evaluation of feature-based AI techniques for Deepfake detection. Neural Comput. Appl. 2024, 36, 3859–3887. [Google Scholar] [CrossRef]
- Gao, J.; Micheletto, M.; Orrù, G.; Concas, S.; Feng, X.; Marcialis, G.L.; Roli, F. Texture and artifact decomposition for improving generalization in deep-learning-based Deepfake detection. Eng. Appl. Artif. Intell. 2024, 133, 108450. [Google Scholar] [CrossRef]
- Ghiurău, D.; Popescu, D.E. Distinguishing Reality from AI: Approaches for Detecting Synthetic Content. Computers 2024, 14, 1. [Google Scholar] [CrossRef]
- Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Morales, A.; Ortega-Garcia, J. Deepfake and beyond: A survey of face manipulation and fake detection. Inf. Fusion 2020, 64, 131–148. [Google Scholar] [CrossRef]
- Sharma, V.K.; Garg, R.; Caudron, Q. A systematic literature review on deepfake detection techniques. Multimed. Tools Appl. 2025, 84, 22187–22229. [Google Scholar] [CrossRef]
- Alrashoud, M. Deepfake video detection methods, approaches, and challenges. Alex. Eng. J. 2025, 125, 265–277. [Google Scholar] [CrossRef]
- Zhu, X.; Qian, Y.; Zhao, X.; Sun, B.; Sun, Y. A deep learning approach to patch-based image inpainting forensics. Signal Process. Image Commun. 2018, 67, 90–99. [Google Scholar] [CrossRef]
- Lai, Z.; Li, J.; Wang, C.; Wu, J.; Jiang, D. LIDeepDet: Deepfake Detection via Image Decomposition and Advanced Lighting Information Analysis. Electronics 2024, 13, 4466. [Google Scholar] [CrossRef]
- Raza, A.; Munir, K.; Almutairi, M. A novel deep learning approach for Deepfake image detection. Appl. Sci. 2022, 12, 9820. [Google Scholar] [CrossRef]
- Hsu, C.C.; Zhuang, Y.X.; Lee, C.Y. Deep fake image detection based on pairwise learning. Appl. Sci. 2020, 10, 370. [Google Scholar] [CrossRef]
- Cao, J.; Deng, J.; Yin, X.; Yan, S.; Li, Z. WPCA: Wavelet Packets with Channel Attention for Detecting Face Manipulation. In Proceedings of the 2023 15th International Conference on Machine Learning and Computing, Zhuhai, China, 17–20 February 2023; pp. 284–289. [Google Scholar]
- Ni, Y.; Zeng, W.; Xia, P.; Tan, R. A Deepfake Detection Algorithm Based on Fourier Transform of Biological Signal. Comput. Mater. Contin. 2024, 79, 5295–5312. [Google Scholar] [CrossRef]
- Yasir, S.M.; Kim, H. Lightweight Deepfake Detection Based on Multi-Feature Fusion. Appl. Sci. 2025, 15, 1954. [Google Scholar] [CrossRef]
- Stanciu, D.C.; Ionescu, B. Improving generalization in Deepfake detection via augmentation with recurrent adversarial attacks. In Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, Phuket, Thailand, 10–13 June 2024; pp. 46–54. [Google Scholar]
- Alattar, A.; Sharma, R.; Scriven, J. A system for mitigating the problem of Deepfake news videos using watermarking. Electron. Imaging 2020, 32, 1–10. [Google Scholar] [CrossRef]
- Truepic. Truepic App Lets Journalists Instantly Verify Images, Videos. International Journalists’ Network (IJNet), 30 October 2018. Available online: https://ijnet.org/en/story/truepic-app-lets-journalists-instantly-verify-images-videos (accessed on 2 September 2025).
- Content Authenticity Initiative. How it Works. Content Authenticity Initiative (CAI), 2024. Available online: https://contentauthenticity.org/how-it-works (accessed on 2 September 2025).
- Thomson Reuters. Reuters New Proof of Concept Employs Authentication System to Securely Capture, Store and Verify Photographs. Thomson Reuters Press Release, 30 August 2023. Available online: https://www.thomsonreuters.com/en/press-releases/2023/august/reuters-new-proof-of-concept-employs-authentication-system-to-securely-capture-store-and-verify-photographs (accessed on 20 September 2025).
- Rashid, M.M.; Lee, S.H.; Kwon, K.R. Blockchain technology for combating Deepfake and protect video/image integrity. J. Korea Multimed. Soc. 2021, 24, 1044–1058. [Google Scholar]
- Hasan, K.; Karimian, N.; Tehranipoor, S. Combating Deepfake: A Novel Hybrid Hardware-Software Approach. In Proceedings of the 2024 Silicon Valley Cybersecurity Conference (SVCC), Seoul, Republic of Korea, 17–19 June 2024; IEEE: New York, NY, USA, 2024; pp. 1–2. [Google Scholar]
- Jing, T.W.; Murugesan, R.K. Protecting data privacy and prevent fake news and Deepfake in social media via Blockchain technology. In Proceedings of the International Conference on Advances in Cyber Security, Penang, Malaysia, 8–9 December 2020; Springer: Singapore, 2020; pp. 674–684. [Google Scholar]
- Mao, D.; Zhao, S.; Hao, Z. A shared updatable method of content regulation for Deepfake videos based on Blockchain. Appl. Intell. 2022, 52, 15557–15574. [Google Scholar] [CrossRef]
- Seneviratne, O. Blockchain for social good: Combating misinformation on the web with AI and Blockchain. In Proceedings of the 14th ACM Web Science Conference 2022, Barcelona, Spain, 26–29 June 2022; pp. 435–442. [Google Scholar]
- Chen, C.C.; Du, Y.; Peter, R.; Golab, W. An implementation of fake news prevention by Blockchain and entropy-based incentive mechanism. Soc. Netw. Anal. Min. 2022, 12, 114. [Google Scholar] [CrossRef]
- Parlak, M.; Altunel, N.F.; Akkaş, U.A.; Arici, E.T. Tamper-proof evidence via Blockchain for autonomous vehicle accident monitoring. In Proceedings of the 2022 IEEE 1st Global Emerging Technology Blockchain Forum: Blockchain & Beyond (iGETBlockchain), Irvine, CA, USA, 7–11 November 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
- Nagothu, D.; Xu, R.; Chen, Y.; Blasch, E.; Aved, A. Defakepro: Decentralized Deepfake attacks detection using enf authentication. IT Prof. 2022, 24, 46–52. [Google Scholar] [CrossRef]
- Miotti, A.; Wasil, A. Combatting Deepfake: Policies to address national security threats and rights violations. arXiv 2024, arXiv:2402.09581. [Google Scholar] [CrossRef]
- Pawelec, M. Decent Deepfake? Professional Deepfake developers’ ethical considerations and their governance potential. AI Ethics 2024, 5, 2641–2666. [Google Scholar] [CrossRef]
- Fabuyi, J.; Olaniyi, O.O.; Olateju, O.; Aideyan, N.T.; Selesi-Aina, O.; Olaniyi, F.G. Deepfake Regulations and Their Impact on Content Creation in the Entertainment Industry. Arch. Curr. Res. Int. 2024, 24, 10–9734. [Google Scholar] [CrossRef]
- Putra, G.P.; Multazam, M.T. Law Enforcement Against Deepfake Porn AI: Penegakan Hukum Terhadap Deepfake Porn AI. Eur. J. Contemp. Bus. Law Technol. 2024, 1, 58–77. [Google Scholar]
- Mahashreshty Vishweshwar, S. Implications of Deepfake Technology on Individual Privacy and Security. 2023. Available online: https://repository.stcloudstate.edu/msia_etds/142/ (accessed on 25 September 2025).
- Lingyun, Y. Regulations on Detecting, Punishing, Preventing Deepfake Technologies Based Forgery. Cent. Asian J. Acad. Res. 2024, 2, 62–66. [Google Scholar]
- Kira, B. When non-consensual intimate Deepfake go viral: The insufficiency of the UK Online Safety Act. Comput. Law Secur. Rev. 2024, 54, 106024. [Google Scholar] [CrossRef]
- Vizoso, Á.; Vaz-Álvarez, M.; López-García, X. Fighting Deepfake: Media and internet giants’ converging and diverging strategies against hi-tech misinformation. Media Commun. 2021, 9, 291–300. [Google Scholar] [CrossRef]
- Temir, E. Deepfake: New era in the age of disinformation & end of reliable journalism. Selçuk İletişim 2020, 13, 1009–1024. [Google Scholar]
- Gonzales, N.H.; Lobian, A.L.; Hengky, F.M.; Andhika, M.R.; Achmad, S.; Sutoyo, R. Deepfake Technology: Negative Impacts, Mitigation Methods, and Preventive Algorithms. In Proceedings of the 2023 IEEE 8th International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Kuala Lumpur, Malaysia, 2–3 December 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
- Alanazi, S.; Asif, S. Exploring deepfake technology: Creation, consequences and countermeasures. Hum.-Intell. Syst. Integr. 2024, 6, 49–60. [Google Scholar] [CrossRef]
- Singh, S.; Amol, D. Unmasking Digital Deceptions: An Integrative Review of Deepfake Detection, Multimedia Forensics, and Cybersecurity Challenges. Multimed. Forensics Cybersecur. Chall. 2025, 15, 103632. [Google Scholar] [CrossRef]
- Seng, L.K.; Mamat, N.; Abas, H.; Ali, W.N. AI Integrity Solutions for Deepfake Identification and Prevention. Open Int. J. Inform. 2024, 12, 35–46. [Google Scholar]
- Ghediri, K. Countering the negative impacts of Deepfake technology: Approaches for effective combat. Int. J. Econ. Perspect. 2024, 18, 2871–2890. [Google Scholar]
- Taha, M.A.; Khudhair, W.M.; Khudhur, A.M.; Mahmood, O.A.; Hammadi, Y.I.; Al-husseinawi, R.S.; Aziz, A. Emerging threat of deep fake: How to identify and prevent it. In Proceedings of the 6th International Conference on Future Networks & Distributed Systems, Tashkent, Uzbekistan, 15–16 December 2022; pp. 645–651. [Google Scholar]
- Buo, S.A. The emerging threats of Deepfake attacks and countermeasures. arXiv 2020, arXiv:2012.07989. [Google Scholar] [CrossRef]
- Wang, S. How will users respond to the adversarial noise that prevents the generation of Deepfake? In Proceedings of the 23rd ITS Biennial Conference, Online, 21–23 June 2021. [Google Scholar]
- Tuysuz, M.K.; Kılıç, A. Analyzing the legal and ethical considerations of Deepfake Technology. Interdiscip. Stud. Soc. Law Politics 2023, 2, 4–10. [Google Scholar]
- Romero-Moreno, F. Deepfake Fraud Detection: Safeguarding Trust in Generative AI. Comput. Law Secur. Rev. 2025, 58, 106162. [Google Scholar] [CrossRef]
- Alexander, S. Deepfake Cyberbullying: The Psychological Toll on Students and Institutional Challenges of AI-Driven Harassment. Clear. House J. Educ. Strateg. Issues Ideas 2025, 98, 36–50. [Google Scholar] [CrossRef]
- Pedersen, K.T.; Pepke, L.; Stærmose, T.; Papaioannou, M.; Choudhary, G.; Dragoni, N. Deepfake-Driven Social Engineering: Threats, Detection Techniques, and Defensive Strategies in Corporate Environments. J. Cybersecur. Priv. 2025, 5, 18. [Google Scholar] [CrossRef]
- Mi, X.; Zhang, B. Digital Communication Strategies for Coping with Deepfake Content Distribution. In Proceedings of the 2025 Communication Strategies in Digital Society Seminar (ComSDS), St. Petersburg, Russia, 9 April 2025; IEEE: New York, NY, USA, 2025; pp. 82–86. [Google Scholar]
- Al-Dabbagh, R.; Alkhatib, M.; Albalawi, T. Efficient Post-Quantum Cryptography Algorithms for Auto-Enrollment in Public Key Infrastructure. Electronics 2025, 14, 1980. [Google Scholar] [CrossRef]
- Rossler, A.; Cozzolino, D.; Verdoliva, L.; Riess, C.; Thies, J.; Nießner, M. Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1–11. [Google Scholar]
- Abbasi, M.; Paulo, V.; José, S.; Pedro, M. Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks. Appl. Sci. 2025, 15, 1225. [Google Scholar] [CrossRef]
- Segate, R.V. Cognitive bias, privacy rights, and digital evidence in international criminal proceedings: Demystifying the double-edged ai revolution. Int. Crim. Law Rev. 2021, 21, 242–279. [Google Scholar] [CrossRef]
- Ma’arif, A.; Hari, M.; Iswanto, S.; Denis, P.; Syahrani, L.; Abdel-Nasser, S. Social, legal, and ethical implications of AI-Generated deepfake pornography on digital platforms: A systematic literature review. Soc. Sci. Humanit. Open 2025, 12, 101882. [Google Scholar] [CrossRef]
- Nair, K. Deepfake Detection: Comparison of Pretrained Xception and VGG16 Models. Ph.D. Thesis, National College of Ireland, Dublin, Ireland, 2025. [Google Scholar]