Article

A Multifaceted Deepfake Prevention Framework Integrating Blockchain, Post-Quantum Cryptography, Hybrid Watermarking, Human Oversight, and Policy Governance

by
Mohammad Alkhatib
Department of Computer Science, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia
Computers 2025, 14(11), 488; https://doi.org/10.3390/computers14110488
Submission received: 11 October 2025 / Revised: 1 November 2025 / Accepted: 3 November 2025 / Published: 8 November 2025

Abstract

Deepfake technology, driven by advances in artificial intelligence (AI) and deep learning (DL), has become one of the foremost threats to digital trust and the authenticity of information. Despite the rapid development of deepfake detection methods, the dynamic evolution of generative models continues to outpace current mitigation efforts. This highlights the pressing need for a more effective and proactive deepfake prevention strategy. This study introduces a comprehensive and multifaceted deepfake prevention framework that leverages both technical and non-technical countermeasures and involves collaboration among key stakeholders in a unified structure. The proposed framework has four modules: trusted content assurance, detection and monitoring, awareness and human-in-the-loop verification, and policy, governance, and regulation. The framework combines hybrid watermarking and embedding techniques with cryptographic digital signature algorithms (DSAs) and blockchain technologies to ensure that media content is authentic, traceable, and non-repudiable. Comparative experiments were conducted using both classical and post-quantum DSAs to evaluate their efficiency, resource consumption, and gas costs in blockchain operations. The results revealed that the Falcon-512 algorithm outperformed the other post-quantum algorithms while consuming fewer resources and incurring lower gas costs, making it a preferable option for real-time, quantum-resilient deepfake prevention. The framework also employs AI-based detection models and human oversight to enhance detection accuracy and robustness. Overall, this research offers a novel, multifaceted, and governance-aware strategy for deepfake prevention. The proposed approach significantly contributes to mitigating deepfake threats and offers a practical foundation for secure and transparent digital media ecosystems.

1. Introduction

Recent advances in deep learning (DL) have led to the rise of deepfake technology, which uses AI and DL to fabricate media content and represents a growing threat to digital trust, privacy, and social stability. Deepfakes have become remarkably realistic, thanks in large part to Generative Adversarial Networks (GANs), which have enabled malicious actors to alter audio, video, and image content easily and with frightening accuracy. Although deepfakes were initially explored for creative and recreational purposes, their exploitation for disinformation, identity theft, and reputational damage has elicited considerable global concern [1,2,3]. The rapid growth in the availability of generative tools and the emergence of open-source, user-friendly tools have exacerbated these risks, making it crucial to act quickly and collaboratively to prevent widespread abuse [4,5,6,7,8,9,10,11,12,13,14].
Current studies have primarily concentrated on deepfake detection, utilizing both traditional image forensics and sophisticated deep learning models. Convolutional Neural Networks (CNNs) and hybrid architectures, including CNN–LSTM, have attained significant accuracy on benchmark datasets such as FaceForensics++ and DFDC [15,16,17,18]. Nevertheless, these detection methods remain reactive, identifying altered content only after it has been shared. As a result, their ability to stop the spread of false information and protect reputations is limited [19,20,21,22,23,24]. Moreover, detection models encounter significant obstacles in cross-dataset generalization, adversarial resilience, and computational efficiency, which diminishes their applicability in real-world settings [16,25].
To mitigate these constraints, recent research has advocated preventive strategies that prioritize content authenticity and provenance verification. Digital watermarking, blockchain integration, and cryptographic signatures have been used to protect content metadata and make it traceable [26,27,28,29,30,31,32,33,34,35,36,37]. For instance, blockchain-based systems use decentralization and immutability to make verifying the authenticity of media straightforward [26,27]. Various research projects have employed blockchain to authenticate media content. Known examples include Truepic and the Content Authenticity Initiative, which used blockchain to verify the authenticity of photojournalistic material [28,29]. Another prominent example is the Reuters blockchain project, which utilized the security features of blockchain to verify the source authenticity of digital photos in conflict zones [30]. At the same time, hybrid methods that combine watermarking with smart contracts or distributed storage (such as IPFS) have shown promise but suffer from scalability and cost problems [36,37,38]. In addition to technical solutions, a wide range of studies have underscored the significance of regulatory governance, awareness campaigns, and media literacy initiatives to alleviate the societal risks posed by deepfakes [38,39,40,41,42,43,44,45,46,47].
A significant research gap related to deepfake prevention is revealed in the literature. Various studies have examined specific aspects of the deepfake issue, especially the technical methods used in the creation and detection of deepfakes. Authors have also highlighted the importance of adopting interdisciplinary defenses that combine technical methods and ethical AI policies [48,49,50]. However, there is still a need for further research to develop a comprehensive operational framework that leverages both technical and non-technical countermeasures to support a cohesive, multifaceted prevention strategy. To counter deepfakes, it is essential to adopt a multifaceted strategy that leverages technical and non-technical countermeasures and involves collaboration among key stakeholders, including policymakers, standardization organizations, and media content creators, among others [50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65].
This study fills this gap by proposing a deepfake prevention framework that combines technical and non-technical methods to ensure the authenticity, traceability, and accountability of media content and to provide a proactive approach to mitigating deepfake distribution and its associated risks. The proposed framework consists of four main modules: (1) trusted content assurance through cryptographic watermarking, digital signatures, and blockchain; (2) detection and monitoring using AI-based and real-time scanning tools; (3) awareness, training, and human-in-the-loop systems for education and manual verification; (4) policy, governance, and regulation for enforcing compliance and legal accountability. The framework also includes both classical and post-quantum cryptographic schemes, such as RSA, ECDSA, Dilithium, Falcon, and SLH-DSA, to ensure long-term security against quantum-enabled threats.
The current research assesses the efficiency, robustness, and feasibility of the proposed framework through extensive experimentation and security analysis. The assessment aims to find the most appropriate methods and techniques to implement within the framework to ensure optimum efficiency. The experimental results demonstrated that the Falcon DSA achieves the best performance–cost results and provides long-term security and resilience against quantum computing attacks. Moreover, the results showed that combining technical defenses with human intervention has a positive effect. This work ultimately provides a sustainable and interdisciplinary approach that integrates AI, cryptography, blockchain, and policy measures to preemptively protect against the forthcoming generation of deepfake threats.

2. Literature Review

Deepfake technology, driven by rapid advances in AI, DL, and related applications, has evolved into one of the most significant challenges and threats to digital media integrity. Research in this field has explored both the potential benefits and the serious threats and consequences associated with deepfake technology. Recent studies have emphasized its impacts and misuse in misinformation, political manipulation, and privacy violations [1,2,3]. Moreover, another key challenge is the potential use of deepfakes to fabricate evidence to be tendered to courts, especially during criminal proceedings [61]. Particular technologies, such as Generative Adversarial Networks (GANs), have been identified as principal enablers of deepfake content creation; they can produce highly realistic synthetic media that can deceive even expert observers [2]. As noted in the studies published in [4,5,6], the sophistication and continuous advances of GAN-based models continue to outpace existing detection capabilities, underscoring the urgent need for more advanced countermeasures, categorized into detection and prevention approaches. The following subsections discuss the main deepfake detection and prevention approaches in the literature.

2.1. Deepfake Detection Approaches

Many research works in the literature have focused on developing effective methods for deepfake detection. Various studies have explored techniques such as traditional image forensics and advanced DL architectures. The majority of conventional approaches have developed methods to enhance deepfake detection, including statistical and signal processing-based analyses and the detection of inconsistencies in lighting, texture, or frequency domains [7,8]. DL-based models, particularly Convolutional Neural Networks (CNNs) and hybrid CNN–LSTM architectures, have shown superior performance on benchmark datasets such as FaceForensics++ and DFDC, achieving detection accuracies above 95% in controlled settings [15,16,17,18]. Although the results are promising, these detection methods face major challenges in generalization, adversarial robustness, and computational efficiency [21,22,23]. For example, the majority of detection methods achieve significantly lower accuracy when applied to unseen datasets. Recent studies highlight the potential of frequency-domain decomposition and explainable AI (XAI) methods to enhance interpretability and resilience to adversarial manipulation [17,18].
Despite progress in detection methods, researchers agree that reactive detection models alone are insufficient for combating deepfake threats [16,25]. The main reason is that detection systems, while effective for forensic validation, operate after content dissemination, limiting their ability to prevent reputational damage or misinformation propagation. In some cases, this may cause people to lose trust in all media, since they lack the resources to distinguish between what is real and what is fake; as a result, no media will be considered reliably trustworthy. This limitation has prompted further research into complementary deepfake prevention strategies, which are specifically designed to inhibit or trace the creation and spread of deepfake content before it reaches the public domain, thereby mitigating the threats posed by deepfake distribution.

2.2. Deepfake Prevention Techniques

A common approach adopted in deepfake prevention strategies is to embed authenticity markers or cryptographic signatures that assist in verifying the authenticity of media content. Other strategies rely on implementing technologies that can ensure provenance and accountability, such as distributed ledger technology. Technical prevention measures include watermarking, blockchain integration, and cryptographic signature mechanisms [26,27,28,29,30,31,32,33,34,35,36,37]. For instance, blockchain-based watermarking techniques have been proposed to store and verify content metadata securely, leveraging the powerful security features of blockchain such as immutability and decentralization [26,27]. Similarly, hybrid approaches combining various technical methods, including digital watermarking, IPFS storage, and Ethereum smart contracts, have demonstrated strong potential for authenticity verification and traceability of media content; however, they suffer from scalability and cost limitations [36,37,38]. Other research studies have explored algorithmic and adversarial approaches, such as challenge–response authentication protocols and steganography-based deepfake disruption methods [56,57,58,59]. Although these methods appear technically promising, they often rely on predefined challenges or high computational resources, making them less adaptable to real-time or large-scale applications and therefore insufficient on their own to prevent the spread of deepfakes.
Beyond purely technical defenses, numerous studies have emphasized the need for non-technical methods, represented by regulatory and educational measures, to mitigate the misuse of deepfake technologies. Recent legal and ethical analyses have revealed significant gaps in national and international legislation regarding digital identity, consent, and content manipulation [39,40,41,42,43,44,45,46,47,48,49]. For example, the authors of [42,43] examined case studies in Indonesia and India, revealing substantial deficiencies in legal protections for victims of deepfake exploitation. Scholars recommend enhancing data protection frameworks and promoting ethical AI development standards to counter the societal harms of deepfake proliferation [48,49]. Additionally, parallel research underscores the importance of training, awareness campaigns, and education to improve media literacy and empower users to recognize synthetic media [46,47,48].
The following subsection discusses research studies related to the integration of technical and non-technical countermeasures for deepfake prevention.

2.3. Integration of Technical and Non-Technical Countermeasures

While existing studies have made valuable contributions to deepfake detection and mitigation using technical and legal (non-technical) methods, a consistent finding across the literature is that isolated technical or legal approaches are insufficient to address the multifaceted risks of this phenomenon [49,50,51,52,53,54,55]. Works such as [49,50,51] explicitly call for multi-layered mitigation strategies that integrate technical safeguards with non-technical interventions, including policy development, education, and stakeholder collaboration.
The next subsection highlights the research gaps found in the literature.

2.4. Research Gaps

The reviewed studies collectively demonstrate a pressing need for a comprehensive, multifaceted deepfake prevention framework that bridges the divide between technical innovation and non-technical countermeasures. The key issue is that existing solutions, whether algorithmic, blockchain-based, or regulatory, tend to operate in isolation and therefore address only part of the problem. Moreover, there remains limited coordination among the stakeholders responsible for content creation, verification, and dissemination, which undermines global efforts to maintain digital trust and authenticity [55,56,57,58]. The gap lies not only in developing robust technical mechanisms but also in establishing a collaborative, multifaceted framework or ecosystem that ensures efficiency, interoperability, transparency, and user empowerment. The next section explains how the current research addresses the identified gaps.

2.5. Contribution of the Present Research

To address this gap, the current study introduces an integrated, multifaceted deepfake prevention framework that operationalizes a defense-in-depth strategy across four functional modules. The proposed framework combines various technical and non-technical methods, including cryptographic watermarking, blockchain-based metadata management, and post-quantum digital signature mechanisms, with complementary awareness, training, and human-in-the-loop verification components. Moreover, the proposed framework involves key stakeholders, such as policymakers, law enforcement agencies, and standardization organizations, to develop a holistic and more effective approach to deepfake prevention.
The next section introduces the proposed deepfake prevention framework and associated operating model.

3. Deepfake Prevention Framework

The proposed deepfake prevention framework adopts a multifaceted strategy to offer robust and comprehensive protection against the threats posed by deepfakes. It utilizes both technical and non-technical methods to provide an effective mechanism to identify manipulated content and resist the distribution of deepfake materials.
The next subsection describes the framework adopted to prevent deepfakes, focusing on its multifaceted approach. It offers a detailed explanation of each module of the framework, along with the security services it provides and the mechanisms implemented to deliver these services.

3.1. Modules of Deepfake Prevention Framework

The framework consists of four main modules that interact and complement one another to offer enhanced protection. The modules and mechanisms involved in the framework can be outlined as follows.
A. 
Module 1: Trusted Content Assurance (Technical)
This module focuses on providing the main security services that ensure the content is trusted, which are authentication, integrity, and non-repudiation. It also employs blockchain and smart contract technologies to enable content tracking, avoid manipulation, and enhance the transparency of the deepfake prevention environment. Consequently, this module guarantees that all media content has a verifiable origin before distribution. To provide essential security services, the trusted content assurance module involves the implementation of several mechanisms. The key mechanisms involved in this module can be described as follows:
(1)
Digital Watermarking and Cryptographic Signatures
To provide a more effective defense against deepfakes, it is essential to implement security mechanisms that ensure authentication, data integrity, and non-repudiation. Authentication is achieved through digital watermarking, which verifies the authenticity of the content's origin. Moreover, cryptographic digital signature algorithms provide integrity and non-repudiation, ensuring that the media content has not been altered since its creation and that the signer cannot deny responsibility for the media and the corresponding signature. This research adopts a cryptographic watermarking technique that uses the hash function SHA-3 to compute the hash code of the media content and its metadata. The hash code is then signed using the media creator's private key. Finally, the cryptographic digital signature is embedded within the media as a watermark.
Any party who wants to verify the originality of the content can use the public key of the signer (assumed to be the media creator). The inputs to the verification operation are the signer's public key, the content, and the metadata. The output is either a valid signature, indicating that the media content is authentic and unmanipulated, or an invalid signature, indicating that the media cannot be authenticated.
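For illustration, the following is a minimal sketch of this sign-and-verify flow, assuming Python's hashlib and the third-party cryptography package; ECDSA over P-256 stands in here for any of the DSAs compared in this study, and the helper names are hypothetical.

```python
# Minimal sketch of the Module 1 sign/verify flow. ECDSA (P-256) stands in
# for any of the DSAs compared in this study (RSA, ECDSA, Dilithium,
# Falcon, SPHINCS+); helper names are illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def content_digest(media: bytes, metadata: bytes) -> bytes:
    # SHA-3 hash over the media content and its metadata, as described above.
    return hashlib.sha3_256(media + metadata).digest()

def sign_content(private_key, media: bytes, metadata: bytes) -> bytes:
    # The creator signs the digest with their private key; the result is
    # what gets embedded in the media as a watermark.
    return private_key.sign(content_digest(media, metadata),
                            ec.ECDSA(hashes.SHA256()))

def verify_content(public_key, sig: bytes, media: bytes, metadata: bytes) -> bool:
    # Any party can verify with the signer's public key.
    try:
        public_key.verify(sig, content_digest(media, metadata),
                          ec.ECDSA(hashes.SHA256()))
        return True    # valid: authentic and unmodified
    except InvalidSignature:
        return False   # invalid: media cannot be authenticated

creator_key = ec.generate_private_key(ec.SECP256R1())
sig = sign_content(creator_key, b"<media bytes>", b'{"owner": "RE-001"}')
assert verify_content(creator_key.public_key(), sig,
                      b"<media bytes>", b'{"owner": "RE-001"}')
```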
This study conducts a comparison in terms of time and resource consumption between the various digital signature schemes, including classical algorithms, such as RSA and ECDSA, and modern PQDSAs, such as Dilithium, Falcon, and SPHINCS+. The aim is to understand the variation in performance level and resource requirements for both classical and post-quantum DSAs.
The current study uses a hybrid watermark embedding technique that combines the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). This hybrid technique offers a high degree of imperceptibility for the media content, as well as robustness against different types of attacks. It is worth mentioning that the metadata must include a pledge by the media creator or signer that the content is original and not manipulated by deepfake technology, together with acceptance of the legal consequences of violating the relevant legislation. Additionally, the metadata contains useful information about ownership, copyright, timestamps, content tracking, and authentication.
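As a sketch of how such DWT+SVD embedding can work, the following assumes the pywt and numpy packages; a production scheme would also use a secret embedding key, perceptual scaling, and a matching extraction routine.

```python
# Minimal DWT+SVD watermark embedding sketch (illustrative parameters).
import numpy as np
import pywt

def embed_watermark(host: np.ndarray, payload: np.ndarray,
                    alpha: float = 0.05) -> np.ndarray:
    # 1) One-level DWT; embed in the low-frequency LL sub-band for robustness.
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")
    # 2) SVD of the LL sub-band.
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    # 3) Additively perturb the singular values with the payload bits.
    S_marked = S + alpha * payload[: S.size]
    # 4) Rebuild LL and invert the DWT to obtain the watermarked image.
    LL_marked = (U * S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.rand(256, 256)        # stand-in host image
bits = np.random.randint(0, 2, 128)    # stand-in payload (signature + metadata)
watermarked = embed_watermark(host, bits)
```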
(2)
Blockchain and Smart Contracts Technologies
Blockchain is employed to ensure transparency and immutability, and to enable effective tracking of media content, its status, and metadata. Every piece of media created is registered on a distributed ledger. This research suggests using a public permissioned blockchain to ensure high levels of security and openness while maintaining reliable user authentication.
Original media files can be stored off-chain due to the storage limitation of blockchain. Only the metadata of the media and its corresponding digital signature need to be stored on-chain to enhance performance and scalability. IPFS private storage technologies can be used to store the media files so they can be accessed for the purpose of verification.
Smart contracts are utilized to provide governance and facilitate interaction among the involved parties, including media creators, authenticity verifiers, and monitoring systems. Before registering media content on the blockchain, the media creator must insert a link to the media file, its metadata, and the associated signature into the smart contract and then sign the contract. The verification authority must first verify the signature using the signer's public key. If the signature is valid and deepfake detection confirms that the media is original, the verification authority signs the smart contract. Once the authentication authority signs the contract, the status of the media content is changed to "authentic" and it is immediately registered on the blockchain platform. However, if verification fails, the authority rejects the media and notifies the concerned parties, including the media creator, monitors, and law enforcement agencies. The output of the blockchain is an authenticity token, which includes the status and metadata. Any individual or organization wishing to check the authenticity and origin of media content can look up its status on the blockchain and validate its authenticity, origin, and time of creation.
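To make this registration step concrete, the sketch below shows what such a call could look like using web3.py; the node endpoint, contract address, ABI, and the registerMedia function are all hypothetical placeholders, not the contract deployed in this study.

```python
# Illustrative on-chain registration via web3.py. Endpoint, address, ABI,
# and registerMedia(...) are assumptions made for this sketch.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed Ethereum node

REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
REGISTRY_ABI = []  # placeholder: ABI of the hypothetical registry contract
registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)

media_hash = b"\x00" * 32            # SHA-3 digest of media + metadata
metadata_uri = "ipfs://<CID>"        # off-chain pointer to the media file
signature = b"<creator signature>"   # creator's digital signature

# Register the media; the verification authority later updates its status
# ("authentic" or "fake") by signing the same smart contract.
tx_hash = registry.functions.registerMedia(
    media_hash, metadata_uri, signature
).transact({"from": w3.eth.accounts[0]})

receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("authenticity token recorded in block", receipt.blockNumber)
```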
B. 
Module 2: Detection and Monitoring (Technical)
This module employs sophisticated deepfake detection technologies to catch manipulated media that bypasses the authentication module. The module is added to the deepfake prevention framework for three reasons. First, it aims to detect any deepfake content that might be uploaded by malicious users; in such cases, the deepfake detector triggers the status in the smart contract to change to "fake", and the status, along with the metadata, is stored on the blockchain. Second, the detection module may be used when the media content lacks authentication artifacts, such as a digital signature and metadata; in this case, the module can determine whether the media has been manipulated by deepfake technology and trigger a smart contract to store the status and hash code on the blockchain. Third, if the deepfake detector fails to identify the status of the media content, the status remains "unknown" and feedback is sought from the human intervention and training module. The key components of this module can be outlined as follows.
(1)
Deep Learning Detector
This sub-module operates a deep learning detector based on GAN-fingerprint to check media content, including audio and video, and classify it as either fake, real, or uncertain. This module is designed to be flexible, allowing the replacement of current deepfake detection technologies with more powerful options that may emerge in the future. Additionally, open-source tools such as FaceForensics++ and the DeepFake Detection Challenge (DFDC) models can be utilized at this stage to enhance deepfake detection capabilities.
The aim of the detection sub-module is to identify deepfake content from media files uploaded by registered users, subsequently updating the status in the smart contracts to “fake”. Moreover, this sub-module can be utilized by guest users seeking to verify the authenticity of a media artifact, determining whether it is real or manipulated by deepfake technology. In this scenario, the hash code of the media, along with its metadata and status, will be registered on the blockchain.
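As a simple illustration of this three-way decision, the sketch below assumes a hypothetical detector that returns a probability of manipulation; the thresholds are illustrative, not values used in this study.

```python
# Three-way decision rule over a detector's output; thresholds are illustrative.
def classify(p_fake: float, t_fake: float = 0.8, t_real: float = 0.2) -> str:
    if p_fake >= t_fake:
        return "fake"        # smart contract status set to "fake"
    if p_fake <= t_real:
        return "real"        # status and hash code registered on-chain
    return "uncertain"       # escalated to the human-in-the-loop module

print(classify(0.93))  # -> fake
```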
(2)
Real-Time Monitoring
This sub-module operates an online monitoring system that scans social media and other platforms and receives reports from users about suspected media content. Countermeasures, including rate limiting and AI-based anomaly detection, can be implemented to avoid overwhelming the system with false flags.
Additionally, the monitoring system forwards the suspected content to the deepfake detector, which in turn checks and updates the status to be stored in the blockchain. This will help to limit the threat of deepfake distribution, as users can easily search the status and metadata of the media content to determine its authenticity.
The output of the detection module indicates the status of the analyzed media content, which can be “real,” “fake,” or “uncertain.” If the output of the detection module is “uncertain”, it means that the deepfake detector was unable to determine the status; the suspected content will then be forwarded to the next module, titled “awareness, training, and human-in-the-loop.” This module requires human intervention and experience to assist in identifying the status of the media content.
C. 
Module 3: Awareness, Training, and Human-in-the-Loop (Non-Technical)
This module integrates non-technical countermeasures against deepfakes to complement the technical countermeasures implemented in Modules 1 and 2. The purpose of this module is to raise awareness among the various groups involved in deepfake prevention, including end-users, journalists, and regulators, about the threats posed by deepfakes and their consequences. Additionally, it aims to train individuals to recognize suspicious or fake content and to incorporate human oversight into technical detection systems. Human intervention strengthens deepfake defense, as trained and experienced humans can identify anomalies or manipulated content and flag them even when an AI-based detection system fails to do so. This module contains the following sub-modules.
(1)
Awareness Campaigns
Studies have shown that most people do not realize how convincing deepfakes have become. Therefore, public awareness is crucial to prevent the spread of fake media. To enhance public awareness, this module involves awareness initiatives directed at various parties. The initiatives include a brief explanation of deepfakes with simple examples and should highlight the threats and consequences posed by deepfakes. Moreover, the awareness campaigns can be customized to target specific groups, such as policymakers and regulatory organizations. Another important component is teaching attendees the basic red flags of manipulated artifacts, such as lip-sync mismatch, unnatural eye blinking, and inconsistent reflections. The campaigns can take the form of short, directed learning materials or videos accompanied by an exam; the involved parties study the learning materials and pass the exam. This enhances public awareness and humans' ability to detect deepfakes.
(2)
Training Programs
This sub-module involves designing targeted education and training courses for different stakeholders: for example, training for journalists to avoid spreading fake news, training for employees to spot and avoid fake content used in social engineering scams, and special courses for law enforcement and legal professionals to help them understand the evidentiary value of deepfake detection.
The training can take the form of case studies, e-learning modules, and workshops.
Well-trained humans can assist the deepfake detection module by investigating media content and confirming its status.
D. 
Module 4: Policy, Governance, and Regulation (Non-Technical)
Prior studies and real-world experiences have shown that technical deepfake prevention countermeasures, while essential, are not sufficient on their own and should be supported by policies, regulations, and governance procedures. The fourth module aims to provide enforcement mechanisms and ensure compliance with internationally recognized policies and regulations concerning the use of deepfake technology. This module includes the following main components.
(1)
Laws and Regulations
This component includes acts, regulations, and national and international cybersecurity laws related to the use of deepfake technology. Additionally, other types of laws concerned with deepfake creation are considered, such as copyright laws, safety laws, and nondiscrimination laws [64]. These laws and regulations, along with their updates, must be publicly announced and communicated to all relevant parties.
(2)
Policy Enforcement
This component focuses on providing effective ways to enforce the policies and laws governing the use of deepfake technology. The proposed framework assumes that all major stakeholders are involved in the deepfake prevention ecosystem. Therefore, when a violation is detected (e.g., the spread of deepfake content), the governance module ensures that notifications and informative reports are communicated to law enforcement agencies and to the social media or other venues where the deepfake content was found. Additionally, the framework guarantees that the necessary evidence related to violations of AI acts is securely stored and accessible via distributed ledger technology, facilitating future investigations and digital forensic efforts.
A summary of the four modules of the proposed framework and the components of each module is presented in Figure 1.
The operating model of the deepfake prevention framework is introduced and discussed in the following subsection.

3.2. The Operating Model of the Deepfake Prevention Framework

The operating model explains how the proposed deepfake prevention framework functions as a whole, integrating technical and non-technical components to curb the spread of deepfake content. It also clarifies the roles of the key stakeholders and shows how each one interacts with and benefits from the framework's functions.
The key stakeholders and their roles in the deepfake prevention framework are presented in the next subsection.

3.2.1. Stakeholders and Their Roles

The proposed deepfake prevention approach requires collaboration among various parties. The key stakeholders involved in this framework and their roles can be described as follows:
  • Technical Stakeholders: This category includes AI researchers, developers, cybersecurity specialists, and forensic analysts. Their roles include improving deepfake detection and content authentication mechanisms. Additionally, forensic analysts can assist regulatory institutions by providing the necessary evidence to prevent the distribution of deepfakes and prosecute violators.
  • Operational Stakeholders: This category includes non-technical parties that are registered in the framework's ecosystem and are eligible to perform essential operations, such as creating digital signatures for their media content and assisting in the deepfake detection process with expert intervention. Examples of stakeholders in this category include journalists and governmental or nongovernmental agencies that need to secure the media content they produce using the authentication and digital signature services offered by the framework. The category also includes the non-profit independent organizations responsible for managing and operating the framework's platforms; their role includes registering other parties, granting privileges, and performing necessary maintenance of the system platform. Finally, it includes experts or trained groups who contribute to improving the accuracy of deepfake detection.
  • Regulatory Stakeholders: This category comprises regulatory and law enforcement agencies responsible for updating rules and regulations relevant to deepfake issues, as well as taking legal actions against offenders. Additionally, domestic and international technical standardization bodies are considered in the framework.
  • End-users: This category refers to the general public, including consumers, victims of manipulation, and other parties who can utilize the platform to verify media content or report suspected cases.
The next subsection demonstrates how the different framework modules and stakeholders interact with each other within the deepfake prevention system.

3.2.2. Operational Workflow and Module Interaction

The framework operates as a unified comprehensive ecosystem that integrates four interdependent components or modules: (1) trusted content assurance, (2) detection and monitoring, (3) human-in-the-loop verification, and (4) governance and policy management. The following sequence outlines the practical workflow from media submission to on-chain registration:
  • Content Upload and Metadata Capture: The process begins when a verified user, such as a journalist or any Registered Entity (RE), uploads a digital image or video to the system. Upon upload, the Trusted Content Assurance module automatically captures metadata and generates a cryptographic hash and digital signature. These attributes are recorded temporarily in a secure local storage area pending verification.
  • Authenticity Verification: The first module checks whether the uploaded media contains a valid watermark or digital signature embedded during content creation. If the signature is verified using the public key of an RE, the metadata and the cryptographic digital signature of the media proceed directly to blockchain registration. If not, the media is flagged for further investigation and verification by the second module.
  • AI-Driven Deepfake Detection: The second module, detection and monitoring, applies AI-based detection methods to verify the authenticity of the media. If the second module fails to determine the authenticity status, the media content is automatically escalated to human verification.
  • Human-in-the-Loop Verification: Expert reviewers receive flagged media content for additional verification. The reviewers check the AI decision, visual indicators, and metadata. Verified outcomes (authentic or fake) are digitally signed by the reviewer and returned to the previous module.
  • Blockchain Registration and Provenance Tracking: The verified hash and metadata of each authentic media item are immutably recorded on the blockchain platform through a smart contract. This record includes necessary information, such as the source ID, timestamp, digital signature, metadata, and verification status. If the media is classified as manipulated, a “fake” flag is written on-chain to prevent re-uploads and provide traceability for future detection.
  • Governance and Policy Enforcement: As the fourth module, the governance and policy management module ensures compliance with standards and with regulatory, ethical, and data-protection requirements. It also issues notifications to law enforcement agencies and media content platforms and triggers awareness or takedown procedures in accordance with applicable laws and organizational policies.
The aforementioned sequential operations demonstrate how the four modules included in the proposed deepfake prevention framework function collaboratively to provide an effective multifaceted deepfake prevention strategy.

3.2.3. Component and Stakeholder Interactions

To provide a better understanding of the operating model, this section explains the flows of information between the technical and non-technical modules, decision-making, and responsibility within the current deepfake prevention framework.
Interaction and dataflow between the different modules within the deepfake prevention framework are depicted in Figure 2.
In the first module of the framework, media content can be authenticated through digital watermarking and cryptographic signatures which are then stored along with the metadata of the content on the blockchain. The second module conducts deepfake detection as needed to ensure that the content is authentic and not manipulated. The third module involves intervention from an experienced human to verify the authenticity of the content in case deepfake detection fails. Eventually, the fourth module is activated if deepfake content is identified, escalating the case to ensure law enforcement involvement, supporting forensic analysis efforts, and notifying concerned parties.
There are two main scenarios for utilizing the deepfake prevention framework, which are as follows:
(1)
An end-user seeks to verify the authenticity of a media artifact.
(2)
A registered entity (governmental or nongovernmental agency) wants to upload media content it has produced and obtain the security services offered by the framework.
The first scenario is illustrated in Figure 3. In this scenario, end-users can upload media content such as video or audio to verify its authenticity. There are two options available:
  • In the first option, users are allowed to upload the media content along with a digital signature, enabling the first module of the prevention framework to verify the content and the signature using the creator’s public key.
  • Alternatively, users can upload media without a signature. In this case, the prevention system needs to extract the watermark that contains the digital signature and then perform signature verification.
Module 1 is the starting stage for the first scenario. If the media content is authenticated and the digital signature is successfully verified, there is no need to proceed to the second module and perform detection. The user can be confident that the content is not manipulated by deepfake technology and originates from the intended party.
However, if the signature is missing, the system moves the media content to the second module, the detection module, where an AI-based detection algorithm determines whether the content is a deepfake, authentic, or of unknown status. If the detection process determines the media to be authentic, the result is returned to the user and the process ends. On the other hand, if the result is "deepfake", the system sends notifications to all concerned parties, such as end-users and regulatory and law enforcement agencies, to take the necessary action. Moreover, the system stores the hash code of the content, the metadata, and the status on the blockchain to assist in future investigations and digital forensic efforts. If the result of the detection is "unknown", the media is moved to the third module, where human intervention is required. Experienced individuals assess the content further and provide a rating to indicate whether it is "authentic" or a "deepfake".
Figure 4 illustrates the second use-case scenario, in which a registered entity can utilize various functional modules of the prevention framework to mitigate the threats posed by deepfakes.
At the first step, the registered entity, which possesses both public and private cryptographic keys, uploads its media to the system platform. The platform then computes the hash code using a cryptographic hash function that takes the following inputs: (1) the media content, (2) its metadata, and (3) an official agreement certifying that the content is original and not manipulated. Afterward, the registered entity signs the hash code using its private key. The digital signature, along with the metadata, is embedded into the media as a digital watermark.
Next, the details, including the metadata, the hash code, and the digital signature, are input into the smart contract. An important step before storing these details on the blockchain is to perform deepfake detection using the second module, ensuring the authenticity of the signed media content. This step is necessary to prevent a registered entity from signing and uploading fake content. As in the first scenario, if the detection status is "unknown", the content is evaluated in the third module with intervention from experienced evaluators. If the content is deemed fake, the functions of the fourth module are triggered, leading to involvement from law enforcement and regulatory agencies to take legal action against offenders and ensure compliance.
Conversely, if the deepfake detection process confirms the media content as authentic, the status will be updated in the smart contract. Then, the smart contract will be signed by the independent organization responsible for managing the system’s platform. At the final step, the hash code, the metadata of the content, and its corresponding signature will be stored on the blockchain.
The following section presents the experimental results, interpretation, and key findings and contributions of this study.

4. Experimental Results and Discussion

This section describes the implementation environment and the main evaluation scenarios conducted in this study. It also presents and discusses the key experimental results in detail. Additionally, the section introduces the main research findings and emphasizes the contributions of this research with respect to deepfake prevention efforts.

4.1. Implementation Environment

This subsection outlines the components of the implementation environment used to conduct experiments in this research study. It describes the algorithms, methods, and technologies utilized to implement the deepfake prevention framework.

4.1.1. Algorithms and Technologies

In the following, a brief description is given of the main algorithms and technologies used for the purpose of content authentication, as well as to provide necessary security services including integrity, non-repudiation, immutability, tracking, and transparency.
  • Cryptographic Digital Signature Algorithms: DSAs provide valuable security services such as authentication, integrity, and non-repudiation. This research utilized PQDSAs to provide long-term security and resilience against cyberattacks that employ advanced quantum computing technology. In particular, the experiments implemented the post-quantum DSAs standardized by NIST, namely Dilithium, Falcon, and SPHINCS+. Moreover, this research conducted a comparison with classical DSAs, such as RSA and ECDSA, to find the most efficient DSA for the proposed framework [56].
  • Cryptographic Hash Algorithms: Hash algorithms are essential to ensure integrity and the efficient implementation of digital signature algorithms. The experiments conducted in this study used the secure hash algorithm SHA-256 due to its provable security and high performance.
  • Digital Watermarking: This research employed a hybrid digital watermarking embedding technique which combines DWT and SVD to offer robustness against different types of attacks. Mainly, the watermarking technique is used to invisibly embed metadata and cryptographic signatures within media content. This facilitates content authentication later via extracting the watermark and performing the digital signature verification operation.
  • Blockchain and Smart Contracts: This study used a permissioned public blockchain built on Ethereum. This offers an enhanced level of security since it adds an access control module to ensure that only registered and authenticated entities can perform operations like signing or verifying digital content. It is also public, allowing end-users to access the blockchain and verify the media content. Smart contracts are used to improve governance and facilitate key operations on the blockchain.
  • Deepfake Detector: This research uses open-source tools for deepfake detection such as FaceForensics++ and DeepFake Detection Challenge (DFDC) models. The detection stage comes after content authentication and offers an extra layer of defense.
  • Likert Scale Rating: This research applies a rating system on a scale from 1 to 10, where 10 indicates strong confidence that the content is authentic and not manipulated, while 1 signifies that the content is a deepfake. If detection fails, experienced participants are asked to rate the authenticity of the media; if the average rating exceeds 75% of the maximum score (i.e., 7.5 out of 10), the content is considered authentic; otherwise, it is classified as deepfake content (a decision-rule sketch follows this list).
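A minimal sketch of this decision rule follows, assuming ratings are collected as integers from 1 to 10; the sample ratings are illustrative.

```python
# Human-in-the-loop decision rule: accept as authentic when the average
# Likert rating exceeds 75% of the maximum score (7.5 out of 10).
def judge(ratings: list[int]) -> str:
    average = sum(ratings) / len(ratings)
    return "authentic" if average > 0.75 * 10 else "deepfake"

print(judge([9, 8, 10, 7, 9]))  # -> authentic (average 8.6 > 7.5)
```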

4.1.2. Datasets

This study used images with sizes ranging from 500 KB to 1000 KB to test and benchmark the performance of digital signature schemes and watermarking embedding technologies. Moreover, the study used open-source deepfake datasets, such as FaceForensics++, and DeepFake Detection Challenge (DFDC) to test the performance of deepfake detectors. The research also involved a sample of 10 trained human evaluators to judge the authenticity of the media content.
For the sample size, experiments in this study utilized 2000 images/videos (1000 authentic and 1000 deepfakes) from the FaceForensics++ and DFDC datasets.

4.1.3. Evaluation Metrics

In the following, a list of evaluation metrics employed in this study is given with a brief description.
  • Performance: This indicates the time delay of digital signature operations, the time consumed by watermarking techniques, and the time elapsed when appending data to the blockchain. For DSAs, performance represents the time consumed by the signing and verification operations; for watermarking, it represents the time required to generate the payload and embed it within the media. The payload generation time refers to the time taken to pack the metadata and digital signature together and convert them into a format suitable for embedding. The embedding time is the total time elapsed when embedding the payload into the host image using the hybrid DWT and SVD algorithms, including all transformation, modification, and reconstruction steps. In summary, the performance metric comprises the following quantifiable measures: (1) time consumption of digital signature operations, (2) time consumption of watermark creation and embedding, and (3) latency in milliseconds for appending a data block to the blockchain (a timing sketch follows this list).
  • Accuracy: This refers to the accuracy of detecting deepfakes via the proposed framework. In particular, this metric is evaluated via the accuracy level of the deepfake detection methods adopted in this research.
  • Cost: This metric is estimated by the gas cost consumed by blockchain operations. This metric varies according to the size of the data (signature and metadata) produced by each DSA.
  • Resource Consumption: The resources are estimated by the RAM space in bytes used by the DSAs to store the signature and the corresponding metadata.
  • Statistical Testing: Differences in detection accuracy and false-positive rates between the AI-based detection models are tested to judge deepfake detection performance.
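The following micro-benchmark sketch illustrates how such timings can be collected; the helper is generic and the repetition count is an assumption, not the paper's exact protocol.

```python
# Generic timing helper for the performance measurements above (sketch).
import time
import statistics

def median_ms(op, runs: int = 100) -> float:
    # Median wall-clock time of op() over `runs` repetitions, in milliseconds.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# e.g., timing a signing operation (sign_content as sketched in Section 3.1):
# print(median_ms(lambda: sign_content(creator_key, media, metadata)))
```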

4.1.4. Comparisons

Various comparisons were performed in this study to highlight the contribution of the proposed approach. Various post-quantum DSAs were examined and compared with classical DSAs to find the most efficient algorithm in terms of performance, resource consumption, and cost. Moreover, this research investigated various watermarking techniques and deepfake detectors. The accuracy of the human involvement module was also tested by comparing it with a use-case scenario that involved no human intervention.

4.2. Evaluation Scenarios

This study implemented various use-case scenarios to evaluate the deepfake prevention framework. These evaluation scenarios aim to ensure that the framework components and modules are functioning as intended, as well as to assess the evaluation metrics introduced in the previous subsection. Specifically, two use-case scenarios have been implemented to evaluate the proposed framework’s modules, as follows.
(1)
A registered entity authenticates and stores a piece of media content:
In this use-case scenario, a registered organization uploads a piece of media content to the system platform and utilizes security services provided by the framework’s modules, including creating a digital signature, using digital watermarking, and signing the smart contract to store the signature and metadata permanently on the blockchain. Additionally, this scenario activates the second module in the framework to perform deepfake detection, as well as the human intervention module as needed to assist in determining the status of a piece of media content. Finally, if a deepfake is detected, this use-case activates the fourth module to take necessary governance and legal actions.
(2)
An end-user seeks to verify the authenticity of a piece of media content:
In this use case, end-users can upload a piece of media content to the system's platform, which activates the first module to verify the signature and the content. If the signature is missing, the detection module is activated to determine the status of the content. If the second module is unable to determine the status, the third module is activated. Additionally, the fourth module is always activated whenever deepfake content is found.
The following section discusses the experimental results of this study for the various evaluation metrics.

4.3. Experimental Results and Analysis

This section provides the experimental results and interpretation revealed in this study. The results of the various evaluation metrics including performance, resource consumption, accuracy, and cost are presented and explained. This offers a clearer understanding of the framework’s functionalities and allows for a more informative evaluation of its effectiveness.

4.3.1. Performance Analysis

The performance metric indicates the time delay of the authentication or detection methods, which is crucial to provide real-time, effective deepfake countermeasures. Four main components are involved in determining the time delay, which are the DSA, the digital watermarking creation and embedding technique, the deepfake detection operation, and the time taken by blockchain operations to append the metadata and digital signatures to the blockchain. To obtain a deeper understanding of the performance level, the time delay of each of these components is analyzed independently and the overall time delay is calculated.
Table 1 shows the experimental results for measuring the time taken by digital signature operations: signing and verification for both classic and post-quantum DSAs. Additionally, the table shows the time delay for digital watermarking operations, which include creating the payload and embedding the payload in the media content.
Experimental results demonstrate that the Falcon post-quantum algorithm achieved the lowest time delay compared with the other DSAs. Falcon uses the fast Fourier sampling technique and has lower computational complexity, resulting in very fast signing and verification operations. The results also show that payload generation and embedding are performed faster with Falcon than with the other signature schemes. Furthermore, Falcon's performance closely rivals that of ECDSA while providing resistance against quantum computer attacks. As such, it is considered ideal for the scalable, real-time content authentication functions necessary for effective deepfake prevention.
Conversely, SLH-DSA (SPHINCS+-128s) requires more time to complete the signature creation process because it relies on a large number of hash operations and Merkle tree constructions for each signature. SLH-DSA is therefore the most performance-costly; however, it provides long-term security, which makes it potentially useful for offline or archival watermark verification.
The classical RSA algorithm consumes extra time due to its large key size, and it is vulnerable to quantum attacks; hence, it is not recommended for deepfake prevention applications.
Comparison between the total time delay for all DSAs implemented in this research is depicted in Figure 5.
It is worth mentioning that digital signature and watermarking are the main operations performed in the first module of the deepfake prevention framework. This study also analyzed the performance of deepfake detection, the main operation of the second module. Two main detectors were adopted in the proposed framework. Table 2 presents the performance level of each detector according to recent research studies. It can be noted that the FaceForensics++ detector achieves relatively higher performance because it takes less time to perform inference and determine whether content is a deepfake. It is therefore recommended when a high-speed deepfake detector is needed.
Another factor affecting the performance level is the time taken to append a data block to the Ethereum blockchain, which averages about 12 s. The data block includes the metadata and the digital signature of the media content. The 12-second figure is an optimistic estimate, as the actual time can vary due to several factors, including network congestion, gas price, and mempool depth.
One more factor affecting the performance level is the time taken by human intervention to assist in determining the status of a piece of media content. The observations of this study showed that the time taken by a human to judge the status of a piece of media content varies with their level of experience, from less than one minute up to three minutes.
The following subsection provides resource consumption analysis for the content authentication techniques to find out the most efficient for implementation within the proposed framework.

4.3.2. Resource Consumption Analysis

The experiments conducted in this study seek to identify the most efficient techniques and methods for implementation in the proposed deepfake prevention framework. To achieve this, the experiments examined the resources consumed by the various DSAs, and benchmarking was then conducted to identify a cost-effective option regarding resource utilization. It is worth mentioning that resources are measured in terms of the memory space required for each DSA to store metadata and signatures.
Table 3 illustrates the RAM space in bytes used by each classical and post-quantum DSA, while Figure 6 provides a comparison of memory consumption among the different DSAs. Notably, ECDSA demonstrates the lowest memory consumption of all the DSAs. However, this algorithm is vulnerable to quantum computing attacks, like other classical asymmetric ciphers.
In the case of PQDSAs, the experimental results presented in Table 3 and the comparison in Figure 6 indicate that the Falcon-512 algorithm has the lowest resource consumption. Therefore, it is considered a preferable choice for ensuring efficient computations and lightweight operations in the authentication module of the deepfake prevention framework. This algorithm can contribute in developing high-speed and quantum-resilient authentication operations to avoid the threat of deepfakes.
It can be seen from Figure 6 that SLH-DSA occupies significantly more space than all the other DSAs, owing to its large signature size.
The following section discusses the costs related to blockchain operations in the authentication module in the deepfake prevention framework.

4.3.3. Cost Analysis

The cost analyzed in this research refers to the cost of blockchain transactions, estimated by the "gas cost" of the key transactions performed for each DSA. In the first module, after a piece of media content is signed, the created signature, along with the content's metadata, is inserted into the blockchain. Blockchain transactions consume gas fees, comprising a base cost of 21,000 gas plus a per-byte storage cost. The latter varies with the size of the digital signature generated by each DSA. In the Ethereum blockchain, the gas price is quoted in Gwei, the ETH/gas conversion rate at the time of the transaction. For example, it costs 20,000 gas to store 32 bytes of data in the Ethereum blockchain, i.e., 625 gas per byte (20,000 gas / 32 bytes). These are estimates, since the actual cost is determined at the time of each transaction and is influenced by several factors. Furthermore, the gas cost can be converted to USD based on an assumed gas price at a given moment.
Table 4 shows the gas cost for each DSA, calculated via the following equation:

Gas Cost = 21,000 + 625 × B,

where B is the number of bytes (signature + metadata) generated by each DSA.
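As a check on Table 4, these estimates can be reproduced directly. The sketch below applies the equation above to the payload sizes from Table 3; the 20 Gwei gas price and USD 3000 ETH price used for the dollar conversion are assumed values for illustration, not figures from this study.

# Reproduces the Table 4 gas estimates from Gas Cost = 21,000 + 625 * B.
BASE_GAS = 21_000      # base cost of an Ethereum transaction
GAS_PER_BYTE = 625     # 20,000 gas per 32-byte word => 625 gas/byte

payload_bytes = {      # signature + metadata sizes (bytes) from Table 3
    "RSA": 135.04,
    "ECDSA": 74.88,
    "Dilithium": 1196.416,
    "Falcon": 203.264,
    "SLH-DSA": 7489.024,
}

GAS_PRICE_GWEI = 20    # assumed gas price at transaction time
ETH_USD = 3000.0       # assumed ETH/USD rate

for name, b in payload_bytes.items():
    gas = BASE_GAS + GAS_PER_BYTE * b
    usd = gas * GAS_PRICE_GWEI * 1e-9 * ETH_USD   # gas -> ETH -> USD
    print(f"{name:10s} {gas:>12,.0f} gas  (~${usd:,.2f})")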
For the classical cryptosystems, the results show that ECDSA achieves the lowest gas cost, as expected given its shorter signature size. Among the post-quantum algorithms, Falcon achieved the lowest gas cost (148,040 gas). This is a further advantage of the Falcon algorithm, making it a strong candidate for a high-performance, cost-effective deepfake prevention approach.
Figure 7 illustrates how the gas cost varies with the algorithm in use: the cost grows with the size of the metadata and signature generated by each DSA. Among the available options, ECDSA and Falcon are the most economical choices for executing blockchain operations.
The next subsection analyzes the accuracy of the deepfake detection operation, the main operation in the second module of the proposed deepfake prevention framework.

4.3.4. Detection Accuracy Analysis

This section discusses the accuracy of the deepfake detectors adopted in the second module of the framework: FaceForensics++ and XceptionNet on DFDC, two detectors widely reported in the literature. Their accuracy levels and false-positive rates are presented in Table 5. Prior studies have shown that FaceForensics++ achieves the higher accuracy on the specified dataset. Both methods are available as open-source tools for determining the authenticity status of a piece of media content. However, our experimental results showed that accuracy can degrade when these methods are applied to different datasets or to deliberately crafted manipulated content. Consequently, the human intervention component in the third module becomes crucial when the detection module fails to determine the authenticity status of a piece of media content.
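The escalation logic described above can be summarized as a simple decision rule. The sketch below is a hypothetical illustration: the confidence thresholds are assumptions, not values reported in this study.

def triage(p_fake: float, low: float = 0.2, high: float = 0.8) -> str:
    """Route a detector's fake-probability score: confident scores are
    resolved automatically; ambiguous ones go to human review (module 3)."""
    if p_fake >= high:
        return "deepfake"
    if p_fake <= low:
        return "authentic"
    return "escalate_to_human"

# Example: a borderline score is deferred to a human reviewer.
print(triage(0.55))  # -> "escalate_to_human"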
The experimental results of this research indicate that well-trained individuals can contribute significantly to determining whether media content is "authentic" or a "deepfake." In an experiment, ten participants were asked to assess a set of selected images and determine the authenticity status of each. The images were split into two equal groups: one containing authentic images, the other manipulated images (deepfakes). A 10-point Likert-type rating scale (1 to 10) was used for the assessment. The results were promising, with detection accuracy reaching 90%. Participants took between one and five minutes to finalize their evaluation of each image.
This study emphasizes the importance of the human intervention component in detecting deepfake content. It calls for expanding experiments in this area and highlights the need to train participants on the key factors related to identifying manipulated content. By doing so, individuals who participate in evaluating the authenticity of a piece of media will be better equipped to distinguish between genuine and manipulated content, leading to more accurate deepfake detection.
The following subsection provides a security analysis of the proposed deepfake prevention framework.

4.4. Security Analysis

The security analysis conducted in this study aims to highlight the main security services provided via the various mechanisms and methods implemented in the proposed framework. It illustrates how these services can successfully achieve the required security objectives.

4.4.1. Overview of Security Objectives

The framework introduced in this research aims to provide effective deepfake prevention and mitigate the associated threats. To achieve this aim, various mechanisms and methods were adopted across its modules. The primary security objectives of the framework are to ensure the authenticity, integrity, non-repudiation, and traceability of media content. Achieving these objectives provides strong resilience against a wide range of cyberattacks, including content manipulation attacks, thereby helping to counter the distribution of deepfakes and its consequences.
The core security objectives for the deepfake prevention framework can be described as follows.
(1) Authenticity: verifying that a piece of media content originates from a legitimate source or creator.
(2) Integrity: ensuring that media content and its metadata remain unaltered.
(3) Non-Repudiation: ensuring that the creator of a piece of media content cannot deny authorship or responsibility.
(4) Traceability and Accountability: providing provenance information and enabling media content and its metadata to be traced and authenticated.
(5) Robustness Against Quantum Computing Attacks: resisting cyberattacks that may exploit quantum computing technology.
(6) Governance: the ability to impose rules and regulations and enforce related laws; this also encompasses awareness and training that assist in deepfake prevention.
(7) Deepfake Detection: the ability to detect unauthentic or manipulated media content.
The next subsection explains how each of the aforementioned security objectives is achieved by the proposed framework.

4.4.2. Security Services Provided by the Framework

The proposed deepfake prevention framework consists of four modules, each providing essential security services. These services are delivered through well-designed security mechanisms within every module.
Table 6 presents the key security services provided, the mechanism(s) implemented to provide each service, the framework module that provides each mechanism, and a brief description.
Each module of the proposed framework incorporates essential security mechanisms or countermeasures to enhance deepfake prevention. By integrating both technical and non-technical countermeasures, a more robust approach to deepfake prevention can be achieved.
The next subsection presents the key findings and contributions of this research.

4.5. Research Findings and Contributions

This study aims to develop an effective framework that improves deepfake prevention via a multifaceted strategy utilizing both technical and non-technical countermeasures. The framework consists of four modules, each providing the security services required for effective deepfake prevention. The main findings and contributions of this research can be summarized as follows:
  • The development of an effective method for media content authentication that leverages state-of-the-art watermarking and cryptographic techniques. In particular, a hybrid DWT–SVD watermark embedding technique was used to embed a payload consisting of the digital signature and the metadata of the media content (a minimal illustrative sketch of this embedding step follows this list). This approach offers robust authentication by leveraging the strong security features of cryptographic DSAs, while the hybrid watermarking makes it difficult to manipulate the content or remove the authenticating watermark.
  • This study investigated the implementation of various advanced cryptographic DSAs, both classical and post-quantum, and conducted a thorough analysis and benchmarking of the different digital signature schemes. The experimental results indicated that the Falcon DSA is particularly promising and is recommended for deepfake prevention owing to its strong resilience against quantum computing attacks and its high performance in signing and verification. Moreover, the Falcon algorithm uses a much smaller key size compared to other PQDSAs, which reduces resource consumption in terms of memory space and gas costs for blockchain operations, making it the more efficient and cost-effective choice.
  • The framework utilizes Ethereum blockchain technology and smart contracts to enable efficient media content authentication and tracking. The metadata of the content and its corresponding digital signature are stored permanently on the blockchain; by exploiting the technology's immutability and transparency, media content can easily be traced and its authenticity verified.
  • The second module of the proposed framework incorporates a deepfake detection process to add another layer of security and mitigate the spread of deepfake content. It identifies deepfake content that might be intentionally or mistakenly signed by an RO before allowing storage of the signature in the blockchain. It also enables guest users to verify the authenticity of media that has no signature. This research involved comparing various known deepfake detection methods in terms of accuracy and performance to find the most efficient method. The results indicated that the FaceForensics++ method is a strong candidate for adaptation in the deepfake detection module.
  • What makes the proposed framework unique is its utilization of both technical and non-technical countermeasures. The third module enables experienced individuals to assist in evaluating the authenticity of media content when the detection module (i.e., the second module) is unable to determine its status. This approach enhances detection accuracy and helps identify undetected deepfakes, enabling the corrective actions needed to prevent the distribution of deepfakes and mitigate their impact.
  • The fourth module in the proposed framework, which focuses on policy, governance, and regulation, enhances the landscape for preventing the distribution of deepfake content by offering improved governance and compliance with policies and regulations. In particular, this module supports law enforcement agencies by enabling them to efficiently impose restrictions on perpetrators, enforce rules and regulations, collect forensic evidence, and take necessary legal action.
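As referenced in the first bullet above, the following minimal sketch illustrates the general shape of DWT–SVD payload embedding, using the pywt and numpy libraries. The single-level Haar transform, scaling factor, and sign-based bit encoding are simplifying assumptions for illustration, not the exact parameters of the implemented scheme.

import numpy as np
import pywt

def embed_payload(image: np.ndarray, bits: np.ndarray, alpha: float = 0.02) -> np.ndarray:
    """Embed a bit payload into the singular values of the LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")  # 1-level DWT
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)          # SVD of the LL band
    n = min(len(bits), len(S))
    # Encode bit 1 as a +alpha and bit 0 as a -alpha relative perturbation.
    S[:n] *= 1.0 + alpha * (2.0 * bits[:n] - 1.0)
    LL_marked = (U * S) @ Vt                                   # rebuild the LL band
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")       # inverse DWT

# Usage: embed a 128-bit payload (e.g., a truncated signature digest).
img = np.random.rand(256, 256)
payload = np.random.randint(0, 2, 128).astype(float)
marked = embed_payload(img, payload)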
The following section presents conclusions and future research directions related to the field of deepfake prevention.

5. Conclusions and Future Work

In order to provide end-to-end protection against the creation, dissemination, and misuse of deepfakes, this study proposed a thorough and multifaceted deepfake prevention framework (DPF) that combines technical and non-technical countermeasures. The framework was created to develop a robust ecosystem for digital media trust and authenticity by bridging the gap between sophisticated content authentication methods and socio-technical governance measures.
According to our experimental findings, Falcon-512 was the most effective of the tested post-quantum digital signature algorithms, balancing computational efficiency, resource consumption, and gas cost; it is therefore a prime candidate for scalable, quantum-resilient authentication. Blockchain integration provided immutability, traceability, and decentralized accountability, while the hybrid DWT–SVD watermarking technique embedded cryptographic payloads with high imperceptibility and robustness. Alongside these technical modules, the detection and human-in-the-loop modules greatly improved accuracy and dependability, especially where AI-based detectors faced ambiguous or adversarial inputs.
Deepfake technology’s ethical and societal ramifications are addressed by the fourth module of the proposed framework, which incorporates policy and governance mechanisms. This further strengthens compliance, transparency, and legal enforceability. By involving a variety of stakeholders, including journalists, end-users, regulators, and policymakers, the framework promotes accountability and shared responsibility in the battle against the manipulation of synthetic media.
In sum, this research integrates blockchain transparency, AI-assisted detection, human oversight, and modern cryptography into an innovative, scalable, and secure paradigm for deepfake prevention. Beyond fortifying existing defenses against deepfakes, the proposed framework prepares for post-quantum threats and evolving AI manipulation methods. Future research will focus on developing international interoperability standards for digital content authenticity verification, federated learning-based detection, and optimizing blockchain storage efficiency.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PQCA	Post-Quantum Cryptography Algorithm
DSA	Digital Signature Algorithm
AI	Artificial Intelligence
DL	Deep Learning
ECC	Elliptic Curve Cryptography

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  2. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 1–9. [Google Scholar]
  3. Mirsky, Y.; Lee, W. The creation and detection of Deepfake: A survey. ACM Comput. Surv. (CSUR) 2021, 54, 1–41. [Google Scholar] [CrossRef]
  4. Kaur, A.; Noori Hoshyar, A.; Saikrishna, V.; Firmin, S.; Xia, F. Deepfake video detection: Challenges and opportunities. Artif. Intell. Rev. 2024, 57, 159. [Google Scholar] [CrossRef]
  5. Kietzmann, J.; Lee, L.W.; McCarthy, I.P.; Kietzmann, T.C. Deepfake: Trick or treat? Bus. Horiz. 2020, 63, 135–146. [Google Scholar] [CrossRef]
  6. Chesney, R.; Citron, D. Deepfake and the new disinformation war: The coming age of post-truth geopolitics. Foreign Aff. 2019, 98, 147. [Google Scholar]
  7. Korshunov, P.; Marcel, S. Deepfake: A new threat to face recognition? assessment and detection. arXiv 2018, arXiv:1812.08685. [Google Scholar] [CrossRef]
  8. Mustak, M.; Salminen, J.; Mäntymäki, M.; Rahman, A.; Dwivedi, Y.K. Deepfake: Deceptions, mitigations, and opportunities. J. Bus. Res. 2023, 154, 113368. [Google Scholar] [CrossRef]
  9. Shahzad, H.F.; Rustam, F.; Flores, E.S.; Luis Vidal Mazon, J.; de la Torre Diez, I.; Ashraf, I. A review of image processing techniques for Deepfake. Sensors 2022, 22, 4556. [Google Scholar] [CrossRef]
  10. Khalil, H.A.; Maged, S.A. Deepfake creation and detection using deep learning. In Proceedings of the 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt, 26–27 May 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar]
  11. Wazid, M.; Mishra, A.K.; Mohd, N.; Das, A.K. A secure Deepfake mitigation framework: Architecture, issues, challenges, and societal impact. Cyber Secur. Appl. 2024, 2, 100040. [Google Scholar] [CrossRef]
  12. Ahmad, J.; Salman, W.; Amin, M.; Ali, Z.; Shokat, S. A Survey on Enhanced Approaches for Cyber Security Challenges Based on Deep Fake Technology in Computing Networks. Spectr. Eng. Sci. 2024, 2, 133–149. [Google Scholar]
  13. Sandotra, N.; Arora, B. A comprehensive evaluation of feature-based AI techniques for Deepfake detection. Neural Comput. Appl. 2024, 36, 3859–3887. [Google Scholar] [CrossRef]
  14. Gao, J.; Micheletto, M.; Orrù, G.; Concas, S.; Feng, X.; Marcialis, G.L.; Roli, F. Texture and artifact decomposition for improving generalization in deep-learning-based Deepfake detection. Eng. Appl. Artif. Intell. 2024, 133, 108450. [Google Scholar] [CrossRef]
  15. Ghiurău, D.; Popescu, D.E. Distinguishing Reality from AI: Approaches for Detecting Synthetic Content. Computers 2024, 14, 1. [Google Scholar] [CrossRef]
  16. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Morales, A.; Ortega-Garcia, J. Deepfake and beyond: A survey of face manipulation and fake detection. Inf. Fusion 2020, 64, 131–148. [Google Scholar] [CrossRef]
  17. Sharma, V.K.; Garg, R.; Caudron, Q. A systematic literature review on deepfake detection techniques. Multimed. Tools Appl. 2025, 84, 22187–22229. [Google Scholar] [CrossRef]
  18. Alrashoud, M. Deepfake video detection methods, approaches, and challenges. Alex. Eng. J. 2025, 125, 265–277. [Google Scholar] [CrossRef]
  19. Zhu, X.; Qian, Y.; Zhao, X.; Sun, B.; Sun, Y. A deep learning approach to patch-based image inpainting forensics. Signal Process. Image Commun. 2018, 67, 90–99. [Google Scholar] [CrossRef]
  20. Lai, Z.; Li, J.; Wang, C.; Wu, J.; Jiang, D. LIDeepDet: Deepfake Detection via Image Decomposition and Advanced Lighting Information Analysis. Electronics 2024, 13, 4466. [Google Scholar] [CrossRef]
  21. Raza, A.; Munir, K.; Almutairi, M. A novel deep learning approach for Deepfake image detection. Appl. Sci. 2022, 12, 9820. [Google Scholar] [CrossRef]
  22. Hsu, C.C.; Zhuang, Y.X.; Lee, C.Y. Deep fake image detection based on pairwise learning. Appl. Sci. 2020, 10, 370. [Google Scholar] [CrossRef]
  23. Cao, J.; Deng, J.; Yin, X.; Yan, S.; Li, Z. WPCA: Wavelet Packets with Channel Attention for Detecting Face Manipulation. In Proceedings of the 2023 15th International Conference on Machine Learning and Computing, Zhuhai, China, 17–20 February 2023; pp. 284–289. [Google Scholar]
  24. Ni, Y.; Zeng, W.; Xia, P.; Tan, R. A Deepfake Detection Algorithm Based on Fourier Transform of Biological Signal. Comput. Mater. Contin. 2024, 79, 5295–5312. [Google Scholar] [CrossRef]
  25. Yasir, S.M.; Kim, H. Lightweight Deepfake Detection Based on Multi-Feature Fusion. Appl. Sci. 2025, 15, 1954. [Google Scholar] [CrossRef]
  26. Stanciu, D.C.; Ionescu, B. Improving generalization in Deepfake detection via augmentation with recurrent adversarial attacks. In Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, Phuket, Thailand, 10–13 June 2024; pp. 46–54. [Google Scholar]
  27. Alattar, A.; Sharma, R.; Scriven, J. A system for mitigating the problem of Deepfake news videos using watermarking. Electron. Imaging 2020, 32, 1–10. [Google Scholar] [CrossRef]
  28. Truepic. Truepic App Lets Journalists Instantly Verify Images, Videos. International Journalists’ Network (IJNet), 30 October 2018. Available online: https://ijnet.org/en/story/truepic-app-lets-journalists-instantly-verify-images-videos (accessed on 2 September 2025).
  29. Content Authenticity Initiative. How it Works. Content Authenticity Initiative (CAI), 2024. Available online: https://contentauthenticity.org/how-it-works (accessed on 2 September 2025).
  30. Thomson Reuters. Reuters New Proof of Concept Employs Authentication System to Securely Capture, Store and Verify Photographs. Thomson Reuters Press Release, 30 August 2023. Available online: https://www.thomsonreuters.com/en/press-releases/2023/august/reuters-new-proof-of-concept-employs-authentication-system-to-securely-capture-store-and-verify-photographs (accessed on 20 September 2025).
  31. Rashid, M.M.; Lee, S.H.; Kwon, K.R. Blockchain technology for combating Deepfake and protect video/image integrity. J. Korea Multimed. Soc. 2021, 24, 1044–1058. [Google Scholar]
  32. Hasan, K.; Karimian, N.; Tehranipoor, S. Combating Deepfake: A Novel Hybrid Hardware-Software Approach. In Proceedings of the 2024 Silicon Valley Cybersecurity Conference (SVCC), Seoul, Republic of Korea, 17–19 June 2024; IEEE: New York, NY, USA, 2024; pp. 1–2. [Google Scholar]
  33. Jing, T.W.; Murugesan, R.K. Protecting data privacy and prevent fake news and Deepfake in social media via Blockchain technology. In Proceedings of the International Conference on Advances in Cyber Security, Penang, Malaysia, 8–9 December 2020; Springer: Singapore, 2020; pp. 674–684. [Google Scholar]
  34. Mao, D.; Zhao, S.; Hao, Z. A shared updatable method of content regulation for Deepfake videos based on Blockchain. Appl. Intell. 2022, 52, 15557–15574. [Google Scholar] [CrossRef]
  35. Seneviratne, O. Blockchain for social good: Combating misinformation on the web with AI and Blockchain. In Proceedings of the 14th ACM Web Science Conference 2022, Barcelona, Spain, 26–29 June 2022; pp. 435–442. [Google Scholar]
  36. Chen, C.C.; Du, Y.; Peter, R.; Golab, W. An implementation of fake news prevention by Blockchain and entropy-based incentive mechanism. Soc. Netw. Anal. Min. 2022, 12, 114. [Google Scholar] [CrossRef]
  37. Parlak, M.; Altunel, N.F.; Akkaş, U.A.; Arici, E.T. Tamper-proof evidence via Blockchain for autonomous vehicle accident monitoring. In Proceedings of the 2022 IEEE 1st Global Emerging Technology Blockchain Forum: Blockchain & Beyond (iGETBlockchain), Irvine, CA, USA, 7–11 November 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
  38. Nagothu, D.; Xu, R.; Chen, Y.; Blasch, E.; Aved, A. Defakepro: Decentralized Deepfake attacks detection using enf authentication. IT Prof. 2022, 24, 46–52. [Google Scholar] [CrossRef]
  39. Miotti, A.; Wasil, A. Combatting Deepfake: Policies to address national security threats and rights violations. arXiv 2024, arXiv:2402.09581. [Google Scholar] [CrossRef]
  40. Pawelec, M. Decent Deepfake? Professional Deepfake developers’ ethical considerations and their governance potential. AI Ethics 2024, 5, 2641–2666. [Google Scholar] [CrossRef]
  41. Fabuyi, J.; Olaniyi, O.O.; Olateju, O.; Aideyan, N.T.; Selesi-Aina, O.; Olaniyi, F.G. Deepfake Regulations and Their Impact on Content Creation in the Entertainment Industry. Arch. Curr. Res. Int. 2024, 24, 10–9734. [Google Scholar] [CrossRef]
  42. Putra, G.P.; Multazam, M.T. Law Enforcement Against Deepfake Porn AI: Penegakan Hukum Terhadap Deepfake Porn AI. Eur. J. Contemp. Bus. Law Technol. 2024, 1, 58–77. [Google Scholar]
  43. Mahashreshty Vishweshwar, S. Implications of Deepfake Technology on Individual Privacy and Security. 2023. Available online: https://repository.stcloudstate.edu/msia_etds/142/ (accessed on 25 September 2025).
  44. Lingyun, Y. Regulations on Detecting, Punishing, Preventing Deepfake Technologies Based Forgery. Cent. Asian J. Acad. Res. 2024, 2, 62–66. [Google Scholar]
  45. Kira, B. When non-consensual intimate Deepfake go viral: The insufficiency of the UK Online Safety Act. Comput. Law Secur. Rev. 2024, 54, 106024. [Google Scholar] [CrossRef]
  46. Vizoso, Á.; Vaz-Álvarez, M.; López-García, X. Fighting Deepfake: Media and internet giants’ converging and diverging strategies against hi-tech misinformation. Media Commun. 2021, 9, 291–300. [Google Scholar] [CrossRef]
  47. Temir, E. Deepfake: New era in the age of disinformation & end of reliable journalism. Selçuk İletişim 2020, 13, 1009–1024. [Google Scholar]
  48. Gonzales, N.H.; Lobian, A.L.; Hengky, F.M.; Andhika, M.R.; Achmad, S.; Sutoyo, R. Deepfake Technology: Negative Impacts, Mitigation Methods, and Preventive Algorithms. In Proceedings of the 2023 IEEE 8th International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Kuala Lumpur, Malaysia, 2–3 December 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
  49. Alanazi, S.; Asif, S. Exploring deepfake technology: Creation, consequences and countermeasures. Hum.-Intell. Syst. Integr. 2024, 6, 49–60. [Google Scholar] [CrossRef]
  50. Singh, S.; Amol, D. Unmasking Digital Deceptions: An Integrative Review of Deepfake Detection, Multimedia Forensics, and Cybersecurity Challenges. Multimed. Forensics Cybersecur. Chall. 2025, 15, 103632. [Google Scholar] [CrossRef]
  51. Seng, L.K.; Mamat, N.; Abas, H.; Ali, W.N. AI Integrity Solutions for Deepfake Identification and Prevention. Open Int. J. Inform. 2024, 12, 35–46. [Google Scholar]
  52. Ghediri, K. Countering the negative impacts of Deepfake technology: Approaches for effective combat. Int. J. Econ. Perspect. 2024, 18, 2871–2890. [Google Scholar]
  53. Taha, M.A.; Khudhair, W.M.; Khudhur, A.M.; Mahmood, O.A.; Hammadi, Y.I.; Al-husseinawi, R.S.; Aziz, A. Emerging threat of deep fake: How to identify and prevent it. In Proceedings of the 6th International Conference on Future Networks & Distributed Systems, Tashkent, Uzbekistan, 15–16 December 2022; pp. 645–651. [Google Scholar]
  54. Buo, S.A. The emerging threats of Deepfake attacks and countermeasures. arXiv 2020, arXiv:2012.07989. [Google Scholar] [CrossRef]
  55. Wang, S. How will users respond to the adversarial noise that prevents the generation of Deepfake? In Proceedings of the 23rd ITS Biennial Conference, Online, 21–23 June 2021. [Google Scholar]
  56. Tuysuz, M.K.; Kılıç, A. Analyzing the legal and ethical considerations of Deepfake Technology. Interdiscip. Stud. Soc. Law Politics 2023, 2, 4–10. [Google Scholar]
  57. Romero-Moreno, F. Deepfake Fraud Detection: Safeguarding Trust in Generative AI. Comput. Law Secur. Rev. 2025, 58, 106162. [Google Scholar] [CrossRef]
  58. Alexander, S. Deepfake Cyberbullying: The Psychological Toll on Students and Institutional Challenges of AI-Driven Harassment. Clear. House J. Educ. Strateg. Issues Ideas 2025, 98, 36–50. [Google Scholar] [CrossRef]
  59. Pedersen, K.T.; Pepke, L.; Stærmose, T.; Papaioannou, M.; Choudhary, G.; Dragoni, N. Deepfake-Driven Social Engineering: Threats, Detection Techniques, and Defensive Strategies in Corporate Environments. J. Cybersecur. Priv. 2025, 5, 18. [Google Scholar] [CrossRef]
  60. Mi, X.; Zhang, B. Digital Communication Strategies for Coping with Deepfake Content Distribution. In Proceedings of the 2025 Communication Strategies in Digital Society Seminar (ComSDS), St. Petersburg, Russia, 9 April 2025; IEEE: New York, NY, USA, 2025; pp. 82–86. [Google Scholar]
  61. Al-Dabbagh, R.; Alkhatib, M.; Albalawi, T. Efficient Post-Quantum Cryptography Algorithms for Auto-Enrollment in Public Key Infrastructure. Electronics 2025, 14, 1980. [Google Scholar] [CrossRef]
  62. Rossler, A.; Cozzolino, D.; Verdoliva, L.; Riess, C.; Thies, J.; Nießner, M. Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1–11. [Google Scholar]
  63. Abbasi, M.; Paulo, V.; José, S.; Pedro, M. Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks. Appl. Sci. 2025, 15, 1225. [Google Scholar] [CrossRef]
  64. Segate, R.V. Cognitive bias, privacy rights, and digital evidence in international criminal proceedings: Demystifying the double-edged ai revolution. Int. Crim. Law Rev. 2021, 21, 242–279. [Google Scholar] [CrossRef]
  65. Ma’arif, A.; Hari, M.; Iswanto, S.; Denis, P.; Syahrani, L.; Abdel-Nasser, S. Social, legal, and ethical implications of AI-Generated deepfake pornography on digital platforms: A systematic literature review. Soc. Sci. Humanit. Open 2025, 12, 101882. [Google Scholar] [CrossRef]
  66. Nair, K. Deepfake Detection: Comparison of Pretrained Xception and VGG16 Models. Ph.D. Thesis, National College of Ireland, Dublin, Ireland, 2025. [Google Scholar]
Figure 1. Summary of the four modules in the framework and their components.
Figure 2. Interaction and dataflow between the four modules of the deepfake prevention framework.
Figure 3. The process model for a user who wants to verify the authenticity of a piece of media content.
Figure 4. The process model for an RO who wants to generate a signature for a piece of media.
Figure 5. Total time delay for digital signature and watermarking operations using various DSAs.
Figure 6. Comparison of resource consumption among DSAs.
Figure 7. Comparison of gas costs for blockchain operations using all DSAs.
Table 1. Time delay for digital signature and watermarking operations using various DSAs.

Digital Signature Algorithm | Signature Creation (ms) | Signature Verification (ms) | Payload Generation (ms) | Payload Embedding (ms) | Total Time Delay (ms)
RSA | 31.339 | 0.528 | 0.045 | 177.263 | 209.175
ECDSA | 6.503 | 5.762 | 0.024 | 73.007 | 85.296
Dilithium | 12.977 | 2.638 | 0.012 | 125.470 | 141.097
Falcon | 9.529 | 0.708 | 0.009 | 50.438 | 60.684
SLH-DSA | 1449.607 | 2.312 | 0.012 | 57.761 | 1509.692
Table 2. Time delay for deepfake detection operations.

Deepfake Detection Method | Dataset | Inference Time per Image (ms)
FaceForensics++ [62] | FF++ raw | 5–20
XceptionNet on DFDC [66] | DFDC test | 10–30
Table 3. Area consumption results for traditional and post-quantum DSAs.

Digital Signature Algorithm (with Hash Function) | Area Consumed (Bytes)
RSA/SHA3-512 | 135.04
ECDSA (SHA3-512)/P-521 | 74.88
Dilithium-5 (PQDSA) | 1196.416
Falcon-512 (PQDSA) | 203.264
SLH-DSA (SPHINCS+, SHA3-128s) (PQDSA) | 7489.024
Table 4. Estimated gas cost for blockchain transactions using various DSAs.

Digital Signature Algorithm | Size of Signature + Metadata (Bytes) | Estimated Gas Cost
RSA | 135.04 | 105,400 gas
ECDSA | 74.88 | 67,800 gas
Dilithium | 1196.416 | 768,760 gas
Falcon | 203.264 | 148,040 gas
SLH-DSA | 7489.024 | 4,701,640 gas
Table 5. Accuracy of deepfake detection methods.

Deepfake Detection Method | Dataset | Reported Accuracy | False-Positive Rate (%)
FaceForensics++ [62] | FF++ raw | 95% | 4.8
XceptionNet on DFDC [66] | DFDC test | 89.2% | 6.2
Table 6. Security services and associated security mechanisms.

Security Service | Framework Module | Security Mechanism | Description
Authentication | Module 1: Trusted Content Assurance | Digital watermarking, cryptographic signatures, and Ethereum blockchain | A hybrid watermark embedding technique creates and embeds the watermark; cryptographic DSAs sign the media and its metadata; the signature and metadata are appended to the Ethereum blockchain.
Integrity | Module 1: Trusted Content Assurance | Cryptographic hash functions and digital signatures | SHA-3 and DSAs were employed to provide the integrity service.
Non-Repudiation | Module 1: Trusted Content Assurance | Digital signature algorithms | Cryptographic DSAs were implemented to provide the non-repudiation service.
Traceability and Accountability | Module 1: Trusted Content Assurance | Ethereum blockchain and smart contracts | Blockchain and smart contracts enable tracing and verification of the status and metadata of media content; smart contracts provide accountability for each transaction.
Quantum Attack Resilience | Module 1: Trusted Content Assurance | Post-quantum cryptographic digital signature algorithms | Post-quantum algorithms offer long-term security and resistance against quantum computing attacks.
Governance | Module 4: Policy, Governance, and Regulation | Regulatory and law enforcement agencies | The involvement of regulatory and law enforcement agencies ensures governance and supports deepfake prevention.
Deepfake Detection | Module 2: Detection and Monitoring; Module 3: Awareness, Training, and Human-in-the-Loop | Deepfake detection models and human intervention | Adopting deepfake detectors together with human assistance contributes significantly to enhancing detection accuracy.
Deepfake Prevention | Module 3: Awareness, Training, and Human-in-the-Loop | Awareness and training | Awareness and training in the third module support resilience and enhance deepfake prevention.