Search Results (4)

Search Parameters:
Keywords = deepfake detection using blockchain

28 pages, 1195 KB  
Article
A Multifaceted Deepfake Prevention Framework Integrating Blockchain, Post-Quantum Cryptography, Hybrid Watermarking, Human Oversight, and Policy Governance
by Mohammad Alkhatib
Computers 2025, 14(11), 488; https://doi.org/10.3390/computers14110488 - 8 Nov 2025
Cited by 1 | Viewed by 3030
Abstract
Deepfake technology, driven by advances in artificial intelligence (AI) and deep learning (DL), has become one of the foremost threats to digital trust and the authenticity of information. Despite the rapid development of deepfake detection methods, the dynamic evolution of generative models continues to outpace current mitigation efforts, highlighting the pressing need for a more effective and proactive deepfake prevention strategy. This study introduces a comprehensive, multifaceted deepfake prevention framework that leverages both technical and non-technical countermeasures and brings key stakeholders together in a unified structure. The proposed framework has four modules: trusted content assurance; detection and monitoring; awareness and human-in-the-loop verification; and policy, governance, and regulation. The framework combines hybrid watermarking and embedding techniques with cryptographic digital signature algorithms (DSAs) and blockchain technologies to ensure that media is authentic, traceable, and non-repudiable. Comparative experiments were conducted using both classical and post-quantum DSAs to evaluate their efficiency, resource consumption, and gas costs in blockchain operations. The results revealed that Falcon-512 outperformed the other post-quantum algorithms while consuming fewer resources and incurring lower gas costs, making it a preferable option for real-time, quantum-resilient deepfake prevention. The framework also employs AI-based detection models and human oversight to enhance detection accuracy and robustness. Overall, this research offers a novel, multifaceted, and governance-aware strategy for deepfake prevention that contributes to mitigating deepfake threats and provides a practical foundation for secure and transparent digital media ecosystems.
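The hash-sign-anchor pipeline this abstract describes can be sketched at a high level. The following is a minimal illustrative sketch, not the paper's implementation: the in-memory `LEDGER` dict stands in for a blockchain, and HMAC stands in for a real digital signature algorithm such as Falcon-512 (which would require a post-quantum library, e.g. liboqs); all names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical in-memory ledger standing in for a blockchain; a real
# deployment would write these records via a smart contract.
LEDGER = {}

# Stand-in symmetric key; a real framework would use an asymmetric
# post-quantum DSA (e.g. Falcon-512) rather than HMAC.
SIGNING_KEY = b"publisher-secret-key"

def register_media(media_bytes: bytes) -> str:
    """Hash the media, sign the digest, and record both on the ledger."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    LEDGER[digest] = signature
    return digest

def verify_media(media_bytes: bytes) -> bool:
    """Recompute the hash and check the ledger holds a valid signature for it."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = LEDGER.get(digest)
    if signature is None:
        return False  # never registered, or the content was altered
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

original = b"frame data of an authentic video"
register_media(original)
assert verify_media(original)
assert not verify_media(b"tampered frame data")
```

Any single-bit change to the media produces a different SHA-256 digest, so altered content simply has no ledger entry, which is what makes the authenticity check proactive rather than detection-based.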

33 pages, 3827 KB  
Review
Distinguishing Reality from AI: Approaches for Detecting Synthetic Content
by David Ghiurău and Daniela Elena Popescu
Computers 2025, 14(1), 1; https://doi.org/10.3390/computers14010001 - 24 Dec 2024
Cited by 38 | Viewed by 21290
Abstract
The advancement of artificial intelligence (AI) technologies, including generative pre-trained transformers (GPTs) and generative models for text, image, audio, and video creation, has revolutionized content generation, creating unprecedented opportunities and critical challenges. This paper systematically examines the characteristics, methodologies, and challenges associated with detecting synthetic content across multiple modalities in order to safeguard digital authenticity and integrity. Key detection approaches reviewed include stylometric analysis, watermarking, pixel prediction techniques, dual-stream networks, machine learning models, blockchain, and hybrid approaches, highlighting their strengths, limitations, and detection accuracy, from roughly 80% for stylometric analysis alone to up to 92% for hybrid approaches combining multiple modalities. The effectiveness of these techniques is explored in diverse contexts, from identifying deepfakes and synthetic media to detecting AI-generated scientific texts. Ethical concerns, such as privacy violations, algorithmic bias, false positives, and overreliance on automated systems, are also critically discussed. Furthermore, the paper addresses legal and regulatory frameworks, including intellectual property challenges and emerging legislation, emphasizing the need for robust governance to mitigate misuse. Real-world examples of detection systems are analyzed to provide practical insights into implementation challenges. Future directions include developing generalizable and adaptive detection models and hybrid approaches, fostering collaboration between stakeholders, and integrating ethical safeguards. By presenting a comprehensive overview of AIGC detection, this paper aims to inform stakeholders, researchers, policymakers, and practitioners on addressing the dual-edged implications of AI-driven content creation.
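The hybrid idea this review highlights, fusing detectors across modalities to push accuracy above any single method, can be illustrated with a simple weighted score average. This is a generic sketch; the detector names, weights, and threshold are assumptions for illustration, not a method from the review.

```python
def hybrid_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted fusion of per-modality detector scores (0 = real, 1 = synthetic)."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical detector outputs for one video clip.
scores = {"stylometric": 0.55, "pixel": 0.80, "audio": 0.70}
# Hypothetical weights, e.g. proportional to each detector's validation accuracy.
weights = {"stylometric": 1.0, "pixel": 2.0, "audio": 1.5}

fused = hybrid_score(scores, weights)
print(f"fused synthetic probability: {fused:.3f}")
print("flag as synthetic" if fused > 0.5 else "treat as authentic")
```

Real hybrid systems often learn the fusion (e.g. with a meta-classifier) rather than fixing weights by hand, but the weighted average captures the core intuition: modalities that individually sit near 80% accuracy can jointly exceed 90% when their errors are uncorrelated.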

24 pages, 25658 KB  
Article
AI Threats to Politics, Elections, and Democracy: A Blockchain-Based Deepfake Authenticity Verification Framework
by Masabah Bint E. Islam, Muhammad Haseeb, Hina Batool, Nasir Ahtasham and Zia Muhammad
Blockchains 2024, 2(4), 458-481; https://doi.org/10.3390/blockchains2040020 - 21 Nov 2024
Cited by 16 | Viewed by 29374
Abstract
The integrity of global elections is increasingly under threat from artificial intelligence (AI) technologies. As AI continues to permeate various aspects of society, its influence on political processes and elections has become a critical area of concern. This is because AI language models are far from neutral or objective; they inherit biases from their training data and from the individuals who design and utilize them, which can sway voter decisions and affect global elections and democracy. In this research paper, we explore how AI can directly impact election outcomes through various techniques. These include the use of generative AI for disseminating false political information, favoring certain parties over others, and creating fake narratives, content, images, videos, and voice clones to undermine the opposition. We highlight how AI threats can influence voter behavior and election outcomes, focusing on critical areas including political polarization, deepfakes, disinformation, propaganda, and biased campaigns. In response to these challenges, we propose a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) designed to detect and authenticate deepfake content in real time. It leverages the transparency of blockchain technology to reinforce electoral integrity. Finally, we also propose comprehensive countermeasures, including enhanced legislation, technological solutions, and public education initiatives, to mitigate the risks associated with AI in electoral contexts, proactively safeguard democracy, and promote fair elections.
(This article belongs to the Special Issue Key Technologies for Security and Privacy in Web 3.0)
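The core blockchain property a framework like B-DAVF relies on, tamper-evidence through hash chaining, can be sketched as follows. This is a generic illustration under assumed names: the `ProvenanceChain` class and its block fields are hypothetical, not the paper's actual design.

```python
import hashlib
import json
import time

def _block_hash(block: dict) -> str:
    """Canonical hash of a block's sorted JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Append-only hash chain: each block commits to its predecessor,
    so retroactively altering any registered record is detectable."""

    def __init__(self) -> None:
        self.blocks = [{"index": 0, "prev": "0" * 64,
                        "media_hash": "genesis", "ts": 0}]

    def append(self, media_hash: str) -> None:
        prev = self.blocks[-1]
        self.blocks.append({
            "index": prev["index"] + 1,
            "prev": _block_hash(prev),   # commit to the previous block
            "media_hash": media_hash,
            "ts": int(time.time()),
        })

    def is_valid(self) -> bool:
        """Re-derive every back-link and check it matches what was recorded."""
        return all(
            self.blocks[i]["prev"] == _block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )

chain = ProvenanceChain()
chain.append(hashlib.sha256(b"campaign speech, candidate A").hexdigest())
chain.append(hashlib.sha256(b"debate clip, candidate B").hexdigest())
assert chain.is_valid()

chain.blocks[1]["media_hash"] = "forged"   # simulate retroactive tampering
assert not chain.is_valid()
```

Because each block's `prev` field is a hash of the entire preceding block, editing any registered record invalidates every later back-link, which is what lets a verifier prove whether a campaign video's hash was registered before it circulated.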

23 pages, 5951 KB  
Article
DDS: Deepfake Detection System through Collective Intelligence and Deep-Learning Model in Blockchain Environment
by Nakhoon Choi and Heeyoul Kim
Appl. Sci. 2023, 13(4), 2122; https://doi.org/10.3390/app13042122 - 7 Feb 2023
Cited by 12 | Viewed by 6675
Abstract
With the spread of mobile devices and the improvement of the mobile service environment, the use of various Internet content providers (ICPs), including content services such as YouTube and video hosting services, has increased significantly. Owing to its accessibility, video content shared on ICPs is used for information delivery and issue verification. However, if content registered and shared on an ICP is manipulated through deepfakes and maliciously distributed to mount political attacks or cause social problems, it can have severe negative effects. This study proposes a deepfake detection system that detects manipulated video content distributed through video hosting services while ensuring the transparency and objectivity of the detecting party. The detection method of the proposed system is configured through a blockchain and is not dependent on a single ICP, establishing a cooperative system among multiple ICPs that reach consensus for the common purpose of deepfake detection. In the proposed system, the deep-learning model for detecting deepfakes is run independently by each ICP, and the results are ensembled through integrated voting. Furthermore, this study proposes a method to strengthen the objectivity of integrated voting and the neutrality of the deep-learning models by incorporating collective-intelligence-based voting, in which ICP users participate in the integrated voting process, while maintaining high accuracy. Through the proposed system, the accuracy of the deep-learning models is supplemented by collective intelligence in the blockchain environment, and the study illustrates how a consortium contract environment can be created for common goals among companies with conflicting interests.
(This article belongs to the Special Issue Blockchain in Information Security and Privacy)
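The integrated voting step described in this abstract, ensembling each ICP's model output with users' collective-intelligence votes, might look like the following sketch. The weighting scheme, threshold, and function name are illustrative assumptions, not the paper's exact mechanism.

```python
def integrated_vote(model_probs: list[float], user_votes: list[bool],
                    model_weight: float = 0.7) -> bool:
    """Combine per-ICP model outputs with user collective-intelligence votes.

    model_probs: each ICP model's deepfake probability (0..1).
    user_votes: True means a user flagged the video as a deepfake.
    model_weight: hypothetical split between model and user evidence.
    """
    model_score = sum(model_probs) / len(model_probs)
    user_score = sum(user_votes) / len(user_votes) if user_votes else 0.0
    combined = model_weight * model_score + (1 - model_weight) * user_score
    return combined > 0.5  # True => classify the video as a deepfake

# Three ICPs' models lean toward "deepfake"; most participating users agree.
assert integrated_vote([0.8, 0.65, 0.7], [True, True, False, True])
```

In the blockchain setting, each ICP would submit its model's result as a transaction and the vote tally would be computed on-chain, so no single provider can unilaterally decide or retroactively alter the verdict.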
