Article

DFST-UNet: Dual-Domain Fusion Swin Transformer U-Net for Image Forgery Localization

by Jianhua Yang, Anjun Xie, Tao Mai and Yifang Chen *
School of Cyber Security, Guangdong Polytechnic Normal University, Guangzhou 510630, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2025, 27(5), 535; https://doi.org/10.3390/e27050535
Submission received: 21 April 2025 / Revised: 13 May 2025 / Accepted: 15 May 2025 / Published: 17 May 2025
(This article belongs to the Section Signal and Data Analysis)

Abstract

Image forgery localization is critical for defending against the malicious manipulation of image content and is attracting increasing attention worldwide. In this paper, we propose a Dual-domain Fusion Swin Transformer U-Net (DFST-UNet) for image forgery localization. DFST-UNet is built on a U-shaped encoder–decoder architecture, into which Swin Transformer blocks are integrated to capture long-range contextual information and perceive forged regions at different scales. Since high-frequency forgery information is an essential clue for forgery localization, a dual-stream encoder is proposed to comprehensively expose forgery clues in both the RGB domain and the frequency domain. A novel high-frequency feature extractor module (HFEM) is designed to extract robust high-frequency features, and a hierarchical attention fusion module (HAFM) is designed to effectively fuse the dual-domain features. Extensive experimental results demonstrate the superiority of DFST-UNet over state-of-the-art methods in the task of image forgery localization.
Keywords: image forgery localization; image forensics; dual-domain fusion; swin transformer
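
The abstract describes the overall architecture: a U-shaped encoder–decoder with two encoder streams (RGB domain and frequency domain) whose features are fused by an attention module. The internals of the HFEM, the HAFM, and the Swin Transformer blocks are not given here, so the following is only a minimal structural sketch, assuming a PyTorch implementation. The names HighPassStem, AttentionFusion, and DualStreamUNetSketch are hypothetical stand-ins, and plain convolutions replace the Swin Transformer blocks; this is not the paper's implementation.

# Minimal structural sketch of a dual-stream forgery-localization network.
# HFEM, HAFM, and Swin Transformer internals are not specified in the abstract;
# simple stand-ins are used purely for illustration.
import torch
import torch.nn as nn


class HighPassStem(nn.Module):
    """Stand-in for the HFEM: a fixed Laplacian-style high-pass filter
    followed by a learnable projection (an assumption, not the paper's design)."""
    def __init__(self, out_channels: int = 32):
        super().__init__()
        kernel = torch.tensor([[0., -1., 0.],
                               [-1., 4., -1.],
                               [0., -1., 0.]]).repeat(3, 1, 1, 1)  # one filter per RGB channel
        self.register_buffer("kernel", kernel)
        self.proj = nn.Conv2d(3, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        high = nn.functional.conv2d(x, self.kernel, padding=1, groups=3)
        return self.proj(high)


class AttentionFusion(nn.Module):
    """Stand-in for the HAFM: channel-attention weighting of concatenated
    RGB-domain and frequency-domain features (an assumption)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, freq_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([rgb_feat, freq_feat], dim=1)
        return self.merge(fused * self.gate(fused))


class DualStreamUNetSketch(nn.Module):
    """U-shaped encoder-decoder with two encoder stems; convolutional blocks
    stand in for the Swin Transformer blocks."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.rgb_stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.freq_stem = HighPassStem(channels)
        self.fuse = AttentionFusion(channels)
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 1, kernel_size=1),  # per-pixel forgery-mask logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(self.rgb_stem(x), self.freq_stem(x))
        return self.decoder(self.encoder(fused))


if __name__ == "__main__":
    model = DualStreamUNetSketch()
    mask_logits = model(torch.randn(1, 3, 256, 256))
    print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])

The sketch only mirrors the dual-domain idea, namely that a high-pass stream and an RGB stream are fused by attention before a U-shaped backbone predicts a per-pixel forgery mask; the paper's actual module designs should be taken from the full text.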

