Search Results (10)

Search Parameters:
Keywords = facial inpainting

24 pages, 4262 KiB  
Article
PigFRIS: A Three-Stage Pipeline for Fence Occlusion Segmentation, GAN-Based Pig Face Inpainting, and Efficient Pig Face Recognition
by Ruihan Ma, Seyeon Chung, Sangcheol Kim and Hyongsuk Kim
Animals 2025, 15(7), 978; https://doi.org/10.3390/ani15070978 - 28 Mar 2025
Viewed by 637
Abstract
Accurate animal face recognition is essential for effective health monitoring, behavior analysis, and productivity management in smart farming. However, environmental obstructions and animal behaviors complicate identification tasks. In pig farming, fences and frequent movements often occlude essential facial features, while high inter-class similarity makes distinguishing individuals even more challenging. To address these issues, we introduce the Pig Face Recognition and Inpainting System (PigFRIS). This integrated framework enhances recognition accuracy by removing occlusions and restoring missing facial features. PigFRIS employs state-of-the-art occlusion detection with the YOLOv11 segmentation model, a GAN-based inpainting reconstruction module using AOT-GAN, and a lightweight recognition module tailored for pig face classification. In doing so, our system detects occlusions, reconstructs obscured regions, and emphasizes key facial features, thereby improving overall performance. Experimental results validate the effectiveness of PigFRIS. For instance, YOLO11l achieves a recall of 94.92% and an AP50 of 96.28% for occlusion detection, AOT-GAN records an FID of 51.48 and an SSIM of 91.50% for image restoration, and EfficientNet-B2 attains an accuracy of 91.62% with an F1 Score of 91.44% in classification. Additionally, heatmap analysis reveals that the system successfully focuses on relevant facial features rather than irrelevant occlusions, enhancing classification reliability. This work offers a novel and practical solution for animal face recognition in smart farming. It overcomes the limitations of existing methods and contributes to more effective livestock management and advancements in agricultural technology.
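The detect-then-inpaint-then-classify flow described in the abstract can be sketched as a simple three-stage pipeline. The code below is purely illustrative: every stage function is a hypothetical stand-in (a dummy mask, a mean-fill, a fixed label), not the authors' YOLOv11, AOT-GAN, or EfficientNet-B2 models.

```python
# Illustrative three-stage pipeline in the spirit of PigFRIS:
# occlusion segmentation -> inpainting of occluded pixels -> recognition.
# All three stages are toy stand-ins for the real networks.

def segment_occlusion(image):
    # Stand-in for the segmentation stage: returns a binary mask that
    # marks "fence" pixels (here, naively, any zero-valued pixel).
    return [[1 if px == 0 else 0 for px in row] for row in image]

def inpaint(image, mask):
    # Stand-in for the GAN inpainting stage: fills masked pixels with
    # the mean of the unmasked pixels.
    visible = [px for r, row in enumerate(image)
               for c, px in enumerate(row) if not mask[r][c]]
    fill = sum(visible) / len(visible)
    return [[fill if mask[r][c] else image[r][c]
             for c in range(len(row))] for r, row in enumerate(image)]

def classify(image):
    # Stand-in for the recognition stage: returns a fixed identity.
    return "pig_07"

def pigfris(image):
    mask = segment_occlusion(image)
    restored = inpaint(image, mask)
    return classify(restored)
```

The point of the sketch is the staging: the recognizer only ever sees the restored image, so segmentation and inpainting quality directly bound recognition accuracy.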

22 pages, 4962 KiB  
Article
Face Image Inpainting of Tang Dynasty Female Terracotta Figurines Based on an Improved Global and Local Consistency Image Completion Algorithm
by Qiangqiang Fan, Cong Wei, Shangyang Wu and Jinhan Xie
Appl. Sci. 2024, 14(24), 11621; https://doi.org/10.3390/app142411621 - 12 Dec 2024
Cited by 1 | Viewed by 1443
Abstract
Tang Dynasty female terracotta figurines, as important relics of ceramic art, have commonly suffered natural and man-made damage, among which facial damage is especially severe. Image inpainting is widely used in cultural heritage fields such as murals and paintings, where rich datasets are available. However, its application to the restoration of Tang Dynasty terracotta figurines remains limited. This study first evaluates the extent of facial damage in Tang Dynasty female terracotta figurines, and then uses the Global and Local Consistency Image Completion (GLCIC) algorithm to restore the original appearance of the figurines, ensuring that the restored area is globally and locally consistent with the original image. To address the scarcity of data and the blurred facial features of the figurines, the study optimizes the algorithm through data augmentation, guided filtering, and local enhancement techniques. The experimental results show that the improved algorithm restores the shape features of the figurines' faces with higher accuracy, but there is still room for improvement in color and texture features. This study provides a new technical path for the protection and inpainting of Tang Dynasty terracotta figurines, and proposes an effective strategy for image inpainting under data scarcity.
(This article belongs to the Special Issue Advanced Technologies in Cultural Heritage)

22 pages, 4002 KiB  
Article
UFCC: A Unified Forensic Approach to Locating Tampered Areas in Still Images and Detecting Deepfake Videos by Evaluating Content Consistency
by Po-Chyi Su, Bo-Hong Huang and Tien-Ying Kuo
Electronics 2024, 13(4), 804; https://doi.org/10.3390/electronics13040804 - 19 Feb 2024
Cited by 6 | Viewed by 2347
Abstract
Image inpainting and Deepfake techniques have the potential to drastically alter the meaning of visual content, posing a serious threat to the integrity of both images and videos. Addressing this challenge requires the development of effective methods to verify the authenticity of investigated visual data. This research introduces UFCC (Unified Forensic Scheme by Content Consistency), a novel forensic approach based on deep learning. UFCC can identify tampered areas in images and detect Deepfake videos by examining content consistency, assuming that manipulations can create dissimilarity between tampered and intact portions of visual data. The term “Unified” signifies that the same methodology is applicable to both still images and videos. Recognizing the challenge of collecting a diverse dataset for supervised learning due to various tampering methods, we overcome this limitation by incorporating information from original or unaltered content in the training process rather than relying solely on tampered data. A neural network for feature extraction is trained to classify imagery patches, and a Siamese network measures the similarity between pairs of patches. For still images, tampered areas are identified as patches that deviate from the majority of the investigated image. In the case of Deepfake video detection, the proposed scheme involves locating facial regions and determining authenticity by comparing facial region similarity across consecutive frames. Extensive testing is conducted on publicly available image forensic datasets and Deepfake datasets with various manipulation operations. The experimental results highlight the superior accuracy and stability of the UFCC scheme compared to existing methods.
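The still-image half of the idea above, flagging patches that deviate from the majority, can be sketched in a few lines. This is not the authors' code: a trivial mean-intensity "embedding" stands in for the trained feature extractor, and a distance-from-median test stands in for the Siamese similarity measure.

```python
# Illustrative sketch of patch-consistency tamper localization:
# embed each patch, estimate the "majority" embedding robustly,
# and flag patches whose embedding deviates beyond a threshold.

def embed(patch):
    # Stand-in for the learned feature extractor: mean intensity.
    return sum(patch) / len(patch)

def flag_tampered(patches, threshold=2.0):
    feats = [embed(p) for p in patches]
    center = sorted(feats)[len(feats) // 2]  # median as a robust majority estimate
    return [i for i, f in enumerate(feats) if abs(f - center) > threshold]
```

Using the median rather than the mean matters here: a single tampered patch should not be allowed to drag the "majority" estimate toward itself.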
(This article belongs to the Special Issue Image/Video Processing and Encoding for Contemporary Applications)

12 pages, 2446 KiB  
Article
Recovery-Based Occluded Face Recognition by Identity-Guided Inpainting
by Honglei Li, Yifan Zhang, Wenmin Wang, Shenyong Zhang and Shixiong Zhang
Sensors 2024, 24(2), 394; https://doi.org/10.3390/s24020394 - 9 Jan 2024
Cited by 4 | Viewed by 3577
Abstract
Occlusion in facial photos poses a significant challenge for machine detection and recognition. Consequently, occluded face recognition for camera-captured images has emerged as a prominent and widely discussed topic in computer vision. Current standard face recognition methods achieve remarkable performance on unoccluded faces but perform poorly when applied directly to occluded face datasets. The main reason lies in the absence of identity cues caused by occlusions. A direct remedy is therefore to recover the occluded areas with an inpainting model. However, existing inpainting models based on an encoder–decoder structure are limited in preserving inherent identity information. To solve this problem, we propose ID-Inpainter, an identity-guided face inpainting model, which preserves identity information to the greatest extent through a more accurate identity sampling strategy and a GAN-like fusing network. We conduct recognition experiments on occluded face photographs from the LFW, CFP-FP, and AgeDB-30 datasets, and the results indicate that our method achieves state-of-the-art performance in identity-preserving inpainting and dramatically improves the accuracy of standard recognizers on occluded face recognition.
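Identity-guided inpainting of this kind is commonly trained with a loss that combines pixel reconstruction with an identity term comparing embeddings of the restored and ground-truth faces. The sketch below illustrates that general shape only; the embedding function, L1 reconstruction term, and weight are assumptions for illustration, not taken from the paper.

```python
# Illustrative identity-preserving inpainting objective:
# reconstruction (L1) + weighted cosine-distance identity penalty.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identity_inpainting_loss(pred, target, embed, weight=1.0):
    # embed() stands in for a pretrained face-identity network.
    l_rec = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    l_id = 1.0 - cosine(embed(pred), embed(target))
    return l_rec + weight * l_id
```

The identity term is what pushes the generator beyond "plausible face pixels" toward "pixels of this person's face", which is exactly the gap the abstract identifies in plain encoder–decoder inpainters.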
(This article belongs to the Special Issue Deep Learning-Based Image and Signal Sensing and Processing)

31 pages, 3580 KiB  
Review
A Review of Image Inpainting Methods Based on Deep Learning
by Zishan Xu, Xiaofeng Zhang, Wei Chen, Minda Yao, Jueting Liu, Tingting Xu and Zehua Wang
Appl. Sci. 2023, 13(20), 11189; https://doi.org/10.3390/app132011189 - 11 Oct 2023
Cited by 17 | Viewed by 15323
Abstract
Image inpainting is an age-old image processing problem, with people from different eras attempting to solve it using various methods. Traditional image inpainting algorithms can repair minor damage such as scratches and wear. However, with the rapid development of deep learning in computer vision in recent years, coupled with abundant computing resources, deep-learning-based methods have increasingly highlighted their advantages in semantic feature extraction, image transformation, and image generation. As such, image inpainting algorithms based on deep learning have become the mainstream in this domain. In this article, we first provide a comprehensive review of some classic deep-learning-based methods in the image inpainting field. Then, we categorize these methods based on component optimization, network structure design optimization, and training method optimization, discussing the advantages and disadvantages of each approach. A comparison is also made based on public datasets and evaluation metrics in image inpainting. Furthermore, the article delves into the applications of current image inpainting technologies, categorizing them into three major scenarios: object removal, general image repair, and facial inpainting. Finally, current challenges and prospective developments in the field of image inpainting are discussed.
(This article belongs to the Special Issue Advances in Intelligent Communication System)

22 pages, 8887 KiB  
Article
GANMasker: A Two-Stage Generative Adversarial Network for High-Quality Face Mask Removal
by Mohamed Mahmoud and Hyun-Soo Kang
Sensors 2023, 23(16), 7094; https://doi.org/10.3390/s23167094 - 10 Aug 2023
Cited by 13 | Viewed by 3630
Abstract
Deep-learning-based image inpainting methods have made remarkable advancements, particularly in object removal tasks. The removal of face masks has gained significant attention, especially in the wake of the COVID-19 pandemic, and while numerous methods have successfully addressed the removal of small objects, removing large and complex masks from faces remains demanding. This paper presents a novel two-stage network for unmasking faces, considering the intricate facial features typically concealed by masks, such as noses, mouths, and chins. The scarcity of paired datasets comprising masked and unmasked face images poses an additional challenge. In the first stage of our proposed model, we employ an autoencoder-based network for binary segmentation of the face mask. Subsequently, in the second stage, we introduce a generative adversarial network (GAN)-based network enhanced with attention and Masked–Unmasked Region Fusion (MURF) mechanisms to focus on the masked region. Our network generates realistic and accurate unmasked faces that resemble the original faces. We train our model on paired unmasked and masked face images sourced from CelebA, a large public dataset, and evaluate its performance on multi-scale masked faces. The experimental results illustrate that the proposed method surpasses current state-of-the-art techniques in both qualitative and quantitative metrics. It achieves a Peak Signal-to-Noise Ratio (PSNR) improvement of 4.18 dB over the second-best method, with the PSNR reaching 30.96. Additionally, it exhibits a 1% increase in the Structural Similarity Index Measure (SSIM), achieving a value of 0.95.
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)

13 pages, 4992 KiB  
Article
Efficient Face Region Occlusion Repair Based on T-GANs
by Qiaoyue Man and Young-Im Cho
Electronics 2023, 12(10), 2162; https://doi.org/10.3390/electronics12102162 - 9 May 2023
Cited by 2 | Viewed by 2135
Abstract
In image restoration tasks, the generative adversarial network (GAN) demonstrates excellent performance. However, significant challenges remain in generative face region inpainting. Traditional approaches are ineffective at maintaining global consistency among facial components and recovering fine facial details. To address this challenge, this study proposes a facial restoration network combining a transformer module and a GAN to accurately detect the missing feature parts of the face and perform effective, fine-grained restoration. We validate the proposed model using different image quality evaluation methods and several open-source face datasets, and experimentally demonstrate that our model outperforms other current state-of-the-art network models in terms of generated image quality and the coherent naturalness of facial features in face image restoration tasks.
(This article belongs to the Special Issue AI Technologies and Smart City)

18 pages, 43034 KiB  
Article
Research on High-Resolution Face Image Inpainting Method Based on StyleGAN
by Libo He, Zhenping Qiang, Xiaofeng Shao, Hong Lin, Meijiao Wang and Fei Dai
Electronics 2022, 11(10), 1620; https://doi.org/10.3390/electronics11101620 - 19 May 2022
Cited by 17 | Viewed by 6039
Abstract
In face image recognition and other related applications, incomplete facial imagery due to obscuring factors during acquisition is an issue that requires solving. To tackle this issue, face image completion has become an important research topic in image processing. Face image completion methods must be able to capture the semantics of facial expression, an ability that deep learning networks have been widely shown to possess. However, for high-resolution face images, inpainting networks are difficult to train to convergence, which makes high-resolution face image completion a difficult problem. Building on deep learning models for high-resolution face image generation, this paper proposes a high-resolution face inpainting method. First, our method extracts the latent vector of the face image to be repaired through ResNet, then inputs the latent vector into a pre-trained StyleGAN model to generate a face image. Next, it calculates the loss between the known part of the face image to be repaired and the corresponding part of the generated face image. Afterward, the latent vector is adjusted and a new face image is generated, iterating until the iteration limit is reached. Finally, Poisson fusion is employed to blend the last generated face image with the face image to be repaired, eliminating differences in boundary color in the repaired image. Through comparison with two classical face completion methods from recent years on the CelebA-HQ dataset, we found that our method achieves better completion results at 256×256 resolution. For 1024×1024 resolution face image restoration, we have also conducted a large number of experiments, which demonstrate the effectiveness of our method. Our method can obtain a variety of repair results by editing the latent vector. In addition, our method can be successfully applied to face image editing, face image watermark removal, and other applications without retraining the network for different masks.
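The iterative loop in this abstract, repeatedly refining a latent vector so the generator's output matches the damaged image on its known pixels only, can be sketched with a toy example. Everything concrete here is an assumption for illustration: a scalar latent, a linear "generator" standing in for StyleGAN, and a finite-difference gradient step standing in for the paper's actual update rule.

```python
# Illustrative latent-optimization inpainting loop: minimize the
# reconstruction loss restricted to the known (undamaged) region.

def masked_loss(generated, damaged, known_mask):
    # L2 loss computed only over pixels marked as known.
    terms = [(g - d) ** 2
             for g, d, m in zip(generated, damaged, known_mask) if m]
    return sum(terms) / len(terms)

def optimize_latent(generator, damaged, known_mask, z,
                    lr=0.1, steps=200, eps=1e-4):
    for _ in range(steps):
        # Finite-difference gradient on the scalar latent (toy only;
        # real implementations backpropagate through the generator).
        l0 = masked_loss(generator(z), damaged, known_mask)
        l1 = masked_loss(generator(z + eps), damaged, known_mask)
        z -= lr * (l1 - l0) / eps
    return z
```

Because the loss ignores masked pixels, the generator is free to fill the damaged region with whatever its prior deems a plausible face, which is the core trick of this family of methods.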
(This article belongs to the Special Issue New Advances in Visual Computing and Virtual Reality)

14 pages, 25324 KiB  
Article
Convincing 3D Face Reconstruction from a Single Color Image under Occluded Scenes
by Dapeng Zhao, Jinkang Cai and Yue Qi
Electronics 2022, 11(4), 543; https://doi.org/10.3390/electronics11040543 - 11 Feb 2022
Cited by 5 | Viewed by 3879
Abstract
The last few years have witnessed the great success of generative adversarial networks (GANs) in synthesizing high-quality photorealistic face images. Many recent 3D facial texture reconstruction works pursue higher resolutions and ignore occlusion. We study the problem of detailed 3D facial reconstruction under occluded scenes. This is a challenging problem: collecting a large-scale, high-resolution 3D face dataset remains very costly. In this work, we propose a deep-learning-based approach for detailed 3D face reconstruction that does not require large-scale 3D datasets. Motivated by generative face image inpainting and weakly supervised 3D deep reconstruction, we propose a contour-guided method for generating complete 3D face models. Our weakly supervised 3D reconstruction framework can generate convincing 3D models. We further test our method on the MICC, Florence, and LFW datasets, showing its strong generalization capacity and superior performance.
(This article belongs to the Special Issue New Advances in Visual Computing and Virtual Reality)

15 pages, 15970 KiB  
Article
Interactive Removal of Microphone Object in Facial Images
by Muhammad Kamran Javed Khan, Nizam Ud Din, Seho Bae and Juneho Yi
Electronics 2019, 8(10), 1115; https://doi.org/10.3390/electronics8101115 - 2 Oct 2019
Cited by 40 | Viewed by 4342
Abstract
Removing a specific object from an image and replacing the hole left behind with a visually plausible background is an intriguing task. While recent deep-learning-based object removal methods have shown promising results on structured scenes, none have addressed object removal in facial images. The objective of this work is to remove the microphone object in facial images and fill the hole with correct facial semantics and fine details. To make our solution practically useful, we present an interactive method called MRGAN, in which the user roughly provides the microphone region. For filling the hole, we employ a generative adversarial network based image-to-image translation approach. We break the problem into two stages: an inpainter and a refiner. The inpainter estimates a coarse prediction by roughly filling in the microphone region, followed by the refiner, which produces fine details under the microphone region. We unite perceptual loss, reconstruction loss, and adversarial loss as a joint loss function for generating a realistic face with a structure similar to the ground truth. Because facial image pairs with and without a microphone do not exist, we trained our method on a microphone dataset synthetically generated from CelebA face images and evaluated it on real-world microphone images. Our extensive evaluation shows that MRGAN performs better than state-of-the-art image manipulation methods on real microphone images, although it was trained only on the synthetic dataset. Additionally, we provide ablation studies for the integrated loss function and for different network arrangements.
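A joint loss of the shape the abstract describes (reconstruction + perceptual + adversarial) is typically a weighted sum of terms computed by different networks. The sketch below shows that shape only; the weights, the L1 stand-in for the perceptual term, and the non-saturating generator loss are illustrative assumptions, not MRGAN's actual formulation.

```python
# Illustrative weighted joint loss for GAN-based inpainting.
import math

def l1(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def adversarial_nonsat(d_fake):
    # Non-saturating generator loss: -log D(G(x)), with D(.) in (0, 1].
    return -math.log(d_fake)

def joint_loss(pred, target, feat_pred, feat_target, d_fake,
               w_rec=1.0, w_perc=0.1, w_adv=0.01):
    l_rec = l1(pred, target)            # pixel-space reconstruction
    l_perc = l1(feat_pred, feat_target) # "perceptual": L1 in feature space
    l_adv = adversarial_nonsat(d_fake)  # realism term from the discriminator
    return w_rec * l_rec + w_perc * l_perc + w_adv * l_adv
```

The weighting is the practical knob: the reconstruction term anchors the output to the ground truth, while the perceptual and adversarial terms trade pixel fidelity for texture realism.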
(This article belongs to the Section Artificial Intelligence)
