Fragments Inpainting for Tomb Murals Using a Dual-Attention Mechanism GAN with Improved Generators
Abstract
1. Introduction
1. Proposing a multiscale feature fusion mechanism based on dual attention: we apply an improved dual-attention mechanism to the spatial information that the network extracts at different scales. This reduces the mutual interference caused by high intra-pixel correlations and strengthens the contribution of the attention feature maps to the overall feature analysis (see the first sketch after this list).
2. Improving the generator model: we add improved residual priors and attention mechanisms to increase the accuracy of the repaired structure. By introducing additional image priors, our method addresses the structure loss of the basic model, preserves the image structure, and obtains more accurate structural information, keeping the model focused on structure-relevant features during inpainting (see the second sketch after this list).
3. Constructing a dataset of murals from the tomb of Prince Zhang Huai of the Tang Dynasty and inpainting three typical kinds of disease: experimental results show that the model's performance is significantly improved over current mainstream models. Long-term environmental erosion and human damage have left the murals with many typical diseases: fissures, large hollow drums in the tomb floor that cause loss of the floor layer and missing image regions, artificial excavation, and other damage in urgent need of repair, which is the focus of this paper.
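To make contribution 1 concrete, the following is a minimal sketch of one plausible way to build such a module, assuming a CBAM-style channel-then-spatial attention (cf. Woo et al.) and bilinear upsampling for scale alignment. The class names (`ChannelAttention`, `SpatialAttention`, `DualAttentionFusion`) and hyperparameters (`reduction`, `kernel_size`) are illustrative assumptions, not the exact architecture of this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims, re-weight channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial attention: pool over channels, re-weight positions."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class DualAttentionFusion(nn.Module):
    """Fuse two scales of features after dual (channel + spatial) attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, fine, coarse):
        # Upsample the coarse-scale map to the fine resolution, attend, fuse.
        coarse = F.interpolate(coarse, size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        fine = self.sa(self.ca(fine))
        coarse = self.sa(self.ca(coarse))
        return self.fuse(torch.cat([fine, coarse], dim=1))
```

For example, `DualAttentionFusion(64)(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32))` returns a fused `(1, 64, 64, 64)` feature map; attending to each scale before concatenation is what limits the mutual interference between scales described above.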
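For contribution 2, here is a minimal sketch of one way to bias a generator toward structure-relevant features: a dilated residual block whose residual branch is re-weighted by the two attention modules from the previous sketch before the skip connection. The dilation rate, normalization choice, and block name are assumptions for illustration, not the paper's exact generator design.

```python
class AttentiveResidualBlock(nn.Module):
    """Dilated residual block for an inpainting generator.

    The residual branch is re-weighted by channel and spatial attention
    (ChannelAttention / SpatialAttention from the previous sketch) before
    the skip-connection sum, so structure-relevant features dominate the
    update while the identity path preserves the existing image content.
    """
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        self.ca = ChannelAttention(channels)  # from the previous sketch
        self.sa = SpatialAttention()          # from the previous sketch

    def forward(self, x):
        res = self.sa(self.ca(self.body(x)))  # attend over the residual only
        return F.relu(x + res)                # identity skip preserves structure
```

Keeping the attention on the residual branch rather than the main path is one common way to add such a prior without disturbing features the encoder has already recovered correctly.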
2. Related Work
3. Methods
3.1. Improving the Generator Module
3.2. Multiscale Feature Fusion Based on Dual-Attention
3.3. Joint Segmented Loss Function
3.3.1. Joint Loss Function
3.3.2. Segmented Loss Function
4. Experiments
4.1. Dataset and Implementation Details
4.2. Assessment Indicators
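The tables below report PSNR, SSIM, and MSE. Assuming the standard definitions of these indicators, for a ground-truth image $x$ and inpainted result $\hat{x}$ of size $H \times W$ with peak pixel value $\mathrm{MAX}$:

```latex
\mathrm{MSE} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(x_{ij}-\hat{x}_{ij}\right)^{2},
\qquad
\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}},
\qquad
\mathrm{SSIM}(x,\hat{x}) =
\frac{(2\mu_{x}\mu_{\hat{x}}+c_{1})(2\sigma_{x\hat{x}}+c_{2})}
     {(\mu_{x}^{2}+\mu_{\hat{x}}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{\hat{x}}^{2}+c_{2})}
```

Higher PSNR and SSIM and lower MSE indicate better reconstruction; the MSE magnitudes in the tables suggest pixel values normalized to [0, 1].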
4.3. Multiscale Feature Fusion Module Ranking for Dual-Attention
4.4. Ablation Experiments
4.5. Comparison Experiments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Karianakis, N.; Maragos, P. An integrated system for digital restoration of prehistoric Theran wall paintings. In Proceedings of the 2013 18th International Conference on Digital Signal Processing (DSP), Santorini, Greece, 1–3 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–6.
- Jaidilert, S.; Farooque, G. Crack detection and images inpainting method for Thai mural painting images. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 143–148.
- Cao, J.; Zhang, Z.; Zhao, A.; Cui, H.; Zhang, Q. Ancient mural restoration based on a modified generative adversarial network. Herit. Sci. 2020, 8, 1–14.
- Zhou, S.; Xie, Y. Intelligent restoration technology of mural digital image based on machine learning algorithm. Wirel. Commun. Mob. Comput. 2022, 2022, 4446999.
- Cao, J.; Li, Y.; Zhang, Q.; Cui, H. Restoration of an ancient temple mural by a local search algorithm of an adaptive sample block. Herit. Sci. 2019, 7, 1–14.
- Priego, E.; Herráez, J.; Denia, J.L.; Navarro, P. Technical study for restoration of mural paintings through the transfer of a photographic image to the vault of a church. J. Cult. Herit. 2022, 58, 112–121.
- Zeng, Y.; Gong, Y.; Zeng, X. Controllable digital restoration of ancient paintings using convolutional neural network and nearest neighbor. Pattern Recognit. Lett. 2020, 133, 158–164.
- Gupta, V.; Sambyal, N.; Sharma, A.; Kumar, P. Restoration of artwork using deep neural networks. Evol. Syst. 2021, 12, 439–446.
- Wang, Y.; Tao, X.; Qi, X.; Shen, X.; Jia, J. Image inpainting via generative multi-column convolutional neural networks. Adv. Neural Inf. Process. Syst. 2018, 31.
- Zhou, X.; Xu, Z.; Cheng, X.; Xing, Z. Restoration of laser interference image based on large scale deep learning. IEEE Access 2022, 10, 123057–123067.
- Chan, T.F.; Shen, J. Nontexture inpainting by curvature-driven diffusions. J. Vis. Commun. Image Represent. 2001, 12, 436–449.
- Criminisi, A.; Pérez, P.; Toyama, K. Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212.
- Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 2009, 28, 24.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
- Ren, Y.; Yu, X.; Zhang, R.; Li, T.H.; Liu, S.; Li, G. StructureFlow: Image inpainting via structure-aware appearance flow. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 181–190.
- Liu, H.; Jiang, B.; Xiao, Y.; Yang, C. Coherent semantic attention for image inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4170–4179.
- Li, J.; Wang, N.; Zhang, L.; Du, B.; Tao, D. Recurrent feature reasoning for image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7760–7768.
- Guo, X.; Yang, H.; Huang, D. Image inpainting via conditional texture and structure dual generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 14134–14143.
- Suvorov, R.; Logacheva, E.; Mashikhin, A.; Remizova, A.; Ashukha, A.; Silvestrov, A.; Kong, N.; Goka, H.; Park, K.; Lempitsky, V. Resolution-robust large mask inpainting with Fourier convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 2149–2159.
- Li, R.; Tan, R.T.; Cheong, L.F. Robust optical flow in rainy scenes. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 288–304.
- Zhang, G.; Gao, X.; Yang, Y.; Wang, M.; Ran, S. Controllably deep supervision and multi-scale feature fusion network for cloud and snow detection based on medium- and high-resolution imagery dataset. Remote Sens. 2021, 13, 4805.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Qian, G.; Abualshour, A.; Li, G.; Thabet, A.; Ghanem, B. PU-GCN: Point cloud upsampling using graph convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11683–11692.
- Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 117–122.
- Wu, M.; Jia, M.; Wang, J. TMCrack-Net: A U-shaped network with a feature pyramid and Transformer for mural crack segmentation. Appl. Sci. 2022, 12, 10940.
Metric | L1 | L2 | L2–L1 | L1–L2 |
---|---|---|---|---|
PSNR (dB) | 22.6238 | 22.2908 | 22.3710 | 22.8600 |
SSIM | 0.8523 | 0.8558 | 0.8654 | 0.8629 |
MSE | 0.0085 | 0.0074 | 0.0050 | 0.0068 |
Metric | Baseline | J1 | J2 | J3 | J4 | J5 | J6 |
---|---|---|---|---|---|---|---|
PSNR (dB) | 22.6238 | 23.2908 | 24.3710 | 23.5600 | 23.0815 | 23.0912 | 23.3141 |
SSIM | 0.8523 | 0.8558 | 0.8654 | 0.8629 | 0.8463 | 0.8470 | 0.8590 |
MSE | 0.0085 | 0.0074 | 0.0055 | 0.0068 | 0.0075 | 0.0074 | 0.0071 |
Metric | A | B | C | D | E |
---|---|---|---|---|---|
PSNR (dB) | 22.6238 | 22.8600 | 23.7136 | 23.6139 | 23.7365 |
SSIM | 0.8523 | 0.8629 | 0.8548 | 0.8657 | 0.8659 |
MSE | 0.0085 | 0.0068 | 0.0047 | 0.0048 | 0.0046 |
Mask Type | Metric | Ours | CSA | RFR | LaMa | CTSDG |
---|---|---|---|---|---|---|
epitaxial mask | PSNR (dB) | 25.2366 | 25.0880 | 23.9347 | 25.1932 | 24.7238 |
 | SSIM | 0.8509 | 0.7990 | 0.8108 | 0.8468 | 0.8524 |
 | MSE | 0.0062 | 0.0053 | 0.0067 | 0.0050 | 0.0086 |
crack mask | PSNR (dB) | 31.8855 | 28.6886 | 31.1276 | 31.2935 | 30.8397 |
 | SSIM | 0.9132 | 0.8262 | 0.9102 | 0.9114 | 0.9150 |
 | MSE | 0.0025 | 0.0035 | 0.0030 | 0.0019 | 0.0022 |
small mask | PSNR (dB) | 35.6748 | 28.9423 | 34.4726 | 35.1649 | 32.2429 |
 | SSIM | 0.9597 | 0.8491 | 0.9562 | 0.9550 | 0.9538 |
 | MSE | 0.0008 | 0.0032 | 0.0010 | 0.0009 | 0.0020 |
large mask | PSNR (dB) | 25.2643 | 23.2033 | 23.5682 | 25.0319 | 24.9831 |
 | SSIM | 0.7311 | 0.7323 | 0.7201 | 0.7124 | 0.7306 |
 | MSE | 0.0077 | 0.0094 | 0.0113 | 0.0091 | 0.0080 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).