Task-Driven Real-World Super-Resolution of Document Scans
Abstract
1. Introduction
- We propose a task-driven framework for training networks aimed at super-resolving text document images (see Figure 1), guided by text detection and recognition, keypoint detection, and color consistency.
- We employ dynamic weight averaging (DWA) to adaptively balance the loss components, ensuring stable convergence across all tasks.
- We introduce a real-world dataset with accurate registration at a 4× magnification ratio, which allows for realistic evaluation of SR for OCR-related tasks. In this work, we exploit the dataset to validate SISR techniques; however, as it contains multiple scans of each document, it can also be used for multi-image SR.
- We report the results of a thorough quantitative and qualitative experimental analysis supported by statistical tests, which provides insight into model behavior on both real-world and simulated datasets and highlights the practical benefits and risks of task-aware SR. Importantly, the elaborated methodology constitutes a solid foundation for our future work on multi-image SR.
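The DWA scheme mentioned above can be sketched as follows. This is a minimal illustration of the weighting rule from Liu et al. (end-to-end multi-task learning with attention): each task's weight is driven by the relative descent rate of its loss over the two preceding epochs, softened by a temperature. Function and variable names are ours, not from the paper.

```python
import math

def dwa_weights(loss_history, temperature=2.0):
    """Dynamic Weight Average: one weight per task, summing to the task count.

    loss_history: list of per-epoch lists, one loss value per task.
    """
    n_tasks = len(loss_history[-1])
    if len(loss_history) < 2:
        # Not enough history yet: fall back to uniform weights.
        return [1.0] * n_tasks
    # Relative descent rate of each task loss over the last two epochs.
    ratios = [loss_history[-1][k] / loss_history[-2][k] for k in range(n_tasks)]
    # Softmax over the rates, rescaled so the weights sum to n_tasks.
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    return [n_tasks * e / total for e in exps]
```

A task whose loss is still falling quickly (small ratio) receives a smaller weight, shifting capacity toward tasks that have stalled; the temperature controls how aggressively the weights diverge from uniform.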
2. Related Work
2.1. Deep Learning for SR
2.2. Task-Specific SR
2.3. Multi-Task Loss Optimization
3. The Proposed Framework for Task-Driven Training
3.1. Network Architecture and Loss Function
3.1.1. SRResNet Architecture Overview
3.1.2. CTPN Architecture for Text Region Detection
3.1.3. CRNN Architecture for Text Recognition
3.1.4. Key.Net for Structural Consistency
3.1.5. Multi-Loss Approach
3.2. Multi-Task Loss Aggregation and Optimization Strategy
4. Experimental Validation
4.1. Experimental Setup
4.1.1. Datasets
4.1.2. Training Strategy
4.1.3. Investigated Variants and Evaluation Metrics
4.2. Experimental Results
4.3. Discussion
4.3.1. Interpreting the Quantitative Results
4.3.2. Qualitative Analysis
4.3.3. Limitations
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Guo, H.; Dai, T.; Zhu, M.; Meng, G.; Chen, B.; Wang, Z.; Xia, S.T. One-stage low-resolution text recognition with high-resolution knowledge transfer. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 2189–2198. [Google Scholar]
- Yu, M.; Shi, J.; Xue, C.; Hao, X.; Yan, G. A review of single image super-resolution reconstruction based on deep learning. Multimed. Tools Appl. 2024, 83, 55921–55962. [Google Scholar] [CrossRef]
- Wang, Z.; Chen, J.; Hoi, S.C.H. Deep Learning for Image Super-Resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3365–3387. [Google Scholar] [CrossRef] [PubMed]
- Yue, L.; Shen, H.; Li, J.; Yuan, Q.; Zhang, H.; Zhang, L. Image super-resolution: The techniques, applications, and future. Signal Process. 2016, 128, 389–408. [Google Scholar] [CrossRef]
- Valsesia, D.; Magli, E. Permutation Invariance and Uncertainty in Multitemporal Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 391–407. [Google Scholar]
- Kim, J.; Kwon Lee, J.; Mu Lee, K. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Cai, J.; Zeng, H.; Yong, H.; Cao, Z.; Zhang, L. Toward real-world single image super-resolution: A new benchmark and a new model. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Chen, H.; He, X.; Qing, L.; Wu, Y.; Ren, C.; Sheriff, R.E.; Zhu, C. Real-world single image super-resolution: A brief review. Inf. Fusion 2022, 79, 124–145. [Google Scholar] [CrossRef]
- Kawulok, M.; Kowaleczko, P.; Ziaja, M.; Nalepa, J.; Kostrzewa, D.; Latini, D.; De Santis, D.; Salvucci, G.; Petracca, I.; La Pegna, V.; et al. Hyperspectral image super-resolution: Task-based evaluation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 18949–18966. [Google Scholar] [CrossRef]
- Tian, Z.; Huang, W.; He, T.; He, P.; Qiao, Y. Detecting text in natural image with connectionist text proposal network. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 56–72. [Google Scholar]
- Liu, Y.; Wang, Y.; Shi, H. A convolutional recurrent neural-network-based machine learning for scene text recognition application. Symmetry 2023, 15, 849. [Google Scholar] [CrossRef]
- Shi, B.; Bai, X.; Yao, C. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 2298–2304. [Google Scholar] [CrossRef] [PubMed]
- Barroso-Laguna, A.; Riba, E.; Ponsa, D.; Mikolajczyk, K. Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Liu, S.; Johns, E.; Davison, A.J. End-to-end multi-task learning with attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1871–1880. [Google Scholar]
- Dong, C.; Zhu, X.; Deng, Y.; Loy, C.C.; Qiao, Y. Boosting optical character recognition: A super-resolution approach. arXiv 2015, arXiv:1506.02211. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481. [Google Scholar]
- Blau, Y.; Michaeli, T. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6228–6237. [Google Scholar]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 1833–1844. [Google Scholar]
- Ayazoglu, M. Extremely lightweight quantization robust real-time single-image super resolution for mobile devices. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 2472–2479. [Google Scholar]
- Lu, Z.; Li, J.; Liu, H.; Huang, C.; Zhang, L.; Zeng, T. Transformer for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 457–466. [Google Scholar]
- Wan, C.; Yu, H.; Li, Z.; Chen, Y.; Zou, Y.; Liu, Y.; Yin, X.; Zuo, K. Swift parameter-free attention network for efficient super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 6246–6256. [Google Scholar]
- Xie, C.; Zhang, X.; Li, L.; Meng, H.; Zhang, T.; Li, T.; Zhao, X. Large kernel distillation network for efficient single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 1283–1292. [Google Scholar]
- Wang, W.; Xie, E.; Sun, P.; Wang, W.; Tian, L.; Shen, C.; Luo, P. TextSR: Content-aware text super-resolution guided by recognition. arXiv 2019, arXiv:1909.07113. [Google Scholar]
- Honda, K.; Fujita, H.; Kurematsu, M. Improvement of Text Image Super-Resolution Benefiting Multi-task Learning. In Proceedings of the International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Kitakyushu, Japan, 19–22 July 2022; Springer: Cham, Switzerland, 2022; pp. 275–286. [Google Scholar]
- Gomez, R.; Shi, B.; Gomez, L.; Neumann, L.; Veit, A.; Matas, J.; Belongie, S.; Karatzas, D. ICDAR2017 robust reading challenge on COCO-text. In Proceedings of the IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017; Volume 1, pp. 1435–1443. [Google Scholar]
- Haris, M.; Shakhnarovich, G.; Ukita, N. Task-driven super resolution: Object detection in low-resolution images. In Proceedings of the International Conference on Neural Information Processing (ICONIP), Sanur, Bali, Indonesia, 8–12 December 2021; Springer: Cham, Switzerland, 2021; pp. 387–395. [Google Scholar]
- Frizza, T.; Dansereau, D.G.; Seresht, N.M.; Bewley, M. Semantically accurate super-resolution generative adversarial networks. Comput. Vis. Image Underst. 2022, 221, 103464. [Google Scholar] [CrossRef]
- Rad, M.S.; Bozorgtabar, B.; Musat, C.; Marti, U.V.; Basler, M.; Ekenel, H.K.; Thiran, J.P. Benefiting from multitask learning to improve single image super-resolution. Neurocomputing 2020, 398, 304–313. [Google Scholar] [CrossRef]
- Madi, B.; Alaasam, R.; El-Sana, J. Text Edges Guided Network for Historical Document Super Resolution. In Proceedings of the International Conference on Frontiers in Handwriting Recognition, Hyderabad, India, 4–7 December 2022; Springer: Cham, Switzerland, 2022; pp. 18–33. [Google Scholar]
- Zyrek, M.; Kawulok, M. Task-driven single-image super-resolution reconstruction of document scans. In Proceedings of the 19th Conference on Computer Science and Intelligence Systems (FedCSIS), Belgrade, Serbia, 8–11 September 2024; pp. 259–264. [Google Scholar]
- Vandenhende, S.; Georgoulis, S.; Van Gansbeke, W.; Proesmans, M.; Dai, D.; Van Gool, L. Multi-task learning for dense prediction tasks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3614–3633. [Google Scholar] [CrossRef] [PubMed]
- Chen, Z.; Badrinarayanan, V.; Lee, C.Y.; Rabinovich, A. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 794–803. [Google Scholar]
- Kendall, A.; Gal, Y.; Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7482–7491. [Google Scholar]
- Guo, M.; Haque, A.; Huang, D.A.; Yeung, S.; Li, F.-F. Dynamic task prioritization for multitask learning. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 270–287. [Google Scholar]
- Tan, C.; Cheng, S.; Wang, L. Efficient Image Super-Resolution via Self-Calibrated Feature Fuse. Sensors 2022, 22, 329. [Google Scholar] [CrossRef] [PubMed]
- Prajapati, K.; Chudasama, V.; Upla, K. A light weight convolutional neural network for single image super-resolution. Procedia Comput. Sci. 2020, 171, 139–147. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Liu, Z.; Li, L.; Wu, Y.; Zhang, C. Facial Expression Restoration Based on Improved Graph Convolutional Networks. arXiv 2019, arXiv:1910.10344. [Google Scholar] [CrossRef]
- Shermeyer, J.; Van Etten, A. The effects of super-resolution on object detection performance in satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1–10. [Google Scholar]
- Correia, P.H.B.; Rivera, G.A.R. Evaluation of OCR free software applied to old books. Rev. Trab. Iniciação Científica UNICAMP 2018, 26. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Lugmayr, A.; Danelljan, M.; Timofte, R.; Kim, K.-w.; Kim, Y.; Lee, J.-y.; Li, Z.; Pan, J.; Shim, D.; Song, K.-U.; et al. NTIRE 2022 challenge on learning the super-resolution space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 786–797. [Google Scholar]
- Benecki, P.; Kawulok, M.; Kostrzewa, D.; Skonieczny, L. Evaluating super-resolution reconstruction of satellite images. Acta Astronaut. 2018, 153, 15–25. [Google Scholar] [CrossRef]
- Lin, L.; Chen, H.; Kuruoglu, E.E.; Zhou, W. Robust structural similarity index measure for images with non-Gaussian distortions. Pattern Recognit. Lett. 2022, 163, 10–16. [Google Scholar] [CrossRef]
- Zhang, F.; Li, S.; Ma, L.; Ngan, K.N. Limitation and challenges of image quality measurement. In Proceedings of the Visual Communications and Image Processing 2010, Huangshan, China, 11–14 July 2010; SPIE: Bellingham, WA, USA, 2010; Volume 7744, pp. 25–32. [Google Scholar]
- Alaei, A.; Bui, V.; Doermann, D.; Pal, U. Document image quality assessment: A survey. ACM Comput. Surv. 2023, 56, 1–36. [Google Scholar] [CrossRef]
- Tarasiewicz, T.; Kawulok, M. A graph attention network for real-world multi-image super-resolution. Inf. Fusion 2025, 124, 103325. [Google Scholar] [CrossRef]
Data Source | No. of Images | LR–HR Resolution (in DPI) | LR Patch Size (in px) | No. of Patches (Real-World) | No. of Patches (Simulated)
---|---|---|---|---|---
**Pretraining set** | | | | |
MS COCO [28] | 81,574 | — | 32 × 32 | — | 81,574
**Training and validation set** | | | | |
University Bulletin | 208 | 75–300 | 256 × 256 | 2046 | 2046
University Bulletin | 208 | 150–600 | 256 × 256 | 6004 | 6004
Scientific Article | 48 | 75–300 | 256 × 256 | 482 | 482
Scientific Article | 48 | 150–600 | 256 × 256 | 1392 | 1392
**Test set** | | | | |
MS COCO [28] | 1209 | — | 32 × 32 | — | 1209
University Bulletin | 26 | 75–300 | 256 × 256 | 300 | 300
University Bulletin | 26 | 150–600 | 256 × 256 | 875 | 875
Scientific Article | 6 | 75–300 | 256 × 256 | 72 | 72
Scientific Article | 6 | 150–600 | 256 × 256 | 210 | 210
COVID Test Leaflet | 1 | 75–300 | 128 × 128 | 95 | 95
Old Books [45] | 45 | 125–500 | 256 × 256 | 860 | 860
[Table: investigated variants — eight trained on the real-world dataset and eight on the simulated one, each enabling a different combination of the task-driven loss components (marked with ✓). The variant names and the loss-component column headers are not recoverable from the extraction.]
[Tables: quantitative scores (PSNR [dB] ↑, SSIM ↑, LPIPS ↓, IoU ↑) for simulated and real-world images, followed by 18 × 18 matrices of pairwise statistical-significance tests comparing interpolation (Int.) against the investigated variants for each metric and dataset (***, **, and * denote decreasing significance levels; NS: not significant; the final column counts the variants each row significantly outperforms). The row and column variant labels are not recoverable from the extraction.]
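Two of the evaluation metrics named above can be computed directly. The sketch below shows full-reference PSNR and the IoU overlap used for detection-oriented scoring; function names and the (x1, y1, x2, y2) box convention are our own illustration, not taken from the paper.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

PSNR measures pixel fidelity against the registered HR reference, while IoU measures how well text regions detected on the super-resolved image overlap those detected on the reference, which is why the two can disagree for task-aware SR.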
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zyrek, M.; Tarasiewicz, T.; Sadel, J.; Krzywon, A.; Kawulok, M. Task-Driven Real-World Super-Resolution of Document Scans. Appl. Sci. 2025, 15, 8063. https://doi.org/10.3390/app15148063