A Hair Drawing Evaluation Algorithm for Exactness Assessment Method in Portrait Drawing Learning Assistant System
Abstract
1. Introduction
2. Related Works in the Literature
3. Review of Portrait Drawing Learning Assistant System
3.1. System Overview
3.2. Auxiliary Lines
3.3. System Utilization Flow
1. Import a portrait photo file as the face image to a PC.
2. Run the auxiliary line generation algorithm on the PC to generate the auxiliary line image.
3. Run Procreate on an iPad.
4. Import the face image and the auxiliary line image into the iPad as separate layers.
5. Draw the portrait using the brush and layer functions provided in Procreate.
6. Save the drawing result image as a PNG file.
7. Apply the drawing exactness evaluation method to the result image and calculate the NCC scores for the hair, the eyebrows, the mouth, and the nose.
8. Modify any face parts whose NCC scores are low and go back to step 7.
3.4. Auxiliary Line Image
- Triangular guidelines that are commonly employed in traditional drawing techniques [21];
- Contours that outline the eyes, the mouth, and the lower facial shape;
- Three circular guidelines that indicate the structure of the nose;
- Nose lines to be mimicked, drawn using Bézier curves;
- Boundary lines defining the hair, the eyebrows, and the eyeglasses.
3.5. Auxiliary Line Generation Algorithm
3.6. Drawing Practice on iPad
3.7. Drawing Exactness Evaluation Method
3.7.1. Normalized Cross-Correlation
3.7.2. NCC Score for Face Component
1. Choose region: identify the region in the image that corresponds to the component to be assessed.
2. Extract feature vector: extract the pixel values from this region to construct the feature vector; pre-processing is applied to minimize the impact of noise and variations.
3. Normalize feature vector: normalize the feature vector to zero mean and unit variance, which mitigates the effects of lighting or exposure differences between the images.
4. Compute dot product: measure the similarity by computing the dot product of the two normalized feature vectors.
5. Normalize similarity score: divide the dot product by the product of the magnitudes of the normalized feature vectors to obtain the final NCC score.
3.7.3. Bounding Box for Face Part
4. Proposal of Hair Drawing Evaluation Algorithm
4.1. Bounding Box for Hair
4.2. Hair Texture
Hair Drawing Evaluation Procedure
1. Grayscale conversion: convert the color input image to grayscale.
2. Edge detection: apply the Canny edge detection algorithm to the grayscale image to locate the hair region.
3. Morphological processing: apply dilation and erosion operations to refine the edge detection result into a hair mask.
4. Hair region extraction: combine the grayscale image with this mask to isolate the hair region.
5. Texture feature detection: apply the cornerEigenValsAndVecs method in the OpenCV library to compute the eigenvalues for the blocks in the image and produce the texture image of the hair region.
6. NCC score calculation: calculate the NCC score between the texture images of the original image and the user's result.
5. Evaluation of Hair Drawing Evaluation Algorithm
5.1. Target Drawing Results
5.2. NCC Score Results
5.3. User 1 Result Analysis
5.4. User 3 Result Analysis
5.5. User 7 Result Analysis
5.6. User 12 Result Analysis
6. Evaluation of Iterative Drawing with PDLAS
6.1. NCC Score Improvements
6.2. User 1 Result Analysis
6.3. User 3 Result Analysis
6.4. User 7 Result Analysis
7. Discussion of Portrait Drawing Evaluation
7.1. Advantages of Auxiliary Lines
7.2. Advantages of NCC
- Texture feature extraction: The fine hair details are extracted by preprocessing both the reference image and the user’s drawing using edge detection and eigenvalue-based texture analysis.
- Pixel-by-pixel comparison: The NCC score is calculated by comparing corresponding pixels in the texture maps of both images, which ensures that even subtle differences in the texture and density are captured.
- Numerical assessment: The NCC score, ranging from −1 to 1, provides an objective measure of the similarity. A higher score indicates a more accurate reproduction of the hair texture, while a lower score highlights discrepancies in the structure and shading.
7.3. Subjective Evaluation
7.4. Drawing Cartoon Characters with PDLAS
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Blake, W.; Lawn, J. Portrait Drawing; Watson-Guptill Publications Inc.: New York, NY, USA, 1981; pp. 4–5.
- Bridgman, G.B. Bridgman’s Complete Guide to Drawing from Life; Sterling Publishing Co., Inc.: New York, NY, USA, 2001.
- Zhang, Y.; Kong, Z.; Funabiki, N.; Hsu, C.-C. A study of a drawing exactness assessment method using localized normalized cross-correlations in a portrait drawing learning assistant system. Computers 2024, 13, 215.
- CMU Perceptual Computing Lab. Available online: https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/index.html (accessed on 1 January 2025).
- OpenCV. Available online: https://opencv.org/ (accessed on 1 January 2025).
- Sarvaiya, J.; Patnaik, S.; Bombaywala, S. Image Registration by Template Matching Using Normalized Cross-Correlation. In Proceedings of the International Conference on Advances in Computing, Control, and Telecommunication Technologies, Bangalore, India, 28–29 December 2009.
- Huang, Z.; Heng, W.; Zhou, S. Learning to Paint with Model-based Deep Reinforcement Learning. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Yi, R.; Liu, Y.-J.; Lai, Y.-K.; Rosin, P.L. Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 905–918.
- Tong, Z.; Chen, X.; Ni, B.; Wang, X. Sketch Generation with Drawing Process Guided by Vector Flow and Grayscale. arXiv 2020, arXiv:2012.09004.
- Looi, L.; Green, R. Estimating Drawing Guidelines for Portrait Drawing. In Proceedings of the IVCNZ, Tauranga, New Zealand, 8–10 December 2021.
- Takagi, S.; Matsuda, N.; Soga, M.; Taki, H.; Shima, T.; Yoshimoto, F. An educational tool for basic techniques in beginner’s pencil drawing. In Proceedings of the PCGI, Tokyo, Japan, 9–11 July 2003.
- Huang, Z.; Peng, Y.; Hibino, T.; Zhao, C.; Xie, H. DualFace: Two-stage drawing guidance for freehand portrait sketching. Comput. Vis. Media 2022, 8, 63–77.
- Yi, R.; Liu, Y.-J.; Lai, Y.-K.; Rosin, P.L. APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Huang, Z.; Xie, H.; Fukusato, T.; Miyata, K. AniFaceDrawing: Anime Portrait Exploration during Your Sketching. In Proceedings of the ACM SIGGRAPH 2023 Conference (SIGGRAPH ’23); Association for Computing Machinery: New York, NY, USA, 6–10 August 2023.
- Singh, J.; Smith, C.; Echevarria, J.I.; Zheng, L. Intelli-Paint: Towards Developing Human-like Painting Agents. arXiv 2021, arXiv:2112.08930.
- Li, S.; Xie, H.; Yang, X.; Chang, C.-M.; Miyata, K. A Drawing Support System for Sketching Aging Anime Faces. In Proceedings of the International Conference on Cyberworlds (CW), Kanazawa, Japan, 27–29 September 2022.
- Kong, Z.; Zhang, Y.; Funabiki, N.; Huo, Y.; Kuribayashi, M.; Harahap, D.P. A Proposal of Auxiliary Line Generation Algorithm for Portrait Drawing Learning Assistant System Using OpenPose and OpenCV. In Proceedings of the GCCE, Nara, Japan, 10–13 October 2023.
- Henderson, S.; Yeow, J. iPad in Education: A Case Study of iPad Adoption and Use in a Primary School. In Proceedings of the 45th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2012.
- Fernandez, A.L. Advances in Teaching Inorganic Chemistry Volume 2: Laboratory Enrichment and Faculty Community; Department of Chemistry and Biochemistry, George Mason University: Fairfax, VA, USA, 2020; pp. 79–93.
- Cho, Y. Tutorials of Visual Graphic Communication Programs for Interior Design 2; Iowa State University Digital Press: Ames, IA, USA, 2022; pp. 117–140.
- Hiromasa, U. Hiromasa’s Drawing Course: How to Draw Faces; Kosaido Pub.: Tokyo, Japan, 2014.
- Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing with Matlab; Gatesmark Publishing: Knoxville, TN, USA, 2020; pp. 311–315.
- Pele, O.; Werman, M. Robust Real-Time Pattern Matching Using Bayesian Sequential Hypothesis Testing. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1427–1443.
- Li, G. Stereo Matching Using Normalized Cross-Correlation in LogRGB Space. In Proceedings of the International Conference on Computer Vision in Remote Sensing, Xiamen, China, 16–18 December 2012.
- Dawoud, N.N.; Samir, B.; Janier, J. N-mean Kernel Filter and Normalized Correlation for Face Localization. In Proceedings of the IEEE 7th International Colloquium on Signal Processing and Its Applications, Penang, Malaysia, 4–6 March 2011.
- Chandran, S.; Mogiloju, S. Algorithm for Face Matching Using Normalized Cross-Correlation. Int. J. Eng. Adv. Technol. 2013, 2, 2249–8958.
- Zhao, F.; Huang, Q.; Gao, W. Image Matching by Normalized Cross-Correlation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Toulouse, France, 14–19 May 2006.
- Zhang, B.; Yang, H.; Yin, Z. A Region-Based Normalized Cross-Correlation Algorithm for the Vision-Based Positioning of Elongated IC Chips. IEEE Trans. Semicond. Manuf. 2015, 28, 345–352.
- Hisham, M.B.; Yaakob, S.N.; Raof, R.A.A.; Nazren, A.B.A.; Wafi, N.M. Template Matching Using Sum of Squared Difference and Normalized Cross-Correlation. In Proceedings of the IEEE Student Conference on Research and Development, Kuala Lumpur, Malaysia, 13–14 December 2015.
- Ban, K.D.; Lee, J.; Hwang, D.H.; Chung, Y.K. Face Image Registration Methods Using Normalized Cross-Correlation. In Proceedings of the International Conference on Control, Automation, and Systems, Seoul, Republic of Korea, 14–17 October 2008.
- Gao, Q.; Chen, H.; Yu, R.; Yang, J.; Duan, X. A Robot Portraits Pencil Sketching Algorithm Based on Face Component and Texture Segmentation. In Proceedings of the IEEE International Conference on Industrial Technology, Melbourne, VIC, Australia, 13–15 February 2019.
- Zhang, Y.; Kong, Z.; Funabiki, N.; Hsu, C.-C. An Extension of Drawing Exactness Assessment Method to Hair Evaluation in Portrait Drawing Learning Assistant System. In Proceedings of the 2024 8th International Conference on Information Technology (InCIT), Kanazawa, Japan, 14–15 November 2024.
- Muhammad, U.R.; Svanera, M.; Leonardi, R.; Benini, S. Hair Detection, Segmentation, and Hairstyle Classification in the Wild. Image Vis. Comput. 2018, 71, 25–37.
- Person Photo. Available online: https://www.quchao.net/Generated-Photos.html (accessed on 1 January 2025).
Auxiliary Line Type | Relevant Keypoints |
---|---|
Center line | Calculated as the average of the x values of the No. 27 through No. 30 keypoints |
Eye position line | Derived from the y values of the No. 36, No. 39, No. 42, and No. 45 keypoints and the x values of the No. 36 and No. 45 keypoints |
Horizontal lines above and below the ears | Based on the average y value of the No. 37, No. 38, No. 43, and No. 44 keypoints, along with the x values of the No. 3 and No. 13 keypoints |
Eyes | Ranges from the No. 36 to No. 41 keypoints for one eye and the No. 42 to No. 47 keypoints for the other |
Mouth | Comprises keypoints No. 60 to No. 67 |
Lower face contour | Includes keypoints No. 0 to No. 16 |
Three circles for the nose | |
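As a concrete instance of the table above, the center line can be computed directly from the landmark array. This is a hedged sketch: it assumes a (68, 2) array of (x, y) keypoints in the 68-point numbering used by the table, and the function name `center_line_x` is ours.

```python
import numpy as np

def center_line_x(landmarks: np.ndarray) -> float:
    """Center line x position: the average x of keypoints No. 27-30
    (the nose bridge), as described in the table above.

    `landmarks` is assumed to be a (68, 2) array of (x, y) keypoints.
    """
    return float(landmarks[27:31, 0].mean())
```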
Component | Leftmost x Value | Rightmost x Value | Topmost y Value | Bottommost y Value |
---|---|---|---|---|
left eye | 36th | 39th | 17th | 28th |
right eye | 42nd | 45th | 26th | 28th |
left eyebrow | 17th | 21st | Highest of 28th, 29th, 30th | Lowest of 17th, 22nd |
right eyebrow | 22nd | 26th | Highest of 23rd, 24th, 25th | Lowest of 22nd, 26th |
mouth | 48th | 54th | 33rd | 5th |
nose | 39th | 42nd | 29th | 33rd |
Part | Leftmost x Value | Rightmost x Value | Topmost y Value | Bottommost y Value |
---|---|---|---|---|
hair | 0th | 16th | top of image | highest among the 17th–26th keypoints |
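The hair bounding box row above can be turned into code as follows. This is an illustrative sketch under stated assumptions: a (68, 2) array of (x, y) keypoints with the image origin at the top-left, so the "highest" eyebrow keypoint is the one with the smallest y value; the function name `hair_bounding_box` is ours.

```python
import numpy as np

def hair_bounding_box(landmarks: np.ndarray) -> tuple[int, int, int, int]:
    """(left, right, top, bottom) of the hair bounding box per the table above.

    Left/right come from face-contour keypoints No. 0 and No. 16; the top is
    the top of the image (y = 0); the bottom is the highest (smallest-y)
    eyebrow keypoint among No. 17-26.
    """
    left = int(landmarks[0, 0])
    right = int(landmarks[16, 0])
    top = 0
    bottom = int(landmarks[17:27, 1].min())
    return left, right, top, bottom
```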
User | User Gender | Photo Gender | Hair Color | Hair Style | Photo Race |
---|---|---|---|---|---|
No. 1 | male | male | gold | curly | White |
No. 2 | female | female | black | plait | Asian |
No. 3 | male | female | brown | straight | White |
No. 4 | female | female | black | plait | Asian |
No. 5 | female | female | gold | plait | White |
No. 6 | male | male | black | straight | White |
No. 7 | male | male | black | slightly curly | White |
No. 8 | male | male | black | straight | Asian |
No. 9 | female | female | black | straight | Asian |
No. 10 | male | female | brown | plait | Asian |
No. 11 | female | female | slightly gray | short hair | White |
No. 12 | male | female | gray | short hair | White |
User | Hair with Texture | Hair without Texture | Left Eye | Right Eye | Mouth | Nose | Left Eyebrow | Right Eyebrow |
---|---|---|---|---|---|---|---|---|
No. 1 | 0.60 | 0.54 | 0.34 | 0.38 | 0.53 | 0.27 | 0.03 | 0.27 |
No. 2 | 0.24 | 0.16 | 0.68 | 0.65 | 0.31 | 0.16 | 0.16 | 0.10 |
No. 3 | 0.42 | 0.34 | 0.57 | 0.54 | 0.52 | 0.25 | 0.32 | 0.41 |
No. 4 | 0.31 | 0.25 | 0.50 | 0.48 | 0.65 | 0.23 | 0.04 | 0.17 |
No. 5 | 0.45 | 0.35 | 0.34 | 0.49 | 0.27 | 0.19 | 0.22 | 0.16 |
No. 6 | 0.41 | 0.27 | 0.52 | 0.52 | 0.18 | 0.23 | 0.54 | 0.37 |
No. 7 | 0.10 | 0.08 | 0.54 | 0.53 | 0.44 | 0.34 | 0.55 | 0.45 |
No. 8 | 0.79 | 0.18 | 0.39 | 0.38 | 0.40 | 0.25 | 0.17 | 0.47 |
No. 9 | 0.44 | 0.20 | 0.43 | 0.51 | 0.29 | 0.26 | 0.33 | 0.42 |
No. 10 | 0.86 | 0.23 | 0.28 | 0.27 | 0.36 | 0.16 | 0.01 | 0.16 |
No. 11 | 0.76 | 0.02 | 0.43 | 0.50 | 0.46 | 0.11 | 0.31 | 0.29 |
No. 12 | 0.24 | 0.21 | 0.64 | 0.60 | 0.27 | 0.12 | 0.35 | 0.25 |
average | 0.47 | 0.24 | 0.45 | 0.46 | 0.39 | 0.21 | 0.24 | 0.27 |
User | Portrait | Hair | Left Eye | Right Eye | Mouth | Nose | Left Eyebrow | Right Eyebrow | Average |
---|---|---|---|---|---|---|---|---|---|
No. 1 | initial | 0.60 | 0.34 | 0.38 | 0.53 | 0.27 | 0.03 | 0.27 | 0.35 |
 | modified | 0.60 | 0.42 | 0.48 | 0.53 | 0.40 | 0.09 | 0.21 | 0.39 |
No. 3 | initial | 0.34 | 0.57 | 0.54 | 0.52 | 0.25 | 0.32 | 0.41 | 0.42 |
 | modified | 0.43 | 0.59 | 0.57 | 0.55 | 0.41 | 0.36 | 0.45 | 0.48 |
No. 7 | initial | 0.10 | 0.54 | 0.53 | 0.44 | 0.34 | 0.55 | 0.45 | 0.42 |
 | modified | 0.14 | 0.59 | 0.56 | 0.42 | 0.35 | 0.56 | 0.47 | 0.44 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Funabiki, N.; Febrianti, E.C.; Sudarsono, A.; Hsu, C. A Hair Drawing Evaluation Algorithm for Exactness Assessment Method in Portrait Drawing Learning Assistant System. Algorithms 2025, 18, 143. https://doi.org/10.3390/a18030143