Article

Numerical Approach to Facial Palsy Using a Novel Registration Method with 3D Facial Landmark

1 Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Korea
2 Department of Plastic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Korea
3 Graduate School of Smart Convergence, Kwangwoon University, Seoul 01897, Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6636; https://doi.org/10.3390/s22176636
Submission received: 20 June 2022 / Revised: 2 August 2022 / Accepted: 30 August 2022 / Published: 2 September 2022
(This article belongs to the Special Issue Deep Learning for Healthcare: Review, Opportunities and Challenges)

Abstract

Treatment of facial palsy is essential because neglecting the disorder can lead to serious sequelae and further damage. For an objective evaluation and a consistent rehabilitation training program for facial palsy patients, a clinician's assessment must be accompanied by quantitative evaluation. Recent research has evaluated facial palsy using 68 facial landmarks as features. However, facial palsy involves numerous features, whereas existing studies use relatively few landmarks; moreover, they do not confirm the degree of improvement in the patient. In addition, because a normal face is not perfectly symmetrical, it must be compared with previous images taken at a different time. Therefore, we introduce three methods for numerically measuring the degree of facial palsy after extracting 478 3D facial landmarks from 2D RGB images taken at different times. The proposed numerical approach performs registration to compare the same facial palsy patient at different times. Landmarks are scaled by scale matching before global registration. After scale matching, coarse registration is performed with global registration, and point-to-plane ICP is then performed using the transformation matrix obtained from global registration as the initial matrix. After registration, the distance symmetry, angle symmetry, and amount of landmark movement are calculated for the left and right sides of the face. The degree of facial palsy at a certain point in time can thus be expressed numerically and compared with the degree of palsy at other times. For the same facial expression, the degree of facial palsy at different times can be measured through distance and angle symmetry. For different facial expressions, the degree of facial palsy on the left and right sides can be compared through the amount of landmark movement. The proposed method was tested on a database of facial palsy patients imaged at different times. Clinicians participated in the experiments and confirmed that the proposed numerical approach can help assess the progression of facial palsy.

1. Introduction

Facial palsy [1,2,3] refers to paralysis of the face caused by a problem in the functionality of the facial nerve that moves the facial muscles. If the initial treatment [4,5,6] for facial palsy is not performed properly, serious sequelae may result. Facial palsy can also cause external discomfort, psychological anxiety, and depression. Therefore, an accurate diagnosis of facial palsy is necessary, and it is important to accurately determine the degree of its progression. Although clinicians can diagnose facial palsy visually, such assessment is subjective; hence, a quantitative value that can support the evaluation is required.
In recent years, the quantitative evaluation of facial palsy has been studied in several ways. Optical markers [7,8,9] have been used to measure the degree of facial palsy by attaching markers to the face; the markers are active or passive optical markers or gyro markers. Laser scanning [10,11,12] has been used to analyze facial palsy by 3D-scanning the face with an optical scanner. Although these methods determine facial features very accurately, they require additional equipment and a constrained environment, and patients find them uncomfortable. RGB-D information [13,14,15,16] has also been used to extract facial landmarks by capturing the face with a depth camera. Although the RGB-D approach is relatively accurate, a depth camera is required, and accurate measurement demands a certain distance between the camera and the user. Therefore, evaluation through RGB imaging [17,18], which places fewer restrictions on equipment and environment, is an active field of study. However, existing RGB methods use only 68 facial landmarks, which limits accuracy for muscles such as the cheeks; moreover, the evaluation is limited to scoring recovery or detecting whether facial palsy is present. Consequently, the degree of facial palsy at different times cannot be determined accurately.
This paper uses MediaPipe [19,20,21] to extract 478 3D facial landmarks from RGB images. After extracting the 3D facial landmarks, we propose a method of registering these landmarks with landmarks from other facial expressions or from images taken at different times. In addition, we propose three numerical approaches to measuring the progression of facial palsy after registration. With the proposed method, the symmetry of the face can be measured and the amount of movement of each facial landmark can be obtained according to changes in expression. Data from four patients with facial palsy were used for evaluation, and the consistency of the results with the clinicians' evaluations was confirmed. The remainder of this paper is organized as follows. Section 2 briefly reviews related studies. The proposed registration method and numerical approach are presented in Section 3. The experimental results are reported in Section 4. Finally, we discuss and conclude the paper in Section 5.

2. Related Work

2.1. 3D Facial Landmark Localization

Three-dimensional facial landmark localization determines the locations of 3D facial landmarks in a single image. Previous methods for detecting 3D facial landmarks in 2D images have typically been classified into two types. Representative two-stage methods [22,23,24,25] extract a 2D heatmap of facial landmarks from a 2D image and then lift it to 3D. Although this approach contributed to the study of extracting 3D landmarks from 2D images, it requires a large amount of computation; hence, one-stage methods have been studied. One-stage methods estimate 3D facial landmarks without going through 2D heatmaps. Refs. [26,27,28] explore one-stage methods using scanning data and operate faster than the two-stage methods, but they have the disadvantage of requiring scanning devices. Other one-stage methods [29,30,31,32] estimate 3D facial landmarks from a single image without requiring 2D heatmaps.
However, these methods estimate only 68 landmarks, and more landmark information is needed to accurately measure the progression of facial palsy. In a recent study, Kartynnik et al. [20] proposed a 3D facial landmark detector that estimates a 3D mesh representation of the human face for AR applications. It uses BlazeFace [33] to detect faces and extracts 3D landmarks for the detected faces. After estimating the 3D mesh vertices, it treats each vertex as a landmark; it operates in real time and extracts 468 facial landmarks. Grishchenko et al. [19] additionally refine the eye and mouth regions and detect 10 iris landmarks. In MediaPipe [21], these models are provided as a modularized library, which makes it easy to develop applications using them.

2.2. Assessment of Facial Palsy

Several studies have addressed the quantitative evaluation of facial palsy. Refs. [34,35,36,37,38] introduced 3D-surface-based measurements using a 3D scanner to measure facial symmetry, and [14,15,16,39] developed 3D motion capture systems using an RGB-D camera. However, these methods are still equipment-dependent. Ref. [40] developed a 3D VAS system to track dense 3D geometry but required manual annotation frame by frame. Among machine learning studies, the presence of facial palsy is detected with a support vector machine [41,42,43,44] or another classifier [18,45]. Ngo et al. [46] evaluated the degree of facial palsy by estimating 3D facial landmarks using multiple RGB cameras. This method utilizes 3D angle and distance information but treats each axis independently. In addition, multiple cameras are required, and the feature information is limited because only 68 facial landmarks are used. Liu et al. [47] graded the degree of facial palsy by training a random forest (RF) model with 2D facial landmarks as features. Such gradings can be subjective and are not suitable for measuring the progression of facial paralysis in an individual patient. Barrios Dell'Olio et al. [17] proposed a quantitative evaluation of facial palsy using action units [48] (AUs), determining the extent to which the left and right sides express individual AUs. Measuring the left and right sides of the face separately is worthwhile, but this approach cannot capture the use of each muscle and does not use 3D information. Hence, our study uses 3D information and captures the use of each muscle.

3. Proposed Methods

In this section, we propose a numerical approach to evaluating facial palsy. The overall framework of the proposed approach is shown in Figure 1. The input image is assumed to be a frontal face. We used MediaPipe [21] to extract 478 landmarks from RGB images, consisting of 468 facial landmarks and 10 iris landmarks. Landmarks from different times cannot be compared directly because their scale, rotation, and translation differ; therefore, we align the coordinate systems through registration. After registration, the proposed numerical approach evaluates the degree of facial palsy. The symmetry of the face is obtained using the distance symmetry and angle symmetry within one image. In addition, the movement of the landmarks with the same index extracted from two images is measured, and the movement of the landmarks corresponding to each side of the face is compared.
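To make the extraction step concrete, the following is a minimal sketch (not the authors' exact code) of obtaining 478 3D facial landmarks from a single RGB image with MediaPipe Face Mesh; the image file names are placeholders, and refine_landmarks=True is the option that adds the 10 iris landmarks.

```python
# Minimal sketch, assuming MediaPipe's Python solution API; file names are placeholders.
import cv2
import numpy as np
import mediapipe as mp

def extract_landmarks(image_path):
    image = cv2.imread(image_path)
    with mp.solutions.face_mesh.FaceMesh(
            static_image_mode=True,
            max_num_faces=1,
            refine_landmarks=True) as face_mesh:  # refinement adds the 10 iris landmarks (468 + 10 = 478)
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        raise RuntimeError("no face detected in %s" % image_path)
    lms = results.multi_face_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lms])  # shape (478, 3), normalized image coordinates

source = extract_landmarks("patient_time_T.jpg")    # landmarks at time T
target = extract_landmarks("patient_time_T_N.jpg")  # landmarks at time T+N
```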

3.1. Registration Method

We propose a method for registering 3D facial landmarks extracted from RGB images taken at different times and with different facial expressions. For registration, it must be considered that corresponding landmark points lie in the same area but their locations vary with the degree of facial palsy and the facial expression. Euler transformation [49] has been used to register selected landmarks; however, it is not applicable here because the facial muscles are mutually related and the muscles that move differ significantly between facial expressions, so landmarks would have to be chosen separately for each expression. Another representative registration method is the iterative closest point (ICP) algorithm [50,51]. However, this method may not be suitable for a numerical approach to facial palsy: ICP requires an initial transformation matrix, which could be derived from a few facial landmarks, but, as with the Euler transform, it is not reasonable to choose fixed landmarks for a particular facial expression. Therefore, we propose a registration method that does not fix landmarks, shown in Figure 2. We perform point-to-plane ICP after a global registration step that does not require an initial transformation matrix. Let the landmarks extracted from images taken at time T be the source landmarks, and the landmarks extracted from images taken at time T+N be the target landmarks. The source and target landmarks are as follows:
$$\mathrm{source} = \{(s_{0,x}, s_{0,y}, s_{0,z}), \ldots, (s_{i,x}, s_{i,y}, s_{i,z}), \ldots, (s_{n-1,x}, s_{n-1,y}, s_{n-1,z})\},$$
$$\mathrm{target} = \{(t_{0,x}, t_{0,y}, t_{0,z}), \ldots, (t_{i,x}, t_{i,y}, t_{i,z}), \ldots, (t_{n-1,x}, t_{n-1,y}, t_{n-1,z})\}$$
where $n$ is the number of facial landmarks (478 in this paper) and $i \in \{0, \ldots, n-1\}$.
The proposed matching algorithm is as follows:
  • Because global registration does not involve scale alignment, scale matching is performed before global registration. We can acquire the scale factor through the i-th landmark of the source and target landmarks. All target landmarks are then scaled using the scale factor to perform scale matching. The formula to calculate the scale factor is given as follows:
    $$\mathrm{scale}_i = \frac{\sqrt{(s_{i,x}-O_x)^2 + (s_{i,y}-O_y)^2 + (s_{i,z}-O_z)^2}}{\sqrt{(t_{i,x}-O_x)^2 + (t_{i,y}-O_y)^2 + (t_{i,z}-O_z)^2}} = \frac{\sqrt{s_{i,x}^2 + s_{i,y}^2 + s_{i,z}^2}}{\sqrt{t_{i,x}^2 + t_{i,y}^2 + t_{i,z}^2}}$$
    where $O = (O_x, O_y, O_z)$ is the origin of the landmark coordinate system; in this paper, the origin is (0, 0, 0).
  • After scale matching, global registration, which does not require an initial transformation matrix, is applied to all 3D facial landmarks to perform coarse registration. Subsequently, for fine registration, the transformation matrix resulting from global registration is set as the initial transformation matrix and point-to-plane ICP [50] is performed. Point-to-plane ICP finds a transformation matrix that minimizes the distance between the source landmarks and the planes defined by the normal vectors of the target landmarks.
  • Through $n$ iterations of steps 1 and 2, we obtain the set of transformation matrices $T = \{T_0, \ldots, T_{n-1}\}$, where $T_i$ is the transformation matrix obtained when the scale factor of the $i$-th landmark is used. Each $T_i$ is a 4 × 4 matrix representing a 3D transformation in homogeneous coordinates. We select the transformation matrix with the smallest inlier RMSE among $T$ as the final transformation matrix, i.e., $T_f = T_{\min}$. The inlier RMSE is defined as follows:
    $$\mathrm{RMSE}_{\mathrm{inlier}} = \sqrt{\frac{e_0^2 + \cdots + e_i^2 + \cdots + e_{n-1}^2}{n}}$$
    where $e_i$ is the L2 distance between the $i$-th target landmark and the $i$-th source landmark after transforming the source landmarks with $T_f$, and $n$ is the number of facial landmarks.
    After registration with the final transformation matrix, the numerical approach is applied. A sketch of this registration pipeline is given below.
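The following is a minimal sketch of the registration pipeline under stated assumptions: Open3D is used for both the global registration (RANSAC on FPFH features) and the point-to-plane ICP refinement, and the specific parameter values (voxel_size, RANSAC criteria) are illustrative choices, not the authors' reported settings.

```python
# Hedged sketch of scale matching + global registration + point-to-plane ICP with Open3D.
import numpy as np
import open3d as o3d

def register(source_pts, target_pts, voxel_size=0.02):
    """source_pts, target_pts: (478, 3) landmark arrays. Returns (inlier RMSE, 4x4 transform, scale)."""
    candidates = []
    for i in range(len(source_pts)):                       # step 3: try the scale factor of every landmark
        scale = np.linalg.norm(source_pts[i]) / np.linalg.norm(target_pts[i])  # Equation (2), origin (0,0,0)
        tgt = target_pts * scale                           # step 1: scale matching

        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
        dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt))
        for pcd in (src, dst):
            pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))

        # step 2a: coarse (global) registration without an initial transformation matrix
        fpfh = [o3d.pipelines.registration.compute_fpfh_feature(
                    pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))
                for pcd in (src, dst)]
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src, dst, fpfh[0], fpfh[1], True, 1.5 * voxel_size,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
            [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel_size)],
            o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

        # step 2b: fine registration with point-to-plane ICP, initialized by the coarse transformation
        fine = o3d.pipelines.registration.registration_icp(
            src, dst, voxel_size, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        candidates.append((fine.inlier_rmse, fine.transformation, scale))

    return min(candidates, key=lambda c: c[0])             # T_f: candidate with the smallest inlier RMSE
```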

3.2. Numerical Approach

Once the source and target landmarks have been registered, a numerical approach to evaluating the degree of facial palsy is possible. Having a value for each of the 478 3D facial landmarks is useful for computation; however, so many values are inefficient for clinicians. As depicted in Figure 3, we therefore grouped the 3D facial landmarks according to facial muscles, resulting in 17 groups; the name and location of each muscle are shown. No. 5 represents the Levator Labii Superioris and Levator Labii Superioris Alaeque Nasi; No. 7 does not represent a muscle but the nose tip, which is a useful area for numerical analysis because it bends under the influence of palsy of the surrounding muscles. Table 1 and Table 2 show how many of the 478 and 68 facial landmarks fall within each muscle group, respectively; using 478 landmarks clearly provides more information about the facial muscles. A numerical value for each muscle is then obtained by averaging the per-landmark values over that muscle region, as sketched below.
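As an illustration of the grouping step, a per-muscle value can be obtained by averaging a per-landmark metric over the landmark indices of that group. The indices below are placeholders; the paper's actual group assignments are only summarized by size in Table 1.

```python
# Illustrative only: the landmark indices are hypothetical, not the authors' group definitions.
import numpy as np

MUSCLE_GROUPS = {
    "6. Nasalis": [4, 45, 51, 134, 220, 275],       # hypothetical 6 indices (Table 1 lists 6 landmarks)
    "8. Orbicularis Oris": list(range(0, 44)),      # hypothetical 44 indices (Table 1 lists 44 landmarks)
}

def per_muscle_score(per_landmark_values, groups=MUSCLE_GROUPS):
    """Average a per-landmark metric (indexed by landmark) over each muscle region."""
    per_landmark_values = np.asarray(per_landmark_values)
    return {name: float(per_landmark_values[idx].mean()) for name, idx in groups.items()}
```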
Figure 3. Facial muscles [52] (left), grouped from facial landmarks (middle) and indexes and muscle names (right). Adapted with permission from [52]. 2022, reineg.
In this paper, we propose three numerical approaches: distance symmetry and angle symmetry, which analyze the same expression, and the amount of landmark movement, which is obtained from different facial expressions after registration. First, to measure the symmetry of the face, the midsagittal plane is required. The midsagittal plane [53,54,55,56] is defined by the perpendicular bisector of the line connecting the irises. We extend this facial midline to 3D, as shown in Figure 4: the vector connecting the two iris landmarks is taken as the normal vector, and the plane passing through the midpoint of the iris landmarks is defined as the facial midsagittal plane.
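A minimal sketch of the facial midsagittal plane under the paper's definition follows. The iris-center indices 468 and 473 are commonly reported for MediaPipe's refined mesh but should be treated as an assumption here.

```python
# Sketch: midsagittal plane = plane through the iris midpoint, with normal along the inter-iris vector.
import numpy as np

def midsagittal_plane(landmarks, iris_a=468, iris_b=473):  # assumed iris-center landmark indices
    p_a, p_b = landmarks[iris_a], landmarks[iris_b]
    normal = p_b - p_a
    normal = normal / np.linalg.norm(normal)   # unit normal of the plane
    center = (p_a + p_b) / 2.0                 # a point on the plane (midpoint of the irises)
    return normal, center
```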

3.2.1. Distance Symmetry

The method for obtaining the distance symmetry is shown in Figure 5. Distance symmetry is obtained by reflecting the left side of the face through the facial midsagittal plane; the smaller the distance symmetry, the closer the face is to perfect symmetry. Distance symmetry is defined as follows:
$$d_i = \sqrt{(p_{i,x}^{R} - p_{i,x}^{rL})^2 + (p_{i,y}^{R} - p_{i,y}^{rL})^2 + (p_{i,z}^{R} - p_{i,z}^{rL})^2}$$
where $p_i^{R}$ is a landmark on the right side of the face, and $p_i^{rL}$ is the corresponding left-side landmark reflected through the facial midsagittal plane.
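A sketch of Equation (4) follows, assuming a hypothetical list of (right, left) landmark index pairs; each left-side landmark is mirrored across the midsagittal plane before the Euclidean distance is taken.

```python
# Sketch of distance symmetry (Equation (4)); "pairs" is a hypothetical list of (right_idx, left_idx).
import numpy as np

def reflect(point, normal, center):
    """Mirror a 3D point across the plane defined by (normal, center)."""
    return point - 2.0 * np.dot(point - center, normal) * normal

def distance_symmetry(landmarks, pairs, normal, center):
    d = []
    for r_idx, l_idx in pairs:
        p_rl = reflect(landmarks[l_idx], normal, center)    # left landmark reflected to the right side
        d.append(np.linalg.norm(landmarks[r_idx] - p_rl))   # d_i; 0 means perfect symmetry
    return np.asarray(d)
```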

3.2.2. Angle Symmetry

As distance symmetry considers only distances, a numerical approach considering angles is additionally required. If the normal vector of the midsagittal plane and the vector of a left-right facial landmark pair are parallel, the face is left-right symmetric. Angle symmetry is depicted in Figure 6, where $\mathbf{n}$ is the normal vector of the midsagittal plane and $\mathbf{a}$ is the $i$-th facial landmark pair vector, i.e., the vector between the left and right landmarks of a pair, as shown in Equation (5):
$$\mathbf{a}:\ i\text{-th facial landmark pair vector} = \left((p_{i,x}^{R} - p_{i,x}^{L}),\ (p_{i,y}^{R} - p_{i,y}^{L}),\ (p_{i,z}^{R} - p_{i,z}^{L})\right)$$
where $p_i$ is the $i$-th landmark pair, $p^{R}$ is the landmark on the right side of the face, and $p^{L}$ is the landmark on the left side of the face. Angle symmetry is defined in Equation (6) using cosine similarity: a value of 0 means perfect asymmetry, and a value of 1 means perfect symmetry.
$$\text{angle symmetry} = \cos(\theta) = \frac{\mathbf{n} \cdot \mathbf{a}}{\|\mathbf{n}\|\,\|\mathbf{a}\|} = \frac{\sum_{k} n_k a_k}{\sqrt{\sum_{k} n_k^2}\,\sqrt{\sum_{k} a_k^2}}$$
where $k$ indexes the $x$, $y$, and $z$ components.
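A sketch of Equations (5) and (6) follows; "pairs" is hypothetical as above, and the absolute value is an assumption to make the score independent of the pair-vector orientation (the paper reports values close to 1).

```python
# Sketch of angle symmetry via cosine similarity (Equations (5) and (6)).
import numpy as np

def angle_symmetry(landmarks, pairs, normal):
    scores = []
    for r_idx, l_idx in pairs:
        a = landmarks[r_idx] - landmarks[l_idx]             # Equation (5): landmark pair vector
        cos_theta = np.dot(normal, a) / (np.linalg.norm(normal) * np.linalg.norm(a))
        scores.append(abs(cos_theta))                       # assumption: sign-independent; 1 = symmetric
    return np.asarray(scores)
```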

3.2.3. Landmark Movement Amounts

By capturing neutral and smile expressions, the amount of movement of each pair of landmarks on each side of the face can be obtained. The amount of landmark movement is illustrated in Figure 7; it makes it possible to confirm how uniformly the left and right landmarks at the same location move in response to a change in facial expression. In addition, the amount of movement at other times and the degree of improvement in symmetry can be compared. After registration of the neutral and smile expressions, the movements of the left and right landmarks corresponding to the $i$-th landmark are obtained using Equation (7), where $s_i$ is the $i$-th landmark in the smile expression and $n_i$ is the $i$-th landmark in the neutral expression.
$$i\text{-th left movement amount: } m_{L,i} = \sqrt{(s_{i,x}^{L} - n_{i,x}^{L})^2 + (s_{i,y}^{L} - n_{i,y}^{L})^2 + (s_{i,z}^{L} - n_{i,z}^{L})^2}$$
$$i\text{-th right movement amount: } m_{R,i} = \sqrt{(s_{i,x}^{R} - n_{i,x}^{R})^2 + (s_{i,y}^{R} - n_{i,y}^{R})^2 + (s_{i,z}^{R} - n_{i,z}^{R})^2}$$
The $i$-th landmark movement amount is then calculated by Equation (8); a landmark movement value of 0 means perfect symmetry.
$$i\text{-th landmark movement} = \left| m_{L,i} - m_{R,i} \right|$$
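A sketch of Equations (7) and (8) follows, again assuming registered neutral/smile landmark arrays and a hypothetical list of (right, left) index pairs.

```python
# Sketch of landmark movement amounts (Equations (7) and (8)).
import numpy as np

def landmark_movement(neutral, smile, pairs):
    diffs = []
    for r_idx, l_idx in pairs:
        m_left = np.linalg.norm(smile[l_idx] - neutral[l_idx])    # Equation (7), left side
        m_right = np.linalg.norm(smile[r_idx] - neutral[r_idx])   # Equation (7), right side
        diffs.append(abs(m_left - m_right))                       # Equation (8); 0 = perfect symmetry
    return np.asarray(diffs)
```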

4. Experiments

In this section, we present the experiments conducted in this study. First, we introduce the data used and compare the proposed registration method with other registration methods. Next, we describe the results of applying the three numerical approaches to facial palsy. Five clinicians participated in the experiments and assessed whether the proposed approach could be helpful in the clinical evaluation of patients with facial palsy.

4.1. Experimental Data

All experiments were conducted with the consent of the patients. The dates of the images taken of each patient in the first to third years are shown in Table 3, and the patient images used in the experiment are shown in Figure 8. All RGB images were taken with a regular smartphone and webcam, and a frontal face was assumed. Four patients with facial palsy were involved in this study; images of neutral and smile expressions of each patient taken at three different times were used.

4.2. Experimental Results

After extracting the 478 3D facial landmarks with MediaPipe for each patient image, we used the Open3D library [57], an open-source Python-compatible library that supports the development of software handling 3D data.

4.2.1. Registration Results

All registrations in the experiment were applied after the proposed scale matching. After registration, the inlier RMSE was compared. As we assumed a frontal face, the initial transformation matrix for point-to-point ICP [58] and point-to-plane ICP [50] was configured using Equation (9), where $s$ is a source landmark, $t$ is a target landmark, the translation is set to the difference between the source and target centroids, and $N$ is the number of facial landmarks.
$$T_{\mathrm{initial}} = [\,R = I_3 \mid t\,] =
\begin{bmatrix}
1 & 0 & 0 & \frac{\sum (s_x - t_x)}{N} \\
0 & 1 & 0 & \frac{\sum (s_y - t_y)}{N} \\
0 & 0 & 1 & \frac{\sum (s_z - t_z)}{N} \\
0 & 0 & 0 & 1
\end{bmatrix}$$
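For reference, Equation (9) amounts to an identity rotation plus a centroid-difference translation; a minimal sketch is shown below (illustrative only, used to initialize the baseline ICP methods).

```python
# Sketch of Equation (9): initial transform for the baseline point-to-point / point-to-plane ICP.
import numpy as np

def initial_transform(source_pts, target_pts):
    T = np.eye(4)                                                  # R = I_3
    T[:3, 3] = source_pts.mean(axis=0) - target_pts.mean(axis=0)   # centroid difference, sum(s - t)/N
    return T
```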
  • Experiments by year
    Table 4 shows the inlier RMSE when several registration methods were applied, using 3D facial landmarks extracted from smile-expression images taken at different times. We compared the inlier RMSE for the registration of the Year 1-Year 2 and Year 1-Year 3 images of each patient. Our proposed method had the smallest inlier RMSE in all cases except Year 1-Year 3 of Patient 2. Global registration, which uses no initial transformation matrix, produced an RMSE comparable to that of the other registrations. In the Year 1-Year 3 case of Patient 2, point-to-plane ICP achieved the smallest RMSE, but our method, which requires no initial transformation matrix, produced a nearly identical result (1.33 × 10⁻² versus 1.31 × 10⁻²), so overall it outperforms the other methods. Figure 9 shows an example of the visualized result for Year 1-Year 3 of Patient 4.
  • Experiments by expression
    Table 5 shows the RMSE when registering the neutral and smile expressions of each patient within the same year. As in the yearly experiments, global registration has a larger RMSE than the other registrations. Our method achieved better registration even though point-to-point ICP and point-to-plane ICP were provided with an initial transformation matrix. A visualization of the results is shown in Figure 10 for the example of Year 1 and Year 3 of Patient 4.

4.2.2. Distance and Angle Symmetry Results

Here, we describe the experimental results obtained when measuring symmetry in a static expression. The smile expression was examined, as it is a representative expression used in the diagnosis of facial palsy. Facial muscle areas 4, 6, 7, and 8 (Orbicularis Oculi, Nasalis, Nose Tip, and Orbicularis Oris) were used in the experiment.
  • Distance Symmetry
    We obtained the distance symmetry by reflecting the left side of the face through the facial midsagittal plane derived from the line between the irises. Table 6 shows the distance symmetry results for the four patients and the four facial muscle areas. For intuitive observation, the results were multiplied by 100 and rounded at the sixth decimal place. Distance symmetry is non-negative, and 0 means perfect symmetry. We confirmed the agreement between the results and the clinicians' diagnoses, and the distance symmetry of the patients moved closer to perfect symmetry as time passed. Regarding the nose tip, Patient 1 had worse distance symmetry in Year 2 than in Year 1, which was consistent with the clinician's evaluation.
  • Angle Symmetry
    Angle symmetry is determined by the cosine similarity between the normal of the facial midsagittal plane and the landmark pair vector. Table 7 shows the angle symmetry results. As with distance symmetry, the results were multiplied by 100 and rounded at the sixth decimal place for intuitive observation. For angle symmetry, 100 means perfect symmetry; the smaller the value, the more asymmetrical the face. The cosine similarity is 0 when the two vectors are perpendicular; because the normal vector of the facial midsagittal plane and the landmark pair vectors are generally close to parallel, the angle symmetry values are 90 or more. As a smile expression was used in the experiment, the change in the Orbicularis Oris was the greatest, and we confirmed that the Orbicularis Oris moved closer to angle symmetry for the four patients.

4.2.3. Landmark Movements Results

  • Landmark Movements
    This experiment measured symmetry in dynamic expressions by examining the neutral and smile expressions together. The closer the result is to 0, the more symmetric the movement; the larger the value, the more asymmetric it is. Through the experiments, we compared the balance of movement between the left and right facial landmarks of each patient. Table 8 shows the difference in the amount of landmark movement between the left and right sides of the face for the neutral and smile expressions. The Orbicularis Oris value of Patient 4 was 9.501 in Year 1 but decreased to 5.285 in Year 2 and 1.391 in Year 3, i.e., a smaller landmark movement difference, showing that the degree of facial palsy had improved.

5. Conclusions and Discussion

In this study, we proposed three numerical approaches, applied after registration, for diagnosing the progression of facial palsy. As RGB images contain only 2D information, we obtained more information by extending them to 3D. To compare images taken at different times and with different facial expressions, registration was performed through scale matching, global registration, and point-to-plane ICP. After registration, distance symmetry and angle symmetry numerically evaluate symmetry in a static expression, while symmetry in a dynamic expression is evaluated through the amount of landmark movement. However, AI-based 3D facial landmark detectors trained on the facial expressions of people without facial palsy may produce errors when applied to patients with facial palsy. In addition, owing to privacy concerns and data limitations, we worked with a limited set of expressions. Our contributions are as follows. First, the degree of improvement in facial palsy can be obtained numerically without sensors or depth cameras. Second, we proposed a numerical approach to measuring the degree of facial palsy over time. Third, the degree of facial palsy on the left and right sides of the face can be quantified from the amount of landmark movement over time. Fourth, the locations of muscles or landmarks can be compared accurately across different times. We expect the proposed numerical approach to help clinicians evaluate facial palsy. Future work will build and experiment with datasets containing various expressions and viewing angles for facial palsy rehabilitation.

Author Contributions

Conceptualization, T.S.O., J.P.H., S.K. and J.Y.; Data curation, H.J., J.C., C.P., T.S.O. and J.P.H.; Investigation, J.K.; Methodology, J.K.; Resources, J.K.; Supervision, S.K. and J.Y.; Validation, T.S.O. and J.P.H. All authors have read and agreed to the published version of the manuscript.

Funding

The present research was conducted with a research grant from Kwangwoon University in 2022. This research was supported by Korea Technology and Information Promotion Agency for SMEs in 2022 (S2949268).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Asan Medical Center (protocol code 2022-1158 and 24 August 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cawthorne, T.; Haynes, D. Facial palsy. Br. Med J. 1956, 2, 1197. [Google Scholar] [CrossRef]
  2. Roob, G.; Fazekas, F.; Hartung, H.P. Peripheral facial palsy: Etiology, diagnosis and treatment. Eur. Neurol. 1999, 41, 3–9. [Google Scholar] [CrossRef] [PubMed]
  3. Hohman, M.H.; Hadlock, T.A. Etiology, diagnosis, and management of facial palsy: 2000 patients at a facial nerve center. Laryngoscope 2014, 124, E283–E293. [Google Scholar] [CrossRef]
  4. Pereira, L.; Obara, K.; Dias, J.; Menacho, M.; Lavado, E.; Cardoso, J. Facial exercise therapy for facial palsy: Systematic review and meta-analysis. Clin. Rehabil. 2011, 25, 649–658. [Google Scholar] [CrossRef] [PubMed]
  5. Garro, A.; Nigrovic, L.E. Managing peripheral facial palsy. Ann. Emerg. Med. 2018, 71, 618–624. [Google Scholar] [CrossRef]
  6. Robinson, M.W.; Baiungo, J. Facial rehabilitation: Evaluation and treatment strategies for the patient with facial palsy. Otolaryngol. Clin. North Am. 2018, 51, 1151–1167. [Google Scholar] [CrossRef] [PubMed]
  7. Hontanilla, B.; Aubá, C. Automatic three-dimensional quantitative analysis for evaluation of facial movement. J. Plast. Reconstr. Aesthetic Surg. 2008, 61, 18–30. [Google Scholar] [CrossRef]
  8. Demeco, A.; Marotta, N.; Moggio, L.; Pino, I.; Marinaro, C.; Barletta, M.; Petraroli, A.; Palumbo, A.; Ammendolia, A. Quantitative analysis of movements in facial nerve palsy with surface electromyography and kinematic analysis. J. Electromyogr. Kinesiol. 2021, 56, 102485. [Google Scholar] [CrossRef]
  9. Baude, M.; Hutin, E.; Gracies, J.M. A bidimensional system of facial movement analysis conception and reliability in adults. BioMed Res. Int. 2015, 2015, 812961. [Google Scholar] [CrossRef]
  10. Yitzhak, H.L.; Wolf, M.; Ozana, N.; Kelman, Y.T.; Zalevsky, Z. Optical analysis of facial nerve degeneration in Bell’s palsy. OSA Contin. 2021, 4, 1155–1161. [Google Scholar] [CrossRef]
  11. Petrides, G.; Clark, J.R.; Low, H.; Lovell, N.; Eviston, T.J. Three-dimensional scanners for soft-tissue facial assessment in clinical practice. J. Plast. Reconstr. Aesthetic Surg. 2021, 74, 605–614. [Google Scholar] [CrossRef] [PubMed]
  12. Azuma, T.; Fuchigami, T.; Nakamura, K.; Kondo, E.; Sato, G.; Kitamura, Y.; Takeda, N. New method to evaluate sequelae of static facial asymmetry in patients with facial palsy using three-dimensional scanning analysis. Auris Nasus Larynx 2022, 49, 755–761. [Google Scholar] [CrossRef] [PubMed]
  13. Cheng, X.; Da, F. 3D Facial landmark localization based on two-step keypoint detection. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018; pp. 406–412. [Google Scholar]
  14. Gerós, A.; Horta, R.; Aguiar, P. Facegram–Objective quantitative analysis in facial reconstructive surgery. J. Biomed. Inform. 2016, 61, 1–9. [Google Scholar] [CrossRef] [PubMed]
  15. Gaber, A.; Taher, M.F.; Wahed, M.A. Quantifying facial paralysis using the kinect v2. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2497–2501. [Google Scholar]
  16. Vinokurov, N.; Arkadir, D.; Linetsky, E.; Bergman, H.; Weinshall, D. Quantifying hypomimia in parkinson patients using a depth camera. In International Symposium on Pervasive Computing Paradigms for Mental Health; Springer: Berlin/Heidelberg, Germany, 2015; pp. 63–71. [Google Scholar]
  17. Barrios Dell’Olio, G.; Sra, M. FaraPy: An Augmented Reality Feedback System for Facial Paralysis using Action Unit Intensity Estimation. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology, Virtual, 10–14 October 2021; pp. 1027–1038. [Google Scholar]
  18. Parra-Dominguez, G.S.; Sanchez-Yanez, R.E.; Garcia-Capulin, C.H. Facial paralysis detection on images using key point analysis. Appl. Sci. 2021, 11, 2435. [Google Scholar] [CrossRef]
  19. Grishchenko, I.; Ablavatski, A.; Kartynnik, Y.; Raveendran, K.; Grundmann, M. Attention Mesh: High-fidelity Face Mesh Prediction in Real-time. arXiv 2020, arXiv:2006.10962. [Google Scholar]
  20. Kartynnik, Y.; Ablavatski, A.; Grishchenko, I.; Grundmann, M. Real-time facial surface geometry from monocular video on mobile GPUs. arXiv 2019, arXiv:1907.06724. [Google Scholar]
  21. Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.G.; Lee, J.; et al. Mediapipe: A framework for building perception pipelines. arXiv 2019, arXiv:1906.08172. [Google Scholar]
  22. Zhao, R.; Wang, Y.; Benitez-Quiroz, C.F.; Liu, Y.; Martinez, A.M. Fast and precise face alignment and 3D shape reconstruction from a single 2D image. In Proceedings of the European Conference on Computer Vision-ECCV 2016 Workshops, Amsterdam, The Netherlands, 8–10 and 15–16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 590–603. [Google Scholar]
  23. Bulat, A.; Tzimiropoulos, G. Two-stage convolutional part heatmap regression for the 1st 3d face alignment in the wild (3dfaw) challenge. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 616–624. [Google Scholar]
  24. Gou, C.; Wu, Y.; Wang, F.Y.; Ji, Q. Shape augmented regression for 3D face alignment. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 604–615. [Google Scholar]
  25. Bulat, A.; Tzimiropoulos, G. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–27 October 2017; pp. 1021–1030. [Google Scholar]
  26. Jackson, A.S.; Bulat, A.; Argyriou, V.; Tzimiropoulos, G. Large pose 3D face reconstruction from a single image via direct volumetric CNN regression. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–27 October 2017; pp. 1031–1039. [Google Scholar]
  27. Taberna, G.A.; Guarnieri, R.; Mantini, D. SPOT3D: Spatial positioning toolbox for head markers using 3D scans. Sci. Rep. 2019, 9, 12813. [Google Scholar]
  28. Feng, Z.H.; Huber, P.; Kittler, J.; Hancock, P.; Wu, X.J.; Zhao, Q.; Koppen, P.; Rätsch, M. Evaluation of dense 3D reconstruction from 2D face images in the wild. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 780–786. [Google Scholar]
  29. Tulyakov, S.; Jeni, L.A.; Cohn, J.F.; Sebe, N. consistent 3D face alignment. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2250–2264. [Google Scholar] [CrossRef]
  30. Zhang, H.; Li, Q.; Sun, Z. Joint voxel and coordinate regression for accurate 3d facial landmark localization. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2202–2208. [Google Scholar]
  31. Bulat, A.; Tzimiropoulos, G. Hierarchical binary CNNs for landmark localization with limited resources. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 343–356. [Google Scholar] [CrossRef]
  32. Colaco, S.J.; Han, D.S. Deep Learning-Based Facial Landmarks Localization Using Compound Scaling. IEEE Access 2022, 10, 7653–7663. [Google Scholar] [CrossRef]
  33. Bazarevsky, V.; Kartynnik, Y.; Vakunov, A.; Raveendran, K.; Grundmann, M. Blazeface: Sub-millisecond neural face detection on mobile gpus. arXiv 2019, arXiv:1907.05047. [Google Scholar]
  34. Özsoy, U.; Sekerci, R.; Hizay, A.; Yildirim, Y.; Uysal, H. Assessment of reproducibility and reliability of facial expressions using 3D handheld scanner. J. Cranio-Maxillofac. Surg. 2019, 47, 895–901. [Google Scholar] [CrossRef] [PubMed]
  35. Sforza, C.; Ulaj, E.; Gibelli, D.; Allevi, F.; Pucciarelli, V.; Tarabbia, F.; Ciprandi, D.; Orabona, G.D.; Dolci, C.; Biglioli, F. Three-dimensional superimposition for patients with facial palsy: An innovative method for assessing the success of facial reanimation procedures. Br. J. Oral Maxillofac. Surg. 2018, 56, 3–7. [Google Scholar] [CrossRef] [PubMed]
  36. Gibelli, D.; Codari, M.; Pucciarelli, V.; Dolci, C.; Sforza, C. A quantitative assessment of lip movements in different facial expressions through 3-dimensional on 3-dimensional superimposition: A cross-sectional study. J. Oral Maxillofac. Surg. 2018, 76, 1532–1538. [Google Scholar] [CrossRef]
  37. Patel, A.; Islam, S.M.S.; Murray, K.; Goonewardene, M.S. Facial asymmetry assessment in adults using three-dimensional surface imaging. Prog. Orthod. 2015, 16, 1–9. [Google Scholar] [CrossRef]
  38. Taylor, H.O.; Morrison, C.S.; Linden, O.; Phillips, B.; Chang, J.; Byrne, M.E.; Sullivan, S.R.; Forrest, C.R. Quantitative facial asymmetry: Using three-dimensional photogrammetry to measure baseline facial surface symmetry. J. Craniofacial Surg. 2014, 25, 124–128. [Google Scholar] [CrossRef]
  39. Katsumi, S.; Esaki, S.; Hattori, K.; Yamano, K.; Umezaki, T.; Murakami, S. Quantitative analysis of facial palsy using a three-dimensional facial motion measurement system. Auris Nasus Larynx 2015, 42, 275–283. [Google Scholar] [CrossRef]
  40. Mehta, R.P.; Zhang, S.; Hadlock, T.A. Novel 3-D video for quantification of facial movement. Otolaryngol. Neck Surg. 2008, 138, 468–472. [Google Scholar] [CrossRef]
  41. Wang, T.; Zhang, S.; Dong, J.; Liu, L.; Yu, H. Automatic evaluation of the degree of facial nerve paralysis. Multimed. Tools Appl. 2016, 75, 11893–11908. [Google Scholar] [CrossRef]
  42. Guo, Z.; Dan, G.; Xiang, J.; Wang, J.; Yang, W.; Ding, H.; Deussen, O.; Zhou, Y. An unobtrusive computerized assessment framework for unilateral peripheral facial paralysis. IEEE J. Biomed. Health Inform. 2017, 22, 835–841. [Google Scholar] [CrossRef] [PubMed]
  43. Kim, H.S.; Kim, S.Y.; Kim, Y.H.; Park, K.S. A smartphone-based automatic diagnosis system for facial nerve palsy. Sensors 2015, 15, 26756–26768. [Google Scholar] [CrossRef]
  44. Azoulay, O.; Ater, Y.; Gersi, L.; Glassner, Y.; Bryt, O.; Halperin, D. Mobile Application for Diagnosis of Facial Palsy. 2014. Available online: https://www.semanticscholar.org/paper/Mobile-Application-for-Diagnosis-of-Facial-Palsy-Azoulay-Ater/890826f9a7e95232380a022f144f9a1d3b2c35ed (accessed on 20 June 2022).
  45. Barbosa, J.; Lee, K.; Lee, S.; Lodhi, B.; Cho, J.G.; Seo, W.K.; Kang, J. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier. BMC Med Imaging 2016, 16, 23. [Google Scholar] [CrossRef]
  46. Ngo, T.H.; Chen, Y.W.; Seo, M.; Matsushiro, N.; Xiong, W. Quantitative analysis of facial paralysis based on three-dimensional features. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1319–1323. [Google Scholar]
  47. Liu, Y.; Xu, Z.; Ding, L.; Jia, J.; Wu, X. Automatic Assessment of Facial Paralysis Based on Facial Landmarks. In Proceedings of the 2021 IEEE 2nd International Conference on Pattern Recognition and Machine Learning (PRML), Chengdu, China, 16–18 July 2021; pp. 162–167. [Google Scholar]
  48. Ekman, P.; Friesen, W.V. Facial action coding system. Environ. Psychol. Nonverbal Behav. 1978. [Google Scholar] [CrossRef]
  49. Agnew, R.P. Euler transformations. Am. J. Math. 1944, 66, 313–338. [Google Scholar] [CrossRef]
  50. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  51. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Proceedings of the ROBOTICS ’91, Boston, MA, USA, 14–15 November 1991; SPIE: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
  52. Reineg. Muscles of the Face, Colorful Anatomy info Poster. Available online: https://stock.adobe.com/us/search?k=facial+muscle&search_type=recentsearch&asset_id=309366859 (accessed on 20 June 2022).
  53. Nakao, N.; Ohyama, W.; Wakabayashi, T.; Kimura, F. Automatic Detection of Facial Midline as a Guide for Facial Feature Extraction. In Proceedings of the 7th International Workshop on Pattern Recognition in Information Systems, Funchal, Portugal, 12–13 June 2007; pp. 119–128. [Google Scholar]
  54. Galvánek, M.; Furmanová, K.; Chalás, I.; Sochor, J. Automated facial landmark detection, comparison and visualization. In Proceedings of the 31st Spring Conference on Computer Graphics, Smolenice, Slovakia, 22–24 April 2015; pp. 7–14. [Google Scholar]
  55. Lee, Y.; Kumar, Y.S.; Lee, D.; Kim, J.; Kim, J.; Yoo, J.; Kwon, S. An extended method for saccadic eye movement measurements using a head-mounted display. Healthcare 2020, 8, 104. [Google Scholar] [CrossRef]
  56. Ohyama, W.; Nakao, N.; Wakabayashi, T.; Kimura, F. Automatic detection of facial midline and its contributions to facial feature extraction. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2007, 6, 55–65. [Google Scholar] [CrossRef]
  57. Zhou, Q.Y.; Park, J.; Koltun, V. Open3D: A modern library for 3D data processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
  58. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 698–700. [Google Scholar] [CrossRef]
Figure 1. Overall framework of our proposed method. Iterative closest point (ICP) is an algorithm that finds correspondences using the closest points of the data and moves and rotates the data to register it.
Figure 2. Registration flow of our proposed registration method. This method proceeds with registration after scale matching.
Figure 4. Facial midsagittal plane. The vertical bisector of the line connecting the irises is the midsagittal line, which is expanded to a 3D image to define the midsagittal plane.
Figure 5. Distance symmetry. The distance from the inverted landmark (pink) after inverting the landmark on the left side of the face through the midsagittal plane.
Figure 6. Angle symmetry through the cosine similarity of the normal vector of the midsagittal plane and the landmark pair vector.
Figure 7. Amount of landmark movement determined through the distance of the landmarks corresponding to the left and right side of the face in the neutral and smile expression.
Figure 8. Yearly patient images. (a) Year 1. (b) Year 2. (c) Year 3. The green points present the extracted 3D facial landmarks, and the red horizontal line represents the line connecting the irises. The red vertical line is the midsagittal plane, which is the vertical bisector of the red horizontal line. The images, from left to right, represent Patient 1 to Patient 4.
Figure 9. Registration comparison image in static expression.
Figure 10. Registration comparison image in dynamic expression.
Table 1. The 478 facial landmarks in the defined facial muscle groups.
Muscle Index and Name | # of Landmarks | Muscle Index and Name | # of Landmarks
1. Frontalis | 6 | 10. Depressor Anguli Oris | 8
2. Corrugator | 1 | 11. Zygomaticus Minor | 6
3. Procerus | 1 | 12. Zygomaticus Major | 3
4. Orbicularis Oculi | 59 | 13. Buccinator | 4
5. Levator Labii Superioris (+ Alaeque Nasi) | 10 | 14. Risorius | 3
6. Nasalis | 6 | 15. Platysma | 5
7. Nose Tip | 32 | 16. Masseter | 9
8. Orbicularis Oris | 44 | 17. Temporalis | 16
9. Mentalis | 6 | |
Table 2. The 68 facial landmarks in the defined facial muscle groups.
Muscle Index and Name | # of Landmarks | Muscle Index and Name | # of Landmarks
1. Frontalis | 0 | 10. Depressor Anguli Oris | 2
2. Corrugator | 0 | 11. Zygomaticus Minor | 0
3. Procerus | 2 | 12. Zygomaticus Major | 0
4. Orbicularis Oculi | 6 | 13. Buccinator | 0
5. Levator Labii Superioris (+ Alaeque Nasi) | 0 | 14. Risorius | 0
6. Nasalis | 2 | 15. Platysma | 2
7. Nose Tip | 3 | 16. Masseter | 4
8. Orbicularis Oris | 12 | 17. Temporalis | 1
9. Mentalis | 0 | |
Table 3. Dates of images taken for each patient.
Patient | Year 1 | Year 2 | Year 3
Patient 1 | 9 September 2017 | 29 October 2018 | 28 January 2019
Patient 2 | 2 May 2016 | 27 June 2017 | 7 May 2019
Patient 3 | 17 March 2016 | 8 November 2017 | 17 December 2018
Patient 4 | 13 February 2018 | 21 August 2018 | 29 July 2020
Table 4. Registration comparison in static expression (inlier RMSE; all values ×10⁻²).
Registration Method | Patient 1 Year 1-Year 2 | Patient 1 Year 1-Year 3 | Patient 2 Year 1-Year 2 | Patient 2 Year 1-Year 3 | Patient 3 Year 1-Year 2 | Patient 3 Year 1-Year 3 | Patient 4 Year 1-Year 2 | Patient 4 Year 1-Year 3
Before Registration | 7.99 | 5.32 | 5.81 | 5.62 | 5.63 | 6.45 | 9.15 | 10.73
Point-to-point ICP | 1.41 | 1.93 | 2.15 | 2.12 | 1.45 | 1.04 | 1.99 | 2.47
Point-to-plane ICP | 1.65 | 2.23 | 1.57 | 1.31 | 12.58 | 1.00 | 1.87 | 1.67
Global Registration | 1.91 | 2.50 | 2.11 | 1.88 | 1.86 | 1.38 | 2.34 | 2.10
Ours | 1.27 | 1.61 | 1.53 | 1.33 | 1.34 | 0.91 | 1.73 | 1.64
Table 5. Registration comparison for dynamic expression (neutral-smile) (inlier RMSE; all values ×10⁻²).
Registration Method | Patient 1 Year 1 | Patient 1 Year 2 | Patient 1 Year 3 | Patient 2 Year 1 | Patient 2 Year 2 | Patient 2 Year 3 | Patient 3 Year 1 | Patient 3 Year 2 | Patient 3 Year 3 | Patient 4 Year 1 | Patient 4 Year 2 | Patient 4 Year 3
Before Registration | 6.49 | 3.52 | 3.00 | 2.48 | 3.22 | 2.80 | 2.03 | 2.88 | 1.71 | 2.38 | 2.20 | 3.93
Point-to-point ICP | 1.47 | 1.42 | 1.63 | 2.01 | 1.14 | 1.12 | 1.07 | 0.75 | 0.78 | 1.17 | 1.39 | 2.04
Point-to-plane ICP | 1.47 | 1.42 | 1.62 | 1.14 | 1.14 | 1.11 | 7.66 | 0.64 | 0.78 | 1.17 | 8.70 | 1.72
Global Registration | 2.12 | 1.62 | 1.80 | 1.73 | 1.54 | 1.55 | 1.79 | 1.18 | 1.43 | 1.57 | 1.68 | 2.16
Ours | 1.42 | 1.27 | 1.42 | 1.13 | 1.05 | 1.09 | 1.04 | 0.64 | 0.74 | 0.96 | 1.25 | 1.70
Table 6. Distance symmetry results for each facial muscle of patients by year in smile expression.
Facial Muscle Index and Name | Patient | Year 1 | Year 2 | Year 3
4. Orbicularis Oculi | Patient 1 | 7.5921 | 0.73673 | 0.51192
 | Patient 2 | 1.48942 | 1.18371 | 0.38719
 | Patient 3 | 1.29508 | 2.04998 | 1.06607
 | Patient 4 | 0.92826 | 0.67437 | 0.8238
6. Nasalis | Patient 1 | 2.22854 | 2.86766 | 1.13281
 | Patient 2 | 0.81672 | 0.2776 | 0.22651
 | Patient 3 | 1.19751 | 0.87822 | 0.1149
 | Patient 4 | 1.90695 | 0.17484 | 0.21075
7. Nose Tip | Patient 1 | 3.44382 | 4.10762 | 0.907845
 | Patient 2 | 1.13858 | 0.68551 | 0.36036
 | Patient 3 | 2.08707 | 1.92098 | 0.18086
 | Patient 4 | 2.74056 | 0.48394 | 0.64874
8. Orbicularis Oris | Patient 1 | 10.23885 | 9.4951 | 0.47006
 | Patient 2 | 6.07195 | 1.85901 | 1.00492
 | Patient 3 | 8.36119 | 2.69559 | 0.40773
 | Patient 4 | 7.86863 | 1.36783 | 1.72486
Table 7. Angle symmetry results for each facial muscle of patients by year in smile expression.
Facial Muscle Index and Name | Patient | Year 1 | Year 2 | Year 3
4. Orbicularis Oculi | Patient 1 | 99.9796 | 99.98498 | 99.98784
 | Patient 2 | 99.92574 | 99.98562 | 99.99131
 | Patient 3 | 99.93721 | 99.86339 | 99.98162
 | Patient 4 | 99.86334 | 99.96863 | 99.96985
6. Nasalis | Patient 1 | 99.9505 | 99.97361 | 99.99662
 | Patient 2 | 99.98091 | 99.99106 | 99.99449
 | Patient 3 | 98.82472 | 99.9508 | 99.996
 | Patient 4 | 99.9736 | 99.99543 | 99.98499
7. Nose Tip | Patient 1 | 98.89051 | 98.94509 | 99.99477
 | Patient 2 | 98.94933 | 99.9819 | 99.9842
 | Patient 3 | 99.80846 | 99.93403 | 99.99433
 | Patient 4 | 99.94095 | 99.97020 | 99.98882
8. Orbicularis Oris | Patient 1 | 98.75177 | 99.79075 | 99.98268
 | Patient 2 | 99.6784 | 99.9741 | 99.97713
 | Patient 3 | 99.44894 | 99.83701 | 99.98333
 | Patient 4 | 99.74328 | 99.86987 | 99.96659
Table 8. Results for the amount of landmark movement of each facial muscle for patients by year in dynamic expression.
Facial Muscle Index and Name | Patient | Year 1 | Year 2 | Year 3
4. Orbicularis Oculi | Patient 1 | 12.949 | 1.961 | 0.798
 | Patient 2 | 10.174 | 7.581 | 1.087
 | Patient 3 | 6.545 | 2.209 | 1.641
 | Patient 4 | 6.407 | 4.373 | 3.707
6. Nasalis | Patient 1 | 2.415 | 0.582 | 0.319
 | Patient 2 | 0.557 | 0.16 | 0.115
 | Patient 3 | 0.759 | 0.392 | 0.138
 | Patient 4 | 0.522 | 0.396 | 0.045
7. Nose Tip | Patient 1 | 2.723 | 2.049 | 0.418
 | Patient 2 | 2.355 | 0.57 | 0.079
 | Patient 3 | 0.444 | 0.368 | 0.186
 | Patient 4 | 2.071 | 2.013 | 0.159
8. Orbicularis Oris | Patient 1 | 10.422 | 9.276 | 5.685
 | Patient 2 | 4.812 | 2.098 | 0.59
 | Patient 3 | 5.999 | 3.77 | 2.263
 | Patient 4 | 9.501 | 5.285 | 1.391
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
