Quantifying Perceived Facial Asymmetry to Enhance Physician–Patient Communications

In cosmetic surgery, bridging the anticipation gap between patients and physicians can be challenging in the absence of objective and transparent information exchange during decision making and the surgical process. Among all factors, facial symmetry is the most important for assessing facial attractiveness. The aim of this work is to promote communication between the two parties by providing a quadruple of quantitative measurements: overall asymmetry index (oAI), asymmetry vector, classification, and confidence vector, using an artificial neural network classifier to model people's perception acquired from visual questionnaires concerning facial asymmetry. The questionnaire results exhibit a Cronbach's Alpha value of 0.94 and categorize the respondents' perception of each stimulus face into perceived normal (PN), perceived asymmetrically normal (PAN), and perceived abnormal (PA) categories. The trained classifier yields an overall root mean squared error < 0.01, and its results show that the oAI is, in general, proportional to the degree of perceived asymmetry. However, there exist faces that are difficult to classify as either PN or PAN, or either PAN or PA, with competing confidence values. In such cases, the oAI alone is not sufficient to articulate facial asymmetry. Assisting surgeon-patient conversations with the proposed asymmetry quadruple is advised to avoid or to mitigate potential medical disputes.

Author Contributions: Data curation, S.-Y.W., P.-Y.T., and L.-J.L.; formal analysis, S.-Y.W.; funding acquisition, S.-Y.W.; investigation, S.-Y.W. and L.-J.L.; methodology, S.-Y.W. and P.-Y.T.; project administration, S.-Y.W.; resources, S.-Y.W. and L.-J.L.; software, S.-Y.W. and P.-Y.T.; supervision, S.-Y.W.; validation, S.-Y.W. and L.-J.L.; visualization, S.-Y.W. and P.-Y.T.; writing—original draft, S.-Y.W. and P.-Y.T.; writing—review and editing, S.-Y.W. and L.-J.L. All authors have read and agreed to the published version of the manuscript.


Introduction
Appearance is the most prominent stimulus to establish an impression in others [1]. Rubenstein et al. showed that humans develop the capability of perceiving attractiveness as early as infancy [2]. Cross-cultural and cross-age similarities exist in judging facial attractiveness [3]. Among all factors, facial symmetry is the most important factor correlating with facial attractiveness [4][5][6][7][8]. Perfect facial symmetry is considered only a theoretical existence, as most attractive individuals exhibit an asymmetric facial nature [9]. However, significant facial asymmetry can introduce aesthetic or even functional problems [10]. Cheong and Lo articulated the etiology of facial asymmetry as congenital, developmental, or acquired; the clinical implications, evaluation, and treatment planning and management may vary accordingly [5,11].
People have grown more accepting of seeking craniofacial orthognathic, orthodontic, or plastic surgery for appearance improvements in recent years [12,13]. According to ISAPS Global Statistics, more than 11 million surgical procedures were performed in 2019, an increase of 20.6% from 2015, and there were 4,058,143 (35.7%) total face and head surgical procedures [14].
Although plastic surgery is expected to enhance one's appearance and self-esteem, consequential uncertainties arise from the patients' psychological and physical discomfort. Harmonious physician-patient relationships are crucial to soothe this discomfort and to promote healthcare delivery [15]. However, anticipation gaps and insufficient decision-making communication have caused medical disputes [16][17][18][19]. According to the Department of Health, Taipei City, among Taipei's 415 cosmetic surgery clinics in 2016, 116 of 374 medical disputes came from aesthetic medicine [20]. To bridge the anticipation gaps, quantitative measurements are needed to model the perception of facial asymmetry. Related work and strategies in the literature below include: (1) determining how stimulus faces are collected: two-dimensional (2D) or three-dimensional (3D), real or simulated, with or without skin; (2) identifying facial landmarks automatically, semi-automatically, or not at all; (3) defining a coordinate system and making measurements; (4) conducting questionnaire surveys to quantify perceptions of facial asymmetry; (5) computing the degree of asymmetry and classification.
Chu et al. employed a series of 2D images to present progressive asymmetry [21]. The results suggested that at least 3 mm of facial asymmetry on the oral commissure, brow, or both was required for the participants to notice the asymmetry. Naini et al. measured the nasolabial angle on 2D facial contours to determine whether a rhinoplasty surgery was needed [22]. The results showed that 2D stimulus facial images had inherent limitations. Meyer-Marcotty et al. performed a similar study on facial asymmetry, though utilizing 3D images [1]. Instead of employing images from different persons, they transformed the same image by incremental soft tissue alterations. They found that the nasal structure played a crucial role in the perception of symmetry, and that the asymmetry of chins greatly affected the facial appearance.
In clinical practice, asymmetry is usually measured by comparing bilateral facial features against the mid-sagittal plane. Quantitative studies of the face rely on the accurate positioning of facial landmarks or surface-based patches [23]. Masuoka et al. explored the correlation of facial asymmetry between cephalometric measurements and the frontal photograph evaluations conducted by orthodontists. The results argued that a discrepancy existed between the two [24]. Ferrario et al. proposed quantitative metrics to assess pre- and post-operative differences [25]. Hajeer et al. compared the performance of bi-maxillary and maxillary osteotomy [26]. Pre- and post-operative records of 44 orthognathic surgery patients (20 Class III bi-maxillary osteotomy cases; 12 Class III maxillary osteotomy cases; 12 Class II bi-maxillary osteotomy cases) were assessed. Meyer-Marcotty et al., Djordjevic et al., and Cevidanes et al. mirrored faces and aligned feature points [1,27,28]. Huang et al. proposed an asymmetry index (AI) for each individual facial landmark of interest [7]. Hsu et al. computed facial contour asymmetry instead of landmark asymmetry [29]. Lo et al. constructed a transfer learning model to score the asymmetry of facial contour maps and to assess the efficacy of an orthognathic surgery [30]. The abovementioned studies employed the Frankfurt horizontal plane or the natural head position to derive 3D head coordinate systems.
Questionnaires were used to elicit and quantify people's perception of facial asymmetry. Lee et al. claimed that non-experts' assessments of facial asymmetry should be considered first because the ultimate demand and perception rest upon the patients [31]. Padwa et al. offered similar comments [32]. Jackson et al. conducted visual questionnaires among professional orthodontists, general dentists, and laypersons [8]. Their study showed that the ability to assess facial symmetry differed across professional backgrounds, and that the orthodontists had the most profound capability of differentiating symmetrical from asymmetrical faces. Chu et al. employed playback of photographs to trigger conscious perception in the observers [21]. In the study of Naini et al., three groups of respondents, i.e., orthognathic patients, clinicians, and laypeople, were invited to assess how mandible and chin landmarks related to perceived asymmetry [22]. The results demonstrated that the perceptions of clinicians and patients were more critical than those of laypeople. McAvinchey et al. classified the participants into five groups: laypeople, dental students, dental care professionals, dental practitioners, and orthodontists [33].
To grade the degree of asymmetry, Yamamoto et al. invited oral surgeons and orthodontists to subjectively evaluate facial asymmetry and defined grade #0 as a good symmetrical frontal view, grade #1 as little asymmetry, grade #2 as localized asymmetry, and grade #3 as marked asymmetry [34]. Meyer-Marcotty et al. adopted a six-point scale to rank facial symmetry (1: very symmetrical; 6: very asymmetrical) [1]. Masuoka et al. categorized the frontal facial images into two groups [24]. Patients in Group A exhibited symmetrical or slightly asymmetrical frontal views and did not require surgical treatment. Patients in Group B exhibited marked asymmetry and required surgical treatment. McAvinchey et al. requested the observers to classify the facial images displayed on the screen into normal, slightly abnormal but socially acceptable, or abnormal [33].
The building blocks to achieve the aim of this work are threefold. (1) Define a 3D facial coordinate system regardless of the head direction, and compute the asymmetry index vector (AI) and the overall asymmetry index (oAI) of a face. (2) Conduct visual questionnaire surveys to elicit laypeople's perception of facial asymmetry and categorize each stimulus face as perceived normal (PN), perceived asymmetrically normal (PAN), or perceived abnormal (PA). (3) Train an artificial neural network classifier to model the perceived categorization. The confidence vector (C) reveals laypeople's voting distribution over the three classes for a face.

Materials and Methods
Ethical approval for this study was obtained from the Institutional Review Board of Chang Gung Memorial Hospital, Taiwan, R.O.C. (102-1359B and 103-3130B). Figure 1 illustrates the procedure used to model the perceived facial asymmetry. In the top-left corner is a 3D normal face whose skin texture is removed and whose facial landmarks are identified. The head coordinate system is then computed. Walking along the upper row and going downwards, the normal face is morphed to generate stimulus faces with varied degrees of asymmetry, with which the proposed visual questionnaire surveys are conducted. Each stimulus face receives votes reflecting the respondents' asymmetry perception and is categorized as PN, PAN, or PA, with an associated confidence vector (C). Walking downwards on the left, the asymmetry index vector (AI) is computed and fed as the input to train the artificial neural network classifier to learn the categorization on the right. The overall asymmetry index (oAI) is the weighted sum of the individual asymmetry indices.
The remainder of this section depicts the proposed methods in detail.

Acquisition of 3D Facial Images and Pre-Process
A 3dMD scanner, an ultra-fast non-invasive 3D cranial imaging system, is used to document a high-precision facial surface at a capture speed of 1.5 ms per image. To prevent the respondents from being distracted by the subjects' individual outlooks, facial hues, or skin quality, we decide to use only one subject's 3D facial image, which is later morphed into a series of faces with different, controlled degrees of asymmetry. Furthermore, we remove the facial skin texture and make it monochrome. The inclusion criteria for the subject are: Angle Class I dental occlusion, no craniofacial deformity, no history of facial trauma, no prior orthognathic surgery, and a face generally regarded as symmetric by three orthodontists.

Facial Landmarks and the Corresponding 3D Coordinate System
Twenty facial landmarks, comprising eight medial landmarks and six bilateral pairs, as shown in Table 1, are identified from the acquired facial image [6]. Each landmark, denoted as L(i), is associated with an ID (i). IDs with a star superscript indicate bilateral landmarks.
To mathematically construct the 3D coordinate system, the mid-sagittal plane of the face is defined as orthogonal to the vector Ex_r→Ex_l that connects the right and left Exocanthi (ID = 9*; Ex_l and Ex_r) and as passing through Nasion (ID = 2; N). The intersection of the mid-sagittal plane and the line Ex_r–Ex_l is denoted as N′, which can be considered as the projection of N onto that line. The proposed head coordinate system is thus defined as (N, x, y, z), where N is the origin; x = Ex_r→Ex_l / ‖Ex_r→Ex_l‖ (pointing to the right); y = N→N′ / ‖N→N′‖ (pointing into the head); z = x × y / ‖x × y‖ (pointing upwards). The coordinates of all 20 landmarks can then be determined against (N, x, y, z). Note that these 20 facial landmarks are later identified as 14 facial features (ID = 1, 2, 3, . . . , 14).
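As a sketch of the construction above (the function and variable names are our own, not from the study), the head coordinate system can be computed from the three defining landmarks:

```python
import numpy as np

def head_coordinate_system(ex_r, ex_l, nasion):
    """Build the head frame (N, x, y, z) from the two Exocanthi and Nasion.

    Axes follow the convention in the text: x along Ex_r -> Ex_l,
    y from Nasion towards its projection N' on the Ex_r-Ex_l line
    (pointing into the head), z = x cross y (pointing upwards).
    """
    x = (ex_l - ex_r) / np.linalg.norm(ex_l - ex_r)
    # N' is the projection of Nasion onto the line through Ex_r and Ex_l
    n_prime = ex_r + np.dot(nasion - ex_r, x) * x
    y = (n_prime - nasion) / np.linalg.norm(n_prime - nasion)
    z = np.cross(x, y)
    z /= np.linalg.norm(z)
    return nasion, x, y, z

def to_head_coordinates(point, origin, x, y, z):
    # Express a scanner-space point in the (N, x, y, z) frame
    d = point - origin
    return np.array([np.dot(d, x), np.dot(d, y), np.dot(d, z)])
```

Once every landmark is expressed in this frame, its coordinates are independent of the head direction at capture time, which is what the subsequent asymmetry indices rely on.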

Visual Questionnaire Surveys
To generate transformed faces with varied degrees of asymmetry, we perform combined counter-clockwise rolls (rotations about the Y-axis) of the nose and the chin [7]. In total, 8 nose rolls combined with 8 chin rolls yield 64 stimulus faces with off-the-mid-sagittal distances covering normal to serious asymmetry [33], as shown in Table 2. In our study, the off-the-mid-sagittal distances range from 0.35 to 5.31 mm for the nose and from 1.14 to 17.04 mm for the chin. For example, a stimulus denoted as n09c03 indicates that the transformation involves 4.5° of nose roll and 1.68° of chin roll, and the corresponding Pronasale (Prn) and Menton (Me) displacements off the mid-sagittal plane are 3.19 mm (δx(Prn)) and 3.42 mm (δx(Me)), respectively. Figure 2a,b illustrate the facial coordinates and the counter-clockwise roll, and Figure 2c illustrates a series of deformed faces. Table 2. Roll rotations of the nose and the chin.

The 64 stimulus faces are randomly shuffled and compiled into a visual questionnaire. When conducting the questionnaires, each face is displayed for five seconds, sufficient for a respondent to reach a decision [35,36]. A blank screen, lasting two seconds, is shown between consecutive faces to remove residual stimuli. The study recruits 128 laypeople to serve as the questionnaire respondents. Respondents' informed consents are obtained before taking the survey. A questionnaire survey takes less than 10 minutes, including time spent on opening, closing, and other miscellaneous preparations. Two simple questions are asked for each face: (Q1) Do you think this face is symmetrical? (Q2) Is it abnormally asymmetrical such that you would consider a surgery? A face is rated as perceived normal (PN) if the answer to Q1 is YES; as perceived asymmetrically normal (PAN) if the answer to Q1 is NO and the answer to Q2 is NO; and as perceived abnormal (PA) if the answer to Q1 is NO and the answer to Q2 is YES. All respondents' answers are gathered, and each face is categorized as PN, PAN, or PA according to which class receives the most votes. Furthermore, the percentages of votes each face receives for PN, PAN, and PA, respectively, constitute the corresponding confidence vector (C). For example, a face that receives 20% of votes for PN, 30% for PAN, and 50% for PA is categorized as PA with a confidence vector of [0.20, 0.30, 0.50]ᵀ.
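The vote-aggregation rule just described can be sketched as follows (a minimal illustration; the function name is ours):

```python
def categorize(votes_pn, votes_pan, votes_pa):
    """Aggregate questionnaire votes for one stimulus face.

    Returns the majority class (PN, PAN, or PA) together with the
    per-class vote fractions, i.e., the confidence vector C.
    """
    counts = {"PN": votes_pn, "PAN": votes_pan, "PA": votes_pa}
    total = sum(counts.values())
    confidence = {k: v / total for k, v in counts.items()}
    category = max(counts, key=counts.get)
    return category, confidence
```

For the worked example in the text, `categorize(20, 30, 50)` yields the category "PA" with confidence vector (0.20, 0.30, 0.50).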

Asymmetry Classifier
The asymmetry classifier is an artificial neural network using back-propagation to adjust its inter-layer weights. The model has 14 nodes on its input layer, 10 nodes on its hidden layer, and 2 target nodes. The 14 facial features fed into the input layer are the 14 asymmetry indices (AIs) computed from the 20 landmarks as defined in Equation (1). The choice of 10 hidden nodes is determined by cross-validation and by two rules of thumb: (a) the number of hidden neurons is between the size of the input layer and the size of the output layer; (b) the number of hidden neurons is approximately two-thirds of the size of the input layer plus the size of the output layer. The two target nodes correspond to the binary coding of the three possible asymmetry classes: PN, PAN, and PA.
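A minimal numerical sketch of the 14-10-2 topology is given below. The weights here are random placeholders (the trained weights appear in Tables 3-5), and the sigmoid activation and the exact binary coding of the classes are our assumptions, not details stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# 14 input nodes (asymmetry indices), 10 hidden nodes, 2 output nodes
W_ih = rng.standard_normal((10, 14))   # input -> hidden weights (placeholder)
b_h = np.zeros(10)                     # hidden-layer bias
W_ho = rng.standard_normal((2, 10))    # hidden -> output weights (placeholder)
b_o = np.zeros(2)                      # output-layer bias

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(ai_vector):
    """One forward pass: 14 asymmetry indices in, 2 binary-coded outputs out."""
    hidden = sigmoid(W_ih @ ai_vector + b_h)
    return sigmoid(W_ho @ hidden + b_o)
```

Back-propagation training is omitted here; the point is only the data flow from an asymmetry index vector to the two-node class coding.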
The asymmetry index of a medial landmark is its distance off the mid-sagittal plane. The asymmetry index for a pair of bilateral landmarks is defined as the root sum squared (RSS) of the disparities of the two landmarks in the x-, y-, and z-directions. Equation (1) is defined as follows:

AI_i = |δ_x(L)| = |L_x − M_x|, if L(i) is a medial landmark;
AI_i = √((L_x^l + L_x^r − 2M_x)² + (L_y^l − L_y^r)² + (L_z^l − L_z^r)²), if L(i) is a bilateral pair, (1)

where L = L(i) represents the landmark with ID i, as shown in Table 1; AI_i denotes the asymmetry index of landmark L(i) ∈ {G, N, Prn, Sn, Ls, Li, Sto, Me} ∪ {Ex, En, Al, Ch, Zy, Go}; δ_x(L) is the distance of landmark L off the mid-sagittal plane; L_x^l and L_x^r denote the x-coordinates of the left and right landmarks of a bilateral pair (and similarly L_y^l, L_y^r and L_z^l, L_z^r for the y- and z-coordinates); and M_x denotes the x-coordinate of the mid-sagittal plane.
Specifically, in this study, we define the mid-sagittal plane as the yz-plane, i.e., M_x = 0; therefore, the definition in (1) simplifies to Equation (2):

AI_i = |L_x|, if L(i) is a medial landmark;
AI_i = √((L_x^l + L_x^r)² + (L_y^l − L_y^r)² + (L_z^l − L_z^r)²), if L(i) is a bilateral pair. (2)

During the training process, each face's 14 AIs are fed as the input to the under-construction classifier, which adjusts the inter-layer weight matrices between the input and hidden layers and between the hidden and output layers to generate the desired target encoded from the corresponding questionnaire results. In combination, the 14 AIs of a face form an asymmetry index vector, AI = [AI_1, AI_2, · · · , AI_14]ᵀ. The AIs of the 64 faces are iteratively applied until the classifier converges and its mean squared error from the desired target falls below a pre-defined threshold.
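Under these definitions (mid-sagittal plane at x = 0; RSS of the bilateral disparities), the per-feature indices can be sketched as below. The function names are ours; each landmark is an (x, y, z) triple in head coordinates.

```python
import math

def medial_ai(landmark):
    # Medial case of Equation (2): distance off the mid-sagittal plane (x = 0)
    return abs(landmark[0])

def bilateral_ai(left, right):
    # Bilateral case of Equation (2): RSS of the x-, y-, z-disparities.
    # On a perfectly symmetric face left_x == -right_x, so the x term vanishes.
    return math.sqrt((left[0] + right[0]) ** 2 +
                     (left[1] - right[1]) ** 2 +
                     (left[2] - right[2]) ** 2)
```

For instance, a bilateral pair mirrored exactly about the yz-plane yields an index of 0, and a medial landmark displaced 1.5 mm off the plane yields an index of 1.5.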

Overall Asymmetry Index (oAI) and Asymmetry Quadruple
The overall asymmetry index (oAI) of a face is defined as the weighted sum of the asymmetry indices, or equivalently, as the inner product of the asymmetry index vector and the relative importance vector. The weight associated with a facial feature reflects its relative importance (RI = [RI_1, RI_2, · · · , RI_14]ᵀ), extracted from the constructed asymmetry classifier and computed as in Equation (3) [37]:

RI_i = [ Σ_{j=1..n} ( |w_ij| / Σ_{p=1..m} |w_pj| ) · Σ_k |w_jk| ] / [ Σ_{q=1..m} Σ_{j=1..n} ( |w_qj| / Σ_{p=1..m} |w_pj| ) · Σ_k |w_jk| ], (3)

where RI_i denotes the relative importance of landmark ID = i, as defined in Table 1; w_ij represents the weight of the connection between node i on the input layer and node j on the hidden layer; w_jk denotes the weight of the connection between node j on the hidden layer and node k on the output layer; i = 1, 2, · · · , m (m = 14); j = 1, 2, · · · , n (n = 10); and individual weights are represented as unsigned magnitude values. The overall asymmetry index of a face is then defined as Equation (4):

oAI = RI · AI = Σ_{i=1..14} RI_i · AI_i. (4)

Putting it all together, for a given face, the overall asymmetry index, the individual asymmetry indices (asymmetry index vector), the perceived asymmetry classification, and the voting percentages of the three categories (confidence vector) constitute an asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ that quantitatively articulates the characteristics of the facial asymmetry.
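The relative-importance extraction and the oAI inner product can be sketched as follows. This is a minimal sketch under our assumptions: weights enter as unsigned magnitudes, hidden-to-output magnitudes are summed over both output nodes, and the RIs are normalized to sum to 1; the function names are ours.

```python
import numpy as np

def relative_importance(w_ih, w_ho):
    """Relative importance of each input feature from the trained weights.

    w_ih : (n_inputs, n_hidden) input-to-hidden weight matrix
    w_ho : (n_hidden, n_outputs) hidden-to-output weight matrix
    """
    a = np.abs(w_ih)                            # unsigned magnitudes |w_ij|
    share = a / a.sum(axis=0, keepdims=True)    # each input's share of a hidden node
    contrib = share * np.abs(w_ho).sum(axis=1)  # scale by hidden->output magnitude
    ri = contrib.sum(axis=1)
    return ri / ri.sum()                        # normalize so the RIs sum to 1

def overall_asymmetry_index(ai, ri):
    # Equation (4): oAI is the inner product of the AI and RI vectors
    return float(np.dot(ai, ri))
```

With the study's dimensions, `w_ih` would be 14 x 10 and `w_ho` would be 10 x 2, corresponding to the matrices IW and LW reported in Tables 3 and 4.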

Results
The study recruited 128 laypeople as the respondents, and 113 (88.3%) of them replied with valid questionnaires, of whom 43 (38%) are male and 70 (62%) are female. The invalid questionnaires include incomplete or apparently inconsistent answers; e.g., a respondent may consider a face symmetrical but paradoxically call for a surgery. The questionnaire results present a Cronbach's Alpha value of 0.944 [37]. The category of a face, PN, PAN, or PA, is determined by which category receives the most votes from the respondents. Among the 64 stimulus faces, 9 are classified as PN (14%), 15 as PAN (23%), and 40 as PA (63%). This distribution of face categories is aligned with the process of generating the stimulus faces, which are deformed from a normal face and present varied degrees of facial asymmetry.
The asymmetry classifier models the respondents' perceptions to predict the perceived asymmetry of a face. The input layer takes the asymmetry index vector (14 asymmetry indices) derived from the corresponding 20 facial landmarks. The 14 asymmetry indices (AI_i, i = 1 . . . 14) of a stimulus face are computed using Equation (2). The classifier is trained with 75%, validated with 15%, and tested with 15% of the 3D stimulus face images and yields an overall accuracy > 99% (root mean squared error 0.000059225). The resulting inter-layer weight matrix between the input and hidden layers (denoted as IW 1,1) is detailed in Table 3. The inter-layer weight matrix between the hidden and output layers (denoted as LW 2,1) is detailed in Table 4. The bias vectors added to the hidden and output layers are listed in Table 5.
The relative importances (RI_i, i = 1 . . . 14) are calculated using Equation (3) and are employed as the weights of the facial features. As shown in Table 6, the RIs are rank-ordered, showing that alar curvature (Al) and subnasale (Sn) are relatively important when classifying perceived asymmetry: both have RIs greater than 0.10, significantly greater than those of the remaining facial features. This is also aligned with the proposed deformation of the normal face, which emphasizes the nose and chin regions, as suggested in [1,7,33].
The oAI of a stimulus face is a linear combination (sum of products) of the corresponding AI_i and RI_i, as shown in Equation (4). Table 7 presents the overall asymmetry index (oAI), the classification of perceived asymmetry (PN|PAN|PA), and the confidence value of the classification (Ĉ: the percentage of votes for the chosen class) for the 64 stimulus faces. Note that, for brevity of demonstration, only a partial asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ is shown: the detailed asymmetry index vector (AI) and the detailed confidence vector (C) are excluded. For example, n03c15 is a stimulus face with a 1.5° nose roll and an 8.42° chin roll (as depicted in Table 2) applied to the normal face. Its entry in Table 7 reads (6.5208, PA, 0.92), i.e., its oAI is 6.5208 and it is perceived as abnormal (PA) with a confidence value of 0.92. The oAI ranges from 0.8182 (n01c01) to 8.6792 (n15c15). The greater the oAI of a stimulus face, the greater the possibility of it being classified as PAN or PA. Table 7. Sixty-four stimulus faces and associated minimal asymmetry characteristics.

Discussion
We transformed a normal face rather than using a series of realistic asymmetric faces to maintain the perception stability of the questionnaire respondents, similar to the approach proposed by Meyer-Marcotty et al. [1]. The degree of deformation can therefore be computed to correspond to the respondents' ratings of different faces. The facial texture is removed before deformation in our study to mitigate perception distractions from skin color, rashes, moles, etc. Table 8 illustrates the classification results of facial asymmetry for the 64 stimulus faces. The 8 × 8 matrix is arranged so that varied degrees of nose roll are shown on the left side, whereas different degrees of chin roll are shown on the top. As shown in Table 2, each increment of i denotes 0.5° of nasal roll, whereas each increment of j denotes 0.56° of chin roll. Presumably, the stimulus faces towards the upper left are likely to be categorized as PN, whereas those towards the lower right are likely to be classified as PAN or PA. The perceived asymmetry is, in general, aligned with this assumption, with the exception of the four red-shaded entries (n09c01, n11c01, n13c01, and n11c07), which exhibit inconsistencies in their asymmetry classifications. In Table 9, which lists the confidence vectors of these four faces, the highest confidence value in each confidence vector is starred (*), and the second highest is underlined. For example, the stimulus face n11c07 is associated with the confidence vector [0.15, 0.43, 0.42]ᵀ; its highest confidence value is starred (0.43*) and its second highest is underlined (0.42). These four stimulus faces demonstrate too-close-to-call situations: the respondents' perceptions are divergent, and any classification decision may easily draw disagreement from the other camp. By reassigning the classifications of the four faces to the categories with the second highest votes, we obtain a revised matrix, shown in Table 10, in which the red alerts are gone.
By meticulously re-classifying the other stimulus faces through examining their competing confidence values, even though some of them are not red-alerted here, we obtain similar results. The ambiguous zone phenomenon of asymmetry perception suggests: (1) that misclassifications can happen and there are no univocally correct answers; and (2) most importantly, that objective and transparent articulations of the asymmetry characteristics between the physician and the patient during assessment, decision making, surgical planning, and post-operative care are needed to avoid or mitigate potential medical disputes. Figure 3 visualizes the relationship between oAI and asymmetry classification. The stimulus faces are arranged along the horizontal axis so that faces of the same classified category are clustered together: PN on the left (oAI: 1.88 ± 0.69), PAN in the middle (oAI: 3.43 ± 0.71), and PA on the right (oAI: 5.89 ± 1.39). Within a category, the faces are sorted in order of oAI. Each graphic bar is associated with the confidence vector of a stimulus face and illustrates the voting distribution (referring to the vertical percentage axis on the left) from the questionnaire surveys. Green segments denote votes for PN, yellow segments PAN, and red segments PA. The solid black curve across the plot depicts the oAIs (referring to the oAI vertical axis on the right) of the corresponding stimulus faces.
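A too-close-to-call confidence vector of the kind discussed above can be flagged mechanically, e.g., when the two highest confidence values differ by less than a margin. The 0.10 margin below is our illustrative choice, not a value from the study.

```python
def is_ambiguous(confidence, margin=0.10):
    """Flag a face whose two highest confidence values are nearly tied.

    confidence : iterable of per-class vote fractions, e.g. [PN, PAN, PA].
    """
    top, second = sorted(confidence, reverse=True)[:2]
    return top - second < margin
```

For the face n11c07 with confidence vector [0.15, 0.43, 0.42], the top-two gap is 0.01, so it would be flagged; a face voted [0.20, 0.30, 0.50] would not be.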
The ambiguous zone phenomenon is again manifested in Figure 3. There are oAI drops around the inter-category areas (PN-PAN and PAN-PA), coinciding with where the red alerts occur in Table 8. For example, the first oAI drop happens at n09c01, one of the four stimulus faces exhibiting red alerts in Table 8. It has an oAI of 2.258 but is perceptually classified as PAN instead of PN. Its votes for PN and PAN are close (0.35 versus 0.46), and the corresponding green and yellow segments are visually of similar lengths. Such inconsistent asymmetry classifications arise when the respondents' subjective opinions are divisive.
Limitations and Strengths. Although quantified facial asymmetry can serve as a helpful ancillary tool, it is not the sole basis for surgical decisions. Comprehensive clinician grading, laypeople evaluations, patient-reported outcomes, and healthy patient-physician communications are just as important. The study models facial asymmetry for 3D images; however, 2D facial images are still widely utilized by facial plastic and reconstructive surgeons. The proposed model can be adapted for 2D images by (1) establishing 2D facial coordinate systems; (2) identifying the mid-face line instead of the mid-sagittal plane; (3) computing point-to-line distances instead of point-to-plane distances; and (4) removing the Z-axis components in Equations (1) and (2). Furthermore, when conducting the questionnaire survey, the respondents rate asymmetry on faces with the skin texture removed. Facial asymmetry across races or genders is not studied. Pre- and post-surgical asymmetry comparisons, and how prominent facial features, such as skin colors, moles, mustaches, attractiveness, pimples, etc., affect ratings of facial asymmetry, are beyond the scope of this study.

Conclusions
In this study, we construct an artificial neural network model to address the perception of facial asymmetry. The resulting asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ can serve as a tool to establish more transparent communication between the physician and the patient and to alleviate the anticipation gaps between the two parties. The oAI is an overall score of facial asymmetry, computed as a weighted sum of the asymmetry indices of individual facial features. Before making a surgical decision, the patient can weigh their own asymmetry characteristics in terms of the asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩. The ambiguous zone phenomenon should be taken into account as well. With such practice, the patients are more involved in the process and analysis of surgical decision making and undertake better-informed risks. On the other hand, it is the physician's responsibility to properly address the patient's asymmetry characteristics and to perform an oAI-improving (i.e., lower post-operative oAI) surgery. Quantifying facial asymmetry can serve as an advisory tool during the surgical decision process, alongside comprehensive clinician grading, laypeople evaluations, and patient-reported outcomes.
Finally, the visualization of the ambiguous zones of asymmetry perception, as depicted in Table 8 and Figure 3, helps explain why certain medical disputes are difficult to avoid. A thorough articulation of the proposed asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ is expected to improve the physician-patient relationship.

Patents
The concept and preliminary study of this work have been awarded an invention patent (no. I595430) by the Intellectual Property Office, Ministry of Economic Affairs, Taiwan, ROC.