Article

Novel Baseline Facial Muscle Database Using Statistical Shape Modeling and In Silico Trials toward Decision Support for Facial Rehabilitation

1 Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Thu Duc City 71300, Ho Chi Minh City, Vietnam
2 School of Engineering, Eastern International University, Thu Dau Mot City 75100, Binh Duong Province, Vietnam
3 Univ. Lille, CNRS, Centrale Lille, UMR 9013-LaMcube-Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Bioengineering 2023, 10(6), 737; https://doi.org/10.3390/bioengineering10060737
Submission received: 19 May 2023 / Revised: 10 June 2023 / Accepted: 16 June 2023 / Published: 19 June 2023
(This article belongs to the Special Issue Multiscale Modeling in Computational Biomechanics)

Abstract

Background and Objectives: Facial palsy is a complex pathophysiological condition affecting the personal and professional lives of the involved patients. Sudden muscle weakness or paralysis needs to be rehabilitated to recover a symmetric and expressive face. Computer-aided decision support systems for facial rehabilitation have been developed; however, there is a lack of facial muscle baseline data to evaluate patient states and to guide and optimize the rehabilitation strategy. In the present study, we aimed to develop a novel baseline facial muscle database (static and dynamic behaviors) by coupling statistical shape modeling and in silico trial approaches. Methods: 10,000 virtual subjects (5000 males and 5000 females) were generated from a statistical shape modeling (SSM) head model. Skull and muscle networks were defined so that they statistically fit the head shapes. Two standard mimics, smiling and kissing, were generated. Muscle lengths in the neutral and mimic positions, and the associated muscle strains, were computed and recorded from the muscle insertion and attachment points on the animated head and skull meshes. For validation, five head and skull meshes were reconstructed from five computed tomography (CT) image sets. Skull and muscle networks were then predicted from the reconstructed head meshes. The predicted skull meshes were compared with the reconstructed skull meshes based on mesh-to-mesh distance metrics. The predicted muscle lengths were also compared with those manually defined on the reconstructed head and skull meshes. Moreover, the computed muscle lengths and strains were compared with those in our previous studies and in the literature. Results: The skull prediction's median deviations from the CT-based models were 2.2236 mm, 2.1371 mm, and 2.1277 mm for the skull shape, skull mesh, and muscle attachment point regions, respectively. The median deviation of the muscle lengths was 4.8940 mm. The computed muscle strains were compatible with the values reported by our previous Kinect-based method and in the literature. Conclusions: Our novel facial muscle database opens new avenues to accurately evaluate the facial muscle states of facial palsy patients. Based on the evaluated results, specific types of facial mimic rehabilitation exercises can be selected optimally to train the target muscles. In perspective, the database of computed muscle lengths and strains will be integrated into our available clinical decision support system for automatically detecting malfunctioning muscles and proposing patient-specific rehabilitation serious games.


1. Introduction

Patients with facial palsy have difficulties in daily activities such as interpersonal communication and emotional expression, so facial mimic rehabilitation can enhance their quality of life [1,2]. Facial mimics result from muscle activations acting on the skin layers [3,4,5], and these activations are controlled by motor nerves [6]. Some conditions (e.g., strokes and facial transplants) can impair these nerves so that they can no longer activate their target muscles [7]. Consequently, patients cannot naturally and symmetrically perform the mimics (e.g., smiling and kissing) controlled by these malfunctioning muscles [8], or they exhibit unwanted facial movements in neutral or dynamic mimics [9]. The recovery procedures for these muscles are complex and require long-term treatment [10,11,12,13,14,15]. Facial rehabilitation exercises can speed up recovery and improve treatment outcomes [16]. These exercises consist of simple, repetitive facial movements dedicated to specific muscles [8]. Consequently, dysfunctional facial muscles must first be analyzed and diagnosed before suitable types of exercises can be selected.
Regarding facial paralysis analysis, clinical and non-clinical facial paralysis grading methods have been employed [17]. Clinical facial paralysis grading, which is mainly based on the expertise of clinicians, is subjective and varies among clinicians [17,18,19]. In contrast, non-clinical facial paralysis grading, which is mainly based on computer-aided processes, is objective and independent of the clinician [17]. In the literature, most studies analyzed facial mimics by evaluating their external geometrical information [20]. This information could be the symmetry between the left and right sides of the face on 2D images or 3D meshes [21,22,23,24]. Moreover, symmetrical movements of 2D and/or 3D face features could also be employed [25,26,27,28,29,30,31,32,33]. Some studies also detected and analyzed action units (AUs) of the Facial Action Coding System (FACS) [34] through 2D images or 3D point clouds [20,35]. However, facial mimics are the deformation results of muscle contractions on the skin layers [3,4,5], so muscle behaviors should be analyzed directly instead of these geometrical appearances. In our previous study, we first proposed the concept of using muscle strains for facial paralysis grading [36]. A muscle network could be statistically predicted based on the target head shape and a statistics-based predicted skull mesh [37]. During real-time head animation, muscle strains could be computed quickly from the vertex movements on the head and skull meshes [36]. However, we lacked standard values of muscle strains for diagnosing these muscles. Moreover, we only reported the muscle lengths and strains of five subjects (three healthy subjects and two patients), so these values could not represent large populations during muscle diagnosis [36].
Measuring standard muscle parameters is relatively challenging. In experimental studies, skeletal muscle measurements of the face can be conducted on cadavers, but the numbers of subjects have been relatively small (from 1 to 20) [38,39,40], and the processing procedures require substantial clinical expertise. In in silico studies, facial muscles can be reconstructed from magnetic resonance imaging (MRI) data, but segmenting soft tissues in MRI images is time-consuming and also requires much clinical expertise [4,5]. Consequently, this MRI-based method cannot be used to reconstruct all facial muscles for a large number of subjects. With computed tomography (CT) imaging data, both head and skull meshes can be reconstructed, but the soft tissues are lacking [37]. Moreover, most cadaver, MRI, or CT datasets were collected from deceased subjects, so measurements cannot be conducted in different mimics, especially in a dynamic manner [4,5,37,38,39,40]. In silico trials have been widely employed to accelerate the development of novel clinical treatments and experiments: novel medical treatments or experiments are tested on personalized virtual human models to rapidly collect responses in simulation environments, and these responses are used to optimize the treatments before they are implemented on real humans. Consequently, in silico trials help reduce clinical costs and deal with the lack of experimental data [41,42].
Recently, the statistical shape modeling (SSM) method has been widely employed for modeling human head and/or face geometries and mimics (e.g., the FLAME head model [43], the Basel face model [44], and other 3D morphable models [45]). These models were trained on large datasets of human faces in both static and dynamic mimics reconstructed with accurate depth sensors, so they can represent the shapes and mimics of large populations [46]. These morphable face/head models have been widely employed for predicting face/head meshes from monocular images with large face variations [47]. Although these SSM models can generate large variations in head shapes and facial mimics, they still lack internal structures (i.e., skulls and muscle networks). In our previous studies, we could predict internal structures, including the skull and muscle network, with acceptable accuracy for both patients and healthy subjects [36,37], but this prediction method had not yet been applied to these SSM head models.
Consequently, in this study, we aimed to apply our previous biomechanical head modeling methods [36,37] to the FLAME (Faces Learned with an Articulated Model and Expressions) head model [43] to compute standard values of muscle lengths and strains in static and dynamic facial mimics. In particular, the FLAME head model generates 3D geometrical head models as 3D triangulated surface meshes of the head shape. The vertex positions of the head meshes are controlled by parameters for subject identity, head transforms, and facial mimics. The skull meshes were predicted from the head meshes so that their shapes statistically fit the head shapes. Muscle networks were defined as action lines connecting the muscle attachment points on the skull to the muscle insertion points on the head. The muscle lengths and strains were finally computed in standard static and dynamic mimics because they are important for muscle-based facial paralysis diagnosis. These values will be applied in our clinical decision-support system for facial mimic rehabilitation: the computed static and dynamic muscle strains can be compared with the reported baseline values to automatically detect malfunctioning facial muscles. In this study, the baseline facial muscle database is the database of the static and dynamic lengths and strains of the facial muscles in the kissing and smiling mimics. Moreover, suitable types of facial rehabilitation games will be proposed to train the detected muscles, and muscle behaviors will be scored against these standard values during gameplay to evaluate recovery progress.
In the following sections, we first describe the methods of head shape and mimic generation, skull prediction, and muscle network definition. The steps of the CT-based validation are also presented in Section 2. The baseline values of muscle lengths and strains in neutral and other mimics, together with their accuracies, are reported in Section 3. The contributions and drawbacks of this study are finally discussed in Sections 4 and 5.

2. Materials and Methods

2.1. Overall Processing Workflow

The overall processing procedure is described in Figure 1. In particular, the processing steps include (a) head shape generation, (b) skull and muscle network prediction, (c) mimic performing, and (d) muscle analysis.
(a)
Regarding head shape generation, we used the SSM head model, Faces Learned with an Articulated Model and Expressions (FLAME) [43], to generate virtual subject variations by controlling the FLAME shape parameters. The other parameter sets, including translations, rotations, poses, and expressions, were all kept at zero so that the subjects remained in the neutral mimic position. The details of this step are explained in Section 2.2.
(b)
Regarding the skull and muscle network prediction, based on the head shape of each virtual subject, a skull mesh was predicted using our previously developed SSM-based head-to-skull prediction method [37]. Moreover, a muscle network including linear and circle muscles was defined as action lines connecting the muscle attachment points on the skull mesh to the muscle insertion points on the head mesh. The insertion and attachment points were positioned by their vertex indices on the head and skull meshes. This processing step is explained in detail in Section 2.3.
(c)
Regarding mimic performing, we controlled the expression and pose parameters of the FLAME model so that each virtual subject performed smiling and kissing mimics. For the static mimics, we set the smiling and kissing control parameters to their maximum values. For the dynamic mimics, we increased these parameters from zero to their maximum values with a step size of 1/200 of the maximum values. The details are explained in Section 2.4.
(d)
Regarding muscle analysis, because the mesh structures of the head and skull meshes do not change during the non-rigid animations, the muscle insertion and attachment points were automatically updated according to the motions of the head and skull vertices. Consequently, muscle lengths could be computed from the updated insertion and attachment points, and muscle strains were computed as the relative differences between the muscle lengths in the current mimics and those in the neutral mimics. In this study, muscle strains of both static and dynamic mimics were computed and reported; a minimal sketch of this computation is shown after this list. The details are presented in Section 2.4.
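The following is a minimal sketch of step (d), assuming each linear muscle stores the vertex indices of its attachment point (on the skull mesh) and insertion point (on the head mesh); the struct and function names are illustrative, not the authors' exact code.

```cpp
// Muscle lengths and strains updated from the animated head/skull vertices.
// The vertex indices never change during animation, only the positions do.
#include <Eigen/Dense>

struct LinearMuscle {
    int attachmentIdx;           // vertex index on the skull mesh
    int insertionIdx;            // vertex index on the head mesh
    double neutralLength = 0.0;  // length recorded in the neutral mimic
};

double muscleLength(const LinearMuscle& m,
                    const Eigen::Matrix3Xd& skullVerts,
                    const Eigen::Matrix3Xd& headVerts)
{
    // Straight-line action-line length between insertion and attachment
    return (headVerts.col(m.insertionIdx) - skullVerts.col(m.attachmentIdx)).norm();
}

// Strain relative to the neutral mimic, in percent (negative = shortening)
double muscleStrain(const LinearMuscle& m,
                    const Eigen::Matrix3Xd& skullVerts,
                    const Eigen::Matrix3Xd& headVerts)
{
    return (muscleLength(m, skullVerts, headVerts) - m.neutralLength)
           / m.neutralLength * 100.0;
}
```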
For validation, we tested the methods on five head and skull meshes reconstructed from CT image sets from the New Mexico Decedent Image Database [48]. The predicted skull meshes were compared with the reconstructed skull meshes based on the mesh-to-mesh distance metric. Moreover, the predicted muscle networks were compared with the muscle networks pre-defined on the CT-based head and skull meshes based on the point-to-point distance metric. Finally, the computed muscle lengths were compared with the muscle lengths reported in the literature. Details of the validation are explained in Section 2.5.

2.2. Subject Identity and Mimic Generation

The Faces Learned with an Articulated Model and Expressions (FLAME) head model is one of the most popular 3D morphable models (3DMMs) for the human head [43]. The FLAME model employs the non-animated head mesh of the SMPL (Skinned Multi-Person Linear) model [49]. The head mesh vertices are formed as in Equations (1) and (2):
$$M(\beta, \theta, \psi) = W\big(T_P(\beta, \theta, \psi),\, J(\beta),\, \theta,\, \mathcal{W}\big) \tag{1}$$

$$T_P(\beta, \theta, \psi) = \bar{T} + B_S(\beta; \mathcal{S}) + B_P(\theta; \mathcal{P}) + B_E(\psi; \mathcal{E}) \tag{2}$$
Here, $J(\beta)$ gives the joint locations of the rotating parts of the head mesh (the mouth mesh and the left and right eye meshes), computed from the head vertices. $\mathcal{W}$ are the blend weights for linearly smoothing the skin vertices when rotating around the joints $J(\beta)$. $\bar{T}$ is the mean head vertex template computed from the training dataset. $B_S(\beta; \mathcal{S})$ is the shape blend shape function with shape parameters $\beta$ and orthonormal shape basis $\mathcal{S}$; the basis $\mathcal{S}$ was trained from the training dataset using principal component analysis (PCA) [50]. $B_P(\theta; \mathcal{P})$ is the pose blend shape function with pose parameters $\theta$ and vertex offsets $\mathcal{P}$ from the rest pose. $B_E(\psi; \mathcal{E})$ is the expression blend shape function with expression parameters $\psi$ and orthonormal expression basis $\mathcal{E}$; the basis $\mathcal{E}$ was also trained from the dataset using PCA.
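A minimal sketch of the linear blend-shape part of Equation (2) is shown below, assuming the FLAME mean template and bases have already been loaded as Eigen matrices. The names (meanTemplate, shapeBasis, exprBasis) are illustrative, not the FLAME API; the pose term $B_P$ and the skinning $W(\cdot)$ of Equation (1) are omitted for brevity.

```cpp
#include <Eigen/Dense>

// Returns the rest-pose template T_P = T-bar + S*beta + E*psi as a
// flattened 3N x 1 vector of vertex coordinates.
Eigen::VectorXd restTemplate(const Eigen::VectorXd& meanTemplate, // T-bar, 3N x 1
                             const Eigen::MatrixXd& shapeBasis,   // S, 3N x |beta|
                             const Eigen::MatrixXd& exprBasis,    // E, 3N x |psi|
                             const Eigen::VectorXd& beta,         // shape parameters
                             const Eigen::VectorXd& psi)          // expression parameters
{
    return meanTemplate + shapeBasis * beta + exprBasis * psi;
}
```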
The training dataset of the FLAME head model includes 3800 scanned heads from the European CAESAR body scan database [51]. This dataset was scanned with six infrared time-of-flight sensors placed around the target body, with a circumferential accuracy of <±5 mm. A template head mesh was deformed to the head shapes of this dataset to form the training data for the shape blend shape function. Moreover, the facial expression model was trained on the D3DFACS dataset [52], which contains time-series 3D point cloud sequences of faces performing various standard facial expressions (facial action units (AUs)) defined in the Facial Action Coding System (FACS) [53]. Template head meshes with pre-defined shape parameter sets were deformed to each facial scan to form the training data for the pose and expression blend shape functions. The FLAME head model was trained separately on male, female, and generic datasets.
Consequently, using the FLAME head model, we can re-generate head meshes with various shapes and realistic facial expressions. We re-generated 10,000 virtual subjects with 10,000 shape parameter sets (5000 male and 5000 female). Figure 2a shows examples of the head shape variations in the neutral mimic of the 10,000 virtual male and female subjects. To vary the subject identity of the FLAME head mesh, we set the FLAME pose and mimic parameters to zero so that the head and jaw regions were in the standard position and the faces were in the neutral mimic. All shape parameters of the FLAME model were randomly sampled over their full range (from −2.0 to 2.0). This selection strategy guarantees that the re-generated head meshes cover most of the head shape variations in the FLAME training dataset. Additionally, the two separate male and female FLAME models were employed for re-generating the male and female virtual subjects.
For each virtual subject, we set the relevant expression parameters of the FLAME model to their maximum values to create static smiling and kissing mimics, as shown in Figure 2b. Moreover, dynamic mimics of each type of face movement (smiling and kissing) were created by increasing the appropriate expression parameters from zero to their maximum values, as described in Figure 2c. A sketch of this generation loop is given below.
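The following sketch illustrates the subject generation and dynamic mimic loop under stated assumptions: flameForward() stands in for a hypothetical wrapper around the TensorFlow C++ evaluation of the FLAME model, and the parameter counts are assumed, not taken from the authors' code.

```cpp
#include <random>
#include <Eigen/Dense>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(-2.0, 2.0);
    const int nSubjects = 5000;  // per sex (5000 males + 5000 females)
    const int nShape    = 300;   // FLAME shape-parameter count (assumed)
    const int nSteps    = 200;   // dynamic mimic resolution (1/200 steps)

    for (int s = 0; s < nSubjects; ++s) {
        // Random identity: shape parameters sampled uniformly in [-2, 2]
        Eigen::VectorXd beta(nShape);
        for (int i = 0; i < nShape; ++i) beta(i) = uni(rng);

        // Dynamic smiling mimic: scale the smiling expression parameters
        // from zero to their maximum in nSteps increments
        for (int t = 0; t <= nSteps; ++t) {
            const double intensity = static_cast<double>(t) / nSteps;
            // Eigen::Matrix3Xd headVerts = flameForward(beta, intensity * psiSmileMax);
            // ... predict skull, update muscle network, record strains ...
            (void)intensity;  // placeholder until flameForward() is wired in
        }
    }
    return 0;
}
```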

2.3. Skull and Muscle Network Generation

Even though the FLAME model can generate shapes and mimics for large populations, it still lacks the internal structures needed for analyzing muscle behaviors in each mimic. The head-to-skull prediction method was developed in our previous study, but it had only been applied to CT- and Kinect-based head meshes [37]. In this study, we applied the method to the head meshes generated by the FLAME model. Figure 3 shows the processing procedure of the FLAME-based head-to-skull prediction. In particular, given a neutral head-neck mesh generated by the FLAME model, we first generated a head-only mesh by replacing the vertices of the template head-only mesh with the appropriate vertices of the FLAME-based head-neck mesh. The head-only mesh was then registered to the standard head mesh based on manually selected landmarks; the singular value decomposition (SVD) [54] and iterative closest point (ICP) registration methods [55] were employed to minimize the manual landmark selection (a sketch of the SVD-based alignment is given below). The registered head mesh was sampled using the sampling rays pre-defined during the head-to-skull training process, resulting in the head samples. A skull shape was predicted from the head samples using the partial least squares regression coefficients [56] trained in our previous study [37]. A template skull shape was finally deformed so that its shape optimally fit the predicted skull shape.
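A minimal sketch of the SVD-based rigid alignment (the standard Kabsch solution) that could implement the landmark registration step, assuming paired landmark correspondences are already selected; this is our illustration, not the authors' exact code.

```cpp
#include <Eigen/Dense>
#include <Eigen/SVD>

// src, dst: 3 x K matrices of corresponding landmark positions.
// Computes R, t such that R*src + t approximates dst in a least-squares sense.
void rigidAlign(const Eigen::Matrix3Xd& src, const Eigen::Matrix3Xd& dst,
                Eigen::Matrix3d& R, Eigen::Vector3d& t)
{
    const Eigen::Vector3d srcMean = src.rowwise().mean();
    const Eigen::Vector3d dstMean = dst.rowwise().mean();
    const Eigen::Matrix3Xd srcC = src.colwise() - srcMean;
    const Eigen::Matrix3Xd dstC = dst.colwise() - dstMean;

    // Cross-covariance of the centered landmark sets and its SVD
    const Eigen::Matrix3d H = srcC * dstC.transpose();
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);

    // Guard against reflections so the result is a proper rotation
    Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
    D(2, 2) = (svd.matrixV() * svd.matrixU().transpose()).determinant() > 0 ? 1.0 : -1.0;

    R = svd.matrixV() * D * svd.matrixU().transpose();
    t = dstMean - R * srcMean;
}
```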
As shown in Figure 4, a muscle network was defined as action lines connecting the attachment points on the skull mesh to the insertion points on the head (or skull) mesh. We defined the facial muscle types based on the anatomical structure of the face [57]. The defined linear muscles included the Left/Right Procerus (L/RP), Left/Right Frontal Belly (L/RFB), Left/Right Corrugator Supercilii (L/RCS), Left/Right Temporoparietalis (L/RT), Left/Right Nasalis (L/RN), Left/Right Depressor Septi Nasi (L/RDSN), Left/Right Zygomaticus Minor (L/RZm), Left/Right Zygomaticus Major (L/RZM), Left/Right Risorius (L/RR), Left/Right Depressor Anguli Oris (L/RDAO), Left/Right Mentalis (L/RM), Left/Right Levator Labii Superioris (L/RLLS), Left/Right Levator Labii Superioris Alaeque Nasi (L/RLLSA), Left/Right Levator Anguli Oris (L/RLAO), Left/Right Depressor Labii Inferioris (L/RDLI), Left/Right Buccinator (L/RB), and Left/Right Masseter (L/RMa). We also defined circle muscles, including the Left/Right Orbicularis Oculi and the Orbicularis Oris. The muscle insertion/attachment points were positioned by their vertex indices on the head and skull meshes. During the animation of the head and skull meshes, the mesh vertex indices do not change, so the muscle lengths and perimeters could be updated according to the insertion and attachment positions on the animated head and skull meshes; a sketch of the perimeter update is given below.
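For the circle muscles, the perimeter can be taken as the length of the closed polyline through the muscle's ordered control vertices; the sketch below assumes this representation, which is our reading of the description above rather than the authors' exact code.

```cpp
#include <vector>
#include <Eigen/Dense>

// Perimeter of a circle muscle (e.g., orbicularis oris) as the closed
// polyline through its ordered control vertices on the head mesh.
double circleMusclePerimeter(const std::vector<int>& ring,  // ordered vertex indices
                             const Eigen::Matrix3Xd& headVerts)
{
    double p = 0.0;
    for (size_t k = 0; k < ring.size(); ++k) {
        const int a = ring[k];
        const int b = ring[(k + 1) % ring.size()];  // wrap around to close the loop
        p += (headVerts.col(a) - headVerts.col(b)).norm();
    }
    return p;
}
```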

2.4. Muscle-Based Analyses

In this study, we analyzed the muscle behaviors in neutral, static, and dynamic mimics. In the neutral mimic, the length of each linear muscle was computed as the distance between its attachment point and insertion point, and the mean and standard deviation of each muscle length over all male and female subjects were computed and reported. In the static mimics, for each subject performing smiling and kissing, the length of each muscle was computed for each mimic; the relative differences between the lengths in the current mimic and those in the neutral mimic were computed as the muscle strains, and the mean and standard deviation of each muscle strain over all male and female subjects were computed and reported for each mimic. In the dynamic mimics, for each type of mimic (smiling or kissing), the expression parameter values were increased from zero to their maximum values with a step size of 1/200 of the maximum values and fed to the FLAME model to generate the mimics. Over all male and female subjects, the mean and standard deviation of each muscle strain were computed and reported at each mimic step.
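For reference, the relative-difference definition above corresponds to the usual engineering strain expressed in percent (our notation, not the authors' original symbols); negative values indicate shortening:

$$\varepsilon = \frac{L_{\text{mimic}} - L_{\text{neutral}}}{L_{\text{neutral}}} \times 100\%$$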

2.5. Validation

To evaluate the accuracy of the head-to-skull prediction and muscle network definition, we collected five CT image sets from the New Mexico Decedent Image Database [48] (males: 3, females: 2; age (mean ± SD): 31.2 ± 6.5 years). The CT image sets were reconstructed using the 3D Slicer software [58]. The head meshes were reconstructed by first segmenting both skin and bone in the CT images and then meshing with the marching cubes method. Internal structures and neck regions were removed from the reconstructed meshes, as shown in Figure 5a. The skull meshes were reconstructed by first segmenting the bone tissue in the CT images and then meshing the segmented regions; the cervical spines were also removed from the reconstructed skull meshes, as shown in Figure 5b. Moreover, we generated skull shapes from the CT-reconstructed skull meshes for shape validation, as shown in Figure 5c; the details of skull shape generation from skull meshes were described in our previous study [37]. The muscle networks were manually defined by selecting their attachment and insertion points on the reconstructed skull and head meshes, as shown in Figure 5d, based on the face anatomy [57]. We applied our previous head-to-skull prediction method [37] and muscle network definition method [36] to predict the skull meshes and muscle networks for the five CT-reconstructed head meshes. The skull shapes of the predicted skull meshes were also generated for evaluation. Distances between the predicted skull shapes and the CT-based skull shapes, and between the predicted skull meshes and the CT-reconstructed skull meshes, were computed based on the mesh-to-mesh distance metric (a sketch is given below); the computed mesh-to-mesh distances were also evaluated in the muscle attachment/insertion point regions on the skull meshes. Additionally, we compared the muscle lengths of the manually defined and predicted muscle networks using the point-to-point distance metric to evaluate the accuracy of the muscle network prediction. The predicted muscle lengths in the neutral mimic and the linear muscle strains in the smiling and kissing mimics were also compared with those reported in the literature [5,38,39,40] and in our previous study [36].
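Below is a sketch of a vertex-to-mesh distance computation, approximated as the distance from each vertex of mesh A to the nearest vertex of mesh B; the study's exact mesh-to-mesh metric may use point-to-triangle distances instead. Brute force is used here for clarity; a k-d tree (e.g., from PCL) would be used in practice.

```cpp
#include <algorithm>
#include <limits>
#include <vector>
#include <Eigen/Dense>

std::vector<double> vertexToMeshDistances(const Eigen::Matrix3Xd& A,
                                          const Eigen::Matrix3Xd& B)
{
    std::vector<double> d(static_cast<size_t>(A.cols()));
    for (Eigen::Index i = 0; i < A.cols(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (Eigen::Index j = 0; j < B.cols(); ++j)
            best = std::min(best, (A.col(i) - B.col(j)).norm());
        d[static_cast<size_t>(i)] = best;
    }
    return d;
}

// Median of the per-vertex distances, as reported in Section 3.1
double median(std::vector<double> v)
{
    std::sort(v.begin(), v.end());
    const size_t n = v.size();
    return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}
```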

2.6. Used Technologies

The head mesh generation, skull prediction, and muscle parameter computation were programmed in Visual Studio C++ 2019 on an HP ZBook 17 G5 (Intel(R) Xeon(R) E-2176M CPU @ 2.70 GHz, 32.0 GB RAM, 64-bit Microsoft Windows 11 Pro for Workstations). Mesh processing was supported by LibIGL [59] and VCG/MeshLab [60]. Point cloud processing was supported by the PCL C++ library [61]. Mesh rendering was supported by VTK [62]. Linear matrix operations were supported by Eigen [63]. The FLAME head model was executed with the TensorFlow C++ API [64].

3. Results

3.1. Validation Deviations in Comparison with CT-Reconstructed Data

Figure 6 shows the validation results between the predicted skull meshes and the CT-reconstructed skull meshes in terms of shape and mesh differences. Overall, as shown in Figure 6a, the predicted skull meshes (wire-frame rendering) fit well with the reconstructed skull meshes (smooth rendering). Regarding the skull shape comparison, as shown in Figure 6b, most deviations are distributed on the back skull regions and in small regions of interest (e.g., teeth and nose tip). Large deviations appear on the back head regions of subject 1 due to head deformation during the CT image acquisition. Regarding the skull mesh comparison, as shown in Figure 6c, good accuracy is also achieved on the facial regions of the skull meshes; deviations are concentrated on the back-skull and internal regions of the skull meshes.
Figure 7 illustrates the deviations between the predicted and reconstructed skulls. In particular, the mesh-to-mesh distances between the predicted and reconstructed skull shapes have a median of 2.2236 mm (mean ± SD: 2.9917 ± 2.5117 mm). The median mesh-to-mesh distance between the predicted and reconstructed skull meshes is 2.1371 mm (mean ± SD: 2.8694 ± 2.2194 mm). The median deviation at the muscle attachment points is 2.2177 mm (mean ± SD: 2.9114 ± 2.2849 mm). The muscle length deviations have a median of 4.8940 mm (mean ± SD: 6.1515 ± 5.1011 mm). This median deviation of the muscle lengths is within the experimental deviations of the muscle lengths (~6 mm) reported in the literature [65].

3.2. Muscle Lengths in Neutral Mimics

Table 1 shows the means and standard deviations of all muscle lengths of the 10,000 male and female subjects in this study. Overall, the mean muscle lengths of the male subjects are larger than those of the female subjects. The computed muscle lengths of the healthy subjects in our previous study are of the same order of magnitude as those of the virtual subjects in this study. The average standard deviation of all muscle lengths is 2.7 mm, which agrees with the experimental perturbation values of the muscle insertion/attachment points (~3 mm) reported in the literature [65].

3.3. Static Muscle Analysis

Table 2 lists the standard strain values of the linear and circle muscles in the smiling, kissing, and o-pronouncing mimics of all male and female subjects. Overall, the behaviors of the muscle strains can be used to describe and evaluate the quality of different facial mimics.
Regarding the smiling mimic, for which the LZm, RZm, LZM, and RZM muscles are mainly responsible [53], the standard strain values (%) of the LZm, RZm, LZM, and RZM are −10.50 ± 0.48, −11.49 ± 0.60, −12.34 ± 0.50, and −12.78 ± 0.60, respectively, for males, and −10.41 ± 0.47, −11.32 ± 0.57, −12.34 ± 0.52, and −12.74 ± 0.60, respectively, for females. Based on these strain values, the left and right Zm and ZM muscles are all shortened during the smiling mimic, in agreement with the Zm and ZM smiling behaviors reported in the literature [4,5]. Moreover, these strain values are relatively symmetrical between the left and right sides. These symmetrical standard strain values could be used to evaluate the smiling mimics of healthy subjects, who usually smile symmetrically and strongly [4,5], and of facial palsy patients, who usually smile asymmetrically and weakly [18]. For instance, in our previous study [36], for healthy subject 3, the strain values of the LZm, RZm, LZM, and RZM muscles were −9.93%, −9.93%, −21.32%, and −19.72%, respectively, whereas for patient subject 1, these values were −0.40%, −3.12%, −6.76%, and −9.53%, respectively. Compared with the standard muscle strain values, we can conclude that the muscle strains of the healthy subject are stronger and more symmetrical than those of the patient.

The LZm, RZm, LZM, and RZM muscles are also mainly responsible for the kissing mimic [53], in which they must be elongated [4,5]. In this study, the standard strain values are 10.64% ± 0.48%, 11.69% ± 0.61%, 12.66% ± 0.50%, and 13.23% ± 0.59% for the LZm, RZm, LZM, and RZM muscles, respectively. These values are all larger than zero and show the elongating behavior during the kissing mimic. Moreover, the strain values of the Zm and ZM muscles are symmetrical between the left and right sides. Consequently, these characteristics of the standard kissing strain values could also be used to evaluate the muscles in kissing mimics.

3.4. Dynamic Muscle Analysis

Besides the muscle evaluation in static mimics, this study also reports dynamic muscle strain values to support the diagnosis of muscle movement control in the smiling and kissing mimics.
Figure 8 shows the dynamic behaviors of the LZm, RZm, LZM, and RZM muscles while performing the smiling mimic. Overall, these muscles all shorten linearly as the mimic progresses from neutral to the maximum smiling intensity. This shortening behavior is consistent with the reported behaviors of the left and right ZM(m) muscles in smiling mimics [4,5,53]. The strain values of the left and right ZM(m) muscles of the male and female subjects are relatively similar while performing the smiling mimic. Moreover, the standard deviations of the muscle strains tend to increase as the mean shortening strain increases. Notably, the left and right ZM muscles shorten faster than the left and right Zm muscles. For example, for the male subjects, at time-step 0, the mean strains of the left and right ZM and Zm muscles are all zero; at time-step 200, the strains of the left and right ZM muscles are −12.34% ± 0.50% and −12.78% ± 0.60%, which are larger in magnitude than those of the left and right Zm muscles (−10.50% ± 0.48% and −11.49% ± 0.60%, respectively). Additionally, from time-step 0 to 200, the dynamic strains of the left and right ZM(m) muscles remain relatively symmetrical.
Figure 9 shows the dynamic behaviors of the LZm, RZm, LZM, and RZM muscles while performing the kissing mimic. As reported in the literature, these muscles should elongate proportionally to the kissing intensity [4,5,53]; this behavior matches our computed strain values. In particular, the strains (%) of the left and right ZM muscles increase linearly from 0 to 12.66 ± 0.50 and 13.23 ± 0.59, respectively, for males (12.75 ± 0.52 and 13.31 ± 0.60, respectively, for females) as the kissing mimic increases from neutral to maximum intensity. The strains of the left and right Zm muscles also increase linearly from 0 to 10.64% ± 0.48% and 11.69% ± 0.61%, respectively, for males (10.62% ± 0.48% and 11.63% ± 0.59%, respectively, for females) over this kissing intensity range. It is important to note that the left and right ZM muscles elongate faster than the left and right Zm muscles while performing the kissing mimic, and that the standard deviations of the elongation tend to be larger at higher strain values. Moreover, the left and right ZM(m) muscles elongate symmetrically during the kissing mimic.
Based on the above analyses of the LZm, RZm, LZM, and RZM muscles in dynamic smiling and kissing mimics, the computed muscle strains can be employed to score the movements of the muscles responsible for specific dynamic mimics. Figure 8 and Figure 9 only show the results of the four muscles mainly responsible for the smiling and kissing mimics, but our dataset also reports the strain values of all 37 muscles listed in Table 1 for these mimics.

4. Discussion

Facial paralysis grading is the first requirement for personalizing and enhancing facial mimic rehabilitation treatments [66]. Currently, non-clinical grading methods are promising for this task thanks to their stable and objective outcomes [17]. However, most studies have analyzed facial mimics based only on the visual deformation of the facial skin [20]. Facial mimics are the deformation results of muscle contractions on the skin layers [3,4,5], so the muscles themselves should be directly analyzed and graded. Analyzing facial muscles during facial movements is challenging because internal structures cannot be easily examined in living subjects [36]. For example, surface scanning sensors (e.g., cameras and time-of-flight sensors) cannot acquire internal structures [67,68,69]; reconstructing soft tissue from MRI datasets is time-consuming and requires substantial clinical expertise [4,5]; facial muscles cannot be built from CT images [36]; and experimental measurements on cadavers are limited and cannot capture facial mimics [38,39,40]. Recently, in our previous study, we proposed a novel method for analyzing facial muscles in real time based only on visual facial mimics to support facial paralysis grading [36]. However, we lacked baseline datasets for automatically diagnosing the predicted muscles in static and dynamic facial mimics, so such standard parameters of facial muscles in different mimics are particularly necessary. In silico trials, in particular, have been widely applied to generate datasets with large variations [41,42]. Consequently, in the present study, we developed a novel baseline muscle database using an in silico trial approach to provide the first reference database for facial muscle evaluation.
More precisely, we first applied our head-to-skull and muscle prediction methods to a 3DMM head model [43] to provide its skull meshes and muscle networks across large sets of shape and expression parameters, with an accuracy acceptable for facial muscle analyses. The FLAME head model [43] was trained on a large database of 3800 scanned heads from the European CAESAR body scan database [51] and on the D3DFACS 3D face scans in various facial mimics [52]. Consequently, the FLAME model can be used to re-generate heads with shapes and mimics representative of the general population. Statistical shape modeling was successfully employed to compactly represent the diversity of the training datasets and can deal with the lack of training data [70]; therefore, with a small number of shape parameters, we could re-generate virtual head shapes representative of the general population. However, the model lacks internal structures. In this study, we first applied our head-to-skull prediction method to infer skull structures from the head geometry and predicted the muscle networks to analyze muscle strains according to the FLAME head animations. After validation against five CT subjects, the muscle length deviations have a median of 4.8940 mm. These deviations are within the error range of the facial muscle lengths reported in the literature (~6 mm) [65]. Moreover, the computed muscle lengths in the neutral mimic are compatible with the values reported in our previous study and in the literature, as shown in Table 3. In the literature, experimental studies computed the muscle lengths of only a limited number of subjects (from 1 to 20); in our study, by using the SSM of the head, we could analyze the muscle lengths of a large number of subjects (males: 5000; females: 5000). Consequently, our computed muscle lengths could serve as reference values for muscle length diagnosis in the neutral mimic. The computed muscle strains in the smiling and kissing mimics also agree well with those computed using accurate FE-based facial models and the Kinect-based head model. In particular, in the smiling mimic of the FE-based model, the strain value of the left and right ZM muscles was reported as −6.82%; in the smiling mimic of the Kinect-based model, the strain values of the left and right ZM muscles of the healthy subject were −17.46% ± 3.87% and −14.43% ± 5.30%, respectively. In this study, as listed in Table 2, the strains of the left and right ZM muscles are −12.34% ± 0.50% and −12.78% ± 0.60%, respectively, for males (−12.34% ± 0.52% and −12.74% ± 0.60%, respectively, for females). In the kissing mimic, the FE-based models reported ZM strain values of 24% [4] and 22% [5] when the subject made the [o]-sound, which is relatively similar to the kissing mimic [53]; in the Kinect-based model, the strain value was 14.84% ± 2.56% for the left and right ZM muscles of the healthy subjects. In this study, the kissing strain values of the left and right ZM muscles are 12.66% ± 0.50% and 13.23% ± 0.59%, respectively, for males (12.75% ± 0.52% and 13.31% ± 0.60%, respectively, for females). Given the acceptable accuracy of the muscle behavior analyses in this study, we propose using the reported values of muscle lengths and strains for facial muscle diagnosis in facial paralysis grading. By using our previous patient-specific real-time head animation, head-to-skull prediction, and muscle network definition methods, the strain of each muscle can be computed in real time.
The static or dynamic muscle strains recorded while performing the smiling or kissing mimic will be compared with the corresponding standard static or dynamic muscle strain values for muscle diagnosis. Moreover, as listed in Table 2, we found that the standard deviations of the muscle strains in the smiling and kissing mimics are relatively small across subjects. This means that the reported values can be used to evaluate the muscles of a large population during mimic performance. Additionally, as shown in Figure 8 and Figure 9, the standard deviations of the muscle strains tend to increase with the intensity of the smiling or kissing mimic. This information is important for evaluating muscle synkinesis while performing mimics [8].
In addition, we provide a large database supporting facial paralysis grading and facial muscle diagnosis. This database includes 10,000 FLAME-based head meshes and the 10,000 skull meshes predicted from them; it could be used for studying the relations between the FLAME head shape parameters and the skull structures. Moreover, muscle lengths can be straightforwardly computed from the FLAME head and skull meshes by directly changing the expression parameters of the FLAME model, so muscle lengths can be analyzed as functions of the FLAME expression parameters. We also provide the muscle lengths of all muscles of all 10,000 analyzed subjects in the neutral mimic, as well as the muscle strains of all muscles in the smiling and kissing mimics for all subjects. Means and standard deviations of the muscle lengths, smiling strains, and kissing strains are reported separately for males and females. The database can be downloaded via [71].
This study also has some drawbacks. We only analyzed and reported muscle strain values for static and dynamic smiling and kissing mimics; however, the method can be applied to analyze the muscles in any mimic by controlling the expression and pose parameters of the FLAME model. Moreover, our muscle-based analysis method does not yet support mimics with mandible movements. In perspective, we will enhance the muscle-based facial analysis so that it supports facial mimics with mandible movement by studying the relation between the mouth movements and the mandible motions. The enhanced method will be used to analyze the muscles of all Action Units (AUs) in the Facial Action Coding System (FACS). We will also implement the method in our clinical decision-support system [72] for automatically detecting malfunctioning muscles while each AU of the FACS is performed; based on the diagnosed results, suitable serious games will be proposed to train the target muscles. It is also important to note that statistical shape modeling methods were employed for building the FLAME and head-to-skull prediction models. In these statistical models, geometrical deformations of the meshes are computed as linear combinations of the principal components, so they cannot handle complex geometrical structures of the head and skull meshes. More advanced statistical shape modeling methods (e.g., Gaussian process morphable models [73]), geometric deep learning [74], and generative adversarial networks (e.g., SP-GAN [75]) could be employed to address these issues. In further work, we will implement the computed muscle strains as baseline values for diagnosing facial muscle behaviors. In particular, in our previous study, we could compute patient-specific muscle strains in real time. For diagnosis, clinicians will ask the patient to perform the smiling and kissing mimics, and the strains of all muscles will be computed in those mimics. Malfunctioning muscles are those whose computed strain values deviate from the corresponding baseline strain values; facial mimic rehabilitation exercises will then be selected to train these muscles.

5. Conclusions

Facial paralysis grading is important for personalizing facial mimic rehabilitation treatments. A muscle-based facial grading method has recently been proposed in the literature, but there is still a lack of facial muscle baseline data to evaluate patient states and to guide and optimize the rehabilitation strategy. In the present study, we developed a novel baseline facial muscle database (static and dynamic behaviors) by coupling statistical shape modeling and in silico trial approaches. We applied our original head-to-skull and muscle network prediction methods to the FLAME model, which can represent standard head shapes and facial mimics, to compute the standard muscle strains in both static and dynamic smiling and kissing facial mimics. In perspective, these data will be integrated into our available clinical decision support system for automatically detecting malfunctioning muscles and proposing patient-specific rehabilitation serious games.

Author Contributions

Conceptualization, T.-T.D. and T.-N.N.; methodology, T.-N.N.; software, V.-D.T. and T.-N.N.; validation, V.-D.T. and T.-N.N.; formal analysis, A.B.; investigation, A.B. and T.-T.D.; resources, V.-D.T. and T.-N.N.; data curation, V.-D.T. and T.-N.N.; writing—original draft preparation, V.-D.T. and T.-N.N.; writing—review and editing, V.-D.T., T.-N.N., A.B. and T.-T.D.; visualization, T.-N.N.; supervision, T.-T.D.; project administration, V.-D.T.; funding acquisition, V.-D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education and Training, and hosted by Ho Chi Minh City University of Technology and Education, Vietnam, grant number B2022-SPK-01.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is available upon request.

Acknowledgments

This work belongs to the project grant No: B2022-SPK-01 funded by the Ministry of Education and Training, and hosted by Ho Chi Minh City University of Technology and Education, Vietnam.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Frith, C. Role of Facial Expressions in Social Interactions. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3453–3458. [Google Scholar] [CrossRef] [Green Version]
  2. Ishii, L.E.; Nellis, J.C.; Boahene, K.D.; Byrne, P.; Ishii, M. The Importance and Psychology of Facial Expression. Otolaryngol. Clin. N. Am. 2018, 51, 1011–1017. [Google Scholar] [CrossRef] [PubMed]
  3. Wu, T.; Hung, A.P.L.; Hunter, P.; Mithraratne, K. Modelling Facial Expressions: A Framework for Simulating Nonlinear Soft Tissue Deformations Using Embedded 3D Muscles. Finite Elem. Anal. Des. 2013, 76, 63–70. [Google Scholar] [CrossRef]
  4. Fan, A.X.; Dakpé, S.; Dao, T.T.; Pouletaut, P.; Rachik, M.; Ho Ba Tho, M.C. MRI-Based Finite Element Modeling of Facial Mimics: A Case Study on the Paired Zygomaticus Major Muscles. Comput. Methods Biomech. Biomed. Eng. 2017, 20, 919–928. [Google Scholar] [CrossRef] [PubMed]
  5. Dao, T.T.; Fan, A.X.; Dakpé, S.; Pouletaut, P.; Rachik, M.; Ho Ba Tho, M.C. Image-Based Skeletal Muscle Coordination: Case Study on a Subject Specific Facial Mimic Simulation. J. Mech. Med. Biol. 2018, 18, 1850020. [Google Scholar] [CrossRef]
  6. Rittey, C. The Facial Nerve. Pediatric ENT; Springer: Berlin/Heidelberg, Germany, 2007; Volume 83, pp. 479–484. [Google Scholar] [CrossRef]
  7. Jayatilake, D.; Isezaki, T.; Teramoto, Y.; Eguchi, K.; Suzuki, K. Robot Assisted Physiotherapy to Support Rehabilitation of Facial Paralysis. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 644–653. [Google Scholar] [CrossRef] [PubMed]
  8. Wernick Robinson, M.; Baiungo, J.; Hohman, M.; Hadlock, T. Facial Rehabilitation. Oper. Tech. Otolaryngol.—Head Neck Surg. 2012, 23, 288–296. [Google Scholar] [CrossRef]
  9. Constantinides, M.; Galli, S.K.D.; Miller, P.J. Complications of Static Facial Suspensions with Expanded Polytetrafluoroethylene (EPTFE). Laryngoscope 2001, 111, 2114–2121. [Google Scholar] [CrossRef]
  10. Dubernard, J.M.; Lengelé, B.; Morelon, E.; Testelin, S.; Badet, L.; Moure, C.; Beziat, J.L.; Dakpé, S.; Kanitakis, J.; D’Hauthuille, C.; et al. Outcomes 18 Months after the First Human Partial Face Transplantation. N. Engl. J. Med. 2007, 357, 2451–2460. [Google Scholar] [CrossRef] [Green Version]
  11. Anderl, H. Reconstruction of the Face through Cross-Face-Nerve Transplantation in Facial Paralysis. Chir. Plast. 1973, 2, 17–45. [Google Scholar] [CrossRef]
  12. Khalifian, S.; Brazio, P.S.; Mohan, R.; Shaffer, C.; Brandacher, G.; Barth, R.N.; Rodriguez, E.D. Facial Transplantation: The First 9 Years. Lancet 2014, 384, 2153–2163. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Lopez, J.; Rodriguez, E.D.; Dorafshar, A.H. Facial Transplantation; Elsevier Inc.: Amsterdam, The Netherlands, 2019. [Google Scholar]
  14. Siemionow, M.Z.; Papay, F.; Djohan, R.; Bernard, S.; Gordon, C.R.; Alam, D.; Hendrickson, M.; Lohman, R.; Eghtesad, B.; Fung, J. First U.S. near-Total Human Face Transplantation: A Paradigm Shift for Massive Complex Injuries. Plast. Reconstr. Surg. 2010, 125, 111–122. [Google Scholar] [CrossRef] [PubMed]
  15. Lantieri, L. Man with 3 Faces: Frenchman Gets 2nd Face Transplant. AP NEWS, 17 April 2018. [Google Scholar]
  16. VanSwearingen, J. Facial Rehabilitation: A Neuromuscular Reeducation, Patient-Centered Approach. Facial Plast. Surg. 2008, 24, 250–259. [Google Scholar] [CrossRef]
  17. Samsudin, W.S.W.; Sundaraj, K. Clinical and Non-Clinical Initial Assessment of Facial Nerve Paralysis: A Qualitative Review. Biocybern. Biomed. Eng. 2014, 34, 71–78. [Google Scholar] [CrossRef]
  18. Owusu, J.A.; Stewart, C.M.; Boahene, K. Facial Nerve Paralysis. Med. Clin. N. Am. 2018, 102, 1135–1143. [Google Scholar] [CrossRef] [PubMed]
  19. Banks, C.A.; Bhama, P.K.; Park, J.; Hadlock, C.R.; Hadlock, T.A. Clinician-Graded Electronic Facial Paralysis Assessment: The EFACE. Plast. Reconstr. Surg. 2015, 136, 223e–230e. [Google Scholar] [CrossRef]
  20. Lou, J.; Yu, H.; Wang, F.Y. A Review on Automated Facial Nerve Function Assessment from Visual Face Capture. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 488–497. [Google Scholar] [CrossRef]
  21. Wang, S.; Li, H.; Qi, F.; Zhao, Y. Objective Facial Paralysis Grading Based OnPFace and Eigenflow. Med. Biol. Eng. Comput. 2004, 42, 598–603. [Google Scholar] [CrossRef]
  22. Desrosiers, P.A.; Bennis, Y.; Daoudi, M.; Amor, B.B.; Guerreschi, P. Analyzing of Facial Paralysis by Shape Analysis of 3D Face Sequences. Image Vis. Comput. 2017, 67, 67–88. [Google Scholar] [CrossRef] [Green Version]
  23. Gibelli, D.; De Angelis, D.; Poppa, P.; Sforza, C.; Cattaneo, C. An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification. J. Forensic Sci. 2017, 62, 405–410. [Google Scholar] [CrossRef]
  24. Tanikawa, C.; Takada, K. Test-Retest Reliability of Smile Tasks Using Three-Dimensional Facial Topography. Angle Orthod. 2018, 88, 319–328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Frey, M.; John Tzou, C.H.; Michaelidou, M.; Pona, I.; Hold, A.; Placheta, E.; Kitzinger, H.B. 3D Video Analysis of Facial Movements. Fac. Plast. Surg. Clin. N. Am. 2011, 19, 639–646. [Google Scholar] [CrossRef] [PubMed]
  26. Salgado, M.D.; Curtiss, S.; Tollefson, T.T. Evaluating Symmetry and Facial Motion Using 3D Videography. Fac. Plast. Surg. Clin. N. Am. 2010, 18, 351–356. [Google Scholar] [CrossRef] [PubMed]
  27. Trotman, C.-A.; Phillips, C.; Faraway, J.J.; Ritter, K. Association between Subjective and Objective Measures of Lip Form and Function: An Exploratory Analysis. Cleft Palate-Craniofac. J. 2003, 40, 241–248. [Google Scholar] [CrossRef]
  28. Hontanilla, B.; Aubá, C. Automatic Three-Dimensional Quantitative Analysis for Evaluation of Facial Movement. J. Plast. Reconstr. Aesthetic Surg. 2008, 61, 18–30. [Google Scholar] [CrossRef]
  29. Trotman, C.A.; Faraway, J.; Hadlock, T.; Banks, C.; Jowett, N.; Jung, H.J. Facial Soft-Tissue Mobility: Baseline Dynamics of Patients with Unilateral Facial Paralysis. Plast. Reconstr. Surg.—Glob. Open 2018, 6, 1955. [Google Scholar] [CrossRef]
  30. Al-Hiyali, A.; Ayoub, A.; Ju, X.; Almuzian, M.; Al-Anezi, T. The Impact of Orthognathic Surgery on Facial Expressions. J. Oral Maxillofac. Surg. 2015, 73, 2380–2390. [Google Scholar] [CrossRef]
  31. Popat, H.; Henley, E.; Richmond, S.; Benedikt, L.; Marshall, D.; Rosin, P.L. A Comparison of the Reproducibility of Verbal and Nonverbal Facial Gestures Using Three-Dimensional Motion Analysis. Otolaryngol.—Head Neck Surg. 2010, 142, 867–872. [Google Scholar] [CrossRef]
  32. Mishima, K.; Umeda, H.; Nakano, A.; Shiraishi, R.; Hori, S.; Ueyama, Y. Three-Dimensional Intra-Rater and Inter-Rater Reliability during a Posed Smile Using a Video-Based Motion Analyzing System. J. Cranio-Maxillofac. Surg. 2014, 42, 428–431. [Google Scholar] [CrossRef]
  33. Trotman, C.A.; Faraway, J.; Hadlock, T.A. Facial Mobility and Recovery in Patients with Unilateral Facial Paralysis. Orthod. Craniofac. Res. 2020, 23, 82–91. [Google Scholar] [CrossRef]
  34. Ekman, R. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS); Oxford University Press: New York, NY, USA, 1997; ISBN 978-0195179644. [Google Scholar]
  35. Hamm, J.; Kohler, C.G.; Gur, R.C.; Verma, R. Automated Facial Action Coding System for Dynamic Analysis of Facial Expressions in Neuropsychiatric Disorders. J. Neurosci. Methods 2011, 200, 237–256. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Nguyen, T.-N.; Dakpe, S.; Ho Ba Tho, M.-C.; Dao, T.-T. Kinect-Driven Patient-Specific Head, Skull, and Muscle Network Modelling for Facial Palsy Patients. Comput. Methods Programs Biomed. 2021, 200, 105846. [Google Scholar] [CrossRef] [PubMed]
  37. Nguyen, T.-N.; Tran, V.-D.; Nguyen, H.-Q.; Dao, T.-T. A Statistical Shape Modeling Approach for Predicting Subject-Specific Human Skull from Head Surface. Med. Biol. Eng. Comput. 2020, 58, 2355–2373. [Google Scholar] [CrossRef]
  38. Freilinger, G.; Gruber, H.; Happak, W.; Pechmann, U. Surgical Anatomy of the Mimic Muscle System and the Facial Nerve: Importance for Reconstructive and Aesthetic Surgery. Plast. Reconstr. Surg. 1987, 80, 686–690. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Happak, W.; Liu, J.; Burggasser, G.; Flowers, A.; Gruber, H.; Freilinger, G. Human Facial Muscles: Dimensions, Motor Endplate Distribution, and Presence of Muscle Fibers with Multiple Motor Endplates. Anat. Rec. 1997, 249, 276–284. [Google Scholar] [CrossRef]
  40. Benington, P.C.M.; Gardener, J.E.; Hunt, N.P. Masseter Muscle Volume Measured Using Ultrasonography and Its Relationship with Facial Morphology. Eur. J. Orthod. 1999, 21, 659–670. [Google Scholar] [CrossRef] [Green Version]
  41. Pappalardo, F.; Russo, G.; Tshinanu, F.M.; Viceconti, M. In Silico Clinical Trials: Concepts and Early Adoptions. Brief. Bioinform. 2019, 20, 1699–1708. [Google Scholar] [CrossRef]
  42. Hodos, R.A.; Kidd, B.A.; Shameer, K.; Readhead, B.P.; Dudley, J.T. In Silico Methods for Drug Repurposing and Pharmacology. Wiley Interdiscip. Rev. Syst. Biol. Med. 2016, 8, 186–210. [Google Scholar] [CrossRef] [Green Version]
  43. Li, T.; Bolkart, T.; Black, M.J.; Li, H.; Romero, J. Learning a Model of Facial Shape and Expression from 4D Scans. ACM Trans. Graph. 2017, 36, 1–17. [Google Scholar] [CrossRef] [Green Version]
  44. Paysan, P.; Knothe, R.; Amberg, B.; Romdhani, S.; Vetter, T. A 3D Face Model for Pose and Illumination Invariant Face Recognition. In Proceedings of the 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, Genova, Italy, 2–4 September 2009; pp. 296–301. [Google Scholar]
  45. Sharma, S.; Kumar, V. 3D Face Reconstruction in Deep Learning Era: A Survey; Springer: Dordrecht, The Netherlands, 2022; Volume 29, ISBN 0123456789. [Google Scholar]
  46. Salam, H.; Séguier, R. A Survey on Face Modeling: Building a Bridge between Face Analysis and Synthesis. Vis. Comput. 2018, 34, 289–319. [Google Scholar] [CrossRef]
  47. Wang, H. A Review of 3D Face Reconstruction from a Single Image. arXiv 2021, arXiv:2110.09299. [Google Scholar]
  48. Berry, S.D.; Edgar, H.J.H. Announcement: The New Mexico Decedent Image Database. Forensic Imaging 2021, 24, 200436. [Google Scholar] [CrossRef]
  49. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; Black, M.J. SMPL: A Skinned Multi-Person Linear Model. ACM Trans. Graph. 2015, 34, 1–16. [Google Scholar] [CrossRef]
  50. Maćkiewicz, A.; Ratajczak, W. Principal Components Analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
  51. Robinette, K.M.; Daanen, H.A.M. Precision of the CAESAR Scan-Extracted Measurements. Appl. Ergon. 2006, 37, 259–265. [Google Scholar] [CrossRef]
  52. Cosker, D.; Krumhuber, E.; Hilton, A. A FACS Valid 3D Dynamic Action Unit Database with Applications to 3D Dynamic Morphable Facial Modeling. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2296–2303. [Google Scholar]
  53. Ekman, P.; Friesen, W.V. Facial Action Coding System. Environmental Psychology and Nonverbal Behavior; Kluwer Academic Publishers-Human Sciences Press: Dordrecht, The Netherlands, 1978. [Google Scholar]
  54. Henry, E.R.; Hofrichter, J. Singular Value Decomposition: Application to Analysis of Experimental Data; Academic Press: Cambridge, MA, USA, 1992; pp. 129–192. [Google Scholar]
  55. Jost, T.; Hügli, H. Fast ICP Algorithms for Shape Registration. In Proceedings of the 24th DAGM Symposium, Zurich, Switzerland, 16–18 September 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 91–99. [Google Scholar]
  56. Geladi, P.; Kowalski, B.R. Partial Least-Squares Regression: A Tutorial. Anal. Chim. Acta 1986, 185, 1–17. [Google Scholar] [CrossRef]
  57. Prendergast, P.M. Facial Anatomy. Adv. Surg. Facial Rejuvenation Art Clin. Pract. 2012, 9783642178, 3–14. [Google Scholar] [CrossRef]
  58. Fedorov, A.; Beichel, R.; Kalpathy-Cramer, J.; Finet, J.; Fillion-Robin, J.-C.; Pujol, S.; Bauer, C.; Jennings, D.; Fennessy, F.; Sonka, M.; et al. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network. Magn. Reson. Imaging 2012, 30, 1323–1341. [Google Scholar] [CrossRef] [Green Version]
  59. Jacobson, A.; Panozzo, D.; Schüller, C.; Diamanti, O.; Zhou, Q.; Pietroni, N. Libigl: A Simple C++ Geometry Processing Library 2018. Available online: http://hdl.handle.net/10453/167463 (accessed on 19 November 2022).
  60. Cignoni, P.; Ranzuglia, G.; Callieri, M.; Corsini, M.; Ganovelli, F.; Pietroni, N.; Tarini, M. MeshLab. 2011. Available online: https://air.unimi.it/handle/2434/625490 (accessed on 19 November 2022).
  61. Rusu, R.B.; Cousins, S. 3D Is Here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  62. Schroeder, W.J.; Avila, L.S.; Hoffman, W. Visualizing with VTK: A Tutorial. IEEE Comput. Graph. Appl. 2000, 20, 20–27. [Google Scholar] [CrossRef] [Green Version]
  63. Guennebaud, G.; Jacob, B. Eigen v3, 2010. Available online: http://eigen.tuxfamily.org (accessed on 19 November 2022).
  64. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  65. Dao, T.T.; Pouletaut, P.; Lazáry, Á.; Tho, M.C.H.B. Multimodal Medical Imaging Fusion for Patient-Specific Musculoskeletal Modeling of the Lumbar Spine System in Functional Posture. J. Med. Biol. Eng. 2017, 37, 739–749. [Google Scholar] [CrossRef]
  66. Robinson, M.W.; Baiungo, J. Facial Rehabilitation: Evaluation and Treatment Strategies for the Patient with Facial Palsy. Otolaryngol. Clin. N. Am. 2018, 51, 1151–1167. [Google Scholar] [CrossRef] [PubMed]
  67. Marcos, S.; Gómez-García-Bermejo, J.; Zalama, E. A Realistic, Virtual Head for Human-Computer Interaction. Interact. Comput. 2010, 22, 176–192. [Google Scholar] [CrossRef]
  68. Matsuoka, A.; Yoshioka, F.; Ozawa, S.; Takebe, J. Development of Three-Dimensional Facial Expression Models Using Morphing Methods for Fabricating Facial Prostheses. J. Prosthodont. Res. 2019, 63, 66–72. [Google Scholar] [CrossRef] [PubMed]
  69. Turban, L.; Girard, D.; Kose, N.; Dugelay, J.L. From Kinect Video to Realistic and Animatable MPEG-4 Face Model: A Complete Framework. In Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Turin, Italy, 30 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
  70. Liu, H.; Rashid, T.; Habes, M. Cerebral Microbleed Detection Via Fourier Descriptor with Dual Domain Distribution Modeling. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), Iowa City, IA, USA, 4 April 2020; pp. 20–23. [Google Scholar] [CrossRef]
  71. Nguyen, T.-N. FLAME Based Head and Skull Predictions. Available online: https://drive.google.com/file/d/1ma6_PrRUucGhmg3a4-syKrIpAgTt1-zd/view?usp=share_link (accessed on 19 November 2022).
  72. Nguyen, T.-N. Clinical Decision Support System for Facial Mimic Rehabilitation. Ph.D. Thesis, University of Technology of Compiegne, Compiègne, France, 2020. [Google Scholar]
  73. Luthi, M.; Gerig, T.; Jud, C.; Vetter, T. Gaussian Process Morphable Models. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1860–1873. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Abbas, A.; Rafiee, A.; Haase, M.; Malcolm, A. Geometrical Deep Learning for Performance Prediction of High-Speed Craft. Ocean Eng. 2022, 258, 111716. [Google Scholar] [CrossRef]
  75. Li, R.; Li, X.; Hui, K.-H.; Fu, C.-W. SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation. ACM Trans. Graph. 2021, 40, 1–12. [Google Scholar] [CrossRef]
Figure 1. The overall processing procedure for analyzing the facial muscles' behaviors in both static and dynamic mimic positions: (a) shape variations were defined using the statistical shape model of the head; (b) skull and muscle networks were defined based on the head shapes; (c) the static and dynamic mimics were performed by the virtual subjects and drove the skull and muscle network structures; (d) muscle lengths and the static and dynamic muscle strains were computed from the resulting muscle movements. Note that the red, green, blue, and yellow colors represent the linear muscles, the left orbicularis oculi muscle, the right orbicularis oculi muscle, and the orbicularis oris muscle, respectively.
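To make step (d) of Figure 1 concrete, the following minimal Python sketch computes an action-line length from ordered 3D insertion/attachment points and the corresponding strain. This is an illustrative reimplementation, not the authors' code; the landmark coordinates and function names are hypothetical.

```python
import numpy as np

def action_line_length(points: np.ndarray) -> float:
    """Length of a muscle action line given its ordered 3D points (n, 3):
    the sum of consecutive segment lengths (one segment for a linear muscle;
    append the first point to close the loop for orbicularis perimeters)."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

def muscle_strain(l: float, l0: float) -> float:
    """Strain (l - l0) / l0 relative to the neutral-position length l0."""
    return (l - l0) / l0

# Hypothetical zygomaticus major action-line endpoints in mm (not study data)
neutral = np.array([[0.0, 0.0, 0.0], [40.0, 30.0, 25.0]])
smiling = np.array([[0.0, 0.0, 0.0], [36.0, 26.0, 22.0]])
l0, l = action_line_length(neutral), action_line_length(smiling)
print(f"strain = {100.0 * muscle_strain(l, l0):.2f} %")  # negative: shortening
```

With these toy endpoints the strain is roughly −11%, the same order of magnitude as the zygomaticus shortening reported for the smiling mimic in Table 2.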
Figure 2. Subject identity and mimic generation from the statistical shape model of the head: (a) shape variations in the neutral mimic of 5000 male and 5000 female subjects; (b) static mimics performed by the virtual subjects; (c) dynamic mimics in time series.
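The virtual-subject generation in Figure 2 can be sketched as sampling a PCA-based statistical shape model, shape = mean + Φb, with bounded random mode coefficients. The arrays below (mean shape, modes, per-mode standard deviations) are placeholders for a real model such as FLAME; the mesh size of 5023 vertices and the 10 modes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_vertices, n_modes = 5023, 10                 # FLAME-sized mesh; 10 modes assumed
mean_shape = np.zeros(3 * n_vertices)          # flattened (x, y, z) mean head
components = rng.standard_normal((3 * n_vertices, n_modes))  # placeholder modes
stdevs = np.linspace(2.0, 0.1, n_modes)        # placeholder per-mode std. dev.

def sample_subject() -> np.ndarray:
    """One virtual head: mean shape plus a bounded random combination of modes."""
    b = rng.uniform(-3.0, 3.0, n_modes) * stdevs   # coefficients within +/- 3 SD
    return (mean_shape + components @ b).reshape(n_vertices, 3)

subjects = [sample_subject() for _ in range(10)]   # the study generated 10,000
```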
Figure 3. Skull and muscle network generation procedure. Note that the colored dots in this figure represent the muscle insertion/attachment points and the feature points used for rigid transformation.
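One standard way to realize the feature-point-based rigid transformation in Figure 3 is the SVD-based Kabsch method; the study's exact registration pipeline (e.g., its ICP variant [55]) may differ, so this is a hedged sketch with known point correspondences assumed.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src (n, 3) onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: four non-coplanar feature points under a pure translation
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = src + np.array([5.0, -2.0, 3.0])
R, t = rigid_transform(src, dst)                   # R ~ identity, t ~ (5, -2, 3)
```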
Figure 4. Muscle network definitions on the FLAME head mesh and the skull mesh.
Figure 5. Validation data reconstructed and defined from CT images: (a) reconstructed head meshes; (b) reconstructed skull meshes; (c) skull shapes generated from the CT-based skull meshes; and (d) muscle networks manually defined based on the facial anatomical structures and the reconstructed head and skull meshes. The red, green, blue, and yellow lines represent the action lines of the linear, left orbicularis oculi, right orbicularis oculi, and orbicularis oris muscles, respectively.
Figure 6. Validation results of the predicted skull meshes: (a) predicted skull meshes vs. reconstructed skull meshes; (b) error distributions as distance color maps between the predicted and reconstructed skull shapes; and (c) error distributions as distance color maps between the predicted and reconstructed skull meshes. The horizontal numbers indicate the tested subjects' identifications.
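The deviation metric behind Figures 6 and 7 can be approximated as a nearest-neighbor vertex-to-vertex distance summarized by its median; a true point-to-surface distance (e.g., via MeshLab or PCL) would be a refinement. The vertex arrays below are random placeholders, not study meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_to_mesh_deviation(pred: np.ndarray, ref: np.ndarray):
    """Distances (mm) from each predicted vertex to its nearest reference vertex."""
    distances, _ = cKDTree(ref).query(pred)
    return float(np.median(distances)), distances

rng = np.random.default_rng(1)
pred = rng.random((2000, 3)) * 100.0                 # placeholder predicted vertices
ref = pred + rng.normal(0.0, 2.0, size=pred.shape)   # reference offset by ~2 mm noise
median_dev, dists = mesh_to_mesh_deviation(pred, ref)
print(f"median deviation: {median_dev:.2f} mm")
```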
Figure 7. Validation deviations: generated vs. reconstructed skull shapes; generated vs. reconstructed skull meshes; generated vs. reconstructed skull meshes in the muscle attachment point regions; and automatically generated vs. manually defined muscle lengths. Note that the circles represent outliers. The "x" marks and the numbers inside the boxplots indicate the mean values.
Figure 8. Dynamic strains of the left and right zygomaticus major and minor muscles during the dynamic smiling mimic.
Figure 9. Dynamic strains of the left and right zygomaticus major and minor muscles during the dynamic kissing mimic.
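The dynamic strain curves of Figures 8 and 9 amount to evaluating the strain per animation frame against the neutral length l0. The sketch below uses a synthetic smile-like length trajectory (not study data); only the neutral length is taken from Table 1.

```python
import numpy as np

def dynamic_strains(lengths: np.ndarray, l0: float) -> np.ndarray:
    """Per-frame strains (l_t - l0) / l0 for one muscle over a mimic sequence."""
    return (lengths - l0) / l0

l0 = 67.29                                        # mean male LZM length (Table 1, mm)
t = np.linspace(0.0, 1.0, 50)                     # normalized mimic time
lengths = l0 * (1.0 - 0.12 * np.sin(np.pi * t))   # synthetic shortening and release
strains_pct = 100.0 * dynamic_strains(lengths, l0)  # peaks near -12% (cf. Table 2)
```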
Table 1. Muscle lengths and perimeters of the linear and circular muscles in the neutral mimic of the male and female subjects in this study and our previous study. Values are action line lengths of the facial muscles in the neutral position ($l_0$), Mean ± SD ** in mm.

| Left/Right | Muscle Type | Muscle ID * | Males | Females |
|---|---|---|---|---|
| Left | Procerus | LP | 27.76 ± 2.41 | 27.56 ± 2.26 |
| Right | Procerus | RP | 26.58 ± 2.33 | 26.63 ± 2.26 |
| Left | Frontal Belly | LFB | 33.06 ± 1.84 | 32.07 ± 1.77 |
| Right | Frontal Belly | RFB | 32.60 ± 1.78 | 31.74 ± 1.79 |
| Left | Temporoparietalis | LT | 30.69 ± 1.73 | 30.10 ± 1.70 |
| Right | Temporoparietalis | RT | 22.89 ± 1.77 | 22.28 ± 1.79 |
| Left | Corrugator Supercilii | LCS | 29.83 ± 1.64 | 29.55 ± 1.67 |
| Right | Corrugator Supercilii | RCS | 28.45 ± 1.69 | 28.26 ± 1.84 |
| Left | Nasalis | LNa | 28.22 ± 1.94 | 27.56 ± 2.09 |
| Right | Nasalis | RNa | 29.05 ± 1.92 | 28.25 ± 2.12 |
| Left | Depressor Septi Nasi | LDSN | 17.12 ± 2.66 | 16.39 ± 2.31 |
| Right | Depressor Septi Nasi | RDSN | 13.29 ± 2.38 | 13.11 ± 2.35 |
| Left | Zygomaticus Minor | LZm | 59.21 ± 2.82 | 55.88 ± 2.74 |
| Right | Zygomaticus Minor | RZm | 54.73 ± 2.89 | 52.25 ± 2.85 |
| Left | Zygomaticus Major | LZM | 67.29 ± 2.71 | 63.70 ± 2.62 |
| Right | Zygomaticus Major | RZM | 62.93 ± 2.71 | 59.81 ± 2.68 |
| Left | Risorius | LR | 36.84 ± 2.05 | 37.19 ± 1.98 |
| Right | Risorius | RR | 36.04 ± 2.05 | 36.75 ± 1.88 |
| Left | Depressor Anguli Oris | LDAO | 31.19 ± 3.17 | 30.19 ± 2.17 |
| Right | Depressor Anguli Oris | RDAO | 32.39 ± 3.64 | 31.47 ± 2.65 |
| Left | Mentalis | LMe | 26.49 ± 3.23 | 26.76 ± 2.63 |
| Right | Mentalis | RMe | 29.07 ± 3.14 | 29.00 ± 2.60 |
| Left | Levator Labii Superioris | LLLS | 50.28 ± 3.01 | 47.05 ± 2.87 |
| Right | Levator Labii Superioris | RLLS | 46.54 ± 2.75 | 44.01 ± 2.82 |
| Left | Levator Labii Superioris Alaeque Nasi | LLLSAN | 60.39 ± 2.83 | 57.16 ± 2.81 |
| Right | Levator Labii Superioris Alaeque Nasi | RLLSAN | 59.65 ± 2.74 | 56.63 ± 2.82 |
| Left | Levator Anguli Oris | LLAO | 38.14 ± 2.87 | 34.96 ± 2.82 |
| Right | Levator Anguli Oris | RLAO | 34.49 ± 2.80 | 31.76 ± 2.76 |
| Left | Depressor Labii Inferioris | LDLI | 37.39 ± 2.20 | 37.33 ± 2.19 |
| Right | Depressor Labii Inferioris | RDLI | 35.99 ± 2.65 | 35.52 ± 2.22 |
| Left | Buccinator | LB | 55.65 ± 3.03 | 53.54 ± 3.05 |
| Right | Buccinator | RB | 52.32 ± 3.09 | 50.55 ± 2.94 |
| Left | Masseter | LMa | 49.45 ± 2.38 | 47.02 ± 2.20 |
| Right | Masseter | RMa | 52.14 ± 2.56 | 49.68 ± 2.23 |
| Left | Orbicularis Oculi | LOO | 153.60 ± 5.01 | 150.45 ± 5.25 |
| Right | Orbicularis Oculi | ROO | 148.66 ± 4.70 | 144.12 ± 4.81 |
| — | Orbicularis Oris | OO | 176.43 ± 7.94 | 165.76 ± 6.36 |

* ID: Identification; ** SD: Standard Deviation.
Table 2. Muscle strains of the male and female subjects in the static mimics: smiling and kissing. Strains were computed as $(l - l_0)/l_0$ and are reported as Mean ± SD in %.

| Muscle ID | Smiling (Males) | Smiling (Females) | Kissing (Males) | Kissing (Females) |
|---|---|---|---|---|
| LP | 3.06 ± 0.34 | 3.00 ± 0.26 | −3.05 ± 0.34 | −3.00 ± 0.26 |
| RP | 3.39 ± 0.38 | 3.30 ± 0.30 | −3.38 ± 0.38 | −3.30 ± 0.30 |
| LFB | 2.88 ± 0.21 | 2.87 ± 0.21 | −2.84 ± 0.21 | −2.84 ± 0.21 |
| RFB | 2.52 ± 0.17 | 2.56 ± 0.16 | −2.49 ± 0.17 | −2.53 ± 0.16 |
| LT | 2.50 ± 0.20 | 2.46 ± 0.19 | −2.47 ± 0.20 | −2.44 ± 0.19 |
| RT | 3.05 ± 0.29 | 3.07 ± 0.27 | −3.02 ± 0.28 | −3.05 ± 0.26 |
| LCS | −0.26 ± 0.30 | −0.05 ± 0.26 | 0.35 ± 0.30 | 0.14 ± 0.26 |
| RCS | −0.50 ± 0.44 | −0.19 ± 0.38 | 0.63 ± 0.44 | 0.32 ± 0.38 |
| LNa | −9.69 ± 1.03 | −9.40 ± 0.95 | 10.44 ± 1.10 | 10.08 ± 1.01 |
| RNa | −6.72 ± 0.77 | −6.97 ± 0.72 | 7.74 ± 0.80 | 7.86 ± 0.78 |
| LDSN | −22.40 ± 4.18 | −22.76 ± 3.70 | 22.88 ± 4.49 | 23.12 ± 3.78 |
| RDSN | −21.63 ± 3.86 | −22.83 ± 3.85 | 25.41 ± 4.43 | 25.95 ± 4.37 |
| LZm | −10.50 ± 0.48 | −10.41 ± 0.47 | 10.64 ± 0.48 | 10.62 ± 0.48 |
| RZm | −11.49 ± 0.60 | −11.32 ± 0.57 | 11.69 ± 0.61 | 11.63 ± 0.59 |
| LZM | −12.34 ± 0.50 | −12.34 ± 0.52 | 12.66 ± 0.50 | 12.75 ± 0.52 |
| RZM | −12.78 ± 0.60 | −12.74 ± 0.60 | 13.23 ± 0.59 | 13.31 ± 0.60 |
| LR | −12.17 ± 1.72 | −12.92 ± 1.63 | 13.88 ± 1.64 | 14.55 ± 1.63 |
| RR | −10.57 ± 2.02 | −11.49 ± 1.72 | 12.76 ± 2.00 | 13.55 ± 1.76 |
| LDAO | 5.23 ± 2.53 | 4.33 ± 3.09 | 1.32 ± 2.78 | 3.47 ± 2.98 |
| RDAO | 9.91 ± 3.01 | 9.92 ± 3.34 | −5.18 ± 2.23 | −3.95 ± 2.61 |
| LMe | −0.14 ± 1.46 | −0.79 ± 1.87 | 2.82 ± 2.04 | 4.03 ± 2.29 |
| RMe | 1.31 ± 1.14 | 0.90 ± 1.59 | 0.88 ± 1.12 | 1.77 ± 1.46 |
| LLLS | −11.76 ± 0.77 | −11.57 ± 0.75 | 12.13 ± 0.77 | 12.08 ± 0.75 |
| RLLS | −12.24 ± 0.95 | −11.96 ± 0.90 | 12.87 ± 0.96 | 12.74 ± 0.91 |
| LLLSAN | −8.05 ± 0.45 | −7.77 ± 0.51 | 8.65 ± 0.46 | 8.48 ± 0.50 |
| RLLSAN | −7.44 ± 0.47 | −7.21 ± 0.51 | 8.17 ± 0.47 | 8.03 ± 0.49 |
| LLAO | −22.01 ± 1.86 | −22.67 ± 2.01 | 22.82 ± 1.86 | 23.66 ± 2.05 |
| RLAO | −19.84 ± 2.20 | −20.62 ± 2.19 | 22.23 ± 2.38 | 23.14 ± 2.34 |
| LDLI | −7.46 ± 1.30 | −8.20 ± 1.27 | 8.25 ± 1.30 | 8.98 ± 1.29 |
| RDLI | −6.55 ± 1.68 | −7.82 ± 1.57 | 7.76 ± 1.77 | 9.04 ± 1.67 |
| LB | −12.89 ± 0.79 | −13.46 ± 0.89 | 13.57 ± 0.82 | 14.10 ± 0.95 |
| RB | −12.21 ± 0.78 | −12.77 ± 0.83 | 13.14 ± 0.84 | 13.66 ± 0.91 |
| LMa | 0.22 ± 0.49 | −1.16 ± 0.58 | 0.02 ± 0.45 | 0.86 ± 0.54 |
| RMa | −0.65 ± 0.48 | −0.25 ± 0.57 | 0.74 ± 0.46 | 0.14 ± 0.57 |
| LOO | −2.14 ± 0.10 | −2.16 ± 0.11 | 2.27 ± 0.11 | 2.29 ± 0.11 |
| ROO | −2.31 ± 0.10 | −2.39 ± 0.11 | 2.46 ± 0.10 | 2.54 ± 0.12 |
| OO | 13.18 ± 0.68 | 14.46 ± 0.66 | −11.31 ± 0.64 | −12.44 ± 0.61 |
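As one plausible way the Table 2 baselines could feed a decision-support rule (a sketch, not the published system), a patient's measured strain can be compared with the baseline mean ± SD via a z-score; the threshold of 2 and the helper names are our assumptions, while the baseline numbers are copied from Table 2 (male smiling mimic).

```python
# Baseline values copied from Table 2 (smiling mimic, male subjects)
BASELINE_SMILING_MALE = {          # muscle ID: (mean strain %, SD %)
    "LZM": (-12.34, 0.50),
    "RZM": (-12.78, 0.60),
    "LZm": (-10.50, 0.48),
    "RZm": (-11.49, 0.60),
}

def flag_muscle(muscle_id: str, measured_strain_pct: float, z_threshold: float = 2.0):
    """Return (z-score, flagged?) for a measured strain against the baseline."""
    mean, sd = BASELINE_SMILING_MALE[muscle_id]
    z = (measured_strain_pct - mean) / sd
    return z, abs(z) > z_threshold

z, flagged = flag_muscle("LZM", -6.0)        # weak left-side smile: only -6% strain
print(f"z = {z:.1f}, flagged = {flagged}")   # far outside the baseline range
```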
Table 3. Comparison of the muscle lengths computed in this study with those of our previous study and the literature (action line lengths in the neutral mimic, Mean ± SD in mm unless noted; "–" indicates a value not reported).

Study populations: This Study *: 5000 M, 5000 F; ages 29–49 years; in silico. Nguyen et al., 2021 [36]: 2 M, 3 F; ages 29–49; 3 healthy, 2 patients; weight 52–71 kg; height 1.65–1.77 m; BMI 18–26 kg/m². Freilinger et al., 1987 [38]: 20 cadavers; ages 62–94. Happak et al., 1997 [39]: 11 cadavers; ages 53–73 years. Bernington et al., 1999 [40]: 4 M, 6 F patients; ages 15–31. Fan et al., 2017 [4]: 1 healthy F; age 24; height 1.5 m; weight 57 kg.

| Muscle ID | This Study * (Males) | This Study * (Females) | Nguyen et al., 2021 [36] | Freilinger et al., 1987 [38] | Happak et al., 1997 [39] | Bernington et al., 1999 [40] | Fan et al., 2017 [4] | Dao et al., 2018 [5] |
|---|---|---|---|---|---|---|---|---|
| LZm | 59.21 ± 2.82 | 55.88 ± 2.74 | 51.05 ± 3.82 | – | 51.8 ± 7.4 | – | – | – |
| RZm | 54.73 ± 2.89 | 52.25 ± 2.85 | 53.90 ± 2.05 | – | 51.8 ± 7.4 | – | – | – |
| LZM | 67.29 ± 2.71 | 63.70 ± 2.62 | 58.45 ± 3.85 | M: 0.67 ± 6.32; F: 69.50 ± 6.58 | 65.6 ± 3.8 | – | 43.65 | 52 |
| RZM | 62.93 ± 2.71 | 59.81 ± 2.68 | 61.23 ± 3.05 | M: 0.67 ± 6.32; F: 69.50 ± 6.58 | 65.6 ± 3.8 | – | 43.65 | 52 |
| LDAO | 31.19 ± 3.17 | 30.19 ± 2.17 | 36.69 ± 3.23 | M: 37.83 ± 4.38; F: 38.33 ± 8.02 | 48 ± 5.1 | – | – | – |
| RDAO | 32.39 ± 3.64 | 31.47 ± 2.65 | 31.86 ± 3.35 | M: 37.83 ± 4.38; F: 38.33 ± 8.02 | 48 ± 5.1 | – | – | – |
| LLLS | 50.28 ± 3.01 | 47.05 ± 2.87 | 46.26 ± 3.00 | M: 33.67 ± 4.13; F: 35.50 ± 6.69 | 47 ± 7.5 | – | 29.3 | – |
| RLLS | 46.54 ± 2.75 | 44.01 ± 2.82 | 48.59 ± 2.14 | M: 33.67 ± 4.13; F: 35.50 ± 6.69 | 47 ± 7.5 | – | 29.3 | – |
| LLLSAN | 60.39 ± 2.83 | 57.16 ± 2.81 | 58.06 ± 3.65 | – | 61.6 ± 7.6 | – | – | – |
| RLLSAN | 59.65 ± 2.74 | 56.63 ± 2.82 | 59.46 ± 2.81 | – | 61.6 ± 7.6 | – | – | – |
| LLAO | 38.14 ± 2.87 | 34.96 ± 2.82 | 34.30 ± 2.53 | – | 42 ± 2.5 | – | 27.4 | – |
| RLAO | 34.49 ± 2.80 | 31.76 ± 2.76 | 35.51 ± 2.30 | – | 42 ± 2.5 | – | 27.4 | – |
| LDLI | 37.39 ± 2.20 | 37.33 ± 2.19 | 36.73 ± 4.39 | – | 29 ± 4.9 | – | – | – |
| RDLI | 35.99 ± 2.65 | 35.52 ± 2.22 | 37.01 ± 4.16 | – | 29 ± 4.9 | – | – | – |
| LB | 55.65 ± 3.03 | 53.54 ± 3.05 | 56.35 ± 3.35 | – | 56 ± 7.4 | – | – | – |
| RB | 52.32 ± 3.09 | 50.55 ± 2.94 | 55.18 ± 2.01 | – | 56 ± 7.4 | – | – | – |
| LMa | 49.45 ± 2.38 | 47.02 ± 2.20 | 44.93 ± 2.35 | – | – | M: 45.9 ± 5.8; F: 39.1 ± 8.2 | – | – |
| RMa | 52.14 ± 2.56 | 49.68 ± 2.23 | 45.03 ± 2.57 | – | – | M: 45.9 ± 5.8; F: 39.1 ± 8.2 | – | – |
| VLOO | 51.81 ± 1.92 | 50.46 ± 2.03 | 40.70 ± 2.99 | – | 60 ± 9.6 | – | – | – |
| VROO | 46.72 ± 1.63 | 45.29 ± 1.93 | 41.62 ± 2.13 | – | 60 ± 9.6 | – | – | – |
| HLOO | 36.47 ± 1.73 | 35.68 ± 1.67 | 56.53 ± 3.23 | – | 65 ± 5.6 | – | – | – |
| HROO | 36.48 ± 1.75 | 35.82 ± 1.78 | 56.92 ± 2.85 | – | 65 ± 5.6 | – | – | – |

* M: Male; F: Female; Ages: Min–Max (years old).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
