Review

Frontiers in Three-Dimensional Surface Imaging Systems for 3D Face Acquisition in Craniofacial Research and Practice: An Updated Literature Review

by Pradeep Singh 1, Michael M. Bornstein 2, Richard Tai-Chiu Hsung 3,4, Deepal Haresh Ajmera 1, Yiu Yan Leung 4 and Min Gu 1,*

1 Discipline of Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
2 Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
3 Department of Computer Science, Hong Kong Chu Hai College, Hong Kong SAR, China
4 Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(4), 423; https://doi.org/10.3390/diagnostics14040423
Submission received: 8 January 2024 / Revised: 2 February 2024 / Accepted: 8 February 2024 / Published: 14 February 2024
(This article belongs to the Section Medical Imaging and Theranostics)

Abstract: Digitalizing all aspects of dental care is a contemporary approach to ensuring the best possible clinical outcomes. Ongoing advancements in 3D face acquisition have been driven by continuous research on craniofacial structures and treatment effects. An array of 3D surface-imaging systems are currently available for generating photorealistic 3D facial images. However, choosing a purpose-specific system is challenging for clinicians due to variations in accuracy, reliability, resolution, and portability. Therefore, this review aims to provide clinicians and researchers with an overview of currently used or potential 3D surface imaging technologies and systems for 3D face acquisition in craniofacial research and daily practice. Through a comprehensive literature search, 71 articles meeting the inclusion criteria were included in the qualitative analysis, investigating the hardware, software, and operational aspects of these systems. The review offers updated information on 3D surface imaging technologies and systems to guide clinicians in selecting an optimal 3D face acquisition system. While some of these systems have already been implemented in clinical settings, others hold promise. Furthermore, driven by technological advances, novel devices will become cost-effective and portable, and will also enable accurate quantitative assessments, rapid treatment simulations, and improved outcomes.

1. Introduction

Digitizing dental care is a contemporary approach to achieving optimal clinical results. The evolution of digitization in dentistry has been fueled by cutting-edge technologies, particularly in three-dimensional (3D) surface imaging, and is progressing towards digital treatments. Three-dimensional facial images reflecting actual faces are deemed the most reliable for diagnosis, treatment planning, and the prediction of treatment outcomes [1]. Ongoing advancements in 3D surface imaging have enabled surgeons and orthodontists to assess surgical outcomes and gather detailed information about craniofacial structures [2,3]. Moreover, current 3D surface-imaging technologies provide more detailed information than conventional imaging methods while exposing the patient to no ionizing radiation [4]. With the advent of 3D surface-imaging systems, a variety of clinical applications are now possible [5,6], including virtual articulation [7], obstructive sleep apnea diagnosis and prediction [8], smile design [9], virtual patient simulation [10], augmented-reality-driven real-time visualization of an operative scenario [11], and even interdisciplinary communication [12,13]. Also, the integration of 3D facial scans with artificial intelligence (AI) and machine learning (ML) [14] has made it possible to devise patient-specific treatment plans for plastic and reconstructive surgery [15], in addition to automating diagnosis in maxillofacial surgery and the identification of children with Autism Spectrum Disorder (ASD) [16]. Thus, there is growing interest in adapting these technologies for various medical and dental disciplines.
An array of 3D surface-imaging technologies, including stereophotogrammetry, laser-based scanning, and structured light scanning, are currently available for generating photorealistic 3D facial images; however, 3D face acquisition systems based on these technologies differ in their accuracy, reliability, resolution, usability, and portability [17]. While incorporating some cutting-edge systems into a clinical setting without an in-depth understanding of their technicalities and applications could be an arduous task, others may be unsuitable because of constraints on resources, time, or space. Therefore, choosing a purpose-specific 3D face acquisition system can be challenging for clinicians owing to varied costs and hardware and software characteristics. Practitioners would feel more at ease integrating 3D surface imaging technologies into the dental workflow if they understood the various characteristics of 3D face acquisition systems. Hence, given the breadth of this rapidly evolving technology, providing clinicians and readers with updated information analyzing the various 3D face acquisition systems that are currently available is indispensable. Although several reviews of 3D surface imaging systems have been published previously [3,18,19], the majority are technical and engineering-specific. Therefore, with a focus on 3D face acquisition, this review aims to provide clinicians and researchers with an overview of the 3D surface imaging technologies and systems that are currently being used, or have the potential to be used in the future, for 3D face acquisition in craniofacial research and daily practice. The information presented in this review will guide clinicians in selecting the 3D face acquisition system best suited to their clinical environment.

2. Materials and Methods

2.1. Eligibility Criteria

A literature search was conducted to find studies that matched the Population–Intervention–Control–Outcome (PICO) criteria for the topic: “What 3D surface imaging systems can be used for 3D face acquisition in craniofacial research and practice?” Only studies that addressed this question were considered appropriate for this review. The PICO criteria elements can be found in Figure 1.
The inclusion criteria for the studies were as follows: (1) utilization of a 3D surface imaging system for craniofacial research; (2) conducted on humans with proper analytical design (e.g., case–control studies, cross-sectional studies, prospective studies, retrospective studies including pilot studies); (3) absence of data duplication or overlap with other articles; and (4) availability of full-text studies published in English. Studies involving animals, those not focused on craniofacial research, those lacking data on hardware or software characteristics, those failing to report the type of 3D surface imaging system used, letters to editors, and review articles were excluded from this review.

2.2. Information Sources and Literature Search

Two authors (P.S. and D.A.) conducted a systematic and independent search for relevant studies. The search was comprehensive and included the electronic databases PubMed, EMBASE (via Ovid), Medline (via Ovid), Cochrane Library, Scopus, and Web of Science. The search spanned from January 2008 to November 2023 and utilized a combination of medical subject heading (MeSH) terms as keywords. Adjustments were made to the vocabulary and syntax across the databases. The search was not limited to published studies. In addition to the computer-assisted search, manual searches were conducted in scientific journals, online literature, reference lists of relevant reviews and retrieved studies, and articles that may have been missed. Furthermore, unpublished data were sought through the OpenGrey database, http://www.opengrey.eu/ (accessed on 7 November 2023).

2.3. Study Selection

Following a comprehensive literature search, two authors (P.S. and D.A.) systematically evaluated the titles and abstracts of potential studies to determine their eligibility based on predetermined inclusion and exclusion criteria. Any disagreements regarding inclusion were resolved through discussion. If disagreements persisted, a third author (G.M.) independently evaluated the articles. Full-text studies that met the inclusion criteria were retrieved. Endnote™, version 20 (Clarivate Analytics, Philadelphia, PA, USA), was used for collating, managing potentially eligible records, and obtaining bibliographic citations from the literature search.

2.4. Data Extraction and Outcomes of Interest

The two reviewers (P.S. and D.A.) independently performed data extraction using a standardized and predefined data format. The aim was to discern the following outcomes of interest: (1) operational considerations and (2) performance. To achieve this, relevant data from the full-text articles were extracted as follows:
(1) Hardware characteristics (portability, system mobility, sensor position, cost-effectiveness);
(2) Software characteristics (CT/CBCT integration, surgery simulation, real-time 3D volumetric visualization, tissue behavior simulation, progress and outcome monitoring);
(3) Functionality (purpose, data delivery, capture speed, processing time, scan range, coverage, optimal 3D measurement range, color image, scan requisite, output format, scan processing software enabled, accuracy, precision, archivable data, user-friendliness, system requirements, calibration time). Table 1 illustrates the definitions of the various characteristics studied in 3D face acquisition systems.
The study is organized into several sections. Section 3 delves into 3D surface imaging technologies and systems. Section 4 provides a discussion of general considerations and practical information about 3D face acquisition. Section 5 presents future directions. Lastly, the final section provides conclusions drawn from the study.

3. Results

Figure 2 presents the flowchart of the study selection process. A comprehensive search of six databases initially identified 926 records, and an additional 68 records were found from other sources.
After removing nine duplicates, the titles and abstracts of 917 articles were screened. Of these, 836 articles were found to be irrelevant to the topic and were excluded. Following the initial screening, 149 articles (81 from the database search and 68 from additional sources) were retrieved as full texts. After a detailed review of these articles, 78 studies were eliminated. Finally, two reviewers (D.A. and P.S.) deemed the 71 articles that met the inclusion criteria suitable for qualitative analysis and included them in this review. Articles were categorized according to the 3D visualization principle of the surface imaging system employed. Figure 3 provides a broad classification of the 3D surface imaging systems analyzed in the present review.

3.1. Findings on 3D Surface Imaging Technologies and Systems

The hardware, software, and functionality characteristics of the 3D face acquisition systems are summarized in Table 2. Furthermore, Table 3 highlights the limitations and drawbacks associated with each system.

3.1.1. Laser-Based Scanning

The first 3D laser scanning system for clinical use was introduced in 1991 by Moss et al. to monitor the growth patterns of children with facial deformities [20]. Since then, laser scanners have been extensively used in anthropometric studies and applied clinically. They generate 3D images of the facial surface by projecting an eye-safe class 1 laser beam onto the patient’s face; the light scattered back from the skin is detected and triangulated to determine the 3D coordinates (x, y, and z) of the surface points [21,22]. The scanning process takes about 10 s and captures involuntary facial movements more distinctly than stereophotogrammetry [23]. Laser scanners for 3D imaging can be classified as single-point or slit scanners based on the beam source. For facial morphology analysis, slit scanners are a more practical option in terms of scanning time and mechanical simplicity [24]. The two most common laser scanners used for dental facial imaging are the Minolta Vivid 910 and FastSCAN II [25].
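The depth recovery underlying such scanners is a simple triangulation between the laser emitter and the camera. The sketch below is purely illustrative (it is not the internal algorithm of the Minolta Vivid 910 or FastSCAN II); the baseline and angles are hypothetical values chosen to show the geometry.

```python
import numpy as np

def laser_triangulation_depth(baseline_mm: float, laser_angle_deg: float,
                              camera_angle_deg: float) -> float:
    """Perpendicular distance (mm) of the illuminated surface point from the
    emitter-camera baseline, given the laser projection angle and the angle at
    which the camera observes the laser spot (both measured from the baseline)."""
    ta = np.tan(np.radians(laser_angle_deg))
    tc = np.tan(np.radians(camera_angle_deg))
    return baseline_mm * ta * tc / (ta + tc)

# Example: 100 mm baseline, laser projected at 60 deg, spot observed at 75 deg
print(f"{laser_triangulation_depth(100.0, 60.0, 75.0):.1f} mm")  # ~118.3 mm
```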

Minolta Vivid 910 

Minolta Vivid scanners (Konica Minolta Sensing Inc., Tokyo, Japan; https://www.konicaminolta.com/instruments/download/instruction_manual/3d/index.html) (accessed on 8 December 2023) are widely used scanners for medical or facial scanning. The currently available models include the Vivid 900 and Vivid 910 [26]. The non-contact Vivid 910 scanner captures 3D data using triangulation [27] and manifests high-speed scanning capabilities, with the ability to capture a field of view (FOV) of 10 cm² to 1 m² in just 2.5 s. The generated polygonal mesh contains over 300,000 vertices (connected points) and retains all the connectivity information, thus improving the detailed capture and eliminating geometric ambiguities. The Vivid 910 can automatically detect the optimal measurement distance and laser intensity through autofocus (AF) and autoexposure (AE) technologies, respectively. It can measure objects of various sizes with three interchangeable lenses and is equipped with a high-accuracy calibration unit for 3D data calculation. It can be operated without a host computer by recording data on a compact flash memory card. The acquired images are 24-bit color, without parallax errors, and oriented on the same optical axis as the 3D data, allowing for the creation of true-color 3D models.
Validation and Dental Applications: The Vivid scanner can assist in fabricating orthodontic appliances, anthropometric measurements, prosthetic and orthotic manufacturing, rapid prototyping, and forensic modeling. Kau et al. evaluated the reliability of this system for measuring facial morphology and found no significant difference between facial scans taken at baseline and those repeated 3 min and 3 days later [28]. Kovacs et al. compared Vivid 910 scans of dummies with manual measurements and reported an error range of <2 mm for >93% of the data [29]. However, the accuracy was questionable because of the absence of voluntary and involuntary movements in dummies.

FastSCAN II 

FastSCAN II (Polhemus, Colchester, VT, USA; https://polhemus.com/scanning-digitizing/fastscan/) (accessed on 10 December 2023) is an ultra-portable, fast, and convenient laser scanner that takes less than 1 min to scan a human face. It uses a handheld wand mounted with a laser to triangulate the 3D object and a camera to record cross-sectional depth profiles, thereby measuring 3D shapes instantly as the wand is swept over the object [30]. It is a dynamic system that allows the object to be turned and rotated during scanning. Furthermore, it is equipped with a magnetic tracking system that tracks the wand’s location and allows for the automatic registration and real-time transformation of a single scan into a 3D model. It has a sensing range of 3–6 feet and can even scan complex organic shapes that are otherwise difficult to scan. The salient features of FastSCAN include: (a) auto-stitching of 3D images in real time, which eliminates post-processing errors; (b) surface editing, which allows raw scan modification through the selection and deletion of raw data points; (c) on-screen direct linear measurements; and (d) accurate scanning through glass by compensating for refractive error.
Validation and Dental Applications: FastSCAN II has been reported to be a reliable, accurate, and clinically valid alternative to manual measurements and a quick, easy, and useful tool for the objective assessment of craniofacial malformations [31] that can be used for manufacturing orthotics and prosthetics, rapid prototyping, and forensic modeling.

3.1.2. Stereophotogrammetry

Photogrammetry is the science of photographic measurement and involves the reconstruction of two-dimensional (2D) and 3D structures from photographic reproductions. It has been used in the fields of medicine and dentistry since the 1940s [32]; its first clinical use in orthodontics was reported by Thalmann-Degen in 1944 [33]. A modified and standardized version of this technique is “stereophotogrammetry”, wherein two or more stereo-paired cameras coordinate to capture an image simultaneously from different viewpoints and combine the multiple views into a 3D image [34,35,36]. Stereophotogrammetric image acquisition can be performed actively or passively, depending on whether a patterned light is projected. Active stereophotogrammetry involves projecting a patterned light onto an object and utilizing two or more cameras to capture the deformation of the pattern caused by the object’s surface via triangulation and calibration. This approach is simpler than passive stereophotogrammetry, which determines the 3D surface of an object from two or more camera views without pattern projection, making the correspondence between views more difficult and ambiguous to establish. Contemporary digital stereophotogrammetry provides quantitative mesh information in the form of a dense polygon mesh accompanied by a qualitative, life-like rendering of the facial surface. High-velocity acquisition of the entire face (almost 360°) simultaneously from different viewpoints and reproduction of realistic facial surface geometry, texture, color, precision, and reproducibility render stereophotogrammetry the gold standard for facial scanning [37]. This study discusses the three commonly used stereophotogrammetry systems in facial morphology imaging.
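For a calibrated and rectified stereo pair, the triangulation step reduces to converting the pixel disparity of each matched point into depth. The snippet below is a generic illustration of this relationship, not the proprietary reconstruction pipeline of any system discussed here; the focal length, baseline, and disparities are hypothetical.

```python
import numpy as np

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px) -> np.ndarray:
    """Depth (mm) of matched points in a rectified stereo pair: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_mm / d, np.nan)  # invalid where d = 0

# Example: focal length 1200 px, 120 mm camera baseline, three matched facial points
print(depth_from_disparity(1200.0, 120.0, [160.0, 200.0, 240.0]))  # [900. 720. 600.] mm
```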

Vectra H1 

Canfield (Canfield Scientific, Inc., Fairfield, NJ, USA; https://www.canfieldsci.com/) (accessed on 12 December 2023) offers several 3D surface-imaging solutions, including Vectra H1, Vectra M3, Vectra XT, and Vectra CR. Among these, passive stereophotogrammetry-based Vectra H1 has been the most favored portable imaging system as it is capable of capturing frontal faces through 100° and is affordable (Figure 4).
Canfield Sculptor™ software (version 6.10) enables tissue simulations with 3D surface images and has an array of striking features: the system is lightweight and handheld, offers simple and intuitive image capture, and automatically stitches three facial captures into one 3D image. It uses a grey mode for facial contour evaluation and can differentiate between red and brown skin components for skin condition assessment using Canfield’s proprietary RBX® technology.
Validation and Dental Applications: Vectra H1 offers 3D assessment and simulation tools, such as marker-less tracking of soft tissue changes and volumetric changes after orthognathic surgery and automated linear and angular measurements for orthodontic treatment planning. It simulates the effects of combining complementary procedures on the treatment outcome, thus facilitating easy communication of surgical and non-surgical procedures to the patients. The accuracy and reproducibility of this system have been validated in previous studies and found to be sufficiently accurate for clinical use, with a random error < 1.5 mm for linear, angular, and surface area measurements [38,39], and the average participant and technical errors have been found to be 0.40 ± 0.06 mm and 0.34 ± 0.13 mm, respectively [40].
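Linear and angular measurements of the kind mentioned above are straightforward to derive once 3D landmark coordinates have been exported from a scan. The following sketch uses hypothetical landmark names and coordinates purely for illustration; it is not part of the Vectra software.

```python
import numpy as np

# Hypothetical 3D facial landmarks (mm) as exported from a face scan
landmarks = {
    "nasion":    np.array([0.0, 45.0, 90.0]),
    "pronasale": np.array([0.0, 20.0, 112.0]),
    "subnasale": np.array([0.0, 10.0, 95.0]),
}

def distance(a: str, b: str) -> float:
    """Linear distance (mm) between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

def angle(a: str, vertex: str, c: str) -> float:
    """Angle (degrees) at `vertex` formed by the landmarks a-vertex-c."""
    u = landmarks[a] - landmarks[vertex]
    v = landmarks[c] - landmarks[vertex]
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

print(f"Nasion-pronasale distance: {distance('nasion', 'pronasale'):.1f} mm")
print(f"Angle at pronasale: {angle('nasion', 'pronasale', 'subnasale'):.1f} deg")
```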

Di3D FCS-100 

Di3D (Dimensional Imaging Ltd., Glasgow, Scotland; https://di4d.com/) (accessed on 14 December 2023) is another passive stereophotogrammetry-based imaging system, developed by Dimensional Imaging in 2002. Its Di3D FCS-100 model was specifically designed to capture 3D surface images of human faces [41]. The system utilizes standard digital still cameras with the shutter speed set at 1/50th of a second and an F/20 aperture, capturing frontal faces instantaneously through 180° within the flash illumination (0.00125 s) and thereby eliminating motion artifacts. This instantaneous capture is particularly useful for adults or children who find it difficult to stay still while facial expressions are being captured. In Di3D, the 3D geometry is represented by a continuous point cloud that is converted into a mesh. The company’s proprietary software, Di3Dcapture, integrates image capture and 3D processing automatically to generate a dense range map image, which is then merged with the original color image to produce seamless, high-definition, photographic-quality 3D surface images. The trademarked Di3Dview software (version 3.9) features tools for advanced 3D model alignment, mesh conformation, re-texturing, and morphing and allows for simple and effective landmark placement, point measurement, volume measurement, and symmetry assessment.
Validation and Dental Applications: Khambay et al. reported a mean system error of 0.2 mm for facial casts [42], while Winder et al. reported a geometric accuracy of 0.057 mm, a reproducibility error of 0.0016 mm, and a mean error of 0.6 mm for linear measurements using the Di3D system [43]. Fourie et al. also confirmed the system’s accuracy and reliability for clinical and research purposes using indirect anthropometric measurements from Di3D-derived 3D soft tissue surface models of cadaveric heads [44].

3.1.3. Structured Light Scanning

Structured light scanners capture 3D information by means of projected light patterns [45]. A fully structured light pattern, such as an elliptical pattern or a random texture map, is projected onto the patient’s face, and the distortions in this pattern caused by the facial morphology are detected using a charge-coupled device. Following this, the distance of each point in the pattern is automatically calculated, and a 3D image is generated [46]. This technique virtually eliminates motion artifacts owing to its rapid capture speed of just a few milliseconds.
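Conceptually, the projector and camera form a triangulation pair: the depth of each surface point follows from how far the projected pattern element shifts relative to its position on a flat calibration plane. The sketch below illustrates this relationship under assumed, hypothetical calibration values; it is not the reconstruction code of any commercial scanner.

```python
import numpy as np

def depth_from_pattern_shift(focal_px: float, baseline_mm: float,
                             shift_px, ref_depth_mm: float) -> np.ndarray:
    """Depth from the lateral shift of a projected pattern point relative to where
    it appears on a flat reference plane at ref_depth_mm (camera-projector
    triangulation): shift = f * B * (1/Z - 1/Z_ref)  =>  Z = 1 / (shift/(f*B) + 1/Z_ref)."""
    s = np.asarray(shift_px, dtype=float)
    return 1.0 / (s / (focal_px * baseline_mm) + 1.0 / ref_depth_mm)

# Points whose pattern elements shift appear closer than the 800 mm reference plane
print(depth_from_pattern_shift(1000.0, 75.0, [0.0, 10.0, 25.0], 800.0))  # ~[800 723 632] mm
```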

Morpheus 3D 

Morpheus 3D (Morpheus Co., Ltd., Seoul, Republic of Korea; https://www.morpheus3d.co.kr) (accessed on 16 December 2023) is a photogrammetry-based device that employs the structured light scanning principle. It uses an LED and a 3D scanner to generate 3D images and a 3D simulator that provides treatment simulations based on the scanned data. This compact and user-friendly device (390 mm × 140 mm × 240 mm) captures three images (front, right, and left) from three different angles (45°) within a fraction of a second (approximately 0.8 s), and the acquired 3D image data are merged into a single composite 3D facial image within 2 min through registration and integration processes [47] (Figure 5).
The overlapping areas resulting from the merging of the images are automatically deleted during these processes. The proprietary software, MAS (Morpheus3D Aesthetic Solution) (version 3.0), can perform various operations, including 3D diagnostics, patient data management, and 3D computed tomography (CT) bone surgery, thus eliminating the need for individual software for each function.
Validation and Dental Applications: Morpheus 3D provides a complete solution for facial diagnosis via automated landmark detection; assessment of the patient’s proportion ratio and golden ratio; evaluation of facial asymmetry; linear, curved, and volumetric measurements; comparison of pre- and post-treatment changes; and simulations for various facial treatments such as Botox, laser, thread, filler, facelift, and double eyelid surgery. Previous studies have reported good congruence with anthropometric measurements, with mean differences of 0.9 mm [48] and 0.75 mm in the accuracy of the 3D images, and have validated its usefulness in clinical settings. Lee et al. validated this system for facial skin thickness estimation [49], while Traisrisin et al. reported Morpheus 3D Facemaker software (version 3.0) to be accurate in predicting soft tissue changes following orthognathic surgery [50].

Accu3D 

Accu3D (Accu3DX Co., Ltd., Taichung City, Taiwan; https://www.accu3dx.com) (accessed on 20 December 2023) is an ergonomically designed, portable, and user-friendly scanner with an ultra-fast capture speed that delivers highly precise 3D geometry and texture quality in just 0.5 s. The system can capture the patient’s 3D image, including posteriorly located landmarks, by using an adjustable mount that can rotate beyond 180°. It offers three different scanning modes, namely “face only”, “face and neck”, and “all face”, depending on the clinical requirement. For instance, in “all face” mode, the frontal, left, right, and neck views are merged together to generate a complete 3D image (Figure 6).
The company’s Accu3DX Pro software integrates web-based data management and artificial intelligence (AI)-based facial analysis technology. The intuitive web database and web viewer features provide triple encryption for patients’ 3D data that can be used on any platform worldwide. Moreover, AI-driven landmark detection and head positioning shorten pre-analysis preparation times, while a patented treatment simulation algorithm yields a treatment plan within 30 s.
Validation and Dental Applications: The 3D data generated from the Accu3D scan can be used for orthodontic treatment planning, analyzing facial proportions, and assessing facial asymmetry. Additionally, Accu3DX Pro offers effective face comparison and superimposition features for pre- and post-treatment follow-ups and growth change evaluations. The scientific validity of this system is currently questionable due to a lack of peer-reviewed studies, although the company claims an accuracy of 0.2 mm.

Axis Three XS-200 

The Axis Three XS-200 is a structured-light-based scanner developed in 2002 (Axis Three, Belfast, Ireland) that is specifically optimized to capture facial topology and simulate facial surgery outcomes [51]. The XS-200 is a sleek, desk-mountable unit with a minimal hardware footprint and a modular plug-and-play design that integrates three high-resolution imaging heads and a patented Color Coded Triangulation (CCT™; Siemens and Axis Three patented technology) algorithm to facilitate anatomically accurate 3D image capture with an optimal processing time. The 3D image geometry is delivered as a continuous point cloud, which is later converted into a mesh. Axis Three’s Tissue Behavior Simulation (TBS™) engine [35] facilitates precise real-time model generation within a few seconds [52], while its face module software enables the prediction and visualization of 3D postsurgical outcomes. The facial regions can be adjusted using preinstalled fine-tuning tools, with subtle yet discernible differences. The software also offers six different simulation views for comparison and accurate point-to-point measurement purposes. Despite gaining traction among surgeons in over 80 countries worldwide, the company went out of business in 2014. However, CIA Medical (https://www.ciamedical.com/search/Axis+Three+Ltd+XS+200) (accessed on 22 December 2023) stepped in as a distributor and is now providing after-sales support.
Validation and Dental Applications: Axis Three XS-200 allows for the precise 3D simulation of various facial cosmetic surgery procedures, including facelifts (https://www.bodysculpt.com/face/face-lift/) (accessed on 22 December 2023), rhinoplasty (https://www.bodysculpt.com/face/) (accessed on 22 December 2023), chin implants, neck lifts, and cheek augmentation, thereby redefining the esthetic consultation experience and delivering unprecedented patient confidence and postsurgical satisfaction. While the company claims a system accuracy of approximately 0.5 mm, scientific studies have not yet been conducted to validate this claim.

3.1.4. Cone-Beam Computed Tomography Integrated

Planmeca ProFace 

Cone-beam computed tomography (CBCT) has been extensively commercialized and widely used in dentistry since the 1990s [53]. CBCT systems have evolved considerably over time, and several hardware and software solutions are currently available. One such update to the CBCT system by Planmeca Oy, Helsinki, Finland (https://www.planmeca.com) (accessed on 24 December 2023), is the integration of a 3D face camera. Planmeca ProFace is an exclusive 3D face photo system available with all Planmeca ProMax 3D CBCT units. It is a laser-scanning-based system wherein lasers capture the facial geometry and digital cameras capture the color and texture of the face. Additionally, it offers the freedom to generate a 3D facial image alone, without exposing the patient to any ionizing radiation. This unique system integrates the CBCT volume with realistic 3D face photos in a single imaging session without any additional steps for 3D face photo acquisition [54], thus ensuring perfectly compatible images, as the patient’s position, muscle position, and facial expressions remain unchanged. The company’s copyrighted software, Planmeca Romexis (version 6.4.5), coalesces the acquired information into a 3D face photo (Figure 7) that can be viewed as an image or in conjunction with the CBCT volume for a detailed view of the facial anatomy.
Validation and Dental Applications: Planmeca ProFace software (version 4.5.0.R) provides a safe, fast, precise, and effective tool for preoperative planning and follow-up for cosmetic, orthodontic, and maxillofacial surgeries. It allows for the visualization of soft tissues relative to the facial bones and dentition and offers various treatment planning functions, such as measurements, pre- and post-operative comparison, adjustments, and superimposition of images. The scientific validation of this system is a subject of debate. While Liberton et al. found its validity to be comparable to other surface imaging systems [55], a recent study by Amornvit et al. reported lower scanning accuracy [56].

3.1.5. Smartphone-Based Scanning

Bellus3D 

Bellus3D, Inc. (Campbell, CA, USA; https://www.bellus3d.com) (accessed on 21 March 2021) is a Silicon Valley startup established in 2015 that has pioneered several face scanning programs, such as the ARC system, Dental Pro, Face Maker, and FaceApp [57]. Among these, its proprietary applications, Bellus3D FaceApp for iOS and Face Camera Pro for the Android and Windows platforms, have recently been favored. Both products have been designed for self-scanning and can generate a complete 3D facial reconstruction (from left to right ear) in a single scan within a remarkably short scanning time. The generated 3D scans are not only readily downloadable and usable for direct printing without any post-processing, but they can also be exported to other applications and augmented reality/virtual reality (AR/VR) digital environments.
1. Bellus3D FaceApp
Bellus3D FaceApp (version 3P) is a free-to-use face-scanning mobile application (app) for iPhones and iPads. Bellus3D was the first to utilize the iPhone or iPad’s built-in TrueDepth camera, which uses infrared light to project over 30,000 dots, to generate high-resolution 3D facial scans [58]. This simple, user-friendly app is capable of capturing more than 250,000 3D data points of a user’s face within 10 s. Three-dimensional face scan acquisition involves facing the smartphone’s front camera and turning the head from left to right according to the app’s voice instructions. A 3D face is then virtually constructed with lifelike quality and can be zoomed, rotated, and viewed in 3D (Figure 8).
FaceApp offers options to scan the face, face and neck, or the full head, with an additional step needed for the latter two (turning the user’s head up and down). The app also employs the smartphone’s built-in gyro for controlled viewing of the 3D face, which can be saved to photo albums or shared on social media platforms. In addition, the app allows users to adjust the mesh smoothness and choose different scanning modes, such as low-definition (LD), standard-definition (SD), and high-definition (HD).
2. Face Camera Pro
Unlike FaceApp, Face Camera Pro is a universal serial bus (USB) accessory camera specifically developed for Android and Windows devices. It is an easy-to-use, affordable, dual-structured-light scanner that attaches to a cellphone or tablet and is controlled by the company’s proprietary software program, Face Camera App. The camera incorporates two infrared sensors (1 megapixel; 1280 × 800 pixels), one color sensor (2 megapixels; 1600 × 1200 pixels), and two infrared laser-structured light projectors. It utilizes DepthShape and PhotoShape technologies to capture more than 500,000 3D facial data points in one scan. Furthermore, similarly to FaceApp, Face Camera Pro requires the user merely to turn their head from left to right to generate realistic 3D facial models within seconds, which can be saved in the SD or HD scanning modes.
Validation and Dental Applications: Bellus3D scanning platforms can be used to model and compare dental or plastic surgery outcomes in 3D. Regarding scientific validity, Piedra-Cascón et al. reported mean precision and mean trueness values of 0.32 mm and 0.91 mm, respectively, for Face Camera Pro [59]. Likewise, another study by Cascos et al. reported mean accuracy values of 0.61 mm and 0.28 mm with Face Camera Pro in maximum intercuspation and smile, respectively [60]. On the other hand, Bellus3D FaceApp has been reported to be less accurate than other scanners for depths greater than 2 mm. Similarly, Dzelzkaleja et al. demonstrated that the scanning results of Bellus3D FaceApp were not up to the standards set by other scanning apps such as Heges and Scandy Pro [61].

3.1.6. Four-Dimensional Imaging (Dynamic 3D)

Technological advancements from 3D scanning to four-dimensional (4D) scanning systems have enabled scanning in motion and have overcome the challenges associated with 2D still and dynamic face recognition systems, such as makeup, pose variation, and illumination [62,63,64]. In this regard, 3dMD and DI4D are the two commercially available 4D systems for facial scanning based on the “multi-view stereo acquisition technique” [65]. This technique employs the placement of multiple cameras at different viewpoints to capture various images of the scene, providing the corresponding points for image reconstruction.

3dMD 

3dMD is a hybrid stereophotogrammetry system first developed in 1997 (3dMD LLC, Atlanta, GA, USA; https://3dmd.com/) (accessed on 24 December 2023), and employs both active and passive stereophotogrammetry strategies that capture the deformation of the target object’s surface in a frame-by-frame manner. Superior-quality 3D images are generated by close synchronization between machine vision cameras, high-quality sensors, and robust light-emitting diode (LED) lighting, assisted by software algorithms that are based on projected random patterns (active) as well as skin textures (passive) for triangulation. 3dMD’s proprietary engine integrates information from all the camera viewpoints per frame, eliminating the need for manual stitching and registration, and automatically applies high-texture maps and renders colorful 3D images (Figure 9).
3dMD technology offers 4D motion recording as a standard feature with all its products. It offers seven different preconfigured dynamic-4D capture systems, namely, 3dMDface/3dMDtrio, 3dMDhead, 3dMDhand, 3dMDbody, 3dMDfoot, and also a customizable 3dMDflex, all powered by 3dMD software; however, only 3dMDface and 3dMDtrio have been preferred for facial surface imaging purposes. 3dMDface uses two modular camera units (MCUs) employing six high-frame-rate machine vision cameras placed at two viewpoints and synchronized with robust LED lighting, while 3dMDtrio comprises three MCUs that incorporate nine high-frame-rate machine vision cameras placed at three viewpoints. 3dMDface provides 190° full face coverage and captures the face and neck from ear to ear, which can be enhanced to 220° using 3dMDtrio and up to 360° using high-end models. 3dMD is capable of capturing 10 min of sequential 3D surface images and enables the operator to choose the entire image sequence for analysis or render the optimal time for immediate evaluation. The geometry is represented by a continuous point cloud per frame, and thousands of individual surface points can be tracked with six degrees of freedom using dense surface tracking. Furthermore, 3dMD’s eye-safe optics-based technology provides comfortable subject illumination even after prolonged sessions.
Validation and Dental Applications: A wide range of facial expressions, smiles, functions, and speech can be recorded in 4D using the 3dMD systems. The precision, reproducibility, and accuracy of the 3dMD have been extensively validated in the literature, making it the gold standard in 3D surface imaging [35,36,66,67], with an average technical error of 0.35 ± 0.14 mm and a mean global error of 0.2 mm reported [68] for 3dMDface images. The system has been widely used for facial asymmetry assessment [69], evaluating sexual dimorphism [70], investigating oral and maxillofacial surgery treatment outcomes [71], obstructive sleep apnea prediction [8], anthropometric comparisons [21,67,72,73,74], assessment of facial swellings, and changes in volume [75]. Additionally, it has been utilized for developing a dynamic facial expression database [76,77], defining alar mobility [78], and biometric identification [79].

DI4D 

Dimensional Imaging Ltd. (Glasgow, Scotland; https://www.di4d.com) (accessed on 26 December 2023) offers a 4D capture system called Di4D. It is a passive stereophotogrammetry system that uses three or more video cameras to generate a complete 3D video sequence of a moving object in color. The standard configuration comprises two stereo-pairs of synchronized grayscale cameras (four in total), one stereo-pair of two synchronized color cameras (Model avA 1600–65 km/kc, resolution 1600 × 1200 pixels; Kodak sensor model KAI-02050, Basler, Germany), and two illumination units (Model DIV-401-DIVA LITE; Kino Flo Corp., Burbank, CA, USA). The grayscale digital cameras capture the video sequences, while the color cameras capture the surface texture. Each frame is treated as a separate stereo-pair of images that are processed automatically to generate 3D dynamic facial images over time, which are then coalesced to produce a 4D sequence. The cameras, working in tandem, provide facial coverage of approximately 180° and capture 3D dynamic facial images at a temporal resolution of 60 frames/s. The generated geometry is represented by a continuous point cloud per frame, which is converted to a consistent mesh by employing texture maps and dynamic normal maps. Furthermore, with the introduction of next-generation 4D facial capture systems, such as Di4D Pro and Di4D HMC, authentic facial expressions, nuances, and subtle facial movements can be translated into high-fidelity colored 4D facial performance data, which may have clinical applications in the near future.
Validation and Dental Applications: Although Di4D promises a wide range of possibilities for dental applications, including facial expressions, functions, smiles, and facial animations, the literature corroborating this is sparse. Algha et al. investigated the reproducibility of facial expressions that can be reliably used to quantify facial muscle movements in patients with facial palsy, and reported Di4D to be a viable clinical tool for the assessment of facial expressions [80]. Similarly, Shujaat et al. evaluated the feasibility of measuring the changes in the magnitude, speed, and motion similarity of facial animations in head and neck oncology patients and described Di4D as a reliable, practical, and feasible method for capturing the dynamic facial soft tissue movements [81].

3.1.7. Red-Green-Blue-Depth (RGB-D)

In the clinical setting, it may be challenging to incorporate traditional facial scanners for routine facial assessment. Therefore, consumer-grade 3D scanning alternatives, such as RGB-D (red-green-blue-depth) sensors, which combine red, green, and blue (RGB) data with depth information (D) [82], have been developed and are now being utilized in a variety of facial applications, including face detection, face authentication, face identification, and facial expression recognition [83]. RGB-D sensors are compact, portable, and inexpensive and are capable of capturing 4D data, although at lower 2D and 3D resolutions. These sensors typically obtain depth by projecting infrared light onto an object’s surface, either measuring the time taken by the reflected wave to reach the scanner’s sensor (time-of-flight, ToF) or triangulating the projected pattern, to generate a 3D image. The Intel RealSense D435 Camera and Azure Kinect are among the most popular RGB-D sensors.

Intel RealSense D435 Camera 

Intel RealSense technology (Intel Corporation, Santa Clara, CA, USA; https://www.intel.com/content/www/us/en/homepage.html) (accessed on 28 December 2023) is a depth perception and tracking technology designed for a wide range of Intel RealSense D400 series (D415, D435i/D435, and D455) depth cameras. These cameras are among the leading 3D depth-sensing cameras available [84], using a depth algorithm that allows for more precise and longer-range depth perception; however, only the D435i and D435 are equipped with RGB-D cameras. The RealSense D435 (https://www.intelrealsense.com/depth-camera-d435/?wapkw=Intel%20RealSense%20D435%20Camera) is a portable imaging device comprising three core elements: the Intel RealSense D430 depth module (1280 × 720 pixels) with a wide infrared projector and left and right imagers, a 2-megapixel RGB camera (1920 × 1080 pixels), and the Intel RealSense Vision Processor D4 (https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/Intel-RealSense-D400-Series-Datasheet.pdf) (accessed on 28 December 2023). The non-visible static infrared pattern projected by the projector improves depth accuracy in scenes with low texture, while the scene data are captured by the left and right imagers and processed by the vision processor to generate a depth frame. The subsequent depth frames then create a depth video stream. The D435 camera, based on active infrared stereoscopic depth technology, offers a wide FOV for object recognition and is equipped with global shutter image sensors that provide excellent low-light sensitivity. In addition, the RGB rolling shutter sensor technology enables quality depth for an array of applications at a speed of 30 frames/s. This lightweight and powerful camera, accompanied by highly customizable software, provides low-cost sensing solutions for indoor and outdoor usage.
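As an illustration of how such a camera might be accessed in practice, the sketch below uses Intel’s publicly available pyrealsense2 Python wrapper to grab a single frame set and read the distance at the image center. The stream resolution and frame rate are illustrative settings, and the snippet assumes a D435 is connected.

```python
import pyrealsense2 as rs

# Configure a depth + color stream (resolutions and frame rate are illustrative)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()   # blocks until a synchronized frame set arrives
    depth_frame = frames.get_depth_frame()
    # Distance (in metres) of whatever lies at the centre pixel of the depth image
    distance_m = depth_frame.get_distance(320, 240)
    print(f"Centre-pixel distance: {distance_m:.3f} m")
finally:
    pipeline.stop()
```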
Validation and Dental Applications: Although there is little research on the reliability of the Intel RealSense D435 camera, it has various applications, including facial feature tracking and recognition, expression recognition, gesture recognition, head orientation detection, face recognition, and function recording in Bell’s palsy patients. Furthermore, it can perform face landmark tracking and skeleton tracking despite impediments such as facial hair, piercings, and eyeglasses, enabling the computation of patient movement parameters and joint angles.

Azure Kinect DK 

The Azure Kinect development kit (Azure Kinect DK, Microsoft Inc., Redmond, WA, USA), released in 2020, targets users in robotics, healthcare, retail, and manufacturing. It is based on a continuous-wave ToF camera, where objects in the camera’s FOV backscatter the light projected from an amplitude-modulated light source, and the phase difference between the emitted and reflected light is measured. This phase difference is then translated into a distance value for each pixel in the imaging array. The hardware employs a 12-megapixel RGB color camera (4096 × 3072 pixels), a 1-megapixel ToF depth camera (1024 × 1024 pixels), an inertial measurement unit, and a seven-microphone circular array (https://docs.microsoft.com/en-us/azure/kinect-dk/hardware-specification) (accessed on 28 December 2023). In addition to featuring a global shutter for pixel-synchronized capturing, the ToF sensor provides better resolution, a wider FOV, pixel binning, and reduced power consumption [85]. This compact device offers a range of operating modes with varying frame rates, resolutions, and ranges [86]. The development kit features a software support kit for data procurement and body tracking and can integrate with Microsoft Azure’s cloud-based AI services. It also provides dynamic motion tracking through the Azure Kinect Body Tracking software development kit (SDK).
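The phase-to-distance conversion described above can be expressed compactly. The following sketch is a generic continuous-wave ToF calculation, not Microsoft’s implementation; the 200 MHz modulation frequency is an assumed, illustrative value.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_distance_m(phase_rad, mod_freq_hz: float) -> np.ndarray:
    """Continuous-wave ToF: distance from the phase shift between emitted and
    reflected light, d = c * phi / (4 * pi * f_mod).
    Distances are unambiguous only up to c / (2 * f_mod)."""
    return C * np.asarray(phase_rad, dtype=float) / (4.0 * np.pi * mod_freq_hz)

# Example: phase shifts of 0.5, 1.0 and 2.0 rad at an assumed 200 MHz modulation
print(tof_distance_m([0.5, 1.0, 2.0], 200e6))  # ~[0.06 0.12 0.24] m
```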
Validation and Dental Applications: Azure Kinect is a versatile device with a wide range of applications, including object recognition and reconstruction, gesture recognition, human–machine interaction, and medical examination. Azure Kinect has been found to be a viable 3D scanning solution for clinical and research applications, with a systematic error of less than 2 mm in previous studies [86,87,88,89,90,91,92,93].

RAYFace 

RAYFace (RayMedical, Ray Co. Ltd., Seongnam, Republic of Korea; http://www.raymedical.com/) (accessed on 30 December 2023) is a 3D, one-shot face scanning solution developed in 2020. This static system is based on the quick “Flash 3D scan” technology, which enables the acquisition of 3D face data in 0.5 s. Unlike other scanning systems, RAYFace captures 3D facial data from multiple angles in a single shot while the subject is stationary and generates 3D images in less than 1 min with optimal outputs. Easy and quick scanning, the capture of vivid facial expressions, and enhanced face recognition technology are some of RAYFace’s standout characteristics. It incorporates nine simultaneously operating image sensors (a 2-megapixel RGB camera plus a depth camera with a 550 mm × 310 mm FOV) that capture facial depth and shape, as well as automatic registration technology that generates an accurate and realistic 3D facial structure. In addition, assisted by the enhanced Realistic Color Facial Reconstruction V2 technology, more precise and vivid 3D facial data can be reproduced with natural contrast and skin tones. The company’s proprietary software, the RAYFace solution (version 2.0), is an open platform that enables the quick integration of RAYFace with existing CBCT systems and the quick import of intraoral scans, which are automatically aligned with the 3D face scan data. RAYFace’s compact and curvilinear design allows for smooth installation in a small space, eliminating the need for a dedicated room with bulky lighting and unwieldy cameras, and its 3D digital lighting technology is eye-safe, ensuring a natural smile during image capture without causing eye fatigue. Furthermore, in terms of user-friendliness, scanning can easily be delegated to assistants because of the simple software operation.
Validation and Dental Applications: The RAYFace 3D technology enables clinicians to execute their clinical ideas in a multifaceted manner. The scanned virtual data can be used in various dental applications, including orthodontics, implant dentistry, and prosthetics, as well as for cosmetic dentistry procedures such as smile design and mock-up veneering. The integration of RAYFace with CT provides solutions for digital surgical guidance and prosthetic fabrication, enabling precise diagnosis in plastic surgery and implantology. The predictive analytics offered by the software are closely linked with the actual treatment plan, allowing for easy monitoring and strict adherence to it. The scanning accuracy of RAYFace for creating digital face twins has been reported to be good, with an absolute surface discrepancy of 0.5277 when compared with the MegaGen and Artec Eva systems [94]. By integrating facial scans with CBCT, another study found RAYFace to be useful in identifying true mid-sagittal planes and anatomical landmarks [95].

4. Discussion

Over the past 30 years, 3D surface imaging systems have gained popularity in various medical disciplines, and their usage is evolving continuously. Owing to varied costs and hardware and software characteristics, different 3D face acquisition systems are used in various hospitals, and even among different departments within the same hospital. The majority of the previously published studies on 3D surface imaging systems are highly technical and focused on engineering aspects, making it challenging for clinicians to grasp the technicalities and usability of these devices [96]. This review discusses an array of cutting-edge 3D surface imaging systems that are currently in use or have the potential for future use in 3D face acquisition. The parameters outlined in Table 1 represent the hardware features, software characteristics, quality of the generated 3D images, and overall functionality of the 3D facial scanning systems. For clinical applicability, an ideal 3D face acquisition system must fulfill these criteria (Table 1).

4.1. Operational Considerations

Each system has distinct characteristics, advantages, and limitations that differentiate it from others and help to determine its clinical application. Three-dimensional face acquisition systems can be mobile or stationary and static or dynamic. While mobile systems like the Vectra H1, Intel RealSense D435, and Azure Kinect DK are easily maneuverable around the target object due to their ultralight weight, stationary systems like 3dMD, DI4D, and RAYFace provide standardized image acquisition due to their one-shot flash image capture. The choice between a mobile or stationary system depends on the specific purpose in a clinical setting. A mobile system may be deemed necessary to capture the intricate details of the target object when scanning dental casts, models, appliances, and prostheses, whereas a stationary system is better suited to scanning faces. In both scenarios, the standardization of image acquisition is a prime requisite for the system to be clinically useful. Furthermore, a portable, ultra-lightweight system with a static sensor and a brief scanning time would be advantageous for facial scanning applications because, with a dynamic sensor, facial expressions and muscle activity may change as the capture moves from one side of the face to the other, thus influencing the generated 3D face image. On the other hand, smartphone-based scanning systems such as Bellus3D FaceApp and Bellus3D Face Camera Pro, although possessing static scanners, still require the subject to turn their head, which changes the position of the neck muscles. This change in head position can affect the reliability of landmarks and surface texture in the neck region of the generated 3D image.

4.2. Performance

4.2.1. Accuracy and Calibration

Selecting the right 3D face acquisition system depends not only on the technology, but also on its clinical purpose and usage. The accuracy of the system is one of the deciding factors for clinical application when key decisions about treatment planning are to be made based on landmark data derived from 3D surface imaging. Technically, the correspondence between a virtual copy and the actual object, or the similarity between the dimensions of the 3D copy and the real object, determines the accuracy of the scanning system, which varies by system. Involuntary movements have been reported to interfere with image acquisition and to influence accuracy. Previous studies have suggested acquiring right-, frontal-, and left-face images separately, as in modern systems such as Morpheus 3D, Vectra H1, and Accu3D, and merging them together to somewhat mitigate the impact of involuntary movements [97,98]. However, this approach is not only time-consuming, as it requires multiple scans, but the generated 3D face image may also lose accuracy, because facial expressions and muscle activity may change while capturing from one side of the face to the other, as mentioned earlier.
The precision of a system may be limited to the landmarks within the volume captured or the coverage of the system. For instance, if the capturing of landmarks outside 3dMDface’s capture range, which has ear-to-ear coverage, is deemed necessary, then a system upgrade to 3dMDtrio or 3dMDhead, with a wider image capture range, would be beneficial. Advanced stationary systems such as the Vectra XT and mobile systems like Artec Eva can be used alongside 3dMDface in the clinical setting owing to their comparable reproducibility and accuracy [99]. Despite the fact that the next-generation mobile systems have outperformed the stationary systems in terms of portability and functionality, 3dMD continues to be the “gold standard” in 3D face acquisition due to its proven precision and accuracy [100,101]. “Calibration” is another important consideration for 3D face acquisition systems. For the majority of 3D face acquisition systems, calibration is required in order to obtain accurate results. Previous generation systems required on-site calibration through manual adjustments of the accuracy settings, and the total set-up time, including start-up and calibration time, has been reported to be as long as 20 min–1 h [102], which is inconvenient for both the operator and the patient. In contrast, handheld systems seem convenient to use, but they may require longer calibration times compared to static systems. However, newer systems, such as Vectra H1 and Bellus3D, have been factory-calibrated and are available in ready-to-use configurations (Table 2).
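When validation studies quantify accuracy, they commonly register a test scan to a reference scan and summarize the surface deviations between them. The sketch below is a simplified, nearest-neighbor version of such a comparison using synthetic data; it assumes both point clouds are already registered (e.g., by best-fit alignment) and is not taken from any of the cited validation studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation_mm(test_points, reference_points):
    """Nearest-neighbour deviation of a test scan from a reference scan
    (both given as N x 3 vertex arrays in a common coordinate system)."""
    distances, _ = cKDTree(reference_points).query(test_points)
    return {"mean": float(distances.mean()),                 # often reported as trueness
            "rmse": float(np.sqrt((distances ** 2).mean())),
            "p95": float(np.percentile(distances, 95))}

# Synthetic example: a reference cloud and a noisy copy (0.3 mm Gaussian noise)
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 100.0, size=(5000, 3))
test = reference + rng.normal(0.0, 0.3, size=reference.shape)
print(surface_deviation_mm(test, reference))
```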

4.2.2. Scanning Time and Data Delivery

The scanning time plays a vital role in the accuracy of the system. The speed at which the scanner can capture a given object determines its capture/scanning speed; this speed depends on the type of technology utilized by the device. In a clinical setting, the speed of the scanner may significantly influence the diagnosis and decision-making process. The 3D image acquisition times of some systems, such as Planmeca ProFace and Bellus3D Face Camera Pro, have been reported to be relatively long, and their images may present greater noise in areas such as the eyes and ears than those of rapid-capture systems. Furthermore, such lengthy approaches to 3D data collection may introduce motion artifacts and incorporate minor variations in facial expression, thereby disrupting the accuracy of the final 3D image and making the scanning system clinically unreliable [103].
Providing quantifiable and continuous data based on facial measurements for the assessment of facial appearance and function is essential for diagnosing the severity of abnormalities or determining the effectiveness of an intervention [104,105]. All currently available systems reviewed in this study achieve this purpose satisfactorily. Most of the systems require the target object to remain still when generating 3D data; in contrast, systems such as 3dMD, DI4D, Intel RealSense D435, and Azure Kinect are capable of delivering the data even when the object is in motion. Keeping subjects stationary becomes challenging when using systems with long scanning times that require them to remain still throughout the scanning period. This factor becomes particularly crucial when working with children, especially those with developmental delays or cognitive deficits, who may find it difficult to hold still for a lengthy scanning period. To mitigate these challenges, a short scanning period and specific instructions regarding facial expressions, posture, and lip position during image acquisition can help to minimize errors. Advanced 3D scanners such as the Di3D FCS-100 and 3dMDface capture objects quickly in one shot, in just 1 ms and 1.5 ms, respectively, to eliminate motion artifacts and noisy raw data issues. In contrast, the RAYFace system relies on its “Flash 3D scan” technology for rapid image capture, eliminating image distortion errors related to dynamic scanning or patient movements. This is particularly advantageous for patients who cannot restrain their movements for a long period of time, such as children and elderly patients.

4.2.3. Image Quality

The quality of an image is a critical element that needs to be taken into account when selecting a 3D imaging system. Photorealistic images generated from 3D face acquisition systems allow for texture mapping, precise landmark identification, and treatment simulation. The higher the resolution, the more detailed the image, although at the cost of a heavy image file that requires a longer processing time. The system requirements for a high-resolution scanner are also high, as a powerful computer is needed for image processing. The face is a complex 3D structure; hence, from a 3D face acquisition perspective, capturing even the minute, intricate details of the face is indispensable and cannot be underestimated. The currently available passive stereophotogrammetry systems are solely dependent on natural patterns, such as skin pores, freckles, and scars. Therefore, high-resolution cameras are essential for pixel integration and high-quality 3D surface generation. Furthermore, careful lighting control using standardized flash units is crucial to overcome the sensitivity to illumination changes. Although low-cost RGB-D devices such as the Intel RealSense D435 Camera and Azure Kinect generate visible light color data in addition to depth, the face depth images they capture may also contain general depth value noise and undesirable holes (areas of invalid data). In this regard, face-specific, deep-learning-driven depth image enhancers might offer a workable solution to efficiently enhance face depth images [106].
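Before such learning-based enhancers are applied, a simple classical baseline is to mark invalid (zero) depth pixels and fill them by image inpainting. The sketch below uses OpenCV’s generic inpainting on a synthetic depth map; it is only an illustrative baseline, not the deep-learning approach cited above, and the conversion to 8-bit sacrifices some depth precision.

```python
import cv2
import numpy as np

def fill_depth_holes(depth_mm: np.ndarray, radius_px: int = 5) -> np.ndarray:
    """Fill invalid (zero) pixels of a depth map by classical inpainting.
    A simple baseline only -- not the learning-based enhancers cited in the text."""
    hole_mask = (depth_mm == 0).astype(np.uint8)                 # 1 where depth is missing
    depth_8u = cv2.normalize(depth_mm, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)   # inpaint needs 8-bit input
    filled_8u = cv2.inpaint(depth_8u, hole_mask, radius_px, cv2.INPAINT_TELEA)
    scale = depth_mm.max() / 255.0 if depth_mm.max() > 0 else 1.0
    return filled_8u.astype(np.float32) * scale                  # back to the original scale

# Example: a synthetic 480 x 640 depth frame (600 mm everywhere) with a rectangular hole
depth = np.full((480, 640), 600, dtype=np.uint16)
depth[200:240, 300:360] = 0
print(int(fill_depth_holes(depth)[220, 330]))  # hole region now holds a plausible value (~600)
```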
Another key aspect to consider when using stereophotogrammetry systems is their inability to capture the eyes and the cleft region accurately. The pattern of light used for 3D image construction interferes with the light reflected from the eye lenses, resulting in a concave rather than convex appearance of the lenses [107]. Additionally, the cleft region, which is often covered with saliva, reflects light and thereby produces artifacts in the final image. A more practical approach is a fusion model that integrates the CBCT-derived skull with a 3D stereophotograph; this provides a photorealistic 3D representation of the patient's face and may serve as a diagnostic tool for better treatment planning and postoperative evaluation by orthodontists and surgeons. A combination of two scanners, a stationary system for large areas and a hand-held scanner to fill in the details, could also be an alternative approach for complex facial morphologies such as cleft lip or craniofacial syndromes.
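As an illustration of this fusion step, the sketch below rigidly registers a stereophotogrammetric face mesh onto the skin surface segmented from CBCT using the open-source Open3D library. The file names are placeholders, both surfaces are assumed to be expressed in millimetres, and a rough pre-alignment (e.g., landmark-based) is assumed to have been performed beforehand.

```python
import numpy as np
import open3d as o3d

# Placeholder file names for the two exported surfaces (e.g., STL/PLY in millimetres).
cbct_skin = o3d.io.read_triangle_mesh("cbct_skin_surface.stl")
photo_face = o3d.io.read_triangle_mesh("stereophoto_face.stl")

# Sample point clouds from both meshes and estimate normals for point-to-plane ICP.
target = cbct_skin.sample_points_uniformly(number_of_points=50_000)
source = photo_face.sample_points_uniformly(number_of_points=50_000)
for cloud in (source, target):
    cloud.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    source, target,
    3.0,                          # maximum correspondence distance (mm); tune to scan quality
    np.eye(4),                    # initial transform; assumes rough pre-alignment
    o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)
print("Fitness:", result.fitness, "Inlier RMSE (mm):", result.inlier_rmse)

photo_face.transform(result.transformation)      # the photorealistic mesh now sits on the CBCT skin
o3d.io.write_triangle_mesh("fused_face_on_cbct.ply", photo_face)
```

The reported inlier RMSE gives a quick check of how closely the textured face surface conforms to the CBCT skin before the fused model is used for planning or postoperative evaluation.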

4.2.4. 3D Software Solutions

The overall performance of a 3D face acquisition system is contingent upon the 3D software powering it, which must handle and process all incoming data rapidly and accurately to provide real-time volumetric visualization and treatment simulation. Most currently available systems are equipped with proprietary in-house software to boost performance, although compatible third-party solutions are also available. Passive stereophotogrammetry systems rely on landmark digitization performed either manually, directly on the patient's face, or digitally on the patient's scans. Although user training can help to achieve high accuracy and reliability with manual digitization [108], manual landmarking is associated with patient discomfort, is time-consuming, and may be cumbersome in busy clinical environments, leading to inaccuracies and human error [109]. Adopting automated facial analysis approaches, such as neural networks, may help to alleviate these concerns. For instance, Taylor et al. used the Vectra M3 system for automatic computation of the symmetry plane using Procrustes analysis in patients with facial asymmetry and reported minimal errors [110]. Their automated approach further substantiates the value of a completely automated 3D facial assessment tool that leaves minimal scope for human error. The software-assisted analysis methods, surgical planning, and research outcomes of previous studies are generally based on, and limited to, the single system used, and need validation across different 3D scanning systems. This would allow for comparison across multicenter studies with diverse scanning systems and reveal the best-suited and most accurate surface imaging system for 3D face acquisition.
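As a conceptual illustration of such an automated, Procrustes-based symmetry assessment (a simplified sketch, not the exact Vectra M3 pipeline used by Taylor et al. [110]), the snippet below reflects a labelled 3D landmark set, aligns the reflection back onto the original configuration with orthogonal Procrustes analysis, and fits the symmetry plane to the midpoints of corresponding landmarks.

```python
import numpy as np

def symmetry_plane(landmarks: np.ndarray, mirror_pairs: list[tuple[int, int]]):
    """Estimate a facial symmetry plane from labelled 3D landmarks (N x 3, in mm)."""
    centre = landmarks.mean(axis=0)
    X = landmarks - centre                            # centre the configuration
    mirrored = X @ np.diag([-1.0, 1.0, 1.0])          # reflect across an arbitrary plane
    reordered = mirrored.copy()
    for i, j in mirror_pairs:                         # swap left/right labels after reflection
        reordered[[i, j]] = mirrored[[j, i]]
    U, _, Vt = np.linalg.svd(reordered.T @ X)         # orthogonal Procrustes alignment
    if np.linalg.det(U @ Vt) < 0:                     # enforce a proper rotation
        U[:, -1] *= -1
    aligned = reordered @ (U @ Vt)                    # reflection mapped back onto the original
    midpoints = (X + aligned) / 2.0                   # these lie on the symmetry plane
    _, _, Vt_m = np.linalg.svd(midpoints - midpoints.mean(axis=0))
    normal = Vt_m[-1]                                 # direction of least midpoint variance
    return normal, midpoints.mean(axis=0) + centre    # plane normal and a point on the plane

# Hypothetical example: landmark 0 is midline; (1, 2) and (3, 4) are left/right pairs.
pts = np.array([[0.0, 0.0, 0.0],
                [-30.0, -5.0, 10.0], [31.0, -5.0, 10.0],
                [-25.0, 20.0, -5.0], [24.0, 21.0, -5.0]])
n, p0 = symmetry_plane(pts, [(1, 2), (3, 4)])
print("Plane normal:", np.round(n, 3), "Point on plane:", np.round(p0, 2))
```

Residual asymmetry can then be quantified from the distances between each landmark and the reflection of its paired landmark across this plane, providing the kind of fully automated measure argued for above.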
To deliver the best results, clinicians must understand how to utilize a system to its full potential. Considering the busy schedules of clinicians, devices that are user-friendly, have minimal system requirements, do not require prior training, and provide rapid assessment results that can be clearly communicated to patients are best suited for the clinical environment, as they reduce consultation and decision-making times. Although stationary devices such as 3dMD provide quick and precise image acquisition, they are bulky and expensive. In contrast, portable or handheld devices, such as the Vectra H1 and Azure Kinect DK, provide better operator control and can be maneuvered freely around the face. This ensures greater coverage and makes them well suited for 3D face acquisition. New smartphone applications used in conjunction with TrueDepth sensors have shown encouraging results; however, their longer acquisition times demand greater operator precision and patient compliance. Furthermore, the influence of the scanning environment on the performance of a 3D face acquisition system, as mentioned in this review, cannot be overlooked. Therefore, standard imaging conditions, comprising optimal temperature and acceptable humidity levels, should be maintained in the clinical environment for accurate imaging outcomes. Finally, several 3D surface imaging systems are currently available, with prices that vary according to their inherent features, data capture speed, and image quality. Their clinical use should be guided by purpose and workflow rather than price alone.
The human face is a complex 3D configuration with convexities, concavities, and tricky angles; hence, capturing its intricate details requires a comprehensive understanding of the factors described in this review. Although this review is limited to the handful of systems currently available for 3D face acquisition, they are capable of generating high-quality 3D facial images, given that the aforementioned factors are considered.

5. Future Directions

Both future clinical practice and research stand to benefit greatly from the rapid development of AI technology and facial scanning. It would eventually be possible, for example, to automatically set bilaterally balanced denture teeth through the integration of an ML-guided tooth arrangement robot and a virtual articulator with a 3D facial scan. In addition, a virtual patient created from a 3D facial scan would offer enough facial landmarks to aid in planning the final prosthesis in situations where extensive full-mouth rehabilitation is planned, but there are not many landmarks for the occlusal plane. Furthermore, integration of 3D surface imaging systems with other imaging tools and 3D printing technologies could enable individualized surgical planning and simulation, treatment sequencing, patient-specific implant and prosthesis fabrication, and patient education in the future.
The potential of 3D face acquisition technology to revolutionize healthcare is vast. Widespread adoption of 3D face acquisition for telemedicine, dermatological purposes, and facial reconstruction and forensic identification purposes is likely to enhance the quality of life. Remote monitoring of patients’ health and well-being through 3D face scanning will enable healthcare providers to provide virtual care and support to those unable to visit a healthcare facility. Automated algorithms will be able to precisely track changes in 3D facial images, allowing dermatologists to monitor disease progression and detect conditions like melanoma and skin cancer over time. Furthermore, 3D face acquisition could represent a valuable aid in facial reconstruction following trauma or deformity, as well as in forensic contexts to facilitate evidence collection and subject recognition through facial reconstruction.
Additionally, the widespread implementation of 3D face acquisition technology for product customization, such as breathing masks and medical devices like sleep apnea masks or eyeglasses, is foreseeable. However, to fully harness the potential of 3D face scanning, the development of a handheld, versatile, scientifically validated, and cost-effective 3D surface imaging system for clinical and research purposes is highly desirable.

6. Conclusions

The challenge of precise facial evaluation has led to the development of contemporary 3D surface imaging systems, each with its own inherent features and limitations. This review offers updated information on these technologies and systems to assist clinicians in selecting an optimal 3D face acquisition system. Advanced 3D face scanners, powered by cutting-edge technology and sophisticated software tools, generate photorealistic 3D facial images with remarkable accuracy and reliability and can be integrated with other imaging tools and modern 3D printing technologies. While some of these systems have already been adopted in clinical settings, the results of other advanced systems appear promising. However, scientific validation of the currently available systems across multiple centers is desirable before any system can be deemed the gold standard for 3D face acquisition. Furthermore, driven by technological advances, novel devices will become cost-effective and portable, and will enable accurate quantitative assessments, rapid treatment simulations, and improved outcomes.

Author Contributions

P.S.: conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft, writing—reviewing and editing, visualization. M.M.B.: writing—reviewing and editing, supervision. R.T.-C.H.: writing—reviewing and editing, supervision. D.H.A.: data curation, writing—reviewing and editing, visualization. Y.Y.L.: writing—reviewing and editing, supervision. M.G.: conceptualization, methodology, validation, resources, writing—reviewing and editing, supervision, project administration, funding acquisition. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Hong Kong General Research Fund (RGC Ref No. 17107321).

Institutional Review Board Statement

Ethics approval was obtained (approved on 29 September 2021) from the local institutional review board (IRB) of the University/Hospital Authority (approval number UW 21-529) before the commencement of this study.

Informed Consent Statement

The participant was informed verbally, and written informed consent was obtained.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request for research purposes.

Acknowledgments

The authors wish to thank Shadow Yeung (Faculty of Dentistry, The University of Hong Kong) for the valuable contribution to image processing.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Kau, C.H.; Richmond, S.; Incrapera, A.; English, J.; Xia, J.J. Three-dimensional surface acquisition systems for the study of facial morphology and their application to maxillofacial surgery. Int. J. Med. Robot. 2007, 3, 97–110. [Google Scholar] [CrossRef]
  2. Li, Y.; Yang, X.; Li, D. The application of three-dimensional surface imaging system in plastic and reconstructive surgery. Ann. Plast. Surg. 2016, 77 (Suppl. S1), S76–S83. [Google Scholar] [CrossRef] [PubMed]
  3. Karatas, O.H.; Toy, E. Three-dimensional imaging techniques: A literature review. Eur. J. Dent. 2014, 8, 132–140. [Google Scholar] [CrossRef] [PubMed]
  4. Alshammery, F.A. Three dimensional (3D) imaging techniques in orthodontics—An update. J. Fam. Med. Prim. Care 2020, 9, 2626–2630. [Google Scholar] [CrossRef] [PubMed]
  5. Cava, S.M.L.; Orrù, G.; Drahansky, M.; Marcialis, G.L.; Roli, F. 3D Face Reconstruction: The Road to Forensics. ACM Comput. Surv. 2023, 56, 77. [Google Scholar] [CrossRef]
  6. Liopyris, K.; Gregoriou, S.; Dias, J.; Stratigos, A.J. Artificial Intelligence in Dermatology: Challenges and Perspectives. Dermatol. Ther. 2022, 12, 2637–2651. [Google Scholar] [CrossRef] [PubMed]
  7. Lam, W.Y.; Hsung, R.T.; Choi, W.W.; Luk, H.W.; Pow, E.H. A 2-part facebow for CAD-CAM dentistry. J. Prosthet. Dent. 2016, 116, 843–847. [Google Scholar] [CrossRef] [PubMed]
  8. Eastwood, P.; Gilani, S.Z.; McArdle, N.; Hillman, D.; Walsh, J.; Maddison, K.; Goonewardene, M.; Mian, A. Predicting sleep apnea from three-dimensional face photography. J. Clin. Sleep Med. 2020, 16, 493–502. [Google Scholar] [CrossRef] [PubMed]
  9. Lin, W.S.; Harris, B.T.; Phasuk, K.; Llop, D.R.; Morton, D. Integrating a facial scan, virtual smile design, and 3D virtual patient for treatment with CAD-CAM ceramic veneers: A clinical report. J. Prosthet. Dent. 2018, 119, 200–205. [Google Scholar] [CrossRef]
  10. Joda, T.; Gallucci, G.O. The virtual patient in dental medicine. Clin. Oral Implant. Res. 2015, 26, 725–726. [Google Scholar] [CrossRef]
  11. Ricciardi, F.; Copelli, C.; De Paolis, L.T. An Augmented Reality System for Maxillo-Facial Surgery; Springer: Cham, Switzerland, 2017; pp. 53–62. [Google Scholar]
  12. Lo, S.; Fowers, S.; Darko, K.; Spina, T.; Graham, C.; Britto, A.; Rose, A.; Tittsworth, D.; McIntyre, A.; O’Dowd, C.; et al. Participatory development of a 3D telemedicine system during COVID: The future of remote consultations. J. Plast. Reconstr. Aesthet. Surg. 2023, 87, 479–490. [Google Scholar] [CrossRef]
  13. Moreau, P.; Ismael, S.; Masadeh, H.; Katib, E.A.; Viaud, L.; Nordon, C.; Herfat, S. 3D technology and telemedicine in humanitarian settings. Lancet Digit. Health 2020, 2, e108–e110. [Google Scholar] [CrossRef]
  14. Jiang, J.G.; Zhang, Y.D. Motion planning and synchronized control of the dental arch generator of the tooth-arrangement robot. Int. J. Med. Robot. 2013, 9, 94–102. [Google Scholar] [CrossRef]
  15. Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019, 9, 13597. [Google Scholar] [CrossRef]
  16. Liu, W.; Li, M.; Yi, L. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework. Autism Res. 2016, 9, 888–898. [Google Scholar] [CrossRef]
  17. Farahani, N.; Braun, A.; Jutt, D.; Huffman, T.; Reder, N.; Liu, Z.; Yagi, Y.; Pantanowitz, L. Three-dimensional imaging and scanning: Current and future applications for pathology. J. Pathol. Inform. 2017, 8, 36. [Google Scholar] [CrossRef] [PubMed]
  18. Tzou, C.-H.J.; Frey, M. Evolution of 3D surface imaging systems in facial plastic surgery. Facial Plast. Surg. Clin. N. Am. 2011, 19, 591–602. [Google Scholar] [CrossRef] [PubMed]
  19. Daniele, G.; Annalisa, C.; Claudia, D.; Chiarella, S. 3D surface acquisition systems and their applications to facial anatomy: Let’s make a point. Ital. J. Anat. Embryol. 2020, 124, 422–431. [Google Scholar] [CrossRef]
  20. Moss, J.P.; Coombes, A.M.; Linney, A.D.; Campos, J. Methods of three-dimensional analysis of patients with asymmetry of the face. Proc. Finn. Dent. Soc. 1991, 87, 139–149. [Google Scholar] [PubMed]
  21. Germec-Cakan, D.; Canter, H.I.; Nur, B.; Arun, T. Comparison of facial soft tissue measurements on three-dimensional images and models obtained with different methods. J. Craniofac Surg. 2010, 21, 1393–1399. [Google Scholar] [CrossRef] [PubMed]
  22. Moss, J.P.; Linney, A.D.; Grindrod, S.R.; Mosse, C.A. A laser scanning system for the measurement of facial surface morphology. Opt. Lasers Eng. 1989, 10, 179–190. [Google Scholar] [CrossRef]
  23. Schwenzer-Zimmerer, K.; Chaitidis, D.; Berg-Boerner, I.; Krol, Z.; Kovacs, L.; Schwenzer, N.F.; Zimmerer, S.; Holberg, C.; Zeilhofer, H.F. Quantitative 3D soft tissue analysis of symmetry prior to and after unilateral cleft lip repair compared with non-cleft persons (performed in Cambodia). J. Craniomaxillofac. Surg. 2008, 36, 431–438. [Google Scholar] [CrossRef]
  24. Blais, F. Review of 20 years of range sensor development. J. Electron. Imaging 2004, 13, 231–243. [Google Scholar] [CrossRef]
  25. Lippold, C.; Liu, X.; Wangdo, K.; Drerup, B.; Schreiber, K.; Kirschneck, C.; Moiseenko, T.; Danesh, G. Facial landmark localization by curvature maps and profile analysis. Head Face Med. 2014, 10, 54. [Google Scholar] [CrossRef] [PubMed]
  26. Minolta Vivid. Available online: https://www.virvig.eu/services/Minolta.pdf (accessed on 1 June 2022).
  27. Minolta Vivid 910. Available online: https://pdf.directindustry.com/pdf/konica-minolta-sensing-americas/vivid-910/18425- (accessed on 8 December 2023).
  28. Kau, C.H.; Richmond, S.; Zhurov, A.I.; Knox, J.; Chestnutt, I.; Hartles, F.; Playle, R. Reliability of measuring facial morphology with a 3-dimensional laser scanning system. Am. J. Orthod. Dentofac. Orthop. 2005, 128, 424–430. [Google Scholar] [CrossRef] [PubMed]
  29. Kovacs, L.; Zimmermann, A.; Brockmann, G.; Baurecht, H.; Schwenzer-Zimmerer, K.; Papadopulos, N.A.; Papadopoulos, M.A.; Sader, R.; Biemer, E.; Zeilhofer, H.F. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner. IEEE Trans. Med. Imaging 2006, 25, 742–754. [Google Scholar] [CrossRef]
  30. FASTSCAN II. Available online: https://polhemus.com/_assets/img/FastSCAN_II_Brochure.pdf (accessed on 10 December 2023).
  31. Thompson, J.T.; David, L.R.; Wood, B.; Argenta, A.; Simpson, J.; Argenta, L.C. Outcome analysis of helmet therapy for positional plagiocephaly using a three-dimensional surface scanning laser. J. Craniofac. Surg. 2009, 20, 362–365. [Google Scholar] [CrossRef]
  32. Berghagen, N. Photogrammetric Principles Applied to Intra-oral Radiodontia. A Method for Diagnosis and Therapy in Odontology; Springer: Stockholm, Sweden, 1951. [Google Scholar]
  33. Burke, P.H.; Beard, F.H. Stereophotogrammetry of the face. A preliminary investigation into the accuracy of a simplified system evolved for contour mapping by photography. Am. J. Orthod. 1967, 53, 769–782. [Google Scholar] [CrossRef]
  34. Plooij, J.M.; Swennen, G.R.; Rangel, F.A.; Maal, T.J.; Schutyser, F.A.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Bergé, S.J. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry. Int. J. Oral Maxillofac. Surg. 2009, 38, 267–273. [Google Scholar] [CrossRef]
  35. Tzou, C.H.; Artner, N.M.; Pona, I.; Hold, A.; Placheta, E.; Kropatsch, W.G.; Frey, M. Comparison of three-dimensional surface-imaging systems. J. Plast. Reconstr. Aesthet. Surg. 2014, 67, 489–497. [Google Scholar] [CrossRef] [PubMed]
  36. Wong, J.Y.; Oh, A.K.; Ohta, E.; Hunt, A.T.; Rogers, G.F.; Mulliken, J.B.; Deutsch, C.K. Validity and reliability of craniofacial anthropometric measurement of 3D digital photogrammetric images. Cleft Palate Craniofac. J. 2008, 45, 232–239. [Google Scholar] [CrossRef]
  37. Heike, C.L.; Upson, K.; Stuhaug, E.; Weinberg, S.M. 3D digital stereophotogrammetry: A practical guide to facial image acquisition. Head Face Med. 2010, 6, 18. [Google Scholar] [CrossRef]
  38. Camison, L.; Bykowski, M.; Lee, W.W.; Carlson, J.C.; Roosenboom, J.; Goldstein, J.A.; Losee, J.E.; Weinberg, S.M. Validation of the Vectra H1 portable three-dimensional photogrammetry system for facial imaging. Int. J. Oral Maxillofac. Surg. 2018, 47, 403–410. [Google Scholar] [CrossRef]
  39. Gibelli, D.; Pucciarelli, V.; Cappella, A.; Dolci, C.; Sforza, C. Are portable stereophotogrammetric devices reliable in facial imaging? A validation study of VECTRA H1 device. J. Oral Maxillofac. Surg. 2018, 76, 1772–1784. [Google Scholar] [CrossRef]
  40. White, J.D.; Ortega-Castrillon, A.; Virgo, C.; Indencleef, K.; Hoskens, H.; Shriver, M.D.; Claes, P. Sources of variation in the 3dMDface and Vectra H1 3D facial imaging systems. Sci. Rep. 2020, 10, 4443. [Google Scholar] [CrossRef] [PubMed]
  41. Di3D. Available online: http://www.dirdim.com/pdfs/DDI_Dimensional_Imaging_DI3D.pdf (accessed on 14 December 2023).
  42. Khambay, B.; Nairn, N.; Bell, A.; Miller, J.; Bowman, A.; Ayoub, A.F. Validation and reproducibility of a high-resolution three-dimensional facial imaging system. Br. J. Oral Maxillofac. Surg. 2008, 46, 27–32. [Google Scholar] [CrossRef] [PubMed]
  43. Winder, R.J.; Darvann, T.A.; McKnight, W.; Magee, J.D.M.; Ramsay-Baggs, P. Technical validation of the Di3D stereophotogrammetry surface imaging system. Br. J. Oral Maxillofac. Surg. 2008, 46, 33–37. [Google Scholar] [CrossRef] [PubMed]
  44. Fourie, Z.; Damstra, J.; Gerrits, P.O.; Ren, Y. Evaluation of anthropometric accuracy and reliability using different three-dimensional scanning systems. Forensic Sci. Int. 2011, 207, 127–134. [Google Scholar] [CrossRef] [PubMed]
  45. Al-Anezi, T.; Khambay, B.; Peng, M.J.; O’Leary, E.; Ju, X.; Ayoub, A. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D). Int. J. Oral Maxillofac. Surg. 2013, 42, 9–18. [Google Scholar] [CrossRef] [PubMed]
  46. Ma, L.; Xu, T.; Lin, J. Validation of a three-dimensional facial scanning system based on structured light techniques. Comput. Methods Programs Biomed. 2009, 94, 290–298. [Google Scholar] [CrossRef] [PubMed]
  47. Kim, S.H.; Jung, W.Y.; Seo, Y.J.; Kim, K.A.; Park, K.H.; Park, Y.G. Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system. Korean J. Orthod. 2015, 45, 105–112. [Google Scholar] [CrossRef]
  48. Choi, K.; Kim, M.; Lee, K.; Nam, O.; Lee, H.-S.; Choi, S.; Kim, K. Accuracy and Precision of Three-dimensional Imaging System of Children’s Facial Soft Tissue. J. Korean Acad. Pediatr. Dent. 2020, 47, 17–24. [Google Scholar] [CrossRef]
  49. Lee, K.W.; Kim, S.H.; Gil, Y.C.; Hu, K.S.; Kim, H.J. Validity and reliability of a structured-light 3D scanner and an ultrasound imaging system for measurements of facial skin thickness. Clin. Anat. 2017, 30, 878–886. [Google Scholar] [CrossRef] [PubMed]
  50. Traisrisin, K.; Wangsrimongkol, T.; Pisek, P.; Rattanaphan, P.; Puasiri, S. The Accuracy of soft tissue prediction using Morpheus 3D simulation software for planning orthognathic surgery. J. Med. Assoc. Thai 2017, 100 (Suppl. S6), S38–S49. [Google Scholar]
  51. AxisThree XS-200. Available online: https://market-comms.co.th/wp-content/uploads/2017/08/XS200-Spec.pdf (accessed on 22 December 2023).
  52. AxisThree 3D Simulation Technology. Available online: https://www.bodysculpt.com/3d-simulation-technology/axis-three-3d/ (accessed on 22 December 2023).
  53. Pauwels, R. History of dental radiography: Evolution of 2D and 3D imaging modalities. Med. Phys. Int. 2020, 8, 235–277. [Google Scholar]
  54. Planmeca ProFace. Available online: https://www.planmeca.com (accessed on 24 December 2023).
  55. Liberton, D.K.; Mishra, R.; Beach, M.; Raznahan, A.; Gahl, W.A.; Manoli, I.; Lee, J.S. Comparison of three-dimensional surface imaging systems using landmark analysis. J. Craniofac. Surg. 2019, 30, 1869–1872. [Google Scholar] [CrossRef]
  56. Amornvit, P.; Sanohkan, S. The accuracy of digital face scans obtained from 3D scanners: An in vitro study. Int. J. Environ. Res. Public Health 2019, 16, 5061. [Google Scholar] [CrossRef] [PubMed]
  57. Bellus3D. Available online: https://www.bellus3d.com/_assets/downloads/brochures/BellusD-Dental-Pro-Brochure.pdf (accessed on 21 March 2021).
  58. Bellus3D FaceApp. Available online: https://www.bellus3d.com/_assets/downloads/product/FCP (accessed on 21 March 2021).
  59. Piedra-Cascón, W.; Meyer, M.J.; Methani, M.M.; Revilla-León, M. Accuracy (trueness and precision) of a dual-structured light facial scanner and interexaminer reliability. J. Prosthet. Dent. 2020, 124, 567–574. [Google Scholar] [CrossRef]
  60. Cascos, R.; Ortiz del Amo, L.; Álvarez-Guzmán, F.; Antonaya-Martín, J.L.; Celemín-Viñuela, A.; Gómez-Costa, D.; Zafra-Vallejo, M.; Agustín-Panadero, R.; Gómez-Polo, M. Accuracy between 2D Photography and Dual-Structured Light 3D Facial Scanner for Facial Anthropometry: A Clinical Study. J. Clin. Med. 2023, 12, 3090. [Google Scholar] [CrossRef]
  61. Dzelzkaleja, L.; Knēts, J.; Rozenovskis, N.; Sīlītis, A. Mobile apps for 3D face scanning. In Proceedings of the IntelliSys 2021: Intelligent Systems and Applications, Amsterdam, The Netherlands, 1–2 September 2021; pp. 34–50. [Google Scholar]
  62. Berretti, S.; Bimbo, A.D.; Pala, P. 3D face recognition using isogeodesic stripes. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2162–2177. [Google Scholar] [CrossRef]
  63. Drira, H.; Ben Amor, B.; Srivastava, A.; Daoudi, M.; Slama, R. 3D face recognition under expressions, occlusions and pose variations. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2270–2283. [Google Scholar] [CrossRef]
  64. Alashkar, T.; Ben Amor, B.; Daoudi, M.; Berretti, S. A 3D dynamic database for unconstrained face recognition. In Proceedings of the 5th International conference and exhibition on 3D body scanning technologies, Lugano, Switzerland, 21–22 October 2014. [Google Scholar]
  65. Beeler, T.; Bickel, B.; Beardsley, P.A.; Sumner, B.; Gross, M.H. High-quality single-shot capture of facial geometry. ACM Trans. Graph. 2010, 29, 1–9. [Google Scholar] [CrossRef]
  66. Jayaratne, Y.S.; McGrath, C.P.; Zwahlen, R.A. How accurate are the fusion of cone-beam CT and 3-D stereophotographic images? PLoS ONE 2012, 7, e49585. [Google Scholar] [CrossRef]
  67. Weinberg, S.M.; Naidoo, S.; Govier, D.P.; Martin, R.A.; Kane, A.A.; Marazita, M.L. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: Comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry. J. Craniofac. Surg. 2006, 17, 477–483. [Google Scholar] [CrossRef]
  68. Lübbers, H.T.; Medinger, L.; Kruse, A.; Grätz, K.W.; Matthews, F. Precision and accuracy of the 3dMD photogrammetric system in craniomaxillofacial application. J. Craniofac. Surg. 2010, 21, 763–767. [Google Scholar] [CrossRef]
  69. Patel, A.; Islam, S.M.; Murray, K.; Goonewardene, M.S. Facial asymmetry assessment in adults using three-dimensional surface imaging. Prog. Orthod. 2015, 16, 36. [Google Scholar] [CrossRef]
  70. Liu, Y.; Kau, C.H.; Talbert, L.; Pan, F. Three-dimensional analysis of facial morphology. J. Craniofac. Surg. 2014, 25, 1890–1894. [Google Scholar] [CrossRef] [PubMed]
  71. Maal, T.J.; van Loon, B.; Plooij, J.M.; Rangel, F.; Ettema, A.M.; Borstlap, W.A.; Bergé, S.J. Registration of 3-dimensional facial photographs for clinical use. J. Oral. Maxillofac. Surg. 2010, 68, 2391–2401. [Google Scholar] [CrossRef] [PubMed]
  72. Aldridge, K.; Boyadjiev, S.A.; Capone, G.T.; DeLeon, V.B.; Richtsmeier, J.T. Precision and error of three-dimensional phenotypic measures acquired from 3dMD photogrammetric images. Am. J. Med. Genet. A 2005, 138a, 247–253. [Google Scholar] [CrossRef] [PubMed]
  73. Naini, F.B.; Akram, S.; Kepinska, J.; Garagiola, U.; McDonald, F.; Wertheim, D. Validation of a new three-dimensional imaging system using comparative craniofacial anthropometry. Maxillofac. Plast. Reconstr. Surg. 2017, 39, 23. [Google Scholar] [CrossRef] [PubMed]
  74. Metzler, P.; Bruegger, L.S.; Kruse Gujer, A.L.; Matthews, F.; Zemann, W.; Graetz, K.W.; Luebbers, H.T. Craniofacial landmarks in young children: How reliable are measurements based on 3-dimensional imaging? J. Craniofac. Surg. 2012, 23, 1790–1795. [Google Scholar] [CrossRef]
  75. Van der Meer, W.J.; Dijkstra, P.U.; Visser, A.; Vissink, A.; Ren, Y. Reliability and validity of measurements of facial swelling with a stereophotogrammetry optical three-dimensional scanner. Br. J. Oral Maxillofac. Surg. 2014, 52, 922–927. [Google Scholar] [CrossRef]
  76. Sandbach, G.; Zafeiriou, S.; Pantic, M.; Yin, L. Static and dynamic 3D facial expression recognition: A comprehensive survey. Image Vis. Comput. 2012, 30, 683–697. [Google Scholar] [CrossRef]
  77. Frowd, C.; Matuszewski, B.; Shark, L.; Quan, W. Towards a comprehensive 3D dynamic facial expression database. In Proceedings of the 9th WSEAS International Conference on Signal, Speech and Image Processing and 9th WSEAS International Conference on Multimedia, Internet and Video Technologies, Budapest, Hungary, 3–5 September 2009; pp. 113–119. [Google Scholar]
  78. Zhong, Y.; Zhu, Y.; Jiang, T.; Yuan, J.; Xu, L.; Cao, D.; Yu, Z.; Wei, M. A novel study on alar mobility of HAN female by 3dMD dynamic surface imaging system. Aesthetic Plast. Surg. 2022, 46, 364–372. [Google Scholar] [CrossRef] [PubMed]
  79. Benedikt, L.; Cosker, D.; Rosin, P.L.; Marshall, D. Assessing the uniqueness and permanence of facial actions for use in biometric applications. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2010, 40, 449–460. [Google Scholar] [CrossRef]
  80. Alagha, M.A.; Ju, X.; Morley, S.; Ayoub, A. Reproducibility of the dynamics of facial expressions in unilateral facial palsy. Int. J. Oral Maxillofac. Surg. 2018, 47, 268–275. [Google Scholar] [CrossRef] [PubMed]
  81. Shujaat, S.; Khambay, B.S.; Ju, X.; Devine, J.C.; McMahon, J.D.; Wales, C.; Ayoub, A.F. The clinical application of three-dimensional motion capture (4D): A novel approach to quantify the dynamics of facial animations. Int. J. Oral Maxillofac. Surg. 2014, 43, 907–916. [Google Scholar] [CrossRef] [PubMed]
  82. Gašparović, B.; Morelato, L.; Lenac, K.; Mauša, G.; Zhurov, A.; Katić, V. Comparing Direct Measurements and Three-Dimensional (3D) Scans for Evaluating Facial Soft Tissue. Sensors 2023, 23, 2412. [Google Scholar] [CrossRef] [PubMed]
  83. Ulrich, L.; Vezzetti, E.; Moos, S.; Marcolin, F. Analysis of RGB-D camera technologies for supporting different facial usage scenarios. Multimed. Tools Appl. 2020, 79, 29375–29398. [Google Scholar] [CrossRef]
  84. Siena, F.L.; Byrom, B.; Watts, P.; Breedon, P. Utilising the Intel RealSense camera for measuring health outcomes in clinical research. J. Med. Syst. 2018, 42, 53. [Google Scholar] [CrossRef]
  85. Bamji, C.; Mehta, S.; Thompson, B.; Elkhatib, T.; Wurster, S.; Akkaya, O.; Payne, A.; Godbaz, J.; Fenton, M.; Rajasekaran, V.; et al. IMpixel 65nm BSI 320MHz demodulated TOF Image sensor with 3μm global shutter pixels and analog binning. In Proceedings of the IEEE International Solid–State Circuits Conference, San Francisco, CA, USA, 11–15 February 2018; pp. 94–96. [Google Scholar]
  86. Kurillo, G.; Hemingway, E.; Cheng, M.-L.; Cheng, L. Evaluating the accuracy of the Azure Kinect and Kinect v2. Sensors 2022, 22, 2469. [Google Scholar] [CrossRef] [PubMed]
  87. Ma, Y.; Sheng, B.; Hart, R.; Zhang, Y. The validity of a dual Azure Kinect-based motion capture system for gait analysis: A preliminary study. In Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, New Zealand, 7–10 December 2020; pp. 1201–1206. [Google Scholar]
  88. Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the pose tracking performance of the Azure Kinect and Kinect v2 for gait analysis in comparison with a gold standard: A pilot study. Sensors 2020, 20, 5104. [Google Scholar] [CrossRef] [PubMed]
  89. Yoshimoto, K.; Shinya, M. Use of the Azure Kinect to measure foot clearance during obstacle crossing: A validation study. PLoS ONE 2022, 17, e0265215. [Google Scholar] [CrossRef] [PubMed]
  90. Antico, M.; Balletti, N.; Laudato, G.; Lazich, A.; Notarantonio, M.; Oliveto, R.; Ricciardi, S.; Scalabrino, S.; Simeone, J. Postural control assessment via Microsoft Azure Kinect DK: An evaluation study. Comput. Methods Programs Biomed. 2021, 209, 106324. [Google Scholar] [CrossRef] [PubMed]
  91. Tölgyessy, M.; Dekan, M.; Chovanec, Ľ.; Hubinský, P. Evaluation of the Azure Kinect and Its comparison to Kinect V1 and Kinect V2. Sensors 2021, 21, 413. [Google Scholar] [CrossRef]
  92. McGlade, J.; Wallace, L.; Hally, B.; White, A.; Reinke, K.; Jones, S. An early exploration of the use of the Microsoft Azure Kinect for estimation of urban tree diameter at breast height. Remote Sens. Lett. 2020, 11, 963–972. [Google Scholar] [CrossRef]
  93. Neupane, C.; Koirala, A.; Wang, Z.; Walsh, K.B. Evaluation of depth cameras for use in fruit localization and sizing: Finding a successor to Kinect v2. Agronomy 2021, 11, 1780. [Google Scholar] [CrossRef]
  94. Cho, R.-Y.; Byun, S.-H.; Yi, S.-M.; Ahn, H.-J.; Nam, Y.-S.; Park, I.-Y.; On, S.-W.; Kim, J.-C.; Yang, B.-E. Comparative Analysis of Three Facial Scanners for Creating Digital Twins by Focusing on the Difference in Scanning Method. Bioengineering 2023, 10, 545. [Google Scholar] [CrossRef]
  95. Cho, S.W.; Byun, S.H.; Yi, S.; Jang, W.S.; Kim, J.C.; Park, I.Y.; Yang, B.E. Sagittal relationship between the maxillary central incisors and the forehead in digital twins of Korean adult females. J. Pers. Med. 2021, 11, 203. [Google Scholar] [CrossRef]
  96. Riphagen, J.M.; Van Neck, J.W.; Van Adrichem, L.N.A. 3D surface imaging in medicine: A review of working principles and implications for imaging the unsedated child. J. Craniofac. Surg. 2008, 19, 517–524. [Google Scholar] [CrossRef]
  97. Cattaneo, C.; Cantatore, A.; Ciaffi, R.; Gibelli, D.; Cigada, A.; De Angelis, D.; Sala, R. Personal identification by the comparison of facial profiles: Testing the reliability of a high-resolution 3D-2D comparison model. J. Forensic Sci. 2012, 57, 182–187. [Google Scholar] [CrossRef]
  98. Gibelli, D.; Pucciarelli, V.; Poppa, P.; Cummaudo, M.; Dolci, C.; Cattaneo, C.; Sforza, C. Three-dimensional facial anatomy evaluation: Reliability of laser scanner consecutive scans procedure in comparison with stereophotogrammetry. J. Craniomaxillofac. Surg. 2018, 46, 1807–1813. [Google Scholar] [CrossRef]
  99. Verhulst, A.; Hol, M.; Vreeken, R.; Becking, A.; Ulrich, D.; Maal, T. Three-dimensional imaging of the face: A comparison between three different imaging modalities. Aesthet. Surg. J. 2018, 38, 579–585. [Google Scholar] [CrossRef]
  100. ten Harkel, T.C.; Speksnijder, C.M.; van der Heijden, F.; Beurskens, C.H.G.; Ingels, K.J.A.O.; Maal, T.J.J. Depth accuracy of the RealSense F200: Low-cost 4D facial imaging. Sci. Rep. 2017, 7, 16263. [Google Scholar] [CrossRef]
  101. D’Ettorre, G.; Farronato, M.; Candida, E.; Quinzi, V.; Grippaudo, C. A comparison between stereophotogrammetry and smartphone structured light technology for three-dimensional face scanning. Angle Orthod. 2022, 92, 358–363. [Google Scholar] [CrossRef] [PubMed]
  102. Nord, F.; Ferjencik, R.; Seifert, B.; Lanzer, M.; Gander, T.; Matthews, F.; Rücker, M.; Lübbers, H.T. The 3dMD photogrammetric photo system in cranio-maxillofacial surgery: Validation of interexaminer variations and perceptions. J. Craniomaxillofac. Surg. 2015, 43, 1798–1803. [Google Scholar] [CrossRef] [PubMed]
  103. Kovacs, L.; Zimmermann, A.; Brockmann, G.; Gühring, M.; Baurecht, H.; Papadopulos, N.A.; Schwenzer-Zimmerer, K.; Sader, R.; Biemer, E.; Zeilhofer, H.F. Three-dimensional recording of the human face with a 3D laser scanner. J. Plast. Reconstr. Aesthet. Surg. 2006, 59, 1193–1202. [Google Scholar] [CrossRef] [PubMed]
  104. Aldridge, K.; George, I.D.; Cole, K.K.; Austin, J.R.; Takahashi, T.N.; Duan, Y.; Miles, J.H. Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Mol. Autism 2011, 2, 15. [Google Scholar] [CrossRef] [PubMed]
  105. Gerós, A.; Horta, R.; Aguiar, P. Facegram—Objective quantitative analysis in facial reconstructive surgery. J. Biomed. Inform. 2016, 61, 1–9. [Google Scholar] [CrossRef]
  106. Schlett, T.; Rathgeb, C.; Busch, C. Deep learning-based single image face depth data enhancement. Comput. Vis. Image Underst. 2021, 210, 103247. [Google Scholar] [CrossRef]
  107. Maal, T.J.; Plooij, J.M.; Rangel, F.A.; Mollemans, W.; Schutyser, F.A.; Bergé, S.J. The accuracy of matching three-dimensional photographs with skin surfaces derived from cone-beam computed tomography. Int. J. Oral Maxillofac. Surg. 2008, 37, 641–646. [Google Scholar] [CrossRef] [PubMed]
  108. Jodeh, D.S.; Curtis, H.; Cray, J.J.; Ford, J.; Decker, S.; Rottgers, S.A. Anthropometric evaluation of periorbital region and facial projection using three-dimensional photogrammetry. J. Craniofac. Surg. 2018, 29, 2017–2020. [Google Scholar] [CrossRef] [PubMed]
  109. Andrade, L.M.; Rodrigues da Silva, A.M.B.; Magri, L.V.; Rodrigues da Silva, M.A.M. Repeatability study of angular and linear measurements on facial morphology analysis by means of stereophotogrammetry. J. Craniofac. Surg. 2017, 28, 1107–1111. [Google Scholar] [CrossRef] [PubMed]
  110. Taylor, H.O.; Morrison, C.S.; Linden, O.; Phillips, B.; Chang, J.; Byrne, M.E.; Sullivan, S.R.; Forrest, C.R. Quantitative facial asymmetry: Using three-dimensional photogrammetry to measure baseline facial surface symmetry. J. Craniofac. Surg. 2014, 25, 124–128. [Google Scholar] [CrossRef]
Figure 1. Description of the PICO (P = population/participants; I = intervention; C = comparator/control; O = outcomes) elements used in structuring the research question and the search strategy.
Figure 2. Flow diagram illustrating the study selection process.
Figure 3. Representation of the 3D surface imaging technologies and systems: 3D, three-dimensional; CBCT, cone beam computed tomography; 4D, four-dimensional; RGB-D, red-green-blue-depth.
Figure 4. Vectra H1 facial imaging system (A) and an automatically stitched 3D facial scan generated using the Vectra H1 face acquisition system (B).
Figure 5. Morpheus 3D facial imaging system (A) and a sample of a single composite 3D facial image generated using the Morpheus 3D face acquisition system (B).
Figure 6. Accu3D facial imaging system (A) and an automatically merged 3D facial scan generated using the Accu3D face acquisition system (B).
Figure 7. Planmeca ProFace imaging system (A) and a 3D face photo generated using the Planmeca ProFace imaging system (B).
Figure 8. Bellus3D FaceApp interface (A) and a 3D facial scan generated using Bellus3D FaceApp (B).
Figure 9. 3dMDface imaging system (A) and a 3dMDface system-rendered 3D facial scan with accurate surface texture and color (B).
Table 1. Definitions of the characteristics studied in 3D face acquisition systems.

| Category | Characteristic | Definition |
|---|---|---|
| Hardware | Portability | Hand-held and compact, or bulky and cumbersome to relocate |
| Hardware | System mobility | System is fixed or mobile while scanning |
| Hardware | Sensor position | Sensor is static or dynamic while scanning |
| Hardware | Cost-effectiveness | Inexpensive equipment and price-worthy operation for use in a clinical setting |
| Software | CT/CBCT integration | Permits integration with other imaging tools such as CT/CBCT |
| Software | Surgery simulation | Allows simulation of surgical procedures through indigenous or third-party software |
| Software | Real-time 3D volumetric visualization | Capability to generate a real-time photorealistic 3D virtual copy of the face |
| Software | Tissue behavior simulation | Predicts post-treatment outcomes based on indigenous or third-party software |
| Software | Progress monitoring and outcome evaluation | Enables treatment monitoring at different time points and outcome evaluation |
| Functionality | Purpose | Provision of facial measurement-based quantifiable and continuous data |
| Functionality | Data delivery | Delivers data while the object is still or in motion |
| Functionality | Scanning time | Time required by the system to scan an object |
| Functionality | Processing time | Time required by the software from editing and merging the acquired meshes to generating a 3D model |
| Functionality | Coverage | Captures only the face (excluding ears), face and neck, or full face (ear-to-ear) and neck |
| Functionality | Scan requisite | Requires a single scan, multiple continuous scans, or multiple scans stitched together to generate a 3D image |
| Functionality | Accuracy | Data generated by the system are sufficiently close to the real data |
| Functionality | Precision | Data generated by the system display high reliability |
| Functionality | Archivable data | Data generated by the system can be stored in industry-standard and easily accessible formats |
| Functionality | User-friendliness | Does not require specialized training or equipment |
| Functionality | System requirements | Does not have extensive hardware or software requirements |
3D, three-dimensional; CT/CBCT, computed tomography/cone beam computed tomography.
Table 2. Hardware, software, and functionality characteristics of 3D face acquisition systems analyzed in the review. Technologies represented: laser-based scanning, stereophotogrammetry, structured light scanning, CBCT-integrated scanning, smartphone-based scanning, 4D imaging, and RGB-D imaging.

| Category | Characteristic | Minolta Vivid 910 | FastSCAN II | Vectra H1 | Di3D FCS-100 | Morpheus 3D | Accu3D | Axis Three XS-200 | Planmeca ProFace | Bellus3D FaceApp | Bellus3D Face Camera Pro | 3dMD | DI4D | Intel RealSense D435 | Azure Kinect DK | RAYFace |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hardware | Portability | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | N | N | Y | Y | Y |
| Hardware | System mobility | Stationary | Mobile | Mobile | Stationary | Stationary | Mobile | Stationary | Stationary | Stationary | Stationary | Stationary | Stationary | Mobile | Mobile | Stationary |
| Hardware | Sensor position | Static | Dynamic | Dynamic | Static | Static * | Dynamic | Static | Dynamic | Static * | Static * | Static | Static | Dynamic | Dynamic | Static |
| Hardware | Cost-effectiveness | Y | Y | Y | N | - | Y | Y | N | Y | Y | N | N | Y | Y | Y |
| Software | CT/CBCT integration | - | - | Third-party software | Third-party software | - | - | N | Romexis | N | N | 3dMDvultus/third-party software | - | - | - | RAYFace solution |
| Software | Surgery simulation | - | - | Y | Third-party software | Y | Y | Y | Y | N | N | Y | - | N | - | - |
| Software | Real-time 3D volumetric visualization | Y | Y | Y | Y | Y | Y | Y | - | N | N | Y | Y | - | - | Y |
| Software | Tissue behavior simulation | - | - | Y | Third-party software | Y | Y | Y | - | N | N | Y | - | - | - | - |
| Software | Progress and outcome monitoring | - | - | Y | Y | Y | Y | Y | - | N | N | Y | - | - | - | Y |
| Functionality | Purpose | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Functionality | Data delivery | Still | Still | Still | Still | Still | Still | Still | Still | Still | Still | Motile | Motile | Motile | Motile | Still |
| Functionality | Capture speed/scanning time | 0.3–2.5 s | <1 min | 2 ms | 1 ms | 0.8 s | 0.5 s | <2 s | 30 s | 10 s | 25 s | ≈1.5 ms/1–120 fps | ≈2 ms/f | 90 fps | 20.3 ms | 0.5 s |
| Functionality | Processing time | - | - | ≈20 s | 60 s | <2 min | <1 min | - | - | - | 15–30 s | <8 s | 30 s | - | - | <1 min |
| Functionality | Scan range | 1300 × 1100 mm | 50 cm | ≈100° | ≈180° | 225 × 300 mm | - | ≈180° | - | - | 66–69° | 190°–360° | ≈180° | 87° × 58° | 120° × 120° | 550 × 310 mm |
| Functionality | Coverage | Face | Full face | Full face | Full face | Face | Full face | Face + Neck | Face | Full face | Full face | Full face | Full face | Face | Face | Full face |
| Functionality | Optimal 3D measurement range | 0.6–1.2 m | 2–4 inch | 350–450 mm | - | 650 mm | 45–50 cm | 1 m | - | - | 30–45 cm | 1 m | - | 0.3–3 m | 0.25–2.21 m | - |
| Functionality | Color image | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Functionality | Scan requisite | Multiple | Multiple | Multiple | Single | Multiple | Multiple | Single | Single | Single | Single | Single | Single | Multiple | Multiple | Single |
| Functionality | Output format | stl, dxf, obj, ascii points, vrml | Several format options | - | Several format options | - | Several format options | - | stl | obj, stl | .obj, .mtl, .jpeg, .stl, .yml | Several format options | Several format options | - | - | stl, .obj |
| Functionality | Scan processing software-enabled | Y | Y | Y | Di3Dview | MAS | Accu3DX Pro | Y | Romexis | Y | Y | 3dMDvultus | DI4Dlive | Intel RealSense SDK 2.0 | Y | RAYFace solution |
| Functionality | Accuracy | Y | Y | Y | Y | Y | - | - | N | Y | Y | Y | Y | - | Y | - |
| Functionality | Precision | Y | Y | Y | Y | Y | - | - | N | Y | Y | Y | - | - | Y | - |
| Functionality | Archivable data | Y | Y | Y | Y | - | Y | - | Y | Y | Y | Y | Y | - | - | - |
| Functionality | User-friendliness | Y | Y | Y | Y | Y | Y | N | - | Y | Y | Y | Y | Y | Y | Y |
| Functionality | System requirements | Minimal | Minimal | Minimal | Minimal | Minimal | Minimal | Minimal | Minimal | Minimal | Minimal | Extensive | Minimal | Minimal | Minimal | Minimal |
| Functionality | Calibration time | NR | - | NR | 5 min | - | - | <5 min | - | NR | NR | 20–100 s | 5 min | - | - | - |
3D, three-dimensional; CBCT, cone-beam computed tomography; CT, computed tomography; 4D, four-dimensional; RGB-D, red-green-blue-depth; * requires turning the subject’s face; Y, yes; N, no; NR, not required; fps, frames per second; f, frame; third-party software such as Dolphin, Maxilim, and Materialize OMS; MAS, Morpheus Aesthetic Solution.
Table 3. Limitations and drawbacks of 3D face acquisition systems.
3D Face Acquisition SystemDisadvantages and Limitations
Minolta Vivid 910
  • Sensitive to lighting conditions, but operates well indoors.
  • Patients may notice a quick red flash when the laser stripe crosses the pupil.
  • Clinical usage is limited owing to its inability to assess dynamic facial function.
  • Slow acquisition times may introduce motion artifacts.
FastSCAN II
  • Large metal objects and electromagnetic fields may interfere with the scanner’s tracking and performance.
  • The number of studies validating the system is limited.
Vectra H1,
Di3D FCS-100
  • Surface texture details may be influenced by the glare on the patient’s face caused by strong directional ambient light.
  • Di3D system is no longer available for commercial use, as the company has replaced it with a more advanced DI4D system.
Morpheus 3D
  • Occasional localized distortion of the image may occur in the integration line region.
  • Does not support the export of the generated 3D facial images.
Accu3D
  • Additional lighting is required, as the reflected image on the screen appears dark, making it difficult to follow the image capture protocol.
  • Patient must be seated very close to the chest rest attachment of the mount during image capture.
  • The system has not been scientifically validated yet.
Axis Three XS-200
  • Slow capture speed of 2 s may result in missing or noisy raw data.
  • Re-calibration is required each time the hardware is moved.
  • The system has not been scientifically validated yet.
Planmeca Pro Face
  • Lengthy image acquisition times.
  • The ProFace scanner is sensitive to subtle movements or blinking during the scanning, which may affect the image resolution around the eyes, specifically in the exocanthion region.
  • Capturing posterior facial landmarks, including the tragus and otobasion, is challenging owing to ProFace scanner’s limited field of view.
  • The ProFace system’s standard camera positioning may limit the coverage of landmarks underneath the chin, such as the gnathion.
Bellus3D FaceApp
Bellus3D Face Camera Pro
  • The TrueDepth camera’s sensitivity to infrared interference can affect the quality of 3D images in bright sunlight.
  • The mesh generated by Bellus3D FaceApp may not conform closely to the scanned face.
  • Scanning of hair is difficult with the Face Camera Pro.
  • Only one 3D scan can be downloaded at a time, and downloads are expensive.
  • The company recently discontinued its 3D face scanning operations, so purchasing or downloading its products may no longer be feasible.
3dMD
  • Recalibration may be required in some cases prior to image capture.
  • May not provide a good representation of the prominent areas of the face, such as the nose, ear, and lip margins and salivated cleft regions in non-treated cleft patients.
  • Daylight can cause lighting artifacts and affect the accuracy of generated 3D faces.
  • The overly large letters of the pre-labeled landmarks in the software make precise on-screen marking of landmarks around the mouth and nose difficult.
DI4D
  • The system has not yet been validated for surgery simulation or tissue behavior simulation (TBS) studies.
Intel RealSense D435
  • Intel RealSense cameras are sensitive to light intensity variances.
  • Depth quality and resolution deteriorate as the distance of the object from the camera increases.
Azure Kinect DK
  • Poor resolution may lead to inaccurate results.
  • Depth measurements are sensitive to off-center object placement, optical distortions of the lens, and calibration accuracy.
RAYFace
  • Image distortion in curved facial regions such as subzygomatic and subnasal areas.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
