Article
In Vivo Accuracy Assessment of Two Intraoral Scanners Using Open-Source Software: A Comparative Full-Arch Pilot Study

1 Department of Biomedical and Dental Sciences and Morphofunctional Imaging, Messina University, 98100 Messina, Italy
2 Department of Engineering, Messina University, Contrada di Dio (S. Agata), 98166 Messina, Italy
3 Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
4 Department of Neuroscience, Reproductive Science and Dentistry, University of Naples Federico II, 80138 Naples, Italy
* Author to whom correspondence should be addressed.
Oral 2025, 5(4), 97; https://doi.org/10.3390/oral5040097
Submission received: 19 September 2025 / Revised: 10 November 2025 / Accepted: 19 November 2025 / Published: 2 December 2025

Abstract

Background: The precision of intraoral scanners (IOSs) is a key factor in ensuring the reliability of digital impressions, particularly in full-arch workflows. Although proprietary metrology tools are generally employed for scanner validation, open-source platforms could provide a cost-effective alternative for clinical research. Methods: This in vivo study compared the precision of two IOSs—3Shape TRIOS 3 and Planmeca Emerald S—using an open-source analytical workflow based on Autodesk Meshmixer and CloudCompare. A single healthy subject underwent five consecutive full-arch scans per device. Digital models were trimmed, aligned by manual landmarking and iterative closest-point refinement, and analyzed at six deviation thresholds (<0.01 mm to <0.4 mm). The percentage of surface points within clinically acceptable limits (<0.3 mm) was compared using paired t-tests. Results: TRIOS 3 exhibited significantly higher repeatability than Planmeca Emerald S (p < 0.001). At the <0.3 mm threshold, 99.3% ± 0.4% of points were within tolerance for TRIOS 3 versus 92.9% ± 6.8% for Planmeca. At the <0.1 mm threshold, values were 89.6% ± 5.7% and 47.3% ± 13.7%, respectively. Colorimetric deviation maps confirmed greater spatial consistency of TRIOS 3, particularly in posterior regions. Conclusions: Both scanners achieved clinically acceptable precision for full-arch impressions; however, TRIOS 3 demonstrated superior repeatability and lower variability. The proposed open-source workflow proved feasible and reliable, offering an accessible and reproducible method for IOS performance assessment in clinical settings.

1. Introduction

The integration of intraoral scanners (IOSs) into the digital workflow has significantly improved modern dental practice, especially in prosthodontics. These technologies allow for accurate prosthetic planning and fabrication by leveraging computer-aided design and computer-assisted manufacturing (CAD/CAM) systems [1,2,3]. Compared to conventional impression techniques, IOSs offer several advantages, including enhanced patient compliance, reduced chairside time, elimination of impression material distortion, easier data storage and transfer, and the ability to visualize scanned surfaces in three dimensions [2,4,5].
However, despite these advantages, several factors may compromise scan quality in clinical settings. Errors can occur in the presence of saliva, crevicular fluid, or blood on dental and gingival tissues, or due to patient movement during the scanning procedure [3,6,7]. IOSs generally project a structured light grid onto the teeth and capture the resulting distortion using high-resolution cameras. These images are processed by dedicated software to reconstruct a 3D model of the scanned area. This principle is shared with industrial-grade scanners [8,9,10].
Conventional impression materials, such as polyvinyl siloxane and polyether, are still regarded as the gold standard in fixed prosthodontics. This is largely due to the limited number of in vivo studies that thoroughly assess the accuracy of digital impressions, particularly in complex cases. Additional clinical validation is required before digital impressions can be fully endorsed as an alternative to conventional methods [3,11].
A fundamental parameter in evaluating IOS performance is accuracy, which comprises trueness—the ability to reproduce the actual geometry of an object—and precision, defined as the consistency among repeated measurements under identical conditions [3,12,13,14]. While trueness is typically assessed by comparing digital to conventional impressions, precision refers to the degree of variability among successive digital scans. A scanner with high precision is expected to minimize the dispersion of the dataset [15]. Precision may be particularly affected when scanning large objects such as full arches, due to the accumulation of alignment errors and the limited field of view of the scanner.
Intraoral scanning generates a 3D surface dataset through the superimposition of multiple captured frames. Errors in this stitching process tend to increase with the size of the scanned area, as more images must be aligned, which can compromise the final accuracy [16]. Although several in vitro studies have reported high precision for full-arch scans, these do not account for variables encountered in vivo, such as reflective surfaces, intraoral moisture, and patient motion [7,10]. Only a limited number of studies have directly evaluated scan precision in the oral cavity [16,17]. For example, Kwon et al. found a mean absolute precision of 56.6 ± 52.4 μm when comparing various intraoral scanners (i500, CS3600, Trios 3, iTero (Rochester, NY, USA), and CEREC Omnicam (Bensheim, Germany)), demonstrating the feasibility of in vivo precision assessments [16].
Knowing the precision of IOSs is clinically valuable; however, most tools for 3D superimposition analysis are proprietary and expensive. Open-source software may serve as a valid alternative, offering free and accessible means of analysis [18]. Nevertheless, its adoption in dentistry has been limited by low-quality and incomplete documentation [18].
The aim of the present in vivo study is to evaluate the precision of two intraoral scanners, 3Shape Trios (Milan, Italy) and Planmeca Emerald (Helsinki, Finland), using open-source software. The protocol is based on a previously published method that assessed the Medit i500 scanner (Seoul, Republic of Korea) [19].
The null hypotheses were as follows:
1. No differences in precision (i.e., variance of repeated measures) exist among repeated digital impressions.
2. There are no significant differences in the percentage of clinically acceptable deviations, defined as <0.3 mm [19].

2. Materials and Methods

This in vivo experimental study aimed to evaluate the precision of two commercially available intraoral scanners—3Shape TRIOS 3 and Planmeca Emerald S—by performing repeated full-arch scans of a single healthy subject under standardized clinical conditions. The study design was based on the protocol previously described by Lo Giudice et al., which investigated the accuracy of the Medit i500 scanner using open-source 3D analysis software [19].

2.1. Subject Selection and Ethical Considerations

A single adult volunteer with a full upper dentition (excluding third molars) and a Decayed, Missing, and Filled Teeth (DMFT) index of zero was enrolled. The subject presented no signs of periodontal disease, restorations, or malocclusions that could interfere with the scanning process. The study adhered to ethical guidelines in line with the Declaration of Helsinki and was approved by the ethical committee (prot. n.95-23, 14 December 2022, Messina Local ethical committee, A.O.U. G. Martino, A.O. Papardo, A.S.P.). Prior to any procedures, the participant was provided with a brief written description of the study’s objective, and written informed consent was obtained before participation. The in vivo setting was chosen to replicate routine clinical conditions, introducing intraoral variables such as humidity, patient movement, and soft tissue presence. Although ISO 5725-1 [20] recommends at least 30 repetitions per specimen for full metrological validation, such repetition is not feasible in vivo due to patient discomfort, ethical limitations, and time constraints. Therefore, the present study adopted the validated in vivo precision protocol proposed by Lo Giudice et al. (2022) [19], which used five consecutive scans per device, generating ten pairwise comparisons for each scanner. This approach provides robust intra-operator repeatability while maintaining clinical realism. The present investigation involved a single participant, selected to ensure high internal validity and to reduce biological variability among scans, consistent with prior in vivo precision studies [19]. Power analysis was not performed because the experimental unit was the repeated scan pair rather than the participant. The design followed validated in vivo protocols where repeated measurements under identical conditions provide sufficient statistical power to detect inter-device variability [14,19].

2.2. Operator Standardization

All scans were performed by one experienced operator with over ten years of clinical practice and extensive daily use of IOSs. This approach was adopted to minimize operator-related variability and enhance intra-operator repeatability. To ensure optimal device performance, the operator strictly adhered to the scanning protocols provided by each manufacturer, including device-specific calibration prior to each scan session.

2.3. Scanning Procedure

Each scanner was used to perform five consecutive scans of the maxillary arch on the same subject. No scanning sprays, opacifiers, or contrast agents were applied. The scanning sequence was standardized: the operator initiated the scan at the occlusal surfaces of the left second molar, progressing to the right second molar, followed by buccal surfaces and ending with palatal aspects. This sequence, recommended by the manufacturers, was selected to minimize deviation from best-practice techniques.

2.4. Data Processing and Format

All digital impressions were exported in the OBJ file format to maintain mesh fidelity and compatibility with open-source 3D analysis platforms. To isolate the relevant dental surfaces and eliminate interference from soft tissue, gingiva, and mucosa, digital trimming was performed using Autodesk Meshmixer (version 3.5.4). The segmentation process was conducted carefully to ensure that only anatomically consistent hard tissue regions were retained.
A schematic representation of the entire experimental workflow is provided in Figure 1. The figure illustrates (a) the standardized scanning sequence (occlusal–buccal–palatal trajectory), (b) digital trimming and isolation of hard tissues in Autodesk Meshmixer, and (c(1–3)) subsequent alignment and deviation analysis performed in CloudCompare through landmark-based registration and iterative closest-point (ICP) refinement. This visual outline clarifies the sequential steps of the methodology and facilitates reproducibility of the protocol.

2.5. Alignment and Superimposition Protocol

The 3D models were imported into CloudCompare software (version 2.12) for alignment and deviation analysis. Each scan was paired with all others from the same scanner, producing ten pairwise combinations (Scan 1 vs. Scan 2, Scan 1 vs. Scan 3, etc.).
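The pairing scheme described above is simply the set of two-element combinations of the five scans, which yields the ten comparisons per device. A short Python sketch, shown only to illustrate the counting (not part of the study's workflow):

```python
from itertools import combinations

scans = [f"Scan {i}" for i in range(1, 6)]   # five repeated scans per device
pairs = list(combinations(scans, 2))          # every unordered scan pair
print(len(pairs), "pairwise comparisons per scanner")
```

With two scanners, this gives the twenty total comparisons analyzed in the study.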
An initial rough alignment was performed using four manually selected anatomical landmarks: the incisal edges of the right and left central incisors and the mesio-buccal cusp tips of the right and left first molars. These reference points provided symmetrical and reproducible regions for alignment. A refined alignment was then conducted using the Iterative Closest Point (ICP) algorithm, which iteratively minimizes the mean point-to-point distance between meshes to achieve optimal superimposition. ICP was selected because it is widely recognized as the gold-standard algorithm for 3D surface registration in both industrial metrology and dental accuracy assessment [6,20]. To minimize operator bias during landmark selection, the same operator performed all alignments twice on randomly selected scan pairs, achieving a repeatability error below 0.05 mm. This confirmed the reliability and consistency of the manual landmarking procedure. This two-step alignment ensured high reproducibility and minimized operator-dependent bias.
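For readers unfamiliar with the algorithm, the ICP refinement step can be sketched in a few lines of Python with NumPy. This is a generic point-to-point ICP run on small synthetic point clouds, not CloudCompare's implementation; the cloud sizes, iteration count, and simulated misalignment are illustrative assumptions only:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch solution: rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=20):
    # point-to-point ICP: match nearest neighbours, solve, apply, repeat
    src = source.copy()
    for _ in range(iterations):
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[dists.argmin(axis=1)]   # closest target point per source point
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src

rng = np.random.default_rng(42)
target = rng.uniform(0, 10, size=(50, 3))        # stand-in "reference scan" cloud
theta = 0.02                                     # small simulated misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, 0.05, 0.0])
aligned = icp(source, target)
residual = np.linalg.norm(aligned - target, axis=1).mean()
print(f"mean residual after ICP: {residual:.2e}")
```

In practice, the manual landmark step supplies the rough starting pose so that the nearest-neighbour matching inside ICP begins close to the correct correspondences, which is why the two-step protocol is robust.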

2.6. Deviation Assessment and Colorimetric Analysis

Once aligned, the surface-to-surface deviations were evaluated both qualitatively and quantitatively. Colorimetric deviation maps were generated to visualize spatial differences between the scans. These maps applied a calibrated chromatic scale to indicate areas of minimal and maximal deviation, facilitating intuitive visual interpretation.
Quantitative deviation data were calculated across all matched points for each pairwise comparison. Six deviation thresholds were evaluated: <0.01 mm, <0.05 mm, <0.1 mm, <0.2 mm, <0.3 mm, and <0.4 mm. These limits were selected according to previous studies demonstrating that surface deviations below 0.3 mm are clinically acceptable for full-arch digital impressions [6,19]. The percentage of surface points falling within each range was recorded, and mean ± standard deviation values were calculated across all scan pairs for each scanner.
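The per-threshold tallies reduce to counting, for each scan pair, the fraction of matched points whose absolute deviation falls below each cutoff. A minimal NumPy sketch on simulated deviations (the Gaussian spread of 0.08 mm is a made-up stand-in, not study data):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical per-point absolute deviations (mm) for one scan-pair comparison
deviations = np.abs(rng.normal(loc=0.0, scale=0.08, size=50_000))

thresholds = [0.01, 0.05, 0.10, 0.20, 0.30, 0.40]
pct_within = {t: 100.0 * float(np.mean(deviations < t)) for t in thresholds}
for t, pct in pct_within.items():
    print(f"<{t:.2f} mm: {pct:5.1f}% of points")
```

Averaging these percentages across the ten scan pairs per device gives the mean ± standard deviation values reported in the results.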

2.7. Data Presentation

Results were tabulated separately for each scanner, with detailed reporting of the percentage of points within each threshold, providing a robust comparison of the scanners’ performance. A schematic illustration of the experimental workflow is provided in Figure 1. Deviation and distribution between the scanners with interscan comparisons are shown in Figure 2, Figure 3, Figure 4 and Figure 5. Representative colorimetric heat maps generated during the analysis are shown in Figure 6, where green indicates minimal deviation and red/blue correspond to positive and negative discrepancies. These visualizations facilitate intuitive interpretation of spatial error distribution.

Statistical Analysis

Descriptive and inferential analyses were performed to compare scanner precision. For each threshold (<0.01 mm to <0.4 mm), the mean percentage of surface points and corresponding standard deviations were computed across ten scan-pair comparisons per device.
Differences between the two scanners were evaluated using paired t-tests at the <0.1 mm and <0.3 mm thresholds, representing high-resolution and clinically acceptable accuracy levels, respectively [6,19]. The significance level was set at p < 0.05. The coefficient of variation (CV) was also calculated to quantify relative dispersion and assess measurement stability. For each inter-scanner comparison, 95% confidence intervals (CI) and effect sizes (Cohen’s d) were also calculated to quantify the magnitude and precision of the observed differences.
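The paired t-statistic, Cohen's d for paired designs, and the coefficient of variation can all be computed directly from the ten per-pair percentages. The sketch below uses hypothetical per-pair values chosen only to match the reported group means (the actual per-pair data are not published here), so the resulting t will not reproduce the paper's exact statistics; the p-value would then be read from a t-distribution with df = 9 (e.g., via GraphPad Prism):

```python
import math

# hypothetical % of points within <0.3 mm for ten scan pairs (illustrative only)
planmeca = [96.1, 85.2, 94.8, 97.3, 80.5, 95.9, 93.0, 98.2, 92.7, 95.3]
trios3 = [99.1, 99.5, 98.9, 99.7, 99.2, 99.6, 99.0, 99.4, 99.3, 99.3]

diffs = [a - b for a, b in zip(planmeca, trios3)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t-test statistic, df = n - 1
cohens_d = mean_d / sd_d                  # effect size for paired designs

mean_t = sum(trios3) / n
sd_t = math.sqrt(sum((x - mean_t) ** 2 for x in trios3) / (n - 1))
cv_trios = 100 * sd_t / mean_t            # coefficient of variation (%)

print(f"t = {t_stat:.2f} (df = {n - 1}), Cohen's d = {cohens_d:.2f}, CV = {cv_trios:.2f}%")
```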
All analyses were performed using GraphPad Prism 10.2 (GraphPad Software, La Jolla, CA, USA).

3. Results

Each 3D model obtained from the intraoral scans was sequentially labelled (Scan 1 to Scan 5) to facilitate pairwise comparisons. For each intraoral scanner, ten scan-pair combinations were generated (e.g., Scan 1 vs. Scan 2, Scan 1 vs. Scan 3, etc.), resulting in twenty total comparisons across the two devices. These were used to quantify the deviations in surface registration between repeated scans, expressed in millimetres (Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5).

3.1. Qualitative Analysis

Colorimetric deviation maps were generated using CloudCompare to provide a visual representation of surface discrepancies. As shown in Figure 6, green areas indicate minimal deviation between superimposed scans, while red and blue areas reflect positive and negative deviations, respectively. These maps allowed for intuitive assessment of spatial accuracy across the dental arch and facilitated the identification of regions with greater mismatch, typically in the posterior sectors (Figure 6).

3.2. Quantitative Deviation Analysis

Surface deviation was evaluated across six thresholds: <0.01 mm, <0.05 mm, <0.1 mm, <0.2 mm, <0.3 mm, and <0.4 mm. For each scan pair, the percentage of points falling below these thresholds was calculated. Summary data for Planmeca and 3Shape scanners are presented in Table 1 and Table 2, respectively. The mean values and standard deviations for each threshold across the ten scan pairs were also computed.

3.3. Planmeca Emerald S

The Planmeca scanner demonstrated moderate repeatability. At the clinically accepted threshold of <0.3 mm, the average percentage of compliant surface points was 92.9% ± 6.8%, with a coefficient of variation (CV) of 7.3%, indicating moderate data dispersion. At the more restrictive threshold of <0.1 mm, the mean was substantially lower (47.3% ± 13.7%), reflecting greater variability.
The cumulative distribution of deviation values (Figure 2) and corresponding frequency distribution (Figure 3) confirm that while most scan pairs reached acceptable accuracy by the 0.3 mm threshold, the scanner’s performance declined at narrower tolerances.

3.4. 3Shape TRIOS 3

The 3Shape scanner exhibited superior precision. At the <0.3 mm threshold, the average percentage of points within acceptable deviation was 99.3% ± 0.4%, with a CV of just 0.4%, reflecting excellent repeatability and consistency across all scan pairs. At the <0.1 mm threshold, the scanner still maintained high performance, with an average of 89.6% ± 5.7%. Deviation trends are graphically illustrated in Figure 4 (cumulative distribution) and Figure 5 (frequency distribution), showing tightly clustered values and minimal variability.
Representative colorimetric heat maps (Figure 6) illustrate the spatial distribution of deviations between repeated scans for both scanners. TRIOS 3 maps display predominantly green areas, indicating high repeatability and minimal surface variation, while Planmeca Emerald S maps show broader red and blue regions, especially in the posterior sectors. These visual results corroborate the quantitative findings reported in Table 1 and Table 2 and support the statistical significance of the observed differences (p < 0.001).

3.5. Comparative Statistical Analysis

To statistically compare scanner performance, paired t-tests were conducted on the percentage of compliant points at the <0.1 mm and <0.3 mm thresholds across the ten scan-pair sets per scanner.
  • <0.1 mm threshold
    • Planmeca: 47.3% ± 13.7%.
    • 3Shape: 89.6% ± 5.7%.
    • Paired t-test: t = −9.42, df = 9, p < 0.001.
  • <0.3 mm threshold
    • Planmeca: 92.9% ± 6.8%.
    • 3Shape: 99.3% ± 0.4%.
    • Paired t-test: t = −5.47, df = 9, p = 0.0004.
These findings confirm that 3Shape TRIOS 3 outperformed Planmeca Emerald S with statistical significance at both evaluated thresholds.

3.6. Combined Performance and Clinical Relevance

When combining the scan-pair results from both devices (n = 20), the overall mean precision at the <0.3 mm threshold was 96.1% ± 4.6%, with a combined coefficient of variation of 4.8%. This supports the clinical acceptability of both scanners within the evaluated threshold.
Nonetheless, the 3Shape scanner consistently delivered more accurate and reliable scans, especially at stricter tolerances, supporting its use in clinical applications where high precision is critical.
The use of colorimetric maps allowed for an effective visualization of the deviations between the compared models, with a representative example shown in Figure 6. Through a color scale, these maps highlight the areas with the most significant deviations, facilitating the visual interpretation of the data.
The quantitative analysis of the deviations focused on the cumulative frequency of points recording a deviation below each pre-established threshold. The results are summarized in Table 1 and Table 2, which report the percentage of points within specific deviation limits, along with mean values and standard deviations for each pair of models.

4. Discussion

The present in vivo study compared the precision of two widely used intraoral scanners—3Shape TRIOS 3 and Planmeca Emerald S—using a standardized full-arch scanning protocol and an open-source analytical workflow. Precision, defined as the degree of agreement among repeated measurements under identical conditions, represents a fundamental aspect of the overall accuracy of intraoral scanning, complementing trueness as established by ISO 5725-1 [20]. The study focused exclusively on precision—the consistency among repeated measurements—without including a reference-based control group for trueness assessment. This design choice was deliberate, as the primary aim was to evaluate intra-operator repeatability under realistic clinical conditions rather than the absolute accuracy of each device. Nonetheless, the absence of a control group is acknowledged as a limitation, as it precludes direct comparison to a ground-truth model. Future research should include a reference scanner or master cast to quantify trueness in parallel with precision.
Both scanners achieved precision levels consistent with the clinically acceptable threshold of <0.3 mm, a limit commonly adopted in prosthodontic literature to ensure proper marginal adaptation and passive fit [6,7,15,21]. However, the quantitative findings demonstrated a clear difference between the two devices: TRIOS 3 achieved 99.3% ± 0.4% of points within the <0.3 mm threshold and 89.6% ± 5.7% within the <0.1 mm range, whereas Planmeca Emerald S reached 92.9% ± 6.8% and 47.3% ± 13.7%, respectively. The statistically significant results (p < 0.001) confirm that these differences are not attributable to random variation. The deviation thresholds applied in this study (<0.01 mm to <0.4 mm, with <0.3 mm as the clinically acceptable limit) were derived from the literature on complete-arch accuracy [6,22,23]. This threshold has been consistently adopted in the literature as the upper limit for clinical acceptability in full-arch digital impressions, ensuring marginal adaptation comparable to conventional materials [6,15,24]. These boundaries align with ISO 5725-1 standards and ensure comparability with previous investigations by Ender and Mehl (2013) [6] and Lo Giudice et al. (2022) [19], who validated similar precision criteria in in vitro and in vivo contexts, respectively.
From a clinical perspective, these outcomes underscore the importance of distinguishing between scanners according to the tolerance requirements of specific rehabilitative procedures [25,26]. While both systems may be suitable for conventional restorative workflows—such as single crowns or short-span prostheses—where dimensional deviations up to 300 µm do not compromise adaptation [3,27,28], only TRIOS 3 demonstrated repeatability compatible with the more stringent tolerance levels (<100 µm) needed for implant-supported or full-arch restorations [6,29,30]. In these cases, even minimal inter-scan variability can translate into clinically significant misfits, jeopardizing passive fit and potentially leading to mechanical complications such as screw loosening or prosthetic stress [31,32].
The findings are coherent with previous in vivo and in vitro investigations assessing IOS performance. Ender and Mehl [6] reported that cumulative errors tend to increase with the length of the scanned arch, particularly beyond the premolar region. Similarly, Kwon et al. [14] observed a mean absolute precision of 56.6 ± 52.4 µm when comparing five intraoral scanners, noting that posterior sectors exhibited the greatest deviations. In our study, colorimetric deviation maps (Figure 6) confirmed that discrepancies were concentrated in the posterior segments for both scanners, yet more pronounced with Planmeca Emerald S. This pattern is consistent with the limited field of view and frame-stitching accumulation intrinsic to the scanning technology [10,16,28]. The lower standard deviation observed for TRIOS 3 reflects its higher capture frequency and larger field of view, which minimize cumulative stitching errors. In contrast, Planmeca Emerald S, based on sequential image stitching with smaller optical frames, is more susceptible to cumulative drift, particularly in posterior regions. These hardware and software differences account for the disparity in precision consistency between devices.
The high performance of TRIOS 3 can be attributed to its advanced optical triangulation and real-time correction algorithms, which enhance surface recognition and reduce cumulative drift. These characteristics explain the low standard deviation and coefficient of variation (CV = 0.4%) recorded in our dataset, reflecting exceptional measurement repeatability. Comparable findings were described by Lo Giudice et al. [19], who, using an open-source protocol similar to the present one, reported a precision of 98.8% ± 1.4% within the <0.3 mm threshold for the Medit i500 scanner. The current results not only confirm the feasibility of open-source metrology but extend its validation to multiple scanner systems, reinforcing reproducibility across platforms.
The integration of open-source software (Autodesk Meshmixer and CloudCompare) proved effective for both qualitative and quantitative analyses. Previous reviews have expressed concern that non-commercial software may suffer from incomplete documentation or limited standardization [33]; however, our workflow demonstrated that, when properly configured, open-source solutions can yield precise, repeatable, and cost-efficient results. This represents a relevant contribution for academic and clinical environments where access to proprietary metrology tools is restricted. By enabling accurate superimposition and deviation measurement through freely available tools, this study supports the democratization of digital metrology in dental research [18].
The methodological design also contributes to the internal validity of the results. Scans were performed consecutively on a single healthy subject by an experienced operator following manufacturer-recommended trajectories (occlusal–buccal–palatal path). This minimized intra-operator variability and reduced random errors associated with repositioning or recalibration. Similar approaches have been recommended by Müller et al. [17] and Schmidt et al. [20], who highlighted operator experience and scan strategy as key determinants of reproducibility. The five-scan per-device design yielded ten pairwise comparisons per scanner, generating a statistically robust dataset for precision estimation.
Clinically, the differences observed between scanners should guide the practitioner’s device selection based not only on global accuracy but also on the consistency required for the intended application. High-precision procedures such as digital implant guide fabrication, long-span frameworks, and longitudinal digital monitoring of soft tissue changes demand the lowest possible variance among scans. Conversely, less demanding restorative workflows may tolerate higher variability without significant impact on clinical outcomes [21,22,23].
Beyond its immediate implications for prosthodontics, the validation of open-source workflows contributes to broader clinical integration of digital technologies. Digital imaging and 3D scanning are increasingly complemented by diagnostic modalities such as CBCT or MRI, which enhance both accuracy and functional assessment [34]. The synergy between scanning, virtual planning, and additive manufacturing has been demonstrated in maxillofacial reconstruction and surgical navigation [35,36], underscoring how precision analysis forms the foundation for successful computer-assisted treatment. When comparing our results to previous studies, TRIOS 3 displayed a precision (99.3% ± 0.4% within < 0.3 mm) comparable to or exceeding that reported by Lo Giudice et al. (2022) [19] for Medit i500 (98.8% ± 1.4%), while Planmeca Emerald S demonstrated slightly lower but still clinically acceptable performance. Kwon et al. (2021) [14] observed wider variability among scanners under in vivo conditions, reinforcing the value of the present findings in confirming repeatability using an open-source protocol. The novelty of this research lies in demonstrating that free, non-proprietary platforms—Autodesk Meshmixer and CloudCompare—can achieve metrological reliability equivalent to commercial systems, enabling cost-effective and transparent validation workflows in clinical research. The reliability of CloudCompare as a deviation-analysis platform has been documented by Chruściel-Nogalska et al. (2017) [33], who demonstrated accuracy comparable to commercial metrology tools for surface registration. Lo Giudice et al. (2022) [19] further validated the same open-source workflow for intraoral scanner evaluation, confirming its metrological robustness and reproducibility in clinical applications.

Limitations and Future Directions

Despite its strengths, the study presents several limitations. The use of a single subject and a single dental arch restricts the generalizability of the results and precludes formal power analysis. Only precision was evaluated; trueness—the deviation from a physical reference standard—was not measured, limiting the completeness of the accuracy assessment [7,12]. Furthermore, soft-tissue regions were intentionally trimmed to improve measurement consistency, which excludes clinically relevant data for edentulous or partially edentulous arches [31,36,37]. Future investigations should include larger and more heterogeneous samples determined through power analysis, multiple operators to evaluate inter-operator variability, comparative assessments of open-source versus proprietary metrology software, and combined trueness–precision analysis with reference-based models to enhance external validity.
In conclusion, within these boundaries, the present work demonstrates that open-source tools can provide reliable, reproducible, and clinically meaningful measurements of intraoral scanner precision. Both scanners achieved precision suitable for clinical application, yet the statistically superior repeatability of TRIOS 3 indicates its suitability for procedures requiring high fidelity. The results corroborate the concept that accessible, transparent analytical workflows can effectively support precision-based digital dentistry, facilitating broader adoption of accurate and cost-efficient digital workflows in both clinical and research contexts [38,39].

5. Conclusions

Within the limitations of this in vivo investigation, both intraoral scanners evaluated—3Shape TRIOS 3 and Planmeca Emerald S—demonstrated clinically acceptable precision for full-arch digital impressions, as defined by deviation values within the <0.3 mm threshold. Nevertheless, statistically significant differences were found in repeatability and measurement stability, indicating superior performance of TRIOS 3. The scanner achieved 99.3% ± 0.4% precision within <0.3 mm and 89.6% ± 5.7% within <0.1 mm, compared with 92.9% ± 6.8% and 47.3% ± 13.7%, respectively, for Planmeca Emerald S (p < 0.001). These findings suggest that while both devices are suitable for general restorative procedures, TRIOS 3 offers a level of precision more appropriate for demanding applications such as implant-supported rehabilitations, full-arch frameworks, and digital monitoring of peri-implant or soft-tissue changes over time. The reduced variability observed with TRIOS 3 also supports its use in longitudinal assessments where reproducibility between scans is critical.
From a methodological perspective, the present research confirms the validity of an open-source analytical workflow combining Autodesk Meshmixer and CloudCompare for precision evaluation. This approach enables accurate and cost-effective assessment of digital devices without reliance on proprietary metrology platforms, representing a valuable tool for clinicians and institutions seeking reproducible, transparent, and accessible evaluation methods.
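The core of such an open-source precision analysis—quantifying, for each pair of aligned scans, the fraction of surface points whose deviation falls below a clinical threshold—can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' exact CloudCompare pipeline: it assumes the two meshes have already been trimmed and registered (landmarks plus ICP) and treats them as point clouds, using a k-d tree for nearest-neighbor distances.

```python
# Illustrative sketch (not the authors' exact pipeline): given two already
# aligned full-arch scans exported as point clouds, compute the percentage
# of points whose nearest-neighbor deviation falls below each threshold.
import numpy as np
from scipy.spatial import cKDTree

def fraction_within(reference: np.ndarray, test: np.ndarray, thresholds_mm):
    """Percent of test points within each deviation threshold (in mm)."""
    tree = cKDTree(reference)      # spatial index over the reference cloud
    dist, _ = tree.query(test)     # nearest-neighbor distance per point
    return {t: 100.0 * float(np.mean(dist < t)) for t in thresholds_mm}

# Synthetic demonstration: a reference cloud and a copy perturbed by
# ~0.05 mm Gaussian noise, standing in for two repeated scans.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 50.0, size=(5000, 3))          # coordinates in mm
scan = ref + rng.normal(0.0, 0.05, size=ref.shape)

result = fraction_within(ref, scan,
                         thresholds_mm=[0.01, 0.05, 0.1, 0.2, 0.3, 0.4])
for t, pct in result.items():
    print(f"< {t} mm: {pct:.1f}%")
```

In a real workflow the point clouds would be loaded from the exported STL/PLY meshes, and a signed cloud-to-mesh distance (as CloudCompare computes) would replace the simple point-to-point query; the thresholding step, however, is the same.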
Future studies should expand the investigation to multiple subjects, different operators, and various scanning trajectories, integrating both trueness and precision assessments. Such evidence will contribute to defining standardized protocols for scanner validation and to optimizing digital workflows that align with the principles of precision, reproducibility, and clinical reliability in contemporary dentistry. Clinicians are encouraged to consider not only the general acceptability of digital impressions, but also the specific performance characteristics of each scanner. Factors such as repeatability, scan trajectory optimization, and precision thresholds must be critically assessed when selecting a device for specific clinical applications. Furthermore, the current findings highlight the need for ongoing validation of intraoral scanners in real-world conditions and support the development of standardized, reproducible protocols for evaluating digital impression quality.

Author Contributions

Conceptualization, F.P.; formal analysis, E.L.; investigation, F.S.; data curation, F.S.; writing—original draft preparation, I.U. and S.D.V.; writing—review and editing, R.G.; supervision, R.L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Messina Local Ethical Committee (A.O.U. G. Martino, A.O. Papardo, A.S.P.), protocol no. 95-23, 14 December 2022.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data can be requested from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Mörmann, W.H. The evolution of the CEREC system. J. Am. Dent. Assoc. 2006, 137 (Suppl. S1), 7S–13S.
2. Pradíes, G.; Zarauz, C.; Valverde, A.; Ferreiroa, A.; Martínez-Rus, F. Clinical evaluation comparing the fit of all-ceramic crowns obtained from silicone and digital intraoral impressions based on wavefront sampling technology. J. Dent. 2015, 43, 201–208.
3. Giachetti, L.; Sarti, C.; Cinelli, F.; Russo, D.S. Accuracy of digital impressions in fixed prosthodontics: A systematic review of clinical studies. Int. J. Prosthodont. 2020, 33, 192–201.
4. Lee, S.J.; Gallucci, G.O. Digital vs. conventional implant impressions: Efficiency outcomes. Clin. Oral Implant. Res. 2013, 24, 111–115.
5. Ting-Shu, S.; Jian, S. Intraoral digital impression technique: A review. J. Prosthodont. 2015, 24, 313–321.
6. Ender, A.; Mehl, A. Accuracy of complete-arch dental impressions: A new method of measuring trueness and precision. J. Prosthet. Dent. 2013, 109, 121–128.
7. Imburgia, M.; Logozzo, S.; Hauschild, U.; Veronesi, G.; Mangano, C.; Mangano, F.G. Accuracy of four intraoral scanners in oral implantology: A comparative in vitro study. BMC Oral Health 2017, 17, 92.
8. Cucinotta, F.; Raffaele, M.; Salmeri, F. A stress-based topology optimization method by a Voronoi tessellation Additive Manufacturing oriented. Int. J. Adv. Manuf. Technol. 2019, 102, 1579–1591.
9. Cucinotta, F.; Raffaele, M.; Salmeri, F. A Topology Optimization of a Motorsport Safety Device. In Proceedings of the 7th International Conference on Mechanics and Materials in Design, Online, 9–11 September 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 355–362.
10. Lee, K.M. Comparison of two intraoral scanners based on three-dimensional surface analysis. Prog. Orthod. 2018, 19, 27.
11. Tsirogiannis, P.; Reissmann, D.R.; Heydecke, G. Evaluation of the marginal fit of single-unit, complete-coverage ceramic restorations fabricated after digital and conventional impressions: A systematic review and meta-analysis. J. Prosthet. Dent. 2016, 116, 328–335.
12. Mangano, F.; Gandolfi, A.; Luongo, G.; Logozzo, S. Intraoral scanners in dentistry: A review of the current literature. BMC Oral Health 2017, 17, 149.
13. Richert, R.; Goujat, A.; Venet, L.; Viguie, G.; Viennot, S.; Robinson, P.; Farges, J.-C.; Fages, M.; Ducret, M. Intraoral scanner technologies: A review to make a successful impression. J. Healthcare Eng. 2017, 2017, 8427595.
14. Kwon, M.; Cho, Y.; Kim, D.W.; Kim, M.; Kim, Y.J.; Chang, M. Full-arch accuracy of five intraoral scanners: In vivo analysis of trueness and precision. Korean J. Orthod. 2021, 51, 95–104.
15. Goracci, C.; Franchi, L.; Vichi, A.; Ferrari, M. Accuracy, reliability, and efficiency of intraoral scanners for full-arch impressions: A systematic review of the clinical evidence. Eur. J. Orthod. 2016, 38, 422–428.
16. Rhee, Y.K.; Huh, Y.H.; Cho, L.R.; Park, C.J. Comparison of intraoral scanning and conventional impression techniques using 3-dimensional superimposition. J. Adv. Prosthodont. 2015, 7, 460–467.
17. Müller, P.; Ender, A.; Joda, T.; Katsoulis, J. Impact of digital intraoral scan strategies on the impression accuracy using the TRIOS Pod scanner. Quintessence Int. 2016, 47, 343–349.
18. Ender, A.; Zimmermann, M.; Mehl, A. Accuracy of complete- and partial-arch impressions of actual intraoral scanning systems in vitro. Int. J. Comput. Dent. 2019, 22, 11–19.
19. Lo Giudice, R.; Galletti, C.; Tribst, J.P.M.; Melenchón, L.P.; Matarese, M.; Miniello, A.; Cucinotta, F.; Salmeri, F. In vivo analysis of intraoral scanner precision using open-source 3D software. Prosthesis 2022, 4, 554–563.
20. ISO 5725-1:1994; Accuracy (Trueness and Precision) of Measurement Methods and Results. Part 1: General Principles and Definitions. ISO: Geneva, Switzerland, 1994.
21. Schmidt, A.; Klussmann, L.; Wöstmann, B.; Schlenz, M.A. Accuracy of digital and conventional full-arch impressions in patients: An update. J. Clin. Med. 2020, 9, 688.
22. Sanda, K.; Yasunami, N.; Okada, M.; Furuhashi, A.; Ayukawa, Y. Accuracy of the intra- and extra-oral scanning technique for transferring the intaglio surface of a pontic of provisional restorations to definitive restorations. Materials 2021, 14, 6489.
23. Braian, M.; Wennerberg, A. Trueness and precision of 5 intraoral scanners for scanning edentulous and dentate complete-arch mandibular casts: A comparative in vitro study. J. Prosthet. Dent. 2019, 122, 129–136.e2.
24. Lee, S.J.; Kim, S.W.; Lee, J.J.; Cheong, C.W. Comparison of intraoral and extraoral digital scanners: Evaluation of surface topography and precision. Dent. J. 2020, 8, 52.
25. Lee, K.C.; Park, S.J. Digital intraoral scanners and alginate impressions in reproducing full dental arches: A comparative 3D assessment. Appl. Sci. 2020, 10, 7637.
26. Michelinakis, G.; Apostolakis, D.; Tsagarakis, A.; Kourakis, G.; Pavlakis, E. A comparison of accuracy of 3 intraoral scanners: A single-blinded in vitro study. J. Prosthet. Dent. 2020, 124, 581–588.
27. Del Amo, F.S.L.; Yu, S.H.; Sammartino, G.; Sculean, A.; Zucchelli, G.; Rasperini, G.; Felice, P.; Pagni, G.; Iorio-Siciliano, V.; Grusovin, M.G.; et al. Peri-implant soft tissue management: Cairo opinion consensus conference. Int. J. Environ. Res. Public Health 2020, 17, 2281.
28. Bruno, V.; Berti, C.; Barausse, C.; Badino, M.; Gasparro, R.; Ippolito, D.R.; Felice, P. Clinical relevance of bone density values from CT related to dental implant stability: A retrospective study. Biomed. Res. Int. 2018, 2018, 6758245.
29. Dioguardi, M.; Spirito, F.; Quarta, C.; Sovereto, D.; Basile, E.; Ballini, A.; Caloro, G.A.; Troiano, G.; Muzio, L.L.; Mastrangelo, F. Guided dental implant surgery: Systematic review. J. Clin. Med. 2023, 12, 1490.
30. Braian, M.; De Bruyn, H.; Fransson, H.; Christersson, C.; Wennerberg, A. Tolerance measurements on internal- and external-hexagon implants. Int. J. Oral Maxillofac. Implant. 2014, 29, 846–852.
31. Tribst, J.P.M.; Dal Piva, A.M.D.O.; Lo Giudice, R.; Borges, A.L.S.; Bottino, M.A.; Epifania, E.; Ausiello, P. The influence of custom-milled framework design for an implant-supported full-arch fixed dental prosthesis: 3D-FEA study. Int. J. Environ. Res. Public Health 2020, 17, 4040.
32. Di Spirito, F.; D’Ambrosio, F.; Cannatà, D.; D’Antò, V.; Giordano, G.; Martina, S. Impact of clear aligners versus fixed appliances on periodontal status of patients undergoing orthodontic treatment: A systematic review of systematic reviews. Healthcare 2023, 11, 1340.
33. Chruściel-Nogalska, M.; Smektała, T.; Tutak, M.; Sporniak-Tutak, K.; Olszewski, R. Open-source software in dentistry: A systematic review. Int. J. Technol. Assess Health Care 2017, 33, 487–493.
34. Santagata, M.; De Luca, R.; Lo Giudice, G.; Troiano, A.; Lo Giudice, G.; Corvo, G.; Tartaro, G. Arthrocentesis and Sodium Hyaluronate Infiltration in Temporomandibular Disorders Treatment. Clinical and MRI Evaluation. J. Funct. Morphol. Kinesiol. 2020, 5, 18.
35. Lo Giudice, G.; Calvo, A.; Magaudda, E.; De Ponte, F.S.; Nastro Siniscalchi, E. Virtual surgery and 3D printing in a medication-related osteonecrosis of the jaws (MRONJ) pathological mandibular fracture: A case report. Front. Oral Health 2025, 6, 1520195.
36. Son, K.; Jin, M.U.; Lee, K.B. Feasibility of using an intraoral scanner for a complete-arch digital scan, part 2: A comparison of scan strategies. J. Prosthet. Dent. 2021, 125, 548–555.
37. Giordano, F.; Di Spirito, F.; Acerra, A.; Rupe, A.; Cirigliano, G.; Caggiano, M. The outcome of tilted distal implants immediately loaded under screw-retained cross-arch prostheses: A 5-year retrospective cohort study. J. Osseointegr. 2024, 16, 31–38.
38. Puleio, F.; Lizio, A.S.; Coppini, V.; Lo Giudice, R.; Lo Giudice, G. CBCT-Based Assessment of Vapor Lock Effects on Endodontic Disinfection. Appl. Sci. 2023, 13, 9542.
39. Puleio, F.; Lo Giudice, G.; Bellocchio, A.M.; Boschetti, C.E.; Lo Giudice, R. Clinical, Research, and Educational Applications of ChatGPT in Dentistry: A Narrative Review. Appl. Sci. 2024, 14, 10802.
Figure 1. Schematic representation of the entire experimental workflow. (a) The standardized scanning sequence (occlusal–buccal–palatal trajectory), (b) digital trimming and isolation of hard tissues in Autodesk Meshmixer, and (c) subsequent alignment and deviation analysis performed in CloudCompare through landmark-based registration and iterative closest-point (ICP) refinement.
Figure 2. Deviation distribution as cumulative frequency for Planmeca scanner: interscan comparison (scan number vs. scan number). Statistical significance between scanners: p < 0.001 (paired t-test).
Figure 3. Deviation distribution as frequency for Planmeca scanner: interscan comparison (scan number vs. scan number). Statistical significance between scanners: p < 0.001 (paired t-test).
Figure 4. Deviation distribution as cumulative frequency for 3Shape scanner: interscan comparison (scan number vs. scan number). Statistical significance between scanners: p < 0.001 (paired t-test).
Figure 5. Deviation distribution as frequency for 3Shape scanner: interscan comparison (scan number vs. scan number). Statistical significance between scanners: p < 0.001 (paired t-test).
Figure 6. Colorimetric map generated by the 3D models’ superimposition.
Table 1. Percentage of points below determined deviation thresholds for the Planmeca scanner.

| Pair of Scans | <0.01 mm | <0.05 mm | <0.1 mm | <0.2 mm | <0.3 mm | <0.4 mm |
|---------------|----------|----------|---------|---------|---------|---------|
| 1 vs. 2       | 0.0%     | 5.6%     | 29.0%   | 65.0%   | 80.0%   | 87.0%   |
| 1 vs. 3       | 0.1%     | 9.1%     | 46.4%   | 89.7%   | 98.6%   | 99.7%   |
| 1 vs. 4       | 0.1%     | 9.8%     | 47.7%   | 91.9%   | 98.9%   | 99.9%   |
| 1 vs. 5       | 0.1%     | 10.1%    | 43.8%   | 85.5%   | 97.5%   | 99.6%   |
| 2 vs. 3       | 0.0%     | 6.6%     | 33.3%   | 66.9%   | 82.1%   | 93.5%   |
| 2 vs. 4       | 0.1%     | 13.8%    | 64.5%   | 95.3%   | 98.8%   | 99.6%   |
| 2 vs. 5       | 0.1%     | 14.3%    | 58.3%   | 92.3%   | 95.9%   | 97.9%   |
| 3 vs. 4       | 0.1%     | 7.5%     | 38.4%   | 72.8%   | 89.8%   | 97.3%   |
| 3 vs. 5       | 0.0%     | 7.7%     | 36.9%   | 70.4%   | 89.2%   | 97.5%   |
| 4 vs. 5       | 0.1%     | 22.7%    | 74.4%   | 95.0%   | 98.2%   | 99.2%   |
| Mean          | 0.1%     | 10.7%    | 47.3%   | 82.5%   | 92.9%   | 97.1%   |
| Standard dev. | 0.0%     | 4.8%     | 13.7%   | 11.6%   | 6.8%    | 3.8%    |
Table 2. Percentage of points below determined deviation thresholds for the 3Shape scanner.

| Pair of Scans | <0.01 mm | <0.05 mm | <0.1 mm | <0.2 mm | <0.3 mm | <0.4 mm |
|---------------|----------|----------|---------|---------|---------|---------|
| 1 vs. 2       | 0.6%     | 74.2%    | 95.4%   | 97.8%   | 98.7%   | 99.2%   |
| 1 vs. 3       | 0.4%     | 63.0%    | 87.8%   | 98.7%   | 99.4%   | 99.7%   |
| 1 vs. 4       | 0.4%     | 59.1%    | 82.7%   | 98.2%   | 99.1%   | 99.5%   |
| 1 vs. 5       | 2.1%     | 87.7%    | 97.0%   | 99.0%   | 99.5%   | 99.7%   |
| 2 vs. 3       | 1.1%     | 67.2%    | 84.2%   | 97.7%   | 99.0%   | 99.3%   |
| 2 vs. 4       | 1.6%     | 65.4%    | 83.1%   | 97.4%   | 98.5%   | 98.9%   |
| 2 vs. 5       | 1.4%     | 80.0%    | 97.1%   | 98.8%   | 99.3%   | 99.5%   |
| 3 vs. 4       | 2.5%     | 71.8%    | 95.7%   | 99.3%   | 99.7%   | 99.8%   |
| 3 vs. 5       | 1.2%     | 67.6%    | 87.2%   | 99.5%   | 99.9%   | 100.0%  |
| 4 vs. 5       | 1.2%     | 59.9%    | 85.2%   | 99.3%   | 99.9%   | 100.0%  |
| Mean          | 1.2%     | 69.6%    | 89.6%   | 98.6%   | 99.3%   | 99.6%   |
| Standard dev. | 0.7%     | 8.6%     | 5.7%    | 0.7%    | 0.4%    | 0.3%    |
The analysis revealed that the 3Shape scanner demonstrated greater precision compared to the Planmeca scanner. Specifically, the average percentage of points falling below the deviation threshold of 0.3 mm for the 3Shape scanner was 99.3% with a standard deviation of 0.4%, while for the Planmeca scanner it was 92.9% with a standard deviation of 6.8%.
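As an illustrative cross-check (not the authors' original analysis), the group summary at the <0.3 mm threshold can be recomputed from the per-pair percentages in Tables 1 and 2. Note that the inputs are rounded values at a single threshold, so while the direction of the difference is reproduced, the resulting p-value need not match the pooled p < 0.001 reported in the paper.

```python
# Recompute means, standard deviations, and a paired t-test at the
# <0.3 mm threshold from the rounded per-pair values in Tables 1 and 2.
import numpy as np
from scipy import stats

# <0.3 mm column, one value per scan pair (1 vs. 2 ... 4 vs. 5)
planmeca = np.array([80.0, 98.6, 98.9, 97.5, 82.1, 98.8, 95.9, 89.8, 89.2, 98.2])
trios3 = np.array([98.7, 99.4, 99.1, 99.5, 99.0, 98.5, 99.3, 99.7, 99.9, 99.9])

# Population SD (ddof=0) matches the tables' "Standard dev." rows.
print(f"Planmeca: {planmeca.mean():.1f}% ± {planmeca.std():.1f}%")
print(f"TRIOS 3:  {trios3.mean():.1f}% ± {trios3.std():.1f}%")

# Paired t-test across the ten scan pairs
t, p = stats.ttest_rel(trios3, planmeca)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```

The recomputed means (92.9% vs. 99.3%) agree with the reported summary; the pairing of scan comparisons is what makes the paired t-test the appropriate choice here.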
Puleio, F.; Salmeri, F.; Lupi, E.; Urbano, I.; Gasparro, R.; De Vita, S.; Lo Giudice, R. In Vivo Accuracy Assessment of Two Intraoral Scanners Using Open-Source Software: A Comparative Full-Arch Pilot Study. Oral 2025, 5, 97. https://doi.org/10.3390/oral5040097