Dentistry Journal
  • Systematic Review
  • Open Access

14 November 2025

Accuracy of Navigation and Robot-Assisted Systems for Dental Implant Placement: A Systematic Review

1 Department of Maxillofacial Surgery and Radiology, Oral Radiology, “Iuliu Hatieganu” University of Medicine and Pharmacy, 32 Clinicilor Street, 400006 Cluj-Napoca, Romania
2 CESTER (Research Center for Industrial Robots Simulation and Testing), Department of Mechanical Systems Engineering, Faculty of Industrial Engineering, Robotics and Production Management, Technical University of Cluj-Napoca, 28 Memorandumului Street, 4000114 Cluj-Napoca, Romania
3 Department of Maxillofacial Surgery and Radiology, Maxillofacial Surgery and Implantology, “Iuliu Hatieganu” University of Medicine and Pharmacy, 37 Iuliu Hossu Street, 400429 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Dental Public Health Landscape: Challenges, Technological Innovation and Opportunities in the 21st Century

Abstract

Background: Computer-assisted implant surgery (CAIS) aims to improve placement accuracy versus freehand drilling. We compared the three-dimensional accuracy of robot-guided CAIS (r-CAIS), dynamic navigation (d-CAIS), static-template guidance (s-CAIS), and freehand (FH) placement in clinical and in vitro settings. Methods: We searched PubMed/MEDLINE, Scopus, and Web of Science (1 January 2019–2025). Eligible populations were adults receiving conventional or zygomatic implants in vivo, plus validated in vitro human-jaw models using plan-versus-placement workflows; studies had to report study-level means with dispersion for ≥1 primary outcome with ≥5 implants per arm. Interventions were r-CAIS, d-CAIS, or s-CAIS, with freehand placement as the baseline. Risk of bias was assessed with RoB 2 (RCTs), ROBINS-I (non-randomized clinical studies), and QUIN (in vitro studies). Because of heterogeneity in definitions and workflows, we performed a descriptive synthesis by modality (no meta-analysis). Registration: OSF. Results: Forty-three studies (7 RCTs, 10 non-randomized clinical, 26 in vitro) reported more than 4000 implants. Across studies, typical study-level means for global linear deviation clustered around <1 mm (r-CAIS), ~1 mm (d-CAIS), ~1.3 mm (s-CAIS), and ~1.8 mm (FH). In clinical contexts, d-CAIS often showed slightly lower angular deviation than s-CAIS. Conclusions: CAIS improves accuracy versus freehand placement. d-CAIS and s-CAIS show similar linear accuracy, with d-CAIS frequently yielding slightly lower angular deviation; r-CAIS exhibits tight error clusters in our dataset, but limited comparative clinical evidence precludes superiority claims. Limitations: non-uniform registration/measurement protocols, variable operator experience, and absence of meta-analysis.

1. Introduction

Dental implants have become a routine solution for edentulism, with high long-term success rates reported in clinical studies []. However, the precision of implant positioning is critically important for the success of the final prosthesis. Implants placed at an improper angulation or depth can complicate prosthetic rehabilitation, impair osseointegration, cause facial asymmetry through incorrect occlusion, and even risk damaging vital anatomical structures such as nerves or the maxillary sinus [,]. Consequently, achieving prosthetically driven, accurate implant placement is widely regarded as a key factor in avoiding early or late implant failures [,]. In summary, surgical accuracy is a critical determinant of both functional and aesthetic outcomes, motivating its detailed analysis across multiple technological support platforms and freehand interventions.
Over the past two decades, clinicians have increasingly turned to technology to increase placement accuracy and overcome the limitations of freehand surgery. Traditional freehand implantation relies solely on the surgeon’s skill and experience, which can lead to positional deviations. To address this, computer-assisted implant surgery (CAIS) techniques were introduced, evolving through static, dynamic, and now robotic guidance systems []. In static guided surgery, a custom surgical template (guide) is fabricated from pre-operative cone-beam CT data and digital planning to direct the drill along a fixed trajectory, reproducing the planned implant position []. The term “static guide” does not denote a single standardized surgical technique but rather encompasses a range of methods that share the principle of preoperative, computer-assisted planning for implant placement. Static guides vary in support type (tooth-, mucosa-, bone-, or pin-supported), extent of guidance (pilot-only or fully guided), and fabrication method (conventional or digitally designed and 3D-printed). Design features such as sleeve geometry, preset offsets, bur–sleeve tolerances, manufacturing accuracy, and guide stabilization further influence both linear and angular deviations. These variations can significantly affect surgical accuracy and therefore confound direct comparisons with dynamic navigation (d-CAIS) and robot-assisted systems (r-CAIS) []. In contrast, dynamic navigation involves real-time tracking of the handpiece, allowing the surgeon to actively control drilling while visualizing the planned versus actual trajectory. Unlike static guidance, no mechanical constraint is applied during drilling, providing greater intraoperative flexibility at the cost of a steeper learning curve in interpreting navigation feedback.
Fully autonomous dental implant robots have also been prototyped, such as China’s Remebot (Beijing Bihui Weikang Technology Co., Ltd., Beijing, China), which can carry out the osteotomy and implant insertion according to the virtual plan []. These advancements reflect a clear trend in implantology from freehand techniques toward increasing computerization and mechanization, with the aim of greater accuracy and consistency in outcomes.
Recent high-quality meta-analyses, including those by Khan et al., 2024 [], and Khaohoen et al., 2024 [] have provided compelling evidence for a relative accuracy hierarchy among static, dynamic, and robotic computer-assisted implant surgery (CAIS) techniques. Static and dynamic guidance systems consistently demonstrate higher positional and angular accuracy compared with conventional freehand placement, while robotic platforms show further improvements in precision and reproducibility. Despite these advances, substantial heterogeneity remains across study designs, outcome definitions, and operator-related factors, and several clinically relevant subgroups continue to be underrepresented in pooled analyses.
Accordingly, this systematic review was designed to extend previous benchmark meta-analyses and synthesize evidence published between 2019 and 2025. Its specific aims are to: (i) standardize outcome definitions across heterogeneous clinical, in vitro, and in vivo workflows to enable valid inter-modality comparisons of accuracy; (ii) integrate underrepresented subgroups, including hybrid static–dynamic protocols and zygomatic implant procedures, which have been inconsistently included in earlier reviews; (iii) evaluate the influence of operator learning and training, parameters largely omitted from prior syntheses, on surgical accuracy and reproducibility; and (iv) delineate the current technological frontier of CAIS by identifying the limited number of cadaveric and artificial intelligence–assisted validation studies available to date.
By addressing these objectives, this review serves as an analytical extension of prior meta-analyses, contextualizing their conclusions with the most recent evidence and providing a unified assessment of accuracy trends, learning-curve dynamics, and technological evolution across static (s-CAIS), dynamic (d-CAIS), and robotic (r-CAIS) implant surgery systems.

2. Methods

2.1. Protocol and Registration

This systematic review was registered on the Open Science Framework (OSF; https://doi.org/10.17605/OSF.IO/98Q3G, accessed on 30 July 2025). Although our review protocol was registered on OSF after data collection was finalized, all eligibility criteria and analysis strategies were pre-specified and adhered to throughout the study.

2.2. Study Question and PICOS Strategy

The review was framed according to the PICOS model. The population (P) comprised partially or fully edentulous human patients and cadavers, as well as validated human-jaw replicas fabricated from polyurethane or by 3-D printing. Interventions (I) included any r-CAIS, d-CAIS, or s-CAIS workflow executed with commercially available or prototype systems, optical navigation units (e.g., Navident®—ClaroNav Technology Inc., Toronto, ON, Canada), and stereolithographic drill guides (e.g., coDiagnostiX—Dental Wings Inc., Montreal, QC, Canada). Comparators (C) were either freehand placement or an alternative CAIS modality tested under the same conditions. Both conventional root-form implants and zygomatic implants were present in the dataset. Outcomes (O) were primary accuracy metrics: global three-dimensional linear deviation at the platform and apex, vertical (depth) deviation, and angular deviation between the planned and achieved implant axes; secondary outcomes included operative time, registration or guide-fabrication time, intra-operative complications, and the need for corrective maneuvers. Study designs (S) eligible for inclusion were randomized or non-randomized clinical trials, prospective or retrospective cohort studies, and controlled laboratory investigations that analyzed at least five implants per study arm.
The review question was therefore: In human jaws or validated jaw models, do robot-assisted and other computer-assisted implant surgery (CAIS) techniques achieve greater three-dimensional positional accuracy than conventional freehand placement, and how does accuracy differ among the three CAIS modalities (static, dynamic, and robotic)?

2.3. Eligibility Criteria

English-language articles were considered when they (i) described clinical or in vitro drilling of dental implants, (ii) applied at least two of the guidance modalities of interest, and (iii) quantified accuracy by superimposing post-operative cone-beam CT, optical surface-scan, or coordinate-measurement data on the virtual plan and reporting mean ± standard deviation or raw values. Exclusion criteria were narrative or systematic reviews, technical notes, conference abstracts, editorials, animal studies, finite-element simulations, reports with fewer than five implants per group, studies with no comparative arm, mini-implants, and papers that evaluated only prosthetic fit or crestal bone levels without measuring positional deviation.

2.4. Study-Selection and Data-Extraction Process

Each included report was coded for publication year, study design (randomized clinical trial, non-randomized clinical cohort, or controlled in vitro experiment), guidance modality, number of implants analyzed, registration method, outcome definitions, and numerical accuracy data. Data were extracted independently and imported into Rayyan [] by the two reviewers (V.B, P.T). When a paper provided both platform-to-plan and apex-to-plan deviations, both were captured; when accuracy was expressed as root-mean-square or absolute means, values were converted to pooled means and standard deviations. Disagreements in either the selection or extraction phase were resolved by discussion and, where necessary, by referring to a third investigator (C.V). The complete flow of records is documented in a PRISMA 2020 diagram [], and a PRISMA checklist is provided as Supplementary Table S1.
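When a study reported its arms or subgroups separately, per-arm counts, means, and standard deviations can be combined into a single pooled mean and SD. A minimal sketch of this standard conversion is given below (the authors' actual extraction script is not published, and the numbers are purely illustrative):

```python
import math

def pooled_mean_sd(groups):
    """Combine (n, mean, sd) triples from separate arms into one pooled
    mean and SD. The pooled sum of squares adds the within-group variance
    to the between-group spread of the means."""
    n_total = sum(n for n, _, _ in groups)
    mean_pooled = sum(n * m for n, m, _ in groups) / n_total
    ss = sum((n - 1) * sd**2 + n * (m - mean_pooled)**2
             for n, m, sd in groups)
    sd_pooled = math.sqrt(ss / (n_total - 1))
    return mean_pooled, sd_pooled

# Illustrative only: two hypothetical arms of 10 implants each
mean, sd = pooled_mean_sd([(10, 1.17, 0.31), (10, 0.88, 0.37)])
print(f"{mean:.2f} ± {sd:.2f} mm")
```

With equal arm sizes the pooled mean is simply the average of the arm means; with unequal sizes it is weighted by n, which is why pooled values reported in the results can differ slightly from a naive average.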

2.5. Search Strategy

A comprehensive electronic search was conducted by V.B and D.P, with oversight from C.V and consultation with a colleague experienced in systematic reviews (M.H), in three databases: PubMed/MEDLINE, Scopus, and Web of Science Core Collection, from 2019 to 2025, limited to English-language publications with no restrictions on publication type. PubMed was selected for its exhaustive coverage of the dental-implant literature; Scopus and Web of Science were added to capture records indexed outside MEDLINE and to enable transparent deduplication. The three-database search follows the recommendations of the PRISMA 2020 guidelines [] and the Cochrane Handbook []. The core syntax combining MeSH headings and free-text terms is shown in Table 1:
Table 1. Search strategy for each database.
The Scopus search was deliberately restricted to the Dentistry subject area and to publications from 2019 to 2025 to align with the period in which contemporary navigation and robotic systems became commercially available; no language limits were applied in PubMed or Web of Science, although only English-language papers were selected. Grey-literature sources were not searched because positional-accuracy studies require postoperative imaging and are almost exclusively published in peer-reviewed journals.
Search strings were adapted to the indexing rules of each database. Titles and abstracts were screened independently by two reviewers (V.B, P.T); potentially eligible reports underwent full-text assessment, again in duplicate, with disagreements resolved by discussion. Duplicates were removed, and the selection process was summarized in a PRISMA 2020 flow diagram. Data extraction used a spreadsheet completed independently by the two reviewers (V.B, P.T); any discrepancy in retrieved record counts or search-string translation was discussed between V.B and P.T, unresolved items were adjudicated by a third reviewer (C.V), and any other issues were discussed with personnel experienced in systematic reviews (M.H, D.P). If essential numerical values were missing, corresponding authors were contacted by e-mail.

2.6. Types of Outcome Measures

The review extracted three established accuracy variables.
Coronal (platform) deviation is defined as the three-dimensional Euclidean distance, in millimetres, between the centre of the implant shoulder in the virtual plan and in the registered post-operative scan.
Apical deviation is defined as the 3-D Euclidean distance (mm) between the centroid of the apical endpoint in the plan and in the registered post-operative model, constructed analogously to the platform measurement to ensure symmetry of reference points.
Angular deviation represented the absolute inter-axis angle, in degrees, obtained from the arccosine of the dot-product of the planned and achieved implant vectors.
These definitions follow the CBCT superimposition protocols described by Tao et al., 2022 [] and Taheri Otaghsara et al., 2023 [] for in vitro work and by Jorba-García et al., 2023 [] for a split-mouth clinical trial, and are identical to the method used in the multi-workflow bench comparison of Xu et al., 2024 []. Where authors reported separate bucco-lingual and mesio-distal offsets, the vector norm was computed to yield a single global value, and when both “planned-versus-placed” and “placed-versus-ideal” data were available, the former were extracted because they most directly reflect surgical trueness rather than prosthetic fit.
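The three metrics above reduce to simple vector arithmetic on the registered coordinates. A minimal sketch (with illustrative coordinates; this is not the measurement software used in the included studies):

```python
import math

def linear_deviation(planned, placed):
    """3-D Euclidean distance (mm) between a planned and a placed point,
    e.g. the platform centre or the apical centroid."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(planned, placed)))

def angular_deviation(planned_axis, placed_axis):
    """Absolute inter-axis angle (degrees) from the arccosine of the
    dot product of the two implant-axis vectors."""
    dot = sum(a * b for a, b in zip(planned_axis, placed_axis))
    norm = (math.sqrt(sum(a * a for a in planned_axis)) *
            math.sqrt(sum(b * b for b in placed_axis)))
    # Clamp to [-1, 1] to guard against floating-point overshoot
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def global_deviation(bucco_lingual, mesio_distal, vertical=0.0):
    """Vector norm combining separately reported directional offsets
    into a single global linear value."""
    return math.sqrt(bucco_lingual**2 + mesio_distal**2 + vertical**2)

print(linear_deviation((0, 0, 0), (0.3, 0.4, 0)))   # 0.5 mm
print(angular_deviation((0, 0, 1), (0, 1, 1)))      # 45.0 degrees
```

The clamp before `acos` matters in practice: registration noise can push a near-parallel dot product marginally above 1, which would otherwise raise a domain error.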

2.7. Risk-of-Bias Assessment

Risk of bias was assessed with design-specific, validated instruments.
Randomized controlled trials (RCTs, n = 7) were appraised with the Cochrane RoB 2 tool (five domains).
Non-randomized clinical investigations, including prospective or retrospective cohorts and case-series (n = 10), were assessed with ROBINS-I (seven domains).
In vitro accuracy studies (n = 26) were analyzed with the QUIN checklist (five methodological domains).
Two reviewers worked independently after piloting each instrument on one representative article; disagreements were resolved by discussion between the two primary reviewers (V.B, P.T), and unresolved conflicts were settled by a third reviewer (C.V). Domain-level judgements were entered into a spreadsheet that generated colour-coded overall ratings (“Low”, “Some concerns/Moderate”, “Serious/High”). Because no meta-analysis was performed, no quantitative publication-bias assessment (e.g., a funnel plot) was undertaken; reporting bias was instead judged within the relevant domain of each risk-of-bias tool. A “traffic-light” figure and summary bar chart were created in Excel to visualize the findings.

2.8. Synthesis Methods

Given heterogeneity in design, setting (clinical, in vitro), implant category (conventional, zygomatic), and definitions, we performed a descriptive synthesis using grouped graphical displays by modality (r-CAIS, d-CAIS, s-CAIS, FH). Where subgroup data were separable in source studies, we summarize exploratory tendencies in text. No meta-analysis was undertaken.
We generated two complementary figure families. Main-text figures display all modalities on shared axes to enable between-modality comparisons. To enhance readability and allow within-modality inspection, we produced per-modality supplementary plots (Robot, Dynamic, Static, Freehand) for each outcome (Supplementary Figures S1–S12), showing study-level mean ± SD sorted by mean.

3. Results

3.1. Identification and Screening

The three-database search yielded 843 records (Figure 1, PRISMA 2020). After removal of 127 duplicates, 716 unique records were screened at title/abstract level, and 460 were excluded. Two hundred and fifty-six full-text articles were assessed for eligibility. Of these, 213 were excluded: 13 systematic reviews, 3 case reports, 99 non-comparative reports, 10 conference abstracts, 42 without quantitative accuracy data, 24 with no implant placement or <5 implants, and 22 lacking essential methodological information, including cadaveric investigations that did not meet the eligibility criteria. This left 43 primary investigations for qualitative synthesis.
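The record flow can be checked arithmetically against the itemized exclusion reasons (a consistency sketch using the counts reported above, not part of the published analysis):

```python
# PRISMA 2020 record flow, counts taken from the screening paragraph
identified = 843
duplicates = 127
screened = identified - duplicates                  # unique records screened
excluded_title_abstract = 460
full_text_assessed = screened - excluded_title_abstract

# Itemized full-text exclusion reasons
reasons = {
    "systematic reviews": 13,
    "case reports": 3,
    "non-comparative reports": 99,
    "conference abstracts": 10,
    "no quantitative accuracy data": 42,
    "no placement or <5 implants": 24,
    "missing methodological information": 22,
}
included = full_text_assessed - sum(reasons.values())
print(screened, full_text_assessed, included)  # 716 256 43
```

The itemized reasons sum to 213 full-text exclusions, which is exactly what 256 assessed minus 43 included requires.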
Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flowchart summarizing the screening process.

3.2. Study Range and Characteristics

The forty-three primary studies encompassed a heterogeneous yet well-balanced mix of experimental designs. Seven were parallel or split-mouth randomized controlled trials (e.g., Kaewsiri et al., 2019 []), and the others were prospective or retrospective clinical cohorts or controlled in vitro experiments. Clinical investigations analyzed an average of 28 implants per article (range 10–90), whereas laboratory studies drilled an average of 40 osteotomies (range 15–240). The final dataset spans publications from 2019 to 2025 and comprises approximately 4200 individual implants: 544 freehand, 753 static-guide, 2313 dynamic-navigation, and 595 robotic placements. Surgical sites were evenly distributed between maxillae and mandibles; 22% of cohorts involved fully edentulous arches, 55% partially edentulous jaws, and 23% single-tooth sites. A broad range of navigation technologies was represented across the dataset.
r-CAIS was evaluated in 9 of the 43 primary studies; most employed the fully autonomous Remebot system, largely in in vitro usability experiments demonstrating Remebot’s consistent depth and angulation control [,,,]. Dynamic navigation (d-CAIS) featured in 36 studies: the majority used Navident as the navigation technology (e.g., [,,]), some employed the Iris-100 platform, and two reported on custom-built optical prototypes [,]. Static computer-aided implant surgery (s-CAIS) was investigated in 24 papers, with coDiagnostiX®-designed stereolithographic guides predominating (eleven studies) and the remainder relying on BlueSkyPlan or generic STL-based workflows. Conventional freehand drilling served as a comparator in 21 investigations, notably the split-mouth RCT of Jorba-García et al., 2023 [] and multiple in vitro bench studies underscoring the operator-dependent variability of unguided implant placement.
Software heterogeneity was pronounced: three studies combined mixed-reality or augmented-reality overlays with Navident®; two contrasted markerless “trace-registration” against fiducial workflows within X-Guide (X-Nav Technologies, Lansdale, PA, USA); and one evaluated an open-sleeve versus closed-sleeve guide design for irrigation optimization. Such variation in planning and registration protocols partly explains the inter-study dispersion observed in the pooled accuracy metrics, but it also enhances the external validity of the synthesis by reflecting the breadth of digital workflows currently available in clinical practice.
Table 2 presents the data extraction table, with each study’s information such as the study design, navigation type, hardware of software used, implant type, number of groups, number of implants, and the three metric data—apical deviation, coronal deviation, and angle deviation.
Table 2. Data extraction table.

3.3. Risk-of-Bias Findings

Supplementary Table S2 presents the risk-of-bias assessment performed for each study.
Randomized evidence (7 RCTs). Randomization methods and outcome-data completeness were adequate in every trial. One study achieved an overall “low-risk” judgement, while the remaining six were rated as having “some concerns”, chiefly because surgeons could not be blinded and assessor blinding was inconsistently reported.
Non-randomized clinical studies (10 cohorts/case-series). Five investigations were at serious risk of bias, largely owing to residual confounding and outcome assessment by the treating team; the other five were judged as moderate risk. Selection criteria were generally explicit, and follow-up was complete.
In vitro investigations (26 studies). Five studies satisfied all QUIN domains (overall low risk). The other twenty-two were moderate risk, most often because blinding of the examiner measuring implant deviation was absent or unreported. Model standardization, replication, and statistical analysis were usually satisfactory.
Across all designs, limitations related to blinding and confounding were more common than issues with missing data or selective reporting (see Figure 2 and the Supplementary Table S2).
Figure 2. Overall risk-of-bias judgement.

3.4. Surgical Approaches Represented

Robot-guided workflows appeared in 8 studies. Dynamic navigation was the most frequently researched modality (35 papers), predominantly using Navident, Iris-100, or Yizhime platforms. Static stereolithographic guides featured in 24 investigations (proprietary or generic CAD/CAM templates), and FH was retained as a baseline comparator in 21 studies.

3.5. Distribution of Implant-Placement Deviations

Implant-placement deviation was evaluated comparatively in terms of angular, apical, and coronal (platform) deviation. Because the studies differ in design, the data are organized in grouped charts that illustrate the deviation values from each study across the different guidance methods. The studies include comparative designs across clinical and in vitro settings. The data extraction table (Table 2) provides details for each study.

3.6. Study-Level Accuracy

Study-level accuracy across all modalities is presented in the main-text figures. To facilitate detailed inspection without cross-method clutter, we also provide per-modality supplementary plots (robot, dynamic, static, freehand) for angular, apical, and coronal deviations (Supplementary Figures S1–S12), each showing mean ± SD per study arm.

3.6.1. Angular Deviation

  • Angle deviation 0–1°
Sub-degree control (0–1°, n = 10) was achieved largely by dynamic (four datasets) and robotic (six datasets) workflows []. One dynamic augmented-reality workflow reached 0.70 ± 0.29°. Static fully guided bench data from Lysenko et al., 2023 [] reached 0.49 ± 0.17°, while Du et al., 2025 [], a robotic study, showed the highest point in this range yet kept its dispersion within approximately ±0.3°. The values in this range of precision are represented in Figure 3.
Figure 3. Angle deviation graph in the range of 0–1 degree [,,,,,,].
  • Angle deviation 1–2°
Fifteen datasets clustered between 1 and 2°. Dynamic navigation again dominated, but single values for both static guides (e.g., Zhao et al., 2024 [], 1.87 ± 0.44°) and freehand placement (Lysenko et al., 2023 [], 1.44 ± 0.67°) were present. The dispersion of the static data supports the meta-analytic observation by Yu et al., 2023 [] that the angular advantage of dynamic over static guidance averages ≈0.8° yet overlaps in individual trials. Robotic approaches showed a narrow dispersion of accuracy values, as shown in Figure 4.
Figure 4. Angle deviation graph—1–2° [,,,,,,,,,,,,].
  • Angle deviation 2–4°
Twenty-three studies occupied this mid-zone, largely static and dynamic workflows, with only one robotic outlier (Tao et al., 2022 []). Static guides demonstrated a wide spread (2.4–3.9°), reflecting the sensitivity to sleeve length and offset reported by Kivovics et al., 2022 [] and Huang et al., 2023 []. Dynamic navigation studies with inexperienced operators (e.g., Kunakornsawat et al., 2023 []) also fell into this category, reinforcing that real-time tracking cannot fully compensate for a learning curve. The values are represented in Figure 5.
Figure 5. Angle deviation graph—2–4° [,,,,,,,,,,,,,,,,,,,,,,].
  • Angle deviation ≥ 4°
The datasets shown in Figure 6 exceeded the clinically acceptable 4° threshold; most were freehand techniques. Rueda et al., 2022 [] showed the highest values for dynamic navigation, attributable to the novice profile of the surgeon, whose experience was limited to freehand techniques, and to the use of zygomatic implants, which are up to five times the length of conventional implants. No robotic study crossed 4°, confirming the capacity of robotic systems to restrain large angular deviations.
Figure 6. Angle deviation graph ≥ 4° [,,,,,,,,,,,,,,,,,,,,].

3.6.2. Apical Deviation

  • Apical accuracy 0–1 mm
Eighteen studies (Figure 7), split evenly between dynamic and robotic workflows, achieved <1 mm apex error. Kim et al., 2024 [] stands alone as the only static-guide study in this range; its handpiece-mounted zirconia sleeve, which locks the drill path to its final diameter, yielded the best results for the static approach, showing that properly designed mechanical constraints can rival technologically advanced guidance systems. Zhao et al., 2024 [] reports three arms, robotic (0.47 ± 0.03 mm), dynamic navigation (0.64 ± 0.05 mm), and static guidance (0.87 ± 0.07 mm), on the same polyurethane models, confirming that precise planning and good standardization can decrease variability in outcomes. Zhou et al., 2021 [] reports impressive results for the dynamic approach (0.34 ± 0.33 mm), even though the postgraduate residents involved had completed only a few navigation cases. The data illustrate that true apical precision depends on solid registration, whether achieved by a passive sleeve, haptic arms, or an optical tracker, while, as shown in the papers above, limited experience is associated with greater variability.
Figure 7. Apical deviation graph—<1 mm [,,,,,,,,,,,,,,,,,,,,,,,,,,,].
  • Apical accuracy (1–2 mm)
Thirty-six datasets lay between 1 and 2 mm (Figure 8). This is the zone where static and dynamic distributions overlap, confirming meta-analytic equivalence for linear metrics. Tao et al., 2022 [], with a deviation of 1.06 ± 0.59 mm, is the only robotic system remaining in this range. Stefanelli et al., 2020 [] reported that dynamic navigation approaches the lower 1 mm limit, with 3–4-tooth tracing yielding 1.17 ± 0.31 mm and 5–6-tooth tracing 0.88 ± 0.37 mm (pooled mean ~1.01 mm), showing that optical tracking can still provide good outcomes if registration is carefully performed on multiple landmarks. By contrast, the static-guide protocol of Li et al., 2024 [] produced a slightly higher global apical deviation of 1.24 ± 0.52 mm, strongly influenced by the fabrication tolerances of the surgical guides. Freehand datasets are already scarce in this range; most appear in the ≥2 mm group.
Figure 8. Apical deviation graph—1–2 mm [,,,,,,,,,,,,,,,,,,,,,,,,,,].
  • Apical deviation ≥ 2 mm
Twenty-two datasets exceeded an apical error of 2 mm; the distribution consists mainly of static guidance systems and freehand techniques, although a small number of dynamic navigation systems still appear in this range. X. Wang et al., 2022 [] stands out with two extreme outliers in the same study (Figure 9). In its static arm, surgeon experience produced marked fluctuation: the experienced group showed a slightly higher mean with lower variability, and the novice group a slightly lower mean with higher variability (7.27 ± 3.82 mm experienced vs. 7.07 ± 4.38 mm novice); both results arose in dense mandible models. Mampilly et al., 2023 [] also ranked among the largest deviations, measuring a mean apical deviation of 5.89 ± 1.08 mm for the dynamic navigation system versus 6.95 ± 2.12 mm for freehand placement. The study highlights that dynamic tracking can be constrained by line-of-sight interruptions and patient movement, which produce multi-millimetre apex shifts when registration is improper.
Figure 9. Apical deviation graph—≥2 mm [,,,,,,,,,,,].

3.6.3. Coronal (Platform) Deviation

  • Coronal deviation < 1 mm
Thirty datasets were <1 mm (Figure 10). Dynamic navigation contributed the largest share, but static guides with rigid metal sleeves [] were equivalent. Optical tracking can match robotic precision in straightforward sites, as illustrated by Xu et al., 2024 [], whose dynamic clinical cohort of 28 implants planned with Navident and mixed-reality trace registration showed a 0.69 ± 0.25 mm apical deviation, and by the trial of Baoxin []. Freehand placement was rare in this segment; the one freehand point (Hama & Mahmood, 2023 []) corresponded to a veteran operator working in a low-visibility mandible who used extensive surgical stents for alignment.
Figure 10. Coronal deviation graph—<1 mm [,,,,,,,,,,,,,,,,,,,].
  • Coronal deviation 1–2 mm
Twenty-seven datasets clustered here (Figure 11), predominantly static and dynamic. Yimarj et al., 2020 [] (static guide, 1.04 mm but with an SD of ±0.67 mm) reported high variability in their accuracy outcomes, owing to the study’s two operator groups: senior staff placed half of the implants with a deviation of around 0.8 mm, whereas residents’ deviations reached up to 1.6 mm. Wang et al., 2022 [] compared active versus passive infrared navigation systems across a large set of mandible models (64, 704 implants). The learning curve had a profound effect on the outcomes of that study, with the first 12 cases contributing the largest individual errors (>3 mm) and widening the SD to ±1.12 mm. The novice-versus-experienced paper by Jorba-García et al., 2019 [] reinforces that clinically deployed dynamic or static guides typically cluster in the 1–1.3 mm range. No robotic systems appear in this margin; all remained below the 1 mm threshold (except where their SDs extend beyond it).
Figure 11. Coronal deviation graph—1–2 mm [,,,,,,,,,,,,,,,,,,].
  • Coronal deviation ≥ 2 mm
Ten datasets, spanning all modalities except robotic systems, exceeded 2 mm. Mampilly et al., 2023 [] asked five experienced and five novice surgeons to place 4.2 × 10 mm fixtures in three maxillary sites with and without Navident, averaging 5.34 ± 1.45 mm for the navigated arm and 6.19 ± 3.14 mm for freehand placement. The authors traced this outcome to the steep learning curve across the successive three-day sessions, during which operators were still acclimatizing to hand–eye–screen coordination, and to the navigation system’s registration error of ≈0.7 mm. González Rueda et al., 2023 [] reported that their high coronal-entry error was driven by posterior access, limited visual control, and drill chatter in dense bone; these data can be analysed in Figure 12.
Figure 12. Coronal deviation graph—≥2 mm [,,,,,].

4. Discussion

4.1. Main Findings

Across 43 comparative investigations (2019–2025), every computer-assisted workflow outperformed freehand placement in three-dimensional accuracy. Dynamic systems repeatedly achieved smaller angular deviation than static guides (Kaewsiri et al., 2019 []; Taheri Otaghsara et al., 2023 []; Yimarj et al., 2020 []; Yotpibulwong et al., 2023 []), while fully guided static templates approached similar linear precision when rigid metal sleeves or optimized offsets were used (Li et al., 2024 []; Kim et al., 2024 []). Robotic platforms such as Remebot or task-autonomous cobots yielded sub-millimetre and sub-degree control in laboratory settings (Du et al., 2025 []; Xu et al., 2024 []), confirming the quantitative hierarchy previously reported by Khaohoen et al., 2024 [].

4.2. Relation to Previous Work

Early attempts to standardize implant positioning relied on two-dimensional intra-oral radiographs and geometric back-projection. Cosola et al., 2021 [] showed that simple radiographic calibration markers can reconstruct three-dimensional implant position with reasonable precision, highlighting that digital navigation builds on longstanding principles rather than replacing them. Our review builds on these foundational studies by harmonizing accuracy metrics across modalities and integrating newer robotic and zygomatic data under a consistent analytic framework. The accuracy hierarchy observed here mirrors that of Du et al., 2025 [] and Khan et al., 2024 [], but contrasts partially with Khaohoen et al., 2024 [], who reported no significant difference between dynamic and static guidance in clinical trials. This discrepancy likely reflects the inclusion of zygomatic and hybrid workflows in our dataset. Moreover, unlike prior meta-analyses, we did not identify any cadaveric comparative evidence or AI-based systems, exposing a translational gap between laboratory validation and clinical reality.

4.3. Static vs. Dynamic Navigation

The equivalence of static and dynamic linear metrics seen here mirrors earlier findings (Yu et al., 2023 []; Khan et al., 2024 []). Differences emerge mainly in angular deviation, where continuous visual feedback enables intra-operative trajectory correction. Studies using Navident and Iris-100 consistently showed mean angular errors below 2° (Aydemir et al., 2020 []; Zhou et al., 2021 []; Kunakornsawat et al., 2023 []), whereas pilot-only static guides drifted toward 3–4° (Kivovics et al., 2022 []; Huang et al., 2023 []). Hybrid approaches combining static sleeves with real-time feedback improved reproducibility in zygomatic and full-arch cases (Du et al., 2025 []; González-Rueda et al., 2023 []). These patterns indicate that angular control, more than linear trueness, differentiates dynamic systems.

4.4. Dynamic vs. Robotic Guidance

Robotic systems demonstrated narrower deviation ranges (<1 mm, <2°) than manual dynamic navigation under identical conditions (Tao et al., 2022 []; Xu et al., 2024 []; Zhao et al., 2024 []). Chen et al., 2023 [] confirmed consistent precision in immediate and full-arch placements, while Li et al., 2024 [] and Zhang et al., 2024 [] showed reproducibility in anterior and posterior regions. Nevertheless, most robotic data remain in vitro or single-centre clinical, limiting external validity. Current evidence indicates that mechanical constraint and calibration accuracy, rather than autonomous control, account for the observed precision of robotic workflows.

4.5. Strengths and Limitations of the Evidence

Only six of forty-three studies were at low risk of bias (one RCT and five in vitro experiments). Of the seven randomized clinical trials, six had “some concerns” and only one was “low risk”, mainly due to unavoidable surgeon un-blinding and inconsistent assessor blinding. Among the fourteen non-randomized clinical cohorts/case-series, nine were at serious risk, mainly because accuracy was measured by the treating team or derived retrospectively; the remaining five were at moderate risk. Of the twenty-six in vitro studies, twenty-two carried a moderate rating (absent or unreported examiner blinding), and five fulfilled all QUIN criteria. These issues may inflate reported advantages and suggest that real-world accuracy gains could be smaller than pooled medians indicate.
Additional heterogeneity arises from varied measurement protocols (single 3-D apex-to-plan distance vs. separate linear and angular metrics) and diverse registration or drill-sequence procedures, particularly within static and dynamic systems. Operator experience was poorly documented; only a handful of studies explored learning-curve effects.
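The distinction between a single 3-D apex-to-plan distance and separate linear and angular metrics can be made concrete. The sketch below (illustrative Python; the function and variable names are our own, not drawn from any included study or vendor software) computes the three outcomes most often reported: coronal (platform) deviation, apical deviation, and angular deviation between the planned and placed implant axes.

```python
import numpy as np

def implant_deviations(plan_coronal, plan_apical, placed_coronal, placed_apical):
    """Compute commonly reported CAIS accuracy metrics.

    Inputs are 3-D coordinates (mm) of the planned and placed implant
    platform (coronal) and apex points. Returns (coronal deviation mm,
    apical deviation mm, angular deviation in degrees).
    """
    plan_c = np.asarray(plan_coronal, float)
    plan_a = np.asarray(plan_apical, float)
    plc_c = np.asarray(placed_coronal, float)
    plc_a = np.asarray(placed_apical, float)

    coronal_dev = np.linalg.norm(plc_c - plan_c)  # 3-D global deviation at platform
    apical_dev = np.linalg.norm(plc_a - plan_a)   # 3-D global deviation at apex

    # Angle between planned and placed long axes (unit vectors, clipped dot product)
    v_plan = (plan_a - plan_c) / np.linalg.norm(plan_a - plan_c)
    v_plc = (plc_a - plc_c) / np.linalg.norm(plc_a - plc_c)
    angle_deg = np.degrees(np.arccos(np.clip(v_plan @ v_plc, -1.0, 1.0)))
    return coronal_dev, apical_dev, angle_deg
```

Studies reporting only the apex-to-plan distance collapse the first and third quantities into one number, which is why datasets using separate metrics cannot always be compared directly.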

4.6. Technology-Specific Considerations

Static templates (s-CAIS). Accuracy depends on guide support (tooth/mucosa/bone-borne), sleeve–drill tolerances, offsets, and multi-step drill stacks. Template stability and irrigation pathways limit performance in posterior or limited-access sites and may influence thermal control.
Dynamic navigation (d-CAIS). Key drivers are registration (fiducials vs. surface matching), line-of-sight for optical tracking, tool calibration (bur swaps, length compensation), and latency. Continuous visual feedback likely underpins the angular advantage observed in several datasets, balanced against cognitive load (screen–field switching) and dependence on tracking quality.
Robot-assisted systems (r-CAIS). Tight clusters likely reflect mechanical constraint (haptic/kinematic limits) and consistent trajectory enforcement. Performance is sensitive to registration accuracy, arm calibration/compensation, stiffness/backlash, and autonomy level (shared control vs. autonomous). In our dataset, clinical r-CAIS evidence draws from relatively few platforms, which constrains causal inference despite encouraging precision.
Accuracy varied with registration and anatomical site. Trace-registration and markerless optical workflows reduced error relative to fiducial approaches (Wu et al., 2023 []; Xu et al., 2024 []). Posterior maxillae and zygomatic trajectories generated larger angular spread due to restricted access and dense bone (González-Rueda et al., 2023 []; Rueda et al., 2022 []). Static templates using mucosa-borne supports showed greater variability than tooth- or bone-borne designs (Yimarj et al., 2020 []; Lysenko et al., 2023 []). Such variability underscores the need for standardized reporting of registration accuracy, sleeve offset, and support type in future studies.
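Fiducial workflows ultimately reduce to a least-squares rigid registration between marker coordinates in the CBCT scan and in the tracker frame; the residual of that fit is the fiducial registration error (FRE) that some studies report. A minimal sketch of this step, using the standard Kabsch/SVD solution, is shown below (illustrative code under our own naming, not a vendor API).

```python
import numpy as np

def register_fiducials(source_pts, target_pts):
    """Least-squares rigid registration (Kabsch algorithm) of paired fiducials.

    Returns rotation R, translation t mapping source -> target, and the
    RMS fiducial registration error (mm).
    """
    src = np.asarray(source_pts, float)
    tgt = np.asarray(target_pts, float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)

    # Cross-covariance of centred point sets
    H = (src - src_c).T @ (tgt - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c

    residuals = (R @ src.T).T + t - tgt
    fre = np.sqrt((residuals ** 2).sum(axis=1).mean())  # RMS residual (FRE)
    return R, t, fre
```

A low FRE does not guarantee a low error at the implant site (target registration error grows with distance from the fiducial array), which is one reason registration protocols should be reported explicitly.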

4.7. Zygomatic Implants

Zygomatic placement presents unique challenges (long drill path, angulation, and proximity to vital structures), which can magnify the systematic and random errors inherent to each guidance modality. Contemporary clinical series using combined static-dynamic workflows (Du et al., 2025 []) report angular deviations of ~2° and coronal global deviations of ~1.1–1.3 mm, with apical medians of ~1.7 mm, suggesting that hybrid guidance can stabilize coronal control while maintaining acceptable apical accuracy. Within zygomatic cohorts, we observed no clear side-to-side differences, and accuracy appeared comparable across typical implant lengths, although confidence intervals were wide. These findings align with broader trends that dynamic guidance improves 3D control yet remains sensitive to registration fidelity and surgeon experience; combining static sleeves [] (for initial positioning) with dynamic feedback appears to mitigate drift over long trajectories.

4.8. Learning Curve and Training

Wang et al., 2022 [] conducted an in vitro study with 704 implants using active vs. passive infrared dynamic navigation (Yizhimei) and found that the learning curve approached plateau after approximately 12 cases and converged by 27 placements. Active IR systems reached stable accuracy faster than passive IR systems, with consistently smaller angular and coronal deviations.
Wang et al., 2023 [] compared freehand, static-guided, and dynamic navigation among novice and experienced surgeons and showed that dynamic navigation significantly reduced angular error compared with the other techniques. The accuracy difference between novice and expert users disappeared under dynamic navigation, while procedure time remained longer than freehand but was not affected by experience. Other papers showed that students using dynamic navigation achieved lower angular deviation than with other approaches such as freehand (4.9° vs. 10.1°) and were uniformly satisfied, although procedure time doubled in initial trials []. Gender and gaming familiarity had no influence on performance, according to Yan et al., 2024 [].
Kunakornsawat et al., 2023 [] performed an exploratory randomized trial on novice surgeons using dynamic navigation. The study found that training distributed over several days resulted in faster operative performance and slightly improved coronal accuracy in later sessions compared with same-day massed training. Both groups achieved acceptable accuracy levels after the third session.
Education-focused and translational studies indicate rapid proficiency gains with dynamic systems after short, distributed training; template workflows hinge on accurate lab steps and fit checks; robotic systems add setup/calibration time and team coordination. Reporting operator credentials, prior case numbers, and training dose is essential to interpret performance and to generalize across centres.

4.9. Clinical Implications

For anatomically constrained or highly aesthetic sites, dynamic navigation already offers a pragmatic accuracy improvement over freehand surgery, especially for inexperienced operators. Static guides remain a cost-effective option for fully edentulous arches or when optical tracking is unavailable. Robotic systems appear most precise but demand higher capital investment and workflow changes. Clinicians should weigh these factors against the modest (≈0.5–1 mm) absolute accuracy benefit.

4.10. Translational and Evidence Gaps

Although cadaveric reports exist in the broader CAIS literature, none fulfilled the inclusion criteria, mainly because they lacked a comparative design or full quantitative deviation data, leaving an unvalidated bridge between in vitro precision and realistic anatomy. At the same time, none of the included studies incorporated genuine AI or machine-learning algorithms; all relied on deterministic optical tracking or haptic constraint (Chen et al., 2023 []; Zhang et al., 2024 []). This confirms that current “robotic intelligence” remains mechanical rather than adaptive. Bridging these gaps requires standardized cadaveric validation and exploration of AI-assisted registration or trajectory optimization.

4.11. Future Directions

While the review provides a comprehensive synthesis of the current literature on static, dynamic, and robot-assisted implant systems, several limitations must be acknowledged. A large proportion of the included studies were conducted in vitro or in controlled experimental models, which may not fully replicate clinical variability (soft tissues, patient movement).
Additionally, although all studies reported accuracy outcomes, differences in how these were defined (e.g., 3D apex-to-plan distance versus separate linear and angular values) may reduce comparability across datasets. The variability in registration protocols, template designs, and drill-sequence control, especially in static and dynamic systems, introduces methodological heterogeneity, even as it provides high diversity and a broader overview of the field. Future research should prioritize controlled cadaveric trials, standardized 3D deviation metrics, and exploration of AI-assisted registration or adaptive control algorithms.

5. Conclusions

This systematic review provides an updated and methodologically harmonized overview of the accuracy of computer-assisted implant surgery (CAIS), integrating evidence from robotic, dynamic, and static systems published between 2019 and 2025. All guided modalities demonstrated superior positional precision compared with freehand placement, confirming the cumulative benefit of computerization in implant surgery. Dynamic and static systems achieved comparable linear accuracy, with dynamic navigation showing a consistent angular advantage. Robotic platforms achieved the tightest accuracy clusters, although clinical data remain limited.
The distinctive contribution of this review lies in its unified comparison across modalities using standardized accuracy definitions, inclusion of hybrid and zygomatic workflows, and synthesis of learning-curve data; dimensions not previously consolidated in a single analysis.
Nowadays, static guides and dynamic navigation techniques have reached clinical maturity, whereas robotic systems are still at a pioneering stage, with continuing development aiming to reduce operator-dependent variability and enhance surgical reproducibility. The clinical evidence base for robotic systems remains relatively limited, with most reported datasets derived from in vitro or non-randomized studies. This illustrates the need for more high-quality clinical trials to fully assess the impact of robotic assistance in everyday dental surgery practice. Dynamic navigation and robotic systems provide the highest placement accuracy, followed by static guides and freehand drilling. Clinicians should weigh the modest accuracy benefit against equipment cost and training needs, while future well-designed trials are required to confirm these findings.
Clinically, CAIS technologies should be regarded as complementary rather than competitive tools; static guides provide predictable outcomes for routine cases, dynamic systems enable intraoperative adaptability, and robotic assistance offers the potential for maximal reproducibility once validated in larger clinical cohorts.
Future research should prioritize high-quality comparative trials on cadaveric or clinical models, standardized 3D deviation metrics, and exploration of adaptive AI-assisted registration and control. These directions will close the translational gap between laboratory precision and consistent real-world outcomes.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/dj13110537/s1. Figures S1–S12: angular_dynamic; Table S1: PRISMA_2020_Checklist; Table S2: Supplementary Table S2.

Author Contributions

Conceptualization, V.B. and D.P. (Daria Pisla).; methodology, V.B.; software, P.T.; validation, D.P. (Doina Pisla)., M.H. and C.V.; formal analysis, R.M.; investigation, V.B.; resources, A.P.; data curation, C.D.; writing—original draft preparation, V.B.; writing—review and editing, V.B.; visualization, C.V.; supervision, D.P. (Doina Pisla).; project administration, D.P. (Doina Pisla); funding acquisition, D.P. (Doina Pisla). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project Technologically Enabled Advancements in Dental Medicine (TEAM), funded by the European Union—NextGenerationEU and the Romanian Government, under National Recovery and Resilience Plan for Romania, contract no. 760303_CF_80_I8_R2_AA1, through the Romanian Ministry of Research, Innovation and Digitalization, within Component 9, investment I8, and by the “Iuliu Hatieganu” University of Medicine and Pharmacy Cluj-Napoca, Romania, Department of Maxillofacial Surgery and Radiology, Doctoral Research Program, through project PCD 646/40/11.01.2024.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

This work was supported by project PNRR-III-C9-2023-I8, “Technologically Enabled Advancements in Dental Medicine (TEAM)”, CF.80/31.07.2023, number 760235/28.12.2023. TEAM Project Group: Jacobs R., Almășan O., Băciuț M., Bran S., Burde A., Cordoș A., Crișan B., Dinu C., Dioșan L., Mureșanu S., Hedeșiu M., Ilea A., Leucuța D.C., Lucaciu O., Manea A., Mocan R., Olariu E., Pisla D., Roman R., Rotaru H., Stoia S., Tamaș T., and Văcăraș S.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Khaohoen, A.; Powcharoen, W.; Sornsuwan, T.; Chaijareenont, P.; Rungsiyakull, C.; Rungsiyakull, P. Accuracy of implant placement with computer-aided static, dynamic, and robot-assisted surgery: A systematic review and meta-analysis of clinical trials. BMC Oral Health 2024, 24, 359. [Google Scholar] [CrossRef]
  2. Younis, H.; Lv, C.; Xu, B.; Zhou, H.; Du, L.; Liao, L.; Zhao, N.; Long, W.; Elayah, S.A.; Chang, X.; et al. Accuracy of dynamic navigation compared to static surgical guides and the freehand approach in implant placement: A prospective clinical study. Head Face Med. 2024, 20, 30. [Google Scholar] [CrossRef]
  3. Xu, Z.; Zhou, L.; Han, B.; Wu, S.; Xiao, Y.; Zhang, S.; Chen, J.; Guo, J.; Wu, D. Accuracy of dental implant placement using different dynamic navigation and robotic systems: An in vitro study. npj Digit. Med. 2024, 7, 182. [Google Scholar] [CrossRef]
  4. Shi, Y.; Wang, J.; Ma, C.; Shen, J.; Dong, X.; Lin, D. A systematic review of the accuracy of digital surgical guides for dental implantation. Int. J. Implant. Dent. 2023, 9, 38. [Google Scholar] [CrossRef]
  5. Khan, M.; Javed, F.; Haji, Z.; Ghafoor, R. Comparison of the positional accuracy of robotic guided dental implant placement with static guided and dynamic navigation systems: A systematic review and meta-analysis. J. Prosthet. Dent. 2024, 132, 746.e1–746.e8. [Google Scholar] [CrossRef] [PubMed]
  6. Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210. [Google Scholar] [CrossRef] [PubMed]
  7. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  8. Higgins, J.P.T.; Thomas, J.; Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.A. (Eds.) Cochrane Handbook for Systematic Reviews of Interventions, 2nd ed.; John Wiley & Sons: Chichester, UK, 2019. [Google Scholar] [CrossRef]
  9. Tao, B.; Feng, Y.; Fan, X.; Zhuang, M.; Chen, X.; Wang, F.; Wu, Y. Accuracy of dental implant surgery using dynamic navigation and robotic systems: An in vitro study. J. Dent. 2022, 123, 104170. [Google Scholar] [CrossRef]
  10. Otaghsara, S.S.T.; Joda, T.; Thieringer, F.M. Accuracy of dental implant placement using static versus dynamic computer-assisted implant surgery: An in vitro study. J. Dent. 2023, 132, 104487. [Google Scholar] [CrossRef]
  11. Jorba-García, A.; Bara-Casaus, J.J.; Camps-Font, O.; Sánchez-Garcés, M.Á.; Figueiredo, R.; Valmaseda-Castellón, E. Accuracy of dental implant placement with or without the use of a dynamic navigation assisted system: A randomized clinical trial. Clin. Oral Implant. Res. 2023, 34, 438–449. [Google Scholar] [CrossRef]
  12. Kaewsiri, D.; Panmekiate, S.; Subbalekha, K.; Mattheos, N.; Pimkhaokham, A. The accuracy of static vs. dynamic computer-assisted implant surgery in single tooth space: A randomized controlled trial. Clin. Oral Implant. Res. 2019, 30, 505–514. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, J.; Zhuang, M.; Tao, B.; Wu, Y.; Ye, L.; Wang, F. Accuracy of immediate dental implant placement with task-autonomous robotic system and navigation system: An in vitro study. Clin. Oral Implant. Res. 2024, 35, 973–983. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, S.; Cai, Q.; Chen, W.; Lin, Y.; Gao, Y.; Wu, D.; Chen, J. Accuracy of implant placement via dynamic navigation and autonomous robotic computer-assisted implant surgery methods: A retrospective study. Clin. Oral Implant. Res. 2024, 35, 220–229. [Google Scholar] [CrossRef] [PubMed]
  15. Li, J.; Dai, M.; Wang, S.; Zhang, X.; Fan, Q.; Chen, L. Accuracy of immediate anterior implantation using static and robotic computer-assisted implant surgery: A retrospective study. J. Dent. 2024, 148, 105218. [Google Scholar] [CrossRef]
  16. Zhao, W.; Teng, W.; Su, Y.; Zhou, L. Accuracy of dental implant surgery with freehand, static computer-aided, dynamic computer-aided, and robotic computer-aided implant systems: An in vitro study. J. Prosthet. Dent. 2024. [Google Scholar] [CrossRef]
  17. González-Rueda, J.; Galparsoro-Catalán, A.; de Paz-Hermoso, V.; Riad-Deglow, E.; Zubizarreta-Macho, Á.; Pato-Mourelo, J.; Hernández-Montero, S.; Montero-Martín, J. Accuracy of zygomatic dental implant placement using computer-aided static and dynamic navigation systems compared with a mixed reality appliance. An in vitro study. J. Clin. Exp. Dent. 2023, 15, e1035–e1044. [Google Scholar] [CrossRef]
  18. Mampilly, M.; Kuruvilla, L.; Niyazi, A.A.T.; Shyam, A.; Thomas, P.A.; Ali, A.S.; Pullishery, F. Accuracy and Self-Confidence Level of Freehand Drilling and Dynamic Navigation System of Dental Implants: An In Vitro Study. Cureus 2023, 15, e49618. [Google Scholar] [CrossRef]
  19. Zhou, M.; Zhou, H.; Li, S.-Y.; Zhu, Y.-B.; Geng, Y.-M. Comparison of the accuracy of dental implant placement using static and dynamic computer-assisted systems: An in vitro study. J. Stomatol. Oral Maxillofac. Surg. 2021, 122, 343–348. [Google Scholar] [CrossRef]
  20. Wu, B.; Xue, F.; Ma, Y.; Sun, F. Accuracy of automatic and manual dynamic navigation registration techniques for dental implant surgery in posterior sites missing a single tooth: A retrospective clinical analysis. Clin. Oral Implant. Res. 2023, 34, 221–232. [Google Scholar] [CrossRef]
  21. Pomares-Puig, C.; Sánchez-Garcés, M.A.; Jorba-García, A. Dynamic and static computer-assisted implant surgery for completely edentulous patients. A proof of a concept. J. Dent. 2023, 130, 104443. [Google Scholar] [CrossRef]
  22. Wang, W.; Zhuang, M.; Li, S.; Shen, Y.; Lan, R.; Wu, Y.; Wang, F. Exploring training dental implant placement using static or dynamic devices among dental students. Eur. J. Dent. Educ. 2023, 27, 438–448. [Google Scholar] [CrossRef] [PubMed]
  23. Du, C.; Peng, P.; Guo, X.; Wu, Y.; Zhang, Z.; Hao, L.; Zhang, Z.; Xiong, J. Combined static and dynamic computer-guided surgery for prosthetically driven zygomatic implant placement. J. Dent. 2025, 152, 105453. [Google Scholar] [CrossRef] [PubMed]
  24. Rueda, J.R.G.; Catalán, A.G.; Hermoso, V.M.d.P.; Deglow, E.R.; Zubizarreta-Macho, Á.; Mourelo, J.P.; Martín, J.M.; Montero, S.H. Accuracy of computer-aided static and dynamic navigation systems in the placement of zygomatic dental implants. BMC Oral Health 2023, 23, 150. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, X.; Shujaat, S.; Meeus, J.; Shaheen, E.; Legrand, P.; Lahoud, P.; Gerhardt, M.D.N.; Jacobs, R. Performance of novice versus experienced surgeons for dental implant placement with freehand, static guided and dynamic navigation approaches. Sci. Rep. 2023, 13, 2598. [Google Scholar] [CrossRef]
  26. Kunakornsawat, W.; Serichetaphongse, P.; Arunjaroensuk, S.; Kaboosaya, B.; Mattheos, N.; Pimkhaokham, A. Training of novice surgeons using dynamic computer assisted dental implant surgery: An exploratory randomized trial. Clin. Implant. Dent. Relat. Res. 2023, 25, 511–518. [Google Scholar] [CrossRef]
  27. Wang, X.-Y.; Liu, L.; Guan, M.-S.; Liu, Q.; Zhao, T.; Li, H.-B. The accuracy and learning curve of active and passive dynamic navigation-guided dental implant surgery: An in vitro study. J. Dent. 2022, 124, 104240. [Google Scholar] [CrossRef]
  28. Jaemsuwan, S.; Arunjaroensuk, S.; Kaboosaya, B.; Subbalekha, K.; Mattheos, N.; Pimkhaokham, A. Comparison of the accuracy of implant position among freehand implant placement, static and dynamic computer-assisted implant surgery in fully edentulous patients: A non-randomized prospective study. Int. J. Oral Maxillofac. Surg. 2023, 52, 264–271. [Google Scholar] [CrossRef]
  29. Wang, X.; Shaheen, E.; Shujaat, S.; Meeus, J.; Legrand, P.; Lahoud, P.; Gerhardt, M.D.N.; Politis, C.; Jacobs, R. Influence of experience on dental implant placement: An in vitro comparison of freehand, static guided and dynamic navigation approaches. Int. J. Implant. Dent. 2022, 8, 42. [Google Scholar] [CrossRef]
  30. Zhong, X.; Xing, Y.; Yan, J.; Chen, J.; Chen, Z.; Liu, Q. Surgical performance of dental students using computer-assisted dynamic navigation and freehand approaches. Eur. J. Dent. Educ. 2024, 28, 504–510. [Google Scholar] [CrossRef]
  31. Lysenko, A.V.; Yaremenko, A.I.; Ivanov, V.M.; Lyubimov, A.I.; Leletkina, N.A.; Prokofeva, A.A. Comparison of Dental Implant Placement Accuracy Using a Static Surgical Guide, a Virtual Guide and a Manual Placement Method—An In-Vitro Study. Ann. Maxillofac. Surg. 2023, 13, 158–162. [Google Scholar] [CrossRef]
  32. Kivovics, M.; Takács, A.; Pénzes, D.; Németh, O.; Mijiritsky, E. Accuracy of dental implant placement using augmented reality-based navigation, static computer assisted implant surgery, and the free-hand method: An in vitro study. J. Dent. 2022, 119, 104070. [Google Scholar] [CrossRef] [PubMed]
  33. Aydemir, C.A.; Arısan, V. Accuracy of dental implant placement via dynamic navigation or the freehand method: A split-mouth randomized controlled clinical trial. Clin. Oral Implant. Res. 2020, 31, 255–263. [Google Scholar] [CrossRef]
  34. Chen, J.; Bai, X.; Ding, Y.; Shen, L.; Sun, X.; Cao, R.; Yang, F.; Wang, L. Comparison the accuracy of a novel implant robot surgery and dynamic navigation system in dental implant surgery: An in vitro pilot study. BMC Oral Health 2023, 23, 179. [Google Scholar] [CrossRef] [PubMed]
  35. Feng, Y.; Su, Z.; Mo, A.; Yang, X. Comparison of the accuracy of immediate implant placement using static and dynamic computer-assisted implant system in the esthetic zone of the maxilla: A prospective study. Int. J. Implant. Dent. 2022, 8, 65. [Google Scholar] [CrossRef] [PubMed]
  36. Yotpibulwong, T.; Arunjaroensuk, S.; Kaboosaya, B.; Sinpitaksakul, P.; Arksornnukit, M.; Mattheos, N.; Pimkhaokham, A. Accuracy of implant placement with a combined use of static and dynamic computer-assisted implant surgery in single tooth space: A randomized controlled trial. Clin. Oral Implant. Res. 2023, 34, 330–341. [Google Scholar] [CrossRef]
  37. Kim, Y.-J.; Kim, J.; Lee, J.-R.; Kim, H.-S.; Sim, H.-Y.; Lee, H.; Han, Y.-S. Comparison of the accuracy of implant placement using a simple guide device and freehand surgery. J. Dent. Sci. 2024, 19, 2256–2261. [Google Scholar] [CrossRef]
  38. Jia, S.; Wang, G.; Zhao, Y.; Wang, X. Accuracy of an autonomous dental implant robotic system versus static guide-assisted implant surgery: A retrospective clinical study. J. Prosthet. Dent. 2023, 133, 771–779. [Google Scholar] [CrossRef]
  39. Shusterman, A.; Nashef, R.; Tecco, S.; Mangano, C.; Lerner, H.; Mangano, F.G. Accuracy of implant placement using a mixed reality-based dynamic navigation system versus static computer-assisted and freehand surgery: An in Vitro study. J. Dent. 2024, 146, 105052. [Google Scholar] [CrossRef]
  40. Yimarj, P.; Subbalekha, K.; Dhanesuan, K.; Siriwatana, K.; Mattheos, N.; Pimkhaokham, A. Comparison of the accuracy of implant position for two-implants supported fixed dental prosthesis using static and dynamic computer-assisted implant surgery: A randomized controlled clinical trial. Clin. Implant. Dent. Relat. Res. 2020, 22, 672–678. [Google Scholar] [CrossRef]
  41. Neuschitzer, M.; Toledano-Serrabona, J.; Jorba-García, A.; Bara-Casaus, J.; Figueiredo, R.; Valmaseda-Castellón, E. Comparative accuracy of dCAIS and freehand techniques for immediate implant placement in the maxillary aesthetic zone: An in vitro study. J. Dent. 2025, 153, 105472. [Google Scholar] [CrossRef]
  42. Hama, D.R.; Mahmood, B.J. Comparison of accuracy between free-hand and surgical guide implant placement among experienced and non-experienced dental implant practitioners: An in vitro study. J. Periodontal Implant. Sci. 2023, 53, 388–401. [Google Scholar] [CrossRef]
  43. Huang, L.; Liu, L.; Yang, S.; Khadka, P.; Zhang, S. Evaluation of the accuracy of implant placement by using implant positional guide versus freehand: A prospective clinical study. Int. J. Implant. Dent. 2023, 9, 45. [Google Scholar] [CrossRef]
  44. Rueda, J.R.G.; Ávila, I.G.; Hermoso, V.M.d.P.; Deglow, E.R.; Zubizarreta-Macho, Á.; Mourelo, J.P.; Martín, J.M.; Montero, S.H. Accuracy of a Computer-Aided Dynamic Navigation System in the Placement of Zygomatic Dental Implants: An In Vitro Study. J. Clin. Med. 2022, 11, 1436. [Google Scholar] [CrossRef]
  45. Stefanelli, L.V.; Mandelaris, G.A.; Franchina, A.; Pranno, N.; Pagliarulo, M.; Cera, F.; Maltese, F.; De Angelis, F.; Di Carlo, S. Accuracy of dynamic navigation system workflow for implant supported full arch prosthesis: A case series. Int. J. Environ. Res. Public Health 2020, 17, 5038. [Google Scholar] [CrossRef] [PubMed]
  46. Kang, Y.; Ge, Y.; Ding, M.; Liu-Fu, J.; Cai, Z.; Shan, X. A comparison of accuracy among different approaches of static-guided implant placement in patients treated with mandibular reconstruction: A retrospective study. Clin. Oral Implant. Res. 2024, 35, 251–257. [Google Scholar] [CrossRef] [PubMed]
  47. Yan, Q.; Wu, X.; Shi, J.; Shi, B. Does dynamic navigation assisted student training improve the accuracy of dental implant placement by postgraduate dental students: An in vitro study. BMC Oral Health 2024, 24, 600. [Google Scholar] [CrossRef] [PubMed]
  48. Jorba-Garcia, A.; Figueiredo, R.; Gonzalez-Barnadas, A.; Camps-Font, O.; Valmaseda-Castellon, E. Accuracy and the role of experience in dynamic computer guided dental implant surgery: An in-vitro study. Med. Oral Patol. Oral Cir. Bucal 2019, 24, E76–E83. [Google Scholar] [CrossRef]
  49. Mediavilla Guzmán, A.; Riad Deglow, E.; Zubizarreta-Macho, Á.; Agustín-Panadero, R.; Hernández Montero, S. Accuracy of computer-aided dynamic navigation compared to computer-aided static navigation for dental implant placement: An in vitro study. J. Clin. Med. 2019, 8, 2123. [Google Scholar] [CrossRef]
  50. Chen, J.; Ding, Y.; Cao, R.; Zheng, Y.; Shen, L.; Wang, L.; Yang, F. Accuracy of a Novel Robot-Assisted System and Dynamic Navigation System for Dental Implant Placement: A Clinical Retrospective Study. Clin. Oral Implant. Res. 2025, 36, 725–735. [Google Scholar] [CrossRef]
  51. Yu, X.; Tao, B.; Wang, F.; Wu, Y. Accuracy assessment of dynamic navigation during implant placement: A systematic review and meta-analysis of clinical studies in the last 10 years. J. Dent. 2023, 135, 104567. [Google Scholar] [CrossRef]
  52. Cosola, S.; Toti, P.; Peñarrocha-Diago, M.; Covani, U.; Brevi, B.C.; Peñarrocha-Oltra, D. Standardization of three-dimensional pose of cylindrical implants from intraoral radiographs: A preliminary study. BMC Oral Health 2021, 21, 100. [Google Scholar] [CrossRef]
