The Role of Augmented Reality Neuronavigation in Transsphenoidal Surgery: A Systematic Review

In the field of minimally invasive neurosurgery, microscopic transsphenoidal surgery (MTS) and endoscopic transsphenoidal surgery (ETS) have been widely accepted as safe approaches for pituitary lesions and, more recently, their indications have been extended to lesions at various skull base regions. During transsphenoidal surgery (TS), it is mandatory to identify key anatomical landmarks in the sphenoid sinus and distinguish them from the lesion. Over the years, many intraoperative tools have been introduced to improve neuronavigation systems, aiming to achieve safer and more accurate neurosurgical interventions. However, traditional neuronavigation systems may lose real-time localization accuracy due to the discrepancy between the actual surgical field and the preoperative 2D images. To deal with this, augmented reality (AR), a sophisticated 3D technology that superimposes computer-generated virtual objects onto the user's view of the real world, has been considered a promising tool. In the field of TS in particular, AR can mitigate the anatomical challenges of traditional endoscopic or microscopic surgery, aiding surgical training, preoperative planning and intraoperative orientation. The aim of this systematic review is to analyze the potential future role of augmented reality in both endoscopic and microscopic transsphenoidal surgery.


Introduction
Augmented reality (AR), also referred to as 'Enhanced Reality' or simply 'Image Enhancement' (IE), is an advanced technology that superimposes computer-generated virtual objects onto the user's view of the real world. Unlike virtual reality (VR), which immerses the user in an entirely computer-generated environment, AR aims to enhance a real-world image with virtual objects or subjects [1][2][3].
In the age of minimally invasive surgery (MIS), where multiple critical structures are packed into a remarkably compact anatomical region and landmark identification is critical, this technology has been considered a promising tool to improve visualization and surgical guidance ever since it was first described in the clinical setting by Kelly et al. [4] and Roberts et al. [5] in the 1980s. In the field of MIS, microscopic transsphenoidal surgery (MTS) and, more recently and now more widely used, endoscopic transsphenoidal surgery (ETS) [5] are among the most commonly performed minimally invasive neurosurgical procedures for the resection of pituitary lesions (Figure 1) [6][7][8][9].
Brain Sci. 2023, 13, 1695
MTS and especially ETS have proven to be safe and effective, guaranteeing a good post-operative outcome. Despite the obvious post-operative advantages of minimally invasive surgery, especially when compared to open transcranial surgical approaches (OTCS), their limited visualization remains a critical hurdle that has not yet been adequately overcome by standard surgical neuronavigation guidance systems.
To deal with this hurdle, an AR guidance system may have several roles and applications in TS. AR provides real-time information overlaid on the surgeon's view, which can include 3D reconstructions of the surgical field, anatomical landmarks and tumor boundaries. This enhanced visualization may improve the surgeon's spatial orientation and accuracy during the neurosurgical procedure. In addition, AR can be used as a navigation guidance system that overlays preoperative imaging data directly onto the patient's anatomy, allowing the surgeon to follow a planned path for tumor resection even in cases where landmarks are obscured or absent [9]. Moreover, for patients with significant anatomical variants, especially those with conditions like acromegaly [8,10–13], AR can adapt to these variations by constantly updating the surgical plan based on real-time patient-specific data, ensuring precision and safety. The aim of this systematic review is to analyze the potential future role of AR in both endoscopic and microscopic TS: could AR navigation be the next promising tool in the evolution of minimally invasive transsphenoidal surgery?

Materials and Methods
We conducted a systematic review of the literature, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [14] (Figure 1), using PubMed/Medline; additional articles were identified through manual searches on PubMed, Google Scholar and Scopus. Ethical approval and patient consent were not required for this study.
Herein, a flow chart of the systematic review's key stages is reported (Figure 2).
The following MeSH terms were used:
Two independent reviewers performed the study selection to include all relevant papers from the literature. Collected data were evaluated by two authors (V.G. and C.A.). Disagreements were resolved via discussion and consensus, with a third author (B.M.C.) mediating whenever necessary. Duplicate papers were removed using Microsoft Excel 16.37 (Microsoft, Redmond, WA, USA). Then, the titles and abstracts of the search results were screened, and unrelated articles were excluded. Full-text articles were assessed for eligibility.

Eligibility Criteria
Articles were considered eligible according to the following inclusion criterion:

• Full article in English.

The following exclusion criteria were applied:

• Articles not in English;
• Articles reporting surgical simulation only in virtual reality (VR).

The article's full text was retrieved for further investigation when the title and abstract met the inclusion criteria, while irrelevant papers were excluded. Papers were selected with no time limit, including studies from 2002 to 2023.

Data Extraction
After selecting the relevant studies according to the inclusion/exclusion criteria above, the data extracted from each paper were: first author, publication year, study design, patients' characteristics, test subjects (in vivo, experimental model, cadaver specimens), pathology, AR display devices, AR technique, landmark identification, accuracy results, surgical time of AR-assisted surgery, reoperation or anatomical variants (e.g., acromegaly), outcomes, and the surgeon's perspective on using AR-assisted surgery (Table 1).

Results
A total of 119 published studies were identified using PubMed and additional reference list searches. After removing duplicates in Excel, n = 74 articles were screened. Based on the titles and abstracts, we then excluded 29 articles. Two articles were not retrieved because the full text was not available, and two articles were excluded because they were not in English. Another 25 articles were excluded because they did not meet our eligibility criteria. Finally, 16 papers were included in this systematic review.


Augmented Reality Techniques
Fifteen of the 16 studies used AR-assisted navigation during the procedures analyzed: 11 studies used AR-assisted endoscopy with AR superimposed onto the endoscopic display (68.75%), and 3 studies used AR-assisted microscopy (18.75%), of which two used a heads-up microscopic display and one superimposed AR onto the microscope eyepiece, while in one study (6.25%) AR assisted both endoscopy and microscopy. The remaining study (6.25%) used AR for preoperative simulation, which was then compared to intraoperative live endoscopic images.

Landmarks Identification
Regarding the number and type of structures identified, AR-assisted neuronavigation allowed for the identification of the main anatomical landmarks in most studies, such as the internal carotid arteries (62%), the optic nerves (37%) and the pituitary gland (19%) (Figure 4).

Accuracy of the Results
A variety of target registration error (TRE) parameters were reported in the included studies. The average precision of the overlay was 1.37 mm, with a standard deviation of 0.65 mm. The TRE was less than 2 mm in the majority of studies [15][16][17][18][19][21][22][23][24][25]. Two studies did not quantify the accuracy of their systems, but the surgeons found AR-assisted navigation to be very helpful in effectively targeting lesions [26,27]. Lai et al. and Caversaccio et al. [16,25] reported the best target accuracy (0.55 ± 0.24 mm and 0.7 mm, respectively). Goto et al. [29] showed that AR-assisted navigation was more useful than traditional navigation systems, reporting an overall average score of 4.7 (95% CI) (Figure 5).


AR Surgeons' Perspectives
Thirteen studies have shown significant benefits and positive experiences of using AR-assisted navigation in transsphenoidal surgery. Indeed, Kawamata et al. [15] and Barber et al. [23] highlighted the benefits of knowing the exact surgical orientation, which meant surgeons were able to (i) adequately identify the details of the endoscopic surgery and (ii) avoid delicate structures. Five studies [17,24,27–29] found AR-assisted navigation to be significantly useful for all surgical tasks. In addition, four studies [19,21,22,24] unanimously reported a significant reduction in mental effort and frustration compared to conventional neuronavigation systems. Moreover, Cabrilo et al. [20] pointed out that AR-assisted navigation demanded less mental effort to continuously merge the neuronavigational data with the surgical field.

Augmented Reality in Neurosurgery
Augmented reality (AR), also referred to as 'Enhanced Reality' or simply 'Image Enhancement' (IE), is an advanced technology that superimposes computer-generated virtual objects on the user's view of the real world [1]. Unlike virtual reality (VR), which immerses the user in an entirely computer-generated environment, AR aims to enhance a real-world image with virtual objects or subjects [1–3]. Furthermore, mixed reality (MR) combines VR and AR by projecting virtual, responsive objects into the real world. Although the term 'Virtual Reality' is always distinguished from 'Augmented Reality', the terms 'Mixed Reality' and 'Augmented Reality' are sometimes used interchangeably [30].
In the modern scientific age, the application of these advanced technologies, which allow the augmentation or virtual representation of real images by creating an immersive virtual world defined as the "metaverse", is becoming ever more common in the medical field, including neurosurgery [31]. Indeed, the term "neuroverse" is now commonly used to describe the several roles of such advanced technology in the neurosurgical field, including surgical training, preoperative planning, and intraoperative guidance [3].
In particular, the intraoperative use of AR as an adjunct to conventional neuronavigation systems is increasingly being explored. Although conventional neuronavigation systems represent a widespread and indispensable tool in modern neurosurgical practice, they still have several limitations. First, they display 3D patient imaging data on a 2D display, requiring the surgeon to mentally map the data onto the patient's anatomy [30]. Secondly, these systems repeatedly divert the surgeon's attention from the surgical field, contributing to increased fatigue and operative times, while introducing potential errors related to task-switching [30]. An AR-assisted neuronavigation system, overlaying preoperative and intraoperative patient images directly onto the patient's anatomy, has the potential to overcome these limitations, significantly decreasing the need to shift focus away from the surgical procedure [5].

Can Augmented Reality Improve the Accuracy of Image-Guided Surgery?
Image-guided surgery (IGS) systems are currently used in neurosurgical practice, allowing real-time, intraoperative tracking of the current position of the lesion and/or anatomical features based on preoperative images. In order to use these systems safely during surgery, it is essential to understand the registration error associated with each system to determine the level of confidence that can be placed in it. In this field, Maurer et al. [32] proposed three useful error measurements for analyzing the accuracy of point-based registration methods (Figure 6).
(1) The fiducial localization error (FLE) is the error in determining the position of a fiducial point. (2) The fiducial registration error (FRE) is the distance between the measured position of a fiducial in one space and its counterpart in the other space after registration. (3) The target registration error (TRE) is the distance between corresponding points other than the fiducial points after registration [33,34].
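For illustration only, point-based rigid registration and the two registration error measures above can be sketched in a few lines of Python. This is a generic least-squares (Kabsch) fit; the function names `rigid_register`, `fre` and `tre` are hypothetical and do not correspond to any system reviewed here.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def fre(fid_src, fid_dst, R, t):
    """Fiducial registration error: RMS residual at the fiducials themselves."""
    resid = (fid_src @ R.T + t) - fid_dst
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

def tre(tgt_src, tgt_dst, R, t):
    """Target registration error: residual at points NOT used for registration."""
    return np.linalg.norm((tgt_src @ R.T + t) - tgt_dst, axis=1)
```

Note that a low FRE at the fiducials does not by itself guarantee a low TRE at the surgical target, which is why the studies discussed below report TRE rather than FRE.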
The TRE is the best estimate of navigation accuracy, defining the relationship between the instrument tip and its measured position. Predictably, the lower the TRE, the higher the accuracy. In this field, the general consensus is that the TRE for surgical navigation should be 2 mm or less; moreover, the TRE goal stated by Citardi et al. [35,36] for a next-generation surgical navigation platform would be 1.0–1.5 mm, and ideally 0.6–1.0 mm, especially in ETS, where several critical structures are involved in a remarkably compact anatomical region and landmark identification is critical. Labadie et al. [33], in a comprehensive analysis of the conventional IGS systems commonly used in ETS, found that conventional neuronavigation systems did not consistently achieve a clinically acceptable TRE. Furthermore, even when a TRE of 2 mm is achieved, these systems should be considered a complement to, but not a replacement of, knowledge of the surgical anatomy. Indeed, Snyderman et al. [37] prospectively recorded the TRE measured using the Stryker Navigation System (Kalamazoo, MI, USA) during 50 endoscopic anterior skull base procedures in an academic university setting. The mean error of initial registration was 2.8 mm (range: 1.4 to 7.1 mm). They then attempted to achieve greater accuracy by excluding fiducials with large errors; after excluding two fiducials, the mean final registration error decreased significantly to 1.6 mm, even though large variability remained (range: 0.6 to 3.7 mm). Registration is the process by which the IGS computer matches the preoperative images to the intraoperative surgical volume; in this context, AR navigation systems overlaying 3D-generated images directly onto the endoscopic view have been shown to (i) enable navigation while visualizing sub-surface anatomy and (ii) in some cases significantly reduce the TRE, thereby improving the neuronavigation workflow. Li et al. [21] and Bong et al. [22] provided clear examples of this: both reported a registration accuracy with a mean TRE of less than 1.30 mm. Even more impressive were the results obtained by Lai et al. [25], who achieved a TRE of 0.55 ± 0.24 mm in CBCT image projection, with a median of 0.51 mm and an interquartile range of 0.39–0.68 mm. Nevertheless, it is essential to point out that all the mean TREs reported in the articles included in this systematic review relate to cadavers or experimental models. Thus, the accuracy of AR navigation systems in clinical practice may differ, requiring further, deeper studies.
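The strategy of excluding high-error fiducials and re-registering, described above, can be sketched as a greedy loop. This is a hypothetical illustration built on a generic Kabsch fit (`kabsch` and `prune_fiducials` are made-up names), not the actual algorithm of any navigation system cited here.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def prune_fiducials(src, dst, n_drop=2):
    """Drop the n_drop worst-fitting fiducials, re-fitting after each drop."""
    keep = list(range(len(src)))
    for _ in range(n_drop):
        R, t = kabsch(src[keep], dst[keep])
        resid = np.linalg.norm((src[keep] @ R.T + t) - dst[keep], axis=1)
        keep.pop(int(np.argmax(resid)))   # exclude the fiducial with largest residual
    R, t = kabsch(src[keep], dst[keep])
    return R, t, keep
```

Excluding fiducials trades coverage for fit: the residual at the remaining fiducials falls, but accuracy away from them is not guaranteed to improve, which is consistent with the residual variability reported above.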

Augmented Reality-Assisted Neuronavigation: Workflow
In order to display AR images intraoperatively, most systems require the patient's preoperative imaging (CT, MRI or angiographic scans), which typically involves the automatic or manual segmentation of anatomical structures from the patient's imaging data and the use of 3D modeling software to format the images for the AR system [30]. The 3D image data are then superimposed on the patient's real anatomical structures and can be viewed directly through the eyepiece of the microscope, a separate endoscopic or microscopic screen, headset displays, HoloLens smart glasses, a tablet or a smartphone. A tracking system, either electromagnetic or optical, is usually used to continuously monitor position and orientation in real time (Figure 7).
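The final overlay step of this workflow, mapping a tracked 3D model point into the live 2D video frame, reduces to a standard pinhole camera projection. The sketch below is a generic illustration, not any vendor's pipeline; it assumes a pre-calibrated intrinsic matrix K and a tracker-supplied world-to-camera transform (R, t), and `overlay_pixels` is a hypothetical name.

```python
import numpy as np

def overlay_pixels(pts_world, R, t, K):
    """Project 3D model points (world/tracker frame) to pixel coordinates.

    R, t : world -> camera rigid transform reported by the tracking system.
    K    : 3x3 intrinsic matrix from endoscope/microscope camera calibration.
    """
    pts_cam = pts_world @ R.T + t        # move points into the camera frame
    uv = pts_cam @ K.T                   # homogeneous pinhole projection
    return uv[:, :2] / uv[:, 2:3]        # perspective divide -> (u, v) pixels
```

Drawing the resulting (u, v) points over each video frame, refreshed with every tracker update, yields the overlay; errors in the calibration, the tracker pose or the registration all appear to the surgeon as overlay offset, i.e., as TRE.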

Augmented Reality-Assisted Neuronavigation in Transsphenoidal Surgery (TS)
TS is currently considered superior to conventional OTCS approaches due to more favorable post-operative outcomes and less collateral tissue damage, owing to the shorter operative route and smaller surgical site [11,30,38]. Despite the increasingly precise aim of less-invasive surgery, the technical issues related to intraoperative anatomical assessment and potential complications have largely persisted. In this field, AR-assisted neuronavigation-based TS might overcome these limitations, improving anatomy identification and intraoperative guidance.

Microscope-based AR was introduced to the neurosurgical community through surgical microscopes such as the robotic multiple-coordinate manipulator microscope (Zeiss, Oberkochen, Germany) with an integrated heads-up display (HUD), which became commercially available in the mid-1990s [24,39–41]. Currently, modern surgical microscopes combined with AR-assisted navigation systems are expanding their application across different cranial neurosurgical approaches [24]. As soon as these innovative surgical microscopes became clinically available, they were applied to skull base surgery, and particularly to transsphenoidal approaches [30], as experienced by Carl et al. [24].
TS using a HUD microscope was performed in 47 consecutive procedures and compared with a control group who underwent surgery with classic MTS. In the surgeons' experience, enhanced AR visualization improved their three-dimensional perception compared to the standard display, particularly in a case of recurrent Cushing's disease, where intuitive AR-assisted surgical orientation simplified the removal of scar tissue and allowed easier localization of the recurrent adenoma. Carl et al. demonstrated that the overall clinical accuracy of the AR application depends on the navigation accuracy and on microscope calibration; indeed, the TREs ranged from 0.55 to 4.78 mm in the fiducial-based registration group and from 0.21 to 2.07 mm in the iCT-based registration group. A similar study, with similar results, was carried out by Bopp et al. [28], in which 81 patients underwent surgery with classic MTS while 84 patients underwent HUD-microscope-supported TS. The overall clinical accuracy of the AR application, in this case too, was significantly higher in the iCT-based registration group (TRE, 0.76 ± 0.33 mm) than in the fiducial-based registration group (TRE, 1.85 ± 1.02 mm). The application of AR clearly enhanced intraoperative orientation in the aforementioned cases, especially when intraoperative landmarks were missing due to previous surgery or particular anatomical variants; it contributed to patient safety and also increased the surgeon's comfort [28]. Moreover, Cabrilo et al. have attempted to demonstrate the usefulness and ease of application of augmented reality-based neuronavigation through a case example of a recurrent clival chordoma; unlike previous cases, they reported how microscope-based AR overlays 3D images directly onto the microscope's eyepiece, guiding the surgeon to find a cleavage plane amid multilayer fibrosis from previous surgeries [20].

Augmented Reality's Application in Endoscopic Transsphenoidal Surgery (ETS)
Endoscopic transsphenoidal skull base surgery requires a simultaneous picture of the relationship between the lesion and the surrounding anatomical structures, including cranial nerves and critical vascular structures, derived from the limited landmarks enclosed in the sphenoid sinus; therefore, image-guided surgery (IGS) systems are often used to facilitate the real-time anatomical orientation of these structures [42,43]. However, there may be a mismatch between the anatomical topography in the surgical field and the 2D images on the navigation monitor, as described above for conventional navigation systems. To deal with this, over the past few decades, AR has also been explored as a tool to improve endoscopic navigation. Indeed, AR can be used to augment the live video stream from the endoscope by superimposing computer-generated image data from pre- or intraoperative radiological examinations, such as MRI or CT scans, onto the real-world view of the endoscope [43]. Most of the studies included in this systematic review investigated the potential role of AR in ETS by superimposing 3D virtual objects on the endoscopic view [1,17,19,21,22,25–27].

AR's Application in ETS (Cadavers or Experimental Models)
Some studies have evaluated the accuracy of different AR-assisted neuronavigation systems on cadaver specimens or experimental models (i.e., phantoms) before testing them on real patients. For instance, to evaluate the accuracy and performance of AR-assisted navigation systems during ETS, Bong et al. and Lai et al. used a phantom to simulate the workflow that might occur in a real surgical scenario [22,25]. In the first case, Bong's group [22] tested AR-assisted ETS with an optical system for endoscopic calibration and tracking during two sets of experiments, first measuring the spatial errors of a virtual object superimposed on the endoscopic view, and then performing simulated surgical tasks on a phantom model to compare the performance of the AR navigation system with the conventional endoscopic procedure. Using the AR neuronavigation system, they showed that the mean spatial error of the AR was ~1 mm, that the number of error events decreased from 4.86 ± 1.22 to 1.71 ± 1.38 (p = 0.0073), and that the duration of error events decreased from 2.00 ± 0.75 s to 0.58 ± 0.61 s (p = 0.0013), with statistically significant differences. Nevertheless, even if the mean total operation time was slightly longer with the AR-based navigation system (18.26 ± 7.82 s) than with the conventional endoscopic method (13.75 ± 6.30 s), these experiments showed that AR guarantees a more precise and safer performance. In the second case, an augmented reality surgical navigation system (ARSN) with 3D cone beam computed tomography (CBCT) superimposed on the endoscope's view was used by Lai et al. [25]. In this case too, the navigation system included an optical tracking system (OTS) with four video cameras, but these were embedded in the flat detector of the motorized C-arm used for image acquisition. The accuracy of the CBCT image co-registration was tested using a custom-made grid with embedded 3D spheres, and the overall TRE was 0.55 ± 0.24 mm, with a median of 0.51 mm and an interquartile range of 0.39–0.68 mm. In either case, the spatial errors were within the acceptable range for skull base surgery [39]. Li et al. [21], instead, conducted experiments on both a skull phantom and a cadaver to determine the display effect and accuracy of their AR-assisted navigation system. In the first phase, they used a polyvinyl chloride (PVC) skull phantom with an openable top and six non-coplanar reference markers (located bilaterally in the parietal, frontal, temporal and mastoid process regions) and performed a CT scan to obtain imaging data. Then, nine anatomical markers were identified with a calibrated probe and transferred to the workstation to obtain 3D coordinates. The spatial distance between the two sets of coordinates (the virtual image and the corresponding real position) was calculated as the target registration error (TRE), giving an average system TRE of 1.19 ± 0.42 mm. Secondly, they performed a cadaver head experiment that demonstrated the effectiveness of the display of the AR-assisted navigation system in an anatomical and structural environment. Indeed, a comparison of the performance of the AR-assisted navigation system with the conventional one during the simulated surgeries showed average TREs of 1.28 ± 0.45 and 1.32 ± 0.41 mm, respectively, and average operative times (OT) of 88.27 ± 20.45 and 104.93 ± 24.61 min, respectively, demonstrating the superiority of the AR-assisted neuronavigation system in terms of timing as well, contrary to what Bong et al. had previously suggested [22]. With regard to cadaver dissection studies, two consecutive studies by Dixon et al. [18,19] using AR-assisted ETS (an ART-IGS system) on cadaver specimens showed a significant decrease in the mental and temporal demand, effort and frustration of surgeons, as measured by the National Aeronautics and Space Administration Task Load Index (NASA-TLX), in the ART-IGS group compared to the control group (p < 0.02). In all cases, the TRE was between 1 mm and 1.8 mm. Furthermore, Prisman et al. [17] practiced surgical tasks such as uncinectomy, ethmoidectomy, sphenoidectomy/pituitary resection and clival resection on a cadaveric specimen during an endoscopic surgical approach to the skull base. They found the preoperative contouring of neurovascular structures, combined with the ability to control the distance of the surgical plane from the tip of the endoscope, particularly useful, with a mean calculated TRE < 2 mm. In all these cases, the cadaver dissection studies were well below the currently accepted TRE of 2.0 mm and were also in line with the goal of Citardi et al. [35,36], who stated that the TRE target for a next-generation surgical navigation platform would be 1.0–1.5 mm, and ideally 0.6–1.0 mm.

AR's Application in ETS (Vivo)
Once the accuracy of AR-assisted ETS systems had been assessed in both phantom and cadaveric experimental models, it became possible to assess their accuracy in vivo. In the studies included in this systematic review, AR-assisted ETS systems were found to be particularly useful in cases where the classic anatomy had been compromised, and even in children, where incomplete pneumatization of the sphenoid sinus, bone thickness and the distance between the carotid arteries may affect surgical planning. In such cases, endoscope-assisted skull base surgery combined with navigation systems is even more important. Indeed, Kawamata et al. [15] used an endoscopic AR-assisted navigation system in which 3D virtual images of 12 consecutive pituitary tumors and nearby anatomical structures were superimposed on real-time live endoscopic images. In their experience, the colors of the wireframe images of the tumor changed according to the distance between the tip of the endoscope and the tumor, giving the surgeon an accurate current orientation and a warning of delicate structures. The error of the superimposed wireframe images was always less than 2 mm. In addition, a few studies have investigated and evaluated the use of this type of advanced technology in pediatric case series. First Caversaccio et al. [16] and then Pennacchietti et al. [27] reported their first experiences with AR-assisted navigation in endoscopic skull base surgery, including in a pediatric case series. In their experience, simultaneous visualization of the endoscope position on standard axial, sagittal and coronal MRI views, as well as the trajectory-aligned reconstruction of the 3D MRI images on the navigation screen, was able to overcome the morphological anatomical variability in all pediatric cases. Finally, Zeiger et al.
described the first clinical series of complex skull base pathologies treated using a novel mixed reality platform (EndoSNAP, Surgical Theatre, Mayfield Village, Ohio).Firstly, 3D digital models of the patient's anatomy using EndoSNAP were added to Brainlab's Cranial Navigation software (Brainlab, Munich, Germany), and then fiducials markers were added to the endoscope.A dynamic image of the 3D reconstruction was displayed alongside a corresponding endoscopic camera view, matching the real view with the patient's anatomy reconstruction.Although the accuracy of the system was not quantified, surgeons found EndoSNAP particularly helpful in cases where atypical pathologies or altered anatomical relationships were found [26].
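The distance-coded wireframe described by Kawamata et al. can be illustrated with a minimal sketch. The thresholds and color choices below are hypothetical values chosen for the example, not those of the original system:

```python
import numpy as np

def wireframe_color(endoscope_tip, tumor_centroid, warn_mm=10.0, danger_mm=5.0):
    """Choose an overlay color from the endoscope-tip-to-tumor distance,
    in the spirit of a distance-coded wireframe. The thresholds are
    illustrative, not those of any published system."""
    d = float(np.linalg.norm(np.asarray(endoscope_tip, dtype=float)
                             - np.asarray(tumor_centroid, dtype=float)))
    if d < danger_mm:
        return "red", d      # tip is very close to the tumor: strong warning
    if d < warn_mm:
        return "yellow", d   # approaching: caution
    return "green", d        # safe working distance

color, distance = wireframe_color([0, 0, 0], [0, 0, 12.0])
print(color, distance)  # green 12.0
```

In a real navigation system this distance would be recomputed for every tracked endoscope pose and used to recolor the rendered tumor wireframe frame by frame.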

Limitations and Future Directions
In the current systematic review, augmented reality-assisted transsphenoidal surgery (TS) has demonstrated significant utility in cadaveric dissection, experimental phantom studies, and in vivo experiments. However, the presence of diverse studies utilizing different augmented reality techniques in both microscopic and endoscopic TS may have impacted our conclusions. Notably, most studies were not in vivo, potentially leading to an exaggerated perception of the technology's efficiency and effectiveness. Moreover, these studies failed to clearly define their prognoses and outcomes.
The transition from successful controlled experiments to practical clinical applications is challenging due to patient variability and the complexities of real-time surgery. While most studies in this review achieved an acceptable target registration error, a limitation is the scarcity of studies comparing the accuracy of AR-assisted neuronavigation with conventional methods, i.e., lacking a 'control group'. Consequently, we cannot definitively establish the superiority of AR-assisted neuronavigation over conventional approaches.
Therefore, considering our findings and the highlighted limitations, further research, especially in vivo studies, is imperative to accurately determine the efficiency of applying this advanced technology in TS, particularly in the context of neuronavigation-based transsphenoidal surgery. Additionally, a comprehensive validation of its safety and efficacy is warranted before widespread clinical adoption.

Figure 2. A flow chart of the systematic review's key stages.

Figure 3. Report of the most reported pathologies.

Figure 5. Histogram showing the target registration error (TRE) in millimeters in the reported studies.

Figure 6. Types of Registration Errors. (1) The fiducial localization error (FLE) is a random, unknown error that occurs both in the location of fiducials on a CT scan and in the location of fiducials on the patient in the operating room. (2) The fiducial registration error (FRE) is the distance between the measured position of a fiducial in one space and its counterpart in another space after registration. (3) The target registration error (TRE) is the distance between corresponding points other than the fiducial points after registration [33,34].
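These quantities can be illustrated with a short sketch: a point-based rigid registration (the standard SVD, or Kabsch/Horn, solution) is fitted to a set of fiducials, the FRE is computed on those fiducials, and the TRE is computed at a separate target point not used in the fit. All coordinates and noise levels below are invented for the example:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points via the standard SVD (Kabsch/Horn) solution."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T            # sign correction guards against reflections
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials (mm): positions in image space, and the same
# positions on the patient corrupted by a small localization error (FLE).
rng = np.random.default_rng(0)
image_fids = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
patient_fids = image_fids + rng.normal(scale=0.4, size=image_fids.shape)

R, t = rigid_register(image_fids, patient_fids)

# FRE: RMS distance between the registered fiducials and their counterparts.
mapped = image_fids @ R.T + t
fre = np.sqrt(np.mean(np.sum((mapped - patient_fids) ** 2, axis=1)))

# TRE: error at a surgically relevant point NOT used in the registration
# (here the true target is assumed to coincide in both spaces).
target = np.array([25.0, 25.0, 25.0])
tre = np.linalg.norm((R @ target + t) - target)

print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```

Note that the FRE measures only how well the fiducials themselves align; the clinically meaningful figure is the TRE at the surgical target, which is why the studies above report TRE against the 2.0 mm threshold.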

Figure 7. Augmented Reality Workflow. (1) 3D Imaging: combine preoperative MRI, CT or angiography scans to create a 3D model of the patient's anatomy. This model serves as the foundation for AR visualization. (2) Registration & Segmentation: generally fiducial-based or automated registration and segmentation. (3) Tracking: utilize a tracking system to continuously monitor position and orientation in real time. This can be achieved using optical tracking markers or electromagnetic tracking. (4) Visual Display of the AR-based Navigation System: a heads-up display (HUD) in the microscope eyepiece or a separate screen; 3D virtual images are superimposed onto live endoscopic or microscopic images.
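The core of steps (3) and (4) of this workflow can be sketched in code: given a tracked camera pose and a segmented structure in patient coordinates (from steps 1-2), the overlay reduces to a pinhole-camera projection of the model into the live image. The intrinsics, pose and contour points below are hypothetical values for illustration only:

```python
import numpy as np

def project_points(model_pts, R_cam, t_cam, K):
    """Project 3D points (patient coordinates, mm) into the camera image
    using the tracked camera pose (R_cam, t_cam) and intrinsics K."""
    cam = model_pts @ R_cam.T + t_cam   # patient frame -> camera frame
    px = cam @ K.T                      # pinhole projection
    return px[:, :2] / px[:, 2:3]       # divide by depth -> pixel coords

# Hypothetical intrinsics for a calibrated endoscope camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Step (3): the tracker reports the camera pose for every video frame.
R_cam = np.eye(3)
t_cam = np.array([0.0, 0.0, 100.0])     # camera 100 mm from the target

# Steps (1)-(2): a segmented tumor contour in patient coordinates.
tumor_contour = np.array([[ 0.0,  0.0, 0.0],
                          [10.0,  0.0, 0.0],
                          [ 0.0, 10.0, 0.0]])

# Step (4): draw these pixel coordinates over the live video frame.
pixels = project_points(tumor_contour, R_cam, t_cam, K)
print(pixels)  # pixel coordinates: (320, 240), (400, 240), (320, 320)
```

In a real system the pose update and reprojection run continuously, so the virtual contour follows the anatomy as the endoscope or microscope moves.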

Table 1. Included studies evaluating the use of AR in transsphenoidal surgery.