Design and Testing of Augmented Reality-Based Fluorescence Imaging Goggle for Intraoperative Imaging-Guided Surgery

The different optical pathways between a near-infrared camera and the user's eye limit the use of existing near-infrared fluorescence imaging systems for tumor margin assessment. By utilizing an optical system that precisely matches the near-infrared fluorescence image to the optical path of visible light, we developed an augmented reality (AR)-based fluorescence imaging system that provides users with a fluorescence image matched to the real field, without requiring any additional algorithms. Commercial smart glasses, a dichroic beam splitter, mirrors, and a custom near-infrared camera were employed to build the proposed system, and a custom mount was designed for each component. After its performance was assessed in the laboratory, preclinical experiments involving tumor detection and lung lobectomy in mice and rabbits using indocyanine green (ICG) were conducted. The results showed that the proposed system provided a stable fluorescence image matched to the actual site. In addition, the preclinical experiments confirmed that the proposed system could be used to detect tumors using ICG and to evaluate lung lobectomies. The AR-based intraoperative smart goggle system could detect fluorescence images for tumor margin assessment in animal models without disrupting the surgical workflow in an operating room. Additionally, it was confirmed that, even when the worn system itself was tilted or shifted, the fluorescence image consistently matched the actual site.


Introduction
During cancer surgery, although preoperative imaging techniques such as computed tomography (CT) and positron emission tomography (PET) have a meaningful impact on preoperative planning, the surgeon's eyes and hands remain the decisive factors [1][2][3]. It can be difficult to discriminate between malignant and normal tissue in actual clinical practice [4,5]. This may result in incomplete resection or unnecessary resection of normal tissue [6]. Recently, indocyanine green (ICG)-based near-infrared (NIR) fluorescence imaging technology has been actively used to observe various types of cancer [7][8][9][10][11] and lymph nodes [12,13], as well as important structures, blood vessel formation [14,15], and blood perfusion [16], in real time during surgery. Furthermore, in previous studies, our group has employed ICG for applications such as sentinel lymph node (SLN) detection [17][18][19], assessment of lung segments [14], gastric conduit perfusion [20], and lung cancer detection [10]. However, as these systems display information on a remote monitor, surgeons are required to look at the monitor in order to identify the NIR fluorescence image [21][22][23]. This interrupts the surgeons' attention, thereby increasing the probability of errors and the overall surgery time [24,25].
To overcome the abovementioned limitations, intraoperative systems with a head mount display (HMD) were developed (Table 1). Most existing HMDs are binocular and provide an immersive viewing experience [26]. Video see-through HMDs use a small display to show images from a computer or a camera and thus create an immersive, virtual reality environment. However, the lack of direct visual access to the scene interferes with the surgeons' ability to assess and resect tissue spontaneously during surgery. Optical see-through HMDs are based on augmented reality (AR); in these devices, see-through displays are used to project images directly in the user's field-of-view (FOV) [27][28][29], creating an AR environment [26,30]. Using such a system, the user can view both the projected image and the object itself. Additionally, it supports visual tissue assessment and resection [31]. However, previous optical see-through systems are composed of optoelectrical components, making them bulky and difficult to wear. These features reduce the surgeon's manipulability and block the surgeon's vision. Furthermore, the camera and the eye occupy different positions and thus have different optical pathways. Therefore, additional components or algorithms were needed to match the user's field of view with the fluorescence image from the camera. In this study, we developed an augmented reality-based fluorescence imaging (ARFI) goggle system that synchronizes the user's view and the NIR camera's view, in order to decrease related costs and improve the efficiency of the system. To assess its clinical applicability, the developed system was tested through preclinical experiments using cancer models for guiding tumor resection in mice and rabbits.

Hardware Design for the ARFI System
The prototype of the ARFI system is presented in Figure 1. The proposed ARFI system consists of a customized NIR camera with an OS05A20 sensor (CMOS, Omnivision, Santa Clara, CA, USA) for fluorescence imaging, commercial smart glasses (Moverio BT-300, Epson, Suwa-shi, Nagano, Japan) for real-time display of the fused image, and an optical system that utilizes the smart glasses' curved mirror. The CMOS camera acquires fluorescence images with a resolution of 2688 × 1944 pixels at a frame rate of 60 frames per second. The ARFI system was constructed by attaching the NIR camera onto the smart glass hardware, as shown in Figure 1. To acquire the fluorescence images, an 814-870 nm bandpass filter (FF01-842/56-25, Semrock, Rochester, NY, USA) was attached in front of the NIR camera to block all incoming light except the fluorescence emission from ICG. The smart glass system was equipped with two display panels (0.43-inch wide panels), each with a resolution of 1280 × 720 pixels. After processing the acquired images, the surgical and fluorescence images could be merged on the display. The optical system comprised a 20 mm square dichroic beam splitter (DBS, #62-628, Edmund Optics, Barrington, NJ, USA), a 20 mm protected-gold-coated 90° specialty mirror (#65-849, Edmund Optics, Barrington, NJ, USA), and mounts for the DBS, mirror, and NIR camera. Each mount was modeled using a 3D modeling program (Fusion 360, Autodesk, San Rafael, CA, USA) and printed with a 3D printer. These mounts could be shifted easily because they were connected via rods (SR6, SR3-P4, Thorlabs, Newton, NJ, USA). To co-register the fluorescence image with the surgical scene, the DBS mount employed a small spring, screws, and a ball so that it could rotate about three axes; it could also be adjusted horizontally to match the position of the user's eye. The NIR camera mount was grooved to allow for heat release, and it could be adjusted vertically.
The central mirror mount secured the triangular mirror and the smart glass; it also linked the other mounts via the rod, as shown in Figure 1. Owing to these mounts, the system could be customized depending on the user; this increased the accuracy of the fluorescence image registration.
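As a quick sanity check on the filter specification above, the 814-870 nm range follows directly from the part designation (FF01-842/56: 842 nm center wavelength, 56 nm bandwidth). A minimal sketch in Python; the helper name is our own:

```python
def passband(center_nm, fwhm_nm):
    """Edges of a bandpass filter, given its center wavelength and
    bandwidth (full width at half maximum), both in nanometers."""
    half = fwhm_nm / 2.0
    return center_nm - half, center_nm + half

# FF01-842/56: 842 nm center, 56 nm bandwidth -> 814-870 nm passband,
# which passes the ICG emission while rejecting the excitation light.
low, high = passband(842, 56)  # (814.0, 870.0)
```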

Operational Principle of the ARFI System
The schematic of the ARFI system is shown in Figure 2a. The excitation light source for ICG was prepared by attaching a 790 nm short-pass filter (FF01-790/SP-25; Semrock, Rochester, NY, USA) in front of an LED source (M780L3, Thorlabs, Newton, NJ, USA) with a 780 nm peak, and the light was focused on the surgical field. This filter blocks excitation light at wavelengths close to the ICG emission band, so that only the fluorescence emitted by ICG, rather than excitation light reflected from the sample, is detected. White light from an array of four Luxeon III white light diodes (Lumileds, San Jose, CA, USA) was used; all NIR wavelengths were filtered out from this light (E680SP, Chroma). The generated fluence rates for NIR excitation and white light were 0-5 mW/cm² and 0-1 mW/cm², respectively. The distance between the source and the surgical field was approximately 250 mm. The DBS transmitted visible light, enabling the surgeon to view the actual surgical scene, while reflecting the near-infrared rays entering along the same optical path as the surgeon's view. These near-infrared rays are reflected again by the mirror toward the NIR camera, which thus acquires the fluorescence images. In this way, NIR images were acquired along the same optical path as the actual surgical scene through this optical separation system and were then presented to the surgeon. The major benefit of two images sharing the same optical path is that the NIR and actual images can be blended easily. As demonstrated in Figure 2b, the entire system operates as follows. The typical working distance for acquiring a focused fluorescence image is approximately 500 mm. Fluorescence images acquired from the NIR camera were delivered to a computer through a USB 3.0 port and subjected to post-processing. Subsequently, they were transmitted to the smart glass via Miracast [34], a wireless image transmission technique, and were then projected to the eyes in real time.
As the real-time projected image is overlaid on the actual field, users can obtain useful information, including the margin of the cancer, in real time. The acquisition of fluorescence signals from the NIR camera and the post-processing were implemented using Visual Studio 2017 (Microsoft, Redmond, WA, USA), C++, and the OpenCV library. Camera grabbing, which refers to image acquisition via the NIR camera, was performed using OpenCV library functions. Signals above a specific threshold value were retained, removing background noise from the raw fluorescence images. Then, to improve the contrast with the surrounding area, the NIR fluorescence image was pseudocolored in green and overlaid with 100% transparency over the color video image of the same surgical field. During surgery, the surgeon cannot realign the hardware immediately; therefore, the software can be used to manually adjust the magnification, reduction, and rotation of the fluorescence image to match the actual field.
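The post-processing steps described above (thresholding out background noise, pseudocoloring the surviving NIR signal in green, and overlaying it on the color video) can be sketched as follows. This is a minimal Python illustration of the idea, not the authors' C++/OpenCV implementation; the binary green overlay, the threshold value, and the function name are our own simplifications.

```python
# Sketch of the described post-processing: threshold the raw NIR frame to
# suppress background, pseudocolor the remaining signal in green, and
# overlay it on the visible (surgical-scene) frame. Images are modeled as
# nested lists of 8-bit values; the threshold of 40 is an arbitrary example.

def overlay_fluorescence(visible_rgb, nir_gray, threshold=40):
    """Return a fused RGB frame: pixels whose NIR intensity exceeds the
    threshold are painted green; all other pixels keep the visible image."""
    fused = []
    for y, row in enumerate(visible_rgb):
        out_row = []
        for x, rgb in enumerate(row):
            if nir_gray[y][x] > threshold:   # fluorescence present
                out_row.append((0, 255, 0))  # pseudocolor in green
            else:                            # background: keep the scene
                out_row.append(rgb)
        fused.append(out_row)
    return fused

# Tiny 2 x 2 example: only the top-left pixel carries fluorescence.
visible = [[(10, 10, 10), (20, 20, 20)],
           [(30, 30, 30), (40, 40, 40)]]
nir = [[200, 5],
       [0, 12]]
fused = overlay_fluorescence(visible, nir)  # fused[0][0] == (0, 255, 0)
```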

In Vivo Animal Studies
This study was approved by the Institutional Animal Care and Use Committee of Korea University College of Medicine (IACUC approval number: KOREA-2016-0228). Six-week-old C57BL/6 mice (20-25 g; Orient Biotech, Seongnam-si, Gyeonggi-do, Korea) and female New Zealand white rabbits (2.5-3 kg; DooYeol Biotech Co Ltd., Seoul, Korea) were used. To allow the animals to adapt to their environment, all rabbits were housed in individual cages with freely available food and water for 1-2 weeks, in accordance with humane animal care protocols. All animals were anesthetized intramuscularly before the experiments with 5 mg/kg of xylazine (Rompun, Bayer Korea Inc., Seoul, Korea) and 5 mg/kg of alfaxalone (Alfaxan, Jurox Pty Ltd., Hunter Valley, NSW, Australia).

Mouse Subcutaneous Tumor Model
Lewis lung carcinoma (LLC) cells were used to establish the subcutaneous tumors in the mice. For the in vivo experiments, LLC cells (20 µL of 2 × 10⁶ cells/mL) were injected subcutaneously into the hind legs. The tumor model was established after 2-3 weeks. ICG (5 mg/kg; Daiichi-Sankyo Co., Tokyo, Japan) was injected into the tail vein. The fluorescence signal was observed using the ARFI system 12 h after injection.

Rabbit Lung Cancer Model
A rabbit lung cancer model was established as per a previous study [35]. Briefly, VX2 single-cell suspensions in Matrigel solution were directly injected into the rabbit's lung using a 28-gauge needle. The VX2 lung tumor model was established 2-3 weeks after administration, and the in vivo experiments were then performed. ICG (5 mg/kg) was injected into the ear vein, and the fluorescence signal was observed using the ARFI system 12 h after injection. Normal rabbits were used for the detection of the intersegmental line. After ligating the right middle lobar pulmonary artery and vein in each rabbit, 0.6 mg/kg of ICG (n = 3 for each group) was injected into the ear vein. The fluorescence signal was observed using the ARFI system after injection.

System Evaluation
FOV tests and image-matching ratio tests were conducted to evaluate the performance of the proposed system. First, as shown in Figure 3a, a 128 µM ICG solution was prepared and applied to a cotton swab. The light source for exciting the ICG was prepared by attaching a 790 nm short-pass filter (FF01-790/SP-25; Semrock, Rochester, NY, USA) at the front of an LED source (M780L3, Thorlabs, Newton, NJ, USA) with a 780 nm peak. Figure 3b shows the results of the FOV test. The blue rectangles denote the FOV of the system, and the green light represents the fluorescence signal. The results showed that the system provides consistently matched images; that is, the fluorescence image always remains matched to the ICG sample, even when the sample is moved. This indicates that the direction in which the user wears the system and moves their head does not influence the performance of the system. Additionally, as Figure 4 shows, we calculated the matching ratio, which indicates how closely the real-field and fluorescence images are matched. First, the fluorescence signal was changed to orange to increase the contrast with the green ICG sample. Subsequently, to calculate the matching percentage, the two images were cropped to the same size in MATLAB (Mathworks, Natick, MA, USA) and converted to grayscale. Each pixel intensity above a threshold was set to a value of 1 so that the difference between the real field and the fluorescence image could be easily interpreted. Thereafter, the matching percentage was calculated using the corr2 function. We calculated the percentage at five positions in the system's FOV and averaged the obtained values. The result showed an approximately 95.3% match.
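The matching-percentage computation described above can be reproduced with a small routine: MATLAB's corr2 returns the two-dimensional correlation coefficient between two equal-sized grayscale images. A hedged Python equivalent, in which the binarization threshold and the tiny test images are illustrative only:

```python
import math

# Sketch of the matching-percentage computation: binarize both images and
# compute their 2-D correlation coefficient, as MATLAB's corr2 does.

def corr2(a, b):
    """Two-dimensional correlation coefficient between two equal-sized
    grayscale images given as nested lists (equivalent to MATLAB corr2)."""
    vals_a = [v for row in a for v in row]
    vals_b = [v for row in b for v in row]
    mean_a = sum(vals_a) / len(vals_a)
    mean_b = sum(vals_b) / len(vals_b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(vals_a, vals_b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in vals_a)
                    * sum((y - mean_b) ** 2 for y in vals_b))
    return num / den

def binarize(img, threshold):
    """Set every pixel above the threshold to 1 and the rest to 0, as done
    before computing the matching percentage."""
    return [[1 if v > threshold else 0 for v in row] for row in img]

# Illustrative example: two masks that agree on three of four pixels.
real_field = binarize([[200, 10], [180, 190]], threshold=100)
fluorescence = binarize([[210, 5], [175, 20]], threshold=100)
match = corr2(real_field, fluorescence)  # 1.0 would be a perfect match
```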

Tumor Detection Using ARFI System in Mouse Tumor Model

The mice had a subcutaneous tumor in the left thigh, with a mean tumor diameter of 0.5 ± 0.2 cm (range 0.4-0.6 cm). The tumor was successfully detected through the NIR fluorescence signal in all mouse models, as presented in Figure 5. In addition, only the tumor and injection sites were highlighted by the ARFI system, while no signal appeared in the surrounding scene, which indicates that the ARFI system can identify tumor margins during a surgical procedure.

Tumor Detection Using ARFI System in Rabbit Lung Tumor Model
The rabbit lung tumor model was successfully established in all four rabbits. The mean tumor diameter was 0.8 ± 0.3 cm. In all the models, we successfully detected the lung tumor through the NIR fluorescence signal. As shown in Figure 6, in the in vivo and ex vivo images captured by the ARFI system, the NIR fluorescence matched the lung tumor sites.

Furthermore, we clarified whether the ARFI system could evaluate the blood flow distribution in the lung lobe in real time, as shown in Figure 7. When ICG was injected intravenously, the resection area of the lung lobe in which the blood vessel had been ligated was easily distinguishable using the ARFI system. Consequently, a surgeon can easily resect the exact area.

Discussion
This study was conducted to develop an intraoperative fluorescence imaging system using the AR technique and to validate the performance of this system through laboratory-level and preclinical studies. The proposed ARFI system was developed to address the FOV mismatch and inconvenience associated with existing fluorescence navigation systems. We devised a new optical system to solve these problems; this system exactly matches the fluorescence pathway with that of visible light. Consequently, the user can view the fluorescence image directly on the surgical field, without having to look at a computer display. Furthermore, the proposed ARFI system was able to guide complete tumor resection in mice and rabbits.
One of the main advantages of the proposed system is that it avoids complicated image-matching algorithms. Previous studies on see-through HMD systems have used two cameras to separate the visible light and the NIR fluorescence light [29,36,37]. Therefore, the camera angles of the two separately acquired images could not be directly aligned. If this issue is not addressed, the projected fluorescence image can cause severe eye fatigue, as the user must switch focus between the real image and the fluorescence image; therefore, an additional image-processing algorithm was needed [38]. In contrast, our system keeps the fluorescence image consistent with the actual field by using mounts that allow precise mechanical alignment, together with simple software that adjusts the size and rotation of the supplied fluorescence image. Furthermore, the image remains aligned even when the wearer moves or the glasses are tilted. This advantage was confirmed not only at the laboratory level but also in several preclinical studies, including mouse tumor models and rabbit lung tumor models (Supplementary Video S1). The pre-use calibration feature for the eye positions of different individuals provides consistent images during use, which ensures the stability required for an actual operating room. This feature potentially allows for a quicker and safer procedure and could consequently enhance patient safety.
Despite these achievements, this study has some limitations. Although the effectiveness of tumor detection in animal models was confirmed, accurate detection of deep lesions has not yet been achieved. In addition, ICG can only be detected up to 1 cm below the skin; therefore, certain recording positions may not be usable [39]. In this study, we detected steady-state fluorescence using a CMOS camera. However, steady-state fluorescence intensity has some limitations, such as photobleaching of the NIR fluorophore, phototoxicity, and changes in fluorescence collection [40]. Therefore, a sufficiently high fluence rate is expected to be required for sensitive tumor detection. Additionally, the system evaluation confirmed that the ARFI system has a high image-matching ratio. However, since the matching ratio between the visible and fluorescence images can vary depending on the ICG concentration and fluence rate, the optimal ICG concentration and fluence rate are required to obtain an accurate fluorescence image during surgery (Supplementary Note S1). Moreover, since our system does not have a visible-fluorescence image alignment algorithm, the visible and fluorescence images were matched manually. Therefore, a slight error inevitably occurs, and an additional algorithm that automatically registers the two images would be required for a perfect correlation. Finally, although the tumor was confirmed through biopsy, tumor margin estimation should be considered for minimal incision and minimally invasive surgery [41].
Furthermore, the current system uses a ready-made see-through HMD primarily designed for entertainment purposes. Although this type of display is useful for demonstrating feasibility, we plan to develop a customized and ergonomic display unit optimized for medical imaging.

Conflicts of Interest:
The authors declare that there is no conflict of interest regarding the publication of this paper.