Human Middle Ear Anatomy Based on Micro-Computed Tomography and Reconstruction: An Immersive Virtual Reality Development

Royal Prince Alfred Institute of Academic Surgery, Sydney Local Health District, Sydney, NSW 2050, Australia
Vestibular Research Laboratory, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
Department of Head and Neck Surgery, Chris O’Brien Lifehouse Cancer Centre, Sydney, NSW 2050, Australia
Sydney Medical School, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
Author to whom correspondence should be addressed.
Osteology 2023, 3(2), 61–70
Submission received: 14 November 2022 / Revised: 8 March 2023 / Accepted: 9 May 2023 / Published: 23 May 2023


Background: For almost a decade, virtual reality (VR) has been employed in otology simulation. The realism and accuracy of traditional three-dimensional (3D) mesh models of the middle ear derived from clinical CT have suffered from low resolution. Although micro-computed tomography (micro-CT) imaging overcomes these resolution issues, its use in VR platforms has been limited by high computational requirements. The aim of this study was to optimize a high-resolution 3D human middle ear mesh model for viewing and manipulation in an immersive VR environment using an HTC VIVE headset (HTC and Valve Corporation, USA), enabling a seamless anatomical viewing experience in VR while preserving anatomical accuracy. Methods: A high-resolution 3D mesh model of the human middle ear was reconstructed from micro-CT data with 28 μm voxel resolution. The model was optimised by balancing surface model polygon count against file size, loading time, and frame rate. Results: The optimized middle ear model and its surrounding structures (polygon count reduced from 21 million to 2.5 million) could be uploaded and visualised in immersive VR at 82 frames per second, with no VR-related motion sickness reported. Conclusion: High-resolution micro-CT data can be visualized in an immersive VR environment after optimisation. To our knowledge, this is the first report of overcoming this translational hurdle in middle ear applications of VR.

1. Introduction

The middle ear is a highly complex structure that plays a critical role in the human auditory system. Gaining a thorough understanding of its intricate and variable anatomy is therefore essential for diagnosing and treating a range of ear-related conditions. Advances in surgical techniques, such as endoscopic ear surgery, have made a detailed understanding of middle ear anatomy more important than ever. However, creating high-resolution 3D visualizations and simulations of the middle ear in virtual reality (VR) has proven challenging due to limitations in imaging resolution, data size, and computing hardware. Standard computed tomography (CT), the modality most commonly used clinically for medical imaging, does not provide adequate resolution for detailed and realistic three-dimensional (3D) reconstructions of the middle ear's complex structures [1]. This is particularly true for areas such as the lenticular process of the incus and the stapes superstructure, which have reduced bone density compared to other parts of the middle ear. Soft tissue structures, such as membranes and muscles, are also challenging to reconstruct from standard clinical CT scans of the petrous temporal bone [2]. Histological preparation of temporal bones is resource-intensive and limits the ability to build up a range of VR assets; as a result, its use is becoming less frequent [3]. Instead, microtomography (micro-CT) has increasingly been used in recent years to generate higher-resolution, anatomically accurate data. However, the large volume of data generated by micro-CT is difficult to manipulate and computationally intensive, making it challenging to use in VR applications. This remains a barrier to wider adoption of VR simulation in otology.
As a result, cadaveric temporal bones remain the gold standard for technical training in otology [4], especially when applied to tasks involving middle ear surgery.
Despite this, the application of VR technology in otology has continued to grow, demonstrating an ongoing demand for innovation in this area [3]. If newer data sets of the middle ear, such as micro-CT, could be adopted into VR, it would become possible to build a variety of VR middle ear assets that simulate anatomical variation and disease without compromising anatomical accuracy. Additional advantages would include reduced processing time and development cost, making the innovation more scalable [5].
This study sought to address these challenges by optimizing a mesh model of the middle ear generated from micro-CT data using the adaptive polygon optimization (APO) technique. The study examined the polygon count, mesh model loading time, and VR frame rate of the optimized mesh model to assess its feasibility for creating an anatomically accurate middle ear model in VR. To the researchers’ knowledge, this is the first study to explore the use of human middle ear micro-CT in VR otology visualization. These findings have important implications for the future of medical education and surgical training, suggesting that VR technology could be an effective tool for teaching technical skills in middle ear surgery.

2. Methods

This study was carried out following institutional ethics approval from the Royal Prince Alfred Hospital Ethics Office through the Research Ethics and Governance Information System (REGIS), protocol numbers 2019/ETH13789 and X19-0480.
An overview of the major development steps is provided in Figure 1.

2.1. Specimen Processing and Scanning

A sample of six fresh-frozen temporal bones was dissected to extract the otic capsules, which contain the middle ear, with the tympanic membrane intact. The bones were placed in Karnovsky's fixative (3% paraformaldehyde, 0.5% glutaraldehyde in phosphate buffer) to prevent shrinkage, then bathed in diluted osmium tetroxide for soft-tissue staining. The specimens were scanned in an Xradia MicroXCT 400 micro-tomography scanner (Carl Zeiss AG, Oberkochen, Germany) at a resolution of 28 microns, and cross-section image sequences (1024 px × 1024 px) were saved in Tag Image File Format (TIFF) at 24-bit color depth. The specimen showing the best soft-tissue structures was chosen for reconstruction.
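As a quick sanity check on the scan geometry (not part of the original workflow), the physical field of view and raw slice size implied by the stated voxel size and slice dimensions can be computed directly; the helper names below are illustrative.

```python
VOXEL_UM = 28    # scan resolution reported above, in micrometres
SLICE_PX = 1024  # cross-section image width/height in pixels

def physical_extent_mm(pixels: int, voxel_um: float = VOXEL_UM) -> float:
    """Edge length of the field of view covered by `pixels` voxels."""
    return pixels * voxel_um / 1000.0

def slice_bytes(width_px: int, height_px: int, bit_depth: int = 24) -> int:
    """Uncompressed size of one TIFF cross-section at the given bit depth."""
    return width_px * height_px * bit_depth // 8

# Each 1024 px slice spans 1024 * 28 um = 28.672 mm and occupies 3 MiB raw,
# which is why a full micro-CT stack quickly grows to gigabytes of data.
```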

2.2. Segmentation and Surface Model Reconstruction

Image sequences were loaded into AVIZO 2021 3D Visualization & Analysis Software (Thermo Fisher Scientific Inc., Waltham, MA, USA), and segmentation and surface model reconstruction were performed using a previously validated protocol [1]. Middle ear regions of interest (ROIs) were specified and volumetric models were rendered. A semiautomatic image segmentation procedure was implemented to reconstruct a 3D surface model [Figure 2].
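The study used AVIZO's proprietary semiautomatic tools, which cannot be reproduced here. As an illustration of the underlying idea only, the sketch below implements seeded region growing in NumPy: a user supplies a seed voxel (the manual step), and the region expands automatically to 6-connected neighbours above a greyscale threshold. All names and parameters are hypothetical.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, threshold):
    """Semiautomatic segmentation sketch: grow a mask outward from `seed`
    to 6-connected voxels whose greyscale value is >= `threshold`."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return mask  # seed itself is below threshold; nothing to grow
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and volume[nz, ny, nx] >= threshold):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

Because only voxels connected to the seed are labelled, disconnected bright regions (e.g. osmium sedimentation artifacts) are excluded, which is the practical appeal of seeded methods over a global threshold.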

2.3. Adaptive Polygon Optimisation

Small polygons were combined into larger polygons using adaptive polygon optimisation (APO) software (Mootools Software, Saint Vincent de Cosse, France) while maintaining surface details [Figure 3]. Using this method, the surface model size could be reduced dramatically. Eight surface models of the human middle ear and its surroundings were created, with polygon counts ranging from 8.5 million down to 600,000, reducing the mesh model size from 128 MB to 5.5 MB [Figure 3].
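The APO software itself is proprietary, but the mechanics of polygon reduction can be illustrated with a simpler technique, vertex-clustering decimation (not the algorithm APO uses): vertices are snapped to a coarse grid, each occupied cell becomes a single averaged vertex, and triangles that collapse onto fewer than three distinct vertices are dropped. A NumPy sketch with illustrative names:

```python
import numpy as np

def vertex_cluster_decimate(vertices, faces, cell_size):
    """Simplify a triangle mesh by merging all vertices that fall in the
    same grid cell of edge length `cell_size`."""
    keys = np.floor(vertices / cell_size).astype(np.int64)
    # Unique occupied cells become the new vertex set.
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    new_vertices = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(new_vertices, inverse, vertices)  # sum vertices per cell
    np.add.at(counts, inverse, 1)
    new_vertices /= counts[:, None]             # average -> cell centroid
    # Remap faces, then drop triangles collapsed to a point or edge.
    new_faces = inverse[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 0] != new_faces[:, 2]))
    return new_vertices, new_faces[keep]
```

Uniform clustering like this degrades fine detail quickly; adaptive methods such as APO (or quadric-error decimation) spend the polygon budget where curvature demands it, which is why the authors could cut 90% of polygons before surface detail faded.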
After the APO process, the surface model of the middle ear and its surroundings was reoriented using the transform tools in Autodesk 3dsMax 2021 (Autodesk, San Rafael, CA, USA). Middle ear structures were extracted using the detach tool. Some additional landmark structures, such as nerves and blood vessels, were not captured in the scan-derived model; these were created in 3dsMax 2021 using primitive shapes such as tubes to keep the polygon count low while providing contextual reference. Color was applied manually to the ossicles, nerves, tympanic membrane, ossicular ligaments, and surrounding structures [Figure 4]. Completed models were checked by an otologist (PM) and exported in FBX (Filmbox) format with the embedded media option activated to include color information. The optimal combination of workflow variables, including data segmentation, surface model optimization, VR loading time, and frame rate, was then evaluated.

3. Results

The original micro-CT surface model contained over 21 million polygons, and translation to VR was not possible. Reducing the polygon count through surface model optimisation, as shown in Table 1, enabled acceptable loading times and frame rates for VR applications. Using APO, polygon counts could be reduced by 90% while preserving surface details; beyond a 90% optimisation ratio, however, surface details started to fade. Ideal optimisation of the middle ear surface model for VR was achieved between 2 million and 2.5 million polygons, which gave a loading time of under 3 min at a frame rate of 82 to 90 frames per second, with no VR-related motion sickness experienced.
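The optimisation ratios reported in Table 1 are consistent with the polygon reduction measured against the original 21-million-polygon model, as a quick check shows (the small discrepancies are rounding in the table):

```python
ORIGINAL_POLYGONS = 21_000_000

def optimisation_ratio(polygons: int, original: int = ORIGINAL_POLYGONS) -> float:
    """Percentage of polygons removed relative to the original model."""
    return 100.0 * (1.0 - polygons / original)

# (polygon count, reported ratio %) pairs from Table 1
reported = [(8_500_000, 59.5), (6_500_000, 69.0), (4_300_000, 79.5),
            (2_500_000, 88.0), (2_000_000, 90.0), (1_500_000, 93.0),
            (1_000_000, 95.2), (600_000, 97.1)]
for polygons, ratio in reported:
    assert abs(optimisation_ratio(polygons) - ratio) < 0.6
```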

4. Discussion

Simulation has been widely applied in otolaryngology [4,6,7,8]. Over the last two decades, the application of 3D technology in simulation for education has increased [3]. This has been particularly prevalent in rhinology and otology, where reliance on technology has been necessary to achieve minimally invasive surgical routes. Due to the increased cost and reduced availability of traditional teaching resources such as human cadavers, there is a need for technologies that are more readily available, cost-effective, and reproducible, and that facilitate uniform assessment of competency [6,7]. The use of 3D technologies such as VR has therefore grown in prominence, particularly in temporal bone surgical training [3]. Advantages of VR include standardized assessments, objective evaluation of experience level [8], enhancement of surgical skills for both novice and experienced surgeons [9], and case-specific surgical rehearsal that improves confidence [10]. Limitations include attaining high-resolution data for anatomical accuracy, attaining a range of anatomical models to simulate interindividual variation, simulating disease, and doing all of this without being resource-intensive. Consequently, the application of VR has largely been limited to technical skills such as mastoid surgery and cochlear implantation, and its use for middle-ear-specific tasks has been limited [11].

4.1. Strengths of the Reported Method

Sorensen et al. successfully demonstrated the use of VR in ear simulation over two decades ago [12]. Their visible ear simulator [12] used images taken from one specimen to generate a volume-rendered model of the temporal bone for mastoidectomy training. Images were gathered by microslicing, with slice widths ranging from 50–100 µm. Segmentation of the specimen alone was reported to take 100–150 h, and a total of 450 h of labor was required to generate the entire visible ear model from one temporal bone. Almost 15 years later, using a protocol combining CBCT and microslicing data, six additional temporal bones were added to the open ear library [5]. Despite this advancement, image loss appearing as dehiscence, and the need for manual segmentation, were reported in important structures such as the semicircular canals, facial nerve, and tegmen, demonstrating that image processing and labor intensity remain obstacles to attaining high-resolution, anatomically accurate data sets. In addition, despite advances in CBCT, clinical imaging resolution still imposes limitations for middle ear visualisation when it is coarser than 100 µm [1].
The micro-CT dataset used in this experiment had a resolution of 28 µm, and the total image processing time to publish the dataset in VR was 30 h. Advantages of micro-CT include access to a higher-resolution dataset, avoidance of temporal bone processing artefacts (such as those created by microslicing), and reduced segmentation time. It also makes it easier to scale up the number of temporal bone datasets, although micro-CT remains limited to cadaveric bones.
The use of micro-CT to study middle ear anatomy dates back to 2003. Applications include finite element studies [13,14], prosthesis design [15], 3D printing [16], and anatomical evaluation of the ossicles [17]. Neural structures of the middle ear, such as the facial nerve [18] and chorda tympani [19], have also been described. Due to laborious data processing [20] and expensive visualization software [21], studying the middle ear and its surrounding structures has been challenging for large groups of students and trainees. Combining the superior imaging quality of micro-CT with the simulation advantages of VR is one way to make such high-resolution data sets accessible to a wider group of people, especially as VR headsets become cheaper and more user-friendly. As demonstrated in this study, this is indeed feasible when the dataset is processed with rendering, image optimisation, and VR visualisation strategies tailored to the simulation intent.

4.2. Rendering, Image Optimisation and Visualisation Techniques

This study used surface rendering techniques with APO. The strengths and weaknesses of rendering techniques using volumetric data (voxels) compared to surface models (polygons) have been reported in the literature [15]. In representing anatomical details, Udupa and Hung (1991) [21] examined surface and volume rendering methods and concluded that the surface approach has a slight advantage. In comparison, the volume rendering described by van Ooijen et al. [22] produced better image quality without loss of information compared to surface modelling, which uses only a limited portion (i.e., the surface detail) of the available data. A recent study showed that while volume models gave superior volumetric data, surface anatomy detail was sacrificed [2].
For virtual temporal bone drilling exercises, 3D volumetric voxel data is utilised, as it displays the entire raw 3D dataset without human interpretation (such as imaging segmentation by defining a threshold and 3D reconstruction). At each step of the virtual exercise, as the drill removes a volume of bone, a 3D array of voxels is removed and the whole model is updated to display the result of each surgical move [23,24]. However, rendering entire volumetric data stacks is a time-consuming, memory-intensive, and computationally expensive task, especially when dealing with large micro-CT data sets. As an alternative approach, a hybrid data structure is commonly used in some virtual procedure simulators [25], in which surface model vertices directly correspond to the volumetric representation. The graphically rendered surface model is dynamic and is updated while the voxels are being ‘drilled away’.
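The cited drilling simulators are large systems, but the voxel-removal step they describe can be sketched in a few lines (an illustration, not code from any cited simulator): each drill contact clears a spherical neighbourhood of voxels, and in a hybrid simulator the corresponding surface patch would then be re-rendered.

```python
import numpy as np

def drill(volume, centre, radius_vox):
    """Remove a spherical bite of voxels from a boolean bone volume,
    as a virtual burr would; returns the number of voxels removed."""
    z, y, x = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    sphere = ((z - centre[0]) ** 2 + (y - centre[1]) ** 2
              + (x - centre[2]) ** 2) <= radius_vox ** 2
    removed = int(np.count_nonzero(volume & sphere))
    volume &= ~sphere  # update the volume in place
    return removed
```

Repeating this update at interactive rates over a full micro-CT stack is exactly the memory- and compute-intensive workload described above, which is why this study, needing no drilling, could discard the voxel data after surface extraction.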
When studying surface anatomy, the inherent nature of the surface modelling technique allows good depth perspective and a higher definition of surface detail, which has been shown to be a significant advantage over volume modelling. Surface models have been reported [26,27,28] to allow precise clinical measurements of anatomical structures and are suited to the fabrication of tangible educational models using 3D printers. Other advantages of surface modelling include geometry optimisation and the creation of colour and texture maps. In addition, with short computational times, relatively small data sizes, and lower memory and storage requirements than volume modelling [29], complex surface anatomy can be visualised, making the approach more feasible for tasks requiring detailed visualisation.
Ultimately, the choice of volume, surface, or hybrid rendering can be customised to address the task at hand. In this study, where the purpose was to visualise anatomical structures of the middle ear and interact with whole shapes (i.e., ossicles) that did not involve drilling (typically required in virtual mastoidectomy), volume data was not necessary. Surface rendering was used to reduce file sizes while preserving the fine details of structures such as the posterior wall of the middle ear, the lenticular process and stapes, and the tendons and nerves traversing the middle ear [30,31]. Combining surface-rendering techniques with APO allowed image optimisation to attain a sufficient frame rate for an immersive VR experience.
A low VR frame rate (FPS) can cause motion sickness, resulting in nausea, headaches, sweating, and dizziness [32]. As a result, balancing other criteria such as polygon counts, surface model loading time, and surface model detail is crucial to obtaining an optimal frame rate. VR performance indicators reported in this study, such as polygon numbers, model size, loading time, VR frame rate, polygon optimisation ratio, and surface detail assessment, offer useful references to help researchers choose an optimal VR parameter setup.
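The frame-rate constraint can be made concrete: the refresh rate fixes the time available to render each frame, and every optimisation above is ultimately in service of that budget. A trivial calculation, for reference:

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / fps

# At the 82-90 FPS achieved by the optimised model, the renderer has
# roughly 11-12 ms per frame; the unoptimised 8.5-million-polygon model's
# 35 FPS stretches this to ~29 ms, a level associated with motion sickness.
```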

4.3. Limitations of the Reported Method

Reconstructing finer low-attenuation structures, such as the middle ear membranes and tympanic diaphragm, continues to be challenging. Increasing the greyscale threshold to include all soft tissue structures created artifacts by capturing osmium sedimentation, while reducing it prevented the whole structure from being reconstructed accurately. In this study, these structures therefore required manual design, which added to the processing time. These limitations could be overcome by scanning the bone both before and after staining and overlaying the imaging data, which has the potential to further reduce processing time in the future. In addition, as the clinical imaging resolution of CBCT improves, using patient-specific datasets that are also anatomically accurate may become feasible.

5. Conclusions

In recent years, 3D simulators have become increasingly popular for medical education and training, particularly in temporal bone training. However, the range and complexity of surgical tasks that can be simulated are limited by the lack of variation in available anatomy and by the resolution of imaging techniques; it has also not been possible to create anatomically accurate VR assets for middle ear disease. The middle ear, with its intricate and variable anatomy, presents a significant challenge due to variations in bone density and the presence of soft tissue structures such as membranes, ligaments, and nerves. The introduction of micro-CT as a data source for VR is therefore a small but critical technological challenge, which this paper reports overcoming. As demonstrated in this study, selective rendering strategies can be used to attain task-specific assets, such as surface rendering for anatomical study. Further, by optimizing a mesh model generated from micro-CT data using the adaptive polygon optimization technique, assets can be prepared for an immersive VR experience without compromising anatomical accuracy. This milestone marks a significant step forward in the use of VR in otology and has the potential to transform medical training and education by reducing the time, and thereby the cost, of creating assets. With further development, this method might be applied to a wider variety of specimens, enabling a wider range of technical tasks to be performed with greater accuracy and realism.

Author Contributions

Conceptualization, K.C.; Methodology, K.C. and H.M.; Formal analysis, K.C.; Investigation, H.M. and J.R.C.; Writing—original draft, K.C.; Writing—review & editing, I.C., H.M., J.R.C. and P.M.; Visualization, K.C.; Supervision, P.M. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

Study protocol was approved by the Institutional Review Board of Royal Prince Alfred Hospital (protocol numbers 2019/ETH13789 and X19-0480, 13 July 2019).

Informed Consent Statement

Informed consent was waived due to the anonymous nature of the data used in this study.

Data Availability Statement

In the spirit of collaboration and advancing scientific knowledge, we are open to sharing the anonymized CT scan data utilized in this study upon reasonable request.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References
1. Mukherjee, P.; Cheng, K.; Chung, J.; Grieve, S.M.; Solomon, M.; Wallace, G. Precision Medicine in Ossiculoplasty. Otol. Neurotol. 2021, 42, 177–185.
2. Cohen, J.; Reyes, S.A. Creation of a 3D Printed Temporal Bone Model from Clinical CT Data. Am. J. Otolaryngol. 2015, 36, 619–624.
3. Mukherjee, P.; Cheng, K.; Wallace, G.; Chiaravano, E.; Macdougall, H.; O’Leary, S.; Solomon, M. 20 Year Review of Three-dimensional Tools in Otology: Challenges of Translation and Innovation. Otol. Neurotol. 2020, 41, 589–595.
4. Mowry, S.E.; Hansen, M.R. Resident Participation in Cadaveric Temporal Bone Dissection Correlates with Improved Performance on a Standardized Skill Assessment Instrument. Otol. Neurotol. 2014, 35, 77–83.
5. Sieber, D.M.; Andersen, S.A.W.; Sørensen, M.S.; Mikkelsen, P.T. OpenEar Image Data Enables Case Variation in High Fidelity Virtual Reality Ear Surgery. Otol. Neurotol. 2021, 42, 1245–1252.
6. Zirkle, M.; Roberson, D.W.; Leuwer, R.; Dubrowski, A. Using a Virtual Reality Temporal Bone Simulator to Assess Otolaryngology Trainees. Laryngoscope 2007, 117, 258–263.
7. Zhao, K.G.; Kennedy, G.; Yukawa, K.; Pyman, B.; O’Leary, S. Can Virtual Reality Simulator Be Used as a Training Aid to Improve Cadaver Temporal Bone Dissection? Results of a Randomized Blinded Control Trial. Laryngoscope 2011, 121, 831–837.
8. Khemani, S.; Arora, A.; Singh, A.; Tolley, N.; Darzi, A. Objective Skills Assessment and Construct Validation of a Virtual Reality Temporal Bone Simulator. Otol. Neurotol. 2012, 33, 1225–1231.
9. Linke, R.; Leichtle, A.; Sheikh, F.; Schmidt, C.; Frenzel, H.; Graefe, H.; Wollenberg, B.; Meyer, J.E. Assessment of Skills Using a Virtual Reality Temporal Bone Surgery Simulator. Acta Otorhinolaryngol. Ital. 2013, 33, 273–281.
10. Arora, A.; Swords, C.; Khemani, S.; Awad, Z.; Darzi, A.; Singh, A.; Tolley, N. Virtual Reality Case-specific Rehearsal in Temporal Bone Surgery: A Preliminary Evaluation. Int. J. Surg. 2013, 12, 141–145.
11. Javia, L.R.; Sardesai, M.G. Physical Models and Virtual Reality Simulators in Otolaryngology. Otolaryngol. Clin. N. Am. 2017, 50, 875–891.
12. Sorensen, M.S.; Dobrzeniecki, A.B.; Larsen, P.; Frisch, T.; Sporring, J.; Darvann, T.A. The Visible Ear: A Digital Image Library of the Temporal Bone. ORL J. Otorhinolaryngol. Relat. Spec. 2002, 64, 378–381.
13. Kelly, D.J.; Prendergast, P.J.; Blayney, A.W. The effect of prosthesis design on vibration of the reconstructed ossicular chain: A comparative finite element analysis of four prostheses. Otol. Neurotol. 2003, 24, 11–19.
14. Prendergast, P.J.; Kelly, D.J.; Rafferty, M.; Blayney, A.W. The effect of ventilation tubes on stresses and vibration motion in the tympanic membrane: A finite element analysis. Clin. Otolaryngol. 1999, 24, 542–548.
15. Kamrava, B.; Gerstenhaber, J.A.; Amin, M.; Har-el, Y.; Roehm, P.C. Preliminary Model for the Design of a Custom Middle Ear Prosthesis. Otol. Neurotol. 2017, 38, 839–845.
16. Kuru, I.; Maier, H.; Müller, M.; Lenarz, T.; Lueth, T.C. A 3D-printed functioning anatomical human middle ear model. Hear. Res. 2016, 340, 204–213.
17. Saha, R.; Srimani, P.; Mazumdar, A.; Mazumdar, S. Morphological Variations of Middle Ear Ossicles and its Clinical Implications. J. Clin. Diagn. Res. 2017, 11, AC01–AC04.
18. Kozerska, S.J.; Spulber, A.; Walocha, J.; Wroński, S.; Tarasiuk, J. Micro-CT study of the dehiscences of the tympanic segment of the facial canal. Surg. Radiol. Anat. 2017, 39, 375–382.
19. McManus, L.J.; Dawes, P.J.D.; Stringer, M.D. Surgical anatomy of the chorda tympani: A micro-CT study. Surg. Radiol. Anat. 2012, 34, 513–518.
20. Lee, D.H.; Chan, S.; Salisbury, C.; Kim, N.; Salisbury, K.; Puria, S.; Blevins, N.H. Reconstruction and exploration of virtual middle-ear models derived from micro-CT datasets. Hear. Res. 2010, 263, 198–203.
21. Udupa, J.K.; Hung, H.M.; Chuang, K.S. Surface and volume rendering in three-dimensional imaging: A comparison. J. Digit. Imaging 1991, 4, 159–168.
22. van Ooijen, P.M.A.; van Geuns, R.J.M.; Rensing, B.J.W.M.; Bongaerts, A.H.H.; de Feyter, P.J.; Oudkerk, M. Noninvasive Coronary Imaging Using Electron Beam CT: Surface Rendering Versus Volume Rendering. Am. J. Roentgenol. 2003, 180, 223–226.
23. Wiet, G.J.; Stredney, D.; Kerwin, T.; Hittle, B.; Fernandez, S.A.; Abdel-Rasoul, M.; Welling, D.B. Virtual temporal bone dissection system: OSU virtual temporal bone system: Development and testing. Laryngoscope 2012, 122, S1–S12.
24. Hardcastle, T.N.; Wood, A. The utility of virtual reality surgical simulation in the undergraduate otorhinolaryngology curriculum. J. Laryngol. Otol. 2018, 132, 1072–1076.
25. Morris, D.; Sewell, C.; Barbagli, F.; Salisbury, K.; Blevins, N.H.; Girod, S. Visuohaptic Simulation of Bone Surgery for Training and Evaluation. IEEE Comput. Graph. Appl. 2006, 26, 48–57.
26. Rose, A.S.; Kimbell, J.S.; Webster, C.E.; Harrysson, O.L.A.; Formeister, E.J.; Buchman, C.A. Multi-material 3D Models for Temporal Bone Surgical Simulation. Ann. Otol. Rhinol. Laryngol. 2015, 124, 528–536.
27. Rose, A.S.; Webster, C.E.; Harrysson, O.L.A.; Formeister, E.J.; Rawal, R.B.; Iseli, C.E. Pre-operative simulation of pediatric mastoid surgery with 3D-printed temporal bone models. Int. J. Pediatr. Otorhinolaryngol. 2015, 79, 740–744.
28. Rodt, T.; Ratiu, P.; Becker, H.; Bartling, S.; Kacher, D.F.; Anderson, M.; Jolesz, F.A.; Kikinis, R. 3D visualisation of the middle ear and adjacent structures using reconstructed multi-slice CT datasets, correlating 3D images and virtual endoscopy to the 2D cross-sectional images. Neuroradiology 2002, 44, 783–790.
29. Entsfellner, K.; Kuru, I.; Strauss, G.; Lueth, T.C. A new physical temporal bone and middle ear model with complete ossicular chain for simulating surgical procedures. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 1654–1659.
30. Pandey, A.K.; Bapuraj, J.R.; Gupta, A.K.; Khandelwal, N. Is there a role for virtual otoscopy in the preoperative assessment of the ossicular chain in chronic suppurative otitis media? Comparison of HRCT and virtual otoscopy with surgical findings. Eur. Radiol. 2009, 19, 1408–1416.
31. Jain, N.; Youngblood, P.; Hasel, M.; Srivastava, S. An augmented reality tool for learning spatial anatomy on mobile devices. Clin. Anat. 2017, 30, 736–741.
32. Meehan, M.; Razzaque, S.; Insko, B.; Whitton, M.; Brooks, F.P. Review of four studies on the use of physiological reaction as a measure of presence in stressful virtual environments. Appl. Psychophysiol. Biofeedback 2005, 30, 239–258.
Figure 1. Development of an immersive virtual reality human middle ear anatomy based on micro-CT data.
Figure 2. The middle ear and its surrounding reconstruction using AVIZO 2021 software.
Figure 3. Incus surface detail comparison before and after adaptive polygon optimisation.
Figure 4. Micro-CT human middle ear surface model with color information applied.
Table 1. Polygon numbers, model size, loading time, VR frame rate, polygon optimization ratio and surface detail assessment.
Polygons    | Model Size (MB) | VR Loading Time | Frame Rate (FPS) | Optimisation Ratio | Surface Details
8.5 million | 128             | 12 min and 20 s | 35               | 59.5%              |
6.5 million | 96              | 9 min and 45 s  | 51               | 69%                |
4.3 million | 64              | 5 min and 10 s  | 63               | 79.5%              |
2.5 million | 32              | 2 min and 55 s  | 82               | 88%                |
2 million   | 25              | 2 min and 30 s  | 90               | 90%                |
1.5 million | 19              | 1 min and 55 s  | 90               | 93%                |
1 million   | 12              | 1 min and 20 s  | 90               | 95.2%              |
600 k       | 5.5             | 30 s            | 90               | 97.1%              |

Cheng, K.; Curthoys, I.; MacDougall, H.; Clark, J.R.; Mukherjee, P. Human Middle Ear Anatomy Based on Micro-Computed Tomography and Reconstruction: An Immersive Virtual Reality Development. Osteology 2023, 3, 61-70.
