Review

Advanced 3D Modeling and Bioprinting of Human Anatomical Structures: A Novel Approach for Medical Education Enhancement

by
Sergio Castorina
1,2,
Stefano Puleo
2,
Caterina Crescimanno
3 and
Salvatore Pezzino
3,*
1
Department of Medical, Surgical Sciences and Advanced Technologies “G.F. Ingrassia”, University of Catania, 95123 Catania, Italy
2
Mediterranean Foundation “GB Morgagni”, 95125 Catania, Italy
3
Department of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 5; https://doi.org/10.3390/app16010005
Submission received: 10 November 2025 / Revised: 15 December 2025 / Accepted: 17 December 2025 / Published: 19 December 2025
(This article belongs to the Section Applied Biosciences and Bioengineering)

Abstract

Current challenges in anatomical teaching, such as cadaver shortages, ethical limitations, and restricted access to pathological specimens, are increasingly being mitigated by advancing medical technologies, among them three-dimensional modeling and multi-material bioprinting. These innovations can facilitate a deeper understanding of complex anatomical structures while fostering an interactive learning environment that accommodates diverse educational needs, and they have the capacity to transform anatomy education, yielding better-prepared healthcare practitioners. Combining artificial intelligence with acquired medical images streamlines anatomical reconstruction, reducing processing time while preserving high accuracy. This review assesses the current landscape of advanced three-dimensional printing, multi-material bioprinting, and related technologies used in anatomical education, consolidating evidence on their educational effectiveness and outlining potential pathways for clinical application and research development.

1. Introduction

Anatomical teaching has historically relied on cadaveric dissection to provide medical students with the three-dimensional spatial cognition and authentic tactile sensations required for clinical practice [1]. Today, however, anatomical education faces numerous challenges, including a shortage of cadavers, ethical concerns, limited access to clinical specimens, and poor representation of anatomical variation [1,2,3]. The supply of cadaveric specimens continues to decline, representing a major challenge to anatomical education globally [4,5,6,7,8,9,10,11,12]. Global cadaver donation rates have fallen sharply, with marked disparities in resource distribution between high-income and low- and middle-income countries. In low- and middle-income countries, the lack of access to cadaveric material for anatomical training represents a barrier to adequate medical training, affecting above all students in low-resource settings [8,13,14]; access to the new technologies that could help overcome this problem is often equally limited. These circumstances have sparked extraordinary interest in alternative approaches to anatomical education, framing emerging technologies as potentially transformative answers to long-standing educational constraints [15]. In this scenario, the emergence of advanced three-dimensional modeling technologies combined with sophisticated bioprinting capabilities offers revolutionary opportunities to transform anatomical education while addressing fundamental limitations of traditional approaches [16,17,18,19]. In contrast with conventional plastic anatomical models that depict idealized anatomy, novel 3D-printed anatomical models can be derived directly from patient imaging data, retaining natural anatomical variation, pathological states, and the individual morphology that students will encounter in clinical work [20]. This technical advance allows the fabrication of patient-specific anatomical structures that offer superior learning opportunities while retaining the hands-on experiential learning necessary for building medical competency [21]. Several studies demonstrate that 3D-printed anatomical models can measurably enhance learning outcomes compared with conventional two-dimensional training materials, with particular effectiveness for complicated anatomical structures in which spatial relationships are indispensable for understanding [19,22,23,24]. Students who learn via 3D-printed models consistently exhibit improved post-test scores, enhanced spatial visualization abilities with large effect sizes (SMD 0.72–0.93), superior long-term knowledge retention, and significantly higher satisfaction rates compared with conventional instructional approaches [20,25]. Such gains are most pronounced for anatomical areas with complicated three-dimensional relationships, such as cardiac anatomy, neuroanatomy, and the musculoskeletal system, areas in which conventional instruction often struggles to convey spatial complexity [19,20].
These advances in anatomical teaching rest on several underlying technologies, including advanced medical image acquisition, sophisticated segmentation software, artificial intelligence-driven reconstruction procedures, and new multi-material manufacturing processes [22,26,27,28,29,30]. This review aims to evaluate recent developments in 3D modeling and bioprinting technologies for anatomical education, considering technological platforms, educational uses across a range of student populations and body regions, validation techniques and quality assurance measures, weaknesses in existing practices, and emerging advances.

2. Advanced Medical Imaging Technologies for 3D Anatomical Model Generation

The foundation of anatomical modeling is state-of-the-art medical imaging data acquisition and analysis, which ensures that anatomical structures are rendered accurately and faithfully to the source imaging data [28]. Because the quality, resolution, and contrast of the original images determine the anatomical information available for subsequent segmentation and reconstruction workflows, acquisition techniques must be carefully optimized. Computed tomography and magnetic resonance imaging are the two main modalities used to create anatomical models; each has advantages and disadvantages related to its underlying physical principles and tissue contrast [31,32].

2.1. Computed Tomography Imaging: Transition from Conventional to Spectral Technologies

The evolution of CT technology has fundamentally transformed how anatomical models are created from medical imaging data. Understanding this progression—from basic single-slice systems to modern spectral imaging—is essential for appreciating the quality improvements in 3D anatomical reconstruction.
The transition from single-detector to multidetector computed tomography (CT) has transformed volumetric imaging for the creation of anatomical models [33]. Historically, early computed tomography systems obtained single slices sequentially, a process that was time-consuming and limited spatial coverage. In contrast, multidetector computed tomography (MDCT) utilizes detector arrays with multiple rows functioning concurrently, enabling rapid acquisition of complete volumetric datasets. This advancement facilitates the swift acquisition of volumetric data with enhanced temporal resolution and diminished motion artifacts [34,35,36,37,38,39]. The key benefit is that MDCT scanners can image an entire anatomical region in seconds rather than minutes, making them practical for clinical use while providing superior image quality.
MDCT scanners attain high spatial resolution with submillimeter isotropic voxel dimensions (≤0.5 mm slice thickness and ≤0.5 mm in-plane resolution), facilitating accurate segmentation of complex osseous structures, vascular anatomy, and calcified tissues with enhanced contrast differentiation [34,35,36,37,38,39]. This level of detail is crucial for 3D model creation because it ensures that small but important anatomical features—such as trabecular bone patterns or narrow vascular channels—are captured with sufficient resolution for accurate reproduction in printed models. MDCT’s characteristics make it possible to quickly create clinically useful segmentation procedures that produce anatomical models for each patient that can be used for surgical planning and teaching [39,40,41,42].
The latest generation of conventional CT systems, built on optimized detector components, achieves spatial resolutions below 0.25 mm [43]. Contemporary UHRCT (ultra-high-resolution CT) systems utilize an ultra-high-resolution mode featuring 0.25 mm detector elements in both in-plane and longitudinal orientations, effectively doubling the spatial resolution of traditional MDCT [43,44]. This improved resolution allows for the observation of intricate anatomical elements that were previously unresolvable with conventional imaging, especially beneficial for craniofacial anatomy, temporal bone structures, and complicated vascular linkages that necessitate submillimeter precision. For educators, this means students can examine fine anatomical details such as individual trabecular struts or delicate vascular branches that would be impossible to visualize at conventional resolution. Volume-rendered 3D models based on UHRCT data have significantly improved anatomical accuracy compared to conventional MDCT-based models and thus improve the teaching value by enabling the visualization of anatomical details at a resolution close to that of micro-CT [45].
Beyond standard clinical imaging, advanced research methodologies have enabled even deeper visualization, such as microfocus CT (micro-CT) and nanocomputed tomography (nano-CT), which enable resolution down to microscopic levels of anatomy [46,47,48,49,50,51,52,53,54,55,56,57,58]. These specialized imaging techniques push the boundaries of what can be visualized and printed. Micro-CT scanners can achieve voxel sizes down to the micron scale, whereas specialized nano-CT scanners reach sub-micron resolutions [48,57]. The clinical applications of these research-grade imaging modalities are still emerging, but they provide unprecedented insight into tissue microarchitecture. These methods enable 3D visualization of trabecular bone, vessels, and interactions of microscopic anatomy at the cellular level [46]. 3D printing of micro-CT and nano-CT data can produce enlarged anatomical models of unrivaled detail, allowing for the visualization of features of microscopic anatomy such as osteocyte lacunae, the canalicular network, and trabecular structure [47,48]. For anatomy education, these ultra-high-resolution models are particularly valuable for demonstrating bone microarchitecture, embryonic structures, and craniofacial complexity at a resolution that far exceeds traditional clinical imaging [49]. The fact that these imaging methods are non-destructive means that standardized and reproducible digital datasets can be created that could form the basis of teaching materials used by many different centers [46,47,48,49,50,51,52,53,54,55,56,57,58].
Advanced material characterization techniques enhance the diagnostic capability of standard CT imaging. Dual-energy computed tomography (DECT) offers superior tissue differentiation capabilities compared to traditional single-energy CT imaging [55]. Rather than measuring only the density of tissue, DECT provides additional information about tissue composition. Dual-energy CT captures imaging data at two distinct energy levels (often 80 kVp and 140 kVp), facilitating material decomposition algorithms that differentiate tissues based on their energy-dependent attenuation properties rather than only on density [59]. This approach is analogous to seeing an object in different colored lights—it reveals properties that a single illumination cannot. This method facilitates greater discrimination of iodinated contrast agents from calcified structures, superior visualization of vascular disease, and the creation of virtual non-contrast pictures from contrast-enhanced acquisitions [60,61,62]. Dual-energy CT enhances segmentation accuracy in three-dimensional anatomical model generation by providing iodine-specific and calcium-specific reconstructions, which are particularly beneficial for intricate cardiovascular models that necessitate precise differentiation between contrast-enhanced vessels and neighboring calcified structures [63,64,65].
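As a simplified illustration of the material decomposition principle described above, the attenuation measured at two tube energies can be modeled as a linear combination of two basis materials and solved voxel by voxel. The Python sketch below is purely illustrative: the basis coefficients and measured values are placeholders, not calibrated scanner data, and clinical systems use energy-dependent basis functions with regularized volumetric solvers.

```python
import numpy as np

# Illustrative two-material decomposition for a single voxel. Measured attenuation
# at 80 kVp and 140 kVp is modeled as a linear combination of iodine and calcium
# basis attenuations. All numeric values are placeholders for demonstration only.
basis = np.array([[3.0, 1.2],    # [mu_iodine(80 kVp),  mu_calcium(80 kVp)]
                  [1.5, 0.9]])   # [mu_iodine(140 kVp), mu_calcium(140 kVp)]

measured = np.array([2.1, 1.1])  # attenuation of one voxel at the two energies

# Solve basis @ fractions = measured for the iodine and calcium contributions.
iodine_frac, calcium_frac = np.linalg.solve(basis, measured)
print(f"iodine contribution: {iodine_frac:.2f}, calcium contribution: {calcium_frac:.2f}")
```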
The newest paradigm in CT imaging involves photon-counting detection, a technology that represents a fundamental shift in how X-rays are measured and processed. Photon-counting detector computed tomography combines ultra-high spatial resolution with inherent spectral imaging capabilities [66]. Unlike conventional energy-integrating detectors, which merely measure the total deposited energy (akin to measuring only the overall brightness of light), photon-counting detector systems directly count individual X-ray photons while simultaneously measuring each photon’s energy [67]. This fundamental difference eliminates the electronic noise inherent in conventional detectors and simultaneously produces images with superior spatial resolution (comparable to UHRCT systems), enhanced contrast-to-noise ratios, and reduced radiation dose [68,69]. The inherent spectral capabilities enable the generation of multiple energy-specific reconstructions, including virtual mono-energetic images, material decomposition maps, and virtual non-contrast images without requiring multiple acquisitions [68,69]. For anatomical model generation, photon-counting CT offers an unprecedented combination of spatial resolution and material characterization, enabling tissue-specific segmentation with accuracy previously unattainable with conventional imaging technologies [67,68,70].
The spectrum of CT technologies described above—from conventional MDCT to emerging photon-counting systems—reflects current and future capabilities in medical imaging. However, an important practical distinction exists between technologies suitable for routine educational anatomical model implementation and those serving primarily research or specialized clinical functions. For educators implementing 3D models globally, conventional MDCT imaging with standard clinical protocols provides fully adequate resolution (0.5–1.0 mm slice thickness) for producing high-quality anatomical models suitable for education, and this modality is available in virtually all hospitals and medical centers worldwide. Advanced technologies such as UHRCT, micro-CT, and photon-counting CT offer incremental improvements in anatomical detail but at significantly increased cost, complexity, and limited global availability. These advanced systems should be considered as specialized tools for specific applications (e.g., craniofacial surgery planning, research investigations of microscopic anatomy) rather than foundational requirements for educational anatomical modeling. The emphasis on advanced clinical and research imaging modalities in this comprehensive review reflects the current state of medical technology; educators should prioritize conventional MDCT combined with effective segmentation and 3D printing strategies when establishing or expanding anatomical modeling programs, particularly in resource-limited settings where MDCT is more accessible than advanced alternatives.

2.2. Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) provides a complementary approach to CT, offering exceptional soft tissue visualization without radiation exposure. Understanding when and why MRI is preferred over CT is essential for selecting optimal imaging data for 3D anatomical model creation.
Magnetic resonance imaging provides unparalleled soft tissue contrast resolution without the use of ionizing radiation, making it especially useful for creating anatomical models of the heart, brain, spinal cord, musculoskeletal soft tissues, and other intricate structures where differentiating between soft tissue types is critical [71,72]. The fundamental advantage of MRI is that it uses magnetic fields and radiofrequency pulses rather than X-rays, allowing for superior differentiation of soft tissues that appear nearly identical on CT scans. The exceptional contrast-to-noise ratio for soft tissues, combined with the ability to acquire images in any arbitrary plane without patient repositioning, and the availability of multiple tissue-specific contrast mechanisms (T1-weighted, T2-weighted, proton density, FLAIR, diffusion-weighted imaging), provides comprehensive anatomical information unavailable with CT imaging alone [73]. For 3D printing applications, this means MRI is the gold standard for organs like the heart and brain, where precise soft tissue boundaries are essential for accurate model creation.
MRI-derived anatomical models are invaluable for visualizing congenital heart disease, which requires a thorough understanding of intricate anatomical connections such as complex shunt pathways, anomalous vascular connections, and subtle structural abnormalities that influence clinical management strategies and surgical approaches [74,75,76]. Cardiac MRI offers unique advantages because it can depict both the static anatomy and the dynamic motion of the heart throughout its beating cycle. Cardiac cine-MRI sequences offer a dynamic representation of cardiac motion throughout the cardiac cycle, facilitating the creation of models that integrate functional data with anatomical details, thus enhancing comprehension of dynamic anatomical relationships and the physiological implications of structural anomalies [76]. This capability is particularly valuable for educational purposes, as students can understand not just where structures are located, but also how they move and function. Three-dimensional cardiac MRI datasets with spatial resolutions of 1.0–1.5 mm isotropic voxels enable comprehensive imaging of valve anatomy, myocardial architecture, and coronary artery origins, which is essential for complete cardiovascular education [77].
Recent technological advances have expanded MRI’s vascular imaging capabilities, allowing for improved visualization of arterial and venous anatomy. Recent advances in contrast-enhanced MR angiography (CE-MRA) techniques that use time-resolved three-dimensional acquisitions with parallel imaging and view-sharing reconstruction algorithms have enabled visualization of vascular architecture with temporal and spatial resolution previously unattainable with conventional imaging methods [78]. These new protocols capture images of blood vessels so rapidly that arterial and venous phases are clearly separated, preventing the confusing overlap that plagued older techniques. These advanced CE-MRA protocols achieve sub-second temporal resolution and spatial resolution below 1 mm, allowing for comprehensive visualization of arterial anatomy without venous contamination, resulting in highly accurate vascular models that preserve intricate branching patterns and anatomical variations [78,79,80]. The capacity to provide time-resolved contrast-enhanced imaging is especially useful for complicated vascular areas such as mesenteric arteries, where anatomical differences have a substantial impact on surgical planning and educational applications [81].
Emerging MRI techniques now allow visualization of tissues previously invisible on MRI, dramatically expanding the range of anatomical structures that can be imaged for 3D printing. Ultra-short echo time (UTE) and zero echo time (ZTE) MRI sequences are emerging technologies that allow for direct visualization of tissues with very short T2 relaxation times, such as cortical bone, ligaments, tendons, and other connective tissues that are typically invisible on conventional MRI sequences [82,83].
Conventional MRI sequences cannot visualize bone because the signal decays too rapidly; UTE and ZTE sequences overcome this limitation by capturing signals before this decay occurs, enabling bone visualization while maintaining MRI’s superior soft tissue contrast. These novel sequences achieve echo times of less than 100 microseconds, allowing signal acquisition before complete T2 decay occurs in rapidly relaxing tissues, resulting in “CT-like” bone visualization alongside superior soft tissue contrast characteristic of MRI [84]. The creation of “synthetic CT” images from UTE/ZTE MRI data gives bone density data suited for 3D printing applications while retaining the extensive soft tissue visualization benefits of magnetic resonance imaging [85].
Similar to CT imaging modalities, MRI-based anatomical modeling benefits from advanced specialized techniques while remaining fully functional with standard clinical MRI protocols. Institutions implementing educational 3D models should not feel obligated to invest in advanced sequences such as UTE/ZTE or highly specialized contrast protocols; excellent anatomical models can be generated from routine clinical MRI sequences available at virtually all hospitals. However, understanding the spectrum of available MRI capabilities—including advanced angiographic techniques and specialized bone imaging—enables educators to leverage existing advanced protocols when available, and informs future investment decisions as institutional capabilities evolve.

2.3. Medical Imaging Workflow and Quality Assurance

The conversion of raw medical imaging data into high-fidelity three-dimensional models necessitates complex quality assurance processes and established workflows to ensure anatomical accuracy during the reconstruction process (Figure 1). Modern workflows prioritize automation and standardization to ensure reproducible, high-quality results while minimizing manual processing requirements and operator variability [86]. Optimal imaging acquisition protocols for anatomical model creation recommend intravenous contrast administration when vascular visualization is required, slice thickness below 1.25 mm to ensure sufficient spatial resolution for accurate reconstruction, and electrocardiographic gating for cardiac imaging to minimize motion artifacts [87]. Thicker imaging slices reduce model accuracy due to partial volume averaging effects, whereas excessively thin slices increase radiation dose (for CT), acquisition time, dataset size, and processing requirements without corresponding improvements in final model quality [88]. Medical imaging data collected in DICOM (Digital Imaging and Communications in Medicine) format—the international standard for medical image storage and transmission—must be transferred to specialized segmentation software capable of processing volumetric datasets and producing three-dimensional anatomical reconstructions [89]. Segmentation software options include open source platforms and specialized vendor-specific platforms, each with its own set of capabilities, user interfaces, regulatory compliance features, and integration options with clinical information systems [89,90,91,92]. Post-segmentation workflows differ depending on the intended applications: patient-specific models for surgical planning applications require preservation of exact anatomical geometry without modification to maintain clinical validity and regulatory compliance, while educational models may undergo additional processing, including hole filling, surface smoothing, feature enhancement, and multi-color rendering to optimize pedagogical effectiveness [93,94,95]. This post-processing flexibility allows for the production of anatomical models that are customized for specific instructional objectives while keeping the foundation of clinically correct anatomy drawn from actual patient imaging data [93,94,95]. Comprehensive quality assurance frameworks that ensure dimensional correctness and anatomical fidelity throughout the imaging-to-model workflow are critical components of clinical-grade anatomical model development [87,93,96]. Quality assurance processes must address a variety of potential error causes, including segmentation inaccuracies, digital model processing artifacts, and manufacturing differences, which all contribute to final model correctness [87]. Validated workflows achieve dimensional errors below ±2% compared to source imaging data, with absolute errors typically below 1 mm for structures larger than 10 mm [28]. Because anatomical model construction is multi-step, full error analysis is required, with segmentation error, digital editing error, and printing error identified as distinct components contributing to overall model error [97,98]. Quantitative assessments show that typical segmentation errors are 0.8 mm (median), digital editing errors vary with complexity and operator experience, and printing errors average 0.26 mm, with total errors typically less than 0.9 mm for properly validated workflows [99,100].
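To make the imaging-to-model workflow concrete, the following Python sketch outlines a minimal DICOM-to-STL pipeline: series loading, a simple threshold-based bone segmentation standing in for validated (often AI-assisted) segmentation, surface extraction, and mesh export. It assumes the SimpleITK, scikit-image, and trimesh libraries are available; the threshold value, function name, and file paths are illustrative, and a clinical-grade workflow would add the dimensional quality assurance checks described above.

```python
# Minimal illustrative DICOM-to-printable-mesh sketch (not a validated clinical workflow).
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

def dicom_series_to_stl(dicom_dir, out_path, hu_threshold=300):
    """Segment bone-density voxels from a CT series and export a printable surface mesh."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()                       # volumetric CT (values in HU after rescale)

    volume = sitk.GetArrayFromImage(image)         # numpy array, shape (z, y, x)
    sx, sy, sz = image.GetSpacing()                # spacing reported as (x, y, z) in mm

    # A global HU threshold stands in for the segmentation step; dedicated or
    # AI-assisted segmentation software replaces this in real workflows.
    verts, faces, _, _ = measure.marching_cubes(
        volume, level=hu_threshold, spacing=(sz, sy, sx))

    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    mesh.export(out_path)                          # STL ready for slicing and printing
    return mesh

# Example call with hypothetical paths; dimensional QA (e.g., comparing mesh
# bounding-box extents against measurements on the source images) would follow.
# dicom_series_to_stl("/data/ct_case_01", "pelvis_model.stl")
```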

3. Artificial Intelligence-Based Reconstruction Technologies

Artificial intelligence (AI) has been incorporated into three-dimensional anatomical model generation, enabling unparalleled accuracy and efficiency. This shift transforms segmentation from a time-consuming manual procedure into an automated workflow, thereby radically changing how anatomical models are developed and deployed in clinical and educational settings.

3.1. Deep Learning Neural Networks and AI Architectures

Artificial intelligence (AI) technologies have now matured sufficiently to drive reconstruction systems built on sophisticated convolutional neural networks and three-dimensional deep learning architectures trained on large datasets of medical images annotated by expert anatomists and clinicians, resulting in robust algorithms capable of identifying, segmenting, and classifying complex anatomical structures with remarkable consistency [101]. These advanced neural network architectures, which include 3D U-Net, DeepMedic, and V-Net variants, have transformed medical image analysis by processing volumetric data directly rather than analyzing individual two-dimensional slices, preserving critical spatial relationships and anatomical context required for accurate reconstruction [102,103]. Deep learning-based segmentation (the automated identification and delineation of anatomical features of interest from volumetric imaging datasets) is of strategic importance among the procedural steps that determine the final quality and accuracy of 3D-printed anatomical models [104]. Modern deep learning segmentation frameworks are extremely versatile across numerous imaging modalities, anatomical locations, and therapeutic applications. Classification accuracy consistently exceeds 92% for diagnostic tasks, segmentation Dice scores exceed 91% for anatomical structure delineation, and inference times remain under 80 milliseconds per image, allowing for real-time processing capabilities suitable for clinical deployment [105]. These performance characteristics, validated across a variety of public benchmark datasets, confirm the clinical readiness of modern AI-based segmentation systems for anatomical model creation workflows [105]. AI-driven systems generate quantitatively significant performance improvements: Dice similarity coefficients, the gold standard metric for segmentation accuracy, exceed 89.2% across diverse anatomical structures and patient populations, with some specialized systems reaching coefficients above 92% for specific organ systems [105]. Reconstruction times have been dramatically reduced from 30–120 min for traditional manual segmentation processes to under 4 min for fully automated AI workflows, resulting in a more than 20-fold reduction in processing time while maintaining or exceeding human-level accuracy in anatomical structure identification [29,106]. This efficiency gain has converted 3D reconstruction from a labor-intensive research tool into a clinically viable technology that can be used in routine surgical planning and instructional applications. Ensemble learning approaches combine multiple neural network architectures—typically integrating ResNet, DenseNet, and U-Net variants—to achieve robust performance across diverse imaging conditions, patient populations, anatomical regions, and scanner manufacturers [107,108,109]. The ensemble methodology outperforms single-model approaches by lowering prediction variance and enhancing generalization to previously unseen anatomical variations or imaging abnormalities. Multi-task learning frameworks improve reconstruction quality by simultaneously optimizing segmentation, classification, and geometric reconstruction objectives, allowing networks to develop a more comprehensive anatomical understanding via shared feature representations [110].
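The Dice similarity coefficient reported throughout this section is defined as 2|A∩B|/(|A|+|B|) for a predicted mask A and a reference mask B. A minimal NumPy implementation of this metric is shown below; it is a generic helper for illustration, not code from any of the cited systems.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both masks empty: perfect agreement
```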

3.2. Attention Mechanisms and Uncertainty Quantification

Advanced attention mechanisms built into modern AI reconstruction systems allow for selective emphasis on diagnostically relevant anatomical regions while reducing background noise and irrelevant structures. Self-attention modules and transformer-based topologies enable networks to create long-range dependencies between distant anatomical components, which is essential for comprehending complex spatial interactions in three-dimensional anatomy [111]. These attention-guided methods generate detailed uncertainty maps that strongly correlate with segmentation performance metrics like Dice coefficients, giving clinicians and educators quantitative confidence measures for each reconstructed anatomical structure [112]. Studies show a significant association (Spearman’s rank correlation, p < 0.05) between uncertainty estimations and segmentation mistakes, allowing for intelligent identification of potentially problematic reconstructions [113].
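The relationship between uncertainty estimates and segmentation quality can be illustrated with a simple entropy-based uncertainty map; this is a deliberately basic stand-in for the attention-guided uncertainty mechanisms of the cited systems. The sketch below assumes per-case softmax probability volumes and Dice scores are already available and uses SciPy's Spearman rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def voxelwise_entropy(prob):
    """Predictive entropy per voxel from a softmax probability volume of shape (C, Z, Y, X)."""
    eps = 1e-8
    return -np.sum(prob * np.log(prob + eps), axis=0)

def uncertainty_quality_correlation(prob_volumes, dice_scores):
    """Spearman rank correlation between per-case mean uncertainty and Dice scores."""
    mean_uncertainty = [voxelwise_entropy(p).mean() for p in prob_volumes]
    rho, p_value = spearmanr(mean_uncertainty, dice_scores)  # expect rho < 0: higher uncertainty, lower Dice
    return rho, p_value
```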

3.3. Clinical Validation and Anatomical Variability Detection

Recent clinical validation studies conducted across numerous medical centers show that AI-driven reconstruction achieves high accuracy rates across a variety of anatomical systems. In thoracic surgery applications, AI-3D reconstruction systems achieve accuracy rates of 94.7% for arterial structure identification, 92.1% for venous structures, and 100% for bronchial anatomy classification, outperforming traditional manual reconstruction methods [106]. AI-assisted reconstruction improves case-wise median accuracy from 0.78 to 0.87 (p < 0.01), resulting in a 41% reduction in anatomical variant identification errors and a 35% improvement in surgical procedure selection accuracy [29]. AI-powered segmentation systems also excel at identifying anatomical variants and pathological states, which are critical for comprehensive medical education. The ability to systematically analyze thousands of clinical imaging datasets allows for the identification and reconstruction of rare anatomical variants found in only 5–10% of the population, which students may never encounter during traditional cadaver-based anatomical education [114]. A comparative study on pulmonary anatomy reconstruction showed that AI systems achieve 8–10% higher accuracy rates in anatomical variant identification than experienced radiologists with 6–19 years of clinical experience, with consistent performance improvements observed across all ten readers participating in blinded evaluation protocols [29]. When AI-3D assistance was provided, reader agreement increased significantly from 0.33 to 0.43 for anatomical variant identification and from 0.70 to 0.76 for operative procedure planning, indicating improved diagnostic consistency and reliability [29].

3.4. Specialized Capabilities: Non-Contrast CT Processing and Artifact Reduction

AI-based reconstruction also excels at processing non-contrast computed tomography scans, which have traditionally been among the most difficult datasets to reconstruct accurately due to poor tissue differentiation and reduced vascular contrast. AI-powered systems trained with sophisticated data augmentation techniques and physics-based artifact simulation achieve high-fidelity structure identification and classification even under suboptimal imaging conditions in which manual reconstruction would be impractical [115,116]. Deep learning reconstruction algorithms that operate in both the raw data and image domains have been shown to reduce image noise by 30–71% compared to filtered back projection while preserving spatial resolution and natural noise texture, improving diagnostic confidence without increasing radiation exposure [117]. Deep learning-based metal artifact reduction algorithms produce significantly clearer, less noisy images (p < 0.001) by effectively eliminating metallic streaks around implants, substantially improving surgical planning accuracy compared to traditional methods [118].

3.5. Quality Assurance and Automated Validation Mechanism

The AI reconstruction technique includes advanced quality assurance systems that automatically detect and fix common imaging abnormalities while indicating unexpected anatomical characteristics for expert assessment, ensuring clinical safety while increasing automation efficiency. Advanced quality assurance systems are increasingly incorporating artificial intelligence-based quality prediction algorithms that estimate segmentation accuracy without requiring manual expert contours, allowing for continuous monitoring of automated segmentation performance in production environments [119,120]. These AI-driven quality assurance frameworks use regression models trained on large validation datasets to predict Dice similarity coefficients directly from image-segmentation pairs, with mean absolute prediction errors below 0.05 across a wide range of anatomical structures and imaging conditions [121]. Image domain shift detectors with denoising autoencoders and engineered features can identify imaging conditions that fall outside training distributions, triggering appropriate quality control responses when unusual imaging characteristics are encountered [122].
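A reference-free quality prediction model of the kind described above can be prototyped as a standard regression problem, mapping per-case features of an image-segmentation pair to an expected Dice score. The scikit-learn sketch below is a hedged illustration under assumed, placeholder features; it is not the implementation used by the cited frameworks.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def train_dice_predictor(features, reference_dice):
    """Fit a regressor that predicts Dice quality from image-segmentation features.

    features: array of shape (n_cases, n_features), e.g. segmented volume,
    surface-to-volume ratio, intensity statistics inside the mask (placeholder
    feature choices). reference_dice: Dice measured against expert contours on
    a validation cohort.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, reference_dice, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))  # cited target: MAE below 0.05
    return model, mae
```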

3.6. Clinical Results and Educational Impact

Recent developments in the reduction of computed tomography artifacts have benefited significantly from the integration of artificial intelligence models. Cao et al. (2024) demonstrated how a convolutional neural network-based algorithm (MARIO) improves image quality in CT-guided interventional procedures, with reductions in the conspicuity of metal artifacts by 34–46% and improvements in anatomical visualization [116]. In parallel, Reynolds et al. (2024) developed universal non-circular orbits for cone-beam CT, achieving significant reductions in artifacts from 511% to values below 20% in critical regions, with improvement in soft tissue visualization from 205% to 6–7% [123]. On the validation and quality assurance front, Zou et al. (2025) combined iterative metal artifact reduction algorithms with deep learning-based image reconstruction, demonstrating particular efficacy at high tube voltages in scenarios with metal hardware [124]. In addition, the educational impact of these AI systems has been documented in recent studies: the implementation of personalized AI-based platforms has shown significant improvements in medical students’ clinical skills, knowledge retention, and diagnostic capacity, with increased classroom engagement and optimized learning times. These converging results suggest that the integration of AI, automated validation, and quality assurance represents an emerging paradigm in contemporary clinical practice and specialized medical education.

3.7. Comparative Analysis: AI-Based vs. Traditional Reconstruction Methods

The integration of artificial intelligence into anatomical reconstruction workflows represents a paradigmatic shift in both efficiency and accuracy. While traditional manual segmentation methods remain valid, modern AI-driven approaches demonstrate quantifiable advantages across multiple metrics [29,125]. In terms of efficiency, processing times have been drastically reduced from the traditional 30–120 min per case for manual segmentation to under 4 min with AI assistance, representing a greater than 20-fold reduction that substantially mitigates operator burden and enables the scaling of workflows to clinical volumes previously considered unfeasible [126,127]. Regarding accuracy and performance, AI-based systems consistently achieve Dice similarity coefficients between 89.2% and over 92% across diverse anatomical structures, significantly outperforming the inter-observer variability inherent in manual methods, which typically ranges from 0.33 to 0.78 [125,128]. Furthermore, AI assistance enhances diagnostic reliability, identifying anatomical variants with 8–10% higher accuracy rates than experienced radiologists [29]. When AI-3D assistance is provided, inter-observer agreement has been shown to increase from κ = 0.33 to κ = 0.43 (Cohen’s kappa coefficient, a statistical measure of inter-rater reliability) for variant identification, and from κ = 0.70 to κ = 0.76 for surgical procedure planning [29]. However, the efficacy of these models depends heavily on data quality requirements; successful AI-based reconstruction mandates careful attention to imaging acquisition parameters, including a minimum slice thickness of <1.25 mm (with thinner slices < 0.5 mm preferred for complex osseous or vascular anatomy), adequate vascular contrast with arterial/venous opacification via timing-bolus or bolus tracking protocols, and mandatory ECG-gating for cardiac imaging to minimize motion artifacts, with cine sequences preferred for dynamic anatomy. Image quality must ensure minimal motion artifact and noise levels within the manufacturer’s specifications. While non-contrast CT has historically been a limitation due to poor tissue differentiation, modern AI systems employing physics-based artifact simulation now achieve high-fidelity reconstruction even under suboptimal conditions [129]. The types of applicable data for AI models have expanded to include high-resolution CT (MDCT, UHRCT, photon-counting detector CT), MRI sequences such as cardiac cine, contrast-enhanced angiography, and UTE/ZTE bone imaging, as well as multi-modal fusion combining simultaneous CT and MRI information for comprehensive tissue characterization, and research-grade imaging like micro-CT/nano-CT for ultra-detailed micro-anatomical reconstruction. Despite these capabilities, limitations and considerations persist, particularly regarding domain shift, where AI systems trained on specific imaging protocols may show reduced performance on different scanners or acquisition parameters; however, denoising autoencoders and feature engineering can detect and flag unusual imaging conditions [130]. Additionally, robust training typically requires 300–500+ expert-annotated cases with careful data augmentation strategies to address class imbalances [125], and while AI excels at identifying variants with 5–10% population prevalence, ultra-rare anomalies may exceed training set representation. 
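The Cohen's kappa values quoted above can be reproduced for any pair of readers with standard statistical tooling. The short sketch below uses scikit-learn's implementation on hypothetical per-case variant labels; the data are invented for illustration only.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical anatomical variant labels assigned per case by two readers (same case order).
reader_a = ["typical", "variant", "typical", "variant", "typical", "typical"]
reader_b = ["typical", "typical", "typical", "variant", "variant", "typical"]

kappa = cohen_kappa_score(reader_a, reader_b)  # 1.0 = perfect agreement, 0 = chance-level agreement
print(f"Cohen's kappa: {kappa:.2f}")
```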
Modern integrated workflows combine this AI efficiency with quality assurance oversight, moving from DICOM input through AI segmentation and uncertainty quantification to expert review and final model approval. Ultimately, this paradigm shift toward AI-assisted reconstruction represents not a replacement of human expertise, but rather an augmentation of clinician capabilities, enabling scaled deployment of precise anatomical models for both clinical and educational applications.

4. Advanced Bioprinting Manufacturing Technology

Anatomical model creation has progressed far beyond simple plastic reproduction to include sophisticated multi-material bioprinting technologies that precisely simulate tissue-specific mechanical properties and provide realistic tactile experiences required for comprehensive medical education. Advanced bioprinting systems use a variety of additive manufacturing techniques, each tailored to specific educational applications and anatomical needs, with technological capabilities varying significantly across platforms (Table 1) [131,132,133]. Fused Deposition Modeling (FDM) offers low-cost solutions for rigid anatomical structures and the ability to print in multiple colors to improve anatomical differentiation and educational clarity [134,135]. FDM materials, such as PLA or PETG, are significantly less expensive than other 3D printing technologies, making large-scale model production economically viable for educational institutions with limited budgets [136]. FDM provides far greater color variety than competing technologies, with PLA filament available in a wide range of colors, including multi-color filament and specialty finishes such as silk, matte, and glow-in-the-dark options, eliminating the need for post-processing painting in many cases [136,137,138]. Recent advances in FDM technology have significantly increased surface quality and detail resolution by incorporating enhanced extruder designs, optimized printing parameters, and refined post-processing procedures that reduce layer visibility and increase tactile realism. Through dual extrusion capabilities and sophisticated nozzle designs, modern FDM systems may attain layer thicknesses of 0.05–0.15 mm, allowing for the fabrication of anatomically accurate structures that were previously only possible with more expensive technologies. Microscopic study of improved FDM prints demonstrates much reduced layer stepping compared to previous generation systems, while minor surface defects persist when compared to competing technologies, such as stereolithography [135,139].
Stereolithography (SLA) and related vat photopolymerization technologies offer superior surface quality and detail resolution, which are required for models with precise anatomical features and smooth surface finishes that improve both visual and tactile educational value [140,141]. SLA uses extremely precise laser beams to generate very thin layer thicknesses of about 0.02 mm, allowing for the replication of minute complex features with realistic finishes and great dimensional precision for components up to several hundred millimeters in size [142]. These technologies excel at constructing complicated internal geometries and precise anatomical details, which are difficult to create using traditional production methods or competing 3D printing technologies. Microscopic analysis reveals that SLA-printed materials consistently exhibit superior surface finish, dimensional accuracy, and microstructural integrity when compared to FDM alternatives, resulting in higher tensile and flexural strengths for rigid resins and greater elasticity and surface resilience for flexible formulations [142]. Recent advances in SLA materials have expanded applications to include transparent and flexible resins that allow visualization of internal structures while maintaining appropriate mechanical properties for educational manipulation, with some specialized elastomeric formulations achieving elongation at break of more than 200% [142,143].
PolyJet technology facilitates the concurrent printing of stiff photopolymers and elastomeric resins in exact spatial configurations on a single build platform [144,145]. The technology attains layer thicknesses of 16 μm, facilitating high precision [144,145], and has exhibited a print precision of 30.4 μm [144,145,146]. Custom PolyJet systems concurrently fabricate stiff polymeric supports alongside elastomeric components, facilitating tailored mechanical qualities without the necessity for post-assembly [144,147]. Validation tests affirm PolyJet’s proficiency in producing models appropriate for tactile and imaging-based clinical instruction [148]. However, PolyJet’s economic constraints significantly limit accessibility for medical education programs. Material costs substantially exceed competing technologies, equipment investment requirements render adoption prohibitive for many institutions, and specialized post-processing infrastructure necessitates additional facility development and waste management protocols. These limitations have motivated researchers to explore alternative multi-material printing approaches and cost-effective configurations for anatomical model production [149,150]. Digital light processing (DLP) technology utilizes digital micromirror devices to polymerize entire layers concurrently, attaining dimensional accuracy measurements of 46.2 μm in trueness and 43.6 μm in precision for dental applications [151]. The concurrent layer polymerization technique eradicates scanning artifacts typical of laser-based SLA systems, resulting in very uniform layer-to-layer characteristics. DLP demonstrates exceptional cost-effectiveness, with material costs approximately one-tenth to one-fifteenth those of PolyJet, while maintaining precision measurements clinically appropriate for dental applications [151]. This cost advantage renders DLP ideal for dental prostheses, orthodontic models, and small-scale anatomical replicas under budgetary constraints. Its principal restriction is a material selection largely confined to specialist dental photoresins, with few elastomeric alternatives, making DLP inappropriate for multi-material anatomical models. Moreover, diminutive build platforms restrict applications to oral models and minor anatomical components, excluding full-scale organ models [151,152].
Binder jetting technology constitutes a unique manufacturing approach, selectively applying liquid adhesive to powder particles via adhesive chemistry instead of heat or photochemical methods [153]. This method facilitates intricate internal geometries without the need for detachable support material, as unsintered powder offers structural support. Binder jetting received regulatory acknowledgment in 2015 when the United States FDA approved 3D printing as a pharmaceutical production method, with Spritam® (levetiracetam) being the first commercial 3D-printed medicinal product [154]. This regulatory approval substantially differentiates binder jetting from other 3D printing processes, facilitating direct pharmaceutical applications. Binder jetting also produces patient-specific bone implants featuring intricate porosity patterns, and hydroxyapatite and calcium phosphate formulations yield tissue scaffolds with mechanical qualities that closely resemble those of native trabecular bone. Recent advancements include polycaprolactone (PCL) infiltration, significantly enhancing compressive strength and toughness [155,156]. Key limitations include delicate green-state components necessitating infiltration and sintering (spanning days to weeks), rough surface finishes (80–100 μm precision), and intricate material waste management—impeding rapid prototyping applications for anatomical teaching [155,156].
Inkjet bioprinting produces cell-laden biomaterial suspensions (bioinks) that contain viable, spatially structured living cells, an approach fundamentally different from the production of anatomical phantoms [157,158]. Piezoelectric systems dispense bioink droplets layer by layer with high placement accuracy. Bioinks utilize natural polymers (such as alginate, gelatin, and collagen) or synthetic polymers (including PEG, PCL, and PLGA) combined with mammalian cells [159]. Applications encompass tissue engineering and individualized pharmacological testing. Nevertheless, inkjet bioprinting is incompatible with routine anatomical teaching, since it necessitates sterile conditions, physiological culture media, and prolonged maturation periods. The costs associated with inkjet bioprinting render large-scale batch production economically unfeasible when compared to conventional anatomical phantom manufacturing [160].

4.1. Multi-Material Bioprinting and Comparative Technology Selection Framework

The most significant advancement in anatomical model manufacturing is the use of multi-material bioprinting systems that can process rigid and flexible materials at the same time to simulate tissue-specific compliance and accurate mechanical behavior of various anatomical structures (Figure 2) [161,162]. Custom bioprinting platforms using thermoplastic polymers, typically polylactic acid (PLA) or acrylonitrile butadiene styrene (ABS), combined with medical-grade silicone elastomers, allow for the creation of anatomical models in which osseous structures maintain appropriate rigidity while soft tissues exhibit realistic flexibility and tactile response characteristic of living tissue [150,163,164]. These advanced systems use sophisticated material blending and infill structuring strategies to create graduated compliance transitions between different tissue types, significantly improving educational realism and providing authentic haptic feedback during manipulation and examination. They combine extrusion-based deposition technology with dual-extruder capabilities, allowing precise spatial control over material distribution and infill structure to obtain the desired mechanical qualities [163]. Validation studies demonstrate that multi-material printing using PLA supports with silicone elastomer infills can achieve elastic moduli ranging from 0.26 to 0.37 MPa for soft tissue simulation, closely matching biological soft tissue properties, and mechanical testing confirms that viscoelastic behavior, including loss modulus, can be precisely tuned through controlled infill density and material composition [163,165,166,167]. This method is especially useful for organ systems where both stiffness and damping properties are required for precise simulation of physiological activity and tissue-specific resistance during manipulation. In addition to mechanical realism, silicone materials printed at variable infill densities can approach tissue-equivalent Hounsfield unit values and attenuation characteristics when compared with reference standards [165,166,167]. These radiological features allow anatomical models to serve a dual teaching purpose: providing authentic tactile feedback during hands-on manipulation while producing accurate representations when imaged with the computed tomography scanners used in clinical practice.
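As a rough first-order illustration of how infill density can tune compliance, a rule-of-mixtures estimate relates the silicone volume fraction to an effective modulus. This is a simplified approximation for intuition only, not the calibration procedure used in the cited validation studies, which rely on mechanical testing; the silicone modulus value is assumed for illustration.

```python
def effective_modulus(infill_fraction, e_silicone_mpa=0.30):
    """First-order (rule-of-mixtures) estimate of effective modulus in MPa.

    Treats unfilled lattice space as contributing negligible stiffness; real
    prints also depend on lattice geometry and the rigid PLA support structure.
    """
    return infill_fraction * e_silicone_mpa

for phi in (0.85, 0.95, 1.00):
    print(f"silicone infill {phi:.0%}: ~{effective_modulus(phi):.2f} MPa")
```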
The selection of technology is contingent upon educational objectives, budget limitations, accuracy demands, and anatomical intricacy. FDM is the most efficient method for skeletal anatomy education when cost-efficiency is paramount. SLA offers enhanced precision for vascular and cardiac anatomy. DLP provides an outstanding cost-to-accuracy ratio for dental applications. Selective laser sintering (SLS) is superior for intricate anatomical designs necessitating internal voids, as it requires no detachable supports. Binder jetting is utilized in specific pharmaceutical applications. Inkjet bioprinting is confined to tissue engineering, where the integration of living cells takes precedence over anatomical precision. Combining complementary technologies optimizes resource allocation: well-funded institutions can justify PolyJet investment for high-precision applications while retaining FDM for economical prototyping, and research-oriented institutions may utilize binder jetting for pharmaceutical development or bioprinting frameworks for tissue engineering initiatives that extend beyond conventional anatomical teaching, thereby enhancing educational value while ensuring financial prudence.

4.2. Cell-Laden Bioprinting and Its Educational Significance Beyond Anatomical Replication

Anatomical teaching has historically emphasized structural precision; cell-laden bioprinting, however, incorporates living cellular elements to investigate tissue formation, pathology, and therapeutic pathways [168]. This technology combines living mammalian cells with biocompatible bioinks to produce functional tissue constructs, using natural polymers such as alginate and collagen for high viability (>80%) or synthetic options such as PEG and PCL for adjustable mechanics [169]. Educationally, this moves beyond visualization alone: it enables the study of developmental biology through 3D constructs that mimic embryonic microenvironments and cell–cell interactions [168,170]; it supports disease modeling by recreating pathological states, such as cancer microenvironments, in which disease progression can be followed over time and compared with healthy tissue [168,171]; and it facilitates therapeutic modeling, including drug efficacy evaluation and personalized medicine applications utilizing patient-derived cells [150,158]. Current piezoelectric inkjet systems maintain cell viability of 65–90%, with spatial resolutions of 20–100 μm and print speeds of up to 10,000 cells per second [172], and can produce instructional tissue volumes within 1–4 weeks [172]. However, implementation faces significant hurdles: substantial infrastructure requirements (biosafety cabinets, incubators), high material and equipment costs compared with conventional models, and the need for specialized expertise in cell culture [173,174]. In addition, tissue maturation times are typically incompatible with fast-paced learning cycles, and reproducibility remains a challenge [173,174]. Even with these drawbacks, cell-laden bioprinting marks a significant transition from morphological to functional tissue education, integrating fundamental biology with clinical application in regenerative medicine and individualized care.

5. Educational Application and Learning Outcomes

5.1. Normal Anatomical Education and Spatial Visualization Improvement

Extensive controlled studies across multiple institutions consistently show that 3D-printed anatomical models supplement traditional teaching methodologies, with particular effectiveness in complex three-dimensional anatomical regions where spatial relationships are critical for understanding (Table 2) [20,175]. Randomized controlled trials comparing 3D-printed models to cadaveric specimens show that students who use 3D models achieve significantly higher post-test scores, especially in spatial anatomy assessments that require three-dimensional visualization skills [20,175]. Cardiac anatomy education is a prime example of the educational value of 3D printing: blinded randomized controlled trials involving medical students revealed that groups using 3D-printed heart models achieved the highest examination scores (60.83%) compared with cadaveric (44.81%), atlas-based, or combined learning approaches [20]. Cardiac 3D models are educationally effective because students can manipulate and view complicated cardiac structures from numerous perspectives, allowing a thorough grasp of chamber interconnections, valve mechanisms, and vascular connections [20]. Manufacturing costs for cardiac models remain remarkably low ($14–50 per model), making high-quality cardiac education available across a wide range of educational institutions, independent of budget constraints [176]. Cranial and skeletal anatomy education offers comparable educational benefits, with studies showing that color-coded 3D-printed skull models outperform traditional cadaveric or atlas-based learning approaches [177]. Color-coded 3D-printed skulls outperformed cadaveric skulls (29.5 [IQR: 25–33], p = 0.044) and atlas-based learning (27.75 [IQR: 24.125–32]), with significant advantages in structural recognition (p = 0.046) [177]. Color-coding dramatically improved learning effectiveness by lowering extraneous cognitive load and increasing structural distinction and student comprehension [178].

5.2. Pathological and Variant Anatomy Visualization

Three-dimensional printing technology is particularly useful for providing educational access to rare pathological conditions and anatomical variations that may be unavailable or inconsistently represented in traditional cadaveric collections [181]. The ability to create models from patient imaging data allows medical educators to expose students to specific pathological conditions, congenital anomalies, and anatomical variants that are required for clinical competency but are rarely available in traditional teaching materials. Congenital anomaly education greatly benefits from 3D printing technology, with fetal anatomy models created from imaging data providing risk-free exposure to developmental anatomy while addressing cultural, emotional, and ethical concerns associated with actual fetal specimens [182]. Three-dimensional fetal heart models generated from fetal echocardiography and MRI blood flow data allow for the visualization and quantification of intracardiac blood flow profiles in vitro, which aids in the diagnosis of congenital cardiac abnormalities such as hypoplastic left heart syndrome [182]. In clinical investigations of prenatal anomalies, 3D-printed models decreased physician misdiagnosis rates from 5% to 0.4% and student misdiagnosis from 17.9% to 0.4%, with average variations of 0.1 mm from source imaging [183]. Fracture pattern visualization is an important educational application in which 3D-printed bone fracture models help students grasp complex trauma patterns through hands-on manipulation and investigation [184]. Patient-specific fracture models built from CT scans provide visualization of complex fracture patterns that are difficult to interpret using radiography images alone, offering a three-dimensional understanding of fracture mechanics and fragment interactions [184]. Three-dimensional printed models of articular fractures aided surgical planning and preoperative simulations, resulting in a 15% reduction in surgical time and better resident performance during fracture surgery [185].

5.3. Clinical Integration and Radiological Interpretation Skills

Advanced 3D anatomical models help students make the critical transition from theoretical anatomical knowledge to clinical application by allowing them to connect three-dimensional anatomical understanding with clinical imaging interpretation [186]. Studies suggest that students exposed to CT/MRI-derived 3D models perform markedly better in radiological interpretation tests and develop superior spatial visualization abilities that translate directly into clinical competency [186]. The combination of 3D segmentation models and augmented reality technology allows for interactive learning experiences in which students can manipulate virtual models alongside physical specimens or digital overlays, providing a comprehensive understanding of anatomical relationships from multiple perspectives [187,188]. Students exposed to 3D/AR models produced from DICOM data with slice thicknesses of 1 mm or less perform better in radiological anatomy assessments and report greater confidence in clinical imaging interpretation [189]. Contemporary instructional uses include dynamic imaging correlation, in which students observe how anatomical structures appear across different imaging modalities while examining corresponding 3D-printed models or digital renderings. Cross-sectional anatomy integration approaches using Anatomage virtual human tables in conjunction with traditional dissection and radiological imaging show that 3D digital tools improve student performance in gross anatomy while also improving their ability to correlate basic anatomical science with clinical imaging [188]. Rigorous educational validation is an essential component of 3D-printed anatomical model implementation, requiring comprehensive assessment methodologies that quantify learning outcomes across diverse student populations. Meta-analyses of educational efficacy studies consistently show that integrating 3D-printed models improves knowledge assessment scores, spatial visualization abilities, and short-term knowledge retention more than traditional teaching methods [22]. Contemporary validation studies using randomized controlled trials, blinded evaluations, and standardized outcome measurements provide strong evidence of educational efficacy. Students using 3D-printed anatomical models outperform peers taught with traditional methods by 8–15% on post-test scores, with particularly strong gains in spatial visualization skills [190]. Across multiple surveys, student satisfaction rates regularly exceed 80%, with especially high ratings for model realism and educational utility [190,191]. For specialist populations such as orthopedic residents, 3D-printed patient-specific anatomical models provide demonstrable improvements in learning outcomes, with 85.6% reporting improved understanding of complex anatomical systems. First-year residents reported higher satisfaction (mean score 7.9) than advanced trainees, suggesting particular benefit for early-stage learners [192]. Physical manipulation of models received the highest educational value rating (mean score 8.1), and 76.3% of residents preferred small-group teaching settings (4–6 participants) to maximize educational efficacy [192].
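Because the studies above tie assessment performance to models derived from DICOM data with slice thicknesses of 1 mm or less, a simple acquisition check can be scripted before segmentation begins. The sketch below, which assumes a hypothetical folder of DICOM slices and uses the pydicom library, flags any slice whose reported SliceThickness exceeds a chosen limit; the folder path and the 1 mm limit are illustrative values only.

# Minimal pre-segmentation quality check (illustrative): verify that every
# slice in a DICOM series reports a thickness suitable for 3D reconstruction.
# The folder path and the 1 mm limit are example values taken from the text.
import glob
import pydicom

MAX_THICKNESS_MM = 1.0  # educational models discussed here used <= 1 mm slices

def check_series(folder: str) -> bool:
    """Return True if all slices in the folder meet the thickness limit."""
    ok = True
    for path in sorted(glob.glob(f"{folder}/*.dcm")):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # metadata only
        thickness = float(getattr(ds, "SliceThickness", float("inf")))
        if thickness > MAX_THICKNESS_MM:
            print(f"{path}: slice thickness {thickness:.2f} mm exceeds limit")
            ok = False
    return ok

if __name__ == "__main__":
    if check_series("ct_series"):
        print("Series suitable for high-fidelity 3D reconstruction")

Checks of this kind are cheap to run and can prevent wasted segmentation and printing effort on acquisitions that cannot support the anatomical detail the educational model is meant to convey.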

5.4. 4D Printing and Dynamic Anatomical Modeling: Emerging Frontiers in Educational Innovation

While traditional 3D printing creates static models, 4D printing introduces a temporal dimension, allowing structures to transform, self-assemble, or adapt in response to stimuli like temperature, pH, or moisture using smart materials such as shape-memory polymers [193,194]. This technology holds significant potential for tissue engineering—enabling scaffolds that change shape at body temperature or stents that adjust to vessel diameter—and for drug delivery systems with physiologically triggered release mechanisms [195]. Educationally, 4D printing offers unique opportunities to simulate dynamic physiological processes (e.g., cardiac cycles, embryonic development) and disease progression through time-dependent transformations [194]. However, widespread adoption is currently hindered by a limited range of biocompatible materials, lower resolution compared to standard 3D printing, high costs, and complex design requirements [196]. Despite these barriers, future integration with AR/VR and advancements in material science position 4D printing as a promising frontier for interactive, dynamic medical education [194,197,198].

6. Conclusions

Three-dimensional modeling and multi-material bioprinting technologies are radically altering the teaching of medical anatomy. These novel approaches address long-standing obstacles of traditional teaching: the shortage of anatomical specimens, ethical constraints, and uneven access to significant clinical cases. Students using three-dimensional models demonstrate consistent improvements in knowledge, understanding, and retention, and their spatial visualization skills accumulate over time, surpassing what is achieved with conventional two-dimensional instruction. AI-assisted reconstruction methods achieve high anatomical accuracy while substantially reducing processing time compared with manual segmentation of imaging data. Fabricated and bioprinted models reach high quality standards, with small dimensional margins of error. A key benefit is affordability: the fabrication of complex anatomical models has become sustainable even in low-resource settings. Multisensory learning that combines sight, touch, and movement engages multiple neural systems simultaneously, enabling deeper and more enduring knowledge than traditional techniques.
Despite this transformative potential, several fundamental constraints and implementation hurdles require careful examination. The economic implications remain significant: while desktop FDM printers are increasingly affordable, high-fidelity multi-material systems (e.g., PolyJet) and their associated post-processing infrastructure entail substantial capital and maintenance costs that may be prohibitive for many institutions. In low-resource environments, these cost constraints are typically compounded by technological impediments, including restricted access to high-performance computation for image segmentation, unreliable supply chains for printing materials, and a scarcity of specialized technical personnel. Furthermore, there is a risk of pedagogical overreliance on these digital and printed surrogates. Exclusive dependence on models may inadvertently restrict student exposure to the true biological variability, wet-tissue haptics, and pathological complexity encountered in cadaveric dissection. Consequently, 3D-printed models should be positioned strictly as powerful adjuncts that enhance, rather than replace, the comprehensive understanding acquired through traditional anatomical pedagogy.
For educators launching 3D anatomical modeling programs, a phased deployment strategy is recommended to maximize value and sustainability. Rather than seeking complete multi-system adoption immediately, institutions should prioritize high-yield anatomical systems, specifically cardiac and neuroanatomical structures, where 3D visualization delivers the greatest educational advantage over traditional methods. Infrastructure obstacles can be mitigated by employing open-source segmentation platforms (e.g., 3D Slicer) and selecting printing technologies matched to specific educational goals: cost-effective FDM printers for skeletal models and SLA systems when fine vascular detail is required. Financial sustainability demands realistic planning; while fully resourced laboratories may require an initial expenditure of $5000–$15,000, effective programs can be started at much lower cost using entry-level equipment. Crucially, these models should be integrated as supplementary tools that address specific learning gaps, such as complex spatial relationships or rare pathologies, rather than as generic substitutes for standard teaching approaches.
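As a rough illustration of the budget planning discussed above, the short sketch below estimates how many models an entry-level program could produce per year from a consumables budget, using the $14–50 per-model cost range cited earlier for cardiac models. The budget figure is a hypothetical assumption, not a benchmark.

# Back-of-the-envelope planning sketch (illustrative figures only): estimate
# annual model output for a given consumables budget using the $14-50
# per-model material cost range cited in the text.
def models_per_year(annual_consumables_budget_usd: float,
                    cost_per_model_usd: float) -> int:
    """Number of models producible per year at a given per-model material cost."""
    return int(annual_consumables_budget_usd // cost_per_model_usd)

if __name__ == "__main__":
    budget = 2000.0  # hypothetical annual consumables budget (USD)
    for cost in (14.0, 50.0):  # low and high ends of the cited range
        print(f"At ${cost:.0f}/model: ~{models_per_year(budget, cost)} models/year")

Even such crude estimates can help departments decide whether to begin with a shared entry-level printer or to justify investment in a fully resourced laboratory.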
The future of anatomical education lies in integrating these innovative tools with emerging technologies such as augmented reality and haptic feedback, creating personalized and more inclusive learning pathways.

Author Contributions

Conceptualization, S.P. (Salvatore Pezzino); writing—original draft preparation, S.P. (Salvatore Pezzino); writing—review and editing, S.P. (Stefano Puleo), C.C., and S.C.; supervision, S.P. (Salvatore Pezzino). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT: Computed tomography
MDCT: Multidetector computed tomography
kVp: Kilovolt peak
UHRCT: Ultra-high-resolution CT
micro-CT: Microfocus CT
nano-CT: Nanocomputed tomography
DECT: Dual-energy computed tomography
MRI: Magnetic resonance imaging
CE-MRA: Contrast-enhanced MR angiography
UTE: Ultra-short echo time
ZTE: Zero echo time
DICOM: Digital imaging and communications in medicine
FDM: Fused deposition modeling
SLA: Stereolithography
DLP: Digital light processing
SLS: Selective laser sintering
AI: Artificial intelligence
CNN: Convolutional neural networks
3D U-Net: Three-dimensional U-Net architecture
DeepMedic: Deep medical imaging
V-Net: Volumetric neural network
SMD: Standardized mean difference
RCT: Randomized controlled trial
CI: Confidence interval
vs: Versus
IQR: Interquartile range
HAP: Hydroxyapatite
Ca-P: Calcium phosphate
PCL: Polycaprolactone
PEG: Polyethylene glycol
PLGA: Poly(lactic-co-glycolic acid)
PLA: Polylactic acid
PETG: Polyethylene terephthalate glycol
TPU: Thermoplastic polyurethane
ABS: Acrylonitrile butadiene styrene
HU: Hounsfield unit
DOI: Digital Object Identifier
AR: Augmented reality
VR: Virtual reality

References

  1. Ghosh, S.K. Cadaveric Dissection as an Educational Tool for Anatomical Sciences in the 21st Century. Anat. Sci. Educ. 2017, 10, 286–299. [Google Scholar] [CrossRef] [PubMed]
  2. Valcke, J.; Csík, L.B.; Säflund, Z.; Nagy, A.; Eltayb, A. Anatomy Education at Central Europe Medical Schools: A Qualitative Analysis of Educators’ Pedagogical Knowledge, Methods, Practices, and Challenges. BMC Med. Educ. 2025, 25, 1173. [Google Scholar] [CrossRef]
  3. Ail, G.; Freer, F.; Chan, C.S.; Jones, M.; Broad, J.; Canale, G.P.; Elston, P.; Leeney, J.; Vickerton, P. A Comparison of Virtual Reality Anatomy Models to Prosections in Station-Based Anatomy Teaching. Anat. Sci. Educ. 2024, 17, 763–769. [Google Scholar] [CrossRef]
  4. Wirtu, A.T.; Manjatika, A.T. Challenges in Sourcing Bodies for Anatomy Education and Research in Ethiopia: Pre and Post COVID-19 Scenarios. Ann. Anat.-Anat. Anz. 2024, 254, 152234. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, D.; Zhang, Q.; Deng, J.; Cai, Y.; Huang, J.; Li, F.; Xiong, K. A Shortage of Cadavers: The Predicament of Regional Anatomy Education in Mainland China. Anat. Sci. Educ. 2018, 11, 397–402. [Google Scholar] [CrossRef]
  6. Quiroga-Garza, A.; Reyes-Hernández, C.G.; Zarate-Garza, P.P.; Esparza-Hernández, C.N.; Gutierrez-de la O, J.; de la Fuente-Villarreal, D.; Elizondo-Omaña, R.E.; Guzman-Lopez, S. Willingness toward Organ and Body Donation among Anatomy Professors and Students in Mexico. Anat. Sci. Educ. 2017, 10, 589–597. [Google Scholar] [CrossRef]
  7. Gürses, İ.A.; Coşkun, O.; Öztürk, A. Current Status of Cadaver Sources in Turkey and a Wake-up Call for Turkish Anatomists. Anat. Sci. Educ. 2018, 11, 155–165. [Google Scholar] [CrossRef] [PubMed]
  8. McMenamin, P.G.; Costello, L.F.; Quayle, M.R.; Bertram, J.F.; Kaka, A.; Tefuarani, N.; Adams, J.W. Challenges of Access to Cadavers in Low- and Middle-Income Countries (LMIC) for Undergraduate Medical Teaching: A Review and Potential Solutions in the Form of 3D Printed Replicas. 3D Print. Med. 2025, 11, 28. [Google Scholar] [CrossRef]
  9. De Caro, R.; Boscolo-Berto, R.; Artico, M.; Bertelli, E.; Cannas, M.; Cappello, F.; Carpino, G.; Castorina, S.; Cataldi, A.; Cavaletti, G.A.; et al. The Italian Law on Body Donation: A Position Paper of the Italian College of Anatomists. Ann. Anat.-Anat. Anz. 2021, 238, 151761. [Google Scholar] [CrossRef]
  10. Tesfaye, S.; Hamba, N.; Kebede, W.; Bajiro, M.; Debela, L.; Nigatu, T.A.; Gerbi, A. Assessment of Ethical Compliance of Handling and Usage of the Human Body in Anatomical Facilities of Ethiopian Medical Schools. Pragmatic Obs. Res. 2021, 12, 65–80. [Google Scholar] [CrossRef]
  11. Habicht, J.L.; Kiessling, C.; Winkelmann, A. Bodies for Anatomy Education in Medical Schools: An Overview of the Sources of Cadavers Worldwide. Acad. Med. J. Assoc. Am. Med. Coll. 2018, 93, 1293–1300. [Google Scholar] [CrossRef] [PubMed]
  12. Zhou, X.; Xiong, H.; Wen, Y.; Li, F.; Hu, D. Global Trends in Cadaver Donation and Medical Education Research: Bibliometric Analysis Based on VOSviewer and CiteSpace. JMIR Med. Educ. 2025, 11, e71935. [Google Scholar] [CrossRef] [PubMed]
  13. Papa, V.; Vaccarezza, M. Teaching Anatomy in the XXI Century: New Aspects and Pitfalls. Sci. World J. 2013, 2013, 310348. [Google Scholar] [CrossRef]
  14. Meyer, A.J.; Chapman, J.A. A Slide into Obscurity? The Current State of Histology Education in Australian and Aotearoa New Zealand Medical Curricula in 2022–2023. Anat. Sci. Educ. 2024, 17, 1694–1705. [Google Scholar] [CrossRef]
  15. Pezzino, S.; Luca, T.; Castorina, M.; Puleo, S.; Castorina, S. Transforming Medical Education Through Intelligent Tools: A Bibliometric Exploration of Digital Anatomy Teaching. Educ. Sci. 2025, 15, 346. [Google Scholar] [CrossRef]
  16. Sanghera, R.; Kotecha, S. The Educational Value in the Development and Printing of 3D Medical Models—A Medical Student’s Perspective. Med. Sci. Educ. 2022, 32, 1563–1564. [Google Scholar] [CrossRef]
  17. Wilk, R.; Likus, W.; Hudecki, A.; Syguła, M.; Różycka-Nechoritis, A.; Nechoritis, K. What Would You like to Print? Students’ Opinions on the Use of 3D Printing Technology in Medicine. PLoS ONE 2020, 15, e0230851. [Google Scholar] [CrossRef] [PubMed]
  18. Pujol, S.; Baldwin, M.; Nassiri, J.; Kikinis, R.; Shaffer, K. Using 3D Modeling Techniques to Enhance Teaching of Difficult Anatomical Concepts. Acad. Radiol. 2016, 23, 507–516. [Google Scholar] [CrossRef]
  19. Salazar, D.; Thompson, M.; Rosen, A.; Zuniga, J. Using 3D Printing to Improve Student Education of Complex Anatomy: A Systematic Review and Meta-Analysis. Med. Sci. Educ. 2022, 32, 1209–1218. [Google Scholar] [CrossRef]
  20. Lim, K.H.; Loo, Z.Y.; Goldie, S.J.; Adams, J.W.; McMenamin, P.G. Use of 3D Printed Models in Medical Education: A Randomized Control Trial Comparing 3D Prints versus Cadaveric Materials for Learning External Cardiac Anatomy. Anat. Sci. Educ. 2016, 9, 213–221. [Google Scholar] [CrossRef]
  21. Zafošnik, U.; Cerovečki, V.; Stojnić, N.; Belec, A.P.; Klemenc-Ketiš, Z. Developing a Competency Framework for Training with Simulations in Healthcare: A Qualitative Study. BMC Med. Educ. 2024, 24, 180. [Google Scholar] [CrossRef]
  22. Ye, Z.; Jiang, H.; Bai, S.; Wang, T.; Yang, D.; Hou, H.; Zhang, Y.; Yi, S. Meta-Analyzing the Efficacy of 3D Printed Models in Anatomy Education. Front. Bioeng. Biotechnol. 2023, 11, 1117555. [Google Scholar] [CrossRef] [PubMed]
  23. Backhouse, S.; Taylor, D.; Armitage, J.A. Is This Mine to Keep? Three-Dimensional Printing Enables Active, Personalized Learning in Anatomy. Anat. Sci. Educ. 2019, 12, 518–528. [Google Scholar] [CrossRef]
  24. Tripodi, N.; Kelly, K.; Husaric, M.; Wospil, R.; Fleischmann, M.; Johnston, S.; Harkin, K. The Impact of Three-Dimensional Printed Anatomical Models on First-Year Student Engagement in a Block Mode Delivery. Anat. Sci. Educ. 2020, 13, 769–777. [Google Scholar] [CrossRef] [PubMed]
  25. Xie, G.; Wang, T.; Fu, H.; Liu, D.; Deng, L.; Zheng, X.; Li, L.; Liao, J. The Role of Three-Dimensional Printing Models in Medical Education: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. BMC Med. Educ. 2025, 25, 826. [Google Scholar] [CrossRef] [PubMed]
  26. Freiser, M.E.; Ghodadra, A.; Hirsch, B.E. Operable, Low-Cost, High-Resolution, Patient-Specific 3D Printed Temporal Bones for Surgical Simulation and Evaluation. Ann. Otol. Rhinol. Laryngol. 2021, 130, 1044–1051. [Google Scholar] [CrossRef]
  27. Kumar, A.; Singh, P. Innovating Medical Education Using a Cost Effective and Easy-to-Use Virtual Reality-Based Simulator for Medical Training. Sci. Rep. 2025, 15, 1234. [Google Scholar] [CrossRef]
  28. Nguyen, P.; Stanislaus, I.; McGahon, C.; Pattabathula, K.; Bryant, S.; Pinto, N.; Jenkins, J.; Meinert, C. Quality Assurance in 3D-Printing: A Dimensional Accuracy Study of Patient-Specific 3D-Printed Vascular Anatomical Models. Front. Med. Technol. 2023, 5, 1097850. [Google Scholar] [CrossRef]
  29. Chen, X.; Dai, C.; Peng, M.; Wang, D.; Sui, X.; Duan, L.; Wang, X.; Wang, X.; Weng, W.; Wang, S.; et al. Artificial Intelligence Driven 3D Reconstruction for Enhanced Lung Surgery Planning. Nat. Commun. 2025, 16, 4086. [Google Scholar] [CrossRef] [PubMed]
  30. Godin, A.; Molina, J.C.; Morisset, J.; Liberman, M. The Future of Surgical Lung Biopsy: Moving from the Operating Room to the Bronchoscopy Suite. Curr. Chall. Thorac. Surg. 2019, 1, 1–12. [Google Scholar] [CrossRef]
  31. Hussain, S.; Mubeen, I.; Ullah, N.; Shah, S.S.U.D.; Khan, B.A.; Zahoor, M.; Ullah, R.; Khan, F.A.; Sultan, M.A. Modern Diagnostic Imaging Technique Applications and Risk Factors in the Medical Field: A Review. BioMed Res. Int. 2022, 2022, 5164970. [Google Scholar] [CrossRef]
  32. Florkow, M.C.; Willemsen, K.; Mascarenhas, V.V.; Oei, E.H.G.; van Stralen, M.; Seevinck, P.R. Magnetic Resonance Imaging Versus Computed Tomography for Three-Dimensional Bone Imaging of Musculoskeletal Pathologies: A Review. J. Magn. Reson. Imaging 2022, 56, 11–34. [Google Scholar] [CrossRef] [PubMed]
  33. McCollough, C.H.; Rajiah, P.S. Milestones in CT: Past, Present, and Future. Radiology 2023, 309, e230803. [Google Scholar] [CrossRef]
  34. Machida, H.; Tanaka, I.; Fukui, R.; Shen, Y.; Ishikawa, T.; Tate, E.; Ueno, E. Current and Novel Imaging Techniques in Coronary CT. Radiographics 2015, 35, 991–1010. [Google Scholar] [CrossRef]
  35. Burrill, J.; Dabbagh, Z.; Gollub, F.; Hamady, M. Multidetector Computed Tomographic Angiography of the Cardiovascular System. Postgrad. Med. J. 2007, 83, 698–704. [Google Scholar] [CrossRef]
  36. George, R.T.; Silva, C.; Cordeiro, M.A.S.; DiPaula, A.; Thompson, D.R.; McCarthy, W.F.; Ichihara, T.; Lima, J.A.C.; Lardo, A.C. Multidetector Computed Tomography Myocardial Perfusion Imaging During Adenosine Stress. J. Am. Coll. Cardiol. 2006, 48, 153–160. [Google Scholar] [CrossRef] [PubMed]
  37. Felipe, V.C.; Barbosa, P.N.V.P.; Chojniak, R.; Bitencourt, A.G.V. Evaluating Multidetector Row CT for Locoregional Staging in Individuals with Locally Advanced Breast Cancer. Radiol. Cardiothorac. Imaging 2025, 7, e240008. [Google Scholar] [CrossRef] [PubMed]
  38. Alabousi, M.; McInnes, M.D.; Salameh, J.-P.; Satkunasingham, J.; Kagoma, Y.K.; Ruo, L.; Meyers, B.M.; Aziz, T.; van der Pol, C.B. MRI vs. CT for the Detection of Liver Metastases in Patients With Pancreatic Carcinoma: A Comparative Diagnostic Test Accuracy Systematic Review and Meta-Analysis. J. Magn. Reson. Imaging 2021, 53, 38–48. [Google Scholar] [CrossRef]
  39. Ferrari, V.; Carbone, M.; Cappelli, C.; Boni, L.; Melfi, F.; Ferrari, M.; Mosca, F.; Pietrabissa, A. Value of Multidetector Computed Tomography Image Segmentation for Preoperative Planning in General Surgery. Surg. Endosc. 2012, 26, 616–626. [Google Scholar] [CrossRef]
  40. Wu, W.; Budovec, J.; Foley, W.D. Prospective and Retrospective ECG Gating for Thoracic CT Angiography: A Comparative Study. Am. J. Roentgenol. 2009, 193, 955–963. [Google Scholar] [CrossRef]
  41. Rogers, T.; Campbell-Washburn, A.E.; Ramasawmy, R.; Yildirim, D.K.; Bruce, C.G.; Grant, L.P.; Stine, A.M.; Kolandaivelu, A.; Herzka, D.A.; Ratnayaka, K.; et al. Interventional Cardiovascular Magnetic Resonance: State-of-the-Art. J. Cardiovasc. Magn. Reson. 2023, 25, 48. [Google Scholar] [CrossRef]
  42. Kumamaru, K.K.; Hoppel, B.E.; Mather, R.T.; Rybicki, F.J. CT Angiography: Current Technology and Clinical Use. Radiol. Clin. N. Am. 2010, 48, 213–235. [Google Scholar] [CrossRef]
  43. Altmann, S.; Abello Mercado, M.A.; Ucar, F.A.; Kronfeld, A.; Al-Nawas, B.; Mukhopadhyay, A.; Booz, C.; Brockmann, M.A.; Othman, A.E. Ultra-High-Resolution CT of the Head and Neck with Deep Learning Reconstruction—Assessment of Image Quality and Radiation Exposure and Intraindividual Comparison with Normal-Resolution CT. Diagnostics 2023, 13, 1534. [Google Scholar] [CrossRef]
  44. Oostveen, L.J.; Boedeker, K.L.; Brink, M.; Prokop, M.; de Lange, F.; Sechopoulos, I. Physical Evaluation of an Ultra-High-Resolution CT Scanner. Eur. Radiol. 2020, 30, 2552–2560. [Google Scholar] [CrossRef]
  45. Schuijf, J.D.; Lima, J.A.C.; Boedeker, K.L.; Takagi, H.; Tanaka, R.; Yoshioka, K.; Arbab-Zadeh, A. CT Imaging with Ultra-High-Resolution: Opportunities for Cardiovascular Imaging in Clinical Practice. J. Cardiovasc. Comput. Tomogr. 2022, 16, 388–396. [Google Scholar] [CrossRef]
  46. Shelmerdine, S.C.; Simcock, I.C.; Hutchinson, J.C.; Aughwane, R.; Melbourne, A.; Nikitichev, D.I.; Ong, J.; Borghi, A.; Cole, G.; Kingham, E.; et al. 3D Printing from Microfocus Computed Tomography (Micro-CT) in Human Specimens: Education and Future Implications. Br. J. Radiol. 2018, 91, 20180306. [Google Scholar] [CrossRef] [PubMed]
  47. Yu, B.; Gauthier, R.; Olivier, C.; Villanova, J.; Follet, H.; Mitton, D.; Peyrin, F. 3D Quantification of the Lacunocanalicular Network on Human Femoral Diaphysis through Synchrotron Radiation-Based nanoCT. J. Struct. Biol. 2024, 216, 108111. [Google Scholar] [CrossRef] [PubMed]
  48. Peyrin, F.; Dong, P.; Pacureanu, A.; Langer, M. Micro- and Nano-CT for the Study of Bone Ultrastructure. Curr. Osteoporos. Rep. 2014, 12, 465–474. [Google Scholar] [CrossRef] [PubMed]
  49. Sandrini, C.; Lombardi, C.; Shearn, A.I.U.; Ordonez, M.V.; Caputo, M.; Presti, F.; Luciani, G.B.; Rossetti, L.; Biglino, G. Three-Dimensional Printing of Fetal Models of Congenital Heart Disease Derived From Microfocus Computed Tomography: A Case Series. Front. Pediatr. 2020, 7, 567. [Google Scholar] [CrossRef]
  50. Scott, A.E.; Vasilescu, D.M.; Seal, K.A.D.; Keyes, S.D.; Mavrogordato, M.N.; Hogg, J.C.; Sinclair, I.; Warner, J.A.; Hackett, T.-L.; Lackie, P.M. Three Dimensional Imaging of Paraffin Embedded Human Lung Tissue Samples by Micro-Computed Tomography. PLoS ONE 2015, 10, e0126230. [Google Scholar] [CrossRef]
  51. Ito, M. Assessment of Bone Quality Using Micro-Computed Tomography (Micro-CT) and Synchrotron Micro-CT. J. Bone Miner. Metab. 2005, 23, 115–121. [Google Scholar] [CrossRef]
  52. Lombardi, C.M.; Zambelli, V.; Botta, G.; Moltrasio, F.; Cattoretti, G.; Lucchini, V.; Fesslova, V.; Cuttin, M.S. Postmortem Microcomputed Tomography (Micro-CT) of Small Fetuses and Hearts. Ultrasound Obstet. Gynecol. Off. J. Int. Soc. Ultrasound Obstet. Gynecol. 2014, 44, 600–609. [Google Scholar] [CrossRef] [PubMed]
  53. Park, S.S.; Chunta, J.L.; Robertson, J.M.; Martinez, A.A.; Oliver Wong, C.-Y.; Amin, M.; Wilson, G.D.; Marples, B. MicroPET/CT Imaging of an Orthotopic Model of Human Glioblastoma Multiforme and Evaluation of Pulsed Low-Dose Irradiation. Int. J. Radiat. Oncol. Biol. Phys. 2011, 80, 885–892. [Google Scholar] [CrossRef]
  54. Kim, A.J.; Francis, R.; Liu, X.; Devine, W.A.; Ramirez, R.; Anderton, S.J.; Wong, L.Y.; Faruque, F.; Gabriel, G.C.; Chung, W.; et al. Microcomputed Tomography Provides High Accuracy Congenital Heart Disease Diagnosis in Neonatal and Fetal Mice. Circ. Cardiovasc. Imaging 2013, 6, 551–559. [Google Scholar] [CrossRef] [PubMed]
  55. Hutchinson, J.C.; Barrett, H.; Ramsey, A.T.; Haig, I.G.; Guy, A.; Sebire, N.J.; Arthurs, O.J. Virtual Pathological Examination of the Human Fetal Kidney Using Micro-CT. Ultrasound Obstet. Gynecol. Off. J. Int. Soc. Ultrasound Obstet. Gynecol. 2016, 48, 663–665. [Google Scholar] [CrossRef]
  56. Hutchinson, J.C.; Shelmerdine, S.C.; Simcock, I.C.; Sebire, N.J.; Arthurs, O.J. Early Clinical Applications for Imaging at Microscopic Detail: Microfocus Computed Tomography (Micro-CT). Br. J. Radiol. 2017, 90, 20170113. [Google Scholar] [CrossRef]
  57. Kampschulte, M.; Langheinirch, A.C.; Sender, J.; Litzlbauer, H.D.; Althöhn, U.; Schwab, J.D.; Alejandre-Lafont, E.; Martels, G.; Krombach, G.A. Nano-Computed Tomography: Technique and Applications. RöFo-Fortschr. Geb. Rontgenstr. Nuklearmed 2016, 188, 146–154. [Google Scholar] [CrossRef] [PubMed]
  58. Khoury, B.M.; Bigelow, E.M.R.; Smith, L.M.; Schlecht, S.H.; Scheller, E.L.; Andarawis-Puri, N.; Jepsen, K.J. The Use of Nano-Computed Tomography to Enhance Musculoskeletal Research. Connect. Tissue Res. 2015, 56, 106–119. [Google Scholar] [CrossRef]
  59. Tatsugami, F.; Higaki, T.; Nakamura, Y.; Honda, Y.; Awai, K. Dual-Energy CT: Minimal Essentials for Radiologists. Jpn. J. Radiol. 2022, 40, 547–559. [Google Scholar] [CrossRef]
  60. Naruto, N.; Itoh, T.; Noguchi, K. Dual Energy Computed Tomography for the Head. Jpn. J. Radiol. 2018, 36, 69–80. [Google Scholar] [CrossRef]
  61. Gupta, A.; Kikano, E.G.; Bera, K.; Baruah, D.; Saboo, S.S.; Lennartz, S.; Hokamp, N.G.; Gholamrezanezhad, A.; Gilkeson, R.C.; Laukamp, K.R. Dual Energy Imaging in Cardiothoracic Pathologies: A Primer for Radiologists and Clinicians. Eur. J. Radiol. Open 2021, 8, 100324. [Google Scholar] [CrossRef]
  62. Nair, J.R.; Burrows, C.; Jerome, S.; Ribeiro, L.; Larrazabal, R.; Gupta, R.; Yu, E. Dual Energy CT: A Step Ahead in Brain and Spine Imaging. Br. J. Radiol. 2020, 93, 20190872. [Google Scholar] [CrossRef]
  63. Fonseca, G.P.; Rezaeifar, B.; Lackner, N.; Haanen, B.; Reniers, B.; Verhaegen, F. Dual-Energy CT Evaluation of 3D Printed Materials for Radiotherapy Applications. Phys. Med. Biol. 2023, 68, 035005. [Google Scholar] [CrossRef]
  64. Ge, T.; Liao, R.; Medrano, M.; Politte, D.G.; Williamson, J.F.; O’Sullivan, J.A. MB-DECTNet: A Model-Based Unrolling Network for Accurate 3D Dual-Energy CT Reconstruction from Clinically Acquired Helical Scans. Phys. Med. Biol. 2023, 68, 245009. [Google Scholar] [CrossRef] [PubMed]
  65. Chen, S.; Zhong, X.; Hu, S.; Dorn, S.; Kachelrieß, M.; Lell, M.; Maier, A. Automatic Multi-Organ Segmentation in Dual-Energy CT (DECT) with Dedicated 3D Fully Convolutional DECT Networks. Med. Phys. 2020, 47, 552–562. [Google Scholar] [CrossRef]
  66. Dodda, V.C.; Kuruguntla, L.; Ravichandran, N.K.; Lee, K.-S.; Sollapur, R.; Damodaran, M.; Kumar, R.; Anilkumar, N.; Itapu, S.; Kumar, M.; et al. Overview of Photon-Counted Three-Dimensional Imaging and Related Applications. Opt. Express 2025, 33, 31211–31234. [Google Scholar] [CrossRef] [PubMed]
  67. Greffier, J.; Viry, A.; Robert, A.; Khorsi, M.; Si-Mohamed, S. Photon-Counting CT Systems: A Technical Review of Current Clinical Possibilities. Diagn. Interv. Imaging 2025, 106, 53–59. [Google Scholar] [CrossRef] [PubMed]
  68. Kopp, F.K.; Daerr, H.; Si-Mohamed, S.; Sauter, A.P.; Ehn, S.; Fingerle, A.A.; Brendel, B.; Pfeiffer, F.; Roessl, E.; Rummeny, E.J.; et al. Evaluation of a Preclinical Photon-Counting CT Prototype for Pulmonary Imaging. Sci. Rep. 2018, 8, 17386. [Google Scholar] [CrossRef]
  69. Meloni, A.; Maffei, E.; Clemente, A.; De Gori, C.; Occhipinti, M.; Positano, V.; Berti, S.; La Grutta, L.; Saba, L.; Cau, R.; et al. Spectral Photon-Counting Computed Tomography: Technical Principles and Applications in the Assessment of Cardiovascular Diseases. J. Clin. Med. 2024, 13, 2359. [Google Scholar] [CrossRef]
  70. Beckhorn, C.B.; Moya-Mendez, M.E.; Aiduk, M.; Thornton, S.; Medina, C.K.; Louie, A.D.; Overbey, D.; Cao, J.Y.; Tracy, E.T. Use of Photon-Counting CT and Three-Dimensional Printing for an Intra-Thoracic Retained Ballistic Fragment in a 9-Year-Old. Pediatr. Radiol. 2025, 55, 875–879. [Google Scholar] [CrossRef]
  71. Lugauer, F.; Wetzl, J. Magnetic Resonance Imaging. In Medical Imaging Systems: An Introductory Guide; Maier, A., Steidl, S., Christlein, V., Hornegger, J., Eds.; Springer: Cham, Switzerland, 2018; ISBN 978-3-319-96519-2. [Google Scholar]
  72. Wu, L.; Liu, F.; Li, S.; Luo, X.; Wang, Y.; Zhong, W.; Feiweier, T.; Xu, J.; Bao, H.; Shi, D.; et al. Comparison of MR Cytometry Methods in Predicting Immunohistochemical Factor Status and Molecular Subtypes of Breast Cancer. Radiol. Oncol. 2025, 59, 337–348. [Google Scholar] [CrossRef] [PubMed]
  73. Sun, Y.; Wang, C. Brain Tumor Detection Based on a Novel and High-Quality Prediction of the Tumor Pixel Distributions. Comput. Biol. Med. 2024, 172, 108196. [Google Scholar] [CrossRef] [PubMed]
  74. Mejia, E.; Sweeney, S.; Zablah, J.E. Virtual 3D Reconstruction of Complex Congenital Cardiac Anatomy from 3D Rotational Angiography. 3D Print. Med. 2025, 11, 4. [Google Scholar] [CrossRef]
  75. Hougaard, M.; Hansen, H.S.; Thayssen, P.; Antonsen, L.; Jensen, L.O. Uncovered Culprit Plaque Ruptures in Patients with ST-Segment Elevation Myocardial Infarction Assessed by Optical Coherence Tomography and Intravascular Ultrasound with iMap. JACC Cardiovasc. Imaging 2018, 11, 859–867. [Google Scholar] [CrossRef] [PubMed]
  76. Cox, B.F.; Pressman, P. Dynamic Cardiac Imaging as a Preclinical Cardiovascular Pathophysiology Teaching Aid: Facta Non Verba. Sage Open Med. 2024, 12, 20503121231225322. [Google Scholar] [CrossRef]
  77. Odille, F.; Bustin, A.; Liu, S.; Chen, B.; Vuissoz, P.-A.; Felblinger, J.; Bonnemains, L. Isotropic 3D Cardiac Cine MRI Allows Efficient Sparse Segmentation Strategies Based on 3D Surface Reconstruction. Magn. Reson. Med. 2018, 79, 2665–2675. [Google Scholar] [CrossRef]
  78. Zhang, H.L.; Maki, J.H.; Prince, M.R. 3D Contrast-enhanced MR Angiography. J. Magn. Reson. Imaging 2007, 25, 13–25. [Google Scholar] [CrossRef]
  79. Zun, Z.; Hargreaves, B.A.; Rosenberg, J.; Zaharchuk, G. Improved Multislice Perfusion Imaging with Velocity-Selective Arterial Spin Labeling. J. Magn. Reson. Imaging 2015, 41, 1422–1431. [Google Scholar] [CrossRef]
  80. Liu, Q.; Lu, J.P.; Wang, F.; Wang, L.; Jin, A.G.; Wang, J.; Tian, J.M. Visceral Artery Aneurysms: Evaluation Using 3D Contrast-Enhanced MR Angiography. Am. J. Roentgenol. 2008, 191, 826–833. [Google Scholar] [CrossRef]
  81. Maj, E.; Cieszanowski, A.; Rowiński, O.; Wojtaszek, M.; Szostek, M.; Tworus, R. Time-Resolved Contrast-Enhanced MR Angiography: Value of Hemodynamic Information in the Assessment of Vascular Diseases. Pol. J. Radiol. 2010, 75, 52–60. [Google Scholar]
  82. Aydıngöz, Ü.; Yıldız, A.E.; Ergen, F.B. Zero Echo Time Musculoskeletal MRI: Technique, Optimization, Applications, and Pitfalls. Radiographics 2022, 42, 1398–1414. [Google Scholar] [CrossRef]
  83. Larson, P.E.Z.; Han, M.; Krug, R.; Jakary, A.; Nelson, S.J.; Vigneron, D.B.; Henry, R.G.; McKinnon, G.; Kelley, D.A.C. Ultrashort Echo Time and Zero Echo Time MRI at 7T. Magn. Reson. Mater. Phys. Biol. Med. 2016, 29, 359–370. [Google Scholar] [CrossRef]
  84. Kim, M.; Park, J.E.; Kim, H.S.; Kim, N.; Park, S.Y.; Kim, Y.-H.; Kim, J.H. Spatiotemporal Habitats from Multiparametric Physiologic MRI Distinguish Tumor Progression from Treatment-Related Change in Post-Treatment Glioblastoma. Eur. Radiol. 2021, 31, 6374–6383. [Google Scholar] [CrossRef] [PubMed]
  85. Vu, B.-T.D.; Kamona, N.; Kim, Y.; Ng, J.J.; Jones, B.C.; Wehrli, F.W.; Song, H.K.; Bartlett, S.P.; Lee, H.; Rajapakse, C.S. Three Contrasts in 3 Min: Rapid, High-Resolution, and Bone-Selective UTE MRI for Craniofacial Imaging with Automated Deep-Learning Skull Segmentation. Magn. Reson. Med. 2025, 93, 245–260. [Google Scholar] [CrossRef] [PubMed]
  86. Bücking, T.M.; Hill, E.R.; Robertson, J.L.; Maneas, E.; Plumb, A.A.; Nikitichev, D.I. From Medical Imaging Data to 3D Printed Anatomical Models. PLoS ONE 2017, 12, e0178540. [Google Scholar] [CrossRef]
  87. Güzelbağ, A.N.; Baş, S.; Toprak, M.H.H.; Kangel, D.; Çoban, Ş.; Sağlam, S.; Öztürk, E. Transforming Cardiac Imaging: Can CT Angiography Replace Interventional Angiography in Tetralogy of Fallot? J. Clin. Med. 2025, 14, 1493. [Google Scholar] [CrossRef]
  88. Huang, K.; Rhee, D.J.; Ger, R.; Layman, R.; Yang, J.; Cardenas, C.E.; Court, L.E. Impact of Slice Thickness, Pixel Size, and CT Dose on the Performance of Automatic Contouring Algorithms. J. Appl. Clin. Med. Phys. 2021, 22, 168–174. [Google Scholar] [CrossRef] [PubMed]
  89. Onken, M.; Eichelberg, M.; Riesmeier, J.; Jensch, P. Digital Imaging and Communications in Medicine. In Biomedical Image Processing; Deserno, T.M., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 427–454. ISBN 978-3-642-15816-2. [Google Scholar]
  90. Hassan, K.; Dort, J.C.; Sutherland, G.R.; Chan, S. Evaluation of Software Tools for Segmentation of Temporal Bone Anatomy. Stud. Health Technol. Inform. 2016, 220, 130–133. [Google Scholar]
  91. Virzì, A.; Muller, C.O.; Marret, J.-B.; Mille, E.; Berteloot, L.; Grévent, D.; Boddaert, N.; Gori, P.; Sarnacki, S.; Bloch, I. Comprehensive Review of 3D Segmentation Software Tools for MRI Usable for Pelvic Surgery Planning. J. Digit. Imaging 2020, 33, 99–110. [Google Scholar] [CrossRef]
  92. Matsiushevich, K.; Belvedere, C.; Leardini, A.; Durante, S. Quantitative Comparison of Freeware Software for Bone Mesh from DICOM Files. J. Biomech. 2019, 84, 247–251. [Google Scholar] [CrossRef]
  93. Queisner, M.; Eisenträger, K. Surgical Planning in Virtual Reality: A Systematic Review. J. Med. Imaging 2024, 11, 062603. [Google Scholar] [CrossRef] [PubMed]
  94. Rana, M.; Buchbinder, D.; Aniceto, G.S.; Mast, G. Patient-Specific Solutions for Cranial, Midface, and Mandible Reconstruction Following Ablative Surgery: Expert Opinion and a Consensus on the Guidelines and Workflow. Craniomaxillofac. Trauma Reconstr. 2025, 18, 15. [Google Scholar] [CrossRef]
  95. Smith, M.; Faraci, A.; Bello, F. Segmentation and Generation of Patient-Specific 3D Models of Anatomy for Surgical Simulation. Stud. Health Technol. Inform. 2004, 98, 360–362. [Google Scholar]
  96. Wendo, K.; Behets, C.; Barbier, O.; Herman, B.; Schubert, T.; Raucent, B.; Olszewski, R. Dimensional Accuracy Assessment of Medical Anatomical Models Produced by Hospital-Based Fused Deposition Modeling 3D Printer. J. Imaging 2025, 11, 39. [Google Scholar] [CrossRef]
  97. Ogden, K.M.; Morabito, K.E.; Depew, P.K. 3D Printed Testing Aids for Radiographic Quality Control. J. Appl. Clin. Med. Phys. 2019, 20, 127–134. [Google Scholar] [CrossRef]
  98. Ogden, K.M.; Aslan, C.; Ordway, N.; Diallo, D.; Tillapaugh-Fay, G.; Soman, P. Factors Affecting Dimensional Accuracy of 3-D Printed Anatomical Structures Derived from CT Data. J. Digit. Imaging 2015, 28, 654–663. [Google Scholar] [CrossRef]
  99. Claudia, C.; Farida, C.; Guy, G.; Marie-Claude, M.; Carl-Eric, A. Quantitative Evaluation of an Automatic Segmentation Method for 3D Reconstruction of Intervertebral Scoliotic Disks from MR Images. BMC Med. Imaging 2012, 12, 26. [Google Scholar] [CrossRef] [PubMed]
  100. Juergensen, L.; Rischen, R.; Hasselmann, J.; Toennemann, M.; Pollmanns, A.; Gosheger, G.; Schulze, M. Insights into Geometric Deviations of Medical 3d-Printing: A Phantom Study Utilizing Error Propagation Analysis. 3D Print. Med. 2024, 10, 38. [Google Scholar] [CrossRef]
  101. Ilesanmi, A.E.; Ilesanmi, T.O.; Ajayi, B.O. Reviewing 3D Convolutional Neural Network Approaches for Medical Image Segmentation. Heliyon 2024, 10, e27398. [Google Scholar] [CrossRef] [PubMed]
  102. Wu, S.; Wu, Y.; Chang, H.; Su, F.T.; Liao, H.; Tseng, W.; Liao, C.; Lai, F.; Hsu, F.; Xiao, F. Deep Learning-Based Segmentation of Various Brain Lesions for Radiosurgery. Appl. Sci. 2021, 11, 9180. [Google Scholar] [CrossRef]
  103. Pezzino, S.; Luca, T.; Castorina, M.; Puleo, S.; Castorina, S. Current Trends and Emerging Themes in Utilizing Artificial Intelligence to Enhance Anatomical Diagnostic Accuracy and Efficiency in Radiotherapy. Prog. Biomed. Eng. 2025, 7, 032002. [Google Scholar] [CrossRef]
  104. Rayed, M.E.; Islam, S.M.S.; Niha, S.I.; Jim, J.R.; Kabir, M.M.; Mridha, M.F. Deep Learning for Medical Image Segmentation: State-of-the-Art Advancements and Challenges. Inform. Med. Unlocked 2024, 47, 101504. [Google Scholar] [CrossRef]
  105. Xu, Y.; Quan, R.; Xu, W.; Huang, Y.; Chen, X.; Liu, F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering 2024, 11, 1034. [Google Scholar] [CrossRef]
  106. Li, X.; Yu, L.; Yang, H. Accuracy and Efficiency of an Artificial Intelligence-Based Three-Dimensional Reconstruction System in Thoracic Surgery. EBioMedicine 2023, 87, 104422. [Google Scholar] [CrossRef]
  107. Herath, H.M.S.S.; Yasakethu, S.L.P.; Madusanka, N.; Yi, M.; Lee, B.-I. Comparative Analysis of Deep Learning Architectures for Macular Hole Segmentation in OCT Images: A Performance Evaluation of U-Net Variants. J. Imaging 2025, 11, 53. [Google Scholar] [CrossRef]
  108. Punn, N.S.; Agarwal, S. Modality Specific U-Net Variants for Biomedical Image Segmentation: A Survey. Artif. Intell. Rev. 2022, 55, 5845–5889. [Google Scholar] [CrossRef]
  109. Maqsood, R.; Abid, F.; Rasheed, J.; Osman, O.; Alsubai, S. Optimal Res-UNET Architecture with Deep Supervision for Tumor Segmentation. Front. Med. 2025, 12, 1593016. [Google Scholar] [CrossRef]
  110. Vijayalakshmi, S.; Manoharan, J.S.; Nivetha, B.; Sathiya, A. Multi-Task Deep Learning Framework Combining CNN: Vision Transformers and PSO for Accurate Diabetic Retinopathy Diagnosis and Lesion Localization. Sci. Rep. 2025, 15, 35076. [Google Scholar] [CrossRef] [PubMed]
  111. Mamdouh, D.; Attia, M.; Osama, M.; Mohamed, N.; Lotfy, A.; Arafa, T.; Rashed, E.A.; Khoriba, G. Advancements in Radiology Report Generation: A Comprehensive Analysis. Bioengineering 2025, 12, 693. [Google Scholar] [CrossRef] [PubMed]
  112. Adiga, S.; Dolz, J.; Lombaert, H. Anatomically-Aware Uncertainty for Semi-Supervised Image Segmentation. Med. Image Anal. 2024, 91, 103011. [Google Scholar] [CrossRef] [PubMed]
  113. Fave, X.; Cook, M.; Frederick, A.; Zhang, L.; Yang, J.; Fried, D.; Stingo, F.; Court, L. Preliminary Investigation into Sources of Uncertainty in Quantitative Imaging Features. Comput. Med. Imaging Graph. 2015, 44, 54–61. [Google Scholar] [CrossRef]
  114. Weikert, T.; Cyriac, J.; Yang, S.; Nesic, I.; Parmar, V.; Stieltjes, B. A Practical Guide to Artificial Intelligence-Based Image Analysis in Radiology. Investig. Radiol. 2020, 55, 1–7. [Google Scholar] [CrossRef]
  115. Li, W.; Diao, K.; Wen, Y.; Shuai, T.; You, Y.; Zhao, J.; Liao, K.; Lu, C.; Yu, J.; He, Y.; et al. High-Strength Deep Learning Image Reconstruction in Coronary CT Angiography at 70-kVp Tube Voltage Significantly Improves Image Quality and Reduces Both Radiation and Contrast Doses. Eur. Radiol. 2022, 32, 2912–2920. [Google Scholar] [CrossRef] [PubMed]
  116. Cao, W.; Parvinian, A.; Adamo, D.; Welch, B.; Callstrom, M.; Ren, L.; Missert, A.; Favazza, C.P. Deep Convolutional-Neural-Network-Based Metal Artifact Reduction for CT-Guided Interventional Oncology Procedures (MARIO). Med. Phys. 2024, 51, 4231–4242. [Google Scholar] [CrossRef] [PubMed]
  117. Koetzier, L.R.; Mastrodicasa, D.; Szczykutowicz, T.P. Deep Learning Image Reconstruction for CT: Technical Principles and Clinical Prospects. Radiology 2023, 306, e221257. [Google Scholar] [CrossRef]
  118. Arabi, H.; Zaidi, H. Deep Learning–Based Metal Artefact Reduction in PET/CT Imaging. Eur. Radiol. 2021, 31, 6384–6396. [Google Scholar] [CrossRef] [PubMed]
  119. Gomase, V.S.; Ghatule, A.P.; Sharma, R.; Sardana, S. Leveraging Artificial Intelligence for Data Integrity, Transparency, and Security in Technology-Enabled Improvements to Clinical Trial Data Management in Healthcare. Rev. Recent. Clin. Trials 2025, 20. [Google Scholar] [CrossRef]
  120. Shin, Y.; Lee, M.; Lee, Y.; Kim, K.; Kim, T. Artificial Intelligence-Powered Quality Assurance: Transforming Diagnostics, Surgery, and Patient Care—Innovations, Limitations, and Future Directions. Life 2025, 15, 654. [Google Scholar] [CrossRef]
  121. Wong, Y.M.; Yeap, P.L.; Ong, A.L.K.; Tuan, J.K.L.; Lew, W.S.; Lee, J.C.L.; Tan, H.Q. Machine Learning Prediction of Dice Similarity Coefficient for Validation of Deformable Image Registration. Intell.-Based Med. 2024, 10, 100163. [Google Scholar] [CrossRef]
  122. Li, G.; Jung, J.J. Deep Learning for Anomaly Detection in Multivariate Time Series: Approaches, Applications, and Challenges. Inf. Fusion 2023, 91, 93–102. [Google Scholar] [CrossRef]
  123. Reynolds, T.; Ma, Y.; Kanawati, A.; Dillon, O.; Baer, K.; Gang, G.; Stayman, J. Universal Non-Circular Cone Beam CT Orbits for Metal Artifact Reduction Imaging during Image-Guided Procedures. Sci. Rep. 2024, 14, 26274. [Google Scholar] [CrossRef] [PubMed]
  124. Zou, H.; Wang, Z.; Guo, M.; Peng, K.; Zhou, J.; Zhou, L.; Fan, B. Metal Artifact Reduction Combined with Deep Learning Image Reconstruction Algorithm for CT Image Quality Optimization: A Phantom Study. PeerJ 2025, 13, e19516. [Google Scholar] [CrossRef] [PubMed]
  125. Wasserthal, J.; Breit, H.-C.; Meyer, M.T.; Pradella, M.; Hinck, D.; Sauter, A.W.; Heye, T.; Boll, D.T.; Cyriac, J.; Yang, S.; et al. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiol. Artif. Intell. 2023, 5, e230024. [Google Scholar] [CrossRef] [PubMed]
  126. Nikhita; Bannur, D.; Cerdas, M.G.; Saeed, A.Z.; Imam, B.; Thandi, R.S.; Anusha, H.C.; Reddy, P.; Ali, R.; Nikhita, N.; et al. Efficiency of Artificial Intelligence in Three-Dimensional Reconstruction of Medical Imaging. Cureus 2025, 17, e96580. [Google Scholar] [CrossRef]
  127. van Sluis, J.; Noordzij, W.; de Vries, E.G.E.; Kok, I.C.; de Groot, D.J.A.; Jalving, M.; Lub-de Hooge, M.N.; Brouwers, A.H.; Boellaard, R. Manual Versus Artificial Intelligence-Based Segmentations as a Pre-Processing Step in Whole-Body PET Dosimetry Calculations. Mol. Imaging Biol. 2023, 25, 435–441. [Google Scholar] [CrossRef]
  128. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic Accuracy of Deep Learning in Medical Imaging: A Systematic Review and Meta-Analysis. npj Digit. Med. 2021, 4, 65. [Google Scholar] [CrossRef]
  129. Xia, J.; Zhou, Y.; Deng, W.; Kang, J.; Wu, W.; Qi, M.; Zhou, L.; Ma, J.; Xu, Y. PND-Net: Physics-Inspired Non-Local Dual-Domain Network for Metal Artifact Reduction. IEEE Trans. Med. Imaging 2024, 43, 2125–2136. [Google Scholar] [CrossRef]
  130. Guan, H.; Liu, M. Domain Adaptation for Medical Image Analysis: A Survey. IEEE Trans. Biomed. Eng. 2022, 69, 1173–1185. [Google Scholar] [CrossRef]
  131. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R. 3D Printing Applications for Healthcare Research and Development. Glob. Health J. 2022, 6, 217–226. [Google Scholar] [CrossRef]
  132. Tripathi, S.; Dash, M.; Chakraborty, R.; Lukman, H.J.; Kumar, P.; Hassan, S.; Mehboob, H.; Singh, H.; Nanda, H.S. Engineering Considerations in the Design of Tissue Specific Bioink for 3D Bioprinting Applications. Biomater. Sci. 2024, 13, 93–129. [Google Scholar] [CrossRef]
  133. Ahmadi Soufivand, A.; Faber, J.; Hinrichsen, J.; Budday, S. Multilayer 3D Bioprinting and Complex Mechanical Properties of Alginate-Gelatin Mesostructures. Sci. Rep. 2023, 13, 11253. [Google Scholar] [CrossRef]
  134. Shah, N.; Patel, A.; Sohn, M.K. Multi-Material and Multidimensional Bioprinting in Regenerative Medicine and Cancer Research. Adv. Healthc. Mater. 2025, 14, e2500475. [Google Scholar] [CrossRef]
  135. Campbell, C.; Ariff, A.; Ghersi, G. Comparative Analysis of the Mechanical Properties of FDM and SLA 3D-Printed Materials for Medical Education. SAGE Open Med. 2025, 13, 25165984251364689. [Google Scholar] [CrossRef]
  136. Storck, J.L.; Ehrmann, G.; Güth, U.; Uthoff, J.; Homburg, S.V.; Blachowicz, T.; Ehrmann, A. Investigation of Low-Cost FDM-Printed Polymers for Elevated-Temperature Applications. Polymers 2022, 14, 2826. [Google Scholar] [CrossRef] [PubMed]
  137. Frunzaverde, D.; Cojocaru, V.; Bacescu, N.; Ciubotariu, C.-R.; Miclosina, C.-O.; Turiac, R.R.; Marginean, G. The Influence of the Layer Height and the Filament Color on the Dimensional Accuracy and the Tensile Strength of FDM-Printed PLA Specimens. Polymers 2023, 15, 2377. [Google Scholar] [CrossRef] [PubMed]
  138. Ahn, S.-J.; Lee, H.; Cho, K.-J. 3D Printing with a 3D Printed Digital Material Filament for Programming Functional Gradients. Nat. Commun. 2024, 15, 3605. [Google Scholar] [CrossRef]
  139. Equbal, A.; Murmu, R.; Kumar, V.; Equbal, M.A.; Equbal, A.; Murmu, R.; Kumar, V.; Equbal, M.A. A Recent Review on Advancements in Dimensional Accuracy in Fused Deposition Modeling (FDM) 3D Printing. AIMS Mater. Sci. 2024, 11, 950–990. [Google Scholar] [CrossRef]
  140. Ali, F.; Kalva, S.N.; Koc, M. Advancements in 3D Printing Techniques for Biomedical Applications: A Comprehensive Review of Materials Consideration, Post Processing, Applications, and Challenges. Discov. Mater. 2024, 4, 53. [Google Scholar] [CrossRef]
  141. Kantaros, A.; Petrescu, F.I.T.; Abdoli, H.; Diegel, O.; Chan, S.; Iliescu, M.; Ganetsos, T.; Munteanu, I.S.; Ungureanu, L.M. Additive Manufacturing for Surgical Planning and Education: A Review. Appl. Sci. 2024, 14, 2550. [Google Scholar] [CrossRef]
  142. Husna, A.; Ashrafi, S.; Tomal, A.A.; Tuli, N.T.; Bin Rashid, A. Recent Advancements in Stereolithography (SLA) and Their Optimization of Process Parameters for Sustainable Manufacturing. Hybrid Adv. 2024, 7, 100307. [Google Scholar] [CrossRef]
  143. Maines, E.M.; Porwal, M.K.; Ellison, C.J.; Reineke, T.M. Sustainable Advances in SLA/DLP 3D Printing Materials and Processes. Green Chem. 2021, 23, 6863–6897. [Google Scholar] [CrossRef]
  144. Muthuram, N.; Sriram Madhav, P.; Keerthi Vasan, D.; Mohan, M.E.; Prajeeth, G. A Review of Recent Literatures in Poly Jet Printing Process. Mater. Today Proc. 2022, 68, 1906–1920. [Google Scholar] [CrossRef]
  145. Majca-Nowak, N.; Pyrzanowski, P. The Analysis of Mechanical Properties and Geometric Accuracy in Specimens Printed in Material Jetting Technology. Materials 2023, 16, 3014. [Google Scholar] [CrossRef]
  146. Vincze, Z.É.; Kovács, Z.I.; Vass, A.F.; Borbély, J.; Márton, K. Evaluation of the Dimensional Stability of 3D-Printed Dental Casts. J. Dent. 2024, 151, 105431. [Google Scholar] [CrossRef] [PubMed]
  147. García-Collado, A.; Blanco, J.M.; Gupta, M.K.; Dorado-Vicente, R. Advances in Polymers Based Multi-Material Additive-Manufacturing Techniques: State-of-Art Review on Properties and Applications. Addit. Manuf. 2022, 50, 102577. [Google Scholar] [CrossRef]
  148. Soni, Y.; Rothweiler, P.; Erdman, A.G. Mechanical Characterization and Feasibility Analysis of PolyJetTM Materials in Tissue-Mimicking Applications. Machines 2025, 13, 234. [Google Scholar] [CrossRef]
  149. Schneider, K.H.; Oberoi, G.; Unger, E.; Janjic, K.; Rohringer, S.; Heber, S.; Agis, H.; Schedle, A.; Kiss, H.; Podesser, B.K.; et al. Medical 3D Printing with Polyjet Technology: Effect of Material Type and Printing Orientation on Printability, Surface Structure and Cytotoxicity. 3D Print. Med. 2023, 9, 27. [Google Scholar] [CrossRef]
  150. Zhu, Y.; Guo, S.; Ravichandran, D.; Ramanathan, A.; Sobczak, M.T.; Sacco, A.F.; Patil, D.; Thummalapalli, S.V.; Pulido, T.V.; Lancaster, J.N.; et al. 3D-Printed Polymeric Biomaterials for Health Applications. Adv. Healthc. Mater. 2025, 14, 2402571. [Google Scholar] [CrossRef]
  151. Emir, F.; Ayyildiz, S. Accuracy Evaluation of Complete-Arch Models Manufactured by Three Different 3D Printing Technologies: A Three-Dimensional Analysis. J. Prosthodont. Res. 2021, 65, 365–370. [Google Scholar] [CrossRef]
  152. Modular Digital and 3D-Printed Dental Models with Applicability in Dental Education. Available online: https://www.mdpi.com/1648-9144/59/1/116 (accessed on 1 November 2025).
  153. The Application and Challenge of Binder Jet 3D Printing Technology in Pharmaceutical Manufacturing. Available online: https://www.mdpi.com/1999-4923/14/12/2589 (accessed on 1 November 2025).
  154. Hong, X.; Han, X.; Li, X.; Li, J.; Wang, Z.; Zheng, A. Binder Jet 3D Printing of Compound LEV-PN Dispersible Tablets: An Innovative Approach for Fabricating Drug Systems with Multicompartmental Structures. Pharmaceutics 2021, 13, 1780. [Google Scholar] [CrossRef] [PubMed]
  155. Wang, M.; Xu, Y.; Cao, L.; Xiong, L.; Shang, D.; Cong, Y.; Zhao, D.; Wei, X.; Li, J.; Fu, D.; et al. Mechanical and Biological Properties of 3D Printed Bone Tissue Engineering Scaffolds. Front. Bioeng. Biotechnol. 2025, 13, 1545693. [Google Scholar] [CrossRef]
  156. Shiran, S.; Nourbakhsh, M.S.; Setayeshmehr, M.; Poursamar, S.A.; Rafienia, M. Improvement in Surface Morphology and Mechanical Properties of the Polycaprolactone/Hydroxyapatite/Graphene Oxide Scaffold: 3D Printing—Salt Leaching Method. J. Mater. Res. Technol. 2025, 36, 8731–8744. [Google Scholar] [CrossRef]
  157. Rossi, A.; Pescara, T.; Gambelli, A.M.; Gaggia, F.; Asthana, A.; Perrier, Q.; Basta, G.; Moretti, M.; Senin, N.; Rossi, F.; et al. Biomaterials for Extrusion-Based Bioprinting and Biomedical Applications. Front. Bioeng. Biotechnol. 2024, 12, 1393641. [Google Scholar] [CrossRef] [PubMed]
  158. Liu, J.; Shahriar, M.; Xu, H.; Xu, C. Cell-Laden Bioink Circulation-Assisted Inkjet-Based Bioprinting to Mitigate Cell Sedimentation and Aggregation. Biofabrication 2022, 14, e96580. [Google Scholar] [CrossRef] [PubMed]
  159. Natural and Synthetic Bioinks for 3D Bioprinting—Khoeini—2021—Advanced NanoBiomed Research—Wiley Online Library. Available online: https://advanced.onlinelibrary.wiley.com/doi/full/10.1002/anbr.202000097 (accessed on 1 November 2025).
  160. Schwab, A.; Levato, R.; D’Este, M.; Piluso, S.; Eglin, D.; Malda, J. Printability and Shape Fidelity of Bioinks in 3D Bioprinting. Chem. Rev. 2020, 120, 11028–11055. [Google Scholar] [CrossRef]
  161. Namli, I.; Gupta, D.; Singh, Y.P.; Datta, P.; Rizwan, M.; Baykara, M.; Ozbolat, I.T. Progressive Insights into 3D Bioprinting for Corneal Tissue Restoration. Adv. Healthc. Mater. 2025, 2025, e03372. [Google Scholar] [CrossRef] [PubMed]
  162. Arias-Peregrino, V.M.; Tenorio-Barajas, A.Y.; Mendoza-Barrera, C.O.; Román-Doval, J.; Lavariega-Sumano, E.F.; Torres-Arellanes, S.P.; Román-Doval, R. 3D Printing for Tissue Engineering: Printing Techniques, Biomaterials, Challenges, and the Emerging Role of 4D Bioprinting. Bioengineering 2025, 12, 936. [Google Scholar] [CrossRef]
  163. Jaksa, L.; Ates, G.; Heller, S. Development of a Multi-Material 3D Printer for Functional Anatomic Models. Int. J. Bioprint. 2021, 7, 420. [Google Scholar] [CrossRef]
  164. Siddiqui, M.A.S.; Rabbi, M.S.; Ahmed, R.U.; Billah, M.M. Biodegradable Natural Polymers and Fibers for 3D Printing: A Holistic Perspective on Processing, Characterization, and Advanced Applications. Clean. Mater. 2024, 14, 100275. [Google Scholar] [CrossRef]
  165. Hatamikia, S.; Jaksa, L.; Kronreif, G.; Birkfellner, W.; Kettenbach, J.; Buschmann, M.; Lorenz, A. Silicone Phantoms Fabricated with Multi-Material Extrusion 3D Printing Technology Mimicking Imaging Properties of Soft Tissues in CT. Z. Für Med. Phys. 2025, 35, 138–151. [Google Scholar] [CrossRef]
  166. Hatamikia, S.; Zaric, O.; Jaksa, L.; Schwarzhans, F.; Trattnig, S.; Fitzek, S.; Kronreif, G.; Woitek, R.; Lorenz, A. Evaluation of 3D-Printed Silicone Phantoms with Controllable MRI Signal Properties. Int. J. Bioprinting 2025, 11, 381–396. [Google Scholar] [CrossRef]
  167. Zhou, L.; Gao, Q.; Fu, J.; Chen, Q.; Zhu, J.; Sun, Y.; He, Y. Multimaterial 3D Printing of Highly Stretchable Silicone Elastomers. ACS Appl. Mater. Interfaces 2019, 11, 23573–23583. [Google Scholar] [CrossRef]
  168. Murphy, S.V.; Atala, A. 3D Bioprinting of Tissues and Organs. Nat. Biotechnol. 2014, 32, 773–785. [Google Scholar] [CrossRef] [PubMed]
  169. Gungor-Ozkerim, P.S.; Inci, I.; Zhang, Y.S.; Khademhosseini, A.; Dokmeci, M.R. Bioinks for 3D Bioprinting: An Overview. Biomater. Sci. 2018, 6, 915–946. [Google Scholar] [CrossRef]
  170. Zhang, Y.S.; Duchamp, M.; Oklu, R.; Ellisen, L.W.; Langer, R.; Khademhosseini, A. Bioprinting the Cancer Microenvironment. ACS Biomater. Sci. Eng. 2016, 2, 1710–1721. [Google Scholar] [CrossRef] [PubMed]
  171. Cui, H.; Nowicki, M.; Fisher, J.P.; Zhang, L.G. 3D Bioprinting for Organ Regeneration. Adv. Healthc. Mater. 2017, 6. [Google Scholar] [CrossRef]
  172. Gudapati, H.; Dey, M.; Ozbolat, I. A Comprehensive Review on Droplet-Based Bioprinting: Past, Present and Future. Biomaterials 2016, 102, 20–42. [Google Scholar] [CrossRef]
  173. Matai, I.; Kaur, G.; Seyedsalehi, A.; McClinton, A.; Laurencin, C.T. Progress in 3D Bioprinting Technology for Tissue/Organ Regenerative Engineering. Biomaterials 2020, 226, 119536. [Google Scholar] [CrossRef]
  174. Unagolla, J.M.; Jayasuriya, A.C. Hydrogel-Based 3D Bioprinting: A Comprehensive Review on Cell-Laden Hydrogels, Bioink Formulations, and Future Perspectives. Appl. Mater. Today 2020, 18, 100479. [Google Scholar] [CrossRef]
  175. Youn, J.K.; Park, H.S.; Ko, D.; Yang, H.-B.; Kim, H.-Y.; Yoon, H.B. Application of Additional Three-Dimensional Materials for Education in Pediatric Anatomy. Sci. Rep. 2023, 13, 9973. [Google Scholar] [CrossRef]
  176. Lau, I.; Wong, Y.H.; Yeong, C.H.; Abdul Aziz, Y.F.; Md Sari, N.A.; Hashim, S.A.; Sun, Z. Quantitative and Qualitative Comparison of Low- and High-Cost 3D-Printed Heart Models. Quant. Imaging Med. Surg. 2019, 9, 107–114. [Google Scholar] [CrossRef] [PubMed]
  177. Chen, S.; Zhang, S.; Li, M. The Role of Three-Dimensional Printed Models of Skull in Anatomy Teaching: A Randomized Controlled Trail. Sci. Rep. 2017, 7, 44889. [Google Scholar] [CrossRef]
  178. Koh, M.Y.; Aidoo-Micah, M.; Zhou, Y. Spatial Ability and 3D Model Colour-Coding Affect Anatomy Learning Performance. Sci. Rep. 2023, 13, 8235. [Google Scholar] [CrossRef]
  179. Fidanza, A.; Caggiari, G.; Di Petrillo, F.; Fiori, E.; Momoli, A.; Logroscino, G. Three-Dimensional Printed Models Can Reduce Costs and Surgical Time for Complex Proximal Humeral Fractures: Preoperative Planning, Patient Satisfaction, and Improved Resident Skills. J. Orthop. Traumatol. Off. J. Ital. Soc. Orthop. Traumatol. 2024, 25, 11. [Google Scholar] [CrossRef] [PubMed]
  180. Zhou, X.; Yi, K.; Shi, Y. Orthopedic Trainees’ Perception of the Educational Utility of Patient-Specific 3D-Printed Anatomical Models: A Questionnaire-Based Observational Study. Adv. Med. Educ. Pract. 2025, 16, 1399–1409. [Google Scholar] [CrossRef]
  181. Barger, J.B.; Park, C.Y.; Lopez, A. Development, Implementation, and Perceptions of a 3D-Printed Human Skull for Dental Education. J. Dent. Educ. 2024, 88, 442–452. [Google Scholar] [CrossRef]
  182. Wolder, D.; Blazuk-Fortak, A.; Góra, T.; Michalska, A.; Kaczmarek, P.; Świercz, G. The Role of Three-Dimensional Printed Models of Fetuses Obtained from Ultrasonographic Examinations in Obstetrics: Clinical and Educational Aspects. Eur. J. Obstet. Gynecol. Reprod. Biol. 2025, 312, 114538. [Google Scholar] [CrossRef]
  183. Liang, J.; Ma, Q.; Zhao, X.; Pan, G.; Zhang, G.; Zhu, B.; Xue, Y.; Li, D.; Lu, B. Feasibility Analysis of 3D Printing With Prenatal Ultrasound for the Diagnosis of Fetal Abnormalities. J. Ultrasound Med. 2022, 41, 1385–1396. [Google Scholar] [CrossRef]
  184. Neijhoft, J.; Henrich, D.; Mörs, K.; Marzi, I.; Janko, M. Visualization of Complicated Fractures by 3D-Printed Models for Teaching and Surgery: Hands-on Transitional Fractures of the Ankle. Eur. J. Trauma Emerg. Surg. 2022, 48, 3923–3931. [Google Scholar] [CrossRef]
  185. Samaila, E.M.; Negri, S.; Zardini, A.; Bizzotto, N.; Maluta, T.; Rossignoli, C.; Magnan, B. Value of Three-Dimensional Printing of Fractures in Orthopaedic Trauma Surgery. J. Int. Med. Res. 2019, 48, 0300060519887299. [Google Scholar] [CrossRef]
  186. Asghar, A.; Naaz, S.; Patra, A.; Ravi, K.S.; Khanal, L. Effectiveness of 3D-Printed Models Prepared from Radiological Data for Anatomy Education: A Meta-Analysis and Trial Sequential Analysis of 22 Randomized, Controlled, Crossover Trials. J. Educ. Health Promot. 2022, 11, 353. [Google Scholar] [CrossRef]
  187. Chauhan, P.; Mehra, S.; Pandya, A. Randomised Controlled Trial: Role of Virtual Interactive 3-Dimensional Models in Anatomical and Medical Education. J. Vis. Commun. Med. 2024, 47, 39–45. [Google Scholar] [CrossRef]
  188. Kavvadia, E.-M.; Katsoula, I.; Angelis, S.; Filippou, D. The Anatomage Table: A Promising Alternative in Anatomy Education. Cureus 2023, 15, e43047. [Google Scholar] [CrossRef]
  189. Pinsky, B.M.; Panicker, S.; Chaudhary, N.; Gemmete, J.J.; Wilseck, Z.M.; Lin, L. The Potential of 3D Models and Augmented Reality in Teaching Cross-Sectional Radiology. Med. Teach. 2023, 45, 1108–1111. [Google Scholar] [CrossRef] [PubMed]
  190. Paymard, M.; Naderian, H.; Hassani Bafrani, H.; Azami Tameh, A.; Mirsafi Niasar, M.; Rahimi, H.; Hosseini, H.S.; Rafat, A. An Evaluation of the Use of 3D-Printed Anatomical Models in Anatomy Education. Surg. Radiol. Anat. 2025, 47, 199. [Google Scholar] [CrossRef]
  191. Baratz, G.; Sridharan, P.S.; Yong, V.; Tatsuoka, C.; Griswold, M.A.; Wish-Baratz, S. Comparing Learning Retention in Medical Students Using Mixed-Reality to Supplement Dissection: A Preliminary Study. Int. J. Med. Educ. 2022, 13, 107–114. [Google Scholar] [CrossRef]
  192. Kılıç, M.F.; Yurtsever, A.Z.; Açıkgöz, F.; Başgut, B.; Mavi, B.; Ertuç, E.; Sevim, S.; Oruk, T.; Kıyak, Y.S.; Peker, T. A New Classmate in Anatomy Education: 3D Anatomical Modeling Medical Students’ Engagement on Learning through Self-prepared Anatomical Models. Anat. Sci. Educ. 2025, 18, 727–737. [Google Scholar] [CrossRef]
  193. Fenta, E.W.; Alsheghri, A. Exploring 4D Printing for Biomedical Applications: Advancements, Challenges, and Future Perspectives. Bioprinting 2025, 50, e00436. [Google Scholar] [CrossRef]
  194. Ahmed, A.; Arya, S.; Gupta, V.; Furukawa, H.; Khosla, A. 4D Printing: Fundamentals, Materials, Applications and Challenges. Polymer 2021, 228, 123926. [Google Scholar] [CrossRef]
  195. Wan, X.; Chen, S.; Ma, J.; Dong, C.; Banerjee, H.; Laperrousaz, S.; Piveteau, P.-L.; Meng, Y.; Leng, J.; Sorin, F. Multimaterial Shape Memory Polymer Fibers for Advanced Drug Release Applications. Adv. Fiber Mater. 2025, 7, 1576–1589. [Google Scholar] [CrossRef]
  196. Mathur, V.; Agarwal, P.; Kasturi, M.; Varadharajan, S.; Devi, E.S.; Vasanthan, K.S. Transformative Bioprinting: 4D Printing and Its Role in the Evolution of Engineering and Personalized Medicine. Discov. Nano 2025, 20, 118. [Google Scholar] [CrossRef] [PubMed]
  197. Faizan Siddiqui, M.; Jabeen, S.; Alwazzan, A.; Vacca, S.; Dalal, L.; Al-Haddad, B.; Jaber, A.; Ballout, F.F.; Abou Zeid, H.K.; Haydamous, J.; et al. Integration of Augmented Reality, Virtual Reality, and Extended Reality in Healthcare and Medical Education: A Glimpse into the Emerging Horizon in LMICs—A Systematic Review. J. Med. Educ. Curric. Dev. 2025, 12, 23821205251342315. [Google Scholar] [CrossRef] [PubMed]
  198. Urlings, J.; de Jong, G.; Maal, T.; Henssen, D. Views on Augmented Reality, Virtual Reality, and 3D Printing in Modern Medicine and Education: A Qualitative Exploration of Expert Opinion. J. Digit. Imaging 2023, 36, 1930–1939. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Integrated workflow for 3D anatomical model generation. The figure illustrates the five-stage workflow for transforming medical imaging data into educational anatomical models. Stage 1 (Medical Imaging Acquisition) encompasses conventional CT and MRI as well as specialized research modalities. Stage 2 (Image Processing & Quality Assurance) involves DICOM standardization and validation protocols. Stage 3 (AI-Driven Segmentation & Reconstruction) covers the automated identification and three-dimensional reconstruction of anatomical structures using deep learning algorithms. Stage 4 (3D Printing Technology Selection) presents representative manufacturing options, from FDM and SLA systems to specialized PolyJet platforms. Stage 5 (Educational Application) shows the final deployment of anatomical models for student learning, including spatial visualization, exposure to anatomical variants, clinical integration, and validated learning outcomes. The framework illustrates how advances in imaging, artificial intelligence, and additive manufacturing converge to enable effective anatomical education. Created with Microsoft PowerPoint v.16.
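To make the workflow in Figure 1 concrete, the following minimal Python sketch chains Stages 1–4: it loads a DICOM CT series, applies a fixed Hounsfield-unit threshold as a simple stand-in for the AI-driven segmentation of Stage 3, reconstructs a surface, and exports an STL file ready for slicer import. Folder names, the threshold value, and the output file name are illustrative assumptions, not part of the cited studies.

```python
# Minimal sketch of the Figure 1 workflow (Stages 1-4), assuming SimpleITK,
# scikit-image, and trimesh are installed. A Hounsfield-unit threshold is
# used here in place of the deep-learning segmentation described in Stage 3.
import SimpleITK as sitk
import trimesh
from skimage import measure

# Stages 1-2: load and standardize a DICOM CT series (illustrative folder name)
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_series/"))
image = reader.Execute()

volume = sitk.GetArrayFromImage(image)   # voxel array in (z, y, x) order
spacing = image.GetSpacing()[::-1]       # reorder spacing to match (z, y, x)

# Stage 3 (simplified): threshold cortical bone and reconstruct a surface mesh
BONE_HU = 300  # illustrative threshold, not a validated segmentation value
verts, faces, _, _ = measure.marching_cubes(volume, level=BONE_HU, spacing=spacing)

# Stage 4: export the mesh for slicing on whichever printer is selected
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("bone_model.stl")
```

In practice, the thresholding step would be replaced by a trained segmentation model and followed by mesh cleanup and validation before printing; this sketch only shows how the stages connect.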
Figure 2. Multi-material integration for hybrid anatomical models. The diagram illustrates the convergence of rigid polymers (e.g., PLA/ABS) for bone structures and flexible elastomers for soft tissue simulation. The resulting combined model replicates physiological mechanical properties—balancing high stiffness for bone with compliance for soft tissues—and ensures visibility under radiologic imaging (CT). Created with Microsoft PowerPoint v.16.
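A hybrid model of the kind shown in Figure 2 is typically prepared as a set of co-registered part meshes, each paired with a material assignment that the slicer uses to balance stiffness and compliance. The sketch below shows one possible build configuration; the file names, Shore hardness classes, and infill values are illustrative assumptions rather than settings from the cited work.

```python
# Minimal sketch of a hybrid-model build configuration: each segmented
# structure is paired with a material class so the slicer can assign a rigid
# filament to bone and a flexible elastomer to soft tissue. All concrete
# values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PartSpec:
    mesh_file: str        # STL exported from the segmentation step
    material: str         # printer material profile
    stiffness_class: str  # indicative Shore hardness
    infill_percent: int

hybrid_model = [
    PartSpec("femur_cortical.stl", "PLA (rigid)", "Shore D ~80", 60),
    PartSpec("knee_cartilage.stl", "TPU (flexible)", "Shore A ~85", 25),
    PartSpec("popliteal_vessels.stl", "TPU (flexible)", "Shore A ~60", 15),
]

for part in hybrid_model:
    print(f"{part.mesh_file}: {part.material}, {part.stiffness_class}, "
          f"{part.infill_percent}% infill")
```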
Table 1. Comparison of the main 3D printing technologies for anatomical models.
| Technology | Layer Thickness (μm) | Precision/Accuracy | Material Options | Multi-Material Capability | Anatomical Applications | Post-Processing | Key Limitations | Educational Value |
|---|---|---|---|---|---|---|---|---|
| FDM (Fused Deposition Modeling) | 50–300 | ±0.5% (desktop), ±0.15% (industrial) | PLA, PETG, TPU, flexible materials | Limited (dual extrusion) | Rigid structures, skeletal models, low-cost production | Minimal to moderate | Lower surface quality, visible layer stepping | Excellent (cost-effective, color variety) |
| SLA (Stereolithography) | 20 | ±0.5% (desktop), ±0.15% (industrial) | Standard and flexible resins, biocompatible options | No (single resin tank) | High-detail vascular/cardiac models, precise anatomical features | Moderate (washing, UV curing) | Potential warping of unsupported spans, UV sensitivity | Excellent (high detail, realistic finishes) |
| PolyJet (Material Jetting) | 16 | ±0.04–0.2 mm RMS | Rigid and elastic photopolymers, multiple colors | Yes (simultaneous hard and soft) | Multi-tissue simulation, color differentiation, complex geometries | Minimal (water jet or manual removal) | High cost limits accessibility | Superior (multi-material, tissue-specific properties) |
| DLP (Digital Light Processing) | – | 46.2 μm trueness, 43.6 μm precision | Dental photoresins, limited elastomerics | No (single material) | Dental, orthodontic, small-scale models | Moderate (surface cleaning) | Limited material variety, small build platforms | Good (high precision for small models) |
| Binder Jetting | 80–100 | Rough surface finish, ±0.8–1.2 mm | Powders (HAP, Ca-P, pharmaceuticals) | Limited (powder mixtures) | Bone implants (HAP/Ca-P), patient-specific porosity | Extensive (days to weeks of infiltration/sintering) | Delicate green state, long processing, rough finish, waste management | Limited (for anatomical education) |
| Inkjet Bioprinting | Variable (piezoelectric) | Variable (piezoelectric precision) | Natural polymers (alginate, gelatin, collagen); synthetics (PEG, PCL, PLGA) | Cell-laden only | Tissue engineering; not suitable for anatomical education | Extensive (sterile culture, physiological media) | Requires sterile conditions, incompatible with anatomical teaching, cost-prohibitive | Not applicable (tissue engineering only) |
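Table 1 can also be read as a decision aid: given a model's requirements (for example, multi-material capability and budget), only a subset of technologies remains viable. The following sketch encodes part of the table as a lookup structure and applies such a filter; the cost tiers are illustrative assumptions added for the example and are not stated in the table.

```python
# Minimal sketch of a technology-selection helper based on a subset of
# Table 1. "cost_tier" is an illustrative assumption; the other fields
# paraphrase the table entries.
PRINTING_TECHNOLOGIES = {
    "FDM":     {"layer_um": (50, 300), "multi_material": "limited", "cost_tier": "low"},
    "SLA":     {"layer_um": (20, 20),  "multi_material": "no",      "cost_tier": "medium"},
    "PolyJet": {"layer_um": (16, 16),  "multi_material": "yes",     "cost_tier": "high"},
    "DLP":     {"layer_um": (None, None), "multi_material": "no",   "cost_tier": "medium"},
}

def shortlist(need_multi_material: bool, max_cost_tier: str) -> list[str]:
    """Return technologies meeting the multi-material and cost requirements."""
    order = {"low": 0, "medium": 1, "high": 2}
    candidates = []
    for name, spec in PRINTING_TECHNOLOGIES.items():
        if need_multi_material and spec["multi_material"] != "yes":
            continue
        if order[spec["cost_tier"]] > order[max_cost_tier]:
            continue
        candidates.append(name)
    return candidates

# Example: a program that needs hard/soft tissue contrast in a single print
print(shortlist(need_multi_material=True, max_cost_tier="high"))  # ['PolyJet']
```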
Table 2. Evidence summary of educational outcomes and clinical applications of 3D-printed anatomical models. RCT, randomized controlled trial; n, total sample size (with the number of studies in parentheses for meta-analyses); IQR, interquartile range; SMD, standardized mean difference (effect size: 0.2 = small, 0.5 = medium, 0.8 = large, >1.2 = very large); p, probability value (p < 0.05 indicates statistical significance); CI, 95% confidence interval; vs., versus; min, minutes. Results are expressed as mean ± standard deviation, median [IQR], or SMD [CI], depending on the original study methodology. Study quality levels: Level 1 = RCT with blinded evaluation or meta-analysis (strongest evidence); Level 2 = prospective cohort studies; Level 3 = satisfaction surveys. Statistical significance is indicated by p < 0.01 (highly significant), p < 0.05 (significant), or "–" (not applicable). Each reference number corresponds to the original peer-reviewed publication. For meta-analyses, "n" represents the total combined participants across all included studies.
| Learning Domain | Study Design | Sample Size | Outcome Metric | Results | Significance | Ref. |
|---|---|---|---|---|---|---|
| Cardiac Anatomy | Blinded RCT | n = 52 | Post-test scores | 3D: 60.83% vs. cadaver: 44.81% vs. combined: 44.62% | p = 0.010 | [20] |
| Spatial Visualization | Meta-analysis | n = 2492 (27 studies) | Standardized mean difference (SMD) | 3D vs. traditional: SMD 0.72 [95% CI: 0.32–1.13]; 3D vs. 2D: SMD 0.93 [95% CI: 0.49–1.37] | p < 0.001 | [22] |
| Cranial/Skeletal Anatomy | RCT | n = 79 | Structural recognition | 3D: 31.5 [IQR: 29–36] vs. cadaver: 29.5 [IQR: 25–33] | p = 0.044 | [177] |
| Color-Coded Learning | RCT | n = 102 | Knowledge retention | Color-coded: 78.3 ± 6.1 vs. monochromatic: 71.2 ± 7.3 | p < 0.001 | [178] |
| Fracture Surgery Planning | Prospective RCT | n = 40 | Operative time reduction | 75.47 ± 9.06 min (3D-planned) vs. 88.55 ± 11.20 min (conventional) | p = 0.0002 | [179] |
| Orthopedic Resident Satisfaction | Survey | n = 76 | Educational benefit rating | 85.6% reported improved understanding; physical manipulation rated 8.1 ± 0.9/10 | – | [180] |
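For readers unfamiliar with the SMD values reported in Table 2, the sketch below shows how a standardized mean difference (Cohen's d with a pooled standard deviation) and an approximate 95% confidence interval are computed from summary statistics, using the color-coded learning row as a worked example. The even 51/51 group split is an illustrative assumption; the table reports only the total n = 102.

```python
# Minimal sketch of an SMD (Cohen's d) calculation with an approximate 95% CI,
# applied to the summary statistics of the color-coded learning row in Table 2.
# The equal group sizes are an assumption made for illustration.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with pooled SD and large-sample 95% CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Color-coded vs. monochromatic models (knowledge retention, Table 2)
d, ci = cohens_d(78.3, 6.1, 51, 71.2, 7.3, 51)
print(f"SMD = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Under the interpretation thresholds given in the table caption, an SMD above 0.8 would correspond to a large effect, consistent with the significant between-group difference reported in the original study.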