Search Results (160)

Search Parameters:
Keywords = DICOM images

13 pages, 2837 KiB  
Article
Voxel Size and Field of View Influence on Periodontal Bone Assessment Using Four CBCT Systems: An Experimental Ex Vivo Analysis
by Victória Geisa Brito de Oliveira, Polyane Mazucatto Queiroz, Alessandra Rocha Simões, Mônica Ghislaine Oliveira Alves, Maria Aparecida Neves Jardini, André Luiz Ferreira Costa and Sérgio Lucio Pereira de Castro Lopes
Tomography 2025, 11(7), 74; https://doi.org/10.3390/tomography11070074 - 25 Jun 2025
Abstract
Objective: This ex vivo study aimed to evaluate the influence of different acquisition protocols, combining voxel size and field of view (FOV), across four cone-beam computed tomography (CBCT) systems, on the accuracy of alveolar bone level measurements for periodontal assessment. Materials and Methods: A dry human mandible was used, with standardized radiopaque markers placed on the cementoenamel junction (CEJ) of the buccal–mesial and buccal–distal aspects of teeth 34 and 43. CBCT scans were performed using four systems—Veraview® X800, OP300 Pro®, I-CAT Next Generation®, and Orthophos XG®—applying the various combinations of FOV and voxel resolution available in each device. Reference measurements were obtained in situ using a digital caliper. CBCT images were exported in DICOM format and analyzed with OnDemand3D software (version 4.6) to obtain paracoronal sections. Linear measurements from the CEJ to the alveolar crest were recorded in triplicate and compared to the gold standard using ANOVA and the Dunnett test (α = 0.05). Results: Protocols with smaller voxel sizes and limited FOVs generally yielded measurements closer to the gold standard. However, some larger-FOV protocols with intermediate voxel sizes also achieved comparable accuracy. Among the systems, the I-CAT showed lower agreement with the in situ measurements, while the others demonstrated reliable performance depending on the acquisition parameters. Conclusions: The findings suggest that CBCT protocols with smaller voxel sizes and reduced FOVs can enhance measurement accuracy in periodontal bone assessments. Nevertheless, intermediate protocols may offer a balance between diagnostic quality and radiation exposure, aligning with the ALADA principle. This study reinforces the need for standardized acquisition parameters tailored to periodontal imaging.
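
The statistical step described here (one-way ANOVA followed by the Dunnett test against a gold-standard control) maps directly onto SciPy. A minimal sketch, assuming SciPy ≥ 1.11 (which provides scipy.stats.dunnett); the measurement arrays are hypothetical placeholders, not study data:

```python
# Sketch of the ANOVA + Dunnett workflow; all values are hypothetical.
import numpy as np
from scipy import stats

gold_standard = np.array([2.10, 2.15, 2.08, 2.12])  # caliper CEJ-crest distances (mm)
protocol_a = np.array([2.11, 2.18, 2.09, 2.14])     # e.g., small voxel, limited FOV
protocol_b = np.array([2.35, 2.41, 2.28, 2.38])     # e.g., large voxel, large FOV

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(gold_standard, protocol_a, protocol_b)
print(f"ANOVA: F={f_stat:.3f}, p={p_anova:.4f}")

# Dunnett test: each protocol compared against the gold-standard control.
res = stats.dunnett(protocol_a, protocol_b, control=gold_standard)
print(res.pvalue)  # one p-value per protocol-vs-control comparison
```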

12 pages, 1985 KiB  
Article
Evaluating Virtual Planning Accuracy in Bimaxillary Advancement Surgery: A Retrospective Study Introducing the Planning Accuracy Coefficient
by Paweł Piotr Grab, Michał Szałwiński, Maciej Jagielak, Jacek Rożko, Dariusz Jurkiewicz, Aldona Chloupek, Maria Sobol and Piotr Rot
J. Clin. Med. 2025, 14(10), 3527; https://doi.org/10.3390/jcm14103527 - 18 May 2025
Abstract
Background: Bimaxillary (BiMax) advancement surgeries are among the most frequently performed procedures in the orthognathic subspecialty of craniomaxillofacial surgery. The growing digitalization of the planning process and the shift from physical to virtual settings in procedure design have allowed, among other things, for better visualization of surgeries, improved preparation, and a more profound understanding of individual anatomy. The question of the accuracy of virtual planning (VP), and of the available methods for evaluating it, therefore arises naturally. The aim of this study was to determine the accuracy of performed BiMax advancement surgeries and to propose a new planning accuracy coefficient (PAC). Methods: A group of 35 patients who underwent BiMax surgery were included in the study. Computed tomography (CT) of the head and neck region was performed 2 weeks preoperatively and 6 months postoperatively. The acquired Digital Imaging and Communications in Medicine (DICOM) files were used to perform VP and a 3-dimensional (3D) cephalometry analysis using IPS CASE DESIGNER® software, v2.5.7.1 (KLS Martin Group, Tuttlingen, Germany). Statistical significance was evaluated, and basic measures of central tendency and dispersion were calculated for the analyzed variables. The accuracy of the performed planning was assessed based on the mean absolute error (MAE) between the planned and achieved cephalometric variables. An additional assessment was performed based on the proposed PAC. Results: VP was found to be accurate for cephalometric variables assessing the height of the maxilla and mandible, the inclination of the occlusal plane, the position of the jaws in relation to the skull base, and overjet and overbite. There was a discrepancy between the classic and proposed methods of accuracy assessment for several of the evaluated variables. Conclusions: The accuracy of the VP of BiMax advancement surgeries can be evaluated based on 3D cephalometry, and it is accurate in the assessment of the previously mentioned variables. The proposed PAC needs further analysis and development; however, the data obtained with it are promising, and because it takes the magnitude of planned movements into account, it can facilitate fair comparison of results across studies that use different assessment methods.
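
The classic accuracy measure used here, the MAE between planned and achieved cephalometric values, is straightforward to compute; the PAC itself is not specified in the abstract and so is not reproduced. A sketch with hypothetical values:

```python
# Mean absolute error between planned and achieved cephalometric variables.
# The arrays are hypothetical placeholders; the proposed PAC is not defined
# in the abstract and is therefore not implemented here.
import numpy as np

planned  = np.array([82.0, 79.5, 3.0, 2.5])   # e.g., SNA, SNB, overjet, overbite
achieved = np.array([81.2, 80.1, 3.4, 2.2])

mae = np.mean(np.abs(planned - achieved))
print(f"MAE = {mae:.2f}")
```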

16 pages, 6927 KiB  
Article
Estimation of Missing DICOM Windowing Parameters in High-Dynamic-Range Radiographs Using Deep Learning
by Mateja Napravnik, Natali Bakotić, Franko Hržić, Damir Miletić and Ivan Štajduhar
Mathematics 2025, 13(10), 1596; https://doi.org/10.3390/math13101596 - 13 May 2025
Abstract
Digital Imaging and Communications in Medicine (DICOM) is a standard format for storing medical images, which are typically stored at higher bit depths (10–16 bits), enabling detailed representation but exceeding the capabilities of standard displays and human visual perception. To address this, DICOM images are often accompanied by windowing parameters, analogous to tone mapping in high-dynamic-range image processing, which compress the intensity range to enhance diagnostically relevant regions. This study evaluates traditional histogram-based methods and explores the potential of deep learning for predicting window parameters in radiographs where such information is missing. A range of architectures, including MobileNetV3Small, VGG16, ResNet50, and ViT-B/16, were trained on high-bit-depth computed radiography images using various combinations of loss functions, including structural similarity (SSIM), perceptual loss (LPIPS), and an edge-preservation loss. Models were evaluated on multiple criteria, including pixel entropy preservation, Hellinger distance of pixel value distributions, and peak signal-to-noise ratio after 8-bit conversion. The tested approaches were further validated on the publicly available GRAZPEDWRI-DX dataset. Although histogram-based methods showed satisfactory performance, especially scaling based on peaks identified in the pixel value histogram, deep learning-based methods were better at selectively preserving clinically relevant image areas while removing background noise.
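
For context, the windowing parameters in question are the DICOM Window Center (0028,1050) and Window Width (0028,1051) attributes, which drive the standard linear VOI transform (DICOM PS3.3 C.11.2.1.2). A minimal sketch of that transform with pydicom; the file path is hypothetical, and Rescale Slope/Intercept and multi-valued window attributes are ignored for brevity:

```python
# Apply the standard DICOM linear windowing function to map a high-bit-depth
# radiograph to the 8-bit display range. The file path is hypothetical.
import numpy as np
import pydicom

def apply_window(x, center, width, y_min=0.0, y_max=255.0):
    """Linear VOI LUT: compresses the intensity range around the window."""
    c, w = float(center), float(width)
    y = ((x - (c - 0.5)) / (w - 1.0) + 0.5) * (y_max - y_min) + y_min
    return np.clip(y, y_min, y_max).astype(np.uint8)

ds = pydicom.dcmread("radiograph.dcm")     # hypothetical file
x = ds.pixel_array.astype(np.float64)

# WindowCenter/WindowWidth (0028,1050/1051) may be absent; predicting
# suitable values is exactly the gap this study's models address.
img8 = apply_window(x, ds.WindowCenter, ds.WindowWidth)
```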

14 pages, 2851 KiB  
Article
Guided Frontal Sinus Osteotomy: A Pilot Study of a Digital Protocol for “In-House” Manufacturing Surgical Cutting Guides
by Antonio Romano, Stefania Troise, Raffaele Spinelli, Vincenzo Abbate and Giovanni Dell’Aversana Orabona
J. Clin. Med. 2025, 14(9), 3141; https://doi.org/10.3390/jcm14093141 - 1 May 2025
Abstract
Objective: Frontal sinus surgery is still challenging for surgeons; the frontal osteotomy, with the preparation of a frontal bone flap to access the sinus, is usually hand-crafted by experienced surgeons. The objective of our study is to present a fully digital protocol for the manufacturing of “in-house” surgical cutting guides, customized to the patient’s anatomy, to perform precise frontal sinus osteotomies, demonstrating the resulting reduction in costs, operative times, and intraoperative complications. Materials and Methods: A prospective study was conducted on 12 patients with complex pathologies involving the frontal sinus who underwent frontal sinus osteotomy in the Maxillofacial Surgery Unit of the Federico II University of Naples from January 2021 to April 2025, with the last surgery performed in November 2023. The same digital protocol was used to manufacture the surgical cutting guide for all 12 patients. The first step was to upload the preoperative CT images in DICOM format to the Mimics Medical software and perform a rapid segmentation of the skull region of interest, creating a 3D object and identifying the frontal sinus margins and the osteotomy lines. The second step was to design the surgical cutting guide, incorporating the design of titanium plates to fix onto the skull in order to make a precise osteotomy. The final digital step was to export the cutting guide 3D object to the “Formlab-Form 3B” software to print the model with a specific resin. The model was then used during surgery to perform the precise frontal osteotomy by piezo surgery. The clinical outcomes, in terms of complications and recurrences, were then recorded. Results: No intraoperative complications occurred in any patient; the median follow-up was 31.7 months, and at one year of follow-up only one patient had experienced a recurrence. The mean operative time was about 4 h, with a frontal osteotomy time of about 23 min. Digital protocol time was about 4 h, while printing times were between 2 and 4 h. Conclusions: This “in-house” protocol seems to demonstrate that the use of intraoperative templates for the frontal sinus osteotomy reduces preoperative and intraoperative costs and times, reduces the risk of intraoperative complications, and allows less experienced surgeons to perform the procedure safely. Obviously, this study is to be considered a “pilot study”, and other studies with large cohorts of patients will have to confirm these promising results.
(This article belongs to the Special Issue Innovations in Maxillofacial Surgery)

15 pages, 29428 KiB  
Article
Color as a High-Value Quantitative Tool for PET/CT Imaging
by Michail Marinis, Sofia Chatziioannou and Maria Kallergi
Information 2025, 16(5), 352; https://doi.org/10.3390/info16050352 - 27 Apr 2025
Abstract
The successful application of artificial intelligence (AI) techniques for the quantitative analysis of hybrid medical imaging data such as PET/CT is challenged by the differences in the type of information and image quality between the two modalities. The purpose of this work was to develop color-based pre-processing methodologies for PET/CT data that could yield a better starting point for subsequent diagnosis and image processing and analysis. Two methods are proposed that encode Hounsfield Units (HU) and Standardized Uptake Values (SUVs) as reversible color information in separate transformed .png files, combined with basic .png metadata derived from DICOM attributes. The proposed methodologies were implemented in Python on Ubuntu Linux and pilot-tested on brain 18F-FDG PET/CT scans acquired with different PET/CT systems. The range of HUs and SUVs was mapped using novel weighted color distribution functions that allowed for a balanced representation of the data and an improved visualization of anatomic and metabolic differences. The pilot application of the proposed mapping codes yielded CT and PET images in which it was easier to pinpoint variations in anatomy and metabolic activity, offering a potentially better starting point for subsequent fully automated quantitative analysis of specific regions of interest or observer evaluation. It should be noted that the output .png files contain all the raw values and may be treated as raw DICOM input data.
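
The abstract's weighted color distribution functions are not specified, but the core idea of carrying raw values reversibly in PNG color channels can be illustrated with a much simpler, hypothetical packing: high byte in red, low byte in green.

```python
# Illustrative reversible encoding of 16-bit values (e.g., offset HUs) into
# 8-bit RGB channels. This is a minimal demonstration that color channels
# can hold raw values losslessly; it is not the authors' weighted mapping.
import numpy as np

def encode_rgb(values_u16):
    """Pack uint16 values into R (high byte) and G (low byte); B unused."""
    r = (values_u16 >> 8).astype(np.uint8)
    g = (values_u16 & 0xFF).astype(np.uint8)
    return np.stack([r, g, np.zeros_like(r)], axis=-1)

def decode_rgb(rgb):
    """Recover the original uint16 values from the R and G channels."""
    return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)

hu = np.array([[-1000, 0], [40, 3000]], dtype=np.int32)
packed = encode_rgb((hu + 1024).astype(np.uint16))  # offset HUs to non-negative
assert np.array_equal(decode_rgb(packed).astype(np.int32) - 1024, hu)
```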

14 pages, 4053 KiB  
Case Report
Virtual Reality for Pre-Procedural Planning of Interventional Pain Procedures: A Real-World Application Case Series
by Ingharan J. Siddarthan, Cary Huang, Parhesh Kumar, John E. Rubin, Robert S. White, Neel Mehta and Rohan Jotwani
J. Clin. Med. 2025, 14(9), 3019; https://doi.org/10.3390/jcm14093019 - 27 Apr 2025
Abstract
Background/Objectives: Virtual reality (VR), a component of extended reality (XR), has shown promise in pre-procedural planning by providing immersive, patient-specific simulations. In pain management, where precise anatomical understanding is critical for interventions such as peripheral nerve stimulation (PNS), nerve blocks, and intrathecal pump placement, the application of VR remains underexplored. This case series examines the role of VR in enhancing pre-procedural planning for complex chronic pain interventions. Methods: From August 2022 to December 2024, six patients with anatomically challenging conditions underwent VR-assisted pre-procedural planning at Weill Cornell Medical Center. Patient-specific 3D models were created using manual or automatic segmentation of imaging data and reviewed in VR by the surgeons performing the case to optimize procedural strategies. Procedures were then performed using conventional fluoroscopic or ultrasound guidance. Results: In all cases, VR facilitated improved visualization of complex anatomies and informed optimal procedural trajectories. In patients with complex cancer anatomy, previous surgical changes, or hardware, VR enabled precise PNS lead or needle placement, resulting in significant postoperative pain reductions. In certain cases where previous interventional pain procedures had failed, VR provided a “second opinion” that led to an alternative approach with improved outcomes. Finally, in one case, VR may have prevented patient harm by giving the proceduralists insight into an alternative approach. Across the series, VR enhanced spatial awareness, procedural accuracy, and confidence in navigating challenging anatomical scenarios. Conclusions: This case series demonstrates the utility of VR in pre-procedural planning for chronic pain interventions. By enabling detailed anatomical visualization and trajectory optimization, VR has the potential to improve outcomes in complex cases. Further studies are needed to evaluate its broader clinical applications and cost-effectiveness in pain management.

12 pages, 1620 KiB  
Article
Comparison of Marginal and Internal Fit of CAD/CAM Ceramic Inlay Restorations Fabricated Through Model Scanner, Intraoral Scanner, and CBCT Scans
by Ayben Şentürk, Bora Akat, Mert Ocak, Mehmet Ali Kılıçarslan and Kaan Orhan
Appl. Sci. 2025, 15(9), 4626; https://doi.org/10.3390/app15094626 - 22 Apr 2025
Abstract
Background and Objectives: CBCT images have been successfully used for CAD/CAM crown restorations; however, their use for ceramic inlay restorations remains unclear. This study aimed to evaluate the marginal and internal fit of CAD/CAM ceramic inlay restorations fabricated using intraoral scanner, model scanner, and CBCT data. Materials and Methods: Inlay preparations were performed on 11 mandibular molar typodont teeth. The teeth were scanned using an intraoral scanner, an extraoral scanner, and CBCT (0.075 mm voxel size). The CBCT-generated DICOM data were converted to STL format with dedicated software. All scan data were transferred to CAD software, and a total of 33 restorations were designed. Feldspathic ceramic blocks were used for milling. Micro-CT was employed to measure marginal and internal gaps, with 60 measurement points taken from three cross-sections per sample. Data were analyzed using ANOVA and Bonferroni tests (p < 0.05). Results: CBCT exhibited greater marginal and internal gap dimensions (mean: 169.27 ± 38.64 μm), approximately 60–70 μm higher than those of the intraoral (97.00 ± 10.1 μm) and model scanner groups (109.67 ± 9.72 μm) and exceeding clinically acceptable limits (≤120 μm) (p < 0.05). The intraoral and model scanners showed similar, clinically acceptable results. Conclusions: CBCT was less accurate for inlay restorations, likely due to their complex geometry. Nevertheless, fabrication was possible, and further research may improve its clinical applicability.

37 pages, 12112 KiB  
Article
Protocol for Converting DICOM Files to STL Models Using 3D Slicer and Ultimaker Cura
by Malena Pérez-Sevilla, Fernando Rivas-Navazo, Pedro Latorre-Carmona and Darío Fernández-Zoppino
J. Pers. Med. 2025, 15(3), 118; https://doi.org/10.3390/jpm15030118 - 19 Mar 2025
Abstract
Background/Objectives: 3D printing has become an invaluable tool in medicine, enabling the creation of precise anatomical models for surgical planning and medical education. This study presents a comprehensive protocol for converting DICOM files into three-dimensional models and their subsequent transformation into GCODE files ready for 3D printing. Methods: We employed the open-source software “3D Slicer” for the initial conversion of the DICOM files, capitalising on its robust capabilities in segmentation and medical image processing. An optimised workflow was developed for the precise and efficient conversion of medical images into STL models, ensuring high fidelity in anatomical structures. The protocol was validated through three case studies, achieving high structural fidelity based on deviation analysis between the STL models and the original DICOM data. Furthermore, the segmentation process preserved morphological accuracy within a narrow deviation range, ensuring the reliable replication of anatomical features for medical applications. In later stages, we utilised the “Ultimaker Cura” software to generate customised GCODE files tailored to the specifications of the 3D printer. Results: Our protocol offers an effective, accessible, and accurate solution for creating 3D anatomical models from DICOM images. Furthermore, the versatility of this approach allows for its adaptation to various 3D printers and materials, expanding its utility in the medical and scientific community. Conclusions: This study presents a robust and reproducible approach for converting medical data into physical three-dimensional objects, paving the way for a wide range of applications in personalised medicine and advanced clinical practice. The selection of sample datasets from the 3D Slicer repository ensures standardisation and reproducibility, allowing for independent validation of the proposed workflow without ethical or logistical constraints related to patient data access. However, we acknowledge that future work could expand upon this by incorporating real patient datasets and benchmarking the protocol against alternative segmentation methods and software packages to further assess performance across different clinical scenarios. This protocol is particularly characterised by its commitment to open-source software and low-cost solutions, making advanced 3D modelling accessible to a wider audience. By leveraging open-access tools such as “3D Slicer” and “Ultimaker Cura”, we democratise the creation of anatomical models, ensuring that institutions with limited resources can also benefit from this technology, promoting innovation and inclusivity in medical sciences and education.
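
The paper's workflow is GUI-driven (3D Slicer for segmentation, Ultimaker Cura for slicing); for readers who prefer scripting, a rough equivalent of the DICOM-to-STL step can be sketched in a few lines. This is not the authors' exact pipeline: it assumes SimpleITK for reading the series, a simple bone-like threshold (~300 HU) instead of interactive segmentation, and hypothetical paths.

```python
# Scripted sketch of a DICOM-to-STL conversion (not the authors' 3D Slicer
# pipeline). Directory, threshold, and output filename are assumptions.
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("dicom_series/"))
img = reader.Execute()

vol = sitk.GetArrayFromImage(img)   # (z, y, x) voxel array
sx, sy, sz = img.GetSpacing()       # SimpleITK spacing is (x, y, z) in mm

# Threshold segmentation + marching cubes; spacing passed in (z, y, x) order
# so the resulting mesh is in millimetres.
verts, faces, _, _ = measure.marching_cubes(
    (vol > 300).astype(np.uint8), level=0.5, spacing=(sz, sy, sx))

trimesh.Trimesh(vertices=verts, faces=faces).export("model.stl")
```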
(This article belongs to the Section Methodology, Drug and Device Discovery)

18 pages, 7130 KiB  
Article
Improving Cerebrovascular Imaging with Deep Learning: Semantic Segmentation for Time-of-Flight Magnetic Resonance Angiography Maximum Intensity Projection Image Enhancement
by Tomonari Yamada, Takaaki Yoshimura, Shota Ichikawa and Hiroyuki Sugimori
Appl. Sci. 2025, 15(6), 3034; https://doi.org/10.3390/app15063034 - 11 Mar 2025
Abstract
Magnetic Resonance Angiography (MRA) is widely used for cerebrovascular assessment, with Time-of-Flight (TOF) MRA being a common non-contrast imaging technique. However, maximum intensity projection (MIP) images generated from TOF-MRA often include non-essential vascular structures such as external carotid branches, requiring manual editing for accurate visualization of intracranial arteries. This study proposes a deep learning-based semantic segmentation approach to automate the removal of these structures, enhancing MIP image clarity while reducing manual workload. Using DeepLab v3+, a convolutional neural network model optimized for segmentation accuracy, the method achieved an average Dice Similarity Coefficient (DSC) of 0.9615 and an Intersection over Union (IoU) of 0.9261 across five-fold cross-validation. The developed system processed MRA datasets at an average speed of 16.61 frames per second, demonstrating real-time feasibility. A dedicated software tool was implemented to apply the segmentation model directly to DICOM images, enabling fully automated MIP image generation. While the model effectively removed most external carotid structures, further refinement is needed to improve venous structure suppression. These results indicate that deep learning can provide an efficient and reliable approach for automated cerebrovascular image processing, with potential applications in clinical workflows and neurovascular disease diagnosis.
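
At its core, the automated step described here reduces to masking out the segmented external-carotid voxels and taking a maximum projection along one axis. A tiny numpy sketch with stand-in data follows; in the real pipeline the mask would come from the DeepLab v3+ model.

```python
# Core of automated MIP generation: suppress unwanted vessels, then take
# the maximum intensity projection. Volume and mask are stand-in data.
import numpy as np

tof_volume = np.random.rand(160, 256, 256).astype(np.float32)  # TOF-MRA stack (z, y, x)
unwanted = np.zeros(tof_volume.shape, dtype=bool)              # predicted external-carotid mask

masked = np.where(unwanted, tof_volume.min(), tof_volume)      # suppress masked voxels
mip_axial = masked.max(axis=0)                                 # project along the slice axis
```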
(This article belongs to the Special Issue MR-Based Neuroimaging)

19 pages, 30651 KiB  
Article
Comparative Evaluation of Commercial, Freely Available, and Open-Source Tools for Single-Cell Analysis Within Freehand-Defined Histological Brightfield Image Regions of Interest
by Filippo Piccinini, Marcella Tazzari, Maria Maddalena Tumedei, Nicola Normanno, Gastone Castellani and Antonella Carbonaro
Technologies 2025, 13(3), 110; https://doi.org/10.3390/technologies13030110 - 7 Mar 2025
Abstract
In the field of histological analysis, one of the typical issues is the analysis of single cells contained in regions of interest (ROIs). Today, several commercial, freely available, and open-source software options are accessible for this task. However, the literature lacks recent extensive reviews that summarise the functionalities of the tools currently available and provide guidance on selecting the most suitable option for analysing specific cases, for instance, irregular freehand-defined ROIs on brightfield images. In this work, we reviewed and compared 14 software tools tailored for single-cell analysis within a 2D histological freehand-defined image ROI. Precisely, six open-source tools (i.e., CellProfiler, Cytomine, Digital Slide Archive, Icy, ImageJ/Fiji, QuPath), four freely available tools (i.e., Aperio ImageScope, NIS Elements Viewer, Sedeen, SlideViewer), and four commercial tools (i.e., Amira, Arivis, HALO, Imaris) were considered. We focused on three key aspects: (a) the capacity to handle large file formats such as SVS, DICOM, and TIFF, ensuring compatibility with diverse datasets; (b) the flexibility in defining irregular ROIs, whether through automated extraction or manual delineation, encompassing square, circular, polygonal, and freehand shapes to accommodate varied research needs; and (c) the capability to classify single cells within selected ROIs on brightfield images, ranging from fully automated to semi-automated or manual approaches, requiring different levels of user involvement. This work provides a deeper understanding of the strengths and limitations of the different software platforms, facilitating informed decision making for researchers looking for a tool to analyse histological brightfield images.
(This article belongs to the Section Information and Communication Technologies)

10 pages, 3707 KiB  
Article
Unveiling Software Limitations in the Assessment of the Minimum Sectional Area and Volume in Cleft Lip and Palate Patients
by Beethoven Estevao Costa, Renato Yassutaka Faria Yaedú, Maísa Pereira-Silva, André Luis da Silva Fabris, Michele Garcia-Usó, Osvaldo Magro Filho and Simone Soares
Life 2025, 15(2), 226; https://doi.org/10.3390/life15020226 - 4 Feb 2025
Abstract
The increasing use of cone beam computed tomography (CBCT) has led to a growing demand for DICOM software that enables the assessment and measurement of craniofacial structures. This study aimed to compare the airway volume and the minimum axial area in patients with cleft lip and palate using five different imaging software programs: Dolphin3D, InVivo Dental, ITK-SNAP, InVesalius, and NemoFAB. Initially, 100 CBCT scans were selected by an examiner, and their corresponding DICOM files were collected. The oropharyngeal segments were delineated following the manufacturer’s guidelines, using two different segmentation techniques: interactive and fixed threshold. The results were analyzed using the Friedman test and Wilcoxon post hoc test, with a 5% significance level for all statistical tests. The findings for both the minimum axial area and total volume revealed that the median values across the software groups were higher than expected, and significant differences were observed when comparing the groups (p < 0.001). All five software programs showed notable differences in their outputs. Specifically, a statistically significant difference in volume was found across all groups, except between InVivo and ITK-SNAP. It is recommended that pre- and post-treatment comparisons be performed using the same software for consistency.
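
The repeated-measures analysis described here (Friedman test with Wilcoxon post hoc comparisons) is available directly in SciPy. A minimal sketch with hypothetical volumes for three of the packages; in practice the pairwise p-values would also be corrected for multiple comparisons:

```python
# Sketch of the Friedman + Wilcoxon post hoc analysis; values are hypothetical.
from itertools import combinations
import numpy as np
from scipy import stats

volumes = {  # oropharyngeal volumes (cm^3) per software, same five scans
    "Dolphin3D": np.array([21.3, 18.9, 25.1, 19.7, 22.4]),
    "InVivo":    np.array([20.8, 18.4, 24.6, 19.2, 21.9]),
    "ITK-SNAP":  np.array([20.9, 18.5, 24.7, 19.3, 22.0]),
}

stat, p = stats.friedmanchisquare(*volumes.values())
print(f"Friedman: chi2={stat:.3f}, p={p:.4f}")

for (name_a, a), (name_b, b) in combinations(volumes.items(), 2):
    w, pw = stats.wilcoxon(a, b)   # paired post hoc comparison
    print(f"{name_a} vs {name_b}: p={pw:.4f}")
```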
(This article belongs to the Section Medical Research)

10 pages, 1064 KiB  
Article
Sex-Specific Analysis of Carotid Artery Through Bilateral 3D Modeling via MRI and DICOM Processing
by Pedro Martinez, Jose Roberto Torres, Daniel Conde, Manuel Gomez and Alvaro N. Gurovich
Bioengineering 2025, 12(2), 142; https://doi.org/10.3390/bioengineering12020142 - 1 Feb 2025
Abstract
The present study explores sex differences in carotid artery anatomy using non-invasive magnetic resonance imaging (MRI) and a DICOM processing protocol. Bilateral three-dimensional models of the carotid artery were constructed for 20 healthy young adults, 10 males and 10 females, in order to evaluate key anatomical landmarks; these include the bifurcation diameter and angle, as well as the internal and external carotid arteries (ICA and ECA) for both sides (left and right). The results show that males exhibit larger bifurcation and ECA diameters, which could indicate reduced endothelial shear stress (ESS). However, as no sex difference in ESS has previously been observed, compensatory factors, such as blood pressure, might be in play. This underscores the interaction between vascular geometry and stroke risk disparities; future research is encouraged to analyze diverse demographics and employ flow modeling techniques to further assess the connection between anatomical differences within a given population and vascular outcomes.
(This article belongs to the Special Issue Advances in Physical Therapy and Rehabilitation)

11 pages, 3039 KiB  
Article
Development of Three-Dimensional Anatomical Models of Dogs with Congenital Extrahepatic Portosystemic Shunts
by Éverton Oliveira Calixto, Erika Toledo da Fonseca, Anna Luiza Campos Pollon and Antônio Chaves de Assís Neto
Animals 2025, 15(3), 352; https://doi.org/10.3390/ani15030352 - 26 Jan 2025
Abstract
The aim of this study was to develop three-dimensional anatomical models of dogs with congenital extrahepatic portosystemic shunts (CEPSs) using 3D printing, as well as to detail their development process and compare the final models to volume rendering (VR) derived from computed tomography (CT) scans. CT scans in the Digital Imaging and Communications in Medicine (DICOM) format of two canine patients were used—one with splenocaval deviation and the other with right gastrocaval deviation. The images were segmented using 3DSlicer software, generating 3D files in Standard Tessellation Language (STL) format, which were then subjected to refinement and mesh adjustment using Blender software. The models were printed on a J750™ Digital Anatomy™ printer, followed by post-processing in a 2% sodium hydroxide solution for 72 h, with subsequent rinsing to remove support resin residues. The printed models showed colored anatomical structures, including the liver; spleen; kidneys; part of the arterial, venous, and portal circulations; and CEPSs. For comparison purposes, VR of the scans was recreated in the RadiAnt DICOM Viewer software. Despite some limitations of the segmentation software, the 3D-printed models effectively represented the anatomy of the patients and the CEPSs, demonstrating good equivalence to the VR.
(This article belongs to the Section Veterinary Clinical Studies)

14 pages, 4444 KiB  
Article
Automatic Segmentation of the Nasolacrimal Canal: Application of the nnU-Net v2 Model in CBCT Imaging
by Emre Haylaz, Ismail Gumussoy, Suayip Burak Duman, Fahrettin Kalabalik, Muhammet Can Eren, Mustafa Sami Demirsoy, Ozer Celik and Ibrahim Sevki Bayrakdar
J. Clin. Med. 2025, 14(3), 778; https://doi.org/10.3390/jcm14030778 - 25 Jan 2025
Cited by 5
Abstract
Background/Objectives: There are various challenges in the segmentation of anatomical structures with artificial intelligence due to the different structural features of the relevant region/tissue. The aim of this study was to detect the nasolacrimal canal (NLC) using the nnU-Net v2 convolutional neural network (CNN) model in cone-beam computed tomography (CBCT) images and to evaluate the model’s performance in automatic segmentation. Methods: CBCT images of 100 patients were randomly selected from the data archive. The raw data were transferred in DICOM format to the 3D Slicer imaging software (Version 4.10.2; MIT, Massachusetts, USA). The NLC was labeled manually using polygonal annotation. The dataset was split into training, validation and test sets in a ratio of 8:1:1. The nnU-Net v2 architecture was applied to the training and test datasets to predict and generate appropriate algorithm weight factors. A confusion matrix was used to check the accuracy and performance of the model. From the test results, the Dice Coefficient (DC), Intersection over Union (IoU), F1-Score and 95% Hausdorff distance (95% HD) metrics were calculated. Results: On the test set, the DC, IoU, F1-Score and 95% HD values were 0.8465, 0.7341, 0.8480 and 0.9460, respectively. From these data, the receiver-operating characteristic (ROC) curve was drawn, and the area under the curve (AUC) was 0.96. Conclusions: These results show that the proposed nnU-Net v2 model achieves NLC segmentation on CBCT images with high precision and accuracy. The automated segmentation of the NLC may assist clinicians in determining the surgical technique to be used to remove lesions, especially those affecting the anterior wall of the maxillary sinus.
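
Of the reported metrics, DC and IoU come straight from voxel counts of the predicted and ground-truth masks; the 95% HD additionally needs surface-distance percentiles and is omitted here. A sketch with stand-in masks:

```python
# Overlap metrics from binary masks (stand-in random data). The 95% Hausdorff
# distance requires surface-distance percentiles and is not sketched here.
import numpy as np

pred = np.random.rand(64, 64, 64) > 0.5   # stand-in predicted NLC mask
gt   = np.random.rand(64, 64, 64) > 0.5   # stand-in ground-truth mask

tp = np.logical_and(pred, gt).sum()
fp = np.logical_and(pred, ~gt).sum()
fn = np.logical_and(~pred, gt).sum()

dice = 2 * tp / (2 * tp + fp + fn)        # equals the F1-score for binary masks
iou  = tp / (tp + fp + fn)
print(f"Dice={dice:.4f}, IoU={iou:.4f}")
```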
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

18 pages, 1820 KiB  
Article
DicomOS: A Preliminary Study on a Linux-Based Operating System Tailored for Medical Imaging and Enhanced Interoperability in Radiology Workflows
by Tiziana Currieri, Orazio Gambino, Roberto Pirrone and Salvatore Vitabile
Electronics 2025, 14(2), 330; https://doi.org/10.3390/electronics14020330 - 15 Jan 2025
Abstract
In this paper, we propose a Linux-based operating system, namely DicomOS, tailored for medical imaging and enhanced interoperability, addressing user-friendly functionality and the main critical needs in radiology workflows. Traditional operating systems in clinical settings face limitations, such as fragmented software ecosystems and platform-specific restrictions, which disrupt collaborative workflows and hinder diagnostic efficiency. Built on Ubuntu 22.04 LTS, DicomOS integrates essential DICOM functionalities directly into the OS, providing a unified, cohesive platform for image visualization, annotation, and sharing. Methods include custom configurations and the development of graphical user interfaces (GUIs) and command-line tools, making them accessible to both medical professionals and developers. Key applications such as ITK-SNAP and 3D Slicer are seamlessly integrated alongside specialized GUIs that enhance usability without requiring extensive technical expertise. As preliminary work, DicomOS demonstrates the potential to simplify medical imaging workflows, reduce cognitive load, and promote efficient data sharing across diverse clinical settings. However, further evaluations, including structured clinical tests and broader deployment with a distributable ISO image, are needed to validate its effectiveness and scalability in real-world scenarios. The results indicate that DicomOS provides a versatile and adaptable solution, supporting radiologists in routine tasks while facilitating customization for advanced users. As an open-source platform, DicomOS has the potential to evolve alongside medical imaging needs, positioning it as a valuable resource for enhancing workflow integration and clinical collaboration.
(This article belongs to the Section Computer Science & Engineering)
