Topic Editors

Department of Radiology, Jagiellonian University Medical College, 19 Kopernika Street, 31-501 Cracow, Poland
Institute of Electronics, Lodz University of Technology, Wolczanska 211/215, 90-924 Łódź, Poland
Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland

Artificial Intelligence in Medical Imaging and Image Processing

Abstract submission deadline
closed (31 October 2023)
Manuscript submission deadline
closed (31 December 2023)
Viewed by
95081

Topic Information

Dear Colleagues,

In modern healthcare, the importance of computer-aided diagnosis is becoming increasingly evident, with clear benefits for medical professionals and patients. The automation of processes traditionally performed by human professionals is also growing in importance. Image analysis can be supported by networks that carry out multilayer analyses of patterns, collectively called artificial intelligence (AI). When trained on large datasets of input data, such networks can suggest results with low error bias. Medical imaging focused on pattern detection is typically supported by AI algorithms. AI can serve as an important aid in the three major decision-making steps of the medical imaging workflow: detection (image segmentation), recognition (assignment to a class), and result description (transformation of the result into natural language). The implementation of AI algorithms can contribute to the standardization of the diagnostic process and markedly reduce the time needed to detect pathology and describe the results. With AI support, medical specialists can work more effectively, which can improve healthcare quality. As AI has been a topic of interest for some time, there are many approaches to and techniques for its implementation, based on different computing methods and designed to work in various systems. The aim of this Special Issue is to present current knowledge of the AI methods used in medical systems, together with their applications in different fields of diagnostic imaging. Our goal is for this collection of works to contribute to an exchange of knowledge resulting in a better understanding of the technical aspects of AI and its applications in modern radiology.

Dr. Rafał Obuchowicz
Prof. Dr. Michał Strzelecki
Prof. Dr. Adam Piorkowski
Topic Editors

Keywords

  • artificial intelligence
  • computer-aided diagnosis
  • medical imaging
  • image analysis
  • image processing

Participating Journals

Journal Name                  Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
BioMed                        -              -          2021           27 Days                  CHF 1000
Cancers                       5.2            7.4        2009           17.9 Days                CHF 2900
Diagnostics                   3.6            3.6        2011           20.7 Days                CHF 2600
Journal of Clinical Medicine  3.9            5.4        2012           17.9 Days                CHF 2600
Tomography                    1.9            2.3        2015           24.5 Days                CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (35 papers)

16 pages, 271 KiB  
Editorial
Clinical Applications of Artificial Intelligence in Medical Imaging and Image Processing—A Review
by Rafał Obuchowicz, Michał Strzelecki and Adam Piórkowski
Cancers 2024, 16(10), 1870; https://doi.org/10.3390/cancers16101870 - 14 May 2024
Viewed by 370
Abstract
Artificial intelligence (AI) is currently becoming a leading field in data processing [...] Full article
34 pages, 3877 KiB  
Article
Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble
by Md. Mamun Hossain, Md. Moazzem Hossain, Most. Binoee Arefin, Fahima Akhtar and John Blake
Diagnostics 2024, 14(1), 89; https://doi.org/10.3390/diagnostics14010089 - 30 Dec 2023
Cited by 1 | Viewed by 2241
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway for further enhancing diagnostic accuracy. This study introduces a cutting-edge approach employing the Max Voting Ensemble Technique for robust skin cancer classification on ISIC 2018: Task 1-2 dataset. We incorporate a range of cutting-edge, pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages the synergistic capabilities of these models by combining their complementary features to elevate classification performance further. In our approach, input images undergo preprocessing for model compatibility. The ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction. These are subsequently aggregated using the max voting ensemble technique to yield the final classification, with the majority-voted class serving as the conclusive prediction. Through comprehensive testing on a diverse dataset, our ensemble outperformed individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, thus demonstrating superior diagnostic reliability and accuracy. We evaluated the effectiveness of our proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. 
By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes. Full article
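The max-voting aggregation described in this abstract is a standard ensemble technique: every pre-trained model casts one class label per image, and the most frequent label wins. The sketch below illustrates the idea on hypothetical label arrays; it is not the authors' code, and the vote data are invented for illustration.

```python
import numpy as np

def max_voting(predictions):
    """Aggregate per-model class predictions by majority (max) voting.

    predictions: 2D array-like of shape (n_models, n_samples) holding the
    class label each model assigns to each sample.
    Returns the majority-voted label per sample (ties resolved in favour
    of the smallest label, as np.argmax over np.bincount does).
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    voted = []
    for sample_preds in predictions.T:          # iterate over samples
        counts = np.bincount(sample_preds, minlength=n_classes)
        voted.append(int(np.argmax(counts)))    # class with the most votes
    return voted

# Hypothetical labels from three models for four lesions (0 = benign, 1 = malignant)
votes = [[1, 0, 1, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
print(max_voting(votes))  # -> [1, 0, 1, 0]
```

In practice each row would come from one pre-trained network's `argmax` over its softmax output; weighted or soft voting are common variants.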

13 pages, 1606 KiB  
Article
Dynamic Chest Radiograph Simulation Technique with Deep Convolutional Neural Networks: A Proof-of-Concept Study
by Dongrong Yang, Yuhua Huang, Bing Li, Jing Cai and Ge Ren
Cancers 2023, 15(24), 5768; https://doi.org/10.3390/cancers15245768 - 8 Dec 2023
Cited by 1 | Viewed by 791
Abstract
In this study, we present an innovative approach that harnesses deep neural networks to simulate respiratory lung motion and extract local functional information from single-phase chest X-rays, thus providing valuable auxiliary data for early diagnosis of lung cancer. A novel radiograph motion simulation (RMS) network was developed by combining a U-Net and a long short-term memory (LSTM) network for image generation and sequential prediction. By utilizing a spatial transformer network to deform input images, our proposed network ensures accurate image generation. We conducted both qualitative and quantitative assessments to evaluate the effectiveness and accuracy of our proposed network. The simulated respiratory motion closely aligns with pulmonary biomechanics and reveals enhanced details of pulmonary diseases. The proposed network demonstrates precise prediction of respiratory motion in the test cases, achieving remarkable average Dice scores exceeding 0.96 across all phases. The maximum variation in lung length prediction was observed during the end-exhale phase, with average deviation of 4.76 mm (±6.64) for the left lung and 4.77 mm (±7.00) for the right lung. This research validates the feasibility of generating patient-specific respiratory motion profiles from single-phase chest radiographs. Full article

26 pages, 8038 KiB  
Article
Automated Computer-Assisted Medical Decision-Making System Based on Morphological Shape and Skin Thickness Analysis for Asymmetry Detection in Mammographic Images
by Rafael Bayareh-Mancilla, Luis Alberto Medina-Ramos, Alfonso Toriz-Vázquez, Yazmín Mariela Hernández-Rodríguez and Oscar Eduardo Cigarroa-Mayorga
Diagnostics 2023, 13(22), 3440; https://doi.org/10.3390/diagnostics13223440 - 14 Nov 2023
Cited by 8 | Viewed by 1187
Abstract
Breast cancer is a significant health concern for women, emphasizing the need for early detection. This research focuses on developing a computer system for asymmetry detection in mammographic images, employing two critical approaches: Dynamic Time Warping (DTW) for shape analysis and the Growing Seed Region (GSR) method for breast skin segmentation. The methodology involves processing mammograms in DICOM format. In the morphological study, a centroid-based mask is computed using extracted images from DICOM files. Distances between the centroid and the breast perimeter are then calculated to assess similarity through Dynamic Time Warping analysis. For skin thickness asymmetry identification, a seed is initially set on skin pixels and expanded based on intensity and depth similarities. The DTW analysis achieves an accuracy of 83%, correctly identifying 23 possible asymmetry cases out of 20 ground truth cases. The GRS method is validated using Average Symmetric Surface Distance and Relative Volumetric metrics, yielding similarities of 90.47% and 66.66%, respectively, for asymmetry cases compared to 182 ground truth segmented images, successfully identifying 35 patients with potential skin asymmetry. Additionally, a Graphical User Interface is designed to facilitate the insertion of DICOM files and provide visual representations of asymmetrical findings for validation and accessibility by physicians. Full article
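Dynamic Time Warping, used above to compare centroid-to-perimeter distance profiles of the two breasts, can be sketched with the classic dynamic-programming recurrence. This is an illustrative implementation under textbook assumptions, not the authors' code.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D sequences,
    e.g. centroid-to-perimeter distance profiles of the two breasts."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical shapes warp onto each other at zero cost despite different lengths
print(dtw_distance([1.0, 2.0, 3.0], [1.0, 1.0, 2.0, 3.0]))  # -> 0.0
```

A low DTW distance between the left- and right-breast profiles would indicate symmetric shapes; a high distance flags possible asymmetry.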

19 pages, 9472 KiB  
Article
Towards Realistic 3D Models of Tumor Vascular Networks
by Max C. Lindemann, Lukas Glänzer, Anjali A. Roeth, Thomas Schmitz-Rode and Ioana Slabu
Cancers 2023, 15(22), 5352; https://doi.org/10.3390/cancers15225352 - 9 Nov 2023
Cited by 1 | Viewed by 879
Abstract
For reliable in silico or in vitro investigations in, for example, biosensing and drug delivery applications, accurate models of tumor vascular networks down to the capillary size are essential. Compared to images acquired with conventional medical imaging techniques, digitalized histological tumor slices have a higher resolution, enabling the delineation of capillaries. Volume rendering procedures can then be used to generate a 3D model. However, the preparation of such slices leads to misalignments in relative slice orientation between consecutive slices. Thus, image registration algorithms are necessary to re-align the slices. Here, we present an algorithm for the registration and reconstruction of a vascular network from histologic slices applied to 169 tumor slices. The registration includes two steps. First, consecutive images are incrementally pre-aligned using feature- and area-based transformations. Second, using the previous transformations, parallel registration for all images is enabled. Combining intensity- and color-based thresholds along with heuristic analysis, vascular structures are segmented. A 3D interpolation technique is used for volume rendering. This results in a 3D vascular network with approximately 400–450 vessels with diameters down to 25–30 µm. A delineation of vessel structures with close distance was limited in areas of high structural density. Improvement can be achieved by using images with higher resolution and or machine learning techniques. Full article

38 pages, 3291 KiB  
Article
Exploration and Enhancement of Classifiers in the Detection of Lung Cancer from Histopathological Images
by Karthikeyan Shanmugam and Harikumar Rajaguru
Diagnostics 2023, 13(20), 3289; https://doi.org/10.3390/diagnostics13203289 - 23 Oct 2023
Cited by 5 | Viewed by 1066
Abstract
Lung cancer is a prevalent malignancy that impacts individuals of all genders and is often diagnosed late due to delayed symptoms. To catch it early, researchers are developing algorithms to study lung cancer images. The primary objective of this work is to propose a novel approach for the detection of lung cancer using histopathological images. In this work, the histopathological images underwent preprocessing, followed by segmentation using a modified approach of KFCM-based segmentation and the segmented image intensity values were dimensionally reduced using Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO). Algorithms such as KL Divergence and Invasive Weed Optimization (IWO) are used for feature selection. Seven different classifiers such as SVM, KNN, Random Forest, Decision Tree, Softmax Discriminant, Multilayer Perceptron, and BLDC were used to analyze and classify the images as benign or malignant. Results were compared using standard metrics, and kappa analysis assessed classifier agreement. The Decision Tree Classifier with GWO feature extraction achieved good accuracy of 85.01% without feature selection and hyperparameter tuning approaches. Furthermore, we present a methodology to enhance the accuracy of the classifiers by employing hyperparameter tuning algorithms based on Adam and RAdam. By combining features from GWO and IWO, and using the RAdam algorithm, the Decision Tree classifier achieves the commendable accuracy of 91.57%. Full article

18 pages, 35266 KiB  
Article
Retrospective Motion Artifact Reduction by Spatial Scaling of Liver Diffusion-Weighted Images
by Johannes Raspe, Felix N. Harder, Selina Rupp, Sean McTavish, Johannes M. Peeters, Kilian Weiss, Marcus R. Makowski, Rickmer F. Braren, Dimitrios C. Karampinos and Anh T. Van
Tomography 2023, 9(5), 1839-1856; https://doi.org/10.3390/tomography9050146 - 6 Oct 2023
Cited by 2 | Viewed by 1210
Abstract
Cardiac motion causes unpredictable signal loss in respiratory-triggered diffusion-weighted magnetic resonance imaging (DWI) of the liver, especially inside the left lobe. The left liver lobe may thus be frequently neglected in the clinical evaluation of liver DWI. In this work, a data-driven algorithm that relies on the statistics of the signal in the left liver lobe to mitigate the motion-induced signal loss is presented. The proposed data-driven algorithm utilizes the exclusion of severely corrupted images with subsequent spatially dependent image scaling based on a signal-loss model to correctly combine the multi-average diffusion-weighted images. The signal in the left liver lobe is restored and the liver signal is more homogeneous after applying the proposed algorithm. Furthermore, overestimation of the apparent diffusion coefficient (ADC) in the left liver lobe is reduced. The proposed algorithm can therefore contribute to reduce the motion-induced bias in DWI of the liver and help to increase the diagnostic value of DWI in the left liver lobe. Full article

11 pages, 2500 KiB  
Communication
Generating Synthetic Radiological Images with PySynthMRI: An Open-Source Cross-Platform Tool
by Luca Peretti, Graziella Donatelli, Matteo Cencini, Paolo Cecchi, Guido Buonincontri, Mirco Cosottini, Michela Tosetti and Mauro Costagli
Tomography 2023, 9(5), 1723-1733; https://doi.org/10.3390/tomography9050137 - 11 Sep 2023
Cited by 3 | Viewed by 1469
Abstract
Synthetic MR Imaging allows for the reconstruction of different image contrasts from a single acquisition, reducing scan times. Commercial products that implement synthetic MRI are used in research. They rely on vendor-specific acquisitions and do not include the possibility of using custom multiparametric imaging techniques. We introduce PySynthMRI, an open-source tool with a user-friendly interface that uses a set of input images to generate synthetic images with diverse radiological contrasts by varying representative parameters of the desired target sequence, including the echo time, repetition time and inversion time(s). PySynthMRI is written in Python 3.6, and it can be executed under Linux, Windows, or MacOS as a python script or an executable. The tool is free and open source and is developed while taking into consideration the possibility of software customization by the end user. PySynthMRI generates synthetic images by calculating the pixelwise signal intensity as a function of a set of input images (e.g., T1 and T2 maps) and simulated scanner parameters chosen by the user via a graphical interface. The distribution provides a set of default synthetic contrasts, including T1w gradient echo, T2w spin echo, FLAIR and Double Inversion Recovery. The synthetic images can be exported in DICOM or NiFTI format. PySynthMRI allows for the fast synthetization of differently weighted MR images based on quantitative maps. Specialists can use the provided signal models to retrospectively generate contrasts and add custom ones. The modular architecture of the tool can be exploited to add new features without impacting the codebase. Full article
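PySynthMRI's exact signal models are defined in its own documentation; as an illustration of the general idea of pixelwise synthesis from quantitative maps, the sketch below applies the textbook spin-echo equation S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2) to hypothetical tissue parameters. The tissue values and function name are assumptions, not taken from the tool.

```python
import numpy as np

def synthetic_spin_echo(pd_map, t1_map, t2_map, tr, te):
    """Pixelwise standard spin-echo signal model:
        S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    pd_map, t1_map, t2_map: quantitative maps of equal shape; tr, te in ms.
    Long TR with long TE gives T2 weighting; short TR with short TE gives
    T1 weighting; long TR with short TE approximates PD weighting."""
    t1 = np.asarray(t1_map, dtype=float)
    t2 = np.asarray(t2_map, dtype=float)
    pd = np.asarray(pd_map, dtype=float)
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Two hypothetical tissues: white-matter-like (T1=800, T2=70) and CSF-like (T1=4000, T2=2000)
pd  = np.array([0.7, 1.0])
t1  = np.array([800.0, 4000.0])
t2  = np.array([70.0, 2000.0])
t2w = synthetic_spin_echo(pd, t1, t2, tr=6000.0, te=120.0)
print(t2w[1] > t2w[0])  # CSF brighter than white matter on a T2-weighted contrast -> True
```

Varying `tr` and `te` interactively over fixed maps is exactly the kind of retrospective contrast generation the tool's interface exposes.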

11 pages, 2052 KiB  
Article
Chest X-ray Foreign Objects Detection Using Artificial Intelligence
by Jakub Kufel, Katarzyna Bargieł-Łączek, Maciej Koźlik, Łukasz Czogalik, Piotr Dudek, Mikołaj Magiera, Wiktoria Bartnikowska, Anna Lis, Iga Paszkiewicz, Szymon Kocot, Maciej Cebula, Katarzyna Gruszczyńska and Zbigniew Nawrat
J. Clin. Med. 2023, 12(18), 5841; https://doi.org/10.3390/jcm12185841 - 8 Sep 2023
Cited by 3 | Viewed by 1866
Abstract
Diagnostic imaging has become an integral part of the healthcare system. In recent years, scientists around the world have been working on artificial intelligence-based tools that help in achieving better and faster diagnoses. Their accuracy is crucial for successful treatment, especially for imaging diagnostics. This study used a deep convolutional neural network to detect four categories of objects on digital chest X-ray images. The data were obtained from the publicly available National Institutes of Health (NIH) Chest X-ray (CXR) Dataset. In total, 112,120 CXRs from 30,805 patients were manually checked for foreign objects: vascular port, shoulder endoprosthesis, necklace, and implantable cardioverter-defibrillator (ICD). Then, they were annotated with the use of a computer program, and the necessary image preprocessing was performed, such as resizing, normalization, and cropping. The object detection model was trained using the You Only Look Once v8 architecture and the Ultralytics framework. The results showed not only that the obtained average precision of foreign object detection on the CXR was 0.815 but also that the model can be useful in detecting foreign objects on the CXR images. Models of this type may be used as a tool for specialists, in particular, with the growing popularity of radiology comes an increasing workload. We are optimistic that it could accelerate and facilitate the work to provide a faster diagnosis. Full article

9 pages, 7581 KiB  
Article
Deep Learning-Based Versus Iterative Image Reconstruction for Unenhanced Brain CT: A Quantitative Comparison of Image Quality
by Andrea Cozzi, Maurizio Cè, Giuseppe De Padova, Dario Libri, Nazarena Caldarelli, Fabio Zucconi, Giancarlo Oliva and Michaela Cellina
Tomography 2023, 9(5), 1629-1637; https://doi.org/10.3390/tomography9050130 - 31 Aug 2023
Cited by 2 | Viewed by 1706
Abstract
This exploratory retrospective study aims to quantitatively compare the image quality of unenhanced brain computed tomography (CT) reconstructed with an iterative (AIDR-3D) and a deep learning-based (AiCE) reconstruction algorithm. After a preliminary phantom study, AIDR-3D and AiCE reconstructions (0.5 mm thickness) of 100 consecutive brain CTs acquired in the emergency setting on the same 320-detector row CT scanner were retrospectively analyzed, calculating image noise reduction attributable to the AiCE algorithm, artifact indexes in the posterior cranial fossa, and contrast-to-noise ratios (CNRs) at the cortical and thalamic levels. In the phantom study, the spatial resolution of the two datasets proved to be comparable; conversely, AIDR-3D reconstructions showed a broader noise pattern. In the human study, median image noise was lower with AiCE compared to AIDR-3D (4.7 vs. 5.3, p < 0.001, median 19.6% noise reduction), whereas AIDR-3D yielded a lower artifact index than AiCE (7.5 vs. 8.4, p < 0.001). AiCE also showed higher median CNRs at the cortical (2.5 vs. 1.8, p < 0.001) and thalamic levels (2.8 vs. 1.7, p < 0.001). These results highlight how image quality improvements granted by deep learning-based (AiCE) and iterative (AIDR-3D) image reconstruction algorithms vary according to different brain areas. Full article

25 pages, 22409 KiB  
Review
Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging
by Reabal Najjar
Diagnostics 2023, 13(17), 2760; https://doi.org/10.3390/diagnostics13172760 - 25 Aug 2023
Cited by 21 | Viewed by 31082
Abstract
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles that are inherent to AI-driven radiology—data quality, the ’black box’ enigma, infrastructural and technical complexities, as well as ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility. Full article

13 pages, 3305 KiB  
Article
Deep-Learning-Based Dose Predictor for Glioblastoma–Assessing the Sensitivity and Robustness for Dose Awareness in Contouring
by Robert Poel, Amith J. Kamath, Jonas Willmann, Nicolaus Andratschke, Ekin Ermiş, Daniel M. Aebersold, Peter Manser and Mauricio Reyes
Cancers 2023, 15(17), 4226; https://doi.org/10.3390/cancers15174226 - 23 Aug 2023
Cited by 1 | Viewed by 1136
Abstract
External beam radiation therapy requires a sophisticated and laborious planning procedure. To improve the efficiency and quality of this procedure, machine-learning models that predict these dose distributions were introduced. The most recent dose prediction models are based on deep-learning architectures called 3D U-Nets that give good approximations of the dose in 3D almost instantly. Our purpose was to train such a 3D dose prediction model for glioblastoma VMAT treatment and test its robustness and sensitivity for the purpose of quality assurance of automatic contouring. From a cohort of 125 glioblastoma (GBM) patients, VMAT plans were created according to a clinical protocol. The initial model was trained on a cascaded 3D U-Net. A total of 60 cases were used for training, 15 for validation and 20 for testing. The prediction model was tested for sensitivity to dose changes when subject to realistic contour variations. Additionally, the model was tested for robustness by exposing it to a worst-case test set containing out-of-distribution cases. The initially trained prediction model had a dose score of 0.94 Gy and a mean DVH (dose volume histograms) score for all structures of 1.95 Gy. In terms of sensitivity, the model was able to predict the dose changes that occurred due to the contour variations with a mean error of 1.38 Gy. We obtained a 3D VMAT dose prediction model for GBM with limited data, providing good sensitivity to realistic contour variations. We tested and improved the model’s robustness by targeted updates to the training set, making it a useful technique for introducing dose awareness in the contouring evaluation and quality assurance process. Full article

9 pages, 2518 KiB  
Article
Image Quality Improvement in Deep Learning Image Reconstruction of Head Computed Tomography Examination
by Michal Pula, Emilia Kucharczyk, Agata Zdanowicz and Maciej Guzinski
Tomography 2023, 9(4), 1485-1493; https://doi.org/10.3390/tomography9040118 - 9 Aug 2023
Cited by 2 | Viewed by 2125
Abstract
In this study, we assess image quality in computed tomography scans reconstructed via DLIR (Deep Learning Image Reconstruction) and compare it with iterative reconstruction ASIR-V (Adaptive Statistical Iterative Reconstruction) in CT (computed tomography) scans of the head. The CT scans of 109 patients were subjected to both objective and subjective evaluation of image quality. The objective evaluation was based on the SNR (signal-to-noise ratio) and CNR (contrast-to-noise ratio) of the brain’s gray and white matter. The regions of interest for our study were set in the BGA (basal ganglia area) and PCF (posterior cranial fossa). Simultaneously, a subjective assessment of image quality, based on brain structure visibility, was conducted by experienced radiologists. In the assessed scans, we obtained up to a 54% increase in SNR for gray matter and a 60% increase for white matter using DLIR in comparison to ASIR-V. Moreover, we achieved a CNR increment of 58% in the BGA structures and 50% in the PCF. In the subjective assessment of the obtained images, DLIR had a mean rating score of 2.8, compared to the mean score of 2.6 for ASIR-V images. In conclusion, DLIR shows improved image quality compared to the standard iterative reconstruction of CT images of the head. Full article
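The SNR and CNR figures of merit used in this study are simple region-of-interest statistics. The sketch below uses common textbook definitions (SNR = ROI mean / ROI standard deviation; CNR = absolute mean difference between two tissues / noise SD), which may differ in detail from the study's exact formulas; the HU values are hypothetical.

```python
import numpy as np

def roi_snr(roi):
    """SNR of a region of interest: mean signal over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def roi_cnr(roi_a, roi_b, noise_sd):
    """CNR between two tissues: absolute mean difference over the noise SD."""
    a = np.asarray(roi_a, dtype=float)
    b = np.asarray(roi_b, dtype=float)
    return abs(a.mean() - b.mean()) / noise_sd

gray  = np.array([38.0, 40.0, 42.0])   # hypothetical HU samples, gray matter ROI
white = np.array([28.0, 30.0, 32.0])   # hypothetical HU samples, white matter ROI
print(round(roi_cnr(gray, white, noise_sd=5.0), 2))  # -> 2.0
```

Comparing such ratios between DLIR and ASIR-V reconstructions of the same slices is what yields the percentage improvements reported above.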

11 pages, 1713 KiB  
Article
Effects of Path-Finding Algorithms on the Labeling of the Centerlines of Circle of Willis Arteries
by Se-On Kim and Yoon-Chul Kim
Tomography 2023, 9(4), 1423-1433; https://doi.org/10.3390/tomography9040113 - 24 Jul 2023
Cited by 3 | Viewed by 1215
Abstract
Quantitative analysis of intracranial vessel segments typically requires the identification of the vessels’ centerlines, and a path-finding algorithm can be used to detect them automatically. This study compared the performance of path-finding algorithms for vessel labeling. Three-dimensional (3D) time-of-flight magnetic resonance angiography (MRA) images from a publicly available dataset were considered for this study. After manual annotation of the endpoints of each vessel segment, three path-finding methods were compared: (Method 1) the depth-first search algorithm, (Method 2) Dijkstra’s algorithm, and (Method 3) the A* algorithm. The rate of correctly found paths was quantified and compared among the three methods in each segment of the circle of Willis arteries. In the analysis of 840 vessel segments, Method 2 showed the highest accuracy (97.1%) of correctly found paths, while Methods 1 and 3 showed accuracies of 83.5% and 96.1%, respectively. The AComm artery was highly inaccurately identified by Method 1, with an accuracy of 43.2%. Incorrect paths from Method 2 were noted in the R-ICA, L-ICA, and R-PCA-P1 segments. The Dijkstra and A* algorithms showed similar path-finding accuracy, and they were comparable in speed in the circle of Willis arterial segments. Full article
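Of the three methods compared, Dijkstra's algorithm performed best. A minimal sketch of it on a toy weighted graph follows; on a vessel centerline graph the nodes would be voxels or branch points and the edge costs Euclidean distances, but the graph below is hypothetical, not derived from MRA data.

```python
import heapq

def dijkstra_path(graph, start, goal):
    """Shortest path by Dijkstra's algorithm on a weighted graph given as
    {node: [(neighbour, cost), ...]}; returns the list of nodes on the path."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nb, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    # walk the predecessor chain back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy graph: two routes from A to D; the cheaper one goes through C
toy = {"A": [("B", 5.0), ("C", 1.0)],
       "B": [("D", 1.0)],
       "C": [("D", 2.0)]}
print(dijkstra_path(toy, "A", "D"))  # -> ['A', 'C', 'D']
```

A* differs only in adding an admissible heuristic (e.g. straight-line distance to the endpoint) to the priority, which explains the comparable accuracy and speed reported above.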

19 pages, 4176 KiB  
Article
Performance of Fully Automated Algorithm Detecting Bone Marrow Edema in Sacroiliac Joints
by Joanna Ożga, Michał Wyka, Agata Raczko, Zbisław Tabor, Zuzanna Oleniacz, Michał Korman and Wadim Wojciechowski
J. Clin. Med. 2023, 12(14), 4852; https://doi.org/10.3390/jcm12144852 - 24 Jul 2023
Cited by 2 | Viewed by 1395
Abstract
This study evaluates the performance of a fully automated algorithm to detect active inflammation in the form of bone marrow edema (BME) in iliac and sacral bones, depending on the quality of the coronal oblique plane in patients with axial spondyloarthritis (axSpA). The results were assessed based on the technical correctness of MRI examination of the sacroiliac joints (SIJs). A total of 173 patients with suspected axSpA were included in the study. In order to verify the correctness of the MRI, a deviation angle was measured on the slice acquired in the sagittal plane in the T2-weighted sequence. This angle was located between the line drawn between the posterior edges of S1 and S2 vertebrae and the line that marks the actual plane in which the slices were acquired in T1 and STIR sequences. All examinations were divided into quartiles according to the deviation angle measured in degrees as follows: 1st group [0; 2.2], 2nd group (2.2; 5.7], 3rd group (5.7; 10] and 4th group (10; 29.2]. Segmentations of the sacral and iliac bones were acquired manually and automatically using the fully automated algorithm on the T1 sequence. The Dice coefficient for automated bone segmentations with respect to reference manual segmentations was 0.9820 (95% CI [0.9804, 0.9835]). Examinations of BME lesions were assessed using the SPARCC scale (in 68 cases SPARCC > 0). Manual and automatic segmentations of the lesions were performed on STIR sequences and compared. The sensitivity of detection of BME ranged from 0.58 (group 1) to 0.83 (group 2) versus 0.76 (total), while the specificity was equal to 0.97 in each group. The study indicates that the performance of the algorithm is satisfactory regardless of the deviation angle. Full article
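The Dice coefficient used above to compare automated and manual bone segmentations has a standard definition over binary masks; a minimal sketch (illustrative only, not the study's pipeline):

```python
import numpy as np

def dice_coefficient(pred_mask, ref_mask):
    """Dice similarity of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

A Dice value of 0.9820, as reported for the automated bone segmentations, means the overlap is nearly twice as large as the average mask size would be under poor agreement, i.e. the masks are almost identical.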

16 pages, 2772 KiB  
Article
Segmentation of Portal Vein in Multiphase CTA Image Based on Unsupervised Domain Transfer and Pseudo Label
by Genshen Song, Ziyue Xie, Haoran Wang, Shiman Li, Demin Yao, Shiyao Chen and Yonghong Shi
Diagnostics 2023, 13(13), 2250; https://doi.org/10.3390/diagnostics13132250 - 3 Jul 2023
Cited by 1 | Viewed by 1100
Abstract
Background: Clinically, physicians diagnose portal vein diseases on abdominal CT angiography (CTA) images scanned in the hepatic arterial phase (H-phase), portal vein phase (P-phase) and equilibrium phase (E-phase) simultaneously. However, existing studies typically segment the portal vein on P-phase images without considering other phase images. Method: We propose a method for segmenting portal veins on multiphase images based on unsupervised domain transfer and pseudo labels by using annotated P-phase images. Firstly, unsupervised domain transfer is performed to make the H-phase and E-phase images of the same patient approach the P-phase image in style, reducing the image differences caused by contrast media. Secondly, the H-phase (or E-phase) image and its style transferred image are input into the segmentation module together with the P-phase image. Under the constraints of pseudo labels, accurate prediction results are obtained. Results: This method was evaluated on the multiphase CTA images of 169 patients. The portal vein segmented from the H-phase and E-phase images achieved DSC values of 0.76 and 0.86 and Jaccard values of 0.61 and 0.76, respectively. Conclusion: The method can automatically segment the portal vein on H-phase and E-phase images when only the portal vein on the P-phase CTA image is annotated, which greatly assists in clinical diagnosis. Full article

17 pages, 11974 KiB  
Article
A Deep Learning Approach for Rapid and Generalizable Denoising of Photon-Counting Micro-CT Images
by Rohan Nadkarni, Darin P. Clark, Alex J. Allphin and Cristian T. Badea
Tomography 2023, 9(4), 1286-1302; https://doi.org/10.3390/tomography9040102 - 2 Jul 2023
Cited by 5 | Viewed by 2311
Abstract
Photon-counting CT (PCCT) is powerful for spectral imaging and material decomposition but produces noisy weighted filtered backprojection (wFBP) reconstructions. Although iterative reconstruction effectively denoises these images, it requires extensive computation time. To overcome this limitation, we propose a deep learning (DL) model, UnetU, which quickly estimates iterative reconstruction from wFBP. Utilizing a 2D U-net convolutional neural network (CNN) with a custom loss function and transformation of wFBP, UnetU promotes accurate material decomposition across various photon-counting detector (PCD) energy threshold settings. UnetU outperformed multi-energy non-local means (ME NLM) and a conventional denoising CNN called UnetwFBP in terms of root mean square error (RMSE) in test set reconstructions and their respective matrix inversion material decompositions. Qualitative results in the reconstruction and material decomposition domains revealed that UnetU is the best approximation of iterative reconstruction. In reconstructions with varying undersampling factors from a high-dose ex vivo scan, UnetU consistently gave higher structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) with respect to the fully sampled iterative reconstruction than ME NLM and UnetwFBP. This research demonstrates UnetU’s potential as a fast (i.e., 15 times faster than iterative reconstruction) and generalizable approach for PCCT denoising, holding promise for advancing preclinical PCCT research. Full article
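The RMSE and PSNR figures of merit used above have standard definitions; a small sketch of the assumed formulas (not the paper's evaluation code):

```python
import numpy as np

def rmse(img, ref):
    """Root mean square error of an image against a reference image."""
    img, ref = np.asarray(img, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((img - ref) ** 2)))

def psnr(img, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    reference image's dynamic range."""
    ref_arr = np.asarray(ref, float)
    peak = data_range if data_range is not None else float(ref_arr.max() - ref_arr.min())
    return float(20.0 * np.log10(peak / rmse(img, ref)))
```

Since PSNR is logarithmic in RMSE, every 20 dB of PSNR corresponds to a 10× reduction in RMSE at fixed dynamic range (and, e.g., a 7 dB gap corresponds to a factor of 10^(7/20) ≈ 2.2).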

12 pages, 1528 KiB  
Article
Use of Automated Machine Learning for Classifying Hemoperitoneum on Ultrasonographic Images of Morrison’s Pouch: A Multicenter Retrospective Study
by Dongkil Jeong, Wonjoon Jeong, Ji Han Lee and Sin-Youl Park
J. Clin. Med. 2023, 12(12), 4043; https://doi.org/10.3390/jcm12124043 - 14 Jun 2023
Cited by 2 | Viewed by 2082
Abstract
This study evaluated automated machine learning (AutoML) in classifying the presence or absence of hemoperitoneum in ultrasonography (USG) images of Morrison’s pouch. In this multicenter, retrospective study, 864 trauma patients from trauma and emergency medical centers in South Korea were included. In all, 2200 USG images (1100 hemoperitoneum and 1100 normal) were collected. Of these, 1800 images were used for training and 200 were used for the internal validation of AutoML. External validation was performed using 100 hemoperitoneum images and 100 normal images collected separately from a trauma center that were not included in the training and internal validation sets. Google’s open-source AutoML was used to train the algorithm in classifying hemoperitoneum in USG images, followed by internal and external validation. In the internal validation, the sensitivity, specificity, and area under the receiver operating characteristic (AUROC) curve were 95%, 99%, and 0.97, respectively. In the external validation, the sensitivity, specificity, and AUROC were 94%, 99%, and 0.97, respectively. The performances of AutoML in the internal and external validation were not statistically different (p = 0.78). A publicly available, general-purpose AutoML can accurately classify the presence or absence of hemoperitoneum in USG images of Morrison’s pouch of real-world trauma patients. Full article
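The sensitivity and specificity reported above follow directly from the confusion counts of a binary classifier; a plain-Python sketch (illustrative only, unrelated to the AutoML internals; the 0/1 label convention is an assumption):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
    for 0/1 labels (1 = hemoperitoneum present)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

A sensitivity of 94% with a specificity of 99% on the external set thus means 6 of 100 hemoperitoneum images were missed while only 1 of 100 normal images was flagged.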

16 pages, 10641 KiB  
Article
Sinogram Inpainting with Generative Adversarial Networks and Shape Priors
by Emilien Valat, Katayoun Farrahi and Thomas Blumensath
Tomography 2023, 9(3), 1137-1152; https://doi.org/10.3390/tomography9030094 - 13 Jun 2023
Cited by 3 | Viewed by 1757
Abstract
X-ray computed tomography is a widely used, non-destructive imaging technique that computes cross-sectional images of an object from a set of X-ray absorption profiles (the so-called sinogram). The computation of the image from the sinogram is an ill-posed inverse problem, which becomes underdetermined when we are only able to collect insufficiently many X-ray measurements. We are here interested in solving X-ray tomography image reconstruction problems where we are unable to scan the object from all directions, but where we have prior information about the object’s shape. We thus propose a method that reduces image artefacts due to limited tomographic measurements by inferring missing measurements using shape priors. Our method uses a Generative Adversarial Network that combines limited acquisition data and shape information. While most existing methods focus on evenly spaced missing scanning angles, we propose an approach that infers a substantial number of consecutive missing acquisitions. We show that our method consistently improves image quality compared to images reconstructed using the previous state-of-the-art sinogram-inpainting techniques. In particular, we demonstrate a 7 dB Peak Signal-to-Noise Ratio improvement compared to other methods. Full article

15 pages, 3618 KiB  
Article
Deep Learning Algorithm for Differentiating Patients with a Healthy Liver from Patients with Liver Lesions Based on MR Images
by Maciej Skwirczyński, Zbisław Tabor, Julia Lasek, Zofia Schneider, Sebastian Gibała, Iwona Kucybała, Andrzej Urbanik and Rafał Obuchowicz
Cancers 2023, 15(12), 3142; https://doi.org/10.3390/cancers15123142 - 11 Jun 2023
Cited by 1 | Viewed by 1463
Abstract
The problems in diagnosing the state of a vital organ such as the liver are complex and remain unresolved, as is underscored by the many studies published on this issue. At the same time, demand for imaging diagnostics, preferably using a method that can detect the disease at the earliest possible stage, is constantly increasing. In this paper, we present liver diseases in the context of diagnosis, diagnostic problems, and possible ways of eliminating them. We discuss the dataset and methods and present the stages of the pipeline we developed, leading to multiclass segmentation of the liver in multiparametric MR images into lesions and normal tissue. Finally, based on the processing results, each case is classified as either a healthy liver or a liver with lesions. For the training set, the AUC ROC is 0.925 (standard error 0.013, p-value less than 0.001), and for the test set, the AUC ROC is 0.852 (standard error 0.039, p-value less than 0.001). Further refinements to the proposed pipeline are also discussed. The proposed approach could be used in the detection of focal lesions in the liver and the description of liver tumors. Practical application of the developed multi-class segmentation method represents a key step toward standardizing the medical evaluation of focal lesions in the liver. Full article
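The AUC ROC reported above has a useful probabilistic reading: it is the probability that a randomly chosen liver with lesions receives a higher score than a randomly chosen healthy liver. A toy sketch of this Mann–Whitney formulation (not the study's code):

```python
def auc_roc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), with ties counted as one half
    (the Mann-Whitney U statistic divided by n_pos * n_neg)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))
```

Under this reading, the test-set AUC of 0.852 means that in roughly 85% of lesion/healthy case pairs, the pipeline scores the lesion case higher.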

15 pages, 4104 KiB  
Article
‘Earlier than Early’ Detection of Breast Cancer in Israeli BRCA Mutation Carriers Applying AI-Based Analysis to Consecutive MRI Scans
by Debbie Anaby, David Shavin, Gali Zimmerman-Moreno, Noam Nissan, Eitan Friedman and Miri Sklair-Levy
Cancers 2023, 15(12), 3120; https://doi.org/10.3390/cancers15123120 - 8 Jun 2023
Cited by 1 | Viewed by 2064
Abstract
Female BRCA1/BRCA2 (=BRCA) pathogenic variant (PV) carriers are at a substantially higher risk of developing breast cancer (BC) than the average-risk population. Detection of BC at an early stage significantly improves prognosis. To facilitate early BC detection, a surveillance scheme that includes annual MRI-based breast imaging is offered to BRCA PV carriers from age 25–30 years. Indeed, adherence to the recommended scheme has been shown to be associated with earlier disease stages at BC diagnosis, more in situ pathology, smaller tumors, and less axillary involvement. While MRI is the most sensitive modality for BC detection in BRCA PV carriers, a significant number of radiological lesions (mostly enhancing foci) are overlooked or misinterpreted, leading to delayed BC diagnosis at a more advanced stage. In this study, we developed an artificial intelligence (AI) network aimed at a more accurate classification of enhancing foci in MRIs of BRCA PV carriers, thus reducing false-negative interpretations. Retrospectively identified foci in prior MRIs that were diagnosed as either BC or benign/normal in a subsequent MRI were manually segmented and served as input for a convolutional network architecture. The model successfully classified 65% of the cancerous foci, most of them triple-negative BC. If validated, applying this scheme routinely may facilitate ‘earlier than early’ BC diagnosis in BRCA PV carriers. Full article

21 pages, 5632 KiB  
Article
Deep Learning- and Expert Knowledge-Based Feature Extraction and Performance Evaluation in Breast Histopathology Images
by Hepseeba Kode and Buket D. Barkana
Cancers 2023, 15(12), 3075; https://doi.org/10.3390/cancers15123075 - 6 Jun 2023
Cited by 4 | Viewed by 2007
Abstract
Cancer develops when a single cell or a group of cells grows and spreads uncontrollably. Histopathology images are used in cancer diagnosis since they show tissue and cell structures under a microscope. Knowledge-based and deep learning-based computer-aided detection is an ongoing research field in cancer diagnosis using histopathology images. Feature extraction is vital in both approaches, since the feature set is fed to a classifier and determines the performance. This paper evaluates three feature extraction methods and their performance in breast cancer diagnosis. Features are extracted by (1) a Convolutional Neural Network, (2) a transfer learning architecture, VGG16, and (3) a knowledge-based system. The feature sets are tested with seven classifiers, including Neural Network (64 units), Random Forest, Multilayer Perceptron, Decision Tree, Support Vector Machines, K-Nearest Neighbors, and Narrow Neural Network (10 units), on the BreakHis 400× image dataset. The CNN features achieved up to 85% accuracy with the Neural Network and Random Forest, the VGG16 features achieved up to 86% with the Neural Network, and the knowledge-based features achieved up to 98% with the Neural Network, Random Forest, and Multilayer Perceptron classifiers. Full article

12 pages, 795 KiB  
Review
Extended Reality in Diagnostic Imaging—A Literature Review
by Paulina Kukla, Karolina Maciejewska, Iga Strojna, Małgorzata Zapał, Grzegorz Zwierzchowski and Bartosz Bąk
Tomography 2023, 9(3), 1071-1082; https://doi.org/10.3390/tomography9030088 - 24 May 2023
Cited by 8 | Viewed by 2690
Abstract
The utilization of extended reality (ER) has been increasingly explored in the medical field over the past ten years. A comprehensive analysis of scientific publications was conducted to assess the applications of ER in the field of diagnostic imaging, including ultrasound, interventional radiology, and computed tomography. The study also evaluated the use of ER in patient positioning and medical education. Additionally, we explored the potential of ER as a replacement for anesthesia and sedation during examinations. The use of ER technologies in medical education has received increased attention in recent years. This technology allows for a more interactive and engaging educational experience, particularly in anatomy and patient positioning, although it may be asked whether the technology and maintenance costs are worth the investment. The results of the analyzed studies suggest that implementing augmented reality in clinical practice is a positive development that expands the diagnostic capabilities of imaging studies, education, and positioning. The results suggest that ER has significant potential to improve the accuracy and efficiency of diagnostic imaging procedures and to enhance the patient experience through increased visualization and understanding of medical conditions. Despite these promising advancements, further research is needed to fully realize the potential of ER in the medical field and to address the challenges and limitations associated with its integration into clinical practice. Full article

18 pages, 9346 KiB  
Article
Application of Deep Learning Methods in a Moroccan Ophthalmic Center: Analysis and Discussion
by Zineb Farahat, Nabila Zrira, Nissrine Souissi, Safia Benamar, Mohammed Belmekki, Mohamed Nabil Ngote and Kawtar Megdiche
Diagnostics 2023, 13(10), 1694; https://doi.org/10.3390/diagnostics13101694 - 10 May 2023
Cited by 2 | Viewed by 1734
Abstract
Diabetic retinopathy (DR) remains one of the world’s most frequent eye diseases, leading to vision loss among working-aged individuals. Hemorrhages and exudates are examples of DR signs. Meanwhile, artificial intelligence (AI), particularly deep learning (DL), is poised to impact nearly every aspect of human life and to gradually transform medical practice. Insight into the condition of the retina is becoming more accessible thanks to major advancements in diagnostic technology, and AI approaches can be used to assess large morphological datasets derived from digital images rapidly and noninvasively. Computer-aided diagnosis tools for the automatic detection of early-stage DR signs will ease the pressure on clinicians. In this work, we apply two methods to color fundus images taken on-site at the Cheikh Zaïd Foundation’s Ophthalmic Center in Rabat to detect both exudates and hemorrhages. First, we apply the U-Net method to segment exudates and hemorrhages, marked in red and green, respectively. Second, the You Only Look Once version 5 (YOLOv5) method identifies the presence of hemorrhages and exudates in an image and predicts a probability for each bounding box. The proposed segmentation method obtained a specificity of 85%, a sensitivity of 85%, and a Dice score of 85%. The detection software successfully detected 100% of diabetic retinopathy signs, the expert doctor detected 99% of DR signs, and the resident doctor detected 84%. Full article

17 pages, 1290 KiB  
Article
Textural Features of MR Images Correlate with an Increased Risk of Clinically Significant Cancer in Patients with High PSA Levels
by Sebastian Gibala, Rafal Obuchowicz, Julia Lasek, Zofia Schneider, Adam Piorkowski, Elżbieta Pociask and Karolina Nurzynska
J. Clin. Med. 2023, 12(8), 2836; https://doi.org/10.3390/jcm12082836 - 12 Apr 2023
Cited by 1 | Viewed by 1562
Abstract
Background: Prostate cancer, which is associated with gland biology as well as environmental risks, is a serious clinical problem in the male population worldwide. Important progress has been made in the diagnostic and clinical setups designed for the detection of prostate cancer, with a multiparametric magnetic resonance diagnostic process based on the PIRADS protocol playing a key role. This method relies on image evaluation by an imaging specialist. The medical community has expressed its desire for image analysis techniques that can detect important image features that may indicate cancer risk. Methods: Anonymized scans of 41 patients with laboratory-measured PSA levels who were routinely scanned for prostate cancer were used. The peripheral and central zones of the prostate were depicted manually, with demarcation of suspected tumor foci, under medical supervision. More than 7000 textural features were calculated in the marked regions using MaZda software and used to parameterize the regions. Statistical analyses were performed to find correlations with the PSA-level-based diagnosis that might be used to distinguish suspected lesions. Further multiparametric analysis using MIL-SVM machine learning was used to obtain greater accuracy. Results: Multiparametric classification using MIL-SVM allowed us to reach 92% accuracy. Conclusions: There is an important correlation between the textural parameters of prostate MR images made using the PIRADS MR protocol and PSA levels > 4 ng/mL. The correlations found link image features to elevated cancer markers and hence to cancer risk. Full article

11 pages, 1746 KiB  
Article
Radiologic versus Segmentation Measurements to Quantify Wilms Tumor Volume on MRI in Pediatric Patients
by Myrthe A. D. Buser, Alida F. W. van der Steeg, Marc H. W. A. Wijnen, Matthijs Fitski, Harm van Tinteren, Marry M. van den Heuvel-Eibrink, Annemieke S. Littooij and Bas H. M. van der Velden
Cancers 2023, 15(7), 2115; https://doi.org/10.3390/cancers15072115 - 1 Apr 2023
Cited by 2 | Viewed by 1642
Abstract
Wilms tumor is a common pediatric solid tumor. To evaluate tumor response to chemotherapy and decide whether nephron-sparing surgery is possible, tumor volume measurements based on magnetic resonance imaging (MRI) are important. Currently, radiological volume measurements are based on measuring tumor dimensions in three directions. Manual segmentation-based volume measurements might be more accurate, but this process is time-consuming and user-dependent. The aim of this study was to investigate whether manual segmentation-based volume measurements are more accurate and to explore whether these segmentations can be automated using deep learning. We included the MRI images of 45 Wilms tumor patients (age 0–18 years). First, we compared radiological tumor volumes with manual segmentation-based tumor volume measurements. Next, we created an automated segmentation method by training a nnU-Net in a five-fold cross-validation. Segmentation quality was validated by comparing the automated segmentations with the manually created ground-truth segmentations using Dice scores and the 95th percentile of the Hausdorff distance (HD95). On average, manual tumor segmentations resulted in larger tumor volumes. For automated segmentation, the median Dice score was 0.90 and the median HD95 was 7.2 mm. We showed that radiological volume measurements underestimated tumor volume by about 10% when compared to manual segmentation-based volume measurements. Deep learning can potentially replace manual segmentation, providing accurate volume measurements without time and observer constraints. Full article
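The HD95 metric used above can be sketched for small point sets with a brute-force pairwise computation (illustrative only; production pipelines would typically use a KD-tree or a library such as SimpleITK, and the point-set representation of the segmentation surfaces is an assumption):

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets
    of shape (N, 3), e.g. surface voxels of two tumor segmentations."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairwise distances
    nearest_ab = d.min(axis=1)  # each point of a to its nearest point of b
    nearest_ba = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([nearest_ab, nearest_ba]), 95))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier voxels, which is why HD95 is preferred over the plain Hausdorff distance for validating segmentations.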

10 pages, 1886 KiB  
Article
Using Deep-Learning-Based Artificial Intelligence Technique to Automatically Evaluate the Collateral Status of Multiphase CTA in Acute Ischemic Stroke
by Chun-Chao Huang, Hsin-Fan Chiang, Cheng-Chih Hsieh, Chao-Liang Chou, Zong-Yi Jhou, Ting-Yi Hou and Jin-Siang Shaw
Tomography 2023, 9(2), 647-656; https://doi.org/10.3390/tomography9020052 - 16 Mar 2023
Cited by 4 | Viewed by 2015
Abstract
Background: Collateral status is an important predictor of the outcome of acute ischemic stroke with large vessel occlusion. Multiphase computed tomography angiography (mCTA) is useful for evaluating collateral status, but visual evaluation of this examination is time-consuming. This study aims to use an artificial intelligence (AI) technique to develop an automatic AI prediction model for the collateral status of mCTA. Methods: This retrospective study enrolled subjects with acute ischemic stroke receiving endovascular thrombectomy between January 2015 and June 2020 in a tertiary referral hospital. Demographic data and mCTA images were collected, and the collateral status of all mCTA examinations was visually evaluated. Images at the basal ganglion and supraganglion levels of the mCTA were selected to build AI models using the convolutional neural network (CNN) technique to automatically predict the collateral status. Results: A total of 82 subjects were enrolled: 57 cases were randomly selected for the training group and 25 for the validation group. In the training group, there were 40 cases with a positive collateral result (good or intermediate) and 17 cases with a negative collateral result (poor); in the validation group, there were 21 positive and 4 negative cases. During training, the accuracy of the CNN prediction model on the training group reached 0.999 ± 0.015, whereas it achieved an accuracy of 0.746 ± 0.008 on the validation group. The area under the ROC curve was 0.7. Conclusions: This study suggests that applying an AI model derived from mCTA images to automatically evaluate collateral status is feasible. Full article

14 pages, 1882 KiB  
Article
Convolutional Neural Networks to Classify Alzheimer’s Disease Severity Based on SPECT Images: A Comparative Study
by Wei-Chih Lien, Chung-Hsing Yeh, Chun-Yang Chang, Chien-Hsiang Chang, Wei-Ming Wang, Chien-Hsu Chen and Yang-Cheng Lin
J. Clin. Med. 2023, 12(6), 2218; https://doi.org/10.3390/jcm12062218 - 13 Mar 2023
Cited by 1 | Viewed by 2266
Abstract
Image recognition and neuroimaging are increasingly being used to understand the progression of Alzheimer’s disease (AD). However, image data from single-photon emission computed tomography (SPECT) are limited, and medical image analysis requires large, labeled training datasets; studies have therefore focused on overcoming this problem. In this study, the detection performance of five convolutional neural network (CNN) models (MobileNet V2 and NASNetMobile (lightweight models); VGG16, Inception V3, and ResNet (heavier models)) on medical images was compared to establish a classification model for epidemiological research. Brain scan image data were collected from 99 subjects, and 4711 images were used. Demographic data were compared using the chi-squared test and one-way analysis of variance with Bonferroni’s post hoc test. Accuracy and loss functions were used to evaluate the performance of the CNN models. The Cognitive Abilities Screening Instrument and Mini-Mental State Examination scores of subjects with a clinical dementia rating (CDR) of 2 were considerably lower than those of subjects with a CDR of 1 or 0.5. This study analyzed the classification performance of various CNN models for medical images and demonstrated the effectiveness of transfer learning in identifying mild cognitive impairment, mild AD, and moderate AD from SPECT images. Full article

19 pages, 5627 KiB  
Article
A Deep Learning Radiomics Nomogram to Predict Response to Neoadjuvant Chemotherapy for Locally Advanced Cervical Cancer: A Two-Center Study
by Yajiao Zhang, Chao Wu, Zhibo Xiao, Furong Lv and Yanbing Liu
Diagnostics 2023, 13(6), 1073; https://doi.org/10.3390/diagnostics13061073 - 11 Mar 2023
Cited by 6 | Viewed by 1783
Abstract
Purpose: This study aimed to establish a deep learning radiomics nomogram (DLRN) based on multiparametric MR images for predicting the response to neoadjuvant chemotherapy (NACT) in patients with locally advanced cervical cancer (LACC). Methods: Patients with LACC (FIGO stage IB-IIIB) who underwent preoperative NACT were enrolled from center 1 (220 cases) and center 2 (independent external validation dataset, 65 cases). Handcrafted and deep learning-based radiomics features were extracted from T2WI, DWI and contrast-enhanced (CE)-T1WI, and radiomics signatures were built based on the optimal features. Two types of radiomics signatures and clinical features were integrated into the DLRN for prediction. The AUC, calibration curve and decision curve analysis (DCA) were employed to illustrate the performance of these models and their clinical utility. In addition, disease-free survival (DFS) was assessed by Kaplan–Meier survival curves based on the DLRN. Results: The DLRN showed favorable predictive values in differentiating responders from nonresponders to NACT with AUCs of 0.963, 0.940 and 0.910 in the three datasets, with good calibration (all p > 0.05). Furthermore, the DLRN performed better than the clinical model and handcrafted radiomics signature in all datasets (all p < 0.05) and slightly higher than the DL-based radiomics signature in the internal validation dataset (p = 0.251). DCA indicated that the DLRN has potential in clinical applications. Furthermore, the DLRN was strongly correlated with the DFS of LACC patients (HR = 0.223; p = 0.004). Conclusion: The DLRN performed well in preoperatively predicting the therapeutic response in LACC and could provide valuable information for individualized treatment. Full article

19 pages, 9182 KiB  
Article
DBE-Net: Dual Boundary-Guided Attention Exploration Network for Polyp Segmentation
by Haichao Ma, Chao Xu, Chao Nie, Jubao Han, Yingjie Li and Chuanxu Liu
Diagnostics 2023, 13(5), 896; https://doi.org/10.3390/diagnostics13050896 - 27 Feb 2023
Cited by 2 | Viewed by 1872
Abstract
Automatic segmentation of polyps during colonoscopy can help doctors accurately locate the polyp area and remove abnormal tissue in time to reduce the possibility of polyps transforming into cancer. However, current polyp segmentation research still faces the following problems: blurry polyp boundaries, the multi-scale variability of polyps, and the close resemblance between polyps and nearby normal tissue. To tackle these issues, this paper proposes a dual boundary-guided attention exploration network (DBE-Net) for polyp segmentation. Firstly, we propose a dual boundary-guided attention exploration module to solve the boundary-blurring problem. This module uses a coarse-to-fine strategy to progressively approximate the real polyp boundary. Secondly, a multi-scale context aggregation enhancement module is introduced to accommodate the multi-scale variation of polyps. Finally, we propose a low-level detail enhancement module, which extracts more low-level details and improves the performance of the overall network. Extensive experiments on five polyp segmentation benchmark datasets show that our method achieves superior performance and stronger generalization ability than state-of-the-art methods. On CVC-ColonDB and ETIS, the two most challenging of the five datasets, our method achieves mDice (mean Dice similarity coefficient) scores of 82.4% and 80.6%, improvements of 5.1% and 5.9% over state-of-the-art methods. Full article
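The mDice figure reported above is the per-image Dice similarity coefficient averaged over a test set. A minimal NumPy sketch (the masks below are toy data, not from the paper):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary masks.
    eps keeps the ratio defined when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# mDice over a (toy) test set: average the per-image scores.
preds = [np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 1]])]
masks = [np.array([[1, 0], [1, 0]]), np.array([[1, 0], [0, 1]])]
mdice = np.mean([dice(p, m) for p, m in zip(preds, masks)])
print(round(float(mdice), 3))  # 0.75: one half-overlapping pair, one perfect pair
```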

21 pages, 5369 KiB  
Article
A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI
by Nuno M. Rodrigues, Sara Silva, Leonardo Vanneschi and Nickolas Papanikolaou
Cancers 2023, 15(5), 1467; https://doi.org/10.3390/cancers15051467 - 25 Feb 2023
Cited by 7 | Viewed by 2253
Abstract
Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is high given the very high incidence rate, there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold: first, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation; second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, one used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as most produce scores that do not differ significantly, apart from nnU-Net, which consistently outperforms the others, and that models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation. Full article
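The object-detector pre-processing step evaluated above amounts to cropping each volume to a detected bounding box before segmentation, so the segmentation model sees mostly prostate rather than background. A hedged sketch of that step (the box coordinates and margin are invented, not the paper's settings):

```python
import numpy as np

def crop_to_box(volume, box, margin=8):
    """Crop a (slices, H, W) MRI volume to a detector bounding box
    (y0, x0, y1, x1), padded by a safety margin and clipped to bounds."""
    y0, x0, y1, x1 = box
    h, w = volume.shape[1:]
    y0 = max(0, y0 - margin)
    x0 = max(0, x0 - margin)
    y1 = min(h, y1 + margin)
    x1 = min(w, x1 + margin)
    return volume[:, y0:y1, x0:x1]

vol = np.zeros((24, 256, 256))            # toy T2 volume
roi = crop_to_box(vol, (100, 110, 160, 170), margin=8)
print(roi.shape)  # (24, 76, 76)
```

At inference time the predicted mask would be pasted back into the original frame at the same offsets.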

15 pages, 553 KiB  
Article
Avoiding Tissue Overlap in 2D Images: Single-Slice DBT Classification Using Convolutional Neural Networks
by João Mendes, Nuno Matela and Nuno Garcia
Tomography 2023, 9(1), 398-412; https://doi.org/10.3390/tomography9010032 - 14 Feb 2023
Cited by 2 | Viewed by 2108
Abstract
Breast cancer was the most diagnosed cancer worldwide in 2020. Screening programs based on mammography aim to achieve early diagnosis, which is of extreme importance when it comes to cancer. Mammography has several flaws, one of the most important being tissue overlap, which can result in both lesion masking and fake-lesion appearance. To overcome this, digital breast tomosynthesis (DBT) takes images (slices) at different angles that are later reconstructed into a 3D image. Since the slices are planar images in which tissue overlap does not occur, the goal of this work was to develop a deep learning model that could, based on these slices, classify lesions as benign or malignant. The developed model was based on the work of Muduli et al., with slight changes to the fully connected layers and the regularization used. In total, 77 DBT volumes (39 benign and 38 malignant) were available. From each volume, nine slices were taken: the one where the lesion was most visible and the four above and four below it. To increase the quantity and variability of the data, common data augmentation techniques (rotation, translation, mirroring) were applied to the original images three times, yielding 2772 images for training. Data augmentation was then applied two more times, producing one set for validation and one for testing. Our model achieved an accuracy of 93.2% on the testing set, while the values of sensitivity, specificity, precision, F1-score, and Cohen's kappa were 92%, 94%, 94%, 94%, and 0.86, respectively. Given these results, this work suggests that single-slice DBT classification can compare with state-of-the-art studies and hints that, with more data, better augmentation techniques, and transfer learning, it might outperform mammography-based approaches in studies of this type. Full article
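The augmentation scheme described above (rotation, translation, mirroring, applied three times per original slice) could look roughly like the NumPy sketch below; the transform parameter ranges are assumptions, not the paper's settings:

```python
import numpy as np

def augment(img, rng):
    """Return one randomly augmented copy of a 2D slice using the three
    transforms named in the abstract: rotation, translation, mirroring."""
    out = np.rot90(img, k=rng.integers(0, 4))        # 0/90/180/270 degrees
    dy, dx = rng.integers(-10, 11, size=2)
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))  # circular translation
    if rng.random() < 0.5:
        out = np.fliplr(out)                         # horizontal mirror
    return out

rng = np.random.default_rng(0)
slices = [np.zeros((128, 128)) for _ in range(9)]    # 9 slices per DBT volume
augmented = [augment(s, rng) for s in slices for _ in range(3)]  # x3, as in the paper
print(len(augmented))  # 27
```

A real pipeline would likely use an image library with interpolated rotations and edge padding rather than `rot90`/`roll`; this version only illustrates the counting (9 slices × 3 copies per volume).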

11 pages, 2403 KiB  
Article
Key-Point Detection Algorithm of Deep Learning Can Predict Lower Limb Alignment with Simple Knee Radiographs
by Hee Seung Nam, Sang Hyun Park, Jade Pei Yuik Ho, Seong Yun Park, Joon Hee Cho and Yong Seuk Lee
J. Clin. Med. 2023, 12(4), 1455; https://doi.org/10.3390/jcm12041455 - 11 Feb 2023
Cited by 2 | Viewed by 1760
Abstract
(1) Background: There have been many attempts to predict the weight-bearing line (WBL) ratio using simple knee radiographs. Using a convolutional neural network (CNN), we focused on predicting the WBL ratio quantitatively. (2) Methods: From March 2003 to December 2021, 2410 patients with 4790 knee AP radiographs were selected using stratified random sampling. Each image was cropped with a 10-pixel margin around four points annotated by a specialist. The model predicted the points of interest, namely the two tibial plateau points marking where the WBL enters and exits the plateau. Model output was analyzed in two ways: in pixel units and as WBL error values. (3) Results: The mean accuracy (MA) increased from around 0.5 at a 2-pixel tolerance to around 0.8 at 6 pixels in both the validation and test sets. When the tibial plateau length was taken as 100%, the MA increased from approximately 0.1 at a 1% tolerance to approximately 0.5 at 5% in both the validation and test sets. (4) Conclusions: The deep learning-based key-point detection algorithm for predicting lower limb alignment from simple knee AP radiographs demonstrated accuracy comparable to that of direct measurement on whole-leg radiographs. Using this algorithm, WBL ratio prediction from simple knee AP radiographs could be useful for diagnosing lower limb alignment in osteoarthritis patients in primary care. Full article
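As an illustration of the quantities above: the WBL ratio is the position at which the weight-bearing line crosses the tibial plateau, expressed as a fraction of plateau width, and the tolerance-based mean accuracy is the fraction of predicted key points within a pixel radius of ground truth. A sketch with invented coordinates (not the paper's implementation):

```python
import numpy as np

def wbl_ratio(medial, lateral, crossing):
    """Project the WBL crossing point onto the medial-to-lateral plateau
    axis and return its position as a fraction of plateau width."""
    m, l, p = (np.asarray(v, dtype=float) for v in (medial, lateral, crossing))
    axis = l - m
    return float(np.dot(p - m, axis) / np.dot(axis, axis))

def mean_accuracy(pred, true, tol):
    """Fraction of predicted key points within `tol` pixels of ground truth."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(true, float), axis=1)
    return float((d <= tol).mean())

print(wbl_ratio((0, 0), (80, 0), (40, 0)))  # 0.5: WBL through the plateau midpoint
```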

11 pages, 4705 KiB  
Article
COVID and Cancer: A Complete 3D Advanced Radiological CT-Based Analysis to Predict the Outcome
by Syed Rahmanuddin, Asma Jamil, Ammar Chaudhry, Tyler Seto, Jordyn Brase, Pejman Motarjem, Marjaan Khan, Cristian Tomasetti, Umme Farwa, William Boswell, Haris Ali, Danielle Guidaben, Rafay Haseeb, Guibo Luo, Guido Marcucci, Steven T. Rosen and Wenli Cai
Cancers 2023, 15(3), 651; https://doi.org/10.3390/cancers15030651 - 20 Jan 2023
Cited by 2 | Viewed by 1506
Abstract
Background: Cancer patients infected with COVID-19 have been shown in a multitude of studies to have poor outcomes, on the basis of older age and immune systems weakened by cancer as well as chemotherapy. In this study, the CT examinations of 22 confirmed COVID-19 cancer patients were analyzed. Methodology: A retrospective analysis was conducted on 28 cancer patients, of whom 22 were COVID-positive. The CT scan changes before and after treatment and the extent of structural damage to the lungs after COVID-19 infection were analyzed. Structural damage to a lung was indicated by a change in density measured in Hounsfield units (HUs) and by lung volume reduction. A 3D radiometric analysis was also performed, and lung and lesion histograms were compared. Results: A total of 22 cancer patients were diagnosed with COVID-19 infection. A repeat CT scan was performed in 15 patients after they recovered from infection. Most of the study patients had been diagnosed with leukemia. A secondary clinical analysis was performed to examine the associations between COVID treatment, laboratory data, and mortality outcomes. Post COVID, a decrease of >50% in lung volume and an increase in density (higher HUs) due to scar tissue formation were found. Conclusion: COVID-19 infection may have further detrimental effects on the lungs of cancer patients, decreasing their lung volume and increasing their lung density due to scar formation. Full article
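The two imaging markers used above, mean lung density in Hounsfield units and lung volume, are straightforward to compute from a CT array and a lung mask. A toy sketch (synthetic arrays, not patient data):

```python
import numpy as np

def lung_stats(ct_hu, lung_mask, spacing_mm):
    """Mean density (HU) and volume (mL) of the segmented lung.
    spacing_mm is the (z, y, x) voxel spacing in millimeters."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0   # mm^3 -> mL
    return float(ct_hu[lung_mask].mean()), float(lung_mask.sum() * voxel_ml)

# Toy pre/post scans: denser, smaller lung after infection.
pre = -800 * np.ones((10, 10, 10))
post = -500 * np.ones((10, 10, 10))
mask_pre = np.ones_like(pre, dtype=bool)
mask_post = np.zeros_like(post, dtype=bool)
mask_post[:4] = True                                 # 60% volume loss
hu0, v0 = lung_stats(pre, mask_pre, (1.0, 1.0, 1.0))
hu1, v1 = lung_stats(post, mask_post, (1.0, 1.0, 1.0))
print(hu1 - hu0, round(100 * (v0 - v1) / v0))        # 300.0 60
```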

20 pages, 2380 KiB  
Systematic Review
Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis
by Burak B. Ozkara, Melissa M. Chen, Christian Federau, Mert Karabacak, Tina M. Briere, Jing Li and Max Wintermark
Cancers 2023, 15(2), 334; https://doi.org/10.3390/cancers15020334 - 4 Jan 2023
Cited by 11 | Viewed by 3353
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted up to 30 September 2022. Inclusion criteria were: patients with BMs; deep learning applied to MRI images to detect BMs; sufficient data on detection performance; original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data on detection performance; classical machine learning (rather than deep learning) used to detect BMs; articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 tool and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of patient-wise and lesion-wise detectability was 89%. Deep learning algorithms effectively detect BMs, although a pooled analysis of false positive rates could not be estimated due to reporting differences, and articles should adhere to the checklists more strictly. Full article
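A simplistic fixed-effect pooling of per-study detection proportions is sketched below for illustration only; the review's actual meta-analysis presumably used a proper random-effects model, and the per-study counts here are invented:

```python
import math

def pooled_proportion(events, totals):
    """Naive fixed-effect pooled proportion (total events over total n)
    with a Wald 95% confidence interval."""
    e, n = sum(events), sum(totals)
    p = e / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical detected/total BM patient counts for three studies.
p, ci = pooled_proportion([45, 88, 30], [50, 100, 35])
print(round(p, 3))  # 0.881
```

Real meta-analyses of proportions typically transform the data (logit or double-arcsine) and weight studies by inverse variance, which this sketch deliberately omits.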
