Applications of Artificial Intelligence in Medical Imaging—Clinical and Pre-Clinical Scope

A special issue of Tomography (ISSN 2379-139X).

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 14286

Special Issue Editor


Dr. Esben Søvsø Szocska Hansen
Guest Editor
Department of Clinical Medicine, MR Research Centre, Aarhus University and Aarhus University Hospital, 8200 Aarhus N, Denmark
Interests: MRI; cardiac MRI; hyperpolarized MR; metabolic imaging; MR tracers; translational imaging; machine learning; AI

Special Issue Information

Dear Colleagues, 

Artificial intelligence (AI) is an extremely fast-growing field that is finding its way into everything from household medical devices to intensive care units. The often-cited comparison of the breakthrough of AI with that of electricity illustrates the significance of the field. In medical imaging, AI has gained ground across multiple disciplines, from improving raw data to guiding the delineation of pathology.

AI development is driven both by new medical discoveries and by the need for support in interpreting the growing volume of medical imaging, the latter having major time and cost implications for healthcare systems.

We invite submissions presenting AI applications in the medical imaging field. Our special focus is on AI applications that support preclinical and clinical imaging work and aid interpretation, whether through improved reconstruction, co-registration, organ segmentation, or image enhancement by noise filtration or increased resolution. Furthermore, submissions that combine imaging data with tabular data to improve interpretation are welcome.

The main scope is research in the clinical or translational phase, but we also encourage submissions focused on improved pre-clinical workflows. This Special Issue welcomes MRI, CT, nuclear, molecular, ultrasound, optical, and spectroscopic imaging, and multimodal imaging approaches are also within scope. Lastly, submissions exploring the cost and efficiency benefits, or the added guidance, that AI brings to medical imaging are sought with great interest.

Dr. Esben Søvsø Szocska Hansen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Tomography is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • machine learning
  • artificial intelligence
  • image reconstruction
  • quantitative imaging
  • MRI
  • CT
  • ultrasound
  • metabolic imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

23 pages, 2664 KiB  
Article
An Efficient Multi-Scale Convolutional Neural Network Based Multi-Class Brain MRI Classification for SaMD
by Syed Ali Yazdan, Rashid Ahmad, Naeem Iqbal, Atif Rizwan, Anam Nawaz Khan and Do-Hyeun Kim
Tomography 2022, 8(4), 1905-1927; https://doi.org/10.3390/tomography8040161 - 26 Jul 2022
Cited by 27 | Viewed by 5347
Abstract
A brain tumor is the growth of abnormal cells in certain brain tissues with a high mortality rate; therefore, it requires high precision in diagnosis, as a minor human judgment can eventually cause severe consequences. Magnetic Resonance Image (MRI) serves as a non-invasive tool to detect the presence of a tumor. However, Rician noise is inevitably instilled during the image acquisition process, which leads to poor observation and interferes with the treatment. Computer-Aided Diagnosis (CAD) systems can perform early diagnosis of the disease, potentially increasing the chances of survival, and lessening the need for an expert to analyze the MRIs. Convolutional Neural Networks (CNN) have proven to be very effective in tumor detection in brain MRIs. There have been multiple studies dedicated to brain tumor classification; however, these techniques lack the evaluation of the impact of the Rician noise on state-of-the-art deep learning techniques and the consideration of the scaling impact on the performance of the deep learning as the size and location of tumors vary from image to image with irregular shape and boundaries. Moreover, transfer learning-based pre-trained models such as AlexNet and ResNet have been used for brain tumor detection. However, these architectures have many trainable parameters and hence have a high computational cost. This study proposes a two-fold solution: (a) Multi-Scale CNN (MSCNN) architecture to develop a robust classification model for brain tumor diagnosis, and (b) minimizing the impact of Rician noise on the performance of the MSCNN. The proposed model is a multi-class classification solution that classifies MRIs into glioma, meningioma, pituitary, and non-tumor. The core objective is to develop a robust model for enhancing the performance of the existing tumor detection systems in terms of accuracy and efficiency. Furthermore, MRIs are denoised using a Fuzzy Similarity-based Non-Local Means (FSNLM) filter to improve the classification results. Different evaluation metrics are employed, such as accuracy, precision, recall, specificity, and F1-score, to evaluate and compare the performance of the proposed multi-scale CNN and other state-of-the-art techniques, such as AlexNet and ResNet. In addition, trainable and non-trainable parameters of the proposed model and the existing techniques are also compared to evaluate the computational efficiency. The experimental results show that the proposed multi-scale CNN model outperforms AlexNet and ResNet in terms of accuracy and efficiency at a lower computational cost. Based on experimental results, it is found that our proposed MCNN2 achieved accuracy and F1-score of 91.2% and 91%, respectively, which is significantly higher than the existing AlexNet and ResNet techniques. Moreover, our findings suggest that the proposed model is more effective and efficient in facilitating clinical research and practice for MRI classification. Full article
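
The evaluation metrics named in this abstract (accuracy, precision, recall, specificity, and F1-score) follow their standard multi-class definitions. The sketch below is a minimal illustration of how they can be computed from a confusion matrix over the four classes; the label encoding and macro-averaging are assumptions for illustration, not the authors' actual evaluation code.

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, n_classes=4):
    """Macro-averaged precision, recall, specificity and F1 plus overall accuracy,
    computed from integer class labels (0..n_classes-1)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, columns: predicted class

    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp               # predicted as class c but actually another class
    fn = cm.sum(axis=1) - tp               # actually class c but predicted as another class
    tn = cm.sum() - (tp + fp + fn)

    precision   = tp / np.maximum(tp + fp, 1e-12)
    recall      = tp / np.maximum(tp + fn, 1e-12)   # sensitivity
    specificity = tn / np.maximum(tn + fp, 1e-12)
    f1          = 2 * precision * recall / np.maximum(precision + recall, 1e-12)

    return {
        "accuracy":    tp.sum() / cm.sum(),
        "precision":   precision.mean(),
        "recall":      recall.mean(),
        "specificity": specificity.mean(),
        "f1":          f1.mean(),
    }

# Hypothetical labels: 0 = glioma, 1 = meningioma, 2 = pituitary, 3 = non-tumor
y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 3, 0, 3, 2, 3]
print(multiclass_metrics(y_true, y_pred))
```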

12 pages, 1536 KiB  
Article
AI Denoising Improves Image Quality and Radiological Workflows in Pediatric Ultra-Low-Dose Thorax Computed Tomography Scans
by Andreas S. Brendlin, Ulrich Schmid, David Plajer, Maryanna Chaika, Markus Mader, Robin Wrazidlo, Simon Männlin, Jakob Spogis, Arne Estler, Michael Esser, Jürgen Schäfer, Saif Afat and Ilias Tsiflikas
Tomography 2022, 8(4), 1678-1689; https://doi.org/10.3390/tomography8040140 - 24 Jun 2022
Cited by 8 | Viewed by 2434
Abstract
(1) This study evaluates the impact of an AI denoising algorithm on image quality, diagnostic accuracy, and radiological workflows in pediatric chest ultra-low-dose CT (ULDCT). (2) Methods: 100 consecutive pediatric thorax ULDCT were included and reconstructed using weighted filtered back projection (wFBP), iterative reconstruction (ADMIRE 2), and AI denoising (PixelShine). Place-consistent noise measurements were used to compare objective image quality. Eight blinded readers independently rated the subjective image quality on a Likert scale (1 = worst to 5 = best). Each reader wrote a semiquantitative report to evaluate disease severity using a severity score with six common pathologies. The time to diagnosis was measured for each reader to compare the possible workflow benefits. Properly corrected mixed-effects analysis with post-hoc subgroup tests were used. Spearman’s correlation coefficient measured inter-reader agreement for the subjective image quality analysis and the severity score sheets. (3) Results: The highest noise was measured for wFBP, followed by ADMIRE 2, and PixelShine (76.9 ± 9.62 vs. 43.4 ± 4.45 vs. 34.8 ± 3.27 HU; each p < 0.001). The highest subjective image quality was measured for PixelShine, followed by ADMIRE 2, and wFBP (4 (4–5) vs. 3 (4–5) vs. 3 (2–4), each p < 0.001) with good inter-rater agreement (r ≥ 0.790; p ≤ 0.001). In diagnostic accuracy analysis, there was a good inter-rater agreement between the severity scores (r ≥ 0.764; p < 0.001) without significant differences between severity score items per reconstruction mode (F (5.71; 566) = 0.792; p = 0.570). The shortest time to diagnosis was measured for the PixelShine datasets, followed by ADMIRE 2, and wFBP (2.28 ± 1.56 vs. 2.45 ± 1.90 vs. 2.66 ± 2.31 min; F (1.000; 99.00) = 268.1; p < 0.001). (4) Conclusions: AI denoising significantly improves image quality in pediatric thorax ULDCT without compromising the diagnostic confidence and reduces the time to diagnosis substantially. Full article
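
For readers unfamiliar with place-consistent noise measurements, the sketch below illustrates the general idea: the standard deviation of HU values inside the same region of interest is compared across reconstructions of one scan. The array names, ROI coordinates, and noise levels are hypothetical and stand in for the study's actual measurement pipeline.

```python
import numpy as np

def roi_noise(volume_hu, z, y, x, half=10):
    """Noise estimate: standard deviation of HU values in a cubic ROI
    centred at voxel (z, y, x) with edge length 2*half voxels."""
    roi = volume_hu[z - half:z + half, y - half:y + half, x - half:x + half]
    return float(np.std(roi))

# Hypothetical reconstructions of the same scan (identical geometry),
# so the ROI lands on the same anatomy in each dataset.
rng = np.random.default_rng(0)
wfbp       = rng.normal(0, 77, size=(64, 128, 128))  # filtered back projection, high noise
admire     = rng.normal(0, 43, size=(64, 128, 128))  # iterative reconstruction
pixelshine = rng.normal(0, 35, size=(64, 128, 128))  # AI-denoised

roi = (32, 64, 64)
for name, vol in [("wFBP", wfbp), ("ADMIRE 2", admire), ("PixelShine", pixelshine)]:
    print(f"{name:10s} noise = {roi_noise(vol, *roi):.1f} HU")
```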

15 pages, 4809 KiB  
Article
AI Denoising Significantly Enhances Image Quality and Diagnostic Confidence in Interventional Cone-Beam Computed Tomography
by Andreas S. Brendlin, Arne Estler, David Plajer, Adrian Lutz, Gerd Grözinger, Malte N. Bongers, Ilias Tsiflikas, Saif Afat and Christoph P. Artzner
Tomography 2022, 8(2), 933-947; https://doi.org/10.3390/tomography8020075 - 1 Apr 2022
Cited by 5 | Viewed by 2897
Abstract
(1) To investigate whether interventional cone-beam computed tomography (cbCT) could benefit from AI denoising, particularly with respect to patient body mass index (BMI); (2) From 1 January 2016 to 1 January 2022, 100 patients with liver-directed interventions and peri-procedural cbCT were included. The unenhanced mask run and the contrast-enhanced fill run of the cbCT were reconstructed using weighted filtered back projection. Additionally, each dataset was post-processed using a novel denoising software solution. Place-consistent regions of interest measured signal-to-noise ratio (SNR) per dataset. Corrected mixed-effects analysis with BMI subgroup analyses compared objective image quality. Multiple linear regression measured the contribution of “Radiation Dose”, “Body-Mass-Index”, and “Mode” to SNR. Two radiologists independently rated diagnostic confidence. Inter-rater agreement was measured using Spearman correlation (r); (3) SNR was significantly higher in the denoised datasets than in the regular datasets (p < 0.001). Furthermore, BMI subgroup analysis showed significant SNR deteriorations in the regular datasets for higher patient BMI (p < 0.001), but stable results for denoising (p > 0.999). In regression, only denoising contributed positively towards SNR (0.6191; 95%CI 0.6096 to 0.6286; p < 0.001). The denoised datasets received overall significantly higher diagnostic confidence grades (p = 0.010), with good inter-rater agreement (r ≥ 0.795, p < 0.001). In a subgroup analysis, diagnostic confidence deteriorated significantly for higher patient BMI (p < 0.001) in the regular datasets but was stable in the denoised datasets (p ≥ 0.103).; (4) AI denoising can significantly enhance image quality in interventional cone-beam CT and effectively mitigate diagnostic confidence deterioration for rising patient BMI. Full article
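
As a rough illustration of the two quantitative steps mentioned above, the sketch below computes an ROI-based SNR (mean attenuation divided by its standard deviation) and fits an ordinary-least-squares model of SNR against radiation dose, BMI, and reconstruction mode. The SNR definition, variable names, and simulated data are assumptions for illustration rather than the study's exact analysis.

```python
import numpy as np

def roi_snr(values_hu):
    """SNR of a region of interest: mean signal divided by its standard deviation."""
    values_hu = np.asarray(values_hu, dtype=float)
    return values_hu.mean() / values_hu.std()

# Hypothetical per-dataset table: radiation dose, BMI, mode (0 = regular, 1 = denoised)
rng = np.random.default_rng(1)
n = 200
dose = rng.uniform(2, 10, n)
bmi  = rng.uniform(18, 40, n)
mode = rng.integers(0, 2, n)
snr  = 0.10 * dose - 0.05 * bmi + 0.62 * mode + rng.normal(0, 0.1, n)  # simulated outcome

# Ordinary least squares: SNR ~ intercept + dose + BMI + mode
X = np.column_stack([np.ones(n), dose, bmi, mode])
coef, *_ = np.linalg.lstsq(X, snr, rcond=None)
print(dict(zip(["intercept", "dose", "bmi", "mode"], np.round(coef, 3))))
```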

12 pages, 1742 KiB  
Article
Semi-Supervised Deep Learning Semantic Segmentation for 3D Volumetric Computed Tomographic Scoring of Chronic Rhinosinusitis: Clinical Correlations and Comparison with Lund-Mackay Scoring
by Chung-Feng Jeffrey Kuo, Yu-Shu Liao, Jagadish Barman and Shao-Cheng Liu
Tomography 2022, 8(2), 718-729; https://doi.org/10.3390/tomography8020059 - 7 Mar 2022
Cited by 7 | Viewed by 2862
Abstract
Background: The traditional Lund-Mackay score (TLMs) is unable to subgrade the volume of inflammatory disease. We aimed to propose an effective modification and calculated the volume-based modified LM score (VMLMs), which should correlate more strongly with clinical symptoms than the TLMs. Methods: Semi-supervised learning with pseudo-labels used for self-training was adopted to train our convolutional neural networks, with the algorithm including a combination of MobileNet, SENet, and ResNet. A total of 175 CT sets, with 50 participants that would undergo sinus surgery, were recruited. The Sinonasal Outcomes Test-22 (SNOT-22) was used to assess disease-specific symptoms before and after surgery. A 3D-projected view was created and VMLMs were calculated for further comparison. Results: Our methods showed a significant improvement both in sinus classification and segmentation as compared to state-of-the-art networks, with an average Dice coefficient of 91.57%, an MioU of 89.43%, and a pixel accuracy of 99.75%. The sinus volume exhibited sex dimorphism. There was a significant positive correlation between volume and height, but a trend toward a negative correlation between maxillary sinus and age. Subjects who underwent surgery had significantly greater TLMs (14.9 vs. 7.38) and VMLMs (11.65 vs. 4.34) than those who did not. ROC-AUC analyses showed that the VMLMs had excellent discrimination at classifying a high probability of postoperative improvement with SNOT-22 reduction. Conclusions: Our method is suitable for obtaining detailed information, excellent sinus boundary prediction, and differentiating the target from its surrounding structure. These findings demonstrate the promise of CT-based volumetric analysis of sinus mucosal inflammation. Full article
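
The segmentation metrics reported above (Dice coefficient, mean IoU, and pixel accuracy) have standard definitions; the sketch below shows how they can be computed for integer label maps. The two-class example and averaging scheme are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, target, n_classes):
    """Mean Dice, mean IoU, and overall pixel accuracy for integer label maps."""
    pred, target = np.asarray(pred), np.asarray(target)
    dice, iou = [], []
    for c in range(n_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        dice.append(2 * inter / max(p.sum() + t.sum(), 1))
        iou.append(inter / max(union, 1))
    return {
        "mean_dice": float(np.mean(dice)),
        "mean_iou": float(np.mean(iou)),
        "pixel_accuracy": float((pred == target).mean()),
    }

# Hypothetical two-class example (0 = background, 1 = sinus mucosa)
target = np.zeros((64, 64), dtype=int); target[16:48, 16:48] = 1
pred   = np.zeros((64, 64), dtype=int); pred[18:50, 16:48] = 1
print(segmentation_metrics(pred, target, n_classes=2))
```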
