Information Processing in Medical Imaging

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 9124

Special Issue Editors

College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
Interests: medical imaging; image analysis; computer vision
College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
Interests: PET image denoising/reconstruction; machine learning in medical imaging

Special Issue Information

Dear Colleagues,

The practice of modern medicine increasingly relies on medical imaging from multiple sources to guide better diagnosis and therapy. Large numbers of medical images are produced daily by different systems and devices, such as computed tomography (CT), magnetic resonance (MR) imaging, positron emission tomography (PET), single photon emission computed tomography (SPECT), photoacoustic tomography, ultrasound, optical coherence tomography, EEG/MEG, and pathological imaging. Modern image processing technologies can improve medical image quality by accounting for physical degradation factors, or can extract information from medical images for guidance, helping doctors improve diagnostic accuracy and reliability.

In this Special Issue, we invite novel research contributions presenting information processing techniques in medical imaging. Possible research topics include, but are not limited to:

  • Reconstruction;
  • Denoising;
  • Segmentation;
  • Classification;
  • Registration;
  • Motion analysis.

Prof. Dr. Huafeng Liu
Dr. Jianan Cui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • reconstruction
  • denoising
  • segmentation
  • classification
  • registration
  • motion analysis

Published Papers (3 papers)


Research

18 pages, 4962 KiB  
Article
Deep-Learning-Based Framework for PET Image Reconstruction from Sinogram Domain
by Zhiyuan Liu, Huihui Ye and Huafeng Liu
Appl. Sci. 2022, 12(16), 8118; https://doi.org/10.3390/app12168118 - 13 Aug 2022
Cited by 5 | Viewed by 1896
Abstract
High-quality and fast reconstruction is essential for the clinical application of positron emission tomography (PET) imaging. Herein, a deep-learning-based framework is proposed for PET image reconstruction directly from the sinogram domain, achieving high quality and high speed at the same time. In this framework, conditional generative adversarial networks are constructed to learn a mapping from sinogram data to a reconstructed image and to produce a well-trained model. The network consists of a generator based on the U-net structure and a whole-image-strategy discriminator, which are trained alternately. Simulation experiments were conducted to validate the performance of the algorithm in terms of reconstruction accuracy, efficiency, and robustness. Real patient data and Sprague Dawley rat data were used to verify the performance of the proposed method under complex conditions. The experimental results demonstrate the superior performance of the proposed method in terms of image quality, reconstruction speed, and robustness.
(This article belongs to the Special Issue Information Processing in Medical Imaging)
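Reconstruction accuracy of this kind is typically quantified against a reference image with metrics such as the peak signal-to-noise ratio (PSNR). The paper's own evaluation is not reproduced here; the snippet below is only an illustrative sketch of the metric on synthetic arrays.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio (dB) of a reconstruction against a reference image."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error
    return 10.0 * np.log10((data_range ** 2) / mse)

# Synthetic example: a simple square phantom and a noisy "reconstruction".
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
noisy = truth + 0.05 * rng.standard_normal(truth.shape)
print(round(psnr(truth, noisy, data_range=1.0), 1))
```

With zero-mean noise of standard deviation 0.05, the MSE is about 0.0025, so the printed PSNR is around 26 dB.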

21 pages, 7008 KiB  
Article
An Explainable Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM
by Nikolaos I. Papandrianos, Anna Feleki, Serafeim Moustakidis, Elpiniki I. Papageorgiou, Ioannis D. Apostolopoulos and Dimitris J. Apostolopoulos
Appl. Sci. 2022, 12(15), 7592; https://doi.org/10.3390/app12157592 - 28 Jul 2022
Cited by 13 | Viewed by 2983
Abstract
Background: This study targets the development of an explainable deep learning methodology for the automatic classification of coronary artery disease (CAD) using SPECT MPI images. Deep learning is currently judged as non-transparent due to models' complex non-linear structure; it is therefore considered a "black box", making it hard to gain a comprehensive understanding of its internal processes and to explain its behavior. Existing explainable artificial intelligence tools can provide insight into the internal functionality of deep learning, and especially of convolutional neural networks, allowing transparency and interpretation. Methods: This study addresses the identification of patients' CAD status (infarction, ischemia, or normal) by developing an explainable deep learning pipeline in the form of a handcrafted convolutional neural network. The proposed RGB-CNN model utilizes various pre- and post-processing tools and deploys a state-of-the-art explainability tool to produce more interpretable predictions for decision making. The dataset includes stress and rest representations from 625 patients, comprising 127 infarction, 241 ischemic, and 257 normal cases previously classified by a doctor. The imaging dataset was split into 20% for testing and 80% for training, of which 15% was further used for validation. Data augmentation was employed to increase generalization. The efficacy of the well-known Grad-CAM-based color visualization approach was also evaluated to provide interpretable predictions in the detection of infarction and ischemia in SPECT MPI images, counterbalancing any lack of rationale in the results extracted by the CNN. Results: The proposed model achieved 93.3% accuracy and 94.58% AUC, demonstrating efficient performance and stability. Grad-CAM proved to be a valuable tool for explaining CNN-based judgments in SPECT MPI images, allowing nuclear physicians to make fast and confident decisions using the visual explanations offered. Conclusions: The prediction results indicate a robust and efficient deep learning model for CAD diagnosis in nuclear medicine.
(This article belongs to the Special Issue Information Processing in Medical Imaging)
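Grad-CAM's core computation can be sketched independently of any framework: spatially average the gradients of the class score with respect to a convolutional feature map to get per-channel weights, form the weighted sum of the maps, and apply a ReLU. The arrays below are synthetic stand-ins, not the paper's model.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from feature maps A_k and gradients dy/dA_k, both shaped (K, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: global-average-pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k  -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU keeps only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1] for visualisation
    return cam

# Synthetic demo: 4 feature maps of size 8x8 with random "gradients".
rng = np.random.default_rng(1)
maps = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the (H, W) heatmap is upsampled to the input image size and overlaid as a colour map, which is the visualization nuclear physicians would inspect.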

23 pages, 11587 KiB  
Article
A Deep Learning-Based Diagnosis System for COVID-19 Detection and Pneumonia Screening Using CT Imaging
by Ramzi Mahmoudi, Narjes Benameur, Rania Mabrouk, Mazin Abed Mohammed, Begonya Garcia-Zapirain and Mohamed Hedi Bedoui
Appl. Sci. 2022, 12(10), 4825; https://doi.org/10.3390/app12104825 - 10 May 2022
Cited by 31 | Viewed by 3440
Abstract
Background: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a global threat impacting the lives of millions of people worldwide. Automated detection of lung infections from computed tomography (CT) scans represents an excellent alternative; however, segmenting infected regions from CT slices encounters many challenges. Objective: To develop a diagnosis system based on deep learning techniques to detect and quantify COVID-19 infection and screen for pneumonia using CT imaging. Method: A Contrast Limited Adaptive Histogram Equalization (CLAHE) pre-processing step was used to remove noise and intensity inhomogeneity. Black slices were also removed to crop only the region of interest containing the lungs. A U-net architecture, based on CNN encoder and decoder stages, was then introduced for fast and precise image segmentation to obtain the lung and infection segmentation models. For a better estimate of performance on unseen data, fourfold cross-validation was used as a resampling procedure. A three-layered CNN architecture, with additional fully connected layers followed by a softmax layer, was used for classification. Lung and infection volumes were reconstructed to allow computation of the volume ratio and obtain the infection rate. Results: Starting from 20 CT scan cases, the data were divided into 70% for the training dataset and 30% for the validation dataset. Experimental results demonstrated that the proposed system achieves a Dice score of 0.98 and 0.91 for the lung and infection segmentation tasks, respectively, and an accuracy of 0.98 for the classification task. Conclusions: The proposed workflow aims to obtain good performance for the different system components while dealing with the reduced datasets available for training.
(This article belongs to the Special Issue Information Processing in Medical Imaging)
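For readers unfamiliar with the Dice scores reported above: the Dice coefficient measures the overlap between a predicted mask and a reference mask, 2|P∩T| / (|P| + |T|). The sketch below uses synthetic square masks, not the paper's data.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2|P∩T| / (|P| + |T|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Synthetic demo: two overlapping 6x6 square masks on a 10x10 grid.
target = np.zeros((10, 10), dtype=bool)
target[2:8, 2:8] = True        # 36 pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:9, 3:9] = True          # 36 pixels, 25 of which overlap the target
print(round(dice_score(pred, target), 3))  # -> 0.694
```

A perfect segmentation gives 1.0, so the reported 0.98 (lung) and 0.91 (infection) indicate near-complete overlap with the reference masks.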
