Machine Learning/Deep Learning in Medical Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 26041

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Guest Editor
Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507, Japan
Interests: medical imaging; machine learning; deep learning; cancer diagnosis; diagnostic radiology

Special Issue Information

Dear Colleagues,

This Special Issue focuses on the application of machine learning/deep learning to medical images. We welcome original papers and review papers related to the following topics. Although the Special Issue focuses on machine learning/deep learning, papers on medical image processing with other techniques are also welcome.

Research Topics:

  • Cutting-edge methodologies/algorithms of machine learning/deep learning for medical images
  • Clinical applications of machine learning/deep learning for medical images
  • Open-source machine learning/deep learning software used for medical image processing
  • Open medical image datasets useful for the development and validation of machine learning/deep learning
  • Reproducibility/validation studies of open-source machine learning/deep learning software for medical images

Dr. Mizuho Nishio
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • medical image processing
  • reproducibility study
  • computed tomography (CT)
  • magnetic resonance imaging (MRI)
  • positron emission tomography (PET)
  • digital pathology...

Published Papers (8 papers)


Editorial


2 pages, 172 KiB  
Editorial
Special Issue on “Machine Learning/Deep Learning in Medical Image Processing”
by Mizuho Nishio
Appl. Sci. 2021, 11(23), 11483; https://doi.org/10.3390/app112311483 - 03 Dec 2021
Cited by 3 | Viewed by 1165
Abstract
Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL) [...] Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)

Research


24 pages, 5387 KiB  
Article
Automatic Diagnosis of Coronary Artery Disease in SPECT Myocardial Perfusion Imaging Employing Deep Learning
by Nikolaos Papandrianos and Elpiniki Papageorgiou
Appl. Sci. 2021, 11(14), 6362; https://doi.org/10.3390/app11146362 - 09 Jul 2021
Cited by 24 | Viewed by 3376
Abstract
Focusing on coronary artery disease (CAD) patients, this research paper addresses the problem of automatic diagnosis of ischemia or infarction using single-photon emission computed tomography (SPECT) (Siemens Symbia S Series) myocardial perfusion imaging (MPI) scans and investigates the capabilities of deep learning and convolutional neural networks. Considering the wide applicability of deep learning in medical image classification, a robust CNN model whose architecture was previously determined in nuclear image analysis is introduced to recognize myocardial perfusion images by extracting the insightful features of an image and using them to classify it correctly. In addition, a deep learning classification approach using transfer learning is implemented to classify cardiovascular images as normal or abnormal (ischemia or infarction) from SPECT MPI scans. The present work is differentiated from other studies in nuclear cardiology as it utilizes SPECT MPI images. To address the two-class classification problem of CAD diagnosis with adequate accuracy, simple, fast, and efficient CNN architectures were built based on a CNN exploration process. They were then employed to identify the category of CAD diagnosis, demonstrating their generalization capabilities. The results revealed that the applied methods are sufficiently accurate and able to differentiate infarction or ischemia from healthy patients (overall classification accuracy = 93.47% ± 2.81%, AUC score = 0.936). To strengthen the findings of this study, the proposed deep learning approaches were compared with other popular state-of-the-art CNN architectures for the specific dataset. The prediction results show the efficacy of the new deep learning architecture applied for CAD diagnosis using SPECT MPI scans over the existing ones in nuclear medicine. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
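The abstract above reports an AUC score of 0.936 for the two-class CAD problem. As an illustrative sketch (not the authors' code), the AUC can be computed directly from classifier scores via the Mann-Whitney U statistic; the scores below are hypothetical:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores: abnormal (positive) vs. normal cases.
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.4, 0.7, 0.2]
print(roc_auc(pos, neg))  # 0.90625
```

This pairwise definition is equivalent to the area under the ROC curve and makes clear why AUC is insensitive to the choice of decision threshold.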

10 pages, 12298 KiB  
Article
Deep Learning Based Airway Segmentation Using Key Point Prediction
by Jinyoung Park, JaeJoon Hwang, Jihye Ryu, Inhye Nam, Sol-A Kim, Bong-Hae Cho, Sang-Hun Shin and Jae-Yeol Lee
Appl. Sci. 2021, 11(8), 3501; https://doi.org/10.3390/app11083501 - 14 Apr 2021
Cited by 11 | Viewed by 2436
Abstract
The purpose of this study was to investigate the accuracy of airway volume measurement by a regression neural network-based deep-learning model. A set of manually outlined airway data was used to build the algorithm for fully automatic segmentation via a deep learning process. Manual landmarks of the airway were determined by one examiner using the mid-sagittal plane of cone-beam computed tomography (CBCT) images of 315 patients. Clinical dataset-based training with data augmentation was conducted. Based on the annotated landmarks, the airway passage was measured and segmented. The accuracy of our model was confirmed by measuring the following between the examiner and the program: (1) the difference in volume of the nasopharynx, oropharynx, and hypopharynx, and (2) the Euclidean distance. For the agreement analysis, 61 samples were extracted and compared. The correlation test showed a range of good to excellent reliability. The difference between volumes was analyzed using regression analysis. The slope of the two measurements was close to 1 and showed a linear regression correlation (r2 = 0.975, slope = 1.02, p < 0.001). These results indicate that fully automatic segmentation of the airway is possible by training via deep learning. Additionally, a high correlation between manual data and deep learning data was found. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
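The agreement analysis described above (slope close to 1, r2 = 0.975) is an ordinary least-squares fit of automatic against manual volumes. A minimal sketch of that computation, using hypothetical paired volume measurements:

```python
def linreg(x, y):
    """Ordinary least-squares fit y ~ slope*x + intercept,
    returning (slope, intercept, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical paired airway volumes (cm^3): manual vs. automatic.
manual = [4.1, 5.3, 6.0, 7.2, 8.5]
auto = [4.0, 5.5, 6.1, 7.1, 8.6]
slope, intercept, r2 = linreg(manual, auto)
```

A slope near 1 with a small intercept and high r2 indicates that the automatic measurement tracks the manual one without systematic scaling bias, which is the interpretation the abstract relies on.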

19 pages, 4196 KiB  
Article
Characterization of Optical Coherence Tomography Images for Colon Lesion Differentiation under Deep Learning
by Cristina L. Saratxaga, Jorge Bote, Juan F. Ortega-Morán, Artzai Picón, Elena Terradillos, Nagore Arbide del Río, Nagore Andraka, Estibaliz Garrote and Olga M. Conde
Appl. Sci. 2021, 11(7), 3119; https://doi.org/10.3390/app11073119 - 01 Apr 2021
Cited by 12 | Viewed by 2702
Abstract
(1) Background: Clinicians demand new tools for early diagnosis and improved detection of colon lesions that are vital for patient prognosis. Optical coherence tomography (OCT) allows microscopical inspection of tissue and might serve as an optical biopsy method that could lead to in-situ diagnosis and treatment decisions; (2) Methods: A database of murine (rat) healthy, hyperplastic and neoplastic colonic samples with more than 94,000 images was acquired. A methodology that includes a data augmentation processing strategy and a deep learning model for automatic classification (benign vs. malignant) of OCT images is presented and validated over this dataset. Comparative evaluation is performed both over individual B-scan images and C-scan volumes; (3) Results: A model was trained and evaluated with the proposed methodology using six different data splits to present statistically significant results. Considering this, 0.9695 (±0.0141) sensitivity and 0.8094 (±0.1524) specificity were obtained when diagnosis was performed over B-scan images. On the other hand, 0.9821 (±0.0197) sensitivity and 0.7865 (±0.205) specificity were achieved when diagnosis was made considering all the images in the whole C-scan volume; (4) Conclusions: The proposed methodology based on deep learning showed great potential for the automatic characterization of colon polyps and future development of the optical biopsy paradigm. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
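The OCT study reports sensitivity and specificity both per B-scan image and per C-scan volume. As a hedged sketch of those two evaluation levels (the authors' exact aggregation rule is not given; majority vote is one plausible choice), with toy labels:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = malignant)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def volume_prediction(bscan_preds):
    """Aggregate per-B-scan predictions into one C-scan (volume) label
    by majority vote -- one plausible aggregation rule, assumed here."""
    return 1 if 2 * sum(bscan_preds) >= len(bscan_preds) else 0

# Toy example: per-image evaluation, then volume-level aggregation.
sens, spec = sens_spec([1, 1, 0, 0], [1, 0, 0, 1])  # (0.5, 0.5)
label = volume_prediction([1, 1, 0])                # 1
```

Evaluating at the C-scan level pools evidence across all B-scans of a volume, which is consistent with the higher sensitivity the abstract reports for volume-level diagnosis.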

14 pages, 7354 KiB  
Article
Deep Learning-Based Pixel-Wise Lesion Segmentation on Oral Squamous Cell Carcinoma Images
by Francesco Martino, Domenico D. Bloisi, Andrea Pennisi, Mulham Fawakherji, Gennaro Ilardi, Daniela Russo, Daniele Nardi, Stefania Staibano and Francesco Merolla
Appl. Sci. 2020, 10(22), 8285; https://doi.org/10.3390/app10228285 - 23 Nov 2020
Cited by 24 | Viewed by 5044
Abstract
Oral squamous cell carcinoma is the most common oral cancer. In this paper, we present a performance analysis of four different deep learning-based pixel-wise methods for lesion segmentation on oral carcinoma images. Two diverse image datasets, one for training and another one for testing, are used to generate and evaluate the models used for segmenting the images, thus allowing assessment of the generalization capability of the considered deep network architectures. An important contribution of this work is the creation of the Oral Cancer Annotated (ORCA) dataset, containing ground-truth data derived from the well-known Cancer Genome Atlas (TCGA) dataset. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
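Pixel-wise segmentation methods such as those compared above are typically scored by overlap between the predicted mask and the ground-truth annotation. A minimal sketch of the standard Dice overlap on flat binary masks (illustrative only; the paper's own evaluation protocol may differ):

```python
def dice(mask_a, mask_b):
    """Pixel-wise Dice coefficient between two binary masks given as
    flat lists of 0/1 pixels: 2*|A & B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0

# Toy 2x2 masks flattened to lists: prediction vs. ground truth.
pred = [1, 1, 0, 0]
truth = [1, 0, 0, 0]
score = dice(pred, truth)  # 2/3
```

Dice ranges from 0 (no overlap) to 1 (identical masks) and, unlike plain pixel accuracy, is not dominated by the large background region around a small lesion.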

23 pages, 7468 KiB  
Article
An Efficient Lightweight CNN and Ensemble Machine Learning Classification of Prostate Tissue Using Multilevel Feature Analysis
by Subrata Bhattacharjee, Cho-Hee Kim, Deekshitha Prakash, Hyeon-Gyun Park, Nam-Hoon Cho and Heung-Kook Choi
Appl. Sci. 2020, 10(22), 8013; https://doi.org/10.3390/app10228013 - 12 Nov 2020
Cited by 13 | Viewed by 3478
Abstract
Prostate carcinoma is caused when cells and glands in the prostate change their shape and size from normal to abnormal. Typically, the pathologist’s goal is to classify the staining slides and differentiate normal from abnormal tissue. In the present study, we used a computational approach to classify images and features of benign and malignant tissues using artificial intelligence (AI) techniques. Here, we introduce two lightweight convolutional neural network (CNN) architectures and an ensemble machine learning (EML) method for image and feature classification, respectively. Moreover, classification using pre-trained models and handcrafted features was carried out for comparative analysis. Binary classification was performed to classify between the two grade groups (benign vs. malignant), and quantile-quantile plots were used to show their predicted outcomes. Our proposed models for deep learning (DL) and machine learning (ML) classification achieved promising accuracies of 94.0% and 92.0%, respectively, based on non-handcrafted features extracted from CNN layers. Therefore, these models were able to predict with near-perfect accuracy using few trainable parameters or CNN layers, highlighting the importance of DL and ML techniques and suggesting that the computational analysis of microscopic anatomy will be essential to the future practice of pathology. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
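The ensemble machine learning (EML) approach above combines several base classifiers over CNN-extracted features. As a hedged illustration (the paper's exact ensemble rule is not specified here), soft voting averages the base classifiers' positive-class probabilities and thresholds the mean:

```python
def soft_vote(prob_lists, threshold=0.5):
    """Soft-voting ensemble: average the positive-class probabilities
    from several base classifiers, then threshold the mean.
    prob_lists[m][i] is classifier m's probability for sample i."""
    n_models = len(prob_lists)
    n_samples = len(prob_lists[0])
    preds = []
    for i in range(n_samples):
        avg = sum(pl[i] for pl in prob_lists) / n_models
        preds.append(1 if avg >= threshold else 0)
    return preds

# Hypothetical probabilities from three base classifiers on two samples
# (features assumed to come from CNN layers, as in the abstract).
probs = [[0.9, 0.2],
         [0.6, 0.4],
         [0.4, 0.3]]
labels = soft_vote(probs)  # [1, 0]
```

Averaging probabilities rather than hard votes lets a confident classifier outweigh two lukewarm dissenters, which is often why soft voting beats majority voting on well-calibrated models.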

15 pages, 4348 KiB  
Article
Simulation Study of Low-Dose Sparse-Sampling CT with Deep Learning-Based Reconstruction: Usefulness for Evaluation of Ovarian Cancer Metastasis
by Yasuyo Urase, Mizuho Nishio, Yoshiko Ueno, Atsushi K. Kono, Keitaro Sofue, Tomonori Kanda, Takaki Maeda, Munenobu Nogami, Masatoshi Hori and Takamichi Murakami
Appl. Sci. 2020, 10(13), 4446; https://doi.org/10.3390/app10134446 - 28 Jun 2020
Cited by 14 | Viewed by 2605
Abstract
The usefulness of sparse-sampling CT with deep learning-based reconstruction for detection of metastasis of malignant ovarian tumors was evaluated. We obtained contrast-enhanced CT images (n = 141) of ovarian cancers from a public database, whose images were randomly divided into 71 training, 20 validation, and 50 test cases. Sparse-sampling CT images were calculated slice-by-slice by software simulation. Two deep-learning models for deep learning-based reconstruction were evaluated: Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) and deeper U-net. For 50 test cases, we evaluated the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as quantitative measures. Two radiologists independently performed a qualitative evaluation for the following points: entire CT image quality; visibility of the iliac artery; and visibility of peritoneal dissemination, liver metastasis, and lymph node metastasis. The Wilcoxon signed-rank test and McNemar test were used to compare image quality and metastasis detectability between the two models, respectively. The mean PSNR and SSIM were better with deeper U-net than with RED-CNN. For all items of the visual evaluation, deeper U-net scored significantly better than RED-CNN. The metastasis detectability with deeper U-net was more than 95%. Sparse-sampling CT with deep learning-based reconstruction proved useful in detecting metastasis of malignant ovarian tumors and might contribute to reducing overall CT-radiation exposure. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
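PSNR, one of the two quantitative measures used above, compares a reconstructed image against the fully sampled reference via the mean squared error. A minimal sketch on flat pixel lists (illustrative; real CT evaluation would operate on 2D slices and a suitable dynamic range):

```python
import math

def psnr(ref, recon, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction, both given as flat lists of pixel values.
    Higher is better; identical images give infinity."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, recon)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel example: small reconstruction errors -> PSNR around 33 dB.
reference = [0.0, 128.0, 255.0, 64.0]
reconstructed = [0.0, 120.0, 250.0, 70.0]
quality = psnr(reference, reconstructed)
```

Because PSNR is a pure per-pixel error measure, papers like this one pair it with SSIM, which also accounts for local structure that radiologists actually perceive.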

11 pages, 731 KiB  
Article
Automatic Pancreas Segmentation Using Coarse-Scaled 2D Model of Deep Learning: Usefulness of Data Augmentation and Deep U-Net
by Mizuho Nishio, Shunjiro Noguchi and Koji Fujimoto
Appl. Sci. 2020, 10(10), 3360; https://doi.org/10.3390/app10103360 - 12 May 2020
Cited by 23 | Viewed by 4015
Abstract
Combinations of data augmentation methods and deep learning architectures for automatic pancreas segmentation on CT images are proposed and evaluated. Images from a public CT dataset of pancreas segmentation were used to evaluate the models. Baseline U-net and deep U-net were chosen for the deep learning models of pancreas segmentation. Methods of data augmentation included conventional methods, mixup, and random image cropping and patching (RICAP). Ten combinations of the deep learning models and the data augmentation methods were evaluated. Four-fold cross validation was performed to train and evaluate these models with data augmentation methods. The Dice similarity coefficient (DSC) was calculated between automatic segmentation results and manually annotated labels, and the results were visually assessed by two radiologists. The performance of the deep U-net was better than that of the baseline U-net, with mean DSC of 0.703–0.789 and 0.686–0.748, respectively. In both baseline U-net and deep U-net, the methods with data augmentation performed better than methods with no data augmentation, and mixup and RICAP were more useful than the conventional method. The best mean DSC was obtained using a combination of deep U-net, mixup, and RICAP, and the two radiologists scored the results from this model as good or perfect in 76 and 74 of the 82 cases, respectively. Full article
(This article belongs to the Special Issue Machine Learning/Deep Learning in Medical Image Processing)
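The mixup augmentation used above blends pairs of training images and their labels with a random weight drawn from a Beta distribution. A minimal sketch on flat pixel lists with one-hot labels (illustrative only; the paper applies it to CT slices and segmentation labels):

```python
import random

def mixup(img_a, label_a, img_b, label_b, alpha=0.2):
    """mixup data augmentation: form a convex combination of two
    training examples with weight lam ~ Beta(alpha, alpha).
    Images and labels are flat lists of floats (labels one-hot)."""
    lam = random.betavariate(alpha, alpha)
    img = [lam * a + (1 - lam) * b for a, b in zip(img_a, img_b)]
    label = [lam * a + (1 - lam) * b for a, b in zip(label_a, label_b)]
    return img, label

# Toy example: blend two 2-pixel "images" with one-hot labels.
mixed_img, mixed_label = mixup([0.0, 1.0], [1.0, 0.0],
                               [1.0, 0.0], [0.0, 1.0])
```

Small alpha values concentrate lam near 0 or 1, so most mixed samples stay close to one of the two originals; the soft labels regularize the network against overconfident predictions.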
