Special Issue "Machine Learning in Medical Image Processing"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2020).

Special Issue Editor

Prof. Chuan-Yu Chang
Guest Editor
National Yunlin University of Science and Technology
Interests: medical image processing; neural networks; machine learning

Special Issue Information

Dear Colleagues,

With the rapid improvement of computing power, machine learning-based algorithms have received considerable attention from researchers and academics due to their convincing performance in medical image processing and recognition. A variety of medical imaging modalities, including ultrasound, X-ray, CT, MRI, and pathology imaging, give physicians access to a wealth of data. However, we still lack effective tools to accurately identify the important information in these medical images. Machine learning is a pattern-recognition technique that can be applied to medical image processing, image segmentation, image interpretation, image fusion, image registration, computer-aided diagnosis, and image-guided therapy.

A considerable number of machine learning technologies have been proposed, including the support vector machine (SVM), neural network (NN), k-nearest neighbors (KNN), convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), extreme learning machine (ELM), and generative adversarial network (GAN). Through machine learning technology, we can extract information from images and represent that information effectively and efficiently. Machine learning facilitates and assists physicians in making more accurate and faster diagnoses of diseases. These techniques also enhance the ability of physicians and researchers to understand how to analyze the genetic variations that lead to disease. Therefore, the purpose of this Special Issue is to present the developments and achievements of recently popular machine learning algorithms in medical image analysis and processing. Topics of interest include, but are not limited to, the following:

  1. Detection and recognition of specific elements
  2. Image segmentation and interpretation
  3. Image reconstruction
  4. Image registration and fusion
  5. Computer-aided diagnosis
  6. Other applications in medical image analysis
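As a minimal illustration of one technique from the list of methods above, the following sketch applies a k-nearest-neighbors (KNN) classifier to toy image-feature vectors. The features, labels, and sizes are entirely hypothetical and not drawn from any paper in this issue:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(train_X - query, axis=1)   # Euclidean distance to every sample
    nearest = np.argsort(dists)[:k]                   # indices of the k closest samples
    votes = train_y[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                  # majority label

# Toy 2-D "feature vectors" (e.g., texture statistics from an image patch)
X = np.array([[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]])
y = np.array([0, 0, 1, 1])  # 0 = healthy, 1 = lesion (illustrative labels)
print(knn_predict(X, y, np.array([0.95, 0.75])))  # → 1
```

In practice the feature vectors would come from an image-processing pipeline (texture, shape, or deep features), and k would be tuned on validation data.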

Prof. Chuan-Yu Chang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • machine learning
  • neural networks
  • support vector machine
  • deep learning
  • image segmentation
  • image reconstruction
  • image registration
  • image fusion

Published Papers (6 papers)


Research

Open Access Article
HyCAD-OCT: A Hybrid Computer-Aided Diagnosis of Retinopathy by Optical Coherence Tomography Integrating Machine Learning and Feature Maps Localization
Appl. Sci. 2020, 10(14), 4716; https://doi.org/10.3390/app10144716 - 08 Jul 2020
Abstract
Optical Coherence Tomography (OCT) imaging has major advantages in effectively identifying the presence of various ocular pathologies and detecting a wide range of macular diseases. OCT examinations can aid in the detection, at early stages, of many retinal disorders that cannot be detected in traditional retinal images. In this paper, a new hybrid computer-aided OCT diagnostic system (HyCAD) is proposed for the classification of Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV), and drusen disorders, while separating them from normal OCT images. The proposed HyCAD hybrid learning system integrates the segmentation of the Region of Interest (RoI), based on central serous chorioretinopathy (CSC) in Spectral Domain Optical Coherence Tomography (SD-OCT) images, with deep learning architectures for the effective diagnosis of retinal disorders. The proposed system assimilates a range of techniques, including RoI localization and feature extraction, followed by classification and diagnosis. An efficient feature fusion phase has been introduced for combining the OCT image features, extracted by a Deep Convolutional Neural Network (CNN), with the features extracted in the RoI segmentation phase. This fused feature set is used to predict multiclass OCT retina disorders. The proposed segmentation of retinal RoI regions makes a substantial contribution, as it draws attention to the most significant areas that are candidates for diagnosis. A new modified deep learning architecture (Norm-VGG16) is introduced, integrating a kernel regularizer. Norm-VGG16 is trained from scratch on a large benchmark dataset and used in RoI localization and segmentation. Various experiments have been carried out to illustrate the performance of the proposed system. The Large Dataset of Labeled Optical Coherence Tomography (OCT) v3 benchmark is used to validate the efficiency of the model compared with others in the literature. 
The experimental results show that the proposed model achieves relatively high performance in terms of accuracy, sensitivity, and specificity: average values of 98.8%, 99.4%, and 98.2%, respectively, are achieved. This remarkable performance reflects that the fusion phase can effectively improve the identification ratio of urgent patients’ diagnostic images and clinical data. In addition, outstanding performance is achieved compared to other methods in the literature. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
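The fusion phase described in the abstract, which combines deep CNN features with features from the RoI segmentation phase, can be sketched as vector concatenation with per-part normalization. The function name, dimensions, and normalization choice below are assumptions for illustration, not the HyCAD implementation:

```python
import numpy as np

def fuse_features(cnn_features, roi_features):
    """Concatenate a deep feature vector with segmentation-derived RoI features,
    L2-normalising each part so that neither dominates the fused representation."""
    def l2norm(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2norm(cnn_features), l2norm(roi_features)])

# Illustrative sizes: a 512-d CNN embedding and 6 RoI shape/intensity statistics
cnn_vec = np.random.rand(512)
roi_vec = np.random.rand(6)
fused = fuse_features(cnn_vec, roi_vec)
print(fused.shape)  # → (518,)
```

The fused vector would then feed the final multiclass classifier; without per-part normalization, the much longer CNN embedding would tend to dominate the distance geometry.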

Open Access Article
Intelligent Computer-Aided Diagnostic System for Magnifying Endoscopy Images of Superficial Esophageal Squamous Cell Carcinoma
Appl. Sci. 2020, 10(8), 2771; https://doi.org/10.3390/app10082771 - 16 Apr 2020
Abstract
Predicting the depth of invasion of superficial esophageal squamous cell carcinomas (SESCCs) is important when selecting treatment modalities such as endoscopic or surgical resections. Recently, the Japanese Esophageal Society (JES) proposed a new simplified classification for magnifying endoscopy findings of SESCCs to predict the depth of tumor invasion based on intrapapillary capillary loops with the SESCC microvessels classified into the B1, B2, and B3 types. In this study, a four-step classification method for SESCCs is proposed. First, Niblack’s method was applied to endoscopy images to select a candidate region of microvessels. Second, the background regions were delineated from the vessel area using the high-speed fast Fourier transform and adaptive resonance theory 2 algorithm. Third, the morphological characteristics of the vessels were extracted. Based on the extracted features, the support vector machine algorithm was employed to classify the microvessels into the B1 and non-B1 types. Finally, following the automatic measurement of the microvessel caliber using the proposed method, the non-B1 types were sub-classified into the B2 and B3 types via comparisons with the caliber of the surrounding microvessels. In the experiments, 114 magnifying endoscopy images (47 B1-type, 48 B2-type, and 19 B3-type images) were used to classify the characteristics of SESCCs. The accuracy, sensitivity, and specificity of the classification into the B1 and non-B1 types were 83.3%, 74.5%, and 89.6%, respectively, while those for the classification of the B2 and B3 types in the non-B1 types were 73.1%, 73.7%, and 72.9%, respectively. The proposed machine learning based computer-aided diagnostic system could obtain the objective data by analyzing the pattern and caliber of the microvessels with acceptable performance. Further studies are necessary to carefully validate the clinical utility of the proposed system. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
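The first step of the pipeline above, Niblack's method for selecting candidate microvessel regions, computes a local threshold T = m + k·s from the mean m and standard deviation s of a sliding window. A minimal sketch follows; the window size and k are illustrative defaults, not the paper's parameters:

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """Niblack's local threshold: T = local_mean + k * local_std.
    With negative k, pixels darker than T are marked as candidate vessel pixels."""
    h, w = img.shape
    r = window // 2
    padded = np.pad(img.astype(float), r, mode='reflect')
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            win = padded[y:y + window, x:x + window]  # window centred on (y, x)
            t = win.mean() + k * win.std()
            mask[y, x] = img[y, x] < t
    return mask

# Toy image: a dark "vessel" line on a brighter background
img = np.full((32, 32), 200.0)
img[16, 4:28] = 50.0
mask = niblack_threshold(img)
print(mask[16, 16], mask[0, 0])  # → True False
```

Flat background regions are rejected because their local standard deviation is near zero, which is why the method suits thin, dark structures such as intrapapillary capillary loops.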

Open Access Article
Machine Learning Classifiers Evaluation for Automatic Karyogram Generation from G-Banded Metaphase Images
Appl. Sci. 2020, 10(8), 2758; https://doi.org/10.3390/app10082758 - 16 Apr 2020
Abstract
This work proposes the evaluation of a set of machine learning algorithms and the selection of the most appropriate one for the classification of segmented chromosome images acquired using the Giemsa staining technique (G-banding). The evaluation and selection of the best classification algorithms were carried out over a dataset of 119 Q-banding chromosome images, and the obtained results were then applied to a dataset of 24 G-banding chromosome images, manually classified by an expert of the Laboratory of Cytogenetics of the Children’s Hospital of Tamaulipas. The evaluation of 51 classifiers yielded that the best classification accuracy for the selected features was obtained by a backpropagation neural network. One of the main contributions of this study is the proposal of a two-stage classification scheme based on the best classifier found in the initial evaluation. In stage 1, chromosome images are classified into three major groups. In stage 2, the output of stage 1 is used as the input of a multiclass classifier. Using this scheme, 82% of the IGB bank samples and 88% of the samples of a bank of Q-banding images available in the literature, consisting of 119 chromosome studies, were successfully classified. The proposed work is part of a desktop application that allows cytogeneticists to automatically generate cytogenetic reports. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
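The two-stage scheme described above, a coarse grouping followed by a per-group multiclass classifier, can be sketched as two chained functions. The thresholds and rule-based "classifiers" below are toy stand-ins for the trained backpropagation network, purely to show the control flow:

```python
def stage1_group(length):
    """Stage 1: coarse grouping of a chromosome by length (thresholds are illustrative)."""
    if length > 60:
        return "large"
    if length > 30:
        return "medium"
    return "small"

def stage2_classify(group, centromere_index):
    """Stage 2: a per-group multiclass classifier. A toy rule on the centromere
    index stands in for the trained neural network of the paper."""
    rules = {
        "large":  [(0.45, "1"), (0.35, "2"), (0.0, "3")],
        "medium": [(0.40, "6"), (0.30, "7"), (0.0, "8")],
        "small":  [(0.45, "19"), (0.35, "20"), (0.0, "21")],
    }
    for threshold, label in rules[group]:  # first threshold met wins
        if centromere_index >= threshold:
            return label

def classify_chromosome(length, centromere_index):
    """Chain stage 1 into stage 2, as in the two-stage scheme."""
    return stage2_classify(stage1_group(length), centromere_index)

print(classify_chromosome(70, 0.48))  # → 1
print(classify_chromosome(20, 0.10))  # → 21
```

The benefit of the split is that each stage-2 classifier only has to separate a handful of visually similar classes instead of all 24 chromosome types at once.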

Open Access Article
Visual and Quantitative Evaluation of Amyloid Brain PET Image Synthesis with Generative Adversarial Network
Appl. Sci. 2020, 10(7), 2628; https://doi.org/10.3390/app10072628 - 10 Apr 2020
Cited by 1
Abstract
Conventional data augmentation (DA) techniques, which have been used to improve the performance of predictive models trained on unbalanced data sets, entail an effort to define the proper repeating operations (e.g., rotation and mirroring) according to the target class distribution. Although DA using a generative adversarial network (GAN) has the potential to overcome the disadvantages of conventional DA, there are not enough cases where this technique has been applied to medical images, and in particular, not enough cases where quantitative evaluation was used to determine whether the generated images had enough realism and diversity to be used for DA. In this study, we synthesized 18F-Florbetaben (FBB) images using a conditional GAN. The generated images were evaluated using various measures, and we present the state of the images and the quantitative similarity values at which the generated images can be expected to successfully augment data for DA. The method includes (1) a conditional WGAN-GP to learn the axial image distribution extracted from pre-processed 3D FBB images, (2) a pre-trained DenseNet121 and model-agnostic metrics for visual and quantitative measurements of the generated image distribution, and (3) a machine learning model for observing the improvement in generalization performance brought by the generated dataset. The Visual Turing test showed similarity in the descriptions of typical patterns of amyloid deposition for each of the generated images. However, differences in similarity and classification performance per axial level were observed, which did not agree with the visual evaluation. Experimental results demonstrated that the quantitative measurements were able to detect the similarity between two distributions and observe mode collapse better than the Visual Turing test and t-SNE. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
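One way to quantify the similarity between a real and a generated feature distribution, in the spirit of the quantitative measurements discussed above, is a Fréchet-style distance between the two sets of feature vectors. The sketch below is a simplified diagonal-covariance variant (the full metric requires a matrix square root of the covariances) and is not the paper's implementation:

```python
import numpy as np

def frechet_diag(real, fake):
    """Fréchet distance between two feature sets under a diagonal-Gaussian
    approximation: ||mu1 - mu2||^2 + sum(v1 + v2 - 2*sqrt(v1*v2))."""
    mu1, mu2 = real.mean(axis=0), fake.mean(axis=0)
    v1, v2 = real.var(axis=0), fake.var(axis=0)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum(v1 + v2 - 2 * np.sqrt(v1 * v2)))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))   # "real" image features
close = rng.normal(0.1, 1.0, size=(1000, 8))  # nearly the same distribution
far = rng.normal(3.0, 2.0, size=(1000, 8))    # a poorly matched "generator"
print(frechet_diag(real, close) < frechet_diag(real, far))  # → True
```

Because the statistic compares whole distributions rather than individual images, it can expose mode collapse that a per-image visual inspection misses, which is the point made in the abstract.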

Open Access Article
Using 2D CNN with Taguchi Parametric Optimization for Lung Cancer Recognition from CT Images
Appl. Sci. 2020, 10(7), 2591; https://doi.org/10.3390/app10072591 - 09 Apr 2020
Cited by 2
Abstract
Lung cancer is one of the most common causes of cancer deaths. Early detection and treatment of lung cancer are essential. However, the detection of lung cancer in patients produces many false positives. Therefore, increasing the accuracy of classification, or true detection, by computed tomography (CT) is a difficult task. Solving this problem using intelligent and automated methods has become a hot research topic in recent years. Hence, we propose a 2D convolutional neural network (2D CNN) with Taguchi parametric optimization for automatically recognizing lung cancer from CT images. In the Taguchi method, 36 experiments and 8 control factors of mixed levels were selected to determine the optimum parameters of the 2D CNN architecture and improve the classification accuracy of lung cancer. The experimental results show that the average classification accuracies of the original 2D CNN and the 2D CNN with Taguchi parameter optimization in lung cancer recognition are 91.97% and 98.83% on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, and 94.68% and 99.97% on the International Society for Optics and Photonics with the support of the American Association of Physicists in Medicine (SPIE-AAPM) dataset, respectively. The proposed method is thus 6.86% and 5.29% more accurate than the original 2D CNN on the two datasets, respectively, proving the superiority of the proposed model. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
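The Taguchi approach evaluates a small orthogonal array of factor combinations instead of a full grid, then picks each factor's best level from its mean response over the array. A toy sketch with three hypothetical two-level factors and an L4 array follows (the paper uses 36 experiments over 8 mixed-level factors; the factor names, levels, and response function here are purely illustrative):

```python
import numpy as np

# Hypothetical control factors for a small CNN
factors = {"lr": [0.01, 0.001], "kernel": [3, 5], "filters": [16, 32]}
names = list(factors)

# L4 orthogonal array: 4 runs cover 3 two-level factors such that, in any two
# columns, every pair of levels appears exactly once (a full grid needs 8 runs).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def accuracy(lr, kernel, filters):
    """Stand-in for training/evaluating the CNN; a deterministic toy response."""
    return 0.90 + 0.04 * (lr == 0.001) + 0.02 * (kernel == 5) + 0.01 * (filters == 32)

runs = []
for levels in L4:
    cfg = {n: factors[n][l] for n, l in zip(names, levels)}
    runs.append((levels, accuracy(**cfg)))

# Pick the best level of each factor from its mean response over the array
best = {}
for i, n in enumerate(names):
    means = [np.mean([acc for lv, acc in runs if lv[i] == l]) for l in (0, 1)]
    best[n] = factors[n][int(np.argmax(means))]
print(best)  # → {'lr': 0.001, 'kernel': 5, 'filters': 32}
```

The orthogonality of the array is what lets the per-factor means isolate each factor's effect despite running only a fraction of all combinations.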

Open Access Article
A Novel Computer-Aided-Diagnosis System for Breast Ultrasound Images Based on BI-RADS Categories
Appl. Sci. 2020, 10(5), 1830; https://doi.org/10.3390/app10051830 - 06 Mar 2020
Abstract
Breast ultrasound is not only one of the major modalities for breast tissue imaging, but also one of the important methods in breast tumor screening: it is non-radiative, non-invasive, harmless, simple, and low cost. The American College of Radiology (ACR) proposed the Breast Imaging Reporting and Data System (BI-RADS) to evaluate far more breast lesion severities than traditional diagnoses, according to five criteria describing mass composition: shape, orientation, margin, echo pattern, and posterior features. However, there exist some problems, such as intensity differences and different resolutions in image acquisition among different types of ultrasound imaging modalities, so that clinicians cannot always accurately identify the BI-RADS categories or disease severities. To this end, this article adopted three different brands of ultrasound scanners to acquire the breast images for our experimental samples. The breast lesion was detected in the original image using preprocessing, image segmentation, etc. The severity of the breast tumor was evaluated from the features of the breast lesion via our proposed classifiers according to the BI-RADS standard, rather than by the traditional severity assessment, i.e., merely benign or malignant. In this work, we mainly focused on BI-RADS categories 2–5 after the segmentation stage, as a result of clinical practice. Moreover, several features related to lesion severity based on the selected BI-RADS categories were introduced into three machine learning classifiers, including a Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN), combined with feature selection to develop a multi-class assessment of breast tumor severity based on BI-RADS. Experimental results show that the proposed CAD system based on BI-RADS can obtain identification accuracies with the SVM, RF, and CNN reaching 80.00%, 77.78%, and 85.42%, respectively. 
We also validated the performance and adaptability of the classification using different ultrasound scanners. Results also indicate that the CNN-based F-scores exceed 75% (i.e., prominent adaptability) when samples from various BI-RADS categories were tested. Full article
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
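Per-category F-scores of the kind reported above can be computed from one-vs-rest counts of true positives, false positives, and false negatives. A self-contained sketch with illustrative BI-RADS labels (not the paper's data):

```python
def per_class_f1(y_true, y_pred, classes):
    """Per-class F1 from predicted vs. true labels (one-vs-rest counts)."""
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

# Toy predictions over BI-RADS categories 2-5
truth = [2, 2, 3, 3, 4, 4, 5, 5]
pred  = [2, 2, 3, 4, 4, 4, 5, 3]
print(per_class_f1(truth, pred, [2, 3, 4, 5]))
```

Reporting the score per category, rather than a single overall accuracy, is what reveals whether a classifier remains adaptable across all BI-RADS severities.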
