Machine Learning-Based Medical Image Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (1 August 2023) | Viewed by 30119

Special Issue Editors


Guest Editor
Department of Artificial Intelligence Convergence, Chonnam National University, 77 Yongbong-ro, Gwangju 61186, Republic of Korea
Interests: deep-learning-based emotion recognition; medical image analysis; pattern recognition

Guest Editor
1. Department of Radiology, Chonnam National University, Gwangju 59626, Republic of Korea
2. Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 59626, Republic of Korea
Interests: magnetic resonance imaging; medical imaging; clinical application of machine learning and deep learning

Guest Editor
Associate Professor, Division of Culture Contents, Graduate School of Data Science, AI Convergence and Open Sharing System, Chonnam National University, Republic of Korea
Interests: object/image detection; segmentation; recognition; tracking; image understanding; action/behavior/gesture recognition; emotion recognition

Special Issue Information

Dear Colleagues,

Machine learning and deep learning techniques have contributed to great success in medical image analysis. This Special Issue is being assembled to share various in-depth research results related to machine learning- and deep learning-based medical image analysis methods, including, but not limited to, organ segmentation, detection of particular regions of interest, disease diagnosis and quantification, prediction of prognosis, and image restoration and synthesis. These applications may utilize various types of medical imaging modalities, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, mammography, and pathological images.

It is our pleasure to invite you to contribute a manuscript on your valuable research progress to this Special Issue, entitled “Machine Learning-Based Medical Image Analysis”. Thank you very much.

Prof. Soo-Hyung Kim
Prof. Ilwoo Park
Prof. In Seop Na
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning and deep learning
  • medical image analysis
  • X-ray, CT, MRI, PET, ultrasound, mammography, and pathological images
  • lesion segmentation, detection, quantification and diagnosis
  • computer-aided medical imaging tool

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

24 pages, 5735 KiB  
Article
CI-UNet: Application of Segmentation of Medical Images of the Human Torso
by Junkang Qin, Xiao Wang, Dechang Mi, Qinmu Wu, Zhiqin He and Yu Tang
Appl. Sci. 2023, 13(12), 7293; https://doi.org/10.3390/app13127293 - 19 Jun 2023
Cited by 2 | Viewed by 1613
Abstract
The study of human torso medical image segmentation is significant for computer-aided diagnosis, disease tracking, and disease prevention and treatment. In this paper, two application tasks are designed for torso medical images: abdominal multi-organ segmentation and spine segmentation. To this end, this paper proposes a network model, CI-UNet, to improve the accuracy of edge segmentation. CI-UNet is a U-shaped network structure consisting of encoding and decoding networks. Firstly, it replaces UNet’s double-convolution backbone with a VGG16 network initialized via transfer learning, and it feeds image information from two adjacent layers of the VGG16 network into the decoding network via information aggregation blocks. Secondly, Polarized Self-Attention is added to the decoding network and the skip connections, which allows the network to focus on the most informative features of the image. Finally, the image information is decoded by several rounds of convolution and upsampling to obtain the segmentation results. CI-UNet was tested on the abdominal multi-organ segmentation task using the CHAOS (Combined CT-MR Healthy Abdominal Organ Segmentation) open challenge dataset and compared with the UNet, Attention UNet, PSPNet, and DeepLabv3+ prediction networks, as well as a dedicated network for MRI images. The experimental results showed that the mean intersection over union (mIoU) and mean pixel accuracy (mPA) of organ segmentation were 82.33% and 90.10%, respectively, higher than those of the comparison networks. Meanwhile, we applied CI-UNet to a spine dataset from the Guizhou branch of Beijing Jishuitan Hospital, where the mIoU and mPA were 87.97% and 93.48%, respectively; the results of both tasks were approved by physicians.
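
A minimal PyTorch sketch of the general idea — a VGG16 encoder pretrained on ImageNet feeding a U-Net-style decoder through skip connections — is given below. The layer splits, channel widths, and five-class output are illustrative assumptions; the authors' CI-UNet additionally uses information aggregation blocks and Polarized Self-Attention, which are omitted here.

```python
# Sketch of a VGG16-backboned U-Net-style network (illustrative; not the authors' CI-UNet).
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class VGGUNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        feats = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        # Split the VGG16 feature extractor at its pooling layers to expose
        # multi-scale encoder outputs for the skip connections.
        self.enc1 = feats[:4]     # 64 channels, full resolution
        self.enc2 = feats[4:9]    # 128 channels, 1/2 resolution
        self.enc3 = feats[9:16]   # 256 channels, 1/4 resolution
        self.enc4 = feats[16:23]  # 512 channels, 1/8 resolution
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec3 = nn.Conv2d(512 + 256, 256, 3, padding=1)
        self.dec2 = nn.Conv2d(256 + 128, 128, 3, padding=1)
        self.dec1 = nn.Conv2d(128 + 64, 64, 3, padding=1)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x); e2 = self.enc2(e1)
        e3 = self.enc3(e2); e4 = self.enc4(e3)
        # Decode with skip connections from the matching encoder scale.
        d3 = torch.relu(self.dec3(torch.cat([self.up(e4), e3], dim=1)))
        d2 = torch.relu(self.dec2(torch.cat([self.up(d3), e2], dim=1)))
        d1 = torch.relu(self.dec1(torch.cat([self.up(d2), e1], dim=1)))
        return self.head(d1)

logits = VGGUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 5, 256, 256) per-pixel class scores
```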

17 pages, 3862 KiB  
Article
End-to-End Convolutional Neural Network Framework for Breast Ultrasound Analysis Using Multiple Parametric Images Generated from Radiofrequency Signals
by Soohyun Kim, Juyoung Park, Joonhwan Yi and Hyungsuk Kim
Appl. Sci. 2022, 12(10), 4942; https://doi.org/10.3390/app12104942 - 13 May 2022
Cited by 6 | Viewed by 2247
Abstract
Breast ultrasound (BUS) is an effective clinical modality for diagnosing breast abnormalities in women. Deep-learning techniques based on convolutional neural networks (CNNs) have been widely used to analyze BUS images. However, the low quality of B-mode images owing to speckle noise, together with a lack of training datasets, makes BUS analysis challenging in clinical applications. In this study, we proposed an end-to-end CNN framework for BUS analysis using multiple parametric images generated from radiofrequency (RF) signals. The entropy and phase images, which represent microstructural and anatomical information, respectively, and the traditional B-mode images were used as parametric images in the time domain. In addition, the attenuation image, estimated from the frequency domain using RF signals, was used for spectral features. Because one set of RF signals from one patient produces multiple images as CNN inputs, the proposed framework overcomes the limitation of small datasets, acting as data augmentation in a broad sense, while providing complementary information to compensate for the low quality of the B-mode images. The experimental results showed that the proposed architecture improved the classification accuracy and recall by 5.5% and 11.6%, respectively, compared with the traditional approach using only B-mode images. The proposed framework can be extended to various other parametric images in both the time and frequency domains to further improve its performance.
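
A hedged sketch of the framework's input side: several co-registered parametric maps derived from one RF acquisition are stacked as CNN input channels. The backbone below is a toy stand-in, not the authors' network, and the channel ordering is an assumption.

```python
# Sketch: stacking parametric images (B-mode, entropy, phase, attenuation) as
# CNN input channels (illustrative; not the authors' exact architecture).
import torch
import torch.nn as nn

class MultiParametricCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to one vector per image
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One RF acquisition yields several aligned parametric maps; stack them:
b_mode, entropy, phase, atten = (torch.randn(1, 1, 128, 128) for _ in range(4))
x = torch.cat([b_mode, entropy, phase, atten], dim=1)  # (1, 4, 128, 128)
scores = MultiParametricCNN()(x)  # e.g., benign vs. malignant logits
```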

23 pages, 4616 KiB  
Article
SA-GAN: Stain Acclimation Generative Adversarial Network for Histopathology Image Analysis
by Tasleem Kausar, Adeeba Kausar, Muhammad Adnan Ashraf, Muhammad Farhan Siddique, Mingjiang Wang, Muhammad Sajid, Muhammad Zeeshan Siddique, Anwar Ul Haq and Imran Riaz
Appl. Sci. 2022, 12(1), 288; https://doi.org/10.3390/app12010288 - 29 Dec 2021
Cited by 21 | Viewed by 3419
Abstract
Histopathological image analysis is the examination of tissue under a light microscope for cancerous disease diagnosis. Computer-assisted diagnosis (CAD) systems work well in diagnosing cancer from histopathology images. However, stain variability in histopathology images is inevitable due to differences in staining processes, operator ability, and scanner specifications. These stain variations affect the accuracy of CAD systems. Various stain normalization techniques have been developed to cope with this variability and standardize the appearance of images. However, these methods rely on a single reference image rather than incorporating the color distribution of the entire dataset. In this paper, we design a novel machine learning-based model that takes advantage of the whole-dataset distribution as well as the color statistics of a single target image. The proposed deep model, called the stain acclimation generative adversarial network (SA-GAN), consists of one generator and two discriminators. The generator maps input images from the source domain to the target domain. The first discriminator forces the generated images to maintain the color patterns of the target domain, while the second forces them to preserve the structural content of the source domain. The proposed model is trained using a color attribute metric extracted from a selected template image. Therefore, the designed model learns not only dataset-specific staining properties but also image-specific textural content. Results on four different histopathology datasets show the efficacy of SA-GAN in acclimating stain content and enhancing the quality of normalization, obtaining the highest values of the performance metrics. Additionally, the proposed method was also evaluated on a multiclass cancer type classification task, showing a 6.9% improvement in accuracy on the ICIAR 2018 hidden test data.
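
The one-generator/two-discriminator objective can be sketched as follows in PyTorch. The tiny stand-in networks and the unit loss weights are assumptions for illustration; the actual SA-GAN is deeper and additionally uses the color attribute metric described above, which is omitted here.

```python
# Sketch of the generator objective against two discriminators (illustrative).
import torch
import torch.nn as nn

# Tiny stand-in networks; the real SA-GAN generator/discriminators are deeper.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
D_color = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
D_struct = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
adv = nn.BCEWithLogitsLoss()

src = torch.randn(4, 3, 64, 64)   # source-stain patches
real = torch.ones(4, 1)           # "real" label the generator tries to earn

fake = G(src)                                  # re-stained output
loss_color = adv(D_color(fake), real)          # discriminator 1: target-domain color
loss_struct = adv(D_struct(torch.cat([src, fake], dim=1)), real)
# discriminator 2 sees (source, output) pairs: source structure must survive
loss_G = loss_color + loss_struct              # assumed equal weighting
loss_G.backward()
```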

16 pages, 5405 KiB  
Article
POCS-Augmented CycleGAN for MR Image Reconstruction
by Yiran Li, Hanlu Yang, Danfeng Xie, David Dreizin, Fuqing Zhou and Ze Wang
Appl. Sci. 2022, 12(1), 114; https://doi.org/10.3390/app12010114 - 23 Dec 2021
Cited by 1 | Viewed by 3093
Abstract
Recent years have seen increased research interest in replacing the computationally intensive magnetic resonance (MR) image reconstruction process with deep neural networks. We claim in this paper that traditional image reconstruction methods and deep learning (DL) are mutually complementary and can be combined to achieve better image reconstruction quality. To test this hypothesis, a hybrid DL image reconstruction method was proposed by combining a state-of-the-art deep learning network, namely a generative adversarial network with cycle loss (CycleGAN), with a traditional data reconstruction algorithm, projection onto convex sets (POCS). The output of the CycleGAN's first training iteration was updated by POCS and used as extra training data for the second training iteration of the CycleGAN. The method was validated using sub-sampled magnetic resonance imaging data. Compared with other state-of-the-art DL-based methods (e.g., U-Net, GAN, and RefineGAN) and a traditional method (compressed sensing), our method showed the best reconstruction results.
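
A POCS-style data-consistency step, the kind of traditional constraint used here to refine a network output, can be sketched in a few lines of NumPy: the reconstruction is transformed to k-space, the measured samples are reinserted verbatim, and the result is transformed back. Variable names and the random sampling mask are illustrative assumptions; a full POCS loop iterates over additional constraint sets.

```python
# Sketch of a data-consistency projection for sub-sampled MRI (illustrative).
import numpy as np

def data_consistency(recon_img, measured_kspace, mask):
    """Project a reconstructed image onto the set of images consistent
    with the measured k-space samples (mask == True where sampled)."""
    k = np.fft.fft2(recon_img)
    k = np.where(mask, measured_kspace, k)   # keep measured samples verbatim
    return np.abs(np.fft.ifft2(k))

# Toy example: 25% random sampling of a synthetic image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
mask = rng.random((128, 128)) < 0.25
measured = np.fft.fft2(img) * mask
refined = data_consistency(rng.random((128, 128)), measured, mask)
```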

17 pages, 9340 KiB  
Article
Classification of Breast Cancer in Mammograms with Deep Learning Adding a Fifth Class
by Salvador Castro-Tapia, Celina Lizeth Castañeda-Miranda, Carlos Alberto Olvera-Olvera, Héctor A. Guerrero-Osuna, José Manuel Ortiz-Rodriguez, Ma. del Rosario Martínez-Blanco, Germán Díaz-Florez, Jorge Domingo Mendiola-Santibañez and Luis Octavio Solís-Sánchez
Appl. Sci. 2021, 11(23), 11398; https://doi.org/10.3390/app112311398 - 2 Dec 2021
Cited by 11 | Viewed by 3612
Abstract
Breast cancer is one of the most prevalent and concerning diseases worldwide, and early detection and diagnosis, achieved through imaging techniques such as mammography, play the leading role against it. Radiologists tend to have a high false positive rate for mammography diagnoses, with an accuracy of around 82%. Currently, deep learning (DL) techniques have shown promising results in the early detection of breast cancer through computer-aided diagnosis (CAD) systems implementing convolutional neural networks (CNNs). This work focuses on applying, evaluating, and comparing the architectures AlexNet, GoogLeNet, ResNet50, and VGG19 to classify breast lesions after transfer learning with fine-tuning, training the CNNs with regions extracted from the MIAS and INbreast databases. We analyzed 14 classifiers. As in several previous studies, four classes correspond to benign and malignant microcalcifications and masses; as our main contribution, we added a fifth class for normal mammary parenchyma tissue, which increases correct detection. The architectures were evaluated with a statistical analysis based on the receiver operating characteristic (ROC) curve, the area under the curve (AUC), F1 score, accuracy, precision, sensitivity, and specificity. The best results were generated by GoogLeNet trained with five classes on a balanced database: an AUC of 99.29%, an F1 score of 91.92%, an accuracy of 91.92%, a precision of 92.15%, a sensitivity of 91.70%, and a specificity of 97.66%. We conclude that GoogLeNet is optimal as a classifier in a CAD system to deal with breast cancer.
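
As a hedged illustration of the transfer-learning setup (not the authors' exact configuration), the sketch below loads an ImageNet-pretrained GoogLeNet from torchvision and replaces its final layer for five classes; the optimizer settings are assumptions.

```python
# Sketch: transfer learning with fine-tuning for a 5-class mammogram classifier.
import torch
import torch.nn as nn
from torchvision.models import googlenet, GoogLeNet_Weights

model = googlenet(weights=GoogLeNet_Weights.IMAGENET1K_V1)
# Replace the 1000-class ImageNet head: benign/malignant microcalcifications,
# benign/malignant masses, and normal parenchyma tissue.
model.fc = nn.Linear(model.fc.in_features, 5)

# Fine-tune the whole network at a small learning rate (assumed settings):
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```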

20 pages, 6954 KiB  
Article
Multiclass Skin Cancer Classification Using Ensemble of Fine-Tuned Deep Learning Models
by Nabeela Kausar, Abdul Hameed, Mohsin Sattar, Ramiza Ashraf, Ali Shariq Imran, Muhammad Zain ul Abidin and Ammara Ali
Appl. Sci. 2021, 11(22), 10593; https://doi.org/10.3390/app112210593 - 11 Nov 2021
Cited by 37 | Viewed by 5051
Abstract
Skin cancer is a widespread disease associated with eight diagnostic classes. The diagnosis of multiple types of skin cancer is a challenging task for dermatologists due to the phenotypic similarity of the classes. The average accuracy of multiclass skin cancer diagnosis is 62% to 80%. Therefore, classifying skin cancer using machine learning can be beneficial in the diagnosis and treatment of patients. Several researchers have developed binary skin cancer classification models but have not extended them to multiclass classification with better performance ratios. We developed deep learning-based ensemble classification models for multiclass skin cancer classification. Experimental results show that the individual deep learners already perform well for skin cancer classification, but building an ensemble is still a meaningful approach since it further enhances the classification accuracy. The accuracies of the individual learners ResNet, InceptionV3, DenseNet, InceptionResNetV2, and VGG-19 are 72%, 91%, 91.4%, 91.7%, and 91.8%, respectively. The accuracies of the proposed majority voting and weighted majority voting ensemble models are 98% and 98.6%, respectively. The accuracy of the proposed ensemble models is higher than that of the individual deep learners and than dermatologists' diagnostic accuracy. The proposed ensemble models are compared with recently developed skin cancer classification approaches; the results show that they outperform recent multiclass skin cancer classification models.
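
A small NumPy sketch of (weighted) majority voting over hard per-model predictions is shown below; using each model's validation accuracy as its weight is an assumption for illustration.

```python
# Sketch of plain and weighted majority voting over per-model predictions.
import numpy as np

def weighted_majority_vote(pred_labels, weights, num_classes):
    """pred_labels: (n_models, n_samples) hard class predictions."""
    votes = np.zeros((num_classes, pred_labels.shape[1]))
    for preds, w in zip(pred_labels, weights):
        votes[preds, np.arange(preds.size)] += w  # each model casts weight w
    return votes.argmax(axis=0)

preds = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [2, 1, 2]])                          # 3 models, 3 samples
plain = weighted_majority_vote(preds, np.ones(3), 8)   # unweighted majority
weighted = weighted_majority_vote(preds, [0.72, 0.917, 0.918], 8)
# Weights here are illustrative, e.g., each model's validation accuracy.
```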

17 pages, 4450 KiB  
Article
Multi-Class Classification of Lung Diseases Using CNN Models
by Min Hong, Beanbonyka Rim, Hongchang Lee, Hyeonung Jang, Joonho Oh and Seongjun Choi
Appl. Sci. 2021, 11(19), 9289; https://doi.org/10.3390/app11199289 - 6 Oct 2021
Cited by 32 | Viewed by 5462
Abstract
In this study, we propose a multi-class classification method that learns lung disease images with a convolutional neural network (CNN). As training data, we used the U.S. National Institutes of Health (NIH) dataset, divided into Normal, Pneumonia, and Pneumothorax classes, and the Soonchunhyang University Cheonan Hospital dataset, which additionally includes Tuberculosis. To improve performance, preprocessing was performed with a center crop while maintaining a 1:1 aspect ratio. EfficientNet-B7 with Noisy Student pretraining was fine-tuned starting from weights learned on ImageNet, and the features of each layer were maximally utilized through a multi-GAP (global average pooling) structure. In the experiments, benchmarks measured on the NIH dataset showed the highest performance among the tested models, with an accuracy of 85.32%, and the four-class predictions measured on the Soonchunhyang University Cheonan Hospital data had an average accuracy of 96.1%, an average sensitivity of 92.2%, an average specificity of 97.4%, and an average inference time of 0.2 s.
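
The multi-GAP idea — applying global average pooling to several intermediate feature maps and concatenating the pooled vectors before the classifier — can be sketched as follows. The backbone here is a toy CNN standing in for EfficientNet-B7, and the stage choices are assumptions.

```python
# Sketch of a multi-GAP classification head (illustrative backbone).
import torch
import torch.nn as nn

class MultiGAPNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.gap = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.fc = nn.Linear(32 + 64 + 128, num_classes)

    def forward(self, x):
        f1 = self.stage1(x); f2 = self.stage2(f1); f3 = self.stage3(f2)
        # Pool every stage, not just the last one, and concatenate.
        pooled = [self.gap(f).flatten(1) for f in (f1, f2, f3)]
        return self.fc(torch.cat(pooled, dim=1))

logits = MultiGAPNet()(torch.randn(1, 3, 224, 224))  # 4-class chest X-ray logits
```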

15 pages, 3269 KiB  
Article
Image Quality Assessment to Emulate Experts’ Perception in Lumbar MRI Using Machine Learning
by Steren Chabert, Juan Sebastian Castro, Leonardo Muñoz, Pablo Cox, Rodrigo Riveros, Juan Vielma, Gamaliel Huerta, Marvin Querales, Carolina Saavedra, Alejandro Veloz and Rodrigo Salas
Appl. Sci. 2021, 11(14), 6616; https://doi.org/10.3390/app11146616 - 19 Jul 2021
Cited by 11 | Viewed by 3236
Abstract
Medical image quality is crucial to obtaining reliable diagnostics. Most quality controls rely on routine tests using phantoms, which neither closely reflect the reality of images obtained from patients nor directly reflect the quality perceived by radiologists. The purpose of this work is to develop a method that classifies the image quality perceived by radiologists in MR images. The focus was set on lumbar images, as they are widely used and present various challenges. Three neuroradiologists evaluated the image quality of a dataset that included T1-weighted images in axial and sagittal orientations and sagittal T2-weighted images. In parallel, we introduced a computational assessment using a wide range of features extracted from the images, which were then fed into a classifier system. A total of 95 exams were used, from our local hospital and a public database, and part of the images was manipulated to broaden the quality distribution of the dataset. A good recall of 82% and an area under the curve (AUC) of 77% were obtained on average under testing conditions using a support vector machine. Even though the actual implementation still relies on user interaction to extract features, the results are promising with respect to a potential implementation for monitoring image quality online during the acquisition process.
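
A hedged scikit-learn sketch of the pipeline — handcrafted image features fed into a support vector machine — is given below on synthetic data; the actual feature set and the labels come from the radiologists' evaluations described above.

```python
# Sketch: classifying expert-perceived image quality from handcrafted features
# with an SVM (illustrative; synthetic stand-in data, assumed feature count).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((95, 12))      # e.g., SNR, contrast, sharpness, texture statistics
y = rng.integers(0, 2, 95)    # radiologists' quality label (poor / good)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```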
