
The Application of Machine Learning in Medical Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 13598

Special Issue Editors


Guest Editor: Dr. Sami Bourouis
College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
Interests: machine learning; image processing; pattern recognition; computer vision; biomedical applications

Guest Editor: Dr. Hammam Alshazly
Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
Interests: deep learning; computer vision; biometrics; machine learning

Guest Editor: Dr. Ali Javed
Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
Interests: computer vision; medical image processing; machine learning; deep learning; multimedia forensics; artificial intelligence

Special Issue Information

Dear Colleagues,

Machine learning has become essential to the healthcare sector from several perspectives, including clinical research, decision support, and public health. It has demonstrated strong performance in organ segmentation, disease prediction, and medical image classification, particularly when pattern-recognition methods such as texture analysis are taken into account. Medical image processing techniques have changed significantly in response to the introduction of various machine learning approaches. Automated computer-aided tools, which offer precise descriptions of disease characteristics, may enable radiologists to make more accurate diagnoses of a variety of diseases.

This Special Issue focuses on the application of machine learning in medical image processing, including 2D and 3D image classification, segmentation, disease prediction, and related tasks.

We invite you to submit original research articles, review articles, and short technical communications on the above topics and areas. Research areas may include (but are not limited to) the following:

  • Machine learning;
  • Deep learning;
  • Artificial intelligence;
  • Data mining;
  • Medical signal and data processing, including preprocessing, classification, recognition, reconstruction, registration, etc.;
  • Medical imaging and pattern recognition;
  • Quantitative imaging analysis;
  • Computer-aided diagnosis.

We look forward to receiving your contributions.

Dr. Sami Bourouis
Dr. Hammam Alshazly
Dr. Ali Javed
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • medical imaging
  • classification
  • reconstruction
  • segmentation
  • denoising
  • computer-aided diagnosis

Published Papers (5 papers)


Research


13 pages, 1120 KiB  
Article
Mask Guidance Pyramid Network for Overlapping Cervical Cell Edge Detection
by Wei Zhang, Huijie Fan, Xuanhua Xie, Qiang Wang and Yandong Tang
Appl. Sci. 2023, 13(13), 7526; https://doi.org/10.3390/app13137526 - 26 Jun 2023
Cited by 2 | Viewed by 1041
Abstract
An important indicator in cervical cancer diagnosis is the proportion of diseased and cancerous cells, so cells must be segmented and their status judged. Existing methods struggle with the segmentation of overlapping cells. To address this problem, after reviewing a large body of literature we hypothesize that image segmentation and edge detection share common high-level features. To verify this hypothesis, in this paper we exploit the complementarity between overlapping cervical cell edge information and cell object information to obtain more accurate cell edge detection results. Specifically, we present a joint multi-task learning framework for overlapping cell edge detection based on a mask guidance pyramid network. The main component of the framework is the Mask Guidance Module (MGM), which integrates the two tasks and stores shared latent semantics so that the tasks can interact. For semantic edge detection, we propose a novel Refinement Aggregated Module (RAM) to fuse and promote semantic edges. Finally, to improve edge pixel accuracy, an edge consistency constraint loss function is introduced into model training. Extensive experiments show that our method outperforms other edge detection approaches.
(This article belongs to the Special Issue The Application of Machine Learning in Medical Image Processing)
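The complementarity this paper builds on rests on a simple relationship: an edge map can be derived directly from a segmentation mask. A minimal NumPy sketch of that derivation, using a toy morphological gradient rather than the authors' learned network:

```python
import numpy as np

def mask_to_edges(mask: np.ndarray) -> np.ndarray:
    """Derive a binary edge map from a binary segmentation mask.

    A pixel is an edge pixel if it belongs to the mask but at least
    one of its 4-neighbours does not (a morphological gradient).
    """
    padded = np.pad(mask.astype(bool), 1, constant_values=False)
    core = padded[1:-1, 1:-1]
    # True where all four neighbours are also inside the mask
    neighbours_all_inside = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    return core & ~neighbours_all_inside

# A 5x5 square mask: its edge is the one-pixel-wide boundary ring.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
edges = mask_to_edges(mask)
```

A learned framework such as the one above goes further by letting the two representations supervise each other, but this fixed mapping illustrates why the tasks share high-level features.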

17 pages, 2291 KiB  
Article
Transfer Learning for Diabetic Retinopathy Detection: A Study of Dataset Combination and Model Performance
by A. M. Mutawa, Shahad Alnajdi and Sai Sruthi
Appl. Sci. 2023, 13(9), 5685; https://doi.org/10.3390/app13095685 - 5 May 2023
Cited by 5 | Viewed by 3973
Abstract
Diabetic retinopathy (DR), a serious and potentially life-threatening complication of diabetes, can result in vision loss. Because it has no symptoms in its early stages, the illness is regarded as one of the “silent diseases” that go unnoticed. One significant difficulty in this field of study is that different datasets have varied retinal features, which affects the models created for this purpose. The method in this study can efficiently learn and classify DR from three diverse datasets. Four Convolutional Neural Network (CNN) models based on transfer learning are employed: Visual Geometry Group (VGG) 16, Inception version 3 (InceptionV3), Dense Network (DenseNet) 121, and Mobile Network version 2 (MobileNetV2), with evaluation parameters including loss, accuracy, recall, precision, and specificity. The models are also tested on the images of the three datasets combined. The DenseNet121 model performs best, with 98.97% accuracy on the combined image set. The study concludes that combining multiple datasets improves performance compared to individual datasets. The resulting model can be used globally to accommodate more of the tests that clinics perform for diabetic patients, helping health workers refer patients to ophthalmologists before DR becomes serious.
(This article belongs to the Special Issue The Application of Machine Learning in Medical Image Processing)
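The evaluation parameters this abstract lists (accuracy, recall, precision, specificity) all derive from the binary confusion matrix. A small self-contained sketch, independent of any particular CNN or dataset:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a binary classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "recall":      tp / (tp + fn) if tp + fn else 0.0,  # sensitivity
        "precision":   tp / (tp + fp) if tp + fp else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Toy example: 6 eye-image labels (1 = DR present) vs. predictions.
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Reporting specificity alongside recall matters in screening settings: recall bounds the missed-DR rate, while specificity bounds unnecessary ophthalmologist referrals.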

12 pages, 7945 KiB  
Article
MIU-Net: MIX-Attention and Inception U-Net for Histopathology Image Nuclei Segmentation
by Jiangqi Li and Xiang Li
Appl. Sci. 2023, 13(8), 4842; https://doi.org/10.3390/app13084842 - 12 Apr 2023
Cited by 1 | Viewed by 1667
Abstract
In the medical field, analysis of cell nuclei in hematoxylin and eosin (H&E)-stained histopathology images is an important measure for cancer diagnosis. The most valuable aspect of nuclei analysis is segmenting the different nuclear morphologies of different organs and subsequently diagnosing the type and severity of disease based on the pathology. In recent years, deep learning techniques have been widely used in digital histopathology analysis. Automated nuclear segmentation enables rapid and efficient segmentation of the tens of thousands of complex and variable nuclei in histopathology images. However, nuclei segmentation is challenged by occlusion and overlapping of cell nuclei and by the complex background of the tissue fraction. To address this challenge, we present MIU-Net, an efficient deep learning network for nuclei segmentation in histopathology images. The proposed structure includes two blocks: a modified inception module and an attention module. The modified inception module balances computation and network performance in the deeper layers of the network, combining convolutional layers with kernels of different sizes to learn effective features quickly and efficiently and complete nuclei segmentation. The attention module extracts small and fine irregular boundary features from the images, which better segments cancer cells that appear disorganized and fragmented. We test our method on the public Kumar dataset and achieve the highest AUC score of 0.92. The experimental results show that the proposed method achieves better performance than other state-of-the-art methods.
(This article belongs to the Special Issue The Application of Machine Learning in Medical Image Processing)
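The attention module described above re-weights encoder features so that background activations are suppressed before decoding. A toy NumPy sketch of that gating idea, with scalar weights standing in for the learned convolutions (an illustrative simplification, not the MIU-Net implementation):

```python
import numpy as np

def attention_gate(x, g, w_x, w_g):
    """Additive attention gating over 2-D feature maps.

    x : skip-connection features (H, W)
    g : gating features from a deeper layer (H, W)
    Returns x scaled by attention coefficients in (0, 1), so
    regions the gate scores low are suppressed.
    """
    score = np.tanh(w_x * x + w_g * g)      # joint feature score
    alpha = 1.0 / (1.0 + np.exp(-score))    # sigmoid -> coefficients
    return x * alpha                        # re-weighted skip features

rng = np.random.default_rng(0)
x = rng.random((4, 4))
gated = attention_gate(x, rng.random((4, 4)), 0.5, 0.5)
```

In a full U-Net-style network, `w_x` and `w_g` would be 1x1 convolutions learned jointly with the rest of the model; the mechanism of multiplying features by sigmoid coefficients is the same.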

15 pages, 3237 KiB  
Article
An Explainable Brain Tumor Detection Framework for MRI Analysis
by Fei Yan, Yunqing Chen, Yiwen Xia, Zhiliang Wang and Ruoxiu Xiao
Appl. Sci. 2023, 13(6), 3438; https://doi.org/10.3390/app13063438 - 8 Mar 2023
Cited by 6 | Viewed by 3179
Abstract
Explainability in medical image analysis plays an important role in the accurate diagnosis and treatment of tumors, helping medical professionals better understand the analysis results produced by deep models. This paper proposes an explainable brain tumor detection framework that performs segmentation, classification, and explanation. A re-parameterization method is applied to our classification network, and the quality of the explainable heatmaps is improved by modifying the network architecture. The classification model also offers post hoc explainability. We used the BraTS-2018 dataset for training and validation. Experimental results show that our simplified framework has excellent performance and high computational speed. Comparing the results of the segmentation and explainable neural networks helps researchers better understand the black-box process, increases trust in deep model output, and supports more accurate judgments in disease identification and diagnosis.
(This article belongs to the Special Issue The Application of Machine Learning in Medical Image Processing)
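Post hoc heatmaps of the kind this framework produces are commonly computed in the Grad-CAM style: each convolutional feature map is weighted by the global average of its class-score gradient, and the weighted sum is rectified. A hedged NumPy sketch of that computation (illustrative only; the paper's exact method may differ):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM-style class activation heatmap.

    activations : (K, H, W) feature maps of a conv layer
    gradients   : (K, H, W) gradients of the class score w.r.t. them
    """
    weights = gradients.mean(axis=(1, 2))   # global-average-pool grads, (K,)
    cam = (weights[:, None, None] * activations).sum(axis=0)
    cam = np.maximum(cam, 0.0)              # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                    # normalise to [0, 1] for display
    return cam

# Toy input: channel 0 supports the class (positive grads),
# channel 1 opposes it (negative grads).
acts = np.ones((2, 3, 3))
grads = np.stack([np.full((3, 3), 2.0), np.full((3, 3), -1.0)])
cam = grad_cam_heatmap(acts, grads)
```

The resulting map is typically upsampled to the input resolution and overlaid on the MRI slice, which is what lets clinicians check that the model attends to the tumor region rather than artifacts.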

Review


21 pages, 1725 KiB  
Review
A Survey on Diabetic Retinopathy Lesion Detection and Segmentation
by Anila Sebastian, Omar Elharrouss, Somaya Al-Maadeed and Noor Almaadeed
Appl. Sci. 2023, 13(8), 5111; https://doi.org/10.3390/app13085111 - 19 Apr 2023
Cited by 8 | Viewed by 2397
Abstract
Diabetes is a global problem that affects people of all ages. Diabetic retinopathy (DR) is a major eye ailment resulting from diabetes that can lead to loss of eyesight if not detected and treated in time. The current process of detecting DR and its progression involves manual examination by experts, which is time-consuming. Extracting the retinal vasculature and segmenting the optic disc (OD) and fovea play a significant part in detecting DR. Detecting DR lesions such as microaneurysms (MA), hemorrhages (HM), and exudates (EX) helps to establish the current stage of DR. Recently, with advances in artificial intelligence (AI), deep learning (DL), a branch of AI, has been widely used in DR-related studies. Our study surveys the latest literature on DR segmentation and lesion detection from fundus images using DL.
(This article belongs to the Special Issue The Application of Machine Learning in Medical Image Processing)
