Machine Learning for Medical Imaging Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Bioelectronics".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 14554

Special Issue Editors


Prof. Dr. Jae Sung Lee
Guest Editor
Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Republic of Korea
Interests: medical imaging; positron emission tomography (PET); PET/MRI

Prof. Dr. Quanzheng Li
Guest Editor
Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
Interests: machine learning; deep learning; image reconstruction

Special Issue Information

Dear Colleagues,

This Special Issue encourages authors to present their latest research achievements on new methods and applications of machine learning for medical image processing. Medical image processing involves the use and exploration of 2D or higher-dimensional image datasets of the human body, obtained from various medical imaging devices, to diagnose disease or guide medical interventions. Machine learning is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Deep neural networks are now the state-of-the-art machine learning models for medical image analysis and processing.

We look forward to the latest research results that suggest feasible solutions for various challenging tasks in medical image processing based on advanced machine learning technology. Typical biomedical image datasets of interest include, but are not limited to, those acquired from:

X-ray;

Computed tomography;

Magnetic resonance imaging;

Nuclear medicine;

Ultrasound;

Optical and confocal microscopy;

Video and range data images.

Prof. Dr. Jae Sung Lee
Prof. Dr. Quanzheng Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical image
  • Image processing
  • Machine learning
  • Deep learning
  • Classification
  • Detection
  • Segmentation
  • Registration
  • Image generation
  • Image enhancement

Published Papers (5 papers)


Research

Jump to: Review

23 pages, 1658 KiB  
Article
Res-CDD-Net: A Network with Multi-Scale Attention and Optimized Decoding Path for Skin Lesion Segmentation
by Zian Song, Wenjie Luo and Qingxuan Shi
Electronics 2022, 11(17), 2672; https://doi.org/10.3390/electronics11172672 - 26 Aug 2022
Cited by 2 | Viewed by 1482
Abstract
Melanoma is a lethal skin cancer. In its diagnosis, skin lesion segmentation plays a critical role. However, skin lesions exhibit a wide range of sizes, shapes, colors, and edges. This makes skin lesion segmentation a challenging task. In this paper, we propose an encoding–decoding network called Res-CDD-Net to address the aforementioned aspects related to skin lesion segmentation. First, we adopt ResNeXt50 pre-trained on the ImageNet dataset as the encoding path. This pre-trained ResNeXt50 can provide rich image features to the whole network to achieve higher segmentation accuracy. Second, a channel and spatial attention block (CSAB), which integrates both channel and spatial attention, and a multi-scale capture block (MSCB) are introduced between the encoding and decoding paths. The CSAB can highlight the lesion area and inhibit irrelevant objects. The MSCB can extract multi-scale information to learn lesion areas of different sizes. Third, we upgrade the decoding path. Every 3 × 3 square convolution kernel in the decoding path is replaced by a diverse branch block (DBB), which not only promotes the feature restoration capability, but also improves the performance and robustness of the network. We evaluate the proposed network on three public skin lesion datasets, namely ISIC-2017, ISIC-2016, and PH2. On the ISIC-2017 dataset, the Dice coefficient is 6.90% higher than that of U-Net, and the Jaccard index is 10.84% higher. The results show that Res-CDD-Net achieves outstanding performance, higher than that of most state-of-the-art networks. Last but not least, the training of the network is fast, and good results can be achieved in the early stages of training.
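The channel-and-spatial attention idea behind the CSAB can be illustrated with a minimal numpy sketch. This is not the paper's module (which uses learned convolutional weights inside a trained network); it only shows the data flow of gating a feature map first per channel and then per pixel, with all names and shapes chosen here for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Toy channel + spatial attention over a (C, H, W) feature map.

    Channel attention: squeeze each channel with global average pooling
    and gate it with a sigmoid. Spatial attention: average across
    channels and gate each pixel. The paper's CSAB learns these gates;
    this sketch uses parameter-free mappings to show the data flow only.
    """
    c, h, w = feat.shape
    # Channel attention: one gate per channel from its global mean.
    channel_gate = sigmoid(feat.mean(axis=(1, 2))).reshape(c, 1, 1)
    feat = feat * channel_gate
    # Spatial attention: one gate per pixel from the channel-wise mean.
    spatial_gate = sigmoid(feat.mean(axis=0, keepdims=True))
    return feat * spatial_gate

x = np.random.default_rng(0).normal(size=(8, 16, 16))
y = channel_spatial_attention(x)
assert y.shape == x.shape
```

Because both gates lie in (0, 1), the block can only attenuate features, which is how attention suppresses irrelevant regions while (relatively) highlighting the lesion area.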
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)

18 pages, 2385 KiB  
Article
Wound Detection by Simple Feedforward Neural Network
by Domagoj Marijanović, Emmanuel Karlo Nyarko and Damir Filko
Electronics 2022, 11(3), 329; https://doi.org/10.3390/electronics11030329 - 21 Jan 2022
Cited by 6 | Viewed by 2718
Abstract
Chronic wounds are a heavy burden on medical facilities, so any help in treating them is most welcome. Current research focuses on wound analysis, especially wound tissue classification, wound measurement, and wound healing prediction, to assist medical personnel in wound treatment, with the main goal of reducing wound healing time. The first phase of wound analysis is wound segmentation, where the task is to extract wounds from the healthy tissue and image background. In this work, a standard feedforward neural network was developed for the purpose of wound segmentation using data from the MICCAI 2021 Foot Ulcer Segmentation (FUSeg) Challenge. It proved to be a simple yet efficient method for extracting wounds from images. The proposed algorithm is part of a compact system that analyzes chronic wounds using a robotic manipulator, an RGB-D camera, and a 3D scanner. The feedforward neural network consists of only five fully connected layers, the first four with Rectified Linear Unit (ReLU) activation functions and the last with a sigmoid activation function. Three separate models were trained and tested using images provided as part of the challenge. The predicted images were post-processed and merged to improve the final segmentation performance. The accuracy metrics observed during model training and selection were precision, recall, and F1 score. The experimental results of the proposed network provided a recall value of 0.77, a precision value of 0.72, and an F1 score (Dice score) of 0.74.
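The described architecture is small enough to sketch directly: five fully connected layers, four with ReLU and a final sigmoid producing a wound probability. The layer widths and the per-pixel input below are placeholders (the abstract does not specify them); this is a forward pass with random weights, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Five fully connected layers: four ReLU, one sigmoid output, as in the
# paper. Layer widths are illustrative placeholders, not the paper's.
sizes = [3, 64, 64, 32, 16, 1]   # RGB value in, wound probability out
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Per-sample wound probability for a batch of RGB values in [0, 1]."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return sigmoid(x @ weights[-1] + biases[-1])

pixels = rng.uniform(0, 1, (5, 3))   # five RGB samples
probs = forward(pixels)
assert probs.shape == (5, 1)
```

The sigmoid output can then be thresholded (e.g., at 0.5) to yield the binary wound mask that the post-processing and merging steps operate on.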
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)

14 pages, 5465 KiB  
Article
Accurate Transmission-Less Attenuation Correction Method for Amyloid-β Brain PET Using Deep Neural Network
by Bo-Hye Choi, Donghwi Hwang, Seung-Kwan Kang, Kyeong-Yun Kim, Hongyoon Choi, Seongho Seo and Jae-Sung Lee
Electronics 2021, 10(15), 1836; https://doi.org/10.3390/electronics10151836 - 30 Jul 2021
Cited by 8 | Viewed by 1965
Abstract
The lack of physically measured attenuation maps (μ-maps) for attenuation and scatter correction is an important technical challenge in brain-dedicated stand-alone positron emission tomography (PET) scanners. The accuracy of the calculated attenuation correction is limited by the nonuniformity of tissue composition due to pathologic conditions and the complex structure of facial bones. The aim of this study is to develop an accurate transmission-less attenuation correction method for amyloid-β (Aβ) brain PET studies. We investigated the validity of a deep convolutional neural network trained to produce a CT-derived μ-map (μ-CT) from simultaneously reconstructed activity and attenuation maps using the MLAA (maximum likelihood reconstruction of activity and attenuation) algorithm for Aβ brain PET. The performance of three different U-net structures (2D, 2.5D, and 3D) was compared. The U-net models generated less noisy and more uniform μ-maps than MLAA μ-maps. Among the three U-net models, the patch-based 3D U-net reduced noise and cross-talk artifacts most effectively. The Dice similarity coefficients between the μ-map generated using the 3D U-net and μ-CT in bone and air segments were 0.83 and 0.67, respectively. All three U-net models showed better voxel-wise correlation of the μ-maps compared to MLAA, with the patch-based 3D U-net performing best. While the MLAA uptake values yielded high percentage errors of 20% or more, the 3D U-net yielded the lowest percentage errors, within 5%. The proposed deep learning approach, which requires no transmission data, anatomic image, or atlas/template for PET attenuation correction, remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps from Aβ brain PET.
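The Dice similarity coefficient used above to compare the generated μ-maps with μ-CT in bone and air segments is a standard overlap metric for binary masks. A minimal implementation (the variable names and toy masks are ours, not the paper's):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1], where 1 is perfect overlap."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2x3 masks, e.g. a predicted vs. reference bone segment.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, |pred| + |truth| = 3 + 3 = 6 → Dice = 4/6
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```

In the paper's evaluation the masks would be the bone (or air) voxels thresholded from the 3D U-net μ-map and from μ-CT, giving the reported 0.83 and 0.67.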
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)

8 pages, 2089 KiB  
Article
Noise2Noise Improved by Trainable Wavelet Coefficients for PET Denoising
by Seung-Kwan Kang, Si-Young Yie and Jae-Sung Lee
Electronics 2021, 10(13), 1529; https://doi.org/10.3390/electronics10131529 - 24 Jun 2021
Cited by 10 | Viewed by 3005
Abstract
The significant statistical noise and limited spatial resolution of positron emission tomography (PET) data in sinogram space result in the degradation of the quality and accuracy of reconstructed images. Although high-dose radiotracers and long acquisition times improve the PET image quality, they increase the patients' radiation exposure, and patients are more likely to move during the PET scan. Recently, various data-driven techniques based on supervised deep neural network learning have made remarkable progress in reducing noise in images. However, these conventional techniques require clean target images, which are of limited availability for PET denoising. Therefore, in this study, we utilized the Noise2Noise framework, which requires only noisy image pairs for network training, to reduce the noise in PET images. A trainable wavelet transform was proposed to improve the performance of the network. The proposed network was fed wavelet-decomposed images consisting of low- and high-pass components, and the inverse wavelet transforms of the network output produced denoised images. The proposed Noise2Noise filter with wavelet transforms outperforms the original Noise2Noise method in the suppression of artefacts and the preservation of abnormal uptakes. The quantitative analysis of the simulated PET uptake confirms the improved performance of the proposed method compared with the original Noise2Noise technique. In the clinical data, 10 s images filtered with Noise2Noise are virtually equivalent to 300 s images filtered with a 6 mm Gaussian filter. The incorporation of wavelet transforms in Noise2Noise network training improves the image contrast. In conclusion, the performance of Noise2Noise filtering for PET images was improved by incorporating the trainable wavelet transform in the self-supervised deep learning framework.
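The decompose–denoise–reconstruct pipeline can be sketched with a fixed 1D Haar wavelet. In the paper the wavelet coefficients are trainable and the denoiser is a Noise2Noise-trained network; here both are replaced by fixed stand-ins (a Haar filter and soft-thresholding) purely to illustrate the structure.

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def haar_decompose(x):
    """One level of the 1D Haar wavelet transform (even-length signal)."""
    lo = (x[0::2] + x[1::2]) / SQ2   # low-pass (approximation) band
    hi = (x[0::2] - x[1::2]) / SQ2   # high-pass (detail) band
    return lo, hi

def haar_reconstruct(lo, hi):
    """Inverse of haar_decompose (perfect reconstruction)."""
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / SQ2
    x[1::2] = (lo - hi) / SQ2
    return x

def denoise(x, threshold=0.5):
    """Toy stand-in for the paper's pipeline: decompose, suppress small
    detail coefficients, reconstruct. The paper learns both the wavelet
    and the denoiser; here both are fixed for illustration."""
    lo, hi = haar_decompose(x)
    hi = np.sign(hi) * np.maximum(np.abs(hi) - threshold, 0.0)
    return haar_reconstruct(lo, hi)

signal = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0])
out = denoise(signal)
assert out.shape == signal.shape
# Sanity check: the transform pair is invertible when nothing is thresholded.
assert np.allclose(haar_reconstruct(*haar_decompose(signal)), signal)
```

Splitting the signal into low- and high-pass bands lets the denoiser act mostly on the noise-dominated detail band while the approximation band preserves contrast, which is the motivation for feeding wavelet-decomposed images to the network.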
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)

Review

Jump to: Research

30 pages, 588 KiB  
Review
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications
by Yutong Xie and Quanzheng Li
Electronics 2022, 11(4), 586; https://doi.org/10.3390/electronics11040586 - 15 Feb 2022
Cited by 13 | Viewed by 4140
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works using deep learning methods to solve the CS problem for image or medical imaging reconstruction, including computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, toward the image prior and data consistency respectively, and any reconstruction algorithm can be decomposed into these two parts. Though deep learning methods can be divided into several categories, they all satisfy the framework. We build the relationships among the different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how to depict the image prior. Based on the framework, we analyze the current deep learning methods and point out some important directions for future research.
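The two-projection view maps directly onto classical iterative shrinkage-thresholding (ISTA): each iteration alternates a data-consistency gradient step with a prior projection (here, soft-thresholding toward sparse signals). This is a generic textbook sketch, not the paper's framework itself; deep unrolled methods replace the hand-crafted prior projection with a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    """Prior projection: pull the estimate toward sparse signals."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=200):
    """Alternate a data-consistency step with a prior projection,
    i.e. minimize 0.5*||y - Ax||^2 + lam*||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)     # data-consistency projection
        x = soft_threshold(x, lam * step)    # image-prior projection
    return x

# Toy CS problem: recover a 3-sparse signal from 32 random measurements.
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
assert set(np.argsort(np.abs(x_hat))[-3:]) == {5, 20, 40}
```

In the unified framework, swapping `soft_threshold` for a CNN (and possibly learning the step sizes) recovers the unrolled deep reconstruction methods the review surveys, which is why depicting the image prior is identified as the key question.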
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)
