
Advances of Deep Learning in Medical Image Interpretation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (20 March 2023) | Viewed by 16324

Special Issue Editors


Dr. Zongwei Zhou
Guest Editor
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21231, USA
Interests: computer-aided detection and diagnosis; computer vision; medical image analysis; abdominal imaging; cancer detection; self-supervised learning

Prof. Dr. Tianming Liu
Guest Editor
Department of Computer Science, University of Georgia, Athens, GA 30602, USA
Interests: biomedical image analysis; computational neuroscience; biomedical informatics

Special Issue Information

Dear Colleagues,

Deep learning has shown revolutionary progress in various aspects of medical image interpretation, propelling computer-aided diagnosis forward at a rapid pace. Deep learning excels at identifying and localizing intricate patterns in images and at providing quantifiable assessments through image analysis. There is no doubt that the impact of deep learning on medical imaging will be tremendous. In the future, many medical images will reach physicians along with an interpretation provided by deep learning.

Medical images possess unique characteristics compared to photographic images, which provide both opportunities and challenges for applying deep learning to disease diagnosis and prognosis. Medical images contain quantitative imaging characteristics (e.g., the intensity scale and physical size of pixels) that can be used as valuable information to enhance deep learning performance. Medical images also present qualitative imaging characteristics (e.g., consistent and predictable anatomical structures with dimensional details) that can provide an excellent opportunity for algorithm development. Meanwhile, several characteristics unique to medical images create new challenges (e.g., isolated, discrepant data and partial, noisy labels) that must be addressed through additional investigation.
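
To make the first of these points concrete, the sketch below shows one common way the quantitative characteristics of medical images (intensity scale and physical voxel size) enter a deep learning pipeline: a CT volume is clipped to a Hounsfield-unit window, rescaled, and resampled to isotropic physical spacing before being fed to a network. This is an illustrative recipe only; the window bounds and target spacing are arbitrary example values, not ones prescribed here.

```python
# Illustrative CT preprocessing that uses quantitative image characteristics:
# the Hounsfield intensity scale and the physical size of voxels.
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu: np.ndarray,
                  spacing_mm: tuple,
                  window=(-1000.0, 400.0),          # example HU window
                  target_spacing_mm=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Clip to a Hounsfield window, rescale to [0, 1], and resample to
    isotropic physical spacing."""
    lo, hi = window
    vol = np.clip(volume_hu.astype(np.float32), lo, hi)
    vol = (vol - lo) / (hi - lo)                     # intensity scale -> [0, 1]
    factors = [s / t for s, t in zip(spacing_mm, target_spacing_mm)]
    return zoom(vol, factors, order=1)               # physical size -> isotropic grid

# Synthetic 64^3 volume with anisotropic 0.8 x 0.8 x 2.5 mm voxels.
ct = np.random.randint(-1000, 1000, size=(64, 64, 64)).astype(np.float32)
print(preprocess_ct(ct, spacing_mm=(0.8, 0.8, 2.5)).shape)
```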

This Special Issue aims to address significant challenges to the adoption of deep learning in medical image analysis. We are looking for methodological advances that exploit the unique characteristics of medical images, covering imaging modalities from radiology, cardiology, pathology, dermatology, and related fields.

Dr. Zongwei Zhou
Prof. Dr. Tianming Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • applications of medical imaging
  • image segmentation, registration, and fusion
  • representation learning, feature extraction
  • image reconstruction, image enhancement
  • microscopy image analysis
  • machine learning, deep learning
  • computer-aided diagnosis
  • image-guided interventions and surgery

Published Papers (6 papers)

Research

14 pages, 1325 KiB  
Article
A New Regularization for Deep Learning-Based Segmentation of Images with Fine Structures and Low Contrast
by Jiasen Zhang and Weihong Guo
Sensors 2023, 23(4), 1887; https://doi.org/10.3390/s23041887 - 8 Feb 2023
Cited by 1 | Viewed by 2130
Abstract
Deep learning methods have achieved outstanding results in many image processing and computer vision tasks, such as image segmentation. However, they usually do not consider spatial dependencies among pixels/voxels in the image. To obtain better results, some methods have been proposed that apply classic spatial regularization, such as total variation, within deep learning models. However, for some challenging images, especially those with fine structures and low contrast, classical regularizations are not suitable. We derived a new regularization that improves the connectivity of segmentation results and is applicable to deep learning. Our experimental results show that, for both deep learning methods and unsupervised methods, the proposed regularization improves performance by increasing connectivity and handling low contrast, thereby enhancing segmentation results. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
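
As a rough illustration of the pattern this abstract describes, the PyTorch sketch below adds a classic total-variation penalty, one of the spatial regularizations the paper seeks to improve upon, to a standard segmentation loss. The authors' new connectivity-promoting regularizer is not reproduced here, and the weight `lam` is an arbitrary placeholder.

```python
# Minimal "segmentation loss + lambda * spatial regularizer" pattern,
# using anisotropic total variation (TV) as the regularizer.
import torch
import torch.nn.functional as F

def tv_regularizer(prob: torch.Tensor) -> torch.Tensor:
    """Anisotropic TV of an (N, C, H, W) probability map."""
    dh = (prob[..., 1:, :] - prob[..., :-1, :]).abs().mean()
    dw = (prob[..., :, 1:] - prob[..., :, :-1]).abs().mean()
    return dh + dw

def regularized_loss(logits, target, lam=0.1):
    prob = torch.softmax(logits, dim=1)
    return F.cross_entropy(logits, target) + lam * tv_regularizer(prob)

logits = torch.randn(2, 2, 64, 64, requires_grad=True)  # two-class toy batch
target = torch.randint(0, 2, (2, 64, 64))
regularized_loss(logits, target).backward()
```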

18 pages, 3196 KiB  
Article
Deep Learning-Based Multiclass Brain Tissue Segmentation in Fetal MRIs
by Xiaona Huang, Yang Liu, Yuhan Li, Keying Qi, Ang Gao, Bowen Zheng, Dong Liang and Xiaojing Long
Sensors 2023, 23(2), 655; https://doi.org/10.3390/s23020655 - 6 Jan 2023
Cited by 6 | Viewed by 2439
Abstract
Fetal brain tissue segmentation is essential for quantifying congenital disorders in the developing fetus. Manual segmentation of fetal brain tissue is cumbersome and time-consuming, so an automatic segmentation method can greatly simplify the process. Moreover, the fetal brain undergoes a variety of changes throughout pregnancy, such as increased brain volume, neuronal migration, and synaptogenesis. As a result, the contrast between tissues, especially between gray matter and white matter, changes constantly throughout pregnancy, increasing the complexity and difficulty of segmentation. To reduce the burden of manual refinement, we proposed a new deep learning-based segmentation method. Our approach utilized a novel attentional structural block, the contextual transformer block (CoT-Block), applied in the backbone network of the encoder–decoder to guide the learning of dynamic attention matrices and enhance image feature extraction. Additionally, in the last layer of the decoder, we introduced a hybrid dilated convolution module, which expands the receptive field while retaining detailed spatial information, effectively extracting global contextual information from fetal brain MRI. We quantitatively evaluated our method using several performance measures: Dice, precision, sensitivity, and specificity. On 80 fetal brain MRI scans with gestational ages ranging from 20 to 35 weeks, we obtained an average Dice similarity coefficient (DSC) of 83.79%, an average volume similarity (VS) of 84.84%, and an average 95th-percentile Hausdorff distance (HD95) of 35.66 mm. We also compared several advanced deep learning segmentation models under equivalent conditions, and the results showed that our method outperformed them, exhibiting excellent segmentation performance. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
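
The hybrid dilated convolution idea in this abstract can be sketched generically: stacked 3x3 convolutions with varying dilation rates widen the receptive field while preserving resolution. The PyTorch sketch below is only a generic HDC block; the paper's CoT-Block and its exact channel counts are not reproduced, and the rates (1, 2, 5) are illustrative.

```python
# Generic hybrid dilated convolution (HDC) block: stacked 3x3 convolutions
# with varying dilation rates enlarge the receptive field at full resolution.
import torch
import torch.nn as nn

class HybridDilatedConv(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 5)):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )

    def forward(self, x):
        for block in self.blocks:  # each stage sees a wider context than the last
            x = block(x)
        return x

print(HybridDilatedConv(16)(torch.randn(1, 16, 64, 64)).shape)  # (1, 16, 64, 64)
```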

25 pages, 13125 KiB  
Article
Part Affinity Fields and CoordConv for Detecting Landmarks of Lumbar Vertebrae and Sacrum in X-ray Images
by Chang-Hyeon An, Jeong-Sik Lee, Jun-Su Jang and Hyun-Chul Choi
Sensors 2022, 22(22), 8628; https://doi.org/10.3390/s22228628 - 9 Nov 2022
Cited by 3 | Viewed by 2713
Abstract
With the prevalence of degenerative diseases due to the aging population, spine-related disorders have become common. Since the spine is a crucial part of the body, fast and accurate diagnosis is critically important. Clinicians generally use X-ray images to diagnose the spine, but X-ray images are commonly occluded by the shadows of other bones, making it hard to identify the whole spine. Therefore, various deep-learning-based spinal X-ray image analysis approaches have recently been proposed to help diagnose the spine. However, these approaches did not consider the frequent occlusions in X-ray images or the properties of vertebra shape. Based on these X-ray image properties and the vertebra shape, we present a novel landmark detection network specialized for lumbar X-ray images. The proposed network consists of two stages: the first detects the centers of the lumbar vertebrae and the upper end plate of the first sacral vertebra (S1), and the second detects the four corner points of each lumbar vertebra and two corner points of S1 from the image obtained in the first step. We used random spine cutout augmentation in the first step to make the network robust to the occlusions common in X-ray images. Furthermore, in the second step, we used CoordConv to make the network aware of the location distribution of landmarks, and part affinity fields to capture the morphological features of the vertebrae, resulting in more accurate landmark detection. The proposed network was evaluated on 304 X-ray images, achieving 98.02% accuracy in center detection and an 8.34% relative distance error in corner detection. This indicates that our network can detect spinal landmarks reliably enough to support radiologists in analyzing lumbar X-ray images. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
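
CoordConv, which the second stage uses to make the network aware of where landmarks tend to lie, is a published layer (Liu et al., 2018) and is easy to sketch: two normalized coordinate channels are concatenated to the input before an ordinary convolution. The channel counts and kernel size below are illustrative, not the paper's.

```python
# CoordConv: append normalized y/x coordinate channels before convolving,
# so the filter can condition on absolute image position.
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kw)

    def forward(self, x):
        n, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

layer = CoordConv2d(3, 8, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 64, 64)).shape)  # (2, 8, 64, 64)
```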

18 pages, 6123 KiB  
Article
Application of Deep Convolutional Neural Networks in the Diagnosis of Osteoporosis
by Róża Dzierżak and Zbigniew Omiotek
Sensors 2022, 22(21), 8189; https://doi.org/10.3390/s22218189 - 26 Oct 2022
Cited by 5 | Viewed by 2180
Abstract
The aim of this study was to assess the possibility of using deep convolutional neural networks (DCNNs) to develop an effective method for diagnosing osteoporosis based on CT images of the spine. The research material included the CT images of L1 spongy tissue belonging to 100 patients (50 healthy and 50 diagnosed with osteoporosis). Six pre-trained DCNN architectures with different topological depths (VGG16, VGG19, MobileNetV2, Xception, ResNet50, and InceptionResNetV2) were used in the study. The best results were obtained for the VGG16 model characterised by the lowest topological depth (ACC = 95%, TPR = 96%, and TNR = 94%). A specific challenge during the study was the relatively small (for deep learning) number of observations (400 images). This problem was solved using DCNN models pre-trained on a large dataset and a data augmentation technique. The obtained results allow us to conclude that the transfer learning technique yields satisfactory results during the construction of deep models for the diagnosis of osteoporosis based on small datasets of CT images of the spine. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
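
The transfer-learning recipe this abstract reports (a pre-trained VGG16 adapted to a small CT dataset) follows a standard pattern, sketched below in PyTorch. Using ImageNet weights from torchvision is an assumption about the source domain, and the frozen backbone, binary head, and learning rate are illustrative choices rather than the paper's exact setup.

```python
# Transfer learning with a pre-trained VGG16: freeze the convolutional
# backbone and train a new two-class head (healthy vs. osteoporosis).
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.DEFAULT)     # ImageNet weights (assumption)
for p in model.features.parameters():
    p.requires_grad = False                      # keep pre-trained features fixed
model.classifier[6] = nn.Linear(4096, 2)         # replace the 1000-class head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
logits = model(torch.randn(4, 3, 224, 224))      # dummy batch of CT patches
print(logits.shape)                              # (4, 2)
```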

20 pages, 5866 KiB  
Article
Automatic Segmentation of Periodontal Tissue Ultrasound Images with Artificial Intelligence: A Novel Method for Improving Dataset Quality
by Radu Chifor, Mircea Hotoleanu, Tiberiu Marita, Tudor Arsenescu, Mihai Adrian Socaciu, Iulia Clara Badea and Ioana Chifor
Sensors 2022, 22(19), 7101; https://doi.org/10.3390/s22197101 - 20 Sep 2022
Cited by 3 | Viewed by 1721
Abstract
This research aimed to evaluate Mask R-CNN and U-Net convolutional neural network models for pixel-level classification, in order to perform automatic segmentation of two-dimensional ultrasound (US) images of dental arches and identify the anatomical elements required for periodontal diagnosis. A secondary aim was to evaluate the efficiency of a method that corrects the ground-truth masks segmented by an operator, using 3D ultrasound reconstructions of the examined periodontal tissue, to improve the quality of the datasets used for training the neural network models. Methods: Ultrasound periodontal investigations were performed on 52 teeth of 11 patients using a 3D ultrasound scanner prototype. The original ultrasound images were segmented by an operator with limited experience using region-growing-based segmentation algorithms. Three-dimensional ultrasound reconstructions were used for quality checking and correction of the segmentation. Mask R-CNN and U-Net were trained and used to identify the elements of periodontal tissue. Results: The average Intersection over Union ranged between 10% for the periodontal pocket and 75.6% for the gingiva. Even though the original dataset contained 3417 images from 11 patients and the corrected dataset only 2135 images from 5 patients, prediction accuracy was significantly better for the models trained on the corrected dataset. Conclusions: The proposed quality-check and correction method, which evaluates the operator's ground-truth segmentation in 3D space, had a positive impact on dataset quality, as demonstrated by the higher IoU obtained after retraining the models on the corrected dataset. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
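
The headline metric here, per-class Intersection over Union, is simple to compute and worth spelling out; the numpy sketch below does so on toy integer label masks (the class count and masks are illustrative).

```python
# Per-class Intersection over Union (IoU) between two integer label masks.
import numpy as np

def per_class_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int):
    """IoU of each class; NaN where a class is absent from both masks."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        ious.append(np.logical_and(p, t).sum() / union if union else float("nan"))
    return ious

pred = np.random.randint(0, 3, (128, 128))
truth = np.random.randint(0, 3, (128, 128))
print(per_class_iou(pred, truth, n_classes=3))
```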

23 pages, 2708 KiB  
Article
A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms
by Nagwan Abdel Samee, Amel A. Alhussan, Vidan Fathi Ghoneim, Ghada Atteia, Reem Alkanhel, Mugahed A. Al-antari and Yasser M. Kadah
Sensors 2022, 22(13), 4938; https://doi.org/10.3390/s22134938 - 30 Jun 2022
Cited by 30 | Viewed by 3508
Abstract
One of the most promising research areas in the healthcare industry and the scientific community is the application of AI to real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allow rapid learning progress and improve medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, such as investigating the independence of the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on logistic regression (LR) and principal component analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for further use in classification. The proposed CAD system was examined using two public benchmark datasets, INbreast and mini-MIAS, and achieved the highest performance accuracies of 98.60% and 98.80%, respectively. Such a CAD system appears to be useful and reliable for breast cancer diagnosis. Full article
(This article belongs to the Special Issue Advances of Deep Learning in Medical Image Interpretation)
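
The pseudo-coloring step is concrete enough to sketch: the original grayscale mammogram fills the first channel, a CLAHE-enhanced copy the second, and an intensity-adjusted copy the third, yielding a three-channel input for a standard CNN backbone. The clip limit and percentile range below are illustrative parameters, not the paper's.

```python
# Build a three-channel pseudo-colored image from a grayscale mammogram:
# channel 0 = original, channel 1 = CLAHE, channel 2 = intensity-adjusted.
import numpy as np
from skimage import exposure

def pseudo_color(gray: np.ndarray) -> np.ndarray:
    g = gray.astype(np.float64)
    g = (g - g.min()) / (np.ptp(g) + 1e-8)        # normalize to [0, 1]
    clahe = exposure.equalize_adapthist(g, clip_limit=0.02)
    p2, p98 = np.percentile(g, (2, 98))
    adjusted = exposure.rescale_intensity(g, in_range=(p2, p98))
    return np.stack([g, clahe, adjusted], axis=-1)

mammogram = np.random.rand(256, 256)              # stand-in for a real image
print(pseudo_color(mammogram).shape)              # (256, 256, 3)
```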
