Special Issue "Deep Learning for Computer-Aided Diagnosis in Biomedical Imaging"

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 31 March 2021.

Special Issue Editor

Dr. Alexandr Kalinin
Guest Editor
1. Shenzhen Research Institute of Big Data, Shenzhen, China
2. University of Michigan, Ann Arbor, MI, USA
Interests: biomedical imaging; computer vision; machine learning; visual analytics

Special Issue Information

Dear Colleagues,

Deep learning has led to dramatic advances in the analysis of images and video and has demonstrated the potential to transform computer-aided diagnosis in biomedical imaging. Innovations in algorithm and software development and the availability of larger annotated biomedical imaging datasets are driving improvements in the automated classification, localization, retrieval, and segmentation of molecules, cells, lesions, nodules, tumors, organs, and other structures of interest. Deep neural networks have also been employed for medical image generation and enhancement and for the integration of images with biomedical data of other modalities. However, applying deep learning to computer-aided diagnosis in biomedical imaging also poses important challenges, such as learning from small, imbalanced, and noisy data; estimating model uncertainty; evaluating and interpreting models; limiting computing requirements; and designing and building interfaces between algorithms and clinicians. This Special Issue will focus on recent advances, prospects, and challenges in deep learning applications to computer-aided diagnosis in biomedical imaging.

Dr. Alexandr Kalinin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • biomedical image analysis
  • computer-assisted diagnosis
  • computer vision
  • artificial intelligence in biomedicine
  • self- and semi-supervised learning
  • one- and few-shot learning
  • transfer learning
  • model interpretability
  • biomedical image augmentation
  • biomedical image segmentation
  • object detection and localization
  • image generation and enhancement
  • 2D and 3D modeling
  • 2D and 3D reconstruction
  • image-guided surgery and intervention
  • biomarkers
  • personalized medicine
  • on-device deep learning

Published Papers (9 papers)


Research

Open Access Article
Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network
Diagnostics 2021, 11(1), 21; https://doi.org/10.3390/diagnostics11010021 - 24 Dec 2020
Abstract
Background and Objective: In the first trimester of pregnancy, fetal growth and abnormalities can be assessed using the exact middle sagittal plane (MSP) of the fetus. However, ultrasound (US) image quality and operator experience affect the accuracy of this assessment. We present an automatic system, built on a generative adversarial network (GAN) framework, that enables precise fetal MSP detection from three-dimensional (3D) US, and we evaluate its performance. Method: The neural network is designed as a filter and generates masks to obtain the MSP, learning the features and the MSP location in 3D space. Using the proposed image analysis system, a seed point was obtained from 218 first-trimester fetal 3D US volumes using deep learning, and the MSP was automatically extracted. Results: The experiments demonstrate the feasibility of the proposed approach and excellent agreement between the automatically and manually detected MSPs. There was no significant difference between the semi-automatic and automatic systems, and the automatic system was up to two times faster than the semi-automatic approach. Conclusion: The proposed system offers precise fetal MSP measurements; this automatic fetal MSP detection and measurement approach is therefore anticipated to be clinically useful. The proposed system can also be applied to other relevant clinical fields in the future.
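
The mask-to-plane step can be pictured with a small geometric sketch: given a binary mask of voxels predicted to lie on the MSP, a plane can be recovered by least squares. This is an illustrative reconstruction under assumed conventions, not the authors' implementation; the function and the synthetic volume below are hypothetical.

```python
# Illustrative sketch (not the paper's method): fit a plane to the voxels
# of a predicted mid-sagittal-plane mask in a 3D ultrasound volume.
import numpy as np

def fit_plane(mask: np.ndarray):
    """Return a point on the best-fit plane and its unit normal.

    mask -- 3D boolean array marking voxels predicted to lie on the plane.
    """
    pts = np.argwhere(mask).astype(float)   # (N, 3) voxel coordinates
    centroid = pts.mean(axis=0)             # a point on the plane
    # The normal is the direction of least variance of the points.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[-1]

# Synthetic mask whose voxels all lie on the x = 32 plane.
vol = np.zeros((64, 64, 64), dtype=bool)
vol[32, :, :] = True
point, normal = fit_plane(vol)
print(point, normal)  # centroid ~ (32, 31.5, 31.5); normal ~ (1, 0, 0) up to sign
```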

Open Access Article
Deep Learning Assisted Localization of Polycystic Kidney on Contrast-Enhanced CT Images
Diagnostics 2020, 10(12), 1113; https://doi.org/10.3390/diagnostics10121113 - 21 Dec 2020
Abstract
Total Kidney Volume (TKV) is essential for analyzing the progressive loss of renal function in Autosomal Dominant Polycystic Kidney Disease (ADPKD). Conventionally, to measure TKV from medical images, a radiologist needs to localize and segment the kidneys by defining and delineating the kidney’s boundary slice by slice. However, kidney localization is a time-consuming and challenging task given the large quantities of unstructured medical images produced by modalities such as Contrast-enhanced Computed Tomography (CCT). This study aimed to design an automatic localization model for ADPKD using Artificial Intelligence. A robust detection model was designed using CCT images, image preprocessing, and the Single Shot Detector (SSD) Inception V2 Deep Learning (DL) model. The model was trained and evaluated with 110 CCT images comprising 10,078 slices. The experimental results showed that our detection model outperformed other DL detectors in terms of Average Precision (AP) and mean Average Precision (mAP): we achieved mAP = 94% for image-wise testing and mAP = 82% for subject-wise testing at an Intersection over Union (IoU) threshold of 0.5. This study shows that our automatic detection model can assist radiologists in locating and classifying ADPKD kidneys precisely and rapidly, improving the subsequent segmentation task and TKV calculation.
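
The reported mAP values rest on the standard rule that a detection counts as correct when its Intersection over Union with a ground-truth box reaches 0.5. A minimal sketch of that criterion, with illustrative box coordinates rather than data from the study:

```python
# Hedged sketch: box IoU and the IoU >= 0.5 match criterion used when
# scoring detections. Boxes are (x1, y1, x2, y2) by assumption.
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)              # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 50), (12, 8, 48, 52)
print(iou(pred, gt))         # ~0.826
print(iou(pred, gt) >= 0.5)  # True -> counted as a correct detection
```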

Open Access Article
An Aggregated-Based Deep Learning Method for Leukemic B-lymphoblast Classification
Diagnostics 2020, 10(12), 1064; https://doi.org/10.3390/diagnostics10121064 - 08 Dec 2020
Abstract
Leukemia is a cancer of the blood cells in the bone marrow that affects both children and adolescents. The rapid growth of abnormal lymphocyte cells leads to bone marrow failure, which may slow the production of new blood cells and hence increase patient morbidity and mortality. Age is a crucial clinical factor in leukemia diagnosis, and leukemia is highly curable if diagnosed in its early stages. Incidence is increasing globally: around 412,000 people worldwide are likely to be diagnosed with some type of leukemia, of which acute lymphoblastic leukemia accounts for approximately 12% of all cases. The reliable and accurate detection of normal and malignant cells is therefore of major interest. Automatic detection with computer-aided diagnosis (CAD) models can assist clinicians and can be beneficial for the early detection of leukemia. In this paper, a single-center study, we aimed to build an aggregated deep learning model for Leukemic B-lymphoblast classification. To make the deep learner reliable and accurate, data augmentation techniques were applied to tackle the limited dataset size, and a transfer learning strategy was employed to accelerate the learning process and further improve the performance of the proposed network. The results show that our proposed approach fused features extracted from the best deep learning models and outperformed the individual networks, with a test accuracy of 96.58% in Leukemic B-lymphoblast diagnosis.
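
Feature-level fusion of several pretrained networks can be sketched briefly. The backbones below (ResNet-18 and VGG-16) and the concatenation-plus-linear-head rule are placeholders, not the paper's exact architectures or fusion strategy; a recent torchvision is assumed.

```python
# A minimal sketch of fusing features from two ImageNet-pretrained backbones.
import torch
import torch.nn as nn
from torchvision import models

class FusedClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        a = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        b = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone_a = nn.Sequential(*list(a.children())[:-1])             # (N, 512, 1, 1)
        self.backbone_b = nn.Sequential(b.features, nn.AdaptiveAvgPool2d(1))  # (N, 512, 1, 1)
        self.head = nn.Linear(512 + 512, num_classes)  # classify fused features

    def forward(self, x):
        fa = self.backbone_a(x).flatten(1)
        fb = self.backbone_b(x).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))   # concatenate, then classify

logits = FusedClassifier()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```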

Open Access Article
Convolutional Neural Network-Based Humerus Segmentation and Application to Bone Mineral Density Estimation from Chest X-ray Images of Critical Infants
Diagnostics 2020, 10(12), 1028; https://doi.org/10.3390/diagnostics10121028 - 30 Nov 2020
Abstract
Measuring bone mineral density (BMD) is important for surveying osteopenia in premature infants. However, the clinical availability of dual-energy X-ray absorptiometry (DEXA) for standard BMD measurement is very limited, and it is not a practical technique for critically premature infants. Developing alternative approaches to DEXA might improve clinical care for bone health. This study aimed to measure the BMD of premature infants via routine chest X-rays in the intensive care unit. A convolutional neural network (CNN) for humeral segmentation and a method for quantifying BMD with calibration phantoms (QRM-DEXA) and soft-tissue correction were developed. A total of 210 X-rays of premature infants were evaluated by this system, with an average Dice similarity coefficient of 97.81% for humeral segmentation. The estimated humeral BMDs (g/cm³; mean ± standard deviation) were 0.32 ± 0.06, 0.37 ± 0.06, and 0.32 ± 0.09 for the upper, middle, and bottom parts of the left humerus of the enrolled infants, respectively. To our knowledge, this is the first pilot study to apply a CNN model to humerus segmentation and to measure BMD in preterm infants. These preliminary results may accelerate the progress of BMD research in critical medicine and assist with nutritional care in premature infants.
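
The 97.81% segmentation score is a Dice similarity coefficient. A short sketch of how it is computed for binary masks, on synthetic arrays rather than the study's data:

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8)); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8)); b[3:7, 3:7] = 1   # shifted 16-pixel square
print(dice(a, b))  # ~0.5625 (9 overlapping pixels out of 16 + 16)
```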

Open Access Article
Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs Using Deep Convolutional Neural Networks
Diagnostics 2020, 10(11), 932; https://doi.org/10.3390/diagnostics10110932 - 10 Nov 2020
Cited by 1
Abstract
Knee osteoarthritis (OA) is the most common musculoskeletal disease in the world. In primary healthcare, knee OA is diagnosed using clinical examination and radiographic assessment. The Osteoarthritis Research Society International (OARSI) atlas of OA radiographic features allows independent assessment of knee osteophytes, joint space narrowing, and other knee features, providing a fine-grained OA severity assessment of the knee compared to the gold-standard and most commonly used Kellgren–Lawrence (KL) composite score. In this study, we developed an automatic method to predict KL and OARSI grades from knee radiographs. Our method is based on Deep Learning and leverages an ensemble of residual networks with 50 layers. We used transfer learning from ImageNet with fine-tuning on the Osteoarthritis Initiative (OAI) dataset. Independent testing of our model was performed on the Multicenter Osteoarthritis Study (MOST) dataset. Our method yielded a Cohen’s kappa coefficient of 0.82 for the KL grade, and coefficients of 0.79, 0.84, 0.94, 0.83, 0.84, and 0.90 for femoral osteophytes, tibial osteophytes, and joint space narrowing in the lateral and medial compartments, respectively. Furthermore, our method yielded an area under the ROC curve of 0.98 and an average precision of 0.98 for detecting the presence of radiographic OA, which is better than the current state of the art.
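
A hedged sketch of the transfer-learning recipe named above: an ImageNet-pretrained ResNet-50 whose final layer is replaced to predict the five KL grades, then fine-tuned. The dummy batch, optimizer, and learning rate are illustrative assumptions, not the authors' settings; a recent torchvision is assumed.

```python
# Transfer learning from ImageNet with fine-tuning (illustrative settings).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 5)   # new head: KL grades 0-4

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch standing in for radiograph crops.
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 5, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

An ensemble, as described in the abstract, could then average the softmax outputs of several such fine-tuned networks.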

Open Access Article
A Performance Comparison between Automated Deep Learning and Dental Professionals in Classification of Dental Implant Systems from Dental Imaging: A Multi-Center Study
Diagnostics 2020, 10(11), 910; https://doi.org/10.3390/diagnostics10110910 - 07 Nov 2020
Abstract
In this study, the efficacy of an automated deep convolutional neural network (DCNN) for the classification of dental implant systems (DISs) was evaluated, and its accuracy was compared against that of dental professionals using dental radiographic images collected from three dental hospitals. A total of 11,980 panoramic and periapical radiographic images covering six different types of DISs were divided into training (n = 9584) and testing (n = 2396) datasets. To compare the accuracy of the trained automated DCNN with that of dental professionals (six board-certified periodontists, eight periodontology residents, and 11 residents not specialized in periodontology), 180 images were randomly selected from the test dataset. The AUC, Youden index, sensitivity, and specificity of the automated DCNN were 0.954, 0.808, 0.955, and 0.853, respectively. The automated DCNN outperformed most of the participating dental professionals, including board-certified periodontists, periodontology residents, and residents not specialized in periodontology. The automated DCNN was highly effective in classifying the similar shapes of different types of DISs from dental radiographic images. Further studies are necessary to determine the efficacy and feasibility of applying automated DCNNs in clinical practice.
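
The reported Youden index is derived from the ROC curve as J = sensitivity + specificity - 1, maximized over thresholds. A small sketch with synthetic labels and scores, not the study's data:

```python
# Youden's J from an ROC curve (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([0, 0, 0, 1, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.6, 0.2])

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                      # J = sensitivity - (1 - specificity)
best = int(np.argmax(j))
print("AUC:", roc_auc_score(labels, scores))
print("best threshold:", thresholds[best], "J:", j[best])
```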

Open Access Article
CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image
Diagnostics 2020, 10(11), 901; https://doi.org/10.3390/diagnostics10110901 - 03 Nov 2020
Abstract
Computed tomography (CT) images are currently being adopted as visual evidence for COVID-19 diagnosis in clinical practice. Automated detection of COVID-19 infection from CT images based on deep models is important for faster examination. Unfortunately, collecting large-scale training data systematically in the early stage of an outbreak is difficult. To address this problem, we explore the feasibility of learning deep models for lung and COVID-19 infection segmentation from a single radiological image by synthesizing diverse radiological images. Specifically, we propose a novel conditional generative model, called CoSinGAN, which can be learned from a single radiological image with a given condition, i.e., the annotation mask of the lungs and infected regions. Our CoSinGAN captures the conditional distribution of the single radiological image and synthesizes high-resolution (512 × 512), diverse radiological images that match the input conditions precisely. We evaluate the efficacy of CoSinGAN in learning lung and infection segmentation from very few radiological images by performing 5-fold cross-validation on the COVID-19-CT-Seg dataset (20 CT cases) and independent testing on the MosMed dataset (50 CT cases). Both 2D U-Net and 3D U-Net models, learned from four CT slices using our CoSinGAN, achieved notable infection segmentation performance, surpassing the COVID-19-CT-Seg-Benchmark counterparts, which were trained on an average of 704 CT slices, by a large margin. These results strongly suggest that our method can learn COVID-19 infection segmentation from few radiological images in the early stage of the COVID-19 pandemic.
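
The core conditioning idea, greatly simplified: the generator receives the annotation mask (plus noise) as input and must synthesize a slice that matches it. The sketch below only illustrates mask-conditioned input; CoSinGAN itself is a far more elaborate multi-scale model, and every layer choice here is an assumption.

```python
# Toy mask-conditioned generator: 2-channel mask (lungs, infection) -> slice.
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    def __init__(self, mask_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(mask_channels + 1, 64, 3, padding=1),  # mask + noise channel
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
            nn.Tanh(),                                       # slice in [-1, 1]
        )

    def forward(self, mask):
        noise = torch.randn(mask.size(0), 1, *mask.shape[2:], device=mask.device)
        return self.net(torch.cat([mask, noise], dim=1))

fake = MaskConditionedGenerator()(torch.zeros(1, 2, 512, 512))
print(fake.shape)  # torch.Size([1, 1, 512, 512])
```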

Open Access Article
Automated Segmentation and Severity Analysis of Subdural Hematoma for Patients with Traumatic Brain Injuries
Diagnostics 2020, 10(10), 773; https://doi.org/10.3390/diagnostics10100773 - 30 Sep 2020
Abstract
Detection and severity assessment of subdural hematoma is a major step in the evaluation of traumatic brain injuries. This is a retrospective study of 110 computed tomography (CT) scans from patients admitted to the Michigan Medicine Neurological Intensive Care Unit or Emergency Department. A machine learning pipeline was developed to segment and assess the severity of subdural hematoma. First, the probability of each point belonging to the hematoma region was determined using a combination of hand-crafted and deep features; this probability provided the initial state of the segmentation. Next, a 3D post-processing model was applied to evolve the initial state and delineate the hematoma. The recall, precision, and Dice similarity coefficient of the proposed segmentation method were 78.61%, 76.12%, and 75.35%, respectively, for the entire population. The Dice similarity coefficient was 79.97% for clinically significant hematomas, which compared favorably to the inter-rater Dice similarity coefficient. In volume-based severity analysis, the proposed model yielded an F1 score, recall, and specificity of 98.22%, 98.81%, and 92.31%, respectively, in detecting moderate and severe subdural hematomas based on hematoma volume. These results show that combining classical image processing with deep learning can outperform deep-learning-only methods, achieving greater average performance and robustness. Such a system can aid critical care physicians in reducing time to intervention and thereby improve long-term patient outcomes.
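
The first stage, fusing hand-crafted and deep features into a per-point hematoma probability, can be sketched with a classical classifier. This is an illustration, not the authors' pipeline: the feature counts, the logistic-regression choice, and all data below are synthetic.

```python
# Fuse hand-crafted and deep features into per-voxel probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 1000
hand_crafted = rng.normal(size=(n_voxels, 4))   # e.g., intensity/texture stats
deep = rng.normal(size=(n_voxels, 16))          # e.g., CNN activations
features = np.hstack([hand_crafted, deep])      # combine both feature families
labels = rng.integers(0, 2, size=n_voxels)      # 1 = hematoma voxel

clf = LogisticRegression(max_iter=1000).fit(features, labels)
probs = clf.predict_proba(features)[:, 1]       # initial state for 3D post-processing
print(probs[:5])
```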

Open Access Article
Analyzing Malaria Disease Using Effective Deep Learning Approach
Diagnostics 2020, 10(10), 744; https://doi.org/10.3390/diagnostics10100744 - 24 Sep 2020
Abstract
Medical tools used to bolster decision-making by specialists who offer malaria treatment include image processing equipment and computer-aided diagnostic systems. These methods can be used to identify and detect malaria from images and to monitor the symptoms of malaria patients, although atypical cases may need more time for assessment. This research used 7000 images to verify and analyze the Xception, Inception-V3, ResNet-50, NasNetMobile, VGG-16, and AlexNet models, prevalent convolutional neural network models for precise image classification; a rotation-based augmentation method was applied to improve performance on the training and validation datasets. In the evaluation of these models for classifying malaria from thin blood smear images, Xception, using the state-of-the-art Mish activation function and Nadam optimizer, proved the most effective: in terms of recall, accuracy, precision, and F1 measure, a combined score of 99.28% was achieved. Consequently, 10% of images outside the training and testing datasets were evaluated using this model, and notable directions for improving computer-aided diagnosis toward an optimal malaria detection approach were identified, supported by an accuracy of 98.86%.
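
A hedged sketch of the winning configuration as described: Xception with a Mish-activated head and the Nadam optimizer in Keras. Mish is defined inline as x * tanh(softplus(x)) so the sketch needs no add-on package; the head sizes and learning rate are illustrative assumptions, not the paper's settings.

```python
# Xception + Mish + Nadam (illustrative head and hyperparameters).
import tensorflow as tf

def mish(x):
    return x * tf.math.tanh(tf.math.softplus(x))  # Mish activation

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
x = tf.keras.layers.Dense(128, activation=mish)(base.output)
out = tf.keras.layers.Dense(2, activation="softmax")(x)  # parasitized vs. uninfected

model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```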
