Special Issue "Artificial Intelligence for Medical Image Analysis"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: 30 September 2020.

Special Issue Editor

Dr. Hab. Anna Fabijańska
Guest Editor
Institute of Applied Computer Science, Lodz University of Technology, 90-924 Lodz, Poland
Interests: image processing; image analysis; image segmentation; artificial intelligence; deep learning; machine learning; computer aided diagnosis; applied computer science

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue on Artificial Intelligence for Medical Image Analysis.

Over the last few years, we have witnessed artificial intelligence (AI) revolutionizing the field of medical imaging. Numerous AI-based tools have been developed to automate medical image analysis and improve automated image interpretation. In particular, deep learning approaches have demonstrated exceptional performance in the screening and diagnosis of many diseases. A further challenge for AI-driven solutions is to develop tools for personalized disease assessment with deep learning models, taking advantage of their ability to learn patterns and relationships in data, to exploit massive volumes of medical images, and to combine the radiomic features extracted from them with other forms of medical data.

With the above in mind, this Special Issue aims to promote the latest cutting-edge AI-driven research in medical image processing and analysis. Of particular interest are submissions regarding computer-aided diagnosis and the improvement of automated image interpretation. However, contributions concerning other aspects of medical image processing (including, but not limited to, image quality improvement, image restoration, image segmentation, image registration, and radiomics analysis) are also welcome.

Dr. Hab. Anna Fabijańska
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • big data
  • computer aided diagnosis
  • deep learning
  • image guided therapy
  • image registration
  • image restoration
  • image segmentation
  • machine learning
  • personalized medicine
  • prediction of clinical outcomes
  • radiomics

Published Papers (5 papers)


Research

Open Access Article
Render U-Net: A Unique Perspective on Render to Explore Accurate Medical Image Segmentation
Appl. Sci. 2020, 10(18), 6439; https://doi.org/10.3390/app10186439 - 16 Sep 2020
Abstract
Organ lesions have a high mortality rate and pose a serious threat to people's lives. Segmenting organs accurately helps doctors to diagnose, so there is a demand for advanced segmentation models for medical images. However, most segmentation models are migrated directly from natural image segmentation and usually ignore the importance of the boundary. To address this difficulty, in this paper we provide a unique perspective on rendering to explore accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and a nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets, including LiTS, CHAOS, and DSB, and obtained the best performance on five medical image segmentation tasks.
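
The render-inspired boundary refinement described above can be illustrated with a short sketch. The snippet below is a minimal, PointRend-style example of subdivision-based point sampling in PyTorch; the tensor shapes, the `point_head` module, and the uncertainty measure are illustrative assumptions, not the authors' Render U-Net implementation.

```python
# Minimal sketch: refine a coarse segmentation by re-predicting only the most
# uncertain (boundary-like) points on an upsampled grid. Assumed: `point_head`
# is a small module mapping per-point features (N, P, C) to logits (N, P, 1).
import torch
import torch.nn.functional as F

def uncertainty(logits):
    # Points whose foreground probability is closest to 0.5 are most uncertain.
    prob = torch.sigmoid(logits)
    return -(prob - 0.5).abs()

def refine_mask(coarse_logits, fine_features, point_head, num_points=1024):
    """Upsample a coarse mask 2x and re-predict only the most uncertain points."""
    # coarse_logits: (N, 1, H, W); fine_features: (N, C, 2H, 2W)
    up = F.interpolate(coarse_logits, scale_factor=2, mode="bilinear", align_corners=False)
    n, _, h, w = up.shape
    # Pick the num_points most uncertain locations per image.
    scores = uncertainty(up).view(n, -1)
    idx = scores.topk(num_points, dim=1).indices          # (N, P)
    ys, xs = idx // w, idx % w
    # Normalized (x, y) coordinates in [-1, 1] for grid_sample.
    coords = torch.stack([xs / (w - 1) * 2 - 1, ys / (h - 1) * 2 - 1], dim=-1)
    grid = coords.unsqueeze(1).float()                    # (N, 1, P, 2)
    point_feats = F.grid_sample(fine_features, grid, align_corners=False)  # (N, C, 1, P)
    point_logits = point_head(point_feats.squeeze(2).permute(0, 2, 1))     # (N, P, 1)
    # Write the refined point predictions back into the upsampled mask.
    refined = up.view(n, -1).clone()
    refined.scatter_(1, idx, point_logits.squeeze(-1))
    return refined.view(n, 1, h, w)
```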

Open Access Article
Automatic Detection of Airway Invasion from Videofluoroscopy via Deep Learning Technology
Appl. Sci. 2020, 10(18), 6179; https://doi.org/10.3390/app10186179 - 05 Sep 2020
Abstract
In dysphagia, food materials frequently invade the laryngeal airway, potentially resulting in serious consequences, such as asphyxia or pneumonia. A videofluoroscopic swallowing study (VFSS) can be used to visualize the occurrence of airway invasion, but its reliability is limited by human error and fatigue. Deep learning technology may improve the efficiency and reliability of VFSS analysis by reducing the human effort required. A deep learning model has been developed that can detect airway invasion from VFSS images in a fully automated manner. The model consists of three phases: (1) image normalization, (2) dynamic ROI (region of interest) determination, and (3) airway invasion detection. Noise induced by movement and learning from unintended areas is minimized by defining a “dynamic” ROI with respect to the center of the cervical spinal column as segmented using U-Net. An Xception module, trained on a dataset consisting of 267,748 image frames obtained from 319 VFSS video files, is used for the detection of airway invasion. The present model shows an overall accuracy of 97.2% in classifying image frames and 93.2% in classifying video files. It is anticipated that the present model will enable more accurate analysis of VFSS data.
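
The three-phase pipeline described in the abstract can be sketched as follows. This is a minimal illustration assuming NumPy arrays and two placeholder callables, `spine_segmenter` (U-Net-like) and `invasion_classifier` (Xception-like); the crop size and the video-level decision rule are assumptions, not the published model.

```python
# Minimal sketch of the pipeline: (1) normalize each frame, (2) crop a dynamic
# ROI anchored to the cervical spine, (3) classify each ROI for airway invasion.
import numpy as np

def normalize(frame):
    # Phase 1: rescale pixel intensities to [0, 1].
    frame = frame.astype(np.float32)
    return (frame - frame.min()) / (frame.max() - frame.min() + 1e-8)

def dynamic_roi(frame, spine_mask, roi_size=256):
    # Phase 2: crop a fixed-size window centered on the cervical spine centroid,
    # so the ROI follows patient movement between frames.
    ys, xs = np.nonzero(spine_mask > 0.5)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = roi_size // 2
    padded = np.pad(frame, half, mode="edge")
    return padded[cy:cy + roi_size, cx:cx + roi_size]

def detect_airway_invasion(frames, spine_segmenter, invasion_classifier):
    # Phase 3: classify each ROI; flag the video if any frame is positive.
    frame_scores = []
    for frame in frames:
        frame = normalize(frame)
        roi = dynamic_roi(frame, spine_segmenter(frame))
        frame_scores.append(invasion_classifier(roi))  # probability of invasion
    return np.array(frame_scores), max(frame_scores) > 0.5
```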

Open Access Article
Classification of Dermoscopy Skin Lesion Color-Images Using Fractal-Deep Learning Features
Appl. Sci. 2020, 10(17), 5954; https://doi.org/10.3390/app10175954 - 27 Aug 2020
Abstract
The detection of skin diseases is becoming a priority worldwide due to the increasing incidence of skin cancer. Computer-aided diagnosis is a helpful tool for assisting dermatologists in detecting these kinds of illnesses. This work proposes a computer-aided diagnosis based on 1D fractal signatures of texture-based features combined with deep-learning features obtained by transfer learning with DenseNet-201. The proposal works with three 1D fractal signatures built per color image. The energy, variance, and entropy of the fractal signatures are combined with 100 features extracted from DenseNet-201 to construct the feature vector. Because the classes in skin lesion image datasets are commonly imbalanced, we use an ensemble of classifiers: K-nearest neighbors and two types of support vector machines. The computer-aided diagnosis output is determined by a linear plurality vote. We obtained an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85% for the eight-class classification task on the International Skin Imaging Collaboration (ISIC) archive-2019.
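
A minimal sketch of the feature-fusion and voting scheme outlined above, using scikit-learn: summary statistics of three 1D fractal signatures are concatenated with CNN features and classified by a hard-voting (plurality) ensemble of KNN and two SVMs. The signature extraction and the 100-dimensional DenseNet-201 features are assumed to be computed elsewhere, and the classifier settings are illustrative.

```python
# Minimal sketch: fuse fractal-signature statistics with CNN features and
# classify with a hard-voting ensemble (KNN + two SVM variants).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def signature_stats(signature):
    # Energy, variance, and entropy of one 1D fractal signature.
    s = np.asarray(signature, dtype=np.float64)
    p = np.abs(s) / (np.abs(s).sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return np.array([np.sum(s ** 2), np.var(s), entropy])

def build_feature_vector(fractal_signatures, cnn_features):
    # Three signatures (one per color channel) -> 9 statistics, plus CNN features.
    stats = np.concatenate([signature_stats(s) for s in fractal_signatures])
    return np.concatenate([stats, cnn_features])

# Plurality vote over KNN and two differently configured SVMs; class_weight
# "balanced" is one illustrative way to counter class imbalance.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm_rbf", SVC(kernel="rbf", class_weight="balanced")),
        ("svm_lin", SVC(kernel="linear", class_weight="balanced")),
    ],
    voting="hard",
)
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```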

Open Access Article
A Transfer Learning Method for Pneumonia Classification and Visualization
Appl. Sci. 2020, 10(8), 2908; https://doi.org/10.3390/app10082908 - 23 Apr 2020
Cited by 4
Abstract
Pneumonia is an infectious disease that affects the lungs and is one of the principal causes of death in children under five years old. Chest X-ray imaging is one of the techniques most commonly used for diagnosing pneumonia. Several machine learning algorithms have been used successfully to provide computer-aided diagnosis through the automatic classification of medical images. Convolutional neural networks (deep learning models), widely used in computer vision tasks such as the classification of injuries and brain abnormalities, stand out for their remarkable results. In this paper, we present a transfer learning method that automatically classifies between 3883 chest X-ray images characterized as depicting pneumonia and 1349 labeled as normal. The proposed method uses the weights of an Xception network pre-trained on ImageNet as initialization. Our model is competitive with state-of-the-art proposals. To compare with other models, we used four well-known performance measures, obtaining the following results: precision (0.84), recall (0.99), F1-score (0.91), and area under the ROC curve (0.97). These positive results allow us to consider our proposal as an alternative that can be useful in countries lacking equipment and specialized radiologists.
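
The transfer learning setup described above can be sketched with a Keras-style model that reuses ImageNet-pretrained Xception weights. The head layers, input size, and training settings below are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: binary (pneumonia vs. normal) chest X-ray classifier built on
# an ImageNet-pretrained Xception backbone used as initialization.
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
base.trainable = False  # keep pretrained weights frozen for the first stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pneumonia probability
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```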

Open Access Article
Computer-Aided Diagnosis of Skin Diseases Using Deep Neural Networks
Appl. Sci. 2020, 10(7), 2488; https://doi.org/10.3390/app10072488 - 04 Apr 2020
Cited by 3
Abstract
The propensity of skin diseases to manifest in a variety of forms, the lack and maldistribution of qualified dermatologists, and the exigency of timely and accurate diagnosis call for automated computer-aided diagnosis (CAD). This study aims to extend previous work on CAD for dermatology by exploring the potential of deep learning to classify hundreds of skin diseases, improving classification performance, and utilizing disease taxonomy. We trained state-of-the-art deep neural networks on two of the largest publicly available skin image datasets, namely DermNet and the ISIC Archive, and also leveraged disease taxonomy, where available, to improve the classification performance of these models. On DermNet, we establish a new state of the art with 80% accuracy and 98% area under the curve (AUC) for the classification of 23 diseases. We also set a precedent for classifying all 622 unique sub-classes in this dataset, achieving 67% accuracy and 98% AUC. On the ISIC Archive, we classified all 7 diseases with 93% average accuracy and 99% AUC. This study shows that deep learning has great potential to classify a vast array of skin diseases with near-human accuracy and far better reproducibility. It can play a promising role in practical real-time skin disease diagnosis by assisting physicians in large-scale screening using clinical or dermoscopic images.
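
One simple way to leverage a disease taxonomy at inference time, as hinted at above, is to aggregate fine-grained sub-class probabilities into their parent classes before prediction. The sketch below uses a toy, hypothetical taxonomy mapping; it is not the DermNet hierarchy or the authors' method.

```python
# Minimal sketch: sum sub-class probabilities within each parent class and
# predict the parent with the largest aggregated probability.
import numpy as np

def aggregate_to_parents(subclass_probs, parent_of):
    """subclass_probs: (N, num_subclasses); parent_of: subclass index -> parent index."""
    num_parents = max(parent_of.values()) + 1
    parent_probs = np.zeros((subclass_probs.shape[0], num_parents))
    for sub, parent in parent_of.items():
        parent_probs[:, parent] += subclass_probs[:, sub]
    return parent_probs.argmax(axis=1), parent_probs

# Example with a toy 5-sub-class, 2-parent taxonomy.
probs = np.array([[0.1, 0.2, 0.3, 0.15, 0.25]])
parent_of = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
labels, parent_probs = aggregate_to_parents(probs, parent_of)
```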
