Artificial Intelligence for Medical Image Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (10 May 2021) | Viewed by 47721

Special Issue Editor


Dr. Hab. Anna Fabijańska
Guest Editor
Institute of Applied Computer Science, Lodz University of Technology, 90-924 Lodz, Poland
Interests: image processing; image analysis; image segmentation; artificial intelligence; deep learning; machine learning; computer aided diagnosis; applied computer science

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue on Artificial Intelligence for Medical Image Analysis.

Over the last few years, we have witnessed artificial intelligence (AI) revolutionizing the medical imaging sector. Numerous AI-based tools have been developed to automate medical image analysis and improve automated image interpretation. In particular, deep learning approaches have demonstrated exceptional performance in the screening and diagnosis of many diseases. A further challenge for AI-driven solutions is to develop tools for personalized disease assessment through deep learning models, taking advantage of their ability to learn patterns and relationships in data, utilizing massive volumes of medical images, and combining the radiomics extracted from them with other forms of medical data.

With the above in mind, this Special Issue aims to promote the latest cutting-edge AI-driven research in medical image processing and analysis. Of particular interest are submissions regarding computer-aided diagnosis and the improvement of automated image interpretation. However, contributions concerning other aspects of medical image processing (including, but not limited to, image quality improvement, image restoration, image segmentation, image registration, and radiomics analysis) are also welcome.

Dr. Hab. Anna Fabijańska
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • big data
  • computer aided diagnosis
  • deep learning
  • image guided therapy
  • image registration
  • image restoration
  • image segmentation
  • machine learning
  • personalized medicine
  • prediction of clinical outcomes
  • radiomics

Published Papers (11 papers)


Research

13 pages, 5544 KiB  
Article
Automatic Segmentation of Choroid Layer Using Deep Learning on Spectral Domain Optical Coherence Tomography
by Wei Ping Hsia, Siu Lun Tse, Chia Jen Chang and Yu Len Huang
Appl. Sci. 2021, 11(12), 5488; https://doi.org/10.3390/app11125488 - 13 Jun 2021
Cited by 14 | Viewed by 3108
Abstract
The purpose of this article is to evaluate the accuracy of optical coherence tomography (OCT) measurement of choroidal thickness in healthy eyes using a deep-learning method with the Mask R-CNN model. Thirty EDI-OCT scans of thirty patients were enrolled. A mask region-based convolutional neural network (Mask R-CNN) model, composed of a deep residual network (ResNet) and feature pyramid networks (FPNs) with standard convolution and fully connected heads for mask and box prediction, respectively, was used to automatically delineate the choroid layer. The average choroidal thickness and subfoveal choroidal thickness were measured. Of the ResNet 50 layers deep (R50) and ResNet 101 layers deep (R101) models and their combinations, the R101 ∪ R50 (OR) model demonstrated the best accuracy, with average errors of 4.85 and 4.86 pixels for the average and subfoveal choroidal thickness, respectively. The R101 ∩ R50 (AND) model took the least time, with an average execution time of 4.6 s. The Mask R-CNN models showed a good prediction rate for the choroid layer, with accuracy rates of 90% and 89.9% for the average choroidal thickness and average subfoveal choroidal thickness, respectively. In conclusion, the deep-learning method using the Mask R-CNN model provides a fast and accurate measurement of choroidal thickness. Compared with manual delineation, it is more effective, making it feasible for clinical application and larger-scale research on the choroid.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
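
To make the OR/AND combination step concrete, here is a minimal Python sketch, assuming the R50 and R101 binary choroid masks have already been predicted; the placeholder arrays and the thickness helper are illustrative, not the authors' code:

```python
import numpy as np

def combine_masks(mask_r50: np.ndarray, mask_r101: np.ndarray, mode: str = "or") -> np.ndarray:
    """Combine two binary choroid masks: union (R101 ∪ R50, the OR model)
    or intersection (R101 ∩ R50, the AND model)."""
    if mode == "or":
        return np.logical_or(mask_r50, mask_r101)
    if mode == "and":
        return np.logical_and(mask_r50, mask_r101)
    raise ValueError(f"unknown mode: {mode}")

def mean_thickness_px(mask: np.ndarray) -> float:
    """Average layer thickness in pixels: per-column mask height, averaged."""
    return float(mask.sum(axis=0).mean())

# Synthetic placeholders standing in for real Mask R-CNN outputs (H x W B-scans).
rng = np.random.default_rng(0)
m50 = rng.random((256, 512)) > 0.5
m101 = rng.random((256, 512)) > 0.5
print(mean_thickness_px(combine_masks(m50, m101, mode="or")))
```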

14 pages, 22576 KiB  
Article
Segmentation of Liver Anatomy by Combining 3D U-Net Approaches
by Abir Affane, Adrian Kucharski, Paul Chapuis, Samuel Freydier, Marie-Ange Lebre, Antoine Vacavant and Anna Fabijańska
Appl. Sci. 2021, 11(11), 4895; https://doi.org/10.3390/app11114895 - 26 May 2021
Cited by 10 | Viewed by 3922
Abstract
Accurate liver vessel segmentation is of crucial importance for the clinical diagnosis and treatment of many hepatic diseases. Recent state-of-the-art methods for liver vessel reconstruction mostly utilize deep learning, namely the U-Net model and its variants. However, to the best of our knowledge, no comparative evaluation of these approaches on the liver vessel segmentation task has been proposed. Moreover, most research works do not consider liver volume segmentation as a preprocessing step that keeps only the inner hepatic vessels, for Couinaud representation for instance. For these reasons, in this work we propose using an accurate Dense U-Net liver segmentation and conducting a comparison between 3D U-Net models inside the obtained volumes. More precisely, 3D U-Net, Dense U-Net, and MultiRes U-Net are pitted against each other in the vessel segmentation task on the IRCAD dataset. For each model, three alternative setups that adapt the selected CNN architectures to volumetric data are tested: full 3D, slab-based, and box-based. The results showed that the most accurate setup is the full 3D process, providing the highest Dice score for most of the considered models; among the individual models, however, the slab-based MultiRes U-Net provided the best score. With our accurate vessel segmentations, several medical applications can be investigated, such as automatic and personalized Couinaud zoning of the liver.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
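
The slab-based setup compared above can be pictured as follows; this is a hedged sketch assuming a trained 3D U-Net wrapped as a `predict_fn` callable, with illustrative slab depth and stride rather than the paper's values:

```python
import numpy as np

def iter_slabs(volume, slab_depth=16, stride=8):
    """Yield (start_index, slab) pairs of overlapping axial slabs from a 3D volume."""
    depth = volume.shape[0]
    for z in range(0, max(depth - slab_depth, 0) + 1, stride):
        yield z, volume[z:z + slab_depth]

def slab_based_predict(volume, predict_fn, slab_depth=16, stride=8):
    """Run a slab-wise predictor and average overlapping slab outputs
    back into a full-volume probability map."""
    probs = np.zeros(volume.shape, dtype=np.float32)
    counts = np.zeros(volume.shape, dtype=np.float32)
    for z, slab in iter_slabs(volume, slab_depth, stride):
        probs[z:z + slab_depth] += predict_fn(slab)
        counts[z:z + slab_depth] += 1.0
    return probs / np.maximum(counts, 1.0)

# Identity stub standing in for a trained 3D U-Net's per-slab inference.
volume = np.random.rand(64, 128, 128).astype(np.float32)
vessel_prob = slab_based_predict(volume, predict_fn=lambda slab: slab)
print(vessel_prob.shape)  # (64, 128, 128)
```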

21 pages, 12317 KiB  
Article
Whole Heart Segmentation Using 3D FM-Pre-ResNet Encoder–Decoder Based Architecture with Variational Autoencoder Regularization
by Marija Habijan, Irena Galić, Hrvoje Leventić and Krešimir Romić
Appl. Sci. 2021, 11(9), 3912; https://doi.org/10.3390/app11093912 - 26 Apr 2021
Cited by 11 | Viewed by 2961
Abstract
Accurate whole heart segmentation (WHS) on medical images, including computed tomography (CT) and magnetic resonance (MR) images, plays a crucial role in many clinical applications, such as cardiovascular disease diagnosis, pre-surgical planning, and intraoperative treatment. Manual whole-heart segmentation is a time-consuming process, prone to subjectivity and error. Therefore, there is a need to develop quick, automatic, and accurate whole heart segmentation systems. Convolutional neural networks (CNNs) have emerged as a robust approach for medical image segmentation. In this paper, we first introduce a novel connectivity structure for the residual unit that we refer to as a feature merge residual unit (FM-Pre-ResNet). The proposed connectivity allows the creation of distinctly deep models without an increase in the number of parameters compared to pre-activation residual units. Second, we propose a three-dimensional (3D) encoder–decoder based architecture that successfully incorporates FM-Pre-ResNet units and a variational autoencoder (VAE). In the encoding stage, FM-Pre-ResNet units are used to learn a low-dimensional representation of the input. After that, the VAE reconstructs the input image from the low-dimensional latent space to provide a strong regularization of all model weights, simultaneously preventing overfitting on the training data. Finally, the decoding stage creates the final whole heart segmentation. We evaluate our method on the 40 test subjects of the MICCAI Multi-Modality Whole Heart Segmentation (MM-WHS) Challenge. The average Dice values for whole heart segmentation are 90.39% (CT images) and 89.50% (MRI images), both highly comparable to the state of the art.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
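
For orientation, the pre-activation residual unit that FM-Pre-ResNet builds on places normalization and activation before each convolution; a PyTorch sketch of that baseline follows (the feature-merge connectivity itself is specific to the paper and is not reproduced here):

```python
import torch
import torch.nn as nn

class PreActResUnit3D(nn.Module):
    """Standard 3D pre-activation residual unit: BN and ReLU precede each conv,
    and the input is added back through an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out  # identity shortcut

x = torch.randn(1, 8, 16, 16, 16)
print(PreActResUnit3D(8)(x).shape)  # torch.Size([1, 8, 16, 16, 16])
```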

17 pages, 3264 KiB  
Article
Virtual UV Fluorescence Microscopy from Hematoxylin and Eosin Staining of Liver Images Using Deep Learning Convolutional Neural Network
by Dorota Oszutowska-Mazurek, Miroslaw Parafiniuk and Przemyslaw Mazurek
Appl. Sci. 2020, 10(21), 7815; https://doi.org/10.3390/app10217815 - 04 Nov 2020
Cited by 2 | Viewed by 2485
Abstract
The use of ultraviolet (UV) fluorescence light in microscopy improves image quality and allows the observation of structures that are not visible in the visible spectrum. The disadvantage of this method is the degradation of microstructures in the slide due to UV exposure. This article examines the possibility of using a convolutional neural network to perform this type of conversion without damaging the slides. Using hematoxylin and eosin stained slides, a database of image pairs was created for visible light (halogen lamp) and UV light. This database was used to train a multi-layer unidirectional convolutional neural network. The results were assessed both subjectively and objectively using the SSIM (Structural Similarity Index Measure) and its structure-only variant. The results show that it is possible to perform this type of conversion (the study used liver slides at 100× magnification), and in some cases there was an additional improvement in image quality.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
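
The objective assessment step can be reproduced along these lines with scikit-image; the arrays below are placeholders for a network output and a reference UV image, and only the full SSIM (not the structure-only variant) is computed:

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder arrays standing in for the CNN's virtual UV image and the
# real UV fluorescence image of the same field of view.
predicted_uv = np.random.rand(512, 512).astype(np.float32)
reference_uv = np.random.rand(512, 512).astype(np.float32)

# data_range must match the image value range (here images are in [0, 1]).
score = structural_similarity(predicted_uv, reference_uv, data_range=1.0)
print(f"SSIM: {score:.3f}")
```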

16 pages, 2583 KiB  
Article
Multi-Task Learning for Small Brain Tumor Segmentation from MRI
by Duc-Ky Ngo, Minh-Trieu Tran, Soo-Hyung Kim, Hyung-Jeong Yang and Guee-Sang Lee
Appl. Sci. 2020, 10(21), 7790; https://doi.org/10.3390/app10217790 - 03 Nov 2020
Cited by 21 | Viewed by 4188
Abstract
Segmenting brain tumors accurately and reliably is an essential part of cancer diagnosis and treatment planning. Brain tumor segmentation in glioma patients is a challenging task because of the wide variety of tumor sizes, shapes, positions, scanning modalities, and acquisition protocols. Many convolutional neural network (CNN) based methods have been proposed to solve the problem of brain tumor segmentation and have achieved great success. However, most previous studies do not fully take multiscale tumors into account and often fail to segment small tumors, which may have a significant impact on finding early-stage cancers. This paper deals with brain tumor segmentation at all sizes but focuses especially on accurately identifying small tumors, thereby increasing segmentation performance overall. Instead of using heavyweight networks with multiple resolutions or kernel sizes, we propose a novel approach for better segmentation of small tumors using dilated convolution and multi-task learning. Dilated convolution is used for multiscale feature extraction; however, it does not work well for very small tumors. To deal with small tumors, we use multi-task learning, where an auxiliary task of feature reconstruction retains the features of small tumors. The experiments show the effectiveness of the proposed method in segmenting small tumors. This paper contributes to the detection and segmentation of small tumors, which have seldom been considered before, and to the development of hierarchical analysis using multi-task learning.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
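
A minimal sketch of multiscale feature extraction with parallel dilated convolutions follows; the dilation rates and channel counts are illustrative, and the paper's auxiliary feature-reconstruction head is not reproduced:

```python
import torch
import torch.nn as nn

class DilatedMultiScale(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates capture
    context at several receptive-field sizes; per-scale responses are
    concatenated along the channel axis."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # In a multi-task setup, features like these would feed both the
        # segmentation head and the auxiliary reconstruction head.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

x = torch.randn(1, 4, 64, 64)
print(DilatedMultiScale(4, 8)(x).shape)  # torch.Size([1, 24, 64, 64])
```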

14 pages, 5382 KiB  
Article
Kidney Boundary Detection Algorithm Based on Extended Maxima Transformations for Computed Tomography Diagnosis
by Tomasz Les, Tomasz Markiewicz, Miroslaw Dziekiewicz and Malgorzata Lorent
Appl. Sci. 2020, 10(21), 7512; https://doi.org/10.3390/app10217512 - 26 Oct 2020
Cited by 5 | Viewed by 1919
Abstract
This article describes an automated computed tomography (CT) image processing technique supporting kidney detection. The main goal of the study is the fully automatic generation of a kidney boundary for each slice in the set of slices obtained in a computed tomography examination. This work describes three main tasks in the process of automatic kidney identification: the initial location of the kidneys using the U-Net convolutional neural network, the generation of an accurate kidney boundary using the extended maxima transformation, and the application of a slice scanning algorithm that uses the result for one slice to support generating the result for the next. To assess the quality of the proposed technique, automatic numerical tests were performed, calculating the F1-score of kidney boundary detection by the automatic system against kidney boundaries manually generated by a human expert from a medical center. The influence of U-Net support in the initial detection of the kidney on the final F1-score of the generated kidney outline was also evaluated. The F1-score achieved by the automated system is 84% ± 10% without U-Net support and 89% ± 9% with U-Net support. Performance tests show that the presented technique can generate the kidney boundary up to 3 times faster than a raw U-Net-based network, so the proposed system can be used where very fast image processing is required. The measurable effect of the developed techniques is practical help for doctors and specialists in medical centers dealing with the analysis and description of medical image data.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
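
The extended maxima transformation at the core of the boundary step is available in scikit-image as `h_maxima`; here is a hedged sketch with an illustrative height threshold, omitting the paper's U-Net localization and slice-propagation stages:

```python
import numpy as np
from skimage.morphology import h_maxima
from skimage.measure import label, regionprops

def bright_seed_regions(ct_slice: np.ndarray, h: float = 0.2):
    """Extended maxima: regional maxima whose height exceeds h.
    Shallow local maxima (noise) are suppressed; surviving blobs can seed
    a subsequent kidney boundary search."""
    maxima_mask = h_maxima(ct_slice, h)          # binary extended-maxima mask
    return regionprops(label(maxima_mask))

# Placeholder slice standing in for an intensity-normalized CT image.
ct = np.random.rand(256, 256)
for region in bright_seed_regions(ct):
    print(region.centroid, region.area)
```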

21 pages, 8022 KiB  
Article
Render U-Net: A Unique Perspective on Render to Explore Accurate Medical Image Segmentation
by Chen Li, Wei Chen and Yusong Tan
Appl. Sci. 2020, 10(18), 6439; https://doi.org/10.3390/app10186439 - 16 Sep 2020
Cited by 5 | Viewed by 3549
Abstract
Organ lesions have a high mortality rate and pose a serious threat to people's lives. Segmenting organs accurately helps doctors diagnose them, so there is a demand for advanced segmentation models for medical images. However, most segmentation models are migrated directly from natural image segmentation models and usually ignore the importance of the boundary. To address this difficulty, in this paper we provide a unique perspective on rendering to explore accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and the nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets: LiTS, CHAOS, and DSB. This model obtained the best performance on five medical image segmentation tasks.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
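
The subdivision-based point sampling resembles PointRend-style uncertainty sampling, where the points selected for refinement are those whose coarse foreground probability is closest to 0.5; the sketch below works under that assumption, and the paper's exact sampling scheme may differ:

```python
import torch

def sample_uncertain_points(coarse_prob: torch.Tensor, k: int = 512) -> torch.Tensor:
    """Pick the k pixels whose foreground probability is closest to 0.5,
    i.e., where the coarse mask is least certain about the boundary."""
    uncertainty = -(coarse_prob - 0.5).abs()       # higher = less certain
    _, idx = uncertainty.flatten().topk(k)
    h, w = coarse_prob.shape
    return torch.stack((idx // w, idx % w), dim=1)  # (k, 2) pixel coordinates

coarse = torch.rand(128, 128)            # stand-in for an upsampled coarse mask
points = sample_uncertain_points(coarse)
print(points.shape)  # torch.Size([512, 2])
```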

16 pages, 5472 KiB  
Article
Automatic Detection of Airway Invasion from Videofluoroscopy via Deep Learning Technology
by Seong Jae Lee, Joo Young Ko, Hyun Il Kim and Sang-Il Choi
Appl. Sci. 2020, 10(18), 6179; https://doi.org/10.3390/app10186179 - 05 Sep 2020
Cited by 9 | Viewed by 4054
Abstract
In dysphagia, food materials frequently invade the laryngeal airway, potentially resulting in serious consequences such as asphyxia or pneumonia. The VFSS (videofluoroscopic swallowing study) procedure can be used to visualize the occurrence of airway invasion, but its reliability is limited by human error and fatigue. Deep learning technology may improve the efficiency and reliability of VFSS analysis by reducing the human effort required. A deep learning model was developed that detects airway invasion from VFSS images in a fully automated manner. The model consists of three phases: (1) image normalization, (2) dynamic ROI (region of interest) determination, and (3) airway invasion detection. Noise induced by movement and learning from unintended areas is minimized by defining a "dynamic" ROI with respect to the center of the cervical spinal column, as segmented using U-Net. An Xception module, trained on a dataset of 267,748 image frames obtained from 319 VFSS video files, is used for the detection of airway invasion. The model shows an overall accuracy of 97.2% in classifying image frames and 93.2% in classifying video files. It is anticipated that the present model will enable more accurate analysis of VFSS data.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
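
The dynamic ROI determination can be sketched as a fixed-size crop positioned relative to the centroid of the U-Net-segmented cervical spine; the window size and the leftward offset below are assumptions for illustration, not the paper's values:

```python
import numpy as np

def dynamic_roi(frame: np.ndarray, spine_mask: np.ndarray, size: int = 224) -> np.ndarray:
    """Crop a fixed-size window positioned relative to the spine centroid
    (here a window extending to the left of the spine, roughly where the
    airway lies in a lateral VFSS view)."""
    ys, xs = np.nonzero(spine_mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    top = int(np.clip(cy - size // 2, 0, frame.shape[0] - size))
    left = int(np.clip(cx - size, 0, frame.shape[1] - size))
    return frame[top:top + size, left:left + size]

frame = np.random.rand(512, 512)                 # stand-in for a VFSS frame
spine = np.zeros((512, 512), dtype=bool)
spine[200:300, 300:320] = True                   # stub U-Net spine segmentation
print(dynamic_roi(frame, spine).shape)           # (224, 224)
```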

19 pages, 5389 KiB  
Article
Classification of Dermoscopy Skin Lesion Color-Images Using Fractal-Deep Learning Features
by Edgar Omar Molina-Molina, Selene Solorza-Calderón and Josué Álvarez-Borrego
Appl. Sci. 2020, 10(17), 5954; https://doi.org/10.3390/app10175954 - 27 Aug 2020
Cited by 21 | Viewed by 3200
Abstract
The detection of skin diseases is becoming a priority task worldwide due to the increasing incidence of skin cancer, and computer-aided diagnosis is a helpful tool for dermatologists in detecting these illnesses. This work proposes a computer-aided diagnosis based on 1D fractal signatures of texture-based features combined with deep-learning features obtained via transfer learning with DenseNet-201. The proposal works with three 1D fractal signatures built per color image. The energy, variance, and entropy of the fractal signatures are combined with 100 features extracted from DenseNet-201 to construct the feature vector. Because the classes in skin lesion image datasets are commonly imbalanced, we use an ensemble of classifiers: K-nearest neighbors and two types of support vector machines. The computer-aided diagnosis output is determined by a linear plurality vote. We obtained an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85% on the eight-class classification task of the International Skin Imaging Collaboration (ISIC) archive-2019.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
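
The classifier ensemble maps naturally onto scikit-learn's hard-voting API; since the abstract does not name the two SVM kernels, linear and RBF are assumed here, and the feature vectors are synthetic stand-ins (3 signatures × 3 statistics + 100 deep features = 109 dimensions, an inference from the abstract):

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic placeholders for fractal statistics + DenseNet-201 features.
rng = np.random.default_rng(0)
X = rng.random((300, 109))
y = rng.integers(0, 8, 300)          # eight ISIC-2019 classes

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm_linear", SVC(kernel="linear")),
        ("svm_rbf", SVC(kernel="rbf")),
    ],
    voting="hard",                    # plurality vote over predicted labels
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```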

18 pages, 4477 KiB  
Article
A Transfer Learning Method for Pneumonia Classification and Visualization
by Juan Eduardo Luján-García, Cornelio Yáñez-Márquez, Yenny Villuendas-Rey and Oscar Camacho-Nieto
Appl. Sci. 2020, 10(8), 2908; https://doi.org/10.3390/app10082908 - 23 Apr 2020
Cited by 73 | Viewed by 6167
Abstract
Pneumonia is an infectious disease that affects the lungs and is one of the principal causes of death in children under five years old. Chest X-ray imaging is one of the techniques most used for diagnosing pneumonia. Several machine learning algorithms have been successfully used to provide computer-aided diagnosis through the automatic classification of medical images. Convolutional Neural Networks, Deep Learning models widely used in computer vision tasks such as the classification of injuries and brain abnormalities, stand out for their remarkable results. In this paper, we present a transfer learning method that automatically classifies between 3883 chest X-ray images characterized as depicting pneumonia and 1349 labeled as normal. The proposed method uses the Xception network's weights pre-trained on ImageNet as initialization. Our model is competitive with state-of-the-art proposals. To compare with other models, we used four well-known performance measures, obtaining the following results: precision (0.84), recall (0.99), F1-score (0.91), and area under the ROC curve (0.97). These positive results allow us to consider our proposal as an alternative that can be useful in countries lacking equipment and specialized radiologists.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
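
The initialization described above corresponds directly to the Keras applications API; the classification head and training settings below are illustrative, not the authors' exact configuration:

```python
import tensorflow as tf

# Xception backbone with ImageNet weights, as in the paper; top removed so a
# task-specific head can be attached.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3), pooling="avg"
)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```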

13 pages, 9530 KiB  
Article
Computer-Aided Diagnosis of Skin Diseases Using Deep Neural Networks
by Muhammad Naseer Bajwa, Kaoru Muta, Muhammad Imran Malik, Shoaib Ahmed Siddiqui, Stephan Alexander Braun, Bernhard Homey, Andreas Dengel and Sheraz Ahmed
Appl. Sci. 2020, 10(7), 2488; https://doi.org/10.3390/app10072488 - 04 Apr 2020
Cited by 68 | Viewed by 10133
Abstract
The propensity of skin diseases to manifest in a variety of forms, the lack and maldistribution of qualified dermatologists, and the exigency of timely and accurate diagnosis call for automated Computer-Aided Diagnosis (CAD). This study aims at extending previous works on CAD for dermatology by exploring the potential of Deep Learning to classify hundreds of skin diseases, improving classification performance, and utilizing disease taxonomy. We trained state-of-the-art Deep Neural Networks on two of the largest publicly available skin image datasets, namely DermNet and the ISIC Archive, and also leveraged disease taxonomy, where available, to improve the classification performance of these models. On DermNet we establish a new state of the art with 80% accuracy and 98% Area Under the Curve (AUC) for the classification of 23 diseases. We also set a precedent by classifying all 622 unique sub-classes in this dataset, achieving 67% accuracy and 98% AUC. On the ISIC Archive we classified all 7 diseases with 93% average accuracy and 99% AUC. This study shows that Deep Learning has great potential to classify a vast array of skin diseases with near-human accuracy and far better reproducibility. It can play a promising role in practical real-time skin disease diagnosis by assisting physicians in large-scale screening using clinical or dermoscopic images.
(This article belongs to the Special Issue Artificial Intelligence for Medical Image Analysis)
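
The reported accuracy and AUC figures correspond to standard multi-class metrics; here is a sketch on synthetic softmax outputs for a 7-class problem mirroring the ISIC Archive setup:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic placeholders for ground-truth labels and per-class probabilities.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 7, 200)
y_prob = rng.dirichlet(np.ones(7), 200)   # rows sum to 1, like softmax output

print("accuracy:", accuracy_score(y_true, y_prob.argmax(axis=1)))
print("macro AUC (one-vs-rest):", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```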
