Image Processing Techniques for Biomedical Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (20 August 2021) | Viewed by 135224

Special Issue Editors


Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: computer vision; image processing; machine learning; deep learning; artificial intelligence; medical image analysis; biomedical image analysis

Guest Editor
Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
Interests: computer vision; medical image analysis; shape analysis and matching; image retrieval and classification

Special Issue Information

Dear Colleagues,

This Special Issue of the journal Applied Sciences, entitled Image Processing Techniques for Biomedical Applications, aims to present recent advances in the development and use of image-processing techniques, together with future prospects of this key and fundamental research area. All interested authors are invited to submit their newest results on biomedical image processing and analysis for possible publication in this Special Issue. All papers must present original, previously unpublished work and will be subject to the normal standards and peer-review processes of this journal. Potential topics include but are not limited to:

  • Medical image reconstruction;
  • Medical image retrieval;
  • Medical image segmentation;
  • Deep or handcrafted features for biomedical image classification;
  • Visualization in biomedical imaging;
  • Machine learning and artificial intelligence;
  • Image analysis of anatomical structures and lesions;
  • Computer-aided detection/diagnosis;
  • Multimodality fusion for diagnosis, image analysis, and image-guided interventions;
  • Combination of image analysis with clinical data mining and analytics;
  • Applications of big data in imaging;
  • Microscopy and histology image analysis;
  • Ophthalmic image analysis;
  • Applications of computational pathology in the clinic.

Prof. Dr. Cecilia Di Ruberto
Dr. Andrea Loddo
Dr. Lorenzo Putzu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image preprocessing
  • Image segmentation
  • Feature extraction
  • Statistical methods
  • Orthogonal moments
  • Shape matching
  • Deep learning
  • Machine learning
  • Cellular shape analysis
  • Tissue classification
  • Blood image analysis


Published Papers (28 papers)


Editorial


4 pages, 177 KiB  
Editorial
Special Issue on Image Processing Techniques for Biomedical Applications
by Cecilia Di Ruberto, Andrea Loddo and Lorenzo Putzu
Appl. Sci. 2022, 12(20), 10338; https://doi.org/10.3390/app122010338 - 14 Oct 2022
Cited by 1 | Viewed by 984
Abstract
In recent years, there has been growing interest in creating powerful biomedical image processing tools to assist medical specialists [...] Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

Research


12 pages, 3207 KiB  
Article
Systematic Quantification of Cell Confluence in Human Normal Oral Fibroblasts
by Ching-Hsiang Chiu, Jyh-Der Leu, Tzu-Ting Lin, Pin-Hua Su, Wan-Chun Li, Yi-Jang Lee and Da-Chuan Cheng
Appl. Sci. 2020, 10(24), 9146; https://doi.org/10.3390/app10249146 - 21 Dec 2020
Cited by 4 | Viewed by 8047
Abstract
Background: The accurate determination of cell confluence is a critical step for generating reasonable results of designed experiments in cell biological studies. However, the cell confluence of the same culture may be diversely predicted by individual researchers. Herein, we designed a systematic quantification scheme, implemented on the MATLAB platform as the so-called “Confluence-Viewer” program, to assist cell biologists in better determining cell confluence. Methods: Human normal oral fibroblasts (hOFs) seeded in 10 cm culture dishes were visualized under an inverted microscope for the acquisition of cell images. The images were subjected to a cell segmentation algorithm using top-hat transformation and the Otsu thresholding technique. A regression model was built using a quadratic model and a shape-preserving piecewise cubic model. Results: The cell segmentation algorithm generated a regression curve that was highly correlated with the cell confluence determined by experienced researchers. However, the correlation was low when compared to the cell confluence determined by novice students. Interestingly, the cell confluence determined by experienced researchers became more diverse when they checked the same images without a time limit (up to 1 min). Conclusion: This tool could prevent unnecessary human error and meaningless repetition for novice researchers working on cell-based studies in health care or cancer research. Full article
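The segmentation pipeline described in this abstract (top-hat transformation to flatten uneven illumination, Otsu thresholding to segment, then confluence read off as the covered area fraction) can be sketched in a few lines of Python. This is a minimal NumPy/SciPy illustration, not the authors' MATLAB implementation; the structuring-element size and the synthetic test image are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Exhaustive-search Otsu threshold on an 8-bit image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum[t - 1]          # pixels below t
        w1 = total - w0          # pixels at or above t
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        between_var = w0 * w1 * (m0 - m1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def confluence(img, structure_size=25):
    """Fraction of the field covered by cells: white top-hat removes the
    slowly varying background, Otsu segments the residue, and confluence
    is the foreground area ratio."""
    th = ndimage.white_tophat(img.astype(float), size=structure_size)
    th8 = np.clip(th, 0, 255).astype(np.uint8)
    mask = th8 > otsu_threshold(th8)
    return mask.mean()
```

On a synthetic field with a brightness gradient and two bright "cell" patches, the returned fraction approximates the patch area ratio; a real pipeline would add hole filling and small-object removal.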

15 pages, 15327 KiB  
Article
An Empirical Evaluation of Nuclei Segmentation from H&E Images in a Real Application Scenario
by Lorenzo Putzu and Giorgio Fumera
Appl. Sci. 2020, 10(22), 7982; https://doi.org/10.3390/app10227982 - 10 Nov 2020
Cited by 5 | Viewed by 1929
Abstract
Cell nuclei segmentation is a challenging task, especially in real applications, when the target images differ significantly from one another. This task is also challenging for methods based on convolutional neural networks (CNNs), which have recently boosted the performance of cell nuclei segmentation systems. However, when training data are scarce or not representative of deployment scenarios, they may suffer from overfitting to a different extent, and may hardly generalise to images that differ from the ones used for training. In this work, we focus on real-world, challenging application scenarios in which no annotated images from a given dataset are available, or in which only a few images (even unlabelled) of the same domain are available to perform domain adaptation. To simulate this scenario, we performed extensive cross-dataset experiments on several CNN-based, state-of-the-art cell nuclei segmentation methods. Our results show that some of the existing CNN-based approaches are capable of generalising to target images which resemble the ones used for training. In contrast, their effectiveness considerably degrades when target and source significantly differ in colour and scale. Full article

16 pages, 47058 KiB  
Article
Multi-Steps Registration Protocol for Multimodal MR Images of Hip Skeletal Muscles in a Longitudinal Study
by Lucia Fontana, Alfonso Mastropietro, Elisa Scalco, Denis Peruzzo, Elena Beretta, Sandra Strazzer, Filippo Arrigoni and Giovanna Rizzo
Appl. Sci. 2020, 10(21), 7823; https://doi.org/10.3390/app10217823 - 04 Nov 2020
Cited by 5 | Viewed by 2233
Abstract
Image registration is crucial in multimodal longitudinal skeletal muscle Magnetic Resonance Imaging (MRI) studies to extract reliable parameters that can be used as indicators for the physio/pathological characterization of muscle tissue and for assessing the effectiveness of treatments. This paper proposes a reliable registration protocol and evaluates its accuracy in a longitudinal study. The hips of 6 subjects were scanned, in a multimodal protocol, at 2 different time points by a 3 Tesla scanner; the proposed multi-step registration pipeline is based on rigid and elastic transformations implemented in SimpleITK using a multi-resolution technique. The effects of different image pre-processing choices (muscle masks, isotropic voxels) and different parameter values (learning rates and mesh sizes) were quantitatively assessed using standard accuracy indexes. Rigid registration alone does not provide satisfactory accuracy for inter-session alignment, and a further elastic step is needed. The use of isotropic voxels, combined with muscle masking, provides the best result in terms of accuracy. Learning rates can be increased to speed up the process without affecting the final results. The protocol described in this paper, complemented by open-source software, can be a useful guide for researchers approaching the issues of muscle MR image registration for the first time. Full article
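The rigid step of such a registration pipeline estimates a spatial transformation aligning the two scans. As a minimal, self-contained sketch of the idea (not the SimpleITK multi-resolution pipeline the paper uses), the translation component can be recovered with Fourier-domain phase correlation; the image size and shifts below are made up for the example.

```python
import numpy as np

def phase_correlation(fixed, moving):
    """Estimate the integer (dy, dx) translation that, applied to
    `moving` (e.g. via np.roll), aligns it to `fixed`. Uses the Fourier
    shift theorem: the normalised cross-power spectrum of two shifted
    images is a pure phase ramp whose inverse FFT is a delta peak."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak location to a signed shift
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)
```

Rolling `moving` by the returned `(dy, dx)` reproduces `fixed` exactly for circular shifts; real MR volumes additionally need rotation, scaling, and the elastic stage described above.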

14 pages, 4443 KiB  
Article
Noise Level and Similarity Analysis for Computed Tomographic Thoracic Image with Fast Non-Local Means Denoising Algorithm
by Bae-Guen Kim, Seong-Hyeon Kang, Chan Rok Park, Hyun-Woo Jeong and Youngjin Lee
Appl. Sci. 2020, 10(21), 7455; https://doi.org/10.3390/app10217455 - 23 Oct 2020
Cited by 10 | Viewed by 2062
Abstract
Although conventional denoising filters have been developed for noise reduction in digital images, these filters simultaneously cause blurring in the images. To address this problem, we proposed the fast non-local means (FNLM) denoising algorithm, which preserves the edge information of objects better than conventional denoising filters. In this study, we obtained thoracic computed tomography (CT) images from a computer-modeled male adult mesh (MASH) phantom and from a five-year-old phantom to perform both a simulation study and a practical study. Subsequently, the FNLM denoising algorithm and conventional denoising filters, such as the Gaussian, median, and Wiener filters, were applied to the MASH phantom image, to which Gaussian noise with a standard deviation of 0.002 had been added, and to the practical CT images. Finally, the results were compared quantitatively in terms of the coefficient of variation (COV), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and correlation coefficient (CC). The results showed that the FNLM denoising algorithm was more efficient than the conventional denoising filters. In conclusion, through the simulation study and the practical study, this study demonstrated the feasibility of the FNLM denoising algorithm for noise reduction in thoracic CT images. Full article
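The core of non-local means is easy to state: each output pixel is a weighted average of the pixels in a search window, where a pixel's weight depends on how similar the patch around it is to the patch around the pixel being denoised. A deliberately slow but readable reference version is sketched below; the paper's FNLM adds the accelerations that make it practical, and the patch size, window size, and filtering parameter h here are illustrative choices.

```python
import numpy as np

def nlm_denoise(img, patch=1, window=5, h=0.15):
    """Naive non-local means. `patch` is the patch half-width,
    `window` the search-window width, `h` the filtering strength."""
    half_w = window // 2
    pad = half_w + patch
    padded = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1,
                         cj - patch:cj + patch + 1]
            acc, weights = 0.0, 0.0
            for di in range(-half_w, half_w + 1):
                for dj in range(-half_w, half_w + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    # weight decays with mean squared patch difference
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    acc += w * padded[ni, nj]
                    weights += w
            out[i, j] = acc / weights
    return out
```

Because weights come from whole-patch similarity rather than spatial distance alone, edges keep high contrast while flat regions are averaged aggressively, which is exactly the property the comparison above exploits.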

20 pages, 6348 KiB  
Article
CDA-Net for Automatic Prostate Segmentation in MR Images
by Zhiying Lu, Mingyue Zhao and Yong Pang
Appl. Sci. 2020, 10(19), 6678; https://doi.org/10.3390/app10196678 - 24 Sep 2020
Cited by 10 | Viewed by 2466
Abstract
Automatic and accurate prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Therefore, this paper proposes a cascaded dual attention network (CDA-Net) for automatic prostate segmentation in MRI scans. The network comprises two stages: RAS-FasterRCNN and RAU-Net. Firstly, RAS-FasterRCNN uses an improved FasterRCNN and sequence correlation processing to extract regions of interest (ROI) of the organ. This ROI extraction serves as a hard attention mechanism that focuses the segmentation of the subsequent network on a certain area. Secondly, the addition of residual convolution blocks and a self-attention mechanism in RAU-Net enables the network to gradually focus on the area where the organ exists while making full use of multiscale features. The algorithm was evaluated on the PROMISE12 and ASPS13 datasets and achieves Dice similarity coefficients of 92.88% and 92.65%, respectively, surpassing state-of-the-art algorithms. In a variety of complex slice images, especially at the base and apex of slice sequences, the algorithm also achieved credible segmentation performance. Full article
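For reference, the Dice similarity coefficient reported here is the standard overlap measure between a predicted mask and a ground-truth mask, 2|A ∩ B| / (|A| + |B|). A minimal NumPy version (the smoothing constant `eps` is a common convention for empty masks, not something taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps keeps the ratio defined when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1, disjoint masks score (near) 0, making it stricter than plain pixel accuracy for small structures such as the prostate base and apex.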

9 pages, 4595 KiB  
Article
An Open-Source Android Application to Measure Anterior–Posterior Knee Translation
by Gil Serrancolí, Peter Bogatikov, Guillem Tanyà Palacios, Jordi Torner, Joan Carles Monllau and Simone Perelli
Appl. Sci. 2020, 10(17), 5896; https://doi.org/10.3390/app10175896 - 26 Aug 2020
Cited by 1 | Viewed by 2361
Abstract
There are widely used standard clinical tests to estimate the instability of an anterior cruciate ligament (ACL) deficient knee by assessing the translation of the tibia with respect to the femur. However, the assessment of those tests can be quite subjective. The goal of this study is to present a universally affordable open-source Android application that is easy and quick to use and allows a quantitative, objective analysis of that instability. The anterior–posterior (AP) knee translation of seven ACL-deficient subjects was assessed with the application during Lachman and Pivot–Shift tests. A single Android smartphone and the placement of three green skin adhesives are all that is required to use it. The application was developed using the image-processing features of the open-source OpenCV library. The application identified differences in the AP translation between the ipsilateral and the contralateral legs; three of the seven subjects were under anesthesia, and those three were also the ones with significant differences. The use of the application represents an easy, low-cost, reliable and quick way to assess knee instability quantitatively. Full article

13 pages, 7312 KiB  
Article
SelectStitch: Automated Frame Segmentation and Stitching to Create Composite Images from Otoscope Video Clips
by Hamidullah Binol, Aaron C. Moberly, Muhammad Khalid Khan Niazi, Garth Essig, Jay Shah, Charles Elmaraghy, Theodoros Teknos, Nazhat Taj-Schaal, Lianbo Yu and Metin N. Gurcan
Appl. Sci. 2020, 10(17), 5894; https://doi.org/10.3390/app10175894 - 26 Aug 2020
Cited by 13 | Viewed by 2774
Abstract
Background and Objective: The aim of this study is to develop and validate an automated image segmentation-based frame selection and stitching framework to create enhanced composite images from otoscope videos. The proposed framework, called SelectStitch, is useful for classifying eardrum abnormalities using a single composite image instead of the entire raw otoscope video dataset. Methods: SelectStitch consists of a convolutional neural network (CNN) based semantic segmentation approach to detect the eardrum in each frame of the otoscope video, and a stitching engine to generate a high-quality composite image from the detected eardrum regions. In this study, we utilized two separate datasets: the first contains 36 otoscope videos that were used to train a semantic segmentation model, and the second, containing 100 videos, was used to test the proposed method. Cases from both adult and pediatric patients were used in this study. A U-Net architecture with 4 levels of depth was trained to automatically find eardrum regions in each otoscope video frame from the first dataset. After the segmentation, we automatically selected meaningful frames from the otoscope videos using a pre-defined threshold: a frame had to contain an eardrum region covering at least 20% of the frame area. We generated 100 composite images from the test dataset. Three ear, nose, and throat (ENT) specialists (ENT-I, ENT-II, ENT-III) compared, in two rounds, the composite images produced by SelectStitch against the composite images generated by the base process, i.e., stitching all the frames from the same video data, in terms of their diagnostic capabilities. Results: In the first round of the study, ENT-I, ENT-II, and ENT-III graded improvement for 58, 57, and 71 composite images out of 100, respectively, for SelectStitch over the base composite, reflecting greater diagnostic capabilities. In the repeat assessment, these numbers were 56, 56, and 64, respectively.
We observed that only 6%, 3%, and 3% of the cases received a lesser score than the base composite images, respectively, for ENT-I, ENT-II, and ENT-III in Round-1, and 4%, 0%, and 2% of the cases in Round-2. Conclusions: We conclude that the frame selection and stitching will increase the probability of detecting a lesion even if it appears in a few frames. Full article
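The frame-selection rule described above is simple enough to state directly: given the per-frame eardrum masks produced by the segmentation network, keep only the frames whose mask covers at least the threshold fraction of the frame. A sketch, where the 20% default mirrors the threshold in the abstract and the mask shapes are illustrative:

```python
import numpy as np

def select_frames(masks, min_coverage=0.2):
    """Return indices of frames whose binary eardrum mask covers at
    least `min_coverage` of the frame area; only these frames are
    passed on to the stitching engine."""
    return [i for i, m in enumerate(masks)
            if np.asarray(m, dtype=float).mean() >= min_coverage]
```

Filtering out low-coverage frames before stitching is what removes the blurry, glare-dominated frames that degrade the all-frames base composite.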

19 pages, 8651 KiB  
Article
Automatic Segmentation of Macular Edema in Retinal OCT Images Using Improved U-Net++
by Zhijun Gao, Xingle Wang and Yi Li
Appl. Sci. 2020, 10(16), 5701; https://doi.org/10.3390/app10165701 - 17 Aug 2020
Cited by 8 | Viewed by 3171
Abstract
The number and volume of retinal macular edemas are important indicators for screening and diagnosing retinopathy. To address the problem that existing methods for segmenting macular edemas in retinal optical coherence tomography (OCT) images perform poorly on diverse edemas, this paper proposes a new method for the automatic segmentation of macular edema regions in retinal OCT images using an improved U-Net++. The proposed method makes full use of the re-designed skip pathways and dense convolution blocks of U-Net++, reduces the semantic gap between the feature maps of the encoder and decoder sub-networks, and adds an improved ResNet as the backbone, which makes the extraction of features in the edema regions more accurate and improves the segmentation effect. The proposed method was trained and validated on the public Duke University dataset, and the experiments demonstrated that it not only improves the overall segmentation effect, but also significantly improves the segmentation precision for diverse edemas across multiple regions, as well as reducing the error in the number of edema regions. Full article

18 pages, 6142 KiB  
Article
Decoding Visual Motions from EEG Using Attention-Based RNN
by Dongxu Yang, Yadong Liu, Zongtan Zhou, Yang Yu and Xinbin Liang
Appl. Sci. 2020, 10(16), 5662; https://doi.org/10.3390/app10165662 - 14 Aug 2020
Cited by 10 | Viewed by 4612
Abstract
The main objective of this paper is to use deep neural networks to decode the electroencephalography (EEG) signals evoked when individuals perceive four types of motion stimuli (contraction, expansion, rotation, and translation). Methods for single-trial and multi-trial EEG classification are both investigated in this study. Attention mechanisms and a variant of recurrent neural networks (RNNs) are incorporated as the decoding model. The attention mechanisms emphasize task-related responses and reduce redundant information in the EEG, whereas the RNN learns feature representations for classification from the processed EEG data. To promote generalization of the decoding model, a novel online data augmentation method that randomly averages EEG sequences to generate artificial signals is proposed for single-trial EEG. For our dataset, the data augmentation method improves the accuracy of our model (based on RNN) and of two benchmark models (based on convolutional neural networks) by 5.60%, 3.92%, and 3.02%, respectively. The attention-based RNN reaches a mean accuracy of 67.18% for single-trial EEG decoding with data augmentation. When performing multi-trial EEG classification, the amount of training data decreases linearly after averaging, which may result in poor generalization. To address this deficiency, we devised three schemes to randomly combine data for network training. The results indicate that the proposed strategies effectively prevent overfitting and improve the correct classification rate compared with fixed averaging of EEG trials (by up to 19.20%). The highest accuracy achieved by the three strategies for multi-trial EEG classification is 82.92%. The decoding performance of the methods proposed in this work indicates their application potential in brain–computer interface (BCI) systems based on visual motion perception. Full article
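The augmentation idea, generating artificial trials by randomly averaging same-class EEG sequences, can be sketched in a few lines. The array shapes and the number of averaged trials `k` below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def augment_by_averaging(trials, n_new, k=2, rng=None):
    """Generate `n_new` artificial trials, each the element-wise mean
    of `k` randomly chosen trials of the SAME class.
    `trials` has shape (n_trials, n_channels, n_samples)."""
    rng = np.random.default_rng(rng)
    n = trials.shape[0]
    out = []
    for _ in range(n_new):
        idx = rng.choice(n, size=k, replace=False)  # distinct trials
        out.append(trials[idx].mean(axis=0))
    return np.stack(out)
```

Averaging attenuates trial-specific noise while keeping the class-locked response, so the artificial signals stay label-consistent; applied per batch ("online"), every epoch sees a fresh set of averages.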

9 pages, 2417 KiB  
Article
Total Variation-Based Noise Reduction Image Processing Algorithm for Confocal Laser Scanning Microscopy Applied to Activity Assessment of Early Carious Lesions
by Hee-Eun Kim, Seong-Hyeon Kang, Kyuseok Kim and Youngjin Lee
Appl. Sci. 2020, 10(12), 4090; https://doi.org/10.3390/app10124090 - 13 Jun 2020
Cited by 7 | Viewed by 2706
Abstract
The confocal laser scanning microscopy (CLSM) system has been widely used to analyze early carious lesions with fluorescent ligands in dental imaging. This system can be used to examine the physiological condition of cellular colonization in the tooth structure. However, the undesirable noise in CLSM images hinders accurate activity assessment of early carious lesions. To address this limitation, a total variation (TV)-based noise reduction algorithm with good edge preservation was developed, and its applicability to medical tooth specimen images obtained with CLSM was verified. To evaluate the imaging performance, the proposed algorithm was compared with conventional filtering methods in terms of the normalized noise power spectrum, contrast-to-noise ratio, and coefficient of variation. The results indicate that the proposed algorithm achieved better noise performance and fine-detail preservation, in comparison with the conventional methods. Full article

18 pages, 396 KiB  
Article
Double-Shot Transfer Learning for Breast Cancer Classification from X-Ray Images
by Mohammad Alkhaleefah, Shang-Chih Ma, Yang-Lang Chang, Bormin Huang, Praveen Kumar Chittem and Vishnu Priya Achhannagari
Appl. Sci. 2020, 10(11), 3999; https://doi.org/10.3390/app10113999 - 09 Jun 2020
Cited by 22 | Viewed by 4582
Abstract
Differentiation between benign and malignant breast cancer cases in X-ray images can be difficult due to their similar features. In recent studies, the transfer learning technique has been used to classify benign and malignant breast cancer by fine-tuning various pre-trained networks, such as AlexNet, the visual geometry group (VGG) network, GoogLeNet, and residual networks (ResNet), on breast cancer datasets. However, these pre-trained networks were trained on large benchmark datasets such as ImageNet, which do not contain labeled images related to breast cancer, which leads to poor performance. In this research, we introduce a novel technique based on the concept of transfer learning, called double-shot transfer learning (DSTL). DSTL is used to improve the overall accuracy and performance of the pre-trained networks for breast cancer classification. DSTL updates the learnable parameters (weights and biases) of any pre-trained network by fine-tuning them on a large dataset that is similar to the target dataset. Then, the updated networks are fine-tuned with the target dataset. Moreover, the number of X-ray images is enlarged by a combination of augmentation methods, including different variations of rotation, brightness, flipping, and contrast, to reduce overfitting and produce robust results. The proposed approach has demonstrated a significant improvement in the classification accuracy and performance of the pre-trained networks, making them more suitable for medical imaging. Full article
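The double-shot idea is independent of the particular network: take pretrained weights, fine-tune them first on a large dataset similar to the target, then fine-tune again on the target itself. The toy below reproduces that two-stage pattern with a plain logistic regression on synthetic 2-D data instead of a CNN on X-ray images; all datasets and hyperparameters here are fabricated for illustration, and stage one starts from zero weights rather than ImageNet weights.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.5, epochs=300):
    """Batch gradient-descent logistic regression. Passing an existing
    weight vector `w` continues training from it (i.e. fine-tuning)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w = w - lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

def make_data(rng, n, centre):
    """Two Gaussian classes centred at -centre and +centre."""
    X0 = rng.normal(loc=-centre, size=(n, 2))
    X1 = rng.normal(loc=+centre, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)].astype(int)

rng = np.random.default_rng(0)
X_sim, y_sim = make_data(rng, 250, 1.5)  # large dataset similar to the target
X_tgt, y_tgt = make_data(rng, 4, 1.5)    # small target training set
# shot 1: fit on the large similar dataset
w = train_logreg(X_sim, y_sim)
# shot 2: fine-tune the resulting weights on the target data
w = train_logreg(X_tgt, y_tgt, w=w, lr=0.1, epochs=50)
```

The point of the pattern is that shot 2 starts from weights already adapted to a related distribution, so the tiny target set only needs to nudge them rather than learn from scratch.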

17 pages, 5192 KiB  
Article
Multi-Path Recurrent U-Net Segmentation of Retinal Fundus Image
by Yun Jiang, Falin Wang, Jing Gao and Simin Cao
Appl. Sci. 2020, 10(11), 3777; https://doi.org/10.3390/app10113777 - 29 May 2020
Cited by 29 | Viewed by 3373
Abstract
Diabetes can induce diseases including diabetic retinopathy, cataracts, glaucoma, etc. The blindness caused by these diseases is irreversible. Early analysis of retinal fundus images, including optic disc and optic cup detection and retinal blood vessel segmentation, can effectively identify these diseases. The existing methods lack sufficient discrimination power for the fundus image and are easily affected by pathological regions. This paper proposes a novel multi-path recurrent U-Net architecture to achieve the segmentation of retinal fundus images. The effectiveness of the proposed network structure was proved by two segmentation tasks: optic disc and optic cup segmentation and retinal vessel segmentation. Our method achieved state-of-the-art results in the segmentation of the Drishti-GS1 dataset. Regarding optic disc segmentation, the accuracy and Dice values reached 0.9967 and 0.9817, respectively; as regards optic cup segmentation, the accuracy and Dice values reached 0.9950 and 0.8921, respectively. Our proposed method was also verified on the retinal blood vessel segmentation dataset DRIVE and achieved a good accuracy rate. Full article

19 pages, 1122 KiB  
Article
PySpark-Based Optimization of Microwave Image Reconstruction Algorithm for Head Imaging Big Data on High-Performance Computing and Google Cloud Platform
by Rahmat Ullah and Tughrul Arslan
Appl. Sci. 2020, 10(10), 3382; https://doi.org/10.3390/app10103382 - 14 May 2020
Cited by 10 | Viewed by 4398
Abstract
For processing large-scale medical imaging data, the adoption of high-performance computing and cloud-based resources is rapidly gaining attention. Due to its low-cost and non-invasive nature, microwave technology is being investigated for breast and brain imaging. The microwave imaging via space-time algorithm and its extended versions are commonly used, as they provide high-quality images. However, due to their intensive computation and sequential execution, these algorithms are not capable of producing images in an acceptable time. In this paper, a parallel microwave image reconstruction algorithm based on Apache Spark, running on high-performance computing and the Google Cloud Platform, is proposed. The input data are first converted to a resilient distributed dataset and then distributed to multiple nodes in a cluster. Subsets of the pixel data are calculated in parallel on these nodes, and the results are retrieved by a master node for image reconstruction. Using Apache Spark, the performance of the parallel microwave image reconstruction algorithm was evaluated on high-performance computing and the Google Cloud Platform, showing an average speed-up of 28.56 times on four homogeneous computing nodes. Experimental results revealed that the proposed parallel algorithm fully exploits the available parallelism, resulting in the fast reconstruction of images from radio-frequency sensor data. This paper also illustrates that the proposed algorithm is general and can be deployed on any master-slave architecture. Full article
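Spark itself is not needed to see the structure of the scheme. The sketch below substitutes Python's ThreadPoolExecutor for the Spark cluster to show the same partition/map/collect pattern: an RDD partition becomes a chunk of pixel rows, workers evaluate their chunks independently, and the master stitches the partial images together. `pixel_fn` is a placeholder standing in for the per-pixel delay-and-sum computation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_parallel(pixel_fn, shape, n_workers=4):
    """Scatter/gather skeleton: partition the pixel grid into row
    chunks, evaluate each chunk in a worker, and vstack the results
    back into one image (the collect step)."""
    chunks = np.array_split(np.arange(shape[0]), n_workers)

    def work(rows):
        return np.array([[pixel_fn(i, j) for j in range(shape[1])]
                         for i in rows])

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(work, chunks))  # order-preserving map
    return np.vstack(parts)
```

Per-pixel independence is what makes the algorithm embarrassingly parallel; with Spark the `work` function becomes a task applied to each RDD partition on a cluster node instead of a thread.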
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

13 pages, 981 KiB  
Article
Transfer Learning Algorithm of P300-EEG Signal Based on XDAWN Spatial Filter and Riemannian Geometry Classifier
by Feng Li, Yi Xia, Fei Wang, Dengyong Zhang, Xiaoyu Li and Fan He
Appl. Sci. 2020, 10(5), 1804; https://doi.org/10.3390/app10051804 - 05 Mar 2020
Cited by 35 | Viewed by 4184
Abstract
The electroencephalogram (EEG) signal in the brain–computer interface (BCI) suffers from great cross-subject variability. The BCI system needs to be retrained each time before it is used, which is a waste of resources and time, and it is therefore difficult to generalize a fixed classification method for all subjects. The transfer learning method proposed in this article, which combines the XDAWN spatial filter and a Riemannian Geometry classifier (RGC), can achieve offline cross-subject transfer learning in the P300-speller paradigm. The XDAWN spatial filter is used to enhance the P300 components in the raw signal as well as to reduce its dimensions. Then, the Riemannian Geometry Mean (RGM) is used as the reference matrix to perform an affine transformation of the symmetric positive definite (SPD) covariance matrices calculated from the filtered signal, which makes the data from different subjects comparable. Finally, the RGC is used to obtain the result of the transfer learning experiments. The proposed algorithm was evaluated on two datasets (Dataset I from real patients and Dataset II from the laboratory). Compared with two state-of-the-art and classic algorithms in the current BCI field, the Ensemble of Support Vector Machines (E-SVM) and Stepwise Linear Discriminant Analysis (SWLDA), the maximum averaged area under the receiver operating characteristic curve (AUC) score of our algorithm reached 0.836, proving the potential of the proposed approach. Full article
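The affine-transformation step can be sketched in NumPy. The standard recentering formula maps each covariance matrix C to M^(-1/2) C M^(-1/2), so the reference matrix M itself maps to the identity; as an assumption for illustration, an already-computed reference matrix is passed in (the paper uses the Riemannian mean, which is not reimplemented here).

```python
import numpy as np

def inv_sqrtm(M):
    """Inverse matrix square root of a symmetric positive definite matrix,
    via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def recenter(covs, ref):
    """Affine-transform each SPD covariance matrix so that the reference
    matrix maps to the identity: C_tilde = ref^{-1/2} C ref^{-1/2}."""
    R = inv_sqrtm(ref)
    return [R @ C @ R for C in covs]
```

After this transformation, covariance matrices from different subjects are expressed relative to their own reference and become comparable.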
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

24 pages, 1494 KiB  
Article
Spinal Cord Segmentation in Ultrasound Medical Imagery
by Bilel Benjdira, Kais Ouni, Mohamad M. Al Rahhal, Abdulrahman Albakr, Amro Al-Habib and Emad Mahrous
Appl. Sci. 2020, 10(4), 1370; https://doi.org/10.3390/app10041370 - 18 Feb 2020
Cited by 16 | Viewed by 5820
Abstract
In this paper, we study and evaluate the task of semantic segmentation of the spinal cord in ultrasound medical imagery. This task is useful for neurosurgeons to analyze spinal cord movement during and after the laminectomy surgical operation. Laminectomy is performed on patients who suffer from abnormal pressure on the spinal cord: the surgeon cuts the bones of the laminae and the intervening ligaments to relieve this pressure. During the surgery, ultrasound waves can pass through the laminectomy area to give real-time exploitable images of the spinal cord. The surgeon uses them to confirm spinal cord decompression or, occasionally, to assess a tumor adjacent to the spinal cord. A freely pulsating spinal cord is a sign of adequate decompression. To evaluate the semantic segmentation approaches chosen in this study, we constructed two datasets using images collected from 10 different patients undergoing laminectomy surgery. We found that the best solution for this task is Fully Convolutional DenseNets if the spinal cord is already in the training set; if the spinal cord does not exist in the training set, U-Net is the best. We also studied the effect of integrating into both models deep learning components such as Atrous Spatial Pyramid Pooling (ASPP) and Depthwise Separable Convolution (DSC). We added a post-processing step and detailed the configurations to set for both models. Full article
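The abstract does not restate the evaluation metric, but a standard way to score a predicted spinal-cord mask against a ground-truth mask in segmentation studies like this one is the Dice overlap coefficient; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = spinal-cord pixel).
    Returns 1.0 for identical masks and ~0.0 for disjoint ones."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```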
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

13 pages, 14793 KiB  
Article
Blob Detection and Deep Learning for Leukemic Blood Image Analysis
by Cecilia Di Ruberto, Andrea Loddo and Giovanni Puglisi
Appl. Sci. 2020, 10(3), 1176; https://doi.org/10.3390/app10031176 - 10 Feb 2020
Cited by 33 | Viewed by 7917
Abstract
In microscopy, laboratory tests make use of cell counters or flow cytometers to perform tests on blood cells, like the complete blood count, rapidly. However, a manual blood smear examination is still needed to verify the counter results and to monitor patients under therapy. Moreover, manual inspection permits the description of the cells’ appearance, as well as of any abnormalities. Unfortunately, manual analysis is long and tedious, and its result can be subjective and error-prone. Nevertheless, using image processing techniques, it is possible to automate the entire workflow, both reducing the operators’ workload and improving the diagnosis results. In this paper, we propose a novel method for recognizing white blood cells in microscopic blood images and classifying them as healthy or affected by leukemia. The presented system is tested on public datasets for leukemia detection: the SMC-IDB, the IUMS-IDB, and the ALL-IDB. The results are promising, achieving 100% accuracy for the first two datasets and, for the ALL-IDB, 99.7% in white cell detection and 94.1% in leukemia classification, outperforming the state of the art. Full article
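The blob-detection stage can be sketched as a minimal threshold-and-label pass in pure Python/NumPy. The paper's actual detector is more elaborate, so treat the fixed threshold and 4-connectivity below as illustrative assumptions.

```python
import numpy as np
from collections import deque

def detect_blobs(image, threshold):
    """Threshold the image, then label 4-connected foreground components
    by breadth-first search. Returns one list of pixel coordinates per blob."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                q, blob = deque([(r, c)]), []
                seen[r, c] = True
                while q:                       # flood-fill one component
                    y, x = q.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Each returned component would then be cropped and passed to the classifier.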
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

17 pages, 5648 KiB  
Article
A High-Accuracy Mathematical Morphology and Multilayer Perceptron-Based Approach for Melanoma Detection
by Luz-María Sánchez-Reyes, Juvenal Rodríguez-Reséndiz, Sebastián Salazar-Colores, Gloria Nélida Avecilla-Ramírez and Gerardo Israel Pérez-Soto
Appl. Sci. 2020, 10(3), 1098; https://doi.org/10.3390/app10031098 - 06 Feb 2020
Cited by 17 | Viewed by 17406
Abstract
According to the World Health Organization (WHO), melanoma is the most severe type of skin cancer and the leading cause of death from skin cancer worldwide. Certain features of melanoma include size, shape, color, or texture changes of a mole. In this work, a novel, robust, and efficient method for the detection and classification of melanoma in simple and dermatological images is proposed. It is achieved by using the HSV (Hue, Saturation, Value) color space along with mathematical morphology and a Gaussian filter to detect the region of interest and estimate four descriptors: symmetry, edge, color, and size. Although these descriptors have been used for several years, the way they are computed in this proposal is one of the factors that enhances the results. Subsequently, a multilayer perceptron is employed to classify between malignant and benign melanoma. Three datasets of simple and dermatological images commonly used in the literature were employed to train and evaluate the performance of the proposed method. According to k-fold cross-validation, the method outperforms three state-of-the-art works, achieving an accuracy of 98.5% and 98.6%, a sensitivity of 96.68% and 98.05%, and a specificity of 98.15% and 98.01% in simple and dermatological images, respectively. The results prove that its use as an assistive device for the detection of melanoma would improve reliability levels compared to conventional methods. Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

15 pages, 1507 KiB  
Article
Enhancing Multi-tissue and Multi-scale Cell Nuclei Segmentation with Deep Metric Learning
by Tomas Iesmantas, Agne Paulauskaite-Taraseviciene and Kristina Sutiene
Appl. Sci. 2020, 10(2), 615; https://doi.org/10.3390/app10020615 - 15 Jan 2020
Cited by 13 | Viewed by 2681
Abstract
(1) Background: The segmentation of cell nuclei is an essential task in a wide range of biomedical studies and clinical practices. The full automation of this process remains a challenge due to intra- and internuclear variations across a wide range of tissue morphologies, as well as differences in staining protocols and imaging procedures. (2) Methods: A deep learning model with metric embeddings, such as contrastive loss and triplet loss with semi-hard negative mining, is proposed in order to accurately segment cell nuclei in a diverse set of microscopy images. The effectiveness of the proposed model was tested on a large-scale multi-tissue collection of microscopy image sets. (3) Results: The use of deep metric learning increased the overall segmentation performance by 3.12% in the average Dice similarity coefficient as compared to no metric learning. In particular, the largest gain was observed for segmenting cell nuclei in H&E-stained images when the deep learning network and triplet loss with semi-hard negative mining were considered for the task. (4) Conclusion: We conclude that deep metric learning gives an additional boost to the overall learning process and consequently improves the segmentation performance. Notably, the improvement ranges approximately between 0.13% and 22.31% for different types of images in terms of Dice coefficients when compared to deep learning without metric learning. Full article
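The triplet loss with semi-hard negative mining mentioned in (2) can be sketched per anchor in NumPy. In the real model the distances are computed on learned embeddings inside the training loop; the margin value and the single-anchor formulation below are illustrative assumptions.

```python
import numpy as np

def triplet_loss_semihard(anchor, positive, negatives, margin=0.2):
    """Hinge triplet loss for one anchor. A 'semi-hard' negative is farther
    from the anchor than the positive, but still within the margin; if one
    exists, the closest such negative is used, else the hardest negative."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(negatives - anchor, axis=1)
    semi = d_an[(d_an > d_ap) & (d_an < d_ap + margin)]
    d_neg = semi.min() if semi.size else d_an.min()
    return max(0.0, d_ap - d_neg + margin)
```

Semi-hard mining keeps the gradient informative: trivially easy negatives give zero loss, while the hardest negatives can destabilize early training.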
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

15 pages, 2911 KiB  
Article
Quantitative Analysis of Melanosis Coli Colonic Mucosa Using Textural Patterns
by Chung-Ming Lo, Chun-Chang Chen, Yu-Hsuan Yeh, Chun-Chao Chang and Hsing-Jung Yeh
Appl. Sci. 2020, 10(1), 404; https://doi.org/10.3390/app10010404 - 05 Jan 2020
Cited by 4 | Viewed by 4796
Abstract
Melanosis coli (MC) is a disease related to long-term use of anthranoid laxative agents. Patients with clinical constipation or obesity are more likely to use these drugs for long periods. Moreover, patients with MC are more likely to develop polyps, particularly adenomatous polyps, which can transform into colorectal cancer. Recognizing multiple polyps in MC is challenging due to their heterogeneity. Therefore, this study proposed a quantitative assessment of MC colonic mucosa using texture patterns. In total, MC colonoscopy images of 1092 person-times were included in this study. First, the correlations among carcinoembryonic antigens, polyp texture, and pathology were analyzed. Then, 181 patients with MC were selected for further analysis, while patients with unclear images were excluded. Texture patterns in the colorectal images were extracted using the gray-level co-occurrence matrix. Pearson correlation analysis indicated that five texture features were significantly correlated with pathological results (p < 0.001). These results could be used in the future to design real-time assistive software for physicians. The colonoscopy information and image-analysis data can provide clinicians with suggestions for assessing patients with MC. Full article
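A gray-level co-occurrence matrix for a single pixel offset, plus one derived texture feature (contrast), can be sketched in NumPy. The study's actual offsets, quantization levels, and the five significant features are not restated in the abstract, so the choices below are illustrative.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, normalized so
    the entries sum to 1. `image` must hold integer levels in [0, levels)."""
    dy, dx = offset
    h, w = image.shape
    M = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[image[y, x], image[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_contrast(M):
    """Contrast texture feature: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(M.shape)
    return ((i - j) ** 2 * M).sum()
```

Other classic GLCM features (energy, homogeneity, correlation) are computed from the same matrix in the same way.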
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

15 pages, 9639 KiB  
Article
Optimized Resolution-Oriented Many-to-One Intensity Standardization Method for Magnetic Resonance Images
by Yuan Gao, Yuanyuan Wang and Jinhua Yu
Appl. Sci. 2019, 9(24), 5531; https://doi.org/10.3390/app9245531 - 16 Dec 2019
Cited by 2 | Viewed by 1921
Abstract
With the development of big data, radiomics, and deep-learning methods based on magnetic resonance (MR) images, it is necessary to construct large databases containing MR images from multiple centers. If the huge intensity-distribution differences among images are reduced or even eliminated, robust computer-aided diagnosis models can be established. Therefore, an optimized intensity standardization model is proposed. The network structure, loss function, and data input strategy were optimized to better avoid image resolution loss during transformation. The experimental dataset was obtained from five MR scanners located in four hospitals and was divided into nine groups based on the imaging parameters; in total, 9152 MR images from 499 participants were collected. Experiments show the superiority of the proposed method over the previously proposed unified model in resolution metrics including the peak signal-to-noise ratio, structural similarity, visual information fidelity, universal quality index, and image fidelity criterion. Another experiment further shows the advantage of the proposed method in increasing the effectiveness of subsequent computer-aided diagnosis models through better preservation of MR image details. Moreover, the advantage over conventional standardization methods is also shown. Thus, MR images from different centers can be standardized using the proposed method, which will facilitate numerous data-driven medical imaging studies. Full article
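The paper's standardization model is a learned network, which is not reproduced here; a classic non-learned baseline for the same problem, against which such methods are typically compared, is histogram matching. A minimal NumPy sketch:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so that their empirical cumulative distribution
    matches that of the reference image (a classic standardization baseline)."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, find the reference intensity at that quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, source.ravel())].reshape(source.shape)
```

Unlike the learned model, this baseline ignores anatomy and can distort details, which is the limitation the proposed method targets.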
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

15 pages, 2569 KiB  
Article
Deep Learning for Non-Invasive Determination of the Differentiation Status of Human Neuronal Cells by Using Phase-Contrast Photomicrographs
by Maya Ooka, Yuta Tokuoka, Shori Nishimoto, Noriko F. Hiroi, Takahiro G. Yamada and Akira Funahashi
Appl. Sci. 2019, 9(24), 5503; https://doi.org/10.3390/app9245503 - 14 Dec 2019
Cited by 3 | Viewed by 4251
Abstract
Regenerative medicine using neural stem cells (NSCs), which self-renew and have pluripotency, has recently attracted a lot of interest. Much research has focused on the transplantation of differentiated NSCs to damaged tissues for the treatment of various neurodegenerative diseases and spinal cord injuries. However, current approaches for distinguishing differentiated from non-differentiated NSCs at the single-cell level have low reproducibility or are invasive to the cells. Here, we developed a fully automated, non-invasive convolutional neural network-based model to determine the differentiation status of human NSCs at the single-cell level from phase-contrast photomicrographs; after training, our model showed an identification accuracy greater than 94%. To understand how our model distinguished between differentiated and non-differentiated NSCs, we evaluated the informative features it learned for the two cell types and found that it had learned several biologically relevant features related to NSC shape during differentiation. We also used our model to examine the differentiation of NSCs over time; the findings confirmed our model’s ability to distinguish between non-differentiated and differentiated NSCs. Thus, our model was able to non-invasively and quantitatively identify differentiated NSCs with high accuracy and reproducibility and could therefore be an ideal means of identifying differentiated NSCs in the clinic. Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

18 pages, 31820 KiB  
Article
IVUS Image Segmentation Using Superpixel-Wise Fuzzy Clustering and Level Set Evolution
by Menghua Xia, Wenjun Yan, Yi Huang, Yi Guo, Guohui Zhou and Yuanyuan Wang
Appl. Sci. 2019, 9(22), 4967; https://doi.org/10.3390/app9224967 - 18 Nov 2019
Cited by 11 | Viewed by 8177
Abstract
Reliable detection of the media-adventitia border (MAB) and the lumen-intima border (LIB) in intravascular ultrasound (IVUS) images remains a challenging task that is of high clinical interest. In this paper, we propose a superpixel-wise fuzzy clustering technique modified by edges, followed by level set evolution (SFCME-LSE), for automatic border extraction in 40 MHz IVUS images. The contributions are three-fold. First, the usage of superpixels suppresses the influence of speckle noise in ultrasound images on the clustering results. Second, we propose a region of interest (ROI) assignment scheme to prevent the segmentation from being distracted by pathological structures and artifacts. Finally, the contour is converged towards the target boundary through LSE with an appropriately improved edge indicator. Quantitative evaluations on two IVUS datasets by the Jaccard measure (JM), the percentage of area difference (PAD), and the Hausdorff distance (HD) demonstrate the effectiveness of the proposed SFCME-LSE method. SFCME-LSE achieves the minimal HD of 1.20 ± 0.66 mm and 1.18 ± 0.70 mm for the MAB and LIB, respectively, among several state-of-the-art methods on a publicly available dataset. Full article
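The fuzzy-clustering stage of SFCME-LSE can be sketched as plain fuzzy c-means on 1-D features in NumPy. The edge modification and the level-set evolution are beyond this sketch, and using mean superpixel intensities as the feature is an assumption for illustration.

```python
import numpy as np

def fuzzy_cmeans(x, k, m=2.0, iters=100):
    """Plain fuzzy c-means on 1-D features (e.g. mean superpixel
    intensities). Returns cluster centers and memberships U (n x k)."""
    x = np.asarray(x, dtype=float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))  # spread-out init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))       # fuzzy membership update
        U /= U.sum(axis=1, keepdims=True)
        W = U ** m
        centers = (W * x[:, None]).sum(axis=0) / W.sum(axis=0)
    return centers, U
```

Clustering superpixels rather than raw pixels is what suppresses the speckle noise: each feature is already an average over a locally homogeneous region.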
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

11 pages, 1682 KiB  
Article
Intelligent Glioma Grading Based on Deep Transfer Learning of MRI Radiomic Features
by Chung-Ming Lo, Yu-Chih Chen, Rui-Cian Weng and Kevin Li-Chun Hsieh
Appl. Sci. 2019, 9(22), 4926; https://doi.org/10.3390/app9224926 - 16 Nov 2019
Cited by 15 | Viewed by 3676
Abstract
According to the World Health Organization classification of central nervous system tumors, diffuse gliomas are divided into grade 2, 3, and 4 gliomas in accordance with their aggressiveness. To quantitatively evaluate a tumor’s malignancy from brain magnetic resonance imaging, this study proposed a computer-aided diagnosis (CAD) system based on a deep convolutional neural network (DCNN). Gliomas from a multi-center database (The Cancer Imaging Archive), comprising a total of 30 grade 2, 43 grade 3, and 57 grade 4 gliomas, were used for the training and evaluation of the proposed CAD. Using transfer learning to fine-tune AlexNet, a DCNN, its internal layers and parameters trained on a million images were transferred to learn how to differentiate the acquired gliomas. Data augmentation was also implemented to increase possible spatial and geometric variations for a better training model. The transferred DCNN achieved an accuracy of 97.9% with a standard deviation of ±1% and an area under the receiver operating characteristic curve (Az) of 0.9991 ± 0. This was superior to handcrafted image features, to the DCNN without pretrained features (mean accuracy of 61.42% with a standard deviation of ±7% and mean Az of 0.8222 ± 0.07), and to the DCNN without data augmentation, which performed worst (mean accuracy of 59.85% with a standard deviation of ±16% and mean Az of 0.7896 ± 0.18). The DCNN with pretrained features and data augmentation can accurately and efficiently classify grade 2, 3, and 4 gliomas. The high accuracy is promising for providing diagnostic suggestions to radiologists in the clinic. Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

14 pages, 1627 KiB  
Article
Systematic Method for Morphological Reconstruction of the Semicircular Canals Using a Fully Automatic Skeletonization Process
by Iván Cortés-Domínguez, María A. Fernández-Seara, Nicolás Pérez-Fernández and Javier Burguete
Appl. Sci. 2019, 9(22), 4904; https://doi.org/10.3390/app9224904 - 15 Nov 2019
Cited by 4 | Viewed by 3572
Abstract
We present a novel method to characterize the morphology of the semicircular canals of the inner ear. Previous experimental works share a common weakness: human-operator subjectivity. Although these methods are mostly automatic, they rely on a human decision to determine some particular anatomical positions. We implement a systematic analysis that is free of human subjectivity. Our approach is based on a specific magnetic resonance study performed on a group of 20 volunteers. From the raw data, the proposed method defines the centerline of all three semicircular canals through a skeletonization process and computes the angle of the functional pair and other geometrical parameters. This approach allows us to assess the inter-operator effect on other methods. From our results, we conclude that, although an average geometry can be defined, the inner ear anatomy cannot be reduced to a single geometry as seen in previous experimental works. We observed a relevant variability of the geometrical parameters in our cohort of volunteers that hinders this usual simplification. Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

15 pages, 3257 KiB  
Article
Multiple Feature Integration for Classification of Thoracic Disease in Chest Radiography
by Thi Kieu Khanh Ho and Jeonghwan Gwak
Appl. Sci. 2019, 9(19), 4130; https://doi.org/10.3390/app9194130 - 02 Oct 2019
Cited by 69 | Viewed by 5005
Abstract
The accurate localization and classification of lung abnormalities from radiological images are important for clinical diagnosis and treatment strategies. However, multilabel classification, wherein medical images are interpreted to point out multiple existing or suspected pathologies, presents practical constraints. Building a highly precise classification model typically requires a huge number of images manually annotated with labels and finding masks, which are expensive to acquire in practice. To address this intrinsically weakly supervised learning problem, we present the integration of different features extracted from shallow handcrafted techniques and a pretrained deep CNN model. The model consists of two main approaches: a localization approach that concentrates adaptively on the pathologically abnormal regions utilizing a pretrained DenseNet-121, and a classification approach that integrates four types of local and deep features extracted respectively from SIFT, GIST, LBP, and HOG descriptors and convolutional CNN features. We demonstrate that our approaches efficiently leverage interdependencies among target annotations and establish state-of-the-art classification results for 14 thoracic diseases in comparison with current reference baselines on the publicly available ChestX-ray14 dataset. Full article
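Of the four handcrafted descriptors named above, LBP is compact enough to sketch in NumPy. The exact variant the paper uses (radius, uniform patterns) is not restated in the abstract, so treat the basic 8-neighbour code below as illustrative.

```python
import numpy as np

def lbp_8(image):
    """8-neighbour local binary pattern codes for the interior pixels:
    each neighbour >= center contributes one bit of the 8-bit code."""
    c = image[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = image[1 + dy:image.shape[0] - 1 + dy,
                   1 + dx:image.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(image):
    """Normalized 256-bin histogram of LBP codes: a texture feature vector
    that can be concatenated with the other descriptors."""
    h = np.bincount(lbp_8(image).ravel(), minlength=256)
    return h / h.sum()
```

In a multiple-feature pipeline such as this paper's, the resulting histogram would be concatenated with the SIFT, GIST, HOG, and CNN feature vectors before classification.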
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

Review


34 pages, 3867 KiB  
Review
Quantification of Liver Fibrosis—A Comparative Study
by Alexandros Arjmand, Markos G. Tsipouras, Alexandros T. Tzallas, Roberta Forlano, Pinelopi Manousou and Nikolaos Giannakeas
Appl. Sci. 2020, 10(2), 447; https://doi.org/10.3390/app10020447 - 08 Jan 2020
Cited by 25 | Viewed by 13961
Abstract
Liver disease has been identified as the fifth most common cause of death worldwide, and its incidence continues to rise steadily. In the last three decades, several publications have focused on the quantification of liver fibrosis by means of the estimation of the collagen proportional area (CPA) in liver biopsies using digital image analysis (DIA). In this paper, early and recent studies on this topic are reviewed with respect to the following research aims: the datasets used for the analysis, the employed image processing techniques, the obtained results, and the derived conclusions. The purpose is to identify the major strengths and “gray areas” in the landscape of this topic. Full article
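Once the collagen stain has been segmented, the CPA itself reduces to a pixel ratio. A minimal sketch, assuming binary masks for the stained collagen and the tissue region are already available from the DIA pipeline:

```python
import numpy as np

def collagen_proportional_area(stain_mask, tissue_mask):
    """CPA: fraction of tissue pixels classified as collagen (stained).
    Both inputs are binary masks of the same shape."""
    tissue = tissue_mask.astype(bool)
    collagen = stain_mask.astype(bool) & tissue
    return collagen.sum() / tissue.sum()
```

The studies reviewed differ mainly in how those two masks are obtained (staining protocol, color segmentation, artifact removal), not in this final ratio.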
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

Other

13 pages, 1498 KiB  
Technical Note
Color Enhancement Strategies for 3D Printing of X-ray Computed Tomography Bone Data for Advanced Anatomy Teaching Models
by Megumi Inoue, Tristan Freel, Anthony Van Avermaete and W. Matthew Leevy
Appl. Sci. 2020, 10(5), 1571; https://doi.org/10.3390/app10051571 - 25 Feb 2020
Cited by 10 | Viewed by 4080
Abstract
Three-dimensional (3D) printed anatomical models are valuable visual aids that are widely used in clinical and academic settings to teach complex anatomy. Procedures for converting human biomedical image datasets, like X-ray computed tomography (CT), to printable 3D files have been explored, allowing easy reproduction of highly accurate models; however, these largely remain monochrome. While multi-color 3D printing is available in two accessible modalities (binder-jetting and poly-jet/multi-jet systems), studies examining the viability of these technologies for producing anatomical teaching models are relatively sparse, especially for sub-structures within a segmentation of homogeneous tissue density. Here, we outline a strategy to manually highlight anatomical subregions of a given structure and multi-color 3D print the resultant models in a cost-effective manner. Readily available high-resolution 3D reconstructed models are accessible to the public in online libraries. From these databases, four representative files (of a femur, lumbar vertebra, scapula, and innominate bone) were selected and digitally color enhanced with one of two strategies (painting or splitting) guided by Feneis and Dauber’s Pocket Atlas of Human Anatomy. The resulting models were created via 3D printing with binder-jet and/or poly-jet machines, with important features, such as muscle origin and insertion points, highlighted using multiple colors. The resulting multi-color physical models are promising teaching tools that will enhance the anatomical learning experience. Full article
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)