Special Issue "Computer-aided Biomedical Imaging 2020: Advances and Prospects"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (31 December 2020).

Special Issue Editors

Prof. Dr. Marcos Ortega Hortas
Guest Editor
VARPA Group, Faculty of Informatics D420, CITIC Research Center, University of A Coruña, Campus de Elviña S/N 15071, A Coruña, Spain
Interests: computer vision; biomedical image processing; pattern recognition and medical informatics
Dr. Jorge Novo Buján
Guest Editor
CITIC Research Center of Information and Communication Technologies, University of A Coruña, A Coruña, Spain
Interests: computer vision; image processing; pattern recognition; biomedical image processing; machine learning
Dr. Pablo Mesejo Santiago
Guest Editor
Marie Curie Individual Fellowship, Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
Interests: computer vision (image segmentation, image classification, image registration); machine learning (deep and shallow neural networks, ensemble classifiers); soft computing (metaheuristics); biomedical image analysis (in neuroscience, gastroenterology, and forensic sciences)

Special Issue Information

Dear Colleagues,

At present, image acquisition and analysis form a fundamental basis for many biomedical disciplines. In tasks such as screening, diagnosis, treatment, drug development, and molecular analysis, visual information is crucial for successful performance. Given the existence of numerous image modalities of ever-increasing quality, the time and effort demanded from specialists for manual analysis is overwhelming, leading to an underutilization of the available visual information. Computerized solutions that aid image analysis via automatic or semi-automatic procedures have therefore become a necessity. In recent years, along with the availability of huge amounts of biomedical imaging data, new computer-based paradigms (e.g., Big Data and deep learning) have been growing in popularity, improving traditional procedures in many applications, including biomedical image analysis.

This Special Issue focuses on the recent advances and prospects in computer-aided biomedical imaging and welcomes contributions in topics that include but are not limited to:

  • Biomedical image analysis
  • Deep learning in biomedicine
  • Artificial Intelligence in biomedicine
  • Applied soft computing
  • Computer-assisted diagnosis
  • Image-guided therapy
  • Image-guided surgery and intervention
  • 2D and 3D modeling
  • 2D and 3D segmentation
  • 2D and 3D reconstruction
  • 2D and 3D registration and fusion
  • Motion analysis
  • Telemedicine with medical images
  • Image quality assessment
  • Applications of Big Data in imaging
  • Biomedical robotics and haptics

Prof. Dr. Marcos Ortega Hortas
Dr. Jorge Novo Buján
Dr. Pablo Mesejo Santiago
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical image analysis 
  • Deep learning in biomedicine
  • Artificial Intelligence in biomedicine 
  • Applied soft computing 
  • Computer-assisted diagnosis
  • Image-guided therapy 
  • Image-guided surgery and intervention
  • 2D and 3D modeling
  • 2D and 3D segmentation 
  • 2D and 3D reconstruction
  • 2D and 3D registration and fusion
  • Motion analysis
  • Telemedicine with medical images
  • Image quality assessment 
  • Applications of Big Data in imaging 
  • Biomedical robotics and haptics

Published Papers (35 papers)


Research


Article
OtoPair: Combining Right and Left Eardrum Otoscopy Images to Improve the Accuracy of Automated Image Analysis
Appl. Sci. 2021, 11(4), 1831; https://doi.org/10.3390/app11041831 - 19 Feb 2021
Cited by 1 | Viewed by 616
Abstract
The accurate diagnosis of otitis media (OM) and other middle ear and eardrum abnormalities is difficult, even for experienced otologists. In our earlier studies, we developed computer-aided diagnosis systems to improve the diagnostic accuracy. In this study, we investigate a novel approach, called OtoPair, which uses paired eardrum images together rather than a single eardrum image to classify them as ‘normal’ or ‘abnormal’. This also mimics the way that otologists evaluate ears, because they diagnose eardrum abnormalities by examining both ears. Our approach creates a new feature vector, which is formed with extracted features from a pair of high-resolution otoscope images or images that are captured by digital video-otoscopes. The feature vector has two parts. The first part consists of lookup table-based values created by using deep learning techniques reported in our previous OtoMatch content-based image retrieval system. The second part consists of handcrafted features that are created by recording registration errors between paired eardrums, color-based features, such as histograms of the a* and b* components of the L*a*b* color space, and statistical measurements of these color channels. The extracted features are concatenated to form a single feature vector, which is then classified by a tree bagger classifier. A total of 150 pairs (300 single eardrum images), which are either same-category (normal-normal and abnormal-abnormal) or different-category (normal-abnormal and abnormal-normal) pairs, were used to perform several experiments. The proposed approach increases the accuracy from 78.7% (±0.1%) to 85.8% (±0.2%) using three-fold cross-validation. These are promising results with a limited number of eardrum pairs that demonstrate the feasibility of using pairs of eardrum images instead of single eardrum images to improve the diagnostic accuracy. Full article
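As an illustration of the handcrafted color features described above, here is a minimal NumPy sketch; the bin count, value range, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def color_features(a_channel, b_channel, bins=16):
    """Histogram and summary statistics of the a* and b* channels
    of an eardrum image already converted to L*a*b* space."""
    feats = []
    for ch in (a_channel, b_channel):
        # Normalized histogram over the nominal a*/b* value range.
        hist, _ = np.histogram(ch, bins=bins, range=(-128, 127), density=True)
        feats.extend(hist)
        # Simple statistical measurements of the color channel.
        feats.extend([ch.mean(), ch.std()])
    return np.asarray(feats)

def pair_feature_vector(left_feats, right_feats):
    """Concatenate per-ear features into a single vector, mirroring the
    idea of classifying both eardrums of a patient together."""
    return np.concatenate([left_feats, right_feats])
```

In a full pipeline, a vector like this would be concatenated with the lookup table-based deep features before classification.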
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Desktop 3D Printing: Key for Surgical Navigation in Acral Tumors?
Appl. Sci. 2020, 10(24), 8984; https://doi.org/10.3390/app10248984 - 16 Dec 2020
Cited by 1 | Viewed by 492
Abstract
Surgical navigation techniques have shown potential benefits in orthopedic oncologic surgery. However, the translation of these results to acral tumor resection surgeries is challenging due to the large number of joints with complex movements of the affected areas (located in distal extremities). This study proposes a surgical workflow that combines an intraoperative open-source navigation software, based on a multi-camera tracking, with desktop three-dimensional (3D) printing for accurate navigation of these tumors. Desktop 3D printing was used to fabricate patient-specific 3D printed molds to ensure that the distal extremity is in the same position both in preoperative images and during image-guided surgery (IGS). The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in hand and foot). The validation involved deformation analysis of the 3D-printed mold after sterilization, accuracy of the system in patient-specific 3D-printed phantoms, and feasibility of the workflow during the surgical intervention. The sterilization process did not lead to significant deformations of the mold (mean error below 0.20 mm). The overall accuracy of the system was 1.88 mm evaluated on the phantoms. IGS guidance was feasible during both surgeries, allowing surgeons to verify enough margin during tumor resection. The results obtained have demonstrated the viability of combining open-source navigation and desktop 3D printing for acral tumor surgeries. The suggested framework can be easily personalized to any patient and could be adapted to other surgical scenarios. Full article

Article
Diabetic Macular Edema Characterization and Visualization Using Optical Coherence Tomography Images
Appl. Sci. 2020, 10(21), 7718; https://doi.org/10.3390/app10217718 - 31 Oct 2020
Viewed by 473
Abstract
Diabetic Retinopathy and Diabetic Macular Edema (DME) are among the main causes of blindness in developed countries. They are characterized by fluid deposits in the retinal layers, causing progressive vision loss over time. The clinical literature defines three DME types according to the texture and disposition of the fluid accumulations: Cystoid Macular Edema (CME), Diffuse Retinal Thickening (DRT) and Serous Retinal Detachment (SRD). Detecting each one is essential as, depending on their presence, the expert will decide on the adequate treatment of the pathology. In this work, we propose a robust detection and visualization methodology based on the analysis of independent image regions. We study a complete and heterogeneous library of 375 texture and intensity features in a dataset of 356 labeled images from two of the most used capture devices in the clinical domain: a CIRRUS™ HD-OCT 500 from Carl Zeiss Meditec and 179 OCT images from a modular HRA + OCT SPECTRALIS® from Heidelberg Engineering, Inc. We extracted 33,810 samples for each type of DME for the feature analysis and incremental training of four different classifier paradigms. This way, we achieved an 84.04% average accuracy for CME, 78.44% average accuracy for DRT and 95.40% average accuracy for SRD. These models are used to generate an intuitive visualization of the fluid regions. We use an image sampling and voting strategy, resulting in a system capable of detecting and characterizing the three types of DME and presenting them in an intuitive and repeatable way. Full article

Article
Automatic Diagnosis of Chronic Thromboembolic Pulmonary Hypertension Based on Volumetric Data from SPECT Ventilation and Perfusion Images
Appl. Sci. 2020, 10(15), 5360; https://doi.org/10.3390/app10155360 - 03 Aug 2020
Cited by 1 | Viewed by 717
Abstract
Chronic thromboembolic pulmonary hypertension (CTEPH) is confirmed by visual analysis of single-photon emission computer tomography (SPECT) ventilation and perfusion (V/Q) images. Defects in the perfusion image discordant with the ventilation image indicate obstructed segments and the positive diagnosis of CTEPH. A quantitative metric and classification algorithm are proposed based on volumetric data from SPECT V/Q images. The difference in ventilation and perfusion volumes (VV-P) is defined as a quantitative metric to identify discordant defects in the SPECT images. The algorithm was validated with 22 patients grouped according to their diagnosis: (1) CTEPH and (2) respiratory pathology. Volumetric data from SPECT perfusion images was also compared before and after treatment for CTEPH. CTEPH was detected with a sensitivity of 0.67 and specificity of 0.80. The performance of volumetric data from SPECT perfusion images for the evaluation of treatment response was studied for two cases and improvement of pulmonary perfusion was observed in one case. This study uses volumetric data from SPECT V/Q images for the diagnosis of CTEPH and its differentiation from respiratory pathologies. The results indicate that the defined metric is a viable option for a quantitative analysis of SPECT V/Q images. Full article
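The volume-difference idea behind the VV-P metric can be sketched as follows; the voxel volume and decision threshold below are placeholder values, not those of the study:

```python
import numpy as np

VOXEL_VOLUME_ML = 0.008  # example voxel size in mL; scanner-dependent

def segmented_volume(mask):
    """Volume (mL) represented by a binary SPECT segmentation mask."""
    return mask.sum() * VOXEL_VOLUME_ML

def vv_p(ventilation_mask, perfusion_mask):
    """Difference between ventilated and perfused volumes (VV-P): a large
    positive value suggests perfusion defects discordant with ventilation."""
    return segmented_volume(ventilation_mask) - segmented_volume(perfusion_mask)

def classify_cteph(ventilation_mask, perfusion_mask, threshold_ml=50.0):
    # Illustrative threshold; in the study the decision rule is
    # validated against the clinical diagnosis of the patient groups.
    return vv_p(ventilation_mask, perfusion_mask) > threshold_ml
```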

Article
Automated Classification of Blood Loss from Transurethral Resection of the Prostate Surgery Videos Using Deep Learning Technique
Appl. Sci. 2020, 10(14), 4908; https://doi.org/10.3390/app10144908 - 17 Jul 2020
Cited by 1 | Viewed by 559
Abstract
Transurethral resection of the prostate (TURP) is the surgical removal of obstructing prostate tissue. The total bleeding area is used to assess the performance of TURP surgery. Although the traditional method for the detection of bleeding areas provides accurate results, it cannot detect them in time for surgical diagnosis. Moreover, even experienced physicians can find it difficult to judge bleeding areas because a red light pattern arising from the surgical cutting loop often appears on the images. Recently, automatic computer-aided techniques and deep learning have been broadly used in medical image recognition; they can effectively extract the desired features to reduce the burden on physicians and increase the accuracy of diagnosis. In this study, we integrated two state-of-the-art deep learning techniques for recognizing and extracting the red light areas arising from the cutting loop in TURP surgery. First, the ResNet-50 model was used to recognize the red light pattern appearing in the chipped frames of the surgery videos. Then, the proposed Res-Unet model was used to segment the areas with the red light pattern and remove these areas. Finally, the hue, saturation, and value (HSV) color space was used to classify four levels of blood loss in images without the red light pattern. The experiments have shown that the proposed Res-Unet model achieves higher accuracy than other segmentation algorithms in classifying images with red and non-red lights, and is able to extract the red light patterns and effectively remove them from TURP surgery images. The proposed approaches are capable of obtaining level classifications of blood loss, which are helpful for physicians in diagnosis. Full article

Article
Surface Muscle Segmentation Using 3D U-Net Based on Selective Voxel Patch Generation in Whole-Body CT Images
Appl. Sci. 2020, 10(13), 4477; https://doi.org/10.3390/app10134477 - 28 Jun 2020
Cited by 2 | Viewed by 577
Abstract
This study aimed to develop and validate an automated segmentation method for surface muscles using a three-dimensional (3D) U-Net based on selective voxel patches from whole-body computed tomography (CT) images. Our method defined a voxel patch (VP) as the input images, which consisted of 56 slices selected at equal intervals from the whole slices. In training, one VP was used for each case. In the test, multiple VPs were created according to the number of slices in the test case. Segmentation was then performed for each VP and the results of each VP merged. The proposed method achieved a segmentation accuracy mean dice coefficient of 0.900 for 8 cases. Although challenges remain in muscles adjacent to visceral organs and in small muscle areas, VP is useful for surface muscle segmentation using whole-body CT images with limited annotation data. The limitation of our study is that it is limited to cases of muscular disease with atrophy. Future studies should address whether the proposed method is effective for other modalities or using data with different imaging ranges. Full article
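For reference, the mean Dice coefficient quoted above is computed per case from binary masks, as in this short sketch (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between a predicted and a ground-truth binary
    mask; 1.0 means perfect overlap, 0.0 means no overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

Averaging this value over the 8 test cases yields the reported mean of 0.900.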

Article
Computer-Aided Biomedical Imaging of Periiliac Adipose Tissue Identifies Perivascular Fat as a Marker of Disease Complexity in Patients with Lower Limb Ischemia
Appl. Sci. 2020, 10(13), 4456; https://doi.org/10.3390/app10134456 - 28 Jun 2020
Viewed by 425
Abstract
The aim of the study was to develop a semi-automated, computer-aided imaging technique to quantify the amount and distribution of perivascular fat at the level of the iliac arteries (periiliac adipose tissue—PIAT), and to investigate the association of this new computer-aided imaging biomarker with other biomedical imaging biomarkers which characterize the pelvic adipose tissue (SAT—subcutaneous adipose tissue; VAT—visceral adipose tissue). We included 34 patients with peripheral arterial disease, in whom the volumes of PIAT, SAT and VAT were quantified using dedicated software at the level of the right and left iliac arteries. The median PIAT volume was 5 mL. Patients with PIAT > 5 mL were in more advanced Fontaine classes, with more complex arterial lesions, compared to those with low PIAT (<5 mL) (p < 0.0001). PIAT volumes presented a gradual increase with the Trans-Atlantic Inter-Society Consensus (TASC) class (2.57 ± 1.98 in TASC A, 4.65 ± 1.63 in TASC B, 8.79 ± 1.99 in TASC C and 13.77 ± 2.74 in TASC D). The distribution of PIAT between the left and right iliac axes was quasi-uniform (correlation between right and left PIAT: r = 0.46, p = 0.005). Linear regression analysis showed that the mean PIAT volume was correlated with VAT (r = 0.38, p = 0.02), but not with SAT at the level of the iliac artery origin (r = 0.16, p = 0.34). PIAT may represent a novel biomedical-imaging-derived biomarker that characterizes the distribution of adipose tissue in the pelvic area and may serve as an indicator of the severity and complexity of lower limb ischemia. Full article

Article
A Measurement Software for Professional Training in Early Detection of Melanoma
Appl. Sci. 2020, 10(12), 4351; https://doi.org/10.3390/app10124351 - 24 Jun 2020
Viewed by 792
Abstract
Software systems have been long introduced as support to the early detection of melanoma through the automatic analysis of suspicious skin lesions. Nevertheless, their behavior is not yet similar to the performance exhibited by expert dermatologists in terms of diagnostic accuracy. Instead, a software system should be adopted by non-experienced dermatologists in order to improve the measurement and detection results for skin atypical patterns and the accuracy of the corresponding second opinion. This paper describes an image-based measurement and classification system able to score pigmented skin lesions according to the Seven-Point Check-list diagnostic method. Focus is devoted to the measurement procedure of biological structures more closely related to the atypical character of the nevus. Moreover, the performances of the measurement system are evaluated by considering the support to dermatologists with different experiences during the clinical activity. Full article

Article
Cyst Detection and Motion Artifact Elimination in Enface Optical Coherence Tomography Angiograms
Appl. Sci. 2020, 10(11), 3994; https://doi.org/10.3390/app10113994 - 09 Jun 2020
Viewed by 525
Abstract
The correct detection of cysts in Optical Coherence Tomography Angiography images is of crucial importance for allowing reliable quantitative evaluation in patients with macular edema. However, this is a challenging task, since the commercially available software only allows manual cyst delineation. Moreover, even small eye movements can cause motion artifacts that are not always compensated by the commercial software. In this paper, we propose a novel algorithm based on the use of filters and morphological operators to eliminate the motion artifacts and delineate the cyst contours. The method has been validated on a dataset including 194 images from 30 patients, comparing the algorithm results with the ground truth produced by medical doctors. The Jaccard index between the algorithmic and the manual detection is 98.97%, with an overall accuracy of 99.62%. Full article
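The two evaluation measures reported above can be computed for binary masks as in this illustrative sketch (not the authors' code):

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary cyst masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def pixel_accuracy(pred, target):
    """Fraction of pixels on which the two masks agree."""
    return (pred.astype(bool) == target.astype(bool)).mean()
```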

Article
Skin Lesion Segmentation Using Image Bit-Plane Multilayer Approach
Appl. Sci. 2020, 10(9), 3045; https://doi.org/10.3390/app10093045 - 27 Apr 2020
Cited by 3 | Viewed by 773
Abstract
Automatic diagnostic systems able to detect and classify skin lesions at an initial stage are becoming highly relevant and effective in providing support for medical personnel during clinical assessment. Image segmentation plays a determinant part in the computer-aided skin lesion diagnosis pipeline because it makes it possible to extract and highlight information on lesion contour texture such as, for example, skewness and area unevenness. However, artifacts, low contrast, indistinct boundaries, and different shapes and areas contribute to making skin lesion segmentation a challenging task. In this paper, a fully automatic computer-aided system for skin lesion segmentation in dermoscopic images is presented. In this method, noise and artifacts are initially reduced by singular value decomposition; afterward, the lesion is decomposed into a stack of bit-plane layers. A specific procedure is implemented for redundant data reduction using simple Boolean operators. Since the lesion and background are rarely homogeneous regions, the obtained segmentation region could contain some disjointed areas classified as lesion. To obtain a single zone classified as lesion, avoiding spurious pixels or holes in the image under test, mathematical morphological techniques are applied. The performance obtained highlights the method's validity. Full article

Article
Computer Aided Detection of Pulmonary Embolism Using Multi-Slice Multi-Axial Segmentation
Appl. Sci. 2020, 10(8), 2945; https://doi.org/10.3390/app10082945 - 24 Apr 2020
Viewed by 670
Abstract
Pulmonary Embolism (PE) is a respiratory disease caused by blood clots lodged in the pulmonary arteries, blocking perfusion, limiting blood oxygenation, and inducing a higher load on the right ventricle. Pulmonary embolism is diagnosed using contrast-enhanced Computed Tomography Pulmonary Angiography (CTPA), resulting in a 3D image where the pulmonary arteries appear as bright structures and emboli appear as filling defects, the latter often being difficult to see, especially in the subsegmental case. In comparison to an expert panel, the average radiologist has a sensitivity of between 77% and 94%. Computer Aided Detection (CAD) is regarded as a promising approach to detect emboli, but current algorithms are hindered by a high false positive rate. In this paper, we propose a novel methodology for emboli detection. Instead of finding candidate points and characterizing them, we find emboli directly on the whole image slice. Detections across different slices are merged into a single detection volume that is post-processed to generate emboli detections. The system was evaluated on a public PE database of 80 scans. On 20 test scans, our system obtained a per-embolus sensitivity of 68% at a regime of one false positive per scan, improving on state-of-the-art methods. We therefore conclude that our multi-slice emboli segmentation CAD method for PE is a valuable alternative to the standard approach of candidate point selection and classification. Full article

Article
Classification of Lentigo Maligna at Patient-Level by Means of Reflectance Confocal Microscopy Data
Appl. Sci. 2020, 10(8), 2830; https://doi.org/10.3390/app10082830 - 19 Apr 2020
Cited by 1 | Viewed by 805
Abstract
Reflectance confocal microscopy is an appropriate tool for the diagnosis of lentigo maligna. Compared with dermoscopy, this device can provide abundant information as a mosaic and/or a stack of images. In this particular context, the number of images per patient varied between 2 and 833 images, and the objective, ultimately, is to be able to discern between benign and malignant classes. First, this paper evaluated classification at the image level, with the help of handcrafted methods derived from the literature and transfer learning methods. The transfer learning feature extraction methods outperformed the handcrafted feature extraction methods from the literature, with an F1 score of 0.82. Secondly, this work proposed patient-level supervised methods based on image decisions and compared these with multi-instance learning methods. This study achieved results comparable to those of the dermatologists, with an AUC score of 0.87 for supervised patient diagnosis and an AUC score of 0.88 for multi-instance learning patient diagnosis. According to these results, the computer-aided diagnosis methods presented in this paper could easily be used in a clinical context to save time or confirm a diagnosis, and can be oriented to detect images of interest. This methodology can also serve future work based on multimodality. Full article

Article
Optimisation of 2D U-Net Model Components for Automatic Prostate Segmentation on MRI
Appl. Sci. 2020, 10(7), 2601; https://doi.org/10.3390/app10072601 - 09 Apr 2020
Cited by 2 | Viewed by 950
Abstract
In this paper, we develop an optimised state-of-the-art 2D U-Net model by studying the effects of the individual deep learning model components in performing prostate segmentation. We found that for upsampling, the combination of interpolation and convolution is better than the use of transposed convolution. For combining feature maps in each convolution block, it is only beneficial if a skip connection with concatenation is used. With respect to pooling, average pooling is better than strided-convolution, max, RMS or L2 pooling. Introducing a batch normalisation layer before the activation layer gives further performance improvement. The optimisation is based on a private dataset as it has a fixed 2D resolution and voxel size for every image which mitigates the need of a resizing operation in the data preparation process. Non-enhancing data preprocessing was applied and five-fold cross-validation was used to evaluate the fully automatic segmentation approach. We show it outperforms the traditional methods that were previously applied on the private dataset, as well as outperforming other comparable state-of-the-art 2D models on the public dataset PROMISE12. Full article
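The upsampling finding can be illustrated with the fixed interpolation step of the interpolation+convolution scheme, here a nearest-neighbour 2x resize in plain NumPy (a learned convolution would follow; this sketch is not the authors' implementation):

```python
import numpy as np

def upsample_nearest_2x(feature_map):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map.
    In the interpolation+convolution scheme, a learned convolution is
    applied after this fixed, parameter-free resize; this is a common
    way to avoid the checkerboard artifacts that transposed
    convolutions can introduce in decoder paths."""
    return np.repeat(np.repeat(feature_map, 2, axis=0), 2, axis=1)
```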
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Evaluation of Effectiveness of Digital Technologies During Anatomy Learning in Nursing School
Appl. Sci. 2020, 10(7), 2357; https://doi.org/10.3390/app10072357 - 30 Mar 2020
Cited by 3 | Viewed by 716
Abstract
The disciplines of biosciences included in the curricula of a nursing degree represent a daunting but crucial body of knowledge that a well-prepared nurse should acquire. Given the importance and the objective difficulties of these courses, nursing students experience anxiety, especially over the anatomy course. This anxiety and the related rate of exam failures lead professors to analyze their teaching approach by diversifying the lecturing methods. The aim of our study was to test the use of a virtual dissection table (DT) during the anatomy lectures of a nursing course, evaluating the anxiety level before the exam and the exam score. The feedback of the evaluated student population was positive overall. The integration of the DT in anatomy lectures improved the learning performance and, above all, enhanced the self-confidence of the first-year nursing students. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
An Anatomical-Based Subject-Specific Model of In-Vivo Knee Joint 3D Kinematics From Medical Imaging
Appl. Sci. 2020, 10(6), 2100; https://doi.org/10.3390/app10062100 - 20 Mar 2020
Cited by 8 | Viewed by 980
Abstract
Biomechanical models of the knee joint allow the development of accurate procedures as well as novel devices to restore the joint natural motion. They are also used within musculoskeletal models to perform clinical gait analysis on patients. Among relevant knee models in the literature, the anatomy-based spatial parallel mechanisms represent the joint motion using rigid links for the ligaments’ isometric fibres and point contacts for the articular surfaces. To customize analyses, therapies and devices, there is the need to define subject-specific models, but relevant procedures and their accuracy are still questioned. A procedure is here proposed and validated to define a customized knee model based on a spatial parallel mechanism. Computed tomography, magnetic resonance and 3D-video-fluoroscopy were performed on a healthy volunteer to define the personalized model geometry. The model was then validated by comparing the measured and the replicated joint motion. The model showed mean absolute differences (± standard deviation) of 0.98 ± 0.40 mm in translation and 0.68 ± 0.29° in rotation for the tibia–femur motion, and of 0.77 ± 0.15 mm and 2.09 ± 0.69° for the patella–femur motion. These results show that accurate personalized spatial models of knee kinematics can be obtained from in-vivo imaging. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Three-Dimensional CAD in Skull Reconstruction: A Narrative Review with Focus on Cranioplasty and Its Potential Relevance to Brain Sciences
Appl. Sci. 2020, 10(5), 1847; https://doi.org/10.3390/app10051847 - 07 Mar 2020
Cited by 1 | Viewed by 841
Abstract
In patients suffering from severe traumatic brain injury and massive stroke (hemorrhagic or ischemic), decompressive craniectomy (DC) is a surgical strategy used to reduce intracranial pressure and to prevent brainstem compromise from subsequent brain edema. In surviving patients, cranioplasty surgery helps to protect brain tissue and correct the external deformity. The aesthetic outcome of cranioplasty using an asymmetrical implant can negatively influence patients physically and mentally, especially young patients. Advancements in the development of biomaterials have now made three-dimensional (3-D) computer-assisted design/manufacturing (CAD/CAM)-fabricated implants an optimal choice for the repair of skull defects following DC. Here, we summarize the various materials for cranioplasty, including xenogeneic, autogenous, and alloplastic grafts. The processing procedures of the CAD/CAM technique are briefly outlined, and we reflect on our previously published experience in reconstructing skull CAD models with commercial software to assess the aesthetic outcomes of regular 3-D CAD models without contouring elevation or depression. The establishment of a 3-D CAD model makes better aesthetic outcomes of CAM-derived alloplastic implants possible. Finally, clinical considerations for the CAD algorithms for adjusting contours and their potential application in prospective healthcare are briefly outlined. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Adenocarcinoma Recognition in Endoscopy Images Using Optimized Convolutional Neural Networks
Appl. Sci. 2020, 10(5), 1650; https://doi.org/10.3390/app10051650 - 01 Mar 2020
Cited by 6 | Viewed by 820
Abstract
Colonoscopy, the endoscopic examination of the colon using a camera, is considered the most effective method for the diagnosis of colorectal cancer. Colonoscopy is performed by a medical doctor who visually inspects the patient’s colon to find protruding or cancerous polyps. In some situations, these polyps are difficult to detect by the human eye, which may lead to a misdiagnosis. In recent years, deep learning has revolutionized the field of computer vision due to its exemplary performance. This study proposes a Convolutional Neural Network (CNN) architecture for classifying colonoscopy images as normal, adenomatous polyp, or adenocarcinoma. The main objective of this study is to aid medical practitioners in the correct diagnosis of colorectal cancer. Our proposed CNN architecture consists of 43 convolutional layers and one fully-connected layer. We trained and evaluated the network on a colonoscopy image dataset with 410 test subjects provided by Gachon University Hospital. Our experimental results showed an accuracy of 94.39% over the 410 test subjects. Full article
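Evaluating such a three-class classifier reduces to comparing predicted and true labels. A minimal sketch follows; the class-index encoding and the sample labels are hypothetical, chosen only for illustration:

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes=3):
    """Accuracy and confusion matrix over three classes
    (0 = normal, 1 = adenomatous polyp, 2 = adenocarcinoma;
    this encoding is an assumption for illustration)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1        # rows: true class, columns: predicted class
    return acc, cm

# Hypothetical labels: one adenocarcinoma mistaken for an adenomatous polyp.
acc, cm = evaluate([0, 1, 2, 2], [0, 1, 2, 1])
```

The confusion matrix makes clinically relevant error modes visible (e.g. adenocarcinoma predicted as polyp), which a single accuracy number hides.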
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Correlation between LAA Morphological Features and Computational Fluid Dynamics Analysis for Non-Valvular Atrial Fibrillation Patients
Appl. Sci. 2020, 10(4), 1448; https://doi.org/10.3390/app10041448 - 20 Feb 2020
Cited by 6 | Viewed by 1099
Abstract
The left atrial appendage (LAA) is a complex cardiovascular structure which can give rise to thrombus formation in patients with non-valvular atrial fibrillation (AF). LAA fluid dynamics, together with morphological features, should be investigated in order to evaluate the possible connection of geometrical and hemodynamic indices with stroke risk. To reach this goal, we conducted a morphological analysis of four different LAA shapes, considering their variation during the cardiac cycle, and carried out computational fluid dynamics (CFD) simulations in AF conditions. The analysis of the main geometrical LAA parameters showed a larger ostium and a reduced motility for the cauliflower and cactus shapes, as well as lower velocity values in the CFD analysis. Such findings are in line with the literature and highlight the importance of coupling dynamic imaging data with CFD calculations to provide information not available at the clinical level. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Cross-Domain Data Augmentation for Deep-Learning-Based Male Pelvic Organ Segmentation in Cone Beam CT
Appl. Sci. 2020, 10(3), 1154; https://doi.org/10.3390/app10031154 - 08 Feb 2020
Cited by 4 | Viewed by 1081
Abstract
For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-net deep-learning architecture was trained to segment bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. Our network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increased significantly with the number of CBCT and CT scans in the training set, reaching 0.874 ± 0.096 , 0.814 ± 0.055 , and 0.758 ± 0.101 for bladder, rectum, and prostate, respectively. This was about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for prostate. Interestingly, adding 74 CT scans to the CBCT training set allowed maintaining high DSCs, while halving the number of CBCT scans. Hence, our work showed that although CBCT scans included artifacts, cross-domain augmentation of the training set was effective and could rely on large datasets available for planning CT scans. Full article
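The Dice similarity coefficient (DSC) used throughout these results is the standard overlap measure between a predicted and a reference mask; a minimal numpy version:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). This is the standard overlap
    measure reported per organ (bladder, rectum, prostate) above."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy 2 x 3 masks overlapping in two pixels.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why values such as 0.874 for the bladder indicate close agreement with the manual contours.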
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
A Deep-Learning Approach for Diagnosis of Metastatic Breast Cancer in Bones from Whole-Body Scans
Appl. Sci. 2020, 10(3), 997; https://doi.org/10.3390/app10030997 - 03 Feb 2020
Cited by 6 | Viewed by 2162
Abstract
(1) Background: Bone metastasis is one of the most frequent complications in breast, lung and prostate cancer; bone scintigraphy is the primary screening imaging method and offers the highest sensitivity (95%) for metastases. To address the considerable problem of bone metastasis diagnosis, focused on breast cancer patients, artificial intelligence methods based on deep-learning algorithms for medical image analysis are investigated in this research work; (2) Methods: Deep learning is a powerful approach for the automatic classification and diagnosis of medical images, implemented here with convolutional neural networks (CNNs). The purpose of this study is to build a robust CNN model able to classify whole-body scan images of breast cancer patients according to whether or not they are affected by bone metastasis; (3) Results: A robust CNN architecture is selected based on an exploration of CNN performance for bone metastasis diagnosis using whole-body scan images, achieving a high classification accuracy of 92.50%. The best-performing CNN method is compared with other popular and well-known CNN architectures for medical imaging, such as ResNet50, VGG16, MobileNet, and DenseNet, reported in the literature, and provides superior classification accuracy; and (4) Conclusions: Prediction results show the efficacy of the proposed deep learning approach for bone metastasis diagnosis in breast cancer patients in nuclear medicine. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Accurate BAPL Score Classification of Brain PET Images Based on Convolutional Neural Networks with a Joint Discriminative Loss Function
Appl. Sci. 2020, 10(3), 965; https://doi.org/10.3390/app10030965 - 02 Feb 2020
Cited by 2 | Viewed by 975
Abstract
Alzheimer’s disease (AD) is an irreversible progressive cerebral disease with most of its symptoms appearing after 60 years of age. Alzheimer’s disease has been largely attributed to the accumulation of amyloid beta (Aβ), but a complete cure has remained elusive. 18F-Florbetaben amyloid positron emission tomography (PET) has been shown to be a more powerful tool for understanding AD-related brain changes than magnetic resonance imaging and computed tomography. In this paper, we propose an accurate classification method for scoring brain amyloid plaque load (BAPL) based on deep convolutional neural networks. A joint discriminative loss function was formulated by adding a discriminative intra-loss function to the conventional (cross-entropy) loss function. The performance of the proposed joint loss function was compared with that of the conventional loss function in three state-of-the-art deep neural network architectures. The intra-loss function significantly improved the BAPL classification performance. In addition, we showed that the mix-up data augmentation method, originally proposed for natural image classification, is also useful for medical image classification. Full article
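The idea of adding a discriminative intra-loss to the cross-entropy term can be sketched as follows. The exact form of the intra-loss and the weight `lam` below are illustrative assumptions, not the published formulation:

```python
import numpy as np

def joint_loss(features, logits, labels, lam=0.5):
    """Joint discriminative loss in the spirit of the paper: softmax
    cross-entropy plus an intra-class term that pulls each feature
    vector toward its class mean, so classes become compact in
    feature space. `lam` and the intra-loss form are assumptions."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    classes = np.unique(labels)
    intra = sum(((features[labels == c]
                  - features[labels == c].mean(axis=0)) ** 2)
                .sum(axis=1).mean() for c in classes)
    return ce + lam * intra / len(classes)
```

Tightening each class around its own mean is what makes the combined objective "discriminative": samples of the same BAPL score are pushed together even when cross-entropy alone is already low.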
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Machine Learning and DWI Brain Communicability Networks for Alzheimer’s Disease Detection
Appl. Sci. 2020, 10(3), 934; https://doi.org/10.3390/app10030934 - 31 Jan 2020
Cited by 7 | Viewed by 1399
Abstract
Signal processing and machine learning techniques are changing the clinical practice based on medical imaging from many perspectives. A major topic is related to (i) the development of computer-aided diagnosis systems that provide clinicians with novel, non-invasive and low-cost support tools, and (ii) the development of new methodologies for the analysis of biomedical data aimed at finding new disease biomarkers. Advancements have been recently achieved in the context of Alzheimer’s disease (AD) diagnosis through the use of diffusion weighted imaging (DWI) data. When combined with tractography algorithms, this imaging modality enables the reconstruction of the physical connections of the brain that can be subsequently investigated through a complex network-based approach. A graph metric particularly suited to describe the disruption of the brain connectivity due to AD is communicability. In this work, we develop a machine learning framework for the classification and feature importance analysis of AD based on communicability at the whole brain level. We fairly compare the performance of three state-of-the-art classification models, namely support vector machines, random forests and artificial neural networks, on the connectivity networks of a balanced cohort of healthy control subjects and AD patients from the ADNI database. Moreover, we clinically validate the information content of the communicability metric by performing a feature importance analysis. Both the performance comparison and the feature importance analysis provide evidence of the robustness of the method. The results obtained confirm that the whole brain structural communicability alterations due to AD are a valuable biomarker for the characterization and investigation of pathological conditions. Full article
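Communicability, the graph metric at the core of this framework, is the matrix exponential of the adjacency matrix: G = e^A counts walks of every length between two nodes, down-weighting walks of length k by 1/k!. For a symmetric (undirected) connectivity matrix it follows directly from the eigendecomposition; a minimal sketch:

```python
import numpy as np

def communicability(adj):
    """Network communicability G = exp(A): G[i, j] sums walks of every
    length k between nodes i and j, weighted by 1/k!. For a symmetric
    adjacency matrix A = V diag(w) V^T, exp(A) = V diag(exp(w)) V^T."""
    w, v = np.linalg.eigh(np.asarray(adj, dtype=float))
    return (v * np.exp(w)) @ v.T      # columns of v scaled by exp(w)

# Toy 3-node path graph 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
G = communicability(A)
```

Because it accounts for all indirect routes, communicability degrades when connections are disrupted even if some shortest path survives, which is what makes it sensitive to AD-related disconnection.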
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
White Matter Network Alterations in Alzheimer’s Disease Patients
Appl. Sci. 2020, 10(3), 919; https://doi.org/10.3390/app10030919 - 31 Jan 2020
Cited by 3 | Viewed by 803
Abstract
Previous studies have revealed alterations of white matter (WM) and grey matter (GM) microstructures in Alzheimer’s disease (AD) and its prodromal stage, amnestic mild cognitive impairment (MCI). In general, these alterations can be studied comprehensively by modeling the brain as a complex network, which describes many important topological properties, such as the small-world property, modularity, and efficiency. In this study, we systematically investigated white matter abnormalities using unbiased whole-brain network analysis. We compared regional and network-related WM features between groups of 19 AD and 25 MCI patients and 22 healthy controls (HC) using tract-based spatial statistics (TBSS), network-based statistics (NBS) and graph theoretical analysis. We did not find significant differences in fractional anisotropy (FA) between the groups in the TBSS analysis. However, observable alterations were noticed at the network level: brain network measures such as global efficiency and small-world properties were lower in AD patients than in HCs. Full article
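Global efficiency, one of the network measures found reduced in the AD group, is the mean inverse shortest-path length over all node pairs; a minimal sketch for an unweighted graph (not the study's analysis pipeline):

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of an unweighted, undirected graph: the mean of
    1/d(i, j) over all ordered node pairs, where d is the shortest-path
    length from Floyd-Warshall; disconnected pairs contribute 0."""
    adj = np.asarray(adj)
    n = len(adj)
    d = np.where(adj > 0, 1.0, np.inf)   # direct hop, else unreachable
    np.fill_diagonal(d, 0.0)
    for k in range(n):                   # Floyd-Warshall relaxation
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    off_diag = ~np.eye(n, dtype=bool)
    return float(np.mean(1.0 / d[off_diag]))   # 1/inf == 0.0
```

Values near 1 indicate a densely integrated network; lower values, as reported for the AD patients, reflect longer or missing routes between regions.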
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Pre-Cancerous Stomach Lesion Detections with Multispectral-Augmented Endoscopic Prototype
Appl. Sci. 2020, 10(3), 795; https://doi.org/10.3390/app10030795 - 22 Jan 2020
Cited by 2 | Viewed by 731
Abstract
In this paper, we are interested in the in vivo detection of pre-cancerous stomach lesions. Pre-cancerous lesions are unfortunately rarely explored in research papers, as most are focused on cancer detection or conducted ex vivo. For this purpose, a novel prototype is introduced. It consists of a standard endoscope with multispectral cameras, an optical setup, a fiberscope, and an external light source. Reflectance spectra were acquired in vivo from 16 patients with a healthy stomach, chronic gastritis, or intestinal metaplasia. A specific pipeline was designed for the classification of spectra between healthy mucosa and the different pathologies. The pipeline includes a wavelength clustering algorithm, spectral feature computation, and the training of a classifier in a “leave one patient out” manner. Good classification results, around 80%, were obtained, and two attractive wavelength ranges were found in the red and near-infrared regions: 745–755 nm and 780–840 nm. The new prototype and the associated results argue in favor of future routine use in operating rooms, during upper gastrointestinal exploration of the stomach, for the detection of stomach diseases. Full article
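The "leave one patient out" protocol keeps all spectra of one patient in the test fold, so spectra from the same stomach never appear in both training and test sets. A minimal sketch follows, with a nearest-centroid rule as an illustrative stand-in for the paper's classifier:

```python
import numpy as np

def nearest_centroid(train_X, train_y, test_X):
    """Stand-in classifier: assign each test spectrum to the class whose
    mean training spectrum is closest in Euclidean distance."""
    classes = np.unique(train_y)
    cents = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - cents[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def lopo_accuracy(spectra, labels, patients, classify=nearest_centroid):
    """Leave-one-patient-out evaluation: all spectra of one patient form
    the test fold while the remaining patients form the training fold,
    preventing optimistic leakage between folds."""
    spectra, labels, patients = map(np.asarray, (spectra, labels, patients))
    correct = total = 0
    for pid in np.unique(patients):
        test = patients == pid
        pred = classify(spectra[~test], labels[~test], spectra[test])
        correct += int((pred == labels[test]).sum())
        total += int(test.sum())
    return correct / total
```

Splitting by spectrum instead of by patient would let near-duplicate spectra from one stomach straddle the folds and inflate the reported accuracy.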
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Kudo’s Classification for Colon Polyps Assessment Using a Deep Learning Approach
Appl. Sci. 2020, 10(2), 501; https://doi.org/10.3390/app10020501 - 10 Jan 2020
Cited by 7 | Viewed by 1225
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world. This disease can begin as a non-cancerous polyp in the colon; when not treated in a timely manner, such polyps can progress to cancer and, in turn, death. We propose a deep learning model for classifying colon polyps based on Kudo’s classification scheme, using basic colonoscopy equipment. We train a deep convolutional model on a private dataset from the University of Deusto, with and without a VGG model as a feature extractor, and compare the results. We obtained an accuracy of 83% and an F1-score of 83% after fine-tuning our model with the VGG feature extractor. These results show that deep learning algorithms are useful for developing computer-aided tools for early CRC detection, and suggest combining them with a polyp segmentation model for use by specialists. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Metal Artifact Reduction in X-ray CT via Ray Profile Correction
Appl. Sci. 2020, 10(1), 66; https://doi.org/10.3390/app10010066 - 20 Dec 2019
Viewed by 661
Abstract
In computed tomography (CT), metal implants increase the inconsistencies between the measured data and the linear assumption of the Radon transform made by the analytic CT reconstruction algorithm. The inconsistencies appear in the form of dark and bright bands and streaks in the reconstructed image, collectively called metal artifacts. The standard method for metal artifact reduction (MAR) replaces the inconsistent data with interpolated data. However, sinogram interpolation not only introduces new artifacts but also suffers from a loss of detail near the implanted metals. With the help of a prior image, usually estimated from the metal-artifact-degraded image via computer vision techniques, improvements are feasible, but still no MAR method exists that is widely accepted and utilized. We propose a technique that utilizes a prior image from a CT scan taken of the patient before implanting the metal objects; hence, there is a sufficient amount of structural similarity to cover the loss of detail around the metal implants. Using the prior scan and a segmentation or model of the metal implant, our method then replaces sinogram interpolation with ray profile matching and estimation, which yields much more reliable data estimates for the affected sinogram regions. Experiments with a clinical dataset obtained using a surgical imaging CT scanner show very promising results. Full article
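The standard MAR baseline that the proposed ray-profile method improves upon replaces metal-affected sinogram bins with interpolated values. A minimal sketch of that baseline only (the paper's contribution swaps this interpolation for matching against a prior, pre-implant scan):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Standard MAR baseline: in each projection row, replace detector
    bins flagged as metal with values linearly interpolated from the
    neighbouring unaffected bins. This is the step the paper argues
    loses detail near the implants."""
    out = np.asarray(sinogram, dtype=float).copy()
    bins = np.arange(out.shape[1])
    for r in range(out.shape[0]):
        bad = np.asarray(metal_mask[r], dtype=bool)
        if bad.any():
            out[r, bad] = np.interp(bins[bad], bins[~bad], out[r, ~bad])
    return out
```

Linear interpolation bridges the metal trace smoothly but ignores every structure the metal shadowed, which is exactly the information a registered prior scan can restore.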
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Identification of Knee Cartilage Changing Pattern
Appl. Sci. 2019, 9(17), 3469; https://doi.org/10.3390/app9173469 - 22 Aug 2019
Cited by 1 | Viewed by 924
Abstract
This paper studied the changing pattern of knee cartilage using 3D knee magnetic resonance (MR) images over a 12-month period. As a pilot study, we focused on the medial tibia compartment of the knee joint. To quantify the thickness of cartilage in this compartment, we utilized two methods: one was measurement through manual segmentation of cartilage on each slice of the 3D MR sequence; the other was measurement through the cartilage damage index (CDI), which quantifies the thickness at a few informative locations on the cartilage. We employed artificial neural networks (ANNs) to model the changing pattern of cartilage thickness. The input feature space was composed of the thickness information at a cartilage location, as well as its neighborhood, from the baseline-year data. The output categories were ‘changed’ and ‘no-change’, based on the thickness difference at the same location between the baseline-year and the 12-month follow-up data. Different ANN models were trained using CDI features and manual segmentation features. Further, for each type of feature, individual models were trained on different subregions of the medial tibia compartment, i.e., the bottom part, the middle part, the upper part, and the whole compartment. Based on the experiment results, we found that CDI features yielded better prediction performance than manual segmentation, on both the whole medial tibia compartment and every subregion. For CDI, the best performance in terms of AUC was obtained using the central CDI locations (AUC = 0.766), while the best performance for manual segmentation was obtained using all slices of the 3D MR sequence (AUC = 0.656). As the experiment results showed, the CDI method demonstrated a stronger pattern of cartilage change than the manual segmentation method, which required up to six hours of manual delineation of all MRI slices. The results should be further validated by extending the experiment to other compartments. Full article
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Quantitative CT Analysis for Predicting the Behavior of Part-Solid Nodules with Solid Components Less than 6 mm: Size, Density and Shape Descriptors
Appl. Sci. 2019, 9(16), 3428; https://doi.org/10.3390/app9163428 - 20 Aug 2019
Cited by 4 | Viewed by 1170
Abstract
Persistent part-solid nodules (PSNs) with a solid component <6 mm usually represent minimally invasive adenocarcinomas and are significantly less aggressive than PSNs with a solid component ≥6 mm. However, not all PSNs with a small solid component behave in the same way: some nodules exhibit an indolent course, whereas others exhibit more aggressive behavior. Thus, predicting the future behavior of this subtype of PSN remains a complex and fascinating diagnostic challenge. The main purpose of this study was to apply open-source software to investigate which quantitative computed tomography (CT) features may be useful for predicting the behavior of a select group of PSNs. We retrospectively selected 50 patients with a single PSN with a solid component <6 mm and diameter <15 mm. Computerized analysis was performed using ImageJ software for each PSN and various quantitative features were calculated from the baseline CT images. The area, perimeter, mean Feret diameter, linear mass density, circularity and solidity were significantly related to nodule growth (p ≤ 0.031). Therefore, quantitative CT analysis was helpful for predicting the future behavior of a select group of PSNs with a solid component <6 mm and diameter <15 mm. Full article
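Of the descriptors computed with ImageJ above, circularity has a particularly simple definition, 4πA/P². A small sketch (the rectangle example is illustrative, not a nodule from the study):

```python
import math

def circularity(area, perimeter):
    """ImageJ-style circularity, 4*pi*area / perimeter**2: exactly 1.0
    for a perfect circle and approaching 0 for elongated shapes. One of
    the descriptors the study relates to nodule growth."""
    return 4.0 * math.pi * area / perimeter ** 2

# A circle of radius r has area pi*r^2 and perimeter 2*pi*r, so its
# circularity is exactly 1; a 1 x 4 rectangle scores far lower.
r = 3.0
circle = circularity(math.pi * r ** 2, 2 * math.pi * r)
rect = circularity(4.0, 10.0)
```

Because it is dimensionless, circularity can be compared across nodules of different sizes, unlike raw area or perimeter.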
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Article
Multi-Scale Heterogeneous 3D CNN for False-Positive Reduction in Pulmonary Nodule Detection, Based on Chest CT Images
Appl. Sci. 2019, 9(16), 3261; https://doi.org/10.3390/app9163261 - 09 Aug 2019
Cited by 8 | Viewed by 1386
Abstract
Currently, lung cancer has one of the highest mortality rates because it is often caught too late. Therefore, early detection is essential to reduce the risk of death. Pulmonary nodules are considered key indicators of primary lung cancer. Developing an efficient and accurate computer-aided diagnosis system for pulmonary nodule detection is therefore an important goal. Typically, such a system consists of two parts: candidate nodule extraction and false-positive reduction of candidate nodules. The reduction of false positives (FPs) among candidate nodules remains an important challenge due to the morphological variability of nodules and their similarity to other organs. In this study, we propose a novel multi-scale heterogeneous three-dimensional (3D) convolutional neural network (MSH-CNN) based on chest computed tomography (CT) images. There are three main strategies in the design: (1) using multi-scale 3D nodule blocks with different levels of contextual information as inputs; (2) using two different branches of 3D CNN to extract the expression features; (3) using a set of weights determined by backpropagation to fuse the expression features produced in step (2). To test the performance of the algorithm, we trained and tested on the Lung Nodule Analysis 2016 (LUNA16) dataset, achieving an average competition performance metric (CPM) score of 0.874 and a sensitivity of 91.7% at two FPs/scan. Moreover, our framework is universal and can be easily extended to other candidate false-positive reduction tasks in 3D object detection, as well as 3D object classification. Full article
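The CPM score reported on LUNA16 is the sensitivity averaged over seven operating points of the FROC curve, from 1/8 to 8 false positives per scan. A minimal sketch that reads those points off a given curve by interpolation (the curve values in the test are hypothetical, not the paper's results):

```python
import numpy as np

def cpm(fp_rates, sensitivities):
    """Competition performance metric (CPM) as defined by LUNA16: the
    sensitivity averaged at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives
    per scan, interpolated from the FROC curve on a log2 axis."""
    ops = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
    fp = np.asarray(fp_rates, dtype=float)      # must be increasing
    sens = np.asarray(sensitivities, dtype=float)
    return float(np.mean(np.interp(np.log2(ops), np.log2(fp), sens)))
```

Averaging across low and high FP rates rewards detectors that stay sensitive even under a strict false-positive budget, which is the regime that matters clinically.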
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Review

Review
Pattern Classification Approaches for Breast Cancer Identification via MRI: State-Of-The-Art and Vision for the Future
Appl. Sci. 2020, 10(20), 7201; https://doi.org/10.3390/app10207201 - 15 Oct 2020
Cited by 1 | Viewed by 584
Abstract
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford-algebra-based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised and self-supervised deep learning strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI)-based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks of a generated confrontational learning methodology applicable to tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions can be drawn about the rate of proliferation of the disease. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.

Review
A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification
Appl. Sci. 2020, 10(14), 4703; https://doi.org/10.3390/app10144703 - 08 Jul 2020
Viewed by 979
Abstract
This paper presents the first survey on the application of AI techniques to the analysis of biomedical images for forensic human identification purposes. Human identification is of great relevance in today’s society and, in particular, in medico-legal contexts. As a consequence, every technological advance introduced in this field can help meet the growing need for accurate and robust tools for establishing and verifying human identity. We first describe the importance and applicability of forensic anthropology in many identification scenarios. We then present the main trends in the application of computer vision, machine learning and soft computing techniques to the estimation of the biological profile, identification through comparative radiography and craniofacial superimposition, trauma and pathology analysis, as well as facial reconstruction. The potential and limitations of the approaches employed are described, and we conclude with a discussion of methodological issues and future research.

Review
A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion
Appl. Sci. 2020, 10(5), 1894; https://doi.org/10.3390/app10051894 - 10 Mar 2020
Cited by 6 | Viewed by 1469
Abstract
Computer-aided diagnostic (CAD) systems use machine learning methods that provide a synergistic effect between the neuroradiologist and the computer, enabling an efficient and rapid diagnosis of the patient’s condition. As part of the early diagnosis of Alzheimer’s disease (AD), a major public health problem, the CAD system provides a neuropsychological assessment that helps mitigate its effects. The use of data fusion techniques by CAD systems has proven useful: it allows information about the brain and its tissues obtained from MRI to be merged with that from other imaging modalities. This multimodal fusion refines the quality of brain images by reducing redundancy and randomness, which improves the clinical reliability of the diagnosis compared to using a single modality. The purpose of this article is, first, to lay out the main steps of a CAD system for brain magnetic resonance imaging (MRI) and to bring together research on the diagnosis of brain disorders, with an emphasis on AD. The methods most widely used in the classification and brain-region segmentation stages are described, highlighting their advantages and disadvantages. Second, on the basis of the problem raised, we propose a solution within the framework of multimodal fusion. In this context, based on quantitative measurement parameters, we present a performance study of multimodal CAD systems, comparing their effectiveness with that of systems exploiting a single MRI modality. Advances in information fusion techniques in medical imaging are reviewed, again highlighting their advantages and disadvantages. Finally, the contribution of multimodal fusion, the interest of hybrid models, and the main scientific assertions made in the field of brain disease diagnosis are addressed.
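As a toy illustration of feature-level multimodal fusion, descriptors from two modalities can be normalized per modality and concatenated before classification. This is one common generic scheme, not necessarily the one used by the systems surveyed; the function names and feature shapes are assumptions:

```python
import numpy as np

def zscore(x):
    """Standardize one modality's feature vector (zero mean, unit variance)."""
    return (x - x.mean()) / (x.std() + 1e-8)

def fuse_modalities(feat_a, feat_b):
    """Feature-level fusion: normalize each modality separately, then
    concatenate into a single vector for a downstream classifier."""
    return np.concatenate([zscore(feat_a), zscore(feat_b)])

# Hypothetical per-modality feature vectors (e.g., MRI and PET descriptors)
fused = fuse_modalities(np.array([1.0, 2.0, 3.0]), np.array([10.0, 30.0]))
```

Normalizing each modality before concatenation keeps one modality's scale from dominating the fused representation, which is part of the redundancy reduction the abstract alludes to.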

Review
Conventional and Deep Learning Methods for Skull Stripping in Brain MRI
Appl. Sci. 2020, 10(5), 1773; https://doi.org/10.3390/app10051773 - 04 Mar 2020
Cited by 4 | Viewed by 1303
Abstract
Skull stripping in brain magnetic resonance volumes has recently been attracting attention due to an increased demand for an efficient, accurate, and general algorithm that works across diverse brain datasets. Accurate skull stripping is a critical step for neuroimaging diagnostic systems because neither the inclusion of non-brain tissue nor the removal of brain tissue can be corrected in subsequent steps, so any error propagates uncorrected through the rest of the analysis. The objective of this review article is to give a comprehensive overview of skull stripping approaches, including recent deep learning-based approaches. In this paper, current skull stripping methods are divided into two distinct groups: conventional or classical approaches, and convolutional neural network or deep learning approaches. The potential of several methods is emphasized because they can be applied to standard clinical imaging protocols. Finally, current trends and future developments are addressed, with special attention to recent deep learning algorithms.
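At the conventional end of the spectrum the review describes, a minimal pipeline is intensity thresholding followed by largest-connected-component selection and hole filling. The sketch below (NumPy/SciPy) is deliberately simplified: the fixed threshold and the assumption that the largest component is the brain are illustrative, not a clinical method:

```python
import numpy as np
from scipy import ndimage

def skull_strip(image, threshold):
    """Toy conventional pipeline: intensity threshold, keep the largest
    connected component (assumed to be the brain), fill interior holes."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask                           # nothing above threshold
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    brain = labels == (np.argmax(sizes) + 1)  # label ids start at 1
    return ndimage.binary_fill_holes(brain)

# Synthetic slice: a small artifact blob and a larger "brain" blob
img = np.zeros((12, 12))
img[1:3, 1:3] = 1.0      # 4-pixel artifact
img[5:10, 5:10] = 1.0    # 25-pixel "brain"
mask = skull_strip(img, threshold=0.5)
```

Real conventional methods (deformable surfaces, atlas registration, watershed) are far more robust, but they share this basic structure of intensity cues plus spatial regularization.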

Review
Laryngeal Image Processing of Vocal Folds Motion
Appl. Sci. 2020, 10(5), 1556; https://doi.org/10.3390/app10051556 - 25 Feb 2020
Cited by 3 | Viewed by 941
Abstract
This review provides a comprehensive compilation, from a digital image processing point of view, of the most important techniques currently developed to characterize and quantify the vibratory behaviour of the vocal folds, along with a detailed description of the laryngeal image modalities currently used in the clinic. The review presents an overview of the most significant glottal-gap segmentation and facilitative playback techniques used in the literature for this purpose, and shows the drawbacks and challenges that remain unsolved in developing robust vocal fold vibration analysis tools based on digital image processing.
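One of the simplest facilitative playbacks, the glottal area waveform, can be approximated from a frame sequence by counting dark glottal-gap pixels per frame. This is a deliberately naive sketch under the assumption that the gap is the darkest region of each endoscopic frame; the segmentation methods surveyed are considerably more robust:

```python
import numpy as np

def glottal_area_waveform(frames, threshold):
    """Approximate glottal area per frame by counting pixels darker than
    a fixed threshold (the glottal gap appears dark between the folds)."""
    return np.array([int((frame < threshold).sum()) for frame in frames])

# Two synthetic endoscopic frames: folds closed, then a 4-pixel gap open
closed = np.full((6, 6), 200.0)
opened = closed.copy()
opened[2:4, 2:4] = 10.0
areas = glottal_area_waveform([closed, opened], threshold=50.0)
```

Plotting this per-frame area over a vibration cycle yields the open/closed phases that clinicians inspect in such playbacks.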

Review
Application of Image Fusion in Diagnosis and Treatment of Liver Cancer
Appl. Sci. 2020, 10(3), 1171; https://doi.org/10.3390/app10031171 - 09 Feb 2020
Cited by 12 | Viewed by 1305
Abstract
With the accelerated development of medical imaging equipment and techniques, image fusion technology has been effectively applied to diagnosis, biopsy and radiofrequency ablation, especially for liver tumors. Tumor treatment relying on a single medical imaging modality can face challenges due to the deep position of a lesion, the patient’s surgical history and the specific background conditions of the liver disease. Image fusion technology has been employed to address these challenges: it provides real-time anatomical imaging superimposed with functional images of the same plane, facilitating the diagnosis and treatment of liver tumors. This paper reviews the key principles of image fusion technology and its application in tumor treatment, particularly for liver tumors, and concludes with a discussion of the technology’s limitations and prospects.
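In its simplest form, the superimposition described above amounts to alpha-blending a functional map over a co-registered anatomical slice. The sketch below is a generic illustration, not the clinical systems reviewed, which additionally handle registration, resampling and color mapping:

```python
import numpy as np

def overlay_fusion(anatomical, functional, alpha=0.6):
    """Alpha-blend a functional map over a co-registered anatomical image.

    Both inputs are assumed to be already registered to the same grid;
    each is rescaled to [0, 1] before blending.
    """
    a = (anatomical - anatomical.min()) / (np.ptp(anatomical) + 1e-8)
    f = (functional - functional.min()) / (np.ptp(functional) + 1e-8)
    return alpha * a + (1.0 - alpha) * f

ct = np.array([[0.0, 1.0], [2.0, 3.0]])    # toy anatomical slice
pet = np.array([[3.0, 2.0], [1.0, 0.0]])   # toy functional map
fused = overlay_fusion(ct, pet, alpha=0.5)
```

The `alpha` parameter trades anatomical detail against functional contrast; in practice the functional layer is usually rendered in a color lookup table rather than blended in grayscale.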
