Special Issue "Artificial Intelligence Applied to Medical Imaging and Computational Biology"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 13,229

Special Issue Editors

Dr. Leonardo Rundo
Guest Editor
Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy
Interests: biomedical image analysis; radiomics; machine learning; computational intelligence; high-performance computing
Dr. Carmelo Militello
Guest Editor
Italian National Research Council (CNR), Palermo, Italy
Interests: digital image analysis and processing; biomedical imaging; radiomics; applied machine learning; digital architectures; hardware programmable devices
Dr. Andrea Tangherloni
Guest Editor
Department of Human Sciences, University of Bergamo, Bergamo, Italy
Interests: computational intelligence; machine learning; computational systems biology; bioinformatics; high-performance computing

Special Issue Information

Dear Colleagues,

Medical imaging and computational biology continuously pose new fundamental medical and biological questions that often give rise to novel challenges in Artificial Intelligence (AI). Thus, in these research fields, there is an increasing need for cutting-edge computational approaches, generally involving Machine Learning (ML) or Computational Intelligence (CI) techniques. On the one hand, ML and CI techniques can effectively perform image processing operations (such as segmentation, co-registration, classification, and dimensionality reduction) in the fields of neuroimaging and oncological imaging. Although the manual approach often remains the gold standard in some tasks (e.g., segmentation), ML can be exploited to automate and facilitate the work of researchers and clinicians. On the other hand, ML- and CI-based strategies have been continuously applied to solve problems in Bioinformatics and Computational Systems Biology (e.g., alignments, dimensionality reduction, and parameter estimation). In addition, these fields often present new clustering and classification challenges, as well as combinatorial problems, which can be effectively addressed with novel strategies based on ML and CI techniques. Frequently used approaches include Support Vector Machines (SVMs) for classification problems, graph-based methods, Artificial Neural Networks (ANNs), and Evolutionary Computation (EC) and Swarm Intelligence (SI) techniques.

More recently, Deep Learning (DL) approaches have proven very successful in computer vision and bioinformatics tasks, owing to their ability to automatically extract hierarchical descriptive features from input images or gene expression data. They have also been used in the oncological, neuroimaging, and microscopy imaging domains for automatic disease diagnosis, tissue segmentation, and even synthetic image generation. The main issue, however, remains the relatively small size of typical datasets, which leads to poor generalization of the employed deep ANNs given the high number of parameters they require. Consequently, parameter-efficient design paradigms specifically tailored to biomedical applications ought to be devised, for instance by exploiting CI-based techniques (e.g., EC, SI, and neuroevolution).

In this context, these advanced ML techniques can be suitably exploited to combine heterogeneous sources of information, allowing for multiomics data integration. Such kinds of analyses may represent a significant step towards personalized medicine.

This Special Issue will provide a forum to publish original research papers covering state-of-the-art and novel algorithms, methodologies, and applications of AI methods for biomedical data analysis, ranging from classic ML to DL.

Topics of interest include but are not limited to:

  • ML and CI techniques for segmentation, co-registration, classification, or dimensionality reduction of medical images.
  • Generative adversarial models for medical image super-resolution, denoising, and synthesis.
  • Deep learning for neuroimaging and oncological imaging analysis.
  • Application of graph theory to MRI and functional MRI (fMRI) data.
  • Computational modeling and analysis of neuroimaging.
  • Radiomic analyses for disease phenotyping.
  • Radiogenomics for intra- and intertumoral heterogeneity evaluation.
  • CI methods for optimizing biomedical data analysis tasks.
  • Integration of multiomics data.
  • ML and CI techniques for combinatorial problems in bioinformatics and computational biology.
  • Deep neural networks for classification tasks in single-cell data analysis.
  • New clustering approaches for single-cell data analysis.

Dr. Leonardo Rundo
Dr. Carmelo Militello
Dr. Andrea Tangherloni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2300 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • computational intelligence
  • biomedical image analysis
  • radiomics
  • radiogenomics
  • bioinformatics
  • computational biology
  • multiomics data
  • single-cell data analysis

Published Papers (13 papers)


Editorial


Editorial
Artificial Intelligence Applied to Medical Imaging and Computational Biology
Appl. Sci. 2022, 12(18), 9052; https://doi.org/10.3390/app12189052 - 08 Sep 2022
Viewed by 257
Abstract
The Special Issue “Artificial Intelligence Applied to Medical Imaging and Computational Biology” of the Applied Sciences journal was curated from February 2021 to May 2022 and covered state-of-the-art and novel algorithms and applications of Artificial Intelligence methods for biomedical data analysis, ranging from classic Machine Learning to Deep Learning [...]

Research


Article
Evaluation of Post-Stroke Impairment in Fine Tactile Sensation by Electroencephalography (EEG)-Based Machine Learning
Appl. Sci. 2022, 12(9), 4796; https://doi.org/10.3390/app12094796 - 09 May 2022
Cited by 2 | Viewed by 505
Abstract
Electroencephalography (EEG)-based measurements of fine tactile sensation produce large amounts of data, with high costs for manual evaluation. In this study, an EEG-based machine-learning (ML) model with a support vector machine (SVM) was established to automatically evaluate post-stroke impairments in fine tactile sensation. Stroke survivors (n = 12, stroke group) and unimpaired participants (n = 15, control group) received stimulations with cotton, nylon, and wool fabrics to the different upper limbs of a stroke participant and the dominant side of the control. The average and maximal values of the relative spectral power (RSP) of EEG during the stimulations were used as the inputs to the SVM-ML model, which was first optimized for classification accuracy on different limb sides through hyperparameter selection (γ, C) in the radial basis function (RBF) kernel and cross-validation during cotton stimulation. Model generalization was investigated by comparing accuracies during stimulations with different fabrics to different limbs. The highest accuracies were achieved with (γ = 2^1, C = 2^3) for the RBF kernel (76.8%) and six-fold cross-validation (75.4%), respectively, in the gamma band for cotton stimulation; these were selected as the optimal parameters of the SVM-ML model. In model generalization, significant differences in the post-stroke fabric stimulation accuracies shifted to higher (beta/gamma) bands. The EEG-based SVM-ML model generated results similar to the manual evaluation of cortical responses to fabric stimulations; this may aid automatic assessments of post-stroke fine tactile sensation.

Article
Unsupervised Segmentation in NSCLC: How to Map the Output of Unsupervised Segmentation to Meaningful Histological Labels by Linear Combination?
Appl. Sci. 2022, 12(8), 3718; https://doi.org/10.3390/app12083718 - 07 Apr 2022
Cited by 2 | Viewed by 537
Abstract
Background: Segmentation is, in many Pathomics projects, an initial step. Usually, in supervised settings, well-annotated and large datasets are required. Given the rarity of such datasets, unsupervised learning concepts appear to be a potential solution. Against this background, we tested, on a small dataset of lung cancer tissue microarrays (TMAs), whether a model (i) can first be trained in a previously published unsupervised setting and (ii) can then be modified and retrained to produce meaningful labels; (iii) finally, we compared this approach to standard segmentation models. Methods: (ad i) First, a convolutional neural network (CNN) segmentation model is trained in an unsupervised fashion, as recently described by Kanezaki et al. (ad ii) Second, the model is modified by adding a remapping block and is retrained on an annotated dataset in a supervised setting. (ad iii) Third, the segmentation results are compared to standard segmentation models trained on the same dataset. Results: (ad i–ii) By adding an additional mapping-block layer and by retraining, models previously trained in an unsupervised manner can produce meaningful labels. (ad iii) The segmentation quality is inferior to that of standard segmentation models trained on the same dataset. Conclusions: Unsupervised training in combination with subsequent supervised training offered no benefit for the histological images considered here.
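The remapping idea in the abstract above can be illustrated with a deliberately simplified stand-in: instead of the paper's trainable remapping block, the sketch below maps each unsupervised cluster id to the annotated label it most often overlaps with (all names are illustrative):

```python
from collections import Counter, defaultdict

def majority_remap(cluster_ids, true_labels):
    """Map each unsupervised cluster id to the annotated label it most
    often co-occurs with; a crude, non-learned analogue of a trainable
    remapping block."""
    votes = defaultdict(Counter)
    for c, y in zip(cluster_ids, true_labels):
        votes[c][y] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}
```

Applied pixel-wise (or patch-wise) over an annotated subset, such a mapping turns arbitrary cluster ids into histological labels, at the cost of ignoring mixed clusters that a learned linear combination could split.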

Article
Deep Learning-Based Automatic Segmentation of Mandible and Maxilla in Multi-Center CT Images
Appl. Sci. 2022, 12(3), 1358; https://doi.org/10.3390/app12031358 - 27 Jan 2022
Cited by 1 | Viewed by 701
Abstract
Sophisticated segmentation of the craniomaxillofacial bones (the mandible and maxilla) in computed tomography (CT) is essential for diagnosis and treatment planning in craniomaxillofacial surgery. Conventional manual segmentation is time-consuming and challenging due to intrinsic properties of craniomaxillofacial bones and head CT, such as variance in anatomical structures, low soft-tissue contrast, and artifacts caused by metal implants. Moreover, data-driven segmentation methods, including deep learning, require a large consistent dataset, which creates a bottleneck for their clinical application. In this study, we propose a deep learning approach for the automatic segmentation of the mandible and maxilla in CT images with enhanced compatibility for multi-center datasets. Four multi-center datasets acquired under various conditions were used to create a scenario in which the model was trained with one dataset and evaluated on the others. For the neural network, we added a hierarchical, parallel, and multi-scale residual block to the U-Net (HPMR-U-Net). To evaluate performance, segmentation with the in-house dataset and with the external multi-center datasets was conducted in comparison to three other neural networks: U-Net, Res-U-Net, and mU-Net. The results suggest that the segmentation performance of HPMR-U-Net is comparable to that of the other models, with superior data compatibility.

Article
On Unsupervised Methods for Medical Image Segmentation: Investigating Classic Approaches in Breast Cancer DCE-MRI
Appl. Sci. 2022, 12(1), 162; https://doi.org/10.3390/app12010162 - 24 Dec 2021
Cited by 4 | Viewed by 1373
Abstract
Unsupervised segmentation techniques, which do not require labeled data for training and can be more easily integrated into the clinical routine, represent a valid solution especially from a clinical feasibility perspective. Indeed, large-scale annotated datasets are not always available, undermining their immediate implementation and use in the clinic. Breast cancer is the most common cause of cancer death in women worldwide. In this study, breast lesion delineation in Dynamic Contrast Enhanced MRI (DCE-MRI) series was addressed by means of four popular unsupervised segmentation approaches: Split-and-Merge combined with Region Growing (SMRG), k-means, Fuzzy C-Means (FCM), and spatial FCM (sFCM). They represent well-established pattern recognition techniques that are still widely used in clinical research. Starting from the basic versions of these segmentation approaches, during our analysis, we identified the shortcomings of each of them, proposing improved versions, as well as developing ad hoc pre- and post-processing steps. The obtained experimental results, in terms of area-based—namely, Dice Index (DI), Jaccard Index (JI), Sensitivity, Specificity, False Positive Ratio (FPR), False Negative Ratio (FNR)—and distance-based metrics—Mean Absolute Distance (MAD), Maximum Distance (MaxD), Hausdorff Distance (HD)—encourage the use of unsupervised machine learning techniques in medical image segmentation. In particular, fuzzy clustering approaches (namely, FCM and sFCM) achieved the best performance. In fact, for area-based metrics, they obtained DI = 78.23% ± 6.50 (sFCM), JI = 65.90% ± 8.14 (sFCM), sensitivity = 77.84% ± 8.72 (FCM), specificity = 87.10% ± 8.24 (sFCM), FPR = 0.14 ± 0.12 (sFCM), and FNR = 0.22 ± 0.09 (sFCM). Concerning distance-based metrics, they obtained MAD = 1.37 ± 0.90 (sFCM), MaxD = 4.04 ± 2.87 (sFCM), and HD = 2.21 ± 0.43 (FCM). 
These experimental findings suggest that further research on advanced fuzzy logic techniques specifically tailored to medical image segmentation would be worthwhile.
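The area-based metrics reported above are straightforward to compute; the following is a minimal sketch over flattened binary masks (illustrative, not the study's implementation):

```python
def dice_jaccard(pred, truth):
    """Dice Index and Jaccard Index between two binary masks,
    given as flat 0/1 sequences of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)  # overlap
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * tp / (p_sum + t_sum)
    jaccard = tp / (p_sum + t_sum - tp)  # |A ∩ B| / |A ∪ B|
    return dice, jaccard
```

Note the fixed relationship J = D / (2 − D), which is why the reported DI and JI values (e.g., 78.23% and 65.90% for sFCM) move together.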

Article
Development of Detection and Volumetric Methods for the Triceps of the Lower Leg Using Magnetic Resonance Images with Deep Learning
Appl. Sci. 2021, 11(24), 12006; https://doi.org/10.3390/app112412006 - 16 Dec 2021
Cited by 1 | Viewed by 677
Abstract
Purpose: A deep learning technique was used to analyze the triceps surae muscle. The devised interpolation method was used to determine the muscles' volumes and to verify the usefulness of the method. Materials and Methods: Thirty-eight T1-weighted cross-sectional magnetic resonance images of the triceps of the lower leg were divided into three classes, i.e., gastrocnemius lateralis (GL), gastrocnemius medialis (GM), and soleus (SOL), and the regions of interest (ROIs) were manually defined. The supervised images were grouped per patient. A total of 1199 images were prepared. Six different patient-wise datasets were prepared for K-fold cross-validation. A DeepLabv3+ network model was used for training. The images generated by the trained model were split per patient and classified by muscle type. The model performance and the interpolation method were evaluated by calculating the Dice similarity coefficient (DSC) and the error rates of the volumes of the predicted and interpolated images, respectively. Results: The mean DSCs for the predicted images were >0.81 for GM and SOL and 0.71 for GL. The mean error rates for volume were approximately 11% for GL, SOL, and the total volume, and 23% for GM. The DSCs in the interpolated images were >0.8 for all muscles. The mean error rates of volume were <10% for GL, SOL, and the total volume, and 18% for GM. There was no significant difference between the volumes obtained from the supervised and interpolated images. Conclusions: Using deep learning-based semantic segmentation, the triceps were detected with high accuracy, and the interpolation method used in this study to determine the volumes was useful.
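Interpolation-based volumetry of the kind described above amounts to integrating the per-slice ROI areas along the slice axis; a minimal sketch using linear interpolation between consecutive slices (the trapezoidal rule; the units and spacing are illustrative assumptions, not taken from the study):

```python
def volume_from_slices(areas_mm2, spacing_mm):
    """Volume from a stack of cross-sectional ROI areas, linearly
    interpolating between consecutive slices (trapezoidal rule)."""
    return sum(spacing_mm * (a0 + a1) / 2
               for a0, a1 in zip(areas_mm2, areas_mm2[1:]))
```

The error rates reported in the abstract compare such interpolated volumes against volumes computed from the fully supervised ROI stack.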

Article
Automated Breast Lesion Detection and Characterization with the Wavelia Microwave Breast Imaging System: Methodological Proof-of-Concept on First-in-Human Patient Data
Appl. Sci. 2021, 11(21), 9998; https://doi.org/10.3390/app11219998 - 26 Oct 2021
Cited by 2 | Viewed by 687
Abstract
Microwave Breast Imaging (MBI) is an emerging non-ionizing imaging modality with the potential to support breast diagnosis and management. Wavelia is a first-generation MBI system prototype that has recently completed a First-In-Human (FiH) clinical investigation on a cohort of 25 symptomatic patients, to explore the capacity of the technology to detect and characterize malignant (invasive carcinoma) and benign (fibroadenoma, cyst) breast disease. Two recent publications presented promising results demonstrated by the device in this FiH study in detecting and localizing, as well as delineating the size and malignancy risk of, malignant and benign palpable breast lesions. In this paper, the methodology employed in the Wavelia semi-automated Quantitative Imaging Function (QIF) to support breast lesion detection and characterization in the FiH clinical investigation is presented, and the critical design parameters are highlighted.

Article
Artificial Neural Network-Derived Cerebral Metabolic Rate of Oxygen for Differentiating Glioblastoma and Brain Metastasis in MRI: A Feasibility Study
Appl. Sci. 2021, 11(21), 9928; https://doi.org/10.3390/app11219928 - 24 Oct 2021
Cited by 1 | Viewed by 875
Abstract
Glioblastoma may appear similar to cerebral metastasis on conventional MRI in some cases, but their therapies differ significantly. This prospective feasibility study was aimed at differentiating them by applying the quantitative susceptibility mapping and quantitative blood-oxygen-level-dependent (QSM + qBOLD) model to these entities for the first time. We prospectively included 15 untreated patients with glioblastoma (n = 7, median age: 68 years, range: 54–84 years) or brain metastasis (n = 8, median age: 66 years, range: 50–78 years) who underwent preoperative MRI including multi-gradient echo and arterial spin labeling sequences. Oxygen extraction fraction (OEF), cerebral blood flow (CBF), and cerebral metabolic rate of oxygen (CMRO2) were calculated in the contrast-enhancing tumor (CET) and the peritumoral non-enhancing T2 hyperintense region (NET2), using an artificial neural network. We demonstrated that OEF in CET was significantly lower (p = 0.03) for glioblastomas than metastases, all features were significantly higher (p = 0.01) in CET than in NET2 for metastasis patients only, and the ratios of CET/NET2 for CBF (p = 0.04) and CMRO2 (p = 0.01) were significantly higher in metastasis patients than in glioblastoma patients. The discriminative power of a support-vector machine classifier was highest with a combination of two features, yielding an area under the receiver operating characteristic curve of 0.94 with 93% diagnostic accuracy. QSM + qBOLD allows for robust differentiation of glioblastoma and cerebral metastasis while yielding insights into tumor oxygenation.
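The quantities in the abstract above are linked by the Fick principle, CMRO2 = CBF × OEF × CaO2; a minimal sketch of that relation (the default arterial oxygen content CaO2 is an assumed typical value, not taken from the study):

```python
def cmro2(cbf, oef, cao2=0.19):
    """Cerebral metabolic rate of oxygen via the Fick principle:
    CMRO2 = CBF * OEF * CaO2. With CBF in mL/100 g/min and CaO2 in
    mL O2 per mL blood, CMRO2 comes out in mL O2/100 g/min."""
    return cbf * oef * cao2
```

This is why the CET/NET2 ratio findings for CBF and CMRO2 are coupled: CMRO2 inherits CBF's spatial pattern, modulated by the voxel-wise OEF.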

Article
Choroidal Neovascularization Screening on OCT-Angiography Choriocapillaris Images by Convolutional Neural Networks
Appl. Sci. 2021, 11(19), 9313; https://doi.org/10.3390/app11199313 - 08 Oct 2021
Cited by 2 | Viewed by 729
Abstract
Choroidal Neovascularization (CNV) is the advanced stage of Age-related Macular Degeneration (AMD), the leading cause of irreversible visual loss in elderly people in developed countries. Optical Coherence Tomography Angiography (OCTA) is a recent non-invasive imaging technique now widely used in the diagnosis and follow-up of CNV. In this study, automatic screening for CNV based on deep learning is performed using OCTA choriocapillaris images. CNV eyes (advanced wet AMD) are distinguished from healthy eyes (no AMD) and eyes with drusen (intermediate AMD). An OCTA dataset of 1396 images is used to train and evaluate the model. A pre-trained convolutional neural network (CNN) is fine-tuned and validated on 80% of the dataset, while the remaining 20% is held out for predictions. The model accurately detects CNV on the test set, with an accuracy of 89.74%, a precision of 0.96, and an area under the receiver operating characteristic curve of 0.99. A good overall classification accuracy of 88.46% is obtained on a balanced test set. A detailed analysis of misclassified images shows that they are also considered ambiguous by expert clinicians. This CNN-based application could genuinely assist clinicians in the challenging task of screening for neovascular complications.
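Accuracy and precision figures like those reported above derive directly from the binary confusion matrix of the held-out test set; a minimal sketch (the example counts in the test are invented for illustration, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy and precision from binary confusion-matrix counts:
    accuracy  = (TP + TN) / all predictions
    precision = TP / (TP + FP), i.e. how often a positive call is right."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return accuracy, precision
```

A high precision with a lower overall accuracy, as reported here, typically indicates that most errors are false negatives rather than false alarms.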

Article
Deep Learning-Based Segmentation of Various Brain Lesions for Radiosurgery
Appl. Sci. 2021, 11(19), 9180; https://doi.org/10.3390/app11199180 - 02 Oct 2021
Cited by 2 | Viewed by 849
Abstract
Semantic segmentation of medical images with deep learning models is developing rapidly. In this study, we benchmarked state-of-the-art deep learning segmentation algorithms on our clinical stereotactic radiosurgery dataset. The dataset consists of 1688 patients with various brain lesions (pituitary tumors, meningioma, schwannoma, brain metastases, arteriovenous malformation, and trigeminal neuralgia), and we divided it into a training set (1557 patients) and a test set (131 patients). This study demonstrates the strengths and weaknesses of deep-learning algorithms in a fairly practical scenario. We compared model performance with respect to the sampling method, model architecture, and choice of loss function, identifying suitable settings for their applications and shedding light on possible improvements. Evidence from this study led us to conclude that deep learning can be promising in assisting the segmentation of brain lesions, even when the training dataset is highly heterogeneous in lesion types and sizes.

Article
Multichannel Multiscale Two-Stage Convolutional Neural Network for the Detection and Localization of Myocardial Infarction Using Vectorcardiogram Signal
Appl. Sci. 2021, 11(17), 7920; https://doi.org/10.3390/app11177920 - 27 Aug 2021
Cited by 2 | Viewed by 808
Abstract
Myocardial infarction (MI) occurs due to a decrease in blood flow to one part of the heart, which further damages the heart muscle. The 12-channel electrocardiogram (ECG) has been widely used to detect and localize MI pathology in clinical studies. The vectorcardiogram (VCG) is a 3-channel recording system used to measure the heart's electrical activity in the sagittal, transverse, and frontal planes. VCG signals have advantages over the 12-channel ECG in localizing posterior MI pathology. Detection and localization of MI using VCG signals are vital in clinical practice. This paper proposes a multi-channel, multi-scale, two-stage deep-learning-based approach to detect and localize MI using VCG signals. In the first stage, multivariate variational mode decomposition (MVMD) decomposes each three-channel VCG signal beat into five components along each channel. A multi-channel multi-scale VCG tensor is formulated using the modes of each channel of the VCG data and used as the input to a deep convolutional neural network (CNN) to classify MI and normal sinus rhythm (NSR) classes. In the second stage, a multi-class deep CNN categorizes anterior MI (AMI), anterior-lateral MI (ALMI), anterior-septal MI (ASMI), inferior MI (IMI), inferior-lateral MI (ILMI), and inferior-posterior-lateral MI (IPLMI) classes using the MI-detected multi-channel multi-scale VCG instances from the first stage. The proposed approach is developed using VCG data obtained from a public database. The results reveal that the approach obtained accuracy, sensitivity, and specificity values of 99.58%, 99.18%, and 99.87%, respectively, for MI detection. Moreover, for MI localization, the proposed network obtained an overall accuracy of 99.86% in the second stage. The proposed approach demonstrated superior classification performance compared to existing VCG signal-based MI detection and localization techniques.

Article
Quantitative and Qualitative Image Analysis of In Vitro Co-Culture 3D Tumor Spheroid Model by Employing Image-Processing Techniques
Appl. Sci. 2021, 11(10), 4636; https://doi.org/10.3390/app11104636 - 19 May 2021
Cited by 1 | Viewed by 898
Abstract
This work proposes a novel region-estimation (RE) algorithm that uses the quantification of colon-cancer (HCT-8) and fibroblast (NIH3T3) cells to estimate the densest region of colon-cancer cells in in vitro 3D co-cultured spheroids. Cells were labelled with different cell-tracker dyes so that they could be tracked. The quantification of HCT-8 and NIH3T3 cells by the RE algorithm leads to a distribution pattern analysis of cells from the core to the periphery, which ultimately estimates the densest region of HCT-8 cells in an in vitro 3D cell spheroid. Cell quantification by the RE algorithm was compared with quantification by the ImageJ software. The results demonstrated the distribution patterns of cells from the core to the peripheral region of the in vitro 3D cell spheroid. Overall, the experiments showed that the proposed methodology outperformed state-of-the-art approaches in terms of segmentation, quantification, and reduction of biasing error.
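The core-to-periphery distribution analysis described above can be sketched as counting cells per concentric ring around the spheroid centre (a deliberately simplified 2D stand-in for the paper's RE algorithm; all parameters are illustrative):

```python
import math

def radial_density(points, center, n_bins, radius):
    """Count cells per concentric ring, from the core (bin 0) out to
    the periphery; the bin with the highest count estimates the
    densest region. Cells beyond `radius` are ignored."""
    counts = [0] * n_bins
    for x, y in points:
        r = math.dist((x, y), center)
        if r < radius:
            counts[int(r / radius * n_bins)] += 1
    return counts
```

Running this separately on the HCT-8 and NIH3T3 detections would yield per-cell-type distribution profiles comparable in spirit to the paper's core-to-periphery analysis.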

Review


Review
Deep Learning for Orthopedic Disease Based on Medical Image Analysis: Present and Future
Appl. Sci. 2022, 12(2), 681; https://doi.org/10.3390/app12020681 - 11 Jan 2022
Cited by 4 | Viewed by 998
Abstract
Since its development, deep learning has been quickly incorporated into the field of medicine and has had a profound impact. Since 2017, many studies applying deep learning-based diagnostics in the field of orthopedics have demonstrated outstanding performance. However, most published papers have focused on disease detection or classification, leaving areas such as segmentation and prediction comparatively underexplored. This review introduces research published in the field of orthopedics, classified by disease from the perspective of orthopedic surgeons, and discusses areas of future research. It provides orthopedic surgeons with an overall understanding of artificial intelligence-based image analysis and with the insight that medical data should be interpreted with minimal bias, while providing developers and researchers with insight into the real-world context in which clinicians are embracing medical artificial intelligence.
