Artificial Intelligence Applications in Cancer and Other Diseases

A special issue of Biomedicines (ISSN 2227-9059). This special issue belongs to the section "Biomedical Engineering and Materials".

Deadline for manuscript submissions: closed (31 March 2025) | Viewed by 16,219

Special Issue Editor


Guest Editor
School of Engineering Technology, Purdue University, Knoy Hall of Technology, West Lafayette, IN 47907, USA
Interests: artificial intelligence; machine learning; neural networks; deep learning; obesity; diabetes; cancer; other diseases; pathology; drug discovery

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and its subsets, including machine learning, neural networks, and deep learning, have the potential to revolutionize the medical field. AI is useful not only for analyzing medical images and signals, such as ECGs and EEGs, but also for both labeled and unlabeled data. Machine learning algorithms such as naïve Bayes and support vector machines (SVMs) have been applied to predict breast cancer occurrence and patterns and to support early detection. AI can be used for both communicable and non-communicable diseases. Supervised, unsupervised, and semi-supervised machine learning models offer advanced algorithms suited to whatever type of data is available, in addition to the images most commonly used in this kind of research. With an estimated one in two men and one in three women in the US expected to develop cancer, and with obesity, diabetes, cancer, and other diseases increasing globally, the need for tools beyond conventional ones, such as AI, in the early detection and prediction of cancer and other diseases, as well as in pathology, drug discovery, and related applications, cannot be overstated. Toward this end, this Special Issue invites original research articles, detailed review articles, and short communications on the applications of AI in cancer and other diseases.
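
As a concrete illustration of one of the classical algorithms named above, the following is a from-scratch Gaussian naïve Bayes sketch on toy two-class data. The features and values are invented for illustration and are not drawn from any clinical dataset or study in this issue.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes: each feature is modeled as an
    independent normal distribution per class."""

    def fit(self, X, y):
        by_class = defaultdict(list)
        for row, label in zip(X, y):
            by_class[label].append(row)
        # Class priors from label frequencies.
        self.priors = {c: len(rows) / len(X) for c, rows in by_class.items()}
        # Per-class, per-feature mean and variance (variance floored for safety).
        self.stats = {}
        for c, rows in by_class.items():
            feats = []
            for col in zip(*rows):
                mean = sum(col) / len(col)
                var = max(sum((v - mean) ** 2 for v in col) / len(col), 1e-9)
                feats.append((mean, var))
            self.stats[c] = feats
        return self

    def predict(self, X):
        def log_gauss(x, mean, var):
            return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
        out = []
        for row in X:
            # Log-posterior (up to a constant) for each class; pick the argmax.
            scores = {
                c: math.log(self.priors[c])
                   + sum(log_gauss(x, m, v)
                         for x, (m, v) in zip(row, self.stats[c]))
                for c in self.stats
            }
            out.append(max(scores, key=scores.get))
        return out

# Toy two-class data: three low-valued and three high-valued samples.
X = [[1.0, 2.0], [1.2, 1.9], [0.8, 2.1], [5.0, 6.0], [5.2, 5.9], [4.8, 6.1]]
y = [0, 0, 0, 1, 1, 1]
labels = GaussianNB().fit(X, y).predict([[1.1, 2.0], [5.1, 6.0]])
```

In practice a library implementation (e.g., scikit-learn's) would be used; the sketch only shows why the method is called "naïve": every feature is treated as independent given the class.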

Prof. Dr. Raji Sundararajan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomedicines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • neural networks
  • deep learning
  • obesity
  • diabetes
  • cancer
  • other diseases
  • pathology
  • drug discovery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


14 pages, 7028 KiB  
Article
Deep Learning-Based Real-Time Organ Localization and Transit Time Estimation in Wireless Capsule Endoscopy
by Seung-Joo Nam, Gwiseong Moon, Jung-Hwan Park, Yoon Kim, Yun Jeong Lim and Hyun-Soo Choi
Biomedicines 2024, 12(8), 1704; https://doi.org/10.3390/biomedicines12081704 - 31 Jul 2024
Cited by 2 | Viewed by 1619
Abstract
Background: Wireless capsule endoscopy (WCE) has significantly advanced the diagnosis of gastrointestinal (GI) diseases by allowing for the non-invasive visualization of the entire small intestine. However, machine learning-based methods for organ classification in WCE often rely on color information, leading to decreased performance when obstacles such as food debris are present. This study proposes a novel model that integrates convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to analyze multiple frames and incorporate temporal information, ensuring that it performs well even when visual information is limited. Methods: We collected data from 126 patients using PillCam™ SB3 (Medtronic, Minneapolis, MN, USA), which comprised 2,395,932 images. Our deep learning model was trained to identify organs (stomach, small intestine, and colon) using data from 44 training and 10 validation cases. We applied calibration using a Gaussian filter to enhance the accuracy of detecting organ boundaries. Additionally, we estimated the transit time of the capsule in the gastric and small intestine regions using a combination of a convolutional neural network (CNN) and a long short-term memory (LSTM) designed to be aware of the sequence information of continuous videos. Finally, we evaluated the model’s performance using WCE videos from 72 patients. Results: Our model demonstrated high performance in organ classification, achieving an accuracy, sensitivity, and specificity of over 95% for each organ (stomach, small intestine, and colon), with an overall accuracy and F1-score of 97.1%. The Matthews Correlation Coefficient (MCC) and Geometric Mean (G-mean) were used to evaluate the model’s performance on imbalanced datasets, achieving MCC values of 0.93 for the stomach, 0.91 for the small intestine, and 0.94 for the colon, and G-mean values of 0.96 for the stomach, 0.95 for the small intestine, and 0.97 for the colon. 
Regarding the estimation of gastric and small intestine transit times, the mean time differences between the model predictions and ground truth were 4.3 ± 9.7 min for the stomach and 24.7 ± 33.8 min for the small intestine. Notably, the model’s predictions for gastric transit times were within 15 min of the ground truth for 95.8% of the test dataset (69 out of 72 cases). The proposed model shows overall superior performance compared to a model using only CNN. Conclusions: The combination of CNN and LSTM proves to be both accurate and clinically effective for organ classification and transit time estimation in WCE. Our model’s ability to integrate temporal information allows it to maintain high performance even in challenging conditions where color information alone is insufficient. Including MCC and G-mean metrics further validates the robustness of our approach in handling imbalanced datasets. These findings suggest that the proposed method can significantly improve the diagnostic accuracy and efficiency of WCE, making it a valuable tool in clinical practice for diagnosing and managing GI diseases. Full article
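
The MCC and G-mean figures reported above follow directly from a binary confusion matrix. A small self-contained sketch, with illustrative counts rather than the paper's data:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def g_mean(tp, fp, fn, tn):
    """Geometric mean of sensitivity and specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)
```

Both metrics stay informative on imbalanced data because they combine performance on positives and negatives symmetrically, which is why the authors report them alongside accuracy.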
(This article belongs to the Special Issue Artificial Intelligence Applications in Cancer and Other Diseases)

18 pages, 2469 KiB  
Article
Deep Learning-Based Surgical Treatment Recommendation and Nonsurgical Prognosis Status Classification for Scaphoid Fractures by Automated X-ray Image Recognition
by Ja-Hwung Su, Yu-Cheng Tung, Yi-Wen Liao, Hung-Yu Wang, Bo-Hong Chen, Ching-Di Chang, Yu-Fan Cheng, Wan-Ching Chang and Chu-Yu Chin
Biomedicines 2024, 12(6), 1198; https://doi.org/10.3390/biomedicines12061198 - 28 May 2024
Viewed by 1493
Abstract
Biomedical information retrieval for diagnosis, treatment, and prognosis has been studied for a long time. In particular, image recognition using deep learning has been shown to be very effective for cancers and other diseases. In these fields, scaphoid fracture recognition is a hot topic because scaphoid fractures are not easy to detect. Although there have been a number of recent studies on this topic, no study has focused on surgical treatment recommendation or nonsurgical prognosis status classification. Indeed, a successful treatment recommendation will assist the doctor in selecting an effective treatment, and the prognosis status classification will help a radiologist recognize the image more efficiently. For these purposes, in this paper, we propose potential solutions through a comprehensive empirical study assessing the effectiveness of recent deep learning techniques for surgical treatment recommendation and nonsurgical prognosis status classification. In the proposed system, the scaphoid is first segmented from an unknown X-ray image. Next, for surgical treatment recommendation, the fractures are further filtered and recognized. According to the recognition result, the surgical treatment recommendation is generated. Finally, even without sufficient fracture information, the doctor can still make an effective decision on whether or not to operate. Moreover, for nonsurgical patients, the current prognosis status of avascular necrosis, non-union, and union can be classified. Experimental results on a real dataset reveal that surgical treatment recommendation reached 80% accuracy and 86% AUC (area under the curve), while nonsurgical prognosis status classification reached 91% and 96%, respectively. 
Further, methods using transfer learning and data augmentation yield clear improvements, which, on average, reached 21.9% and 28.9% for surgical treatment recommendation and 5.6% and 7.8% for nonsurgical prognosis image classification, respectively. Based on the experimental results, the recommended methods in this paper are DenseNet169 for surgical treatment recommendation and ResNet50 for nonsurgical prognosis status classification. We believe that this paper can provide an important reference for future research on surgical treatment recommendation and nonsurgical prognosis classification for scaphoid fractures. Full article
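
The AUC values quoted here can be computed as a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with illustrative scores, not the paper's data:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive case outranks a random
    negative case (Mann-Whitney U statistic), counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise formulation makes clear why AUC is threshold-free: it depends only on the ordering of scores, not on any particular cut-off.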

19 pages, 3276 KiB  
Article
Machine Learning Model in Obesity to Predict Weight Loss One Year after Bariatric Surgery: A Pilot Study
by Enrique Nadal, Esther Benito, Ana María Ródenas-Navarro, Ana Palanca, Sergio Martinez-Hervas, Miguel Civera, Joaquín Ortega, Blanca Alabadi, Laura Piqueras, Juan José Ródenas and José T. Real
Biomedicines 2024, 12(6), 1175; https://doi.org/10.3390/biomedicines12061175 - 25 May 2024
Cited by 2 | Viewed by 1558
Abstract
Roux-en-Y gastric bypass (RYGB) is a treatment for severe obesity. However, many patients have insufficient total weight loss (TWL) after RYGB. Although multiple factors are involved, their influence is incompletely known. The aim of this exploratory study was to evaluate the feasibility and reliability of using machine learning (ML) techniques to estimate the success of weight loss after RYGB, based on clinical, anthropometric, and biochemical data, in order to identify morbidly obese patients with poor weight responses. We retrospectively analyzed 118 patients who underwent RYGB at the Hospital Clínico Universitario of Valencia (Spain) between 2013 and 2017. We applied an ML approach using local linear embedding (LLE) as a tool for the evaluation and classification of the main parameters, in conjunction with evolutionary algorithms for the optimization and adjustment of the parameter model. The variables associated with one-year postoperative %TWL were obstructive sleep apnea, osteoarthritis, insulin treatment, preoperative weight, insulin resistance index, apolipoprotein A, uric acid, complement component 3, and vitamin B12. The model correctly classified 71.4% of subjects with TWL < 30%, although 36.4% of those with TWL ≥ 30% were incorrectly classified as “unsuccessful procedures”. The ML model showed moderate discriminatory precision in the validation set. Thus, in severe obesity, ML models can be useful to assist in the selection of patients before bariatric surgery. Full article
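
As a rough illustration of the LLE dimensionality-reduction step described above, the following sketch uses scikit-learn on random stand-in data; the patient features, neighbor count, and all other parameter choices here are assumptions, not values taken from the study:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# 118 random stand-in "patients" with 9 numeric predictors each
# (mirroring only the cohort size mentioned above, nothing else).
rng = np.random.default_rng(0)
X = rng.normal(size=(118, 9))

# LLE reconstructs each point from its neighbors and preserves those
# local weights in a low-dimensional embedding.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=0)
embedding = lle.fit_transform(X)  # one 2-D coordinate per patient
```

A classifier (or, as in the paper, an evolutionary parameter search) would then operate on the embedded coordinates rather than the raw features.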

18 pages, 7405 KiB  
Article
Equilibrium Optimization Algorithm with Deep Learning Enabled Prostate Cancer Detection on MRI Images
by Eunmok Yang, K. Shankar, Sachin Kumar, Changho Seo and Inkyu Moon
Biomedicines 2023, 11(12), 3200; https://doi.org/10.3390/biomedicines11123200 - 1 Dec 2023
Cited by 7 | Viewed by 2048
Abstract
The enlargement of the prostate gland in the reproductive system of males is considered a form of prostate cancer (PrC). The survival rate is considerably improved with earlier diagnosis of cancer; thus, timely intervention should be administered. In this study, a new automatic approach combining several deep learning (DL) techniques was introduced to detect PrC from MRI and ultrasound (US) images. Furthermore, the presented method describes why a certain decision was made given the input MRI or US images. Many custom-developed layers were added to the pretrained model and trained on the dataset. The study presents an Equilibrium Optimization Algorithm with Deep Learning-based Prostate Cancer Detection and Classification (EOADL-PCDC) technique on MRIs. The main goal of the EOADL-PCDC method lies in the detection and classification of PrC. To achieve this, the EOADL-PCDC technique applies image preprocessing to improve the image quality. In addition, the EOADL-PCDC technique follows the CapsNet (capsule network) model for feature extraction. The EOA is used for hyperparameter tuning to increase the efficiency of CapsNet. The EOADL-PCDC algorithm makes use of the stacked bidirectional long short-term memory (SBiLSTM) model for prostate cancer classification. A comprehensive set of simulations of the EOADL-PCDC algorithm was tested on the benchmark MRI dataset. The experimental outcome revealed the superior performance of the EOADL-PCDC approach over existing methods in terms of different metrics. Full article

11 pages, 3266 KiB  
Article
A Radiotherapy Dose Map-Guided Deep Learning Method for Predicting Pathological Complete Response in Esophageal Cancer Patients after Neoadjuvant Chemoradiotherapy Followed by Surgery
by Wing-Keen Yap, Ing-Tsung Hsiao, Wing-Lake Yap, Tsung-You Tsai, Yi-An Lu, Chan-Keng Yang, Meng-Ting Peng, En-Lin Su and Shih-Chun Cheng
Biomedicines 2023, 11(11), 3072; https://doi.org/10.3390/biomedicines11113072 - 16 Nov 2023
Cited by 7 | Viewed by 2205
Abstract
Esophageal cancer is a deadly disease, and neoadjuvant chemoradiotherapy can improve patient survival, particularly for patients achieving a pathological complete response (ypCR). However, existing imaging methods struggle to accurately predict ypCR. This study explores computer-aided detection methods, considering both imaging data and radiotherapy dose variations to enhance prediction accuracy. It involved patients with node-positive esophageal squamous cell carcinoma undergoing neoadjuvant chemoradiotherapy and surgery, with data collected from 2014 to 2017, randomly split into five subsets for 5-fold cross-validation. The algorithm DCRNet, an advanced version of OCRNet, integrates RT dose distribution into dose contextual representations (DCR), combining dose and pixel representation with ten soft regions. Among the 80 enrolled patients (mean age 55.68 years, primarily male, with stage III disease and middle-part lesions), the ypCR rate was 28.75%, showing no significant demographic or disease differences between the ypCR and non-ypCR groups. Among the three summarization methods, the maximum value across the CTV method produced the best results with an AUC of 0.928. The HRNetV2p model with DCR performed the best among the four backbone models tested, with an AUC of 0.928 (95% CI, 0.884–0.972) based on 5-fold cross-validation, showing significant improvement compared to other models. This underscores DCR-equipped models’ superior AUC outcomes. The study highlights the potential of dose-guided deep learning in ypCR prediction, necessitating larger, multicenter studies to validate the results. Full article

21 pages, 6153 KiB  
Article
Effective Invasiveness Recognition of Imbalanced Data by Semi-Automated Segmentations of Lung Nodules
by Yu-Cheng Tung, Ja-Hwung Su, Yi-Wen Liao, Yeong-Chyi Lee, Bo-An Chen, Hong-Ming Huang, Jia-Jhan Jhang, Hsin-Yi Hsieh, Yu-Shun Tong, Yu-Fan Cheng, Chien-Hao Lai and Wan-Ching Chang
Biomedicines 2023, 11(11), 2938; https://doi.org/10.3390/biomedicines11112938 - 30 Oct 2023
Cited by 2 | Viewed by 1669
Abstract
Over the past few decades, the recognition of early lung cancers has been researched for effective treatments. In early lung cancers, invasiveness is an important factor for expected survival rates. Hence, how to effectively identify invasiveness from computed tomography (CT) images has become a hot topic in the field of biomedical science. Although a number of previous works have proven effective on this topic, some problems remain unresolved. First, a large amount of annotated data is needed for better prediction, but the cost of manual annotation is high. Second, accuracy is limited on imbalanced data. To alleviate these problems, in this paper, we propose an effective CT invasiveness recognizer based on semi-automated segmentation, which makes it easy for doctors to mark the nodules. Based on just one clicked pixel, a nodule object in a CT image can be marked by fusing two proposed segmentation methods: thresholding-based morphology and a deep learning-based mask region-based convolutional neural network (Mask-RCNN). For thresholding-based morphology, an initial segmentation is derived by adaptive pixel connections, and a mathematical morphology operation is then performed to achieve a better segmentation. For the deep learning-based Mask-RCNN, the anchor is fixed by the clicked pixel to reduce the computational complexity. To combine the advantages of both, the segmentation is switched between these two sub-methods. After segmenting the nodules, a boosting ensemble classification model with feature selection is executed to identify the invasiveness using equalized down-sampling. Extensive experimental results on a real dataset reveal that the proposed segmentation method performs better than traditional segmentation methods, reaching an average dice improvement of 392.3%. 
Additionally, the proposed ensemble classification model achieves better performance than the compared method, reaching an area under the curve (AUC) improvement of 5.3% and a specificity improvement of 14.3%. Moreover, in comparison with models trained on imbalanced data, the improvements in AUC and specificity reach 10.4% and 33.3%, respectively. Full article
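
The dice metric behind the segmentation improvement reported above compares the overlap of two binary masks. A minimal sketch, with illustrative pixel coordinates rather than real CT masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as sets of (row, col) pixel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    # Twice the overlap, normalized by the total size of both masks.
    return 2 * len(a & b) / (len(a) + len(b))
```

A dice of 1.0 means the predicted nodule mask matches the reference exactly; 0.0 means no overlap at all.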

15 pages, 3792 KiB  
Article
ASNET: A Novel AI Framework for Accurate Ankylosing Spondylitis Diagnosis from MRI
by Nevsun Pihtili Tas, Oguz Kaya, Gulay Macin, Burak Tasci, Sengul Dogan and Turker Tuncer
Biomedicines 2023, 11(9), 2441; https://doi.org/10.3390/biomedicines11092441 - 1 Sep 2023
Cited by 15 | Viewed by 2726
Abstract
Background: Ankylosing spondylitis (AS) is a chronic, painful, progressive disease usually seen in the spine. Traditional diagnostic methods have limitations in detecting the early stages of AS. The early diagnosis of AS can improve patients’ quality of life. This study aims to diagnose AS with a pre-trained hybrid model using magnetic resonance imaging (MRI). Materials and Methods: In this research, we collected a new MRI dataset comprising three cases. Furthermore, we introduced a novel deep feature engineering model. Within this model, we utilized three renowned pretrained convolutional neural networks (CNNs): DenseNet201, ResNet50, and ShuffleNet. Through these pretrained CNNs, deep features were generated using the transfer learning approach. For each pretrained network, two feature vectors were generated from an MRI. Three feature selectors were employed during the feature selection phase, amplifying the number of features from 6 to 18 (calculated as 6 × 3). The k-nearest neighbors (kNN) classifier was utilized in the classification phase to determine classification results. During the information phase, the iterative majority voting (IMV) algorithm was applied to secure voted results, and our model selected the output with the highest classification accuracy. In this manner, we have introduced a self-organized deep feature engineering model. Results: We have applied the presented model to the collected dataset. The proposed method yielded 99.80%, 99.60%, 100%, and 99.80% results for accuracy, recall, precision, and F1-score for the collected axial images dataset. The collected coronal image dataset yielded 99.45%, 99.20%, 99.70%, and 99.45% results for accuracy, recall, precision, and F1-score, respectively. As for contrast-enhanced images, accuracy of 95.62%, recall of 80.72%, precision of 94.24%, and an F1-score of 86.96% were attained. 
Conclusions: Based on the results, the proposed method for classifying AS disease has demonstrated successful outcomes using MRI. The model has been tested on three cases, and its consistently high classification performance across all cases underscores the model’s general robustness. Furthermore, the ability to diagnose AS disease using only axial images, without the need for contrast-enhanced MRI, represents a significant advancement in both healthcare and economic terms. Full article
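
The iterative majority voting step can be sketched in simplified form: fuse the predictions of the first k models for growing k, after which a selector keeps the fused vector with the best validation accuracy. The vote ordering and selection details below are assumptions for illustration, not the authors' exact algorithm:

```python
from collections import Counter

def majority_vote(votes):
    """Most frequent label among one sample's predictions."""
    return Counter(votes).most_common(1)[0][0]

def iterative_majority_voting(per_model_preds):
    """Fuse the predictions of the first k models for k = 3 .. n.
    A final selector would then keep the fused vector whose validation
    accuracy is highest, yielding a self-organized ensemble output."""
    n_models = len(per_model_preds)
    n_samples = len(per_model_preds[0])
    fused = []
    for k in range(3, n_models + 1):
        fused.append([majority_vote([per_model_preds[m][i] for m in range(k)])
                      for i in range(n_samples)])
    return fused

# Four illustrative model prediction vectors over two samples.
preds = [[1, 0], [1, 0], [0, 0], [1, 1]]
fused = iterative_majority_voting(preds)
```

Each entry of `fused` is one candidate ensemble output; picking among them by accuracy is what lets the pipeline organize itself without a hand-chosen ensemble size.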

Other


26 pages, 1777 KiB  
Systematic Review
Machine Learning Models in Sepsis Outcome Prediction for ICU Patients: Integrating Routine Laboratory Tests—A Systematic Review
by Florentina Mușat, Dan Nicolae Păduraru, Alexandra Bolocan, Cosmin Alexandru Palcău, Andreea-Maria Copăceanu, Daniel Ion, Viorel Jinga and Octavian Andronic
Biomedicines 2024, 12(12), 2892; https://doi.org/10.3390/biomedicines12122892 - 19 Dec 2024
Cited by 3 | Viewed by 1726
Abstract
Background. Sepsis presents significant diagnostic and prognostic challenges, and traditional scoring systems, such as SOFA and APACHE, show limitations in predictive accuracy. Machine learning (ML)-based predictive survival models can support risk assessment and treatment decision-making in the intensive care unit (ICU) by accounting for the numerous and complex factors that influence the outcome in the septic patient. Methods. A systematic literature review of studies published from 2014 to 2024 was conducted using the PubMed database. Eligible studies investigated the development of ML models incorporating commonly available laboratory and clinical data for predicting survival outcomes in adult ICU patients with sepsis. Study selection followed the PRISMA guidelines and relied on predefined inclusion criteria. All records were independently assessed by two reviewers, with conflicts resolved by a third senior reviewer. Data related to study design, methodology, results, and interpretation were extracted into a predefined grid. Results. Overall, 19 studies were identified, encompassing primarily logistic regression, random forests, and neural networks. The most frequently used datasets were US-based (MIMIC-III, MIMIC-IV, and eICU-CRD). The most common variables used in model development were age, albumin levels, lactate levels, and ventilator use. ML models demonstrated superior performance metrics compared to conventional methods and traditional scoring systems. The best-performing model was a gradient boosting decision tree, with an area under the curve of 0.992, an accuracy of 0.954, and a sensitivity of 0.917. However, several critical limitations should be carefully considered when interpreting the results, such as population selection bias (i.e., single-center studies), small sample sizes, limited external validation, and model interpretability. Conclusions. 
Through real-time integration of routine laboratory and clinical data, ML-based tools can assist clinical decision-making and enhance the consistency and quality of sepsis management across various healthcare contexts, including ICUs with limited resources. Full article
