Applications of Deep Learning in Bioinformatics and Image Processing

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 16928

Special Issue Editor


Dr. Muhammad Kabir
Guest Editor
Biomedical Center (BMC), Lund University, 22362 Lund, Sweden
Interests: bioinformatics; machine learning; deep learning; bioimages; medical images

Special Issue Information

Dear Colleagues,

This Special Issue emphasizes the importance of deep learning in the disciplines of bioinformatics and image processing. Deep learning models are increasingly used to analyze and interpret complex data across many disciplines, changing how researchers approach data-driven work.

The articles in this Special Issue address a wide range of deep-learning-related topics, including the development of novel algorithms; the application of deep learning to genomics and drug discovery; and the classification, segmentation, and feature extraction of images. Several studies also examine the integration of deep learning with other technologies, such as cloud computing and blockchain.

The articles in this Special Issue demonstrate the flexibility of deep learning and its capacity to solve a wide range of problems.

Dr. Muhammad Kabir
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research

20 pages, 6685 KiB  
Article
Improving the Classification of Unexposed Potsherd Cavities by Means of Preprocessing
by Randy Cahya Wihandika, Yoonji Lee, Mahendra Data, Masayoshi Aritsugi, Hiroki Obata and Israel Mendonça
Information 2024, 15(5), 243; https://doi.org/10.3390/info15050243 - 23 Apr 2024
Viewed by 373
Abstract
The preparation of raw images for subsequent analysis, known as image preprocessing, is a crucial step that can boost the performance of an image classification model. Although deep learning has succeeded in image classification without handcrafted features, certain studies underscore the continued significance of image preprocessing for enhanced performance during the training process. Nonetheless, this task is often demanding and requires high-quality images to effectively train a classification model. The quality of training images, along with other factors, impacts the classification model’s performance, and insufficient image quality can lead to suboptimal classification performance. On the other hand, achieving high-quality training images requires effective image preprocessing techniques. In this study, we perform exploratory experiments aimed at improving a classification model of unexposed potsherd cavity images via image preprocessing pipelines. These pipelines are evaluated on two distinct image sets: a laboratory-made, experimental image set that contains archaeological images with controlled lighting and background conditions, and a Jōmon–Yayoi image set that contains images of real-world pottery from the Jōmon period through the Yayoi period with varying conditions. The best accuracy performances obtained on the experimental images and the more challenging Jōmon–Yayoi images are 90.48% and 78.13%, respectively. The comprehensive analysis and experimentation conducted in this study demonstrate a noteworthy enhancement in performance metrics compared to the established baseline benchmark. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
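The abstract above does not reproduce the specific preprocessing pipelines, but one common step of the kind it describes is intensity normalization. The following is a minimal, hypothetical pure-Python sketch of min-max contrast stretching, not the authors' code:

```python
def contrast_stretch(pixels, new_min=0, new_max=255):
    """Linearly rescale pixel intensities so the darkest pixel maps to
    new_min and the brightest to new_max (min-max normalization)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return [new_min] * len(pixels)
    scale = (new_max - new_min) / (hi - lo)
    return [round(new_min + (p - lo) * scale) for p in pixels]

# A low-contrast strip [10, 20, 30] spreads to the full 0-255 range.
stretched = contrast_stretch([10, 20, 30])  # -> [0, 128, 255]
```

Real pipelines would apply such operations channel-wise on image arrays (e.g., via OpenCV or Pillow) rather than on flat pixel lists.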

13 pages, 1409 KiB  
Article
Improving Blood Vessel Segmentation and Depth Estimation in Laser Speckle Images Using Deep Learning
by Eduardo Morales-Vargas, Hayde Peregrina-Barreto, Rita Q. Fuentes-Aguilar, Juan Pablo Padilla-Martinez, Wendy Argelia Garcia-Suastegui and Julio C. Ramirez-San-Juan
Information 2024, 15(4), 185; https://doi.org/10.3390/info15040185 - 29 Mar 2024
Viewed by 738
Abstract
Microvasculature analysis is an important task in the medical field due to its various applications. It has been used for the diagnosis and treatment of diseases in fields such as ophthalmology, dermatology, and neurology by measuring relative blood flow or blood vessel morphological properties. However, light scattering at the periphery of the blood vessel causes a decrease in contrast around the vessel borders and an increase in the noise of the image, making the localization of blood vessels a challenging task. Therefore, this work proposes integrating known information from the experimental setup into a deep learning architecture with multiple inputs to improve the generalization of a computational model for the segmentation of blood vessels and depth estimation in a single inference step. The proposed R-UNET + ET + LA obtained an intersection over union of 0.944 ± 0.065 and 0.812 ± 0.080 in the classification task for validation (in vitro) and test sets (in vivo), respectively, and a root mean squared error of 0.0085 ± 0.0275 μm in the depth estimation. This approach improves the generalization of current solutions by pre-training with in vitro data and adding information from the experimental setup. Additionally, the method can infer the depth of a blood vessel pixel by pixel instead of in regions as the current state of the art does. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
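The headline segmentation metric above, intersection over union (IoU), has a standard definition that can be sketched in a few lines of pure Python for binary masks (an illustration of the metric, not the authors' implementation):

```python
def iou(pred, target):
    """Intersection over union of two binary masks given as flat
    lists of 0/1 values: |A ∩ B| / |A ∪ B|."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # both masks empty: perfect match

# Two masks agreeing on 2 of 3 foreground pixels give IoU = 2/3.
score = iou([1, 1, 0, 1], [1, 0, 0, 1])
```

In practice the same computation runs on framework tensors (e.g., thresholded network outputs) rather than Python lists.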

14 pages, 4060 KiB  
Article
E-MuLA: An Ensemble Multi-Localized Attention Feature Extraction Network for Viral Protein Subcellular Localization
by Grace-Mercure Bakanina Kissanga, Hasan Zulfiqar, Shenghan Gao, Sophyani Banaamwini Yussif, Biffon Manyura Momanyi, Lin Ning, Hao Lin and Cheng-Bing Huang
Information 2024, 15(3), 163; https://doi.org/10.3390/info15030163 - 13 Mar 2024
Cited by 1 | Viewed by 980
Abstract
Accurate prediction of subcellular localization of viral proteins is crucial for understanding their functions and developing effective antiviral drugs. However, this task poses a significant challenge, especially when relying on expensive and time-consuming classical biological experiments. In this study, we introduced a computational model called E-MuLA, based on a deep learning network that combines multiple local attention modules to enhance feature extraction from protein sequences. The superior performance of the E-MuLA has been demonstrated through extensive comparisons with LSTM, CNN, AdaBoost, decision trees, KNN, and other state-of-the-art methods. It is noteworthy that the E-MuLA achieved an accuracy of 94.87%, specificity of 98.81%, and sensitivity of 84.18%, indicating that E-MuLA has the potential to become an effective tool for predicting virus subcellular localization. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
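The accuracy, specificity, and sensitivity figures quoted for E-MuLA follow the usual confusion-matrix definitions, which can be sketched as follows (a generic illustration of the metrics, not code from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return acc, sens, spec

# 8 true positives, 9 true negatives, 1 false positive, 2 false negatives.
acc, sens, spec = classification_metrics(8, 9, 1, 2)
```

For the multi-class localization task, such metrics are typically computed per class (one-vs-rest) and then averaged.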

12 pages, 37559 KiB  
Article
Improving Breast Tumor Multi-Classification from High-Resolution Histological Images with the Integration of Feature Space Data Augmentation
by Nadia Brancati and Maria Frucci
Information 2024, 15(2), 98; https://doi.org/10.3390/info15020098 - 8 Feb 2024
Viewed by 1103
Abstract
To support pathologists in breast tumor diagnosis, deep learning plays a crucial role in the development of histological whole slide image (WSI) classification methods. However, automatic classification is challenging due to the high-resolution data and the scarcity of representative training data. To tackle these limitations, we propose a deep learning-based breast tumor gigapixel histological image multi-classifier integrated with a high-resolution data augmentation model to process the entire slide by exploring its local and global information and generating its different synthetic versions. The key idea is to perform the classification and augmentation in feature latent space, reducing the computational cost while preserving the class label of the input. We adopt a deep learning-based multi-classification method and evaluate the contribution given by a conditional generative adversarial network-based data augmentation model on the classifier’s performance for three tumor classes in the BRIGHT Challenge dataset. The proposed method has allowed us to achieve an average F1 equal to 69.5, considering only the WSI dataset of the Challenge. The results are comparable to those obtained by the Challenge winning method (71.6), also trained on the annotated tumor region dataset of the Challenge. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
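The paper performs augmentation in feature latent space with a conditional GAN; that model is not reproduced here. As a much simpler, hypothetical illustration of the underlying idea (synthesizing new feature vectors of a class from existing ones, so the label is preserved by construction), consider convex interpolation between same-class feature vectors:

```python
import random

def interpolate_features(feats, n_new, alpha_range=(0.2, 0.8)):
    """Generate n_new synthetic feature vectors, each a convex
    combination of two same-class vectors from feats. Because both
    parents share the class label, the synthetic vector keeps it."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(feats, 2)       # two distinct parents
        t = random.uniform(*alpha_range)     # mixing coefficient
        synthetic.append([t * x + (1 - t) * y for x, y in zip(a, b)])
    return synthetic
```

A learned generative model, as used in the paper, can produce far more varied samples than linear interpolation, but the label-preserving latent-space principle is the same.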

22 pages, 7459 KiB  
Article
Metaverse Applications in Bioinformatics: A Machine Learning Framework for the Discrimination of Anti-Cancer Peptides
by Sufyan Danish, Asfandyar Khan, L. Minh Dang, Mohammed Alonazi, Sultan Alanazi, Hyoung-Kyu Song and Hyeonjoon Moon
Information 2024, 15(1), 48; https://doi.org/10.3390/info15010048 - 15 Jan 2024
Cited by 1 | Viewed by 1766
Abstract
Bioinformatics and genomics are driving a healthcare revolution, particularly in the domain of drug discovery for anticancer peptides (ACPs). The integration of artificial intelligence (AI) has transformed healthcare, enabling personalized and immersive patient care experiences. These advanced technologies, coupled with the power of bioinformatics and genomic data, facilitate groundbreaking developments. The precise prediction of ACPs from complex biological sequences remains an ongoing challenge in genomics. Currently, conventional approaches such as chemotherapy, targeted therapy, radiotherapy, and surgery are widely used for cancer treatment. However, these methods fail to completely eradicate neoplastic cells or cancer stem cells and damage healthy tissues, resulting in morbidity and even mortality. To control such diseases, oncologists and drug designers seek to develop new preventive techniques with greater efficacy and fewer side effects. Therefore, this research provides an optimized computational framework for discriminating ACPs from non-ACPs. In addition, the proposed approach intelligently integrates four peptide encoding methods, namely amino acid occurrence analysis (AAOA), dipeptide occurrence analysis (DOA), tripeptide occurrence analysis (TOA), and enhanced pseudo amino acid composition (EPseAAC). To overcome the issue of bias and reduce true error, the synthetic minority oversampling technique (SMOTE) is applied to balance the samples across classes. The empirical results over two datasets, where the accuracy of the proposed model on the benchmark dataset is 97.56% and on the independent dataset is 95.00%, verify the effectiveness of our ensemble learning mechanism and show remarkable performance when compared with state-of-the-art (SOTA) methods. 
In addition, the application of metaverse technology in healthcare holds promise for transformative innovations, potentially enhancing patient experiences and providing novel solutions in the realm of preventive techniques and patient care. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
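Of the four peptide encodings named above, dipeptide occurrence analysis (DOA) is the most straightforward to sketch. The following pure-Python illustration counts the frequency of each of the 400 possible amino-acid pairs; the paper's exact normalization may differ, so treat this as a generic sketch rather than the authors' encoder:

```python
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def dipeptide_composition(seq):
    """400-dimensional feature vector: frequency of each possible
    dipeptide among the overlapping pairs of the peptide sequence."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    counts = Counter(pairs)
    total = len(pairs) or 1  # guard against length-1 sequences
    return {a + b: counts[a + b] / total
            for a, b in product(AMINO_ACIDS, repeat=2)}

# "ACACD" has overlapping pairs AC, CA, AC, CD, so AC has frequency 2/4.
vec = dipeptide_composition("ACACD")
```

AAOA and TOA follow the same pattern with window sizes 1 and 3 (20 and 8000 dimensions, respectively).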

19 pages, 6379 KiB  
Article
Chem2Side: A Deep Learning Model with Ensemble Augmentation (Conventional + Pix2Pix) for COVID-19 Drug Side-Effects Prediction from Chemical Images
by Muhammad Asad Arshed, Muhammad Ibrahim, Shahzad Mumtaz, Muhammad Tanveer and Saeed Ahmed
Information 2023, 14(12), 663; https://doi.org/10.3390/info14120663 - 16 Dec 2023
Viewed by 1397
Abstract
Drug side effects (DSEs) or adverse drug reactions (ADRs) are a major concern in the healthcare industry, accounting for a significant number of annual deaths in Europe alone. Identifying and predicting DSEs early in the drug development process is crucial to mitigate their impact on public health and reduce the time and costs associated with drug development. Objective: In this study, our primary objective is to predict multiple drug side effects using 2D chemical structures, especially for COVID-19, departing from the conventional approach of relying on 1D chemical structures. We aim to develop a novel model for DSE prediction that leverages the CNN-based transfer learning architecture of ResNet152V2. Motivation: The motivation behind this research stems from the need to enhance the efficiency and accuracy of DSE prediction, enabling the pharmaceutical industry to identify potential drug candidates with fewer adverse effects. By utilizing 2D chemical structures and employing data augmentation techniques, we seek to advance the field of drug side-effect prediction. Novelty: This study introduces several novel aspects. It is the first of its kind to use 2D chemical structures for predicting drug side effects, departing from conventional 1D approaches. Secondly, we employ data augmentation with both conventional techniques and a generative image-to-image translation model (Pix2Pix), a unique strategy in the field. These innovations set the stage for a more advanced and accurate approach to DSE prediction. Results: Our proposed model, named CHEM2SIDE, achieved an impressive average training accuracy of 0.78. Moreover, the average validation and test accuracy, precision, and recall were all at 0.73. When evaluated on COVID-19 drugs, our model exhibited an accuracy of 0.72, a precision of 0.79, a recall of 0.72, and an F1 score of 0.73. 
Comparative assessments against established transfer learning and machine learning models (VGG16, MobileNetV2, DenseNet121, and KNN) showcased the exceptional performance of CHEM2SIDE, marking a significant advancement in drug side-effect prediction. Conclusions: Our study introduces a groundbreaking approach to predicting drug side effects by using 2D chemical structures and incorporating data augmentation. The CHEM2SIDE model demonstrates remarkable accuracy and outperforms existing models, offering a promising solution to the challenges posed by DSEs in drug development. This research holds great potential for improving drug safety and reducing the associated time and costs. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

32 pages, 18478 KiB  
Article
Explainable Deep Learning Approach for Multi-Class Brain Magnetic Resonance Imaging Tumor Classification and Localization Using Gradient-Weighted Class Activation Mapping
by Tahir Hussain and Hayaru Shouno
Information 2023, 14(12), 642; https://doi.org/10.3390/info14120642 - 30 Nov 2023
Cited by 2 | Viewed by 2122
Abstract
Brain tumors (BT) present a considerable global health concern because of their high mortality rates across diverse age groups. A delay in diagnosing BT can lead to death. Therefore, a timely and accurate diagnosis through magnetic resonance imaging (MRI) is crucial. A radiologist makes the final decision to identify the tumor through MRI. However, manual assessments are flawed, time-consuming, and rely on experienced radiologists or neurologists to identify and diagnose a BT. Computer-aided classification models often lack performance and explainability for clinical translation, particularly in neuroscience research, resulting in physicians perceiving the model results as inadequate due to the black-box nature of the models. Explainable deep learning (XDL) can advance neuroscientific research and healthcare tasks. To enhance the explainability of deep learning (DL) and provide diagnostic support, we propose a new classification and localization model that combines existing methods. We adopt a pre-trained visual geometry group (pre-trained-VGG-19), scratch-VGG-19, and EfficientNet model that runs a modified form of the class activation mapping (CAM), gradient-weighted class activation mapping (Grad-CAM), and Grad-CAM++ algorithms. These algorithms, introduced into a convolutional neural network (CNN), uncover a crucial part of the classification and can provide an explanatory interface for diagnosing BT. The experimental results demonstrate that the pre-trained-VGG-19 with Grad-CAM provides better classification and visualization results than the scratch-VGG-19, EfficientNet, and cutting-edge DL techniques regarding visual and quantitative evaluations with increased accuracy. The proposed approach may contribute to reducing diagnostic uncertainty and validating BT classification. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
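At its core, Grad-CAM weights each convolutional activation map by the global average of its gradients, sums the weighted maps, and applies ReLU. The following is a minimal pure-Python sketch of that computation on nested lists; real implementations hook into framework tensors (e.g., PyTorch or TensorFlow) rather than working this way:

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap from K activation maps and their gradients,
    each given as an HxW nested list. Channel weight = mean gradient
    over the map; heatmap = ReLU of the weighted sum of maps."""
    h, w = len(activations[0]), len(activations[0][0])
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    return [[max(0.0, sum(wk * activations[k][i][j]
                          for k, wk in enumerate(weights)))
             for j in range(w)] for i in range(h)]
```

The resulting heatmap is then upsampled to the input resolution and overlaid on the MRI slice to show which regions drove the classification.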

16 pages, 3963 KiB  
Article
A Novel Predictor for the Analysis and Prediction of Enhancers and Their Strength via Multi-View Features and Deep Forest
by Mehwish Gill, Saeed Ahmed, Muhammad Kabir and Maqsood Hayat
Information 2023, 14(12), 636; https://doi.org/10.3390/info14120636 - 28 Nov 2023
Cited by 1 | Viewed by 1190
Abstract
Enhancers are short DNA segments (50–1500 bp) that effectively activate gene transcription when transcription factors (TFs) are present. There is a correlation between the genetic differences in enhancers and numerous human disorders, including cancer and inflammatory bowel disease. In computational biology, the accurate categorization of enhancers can yield important information for drug discovery and development. High-throughput experimental approaches are thought to be vital tools for researching enhancers’ key characteristics; however, because these techniques require a lot of labor and time, it can be difficult for researchers to predict enhancers and their strengths. Therefore, computational techniques are considered an alternative strategy for handling this issue. Based on the types of algorithms that have been used to construct predictors, the current methodologies can be divided into three primary categories: ensemble-based methods, deep learning-based approaches, and traditional ML-based techniques. In this study, we developed a novel two-layer deep forest-based predictor for accurate enhancer and strength prediction, namely, NEPERS. Enhancers and non-enhancers are divided at the first level by NEPERS, whereas strong and weak enhancers are divided at the second level. To evaluate the effectiveness of feature fusion, block-wise deep forest and other algorithms were combined with multi-view features such as PSTNPss, PSTNPdss, CKSNAP, and NCP via 10-fold cross-validation and independent testing. Our proposed technique performs better than competing models across all parameters, with an ACC of 0.876, Sen of 0.864, Spe of 0.888, MCC of 0.753, and AUC of 0.940 for layer 1 and an ACC of 0.959, Sen of 0.960, Spe of 0.958, MCC of 0.918, and AUC of 0.990 for layer 2 on the benchmark dataset. 
Similarly, for the independent test, the ACC, Sen, Spe, MCC, and AUC were 0.863, 0.865, 0.860, 0.725, and 0.948 for layer 1 and 0.890, 0.940, 0.840, 0.784, and 0.951 for layer 2, respectively. This study provides conclusive insights for the accurate and effective detection and characterization of enhancers and their strengths. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
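Among the metrics reported above, the Matthews correlation coefficient (MCC) is the least self-explanatory; it summarizes all four confusion-matrix counts in a single value in [-1, 1]. A minimal sketch of the standard formula (an illustration of the metric, not the paper's code):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention when a margin is zero

# A balanced classifier with 90% recall on each class scores MCC = 0.8.
score = mcc(90, 90, 10, 10)
```

Unlike accuracy, MCC stays informative on imbalanced datasets, which is why enhancer predictors routinely report it alongside ACC, Sen, and Spe.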

22 pages, 17315 KiB  
Article
Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting
by Muzi Cui, Hao Jiang and Chaozhuo Li
Information 2023, 14(9), 512; https://doi.org/10.3390/info14090512 - 18 Sep 2023
Viewed by 1266
Abstract
Image inpainting aims to synthesize missing regions in images that are coherent with the existing visual content. Generative adversarial networks have made significant strides in the development of image inpainting. However, existing approaches rely heavily on the surrounding pixels while ignoring that the boundaries might be uninformative or noisy, leading to blurred images. As a complement, global visual features from remote image contexts depict the overall structure and texture of the original images, contributing to generating pixels that blend seamlessly with the existing visual elements. In this paper, we propose a novel model, PA-DeepFill, to repair high-resolution images. The generator network follows a novel progressive learning paradigm, starting with low-resolution images and gradually improving the resolution by stacking more layers. A novel attention-based module, the gathered attention block, is further integrated into the generator to learn the importance of different distant visual components adaptively. In addition, we have designed a local discriminator that is more suitable for image inpainting tasks: a multi-task-guided, mask-level local discriminator based on PatchGAN, which can guide the model to distinguish between regions from the original image and regions completed by the model at a finer granularity. This local discriminator can capture more detailed local information, thereby enhancing the model’s discriminative ability and resulting in more realistic and natural inpainted images. Our proposal is extensively evaluated on popular datasets, and the experimental results demonstrate its superiority. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

14 pages, 2658 KiB  
Article
Multi-Class Skin Cancer Classification Using Vision Transformer Networks and Convolutional Neural Network-Based Pre-Trained Models
by Muhammad Asad Arshed, Shahzad Mumtaz, Muhammad Ibrahim, Saeed Ahmed, Muhammad Tahir and Muhammad Shafi
Information 2023, 14(7), 415; https://doi.org/10.3390/info14070415 - 18 Jul 2023
Cited by 8 | Viewed by 3544
Abstract
Skin cancer, particularly melanoma, has been recognized as one of the most lethal forms of cancer. Detecting and diagnosing skin lesions accurately can be challenging due to the striking similarities between the various types of skin lesions, such as melanoma and nevi, especially when examining color images of the skin. However, early diagnosis plays a crucial role in saving lives and reducing the burden on medical resources. Consequently, the development of a robust autonomous system for skin cancer classification becomes imperative. Convolutional neural networks (CNNs) have been widely employed over the past decade to automate cancer diagnosis. Nonetheless, the Vision Transformer (ViT) has recently gained considerable popularity in the field and has emerged as a competitive alternative to CNNs. In light of this, the present study proposes an alternative method based on the off-the-shelf ViT for identifying various skin cancer diseases. To evaluate its performance, the proposed method was compared with 11 CNN-based transfer learning methods that have been known to outperform other deep learning techniques currently in use. Furthermore, this study addresses the issue of class imbalance within the dataset, a common challenge in skin cancer classification. In addressing this concern, the proposed study leverages the Vision Transformer and the CNN-based transfer learning models to classify seven distinct types of skin cancer. Through our investigation, we found that pre-trained vision transformers achieved an impressive accuracy of 92.14%, surpassing CNN-based transfer learning models across several evaluation metrics for skin cancer diagnosis. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
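A defining step of the ViT architecture mentioned above is splitting the input image into fixed-size, non-overlapping patches that are flattened and fed to the transformer as tokens. A minimal pure-Python sketch of that patchification (a generic illustration, not the study's code, which would operate on tensors):

```python
def image_to_patches(img, patch):
    """Split an HxW image (nested lists) into non-overlapping
    patch x patch blocks, each flattened row-major, ViT-style.
    Assumes H and W are multiples of the patch size."""
    h, w = len(img), len(img[0])
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            patches.append([img[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return patches
```

In a full ViT, each flattened patch is then linearly projected to an embedding and augmented with a positional encoding before entering the transformer encoder.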

Planned Papers

The list below contains only planned manuscripts; some have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: An Enhanced Multi-Localized Attention Feature Extraction Network for Viral Protein Subcellular Localization
Authors: Bakanina Kissanga Grace-Mercure; Yussif Banaamwini Sophyani; Biffon Manyura Momanyi; Lin Ning; Hasan Zulfiqar; Hao Lin
Affiliation: --
Abstract: Accurate prediction of the subcellular localization of viral proteins is crucial for understanding their functions and developing effective antiviral drugs. However, this task presents a significant challenge, especially when using expensive and time-consuming classical biological experiments. In this study, we introduced a computational framework, called MuLA, based on a deep learning network that combines multiple local attention modules to enhance feature extraction from protein sequences. The superior performance of the MuLA model has been demonstrated through extensive comparisons with LSTM, CNN, AdaBoost, decision trees, and KNN. It is worth noting that MuLA achieved an accuracy of 93.66%, specificity of 99.21%, and sensitivity of 89.81%, indicating that MuLA can become an effective tool for predicting viral protein subcellular localization.

Title: Advancing Early Diagnosis of Cardiac Anomalies through Enhanced ECG Classification: An Experimental Evaluation
Authors: Naba Rahim; Uzair Iqbal; Umar Aftab; Qamar Zaman; Hafiz Tayyab Rauf; Mohamed Sharaf
Affiliation: Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
Abstract: The electrocardiogram (ECG) is the primary diagnostic tool for identifying the numerous cardiac problems that can affect a person, and early diagnosis of cardiovascular issues is highly beneficial. Atrial fibrillation (AFIB) is a heart rhythm problem that can cause blood clots in the heart. Myocardial infarction (MI), another name for a heart attack, is a severe medical emergency in which the blood supply to the heart is abruptly interrupted, usually by a blood clot. Early identification of the correlation between cardiac anomalies helps reduce life-threatening hazards, especially stroke. The primary motivation for conducting this study is the standardization of retrospective critical analysis, which is crucial for the robust and accurate early diagnosis of several potentially fatal cardiac conditions and for safeguarding human life. In this study, ECG classification is performed using machine learning methods (support vector machines, naïve Bayes, and decision trees) and a deep learning method (convolutional neural networks, CNNs). The study also presents a specific, fine-tuned CNN design for verifiable and precise correlations; the MIT-BIH, PTBDB, and SVDB datasets were used for the experiments. The results show that the machine learning models did not perform well on arrhythmia classification, reaching an accuracy of 71% on the MIT-BIH dataset, whereas the deep learning models performed exceptionally well on both the MIT-BIH and PTBDB datasets, with an average accuracy and F1-score of 99%. For the AFIB and MI correlation, the PTBDB, MIT-BIH, and customized datasets were used with the CNN, yielding exceptional F1-scores of 99% and 97%. This study did not consider segmentation and peak detection, which might have benefited feature extraction and ECG visualization; future work should therefore focus on peak detection techniques that enhance ECG interpretation. There is also potential to extend the proposed approach, for example to detecting brain diseases such as stroke or cognitive impairment.

Title: Exploring Human Dietary Sensitivities via BERTopic: A Bioinformatics Approach to Understand Genetic Polymorphisms
Authors: Giovanni Maria De Filippis; Antonio Maria Rinaldi; Cristiano Russo; Cristian Tommasino
Affiliation: Department of Electrical Engineering and Information Technology, University of Naples Federico II, via Claudio 21, 80125, Italy
Abstract: Navigating the vast genomic data on human genetic polymorphisms poses complex analytical challenges in the bioinformatics field. To interpret the health implications of the interactions of polymorphisms with environmental factors, we need refined, data-driven approaches. Our study applies a topic modeling technique to a comprehensive dataset of abstracts sourced from genomic literature, aiming to comprehend the functional implications of these polymorphisms. The focal areas of our investigation were food tolerances, allergies, diet-induced oxidative stress, and xenobiotic metabolism. By employing BERTopic, a state-of-the-art machine learning model, on PubMed abstracts, we successfully identify the key nutrition-related topics where genetic variation could impact adverse food responses and sensitivities to diet-related oxidative stress. Our research underscores the vital role that machine learning applications, especially deep learning, play in handling multilayered genomic data, thereby revealing critical associations for personalized dietary therapy and preventive healthcare. The proposed methodology can be replicated across diverse disciplines to effectively decode data complexity.

Title: Improving breast tumor multi-classification from high-resolution histological images with the integration of feature space data augmentation
Authors: Nadia Brancati; Maria Frucci
Affiliation: Institute for High Performance Computing and Networking of the National Research Council of Italy (ICAR-CNR), Naples, Italy
Abstract: To support pathologists in breast tumor diagnosis, deep learning plays a key role in developing histological whole slide image classification methods. However, automatic classification is challenging due to the enormous size of the images and the scarcity of representative training data. To tackle these limitations, we propose integrating a deep learning-based breast tumor gigapixel histological image multi-classifier with a high-resolution data augmentation model to process the entire slide by exploring its local and global information and generating its different synthetic versions. The key idea is to perform classification and augmentation in feature latent space, reducing computational cost while preserving the class label of the input. Precisely, we adopt the deep learning-based multi-classification baseline of the BRIGHT Challenge and evaluate the contribution of a conditional generative adversarial network-based data augmentation model to the performance of breast tumor multi-classification for three tumor classes. The proposed method achieves an average F1 equal to 69.5, considering only the WSI dataset of the Challenge. The results are comparable to those of the Challenge winning method (71.6), which was also trained on the annotated tumor region dataset of the Challenge.
