Applications of Deep Learning in Bioinformatics and Image Processing

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 September 2025 | Viewed by 38233

Special Issue Editor


Dr. Muhammad Kabir
Guest Editor
Biomedical Center (BMC), Lund University, 22362 Lund, Sweden
Interests: bioinformatics; machine learning; deep learning; bioimages; medical images

Special Issue Information

Dear Colleagues,

This Special Issue highlights the importance of deep learning in the disciplines of bioinformatics and image processing. Deep learning models are increasingly used to analyze and interpret complex data across a variety of disciplines, changing how researchers approach data-driven work.

The articles in this Special Issue address a wide range of deep-learning-related topics, including the development of novel algorithms, applications of deep learning to genomics and drug discovery, and the classification, segmentation, and feature extraction of images. Several studies also examine the integration of deep learning with other technologies, such as cloud computing and blockchain.

Together, the articles in this Special Issue demonstrate the flexibility of deep learning and its capacity to solve a wide variety of problems.

Dr. Muhammad Kabir
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information is available on MDPI's Special Issue policies page.

Published Papers (14 papers)

Research

19 pages, 1433 KiB  
Article
Optimized Deep Learning for Mammography: Augmentation and Tailored Architectures
by Syed Ibrar Hussain and Elena Toscano
Information 2025, 16(5), 359; https://doi.org/10.3390/info16050359 - 29 Apr 2025
Abstract
This paper investigates the categorization of mammogram images into benign, malignant and normal categories, providing novel approaches based on Deep Convolutional Neural Networks for the early identification and classification of breast lesions. Multiple DCNN models were tested to assess how well deep learning performs on difficult, multi-class categorization problems. These models were trained on pre-processed datasets with optimized hyperparameters (e.g., batch size, learning rate, and dropout), which increased the precision of classification. Evaluation measures such as confusion matrices, accuracy, and loss demonstrated high classification efficiency with little overfitting, and the validation results aligned well with the training results. DenseNet-201 and MobileNet-V3 Large displayed strong generalization, while EfficientNetV2-B3 and NASNet Mobile struck the best balance of accuracy and efficiency, making them suitable for practical applications. Data augmentation also improved the handling of data imbalances, resulting in more accurate large-scale detection. Unlike prior approaches, the combination of architectures, pre-processing, and data augmentation improved the system's accuracy, indicating that these models are suitable for medical imaging tasks that require transfer learning. The results show precise and accurate classification even in the presence of class imbalance and poor dataset quality. In particular, we do not define a new framework for computer-aided diagnosis here; rather, we review a variety of promising solutions for future developments in this field. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
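
To make the training recipe above concrete, the sketch below shows a typical transfer-learning setup of the kind the paper describes: a frozen pre-trained DenseNet-201 backbone with augmentation, dropout, and a softmax head for the three mammogram classes. It is an illustration only; the image size, augmentation choices, dropout rate, and learning rate are assumptions, not the authors' exact configuration.

```python
# Hypothetical three-class mammogram classifier via transfer learning.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # benign, malignant, normal

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),   # assumed augmentation strengths
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for the initial training phase

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.densenet.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)  # dropout rate is an assumed hyperparameter
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```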

17 pages, 3802 KiB  
Article
Automated Fungal Identification with Deep Learning on Time-Lapse Images
by Marjan Mansourvar, Karol Rafal Charylo, Rasmus John Normand Frandsen, Steen Smidth Brewer and Jakob Blæsbjerg Hoof
Information 2025, 16(2), 109; https://doi.org/10.3390/info16020109 - 5 Feb 2025
Viewed by 1182
Abstract
The identification of species within filamentous fungi is crucial in fields such as agriculture, environmental monitoring, and medical mycology. Traditional identification methods based on morphology require little advanced equipment but depend heavily on manual observation and expertise. Moreover, this approach may struggle to differentiate between species within a genus because of their visual similarity, making the process time-consuming and subjective. In this study, we present an AI-based fungal species recognition model that applies deep learning techniques to time-lapse images. The training dataset, derived from fungal strains in the IBT Culture Collection, comprised 26,451 high-resolution images representing 110 species from 35 genera. The dataset was divided into training and validation subsets. We implemented three advanced deep learning architectures—ResNet50, DenseNet-121, and Vision Transformer (ViT)—to assess their effectiveness in accurately classifying fungal species. By using images from early growth stages (days 2–3.5) for training and testing and later stages (days 4–7) for validation, our approach shortens the fungal identification process by 2–3 days, significantly reducing the associated workload and costs. Among the models, the Vision Transformer achieved the highest accuracy, 92.6%, demonstrating the effectiveness of our method. This work contributes to the automation of fungal identification, providing a reliable and efficient solution for monitoring fungal growth and diversity over time, which would be useful for culture collections and other institutions that handle a large number of new isolates in their daily work. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

15 pages, 1641 KiB  
Article
Interactive Segmentation for Medical Images Using Spatial Modeling Mamba
by Yuxin Tang, Yu Li, Hua Zou and Xuedong Zhang
Information 2024, 15(10), 633; https://doi.org/10.3390/info15100633 - 14 Oct 2024
Viewed by 1812
Abstract
Interactive segmentation methods utilize user-provided positive and negative clicks to guide the model in accurately segmenting target objects. Compared to fully automatic medical image segmentation, these methods can achieve higher segmentation accuracy with limited image data, demonstrating significant potential in clinical applications. Typically, for each new click provided by the user, conventional interactive segmentation methods rerun the entire network by feeding the click back into the segmentation model, which greatly increases the user's interaction burden and runs counter to the goal of interactive segmentation. To address this issue, we propose an efficient segmentation network, ESM-Net, for interactive medical image segmentation. It obtains high-quality segmentation masks based on the user's initial clicks, reducing the complexity of subsequent refinement steps. Recent studies have demonstrated the strong performance of the Mamba model in various vision tasks; however, its application in interactive segmentation remains unexplored. In our study, we incorporate the Mamba module into our framework for the first time and enhance its spatial representation capabilities by developing a Spatial Augmented Convolution (SAC) module. These components are combined as the fundamental building blocks of our network. Furthermore, we designed a novel and efficient segmentation head to fuse multi-scale features extracted from the encoder, optimizing the generation of the predicted segmentation masks. Through comprehensive experiments, our method achieved state-of-the-art performance on three medical image datasets. Specifically, we achieved 1.43 NoC@90 on the Kvasir-SEG dataset, 1.57 NoC@90 on the CVC-ClinicDB polyp segmentation dataset, and 1.03 NoC@90 on the ADAM retinal disk segmentation dataset. The assessments on these three medical image datasets highlight the effectiveness of our approach in interactive medical image segmentation. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
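
The NoC@90 figures reported above count how many simulated user clicks a model needs before the predicted mask reaches 90% IoU with the ground truth. A minimal sketch of that evaluation loop is shown below; `model.predict` and the click-simulation oracle `simulate_click` are hypothetical stand-ins for whatever interface a given interactive-segmentation model exposes.

```python
# Sketch of the NoC@k evaluation protocol for interactive segmentation.
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def noc(model, image, gt_mask, threshold=0.90, max_clicks=20):
    """Add simulated clicks until IoU >= threshold; return the click count."""
    clicks = []
    for n in range(1, max_clicks + 1):
        # Oracle places the next click in the largest error region (hypothetical helper).
        clicks.append(simulate_click(gt_mask, clicks))
        pred = model.predict(image, clicks)
        if iou(pred, gt_mask) >= threshold:
            return n
    return max_clicks  # capped when the threshold is never reached
```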

18 pages, 3952 KiB  
Article
WGCAMNet: Wasserstein Generative Adversarial Network Augmented and Custom Attention Mechanism Based Deep Neural Network for Enhanced Brain Tumor Detection and Classification
by Fatema Binte Alam, Tahasin Ahmed Fahim, Md Asef, Md Azad Hossain and M. Ali Akber Dewan
Information 2024, 15(9), 560; https://doi.org/10.3390/info15090560 - 11 Sep 2024
Cited by 1 | Viewed by 1511
Abstract
Brain tumor detection and categorization of its subtypes are essential for early diagnosis and improving patient outcomes. This research presents a cutting-edge approach that employs advanced data augmentation and deep learning methodologies for brain tumor classification. For this work, a dataset of 6982 MRI images from the IEEE Data Port was considered, in which a total of 5712 images of four classes (1321 glioma, 1339 meningioma, 1595 no tumor, and 1457 pituitary) were used in the training set and a total of 1270 images of the same four classes were used in the testing set. A Wasserstein Generative Adversarial Network was implemented to generate synthetic images to address class imbalance, resulting in a balanced and consistent dataset. A comparison between various data augmentation methodologies demonstrated that Wasserstein Generative Adversarial Network augmentation clearly outperforms traditional augmentation (such as rotation, shift, and zoom) and no augmentation. Additionally, a Gaussian filter and normalization were applied during preprocessing to reduce noise; comparison against Median and Bilateral filters highlighted the Gaussian filter's superior accuracy and edge preservation. The classifier model combines parallel feature extraction from modified InceptionV3 and VGG19 backbones, followed by custom attention mechanisms that effectively capture the characteristics of each tumor type. The model was trained for 64 epochs using model checkpoints to save the best-performing model based on validation accuracy, with learning rate adjustments. The model achieved a 99.61% accuracy rate on the testing set, with precision, recall, AUC, and loss of 0.9960, 0.9960, 0.9999, and 0.0153, respectively. The proposed architecture's explainability has been enhanced by t-SNE plots, which show distinct tumor clusters, and Grad-CAM representations, which highlight crucial areas in MRI scans. This research showcases an explainable and robust approach for correctly classifying four brain tumor types, combining WGAN-augmented data with advanced deep learning models for feature extraction. The framework effectively manages class imbalance and integrates a custom attention mechanism, outperforming other models and thereby improving diagnostic accuracy and reliability in clinical settings. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
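
For readers unfamiliar with the augmentation strategy, the sketch below shows the training signal of a Wasserstein GAN critic in PyTorch, the kind of generator-critic setup used here to synthesize minority-class MRI slices. The gradient-penalty term (the WGAN-GP variant) and all hyperparameters are assumptions; the abstract states only that a WGAN balanced the four classes.

```python
# Sketch of a WGAN-GP critic objective for synthetic-image augmentation.
import torch

def critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein estimate: the critic should score real images high, fakes low.
    loss = critic(fake).mean() - critic(real).mean()
    # Gradient penalty on random interpolates keeps the critic ~1-Lipschitz.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return loss + gp_weight * penalty
```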

20 pages, 6685 KiB  
Article
Improving the Classification of Unexposed Potsherd Cavities by Means of Preprocessing
by Randy Cahya Wihandika, Yoonji Lee, Mahendra Data, Masayoshi Aritsugi, Hiroki Obata and Israel Mendonça
Information 2024, 15(5), 243; https://doi.org/10.3390/info15050243 - 23 Apr 2024
Viewed by 1536
Abstract
The preparation of raw images for subsequent analysis, known as image preprocessing, is a crucial step that can boost the performance of an image classification model. Although deep learning has succeeded in image classification without handcrafted features, certain studies underscore the continued significance of image preprocessing for enhanced performance during the training process. Nonetheless, this task is often demanding and requires high-quality images to effectively train a classification model. The quality of training images, among other factors, affects the classification model's performance, and insufficient image quality can lead to suboptimal results. Achieving high-quality training images, in turn, requires effective image preprocessing techniques. In this study, we perform exploratory experiments aimed at improving a classification model of unexposed potsherd cavity images via image preprocessing pipelines. These pipelines are evaluated on two distinct image sets: a laboratory-made, experimental image set that contains archaeological images with controlled lighting and background conditions, and a Jōmon–Yayoi image set that contains images of real-world pottery from the Jōmon period through the Yayoi period under varying conditions. The best accuracies obtained on the experimental images and the more challenging Jōmon–Yayoi images are 90.48% and 78.13%, respectively. The comprehensive analysis and experimentation conducted in this study demonstrate a noteworthy enhancement in performance metrics compared to the established baseline benchmark. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
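
As an illustration of what such a pipeline can look like, the OpenCV sketch below chains contrast enhancement, denoising, resizing, and normalization. The specific steps and parameters are assumptions for demonstration; they are not the authors' best-performing pipeline.

```python
# Hypothetical image preprocessing pipeline: contrast, denoise, resize, normalize.
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                          # local contrast enhancement
    img = cv2.GaussianBlur(img, (3, 3), 0)          # light denoising
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    return img.astype(np.float32) / 255.0           # scale to [0, 1]
```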

13 pages, 1409 KiB  
Article
Improving Blood Vessel Segmentation and Depth Estimation in Laser Speckle Images Using Deep Learning
by Eduardo Morales-Vargas, Hayde Peregrina-Barreto, Rita Q. Fuentes-Aguilar, Juan Pablo Padilla-Martinez, Wendy Argelia Garcia-Suastegui and Julio C. Ramirez-San-Juan
Information 2024, 15(4), 185; https://doi.org/10.3390/info15040185 - 29 Mar 2024
Cited by 2 | Viewed by 1891
Abstract
Microvasculature analysis is an important task in the medical field due to its various applications. It has been used for the diagnosis and treatment of diseases in fields such as ophthalmology, dermatology, and neurology by measuring relative blood flow or blood vessel morphological properties. However, light scattering at the periphery of blood vessels causes a decrease in contrast around the vessel borders and an increase in image noise, making the localization of blood vessels a challenging task. Therefore, this work proposes integrating known information from the experimental setup into a deep learning architecture with multiple inputs to improve the generalization of a computational model for the segmentation of blood vessels and depth estimation in a single inference step. The proposed R-UNET + ET + LA obtained an intersection over union of 0.944 ± 0.065 and 0.812 ± 0.080 in the classification task for the validation (in vitro) and test (in vivo) sets, respectively, and a root mean squared error of 0.0085 ± 0.0275 μm in the depth estimation. This approach improves the generalization of current solutions by pre-training with in vitro data and adding information from the experimental setup. Additionally, the method can infer the depth of a blood vessel pixel by pixel, instead of by regions as the current state of the art does. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
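
The core idea, fusing known experimental-setup values with image features so one network performs segmentation and per-pixel depth estimation, can be sketched as a two-input, two-output Keras model, as below. This illustrates the multi-input pattern only, not the authors' R-UNET + ET + LA architecture; the input names and layer sizes are assumptions.

```python
# Sketch of a multi-input network: speckle image + setup scalars -> mask + depth.
import tensorflow as tf
from tensorflow.keras import layers

img_in = layers.Input((256, 256, 1), name="speckle_image")
setup_in = layers.Input((2,), name="setup_params")  # hypothetical scalars, e.g. exposure time

x = layers.Conv2D(32, 3, padding="same", activation="relu")(img_in)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

# Broadcast the setup scalars to a feature map and fuse with the image features.
s = layers.Dense(32, activation="relu")(setup_in)
s = layers.Reshape((1, 1, 32))(s)
s = layers.UpSampling2D(size=(256, 256))(s)
x = layers.Concatenate()([x, s])

mask = layers.Conv2D(1, 1, activation="sigmoid", name="vessel_mask")(x)  # segmentation
depth = layers.Conv2D(1, 1, activation="linear", name="depth_map")(x)    # per-pixel depth
model = tf.keras.Model([img_in, setup_in], [mask, depth])
model.compile(optimizer="adam",
              loss={"vessel_mask": "binary_crossentropy", "depth_map": "mse"})
```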

14 pages, 4060 KiB  
Article
E-MuLA: An Ensemble Multi-Localized Attention Feature Extraction Network for Viral Protein Subcellular Localization
by Grace-Mercure Bakanina Kissanga, Hasan Zulfiqar, Shenghan Gao, Sophyani Banaamwini Yussif, Biffon Manyura Momanyi, Lin Ning, Hao Lin and Cheng-Bing Huang
Information 2024, 15(3), 163; https://doi.org/10.3390/info15030163 - 13 Mar 2024
Cited by 2 | Viewed by 2160
Abstract
Accurate prediction of subcellular localization of viral proteins is crucial for understanding their functions and developing effective antiviral drugs. However, this task poses a significant challenge, especially when relying on expensive and time-consuming classical biological experiments. In this study, we introduced a computational model called E-MuLA, based on a deep learning network that combines multiple local attention modules to enhance feature extraction from protein sequences. The superior performance of the E-MuLA has been demonstrated through extensive comparisons with LSTM, CNN, AdaBoost, decision trees, KNN, and other state-of-the-art methods. It is noteworthy that the E-MuLA achieved an accuracy of 94.87%, specificity of 98.81%, and sensitivity of 84.18%, indicating that E-MuLA has the potential to become an effective tool for predicting virus subcellular localization. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

12 pages, 37559 KiB  
Article
Improving Breast Tumor Multi-Classification from High-Resolution Histological Images with the Integration of Feature Space Data Augmentation
by Nadia Brancati and Maria Frucci
Information 2024, 15(2), 98; https://doi.org/10.3390/info15020098 - 8 Feb 2024
Cited by 1 | Viewed by 2000
Abstract
To support pathologists in breast tumor diagnosis, deep learning plays a crucial role in the development of histological whole slide image (WSI) classification methods. However, automatic classification is challenging due to the high-resolution data and the scarcity of representative training data. To tackle these limitations, we propose a deep learning-based breast tumor gigapixel histological image multi-classifier integrated with a high-resolution data augmentation model to process the entire slide by exploring its local and global information and generating its different synthetic versions. The key idea is to perform the classification and augmentation in feature latent space, reducing the computational cost while preserving the class label of the input. We adopt a deep learning-based multi-classification method and evaluate the contribution given by a conditional generative adversarial network-based data augmentation model on the classifier’s performance for three tumor classes in the BRIGHT Challenge dataset. The proposed method has allowed us to achieve an average F1 equal to 69.5, considering only the WSI dataset of the Challenge. The results are comparable to those obtained by the Challenge winning method (71.6), also trained on the annotated tumor region dataset of the Challenge. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
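
The key idea, augmenting in the feature latent space so that synthetic variants of a gigapixel slide are cheap to generate and keep their class label, can be sketched as below. The generator interface is a hypothetical stand-in for the paper's conditional GAN over WSI embeddings, not the authors' implementation.

```python
# Sketch of label-preserving augmentation in feature space (not pixel space).
import torch

def latent_augment(features, labels, generator, n_synth=2):
    """features: (N, D) slide embeddings; generator: label-conditioned GAN (hypothetical)."""
    synth_f, synth_y = [], []
    for _ in range(n_synth):
        z = torch.randn(features.size(0), generator.noise_dim)  # assumed attribute
        synth_f.append(generator(z, labels))  # synthesize embeddings for the same labels
        synth_y.append(labels)
    return torch.cat([features, *synth_f]), torch.cat([labels, *synth_y])
```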

22 pages, 7459 KiB  
Article
Metaverse Applications in Bioinformatics: A Machine Learning Framework for the Discrimination of Anti-Cancer Peptides
by Sufyan Danish, Asfandyar Khan, L. Minh Dang, Mohammed Alonazi, Sultan Alanazi, Hyoung-Kyu Song and Hyeonjoon Moon
Information 2024, 15(1), 48; https://doi.org/10.3390/info15010048 - 15 Jan 2024
Cited by 7 | Viewed by 3513
Abstract
Bioinformatics and genomics are driving a healthcare revolution, particularly in the domain of drug discovery for anticancer peptides (ACPs). The integration of artificial intelligence (AI) has transformed healthcare, enabling personalized and immersive patient care experiences. These advanced technologies, coupled with the power of bioinformatics and genomic data, facilitate groundbreaking developments. The precise prediction of ACPs from complex biological sequences remains an ongoing challenge in genomics. Currently, conventional approaches such as chemotherapy, targeted therapy, radiotherapy, and surgery are widely used for cancer treatment. However, these methods fail to completely eradicate neoplastic cells or cancer stem cells and damage healthy tissues, resulting in morbidity and even mortality. To control such diseases, oncologists and drug designers seek to develop new preventive techniques with greater efficiency and fewer side effects. Therefore, this research provides an optimized computational framework for discriminating ACPs. The proposed approach integrates four peptide encoding methods, namely amino acid occurrence analysis (AAOA), dipeptide occurrence analysis (DOA), tripeptide occurrence analysis (TOA), and enhanced pseudo amino acid composition (EPseAAC). To overcome the issue of bias and reduce true error, the synthetic minority oversampling technique (SMOTE) is applied to balance the samples across classes. The empirical results over two datasets, with an accuracy of 97.56% on the benchmark dataset and 95.00% on the independent dataset, verify the effectiveness of our ensemble learning mechanism and show remarkable performance compared with state-of-the-art (SOTA) methods. In addition, the application of metaverse technology in healthcare holds promise for transformative innovations, potentially enhancing patient experiences and providing novel solutions for preventive techniques and patient care. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
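
Two of the four encodings named above, amino-acid and dipeptide occurrence analysis, reduce a peptide to composition frequencies; together with SMOTE balancing they can be sketched as follows. The exact normalization used by the authors is an assumption here.

```python
# Sketch of AAOA/DOA-style composition encodings plus SMOTE class balancing.
from itertools import product
from imblearn.over_sampling import SMOTE

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]

def aaoa(seq):
    """20-dim amino-acid occurrence frequencies."""
    return [seq.count(a) / len(seq) for a in AA]

def doa(seq):
    """400-dim dipeptide occurrence frequencies."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    return [pairs.count(d) / max(len(pairs), 1) for d in DIPEPTIDES]

# X = [aaoa(s) + doa(s) for s in peptides]; y = class labels
# X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
```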

19 pages, 6379 KiB  
Article
Chem2Side: A Deep Learning Model with Ensemble Augmentation (Conventional + Pix2Pix) for COVID-19 Drug Side-Effects Prediction from Chemical Images
by Muhammad Asad Arshed, Muhammad Ibrahim, Shahzad Mumtaz, Muhammad Tanveer and Saeed Ahmed
Information 2023, 14(12), 663; https://doi.org/10.3390/info14120663 - 16 Dec 2023
Cited by 1 | Viewed by 2272
Abstract
Drug side effects (DSEs) or adverse drug reactions (ADRs) are a major concern in the healthcare industry, accounting for a significant number of annual deaths in Europe alone. Identifying and predicting DSEs early in the drug development process is crucial to mitigate their impact on public health and reduce the time and costs associated with drug development. Objective: In this study, our primary objective is to predict multiple drug side effects using 2D chemical structures, especially for COVID-19, departing from the conventional approach of relying on 1D chemical structures. We aim to develop a novel model for DSE prediction that leverages the CNN-based transfer learning architecture of ResNet152V2. Motivation: The motivation behind this research stems from the need to enhance the efficiency and accuracy of DSE prediction, enabling the pharmaceutical industry to identify potential drug candidates with fewer adverse effects. By utilizing 2D chemical structures and employing data augmentation techniques, we seek to advance the field of drug side-effect prediction. Novelty: This study introduces several novel aspects. It is the first of its kind to use 2D chemical structures for predicting drug side effects, departing from conventional 1D approaches. Secondly, we employ data augmentation with both conventional transforms and a generative image-to-image model (Pix2Pix), a unique strategy in the field. These innovations set the stage for a more advanced and accurate approach to DSE prediction. Results: Our proposed model, named CHEM2SIDE, achieved an average training accuracy of 0.78. Moreover, the average validation and test accuracy, precision, and recall were all 0.73. When evaluated on COVID-19 drugs, our model exhibited an accuracy of 0.72, a precision of 0.79, a recall of 0.72, and an F1 score of 0.73. Comparative assessments against established transfer learning and machine learning models (VGG16, MobileNetV2, DenseNet121, and KNN) showcased the strong performance of CHEM2SIDE, marking a significant advancement in drug side-effect prediction. Conclusions: Our study introduces a new approach to predicting drug side effects by using 2D chemical structures and incorporating data augmentation. The CHEM2SIDE model demonstrates high accuracy and outperforms existing models, offering a promising solution to the challenges posed by DSEs in drug development. This research holds great potential for improving drug safety and reducing the associated time and costs. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
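
The input side of such a pipeline, rendering a 2D structure image that a CNN like ResNet152V2 can consume, can be sketched with RDKit as below. The paper does not say which drawing tool it used; RDKit and the 224x224 image size are assumptions.

```python
# Sketch: render a 2D chemical-structure image from a SMILES string.
from rdkit import Chem
from rdkit.Chem import Draw

mol = Chem.MolFromSmiles("CC(=O)OC1=CC=CC=C1C(=O)O")   # aspirin, as an example
Draw.MolToFile(mol, "structure.png", size=(224, 224))  # CNN-ready input image
```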

32 pages, 18478 KiB  
Article
Explainable Deep Learning Approach for Multi-Class Brain Magnetic Resonance Imaging Tumor Classification and Localization Using Gradient-Weighted Class Activation Mapping
by Tahir Hussain and Hayaru Shouno
Information 2023, 14(12), 642; https://doi.org/10.3390/info14120642 - 30 Nov 2023
Cited by 17 | Viewed by 4502
Abstract
Brain tumors (BT) present a considerable global health concern because of their high mortality rates across diverse age groups. A delay in diagnosing BT can lead to death; therefore, a timely and accurate diagnosis through magnetic resonance imaging (MRI) is crucial. A radiologist makes the final decision to identify the tumor through MRI. However, manual assessments are error-prone, time-consuming, and rely on experienced radiologists or neurologists to identify and diagnose a BT. Computer-aided classification models often lack the performance and explainability needed for clinical translation, particularly in neuroscience research, leading physicians to perceive the model results as inadequate because of the black-box nature of the models. Explainable deep learning (XDL) can advance neuroscientific research and healthcare tasks. To enhance the explainability of deep learning (DL) and provide diagnostic support, we propose a new classification and localization model that combines existing methods. We adopt a pre-trained visual geometry group model (pre-trained VGG-19), a scratch-VGG-19, and an EfficientNet model, each run with modified forms of the class activation mapping (CAM), gradient-weighted class activation mapping (Grad-CAM), and Grad-CAM++ algorithms. These algorithms, introduced into a convolutional neural network (CNN), uncover the parts of the input crucial to the classification and can provide an explanatory interface for diagnosing BT. The experimental results demonstrate that the pre-trained VGG-19 with Grad-CAM provides better classification and visualization results than the scratch-VGG-19, EfficientNet, and cutting-edge DL techniques in both visual and quantitative evaluations, with increased accuracy. The proposed approach may contribute to reducing diagnostic uncertainty and validating BT classification. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
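
Grad-CAM, the core visualization used above, weights the last convolutional feature maps by the gradients of the class score to produce a coarse localization heatmap. A minimal PyTorch sketch follows; the choice of target layer and the single-image batch are assumptions.

```python
# Sketch of Grad-CAM: gradient-weighted pooling of conv feature maps.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]   # x: (1, C, H, W) input batch
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # GAP over the gradients
    cam = F.relu((weights * feats[0]).sum(dim=1))      # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                    # normalized heatmap, (1, H, W)
```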

16 pages, 3963 KiB  
Article
A Novel Predictor for the Analysis and Prediction of Enhancers and Their Strength via Multi-View Features and Deep Forest
by Mehwish Gill, Saeed Ahmed, Muhammad Kabir and Maqsood Hayat
Information 2023, 14(12), 636; https://doi.org/10.3390/info14120636 - 28 Nov 2023
Cited by 5 | Viewed by 1926
Abstract
Enhancers are short DNA segments (50–1500 bp) that effectively activate gene transcription when transcription factors (TFs) are present. Genetic variation in enhancers is correlated with numerous human disorders, including cancer and inflammatory bowel disease. In computational biology, the accurate categorization of enhancers can yield important information for drug discovery and development. High-throughput experimental approaches are vital tools for studying the key characteristics of enhancers; however, because these techniques require considerable labor and time, predicting enhancers and their strength experimentally is difficult. Therefore, computational techniques are considered an alternative strategy for handling this issue. Based on the types of algorithms used to construct predictors, current methodologies can be divided into three primary categories: ensemble-based methods, deep learning-based approaches, and traditional ML-based techniques. In this study, we developed a novel two-layer deep forest-based predictor for accurate enhancer and strength prediction, named NEPERS. At the first level, NEPERS separates enhancers from non-enhancers; at the second level, it separates strong from weak enhancers. To evaluate the effectiveness of feature fusion, a block-wise deep forest and other algorithms were combined with multi-view features such as PSTNPss, PSTNPdss, CKSNAP, and NCP via 10-fold cross-validation and independent testing. Our proposed technique performs better than competing models across all metrics, with an ACC of 0.876, Sen of 0.864, Spe of 0.888, MCC of 0.753, and AUC of 0.940 for layer 1 and an ACC of 0.959, Sen of 0.960, Spe of 0.958, MCC of 0.918, and AUC of 0.990 for layer 2 on the benchmark dataset. Similarly, for the independent test, the ACC, Sen, Spe, MCC, and AUC were 0.863, 0.865, 0.860, 0.725, and 0.948 for layer 1 and 0.890, 0.940, 0.840, 0.784, and 0.951 for layer 2, respectively. This study provides conclusive insights for the accurate and effective detection and characterization of enhancers and their strength. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
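
Of the multi-view features listed above, NCP (nucleotide chemical properties) is the simplest to illustrate: each base maps to three binary chemical properties, commonly ring structure, hydrogen-bond strength, and chemical functionality. The sketch below uses the property triplets common in the enhancer-prediction literature; treat the exact mapping as an assumption.

```python
# Sketch of NCP encoding: each nucleotide -> 3 binary chemical properties.
NCP = {"A": (1, 1, 1), "C": (0, 1, 0), "G": (1, 0, 0), "T": (0, 0, 1)}

def ncp_encode(seq):
    """Flatten a DNA sequence of length L into a 3L-dim property vector."""
    return [bit for base in seq.upper() for bit in NCP.get(base, (0, 0, 0))]

print(ncp_encode("ACGT"))  # [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1]
```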

22 pages, 17315 KiB  
Article
Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting
by Muzi Cui, Hao Jiang and Chaozhuo Li
Information 2023, 14(9), 512; https://doi.org/10.3390/info14090512 - 18 Sep 2023
Cited by 2 | Viewed by 2581
Abstract
Image inpainting aims to synthesize missing regions in images that are coherent with the existing visual content. Generative adversarial networks have made significant strides in the development of image inpainting. However, existing approaches rely heavily on the surrounding pixels while ignoring that the boundaries might be uninformative or noisy, leading to blurred images. As a complement, global visual features from remote image contexts depict the overall structure and texture of the original images, contributing to generating pixels that blend seamlessly with the existing visual elements. In this paper, we propose a novel model, PA-DeepFill, to repair high-resolution images. The generator network follows a novel progressive learning paradigm, starting with low-resolution images and gradually increasing the resolution by stacking more layers. A novel attention-based module, the gathered attention block, is further integrated into the generator to adaptively learn the importance of different distant visual components. In addition, we have designed a local discriminator better suited to image inpainting tasks: a multi-task-guided, mask-level local discriminator based on PatchGAN, which guides the model to distinguish between regions from the original image and regions completed by the model at a finer granularity. This local discriminator can capture more detailed local information, thereby enhancing the model's discriminative ability and yielding more realistic and natural inpainted images. Our proposal is extensively evaluated on popular datasets, and the experimental results demonstrate its superiority. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
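
For context, a plain PatchGAN discriminator, the building block the paper extends with multi-task, mask-level guidance, scores a grid of local patches instead of emitting one real/fake score per image. A minimal PyTorch sketch, with assumed channel widths, is below.

```python
# Sketch of a PatchGAN-style local discriminator (one score per local patch).
import torch.nn as nn

def patchgan(in_ch=3, base=64):
    def block(cin, cout, norm=True):
        layers = [nn.Conv2d(cin, cout, kernel_size=4, stride=2, padding=1)]
        if norm:
            layers.append(nn.InstanceNorm2d(cout))
        layers.append(nn.LeakyReLU(0.2, inplace=True))
        return layers
    return nn.Sequential(
        *block(in_ch, base, norm=False),
        *block(base, base * 2),
        *block(base * 2, base * 4),
        nn.Conv2d(base * 4, 1, kernel_size=4, padding=1),  # grid of patch scores
    )
```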

14 pages, 2658 KiB  
Article
Multi-Class Skin Cancer Classification Using Vision Transformer Networks and Convolutional Neural Network-Based Pre-Trained Models
by Muhammad Asad Arshed, Shahzad Mumtaz, Muhammad Ibrahim, Saeed Ahmed, Muhammad Tahir and Muhammad Shafi
Information 2023, 14(7), 415; https://doi.org/10.3390/info14070415 - 18 Jul 2023
Cited by 44 | Viewed by 7389
Abstract
Skin cancer, particularly melanoma, has been recognized as one of the most lethal forms of cancer. Detecting and diagnosing skin lesions accurately can be challenging due to the striking similarities between the various types of skin lesions, such as melanoma and nevi, especially when examining color images of the skin. However, early diagnosis plays a crucial role in saving lives and reducing the burden on medical resources. Consequently, the development of a robust autonomous system for skin cancer classification becomes imperative. Convolutional neural networks (CNNs) have been widely employed over the past decade to automate cancer diagnosis. Nonetheless, the Vision Transformer (ViT) has recently gained considerable popularity in the field and emerged as a competitive alternative to CNNs. In light of this, the present study proposes an alternative method based on an off-the-shelf ViT for identifying various skin cancer diseases. To evaluate its performance, the proposed method was compared with 11 CNN-based transfer learning methods known to outperform other deep learning techniques currently in use. Furthermore, this study addresses the issue of class imbalance within the dataset, a common challenge in skin cancer classification. In addressing this concern, the proposed study leverages the Vision Transformer and the CNN-based transfer learning models to classify seven distinct types of skin cancer. Through our investigation, we found that the pre-trained Vision Transformer achieved an accuracy of 92.14%, surpassing CNN-based transfer learning models across several evaluation metrics for skin cancer diagnosis. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
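
Fine-tuning an off-the-shelf pre-trained ViT for the seven lesion classes can be sketched in a few lines with the timm library, as below; the backbone name, optimizer, and learning rate are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: fine-tune a pre-trained Vision Transformer for 7 skin-lesion classes.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=7)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) normalized tensor; labels: (B,) class indices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```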