  • Tracked for Impact Factor
  • CiteScore: 3.4
  • Time to First Decision: 20 days

BioMedInformatics

BioMedInformatics is an international, peer-reviewed, open access journal on all areas of biomedical informatics, as well as computational biology and medicine, published bimonthly online by MDPI.

All Articles (343)

Background: Depression is a common mental disorder, and its early, objective diagnosis remains challenging. Advances in deep learning show promise for processing audio and video content when screening for depression. Nevertheless, most current methods rely on raw video processing or multimodal pipelines, which are computationally costly, difficult to interpret, and raise privacy concerns, restricting their use in actual clinical settings. Methods: To overcome these constraints, we introduce a purely visual, lightweight deep learning framework based solely on spatiotemporal 3D facial landmarks extracted from the clinical interview videos of the DAIC-WOZ and Extended DAIC-WOZ (E-DAIC) datasets. Our method uses neither raw video nor any form of semi-automated multimodal fusion. Because raw video streams are computationally expensive and poorly suited to investigating specific variables, we instead take a temporal series of 3D landmarks, convert it to pseudo-images (224 × 224 × 3), and process these within a CNN-LSTM framework, which can jointly analyze the spatial configuration and temporal dynamics of facial behavior. Results: The experimental results indicate macro-average F1 scores of 0.74 on DAIC-WOZ and 0.762 on E-DAIC, with a variability of ±0.03 across folds, demonstrating robust performance under heavy class imbalance. Conclusion: These results indicate that landmark-based spatiotemporal modeling is a promising direction for lightweight, interpretable, and scalable automatic depression detection, and they suggest opportunities for embedding ADI systems within the framework of real-world MHA.
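The landmark-to-pseudo-image step this abstract describes can be sketched as below. The function name, the 68-landmark count, the min-max normalization, and the nearest-neighbour resizing are illustrative assumptions, not the paper's implementation; only the (224, 224, 3) target shape comes from the abstract.

```python
import numpy as np

def landmarks_to_pseudo_image(seq, size=224):
    """Map a (T, L, 3) facial-landmark sequence to a (size, size, 3) pseudo-image.

    Rows index time, columns index landmarks, and the three channels hold the
    x/y/z coordinates, min-max normalized per channel to [0, 255].
    """
    seq = np.asarray(seq, dtype=np.float32)              # (T, L, 3)
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - lo) / np.maximum(hi - lo, 1e-8)        # per-channel min-max
    img = (norm * 255.0).astype(np.uint8)                # (T, L, 3)
    # Nearest-neighbour resize to size x size (avoids external image libraries)
    rows = np.linspace(0, img.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size).round().astype(int)
    return img[rows][:, cols]                            # (size, size, 3)

# e.g. 150 interview frames of 68 3D facial landmarks
pseudo = landmarks_to_pseudo_image(np.random.rand(150, 68, 3))
print(pseudo.shape)  # (224, 224, 3)
```

Each resulting pseudo-image can then be fed to a standard CNN backbone, with an LSTM stacked over per-segment features to capture longer-range temporal structure.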

4 February 2026

CNN-LSTM architecture and class imbalance techniques.

Artificial intelligence (AI) has shown promising performance in brain tumor diagnosis and prognosis; however, most reported advances remain difficult to translate into clinical practice due to limited interpretability, inconsistent evaluation protocols, and weak generalization across datasets and institutions. In this work, we present a critical synthesis of recent brain tumor AI studies (2020–2025) guided by two novel conceptual tools: a unified diagnostic-prognostic framework and a triadic evaluation model emphasizing interpretability, computational efficiency, and generalizability as core dimensions of clinical readiness. Following PRISMA 2020 guidelines, we screened and analyzed over 100 peer-reviewed studies. A structured analysis of reported metrics reveals systematic trends and trade-offs—for instance, between model accuracy and inference latency—rather than providing a direct performance benchmark. This synthesis exposes critical gaps in current evaluation practices, particularly the under-reporting of interpretability validation, deployment-level efficiency, and external generalization. By integrating conceptual structuring with evidence-driven analysis, this work provides a framework for more clinically grounded development and evaluation of AI systems in neuro-oncology.

27 January 2026

Unified diagnostic–prognostic conceptual framework for brain tumor AI. Conceptual illustration of a clinically oriented AI workflow encompassing tumor classification, segmentation (ET, tumor core, whole tumor), and prognostic prediction (e.g., survival, MGMT status). A bounding-box localization links detection to downstream analysis, while a feedback mechanism indicates how diagnostic or prognostic outputs may guide adaptive focus in subsequent processing.

The lack of clinical data for chronic kidney disease (CKD) prediction frequently results in model overfitting and inadequate generalization to novel samples. This research mitigates this constraint by utilizing a Conditional Tabular Generative Adversarial Network (CTGAN) to augment a small CKD dataset sourced from the University of California, Irvine (UCI) Machine Learning Repository. The CTGAN model was trained to produce realistic synthetic samples that preserve the statistical and feature distributions of the original dataset. Multiple machine learning models, including AdaBoost, Random Forest, Gradient Boosting, and K-Nearest Neighbors (KNN), were assessed on both the original and augmented datasets with incrementally increasing degrees of synthetic data dilution. AdaBoost attained 100% accuracy on the original dataset, indicating considerable overfitting; with the CTGAN-augmented data, however, the model exhibited improved generalization and stability. The occurrence of 100% test accuracy in several models should not be interpreted as realistic clinical performance. Instead, it reflects the limited size, clean structure, and highly separable feature distributions of the UCI CKD dataset, and similar behavior has been reported in multiple previous studies using this dataset. Such perfect accuracy is a strong indication of overfitting and limited generalizability, rather than of feature or label leakage, and it directly motivates controlled data augmentation to introduce variability and improve model robustness. The most heavily diluted dataset, comprising 2000 synthetic cases, attained a test accuracy of 95.27% using a stochastic gradient boosting approach. Ensemble learning techniques, particularly gradient boosting and random forest, consistently outperformed conventional models such as KNN in predictive accuracy and resilience.
The results demonstrate that CTGAN-based data augmentation introduces critical variability, diminishes model bias, and serves as an effective regularization technique. This method provides a viable alternative for reducing overfitting and improving predictive modeling accuracy in data-deficient medical fields, such as chronic kidney disease diagnosis.
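The incremental "dilution" protocol described above can be scripted as a minimal sketch. In the study the synthetic rows come from a trained CTGAN; here, purely for illustration, they are resampled from a small hypothetical pool, and the feature names are invented rather than taken from the UCI schema.

```python
import random

def dilute(real_rows, synthetic_rows, n_synth, seed=0):
    """Return the real dataset augmented with n_synth synthetic rows.

    Hypothetical helper: the paper draws synthetic rows from a CTGAN;
    this sketch resamples a fixed pool so the growing dilution levels
    (e.g. 0, 500, 1000, 2000 rows) can be iterated reproducibly.
    """
    rng = random.Random(seed)
    extra = [synthetic_rows[rng.randrange(len(synthetic_rows))]
             for _ in range(n_synth)]
    return real_rows + extra

# Toy CKD-like records (feature names illustrative, not the UCI schema)
real = [{"gfr": 60, "ckd": 1}, {"gfr": 95, "ckd": 0}]
synth = [{"gfr": 58, "ckd": 1}, {"gfr": 92, "ckd": 0}]
for n in (0, 500, 1000, 2000):        # dilution levels, as in the abstract
    augmented = dilute(real, synth, n)
    print(n, len(augmented))
```

Each augmented set would then be used to retrain and re-evaluate the candidate models, tracking how accuracy and stability change as the synthetic share grows.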

22 January 2026

Flowchart showing the methods employed in the current investigation.

Evaluating the difficulty of endotracheal intubation during pre-anesthesia assessment has consistently posed a challenge for clinicians. Accurate prediction of intubation difficulty is crucial for subsequent treatment planning; however, existing diagnostic methods often suffer from low accuracy. To tackle this issue, this study presents an automated airway classification method utilizing Convolutional Neural Networks (CNNs). We propose Adaptive Attention DenseNet for Laryngeal Ultrasound Classification (AdaDenseNet-LUC), a network architecture that enhances classification performance by integrating an adaptive attention mechanism into DenseNet (Dense Convolutional Network), enabling the extraction of deep features that aid in difficult airway classification. The model associates laryngeal ultrasound images with actual intubation difficulty, providing healthcare professionals with scientific evidence to improve the accuracy of clinical decision-making. Experiments were performed on a dataset of 1391 ultrasound images, utilizing 5-fold cross-validation to assess the model’s performance. The experimental results show that the proposed method achieves a classification accuracy of 87.41%, sensitivity of 86.05%, specificity of 88.59%, F1 score of 0.8638, and AUC of 0.94. Grad-CAM visualizations indicate that the model’s attention is focused on the tracheal region. The results demonstrate that the proposed method outperforms current approaches, delivering objective and accurate airway classification outcomes that serve as a valuable reference for evaluating the difficulty of endotracheal intubation and for guiding clinicians.
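To make the reported metrics concrete, the sketch below derives accuracy, sensitivity, specificity, and F1 from confusion-matrix counts. The counts used here are hypothetical examples, not the study's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Derive the abstract's reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)              # recall on the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical fold counts: 37 of 43 difficult airways correctly flagged
acc, sens, spec, f1 = binary_metrics(tp=37, fp=5, tn=39, fn=6)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f} f1={f1:.3f}")
```

Averaging these per-fold values across the 5 cross-validation folds yields summary figures of the kind reported in the abstract.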

16 January 2026

Laryngeal ultrasound image of surgical patients.


BioMedInformatics - ISSN 2673-7426