
Next-Generation Imbalanced Learning: Trustworthiness, Scalability, and Representation: A Special Issue Dedicated to the Memory of Prof. Barbara Pes

This special issue belongs to the section “Artificial Intelligence”.

Special Issue Information

Dear Colleagues,

In the era of Big Data, class imbalance has shifted from a niche edge case to a pervasive characteristic of real-world environments. Although the minority class often carries the greatest analytical value, representing critical events such as system failures, rare diseases, or security breaches, standard machine learning algorithms remain inherently biased toward the majority class.

However, the challenge is no longer just about skewness. Modern imbalanced learning must operate at the intersection of massive dimensionality, distributed data silos, and the urgent need for transparency. We are moving beyond traditional resampling methods toward advanced strategies that integrate robust feature engineering (selection, extraction, and reduction) to tackle the "curse of dimensionality."
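
To make this direction concrete, the sketch below is a minimal, illustrative example only (assuming Python with scikit-learn and imbalanced-learn; the synthetic dataset and all parameter choices are hypothetical). It combines filter-based feature selection with SMOTE oversampling in a single sampler-aware pipeline, so that both dimensionality and skewness are handled inside cross-validation rather than on the full dataset.

```python
# Minimal sketch: feature selection + oversampling for a skewed, high-dimensional task.
# Assumes scikit-learn and imbalanced-learn; all parameters below are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # sampler-aware pipeline

# Synthetic high-dimensional, imbalanced data (roughly 5% minority class).
X, y = make_classification(n_samples=2000, n_features=200, n_informative=15,
                           weights=[0.95, 0.05], random_state=0)

pipe = Pipeline(steps=[
    ("select", SelectKBest(score_func=f_classif, k=20)),   # tame dimensionality first
    ("resample", SMOTE(random_state=0)),                   # applied only to training folds
    ("clf", RandomForestClassifier(class_weight="balanced", random_state=0)),
])

# Balanced accuracy avoids rewarding majority-only predictions.
scores = cross_val_score(pipe, X, y, scoring="balanced_accuracy", cv=5)
print(f"balanced accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```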

Furthermore, as AI deployment scales in sensitive sectors, "black box" solutions are no longer acceptable. The cutting edge of research now focuses on Explainable AI (XAI), to ensure that minority-class predictions are interpretable, and on Federated Learning, to handle imbalanced data across decentralized networks without compromising privacy.
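
To illustrate the XAI angle, the sketch below is again only a minimal, non-authoritative example (assuming Python with scikit-learn; the dataset, model, and scoring choices are hypothetical). It computes permutation importance scored on minority-class recall, so that the resulting explanation reflects what drives rare-event detection rather than overall accuracy.

```python
# Minimal sketch: minority-focused explanation via permutation importance.
# Assumes scikit-learn; dataset and model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, n_informative=8,
                           weights=[0.93, 0.07], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

# Score importances on recall of the positive (minority) class, not accuracy,
# so features that matter only for rare events are not masked by the majority.
result = permutation_importance(model, X_te, y_te, scoring="recall",
                                n_repeats=20, random_state=1)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"feature {i}: importance {result.importances_mean[i]:.4f}")
```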

The aim of this Special Issue is to chart the path forward, gathering cutting-edge contributions that bridge the gap between theoretical novelty and industrial applicability.

We invite papers addressing the following key themes:

  1. Advanced Representation and Feature Engineering:
  • Novel strategies for dimensionality reduction and manifold learning in imbalanced spaces.
  • Feature selection and extraction techniques that are tailored for highly skewed datasets.
  • Representation learning and embeddings for minority class enhancement.
  2. Trustworthy and Distributed AI:
  • Explainability (XAI): Interpretability mechanisms for models trained on imbalanced data.
  • Federated Learning: Handling non-IID and imbalanced data in privacy-preserving, distributed environments.
  • Fairness-aware learning and bias mitigation.
  3. Architectures and Methodologies:
  • Deep Learning and Ensemble architectures for complex imbalance.
  • Hybrid data-level and algorithm-level strategies.
  • Learning from imbalanced data streams and concept drift adaptation.
  • Multi-label, multi-class, and noisy-label learning scenarios.
  4. Applications:
  • Industrial IoT monitoring.
  • Fraud and intrusion detection.
  • Radiomics and medical diagnostics.
  • Software defect prediction.
  • Social media behavior analysis.

We dedicate this Special Issue to the memory of Prof. Barbara Pes, whose pioneering work on high-dimensional and imbalanced data has inspired much of the current research in this area. Her contributions, notably on combining feature selection with cost-sensitive and ensemble methods to address class imbalance in high-dimensional biomedical datasets, set a strong foundation for the directions pursued in this Special Issue.

Dr. Andrea Loddo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • imbalanced learning
  • explainable AI (XAI)
  • federated learning
  • feature engineering
  • dimensionality reduction
  • trustworthy AI
  • deep learning for imbalanced data
  • distributed machine learning
  • fairness and bias mitigation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

