Robust and Uncertainty-Aware Learning from Real-World Data

A topical collection in Machine Learning and Knowledge Extraction (ISSN 2504-4990). This collection belongs to the section "Learning".


Editor


Prof. Dr. Federico Cabitza
Collection Editor
1. Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milano, Italy
2. Digital Health & Wellbeing Center, Fondazione Bruno Kessler (FBK), 38122 Trento, Italy
Interests: human-computer interaction; health informatics; decision support; information quality; socio-technical systems

Topical Collection Information

Dear Colleagues,

Machine learning in real-world contexts often operates under non-ideal conditions, including imperfect data and supervision [Han, 2024]. This Topical Collection invites contributions that advance robustness and uncertainty-awareness in machine learning systems, enabling them to function reliably, transparently, and safely across diverse and challenging environments.

We welcome original research, methodological innovations, and in-depth reviews addressing a wide range of data imperfections—such as measurement noise, missing values, artifacts, outliers, and distributional shifts [Ovadia et al., 2019]—as well as supervisory uncertainty, including ambiguous or noisy labels, partial annotations, soft/probabilistic labels, and annotator disagreement.

We particularly encourage submissions that develop principled approaches for modeling, estimating, and propagating uncertainty across the entire learning pipeline. Methods offering formal reliability guarantees [Shi, 2025], calibration strategies, or robust evaluation under adverse conditions are of special interest. Contributions that demonstrate theoretical rigor alongside practical relevance, especially in critical domains such as healthcare, engineering, or the environmental sciences, are highly valued [Campagner et al., 2025; Marconi & Cabitza, 2025].
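To make the notion of a formal reliability guarantee concrete: split conformal prediction (the family of methods surveyed in [Shi, 2025]) wraps any point predictor in prediction intervals with finite-sample coverage. The sketch below is purely illustrative, using toy data and a stand-in model rather than any specific method from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; in practice these would be real-world measurements.
x = rng.uniform(-3, 3, size=600)
y = np.sin(x) + rng.normal(scale=0.2, size=600)

# A deliberately simple "model": predict sin(x), standing in for any fitted regressor.
predict = np.sin

# Split conformal prediction: hold out a calibration set, compute absolute
# residuals, and take a conservative empirical quantile as the interval half-width.
x_cal, y_cal = x[:300], y[:300]
x_test, y_test = x[300:], y[300:]

alpha = 0.1  # target miscoverage: intervals should cover y with prob >= 90%
residuals = np.abs(y_cal - predict(x_cal))
n = len(residuals)
q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals with a distribution-free, finite-sample coverage guarantee.
lower = predict(x_test) - q
upper = predict(x_test) + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.2f}")  # should be close to 0.90
```

The guarantee holds under exchangeability of calibration and test points, which is exactly the assumption that the distributional shifts discussed above can break; handling that gap is one of the open problems this collection targets.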

A strong requirement for this Topical Collection is the use of real-world data. Submissions whose experiments are conducted solely on toy problems or synthetic datasets, without a clear connection to real-world applications, are discouraged.

This collection also welcomes work at the intersection of robust learning and related areas, such as algorithmic fairness, data-centric AI, causality-aware learning, semi/self-supervised learning under weak supervision, and techniques for enhancing the reproducibility, interpretability, and transparency of ML systems [Cabitza & Parimbelli, 2025; Salvi et al., 2025].

Topics of interest include, but are not limited to, the following:

  • Robust machine learning under noisy, incomplete, or corrupted data;
  • Learning with distributional shifts and covariate shift adaptation;
  • Uncertainty quantification and propagation in supervised and unsupervised learning;
  • Learning from noisy, soft, probabilistic, or partially labeled data;
  • Annotator disagreement modeling and aggregation;
  • Calibration, reliability analysis, and principled evaluation metrics;
  • Out-of-distribution detection and robust generalization;
  • Trustworthy, safe, or interpretable ML under uncertainty;
  • Applications to real-world domains (e.g., medicine, engineering, climate science);
  • Benchmarks, datasets, and tools for robust and uncertainty-aware learning.

References

Cabitza, F., & Parimbelli, E. (2025). Let XAI generate reliability metadata, not medical explanations. Computer Methods and Programs in Biomedicine, 109090.

Campagner, A., Biganzoli, E. M., Balsano, C., Cereda, C., & Cabitza, F. (2025). Modeling Unknowns: A Vision for Uncertainty-Aware Machine Learning in Healthcare. International Journal of Medical Informatics, 106014.

Han, B. (2024, August). Trustworthy machine learning under imperfect data. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (pp. 8535-8540).

Marconi, L., & Cabitza, F. (2025). Show and tell: A critical review on robustness and uncertainty for a more responsible medical AI. International Journal of Medical Informatics, 105970.

Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley, D., Nowozin, S., ... & Snoek, J. (2019). Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. Advances in Neural Information Processing Systems, 32.

Salvi, M., Seoni, S., Campagner, A., Gertych, A., Acharya, U. R., Molinari, F., & Cabitza, F. (2025). Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare. International Journal of Medical Informatics, 197, 105846.

Shi, Y. (2025). Reliable Uncertainty Quantification in Machine Learning via Conformal Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 39, No. 28, pp. 29299-29300).

Prof. Dr. Federico Cabitza
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • algorithmic robustness
  • weak supervision
  • epistemic and aleatoric uncertainty
  • Bayesian deep learning
  • self-supervised learning
  • semi-supervised learning
  • causal inference in ML
  • data-centric machine learning
  • fairness and bias mitigation
  • ML reproducibility and replicability
  • trustworthy AI systems
  • active learning under uncertainty
  • label noise modeling
  • probabilistic graphical models
  • learning with small or imbalanced datasets

Published Papers

This collection is now open for submission.