Editorial

Uncertainty-Aware Artificial Intelligence: Editorial †

by H. M. Dipu Kabir 1,2,* and Subrota Kumar Mondal 3
1 Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia
2 Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
3 School of Computer Science and Engineering, Macau University of Science and Technology, Macau 999078, China
* Author to whom correspondence should be addressed.
This Editorial is for the special issue: “Uncertainty-Aware Artificial Intelligence.” Link to the Special Issue: https://www.mdpi.com/journal/computers/special_issues/R97JUA0Z98.
Computers 2025, 14(11), 474; https://doi.org/10.3390/computers14110474
Submission received: 23 September 2025 / Accepted: 30 October 2025 / Published: 1 November 2025
Artificial Intelligence (AI) has revolutionized the way we think, perceive, and interact, delivering remarkable advances across domains ranging from computer vision and natural language processing to healthcare, power systems, finance, autonomy, and even philosophy [1,2,3,4,5]. In many domains, machines now outperform humans in speed, accuracy, and scale, enabling solutions that would have been unthinkable only a few decades ago. Despite these successes, AI models remain error-prone [6,7]. They can produce unreliable or even misleading predictions under certain conditions, raising concerns about robustness, transparency, risk, interpretability, and trust [8,9,10].
Understanding when and why AI models can fail is more important than celebrating their successes. Poor or unexpected performance can cause significant damage and raise concerns in the wider community. In this age of AI, poor performance is not the only worry: unexpectedly exceptional performance can raise concerns as well. Google's AI models, for example, learned the Bengali language even though they were not trained to do so [11]. AI now ingests real-world data from many authentic and inauthentic sources, and AI models are being applied to data collected from billions of people. This, too, raises concerns in the wider community.
We experience uncertainty in every part of our lives. When making a product, there is uncertainty about its lifetime, demand, transportation, competitors, production time, cost, and quality [12]. When maintaining a cloud-based system [13], a power grid [14], or any supply chain, we must consider uncertainties over time [15]. When we travel, we experience uncertainty from myriad sources. Knowledge of uncertainty can bring both safety and improved management [16]. Uncertainty is an unavoidable part of any modeling [17,18]. We therefore made the scope of this Special Issue broad, so that authors from diverse backgrounds could submit papers.
The research community has been developing concepts for quantifying uncertainty for several centuries [19]. Whenever we train a model by minimizing the mean squared error (MSE), that MSE indicates the model's overall expected error [20,21]. A model with a lower test-time error is considered superior. However, some uncertainties are inherent to the system: they arise from unknown factors, and improved modeling with the same input combinations cannot reduce them. Considering more input parameters often reduces the error [22]. Moreover, the MSE is only an overall figure. A model can perform well for certain input combinations and poorly for others, and systems often exhibit varying inherent randomness across the input domain [16]. Therefore, researchers have proposed quantifying heteroscedastic uncertainty [23]. Interval forecasts whose width varies across the input domain are the most popular representation of heteroscedastic uncertainty in regression tasks. However, traditional interval-based uncertainty quantification methods rest on strong assumptions [23]. Recent advances in AI have made assumption-free uncertainty quantification possible [24]. Moreover, uncertainty needs to be quantified properly in critical situations, where predictions receive the closest scrutiny; researchers have therefore proposed several AI-based methods that perform more uniformly in rare and critical situations. Even so, interval-based uncertainty remains debatable, as intervals do not represent the distribution of uncertainty. The most probable region of the prediction may not lie in the middle of the interval, and the probability distribution can be skewed Gaussian, lognormal, multimodal, or of any other type [16].
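To make the interval representation concrete, the sketch below fits lower and upper quantile lines to synthetic heteroscedastic data by subgradient descent on the pinball loss, then reports the prediction interval coverage probability (PICP) and mean width. The linear model, data, and hyperparameters are illustrative assumptions, not a method from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)
y = x + rng.normal(0, 0.05 + 0.3 * x)            # noise grows with x: heteroscedastic

def fit_quantile(x, y, tau, lr=0.05, epochs=500):
    """Linear quantile regression via subgradient descent on the pinball loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        r = y - (w * x + b)                      # residuals
        g = np.where(r > 0, tau, tau - 1.0)      # negative subgradient w.r.t. prediction
        w += lr * np.mean(g * x)
        b += lr * np.mean(g)
    return w, b

w_lo, b_lo = fit_quantile(x, y, 0.05)            # lower bound (5th percentile)
w_hi, b_hi = fit_quantile(x, y, 0.95)            # upper bound (95th percentile)
lo, hi = w_lo * x + b_lo, w_hi * x + b_hi
picp = np.mean((y >= lo) & (y <= hi))            # prediction interval coverage probability
width = np.mean(hi - lo)                         # mean prediction interval width
print(f"PICP={picp:.2f}, MPIW={width:.2f}")
```

The interval widens as x grows, mirroring the input-varying noise; a nominal 90% interval should cover roughly 90% of the test points.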
The application of AI is not limited to point prediction, and different applications need different types of uncertainty quantification. In classification, metrics such as accuracy, precision, and recall represent uncertainties. A confusion matrix indicates both the class-wise misclassification probability and which classes appear similar to one another from the model's perspective. Some researchers are also proposing uncertainty quantification metrics for segmentation [25], while others doubt the usefulness of such metrics [26]. In the future, AI may expand into many new fields, and we may see many new uncertainty quantification methods.
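As a concrete illustration, a row-normalized confusion matrix exposes both the class-wise misclassification probabilities and which class pairs the model confuses; the three-class labels below are hypothetical.

```python
import numpy as np

# Hypothetical three-class ground truth and predictions.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1                                # rows = true class, cols = predicted

# Row-normalizing turns counts into class-wise (mis)classification probabilities;
# large off-diagonal entries flag class pairs that look similar to the model.
row_probs = cm / cm.sum(axis=1, keepdims=True)
print(cm)
print(row_probs.round(2))
```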
We launched this Special Issue to gather ideas from researchers across these areas. We invited contributions introducing new concepts as well as surveys. Throughout the review process, editors and reviewers evaluated submissions and provided constructive feedback.
Lucke et al. (Contribution 1) have proposed a soft-label supervised meta-model with adversarial training for uncertainty quantification. Applying their method to the SVHN and CIFAR-10 datasets, they reported improved sensitivity, specificity, and average precision compared to a traditional counterpart.
Manna et al. (Contribution 2) have introduced a derivative-based spike encoding method and two loss functions for spiking neural networks. They evaluated their method on two electricity load datasets, where the proposed DecodingLoss consistently outperformed the existing SLAYER cost function.
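Contribution 2's exact encoding and DecodingLoss are not reproduced here; the sketch below shows only the generic idea behind derivative (delta) spike encoding, emitting a signed spike whenever the signal moves by more than a threshold since the last spike. The sine signal and threshold value are illustrative assumptions.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Delta-modulation-style spike encoding of a 1-D signal."""
    spikes, ref = [], signal[0]
    for v in signal[1:]:
        if v - ref > threshold:          # signal rose enough: positive spike
            spikes.append(1)
            ref = v
        elif ref - v > threshold:        # signal fell enough: negative spike
            spikes.append(-1)
            ref = v
        else:                            # change below threshold: no spike
            spikes.append(0)
    return np.array(spikes)

t = np.linspace(0, 2 * np.pi, 50)
spikes = delta_encode(np.sin(t))
print(spikes)
```

Rising segments of the sine wave produce +1 spikes and falling segments produce -1 spikes, so the spike train carries the signal's derivative information in a form a spiking network can consume.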
Al-Saidi et al. (Contribution 3) have proposed a neutrosophic hidden Markov model for automated sign language recognition, addressing scalability and uncertainty. The proposed model achieved 7% higher accuracy than the type-2 fuzzy HMM in sign language recognition.
Panayides et al. (Contribution 4) have proposed a least squares minimum class variance SVM that leverages class distributional information for efficient optimization.
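Contribution 4 builds on the least squares SVM formulation, in which training reduces to solving a single linear system rather than a quadratic program. The sketch below is a plain LS-SVM classifier on hypothetical two-cluster data, not the authors' minimum class variance variant; the linear kernel, regularization value, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical well-separated classes, labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]

gamma = 10.0                                   # regularization parameter
K = X @ X.T                                    # linear kernel matrix
Omega = np.outer(y, y) * K

# LS-SVM dual: solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1].
A = np.block([[np.zeros((1, 1)), y[None, :]],
              [y[:, None], Omega + np.eye(len(y)) / gamma]])
sol = np.linalg.solve(A, np.r_[0.0, np.ones(len(y))])
b, alpha = sol[0], sol[1:]

def predict(x_new):
    """Decision function sign(sum_i alpha_i y_i K(x_new, x_i) + b)."""
    return np.sign((alpha * y) @ (X @ x_new) + b)

print(predict(np.array([1.5, 1.5])), predict(np.array([-1.5, -1.5])))
```

Replacing the SVM's inequality constraints with equalities is what turns training into this one linear solve, which is the efficiency the contribution exploits.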
Igual et al. (Contribution 5) have proposed an interactive training protocol for regression models that control myoelectric devices, adapting to subject performance in real time. They tested their method on twenty healthy and four limb-absent participants.
Sasani et al. (Contribution 6) have forecast Bitcoin illiquidity using the Bitcoin hash rate and Twitter-based textual features. They applied and compared seven models, including an ANN, an LSTM, and a GRU.
Sunghae Jun (Contribution 7) has addressed zero-inflated text data, which contain an excessive number of zeros, by combining generative adversarial networks with statistical modeling. Replacing the excess zeros with small noise-infused values improved model performance.
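The zero-replacement step described above can be sketched as follows (the GAN component is omitted); the document-term matrix and noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical zero-inflated document-term matrix (most counts are zero).
tf = np.array([[0.0, 2.0, 0.0, 1.0],
               [1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 3.0, 0.0]])

# Replace the excess zeros with small positive noise so downstream
# statistical models no longer face a degenerate, zero-heavy distribution.
noise = np.abs(rng.normal(0.0, 0.01, tf.shape))
tf_smoothed = np.where(tf == 0, noise, tf)
print(tf_smoothed)
```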
Dey et al. (Contribution 8) have proposed a dropout-based neural network for tool wear prediction. They considered both data and model uncertainty.
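Dropout-based uncertainty estimation commonly follows the Monte Carlo dropout idea [6]: keep dropout active at inference and treat the spread over repeated stochastic passes as model uncertainty. The tiny untrained network below illustrates the mechanism only and is not the authors' tool wear model; its weights and dropout rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weights for a tiny 1-16-1 regression network.
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x, p_drop=0.2):
    """One stochastic forward pass; dropout stays ON at test time for MC dropout."""
    h = np.maximum(0.0, x @ W1 + b1)             # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop          # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)                # inverted dropout scaling
    return h @ W2 + b2

x = np.array([[0.5]])
samples = np.stack([forward(x) for _ in range(200)])   # T stochastic passes
mean, std = samples.mean(), samples.std()
print(f"prediction={mean:.3f}, model uncertainty (std)={std:.3f}")
```

The standard deviation across passes approximates model (epistemic) uncertainty; data (aleatoric) uncertainty would additionally require the network to predict a noise term, which is omitted here.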
Jebur et al. (Contribution 9) have presented a deep feature fusion framework for multi-scenario violence detection that combines feature fusion with transfer learning. The paper has received more than twenty citations in two years.
El Lel et al. (Contribution 10) have evaluated CNN-based ensemble models for detecting COVID-19 from chest X-rays. Their findings show that ensembles achieve higher accuracy and F1-scores than the best-performing individual model.
Tsoulos et al. (Contribution 11) have proposed a two-stage optimization method for neural network training. In the first stage, they estimate parameter bounds using Particle Swarm Optimization; in the second, they perform global optimization to refine the parameters within these bounds.
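The two-stage idea can be sketched as follows: a bare-bones PSO pass locates a promising region, parameter bounds are derived from the swarm's personal bests, and a second search refines within those bounds. The quadratic toy loss, swarm settings, and random refinement stage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w):
    """Toy training loss standing in for a network's error surface."""
    return np.sum((w - np.array([1.5, -0.5])) ** 2)

# Stage 1: minimal PSO to locate a promising region of parameter space.
pos = rng.uniform(-5, 5, size=(30, 2))
vel = np.zeros_like(pos)
best_p = pos.copy()                                   # per-particle best positions
best_f = np.array([loss(p) for p in pos])             # per-particle best losses
for _ in range(100):
    g = best_p[best_f.argmin()]                       # global best particle
    vel = (0.7 * vel
           + 1.5 * rng.random(pos.shape) * (best_p - pos)
           + 1.5 * rng.random(pos.shape) * (g - pos))
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    improved = f < best_f
    best_p[improved], best_f[improved] = pos[improved], f[improved]

# Stage 2: derive bounds from the swarm, then refine within them.
lo, hi = best_p.min(axis=0), best_p.max(axis=0)
grid = rng.uniform(lo, hi, size=(2000, 2))            # crude within-bounds search
w_star = grid[np.argmin([loss(w) for w in grid])]
print(lo, hi, w_star)
```

Bounding the search region first shrinks the space the second stage must explore, which is the efficiency argument behind the two-stage design.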
Cevallos et al. (Contribution 12) have presented a systematic review of machine unlearning techniques. The study analyzed thirty-seven papers, evaluating principles, metrics, and methods across regression and classification tasks and highlighting recent advances, challenges, and trade-offs.
The published papers strengthen the research community's understanding of neural network-based solutions. Finalizing the concepts and publishing these works was a collaborative effort, and several papers in this Special Issue have already received a good number of citations and downloads [27,28].

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Lucke, K.; Vakanski, A.; Xian, M. Soft-Label Supervised Meta-Model with Adversarial Samples for Uncertainty Quantification. Computers 2025, 14, 12. https://doi.org/10.3390/computers14010012.
  • Manna, D.L.; Vicente-Sola, A.; Kirkl, P.; Bihl, T.J.; Di Caterina, G. Time Series Forecasting via Derivative Spike Encoding and Bespoke Loss Functions for Spiking Neural Networks. Computers 2024, 13, 202. https://doi.org/10.3390/computers13080202.
  • Al-Saidi, M.; Ballagi, Á.; Hassen, O.A.; Saad, S.M. Cognitive classifier of hand gesture images for automated sign language recognition: Soft robot assistance based on Neutrosophic Markov Chain paradigm. Computers 2024, 13, 106. https://doi.org/10.3390/computers13040106.
  • Panayides, M.; Artemiou, A. Least squares minimum class variance support vector machines. Computers 2024, 13, 34. https://doi.org/10.3390/computers13020034.
  • Igual, C.; Castillo, A.; Igual, J. An interactive training model for myoelectric regression control based on human–machine cooperative performance. Computers 2024, 13, 29. https://doi.org/10.3390/computers13010029.
  • Sasani, F.; Moghareh Dehkordi, M.; Ebrahimi, Z.; Dustmohammadloo, H.; Bouzari, P.; Ebrahimi, P.; Lencsés, E.; Fekete-Farkas, M. Forecasting of Bitcoin Illiquidity Using High-Dimensional and Textual Features. Computers 2024, 13, 20. https://doi.org/10.3390/computers13010020.
  • Jun, S. Zero-Inflated Text Data Analysis using Generative Adversarial Networks and Statistical Modeling. Computers 2023, 12, 258. https://doi.org/10.3390/computers12120258.
  • Dey, A.; Yodo, N.; Yadav, O.P.; Shanmugam, R.; Ramoni, M. Addressing uncertainty in tool wear prediction with dropout-based neural network. Computers 2023, 12, 187. https://doi.org/10.3390/computers12090187.
  • Jebur, S.A.; Hussein, K.A.; Hoomod, H.K.; Alzubaidi, L. Novel deep feature fusion framework for multi-scenario violence detection. Computers 2023, 12, 175. https://doi.org/10.3390/computers12090175.
  • El Lel, T.; Ahsan, M.; Haider, J. Detecting COVID-19 from chest X-rays using convolutional neural network ensembles. Computers 2023, 12, 105. https://doi.org/10.3390/computers12050105.
  • Tsoulos, I.G.; Tzallas, A.; Karvounis, E.; Tsalikakis, D. Bound the parameters of neural networks using particle swarm optimization. Computers 2023, 12, 82. https://doi.org/10.3390/computers12040082.
  • Cevallos, I.D.; Benalcázar, M.E.; Valdivieso Caraguay, Á.L.; Zea, J.A.; Barona-López, L.I. A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks. Computers 2025, 14, 150. https://doi.org/10.3390/computers14040150.

References

  1. Ofori-Oduro, M.; Amer, M. Defending object detection models against image distortions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 3854–3863.
  2. Theisen, R.; Wang, H.; Varshney, L.R.; Xiong, C.; Socher, R. Evaluating state-of-the-art classification models against Bayes optimality. Adv. Neural Inf. Process. Syst. 2021, 34, 9367–9377.
  3. Gujarathi-Mehta, S.; Shrivastava, R.; Angadi, S. SpinalCNN: Spinal convolutional neural network based kidney cancer detection. Biomed. Signal Process. Control 2026, 112, 108587.
  4. Cao, L. AI in finance: Challenges, techniques, and opportunities. ACM Comput. Surv. 2022, 55, 1–38.
  5. Maniruzzaman, M.; Jaman, M.S.; Abid, M.A.S.; Mahmud, Z.; Rahman, M.E.; Siddiky, M.N.A. A Hybrid mRMR-RFE and AI Framework for Advancing Alzheimer’s Biomarkers Discovery. In Proceedings of the 2025 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 18–21 February 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 0282–0287.
  6. Gal, Y.; Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; pp. 1050–1059.
  7. Malinin, A.; Gales, M. Predictive uncertainty estimation via prior networks. Adv. Neural Inf. Process. Syst. 2018, 31.
  8. Dolar, T.; Chen, J.; Chen, W. Uncertainty quantification driven machine learning for improving model accuracy in imbalanced regression tasks. Expert Syst. Appl. 2025, 261, 125526.
  9. Zhang, F.; Wang, M.; Li, L.; Liu, Y.; Wang, H. Probabilistic intervals prediction based on adaptive regression with attention residual connections and covariance constraints. Eng. Appl. Artif. Intell. 2025, 156, 111013.
  10. Mahmood, S. Heavy metals in poultry products in Bangladesh: A possible death threat to future generations. J. Soc. Political Sci. 2019, 2, 98–105.
  11. Pichai, S. Google AI Teaches Itself Bangla. Available online: www.tbsnews.net/tech/ai-teaches-itself-bangla-619070 (accessed on 22 September 2025).
  12. Reddy, V.V.K.; Reddy, R.V.K.; Munaga, M.S.K.; Karnam, B.; Maddila, S.K.; Kolli, C.S. Deep learning-based credit card fraud detection in federated learning. Expert Syst. Appl. 2024, 255, 124493.
  13. Zhang, Y.; Wang, P.; Cheng, K.; Zhao, J.; Tao, J.; Hai, J.; Feng, J.; Deng, C.; Wang, X. Building Accurate and Interpretable Online Classifiers on Edge Devices. IEEE Trans. Parallel Distrib. Syst. 2025, 36, 1779–1796.
  14. Lu, X.; Qiu, J.; Lei, G.; Zhu, J. An interval prediction method for day-ahead electricity price in wholesale market considering weather factors. IEEE Trans. Power Syst. 2023, 39, 2558–2569.
  15. Sharmin, A.; Mahmud, B.U.; Nabi, N.; Shaima, M.; Faruk, M.J.H. Cyber Attacks on Space Information Networks: Vulnerabilities, Threats, and Countermeasures for Satellite Security. J. Cybersecur. Priv. 2025, 5, 76.
  16. Kabir, H.D.; Mondal, S.K.; Khanam, S.; Khosravi, A.; Rahman, S.; Qazani, M.R.C.; Alizadehsani, R.; Asadi, H.; Mohamed, S.; Nahavandi, S.; et al. Uncertainty aware neural network from similarity and sensitivity. Appl. Soft Comput. 2023, 149, 111027.
  17. Pannattee, P.; Kumwilaisak, W.; Hansakunbuntheung, C.; Thatphithakkul, N.; Kuo, C.C.J. American Sign Language fingerspelling recognition in the wild with spatio-temporal feature extraction and multi-task learning. Expert Syst. Appl. 2024, 243, 122901.
  18. Zilly, J.; Achille, A.; Censi, A.; Frazzoli, E. On plasticity, invariance, and mutually frozen weights in sequential task learning. Adv. Neural Inf. Process. Syst. 2021, 34, 12386–12399.
  19. Chrystal, G. On some Fundamental Principles in the Theory of Probability. Trans. Actuar. Soc. Edinb. 1891, 2, 420–439.
  20. Pearson, K. Contributions to the mathematical theory of evolution. Philos. Trans. R. Soc. Lond. A 1894, 185, 71–110.
  21. Kabir, H.D. Reduction of class activation uncertainty with background information. IEEE Trans. Artif. Intell. 2025; Early Access.
  22. Kohavi, R.; John, G.H. Automatic parameter selection by minimizing estimated error. In Proceedings of the Machine Learning Proceedings 1995, Tahoe City, CA, USA, 9–12 July 1995; Elsevier: Amsterdam, The Netherlands, 1995; pp. 304–312.
  23. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Comprehensive review of neural network-based prediction intervals and new advances. IEEE Trans. Neural Netw. 2011, 22, 1341–1356.
  24. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Lower upper bound estimation method for construction of neural network-based prediction intervals. IEEE Trans. Neural Netw. 2010, 22, 337–346.
  25. Baumgartner, C.F.; Tezcan, K.C.; Chaitanya, K.; Hötker, A.M.; Muehlematter, U.J.; Schawkat, K.; Becker, A.S.; Donati, O.; Konukoglu, E. PhiSeg: Capturing uncertainty in medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 119–127.
  26. Czolbe, S.; Arnavaz, K.; Krause, O.; Feragen, A. Is segmentation uncertainty useful? In Proceedings of the International Conference on Information Processing in Medical Imaging, Virtual Event, 28–30 June 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 715–726.
  27. Jebur, S.A.; Hussein, K.A.; Hoomod, H.K.; Alzubaidi, L. Novel deep feature fusion framework for multi-scenario violence detection. Computers 2023, 12, 175.
  28. Cevallos, I.D.; Benalcázar, M.E.; Valdivieso Caraguay, Á.L.; Zea, J.A.; Barona-López, L.I. A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks. Computers 2025, 14, 150.
