Uncertainty-Aware Artificial Intelligence

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: 31 January 2025

Special Issue Editors


Guest Editor
1. Research Fellow, Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Orange, NSW 2800, Australia
2. Research Fellow, Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
Interests: artificial intelligence; uncertainty quantification; imbalanced data

Guest Editor
1. MARTIANS Lab (Machine Learning and ARTificial Intelligence for Advancing Nuclear Systems), Missouri University of Science and Technology, Rolla, MO 65409, USA
2. Nuclear, Plasma, and Radiological Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
Interests: digital twin; computational nuclear engineering; uncertainty quantification; explainable AI; robust optimization

Guest Editor
Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China
Interests: cloud computing; networks and distributed systems; blockchain; deep learning; natural language processing

Guest Editor
Department of Computer Science, North Dakota State University, Fargo, ND 58102, USA
Interests: artificial/computational intelligence; autonomy applications in aerospace; cybersecurity; 3D printing command/control and assessment; educational assessment in computing disciplines

Special Issue Information

Dear Colleagues,

Neural networks have brought eye-catching performance improvements to approaches for many prediction and decision-making problems. Machines can now perform a variety of complex tasks that only humans could perform several decades ago, and in various fields they perform better than humans. However, neural network models still provide poor predictions in many situations, and users of neural networks must develop an understanding of the situations in which this can happen. A good knowledge of the causes of uncertainty can help future researchers design more robust models, and it allows current users of prediction systems to judge the credibility of a prediction.

The purpose of this Special Issue is to explore potential improvements that can lead us toward more stable neural network-based solutions. Prospective authors are encouraged to submit new concepts in accordance with the submission guidelines. The editors and reviewers will aim to understand and improve the submitted concepts and to provide effective feedback to the researchers. We hope the issue will bring technological improvements and a better understanding of these concepts to everyone involved, including readers.

Dr. Hussain Mohammed Dipu Kabir
Dr. Syed Bahauddin Alam
Dr. Subrota Kumar Mondal
Dr. Jeremy Straub
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • uncertainty
  • robust modeling
  • uncertainty-aware artificial intelligence
  • explainable artificial intelligence
  • probabilistic forecast

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

17 pages, 1241 KiB  
Article
Time Series Forecasting via Derivative Spike Encoding and Bespoke Loss Functions for Spiking Neural Networks
by Davide Liberato Manna, Alex Vicente-Sola, Paul Kirkland, Trevor Joseph Bihl and Gaetano Di Caterina
Computers 2024, 13(8), 202; https://doi.org/10.3390/computers13080202 - 15 Aug 2024
Abstract
The potential of neuromorphic (NM) solutions often lies in their low-SWaP (Size, Weight, and Power) capabilities, which drive their application to domains that can benefit from these properties. Nevertheless, spiking neural networks (SNNs), with their inherent time-based nature, present an attractive alternative also for areas where data features are present in the time dimension, such as time series forecasting. Time series data, characterized by seasonality and trends, can benefit from the unique processing capabilities of SNNs, which offer a novel approach for this type of task. Additionally, time series data can serve as a benchmark for evaluating SNN performance, providing a valuable alternative to traditional datasets. However, the challenge lies in the real-valued nature of time series data, which is not inherently suited for SNN processing. In this work, we propose a novel spike-encoding mechanism and two loss functions to address this challenge. Our encoding system, inspired by NM event-based sensors, converts the derivative of a signal into spikes, enhancing interoperability with NM technology and making the data suitable for SNN processing. Our loss functions then optimize the learning of subsequent spikes by the SNN. We train a simple SNN using SLAYER as a learning rule and conduct experiments on two electricity load forecasting datasets. Our results demonstrate that SNNs can effectively learn from encoded data and that our proposed DecodingLoss function consistently outperforms SLAYER's SpikeTime loss function. This underscores the potential of SNNs for time series forecasting and sets the stage for further research in this promising area.
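
The paper's central encoding idea, turning a signal's derivative into events the way a neuromorphic sensor would, can be sketched with a simple accumulator-and-threshold rule. The following Python sketch is illustrative only; the threshold value and the two-channel up/down layout are assumptions, not the authors' implementation:

```python
import numpy as np

def derivative_spike_encode(signal, threshold=0.05):
    """Convert the running derivative of a 1-D signal into spike counts,
    in the spirit of event-based (NM) sensors: an accumulator integrates
    the signal's changes and emits a spike each time it crosses +/- threshold."""
    spikes = np.zeros((2, len(signal)), dtype=np.int32)  # row 0: up, row 1: down
    acc = 0.0
    for t in range(1, len(signal)):
        acc += signal[t] - signal[t - 1]     # local derivative (unit time step)
        while acc >= threshold:              # positive-change spikes
            spikes[0, t] += 1
            acc -= threshold
        while acc <= -threshold:             # negative-change spikes
            spikes[1, t] += 1
            acc += threshold
    return spikes

# Toy usage: one period of a sine wave.
x = np.sin(np.linspace(0, 2 * np.pi, 200))
print(derivative_spike_encode(x).sum(axis=1))  # total up/down spike counts
```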

25 pages, 1263 KiB  
Article
Cognitive Classifier of Hand Gesture Images for Automated Sign Language Recognition: Soft Robot Assistance Based on Neutrosophic Markov Chain Paradigm
by Muslem Al-Saidi, Áron Ballagi, Oday Ali Hassen and Saad M. Saad
Computers 2024, 13(4), 106; https://doi.org/10.3390/computers13040106 - 22 Apr 2024
Cited by 2
Abstract
In recent years, Sign Language Recognition (SLR) has become an additional topic of discussion in the human–computer interface (HCI) field. The most significant difficulty confronting SLR is finding algorithms that scale effectively with a growing vocabulary size and a limited supply of training data for signer-independent applications. Due to its sensitivity to shape information, automated SLR based on hidden Markov models (HMMs) cannot characterize the confusing distributions of the observations in gesture features with sufficiently precise parameters. To simulate uncertainty in hypothesis spaces, many scholars extend HMMs with higher-order fuzzy sets to generate interval type-2 fuzzy HMMs. This expansion is helpful because it brings the uncertainty and fuzziness of conventional HMM mapping under control. However, existing interval type-2 fuzzy HMMs cannot account for uncertain information that includes indeterminacy. In this work, neutrosophic sets are used to deal with indeterminacy in a practical SLR setting: the neutrosophic hidden Markov model successfully identifies the best route between states when there is vagueness. The three neutrosophic membership functions (truth, indeterminacy, and falsity grades) provide additional degrees of freedom for assessing the HMM's uncertainty. This approach could be helpful for an extensive vocabulary and hence seeks to solve the scalability issue. In addition, it may function independently of the signer, without needing data gloves or any other input devices. The experimental results demonstrate that the neutrosophic HMM is nearly as computationally demanding as the fuzzy HMM but has similar performance and is more robust to gesture variations.
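
To make the neutrosophic extension concrete, the sketch below runs a Viterbi-style decoding in which each emission carries a (truth, indeterminacy, falsity) triple rather than a single probability, defuzzified with the score function (2 + T - I - F)/3, one common way to rank single-valued neutrosophic numbers. Everything here, from the toy model to the choice of score function, is an illustrative assumption rather than the paper's algorithm:

```python
import numpy as np

def neutrosophic_score(t, i, f):
    """Rank a (truth, indeterminacy, falsity) triple with one common
    score function for single-valued neutrosophic numbers."""
    return (2.0 + t - i - f) / 3.0

def viterbi_neutrosophic(trans, emis_tif, init):
    """Viterbi-style best path where each emission is a (T, I, F) triple.

    trans:    (S, S) transition probabilities
    emis_tif: (S, N, 3) per-state, per-step neutrosophic emission grades
    init:     (S,) initial state probabilities
    """
    S, N, _ = emis_tif.shape
    # Defuzzify each emission triple into a scalar in [0, 1].
    emis = neutrosophic_score(emis_tif[..., 0], emis_tif[..., 1], emis_tif[..., 2])
    delta = np.log(init) + np.log(emis[:, 0])
    back = np.zeros((S, N), dtype=int)
    for n in range(1, N):
        cand = delta[:, None] + np.log(trans)          # cand[i, j]: i -> j
        back[:, n] = np.argmax(cand, axis=0)
        delta = cand[back[:, n], np.arange(S)] + np.log(emis[:, n])
    path = [int(np.argmax(delta))]                     # trace best path back
    for n in range(N - 1, 0, -1):
        path.append(int(back[path[-1], n]))
    return path[::-1]

# Toy decode: 2 states, 6 observations with random (T, I, F) grades.
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
init = np.array([0.5, 0.5])
emis_tif = np.random.default_rng(0).random((2, 6, 3))
print(viterbi_neutrosophic(trans, emis_tif, init))
```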

13 pages, 392 KiB  
Article
Least Squares Minimum Class Variance Support Vector Machines
by Michalis Panayides and Andreas Artemiou
Computers 2024, 13(2), 34; https://doi.org/10.3390/computers13020034 - 26 Jan 2024
Abstract
In this paper, we propose a Support Vector Machine (SVM)-type algorithm that is computationally faster than other common algorithms in the SVM family. The new algorithm uses the distributional information of each class and therefore combines the benefits of using the class variance in the optimization with a least squares approach, which yields an analytic solution to the minimization problem and is therefore computationally efficient. We demonstrate an important property of the algorithm that allows us to address the inversion of a singular matrix in the solution. We also demonstrate through real data experiments that we improve on the computational time without losing any accuracy compared to previously proposed algorithms.
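
The least squares construction is what makes this kind of method computationally cheap: with squared slacks and the within-class scatter matrix as the regularizer, training reduces to a single linear system. A schematic numpy sketch under those assumptions (labels in {-1, +1}; the small ridge added to a possibly singular scatter matrix is a simple stand-in for the paper's treatment of singularity):

```python
import numpy as np

def within_class_scatter(X, y):
    """Sum of per-class scatter matrices around each class mean."""
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return Sw

def ls_mcv_svm_fit(X, y, C=1.0, eps=1e-8):
    """Solve min_{w,b} 0.5 w'Sw w + 0.5 C sum_i (y_i - w'x_i - b)^2.

    Setting the gradients to zero gives one (d+1)x(d+1) linear system,
    which is the analytic solution the least squares approach buys us.
    A tiny ridge (eps) keeps a singular Sw invertible in this sketch."""
    n, d = X.shape
    Sw = within_class_scatter(X, y) + eps * np.eye(d)
    A = np.zeros((d + 1, d + 1))
    A[:d, :d] = Sw / C + X.T @ X
    A[:d, d] = A[d, :d] = X.sum(axis=0)
    A[d, d] = n
    rhs = np.concatenate([X.T @ y, [y.sum()]])
    sol = np.linalg.solve(A, rhs)
    return sol[:d], sol[d]

# Toy usage: two Gaussian blobs labelled -1 / +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
w, b = ls_mcv_svm_fit(X, y)
print((np.sign(X @ w + b) == y).mean())   # training accuracy
```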

15 pages, 1827 KiB  
Article
An Interactive Training Model for Myoelectric Regression Control Based on Human–Machine Cooperative Performance
by Carles Igual, Alberto Castillo and Jorge Igual
Computers 2024, 13(1), 29; https://doi.org/10.3390/computers13010029 - 21 Jan 2024
Cited by 2
Abstract
Electromyography-based wearable biosensors are used for prosthetic control. Machine learning prosthetic controllers are based on classification and regression models. The advantage of the regression approach is that it permits a smoother and more natural controller. However, the existing training methods for regression-based solutions are the same as the training protocol used in the classification approach, where only a finite set of movements is trained. In this paper, we present a novel training protocol for myoelectric regression-based solutions that includes a feedback term, allowing exploration beyond a finite set of movements, and that is automatically adjusted according to the real-time performance of the subject during the training session. Consequently, the algorithm distributes the training time efficiently, focusing on the movements where performance is worst and optimizing the training for each user. We tested and compared the existing and new training strategies in 20 able-bodied participants and 4 amputees. The results show that the novel training procedure autonomously produces a better training session. As a result, the new controller outperforms the one trained with the existing method: for the able-bodied participants, the average number of targets hit increased from 86% to 95% and the path efficiency from 40% to 84%, while for the subjects with limb deficiencies, the completion rate increased from 58% to 69% and the path efficiency from 24% to 56%.
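
The protocol's core mechanism, reallocating training time toward the movements where the user currently performs worst, can be expressed as a simple adaptive sampler. A hypothetical sketch: the error measure, smoothing rule, and sampling scheme are placeholders rather than the authors' controller:

```python
import numpy as np

class AdaptiveMovementSampler:
    """Pick the next training movement with probability proportional to a
    running estimate of the user's error on that movement, so training
    time concentrates where real-time performance is worst."""

    def __init__(self, n_movements, smoothing=0.8, seed=0):
        self.errors = np.ones(n_movements)   # optimistic start: sample all
        self.smoothing = smoothing
        self.rng = np.random.default_rng(seed)

    def next_movement(self):
        p = self.errors + 1e-3               # floor keeps every movement alive
        return self.rng.choice(len(p), p=p / p.sum())

    def report(self, movement, error):
        """Fold in one trial's error, e.g. 1 - path efficiency."""
        self.errors[movement] = (self.smoothing * self.errors[movement]
                                 + (1 - self.smoothing) * error)

# Usage: movements with persistently high error get sampled more often.
sampler = AdaptiveMovementSampler(n_movements=8)
for _ in range(100):
    m = sampler.next_movement()
    sampler.report(m, error=0.9 if m == 3 else 0.1)  # user struggles with #3
print(sampler.errors.round(2))
```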

37 pages, 8647 KiB  
Article
Forecasting of Bitcoin Illiquidity Using High-Dimensional and Textual Features
by Faraz Sasani, Mohammad Moghareh Dehkordi, Zahra Ebrahimi, Hakimeh Dustmohammadloo, Parisa Bouzari, Pejman Ebrahimi, Enikő Lencsés and Mária Fekete-Farkas
Computers 2024, 13(1), 20; https://doi.org/10.3390/computers13010020 - 9 Jan 2024
Cited by 2
Abstract
Liquidity is the ease of converting an asset (physical or digital) into cash or another asset without loss and is reflected in the relationship between the time scale and the price scale of an investment. This article examines the illiquidity of Bitcoin (BTC). Bitcoin hash rate information was collected at three different time intervals; in parallel, textual information related to these intervals was collected from Twitter for each day. Because illiquidity prediction is a regression problem, approaches based on recurrent networks were suggested. Seven approaches (ANN, SVM, SANN, LSTM, simple RNN, GRU, and IndRNN) were tested on these data. To evaluate these approaches, three evaluation methods were used: random split (paper), random split (run), and linear split (run). The research results indicate that the IndRNN approach provided better results than the alternatives.
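
As a rough picture of the recurrent-regression setup, the PyTorch sketch below feeds a window of daily feature vectors (standing in for hash-rate and text-derived values) through a GRU to regress next-day illiquidity. Layer sizes and the GRU choice are illustrative; the paper's IndRNN and feature pipeline are not reproduced here:

```python
import torch
import torch.nn as nn

class IlliquidityGRU(nn.Module):
    """GRU regressor: a window of daily feature vectors -> next-day illiquidity."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, n_features)
        _, h = self.rnn(x)             # h: (1, batch, hidden), last hidden state
        return self.head(h[-1]).squeeze(-1)

# Toy training loop on random data standing in for hash-rate + text features.
model = IlliquidityGRU(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 30, 10)           # 32 windows of 30 days, 10 features each
y = torch.rand(32)                    # illiquidity targets
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```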

12 pages, 3053 KiB  
Article
Zero-Inflated Text Data Analysis using Generative Adversarial Networks and Statistical Modeling
by Sunghae Jun
Computers 2023, 12(12), 258; https://doi.org/10.3390/computers12120258 - 10 Dec 2023
Cited by 4
Abstract
In big data analysis, various zero-inflated problems occur; in particular, inflated zeros strongly affect the analysis of big text data. In general, the data preprocessed from text documents form a matrix whose rows and columns correspond to documents and terms, respectively, and each element of this matrix is the frequency with which a term occurs in a document. Most elements of the matrix are zeros, because the number of columns is much larger than the number of rows, and this zero inflation decreases model performance in text data analysis. To overcome this problem, we propose a method of zero-inflated text data analysis using generative adversarial networks (GANs) and statistical modeling. In this paper, we solve the zero-inflated problem using synthetic data generated from the original zero-inflated data. The main finding of our study is how to change zero values to very small numeric values with random noise through the GAN. The generator and discriminator of the GAN learn the zero-inflated text data together and build a model that generates synthetic data that can replace the zero-inflated data. We conducted experiments and report results on real and simulated data sets to verify the improved performance of our proposed method. In our experiments, we used five quantitative measures, the prediction sum of squares, R-squared, log-likelihood, Akaike information criterion, and Bayesian information criterion, to evaluate the model's performance on the original and synthetic data sets. We found that our proposed method outperforms the traditional methods on all measures.
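
The core trick, replacing the exact zeros of the document-term matrix with small noisy values, can be shown directly. The numpy sketch below illustrates only the replacement step, using a fixed half-normal draw; in the paper these replacement values are learned by the GAN's generator-discriminator pair rather than drawn from a fixed distribution:

```python
import numpy as np

def soften_zeros(dtm, scale=1e-3, rng=None):
    """Replace exact zeros in a document-term matrix with small positive
    noise so downstream statistical models no longer face zero inflation.
    (In the paper this replacement is *learned* by a GAN; a fixed
    half-normal draw is used here only to illustrate the effect.)"""
    rng = rng or np.random.default_rng(0)
    out = dtm.astype(float).copy()
    mask = out == 0
    out[mask] = np.abs(rng.normal(0.0, scale, size=mask.sum()))
    return out

# Toy document-term matrix: 2 documents, 4 terms, mostly zeros.
dtm = np.array([[3, 0, 0, 1],
                [0, 2, 0, 0]])
print(soften_zeros(dtm))
```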

17 pages, 2268 KiB  
Article
Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network
by Arup Dey, Nita Yodo, Om P. Yadav, Ragavanantham Shanmugam and Monsuru Ramoni
Computers 2023, 12(9), 187; https://doi.org/10.3390/computers12090187 - 19 Sep 2023
Cited by 1
Abstract
Data-driven algorithms have been widely applied in predicting tool wear because of their high prediction performance, the availability of data sets, and recent advancements in computing capabilities. Although most algorithms are supposed to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty exists in distinct phases of applying data-driven algorithms due to noise and randomness in data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty, while model assumptions and imperfections are sources of model uncertainty. In this paper, both types of uncertainty are considered in tool wear prediction. Empirical mode decomposition is applied to reduce uncertainty in the raw data, and the Monte Carlo dropout technique is used in training a neural network algorithm to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, with the interval range representing the degree of uncertainty. Different performance metrics are used to evaluate the proposed method, and it is shown that the proposed approach can predict tool wear with higher accuracy.
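
Monte Carlo dropout amounts to leaving dropout active at inference and reading a prediction interval off the spread of repeated stochastic forward passes. A generic PyTorch sketch; the network size, number of passes, and percentile-based interval are illustrative choices, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, n_features, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

@torch.no_grad()
def mc_dropout_interval(model, x, passes=100, alpha=0.05):
    """Prediction interval from repeated stochastic forward passes."""
    model.train()                      # keep dropout ON at inference time
    preds = torch.stack([model(x) for _ in range(passes)])
    lo = torch.quantile(preds, alpha / 2, dim=0)
    hi = torch.quantile(preds, 1 - alpha / 2, dim=0)
    return preds.mean(dim=0), lo, hi   # point estimate + interval bounds

model = DropoutRegressor(n_features=8)
mean, lo, hi = mc_dropout_interval(model, torch.randn(5, 8))
print(hi - lo)                         # interval width = degree of uncertainty
```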

22 pages, 15152 KiB  
Article
Novel Deep Feature Fusion Framework for Multi-Scenario Violence Detection
by Sabah Abdulazeez Jebur, Khalid A. Hussein, Haider Kadhim Hoomod and Laith Alzubaidi
Computers 2023, 12(9), 175; https://doi.org/10.3390/computers12090175 - 5 Sep 2023
Cited by 20
Abstract
Detecting violence in various scenarios is a difficult task that requires a high degree of generalisation. This includes fights in different environments such as schools, streets, and football stadiums. However, most current research on violence detection focuses on a single scenario, limiting its ability to generalise across multiple scenarios. To tackle this issue, this paper offers a new multi-scenario violence detection framework that operates in two environments: fighting in various locations and rugby stadiums. This framework has three main steps. Firstly, it uses transfer learning by employing three pre-trained models from the ImageNet dataset: Xception, Inception, and InceptionResNet. This approach enhances generalisation and prevents overfitting, as these models have already learned valuable features from a large and diverse dataset. Secondly, the framework combines features extracted from the three models through feature fusion, which improves feature representation and enhances performance. Lastly, the concatenation step combines the features of the first violence scenario with the second scenario to train a machine learning classifier, enabling the classifier to generalise across both scenarios. This concatenation framework is highly flexible, as it can incorporate multiple violence scenarios without requiring training from scratch with additional scenarios. The Fusion model, which incorporates feature fusion from multiple models, obtained an accuracy of 97.66% on the RLVS dataset and 92.89% on the Hockey dataset. The Concatenation model accomplished an accuracy of 97.64% on the RLVS and 92.41% on the Hockey datasets with just a single classifier. This is the first framework that allows for the classification of multiple violent scenarios within a single classifier. Furthermore, this framework is not limited to violence detection and can be adapted to different tasks.
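
The fusion step, extracting features from several ImageNet-pre-trained backbones and concatenating them ahead of a conventional classifier, looks roughly like the Keras sketch below. The 299x299 input size, average pooling, and SVM head are assumptions, and per-model input preprocessing is omitted for brevity:

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Three ImageNet-pre-trained backbones used as frozen feature extractors.
backbones = [
    tf.keras.applications.Xception(weights="imagenet", include_top=False, pooling="avg"),
    tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg"),
    tf.keras.applications.InceptionResNetV2(weights="imagenet", include_top=False, pooling="avg"),
]

def fused_features(images):
    """Concatenate global-average-pooled features from all backbones."""
    return np.concatenate([b.predict(images, verbose=0) for b in backbones], axis=1)

# Usage on a small batch of 299x299 RGB frames (random stand-ins here);
# the fused features then train a conventional classifier.
frames = np.random.rand(4, 299, 299, 3).astype("float32")
X = fused_features(frames)
clf = SVC().fit(X, np.array([0, 1, 0, 1]))  # violence / non-violence labels
```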

22 pages, 4355 KiB  
Article
Detecting COVID-19 from Chest X-rays Using Convolutional Neural Network Ensembles
by Tarik El Lel, Mominul Ahsan and Julfikar Haider
Computers 2023, 12(5), 105; https://doi.org/10.3390/computers12050105 - 16 May 2023
Cited by 6
Abstract
Starting in late 2019, the coronavirus SARS-CoV-2 began spreading around the world, causing disruption in both daily life and healthcare systems. The disease is estimated to have caused more than 6 million deaths worldwide [WHO]. The pandemic and the global reaction to it severely affected the world economy, causing a significant increase in global inflation rates, unemployment, and the cost of energy commodities. To stop the spread of the virus and dampen its global effect, it is imperative to detect infected patients early on. Convolutional neural networks (CNNs) can effectively diagnose a patient's chest X-ray (CXR) to assess whether they have been infected. Previous medical image classification studies have shown exceptional accuracies, and the trained algorithms can be shared and deployed using a computer or a mobile device. CNN-based COVID-19 detection can be employed as a supplement to reverse transcription-polymerase chain reaction (RT-PCR). In this research work, 11 ensemble networks consisting of 6 CNN architectures and a classifier layer are evaluated on their ability to differentiate the CXRs of patients with COVID-19 from those of patients who have not been infected. The performance of the ensemble models is then compared to the performance of the individual CNN architectures. The best COVID-19 detection accuracy, 96.29%, was achieved by the logistic regression ensemble model, 1.13% higher than that of the top-performing individual model. The highest F1-score, 88.6%, was achieved by the support vector classifier ensemble model, 2.06% better than the score achieved by the best-performing individual model. This work demonstrates that combining a set of top-performing COVID-19 detection models can lead to better results when the models are integrated into an ensemble. The model can be deployed in overworked or remote health centers as an accurate and rapid supplement or back-up method for detecting COVID-19.
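
The logistic regression ensemble is a stacking arrangement: each CNN's predicted COVID-19 probability for a CXR becomes one input feature of a meta-classifier. A schematic scikit-learn sketch that assumes the per-model probabilities have already been computed (random stand-ins below):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# probs[i, j]: probability of COVID-19 that CNN j assigns to CXR i.
# These would come from the six trained CNNs; random stand-ins here.
rng = np.random.default_rng(0)
probs = rng.random((200, 6))
labels = (probs.mean(axis=1) > 0.5).astype(int)   # toy ground-truth labels

# Meta-classifier: logistic regression stacked on the CNN outputs.
meta = LogisticRegression().fit(probs, labels)
print(meta.score(probs, labels))

# At deployment, a new CXR is scored by all six CNNs and the meta-model
# combines their probabilities into one ensemble decision.
new_probs = rng.random((1, 6))
print(meta.predict(new_probs))
```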

17 pages, 653 KiB  
Article
Bound the Parameters of Neural Networks Using Particle Swarm Optimization
by Ioannis G. Tsoulos, Alexandros Tzallas, Evangelos Karvounis and Dimitrios Tsalikakis
Computers 2023, 12(4), 82; https://doi.org/10.3390/computers12040082 - 17 Apr 2023
Cited by 1
Abstract
Artificial neural networks are machine learning models widely used in many sciences as well as in practical applications. The basic element of these models is a vector of parameters whose values must be estimated by some computational method; this process is called training. For effective training of the network, computational methods from the field of global minimization are often used. However, for global minimization techniques to be effective, the bounds of the objective function should be clearly defined. In this paper, a two-stage global optimization technique is presented for efficient training of artificial neural networks. In the first stage, the bounds for the neural network parameters are estimated using Particle Swarm Optimization; in the second stage, the parameters of the network are optimized within the bounds of the first stage using global optimization techniques. The suggested method was applied to a series of well-known problems from the literature, and the experimental results were more than encouraging.
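
The two-stage scheme can be sketched as: run PSO on the training loss, take the spread of the particles' personal-best positions (plus a margin) as per-parameter bounds, then hand those bounds to a global optimizer. In the illustration below, scipy's differential evolution stands in for the paper's second-stage optimizer, and the swarm hyperparameters and bound rule are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def pso_bounds(loss, dim, n_particles=30, iters=100, seed=0):
    """Stage 1: plain PSO; return per-dimension bounds spanned by the
    particles' personal-best positions, padded by a small margin."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10.0, 10.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([loss(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pval)]                    # swarm's global best
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([loss(p) for p in x])
        improved = fx < pval
        pbest[improved] = x[improved]
        pval[improved] = fx[improved]
    lo, hi = pbest.min(axis=0), pbest.max(axis=0)
    pad = 1e-6 + 0.05 * (hi - lo)
    return list(zip(lo - pad, hi + pad))

# Stage 2: global optimization restricted to the PSO-estimated bounds.
# 'loss' would be the network's training error as a function of its
# weight vector; a toy quadratic stands in here.
loss = lambda w: float(np.sum((w - 1.0) ** 2))
bounds = pso_bounds(loss, dim=5)
result = differential_evolution(loss, bounds)
print(result.x)
```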
