Deep Learning and Explainable Artificial Intelligence (2nd Edition)
A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "AI-Driven Innovations".
Deadline for manuscript submissions: 31 July 2026
Special Issue Editor
Dr. Kartik B. Ariyur
Interests: predictive maintenance; health monitoring for ground and aerial vehicles; data analytics; AI; innovation; nonlinear systems analysis and synthesis; adaptation; estimation; filtering; control; general artificial intelligence
Special Issue Information
Dear Colleagues,
Breakthroughs in 'deep learning', through the use of intermediate features in multilayer 'neural networks' and of generative adversarial networks that pair a generative model with a discriminative one, combined with the massive increase in the computing power of GPUs, have made 'artificial intelligence' widely popular over the past decade. The quotation marks in the previous sentence are inserted intentionally to remind the reader that learning in the biological sense, which improves survival outcomes via biological nervous systems, and intelligent decisions that enhance energy and resource availability are far beyond what current software can hope to achieve. The first phase of this Special Issue was successful in making AI/ML methods explainable in a variety of applications, ranging from agriculture to fluid mechanics to law enforcement. This second phase aims to go one step further towards the ultimate objective of making the human decision makers who use AI/ML methods accountable for their decisions. This becomes possible when AI/ML methods generate outcomes with predictable properties when fed data that satisfy certain conditions. We do not want automatic machines massacring humans, releasing dam waters, or shutting down power grids or markets, with no human being held responsible for negligence, damages, or even chargeable offences.
To this end, this Special Issue calls for articles that lay out the limits of where and how AI in its present form can be used, in addition to continuing the effort to make it explainable and transparent.
It is thus hoped that the Special Issue will stimulate AI that increases efficiency without compromising safety, trust, fairness, predictability, and reliability when applied to systems with large energy use, such as power, water, transport, or financial grids, as well as to law and government policy. As a first step towards this goal of transparency in AI algorithms, we seek papers that document methods meeting the following criteria:
- The results are reproducible, at least in the statistical sense;
- Algorithms are expressed in a common language of sequences of vector-matrix algebra operations, which also underlies much of deep learning;
- Conditions satisfied by the data inputs and by the objective functions used for optimization or curve fitting are explicitly listed;
- The propagation of data uncertainty to algorithmic outcomes is documented through sensitivity analysis or Monte Carlo simulation (a sketch of such a check follows this list);
- The suitability of AI/ML methods for the application is demonstrated from the perspective of decision-maker accountability.
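As a minimal illustration of the second and fourth criteria, the sketch below (in Python with NumPy; the model dimensions, weights, nominal input, and noise level are all hypothetical choices made for illustration, not requirements of this call) writes a small feedforward model as an explicit sequence of vector-matrix operations and propagates an assumed Gaussian input uncertainty to the output by Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights of a small two-layer model, written out
# as explicit vector-matrix algebra (second criterion above).
W1 = rng.normal(size=(8, 3))   # first-layer weights: R^3 -> R^8
b1 = rng.normal(size=8)        # first-layer bias
W2 = rng.normal(size=(1, 8))   # second-layer weights: R^8 -> R^1
b2 = rng.normal(size=1)        # second-layer bias

def model(x):
    """Forward pass as a sequence of vector-matrix operations."""
    h = np.tanh(W1 @ x + b1)   # hidden layer: affine map followed by tanh
    return W2 @ h + b2         # output layer: affine map

# Nominal input and an assumed Gaussian input uncertainty
# (standard deviation chosen purely for illustration).
x_nominal = np.array([1.0, -0.5, 2.0])
sigma_x = 0.05

# Monte Carlo propagation: perturb the input, record the output.
n_samples = 10_000
outputs = np.array([
    model(x_nominal + rng.normal(scale=sigma_x, size=x_nominal.shape))
    for _ in range(n_samples)
]).ravel()

print(f"output mean: {outputs.mean():.4f}")
print(f"output std (propagated input uncertainty): {outputs.std():.4f}")
```

A submission following these criteria would go further and relate the propagated output uncertainty to the decision thresholds used downstream, so that a human decision maker can judge whether the method is fit for the intended purpose.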
Potential issues of interest include the following: while there is, in general, no repeatability in the training of weights in deep learning or in most neural networks, there is repeatability in the approximated functions or decision boundaries obtained from similar sets of input data. Analogous results exist in adaptive control, where asymptotic tracking is achieved without convergence of the parameter estimates. Similarly, a ChatGPT-like AI needs to maintain the consistency of its conclusions provided its inputs remain consistent. The use of AI in law can, for example, pursue quantifiable goals such as the prompt compensation of the victim and the long-term reformation of the criminal towards higher levels of productivity, rather than the classical and more subjective legal outcomes of punishment or retribution. Can a chess- or Go-playing GAN handle some level of randomness in the rules of the game? Is the repeatability or predictability of AI/ML in a given application high enough to permit its use in critical areas? A sketch of one such repeatability check follows.
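For instance, the following sketch (plain NumPy; the dataset, architecture, and training schedule are hypothetical choices made for illustration) trains the same small classifier from several random initializations and measures how often the resulting decision boundaries disagree on a fixed evaluation grid. The learned weights differ from seed to seed, yet the predicted labels should agree on all but a thin region near the boundary, which is the kind of statistical repeatability described above:

```python
import numpy as np

def make_data(rng, n=400):
    """Two Gaussian blobs in 2D; a hypothetical stand-in for real data."""
    x0 = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n // 2, 2))
    x1 = rng.normal(loc=[+1.0, +1.0], scale=0.7, size=(n // 2, 2))
    return np.vstack([x0, x1]), np.hstack([np.zeros(n // 2), np.ones(n // 2)])

def train(X, y, seed, hidden=16, lr=0.3, epochs=2000):
    """One-hidden-layer classifier trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # predicted probabilities
        g = (p - y[:, None]) / len(y)              # cross-entropy gradient w.r.t. logits
        dW2 = H.T @ g; db2 = g.sum(0)
        dH = (g @ W2.T) * (1.0 - H**2)
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    # Classify by the sign of the logit (equivalent to probability > 0.5).
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel() > 0.0

rng = np.random.default_rng(0)
X, y = make_data(rng)

# Train the same architecture from several random initializations.
models = [train(X, y, seed) for seed in range(5)]

# Compare predicted labels on a fixed evaluation grid: the weights differ
# across seeds, but the decision boundaries should agree almost everywhere.
grid = np.array([[a, b] for a in np.linspace(-3, 3, 61)
                        for b in np.linspace(-3, 3, 61)])
labels = np.array([m(grid) for m in models])
disagreement = (labels != labels[0]).any(axis=0).mean()
print(f"fraction of grid points where seeds disagree: {disagreement:.3f}")
```

Reporting such a disagreement fraction, together with the conditions on the input data under which it stays small, is one concrete way to make a repeatability claim auditable.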
Dr. Kartik B. Ariyur
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- model interpretability
- model explainability
- transparency in AI
- trustworthy AI
- human-in-the-loop AI
- human/decision-maker accountability
- algorithmic accountability
- AI ethics
- causality/causal inference
- fairness, accountability, and transparency (FAT/ML)
- responsible AI
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found here.
Related Special Issue
- Deep Learning and Explainable Artificial Intelligence in Computers (14 articles)
