Advances in Trustworthy and Robust Artificial Intelligence

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 20 July 2025

Special Issue Editors


Dr. Chen Zhao
Guest Editor
Department of Computer Science, Baylor University, Waco, TX, USA
Interests: fairness-aware machine learning; uncertainty quantification; domain generalization; graph mining; structural causal mechanisms

Dr. Minglai Shao
Guest Editor
School of New Media and Communication, Tianjin University, Tianjin 300072, China
Interests: deep learning; machine learning; trustworthy AI; anomaly detection; graph mining; intelligent communication; natural language processing; data mining and data science in real-world applications

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is rapidly evolving, with significant advancements in machine learning, deep learning, and autonomous systems. As AI models are increasingly deployed in critical sectors such as healthcare, finance, autonomous driving, and cybersecurity, ensuring that these systems are robust, reliable, and trustworthy is crucial. AI systems must perform reliably across diverse conditions, including unforeseen and adversarial situations, while ensuring fairness, interpretability, and accountability. Research in trustworthy and robust AI aims to address these challenges by developing theoretical foundations, frameworks, and methods that ensure AI systems can operate safely and fairly in real-world scenarios. This Special Issue seeks to highlight the latest developments in this important research area, which lies at the intersection of mathematics, computer science, and artificial intelligence.

We are pleased to invite you to contribute to this Special Issue, which focuses on advances in trustworthy and robust artificial intelligence. This Special Issue aligns with the aims of the Mathematics journal, particularly within the section Mathematics and Computer Science, by emphasizing the critical role of mathematical models, algorithms, and computational frameworks in building AI systems that are both reliable and resilient to adversarial conditions. The objective of this Special Issue is to bring together a collection of original research articles and comprehensive reviews that contribute to the development of theoretical and computational methodologies for ensuring AI robustness, fairness, and trustworthiness across various domains.

This Special Issue aims to explore several themes, including but not limited to uncertainty quantification, fairness-aware machine learning, adversarial robustness, explainable and interpretable AI, secure AI systems, and ethical AI frameworks. We welcome submissions that address these challenges from both mathematical and computational perspectives, including theoretical analyses, algorithm development, empirical studies, and real-world applications. Potential article types include original research papers, systematic reviews, and case studies, with a focus on interdisciplinary contributions that leverage mathematics to enhance the trustworthiness and robustness of AI systems.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Machine learning models for fairness-aware AI;
  • Uncertainty quantification and risk management in AI systems;
  • Adversarial robustness in machine learning algorithms;
  • Explainable AI (XAI) and interpretability techniques;
  • Fairness and bias mitigation in algorithmic decision making;
  • Secure AI: privacy-preserving methods and robustness against cyberattacks;
  • Ethical AI frameworks and accountability mechanisms;
  • Generalization and robustness across shifting domains;
  • Bayesian methods for trustworthy AI;
  • Mathematical foundations of AI safety and robustness.

We look forward to receiving your contributions.

Dr. Chen Zhao
Dr. Minglai Shao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • trustworthy AI
  • robust machine learning
  • explainable AI (XAI)
  • distribution shift
  • out-of-distribution detection
  • uncertainty quantification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

13 pages, 389 KiB  
Article
MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation
by Pengfei Yue, Hailiang Tang, Wanyu Li, Wenxiao Zhang and Bingjie Yan
Mathematics 2025, 13(9), 1463; https://doi.org/10.3390/math13091463 - 29 Apr 2025
Abstract
Knowledge graph completion (KGC) is a critical task for addressing the incompleteness of knowledge graphs and supporting downstream applications. However, it faces significant challenges, including insufficient structured information and uneven entity distribution. Although existing methods have alleviated these issues to some extent, they often rely heavily on extensive training and fine-tuning, which results in low efficiency. To tackle these challenges, we introduce our MLKGC framework, a novel approach that combines large language models (LLMs) with multi-modal modules (MMs). LLMs leverage their advanced language understanding and reasoning abilities to enrich the contextual information for KGC, while MMs integrate multi-modal data, such as audio and images, to bridge knowledge gaps. This integration augments the capability of the model to address long-tail entities, enhances its reasoning processes, and facilitates more robust information integration through the incorporation of diverse inputs. By harnessing the synergy between LLMs and MMs, our approach reduces dependence on traditional text-based training and fine-tuning, providing a more efficient and accurate solution for KGC tasks. It also offers greater flexibility in addressing complex relationships and diverse entities. Extensive experiments on multiple benchmark KGC datasets demonstrate that MLKGC effectively leverages the strengths of both LLMs and multi-modal data, achieving superior performance in link-prediction tasks.
(This article belongs to the Special Issue Advances in Trustworthy and Robust Artificial Intelligence)
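
The abstract above describes combining an LLM's text-based plausibility judgment with multimodal evidence to rank candidate entities for link prediction. The following is a minimal, self-contained sketch of that general idea, not the MLKGC implementation: the functions llm_score and multimodal_score, the weighting parameter alpha, and the toy inputs are all hypothetical stand-ins for a real language model and a real image/audio encoder.

```python
# Illustrative sketch (not the MLKGC implementation): rank candidate tail
# entities for an incomplete triple (head, relation, ?) by combining a
# text-based score with a score from multimodal features.

def llm_score(prompt: str, candidate: str) -> float:
    """Hypothetical stub: a real system would query an LLM here.
    Toy heuristic: count tokens shared between prompt and candidate."""
    return float(len(set(prompt.lower().split()) & set(candidate.lower().split())))

def multimodal_score(candidate: str, image_tags: set[str]) -> float:
    """Hypothetical stub for a score derived from image/audio features."""
    return 1.0 if candidate.lower() in image_tags else 0.0

def rank_candidates(head: str, relation: str, candidates: list[str],
                    image_tags: set[str], alpha: float = 0.7) -> list[tuple[str, float]]:
    """Blend the two scores and return candidates sorted best-first."""
    prompt = f"{head} {relation} ?"
    scored = [(c, alpha * llm_score(prompt, c) + (1 - alpha) * multimodal_score(c, image_tags))
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    print(rank_candidates("Paris", "capital_of", ["France", "Germany", "Italy"],
                          image_tags={"france", "eiffel tower"}))
```

In a full system the stubs would be replaced by calls to the actual LLM and multimodal encoders; the blending of the two signals is the point of the sketch.
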
22 pages, 1097 KiB  
Article
Temporal Community Detection and Analysis with Network Embeddings
by Limengzi Yuan, Xuanming Zhang, Yuxian Ke, Zhexuan Lu, Xiaoming Li and Changzheng Liu
Mathematics 2025, 13(5), 698; https://doi.org/10.3390/math13050698 - 21 Feb 2025
Abstract
As dynamic systems, social networks exhibit continuous topological changes over time, and are typically modeled as temporal networks. In order to understand their dynamic characteristics, it is essential to investigate temporal community detection (TCD), which poses significant challenges compared to static network analysis. These challenges arise from the need to simultaneously detect community structures and track their evolutionary behaviors. To address these issues, we propose TCDA-NE, a novel TCD algorithm that combines evolutionary clustering with convex non-negative matrix factorization (Convex-NMF). Our method innovatively integrates community structure into network embedding, preserving both microscopic details and community-level information in node representations while effectively capturing the evolutionary dynamics of networks. A distinctive feature of TCDA-NE is its utilization of a common-neighbor similarity matrix, which significantly enhances the algorithm's ability to identify meaningful community structures in temporal networks. By establishing coherent relationships between node representations and community structures, we optimize both the Convex-NMF-based representation learning model and the evolutionary clustering-based TCD model within a unified framework. We derive the updating rules and provide rigorous theoretical proofs for the algorithm's validity and convergence. Extensive experiments on synthetic and real-world social networks, including email and phone call networks, demonstrate the superior performance of our model in community detection and tracking temporal network evolution. Notably, TCDA-NE achieves a maximum improvement of up to 0.1 in the normalized mutual information (NMI) index compared to state-of-the-art methods, highlighting its effectiveness in temporal community detection.
(This article belongs to the Special Issue Advances in Trustworthy and Robust Artificial Intelligence)
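
As a rough illustration of the ingredients named in the abstract — a common-neighbor similarity matrix, non-negative matrix factorization, and temporal smoothing across network snapshots — the sketch below shows a simple baseline, not the TCDA-NE algorithm: standard NMF stands in for Convex-NMF, and the blending weight beta is an assumption made for illustration only.

```python
# Illustrative sketch (not TCDA-NE): per-snapshot NMF on a common-neighbor
# similarity matrix, with factors blended across snapshots for smoothness.
import numpy as np
from sklearn.decomposition import NMF

def common_neighbor_similarity(adj: np.ndarray) -> np.ndarray:
    """S[i, j] = number of common neighbours of nodes i and j."""
    s = adj @ adj
    np.fill_diagonal(s, 0)
    return s.astype(float)

def temporal_communities(snapshots: list[np.ndarray], k: int, beta: float = 0.5):
    """Return a node-by-community membership matrix for each snapshot."""
    memberships, prev = [], None
    for adj in snapshots:
        s = common_neighbor_similarity(adj)
        w = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(s)
        if prev is not None:
            w = beta * w + (1 - beta) * prev  # smooth against the previous snapshot
        memberships.append(w)
        prev = w
    return memberships

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = [(rng.random((20, 20)) < 0.2).astype(int) for _ in range(3)]
    snaps = [np.triu(a, 1) + np.triu(a, 1).T for a in raw]  # symmetric, no self-loops
    parts = temporal_communities(snaps, k=3)
    print([np.argmax(w, axis=1) for w in parts])  # community label per node, per snapshot
```

The paper's method additionally couples the factorization with evolutionary clustering and derives dedicated update rules; the sketch only conveys the overall pipeline.
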

40 pages, 40760 KiB  
Article
Dynamic-Max-Value ReLU Functions for Adversarially Robust Machine Learning Models
by Korn Sooksatra and Pablo Rivas
Mathematics 2024, 12(22), 3551; https://doi.org/10.3390/math12223551 - 13 Nov 2024
Cited by 2
Abstract
The proliferation of deep learning has transformed artificial intelligence, demonstrating prowess in domains such as image recognition, natural language processing, and robotics. Nonetheless, deep learning models are susceptible to adversarial examples, well-crafted inputs that can induce erroneous predictions, particularly in safety-critical contexts. Researchers actively pursue countermeasures such as adversarial training and robust optimization to fortify model resilience. This vulnerability is notably accentuated by the ubiquitous utilization of ReLU functions in deep learning models. A previous study proposed an innovative solution to mitigate this vulnerability, presenting a capped ReLU function tailored to bolster neural network robustness against adversarial examples. However, the approach had a scalability problem. To address this limitation, we introduce the dynamic-max-value ReLU function and undertake a series of comprehensive experiments across diverse datasets.
(This article belongs to the Special Issue Advances in Trustworthy and Robust Artificial Intelligence)
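
For intuition, the sketch below shows one plausible reading of a capped ReLU whose maximum output value is a learnable, per-layer parameter rather than a fixed constant such as the 6 used in ReLU6; bounding activations limits how much a small input perturbation can change downstream features. This is not the authors' exact formulation — the class name DynamicCappedReLU, the learnable scalar cap, and the initial value are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's formulation): a ReLU whose upper cap
# is a trainable parameter, so the clipping level can adapt per layer.
import torch
import torch.nn as nn

class DynamicCappedReLU(nn.Module):
    def __init__(self, initial_cap: float = 6.0):
        super().__init__()
        # The cap is trained jointly with the network weights (assumption).
        self.cap = nn.Parameter(torch.tensor(initial_cap))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clip to [0, |cap|]; abs keeps the bound positive during training.
        return torch.minimum(torch.relu(x), torch.abs(self.cap))

if __name__ == "__main__":
    act = DynamicCappedReLU()
    x = torch.linspace(-2.0, 10.0, steps=7)
    print(act(x))  # outputs are clipped to the interval [0, cap]
```
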
