Topic Editors

School of Cyber Engineering, Xidian University, Xi'an 710071, China
School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
State Key Laboratory of Integrated Service Network, Xidian University, Xi'an 710071, China

The Future of Artificial Intelligence: Trends, Challenges, and Developments

Abstract submission deadline
30 June 2026
Manuscript submission deadline
30 September 2026

Topic Information

Dear Colleagues,

Artificial Intelligence (AI) has become a transformative force across industries, with widespread applications in healthcare, education, transportation, and scientific research. Among the most significant advances are those in deep learning and reinforcement learning, whose recent breakthroughs have driven progress in natural language processing, computer vision, and autonomous systems. AI is also being widely adopted in edge computing and the Internet of Things (IoT), where it plays a crucial role in traffic management for smart cities and predictive maintenance in intelligent manufacturing. With ongoing advances in algorithms and computing hardware, AI deployments in logistics, healthcare, and agriculture are significantly enhancing efficiency, automation, and safety. AI is likewise transforming remote sensing, where it is actively used for satellite image analysis, environmental monitoring, land use classification, disaster prediction, and climate change studies. The fields being integrated with AI, however, extend far beyond those listed here.

Despite this promising outlook, AI still faces many challenges. Chief among them is the interpretability of AI models, which has become a critical factor limiting their adoption, especially in high-risk fields such as healthcare, finance, justice, and autonomous driving. Current deep learning models are often considered "black boxes" because their internal decision-making is opaque, making it difficult for users to understand how a prediction was reached. Enhancing the interpretability of AI models and building secure, trustworthy AI systems have therefore become key research directions.
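As one concrete (and deliberately generic) illustration of post-hoc interpretability, the following Python sketch computes input-gradient saliency for a toy classifier. The model, layer sizes, and input here are illustrative assumptions, not a method prescribed by this Topic.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a "black box"; sizes are illustrative.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 16, requires_grad=True)   # one input with 16 features
logits = model(x)
top_class = logits.argmax(dim=1).item()

# Gradient of the winning class score w.r.t. the input: features with
# large gradient magnitude are those the prediction is locally sensitive to.
logits[0, top_class].backward()
saliency = x.grad.abs().squeeze(0)
print(saliency)
```

Saliency maps of this kind are only a first step; much of the research solicited here concerns attribution methods with firmer guarantees.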

Integrating AI with quantum computing could lead to breakthroughs in computational power, solving complex problems that are currently out of reach. Interpretability, moreover, will continue to be tied closely to specific application scenarios: enhancing the transparency of AI decisions in financial risk control, providing verifiable safety guarantees in autonomous driving, and enabling more precise environmental monitoring in remote sensing. The widespread adoption of collaborative AI will enhance intelligent human–machine interaction, fostering innovation and productivity. Overall, the future of AI is filled with both opportunities and challenges, and this Topic aims to solicit the latest research findings in artificial intelligence.

Topics of interest include but are not limited to the following:

  • Differential privacy in AI models (see the illustrative sketch after this list);
  • AI in healthcare and medical applications;
  • AI for autonomous vehicles and intelligent transportation systems;
  • Federated learning for decentralized AI systems;
  • AI-powered robotics and human–robot collaboration;
  • Edge AI and real-time inference for IoT;
  • Quantum computing integration in AI;
  • Large language models (LLMs) and generative AI;
  • Backdoor attacks and defenses in deep learning;
  • Robustness verification of AI models;
  • Secure AI for autonomous systems and critical infrastructure;
  • AI-driven satellite image processing and enhancement;
  • AI for multi-spectral and hyperspectral remote sensing;
  • Smart contract vulnerabilities in AI-driven applications;
  • AI-driven intrusion detection and threat intelligence;
  • AI-based predictive maintenance in industrial applications;
  • Interpretable AI-enabled remote sensing;
  • Interpretable AI-enabled financial risk assessment;
  • Human–AI interaction for interpretability control.
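As a concrete illustration of the first item above, the following hedged Python sketch shows the Gaussian-mechanism step at the heart of DP-SGD-style training: per-example gradients are clipped to an L2 bound and noised before averaging. The function name, clipping bound, and noise multiplier are illustrative assumptions; production systems should rely on an audited library such as Opacus together with a proper privacy accountant.

```python
import torch

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_example_grads: tensor of shape (batch, num_params). Toy sketch only."""
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = per_example_grads.norm(dim=1, keepdim=True)
    scale = (clip_norm / norms).clamp(max=1.0)
    clipped = per_example_grads * scale
    # Sum, then add Gaussian noise calibrated to the clipping bound.
    summed = clipped.sum(dim=0)
    noise = torch.randn_like(summed) * noise_multiplier * clip_norm
    return (summed + noise) / per_example_grads.shape[0]

grads = torch.randn(32, 10)   # 32 examples, 10 parameters (toy sizes)
print(privatize_gradients(grads))
```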

Prof. Dr. Yinbin Miao
Dr. Jun Feng
Dr. Wenqian Dong
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • generative AI
  • federated learning
  • large language models

Participating Journals

Journal            Impact Factor   CiteScore   Launched   First Decision (median)   APC
AI                 3.1             7.2         2020       18.9 days                 CHF 1600
Applied Sciences   2.5             5.3         2011       18.4 days                 CHF 2400
Designs            -               3.9         2017       21.7 days                 CHF 1600
Electronics        2.6             5.3         2012       16.4 days                 CHF 2400
Mathematics        2.3             4.0         2013       18.3 days                 CHF 2600
Remote Sensing     4.2             8.3         2009       23.9 days                 CHF 2700
Sensors            3.4             7.3         2001       18.6 days                 CHF 2600

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (1 paper)

Article
Hybrid Uncertainty Metrics-Based Privacy-Preserving Alternating Multimodal Representation Learning
by Zhe Sun, Yaowei Huang, Aohai Zhang, Chao Li, Lifan Jiang, Xiaotong Liao, Ran Li and Junping Wan
Appl. Sci. 2025, 15(10), 5229; https://doi.org/10.3390/app15105229 - 8 May 2025
Abstract
Multimodal learning enhances model performance by integrating heterogeneous data but is hindered by modality laziness and privacy vulnerabilities. Modality laziness occurs when the model overly relies on a single modality for predictions, underutilizing other modalities and leading to suboptimal performance and poor cross-modal integration. Privacy vulnerabilities arise when sensitive data from individual modalities are exposed during training or inference, risking unauthorized access or attacks, especially in shared model components. In this paper, we propose Privacy-Preserving Alternating Multimodal Representation Learning (PAMRL). Built on Multimodal Learning with Alternating Unimodal Adaptation (MLA), PAMRL alternately optimizes unimodal encoders and a shared representation head to mitigate modality laziness and improve cross-modal consistency. It introduces a hybrid uncertainty metric combining KL divergence and entropy to enhance prediction robustness while applying differential privacy to protect sensitive data in unimodal encoders, preserving the shared head for efficient cross-modal fusion. Extensive experiments on the MVSA and CREMA-D datasets, comparing PAMRL with MLA and other baselines, demonstrate its superior performance, achieving an optimal balance of predictive accuracy, attack resilience, and privacy protection, thus supporting secure, efficient multimodal applications.
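The abstract does not spell out the metric's exact form, so the following Python sketch is only a hedged guess at how a hybrid uncertainty score combining predictive entropy with a cross-modal KL divergence might look. The weighting `alpha` and the choice of compared distributions are assumptions; consult the paper for PAMRL's actual formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_uncertainty(logits_a, logits_b, alpha=0.5):
    """logits_a, logits_b: per-modality class logits, shape (batch, classes).
    Illustrative guess at a hybrid metric, not the paper's definition."""
    p_a = F.softmax(logits_a, dim=1)
    log_p_a = F.log_softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    # Predictive entropy of modality A: high when the prediction is diffuse.
    entropy = -(p_a * log_p_a).sum(dim=1)
    # KL(p_a || p_b): high when the two modalities disagree.
    kl = (p_a * (log_p_a - log_p_b)).sum(dim=1)
    return alpha * entropy + (1.0 - alpha) * kl

a = torch.randn(4, 5)   # toy logits: 4 samples, 5 classes
b = torch.randn(4, 5)
print(hybrid_uncertainty(a, b))
```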