Journal Description
AI is an international, peer-reviewed, open access journal on artificial intelligence (AI), including broad aspects of cognition and reasoning, perception and planning, machine learning, intelligent robotics, and applications of AI, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Journal Rank: JCR - Q1 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Artificial Intelligence)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.7 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 5.0 (2024); 5-Year Impact Factor: 4.6 (2024)
Latest Articles
Mapping Mental Trajectories to Physical Risk: An AI Framework for Predicting Sarcopenia from Dynamic Depression Patterns in Public Health
AI 2025, 6(12), 300; https://doi.org/10.3390/ai6120300 - 21 Nov 2025
Abstract
Background: The accelerating global population aging underscores the urgency of addressing public health challenges. Sarcopenia and depression are prevalent, interrelated conditions in older adults, yet prevailing research often treats depression as a static state, neglecting its longitudinal progression and limiting predictive capability for sarcopenia. Methods: Using data from four waves (2011–2018) of the China Health and Retirement Longitudinal Study (CHARLS), we identified distinct depressive symptom trajectories via Group-Based Trajectory Modeling. Seven machine learning algorithms were employed to develop predictive models for sarcopenia risk, incorporating these trajectory patterns and baseline characteristics. Results: Three depressive symptom trajectories were identified: ‘Persistently Low’, ‘Persistently Moderate’, and ‘Persistently High’. Tree-based ensemble methods, particularly Random Forest and XGBoost, demonstrated superior and robust performance (mean accuracy: 0.8265 and 0.8178; mean weighted F1-score: 0.8075 and 0.8084, respectively). Feature importance analysis confirmed depressive symptoms as a core, independent predictor, ranking third (5.7% importance) in the optimal Random Forest model, only after BMI and cognitive function, and surpassing traditional risk factors like age and waist circumference. Conclusions: This study validates that longitudinal depressive symptom trajectories provide superior predictive power for sarcopenia risk compared to single-time-point assessments, effectively mapping mental health trajectories to physical risk. The robust ML framework not only enables early identification of high-risk individuals but also reveals a multidimensional risk profile, highlighting the intricate mind–body connection in aging. These findings advocate for integrating dynamic mental health monitoring into routine geriatric assessments, demonstrating the potential of AI to facilitate a paradigm shift towards proactive, personalized, and scalable prevention strategies in public health and clinical practice.
(This article belongs to the Topic Artificial Intelligence in Public Health: Current Trends and Future Possibilities, 2nd Edition)
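For readers who want a concrete starting point, the sketch below shows the general shape of such a pipeline in scikit-learn: a Random Forest classifier over a trajectory-group label plus baseline covariates. The column names and synthetic data are placeholders, not the CHARLS variables or the authors' exact feature set.

# Minimal sketch: sarcopenia risk classification from a depressive-trajectory
# label plus baseline covariates. Feature names and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "depression_trajectory": rng.integers(0, 3, n),  # 0=low, 1=moderate, 2=high (hypothetical coding)
    "bmi": rng.normal(23, 3, n),
    "cognition": rng.normal(0, 1, n),
    "age": rng.integers(60, 90, n),
    "waist_cm": rng.normal(85, 10, n),
})
y = rng.integers(0, 2, n)  # placeholder sarcopenia labels

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("weighted F1:", f1_score(y_test, pred, average="weighted"))
# Feature importances indicate which covariates the forest relies on most.
print(dict(zip(df.columns, clf.feature_importances_.round(3))))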
Open Access Article
An Enhanced Machine Learning Framework for Network Anomaly Detection
by Oumaima Chentoufi, Mouad Choukhairi and Khalid Chougdali
AI 2025, 6(11), 299; https://doi.org/10.3390/ai6110299 - 20 Nov 2025
Abstract
Given the increasing volume and sophistication of cyber-attacks, there is a persistent need for improved and adaptive real-time intrusion detection systems (IDS). Machine learning algorithms offer a promising approach for enhancing their capabilities. This research investigates the impact of different dimensionality reduction approaches on performance, using both Batch PCA and Incremental PCA alongside Logistic Regression, SVM, and Decision Tree classifiers. We first applied the machine learning algorithms directly to the pre-processed data, then applied the same algorithms to the reduced data. Our results yielded an accuracy of 98.61% and an F1-score of 98.64% with a prediction time of only 0.09 s using Incremental PCA with Decision Tree. We also obtained an accuracy of 98.44% and an F1-score of 98.47% with a prediction time of 0.04 s from Batch PCA with SVM, and an accuracy of 98.47% and an F1-score of 98.51% with a prediction time of 0.05 s from Incremental PCA with Logistic Regression. The findings demonstrate that Incremental PCA enables near real-time IDS deployment in large networks.
(This article belongs to the Special Issue Intelligent Defenses: The Role of AI in Strengthening Information Security)
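A minimal sketch of the kind of pipeline described above, assuming scikit-learn's IncrementalPCA and DecisionTreeClassifier; the synthetic features, component count, and batch size are illustrative stand-ins rather than the authors' configuration.

# Minimal sketch: dimensionality reduction with IncrementalPCA followed by a
# Decision Tree classifier. The synthetic data and hyperparameters are placeholders.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 40))          # stand-in for pre-processed traffic features
y = rng.integers(0, 2, size=5000)        # stand-in for normal/attack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# IncrementalPCA fits in mini-batches, which is what makes it attractive for
# large or streaming network data.
ipca = IncrementalPCA(n_components=10)
for start in range(0, len(X_train), 500):
    ipca.partial_fit(X_train[start:start + 500])

Z_train, Z_test = ipca.transform(X_train), ipca.transform(X_test)

clf = DecisionTreeClassifier(random_state=42).fit(Z_train, y_train)
pred = clf.predict(Z_test)
print("accuracy:", accuracy_score(y_test, pred))
print("F1:", f1_score(y_test, pred))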
Open Access Article
AdaLite: A Distilled AdaBins Model for Depth Estimation on Resource-Limited Devices
by Mohammed Chaouki Ziara, Mohamed Elbahri, Nasreddine Taleb, Kidiyo Kpalma and Sid Ahmed El Mehdi Ardjoun
AI 2025, 6(11), 298; https://doi.org/10.3390/ai6110298 - 20 Nov 2025
Abstract
This paper presents AdaLite, a knowledge distillation framework for monocular depth estimation designed for efficient deployment on resource-limited devices, without relying on quantization or pruning. While large-scale depth estimation networks achieve high accuracy, their computational and memory demands hinder real-time use. To address this problem, a large model is adopted as a teacher, and a compact encoder–decoder student with few trainable parameters is trained under a dual-supervision scheme that aligns its predictions with both teacher feature maps and ground-truth depths. AdaLite is evaluated on the NYUv2, SUN-RGBD and KITTI benchmarks using standard depth metrics and deployment-oriented measures, including inference latency. The distilled model achieves a 94% reduction in size and reaches 1.02 FPS on a Raspberry Pi 2 (2 GB CPU), while preserving 96.8% of the teacher’s accuracy and providing over 11× faster inference. These results demonstrate the effectiveness of distillation-driven compression for real-time depth estimation in resource-limited environments. The code is publicly available.
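The dual-supervision idea (ground-truth depth plus teacher feature alignment) can be sketched in a few lines of PyTorch. The loss weighting, the 1×1 projection layer, and all tensor shapes below are assumptions for illustration, not the AdaLite design.

# Minimal sketch of a dual-supervision distillation objective: the student is
# trained against ground-truth depth and, via a projection, against teacher
# feature maps. Loss weights and the 1x1 projection are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_depth, student_feat, teacher_feat, gt_depth,
                      proj: nn.Conv2d, alpha: float = 0.5):
    # Supervised term: L1 between predicted and ground-truth depth.
    depth_term = F.l1_loss(student_depth, gt_depth)
    # Alignment term: match projected student features to (resized) teacher features.
    t = F.interpolate(teacher_feat, size=student_feat.shape[-2:], mode="bilinear",
                      align_corners=False)
    feat_term = F.mse_loss(proj(student_feat), t)
    return depth_term + alpha * feat_term

# Toy shapes: batch of 2, student features 64 channels, teacher features 256 channels.
proj = nn.Conv2d(64, 256, kernel_size=1)
loss = distillation_loss(torch.rand(2, 1, 60, 80), torch.rand(2, 64, 60, 80),
                         torch.rand(2, 256, 30, 40), torch.rand(2, 1, 60, 80), proj)
loss.backward()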
Open Access Article
Can Open-Source Large Language Models Detect Medical Errors in Real-World Ophthalmology Reports?
by Ante Kreso, Bosko Jaksic, Filip Rada, Zvonimir Boban, Darko Batistic, Donald Okmazic, Lara Veldic, Ivan Luksic, Ljubo Znaor, Sandro Glumac, Josko Bozic and Josip Vrdoljak
AI 2025, 6(11), 297; https://doi.org/10.3390/ai6110297 - 20 Nov 2025
Abstract
Accurate documentation is critical in ophthalmology, yet clinical notes often contain subtle errors that can affect decision-making. This study prospectively compared contemporary large language models (LLMs) for detecting clinically salient errors in emergency ophthalmology encounter notes and generating actionable corrections. A total of 129 de-identified notes, each seeded with a predefined target error, were independently audited by four LLMs (o3 (OpenAI, closed-source), DeepSeek-v3-r1 (DeepSeek, open-source), MedGemma-27B (Google, open-source), and GPT-4o (OpenAI, closed-source)) using a standardized prompt. Two masked ophthalmologists graded error localization, relevance of additional issues, and overall recommendation quality, with within-case analyses applying appropriate nonparametric tests. Performance varied significantly across models (Cochran’s Q = 71.13, p = 2.44 × 10⁻¹⁵). o3 achieved the highest error localization accuracy at 95.7% (95% CI, 89.5–98.8), followed by DeepSeek-v3-r1 (90.3%), MedGemma-27B (80.9%), and GPT-4o (53.2%). Ordinal outcomes similarly favored o3 and DeepSeek-v3-r1 (both p < 10⁻⁹ vs. GPT-4o), with mean recommendation quality scores of 3.35, 3.05, 2.54, and 2.11, respectively. These findings demonstrate that LLMs can serve as accurate “second-eyes” for ophthalmology documentation. A proprietary model led on all metrics, while a strong open-source alternative approached its performance, offering potential for privacy-preserving on-premise deployment. Clinical translation will require oversight, workflow integration, and careful attention to ethical considerations.
(This article belongs to the Section Medical & Healthcare AI)
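The Cochran's Q comparison reported above operates on paired binary outcomes (whether each model localized the seeded error in each note). A minimal sketch with synthetic data:

# Minimal sketch: Cochran's Q test on a binary success matrix
# (rows = cases, columns = models), for comparing paired detection
# outcomes across several models. Data here are synthetic placeholders.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
hits = rng.integers(0, 2, size=(129, 4))   # 129 cases x 4 models (placeholder outcomes)

k = hits.shape[1]                  # number of models
col = hits.sum(axis=0)             # successes per model
row = hits.sum(axis=1)             # successes per case
N = hits.sum()

# Standard Cochran's Q statistic; under H0 it follows chi-square with k-1 df.
Q = (k - 1) * (k * (col ** 2).sum() - N ** 2) / (k * N - (row ** 2).sum())
p = chi2.sf(Q, df=k - 1)
print(f"Cochran's Q = {Q:.2f}, p = {p:.3g}")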
Open Access Article
Modeling Dynamic Risk Perception Using Large Language Model (LLM) Agents
by He Wen, Mojtaba Parsaee and Zaman Sajid
AI 2025, 6(11), 296; https://doi.org/10.3390/ai6110296 - 19 Nov 2025
Abstract
Background: Understanding how accident risk escalates during unfolding industrial events is essential for developing intelligent safety systems. This study proposes a large language model (LLM)-based framework that simulates human-like risk reasoning over sequential accident precursors. Methods: Using 100 investigation reports from the U.S. Chemical Safety Board (CSB), two Generative Pre-trained Transformer (GPT) agents were developed: (1) an Accident Precursor Extractor to identify and classify time-ordered events, and (2) a Subjective Probability Estimator to update perceived accident likelihood as precursors unfold. Results: The subjective accident probability increases near-linearly, with an average escalation of 8.0% ± 0.9% per precursor. A consistent tipping point occurs at the fourth precursor, marking a perceptual shift to high-risk awareness. Across 90 analyzed cases, Agent 1 achieved 0.88 precision and 0.84 recall, while Agent 2 reproduced human-like probabilistic reasoning within ±0.08 of expert baselines. The magnitude of escalation differed across precursor types. Organizational factors were perceived as the highest risk (median = 0.56), followed by human error (median = 0.47). Technical and environmental factors demonstrated comparatively smaller effects. Conclusions: These findings confirm that LLM agents can emulate Bayesian-like updating in dynamic risk perception, offering a scalable and explainable foundation for adaptive, sequence-aware safety monitoring in safety-critical systems.
(This article belongs to the Special Issue Safe and Secure Artificial Intelligence (AI) in Chemical Engineering: Current and Future Developments)
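As a conceptual illustration of "Bayesian-like updating" over sequential precursors, the sketch below accumulates log-odds increments by precursor type; the increment values and precursor labels are invented for illustration and are not outputs of the paper's GPT agents.

# Conceptual sketch: sequential updating of a subjective accident probability
# as precursors accumulate, using additive log-odds increments. The increments
# per precursor type are illustrative, not values estimated in the study.
import math

LOG_ODDS_INCREMENT = {          # hypothetical weights by precursor type
    "organizational": 0.9,
    "human_error": 0.7,
    "technical": 0.4,
    "environmental": 0.3,
}

def update_probability(prior, precursors):
    """Return the perceived accident probability after each precursor."""
    logit = math.log(prior / (1 - prior))
    trajectory = []
    for kind in precursors:
        logit += LOG_ODDS_INCREMENT[kind]
        trajectory.append(1 / (1 + math.exp(-logit)))
    return trajectory

seq = ["technical", "human_error", "organizational", "organizational", "human_error"]
for i, p in enumerate(update_probability(0.05, seq), start=1):
    print(f"after precursor {i} ({seq[i - 1]}): p = {p:.2f}")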
Open Access Review
Artificial Intelligence in Finance: From Market Prediction to Macroeconomic and Firm-Level Forecasting
by Flavius Gheorghe Popa and Vlad Muresan
AI 2025, 6(11), 295; https://doi.org/10.3390/ai6110295 - 17 Nov 2025
Abstract
This review surveys how contemporary machine learning is reshaping financial and economic forecasting across markets, macroeconomics, and corporate planning. We synthesize evidence on model families, such as regularized linear methods, tree ensembles, and deep neural architectures, and explain their optimization (with gradient-based training) and design choices (activation and loss functions). Across tasks, Random Forest and gradient-boosted trees emerge as robust baselines, offering strong out-of-sample accuracy and interpretable variable importance. For sequential signals, recurrent models, especially LSTM ensembles, consistently improve directional classification and volatility-aware predictions, while transformer-style attention is a promising direction for longer contexts. Practical performance hinges on aligning losses with business objectives (for example, cross-entropy vs. RMSE/MAE), handling class imbalance, and avoiding data leakage through rigorous cross-validation. In high-dimensional settings, regularization (such as ridge/lasso/elastic-net) stabilizes estimation and enhances generalization. We compile task-specific feature sets for macro indicators, market microstructure, and firm-level data, and distill implementation guidance covering hyperparameter search, evaluation metrics, and reproducibility. We conclude with open challenges (accuracy–interpretability trade-off, limited causal insight) and outline a research agenda combining econometrics with representation learning and data-centric evaluation.
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)
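To make the review's guidance on regularization and leakage-free evaluation concrete, here is a minimal sketch using elastic-net regression with time-ordered cross-validation; the synthetic features and hyperparameters are placeholders.

# Minimal sketch: elastic-net forecasting with time-ordered cross-validation,
# illustrating the leakage-aware evaluation and regularization practices the
# review discusses. Features and target are synthetic placeholders.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 25))                 # stand-in for macro/market features
y = X[:, 0] * 0.5 - X[:, 3] * 0.2 + rng.normal(scale=0.5, size=600)

tscv = TimeSeriesSplit(n_splits=5)             # folds respect temporal order, avoiding lookahead
maes = []
for train_idx, test_idx in tscv.split(X):
    model = ElasticNet(alpha=0.1, l1_ratio=0.5)
    model.fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print("out-of-sample MAE per fold:", np.round(maes, 3))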
Open Access Article
AI-Delphi: Emulating Personas Toward Machine–Machine Collaboration
by Lucas Nóbrega, Luiz Felipe Martinez, Luísa Marschhausen, Yuri Lima, Marcos Antonio de Almeida, Alan Lyra, Carlos Eduardo Barbosa and Jano Moreira de Souza
AI 2025, 6(11), 294; https://doi.org/10.3390/ai6110294 - 14 Nov 2025
Abstract
Recent technological advancements have made Large Language Models (LLMs) easily accessible through apps such as ChatGPT, Claude.ai, Google Gemini, and HuggingChat, allowing text generation on diverse topics with a simple prompt. Considering this scenario, we propose three machine–machine collaboration models to streamline and accelerate Delphi execution time by leveraging the extensive knowledge of LLMs. We then applied one of these models—the Iconic Minds Delphi—to run Delphi questionnaires focused on the future of work and higher education in Brazil. Therefore, we prompted ChatGPT to assume the role of well-known public figures from various knowledge areas. To validate the effectiveness of this approach, we asked one of the emulated experts to evaluate his responses. Although this individual validation was not sufficient to generalize the approach’s effectiveness, it revealed an 85% agreement rate, suggesting a promising alignment between the emulated persona and the real expert’s opinions. Our work contributes to leveraging Artificial Intelligence (AI) in Futures Research, emphasizing LLMs’ potential as collaborators in shaping future visions while discussing their limitations. In conclusion, our research demonstrates the synergy between Delphi and LLMs, providing a glimpse into a new method for exploring central themes, such as the future of work and higher education.
(This article belongs to the Topic Generative AI and Interdisciplinary Applications)
Open Access Article
FedEHD: Entropic High-Order Descent for Robust Federated Multi-Source Environmental Monitoring
by Koffka Khan, Winston Elibox, Treina Dinoo Ramlochan, Wayne Rajkumar and Shanta Ramnath
AI 2025, 6(11), 293; https://doi.org/10.3390/ai6110293 - 14 Nov 2025
Abstract
We propose Federated Entropic High-Order Descent (FedEHD), a drop-in client optimizer that augments local SGD with (i) an entropy (sign) term and (ii) quadratic and cubic gradient components for drift control and implicit clipping. Across non-IID CIFAR-10 and CIFAR-100 benchmarks (100 clients, 10% sampled per round), FedEHD achieves faster and higher convergence than strong baselines including FedAvg, FedProx, SCAFFOLD, FedDyn, MOON, and FedAdam. On CIFAR-10, it reaches 70% accuracy in approximately 80 rounds (versus 100 for MOON and 130 for SCAFFOLD) and attains a final accuracy of 72.5%. On CIFAR-100, FedEHD surpasses 60% accuracy by about 150 rounds (compared with 250 for MOON and 300 for SCAFFOLD) and achieves a final accuracy of 68.0%. In an environmental monitoring case study involving four distributed air-quality stations, FedEHD yields the highest macro AUC/F1 and improved calibration (ECE 0.183 versus 0.186–0.210 for competing federated methods) without additional communication and with only local overhead. The method further provides scale-invariant coefficients with optional automatic adaptation, theoretical guarantees for surrogate descent and drift reduction, and convergence curves that illustrate smooth and stable learning dynamics.
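The abstract describes the optimizer only at a high level, so the following is a speculative sketch of a local update that adds a sign term and quadratic/cubic gradient components to SGD; the coefficients and exact functional form are guesses, not the FedEHD definition.

# Speculative sketch of a local update that augments SGD with a sign term and
# sign-preserving quadratic/cubic gradient components, as the abstract describes
# at a high level. Coefficients (a1, a2, a3) and the functional form are
# illustrative assumptions, not the FedEHD formulation.
import torch

@torch.no_grad()
def ehd_step(params, lr=0.01, a1=1e-3, a2=1e-3, a3=1e-4):
    for p in params:
        if p.grad is None:
            continue
        g = p.grad
        update = g + a1 * torch.sign(g) + a2 * g * g.abs() + a3 * g ** 3
        p.add_(update, alpha=-lr)

# Usage on a toy model: compute gradients with backward(), then take one step.
model = torch.nn.Linear(10, 2)
loss = model(torch.randn(4, 10)).pow(2).mean()
loss.backward()
ehd_step(model.parameters())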
Open Access Article
Attitudes Toward Artificial Intelligence in Organizational Contexts
by Silvia Marocco, Diego Bellini, Barbara Barbieri, Fabio Presaghi, Elena Grossi and Alessandra Talamo
AI 2025, 6(11), 292; https://doi.org/10.3390/ai6110292 - 14 Nov 2025
Abstract
The adoption of artificial intelligence (AI) is reshaping organizational practices, yet workers’ attitudes remain crucial for its successful integration. This study examines how perceived organizational ethical culture, organizational innovativeness, and job performance influence workers’ attitudes towards AI. A survey was administered to 356 workers across diverse sectors, with analyses focusing on 154 participants who reported prior AI use. Measures included the Attitudes Towards Artificial Intelligence at Work (AAAW), Corporate Ethical Virtues (CEV), Inventory of Organizational Innovativeness (IOI), and an adapted version of the In-Role Behaviour Scale. Hierarchical regression analyses revealed that ethical culture dimensions, particularly Clarity and Feasibility, significantly predicted attitudes towards AI, such as anxiety and job insecurity, with Feasibility also associated with the attribution of human-like traits to AI. Supportability, reflecting a cooperative work environment, was linked to lower perceptions of AI human-likeness and adaptability. Among innovation dimensions, only Raising Projects, the active encouragement of employees’ ideas, was positively related to perceptions of AI adaptability, highlighting the importance of participatory innovation practices over abstract signals. Most importantly, perceived job performance improvements through AI predicted more positive attitudes, including greater perceived quality, utility, and reduced anxiety. Overall, this study contributes to the growing literature on AI in organizations by offering an exploratory yet integrative framework that captures the multifaceted nature of AI acceptance in the workplace.
Open Access Article
Super-Resolution Reconstruction Approach for MRI Images Based on Transformer Network
by Xin Liu, Chuangxin Huang, Jianli Meng, Qi Chen, Wuzheng Ji and Qiuliang Wang
AI 2025, 6(11), 291; https://doi.org/10.3390/ai6110291 - 14 Nov 2025
Abstract
Magnetic Resonance Imaging (MRI) serves as a pivotal medical diagnostic technique widely deployed in clinical practice, yet high-resolution reconstruction frequently introduces motion artifacts and degrades signal-to-noise ratios. To enhance imaging efficiency and improve reconstruction quality, this study proposes a Transformer network-based super-resolution framework for MRI images. The methodology integrates Nonuniform Fast Fourier Transform (NUFFT) with a hybrid-attention Transformer network to achieve high-fidelity reconstruction. The embedded NUFFT module adaptively applies density compensation to k-space data based on sampling trajectories, while the Mixed Attention Block (MAB) activates broader pixel engagement to amplify feature extraction capabilities. The Interactive Attention Block (IAB) facilitates cross-window information fusion via overlapping windows, effectively suppressing artifacts. Evaluated on the fastMRI dataset under 4× radial undersampling, the network demonstrates 3.52 dB higher PSNR and 0.21 SSIM improvement over baselines, outperforming state-of-the-art methods across quantitative metrics. Visual assessments further confirm superior detail preservation and artifact suppression. This work establishes an effective pipeline for high-quality radial MRI reconstruction, providing a novel technical pathway for low-field MRI systems with significant research and application value.
Open Access Article
KGGCN: A Unified Knowledge Graph-Enhanced Graph Convolutional Network Framework for Chinese Named Entity Recognition
by Xin Chen, Liang He, Weiwei Hu and Sheng Yi
AI 2025, 6(11), 290; https://doi.org/10.3390/ai6110290 - 13 Nov 2025
Abstract
Recent advances in Chinese Named Entity Recognition (CNER) have integrated lexical features and factual knowledge into pretrained language models. However, existing lexicon-based methods often inject knowledge as restricted, isolated token-level information, lacking rich semantic and structural context. Knowledge graphs (KGs), comprising relational triples, offer explicit relational semantics and reasoning capabilities, while Graph Convolutional Networks (GCNs) effectively capture complex sentence structures. We propose KGGCN, a unified KG-enhanced GCN framework for CNER. KGGCN introduces external factual knowledge without disrupting the original word order, employing a novel end-append serialization scheme and a visibility matrix to control interaction scope. The model further utilizes a two-phase GCN stack, combining a standard GCN for robust aggregation with a multi-head attention GCN for adaptive structural refinement, to capture multi-level structural information. Experiments on four Chinese benchmark datasets demonstrate KGGCN’s superior performance. It achieves the highest F1-scores on MSRA (95.96%) and Weibo (71.98%), surpassing previous bests by 0.26 and 1.18 percentage points, respectively. Additionally, KGGCN obtains the highest Recall on OntoNotes (84.28%) and MSRA (96.14%), and the highest Precision on MSRA (95.82%), Resume (96.40%), and Weibo (72.14%). These results highlight KGGCN’s effectiveness in leveraging structured knowledge and multi-phase graph modeling to enhance entity recognition accuracy and coverage across diverse Chinese texts.
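The idea of a visibility matrix that limits how end-appended knowledge tokens interact with the original sequence can be illustrated with a generic masked-attention sketch; the mask construction and dimensions below are illustrative, not the KGGCN architecture.

# Generic sketch: restricting attention with a visibility matrix so that
# end-appended knowledge tokens interact only with the positions they annotate.
# The mask construction here is illustrative, not KGGCN's.
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, visibility):
    # visibility: (seq, seq) boolean matrix; True means "may attend".
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(~visibility, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

seq_len, d = 8, 16                 # e.g., 6 sentence tokens + 2 appended knowledge tokens
q = k = v = torch.randn(seq_len, d)

visibility = torch.ones(seq_len, seq_len, dtype=torch.bool)
visibility[:6, 6:] = False         # ordinary tokens do not see the appended knowledge tokens
visibility[6, :6] = False          # knowledge token 0 sees only its anchor position...
visibility[6, 2] = True            # ...here, position 2 (hypothetical anchor)
out = masked_attention(q, k, v, visibility)
print(out.shape)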
Open Access Article
DVAD: A Dynamic Visual Adaptation Framework for Multi-Class Anomaly Detection
by Han Gao, Huiyuan Luo, Fei Shen and Zhengtao Zhang
AI 2025, 6(11), 289; https://doi.org/10.3390/ai6110289 - 8 Nov 2025
Abstract
Despite the superior performance of existing anomaly detection methods, they are often limited to single-class detection tasks, requiring separate models for each class. This constraint hinders their detection performance and deployment efficiency when applied to real-world multi-class data. In this paper, we propose a dynamic visual adaptation framework for multi-class anomaly detection, enabling the dynamic and adaptive capture of features based on multi-class data, thereby enhancing detection performance. Specifically, our method introduces a network plug-in, the Hyper AD Plug-in, which dynamically adjusts model parameters according to the input data to extract dynamic features. By leveraging the collaboration between the Mamba block, the CNN block, and the proposed Hyper AD Plug-in, we extract global, local, and dynamic features simultaneously. Furthermore, we incorporate the Mixture-of-Experts (MoE) module, which achieves a dynamic balance across different features through its dynamic routing mechanism and multi-expert collaboration. As a result, the proposed method achieves leading accuracy on the MVTec AD and VisA datasets, with image-level mAU-ROC scores of 98.8% and 95.1%, respectively.
(This article belongs to the Special Issue Artificial Intelligence in Industrial Systems: From Data Acquisition to Intelligent Decision-Making)
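The Mixture-of-Experts routing mentioned above can be illustrated generically: a gating network produces per-sample weights that blend expert outputs. The sketch below is a textbook MoE layer, not the Hyper AD Plug-in or the authors' routing design.

# Generic sketch of Mixture-of-Experts routing: a gating network produces
# per-sample weights that blend expert outputs, illustrating dynamic feature
# balancing in general rather than the specific DVAD design.
import torch
import torch.nn as nn

class SimpleMoE(nn.Module):
    def __init__(self, dim, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x):                                        # x: (batch, dim)
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # weighted blend

moe = SimpleMoE(dim=64)
print(moe(torch.randn(8, 64)).shape)   # torch.Size([8, 64])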
Open Access Article
Understanding What the Brain Sees: Semantic Recognition from EEG Responses to Visual Stimuli Using Transformer
by Ahmed Fares
AI 2025, 6(11), 288; https://doi.org/10.3390/ai6110288 - 7 Nov 2025
Abstract
Understanding how the human brain processes and interprets multimedia content represents a frontier challenge in neuroscience and artificial intelligence. This study introduces a novel approach to decode semantic information from electroencephalogram (EEG) signals recorded during visual stimulus perception. We present DCT-ViT, a spatial–temporal transformer architecture that pioneers automated semantic recognition from brain activity patterns, advancing beyond conventional brain state classification to interpret higher level cognitive understanding. Our methodology addresses three fundamental innovations: First, we develop a topology-preserving 2D electrode mapping that, combined with temporal indexing, generates 3D spatial–temporal representations capturing both anatomical relationships and dynamic neural correlations. Second, we integrate discrete cosine transform (DCT) embeddings with standard patch and positional embeddings in the transformer architecture, enabling frequency-domain analysis that quantifies activation variability across spectral bands and enhances attention mechanisms. Third, we introduce the Semantics-EEG dataset comprising ten semantic categories extracted from visual stimuli, providing a benchmark for brain-perceived semantic recognition research. The proposed DCT-ViT model achieves 72.28% recognition accuracy on Semantics-EEG, substantially outperforming LSTM-based and attention-augmented recurrent baselines. Ablation studies demonstrate that DCT embeddings contribute meaningfully to model performance, validating their effectiveness in capturing frequency-specific neural signatures. Interpretability analyses reveal neurobiologically plausible attention patterns, with visual semantics activating occipital–parietal regions and abstract concepts engaging frontal–temporal networks, consistent with established cognitive neuroscience models. To address systematic misclassification between perceptually similar categories, we develop a hierarchical classification framework with boundary refinement mechanisms. This approach substantially reduces confusion between overlapping semantic categories, elevating overall accuracy to 76.15%. Robustness evaluations demonstrate superior noise resilience, effective cross-subject generalization, and few-shot transfer capabilities to novel categories. This work establishes the technical foundation for brain–computer interfaces capable of decoding semantic understanding, with implications for assistive technologies, cognitive assessment, and human–AI interaction. Both the Semantics-EEG dataset and DCT-ViT implementation are publicly released to facilitate reproducibility and advance research in neural semantic decoding.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
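The DCT embedding idea can be illustrated by taking a discrete cosine transform along the temporal axis of a spatial-temporal EEG tensor; the grid size, window length, and number of retained coefficients below are assumptions, not the DCT-ViT configuration.

# Minimal sketch: frequency-domain features via a DCT along the temporal axis
# of a spatial-temporal EEG representation. Tensor shapes and the number of
# retained coefficients are illustrative placeholders.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
eeg = rng.normal(size=(9, 9, 256))   # (electrode-grid H, W, time samples), hypothetical 2D mapping

coeffs = dct(eeg, type=2, norm="ortho", axis=-1)   # DCT-II along the time axis
dct_embedding = coeffs[..., :32]                   # keep the 32 lowest-frequency coefficients

print(dct_embedding.shape)   # (9, 9, 32): frequency features per grid location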
Open Access Article
Development and Validation of a Questionnaire to Evaluate AI-Generated Summaries for Radiologists: ELEGANCE (Expert-Led Evaluation of Generative AI Competence and ExcelleNCE)
by Yuriy A. Vasilev, Anton V. Vladzymyrskyy, Olga V. Omelyanskaya, Yulya A. Alymova, Dina A. Akhmedzyanova, Yuliya F. Shumskaya, Maria R. Kodenko, Ivan A. Blokhin and Roman V. Reshetnikov
AI 2025, 6(11), 287; https://doi.org/10.3390/ai6110287 - 5 Nov 2025
Abstract
Background/Objectives: Large language models (LLMs) are increasingly considered for use in radiology, including the summarization of patient medical records to support radiologists in processing large volumes of data under time constraints. This task requires not only accuracy and completeness but also clinical applicability. Automatic metrics and general-purpose questionnaires fail to capture these dimensions, and no standardized tool currently exists for the expert evaluation of LLM-generated summaries in radiology. Here, we aimed to develop and validate such a tool. Methods: Items for the questionnaire were formulated and refined through focus group testing with radiologists. Validation was performed on 132 LLM-generated summaries of 44 patient records, each independently assessed by radiologists. Criterion validity was evaluated through known-group differentiation and construct validity through confirmatory factor analysis. Results: The resulting seven-item instrument, ELEGANCE (Expert-Led Evaluation of Generative AI Competence and Excellence), demonstrated excellent internal consistency (Cronbach’s α = 0.95). It encompasses seven dimensions: relevance, completeness, applicability, falsification, satisfaction, structure, and correctness of language and terminology. Confirmatory factor analysis supported a two-factor structure (content and form), with strong fit indices (RMSEA = 0.079, CFI = 0.989, TLI = 0.982, SRMR = 0.029). Criterion validity was confirmed by significant between-group differences (p < 0.001). Conclusions: ELEGANCE is the first validated tool for expert evaluation of LLM-generated medical record summaries for radiologists, providing a standardized framework to ensure quality and clinical utility.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Engineering: Challenges and Developments)
Open Access Article
“In Metaverse Cryptocurrencies We (Dis)Trust?”: Mediators and Moderators of Blockchain-Enabled Non-Fungible Token (NFT) Adoption in AI-Powered Metaverses
by Seunga Venus Jin
AI 2025, 6(11), 286; https://doi.org/10.3390/ai6110286 - 4 Nov 2025
Abstract
Metaverses have been hailed as the next arena for a wide spectrum of technovation and business opportunities. This research (∑ N = 714) focuses on the three underexplored areas of virtual commerce in AI-enabled metaverses: blockchain-powered cryptocurrencies, non-fungible tokens (NFTs), and AI-powered virtual influencers. Study 1 reports the mediating effects of (dis)trust in AI-enabled blockchain technologies and the moderating effects of consumers’ technopian perspectives in explaining the relationship between blockchain transparency perception and intention to use cryptocurrencies in AI-powered metaverses. Study 1 also reports the mediating effects of Neo-Luddism perspectives regarding metaverses and the moderating effects of consumers’ social phobia in explaining the relationship between AI-algorithm awareness and behavioral intention to engage with AI-powered virtual influencers in metaverses. Study 2 reports the serial mediating effects of general perception of NFT ownership and psychological ownership of NFTs as well as the moderating effects of the investment value of NFTs in explaining the relationship between acknowledgment of the nature of NFTs and intention to use NFTs in AI-enabled metaverses. Theoretical contributions to the literature on digital materiality and psychological ownership of blockchain/cryptocurrency-powered NFTs as emerging forms of digital consumption objects are discussed. Practical implications for NFT-based branding/entrepreneurship and creative industries in blockchain-enabled metaverses are provided.
(This article belongs to the Special Issue When Trust Meets Intelligence: The Intersection Between Blockchain, Artificial Intelligence and Internet of Things)
Open Access Review
From Black Box to Glass Box: A Practical Review of Explainable Artificial Intelligence (XAI)
by Xiaoming Liu, Danni Huang, Jingyu Yao, Jing Dong, Litong Song, Hui Wang, Chao Yao and Weishen Chu
AI 2025, 6(11), 285; https://doi.org/10.3390/ai6110285 - 3 Nov 2025
Abstract
Explainable Artificial Intelligence (XAI) has become essential as machine learning systems are deployed in high-stakes domains such as security, finance, and healthcare. Traditional models often act as “black boxes”, limiting trust and accountability. However, most existing reviews treat explainability either as a technical problem or a philosophical issue, without connecting interpretability techniques to their real-world implications for security, privacy, and governance. This review fills that gap by integrating theoretical foundations with practical applications and societal perspectives. We define transparency and interpretability as core concepts and introduce new economics-inspired notions of marginal transparency and marginal interpretability to highlight diminishing returns in disclosure and explanation. Methodologically, we examine model-agnostic approaches such as LIME and SHAP, alongside model-specific methods including decision trees and interpretable neural networks. We also address ante hoc vs. post hoc strategies, local vs. global explanations, and emerging privacy-preserving techniques. To contextualize XAI’s growth, we integrate capital investment and publication trends, showing that research momentum has remained resilient despite market fluctuations. Finally, we propose a roadmap for 2025–2030, emphasizing evaluation standards, adaptive explanations, integration with Zero Trust architectures, and the development of self-explaining agents supported by global standards. By combining technical insights with societal implications, this article provides both a scholarly contribution and a practical reference for advancing trustworthy AI.
(This article belongs to the Section AI Systems: Theory and Applications)
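As a concrete example of the post hoc tooling the review covers, the sketch below applies SHAP's TreeExplainer to a small tree ensemble on synthetic data; the model, data, and sample count are placeholders chosen only to show the workflow.

# Minimal sketch: post hoc explanation of a tree ensemble with SHAP. Data are
# synthetic; the point is the workflow (fit a model, then attribute each
# prediction to input features).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])   # local attributions for 10 samples
print(np.shape(shap_values))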
Open Access Article
YOLOv11-4ConvNeXtV2: Enhancing Persimmon Ripeness Detection Under Visual Challenges
by Bohan Zhang, Zhaoyuan Zhang and Xiaodong Zhang
AI 2025, 6(11), 284; https://doi.org/10.3390/ai6110284 - 1 Nov 2025
Abstract
Reliable and efficient detection of persimmons provides the foundation for precise maturity evaluation. Persimmon ripeness detection remains challenging due to small target sizes, frequent occlusion by foliage, and motion- or focus-induced blur that degrades edge information. This study proposes YOLOv11-4ConvNeXtV2, an enhanced detection framework that integrates a ConvNeXtV2 backbone with Fully Convolutional Masked Auto-Encoder (FCMAE) pretraining, Global Response Normalization (GRN), and Single-Head Self-Attention (SHSA) mechanisms. We present a comprehensive persimmon dataset featuring sub-block segmentation that preserves local structural integrity while expanding dataset diversity. The model was trained on 4921 annotated images (original 703 + 6 × 703 augmented) collected under diverse orchard conditions and optimized for 300 epochs using the Adam optimizer with early stopping. Comprehensive experiments demonstrate that YOLOv11-4ConvNeXtV2 achieves 95.9% precision and 83.7% recall, with mAP@0.5 of 88.4% and mAP@0.5:0.95 of 74.8%, outperforming state-of-the-art YOLO variants (YOLOv5n, YOLOv8n, YOLOv9t, YOLOv10n, YOLOv11n, YOLOv12n) by 3.8–6.3 percentage points in mAP@0.5:0.95. The model demonstrates superior robustness to blur, occlusion, and varying illumination conditions, making it suitable for deployment in challenging maturity detection environments.
Open Access Review
Ethical Bias in AI-Driven Injury Prediction in Sport: A Narrative Review of Athlete Health Data, Autonomy and Governance
by Zbigniew Waśkiewicz, Kajetan J. Słomka, Tomasz Grzywacz and Grzegorz Juras
AI 2025, 6(11), 283; https://doi.org/10.3390/ai6110283 - 1 Nov 2025
Abstract
The increasing use of artificial intelligence (AI) in athlete health monitoring and injury prediction presents both technological opportunities and complex ethical challenges. This narrative review critically examines 24 empirical and conceptual studies focused on AI-driven injury forecasting systems across diverse sports disciplines, including professional, collegiate, youth, and Paralympic contexts. Applying an IMRAD framework, the analysis identifies five dominant ethical concerns: privacy and data protection, algorithmic fairness, informed consent, athlete autonomy, and long-term data governance. While studies commonly report the effectiveness of AI models—such as those employing decision trees, neural networks, and explainability tools like SHAP and HiPrCAM—few offer robust ethical safeguards or athlete-centered governance structures. Power asymmetries persist between athletes and institutions, with limited recognition of data ownership, transparency, and the right to contest predictive outputs. The findings highlight that ethical risks vary by sport type and competitive level, underscoring the need for sport-specific frameworks. Recommendations include establishing enforceable data rights, participatory oversight mechanisms, and regulatory protections to ensure that AI systems align with principles of fairness, transparency, and athlete agency. Without such frameworks, the integration of AI in sports medicine risks reinforcing structural inequalities and undermining the autonomy of those it intends to support.
Open Access Article
Scaling Swarm Coordination with GNNs—How Far Can We Go?
by Gianluca Aguzzi, Davide Domini, Filippo Venturini and Mirko Viroli
AI 2025, 6(11), 282; https://doi.org/10.3390/ai6110282 - 1 Nov 2025
Abstract
The scalability of coordination policies is a critical challenge in swarm robotics, where agent numbers may vary substantially between deployment scenarios. Reinforcement learning (RL) offers a promising avenue for learning decentralized policies from local interactions, yet a fundamental question remains: can policies trained on one swarm size transfer to different population scales without retraining? This zero-shot transfer problem is particularly challenging because traditional RL approaches learn fixed-dimensional representations tied to specific agent counts, making them brittle to population changes at deployment time. While existing work addresses scalability through population-aware training (e.g., mean-field methods) or multi-size curricula (e.g., population transfer learning), these approaches either impose restrictive assumptions or require explicit exposure to varied team sizes during training. Graph Neural Networks (GNNs) offer a fundamentally different path. Their permutation invariance and ability to process variable-sized graphs suggest potential for zero-shot generalization across swarm sizes, where policies trained on a single population scale could deploy directly to larger or smaller teams. However, this capability remains largely unexplored in the context of swarm coordination. For this reason, we empirically investigate this question by combining GNNs with deep Q-learning in cooperative swarms. We focus on well-established 2D navigation tasks that are commonly used in the swarm robotics literature to study coordination and scalability, providing a controlled yet meaningful setting for our analysis. To address this, we introduce Deep Graph Q-Learning (DGQL), which embeds agent-neighbor graphs into Q-learning and trains on fixed-size swarms. Across two benchmarks (goal reaching and obstacle avoidance), we deploy teams up to three times larger. DGQL preserves functional coordination without retraining, but efficiency degrades with size. The ultimate goal distance grows monotonically (15–29 agents) and worsens beyond roughly twice the training size ( agents), with task-dependent trade-offs. Our results quantify scalability limits of GNN-enhanced DQL and suggest architectural and training strategies to better sustain performance across scales.
(This article belongs to the Section AI in Autonomous Systems)
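A minimal graph Q-network in the spirit of DGQL can be written with the torch_geometric library: local observations become node features, neighbor links become edges, and the network outputs per-agent Q-values. Layer sizes and the two-layer design are illustrative, not the paper's architecture.

# Minimal sketch: a graph-based Q-network mapping an agent-neighbor graph to
# per-agent Q-values over discrete actions. Sizes are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GraphQNet(torch.nn.Module):
    def __init__(self, obs_dim, num_actions, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(obs_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.q_head = torch.nn.Linear(hidden, num_actions)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.q_head(h)          # (num_agents, num_actions)

# Toy swarm: 5 agents, 8-dimensional local observations, ring connectivity.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
q_values = GraphQNet(obs_dim=8, num_actions=5)(x, edge_index)
print(q_values.shape)   # torch.Size([5, 5])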
Open Access Systematic Review
Artificial Neural Network, Attention Mechanism and Fuzzy Logic-Based Approaches for Medical Diagnostic Support: A Systematic Review
by Noel Zacarias-Morales, Pablo Pancardo, José Adán Hernández-Nolasco and Matias Garcia-Constantino
AI 2025, 6(11), 281; https://doi.org/10.3390/ai6110281 - 1 Nov 2025
Abstract
Accurate medical diagnosis is essential for informed decision making and the delivery of effective treatment. Traditionally, this process relies on clinical judgment, integrating data and medical expertise to inform decision making. In recent years, artificial neural networks (ANNs) have proven to be valuable tools for diagnostic support. Attention mechanisms have enhanced ANN performance, while fuzzy logic has contributed to managing the uncertainty inherent in clinical data. This systematic review analyzes how the integration of these three approaches enhances computational models for medical diagnostic support. Following PRISMA 2020 guidelines, a comprehensive search was conducted across five scientific databases (IEEE Xplore, ScienceDirect, Web of Science, SpringerLink, and ACM Digital Library) for studies published between 2020 and 2025 that implemented the combined use of ANNs, attention mechanisms, and fuzzy logic for medical diagnostic support. Inclusion and exclusion criteria were applied, along with a quality assessment. Data extraction and synthesis were conducted independently by two reviewers and verified by a third. Out of 269 initially identified articles, 32 met the inclusion criteria. The findings consistently indicate that the integration of ANNs, attention mechanisms, and fuzzy logic significantly improves the performance of diagnostic models. ANNs effectively capture complex data patterns, attention mechanisms prioritize the most relevant features, and fuzzy logic provides robust handling of ambiguity and imprecise information through continuous degrees of membership. This integration leads to more accurate and interpretable diagnostic models. Future research should focus on leveraging multimodal data, enhancing model generalization, reducing computational complexity, and exploring novel fuzzy logic techniques and training paradigms to improve adaptability in real-world clinical settings.
(This article belongs to the Section Medical & Healthcare AI)
Topics
Topic in AI, Drones, Electronics, Mathematics, Sensors
AI and Data-Driven Advancements in Industry 4.0, 2nd Edition
Topic Editors: Teng Huang, Yan Pang, Qiong Wang, Jianjun Li, Jin Liu, Jia Wang
Deadline: 15 December 2025
Topic in Electronics, Eng, Future Internet, Information, Sensors, Sustainability, AI
Artificial Intelligence and Machine Learning in Cyber–Physical Systems
Topic Editors: Wei Wang, Junxin Chen, Jinyu Tian
Deadline: 31 December 2025
Topic in AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in AI, Buildings, Electronics, Symmetry, Smart Cities, Urban Science, Automation
Application of Smart Technologies in Buildings
Topic Editors: Yin Zhang, Limei Peng, Ming Tao
Deadline: 28 February 2026
Special Issues
Special Issue in AI
AI-Driven Innovations: Emerging Trends, Security, and Industrial Solutions
Guest Editors: Mohammed Abdulhakim Al-Absi, Ahmed A. Abdulhakim Al-Absi, Nadhem Ebrahim
Deadline: 30 November 2025
Special Issue in AI
New Advances in the Application of Artificial Intelligence in Antenna Systems
Guest Editor: Marco A. Panduro
Deadline: 30 November 2025
Special Issue in AI
Bio-Inspired Approaches in Artificial Intelligence: Innovations in Machine Learning and Multi-Agent Systems
Guest Editors: Elhadj Benkhelifa, Brij B. Gupta, Tamara Zhukabayeva, Chirine Ghedira Guegan, Nadia Kabachi
Deadline: 30 November 2025
Special Issue in AI
Artificial Intelligence (AI) and Internet of Things (IoT) Applications for Resilient and Sustainable Energy Systems
Guest Editors: Tarek Kandil, Hassan M. Hussein Farh, Saad Mekhilef, Hayrettin Karayaka, Adam Harris
Deadline: 30 November 2025