Journal Description
AI is an international, peer-reviewed, open access journal on artificial intelligence (AI), covering broad aspects of cognition and reasoning, perception and planning, machine learning, intelligent robotics, and applications of AI, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Journal Rank: JCR - Q1 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Artificial Intelligence)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds, and Computers.
Impact Factor: 5.0 (2024); 5-Year Impact Factor: 4.6 (2024)
Latest Articles
Multi-Agent Transfer Learning Based on Contrastive Role Relationship Representation
AI 2026, 7(1), 13; https://doi.org/10.3390/ai7010013 - 6 Jan 2026
Abstract
This paper presents Multi-Agent Transfer Learning Based on Contrastive Role Relationship Representation (MCRR), focusing on the unique function of role mechanisms in cross-task knowledge transfer. The framework employs contrastive-learning-driven role representation modeling to capture the differences and commonalities of agent behavior patterns across multiple tasks. We generate generalizable role representations and embed them into transfer policy networks, enabling agents to efficiently share role-assignment knowledge during source-task training and achieve policy transfer through precise role adaptation in unseen tasks. Unlike traditional methods that rely on the generalization ability of neural networks, MCRR breaks through the coordination bottleneck of dynamic team collaboration in multi-agent systems by explicitly modeling role dynamics among tasks and constructing a cross-task role contrast model. In the SMAC benchmark task series, including mixed formations and quantity variations, MCRR significantly improves win rates in both source and unseen tasks. By outperforming mainstream baselines such as MATTAR and UPDeT, MCRR validates the effectiveness of roles as a bridge for knowledge transfer.
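The contrastive objective described in this abstract can be illustrated with a standard InfoNCE-style loss over role embeddings. The sketch below is a generic instance of that technique, not the authors' MCRR implementation; the embeddings and the temperature value are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_role_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull embeddings of the same role together and
    push embeddings of different roles apart (illustrative only)."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

A matching positive yields a near-zero loss, while a mismatched positive paired with a similar negative is penalized heavily, which is what pushes same-role behavior patterns into a shared region of the embedding space.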
Full article
(This article belongs to the Section AI in Autonomous Systems)
Open Access Editorial
Artificial Intelligence and Machine Learning for Smart and Sustainable Agriculture
by
Arslan Munir
AI 2026, 7(1), 12; https://doi.org/10.3390/ai7010012 - 6 Jan 2026
Abstract
Agriculture is entering a profound period of transformation, driven by the accelerating integration of artificial intelligence (AI), machine learning, computer vision, autonomous sensing, and data-driven decision support [...]
Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
Open Access Review
AI-Driven Advances in Precision Oncology: Toward Optimizing Cancer Diagnostics and Personalized Treatment
by
Luka Bulić, Petar Brlek, Nenad Hrvatin, Eva Brenner, Vedrana Škaro, Petar Projić, Sunčica Andreja Rogan, Marko Bebek, Parth Shah and Dragan Primorac
AI 2026, 7(1), 11; https://doi.org/10.3390/ai7010011 - 4 Jan 2026
Abstract
Cancer remains one of the main global public health challenges, with rising incidence and mortality rates demanding more effective diagnostic and therapeutic approaches. Recent advances in artificial intelligence (AI) have positioned it as a transformative force in oncology, offering the ability to process vast and complex datasets that extend beyond human analytic capabilities. By integrating radiological, histopathological, genomic, and clinical data, AI enables more precise tumor characterization, including refined molecular classification, thereby improving risk stratification and facilitating individualized therapeutic decisions. In diagnostics, AI-driven image analysis platforms have demonstrated excellent performance, particularly in radiology and pathology. Prognostic algorithms are increasingly applied to predict survival, recurrence, and treatment response, while reinforcement learning models are being explored for dynamic radiotherapy and optimization of complex treatment regimens. Beyond direct patient care, AI is accelerating drug discovery and clinical trial design, reducing costs and timelines associated with translating novel therapies into clinical practice. Clinical decision support systems are gradually being integrated into practice, assisting physicians in managing the growing complexity of cancer care. Despite this progress, challenges such as data quality, interoperability, algorithmic bias, and the opacity of complex models limit widespread integration. Additionally, ethical and regulatory hurdles must be addressed to ensure that AI applications are safe, equitable, and clinically effective. Nevertheless, the trajectory of current research suggests that AI will play an increasingly important role in the evolution of precision oncology, complementing human expertise and improving patient outcomes.
Full article
(This article belongs to the Special Issue Artificial Intelligence for Future Healthcare: Advancement, Impact, and Prospect in the Field of Cancer)
Open Access Review
Applications of Artificial Intelligence in Dental Malocclusion: A Scoping Review of Recent Advances (2020–2025)
by
Man Hung, Owen Cohen, Nicholas Beasley, Cairo Ziebarth, Connor Schwartz, Alicia Parry and Martin S. Lipsky
AI 2026, 7(1), 10; https://doi.org/10.3390/ai7010010 - 31 Dec 2025
Abstract
Introduction: Dental malocclusion affects more than half of the global population, causing significant functional and esthetic consequences. The integration of artificial intelligence (AI) into orthodontic care for malocclusion has the potential to enhance diagnostic accuracy, treatment planning, and clinical efficiency. However, existing research remains fragmented, and recent advances have not been comprehensively synthesized. This scoping review aimed to map the current landscape of AI applications in dental malocclusion from 2020 to 2025. Methods: The review followed the Joanna Briggs Institute methodology and the PRISMA-ScR guidelines. The authors conducted a systematic search across four databases (PubMed, Scopus, Web of Science, and IEEE Xplore) to identify original, peer-reviewed research applying AI to malocclusion diagnosis, classification, treatment planning, or monitoring. The review screened, selected, and extracted data using predefined criteria. Results: Ninety-five studies met the inclusion criteria. The majority employed convolutional neural networks and deep learning models, particularly for diagnosis and classification tasks. Accuracy rates frequently exceeded 90%, with robust performance in cephalometric landmark detection, skeletal classification, and 3D segmentation. Most studies focused on Angle’s classification, while anterior open bite, crossbite/asymmetry, and soft tissue modeling were comparatively underrepresented. Although model performance was generally high, study limitations included small sample sizes, lack of external validation, and limited demographic diversity. Conclusions: AI offers the potential to support and enhance the diagnosis and management of malocclusion. However, to ensure safe and effective clinical adoption, future research must include reproducible reporting, rigorous external validation across sites/devices, and evaluation in diverse populations and real-world clinical workflows.
Full article
Open Access Article
Robust Covert Spatial Attention Decoding from Low-Channel Dry EEG by Hybrid AI Model
by
Doyeon Kim and Jaeho Lee
AI 2026, 7(1), 9; https://doi.org/10.3390/ai7010009 - 30 Dec 2025
Abstract
Background: Decoding covert spatial attention (CSA) from dry, low-channel electroencephalography (EEG) is key for gaze-independent brain–computer interfaces (BCIs). Methods: We evaluate, on sixteen participants and three tasks (CSA, motor imagery (MI), Emotion), a four-electrode, subject-wise pipeline combining leak-safe preprocessing, multiresolution wavelets, and a compact Hybrid encoder (CNN-LSTM-MHSA) with robustness-oriented training (noise/shift/channel-dropout and supervised consistency). Results: Online, the Hybrid All-on-Wav achieved 0.695 accuracy with end-to-end latency ~2.03 s per 2.0 s decision window; the pure model inference latency is ≈185 ms on CPU and ≈11 ms on GPU. The same backbone without defenses reached 0.673, a CNN-LSTM 0.612, and a compact CNN 0.578. Offline subject-wise analyses showed a CSA median Δ balanced accuracy (BAcc) of +2.9%p (paired Wilcoxon p = 0.037; N = 16), with usability-aligned improvements (error 0.272 → 0.268; information transfer rate (ITR) 3.120 → 3.240). Effects were smaller for MI and present for Emotion. Conclusions: Even with simple hardware, compact attention-augmented models and training-time defenses support feasible, low-latency left–right CSA control above chance, suitable for embedded or laptop-class deployment.
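The information transfer rate (ITR) figures quoted in this abstract are conventionally computed with the standard Wolpaw formula; the sketch below shows that calculation. The class count and window length passed in are illustrative, not necessarily the paper's exact configuration.

```python
import math

def wolpaw_itr(accuracy, n_classes, window_s):
    """Wolpaw ITR in bits/min for an n-class decision made every window_s
    seconds at the given accuracy (standard BCI formula)."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / window_s)
```

For a binary left-right decision every 2.0 s, chance-level accuracy gives 0 bits/min and perfect accuracy gives 30 bits/min; intermediate accuracies like those reported above fall in between.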
Full article
(This article belongs to the Section Medical & Healthcare AI)
Open Access Review
The Current Landscape of Automatic Radiology Report Generation with Deep Learning: A Scoping Review
by
Patricio Meléndez Rojas, Jaime Jamett Rojas, María Fernanda Villalobos Dellafiori, Pablo R. Moya and Alejandro Veloz Baeza
AI 2026, 7(1), 8; https://doi.org/10.3390/ai7010008 - 29 Dec 2025
Abstract
Automatic radiology report generation (ARRG) has emerged as a promising application of deep learning (DL) with the potential to alleviate reporting workload and improve diagnostic consistency. However, despite rapid methodological advances, the field remains technically fragmented and not yet mature for routine clinical adoption. This scoping review maps the current ARRG research landscape by examining DL architectures, multimodal integration strategies, and evaluation practices from 2015 to April 2025. Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, a comprehensive literature search identified 89 eligible studies, revealing a marked predominance of chest radiography datasets (87.6%), primarily driven by their public availability and the accelerated development of automated tools during the COVID-19 pandemic. Most models employed hybrid architectures (73%), particularly CNN–Transformer pairings, reflecting a shift toward systems that combine local feature extraction with global contextual reasoning. Although these approaches have achieved measurable gains in textual and semantic coherence, several challenges persist, including limited anatomical diversity, weak alignment with radiological rationale, and evaluation metrics that insufficiently reflect diagnostic adequacy or clinical impact. Overall, the findings indicate a rapidly evolving but clinically immature field, underscoring the need for validation frameworks that more closely reflect radiological practice and support future deployment in real-world settings.
Full article
(This article belongs to the Section Medical & Healthcare AI)
Open Access Article
View-Aware Pose Analysis: A Robust Pipeline for Multi-Person Joint Injury Prediction from Single Camera
by
Basant Adel, Ahmad Salah, Mahmoud A. Mahdi and Heba Mohsen
AI 2026, 7(1), 7; https://doi.org/10.3390/ai7010007 - 27 Dec 2025
Abstract
This paper presents a novel, accessible pipeline for the prediction and prevention of motion-related joint injuries in multiple individuals. Current methodologies for biomechanical analysis often rely on complex, restrictive setups such as multi-camera systems, wearable sensors, or markers, limiting their applicability in everyday environments. To overcome these limitations, we propose a comprehensive solution that utilizes only single-camera 2D images. Our pipeline comprises four distinct stages: (1) extraction of 2D human pose keypoints for multiple persons using a pretrained Human Pose Estimation model; (2) a novel ensemble learning model for person-view classification—distinguishing between front, back, and side perspectives—which is critical for accurate subsequent analysis; (3) a view-specific module that calculates body-segment angles, robustly handling movement pairs (e.g., flexion–extension) and mirrored joints; and (4) a pose assessment module that evaluates calculated angles against established biomechanical Range of Motion (ROM) standards to detect potentially injurious movements. Evaluated on a custom dataset of high-risk poses and diverse images, the end-to-end pipeline demonstrated an 87% success rate in identifying dangerous postures. The view classification stage, a key contribution of this work, achieved a 90% overall accuracy. The system delivers individualized, joint-specific feedback, offering a scalable and deployable solution for enhancing human health and safety in various settings, from home environments to workplaces, without the need for specialized equipment.
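Stages (3) and (4) of such a pipeline reduce to computing the angle at a joint from three 2D keypoints and checking it against a Range of Motion band. The sketch below illustrates that geometry; the ROM bounds used are placeholder values, not the clinical standards referenced in the paper.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp before acos to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def within_rom(angle_deg, rom=(0.0, 150.0)):
    """True when the joint angle stays inside the ROM band; False flags a
    potentially injurious movement. The (0, 150) band is a placeholder."""
    return rom[0] <= angle_deg <= rom[1]
```

Run per joint and per frame, this yields the individualized, joint-specific feedback the abstract describes, with view classification deciding which keypoint triples and ROM bands apply.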
Full article
(This article belongs to the Special Issue Machine Learning in Action: Practical Applications and Emerging Trends)
Open Access Review
From Pilots to Practices: A Scoping Review of GenAI-Enabled Personalization in Computer Science Education
by
Iman Reihanian, Yunfei Hou and Qingquan Sun
AI 2026, 7(1), 6; https://doi.org/10.3390/ai7010006 - 23 Dec 2025
Abstract
Generative AI enables personalized computer science education at scale, yet questions remain about whether such personalization supports or undermines learning. This scoping review synthesizes 32 studies (2023–2025) purposively sampled from 259 records to map personalization mechanisms and effectiveness signals in higher-education CS contexts. We identify five application domains—intelligent tutoring, personalized materials, formative feedback, AI-augmented assessment, and code review—and analyze how design choices shape learning outcomes. Designs incorporating explanation-first guidance, solution withholding, graduated hint ladders, and artifact grounding (student code, tests, and rubrics) consistently show more positive learning processes than unconstrained chat interfaces. Successful implementations share four patterns: context-aware tutoring anchored in student artifacts, multi-level hint structures requiring reflection, composition with traditional CS infrastructure (autograders and rubrics), and human-in-the-loop quality assurance. We propose an exploration-first adoption framework emphasizing piloting, instrumentation, learning-preserving defaults, and evidence-based scaling. Four recurrent risks—academic integrity, privacy, bias and equity, and over-reliance—are paired with operational mitigations. Critical evidence gaps include longitudinal effects on skill retention, comparative evaluations of guardrail designs, equity impacts at scale, and standardized replication metrics. The evidence supports generative AI as a mechanism for precision scaffolding when embedded in exploration-first, audit-ready workflows that preserve productive struggle while scaling personalized support.
Full article
(This article belongs to the Topic Generative Artificial Intelligence in Higher Education)
Open Access Article
Remote Sensing Scene Classification via Multi-Feature Fusion Based on Discriminative Multiple Canonical Correlation Analysis
by
Shavkat Fazilov, Ozod Yusupov, Yigitali Khandamov, Erali Eshonqulov, Jalil Khamidov and Khabiba Abdieva
AI 2026, 7(1), 5; https://doi.org/10.3390/ai7010005 - 23 Dec 2025
Abstract
Scene classification in remote sensing images is one of the urgent tasks that requires an improvement in recognition accuracy due to complex spatial structures and high inter-class similarity. Although feature extraction using convolutional neural networks provides high efficiency, combining deep features obtained from different architectures in a semantically consistent manner remains an important scientific problem. In this study, a DMCCA + SVM model is proposed, in which Discriminative Multiple Canonical Correlation Analysis (DMCCA) is applied to fuse multi-source deep features, and final classification is performed using a Support Vector Machine (SVM). Unlike conventional fusion methods, DMCCA projects heterogeneous features into a unified low-dimensional latent space by maximizing within-class correlation and minimizing between-class correlation, resulting in a more separable and compact feature space. The proposed approach was evaluated on three widely used benchmark datasets—NWPU-RESISC45, AID, and PatternNet—and achieved accuracy scores of 92.75%, 93.92%, and 99.35%, respectively. The results showed that the model outperforms modern individual CNN architectures. Additionally, the model’s stability and generalization capability were confirmed through K-fold cross-validation. Overall, the proposed DMCCA + SVM model was experimentally validated as an effective and reliable solution for high-accuracy classification of remote sensing scenes.
Full article
(This article belongs to the Special Issue Deep Learning Technologies and Their Applications in Image Processing, Computer Vision, and Computational Intelligence)
Open Access Article
A Real-Time Consensus-Free Accident Detection Framework for Internet of Vehicles Using Vision Transformer and EfficientNet
by
Zineb Seghir, Lyamine Guezouli, Kamel Barka, Djallel Eddine Boubiche, Homero Toral-Cruz and Rafael Martínez-Peláez
AI 2026, 7(1), 4; https://doi.org/10.3390/ai7010004 - 22 Dec 2025
Abstract
Objectives: Traffic accidents cause severe social and economic impacts, demanding fast and reliable detection to minimize secondary collisions and improve emergency response. However, existing cloud-dependent detection systems often suffer from high latency and limited scalability, motivating the need for an edge-centric and consensus-free accident detection framework in IoV environments. Methods: This study presents a real-time accident detection framework tailored for Internet of Vehicles (IoV) environments. The proposed system forms an integrated IoV architecture combining on-vehicle inference, RSU-based validation, and asynchronous cloud reporting. The system integrates a lightweight ensemble of Vision Transformer (ViT) and EfficientNet models deployed on vehicle nodes to classify video frames. Accident alerts are generated only when both models agree (vehicle-level ensemble consensus), ensuring high precision. These alerts are transmitted to nearby Road Side Units (RSUs), which validate the events and broadcast safety messages without requiring inter-vehicle or inter-RSU consensus. Structured reports are also forwarded asynchronously to the cloud for long-term model retraining and risk analysis. Results: Evaluated on the CarCrash and CADP datasets, the framework achieves an F1-score of 0.96 with average decision latency below 60 ms, corresponding to an overall accuracy of 98.65% and demonstrating measurable improvement over single-model baselines. Conclusions: By combining on-vehicle inference, edge-based validation, and optional cloud integration, the proposed architecture offers both immediate responsiveness and adaptability, contrasting with traditional cloud-dependent approaches.
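The vehicle-level ensemble consensus rule, raising an alert only when both classifiers agree on the accident class, can be sketched in a few lines. The label names below are hypothetical, not taken from the paper.

```python
def consensus_alert(vit_label: str, effnet_label: str,
                    target: str = "accident") -> bool:
    """Vehicle-level ensemble consensus: an alert fires only when the ViT
    and EfficientNet predictions agree on the target class."""
    return vit_label == effnet_label == target
```

Requiring agreement trades a small amount of recall for precision, which is why the framework can broadcast RSU safety messages without any inter-vehicle consensus step.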
Full article
Open Access Article
A Unified Fuzzy–Explainable AI Framework (FAS-XAI) for Customer Service Value Prediction and Strategic Decision-Making
by
Gabriel Marín Díaz
AI 2026, 7(1), 3; https://doi.org/10.3390/ai7010003 - 22 Dec 2025
Abstract
Real-world decision-making often involves uncertainty, incomplete data, and the need to evaluate alternatives based on both quantitative and qualitative criteria. To address these challenges, this study presents FAS-XAI, a unified methodological framework that integrates fuzzy clustering and explainable artificial intelligence (XAI). FAS-XAI supports interpretable, data-driven decision-making by combining three key components: fuzzy clustering to uncover latent behavioral profiles under ambiguity, supervised prediction models to estimate decision outcomes, and expert-guided interpretation to contextualize results and enhance transparency. The framework ensures both global and local interpretability through SHAP, LIME, and ELI5, placing human reasoning and transparency at the center of intelligent decision systems. To demonstrate its applicability, FAS-XAI is applied to a real-world B2B customer service dataset from a global ERP software distributor. Customer engagement is modeled using the RFID approach (Recency, Frequency, Importance, Duration), with Fuzzy C-Means employed to identify overlapping customer profiles and XGBoost models predicting attrition risk with explainable outputs. This case study illustrates the coherence, interpretability, and operational value of the FAS-XAI methodology in managing customer relationships and supporting strategic decision-making. Finally, the study reflects additional applications across education, physics, and industry, positioning FAS-XAI as a general-purpose, human-centered framework for transparent, explainable, and adaptive decision-making across domains.
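Fuzzy C-Means, used above to identify overlapping customer profiles, assigns each point graded memberships across clusters rather than a single hard label. The standard membership formula, given fixed cluster centers, looks like this (a generic sketch, not the FAS-XAI code):

```python
import math

def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy C-Means memberships of one point to each center using the
    standard formula u_i = 1 / sum_j (d_i / d_j)^(2 / (m - 1))."""
    dists = [math.dist(point, c) for c in centers]
    if any(d == 0.0 for d in dists):  # point sits exactly on a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d_i / d_j) ** exp for d_j in dists) for d_i in dists]
```

Memberships always sum to 1, and the "overlapping customer profiles" of the abstract show up as points whose membership mass is spread across several clusters rather than concentrated in one.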
Full article
(This article belongs to the Special Issue The Use of Artificial Intelligence in Business: Innovations, Applications and Impacts)
Open Access Article
Ensemble Learning-Driven Flood Risk Management Using Hybrid Defense Systems
by
Nadir Murtaza and Ghufran Ahmed Pasha
AI 2026, 7(1), 2; https://doi.org/10.3390/ai7010002 - 22 Dec 2025
Abstract
Climate-induced flooding is a major issue across the globe, causing damage to infrastructure, loss of life, and economic harm; sustainable flood risk management is therefore urgently needed. This paper assesses the effectiveness of a hybrid defense system using advanced artificial intelligence (AI) techniques. A data series of energy dissipation (ΔE), flow conditions, roughness, and vegetation density was collected from the literature and laboratory experiments: of the 136 selected data points, 80 were drawn from the literature and 56 from a laboratory experiment. Advanced AI models, namely Random Forest (RF), Extreme Gradient Boosting (XGBoost) with Particle Swarm Optimization (PSO), Support Vector Regression (SVR) with PSO, and an artificial neural network (ANN) with PSO, were trained on the collected data series to predict floodwater energy dissipation. The predictive capability of each model was evaluated through performance indicators, including the coefficient of determination (R2) and root mean square error (RMSE). The relationship between input and output parameters was further examined using a correlation heatmap, a scatter pair plot, and HEC contour maps. The results demonstrated the superior performance of the Random Forest (RF) model, with a high coefficient of determination (R2 = 0.96) and a low RMSE of 3.03 during training. This superiority was further supported by statistical analyses: ANOVA and t-tests confirmed significant performance differences among the models, and a Taylor diagram showed closer agreement between RF predictions and observed energy dissipation. The scatter pair plot and HEC contour maps also supported the SHAP analysis, which indicated the greater impact of roughness conditions, followed by vegetation density, in reducing floodwater energy under diverse flow conditions. These findings indicate that RF is capable of modeling flood risk management tasks and highlight the role of AI models combined with a hybrid defense system in enhancing flood risk management.
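The two performance indicators used above, R2 and RMSE, are computed as follows (a minimal sketch, not the study's evaluation code):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R2) and root mean square error (RMSE)
    for paired observed/predicted values."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

R2 = 1 and RMSE = 0 correspond to perfect prediction, so the reported R2 = 0.96 with RMSE = 3.03 indicates the RF model explains nearly all of the variance in the training data.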
Full article
(This article belongs to the Special Issue Sensing the Future: IOT-AI Synergy for Climate Action)
Open Access Article
A Responsible Generative Artificial Intelligence Based Multi-Agent Framework for Preserving Data Utility and Privacy
by
Abhinav Tiwari and Hany E. Z. Farag
AI 2026, 7(1), 1; https://doi.org/10.3390/ai7010001 - 21 Dec 2025
Abstract
The exponential growth in the usage of textual data across industries and data sharing across institutions underscores the critical need for frameworks that effectively balance data utility and privacy. This paper proposes an innovative agentic AI-based framework specifically tailored for textual data, integrating user-driven qualitative inputs, differential privacy, and generative AI methodologies. The framework comprises four interlinked topics: (1) A novel quantitative approach that translates qualitative user inputs, such as textual completeness, relevance, or coherence, into precise, context-aware utility thresholds through semantic embedding and adaptive metric mapping. (2) A differential privacy-driven mechanism optimizing text embedding perturbations, dynamically balancing semantic fidelity against rigorous privacy constraints. (3) An advanced generative AI approach to synthesize and augment textual datasets, preserving semantic coherence while minimizing sensitive information leakage. (4) An adaptable dataset-dependent optimization system that autonomously profiles textual datasets, selects dataset-specific privacy strategies (e.g., anonymization, paraphrasing), and adapts in real-time to evolving privacy and utility requirements. Each topic is operationalized via specialized agentic modules with explicit mathematical formulations and inter-agent coordination, establishing a robust and adaptive solution for modern textual data challenges.
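Component (2), a differential-privacy-style perturbation of text embeddings, is commonly realized by adding calibrated Laplace noise per coordinate. The sketch below shows that generic mechanism; the sensitivity and epsilon parameters are hypothetical, and this is not the paper's optimized scheme.

```python
import math
import random

def perturb_embedding(vec, sensitivity=1.0, epsilon=1.0, seed=None):
    """Add Laplace(scale = sensitivity / epsilon) noise to each embedding
    coordinate; smaller epsilon means stronger privacy and more distortion."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon

    def laplace_sample():
        # Inverse-CDF sampling from a uniform draw in (-0.5, 0.5).
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    return [x + laplace_sample() for x in vec]
```

Tuning epsilon is exactly the utility-privacy trade-off the framework manages dynamically: large epsilon leaves embeddings nearly intact, while small epsilon protects them at the cost of semantic fidelity.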
Full article
Open Access Article
Automatic Vehicle Recognition: A Practical Approach with VMMR and VCR
by
Andrei Istrate, Madalin-George Boboc, Daniel-Tiberius Hritcu, Florin Rastoceanu, Constantin Grozea and Mihai Enache
AI 2025, 6(12), 329; https://doi.org/10.3390/ai6120329 - 18 Dec 2025
Abstract
Background: Automatic vehicle recognition has recently become an area of great interest, providing substantial support for multiple use cases, including law enforcement and surveillance applications. In real traffic conditions, where for various reasons license plate recognition is impossible or license plates are forged, alternative solutions are required to support human personnel in identifying vehicles used for illegal activities. In such cases, appearance-based approaches relying on vehicle make and model recognition (VMMR) and vehicle color recognition (VCR) can successfully complement license plate recognition. Methods: This research addresses appearance-based vehicle identification, in which VMMR and VCR rely on inherent visual cues such as body contours, stylistic details, and exterior color. In the first stage, vehicles passing through an intersection are detected, and essential visual characteristics are extracted for the two recognition tasks. The proposed system employs deep learning with semantic segmentation and data augmentation for color recognition, while histogram of oriented gradients (HOG) feature extraction combined with a support vector machine (SVM) classifier is used for make-model recognition. For the VCR task, five different neural network architectures are evaluated to identify the most effective solution. Results: The proposed system achieves an overall accuracy of 94.89% for vehicle make and model recognition. For vehicle color recognition, the best-performing models obtain a Top-1 accuracy of 94.17% and a Top-2 accuracy of 98.41%, demonstrating strong robustness under real-world traffic conditions. Conclusions: The experimental results show that the proposed automatic vehicle recognition system provides an efficient and reliable solution for appearance-based vehicle identification. 
By combining region-tailored data, segmentation-guided processing, and complementary recognition strategies, the system effectively supports real-world surveillance and law-enforcement scenarios where license plate recognition alone is insufficient.
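The make-model branch above pairs HOG features with an SVM. As a point of reference only (not the authors' implementation), the core of a HOG descriptor can be sketched in a few lines: the toy version below computes a single unsigned-orientation histogram over a whole grayscale image, whereas real HOG uses per-cell histograms with overlapping block normalisation, and the downstream SVM classifier is omitted.

```python
import math

def hog_descriptor(image, n_bins=9):
    """Toy HOG-style descriptor: one unsigned-orientation histogram over
    the whole image (real HOG bins per cell and normalises per block)."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / n_bins)) % n_bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0  # L2 normalisation
    return [v / norm for v in hist]

# A vertical edge yields purely horizontal gradients: all energy in bin 0.
desc = hog_descriptor([[0, 0, 10, 10]] * 4)
```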
Full article
(This article belongs to the Topic State-of-the-Art Object Detection, Tracking, and Recognition Techniques)
Open Access Article
Abnormal Alliance Detection Method Based on a Dynamic Community Identification and Tracking Method for Time-Varying Bipartite Networks
by
Beibei Zhang, Fan Gao, Shaoxuan Li, Xiaoyan Xu and Yichuan Wang
AI 2025, 6(12), 328; https://doi.org/10.3390/ai6120328 - 16 Dec 2025
Abstract
Identifying abnormal group behavior formed by multi-type participants from large-scale historical industry and tax data is important for regulators to prevent potential criminal activity. We propose an Abnormal Alliance detection framework comprising two methods. For detecting joint behavior among multi-type participants, we present DyCIAComDet, a dynamic community identification and tracking method for large-scale, time-varying bipartite multi-type participant networks, and introduce three community-splitting measurement indicators—cohesion, integration, and leadership—to improve community division. To verify whether joint behavior is abnormal, termed an Abnormal Alliance, we propose BMPS, a frequent-sequence identification algorithm that mines key features along community evolution paths based on bitmap matrices, sequence matrices, prefix-projection matrices, and repeated-projection matrices. The framework is designed to address sampling limitations, temporal issues, and subjectivity that hinder traditional analyses and to remain scalable to large datasets. Experiments on the Southern Women benchmark and a real tax dataset show DyCIAComDet yields a mean modularity Q improvement of 24.6% over traditional community detection algorithms. Compared with PrefixSpan, BMPS improves mean time and space efficiency by up to 34.8% and 35.3%, respectively. Together, DyCIAComDet and BMPS constitute an effective, scalable detection pipeline for identifying abnormal alliances in tax datasets and supporting regulatory analysis.
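The reported 24.6% gain refers to Newman's modularity Q. For orientation (this is not the DyCIAComDet algorithm itself, which targets large time-varying bipartite networks), Q for a static unipartite graph can be computed per community as the internal-edge fraction minus the squared degree fraction:

```python
def modularity(edges, community):
    """Newman modularity Q = sum_c (e_c / m - (d_c / 2m)^2), where e_c
    counts edges inside community c and d_c is its total degree."""
    m = len(edges)
    internal, degree = {}, {}
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            internal[cu] = internal.get(cu, 0) + 1
    return sum(internal.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# Two triangles joined by a bridge, split into their natural communities.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, community)
```

Putting every node in one community drives Q to zero, which is why higher Q indicates a more meaningful division.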
Full article
Open Access Article
Application of Vision-Language Models in the Automatic Recognition of Bone Tumors on Radiographs: A Retrospective Study
by
Robert Kaczmarczyk, Philipp Pieroh, Sebastian Koob, Frank Sebastian Fröschen, Sebastian Scheidt, Kristian Welle, Ron Martin and Jonas Roos
AI 2025, 6(12), 327; https://doi.org/10.3390/ai6120327 - 16 Dec 2025
Abstract
Background: Vision-language models show promise in medical image interpretation, but their performance in musculoskeletal tumor diagnostics remains underexplored. Objective: To evaluate the diagnostic accuracy of six large language models on orthopedic radiographs for tumor detection, classification, anatomical localization, and X-ray view interpretation, and to assess the impact of demographic context and self-reported certainty. Methods: We retrospectively evaluated six VLMs on 3746 expert-annotated orthopedic radiographs from the Bone Tumor X-ray Radiograph dataset. Each image was analyzed by all models with and without patient age and sex using a standardized prompting scheme across four predefined tasks. Results: Over 48,000 predictions were analyzed. Tumor detection accuracy ranged from 59.9–73.5%, with the Gemini Ensemble achieving the highest F1 score (0.723) and recall (0.822). Benign/malignant classification reached up to 85.2% accuracy; tumor type identification 24.6–55.7%; body region identification 97.4%; and view classification 82.8%. Demographic data improved tumor detection accuracy (+1.8%, p < 0.001) but had no significant effect on other tasks. Certainty scores were weakly correlated with correctness, with Gemini Pro highest (r = 0.089). Conclusion: VLMs show strong potential for basic musculoskeletal radiograph interpretation without task-specific training but remain less accurate than specialized deep learning models for complex classification. Limited calibration, interpretability, and contextual reasoning must be addressed before clinical use. This is the first systematic assessment of image-based diagnosis and self-assessment in LLMs using a real-world radiology dataset.
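The detection scores quoted above (accuracy, recall, F1) follow the standard binary definitions; a minimal sketch, not tied to the paper's evaluation code, with label 1 meaning "tumor present":

```python
def detection_scores(y_true, y_pred):
    """Standard binary precision / recall / F1 from paired labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 3 positives, 2 negatives; one miss and one false alarm.
p, r, f1 = detection_scores([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```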
Full article
(This article belongs to the Section Medical & Healthcare AI)
Open Access Review
Application of Artificial Intelligence in Control Systems: Trends, Challenges, and Opportunities
by
Enrique Ramón Fernández Mareco and Diego Pinto-Roa
AI 2025, 6(12), 326; https://doi.org/10.3390/ai6120326 - 14 Dec 2025
Abstract
The integration of artificial intelligence (AI) into intelligent control systems has advanced significantly, enabling improved adaptability, robustness, and performance in nonlinear and uncertain environments. This study conducts a PRISMA-2020-compliant systematic mapping of 188 peer-reviewed articles published between 2000 and 15 January 2025, identified through fully documented Boolean queries across IEEE Xplore, ScienceDirect, SpringerLink, Wiley, and Google Scholar. The screening process applied predefined inclusion–exclusion criteria, deduplication rules, and dual independent review, yielding an inter-rater agreement of κ = 0.87. The resulting synthesis reveals three dominant research directions: (i) control model strategies (36.2%), (ii) parameter optimization methods (45.2%), and (iii) adaptability mechanisms (18.6%). The most frequently adopted approaches include fuzzy logic structures, hybrid neuro-fuzzy controllers, artificial neural networks, evolutionary and swarm-based metaheuristics, model predictive control, and emerging deep reinforcement learning frameworks. Although many studies report enhanced accuracy, disturbance rejection, and energy efficiency, the analysis identifies persistent limitations, including overreliance on simulations, inconsistent reporting of hyperparameters, limited real-world validation, and heterogeneous evaluation criteria. This review consolidates current AI-enabled control technologies, compares methodological trade-offs, and highlights application-specific outcomes across renewable energy, robotics, agriculture, and industrial processes. It also delineates key research gaps related to reproducibility, scalability, computational constraints, and the need for standardized experimental benchmarks. The results aim to provide a rigorous and reproducible foundation for guiding future research and the development of next-generation intelligent control systems.
Full article
(This article belongs to the Topic The Future of Artificial Intelligence: Trends, Challenges, and Developments)
Open Access Article
Online On-Device Adaptation of Linguistic Fuzzy Models for TinyML Systems
by
Javier Martín-Moreno, Francisco A. Márquez, Ana M. Roldán and Antonio Peregrín
AI 2025, 6(12), 325; https://doi.org/10.3390/ai6120325 - 12 Dec 2025
Abstract
Background: Many everyday electronic devices incorporate embedded computers, allowing them to offer advanced functions such as Internet connectivity or the execution of artificial intelligence algorithms, giving rise to Tiny Machine Learning (TinyML) and Edge AI applications. In these contexts, models must be both efficient and explainable, especially when they are intended for systems that must be understood, interpreted, validated, or certified by humans, in contrast to less interpretable approaches. Among such algorithms, linguistic fuzzy systems have traditionally been valued for their interpretability and their ability to represent uncertainty at low computational cost, making them a relevant choice for embedded intelligence. However, in dynamic and changing environments, it is essential that these models can continuously adapt. While there are fuzzy approaches capable of adapting to changing conditions, few studies explicitly address their adaptation and optimization on resource-constrained devices. Methods: This paper addresses this challenge and presents a lightweight evolutionary strategy, based on a micro genetic algorithm and tailored to constrained hardware, for online, on-device tuning of linguistic (Mamdani-type) fuzzy models while preserving their interpretability. Results: A prototype implementation on an embedded platform demonstrates the feasibility of the approach and highlights its potential to bring explainable self-adaptation to TinyML and Edge AI scenarios. Conclusions: The main contribution lies in showing how an appropriate integration of carefully chosen tuning mechanisms and model structure enables efficient on-device adaptation under severe resource constraints, making continuous linguistic adjustment feasible within TinyML systems.
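A micro genetic algorithm keeps a very small population, typically omits mutation, and restarts from the elite whenever diversity collapses. The sketch below is a generic illustration of that loop, not the paper's tuner: the objective, gene encoding, population size, and convergence threshold are all placeholder assumptions (a Mamdani tuner would encode membership-function parameters as the genes).

```python
import random

def micro_ga(fitness, n_genes, pop_size=5, generations=60, seed=0):
    """Generic micro-GA loop: tiny population, binary tournament
    selection, uniform crossover, no mutation; when the population
    collapses onto the elite it is reseeded with random individuals."""
    rng = random.Random(seed)
    new = lambda: [rng.uniform(0.0, 1.0) for _ in range(n_genes)]
    pop = [new() for _ in range(pop_size)]
    elite = min(pop, key=fitness)
    for _ in range(generations):
        def pick():  # binary tournament on the current population
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        pop = [[g1 if rng.random() < 0.5 else g2        # uniform crossover
                for g1, g2 in zip(pick(), pick())]
               for _ in range(pop_size)]
        best = min(pop, key=fitness)
        if fitness(best) < fitness(elite):
            elite = best
        if max(max(abs(a - b) for a, b in zip(ind, elite))
               for ind in pop) < 1e-3:                  # converged: restart
            pop = [elite] + [new() for _ in range(pop_size - 1)]
    return elite

# Placeholder objective: tune one membership-function centre toward 0.7.
best = micro_ga(lambda ind: abs(ind[0] - 0.7), n_genes=1)
```

The restart step is what lets such a small population keep exploring without a mutation operator, which keeps per-generation cost low on constrained hardware.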
Full article
(This article belongs to the Special Issue Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation)
Open Access Article
A Novel Deep Learning Approach for Alzheimer’s Disease Detection: Attention-Driven Convolutional Neural Networks with Multi-Activation Fusion
by
Mohammed G. Alsubaie, Suhuai Luo, Kamran Shaukat, Weijia Zhang and Jiaming Li
AI 2025, 6(12), 324; https://doi.org/10.3390/ai6120324 - 10 Dec 2025
Abstract
Alzheimer’s disease (AD) affects over 50 million people worldwide, making early and accurate diagnosis essential for effective treatment and care planning. Diagnosing AD through neuroimaging continues to face challenges, including reliance on subjective clinical evaluations, the need for manual feature extraction, and limited generalisability across diverse populations. Recent advances in deep learning, especially convolutional neural networks (CNNs) and vision transformers, have improved diagnostic performance, but many models still depend on large labelled datasets and high computational resources. This study introduces an attention-enhanced CNN with a multi-activation fusion (MAF) module and evaluates it using the Alzheimer’s Disease Neuroimaging Initiative dataset. The channel attention mechanism helps the model focus on the most important brain regions in 3D MRI scans, while the MAF module, inspired by multi-head attention, uses parallel fully connected layers with different activation functions to capture varied and complementary feature patterns. This design improves feature representation and increases robustness across heterogeneous patient groups. The proposed model achieved 92.1% accuracy and 0.99 AUC, with precision, recall, and F1-scores of 91.3%, 89.3%, and 92%, respectively. Ten-fold cross-validation confirmed its reliability, showing consistent performance with 91.23% accuracy, 0.93 AUC, 90.29% precision, and 88.30% recall. Comparative analysis also shows that the model outperforms several state-of-the-art deep learning approaches for AD classification. Overall, these findings highlight the potential of combining attention mechanisms with multi-activation modules to improve automated AD diagnosis and enhance diagnostic reliability.
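Channel attention of the squeeze-and-excitation kind (an assumed reading of the mechanism described above, not the authors' exact architecture) pools each channel to a scalar, passes the vector through a small two-layer gate, and rescales the channels; the weight matrices below are placeholders.

```python
import math

def channel_attention(feature_maps, w1, w2):
    """SE-style channel attention sketch: squeeze (global average pool),
    excite (FC -> ReLU -> FC -> sigmoid), then rescale each channel."""
    # squeeze: one scalar per channel
    z = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # excite: tiny gating network on the channel descriptor
    hidden = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    gate = [1 / (1 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
            for row in w2]
    # rescale each channel by its gate value
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gate)]

# Two 2x2 channels with identity gate weights: the active channel is
# weighted by sigmoid(1), the empty channel stays at zero.
identity = [[1.0, 0.0], [0.0, 1.0]]
out = channel_attention([[[1.0, 1.0], [1.0, 1.0]],
                         [[0.0, 0.0], [0.0, 0.0]]], identity, identity)
```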
Full article
Open Access Article
An Adaptative Wavelet Time–Frequency Transform with Mamba Network for OFDM Automatic Modulation Classification
by
Hongji Xing, Xiaogang Tang, Lu Wang, Binquan Zhang and Yuepeng Li
AI 2025, 6(12), 323; https://doi.org/10.3390/ai6120323 - 9 Dec 2025
Abstract
Background: With the development of wireless communication technologies, the rapid advancement of 5G and 6G systems has created an urgent demand for low latency and high data rates. Orthogonal Frequency Division Multiplexing (OFDM) with high-order digital modulation has become a key technology owing to its high reliability, high data rate, and low latency, and is widely applied across many fields. As a component of cognitive radio, automatic modulation classification (AMC) plays an important role in remote sensing and electromagnetic spectrum sensing. However, current complex channel conditions introduce low signal-to-noise ratio (SNR), Doppler frequency shift, and multipath propagation. Methods: These impairments, coupled with the inherently indistinct characteristics of high-order modulation, make it difficult for AMC to handle OFDM and high-order digital modulation. Existing methods are mainly based on a single model-driven or data-driven approach. The Adaptive Wavelet Mamba Network (AWMN) proposed in this paper combines model-driven adaptive wavelet-transform feature extraction with the Mamba deep learning architecture. A module based on the lifting wavelet scheme effectively captures discriminative time–frequency features using learnable operations, while a Mamba network built on the State Space Model (SSM) captures long-term temporal dependencies, realizing a combination of model-driven and data-driven methods. Results: Tests on public datasets and a custom-built, real-time received OFDM dataset show that the proposed AWMN achieves accuracies of 62.39%, 64.50%, and 74.95% on the public Rml2016(a) and Rml2016(b) datasets and our EVAS dataset, respectively, while maintaining a compact parameter size of 0.44 M. Conclusions: These results highlight its potential for improving the automatic modulation classification of high-order OFDM modulation in 5G/6G systems.
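The lifting scheme the wavelet module builds on can be shown with the fixed Haar case: split the signal into even and odd samples, predict the odds from the evens (detail coefficients), then update the evens with the details (approximation). In the paper the predict/update operators are learnable; the fixed version below is only a structural sketch.

```python
def haar_lifting(signal):
    """One level of the Haar wavelet via lifting: split / predict / update."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_inverse(approx, detail):
    """Undo the update and predict steps, then interleave the samples."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

approx, detail = haar_lifting([4, 6, 10, 12, 8, 6, 5, 5])
```

Because each lifting step is trivially invertible, the transform is lossless regardless of the predict/update operators chosen, which is what makes the operators safe to learn.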
Full article
(This article belongs to the Topic AI-Driven Wireless Channel Modeling and Signal Processing)
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in
AI, Buildings, Electronics, Symmetry, Smart Cities, Urban Science, Automation
Application of Smart Technologies in Buildings
Topic Editors: Yin Zhang, Limei Peng, Ming Tao
Deadline: 28 February 2026
Topic in
AI, BDCC, Fire, GeoHazards, Remote Sensing
AI for Natural Disasters Detection, Prediction and Modeling
Topic Editors: Moulay A. Akhloufi, Mozhdeh Shahbazi
Deadline: 31 March 2026
Topic in
AI, Applied Sciences, Systems, JTAER, Healthcare
Data Science and Intelligent Management
Topic Editors: Dongxiao Gu, Jiantong Zhang, Jia Li
Deadline: 30 April 2026
Special Issues
Special Issue in
AI
Smart Networks for a Smart World: Trends in Wireless Communication
Guest Editor: Lavric Alexandru
Deadline: 7 January 2026
Special Issue in
AI
Integrating Data Sources for Smarter Interdisciplinary AI Solutions: Challenges and Opportunities
Guest Editors: Jens Dörpinghaus, Michael Tiemann
Deadline: 15 January 2026
Special Issue in
AI
AI-Powered Smart Cities: Towards Sustainable Urban Environments
Guest Editor: Robert Laurini
Deadline: 31 January 2026
Special Issue in
AI
Artificial Intelligence in Industrial Systems: From Data Acquisition to Intelligent Decision-Making
Guest Editors: Hugo Landaluce, Ignacio Angulo Martínez, Ander Garcia
Deadline: 31 January 2026





