
AI, Volume 7, Issue 1 (January 2026) – 36 articles

Cover Story: Cancer care is increasingly challenged by the growing volume and complexity of clinical, imaging, and molecular data. Artificial intelligence (AI) has emerged as a powerful tool capable of integrating radiological, histopathological, genomic, and clinical information to support more precise diagnostics, risk assessment, and prognostics, as well as optimal treatment. Recent advances in machine learning and deep learning have demonstrated strong performance across key oncological domains, including tumor detection, molecular classification, risk stratification, radiotherapy planning, and drug discovery. This paper reviews the current clinical and translational applications of AI in precision oncology, highlighting its potential to complement clinical expertise while addressing existing technical, ethical, and regulatory challenges.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 6118 KB  
Article
Effective Approach for Classifying EMG Signals Through Reconstruction Using Autoencoders
by Natalia Rendón Caballero, Michelle Rojo González, Marcos Aviles, José Manuel Alvarez Alvarado, José Billerman Robles-Ocampo, Perla Yazmin Sevilla-Camacho and Juvenal Rodríguez-Reséndiz
AI 2026, 7(1), 36; https://doi.org/10.3390/ai7010036 - 22 Jan 2026
Abstract
The study of muscle signal classification has been widely explored for the control of myoelectric prostheses. Traditional approaches rely on manually designed features extracted from time- or frequency-domain representations, which may limit the generalization and adaptability of EMG-based systems. In this work, an autoencoder-based framework is proposed for automatic feature extraction, enabling the learning of compact latent representations directly from raw EMG signals and reducing dependence on handcrafted features. A custom instrumentation system with three surface EMG sensors was developed and placed on selected forearm muscles to acquire signals associated with five hand movements from 20 healthy participants aged 18 to 40 years. The signals were segmented into 200 ms windows with 75% overlap. The proposed method employs a recurrent autoencoder with a symmetric encoder–decoder architecture, trained independently for each sensor to achieve accurate signal reconstruction, with a minimum reconstruction loss of 3.3 × 10⁻⁴ V². The encoder’s latent representations were then used to train a dense neural network for gesture classification. An overall efficiency of 93.84% was achieved, demonstrating that the proposed reconstruction-based approach provides high classification performance and represents a promising solution for future EMG-based assistive and control applications. Full article
(This article belongs to the Special Issue Transforming Biomedical Innovation with Artificial Intelligence)
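The segmentation step described in the abstract above (200 ms windows with 75% overlap) can be sketched as follows. Note the 1 kHz sampling rate is an assumption made only for illustration; the abstract does not state one.

```python
import numpy as np

def segment_windows(signal, fs=1000, win_ms=200, overlap=0.75):
    """Slice a 1-D EMG signal into overlapping fixed-length windows.

    fs: sampling rate in Hz (assumed here, not given in the abstract).
    win_ms: window length in milliseconds.
    overlap: fraction of each window shared with the next one.
    """
    win = int(fs * win_ms / 1000)        # samples per window
    step = int(win * (1 - overlap))      # hop size between window starts
    n = (len(signal) - win) // step + 1  # number of full windows
    return np.stack([signal[i * step : i * step + win] for i in range(n)])

x = np.random.randn(1000)                # 1 s of synthetic EMG at 1 kHz
w = segment_windows(x)
print(w.shape)                           # (17, 200)
```

With a 200-sample window and 75% overlap the hop is 50 samples, so one second of signal yields 17 windows of 200 samples each.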

34 pages, 6023 KB  
Article
Multi-Dimensional Evaluation of Auto-Generated Chain-of-Thought Traces in Reasoning Models
by Luis F. Becerra-Monsalve, German Sanchez-Torres and John W. Branch-Bedoya
AI 2026, 7(1), 35; https://doi.org/10.3390/ai7010035 - 21 Jan 2026
Abstract
Automatically generated chains-of-thought (gCoTs) have become common as large language models adopt deliberative behaviors. Prior work emphasizes fidelity to internal processes, leaving explanatory properties underexplored. Our central hypothesis is that these traces, produced by highly capable reasoning models, are not arbitrary by-products of decoding but exhibit stable and practically valuable textual properties beyond answer fidelity. We apply a multidimensional text-evaluation framework that quantifies four axes—structural coherence, logical–factual consistency, linguistic clarity, and coverage/informativeness—that are standard dimensions for assessing textual quality, and use it to evaluate five reasoning models on the GSM8K arithmetic word-problem benchmark (~1.3 k–1.4 k items) with reproducible, normalized metrics. Logical verification shows near-ceiling self-consistency, measured by the Aggregate Consistency Score (ACS ≈ 0.95–1.00), and high final-answer entailment, measured by Final Answer Soundness (FAS0 ≈ 0.85–1.00); when sound, justifications are compact, with Justification Set Size (JSS ≈ 0.51–0.57) and moderate redundancy, measured by the Redundant Constraint Ratio (RCR ≈ 0.62–0.70). Results also show consistent coherence and clarity; from gCoT to answer implication is stricter than from question to gCoT support, indicating chains anchored to the prompt. We find no systematic trade-off between clarity and informativeness (within-model slopes ≈ 0). In addition to these automatic and logic-based metrics, we include an exploratory expert rating of a subset (four raters; 50 items × five models) to contextualize model differences; these human judgments are not intended to support dataset-wide generalization. Overall, gCoTs display explanatory value beyond fidelity, primarily supported by the automated and logic-based analyses, motivating hybrid evaluation (automatic + exploratory human) to map convergence/divergence zones for user-facing applications. 
Full article
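The reported absence of a clarity–informativeness trade-off (within-model slopes ≈ 0) amounts to fitting a per-model regression of one score on the other; a minimal sketch with invented scores, not the study's data:

```python
import numpy as np

# Hypothetical per-item scores for one model (both on a 0-1 scale).
clarity = np.array([0.80, 0.85, 0.90, 0.95])
informativeness = np.array([0.70, 0.72, 0.69, 0.71])

# Within-model slope of informativeness on clarity; a value near zero
# means no trade-off between the two axes, as the abstract reports.
slope = np.polyfit(clarity, informativeness, 1)[0]
print(f"slope ~ {slope:.3f}")  # near 0 for these toy scores
```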

22 pages, 1714 KB  
Article
Integrating Machine-Learning Methods with Importance–Performance Maps to Evaluate Drivers for the Acceptance of New Vaccines: Application to AstraZeneca COVID-19 Vaccine
by Jorge de Andrés-Sánchez, Mar Souto-Romero and Mario Arias-Oliva
AI 2026, 7(1), 34; https://doi.org/10.3390/ai7010034 - 21 Jan 2026
Abstract
Background: The acceptance of new vaccines under uncertainty—such as during the COVID-19 pandemic—poses a major public health challenge because efficacy and safety information is still evolving. Methods: We propose an integrative analytical framework that combines a theory-based model of vaccine acceptance—the cognitive–affective–normative (CAN) model—with machine-learning techniques (decision tree regression, random forest, and Extreme Gradient Boosting) and SHapley Additive exPlanations (SHAP) integrated into an importance–performance map (IPM) to prioritize determinants of vaccination intention. Using survey data collected in Spain in September 2020 (N = 600), when the AstraZeneca vaccine had not yet been approved, we examine the roles of perceived efficacy (EF), fear of COVID-19 (FC), fear of the vaccine (FV), and social influence (SI). Results: EF and SI consistently emerged as the most influential determinants across modelling approaches. Ensemble learners (random forest and Extreme Gradient Boosting) achieved stronger out-of-sample predictive performance than the single decision tree, while decision tree regression provided an interpretable, rule-based representation of the main decision pathways. Exploiting the local nature of SHAP values, we also constructed SHAP-based IPMs for the full sample and for the low-acceptance segment, enhancing the policy relevance of the prioritization exercise. Conclusions: By combining theory-driven structural modelling with predictive and explainable machine learning, the proposed framework offers a transparent and replicable tool to support the design of vaccination communication strategies and can be transferred to other settings involving emerging health technologies. Full article
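A SHAP-based importance–performance map of the kind described can be sketched by placing each driver on an importance axis (mean |SHAP|) and a performance axis (mean observed score); the numbers below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical SHAP values (rows = respondents, cols = drivers) and raw
# survey scores for the four CAN-model drivers. Columns: EF, FC, FV, SI.
shap_vals = np.array([[0.9, 0.1, 0.2, 0.8],
                      [1.1, 0.2, 0.1, 0.6]])
ratings = np.array([[3, 5, 6, 4],
                    [3, 5, 6, 4]])

importance = np.abs(shap_vals).mean(axis=0)   # mean |SHAP| per driver
performance = ratings.mean(axis=0)            # mean observed score per driver

# IPM quadrant rule: high importance but low performance marks a driver
# as a priority target for communication campaigns.
priority = (importance > importance.mean()) & (performance < performance.mean())
print(priority)  # EF and SI flagged in this toy example
```

Exploiting the local nature of SHAP values, the same computation can be repeated on a subsample (e.g., the low-acceptance segment) to build segment-specific maps.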

12 pages, 944 KB  
Perspective
Could You Be Wrong: Metacognitive Prompts for Improving Human Decision Making Help LLMs Identify Their Own Biases
by Thomas T. Hills
AI 2026, 7(1), 33; https://doi.org/10.3390/ai7010033 - 19 Jan 2026
Abstract
Because LLMs are still in development, what is true today may be false tomorrow. We therefore need general strategies for debiasing LLMs that will outlive current models. Strategies developed for debiasing human decision making offer one promising approach, as they incorporate an LLM-style prompt intervention designed to access additional latent knowledge during decision making. LLMs trained on vast amounts of information contain information about potential biases, counter-arguments, and contradictory evidence, but that information may only be brought to bear if appropriately prompted. Metacognitive prompts developed in the human decision making literature are designed to achieve this and, as I demonstrate here, they show promise with LLMs. The prompt I focus on is “could you be wrong?” Following an LLM response, this prompt leads LLMs to produce additional information, including why they answered as they did, identifying errors, biases, contradictory evidence, and alternatives, none of which were present in their initial response. Further, this metaknowledge often reveals that LLMs and users do not interpret prompts in the same way. I demonstrate this prompt in three cases. In the first two cases I use a set of questions taken from recent articles identifying LLM biases, including implicit discriminatory biases and failures of metacognition. “Could you be wrong” prompts the LLM to identify its own biases and produce cogent metacognitive reflection. In the last case I present an example involving convincing but incomplete information about scientific research (the too much choice effect), which is readily corrected by “could you be wrong?” In sum, this work argues that human psychology offers a valuable avenue for prompt engineering, leveraging a long history of effective prompt-based improvements to decision making. Full article
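The two-turn protocol — answer first, then probe with "could you be wrong?" — can be sketched over a generic chat-message list; the {"role", "content"} format is an assumption borrowed from common chat APIs, not something specified by the paper:

```python
def metacognitive_followup(history, probe="Could you be wrong?"):
    """Append the metacognitive probe as a fresh user turn, so the model
    re-examines its own previous answer on the next generation step."""
    return history + [{"role": "user", "content": probe}]

# Toy transcript (contents invented for illustration).
chat = [{"role": "user", "content": "Is the too-much-choice effect settled science?"},
        {"role": "assistant", "content": "Yes, choice overload is well established."}]
chat = metacognitive_followup(chat)
print(chat[-1]["content"])  # Could you be wrong?
```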

18 pages, 773 KB  
Article
A Radiomics-Based Machine Learning Model for Predicting Pneumonitis During Durvalumab Treatment in Locally Advanced NSCLC
by Takeshi Masuda, Daisuke Kawahara, Wakako Daido, Nobuki Imano, Naoko Matsumoto, Kosuke Hamai, Yasuo Iwamoto, Yusuke Takayama, Sayaka Ueno, Masahiko Sumii, Hiroyasu Shoda, Nobuhisa Ishikawa, Masahiro Yamasaki, Yoshifumi Nishimura, Shigeo Kawase, Naoki Shiota, Yoshikazu Awaya, Soichi Kitaguchi, Yuji Murakami, Yasushi Nagata and Noboru Hattori
AI 2026, 7(1), 32; https://doi.org/10.3390/ai7010032 - 16 Jan 2026
Abstract
Introduction: Pneumonitis represents one of the clinically significant adverse events observed in patients with non-small-cell lung cancer (NSCLC) who receive durvalumab as consolidation therapy after chemoradiotherapy (CRT). Although clinical factors such as radiation dose (e.g., V20) and interstitial lung abnormalities (ILAs) have been reported as risk predictors, accurate and objective prognostication remains difficult. This study aimed to develop a radiomics-based machine learning model to predict grade ≥ 2 pneumonitis. Methods: This retrospective study included patients with unresectable NSCLC who received CRT followed by durvalumab. Radiomic features, including first-order, texture, and shape-based features with wavelet transformation, were extracted from whole-lung regions on pre-durvalumab computed tomography (CT) images. Machine learning models, including support vector machine, k-nearest neighbor, neural network, and naïve Bayes classifiers, were developed and evaluated using a testing cohort. Model performance was assessed using five-fold cross-validation. Conventional predictors, including V20 and ILAs, were also assessed using logistic regression and receiver operating characteristic analysis. Results: Among 123 patients, 44 (35.8%) developed grade ≥ 2 pneumonitis. The best-performing model, a support vector machine, achieved an AUC of 0.88 and an accuracy of 0.81, whereas the conventional model showed lower performance, with an AUC of 0.71 and an accuracy of 0.64. Conclusions: Radiomics-based machine learning demonstrated superior performance over clinical parameters in predicting pneumonitis. This approach may enable individualized risk stratification and support early intervention in patients with NSCLC. Full article
(This article belongs to the Section Medical & Healthcare AI)
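The AUC values reported above can be read as the probability that a randomly chosen pneumonitis case receives a higher model score than a randomly chosen non-case (the Mann–Whitney interpretation); a minimal sketch of that computation on toy scores:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    random positive outscores a random negative (ties count as 0.5)."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy risk scores for two cases (label 1) and two non-cases (label 0).
print(auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0]))  # 0.75
```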

41 pages, 1800 KB  
Systematic Review
Explainable Generative AI: A Two-Stage Review of Existing Techniques and Future Research Directions
by Prabha M. Kumarage and Mirka Saarela
AI 2026, 7(1), 31; https://doi.org/10.3390/ai7010031 - 16 Jan 2026
Abstract
Generative Artificial Intelligence (GenAI) models produce increasingly sophisticated outputs, yet their underlying mechanisms remain opaque. To clarify how explainability is conceptualized and implemented in GenAI research, this two-stage review systematically examined 261 articles retrieved from six major databases. After removing duplicates and applying predefined inclusion criteria, 63 articles were retained for full analysis. In the first stage, an umbrella review synthesized insights from 18 review papers to identify prevailing frameworks, strategies, and conceptual challenges surrounding explainability in GenAI. In the second stage, an empirical review analyzed 45 primary studies to assess how explainability is operationalized, evaluated, and applied in practice. Across both stages, findings reveal fragmented approaches, a lack of standardized evaluation frameworks, and persistent challenges, including limited generalizability, interpretability–performance trade-offs, and high computational costs. The review concludes by outlining future research directions aimed at developing user-centric, regulation-aware explainability methods tailored to the unique architectures and application contexts of GenAI. By consolidating theoretical and empirical evidence, this study establishes a comprehensive foundation for advancing transparent, interpretable, and trustworthy GenAI systems. Full article

31 pages, 1485 KB  
Article
Explainable Multi-Modal Medical Image Analysis Through Dual-Stream Multi-Feature Fusion and Class-Specific Selection
by Naeem Ullah, Ivanoe De Falco and Giovanna Sannino
AI 2026, 7(1), 30; https://doi.org/10.3390/ai7010030 - 16 Jan 2026
Abstract
Effective and transparent medical diagnosis relies on accurate and interpretable classification of medical images across multiple modalities. This paper introduces an explainable multi-modal image analysis framework based on a dual-stream architecture that fuses handcrafted descriptors with deep features extracted from a custom MobileNet. Handcrafted descriptors include frequency-domain and texture features, while deep features are summarized using 26 statistical metrics to enhance interpretability. In the fusion stage, complementary features are combined at both the feature and decision levels. Decision-level integration combines calibrated soft voting, weighted voting, and stacking ensembles with optimized classifiers, including decision trees, random forests, gradient boosting, and logistic regression. To further refine performance, a hybrid class-specific feature selection strategy is proposed, combining mutual information, recursive elimination, and random forest importance to select the most discriminative features for each class. This hybrid selection approach eliminates redundancy, improves computational efficiency, and ensures robust classification. Explainability is provided through Local Interpretable Model-Agnostic Explanations, which offer transparent details about the ensemble model’s predictions and link influential handcrafted features to clinically meaningful image characteristics. The framework is validated on three benchmark datasets, i.e., BTTypes (brain MRI), Ultrasound Breast Images, and ACRIMA Retinal Fundus Images, demonstrating generalizability across modalities (MRI, ultrasound, retinal fundus) and disease categories (brain tumor, breast cancer, glaucoma). Full article
(This article belongs to the Special Issue Digital Health: AI-Driven Personalized Healthcare and Applications)
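The dual-stream fusion idea — deep features summarized by statistics, then concatenated with handcrafted descriptors at the feature level — can be sketched as follows; only five statistics are shown here, as the abstract's full set of 26 metrics is not enumerated:

```python
import numpy as np

def summarize(feat_map):
    """Reduce a deep feature map to a few summary statistics (a small,
    assumed subset of the 26 metrics mentioned in the abstract)."""
    f = np.asarray(feat_map, float).ravel()
    return np.array([f.mean(), f.std(), f.min(), f.max(), np.median(f)])

def fuse(handcrafted, deep_map):
    """Feature-level fusion: concatenate handcrafted descriptors
    with the summarized deep features into one vector."""
    return np.concatenate([np.asarray(handcrafted, float), summarize(deep_map)])

# Toy inputs: three handcrafted descriptors plus a 4x4 deep feature map.
v = fuse([0.2, 0.7, 0.1], np.ones((4, 4)))
print(v.shape)  # (8,)
```

The fused vector would then feed the optimized classifiers and voting ensembles described in the abstract.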

23 pages, 5052 KB  
Article
Exploratory Study on Hybrid Systems Performance: A First Approach to Hybrid ML Models in Breast Cancer Classification
by Francisco J. Rojas-Pérez, José R. Conde-Sánchez, Alejandra Morlett-Paredes, Fernando Moreno-Barbosa, Julio C. Ramos-Fernández, José Luna-Muñoz, Genaro Vargas-Hernández, Blanca E. Jaramillo-Loranca, Juan M. Xicotencatl-Pérez and Eucario G. Pérez-Pérez
AI 2026, 7(1), 29; https://doi.org/10.3390/ai7010029 - 15 Jan 2026
Abstract
The classification of breast cancer using machine learning techniques has become a critical tool in modern medical diagnostics. This study analyzes the performance of hybrid models that combine traditional machine learning algorithms (TMLAs) with a convolutional neural network (CNN)-based VGG16 model for feature extraction to improve accuracy for classifying eight breast cancer subtypes (BCS). The methodology consists of three steps. First, image preprocessing is performed on the BreakHis dataset at 400× magnification, which contains 1820 histopathological images classified into eight BCS. Second, the CNN VGG16 is modified to function as a feature extractor that converts images into representative vectors. These vectors constitute the training set for TMLAs, such as Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB), leveraging VGG16’s ability to capture relevant features. Third, k-fold cross-validation is applied to evaluate the model’s performance by averaging the metrics obtained across all folds. The results reveal that hybrid models leveraging a CNN-based VGG16 model for feature extraction, followed by TMLAs, achieve outstanding experimental accuracy. The KNN-based hybrid model stood out with a precision of 0.97, accuracy of 0.96, sensitivity of 0.96, specificity of 0.99, F1-score of 0.96, and ROC-AUC of 0.97. These findings suggest that, with an appropriate methodology, hybrid models based on TMLA have strong potential in classification tasks, offering a balance between performance and predictive capability. Full article
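The hybrid pipeline's final stage — a TMLA such as KNN operating on extracted feature vectors — can be sketched as follows; toy 2-D vectors stand in for VGG16 embeddings, so this is an illustrative sketch, not the study's implementation:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return np.bincount(train_y[nearest]).argmax()

# Toy embeddings for two subtypes (classes 0 and 1).
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.0, 0.5])))  # 0
```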

19 pages, 3161 KB  
Article
Multi-Modal Multi-Stage Multi-Task Learning for Occlusion-Aware Facial Landmark Localisation
by Yean Chun Ng, Alexander G. Belyaev, Florence Choong, Shahrel Azmin Suandi, Joon Huang Chuah and Bhuvendhraa Rudrusamy
AI 2026, 7(1), 28; https://doi.org/10.3390/ai7010028 - 15 Jan 2026
Abstract
Thermal facial imaging enables non-contact measurements of face heat patterns that are valuable for healthcare and affective computing, but common occluders (glasses, masks, scarves) and the single-channel, texture-poor nature of thermal frames make robust landmark localisation and visibility estimation challenging. We propose M3MSTL, a multi-modal, multi-stage, multi-task framework for occlusion-aware landmarking on thermal faces. M3MSTL pairs a ResNet-50 backbone with two lightweight heads: a compact fully connected landmark regressor and a Vision Transformer occlusion classifier that explicitly fuses per-landmark temperature cues. A three-stage curriculum (mask-based backbone pretraining, head specialisation with a frozen trunk, and final joint fine-tuning) stabilises optimisation and improves generalisation from limited thermal data. On the TFD68 dataset, M3MSTL substantially improves both visibility and localisation: the occlusion accuracy reaches 91.8% (baseline 89.7%), the mean NME reaches 0.246 (baseline 0.382), the ROC–AUC reaches 0.974, and the AP is 0.966. Paired statistical tests confirm that these gains are significant. Our approach aims to improve the reliability of temperature-based biometric and clinical measurements in the presence of realistic occluders. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
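The mean NME figure quoted above is the standard landmark-localisation error: mean point-to-point distance divided by a face-size normaliser. A minimal sketch follows; the choice of normaliser (e.g., inter-ocular distance) is an assumption, as TFD68's actual convention is not stated in the abstract:

```python
import numpy as np

def nme(pred, gt, d_norm):
    """Normalized Mean Error for landmarks: mean Euclidean distance
    between predicted and ground-truth points, divided by d_norm
    (a face-size reference such as inter-ocular distance)."""
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    return err.mean() / d_norm

# Two toy landmarks: one exact, one off by 5 px, with a 10 px normaliser.
print(nme([[0, 0], [3, 4]], [[0, 0], [0, 0]], d_norm=10))  # 0.25
```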

33 pages, 582 KB  
Article
In Silico Proof of Concept: Conditional Deep Learning-Based Prediction of Short Mitochondrial DNA Fragments in Archosaurs
by Dimitris Angelakis, Dionisis Cavouras, Dimitris Th. Glotsos, Spiros A. Kostopoulos, Emmanouil I. Athanasiadis, Ioannis K. Kalatzis and Pantelis A. Asvestas
AI 2026, 7(1), 27; https://doi.org/10.3390/ai7010027 - 14 Jan 2026
Abstract
This study presents an in silico proof of concept exploring whether deep learning models can perform conditional mitochondrial DNA (mtDNA) sequence prediction across species boundaries. A CNN–BiLSTM model was trained under a leave-one-species-out (LOSO) scheme on complete mitochondrial genomes from 21 vertebrate species, primarily archosaurs. Model behavior was evaluated through multiple complementary tests. Under context-conditioned settings, the model performed next-nucleotide prediction using overlapping 200 bp windows to assemble contiguous 2000 bp fragments for held-out species; the resulting high token-level accuracy (>99%) under teacher forcing is reported as a diagnostic of conditional modeling capacity. To assess leakage-free performance, a two-flank masked-span imputation task was conducted as the primary evaluation, requiring free-running reconstruction of 500 bp interior spans using only distal flanking context; in this setting, the model consistently outperformed nearest-neighbor and demonstrated competitive performance relative to flank-copy baselines. Additional robustness analyses examined sensitivity to window placement, genomic region (coding versus D-loop), and random initialization. Biological plausibility was further assessed by comparing predicted fragments to reconstructed ancestral sequences and against composition-matched null models, where observed identities significantly exceeded null expectations. Using the National Center for Biotechnology Information (NCBI) BLAST web interface, BLASTn species identification was performed solely as a biological plausibility check, recovering the correct species as the top hit in all cases. Although limited by dataset size and the absence of ancient DNA damage modeling, these results demonstrate the feasibility of conditional mtDNA sequence prediction as an initial step toward more advanced generative and evolutionary modeling frameworks. Full article
(This article belongs to the Special Issue Transforming Biomedical Innovation with Artificial Intelligence)
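The next-nucleotide setup described above (overlapping 200 bp context windows) can be sketched as a simple pairing of each context string with the base that follows it; a short context length is used here only to keep the example readable:

```python
def context_windows(seq, ctx=200):
    """Build (context, next-base) training pairs for next-nucleotide
    prediction, sliding one base at a time across the sequence."""
    return [(seq[i:i + ctx], seq[i + ctx]) for i in range(len(seq) - ctx)]

pairs = context_windows("ACGTACGT", ctx=3)
print(len(pairs), pairs[0])  # 5 ('ACG', 'T')
```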

28 pages, 2594 KB  
Review
From Algorithm to Medicine: AI in the Discovery and Development of New Drugs
by Ana Beatriz Lopes, Célia Fortuna Rodrigues and Francisco A. M. Silva
AI 2026, 7(1), 26; https://doi.org/10.3390/ai7010026 - 14 Jan 2026
Abstract
The discovery and development of new drugs is a lengthy, complex, and costly process, often requiring 10–20 years to progress from initial concept to market approval, with clinical trials representing the most resource-intensive stage. In recent years, Artificial Intelligence (AI) has emerged as a transformative technology capable of reshaping the entire pharmaceutical research and development (R&D) pipeline. The purpose of this narrative review is to examine the role of AI in drug discovery and development, highlighting its contributions, challenges, and future implications for pharmaceutical sciences and global public health. A comprehensive review of the scientific literature was conducted, focusing on published studies, reviews, and reports addressing the application of AI across the stages of drug discovery, preclinical development, clinical trials, and post-marketing surveillance. Key themes were identified, including AI-driven target identification, molecular screening, de novo drug design, predictive toxicity modelling, and clinical monitoring. The reviewed evidence indicates that AI has significantly accelerated drug discovery and development by reducing timeframes, costs, and failure rates. AI-based approaches have enhanced the efficiency of target identification, optimized lead compound selection, improved safety predictions, and supported adaptive clinical trial designs. Collectively, these advances position AI as a catalyst for innovation, particularly in promoting accessible, efficient, and sustainable healthcare solutions. However, substantial challenges remain, including reliance on high-quality and representative biomedical data, limited algorithmic transparency, high implementation costs, regulatory uncertainty, and ethical and legal concerns related to data privacy, bias, and equitable access. 
In conclusion, AI represents a paradigm shift in pharmaceutical research and drug development, offering unprecedented opportunities to improve efficiency and innovation. Addressing its technical, ethical, and regulatory limitations will be essential to fully realize its potential as a sustainable and globally impactful tool for therapeutic innovation. Full article
(This article belongs to the Special Issue Transforming Biomedical Innovation with Artificial Intelligence)

26 pages, 911 KB  
Article
Pedagogical Transformation Using Large Language Models in a Cybersecurity Course
by Rodolfo Ostos, Vanessa G. Félix, Luis J. Mena, Homero Toral-Cruz, Alberto Ochoa-Brust, Apolinar González-Potes, Ramón A. Félix, Julio C. Ramírez Pacheco, Víctor Flores and Rafael Martínez-Peláez
AI 2026, 7(1), 25; https://doi.org/10.3390/ai7010025 - 13 Jan 2026
Abstract
Large Language Models (LLMs) are increasingly used in higher education, but their pedagogical role in fields like cybersecurity remains under-investigated. This research explores integrating LLMs into a university cybersecurity course using a designed pedagogical approach based on active learning, problem-based learning (PBL), and computational thinking (CT). Instead of viewing LLMs as definitive sources of knowledge, the framework sees them as cognitive tools that support reasoning, clarify ideas, and assist technical problem-solving while maintaining human judgment and verification. The study uses a qualitative, practice-based case study over three semesters. It features four activities focusing on understanding concepts, installing and configuring tools, automating procedures, and clarifying terminology, all incorporating LLM use in individual and group work. Data collection involved classroom observations, team reflections, and iterative improvements guided by action research. Results show that LLMs can provide valuable, customized support when students actively engage in refining, validating, and solving problems through iteration. LLMs are especially helpful for clarifying concepts and explaining procedures during moments of doubt or failure. Still, common issues like incomplete instructions, mismatched context, and occasional errors highlight the importance of verifying LLM outputs with trusted sources. Interestingly, these limitations often act as teaching opportunities, encouraging critical thinking crucial in cybersecurity. Ultimately, this study offers empirical evidence of human–AI collaboration in education, demonstrating how LLMs can enrich active learning. Full article
(This article belongs to the Special Issue How Is AI Transforming Education?)

20 pages, 1544 KB  
Article
No Free Lunch in Language Model Bias Mitigation? Targeted Bias Reduction Can Exacerbate Unmitigated LLM Biases
by Shireen Chand, Faith Baca and Emilio Ferrara
AI 2026, 7(1), 24; https://doi.org/10.3390/ai7010024 - 13 Jan 2026
Abstract
Large Language Models (LLMs) inherit societal biases from their training data, potentially leading to harmful outputs. While various techniques aim to mitigate these biases, their effects are typically evaluated only along the targeted dimension, leaving cross-dimensional consequences unexplored. This work provides the first systematic quantification of cross-category spillover effects in LLM bias mitigation. We evaluate four bias mitigation techniques (Logit Steering, Activation Patching, BiasEdit, Prompt Debiasing) across ten models from seven families, measuring impact on race-, religion-, profession-, and gender-related biases using the StereoSet benchmark. Across 160 experiments yielding 640 evaluations, we find that targeted interventions cause collateral degradation of model coherence and performance along debiasing objectives in 31.5% of untargeted-dimension evaluations. These findings provide empirical evidence that debiasing improvements along one dimension can come at the cost of degradation in others. We introduce a multi-dimensional auditing framework and demonstrate that single-target evaluations mask potentially severe spillover effects, underscoring the need for robust, multi-dimensional evaluation tools when examining and developing bias mitigation strategies to avoid inadvertently shifting or worsening bias along untargeted axes. Full article
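The multi-dimensional audit the abstract argues for can be illustrated with a minimal sketch (not the paper's code): after an intervention targeting one dimension, check every *other* dimension for drift away from the neutral point. The scores below are hypothetical StereoSet-style stereotype scores where 50 is ideal; `audit_spillover`, the threshold `tol`, and all values are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): auditing a bias-mitigation
# intervention across ALL dimensions, not just the targeted one.
# Scores are hypothetical StereoSet-style stereotype scores (50 = neutral).

def audit_spillover(before, after, target, tol=1.0):
    """Return dimensions (other than `target`) whose distance from the
    neutral score of 50 grew by more than `tol` after mitigation."""
    degraded = []
    for dim in before:
        if dim == target:
            continue
        drift_before = abs(before[dim] - 50.0)
        drift_after = abs(after[dim] - 50.0)
        if drift_after - drift_before > tol:
            degraded.append(dim)
    return degraded

# Hypothetical scores: mitigating gender bias worsens the religion dimension.
before = {"gender": 62.0, "race": 58.0, "religion": 55.0, "profession": 57.0}
after  = {"gender": 53.0, "race": 58.5, "religion": 60.0, "profession": 57.2}

print(audit_spillover(before, after, target="gender"))  # → ['religion']
```

A single-target evaluation here would report only the gender improvement (62 → 53) and miss the religion degradation entirely, which is the masking effect the paper quantifies.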

42 pages, 4198 KB  
Systematic Review
Machine Learning and Deep Learning in Lung Cancer Diagnostics: A Systematic Review of Technical Breakthroughs, Clinical Barriers, and Ethical Imperatives
by Mobarak Abumohsen, Enrique Costa-Montenegro, Silvia García-Méndez, Amani Yousef Owda and Majdi Owda
AI 2026, 7(1), 23; https://doi.org/10.3390/ai7010023 - 11 Jan 2026
Viewed by 543
Abstract
The use of machine learning (ML) and deep learning (DL) in lung cancer detection and classification offers great promise for improving early diagnosis and reducing death rates. Despite major advances in research, there is still a significant gap between successful model development and [...] Read more.
The use of machine learning (ML) and deep learning (DL) in lung cancer detection and classification offers great promise for improving early diagnosis and reducing mortality. Despite major advances in research, there is still a significant gap between successful model development and clinical use. This review identifies the main obstacles preventing ML/DL tools from being adopted in real healthcare settings and offers practical recommendations for addressing them. Using PRISMA guidelines, we examined over 100 studies published between 2022 and 2024, focusing on technical accuracy, clinical relevance, and ethical aspects. Most of the reviewed studies rely on computed tomography (CT) imaging, reflecting its dominant role in current lung cancer screening workflows. While many models achieve high performance on public datasets (e.g., >95% sensitivity on LUNA16), they often perform poorly on real clinical data due to issues like domain shift and bias, especially toward underrepresented groups. Promising solutions include federated learning for data privacy, synthetic data to support rare subtypes, and explainable AI to build trust. We also present a checklist to guide the development of clinically applicable tools, emphasizing generalizability, transparency, and workflow integration. The study recommends early collaboration between developers, clinicians, and policymakers to ensure practical adoption. Ultimately, for ML/DL solutions to gain clinical acceptance, they must be designed with healthcare professionals from the beginning. Full article

26 pages, 3995 KB  
Article
Neural Vessel Segmentation and Gaussian Splatting for 3D Reconstruction of Cerebral Angiography
by Oleh Kryvoshei, Patrik Kamencay and Ladislav Polak
AI 2026, 7(1), 22; https://doi.org/10.3390/ai7010022 - 10 Jan 2026
Viewed by 373
Abstract
Cerebrovascular diseases are a leading cause of global mortality, underscoring the need for objective and quantitative 3D visualization of cerebral vasculature from dynamic imaging modalities. Conventional analysis is often labor-intensive, subjective, and prone to errors due to image noise and subtraction artifacts. This [...] Read more.
Cerebrovascular diseases are a leading cause of global mortality, underscoring the need for objective and quantitative 3D visualization of cerebral vasculature from dynamic imaging modalities. Conventional analysis is often labor-intensive, subjective, and prone to errors due to image noise and subtraction artifacts. This study tackles the challenge of achieving fast and accurate volumetric reconstruction from angiography sequences. We propose a multi-stage pipeline that begins with image restoration to enhance input quality, followed by neural segmentation to extract vascular structures. Camera poses and sparse geometry are estimated through Structure-from-Motion, and these reconstructions are refined by leveraging the segmentation maps to isolate vessel-specific features. The resulting data are then used to initialize and optimize a 3D Gaussian Splatting model, enabling anatomically precise representation of cerebral vasculature. The integration of deep neural segmentation priors with explicit geometric initialization yields highly detailed 3D reconstructions of cerebral angiography. The resulting models leverage the computational efficiency of 3D Gaussian Splatting, achieving near-real-time rendering performance competitive with state-of-the-art reconstruction methods. Brain-vessel segmentation using nnU-Net and our trained model achieved an accuracy of 84.21%, demonstrating the performance improvement of the proposed approach. Overall, our pipeline significantly improves both the efficiency and accuracy of volumetric cerebral vasculature reconstruction, providing a robust foundation for quantitative clinical analysis and enhanced guidance during endovascular procedures. Full article

40 pages, 9015 KB  
Article
Wildfire Probability Mapping in Southeastern Europe Using Deep Learning and Machine Learning Models Based on Open Satellite Data
by Uroš Durlević, Velibor Ilić and Bojana Aleksova
AI 2026, 7(1), 21; https://doi.org/10.3390/ai7010021 - 9 Jan 2026
Viewed by 511
Abstract
Wildfires, which encompass all fires that occur outside urban areas, represent one of the most frequent forms of natural disaster worldwide. This study presents the wildfire occurrence across the territory of Southeastern Europe, covering an area of 800,000 km2 (Greece, Romania, Serbia, [...] Read more.
Wildfires, which encompass all fires that occur outside urban areas, represent one of the most frequent forms of natural disaster worldwide. This study maps wildfire occurrence across Southeastern Europe, covering an area of 800,000 km² (Greece, Romania, Serbia, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, Albania, North Macedonia, Bulgaria, and Moldova). The research applies geospatial artificial intelligence techniques, based on the integration of machine learning (Random Forest (RF), XGBoost), deep learning (Deep Neural Network (DNN), Kolmogorov–Arnold Networks (KAN)), remote sensing (Sentinel-2, VIIRS), and Geographic Information Systems (GIS). From the geospatial database, 11 natural and anthropogenic criteria were analyzed, along with a wildfire inventory comprising 28,952 historical fire events. The results revealed that areas of very high susceptibility were most prevalent in Greece (10.5%), while the smallest susceptibility percentage was recorded in Slovenia (0.2%). Among the applied models, RF demonstrated the highest predictive performance (AUC = 90.7%), whereas XGBoost, DNN, and KAN achieved AUC values ranging from 86.7% to 90.5%. Through a SHAP analysis, it was determined that the most influential factors were global horizontal irradiation, elevation, and distance from settlements. The obtained results hold international significance for the implementation of preventive wildfire protection measures. Full article
(This article belongs to the Special Issue AI Applications in Emergency Response and Fire Safety)
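The AUC figures the abstract reports (e.g., RF at 90.7%) come from the standard rank-based definition, which a short self-contained sketch can make concrete. The labels and scores below are invented for illustration; in the study the susceptibility scores would come from the trained RF, XGBoost, DNN, or KAN models.

```python
# Minimal sketch of the evaluation step: computing ROC AUC for a wildfire
# susceptibility model from predicted probabilities. Data are made up;
# the paper's models (RF, XGBoost, DNN, KAN) would supply real scores.

def roc_auc(labels, scores):
    """Rank-based AUC: probability that a random positive (fire) cell
    scores higher than a random negative (no-fire) cell."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical fire / no-fire labels and model susceptibility scores.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.5, 0.7, 0.1]

print(round(roc_auc(labels, scores), 3))  # → 0.938
```

In practice a library routine (e.g., scikit-learn's `roc_auc_score`) would be used, but the pairwise-comparison form above is the quantity being estimated.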

18 pages, 10421 KB  
Article
A Deep Learning Framework with Multi-Scale Texture Enhancement and Heatmap Fusion for Face Super Resolution
by Bing Xu, Lei Wang, Yanxia Wu, Xiaoming Liu and Lu Gan
AI 2026, 7(1), 20; https://doi.org/10.3390/ai7010020 - 9 Jan 2026
Viewed by 436
Abstract
Face super-resolution (FSR) has made great progress thanks to deep learning and facial priors. However, many existing methods do not fully exploit landmark heatmaps and lack effective multi-scale texture modeling, which often leads to texture loss and artifacts under large upscaling factors. To [...] Read more.
Face super-resolution (FSR) has made great progress thanks to deep learning and facial priors. However, many existing methods do not fully exploit landmark heatmaps and lack effective multi-scale texture modeling, which often leads to texture loss and artifacts under large upscaling factors. To address these problems, we propose a Multi-Scale Residual Stacking Network (MRSNet), which integrates multi-scale texture enhancement with multi-stage heatmap fusion. The MRSNet is built upon Residual Attention-Guided Units (RAGUs) and incorporates a Face Detail Enhancer (FDE), which applies edge, texture, and region branches to achieve differentiated enhancement across facial components. Furthermore, we design a Multi-Scale Texture Enhancement Module (MTEM) that employs progressive average pooling to construct hierarchical receptive fields and employs heatmap-guided attention for adaptive texture refinement. In addition, we introduce a multi-stage heatmap fusion strategy that injects landmark priors into multiple phases of the network, including feature extraction, texture enhancement, and detail reconstruction, enabling deep sharing and progressive integration of prior knowledge. Extensive experiments on CelebA and Helen demonstrate that the proposed method achieves superior detail recovery and generates perceptually realistic high-resolution face images. Both quantitative and qualitative evaluations confirm that our approach outperforms state-of-the-art methods. Full article

30 pages, 1553 KB  
Article
Combining User and Venue Personality Proxies with Customers’ Preferences and Opinions to Enhance Restaurant Recommendation Performance
by Andreas Gregoriades, Herodotos Herodotou, Maria Pampaka and Evripides Christodoulou
AI 2026, 7(1), 19; https://doi.org/10.3390/ai7010019 - 9 Jan 2026
Viewed by 298
Abstract
Recommendation systems are popular information systems that help consumers manage information overload. Whilst personality has been recognised as an important factor influencing consumers’ choice, it has not yet been fully exploited in recommendation systems. This study proposes a restaurant recommendation approach that integrates [...] Read more.
Recommendation systems are popular information systems that help consumers manage information overload. Whilst personality has been recognised as an important factor influencing consumers’ choice, it has not yet been fully exploited in recommendation systems. This study proposes a restaurant recommendation approach that integrates customer personality traits, opinions and preferences, extracted either directly from online review platforms or derived from electronic word of mouth (eWOM) text using information extraction techniques. The proposed method leverages the concept of venue personality grounded in personality–brand congruence theory, which posits that customers are more satisfied with brands whose personalities align with their own. A novel model is introduced that combines fine-tuned BERT embeddings with linguistic features to infer users’ personality traits from the text of their reviews. Customers’ preferences are identified using a custom named-entity recogniser, while their opinions are extracted through structural topic modelling. The overall framework integrates neural collaborative filtering (NCF) features with both directly observed and derived information from eWOM to train an extreme gradient boosting (XGBoost) regression model. The proposed approach is compared to baseline collaborative filtering methods and state-of-the-art neural network techniques commonly used in industry. Results across multiple performance metrics demonstrate that incorporating personality, preferences and opinions significantly improves recommendation performance. Full article
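One natural way to operationalize the personality–brand congruence idea the abstract describes is to compare a user's inferred Big Five trait vector with a venue's, e.g., via cosine similarity, and feed the result to the downstream recommender as a feature. This is a hedged illustration only: the function name, the similarity choice, and all trait values are assumptions, not the paper's method (the paper infers traits from review text with a fine-tuned BERT model).

```python
# Hedged illustration of a personality-congruence feature: cosine
# similarity between user and venue Big Five trait vectors. All values
# are invented; the paper derives traits from eWOM text.

import math

def congruence(user_traits, venue_traits):
    """Cosine similarity between two Big Five trait vectors."""
    dot = sum(u * v for u, v in zip(user_traits, venue_traits))
    nu = math.sqrt(sum(u * u for u in user_traits))
    nv = math.sqrt(sum(v * v for v in venue_traits))
    return dot / (nu * nv)

# Openness, conscientiousness, extraversion, agreeableness, neuroticism.
user  = [0.8, 0.6, 0.7, 0.5, 0.3]
venue = [0.7, 0.5, 0.8, 0.6, 0.2]
print(round(congruence(user, venue), 3))
```

A scalar like this slots in alongside the NCF and topic-model features when training the XGBoost regressor the abstract mentions.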

34 pages, 1434 KB  
Review
Artificial Intelligence Driven Smart Hierarchical Control for Micro Grids―A Comprehensive Review
by Thamilmaran Alwar and Prabhakar Karthikeyan Shanmugam
AI 2026, 7(1), 18; https://doi.org/10.3390/ai7010018 - 8 Jan 2026
Viewed by 474
Abstract
The increasing demand for energy combined with depleting conventional energy sources has led to the evolution of distributed generation using renewable energy sources. Integrating these distributed generations with the existing grid is a complicated task, as it risks the stability and synchronisation of [...] Read more.
The increasing demand for energy combined with depleting conventional energy sources has led to the evolution of distributed generation (DG) using renewable energy sources. Integrating these DGs with the existing grid is a complicated task, as it risks the stability and synchronisation of the system. Microgrids (MGs) have evolved as a concrete solution for integrating these DGs into the existing system with the ability to operate in either grid-connected or islanded modes, thereby improving reliability and increasing grid functionality. However, owing to the intermittent nature of renewable energy sources, managing the energy balance and its coordination with the grid is a strenuous task. The hierarchical control structure paves the way for managing the dynamic performance of MGs, including economic aspects. However, this structure lacks the ability to provide effective solutions because of the increased complexity and system dynamics. The incorporation of artificial intelligence techniques for the control of MGs has gained attention over the past decade to enhance their functionality and operation. Therefore, this paper presents a critical review of various artificial intelligence (AI) techniques that have been implemented for the hierarchical control of MGs and their significance, along with the basic control strategy. Full article

18 pages, 9181 KB  
Article
Automatic Optimization of Industrial Robotic Workstations for Sustainable Energy Consumption
by Rostislav Wierbica, Jakub Krejčí, Ján Babjak, Tomáš Kot, Václav Krys and Zdenko Bobovský
AI 2026, 7(1), 17; https://doi.org/10.3390/ai7010017 - 8 Jan 2026
Viewed by 373
Abstract
Industrial robotic workstations contribute substantially to the total energy demand of modern manufacturing, yet most existing energy-saving approaches focus on modifying robot trajectories, motion parameters, or the position of the robot’s base. This paper proposes a novel methodology for the automatic optimization of [...] Read more.
Industrial robotic workstations contribute substantially to the total energy demand of modern manufacturing, yet most existing energy-saving approaches focus on modifying robot trajectories, motion parameters, or the position of the robot’s base. This paper proposes a novel methodology for the automatic optimization of the spatial placement of a fixed technological trajectory within the robot workspace, without altering the task itself. The method combines pre-simulation filtering of infeasible configurations, large-scale energy simulation in ABB RobotStudio, and real measurement using a dual acquisition system consisting of the robot’s controller and an external power meter. A digital twin of the workstation is used to systematically evaluate thousands of candidate positions of a standardized trajectory. Experimental validation on an ABB IRB 1600–10/1.2 confirms a 23.4% difference in total energy consumption between two workspace configurations selected from the simulation study. The non-optimal configuration exhibits higher current draw, greater power variability, and a more intensive warm-up phase, indicating increased mechanical loading arising purely from geometric placement. By providing a scalable, trajectory-preserving approach grounded in digital-twin analysis and IoT-based measurement, this work establishes a data foundation for future AI-driven predictive and adaptive energy optimization in smart manufacturing environments. Full article
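The core search the abstract describes (evaluating thousands of candidate placements of a fixed trajectory and keeping the most energy-efficient one) can be sketched in a few lines. Everything here is a toy stand-in: `energy_model` replaces the ABB RobotStudio simulation, and the grid, units, and optimum location are invented for illustration.

```python
# Toy sketch of the placement search: evaluate a fixed trajectory at many
# candidate (x, y) positions in the workspace and keep the cheapest one.
# energy_model is a stand-in for the RobotStudio simulation in the paper.

def energy_model(x, y):
    """Hypothetical energy cost (kJ) of running the fixed trajectory
    with its origin at (x, y); cheapest near (0.6, 0.2) by construction."""
    return 10.0 + 4.0 * (x - 0.6) ** 2 + 6.0 * (y - 0.2) ** 2

def best_placement(xs, ys):
    """Exhaustive search over the feasible grid of trajectory origins."""
    candidates = ((energy_model(x, y), (x, y)) for x in xs for y in ys)
    return min(candidates)

grid = [i / 10 for i in range(0, 11)]  # 0.0 .. 1.0 m in 0.1 m steps
cost, pos = best_placement(grid, grid)
print(pos, round(cost, 2))  # → (0.6, 0.2) 10.0
```

The paper's pipeline adds what this sketch omits: pre-filtering kinematically infeasible placements before simulation, and validating the simulated ranking against dual power measurements on the real IRB 1600.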

28 pages, 3824 KB  
Article
Comparison Between Early and Intermediate Fusion of Multimodal Techniques: Lung Disease Diagnosis
by Ahad Alloqmani and Yoosef B. Abushark
AI 2026, 7(1), 16; https://doi.org/10.3390/ai7010016 - 7 Jan 2026
Viewed by 407
Abstract
Early and accurate diagnosis of lung diseases is essential for effective treatment and patient management. Conventional diagnostic models trained on a single data type often miss important clinical information. This study explored a multimodal deep learning framework that integrates cough sounds, chest radiograph [...] Read more.
Early and accurate diagnosis of lung diseases is essential for effective treatment and patient management. Conventional diagnostic models trained on a single data type often miss important clinical information. This study explored a multimodal deep learning framework that integrates cough sounds, chest X-ray radiographs (CXRs), and computed tomography (CT) scans to enhance disease classification performance. Two fusion strategies, early and intermediate fusion, were implemented and evaluated against three single-modality baselines. The datasets were collected from different sources. Each dataset underwent preprocessing steps, including noise removal, grayscale conversion, image cropping, and class balancing, to ensure data quality. Convolutional neural network (CNN) and Extreme Inception (Xception) architectures were used for feature extraction and classification. The results show that multimodal learning achieves superior performance compared with single-modality models. The intermediate fusion model achieved 98% accuracy, while the early fusion model reached 97%. In contrast, single CXR and CT models achieved 94%, and the cough sound model achieved 79%. These results confirm that multimodal integration, particularly intermediate fusion, offers a more reliable framework for automated lung disease diagnosis. Full article
(This article belongs to the Section Medical & Healthcare AI)

37 pages, 1157 KB  
Review
Deploying LLM Transformer on Edge Computing Devices: A Survey of Strategies, Challenges, and Future Directions
by Endah Kristiani, Vinod Kumar Verma and Chao-Tung Yang
AI 2026, 7(1), 15; https://doi.org/10.3390/ai7010015 - 7 Jan 2026
Viewed by 885
Abstract
The intersection of edge computing, Large Language Models (LLMs), and the Transformer architecture is a very active and fascinating area of research. The core tension is that LLMs, which are built on the Transformer architecture, are massive and computationally intensive, while edge devices [...] Read more.
The intersection of edge computing, Large Language Models (LLMs), and the Transformer architecture is a highly active area of research. The core tension is that LLMs, which are built on the Transformer architecture, are massive and computationally intensive, while edge devices are resource-constrained in terms of power, memory, and processing capabilities. Therefore, LLMs based on the Transformer architecture are inherently unsuitable for edge computing in their original, full-sized form. They were designed for powerful, resource-rich cloud data centers. However, there is a massive and growing effort to make them suitable for edge devices. Implementing Transformer-based LLMs on edge computing devices is a complex but crucial task that requires a multi-faceted strategy. This paper reviews LLM deployment strategies for Transformer models on edge computing devices, examines the challenges, and outlines future directions. To address these challenges, researchers are exploring methods to compress LLMs and optimize their inference capabilities, making them more efficient for edge environments. Recent advancements in compact LLMs have shown promise in enhancing their deployment on edge devices, enabling improved performance while addressing the limitations of traditional models. This approach not only reduces computational costs but also enhances user privacy and security. Full article
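One of the compression techniques such surveys typically cover is low-bit weight quantization. The sketch below shows only the round-trip arithmetic of symmetric per-tensor int8 quantization; real edge deployments rely on optimized library kernels, and the weight values here are invented.

```python
# Hedged sketch of symmetric int8 weight quantization, one of the
# compression techniques used to shrink Transformer LLMs for the edge.
# Toy values only; production code uses library quantization kernels.

def quantize_int8(weights):
    """Map floats to int8 codes with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(err, 4))  # → [50, -127, 3, 100] 0.0
```

Storing one int8 code per weight instead of a 32-bit float gives roughly a 4x memory reduction, which is the kind of saving that makes on-device inference feasible at all.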

31 pages, 1534 KB  
Article
Causal Reasoning and Large Language Models for Military Decision-Making: Rethinking the Command Structures in the Era of Generative AI
by Dimitrios Doumanas, Andreas Soularidis and Konstantinos Kotis
AI 2026, 7(1), 14; https://doi.org/10.3390/ai7010014 - 7 Jan 2026
Viewed by 645
Abstract
Military decision-making is inherently complex and highly critical, requiring commanders to assess multiple variables in real-time, anticipate second-order effects, and adapt strategies based on continuously evolving battlefield conditions. Traditional approaches rely on domain expertise, experience, and intuition, often supported by decision-support systems designed [...] Read more.
Military decision-making is inherently complex and highly critical, requiring commanders to assess multiple variables in real-time, anticipate second-order effects, and adapt strategies based on continuously evolving battlefield conditions. Traditional approaches rely on domain expertise, experience, and intuition, often supported by decision-support systems designed by military experts. With the rapid advancement of Large Language Models (LLMs) such as ChatGPT, Claude, and DeepSeek, a new research question emerges: can LLMs perform causal reasoning at a level that could meaningfully replace human decision-makers, or should they remain human-led decision-support tools in high-stakes environments? This paper explores the causal reasoning capabilities of LLMs for operational and strategic military decisions. Unlike conventional AI models that rely primarily on correlation-based predictions, LLMs are now able to engage in multi-perspective reasoning, intervention analysis, and scenario-based assessments. We introduce a structured empirical evaluation framework to assess LLM performance through 10 de-identified real-world-inspired battle scenarios, ensuring models reason over provided inputs rather than memorized data. Critically, LLM outputs are systematically compared against a human expert baseline, composed of military officers across multiple ranks and years of operational experience. The evaluation focuses on precision, recall, causal reasoning depth, adaptability, and decision soundness. Our findings provide a rigorous comparative assessment of whether carefully prompted LLMs can assist, complement, or approach expert-level performance in military planning. While fully autonomous AI-led command remains premature, the results suggest that LLMs can offer valuable support in complex decision processes when integrated as part of hybrid human-AI decision-support frameworks. 
This paradigm shift, which our evaluation directly tests, raises a fundamental question: can high-ranking officers and commanders be fully replaced in leading critical military operations, or should AI-driven tools remain decision-support systems that enhance human-driven battlefield strategies? Full article

16 pages, 485 KB  
Article
Multi-Agent Transfer Learning Based on Contrastive Role Relationship Representation
by Zixuan Wu, Jintao Wu and Jiajia Zhang
AI 2026, 7(1), 13; https://doi.org/10.3390/ai7010013 - 6 Jan 2026
Viewed by 528
Abstract
This paper presents the Multi-agent Transfer Learning Based on Contrastive Role Relationship Representation (MCRR), focusing on the unique function of role mechanisms in cross-task knowledge transfer. The framework employs contrastive learning-driven role representation modeling to capture the differences and commonalities of agent behavior [...] Read more.
This paper presents the Multi-agent Transfer Learning Based on Contrastive Role Relationship Representation (MCRR), focusing on the unique function of role mechanisms in cross-task knowledge transfer. The framework employs contrastive learning-driven role representation modeling to capture the differences and commonalities of agent behavior patterns among multiple tasks. We generate generalizable role representations and embed them into transfer policy networks, enabling agents to efficiently share role assignment knowledge during source task training and achieve policy transfer through precise role adaptation in unseen tasks. Unlike traditional methods relying on the generalization ability of neural networks, MCRR breaks through the coordination bottleneck in multi-agent systems for dynamic team collaboration by explicitly modeling role dynamics among tasks and constructing a cross-task role contrast model. In the SMAC benchmark task series, including mixed formations and quantity variations, MCRR significantly improves win rates in both source and unseen tasks. By outperforming mainstream baselines like MATTAR and UPDeT, MCRR validates the effectiveness of roles as a bridge for knowledge transfer. Full article
(This article belongs to the Section AI in Autonomous Systems)

7 pages, 601 KB  
Editorial
Artificial Intelligence and Machine Learning for Smart and Sustainable Agriculture
by Arslan Munir
AI 2026, 7(1), 12; https://doi.org/10.3390/ai7010012 - 6 Jan 2026
Viewed by 450
Abstract
Agriculture is entering a profound period of transformation, driven by the accelerating integration of artificial intelligence (AI), machine learning, computer vision, autonomous sensing, and data-driven decision support [...] Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

21 pages, 1190 KB  
Review
AI-Driven Advances in Precision Oncology: Toward Optimizing Cancer Diagnostics and Personalized Treatment
by Luka Bulić, Petar Brlek, Nenad Hrvatin, Eva Brenner, Vedrana Škaro, Petar Projić, Sunčica Andreja Rogan, Marko Bebek, Parth Shah and Dragan Primorac
AI 2026, 7(1), 11; https://doi.org/10.3390/ai7010011 - 4 Jan 2026
Viewed by 1046
Abstract
Cancer remains one of the main global public health challenges, with rising incidence and mortality rates demanding more effective diagnostic and therapeutic approaches. Recent advances in artificial intelligence (AI) have positioned it as a transformative force in oncology, offering the ability to process [...] Read more.
Cancer remains one of the main global public health challenges, with rising incidence and mortality rates demanding more effective diagnostic and therapeutic approaches. Recent advances in artificial intelligence (AI) have positioned it as a transformative force in oncology, offering the ability to process vast and complex datasets that extend beyond human analytic capabilities. By integrating radiological, histopathological, genomic, and clinical data, AI enables more precise tumor characterization, including refined molecular classification, thereby improving risk stratification and facilitating individualized therapeutic decisions. In diagnostics, AI-driven image analysis platforms have demonstrated excellent performance, particularly in radiology and pathology. Prognostic algorithms are increasingly applied to predict survival, recurrence, and treatment response, while reinforcement learning models are being explored for dynamic radiotherapy and optimization of complex treatment regimens. Beyond direct patient care, AI is accelerating drug discovery and clinical trial design, reducing costs and timelines associated with translating novel therapies into clinical practice. Clinical decision support systems are gradually being integrated into practice, assisting physicians in managing the growing complexity of cancer care. Despite this progress, challenges such as data quality, interoperability, algorithmic bias, and the opacity of complex models limit widespread integration. Additionally, ethical and regulatory hurdles must be addressed to ensure that AI applications are safe, equitable, and clinically effective. Nevertheless, the trajectory of current research suggests that AI will play an increasingly important role in the evolution of precision oncology, complementing human expertise and improving patient outcomes. Full article

44 pages, 657 KB  
Review
Applications of Artificial Intelligence in Dental Malocclusion: A Scoping Review of Recent Advances (2020–2025)
by Man Hung, Owen Cohen, Nicholas Beasley, Cairo Ziebarth, Connor Schwartz, Alicia Parry and Martin S. Lipsky
AI 2026, 7(1), 10; https://doi.org/10.3390/ai7010010 - 31 Dec 2025
Viewed by 568
Abstract
Introduction: Dental malocclusion affects more than half of the global population, causing significant functional and esthetic consequences. The integration of artificial intelligence (AI) into orthodontic care for malocclusion has the potential to enhance diagnostic accuracy, treatment planning, and clinical efficiency. However, existing research [...] Read more.
Introduction: Dental malocclusion affects more than half of the global population, causing significant functional and esthetic consequences. The integration of artificial intelligence (AI) into orthodontic care for malocclusion has the potential to enhance diagnostic accuracy, treatment planning, and clinical efficiency. However, existing research remains fragmented, and recent advances have not been comprehensively synthesized. This scoping review aimed to map the current landscape of AI applications in dental malocclusion from 2020 to 2025. Methods: The review followed the Joanna Briggs Institute methodology and the PRISMA-ScR guidelines. The authors conducted a systematic search across four databases (PubMed, Scopus, Web of Science, and IEEE Xplore) to identify original, peer-reviewed research applying AI to malocclusion diagnosis, classification, treatment planning, or monitoring. The review screened, selected, and extracted data using predefined criteria. Results: Ninety-five studies met the inclusion criteria. The majority employed convolutional neural networks and deep learning models, particularly for diagnosis and classification tasks. Accuracy rates frequently exceeded 90%, with robust performance in cephalometric landmark detection, skeletal classification, and 3D segmentation. Most studies focused on Angle’s classification, while anterior open bite, crossbite/asymmetry, and soft tissue modeling were comparatively underrepresented. Although model performance was generally high, study limitations included small sample sizes, lack of external validation, and limited demographic diversity. Conclusions: AI offers the potential to support and enhance the diagnosis and management of malocclusion. However, to ensure safe and effective clinical adoption, future research must include reproducible reporting, rigorous external validation across sites/devices, and evaluation in diverse populations and real-world clinical workflows. Full article

41 pages, 5539 KB  
Article
Robust Covert Spatial Attention Decoding from Low-Channel Dry EEG by Hybrid AI Model
by Doyeon Kim and Jaeho Lee
AI 2026, 7(1), 9; https://doi.org/10.3390/ai7010009 - 30 Dec 2025
Viewed by 717
Abstract
Background: Decoding covert spatial attention (CSA) from dry, low-channel electroencephalography (EEG) is key for gaze-independent brain–computer interfaces (BCIs). Methods: We evaluate, on sixteen participants and three tasks (CSA, motor imagery (MI), Emotion), a four-electrode, subject-wise pipeline combining leak-safe preprocessing, multiresolution wavelets, and a [...] Read more.
Background: Decoding covert spatial attention (CSA) from dry, low-channel electroencephalography (EEG) is key for gaze-independent brain–computer interfaces (BCIs). Methods: We evaluate, on sixteen participants and three tasks (CSA, motor imagery (MI), Emotion), a four-electrode, subject-wise pipeline combining leak-safe preprocessing, multiresolution wavelets, and a compact Hybrid encoder (CNN-LSTM-MHSA) with robustness-oriented training (noise/shift/channel-dropout and supervised consistency). Results: Online, the Hybrid All-on-Wav achieved 0.695 accuracy with an end-to-end latency of ~2.03 s per 2.0 s decision window; pure model inference latency was ≈185 ms on CPU and ≈11 ms on GPU. The same backbone without defenses reached 0.673, a CNN-LSTM 0.612, and a compact CNN 0.578. Offline subject-wise analyses showed a CSA median Δ balanced accuracy (BAcc) of +2.9%p (paired Wilcoxon p = 0.037; N = 16), with usability-aligned improvements (error 0.272 → 0.268; information transfer rate (ITR) 3.120 → 3.240). Effects were smaller for MI and present for Emotion. Conclusions: Even with simple hardware, compact attention-augmented models and training-time defenses support feasible, low-latency left–right CSA control above chance, suitable for embedded or laptop-class deployment. Full article
(This article belongs to the Section Medical & Healthcare AI)
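The information transfer rate figures quoted in the abstract can be approximated with the standard Wolpaw ITR formula. The sketch below is a generic implementation under assumed parameters (binary left–right classes, one decision per 2.0 s window), not the authors' exact computation; the result lands in the same range as the reported ~3.2 bits/min, with small differences depending on the exact window length used.

```python
import math

def wolpaw_itr(accuracy, n_classes, trial_seconds):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)   # perfect decoding: full log2(N) bits/trial
    elif p <= 1.0 / n:
        bits = 0.0            # at or below chance: no information
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / trial_seconds)

# Binary CSA decision every 2.0 s at the reported 0.695 accuracy:
print(round(wolpaw_itr(0.695, 2, 2.0), 2))  # 3.38
```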

25 pages, 1405 KB  
Review
The Current Landscape of Automatic Radiology Report Generation with Deep Learning: A Scoping Review
by Patricio Meléndez Rojas, Jaime Jamett Rojas, María Fernanda Villalobos Dellafiori, Pablo R. Moya and Alejandro Veloz Baeza
AI 2026, 7(1), 8; https://doi.org/10.3390/ai7010008 - 29 Dec 2025
Viewed by 1008
Abstract
Automatic radiology report generation (ARRG) has emerged as a promising application of deep learning (DL) with the potential to alleviate reporting workload and improve diagnostic consistency. However, despite rapid methodological advances, the field remains technically fragmented and not yet mature for routine clinical adoption. This scoping review maps the current ARRG research landscape by examining DL architectures, multimodal integration strategies, and evaluation practices from 2015 to April 2025. Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, a comprehensive literature search identified 89 eligible studies, revealing a marked predominance of chest radiography datasets (87.6%), primarily driven by their public availability and the accelerated development of automated tools during the COVID-19 pandemic. Most models employed hybrid architectures (73%), particularly CNN–Transformer pairings, reflecting a shift toward systems that combine local feature extraction with global contextual reasoning. Although these approaches have achieved measurable gains in textual and semantic coherence, several challenges persist, including limited anatomical diversity, weak alignment with radiological rationale, and evaluation metrics that insufficiently reflect diagnostic adequacy or clinical impact. Overall, the findings indicate a rapidly evolving but clinically immature field, underscoring the need for validation frameworks that more closely reflect radiological practice and support future deployment in real-world settings. Full article
(This article belongs to the Section Medical & Healthcare AI)
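The abstract's point that surface-overlap metrics insufficiently reflect diagnostic adequacy can be illustrated with a minimal unigram-precision computation (the building block of BLEU-1). The two report fragments below are invented for illustration: they disagree on the key finding yet still score high textual overlap.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the BLEU-1 building block."""
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    matched = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return matched / len(cand)

# Clinically opposite statements can still share most surface tokens:
cand = "no acute abnormality in the right lung"
ref = "acute consolidation in the right lung"
print(round(unigram_precision(cand, ref), 3))  # 0.714
```

Five of the seven candidate tokens match the reference, so the score is high even though the candidate negates the finding; this is the kind of gap that motivates clinically grounded evaluation frameworks.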

31 pages, 9622 KB  
Article
View-Aware Pose Analysis: A Robust Pipeline for Multi-Person Joint Injury Prediction from Single Camera
by Basant Adel, Ahmad Salah, Mahmoud A. Mahdi and Heba Mohsen
AI 2026, 7(1), 7; https://doi.org/10.3390/ai7010007 - 27 Dec 2025
Viewed by 608
Abstract
This paper presents a novel, accessible pipeline for the prediction and prevention of motion-related joint injuries in multiple individuals. Current methodologies for biomechanical analysis often rely on complex, restrictive setups such as multi-camera systems, wearable sensors, or markers, limiting their applicability in everyday environments. To overcome these limitations, we propose a comprehensive solution that utilizes only single-camera 2D images. Our pipeline comprises four distinct stages: (1) extraction of 2D human pose keypoints for multiple persons using a pretrained Human Pose Estimation model; (2) a novel ensemble learning model for person-view classification—distinguishing between front, back, and side perspectives—which is critical for accurate subsequent analysis; (3) a view-specific module that calculates body-segment angles, robustly handling movement pairs (e.g., flexion–extension) and mirrored joints; and (4) a pose assessment module that evaluates calculated angles against established biomechanical Range of Motion (ROM) standards to detect potentially injurious movements. Evaluated on a custom dataset of high-risk poses and diverse images, the end-to-end pipeline demonstrated an 87% success rate in identifying dangerous postures. The view classification stage, a key contribution of this work, achieved a 90% overall accuracy. The system delivers individualized, joint-specific feedback, offering a scalable and deployable solution for enhancing human health and safety in various settings, from home environments to workplaces, without the need for specialized equipment. Full article
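Stage (3)'s body-segment angle computation can be sketched from three 2D keypoints via the angle at the middle joint, then compared against a ROM band as in stage (4). The keypoint coordinates and ROM limits below are illustrative assumptions, not the paper's calibrated thresholds.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    cos = max(-1.0, min(1.0, cos))  # clamp against float round-off
    return math.degrees(math.acos(cos))

def within_rom(angle, low, high):
    """Flag angles outside an (illustrative) safe range of motion."""
    return low <= angle <= high

# Hypothetical hip-knee-ankle keypoints from a single-camera 2D pose:
hip, knee, ankle = (0.0, 0.0), (0.0, 1.0), (0.3, 1.9)
knee_angle = joint_angle(hip, knee, ankle)
print(within_rom(knee_angle, 90.0, 180.0))  # True for a near-straight leg
```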
