Search Results (581)

Search Parameters:
Keywords = domain-specific AI

29 pages, 3643 KB  
Article
Optimizing Performance of Equipment Fleets Under Dynamic Operating Conditions: Generalizable Shift Detection and Multimodal LLM-Assisted State Labeling
by Bilal Chabane, Georges Abdul-Nour and Dragan Komljenovic
Sustainability 2026, 18(1), 132; https://doi.org/10.3390/su18010132 - 22 Dec 2025
Abstract
This paper presents OpS-EWMA-LLM (Operational State Shifts Detection using Exponential Weighted Moving Average and Labeling using Large Language Model), a hybrid framework that combines fleet-normalized statistical shift detection with LLM-assisted diagnostics to identify and interpret operational state changes across heterogeneous fleets. First, we introduce a residual-based EWMA control chart methodology that uses deviations of each component’s sensor reading from its fleet-wide expected value to detect anomalies. This statistical approach yields near-zero false negatives and flags incipient faults earlier than conventional methods, without requiring component-specific tuning. Second, we implement a pipeline that integrates an LLM with a retrieval-augmented generation (RAG) architecture. Through a three-phase prompting strategy, the LLM ingests time-series anomalies, domain knowledge, and contextual information to generate human-interpretable diagnostic insights. Finally, unlike existing approaches that treat anomaly detection and diagnosis as separate steps, we assign to each detected event a criticality label based on both the statistical score of the anomaly and the semantic score from the LLM analysis. These labels are stored in the OpS-Vector to extend the knowledge base of cases for future retrieval. We demonstrate the framework on SCADA data from a fleet of wind turbines: OpS-EWMA successfully identifies critical temperature deviations in various components that standard alarms missed, and the LLM (augmented with relevant documents) provides rationalized explanations for each anomaly. The framework demonstrated robust performance and outperformed baseline methods in a realistic zero-tuning deployment across thousands of heterogeneous equipment units operating under diverse conditions, without component-specific calibration. By fusing lightweight statistical process control with generative AI, the proposed solution offers a scalable, interpretable tool for condition monitoring and asset management in Industry 4.0/5.0 settings. Beyond its technical contributions, the outcome of this research is aligned with the UN Sustainable Development Goals SDG 7, SDG 9, SDG 12, and SDG 13. Full article
(This article belongs to the Section Energy Sustainability)
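As a rough illustration of the residual-based EWMA detection step described above, the following Python sketch flags fleet units whose sensor readings drift away from the fleet-wide mean. It is not the authors' implementation; the smoothing factor lam, the limit width L, and the toy temperature data are illustrative assumptions.

```python
import numpy as np

def ewma_control_chart(readings, lam=0.2, L=3.0):
    """Residual-based EWMA control chart (illustrative sketch).

    readings: array of shape (n_units, n_timesteps), one sensor channel
    per fleet unit. Residuals are deviations from the fleet-wide mean at
    each timestep, mimicking fleet normalization. Returns a boolean
    array marking out-of-control points per unit.
    """
    readings = np.asarray(readings, dtype=float)
    residuals = readings - readings.mean(axis=0, keepdims=True)
    sigma = residuals.std(axis=0, keepdims=True) + 1e-9

    z = np.zeros_like(residuals)              # EWMA statistic per unit
    alarms = np.zeros(residuals.shape, dtype=bool)
    for t in range(residuals.shape[1]):
        prev = z[:, t - 1] if t > 0 else 0.0
        z[:, t] = lam * residuals[:, t] + (1 - lam) * prev
        # Time-varying control limit of the EWMA statistic
        limit = L * sigma[0, t] * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms[:, t] = np.abs(z[:, t]) > limit
    return alarms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fleet = rng.normal(60.0, 2.0, size=(20, 500))   # e.g., bearing temperatures
    fleet[3, 300:] += 6.0                            # inject a drift in one unit
    print("first alarm for unit 3 at t =", np.argmax(ewma_control_chart(fleet)[3]))
```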

27 pages, 3025 KB  
Review
From Local to Global Perspective in AI-Based Digital Twins in Healthcare
by Maciej Piechowiak, Aleksander Goch, Ewelina Panas, Jolanta Masiak, Dariusz Mikołajewski, Izabela Rojek and Emilia Mikołajewska
Appl. Sci. 2026, 16(1), 83; https://doi.org/10.3390/app16010083 - 21 Dec 2025
Abstract
Digital twins (DTs) powered by artificial intelligence (AI) are becoming important transformational tools in healthcare, enabling real-time simulation and personalized decision support at the patient level. The aim of this review is to critically examine the evolution, current applications, and future potential of AI-based DTs in healthcare, with a particular focus on their role in enabling real-time simulation and personalized patient-level decision support. Specifically, the review aims to provide a comprehensive overview of how AI-based DTs are being developed and implemented in various clinical domains, identifying existing scientific and technical gaps and highlighting methodological, regulatory, and ethical issues. Taking a “local to global” perspective, the review aims to explore how individual patient-level models can be scaled and integrated to inform population health strategies, global data networks, and collaborative research ecosystems. This will provide a structured foundation for future research, clinical applications, and policy development in this rapidly evolving field. Locally, DTs allow medical professionals to model individual patient physiology, predict disease progression, and optimize treatment strategies. Hospitals are implementing AI-based DT platforms to simulate workflows, efficiently allocate resources, and improve patient safety. Generative AI further enhances these applications by creating synthetic patient data for training, filling gaps in incomplete records, and enabling privacy-respecting research. On a broader scale, regional health systems can use connected DTs to model population health trends and predict responses to public health interventions. On a national scale, governments and policymakers can use these insights for strategic planning, resource allocation, and increasing resilience to health crises. Internationally and globally, AI-based DTs can integrate diverse datasets across borders to support research collaboration and improve early pandemic detection. Generative AI contributes to global efforts by harmonizing heterogeneous data, creating standardized virtual patient cohorts, and supporting cross-cultural medical education. Combining local precision with global insights highlights DTs’ role as a bridge between personalized and global health. Despite the efforts of medical and technical specialists, ethical, regulatory, and data governance challenges remain crucial to ensuring responsible and equitable implementation worldwide. In conclusion, AI-based DTs represent a transformative paradigm, combining individual patient care with systemic and global health management. These perspectives highlight the potential of AI-based DTs to bridge precision medicine and public health, provided ethical, regulatory, and governance challenges are addressed responsibly. Full article
26 pages, 2214 KB  
Review
Nanobody Therapeutics in Alzheimer’s Disease: From Molecular Mechanisms to Translational Approaches
by Deepika Godugu, Kranthi Gattu, Parul Suri, Abel B. Daartey, Krishna Jadhav and Satish Rojekar
Antibodies 2026, 15(1), 1; https://doi.org/10.3390/antib15010001 - 19 Dec 2025
Viewed by 145
Abstract
Nanobodies (single-domain antibodies, VHHs) have emerged as versatile tools for evaluating and treating Alzheimer’s disease (AD). They offer distinct engineering benefits compared with traditional antibodies and small molecules, including small size, stability, and specificity. In AD, nanobodies have been shown in preclinical models to neutralize toxic amyloid-β oligomers, inhibit tau generation and aggregation, and modulate neuroinflammation, thereby demonstrating significant therapeutic potential. However, all nanobody applications in AD are discussed strictly as preclinical therapeutic potential rather than established clinical therapies, and direct clinical evidence in patients with AD is still lacking. Advanced engineering strategies are being explored to facilitate brain penetration, including intranasal and intrathecal routes, receptor-mediated transport, plasma protein binding with albumin, and focused ultrasound. Additionally, to improve nanobody delivery precision, half-life, and efficacy, strategies such as integrating nanobodies with nanoparticles, dendrimers, liposomes, and viral vectors are being employed. In fact, nanobodies are applied beyond monotherapy across multiple technological platforms to optimize brain delivery and engage multiple targets. Nanobodies have been used on bispecific and trispecific antibody platforms, as well as in CRISPR/Cas9 editing and AI-driven technologies, to expand their applications. Recently, preclinical evidence has been mounting on the efficacy of nanobodies in clearing Aβ and tau, preserving synapses, and normalizing biomarkers. Comparison with FDA-approved anti-Aβ monoclonal antibodies (aducanumab, lecanemab, and donanemab) highlights opportunities and current translational gaps, including safety testing, half-life extension, and delivery optimization. This review critically delineates the current molecular mechanisms, emerging strategies, and delivery platforms, and emphasizes the potential of nanobodies as promising therapeutic and diagnostic molecules in AD therapeutics. Full article
(This article belongs to the Section Antibody-Based Therapeutics)

19 pages, 2690 KB  
Article
Pattern Learning and Knowledge Distillation for Single-Cell Data Annotation
by Ming Zhang, Boran Ren and Xuedong Li
Biology 2026, 15(1), 2; https://doi.org/10.3390/biology15010002 - 19 Dec 2025
Viewed by 164
Abstract
Transferring cell type annotations from a reference dataset to a query dataset is a fundamental problem in AI-based single-cell data analysis. However, single-cell measurement techniques lead to domain gaps between multiple batches or datasets. Existing deep learning methods do not adequately account for batch integration when learning reference annotations, which is a challenge for cell type annotation on multiple query batches. For cell representation, batch integration can not only eliminate the gaps between batches or datasets but also improve the heterogeneity of cell clusters. In this study, we proposed PLKD, a cell type annotation method based on pattern learning and knowledge distillation. PLKD consists of a Teacher (Transformer) and a Student (MLP). The Teacher groups all input genes (features) into different gene sets (patterns), and each pattern represents a specific biological function. This design enables the model to focus on biologically relevant functional interactions rather than gene-level expression, which is susceptible to batch gaps. In addition, knowledge distillation makes the lightweight Student resistant to noise, allowing it to infer quickly and robustly. Furthermore, PLKD supports multi-modal cell type annotation, multi-modal integration, and other tasks. Benchmark experiments demonstrate that PLKD achieves accurate and robust cell type annotation. Full article
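The teacher-student setup can be pictured with a standard knowledge-distillation objective. The sketch below (PyTorch) blends hard-label cross-entropy with a KL term against the teacher's temperature-softened predictions; the toy architectures, temperature, and loss weighting are assumptions for illustration, not the PLKD configuration.

```python
import torch
import torch.nn.functional as F
from torch import nn

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the
    teacher's temperature-softened class distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy example: a frozen "teacher" and a lightweight MLP "student"
n_genes, n_types = 200, 8
teacher = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, n_types)).eval()
student = nn.Sequential(nn.Linear(n_genes, 64), nn.ReLU(), nn.Linear(64, n_types))

x = torch.randn(32, n_genes)                 # expression profiles (toy data)
y = torch.randint(0, n_types, (32,))         # reference cell-type labels
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```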

14 pages, 6479 KB  
Article
Automating Air Pollution Map Analysis with Multi-Modal AI and Visual Context Engineering
by Szymon Cogiel, Mateusz Zareba, Tomasz Danek and Filip Arnaut
Atmosphere 2026, 17(1), 2; https://doi.org/10.3390/atmos17010002 - 19 Dec 2025
Viewed by 97
Abstract
The increasing volume of data from IoT sensors has made manual inspection time-consuming and prone to bias, particularly for spatiotemporal air pollution maps. While rule-based methods are adequate for simple datasets or individual maps, they are insufficient for interpreting multi-year time series data with 1 h timestamps, which require both domain-specific expertise and significant time investment. This limitation is especially critical in environmental monitoring, where analyzing long-term spatiotemporal PM2.5 maps derived from 52 low-cost sensors remains labor-intensive and susceptible to human error. This study investigates the potential of generative artificial intelligence, specifically multi-modal large language models (MLLMs), for interpreting spatiotemporal PM2.5 maps. Both open-source models (Janus-Pro and LLaVA-1.5) and commercial large language models (GPT-4o and Gemini 2.5 Pro) were evaluated. The initial results showed a limited performance, highlighting the difficulty of extracting meaningful information directly from raw sensor-derived maps. To address this, a visual context engineering framework was introduced, comprising systematic optimization of colormaps, normalization of intensity ranges, and refinement of map layers and legends to improve clarity and interpretability for AI models. Evaluation using the GEval metric demonstrated that visual context engineering increased interpretation accuracy (defined as the detection of PM2.5 spatial extrema) by over 32.3% (relative improvement). These findings provide strong evidence that tailored visual preprocessing enables MLLMs to effectively interpret complex environmental time series data, representing a novel approach that bridges data-driven modeling with ecological monitoring and offers a scalable solution for automated, reliable, and reproducible analysis of high-resolution air quality datasets. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
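The "visual context engineering" idea of fixing the visual encoding before showing maps to an MLLM can be sketched with a small rendering helper. The matplotlib example below (not the authors' code) pins the intensity range, uses a perceptually uniform colormap, and attaches a labeled legend; the value range, colormap, and hotspot data are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_pm25_map(grid, vmin=0.0, vmax=75.0, cmap="viridis", path="pm25_map.png"):
    """Render a PM2.5 grid with a fixed intensity range, a perceptually
    uniform colormap, and an explicit legend, so that images shown to a
    multi-modal model share a consistent visual encoding."""
    fig, ax = plt.subplots(figsize=(5, 4))
    im = ax.imshow(grid, cmap=cmap, vmin=vmin, vmax=vmax, origin="lower")
    cbar = fig.colorbar(im, ax=ax)
    cbar.set_label("PM2.5 (µg/m³)")
    ax.set_title("PM2.5 concentration map")
    ax.set_xlabel("grid x")
    ax.set_ylabel("grid y")
    fig.savefig(path, dpi=150, bbox_inches="tight")
    plt.close(fig)
    return path

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    field = 20 + 10 * rng.random((40, 40))
    field[28:32, 5:9] += 35.0          # a localized hotspot (spatial maximum)
    print("saved:", render_pm25_map(field))
```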

19 pages, 7643 KB  
Article
Pixel-Level Fuzzy Rule Attention Maps for Interpretable MRI Classification
by Tae-Wan Kim and Keun-Chang Kwak
Symmetry 2025, 17(12), 2187; https://doi.org/10.3390/sym17122187 - 18 Dec 2025
Viewed by 87
Abstract
Although Artificial Intelligence (AI) has achieved notable performance, particularly in medicine, the structural opacity leading to the black-box phenomenon inhibits interpretability, thus necessitating a balance (Symmetry) between performance and transparency. Specifically, in the medical domain, effective diagnosis requires that high predictive performance be symmetrically counterbalanced by sufficient trust and explainability for clinical practice. Existing visualization techniques like Grad-CAM can highlight attention regions but provide limited insight into the reasoning process and often focus on irrelevant areas. To address this limitation, we propose a Fuzzy Attention Rule (FAR) model that extends fuzzy inference to MRI (Magnetic Resonance Imaging) image classification. The FAR model applies pixel-level fuzzy membership functions and logical operations (AND, OR, AND + OR, AND × OR) to generate rule-based attention maps, enabling explainable and convolution-free feature extraction. Experiments on Kaggle’s Brain MRI and Alzheimer’s MRI datasets show that FAR achieves comparable accuracy to Resnet50 while using far fewer parameters and significantly outperforming MLP. Quantitative and qualitative analyses confirm that FAR focuses more precisely on lesion regions than Grad-CAM. These results demonstrate that fuzzy logic can enhance both the explainability and reliability of medical AI systems without compromising performance. Full article
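As a rough picture of a rule-based attention map, the sketch below fuzzifies normalized pixel intensities with Gaussian membership functions and combines the degrees with min/max-style AND/OR operators, mirroring the logical variants named in the abstract. The membership centers and widths, the choice of min/max as the fuzzy operators, and the random stand-in image are assumptions; the FAR model's actual functions and parameters are defined in the paper.

```python
import numpy as np

def gaussian_membership(x, center, sigma):
    """Fuzzy membership degree of pixel intensities to a linguistic term."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_rule_attention(image, combine="and"):
    """Pixel-level rule-based attention sketch: fuzzify normalized
    intensities into 'dark' and 'bright' terms, then combine them with
    fuzzy AND (min), OR (max), or their sum/product variants."""
    x = (image - image.min()) / (image.max() - image.min() + 1e-9)
    dark = gaussian_membership(x, center=0.2, sigma=0.15)
    bright = gaussian_membership(x, center=0.8, sigma=0.15)
    ops = {
        "and": np.minimum(dark, bright),
        "or": np.maximum(dark, bright),
        "and+or": np.minimum(dark, bright) + np.maximum(dark, bright),
        "and*or": np.minimum(dark, bright) * np.maximum(dark, bright),
    }
    attn = ops[combine]
    return attn / (attn.max() + 1e-9)          # normalize to [0, 1] for display

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    slice_ = rng.random((64, 64))               # stand-in for an MRI slice
    for op in ("and", "or", "and+or", "and*or"):
        print(op, float(fuzzy_rule_attention(slice_, op).mean()))
```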
28 pages, 4151 KB  
Article
FANet: Frequency-Aware Attention-Based Tiny-Object Detection in Remote Sensing Images
by Zixiao Wen, Peifeng Li, Yuhan Liu, Jingming Chen, Xiantai Xiang, Yuan Li, Huixian Wang, Yongchao Zhao and Guangyao Zhou
Remote Sens. 2025, 17(24), 4066; https://doi.org/10.3390/rs17244066 - 18 Dec 2025
Viewed by 161
Abstract
In recent years, deep learning-based remote sensing object detection has achieved remarkable progress, yet the detection of tiny objects remains a significant challenge. Tiny objects in remote sensing images typically occupy only a few pixels, resulting in low contrast, poor resolution, and high sensitivity to localization errors. Their diverse scales and appearances, combined with complex backgrounds and severe class imbalance, further complicate the detection tasks. Conventional spatial feature extraction methods often struggle to capture the discriminative characteristics of tiny objects, especially in the presence of noise and occlusion. To address these challenges, we propose a frequency-aware attention-based tiny-object detection network with two plug-and-play modules that leverage frequency-domain information to enhance the targets. Specifically, we introduce a Multi-Scale Frequency Feature Enhancement Module (MSFFEM) to adaptively highlight the contour and texture details of tiny objects while suppressing background noise. Additionally, a Channel Attention-based RoI Enhancement Module (CAREM) is proposed to selectively emphasize high-frequency responses within RoI features, further improving object localization and classification. Furthermore, to mitigate sample imbalance, we employ multi-directional flip sample augmentation and redundancy filtering strategies, which significantly boost detection performance for few-shot categories. Extensive experiments on public object detection datasets, i.e., AI-TOD, VisDrone2019, and DOTA-v1.5, demonstrate that the proposed FANet consistently improves detection performance for tiny objects, outperforming existing methods and providing new insights into the integration of frequency-domain analysis and attention mechanisms for robust tiny-object detection in remote sensing applications. Full article
(This article belongs to the Special Issue Deep Learning-Based Small-Target Detection in Remote Sensing)
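A generic illustration of the frequency-domain idea (not the MSFFEM or CAREM modules themselves): the sketch below moves a feature map into the Fourier domain, amplifies components above a normalized cutoff radius so that fine contours and small blobs stand out, and transforms back. The cutoff and gain values are assumptions.

```python
import numpy as np

def frequency_emphasis(feature_map, cutoff=0.1, gain=2.0):
    """Illustrative frequency-domain enhancement: boost spectral
    components above a normalized cutoff radius, where the fine detail
    of tiny objects lives, then return to the spatial domain."""
    h, w = feature_map.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))

    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    weight = np.where(radius > cutoff, gain, 1.0)   # emphasize high frequencies

    enhanced = np.fft.ifft2(np.fft.ifftshift(spectrum * weight)).real
    return enhanced

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fmap = rng.normal(size=(128, 128))
    fmap[60:63, 60:63] += 4.0                       # a "tiny object" blob
    out = frequency_emphasis(fmap)
    print("peak before:", float(fmap.max()), "peak after:", float(out.max()))
```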

17 pages, 493 KB  
Article
A Generalizable Agentic AI Pipeline for Developing Chatbots Using Small Language Models: A Case Study on Thai Student Loan Fund Services
by Jakkaphong Inpun, Watcharaporn Cholamjiak, Piyada Phrueksawatnon and Kanokwatt Shiangjen
Computation 2025, 13(12), 297; https://doi.org/10.3390/computation13120297 - 18 Dec 2025
Viewed by 240
Abstract
The rising deployment of artificial intelligence in public services is constrained by computational costs and limited domain-specific data, particularly in multilingual contexts. This study proposes a generalizable Agentic AI pipeline for developing question–answer chatbot systems using small language models (SLMs), demonstrated through a case study on the Thai Student Loan Fund (TSLF). The pipeline integrates four stages: OCR-based document digitization using Typhoon2-3B, agentic question–answer dataset construction via a clean–check–plan–generate (CCPG) workflow, parameter-efficient fine-tuning with QLoRA on Typhoon2-1B and Typhoon2-3B models, and retrieval-augmented generation (RAG) for source-grounded responses. Evaluation using BERTScore and CondBERT confirmed high semantic consistency (F_BERT = 0.9807) and stylistic reliability (F_BERT = 0.9839) of the generated QA corpus. Fine-tuning improved the 1B model’s domain alignment (F_BERT: 0.8593 → 0.8641), while RAG integration further enhanced factual grounding (F_BERT = 0.8707) and citation transparency. Cross-validation with GPT-5 and Gemini 2.5 Pro demonstrated dataset transferability and reliability. The results establish that Agentic AI combined with SLMs offers a cost-effective, interpretable, and scalable framework for automating bilingual advisory services in resource-constrained government and educational institutions. Full article
(This article belongs to the Special Issue Generative AI in Action: Trends, Applications, and Implications)
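The BERTScore figures above can be reproduced in spirit with the open-source bert-score package; the sketch below computes an F_BERT-style score for a pair of toy answers. The example sentences, the English language setting, and the default encoder are illustrative assumptions, not the paper's evaluation configuration (which targets Thai text).

```python
# pip install bert-score
from bert_score import score

# Toy candidate/reference pairs standing in for generated vs. reference
# QA answers (illustrative English text only).
candidates = [
    "Borrowers must begin repayment two years after graduation.",
    "The fund covers tuition fees and living expenses for eligible students.",
]
references = [
    "Repayment starts two years after the borrower graduates.",
    "Eligible students receive support for tuition and living costs.",
]

P, R, F1 = score(candidates, references, lang="en", verbose=False)
for cand, f in zip(candidates, F1.tolist()):
    print(f"F_BERT = {f:.4f}  |  {cand}")
```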
21 pages, 1991 KB  
Article
Zero-Shot Resume–Job Matching with LLMs via Structured Prompting and Semantic Embeddings
by Panagiotis Skondras, Panagiotis Zervas and Giannis Tzimas
Electronics 2025, 14(24), 4960; https://doi.org/10.3390/electronics14244960 - 17 Dec 2025
Viewed by 221
Abstract
In this article, we present a tool for matching resumes to job posts and vice versa (job post to resumes). With minor modifications, it may also be adapted to other domains where text matching is necessary. This tool may help organizations save time during the hiring process, as well as assist applicants by allowing them to match their resumes to job posts they have selected. To achieve text matching without any model training (zero-shot matching), we constructed dynamic structured prompts that consisted of unstructured and semi-structured job posts and resumes based on specific criteria, and we utilized the Chain of Thought (CoT) technique on the Mistral model (open-mistral-7b). In response, the model generated structured (segmented) job posts and resumes. Then, the job posts and resumes were cleaned and preprocessed. We utilized state-of-the-art sentence similarity models hosted on Hugging Face (nomic-embed-text-v1-5 and google-embedding-gemma-300m) through inference endpoints to create sentence embeddings for each resume and job post segment. We used the cosine similarity metric to determine the optimal matching, and the matching operation was applied to eleven different occupations. The results reached up to 87% accuracy for some of the occupations and underscore the potential of zero-shot techniques in text matching utilizing LLMs. The dataset we used was from indeed.com, and the Spring AI framework was used for the implementation of the tool. Full article
(This article belongs to the Special Issue Advances in Text Mining and Analytics)
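The segment-wise matching step can be illustrated with a few lines of embedding-plus-cosine code. The sketch below uses a small open sentence-transformers model as a stand-in for the hosted nomic-embed-text-v1-5 and google-embedding-gemma-300m endpoints named in the abstract; the segment names and the mean-based overall score are illustrative assumptions.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Stand-in encoder; the paper calls hosted inference endpoints instead.
model = SentenceTransformer("all-MiniLM-L6-v2")

resume_segments = {
    "skills": "Python, SQL, machine learning, data visualization",
    "experience": "Three years as a data analyst in retail analytics",
}
job_segments = {
    "requirements": "Experience with Python and SQL for data analysis",
    "responsibilities": "Build dashboards and predictive models for sales",
}

r_emb = model.encode(list(resume_segments.values()), normalize_embeddings=True)
j_emb = model.encode(list(job_segments.values()), normalize_embeddings=True)

# With unit-normalized embeddings, cosine similarity is a dot product.
sim = r_emb @ j_emb.T
for i, r_key in enumerate(resume_segments):
    for j, j_key in enumerate(job_segments):
        print(f"{r_key:<10} vs {j_key:<16} cosine = {sim[i, j]:.3f}")
print("overall match score:", float(sim.mean()))
```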

22 pages, 450 KB  
Review
Exploring the Security of Mobile Face Recognition: Attacks, Defenses, and Future Directions
by Elísabet Líf Birgisdóttir, Michał Ignacy Kunkel, Lukáš Pleva, Maria Papaioannou, Gaurav Choudhary and Nicola Dragoni
Appl. Sci. 2025, 15(24), 13232; https://doi.org/10.3390/app152413232 - 17 Dec 2025
Viewed by 184
Abstract
Biometric authentication on smartphones has advanced rapidly in recent years, with face recognition becoming the dominant modality due to its convenience and easy integration with modern mobile hardware. However, despite these developments, smartphone-based facial recognition systems remain vulnerable to a broad spectrum of attacks. This survey provides an updated and comprehensive examination of the evolving attack landscape and corresponding defense mechanisms, incorporating recent advances up to 2025. A key contribution of this work is a structured taxonomy of attack types targeting smartphone facial recognition systems, encompassing (i) 2D and 3D presentation attacks; (ii) digital attacks; and (iii) dynamic attack patterns that exploit acquisition conditions. We analyze how these increasingly realistic and condition-dependent attacks challenge the robustness and generalization capabilities of modern face anti-spoofing (FAS) systems. On the defense side, the paper reviews recent progress in liveness detection, deep-learning- and transformer-based approaches, quality-aware and domain-generalizable models, and emerging unified frameworks capable of handling both physical and digital spoofing. Hardware-assisted methods and multi-modal techniques are also examined, with specific attention to their applicability in mobile environments. Furthermore, we provide a systematic overview of commonly used datasets, evaluation metrics, and cross-domain testing protocols, identifying limitations related to demographic bias, dataset variability, and controlled laboratory conditions. Finally, the survey outlines key research challenges and future directions, including the need for mobile-efficient anti-spoofing models, standardized in-the-wild evaluation protocols, and defenses robust to unseen and AI-generated spoof types. Collectively, this work offers an integrated view of current trends and emerging paradigms in smartphone-based face anti-spoofing, supporting the development of more secure and resilient biometric authentication systems. Full article
(This article belongs to the Collection Innovation in Information Security)
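For readers unfamiliar with how face anti-spoofing systems are scored, the sketch below computes the ISO/IEC 30107-3 style presentation-attack metrics (APCER, BPCER, ACER) that such evaluations commonly report; the specific metrics and protocols used by each surveyed work vary, and the toy labels here are purely illustrative.

```python
import numpy as np

def fas_metrics(labels, predictions):
    """ISO/IEC 30107-3 style metrics commonly reported in face
    anti-spoofing evaluations: APCER (attacks accepted as bona fide),
    BPCER (bona fide rejected as attacks), and their mean, ACER.
    labels/predictions: 1 = bona fide (live), 0 = attack (spoof)."""
    labels = np.asarray(labels)
    predictions = np.asarray(predictions)
    attacks = labels == 0
    bona_fide = labels == 1
    apcer = np.mean(predictions[attacks] == 1)      # spoof passed as live
    bpcer = np.mean(predictions[bona_fide] == 0)    # live rejected as spoof
    return apcer, bpcer, (apcer + bpcer) / 2

if __name__ == "__main__":
    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
    apcer, bpcer, acer = fas_metrics(y_true, y_pred)
    print(f"APCER={apcer:.2f}  BPCER={bpcer:.2f}  ACER={acer:.2f}")
```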

53 pages, 1902 KB  
Review
Edge AI for Smart Cities: Foundations, Challenges, and Opportunities
by Krishna Sruthi Velaga, Yifan Guo and Wei Yu
Smart Cities 2025, 8(6), 211; https://doi.org/10.3390/smartcities8060211 - 16 Dec 2025
Viewed by 463
Abstract
Smart cities seek to improve urban living by embedding advanced technologies into infrastructures, services, and governance. Edge Artificial Intelligence (Edge AI) has emerged as a critical enabler by moving computation and learning closer to data sources, enabling real-time decision-making, improving privacy, and reducing reliance on centralized cloud infrastructure. This survey provides a comprehensive review of the foundations, challenges, and opportunities of edge AI in smart cities. In particular, we begin with an overview of layer-wise designs for edge AI-enabled smart cities, followed by an introduction to the core components of edge AI systems, including applications, sensing data, models, and infrastructure. Then, we summarize domain-specific applications spanning manufacturing, healthcare, transportation, buildings, and environments, highlighting both the softcore (e.g., AI algorithm design) and the hardcore (e.g., edge device selection) in heterogeneous applications. Next, we analyze the sources of sensing data generation, model design strategies, and hardware infrastructure that underpin edge AI deployment. Building on these, we finally identify several open challenges and provide future research directions in this domain. Our survey outlines a future research roadmap to advance edge AI technologies, thereby supporting the development of adaptive, harmonic, and sustainable smart cities. Full article

36 pages, 3105 KB  
Review
Reinforcement Learning for Industrial Automation: A Comprehensive Review of Adaptive Control and Decision-Making in Smart Factories
by Yasser M. Alginahi, Omar Sabri and Wael Said
Machines 2025, 13(12), 1140; https://doi.org/10.3390/machines13121140 - 15 Dec 2025
Viewed by 395
Abstract
The accelerating integration of Artificial Intelligence (AI) in Industrial Automation has established Reinforcement Learning (RL) as a transformative paradigm for adaptive control, intelligent optimization, and autonomous decision-making in smart factories. Despite the growing literature, existing reviews often emphasize algorithmic performance or domain-specific applications, neglecting broader links between methodological evolution, technological maturity, and industrial readiness. To address this gap, this study presents a bibliometric review mapping the development of RL and Deep Reinforcement Learning (DRL) research in Industrial Automation and robotics. Following the PRISMA 2020 protocol to guide the data collection procedures and inclusion criteria, 672 peer-reviewed journal articles published between 2017 and 2026 were retrieved from Scopus, ensuring high-quality, interdisciplinary coverage. Quantitative bibliometric analyses were conducted in R using Bibliometrix and Biblioshiny, including co-authorship, co-citation, keyword co-occurrence, and thematic network analyses, to reveal collaboration patterns, influential works, and emerging research trends. Results indicate that 42% of studies employed DRL, 27% focused on Multi-Agent RL (MARL), and 31% relied on classical RL, with applications concentrated in robotic control (33%), process optimization (28%), and predictive maintenance (19%). However, only 22% of the studies reported real-world or pilot implementations, highlighting persistent challenges in scalability, safety validation, interpretability, and deployment readiness. By integrating a review with bibliometric mapping, this study provides a comprehensive taxonomy and a strategic roadmap linking theoretical RL research with practical industrial applications. This roadmap is structured across four critical dimensions: (1) Algorithmic Development (e.g., safe, explainable, and data-efficient RL), (2) Integration Technologies (e.g., digital twins and IoT), (3) Validation Maturity (from simulation to real-world pilots), and (4) Human-Centricity (addressing trust, collaboration, and workforce transition). These insights can guide researchers, engineers, and policymakers in developing scalable, safe, and human-centric RL solutions, prioritizing research directions, and informing the implementation of Industry 5.0–aligned intelligent automation systems emphasizing transparency, sustainability, and operational resilience. Full article
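The review performs its keyword co-occurrence and network analyses in R with Bibliometrix/Biblioshiny; as a language-agnostic illustration of what that computation does, the Python sketch below counts how often pairs of author keywords appear in the same record and builds a weighted co-occurrence graph. The keyword lists are toy data.

```python
from itertools import combinations
from collections import Counter
import networkx as nx

# Toy author-keyword lists from a handful of records (illustrative only).
papers = [
    ["deep reinforcement learning", "robotic control", "digital twin"],
    ["reinforcement learning", "process optimization", "digital twin"],
    ["deep reinforcement learning", "predictive maintenance"],
    ["multi-agent reinforcement learning", "robotic control"],
]

pairs = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        pairs[(a, b)] += 1

# Keyword co-occurrence network: nodes are keywords, edge weights count
# how often two keywords appear in the same paper.
G = nx.Graph()
for (a, b), w in pairs.items():
    G.add_edge(a, b, weight=w)

for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a}  --{d['weight']}--  {b}")
```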

26 pages, 1441 KB  
Review
Artificial Intelligence and Machine Learning in Lung Cancer: Advances in Imaging, Detection, and Prognosis
by Mohammad Farhan Arshad, Adiba Tabassum Chowdhury, Zain Sharif, Md. Sakib Bin Islam, Md. Shaheenur Islam Sumon, Amshiya Mohammedkasim, Muhammad E. H. Chowdhury and Shona Pedersen
Cancers 2025, 17(24), 3985; https://doi.org/10.3390/cancers17243985 - 14 Dec 2025
Viewed by 540
Abstract
Background/Objectives: As the primary cause of cancer-related death globally, lung cancer highlights the critical need for early identification, precise staging, and individualized treatment planning. By enabling automated diagnosis, staging, and prognostic evaluation, recent developments in artificial intelligence (AI) and machine learning (ML) have completely changed the treatment of lung cancer. The goal of this narrative review is to compile the most recent data on uses of AI and ML throughout the lung cancer care continuum. Methods: A comprehensive literature search was conducted across major scientific databases to identify peer-reviewed studies focused on AI-based imaging, detection, and prognostic modeling in lung cancer. Studies were categorized into three thematic domains: (1) detection and screening, (2) staging and diagnosis, and (3) risk prediction and prognosis. Results: Convolutional neural networks (CNNs), in particular, have shown significant sensitivity and specificity in nodule recognition, segmentation, and false-positive reduction. Radiomics-based models and other multimodal frameworks combining imaging and clinical data show great promise for forecasting treatment outcomes and survival rates. The accuracy of non-small-cell lung cancer (NSCLC) staging, lymph node evaluation, and malignancy classification was regularly improved by AI algorithms, frequently matching or exceeding radiologist performance. Conclusions: There are still issues with data heterogeneity, interpretability, repeatability, and clinical acceptability despite significant advancements. Standardized datasets, ethical AI implementation, and transparent model evaluation should be the top priorities for future initiatives. AI and ML have revolutionary potential for intelligent, personalized, and real-time lung cancer treatment by connecting computational innovation with precision oncology. Full article
(This article belongs to the Special Issue AI-Based Applications in Cancers)

24 pages, 831 KB  
Article
Task-Centered Analysis of Higher Education Students’ Uses of Generative Artificial Intelligence
by Arnon Hershkovitz, Michal Tabach and Lilach Lurie
Educ. Sci. 2025, 15(12), 1676; https://doi.org/10.3390/educsci15121676 - 12 Dec 2025
Viewed by 351
Abstract
This study examines how higher education students use generative artificial intelligence (GenAI) for academic tasks and identifies the types of tasks in which these tools are employed. Our research was conducted in a large multidisciplinary university in Israel. Data from 825 eligible responses to an open-ended item underwent a conventional content analysis using a bottom-up coding approach; data were coded inductively in an iterative process, achieving substantial inter-rater reliability. The findings produced the SOI-MARSMeLLAW framework, which maps eight themes of GenAI use (by frequency, from highest to lowest)—Writing, Learning, Reading, Searching, Meta-learning, Multimedia, Analysis, and Learning Aids—across three information channels: Self, Output, and Input. Overall, we find that students rely on GenAI primarily to support rather than replace their learning, suggesting a flexible and strategic approach that balances efficiency with agency. Two follow-up analyses using the broader data examined domain specificity and Task-Centered GenAI Literacy. We found that STEM students emphasized Learning tasks while non-STEM students highlighted Reading, Meta-learning, and Searching, and that literacy was higher for Writing-related tasks than for Learning-related tasks. These findings have implications for GenAI policy in higher education institutions, and for the redesign of pedagogy and assessment in higher education. Full article
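The abstract reports substantial inter-rater reliability for the inductive coding; one common way such agreement is quantified is Cohen's kappa, sketched below on hypothetical codes from two raters (the paper does not specify which statistic it used, so this is an assumption for illustration).

```python
# pip install scikit-learn
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two raters to the same ten responses,
# using theme labels from the framework described above.
rater_a = ["Writing", "Learning", "Reading", "Writing", "Searching",
           "Learning", "Multimedia", "Writing", "Meta-learning", "Learning"]
rater_b = ["Writing", "Learning", "Reading", "Learning", "Searching",
           "Learning", "Multimedia", "Writing", "Meta-learning", "Reading"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")   # 0.61-0.80 is conventionally 'substantial'
```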

22 pages, 5105 KB  
Article
From News to Knowledge: Leveraging AI and Knowledge Graphs for Real-Time ESG Insights
by Omar Mohmmed Hassan Nassar, Fahimeh Jafari and Chanchal Jain
Sustainability 2025, 17(24), 11128; https://doi.org/10.3390/su172411128 - 12 Dec 2025
Viewed by 437
Abstract
Traditional Environmental, Social, and Governance (ESG) assessments rely heavily on corporate disclosures and third-party ratings, which are often delayed, inconsistent, and prone to bias. These limitations leave stakeholders without timely visibility into rapidly evolving ESG events. These assessment frameworks also fail to capture the dynamic nature of ESG issues reflected in public news media. This research addresses these limitations by proposing and implementing an automated framework utilising Artificial Intelligence (AI), specifically Natural Language Processing (NLP) and Knowledge Graphs (KG), to analyse ESG news data for companies listed on major stock indices. The methodology involves several stages: collecting a registry of target companies; retrieving relevant news articles; applying Named Entity Recognition (NER), sentiment analysis, and ESG domain classification; and constructing a linked property knowledge graph to structure the extracted information semantically. The framework culminates in an interactive dashboard for visualising and querying the resulting graph database. The resulting knowledge graph supports comparative inferential analytics across indices and sectors, uncovering divergent ESG sentiment profiles and thematic priorities that traditional reports overlook. The analysis also reveals comparative insights into sentiment trends and ESG focus areas across different exchanges and sectors, offering perspectives often missing from traditional methods. Findings indicate differing ESG sentiment profiles and thematic focuses between the UK (FTSE) and Australian (ASX) indices within the analysed dataset. This study confirms AI/KG’s potential for a modular, dynamic, and semantically rich ESG intelligence approach, transforming unstructured news into interconnected insights. Limitations and areas for future work, including model refinement and integration of financial data, are also discussed. This proposed framework augments traditional ESG evaluations with automated, scalable, and context-rich analysis. Full article
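The linked property knowledge graph at the core of the framework can be sketched with networkx standing in for whatever graph store the authors used. The example below turns toy NER, sentiment, and ESG-domain outputs into company, domain, and article nodes with typed edges, then runs a simple sentiment query; the records and schema are illustrative assumptions, not study data.

```python
import networkx as nx

# Toy records standing in for NER + sentiment + ESG-domain classification
# outputs over news articles (illustrative values only).
records = [
    {"company": "AcmeCo", "index": "FTSE", "esg_domain": "Environmental",
     "sentiment": -0.6, "headline": "AcmeCo fined over emissions breach"},
    {"company": "AcmeCo", "index": "FTSE", "esg_domain": "Governance",
     "sentiment": 0.3, "headline": "AcmeCo appoints independent board chair"},
    {"company": "OzMining", "index": "ASX", "esg_domain": "Social",
     "sentiment": 0.5, "headline": "OzMining expands community health program"},
]

G = nx.MultiDiGraph()
for i, rec in enumerate(records):
    article = f"article:{i}"
    G.add_node(rec["company"], kind="Company", index=rec["index"])
    G.add_node(rec["esg_domain"], kind="ESGDomain")
    G.add_node(article, kind="NewsArticle", headline=rec["headline"],
               sentiment=rec["sentiment"])
    G.add_edge(article, rec["company"], relation="MENTIONS")
    G.add_edge(article, rec["esg_domain"], relation="CLASSIFIED_AS")

# Simple query: average news sentiment per company.
for company in (n for n, d in G.nodes(data=True) if d.get("kind") == "Company"):
    arts = [u for u, v, d in G.in_edges(company, data=True) if d["relation"] == "MENTIONS"]
    avg = sum(G.nodes[a]["sentiment"] for a in arts) / len(arts)
    print(f"{company}: mean sentiment {avg:+.2f} over {len(arts)} article(s)")
```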
