Search Results (1,816)

Search Parameters:
Keywords = AI-driven learning

18 pages, 4968 KB  
Article
Integrating Machine Learning and Dynamic Bayesian Networks to Identify the Factors Associated with Subsequent Intrapulmonary Metastasis Classification After Initial Single Primary Lung Cancer
by Wei Liu, Aliss T. C. Chang, Joyce W. Y. Chan, Junko C. S. Chan, Rainbow W. H. Lau, Tony S. K. Mok and Calvin S. H. Ng
Cancers 2026, 18(8), 1185; https://doi.org/10.3390/cancers18081185 - 8 Apr 2026
Abstract
Background/Objectives: Intrapulmonary metastasis (IPM) after an initial single primary lung cancer (SPLC) is an adverse follow-up pattern; however, when studying population-based longitudinal records, the determinants remain unclear. We aimed to identify factors associated with subsequent IPM after initial SPLC using artificial intelligence (AI)-driven analytical approaches. Methods: We used Surveillance, Epidemiology, and End Results (SEER) lung cancer records from 2000 to 2019. Adults with at least two records were restricted to those with SPLC at the first record. Outcome at the second record was registry-classified IPM versus persistent SPLC. A machine learning framework based on random forest models was developed using baseline variables, first record characteristics, and the interval between records. Temporal validation was performed by training on cases from 2000 to 2013 and testing on cases from 2014 to 2019. A dynamic Bayesian network (DBN) supported simulated intervention (SI) analyses to estimate model-implied risk ratios (RRs) with 95% confidence intervals (CIs). Results: Among 3450 patients, 361 had registry-classified IPM at the second record. The random forest model achieved an area under the curve (AUC) of 0.852 in internal validation and 0.929 in temporal validation. Surgery and record timing were the leading predictors. The DBN retained surgery as the only direct parent and achieved an AUC of 0.779. SI analyses showed higher IPM probability for pleural invasion level (PL) 3 versus PL 0, RR 1.378 (95% CI, 1.080–1.657). Lobectomy with mediastinal lymph node dissection versus wedge resection lowered the IPM probability, RR 0.378 (95% CI, 0.219–0.636). Conclusions: AI-based time-sequence modeling integrating machine learning and a DBN allowed for the identification of surgery, pleural invasion, and record timing as key factors associated with subsequent IPM classification after initial SPLC. 
This framework demonstrates the potential of combining predictive and probabilistic dependency modeling to investigate registry-based disease classification patterns, and may support hypothesis generation for future prospective studies. Full article
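The temporal-validation design this abstract describes (train on earlier registry years, test on later ones, evaluate by AUC) can be sketched in pure Python on synthetic records. The single feature, effect sizes, and split years below are illustrative stand-ins, not the study's actual SEER variables or model:

```python
import random

random.seed(0)

# Illustrative synthetic registry records: year of first record, a binary
# "surgery" flag, and an outcome loosely influenced by it (hypothetical effect).
def make_records(n, years):
    recs = []
    for _ in range(n):
        year = random.choice(years)
        surgery = random.random() < 0.5
        p = 0.15 if surgery else 0.35   # toy outcome probabilities
        recs.append({"year": year, "surgery": surgery,
                     "outcome": random.random() < p})
    return recs

records = make_records(2000, range(2000, 2020))

# Temporal validation: train on 2000-2013, test on 2014-2019.
train = [r for r in records if r["year"] <= 2013]
test = [r for r in records if r["year"] >= 2014]

# A one-feature "model": score each case by the outcome rate observed
# for its surgery value in the training split.
rate = {}
for s in (True, False):
    grp = [r for r in train if r["surgery"] == s]
    rate[s] = sum(r["outcome"] for r in grp) / len(grp)

def auc(scored):
    """Rank-based AUC: P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = [s for s, y in scored if y]
    neg = [s for s, y in scored if not y]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

test_auc = auc([(rate[r["surgery"]], r["outcome"]) for r in test])
print(round(test_auc, 3))
```

The point of the sketch is the split discipline: the score table is estimated only from pre-2014 cases and evaluated only on later ones, mirroring the paper's 2000–2013 / 2014–2019 protocol.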

20 pages, 1160 KB  
Review
Integrating Artificial Intelligence into Breast Cancer Histopathology: Toward Improved Diagnosis and Prognosis
by Gavino Faa, Eleonora Lai, Flaviana Cau, Ferdinando Coghe, Massimo Rugge, Jasjit S. Suri, Claudia Codipietro, Benedetta Congiu, Simona Graziano, Ekta Tiwari, Andrea Pretta, Pina Ziranu, Mario Scartozzi and Matteo Fraschini
Cancers 2026, 18(7), 1184; https://doi.org/10.3390/cancers18071184 - 7 Apr 2026
Abstract
Histopathological evaluation of tissue sections remains the gold standard for the diagnosis, classification, and grading of breast cancer (BC). The widespread adoption of whole-slide imaging (WSI) has enabled the digitization of histological slides and facilitated the development of artificial intelligence (AI) approaches for computational pathology. In recent years, machine learning and deep learning (DL) algorithms have been increasingly investigated for the analysis of hematoxylin and eosin (H&E)-stained images, with potential applications in tumor detection, histological classification, prognostic stratification, and prediction of treatment response. This narrative review summarizes recent developments in AI-driven models applied to BC histopathology and discusses their potential role in supporting diagnostic and prognostic assessment. Several studies have demonstrated the promising performance of DL algorithms in tasks such as the detection of lymph node metastases, assessment of residual tumor after neoadjuvant therapy, and prediction of clinical outcomes from histopathological images. Emerging research has also explored the possibility of inferring molecular and biomarker information from histology images, although these approaches currently identify statistical associations rather than direct molecular measurements. Despite the rapid expansion of this research field, significant barriers remain before routine clinical implementation can be achieved. Key challenges include dataset bias, variability in staining and image acquisition, limited external validation across institutions, and the need for transparent and reproducible model development. In addition, the translation of AI-based systems into clinical practice requires compliance with regulatory frameworks governing software used for medical purposes, such as those established by the U.S. Food and Drug Administration. 
Overall, AI represents a promising research direction in computational pathology and may contribute to decision-support tools capable of assisting pathologists in the analysis of digital slides. Continued efforts toward methodological rigor, large multicenter datasets, and prospective validation studies will be essential to determine the future role of AI in BC histopathology. Full article
(This article belongs to the Collection Artificial Intelligence in Oncology)

29 pages, 1848 KB  
Review
The Role of AI-Integrated Drone Systems in Agricultural Productivity and Sustainable Pest Management
by Muhammad Towfiqur Rahman, A. S. M. Bakibillah, Adib Hossain, Ali Ahasan, Md. Naimul Basher, Kabiratun Ummi Oyshe and Asma Mariam
AgriEngineering 2026, 8(4), 142; https://doi.org/10.3390/agriengineering8040142 - 7 Apr 2026
Abstract
Artificial intelligence (AI)-assisted drone technology in agriculture has transformed productivity and pest control techniques, resulting in novel solutions to modern farming challenges. Drones utilizing sensors, cameras, and AI algorithms can precisely monitor crop health, soil conditions, and insect infestations. Using AI-assisted drones for precision irrigation and yield predictions further improves resource allocation, promotes sustainability, and reduces operating costs. This review examines recent advancements in AI and unmanned aerial vehicles (UAVs) in precision agriculture. Key trends include AI-driven crop disease detection, UAV-enabled multispectral imaging, precision pest management, smart tractors, variable-rate fertilization, and integration with IoT-based decision support systems. This study synthesizes current research to identify technological progress, implementation challenges, scalability barriers, and opportunities for sustainable agricultural transformation. This review of peer-reviewed studies published between 2013 and 2025 uses major scientific databases and predefined inclusion and exclusion criteria covering crop monitoring, precision input application, integrated pest management (IPM), and livestock (especially cattle) monitoring. We describe the platform and payload trade-offs that govern coverage, endurance, and spray quality; the dominant analytics trends, from classical machine learning to deep learning and embedded/edge inference; and the emerging shift from monitoring-only UAV use toward closed-loop decision-making (detection–prediction–intervention). Across the literature, the strongest opportunities lie in robust field validation, multi-modal data fusion (UAV + ground sensors + farm records), and interoperable standards that enable actionable IPM decisions. 
Key gaps include limited cross-site generalization, scarce reporting of economic indicators (ROI, payback period, and adoption rate), and regulatory and safety barriers for routine autonomous operations. Finally, we present some case studies to emphasize the feasibility and highlight future research directions of AI-assisted drone technology. Through this review, we aim to demonstrate technological advancements, challenges, and future opportunities in AI-assisted drone applications, ultimately advocating for more sustainable and cost-effective farming practices. Full article

18 pages, 535 KB  
Review
Artificial Intelligence in Intraoperative Imaging and Navigation for Spine Surgery: A Narrative Review
by Mina Girgis, Allison Kelliher, Michael S. Pheasant, Alex Tang, Siddharth Badve and Tan Chen
J. Clin. Med. 2026, 15(7), 2779; https://doi.org/10.3390/jcm15072779 - 7 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly transforming spine surgery, with expanding applications in diagnostics, intraoperative imaging, and surgical navigation. As the field advances toward greater precision and safety, machine learning (ML) and deep learning technologies are being integrated to augment surgeon expertise and optimize operative workflows. In particular, AI-driven innovations in image acquisition and navigation are reshaping intraoperative decision-making and technical execution. This narrative review provides an overview of AI applications relevant to intraoperative imaging and navigation in spine surgery. We begin by defining key concepts in AI, ML, and deep learning and briefly outline the historical evolution of AI within spine practice. We then examine current capabilities in image recognition and automated pathology detection, emphasizing their clinical relevance. Given the central role of imaging accuracy in modern navigation-assisted procedures, we review conventional acquisition platforms, including intraoperative computed tomography (CT) systems (e.g., O-arm, GE, Airo), surface-based registration to preoperative CT (Stryker, Medtronic), and optical surface mapping technologies (e.g., 7D Surgical). Emerging AI-optimized advancements are subsequently discussed, including low-dose intraoperative CT protocols, expanded scan windows, metal artifact reduction algorithms, integration of 2D fluoroscopy with preoperative CT datasets, and 3D reconstruction derived from 2D imaging. These developments aim to improve image quality, reduce radiation exposure, and enhance navigational accuracy. By synthesizing current evidence and technological progress, this review highlights how AI-enhanced imaging systems are redefining intraoperative spine surgery and shaping the future of precision-based care. 
The primary purpose of this review is to outline the applications of AI and its potential for perioperative and intraoperative optimization, including radiation exposure reduction, workflow streamlining, preoperative planning, robot-assisted surgery, and navigation. The secondary purpose is to define AI, machine learning, and deep learning within the medical context, describe image and pathology recognition, and provide a historical overview of AI in orthopedic spine surgery. Full article
(This article belongs to the Special Issue Spine Surgery: Current Practice and Future Directions)

25 pages, 3712 KB  
Article
An AI-Enabled Single-Cell Transcriptomic Analysis Pipeline for Gene Signature Discovery in Natural Killer Cells Linked to Remission Outcomes in Chronic Myeloid Leukemia
by Santoshi Borra, Da Yan, Robert S. Welner and Zongliang Yue
Biology 2026, 15(7), 588; https://doi.org/10.3390/biology15070588 - 6 Apr 2026
Abstract
Background: A major technical challenge in single-cell transcriptomics is the absence of an integrative analytic pipeline that can simultaneously leverage gene regulatory network (GRN) architecture, AI-assisted gene panel discovery, and functional relevance analyses to generate coherent biological insights. Existing approaches often treat these components independently, focusing on clusters, marker genes, or predictive features without integrating them into a mechanistically grounded framework. Consequently, comprehensive screening that links regulatory association, gene signature screening, and functional interpretation within single-cell datasets remains limited, underscoring the need for an integrated strategy. Methods: We developed an integrative bioinformatics pipeline based on Gene regulatory network–AI–Functional Analysis (GAFA), combining latent-space integration, unsupervised clustering, diffusion pseudotime analysis, lineage-resolved generalized additive modeling, GRN inference, and machine learning-based gene panel discovery. This framework enables systematic mapping of cell-state structure, reconstruction of differentiation and effector trajectories, and identification of transcriptional and regulatory features strongly associated with clinical outcomes. As a case study, we applied the pipeline to NK cell transcriptomes from six CML patients (two early relapse, two late relapse, two durable treatment-free remission—TFR; 15 samples) collected at TKI discontinuation and 6–12 months after therapy cessation. Results: We reanalyzed publicly available scRNA-seq data from a previously published CML cohort to evaluate NK-cell transcriptional programs associated with treatment-free remission and relapse. 
We resolved six transcriptionally distinct NK cell states spanning CD56bright-like cytokine-responsive, early activated, terminally mature, cytotoxic, lymphoid trafficking, and HLA-DR+ immunoregulatory populations, each exhibiting outcome-specific compositional differences. Pseudotime analysis revealed two major NK cell lineages—a maturation trajectory and a cytotoxic effector trajectory. TFR samples displayed balanced occupancy of both lineages, whereas early relapse samples showed marked depletion of the maturation branch and preferential accumulation in cytotoxic end states. AI-guided feature selection and random forest modeling identified an 18-gene panel that distinguished NK cells from TFR and relapse samples in an exploratory manner. Among them, CST7, FCER1G, GNLY, GZMA, and HLA-C were conventional NK-associated genes, whereas ACTB, CYBA, IFITM2, IFITM3, LYZ, MALAT1, MT2A, MYOM2, NFKBIA, PIM1, S100A8, S100B, and TSC22D3 were novel. The GRN inference further uncovered outcome-specific regulatory modules, with RUNX3, EOMES, ELK4, and REL regulons enriched in TFR, whereas FOSL2 and MAF regulons were enriched in relapse, and their downstream targets linked to IFN-γ signaling, metabolic reprogramming, and immunoregulatory feedback circuits. Conclusions: This AI-enabled single-cell analysis demonstrates how NK cell state composition, differentiation trajectories, and regulatory network rewiring collectively shape TFR versus relapse following TKI discontinuation in CML. The integrative pipeline provides a modular framework that could be extended to additional datasets for data-driven biomarker discovery and mechanistic stratification, and highlights candidate transcriptional regulators and NK cell programs that may be leveraged to improve remission durability, pending validation in larger patient cohorts. Full article

22 pages, 812 KB  
Review
AI-Driven BCR Modeling for Precision Immunology
by Tao Liu, Xusheng Zhao and Fan Yang
Int. J. Mol. Sci. 2026, 27(7), 3296; https://doi.org/10.3390/ijms27073296 - 5 Apr 2026
Abstract
The B cell receptor (BCR) repertoire captures an individual’s immunological history and antigen-driven evolution within a vast, high-dimensional sequence space. Although bulk and single-cell adaptive immune receptor repertoire sequencing (AIRR-seq) now enables deep profiling of BCR diversity, interpreting these datasets remains challenging due to strong inter-individual heterogeneity, nonlinear sequence–structure–function relationships, dynamic clonal evolution, and the rarity of functionally relevant clones. Artificial intelligence (AI) provides a conceptual and computational framework for addressing these challenges. Here, we summarize how advanced deep learning architectures, including antibody-specific language models, graph neural networks (GNNs), and generative frameworks, uncover clonal topology, structural features, and antigen-binding semantics. We further highlight applications in cancer, infectious disease, and autoimmunity. Finally, we propose a closed-loop framework that integrates multimodal datasets, interpretable AI, and iterative experimental validation to advance predictive immunology and accelerate therapeutic antibody discovery. Full article
(This article belongs to the Special Issue Molecular Mechanism of Immune Response)

21 pages, 2333 KB  
Systematic Review
Artificial-Intelligence-Based Radiologic, Histopathologic, and Molecular Models for the Diagnosis and Classification of Malignant Salivary Gland Tumors: A Systematic Review and Functional Meta-Synthesis
by Carlos M. Ardila, Eliana Pineda-Vélez, Anny M. Vivares-Builes and Alejandro I. Díaz-Laclaustra
Med. Sci. 2026, 14(2), 183; https://doi.org/10.3390/medsci14020183 - 5 Apr 2026
Abstract
Background/Objectives: Malignant salivary gland tumors (MSGTs) are rare, biologically heterogeneous neoplasms in which histopathologic diagnosis and classification are challenging and subject to interobserver variability. Artificial intelligence (AI) approaches using radiologic, histopathologic, and molecular data, including radiomics, deep learning, and biomarker-based models, have been proposed as adjunctive diagnostic tools. This systematic review aimed to identify and critically appraise AI/ML models across radiologic, histopathologic, and molecular domains for distinct diagnostic tasks in MSGTs, and to integrate their diagnostic roles through a functional meta-synthesis. Methods: We conducted a PRISMA 2020-compliant systematic review. Embase, PubMed/MEDLINE, and Scopus were searched from inception to February 2026. Eligible studies developed or validated AI/ML diagnostic or classification models in human salivary gland tumor cohorts and reported extractable performance metrics. Results: From 1265 records, eight studies (1922 participants) met the inclusion criteria, spanning CT/MRI radiomics or deep learning (n = 4), whole-slide histopathology deep learning (n = 3), and DNA methylation-based classification (n = 1). External validation was reported in two CT-based benign–malignant discrimination studies, with AUCs of 0.890 (95% CI 0.844–0.937) and 0.745 (95% CI 0.699–0.791). Heterogeneity in model construction, outcome definitions, and validation strategies precluded meta-analysis. Risk of bias was frequently high in QUADAS-2/PROBAST assessments, driven by retrospective sampling, limited blinding, and analysis-related concerns, while calibration and utility were rarely assessed. Conclusions: AI/ML models for MSGTs demonstrate promising diagnostic performance, particularly for preoperative benign–malignant discrimination, but the current evidence base is limited by heterogeneity, predominantly internal validation, and high risk of bias. 
The functional meta-synthesis identified three convergent diagnostic domains: malignancy discrimination, histopathologic subtype classification, and molecular/epigenetic taxonomy refinement. Full article
(This article belongs to the Section Translational Medicine)

31 pages, 2118 KB  
Review
Artificial Intelligence Enabling Intelligent Solar Energy Systems: Integration and Emerging Directions
by Rogelio Ochoa-Barragán, Luis David Saavedra-Sánchez, Fabricio Nápoles-Rivera, César Ramírez-Márquez, Luis Fernando Lira-Barragán and José María Ponce-Ortega
Processes 2026, 14(7), 1167; https://doi.org/10.3390/pr14071167 - 4 Apr 2026
Abstract
The integration of artificial intelligence (AI) into solar energy systems has emerged as a transformative pathway to enhance efficiency, reliability, and sustainability in renewable energy. This review examines recent advances in AI-driven optimization and integration strategies across photovoltaic and solar thermal technologies with elements of bibliometric analysis to identify trends, methodologies, and research directions. A particular emphasis is placed on machine learning and deep learning techniques applied to solar irradiance forecasting, maximum power point tracking, fault detection, energy management, and predictive maintenance. Unlike earlier reviews that focused on isolated applications, this work highlights the systemic role of AI in enabling smart grids, hybrid systems, and large-scale energy storage integration. The novelty of this contribution lies in mapping the evolution from traditional control methods to intelligent, self-adaptive frameworks that couple physical modeling with data-driven approaches, offering a structured roadmap for future developments. Furthermore, the review identifies challenges such as data scarcity, computational demand, and interpretability of AI models, while outlining opportunities for process intensification, resilience, and techno-economic optimization. By bridging technical progress with implementation prospects, this article provides an updated reference for researchers, policymakers, and industry stakeholders seeking to accelerate the deployment of AI-enhanced solar energy solutions. Full article
(This article belongs to the Special Issue Modeling, Simulation and Control in Energy Systems—2nd Edition)
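Among the control tasks this review lists, maximum power point tracking has a particularly compact classical baseline, perturb and observe, against which AI-based trackers are typically compared. A sketch on a hypothetical single-peak power curve (the panel curve, peak voltage, and step size are all illustrative):

```python
# Toy PV power curve: P(V) peaks at V = 17.0 V (hypothetical panel).
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)

def perturb_and_observe(v0, step, iters):
    """Classical P&O MPPT: keep stepping in the direction that raised power."""
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe(v0=12.0, step=0.1, iters=200)
print(round(v_mpp, 1))
```

The tracker climbs the power curve and then oscillates within one step of the maximum; the residual oscillation and slow response under changing irradiance are precisely the weaknesses the learning-based trackers surveyed here aim to address.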

19 pages, 935 KB  
Article
Collaborative Optimization Strategy of Virtual Power Plants Considering Flexible HVDC Transmission of New Energy Sources to Enhance the Wind–Solar Power Consumption
by Jiajun Ou, Hao Lu, Jingyi Li, Di Cai, Nan Yang and Shiao Wang
Processes 2026, 14(7), 1162; https://doi.org/10.3390/pr14071162 - 3 Apr 2026
Abstract
In the scenario where renewable energy sources (RESs) are connected to the power system (PS) through a flexible high-voltage direct current (HVDC) transmission system, their output becomes highly intermittent and volatile due to meteorological factors like wind direction and speed. This variability poses significant challenges to the real-time power balance and control of the PS. To address the uncertainties in system operation and the challenges of RES consumption, this paper proposes an artificial intelligence (AI) algorithm-driven collaborative optimization strategy for virtual power plants (VPPs) considering RESs transmitted by flexible HVDC. Firstly, a self-attention mechanism and multiple gated structures are integrated into a long short-term memory (LSTM) deep learning model. This enhancement improves the model’s ability to capture multi-timescale characteristics of RESs, increasing forecasting accuracy and robustness. Based on these forecasts, a total cost optimization model for VPP operation is developed, which includes high penalty costs for wind and solar curtailment. By embedding economic constraints that prioritize RES usage, the model can reduce waste caused by traditional cost-driven scheduling. Additionally, to solve the high-dimensional nonlinear optimization problem in VPP scheduling, an improved population-based incremental learning (PBIL) algorithm is introduced. It incorporates an elite retention strategy and an adaptive mutation operator to boost global search efficiency and convergence speed. Simulations based on a VPP incorporating typical offshore wind and solar RESs transmitted via flexible HVDC demonstrate that the improved LSTM reduces MAPE by 7.14% for wind and 4.27% for PV compared to classical LSTM, and the proposed method achieves the lowest curtailment rates (wind 10.74%, PV 10.23%) and total cost (43,752 RMB), outperforming GA, PSO, and GW by 10–18% in cost reduction.
Simulation results show that the proposed strategy enhances RES consumption while maintaining system economy under flexible HVDC transmission. This work offers theoretical and practical insights for optimizing PS with high RES penetration and supports the low-carbon transition of new-type PS. Full article
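The classical form of population-based incremental learning (PBIL), which this paper extends with an elite-retention strategy and an adaptive mutation operator, can be sketched on a toy binary objective. The objective (OneMax) and all parameters below are illustrative only, not the paper's VPP scheduling problem:

```python
import random

random.seed(1)

N_BITS, POP, LR, GENERATIONS = 20, 30, 0.1, 60

def fitness(bits):
    """Toy objective (OneMax): count of 1-bits."""
    return sum(bits)

# PBIL keeps a probability vector instead of a population of genomes.
prob = [0.5] * N_BITS
best = None

for _ in range(GENERATIONS):
    # Sample a population from the current probability vector.
    pop = [[int(random.random() < p) for p in prob] for _ in range(POP)]
    elite = max(pop, key=fitness)
    if best is None or fitness(elite) > fitness(best):
        best = elite
    # Shift the probability vector toward the best sample of this generation.
    prob = [(1 - LR) * p + LR * b for p, b in zip(prob, elite)]

print(fitness(best), N_BITS)
```

Because the search state is a single probability vector rather than a full population, PBIL scales naturally to the high-dimensional discrete decisions in VPP scheduling; the paper's elite retention and adaptive mutation are refinements of the update step shown here.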

28 pages, 4837 KB  
Article
AI-Driven Adaptive Encryption Framework for a Modular Hardware-Based Data Security Device: Conceptual Architecture, Formal Foundations, and Security Analysis
by Pruthviraj Pawar and Gregory Epiphaniou
Appl. Sci. 2026, 16(7), 3522; https://doi.org/10.3390/app16073522 - 3 Apr 2026
Abstract
This paper presents a conceptual architecture for an AI-Driven Adaptive Encryption Device (AI-AED), a tri-modular hardware platform embodied in a registered industrial design. The device integrates a Secure Input Module, an AI-Enhanced Central Processing Unit with biometric authentication, and a Secure Output Module connected by unidirectional buses. We formalise the adaptive encryption policy as a constrained Markov decision process (CMDP) over a discrete action space of 216 cryptographic configurations, with safety constraints that provably prevent convergence to insecure states. A formal threat model based on extended Dolev–Yao assumptions with four physical access tiers defines attacker capabilities, and anti-downgrade safeguards enforce a monotonically non-decreasing security floor during threat escalation. An information-theoretic analysis shows that adaptive algorithm selection contributes an additional entropy term H(α) to ciphertext uncertainty, upper-bounded by log2(|L_enc|) ≈ 1.58 bits, while noting this represents increased attacker uncertainty rather than a strengthening of any individual cipher. A component-level latency model estimates 0.91–1.00 ms pipeline latency under normal operation and 3.14–3.42 ms under active threat, including integration overhead. Simulation validation over 1000 episodes compares a tabular Q-learning baseline against the proposed Deep Q-Network operating on the continuous state space: the DQN achieves 82% fewer constraint violations, 6× faster threat response, and more stable policy switching, demonstrating the advantage of continuous-state reinforcement learning for safety-critical adaptive encryption. All claims are positioned as theoretical contributions requiring empirical validation through prototype implementation. Full article
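The entropy bound quoted above, H(α) ≤ log2(|L_enc|) ≈ 1.58 bits, implies three candidate algorithm choices and is simply the maximum-entropy property of the uniform distribution. A minimal numeric check, using a hypothetical (not the paper's) policy distribution over the three algorithms:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical selection frequencies over three encryption algorithms.
policy = [0.5, 0.3, 0.2]
h = shannon_entropy(policy)
bound = math.log2(3)            # upper bound, about 1.585 bits

print(round(h, 3), round(bound, 3))
assert h <= bound
```

As the abstract is careful to note, this extra uncertainty concerns which cipher an attacker faces, not the strength of any individual cipher: H(α) adds at most log2(3) bits regardless of the policy.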

26 pages, 1892 KB  
Review
Artificial Intelligence–Driven Tools in Mental Health Service Delivery: A Scoping Review
by Yeshin Woo and Kibum Jung
Healthcare 2026, 14(7), 943; https://doi.org/10.3390/healthcare14070943 - 3 Apr 2026
Abstract
Background: Artificial intelligence (AI) holds transformative potential for mental health services. However, existing reviews have predominantly focused on algorithmic accuracy, with limited attention to how these technologies are implemented and integrated into real-world service delivery. This scoping review addresses this gap by examining the contexts in which AI technologies—including large language models (LLMs) and machine learning—are implemented, as well as the factors influencing their sustainable adoption within real-world mental health service systems. Methods: Following the established methodological framework, a systematic search (2015–2026) was conducted in PubMed and Scopus. Two independent reviewers screened an initial pool of 829 records using Zotero and Rayyan to minimize selection bias. Following title, abstract, and full-text screening based on predefined eligibility criteria, 26 studies focusing on real-world AI applications (e.g., clinical settings, community services, and case management) were included in the final synthesis. Results: The findings indicate a rapid acceleration in research, with 50% of included studies (n = 13) published since 2024. AI-driven decision support systems were the most prevalent (50%, n = 13), followed by predictive machine learning models (27%) and generative AI applications (15%). Most tools were designed for clinician use (77%) and implemented in hospital-based settings (46%). Although 46% of studies reported real-world implementation, more than half remained at the pilot stage. Notably, research emphasis has shifted from technical efficacy toward feasibility and implementation contexts (n = 17). Conclusion: AI in mental health is transitioning from laboratory validation to real-world integration. However, the current landscape remains heavily centered on clinician workflows and screening functions, with limited expansion into community-based recovery and long-term prevention.
To move beyond the pilot stage, future initiatives should prioritize seamless workflow integration and the application of structured ethical and implementation frameworks that support clinician–patient relationships. This review provides an evidentiary basis for advancing sustainable, AI-enhanced mental health service delivery. Full article
(This article belongs to the Special Issue Artificial Intelligence in Health Services Research and Organizations)
32 pages, 436 KB  
Article
Learning-Augmented Quasi-Gradient Operators for Constrained Optimization: A Contraction–Bias–Variance Decomposition
by Gilberto Pérez-Lechuga, Marco Antonio Coronel García and Ana Lidia Martínez Salazar
Mathematics 2026, 14(7), 1202; https://doi.org/10.3390/math14071202 - 3 Apr 2026
Abstract
This paper develops a rigorous operator-theoretic framework for learning-augmented quasi-gradient methods in constrained optimization. We consider the minimization of an objective function over a closed convex feasible set, where feasibility is enforced via projection and directional updates may incorporate data-driven corrections. Such settings arise naturally in modern optimization algorithms that integrate artificial intelligence components under structural constraints. The proposed formulation introduces an explicit contraction–bias–variance decomposition of the iterative dynamics. Curvature induces deterministic contraction, alignment distortion—quantified by a geometric parameter—modifies the effective contraction margin, and stochastic learning components inject controlled dispersion. Explicit error recursions yield convergence guarantees under strong convexity, the Polyak–Łojasiewicz condition, and smooth nonconvexity. The analysis establishes that stability regions and first-order complexity bounds are preserved whenever alignment distortion remains below unity and bounded second-moment conditions hold. A fully reproducible computational study provides quantitative validation: the empirically observed steady-state error closely matches the theoretical prediction proportional to \(\sigma^2/(\mu(1-\eta))\). Comparative experiments with gradient, stochastic gradient, and momentum methods confirm that the proposed operator retains classical stability margins and conditioning sensitivity while enabling principled integration of learned directional components. The results provide a transparent mathematical bridge between stochastic approximation theory and contemporary AI-enhanced constrained optimization. Full article
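The contraction–bias–variance behavior described in this abstract can be illustrated with a minimal, hedged sketch (not the authors' implementation): a projected stochastic quasi-gradient iteration on a one-dimensional strongly convex objective over a box constraint, where a Gaussian term stands in for the stochastic learned directional component. All function names and parameter values below are illustrative assumptions.

```python
import random

# Minimal sketch of a projected stochastic quasi-gradient iteration on
# f(x) = 0.5 * mu * x^2 over the box [-1, 1]. The Gaussian term stands in
# for a stochastic "learned" directional correction; parameter values are
# illustrative, not taken from the paper.
def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

def run(mu=2.0, step=0.1, sigma=0.05, iters=2000, seed=0):
    rng = random.Random(seed)
    x = 1.0  # start on the boundary of the feasible set
    for _ in range(iters):
        grad = mu * x                  # exact gradient of 0.5 * mu * x^2
        noise = rng.gauss(0.0, sigma)  # stochastic learned component
        x = project(x - step * (grad + noise))
    return x

# Iterates contract toward the minimizer x* = 0 and settle in a small
# noise-dominated neighborhood whose radius grows with sigma.
x_final = run()
```

Under strong convexity the deterministic part of this update contracts by a fixed factor per iteration, so the residual error is driven entirely by the injected noise, mirroring the decomposition the abstract describes.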
17 pages, 4279 KB  
Review
Bibliometric Analysis on Control Architectures for Robotics in Agriculture
by Simone Figorilli, Simona Violino, Simone Vasta, Federico Pallottino, Giorgio Manca, Lorenzo Bianchi and Corrado Costa
Robotics 2026, 15(4), 75; https://doi.org/10.3390/robotics15040075 - 3 Apr 2026
Abstract
(1) Background: Robotics and advanced control architectures are increasingly central to the development of precision agriculture (PA), supporting automated, efficient, and data-driven farm management. This review offers a comprehensive analysis of scientific literature on robotic control systems applied to PA, focusing on technological progress, methodological approaches, and emerging research trends. (2) Methods: A systematic review was conducted according to PRISMA guidelines, combined with a bibliometric analysis using VOSviewer to examine term co-occurrences, thematic clusters, and topic evolution over time. Publications indexed in Scopus between 1976 and 2025 were analyzed. (3) Results: The analysis reveals sharp growth in publications after 2010, with a strong acceleration from 2015 onward, reflecting advances in autonomous systems and the integration of artificial intelligence, sensor technologies, and distributed software frameworks. Three principal clusters emerged: algorithmic and control methods (e.g., neural networks, path tracking, simulation); sensing and infrastructure technologies (e.g., LiDAR, SLAM, IMU, ROS, deep learning-based perception); and agronomic applications, including crop monitoring, irrigation, yield estimation, and farm management. Citation trends indicate a shift from foundational control theory to AI-driven solutions. (4) Conclusions: Overall, control architectures are evolving toward modular, scalable, and interoperable systems enabling autonomous decision-making in complex agricultural environments. Full article
(This article belongs to the Section Agricultural and Field Robotics)
15 pages, 1755 KB  
Article
A Faculty-Constructed AI Tutor for Personalized Learning and Remediation in a U.S. PharmD Immunology Course: An “In-House” Evaluation of New Learning Technology
by Ashim Malhotra
Pharmacy 2026, 14(2), 59; https://doi.org/10.3390/pharmacy14020059 - 3 Apr 2026
Abstract
As generative AI becomes increasingly available in higher education, faculty find it challenging to design, implement, and evaluate AI-enabled personalized learning systems within accreditation-constrained professional curricula. This method paper describes ADAPT (Assessment-Driven AI for Personalized Tutoring), a home-grown AI tutoring and remediation ecosystem implemented in a required PharmD immunology course. Using standard learning management (Canvas) and assessment (ExamSoft) platforms, a 20-item quiz mapped to six immunology mastery domains (N = 34; mean 69.1%, SD 17.9; Cronbach’s α = 0.73) was used to trigger tiered, structured generative AI remediation at both individual student and cohort levels. Instructional impact was evaluated using reliability indices, item-level difficulty analyses, and paired pre/post-assessment comparisons. Following AI-guided remediation, mean performance increased to 79.8% (+10.7 percentage points), variability decreased (SD 14.4), and assessment reliability improved (ExamSoft KR-20 0.87) compared with the diagnostic exam, the first midterm exam, and the final exam. Item difficulty stabilized (mean ≈ 0.80), with sustained retention of targeted concepts on the final examination. ADAPT provides a replicable, low-cost methodological blueprint for faculty to independently construct assessment-driven AI tutoring systems and lays the groundwork for a future AI-based predictive analytics workflow for at-risk students. Full article
(This article belongs to the Section Pharmacy Education and Student/Practitioner Training)
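The reliability indices cited in the abstract above (Cronbach's α, ExamSoft KR-20) are standard internal-consistency statistics for dichotomous items. The sketch below shows the textbook KR-20 formula on a tiny hypothetical score matrix; the data are illustrative, not the study's.

```python
# KR-20 internal-consistency reliability for dichotomously scored items:
# KR-20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(total scores)),
# where p_j is the proportion correct on item j and q_j = 1 - p_j.
# This sketch uses population variance (n denominator) for the total score.
def kr20(scores):
    n_students = len(scores)
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n_students
    var_t = sum((t - mean_t) ** 2 for t in totals) / n_students
    pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in scores) / n_students  # item difficulty
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var_t)

# Hypothetical 4-student, 3-item matrix (rows = students, cols = items):
demo = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(kr20(demo))  # → 0.75
```

Higher values indicate that item responses covary strongly with the total score; values in the 0.7–0.9 range, like those reported for the ADAPT assessments, are conventionally read as acceptable to good reliability.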
15 pages, 381 KB  
Article
Assessment Validity in the Age of Generative AI: A Natural Experiment
by Håvar Brattli, Alexander Utne and Matthew Lynch
Informatics 2026, 13(4), 56; https://doi.org/10.3390/informatics13040056 - 3 Apr 2026
Abstract
Universities play a dual role as sites of learning and as institutions that certify student competence through assessment. The rapid diffusion of generative artificial intelligence (GenAI) challenges this certification function by altering the conditions under which assessment evidence is produced. When powerful AI tools are widely available, grades may increasingly reflect a combination of individual understanding and external cognitive support rather than solely independent competence. This study examines how changes in assessment format interact with GenAI availability to reshape observable performance outcomes in higher education. Using exam grade data from a compulsory undergraduate course delivered over five years (2021–2025; N = 1066), the study exploits a naturally occurring change in assessment conditions as a natural experiment. From 2021 to 2024, the course was assessed using an AI-permissive take-home examination, while in 2025 the assessment shifted to an AI-restricted, supervised in-person examination. Course content, intended learning outcomes, grading criteria, examiner continuity, and the structural design of the examination tasks remained stable across cohorts. The results reveal a pronounced shift in grade distributions coinciding with the format change. Failure rates increased sharply in 2025, mid-range grades declined, and the proportion of top grades remained largely unchanged. Statistical analysis indicates a significant association between examination period and grade outcomes (χ²(5, N = 1066) = 60.62, p < 0.001), with a small-to-moderate effect size (Cramér’s V = 0.24), driven primarily by the increase in failing grades. These findings suggest that AI-permissive and AI-restricted assessment formats may not be measurement-equivalent under conditions of widespread GenAI use. 
The results raise concerns about construct validity and the credibility of grades as signals of independent competence, while also highlighting tensions between certification credibility and assessment authenticity. Full article
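The effect size reported in the abstract above can be checked arithmetically: Cramér's V is defined as sqrt(χ² / (N · (k − 1))), where k is the smaller dimension of the contingency table (here two examination periods × six grade categories, so k = 2). A minimal sketch using the abstract's reported values:

```python
import math

# Cramér's V from a chi-square statistic: V = sqrt(chi2 / (N * (k - 1))),
# with k the smaller dimension of the contingency table. The inputs below
# are the values reported in the abstract (chi2 = 60.62, N = 1066, k = 2).
def cramers_v(chi2, n, min_dim):
    return math.sqrt(chi2 / (n * (min_dim - 1)))

v = cramers_v(60.62, 1066, 2)  # ≈ 0.238, i.e. the reported V ≈ 0.24
```

For a 2 × c table, V reduces to the phi coefficient, which is why the degrees of freedom (5) do not enter the effect-size formula here.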