Search Results (1,981)

Search Parameters:
Keywords = multi-agent model

23 pages, 4593 KB  
Article
Integrated Omics Approach to Delineate the Mechanisms of Doxorubicin-Induced Cardiotoxicity
by Mohamed S. Dabour, Ibrahim Y. Abdelgawad, Bushra Sadaf, Mary R. Daniel, Marianne K. O. Grant, Anne H. Blaes, Pamala A. Jacobson and Beshay N. Zordoky
Pharmaceuticals 2026, 19(2), 234; https://doi.org/10.3390/ph19020234 - 29 Jan 2026
Abstract
Background/Objectives: Doxorubicin (DOX) is an effective chemotherapeutic agent whose clinical utility is limited by cardiotoxicity. To investigate underlying mechanisms, we employed a multi-omics approach integrating transcriptomics and proteomics, leveraging established mouse models of chronic DOX-induced cardiotoxicity. Methods: Five-week-old male mice received weekly DOX (4 mg/kg) or saline injections for six weeks, with heart tissues harvested 4 days post-treatment. Differentially expressed genes (DEGs) and proteins (DEPs) were identified by bulk RNA-seq and proteomics, validated via qPCR and Western blot, respectively. Key DEPs were validated in plasma samples from DOX-treated breast cancer patients. Additionally, a temporal comparison was conducted between DEPs in mouse hearts 4 days and 6 weeks post-DOX. Results: RNA-seq revealed upregulation of stress-responsive genes (Phlda3, Trp53inp1) and circadian regulators (Nr1d1), with downregulation of Apelin and Cd74. Proteomics identified upregulation of serpina3n, thrombospondin-1, and epoxide hydrolase 1. Plasma SERPINA3 concentrations were significantly elevated in breast cancer patients 24 h post-DOX. Gene set enrichment analysis (GSEA) revealed upregulated pathways, including p53 signaling, apoptosis, and unfolded protein response. Integrated omics analysis revealed 2089 gene–protein pairs. GSEA of concordant gene–protein pairs implicated p53 signaling, apoptosis, and epithelial–mesenchymal transition in upregulated pathways, while oxidative phosphorylation and metabolic pathways were downregulated. A temporal comparison with a delayed timepoint (6 weeks post-DOX) uncovered dynamic remodeling of cardiac signaling, with the early response dominated by inflammatory and apoptotic responses and the delayed response marked by cell cycle and DNA repair pathway activation. Conclusions: This integrated omics study reveals key molecular pathways and temporal changes in DOX-induced cardiotoxicity, identifying potential biomarkers for future cardioprotective strategies. Full article
(This article belongs to the Special Issue Advances in Cancer Treatment and Toxicity)
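The integration step described in the abstract pairs transcriptomic and proteomic hits and keeps the concordant ones for GSEA. A minimal sketch of that idea, assuming hypothetical DEG/DEP tables keyed by gene symbol with log2 fold changes (the paper's actual pipeline, gene lists, and thresholds are not given in the abstract):

    # Sketch: match DEGs to DEPs by gene symbol and keep concordant pairs
    # (same direction of change). Symbols and fold-change values are illustrative.
    import pandas as pd

    degs = pd.DataFrame({"gene": ["Phlda3", "Trp53inp1", "Nr1d1", "Apelin", "Cd74"],
                         "log2fc_rna": [2.1, 1.8, 1.5, -1.9, -1.2]})
    deps = pd.DataFrame({"gene": ["Phlda3", "Serpina3n", "Thbs1", "Apelin", "Cd74"],
                         "log2fc_prot": [1.4, 2.5, 1.1, -0.8, 0.3]})

    pairs = degs.merge(deps, on="gene")                       # gene-protein pairs
    concordant = pairs[(pairs.log2fc_rna * pairs.log2fc_prot) > 0]
    print(concordant)            # pairs changing in the same direction feed GSEA
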

19 pages, 2008 KB  
Proceeding Paper
A Novel Security Index for Assessing Information Systems in Industrial Organizations Using Web Technologies and Fuzzy Logic
by Sulieman Khaddour, Fares Abu-Abed and Valery Bogatikov
Eng. Proc. 2025, 117(1), 38; https://doi.org/10.3390/engproc2025117038 - 29 Jan 2026
Abstract
Industrial information systems based on web technologies (ISOWT) face escalating security challenges, particularly in critical sectors such as energy. Traditional qualitative security assessments often lack the ability to deliver actionable, real-time insights for managing complex, dynamic threats. This paper proposes a novel security index for evaluating ISOWT in industrial organizations by integrating fuzzy logic, metric-based evaluation, fuzzy Markov chains, and multi-agent systems. The proposed index quantifies deviations from an ideal “center of safety,” enabling early risk detection and proactive mitigation. The methodology is validated through two real-world case studies in Syria’s energy sector, namely the Ministry of Electricity website and the Mahrukat fuel management system. Experimental results demonstrate substantial improvements, including a 45.9–58.5% increase in the security index, 56.9–60.3% reduction in page load times, and 78.3–82.4% decrease in detected vulnerabilities. Comparative analysis shows that the proposed approach outperforms existing methods in terms of quantitative precision, real-time monitoring, and predictive capabilities. The proposed framework is scalable, automated, and adaptable, addressing key limitations of existing ISOWT security assessment techniques and providing a robust tool for enhancing system resilience. Its flexibility enables applicability across diverse industrial domains, contributing to advanced cybersecurity practices for critical infrastructure. Future work will focus on integrating advanced technologies, expanding the framework to additional sectors, developing adaptive fuzzy models, accounting for human factors, and improving visualization techniques to further address the evolving security challenges faced by industrial organizations. Full article
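As an illustration of the "center of safety" idea, the sketch below scores a system by how far normalized security metrics deviate from their ideal values. The metric names, measured values, weights, and aggregation rule are hypothetical placeholders, not the paper's actual index or its fuzzy Markov machinery.

    # Sketch: weighted deviation of normalized security metrics from an ideal
    # "center of safety" (1.0 = ideal). Metrics, weights, values are illustrative.
    metrics = {                        # name: (measured value in [0, 1], weight)
        "patch_coverage":    (0.72, 0.30),
        "tls_configuration": (0.90, 0.20),
        "auth_strength":     (0.65, 0.25),
        "vuln_scan_score":   (0.55, 0.25),
    }

    deviation = sum(w * (1.0 - v) for v, w in metrics.values())
    security_index = 1.0 - deviation           # 1.0 means no deviation from ideal
    print(f"security index = {security_index:.3f}")
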

37 pages, 5937 KB  
Article
A Multi-Task Service Composition Method Considering Inter-Task Fairness in Cloud Manufacturing
by Zhou Fang, Yanmeng Ying, Qian Cao, Dongsheng Fang and Daijun Lu
Symmetry 2026, 18(2), 238; https://doi.org/10.3390/sym18020238 - 29 Jan 2026
Abstract
Within the cloud manufacturing paradigm, Cloud Manufacturing Service Composition (CMSC) is a core technology for intelligent resource orchestration in Cloud Manufacturing Platforms (CMP). However, existing research faces critical limitations in real-world CMP operations: single-task-centric optimization ignores resource sharing/competition among coexisting manufacturing tasks (MTs), causing performance degradation and resource “starvation”; traditional heuristics require full re-execution for new scenarios, failing to support real-time online decision-making; single-agent reinforcement learning (RL) lacks mechanisms to balance global efficiency and inter-task fairness, suffering from inherent fairness defects. To address these challenges, this paper proposes a fairness-aware multi-task CMSC method based on Multi-Agent Reinforcement Learning (MARL) under the Centralized Training with Decentralized Execution (CTDE) framework, targeting the symmetry-breaking issue of uneven resource allocation among MTs and aiming to achieve symmetry restoration by restoring relative balance in resource acquisition. The method constructs a multi-task CMSC model that captures real-world resource sharing/competition among concurrent MTs, and integrates a centralized global coordination agent into the MARL framework (with independent task agents per MT) to dynamically regulate resource selection probabilities, overcoming single-agent fairness defects while preserving distributed autonomy. Additionally, a two-layer attention mechanism is introduced—task-level self-attention for intra-task subtask correlations and global state self-attention for critical resource features—enabling precise synergy between local task characteristics and global resource states. Experiments verify that the proposed method significantly enhances inter-task fairness while maintaining superior global Quality of Service (QoS), demonstrating its effectiveness in balancing efficiency and fairness for dynamic multi-task CMSC. Full article
(This article belongs to the Section Computer)
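The efficiency-versus-fairness trade-off in the abstract can be pictured with a scalar fairness measure over per-task QoS. The sketch below uses Jain's fairness index and an equal weighting as one common choice; the paper's actual fairness metric and reward shaping are not specified in the abstract, so treat this as a generic illustration.

    # Sketch: combine mean QoS (efficiency) with Jain's fairness index over
    # per-task QoS values. The 0.5/0.5 weighting is an illustrative choice.
    def jain_fairness(xs):
        return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

    task_qos = [0.92, 0.88, 0.41, 0.85]           # QoS of four concurrent tasks
    efficiency = sum(task_qos) / len(task_qos)
    fairness = jain_fairness(task_qos)             # 1.0 = perfectly even
    reward = 0.5 * efficiency + 0.5 * fairness     # signal a coordinator could use
    print(f"efficiency={efficiency:.3f}, fairness={fairness:.3f}, reward={reward:.3f}")
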

20 pages, 1982 KB  
Article
Optimization of Monitoring Node Layout in Desert–Gobi–Wasteland Regions Based on Deep Reinforcement Learning
by Zifen Han, Qingquan Lv, Zhihua Xie, Runxiang Li and Jiuyuan Huo
Symmetry 2026, 18(2), 237; https://doi.org/10.3390/sym18020237 - 29 Jan 2026
Abstract
Desert–Gobi–wasteland regions possess abundant wind resources and are strategic areas for future renewable energy development and meteorological monitoring. However, existing studies have limited capability in addressing the highly complex and dynamic environmental characteristics of these regions. In particular, few modeling approaches can jointly represent terrain variability, solar radiation distribution, and wind-field characteristics within a unified framework. Moreover, conventional deep reinforcement learning methods often suffer from learning instability and coordination difficulties when applied to multi-agent layout optimization tasks. To address these challenges, this study constructs a multidimensional environmental simulation model that integrates terrain, solar radiation, and wind speed, enabling a quantitative and controllable representation of the meteorological monitoring network layout problem. Based on this environment, an Environment-Aware Proximal Policy Optimization (EA-PPO) algorithm is proposed. EA-PPO adopts a compact environment-related state representation and a utility-guided reward mechanism to improve learning stability under decentralized decision-making. Furthermore, a Global Layout Optimization Algorithm based on EA-PPO (GLOAE) is developed to enable coordinated optimization among multiple monitoring nodes through shared utility feedback. Simulation results demonstrate that the proposed methods achieve superior layout quality and convergence performance compared with conventional approaches, while exhibiting enhanced robustness under dynamic environmental conditions. These results indicate that the proposed framework provides a practical and effective solution for intelligent layout optimization of meteorological monitoring networks in desert–Gobi–wasteland regions. Full article
(This article belongs to the Section Computer)
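One way to picture the utility-guided layout problem is a grid of per-cell environmental scores (terrain, solar radiation, wind) and a placement utility that rewards covering high-value cells without double counting. The sketch below is a toy version of that formulation; the field values, coverage radius, and weights are invented, not the paper's simulation model.

    # Sketch: utility of a set of monitoring nodes on a small grid, where each
    # cell's value combines terrain suitability, solar radiation, and wind speed.
    import numpy as np

    rng = np.random.default_rng(0)
    terrain, solar, wind = (rng.random((20, 20)) for _ in range(3))
    value = 0.2 * terrain + 0.3 * solar + 0.5 * wind       # illustrative weights

    def layout_utility(nodes, radius=3):
        covered = np.zeros_like(value, dtype=bool)
        ys, xs = np.indices(value.shape)
        for (r, c) in nodes:
            covered |= (ys - r) ** 2 + (xs - c) ** 2 <= radius ** 2
        return value[covered].sum()                         # overlap counted once

    print(layout_utility([(5, 5), (14, 6), (10, 15)]))
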

38 pages, 4412 KB  
Article
A Modular ROS–MARL Framework for Cooperative Multi-Robot Task Allocation in Construction Digital Environments
by Xinghui Xu, Samuel A. Prieto and Borja García de Soto
Buildings 2026, 16(3), 539; https://doi.org/10.3390/buildings16030539 - 28 Jan 2026
Abstract
The deployment of autonomous robots in construction remains constrained by the complexity and variability of real-world environments. Conventional programming and single-agent approaches lack the adaptability required for dynamic multi-robot operating conditions, underscoring the need for cooperative, learning-based systems. This paper presents an ROS-based modular framework that integrates Multi-Agent Reinforcement Learning (MARL) into a generic 2D simulation and execution pipeline for cooperative mobile robots in construction-oriented digital environments to enable adaptive task allocation and coordinated execution without predefined datasets or manual scheduling. The framework adopts a centralized-training, decentralized-execution (CTDE) scheme based on Multi-Agent Proximal Policy Optimization (MAPPO) and decomposes the system into interchangeable modules for environment modelling, task representation, robot interfaces, and learning, allowing different layouts, task sets, and robot models to be instantiated without redesigning the core architecture. Validation through an ROS-based 2D simulation and real-world experiments using TurtleBot3 robots demonstrated effective task scheduling, adaptive navigation, and cooperative behavior under uncertainty. In simulation, the learned MAPPO policy is benchmarked against non-learning baselines for multi-robot task allocation, and in real-robot experiments, the same policy is evaluated to quantify and discuss the performance gap between simulated and physical execution. Rather than presenting a complete construction-site deployment, this first study focuses on proposing and validating a reusable MARL–ROS framework and digital testbed for multi-robot task allocation in construction-oriented digital environments. The results show that the framework supports effective cooperative task scheduling, adaptive navigation, and logic-consistent behavior, while highlighting practical issues that arise in sim-to-real transfer. Overall, the framework provides a reusable digital foundation and benchmark for studying adaptive and cooperative multi-robot systems in construction-related planning and management contexts. Full article
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)
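As a reference point for the non-learning baselines mentioned in the abstract, a simple greedy allocator assigns each task to the nearest currently available robot. This sketch is only a generic baseline under assumed 2D positions, not the paper's MAPPO policy or its ROS interfaces.

    # Sketch: greedy nearest-robot task allocation in 2D. Positions are invented.
    import math

    robots = {"tb3_a": (0.0, 0.0), "tb3_b": (4.0, 1.0), "tb3_c": (1.0, 5.0)}
    tasks = [(3.5, 0.5), (1.2, 4.0), (0.5, 1.0)]

    def greedy_allocate(robots, tasks):
        assignment, pos = {}, dict(robots)
        for i, t in enumerate(tasks):
            name = min(pos, key=lambda r: math.dist(pos[r], t))
            assignment[i] = name
            pos[name] = t                  # robot moves to the task it just took
        return assignment

    print(greedy_allocate(robots, tasks))
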
20 pages, 1085 KB  
Article
Evaluating Fairness in LLM Negotiator Agents via Economic Games Using Multi-Agent Systems
by Ahmad Mouri Zadeh Khaki and Ahyoung Choi
Mathematics 2026, 14(3), 458; https://doi.org/10.3390/math14030458 - 28 Jan 2026
Abstract
With the surge of artificial intelligence (AI) systems, autonomous Large Language Model (LLM)-based negotiator agents are being developed to negotiate on behalf of humans, particularly in commercial contexts. In human interactions, marginalized groups, such as racial minorities and women, often face unequal outcomes due to gender and social biases. Since these models are trained on human data, a key question arises: do LLM-based agents reflect existing biases in human interaction in their negotiation strategies? To address this question, we investigated the impact of such biases in one of the most advanced LLMs available, ChatGPT-4 Turbo, by employing a buyer–seller game approach using male and female agents from four racial groups (White, Black, Asian, and Latino). We found that when either the seller or buyer is aware of the gender and race of the other player, they secure more profit compared to when negotiations are gender- and race-blind. Additionally, we examined the influence of conditioning buyer agents to improve their negotiation strategy by prompting them with an additional persona. Interestingly, we observed that such conditioning can mitigate LLM-based agents’ biases, suggesting a way to empower underrepresented groups to achieve more equitable outcomes. Based on the findings of this study, while LLM-generated text may not exhibit explicit biases, hidden gender and social biases in the training data can still lead to skewed outcomes for users. Therefore, it is crucial to mitigate these biases and prevent their transfer during dataset curation to ensure fair human–agent interactions and build user trust. Full article
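The buyer–seller game can be pictured as an alternating-offer loop in which each side is driven by a prompted language model. The sketch below stubs the model call with a hypothetical ask_llm function and a fixed concession rule, since the paper's exact prompts and its ChatGPT-4 Turbo interface are not reproduced here.

    # Sketch: alternating-offer negotiation loop. ask_llm is a hypothetical stub;
    # a real experiment would send the persona and history to an LLM API instead.
    def ask_llm(role, persona, last_offer):
        # Stub: seller concedes 5% per round, buyer raises 5% per round.
        return last_offer * (0.95 if role == "seller" else 1.05)

    def negotiate(seller_persona, buyer_persona, ask=100.0, bid=60.0, rounds=10):
        for _ in range(rounds):
            ask = ask_llm("seller", seller_persona, ask)
            bid = ask_llm("buyer", buyer_persona, bid)
            if bid >= ask:                        # offers crossed: deal at midpoint
                return (ask + bid) / 2
        return None                               # no agreement

    print(negotiate("female Asian seller", "race-blind buyer"))
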
23 pages, 4070 KB  
Article
Formal Verification of Trust in Multi-Agent Systems Under Generalized Possibility Theory
by Ruiqi Huang, Zhanyou Ma and Nana He
Mathematics 2026, 14(3), 456; https://doi.org/10.3390/math14030456 - 28 Jan 2026
Abstract
In multi-agent systems, the interactions between autonomous agents within dynamic and uncertain environments are crucial for achieving their objectives. Current research leverages model checking techniques to verify these interactions, with social accessibility relations commonly used to formalize agent interactions. In multi-agent systems that incorporate generalized possibility measures, the quantification, computation, and model checking of trust properties present significant challenges. This paper introduces an indirect model checking algorithm designed to transform social trust under uncertainty into quantifiable properties for verification. A Generalized Possibilistic Trust Interpreted System (GPTIS) is proposed to model and characterize multi-agent systems with trust-related uncertainties. Subsequently, the trust operators are extended based on Generalized Possibilistic Computation Tree Logic (GPoCTL) to develop the Generalized Possibilistic Trust Computation Tree Logic (GPTCTL), which is employed to express the trust properties of the system. Then, a model checking algorithm that maps trust accessibility relations to trust actions is introduced, thereby transforming the model checking of GPTCTL on GPTIS into model checking of GPoCTL on Generalized Possibility Kripke Structures (GPKSs). The proposed algorithm is provided with a correctness proof and complexity analysis, followed by an example demonstrating its practical feasibility. Full article
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)
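To give a flavor of possibilistic verification, the sketch below computes the possibility of eventually reaching a target state in a small possibilistic Kripke structure, where a path's possibility is the minimum of its transition possibilities and reachability takes the maximum over paths. This is a generic max-min fixpoint illustration, not the paper's GPTCTL-to-GPoCTL reduction.

    # Sketch: max-min "possibility of eventually reaching target" on a small
    # possibilistic Kripke structure. Transition possibilities are illustrative.
    T = {  # state -> {successor: possibility of that transition}
        "s0": {"s1": 0.8, "s2": 0.4},
        "s1": {"s3": 0.6, "s0": 1.0},
        "s2": {"s3": 0.9},
        "s3": {"s3": 1.0},
    }

    def reach_possibility(T, target, iters=50):
        poss = {s: (1.0 if s == target else 0.0) for s in T}
        for _ in range(iters):                        # fixpoint iteration
            for s in T:
                if s == target:
                    continue
                poss[s] = max(min(p, poss[t]) for t, p in T[s].items())
        return poss

    print(reach_possibility(T, "s3"))   # e.g. s0 -> s1 -> s3 gives min(0.8, 0.6)
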
28 pages, 2984 KB  
Article
Behaviorally Embedded Multi-Agent Optimization for Urban Microgrid Energy Coordination Under Social Influence Dynamics
by Dawei Wang, Cheng Gong, Yifei Li, Hao Ma, Tianle Li and Shanna Luo
Energies 2026, 19(3), 687; https://doi.org/10.3390/en19030687 - 28 Jan 2026
Abstract
Urban microgrids are evolving into socially coupled energy systems in which prosumer decisions are shaped by both market incentives and peer influence. Conventional optimization approaches overlook this behavioral interdependence and offer limited adaptability under environmental disturbances. This study develops a behaviorally embedded multi-agent optimization framework that integrates social influence propagation with physical power network coordination. Each prosumer’s decision process incorporates economic, comfort, and behavioral components, while a community operator enforces system-wide feasibility. The resulting bilevel structure is formulated as an equilibrium problem with equilibrium constraints (EPEC) and solved using an iterative hierarchical algorithm. A modified 33-bus urban microgrid with 40 socially connected agents is assessed under stochastic wildfire ignition and propagation scenarios to evaluate resilience under hazard-driven uncertainty. Incorporating behavioral responses increases welfare by 11.8%, reduces cost variance by 9.1%, and improves voltage stability by 23% compared with conventional models. Under wildfire stress, socially cohesive agents converge more rapidly and maintain more stable dispatch patterns. The findings highlight the critical role of social topology in shaping both equilibrium behavior and resilience. The framework provides a foundation for socially responsive and hazard-adaptive optimization in next-generation human-centric energy systems. Full article
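The social-influence side of the framework can be illustrated with a simple DeGroot-style update, in which each prosumer's willingness to curtail load drifts toward a weighted average of its neighbors' attitudes. The weights and initial attitudes below are invented; the paper's actual influence dynamics, 40-agent network, and EPEC coupling are not reproduced here.

    # Sketch: DeGroot-style opinion propagation over a row-stochastic influence
    # matrix. Values are illustrative, not the paper's social topology.
    import numpy as np

    W = np.array([[0.6, 0.2, 0.2, 0.0],      # row-stochastic influence weights
                  [0.1, 0.7, 0.1, 0.1],
                  [0.2, 0.2, 0.5, 0.1],
                  [0.0, 0.3, 0.2, 0.5]])
    attitude = np.array([0.9, 0.2, 0.5, 0.1])    # initial willingness to curtail

    for _ in range(20):                           # propagate peer influence
        attitude = W @ attitude

    print(attitude.round(3))                      # attitudes drift toward consensus
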
31 pages, 1431 KB  
Article
Multi-Scenario Assessment of Imbalance Settlement Mechanisms in a Provincial Dual-Track Electricity Market: An EMS-Oriented Framework
by Mingyang Wang and Haoyong Chen
Energies 2026, 19(3), 683; https://doi.org/10.3390/en19030683 - 28 Jan 2026
Abstract
In provincial electricity markets where long-term contracts and spot trading coexist, multiple categories of imbalance funds arise from congestion, energy deviations and dual-track price differences, posing challenges to energy management systems (EMS) in terms of fair and robust settlement. This paper proposes an EMS-oriented framework to assess and diagnose alternative imbalance settlement mechanisms in a provincial dual-track market. First, a unified settlement model is developed that reconstructs key imbalance fund categories and allocates them to heterogeneous agents—thermal, renewable and storage units and different user groups—under a library of settlement rules. Second, a multi-scenario simulation platform is built, covering normal operation, tight supply and high-renewable-volatility conditions. Third, a multi-criteria evaluation scheme is designed to quantify economic efficiency, fairness, risk and renewable support for each mechanism–scenario combination. Finally, a category–agent two-dimensional diagnostic module is introduced to reveal misallocation patterns and the main money-transfer paths among fund categories and agent groups. A case study on a realistic provincial system shows that the proposed framework can distinguish mechanisms with better overall robustness, identify severe cross-subsidies in extreme scenarios and provide practical guidance for refining imbalance settlement parameters within EMS-driven market operations. Full article
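A simplified version of the category-to-agent allocation step is to split one imbalance fund across agent groups in proportion to an allocation key such as metered energy deviation. The fund size, groups, and key values below are illustrative; the paper's settlement rule library is richer than this pro-rata example.

    # Sketch: pro-rata allocation of one imbalance fund category to agent groups.
    # Fund size and allocation keys (energy deviations in MWh) are invented.
    fund = 120_000.0                               # imbalance fund to distribute
    deviation_mwh = {"thermal": 350.0, "renewable": 520.0,
                     "storage": 80.0, "industrial_users": 250.0}

    total = sum(deviation_mwh.values())
    allocation = {g: fund * d / total for g, d in deviation_mwh.items()}
    for group, share in allocation.items():
        print(f"{group:>17s}: {share:10.2f}")
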

24 pages, 4295 KB  
Article
Construction of a Prognostic Model for Lung Adenocarcinoma Based on Necrosis by Sodium Overload-Related Genes and Identification of DENND1C as a New Prognostic Marker
by Huijun Tan, Yang Zhang, Maoting Tan and Depeng Jiang
Curr. Issues Mol. Biol. 2026, 48(2), 146; https://doi.org/10.3390/cimb48020146 - 28 Jan 2026
Abstract
Background: Lung adenocarcinoma (LUAD) remains a leading cause of cancer-related mortality. The prognostic significance and functional role of sodium overload-induced necrosis (a novel form of regulated cell death driven by disrupted sodium homeostasis, hereafter abbreviated as NECSO) in LUAD are largely unexplored. Methods: A prognostic model was constructed utilizing the NECSO key gene TRPM4 and analyzed through Cox, LASSO, and multivariate Cox regression analyses. LUAD patients were stratified into high- and low-risk groups. The model’s predictive performance was evaluated using time-dependent ROC curves and nomograms. Functional enrichment analysis elucidated underlying biological disparities. The tumor immune microenvironment was characterized using ESTIMATE, ssGSEA, CIBERSORTx, and TIDE algorithms, with results corrected for multiple testing. Drug sensitivity to chemotherapeutic and targeted agents was predicted. The functional role of a key gene, DENND1C, was validated in vitro. Its association with immunotherapy survival outcomes was assessed in a real-world cohort. Results: The NECSO-based prognostic signature demonstrated robust performance in risk stratification across training and independent validation cohorts. Patients in the high-risk group exhibited significantly shorter overall survival. Functional enrichment revealed associations with processes related to plasma membrane integrity, cell death, metabolism, and immune response. Multi-algorithm immunogenomic analyses consistently identified an immunosuppressive microenvironment in high-risk patients. The risk score was predictive of differential sensitivity to therapeutics, including taxanes and EGFR inhibitors. In vitro experiments confirmed DENND1C as a tumor suppressor, inhibiting LUAD cell proliferation, invasion, and migration. Furthermore, high DENND1C expression was associated with improved survival in patients receiving immunotherapy. Conclusions: This study establishes and validates a novel NECSO-based prognostic model for LUAD. DENND1C is identified as a key tumor suppressor and a potential biomarker for immunotherapy, offering insights for personalized treatment strategies in LUAD. Full article
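The risk-stratification step common to signatures like this one multiplies each model gene's expression by its Cox coefficient, sums the products into a risk score, and splits patients at the median. The coefficients, gene names, and expression values below are placeholders, not the fitted NECSO model.

    # Sketch: linear risk score from Cox/LASSO coefficients and a median split.
    # Gene names, coefficients, and expression values are illustrative only.
    import numpy as np
    import pandas as pd

    coef = {"TRPM4": 0.42, "DENND1C": -0.55, "GENE_X": 0.18}    # placeholder betas
    expr = pd.DataFrame(np.random.default_rng(1).normal(size=(6, 3)),
                        columns=list(coef),
                        index=[f"patient_{i}" for i in range(6)])

    risk = (expr * pd.Series(coef)).sum(axis=1)                 # sum(beta_i * x_i)
    group = np.where(risk > risk.median(), "high-risk", "low-risk")
    print(pd.DataFrame({"risk_score": risk.round(3), "group": group}))
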

30 pages, 4996 KB  
Article
Energy-Efficient, Multi-Agent Deep Reinforcement Learning Approach for Adaptive Beacon Selection in AUV-Based Underwater Localization
by Zahid Ullah Khan, Hangyuan Gao, Farzana Kulsoom, Syed Agha Hassnain Mohsan, Aman Muhammad and Hassan Nazeer Chaudry
J. Mar. Sci. Eng. 2026, 14(3), 262; https://doi.org/10.3390/jmse14030262 - 27 Jan 2026
Abstract
Accurate and energy-efficient localization of autonomous underwater vehicles (AUVs) remains a fundamental challenge due to the complex, bandwidth-limited, and highly dynamic nature of underwater acoustic environments. This paper proposes a fully adaptive deep reinforcement learning (DRL)-driven localization framework for AUVs operating in Underwater Acoustic Sensor Networks (UAWSNs). The localization problem is formulated as a Markov Decision Process (MDP) in which an intelligent agent jointly optimizes beacon selection and transmit power allocation to minimize long-term localization error and energy consumption. A hierarchical learning architecture is developed by integrating four actor–critic algorithms, which are (i) Twin Delayed Deep Deterministic Policy Gradient (TD3), (ii) Soft Actor–Critic (SAC), (iii) Multi-Agent Deep Deterministic Policy Gradient (MADDPG), and (iv) Distributed DDPG (D2DPG), enabling robust learning under non-stationary channels, cooperative multi-AUV scenarios, and large-scale deployments. A round-trip time (RTT)-based geometric localization model incorporating a depth-dependent sound speed gradient is employed to accurately capture realistic underwater acoustic propagation effects. A multi-objective reward function jointly balances localization accuracy, energy efficiency, and ranging reliability through a risk-aware metric. Furthermore, the Cramér–Rao Lower Bound (CRLB) is derived to characterize the theoretical performance limits, and a comprehensive complexity analysis is performed to demonstrate the scalability of the proposed framework. Extensive Monte Carlo simulations show that the proposed DRL-based methods achieve significantly lower localization error, lower energy consumption, faster convergence, and higher overall system utility than classical TD3. These results confirm the effectiveness and robustness of DRL for next-generation adaptive underwater localization systems. Full article
(This article belongs to the Section Ocean Engineering)
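The geometric part of the localization model converts a measured round-trip time into a range using the local sound speed, which varies with depth. The sketch below assumes a simple linear sound-speed gradient averaged over the beacon and AUV depths; the actual profile and gradient value used in the paper are not given in the abstract.

    # Sketch: RTT-to-range conversion with a linear, depth-dependent sound speed
    # c(z) = c0 + g*z. Surface speed, gradient, and depths are illustrative.
    def sound_speed(depth_m, c0=1500.0, gradient=0.017):
        return c0 + gradient * depth_m             # m/s, linear profile

    def rtt_to_range(rtt_s, auv_depth_m, beacon_depth_m):
        c_avg = 0.5 * (sound_speed(auv_depth_m) + sound_speed(beacon_depth_m))
        return c_avg * rtt_s / 2.0                 # one-way distance in metres

    print(f"{rtt_to_range(0.40, auv_depth_m=120.0, beacon_depth_m=10.0):.1f} m")
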
35 pages, 2414 KB  
Article
Hierarchical Caching for Agentic Workflows: A Multi-Level Architecture to Reduce Tool Execution Overhead
by Farhana Begum, Craig Scott, Kofi Nyarko, Mansoureh Jeihani and Fahmi Khalifa
Mach. Learn. Knowl. Extr. 2026, 8(2), 30; https://doi.org/10.3390/make8020030 - 27 Jan 2026
Abstract
Large Language Model (LLM) agents depend heavily on multiple external tools such as APIs, databases and computational services to perform complex tasks. However, these tool executions create latency and introduce costs, particularly when agents handle similar queries or workflows. Most current caching methods focus on LLM prompt–response pairs or execution plans and overlook redundancies at the tool level. To address this, we designed a multi-level caching architecture that captures redundancy at both the workflow and tool level. The proposed system integrates four key components: (1) hierarchical caching that operates at both the workflow and tool level to capture coarse and fine-grained redundancies; (2) dependency-aware invalidation using graph-based techniques to maintain consistency when write operations affect cached reads across execution contexts; (3) category-specific time-to-live (TTL) policies tailored to different data types, e.g., weather APIs, user location, database queries and filesystem and computational tasks; and (4) session isolation to ensure multi-tenant cache safety through automatic session scoping. We evaluated the system using synthetic data with 2.25 million queries across ten configurations in fifteen runs. In addition, we conducted four targeted evaluations—write intensity robustness from 4 to 30% writes, personalized memory effects under isolated vs. shared cache modes, workflow-level caching comparison and workload sensitivity across five access distributions—on an additional 2.565 million queries, bringing the total experimental scope to 4.815 million executed queries. The architecture achieved 76.5% caching efficiency, reducing query processing time by 13.3× and lowering estimated costs by 73.3% compared to a no-cache baseline. Multi-tenant testing with fifteen concurrent tenants confirmed robust session isolation and 74.1% efficiency under concurrent workloads. Our evaluation used controlled synthetic workloads following Zipfian distributions, which are commonly used in caching research. While absolute hit rates vary by deployment domain, the architectural principles of hierarchical caching, dependency tracking and session isolation remain broadly applicable. Full article
(This article belongs to the Section Learning)
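The core of the tool-level layer can be pictured as a cache keyed by (session, tool, arguments) with a per-category TTL, so that volatile data such as weather expires quickly while stable computations live longer. The sketch below shows that one layer only, with invented category names and TTLs; the paper's full system adds a workflow-level layer and graph-based dependency invalidation.

    # Sketch: session-scoped tool-result cache with category-specific TTLs.
    # Category names and TTL values are illustrative, not the paper's settings.
    import time

    TTL_SECONDS = {"weather_api": 300, "db_query": 1800, "computation": 86400}

    class ToolCache:
        def __init__(self):
            self._store = {}                # (session, tool, args) -> (time, value)

        def get_or_call(self, session, tool, args, category, fn):
            key = (session, tool, args)
            hit = self._store.get(key)
            if hit and time.time() - hit[0] < TTL_SECONDS[category]:
                return hit[1]               # fresh cached result, tool not re-run
            value = fn(*args)               # execute the real tool
            self._store[key] = (time.time(), value)
            return value

    cache = ToolCache()
    slow_square = lambda x: x * x           # stand-in for an expensive tool call
    print(cache.get_or_call("session_1", "square", (7,), "computation", slow_square))
    print(cache.get_or_call("session_1", "square", (7,), "computation", slow_square))  # hit
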

25 pages, 2201 KB  
Article
Design and Research of a Dual-Target Drug Molecular Generation Model Based on Reinforcement Learning
by Peilin Li, Ziyan Yan, Yuchen Zhou, Hongyun Li, Wei Gao and Dazhou Li
Inventions 2026, 11(1), 12; https://doi.org/10.3390/inventions11010012 - 26 Jan 2026
Abstract
Dual-target drug design addresses complex diseases and drug resistance, yet existing computational approaches struggle with simultaneous multi-protein optimization. This study presents SFG-Drug, a novel dual-target molecular generation model combining Monte Carlo tree search with gated recurrent unit neural networks for simultaneous MEK1 and mTOR targeting. The methodology employed DigFrag digital fragmentation on ZINC-250k dataset, integrated low-frequency masking techniques for enhanced diversity, and utilized molecular docking scores as reward functions. Comprehensive evaluation on MOSES benchmark demonstrated superior performance compared to state-of-the-art methods, achieving perfect validity (1.000), uniqueness (1.000), and novelty (1.000) scores with highest internal diversity indices (0.878 for IntDiv1, 0.860 for IntDiv2). Over 90% of generated molecules exhibited favorable binding affinity with both targets, showing optimal drug-like properties including QED values in [0.2, 0.7] range and high synthetic accessibility scores. Generated compounds demonstrated structural novelty with Tanimoto coefficients below 0.25 compared to known inhibitors while maintaining dual-target binding capability. The SFG-Drug model successfully bridges the gap between computational prediction and practical drug discovery, offering significant potential for developing new dual-target therapeutic agents and advancing AI-driven pharmaceutical research methodologies. Full article
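The novelty criterion quoted in the abstract (Tanimoto below 0.25 against known inhibitors) can be checked with standard fingerprint similarity. The sketch below uses RDKit Morgan fingerprints on placeholder SMILES strings; the actual generated molecules and reference inhibitors are not reproduced here.

    # Sketch: Tanimoto similarity of a generated molecule against reference
    # inhibitors using RDKit Morgan fingerprints. SMILES are placeholders.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    references = ["CCOc1ccccc1C(=O)N", "c1ccc2[nH]ccc2c1"]    # placeholder inhibitors
    candidate = "CC(=O)Nc1ccc(O)cc1"                          # placeholder generated molecule

    def fingerprint(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

    cand_fp = fingerprint(candidate)
    max_sim = max(DataStructs.TanimotoSimilarity(cand_fp, fingerprint(s)) for s in references)
    print(f"max Tanimoto = {max_sim:.2f}, novel = {max_sim < 0.25}")
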

22 pages, 4388 KB  
Article
Multivariable Intelligent Control Methods for Pretreatment Processes in the Safe Utilization of Phosphogypsum
by Xiangjin Zeng and Cong Xi
Processes 2026, 14(3), 436; https://doi.org/10.3390/pr14030436 - 26 Jan 2026
Abstract
The safe pretreatment of phosphogypsum involves a multivariable control process with strong coupling and nonlinear behavior, which limits the effectiveness of conventional control methods. To address this issue, an intelligent control strategy combining fuzzy control with a deep deterministic policy gradient (DDPG) algorithm is proposed. A multi-input multi-output control model is established using pH, moisture content, and flow rate as key variables, and a DDPG agent is employed to adaptively adjust the gain of the fuzzy controller. Simulation results demonstrate that the proposed method achieves faster response and improved stability, yielding a pH settling time of approximately 2.5 s and a steady-state moisture-content error on the order of 0.02 under representative operating conditions. Full article
(This article belongs to the Section Process Control and Monitoring)
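The coupling described in the abstract, with an RL agent tuning a fuzzy controller's gain, can be sketched as a gain that scales a defuzzified correction computed from the pH error. The membership functions, rule outputs, and gain value below are placeholders; the paper's DDPG training loop and full multi-input multi-output model are omitted.

    # Sketch: one fuzzy inference step for pH control, scaled by an output gain
    # that a DDPG agent would adapt online. All numbers are illustrative.
    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_ph_correction(error, gain):
        # Memberships of the pH error in three fuzzy sets.
        neg, zero, pos = tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)
        # Rule outputs: dose base / hold / dose acid (illustrative consequents).
        num = neg * 1.0 + zero * 0.0 + pos * (-1.0)
        den = neg + zero + pos
        return gain * (num / den if den else 0.0)   # weighted-average defuzzification

    print(fuzzy_ph_correction(error=-0.6, gain=2.5))  # gain would come from the DDPG agent
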

20 pages, 1225 KB  
Systematic Review
Efficacy of Phytotherapy for Cancer-Related Fatigue: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
by Silvio Matsas, Ursula Medeiros Araujo de Matos, Carolina Molina Llata and Auro del Giglio
Diseases 2026, 14(2), 39; https://doi.org/10.3390/diseases14020039 - 26 Jan 2026
Abstract
Background: Cancer-related fatigue (CRF) is one of the most common and burdensome symptoms faced by patients with cancer, yet effective drug-based treatments remain limited. In recent years, phytotherapeutic agents have drawn attention as complementary options, supported by plausible anti-inflammatory, antioxidant, and immunomodulatory mechanisms. Methods: We performed a systematic review and meta-analysis to quantitatively synthesize randomized controlled trial evidence on the efficacy of phytotherapeutic interventions for cancer-related fatigue and to assess the certainty of evidence. Databases were searched from inception, with the final search update completed in October 2025. Eligible studies included adults with CRF and compared herbal interventions with placebo controls. Standardized mean differences (SMDs) were pooled using a DerSimonian–Laird random-effects model. We also evaluated risk of bias (RoB 2), publication bias, and certainty of evidence using GRADE. This systematic review and meta-analysis was conducted in accordance with the PRISMA 2020 guidelines. Results: Fourteen trials were included, studying agents such as Paullinia cupana, Panax ginseng, multi-herbal Traditional Chinese Medicine formulations, and other botanical extracts. Overall, phytotherapy provided a modest improvement in CRF (SMD = 0.31; 95% CI, 0.08–0.53; p = 0.022), though heterogeneity was substantial (I2 = 56.7%). In subgroup analyses, only the group of “other formulations” demonstrated significant benefit; ginseng and guaraná did not demonstrate statistically significant effects. Most trials had high or unclear risk of bias, and the certainty of evidence was rated very low. Conclusions: Current evidence does not firmly support phytotherapeutic agents as effective treatments for CRF, hindered largely by methodological weaknesses, heterogeneous interventions, and imprecise effect estimates. Even so, the biological rationale and the variability in clinical responses point toward an opportunity for the emerging field of precision herbal oncology. Well-designed, multicenter trials are essential to determine whether phytotherapy can meaningfully contribute to CRF management. Full article
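The pooling step the review describes can be reproduced with the standard DerSimonian–Laird estimator: a fixed-effect estimate yields Cochran's Q, Q gives the between-study variance tau^2, and the SMDs are re-weighted by 1/(v_i + tau^2). The per-study SMDs and variances below are invented for illustration, not the fourteen included trials.

    # Sketch: DerSimonian-Laird random-effects pooling of standardized mean
    # differences. Study-level SMDs and variances are illustrative only.
    import numpy as np

    smd = np.array([0.45, 0.10, 0.60, 0.25, 0.05])    # per-study effect sizes
    var = np.array([0.04, 0.02, 0.09, 0.03, 0.05])    # per-study variances

    w_fixed = 1.0 / var
    mu_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (smd - mu_fixed) ** 2)        # Cochran's Q
    df = len(smd) - 1
    tau2 = max(0.0, (Q - df) / (w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()))

    w_rand = 1.0 / (var + tau2)                        # random-effects weights
    mu_rand = np.sum(w_rand * smd) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    print(f"pooled SMD = {mu_rand:.2f} (95% CI {mu_rand - 1.96*se:.2f} to {mu_rand + 1.96*se:.2f})")
    print(f"tau^2 = {tau2:.3f}, I^2 = {max(0.0, (Q - df) / Q) * 100:.1f}%")
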
