Search Results (570)

Search Parameters:
Keywords = chain rule

35 pages, 5585 KB  
Article
A General Procedure for Basic Kinematic Chain Formation and Topology Selection for Planar Mechanisms
by Arthur Erdman, John Titus, Mahmud Suhaimi Ibrahim and Sean Mather
Designs 2026, 10(3), 46; https://doi.org/10.3390/designs10030046 - 27 Apr 2026
Abstract
In a complete kinematic synthesis process, a designer must select a planar linkage topology that is well suited to their problem situation. This involves weighing a set of competing priorities. For example, is it better to choose a simple topology like a four-bar mechanism that will be cheaper to produce, or a complex topology like an eight-bar mechanism that can produce intricate motions but will also be more expensive and more difficult to synthesize? The process of selecting the topology is broadly known as type synthesis, or sometimes structure synthesis, and has been studied in the past. However, past works on planar linkage type synthesis have overemphasized isomorphism detection, identifying the complete set of unique topologies up to a certain number of links, while the central problem of choosing the ideal topology has often been overlooked. In this work, a general procedure for forming basic kinematic chains (BKCs), a simplified topological representation, is presented. Then, a set of rules and design principles is provided that can help a designer narrow the infinite possible BKC options down to a manageable set. A few practical examples are provided to demonstrate the concepts and show that the procedure is effective. A literature review is also provided that examines past works, as well as introducing alternative approaches, such as simultaneous algorithmic methods and spatial methods. Full article
(This article belongs to the Section Mechanical Engineering Design)
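The four-bar vs. eight-bar trade-off mentioned in the abstract can be made concrete with the standard Kutzbach mobility criterion for planar linkages; the sketch below is illustrative and not taken from the article itself.

```python
def planar_mobility(n_links: int, n_joints: int) -> int:
    """Kutzbach criterion for planar linkages with single-DOF
    (e.g. revolute) joints: M = 3*(n - 1) - 2*j."""
    return 3 * (n_links - 1) - 2 * n_joints

# A four-bar mechanism (4 links, 4 joints) and a typical eight-bar
# chain (8 links, 10 joints) are both single-DOF mechanisms:
print(planar_mobility(4, 4))   # 1
print(planar_mobility(8, 10))  # 1
```

Both topologies satisfy the same mobility requirement, which is why the choice between them rests on cost and motion complexity rather than on degrees of freedom alone.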

22 pages, 1737 KB  
Article
Data-Driven Simulation–Optimization for Sustainable (s, S) Inventory Policy Design Under Demand and Lead-Time Uncertainty
by Deng-Guei You, Chun-Ho Wang and Yen-Te Li
Sustainability 2026, 18(9), 4305; https://doi.org/10.3390/su18094305 - 27 Apr 2026
Abstract
Inventory policy design in modern supply chains must balance cost efficiency, service reliability, and responsible resource utilization under significant demand and supply uncertainty. In many real-world supply chains, both customer demand and replenishment lead time exhibit substantial variability, making the design of continuous-review (s, S) inventory policies challenging. Although stochastic inventory models have been widely studied, many existing approaches rely on simplified assumptions or single-objective formulations, which may limit their applicability under simultaneous demand and lead-time uncertainty. This study proposes a data-driven multi-objective simulation–optimization framework for designing sustainable (s, S) inventory policies under dual uncertainty. The framework integrates empirical stochastic modeling, Monte Carlo simulation, and evolutionary multi-objective optimization to evaluate trade-offs between expected inventory cost and service performance. To enhance methodological rigor, statistical reliability control is incorporated into the simulation-based evaluation process to ensure that Pareto dominance relationships are not distorted by simulation noise. Historical operational data are used to estimate probability distributions for demand and lead time, which are incorporated into a stochastic simulation model representing inventory system dynamics. A multi-objective evolutionary algorithm (NSGA-II) is employed to identify Pareto-efficient policy parameters. An empirical case study from a health supplement supply chain demonstrates how the framework identifies efficient replenishment policies under realistic uncertainty conditions. The results reveal structural trade-offs between inventory cost and service level and show that data-driven policy design can improve decision transparency compared with heuristic replenishment rules. 
The proposed approach provides a structured decision-support tool for selecting replenishment policies that balance service continuity and inventory sustainability in shelf-life-constrained supply chains. Full article
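As an illustration of the kind of policy evaluation the abstract describes, the following is a minimal Monte Carlo sketch of an (s, S) replenishment policy under random demand and lead time. All distributions, costs, and parameter values are placeholders, not the paper's empirical model (which fits distributions to historical data and optimizes with NSGA-II).

```python
import random

def simulate_sS(s, S, n_periods=10_000, seed=0, demand_mean=20,
                lead_time_max=3, hold_cost=1.0, order_cost=50.0):
    """Minimal Monte Carlo sketch of a periodic-review (s, S) policy:
    when the inventory position drops below s, order up to S; demand
    and lead time are random. Returns (avg cost per period, fill rate)."""
    rng = random.Random(seed)
    on_hand, pipeline = S, []          # pipeline: (arrival_period, qty)
    cost = served = demanded = 0.0
    for t in range(n_periods):
        # receive any outstanding orders due this period
        on_hand += sum(q for due, q in pipeline if due == t)
        pipeline = [(due, q) for due, q in pipeline if due != t]
        # placeholder demand draw (the paper fits empirical distributions)
        d = rng.randint(0, 2 * demand_mean)
        demanded += d
        served += min(d, on_hand)
        on_hand = max(on_hand - d, 0)  # lost sales, no backorders
        # review inventory position and reorder up to S if below s
        position = on_hand + sum(q for _, q in pipeline)
        if position < s:
            lead = rng.randint(1, lead_time_max)
            pipeline.append((t + lead, S - position))
            cost += order_cost
        cost += hold_cost * on_hand
    return cost / n_periods, served / demanded

avg_cost, fill_rate = simulate_sS(s=40, S=120)
```

Sweeping (s, S) pairs through such a simulator and keeping the non-dominated (cost, fill-rate) pairs is the essence of the simulation–optimization loop the paper automates.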

26 pages, 5595 KB  
Article
A Digital Restoration Method Driven by Mathematical Composition Rules and Their Application: A Case Study of Ming Dynasty Pavilion-Style Stone Pagodas in Fuzhou and the Restoration of the Luoxing Pagoda’s Finial
by Yuanyi Zhang, Lele Zhu, Jinhong Li and Gang Chen
Buildings 2026, 16(9), 1701; https://doi.org/10.3390/buildings16091701 - 26 Apr 2026
Abstract
In the practice of historic building conservation and restoration, the authentic restoration of damaged components often faces challenges due to the lack of definitive design evidence. To address this issue, this paper proposes a restoration derivation method that integrates digital survey technologies, such as UAV oblique photogrammetry and 3D laser scanning, with the analysis of historical mathematical composition rules. Taking five Ming Dynasty pavilion-style stone pagodas in Fuzhou as subjects, this study first employed digital surveying and cross-verification with ancient texts to reveal their shared, precise proportional system: the eave–column ratio of the Ruiyun Pagoda approaches √2 (≈1.414), while the other four pagodas approach the golden ratio of 1.618. Furthermore, the pagoda silhouettes are governed by a √2 hierarchical system and a √3/2 visual correction mechanism. Based on these mathematical rules, a triple logical chain of “historical evidence verification–functional constraints–traditional adaptation” was constructed and applied to the quantitative restoration design of the damaged finial of the Luoxing Pagoda. This process ultimately derived the relationship between its total height and the first-story width as (L/2 + √2/2), with the finial height being 1/7 of the pagoda body’s total height. This case study validates the effectiveness of the proposed method in transforming profound historical wisdom into clear engineering parameters, offering a replicable and verifiable technical pathway for the digital conservation and scientific restoration of similar architectural heritage. Full article
(This article belongs to the Special Issue Urban Renewal: Protection and Restoration of Existing Buildings)
21 pages, 1231 KB  
Article
Disaster-Resilient Service Function Chain Deployment Based on Multi-Path Routing and Deep Reinforcement Learning
by Yun Xie and Junbin Liang
Electronics 2026, 15(9), 1795; https://doi.org/10.3390/electronics15091795 - 23 Apr 2026
Abstract
Network function virtualization (NFV) enables flexible service deployment by implementing network functions as software, with service function chains (SFCs) linking virtual network functions (VNFs) in a specific order to deliver end-to-end services. However, ensuring SFC resilience against large-scale disasters that can disrupt entire disaster zones (DZs) remains a significant challenge. In this paper, we study the multipath disaster-resilient SFC deployment problem, aiming to minimize the total bandwidth and computing resource overhead by jointly optimizing VNF placement, multipath routing, and protection mechanisms, subject to DZ-disjoint constraints. We formulate this problem as a Mixed-Integer Nonlinear Programming (MINLP) model and prove it to be NP-hard. To solve it efficiently, we propose a two-stage adaptive deployment approach; the first stage employs heuristic rules to generate a set of candidate paths satisfying DZ-disjoint constraints, and the second stage leverages deep reinforcement learning to intelligently place VNFs along these candidate paths, approximating the global optimum. Simulation results on real network topologies demonstrate that, compared to traditional dedicated protection strategies and a state-of-the-art exact algorithm, the proposed approach reduces resource overhead by up to 20% while effectively guaranteeing SFC disaster resilience, exhibiting good scalability and online deployment potential. Full article
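The DZ-disjoint constraint described above, i.e. that primary and backup paths may not traverse a common disaster zone, can be sketched as a simple set check; the topology and zone assignment below are hypothetical.

```python
def dz_disjoint(primary, backup, zone_of):
    """Check that two candidate SFC paths traverse no common disaster
    zone (DZ). `zone_of` maps a node to its DZ id; nodes outside any
    DZ map to None and are ignored. Endpoints shared by both paths
    (source/destination) are excluded from the check."""
    shared_endpoints = {primary[0], primary[-1]} & {backup[0], backup[-1]}
    def zones(path):
        return {zone_of.get(v) for v in path
                if v not in shared_endpoints} - {None}
    return zones(primary).isdisjoint(zones(backup))

# Hypothetical 6-node topology with two disaster zones:
zone_of = {"a": None, "b": "DZ1", "c": "DZ1",
           "d": "DZ2", "e": "DZ2", "f": None}
print(dz_disjoint(["a", "b", "c", "f"], ["a", "d", "e", "f"], zone_of))  # True
print(dz_disjoint(["a", "b", "c", "f"], ["a", "d", "b", "f"], zone_of))  # False
```

In the paper's first stage, a filter of this kind would prune candidate path pairs before the reinforcement-learning stage places VNFs along the survivors.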
24 pages, 2587 KB  
Article
Logistical Performance of a COVID-19 Vaccination Campaign in a Decentralized Health System
by Amanda Caroline Silva Rívolli, Isabela Antunes de Souza Lima, Camila Candida Compagnoni dos Reis, Íngrid Ribeiro Antonio and Márcia Marcondes Altimari Samed
COVID 2026, 6(5), 73; https://doi.org/10.3390/covid6050073 - 23 Apr 2026
Abstract
Background/Objectives: The COVID-19 pandemic imposed logistical challenges on health systems, particularly for mass vaccination campaigns under emergency conditions. In decentralized health systems, the absence of a structured preparedness phase may compromise coordination, allocation, and operational performance. This study analyzes the vaccination campaign in a municipality in southern Brazil, examining how the overlap of the preparedness and response phases affected outcomes and how alternative logistical scenarios could have altered campaign performance. Methods: An empirical analysis was conducted using scenario-based simulation with stock and flow structures. The model represents vaccine procurement, distribution across national, state, regional, and municipal levels, and municipal vaccination capacity. Real data from the 2021 vaccination campaign in the municipality were used to build a Business-as-Usual scenario, compared with alternative scenarios involving changes in procurement predictability, allocation rules, and operational capacity. Results: Vaccination outcomes were strongly conditioned by upstream allocation decisions, particularly at the national state level. Isolated adjustments at intermediate supply chain levels produced limited improvements when upstream constraints persisted. Scenarios combining improved alignment between forecasted and acquired doses with operational capacity showed higher vaccination potential, revealing a gap between observed performance and system capacity. Conclusions: The findings reinforce that preparedness is a critical determinant of vaccination performance and must precede response in emergency contexts. Supply predictability alone is insufficient without coordinated allocation mechanisms and operational readiness across governance levels. 
This study provides empirical evidence on how preparation-related decisions shape vaccination outcomes in decentralized health systems and inform logistical coordination in future emergencies. Full article
(This article belongs to the Section COVID Public Health and Epidemiology)
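The stock-and-flow logic the abstract refers to can be illustrated with a minimal sketch in which daily vaccinations are limited by both delivered stock and operational capacity; the delivery schedule and capacity below are invented for illustration and are not the study's data.

```python
def vaccination_flow(deliveries, daily_capacity):
    """Minimal stock-and-flow sketch: each day the stock is increased
    by upstream deliveries, and vaccinations are limited by both the
    available stock and the site's operational capacity. Returns the
    daily vaccination counts and the leftover stock."""
    stock, given = 0, []
    for arrived in deliveries:
        stock += arrived                    # inflow from upstream tiers
        shots = min(stock, daily_capacity)  # capacity-constrained outflow
        stock -= shots
        given.append(shots)
    return given, stock

# Erratic upstream allocation vs. a steady capacity of 300 doses/day:
given, leftover = vaccination_flow([500, 0, 0, 900, 100], daily_capacity=300)
print(given, leftover)  # [300, 200, 0, 300, 300] 400
```

Even this toy model reproduces the study's qualitative finding: with unpredictable upstream allocation, idle capacity on some days and accumulating stock on others coexist, so supply predictability and capacity must improve together.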

81 pages, 5436 KB  
Article
Global Virtual Prosumer Framework for Secure Cross-Border Energy Transactions Using IoT, Multi-Agent Intelligence, and Blockchain Smart Contracts
by Nikolaos Sifakis
Information 2026, 17(4), 396; https://doi.org/10.3390/info17040396 - 21 Apr 2026
Abstract
Global decarbonization and the rapid growth of distributed energy resources increase the need for information-centric mechanisms that can support secure, scalable, cross-border coordination under heterogeneous technical and regulatory conditions. This paper proposes a Global Virtual Prosumer (GVP) framework that integrates IoT sensing, multi-agent coordination, and permissioned blockchain smart contracts to operationalize cross-border energy services as auditable service commitments rather than physical power exchange. Building on prior work that validated MAS-based power management and blockchain-secured operation within individual Virtual Prosumers, the present contribution lies in the cross-border coordination layer and its associated contractual and evaluation mechanisms, not in the constituent technologies themselves. A layered IoT–AI–blockchain architecture is introduced, where off-chain optimization produces allocations and admissibility indicators and on-chain contracts enforce identity, feasibility guards, delegation and partner-assignment rules, oracle verification, and settlement time compliance outcomes. The contractual lifecycle is formalized through four smart-contract algorithms covering trade registration, conditional delegation, cooperative fulfillment, and cross-border settlement with explicit failure semantics and event-based audit trails. The framework is evaluated on a global case study with seven Virtual Prosumers and quantified using contract-centric KPIs that capture registration time rejections, settlement success versus non-compliance, oracle-driven failure attribution, and full lifecycle traceability. The results demonstrate internal consistency of the proposed lifecycle and the practical value of KPI-driven accountability for cross-border energy service coordination. 
At the same time, the evaluation is based on synthetic parameterization and an emulated contract environment; realistic deployment constraints—including consensus latency, cross-region communication reliability, and regulatory overlap—are discussed as explicit limitations and directions for future empirical validation. Full article
(This article belongs to the Special Issue IoT, AI, and Blockchain: Applications, Security, and Perspectives)

20 pages, 4963 KB  
Article
Complex-Scene-Oriented Autonomous Decision-Making Method for UAVs
by Hongwei Qu and Jinlin Zou
Electronics 2026, 15(8), 1757; https://doi.org/10.3390/electronics15081757 - 21 Apr 2026
Abstract
The extensive application of unmanned aerial vehicles (UAVs) in power inspection, military operations and environmental monitoring demands stronger robustness and adaptability for autonomous decision-making systems. Existing methods suffer from heavy map dependence, high computational complexity and insufficient exploration and generalization. Traditional approaches based on expert rules and planning algorithms only suit fixed scenarios and degrade severely in complex dynamic environments. To address these problems, this paper proposes a complex-scene-oriented autonomous decision-making method for UAVs (CADU). It builds a closed-loop decision chain by integrating perception, strategy and execution modules, and adopts a curiosity mechanism and contrastive learning to enhance exploration and adaptability. Experimental results show that the proposed CADU achieves an average reward of 0.85, a trajectory smoothness of 0.87, a flight stability of 0.85, and a cumulative collision count of 8 ± 1.2, significantly outperforming the DDPG, PPO and SAC baselines. It provides a reliable and efficient scheme for UAV autonomous decision-making in complex scenarios. Full article
(This article belongs to the Section Artificial Intelligence)

20 pages, 1865 KB  
Article
Loop-Constrained Connectivity Calculation for Planar Multi-Loop Mechanisms: Base–End-Effector Localization and Functional-Constraint Screening
by Xiaoxiong Li and Huafeng Ding
Machines 2026, 14(4), 455; https://doi.org/10.3390/machines14040455 - 20 Apr 2026
Abstract
Planar multi-loop mechanisms often generate a large number of non-isomorphic candidate topological graphs during automatic synthesis, making it difficult to efficiently identify configurations that satisfy engineering-oriented functional requirements. To address this issue, a loop-constrained connectivity calculation method and a connectivity-based localization and screening procedure are proposed. The proposed connectivity calculation is directly formulated for general planar non-fractionated kinematic chains (NFKCs), including those with multiple joints. For planar fractionated kinematic chains (FKCs), however, the present method is not applied directly at the full-system level, but only to decomposed non-fractionated subchains after system-level decomposition. Starting from a structurally admissible set of candidate topological graphs, a connectivity matrix is established for automatic localization of the base and the end-effector (EE). Functional screening is then performed by combining the connectivity criterion with object-oriented rules on hydraulic driving-pair arrangement and driving-redundancy patterns. The method was validated using the 10-link, 3-DOF single-joint equivalent of the KC1 subchain of a mine scaler manipulator arm. Under the prescribed structural and functional constraints, 249 admissible configurations were obtained. The results indicate that the proposed method provides an effective basis for application-oriented topological screening and subsequent dimensional synthesis. Full article
(This article belongs to the Section Machine Design and Theory)

26 pages, 639 KB  
Article
Advancing Life Cycle Assessment of Pasture-Based Beef Systems: A High-Resolution Cradle-to-Grave Framework for Global Benchmarking
by Rodolfo Bongiovanni, Leticia Tuninetti, Javier Echazarreta, Ana Muzlera Klappenbach, Javier Lozano, Leonel Alisio and Mariano Avilés
Sustainability 2026, 18(8), 3930; https://doi.org/10.3390/su18083930 - 15 Apr 2026
Abstract
Beef production is widely recognized as a significant contributor to global greenhouse gas emissions, making robust and transparent environmental assessments essential for advancing sustainability within supply chains. This study applies a comprehensive cradle-to-grave Life Cycle Assessment (LCA) to evaluate the environmental performance of beef destined for export, following ISO 14040, ISO 14044 and ISO 14067 standards and the Product Category Rules for meat of mammals. Sixteen impact categories were quantified for 1 kg of vacuum-packed beef using detailed primary data from a pasture-based production system and a representative processing facility. The total climate change impact was 3.27 × 10¹ kg CO2eq, with enteric methane and feed production jointly responsible for over 70% of overall impacts. Slaughtering and distribution were associated mainly with fossil energy use and ozone depletion, while soil carbon sequestration partially compensated biogenic emissions. The results were consistent with international benchmarks, highlighting the environmental advantages of pasture-based systems, low fertilizer use, and stable land management. Key hotspots were identified in animal growth, feed efficiency, and manure management, with logistics also contributing notably. Overall, the study provides a high-resolution environmental baseline that can support Environmental Product Declarations and guide targeted mitigation strategies across beef supply chains. While the results are derived from a specific pasture-based production system, the study is positioned as a case-study-based application of a high-resolution LCA framework, illustrating how detailed inventories can support environmental benchmarking and hotspot identification without implying statistical representativeness of all beef production systems. Full article

11 pages, 1719 KB  
Article
Investigations of the α-Olefin Polymerization Process Using the Classic α-Diimine Nickel Catalyst
by Ying Wang, Jingjing Lai, Zhihui Song, Rong Gao, Qingqiang Gou, Bingyi Li, Gang Zheng, Randi Zhang, Qiang Yue and Yuanning Gu
Polymers 2026, 18(8), 961; https://doi.org/10.3390/polym18080961 - 15 Apr 2026
Abstract
This work provides a comprehensive exploration of α-olefin polymerization characteristics catalyzed by the classic α-diimine Ni catalyst. The polymerization process exhibited quasi-living behaviour, and a reaction kinetic model for the monomer coordination–insertion process was established. It was observed that the reaction exhibits living polymerization features during the first 10 min, and the coordination–insertion rate constant was determined to be 1.08 L·mol⁻¹·s⁻¹ at 30 °C. The influence of factors including co-catalyst amount, monomer concentration, polymerization temperature and monomer type on the molecular weight, molecular weight distribution and chain structure of poly(α-olefin)s was clarified. The co-catalyst (methylaluminoxane) primarily served to activate the catalyst without inducing a chain transfer effect, suggesting that chain stagnation is likely the primary cause of the deviation from typical living polymerization behaviour. Based on temperature-controlled experiments, the activation energy for the coordination–insertion reaction was calculated to be 28.40 kJ·mol⁻¹ through GPC curve analysis. The kinetic model established in this study, along with the revealed chain branching rules, provides a theoretical foundation for the design of poly(α-olefin)s with novel structures and functions. Full article
(This article belongs to the Section Polymer Chemistry)
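Given the reported rate constant (1.08 L·mol⁻¹·s⁻¹ at 30 °C) and activation energy (28.40 kJ·mol⁻¹), a two-point Arrhenius relation lets one extrapolate the rate constant to other temperatures; the 50 °C value below is an illustrative extrapolation, not a result from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def arrhenius_k(k1, T1_C, T2_C, Ea_J):
    """Two-point Arrhenius relation:
    k2 = k1 * exp(-(Ea/R) * (1/T2 - 1/T1)), temperatures in kelvin."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return k1 * math.exp(-(Ea_J / R) * (1.0 / T2 - 1.0 / T1))

# Reported: k = 1.08 L·mol⁻¹·s⁻¹ at 30 °C, Ea = 28.40 kJ·mol⁻¹.
# Illustrative extrapolation to 50 °C:
k50 = arrhenius_k(1.08, 30.0, 50.0, 28_400.0)
print(round(k50, 2))  # 2.17
```

The relatively modest activation energy implies roughly a doubling of the coordination–insertion rate over a 20 °C increase, consistent with the temperature sensitivity the kinetic study exploits.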

22 pages, 4784 KB  
Article
Comparative Study on Continuous and Discrete Design Optimization for the Fairlead Chain Stopper of Large-Scale Floating Offshore Wind Turbines
by Min-Seok Cheong and Chang-Yong Song
Energies 2026, 19(8), 1893; https://doi.org/10.3390/en19081893 - 14 Apr 2026
Abstract
This study presents a comparative investigation of continuous and discrete design optimization for the fairlead chain stopper of large-scale 10 MW floating offshore wind turbines. The fairlead chain stopper plays a key role in ensuring mooring integrity, rapid port evacuation, and efficient maintenance under extreme weather conditions driven by global warming. The objective is to minimize structural weight while maintaining safety in accordance with the international classification rules of Det Norske Veritas. Three representative design load scenarios covering mooring and towing conditions are defined, and finite element analysis confirmed that the baseline design satisfies allowable stress limits. In the optimization stage, the thicknesses of nine principal components are selected as design variables. Continuous and discrete formulations are solved using particle swarm optimization, a non-dominated sorting genetic algorithm, and an evolutionary algorithm, and their convergence behavior and computational efficiency are compared. The results show that discrete optimization, which reflects actual manufacturing plate thicknesses, achieves nearly the same weight reduction as the continuous approach while offering superior practical applicability. Among the three techniques, the evolutionary algorithm provided the best convergence characteristics and attained up to 3.73 percent weight reduction. The proposed comparative methodology offers a useful guideline for rational weight-efficient design of core mooring equipment on large floating offshore wind power platforms. Full article
(This article belongs to the Special Issue Latest Challenges in Wind Turbine Maintenance, Operation, and Safety)

32 pages, 15510 KB  
Article
Continuous-Time Scheduling of Berths and Onshore Power Supply in Cold-Chain Logistics: A Chance-Constrained Stochastic Programming Model and RL-ALNS Algorithm
by Zheyin Zhao and Jin Zhu
Mathematics 2026, 14(8), 1292; https://doi.org/10.3390/math14081292 - 13 Apr 2026
Abstract
Amid tightening emission rules and growing cold-chain demand, ports face complex multi-objective scheduling under dual uncertainties in vessel arrivals and operations. This work develops a multi-objective chance-constrained stochastic MILP model for joint berth, QC, and OPS scheduling. Heavy-tailed operational delays are managed via chance constraints, converting Weibull distributions to time buffers, while convex formulations allow piecewise cargo damage penalties to be computed linearly. A reinforcement learning-based adaptive large neighborhood search (RL-ALNS) algorithm is proposed to solve this NP-hard continuous-time problem, integrating a spatiotemporal decoder and an MDP-based selector to ensure microgrid limits and efficiency. Simulations demonstrate RL-ALNS’s superior Pareto convergence versus conventional heuristics. The model cuts the 95th-percentile tail risk by 46.59% and actual costs by 24.44% under mild delays, compared to deterministic scheduling. Overall, it quantifies the non-linear cost–emission–reliability trade-off, providing a robust tool for port decision-making. Full article
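The chance-constraint conversion the abstract mentions, turning a Weibull-distributed delay into a deterministic time buffer, has a closed form via the Weibull quantile function; the shape and scale parameters below are hypothetical, not fitted values from the paper.

```python
import math

def weibull_buffer(shape_k: float, scale_lam: float, epsilon: float) -> float:
    """Smallest buffer t with P(delay <= t) >= 1 - epsilon when the
    delay follows Weibull(shape k, scale lambda), using the closed-form
    quantile: t = lambda * (-ln(epsilon)) ** (1/k)."""
    return scale_lam * (-math.log(epsilon)) ** (1.0 / shape_k)

# Hypothetical heavy-tailed delay (shape 0.8 < 1), scale 1.5 h,
# with a 95% coverage chance constraint:
buf = weibull_buffer(0.8, 1.5, epsilon=0.05)
```

Because the quantile is available in closed form, the probabilistic constraint can be replaced by a deterministic buffer inside the MILP, which is what makes the chance-constrained model tractable.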

36 pages, 7325 KB  
Article
Intelligent Scheduling of Rail-Guided Shuttle Cars via Deep Reinforcement Learning Integrating Dynamic Graph Neural Networks and Transformer Model
by Fang Zhu and Shanshan Peng
Algorithms 2026, 19(4), 289; https://doi.org/10.3390/a19040289 - 8 Apr 2026
Abstract
With the rapid development of e-commerce and smart manufacturing, automated warehouse systems have become critical infrastructure for modern logistics. In China’s vast market, the dynamic scheduling of Rail-Guided Vehicles (RGVs) faces significant challenges due to complex task uncertainties, hierarchical supply chain structures, and real-time collision avoidance requirements. Traditional rule-based methods and static optimization models often fail to adapt to such dynamic environments. To address these issues, this paper proposes a novel hybrid deep reinforcement learning framework integrating a Dynamic Graph Neural Network (DGNN) and a Transformer model. The DGNN captures the spatiotemporal dependencies of the warehouse network topology, while the Transformer mechanism enhances long-range feature extraction for task prioritization. Furthermore, we design a centralized Deep Q-network (DQN) framework with parameterized action spaces to coordinate multiple RGVs collaboratively. While the system manages multiple physical vehicles, the learning architecture employs a single-agent global scheduler to avoid the non-stationarity issues inherent in multi-agent reinforcement learning. Experimental results based on real-world data from a large-scale electronics manufacturing warehouse demonstrate that our method reduces average task completion time by 18.5% and improves system throughput by 22.3% compared to state-of-the-art baselines. The proposed approach demonstrates potential for intelligent warehouse management in dynamic industrial scenarios. Full article

23 pages, 1155 KB  
Review
Evidence-Based Clinical Management of Canine Cognitive Dysfunction Syndrome: Diagnostic Algorithms, Practical Guidelines, Critical Appraisal of Biomarkers and Translational Limitations
by Maurizio Dondi, Ezio Bianchi, Paolo Borghetti, Valentina Buffagni, Rosanna Di Lecce, Giacomo Gnudi, Chiara Guarnieri, Francesca Ravanetti, Roberta Saleri and Attilio Corradi
Animals 2026, 16(7), 1114; https://doi.org/10.3390/ani16071114 - 4 Apr 2026
Abstract
Canine Cognitive Dysfunction Syndrome (CCDS) is a progressive neurodegenerative disease affecting older dogs that shares many pathological mechanisms with human Alzheimer’s disease (AD). Although it is common in geriatric dogs, CCDS is often underdiagnosed in veterinary medicine. Both CCDS and AD involve a gradual decline in cognitive functions such as memory, learning and executive abilities. From a pathological perspective, dogs with CCDS show brain changes similar to those seen in AD, including cerebral atrophy, loss of neurons and accumulation of amyloid-beta plaques. CCDS is diagnosed by exclusion, meaning that other medical or neurological conditions that could cause similar behavioural signs must first be ruled out. Clinical evaluation mainly relies on structured questionnaires completed by owners. Magnetic resonance imaging is used to confirm cerebral atrophy and, at the same time, to exclude other brain disorders, such as cerebrovascular accidents and neoplasia. Current research focuses on identifying fluid biomarkers, such as amyloid-beta, neurofilament light chain and glial fibrillary acidic protein, to support an early and objective diagnosis. The most effective management combines pharmacological therapy, targeted nutrition and non-pharmacological strategies, including environmental enrichment and behavioural support. Early intervention, ideally during mild cognitive impairment, is crucial to slow disease progression and maintain quality of life. Full article
(This article belongs to the Special Issue Cognitive Dysfunction and Neurodegenerative Diseases in Dogs and Cats)
33 pages, 6049 KB  
Article
Blockchain-Based Mixed-Node Auction Mechanism
by Xu Liu and Junwu Zhu
Electronics 2026, 15(7), 1516; https://doi.org/10.3390/electronics15071516 - 4 Apr 2026
Viewed by 328
Abstract
Blockchain-based auctions often utilize smart contracts to automate auction rules, with much research focusing on enhancing privacy and fairness through cryptographic techniques. However, the authenticity of external data input into these systems is frequently overlooked. In particular, rational nodes may manipulate bidding data by submitting false types to maximize their utility, compromising market fairness and the reliability of auction outcomes. The aim of this study is to propose an alternative blockchain-based auction mechanism to incentivize nodes to report types honestly. We propose the Mixed-Node Advertising Auction (MNAA) mechanism for digital advertising auctions on blockchain systems. MNAA integrates quasi-linear and value maximization utility models to design allocation and pricing rules that eliminate nodes’ incentives to misreport their types, ensuring the authenticity of data submitted to the auction. To enhance efficiency, MNAA employs state channel technology and off-chain smart contracts, reducing main chain interactions. Theoretical analysis confirms that MNAA incentivizes truthful behavior and ensures security and correctness. Simulation results show that MNAA outperforms Generalized Second Price (GSP), Mixed Bidders with Private Classes (MPR), and Vickrey–Clarke–Groves (VCG) auctions in terms of liquid social welfare (LSW), publisher revenue, and allocation efficiency, while also improving transaction throughput and performing well on transaction costs and latency. Full article
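The truthfulness property that MNAA targets is the same one that motivates the VCG baseline above. As a reference point only, a minimal single-item second-price (Vickrey) auction, the simplest special case of VCG, can be sketched as follows; this is an illustrative sketch of the classical mechanism, not the MNAA allocation or pricing rule:

```python
def second_price_auction(bids):
    """Single-item sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder -> bid. The highest bidder wins but pays
    the second-highest bid, which makes truthful bidding a dominant
    strategy: misreporting cannot change the price a winner pays.
    Returns (winner, price).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price
```

Decoupling the payment from the winner's own bid is the core incentive idea; MNAA extends this to mixed quasi-linear and value-maximizing bidders.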
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)
