
Search Results (2,946)

Search Parameters:
Keywords = cyber security

38 pages, 1490 KB  
Review
Technological Advances in Energy Storage: Environmental and Cyber Challenges, Opportunities and Threats—A Review
by Piotr Filipowicz, Michał Dziuba and Bogdan Saletnik
Sustainability 2026, 18(7), 3230; https://doi.org/10.3390/su18073230 - 26 Mar 2026
Abstract
Energy storage plays a key role in the energy transition by enabling the effective integration of variable renewable energy sources such as solar and wind power and by supporting the stability and flexibility of modern energy systems. The rapid development of energy storage technologies has become one of the pillars of sustainable energy management; however, it simultaneously raises environmental, material, and systemic challenges. This review analyses the environmental implications of energy storage development using an integrative perspective that combines technological, environmental, and system-level analysis. The paper examines major classes of energy storage technologies, including electrochemical, mechanical and physical, thermal energy storage, and chemical pathways within Power-to-X, with particular emphasis on their technical characteristics, maturity, and life cycle environmental performance. Lithium-ion battery systems typically achieve round-trip efficiencies of 85–92% and cycle lifetimes exceeding 5000 cycles, while flow batteries may exceed 10,000 cycles under stationary operating conditions. Mechanical storage technologies such as pumped hydro provide efficiencies of approximately 70–85% with operational lifetimes exceeding several decades. Key challenges related to critical raw material availability, recycling, end-of-life management, and ecosystem impacts are discussed, highlighting the importance of sustainable production and recovery strategies in supporting the circular economy. In addition, the review addresses the consequences of insufficient reuse of secondary materials and the growing relevance of digitisation and cyber resilience of energy storage systems as indirect contributors to environmental risk. The review also considers geopolitical aspects related to critical material supply chains and the cyber security of energy storage infrastructure, emphasising their growing importance for the resilience and environmental sustainability of future energy systems. The analysis indicates that further development of energy storage technologies will significantly influence not only power systems but also transport, industry, and heat sectors. The results emphasise that sustainable deployment of energy storage requires hybrid system architectures and policy frameworks that account for environmental performance, system flexibility, and long-term resilience in line with the principles of sustainable development. Full article
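The efficiency and cycle-life figures quoted in this abstract lend themselves to a quick throughput estimate. The sketch below is illustrative only: the 100 kWh capacity and the 75% flow-battery round-trip efficiency are assumptions, not values from the review.

```python
def round_trip_loss(eta, energy_in_kwh):
    """Energy lost per full charge/discharge cycle at round-trip efficiency eta."""
    return energy_in_kwh * (1.0 - eta)

def lifetime_throughput_kwh(capacity_kwh, eta, cycles):
    """Total usable energy delivered over the rated cycle life."""
    return capacity_kwh * eta * cycles

# Abstract's figures: Li-ion ~85-92% efficiency and >5000 cycles; flow
# batteries >10,000 cycles (75% efficiency and 100 kWh are assumptions).
li_ion = lifetime_throughput_kwh(100.0, 0.90, 5000)   # kWh delivered
flow = lifetime_throughput_kwh(100.0, 0.75, 10000)    # kWh delivered
```

Under these assumptions the longer cycle life of the flow battery outweighs its lower per-cycle efficiency over the system's lifetime.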

31 pages, 5541 KB  
Article
Preference-Guided Reinforcement Learning for Dynamic Green Flexible Assembly Job Shop Scheduling with Learning–Forgetting Effects
by Ruyi Wang, Xiaojuan Liao, Guangzhu Chen, Yaxin Liu and Leyuan Liu
Sustainability 2026, 18(7), 3222; https://doi.org/10.3390/su18073222 - 25 Mar 2026
Abstract
With the evolution from Industry 4.0 to 5.0, flexible assembly scheduling must simultaneously address production efficiency, environmental sustainability, and human factors, while remaining adaptive to real-time disruptions. This study investigates the dynamic green scheduling problem in dual-resource Flexible Assembly Job Shops with worker learning and forgetting, aiming to minimize makespan and total energy consumption. To tackle this problem, a Hierarchical Dual-Agent Deep Reinforcement Learning algorithm (HAD-DRL) is proposed. The framework integrates a Heterogeneous Graph Neural Network to extract real-time workshop states and employs two collaborative agents, i.e., a high-level preference decision agent and a low-level scheduling execution agent. The upper agent dynamically adjusts the preference weights between economic and environmental objectives, while the lower agent generates corresponding scheduling actions. Unlike existing multi-agent methods that optimize a single objective at each step, HAD-DRL achieves adaptive coordination and balanced trade-offs among conflicting goals. Experimental results demonstrate that the proposed method outperforms heuristic and baseline DRL approaches in both objectives, validating its effectiveness and practical applicability for intelligent and sustainable manufacturing. Full article
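The abstract does not state the paper's exact learning–forgetting formulation; a common modelling choice, shown here as an assumed sketch only, is Wright's power-law learning curve combined with exponential forgetting during idle periods:

```python
import math

def learning_time(t1, n, lr=0.9):
    """Wright's power-law learning curve: processing time for the n-th
    repetition; each doubling of repetitions cuts the time to lr * t."""
    b = math.log(lr) / math.log(2.0)   # negative learning exponent
    return t1 * n ** b

def forgetting(t_current, t1, idle, tau=50.0):
    """Exponential decay of acquired skill back toward the initial
    time t1 over an idle interval of length idle (time constant tau)."""
    return t1 - (t1 - t_current) * math.exp(-idle / tau)
```

With a 90% learning rate, the second repetition takes 90% of the first; after a long idle interval, `forgetting` returns the worker to the untrained time `t1`.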
(This article belongs to the Special Issue Sustainable Manufacturing Systems in the Context of Industry 4.0)

33 pages, 2907 KB  
Article
Reimagining Bitcoin Mining as a Virtual Energy Storage Mechanism in Grid Modernization: Enhancing Security, Sustainability, and Resilience of Smart Cities Against False Data Injection Cyberattacks
by Ehsan Naderi
Electronics 2026, 15(7), 1359; https://doi.org/10.3390/electronics15071359 - 25 Mar 2026
Abstract
The increasing penetration of intermittent renewable energy demands innovative solutions to maintain grid stability, resilience, and security within smart cities. This paper presents a novel framework that redefines Bitcoin mining as a form of virtual energy storage, a flexible and controllable load capable of delivering large-scale demand response services, positioning it as a competitive alternative to traditional energy storage systems, including electrical, mechanical, thermal, chemical, and electrochemical storage solutions. By strategically aligning mining activities with grid conditions, Bitcoin mining can absorb excess electricity during periods of oversupply, converting it into digital assets, and reduce operations during times of scarcity, effectively emulating the behavior of conventional energy storage systems without the associated capital expenditures and material requirements. Beyond its operational flexibility, this paper explores the cyber–physical benefits of integrating Bitcoin mining into power transmission systems as a defensive mechanism against false data injection (FDI) cyberattacks in smart city infrastructure. To achieve this goal, a decentralized and adaptive control strategy is proposed, in which mining loads dynamically adjust based on authenticated grid-state information, thereby improving system observability and hindering adversarial efforts to disrupt state estimation. In addition, to handle the proposed approach, this paper introduces a high-performance algorithm, a combination of quantum-augmented particle swarm optimization and wavelet-oriented whale optimization (QAPSO-WOWO). Simulation results confirm that strategic deployment of mining loads improves grid sustainability by utilizing curtailed renewables, enhances resilience by mitigating load-generation imbalances, and bolsters cybersecurity by reducing the impacts of FDI attacks. This work lays the foundation for a transdisciplinary paradigm shift, positioning Bitcoin mining not as a passive energy consumer but as an active participant in securing and stabilizing the future power grid in smart cities. Full article
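The demand-response behaviour described above can be caricatured as a simple setpoint rule. This is a hedged sketch of the idea only; the paper's actual QAPSO-WOWO control strategy is far richer:

```python
def mining_setpoint(surplus_mw, max_load_mw, authenticated):
    """Track grid surplus with the mining load, up to capacity, and
    curtail fully when grid-state telemetry fails authentication
    (the fail-safe posture against FDI-style attacks)."""
    if not authenticated:
        return 0.0                          # suspect data: shed all load
    return max(0.0, min(surplus_mw, max_load_mw))

a = mining_setpoint(12.0, 8.0, True)    # oversupply: absorb up to capacity
b = mining_setpoint(-3.0, 8.0, True)    # scarcity: shed load entirely
c = mining_setpoint(12.0, 8.0, False)   # unauthenticated telemetry: shed
```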

16 pages, 897 KB  
Data Descriptor
A Dataset Capturing Decision Processes, Tool Interactions and Provenance Links in Autonomous AI Agents
by Yasser Hmimou, Mohamed Tabaa, Azeddine Khiat and Zineb Hidila
Data 2026, 11(4), 66; https://doi.org/10.3390/data11040066 - 25 Mar 2026
Abstract
Agent-based systems built on large language models (LLMs) increasingly rely on complex internal reasoning processes, tool interactions, and memory mechanisms. However, the internal decision-making dynamics of such agents remain difficult to observe, analyze, and compare in a systematic manner. To address this limitation, we present AgentSec, a curated dataset of structured agent interaction traces designed to support the analysis of agent-level reasoning and action behaviors. The dataset consists of 30 deterministic and non-redundant scenario instances, each capturing a complete agent interaction session under a fixed and validated schema. Quantitatively, the 30 released sessions comprise 67 decision nodes and 45 tool calls (73.3% successful), with provenance graphs exhibiting an average depth of 4.53 (max 7) and a maximum branching factor of 3. Scenarios are organized according to a predefined taxonomy of agent behavioral patterns, including tool success and failure modes, fallback strategies, memory conflicts and overwrites, decision rollbacks, and provenance branching structures. Each scenario encodes a distinct analytical case rather than a parametric variation, enabling focused and interpretable study of agent decision-making processes. AgentSec provides detailed records of decision traces, tool calls, memory updates, and provenance relations, and is intended to facilitate reproducible research on agent behavior analysis, auditing, and evaluation. The dataset is released alongside its schema, scenario manifest, and validation tooling to support reuse and extension by the research community. Rather than serving as a large-scale performance benchmark, AgentSec is explicitly designed as a diagnostic and unit-test suite for auditing agent-level reasoning logic and provenance consistency under controlled structural conditions. Full article
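The provenance-depth statistic reported for the dataset (average 4.53, max 7) can be reproduced on any trace by a longest-path computation over the provenance DAG. The edge format and node names below are hypothetical, not the AgentSec schema:

```python
def provenance_depth(edges):
    """Longest root-to-leaf path length, counted in nodes, of a
    provenance DAG given as (parent, child) pairs."""
    children, parents, nodes = {}, set(), set()
    for p, c in edges:
        children.setdefault(p, []).append(c)
        parents.add(c)
        nodes.update((p, c))

    def depth(n):  # recursive longest path from node n
        return 1 + max((depth(c) for c in children.get(n, [])), default=0)

    return max(depth(root) for root in nodes - parents)

# Hypothetical trace: decision -> tool call -> result -> summary
example = [("d1", "t1"), ("t1", "r1"), ("r1", "s1"), ("d1", "t2")]
```

`provenance_depth(example)` is 4: the chain d1 → t1 → r1 → s1, while the t2 branch contributes to branching factor rather than depth.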

30 pages, 564 KB  
Article
A Context-Aware Cybersecurity Readiness Assessment Framework for Organisations in Developing and Emerging Environments
by Raymond Agyemang, Steven Furnell and Tim Muller
Future Internet 2026, 18(4), 178; https://doi.org/10.3390/fi18040178 - 24 Mar 2026
Abstract
Organisations increasingly face complex cybersecurity threats shaped not only by internal capabilities but also by external regulatory, institutional, and environmental conditions. While existing cybersecurity standards and maturity models provide valuable guidance, they often offer limited support for assessing organisational readiness in a manner that is both context-sensitive and diagnostically meaningful. This paper presents a context-aware cybersecurity readiness assessment framework designed to support organisational evaluation of cybersecurity readiness while explicitly accounting for external environmental influences. The framework adopts a two-tier architecture. Tier 1 assesses organisational awareness of and engagement with the external cybersecurity environment, including national regulatory obligations, institutional support mechanisms, and international collaboration. Tier 2 evaluates internal organisational cybersecurity readiness across governance, operational controls, awareness and culture, and external collaboration practices. The two tiers are designed to operate independently, enabling complementary interpretation without assuming deterministic relationships between external context and internal capability. The framework is developed and evaluated using a Design Science Research approach and is operationalised through a structured assessment instrument and an interpretable scoring model. Empirical validation is conducted across multiple organisational contexts operating in developing and emerging environments, with qualitative case study evidence where available. The results demonstrate that the framework differentiates meaningfully across readiness domains, avoids artificial score inflation or compression, and supports interpretable diagnosis of alignment gaps between external expectations and internal practices. The study contributes a validated assessment artefact that extends cybersecurity awareness research into a broader organisational readiness perspective. From a practical standpoint, the framework provides organisations, policymakers, and researchers with a structured tool to support incremental improvement, informed decision-making, and reflective engagement with both internal cybersecurity practices and external environmental conditions. Full article
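The two-tier, independently interpreted scoring described above might be sketched as follows. The 0–4 maturity scale, domain names, and item responses are assumptions for illustration, not the paper's actual instrument:

```python
def domain_score(responses):
    """Mean of per-item maturity responses (assumed 0-4 scale) for one domain."""
    return sum(responses) / len(responses)

def assess(tier1, tier2):
    """Score Tier 1 (external context) and Tier 2 (internal readiness)
    independently -- no combined index -- mirroring the framework's
    side-by-side interpretation of the two tiers."""
    return ({d: domain_score(r) for d, r in tier1.items()},
            {d: domain_score(r) for d, r in tier2.items()})

external, internal = assess(
    {"regulatory_awareness": [3, 2, 3]},                           # hypothetical items
    {"governance": [2, 2, 4], "operational_controls": [1, 3, 2]},  # hypothetical items
)
```

Returning two separate score maps, rather than multiplying them into one index, is what avoids the "artificial score inflation or compression" the abstract mentions.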

21 pages, 300 KB  
Article
Tides of Change: Counter-Terrorism, Rights, and Commercial Efficiency in UK Ports
by Selina Wai Ming Robinson
Laws 2026, 15(2), 21; https://doi.org/10.3390/laws15020021 - 24 Mar 2026
Abstract
UK ports handle the vast majority of national trade by volume and constitute Critical National Infrastructure. Since 2004, the SOLAS/ISPS Code and the Port Security Regulations 2009 have established baseline security requirements, recently supplemented by the National Security and Investment Act 2021 and the National Security Act 2023, creating overlapping obligations. This contribution maps the evolving regulatory framework (ISPS/Port Security Regulations, NSI 2021, NSA 2023, and CNI-related guidance). It assesses operational impacts using industry metrics and draws comparative lessons from Singapore and Rotterdam. Empirical research indicates that security regulation is not uniformly detrimental to performance: targeted, intelligence-led, and technology-enabled measures can coincide with productivity gains, whereas fragmented or blanket compliance regimes are more consistently associated with increased dwell times and throughput loss. These delays propagate through supply chains and intensify cost pressures, with proportionally greater impacts on mid-sized ports. Comparative evidence indicates that risk-based screening, integrated cyber–physical platforms, transparent governance, and clear cost-sharing frameworks can maintain security without compromising commercial performance. The contribution recommends (i) tiered, risk-based screening with transparent indicators; (ii) the consolidation of overlapping regulatory obligations; (iii) clearer liability frameworks, including model terms and alternative dispute resolution; and (iv) scheduled review provisions to maintain proportionality over time. Full article
(This article belongs to the Special Issue Criminal Justice: Rights and Practice)
34 pages, 1728 KB  
Systematic Review
Integrating Machine Learning and Business Intelligence into Supply Chain Risk Management for a Comprehensive Cybersecurity Framework: A Systematic Literature Review
by Rasha Aljaafreh, Firas Al-Doghman, Farookh Hussain, Fazlullah Khan and Ali Aljaafreh
Technologies 2026, 14(4), 194; https://doi.org/10.3390/technologies14040194 - 24 Mar 2026
Abstract
Supply chain cybersecurity is a growing concern for businesses as they rely on increasingly interconnected digital systems. This systematic literature review examines how machine learning (ML) and business intelligence (BI) may be used in conjunction to improve supply chain cybersecurity risk management. The review followed PRISMA guidelines, and a CASP-based quality evaluation was applied to 35 peer-reviewed articles published in 2016–2025. The analysis indicates that although ML has been extensively utilized for threat detection, BI utilization is fragmented, and integrated ML-BI frameworks are lacking, particularly for small–medium enterprises (SMEs) and developing economies. As such, this literature review provides a conceptual four-layer framework of predictive and analytical capabilities for threat detection, risk assessment, and decision-making. It also identifies a structured research agenda to advance the field. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)

20 pages, 4497 KB  
Article
Remote Sensing Identification of Benggang Using a Two-Stream Network with Multimodal Feature Enhancement and Sparse Attention
by Xuli Rao, Qihao Chen, Kexin Zhu, Zhide Chen, Jinshi Lin and Yanhe Huang
Electronics 2026, 15(6), 1331; https://doi.org/10.3390/electronics15061331 - 23 Mar 2026
Abstract
Benggang, a severely eroded landform and geohazard typical of the red-soil hilly regions of southern China, is characterized by a fragmented texture, irregular boundaries, and high similarity to background objects such as bare soil and roads, which poses a dual challenge of “multiscale variability + strong noise” for automated identification at regional scales. To address insufficient information from a single modality and the limited representation of cross-scale features, this study proposes a dual-stream feature-fusion network (DF-Net) for multisource data consisting of a digital orthophoto map (DOM) and a digital elevation model (DEM). The method adopts ResNeSt50d as the backbone of the two branches: on the DOM side, a Canny-edge channel is stacked to enhance high-frequency boundary information; on the DEM side, derived terrain factors, including slope, aspect, curvature, and hillshade, are introduced to provide morphological constraints. In the cross-modal fusion stage, a multiscale sparse attention fusion module is designed, which acquires contextual information via multiwindow average pooling and suppresses noise interference through top-K sparsification. In the decision stage, a multibranch ensemble is employed to improve classification stability. Taking Anxi County, Fujian Province, as the study area, a coregistered dataset of GF-2 (1 m) DOM and ALOS (12.5 m) DEMs is constructed, and a zonal partitioning strategy is adopted to evaluate the model’s generalization ability. The experimental results show that DF-Net achieves 97.44% accuracy, 85.71% recall, and an 82.98% F1 score in the independent test zone, outperforming multiple mainstream CNN/transformer classification models. This study indicates that the strategy of “multimodal feature enhancement + sparse attention fusion” tailored to Benggang erosional landforms can significantly improve recognition performance under complex backgrounds, providing technical support for rapid Benggang surveys and governance-effectiveness assessments. Full article
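The "top-K sparsification" step mentioned in the abstract can be illustrated in isolation: a softmax restricted to the k highest attention scores, with all other positions masked to zero. This is a generic sketch of the technique, not the DF-Net module itself:

```python
import math

def topk_sparse_attention(scores, k):
    """Softmax over only the k largest attention scores; the remaining
    positions get exactly zero weight, suppressing low-relevance noise."""
    kept = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = {i: math.exp(scores[i]) for i in kept}
    z = sum(exps.values())
    return [exps.get(i, 0.0) / z for i in range(len(scores))]

weights = topk_sparse_attention([2.0, -1.0, 0.5, 3.0], k=2)
# All attention mass concentrates on positions 3 and 0; 1 and 2 are zeroed.
```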
(This article belongs to the Section Artificial Intelligence)

18 pages, 785 KB  
Article
Bayesian Networks for Cybersecurity Decision Support: Enhancing Human-Machine Interaction in Technical Systems
by Karla Maradova, Petr Blecha, Vendula Samelova, Tomáš Marada and Daniel Zuth
Appl. Sci. 2026, 16(6), 3053; https://doi.org/10.3390/app16063053 - 21 Mar 2026
Abstract
The increasing digitization of manufacturing and the integration of CNC and industrial control systems into the Industry 4.0 environment have introduced new cybersecurity risks that directly affect operational reliability. Traditional deterministic risk-assessment methods used for securing ICS—such as SCADA, PLC, and CNC systems—struggle to address uncertainty, dynamic operating conditions, and complex dependencies between technical and organizational factors. To overcome these limitations, this study develops a Bayesian Network (BN) model that captures probabilistic relationships between machine-level configuration parameters, network conditions, and potential security incidents. The model is applied to a CNC machining center (ZPS MCG1000i), where it supports scenario-based prediction of cybersecurity risks and provides interpretable outputs suitable for operator decision-making and human–machine interaction. The results demonstrate that BNs are effective in environments with limited data availability and high uncertainty, offering transparent and quantifiable insights into how specific misconfigurations—such as active remote access or irregular firmware updates—elevate overall system exposure. The proposed approach aligns with current regulatory and standardization requirements, including the NIS2 Directive (EU 2022/2555), ISO/IEC 27001:2022, ISO/IEC 27005:2022, and Regulation (EU) 2024/2847 (Cyber Resilience Act), which define cybersecurity obligations for products with digital elements. The study provides a reproducible and future-oriented methodology for integrating cybersecurity into machinery-safety evaluation in modern industrial environments. Full article
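A minimal flavour of the Bayesian Network approach: exact inference by enumeration over a two-parent network linking the misconfigurations named in the abstract to incident probability. Every probability below is a hypothetical placeholder, not a value from the study:

```python
# Hypothetical CPTs: remote access and irregular firmware updates each
# raise incident probability (all numbers are placeholders).
P_REMOTE = 0.3            # P(remote access active)
P_STALE_FW = 0.4          # P(firmware updates irregular)
P_INCIDENT = {            # P(incident | remote, stale_fw)
    (True, True): 0.60, (True, False): 0.30,
    (False, True): 0.20, (False, False): 0.05,
}

def p_incident():
    """Marginal P(incident) by summing over all parent configurations."""
    total = 0.0
    for r in (True, False):
        for f in (True, False):
            pr = P_REMOTE if r else 1 - P_REMOTE
            pf = P_STALE_FW if f else 1 - P_STALE_FW
            total += pr * pf * P_INCIDENT[(r, f)]
    return total
```

With these placeholder numbers the marginal comes out to 0.203; real BN tooling adds evidence propagation (e.g., conditioning on an observed misconfiguration), which is what makes the outputs useful for operator decisions.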
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)

39 pages, 1614 KB  
Article
LLM-Powered Proactive Cyber-Defense Framework Using Cyber-Threat Indicators Collected from X Platform
by Nawal Almutairi
Electronics 2026, 15(6), 1305; https://doi.org/10.3390/electronics15061305 - 20 Mar 2026
Abstract
Security organizations increasingly rely on cyber threat intelligence (CTI) sharing to enhance their resilience against cyberattacks. Indicators of Compromise (IoCs) play a critical operational role in CTI by providing malicious artifacts that support threat detection, incident response, and proactive defense. However, the rapid growth of social media as a CTI source, characterized by short-text content, poses significant challenges to automated IoC extraction, contextual interpretation, operational integration, and reliable verification. To address these challenges, this study proposes a comprehensive framework that integrates Large Language Models (LLMs) across multiple stages of the CTI pipeline. The framework leverages LLM-driven data augmentation, a hybrid classification model, and contextual summarization to enhance short-text understanding while supporting expert-in-the-loop validation for operational reliability. Extensive experimental evaluations demonstrate that LLM-driven data augmentation substantially improves model robustness and generalization while reducing false-positive alerts, achieving a precision of 98.87%. Quantitative diversity analysis and expert-based human evaluation further confirm the linguistic quality and correctness of the generated augmented samples. In addition, IoC reports are validated using both reference-based and reference-free evaluation metrics that show strong alignment and high semantic adequacy. Moreover, a technology acceptance model was integrated with cybersecurity domain constructs to assess the acceptance factors of the proposed framework. Regression analysis showed that perceived usefulness, behavioral intention, trust in automation, and risk were the strongest predictors of actual use. These predictors are commonly interpreted as indicators of technology acceptance. Full article
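For contrast with the LLM pipeline, the conventional baseline for IoC extraction from short posts is rule-based pattern matching. The deliberately minimal patterns below (an assumed sketch, not the paper's extractor) also catch defanged domains such as evil[.]com:

```python
import re

# Minimal IoC patterns; real extractors use far more exhaustive rule sets.
PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+(?:\[\.\]|\.)(?:com|net|org|io)\b",
}

def extract_iocs(text):
    """Return every pattern class with its matches in the post."""
    return {kind: re.findall(pat, text) for kind, pat in PATTERNS.items()}

found = extract_iocs("C2 at 203.0.113.7, payload from evil[.]com")
```

Such regex baselines recover artifacts but none of the context (campaign, actor, confidence), which is the gap the contextual-summarization stage in the framework targets.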
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)

41 pages, 4390 KB  
Article
AE3GIS—An Agile Emulated Educational Environment for Guided Industrial Security Training
by Tollan Berhanu, Hunter Squires, Braxton Marlatt, Scott Anderson, Benton Wilson, Robert A. Borrelli and Constantinos Kolias
Future Internet 2026, 18(3), 166; https://doi.org/10.3390/fi18030166 - 20 Mar 2026
Abstract
Industrial Control Systems (ICSs) are the backbone of modern critical infrastructure, such as electric power, water treatment, oil and gas distribution, and manufacturing operations. While the convergence of IT and OT has greatly increased efficiency and observability, it has also greatly expanded the attack surface of these once-isolated systems. High-profile cyber-physical attacks, including Stuxnet (2010), TRITON (2017), and the Colonial Pipeline ransomware attack (2021), have shown that ICS-targeted cyberattacks can cause physical damage, disrupt economic stability, and put public safety at risk. Despite the growing prevalence and intensity of such threats, ICS-based cybersecurity education remains largely under-resourced and underfunded. Traditional ICS training laboratories require highly specialized hardware, vendor-specific tools, and expensive licensing that significantly raise barriers to entry. Traditional labs typically require on-site participation and pose physical safety concerns when cyber-physical attack scenarios are performed. These barriers leave students unable to get necessary security training for ICSs. Therefore, this paper introduces AE3GIS: Agile Emulated Educational Environment for Guided Industrial Security—a fully virtual, lightweight, open-source platform designed to democratize ICS cybersecurity education. Based on the GNS3 network simulation tool, AE3GIS enables rapid deployment of comprehensive ICS environments containing IT and OT systems, industrial communication protocols, control logic, and diverse security tools. AE3GIS is designed to provide practical training for students using realistic ICS cybersecurity scenarios through a local or remote training platform without the cost, safety, or accessibility limitations of hardware-based labs. Full article
(This article belongs to the Section Cybersecurity)

23 pages, 6306 KB  
Article
Trustless Federated Reinforcement Learning for VPP Dispatch
by Xin Zhang and Fan Liang
Electronics 2026, 15(6), 1303; https://doi.org/10.3390/electronics15061303 - 20 Mar 2026
Abstract
Large-scale Virtual Power Plants (VPPs) are increasingly essential as Distributed Energy Resources (DERs) assume ancillary service duties once supplied by conventional generation, yet scaling a VPP exposes a persistent trilemma among economic efficiency, data privacy, and operational security. Centralized coordination can approach optimal revenue but requires collecting fine-grained DER operational data and creates a single point of compromise. Federated Learning (FL) mitigates raw data centralization by keeping measurements and experience local, but it introduces a fragile trust assumption that the aggregator will correctly and fairly combine model updates. This trust gap is acute in reinforcement learning-based VPP control because aggregation deviations, including selectively dropping updates, manipulating weights, replaying stale models, or injecting a replacement model, can silently bias the learned policy and degrade both profit and compliance. We propose a zero-knowledge federated reinforcement learning framework for trustless VPP coordination in which each DER trains a local deep reinforcement learning agent to solve a multi-objective dispatch problem that balances ancillary service revenue against battery degradation under operational and grid constraints, while the global aggregation step is made externally verifiable. In each round, participants bind membership via signed receipts and commit to their updates, and the aggregator produces a zk-SNARK, proving that the published global parameters equal the agreed aggregation rule applied to the receipt-bound set of committed updates under a fixed-point encoding with range constraints. Verification is lightweight and can be performed independently by each DER, removing the need to trust the aggregator for aggregation integrity without centralizing raw DER operational data or trajectories. The proposed design does not aim to hide model updates from the aggregator. Instead, it provides external verifiability of the aggregation computation while keeping raw measurements and local experience local. We formalize the threat model and verifiable security properties for aggregation correctness and update inclusion, present a circuit construction with proof complexity characterized by model dimension and fleet size, and evaluate the approach in power and cyber co-simulation on the IEEE 33-bus feeder with ancillary service signals. Results show near-centralized economic performance under benign conditions and improved robustness to aggregator-side deviations compared to standard federated reinforcement learning. Full article
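The commit-then-verify structure of an aggregation round can be sketched with plain hash commitments. A real zk-SNARK replaces the verifier's full recomputation with a succinct proof; this simplified stand-in has neither succinctness nor zero knowledge and is purely illustrative:

```python
import hashlib

def commit(update, nonce):
    """Hash commitment to one participant's fixed-point model update."""
    blob = ",".join(map(str, update)) + "|" + nonce
    return hashlib.sha256(blob.encode()).hexdigest()

def aggregate(updates):
    """The agreed aggregation rule: element-wise integer mean."""
    n = len(updates)
    return [sum(col) // n for col in zip(*updates)]

def verify(published, updates, nonces, commitments):
    """A DER re-checks every commitment, then recomputes the rule and
    compares against the aggregator's published parameters."""
    ok = all(commit(u, r) == c
             for u, r, c in zip(updates, nonces, commitments))
    return ok and published == aggregate(updates)

updates = [[2, 4], [4, 8]]              # two DERs' fixed-point updates
nonces = ["n0", "n1"]                   # per-round commitment nonces
receipts = [commit(u, r) for u, r in zip(updates, nonces)]
honest = verify([3, 6], updates, nonces, receipts)    # correct aggregation
forged = verify([9, 9], updates, nonces, receipts)    # injected replacement
```

The `forged` case models the "injecting a replacement model" deviation from the abstract: the published parameters no longer match the committed inputs, so verification fails.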

22 pages, 3299 KB  
Article
DualStream-RTNet: A Multimodal Deep Learning Framework for Grape Cultivar Classification and Soluble Solid Content Prediction
by Zhiguo Liu, Yufei Song, Aoran Liu, Xi Meng, Chang Liu, Shanshan Li, Xiangqing Wang and Guifa Teng
Foods 2026, 15(6), 1095; https://doi.org/10.3390/foods15061095 - 20 Mar 2026
Viewed by 181
Abstract
Accurate and non-destructive evaluation of grape quality is crucial for intelligent viticulture, yet most existing approaches address cultivar classification and soluble solid content (SSC) prediction as independent tasks based on single-modality data, limiting robustness and practical applicability. This study proposes DualStream-RTNet, a unified multimodal deep learning framework that simultaneously performs grape cultivar classification and SSC prediction by integrating RGB-HSV fused images and PCA-compressed hyperspectral spectra. The dual-stream architecture enables the complementary learning of external chromatic–textural cues and internal physicochemical information, while a Transformer-enhanced fusion module strengthens global representation and cross-modal correlation. A dataset of 864 berries from five grape cultivars was used to validate the model. DualStream-RTNet achieved 93.64% classification accuracy, outperforming ResNet18 and other CNN baselines, and produced more compact and consistent confusion-matrix patterns. For SSC prediction, it consistently yielded the highest performance across cultivars, with R²p values up to 0.9693 and RMSE as low as 0.2567, surpassing the PLSR, SVR, LSTM, and Transformer regression models. These results demonstrate the superiority of the proposed framework in capturing both visual and spectral characteristics. DualStream-RTNet provides an efficient and scalable solution for comprehensive grape quality assessment, offering strong potential for real-time sorting, precision grading, and smart agricultural applications. Full article
(This article belongs to the Section Food Engineering and Technology)
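As a rough illustration of the dual-stream idea (not the authors' architecture), the following NumPy sketch compresses hyperspectral spectra with PCA, projects each modality into a shared space, and feeds the fused representation to separate classification and regression heads. All shapes, weights, and data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two modalities (dimensions are illustrative).
n_berries = 864
img_feats = rng.normal(size=(n_berries, 64))    # pooled RGB-HSV image features
spectra = rng.normal(size=(n_berries, 200))     # hyperspectral reflectance curves

def pca_compress(x, k):
    """Compress spectra to k principal components via SVD."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:k].T

spec_feats = pca_compress(spectra, 16)

# Dual-stream fusion: project each stream, concatenate, then two task heads.
w_img = rng.normal(size=(64, 32)) * 0.1
w_spec = rng.normal(size=(16, 32)) * 0.1
fused = np.concatenate([np.tanh(img_feats @ w_img),
                        np.tanh(spec_feats @ w_spec)], axis=1)

w_cls = rng.normal(size=(64, 5)) * 0.1   # five grape cultivars
w_reg = rng.normal(size=(64, 1)) * 0.1   # soluble solid content (SSC)
cultivar_logits = fused @ w_cls
ssc_pred = fused @ w_reg

print(cultivar_logits.shape, ssc_pred.shape)   # (864, 5) (864, 1)
```

The point of the structure is that both heads share one fused representation, so chromatic–textural and spectral cues jointly inform both tasks; the paper's Transformer-enhanced fusion replaces the plain concatenation used here.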

32 pages, 2268 KB  
Article
Symmetry-Driven Multi-Objective Dream Optimization for Intelligent Healthcare Resource Management and Emergency Response
by Ashraf A. Abu-Ein, Ahmed R. El-Saeed, Obaida M. Al-Hazaimeh, Hanin Ardah, Gaber Hassan, Mohammed Tawfik and Islam S. Fathi
Symmetry 2026, 18(3), 530; https://doi.org/10.3390/sym18030530 - 20 Mar 2026
Viewed by 121
Abstract
Structural symmetry appears as a natural feature in both optimal solution landscapes and hospital scheduling behaviors, representing an inherent balance that can be deliberately leveraged to improve how quickly algorithms converge and how reliably systems perform in intricate healthcare optimization contexts. Managing hospital resources is a multifaceted challenge that requires simultaneously addressing several competing goals, such as reducing costs, improving patient experiences, making the most of available resources, distributing staff workload fairly, and strengthening readiness for emergencies. Traditional optimization approaches frequently struggle to cope with the complexity and ever-changing nature of modern healthcare environments. To address this gap, this study introduces a novel Multi-Objective Dream Optimization Algorithm (MO-DOA) tailored for smart healthcare resource management, which adapts a biologically inspired optimization framework to meet the specific demands of healthcare settings. The MO-DOA is built around three core mechanisms: a foundational memory component that retains high-quality solutions, a forgetting-supplementation component that maintains a productive balance between exploration and exploitation, and a dream-sharing component that promotes diversity among candidate solutions. Rigorous testing across realistic hospital environments confirms MO-DOA’s outstanding effectiveness, with results showing a 21.86% gain in resource utilization, a 30.95% decrease in patient waiting times, a 19.06% boost in patient satisfaction, and a 29.56% improvement in how evenly staff workloads are distributed. The algorithm’s emergency response capabilities are especially noteworthy, achieving bed assignments within 4.23 min and an equipment deployment success rate of 94.56%. Computationally, the algorithm proves highly efficient, with an average response time of 18.87 s and strong scalability across different operational scales. 
Collectively, these findings position MO-DOA as a powerful and practical tool for optimizing hospital operations in real time. Full article
(This article belongs to the Special Issue Symmetry in Complex Analysis Operators Theory)
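Multi-objective selection in algorithms of this kind rests on Pareto dominance: a candidate survives only if no other candidate is at least as good on every objective and strictly better on one. A minimal generic sketch, with made-up hospital schedule scores and utilization negated so that every objective is minimized (this illustrates dominance filtering in general, not the MO-DOA memory, forgetting, or dream-sharing mechanisms):

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated solutions (all objectives minimized)."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i in range(len(obj)):
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(len(obj)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Candidate schedules scored on (cost, waiting time, -utilization); values invented.
candidates = [
    (100.0, 30.0, -0.80),
    ( 90.0, 35.0, -0.85),
    (110.0, 40.0, -0.70),   # dominated by the first candidate
    ( 95.0, 28.0, -0.78),
]
print(pareto_front(candidates))   # [0, 1, 3]
```

The surviving set is the Pareto front; a multi-objective optimizer's archive of high-quality solutions is repeatedly filtered this way each generation.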

27 pages, 1493 KB  
Article
Single-Attention Large Language Model for Efficient Multi-Regional Electricity Demand and Generation Forecasting
by Muhammad Zulfiqar, Kelum A. A. Gamage and M. B. Rasheed
Energies 2026, 19(6), 1522; https://doi.org/10.3390/en19061522 - 19 Mar 2026
Viewed by 240
Abstract
Electricity forecasting is one of the most crucial aspects of maintaining stable, reliable, and autonomous power systems. While recently developed forecasting methods based on large language models can make accurate predictions, they still struggle with high computational complexity, which demands substantial computing power, and with a reliance on carefully designed prompts, making them harder to use in practice. To address this, we propose a Single-Attention Large Language Model (SA-LLM) that uses a unified attention mechanism to capture relationships between main and auxiliary variables without manually created prompts. The proposed framework was tested on real electricity supply and demand datasets obtained from major U.S. electricity markets, including PJM, MISO, NYISO, ISO New England, ERCOT, SPP, and CAISO. Experimental results demonstrate that the proposed SA-LLM outperforms existing methods in terms of accuracy and associated errors. More specifically, SA-LLM achieved a 22.5% improvement in the mean absolute error compared with traditional LSTM-based models, while reducing memory usage by 52.1% and training time by 38.4% relative to recent LLM-based methods. Furthermore, SA-LLM demonstrates strong zero-shot generalization, achieving an additional 18.2% improvement in the MAE on previously unseen regions. Full article
(This article belongs to the Special Issue Advanced Load Forecasting Technologies for Power Systems)
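The unified-attention idea can be illustrated with a minimal NumPy sketch: one scaled dot-product attention pass over a concatenated sequence of demand tokens and covariate tokens, so demand positions attend to auxiliary variables directly with no prompt engineering or separate cross-attention module. Shapes, embeddings, and the single-layer setup are illustrative assumptions, not the SA-LLM architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

d = 16
load = rng.normal(size=(24, d))      # embedded hourly demand tokens
weather = rng.normal(size=(24, d))   # embedded auxiliary covariate tokens

# A single attention pass over the concatenated sequence lets every demand
# token attend to every covariate token in the same operation.
tokens = np.concatenate([load, weather], axis=0)   # (48, d)
out = attention(tokens, tokens, tokens)
forecast_repr = out[:24]                           # representations for demand steps
print(tokens.shape, forecast_repr.shape)           # (48, 16) (24, 16)
```

Folding the main and auxiliary series into one attention block, rather than stacking many layers or engineering prompts, is what would plausibly drive the reported memory and training-time savings.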
