Search Results (289)

Search Parameters:
Keywords = linear and bounded operator

23 pages, 4566 KB  
Article
Sequential Convex Trajectory Planning for Space-Debris Conjunction Mitigation in Satellite Formations
by Michał Błażejczyk and Paweł Zagórski
Appl. Sci. 2026, 16(8), 3707; https://doi.org/10.3390/app16083707 - 10 Apr 2026
Abstract
The growing density of space debris in Low Earth Orbit poses significant risks to Distributed Space Systems (DSSs), where multiple satellites operate in close proximity. Conventional single-satellite collision avoidance maneuvers do not account for internal formation safety and may induce secondary conjunction risks. This work presents a formation-level trajectory optimization framework for short-term conjunction mitigation that jointly addresses external debris avoidance and inter-satellite collision prevention. The proposed Space-Debris Evasion with Internal-Collision-Avoidance (SDEICA) method formulates the problem as a sequential convex programming scheme. A probabilistic debris keep-out region is modeled as an elliptical collision tube derived from the relative position covariance at the Time of Closest Approach (TCA) and convexified via tangent-plane approximation. Internal safety constraints are incorporated through successive linearization of inter-satellite separation conditions. The framework is evaluated on 1197 conjunction scenarios derived from ESA Collision-Avoidance Challenge data for a three-satellite formation. Results demonstrate a systematic reduction in the probability of collision below the operational threshold of 10⁻⁵ in all cases, within numerical tolerance, eliminating inter-satellite distance violations, maintaining bounded formation deviation, and requiring only moderate control effort. The median computational time is 17.12 s per scenario. These findings indicate that sequential convex optimization provides a practical approach for coordinated, fuel-efficient collision avoidance in satellite formations operating in increasingly congested orbital environments. Full article
(This article belongs to the Section Aerospace Science and Engineering)
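The tangent-plane convexification described in the abstract can be sketched as follows. Because the Mahalanobis distance to the debris ellipsoid is a convex function, its linearization about a reference point is a global under-estimator, so the resulting half-space constraint is a safe convex inner approximation. A minimal 2-D sketch; all names and the setup are illustrative, not the authors' code:

```python
import math

def linearized_keepout(r_ref, center, P_inv, s):
    """Linearize f(r) = sqrt((r-c)^T P^{-1} (r-c)) >= s about r_ref.
    Since f is convex, f(r_ref) + grad . (r - r_ref) <= f(r), so enforcing
    the linearized constraint conservatively keeps r outside the ellipsoid."""
    n = len(r_ref)
    d = [r_ref[i] - center[i] for i in range(n)]
    Pd = [sum(P_inv[i][j] * d[j] for j in range(n)) for i in range(n)]
    f = math.sqrt(sum(di * pi for di, pi in zip(d, Pd)))
    grad = [pi / f for pi in Pd]  # gradient of f at r_ref
    return f, grad

def satisfies(r, r_ref, f, grad, s):
    """Check the convex constraint f + grad . (r - r_ref) >= s."""
    return f + sum(g * (ri - rr) for g, ri, rr in zip(grad, r, r_ref)) >= s
```

Any point satisfying the linearized constraint also satisfies the original keep-out bound, which is what lets a sequential convex scheme iterate it safely.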

20 pages, 1293 KB  
Article
Enhancing Long-Term Forecasting Stability in Smart Grids: A Hybrid Mamba-LSTM-Attention Framework
by Fusheng Chen, Chong Fo Lei, Te Guo and Chiawei Chu
Energies 2026, 19(8), 1855; https://doi.org/10.3390/en19081855 - 9 Apr 2026
Abstract
Accurate multivariate long-term time series forecasting (LTSF) is critical for smart grid operations. However, non-stationary distribution shifts frequently induce compounding error accumulation in conventional architectures. This study proposes the Mamba-LSTM-Attention (MLA) framework, a distribution-aware architecture engineered for forecasting stability. The pipeline integrates Reversible Instance Normalization (RevIN) to neutralize statistical drift. To address computational bottlenecks, the architecture utilizes a linear-time Selective State Space Model (Mamba) to capture global trend dynamics, cascaded with a single-layer gated Long Short-Term Memory (LSTM) unit to model localized non-linear residuals. A terminal information bottleneck structurally bounds cross-step error propagation. Empirical results across standard ETT and Electricity benchmarks reveal a precision–stability trade-off. By prioritizing structural resilience, the MLA framework limits error accumulation on highly volatile datasets, yielding MSEs of 0.210 and 0.128 on ETTh2 and ETTm2 at the T = 96 horizon. This structural bottleneck inherently smooths high-frequency periodic patterns, yielding lower absolute accuracy on stationary benchmarks such as ETTh1 and ETTm1. Ultimately, the architecture establishes a computationally efficient, structurally stable baseline tailored for non-stationary anomaly tracking in smart grids. Full article
(This article belongs to the Special Issue Forecasting Electricity Demand Using AI and Machine Learning)
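RevIN, mentioned in the abstract, standardizes each input window with its own statistics and re-applies them to the model output, which is how it neutralizes distribution drift between training and inference. A minimal sketch; the real method also carries learnable affine parameters, omitted here:

```python
class RevIN:
    """Reversible Instance Normalization: per-instance standardization whose
    statistics are stored so the forecast can be mapped back (denormalized)."""
    def __init__(self, eps=1e-5):
        self.eps = eps

    def normalize(self, x):
        self.mean = sum(x) / len(x)
        var = sum((v - self.mean) ** 2 for v in x) / len(x)
        self.std = (var + self.eps) ** 0.5
        return [(v - self.mean) / self.std for v in x]

    def denormalize(self, y):
        # re-inject the stored instance statistics into the model output
        return [v * self.std + self.mean for v in y]
```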

24 pages, 2227 KB  
Article
Prime-Enforced Symmetry Constraints in Thermodynamic Recoils: Unifying Phase Behaviors and Transport Phenomena via a Covariant Fugacity Hessian
by Muhamad Fouad
Symmetry 2026, 18(4), 610; https://doi.org/10.3390/sym18040610 - 4 Apr 2026
Abstract
The Zeta-Minimizer Theorem establishes that the Riemann zeta function ζ(s) and the primes arise variationally as unique minimizers of a phase functional defined on a symmetric measure space (X, μ, G) equipped with helical operators. Three fundamental axioms—strict concave entropy maximization (Axiom 1), spectral Gibbs minima with non-vanishing ground states (Axiom 2), and irreducible bounded oscillations with flux conservation (Axiom 3)—allow for the selection of the non-proper Archimedean conical helix as the sole topology satisfying all constraints. Primes emerge as indivisible minimal cycles in the associated representation graph Γ (via Hilbert irreducibility and Maschke’s theorem), while the Euler product is recovered through the spectral Dirichlet mapping of the helical eigenvalues. The partial zeta product, Z(s) = ∏_j (1 − p_j^{−s})^{−1} for s ∈ ℝ_{>0}, constitutes the exact grand partition function of any finite subsystem. Numerical inversion of this product directly recovers the mixture frequency s from any experimental compressibility factor Z_mix. Mole fractions x_i(s), interaction parameters Δ(x_i), and the Lyapunov spectrum λ(x_i) then follow deductively via the helical transfer matrix and the closed-form linear ODE for Δ. Occupation numbers N(x_i) attain sharp maxima precisely at Fibonacci ratios F_r/F_{r+1}, leading to the molecular prime-ID rule. For twelve representative purely binary (irreducible) systems spanning atomic noble gases, simple diatomics, polar molecules, and an aromatic ring, the residuals satisfy |Z(s) − Z_mix| < 1.5 × 10⁻⁸. The resulting λ(x_i) curves accurately reproduce critical points, liquid ranges, and thermodynamic anomalies with zero adjustable parameters. The Riemann Hypothesis follows rigorously as a theorem: the unique fixed point of the duality functor s ↦ 1 − s that preserves the orthogonality condition cos²θ_k = 1 is Re(s) = 1/2, enforced by Axiom 1 concavity and Axiom 3 irreducibility. The framework is fully deductive and parameter-free and extends naturally to arbitrary mixtures and multiplicities through the helical representation graph. It provides a variational unification of analytic number theory, spectral geometry, thermodynamic phase behavior, and the Riemann Hypothesis from first principles. Full article
(This article belongs to the Section Physics)

26 pages, 6403 KB  
Article
RDD-DETR Algorithm for Full-Scale Detection of Rice Diseases
by Ziyan Yang, Wensi Zhang, Chengfeng Hu, Zehao Feng and Jie Li
Agriculture 2026, 16(7), 799; https://doi.org/10.3390/agriculture16070799 - 3 Apr 2026
Abstract
To tackle the challenges of high computational expense, limited detection accuracy, and imbalanced detection performance across multi-scale targets in rice disease identification within complex natural environments, we propose the Rice Disease Deformable Detection Transformer (RDD-DETR). This model serves as a full-scale detection framework based on the Deformable Detection Transformer (Deformable DETR). The model introduces a Rectified Linear Unit (ReLU)-enhanced lightweight linear attention module, which uses differentiated position coding and ReLU kernel mapping to reduce computational complexity. A cross-layer dynamic fusion and inter-layer supervision module is designed to break the serial dependence in decoders and strengthen interlayer supervision, enabling the decoder to generate more accurate and robust target representations. Furthermore, we design an optimization mechanism for sub-scale positioning loss to substantially boost detection accuracy across all target scales. Experiments on our custom RiceLeafDisease-RSOD dataset demonstrate that RDD-DETR achieves an average precision (AP) at Intersection over Union (IoU) threshold 0.5:0.95 of 0.7363 across all categories, surpassing the baseline model by 6.09%. Notably, detection accuracy improves by 6.10% for small targets, 6.61% for medium targets, and 5.42% for large targets. Evaluated on the validation set (671 images with 2482 labeled bounding boxes), the model achieves an AP at IoU threshold 0.5 of 0.9684 while reducing computational cost by 37.41% (from 136.02 to 85.1 Giga Floating Point Operations, GFLOPs) compared to the original Deformable DETR. These results validate RDD-DETR as an effective solution for accurate and efficient real-time rice disease monitoring in complex field environments. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
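The ReLU-kernel linear attention idea in the abstract replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so keys and values can be aggregated once and complexity drops from quadratic to linear in sequence length. A toy sketch with list-based vectors; dimensions and names are illustrative, not the RDD-DETR implementation:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear_attention(Q, K, V):
    """Kernelized attention: out_q = phi(q) . S / (phi(q) . z), where
    S = sum_k phi(k) v_k^T and z = sum_k phi(k) are accumulated once."""
    d, m = len(Q[0]), len(V[0])
    S = [[0.0] * m for _ in range(d)]
    z = [0.0] * d
    for k, v in zip(K, V):
        fk = relu(k)
        for i in range(d):
            z[i] += fk[i]
            for j in range(m):
                S[i][j] += fk[i] * v[j]
    out = []
    for q in Q:
        fq = relu(q)
        denom = sum(fq[i] * z[i] for i in range(d)) + 1e-9  # avoid div by zero
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(m)])
    return out
```

With a single key–value pair, the output reduces to that value, matching what full attention would give.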

25 pages, 352 KB  
Article
Resolvent-Generated Generalized Spectral Operators for Nonlinear Dynamical Systems via Koopman Semigroups
by Rui A. P. Perdigão
Mathematics 2026, 14(7), 1145; https://doi.org/10.3390/math14071145 - 29 Mar 2026
Abstract
Spectral methods form a cornerstone of linear dynamics, where evolution is resolved into harmonic modes governed by eigenvalues and spectral measures of normal operators. For nonlinear dynamical systems, however, the harmonic eigenfunction paradigm typically breaks down: Koopman operators are often non-normal, may possess a continuous spectrum, and rarely admit complete eigenbases on natural observable spaces. This work develops a resolvent-centered operator-theoretic framework for generalized spectral representations of nonlinear flows through their associated Koopman C0 semigroups. Rather than relying on diagonalization, we construct resolvent-generated generalized spectral operators that yield weak integral representations of the semigroup valid in non-normal and continuous-spectrum regimes. We show that, under mild polynomial resolvent growth bounds along vertical lines, these spectral distributions become finite complex Radon measures on bounded spectral regions, thereby recovering a measure-theoretic interpretation analogous to classical spectral integrals. In the normal case, the framework reduces to the standard spectral theorem. The resulting resolvent-based perspective naturally incorporates pseudospectral amplification and transient growth, providing a unified description of both asymptotic and non-modal dynamics. Full article
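The resolvent–semigroup link underlying the abstract is the classical Laplace-transform identity: for a C0 semigroup T(t) with generator A and growth bound ω,

```latex
R(\lambda; A) = (\lambda I - A)^{-1} = \int_0^\infty e^{-\lambda t}\, T(t)\, \mathrm{d}t,
\qquad \operatorname{Re}\lambda > \omega .
```

Generalized spectral content is then read off from the boundary behavior of R(λ; A) along vertical lines, which is why the polynomial resolvent growth bounds in the abstract suffice to recover measure-valued spectral representations without diagonalization.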
30 pages, 5902 KB  
Article
Research on a Precision Counting Method and Web Deployment for Natural-Form Bothriochloa ischaemum Spikes and Seeds Based on Object Detection
by Huamin Zhao, Yongzhuo Zhang, Yabo Zheng, Erkang Zeng, Linjun Jiang, Weiqi Yan, Fangshan Xia and Defang Xu
Agronomy 2026, 16(7), 706; https://doi.org/10.3390/agronomy16070706 - 27 Mar 2026
Abstract
Bothriochloa ischaemum is a key forage species with strong grazing tolerance and high nutritional value, making precise quantification of spike and seed traits essential for germplasm evaluation and yield prediction. However, the compact architecture and minute seed size in natural field conditions render manual counting inefficient and labor-intensive. To address this limitation, this study presents a non-destructive and automated quantification framework integrating advanced object detection and regression analysis for accurate in situ estimation of spikes and seed numbers. To further address the challenges of dense spike detection caused by occlusion and small object sizes, this study developed a modified model named YOLOv12-DAN by integrating DySample dynamic upsampling, ASFF feature fusion, and NWD loss, which achieved a mean average precision (mAP) of 91.6%. Meanwhile, for the detection of dense kernels on compact spikes, an improved YOLOv12 architecture incorporating an Explicit Visual Center (EVC) module was proposed to enhance multi-scale feature representation. The optimized model attained a bounding box precision of 96.5%, a recall rate of 86.4%, an mAP50 of 94.3%, and an mAP50-95 of 73.9%. Furthermore, a univariate linear regression model based on 132 spike samples verified the reliable consistency between the predicted and actual seed counts, with a mean absolute error (MAE) of 6.30, a mean absolute percentage error (MAPE) of 9.35, and an R-squared (R2) value of 0.808. Finally, the model was deployed through a lightweight end-to-end web application, enabling real-time field operation and promoting its applicability in breeding programs and agronomic decision-making. This study provides a robust technical pathway for automated phenotyping and precision forage improvement. Full article
(This article belongs to the Special Issue Digital Twins in Precision Agriculture)
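The univariate regression step and its error metrics (MAE, MAPE, R²) are standard; a generic ordinary-least-squares sketch, not the authors' code:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept a, slope b

def metrics(y, yhat):
    """MAE, MAPE (%), and R^2 of predictions yhat against observations y."""
    n = len(y)
    mae = sum(abs(a - p) for a, p in zip(y, yhat)) / n
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(y, yhat)) / n
    my = sum(y) / n
    r2 = 1.0 - (sum((a - p) ** 2 for a, p in zip(y, yhat))
                / sum((a - my) ** 2 for a in y))
    return mae, mape, r2
```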

24 pages, 3168 KB  
Article
Application of Machine Learning Models to Oil Refinery Programming
by Evar Umeozor
Processes 2026, 14(7), 1072; https://doi.org/10.3390/pr14071072 - 27 Mar 2026
Abstract
Transparent and evidence-based representations of global crude oil refining systems remain limited in the public literature, constraining robust energy systems modeling and policy analysis. This study develops a comprehensive, configuration-based modeling framework for all operating crude oil refineries worldwide using plant-level process unit data. Forty unique refinery configurations are identified through an unsupervised decision tree-based clustering approach that accounts for process unit presence and relative conversion intensity. An extremely randomized trees (ETR) machine learning model is trained on approximately 11,000 refinery-year observations to predict refined product yields as a function of refinery configuration, capacity, and crude oil diet. The model achieves out-of-sample coefficients of determination exceeding 0.90 for all major products and outperforms multiple linear regression and other ensemble methods. The predictive model is integrated with a differential evolution optimization algorithm to enable refinery programming under operational and feedstock constraints. The application of this model to Gulf Cooperation Council (GCC) refineries shows that, under existing technologies, petrochemical feedstock yields are bounded at approximately 37%, significantly below announced long-term diversification targets of 70–85%. Yield improvements of up to 6 percentage points are feasible through operational optimization but are associated with capacity utilization adjustments and product trade-offs. The framework provides a scalable tool for refinery benchmarking, energy transition analysis, and strategic planning across facility, national, and global levels. Full article
(This article belongs to the Special Issue Feature Review Papers in Section "Chemical Processes and Systems")
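The differential evolution optimizer coupled to the yield model follows the standard rand/1/bin scheme; a compact generic sketch with illustrative hyper-parameters, not the paper's implementation:

```python
import random

def differential_evolution(f, bounds, pop=20, gens=80, F=0.6, CR=0.9, seed=0):
    """Minimize f over box bounds with DE/rand/1/bin and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one mutated coordinate
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = X[a][j] + F * (X[b][j] - X[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the feasible box
                else:
                    v = X[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # keep the trial only if it is no worse
                X[i], fit[i] = trial, ft
    best = min(range(pop), key=fit.__getitem__)
    return X[best], fit[best]
```

In the paper's setting, f would wrap the trained yield model plus operational constraints; here a toy quadratic stands in for it.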

14 pages, 5621 KB  
Article
Mechanism of Gas Control and Fracturing Release in Mid-Shallow High-Rank Coal Reservoirs and Its Engineering Practice
by Yanhui Yang, Zongyuan Li, Haozeng Jin, Xiuqin Lu, Zhihong Zhao and Yuting Wang
Processes 2026, 14(7), 1031; https://doi.org/10.3390/pr14071031 - 24 Mar 2026
Abstract
To achieve efficient development of medium-depth and shallow high-rank coalbed methane in the Qinshui Basin of Shanxi Province, the authors focused on the microscopic methane release mechanism. Through scanning electron microscopy, nuclear magnetic resonance, and isothermal adsorption experiments, the pore structure, distribution patterns, and influence of hydration effects in this type of coal were revealed. It was clarified that the ineffective utilization of “bound-state” methane within nanopores is the key factor leading to low productivity and efficiency in coalbed methane development. Further, based on molecular simulations, the competitive adsorption characteristics between water and methane molecules were quantified, indicating that about 78% of the methane in the internal pores of 4 nm coal molecular clusters cannot be desorbed through pressure reduction. Meanwhile, the production enhancement mechanism of hydraulic fracturing on coal seam depressurization, permeability enhancement, reduction in low-speed diffusion distance, and enhancement of high-speed linear flow was clarified. Through large-scale pad water injection and stepwise slow production increase, the coal seam can be fully communicated, the reservoir effectively stimulated, and the adsorbed methane sufficiently released. This paper establishes a “channeled” fracturing concept and its supporting technological system for medium-depth and shallow high-rank coal, which has been successfully applied in field operations. The pilot well group achieved stable daily production exceeding 50,000 cubic meters per day, laying a solid foundation for the continuous and stable production increase in medium-depth and shallow high-rank coalbed methane in the Qinshui Basin. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)

19 pages, 393 KB  
Article
Topology-Dependent Performance of Free-Space Photonic Quantum Networks Under Noise
by Stefalo Acha and Sun Yi
Photonics 2026, 13(4), 310; https://doi.org/10.3390/photonics13040310 - 24 Mar 2026
Abstract
Photonic quantum communication enables secure and high-fidelity information transfer beyond classical limits, with direct relevance to emerging quantum networks operating in free-space environments. While physical-layer models of depolarizing noise, Gamma–Gamma turbulence statistics, entanglement swapping, and decoy-state QKD security bounds are individually well established, prior work typically treats these components in isolation or under fixed network assumptions. In this work, we develop a unified topology-aware analytical framework that simultaneously integrates free-space optical link budgets, turbulence-induced visibility degradation, depolarizing qubit noise, multi-hop entanglement cascade dynamics, teleportation fidelity thresholds, CHSH nonlocality certification, and asymptotic decoy-state secret key rate bounds across star, mesh, and ring graph structures. Rather than introducing new physical channel models, we demonstrate that identical physical links exhibit fundamentally different end-to-end performance once embedded within different network topologies. Mesh architectures minimize visibility cascade through hop-count reduction but incur quadratic hardware scaling. Star topologies minimize link count but concentrate noise and synchronization overhead at the hub. Ring configurations offer linear hardware scaling with multiplicative fidelity degradation. The results establish topology as a first-order design parameter in near-term free-space quantum networks operating without full quantum repeater infrastructures. While motivated by distributed multi-agent architectures, the framework applies broadly to terrestrial, airborne, and satellite-assisted photonic quantum communication systems. Full article
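The topology effect the abstract quantifies comes from multiplicative per-hop degradation: with per-link visibility V and h hops, end-to-end visibility scales as V^h, so mesh (direct links, 1 hop) outperforms star (2 hops via the hub) and ring (up to n/2 hops) on physically identical links. A minimal sketch of that scaling; the hop counts are the generic graph-theoretic ones, not results from the paper:

```python
def end_to_end_visibility(v_link, hops):
    """Multiplicative visibility cascade over a path of identical links."""
    return v_link ** hops

def ring_hops(n_nodes):
    """Worst-case path length between two nodes on a ring."""
    return n_nodes // 2

# mesh: direct link -> 1 hop; star: via hub -> 2 hops; ring: up to n//2 hops
```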

44 pages, 28577 KB  
Article
Triggered Fault-Tolerant Control Method Integrating Zonotope-Based Interval Estimation with Fatigue Load Prediction Model for Wind Turbines
by Yixin Zhou, Jia Liu, Yixiao Gao, Shuhao Cheng and Lei Fu
Sustainability 2026, 18(6), 2954; https://doi.org/10.3390/su18062954 - 17 Mar 2026
Abstract
In traditional wind turbine (WT) operation and maintenance, fault diagnosis and repair have long been relied on, yet the demand for continuous operation under faults persists. To address this, this study proposes a triggered fault-tolerant control framework for wind turbines with zonotope-based interval estimation. The method enhances safety by moving fault detection and isolation (FDI) from point estimation to interval estimation, reduces network traffic load via a WT load region-based adaptive event-triggered mechanism, and enables fast, robust fault diagnosis and isolation using interval residuals. A damage equivalent load (DEL)-sensitive cost term balances structural fatigue suppression while ensuring power tracking and safety constraints. Theoretically, Linear Matrix Inequality (LMI) conditions based on a common quadratic Lyapunov function ensure closed-loop stability and bounded observation errors, with proven interval residual fault sensitivity and triggering reliability. Numerically, on the standard NREL 5-MW WT model under multiple conditions (turbulence, faulty communication), it achieves an average power tracking accuracy of 95.56%, 28.68% fatigue suppression, and 67.40% bandwidth saving. Overall, it synergistically optimizes robust estimation, economical communication, and fatigue-aware control, providing a theoretically rigorous and experimentally validated technical framework for engineering-scale WT reliability improvement and lifespan extension. Full article
(This article belongs to the Section Energy Sustainability)
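The common quadratic Lyapunov condition referenced in the abstract takes the usual LMI form: a single positive definite matrix P certifies stability across all closed-loop modes A_i simultaneously,

```latex
\exists\, P = P^{\top} \succ 0 : \qquad A_i^{\top} P + P A_i \prec 0 \quad \forall i ,
```

so that V(x) = xᵀPx is a Lyapunov function decreasing along every mode, which is what yields bounded observation errors under mode switching.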

25 pages, 2146 KB  
Article
Machine Learning-Based Predictive Modelling of Key Operating Parameters in an Industrial-Scale Wet Vertical Stirred Media Mill
by Okay Altun, Aydın Kaya, Ali Seydi Keçeli, Ece Uzun, Meltem Güler and Nurettin Alper Toprak
Minerals 2026, 16(3), 311; https://doi.org/10.3390/min16030311 - 16 Mar 2026
Abstract
To the authors’ knowledge, this is the first industrial machine learning (ML) study focused on wet vertical stirred media milling. The study develops and validates machine learning (ML) models to predict the key operating parameters, namely mill discharge product size, mill feed slurry flow rate, mill power draw, and the specific energy consumption of an industrial wet vertical stirred media mill operating at a copper plant. A physics-guided workflow was adapted, combining relief coefficient-based variable screening with fundamental stirred milling principles to define 20 different structured model input scenarios. In the scope, six regression approaches, linear regression (LR), fine tree regression (FTR), support vector regression (SVR), random forest regression (RFR), artificial neural network regression (ANN), and Gaussian process regression (GPR), were trained and validated using plant sensor data and evaluated using R2 and RMSE. Overall performance was reasonable, with GPR providing the highest predictive accuracy, followed by RFR/ANN, while LR, SVR, and FTR performed lower. The potential benefit of feed size was also assessed conceptually through an upper-bound sensitivity analysis, representing a best-case scenario where an online feed size measurement would be available. Because the feed size descriptor (F80) was not independently measured but derived from an energy–size relationship, the associated accuracy gains are reported as theoretical upper-bound indications rather than independent predictive capability. Overall, the findings support ML-based decision support in stirred milling operations and motivate future work using independently measured feed size (or reliable proxy sensing). Full article
(This article belongs to the Collection Advances in Comminution: From Crushing to Grinding Optimization)

25 pages, 1610 KB  
Article
Supervised Imitation Learning for Optimal Setpoint Trajectory Prediction in Energy Management Under Dynamic Electricity Pricing
by Philipp Wohlgenannt, Vinzent Vetter, Lukas Moosbrugger, Mohan Kolhe, Elias Eder and Peter Kepplinger
Energies 2026, 19(6), 1459; https://doi.org/10.3390/en19061459 - 13 Mar 2026
Abstract
Energy management systems operating under dynamic electricity pricing require fast and cost-optimal control strategies for flexible loads. Mixed-integer linear programming (MILP) can compute theoretically optimal control trajectories but is computationally expensive and typically relies on accurate load forecasts, limiting its practical real-time applicability. This paper proposes a supervised imitation learning (IL) framework that learns optimal setpoint trajectories for a conventional proportional (P) controller directly from electricity price signals and temporal features, thereby eliminating the need for explicit load forecasting. The learned model predicts setpoint trajectories in an open-loop manner, while a lower-level P controller ensures stable closed-loop operation within a two-stage control architecture. The approach is validated in an industrial case study involving load shifting of a refrigeration system under dynamic electricity pricing and benchmarked against MILP optimization, reinforcement learning (RL), heuristic strategies, and various machine learning models. The MILP solution achieves a cost reduction of 21.07% and represents a theoretical upper bound under perfect information. The proposed Transformer model closely approximates this optimum, achieving 19.33% cost reduction while enabling real-time inference. Overall, the results demonstrate that the proposed supervised IL approach can achieve near-optimal control performance with substantially reduced computational effort for real-time energy management applications. Full article
(This article belongs to the Special Issue AI-Driven Modeling and Optimization for Industrial Energy Systems)
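The two-stage architecture — a learned open-loop setpoint trajectory tracked by a lower-level proportional controller — can be sketched with a toy first-order plant. The plant model and gain are illustrative stand-ins, not the paper's refrigeration system:

```python
def p_controller_track(setpoints, x0, kp=0.5):
    """Lower-level proportional tracking of a (learned) setpoint trajectory.
    Toy plant: x_{k+1} = x_k + u_k, with u_k = kp * (setpoint_k - x_k)."""
    x, traj = x0, []
    for sp in setpoints:
        u = kp * (sp - x)  # proportional control law
        x = x + u
        traj.append(x)
    return traj
```

With a constant setpoint, the tracking error contracts by (1 − kp) each step, so the closed loop remains stable regardless of how the upper-level trajectory was predicted — which is the point of keeping the P controller in the loop.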

19 pages, 707 KB  
Article
Performance Analysis of Half-Hyperbolic Convolution (HHC)-Type Operators via Regression-Based Metrics
by George A. Anastassiou, Seda Karateke and Metin Zontul
Algorithms 2026, 19(3), 217; https://doi.org/10.3390/a19030217 - 13 Mar 2026
Abstract
In this paper, we first introduce the adjustable half-hyperbolic (adj HH) tangent function as an activation function. We then establish both quantitative and qualitative convergence results for HH-activated convolution-type positive linear operators (PLOs) acting on the space of bounded and continuous functions on the real line. The theoretical convergence results are numerically validated by means of error decay plots obtained using Python (version 3.13). Moreover, we compare three different classes of HHC-type operators in terms of their convergence behavior and approximation performance. Finally, we conclude by discussing several potential application areas that illustrate the relevance of the presented theoretical framework. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

36 pages, 11335 KB  
Article
An Intelligent Hybrid PIDF Enhanced by a Fuzzy Fractional-Order Controller for Robust Load Frequency Regulation in a Two-Area Interconnected Power System
by Saleh Almutairi, Fatih Anayi, Michael Packianather, Mohammad Almutairi and Mokhtar Shouran
Energies 2026, 19(6), 1442; https://doi.org/10.3390/en19061442 - 12 Mar 2026
Abstract
Maintaining frequency regulation in interconnected power systems becomes increasingly difficult in the presence of nonlinear operating conditions. To address this issue, this study develops a hybrid load frequency control scheme in which a fuzzy fractional-order FOPI–FOPD controller is incorporated within a PIDF framework for a two-area LFC system. The controller parameters are optimized using the Dwarf Mongoose Optimization Algorithm (DMOA) and the Catch Fish Optimization Algorithm (CFOA), while the Integral of Time-Weighted Absolute Error (ITAE) is adopted as the performance criterion. The proposed strategy is examined under both linear and nonlinear scenarios, including the effects of Governor Dead Band (GDB) and Generation Rate Constraints (GRC). In the linear case, the DMOA-based design achieves an ITAE of 0.02939 with a tie-line settling time of 13.5478 s, whereas the CFOA-based design produces a bounded and convergent response with an ITAE of 0.03937 and a settling time of 14.4947 s. When GDB nonlinearity is introduced, the DMOA-tuned controller exhibits performance deterioration, yielding an ITAE of 0.1098 and a settling time of 19.0416 s, while the CFOA-tuned design shows more favorable time-domain performance with a lower ITAE of 0.05845 and a bounded settling time of 16.3595 s. These findings indicate that the CFOA-optimized PIDF–Fuzzy FOPI–FOPD controller provides an effective LFC solution under the examined nonlinear operating conditions.
(This article belongs to the Special Issue Challenges and Innovations in Stability and Control of Power Systems)
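The ITAE performance criterion used to tune both controllers is the integral of time-weighted absolute error, ITAE = ∫ t·|e(t)| dt. A minimal numerical sketch (the decaying error signal below is a made-up illustration, not a response from the paper's LFC model):

```python
import numpy as np

def itae(t, e):
    # ITAE = integral of t * |e(t)| dt, approximated with the
    # trapezoidal rule on sampled data. Errors that persist late in
    # the response are penalized more heavily than early transients.
    w = t * np.abs(e)
    return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t)))

# Illustrative frequency-deviation error: a decaying oscillation.
t = np.linspace(0.0, 20.0, 20001)
e = np.exp(-t) * np.cos(5.0 * t)
score = itae(t, e)
```

Minimizing this scalar over the controller gains (here via DMOA or CFOA) favors responses that settle quickly with small sustained error, which is why ITAE and settling time are reported side by side in the abstract.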

34 pages, 9228 KB  
Article
Analyzing the Impact of Kernel Fusion on GPU Tensor Operation Performance: A Systematic Performance Study
by Matija Dodović, Milica Veselinović and Marko Mišić
Electronics 2026, 15(5), 1034; https://doi.org/10.3390/electronics15051034 - 2 Mar 2026
Abstract
Large numbers of small tensor kernels are executed by GPUs in modern deep learning frameworks, where total performance is frequently constrained by memory bandwidth and kernel launch overheads. Systems such as TensorFlow XLA, PyTorch JIT, and cuDNN often use kernel fusion, which is defined as combining many tensor operations into a single GPU kernel, to reduce intermediate memory transfers and boost efficiency. Nevertheless, it is difficult to measure the true performance impact of fusion on both isolated tensor operations and end-to-end model execution. An experimental investigation of kernel fusion on three different NVIDIA GPUs is presented in this work. For four sample tensor operations: element-wise addition, fused multiply–add, linear transformation with ReLU activation, and map-reduce, we build fused and unfused CUDA kernels using FP32, FP16, and mixed-precision arithmetic. We measure execution time, speedup, and effective memory bandwidth across a range of input sizes. For memory-bound and activation-heavy workloads, fusion yields consistent speedups between 1.5× and 3.13×, particularly for small and medium inputs where kernel launch overhead is significant. For operations dominated by atomic updates, the benefit is limited to between 1.01× and 1.44×. When the reduction strategy is reformulated using block-level shared-memory aggregation, kernel fusion becomes effective again, achieving speedups of up to 2× by eliminating global synchronization bottlenecks. We further evaluate the effect of fusion on image classification models using PyTorch 2.10.0 JIT, achieving 1.54× to 1.83× faster inference. Our results provide practical guidelines on when kernel fusion is most effective.
(This article belongs to the Special Issue Advances in High-Performance and Parallel Computing)
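The fused multiply–add case can be illustrated conceptually in plain Python/NumPy (a sketch of what fusion removes, not the CUDA kernels benchmarked in the paper): the unfused version materializes an intermediate array that must be written out and read back, while the fused version computes each output element in a single pass.

```python
import numpy as np

def fma_unfused(a, b, c):
    # Two separate "kernels": the intermediate `tmp` is materialized in
    # memory between them, roughly doubling the traffic on large inputs
    # and requiring a second kernel launch on a GPU.
    tmp = a * b          # kernel 1: element-wise multiply
    return tmp + c       # kernel 2: element-wise add

def fma_fused(a, b, c):
    # One "kernel": each element is loaded once and the result written
    # once, mirroring a fused GPU kernel out[i] = a[i]*b[i] + c[i]
    # with no intermediate array.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] * b[i] + c[i]
    return out
```

The Python loop is of course slow; the point is the memory-access pattern. On a GPU, eliminating the intermediate write/read and the second launch is exactly where the 1.5×–3.13× speedups reported above come from on memory-bound workloads.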
