Search Results (41,428)

Search Parameters:
Keywords = operating conditions

19 pages, 2816 KB  
Article
Improved Piecewise Terminal Integral Sliding-Mode Adaptive Control for PMSM Speed Regulation in Rail Transit Traction
by Jiahui Wang, Zhongli Wang and Jingyu Zhang
Energies 2026, 19(8), 1992; https://doi.org/10.3390/en19081992 (registering DOI) - 21 Apr 2026
Abstract
Aiming at solving the problems of severe chattering and the irreconcilable trade-off between convergence speed and steady-state accuracy in traditional sliding-mode control (SMC) for the speed regulation system of permanent magnet synchronous motors (PMSMs) in rail transit traction, as well as its poor adaptability to complex disturbances such as frequent acceleration/deceleration and sudden load changes under traction conditions, a sliding-mode control strategy integrating improved piecewise terminal integral sliding-mode control (IPTISMC) with an adaptive smooth exponential reaching law (ASERL) is proposed. Taking the surface-mounted PMSM for rail transit traction as the research object, the d-q axis mathematical model is established, and a terminal integral sliding surface with a piecewise nonlinear function is designed, which resolves the problems of complex solutions and steady-state errors of the traditional sliding surface through a piecewise cooperative mechanism for large and small error stages. The designed ASERL realizes adaptive gain adjustment based on the state variables of the sliding surface and replaces the sign function with the hyperbolic tangent function, thus alleviating the inherent contradiction between convergence and chattering in the fixed-gain reaching law. The global stability and finite-time convergence of the system are rigorously proved based on Lyapunov stability theory. Furthermore, comparative experiments involving no-load operation, acceleration and deceleration, sudden load application and removal, and parameter perturbation are carried out on a DSP experimental platform for SMC-ERL, ISMC-ERL, IPTISMC-ERL and the proposed IPTISMC-ASERL.
Experimental results show that the proposed IPTISMC-ASERL strategy can significantly improve the dynamic response and steady-state control accuracy of the PMSM speed regulation system for rail transit traction, effectively suppress chattering to enhance riding comfort, and simultaneously strengthen the system’s anti-disturbance capability and parametric robustness. It can fully meet the engineering control requirements for high precision and high stability of PMSMs in rail transit traction applications. Full article
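The chattering-mitigation idea in this abstract — an exponential reaching law whose discontinuous sign function is replaced by a hyperbolic tangent, with a state-dependent switching gain — can be sketched as follows. The gains, the boundary-layer width `delta`, and the simple adaptation rule are illustrative assumptions, not the paper's tuned design:

```python
import math

def smooth_reaching_law(s, eps0=5.0, q=10.0, delta=0.05):
    """Illustrative reaching-law evaluation: returns the commanded s_dot
    for sliding variable s. tanh(s/delta) smooths the usual sign(s) term
    (reducing chattering), and the switching gain grows with |s| as a
    toy stand-in for the paper's adaptive gain."""
    eps = eps0 * (1 + abs(s))            # state-dependent (adaptive) gain
    return -eps * math.tanh(s / delta) - q * s
```

Away from s = 0 the tanh term saturates and behaves like the classical sign-based law, while near zero it is smooth, which is the source of the chattering reduction.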

16 pages, 471 KB  
Article
Urban Mobility Experiences and Perceived Stress Along a High-Intensity Corridor in a Mexican Border City
by Francisco Isaías Rivera-Meza, Jaime Wenceslao Parra-Moroyoqui, José Leonardo Jiménez-Ortiz, Omar Arodi Flores-Laguna, Guillermo Cano-Verdugo and Gener José Avilés-Rodríguez
Future Transp. 2026, 6(2), 91; https://doi.org/10.3390/futuretransp6020091 (registering DOI) - 21 Apr 2026
Abstract
Urban mobility is increasingly conceptualized as a multidimensional, user-centered domain of transport system evaluation with potential implications for population health. This study examined the association between user-reported urban mobility experiences and perceived stress among adults using a high-intensity corridor in Nogales, Sonora, Mexico. A quantitative cross-sectional analytical study was conducted with 423 participants using the Urban Mobility Experiences Scale (UMES) and the Perceived Stress Scale (PSS-14). Spearman’s correlation analyses showed inverse associations between perceived stress and several mobility dimensions, although only Sustainability and Urban Environment remained statistically significant after Bonferroni correction (ρ = −0.266; p < 0.001). In multivariate analysis, Sustainability and Urban Environment, Accessibility and Connectivity, and Travel Time and Efficiency were retained as significant predictors, jointly explaining 14.1% of the variance in perceived stress (R² = 0.141; f² = 0.152). These findings suggest that multidimensional urban mobility experiences, particularly environmental and accessibility conditions, are associated with perceived stress beyond traditional operational indicators in high-intensity urban corridors. Full article
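The screening step described above — Spearman correlations between stress and each mobility dimension, judged against a Bonferroni-adjusted threshold — can be re-implemented in a minimal, self-contained form. This sketch uses a permutation p-value rather than the study's analytic one, and the variable names and data are illustrative:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman correlation as the Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def bonferroni_screen(stress, dims, alpha=0.05, n_perm=2000, seed=0):
    """For each mobility dimension, compute rho, a permutation p-value,
    and whether it survives the Bonferroni threshold alpha / m."""
    rng = np.random.default_rng(seed)
    thr = alpha / len(dims)
    out = {}
    for name, v in dims.items():
        rho = spearman_rho(stress, v)
        exceed = sum(abs(spearman_rho(rng.permutation(stress), v)) >= abs(rho)
                     for _ in range(n_perm))
        p = (exceed + 1) / (n_perm + 1)           # add-one permutation p-value
        out[name] = (rho, p, p < thr)
    return out
```

A perfectly inverse dimension yields ρ = −1 and survives correction; weakly related dimensions generally do not, mirroring the pattern the study reports.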

26 pages, 2989 KB  
Article
Deep Lift Learning-Based Validation Model for Enhancing Resilience and Adoptability of DevOps Phases
by Fahad S. Altuwaijri
Electronics 2026, 15(8), 1748; https://doi.org/10.3390/electronics15081748 (registering DOI) - 20 Apr 2026
Abstract
Software Development (Dev) and Information Technology Operations (Ops) rely on different process parameters, such as robustness, resilience, and in-line optimizations. Resilience is a key requirement when adopting various enterprise features to ensure system stability and swift recovery from failures. The need for resilience, and how it is optimized, must therefore be designed around each enterprise’s adoption conditions. In this article, a low-complexity, adoptable, and validated resilience model is proposed to improve the efficiency of the DevOps functional phases. The proposed model uses intrinsic deep lift features of the applications to assess their resilience. In this case, optimization is performed using trust, robustness, and resilience parameters, as per the application’s demand. A fine-to-coarse tuning strategy is applied to both individual and permuted parameters to improve the adaptability and scalability of the DevOps implementation. Considering parameter permutations, selective tuning is also feasible through deep lift learning, improving resilience with reduced complexity. The model is efficient in leveraging adoptability, achieving improvements of 12.43% for the plan phase and 13.17% for the feedback phase at maximum execution time. Full article
(This article belongs to the Section Computer Science & Engineering)
31 pages, 1178 KB  
Article
A Discrete Informational Framework for Classical Gravity: Ledger Foundations and Galaxy Rotation Curve Constraints
by Megan Simons, Elshad Allahyarov and Jonathan Washburn
Entropy 2026, 28(4), 477; https://doi.org/10.3390/e28040477 (registering DOI) - 20 Apr 2026
Abstract
The weak-field, quasi-static regime of gravity is commonly described by the Newton–Poisson equation as an effective response law. We construct this response within a cost-first discrete variational framework. The Recognition Composition Law (RCL) uniquely selects a reciprocal closure cost within the restricted quadratic symmetric composition class; together with the discrete ledger axioms AX1–AX5 (including conservation) and standard DEC refinement, the Newton–Poisson baseline is then recovered in the instantaneous-closure limit. Conditional on Assumption AS1 (scale-free latency) and Assumption AS2 (causal frequency–wavenumber ansatz), allowing finite equilibration introduces fractional memory into the response, yielding a scale-free modification of the source–potential relation characterized by a power-law kernel w_ker(k) = 1 + C (k₀/k)^α in Fourier space. The kernel exponent α = (1/2)(1 − φ⁻¹) ≈ 0.191, where φ = (1 + √5)/2, is derived from self-similarity of the discrete ledger closure; the amplitude C = φ⁻² ≈ 0.382 is identified as a hypothesis from a three-channel factorization argument. We evaluate this quasi-static kernel-motivated response against SPARC galaxy rotation curves under a strict global-only protocol (fixed M/L = 1, no per-galaxy tuning, conservative σ_tot), using a controlled multiplicative surrogate for the full nonlocal disk operator implied by the kernel. In this deliberately over-constrained setting, the surrogate interface achieves median(χ²/N) = 3.06 over 147 galaxies (2933 points), outperforming a strict global-only NFW benchmark and remaining less efficient than MOND under identical constraints. The analysis is restricted to the non-relativistic, quasi-static sector and should be read as a falsifier-oriented galactic-regime consistency check of the scaling window, not as a relativistic completion or a claim of Solar System viability without additional UV regularization/screening. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
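For readers parsing the kernel notation in the abstract above, the two quoted constants follow directly from the golden ratio; this is a numerical consistency check of the abstract's values, not part of the paper's derivation:

```python
import math

# Golden ratio and the kernel constants quoted in the abstract.
phi = (1 + math.sqrt(5)) / 2
alpha = 0.5 * (1 - 1 / phi)       # kernel exponent, ~0.191
C = phi ** -2                     # kernel amplitude, ~0.382

def w_ker(k, k0=1.0):
    """Scale-free power-law kernel w_ker(k) = 1 + C (k0/k)^alpha."""
    return 1 + C * (k0 / k) ** alpha
```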
25 pages, 1521 KB  
Article
Comparative Evaluation of Deep-Learning and SARIMA Models for Short-Term Residential PV Power Forecasting
by Kalsoom Bano, Vishnu Suresh, Francesco Montana and Przemyslaw Janik
Energies 2026, 19(8), 1991; https://doi.org/10.3390/en19081991 (registering DOI) - 20 Apr 2026
Abstract
Accurate photovoltaic (PV) power forecasting is essential for the efficient operation of residential energy systems and microgrids, as reliable short-term predictions enable improved energy scheduling, demand management, and operational planning in distributed energy environments. In this study, one-hour-ahead forecasting of residential PV power generation is investigated using real-world data collected from multiple households within an Irish energy community. Several deep-learning architectures, including long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural networks (CNN), CNN–LSTM hybrid networks, and attention-based LSTM models, are evaluated and compared with a seasonal autoregressive integrated moving average (SARIMA) statistical model. A sliding-window approach is employed to transform the PV time series into a supervised learning problem. To ensure statistical robustness, deep-learning models are evaluated using a multi-run framework, and results are reported as mean ± standard deviation based on MAE, RMSE, MAPE, and R² metrics across multiple households. The results indicate that deep-learning models achieve consistently strong forecasting performance, with GRU frequently providing the most reliable predictions across several households. For instance, in House 5, GRU achieved an RMSE of 142.02 ± 1.87 W and an R² of 0.694 ± 0.008, while in Houses 11 and 13 it attained R² values of 0.837 ± 0.002 and 0.835 ± 0.08, respectively. However, performance varied across households, reflecting the influence of data variability and generation patterns on model effectiveness. In comparison, the SARIMA model demonstrated competitive performance and, in certain cases, outperformed deep-learning models. For example, in House 4, it achieved the lowest RMSE of 90.68 W and the highest R² of 0.709.
Overall, these findings highlight that while deep-learning models offer greater adaptability and stability, statistical models remain effective for more regular PV generation patterns. Consequently, the study emphasizes the importance of evaluating forecasting models under realistic household-level conditions and demonstrates that both deep-learning and statistical approaches can provide effective short-term PV forecasting. Full article
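The sliding-window transformation mentioned in the abstract — turning a univariate PV series into a one-step-ahead supervised learning problem — has a standard minimal form; the window length and names here are illustrative choices, not the paper's configuration:

```python
import numpy as np

def sliding_window(series, n_lags):
    """Build (X, y) for one-step-ahead forecasting: each row of X holds
    n_lags consecutive past values, and y is the value that follows."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y
```

The resulting X matrix can be fed to any of the models compared in the study (LSTM, GRU, CNN, or a classical regressor).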
24 pages, 4735 KB  
Article
An Improved YOLO11n-Based Algorithm for Road Sign Detection
by Haifeng Fu, Xinlei Xiao, Yonghua Han, Le Dai, Lan Yao and Lu Xu
Sensors 2026, 26(8), 2543; https://doi.org/10.3390/s26082543 - 20 Apr 2026
Abstract
For vehicle driving scenarios in complex backgrounds, road sign detection faces challenges such as multi-scale targets, long distances, and low resolution. To address these challenges, a detection method based on an improved YOLO11n network is proposed. Firstly, to accommodate the multi-scale characteristics of the targets and improve the network’s ability to detect low-resolution objects and details, a Multi-path Gated Aggregation (MGA) Module is proposed, achieving these objectives via multi-dimensional feature extraction. Secondly, the Neck is improved by designing a network structure that incorporates high-resolution information from the Backbone, thereby enhancing the detection capabilities for small and blurry targets. Finally, an enhanced Spatial Pyramid Pooling—Fast (SPPF) module is proposed, wherein a Group Convolution-Layer Normalization-SiLU structure is integrated across various stages of information passing. By fusing adjacent channel information, it effectively suppresses complex background noise across multiple scales and amplifies road marking features, which consequently boosts the model’s discriminability for distant and obscured targets. Experimental results on a multi-type road sign dataset show that the improved model achieves an mAP@0.5 of 96.96%, which is 1.42% higher than the original model. The mAP@0.5–0.95 and Recall rates are 83.94% and 92.94%, respectively, while the inference speed remains at 134 FPS. Research demonstrates that via targeted modular designs, the proposed approach strikes a superior balance between detection accuracy and real-time efficiency. Consequently, it provides robust technical support for the reliable operation of intelligent vehicle perception systems under complex conditions. Full article
(This article belongs to the Section Vehicular Sensing)

16 pages, 3556 KB  
Article
Degradation Pathways and Energy Efficiency on Non-Thermal Plasma for Sulfonamide Antibiotics Removal: A Comparative Study
by Hee-Jun Kim, Donggwan Lee, Sanghoon Han, Jae-Cheol Lee and Hyun-Woo Kim
Processes 2026, 14(8), 1312; https://doi.org/10.3390/pr14081312 - 20 Apr 2026
Abstract
The non-thermal plasma (NTP) process is a promising advanced oxidation process (AOP) for removing non-biodegradable organics from wastewater, owing to the efficient formation of reactive chemicals. Despite its effective oxidizing capability, the decomposition mechanism of organic pollutants is not well understood. This study evaluates NTP for two representative sulfonamides (SMZ and STZ) and reports on (i) time-resolved removal to the method detection limit, (ii) transformation mapping using LC-ESI/MS/MS, which confirmed previously proposed hydroxylation and bond-cleavage pathways and further identified additional hydroxylated intermediates formed on the thiazole and benzene rings under NTP conditions, and (iii) energy evaluation through energy per order (EEO) within a single, reproducible operating window. The EEO values for SMZ and STZ degradation via NTP were calculated at 22.4 and 7.5 kWh/m3/order, respectively. These values are up to 37- and 118-fold lower than those reported for comparable AOPs, quantitatively confirming that the proposed NTP process achieves superior energy efficiency for sulfonamide degradation. Degradation is primarily attributed to reactive oxygen species (ROS) generated by plasma, which initiate the breakdown of the antibiotic structure. Overall, this study demonstrates that NTP is a highly effective AOP for driving the rapid primary degradation and intermediate structural transformation of recalcitrant sulfonamide antibiotics. Full article
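The energy-per-order (EEO) metric used above is the standard batch figure of merit for advanced oxidation processes: the electrical energy needed to reduce the pollutant concentration by one order of magnitude per unit treated volume. A minimal sketch, with purely illustrative inputs:

```python
import math

def energy_per_order(power_kw, time_h, volume_m3, c0, ct):
    """EEO in kWh/m^3/order for a batch process: electrical energy divided
    by treated volume and by the number of decades of concentration
    reduction, log10(c0/ct)."""
    return (power_kw * time_h) / (volume_m3 * math.log10(c0 / ct))
```

Lower EEO means less energy per decade of removal, which is the basis for the abstract's comparison of NTP against other AOPs.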

22 pages, 2130 KB  
Article
MFAFENet: A Multi-Sensor Collaborative and Multi-Scale Feature Information Adaptive Fusion Network for Spindle Rotational Error Classification in CNC Machine Tools
by Fei Wang, Lin Song, Pengfei Wang, Ping Deng and Tianwei Lan
Entropy 2026, 28(4), 475; https://doi.org/10.3390/e28040475 - 20 Apr 2026
Abstract
Accurate classification of spindle rotational errors is critical for ensuring machining precision and operational reliability of CNC machine tools. However, existing methods face challenges in extracting discriminative feature information from vibration signals due to small inter-class differences and complex electromechanical interference. This paper proposes a novel deep learning model, MFAFENet, based on multi-sensor collaboration and multi-scale feature information adaptive fusion. Vibration signals from three mounting positions are transformed into time-frequency information representations via Short-time Fourier Transform. The proposed network adaptively fuses multi-scale feature information from parallel branches with different kernel sizes through a branch attention mechanism. An efficient channel attention module is then incorporated to recalibrate channel-wise feature responses. The cross-entropy loss function is employed to optimize the network parameters during training. Experiments on a spindle reliability test bench demonstrate that MFAFENet achieves 93.37% average test accuracy, outperforming other comparative methods. Ablation and comparative studies confirm the effectiveness of each module and the clear advantage of adaptive fusion over fixed-weight multi-scale methods. Multi-sensor fusion further improves accuracy by 7.23% over the best single-sensor setup. The proposed method establishes an effective end-to-end mapping between vibration signals and rotational errors, providing a promising solution for high-precision spindle condition monitoring. Full article
(This article belongs to the Section Multidisciplinary Applications)
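The time-frequency front end described above (vibration signal → Short-time Fourier Transform image) can be sketched with a plain windowed FFT. The frame length and hop size are illustrative, and the paper's exact preprocessing may differ:

```python
import numpy as np

def stft_mag(signal, n_fft=64, hop=16):
    """Magnitude STFT: Hann-windowed frames of a vibration signal turned
    into a (freq_bins, time_frames) array, usable as a CNN input image."""
    win = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * win
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

A pure tone lands in a single frequency bin of every frame, which is a quick sanity check on the frame/hop bookkeeping.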
31 pages, 1487 KB  
Article
Deep Reinforcement Learning-Based Dual-Loop Adaptive Control Method and Simulation for Loitering Munition Fuze
by Lingyun Zhang, Haojie Li, Chuanhao Zhang, Yuan Zhao, Shixiang Qiao and Hang Yu
Technologies 2026, 14(4), 239; https://doi.org/10.3390/technologies14040239 - 20 Apr 2026
Abstract
To address the poor adaptability and rigid initiation modes of the loitering munition fuze in complex environments and the inadequacy of single fuzzy control against strong interference, this paper proposes a dual-loop adaptive reconfiguration control method. The architecture integrates the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with fuzzy logic. The inner loop uses TD3 to dynamically optimize fuzzy scaling factors based on real-time interference and state deviations. Concurrently, the outer loop utilizes a Fuze Readiness Index (FRI) and a finite state machine to manage real-time multi-modal mission switching (e.g., proximity, delay, and airburst) and reverse safety-state conversions. Co-simulations under the considered non-stationary composite interference setting show that the proposed method reduces the burst height RMSE by 82.4% and 61.6% compared with the fixed-threshold and standard fuzzy baselines, respectively. The false alarm rate (FAR) is reduced to 0.15%, and the reconfiguration response time under sudden interference is shortened to 12 ms. Even under extreme conditions, such as 400 ms sensor signal loss, the relative error remains within 5%. These simulation results demonstrate the potential of the proposed architecture to improve precision, responsiveness, and robustness under dynamic interference conditions and show good robustness to intermittent observation loss within the simulated operating envelope. Full article
19 pages, 5014 KB  
Article
Investigation on the Design Space of the Primary Drying Stage of Spray-Freeze-Drying Technology
by Shen Weihua, Liu Bo, Luo Chun, Sun Dongze and Yin Wei
Energies 2026, 19(8), 1989; https://doi.org/10.3390/en19081989 - 20 Apr 2026
Abstract
Spray-freeze-drying technology has gained considerable interest worldwide. However, the high energy consumption and lengthy process duration have hindered its further development. The primary drying stage accounts for the largest proportion of both the total energy consumption and process duration. To improve the energy utilization efficiency of the drying stage, a mathematical model describing the drying stage was established. The obtained drying time and maximum product temperature were selected to represent the drying efficiency and the risk of failure, respectively. The design space of the drying stage was then constructed. The results show that the mathematical model gives an accurate description of the drying stage, and increasing the shelf temperature and decreasing the chamber pressure would be beneficial for improving drying efficiency but unfavorable for reducing the risk of failure. In addition, the drying efficiency shows higher sensitivity to the change in the operating conditions compared with the risk of failure. Moreover, the packing porosity is found to affect the design space. A lower packing porosity is found to expand the design space, allowing for a wider range of operating conditions. This study provides insights into the drying process and supports the optimization of operating parameters. Full article
(This article belongs to the Section J1: Heat and Mass Transfer)

30 pages, 1393 KB  
Article
Data-Driven Multi-Mode Time–Cost Trade-Off Optimization for Construction Project Scheduling Using LightGBM
by Shike Jia, Cuinan Luo, Ruchen Wang, Qiangwen Zong, Yunfeng Wang, Fei Chen, Weiquan Guan and Yong Liao
Processes 2026, 14(8), 1311; https://doi.org/10.3390/pr14081311 - 20 Apr 2026
Abstract
Large infrastructure projects frequently experience schedule slippage and cost escalation; however, time–cost planning still relies on expert-assigned activity parameters that fail to reflect the variability induced by construction modes, resource supply, and on-site conditions. This study focuses on activity-level multi-mode time–cost trade-off planning and its dynamic correction during project execution. The proposed methodology is intended for project-level short-term operational scheduling and rolling re-scheduling within a finite project execution horizon, rather than long-term strategic or portfolio-level scheduling. A predict–optimize–update framework is proposed, where light gradient boosting machine (LightGBM) is employed to predict the duration and direct cost of activity–mode pairs using unified features extracted from BIM/IFC records, schedule-resource ledgers, and cost-settlement data, covering engineering quantities, mode and resource decisions, and contextual factors. These predicted parameters are then fed into a time-indexed bi-objective mixed-integer linear program (MILP), which minimizes both project makespan and total cost (including indirect cost) to generate an interpretable Pareto frontier via a weighted-sum approach. Meanwhile, real-time monitoring updates refresh the predictors and re-solve the remaining project network to ensure dynamic adaptability. Validated on a desensitized proprietary enterprise multi-source dataset comprising 25 completed infrastructure projects and 5258 activity–mode samples, the proposed method achieves a mean absolute error (MAE) of 2.7 days and a coefficient of determination (R²) of 0.89 for duration prediction, as well as an MAE of 7.4 × 10⁴ CNY and an R² of 0.91 for direct-cost prediction. The generated Pareto set exhibits a diminishing return trend: as the project duration is relaxed from 101 to 146 days, the total cost decreases from 45.10 to 40.27 million CNY.
A weather-triggered update case demonstrates that the completion forecast is revised from 133 to 128 days, with the total cost reduced from 53.05 to 52.75 million CNY. This framework enables explainable schedule–cost co-control, thereby effectively aiding decision-making for the planning and control of large infrastructure projects. Full article
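The weighted-sum scalarization mentioned in the abstract can be illustrated on a toy set of feasible (makespan, cost) outcomes. The real framework obtains candidates by solving a time-indexed MILP, which this sketch does not attempt; the candidate values below are invented:

```python
def weighted_sum_front(candidates, n_weights=11):
    """Trace a Pareto frontier for (makespan, cost) pairs by sweeping the
    weight of a normalized weighted-sum objective and keeping each
    minimizer. `candidates` is a list of feasible outcome tuples."""
    front = set()
    ms = [c[0] for c in candidates]
    co = [c[1] for c in candidates]
    for i in range(n_weights):
        w = i / (n_weights - 1)
        def score(c):
            # normalize each objective to [0, 1] so weights are comparable
            f1 = (c[0] - min(ms)) / ((max(ms) - min(ms)) or 1)
            f2 = (c[1] - min(co)) / ((max(co) - min(co)) or 1)
            return w * f1 + (1 - w) * f2
        front.add(min(candidates, key=score))
    return sorted(front)
```

Dominated candidates are never selected at any weight, so only frontier points survive — the same diminishing-return structure the abstract reports for duration versus total cost.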
32 pages, 2688 KB  
Article
Research on an Anti-Speculation Revenue Allocation Mechanism in Multi-Virtual Power Plants
by Mengxue Zhang, Qiang Zhou, Youchao Zhang, Jing Ji and Yiming Qiu
Processes 2026, 14(8), 1309; https://doi.org/10.3390/pr14081309 - 20 Apr 2026
Abstract
In the joint operation of multiple virtual power plants, after day-ahead optimal dispatch is completed, some participants may engage in speculative behaviors such as misreporting profit contribution data to obtain greater benefits during profit distribution, thereby undermining fairness. To address this issue, this paper constructs a profit distribution model designed to prevent speculation. An improved Nash bargaining equilibrium algorithm based on a third-party trading intermediary is proposed to curb speculative actions. Furthermore, a dual-layer monitoring mechanism centered on profit deviation is established, which can effectively identify both single-day speculative behaviors and long-term systematic speculative trends, thereby triggering verification procedures. This forms a closed-loop management mechanism for speculation prevention—“detection, monitoring, analysis, verification”—ensuring fair profit distribution among participants within virtual power plants. Case study results demonstrate that the proposed method achieves an average deviation of only 2.32% compared to the profit distribution outcome under non-speculative conditions. In contrast, commonly used methods such as the Shapley value method, nucleolus method, and Nash–Harsanyi bargaining solution exhibit an average deviation as high as 18.44%. The research presented in this paper enables the detection of speculative behaviors among participants and facilitates verification, significantly enhancing the fairness and rationality of profit distribution. Full article
(This article belongs to the Section Energy Systems)
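The Shapley value used above as a comparison baseline averages each participant's marginal contribution over all join orders. A brute-force sketch for a toy two-plant coalition game (the coalition values are invented for illustration, and real multi-VPP games would use the dispatch-model profits):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    value(S + p) - value(S) over all orders in which players can join.
    `value` maps a frozenset coalition to its profit."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    n_orders = 1
    for i in range(2, len(players) + 1):
        n_orders *= i                      # len(players)! join orders
    return {p: v / n_orders for p, v in phi.items()}
```

Because the allocation depends only on reported coalition values, misreported contributions shift it directly — which is the speculation channel the proposed mechanism is designed to detect.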

31 pages, 4910 KB  
Article
Comparative Evaluation of Machine Learning and Deep Learning Models for Tropical Cyclone Track and Intensity Forecasting in the North Atlantic Basin
by Henry A. Ogu, Liping Liu and Yuh-Lang Lin
Atmosphere 2026, 17(4), 418; https://doi.org/10.3390/atmos17040418 - 20 Apr 2026
Abstract
Accurate forecasts of tropical cyclone (TC) track and intensity with a sufficient lead time are critical for disaster preparedness and risk mitigation. Traditional numerical weather prediction models, while fundamental to operational forecasting, often exhibit systematic errors due to limitations in observations, physical parameterizations, and model resolution. In recent years, machine learning (ML) and deep learning (DL) approaches have emerged as promising data-driven alternatives for improving TC forecasts. This study presents a comparative evaluation of six ML and DL models—Random Forest (RF), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Categorical Boosting (CatBoost), Artificial Neural Network (ANN), and Convolutional Neural Network (CNN)—for forecasting TC track and intensity in the North Atlantic basin. The models are trained using the National Hurricane Center’s (NHC) HURDAT2 best-track dataset for storms from 1990 to 2019 and evaluated on an independent test set from the 2020 season. Model performance is compared across all models and benchmarked against the 2020 mean Decay-SHIFOR5 intensity error, CLIPER5 track errors, and the NHC official forecast (OFCL) errors. Forecast skill is assessed using mean absolute error (MAE) with 95% bootstrap confidence intervals and the coefficient of determination (R²) across lead times of 6, 12, 18, 24, 48, and 72 h. The results show that: (1) several ML and DL models achieve intensity forecast performance that is broadly comparable in magnitude to the 2020 mean OFCL benchmarks, with an average error reduction of 5–11% at the 24 h lead time; (2) among the ML models, XGBoost and CatBoost slightly outperform LightGBM and RF in accuracy, while LightGBM demonstrates the highest computational efficiency; and (3) among the DL models, CNNs outperform ANNs in predictive accuracy and intensity forecasting efficiency, while ANNs exhibit lower computational cost for track forecasts.
Bootstrap confidence intervals indicate relatively low variability in model errors, supporting the statistical stability of the results within the 2020 season. However, these results reflect within-season variability and do not necessarily generalize across different years or climatological conditions. Overall, the findings demonstrate the potential of ML/DL-based approaches to complement existing operational forecast systems and enhance TC track and intensity forecasting in the North Atlantic basin.
(This article belongs to the Special Issue Machine Learning for Atmospheric and Remote Sensing Research)
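The abstract's verification protocol — MAE with a 95% bootstrap confidence interval per lead time — can be sketched as follows. This is a minimal illustration of the standard percentile-bootstrap technique, not the authors' code; the function name, sample errors, and resampling count are assumptions.

```python
import random
from statistics import mean

def mae_bootstrap_ci(errors, n_boot=2000, alpha=0.05, seed=0):
    """Mean absolute error with a percentile-bootstrap confidence interval.

    errors: signed forecast errors (forecast minus observed) at one lead time
    Returns (mae, ci_low, ci_high) at the (1 - alpha) confidence level.
    """
    rng = random.Random(seed)
    abs_err = [abs(e) for e in errors]
    mae = mean(abs_err)
    # Resample the absolute errors with replacement and recompute the MAE.
    boot = []
    for _ in range(n_boot):
        sample = [rng.choice(abs_err) for _ in abs_err]
        boot.append(mean(sample))
    boot.sort()
    lo = boot[int((alpha / 2) * n_boot)]
    hi = boot[int((1 - alpha / 2) * n_boot) - 1]
    return mae, lo, hi

# Illustrative 24 h intensity errors in knots (synthetic, not the paper's data).
mae, lo, hi = mae_bootstrap_ci([3.0, -5.0, 8.0, -2.0, 6.0, -7.0, 4.0, -1.0])
```

A narrow (lo, hi) interval relative to the MAE itself is what the abstract refers to as low variability in model errors within the 2020 season.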
13 pages, 555 KB  
Essay
Governing Generative AI in Healthcare: A Normative Conceptual Framework for Epistemic Authority, Trust, and the Architecture of Responsibility
by Fatma Eren Akgün and Metin Akgün
Healthcare 2026, 14(8), 1098; https://doi.org/10.3390/healthcare14081098 - 20 Apr 2026
Abstract
Background/Objectives: Large language models (LLMs) such as ChatGPT are rapidly being integrated into healthcare for tasks ranging from clinical documentation to diagnostic support. Current ethical discussions focus predominantly on bias, privacy, and accuracy, leaving three critical governance questions unresolved: What kind of knowledge does an LLM output represent in clinical reasoning? When is a clinician’s or patient’s trust in that output justified? Who bears responsibility when an AI-informed decision leads to patient harm? This study proposes the Epistemic Authority–Trust–Responsibility (ETR) Architecture, a normative conceptual framework that addresses these three questions as an integrated governance challenge. Methods: The framework was developed through normative conceptual analysis—a method that constructs governance proposals by synthesising philosophical principles, ethical theories, and empirical evidence. The literature was identified through structured searches of PubMed, PhilPapers, and EUR-Lex (January 2020–March 2026), drawing on the philosophy of medical knowledge, the ethics of trust and testimony, and the moral philosophy of responsibility. Results: The ETR Architecture produces four outputs: (i) a four-tier classification system that distinguishes LLM outputs—from administrative drafts to clinical evidence claims—and matches each tier to appropriate verification requirements; (ii) the concept of the ‘epistemic placebo’, formally defined as a governance measure that creates a documented appearance of compliance while lacking at least one operative element of genuine oversight; (iii) a model specifying four conditions under which trust in healthcare AI is justified; (iv) four testable hypotheses with associated research designs connecting governance design to trust calibration and patient safety. Conclusions: The 2025–2027 regulatory transition period offers a critical window for shaping how healthcare institutions govern AI. 
We argue that deploying LLMs without explicitly classifying their outputs and building appropriate oversight risks allowing governance norms to be set by technology vendors rather than by evidence-informed, patient-centred policy.
(This article belongs to the Special Issue AI-Driven Healthcare: Transforming Patient Care and Outcomes)
15 pages, 3529 KB  
Article
Evaluation of Lubricant Selection and Lubrication Intervals for Pin–Bushing Bearings Operating Under High-Temperature Conditions in Heavy-Duty Construction Machinery
by Ilhan Celik, Abdullah Tahir Şensoy and Sevki Burak Sezer
Lubricants 2026, 14(4), 179; https://doi.org/10.3390/lubricants14040179 - 20 Apr 2026
Abstract
Pin–bushing bearings in heavy-duty construction machinery operating in severe industrial environments are susceptible to accelerated wear, grease degradation, and lubrication failure, yet application-specific guidance for lubricant selection and re-greasing intervals under such conditions remains limited. This study evaluates the combined effects of bushing material (hardened steel, cast bronze, and Cu–Sn alloy), grease type (three commercially used greases with viscosities of 120, 460, and 150 mm²/s at 40 °C), and lubrication interval (8, 12, and 24 h) on grease-condition indicators in a field-operating wheel loader used in slag handling, where surrounding slag temperatures may reach 700–800 °C. A Taguchi L9 orthogonal array was used to define nine experimental configurations, each applied for approximately one week under real operating conditions. Grease samples were characterised using the SKF grease analysis kit based on NLGI consistency grade, base oil release rate, and contamination particle count. All greases showed an increase in NLGI grade from 2 to 3–4 during service, indicating thickening and a possible risk of lubrication channel blockage. Oil release rates decreased by up to 60% in some configurations, indicating reduced base oil mobility during service. When the three grease-condition indicators were evaluated together by Grey Relational Analysis, the combination of steel bushing, type B grease (ISO VG 460, lithium complex with MoS2), and a 12 h lubrication interval showed the most balanced overall response. These findings provide field-based guidance for grease selection and maintenance scheduling in pin–bushing systems operating under demanding service conditions.
(This article belongs to the Special Issue Tribological Characteristics of Bearing System, 4th Edition)
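The Grey Relational Analysis step that ranks the nine Taguchi configurations against multiple grease-condition indicators can be sketched as below. This is a generic textbook GRA implementation under standard assumptions (smaller-is-better or larger-is-better normalisation, distinguishing coefficient ζ = 0.5), not the paper's data or code; all names and example values are illustrative.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Rank alternatives by Grey Relational Analysis.

    responses: one row per configuration, one value per quality indicator
    larger_better: per-indicator flag; False means smaller-is-better
    Returns one grey relational grade per configuration (higher = better).
    """
    n_ind = len(larger_better)
    cols = list(zip(*responses))
    # Step 1: normalise each indicator column to [0, 1] by its orientation.
    norm = []
    for j in range(n_ind):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        if larger_better[j]:
            norm.append([(v - lo) / span for v in cols[j]])
        else:
            norm.append([(hi - v) / span for v in cols[j]])
    # Step 2: deviation of each row from the ideal sequence (all ones).
    dev = [[1.0 - norm[j][i] for j in range(n_ind)]
           for i in range(len(responses))]
    dmin = min(min(row) for row in dev)
    dmax = max(max(row) for row in dev)
    # Step 3: grey relational coefficients, averaged into a grade per row.
    grades = []
    for row in dev:
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(coeffs) / n_ind)
    return grades
```

The configuration with the highest grade (here, steel bushing / type B grease / 12 h interval in the study) is the one whose indicators sit closest to the ideal across all criteria simultaneously.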