Search Results (640)

Search Parameters:
Keywords = scheduled maintenance

21 pages, 875 KiB  
Article
Comprehensive Analysis of Neural Network Inference on Embedded Systems: Response Time, Calibration, and Model Optimisation
by Patrick Huber, Ulrich Göhner, Mario Trapp, Jonathan Zender and Rabea Lichtenberg
Sensors 2025, 25(15), 4769; https://doi.org/10.3390/s25154769 - 2 Aug 2025
Abstract
The response time of Artificial Neural Network (ANN) inference is critical in embedded systems processing sensor data close to the source. This is particularly important in applications such as predictive maintenance, which rely on timely state change predictions. This study enables estimation of model response times based on the underlying platform, highlighting the importance of benchmarking generic ANN applications on edge devices. We analyze the impact of network parameters, activation functions, and single- versus multi-threading on response times. Additionally, potential hardware-related influences, such as clock rate variances, are discussed. The results underline the complexity of task partitioning and scheduling strategies, stressing the need for precise parameter coordination to optimise performance across platforms. This study shows that cutting-edge frameworks do not necessarily perform the required operations automatically for all configurations, which may negatively impact performance. This paper further investigates the influence of network structure on model calibration, quantified using the Expected Calibration Error (ECE), and the limits of potential optimisation opportunities. It also examines the effects of model conversion to Tensorflow Lite (TFLite), highlighting the necessity of considering both performance and calibration when deploying models on embedded systems. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
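The Expected Calibration Error used in this first entry has a standard binned definition: partition predictions by confidence and compare each bin's average confidence to its accuracy. A minimal stdlib sketch (the function name and the equal-width ten-bin choice are illustrative, not taken from the paper):

```python
# Sketch of Expected Calibration Error (ECE) with equal-width confidence bins.
# Assumptions (ours, not the paper's): binary right/wrong labels, 10 bins.
def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probability of the chosen class per sample;
    correct: 1 if that prediction was right, else 0."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into the last bin
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(o for _, o in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)  # weighted calibration gap
    return ece
```

A model that answers with 0.9 confidence and is always right is overconfident-free but under-calibrated by 0.1 in this metric; a model whose 0.8-confidence answers are right 80% of the time contributes zero.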

18 pages, 8520 KiB  
Article
Cross-Layer Controller Tasking Scheme Using Deep Graph Learning for Edge-Controlled Industrial Internet of Things (IIoT)
by Abdullah Mohammed Alharthi, Fahad S. Altuwaijri, Mohammed Alsaadi, Mourad Elloumi and Ali A. M. Al-Kubati
Future Internet 2025, 17(8), 344; https://doi.org/10.3390/fi17080344 - 30 Jul 2025
Abstract
Edge computing (EC) plays a critical role in advancing the next-generation Industrial Internet of Things (IIoT) by enhancing production, maintenance, and operational outcomes across heterogeneous network boundaries. This study builds upon EC intelligence and integrates graph-based learning to propose a Cross-Layer Controller Tasking Scheme (CLCTS). The scheme operates through two primary phases: task grouping assignment and cross-layer control. In the first phase, controller nodes executing similar tasks are grouped based on task timing to achieve monotonic and synchronized completions. The second phase governs controller re-tasking both within and across these groups. Graph structures connect the groups to facilitate concurrent tasking and completion. A learning model is trained on inverse outcomes from the first phase to mitigate task acceptance errors (TAEs), while the second phase focuses on task migration learning to reduce task prolongation. Edge nodes interlink the groups and synchronize tasking, migration, and re-tasking operations across IIoT layers within unified completion periods. Departing from simulation-based approaches, this study presents a fully implemented framework that combines learning-driven scheduling with coordinated cross-layer control. The proposed CLCTS achieves an 8.67% reduction in overhead, a 7.36% decrease in task processing time, and a 17.41% reduction in TAEs while enhancing the completion ratio by 13.19% under maximum edge node deployment. Full article

25 pages, 4407 KiB  
Article
A Reproducible Pipeline for Leveraging Operational Data Through Machine Learning in Digitally Emerging Urban Bus Fleets
by Bernardo Tormos, Vicente Bermudez, Ramón Sánchez-Márquez and Jorge Alvis
Appl. Sci. 2025, 15(15), 8395; https://doi.org/10.3390/app15158395 - 29 Jul 2025
Abstract
The adoption of predictive maintenance in public transportation has gained increasing attention in the context of Industry 4.0. However, many urban bus fleets remain in early digital transformation stages, with limited historical data and fragmented infrastructures that hinder the implementation of data-driven strategies. This study proposes a reproducible Machine Learning pipeline tailored to such data-scarce conditions, integrating domain-informed feature engineering, lightweight and interpretable models (Linear Regression, Ridge Regression, Decision Trees, KNN), SMOGN for imbalance handling, and Leave-One-Out Cross-Validation for robust evaluation. A scheduled batch retraining strategy is incorporated to adapt the model as new data becomes available. The pipeline is validated using real-world data from hybrid diesel buses, focusing on the prediction of time spent in critical soot accumulation zones of the Diesel Particulate Filter (DPF). In Zone 4, the model continued to outperform the baseline during the production test, indicating its validity for an additional operational period. In contrast, model performance in Zone 3 deteriorated over time, triggering retraining. These results confirm the pipeline’s ability to detect performance drift and support predictive maintenance decisions under evolving operational constraints. The proposed framework offers a scalable solution for digitally emerging fleets. Full article
(This article belongs to the Special Issue Big-Data-Driven Advances in Smart Maintenance and Industry 4.0)

19 pages, 590 KiB  
Review
Comprehensive Review of Dielectric, Impedance, and Soft Computing Techniques for Lubricant Condition Monitoring and Predictive Maintenance in Diesel Engines
by Mohammad-Reza Pourramezan, Abbas Rohani and Mohammad Hossein Abbaspour-Fard
Lubricants 2025, 13(8), 328; https://doi.org/10.3390/lubricants13080328 - 29 Jul 2025
Abstract
Lubricant condition analysis is a valuable diagnostic tool for assessing engine performance and ensuring the reliable operation of diesel engines. Traditional diagnostic techniques—such as Fourier transform infrared spectroscopy (FTIR)—are constrained by slow response times, high costs, and the need for specialized personnel. In contrast, dielectric spectroscopy, impedance analysis, and soft computing offer real-time, non-destructive, and cost-effective alternatives. This review examines recent advances in integrating these techniques to predict lubricant properties, evaluate wear conditions, and optimize maintenance scheduling. In particular, dielectric and impedance spectroscopies offer insights into electrical properties linked to oil degradation, such as changes in viscosity and the presence of wear particles. When combined with soft computing algorithms, these methods enhance data analysis, reduce reliance on expert interpretation, and improve predictive accuracy. The review also addresses challenges, including complex data interpretation, limited sample sizes, and the necessity for robust models to manage variability in real-world operations. Future research directions emphasize miniaturization, expanding the range of detectable contaminants, and incorporating multi-modal artificial intelligence to further bolster system robustness. Collectively, these innovations signal a shift from reactive to predictive maintenance strategies, with the potential to reduce costs, minimize downtime, and enhance overall engine reliability. This comprehensive review provides valuable insights for researchers, engineers, and maintenance professionals dedicated to advancing diesel engine lubricant monitoring. Full article

28 pages, 2918 KiB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics is at an emerging state. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values. Furthermore, it offers a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel’s operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains. Full article
(This article belongs to the Section Ocean Engineering)
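The "forecast vessel performance by extrapolating KPI values" step above can be illustrated with a far simpler stand-in than the paper's ANN pipeline: an ordinary least-squares line fitted to a KPI history. The function name and the linear-trend assumption are ours, purely for illustration:

```python
# Illustrative sketch: extrapolate a KPI series with a least-squares straight line.
# The paper uses ANNs and data fusion; this only shows the extrapolation idea.
def forecast_kpi(history, steps_ahead):
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    # ordinary least-squares slope and intercept over the observed window
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    # project the fitted line forward from the last observed index
    return [intercept + slope * (n - 1 + k) for k in range(1, steps_ahead + 1)]
```

A degrading fuel-consumption KPI of [1, 2, 3, 4] extrapolates to [5, 6] for the next two periods under this linear fit; the paper's dynamic KPIs would instead re-estimate as the vessel's operational state evolves.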

21 pages, 5274 KiB  
Article
Sediment Flushing Operation Mode During Sediment Peak Processes Aiming Towards the Sustainability of Three Gorges Reservoir
by Bingjiang Dong, Lingling Zhu, Shi Ren, Jing Yuan and Chaonan Lv
Sustainability 2025, 17(15), 6836; https://doi.org/10.3390/su17156836 - 28 Jul 2025
Abstract
Asynchrony between the movement of water and sediment in a reservoir will affect long-term maintenance of the reservoir’s capacity to a certain extent. Based on water and sediment data on the Three Gorges Reservoir (TGR) measured over the years and a river network model, optimization of the dispatching mode of the reservoir’s sand peak process was studied, and the corresponding water and sediment dispatching indicators were provided. The results show that (1) sand peak discharge dispatching of the TGR can be divided roughly into three stages, namely the flood detention period, the sediment transport period, and the sediment discharge period. (2) According to the process of the flood peak and the sand peak, a division method for each period is proposed. (3) A corresponding scheduling index is proposed according to the characteristics of the sand peak process and the needs of flood control scheduling. This research can provide operational indicators for the operation and management of the sediment load in the TGR and also provide technical support for sustainable reservoirs similar to TGR. Full article

35 pages, 638 KiB  
Review
The Influence of Circadian Rhythms on Transcranial Direct Current Stimulation (tDCS) Effects: Theoretical and Practical Considerations
by James Chmiel and Agnieszka Malinowska
Cells 2025, 14(15), 1152; https://doi.org/10.3390/cells14151152 - 25 Jul 2025
Abstract
Transcranial direct current stimulation (tDCS) can modulate cortical excitability in a polarity-specific manner, yet identical protocols often produce inconsistent outcomes across sessions or individuals. This narrative review proposes that much of this variability arises from the brain’s intrinsic temporal landscape. Integrating evidence from chronobiology, sleep research, and non-invasive brain stimulation, we argue that tDCS produces reliable, polarity-specific after-effects only within a circadian–homeostatic “window of efficacy”. On the circadian (Process C) axis, intrinsic alertness, membrane depolarisation, and glutamatergic gain rise in the late biological morning and early evening, whereas pre-dawn phases are marked by reduced excitability and heightened inhibition. On the homeostatic (Process S) axis, consolidated sleep renormalises synaptic weights, widening the capacity for further potentiation, whereas prolonged wakefulness saturates plasticity and can even reverse the usual anodal/cathodal polarity rules. Human stimulation studies mirror this two-process fingerprint: sleep deprivation abolishes anodal long-term-potentiation-like effects and converts cathodal inhibition into facilitation, while stimulating at each participant’s chronotype-aligned (phase-aligned) peak time amplifies and prolongs after-effects even under equal sleep pressure. From these observations we derive practical recommendations: (i) schedule excitatory tDCS after restorative sleep and near the individual wake-maintenance zone; (ii) avoid sessions at high sleep pressure or circadian troughs; (iii) log melatonin phase, chronotype, recent sleep and, where feasible, core temperature; and (iv) consider mild pre-heating or time-restricted feeding as physiological primers. 
By viewing Borbély’s two-process model and allied metabolic clocks as adjustable knobs for plasticity engineering, this review provides a conceptual scaffold for personalised, time-sensitive tDCS protocols that could improve reproducibility in research and therapeutic gain in the clinic. Full article

35 pages, 1334 KiB  
Article
Advanced Optimization of Flowshop Scheduling with Maintenance, Learning and Deteriorating Effects Leveraging Surrogate Modeling Approaches
by Nesrine Touafek, Fatima Benbouzid-Si Tayeb, Asma Ladj and Riyadh Baghdadi
Mathematics 2025, 13(15), 2381; https://doi.org/10.3390/math13152381 - 24 Jul 2025
Abstract
Metaheuristics are powerful optimization techniques that are well-suited for addressing complex combinatorial problems across diverse scientific and industrial domains. However, their application to computationally expensive problems remains challenging due to the high cost and significant number of fitness evaluations required during the search process. Surrogate modeling has recently emerged as an effective solution to reduce these computational demands by approximating the true, time-intensive fitness function. While surrogate-assisted metaheuristics have gained attention in recent years, their application to complex scheduling problems such as the Permutation Flowshop Scheduling Problem (PFSP) under learning, deterioration, and maintenance effects remains largely unexplored. To the best of our knowledge, this study is the first to investigate the integration of surrogate modeling within the artificial bee colony (ABC) framework specifically tailored to this problem context. We develop and evaluate two distinct strategies for integrating surrogate modeling into the optimization process, leveraging the ABC algorithm. The first strategy uses a Kriging model to dynamically guide the selection of the most effective search operator at each stage of the employed bee phase. The second strategy introduces three variants, each incorporating a Q-learning-based operator in the selection mechanism and a different evolution control mechanism, where the Kriging model is employed to approximate the fitness of generated offspring. Through extensive computational experiments and performance analysis, using Taillard’s well-known standard benchmarks, we assess solution quality, convergence, and the number of exact fitness evaluations, demonstrating that these approaches achieve competitive results. Full article
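The evolution-control idea in this abstract, approximating offspring fitness so the expensive evaluator runs less often, can be sketched with a nearest-neighbour surrogate in place of the paper's Kriging model. The class, the Hamming distance, and the archive design are our illustrative choices, not the authors':

```python
# Sketch of surrogate-assisted evaluation: cache exact fitness values and
# approximate new candidates by the fitness of the closest archived solution.
# This is NOT Kriging; it only illustrates the control flow the abstract describes.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

class NearestNeighbourSurrogate:
    def __init__(self, exact_fitness):
        self.exact_fitness = exact_fitness   # the expensive evaluator
        self.archive = []                    # (solution, fitness) pairs seen so far
        self.exact_calls = 0

    def evaluate_exact(self, solution):
        value = self.exact_fitness(solution)
        self.exact_calls += 1
        self.archive.append((list(solution), value))
        return value

    def estimate(self, solution):
        # cheap approximation: fitness of the nearest archived solution
        _, value = min(self.archive, key=lambda sf: hamming(sf[0], solution))
        return value
```

In a surrogate-assisted loop, offspring would be ranked with `estimate` and only the most promising ones passed to `evaluate_exact`, which is what keeps the count of exact fitness evaluations down.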

23 pages, 13580 KiB  
Article
Enabling Smart Grid Resilience with Deep Learning-Based Battery Health Prediction in EV Fleets
by Muhammed Cavus and Margaret Bell
Batteries 2025, 11(8), 283; https://doi.org/10.3390/batteries11080283 - 24 Jul 2025
Abstract
The widespread integration of electric vehicles (EVs) into smart grid infrastructures necessitates intelligent and robust battery health diagnostics to ensure system resilience and performance longevity. While numerous studies have addressed the estimation of State of Health (SOH) and the prediction of remaining useful life (RUL) using machine and deep learning, most existing models fail to capture both short-term degradation trends and long-range contextual dependencies jointly. In this study, we introduce V2G-HealthNet, a novel hybrid deep learning framework that uniquely combines Long Short-Term Memory (LSTM) networks with Transformer-based attention mechanisms to model battery degradation under dynamic vehicle-to-grid (V2G) scenarios. Unlike prior approaches that treat SOH estimation in isolation, our method directly links health prediction to operational decisions by enabling SOH-informed adaptive load scheduling and predictive maintenance across EV fleets. Trained on over 3400 proxy charge-discharge cycles derived from 1 million telemetry samples, V2G-HealthNet achieved state-of-the-art performance (SOH RMSE: 0.015, MAE: 0.012, R2: 0.97), outperforming leading baselines including XGBoost and Random Forest. For RUL prediction, the model maintained an MAE of 0.42 cycles over a five-cycle horizon. Importantly, deployment simulations revealed that V2G-HealthNet triggered maintenance alerts at least three cycles ahead of critical degradation thresholds and redistributed high-load tasks away from ageing batteries—capabilities not demonstrated in previous works. These findings establish V2G-HealthNet as a deployable, health-aware control layer for smart city electrification strategies. Full article
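The "maintenance alerts at least three cycles ahead" result can be illustrated with a hedged, much simpler mechanism than the paper's LSTM-Transformer: estimate cycles remaining before SOH crosses a threshold from a linear degradation trend, and alert when the estimate falls inside the lead time. All function names, the 0.8 threshold, and the linearity assumption are ours:

```python
# Illustrative sketch only: linear-trend RUL estimate driving a maintenance alert.
def cycles_until_threshold(soh_history, threshold):
    # average per-cycle SOH drop over the observed window
    drops = [a - b for a, b in zip(soh_history, soh_history[1:])]
    rate = sum(drops) / len(drops)
    if rate <= 0:
        return float("inf")       # no measurable degradation yet
    return (soh_history[-1] - threshold) / rate

def maintenance_alert(soh_history, threshold=0.8, lead_cycles=3):
    # raise the alert once the projected crossing is within the lead time
    return cycles_until_threshold(soh_history, threshold) <= lead_cycles
```

A fleet scheduler could check `maintenance_alert` per battery each cycle and shift high-load tasks away from packs that trip it, mirroring the load-redistribution behaviour the abstract reports.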

23 pages, 5359 KiB  
Article
Relationship Analysis Between Helicopter Gearbox Bearing Condition Indicators and Oil Temperature Through Dynamic ARDL and Wavelet Coherence Techniques
by Lotfi Saidi, Eric Bechhofer and Mohamed Benbouzid
Machines 2025, 13(8), 645; https://doi.org/10.3390/machines13080645 - 24 Jul 2025
Abstract
This study investigates the dynamic relationship between bearing gearbox condition indicators (BGCIs) and the lubrication oil temperature within the framework of health and usage monitoring system (HUMS) applications. Using the dynamic autoregressive distributed lag (DARDL) simulation model, we quantified both the short- and long-term responses of condition indicators to shocks in oil temperature, offering a robust framework for a counterfactual analysis. To complement the time-domain perspective, we applied a wavelet coherence analysis (WCA) to explore time–frequency co-movements and phase relationships between the condition indicators under varying operational regimes. The DARDL results revealed that the ball energy, cage energy, and inner and outer race indicators significantly increased in response to the oil temperature in the long run. The WCA results further confirmed the positive association between oil temperature and the condition indicators under examination, aligning with the DARDL estimations. The DARDL model revealed that the ball energy and the inner race energy have statistically significant long-term effects on the oil temperature, with p-values < 0.01. The adjusted R2 of 0.785 and the root mean square error (RMSE) of 0.008 confirm the model’s robustness. The wavelet coherence analysis showed strong time–frequency correlations, especially in the 8–16 scale range, while the frequency-domain causality (FDC) tests confirmed a bidirectional influence between the oil temperature and several condition indicators. The FDC analysis showed that the oil temperature significantly affected the BGCIs, with evidence of feedback effects, suggesting a mutual dependency. These findings contribute to the advancement of predictive maintenance frameworks in HUMSs by providing practical insights for enhancing system reliability and optimizing maintenance schedules.
The integration of dynamic econometric approaches demonstrates a robust methodology for monitoring critical mechanical components and encourages further research in broader aerospace and industrial contexts. Full article

22 pages, 3710 KiB  
Review
Problems and Strategies for Maintenance Scheduling of a Giant Cascaded Hydropower System in the Lower Jinsha River
by Le Li, Yushu Wu, Yuanyuan Han, Zixuan Xu, Xingye Wu, Yan Luo and Jianjian Shen
Energies 2025, 18(14), 3831; https://doi.org/10.3390/en18143831 - 18 Jul 2025
Abstract
Maintenance scheduling of hydropower units is essential for ensuring the operational security and stability of large-scale cascaded hydropower systems and for improving the efficiency of water energy utilization. This study takes the Cascaded Hydropower System of the Lower Jinsha River (CHSJS) as a representative case, identifying four key challenges facing maintenance planning: multi-dimensional influencing factor coupling, spatial and temporal conflicts with generation dispatch, coordination with transmission line maintenance, and compound uncertainties of inflow and load. To address these issues, four strategic recommendations are proposed: (1) identifying and quantifying the impacts of multi-factor influences on maintenance planning; (2) developing integrated models for the co-optimization of power generation dispatch and maintenance scheduling; (3) formulating coordinated maintenance strategies for hydropower units and associated transmission infrastructure; and (4) constructing joint models to manage the coupled uncertainties of inflow and load. The strategy proposed in this study was applied to the CHSJS, obtaining the weight of the impact factor. The coordinated unit maintenance arrangements of transmission line maintenance periods increased from 56% to 97%. This study highlights the critical need for synergistic optimization of generation dispatch and maintenance scheduling in large-scale cascaded hydropower systems and provides a methodological foundation for future research and practical applications. Full article
(This article belongs to the Section A: Sustainable Energy)

32 pages, 5175 KiB  
Article
Scheduling and Routing of Device Maintenance for an Outdoor Air Quality Monitoring IoT
by Peng-Yeng Yin
Sustainability 2025, 17(14), 6522; https://doi.org/10.3390/su17146522 - 16 Jul 2025
Abstract
Air quality monitoring IoT is one of the approaches to achieving a sustainable future. However, the large area of IoT and the high number of monitoring microsites pose challenges for device maintenance to guarantee quality of service (QoS) in monitoring. This paper proposes a novel maintenance programming model for a large-area IoT containing 1500 monitoring microsites. In contrast to classic device maintenance, the addressed programming scenario considers the division of appropriate microsites into batches, the determination of the batch maintenance date, vehicle routing for the delivery of maintenance services, and a set of hard constraints such as QoS in air quality monitoring, the maximum number of labor working hours, and an upper limit on the total CO2 emissions. Heuristics are proposed to generate the batches of microsites and the scheduled maintenance date for the batches. A genetic algorithm is designed to find the shortest routes by which to visit the batch microsites by a fleet of vehicles. Simulations are conducted based on government open data. The experimental results show that the maintenance and transportation costs yielded by the proposed model grow linearly with the number of microsites if the fleet size is also linearly related to the microsite number. The mean time between two consecutive cycles is around 17 days, which is generally sufficient for the preparation of the required maintenance materials and personnel. With the proposed method, the decision-maker can circumvent the difficulties in handling the hard constraints, and the allocation of maintenance resources, including budget, materials, and engineering personnel, is easier to manage. Full article
(This article belongs to the Section Sustainable Engineering and Science)
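The genetic-algorithm routing step described above can be sketched in miniature: order crossover plus swap mutation searching for a short depot-to-depot tour over one batch of microsites. This is a generic GA illustration under our own parameter choices (population size, mutation rate, distance matrix), not the authors' implementation:

```python
import random

# Generic GA sketch for the route-length part of the maintenance problem:
# visit every microsite in a batch once, starting and ending at the depot (index 0).
def route_length(route, dist):
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def order_crossover(p1, p2):
    # OX: copy a random slice from p1, fill remaining slots in p2's order
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child[a:b]]
    it = iter(rest)
    return [next(it) if g is None else g for g in child]

def evolve(dist, n_sites, pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    random.seed(seed)
    pop = [rng.sample(range(1, n_sites + 1), n_sites) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = order_crossover(p1, p2)
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n_sites), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, dist))
```

The real model layers hard constraints (QoS, labor hours, a CO2 cap) and a fleet of vehicles on top of this; the sketch only shows why a GA fits the permutation structure of the routing subproblem.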

34 pages, 6467 KiB  
Article
Predictive Sinusoidal Modeling of Sedimentation Patterns in Irrigation Channels via Image Analysis
by Holger Manuel Benavides-Muñoz
Water 2025, 17(14), 2109; https://doi.org/10.3390/w17142109 - 15 Jul 2025
Abstract
Sediment accumulation in irrigation channels poses a significant challenge to water resource management, impacting hydraulic efficiency and agricultural sustainability. This study introduces an innovative multidisciplinary framework that integrates advanced image analysis (FIJI/ImageJ 1.54p), statistical validation (RStudio), and vector field modeling with a novel Sinusoidal Morphodynamic Bedload Transport Equation (SMBTE) to predict sediment deposition patterns with high precision. Conducted along the Malacatos River in La Tebaida Linear Park, Loja, Ecuador, the research captured a natural sediment transport event under controlled flow conditions, transitioning from pressurized pipe flow to free-surface flow. Observed sediment deposition reduced the hydraulic cross-section by approximately 5 cm, notably altering flow dynamics and water distribution. The final SMBTE model (Model 8) demonstrated exceptional predictive accuracy, achieving RMSE: 0.0108, R2: 0.8689, NSE: 0.8689, MAE: 0.0093, and a correlation coefficient exceeding 0.93. Complementary analyses, including heatmaps, histograms, and vector fields, revealed spatial heterogeneity, local gradients, and oscillatory trends in sediment distribution. These tools identified high-concentration sediment zones and quantified variability, providing actionable insights for optimizing canal design, maintenance schedules, and sediment control strategies. By leveraging open-source software and real-world validation, this methodology offers a scalable, replicable framework applicable to diverse water conveyance systems. The study advances understanding of sediment dynamics under subcritical (Fr ≈ 0.07) and turbulent flow conditions (Re ≈ 41,000), contributing to improved irrigation efficiency, system resilience, and sustainable water management. 
This research establishes a robust foundation for future advancements in sediment transport modeling and hydrological engineering, addressing critical challenges in agricultural water systems. Full article
(This article belongs to the Section Water Erosion and Sediment Transport)
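The goodness-of-fit metrics quoted for the SMBTE model (RMSE, MAE, NSE, and the correlation coefficient) have standard definitions; the sketch below shows how they are computed, in Python for illustration (the study itself used RStudio, and the arrays in the usage example are illustrative, not the study's data):

```python
import math

def rmse(obs, pred):
    # Root-mean-square error between observed and predicted values
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    # Mean absolute error
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def nse(obs, pred):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean
    mean_o = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sse / sst

def pearson_r(obs, pred):
    # Pearson correlation coefficient between observed and predicted series
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)
```

Note that for a least-squares regression NSE coincides with R², which is consistent with the identical 0.8689 values reported for Model 8.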
11 pages, 980 KiB  
Article
Impact of Tumor Necrosis Factor Antagonist Therapy on Circulating Angiopoietin-like Protein 8 (ANGPTL8) Levels in Crohn’s Disease—A Prospective Multi-Center Study
by Mohammad Shehab, Sharifa Al-Fajri, Ahmed Alanqar, Mohammad Alborom, Fatema Alrashed, Fatemah Alshammaa, Ahmad Alfadhli, Sriraman Devarajan, Irina Alkhairi, Preethi Cherian, Jehad Abubaker, Mohamed Abu-Farha and Fahd Al-Mulla
J. Clin. Med. 2025, 14(14), 5006; https://doi.org/10.3390/jcm14145006 - 15 Jul 2025
Abstract
Background: Crohn’s disease (CD) is a chronic disease perpetuated through key pro-inflammatory molecules, including tumor necrosis factor-alpha (TNFα). Angiopoietin-like protein 8 (ANGPTL8) may contribute to inflammation cascades. This study aimed to investigate how ANGPTL8 levels are influenced in patients with CD prior to and following anti-TNF therapy. Methods: Patients were divided into three groups: patients with CD in clinical remission receiving infliximab (IFX) for at least 24 weeks (IFX-experienced group), patients scheduled to start IFX (IFX-naïve group), and healthy controls (control group). In the IFX-experienced group, ANGPTL8 levels were measured 24 h before the next maintenance IFX dose. In the IFX-naïve group, levels were measured at week 0 and week 24, and in the control group, they were measured randomly. Results: The total number of participants was 166. The numbers of IFX-experienced patients, IFX-naïve patients, and healthy controls were 82, 13, and 71, respectively. Mean age ranged from 27 to 33 years across the three groups. Eighty-four (51%) participants were female. ANGPTL8 levels were significantly higher in patients with CD (138.26 ± 8.47 pmol) compared to the healthy control group (102.52 ± 5.99 pmol, p = 0.001). Among IFX-naïve patients receiving anti-TNFα treatment, ANGPTL8 levels decreased significantly from 145.06 ± 17.93 pmol pre-treatment (week 0) to 81.78 ± 10.61 pmol post-treatment (week 24), p = 0.007. Conclusions: Our findings suggest that ANGPTL8 levels are elevated in CD and may be involved in the inflammatory process. The marked reduction in ANGPTL8 levels following anti-TNFα treatment indicates its potential as a biomarker for treatment response. Further research should focus on the exact mechanisms through which ANGPTL8 influences CD progression and its utility in clinical practice. Full article
(This article belongs to the Special Issue Current Progress in Inflammatory Bowel Disease (IBD))
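The reported group difference (138.26 ± 8.47 pmol in CD vs. 102.52 ± 5.99 pmol in controls) can be illustrated with a Welch-style t statistic computed from summary statistics alone. This is a hedged sketch: it assumes the ± values are standard errors of the mean, which the abstract does not state explicitly.

```python
import math

def welch_t_from_summary(mean1, sem1, mean2, sem2):
    # Welch's t statistic from group means and standard errors of the mean:
    # t = (m1 - m2) / sqrt(SEM1^2 + SEM2^2)
    # (assumes the reported +/- values are SEMs, not standard deviations)
    return (mean1 - mean2) / math.sqrt(sem1 ** 2 + sem2 ** 2)

# Reported values: CD 138.26 +/- 8.47 pmol vs. controls 102.52 +/- 5.99 pmol
t_cd_vs_control = welch_t_from_summary(138.26, 8.47, 102.52, 5.99)
```

A t statistic of this magnitude is consistent with the significance reported (p = 0.001) under the stated assumption.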
32 pages, 2917 KiB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures. 
Full article
(This article belongs to the Section Computer)
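The meta-controller's budget-allocation step described above can be illustrated with a minimal sketch: per-category priority scores (which in the paper would come from a learned policy-gradient network) are turned into CPU budgets across the four workload categories. The softmax policy, the scores, and the exact workload labels here are illustrative assumptions, not the paper's trained networks.

```python
import math

WORKLOADS = ["OLTP", "OLAP", "vector", "maintenance"]

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def allocate_budgets(scores, total_cpus):
    # Meta-controller step: map per-category priority scores to CPU budgets
    # that sum to the available cores; sub-controllers would then divide
    # each category's budget among its processes.
    shares = softmax(scores)
    return {w: share * total_cpus for w, share in zip(WORKLOADS, shares)}

# Hypothetical scores favouring latency-critical OLTP over background work
budgets = allocate_budgets([2.0, 1.0, 1.0, 0.0], total_cpus=64)
```

In this sketch the budgets always sum to the machine's core count, so the allocation is a pure re-partitioning; SLO enforcement and fairness terms would enter through the reward that trains the score-producing policy.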