Search Results (4,094)

Search Parameters:
Keywords = Markov Model

21 pages, 798 KB  
Article
A Bayesian Inference Algorithm for Equipment Software Price Estimation Based on Nonlinear Contribution Models
by Tian Meng and Guoping Jiang
Algorithms 2026, 19(5), 396; https://doi.org/10.3390/a19050396 - 15 May 2026
Abstract
To address the challenges of difficult value quantification, lack of market benchmarks, and scarcity of historical data for embedded software amid the intelligent transformation of equipment systems, this study develops a scientific price estimation method based on functional capability contribution. A nonlinear pricing model is constructed to accurately characterize the two-stage evolution of software price: diminishing marginal utility during the mature technology accumulation stage and exponential growth during the technical bottleneck breakthrough stage. To ensure consistency of pricing logic between hardware and software, a penalty function is designed to modify the standard likelihood function, effectively transforming practical business logic into a model regularization term. Parameter estimation employs a Bayesian inference framework integrated with operational constraints, using Markov Chain Monte Carlo (MCMC) sampling to achieve robust posterior inference under small-sample constraints. Empirical analysis demonstrates that the proposed method achieves superior cross-domain data transfer performance compared to traditional baseline models, with a Leave-One-Out Cross-Validation (LOOCV) Mean Absolute Percentage Error (MAPE) of 21.2%. This research provides a practical, value-oriented price estimation method for embedded equipment software pricing.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
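As an illustration of the penalty-modified likelihood idea, the sketch below fits a toy two-parameter nonlinear price curve with random-walk Metropolis sampling. The contribution curve `f`, the price cap, the penalty weight, and all data are invented for this sketch, not taken from the paper.

```python
import math
import random

random.seed(0)

def f(x, a, b):
    # Hypothetical nonlinear contribution curve: log term (diminishing
    # marginal utility) plus a quadratic term (accelerating growth).
    return a * math.log1p(x) + b * x ** 2

def log_post(a, b, data, cap=50.0, lam=10.0):
    # Gaussian log-likelihood plus a soft penalty acting as a regularizer
    # that encodes a business rule (predicted prices must stay below cap).
    sigma = 2.0
    lp = 0.0
    for x, y in data:
        mu = f(x, a, b)
        lp -= 0.5 * ((y - mu) / sigma) ** 2
        lp -= lam * max(0.0, mu - cap) ** 2
    return lp

def metropolis(data, n=2000, step=0.1):
    # Random-walk Metropolis over (a, b); adequate for a 2-parameter toy model.
    a, b = 1.0, 0.1
    lp = log_post(a, b, data)
    samples = []
    for _ in range(n):
        a2, b2 = a + random.gauss(0, step), b + random.gauss(0, step)
        lp2 = log_post(a2, b2, data)
        if random.random() < math.exp(min(0.0, lp2 - lp)):
            a, b, lp = a2, b2, lp2
        samples.append((a, b))
    return samples

# Noise-free synthetic "price" data with true a = 3.0, b = 0.05
data = [(x, 3.0 * math.log1p(x) + 0.05 * x ** 2) for x in range(1, 10)]
samples = metropolis(data)
a_hat = sum(s[0] for s in samples[1000:]) / len(samples[1000:])
```

The penalty only activates when a predicted price exceeds the cap, so it behaves as a one-sided regularizer on the posterior rather than a hard constraint.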
28 pages, 2623 KB  
Article
Federated Safe Proximal Policy Optimization for Robust Low-Carbon Dispatch of Heterogeneous Multi-Park Electricity–Heat–Hydrogen Integrated Energy Systems
by Zijie Peng, Xiaohui Yang and Qianhua Xiao
Energies 2026, 19(10), 2382; https://doi.org/10.3390/en19102382 - 15 May 2026
Abstract
To achieve low-carbon and cost-effective operation of multi-park electricity–heat–hydrogen integrated energy systems (EHHSs), this paper proposes a low-carbon dispatch framework based on federated safe reinforcement learning. First, a multi-park EHHS dispatch model is established by considering heterogeneous park characteristics, electricity–heat–hydrogen coupling, stepped carbon trading, and peer-to-peer (P2P) energy trading. Then, to address the coupled challenges of privacy preservation, operational coupling, and safety constraints, the dispatch problem is formulated as a constrained Markov decision process (CMDP). On this basis, a federated safe proximal policy optimization algorithm (FedSafePPO) is developed by integrating PPO, Lagrangian-based safety constraint handling, and federated parameter aggregation. The proposed method enables each park to learn a local dispatch policy from private data while sharing global knowledge without exchanging raw operational data. In addition, an actor–dual-critic architecture is adopted to jointly evaluate economic returns and constraint costs, thereby improving convergence stability and dispatch feasibility. Case studies involving three heterogeneous parks (industrial, commercial, and residential) demonstrate that the proposed method effectively reduces total operating costs and carbon emissions while satisfying system constraints. Compared with PPO, FedPPO, and SafePPO, the proposed FedSafePPO achieves superior low-carbon economic performance, greater training stability, and better adaptability to heterogeneous operating conditions. The results verify the effectiveness and engineering applicability of the proposed method for the low-carbon dispatch of multi-park EHHSs.
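The federated aggregation step can be illustrated independently of the RL machinery: each "park" below runs gradient descent on its own private data, and only the model parameter is averaged, never the raw samples. The linear model, learning rates, and park datasets are invented; this is FedAvg-style parameter aggregation, not the paper's FedSafePPO.

```python
def local_step(w, data, lr=0.01, epochs=20):
    # Plain gradient descent on squared error, run locally on private data.
    for _ in range(epochs):
        g = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * g
    return w

def fed_avg(w_global, datasets, rounds=10):
    # Each round: broadcast the global parameter, train locally, average.
    for _ in range(rounds):
        local = [local_step(w_global, d) for d in datasets]
        w_global = sum(local) / len(local)
    return w_global

# Three heterogeneous parks share the same underlying slope (2.0) but
# observe different operating ranges; raw data never leaves a park.
parks = [
    [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)],
    [(x, 2.0 * x) for x in (5.0, 6.0)],
    [(x, 2.0 * x) for x in (0.5, 1.5)],
]
w = fed_avg(0.0, parks)
```

Despite heterogeneous local data, the averaged parameter converges to the shared slope, which is the property federated aggregation relies on.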
24 pages, 1434 KB  
Article
Adaptive Service Migration in Hybrid MEC–Cloud Environments: A Queueing-Theoretic Framework for Split-User Offloading
by Anna Kushchazli, Kseniia Leonteva, Darina Shiyapova, Alexandr Priscepov and Irina Kochetkova
Future Internet 2026, 18(5), 258; https://doi.org/10.3390/fi18050258 - 14 May 2026
Abstract
Resource-constrained Multi-Access Edge Computing (MEC) nodes cannot fully replace cloud infrastructure, yet existing service placement models treat edge hosting as an all-or-nothing decision. This paper proposes a queueing-theoretic framework for split-user offloading in hybrid MEC–cloud environments. The system is modeled as a Continuous-Time Markov Chain (CTMC) over a load-vector state space that admits a product-form stationary distribution. A delay-aware greedy orchestration policy determines, at every arrival and departure event, which service occupies the MEC node and how many of its users are offloaded from the cloud. Closed-form expressions are derived for average end-to-end (E2E) delay, MEC occupancy and saturation probabilities, per-service hosting probabilities, and delay-saving indicators. Numerical analysis of a five-service industrial scenario shows that the proposed split-user mechanism keeps the MEC node occupied for most of the observation time (around 97% at the baseline load), naturally prioritizes services with the largest aggregate latency benefit, and substantially reduces the average delay compared with a cloud-only configuration. The analytical results are validated by discrete-event simulation, which matches the CTMC values with relative discrepancy below 1% under the Poisson/exponential assumptions; additional simulations quantify the sensitivity to alternative arrival and service-time distributions. The framework provides analytically tractable, interpretable decision logic with negligible runtime overhead, making it a suitable analytical foundation for cloud service orchestration platforms that must meet strict QoS targets in next-generation edge networks.
(This article belongs to the Special Issue Cloud Computing and Cloud Service Orchestration)
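For intuition about product-form stationary distributions and the delay metrics derived from them, here is the closed form for a single birth-death CTMC, an M/M/1/K queue standing in crudely for a capacity-limited MEC node; the rates and capacity are invented.

```python
def mm1k_stationary(lam, mu, K):
    # Birth-death CTMCs admit a product-form solution:
    # pi_k proportional to rho**k, with rho = lam / mu.
    rho = lam / mu
    weights = [rho ** k for k in range(K + 1)]
    z = sum(weights)                      # normalization constant
    return [w / z for w in weights]

def mean_delay(lam, mu, K):
    # Average delay of accepted jobs via Little's law, L = lambda_eff * W.
    pi = mm1k_stationary(lam, mu, K)
    L = sum(k * p for k, p in enumerate(pi))
    lam_eff = lam * (1.0 - pi[K])         # arrivals to a full system are lost
    return L / lam_eff

pi = mm1k_stationary(4.0, 5.0, K=10)      # offered load rho = 0.8
w = mean_delay(4.0, 5.0, K=10)
```

Global balance reduces to detailed balance on each edge (`lam * pi[k] == mu * pi[k+1]`), which is what makes the geometric form exact.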

29 pages, 5797 KB  
Article
Research on GNSS/INS Tightly Coupled Integrity Monitoring Method Based on State Augmentation Error Modeling
by Xinhua Tang, Xiaoyu Fang and Fei Huang
Remote Sens. 2026, 18(10), 1564; https://doi.org/10.3390/rs18101564 - 14 May 2026
Abstract
In urban environments with signal blockage and multipath effects, GNSS observation errors often exhibit temporal correlation. The Gaussian white noise assumption adopted in conventional tightly coupled Kalman filtering is prone to model mismatch under such conditions, which may lead to an underestimation of state uncertainty and consequently cause the protection level (PL) to fail to reliably bound the true positioning error. To address this issue, this paper proposes a tightly coupled GNSS/INS integrity monitoring method based on state augmentation and frequency-domain constrained parameter tuning. The method introduces first-order Gauss-Markov processes (GMP) to model major time-correlated error sources, including residual ephemeris and clock errors, residual tropospheric delay, and code multipath, by augmenting them into the filter state for joint estimation. The model parameters are further conservatively tuned based on power spectral density (PSD) envelope constraints to obtain more consistent covariance estimates. Based on this, the covariance output from the augmented filter is incorporated into the multiple hypothesis solution separation (MHSS) framework, enabling the protection level computation to better match the actual error statistics. Experimental results using vehicular field test data show that the proposed method effectively improves estimation consistency and significantly reduces the risk of PL underestimation in degraded environments. Furthermore, it achieves reliable bounding of horizontal positioning errors without noticeable degradation in positioning accuracy, while maintaining good system availability. These results demonstrate the effectiveness of covariance construction based on physical error modeling and PSD envelope constraints for integrity monitoring in complex environments.
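The first-order Gauss-Markov processes augmented into the filter state have an exact discrete-time form that is easy to sanity-check by simulation; the values of sigma, tau, and dt below are arbitrary, not the paper's tuned parameters.

```python
import math
import random

def simulate_gmp(sigma, tau, dt, n, seed=1):
    # Exact discretization of a first-order Gauss-Markov process:
    #   x[k+1] = phi * x[k] + w[k],   phi = exp(-dt / tau),
    #   Var(w) = sigma**2 * (1 - phi**2)  -> steady-state Var(x) = sigma**2
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, q)
        out.append(x)
    return out

xs = simulate_gmp(sigma=2.0, tau=30.0, dt=1.0, n=200_000)
var = sum(v * v for v in xs) / len(xs)                          # ~ sigma**2
r1 = sum(a * b for a, b in zip(xs, xs[1:])) / (len(xs) - 1) / var  # ~ phi
```

In a state-augmented filter, each such error source would contribute one extra state with phi on the transition-matrix diagonal and process noise q squared; the PSD-envelope tuning in the paper then amounts to choosing sigma and tau conservatively.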

17 pages, 5549 KB  
Article
A Cost–Utility Analysis of Two-Stage Screening Strategies Based on Waist-to-Height Ratio for Pediatric Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD) in China
by Yunfei Liu, Tianyu Huang, Jiajia Dang, Shan Cai, Jiaxin Li, Ruolan Yang, Jiabin Zhang, Kaiheng Zhu, Ziyue Sun, Yang Yang, Yajie Wang, Bo Xi and Yi Song
Healthcare 2026, 14(10), 1343; https://doi.org/10.3390/healthcare14101343 - 14 May 2026
Abstract
Background: The prevalence of metabolic dysfunction-associated steatotic liver disease (MASLD) has increased rapidly in pediatric populations. Evidence on the cost-effectiveness of pediatric MASLD screening strategies remains limited. Methods: A decision tree combined with a Markov state-transition model was developed to evaluate the cost-effectiveness of three WHtR-based two-stage screening strategies among children aged 6–14 years in Beijing, China: WHtR combined with ultrasound (S1), WHtR combined with FibroScan® (S2), and WHtR combined with magnetic resonance imaging-proton density fat fraction (MRI-PDFF) (S3), compared with no screening (S4). All screening strategies were combined with lifestyle modification programs, including dietary and exercise management. Model inputs were derived from the published literature, national survey data, and expert consensus. Costs and quality-adjusted life years (QALYs) were estimated from a healthcare system perspective over a 10-year time horizon, with a 3% annual discount rate. Incremental cost–utility ratios (ICURs) were calculated, and extensive one-way, two-way, and probabilistic sensitivity analyses were performed. Results: Our model indicated that, at a willingness-to-pay (WTP) threshold of $30,584.0 per QALY, corresponding to three times the gross domestic product (GDP) per capita of China, S2 was identified as the optimal strategy. At a higher WTP threshold of $71,415.5 per QALY, based on the GDP per capita of Beijing, S3 became the most cost-effective option. All three screening strategies were more cost-effective than no screening across both thresholds. Sensitivity analyses demonstrated that utility values for fibrosis stages and the response rate of the lifestyle modification program were the most influential parameters, and probabilistic sensitivity analysis confirmed the robustness of the baseline findings.
Conclusions: To the best of our knowledge, this is the first cost-effectiveness analysis for pediatric MASLD in China. Model-based estimates suggest that early screening for MASLD in children using WHtR-based screening strategies is cost-effective, with FibroScan® preferred in settings with average economic development and MRI-PDFF preferred in more affluent regions. These findings underscore the importance of context-specific implementation of early MASLD screening strategies in pediatric populations to mitigate long-term disease burden.
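The decision-analytic skeleton (a Markov cohort model with annual discounting, QALYs, and an incremental cost-utility ratio) can be sketched in a few lines. Every number below (transition probabilities, utilities, costs) is an invented placeholder, not one of the study's inputs.

```python
def run_cohort(P, utilities, costs, years, disc=0.03):
    # Discrete-time Markov cohort model over [healthy, disease, dead];
    # returns discounted (cost, QALYs) accumulated per person.
    state = [1.0, 0.0, 0.0]
    total_cost = total_qalys = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t
        total_qalys += d * sum(s * u for s, u in zip(state, utilities))
        total_cost += d * sum(s * c for s, c in zip(state, costs))
        state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qalys

# Hypothetical: screening plus lifestyle change cuts annual progression 0.05 -> 0.02
P_none   = [[0.94, 0.05, 0.01], [0.00, 0.95, 0.05], [0.00, 0.00, 1.00]]
P_screen = [[0.97, 0.02, 0.01], [0.00, 0.95, 0.05], [0.00, 0.00, 1.00]]
utilities = [1.00, 0.75, 0.00]
cost_none, cost_screen = [0, 2000, 0], [150, 2000, 0]   # screening: $150/year

c0, q0 = run_cohort(P_none, utilities, cost_none, years=10)
c1, q1 = run_cohort(P_screen, utilities, cost_screen, years=10)
icur = (c1 - c0) / (q1 - q0)    # incremental cost-utility ratio, $/QALY
```

A strategy is deemed cost-effective when its ICUR falls below the willingness-to-pay threshold (the study compares against $30,584.0 and $71,415.5 per QALY).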

21 pages, 1732 KB  
Article
Resource-Aware Deep Reinforcement Learning for Joint Caching and Service Placement in Multi-Access Edge Computing
by Elias Dritsas and Maria Trigka
Electronics 2026, 15(10), 2074; https://doi.org/10.3390/electronics15102074 - 13 May 2026
Abstract
Multi-access edge computing (MEC) enables low-latency service provisioning by placing computation closer to mobile users. However, efficient service placement remains challenging due to dynamic user mobility, limited edge resources, and the need to manage service migration as system conditions evolve. This study proposes a resource-aware, cache-enabled service placement framework based on deep reinforcement learning (DRL) to dynamically select edge nodes for hosting services. The approach jointly considers user location, resource availability, and cache status within a unified decision framework, enabling efficient and adaptive service placement in dynamic MEC environments. The problem is formulated as a Markov decision process (MDP) and solved using deep Q-network (DQN)-based methods, with a reward function that balances latency, resource utilization, and cache efficiency. The proposed framework is evaluated in a simulated MEC environment with mobile users and multiple edge nodes. Experimental results demonstrate that the approach achieves lower latency, improved resource utilization, and enhanced cache efficiency compared to baseline strategies. Among the evaluated models, the dueling double deep Q-network (DDDQN) achieves the most balanced overall performance. The proposed framework provides an adaptive and scalable solution for service management in dynamic MEC environments.
(This article belongs to the Special Issue Machine Learning Approach for Prediction: Cross-Domain Applications)

44 pages, 680 KB  
Article
Stochastically Optimal Hierarchical Control for Long-Endurance UAVs Under Communication Degradation: Theory and Validation
by Mosab Alrashed, Ali Fenjan, Humoud Aldaihani and Mohammad Alqattan
Drones 2026, 10(5), 371; https://doi.org/10.3390/drones10050371 - 13 May 2026
Abstract
This paper establishes a theoretical framework for treating communication quality as a navigable resource in long-endurance unmanned aerial vehicle (UAV) control under stochastic degradation. We prove that a hierarchical architecture integrating communication-aware model predictive control (MPC) achieves ε-optimality with respect to the intractable stochastic dynamic programming formulation while maintaining exponential stability guarantees under switched system dynamics governed by continuous-time Markov chains. Three primary theoretical contributions are made: (1) a stochastic optimality theorem showing that sigmoid penalty function approximation yields bounded suboptimality of η ≤ 0.12 under mild ergodicity conditions; (2) a formal stability result for hysteresis-based mode switching, established using multiple Lyapunov functions and showing exponentially fast convergence with a decay rate of λ ≥ 0.23; and (3) a bifurcation analysis showing a critical time threshold of 72 h at which thermal-induced gyro drift in the GPS sensor causes navigation error dynamics to transition from linear to catastrophic nonlinear growth. Validation through 2430 Monte Carlo missions over 54,686 flight hours showed an average increase in endurance of 243% (18.2 days versus 5.3 days), while keeping CEP at approximately 8.7 m and achieving 82% mission success under extreme communication degradation (q_comm < 0.3). The statistical results confirm a very strong positive relationship between the Resilience Quotient (RQ) and the length of successful missions (R² = 0.89, p < 0.001), supporting the theoretical model with empirical evidence.

25 pages, 12577 KB  
Article
A Hybrid Deep Learning Framework with Q-Table Optimization for Well Log Reconstruction
by Hangju Yu and Bin Zhao
Processes 2026, 14(10), 1548; https://doi.org/10.3390/pr14101548 - 11 May 2026
Abstract
The reconstruction of acoustic (AC) logging curves is of great significance for reservoir evaluation, lithology identification, and velocity modeling, particularly in the presence of missing or degraded logging data. However, conventional reconstruction methods and existing deep learning models often suffer from limited feature representation capability and rely heavily on manual hyperparameter tuning, leading to suboptimal performance. To address these challenges, this study proposes a reinforcement learning-based optimization framework for AC logging curve reconstruction. Specifically, a hybrid deep learning architecture integrating convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM), and an attention mechanism is developed to effectively capture local spatial features, long-range temporal dependencies, and key feature contributions from multi-logging data. Furthermore, a Q-learning-based optimization strategy is introduced to adaptively tune model hyperparameters by formulating the optimization process as a Markov Decision Process (MDP), enabling dynamic and data-driven parameter adjustment. To validate the effectiveness of the proposed method, comparative experiments are conducted using several baseline and optimized models, including CNN–BiLSTM, CNN–BiLSTM–Attention, particle swarm optimization (PSO)-optimized CNN–BiLSTM–Attention, and genetic algorithm (GA)-optimized CNN–BiLSTM–Attention. The results demonstrate that the proposed approach achieves superior reconstruction accuracy for AC curves, with improved convergence efficiency and model stability. In addition, it exhibits stronger robustness and generalization capability under limited data conditions, effectively mitigating the risk of overfitting and local optima. This study provides a novel reinforcement learning-driven solution for AC logging curve reconstruction and offers practical value for intelligent reservoir characterization in complex geological environments.
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
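Framing hyperparameter tuning as an MDP can be shown with a tabular Q-learning toy: the state is a position on a one-dimensional hyperparameter grid, actions move left or right, and the reward is the negative "validation loss". The loss curve here is hard-coded rather than produced by training a network.

```python
import random

# Toy validation loss over a discretized hyperparameter grid (minimum at index 3)
LOSS = [9.0, 5.0, 2.5, 1.0, 2.0, 4.0, 8.0]

def q_learning(episodes=500, steps=10, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    n = len(LOSS)
    Q = [[0.0, 0.0] for _ in range(n)]        # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = rng.randrange(n)
        for _ in range(steps):
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = -LOSS[s2]                     # reward = negative validation loss
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
greedy = [0 if q0 >= q1 else 1 for q0, q1 in Q]   # learned policy per state
```

After training, the greedy policy walks toward the loss minimum from either side; in the paper's setting each transition would instead retrain the CNN-BiLSTM-Attention model under the new hyperparameters.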

40 pages, 3330 KB  
Article
Data-Driven Dynamic Pricing for Mitigating the Hockey Stick Effect: A Hybrid Forecasting and Actor-Critic Reinforcement Learning Framework
by Shanshan Peng, Dandan Wang and Fang Zhu
Algorithms 2026, 19(5), 382; https://doi.org/10.3390/a19050382 - 11 May 2026
Abstract
Demand at the fabric warehouse exhibits pronounced hockey stick characteristics, which leads to problems such as peak congestion and labor shortages during operation. To alleviate this phenomenon, we propose a combined strategy that uses a SARIMA–Markov hybrid model for demand forecasting and then applies Actor-Critic reinforcement learning for dynamic pricing. The hybrid model integrates SARIMA, which captures linear trends and seasonal patterns, with a Markov chain that corrects the residuals, yielding more accurate predictions for highly volatile demand in textile logistics. Experimental results indicate that our approach outperforms SARIMA, the Temporal Fusion Transformer (TFT), and an ensemble baseline, especially in identifying and reproducing sharp demand peaks. By combining forecasting results with price elasticity, the proposed dynamic pricing scheme cuts peak-hour demand by 12.54%, which in turn eases pressure on labor scheduling and improves the efficiency of workforce allocation. This work offers a data-driven approach to flattening demand fluctuations via intelligent pricing, improves operational efficiency without requiring extra hardware investment, and provides a practical response to a long-standing bottleneck in the textile logistics sector.
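The residual-correction mechanism can be sketched with a seasonal-naive forecaster standing in for SARIMA: discretize the forecast residuals into states, fit a Markov chain on the state sequence, and add the expected residual of the most likely next state. The weekly series below is synthetic.

```python
def seasonal_naive(series, season):
    # fc[i] forecasts series[season + i] with the value one season earlier
    return [series[t - season] for t in range(season, len(series))]

def residual_states(resids, threshold):
    # 0 = under-forecast, 1 = near zero, 2 = over-forecast
    return [0 if r > threshold else 2 if r < -threshold else 1 for r in resids]

def transition_matrix(states, k=3):
    counts = [[1.0] * k for _ in range(k)]     # Laplace smoothing
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

# Synthetic weekly demand with a Saturday peak plus a slow upward drift,
# which the seasonal-naive forecast systematically under-predicts.
base = [10, 12, 15, 30, 40, 22, 11] * 8
series = [v + 0.2 * t for t, v in enumerate(base)]

season = 7
fc = seasonal_naive(series, season)
resids = [series[season + i] - fc[i] for i in range(len(fc))]
states = residual_states(resids, threshold=0.5)
P = transition_matrix(states)

# Correct the next forecast with the mean residual of the likeliest next state
means = {s: sum(r for r, st in zip(resids, states) if st == s) /
            states.count(s) for s in set(states)}
next_state = max(range(3), key=lambda j: P[states[-1]][j])
correction = means.get(next_state, 0.0)
```

With the drift, every residual is a constant under-forecast of 1.4, so the chain learns to add exactly that back; in the real setting the residual states carry information the SARIMA component missed.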
22 pages, 2279 KB  
Article
Virtual Mice, Real Errors: A Sensor-Aware Generative Framework for In Silico Ethology
by Reza Sayfoori, Goli Vaisi and Hung Cao
Sensors 2026, 26(10), 2977; https://doi.org/10.3390/s26102977 - 9 May 2026
Abstract
Long-duration animal trajectories are central to computational ethology, yet constructing large rodent cohorts remains costly, time-intensive, and constrained by animal-use considerations. We present a sensor-aware generative framework that separates latent behavioral dynamics from sensing-induced observation distortion to synthesize observed-domain trajectories that are behaviorally plausible while reproducing proxy-referenced observation distortions. The framework combines a run-level semi-Markov ethology model, occupancy calibration, and state-conditioned kinematic generation with a regime-dependent Ultra-Wideband observation channel that explicitly captures Line-of-Sight and Non-Line-of-Sight sensing conditions. Using four UWB sessions, this proof-of-concept study models three states—exploring, feeding, and burrowing—and evaluates realism through state occupancy, state-conditioned kinematic divergence, residual-domain agreement, and mean-squared displacement across time lags. We further assess whether sensor-aware conditioning improves robustness under LoS/NLoS domain shift in downstream trajectory classification. Sensor-aware conditioning yields stable mixed-domain performance with AUC = 0.995, whereas condition-agnostic baselines decline to AUC = 0.974 and AUC = 0.901. These results support the feasibility of sensor-aware in silico ethology as a proof-of-concept framework for controlled robustness studies and algorithm evaluation under proxy-referenced observation distortion. Because the present evaluation is based on four UWB sessions and uses a smoothed UWB-derived reference trajectory rather than independent ground truth, broader applications to synthetic-cohort generation, disease modeling, and statistical power-analysis workflows should be considered future directions requiring validation in larger datasets.
(This article belongs to the Special Issue Feature Papers in Biosensors Section 2026)
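A run-level semi-Markov sampler of the kind underlying the ethology model: hold a state for a random duration, then jump according to an embedded transition matrix with no self-transitions. All states, probabilities, and mean durations below are invented, and exponential durations stand in for whatever duration distributions the paper fits.

```python
import random

STATES = ["exploring", "feeding", "burrowing"]
TRANS = {   # embedded between-run transition probabilities (no self-loops)
    "exploring": {"feeding": 0.6, "burrowing": 0.4},
    "feeding":   {"exploring": 0.8, "burrowing": 0.2},
    "burrowing": {"exploring": 0.9, "feeding": 0.1},
}
MEAN_DUR = {"exploring": 20.0, "feeding": 8.0, "burrowing": 5.0}   # seconds

def sample_runs(n_runs, seed=7):
    # Each run is (state, duration); the duration makes this semi-Markov
    # rather than a plain Markov chain over time steps.
    rng = random.Random(seed)
    state, runs = "exploring", []
    for _ in range(n_runs):
        runs.append((state, rng.expovariate(1.0 / MEAN_DUR[state])))
        nxt = list(TRANS[state])
        state = rng.choices(nxt, weights=[TRANS[state][s] for s in nxt])[0]
    return runs

runs = sample_runs(20_000)
total = sum(d for _, d in runs)
occupancy = {s: sum(d for st, d in runs if st == s) / total for s in STATES}
```

State occupancy, the duration-weighted fraction of time per state, is one of the realism checks used in the paper; the state-conditioned kinematic generator and UWB observation channel would be layered on top of these runs.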
34 pages, 10943 KB  
Article
Dynamic Continuous Berth Scheduling Under Tidal Constraints and Carbon Costs Using QER-DDQN
by Meixian Jiang, Fan Wu, Haozhe Mao, Guanjun Bao and Guanghua Wu
Appl. Sci. 2026, 16(10), 4684; https://doi.org/10.3390/app16104684 - 9 May 2026
Abstract
Under the background of green and low-carbon port development, the single-terminal continuous berth dynamic scheduling problem is simultaneously affected by multiple factors, including dynamic vessel arrivals, tidal conditions, quay crane resources, and carbon emission costs, and has become a complex decision-making problem that must balance operational efficiency and low-carbon objectives. To address this issue, the problem is characterized in this study as an event-driven, dynamic spatiotemporal resource allocation and cost-coordinated optimization problem. A dynamic berth scheduling model is established with the objective of minimizing the total cost composed of waiting cost, delay cost, and carbon emission cost, and the problem is further formulated as a Markov decision process. Considering that conventional experience replay mechanisms are insufficient in exploiting critical samples and often suffer from limited training stability in complex dynamic scenarios, this study introduces an experience quality evaluation mechanism and a dual-replay-buffer collaborative training strategy on the basis of the DDQN algorithm, thereby proposing the QER-DDQN algorithm. The experimental results indicate that, under highly complex and highly congested test scenarios, QER-DDQN shows relatively better cost-control performance, with the average total cost reduced by approximately 8.08%, 10.77%, 7.78%, and 2.43% compared with FCFS, GA, PER-DDQN, and DDQN, respectively. The ablation study and scheduling scheme analysis further demonstrate that the experience quality evaluation mechanism and the dual-replay-buffer collaborative training strategy help improve the utilization efficiency of critical samples and, to a certain extent, enhance the local shoreline utilization structure, thereby alleviating the accumulation of waiting time and the propagation of delays caused by local resource conflicts.
Further low-carbon scenario analysis shows that carbon tax level and shore power coverage rate both affect berth scheduling outcomes and carbon emission performance, and that appropriate low-carbon policy constraints and shore power infrastructure configuration are conducive to promoting the coordinated optimization of port scheduling efficiency and emission reduction objectives.
27 pages, 1632 KB  
Article
Research on Rural Community Restructuring in Traditional Agricultural Areas from the Perspective of Hybridity: A Case Study of the Jianghan Plain, China
by Xue Zeng, Bin Yu, Mengshan Hu and Zihao Zhang
Sustainability 2026, 18(10), 4681; https://doi.org/10.3390/su18104681 - 8 May 2026
Abstract
Theoretical perspective: The function of rural human settlements serves as a crucial perspective for interpreting rural community restructuring, and hybridity is a powerful tool for decoding that function. Objectives and methodology: Based on sample survey data from rural communities in typical counties and cities of the Jianghan Plain in 2012 and 2022, an evaluation index system for rural community restructuring was constructed across three dimensions: entity hybridity, network hybridity, and meaning hybridity. This paper analyzes the characteristics of rural community restructuring and the evolution of human settlement functions in this traditional agricultural plain using the weighted TOPSIS method, the Markov chain, and the synergy model. Results and conclusions: From 2012 to 2022, the level of rural community restructuring in the Jianghan Plain improved significantly, and the comprehensive index of rural community restructuring gradually converged across different rural communities. Meaning hybridity plays a major role in rural community restructuring in the Jianghan Plain, and social culture is its dominant element. As the key indicator, the public security situation contributes 21.35% to rural community restructuring. Both the macro humanistic environment and micro spatial attributes significantly influence rural community restructuring. These findings can provide a scientific basis for optimizing the rural community system in the Jianghan Plain.
(This article belongs to the Section Sustainable Urban and Rural Development)
26 pages, 2647 KB  
Article
Long-Term Optimal Scheduling of Cascade Hydro–Wind–PV Complementary System Based on Deep Deterministic Policy Gradient
by Wenwu Li, Mu He, Zixing Wan, Fengming Dai, Taotao Zhang and Yuhao Jiang
Appl. Sci. 2026, 16(10), 4630; https://doi.org/10.3390/app16104630 - 8 May 2026
Abstract
Runoff, wind power output, and photovoltaic (PV) power output in cascade hydro–wind–PV complementary systems are inherently uncertain, making long-term scheduling a high-dimensional continuous-control decision-making problem. To address this issue, this study proposes a long-term optimal scheduling method based on the deep deterministic policy gradient (DDPG) algorithm. First, a long-term optimal scheduling model for a cascade hydro–wind–PV complementary system is established with the objective of maximizing renewable energy accommodation. Second, the original optimization problem is formulated as a Markov decision process, and the multi-constraint scheduling task is transformed into a deep reinforcement learning problem. Then, the Actor–Critic architecture of DDPG is employed to iteratively update the continuous control policy, while experience replay and target networks are introduced to stabilize the training process and improve learning performance. Finally, a large-scale cascade hydropower system and its surrounding wind and PV plants are selected as a case study for validation, and the proposed method is compared with a deep Q-network (DQN) and proximal policy optimization (PPO). The results show that the proposed method can learn a stable scheduling policy within relatively few training episodes. Compared with a DQN and PPO, DDPG achieves better overall scheduling performance, with higher renewable energy accommodation, lower curtailment, and faster convergence in the considered case study.

31 pages, 8584 KB  
Article
Load Profile Assignment for Planning and Operation Support in Distribution Networks Under Partial Smart Meter Penetration
by Jorge Lara, Mauricio Samper and Delia Graciela Colomé
Processes 2026, 14(10), 1505; https://doi.org/10.3390/pr14101505 - 7 May 2026
Abstract
The growing need to enhance observability in distribution networks has driven the development of load pseudomeasurement generation methods, particularly under partial smart meter (SM) penetration. This paper proposes a load pseudomeasurement framework that builds representative daily load profiles (load curves) from hourly SM time series using clustering techniques, with and without weather information. Markov chain models are then used to capture day-to-day dynamics by predicting the most likely next-day profile to be assigned to customers without SM. To enable this transfer, a hierarchical grouping scheme based on monthly energy consumption is introduced to map behaviors from SM-equipped customers to customers without SM measurement. The methodology is validated with real residential data from the Low-Carbon London project under multiple observability scenarios including different SM availability levels, where SM measurements are withheld from the inputs to emulate customers without SM measurement, and the resulting pseudomeasurements are benchmarked against the original measurements. The results show that the Euclidean representative curve method achieved the most robust overall performance, with a minimum MAE of 1.65 in the Reduced × 75% SM configuration. The best-performing configuration depended on the observability level: Reduced was the most robust option under medium-to-high observability, whereas Temp_reduced with a 21-day window performed best under the lowest-observability condition. In addition, the Euclidean method showed low practical deviation in the Reduced × 25% SM case, with a bias of 0.63 and Cohen’s d = 0.27. Overall, the proposed approach accurately reproduces the hourly load shape and captures inter-day variability under partial observability conditions. Full article
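The day-to-day dynamics described above can be sketched with a first-order Markov chain over daily load-profile cluster labels. The helpers below are an illustrative assumption, not the paper's implementation: `fit_transition_matrix` estimates transition probabilities from an observed label sequence, and `most_likely_next_profile` picks the highest-probability next-day profile, as one might do for a customer without a smart meter.

```python
import numpy as np

def fit_transition_matrix(labels, n_clusters):
    """Estimate a day-to-day Markov transition matrix from a sequence of
    daily load-profile cluster labels (illustrative helper, assumed names)."""
    counts = np.zeros((n_clusters, n_clusters))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1   # guard: rows with no observed transitions stay zero
    return counts / row_sums

def most_likely_next_profile(P, today_label):
    # Most probable next-day cluster given today's assigned profile.
    return int(np.argmax(P[today_label]))

labels = [0, 0, 1, 2, 1, 0, 0, 1]   # toy sequence of daily cluster assignments
P = fit_transition_matrix(labels, 3)
nxt = most_likely_next_profile(P, today_label=0)
```

In the framework described above, the predicted cluster would then index a representative daily curve (e.g., the Euclidean representative) that serves as the pseudomeasurement for the unmetered customer.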
(This article belongs to the Special Issue Control, Optimization and Scheduling of Smart Distribution Grids)

35 pages, 11787 KB  
Article
A New One-Parameter Model Supports an Upside-Down Bathtub Failure Rate: Theory, Inference, and Real-World Applications
by Ohud A. Alqasem and Ahmed Elshahhat
Mathematics 2026, 14(9), 1566; https://doi.org/10.3390/math14091566 - 6 May 2026
Abstract
Researchers often develop ordinal hazard distributions, whether increasing or decreasing, into multi-parameter distributions to derive various forms of the hazard function. This process necessitates the formulation of a multi-parameter hazard function, which involves a more complex mathematical expression. In contrast, this study introduces a new one-parameter lifetime model, termed the Inverted Z–Lindley (IZL) distribution, which is capable of capturing an upside-down bathtub-shaped failure rate without sacrificing analytical simplicity. Fundamental distributional properties of the IZL model are rigorously established, including closed-form expressions for the probability density, cumulative distribution, reliability, and hazard rate functions. Theoretical analysis shows that the density is strictly positive, unimodal, positively skewed, and heavy-tailed, while the hazard rate is unimodal with vanishing limits at both extremes. Fractional moments are obtained, and the non-existence of classical moments is formally justified, motivating the use of quantile-based and inactivity-time reliability measures. Besides the quantile function, several key reliability measures, including the mean inactivity time and strong mean inactivity time functions, and order statistics, are also developed. Inferential procedures are constructed under Type-II censoring using both likelihood-based and Bayesian frameworks. The existence and uniqueness of the frequentist estimator are established, while Bayesian estimation is implemented via Markov chain Monte Carlo methods under informative gamma priors. Several interval estimation techniques—including asymptotic, bootstrap, Bayesian credible, and highest posterior density intervals—are developed and compared through extensive Monte Carlo simulations. The practical relevance of the proposed model is demonstrated using real datasets from environmental health and communication engineering, where the IZL distribution consistently outperforms fifteen well-established inverted lifetime models according to likelihood-based criteria, information measures, and goodness-of-fit diagnostics. Overall, the IZL model offers a powerful, interpretable, and computationally efficient alternative for modeling heavy-tailed lifetime data with non-monotone failure behavior, contributing meaningfully to modern distribution theory and applied reliability analysis. Full article
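The MCMC step under Type-II censoring described above can be illustrated with a generic random-walk Metropolis sampler. Since the IZL density is not reproduced in this listing, the sketch below substitutes a one-parameter exponential lifetime model under a Gamma(a, b) prior; the censored likelihood structure (r observed failures out of n units, plus a survival term at the largest observed failure time) follows the standard Type-II pattern, but every name and constant here is an assumption, not the paper's procedure.

```python
import numpy as np

# Hedged sketch: random-walk Metropolis for a one-parameter rate theta under
# Type-II censoring, with an exponential model standing in for the IZL density.
def log_posterior(theta, x_obs, n, a=2.0, b=1.0):
    if theta <= 0:
        return -np.inf
    r = len(x_obs)
    # Observed-failure pdf terms plus survival of the n - r censored units,
    # all censored at the r-th order statistic x_(r) = max(x_obs).
    loglik = r * np.log(theta) - theta * x_obs.sum()
    loglik -= (n - r) * theta * x_obs.max()          # log S(x_(r)) = -theta * x_(r)
    logprior = (a - 1) * np.log(theta) - b * theta   # Gamma(a, b) prior kernel
    return loglik + logprior

def metropolis(x_obs, n, n_iter=4000, step=0.2, seed=1):
    rng = np.random.default_rng(seed)
    theta, chain = 1.0, []
    lp = log_posterior(theta, x_obs, n)
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()  # symmetric random-walk proposal
        lp_prop = log_posterior(prop, x_obs, n)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```

Swapping in the IZL log-density and log-survival function where the exponential terms appear would turn this skeleton into the sampler the abstract describes; credible and highest posterior density intervals then follow from the post-burn-in draws.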
(This article belongs to the Special Issue Computational Statistics: Analysis and Applications for Mathematics)
