
Search Results (1,248)

Search Parameters:
Keywords = statistical convergence

22 pages, 1407 KB  
Review
Artificial Intelligence Drives Advances in Multi-Omics Analysis and Precision Medicine for Sepsis
by Youxie Shen, Peidong Zhang, Jialiu Luo, Shunyao Chen, Shuaipeng Gu, Zhiqiang Lin and Zhaohui Tang
Biomedicines 2026, 14(2), 261; https://doi.org/10.3390/biomedicines14020261 - 23 Jan 2026
Abstract
Sepsis is a life-threatening syndrome characterized by marked clinical heterogeneity and complex host–pathogen interactions. Although traditional mechanistic studies have identified key molecular pathways, they remain insufficient to capture the highly dynamic, multifactorial, and systems-level nature of this condition. The advent of high-throughput omics technologies—particularly integrative multi-omics approaches encompassing genomics, transcriptomics, proteomics, and metabolomics—has profoundly reshaped sepsis research by enabling comprehensive profiling of molecular perturbations across biological layers. However, the unprecedented scale, dimensionality, and heterogeneity of multi-omics datasets exceed the analytical capacity of conventional statistical methods, necessitating more advanced computational strategies to derive biologically meaningful and clinically actionable insights. In this context, artificial intelligence (AI) has emerged as a powerful paradigm for decoding the complexity of sepsis. By leveraging machine learning and deep learning algorithms, AI can efficiently process ultra-high-dimensional and heterogeneous multi-omics data, uncover latent molecular patterns, and integrate multilayered biological information into unified predictive frameworks. These capabilities have driven substantial advances in early sepsis detection, molecular subtyping, prognosis prediction, and therapeutic target identification, thereby narrowing the gap between molecular mechanisms and clinical application. As a result, the convergence of AI and multi-omics is redefining sepsis research, shifting the field from descriptive analyses toward predictive, mechanistic, and precision-oriented medicine. 
Despite these advances, the clinical translation of AI-driven multi-omics approaches in sepsis remains constrained by several challenges, including limited data availability, cohort heterogeneity, restricted interpretability and causal inference, high computational demands, difficulties in integrating static molecular profiles with dynamic clinical data, ethical and governance concerns, and limited generalizability across populations and platforms. Addressing these barriers will require the establishment of standardized, multicenter datasets, the development of explainable and robust AI frameworks, and sustained interdisciplinary collaboration between computational scientists and clinicians. Through these efforts, AI-enabled multi-omics research may progress toward reproducible, interpretable, and equitable clinical implementation. Ultimately, the synergy between artificial intelligence and multi-omics heralds a new era of intelligent discovery and precision medicine in sepsis, with the potential to transform both research paradigms and bedside practice. Full article
(This article belongs to the Section Molecular and Translational Medicine)

35 pages, 2106 KB  
Article
A Novel Method That Is Based on Differential Evolution Suitable for Large-Scale Optimization Problems
by Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos
Foundations 2026, 6(1), 2; https://doi.org/10.3390/foundations6010002 - 23 Jan 2026
Abstract
Global optimization represents a fundamental challenge in computer science and engineering, as it aims to identify high-quality solutions to problems spanning from moderate to extremely high dimensionality. The Differential Evolution (DE) algorithm is a population-based algorithm like Genetic Algorithms (GAs) and uses similar operators such as crossover, mutation and selection. The proposed method introduces a set of methodological enhancements designed to increase both the robustness and the computational efficiency of the classical DE framework. Specifically, an adaptive termination criterion is incorporated, enabling early stopping based on statistical measures of convergence and population stagnation. Furthermore, a population sampling strategy based on k-means clustering is employed to enhance exploration and improve the redistribution of individuals in high-dimensional search spaces. This mechanism enables structured population renewal and effectively mitigates premature convergence. The enhanced algorithm was evaluated on standard large-scale numerical optimization benchmarks and compared with established global optimization methods. The experimental results indicate substantial improvements in convergence speed, scalability and solution stability. Full article
(This article belongs to the Section Mathematical Sciences)
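The adaptive termination criterion described in this abstract — early stopping driven by statistical measures of convergence and population stagnation — can be sketched in a minimal differential-evolution loop. This is an illustrative sketch only: the function names, parameter values, and the sphere test objective are assumptions, not the paper's benchmarks, and the k-means-based population renewal is omitted.

```python
import random

def sphere(x):
    """Simple convex test objective (a stand-in, not the paper's benchmarks)."""
    return sum(v * v for v in x)

def de_with_early_stop(f, dim=10, pop_size=30, max_gens=500,
                       F=0.6, CR=0.9, stall_limit=30, tol=1e-12, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    best, stall = min(fit), 0
    for _ in range(max_gens):
        for i in range(pop_size):
            # DE/rand/1/bin: mutate three distinct individuals, then crossover
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
        new_best = min(fit)
        # Adaptive termination: count generations with negligible improvement
        stall = stall + 1 if best - new_best < tol * max(1.0, abs(best)) else 0
        best = new_best
        if stall >= stall_limit:      # population has stagnated -> stop early
            break
    return best

print("best objective:", de_with_early_stop(sphere))
```

The stagnation counter is what saves function evaluations: once the best value stops improving for `stall_limit` consecutive generations, the run ends before `max_gens` is exhausted.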

28 pages, 9471 KB  
Article
Shaking Table Test-Based Verification of PDEM for Random Seismic Response of Anchored Rock Slopes
by Xuegang Pan, Jinqing Jia and Lihua Zhang
Appl. Sci. 2026, 16(2), 1146; https://doi.org/10.3390/app16021146 - 22 Jan 2026
Abstract
This study systematically verified the applicability and accuracy of the Probability Density Evolution Method (PDEM) in the probabilistic modeling of the dynamic response of anchored rock slopes under random seismic action through large-scale shaking table model tests. Across 144 sets of non-stationary random ground motions and 7 sets of white noise excitations, key response data such as acceleration, displacement, and changes in anchor axial force were collected. The PDEM was used to model the instantaneous probability density function (PDF) and cumulative distribution function (CDF), which were then compared with the results of normal distribution, Gumbel distribution, and direct sample statistics from multiple dimensions. The results show that the PDEM does not require a preset distribution form and can accurately reproduce the non-Gaussian, multi-modal, and time evolution characteristics of the response; in the reliability assessment of peak responses, its prediction deviation is much smaller than that of traditional parametric models; the three-dimensional probability density evolution cloud map further reveals the law governing the entire process of the response PDF from “narrow and high” in the early stage of the earthquake, “wide and flat” in the main shock stage, to “re-convergence” after the earthquake. The study confirms that the PDEM has significant advantages and engineering application value in the analysis of random seismic responses and the dynamic reliability assessment of anchored slopes. Full article

37 pages, 13674 KB  
Article
A Reference-Point Guided Multi-Objective Crested Porcupine Optimizer for Global Optimization and UAV Path Planning
by Zelei Shi and Chengpeng Li
Mathematics 2026, 14(2), 380; https://doi.org/10.3390/math14020380 - 22 Jan 2026
Abstract
Balancing convergence accuracy and population diversity remains a fundamental challenge in multi-objective optimization, particularly for complex and constrained engineering problems. To address this issue, this paper proposes a novel Multi-Objective Crested Porcupine Optimizer (MOCPO), inspired by the hierarchical defensive behaviors of crested porcupines. The proposed algorithm integrates four biologically motivated defense strategies—vision, hearing, scent diffusion, and physical attack—into a unified optimization framework, where global exploration and local exploitation are dynamically coordinated. To effectively extend the original optimizer to multi-objective scenarios, MOCPO incorporates a reference-point guided external archiving mechanism to preserve a well-distributed set of non-dominated solutions, along with an environmental selection strategy that adaptively partitions the objective space and enhances solution quality. Furthermore, a multi-level leadership mechanism based on Euclidean distance is introduced to provide region-specific guidance, enabling precise and uniform coverage of the Pareto front. The performance of MOCPO is comprehensively evaluated on 18 benchmark problems from the WFG and CF test suites. Experimental results demonstrate that MOCPO consistently outperforms several state-of-the-art multi-objective algorithms, including MOPSO and NSGA-III, in terms of IGD, GD, HV, and Spread metrics, achieving the best overall ranking in Friedman statistical tests. Notably, the proposed algorithm exhibits strong robustness on discontinuous, multimodal, and constrained Pareto fronts. In addition, MOCPO is applied to UAV path planning in four complex terrain scenarios constructed from real digital elevation data. The results show that MOCPO generates shorter, smoother, and more stable flight paths while effectively balancing route length, threat avoidance, flight altitude, and trajectory smoothness. 
These findings confirm the effectiveness, robustness, and practical applicability of MOCPO for solving complex real-world multi-objective optimization problems. Full article
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)
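The external archive of non-dominated solutions that MOCPO maintains rests on the standard Pareto-dominance test. A minimal, generic sketch of that test and of an archive update follows (the reference-point guidance and environmental selection of MOCPO are not reproduced; the toy objective vectors are arbitrary):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, candidate):
    """Insert candidate, dropping archived points it dominates;
    reject it if any archived point dominates it."""
    if any(dominates(p, candidate) for p in archive):
        return archive
    return [p for p in archive if not dominates(candidate, p)] + [candidate]

archive = []
for point in [(3, 4), (2, 5), (1, 6), (2, 3), (5, 1)]:
    archive = update_archive(archive, point)
print(archive)  # -> [(1, 6), (2, 3), (5, 1)]
```

Note how (2, 3) evicts both (3, 4) and (2, 5): the archive always holds a mutually non-dominated set approximating the Pareto front.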

18 pages, 635 KB  
Article
A Federated Deep Learning Framework for Sleep-Stage Monitoring Using the ISRUC-Sleep Dataset
by Alba Amato
Appl. Sci. 2026, 16(2), 1073; https://doi.org/10.3390/app16021073 - 21 Jan 2026
Abstract
Automatic sleep-stage classification is a key component of long-term sleep monitoring and digital health applications. Although deep learning models trained on centralized datasets have achieved strong performance, their deployment in real-world healthcare settings is constrained by privacy, data-governance, and regulatory requirements. Federated learning (FL) addresses these issues by enabling decentralized training in which raw data remain local and only model parameters are exchanged; however, its effectiveness under realistic physiological heterogeneity remains insufficiently understood. In this work, we investigate a subject-level federated deep learning framework for sleep-stage classification using polysomnography data from the ISRUC-Sleep dataset. We adopt a realistic one subject = one client setting spanning three clinically distinct subgroups and evaluate a lightweight one-dimensional convolutional neural network (1D-CNN) under four training regimes: a centralized baseline and three federated strategies (FedAvg, FedProx, and FedBN), all sharing identical architecture and preprocessing. The centralized model, trained on a cohort with regular sleep architecture, achieves stable performance (accuracy 69.65%, macro-F1 0.6537). In contrast, naive FedAvg fails to converge under subject-level non-IID data (accuracy 14.21%, macro-F1 0.0601), with minority stages such as N1 and REM largely lost. FedProx yields only marginal improvement, while FedBN—by preserving client-specific batch-normalization statistics—achieves the best federated performance (accuracy 26.04%, macro-F1 0.1732) and greater stability across clients. These findings indicate that the main limitation of FL for sleep staging lies in physiological heterogeneity rather than model capacity, highlighting the need for heterogeneity-aware strategies in privacy-preserving sleep analytics. Full article
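The FedAvg aggregation step compared in this study is a sample-size-weighted average of client parameters; FedBN differs mainly in excluding batch-normalization parameters from the averaging. A minimal sketch with hypothetical toy numbers (not the paper's model weights):

```python
def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client parameter vectors,
    weighted by each client's local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
            for k in range(dim)]

# Toy example: three "clients" with different data volumes
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
print(fed_avg(weights, sizes))  # -> [3.5, 4.5]
```

Under the paper's one-subject-one-client setting, each client's update reflects a single subject's physiology, which is exactly why this naive average degrades on non-IID data.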

41 pages, 5360 KB  
Article
Jellyfish Search Algorithm-Based Optimization Framework for Techno-Economic Energy Management with Demand Side Management in AC Microgrid
by Vijithra Nedunchezhian, Muthukumar Kandasamy, Renugadevi Thangavel, Wook-Won Kim and Zong Woo Geem
Energies 2026, 19(2), 521; https://doi.org/10.3390/en19020521 - 20 Jan 2026
Abstract
The optimal allocation of Photovoltaic (PV) and wind-based renewable energy sources and Battery Energy Storage System (BESS) capacity is an important issue for the efficient operation of a microgrid network (MGN). The impact of the unpredictability of PV and wind generation needs to be smoothed out by coherent allocation of the BESS unit to meet the load demand. To address these issues, this article proposes efficient Energy Management System (EMS) and Demand Side Management (DSM) approaches for the optimal allocation of PV- and wind-based renewable energy sources and BESS capacity in the MGN. The DSM model helps to modify the peak load demand based on PV and wind generation, available BESS storage, and the utility grid. Based on the Real-Time Market Energy Price (RTMEP) of utility power, the charging/discharging pattern of the BESS and power exchange with the utility grid are scheduled adaptively. On this basis, a Jellyfish Search Algorithm (JSA)-based bi-level optimization model is developed that considers the optimal capacity allocation and power scheduling of PV and wind sources and BESS capacity to satisfy the load demand. The top-level planning model solves the optimal allocation of PV and wind sources with the aim of reducing the total power loss of the MGN. The proposed JSA-based optimization achieved a 24.04% power loss reduction (from 202.69 kW to 153.95 kW) at peak load conditions through optimal PV- and wind-based DG placement and sizing. The bottom-level model focuses on achieving the optimal operational configuration of the MGN through optimal power scheduling of PV, wind, BESS, and the utility grid with DSM-based load proportions, with the aim of minimizing the operating cost. Simulation results on the IEEE 33-node MGN demonstrate that the 20% DSM strategy attains the maximum operational cost savings of €ct 3196.18 (a reduction of 2.80%) over 24 h of operation, with a 46.75% reduction in peak-hour grid dependency.
The statistical analysis over 50 independent runs confirms the robustness of the JSA compared with Particle Swarm Optimization (PSO) and the Osprey Optimization Algorithm (OOA), with a standard deviation of only 0.00017 in the fitness function, demonstrating its superior convergence characteristics in solving the proposed optimization problem. Finally, based on the simulation outcome of the considered bi-level optimization problem, it can be concluded that the proposed JSA-based optimization approach efficiently optimizes the PV- and wind-based resource allocation along with the BESS capacity and helps to operate the MGN efficiently with reduced power loss and operating costs. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

19 pages, 482 KB  
Article
Development of the Green Cities Questionnaire (GCQ) in Germany: Focus on Mental Health, Willingness to Pay for Sustainability, and Incentives for Green Exercise
by Klemens Weigl
Sustainability 2026, 18(2), 1033; https://doi.org/10.3390/su18021033 - 20 Jan 2026
Abstract
Green cities can contribute to greater mental and physical well-being. In addition, many people enjoy being active outdoors (green exercise). As yet, no questionnaire jointly emphasises mental health, willingness to pay for sustainability, and the incentive of a green environment for physical exercise in cities. Therefore, I developed the new Green Cities Questionnaire (GCQ), comprising 18 items, and used it to survey the perceptions of 249 participants (130 female, 119 male, 0 diverse; aged 18 to 84). Then, I applied exploratory factor analyses where the three factors of mental health (MH; nine items), willingness-to-pay (WTP; five items), and green exercise (GE; four items) were extracted. Additional statistical analyses revealed that women reported higher values on the MH and GE factors than men. In particular, women and men reported a beneficial effect of green cities on mental health (higher ratings on MH than on GE and on WTP). However, there was no gender effect on WTP. From an urban-planning perspective, the two strongest implications are as follows: First, the GCQ facilitates measurement of the three key latent factors: MH, WTP, and GE. However, future validation studies with larger sample sizes and applications of the GCQ alongside additional similar and different recognised scales are necessary to establish convergent and discriminant validity. Second, mental health is reported to be much more important than WTP and GE. Hence, green initiatives, educational programs, and green city workshops should not only focus on expanding urban green spaces but also on providing appropriate relaxation areas to promote and foster psychological well-being and quality of life in green cities. Full article
(This article belongs to the Section Psychology of Sustainability and Sustainable Development)

29 pages, 19300 KB  
Article
Experimental Investigation of Wave Impact Loads Induced by a Three-Dimensional Dam Break
by Jon Martinez-Carrascal, Pablo Eleazar Merino-Alonso, Ignacio Mengual Berjon, Mario Amaro San Gregorio and Antonio Souto-Iglesias
J. Mar. Sci. Eng. 2026, 14(2), 199; https://doi.org/10.3390/jmse14020199 - 18 Jan 2026
Abstract
This study presents a detailed experimental investigation of wave impact loads generated by a 3D dam break flow over a dry horizontal bed. Three-dimensionality is induced by a rigid obstacle partially blocking the channel, tested in both symmetric and asymmetric configurations. Impact pressures have been measured at three transverse locations on a downstream vertical wall, and peak pressures, rise times, and pressure impulses have been statistically characterized based on repeated experiments until convergence is achieved. The results show that three-dimensional effects significantly modify the spatial distribution and intensity of impact pressures compared to classical 2D dam break cases. In the asymmetric configuration, the obstacle induces strong lateral redirection of the flow, leading to highly impulsive loads at unshielded locations and substantial pressure attenuation in shadowed regions. In contrast, the symmetric configuration produces more uniform pressure distributions with reduced peak values and weaker impulsive behavior. A probabilistic description of pressure peaks, rise times, and impulses is provided. The dataset offers new experimental benchmarks for the validation and calibration of numerical models aimed at predicting wave-induced structural loads in complex three-dimensional impact flows. Full article
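Characterizing impact statistics "based on repeated experiments until convergence is achieved" is typically operationalized as a stopping rule on running sample statistics. A minimal sketch of such a rule — the tolerance, window, and Gaussian stand-in measurements are assumptions, not the paper's procedure:

```python
import random

def mean_until_converged(sample_fn, tol=1e-3, window=50, max_n=100000):
    """Draw repeated measurements until the running mean stabilizes:
    stop once the mean changes by less than a relative tolerance `tol`
    over `window` consecutive new samples."""
    values, last_mean, stable = [], None, 0
    while len(values) < max_n:
        values.append(sample_fn())
        mean = sum(values) / len(values)
        if last_mean is not None and abs(mean - last_mean) <= tol * max(1.0, abs(mean)):
            stable += 1
        else:
            stable = 0          # a large jump resets the stability counter
        last_mean = mean
        if stable >= window:
            break
    return mean, len(values)

rng = random.Random(0)
mean, n = mean_until_converged(lambda: rng.gauss(10.0, 2.0))
print("converged mean:", round(mean, 2), "after", n, "samples")
```

The same pattern extends to the paper's peak pressures and impulses by replacing the mean with any statistic of interest (e.g., a percentile).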

26 pages, 11938 KB  
Article
Spatiotemporal Analysis of Progressive Rock Slope Landslide Destabilization and Multi-Parameter Reliability Analysis
by Ibrahim Haruna Umar, Jubril Izge Hassan, Chaoyi Yang and Hang Lin
Appl. Sci. 2026, 16(2), 939; https://doi.org/10.3390/app16020939 - 16 Jan 2026
Abstract
Progressive rock slope destabilization poses significant geohazard risks, necessitating advanced monitoring frameworks to detect precursory failure signals. This study presents a comprehensive time-dependent evaluation of the displacement probability (CTEDP) model, which integrates GNSS-derived spatiotemporal data with multi-parameter reliability indices to enhance landslide risk assessment. Five monitoring points on a destabilizing rock slope were analyzed from mid-November 2024 to early January 2025 using kinematic metrics (velocity, acceleration, and jerk), statistical measures (e.g., moving averages), and reliability indices (RI0, RI1, RI2, and RIcombined). Point 1 exhibited the most critical behavior, with a cumulative displacement of ~60 mm, peak velocities of 34.5 mm/day, and accelerations up to 1.15 mm/day2. The CTEDP for active points converged to 0.56–0.61, indicating sustained high risk. The 90th percentile displacement threshold was 58.48 mm for Point 1. Sensitivity analysis demonstrated that the GNSS-derived reliability indices dominated the RIcombined variance (r = 0.999, explaining 99.8% of variance). The first- and second-order reliability indices (RI1, RI2) at Point 1 exceeded the 60-index threshold, indicating a transition to Class B (“Low Risk—Trend Surveillance Required”) status, while other points showed coherent deformation of 37–45 mm. Results underscore the framework’s ability to integrate spatiotemporal displacement, kinematic precursors, and statistical variability for early-warning systems. This approach bridges gaps in landslide prediction by accounting for spatial heterogeneity and nonlinear geomechanical responses. Full article

24 pages, 1474 KB  
Article
A Fractional Hybrid Strategy for Reliable and Cost-Optimal Economic Dispatch in Wind-Integrated Power Systems
by Abdul Wadood, Babar Sattar Khan, Bakht Muhammad Khan, Herie Park and Byung O. Kang
Fractal Fract. 2026, 10(1), 64; https://doi.org/10.3390/fractalfract10010064 - 16 Jan 2026
Abstract
Economic dispatch in wind-integrated power systems is a critical challenge, yet many recent metaheuristics suffer from premature convergence, heavy parameter tuning, and limited ability to escape local optima in non-smooth valve-point landscapes. This study proposes a new hybrid optimization framework, the Fractional Grasshopper Optimization algorithm (FGOA), which integrates fractional-order calculus into the standard Grasshopper Optimization algorithm (GOA) to enhance its search efficiency. The FGOA method is applied to the economic load dispatch (ELD) problem, a nonlinear and nonconvex task that aims to minimize fuel and wind-generation costs while satisfying practical constraints such as valve-point loading effects (VPLEs), generator operating limits, and the stochastic behavior of renewable energy sources. Owing to the increasing role of wind energy, stochastic wind power is modeled through the incomplete gamma function (IGF). To further improve computational accuracy, FGOA is hybridized with Sequential Quadratic Programming (SQP), where FGOA provides global exploration and SQP performs local refinement. The proposed FGOA-SQP approach is validated on systems with 3, 13, and 40 generating units, including mixed thermal and wind sources. Comparative evaluations against recent metaheuristic algorithms demonstrate that FGOA-SQP achieves more accurate and reliable dispatch outcomes. Specifically, the proposed approach achieves fuel cost reductions ranging from 0.047% to 0.71% for the 3-unit system, 0.31% to 27.25% for the 13-unit system, and 0.69% to 12.55% for the 40-unit system when compared with state-of-the-art methods. Statistical results, particularly minimum fitness values, further confirm the superior performance of the FGOA-SQP framework in addressing the ELD problem under wind power uncertainty. Full article
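The two-stage pattern described here — a global metaheuristic for exploration, then a local solver for refinement — can be illustrated generically. In this sketch, uniform random sampling stands in for FGOA and coordinate pattern search stands in for SQP (neither algorithm is reproduced), and the Rastrigin function is only a multimodal stand-in for the valve-point dispatch landscape:

```python
import math
import random

def rastrigin(x):
    """Multimodal stand-in objective (not the paper's dispatch problem)."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def global_then_local(f, dim=2, n_global=3000, seed=0):
    rng = random.Random(seed)
    # Stage 1: global exploration (random sampling stands in for FGOA)
    best = min(([rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_global)),
               key=f)
    # Stage 2: local refinement (coordinate pattern search stands in for SQP)
    step = 0.5
    while step > 1e-6:
        improved = False
        for k in range(dim):
            for delta in (step, -step):
                trial = best[:]
                trial[k] += delta
                if f(trial) < f(best):
                    best, improved = trial, True
        if not improved:
            step /= 2             # shrink the stencil when no move helps
    return best, f(best)

x, fx = global_then_local(rastrigin)
print("refined point:", [round(v, 3) for v in x], "objective:", round(fx, 4))
```

The division of labor mirrors the paper's FGOA-SQP design: the global stage only needs to land in a good basin; the local stage then polishes the solution to high precision.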

15 pages, 2092 KB  
Article
Improved NB Model Analysis of Earthquake Recurrence Interval Coefficient of Variation for Major Active Faults in the Hetao Graben and Northern Marginal Region
by Jinchen Li and Xing Guo
Entropy 2026, 28(1), 107; https://doi.org/10.3390/e28010107 - 16 Jan 2026
Abstract
This study presents an improved Nishenko–Buland (NB) model to address systematic biases in estimating the coefficient of variation for earthquake recurrence intervals based on a normalizing function T/T_ave. Through Monte Carlo simulations, we demonstrate that traditional NB methods significantly underestimate the coefficient of variation when applied to limited paleoseismic datasets, with deviations reaching between 30 and 40% for small sample sizes. We developed a linear transformation and iterative optimization approach that corrects these statistical biases by standardizing recurrence interval data from different sample sizes to conform to a common standardized distribution. Application to 26 fault segments across 15 major active faults in the Hetao graben system yields a corrected coefficient of variation of α = 0.381, representing a 24% increase over the traditional method (α0 = 0.307). This correction demonstrates that conventional approaches systematically underestimate earthquake recurrence variability, potentially compromising seismic hazard assessments. The improved model successfully eliminates sampling bias through iterative convergence, providing more reliable parameters for probability distributions in renewal-based earthquake forecasting. Full article
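The small-sample bias this paper corrects for is easy to reproduce by Monte Carlo: the sample coefficient of variation of a skewed distribution is systematically underestimated for small n. In this sketch the lognormal data and its parameters are illustrative assumptions, not the paper's recurrence-interval model:

```python
import math
import random

def sample_cov(xs):
    """Sample coefficient of variation: std (n-1 denominator) over mean."""
    n, m = len(xs), sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return math.sqrt(var) / m

def mean_estimated_cov(n, trials=4000, sigma=0.5, seed=0):
    """Average the sample CoV over many synthetic datasets of size n."""
    rng = random.Random(seed)
    return sum(sample_cov([rng.lognormvariate(0.0, sigma) for _ in range(n)])
               for _ in range(trials)) / trials

true_cov = math.sqrt(math.exp(0.5 ** 2) - 1)   # exact CoV of lognormal(0, 0.5)
print("true:", round(true_cov, 3),
      "| mean estimate at n=5:", round(mean_estimated_cov(5), 3),
      "| at n=200:", round(mean_estimated_cov(200), 3))
```

The n = 5 estimate sits well below the true value while the n = 200 estimate nearly recovers it, which is the qualitative behavior the improved NB model standardizes away.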

19 pages, 7967 KB  
Article
State-of-Charge Estimation of Lithium-Ion Batteries Based on GMMCC-AEKF in Non-Gaussian Noise Environment
by Fuxiang Li, Haifeng Wang, Hao Chen, Limin Geng and Chunling Wu
Batteries 2026, 12(1), 29; https://doi.org/10.3390/batteries12010029 - 14 Jan 2026
Abstract
To improve the accuracy and robustness of lithium-ion battery state of charge (SOC) estimation, this paper proposes a generalized mixture maximum correlation-entropy criterion-based adaptive extended Kalman filter (GMMCC-AEKF) algorithm, addressing the performance degradation of the traditional extended Kalman filter (EKF) under non-Gaussian noise and inaccurate initial conditions. Based on the GMMCC theory, the proposed algorithm introduces an adaptive mechanism and employs two generalized Gaussian kernels to construct a mixed kernel function, thereby formulating the generalized mixture correlation-entropy criterion. This enhances the algorithm’s adaptability to complex non-Gaussian noise. Simultaneously, by incorporating adaptive filtering concepts, the state and measurement covariance matrices are dynamically adjusted to improve stability under varying noise intensities and environmental conditions. Furthermore, the use of statistical linearization and fixed-point iteration techniques effectively improves both the convergence behavior and the accuracy of nonlinear system estimation. To investigate the effectiveness of the suggested method, experiments for SOC estimation were implemented using two lithium-ion cells featuring distinct rated capacities. These tests employed both dynamic stress test (DST) and federal test procedure (FTP) profiles under three representative temperature settings: 40 °C, 25 °C, and 10 °C. The experimental findings prove that when exposed to non-Gaussian noise, the GMMCC-AEKF algorithm consistently outperforms both the traditional EKF and the generalized mixture maximum correlation-entropy-based extended Kalman filter (GMMCC-EKF) under various test conditions. Specifically, under the 25 °C DST profile, GMMCC-AEKF improves estimation accuracy by 86.54% and 10.47% over EKF and GMMCC-EKF, respectively, for the No. 1 battery. Under the FTP profile for the No. 2 battery, it achieves improvements of 55.89% and 28.61%, respectively. 
Even under extreme temperatures (10 °C, 40 °C), GMMCC-AEKF maintains high accuracy and stable convergence, and the algorithm demonstrates rapid convergence to the true SOC value. In summary, the GMMCC-AEKF confirms excellent estimation accuracy under various temperatures and non-Gaussian noise conditions, contributing a practical approach for accurate SOC estimation in power battery systems. Full article
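The mixed-kernel idea behind the GMMCC can be illustrated with two plain Gaussian kernels: a narrow kernel is sensitive near zero residual while a wide kernel keeps moderate residuals informative, and very large (impulsive) residuals are strongly down-weighted. Note the paper uses generalized Gaussian kernels; the plain Gaussians, mixing weight λ, and bandwidths here are arbitrary illustrative choices.

```python
import math

def mixed_kernel_weight(residual, lam=0.5, sigma1=0.5, sigma2=2.0):
    """Weight from a two-kernel Gaussian mixture: large residuals
    (likely impulsive, non-Gaussian noise) receive small weight."""
    g = lambda e, s: math.exp(-e * e / (2 * s * s))
    return lam * g(residual, sigma1) + (1 - lam) * g(residual, sigma2)

for e in (0.0, 0.5, 2.0, 8.0):
    print("residual", e, "-> weight", round(mixed_kernel_weight(e), 4))
```

In a correntropy-based filter, weights of this kind rescale the innovation (e.g., inside a fixed-point iteration), so a single outlier measurement cannot drag the state estimate the way it would under a quadratic loss.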
17 pages, 285 KB  
Article
Exploring the Use of AI-Based Patient Simulations to Support Cultural Competence Development in Nursing Students: A Mixed-Methods Study
by Małgorzata Lesińska-Sawicka and Bartłomiej Michalak
Educ. Sci. 2026, 16(1), 126; https://doi.org/10.3390/educsci16010126 - 14 Jan 2026
Abstract
(1) Background: Developing cultural competence and reflective communication skills remains a challenge in nursing education. Traditional teaching methods often provide limited opportunities for safe practice of culturally sensitive interactions in emotionally complex situations. Artificial intelligence (AI)–based patient simulations may offer a scalable approach to experiential and reflective learning. (2) Aim: This study explored the educational potential of AI-based patient simulations in supporting nursing students’ self-assessed cultural competence, reflective awareness, and communication confidence. (3) Methods: A convergent mixed-methods pre–post study was conducted among 24 second-cycle nursing students. Participants engaged in individual AI-based patient simulations with simulated patients representing diverse cultural contexts. Quantitative data were collected using an exploratory cultural competence self-assessment scale administered before and after the simulation. Qualitative data included post-simulation reflection forms and AI-student interaction transcripts, analysed using inductive thematic analysis. (4) Results: A statistically significant increase in overall self-assessed cultural competence was observed (Wilcoxon signed-rank test: Z = 4.05, p < 0.001, r = 0.59), with the greatest improvements in communication adaptability and perceived communication sufficiency. Qualitative findings indicated an emotional shift from uncertainty to engagement, heightened awareness of cultural complexity, reflective reassessment of assumptions, and high perceived educational value of AI simulations. (5) Conclusions: AI-based patient simulations represent a promising pedagogical tool for fostering reflective and communication-oriented learning in culturally complex nursing contexts. 
Their primary value lies in supporting experiential learning, emotional engagement, and the development of cultural humility, suggesting their potential role as a complementary educational strategy in advanced nursing education. Full article
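The reported effect size can be roughly reproduced from the test statistic using Rosenthal's convention r = Z / √N, where N is the number of observations entering the test. A quick sanity check, assuming N counts the pre- and post-measurements of the 24 participants (N = 48; the exact N the authors used, e.g. after tie removal, is not stated):

```python
import math

def wilcoxon_effect_size(z, n_obs):
    """Rosenthal's effect size for a Wilcoxon signed-rank test:
    r = Z / sqrt(N), with N the number of observations in the test."""
    return z / math.sqrt(n_obs)

# 24 participants measured pre and post -> N = 48 observations (assumed).
r = wilcoxon_effect_size(4.05, 48)
print(round(r, 2))  # ~0.58, consistent with the reported r = 0.59
```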
38 pages, 12112 KB  
Article
Enhanced Educational Optimization Algorithm Based on Student Psychology for Global Optimization Problems and Real Problems
by Wenyu Miao, Katherine Lin Shu and Xiao Yang
Biomimetics 2026, 11(1), 70; https://doi.org/10.3390/biomimetics11010070 - 14 Jan 2026
Abstract
To address the insufficient exploration ability, susceptibility to local optima, and limited convergence accuracy of the standard Student Psychology-Based Optimization (SPBO) algorithm in three-dimensional UAV trajectory planning, we propose an enhanced variant, Enhanced SPBO (ESPBO). ESPBO augments SPBO with three complementary strategies: (i) Time-Adaptive Scheduling, which uses normalized time (τ=t/T) to schedule global step-size shrinking, Gaussian fine-tuning, and Lévy flight intensity, enabling strong early exploration and fine late-stage exploitation; (ii) Mentor Pool Guidance, which selects a top-K mentor set and applies time-varying guidance weights to reduce misleading attraction and improve directional stability; and (iii) Directional Jump Exploration, which couples a differential vector with Lévy flights to strengthen basin-crossing while keeping the differential step bounded for robustness. Numerical experiments on CEC2017, CEC2020 and CEC2022 benchmark functions compare ESPBO with Grey Wolf Optimization (GWO), Whale Optimization Algorithm (WOA), Improved multi-strategy adaptive Grey Wolf Optimization (IAGWO), Dung Beetle Optimization (DBO), Snake Optimization (SO), Rime Optimization (RIME), and the original SPBO. We evaluate best path length, mean trajectory length, standard deviation, and convergence curves and assess statistical stability via Wilcoxon rank-sum tests (significance level 0.05) and the Friedman test. ESPBO significantly outperforms the comparison algorithms in path-planning accuracy and convergence stability, ranking first on all three test suites. Applied to 3D UAV trajectory planning in mountainous terrain with no-fly zones, ESPBO achieves an optimal path length of 199.8874 m, an average path length of 205.8179 m, and a standard deviation of 5.3440, surpassing all baselines; notably, ESPBO’s average path length is even lower than the optimal path length of other algorithms. 
These results demonstrate that ESPBO provides an efficient and robust solution for UAV trajectory optimization in intricate environments and extends the application of swarm intelligence algorithms in autonomous navigation. Full article
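The Lévy-flight component of strategies (i) and (iii) can be sketched with Mantegna's algorithm, a common way to draw Lévy-stable steps in swarm optimizers. This is an illustrative sketch only: ESPBO's exact jump formula is not given here, and the `directional_jump` coefficients (0.5, 0.3, 0.01) and the τ-scaling are assumptions standing in for the paper's time-adaptive schedule.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-stable step via Mantegna's algorithm. Heavy tails
    yield occasional long jumps that help escape local optima
    (basin-crossing), while most steps remain small."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def directional_jump(x, best, other, t, T, rng=None):
    """Sketch of a directional jump: a bounded differential vector
    toward a guiding solution, shrunk over normalized time tau = t/T,
    plus a small Levy perturbation. Coefficients are illustrative."""
    tau = t / T
    diff = 0.5 * (best - x) + 0.3 * (x - other)
    return x + (1.0 - tau) * diff + 0.01 * levy_step(x.size, rng=rng)
```

The shrinking factor (1 - τ) gives strong early exploration and fine late-stage exploitation, mirroring the time-adaptive scheduling the abstract describes.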
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)
22 pages, 5277 KB  
Article
High-Speed Microprocessor-Based Optical Instrumentation for the Detection and Analysis of Hydrodynamic Cavitation Downstream of an Additively Manufactured Nozzle
by Luís Gustavo Macêdo West, André Jackson Ramos Simões, Leandro do Rozário Teixeira, Lucas Ramalho Oliveira, Juliane Grasiela de Carvalho Gomes, Igor Silva Moreira dos Anjos, Antonio Samuel Bacelar de Freitas Devesa, Leonardo Rafael Teixeira Cotrim Gomes, Lucas Gomes Pereira, Iran Eduardo Lima Neto, Júlio Cesar de Souza Inácio Gonçalves, Luiz Carlos Simões Soares Junior, Germano Pinto Guedes, Geydison Gonzaga Demetino, Marcus Vinícius Santos da Silva, Vitor Leão Filardi, Vitor Pinheiro Ferreira, André Luiz Andrade Simões, Luciano Matos Queiroz and Iuri Muniz Pepe
Fluids 2026, 11(1), 21; https://doi.org/10.3390/fluids11010021 - 14 Jan 2026
Abstract
This study presents the development and validation of a high-speed optical data acquisition system for detecting and characterizing hydrodynamic cavitation downstream of a triangular nozzle. The system integrates a PIN photodiode, a transimpedance amplifier, and a high-sampling-rate microcontroller. Its performance was first evaluated using controlled sinusoidal signals, and statistical stability was assessed as a function of the number of acquired samples. Experiments were subsequently conducted in a converging–diverging conduit under biphasic flow conditions, where mean irradiance, standard deviation, and frequency spectra were analyzed downstream of the nozzle. The optical signal distributions revealed transitions in flow behavior associated with cavitation development, which were quantified through statistical metrics and spectral features. The Strouhal number was estimated from dominant frequencies extracted from the spectra, exhibiting a non-monotonic dependence on the Reynolds number, consistent with changes in flow structure and turbulence intensity. Spectral analysis further indicated frequency bands associated with energy transfer across turbulent scales and bubble dynamics. Overall, the results demonstrate that the proposed optical system constitutes a viable and non-intrusive methodology for detecting and characterizing cavitation intensity in a way that complements other optical and acoustic methods. Full article
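The Strouhal-number estimate described above follows the standard definition St = f·L/U, with f the dominant frequency extracted from the spectrum, L a characteristic length, and U the mean flow velocity. A minimal sketch of that pipeline on a synthetic signal (the 120 Hz tone, 5 mm length scale, and 3 m/s velocity below are illustrative assumptions, not values from the study):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency of a sampled signal from the magnitude of
    its one-sided FFT, excluding the DC bin."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1 + np.argmax(spec[1:])]

def strouhal(f, length, velocity):
    """St = f * L / U: dominant frequency f, characteristic length L
    (e.g. a nozzle throat dimension), mean flow velocity U."""
    return f * length / velocity

# Synthetic "irradiance" record: a 120 Hz oscillation plus noise,
# sampled at 10 kHz for 1 s (all values illustrative).
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 120 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
f_dom = dominant_frequency(sig, fs)
print(f_dom)                         # ~120 Hz recovered from the spectrum
print(strouhal(f_dom, 0.005, 3.0))   # St for L = 5 mm, U = 3 m/s
```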