Search Results (685)

Search Parameters:
Keywords = recursive prediction

24 pages, 3014 KB  
Article
Data-Driven Computation Scheme for Duncan–Chang EB Model
by Chaojun Han, Qianhui Liu, Xiaohang Li and Hezuo Zhang
Mathematics 2026, 14(5), 751; https://doi.org/10.3390/math14050751 - 24 Feb 2026
Abstract
This paper extends the data-driven computational mechanics paradigm to nonlinear materials characterized by the Duncan–Chang Elastic-Bulk (E-B) constitutive model. Unlike in linear elastic systems, geotechnical media exhibit stress-dependent tangent moduli and non-convex constitutive manifolds. We propose a recursive nested data-driven solver that dynamically adapts the phase-space distance metric to account for pressure-dependent hardening. A rigorous mathematical analysis of convergence is provided, demonstrating that the solver’s performance is governed by the local transversality between the conservation law constraint set and the nonlinear material manifold. We derive explicit error bounds that couple spatial discretization resolution with material data density. Numerical experiments using triaxial test data from a high-altitude region validate the theoretical predictions, showing that the proposed scheme demonstrates convergence in single-element tests. Full article
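The distance-minimizing search that drives the data-driven computational mechanics paradigm can be illustrated with a minimal one-dimensional sketch (the dataset, the weighting constant `C`, and the hyperbolic curve below are invented for illustration; the paper's recursive nested solver and adaptive metric are not reproduced):

```python
import numpy as np

def nearest_material_state(eps, sig, data_eps, data_sig, C=1.0):
    """Distance-minimizing search in (strain, stress) phase space:
    d^2 = C*(eps - eps_i)^2 + (sig - sig_i)^2 / C  (1-D illustration)."""
    d2 = C * (data_eps - eps) ** 2 + (data_sig - sig) ** 2 / C
    return int(np.argmin(d2))

# Invented hyperbolic dataset standing in for triaxial test data
strains = np.linspace(0.0, 0.1, 101)
stresses = strains / (0.002 + 0.08 * strains)   # Duncan-Chang-like hyperbola
idx = nearest_material_state(0.05, 8.0, strains, stresses, C=1e4)
```

In the paradigm the paper extends, this local search alternates with a projection onto the conservation-law constraint set until the data-point assignment stops changing.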
17 pages, 1450 KB  
Article
Research on SoC Estimation of Lithium Batteries Based on LDL-MIAUKF Algorithm
by Zhihua Xu and Tinglong Pan
Eng 2026, 7(3), 100; https://doi.org/10.3390/eng7030100 - 24 Feb 2026
Abstract
Accurate state-of-charge (SoC) estimation is essential for ensuring the safety, efficiency, and longevity of lithium-ion batteries in electric vehicles and energy storage systems. However, conventional methods such as ampere-hour (AH) integration and the extended Kalman filter (EKF) often suffer from error accumulation, sensitivity to initial conditions, and inadequate handling of strong nonlinearities and time-varying noise. To overcome these limitations, this paper proposes a novel LDL-Decomposition-Based Multi-Innovation Adaptive Unscented Kalman Filter (LDL-MIAUKF) algorithm that integrates three key innovations: (1) multi-innovation theory to exploit historical measurement sequences for enhanced state correction; (2) an adaptive mechanism to dynamically adjust process and observation noise covariances in real time; and (3) LDL decomposition (instead of Cholesky) to guarantee numerical stability and positive definiteness of the covariance matrix during sigma point generation. A second-order RC equivalent circuit model is established for the lithium battery, and its parameters are identified online using the forgetting factor recursive least squares (FFRLS) method under Hybrid Pulse Power Characterization (HPPC) test conditions. The proposed LDL-MIAUKF algorithm is then applied to estimate SoC using real battery data. Experimental results demonstrate that the LDL-MIAUKF achieves a maximum SoC estimation error of less than 1% at 25 °C and effectively tracks the reference SoC with high robustness. Furthermore, the terminal voltage prediction error of the identified model remains within ±0.1 V, confirming model accuracy. These results validate that the proposed LDL-MIAUKF algorithm significantly improves estimation accuracy, stability, and adaptability, making it a promising solution for advanced battery management systems. Full article
(This article belongs to the Section Electrical and Electronic Engineering)
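The paper's use of LDL decomposition in place of Cholesky for sigma-point generation can be sketched as follows (a generic UKF fragment with standard scaling defaults, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import ldl

def sigma_points(x, P, alpha=1e-3, kappa=0.0):
    """Generate 2n+1 UKF sigma points using an LDL^T factorization of the
    scaled covariance (instead of a Cholesky factor)."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    L, D, _ = ldl((n + lam) * P)            # P must be symmetric positive definite
    S = L @ np.sqrt(np.maximum(D, 0.0))     # matrix square root: S @ S.T == (n + lam) * P
    return np.array([x] + [x + S[:, i] for i in range(n)]
                        + [x - S[:, i] for i in range(n)])

pts = sigma_points(np.array([1.0, 0.0]), np.eye(2))
```

Clipping the diagonal factor at zero is one way the LDL route tolerates a covariance that roundoff has pushed slightly indefinite, which is where a plain Cholesky factorization would fail.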
25 pages, 896 KB  
Article
Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning
by Negasa Berhanu Fite, Getachew Mamo Wegari and Heidi Steendam
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211 - 23 Feb 2026
Viewed by 45
Abstract
Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. 
Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems. Full article
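The recursive Kalman refinement stage described above reduces, in its simplest scalar form, to the following sketch (a random-walk KF on synthetic 1-D position estimates, not the SCENE-VLP pipeline; the noise levels `q` and `r` are invented):

```python
import numpy as np

def kalman_refine(z, q=1e-3, r=1e-2):
    """Scalar random-walk Kalman filter: recursively refine a sequence of
    noisy position estimates z (process noise q, measurement noise r)."""
    x, p = z[0], 1.0
    out = [x]
    for zk in z[1:]:
        p += q                    # predict: position modeled as a random walk
        k = p / (p + r)           # Kalman gain
        x += k * (zk - x)         # correct with the new raw estimate
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
raw = 1.0 + 0.1 * rng.normal(size=200)   # noisy estimates of a fixed position
refined = kalman_refine(raw)
```

On this synthetic sequence the refined track has a visibly smaller error than the raw estimates, which mirrors the paper's finding that recursive filtering consistently reduces positioning error over unfiltered predictions.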
19 pages, 1215 KB  
Article
On the Dynamics of Ergonomic Load in Biomimetic Self-Organizing Systems
by Nikitas Gerolimos, Vasileios Alevizos and Georgios Priniotakis
Electronics 2026, 15(4), 889; https://doi.org/10.3390/electronics15040889 - 21 Feb 2026
Viewed by 152
Abstract
Traditional ergonomic considerations in human–machine and human–swarm systems have primarily relied on static diagnostic snapshots, which often fail to capture the temporal accumulation and non-linear dissipation of musculoskeletal fatigue. As Industry 5.0 transitions toward immersive, human-centric cyber-physical systems, redefining ergonomic load as an endogenous state variable allows for real-time control of musculoskeletal integrity. This work proposes the Dynamic Integrity Governor (DIG) framework, which treats ergonomic load as a normalized, dimensionless state variable ξt that evolves according to a stochastic proxy of recursive Newton–Euler dynamics. Leveraging a machine-perception-aware Adaptive Event-Triggered Mechanism (AETM) and the Multi-modal Flamingo Search Algorithm (MMFSA), we develop a decentralized architecture that redistributes ergonomic demands in real-time. The framework utilizes a 7-DOF kinematic model and Control Barrier Functions (CBF) to maintain human–swarm interaction within safe biomechanical boundaries, effectively filtering stochastic sensor noise through Girard-based stability buffers. Computational validation via N = 1000 Monte Carlo runs demonstrates that the proposed strategy achieves a 79.97% reduction in control updates (SD = 0.19%; p < 0.0001; Cohen’s d = 2.41), ensuring a positive minimum inter-event time (MIET) to prevent the Zeno phenomenon and supporting carbon-aware AI operations. The integration of variable prediction horizons yields an 80.69% improvement in solving time, while ensuring a minimal computational footprint suitable for real-time edge deployment. The identification of optimal postural niches maintains peak ergonomic load at 41.42%, representing a significant safety margin relative to the integrity barrier. While validated against a 50th percentile male profile, the DIG framework establishes a modular foundation for personalized ergonomic governors in inclusive Industry 5.0 applications. Full article
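The event-triggered idea behind the AETM, stripped to a static threshold, looks like this (a toy scalar sketch; the paper's mechanism adapts the threshold online and operates on a 7-DOF kinematic model):

```python
def event_triggered_updates(signal, threshold):
    """Transmit only when the state deviates from the last transmitted value
    by more than the threshold; returns the indices that fire."""
    sent = [0]
    last = signal[0]
    for i, s in enumerate(signal[1:], start=1):
        if abs(s - last) > threshold:
            sent.append(i)
            last = s
    return sent

fired = event_triggered_updates([0.0, 0.1, 0.2, 1.0, 1.05], threshold=0.5)
```

The fraction of suppressed samples, `1 - len(fired) / len(signal)`, is the quantity the paper drives to roughly 80% with its adaptive threshold while keeping a positive minimum inter-event time.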
30 pages, 1973 KB  
Article
Human-Centered AI Perception Prediction in Construction: A Regularized Machine Learning Approach for Industry 5.0
by Annamária Behúnová, Matúš Pohorenec, Tomáš Mandičák and Marcel Behún
Appl. Sci. 2026, 16(4), 2057; https://doi.org/10.3390/app16042057 - 19 Feb 2026
Viewed by 159
Abstract
Industry 5.0 emphasizes human-centered integration of artificial intelligence in industrial contexts, yet successful adoption depends critically on workforce perception and acceptance. This research develops and validates a machine learning framework for predicting AI-related perceptions and expected impacts in the construction industry under small sample constraints typical of specialized industrial surveys. Specifically, the study aims to develop and empirically validate a predictive AI decision support model that estimates the expected impact of AI adoption in the construction sector based on digital competencies, ICT utilization, AI training and experience, and AI usage at both individual and organizational levels, operationalized through a composite AI Impact Index and two process-oriented outcomes (perceived task automation and perceived cost reduction). Using a dataset of 51 survey responses from Slovak construction professionals collected in 2025, we implement a methodologically rigorous approach specifically designed for limited-data regimes. The framework encompasses ordinal target simplification from five to three classes, dimensionality reduction through theoretically grounded composite indices reducing features from 15 to 7, exclusive deployment of low variance regularized models, and leave-one-out cross-validation for unbiased performance estimation. The optimal model (Lasso regression with recursive feature elimination) predicts cost reduction perception with R2 = 0.501, MAE = 0.551, and RMSE = 0.709, while six classification targets achieve weighted F1 = 0.681, representing statistically optimal performance given sample constraints and perception measurement variability. Comparative evaluation confirms regularized models outperform high variance alternatives: random forest (R2 = 0.412) and gradient boosting (R2 = 0.292) exhibit substantially lower generalization performance, empirically validating the bias-variance trade-off rationale. 
Key methodological contributions include explicit bias-variance optimization preventing overfitting, feature selection via RFE reducing input space to six predictors (personal AI usage, AI impact on budgeting, ICT utilization, AI training, company size, and age), and demonstration that principled statistical approaches achieve meaningful predictions without requiring large-scale datasets or complex architectures. The framework provides a replicable blueprint for perception and impact prediction in data-constrained Industry 5.0 contexts, enabling targeted interventions, including customized training programs, strategic communication prioritization, and resource allocation for change management initiatives aligned with predicted adoption patterns. Full article
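The core of the study's pipeline, Lasso regression with recursive feature elimination validated by leave-one-out cross-validation, can be sketched with scikit-learn on synthetic data (the data, `alpha`, and candidate-feature count below are invented; only the sample size of 51 and the six-predictor target mirror the paper):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 15))                 # 51 respondents, 15 candidate features
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=51)

# Recursive feature elimination down to six predictors, as in the study
selector = RFE(Lasso(alpha=0.01), n_features_to_select=6).fit(X, y)

# Unbiased error estimate on the reduced feature set via leave-one-out CV
scores = cross_val_score(Lasso(alpha=0.01), X[:, selector.support_], y,
                         cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
```

The combination of a strongly regularized linear model and LOOCV is exactly the low-variance regime the paper argues for under small-sample constraints.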
28 pages, 4267 KB  
Article
Machine Learning Framework for HbA1c Prediction: Data Enrichment, Cost Optimization, and Interpretability Through Stratified Regression and Multi-Stage Feature Selection
by Mohamed Ezz, Majed Abdullah Alrowaily, Menwa Alshammeri, Alshaimaa A. Tantawy, Azzah Allahim and Ayman Mohamed Mostafa
Diagnostics 2026, 16(4), 607; https://doi.org/10.3390/diagnostics16040607 - 19 Feb 2026
Viewed by 161
Abstract
Background: Measuring glycated hemoglobin (HbA1c) is essential for assessing long-term glycemic control, yet direct testing remains expensive and underutilized in many large-scale health surveys and resource-constrained settings. This study aims to (i) deliver a highly accurate and interpretable ML model for predicting HbA1c from routinely collected clinical, biochemical, and demographic data, (ii) reduce dependency on extensive laboratory panels by identifying a compact, cost-efficient subset of key predictors, and (iii) establish a transferable, explainable modeling framework applicable across chronic disease biomarkers. Unlike prior HbA1c prediction studies that focus primarily on classification or accuracy-driven models, this work introduces a unified framework for continuous HbA1c regression that jointly integrates cost-oriented feature parsimony, stratified regression validation, and explainability by design. Methods: We aggregated data from the National Health and Nutrition Examination Survey (NHANES) cycles 2007–2020, encompassing 66,148 records and 224 candidate features. We implemented a two-stage feature selection pipeline: Incremental Correlation Selection (ICS) to narrow the variable space, followed by Recursive Feature Elimination with Cross-Validation (RFECV) to isolate the most informative features. Model interpretability was assessed using partial dependence plots and feature importance analysis. Results: The optimal model, LightGBMRegressor with most-frequent imputation, achieved R2 = 0.7161, MAE = 0.334, MSE = 0.304, and MAPE = 5.56%, while using only 40 selected features. Interpretability analysis revealed clinically coherent relationships that align with physiological expectations. Discussion: The proposed framework maintains robust predictive performance while substantially reducing the number of required input features, enabling cost-efficient HbA1c estimation together with transparent, physiologically coherent model insights. 
The framework consolidates continuous HbA1c prediction, cost-aware feature selection, stratified evaluation, and explainability within a single pipeline. Conclusions: This study advances beyond existing approaches and offers a practical blueprint for scalable biomarker estimation; its explainable, efficient, and generalizable design positions it as a strong candidate for clinical decision-support and population-health applications. Full article
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
18 pages, 8819 KB  
Article
Comparison of Graph Neural Networks and Traditional Machine Learning for Property Prediction in All-Inorganic Perovskite Materials
by Jingyu Liu, Xueqiong Su, Lishan Yang, Jiansen Ding, Jin Wang, Xing Ling, Yong Pan, Zhijun Wang, Wei Zhao and Yang Bu
Inorganics 2026, 14(2), 58; https://doi.org/10.3390/inorganics14020058 - 13 Feb 2026
Viewed by 214
Abstract
Machine learning (ML) methods have been widely explored for predicting material properties. However, due to the rapid development of ML techniques and the diversity of available models, performance comparisons between traditional and graph-based machine learning models remain limited. Therefore, we evaluate 11 conventional ML models alongside the graph neural network-based Crystal Graph Convolutional Neural Network (CGCNN) for predicting three key properties—formation energy (Ef), band gap (Eg), and energy above hull (Eh)—across a dataset comprising single perovskites, double perovskites, and their combined structures. The results demonstrate that for single perovskites, CGCNN exhibits gains of over 20% in the root mean square error (RMSE) relative to the second-best model (Gradient Boosting Regression), achieving values of 0.205 eV/atom (Ef), 0.718 eV (Eg), and 0.167 eV/atom (Eh). Prediction accuracy for double perovskites is significantly enhanced by training CGCNN on a combined dataset, particularly for Eh, where the coefficient of determination (R2) improves approximately 68.1-fold compared to models trained exclusively on double-perovskite data. Feature importance analysis via one-shot, permutation-based, and recursive feature elimination (RFE) methods reveals that optimal model performance requires retention of at least the top 20 critical features. Furthermore, feature utilization patterns of CGCNN across different prediction tasks are visualized. This work provides actionable guidelines for model selection and feature engineering in perovskite property prediction, establishing a benchmark for future ML-driven materials discovery. Full article
(This article belongs to the Special Issue Recent Progress in Perovskites)
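Permutation-based feature importance, one of the three ranking methods the authors compare, can be sketched as follows (synthetic data and model settings are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] + X[:, 1]                 # only the first two features matter

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]   # most important first
```

Ranking features this way and retraining on progressively truncated subsets is how one would test the paper's observation that at least the top 20 features must be retained for optimal performance.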
60 pages, 1234 KB  
Article
Leveraging Structural Symmetry for IoT Security: A Recursive InterNetwork Architecture Perspective
by Peyman Teymoori and Toktam Ramezanifarkhani
Computers 2026, 15(2), 125; https://doi.org/10.3390/computers15020125 - 13 Feb 2026
Viewed by 297
Abstract
The Internet of Things (IoT) has transformed modern life through interconnected devices enabling automation across diverse environments. However, its reliance on legacy network architectures has introduced significant security vulnerabilities and efficiency challenges—for example, when Datagram Transport Layer Security (DTLS) encrypts transport-layer communications to protect IoT traffic, it simultaneously blinds intermediate proxies that need to inspect message contents for protocol translation and caching, forcing a fundamental trade-off between security and functionality. This paper presents an architectural solution based on the Recursive InterNetwork Architecture (RINA) to address these issues. We analyze current IoT network stacks, highlighting their inherent limitations—particularly how adding security at one layer often disrupts functionality at others, forcing a detrimental trade-off between security and performance. A central principle underlying our approach is the role of structural symmetry in RINA’s design. Unlike the heterogeneous, protocol-specific layers of TCP/IP, RINA exhibits recursive self-similarity: every Distributed IPC Facility (DIF), regardless of its position in the network hierarchy, instantiates identical mechanisms and offers the same interface to layers above. This architectural symmetry ensures predictable, auditable behavior while enabling policy-driven asymmetry for context-specific security enforcement. By embedding security within each layer and allowing flexible layer arrangement, RINA mitigates common IoT attacks and resolves persistent issues such as the inability of Performance Enhancing Proxies to operate on encrypted connections. We demonstrate RINA’s applicability through use cases spanning smart homes, healthcare monitoring, autonomous vehicles, and industrial edge computing, showcasing its adaptability to both RINA-native and legacy device integration. 
Our mixed-methods evaluation combines qualitative architectural analysis with quantitative experimental validation, providing both theoretical foundations and empirical evidence for RINA’s effectiveness. We also address emerging trends including AI-driven security and massive IoT scalability. This work establishes a conceptual foundation for leveraging recursive symmetry principles to achieve secure, efficient, and scalable IoT ecosystems. Full article
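RINA's recursive self-similarity, every DIF exposing the same interface while applying its own policy, can be caricatured in a few lines (the names and policies here are purely illustrative and are not drawn from the RINA specification):

```python
class DIF:
    """Every layer (Distributed IPC Facility) exposes the same send()
    interface and applies its own policy before delegating downward."""
    def __init__(self, name, policy, lower=None):
        self.name, self.policy, self.lower = name, policy, lower

    def send(self, payload):
        payload = self.policy(payload)        # per-layer, policy-driven security
        return self.lower.send(payload) if self.lower else payload

backbone = DIF("backbone", lambda p: f"enc({p})")
home = DIF("smart-home", lambda p: f"auth({p})", lower=backbone)
delivered = home.send("sensor-reading")
```

Because each layer applies its own policy yet presents an identical interface, security can be enforced per scope (the "policy-driven asymmetry" in the abstract) without the cross-layer breakage that DTLS-style encryption causes for intermediate proxies.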
24 pages, 14077 KB  
Article
Efficient and Interpretable Machine Learning for Student Academic Outcome Prediction
by Hongwen Gu and Yuqi Zhang
Mathematics 2026, 14(4), 626; https://doi.org/10.3390/math14040626 - 11 Feb 2026
Viewed by 236
Abstract
Understanding and preventing student dropout presents a decision-critical modeling problem involving heterogeneous variables, nonlinear relationships, and the need for transparent inference. This study addresses the prediction of undergraduate academic outcomes, including Graduation, Enrolled, and Dropout, by proposing an efficient and interpretable machine learning framework that explicitly balances predictive performance, feature efficiency, and algorithmic explainability. The empirical analysis relies on a dataset of 4424 student records across 17 undergraduate programs from the Polytechnic Institute of Portalegre, Portugal. In contrast to existing approaches that rely on high-dimensional input spaces and opaque predictive architectures, we develop a reduced-dimensional classification pipeline based on recursive feature elimination with Gradient Boosting and Random Forest models. Starting from a comprehensive set of demographic, academic, and financial indicators, only 20 informative predictors are retained for model construction, substantially reducing input complexity while preserving predictive capacity. Comparative evaluation across multiple learning algorithms identifies Gradient Boosting as the most effective model, achieving an AUC of 0.891. Beyond predictive accuracy, the proposed framework emphasizes model interpretability through the integration of SHapley Additive exPlanations (SHAP), enabling quantitative attribution of feature contributions at both global and instance levels. The analysis reveals that second-semester academic engagement variables—including the number of courses approved, evaluated, and enrolled—as well as tuition fee payment status and age at enrollment, are the dominant factors shaping student outcomes. Overall, the results demonstrate that strong classification performance can be achieved using a compact feature set while maintaining transparent and explainable model behavior.
By combining mathematically grounded feature selection with principled model explanation, this study advances methodological understanding of how efficiency, interpretability, and predictive accuracy can be jointly optimized in applied machine learning, with implications for decision-support systems in educational analytics. Full article
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)
21 pages, 12481 KB  
Article
Research on Multi-State Estimation Strategy for Lithium-Ion Batteries Considering Temperature Bias
by Zhihai Zeng, Yajun Wang and Siyuan Wang
Appl. Sci. 2026, 16(4), 1754; https://doi.org/10.3390/app16041754 - 10 Feb 2026
Viewed by 175
Abstract
Accurate state estimation is a key technology for improving battery utilization and ensuring operational safety in electric vehicles. The joint estimation of the state of charge (SOC) and the state of power (SOP) over a wide temperature range is therefore essential for intelligent battery management systems. To address modeling uncertainties and estimation accuracy degradation induced by ambient temperature variations, a dual-polarization equivalent circuit thermal model incorporating temperature bias is proposed, and online parameter updating is achieved using the forgetting factor recursive least squares (FFRLS) algorithm. Furthermore, an unscented particle filter (UPF) is constructed by employing the unscented Kalman filter (UKF) as the proposal density function of the particle filter, thereby improving the estimation accuracy and convergence speed of SOC under wide temperature conditions. Based on the coupling relationship between SOC and SOP, a stepwise progressive strategy is then developed to predict the peak power state under multiple constraints, enhancing the robustness of SOP estimation. Simulation and experimental results demonstrate that the proposed method can accurately estimate SOC and SOP under complex operating conditions over a wide temperature range from −5 °C to 45 °C, exhibiting favorable convergence performance and estimation accuracy, which contributes to the safe operation and performance optimization of electric vehicle battery systems. Full article
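The FFRLS update used above for online parameter identification can be sketched in its generic form (a textbook formulation run on synthetic noiseless data, not the authors' battery model; `lam` and `delta` are typical defaults):

```python
import numpy as np

def ffrls(phi, y, lam=0.98, delta=1e3):
    """Forgetting-factor recursive least squares: estimate theta from
    regressor rows phi[k] and measurements y[k], discounting old data by lam."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for pk, yk in zip(phi, y):
        k = P @ pk / (lam + pk @ P @ pk)        # gain vector
        theta = theta + k * (yk - pk @ theta)   # innovation-driven update
        P = (P - np.outer(k, pk) @ P) / lam     # covariance update
    return theta

rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 2))
theta_hat = ffrls(phi, phi @ np.array([1.5, -0.7]))   # noiseless synthetic data
```

A forgetting factor below 1 keeps the covariance from collapsing, so the estimator keeps adapting as the circuit-model parameters drift with temperature, which is the role it plays in the proposed method.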
20 pages, 6046 KB  
Article
Data-Driven Event-Triggered Predictive Control for Consensus of Discrete-Time Multi-Agent Systems with Time-Varying Delays
by Chang-Jiang Li, Weifeng Xu, Liang Qi, Zhaoping Du, Jianzhen Li and Shuxia Ye
Appl. Sci. 2026, 16(4), 1723; https://doi.org/10.3390/app16041723 - 9 Feb 2026
Viewed by 198
Abstract
Consensus control for discrete-time multi-agent systems is increasingly complex when facing unknown dynamics and time-varying communication delays. Although data-driven control has emerged as a powerful tool to bypass model reliance, few existing studies simultaneously address the challenges of delay compensation and limited communication bandwidth. To bridge this gap, this paper proposes a novel data-driven event-triggered predictive control framework. We leverage Willems’ fundamental lemma to construct a predictive model directly from historical data, eliminating the need for system identification. Furthermore, an integrated event-triggered mechanism with a dynamic threshold updates control signals only when necessary, effectively reducing transmission load. Theoretical analysis confirms the recursive feasibility and stability of the closed-loop system, while simulations demonstrate that the proposed method achieves robust consensus with significantly reduced event-triggering frequency. Full article
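Willems' fundamental lemma lets future outputs be predicted from a Hankel matrix of past data alone; a minimal scalar sketch follows (the first-order system and trajectory lengths are invented for illustration):

```python
import numpy as np

def hankel(w, depth):
    """Depth-row Hankel matrix of a scalar trajectory w."""
    T = len(w)
    return np.array([w[i:i + T - depth + 1] for i in range(depth)])

# Trajectory of an unknown system y[k+1] = 0.9*y[k]; the model is never used below
y = 0.9 ** np.arange(20)
H = hankel(y, 3)                                   # rows: y[k], y[k+1], y[k+2]

# Match a fresh 2-step past against the data, then read off the prediction
past = np.array([0.9 ** 5, 0.9 ** 6])
g, *_ = np.linalg.lstsq(H[:2], past, rcond=None)   # combine stored columns
pred = H[2] @ g                                    # data-driven one-step prediction
```

Solving for the combination vector `g` subject to constraints at every step, rather than once, is the predictive-control extension the paper builds, with the event trigger deciding when that solve is actually transmitted.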
26 pages, 609 KB  
Review
Generative Behavioral Explanation in Micro-Foundational HRM: A Functional Architecture for the Safety–CLB Recursive Mechanism
by Manabu Fujimoto
Adm. Sci. 2026, 16(2), 77; https://doi.org/10.3390/admsci16020077 - 4 Feb 2026
Viewed by 245
Abstract
Micro-foundational HRM has advanced our understanding of how employees perceive and respond to HR practices, yet explanations of how HR systems can generate and sustain coordinated action in day-to-day work remain underspecified. This article presents a theory-building integrative review that specifies a constrained, generative mechanism grounded in observable interaction episodes. We propose a functional architecture that assigns constructs to distinct explanatory roles: enabling states (Role A), interaction episodes as the behavioral engine (Role B), and emergent coordination products (Role C). Psychological safety is positioned as an enabling condition that shifts the likelihood and quality of enactment, whereas collective leadership behavior (CLB) is defined as response-inclusive influence episodes (an influence attempt plus an observable response such as uptake, contestation, neglect, or sanction). We formalize a recursive safety–CLB cycle in which response patterns update subsequent safety and influence dispersion over time, which can yield divergent coordination trajectories even when HR conditions are broadly similar. The framework generates discriminant predictions about response profiles, dispersion versus centralization of influence, and temporal signatures, and it clarifies minimal design requirements for testing recursion with episode-level and intensive longitudinal evidence. We discuss implications for micro-foundational HRM, measurement alignment, and testable design-relevant implications for HR system design as an interaction-relevant cue environment. Full article
25 pages, 5664 KB  
Article
Bridging Heterogeneous Experimental Data and Soil Mechanics: An Interpretable Machine Learning Framework for Displacement-Dependent Earth Pressure
by Tianqin Zeng, Zhe Zhang and Yongge Zeng
Buildings 2026, 16(3), 601; https://doi.org/10.3390/buildings16030601 - 1 Feb 2026
Abstract
Classical earth pressure theories often struggle to account for the complex coupling effects of wall displacement and spatial non-uniformity under non-limit states. This study presents an interpretable machine learning framework designed to extract universal mechanical laws from heterogeneous experimental datasets. Using a multi-source database of rigid retaining walls with sandy backfill, a three-stage feature refinement strategy is proposed that incorporates Recursive Feature Elimination, Collinearity Analysis, and Interpretability Comparison to identify a parsimonious set of five fundamental physical parameters. A Categorical Boosting–SHapley Additive exPlanations (CatBoost-SHAP) framework is established to predict the active earth pressure coefficient (K) and interpret the underlying mechanisms across various movement modes (RB, RT, and T). Results demonstrate that the model effectively captures the progressive evolution of shear bands and the soil arching effect. Specifically, a critical displacement threshold of Δ/H ≈ 0.006 is identified, marking the transition from mode-dominated stress non-uniformity to magnitude-driven limit states. Leave-One-Dataset-Out Cross-Validation (LODOCV) confirms the model’s ability to maintain physical consistency over purely statistical fitting despite significant inter-literature heterogeneity. Finally, a Graphical User Interface (GUI) is developed to facilitate rapid, displacement-based design in engineering practice. This research bridges the gap between empirical laboratory observations and generalized mechanical logic, providing a data-driven foundation for refined geotechnical design. Full article
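The first two refinement stages described in this abstract, recursive elimination and collinearity screening, can be sketched in a few lines of NumPy. The elimination criterion (smallest least-squares coefficient) and the correlation threshold are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def drop_collinear(X, names, thresh=0.95):
    """Collinearity screen (sketch): for any feature pair with
    |Pearson r| > thresh, drop the later feature of the pair."""
    corr = np.corrcoef(X, rowvar=False)
    removed = set()
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if i not in removed and j not in removed and abs(corr[i, j]) > thresh:
                removed.add(j)
    keep = [k for k in range(len(names)) if k not in removed]
    return X[:, keep], [names[k] for k in keep]

def rfe_linear(X, y, names, n_keep):
    """Recursive Feature Elimination (sketch): refit a least-squares model
    and discard the feature with the smallest |coefficient| each round.
    Assumes roughly standardized features so magnitudes are comparable."""
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        idx.pop(int(np.argmin(np.abs(coef))))
    return [names[i] for i in idx]
```

On synthetic data where only two of five columns drive the target and a sixth is a near-duplicate, the screen removes the duplicate and RFE retains the two informative features; a production pipeline would rank features with the boosted model itself rather than a linear fit.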
28 pages, 9410 KB  
Article
Integrated AI Framework for Sustainable Environmental Management: Multivariate Air Pollution Interpretation and Prediction Using Ensemble and Deep Learning Models
by Youness El Mghouchi and Mihaela Tinca Udristioiu
Sustainability 2026, 18(3), 1457; https://doi.org/10.3390/su18031457 - 1 Feb 2026
Abstract
Accurate prediction, forecasting and interpretability of air pollutant concentrations are important for sustainable environmental management and protecting public health. An integrated artificial intelligence (AI) framework is proposed to predict, forecast and analyse six major air pollutants, namely particulate matter concentrations (PM2.5 and PM10), ground-level ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulphur dioxide (SO2), using a combination of ensemble and deep learning models. Five years of hourly air quality and meteorological data are analysed through correlation and Granger causality tests to uncover pollutant interdependencies and driving factors. The results of the Pearson correlation analysis reveal strong positive associations among primary pollutants (PM2.5–PM10, and CO with nitrogen oxides (NOx) and volatile organic compounds (VOCs)) and inverse correlations between O3 and NOx (NO and NO2), confirming typical photochemical behaviour. Granger causality analysis further identified NO2 and NO as key causal drivers influencing other pollutants, particularly O3 formation. Among the 23 tested AI models for prediction, XGBoost, Random Forest, and Convolutional Neural Networks (CNNs) achieve the best performance for different pollutants. NO2 prediction using CNNs displays the highest accuracy in testing (R2 = 0.999, RMSE = 0.66 µg/m3), followed by PM2.5 and PM10 with XGBoost (R2 = 0.90 and 0.79 during testing, respectively). The Air Quality Index (AQI) analysis shows that SO2 and PM10 are the dominant contributors to poor air quality episodes, while ozone peaks occur during warm, high-radiation periods. The interpretability analysis based on Shapley Additive exPlanations (SHAP) highlights the key influence of relative humidity, temperature, solar brightness, and NOx species on pollutant concentrations, confirming their meteorological and chemical relevance. Finally, a deep-NARMAX model was applied to forecast the next horizons for the six air pollutants studied.
Six forecasting formulas were derived using input data at times (t, t − 1, t − 2, …, t − n) to forecast a horizon of (t + 1) hours for single-step forecasting. For multi-step forecasting, the forecast is extended iteratively to (t + 2) hours and beyond. A recursive strategy is adopted for this purpose, whereby the forecast at (t + 1) is fed back as an input to generate the forecasts at (t + 2), and so forth. Overall, this integrated framework combines predictive accuracy with physical interpretability, offering a powerful data-driven tool for air quality assessment and policy support. This approach can be extended to real-time applications for sustainable environmental monitoring and decision-making systems. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
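The recursive multi-step strategy described in this abstract, where the (t + 1) forecast is fed back as an input to produce (t + 2) and beyond, can be sketched with a plain autoregressive model; the linear least-squares fit below is an illustrative stand-in for the paper's deep-NARMAX forecaster:

```python
import numpy as np

def fit_ar(series, n_lags):
    """Fit y_t from its n_lags previous values by least squares
    (stand-in for a trained NARMAX-style one-step forecaster)."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    y = np.asarray(series[n_lags:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # ordered oldest lag first

def recursive_forecast(series, coef, horizon):
    """Multi-step forecasting by recursion: each prediction is fed
    back as an input for the next step, (t+1) -> (t+2) -> ..."""
    hist = list(series[-len(coef):])
    preds = []
    for _ in range(horizon):
        yhat = float(np.dot(coef, hist))
        preds.append(yhat)
        hist = hist[1:] + [yhat]  # slide the lag window, append the forecast
    return preds
```

On a noise-free autoregressive series the recursion reproduces the true continuation exactly; on real pollutant data, one-step errors compound with the horizon, which is the known trade-off of the recursive strategy versus direct multi-step models.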
28 pages, 5459 KB  
Article
A Hybrid Offline–Online Kalman–RBF Framework for Accurate Relative Humidity Forecasting
by Athanasios Donas, George Galanis, Ioannis Pytharoulis and Ioannis Th. Famelis
Atmosphere 2026, 17(2), 162; https://doi.org/10.3390/atmos17020162 - 31 Jan 2026
Abstract
Accurate humidity forecasts are crucial for environmental and operational applications, yet Numerical Weather Prediction systems frequently exhibit systematic and random errors. To address this problem, this study introduces a modified hybrid post-processing approach that extends a previously developed methodology, enabling a direct comparison of computational efficiency and predictive capacity. The proposed framework integrates a quadratic Kalman Filter with a Radial Basis Function Neural Network trained via the Orthogonal Least Squares algorithm and updated online through Recursive Least Squares. This modified method was evaluated via a time-window process, using forecasts from the Weather Research and Forecasting model and recorded observations from stations in northern Greece. The results show substantial improvements in forecast accuracy, as the Bias was reduced by over 85%, and the MAE and RMSE decreased by approximately 65% and 58%, respectively, compared with the baseline model. Furthermore, the proposed framework also demonstrates enhanced computational efficiency, reducing processing time by more than 95% relative to the initial methodology. Full article
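The online half of the hybrid scheme, Recursive Least Squares, updates a linear weight vector one observation at a time instead of refitting on the whole history. A minimal sketch of a standard exponentially weighted RLS updater follows; the Kalman filter stage and the RBF hidden layer of the paper's framework are omitted here, so this is the generic algorithm, not the authors' exact implementation:

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with forgetting factor lam: after each (x, y) pair,
    w minimizes the exponentially weighted sum of squared errors."""
    def __init__(self, n_features, lam=0.99, delta=100.0):
        self.w = np.zeros(n_features)
        self.P = delta * np.eye(n_features)  # inverse-correlation estimate
        self.lam = lam                       # forgetting factor in (0, 1]

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)   # gain vector
        err = y - self.w @ x           # a-priori prediction error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err
```

Streaming noise-free samples from a fixed linear map drives `w` to the true weights within a few hundred updates; with lam < 1, old samples are discounted, which is what lets such a post-processor track slowly drifting forecast bias at negligible per-step cost.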