Search Results (112)

Search Parameters:
Keywords = multilevel decomposition

28 pages, 541 KB  
Article
MMCAD-Net: A Multi-Scale Multi-Level Convolutional Attention Decomposition Network for Stock Price Forecasting
by Hongfei Wu, Yin Zhang, Yuli Zhao and Zichen Shi
Appl. Sci. 2026, 16(8), 3716; https://doi.org/10.3390/app16083716 - 10 Apr 2026
Abstract
Stock price prediction is vital for quantitative investment but challenging due to multi-source data complexity, including endogenous, exogenous, and noise components. Standard deep learning models rely on end-to-end modeling of raw market data, failing to disentangle these distinct drivers and hindering prediction accuracy. To address this, we propose MMCAD-Net, a novel model based on time series decomposition. It first decomposes the original stock series into an exogenous cyclical component, an endogenous temporal component, and a residual component, thereby disentangling the mixed temporal patterns. Subsequently, deep feature extraction and information refinement are applied to each component: multi-scale convolutions capture diverse patterns in the cyclical component; multi-level convolutional networks refine local and global features in the temporal component; and an attention mechanism sifts for potentially informative signals within the residuals. Finally, a multi-source feature aggregation mechanism fuses all the enhanced information. Experiments on real-world stock market datasets demonstrate that MMCAD-Net surpasses mainstream models in both prediction accuracy and efficiency. Ablation studies further confirm the necessity and effectiveness of each core module.
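
As a rough illustration of the decomposition front end described above, the sketch below splits a price series into cyclical, temporal (trend), and residual parts using a seasonal profile and a moving average. The function name, window sizes, and the classical (non-learned) decomposition are assumptions for illustration, not MMCAD-Net's actual learned modules.

```python
import numpy as np

def decompose(series: np.ndarray, period: int = 5, trend_win: int = 21):
    """Split a 1-D price series into cyclical, temporal (trend), and residual
    parts. A classical stand-in for MMCAD-Net's learned decomposition;
    `period` and `trend_win` are illustrative choices, not the paper's."""
    # Temporal component: centered moving average (trend); odd window assumed.
    pad = trend_win // 2
    padded = np.pad(series, pad, mode="edge")
    temporal = np.convolve(padded, np.ones(trend_win) / trend_win, mode="valid")
    detrended = series - temporal
    # Cyclical component: mean of the detrended series at each phase of `period`.
    cyc_profile = np.array([detrended[i::period].mean() for i in range(period)])
    cyclical = cyc_profile[np.arange(len(series)) % period]
    residual = series - temporal - cyclical
    return cyclical, temporal, residual

prices = np.cumsum(np.random.randn(256)) + 100.0   # toy price series
c, t, r = decompose(prices)
assert np.allclose(c + t + r, prices)              # exact additive split
```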

26 pages, 1496 KB  
Article
MAI-GAN: An Inferentially Calibrated Generative Framework for Multilevel Longitudinal Data with Applications to Educational Intersectionality
by Benjamin Hechtman, Ross H. Nehm and Wei Zhu
Stats 2026, 9(2), 42; https://doi.org/10.3390/stats9020042 - 9 Apr 2026
Abstract
Synthetic datasets are increasingly used in education research for methodological validation, privacy-preserving data sharing, and reproducible equity analysis; however, most generative approaches prioritize marginal distributional similarity without ensuring preservation of multilevel inferential properties. This limitation is consequential for repeated-measures data analyzed using intersectionality-focused hierarchical models, where conclusions depend on variance partitioning, partial pooling, and stratum-level heterogeneity. We introduce MAI-GAN, a hybrid generative framework that implements a structure–residual decomposition approach combining Bayesian longitudinal MAIHDA with conditional GAN-based residual generation. Inferential fidelity is operationalized with respect to multilevel intersectional models by explicitly targeting the preservation of fixed effects, variance components, and variance partitioning coefficients, while baseline composition is maintained via stratified bootstrap resampling. Applied to a six-semester undergraduate biology dataset (N = 2669 students), MAI-GAN was evaluated across multiple independent random seeds and consistently reproduced baseline-dependent residual structure and key inferential quantities. These results demonstrate that model-aligned generative strategies can produce synthetic longitudinal datasets that remain coherent under intersectionality-focused multilevel analysis, offering a principled foundation for equity-oriented synthetic data generation.
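
A minimal sketch of the structure-residual idea, with a partially pooled stratum mean as the "structure" and within-stratum bootstrap resampling standing in for the conditional-GAN residual generator; the function name, the shrinkage constant `k`, and the toy data are all hypothetical, not the paper's implementation.

```python
import numpy as np

def structure_residual_synth(y, stratum, k=10.0, seed=0):
    """Toy structure-residual split in the spirit of MAI-GAN: a partially
    pooled stratum mean is the 'structure'; within-stratum residuals are
    bootstrap-resampled as a stand-in for the GAN residual stage."""
    rng = np.random.default_rng(seed)
    grand = y.mean()
    structure = np.empty_like(y, dtype=float)
    synth = np.empty_like(y, dtype=float)
    for s in np.unique(stratum):
        idx = np.where(stratum == s)[0]
        w = len(idx) / (len(idx) + k)                  # partial-pooling weight
        mu_s = w * y[idx].mean() + (1 - w) * grand     # shrunken stratum mean
        resid = y[idx] - mu_s
        structure[idx] = mu_s
        synth[idx] = mu_s + rng.choice(resid, size=len(idx), replace=True)
    return structure, synth

y = np.random.default_rng(1).normal(70, 10, 300)       # toy outcome scores
stratum = np.repeat(np.arange(30), 10)                 # 30 intersectional strata
structure, synth = structure_residual_synth(y, stratum)
```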

23 pages, 557 KB  
Article
A Multi-Stage Decomposition and Hybrid Statistical Framework for Time Series Forecasting
by Swera Zeb Abbasi, Mahmoud M. Abdelwahab, Imam Hussain, Moiz Qureshi, Moeeba Rind, Paulo Canas Rodrigues, Ijaz Hussain and Mohamed A. Abdelkawy
Axioms 2026, 15(4), 273; https://doi.org/10.3390/axioms15040273 - 9 Apr 2026
Abstract
Modeling and forecasting nonstationary and nonlinear economic time series remain fundamentally challenging due to structural breaks, volatility clustering, and noise contamination that distort the intrinsic stochastic structure. To address these limitations, this study proposes a novel three-stage hybrid statistical framework that systematically integrates multi-level signal decomposition with structured parametric modeling to enhance predictive accuracy. The proposed hybrid architectures—EMD–EEMD–ARIMA, EMD–EEMD–GMDH, and EMD–EEMD–ETS—employ a hierarchical decomposition–reconstruction strategy before forecasting. In the first stage, Empirical Mode Decomposition (EMD) decomposes the observed series into intrinsic mode functions (IMFs) and a residual component. In the second stage, Ensemble Empirical Mode Decomposition (EEMD) is applied to further refine the extracted components, mitigating mode mixing and improving signal separability. In the final stage, each reconstructed component is modeled using ARIMA, Exponential Smoothing State Space (ETS), and Group Method of Data Handling (GMDH) frameworks, and the individual forecasts are aggregated to obtain the final prediction. Empirical evaluation based on a recursive one-step-ahead forecasting scheme demonstrates consistent numerical improvements across all standard accuracy measures. In particular, the proposed EMD–EEMD–ARIMA model achieves the lowest forecasting error, reducing the root-mean-square error (RMSE) by approximately 6–7% relative to the best-performing single-stage model and by about 3–4% relative to the two-stage EMD-based hybrids. Similar improvements are observed in mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), indicating enhanced stability and robustness of the three-stage architecture. The results provide strong numerical evidence that multi-level decomposition combined with structured statistical modeling yields superior predictive performance for complex nonlinear and nonstationary time series. The proposed framework offers a mathematically coherent, computationally tractable, and systematically structured hybrid modeling strategy that effectively integrates noise-assisted decomposition with parametric and data-driven forecasting techniques.
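
One hedged reading of the three-stage pipeline, sketched with the PyEMD and statsmodels packages: EMD extracts IMFs, EEMD re-decomposes the noisiest component to mitigate mode mixing, and per-component ARIMA forecasts are summed. The refinement choice (re-decomposing only the first IMF) and the fixed ARIMA order are assumptions, not the paper's exact settings.

```python
import numpy as np
from PyEMD import EMD, EEMD                       # pip install EMD-signal
from statsmodels.tsa.arima.model import ARIMA

def three_stage_forecast(y):
    """Illustrative EMD-EEMD-ARIMA one-step-ahead forecast."""
    imfs = EMD()(y)                               # stage 1: IMFs + residue
    refined = list(EEMD(trials=50)(imfs[0]))      # stage 2: refine noisiest IMF
    components = refined + list(imfs[1:])
    fc = 0.0
    for comp in components:                       # stage 3: per-component ARIMA
        fc += ARIMA(comp, order=(1, 0, 1)).fit().forecast(1)[0]
    return fc                                     # aggregated final prediction

rng = np.random.default_rng(0)
y = np.sin(np.arange(300) / 8.0) + 0.3 * rng.standard_normal(300)
print(three_stage_forecast(y))
```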

27 pages, 2452 KB  
Article
Two-Level Source-Grid-Load-Storage Preventive Resilience for Power Systems with Multiple Offshore Wind Farms Under Typhoon Scenarios
by Qiuhui Chen, Junhao Gong, Xiangjing Su and Fengyong Li
Sustainability 2026, 18(7), 3491; https://doi.org/10.3390/su18073491 - 2 Apr 2026
Abstract
Typhoon-induced extreme weather poses a severe threat to power systems with high offshore wind penetration. Source-side wind turbine tripping and grid-side transmission line failures are likely to occur simultaneously, which may trigger cascading outages and large-scale load shedding. A multi-level source-grid-load-storage preventive resilience dispatch strategy is proposed. A typhoon spatiotemporal evolution model is first established based on the Batts gradient wind model. Failure probability models for offshore wind turbines and overhead transmission lines are developed while considering strong wind and lightning strike effects. The most probable and severe fault scenario is identified using an entropy-based quantification method. A two-stage robust preventive dispatch model is subsequently formulated. In the day-ahead stage, unit commitment, multi-type reserve allocation, and pumped storage scheduling are optimized at a 1 h resolution. In the real-time stage, combined wind-storage systems are coordinated at a 10 min resolution to accommodate rapid wind power ramps caused by high-wind shutdown events. The model is reformulated through Lagrangian duality and solved by the Benders decomposition algorithm. Case studies on a modified IEEE-RTS 24-bus system with three offshore wind farms demonstrate that the proposed strategy reduces wind curtailment by 66.3%, load shedding by 74.6%, and total cost by 14.8% compared with the case without energy storage. The combined operation cost of storage resources accounts for only 3.1% of the total cost, confirming its favorable cost-effectiveness for resilience enhancement. The proposed strategy contributes to the sustainable integration of offshore wind energy by ensuring a reliable power supply during extreme weather events.
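
The failure-probability modeling can be pictured with a generic wind fragility curve like the one below; the exponential form and the parameters `v_design` and `k` are illustrative stand-ins, since the paper derives its own models from the Batts wind field plus lightning-strike effects.

```python
import numpy as np

def line_failure_prob(v, v_design=35.0, k=0.25):
    """Generic wind fragility curve for an overhead line: negligible failure
    probability below the design wind speed, rising toward 1 above it.
    Form and parameters are illustrative, not the paper's fitted model."""
    p = 1.0 - np.exp(-k * np.maximum(np.asarray(v, float) - v_design, 0.0))
    return np.clip(p, 0.0, 1.0)

print(line_failure_prob([20, 35, 45, 60]))   # -> [0., 0., ~0.92, ~1.]
```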

24 pages, 4424 KB  
Article
Hybrid Attribution-Based Interpretable Deep Reinforcement Learning for Autonomous Driving Behavior Decision-Making
by Yaxuan Liu, Jiakun Huang, Mingjun Li, Qing Ye and Xiaolin Song
Appl. Sci. 2026, 16(6), 3096; https://doi.org/10.3390/app16063096 - 23 Mar 2026
Abstract
With the increasing deployment of autonomous driving systems, the opaque nature of deep reinforcement learning (DRL) decision models hinders understanding and validation of driving decisions. To address this challenge, we propose a Hybrid Attribution-based Interpretable Deep Reinforcement Learning framework (HA-IDRL) for autonomous driving behavior decision-making. The framework introduces a Hybrid Gradient–LRP (HGL) attribution mechanism that integrates gradient-based attribution and Layer-wise Relevance Propagation (LRP) to capture complementary sensitivity and contribution information, producing more consistent and comprehensive post hoc explanations. In addition to post hoc interpretability, we enhance structural interpretability by replacing the conventional multilayer perceptron (MLP) in the Dueling Deep Q-Network (Dueling DQN) architecture with Kolmogorov–Arnold Networks (KAN). By representing nonlinear interactions through learnable univariate functions and explicit summation structures, KAN provides inherently interpretable functional decompositions. The proposed framework is evaluated on a highway lane-changing task using the highway-env simulator. Experimental results show that HA-IDRL achieves decision-making performance comparable to representative DRL baselines, including Dueling DQN and Soft Actor-Critic (SAC), while providing explanations that are more stable and better aligned with human driving semantics. Moreover, the proposed method produces explanations with low computational overhead, enabling efficient and real-time interpretability in practical autonomous driving applications. Overall, HA-IDRL advances trustworthy autonomous driving by enabling high-performance decision-making and rigorous, multi-level interpretability, thereby improving the transparency and operational reliability of DRL-based driving policies.
(This article belongs to the Section Computing and Artificial Intelligence)
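
A compact sketch of the hybrid attribution idea for a toy Q-network: gradient-times-input and epsilon-LRP maps are computed for one Q-value and averaged after normalization. The two-layer network and the equal-weight fusion rule are assumptions; HGL's actual integration is the paper's own design.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 5))  # toy Q-network

def hybrid_attribution(x, action, eps=1e-6):
    # (1) Gradient-times-input attribution for Q(s, action).
    xg = x.clone().requires_grad_(True)
    net(xg)[action].backward()
    g = (xg * xg.grad).detach()
    # (2) Epsilon-LRP back through Linear -> ReLU -> Linear.
    with torch.no_grad():
        w1, b1, w2, b2 = net[0].weight, net[0].bias, net[2].weight, net[2].bias
        a1 = torch.relu(x @ w1.T + b1)                 # hidden activations
        z2 = a1 @ w2[action] + b2[action]              # = Q(s, action)
        r1 = a1 * w2[action] * z2 / (z2 + eps)         # hidden-unit relevance
        z1 = x @ w1.T + b1
        contrib = (x.unsqueeze(0) * w1) / (z1.unsqueeze(1) + eps)  # shape (16, 8)
        lrp = (contrib * r1.unsqueeze(1)).sum(dim=0)   # input relevance
    # (3) Fuse the two normalized maps with equal weight (an assumption).
    return 0.5 * g / (g.abs().sum() + eps) + 0.5 * lrp / (lrp.abs().sum() + eps)

attr = hybrid_attribution(torch.randn(8), action=2)    # per-feature relevance
```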

31 pages, 6311 KB  
Article
Synthesis of FPGA-Based Moore FSMs with Two Cores of Partial Functions
by Alexander Barkalov, Larysa Titarenko and Kazimierz Krzywicki
Electronics 2026, 15(6), 1279; https://doi.org/10.3390/electronics15061279 - 18 Mar 2026
Abstract
A new architecture of FPGA-based Moore finite state machine (FSM) is proposed, together with the corresponding synthesis method. The proposed FSM architecture includes two cores of partial Boolean functions. The first core is based on functional decomposition; the second core is based on structural decomposition. Under certain conditions, the proposed method improves both the spatial and temporal characteristics of FSM circuits. Each FSM state has two codes. The first is a maximum binary code (MBC) with the minimum possible number of bits. The second is a partial state code representing a state as an element of some compatibility class. The method can be applied if Moore FSM circuits are implemented using the look-up table (LUT) elements of field-programmable gate arrays. To improve the characteristics of the resulting FSM circuits, classes of pseudoequivalent states are used. This reduces the number of literals in the sum-of-products expressions of the partial input memory functions. The first core is multi-level. For the second core, all partial functions are generated by single-LUT circuits. These cores form the first level of the FSM circuit. The LUTs of the second level generate the bits of the MBCs. These codes are used by the third circuit level to generate both the FSM outputs and the partial state codes. An example of synthesis is given. The experiments are conducted using a known library of benchmark Moore FSMs. They show that the proposed approach can be used for complex FSMs in which the total number of FSM inputs and state variables is at least twice the number of inputs of the base LUT, and that the method improves both the spatial and temporal characteristics of complex FSMs compared with counterparts based on other known design methods.
(This article belongs to the Topic VLSI-Based Sequential Devices in Cyber-Physical Systems)
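
The maximum binary code itself is easy to make concrete: with M states it uses the minimum width R = ceil(log2(M)). A minimal sketch follows; the enumeration-order assignment is an assumption, since a real synthesis flow optimizes the encoding.

```python
from math import ceil, log2

def max_binary_codes(states):
    """Assign maximum binary codes (MBCs): each state gets a binary code of
    the minimum possible width R = ceil(log2(M))."""
    r = max(1, ceil(log2(len(states))))
    return {s: format(i, f"0{r}b") for i, s in enumerate(states)}

print(max_binary_codes([f"a{i}" for i in range(6)]))
# 6 states -> 3-bit codes: {'a0': '000', 'a1': '001', ..., 'a5': '101'}
```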

22 pages, 2432 KB  
Article
Open-Circuit Fault Location Method of Lightweight Modular Multilevel Converter for Deloading Operation of Offshore Wind Power
by Zhehao Fang and Haoyang Cui
Electronics 2026, 15(6), 1277; https://doi.org/10.3390/electronics15061277 - 18 Mar 2026
Cited by 1
Abstract
In offshore wind farms, modular multilevel converters (MMCs) may operate under a deloading condition to accommodate wind-speed volatility and dispatch constraints. Here, deloading is defined as transmitted power < 0.2 pu (scenario S2, low-power non-reversal). Under this condition, submodule capacitor-voltage fault signatures are weak and exhibit strong operating-point-dependent drift, which degrades conventional threshold-based or offline-trained methods. We propose a lightweight switch-level IGBT open-circuit fault localization framework for deloaded MMCs. Wavelet packet decomposition is used to extract time–frequency energy features, and principal component analysis reduces feature dimensionality for lightweight deployment. An enhanced XGBoost model further integrates severity-index weighting to alleviate class imbalance and incremental learning to adapt to condition drift induced by wind-power fluctuations. MATLAB 2024b/Simulink results show 99.6% accuracy in S2 with less than 2 ms inference latency, and robust performance in extended scenarios including partial-power operation and power reversal.
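
The feature front end can be sketched with pywt and scikit-learn: wavelet-packet band energies per measurement window, then PCA for lightweight deployment. The db4 wavelet, depth 3, and toy data are assumptions, not the paper's configuration, and the compact features would then feed the enhanced XGBoost classifier.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wpd_energy_features(signals, wavelet="db4", level=3):
    """Normalized wavelet-packet band energies: one energy per terminal
    node of a depth-`level` decomposition of each window."""
    feats = []
    for s in signals:
        wp = pywt.WaveletPacket(s, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        e = np.array([np.sum(n.data ** 2) for n in nodes])
        feats.append(e / e.sum())
    return np.asarray(feats)

X = wpd_energy_features(np.random.randn(20, 512))  # 20 toy capacitor-voltage windows
Z = PCA(n_components=4).fit_transform(X)           # compact features for the classifier
```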

23 pages, 6365 KB  
Article
MTL_TX: A Multi-Task Transformer Model for Improved Radiation Time-Series Estimation
by Hongfang Zhang, Adam Stavola, Hal Ferguson, Bence Budavari, Hongyi Wu, Chiman Kwan and Jiang Li
Sensors 2026, 26(5), 1439; https://doi.org/10.3390/s26051439 - 25 Feb 2026
Abstract
Controlling radiation doses at facilities with potential radiation hazards is critical to ensuring the safety of both personnel and the public. At the Thomas Jefferson National Accelerator Facility (JLab), multiple sensors are deployed around the three experimental halls to monitor key parameters, including single-beam current, energy levels, current leakage, and radiation values during accelerator operations. In this study, we developed a Multi-task Transformer model, MTL_TX, to accurately estimate radiation doses at sensor locations based on historical data, with the aim of enhancing safety in accelerator facilities and surrounding public areas. To improve estimation accuracy, we integrated two innovative components into the proposed model: hierarchical feature embedding (HFE) and multi-level decomposition attention (MDA). Additionally, the multi-task learning (MTL) framework effectively leverages correlations among multiple sensors, enabling individual estimations for each sensor. MTL_TX achieved outstanding results on data collected in 2018, with an MSE of 0.1464, an RMSE of 0.2353, and an R² score of 0.8584. Furthermore, when trained on 2018 data, MTL_TX exhibited excellent generalization capability to unseen datasets from 2016 to 2019, achieving an MSE of 0.1407, an RMSE of 0.2263, and an R² score of 0.8831. These results demonstrate a significant improvement over existing state-of-the-art models.
(This article belongs to the Section Remote Sensors)
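
The "multi-level decomposition" ingredient can be pictured as below: successively wider moving averages peel a series into fine-to-coarse components that an attention module could then weight. The window sizes and the plain moving-average mechanism are illustrative assumptions, not the MDA block itself.

```python
import numpy as np

def multilevel_decompose(x, windows=(5, 25, 125)):
    """Peel a series into fine-to-coarse detail components plus a smooth
    remainder using successively wider (odd) moving-average windows."""
    parts, rest = [], np.asarray(x, float)
    for w in windows:
        pad = w // 2
        trend = np.convolve(np.pad(rest, pad, mode="edge"),
                            np.ones(w) / w, mode="valid")
        parts.append(rest - trend)       # detail at this level
        rest = trend
    parts.append(rest)                   # coarsest trend
    return parts                         # components sum back to x

x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
levels = multilevel_decompose(x)
assert np.allclose(sum(levels), x)       # exact additive decomposition
```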

29 pages, 1593 KB  
Article
COVID-19 Mortality, Human Development, and Age Across the WHO Member States: A Longitudinal Multilevel Count Data Analysis
by José Clemente Jacinto Ferreira, Ana Paula Matias Gama, Luiz Paulo Fávero, Ricardo Goulart Serra, Patrícia Belfiore, Igor Pinheiro de Araújo Costa, Miguel Ângelo Lellis Moreira, Marcos dos Santos and Wilson Tarantin Junior
Computers 2026, 15(2), 136; https://doi.org/10.3390/computers15020136 - 22 Feb 2026
Abstract
This study aims to verify whether there is a statistically significant relationship between COVID-19 mortality rates, the Human Development Index (HDI), and population age across the World Health Organisation (WHO) member states. Despite the extensive literature on COVID-19 mortality and socio-demographic indicators, few studies explicitly integrate count data diagnostics, zero-inflation mechanisms, and multilevel longitudinal modelling to jointly capture cross-country heterogeneity and temporal dynamics. This study addresses this gap by applying a structured modelling framework that combines negative binomial, zero-inflated, and multilevel regression models to the WHO country-level data. For this purpose, two statistical techniques were applied: negative binomial regression modelling of the zero-inflated negative binomial type, for daily temporal exposure on 20 July 2020 and 20 July 2022 (before and after administration of the first dose of the COVID-19 vaccine); and multilevel regression for two-level repeated-measures data. Negative binomial regression estimates indicate statistically significant positive associations between HDI, age, and COVID-19 mortality rates before administration of the first vaccine dose. The variance decomposition from an unconditional model indicates significant variability in the occurrences of infection and death between countries/states and over time.
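
A hedged sketch of the core regression step with statsmodels, on simulated country-level data; the data, the dispersion parameter, and the covariate effects are invented purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-in for the country-level analysis: deaths as a negative binomial
# outcome regressed on HDI and median age. All numbers are simulated.
rng = np.random.default_rng(1)
n = 180
hdi = rng.uniform(0.4, 0.95, n)
age = rng.uniform(18, 48, n)
mu = np.exp(1.0 + 2.0 * hdi + 0.05 * age)          # assumed positive effects
deaths = rng.negative_binomial(n=5, p=5 / (5 + mu))  # overdispersed counts

X = sm.add_constant(np.column_stack([hdi, age]))
nb = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(nb.summary())   # recovers positive HDI and age coefficients
```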

27 pages, 2612 KB  
Article
Quantitative Evaluation Method for Source-Load Complementarity and System Regulation Capacity Across Multi-Time Scales
by Xiaoyan Hu, Keteng Jiang, Zikai Fan, Borui Liao, Bingjie Li, Zesen Li, Yi Ge and Hu Li
Inventions 2026, 11(1), 16; https://doi.org/10.3390/inventions11010016 - 11 Feb 2026
Abstract
Accurate assessment of source-load complementarity and system regulation capacity is critical for secure dispatch and planning in high-penetration renewable power systems. Addressing the limitations of existing methods, which rely heavily on static metrics, struggle to capture temporal and tail dependence characteristics, and provide insufficient support for dispatch decisions, this paper proposes a multi-level integrated evaluation framework. First, from a source-load matching perspective, we develop a novel complementarity metric integrating real-time rate of change, temporal consistency, and tail dependency. An improved complete ensemble empirical mode decomposition with adaptive noise, combined with a hybrid Copula model, is employed to isolate noise and to precisely quantify dynamic dependency structures. Second, we introduce the Minkowski measure and construct a net load fluctuation domain accounting for extreme fluctuations and coupling relationships. Subsequently, combining the Analytic Hierarchy Process (AHP) with probabilistic convolution enables multi-level comparative quantification of resource capacity and fluctuation domain requirements under varying confidence levels. Simulation results demonstrate that the proposed framework not only provides a more robust assessment of source-load complementarity but also quantitatively outputs the adequacy and risk level of system regulation capacity. This delivers hierarchical, actionable decision support for dispatch planning, significantly enhancing the engineering applicability of the evaluation outcomes.
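
The tail-dependency ingredient can be illustrated with a plug-in estimator of the lower-tail dependence coefficient on rank-transformed data; the estimator and the threshold `q` are generic assumptions, whereas the paper pins this quantity down parametrically with a hybrid Copula model.

```python
import numpy as np

def lower_tail_dependence(u_src, v_load, q=0.05):
    """Empirical lower-tail dependence, lambda_L ~ P(V <= q | U <= q),
    computed on pseudo-observations (normalized ranks)."""
    n = len(u_src)
    ru = np.argsort(np.argsort(u_src)) / (n - 1)   # ranks mapped to [0, 1]
    rv = np.argsort(np.argsort(v_load)) / (n - 1)
    return np.mean((ru <= q) & (rv <= q)) / q

x = np.random.default_rng(2).multivariate_normal([0, 0], [[1, .7], [.7, 1]], 5000)
print(lower_tail_dependence(x[:, 0], x[:, 1]))
# finite-q estimate; a Gaussian copula's true tail dependence vanishes as q -> 0
```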

22 pages, 1378 KB  
Article
Bias Correction and Explainability Framework for Large Language Models: A Knowledge-Driven Approach
by Xianming Yang, Qi Li, Chengdong Qian, Haitao Wang, Yonghui Wu and Wei Wang
Big Data Cogn. Comput. 2026, 10(2), 58; https://doi.org/10.3390/bdcc10020058 - 10 Feb 2026
Abstract
Large Language Models (LLMs) have demonstrated extraordinary capabilities in natural language generation; however, their real-world deployment is frequently hindered by the generation of factually incorrect or biased content, along with an inherent deficiency in transparency. To address these critical limitations and thereby enhance the reliability and explainability of LLM outputs, this study proposes a novel integrated framework, namely the Adaptive Knowledge-Driven Correction Network (AKDC-Net), which incorporates three core algorithmic innovations. Firstly, the Hierarchical Uncertainty-Aware Bias Detector (HUABD) performs multi-level linguistic analysis (lexical, syntactic, semantic, and pragmatic) and, for the first time, decomposes predictive uncertainty into epistemic and aleatoric components. This decomposition enables principled, interpretable bias detection with clear theoretical underpinnings. Secondly, the Neural-Symbolic Knowledge Graph Enhanced Corrector (NSKGEC) integrates a temporal graph neural network with a differentiable symbolic reasoning module, facilitating logically consistent and factually grounded corrections based on dynamically updated knowledge sources. Thirdly, the Contrastive Learning-driven Multimodal Explanation Generator (CLMEG) leverages a cross-modal attention mechanism within a contrastive learning paradigm to generate coherent, high-quality textual and visual explanations that enhance the interpretability of LLM outputs. Extensive evaluations were conducted on a challenging medical domain dataset to validate the effectiveness of the proposed AKDC-Net framework. Experimental results demonstrate significant improvements over state-of-the-art baselines: specifically, a 14.1% increase in the F1-score for bias detection, a 19.4% enhancement in correction quality, and a 31.4% rise in user trust scores. These findings establish a new benchmark for the development of more trustworthy and transparent artificial intelligence (AI) systems, laying a solid foundation for the broader and more reliable application of LLMs in high-stakes domains.
(This article belongs to the Special Issue Enhancement Optimization Techniques on Large Language Model)
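
The epistemic/aleatoric split that HUABD relies on follows the standard entropy decomposition: total predictive entropy = mean per-sample entropy (aleatoric) + mutual information (epistemic). A sketch under an assumed Monte Carlo-sampling setup, with invented inputs:

```python
import numpy as np

def decompose_uncertainty(probs, eps=1e-12):
    """Split predictive uncertainty given class probabilities from S
    stochastic forward passes, `probs` of shape (S, num_classes)."""
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))                   # H[E p]
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # E H[p]
    epistemic = total - aleatoric          # mutual information, >= 0
    return total, aleatoric, epistemic

draws = np.random.default_rng(3).dirichlet([2, 2, 2], size=32)  # 32 toy MC passes
print(decompose_uncertainty(draws))
```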

20 pages, 1035 KB  
Article
Multi-Level Parallel CPU Execution Method for Accelerated Portion-Based Variant Call Format Data Processing
by Lesia Mochurad, Ivan Tsmots, Vita Mostova and Karina Kystsiv
Computation 2026, 14(2), 48; https://doi.org/10.3390/computation14020048 - 8 Feb 2026
Abstract
This paper proposes and experimentally evaluates a multi-level CPU-oriented execution method for high-throughput portion-based processing of file-backed Variant Call Format (VCF) data and automated mutation classification. The approach is based on a formally defined local processing scheme and integrates three coordinated levels of parallelism: block-based partitioning of file-backed VCF portions read sequentially into localized fragments with data-level parallel processing; task-level decomposition of feature construction into independent transformations; and execution-level specialization via JIT compilation of numerical kernels. To prevent performance degradation caused by nested parallelism, a resource-control mechanism is introduced as an execution rule that bounds effective parallelism and mitigates oversubscription, improving throughput stability on a single multi-core CPU node. Experiments on a public chromosome-17 VCF dataset for BRCA1-region pathogenicity classification demonstrate that the proposed multi-level local CPU execution (parsing/filtering, feature construction, and JIT-specialized numeric kernels) reduces runtime from 291.25 s (sequential) to 73.82 s, yielding a 3.95× speedup. When combined with resource-coordinated parallel model training, the end-to-end runtime further decreases to 51.18 s, corresponding to a 5.69× speedup, while preserving classification quality (accuracy 0.8483, precision 0.8758, recall 0.8261, F1 0.8502). A stage-wise ablation analysis quantifies the contribution of each execution level and confirms consistent scaling under resource-bounded execution. Full article
(This article belongs to the Section Computational Engineering)
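
A minimal sketch of the three levels on toy VCF lines: block-level data parallelism via a bounded process pool, a task that parses one portion, and a numba-JIT numeric kernel. Field positions follow the VCF column order, but the block source, quality cutoff, and worker count are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from multiprocessing import Pool
from numba import njit                    # execution-level JIT specialization

@njit(cache=True)
def kernel(qual):
    # JIT-compiled numeric kernel: mean QUAL among variants passing a cutoff.
    total, kept = 0.0, 0
    for q in qual:
        if q >= 30.0:
            total += q
            kept += 1
    if kept == 0:
        return 0.0
    return total / kept

def process_block(lines):
    # Task level: parse one block of VCF body lines (QUAL is column 6),
    # then hand the numeric array to the specialized kernel.
    qual = np.array([float(l.split("\t")[5]) for l in lines if not l.startswith("#")])
    return kernel(qual)

if __name__ == "__main__":
    # Block level: portions would come from sequential reads of a file-backed
    # VCF; a 4-worker pool bounds effective parallelism (no oversubscription).
    blocks = [[f"17\t{i}\t.\tA\tG\t{30 + i % 20}\tPASS\t." for i in range(1000)]] * 8
    with Pool(processes=4) as pool:
        print(pool.map(process_block, blocks))
```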

28 pages, 4753 KB  
Article
A Fine-Grained Difficulty and Similarity Framework for Dynamic Evaluation of Path-Planning Generalization in UGVs
by Zewei Dong, Yaze Guo, Jingxuan Yang, Xiaochuan Tang, Weichao Xu and Ming Lei
Drones 2026, 10(2), 101; https://doi.org/10.3390/drones10020101 - 31 Jan 2026
Abstract
The generalization capability of the decision-making modules in unmanned ground vehicles (UGVs) is critical for their safe deployment in unseen environments. Prevailing evaluation methods, which rely on aggregated performance over static benchmark sets, lack the granularity to diagnose the root causes of model failure, as they often conflate the distinct influences of scenario similarity and intrinsic difficulty. To overcome this limitation, we introduce a fine-grained, dynamic evaluation framework that deconstructs generalization along the dual axes of multi-level difficulty and similarity. First, scenario similarity is quantified through a four-layer hierarchical decomposition, with results aggregated into a composite similarity score. Test scenarios are independently classified into ten discrete difficulty levels via a consensus mechanism integrating large language models and task-specific proxy models. By constructing a three-dimensional (3D) performance landscape across similarity, difficulty, and task performance, we enable detailed behavioral diagnosis. The framework assesses robustness by analyzing performance within the high-similarity band (90–100%), while the full 3D landscape characterizes generalization under distribution shift. Seven interpretable metrics are derived to quantify distinct facets of both generalization and robustness. This initial validation focuses on the path-planning layer under full state observability, establishing a foundational proof-of-concept for the framework. It not only ranks algorithms but also reveals non-trivial behavioral patterns, such as the decoupling between in-distribution robustness and out-of-distribution generalization. It provides a reliable and interpretable foundation for evaluating the readiness of UGVs for safe deployment in unseen environments.
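
The composite similarity score can be pictured as a weighted aggregation of the four layer-level scores; the equal weights and the example layer names are assumptions, since the paper derives its own aggregation from the hierarchical decomposition.

```python
import numpy as np

def composite_similarity(layer_scores, weights=(0.25, 0.25, 0.25, 0.25)):
    """Aggregate four-layer similarity scores into one composite value;
    equal weights are an illustrative assumption."""
    s = np.asarray(layer_scores, float)
    w = np.asarray(weights, float)
    return float(np.dot(s, w / w.sum()))

# e.g. map-, obstacle-, topology-, and task-layer similarities of a test scenario
print(composite_similarity([0.92, 0.81, 0.77, 0.95]))   # -> 0.8625
```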

47 pages, 729 KB  
Article
Disentangling Signal from Noise: A Bayesian Hybrid Framework for Variance Decomposition in Complex Surveys with Post-Hoc Domains
by JoonHo Lee and Alison Hooper
Mathematics 2026, 14(3), 512; https://doi.org/10.3390/math14030512 - 31 Jan 2026
Abstract
Quantifying geographic variation is crucial for policy evaluation, yet researchers often rely on complex national surveys not designed for sub-national inference. This design-analysis mismatch creates two challenges when decomposing variance across domains like states: informative sampling confounds substantive heterogeneity with design artifacts, and finite-sample variance inflation conflates sampling noise with signal. We introduce the Bayesian Hybrid Framework that reconciles design-based and model-based inference through Bayesian Pseudo-Likelihood for design consistency and a hybrid generalized linear mixed model that simultaneously estimates substantive domain effects and nuisance design effects (strata, PSUs). We propose a Dual Estimand Framework distinguishing between Descriptive (total observed variance) and Policy (substantive variance net of design) estimands, with explicit de-attenuation to correct finite-sample inflation. Simulations based on the 2019 National Survey of Early Care and Education demonstrate negligible bias and superior efficiency compared to standard alternatives. Applied to subsidy receipt among home-based child care providers, we find the observed between-state variation (16.7%) reduces to only 5.4% after accounting for design artifacts and sampling noise. This three-fold reduction reveals that local factors, not state policies, drive most heterogeneity, highlighting the necessity of our framework for rigorous geographic variance decomposition in complex surveys. An accompanying R package (version 0.3.0), bhfvar, implements the complete framework.
(This article belongs to the Section D1: Probability and Statistics)
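
The Descriptive-versus-Policy distinction can be caricatured with a method-of-moments version of the de-attenuation step (subtracting the mean squared standard error from the observed between-domain variance). The paper's actual estimator is fully Bayesian, so everything below is a simplified stand-in with invented numbers.

```python
import numpy as np

def vpc_deattenuated(domain_est, domain_se, resid_var):
    """Descriptive VPC from the raw spread of domain estimates, and a
    'policy' VPC from the de-attenuated (noise-corrected) spread."""
    observed_var = np.var(domain_est, ddof=1)
    signal_var = max(observed_var - np.mean(np.square(domain_se)), 0.0)
    descriptive = observed_var / (observed_var + resid_var)
    policy = signal_var / (signal_var + resid_var)
    return descriptive, policy

est = np.random.default_rng(4).normal(0.0, 0.4, 50)    # 50 toy state estimates
print(vpc_deattenuated(est, domain_se=np.full(50, 0.3), resid_var=1.0))
# descriptive > policy: part of the between-state spread is sampling noise
```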

24 pages, 1632 KB  
Article
Research on Risk Assessment and Prevention–Control Measures for Immersed Tunnel Construction in 100 m-Deep Water Environments
by Haiyang Xu, Zhengzhong Qiu, Sudong Xu, Liuyan Mao and Zebang Cui
J. Mar. Sci. Eng. 2026, 14(1), 53; https://doi.org/10.3390/jmse14010053 - 27 Dec 2025
Cited by 1
Abstract
With the rapid development of cross-sea infrastructure, the immersed tube method has been increasingly applied to deep-water tunnel construction. However, when the construction depth reaches one hundred meters, issues such as high hydrostatic pressure, complex hydrological conditions, and limited construction windows significantly elevate project risks. Against this backdrop, this study systematically reviews relevant domestic and international research on 100 m-deep water environments and constructs a comprehensive risk index system covering the construction processes, based on the WBS-RBS decomposition method within the HSE framework. The risk index weighting analysis combines quantitative and qualitative approaches, categorizing the indicators into quantitative and qualitative groups: quantitative indicators are analyzed via threshold determination and the LEC method, while qualitative indicators are assessed through expert surveys and the G1 method. Ultimately, a model combining multiple methods for a 100 m-deep water environment, integrating subjective expertise and objective data, is developed. On this basis, multi-level prevention and control measures are proposed for 100 m-deep immersed tube construction. The results demonstrate that the proposed system can effectively identify key risk sources under deep-water conditions and provide practical countermeasures, offering significant guidance for ensuring construction safety and engineering quality in 100 m-deep immersed-tube tunnel projects.
(This article belongs to the Section Ocean Engineering)
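
The LEC step is easy to make concrete: the risk value is D = L × E × C (likelihood, exposure, consequence), graded against the conventional LEC bands. The example operation and its scores below are hypothetical; mapping the bands to project-specific controls is the paper's contribution.

```python
def lec_score(L, E, C):
    """LEC risk value D = L * E * C with the conventional grading bands."""
    D = L * E * C
    if D > 320:
        grade = "extremely high risk"
    elif D > 160:
        grade = "high risk"
    elif D > 70:
        grade = "significant risk"
    elif D > 20:
        grade = "moderate risk"
    else:
        grade = "low risk"
    return D, grade

# e.g. a hypothetical immersion-joint operation in a narrow weather window:
print(lec_score(L=3, E=6, C=15))   # -> (270, 'high risk')
```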
