Search Results (10,277)

Search Parameters:
Keywords = robust estimation

18 pages, 3162 KB  
Article
Distributionally Robust Game-Theoretic Optimization Algorithm for Microgrid Based on Green Certificate–Carbon Trading Mechanism
by Chen Wei, Pengyuan Zheng, Jiabin Xue, Guanglin Song and Dong Wang
Energies 2026, 19(1), 206; https://doi.org/10.3390/en19010206 (registering DOI) - 30 Dec 2025
Abstract
To address the interest demands of multiple agents and environmental benefits, a distributionally robust game-theoretic optimization algorithm based on a green certificate–carbon trading mechanism is proposed for uncertain microgrids. First, correlated wind–solar scenarios are generated using Kernel Density Estimation and copula theory, and the probability distribution ambiguity set is constructed by combining 1-norm and ∞-norm metrics. Subsequently, with gas turbines, renewable energy power producers, and an energy storage unit as game participants, a two-stage distributionally robust game-theoretic scheduling model is established for microgrids that accounts for wind–solar correlation. The algorithm integrates a non-cooperative dynamic game with complete information and distributionally robust optimization. It minimizes a linear objective subject to linear matrix inequality (LMI) constraints and adopts the column-and-constraint generation (C&CG) algorithm to determine the optimal output of each device within the microgrid, enhancing overall system performance. The method ultimately yields a scheduling solution that balances the interests of multiple stakeholders while remaining robust. Simulation results verify the effectiveness of the proposed method.
(This article belongs to the Section A1: Smart Grids and Microgrids)
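For readers unfamiliar with the scenario-generation step mentioned in this abstract, the following is a minimal Python sketch (not the authors' code) of drawing correlated wind–solar scenarios by fitting kernel density estimates to historical output and coupling them through a Gaussian copula; all data, variable names, and parameters here are illustrative assumptions.

```python
# Hypothetical sketch of correlated wind-solar scenario generation via KDE + copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative historical data (replace with measured wind/solar output).
wind_hist = rng.weibull(2.0, 1000) * 50.0
solar_hist = np.clip(rng.normal(30.0, 10.0, 1000), 0.0, None)

# 1) Marginal densities via kernel density estimation.
kde_wind = stats.gaussian_kde(wind_hist)
kde_solar = stats.gaussian_kde(solar_hist)

# 2) Gaussian copula with the rank correlation observed in the data.
rho, _ = stats.spearmanr(wind_hist, solar_hist)
corr = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), corr, size=500)
u = stats.norm.cdf(z)  # uniform marginals carrying the copula dependence

# 3) Map uniforms back through each marginal (empirical quantiles used here as a
#    simplification instead of inverting the fitted KDE CDF).
wind_scen = np.quantile(wind_hist, u[:, 0])
solar_scen = np.quantile(solar_hist, u[:, 1])
scenarios = np.column_stack([wind_scen, solar_scen])
print(scenarios[:5])
```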

23 pages, 5523 KB  
Article
Boosting Tree Stem Sectional Volume Predictions Through Machine Learning-Based Stem Profile Modeling
by Maria J. Diamantopoulou
Forests 2026, 17(1), 54; https://doi.org/10.3390/f17010054 (registering DOI) - 30 Dec 2025
Abstract
Knowledge of the reduction in tree stem diameter with increasing height is considered significant for reliable tree taper prediction. Tree taper modeling offers a comprehensive framework that connects tree form to growth processes, enabling precise estimates of volume and biomass. In this context, machine learning approaches offer strong potential for predicting difficult-to-measure field biometric variables, such as tree stem diameters. Two promising machine learning approaches, temporal convolutional networks (TCNs) and extreme gradient boosting (XGBoost), were evaluated for their ability to accurately predict stem profiles, suggesting a powerful and safe strategy for predicting tree stem sectional volume with minimal ground-truth measurements. The comparative analysis of the TCN- and XGBoost-constructed models showed a strong ability to capture the taper trend of the trees. XGBoost proved particularly well adapted to the stem profile of black pine (Pinus nigra) trees in the Karya forest of Mount Olympus, Greece, capturing its spatial structure and substantially improving total stem volume accuracy, with RMSE% values of 3.71% and 7.94% across the full range of observed stem volumes for the fitting and test data sets, respectively. The same trend held for the 1 m sectional mean stem-volume predictions. The tested machine learning methodologies provide a stable basis for robust tree stem volume predictions using easily obtained field measurements.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
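As an illustration of the gradient-boosting approach named in this abstract, here is a hedged sketch of fitting an XGBoost taper model on synthetic data; the features (relative height, DBH, total height), hyperparameters, and the taper equation used to fabricate the data are assumptions for demonstration, not the paper's setup.

```python
# Illustrative XGBoost stem-taper model: predict section diameter from relative
# height, diameter at breast height (DBH) and total tree height.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000
rel_h = rng.uniform(0, 1, n)            # measurement height / total height
dbh = rng.uniform(15, 45, n)            # diameter at breast height (cm)
height = rng.uniform(10, 30, n)         # total tree height (m)
# Synthetic taper: diameter shrinks nonlinearly towards the tip.
diam = dbh * (1 - rel_h) ** 1.3 + rng.normal(0, 0.5, n)

X = np.column_stack([rel_h, dbh, height])
X_tr, X_te, y_tr, y_te = train_test_split(X, diam, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05,
                     subsample=0.8, objective="reg:squarederror")
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse_pct = 100 * np.sqrt(mean_squared_error(y_te, pred)) / y_te.mean()
print(f"RMSE% on held-out sections: {rmse_pct:.2f}")
```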

17 pages, 503 KB  
Article
Hybrid Human–Machine Consensus Framework for SME Technology Selection: Integrating Machine Learning and Planning Poker
by Chetna Gupta and Varun Gupta
Systems 2026, 14(1), 42; https://doi.org/10.3390/systems14010042 (registering DOI) - 30 Dec 2025
Abstract
This paper proposes a hybrid collaborative framework to optimize technology selection in Small and Medium-sized Enterprises (SMEs) by integrating machine learning (ML) predictions with Planning Poker, a consensus-based estimation technique used in agile software development. Addressing known challenges such as cognitive bias, resource constraints, and the need for inclusive decision-making, the proposed model combines data-driven suitability analysis with stakeholder-driven consensus. ML generates quantitative, criterion-wise suitability scores based on historical SME data, providing transparent baselines for evaluation. Stakeholders independently assess candidate technologies using Planning Poker, and their consensus is blended with ML predictions through a flexible weighting mechanism. An illustrative case study on CRM tool selection demonstrates the framework’s practical advantages: improved decision accuracy, transparency, and greater stakeholder engagement. The methodology is iterative, allowing for continuous learning and adaptation as new data emerges. This dual approach ensures that technology adoption decisions in SMEs are both empirically validated and contextually robust, offering a significant improvement over traditional, siloed methods.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
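The "flexible weighting mechanism" mentioned above can be illustrated with a tiny sketch: a convex combination of the ML suitability score and the Planning Poker consensus. The weight and the scoring scale below are hypothetical choices, not the paper's specification.

```python
# Minimal sketch: blend an ML suitability score with stakeholder consensus.
def blended_score(ml_score: float, poker_scores: list[float], w: float = 0.6) -> float:
    """Convex combination of the ML prediction and the Planning Poker consensus,
    both assumed to be normalized to [0, 1]."""
    consensus = sum(poker_scores) / len(poker_scores)  # a median could also be used
    return w * ml_score + (1 - w) * consensus

# Example: a CRM candidate scored 0.82 by the ML model; stakeholders voted 0.6-0.8.
print(blended_score(0.82, [0.6, 0.7, 0.8, 0.7]))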
19 pages, 431 KB  
Review
Isotonic and Convex Regression: A Review of Theory, Algorithms, and Applications
by Eunji Lim
Mathematics 2026, 14(1), 147; https://doi.org/10.3390/math14010147 (registering DOI) - 30 Dec 2025
Abstract
Shape-restricted regression provides a flexible framework for estimating an unknown relationship between input variables and a response when little is known about the functional form, but qualitative structural information is available. In many practical settings, it is natural to assume that the response changes in a systematic way as inputs increase, such as increasing, decreasing, or exhibiting diminishing returns. Isotonic regression incorporates monotonicity constraints, requiring the estimated function to be nondecreasing with respect to its inputs, while convex regression imposes convexity constraints, capturing relationships with increasing or decreasing marginal effects. These shape constraints arise naturally in a wide range of applications, including economics, operations research, and modern data-driven decision systems, where they improve interpretability, stability, and robustness without relying on parametric model assumptions or tuning parameters. This review focuses on isotonic and convex regression as two fundamental examples of shape-restricted regression. We survey their theoretical properties, computational formulations based on optimization, efficient algorithms, and practical applications, and we discuss key challenges such as non-smoothness, boundary overfitting, and scalability. Finally, we outline open problems and directions for future research. Full article
(This article belongs to the Special Issue Stochastic Simulation: Theory and Applications)
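As a quick, self-contained illustration of the monotonicity constraint discussed in this review, the following sketch fits scikit-learn's IsotonicRegression to synthetic noisy data; the data-generating curve is an assumption for demonstration only.

```python
# Fit a nondecreasing function to noisy data with isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)
y = np.log1p(x) + rng.normal(0, 0.15, x.size)   # monotone trend plus noise

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)                  # nondecreasing least-squares fit
assert np.all(np.diff(y_fit) >= 0)               # the monotonicity constraint holds
```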

23 pages, 2535 KB  
Article
Corundum Particles as Trypsin Carrier for Efficient Protein Digestion
by Sarah Döring, Birte S. Wulfes, Aleksandra Atanasova, Carsten Jaeger, Leopold Walzel, Georg Tscheuschner, Sabine Flemig, Kornelia Gawlitza, Ines Feldmann, Zoltán Konthur and Michael G. Weller
BioTech 2026, 15(1), 2; https://doi.org/10.3390/biotech15010002 (registering DOI) - 30 Dec 2025
Abstract
Reusable enzyme carriers are valuable for proteomic workflows, yet many supports are expensive or lack robustness. This study describes the covalent immobilization of recombinant trypsin on micrometer-sized corundum particles and assesses their performance in protein digestion and antibody analysis. The corundum surface was cleaned with potassium hydroxide, silanized with 3-aminopropyltriethoxysilane and activated with glutaraldehyde. Recombinant trypsin was then attached, and the resulting imines were reduced with sodium cyanoborohydride. Aromatic amino acid analysis (AAAA) estimated an enzyme loading of approximately 1 µg/mg. Non-specific adsorption of human plasma proteins was suppressed by blocking residual aldehydes with a Tris-glycine-lysine buffer. Compared with free trypsin, immobilization shifted the temperature optimum from 50 to 60 °C and greatly improved stability in 1 M guanidinium hydrochloride. Activity remained above 80% across several reuse cycles, and storage at 4 °C preserved functionality for weeks. When applied to digesting the NISTmAb, immobilized trypsin provided peptide yields and sequence coverage comparable to soluble enzyme and outperformed it at elevated temperatures. MALDI-TOF MS analysis of Herceptin digests yielded fingerprint spectra that correctly identified the antibody and achieved >60% sequence coverage. The combination of low cost, robustness and analytical performance makes corundum-immobilized trypsin an attractive option for research and routine proteomic workflows. Full article

12 pages, 234 KB  
Article
Identifying “Ina Jane Doe”: The Forensic Anthropologists’ Role in Revising and Correcting Narratives in a Cold Case
by Amy R. Michael, Samantha H. Blatt, Jennifer D. Bengtson, Ashanti Maronie, Samantha Unwin and Jose Sanchez
Humans 2026, 6(1), 1; https://doi.org/10.3390/humans6010001 (registering DOI) - 30 Dec 2025
Abstract
The 1992 cold case homicide of “Ina Jane Doe” illustrates how an interdisciplinary team worked to identify the decedent using a combined approach of skeletal re-analysis, updated forensic art informed by anthropologists’ input, archival research, and forensic investigative genetic genealogy. The original forensic art for “Ina Jane Doe” showed an over-pathologization of skeletal features and an inaccurate hairstyle; however, the case gained notoriety on internet true crime forums leading to speculation about the decedent’s intellectual capacity and physical appearance. The “Ina Jane Doe” case demonstrates the importance of advocating for skeletal re-analysis as more robust methods and technologies emerge in forensic science, as well as the impact of sustained public interest in cold cases. In this case, continuous public interest and online speculation led to anthropologists constructing a team of experts to correct and revise narratives about the decedent. Forensic anthropologists’ role in cold cases may include offering skeletal re-analysis, recognizing and correcting errors in the original estimations of the biological profile, searching for missing person matches, and/or working collaboratively with subject matter experts in forensic art, odontology and forensic investigative genetic genealogy. Full article
27 pages, 2724 KB  
Systematic Review
The Synergy Between the Travel Cost Method and Other Valuation Techniques for Ecosystem Services: A Systematic Review
by Einstein Sánchez Bardales, Ligia Magali García Rosero, Erick Stevinsonn Arellanos Carrion, Einstein Bravo Campos and Omer Cruz Caro
Environments 2026, 13(1), 18; https://doi.org/10.3390/environments13010018 (registering DOI) - 30 Dec 2025
Abstract
This systematic review examined how the Travel Cost Method (TCM) works together with other valuation methods, such as stated and declared preferences, to improve estimates of total economic value (TEV). Despite the widespread use of TCM, no systematic synthesis has examined how its integration with complementary methods enhances TEV estimation across different ecosystems and geographical contexts. Following PRISMA guidelines, we conducted searches in Scopus and Web of Science, identifying 245 records. After the screening process, 57 studies remained for analysis. Results show that 74% of the studies combined TCM with Contingent Valuation Method (CVM), and 12.3% with Choice Experiment (CEM). Three chronological phases were identified: early domination by the United States (1985–2000), international expansion and diversification (2001–2015), and recent methodological innovation led by China (2016–2024). Forest and recreational ecosystems accounted for 25% of applications, followed by marine-coastal (21%). Within cultural ecosystem services, the subcategory of physical and experiential interactions predominates with 63.1%. Comparative analysis indicates that TCM systematically produces higher and more variable monetary estimates than CVM, reflecting its sensitivity to travel behavior and spatial scale, while stated preference methods provide more stable estimates of non-use values. Persistent methodological limitations include non-probabilistic sampling and uneven ecosystem coverage. This review advances the literature by providing the first comprehensive synthesis of integrated TCM applications, demonstrating how methodological combinations strengthen TEV estimation beyond single-method approaches. The findings offer practical guidance for policymakers designing environmental impact assessments, environmental managers selecting valuation tools tailored to ecosystem and management objectives, and researchers seeking standardized and robust frameworks for integrated ecosystem service valuation. Full article

10 pages, 680 KB  
Article
Using Large Language Models for In Silico Development and Simulation of a Patient-Reported Outcome Questionnaire for Cataract Surgery with Various Intraocular Lenses: A Pre-Validation Study
by Ewelina Trojacka, Joanna Przybek-Skrzypecka, Justyna Izdebska, Jacek P. Szaflik, Musa Aamir Qazi, Abdullah Azhar and Janusz Skrzypecki
J. Clin. Med. 2026, 15(1), 283; https://doi.org/10.3390/jcm15010283 (registering DOI) - 30 Dec 2025
Abstract
Background/Objectives: Development of Patient-Reported Outcome Measures (PROMs) in ophthalmology is limited by high patient burden during early validation. We propose an In Silico Pre-validation Framework using Large Language Models (LLMs) to stress-test instruments before clinical deployment. Methods: The LLM generated a PROM questionnaire and a synthetic cohort of 500 distinct patient profiles via a Python-based pipeline. Profiles were instantiated as structured JSON objects with detailed attributes for demographics, lifestyle, and health background, including specific clinical parameters such as IOL type (Monofocal, Multifocal, EDOF) and dysphotopsia severity. To eliminate memory bias, a stateless simulation approach was used for test–retest reliability; AI agents were re-instantiated without access to prior conversation history. Psychometric validation included Confirmatory Factor Analysis (CFA) using WLSMV estimation and Differential Item Functioning (DIF). Results: The model demonstrated excellent fit (CFI = 0.962, TLI = 0.951, RMSEA = 0.048, SRMR = 0.063), confirming structural validity. DIF analysis detected no significant bias based on age, sex, or IOL type (0/20 items flagged). Internal consistency was robust (Cronbach’s alpha > 0.80) and stateless test–retest reliability was high (ICC > 0.90), indicating stability independent of algorithmic memory. Convergent validity was established via significant correlations with NEI-VFQ-25 scores (Spearman’s correlation: −0.425 to −0.652). While responsive to change, known-groups validity reflected realistic clinical overlap. Conclusions: LLM-based pre-validation effectively mirrors complex human response patterns through “algorithmic fidelity.” By identifying structural failure points in silico, this framework ensures PROMs are robust and unbiased before clinical trials, reducing the ethical and logistical burden on real-world populations.
(This article belongs to the Section Ophthalmology)
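One of the reliability statistics reported above, Cronbach's alpha, is straightforward to compute; the sketch below uses synthetic responses and is meant only to illustrate the formula, not to reproduce the study's pipeline.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 1))                        # shared trait
responses = latent + rng.normal(0, 0.7, size=(500, 8))    # 8 correlated items
print(round(cronbach_alpha(responses), 3))                 # comfortably > 0.8 here
```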

22 pages, 1658 KB  
Article
Deep Hierarchical Graph Correlation: A Two-Stage Approach to Well-Log Alignment Using CNNs and Dynamic Programming
by Sushil Acharya, Karl Fabian, Anis Yazidi and Kjetil Westeng
J. Mar. Sci. Eng. 2026, 14(1), 66; https://doi.org/10.3390/jmse14010066 (registering DOI) - 30 Dec 2025
Abstract
Precise depth alignment of well logs is essential for reliable subsurface characterization, enabling accurate correlation of geological features across multiple wells. This study presents the Deep Hierarchical Graph Correlator (DHGC), a two-stage deep learning framework for scalable and automated well-log depth alignment. DHGC aligns a target log to a reference log by comparing fixed-size windows extracted from both signals. In the first stage, a one-dimensional convolutional neural network (1D CNN) trained on 177,026 triplets with a triplet-margin loss learns discriminative embeddings of gamma-ray (GR) log windows from eight Norwegian North Sea wells. In the second stage, a feedforward scoring network evaluates embedded window pairs to estimate local similarity. Dynamic programming then computes the optimal nonlinear warping path from the resulting cost matrix. The feature extractor achieved 99.6% triplet accuracy, and the scoring network achieved 98.93% classification accuracy with an ROC-AUC of 0.9971. Evaluation on 89 unseen GR log pairs demonstrated that DHGC improves the mean Pearson correlation coefficient from 0.35 to 0.91, with successful alignment in 88 cases (98.9%). DHGC achieved an 8.2× speedup over DTW (3.16 s versus 25.83 s per log pair). While DTW achieves a higher mean correlation (0.96 versus 0.91), DHGC avoids singularity artifacts and exhibits lower variability in distance metrics than CC, suggesting improved robustness and scalability for well-log synchronization.
(This article belongs to the Special Issue Marine Well Logging and Reservoir Characterization)
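The second-stage step described above, computing the optimal nonlinear warping path from a cost matrix by dynamic programming, follows the standard DTW-style recursion. The sketch below is a generic illustration under that assumption, not the authors' implementation.

```python
# Recover the minimum-cost warping path through a local cost matrix.
import numpy as np

def warping_path(cost: np.ndarray):
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # advance reference
                                                 acc[i, j - 1],      # advance target
                                                 acc[i - 1, j - 1])  # match both
    # Backtrack from the end to recover the aligned index pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], acc[n, m]

# Toy cost matrix between two shifted sine "logs".
cost = np.abs(np.subtract.outer(np.sin(np.linspace(0, 3, 40)),
                                np.sin(np.linspace(0.3, 3.3, 50))))
path, total = warping_path(cost)
print(len(path), round(total, 3))
```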

25 pages, 1372 KB  
Article
From Harmony to Probability: The Problem of Identifiability and a Bayesian Inference Perspective on Greek Nominal Stress
by Kosmas Kosmidis, Giorgos Markopoulos and Anthi Revithiadou
Appl. Sci. 2026, 16(1), 374; https://doi.org/10.3390/app16010374 (registering DOI) - 29 Dec 2025
Abstract
Maximum Entropy and Gradient Harmonic Grammar are well-established grammatical models for the analysis of linguistic phenomena, but we demonstrate that their probabilistic versions are inherently unidentifiable, employing parameters that cannot be uniquely determined from empirical data. By examining Greek nominal stress patterns, we propose a reparameterization approach that introduces two identifiable, phonologically interpretable parameters. We employ both point estimation and Bayesian inference to calculate their values; the latter yields reliable parameter estimates with quantified uncertainty that point estimation cannot offer. Our findings shed light on lesser-known aspects of Greek stress and offer a robust methodological framework for probabilistic phonological modeling across languages.

17 pages, 160077 KB  
Article
RA6D: Reliability-Aware 6D Pose Estimation via Attention-Guided Point Cloud in Aerosol Environments
by Woojin Son, Seunghyeon Lee, Taejoo Kim, Geonhwa Son and Yukyung Choi
Robotics 2026, 15(1), 8; https://doi.org/10.3390/robotics15010008 (registering DOI) - 29 Dec 2025
Abstract
We address the problem of 6D object pose estimation in aerosol environments, where RGB and depth sensors experience correlated degradation due to scattering and absorption. Handling such spatially varying degradation typically requires depth restoration, but obtaining ground-truth complete depth in aerosol conditions is prohibitively expensive. To overcome this limitation without relying on costly depth completion, we propose RA6D, a framework that integrates attention-guided reliability modeling with feature distillation. The attention map generated during RGB dehazing reflects aerosol distribution and provides a compact indicator of depth reliability. By embedding this attention as an additional feature in an Attention-Guided Point cloud (AGP), the network can adaptively respond to spatially varying degradation. In addition, to address the scarcity of aerosol-domain data, we employ clean-to-aerosol feature distillation, transferring robust representations learned under clean conditions. Experiments on aerosol benchmarks show that RA6D achieves higher accuracy and significantly faster inference than restoration-based pipelines, offering a practical solution for real-time robotic perception under severe visual degradation. Full article
(This article belongs to the Special Issue Extended Reality and AI Empowered Robots)

18 pages, 7746 KB  
Article
A Multicomponent OBN Time-Shift Joint Correction Method Based on P-Wave Empirical Green’s Functions
by Dongxiao Jiang, Bingyu Chen, Lei Cheng, Chang Chen, Yingda Li and Yun Wang
J. Mar. Sci. Eng. 2026, 14(1), 60; https://doi.org/10.3390/jmse14010060 (registering DOI) - 29 Dec 2025
Abstract
To address clock drift arising from the absence of GPS synchronization during ocean-bottom seismic observations, we propose a time-offset correction and quality-control scheme that uses the correlation of P-wave empirical Green’s functions (EGFs) as the metric, and we demonstrate its efficacy in mitigating cross-correlation asymmetry caused by azimuthal noise in shallow-water environments. The method unifies the time delays of the four components into a single objective function, estimates per-node offsets via sparse weighted least squares with component-specific weights, applies spatial second-difference smoothing to suppress high-frequency oscillations, and performs spatiotemporally constrained regularized iterative optimization initialized by the previous day’s inversion to achieve a robust solution. Tests on a real four-component ocean-bottom node (4C-OBN) hydrocarbon exploration dataset show that, after conventional linear clock-drift correction of the OBN system, the proposed method can effectively detect millisecond-scale time jumps on individual nodes; compared with traditional noise cross-correlation time-shift calibration based on surface-wave symmetry, our four-component fusion approach achieves superior robustness and accuracy. The results demonstrate a marked increase in the coherence of the four-component cross-correlations after correction, providing a reliable temporal reference for subsequent multicomponent seismic processing and quality control. Full article
(This article belongs to the Section Geological Oceanography)
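The estimation step described above, weighted least squares for per-node time offsets with a second-difference smoothing penalty, can be illustrated generically as follows; the node count, component weights, and regularization strength are placeholder assumptions, not values from the paper.

```python
# Solve for per-node clock offsets from four-component delay observations,
# penalizing high-frequency spatial oscillations with a second-difference term.
import numpy as np

def estimate_offsets(delays, weights, lam=1.0):
    """delays, weights: arrays of shape (n_components, n_nodes); lam: smoothing strength."""
    n_comp, n_nodes = delays.shape
    # Stack the weighted per-component observations of each node's offset.
    A = np.vstack([np.sqrt(w)[:, None] * np.eye(n_nodes) for w in weights])
    b = np.concatenate([np.sqrt(w) * d for w, d in zip(weights, delays)])
    # Second-difference operator acting along the node axis.
    D = np.diff(np.eye(n_nodes), n=2, axis=0)
    A_reg = np.vstack([A, np.sqrt(lam) * D])
    b_reg = np.concatenate([b, np.zeros(D.shape[0])])
    offsets, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)
    return offsets

rng = np.random.default_rng(4)
true = np.cumsum(rng.normal(0, 2e-3, 30))            # smooth millisecond-scale drift
obs = true + rng.normal(0, 1e-3, (4, 30))            # four noisy component estimates
w = np.ones((4, 30))                                  # component-specific weights
print(np.round(estimate_offsets(obs, w, lam=5.0)[:5], 4))
```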

24 pages, 6317 KB  
Article
Prediction of Multi-Axis Fatigue Life of Metallic Materials Using a Feature-Optimised Hybrid GRU-Attention-DNN Model
by Mi Zhou, Haishen Lu, Yuan Cao, Chunsheng Wang and Dian Chen
Eng 2026, 7(1), 9; https://doi.org/10.3390/eng7010009 (registering DOI) - 29 Dec 2025
Abstract
To address the challenge of simultaneously modelling temporal evolution and static properties in fatigue life prediction, this paper proposes a Hybrid GRU–Attention–DNN model: The Gated Recurrent Unit (GRU) captures time-evolution features, while the attention mechanism adaptively focuses on critical stages. These are then fused with static properties via a fully connected network to generate life estimates. Training and validation were conducted using an 8:2 split, with baselines including Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and GRU. Performance was evaluated using the coefficient of determination (R2), root mean squared error (RMSE), mean absolute error (MAE), and root mean squared logarithmic error (RMSLE), together with error band plots. Results demonstrate that the proposed model outperforms baseline CNN/GRU/LSTM models in overall accuracy and robustness, and that these improvements remain statistically significant according to bootstrap confidence intervals (CI) of R2, RMSE, MAE and RMSLE on the test set. Additionally, this paper conducts an interpretability analysis: attention visualisations reveal the model’s significant emphasis on the early stages of the lifespan. Time window masking experiments further indicate that removing early information causes the most significant performance degradation. Both lines of evidence show high consistency in qualitative and quantitative trends, providing a basis for engineering sampling window design and trade-offs in test duration. Full article
(This article belongs to the Section Electrical and Electronic Engineering)
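A minimal PyTorch sketch of the hybrid architecture outlined in this abstract: a GRU over the loading-history sequence, softmax attention pooling over time steps, and a fully connected head that fuses the pooled summary with static material features. Layer sizes, feature counts, and the output target are assumptions for illustration, not the paper's configuration.

```python
# Illustrative GRU-Attention-DNN: temporal features + static properties -> life estimate.
import torch
import torch.nn as nn

class GRUAttentionDNN(nn.Module):
    def __init__(self, seq_features=6, static_features=4, hidden=64):
        super().__init__()
        self.gru = nn.GRU(seq_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # one attention score per time step
        self.head = nn.Sequential(
            nn.Linear(hidden + static_features, 64), nn.ReLU(),
            nn.Linear(64, 1))                           # predicted (log) fatigue life

    def forward(self, seq, static):
        h, _ = self.gru(seq)                            # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)          # attention weights over time
        context = (w * h).sum(dim=1)                    # weighted temporal summary
        return self.head(torch.cat([context, static], dim=1)).squeeze(-1)

model = GRUAttentionDNN()
life = model(torch.randn(8, 50, 6), torch.randn(8, 4))  # 8 samples, 50 time steps
print(life.shape)                                        # torch.Size([8])
```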

27 pages, 2127 KB  
Article
Positive-Unlabeled Learning in Implicit Feedback from Data Missing-Not-At-Random Perspective
by Sichao Wang, Tianyu Xia and Lingxiao Yang
Entropy 2026, 28(1), 41; https://doi.org/10.3390/e28010041 (registering DOI) - 29 Dec 2025
Abstract
The lack of explicit negative labels is a prevalent challenge in numerous domains, including CV, NLP, and Recommender Systems (RSs). To address this challenge, many negative-sample completion methods have been proposed, such as optimizing the sample distribution through pseudo-negative sampling and confidence screening in CV, constructing reliable negative examples by leveraging textual semantics in NLP, and supplementing negative samples via sparsity analysis of user interaction behaviors and preference inference in RSs handling implicit feedback. However, most existing methods fail to adequately address the Missing-Not-At-Random (MNAR) nature of the data and the potential presence of unmeasured confounders, which compromise model robustness in practice. In this paper, we first formulate the prediction task in RSs with implicit feedback as a positive-unlabeled (PU) learning problem. We then propose a two-phase debiasing framework consisting of exposure-status imputation followed by debiasing through the proposed doubly robust estimator. Moreover, our theoretical analysis shows that existing propensity-based approaches are biased in the presence of unmeasured confounders. To overcome this, we incorporate a robust deconfounding method in the debiasing phase to effectively mitigate the impact of unmeasured confounders. We conduct extensive experiments on three widely used real-world datasets to demonstrate the effectiveness and potential of the proposed methods.
(This article belongs to the Special Issue Causal Inference in Recommender Systems)
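The doubly robust idea referenced above, combining an error-imputation model with inverse-propensity weighting so the estimate stays unbiased if either model is correct, can be sketched in a few lines of numpy; the data and propensities below are synthetic and the function is illustrative rather than the paper's estimator.

```python
# Generic doubly robust estimate of the average prediction loss over all pairs.
import numpy as np

def doubly_robust_loss(loss, imputed_loss, observed, propensity):
    """loss, imputed_loss, propensity: arrays over all user-item pairs;
    observed: 1 where feedback for the pair was actually observed."""
    correction = observed * (loss - imputed_loss) / propensity
    return np.mean(imputed_loss + correction)

rng = np.random.default_rng(5)
n = 10_000
propensity = rng.uniform(0.1, 0.9, n)      # probability a pair is exposed/observed
observed = rng.binomial(1, propensity)
loss = rng.uniform(0, 1, n)                # prediction loss, known only if observed
imputed = loss + rng.normal(0, 0.2, n)     # output of an error-imputation model
print(round(doubly_robust_loss(loss, imputed, observed, propensity), 4))
```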

18 pages, 10258 KB  
Article
A Quantitative Interpretability-Guided Network for Enhanced Wheat Seedling Detection
by Yan Li, Suyi Liu, Xuerui Qi, Yiwei Gao, Xiangxin Zhuang, Jianqing Zhao, Suwan Wang, Yongchao Tian, Yan Zhu, Weixing Cao and Xiaohu Zhang
Agronomy 2026, 16(1), 92; https://doi.org/10.3390/agronomy16010092 (registering DOI) - 29 Dec 2025
Abstract
Seedling count is a key indicator of wheat population during the seedling stage. Accurate seedling detection is therefore vital. Recently, deep learning techniques have been widely used to detect wheat seedlings and estimate seedling numbers from UAV images. However, existing models lack interpretability, resulting in unclear internal processes and making reliable optimization difficult. Consequently, directly applying deep learning for wheat seedling detection often does not yield optimal results. To facilitate this study, we constructed an RGB image dataset captured by the DJI ZENMUSE X4S during the wheat emergence stage. We introduced a model interpretation method that employs network dissection and Gradient-weighted Class Activation Mapping (Grad-CAM) to quantitatively assess the contribution of each network layer to wheat seedling detection. The results show that grayscale inputs elicited the highest responses in the early layers. Based on this, we propose a dual-input wheat seedling detection model that uses both the original image and its grayscale version, fusing their features in the early network layers. The combination of grayscale images improves wheat seedling detection performance. Experimental results demonstrate that our method outperformed the benchmark and traditional object detection techniques, achieving an AP50 of 90.2%, with a recall of 0.88 and a precision of 0.90. The proposed model not only improved AP50 but also reduced the missed detection rate, making wheat detection results more reliable. This approach enhances the robustness, generalization, and interpretability of deep learning models, offering an innovative and highly effective solution for increasing the accuracy and reliability of wheat seedling detection. Full article
(This article belongs to the Section Precision and Digital Agriculture)
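A hedged PyTorch sketch of the dual-input, early-fusion idea described above: the RGB image and its grayscale version are encoded by separate stems whose feature maps are concatenated and fused in the early layers. Channel counts and layer choices are illustrative assumptions, not the published architecture.

```python
# Early fusion of RGB and grayscale feature maps ahead of a detection backbone.
import torch
import torch.nn as nn

class DualInputStem(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_conv = nn.Conv2d(3, 16, 3, padding=1)
        self.gray_conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fuse = nn.Conv2d(32, 32, 1)        # early fusion of both feature maps

    def forward(self, rgb):
        gray = rgb.mean(dim=1, keepdim=True)    # simple channel average as grayscale
        f = torch.cat([torch.relu(self.rgb_conv(rgb)),
                       torch.relu(self.gray_conv(gray))], dim=1)
        return torch.relu(self.fuse(f))         # passed on to the detection head

stem = DualInputStem()
print(stem(torch.randn(2, 3, 256, 256)).shape)  # torch.Size([2, 32, 256, 256])
```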
