Search Results (535)

Search Parameters:
Keywords = sampling probability strategy

19 pages, 715 KB  
Systematic Review
MicroRNA Expression Profile in Endometriosis and Endometriosis-Associated Ovarian Cancer—Systematic Review
by Maria Szubert, Iwona Gabriel, Aleksander Rycerz, Monika Golińska and Jacek R. Wilczyński
Cells 2026, 15(4), 374; https://doi.org/10.3390/cells15040374 - 20 Feb 2026
Viewed by 268
Abstract
Endometriosis-associated ovarian cancer comprises a special group of ovarian cancers that most probably originate from endometriosis foci. Several in vitro studies have shown that microRNA (miRNA) plays an important role in this carcinogenesis. Our goal was to establish whether a distinct miRNA profile can be associated with endometriosis and endometriosis-associated ovarian cancer, whether it points to a causal relationship between them, and whether such a profile could be used clinically to prognose carcinogenesis in endometriosis foci. We conducted a systematic search according to PRISMA guidelines, registered at PROSPERO (number CRD42021245606). The search encompassed the PubMed, Cochrane and Medline databases up to 1 May 2025, and the search strategy included the following [MeSH] terms: ‘miRNAs’ or ‘microRNAs’ or ‘miR’ and ‘ovarian cancer’ and ‘endometriosis’. Our ultimate inclusion criterion was that studies must simultaneously evaluate miRNA expression in endometriosis, regardless of its form and stage, and in endometriosis-associated ovarian cancer (EAOC), as only data generated under identical experimental conditions and using the same controls are truly comparable. The quality of the data was assessed using the Newcastle–Ottawa Scale (NOS) and the ROBINS-I tool. Our final analysis included 13 studies, comprising 608 patients and over 1000 miRNA molecules. Among these, only five manuscripts presented raw data for each miRNA studied. Although several authors declared high sensitivity and specificity for one or more miRNAs in distinguishing between endometriosis and endometriosis-associated ovarian cancer, a meta-analysis could not be performed due to the high heterogeneity of the studied samples. We concluded that there is not enough publicly available raw data to establish a set of miRNAs capable of differentiating between the two diseases and of prognosing carcinogenesis. The greatest limitation lies in the use of different reference gene sets across studies, which makes it impossible to compare relative miRNA expression between them. New data from next-generation sequencing (NGS) experiments would overcome issues related to reference and control genes. Full article
(This article belongs to the Special Issue Molecular Pathogenesis of Ovarian Cancer and Therapeutic Strategies)

14 pages, 2386 KB  
Article
Reliability and Sensitivity Analysis of Liquid Storage Tank Using Active Learning Kriging
by Qingqing Xu, Xue Li and Feng Zhang
Appl. Sci. 2026, 16(4), 1806; https://doi.org/10.3390/app16041806 - 11 Feb 2026
Viewed by 162
Abstract
This study proposes a Kriging surrogate model incorporating active learning to overcome the high computational costs associated with conducting reliability and sensitivity analyses of industrial liquid storage tank structures. In the proposed method, the Kriging surrogate model efficiently captures the functional relationships between basic variables and structural responses. Two learning functions, the U learning function and the EFF learning function, are adopted to screen the candidate sample pool and iteratively select the optimal next training point for the model. This strategy significantly reduces the number of limit state function (LSF) and finite element evaluations required, considerably decreasing the computational cost of the analysis. Results from the liquid storage tank case study demonstrate that the active learning Kriging method can achieve failure probability estimation on the order of 10⁻⁵ with only approximately 100 LSF evaluations. Additionally, it is found that the pressure exerted by the tank contents has the most significant impact on the tank’s structural reliability, followed by tank thickness and then tank radius. Full article
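The U learning function at the core of this active learning scheme can be sketched in a few lines; the posterior means and standard deviations below are hypothetical stand-ins, not outputs of the authors' Kriging model.

```python
import numpy as np

def u_learning_function(mu, sigma):
    # U(x) = |mu(x)| / sigma(x): small U means the surrogate mean is
    # close to the limit state (mu ~ 0) while uncertainty is still high.
    return np.abs(mu) / np.maximum(sigma, 1e-12)

def select_next_sample(mu, sigma):
    # Pick the candidate-pool point whose failure/safe classification
    # is least certain, and evaluate the true model there next.
    return int(np.argmin(u_learning_function(mu, sigma)))

# Hypothetical surrogate predictions over a 5-point candidate pool.
mu = np.array([2.0, 0.1, -1.5, 0.05, 3.0])
sigma = np.array([0.5, 0.2, 0.3, 0.5, 1.0])
next_idx = select_next_sample(mu, sigma)
```

A common stopping rule for this kind of scheme is min U ≥ 2, i.e., roughly a 97.7% probability that every pool point is classified on the correct side of the limit state.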

19 pages, 459 KB  
Review
Sampling Criteria in International Comparative Education Research: A Scoping Review to Inform Health Professions Education
by Franziska König, Doreen Herinek, Franziska Matthes and Michael Ewers
Int. Med. Educ. 2026, 5(1), 24; https://doi.org/10.3390/ime5010024 - 9 Feb 2026
Viewed by 279
Abstract
Health Professions Education (HPE) research is playing an increasing role in ensuring evidence-based practice in HPE. To this end, HPE research uses, among other approaches, comparison as a method in the sense of Comparative Education Research (CER), which makes it possible to compare programs at different levels of education. Obtaining evidence-based results requires a methodologically sound approach with transparent and justifiable sampling strategies as well as defined sampling criteria. The aim of this research is to identify sampling criteria used in CER for program comparisons and to draw conclusions about what HPE research can learn from them. We conducted a scoping review following the Arksey and O’Malley framework, searching three databases and grey literature for international comparative education studies. Four reviewers selected and analyzed the studies using content analysis. A total of 68 studies were included, and six sampling criteria for international CER were identified: (1) culture, (2) education system, (3) curriculum of an education program, (4) ranking, achievement or performance, (5) state and relevance of research, and (6) opportunities and pragmatic reasons. All these criteria appear to be applicable to education research on HPE programs. The derived sampling criteria can serve as a guide for sample selection in international CER and HPE research, providing impetus to improve the quality of research methodology. This necessitates unrestricted access to data on educational programs and a more profound comprehension of the cultural, political and educational characteristics of the respective country. Full article

20 pages, 1239 KB  
Article
Sustainable Selection Criteria for Small Wastewater Treatment Plants Ensuring Biodegradation
by Zbigniew Mucha, Agnieszka Generowicz, Kamil Zieliński, Iga Pietrucha, Anna Kochanek, Piotr Herbut, Paweł Kwaśnicki, Anna Gronba-Chyła and Elżbieta Sobiecka
Water 2026, 18(3), 433; https://doi.org/10.3390/w18030433 - 6 Feb 2026
Viewed by 401
Abstract
The rapid development of rural and peri-urban areas increases the demand for decentralized wastewater treatment systems. Small wastewater treatment plants (SWTPs) with a capacity below 2000 PE are becoming an important element of local water protection and circular-economy strategies, yet clear guidelines for selecting appropriate technologies are still lacking. This study analyzes the criteria used in decision-making for SWTPs from a multi-stakeholder perspective and evaluates the relative importance of technical, economic, environmental and social factors. The research was conducted in Poland and included a survey of 130 respondents representing six stakeholder groups (officials, operators, designers, contractors, scientists and residents). Respondents allocated weights to four main groups of criteria and assessed eleven detailed parameters on a 1–10 scale. The data were analyzed using descriptive statistics, the Kolmogorov–Smirnov test with the Lilliefors correction to verify distribution assumptions, and the Kruskal–Wallis test to examine differences between stakeholder groups. The results show a consistent hierarchy of criteria, with technical reliability, treatment efficiency and operating costs ranked as the most important factors. Social and environmental aspects were assessed as relevant but secondary. Only minor differences between stakeholder groups were observed. The study highlights the need for integrated, multicriteria approaches in SWTP planning, particularly in dispersed rural areas. The findings may support local authorities, designers and investors in technology selection. The research is limited by the non-probability sampling strategy, the national scope of the dataset and the cross-sectional character of the survey. Full article

16 pages, 5881 KB  
Article
Integrating Multisource Environmental and Socioeconomic Drivers to Predict Forest Fire Risk Using a Random Forest Model in Hubei Province, Central China
by Kuan Lu, Ximing Quan, Zixuan Xiong, Byron B. Lamont, Ruifeng Zhang, Xiaobo Xu, Pujie Wei, Weixing Xue, Lin Chen, Zhiqiang Tang, Zhaogui Yan and Xionghui Qi
Forests 2026, 17(2), 224; https://doi.org/10.3390/f17020224 - 6 Feb 2026
Viewed by 175
Abstract
Wildfire susceptibility mapping supports proactive forest management, and estimated predictive performance may vary with spatial dependence and the control-point sampling strategy. We developed an interpretable random-forest framework to map wildfire occurrence probability across Hubei Province, China, by integrating multi-source environmental (meteorological, topographic, and vegetation) and socio-economic predictors. To enhance methodological robustness and address high-dimensional data complexity, the Boruta algorithm was employed for rigorous feature selection, identifying the most significant drivers while filtering out random noise. The model showed strong discrimination on held-out data (AUC = 0.942, accuracy = 87.9%), and variable importance highlighted sunshine duration, elevation, relative humidity, and maximum temperature as dominant predictors. Predicted wildfire probability exhibited a clear east–west gradient; high and very high susceptibility classes covered 22% of forested land while containing 82% of historical fires, indicating priority zones for targeted prevention and resource allocation. These results demonstrate that combining multi-source predictors with machine-learning interpretability can produce actionable susceptibility maps for regional fire-risk management. Full article
(This article belongs to the Special Issue Advanced Technologies for Forest Fire Detection and Monitoring)
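The Boruta-style shadow-feature idea used here for predictor screening can be illustrated with a simplified sketch: each feature must beat the best "shadow" (permuted, signal-free) feature to be kept. |Pearson correlation| stands in for random-forest importance, and the data, seed, and 80%-wins rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def shadow_feature_screen(X, y, n_rounds=20):
    # Keep a feature only if its importance beats the best importance
    # among shadow features (permuted copies with no real signal)
    # in most rounds.
    def importance(M):
        return np.abs([np.corrcoef(M[:, j], y)[0, 1] for j in range(M.shape[1])])

    real_imp = importance(X)
    wins = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_rounds):
        shadows = rng.permuted(X, axis=0)       # break feature-target links
        wins += real_imp > importance(shadows).max()
    return wins >= int(0.8 * n_rounds)

# Hypothetical data: feature 0 drives y, features 1 and 2 are noise.
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=300)
kept = shadow_feature_screen(X, y)
```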

21 pages, 5948 KB  
Article
MuRaF-LULC: A Systematic Multivariate Random Forest Framework for Annual Land-Use and Land-Cover Mapping and Long-Term Change Detection
by Yunuen Reygadas
Land 2026, 15(2), 268; https://doi.org/10.3390/land15020268 - 5 Feb 2026
Viewed by 480
Abstract
Land-use and land-cover (LULC) change is one of the most pervasive drivers of socioenvironmental transformation worldwide. Given its impacts on ecosystems and climate, the systematic analysis of LULC dynamics remains a central objective of land-change science. Despite major advances in Earth observation capabilities, robust, flexible, and scalable algorithms for long-term monitoring remain unevenly adopted, particularly in remote, forested tropical regions. This study introduces the Multivariate Random Forest Land-Use and Land-Cover (MuRaF-LULC) framework, a supervised and generalizable framework that produces annual, multi-class LULC maps from Landsat time series, with interannual change derived through year-to-year comparisons. A key methodological component of the framework is its predictor-selection strategy, in which variable-importance rankings are used to identify an optimized subset of predictors prior to final model training. MuRaF-LULC was implemented in Google Earth Engine (GEE) and evaluated in Guatemala’s Maya Biosphere Reserve (MBR) for the 2018–2024 period using probability-based sampling and uncertainty-aware accuracy assessment and area estimation. Results show that MuRaF-LULC generates robust annual LULC classifications across multiple years (overall accuracy = 0.90–0.92) and reliable estimates of agropecuario expansion (the dominant transition in the study area) when change is assessed over the longer temporal windows for which the framework is best suited and in which transition signals stabilize (producer’s accuracy = 0.97 ± 0.03; user’s accuracy = 0.69 ± 0.05). By prioritizing consistent annual, multiclass LULC trajectories, MuRaF-LULC complements breakpoint- and disturbance-oriented approaches commonly used in land-change studies. 
Implemented in publicly available, well-documented GEE scripts, MuRaF-LULC facilitates policy-relevant LULC assessment by remote sensing practitioners in governmental and private organizations, where reproducibility, clarity, and ease of deployment are as important as methodological sophistication. Full article

26 pages, 2517 KB  
Article
A Simulation-Based Framework for Optimal Force Determination in Construction Robotics: A Case Study of Aluminum Formwork Removal
by Jaemin Kim, Taekyoung Yu, Mideum Lee, Jiyeon Kim, Seulki Lee and Jungho Yu
Buildings 2026, 16(3), 659; https://doi.org/10.3390/buildings16030659 - 5 Feb 2026
Viewed by 229
Abstract
The construction industry is increasingly challenged by an aging workforce and persistent labor shortages, underscoring the need for automation and the integration of construction robotics. However, the high uncertainty and variability of real construction environments impose significant constraints on robot design and deployment. In particular, accurately estimating the required operational force—without unnecessary overdesign—is essential for ensuring operational safety, energy efficiency, and battery endurance. Conducting on-site experiments that reflect diverse field conditions is often impractical, making simulation-based approaches a viable alternative. This study proposes a simulation-driven method for deriving energy-efficient, task-appropriate operational forces for construction robots. As a case study, an aluminum formwork dismantling operation was modeled in NVIDIA Isaac Sim, and a dataset of environmental variables was generated through random sampling. Sensitivity analysis revealed that the dynamic friction coefficient at the aluminum–aluminum interface had the greatest impact on the required dismantling force. To mitigate this influence, a lubrication strategy was introduced to reduce surface friction. With a 10% safety margin applied, the dismantling operation achieved a 99.5% success probability at an operational force of 50 N, representing an 11.71 N reduction and an 18.97% decrease compared to the non-lubricated scenario. These results demonstrate a practical and evidence-based approach for optimizing operational forces in construction robotics, contributing to reduced energy consumption, improved operational efficiency, and mitigation of construction schedule delays. Full article
(This article belongs to the Special Issue Large-Scale AI Models Across the Construction Lifecycle)
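The force-selection logic (a target success probability plus a 10% safety margin) can be sketched as follows; the simulated force demands are hypothetical stand-ins, not the paper's Isaac Sim data.

```python
import numpy as np

rng = np.random.default_rng(42)

def required_force(demands, target_success=0.995, safety_margin=0.10):
    # Smallest applied force whose success probability (fraction of
    # sampled environments needing <= that force) meets the target,
    # plus a safety margin on top.
    return float(np.quantile(demands, target_success)) * (1.0 + safety_margin)

def success_probability(demands, applied_force):
    return float(np.mean(demands <= applied_force))

# Hypothetical simulated dismantling-force demands (N) under randomized
# friction and geometry conditions.
demands = rng.normal(loc=35.0, scale=5.0, size=10_000)
force = required_force(demands)
p = success_probability(demands, force)
```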

18 pages, 2702 KB  
Article
A Dual-Branch Ensemble Learning Method for Industrial Anomaly Detection: Fusion and Optimization of Scattering and PCA Features
by Jing Cai, Zhuo Wu, Runan Hua, Shaohua Mao, Yulun Zhang, Ran Guo and Ke Lin
Appl. Sci. 2026, 16(3), 1597; https://doi.org/10.3390/app16031597 - 5 Feb 2026
Viewed by 223
Abstract
Industrial visual anomaly detection remains challenging because practical inspection systems must achieve high detection accuracy while operating under highly imbalanced data, diverse defect patterns, limited computational resources, and increasing demands for interpretability. This work aims to develop a lightweight yet effective and explainable anomaly detection framework for industrial images in settings where a limited number of labeled anomalous samples are available. We propose a dual-branch feature-based supervised ensemble method that integrates complementary representations: a PCA branch to capture linear global structure and a scattering branch to model multi-scale textures. A heterogeneous pool of classical learners (SVM, RF, ET, XGBoost, and LightGBM) is trained on each feature branch, and stable probability outputs are obtained via stratified K-fold out-of-fold training, probability calibration, and a quantile-based threshold search. Decision-level fusion is then performed by stacking, where logistic regression, XGBoost, and LightGBM serve as meta-learners over the out-of-fold probabilities of the selected top-K base learners. Experiments on two public benchmarks (MVTec AD and BTAD) show that the proposed method substantially improves the best PCA-based single model, achieving relative F1-score gains of approximately 31% (MVTec AD) and 26% (BTAD), with maximum AUC values of about 0.91 and 0.96, respectively, under comparable inference complexity. Overall, the results demonstrate that combining high-quality handcrafted features with supervised ensemble fusion provides a practical and interpretable alternative or complement to heavier deep models for resource-constrained industrial anomaly detection, and future work will explore more category-adaptive decision strategies to further enhance robustness on challenging classes. Full article
(This article belongs to the Special Issue AI and Data-Driven Methods for Fault Detection and Diagnosis)
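The quantile-based threshold search mentioned in this pipeline can be sketched as follows, assuming calibrated anomaly probabilities and a roughly known anomaly rate; all values are illustrative.

```python
import numpy as np

def quantile_threshold(probs, anomaly_rate):
    # Set the decision threshold so that roughly `anomaly_rate` of the
    # calibration set is flagged: the (1 - rate) quantile of the
    # calibrated anomaly probabilities.
    return float(np.quantile(probs, 1.0 - anomaly_rate))

def flag_anomalies(probs, threshold):
    return probs >= threshold

# Hypothetical calibrated probabilities: 95 normal-ish, 5 anomalous-ish.
rng = np.random.default_rng(1)
probs = np.concatenate([rng.uniform(0.0, 0.4, 95), rng.uniform(0.8, 1.0, 5)])
thr = quantile_threshold(probs, anomaly_rate=0.05)
flags = flag_anomalies(probs, thr)
```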

17 pages, 1755 KB  
Article
An Extremum-Based BP Neural Network Method and Its Application in Time-Dependent Structural System Reliability Analysis
by Guijie Li, Yimian He, Lai Zhang and Guangqing Xia
Aerospace 2026, 13(2), 146; https://doi.org/10.3390/aerospace13020146 - 3 Feb 2026
Viewed by 190
Abstract
Time-dependent structural systems (TDSSs) in engineering involve high dimensionality, nonlinearity, and complex uncertainties, complicating the reliability analysis compared to time-independent assessments. To address these challenges, this paper proposes an extremum-based back propagation neural network (BPNN) method for TDSS reliability analysis. The method adopts a double-loop structure. Specifically, the inner loop finds the minimum of the time-dependent performance function for a given realization of the random variables. This transformation converts the time-dependent problem into an equivalent time-invariant one. Then, the outer loop constructs a BPNN surrogate model to map the relationship between the random variables and the performance function minima. To improve computational efficiency, an adaptive sample selection strategy is integrated into the training process. This technique selects samples near the failure boundary to iteratively update the BPNN, ensuring high accuracy with a small training set. Once the stopping criterion is satisfied, the failure probability is estimated using Monte Carlo simulation (MCS). The trained BPNN model is used to rapidly predict the extremum for the large-scale sample pool. The proposed method is verified through three practical engineering cases: a four-bar mechanism, an aero-engine turbine disc, and a cantilever tube. Results show that the method remains accurate and efficient. The successful applications confirm the rationality and engineering applicability of the proposed model. Full article
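The double-loop idea (inner extremum over time, outer Monte Carlo on the extremum) can be sketched with a toy performance function in place of the BPNN surrogate; g(x, t) = x − sin(t) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x, t):
    # Toy time-dependent performance function (failure when g <= 0),
    # standing in for the structural model / BPNN surrogate.
    return x - np.sin(t)

def failure_probability(n_samples=20_000):
    t_grid = np.linspace(0.0, 2 * np.pi, 200)
    xs = rng.normal(size=n_samples)              # outer loop: random inputs
    # Inner loop: minimum of g over the time interval for each sample,
    # converting the time-dependent problem into a time-invariant one.
    g_min = np.min(g(xs[:, None], t_grid[None, :]), axis=1)
    return float(np.mean(g_min <= 0.0))

pf = failure_probability()
```

For this toy case min over t of (x − sin t) is x − 1, so the exact answer is P(X ≤ 1) = Φ(1) ≈ 0.841, which the Monte Carlo estimate should approach.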

29 pages, 378 KB  
Article
Associations Between Restorative Justice Practices, Music Therapy, and Social Reintegration Among Adolescent Offenders in Peru: An Observational Study
by Luis Ángel Espinoza-Pajuelo, Edison Menacho-Taipe, Johnny William Mogollon-Longa, Allan Alexander Muñoz-Linares, Jose Mario Ochoa-Pachas, Jhony Wilber Ravelo-Perez, Jorge Luis Caro-Gonzalo and Roberto Christian Puente-Jesus
Soc. Sci. 2026, 15(2), 76; https://doi.org/10.3390/socsci15020076 - 30 Jan 2026
Viewed by 413
Abstract
Restorative justice within the juvenile justice system has gained increasing attention as an alternative to punitive approaches, particularly in relation to the social reintegration of adolescents in conflict with the law, while complementary interventions such as music therapy are often implemented to support emotional regulation, social skills, and personal development within restorative contexts. This observational, cross-sectional study examined the associations between restorative justice practices, participation in music therapy, and indicators of social reintegration among 317 adolescents involved in restorative programs in Peru. Data were collected using a structured survey composed of ordinal-scale items assessing dimensions of restorative practices, engagement in music therapy, and perceived social reintegration, with the instrument demonstrating satisfactory internal consistency. Statistical associations were analysed using Somers’ d, a non-parametric measure appropriate for assessing ordinal associations in observational research. The results revealed statistically significant and directionally consistent associations between restorative justice practices and social reintegration outcomes, as well as positive associations between participation in music therapy and higher levels of reported social reintegration. These findings should be interpreted in light of the study’s cross-sectional design and non-probability sampling strategy, which limit causal inference and generalizability. While the results are consistent with the potential relevance of integrating music-based activities within restorative contexts, future research employing experimental or longitudinal designs is required to examine causal mechanisms and long-term effects and to further clarify the role of therapeutic interventions in supporting the social reintegration of justice-involved adolescents. Full article
(This article belongs to the Special Issue Criminal Justice Responses to Juvenile Delinquency)
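Somers' d, the ordinal association measure used in this study, reduces to counting concordant, discordant, and tied pairs; the tiny score vectors below are hypothetical illustrations.

```python
from itertools import combinations

def somers_d(x, y):
    # Somers' d of y given x: (C - D) / (C + D + T_y), where C/D count
    # concordant/discordant pairs and T_y counts pairs tied on y only.
    # Pairs tied on x are excluded from the denominator.
    c = d = ty = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        if x1 == x2:
            continue                      # tied on the independent variable
        if y1 == y2:
            ty += 1                       # tied on the dependent variable only
        elif (x1 - x2) * (y1 - y2) > 0:
            c += 1                        # concordant
        else:
            d += 1                        # discordant
    return (c - d) / (c + d + ty)

# Hypothetical ordinal scores: restorative-practice level vs. reintegration.
practice = [1, 2, 3, 4, 5]
reintegration = [1, 3, 2, 4, 5]
stat = somers_d(practice, reintegration)  # 9 concordant, 1 discordant -> 0.8
```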
13 pages, 980 KB  
Article
Cost-Effectiveness of a Quality of Life Predictor to Guide Psychosocial Support in Breast Cancer
by Tuukka Hakkarainen, Ira Haavisto, Mikko Nuutinen, Yrjänä Hynninen, Paula Poikonen-Saksela, Johanna Mattson, Haridimos Kondylak, Eleni Kolokotroni, Ketti Mazzocco, Berta Sousa, Isabel Manica, Ruth Pat-Horenczyk and Riikka-Leena Leskelä
Cancers 2026, 18(3), 439; https://doi.org/10.3390/cancers18030439 - 29 Jan 2026
Viewed by 246
Abstract
Introduction: Women with breast cancer experience psychological distress, and resilience-strengthening psychosocial support may improve their quality of life (QoL). Identifying those at risk of low QoL is challenging. This study evaluated the cost-effectiveness of a machine learning-based QoL predictor to support clinical decision-making regarding psychosocial support (sample size: 660). Methods: A decision tree cost–utility model was developed to compare four decision-making strategies in offering psychosocial support: the clinician alone, the QoL predictor alone, the clinician supported by the predictor, and no prediction with no psychosocial support. QoL after one year was used as a proxy for resilience. Costs, health outcomes, and net monetary benefits (NMBs) were estimated using a one-year time horizon. Incremental cost-effectiveness ratios (ICERs) were calculated and dominance assessed. A societal scenario analysis incorporated productivity losses. A probabilistic sensitivity analysis generated cost-effectiveness acceptability curves. Results: Clinicians supported by the QoL predictor produced the highest NMB (EUR 16,349) and the greatest quality-adjusted life year (QALY) gain (0.759), with an ICER of EUR 22,892 compared with the next least costly strategy. Clinician-only prediction and predictor-only approaches were dominated or extendedly dominated. Under the societal perspective, all strategies produced negative NMB values due to productivity losses, but the overall ranking remained unchanged. The probabilistic sensitivity analysis showed that the combined clinician and predictor strategy had a 69% probability of being cost-effective at a willingness to pay threshold of EUR 30,000. Conclusions: Combining clinician judgement with the machine learning-based QoL predictor improved the targeting of psychosocial support and was the most cost-effective strategy. 
Further prospective and comparative studies are needed to confirm its long-term effectiveness and cost-effectiveness in clinical practice. Full article
(This article belongs to the Special Issue Cost-Effectiveness Studies in Cancers)
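The NMB and ICER quantities reported above follow from standard definitions; the strategy costs and QALY gains below are hypothetical, not the study's inputs.

```python
def net_monetary_benefit(wtp, qalys, cost):
    # NMB = willingness-to-pay per QALY x QALYs gained - cost.
    return wtp * qalys - cost

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    # Incremental cost per incremental QALY vs. the next least costly
    # non-dominated strategy.
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical strategies: (cost in EUR, QALYs gained).
no_support = (2_000, 0.60)
combined = (4_500, 0.70)   # e.g., clinician supported by a predictor

wtp = 30_000  # EUR per QALY
nmb = net_monetary_benefit(wtp, combined[1], combined[0])
ratio = icer(combined[0], combined[1], no_support[0], no_support[1])
# A strategy is cost-effective at this threshold when its ICER < wtp.
```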

20 pages, 2389 KB  
Article
A Monocular Depth Estimation Method for Autonomous Driving Vehicles Based on Gaussian Neural Radiance Fields
by Ziqin Nie, Zhouxing Zhao, Jieying Pan, Yilong Ren, Haiyang Yu and Liang Xu
Sensors 2026, 26(3), 896; https://doi.org/10.3390/s26030896 - 29 Jan 2026
Viewed by 414
Abstract
Monocular depth estimation, which derives depth information of a scene from a single image, is one of the key tasks in autonomous driving and a fundamental component of vehicle perception and decision-making. However, current approaches face challenges such as visual artifacts, scale ambiguity and occlusion handling. These limitations lead to suboptimal performance in complex environments, reducing model efficiency and generalization and hindering broader use in autonomous driving and other applications. To address these challenges, this paper introduces a Neural Radiance Field (NeRF)-based monocular depth estimation method for autonomous driving. It introduces a Gaussian probability-based ray sampling strategy to effectively solve the problem of massive sampling points in large complex scenes and reduce computational costs. To improve generalization, a lightweight spherical network incorporating a fine-grained adaptive channel attention mechanism is designed to capture detailed pixel-level features. These features are subsequently mapped to 3D spatial sampling locations, resulting in diverse and expressive point representations that improve the generalizability of the NeRF model. Our approach exhibits remarkable performance on the KITTI benchmark, surpassing traditional methods in depth estimation tasks. This work contributes significant technical advancements for practical monocular depth estimation in autonomous driving applications. Full article
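A Gaussian probability-based ray sampling step can be sketched as follows; the depth estimate, uncertainty, and near/far bounds are illustrative assumptions, and this is a minimal stand-in for the paper's method rather than its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_ray_samples(depth_mu, depth_sigma, near, far, n_samples):
    # Concentrate ray samples near an expected surface depth by drawing
    # from a Gaussian instead of uniformly over [near, far].
    z = rng.normal(depth_mu, depth_sigma, size=n_samples)
    z = np.clip(z, near, far)
    return np.sort(z)

# Hypothetical ray: coarse depth estimate 12 m with 1.5 m uncertainty.
z_vals = gaussian_ray_samples(depth_mu=12.0, depth_sigma=1.5,
                              near=0.5, far=60.0, n_samples=64)
```

Compared with 64 uniform samples over [0.5, 60] m, almost all of these land within a few sigma of the expected surface, so far fewer points are spent on empty space.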

26 pages, 2618 KB  
Article
A Cascaded Batch Bayesian Yield Optimization Method for Analog Circuits via Deep Transfer Learning
by Ziqi Wang, Kaisheng Sun and Xiao Shi
Electronics 2026, 15(3), 516; https://doi.org/10.3390/electronics15030516 - 25 Jan 2026
Viewed by 304
Abstract
In nanometer integrated-circuit (IC) manufacturing, advanced technology scaling has intensified the effects of process variations on circuit reliability and performance. Random fluctuations in parameters such as threshold voltage, channel length, and oxide thickness further degrade design margins and increase the likelihood of functional failures. These variations often lead to rare circuit failure events, underscoring the importance of accurate yield estimation and robust design methodologies. Conventional Monte Carlo yield estimation is computationally infeasible as millions of simulations are required to capture failure events with extremely low probability. This paper presents a novel reliability-based circuit design optimization framework that leverages deep transfer learning to improve the efficiency of repeated yield analysis in optimization iterations. Based on pre-trained neural network models from prior design knowledge, we utilize model fine-tuning to accelerate importance sampling (IS) for yield estimation. To improve estimation accuracy, adversarial perturbations are introduced to calibrate uncertainty near the model decision boundary. Moreover, we propose a cascaded batch Bayesian optimization (CBBO) framework that incorporates a smart initialization strategy and a localized penalty mechanism, guiding the search process toward high-yield regions while satisfying nominal performance constraints. Experimental validation on SRAM circuits and amplifiers reveals that CBBO achieves a computational speedup of 2.02×–4.63× over state-of-the-art (SOTA) methods, without compromising accuracy or robustness. Full article
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)

26 pages, 5754 KB  
Article
Heatmap-Assisted Reinforcement Learning Model for Solving Larger-Scale TSPs
by Guanqi Liu and Donghong Xu
Electronics 2026, 15(3), 501; https://doi.org/10.3390/electronics15030501 - 23 Jan 2026
Viewed by 261
Abstract
Deep reinforcement learning (DRL)-based algorithms for solving the Traveling Salesman Problem (TSP) have demonstrated competitive potential against traditional heuristic algorithms on small-scale TSP instances. However, as the problem size increases, the NP-hard nature of the TSP leads to exponential growth of the combinatorial search space, state–action space explosion, and sharply increased sample complexity, which together cause significant performance degradation when most existing DRL-based models are applied directly to large-scale instances. This research proposes a two-stage reinforcement learning framework, termed GCRL-TSP (Graph Convolutional Reinforcement Learning for the TSP), consisting of a heatmap generation stage based on a graph convolutional neural network and a heatmap-assisted Proximal Policy Optimization (PPO) training stage, in which the generated heatmaps serve as auxiliary guidance for policy optimization. First, we design a divide-and-conquer heatmap generation strategy: a graph convolutional network infers m-node sub-heatmaps, which are then merged into a global edge-probability heatmap. Second, we integrate the heatmap into PPO by augmenting the state representation and restricting the action space to high-probability edges, improving training efficiency. On standard instances with 200/500/1000 nodes, GCRL-TSP achieves a Gap% of 4.81/4.36/13.20 (relative to Concorde) with runtimes of 36 s/1.12 min/4.65 min. Experimental results show that GCRL-TSP solves more than twice as fast as competing TSP algorithms while obtaining comparable solution quality on instances ranging from 200 to 1000 nodes. Full article
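The action-space restriction the abstract describes can be sketched independently of the GCN and PPO machinery: given an edge-probability heatmap, a tour constructor keeps only the k unvisited neighbours with the highest heatmap score at each step and samples among them in proportion to those scores. The helper below and its hand-made 5-city ring heatmap are illustrative assumptions, not the paper's learned components.

```python
import random

def heatmap_greedy_tour(heatmap, k=3, seed=0):
    # heatmap[i][j]: (learned) probability that edge (i, j) belongs to the
    # optimal tour. At each step, the restricted action space is the k
    # unvisited cities with the highest heatmap score from the current city;
    # the next city is sampled among them in proportion to that score.
    rng = random.Random(seed)
    n = len(heatmap)
    tour, visited = [0], {0}
    while len(tour) < n:
        cur = tour[-1]
        cand = sorted((j for j in range(n) if j not in visited),
                      key=lambda j: heatmap[cur][j], reverse=True)
        top = cand[:k]
        weights = [heatmap[cur][j] for j in top]
        if sum(weights) > 0:
            nxt = rng.choices(top, weights=weights)[0]
        else:
            nxt = top[0]  # all remaining edges scored zero: fall back to first
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Toy 5-city heatmap favouring the ring 0-1-2-3-4-0.
hm = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    hm[i][(i + 1) % 5] = 0.9
    hm[i][(i - 1) % 5] = 0.9
tour = heatmap_greedy_tour(hm, k=2)
print(tour)
```

Pruning the action set from n − 1 candidates to k is what keeps the per-step policy evaluation tractable as instances grow; the quality of the result then hinges entirely on how well the heatmap ranks tour edges.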
(This article belongs to the Section Artificial Intelligence)

28 pages, 564 KB  
Article
CONFIDE: CONformal Free Inference for Distribution-Free Estimation in Causal Competing Risks
by Quang-Vinh Dang, Ngoc-Son-An Nguyen and Thi-Bich-Diem Vo
Mathematics 2026, 14(2), 383; https://doi.org/10.3390/math14020383 - 22 Jan 2026
Viewed by 195
Abstract
Accurate prediction of individual treatment effects in survival analysis is often complicated by the presence of competing risks and the inherent unobservability of counterfactual outcomes. While machine learning models offer improved discriminative power, they typically lack rigorous guarantees for uncertainty quantification, which are essential for safety-critical clinical decision-making. In this paper, we introduce CONFIDE (CONFormal Inference for Distribution-free Estimation), a novel framework that bridges causal inference and conformal prediction to construct valid prediction sets for cause-specific cumulative incidence functions. Unlike traditional confidence intervals for population-level parameters, CONFIDE provides individual-level prediction sets for time-to-event outcomes, which are more clinically actionable for personalized treatment decisions by directly quantifying uncertainty in future patient outcomes rather than uncertainty in population averages. By integrating semi-parametric hazard estimation with targeted bias correction strategies, CONFIDE generates calibrated prediction sets that cover the true potential outcome with a user-specified probability, irrespective of the underlying data distribution. We empirically validate our approach on four diverse medical datasets, demonstrating that CONFIDE achieves competitive discrimination (C-index up to 0.83) while providing robust finite-sample marginal coverage guarantees (e.g., 85.7% coverage on the Bone Marrow Transplant dataset). We note two key limitations: (1) coverage may degrade under heavy censoring (>40%) unless inverse probability of censoring weighted (IPCW) conformal quantiles are used, as demonstrated in our sensitivity analysis; (2) while the method guarantees marginal coverage averaged over the covariate distribution, conditional coverage for specific covariate values is theoretically impossible without structural assumptions, though practical approximations via locally adaptive calibration can improve conditional performance. Our framework effectively enables trustworthy personalized risk assessment in complex survival settings. Full article
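The distribution-free coverage mechanism underlying conformal methods is easiest to see in the plain split-conformal regression setting: a held-out calibration set yields a residual quantile that widens point predictions into intervals with finite-sample marginal coverage under exchangeability. The sketch below uses a toy regression with assumed Gaussian noise; it is not CONFIDE's survival/competing-risks construction, and all names are illustrative.

```python
import math
import random

def split_conformal_interval(cal_y, cal_pred, test_pred, alpha=0.1):
    # Split conformal prediction: absolute residuals on a held-out
    # calibration set give a distribution-free quantile q_hat such that
    # [pred - q_hat, pred + q_hat] covers a fresh outcome with probability
    # at least 1 - alpha, under exchangeability of calibration and test data.
    scores = sorted(abs(y - p) for y, p in zip(cal_y, cal_pred))
    n = len(scores)
    # Finite-sample corrected quantile index: ceil((n + 1)(1 - alpha)).
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q_hat = scores[k]
    return test_pred - q_hat, test_pred + q_hat

rng = random.Random(1)
def truth(x):          # "oracle" point predictor for the toy model y = 2x + noise
    return 2.0 * x

cal_x = [rng.uniform(0, 10) for _ in range(500)]
cal_y = [truth(x) + rng.gauss(0, 1) for x in cal_x]
cal_pred = [truth(x) for x in cal_x]

# Empirical marginal coverage over fresh draws should land near 1 - alpha = 0.9.
covered, trials = 0, 2000
for _ in range(trials):
    x = rng.uniform(0, 10)
    y = truth(x) + rng.gauss(0, 1)
    lo, hi = split_conformal_interval(cal_y, cal_pred, truth(x), alpha=0.1)
    covered += lo <= y <= hi
rate = covered / trials
print(rate)
```

Note that the guarantee checked here is exactly the *marginal* one the abstract's second limitation describes: coverage holds on average over the covariate distribution, not conditionally at each covariate value.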
(This article belongs to the Special Issue Statistical Models and Their Applications)
