Search Results (1,845)

Search Parameters:
Keywords = threshold curve

27 pages, 2093 KB  
Article
Flood Susceptibility Mapping and Runoff Modeling in the Upper Baishuijiang River Basin, China
by Hao Wang, Quanfu Niu, Jiaojiao Lei and Weiming Cheng
Remote Sens. 2026, 18(9), 1270; https://doi.org/10.3390/rs18091270 - 22 Apr 2026
Abstract
Mountain flood susceptibility in complex mountainous basins is strongly influenced by terrain–climate interactions; however, the linkage between spatial susceptibility patterns and hydrological processes remains poorly understood. This study proposes a process-oriented framework that explicitly links flood susceptibility patterns with hydrological processes, moving beyond conventional approaches that rely on independent model integration. The Baishuijiang River Basin, located in Wenxian County, southern Gansu Province, China, is selected as a representative mountainous watershed for this analysis. The specific conclusions are as follows: (1) Flood susceptibility was mapped using a Particle Swarm Optimization (PSO)-enhanced Maximum Entropy (MaxEnt) model based on multi-source environmental variables, including climatic, terrain, soil, land cover, and vegetation factors. The model achieved high predictive accuracy (Area Under the Receiver Operating Characteristic Curve (AUC) = 0.912), identifying precipitation of the driest month (bio14), elevation, and land use as dominant controlling factors. Medium-to-high-susceptibility areas account for approximately 22% of the basin and are mainly distributed along river valleys and flow convergence areas. These patterns are strongly associated with reduced infiltration capacity under dry antecedent conditions and enhanced flow concentration in steep terrain, and they exhibit clear nonlinear responses and threshold effects. (2) Hydrological simulations using Hydrologic Engineering Center–Hydrologic Modeling System (HEC-HMS) show good agreement with observed runoff (Nash–Sutcliffe Efficiency (NSE) = 0.74−0.85). Sensitivity analysis indicates that runoff dynamics are primarily controlled by the Curve Number (CN), recession constant, and ratio to peak, corresponding to infiltration capacity, recession processes, and peak discharge amplification. 
The spatial consistency between high-susceptibility areas and areas of strong runoff response demonstrates that susceptibility patterns can be physically explained through hydrological processes, providing a process-based interpretation rather than a purely statistical prediction. (3) Future projections indicate that medium–high-susceptibility areas remain generally stable but show a gradual expansion (+5.2% ± 0.8%) and increasing concentration along river corridors under climate change scenarios. This reflects intensified precipitation variability and enhanced runoff concentration processes, suggesting a climate-driven amplification of flood risk in hydrologically connected areas. Overall, this study goes beyond conventional susceptibility assessment by establishing a physically interpretable framework that provides a consistent linkage between environmental controls, susceptibility patterns, and hydrological responses. The proposed approach is transferable to similar mountainous basins with strong terrain–climate interactions, although uncertainties related to data limitations and single-basin application remain and require further investigation. Full article
(This article belongs to the Special Issue Remote Sensing for Planetary Geomorphology and Mapping)
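The hydrological validation above is scored with the Nash–Sutcliffe Efficiency (NSE = 0.74–0.85). As a point of reference, the metric reduces to a few lines (a generic sketch, not the authors' code):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of squared model error
    to the variance of the observations. 1.0 is a perfect fit; 0.0 means
    the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot
```

Values of 0.74–0.85, as reported above, indicate that the simulated hydrographs explain most of the observed runoff variance.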
40 pages, 8223 KB  
Article
An Interpretable Fuzzy Distance-Based Ensemble Framework with SHAP Analysis for Clinically Transparent Prediction of Diabetes
by Asif Hassan Syed, Altyeb Altaher Taha, Ahmed Hamza Osman, Yakubu Suleiman Baguda, Hani Moaiteq Aljahdali and Arda Yunianta
Diagnostics 2026, 16(9), 1254; https://doi.org/10.3390/diagnostics16091254 - 22 Apr 2026
Abstract
Background/Objectives: Diabetes is a chronic metabolic disorder affecting global health, where early prediction can significantly reduce disease severity. Methods: This research proposes an interpretable multi-metric fuzzy distance-based ensemble (MMFDE) that integrates multi-variant gradient-boosting classifiers (GBM, LightGBM, XGBoost, and AdaBoost) through a novel fuzzy fusion mechanism designed for intrinsic interpretability. Unlike conventional ensembles relying on opaque averaging or voting, MMFDE transforms base classifier predictions into a high-dimensional fuzzy space quantified via a weighted hybrid distance incorporating Euclidean, Manhattan, Chebyshev, and cosine metrics against ideal diabetic and non-diabetic reference vectors. These distances are translated into membership degrees with the help of exponentially decaying functions, which give clinicians calibrated confidence scores for every prediction. Comprehensive SHAP analysis identifies important clinical risk factors (glucose, BMI, and diabetes pedigree function), which show concordance with the medical literature, thereby giving greater clinical trust. Results: Experimental evaluations on two publicly available datasets, Hospital Frankfurt Germany Diabetes Dataset (HFGDD) and Pima Indians Diabetes Dataset (PIDD), show that MMFDE outperforms all base models with a significant accuracy of 94.83% and Area Under the Curve (AUC) of 97.66% on HFGDD and three different levels of interpretability: geometric transparency via distance-based decisions, confidence-calibrated uncertainty estimates, and feature-level explanations via SHAP. The confidence thresholds enabled in the framework support risk stratification clinical workflows with high-confidence predictions for automated screening and cases with moderate/low confidence flagged out for review by the clinician. 
Conclusions: By demonstrating that high performance and interpretability need not be mutually exclusive, MMFDE advances trustworthy AI for clinical decision support, addressing the critical need for transparent and clinically actionable diabetes prediction systems. Full article
(This article belongs to the Special Issue Explainable Machine Learning in Clinical Diagnostics)
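The fuzzy fusion mechanism described above maps base-classifier outputs to membership degrees via a weighted hybrid of Euclidean, Manhattan, Chebyshev, and cosine distances and an exponentially decaying function. A minimal sketch of that idea follows; the equal weights and unit decay rate are our placeholders, not the paper's calibrated values:

```python
import math

def hybrid_distance(p, ref, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of four distance metrics between a prediction
    vector p and an ideal reference vector ref (weights are placeholders)."""
    eucl = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, ref)))
    manh = sum(abs(a - b) for a, b in zip(p, ref))
    cheb = max(abs(a - b) for a, b in zip(p, ref))
    na = math.sqrt(sum(a * a for a in p))
    nb = math.sqrt(sum(b * b for b in ref))
    dot = sum(a * b for a, b in zip(p, ref))
    cosd = 1.0 - dot / (na * nb) if na and nb else 1.0
    w_e, w_m, w_c, w_cos = weights
    return w_e * eucl + w_m * manh + w_c * cheb + w_cos * cosd

def membership(p, ref, decay=1.0):
    """Exponentially decaying membership: 1.0 at the reference vector,
    falling toward 0 as the hybrid distance grows."""
    return math.exp(-decay * hybrid_distance(p, ref))
```

Membership degrees computed against the diabetic and non-diabetic reference vectors can then serve as the calibrated confidence scores mentioned above.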
11 pages, 3891 KB  
Proceeding Paper
Nose Detection Based on Quadratic Curve Fitting with Geometric–Photometric–Structural Scoring
by Yu-Chen Chen, Shao-Chi Kao and Jian-Jiun Ding
Eng. Proc. 2026, 134(1), 71; https://doi.org/10.3390/engproc2026134071 - 22 Apr 2026
Abstract
An edge-based and curve-based rule-driven nose detection framework is designed to improve the reliability of face detection. The designed framework combines quadratic curve fitting with a calibrated scoring mechanism that fuses geometric, photometric, and structural information into a unified model. These stages jointly enforce symmetry consistency, reliable tip position, and clear wing boundaries. Candidate face regions are first refined by skin filtering and ellipse validation, from which a mid-lower facial ROI is framed for nasal candidate extraction. We further incorporate eye/mouth hints (EyeMap/MouthMap) to restrict the region of interest (ROI) to the region below the eyes, above the mouth, and between the two eyes. When a mouth is detected, this ROI refinement supersedes the chrominance-red (Cr) channel trimming; otherwise, we fall back to the Cr channel horizontal projection to detect dominant mouth peaks and trim the lower-lip band, thereby suppressing lip interference. A multi-threshold Canny procedure with histogram projection is employed to collect multiple nose rectangles by selecting various vertical and horizontal peaks under three adaptive threshold scales. Within each rectangle, edge contours are quadratically fitted and categorized into U-shape (nasal base), N-shape (nostril rim), and C-shape (nasal wings), enabling rule-based selection of the base, wings, and nostrils. The fused features are then processed by a calibrated geometric–photometric–structural scoring module that uses YCbCr contrasts and red/black penalties to suppress lip and eye confounders. Experiments with diverse faces and lighting conditions show accurate and stable nose localization, with notably reliable wing fitting and nasal base detection, improving the accuracy of face detection. Full article
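Quadratic fitting of edge contours, as used above, can be sketched with a least-squares polynomial fit; the curvature-sign classification rule below is a toy assumption for illustration, not the paper's calibrated geometric–photometric–structural scoring:

```python
import numpy as np

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x**2 + b*x + c to edge-contour points.
    np.polyfit returns coefficients from highest degree down."""
    a, b, c = np.polyfit(xs, ys, 2)
    return a, b, c

def classify_contour(a, tol=1e-6):
    """Toy shape rule keyed on curvature sign (our assumption; the paper's
    actual U/N/C categorisation also uses position and photometric cues)."""
    if a > tol:
        return "U-shape"   # opens toward +y: candidate nasal base
    if a < -tol:
        return "N-shape"   # opens toward -y: candidate nostril rim
    return "C-shape"       # near-flat arc: candidate nasal wing
```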
18 pages, 1306 KB  
Article
Impact of Allergic Diseases or Obstructive Sleep Apnea Risk on Severe Mycoplasma pneumoniae Pneumonia in Children: A Clinical Study and Nomogram Construction
by Zonglang Yu, Jingrong Song, Yu Fu, Rui Li, Ruimeng Ma, Tienan Feng, Mengting Zhang, Shuping Jin and Xiaoying Zhang
J. Clin. Med. 2026, 15(8), 3159; https://doi.org/10.3390/jcm15083159 - 21 Apr 2026
Abstract
Background/Objectives: This study aimed to investigate the impact of allergic diseases (AD) or obstructive sleep apnea (OSA) risk, as a host factor, on the development of severe Mycoplasma pneumoniae Pneumonia (SMPP) in children by analyzing the clinical data of pediatric patients with Mycoplasma pneumoniae Pneumonia (MPP). Methods: This retrospective study enrolled children hospitalized with Mycoplasma pneumoniae pneumonia (MPP) at Shanghai Ninth People’s Hospital from November 2024 to November 2025. Patients were classified into severe (SMPP) and mild (MMPP) groups. Demographic, clinical, laboratory, and questionnaire data were collected and compared between groups. Univariate and multivariate logistic regression analyses were performed to identify independent predictors of SMPP and construct a nomogram. The model was validated for discrimination, calibration, and clinical utility using ROC curves, calibration plots, and decision curve analysis, with internal validation by bootstrap resampling. Results: Among the 150 enrolled children with MPP, 35 (23.3%) were classified as severe (SMPP) and 115 (76.7%) as mild (MMPP). Patients with SMPP exhibited significantly higher frequencies of allergic diseases, prolonged fever and steroid use, elevated inflammatory markers (CRP, LDH, D-dimer, ferritin, ALT), and higher PSQ and RQLQ scores (all p < 0.05). Disease severity was positively correlated with these clinical, laboratory, and questionnaire-based parameters. Multivariate logistic regression identified allergic diseases, PSQ score, LDH, and ferritin as independent predictors of SMPP. A nomogram incorporating these four factors demonstrated good predictive performance, with an internally validated C-index of 0.827, satisfactory calibration (Hosmer–Lemeshow p = 0.116), and clinical utility within a 0–25% threshold probability range on decision curve analysis. Conclusions: Children with MPP and comorbid AD or OSA risk are more likely to develop SMPP. 
Among children aged 6–12 years, RQLQ score is positively correlated with the severity of MPP. AD, PSQ score, LDH, and ferritin are independent risk factors for SMPP. Clinicians should be alert to the development of SMPP when children with MPP present with a history of AD, PSQ score >3.5, LDH >327.50 U/L, or ferritin >120.05 ng/mL. The visual nomogram model constructed by combining these risk factors demonstrates improved predictive performance for SMPP, with high predictive efficacy and accuracy. It has great clinical value and can be used for individualized risk assessment and early intervention. However, our proposed nomogram requires external validation prior to broader implementation. Full article
(This article belongs to the Section Clinical Pediatrics)
18 pages, 926 KB  
Article
Research on Threshold Optimization and Variability-Based Digital Biomarker Approaches Through MMSE-Lifelog Multimodal Integrated Analysis from a Clinical Screening Perspective
by Yeeun Park and Jin-hyoung Jeong
Healthcare 2026, 14(8), 1094; https://doi.org/10.3390/healthcare14081094 - 20 Apr 2026
Abstract
Background: Early screening of cognitive impairment is essential for timely clinical intervention; however, conventional cognitive tests such as the Mini-Mental State Examination (MMSE) rely on fixed thresholds that may not be optimal in real-world screening settings. Methods: This study developed a threshold-aware multimodal screening framework integrating MMSE item-level scores with wearable-derived sleep and physical activity lifelog data. A dataset of 174 adults was analyzed and categorized into cognitively normal (CN), mild cognitive impairment (MCI), and dementia, with MCI and dementia combined as an impaired group. A CatBoost-based binary classification model was trained using five-fold cross-validation. The optimal decision threshold was determined by maximizing balanced accuracy using out-of-fold predictions. Results: The optimized threshold (0.49) achieved an accuracy of 0.818 and a balanced accuracy of 0.728 on the validation set. The recall values were 0.885 for CN and 0.571 for the impaired group, with an area under the ROC curve of 0.676. Feature importance and stability analyses showed that variability-related sleep and activity features were consistently selected across folds. Conclusions: These findings suggest that threshold optimization combined with multimodal lifelog integration may improve the interpretability of screening decisions. Variability-based lifelog features may provide complementary information alongside MMSE, although their role remains exploratory and requires further validation in larger and longitudinal cohorts. Full article
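The threshold search described above — maximizing balanced accuracy over out-of-fold predicted probabilities — can be illustrated independently of the CatBoost model (a generic sketch over arbitrary scores, not the study's pipeline):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    sens = tp / pos if pos else 0.0
    spec = tn / neg if neg else 0.0
    return (sens + spec) / 2.0

def best_threshold(y_true, scores):
    """Scan candidate cutoffs (the scores themselves) and keep the one
    maximizing balanced accuracy."""
    best_t, best_ba = 0.5, -1.0
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        ba = balanced_accuracy(y_true, preds)
        if ba > best_ba:
            best_t, best_ba = t, ba
    return best_t, best_ba
```

In the study, this kind of scan over out-of-fold probabilities yielded 0.49 as the operating threshold.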
40 pages, 3164 KB  
Article
Systematic Assessment of Minimum Inter-Event Time Determination Methods and Precipitation Thresholds for Constructing Design-Critical Huff Hyetographs
by Marin Grubišić, Željko Šreng, Jadran Berbić and Tamara Brleković
Water 2026, 18(8), 976; https://doi.org/10.3390/w18080976 - 20 Apr 2026
Abstract
The primary processing of high-resolution precipitation records (5 min and shorter) is crucial for constructing dimensionless design hyetographs and identifying design-critical precipitation scenarios for urban drainage systems. A key step in this process is separating continuous precipitation records into individual precipitation events, typically based on minimum inter-event time (MIT) and precipitation amount thresholds. This separation directly influences the subsequent analysis steps and the accuracy of the design hyetographs. Building upon this foundation, this study systematically analyses how different MIT determination methods influence the construction of dimensionless Huff hyetographs in a moderately humid continental climate. Three approaches for defining MIT were examined: a fixed MIT method (1–12 h), an autocorrelation-based method (AC), and a kernel density estimation approach (KDE). The analysis also considers the effects of minimum precipitation thresholds (P = 1, 3, and 5 mm) and precipitation duration classes (all durations and short-duration events with T ≤ 2 h), utilising a continuous 10-year series of 5 min precipitation data. The results demonstrate that the choice of MIT substantially affects the identified precipitation events, duration, total amount, and the median Huff curve’s shape, especially for precipitation types with early and late maximum intensity. Specifically, increasing MIT values produces longer and deeper events with steeper Huff curves, while precipitation thresholds mainly filter weaker events rather than impacting peak intensities. The AC method yields results similar to larger fixed MIT values (≈6–9 h), whereas the KDE method corresponds to shorter separations (≈1–3 h). To unify the assessment of design relevance, a composite design index combining Huff curve slope and short-term peak intensities was introduced. 
Analysis shows that short-duration convective precipitation with an early maximum is the most critical design scenario. However, late-maximum events (events in which peak intensity occurs in the fourth quartile of storm duration, Type 4) can become equally critical when longer MIT values or autocorrelation-based separation are applied. These findings underscore the importance of a transparent and methodologically consistent definition of precipitation event separation criteria when using dimensionless hyetographs in urban drainage design. Full article
(This article belongs to the Special Issue Changes in Hydrology and Rainfall–Runoff Processes at Watersheds)
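The event-separation step above — a dry spell of at least MIT ends an event, and events below a minimum depth are discarded — reduces to a single pass over the record. An illustrative sketch (times in hours, depths in mm; not the authors' implementation):

```python
def split_events(times, depths, mit_hours, min_depth):
    """Split a precipitation record into events. A dry gap of at least
    mit_hours between wet time steps closes the current event; events
    whose total depth is below min_depth are dropped."""
    events, current, last_wet = [], [], None
    for t, d in zip(times, depths):
        if d <= 0:
            continue  # dry step: only the elapsed gap matters
        if last_wet is not None and t - last_wet >= mit_hours:
            events.append(current)
            current = []
        current.append((t, d))
        last_wet = t
    if current:
        events.append(current)
    return [ev for ev in events if sum(d for _, d in ev) >= min_depth]
```

Raising mit_hours merges neighbouring bursts into longer, deeper events, while raising min_depth filters out weaker events — the two sensitivities examined above.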
13 pages, 615 KB  
Article
Performance of Traditional Cardiovascular Risk Scores and Objective Optimization in Cancer Survivors
by Harsh A. Patel, Saifullah Syed, Pranathi Tella, Harshith Thyagaturu and Brijesh Patel
Curr. Oncol. 2026, 33(4), 230; https://doi.org/10.3390/curroncol33040230 - 19 Apr 2026
Abstract
Introduction: Cardiovascular disease (CVD) is a leading cause of non-cancer death among cancer survivors, attributable to cardiotoxic therapies and cardiovascular risk factors. General population risk prediction tools, including ASCVD (Atherosclerotic cardiovascular disease), Framingham’s Score, and PREVENT (Predicting Risk of Cardiovascular Disease EVENTS), lack cancer-specific variables. We evaluated whether these models, even after statistical optimization, could predict cardiovascular mortality in cancer survivors. Methods: Using the National Health and Nutrition Examination Survey (NHANES) 2001–2018, linked with National Death Index (NDI) mortality data, we conducted a retrospective analysis of 634 and 429 cancer survivors, respectively, across model-specific cohorts free of baseline cardiovascular disease. Discrimination was assessed for ASCVD, Framingham Score, and PREVENT using standardized thresholds of 7.5% and 20%, as well as Youden-optimized cutoffs. Area under the curve (AUC) comparisons were performed using the DeLong non-parametric method. Results: Standard thresholds showed suboptimal discrimination across all models (AUCs: ASCVD 0.56, Framingham 0.53, PREVENT 0.64). In contrast, Youden-optimized cutoffs yielded higher AUCs (ASCVD: 0.68; PREVENT: 0.71; all p < 0.001, DeLong test). Optimization increased the “low-risk” group’s mortality rate from 2.8% to 4.1% (RR = 1.47), suggesting improved statistical fit came at the cost of overestimating the risk. Optimized thresholds outperformed conventional cutoffs, underscoring the necessity for recalibrated, cohort-specific risk stratification in cancer survivors. Conclusions: Standard risk scores have inadequate discrimination for cardiovascular mortality prediction in cancer survivors. Threshold recalibration improves statistical metrics but does not resolve the structural failure of these models to account for cardiotoxic exposure. 
Development of cardio-oncology-specific risk models incorporating oncologic exposures is therefore warranted. Full article
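The Youden-optimized cutoffs referenced above choose the threshold maximizing J = sensitivity + specificity − 1 (a generic sketch, not the study's code):

```python
def youden_cutoff(y_true, scores):
    """Return the score cutoff maximizing Youden's J, and the J value.
    Predictions are positive when score >= cutoff."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= t)
        tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < t)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

As the abstract notes, a cutoff optimized this way improves in-sample metrics but says nothing about whether the underlying risk model transfers to the cohort.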
31 pages, 24709 KB  
Article
Evaluating SAR-Derived Phenological Metrics for Monsoon (Kharif) Crop Monitoring in Diversified Agricultural Systems: Insights from Central India
by Meghavi Prashnani and Chris Justice
Remote Sens. 2026, 18(8), 1238; https://doi.org/10.3390/rs18081238 - 19 Apr 2026
Abstract
Effective crop monitoring during monsoon growing seasons in Central India faces challenges from persistent cloud cover that limits optical remote sensing during critical agricultural periods. This study presents the first attempt to develop a novel set of SAR-derived phenological metrics organized into five thematic categories for monsoon crop discrimination in smallholder agricultural systems. Five major monsoon crops (cotton, rice, maize, soybean, and urad) were analyzed across five different agroclimatic zones in Central India using Sentinel-1 data for the 2021 growing season. Phenological features were extracted from VV, VH polarizations, and their ratio, including seasonal extrema, threshold crossings, duration measures, curve shape descriptors, and area under the curve. Distinct crop-specific signatures were observed, with cotton showing extended phenology and cereal–legume crops displaying compressed, overlapping growth patterns. VV polarization achieved the highest statistical discrimination for intensity-based metrics, with 75% thresholds (VV_HP75V: F = 1287) providing higher separability than other thresholds by capturing near-peak biomass differences. VH performed best for duration and integration-based metrics, while VH/VV provided limited additional separability across metric types. For area-under-the-curve metrics, AUC25 outperformed AUC50 and AUC75 by capturing cumulative backscatter across the broader growing season while remaining robust to soil- and residue-dominated backscatter variability at sowing and harvest. Multiclass classification achieved 48.3% overall accuracy with systematic cereal–legume confusion, reflecting fundamental phenological convergence among monsoon-aligned crops. Cotton achieved the highest performance (F1: 0.79), with VH polarization dominating feature importance (65% of top 20 features). 
Binary classification revealed crop-specific discrimination patterns: cotton was best separated using VV intensity metrics, maize using the VH/VV ratio, and rice using timing-based features. Cross-district transferability showed the highest mean overall accuracy for rice (74%) and cotton (72%), while the remaining crops showed lower accuracy due to their phenological similarity. These findings highlight both the potential and limitations of SAR phenological metrics for monsoon crop discrimination, with effective results for structurally distinct crops but persistent cereal–legume confusion, requiring further investigation with multi-sensor approaches. Full article
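The area-under-the-curve metrics above (AUC25, AUC50, AUC75) integrate the seasonal backscatter curve above a threshold; the exact definition is not spelled out here, so the sketch below is one plausible formulation (threshold as a fraction of the seasonal range, trapezoidal integration — our assumption, not the paper's specification):

```python
def auc_above_threshold(times, values, pct):
    """Area between a seasonal backscatter curve and a threshold placed
    pct percent up the seasonal range, via the trapezoidal rule on the
    clipped series. AUC25 would use pct=25, AUC75 pct=75."""
    lo, hi = min(values), max(values)
    thr = lo + pct / 100.0 * (hi - lo)
    clipped = [max(v - thr, 0.0) for v in values]
    area = 0.0
    for i in range(1, len(times)):
        area += (clipped[i - 1] + clipped[i]) / 2.0 * (times[i] - times[i - 1])
    return area
```

Under this formulation, a low threshold (AUC25) accumulates backscatter across most of the season, consistent with the robustness to sowing- and harvest-time variability described above.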
15 pages, 892 KB  
Article
Spatial Dosimetric-Based Prediction of Long-Term Urinary Toxicity After Permanent Prostate Brachytherapy
by Chaoqiong Ma, Ying Hou, Rajeev Badkul, Jufri Setianegara, Xinglei Shen, Jay Shiao, Harold Li and Ronald C. Chen
Cancers 2026, 18(8), 1287; https://doi.org/10.3390/cancers18081287 - 18 Apr 2026
Abstract
Background: To explore the correlation between spatial dose distribution and post-implant urinary toxicity, aiming to assist decision making in low-dose-rate (LDR) treatment planning, thereby improving patient outcomes. Methods: Eighty-five prostate LDR patients with >12-month follow-up were included. Patient-reported urinary toxicity was collected prospectively using the International Prostate Symptom Score (IPSS) questionnaire, from before implant (baseline) to post-implant follow-up. Patients were then grouped into those whose symptom scores returned to ≤2 points above baseline by 12 months (no long-term toxicity) vs. those who did not (long-term toxicity). A total of 106 features were extracted for each patient, including principal components of dose-volume histograms (DVHs) from multiple prostate subzones, the whole prostate and urethra, along with baseline IPSS, implantation characteristics, and additional DVH indicators for the prostate and the urethra. A machine learning (ML) model incorporating backward feature selection algorithm was developed to predict long-term toxicity status, using a shuffle-and-split validation strategy for model evaluation during feature selection. A univariate statistical analysis was conducted on the model’s selected features. Results: Out of 85 patients, 41 (48%) had long-term urinary toxicity. Seven features were selected during model training, including baseline IPSS and six dosimetric features from several prostate subzones primarily located in the posterior prostate. The model achieved a high mean area under the receiver operating characteristic curve (AUC) of 0.81, with a balanced sensitivity and specificity of 0.78 by adjusting the probability threshold. In univariate analysis, only baseline IPSS and one selected dose feature were significantly correlated with long-term toxicity with AUC < 0.71. 
Conclusions: The proposed ML model, integrating baseline IPSS and spatial dosimetric features, effectively predicts long-term urinary toxicity after prostate LDR. This approach offers a practical method for risk stratification, allowing clinicians to identify patients at elevated risk and prioritize them for targeted preventative measures and closer follow-up. Full article
(This article belongs to the Special Issue The Roles of Deep Learning in Cancer Radiotherapy)
20 pages, 1048 KB  
Article
Soiling Status Detection in Photovoltaic Energy Systems Using Machine Learning and Weather Data for Cleaning Alerts
by Bruno Knevitz Hammerschmitt, João Carlos Jachenski Junior, Leandro Mario, Edwin Augusto Tonolo, Patryk Henrique de Fonseca, Rafael Martini Silva and Natália Pereira Menezes
Energies 2026, 19(8), 1964; https://doi.org/10.3390/en19081964 - 18 Apr 2026
Abstract
Soiling in photovoltaic systems is a recurring problem that reduces energy generation and demands efficient operation and maintenance (O&M) strategies. In this context, this paper proposes a machine learning-based approach to identify dirt levels and generate cleaning alerts using operational and weather data. Initially, the models were evaluated with a decision threshold ranging from 0.5 to 0.7, using only operational features. Subsequently, the inclusion of weather features was tested, which improved the models’ performance and enabled the selection of the best models for the exhaustive features search step. The models analyzed in this step were Extra Trees, Histogram-based Gradient Boosting, Extreme Gradient Boosting, and Random Forest. Exhaustive analysis further improved model performance, as indicated by global metrics and ROC curves. The Extra Trees model with a threshold of 0.5 showed the best performance and was selected as the final configuration, achieving an accuracy of 0.9884 and an AUC-ROC of 0.9957. Finally, the selected model was applied to determine daily soiling levels and trigger alerts based on temporal persistence, indicating its potential to support predictive O&M decisions and cleaning actions in PV systems. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
18 pages, 2701 KB  
Article
An Interpretable and Externally Validated Model for Cardiovascular Disease Risk Assessment in Older Adults
by Madina Suleimenova, Kuat Abzaliyev, Symbat Abzaliyeva and Nargiza Nassyrova
Appl. Sci. 2026, 16(8), 3903; https://doi.org/10.3390/app16083903 - 17 Apr 2026
Abstract
Cardiovascular disease (CVD) risk assessment in older adults requires models that are accurate, clinically interpretable, and able to retain performance in independent populations. This study developed an interpretable machine-learning framework for CVD risk stratification in individuals aged 65 years and older using routinely available clinical factors and a selected biochemical extension and then evaluated its performance in a substantially larger independent external cohort. Model development used a development cohort of 100 patients (Almaty, age ≥ 65) with leakage-free nested cross-validation and out-of-fold (OOF) probabilities. Three internally evaluated configurations were compared: a clinical logistic regression baseline (LR clinical), a biomarker-augmented logistic regression (LR selected), and a nonlinear random forest on the selected feature set (RF selected). Discrimination was assessed using ROC-AUC and PR-AUC; probabilistic accuracy using Brier score and log loss. Calibration was examined using OOF calibration curves with sigmoid calibration for selected models. Decision-analytic utility and exploratory operational thresholds were assessed using Decision Curve Analysis (DCA), yielding a three-tier scale with thresholds t_low = 0.23 and t_high = 0.40. In nested cross-validation, LR clinical achieved ROC-AUC 0.9425 ± 0.0188 and PR-AUC 0.9574 ± 0.0092 with Brier 0.1004 ± 0.0215 and log loss 0.3634 ± 0.0652; LR selected performed worse, while RF selected showed competitive discrimination. External validation on an independent cohort (n = 695) showed retained discrimination (ROC-AUC 0.8355; PR-AUC 0.9376) with acceptable probabilistic accuracy (Brier 0.1131; log loss 0.3760), and recalibration (intercept + slope) slightly improved probability metrics. 
Explainability analyses (odds ratios, permutation importance, SHAP) consistently identified heredity, BMI, physical activity, and diabetes as influential model-associated factors, with clinically plausible directionality. The results suggest that an interpretable model trained on a small geriatric cohort can retain meaningful predictive performance on a substantially larger external cohort, supporting the potential value of transparent risk stratification in older adults, while broader prospective and multi-center validation remains necessary before routine clinical implementation.
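The two post hoc steps this abstract reports, intercept-plus-slope recalibration of external predictions and a three-tier scale at the DCA-derived thresholds t_low = 0.23 and t_high = 0.40, can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the function names and the gradient-descent fit are assumptions, and only the two thresholds come from the abstract.

```python
import math

def logit(p):
    p = min(max(p, 1e-6), 1.0 - 1e-6)  # clip so the log stays finite
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_intercept_slope(probs, labels, lr=0.5, steps=3000):
    """Fit a (intercept) and b (slope) so that sigmoid(a + b * logit(p))
    minimizes log loss on the external cohort (plain gradient descent;
    an illustrative stand-in for a standard logistic recalibration fit)."""
    xs = [logit(p) for p in probs]
    n = len(xs)
    a, b = 0.0, 1.0  # identity recalibration as the starting point
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, labels):
            e = sigmoid(a + b * x) - y  # d(log loss)/d(linear predictor)
            ga += e / n
            gb += e * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

def recalibrate(p, a, b):
    return sigmoid(a + b * logit(p))

def risk_tier(p, t_low=0.23, t_high=0.40):
    """Three-tier scale using the DCA-derived thresholds from the abstract."""
    if p < t_low:
        return "low"
    if p < t_high:
        return "intermediate"
    return "high"
```

Because the recalibration is monotone whenever b > 0, it changes the predicted probabilities (and hence Brier score and log loss) without altering the ranking that ROC-AUC measures, which matches the abstract's report of slightly improved probability metrics with retained discrimination.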

9 pages, 774 KB  
Article
Incremental Value of Adding S100B to NSE for High-Specificity Rule-in of Poor Neurological Outcome After Out-of-Hospital Cardiac Arrest
by Seokjae Hong, Seungho Lee, Jung Soo Park, Jin Hong Min, Changshin Kang and Byung Kook Lee
J. Clin. Med. 2026, 15(8), 3043; https://doi.org/10.3390/jcm15083043 - 16 Apr 2026
Abstract
Background: We evaluated whether adding S100B to NSE improved discrimination or high-specificity rule-in of poor neurological outcome after out-of-hospital cardiac arrest (OHCA). Methods: In this single-center retrospective cohort study, comatose adult OHCA survivors treated with targeted temperature management had NSE and S100B measured at 0, 24, 48, and 72 h after return of spontaneous circulation. At each time point, we assessed NSE alone, S100B alone, and a logistic model combining both biomarkers in paired complete cases. Discrimination was assessed using the area under the receiver operating characteristic curve (AUC). Rule-in performance was evaluated using a timepoint-specific threshold that achieved 100% specificity in our cohort. Poor neurological outcome was defined as cerebral performance category 3–5 at 6 months. Results: Among 124 patients, 66 (53.2%) had poor outcomes. AUCs were similar between NSE alone and the combination across all time points (all p > 0.3). At 48 h, the combination ruled in 46/65 (70.8%) patients with poor outcome versus 36/65 (55.4%) with NSE alone, identifying 10 additional patients and a 15.4-percentage-point difference (95% confidence interval, −5.6 to 23.6). Conclusions: Adding S100B to NSE did not improve overall discrimination. The higher 48 h rule-in yield was estimated imprecisely and should be interpreted cautiously. Our findings require external validation before they can be translated to clinical settings.
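One common way to derive an in-cohort 100%-specificity cutoff of the kind described above is to place it at the highest biomarker value observed among patients with good outcome, so that no good-outcome patient is flagged; the rule-in yield is then the share of poor-outcome patients above that cutoff. A sketch under that assumption (the authors' exact derivation may differ, and the function names are illustrative):

```python
def specificity100_threshold(values_good_outcome):
    """Lowest cutoff with zero false positives in-sample: the maximum
    biomarker value among patients with good neurological outcome."""
    return max(values_good_outcome)

def rule_in_yield(values_poor_outcome, threshold):
    """Fraction of poor-outcome patients ruled in, i.e. sensitivity
    at the in-sample 100%-specificity threshold."""
    flagged = sum(1 for v in values_poor_outcome if v > threshold)
    return flagged / len(values_poor_outcome)
```

Because the cutoff is anchored to the single largest good-outcome value, it is highly sample-dependent, which is one reason yields derived this way are imprecise and need external validation, as the conclusions note.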

18 pages, 2075 KB  
Article
Diagnostic and Clinical Impact of Imaging Modality on PSA Density: TRUS Versus MRI in Gray-Zone Prostate Cancer
by Davut Unsal Capkan and Mehmet Solakhan
Curr. Oncol. 2026, 33(4), 221; https://doi.org/10.3390/curroncol33040221 - 16 Apr 2026
Abstract
Background: This study aimed to compare transrectal ultrasound (TRUS)- and magnetic resonance imaging (MRI)-derived prostate-specific antigen density (PSAD) in patients with gray-zone PSA levels (4–10 ng/mL), evaluate their diagnostic performance for clinically significant prostate cancer (csPCa), and assess the clinical implications of reclassification across commonly used thresholds. Methods: We retrospectively analyzed 202 men who underwent both TRUS and multiparametric MRI between January 2020 and June 2025. Prostate volume was measured using the ellipsoid formula for TRUS and contour-based planimetry for MRI. PSA density (PSAD) was calculated as total PSA (tPSA, ng/mL) divided by prostate volume (mL) for each modality: TRUS-PSAD and MRI-PSAD. Agreement between modalities was evaluated using Bland–Altman plots and correlation analyses. Reclassification at PSAD thresholds of 0.15, 0.20, and 0.30 ng/mL/mL was assessed using Cohen’s κ and net reclassification improvement (NRI). Diagnostic performance for csPCa (ISUP grade group ≥ 2) was evaluated with ROC analysis and the DeLong test. Inter- and intra-observer reproducibility was determined using intraclass correlation coefficients (ICC) and Cohen’s κ. Clinical utility was assessed by decision curve analysis (DCA). Results: MRI-derived prostate volumes were significantly lower than TRUS-derived volumes (median 47.0 vs. 52.5 mL, p < 0.001), resulting in higher MRI-PSAD values (median 0.14 vs. 0.12 ng/mL/mL, p < 0.001). Bland–Altman analysis demonstrated a negative bias for prostate volume (−3.2 mL) and a positive bias for PSAD (+0.03). Strong correlations were observed between TRUS and MRI measurements (r = 0.96 for volume and r = 0.94 for PSAD). MRI-PSAD frequently reclassified patients into higher risk categories, yielding positive net reclassification improvement for cancer cases across all thresholds, while introducing some negative reclassification among non-cancer cases.
ROC analysis showed comparable overall diagnostic performance between TRUS-PSAD and MRI-PSAD (AUC 0.681 vs. 0.679, p = 0.91). However, MRI-PSAD demonstrated higher sensitivity at predefined thresholds at the expense of reduced specificity, reflecting a threshold-dependent shift rather than improved discrimination. Reproducibility was higher for MRI-derived measurements (ICC = 0.94; κ = 0.83) compared with TRUS (ICC = 0.86; κ = 0.71). Decision curve analysis indicated that MRI-PSAD, particularly when combined with PI-RADS ≥ 3, provided the greatest net clinical benefit at lower threshold probabilities (5–15%). Conclusions: MRI-derived PSA density produces systematically higher values than TRUS-based measurements due to inherent differences in prostate volume estimation. While this results in increased sensitivity at standard thresholds, overall discrimination remains unchanged. These findings support the use of modality-specific PSAD thresholds rather than uniform cutoffs across imaging techniques. In clinical practice, MRI-PSAD may provide additional value when interpreted in conjunction with PI-RADS, primarily through improved threshold calibration rather than enhanced diagnostic accuracy.
(This article belongs to the Collection New Insights into Prostate Cancer Diagnosis and Treatment)
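The PSAD calculation described in the methods is simple enough to show directly: TRUS volume via the prolate-ellipsoid approximation V = (π/6) × L × W × H, then PSAD = tPSA / V. A sketch with illustrative function names (the formula and units are from the abstract; the example dimensions are invented):

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Prolate-ellipsoid approximation used with TRUS:
    V (mL) = (pi / 6) * L * W * H, with axes in cm."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def psa_density(tpsa_ng_ml, volume_ml):
    """PSAD = total PSA (ng/mL) / prostate volume (mL), in ng/mL/mL."""
    return tpsa_ng_ml / volume_ml
```

For example, a gray-zone tPSA of 8 ng/mL with a 5 × 5 × 4 cm gland gives a volume of about 52.4 mL and a PSAD of about 0.15 ng/mL/mL, right at the lowest of the three reclassification thresholds, which illustrates why a systematic 3 mL volume difference between modalities can flip risk categories.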

23 pages, 7162 KB  
Article
Causal Interpretation of DBSCAN Algorithm: A Dynamic Modeling for Epsilon Estimation
by K. Garcia-Sanchez, J.-L. Perez-Ramos, S. Ramirez-Rosales, A.-M. Herrera-Navarro, H. Jiménez-Hernández and D. Canton-Enriquez
Entropy 2026, 28(4), 452; https://doi.org/10.3390/e28040452 - 15 Apr 2026
Abstract
DBSCAN is widely used to identify structured regions in unlabeled data, but its performance depends critically on the selection of the neighborhood parameter ε. Traditional heuristics for estimating ε often become unreliable in high-dimensional or varying-density settings because they rely heavily on local geometric criteria and may fail under smooth transitions or topological ambiguity. This work presents a three-level perspective on DBSCAN hyperparameter selection. At the algorithmic level, ε controls neighborhood connectivity and structural transitions in clustering. At the modeling level, the ordered k-distance signal is approximated through a surrogate dynamical estimation framework inspired by a mass–spring–damper system. At the causal level, the resulting estimator is interpreted through interventions on its internal threshold-selection mechanism. The proposed method models the variation of ε using ordinary differential equations defined on the ordered k-distance signal, enabling analysis of structural transitions in density organization via a surrogate dynamical representation. System identification is performed using L-BFGS-B optimization on the smoothed k-distance curve, while the system dynamics are solved with the fourth-order Runge–Kutta method. The resulting estimator identifies transition regions that are structurally informative for ε selection in DBSCAN. To analyze the estimator at the intervention level, Pearl’s do-calculus is used to compute the Average Causal Effect (ACE). The method was evaluated on synthetic benchmarks and on the Covtype dataset, including scenarios with multi-density overlap and dimensionality up to R^10. The resulting ACE values, +0.9352, +0.5148, and +0.9246, indicate that the proposed estimator improves intervention-based ε selection relative to the geometric baseline across the evaluated datasets.
Its practical computational cost is dominated by nearest-neighbor search, behaving approximately as O(N log N) under favorable indexing conditions and degrading toward O(N^2) in high-dimensional or weak-pruning regimes.
(This article belongs to the Special Issue Causal Graphical Models and Their Applications, 2nd Edition)
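The ordered k-distance signal at the core of this abstract, and the geometric baseline the ODE-based estimator is compared against, can be sketched concisely: sort each point's distance to its k-th nearest neighbor and take ε at the sharpest bend. This is an illustrative brute-force version (O(N^2) scan, discrete second difference as the elbow criterion), not the paper's surrogate dynamical method:

```python
import math

def k_distance_curve(points, k):
    """Sorted distance from each point to its k-th nearest neighbor
    (brute-force O(N^2) scan; real pipelines use a spatial index)."""
    curve = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        curve.append(dists[k - 1])
    return sorted(curve)

def elbow_epsilon(curve):
    """Geometric baseline: take epsilon where the ordered curve bends
    most sharply, i.e. at the largest discrete second difference."""
    i = max(range(1, len(curve) - 1),
            key=lambda j: curve[j + 1] - 2 * curve[j] + curve[j - 1])
    return curve[i]
```

On a tight cluster plus one far outlier this returns an ε near the cluster's neighbor spacing; it is exactly the kind of local geometric criterion the abstract says degrades under smooth density transitions, which motivates the surrogate dynamical model.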

21 pages, 5336 KB  
Article
Unveiling the Spatially Heterogeneous Driving Mechanisms of Net Migration in Chinese Cities: A Geographically Weighted Random Forest Approach
by Runhua Huang, Feng Shi and Huichao Guo
Sustainability 2026, 18(8), 3866; https://doi.org/10.3390/su18083866 - 14 Apr 2026
Abstract
As China transitions from rapid urbanization to high-quality development, the competition for population among cities has intensified, characterized by a shift from labor-intensive migration to multi-dimensional lifestyle choices. However, traditional migration models often assume global linearity, failing to capture the complex non-linear thresholds and spatial non-stationarity inherent in migration decisions. This study employs a novel Geographically Weighted Random Forest (GWRF) model to analyze net migration flows across 278 Chinese cities using high-granularity mobile signaling data from the 2020 Spring Festival travel rush. The results reveal that GWRF significantly outperforms traditional OLS, GWR, and global Random Forest models, effectively handling spatial heterogeneity and non-linearity. Wage levels are the dominant global driver, exhibiting a distinct “S-curve” non-linear threshold, while population scale shows a significant U-shaped effect, highlighting the transition from agglomeration economies to congestion costs. Migration drivers exhibit profound spatial heterogeneity: western inland cities are “wage-driven,” the Pearl River Delta is “employment-structure driven,” and the northeastern “Rust Belt” is increasingly sensitive to “innovation investment” (technology expenditure). These findings challenge the “one-size-fits-all” approach to population policy, offering precise, spatially targeted strategies for urban planners to mitigate population shrinkage and enhance urban vitality.
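The "geographically weighted" part of GWRF can be illustrated with a spatial kernel that discounts training cities by distance from a focal location. The toy sketch below uses a weighted mean in place of the locally fitted random forests, so it shows only the weighting idea, not the GWRF algorithm itself; the function names and bandwidth are illustrative:

```python
import math

def gaussian_kernel(distance, bandwidth):
    """Weight decays smoothly with distance from the focal city;
    the bandwidth controls how local the fit is."""
    return math.exp(-(distance / bandwidth) ** 2)

def local_estimate(focal_xy, city_xy, migration_values, bandwidth):
    """Locally weighted estimate at a focal location: each training city
    contributes in proportion to its spatial kernel weight. GWRF replaces
    this weighted mean with a random forest fitted under such weights,
    so the learned driver effects can vary across space."""
    weights = [gaussian_kernel(math.dist(focal_xy, xy), bandwidth)
               for xy in city_xy]
    return sum(w * v for w, v in zip(weights, migration_values)) / sum(weights)
```

This locality is what lets the model recover region-specific drivers (wage-driven west, employment-structure-driven Pearl River Delta) instead of one global effect.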
