Search Results (2,303)

Search Parameters:
Keywords = extreme gradient boosting

30 pages, 11715 KB  
Article
A Hybrid Framework for Detecting Gold Mineralization Zones in G.R. Halli, Western Dharwar Craton, Karnataka, India
by P. V. S. Raju, Venkata Sai Mudili and Avatharam Ganivada
Minerals 2025, 15(11), 1125; https://doi.org/10.3390/min15111125 - 28 Oct 2025
Abstract
Mineral prospectivity mapping (MPM) is a powerful approach for identifying mineralization zones with high potential for economically viable mineral deposits. This study proposes a hybrid framework combining a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP), a Convolutional Neural Network (CNN), and a Fuzzy-Kernel Extreme Learning Machine (FKELM) to address the challenges of imbalanced and uncertain datasets in mineral exploration. The approach was applied to the G.R. Halli gold prospect in the Chitradurga Schist Belt, Western Dharwar Craton, India, using nine geochemical pathfinder elements. WGAN-GP generated high-quality negative samples, balancing the dataset and reducing overfitting. Compared with Support Vector Machines, Gradient Boosting, and a baseline CNN, FKELM (AUC = 0.976, accuracy = 92%) and WGAN-GP + CNN (AUC = 0.973, accuracy = 91%) showed superior performance and produced geologically coherent prospectivity maps. Promising gold targets were delineated, closely aligned with known mineralized zones and geochemical anomalies. This hybrid framework offers a robust, cost-effective, and scalable MPM solution for structurally controlled geological tracts and data-scarce terrains, and can integrate additional geoscience datasets for other complex mineral systems.

22 pages, 6015 KB  
Article
Data-Driven Estimation of Reference Evapotranspiration in Paraguay from Geographical and Temporal Predictors
by Bilal Cemek, Erdem Küçüktopçu, Maria Gabriela Fleitas Ortellado and Halis Simsek
Appl. Sci. 2025, 15(21), 11429; https://doi.org/10.3390/app152111429 - 25 Oct 2025
Abstract
Reference evapotranspiration (ET0) is a fundamental variable for irrigation scheduling and water management. Conventional estimation methods, such as the FAO-56 Penman–Monteith equation, are of limited use in developing regions where meteorological data are scarce. This study evaluates the potential of machine learning (ML) approaches to estimate ET0 in Paraguay using only geographical and temporal predictors: latitude, longitude, altitude, and month. Five algorithms were tested: artificial neural networks (ANNs), k-nearest neighbors (KNN), random forest (RF), extreme gradient boosting (XGB), and adaptive neuro-fuzzy inference systems (ANFISs). The framework consisted of ET0 calculation, baseline model testing (ML techniques), ensemble modeling, leave-one-station-out validation, and spatial interpolation by inverse distance weighting. ANFIS achieved the highest prediction accuracy (R2 = 0.950, RMSE = 0.289 mm day−1, MAE = 0.202 mm day−1), while RF and XGB showed stable and reliable performance across all stations. Spatial maps highlighted strong seasonal variability, with higher ET0 values in the Chaco region in summer and lower values in winter. These results confirm that ML algorithms can generate robust ET0 estimates under data-constrained conditions and provide scalable, cost-effective solutions for irrigation management and agricultural planning in Paraguay.
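The leave-one-station-out validation step in the abstract above can be sketched in plain Python: each weather station is held out once as the test set while the remaining stations form the training set, mimicking ET0 prediction at a location with no local measurements. The station names and single-record rows below are hypothetical stand-ins, not data from the study.

```python
def leave_one_station_out(records):
    """records: list of (station, features, target) tuples.
    Yields (held_out_station, train_rows, test_rows) for every station."""
    stations = sorted({station for station, _, _ in records})
    for held_out in stations:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield held_out, train, test

# Toy dataset: three hypothetical stations, one record each
# (feature = mean temperature, target = ET0 in mm/day).
data = [
    ("Asuncion", [25.3], 4.1),
    ("Encarnacion", [27.1], 4.6),
    ("Filadelfia", [28.9], 5.8),
]

splits = list(leave_one_station_out(data))
```

In the paper's setting a model would be refit on `train` and scored on `test` inside the loop; only the splitting logic is shown here.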

23 pages, 10676 KB  
Article
Hourly and 0.5-Meter Green Space Exposure Mapping and Its Impacts on the Urban Built Environment
by Yan Wu, Weizhong Su, Yingbao Yang and Jia Hu
Remote Sens. 2025, 17(21), 3531; https://doi.org/10.3390/rs17213531 - 24 Oct 2025
Abstract
Accurately mapping urban residents’ exposure to green space at high spatiotemporal resolutions is essential for assessing disparities across blocks and improving urban environmental planning. In this study, we developed a framework to generate hourly green space exposure maps at 0.5 m resolution using multiple sources of remote sensing data and an Object-Based Image Classification with Graph Convolutional Network (OBIC-GCN) model. Taking the main urban area of Nanjing, China, as the study area, we proposed a Dynamic Residential Green Space Exposure (DRGE) metric to reveal disparities in green space access across four housing-price blocks. The Palma ratio was employed to characterize the inequity of DRGE, while XGBoost (eXtreme Gradient Boosting) and SHAP (SHapley Additive exPlanations) methods were used to explore the impacts of built-environment factors on DRGE. We found a significant difference between daytime and nighttime DRGE, with values higher after 6:00 than at night. Mean DRGE on weekends was about 1.5 times higher than on workdays, and DRGE in high-priced blocks was about twice that in low-priced blocks. More than 68% of residents in high-priced blocks experienced over 8 h of green space exposure during weekend nighttime (especially around 19:00), far more than in low-priced blocks. Moreover, spatial inequality in residents’ green space exposure was more pronounced on weekends than on workdays, with lower-priced blocks exhibiting greater inequality (Palma ratio: 0.445 vs. 0.385). Furthermore, green space morphology, quantity, and population density were identified as the critical factors affecting DRGE. The optimal threshold for Percent of Landscape (PLAND) was 25–70%, while building density, height, and Sky View Factor (SVF) were negatively correlated with DRGE. These findings address current research gaps by considering population mobility, capturing green space supply and demand inequities, and providing scientific decision-making support for future urban green space equality and planning.
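The Palma ratio used in the abstract above has a simple definition: the total exposure held by the top 10% of the population divided by that held by the bottom 40%. A minimal sketch, with made-up exposure values rather than the study's data:

```python
def palma_ratio(values):
    """Palma ratio: total held by the top 10% of observations divided by
    the total held by the bottom 40% (higher means more unequal)."""
    v = sorted(values)
    n = len(v)
    bottom = v[: int(n * 0.4)]            # bottom 40% of observations
    top = v[n - max(1, int(n * 0.1)):]    # top 10%, at least one observation
    return sum(top) / sum(bottom)

# Ten hypothetical residents' weekly green-space exposure hours.
exposure = [1, 1, 1, 1, 2, 2, 3, 3, 4, 10]
ratio = palma_ratio(exposure)  # top resident: 10 h vs. 4 h for bottom four -> 2.5
```

A value of 0.25 would mean the top 10% and bottom 40% hold exposure exactly in proportion to their population shares; the study's 0.445 vs. 0.385 contrast indicates greater inequality in lower-priced blocks.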
(This article belongs to the Special Issue Remote Sensing Applications in Urban Environment and Climate)

22 pages, 3311 KB  
Article
Machine Learning-Based Prediction of Root-Zone Temperature Using Bio-Based Phase-Change Material in Greenhouse
by Hasan Kaan Kucukerdem and Hasan Huseyin Ozturk
Sustainability 2025, 17(21), 9455; https://doi.org/10.3390/su17219455 - 24 Oct 2025
Abstract
This study experimentally investigates the impact of using coconut oil (CO) as a phase-change material (PCM) for heat storage on the root-zone temperature within a greenhouse in Adana, Türkiye. It examines the efficacy of the PCM as a latent heat-storage material and predicts root-zone temperature using three machine learning algorithms. The dataset consists of 2658 hourly records with six variables, collected from February to April 2022. The greenhouse with PCM showed a remarkable increase in both ambient (0.9–4.1 °C) and root-zone temperatures (1.1–1.6 °C) compared to a conventional greenhouse, especially during periods without sunlight. The machine learning algorithms used in this study are Multivariate Adaptive Regression Splines (MARS), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGBoost). Hyperparameter tuning was performed for all three models to control model complexity, flexibility, learning rate, and regularization, thereby preventing overfitting and underfitting. On the test data, R2 values from lowest to highest were MARS (0.95), SVR (0.96), and XGBoost (0.97). The results emphasize the potential of machine learning approaches for applying thermal energy storage systems to agricultural greenhouses, and the study provides insight into a net-zero-energy greenhouse approach that stores heat in a bio-based PCM, alongside its implementation and operational procedures.

17 pages, 2557 KB  
Article
System Inertia Cost Forecasting Using Machine Learning: A Data-Driven Approach for Grid Energy Trading in Great Britain
by Maitreyee Dey, Soumya Prakash Rana and Preeti Patel
Analytics 2025, 4(4), 30; https://doi.org/10.3390/analytics4040030 - 23 Oct 2025
Abstract
As modern power systems integrate more renewable and decentralised generation, maintaining grid stability has become increasingly challenging. This study proposes a data-driven machine learning framework for forecasting system inertia service costs, a key yet underexplored variable influencing energy trading and frequency stability in Great Britain. Using eight years (2017–2024) of National Energy System Operator (NESO) data, four models are comparatively analysed: Long Short-Term Memory (LSTM), Residual LSTM, eXtreme Gradient Boosting (XGBoost), and Light Gradient-Boosting Machine (LightGBM). LSTM-based models capture temporal dependencies, while the ensemble methods effectively handle nonlinear feature relationships. Results demonstrate that LightGBM achieves the highest predictive accuracy, offering a robust method for inertia cost estimation and market intelligence. The framework contributes to strategic procurement planning and supports market design for a more resilient, cost-effective grid.
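Gradient boosting, the principle behind both XGBoost and LightGBM compared above (and behind most models in this results list), can be illustrated with a toy regressor that repeatedly fits a depth-1 tree (a stump) to the current residuals. This is a didactic sketch of squared-loss boosting on 1-D inputs, not the libraries' actual histogram-based, regularized implementations.

```python
def fit_stump(x, residuals):
    """Best single-threshold stump minimizing squared error on 1-D inputs."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue  # a split must leave points on both sides
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    return best[1:]  # (threshold, left_value, right_value)

def boost(x, y, rounds=60, lr=0.3):
    """Squared-loss gradient boosting: each stump fits the residuals left by
    the ensemble so far; the learning rate shrinks each stump's contribution."""
    base = sum(y) / len(y)
    stumps, pred = [], [base] * len(y)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, residuals)
        stumps.append((t, lv, rv))
        pred = [p + lr * (lv if xi <= t else rv) for xi, p in zip(x, pred)]
    return base, stumps, lr

def predict(model, xi):
    base, stumps, lr = model
    return base + sum(lr * (lv if xi <= t else rv) for t, lv, rv in stumps)

# Toy step-shaped target: the ensemble converges to the two plateau values.
model = boost([1, 2, 3, 4], [1, 1, 3, 3])
```

Each round shrinks the residuals geometrically (by a factor of 1 − lr on this separable toy data), which is why even this tiny ensemble fits the step function almost exactly.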
(This article belongs to the Special Issue Business Analytics and Applications)

22 pages, 7295 KB  
Article
An Artificial Intelligence-Driven Precipitation Downscaling Method Using Spatiotemporally Coupled Multi-Source Data
by Chao Li, Long Ma, Xing Huang, Chenyue Wang, Xinyuan Liu, Bolin Sun and Qiang Zhang
Atmosphere 2025, 16(11), 1226; https://doi.org/10.3390/atmos16111226 - 22 Oct 2025
Abstract
Addressing the challenges posed by sparse ground meteorological stations and the insufficient resolution and accuracy of reanalysis and satellite precipitation products, this study establishes a multi-source environmental feature system that precisely matches the target precipitation data resolution (1 km × 1 km). On this foundation, it proposes a Random Forest-based Dual-Spectrum Adaptive Threshold algorithm (RF-DSAT) for key-factor screening and integrates a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU) to construct a Spatiotemporally Coupled Bias Correction Model for multi-source data (CGBCM). By combining these components, it presents an Artificial Intelligence-driven Multi-source data Precipitation Downscaling method (AIMPD), capable of downscaling precipitation fields from 0.1° × 0.1° to 1 km × 1 km resolution. Taking the bend region of the Yellow River Basin in China as a case study, AIMPD outperforms bicubic interpolation, eXtreme Gradient Boosting (XGBoost), CNN, and Long Short-Term Memory (LSTM) networks, achieving improvements of approximately 1.73% to 40% in Nash–Sutcliffe Efficiency (NSE). It is particularly accurate for extreme precipitation downscaling while significantly improving computational efficiency, offering novel insights for global precipitation downscaling research.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

16 pages, 1300 KB  
Article
Multi-Class Segmentation and Classification of Intestinal Organoids: YOLO Stand-Alone vs. Hybrid Machine Learning Pipelines
by Luana Conte, Giorgio De Nunzio, Giuseppe Raso and Donato Cascio
Appl. Sci. 2025, 15(21), 11311; https://doi.org/10.3390/app152111311 - 22 Oct 2025
Abstract
Background: The automated analysis of intestinal organoids in microscopy images is essential for high-throughput morphological studies, enabling precision and scalability. Traditional manual analysis is time-consuming and subject to observer bias, whereas Machine Learning (ML) approaches have recently demonstrated superior performance. Purpose: This study aims to evaluate YOLO (You Only Look Once) for organoid segmentation and classification, comparing its standalone performance with a hybrid pipeline that integrates DL-based feature extraction and ML classifiers. Methods: The dataset, consisting of 840 light microscopy images and over 23,000 annotated intestinal organoids, was divided into training (756 images) and validation (84 images) sets. Organoids were categorized into four morphological classes: cystic non-budding organoids (Org0), early organoids (Org1), late organoids (Org3), and spheroids (Sph). YOLO version 10 (YOLOv10) was trained as a segmenter-classifier for the detection and classification of organoids. Performance metrics for YOLOv10 as a standalone model included Average Precision (AP), mean AP at 50% overlap (mAP50), and the confusion matrix, evaluated on the validation set. In the hybrid pipeline, the trained YOLOv10 model produced segmented bounding boxes, and features extracted from these regions using YOLOv10 and ResNet50 were classified with ML algorithms, including Logistic Regression, Naive Bayes, K-Nearest Neighbors (KNN), Random Forest, eXtreme Gradient Boosting (XGBoost), and Multi-Layer Perceptrons (MLP). The performance of these classifiers was assessed using the Receiver Operating Characteristic (ROC) curve and its corresponding Area Under the Curve (AUC), precision, F1 score, and confusion matrix metrics. Principal Component Analysis (PCA) was applied to reduce feature dimensionality while retaining 95% of cumulative variance. To optimize the classification results, an ensemble approach based on AUC-weighted probability fusion was implemented to combine predictions across classifiers. Results: YOLOv10 as a standalone model achieved an overall mAP50 of 0.845, with high AP across all four classes (range 0.797–0.901). In the hybrid pipeline, features extracted with ResNet50 outperformed those extracted with YOLO, with multiple classifiers achieving AUC scores ranging from 0.71 to 0.98 on the validation set. Among all classifiers, Logistic Regression emerged as the best-performing model, achieving the highest AUC scores across multiple classes (range 0.93–0.98). Feature selection using PCA did not improve classification performance. The AUC-weighted ensemble method further enhanced performance by leveraging the strengths of multiple classifiers, as demonstrated by improved ROC-AUC scores across all organoid classes (range 0.92–0.98). Conclusions: This study demonstrates the effectiveness of YOLOv10 as a standalone model and the robustness of hybrid pipelines combining ResNet50 feature extraction and ML classifiers. Logistic Regression emerged as the best-performing classifier, achieving the highest ROC-AUC across multiple classes. This approach ensures reproducible, automated, and precise morphological analysis, with significant potential for high-throughput organoid studies and live imaging applications.
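The AUC-weighted probability fusion described in the ensemble step above can be sketched directly: each classifier's class-probability vector is averaged with weights proportional to its validation AUC. The two classifier outputs and AUC weights below are illustrative numbers, not values from the study.

```python
def auc_weighted_fusion(prob_sets, aucs):
    """Fuse per-classifier class-probability vectors into one prediction,
    weighting each classifier by its validation AUC (weights normalized)."""
    total = sum(aucs)
    n_classes = len(prob_sets[0])
    return [
        sum(auc * probs[k] for probs, auc in zip(prob_sets, aucs)) / total
        for k in range(n_classes)
    ]

# One organoid scored over the four classes (Org0, Org1, Org3, Sph) by two
# hypothetical classifiers; the AUC weights are made up for illustration.
p_logreg = [0.70, 0.10, 0.10, 0.10]
p_knn = [0.40, 0.30, 0.20, 0.10]
fused = auc_weighted_fusion([p_logreg, p_knn], aucs=[0.95, 0.80])
```

Because the weights are normalized, the fused vector still sums to 1, and classifiers with higher validation AUC pull the prediction harder toward their own class ranking.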

17 pages, 2899 KB  
Article
Hyperspectral Imaging for Quality Assessment of Processed Foods: A Case Study on Sugar Content in Apple Jam
by Danila Lissovoy, Alina Zakeryanova, Rustem Orazbayev, Tomiris Rakhimzhanova, Michael Lewis, Huseyin Atakan Varol and Mei-Yen Chan
Foods 2025, 14(21), 3585; https://doi.org/10.3390/foods14213585 - 22 Oct 2025
Abstract
Apple jam is a widely consumed all-season product. The quality of the jam is closely related to its sugar concentration, which affects its taste, texture, shelf life, and legal compliance with production requirements. Although traditional methods for measuring sugar, such as titration, enzymatic methods, and chromatography, are accurate, they are also invasive, destructive, and unsuitable for rapid screening. This study investigates a non-destructive and non-invasive alternative that uses hyperspectral imaging (HSI) in combination with machine learning to estimate the sugar content in processed apple products. Eight cultivars were selected from Central Asia, a region recognized as the origin of apples and known for its rich diversity of apple cultivars. A total of 88 jam samples were prepared with sugar concentrations ranging from 25% to 75%. For each sample, several hyperspectral images were obtained using a visible-to-near-infrared (VNIR) camera. The acquired spectral data were then processed and analyzed using regression models, including a support vector machine (SVM), eXtreme gradient boosting (XGBoost), and a one-dimensional residual network (1D ResNet). Among them, the 1D ResNet achieved the highest prediction accuracy (R2 = 0.948). The results highlight the potential of HSI and machine learning for fast, accurate, and non-invasive assessment of sugar content in processed foods.

22 pages, 1939 KB  
Article
Development and Validation of Prognostic Models for Treatment Response of Patients with B-Cell Lymphoma: Standard Statistical and Machine-Learning Approaches
by Adugnaw Zeleke Alem, Itismita Mohanty, Nalini Pati, Cameron Wellard, Eliza Chung, Eliza A. Hawkes, Zoe K. McQuilten, Erica M. Wood, Stephen Opat and Theophile Niyonsenga
J. Clin. Med. 2025, 14(20), 7445; https://doi.org/10.3390/jcm14207445 - 21 Oct 2025
Abstract
Background: Achieving a complete response after therapy is an important predictor of long-term survival in lymphoma patients. However, previous predictive models have primarily focused on overall survival (OS) and progression-free survival (PFS), often overlooking treatment response. Predicting the likelihood of complete response before initiating therapy can provide more immediate and actionable insights. This study therefore aims to develop and validate predictive models for treatment response to first-line therapy in patients with B-cell lymphomas. Methods: The study used 2763 patients from the Lymphoma and Related Diseases Registry (LaRDR). The data were randomly divided into training (n = 2221, 80%) and validation (n = 553, 20%) cohorts. Seven algorithms were evaluated: logistic regression, K-nearest neighbor, support vector machine, random forest, Naïve Bayes, gradient boosting machine, and extreme gradient boosting. Model performance was assessed using discrimination and classification metrics. Additionally, model calibration and clinical utility were evaluated using the Brier score and decision curve analysis, respectively. Results: All models demonstrated comparable performance in the validation cohort, with area under the curve (AUC) values ranging from 0.69 to 0.70. A nomogram incorporating six variables, namely stage, lactate dehydrogenase, performance status, BCL2 expression, anemia, and systemic immune-inflammation index, achieved an AUC of 0.70 (95% CI: 0.65–0.75), outperforming the international prognostic index (IPI: AUC = 0.65), revised IPI (AUC = 0.61), and NCCN-IPI (AUC = 0.63). Decision curve analysis confirmed the nomogram’s superior net benefit over IPI-based systems. Conclusions: While our nomogram demonstrated improved discriminative performance and clinical utility compared to IPI-based systems, further external validation is needed before clinical integration.
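The AUC values compared throughout this abstract can be computed without plotting a ROC curve: the AUC equals the Mann–Whitney probability that a randomly chosen responder is scored above a randomly chosen non-responder. A minimal sketch with made-up scores:

```python
def roc_auc(scores, labels):
    """AUC as the Mann–Whitney statistic: the fraction of (positive, negative)
    pairs in which the positive case gets the higher score (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos
        for n in neg
    )
    return wins / (len(pos) * len(neg))

# Four hypothetical patients: predicted response probability and true label.
auc = roc_auc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1])  # 3 of 4 pairs ordered correctly
```

On this scale, 0.5 is chance-level ranking, so the gap between the nomogram (0.70) and the IPI variants (0.61–0.65) is a gain in pairwise ranking accuracy.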
(This article belongs to the Section Oncology)

15 pages, 987 KB  
Article
Predicting Mortality in Non-Variceal Upper Gastrointestinal Bleeding: Machine Learning Models Versus Conventional Clinical Risk Scores
by İzzet Ustaalioğlu and Rohat Ak
J. Clin. Med. 2025, 14(20), 7425; https://doi.org/10.3390/jcm14207425 - 21 Oct 2025
Abstract
Background/Objectives: Non-variceal upper gastrointestinal bleeding (NVUGIB) is associated with considerable morbidity and mortality, particularly in emergency department (ED) settings. While traditional clinical scores such as the Glasgow-Blatchford Score (GBS), AIMS65, and Pre-Endoscopic Rockall are widely used for risk stratification, their accuracy in mortality prediction is limited. This study aimed to evaluate the performance of multiple supervised machine learning (ML) models in predicting 30-day all-cause mortality in NVUGIB and to compare these models with established risk scores. Methods: A retrospective cohort study was conducted on 1233 adult patients with NVUGIB who presented to the ED of a tertiary center between January 2022 and January 2025. Clinical and laboratory data were extracted from electronic records. Seven supervised ML algorithms—logistic regression, ridge regression, support vector machine, random forest, extreme gradient boosting (XGBoost), naïve Bayes, and artificial neural networks—were trained using six feature selection techniques, yielding 42 distinct models. Performance was assessed using AUROC, F1-score, sensitivity, specificity, and calibration metrics. Traditional scores (GBS, AIMS65, Rockall) were evaluated in parallel. Results: Among the cohort, 96 patients (7.8%) died within 30 days. The best-performing ML model (XGBoost with univariate feature selection) achieved an AUROC > 0.80 and an F1-score of 0.909, significantly outperforming all traditional scores (highest AUROC: Rockall, 0.743; p < 0.001). ML models demonstrated higher sensitivity and specificity, with improved calibration. Key predictors consistently included age, comorbidities, hemodynamic parameters, and laboratory markers. The best-performing ML models demonstrated very high apparent AUROC values (up to 0.999 in internal analysis), substantially exceeding conventional scores. These results should be interpreted as apparent performance estimates, likely optimistic in the absence of external validation. Conclusions: While machine-learning models showed markedly higher apparent discrimination than conventional scores, these findings are based on a single-center retrospective dataset and require external multicenter validation before clinical implementation.

15 pages, 2661 KB  
Article
Biological Interpretable Machine Learning Model for Predicting Pathological Grading in Clear Cell Renal Cell Carcinoma Based on CT Urography Peritumoral Radiomics Features
by Dingzhong Yang, Haonan Mei, Panpan Jiao and Qingyuan Zheng
Bioengineering 2025, 12(10), 1125; https://doi.org/10.3390/bioengineering12101125 - 20 Oct 2025
Abstract
Background: The purpose of this study was to investigate the value of machine learning models for preoperative non-invasive prediction of International Society of Urological Pathology (ISUP) grading in clear cell renal cell carcinoma (ccRCC) based on CT urography (CTU)-derived peritumoral area (PAT) radiomics features. Methods: We retrospectively analysed 328 ccRCC patients from our institution, along with an external validation cohort of 175 patients from The Cancer Genome Atlas. A total of 1218 radiomics features were extracted from contrast-enhanced CT images, with LASSO regression used to select the most predictive features. We employed four machine learning models, namely Logistic Regression (LR), Multilayer Perceptron (MLP), Support Vector Machine (SVM), and Extreme Gradient Boosting (XGBoost), for training and evaluation using Receiver Operating Characteristic (ROC) analysis. Model performance was assessed in training, internal validation, and external validation sets. Results: The XGBoost model demonstrated consistently superior discriminative ability across all datasets, achieving AUCs of 0.95 (95% CI: 0.92–0.98) in the training set, 0.93 (95% CI: 0.89–0.96) in the internal validation set, and 0.92 (95% CI: 0.87–0.95) in the external validation set. The model significantly outperformed LR, MLP, and SVM (p < 0.001) and demonstrated prognostic value (log-rank p = 0.018). Transcriptomic analysis of model-stratified groups revealed distinct biological signatures, with high-grade predictions showing significant enrichment in metabolic pathways (DPEP3/THRSP) and immune-related processes (lymphocyte-mediated immunity, MHC complex activity). These findings suggest that peritumoral imaging characteristics provide valuable biological insights into tumor aggressiveness. Conclusions: Machine learning models based on PAT radiomics features from CTU demonstrated significant value for non-invasive preoperative prediction of ISUP grading in ccRCC, with the XGBoost model showing the best predictive ability. This non-invasive approach may enhance preoperative risk stratification and guide clinical decision-making, reducing reliance on invasive biopsy procedures.
(This article belongs to the Special Issue New Sights of Machine Learning and Digital Models in Biomedicine)

23 pages, 9764 KB  
Article
Ecoregion-Based Landslide Susceptibility Mapping: A Spatially Partitioned Modeling Strategy for Oregon, USA
by Zhixiang Xu, Peng Zuo, Wen Zhao, Zeyu Zhou, Xiangyu Shao, Junpo Yu, Haize Yu, Weijie Wang, Junwei Gan, Jinshun Duan and Jiming Jin
Appl. Sci. 2025, 15(20), 11242; https://doi.org/10.3390/app152011242 - 20 Oct 2025
Abstract
Conventional non-partitioned Landslide Susceptibility Mapping (LSM), which neglects geospatial heterogeneity, often fails to accurately capture local risk patterns. To address this challenge, this study investigated the effectiveness of localized modeling in the environmentally diverse state of Oregon, USA, by comparing ecoregion-based local models with a non-partitioned model. We partitioned Oregon into seven distinct units using the U.S. Environmental Protection Agency (EPA) Level III Ecoregions and developed one global and seven local models with the eXtreme Gradient Boosting (XGBoost) algorithm. A comprehensive evaluation framework, including the Area Under the Curve (AUC), Landslide Density (LD), and the Total Deviation Index (TDI), was used to compare the models. The results demonstrated the clear superiority of the partitioned strategy. Moreover, different ecoregions were found to have distinct dominant landslide conditioning factors, revealing strong spatial non-stationarity. Although all models generated high AUC values (>0.93), LD analysis showed that the local models were significantly more efficient at identifying high-risk zones. This advantage was particularly pronounced in critical, landslide-prone western areas; for instance, in the Willamette–Georgia–Puget Lowland, the local model’s LD value in the ‘very high’ susceptibility class was over 3.5 times that of the global model. High TDI values (some >35%) further confirmed fundamental spatial discrepancies between the risk maps produced by the two strategies. This research substantiates that, in geographically complex terrains, partitioned modeling is an effective approach for more accurate and reliable LSM, providing a scientific basis for targeted regional disaster mitigation policies.

35 pages, 14047 KB  
Article
Wildfire Susceptibility Mapping Using Deep Learning and Machine Learning Models Based on Multi-Sensor Satellite Data Fusion: A Case Study of Serbia
by Uroš Durlević, Velibor Ilić and Aleksandar Valjarević
Fire 2025, 8(10), 407; https://doi.org/10.3390/fire8100407 - 20 Oct 2025
Abstract
To prevent or mitigate the negative impact of fires, spatial prediction maps of wildfires are created to identify susceptible locations and the key factors that influence fire occurrence. This study uses artificial intelligence models, specifically machine learning (XGBoost) and deep learning (Kolmogorov–Arnold Networks (KANs) and a Deep Neural Network (DNN)), with data obtained from multi-sensor satellite imagery (MODIS, VIIRS, Sentinel-2, Landsat 8/9) for the spatial modeling of wildfires in Serbia (88,361 km²). Based on geographic information systems (GIS) and 199,598 wildfire samples, 16 quantitative variables (geomorphological, climatological, hydrological, vegetational, and anthropogenic) are presented, together with 3 synthesis maps and an integrated susceptibility map combining the 3 applied models. The results show varying percentages of Serbia classified as very highly vulnerable to wildfires (XGBoost = 11.5%; KAN = 14.8%; DNN = 15.2%; Ensemble = 12.7%). Among the applied models, the DNN achieved the highest predictive performance (Accuracy = 83.4%, ROC-AUC = 92.3%), followed by XGBoost and KANs, both of which also demonstrated strong predictive accuracy (ROC-AUC > 90%). These results confirm the robustness of deep and machine learning approaches for wildfire susceptibility mapping in Serbia. SHAP analysis determined that the most influential factors are elevation, air temperature, and the humidity regime (precipitation, aridity, and runs of consecutive dry/wet days). Full article
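The ROC-AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen fire location receives a higher susceptibility score than a randomly chosen non-fire location. A minimal stdlib illustration of that equivalence, using toy scores rather than the study's data:

```python
def roc_auc(scores, labels):
    """ROC-AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive example scores
    higher, with ties counting as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from a hypothetical susceptibility model (1 = fire pixel).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(scores, labels))  # 8 of 9 pairs ranked correctly: 8/9 ≈ 0.889
```

An AUC above 0.90, as all three models achieve, means over 90% of such fire/non-fire pairs are ranked correctly.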

15 pages, 1536 KB  
Article
Evaluation of the Risk of Urinary System Stone Recurrence Using Anthropometric Measurements and Lifestyle Behaviors in a Developed Artificial Intelligence Model
by Hikmet Yasar, Kadir Yildirim, Mucahit Karaduman, Bayram Kolcu, Mehmet Ezer, Ferhat Yakup Suceken, Fatih Bicaklioğlu, Mehmet Erhan Aydin, Coskun Kaya, Muhammed Yildirim and Kemal Sarica
Diagnostics 2025, 15(20), 2643; https://doi.org/10.3390/diagnostics15202643 - 20 Oct 2025
Abstract
Background/Objectives: Urinary system stone disease is an important health problem both clinically and economically due to its high recurrence rates. In this study, an innovative hybrid approach based on deep learning is proposed to predict the recurrence risk of stone disease. Methods: Patient data were divided into three subsets: anthropometric measurements (Part A), derived body composition indices (Part B), and other clinical and demographic information (Part C). Each data subset was processed with autoencoder models, and low-dimensional, meaningful features were extracted. The obtained features were combined, and classification was performed using four different machine learning algorithms: Extreme Gradient Boosting (XGBoost), Cubic Support Vector Machines (Cubic SVM), the k-Nearest Neighbor algorithm (KNN), and Decision Tree (DT). Results: According to the experimental results, the highest classification performance was obtained with the XGBoost algorithm. The proposed approach contributes to the literature by offering a novel solution that simplifies early risk estimation for stone disease recurrence, and it demonstrates how structural feature engineering and deep representation learning can be integrated in clinical prediction problems. Conclusions: Predicting the risk of stone recurrence in advance is of great importance for improving patients' quality of life, reducing unnecessary diagnostic evaluations, and lowering treatment costs. Full article
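The fusion step in the pipeline above—concatenating the low-dimensional codes produced by the three per-subset autoencoders before handing them to a classifier—can be sketched in plain Python. The latent vectors below are placeholders standing in for autoencoder bottleneck outputs, not values from the paper:

```python
def fuse_features(part_a_code, part_b_code, part_c_code):
    """Concatenate per-subset latent codes into a single feature
    vector, mirroring the Part A / B / C split (anthropometric,
    body-composition, clinical/demographic) described above."""
    return part_a_code + part_b_code + part_c_code

# Placeholder latent codes (hypothetical dimensions and values).
a = [0.12, -0.40]        # Part A: anthropometric measurements
b = [0.95]               # Part B: derived body composition indices
c = [-0.07, 0.33, 0.51]  # Part C: clinical and demographic information
fused = fuse_features(a, b, c)
print(len(fused))  # 6 features fed to XGBoost / Cubic SVM / KNN / DT
```

Encoding each subset separately lets every autoencoder specialize on one data modality before the classifier sees the joint representation.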
(This article belongs to the Special Issue New Technologies and Tools Used for Risk Assessment of Diseases)

29 pages, 28659 KB  
Article
Assessing Anthropogenic Impacts on the Carbon Sink Dynamics in Tropical Lowland Rainforest Using Multiple Remote Sensing Data: A Case Study of Jianfengling, China
by Shijie Mao, Mingjiang Mao, Wenfeng Gong, Yuxin Chen, Yixi Ma, Renhao Chen, Miao Wang, Xiaoxiao Zhang, Jinming Xu, Junting Jia and Lingbing Wu
Forests 2025, 16(10), 1611; https://doi.org/10.3390/f16101611 - 20 Oct 2025
Abstract
Aboveground biomass (AGB) is a key indicator of forest structure and carbon sequestration, yet its dynamics under concurrent anthropogenic disturbances remain poorly understood. This study investigates the spatiotemporal dynamics and driving mechanisms of AGB in the Jianfengling tropical lowland rainforest (JFLTLR) within Hainan Tropical Rainforest National Park (NRHTR) from 2015 to 2023. Six machine learning models—Extreme Gradient Boosting (XGBoost), Gradient Boosting Machine (GBM), Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), and Random Forest (RF)—were evaluated, with RF achieving the highest accuracy (R² = 0.83). Therefore, RF was employed to generate high-resolution annual AGB maps based on Sentinel-1/2 data fusion, field surveys, socio-economic indicators, and topographic variables. Human pressure was quantified using the Human Influence Index (HII). Threshold analysis revealed a critical breakpoint at ΔHII ≈ 0.1712: below this level, AGB remained relatively stable, whereas beyond it, biomass declined sharply (≈−2.65 Mg·ha⁻¹ per 0.01 ΔHII). Partial least squares structural equation modeling (PLS-SEM) identified plantation forests as the dominant negative driver, while GDP (−0.91) and roads (−1.04) exerted strong indirect effects through the HII, peaking in 2019 before weakening under ecological restoration policies. Spatially, biomass remained resilient within central core zones but declined in peripheral regions associated with road expansion. Temporally, AGB exhibited a trajectory of decline, partial recovery, and renewed loss, resulting in a net reduction of ≈0.0393 × 10⁶ Mg. These findings underscore the urgent need for a “core stabilization–peripheral containment” strategy integrating disturbance early-warning systems, transportation planning that minimizes impacts on high-AGB corridors, and the strengthening of ecological corridors to maintain carbon-sink capacity and guide differentiated rainforest conservation. Full article
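The breakpoint behaviour described above amounts to a piecewise-linear response of AGB to human-pressure change. A purely illustrative sketch using the abstract's figures (threshold ΔHII ≈ 0.1712, decline ≈ −2.65 Mg·ha⁻¹ per 0.01 of ΔHII beyond it); the baseline biomass is a made-up value, and the real threshold model fitted in the study may take a different functional form:

```python
def agb_response(delta_hii, baseline_agb,
                 threshold=0.1712, slope_per_001=-2.65):
    """Piecewise-linear AGB (Mg/ha): roughly stable below the
    HII-change threshold, declining linearly beyond it at
    slope_per_001 Mg/ha per 0.01 of excess ΔHII."""
    if delta_hii <= threshold:
        return baseline_agb
    excess = delta_hii - threshold
    return baseline_agb + slope_per_001 * (excess / 0.01)

print(agb_response(0.10, 120.0))              # below threshold: 120.0
print(round(agb_response(0.2712, 120.0), 1))  # 0.1 beyond: 120 - 26.5 = 93.5
```

The asymmetry is the point: modest pressure increases cost little biomass, but once the threshold is crossed the losses accumulate quickly, which motivates the peripheral-containment side of the proposed strategy.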
(This article belongs to the Special Issue Modelling and Estimation of Forest Biomass)
