Search Results (1,313)

Search Parameters:
Keywords = gradient boosting decision trees

11 pages, 1112 KB  
Article
Predicting Stock Market Risk Using Machine Learning Classification Models
by Seol-Hyun Noh
Risks 2026, 14(4), 92; https://doi.org/10.3390/risks14040092 - 17 Apr 2026
Abstract
This study aims to predict stock market risk and improve preparedness for potential economic crises by identifying sharp declines in stock returns using classification-based machine learning models. Using ten years of KOSPI 200 index data (2015 to 2024), a daily return series was constructed. A day was labeled a risk event (1) if its return fell below the 5th percentile of the returns observed over the preceding 100 trading days, indicating a sharp decline. Nine classification models—Logistic Regression, k-nearest Neighbor, Decision Tree, Random Forest, Linear Discriminant Analysis, Naive Bayes, Quadratic Discriminant Analysis, AdaBoost, and Gradient Boosting—were trained and validated. Among these, Logistic Regression demonstrated the strongest overall performance across multiple evaluation metrics, including accuracy, non-risk F1 score, risk F1 score, and AUC. Full article
(This article belongs to the Special Issue AI for Financial Risk Perception)
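The labeling rule described in this abstract (a day is a risk event when its return falls below the 5th percentile of the preceding 100 trading days) can be sketched in a few lines of plain Python; the function name and the toy series below are illustrative, not taken from the paper:

```python
def label_risk_events(returns, window=100, pct=0.05):
    """Label each day 1 (risk event) or 0 using the trailing-window rule:
    a day is a risk event if its return falls below the empirical `pct`
    quantile of the previous `window` returns."""
    labels = []
    for i in range(window, len(returns)):
        history = sorted(returns[i - window:i])
        threshold = history[int(pct * window)]  # empirical 5th percentile
        labels.append(1 if returns[i] < threshold else 0)
    return labels
```

Days inside the initial 100-day warm-up window receive no label, which matches the trailing-window definition.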
12 pages, 2549 KB  
Article
Predicting Osmotic Coefficients in Aqueous Inorganic Systems: A Hybrid Gazelle Optimization Algorithm (GOA)–Machine Learning Framework for Sustainable Water Treatment
by Seyed Hossein Hashemi, Ali Cheperli, Farshid Torabi and Yousef Shafiei
Sustainability 2026, 18(8), 3959; https://doi.org/10.3390/su18083959 - 16 Apr 2026
Abstract
Efficient design of desalination and brine management systems, which are central to a water circular economy, requires accurate thermodynamic data such as the osmotic coefficient. This property is key to understanding salt behavior in aqueous solutions, directly impacting the energy efficiency and sustainability of treatment processes. This study presents a predictive framework that combines machine learning with the Gazelle Optimization Algorithm (GOA) to accurately estimate osmotic coefficients for various inorganic salt solutions. The GOA was employed to automatically tune the hyperparameters of two models: Decision Tree (DT) and Gradient Boosting Machine (GBM). Using a comprehensive dataset of 893 samples with 27 salt-specific parameters, the GOA-GBM hybrid model delivered the highest predictive accuracy, achieving an R2 of 0.9734 on test data. The GOA-DT model also performed robustly (R2 = 0.9260), providing a more interpretable alternative. By creating a reliable tool for predicting osmotic coefficients, this methodology enables more precise process simulation and optimization. This directly supports the development of energy-efficient desalination technologies and informed decision-making for water reuse and resource recovery. The integration of advanced digital tools like GOA with machine learning offers a powerful approach to enhancing process efficiency and environmental safety, contributing directly to the design of sustainable, circular economy-based water treatment solutions for industrial and municipal applications. Full article
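The paper tunes DT and GBM hyperparameters with the Gazelle Optimization Algorithm; GOA's gazelle-inspired update rules are not reproduced here, but the surrounding tuning loop looks roughly like the random-search stand-in below (the objective, search space, and iteration count are all illustrative):

```python
import random

def tune(objective, space, iters=300, seed=0):
    """Generic random-search stand-in for metaheuristic hyperparameter
    tuning such as GOA: sample candidates from `space` (a dict mapping
    each hyperparameter to a (low, high) range) and keep the best.
    A real GOA run would replace the sampling with its population-based
    gazelle update rules."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iters):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(candidate)  # e.g. cross-validated RMSE of a GBM
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score
```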
37 pages, 1793 KB  
Systematic Review
The Role of Artificial Intelligence in Prognosis, Recurrence Prediction, and Treatment Outcomes in Laryngeal Cancer: A Systematic Review
by Hadi Afandi Al-Hakami, Ismail A. Abdullah, Nora S. Almutairi, Rimaz R. Aldawsari, Ghadah Ali Alluqmani, Halah Ahmed Fallatah, Yara Saud Alsulami, Elyas Mohammed Alasiri, Rahaf D. Alsufyani, Raghad Ayman Alorabi and Reffal Mohammad Aldainiy
Cancers 2026, 18(8), 1257; https://doi.org/10.3390/cancers18081257 - 16 Apr 2026
Abstract
Background: Laryngeal cancer (LC), a common subtype of head and neck cancers (HNC), is most frequently represented by laryngeal squamous cell carcinoma (LSCC). Prognosis largely depends on early detection; however, traditional prognostic tools, including tumor-node-metastasis (TNM) staging, often show limited predictive accuracy. Artificial intelligence (AI), including machine learning (ML), natural language processing, and deep learning (DL), has emerged as a promising approach to improving cancer diagnosis, prognosis, and treatment planning by analyzing clinical data and medical imaging. Objective: This systematic review assesses the role of AI in prognosis, recurrence prediction, and treatment outcomes in LC. Methods: PubMed, MEDLINE, Scopus, Web of Science, IEEE Xplore, and ScienceDirect were searched up to January 2025. A total of 1062 records were identified; after title/abstract screening and full-text assessment, 29 studies were included. Eligible studies involved adult patients with LC and applied AI to diagnose, prognose, predict recurrence, or assess treatment outcomes using human datasets. Study quality and risk of bias were evaluated using the QUADAS-2 and QUIPS. Results: The 29 included studies were mostly retrospective, with sample sizes ranging from 10 to 63,000 patients. Most focused on LSCC, with a higher prevalence in males. The studies utilized various AI techniques, including deep learning models such as convolutional neural networks (CNNs) and DeepSurv, as well as ML algorithms like random survival forest, gradient boosting machines, random forest, k-nearest neighbors, naïve Bayes, and decision trees. AI models demonstrated strong prognostic performance, surpassing Cox regression and TNM staging in predicting survival and recurrence. Several studies reported outcomes related to treatment, such as chemotherapy response, occult lymph node metastasis, and the need for salvage surgery. Methodological quality varied, with biases related to patient selection and confounding factors. Conclusions: AI has the potential to improve prognosis estimation, recurrence prediction, and treatment outcome assessment in LC. However, although AI can be a helpful addition to clinical decision-making, more prospective studies, external validation, and standardized evaluation are necessary before these technologies can be confidently adopted in everyday clinical practice. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
19 pages, 1079 KB  
Article
Intelligent Triggering of Safety Risk Warning in Metro Tunnel Construction: A Two-Stage Framework Integrating Static and Dynamic Data
by Liang Ou, Yinghui Zhang and Yun Chen
Buildings 2026, 16(8), 1550; https://doi.org/10.3390/buildings16081550 - 15 Apr 2026
Abstract
With the rapid expansion of metro tunnel construction, safety risks such as collapse, water inrush, and gas explosion have become increasingly critical. Existing warning models often lack fine-grained disaster type identification and dynamic risk assessment capabilities. This paper proposes a two-stage intelligent warning framework based on multi-source data fusion. First, a dual-autoencoder structure (MLP-AE and LSTM-AE) extracts deep features from static geological parameters and dynamic monitoring sequences. Then, a multilayer perceptron (MLP) classifier identifies four typical states: normal, collapse, water/mud inrush, and gas explosion. Subsequently, a regression model predicts a continuous risk score, mapped to three risk levels: Safe, Moderate Risk, and Significant Risk. Experimental results demonstrate that, compared with Decision Tree (DT), Gradient Boosting Decision Tree (GBDT), and Bayesian Network (BN), the proposed framework achieves superior performance in risk level identification, with an accuracy of 91% and an F1-score of 0.87. Notably, it exhibits particularly strong recall for severe (Level III) risks, which is crucial for practical engineering applications. The proposed framework provides a practical and intelligent approach for safety warning in metro tunnel construction. Full article
(This article belongs to the Section Building Structures)
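The final step of the framework, mapping the regression model's continuous risk score to the three warning levels, amounts to simple thresholding; the cut-points below are invented for illustration, since the abstract does not report the actual thresholds:

```python
def risk_level(score, moderate=0.4, significant=0.7):
    """Map a continuous risk score in [0, 1] to one of three warning
    levels. The cut-points are hypothetical, for illustration only."""
    if score >= significant:
        return "Significant Risk"
    if score >= moderate:
        return "Moderate Risk"
    return "Safe"
```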
7 pages, 1155 KB  
Proceeding Paper
Electronic Nose-Based Classification of Honey Brands Using Extreme Gradient-Boosted Decision Tree
by Mark Jasper R. Iglesias, Xandre Adrian M. Nicolas and Meo Vincent C. Caya
Eng. Proc. 2026, 134(1), 52; https://doi.org/10.3390/engproc2026134052 - 15 Apr 2026
Abstract
Honey is one of the most valued natural food products, yet it remains highly vulnerable to fraud through mislabeling and adulteration, practices that mislead consumers and compromise food safety. We develop a low-cost and portable electronic nose (e-nose) system for classifying locally available honey brands in the Philippines. The system integrates an array of eight MQ gas sensors to detect volatile organic compounds (VOCs), with an Arduino Mega 2560 handling data acquisition and a Raspberry Pi 5 executing data processing and classification. An Extreme Gradient-Boosted Decision Tree (XGBoost) algorithm was applied to analyze the VOC profiles of three honey brands, each with 38 samples, resulting in a total dataset of 114 samples. The dataset was divided into training, testing, and validation sets to assess the system’s classifying and predictive performance, with accuracy evaluated using a 3 × 3 confusion matrix. The results showed that the system effectively distinguished between honey brands, achieving a validation accuracy of 87.50%, corresponding to 21 out of 24 correctly identified validation trials. Full article
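The 3 × 3 confusion matrix and the accuracy figure reported above (21 of 24 validation trials, i.e., 87.50%) follow from the standard definitions; a minimal sketch with invented labels (the real VOC data and XGBoost model are not reproduced here):

```python
def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows are true classes, columns are predicted classes."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(matrix):
    """Fraction of samples on the matrix diagonal (correct predictions)."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total
```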
28 pages, 3324 KB  
Article
Predicting Flexural Strength of FRP-Strengthened Waste Aggregate Concrete Beams with Machine Learning: A Step Towards Sustainability
by Arissaman Sangthongtong, Burachat Chatveera, Gritsada Sua-iam, Adnan Nawaz, Tahir Mehmood, Suniti Suparp, Muhammad Salman, Muhammad Noman, Qudeer Hussain and Panumas Saingam
Buildings 2026, 16(8), 1512; https://doi.org/10.3390/buildings16081512 - 12 Apr 2026
Abstract
Using waste materials in the manufacture of concrete has many environmental advantages. However, it can be difficult to estimate structural performance, especially when beams are reinforced with fiber-reinforced polymers (FRP). In order to provide a data-driven approach to sustainable structural design, this work explores the use of machine learning (ML) approaches to forecast the flexural strength of FRP-strengthened waste aggregate concrete beams. A total of 92 experimental datasets were used to develop and assess four ML algorithms: Random Forest (RF), Decision Tree (DT), Neural Network (NN), and Extreme Gradient Boosting (XGBoost). Regression plots, Taylor diagrams, statistical measures (R2, RMSE, MAE, MSE), and explainable AI (XAI) tools, including SHAP, LIME, and partial dependence plots (PDPs), were used to evaluate model performance. RF outperformed NN in terms of predictive accuracy, while XGBoost exhibited similar performance to RF. The most significant predictors, according to a SHAP analysis, were beam length and fiber length, followed by steel tensile strength, fiber width, and concrete compressive strength. LIME offered local interpretability for individual predictions, while PDPs revealed optimal parameter ranges and a nonlinear feature–strength relationship. The findings provide engineers with a strong decision-support tool for designing green infrastructure, since they show that ensemble-based models can accurately represent the intricate, nonlinear dynamics controlling flexural behavior in sustainable FRP-strengthened waste aggregate concrete beams. Full article
(This article belongs to the Collection Advanced Concrete Materials in Construction)
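The statistical measures used above (R2, RMSE, MAE, MSE) have standard definitions; a small self-contained sketch with toy values rather than the paper's beam data:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, and R2 for a set of predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)  # total sum of squares
    return {
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAE": sum(abs(e) for e in errors) / n,
        "R2": 1 - (mse * n) / ss_tot,
    }
```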
33 pages, 8917 KB  
Article
An Automated Decision-Support Framework for Interior Space Quality Evaluation Using Computer Vision and Multi-Criteria Decision-Making
by Yuanan Wang, Zichen Zhao and Xuesong Guan
Buildings 2026, 16(8), 1508; https://doi.org/10.3390/buildings16081508 - 12 Apr 2026
Abstract
With the growing adoption of data-driven workflows and the need to compare numerous interior design alternatives in housing renewal, scalable and consistent assessment of interior space quality is increasingly important; however, current practice still depends on manual scoring and expert judgment. To address this gap, we propose an automation-ready framework that evaluates interior space quality from visual data. We construct the Functionality–Healthiness–Aesthetics Spatial Interior Dataset-10K (FHASID-10K) with 13,962 images for systematic validation. Three sub-models quantify functionality via space utilization and circulation smoothness, healthiness via detection of health-related visual elements, and aesthetics via semantic visual representations with regression-based prediction. Dimension scores are standardized and fused using the analytic hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) to produce a comprehensive score for ranking and grading. Experiments show stable score distributions and clear differentiation across space categories and style–space combinations. A gradient-boosted decision tree (GBDT) surrogate reconstructs the fused score with high accuracy (test R2 = 0.9992; MSE = 1.1 × 10−5), and human-subject evaluation shows strong agreement with overall-quality ratings (r = 0.760, p < 0.001). Overall, the framework enables scalable benchmarking, scheme comparison, and decision support. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
15 pages, 392 KB  
Article
Random Forest Predicts Human Ratings of Creative Stories Using Very Small Training Samples
by Baptiste Barbot and Thomas Calogero Kiekens
Behav. Sci. 2026, 16(4), 576; https://doi.org/10.3390/bs16040576 - 11 Apr 2026
Abstract
The Consensual Assessment Technique (CAT) is a gold standard of creativity assessment which provides valid product-based creativity scores that are contextually grounded (stemming from raters with unique expertise, culturally and historically situated). However, its implementation is often demanding (raters’ burden, complex rating designs). This study investigates whether machine learning can effectively simulate expert-panel judgments of creativity using minimal training data. Using a dataset of 411 short stories, we compared the performance of Random Forest (RF), Gradient Boosted Trees, and Decision Tree models, based on story length and Divergent Semantic Integration, to predict expert CAT ratings by (1) identifying the optimal algorithm and (2) the minimum training sample size required for reliable prediction. Results indicate that RF consistently outperformed other algorithms, achieving high correlations with CAT scores (r = 0.80) using as few as 25 training stories. Furthermore, RF demonstrated superior accuracy and lower reliance on story length compared to LLM-based scoring models. These findings provide a robust proof-of-concept for using simulated expert panels as a scalable alternative to (decontextualized) automated assessment methods, while reducing human raters’ burden and the logistical constraints of complex rating designs. Extending this work to different contexts, creativity tasks, and domains is necessary to gauge its generalizability. Full article
(This article belongs to the Section Cognition)
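The reported agreement (r = 0.80) is a Pearson correlation between model predictions and expert CAT scores; for reference, a minimal implementation with toy vectors:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```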
26 pages, 3829 KB  
Article
Time–Frequency and Spectral Analysis of Welding Arc Sound for Automated SMAW Quality Classification
by Alejandro García Rodríguez, Christian Camilo Barriga Castellanos, Jair Eduardo Rocha-Gonzalez and Everardo Bárcenas
Sensors 2026, 26(8), 2357; https://doi.org/10.3390/s26082357 - 11 Apr 2026
Abstract
This study investigates the feasibility of acoustic signal analysis for the assessment of weld bead quality in the shielded metal arc welding (SMAW) process. The work focuses on comparing time-domain acoustic signals and time–frequency spectrogram representations for the classification of welds as accepted or rejected according to standard welding inspection criteria. Two key acoustic descriptors, the fundamental frequency (F0) and the harmonics-to-noise ratio (HNR), were extracted and analyzed to evaluate statistical differences between the two weld quality classes. Statistical tests, including Anderson–Darling, Levene, ANOVA, and Kruskal–Wallis (α = 0.05), revealed significant differences between accepted and rejected welds. Accepted welds exhibited a bimodal HNR distribution associated with transient arc instability at the beginning and end of the bead, whereas rejected welds showed more uniform acoustic behavior throughout the process. Subsequently, the acoustic data were represented using both audio signals and spectrograms and used as inputs for ten supervised machine learning models, including Support Vector Classifier (SVC), Logistic Regression (LR), k-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), and Naïve Bayes (NB). The results demonstrate that spectrogram-based representations significantly outperform time-domain signals, achieving accuracies of 0.95–0.96, ROC-AUC values above 0.95, and false positive and false negative rates below 6%. These findings indicate that, while scalar acoustic descriptors provide statistically significant insight into weld quality, time–frequency representations combined with machine learning enable a more robust and reliable framework for automated non-destructive evaluation, particularly in manual SMAW processes under realistic operating conditions. Full article
(This article belongs to the Section Sensor Materials)
18 pages, 11142 KB  
Article
Comparative Analysis of Various Supervised Machine Learning Models for the Prediction of the Outcome of the Welded Bead Bending Test
by Fritz Backofen, Ulrike Hähnel, Frank Hahn and Kristin Hockauf
Metals 2026, 16(4), 418; https://doi.org/10.3390/met16040418 - 10 Apr 2026
Abstract
The Welded Bead Bending Test (WBBT) assesses steel structures intended for construction in Germany in accordance with ZTV-ING Part 4 or DBS 918 002-02, as specified in Stahl-Eisen-Prüfblatt (SEP) 1390. Test outcomes are classified as passed (p) if the minimum bending angle α60 is achieved without fracture, not passed (n.p.) if fracture occurs beforehand, and invalid if no crack propagates into the base material. This study evaluates eight supervised machine learning models for classification regarding their suitability for predicting WBBT results: Decision Tree Classifier (DT), Random Forest Classifier (RF), Histogram-based Gradient Boosting Classifier (HGBC), k-Nearest-Neighbour (KNN), Bagging Classifiers based on DT (BCDT) and RF (BCRF), Generalized Learning Vector Quantizer (GLVQ), and Generalized Matrix Learning Vector Quantizer (GMLVQ). An industrial dataset of approximately 3600 samples was compiled in collaboration with Chemnitzer Werkstoff und Oberflächentechnik GmbH (CEWUS). Evaluation metrics included Balanced Accuracy, Recall, Specificity, computation time, and prediction stability. BCDT and BCRF achieved the highest Balanced Accuracy (70.6% and 70.3%, respectively), with BCRF excelling in Specificity (82.5%), thereby reliably detecting the n.p. class. GLVQ and GMLVQ demonstrated superior stability (maximum variability between training and testing dataset 0.14% and 3.17%, respectively), while BCRF and GMLVQ required the longest training times (BCRF: 10 s–20 s; GMLVQ: up to 80 s). KNN proved least suitable for WBBT outcome prediction. Full article
(This article belongs to the Section Computation and Simulation on Metals)
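Balanced Accuracy combines Recall (sensitivity) with Specificity, which is why BCRF's high Specificity matters for reliably detecting the n.p. class; a sketch using invented confusion counts, not the paper's results:

```python
def binary_rates(tp, fn, tn, fp):
    """Recall, Specificity, and Balanced Accuracy from binary
    confusion-matrix counts (tp, fn, tn, fp)."""
    recall = tp / (tp + fn)          # positive class correctly found
    specificity = tn / (tn + fp)     # negative (n.p.) class correctly found
    balanced_accuracy = (recall + specificity) / 2
    return recall, specificity, balanced_accuracy
```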
20 pages, 3510 KB  
Article
Nondestructive Detection of Eggshell Thickness Using Near-Infrared Spectroscopy Based on GBDT Feature Selection and an Improved CatBoost Algorithm
by Ziqing Li, Ying Ji, Changheng Zhao, Dehe Wang and Rongyan Zhou
Foods 2026, 15(8), 1286; https://doi.org/10.3390/foods15081286 - 8 Apr 2026
Abstract
Eggshell thickness is a critical indicator for evaluating egg breakage resistance and hatchability, yet traditional measurement methods remain destructive and inefficient. To address this, this study proposes a robust prediction approach by integrating Gradient Boosting Decision Tree (GBDT) feature optimization with an improved CatBoost algorithm. First, a joint strategy of Standard Normal Variate (SNV) and Multiplicative Scatter Correction (MSC) was employed to eliminate spectral scattering noise and enhance organic matrix fingerprint information. Subsequently, GBDT was introduced for nonlinear feature evaluation to adaptively screen the top 50 wavelengths, effectively mitigating the “curse of dimensionality” and multicollinearity in full-spectrum data. A CatBoost regression model was then constructed using an Ordered Boosting mechanism, supported by a dual anti-overfitting strategy that merged 10-fold nested cross-validation with Bootstrap resampling. Experimental results demonstrate that this method significantly outperforms traditional algorithms in both prediction accuracy and generalization. The coefficients of determination (R2) for the calibration and prediction sets reached 0.930 and 0.918, respectively, with a root mean square error of prediction (RMSEP) of 0.008 mm. Residual analysis confirms that prediction errors follow a zero-mean Gaussian distribution, indicating that systematic bias was effectively eliminated. This research provides a reliable theoretical foundation and technical support for the intelligent grading of poultry egg quality. Full article
(This article belongs to the Section Food Analytical Methods)
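SNV preprocessing, one half of the SNV + MSC strategy mentioned above, simply standardizes each spectrum individually to remove multiplicative scatter; a minimal sketch (the toy spectrum is illustrative):

```python
import math

def snv(spectrum):
    """Standard Normal Variate: center one spectrum to zero mean and
    scale to unit standard deviation."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / n)
    return [(x - mean) / std for x in spectrum]
```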
27 pages, 5739 KB  
Article
Baseline-Conditioned Spatial Heterogeneity in Ensemble-Learning Correction for Global Hourly Sea-Level Reconstruction
by Yu Hao, Yixuan Tang, Wen Du, Yang Li and Min Xu
J. Mar. Sci. Eng. 2026, 14(8), 697; https://doi.org/10.3390/jmse14080697 - 8 Apr 2026
Abstract
This study examines how assessments of coastal extreme sea levels depend on the separability and reconstructability of the astronomical tide in hourly sea-level records. Using a global tide-gauge network, it proposes an ensemble-learning correction framework that integrates a physical-baseline threshold with multi-criteria consistency testing to determine whether machine-learning enhancement is genuinely effective across stations and time windows. The analysis uses hourly records from 528 UHSLC tide gauges, with 31-day short sequences used to reconstruct 180-day sea-level variability. Taking the physical tidal model as the baseline, residuals are corrected using Extremely Randomized Trees, Random Forest, and Gradient Boosting. To avoid false improvement driven solely by error reduction, a hierarchical decision framework is established. Baseline model quality is first screened using NSE and the coefficient of determination, after which mathematical artefacts are identified through diagnostics of peak suppression and variance shrinkage. A five-level classification is then derived from the convergent evidence of twelve performance metrics and four statistical significance tests. The results show a consistent global pattern across all three algorithms. Approximately 57% of stations meet the criterion for genuine improvement, whereas about 42% are associated with an unreliable physical baseline, indicating that the dominant source of failure arises not from the ensemble-learning algorithms themselves, but from spatially varying limitations in the underlying physical baseline. Spatially, the credibility of machine-learning correction is strongly conditioned by baseline quality: stations with effective correction are more continuous along the eastern North Atlantic and European coasts, whereas stations with ineffective correction are more concentrated in the Gulf of Mexico, the Caribbean, and the marginal seas and archipelagic regions of the western Pacific. These results indicate that the observed spatial heterogeneity primarily reflects geographically varying physical and dynamical conditions that control baseline reliability and residual learnability, rather than a standalone difference in the intrinsic capability of ensemble learning itself. Full article
(This article belongs to the Special Issue AI-Enhanced Dynamics and Reliability Analysis of Marine Structures)
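NSE, used above to screen baseline model quality, is the Nash–Sutcliffe efficiency; a minimal reference implementation with toy series rather than tide-gauge data:

```python
def nse(observed, simulated):
    """Nash–Sutcliffe efficiency: 1 means a perfect reconstruction,
    0 means no better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den
```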
21 pages, 5711 KB  
Article
A Study on High-Precision Dimensional Measurement of Irregularly Shaped Carbonitrided 820CrMnTi Components
by Xiaojiao Gu, Dongyang Zheng, Jinghua Li and He Lu
Materials 2026, 19(8), 1491; https://doi.org/10.3390/ma19081491 - 8 Apr 2026
Abstract
For irregularly shaped 820CrMnTi carburizing and nitriding parts, the challenges of high reflectivity-induced overexposure, low surface contrast, and interference from minute burrs in industrial online inspection are addressed in this paper. An innovative precision detection method integrating adaptive imaging and a dual-drive heterogeneous coupling model (RGFCN) is proposed. Such parts, due to surface photovoltaic characteristic changes caused by carburizing and nitriding heat treatment and the complex on-site lighting environment, are prone to local overexposure and “false out-of-tolerance” measurements caused by outlier sensitivity in traditional inspections. First, an innovative programmatic adaptive exposure control algorithm based on grayscale histogram feedback is introduced, which dynamically adjusts imaging parameters in real time to effectively suppress high-brightness overexposure under specific working conditions. Second, a novel adaptive main-axis scanning strategy is designed to construct a dynamic follow-up coordinate system, eliminating projection errors introduced by random positioning from a geometric perspective. Additionally, Gaussian gradient energy fields are combined with the Huber M-estimation robust fitting mechanism to suppress thermal noise while automatically reducing the weight of burrs and oil stains, achieving “immunity” to non-functional defects. Meanwhile, a data-driven innovative compensation approach is introduced. Based on sample training, gradient boosting decision trees (GBDTs) are integrated to explore the nonlinear mapping relationship between multidimensional feature spaces and system residuals, achieving implicit calibration of lens distortion and environmental coupling errors. By simulating factory conditions with drastic 24 h day–night lighting fluctuations and strong oil stain interference, statistical analysis of over 1000 mass-produced parts shows that this method exhibits excellent robustness in complex environments. It reduces the false out-of-tolerance rate caused by burrs by over 90%, and the standard deviation of repeated measurements converges to the micrometer level. This effectively addresses the visual inspection challenges of irregular, highly reflective parts on dynamic production lines. Full article
(This article belongs to the Special Issue Latest Developments in Advanced Machining Technologies for Materials)
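The Huber M-estimation robust fitting mentioned in the abstract lends itself to a short sketch. The following is a minimal illustration, assuming iteratively reweighted least squares (IRLS) with the standard Huber weight function applied to a synthetic edge profile with burr-like spikes; the function names and data are hypothetical, not the paper's implementation.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight: 1 inside the threshold, k/|r| outside (down-weights burr outliers)."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_line_fit(x, y, k=1.345, iters=25):
    """Fit y = m*x + b by iteratively reweighted least squares with Huber weights."""
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least-squares start
    for _ in range(iters):
        r = y - X @ beta
        # Robust scale estimate via the median absolute deviation (MAD).
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = huber_weights(r / scale, k)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta  # (slope, intercept)

# Synthetic edge profile: true line y = 2x + 1 plus a few burr-like spikes.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)
y[:5] += 8.0  # burr outliers that would skew an ordinary least-squares fit
slope, intercept = robust_line_fit(x, y)
```

The IRLS loop recovers a slope near 2 and intercept near 1 despite the spikes, which is the "immunity" to non-functional defects the abstract describes: the Huber weights shrink the influence of large residuals instead of letting them pull the fit.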

22 pages, 2332 KB  
Article
A Multi-Model Machine Learning Framework for Predicting and Ranking High-Risk Urban Intersections in Riyadh
by Saleh Altwaijri, Saleh Alotaibi, Faisal Alosaimi, Adel Almutairi and Abdulaziz Alauany
Sustainability 2026, 18(8), 3651; https://doi.org/10.3390/su18083651 - 8 Apr 2026
Abstract
Road traffic accidents at intersections pose a persistent challenge in Riyadh, Saudi Arabia, contributing significantly to public health burdens and economic losses. Traditional statistical approaches often fail to capture the complex, non-linear interactions among geometric design, traffic parameters, and accident severity. This study [...] Read more.
Road traffic accidents at intersections pose a persistent challenge in Riyadh, Saudi Arabia, contributing significantly to public health burdens and economic losses. Traditional statistical approaches often fail to capture the complex, non-linear interactions among geometric design, traffic parameters, and accident severity. This study develops a multi-methodological machine learning framework to predict intersection accident severity using the Equivalent Property Damage Only (EPDO) metric. Historical data (2017–2023) from Riyadh Municipality for 150 high-risk intersections were analyzed, incorporating predictors such as service road distance (SRD), U-turn distance (UTD), median width (MW), peak hour volume (PHV), heavy vehicle percentage (HV%), and injury/frequency counts. Six algorithms, i.e., Decision Tree, Random Forest, Gradient Boosting, Support Vector Machine, Linear Regression, and Artificial Neural Network, were compared using a 70/30 train–test split and k-fold cross-validation. The Gradient Boosting model achieved superior performance (R2 = 0.89, MSE = 63.43, RMSE = 7.96) and was selected for final deployment. SHAP feature importance analysis revealed minor injuries (MIs), serious injuries (SRIs), and fatalities (FAs) as the dominant predictors, with geometric factors (UTD, MW) and traffic composition (HV%) providing actionable infrastructure insights. The model ranked intersections and identified the "Jeddah Road with Taif Road" intersection (predicted EPDO = 137.22) as the highest-risk location. Evidence-based recommendations include enforcing minimum 300 m U-turn buffers, staggering service road exits by ≥150 m, and restricting heavy vehicles during peak hours. The scalable framework supports the data-driven prioritization of safety interventions, aligns with sustainable urban mobility goals, and is transferable to other metropolitan contexts worldwide. Full article
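The EPDO metric used as the prediction target collapses crash counts of different severities into a single property-damage-equivalent score. A minimal sketch of that aggregation follows; the severity weights below are illustrative placeholders, since the abstract does not state the weights used in the study.

```python
# Illustrative EPDO severity weights (hypothetical, not the study's values):
# fatality (FA), serious injury (SRI), minor injury (MI), property damage only (PDO).
WEIGHTS = {"FA": 12.0, "SRI": 6.0, "MI": 3.0, "PDO": 1.0}

def epdo(counts, weights=WEIGHTS):
    """Collapse per-severity crash counts into one EPDO score."""
    return sum(weights[sev] * counts.get(sev, 0) for sev in weights)

# Hypothetical intersection record over the study period.
intersection = {"FA": 1, "SRI": 3, "MI": 10, "PDO": 25}
score = epdo(intersection)  # 12 + 18 + 30 + 25 = 85.0
```

Ranking intersections by such a score is what lets a single regression model prioritize locations where severe crashes dominate, rather than treating all crashes equally.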

23 pages, 4047 KB  
Article
UAV-Based Estimation of Tea Leaf Area Index in Mountainous Terrain: Integrating Topographic Correction and Interpretable Machine Learning
by Na Lin, Jian Zhao, Huxiang Shao, Miaomiao Wang and Hong Chen
Sensors 2026, 26(7), 2218; https://doi.org/10.3390/s26072218 - 3 Apr 2026
Abstract
Leaf Area Index (LAI) is a fundamental parameter for characterizing the growth of tea (Camellia sinensis L.). However, in rugged mountainous regions, the combined effects of topographic relief and canopy structural heterogeneity severely constrain the accuracy of UAV-based multispectral LAI retrieval. This [...] Read more.
Leaf Area Index (LAI) is a fundamental parameter for characterizing the growth of tea (Camellia sinensis L.). However, in rugged mountainous regions, the combined effects of topographic relief and canopy structural heterogeneity severely constrain the accuracy of UAV-based multispectral LAI retrieval. This study develops an integrated framework combining topographic correction with interpretable machine learning to improve LAI estimation. We utilized a UAV multispectral dataset collected during the peak growing season from a typical tea-growing region in Fujian Province, China (altitude range: 58–186 m), comprising a total of 90 samples. Three topographic correction methods, including Sun–Canopy–Sensor (SCS), SCS with C correction (SCS+C), and Minnaert+SCS, were evaluated in combination with Linear Regression (LR), Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) models. Results indicated that the SCS+C algorithm outperformed other methods by effectively accounting for direct and diffuse radiation components, thereby reducing topographic dependence while maintaining radiometric consistency across heterogeneous surfaces. The XGBoost model combined with SCS+C correction achieved the highest performance (R2 = 0.8930, RMSE = 0.6676, nRMSE = 7.93%, MAE = 0.4936, Bias = −0.0836). SHapley Additive exPlanations (SHAP) analysis revealed a structure-dominated retrieval mechanism, in which red-band textural features (Correlation_R) exhibited higher importance than conventional vegetation indices. Compared with previous studies that primarily focus on either topographic correction or model development, this study provides quantitative insights into the underlying retrieval mechanisms. This framework improves the precision of tea LAI retrieval in complex terrains and provides a robust methodological basis for digital management in mountainous agriculture. Full article
(This article belongs to the Special Issue AI UAV-Based Systems for Agricultural Monitoring)
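The SCS+C correction favored in the study above can be sketched in its commonly published form, in which the C parameter is estimated from a per-band linear regression of reflectance on the cosine of the local solar incidence angle. The function name and array values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scs_c_correct(rho, cos_i, slope, solar_zenith):
    """SCS+C topographic correction of one reflectance band.

    rho          : observed reflectance per pixel
    cos_i        : cosine of the local solar incidence angle per pixel
    slope        : terrain slope per pixel (radians)
    solar_zenith : scene solar zenith angle (radians)
    """
    # Estimate C from the band-wise regression rho = a + b * cos_i, C = a / b.
    b, a = np.polyfit(cos_i, rho, 1)
    C = a / b
    # SCS+C: scale toward sun-canopy-sensor geometry, moderated by C
    # so weakly illuminated slopes are not over-corrected.
    return rho * (np.cos(solar_zenith) * np.cos(slope) + C) / (cos_i + C)

# Synthetic pixels whose reflectance depends linearly on illumination.
theta = np.deg2rad(30.0)
slope = np.deg2rad(np.array([0.0, 10.0, 20.0, 30.0, 15.0]))
cos_i = np.array([0.87, 0.75, 0.60, 0.50, 0.70])
rho = 0.05 + 0.4 * cos_i
corrected = scs_c_correct(rho, cos_i, slope, theta)
```

For exactly linear input, the algebra reduces the corrected value to a + b·cos(θ)·cos(α), i.e., each pixel is mapped to the reflectance it would show under sun-canopy-sensor geometry, which removes the illumination-angle dependence that drives topographic bias.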
