Search Results (12,567)

Search Parameters:
Keywords = model trees

23 pages, 14377 KB  
Article
An Interpretable Attention Decision Forest Model for Surface Soil Moisture Retrieval
by Jianhui Chen, Zuo Wang, Ziran Wei, Chang Huang, Yongtao Yang, Ping Wei, Hu Li, Yuanhong You, Shuoqi Zhang, Zhijie Dong and Hao Liu
Remote Sens. 2025, 17(20), 3468; https://doi.org/10.3390/rs17203468 (registering DOI) - 17 Oct 2025
Abstract
Surface soil moisture (SSM) plays a critical role in climate change, hydrological processes, and agricultural production. Decision trees and deep learning are widely applied to SSM retrieval. The former excels in interpretability while the latter outperforms in generalization; however, neither integrates both. To address this issue, an attention decision forest (ADF) was developed, comprising feature extractor, soft decision tree, and tree-attention modules. The feature extractor projects raw inputs into a high-dimensional space to reveal nonlinear relationships. The soft decision tree preserves the advantages of tree models in nonlinear partitioning and local feature interaction. The tree-attention module integrates outputs from the soft tree’s subtrees to enhance overall fitting and generalization. Experiments on the conterminous United States (CONUS) watershed dataset demonstrate that, under sample-based validation, ADF outperforms traditional models with an R2 of 0.868 and a ubRMSE of 0.041 m3/m3. Further spatiotemporally independent testing demonstrated the robust performance of this method, with R2 of 0.643 and 0.673, and ubRMSE of 0.062 and 0.065 m3/m3. Furthermore, an evaluation of the interpretability of the ADF using Shapley Additive Explanations (SHAP) revealed that the ADF was more stable than deep learning methods (e.g., DNN) and comparable to tree-based ensemble learning methods (e.g., RF and XGBoost). Both the ADF and the ensemble learning methods demonstrated that, at large scales, spatiotemporal variation had the greatest impact on SSM, followed by environmental conditions and soil properties. Moreover, the superior spatial SSM maps produced by ADF, compared with GSSM, SMAP L4, and ERA5-Land, further demonstrate ADF’s capability for large-scale mapping. ADF thus offers a novel architecture capable of integrating prediction accuracy, generalization, and interpretability. Full article
(This article belongs to the Special Issue GIS and Remote Sensing in Soil Mapping and Modeling (Second Edition))
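The "soft decision tree" component described above can be illustrated with a toy sketch. This is not the authors' ADF implementation; it is a minimal NumPy illustration, with hypothetical parameter names and random (unlearned) weights, of how a soft tree routes each sample through sigmoid gates and predicts a probability-weighted mixture of leaf values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftDecisionTree:
    """Toy soft decision tree: each internal node routes a sample left/right
    with a sigmoid gate, and the prediction is the probability-weighted
    mixture of all leaf values (all parameters random here, not learned)."""

    def __init__(self, n_features, depth=2, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.depth = depth
        n_internal = 2 ** depth - 1          # internal nodes of a full binary tree
        self.w = rng.normal(size=(n_internal, n_features))  # gate weights
        self.b = rng.normal(size=n_internal)                # gate biases
        self.leaf = rng.normal(size=2 ** depth)             # leaf values

    def predict(self, X):
        n = X.shape[0]
        prob = np.ones((n, 1))               # P(sample reaches each node)
        for level in range(self.depth):
            start = 2 ** level - 1           # first node index at this level
            gates = sigmoid(X @ self.w[start:start + 2 ** level].T
                            + self.b[start:start + 2 ** level])
            # interleave left (gate) and right (1 - gate) children
            prob = np.stack([prob * gates, prob * (1 - gates)],
                            axis=2).reshape(n, -1)
        return prob @ self.leaf              # mixture over the leaves

tree = SoftDecisionTree(n_features=3, depth=2)
X = np.random.default_rng(1).normal(size=(5, 3))
print(tree.predict(X).shape)  # one soft prediction per sample: (5,)
```

In the ADF, such trees operate on learned high-dimensional features and their subtree outputs are combined by an attention module; none of that is reproduced here.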
23 pages, 2837 KB  
Article
Spatial Error Prediction and Compensation of Industrial Robots Based on Extended Joints and BO-XGBoost
by Bingran Yang and Xuedong Jing
Sensors 2025, 25(20), 6422; https://doi.org/10.3390/s25206422 (registering DOI) - 17 Oct 2025
Abstract
Robotic positioning accuracy is paramount in complex tasks. This accuracy is influenced by both geometric and non-geometric factors, making error prediction a significant challenge. To address this, this paper introduces two key contributions. First, we propose a novel input feature, the robot’s “extended joint angles,” which incorporates joint reversal information to better capture non-geometric errors like gear backlash. Second, we develop a high-accuracy spatial error prediction model by combining the Extreme Gradient Boosting (XGBoost) algorithm with Bayesian Optimization (BO) for hyperparameter tuning. The BO-XGBoost model establishes a direct non-linear mapping from the extended joint angles to the positioning error. Experimental results demonstrate that after compensation, the mean position error was reduced from 1.0751 mm to 0.1008 mm (a 90.62% decrease), the maximum error from 3.3884 mm to 0.4782 mm (an 85.88% decrease), and the standard deviation from 0.5383 mm to 0.0832 mm (an 84.54% decrease). A comparative analysis against Decision Tree, K-Nearest Neighbors, and Random Forest models further validates the superiority of the proposed method in reducing robot position error. Full article
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)
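As a hedged sketch of the tuned error-prediction model described above: the code below substitutes scikit-learn's GradientBoostingRegressor and randomized search for the paper's XGBoost and Bayesian Optimization (not reproduced here), and runs on synthetic joint-angle data; the "extended" direction feature is a loose, hypothetical stand-in for the authors' joint-reversal construction.

```python
import numpy as np
from scipy.stats import randint, uniform
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the paper's data: joint angles -> position error.
rng = np.random.default_rng(42)
X = rng.uniform(-np.pi, np.pi, size=(300, 6))            # 6 joint angles
# Hypothetical "extended" feature: per-joint motion direction appended,
# loosely mimicking the joint-reversal information described above.
X_ext = np.hstack([X, np.sign(np.gradient(X, axis=0))])
y = 0.5 * np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # toy error signal

# Randomized search over the hyperparameters a BO loop would explore.
search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(2, 6),
        "learning_rate": uniform(0.01, 0.3),
    },
    n_iter=10, cv=3, scoring="neg_root_mean_squared_error", random_state=0,
)
search.fit(X_ext, y)
print(search.best_params_)
```

A Bayesian optimizer differs from this sketch in that it models the score surface and proposes promising hyperparameters sequentially rather than sampling them independently.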
19 pages, 1524 KB  
Article
Potential Rapid Quantification of Antioxidant Capacity of Olea europaea L. Leaves by Near-Infrared Spectroscopy Using Different Assays
by Manuel Piqueras-García, Jorge F. Escobar-Talavera, María Esther Martínez-Navarro, Gonzalo L. Alonso and Rosario Sánchez-Gómez
Antioxidants 2025, 14(10), 1246; https://doi.org/10.3390/antiox14101246 - 17 Oct 2025
Abstract
The olive tree has exceptional agricultural and economic importance in Mediterranean regions due to its fruit, which is used to produce olive oil. However, the olive oil industry generates a significant amount of waste, including leaves from Olea europaea L. These leaves contain a high concentration of bioactive compounds, predominantly phenolic ones, which are well known for their antioxidant properties and health benefits. Determining antioxidant capacity involves the use of different assays based on absorbance (DPPH, 2,2-diphenyl-1-picrylhydrazyl; and ABTS, 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)) and fluorescence (ORAC, Oxygen Radical Absorbance Capacity), which require reagents and long waiting times. Therefore, having a non-destructive technique capable of providing this information would be useful. To explore this, 120 olive leaf samples were analyzed using the three antioxidant assays to quantify their total antioxidant capacity. Predictive models were successfully developed for each of the three methods, achieving coefficients of determination (R2) between 0.9 and 1 across calibration, validation, and prediction. Additionally, high residual predictive deviation (RPD) values were obtained, indicating that the models exhibit strong reliability and predictive performance. Full article
18 pages, 5303 KB  
Article
Mechanical Analysis and Multi-Objective Optimization of Origami-Inspired Tree-Shaped Thin-Walled Structures Under Axial Impacts
by Honghao Zhang, Zilong Meng, Jixiang Zhang, Xinyu Hao, Shangbin Zhang and Niancheng Guo
Biomimetics 2025, 10(10), 705; https://doi.org/10.3390/biomimetics10100705 - 17 Oct 2025
Abstract
Rail vehicles, frequently utilized as a heavy-duty, high-speed means of transportation, have been observed to result in substantial casualties and economic losses in the event of accidents. Energy-absorbing structures are critical to achieving passive safety, effectively absorbing and dissipating energy. The present study utilizes numerical simulation to assess the performance of origami-inspired tree-shaped structures (OTSs) under diverse surface configurations. OTSs offer significant advantages in reducing IPCF without substantially compromising other performance metrics. This experimental approach is employed to validate the efficacy of a finite element model. A multi-criteria decision-making method integrates MOEA/D-DAE and TOPSIS. This integrated approach is employed to identify optimal structures. The validity of the method was established through a comparison of the predicted results with the outcomes of finite element analysis. The findings demonstrated a 31.2% reduction in IPCF, a 3.6% increase in SEA, and a 10.4% rise in ULC. The optimized IPCF is 4.9919 kN, SEA is 12.316 kJ/kg. The collective results indicate the efficacy of the method as a tool for analyzing and optimizing energy-absorbing structures. Full article
(This article belongs to the Special Issue Computer-Aided Biomimetics: 3rd Edition)
23 pages, 8417 KB  
Article
Assessing Coniferous Forest Cover Change and Associated Uncertainty in a Subbasin of the Great Salt Lake Watershed: A Stochastic Approach Using Landsat Imagery and Random Forest Models
by Kaleb Markert, Gustavious P. Williams, Norman L. Jones, Robert B. Sowby and Grayson R. Morgan
Environments 2025, 12(10), 387; https://doi.org/10.3390/environments12100387 - 17 Oct 2025
Abstract
We present a stochastic method for classifying high-elevation coniferous forest coverage, with an uncertainty estimate, using Landsat images. We evaluate trends in coniferous coverage from 1986 to 2024 in a sub-basin of the Great Salt Lake basin in the western United States. This work was completed before the recent release of the extended National Land Cover Database (NLCD) data, so we use the 9 years of NLCD data previously available over the period from 2001 to 2021 for training and validation. We perform 100 draws of 5130 data points each, using stratified sampling from the paired NLCD and Landsat data, to generate 100 Random Forest models. Even though extended NLCD data are now available, our model is unique in that it is trained on high-elevation dense coniferous stands and does not classify western pinyon (Pinus edulis) or Utah juniper (Juniperus osteosperma) shrub trees as “coniferous”. We apply these models, implemented in Google Earth Engine, to the nearly 40-year Landsat dataset to stochastically classify coniferous forest extent and support trend analysis with uncertainty. Model accuracy for most years is better than 94%, comparable to published NLCD accuracy, though several years had significantly worse results. Coniferous-area standard deviations for any given year ranged from 0.379% to 1.17% across the 100 realizations. A linear fit from 1985 to 2024 shows an increase of 65% in coniferous coverage over 38 years, though there is variation around the trend. The method can be adapted for other specialized land cover categories and sensors, facilitating long-term environmental monitoring and management while providing uncertainty estimates. The findings support ongoing research on forest management impacts on snowpack and water infiltration, as increased coniferous coverage of dense fir and spruce increases interception and sublimation, decreasing infiltration and runoff. NLCD data cannot easily be used for this work in the west, as the pinyon (Pinus edulis) and juniper (Juniperus osteosperma) forests are classified as coniferous but have a much lower impact on interception and sublimation. Full article
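The stochastic idea above (many stratified draws, one model per draw, uncertainty from the spread of realizations) can be sketched with scikit-learn; the band values, draw count, and sample size here are illustrative, not the paper's, and real use would run on Landsat pixels in Google Earth Engine.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Toy stand-in for paired NLCD/Landsat pixels: 6 bands, binary coniferous label.
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=2000) > 0).astype(int)

preds = []
for draw in range(20):  # the paper uses 100 draws of 5130 points each
    Xs, _, ys, _ = train_test_split(
        X, y, train_size=500, stratify=y, random_state=draw)
    model = RandomForestClassifier(n_estimators=50, random_state=draw).fit(Xs, ys)
    preds.append(model.predict(X))

preds = np.array(preds)           # (draws, pixels)
frac = preds.mean(axis=0)         # per-pixel coniferous fraction across draws
area = preds.mean(axis=1)         # per-draw coniferous area fraction
print(f"area = {area.mean():.3f} +/- {area.std():.3f}")
```

The spread of `area` across draws is what yields the per-year standard deviations reported in the abstract.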
18 pages, 6519 KB  
Article
Detection of SPAD Content in Leaves of Grey Jujube Based on Near Infrared Spectroscopy
by Lanfei Wang, Junkai Zeng, Mingyang Yu, Weifan Fan and Jianping Bao
Horticulturae 2025, 11(10), 1251; https://doi.org/10.3390/horticulturae11101251 - 17 Oct 2025
Abstract
The efficient, non-destructive inspection of the chlorophyll content of grey jujube leaves is of great significance for growth surveillance and nutritional diagnosis. Near-infrared spectroscopy combined with chemometric methods provides an effective approach to this goal. This study took grey jujube leaves as the research object, systematically collected near-infrared spectral data in the range of 4000–10,000 cm−1, and simultaneously measured their soil and plant analyzer development (SPAD) values as a reference index for chlorophyll content. Various pretreatments and their combinations were applied to the original spectra—smoothing, standard normal variate transformation (SNV), first derivative (FD), second derivative (SD), smoothing + first derivative (Smooth + FD), smoothing + second derivative (Smooth + SD), SNV + first derivative (SNV + FD), and SNV + second derivative (SNV + SD)—and the effects of the different methods on spectral quality and on the correlation with the SPAD value were compared. The competitive adaptive reweighted sampling (CARS) algorithm was adopted to extract characteristic wavelengths, reducing data dimensionality and optimizing the model input. Both BP neural network and RBF neural network prediction models were established, and model performance under different training functions was compared. The results indicate that after Smooth + FD pretreatment, followed by CARS screening of the characteristic wavelengths, the BP neural network model trained with the LBFGS algorithm performed best, with a coefficient of determination (R2) of 0.87 (training set) and 0.85 (validation set), a root mean square error (RMSE) of 1.36 (training set) and 1.35 (validation set), and a residual prediction deviation (RPD) of 2.81 (training set) and 2.56 (validation set), showing good prediction accuracy and robustness. The research indicates that by combining near-infrared spectroscopy with feature extraction and machine learning methods, rapid, non-destructive inspection of the grey jujube leaf SPAD value can be achieved, providing reliable technical support for real-time monitoring of the nutritional status of jujube trees. Full article
(This article belongs to the Section Fruit Production Systems)
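The spectral pretreatments named in this abstract (Smooth + FD, SNV) are standard chemometric operations; below is a minimal SciPy/NumPy sketch of those two on synthetic spectra. The window length, polynomial order, and spectral shape are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy NIR spectra: 50 samples over the 4000-10,000 cm^-1 range.
wavenumbers = np.linspace(4000, 10000, 600)
rng = np.random.default_rng(7)
spectra = (np.exp(-((wavenumbers - 6800) / 400) ** 2)[None, :]
           + 0.02 * rng.normal(size=(50, 600)))

# "Smooth + FD": Savitzky-Golay smoothing with a first-derivative output,
# applied along the wavelength axis.
smooth_fd = savgol_filter(spectra, window_length=15, polyorder=2,
                          deriv=1, axis=1)

# SNV: center and scale each spectrum by its own mean and standard deviation.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
      / spectra.std(axis=1, keepdims=True)
```

Derivative pretreatments emphasize band shape over baseline offsets, while SNV removes per-sample scatter effects; both are routinely combined before wavelength selection such as CARS.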
18 pages, 6056 KB  
Article
Comparative Study on the Different Downscaling Methods for GPM Products in Complex Terrain Areas
by Jiao Liu, Xuyang Shi, Yahui Fang, Caiyan Wu and Zhenyan Yi
Earth 2025, 6(4), 129; https://doi.org/10.3390/earth6040129 - 17 Oct 2025
Abstract
Fine spatial information on precipitation plays a significant role in regional eco-hydrological studies but remains challenging to derive from satellite observations, especially in complex terrain areas. Sichuan Province, located in southwest China, has highly variable terrain, and its spatial distribution of precipitation exhibits extreme heterogeneity and strong autocorrelation. Multi-scale Geographically Weighted Regression (MGWR) and Random Forest (RF) were employed to downscale Global Precipitation Measurement Mission (GPM) products based on high-spatial-resolution terrain, vegetation, and meteorological data in Sichuan Province, and their specific effects on gauged precipitation accuracy and spatial precipitation distributions were analyzed with respect to the influences of environmental variables. Results show that the influence of each environmental factor on the distribution of precipitation at different scales was well represented in the MGWR model. The downscaled data showed good spatial sharpening effects; additionally, the biases in the overestimated region were well corrected after downscaling. However, when based on spatial autocorrelation and considering adjacent influences, MGWR performed poorly in correcting outlier sites adjacent to the high–high clusters. Compared with MGWR, RF, relying on independently constructed decision trees and powerful regression capabilities, achieved superior correction for outlier sites. Nevertheless, the influence of environmental variables reflected in RF differs from actual conditions, and detailed characteristics of the spatial precipitation distribution are lost in the downscaled results. MGWR and RF demonstrate varying applicability when downscaling GPM products in complex terrain areas: both improve the ability to finely depict spatial information, but they differ in texture expression and precipitation bias correction. Full article
19 pages, 13759 KB  
Article
University Campuses as Vital Urban Green Infrastructure: Quantifying Ecosystem Services Based on Field Inventory in Nizhny Novgorod, Russia
by Basil N. Yakimov, Nataly I. Zaznobina, Irina M. Kuznetsova, Angela D. Bolshakova, Taisia A. Kovaleva, Ivan N. Markelov and Vladislav V. Onishchenko
Land 2025, 14(10), 2073; https://doi.org/10.3390/land14102073 - 17 Oct 2025
Abstract
This study provides the first comprehensive, field-inventory-based assessment of urban ecosystem services within a Russian university campus, focusing on the woody vegetation of the Lobachevsky State University of Nizhny Novgorod. Utilizing a detailed field tree inventory combined with the i-Tree framework (including i-Tree Eco, i-Tree Canopy, UFORE, and i-Tree Hydro models), we quantified the campus’s capacity for carbon storage and sequestration, air pollutant removal, and stormwater runoff mitigation. The campus green infrastructure, comprising 1887 trees across 32 species with a density of 145.5 stems per hectare, demonstrated significant ecological value. Results show a carbon storage density of 26.61 t C ha−1 and an annual gross carbon sequestration of 11.43 tons. Furthermore, the campus trees removed 1213.7 kg of air pollutants annually (a deposition rate of 9.35 g m−2), with ozone, particulate matter, and sulfur dioxide showing the highest deposition. The campus also retained 956.1 m3 of stormwater annually. These findings, particularly the high carbon sequestration rates, are attributed to the dominance of relatively young, fast-growing tree species. This research establishes a critical baseline for understanding urban ecosystem services in a previously under-researched geographical context. The detailed, empirical data offers crucial insights for urban planners and policymakers in Nizhny Novgorod and beyond, advocating for the strategic integration of ecosystem services assessments into campus planning and broader urban green infrastructure development across Russian cities. The study underscores the significant role of university campuses as vital components of urban green infrastructure, contributing substantially to environmental sustainability and human well-being. Full article
(This article belongs to the Section Land Use, Impact Assessment and Sustainability)
17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 (registering DOI) - 17 Oct 2025
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
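The profiling idea behind DQMAF (measure several quality dimensions per record, aggregate into a score, bin into high/medium/low) can be sketched in a few lines. This is not DQMAF's actual scoring; the dimensions, column-name convention, and thresholds below are hypothetical.

```python
import pandas as pd

def profile_quality(df, bins=(-0.01, 0.5, 0.8, 1.0)):
    """Hypothetical per-record quality score: average of completeness
    (non-null share) and numeric conformity (values in columns prefixed
    'num_' parse as numbers), binned into low/medium/high."""
    completeness = df.notna().mean(axis=1)
    numeric_cols = [c for c in df.columns if c.startswith("num_")]
    conformity = pd.Series(1.0, index=df.index)
    if numeric_cols:
        ok = df[numeric_cols].apply(
            lambda col: pd.to_numeric(col, errors="coerce").notna())
        conformity = ok.mean(axis=1)
    score = (completeness + conformity) / 2
    return pd.cut(score, bins=list(bins), labels=["low", "medium", "high"])

df = pd.DataFrame({
    "name": ["a", None, "c"],
    "num_price": ["10", "oops", "30"],
})
print(profile_quality(df).tolist())  # ['high', 'low', 'high']
```

In DQMAF these interpretable scores then label training data for supervised classifiers (Decision Trees, Random Forest, XGBoost, AdaBoost, CatBoost), so downstream systems can filter records by predicted quality.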
36 pages, 3174 KB  
Review
A Bibliometric-Systematic Literature Review (B-SLR) of Machine Learning-Based Water Quality Prediction: Trends, Gaps, and Future Directions
by Jeimmy Adriana Muñoz-Alegría, Jorge Núñez, Ricardo Oyarzún, Cristian Alfredo Chávez, José Luis Arumí and Lien Rodríguez-López
Water 2025, 17(20), 2994; https://doi.org/10.3390/w17202994 - 17 Oct 2025
Abstract
Predicting the quality of freshwater, both surface and groundwater, is essential for the sustainable management of water resources. This study collected 1822 articles from the Scopus database (2000–2024) and filtered them using Topic Modeling to create the study corpus. The B-SLR analysis identified exponential growth in scientific publications since 2020, indicating that this field has reached a stage of maturity. The results showed that the predominant techniques for predicting water quality, both for surface and groundwater, fall into three main categories: (i) ensemble models, with Bagging and Boosting representing 43.07% and 25.91%, respectively, particularly random forest (RF), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGB), along with their optimized variants; (ii) deep neural networks such as long short-term memory (LSTM) and convolutional neural network (CNN), which excel at modeling complex temporal dynamics; and (iii) traditional algorithms like artificial neural network (ANN), support vector machines (SVMs), and decision tree (DT), which remain widely used. Current trends point towards the use of hybrid and explainable architectures, with increased application of interpretability techniques. Emerging approaches such as Generative Adversarial Network (GAN) and Group Method of Data Handling (GMDH) for data-scarce contexts, Transfer Learning for knowledge reuse, and Transformer architectures that outperform LSTM in time series prediction tasks were also identified. Furthermore, the most studied water bodies (e.g., rivers, aquifers) and the most commonly used water quality indicators (e.g., WQI, EWQI, dissolved oxygen, nitrates) were identified. The B-SLR and Topic Modeling methodology provided a more robust, reproducible, and comprehensive overview of AI/ML/DL models for freshwater quality prediction, facilitating the identification of thematic patterns and research opportunities. Full article
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)
25 pages, 1355 KB  
Article
Source Robust Non-Parametric Reconstruction of Epidemic-like Event-Based Network Diffusion Processes Under Online Data
by Jiajia Xie, Chen Lin, Xinyu Guo and Cassie S. Mitchell
Big Data Cogn. Comput. 2025, 9(10), 262; https://doi.org/10.3390/bdcc9100262 - 16 Oct 2025
Abstract
Temporal network diffusion models play a crucial role in healthcare, information technology, and machine learning, enabling the analysis of dynamic event-based processes such as disease spread, information propagation, and behavioral diffusion. This study addresses the challenge of reconstructing temporal network diffusion events in real time under conditions of missing and evolving data. A novel non-parametric reconstruction method based on simple weight differentiation is proposed to enhance source-detection robustness, with provably improved error bounds. The approach introduces adaptive cost adjustments, dynamically reducing high-risk source penalties and enabling bounded detours to mitigate errors introduced by missing edges. Theoretical analysis establishes enhanced upper bounds on false positives caused by detouring, while a stepwise evaluation of dynamic costs minimizes redundant solutions, resulting in robust Steiner tree reconstructions. Empirical validation on three real-world datasets demonstrates a 5% improvement in Matthews correlation coefficient (MCC), a twofold reduction in redundant sources, and a 50% decrease in source variance. These results confirm the effectiveness of the proposed method in accurately reconstructing temporal network diffusion while improving stability and reliability in both offline and online settings. Full article
26 pages, 2009 KB  
Article
Tool Wear Prediction Using Machine-Learning Models for Bone Drilling in Robotic Surgery
by Shilpa Pusuluri, Hemanth Satya Veer Damineni and Poolan Vivekananda Shanmuganathan
Automation 2025, 6(4), 59; https://doi.org/10.3390/automation6040059 - 16 Oct 2025
Abstract
Bone drilling is a widely encountered process in orthopedic surgeries and keyhole neurosurgeries. We are developing a sensor-integrated smart end-effector for drilling in robotic surgical applications. In manual surgeries, surgeons assess tool wear based on experience and force perception. In this work, we propose a machine-learning (ML)-based tool condition monitoring system built on multi-sensor data to preempt excessive tool wear during drilling in robotic surgery. Real-time data is acquired from the six-component force sensor of a collaborative arm along with data from the temperature and multi-axis vibration sensors mounted on the bone specimen being drilled. Raw data from the sensors may contain noise and outliers. Signal processing in the time and frequency domains is used for denoising and to derive additional features from the raw sensory data. This paper addresses the challenging problem of identifying the most suitable ML algorithm and the most suitable features to use as its inputs. While dozens of features and innumerable machine learning and deep learning models are available, this paper addresses the problem of selecting the most relevant features, the most relevant AI models, and the optimal hyperparameters for accurate prediction of the tool condition. A unique framework is proposed for classifying tool wear that combines machine learning-based modeling with multi-sensor data. From the raw sensory data, which contains only a handful of features, additional features are derived using frequency-domain techniques and statistical measures. Feature engineering yielded a total of 60 features from time-domain, frequency-domain, and interaction-based metrics. Such additional features improve predictive capability but make training and prediction complicated and time-consuming. Using a sequence of techniques (variance thresholding, correlation filtering, the ANOVA F-test, and SHAP analysis), the number of features was reduced from 60 to the 4 most effective for real-time tool condition prediction. In contrast to previous studies that examine only a small number of machine learning models, our approach systematically evaluates a wide range of machine learning and deep learning architectures. The performances of 47 classical ML models and 6 deep learning (DL) architectures were analyzed using the four features identified as most suitable. The Extra Trees Classifier (an ML model) and the one-dimensional Convolutional Neural Network (1D CNN) exhibited the best prediction accuracy among the models studied. Using real-time data, these models monitored the drilling tool condition to classify tool wear into three categories: slight, moderate, and severe. Full article
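The feature-reduction sequence this abstract describes can be sketched with scikit-learn on synthetic data; the SHAP step is omitted here, and the thresholds, feature counts, and labels are illustrative rather than the paper's.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 60))               # 60 engineered features
X[:, 10] = 0.0                               # plant a zero-variance feature
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy wear label

# Step 1: drop (near-)constant features.
vt = VarianceThreshold(threshold=1e-8)
X1 = vt.fit_transform(X)

# Step 2: drop the later member of every highly correlated pair (|r| > 0.95).
corr = np.abs(np.corrcoef(X1, rowvar=False))
upper = np.triu(corr, k=1)
keep = ~(upper > 0.95).any(axis=0)
X2 = X1[:, keep]

# Step 3: ANOVA F-test, keep the k strongest features (k=4 as in the paper).
skb = SelectKBest(f_classif, k=4)
X3 = skb.fit_transform(X2, y)
print(X3.shape)
```

Each stage is cheap and model-free, which is why such filters are applied before the more expensive SHAP-based ranking and model comparison.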

17 pages, 1132 KB  
Article
Clinical Variability and Genotype–Phenotype Correlation in Spanish Patients with Type 1 Gaucher Disease: A Focus on Non-c.[1226A>G]; [1448T>C] Genotypes
by Irene Serrano-Gonzalo, Francisco Bauza, Laura Lopez de Frutos, Isidro Arevalo-Vargas, Mercedes Roca-Espiau, Marcio Andrade-Campos, Esther Valero-Tena, Sonia Roca-Esteve, David Iniguez and Pilar Giraldo
Int. J. Mol. Sci. 2025, 26(20), 10088; https://doi.org/10.3390/ijms262010088 - 16 Oct 2025
Abstract
The clinical heterogeneity of type 1 Gaucher disease (GD1) underscores the limited correlation between the GBA1 genotype and phenotype. This study examined GD1 patients from the Spanish Gaucher Disease Registry carrying heterozygous GBA1 genotypes distinct from NM_000157: c.[1226A>G](N370S); [1448T>C](L444P). Among 374 patients with GD1, 195 (52.1%) had alternative heterozygous combinations, including variants classified as severe (37.9%) or moderate (42.1%), whereas only 20% of patients harbored mild variants, all of them in combination with N370S. Descriptive statistics and predictive models based on logistic regression and decision trees were applied. Patients carrying N370S combined with a variant other than L444P showed significantly higher rates of advanced bone disease (59.9%) compared to those with homozygous N370S (38.3%) or N370S; L444P (41.0%) (p = 0.002). Decision tree analysis identified the bone marrow burden score (S-MRI) as the strongest predictor of osteopenia/osteoporosis at diagnosis. Genotype also emerged as a key discriminator for Parkinson's disease: patients with non-N370S; L444P genotypes showed a markedly higher likelihood of developing Parkinsonism. Overall, GD1 patients with genotypes other than N370S; L444P present more severe phenotypes, particularly greater skeletal involvement and neurological complications. These findings highlight the importance of genotype stratification and predictive modeling in improving risk assessment and clinical management in GD1. Full article
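As an illustration of the decision-tree approach described above (not the registry analysis itself), the sketch below fits a shallow tree to simulated data in which a hypothetical S-MRI score and a coded genotype severity drive bone involvement. Every feature, coefficient, and data point here is invented for demonstration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical illustration only: predict osteopenia/osteoporosis at
# diagnosis from a bone marrow burden (S-MRI) score and a coded genotype
# severity. The score range, encoding, and risk relation are assumptions.
rng = np.random.default_rng(42)
n = 300
s_mri = rng.uniform(0, 16, n)            # S-MRI score (0-16 scale assumed)
genotype_sev = rng.integers(0, 3, n)     # 0 = mild, 1 = moderate, 2 = severe
# Simulated outcome: higher S-MRI and higher severity raise the risk
risk = 0.08 * s_mri + 0.3 * genotype_sev
y = (risk + rng.normal(0, 0.3, n) > 1.0).astype(int)

X = np.column_stack([s_mri, genotype_sev])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["s_mri", "genotype_severity"])
print(rules)
```

Printing the learned rules is what makes trees attractive for this kind of clinical stratification: the split thresholds read directly as candidate risk cut-offs.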

9 pages, 473 KB  
Proceeding Paper
Optimization of Forecasting Performance in the Retail Sector Using Artificial Intelligence
by Hoda Jatte, Sara Belattar and El Khatir Haimoudi
Eng. Proc. 2025, 112(1), 37; https://doi.org/10.3390/engproc2025112037 - 16 Oct 2025
Abstract
In the retail industry, demand forecasting is crucial for guaranteeing efficient inventory and supply chain control. Various artificial intelligence (AI) techniques have recently been used to improve forecasting performance; however, demand fluctuation, seasonal patterns, and external influences continue to create difficulties. Using several machine-learning techniques (Linear Regression, XGBoost, Random Forest, Decision Tree, Prophet, and LSTM), this paper offers a comparative study of product demand forecasting. A retail dataset obtained from Kaggle served as the basis for training and testing the forecasting models. The experimental results show that the LSTM model performs best overall, with accuracy, precision, recall, and F1-score of 92.31%, 92.31%, 100.00%, and 96.00%, respectively; it is followed by Prophet (85.71%, 92.31%, 92.31%, and 92.31%), Decision Tree (93.05%, 75.76%, 76.13%, and 75.94%), Random Forest (91.99%, 66.86%, 88.08%, and 76.02%), XGBoost (83.21%, 45.70%, 87.84%, and 60.12%), and Linear Regression (60.67%, 25.46%, 89.75%, and 39.67%). These results confirm that ensemble and deep learning models can notably improve forecasting accuracy and help retailers raise operational efficiency. Full article
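The four reported metrics can be reproduced for any binarized forecast with scikit-learn. The labels below are invented, and treating forecasting as a binary decision (e.g. "demand rises" vs. "falls") is an assumption about the evaluation protocol implied by the use of precision/recall/F1.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Invented ground-truth and predicted binary demand labels
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1]

# The same four metrics the paper reports for each forecaster
metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}
```

Computing all four matters because, as the paper's own table shows, a model can rank first on accuracy yet lag on precision or F1, so no single number settles the comparison.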

31 pages, 1941 KB  
Review
Machine Learning in Slope Stability: A Review with Implications for Landslide Hazard Assessment
by Miguel Trinidad and Moe Momayez
GeoHazards 2025, 6(4), 67; https://doi.org/10.3390/geohazards6040067 - 16 Oct 2025
Abstract
Slope failures represent one of the most serious geotechnical hazards, with severe potential consequences for personnel, equipment, infrastructure, and other aspects of a mining operation. Conventional deterministic and stochastic methods of slope stability analysis are useful; however, their applicability can be limited by the inherent anisotropy of rock mass properties and rock mass interactions. In recent years, Machine Learning (ML) techniques have become powerful tools for improving prediction and risk assessment in slope stability analysis. This review provides a comprehensive overview of ML applications for analyzing slope stability, examining the performance of each technique as well as the interrelationships among the geotechnical parameters of the rock mass. Supervised learning methods such as decision trees, support vector machines, random forests, gradient boosting, and neural networks have been applied by different authors to predict the safety factor and classify slopes. Unsupervised learning techniques such as clustering and Gaussian mixture models have also been applied to identify hidden patterns. The objective of this manuscript is to consolidate existing work by highlighting the advantages and limitations of different ML techniques, while identifying gaps to be addressed in future research. Full article
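A typical supervised setup from this literature can be sketched as follows. The six input parameters are common choices in slope stability ML studies, but the data-generating relation here is entirely synthetic, loosely shaped like a c-phi stability ratio, and is not drawn from any study covered by the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic geotechnical inputs (ranges are illustrative assumptions)
rng = np.random.default_rng(1)
n = 500
gamma = rng.uniform(16, 28, n)     # unit weight (kN/m^3)
c = rng.uniform(0, 50, n)          # cohesion (kPa)
phi = rng.uniform(10, 45, n)       # friction angle (deg)
beta = rng.uniform(20, 60, n)      # slope angle (deg)
H = rng.uniform(10, 100, n)        # slope height (m)
ru = rng.uniform(0, 0.5, n)        # pore pressure ratio

# Made-up proxy for the safety factor: a cohesion term plus a frictional
# term reduced by pore pressure, with small observational noise
fos = (10 * c / (gamma * H)
       + np.tan(np.radians(phi)) / np.tan(np.radians(beta)) * (1 - ru)
       + rng.normal(0, 0.05, n))

X = np.column_stack([gamma, c, phi, beta, H, ru])
X_tr, X_te, y_tr, y_te = train_test_split(X, fos, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # held-out R^2
```

The same six-feature table could instead feed a classifier (stable/unstable) or any of the other supervised methods the review surveys; the regression form shown here targets the safety factor directly.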
