Search Results (477)

Search Parameters:
Keywords = TAB1

35 pages, 15596 KB  
Article
Biomass Estimation of Picea schrenkiana Forests in the Western Tianshan Mountains Using Integrated ICESat-2 and GF-6 Data
by Yan Tang, Donghua Chen, Xinguo Li, Juluduzi Shashan and Pinghao Xu
Forests 2026, 17(4), 421; https://doi.org/10.3390/f17040421 - 27 Mar 2026
Abstract
Forest biomass reflects the carbon storage capacity of forest ecosystems. Although remote sensing-based biomass estimation techniques have become increasingly mature, the issue of signal saturation in optical remote sensing still requires further investigation. This study was conducted in the Picea schrenkiana forest of the Ili River Valley in the western Tianshan Mountains. By integrating multimodal data from ICESat-2 LiDAR and GF-6 optical imagery, we developed machine learning and deep learning models to achieve high-precision biomass estimation. Based on forest management inventory data, we extracted spectral and textural features from GF-6, along with canopy structure attributes derived from the four acquisition modes (day/night, strong/weak beams) of ICESat-2. After correlation-based feature selection, LightGBM, CatBoost, and TabNet models were trained and compared. The results showed that models integrating multi-source data significantly outperformed those based on a single data source. The TabNet model not only achieved high estimation accuracy but also provided clear feature importance rankings, with ICESat-2-derived canopy height percentiles and GF-6 red-edge vegetation indices contributing most significantly to the biomass estimation of Picea schrenkiana. These findings demonstrate the feasibility of synergistically utilizing domestic high-resolution satellites and multi-mode spaceborne LiDAR for forest biomass estimation in arid regions, providing an effective technical reference for accurate carbon sink monitoring of specific tree species in forest areas. Full article
(This article belongs to the Special Issue Modelling and Estimation of Forest Biomass)

17 pages, 1422 KB  
Article
Performance Evaluation of Publicly Funded Agricultural Research Projects with Light-TabNet
by Zelin Liu, Lu Fan, Qiulian Chen, Haipeng Li and Ailan Wei
Appl. Sci. 2026, 16(7), 3230; https://doi.org/10.3390/app16073230 - 27 Mar 2026
Viewed by 107
Abstract
This study focuses on the performance evaluation of publicly funded agricultural research projects in a structured tabular-data setting characterized by small sample size and heterogeneous features. We construct a project-level performance evaluation dataset covering 24 provincial agricultural research institutions in China, with n=280 samples. The target variable is the project self-evaluation score, reflecting overall annual target completion rather than a fixed explicit transformation of the input indicators. To address the limitations of manual evaluation—including subjectivity, poor inter-rater consistency, and potential bias—we propose Light-TabNet, which enhances the model’s fitting capability in small-sample scenarios while preserving interpretability. Interpretability is achieved through sparse decision masks and aggregated feature-attribution analysis, with partial cross-model support from comparison with XGBoost-SHAP rankings. Compared with 13 deep learning and traditional machine learning baselines, Light-TabNet achieves improved accuracy in terms of mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R2) (MAE 4.9765, RMSE 8.8140, R2 0.8891). In a preliminary real-world validation on eight projects from a provincial agricultural research institution, the model’s predicted scores were overall close to ratings provided by a third-party organization, suggesting preliminary practical usefulness in a similar management setting. The results suggest that Light-TabNet can serve as a decision-support tool for the performance evaluation of publicly funded agricultural research projects by providing an objective, traceable, and interpretable quantitative reference. Full article
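The abstract above reports Light-TabNet's accuracy as MAE, RMSE, and R². For readers unfamiliar with these regression metrics, a minimal sketch of how they are computed with scikit-learn, using made-up self-evaluation scores (not the paper's data):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hypothetical project self-evaluation scores and model predictions
# (illustrative values only, not from the paper's n=280 dataset)
y_true = np.array([70.0, 82.5, 64.0, 91.0, 77.5])
y_pred = np.array([68.0, 85.0, 66.5, 88.0, 79.0])

mae = mean_absolute_error(y_true, y_pred)          # mean |error|
rmse = float(np.sqrt(mean_squared_error(y_true, y_pred)))  # penalizes large errors
r2 = r2_score(y_true, y_pred)                      # variance explained

print(f"MAE={mae:.4f} RMSE={rmse:.4f} R2={r2:.4f}")
```

Lower MAE/RMSE and higher R² indicate a better fit; RMSE ≥ MAE always holds, and the gap between them grows when a few projects are predicted especially badly.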

28 pages, 6373 KB  
Article
Mitigating Urban-Centric Bias to Address the Rural Eligibility Discovery Lag
by Guiyan Jiang and Donghui Zhang
Land 2026, 15(4), 535; https://doi.org/10.3390/land15040535 - 25 Mar 2026
Viewed by 247
Abstract
Urban sustainability depends on rural hinterlands, yet national-scale evaluation and AI screening often rely on urban-centric proxies, which can under-recognize remote villages where the evidence base is sparse. Using China’s national honored-village programme (N = 24,450) as a case, we examine how recognition patterns change when data availability and observability are unequal across regions, with a focus on the Qinghai–Tibetan Plateau (QTP), where 923 honored villages account for only 3.78% of the national total. We interpret urban-centric proxy reliance as the tendency for recognition patterns to correlate with urban-linked observability signals (e.g., nighttime lights). In this study, discovery lag refers to situations where villages exhibit characteristics similar to historically recognized villages but remain unrecognized under the current honor regime due to uneven data availability and observability. Methodologically, we build a scene-aware predictive framework that integrates multi-source geospatial indicators and explicitly handles extreme imbalance and environmental heterogeneity to estimate recognition likelihood under the current honor regime, treating national honor lists as administratively produced recognition outcomes rather than objective measures of village value. The model highlights four high-probability nomination belts on the QTP and reveals a pronounced DEM–NTL decoupling: the median NTL of currently honored QTP villages is 0, suggesting that NTL-based urban proxies can fail in high-altitude, data-scarce contexts. Overall, the observed under-representation is consistent with uneven observability and institutional constraints within the current honor system, and the proposed framework provides a scalable diagnostic and screening tool for identifying villages with high predicted recognition likelihood and supporting more evidence-aware rural data collection. Full article

19 pages, 721 KB  
Article
Evaluating EEG-Based Seizure Classification Using Foundation and Classical Ensemble Models
by George Obaido and Ebenezer Esenogho
Appl. Sci. 2026, 16(7), 3120; https://doi.org/10.3390/app16073120 - 24 Mar 2026
Viewed by 177
Abstract
Electroencephalogram (EEG)-based seizure classification remains challenging due to inter-subject variability and heterogeneous signal characteristics. Foundation models offer a promising alternative to dataset-specific training by leveraging pretrained priors. In this study, we evaluate a tabular foundation model, the Tabular Prior-Data Fitted Network (TabPFN), against classical ensemble baselines (gradient boosting, random forests, AdaBoost, and XGBoost) for EEG seizure segment classification. We use subject-independent GroupKFold cross-validation without out-of-fold evaluation to assess generalization to unseen individuals. Experiments on the Bangalore EEG Epilepsy Dataset (BEED) and the University of Bonn (Bonn) dataset show that TabPFN achieves higher accuracy than classical ensembles, reaching 99.7% on BEED and 99.6% on Bonn. These results suggest that pretrained tabular priors can be effective in feature-based EEG pipelines where subject-level generalization is required. Full article
(This article belongs to the Special Issue AI-Driven Healthcare)
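The subject-independent GroupKFold protocol this abstract describes keeps every segment from a given individual in the same fold, so the model is always scored on unseen subjects. A minimal sketch with synthetic data (the feature dimensions, subject count, and classifier below are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))            # synthetic EEG-segment feature vectors
y = rng.integers(0, 2, size=120)         # seizure / non-seizure labels
groups = np.repeat(np.arange(12), 10)    # 12 subjects, 10 segments each

accs = []
for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups):
    # No subject contributes segments to both the train and test folds
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    clf = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"{len(accs)} subject-independent fold accuracies")
```

Plain KFold would let segments from the same subject leak across the split and inflate accuracy; grouping by subject is what makes the evaluation a test of cross-subject generalization.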

37 pages, 7684 KB  
Review
Comparative Review of Cooling Systems for Lithium-Ion Battery Modules with 21700 Cylindrical Cells
by Leone Martellucci, Roberto Capata and Matteo De Marco
Batteries 2026, 12(3), 107; https://doi.org/10.3390/batteries12030107 - 21 Mar 2026
Viewed by 242
Abstract
The automotive sector is currently undergoing a rapid and complex transition from classic internal combustion engines to hybrid or fully electric propulsion systems, at the core of which is the battery pack. Currently, the battery packs of almost all electric vehicles on the road consist of lithium-ion cells. The thermal management of these cells represents a complex and fundamental challenge, essential not only to ensure optimal vehicle performance but also to guarantee passenger safety. Therefore, this paper examines and compares four main systems used for battery thermal management, highlighting their strengths, weaknesses, and overall effectiveness. First, a standard module comprising 21700 cylindrical cells, representative of automotive applications, is designed. Subsequently, computational fluid dynamics (CFD) thermal analyses of this module are performed to evaluate four different cooling methods: forced air cooling, bottom cold plate cooling, liquid tube cooling, and immersion cooling combined with tab cooling. Finally, an experimental validation is conducted by testing these systems on a physical module, which is subjected to the same electrical discharge profile simulated in the CFD analyses, to verify the effectiveness of the four considered methods. Full article
(This article belongs to the Special Issue Advanced Battery Safety Technologies: From Materials to Systems)

22 pages, 5690 KB  
Article
Testing and Modeling of a CFRP Composite Subjected to Simple and Compound Loads
by Ionuț Mititelu, Viorel Goanță, Paul Doru Bârsănescu and Ciprian Ionuț Morăraș
C 2026, 12(1), 26; https://doi.org/10.3390/c12010026 - 20 Mar 2026
Viewed by 231
Abstract
Most components fail under complex states of stress and for this reason the study of materials failure under these conditions is an important topic. The article presents the experimental study of the failure of a CFRP material, with a 0/90° cross-ply configuration, subjected to both simple loading conditions (tension, compression, and shear) and combined loading (tension–shear), using a modified Arcan testing method. The Arcan device and specimen geometry were redesigned to reduce experimental errors and the dispersion of results. It was found that there are significant differences between the strength values obtained for simple loads performed by the standardized methods and by the Arcan method, respectively. For this reason, it is recommended to use the Arcan method only for mixed loading modes. Specimens with steel tabs were used to reduce both hole ovality during testing and the number of clamping screws to only four. It was found that the experimental results under complex stress states are well described by the Tsai–Hill failure criterion and the failure envelope for the material studied was plotted. Recommendations are provided regarding the appropriate use of the Arcan method in order to obtain precise results for CFRP composites under multiaxial loading. Full article
(This article belongs to the Section Carbon Materials and Carbon Allotropes)

23 pages, 2679 KB  
Article
Morphology-Aware Deep Features and Frozen Filters for Surgical Instrument Segmentation with LLM-Based Scene Summarization
by Adnan Haider, Muhammad Arsalan and Kyungeun Cho
J. Clin. Med. 2026, 15(6), 2227; https://doi.org/10.3390/jcm15062227 - 15 Mar 2026
Viewed by 194
Abstract
Background/Objectives: The rise of artificial intelligence is injecting intelligence into the healthcare sector, including surgery. Vision-based intelligent systems that assist surgical procedures can significantly increase productivity, safety, and effectiveness during surgery. Surgical instruments are central components of any surgical intervention, yet detecting and locating them during live surgeries remains challenging due to adverse imaging conditions such as blood occlusion, smoke, blur, glare, low-contrast, instrument scale variation, and other artifacts. Methods: To address these challenges, we developed an advanced segmentation architecture termed the frozen-filters-based morphology-aware segmentation network (FFMS-Net). Accurate surgical instrument segmentation strongly depends on edge and morphology information; however, in conventional neural networks, this spatial information is progressively degraded during spatial processing. FFMS-Net introduces a frozen and learnable feature pipeline (FLFP) that simultaneously exploits frozen edge representations and learnable features. Within FLFP, Sobel and Laplacian filters are frozen to preserve edge and orientation information, which is subsequently fused with learnable initial spatial features. Moreover, a tri-atrous blending (TAB) block is employed at the end of the encoder to fuse multi-receptive-field-based contextual information, preserving instrument morphology and improving robustness under challenging conditions such as blur, blood occlusion, and smoke. Datasets focused on surgical instruments often suffer from severe class imbalance and poor instrument visibility. To mitigate these issues, FFMS-Net incorporates a progressively structure-preserving decoder (PSPD) that aggregates dilated and standard spatial information after each upsampling stage to maintain class structure. Multi-scale spatial features from different encoder levels are further fused using light skip paths (LSPs) to project channels with task-relevant patterns. Results/Conclusions: FFMS-Net is extensively evaluated on three challenging datasets: UW-Sinus-surgery-live, UW-Sinus-cadaveric, and CholecSeg8k. The proposed method demonstrates promising performance compared with state-of-the-art approaches while requiring only 1.5 million trainable parameters. In addition, an open-source large language model is integrated for non-clinical summarization of the surgical scene based on the predicted mask and deterministic descriptors derived from it. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Clinical Practice)

12 pages, 3262 KB  
Article
Colorimetric Behaviour of Ceramic Zirconia Restorations Cemented on Darkened Substrates—In Vitro Study
by Ricardo Dias, Cristiano Pereira Alves, Raul Yehudi, Fernando Guerra and Ana Messias
Surfaces 2026, 9(1), 27; https://doi.org/10.3390/surfaces9010027 - 12 Mar 2026
Viewed by 159
Abstract
The colour matching of ceramic restorations is sensitive to ceramic thickness, ceramic optical properties, the tooth region, the tooth/substrate basis colour, and the shade of the bonding agent. This in vitro study evaluates the influence of substrate darkening, resin cement shade and zirconia thickness on the final colour of monolithic Prettau®2 zirconia restorations. An in vitro factorial design was used combining four resin substrates simulating increasing darkening (ND6–ND9), three shades of dual-cure resin cement (universal, transparent, white opaque) and three zirconia thicknesses (0.5, 1.0, 1.5 mm) of Prettau®2 zirconia. Standardized photographs were taken under controlled conditions, and CIELAB coordinates (L*, a*, b*) were obtained in Adobe Photoshop. Colour differences relative to the Prettau®2 A1 shade tab were calculated as ΔL*, Δa*, Δb* and ΔE*. An additive linear model on ΔE* and a main-effect MANOVA on ΔL*, Δa* and Δb* were fitted to assess the impact of each factor. The mean ΔE* was 6.67 ± 2.66, and all but two specimens showed a clinically perceptible colour difference (ΔE* > 2.7) from the A1 shade tab. Substrate shade accounted for 38.4% of the explained variance in ΔE*, cement for 27.6% and zirconia thickness for 6.7%. MANOVA confirmed significant multivariate effects of substrate and cement, but not of zirconia thickness. Translucent monolithic zirconia showed limited ability to reproduce the A1 reference shade over darkened substrates. Substrate shade was the main determinant of colour mismatch, followed by resin cement, whereas zirconia thickness within 0.5–1.5 mm played a minor role. White opaque cement reduced ΔE* and brought the final shade closer to A1, but residual mismatches often remained clinically relevant. These findings highlight the need to control and, when possible, modify the underlying substrate and to select high-opacity cements when shade matching is critical. Full article
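The ΔE* reported above is the overall colour difference computed from the CIELAB coordinates. Assuming the classic CIE76 Euclidean formula (the specimen and reference values below are hypothetical, not measurements from the study):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical (L*, a*, b*) for a cemented specimen vs. an A1 reference tab
specimen = (68.0, 2.5, 18.0)
reference = (72.0, 1.0, 14.0)

de = delta_e_cie76(specimen, reference)
print(round(de, 2))
```

Under the acceptability threshold the study uses, a value above ΔE* = 2.7 counts as a clinically perceptible mismatch, so the hypothetical specimen above would not be an acceptable shade match.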

32 pages, 3089 KB  
Article
Systematic Evaluation of Machine Learning and Deep Learning Models for IoT Malware Detection Across Ransomware, Rootkit, Spyware, Trojan, Botnet, Worm, Virus, and Keylogger
by Mazdak Maghanaki, Soraya Keramati, F. Frank Chen and Mohammad Shahin
Sensors 2026, 26(6), 1750; https://doi.org/10.3390/s26061750 - 10 Mar 2026
Viewed by 439
Abstract
The rapid growth of Internet-of-Things (IoT) deployments has substantially expanded the attack surface of modern cyber–physical systems, making accurate and computationally feasible malware detection essential for enterprise and industrial environments. This study presents a large-scale, systematic comparison of 27 machine learning (ML) and 18 deep learning (DL) models for IoT malware detection across eight major malware categories: Trojan, Botnet, Ransomware, Rootkit, Worm, Spyware, Keylogger, and Virus. A realistic dataset was constructed using 50,000 executable samples collected from the Any.Run platform, including 8000 malware instances (1000 per class) and 42,000 benign samples. Each sample was executed in a sandbox to extract detailed static and behavioral telemetry. A targeted feature-selection pipeline reduced the feature space to 47 diagnostic features spanning static properties, behavioral indicators, process/file/registry activity, debug signals, and network telemetry, yielding a compact representation suitable for malware detection in IoT settings. Experimental results demonstrate that ensemble tree-based ML models consistently dominate performance on the engineered tabular feature set as 7 of the top 10 models are ML, with CatBoost and LightGBM achieving near-ceiling accuracy and low false-positive rates. Per-malware analysis further shows that optimal model choice depends on malware behavior. CatBoost is best for Trojan/Spyware, LightGBM for Botnet, XGBoost for Worm, Extra Trees for Rootkit, and Random Forest for Keylogger, while DL models are competitive only for specific categories, with TabNet performing best for Ransomware and FT-Transformer for Virus. In addition, an end-to-end computational time analysis across all 45 models reveals a clear efficiency advantage for boosted tree ensembles relative to most DL architectures, supporting deployment feasibility on commodity CPU hardware. Overall, the study provides actionable guidance for designing adaptive IoT malware detection frameworks, recommending gradient-boosted ensemble ML models as the primary deployment choice, with selective DL models only when category-specific gains justify additional computational cost. Full article
(This article belongs to the Special Issue Intelligent Sensors for Security and Attack Detection)

16 pages, 1277 KB  
Article
Limitations of MMSE in Cognitive Assessment: Revealing Latent Risk via Structural Brain Atrophy
by Moonhyeok Choi, Jaehyun Jo and Jinhyoung Jeong
Life 2026, 16(3), 451; https://doi.org/10.3390/life16030451 - 10 Mar 2026
Viewed by 252
Abstract
The primary objective of this study was to evaluate the relative contributions of the MMSE and nWBV in three-class cognitive stage classification, with a secondary objective of conducting a subgroup analysis to investigate latent risk within the MMSE-normal population. To achieve this, we proposed an explainable deep-learning-based analytical framework integrating the MMSE with nWBV, a structural brain atrophy indicator, and systematically assessed the relative contributions of each variable in cognitive impairment stage classification and potential risk screening. Although the MMSE is widely used in clinical practice as a cognitive screening tool, it has limited sensitivity to early or subtle cognitive decline and may not adequately reflect structural brain changes due to the ceiling effect. To address this limitation, we compared four tabular deep learning models—MLP, Tab ResNet, Tab Transformer, and FT Transformer—under identical fivefold cross-validation conditions. Age and sex were fixed as covariates, and feature ablation analysis was conducted to examine the independent and combined effects of the MMSE and nWBV. The results showed no statistically significant differences in classification performance among model architectures, indicating that predictive performance was primarily determined by the informational content of the input variables rather than model complexity. In the feature ablation analysis, the MMSE alone demonstrated strong discriminative power, whereas nWBV alone showed relatively limited performance; however, when combined with the MMSE, nWBV consistently improved classification performance. Furthermore, for interpretability analysis, both Integrated Gradients (IG) and SHAP were applied to validate variable contributions from complementary perspectives. Across both methods, the MMSE and nWBV were repeatedly identified as key contributing features, and interpretability stability was maintained throughout cross-validation folds, supporting the robustness and reliability of the explanatory results. Beyond simple model performance comparisons, this study provides evidence supporting the complementary integration of structural brain atrophy information into MMSE-centered traditional cognitive assessment by jointly considering variable contribution and interpretability stability. This approach is expected to contribute to precision risk screening and clinical decision support in the early stages of cognitive decline. Although the MMSE exhibited strong discriminative performance, nWBV provided complementary structural risk signals within the MMSE-normal subgroup, suggesting that integrating cognitive assessment with structural biomarkers may enhance early risk identification. Full article
(This article belongs to the Section Physiology and Pathology)

31 pages, 7962 KB  
Article
Study on a Process Parameter-Driven Deep Learning Prediction Model for Multi-Physical Fields in Flange Shaft Welding
by Chaolong Yang, Zhiqiang Xu, Feiting Shi, Ketong Liu and Peng Cao
Materials 2026, 19(5), 995; https://doi.org/10.3390/ma19050995 - 4 Mar 2026
Viewed by 411
Abstract
Large flange shafts are the core load-bearing and connecting components of high-end equipment, and their welding multi-physical fields directly affect the quality and service safety of the components. Traditional experiments and finite element methods suffer from long cycles and low efficiency, which can hardly meet the demand for rapid prediction. Aiming at the fast and accurate prediction of welding temperature, deformation and residual stress, this study combines thermal–mechanical coupled finite element simulation with machine learning to construct and compare a variety of prediction models. A dataset is built based on simulation data from 100 groups of process parameters. Overfitting is reduced through strategies including early stopping and dropout, and models such as MLP, RF, RBF-SVR, TabNet, XGBoost, and FT-Transformer are established and verified through 10-fold cross-validation. The results show that the MLP model performs best in the prediction of temperature, deformation and residual stress, and is in good agreement with the simulation values. The prediction errors of the peak temperature of the weld and base metal are below 5%, and the errors of deformation and residual stress are controlled within 10%. The average error of peak residual stress is about 6 MPa, and the deviation of most samples is less than 5 MPa. The RF model ranks second in accuracy, with an average error of about 6.5 MPa for peak residual stress, showing a satisfactory interpretability and engineering applicability. RBF-SVR and TabNet can meet basic prediction requirements. Under the small-sample condition in this work, XGBoost and FT-Transformer present relatively large errors and a weak generalization ability, making it difficult to achieve high-precision prediction. The MLP model established in this paper can effectively reproduce the evolution of welding multi-physical fields and supports the rapid prediction and process optimization of large flange shaft welding. The generalization ability and practical performance of the model can be further improved by expanding the dataset and experimental verification in the future. Full article

7 pages, 1172 KB  
Proceeding Paper
Explainable Deep Learning for Stress and Performance Analysis in Professional Tennis Matches
by Hsien-Chung Huang, Wei-Hsin Hung and Meng-Hsiun Tsai
Eng. Proc. 2026, 129(1), 7; https://doi.org/10.3390/engproc2026129007 - 2 Mar 2026
Viewed by 240
Abstract
Tennis match analysis is a critical component of sports science, offering data on player performance, workload management, and competitive stress. We developed a data-driven framework to classify tennis matches as high-stress or low-stress using the Association of Tennis Professionals’ match statistics. High-stress matches are characterized by extended duration or frequent break points, both representing elevated physical and psychological demands. We implement TabNet and compare its performance with recurrent deep learning models, including long short-term memory (LSTM), bidirectional LSTM, attention-enhanced LSTM, and convolutional LSTM. Experimental results show that TabNet achieves the best accuracy (98%), while the recurrent models maintain accuracies above 93%, demonstrating consistent predictive capability. To enhance interpretability, SHAP analysis identifies break points faced, break points saved, and match duration as the most influential determinants of match stress, with serving and returning features providing secondary contributions. These findings confirm the effectiveness of interpretable deep learning in sports analytics and highlight its potential for guiding training and match preparation. Full article

34 pages, 5939 KB  
Article
Explainable Machine Learning for Volatile Fatty Acid Soft-Sensing in Anaerobic Digestion: A Pilot Feasibility Study
by Bibars Amangeldy, Assiya Boltaboyeva, Nurdaulet Tasmurzayev, Zhanel Baigarayeva, Baglan Imanbek, Aliya Jemal Getahun, Dinara Turmakhanbet, Moldir Kuatova and Waldemar Wojcik
Algorithms 2026, 19(3), 183; https://doi.org/10.3390/a19030183 - 1 Mar 2026
Viewed by 385
Abstract
Sustainable energy systems such as anaerobic digestion (AD) bioreactors exhibit complex nonlinear dynamics that complicate the monitoring of key stability indicators using conventional laboratory-based methods. As a preliminary investigation, this pilot study explores the feasibility of using machine learning-based soft sensing to estimate Total Volatile Fatty Acids (TVFA(M)) from routinely measured physicochemical parameters. Using a short-term laboratory dataset obtained from controlled CO2 biomethanisation experiments, several regression models were benchmarked, including an attention-based deep learning architecture (TabNet), multi-architecture artificial neural networks (ANNs), gradient-boosting ensembles (CatBoost, XGBoost, LightGBM), and classical kernel-based approaches. Model performance was evaluated under a cross-validated framework to assess predictive capability and consistency across folds within the limited experimental scope. Among the tested models, TabNet achieved highly competitive performance, yielding an R2 of 0.8551, an RMSE of 0.0090, and an MAE of 0.0067. To support model transparency and interpretability, Explainable Artificial Intelligence (XAI) techniques based on SHapley Additive exPlanations (SHAP) were applied, identifying pCO2 as the dominant contributor to TVFA(M) predictions within the studied operational range. The results demonstrate the potential of explainable machine learning models as soft sensors for TVFA(M) estimation under controlled laboratory conditions. Although restricted to controlled laboratory conditions and a short observation period, this pilot study demonstrates the potential of explainable machine learning models for TVFA(M) estimation and provides a methodological benchmark for future validation using larger and more diverse datasets. Full article
21 pages, 59248 KB  
Article
Applying an Interpretable Deep Learning Model to Identify Wildfire-Prone Areas in Southwest China
by Chenyu Ma, Siquan Yang, Jing Cui, Qiang Li, Qichao Yao, De Zhang, Jiachang Guo, Xinqian Wang and Chong Qu
Fire 2026, 9(3), 107; https://doi.org/10.3390/fire9030107 - 1 Mar 2026
Viewed by 390
Abstract
Assessing wildfire susceptibility requires integrating environmental and anthropogenic factors to quantify the probability and vulnerability of fires in a given area. Many existing machine-learning models offer high predictive power but limited interpretability, restricting their utility for operational decision-making. This study is the first to apply the intrinsically interpretable deep network TabNet to wildfire susceptibility modeling. By fusing multi-source data and leveraging TabNet’s feature-mask matrix, we achieve accurate prediction and built-in explanation without relying on auxiliary tools. On a dataset of 133,811 samples, the proposed model achieves an Area Under the Curve (AUC) of 0.760, recall of 0.883, precision of 0.395, and an F1.5 score of 0.640, outperforming XGBoost (version 1.5.0) and other baseline models. The importance rankings derived from the feature-mask matrix align with the Shapley Additive Explanations (SHAP) results, confirming the reliability of the explanations. This approach combines predictive accuracy with transparency, providing a deployable framework for wildfire early warning, risk management, and ecosystem conservation. Full article
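The reported F1.5 score is the F-beta score with beta = 1.5, which weights recall more heavily than precision (appropriate for early warning, where missed fires are costlier than false alarms). Recomputing it from the reported precision and recall reproduces the stated value:

```python
# Sketch: F-beta score, F_beta = (1 + b^2) * P * R / (b^2 * P + R).
def f_beta(precision: float, recall: float, beta: float) -> float:
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reported values from the abstract: precision 0.395, recall 0.883.
score = f_beta(precision=0.395, recall=0.883, beta=1.5)
print(f"F1.5 = {score:.3f}")  # F1.5 = 0.640
```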

11 pages, 1093 KB  
Article
Repeatability of Artificial Intelligence Chatbots in Composite Shade Selection: Agreement with a Dental Specialist
by Seyit Bilal Ozdemir, Busra Ozdemir and Cagri Ural
Appl. Sci. 2026, 16(5), 2306; https://doi.org/10.3390/app16052306 - 27 Feb 2026
Viewed by 220
Abstract
This study aimed to evaluate the intra-model repeatability of three artificial intelligence-based chatbots (ChatGPT-4.0, Microsoft Copilot, and Claude 3.5) in composite shade selection and their agreement with a dental specialist. Ten acrylic resin maxillary central incisor teeth representing different VITA Classical shades (n = 10) were photographed together with A1, A2, and A3 composite shade tabs under standardized illumination. Shade selections were performed by each artificial intelligence model based on the photographs and repeated on five different days using identical images and prompts. Visual shade selection by the dental specialist was determined by consensus between two calibrated evaluators. CIE L*, a*, and b* values of the acrylic teeth and composite shade tabs were obtained by photometric analysis, and color differences were calculated using the CIEDE2000 formula. Intra-model repeatability was assessed using Fleiss’ kappa coefficient, and agreement with the dental specialist was evaluated using Cohen’s kappa statistic. Intra-model repeatability differed among the models, with ChatGPT-4.0 demonstrating fair repeatability (κ = 0.33), Claude 3.5 showing moderate repeatability (κ = 0.45), and Microsoft Copilot exhibiting poor repeatability (κ = −0.12). Trial-level agreement with the dental specialist varied across repeated assessments, with ChatGPT-4.0 generally demonstrating higher agreement than the other models, whereas Microsoft Copilot showed consistently low agreement. Artificial intelligence chatbots showed variable repeatability and limited agreement with expert evaluation in composite shade selection under standardized conditions. Full article
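Cohen's kappa, used above for chatbot-vs-specialist agreement, corrects observed agreement on categorical labels for agreement expected by chance. A minimal sketch with illustrative shade labels (not the study's data):

```python
# Sketch: Cohen's kappa = (p_observed - p_expected) / (1 - p_expected),
# where p_expected comes from each rater's marginal label frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Illustrative VITA Classical shade selections for eight teeth.
chatbot    = ["A1", "A2", "A2", "A3", "A1", "A3", "A2", "A1"]
specialist = ["A1", "A2", "A3", "A3", "A1", "A2", "A2", "A1"]
print(round(cohens_kappa(chatbot, specialist), 3))  # 0.619
```

Fleiss' kappa, used for intra-model repeatability across the five repeated trials, generalizes the same chance-correction idea to more than two ratings per item.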
