Search Results (728)

Search Parameters:
Keywords = SHAP interpretation

17 pages, 2644 KiB  
Article
Intelligent Decoupling of Hydrological Effects in Han River Cascade Dam System: Spatial Heterogeneity Mechanisms via an LSTM-Attention-SHAP Interpretable Framework
by Shuo Ouyang, Changjiang Xu, Weifeng Xu, Mingyuan Zhou, Junhong Zhang, Guiying Zhang and Zixuan Pan
Hydrology 2025, 12(8), 217; https://doi.org/10.3390/hydrology12080217 - 16 Aug 2025
Abstract
The construction of cascade dam systems profoundly reshapes river hydrological processes, yet analysis of their spatially heterogeneous effects has long been constrained by the deficiencies and limited interpretability of traditional mechanistic models. Focusing on the middle-lower Han River (a 652 km reach regulated by seven dams) as a representative case, this study develops an LSTM-Attention-SHAP interpretable framework to achieve, for the first time, intelligent decoupling of dam-induced hydrological effects and mechanistic analysis of their spatial differentiation. Key findings include the following: (1) The LSTM model demonstrates exceptional predictive performance for water level and flow rate in intensively regulated reaches (average Nash–Sutcliffe Efficiency, NSE = 0.935 at Xiangyang, Huangzhuang, and Xiantao stations; R² = 0.988 for discharge at Xiantao Station), while the attention mechanism effectively captures sensitive factors such as the abrupt threshold (>560 m³/s) in the Tangbai River tributary; (2) Shapley Additive exPlanations (SHAP) values reveal spatially heterogeneous dam contributions: the Cuijiaying Dam increases discharge at Xiangyang station (mean SHAP +0.22) but suppresses water level at Xiantao station (mean SHAP −0.15), whereas the Wangfuzhou Dam shows a stable negative correlation with Xiangyang water levels (mean SHAP −0.18); (3) dam operations induce cascade effects through altered channel storage capacity. These findings provide spatially adaptive strategies for flood risk zoning and ecological operations in intensively regulated rivers worldwide, such as the Yangtze and Mekong.
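A minimal PyTorch sketch of the kind of LSTM-with-attention regressor the framework describes, on synthetic inputs; layer sizes and the 30-day window are illustrative assumptions, not the authors' configuration, and SHAP values for such a model could then be computed with, e.g., shap.GradientExplainer.

```python
# Sketch: LSTM regressor with additive attention over time steps (assumed sizes).
import torch
import torch.nn as nn

class LSTMAttention(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score per time step
        self.head = nn.Linear(hidden, 1)    # water level / discharge output

    def forward(self, x):                   # x: (batch, time, features)
        out, _ = self.lstm(x)               # (batch, time, hidden)
        weights = torch.softmax(self.score(out), dim=1)   # (batch, time, 1)
        context = (weights * out).sum(dim=1)              # weighted sum over time
        return self.head(context).squeeze(-1), weights.squeeze(-1)

model = LSTMAttention(n_features=8)
x = torch.randn(32, 30, 8)                  # 32 samples, 30-day windows, 8 inputs
y_hat, attn = model(x)                      # attn shows which days the model attends to
```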
28 pages, 5112 KiB  
Article
Remote Sensing and Machine Learning Uncover Dominant Drivers of Carbon Sink Dynamics in Subtropical Mountain Ecosystems
by Leyan Xia, Hongjian Tan, Jialong Zhang, Kun Yang, Chengkai Teng, Kai Huang, Jingwen Yang and Tao Cheng
Remote Sens. 2025, 17(16), 2843; https://doi.org/10.3390/rs17162843 - 15 Aug 2025
Abstract
Net ecosystem productivity (NEP) serves as a key indicator for assessing regional carbon sink potential, with its dynamics regulated by nonlinear interactions among multiple factors. However, its driving factors and their coupling processes remain insufficiently characterized. This study investigated terrestrial ecosystems in Yunnan Province, China, to elucidate the drivers of NEP using 14 environmental factors (including topography, meteorology, soil texture, and human activities) and 21 remote sensing features. We developed a research framework based on "Feature Selection–Machine Learning–Mechanism Interpretation." The results demonstrated that the Variable Selection Using Random Forests (VSURF) feature selection method effectively reduced model complexity. The selected features achieved high estimation accuracy across three machine learning models, with the eXtreme Gradient Boosting Regression (XGBR) model performing best (R² = 0.94, RMSE = 76.82 gC/(m²·a), MAE = 55.11 gC/(m²·a)). Interpretation analysis using the SHapley Additive exPlanations (SHAP) method revealed the following: (1) The Enhanced Vegetation Index (EVI), soil pH, solar radiation, air temperature, clay content, precipitation, sand content, and vegetation type were the primary drivers of NEP in Yunnan; notably, EVI's importance exceeded that of the other factors by roughly 3 to 10 times. (2) Significant interactions existed between soil texture and temperature: under low-temperature conditions (−5 °C to 12.15 °C), moderate clay content (13–25%) combined with high sand content (40–55%) suppressed NEP, whereas within the medium-to-high temperature range (5 °C to 23.79 °C), high clay content (25–40%) coupled with low sand content (25–43%) enhanced NEP. These findings elucidate the complex driving mechanisms of NEP in subtropical ecosystems, confirming the dominant role of EVI in carbon sequestration and revealing nonlinear regulatory patterns in soil–temperature interactions. This study provides both a robust "Feature Selection–Machine Learning–Mechanism Interpretation" modeling framework for assessing carbon budgets in mountainous regions and a scientific basis for formulating regional carbon management policies.
(This article belongs to the Section Ecological Remote Sensing)
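A hedged sketch of the Feature Selection–Machine Learning–Mechanism Interpretation pipeline on synthetic data: VSURF is an R package, so scikit-learn's recursive feature elimination stands in for it here, and all feature counts and names are illustrative.

```python
# Sketch: feature selection -> XGBoost -> SHAP interpretation (synthetic stand-in data).
import numpy as np
import shap
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 35))              # 14 environmental + 21 RS features
y = 3 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=500)  # toy "NEP"

selector = RFE(XGBRegressor(n_estimators=200), n_features_to_select=8).fit(X, y)
X_sel = X[:, selector.support_]             # reduced feature set

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
model = XGBRegressor(n_estimators=400, learning_rate=0.05).fit(X_tr, y_tr)
print("R2:", model.score(X_te, y_te))

explainer = shap.TreeExplainer(model)       # mechanism-interpretation step
shap_values = explainer.shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))     # mean |SHAP| = driver importance
```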
26 pages, 5281 KiB  
Article
Spatial Drivers of Urban Industrial Agglomeration Using Street View Imagery and Remote Sensing: A Case Study of Shanghai
by Jiaqi Zhang, Zhen He, Weijing Wang and Ziwen Sun
Land 2025, 14(8), 1650; https://doi.org/10.3390/land14081650 - 15 Aug 2025
Abstract
The spatial distribution mechanism of industrial agglomeration has long been a central topic in urban economic geography. With the increasing availability of street view imagery and built environment data, effectively integrating multi-source spatial information to identify the key drivers of firm clustering has become a pressing research challenge. Taking Shanghai as a case study, this paper constructs a street-level built environment (BE) database and proposes an interpretable spatial analysis framework that integrates SHapley Additive exPlanations with Multi-Scale Geographically Weighted Regression. The findings reveal the following: (1) building morphology, streetscape characteristics, and perceived greenness significantly influence firm agglomeration, exhibiting nonlinear threshold effects; (2) spatial heterogeneity is evident in the underlying mechanisms, with localized trade-offs between morphological and perceptual factors; and (3) BE features are as important as macroeconomic factors in shaping agglomeration patterns, with notable interaction effects across space, while streetscape perception variables play a relatively secondary role. This study advances the understanding of how micro-scale built environments shape industrial spatial structures and offers both theoretical and empirical support for optimizing urban industrial layouts and promoting high-quality regional economic development.
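A sketch of the spatially varying regression side of such a framework: the paper uses Multi-Scale GWR, but for brevity this shows single-bandwidth GWR with pysal's mgwr package on synthetic firm-density data; coordinates, features, and effect sizes are invented for illustration.

```python
# Sketch: geographically weighted regression on synthetic street-unit data.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))           # street-unit centroids
X = rng.normal(size=(200, 3))                         # e.g. height, enclosure, greenness
beta = 0.02 * coords[:, :1]                           # spatially varying effect
y = beta * X[:, :1] + rng.normal(size=(200, 1))       # toy firm-density response

bw = Sel_BW(coords, y, X).search()                    # bandwidth chosen by AICc
results = GWR(coords, y, X, bw).fit()
print(results.params.shape)                           # local coefficients per location
```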

18 pages, 4186 KiB  
Article
Ensemble Learning and SHAP Interpretation for Predicting Tensile Strength and Elastic Modulus of Basalt Fibers Based on Chemical Composition
by Guolei Liu, Lunlian Zheng, Peng Long, Lu Yang and Ling Zhang
Sustainability 2025, 17(16), 7387; https://doi.org/10.3390/su17167387 - 15 Aug 2025
Abstract
Tensile strength and elastic modulus are key mechanical properties of continuous basalt fibers, inherently sustainable materials derived from naturally occurring volcanic rock. This study employs five ensemble learning models, namely Extra Trees Regression, Random Forest, Extreme Gradient Boosting, Categorical Gradient Boosting (CatBoost), and Light Gradient Boosting Machine, to predict the tensile strength and elastic modulus of basalt fibers from chemical composition. Model performance was evaluated using the coefficient of determination (R²), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). Following hyperparameter optimization, the Extreme Gradient Boosting model demonstrated superior performance for tensile strength prediction (R² = 0.9152, MSE = 0.2867, RMSE = 0.5354, and MAE = 0.6091), while CatBoost excelled in elastic modulus prediction (R² = 0.9803, MSE = 0.1209, RMSE = 0.3478, and MAE = 0.2692). SHapley Additive exPlanations (SHAP) analysis identified CaO and SiO₂ as the most significant features, with dependency analysis further revealing optimal ranges of critical variables that enhance mechanical performance. This approach enables rapid, data-driven basalt selection, reduces energy-intensive trials, lowers costs, and supports sustainability by minimizing resource use and emissions. Integrating machine learning with materials science advances eco-friendly fiber production, supporting the circular economy in construction and composites.
(This article belongs to the Section Resources and Sustainable Utilization)
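A hedged sketch of the model-comparison and SHAP-dependence steps on synthetic oxide-composition data; the column names and the toy relationship between CaO and strength are stand-ins, not the paper's dataset.

```python
# Sketch: compare ensemble regressors, then inspect a SHAP dependence plot.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.uniform(0, 50, size=(300, 5)),
                 columns=["SiO2", "Al2O3", "CaO", "MgO", "Fe2O3"])
y = 2.0 * X["CaO"] - 0.03 * X["CaO"] ** 2 + 0.5 * X["SiO2"] + rng.normal(size=300)

models = {"ETR": ExtraTreesRegressor(), "RF": RandomForestRegressor(),
          "XGB": XGBRegressor(n_estimators=300)}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5, scoring="r2").mean())

best = models["XGB"].fit(X, y)
sv = shap.TreeExplainer(best).shap_values(X)
shap.dependence_plot("CaO", sv, X)          # reveals an optimal CaO range
```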

18 pages, 1752 KiB  
Systematic Review
Beyond Post hoc Explanations: A Comprehensive Framework for Accountable AI in Medical Imaging Through Transparency, Interpretability, and Explainability
by Yashbir Singh, Quincy A. Hathaway, Varekan Keishing, Sara Salehi, Yujia Wei, Natally Horvat, Diana V. Vera-Garcia, Ashok Choudhary, Almurtadha Mula Kh, Emilio Quaia and Jesper B. Andersen
Bioengineering 2025, 12(8), 879; https://doi.org/10.3390/bioengineering12080879 - 15 Aug 2025
Abstract
The integration of artificial intelligence (AI) in medical imaging has revolutionized diagnostic capabilities, yet the black-box nature of deep learning models poses significant challenges for clinical adoption. Current explainable AI (XAI) approaches, including SHAP, LIME, and Grad-CAM, predominantly focus on post hoc explanations that may inadvertently undermine clinical decision-making by providing misleading confidence in AI outputs. This paper presents a systematic review and meta-analysis of 67 studies (847 initially identified; 23 radiology, 19 pathology, and 25 ophthalmology applications) evaluating XAI fidelity, stability, and performance trade-offs across medical imaging modalities. The meta-analysis reveals that LIME achieves superior fidelity (0.81, 95% CI: 0.78–0.84) compared to SHAP (0.38, 95% CI: 0.35–0.41) and Grad-CAM (0.54, 95% CI: 0.51–0.57) across all modalities. Post hoc explanations demonstrated poor stability under noise perturbation, with SHAP showing 53% degradation in ophthalmology applications (ρ = 0.42 at 10% noise) compared to 11% in radiology (ρ = 0.89). We demonstrate a consistent 5–7% AUC performance penalty for interpretable models but identify modality-specific stability patterns suggesting that tailored XAI approaches are necessary. Based on these empirical findings, we propose a comprehensive three-pillar accountability framework that prioritizes transparency in model development, interpretability in architecture design, and cautious deployment of post hoc explanations with explicit uncertainty quantification. This approach offers a pathway toward genuinely accountable AI systems that enhance rather than compromise clinical decision-making quality and patient safety.
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
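A minimal sketch of one metric the review reports: explanation stability under input noise, scored here as the Spearman correlation between SHAP importance vectors on clean and perturbed inputs; the model, data, and 10% noise convention are stand-ins chosen for illustration.

```python
# Sketch: SHAP stability under 10% feature noise (synthetic classifier).
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 12))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

sv_clean = explainer.shap_values(X)
X_noisy = X + rng.normal(scale=0.10 * X.std(axis=0), size=X.shape)  # 10% noise
sv_noisy = explainer.shap_values(X_noisy)

rho, _ = spearmanr(np.abs(sv_clean).mean(0), np.abs(sv_noisy).mean(0))
print(f"stability rho at 10% noise: {rho:.2f}")
```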

19 pages, 7937 KiB  
Article
Feature-Level Insights into the Progesterone–Estradiol Ratio in Postmenopausal Women Using Explainable Machine Learning
by Ajna Hamidovic, John Davis and Mark R. Burge
AI 2025, 6(8), 187; https://doi.org/10.3390/ai6080187 - 15 Aug 2025
Abstract
The protective role of progesterone against estradiol-driven proliferation is essential for preserving endometrial homeostasis. However, the factors that influence the progesterone–estradiol (P4:E2) ratio remain poorly characterized. This study aimed to model this ratio using a machine learning approach to identify key hormonal, anthropometric, demographic, dietary, metabolic, and inflammatory predictors. In addition, it assessed estradiol and progesterone as individual outcomes to clarify whether shared or divergent mechanisms underlie variation in each hormone. NHANES data were used to identify postmenopausal women (n = 1902). An XGBoost model was developed to predict the log-transformed P4:E2 ratio using a 70/30 stratified train–test split, and SHAP (SHapley Additive exPlanations) values were computed to interpret feature contributions. The final XGBoost model achieved an RMSE of 0.746, an MAE of 0.574, and an R² of 0.298 on the test set. SHAP analysis identified FSH (0.213), waist circumference (0.181), and CRP (0.133) as the most influential contributors, followed by total cholesterol (0.085) and LH (0.066). FSH and waist circumference emerged as key predictors of estradiol, while total cholesterol and LH were the most influential for progesterone. By leveraging SHAP-based feature importance to rank predictors of the P4:E2 ratio, this study provides interpretable, data-driven insights into the reproductive hormonal dynamics of postmenopausal women.
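A sketch of two details implied by this abstract: stratifying a 70/30 split on a continuous (log-transformed) target via decile binning, and ranking predictors by mean |SHAP|; the feature names are illustrative placeholders, not NHANES field names.

```python
# Sketch: decile-stratified split on a continuous target + SHAP ranking.
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
X = pd.DataFrame(rng.normal(size=(1902, 6)),
                 columns=["FSH", "waist", "CRP", "chol", "LH", "age"])
log_ratio = 0.4 * X["FSH"] - 0.3 * X["waist"] + rng.normal(size=1902)

bins = pd.qcut(log_ratio, q=10, labels=False)        # deciles enable stratification
X_tr, X_te, y_tr, y_te = train_test_split(
    X, log_ratio, test_size=0.30, stratify=bins, random_state=0)

model = XGBRegressor(n_estimators=300).fit(X_tr, y_tr)
sv = shap.TreeExplainer(model).shap_values(X_te)
ranking = pd.Series(np.abs(sv).mean(0), index=X.columns).sort_values(ascending=False)
print(ranking)                                       # cf. the FSH, waist, CRP ordering
```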

32 pages, 6394 KiB  
Article
Neuro-Bridge-X: A Neuro-Symbolic Vision Transformer with Meta-XAI for Interpretable Leukemia Diagnosis from Peripheral Blood Smears
by Fares Jammal, Mohamed Dahab and Areej Y. Bayahya
Diagnostics 2025, 15(16), 2040; https://doi.org/10.3390/diagnostics15162040 - 14 Aug 2025
Abstract
Background/Objectives: Acute Lymphoblastic Leukemia (ALL) poses significant diagnostic challenges due to its ambiguous symptoms and the limitations of conventional methods like bone marrow biopsies and flow cytometry, which are invasive, costly, and time-intensive. Methods: This study introduces Neuro-Bridge-X, a novel neuro-symbolic hybrid model designed for automated, explainable ALL diagnosis using peripheral blood smear (PBS) images. Leveraging two comprehensive datasets, ALL Image (3256 images from 89 patients) and C-NMC (15,135 images from 118 patients), the model integrates deep morphological feature extraction, vision transformer-based contextual encoding, fuzzy logic-inspired reasoning, and adaptive explainability. To address class imbalance, advanced data augmentation techniques were applied, ensuring equitable representation across benign and leukemic classes. The proposed framework was evaluated through 5-fold cross-validation and fixed train-test splits, employing Nadam, SGD, and Fractional RAdam optimizers. Results: The models demonstrate exceptional performance, with SGD achieving near-perfect accuracy (1.0000 on ALL, 0.9715 on C-NMC) and robust generalization, while Fractional RAdam closely followed (0.9975 on ALL, 0.9656 on C-NMC). Nadam, however, exhibited inconsistent convergence, particularly on C-NMC (0.5002 accuracy). A Meta-XAI controller enhances interpretability by dynamically selecting optimal explanation strategies (Grad-CAM, SHAP, Integrated Gradients, LIME), ensuring clinically relevant insights into model decisions. Conclusions: Visualizations confirm that the SGD and RAdam models focus on morphologically critical features, such as leukocyte nuclei, while Nadam struggles with spurious attributions. Neuro-Bridge-X offers a scalable, interpretable solution for ALL diagnosis, with potential to enhance clinical workflows and diagnostic precision in oncology.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
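A self-contained Grad-CAM sketch, one of the explanation strategies the Meta-XAI controller can select; the two-layer CNN here is a toy stand-in for the paper's vision backbone, and the input is random noise rather than a real blood-smear image.

```python
# Sketch: manual Grad-CAM via forward/backward hooks on a toy CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
target_layer = model[0]
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 64, 64, requires_grad=True)    # stand-in for a PBS image
model(x)[0, 1].backward()                            # gradient of the "ALL" logit

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # channel importance
cam = F.relu((weights * acts["v"]).sum(dim=1))       # (1, 64, 64) heatmap
cam = cam / (cam.max() + 1e-8)                       # highlights decisive regions
```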

18 pages, 2364 KiB  
Article
Deterioration Modeling of Pavement Performance in Cold Regions Using Probabilistic Machine Learning Method
by Zhen Liu, Xingyu Gu and Wenxiu Wu
Infrastructures 2025, 10(8), 212; https://doi.org/10.3390/infrastructures10080212 - 14 Aug 2025
Abstract
Accurate and reliable modeling of pavement deterioration is critical for effective infrastructure management. This study proposes a probabilistic machine learning framework using Bayesian-optimized Natural Gradient Boosting (BO-NGBoost) to predict the International Roughness Index (IRI) of asphalt pavements in cold climates. A dataset restricted to cold regions was constructed from the Long-Term Pavement Performance (LTPP) database, integrating multiple variables related to climate, structure, materials, traffic, and construction. The BO-NGBoost model was evaluated against conventional deterministic models, including artificial neural networks, random forest, and XGBoost. Results show that BO-NGBoost achieved the highest predictive accuracy (R² = 0.897, RMSE = 0.184, MAE = 0.107) while also providing uncertainty quantification for risk-based maintenance planning. BO-NGBoost effectively captures long-term deterioration trends and reflects increasing uncertainty with pavement age. SHAP analysis reveals that initial IRI, pavement age, layer thicknesses, and precipitation are key factors, with freeze–thaw cycles and moisture infiltration driving faster degradation in cold climates. This research contributes a scalable and interpretable framework that advances pavement deterioration modeling from deterministic to probabilistic paradigms and provides practical value for more uncertainty-aware infrastructure decision-making.
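A sketch of the probabilistic core: NGBoost returns a full predictive distribution, so each IRI forecast carries its own uncertainty. The Bayesian hyperparameter optimization is omitted here, with default settings standing in for the BO-tuned values, and the data are synthetic.

```python
# Sketch: NGBoost regression with per-sample predictive uncertainty.
import numpy as np
from ngboost import NGBRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 10))                 # climate/structure/traffic features
y = 1.0 + 0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(scale=0.2, size=600)

ngb = NGBRegressor(n_estimators=300).fit(X[:500], y[:500])
dist = ngb.pred_dist(X[500:])                  # Normal distribution per sample
mean, std = dist.params["loc"], dist.params["scale"]
print(mean[:3], std[:3])                       # point forecast + uncertainty band
```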

13 pages, 1049 KiB  
Article
Injury Prediction in Korean Adult Field Hockey Players Using Machine Learning and SHAP-Based Feature Importance Analysis
by Minkyung Choi, Kumju Lee and Kihyuk Lee
Appl. Sci. 2025, 15(16), 8946; https://doi.org/10.3390/app15168946 - 13 Aug 2025
Abstract
Field hockey involves repetitive high-intensity movements and physical contact, posing a high risk of injury. However, studies developing injury prediction models without relying on expensive tools such as GPS remain limited. This study aimed to develop an explainable AI model that predicts injury occurrence using only simple questionnaire-based data and visually identifies key predictors. Survey data were collected from 239 adult players registered with the Korea Field Hockey Association in 2024, including university and professional team athletes. Ten variables were used: sex, team affiliation, playing experience, player level, warm-up duration, weekly training hours and days, and physical indicators (age, height, weight). Injury was defined as an event within the past year that resulted in being unable to train for more than 24 h. Logistic Regression, Random Forest, and XGBoost models were compared. The final model, Logistic Regression, underwent SHAP-based visualization for interpretability and showed the best performance in recall (0.6810 ± 0.0983), F1-score (0.6260 ± 0.0499), and AUC (0.6515 ± 0.0393). SHAP analysis identified Group, Training Time, Weight, and Player Level as key predictors and visualized their contributions to individual predictions. This study demonstrates that a lightweight, interpretable injury prediction model using only simple survey data can achieve practical performance. This approach offers valuable insights for real-world applications and the development of injury prevention strategies.
(This article belongs to the Special Issue Sports Injuries: Prevention and Rehabilitation)
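A sketch of the evaluation protocol implied by the reported mean ± SD metrics: cross-validated recall, F1, and AUC for a logistic model, followed by SHAP for a linear model; the ten synthetic features only mimic the questionnaire variables.

```python
# Sketch: cross-validated recall/F1/AUC + SHAP for a logistic model.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(6)
X = rng.normal(size=(239, 10))                       # ten survey variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=239) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_validate(model, X, y, cv=5, scoring=["recall", "f1", "roc_auc"])
for k in ("test_recall", "test_f1", "test_roc_auc"):
    print(k, f"{scores[k].mean():.4f} ± {scores[k].std():.4f}")

model.fit(X, y)
sv = shap.LinearExplainer(model, X).shap_values(X)   # per-player contributions
```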

22 pages, 4719 KiB  
Article
An Explainable AI Approach for Interpretable Cross-Layer Intrusion Detection in Internet of Medical Things
by Michael Georgiades and Faisal Hussain
Electronics 2025, 14(16), 3218; https://doi.org/10.3390/electronics14163218 - 13 Aug 2025
Abstract
This paper presents a cross-layer intrusion detection framework leveraging explainable artificial intelligence (XAI) and interpretability methods to enhance transparency and robustness in attack detection within the Internet of Medical Things (IoMT) domain. Addressing the dual challenge of compromised data integrity across both biosensor and network-layer data, this study combines advanced techniques to enhance interpretability, accuracy, and trust. Unlike conventional flow-based intrusion detection systems that primarily rely on transport-layer statistics, the proposed framework operates directly on raw packet-level features and application-layer semantics, including MQTT message types, payload entropy, and topic structures. The key contributions of this research include the application of K-Means clustering combined with the principal component analysis (PCA) algorithm for initial categorization of attack types, the use of SHapley Additive exPlanations (SHAP) for feature prioritization to identify the most influential factors in model predictions, and the use of Partial Dependence Plots (PDP) and Accumulated Local Effects (ALE) to elucidate feature interactions across layers. These methods enhance the system's interpretability, making data-driven decisions more accessible to nontechnical stakeholders. Evaluation on a realistic healthcare IoMT testbed demonstrates significant improvements in detection accuracy and decision-making transparency. Furthermore, the proposed approach highlights the effectiveness of explainable, cross-layer intrusion detection for secure and trustworthy medical IoT environments, tailored for cybersecurity analysts and healthcare stakeholders.
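A sketch of two steps from this pipeline on synthetic traffic features: PCA followed by K-Means for coarse attack categorization, then a partial dependence plot from scikit-learn. ALE would require an additional package (e.g. PyALE), so only PDP is shown, and the feature meanings are assumed.

```python
# Sketch: PCA + K-Means categorization, then partial dependence for one feature.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 15))              # packet-level + MQTT-layer features
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # benign vs. attack stand-in

Z = PCA(n_components=5).fit_transform(X)     # compress before clustering
labels = KMeans(n_clusters=4, n_init=10).fit_predict(Z)  # coarse attack groups

clf = RandomForestClassifier().fit(X, y)
PartialDependenceDisplay.from_estimator(clf, X, features=[0, 1])  # e.g. payload entropy
```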

26 pages, 4766 KiB  
Article
RetinoDeep: Leveraging Deep Learning Models for Advanced Retinopathy Diagnostics
by Sachin Kansal, Bajrangi Kumar Mishra, Saniya Sethi, Kanika Vinayak, Priya Kansal and Jyotindra Narayan
Sensors 2025, 25(16), 5019; https://doi.org/10.3390/s25165019 - 13 Aug 2025
Abstract
Diabetic retinopathy (DR), a leading cause of vision loss worldwide, poses a critical challenge to healthcare systems due to its silent progression and the reliance on labor-intensive, subjective manual screening by ophthalmologists, especially amid a global shortage of eye care specialists. Addressing the pressing need for scalable, objective, and interpretable diagnostic tools, this work introduces RetinoDeep, a family of deep learning frameworks integrating hybrid architectures and explainable AI to enhance the automated detection and classification of DR across seven severity levels. Specifically, we propose four novel models: an EfficientNetB0 combined with an SPCL transformer for robust global feature extraction; a ResNet50 ensembled with a Bi-LSTM to synergize spatial and sequential learning; a Bi-LSTM optimized through genetic algorithms for hyperparameter tuning; and a Bi-LSTM with SHAP explainability to enhance model transparency and clinical trustworthiness. The models were trained and evaluated on a curated dataset of 757 retinal fundus images, augmented to improve generalization, and benchmarked against state-of-the-art baselines (including EfficientNetB0, hybrid Bi-LSTM with EfficientNetB0, hybrid Bi-GRU with EfficientNetB0, ResNet with filter enhancements, Bi-LSTM optimized using the Random Search Algorithm (RSA), Particle Swarm Optimization (PSO), or Ant Colony Optimization (ACO), and a standard Convolutional Neural Network (CNN)), using metrics such as accuracy, F1-score, and precision. Notably, the PSO-optimized Bi-LSTM outperformed other configurations, achieving superior stability and generalization, while SHAP visualizations confirmed alignment between learned features and key retinal biomarkers, reinforcing the system's interpretability. By combining cutting-edge neural architectures, advanced optimization, and explainable AI, this work sets a new standard for DR screening systems, promising not only improved diagnostic performance but also potential integration into real-world clinical workflows.
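A sketch of the explainability piece: shap.GradientExplainer applied to a tiny bidirectional LSTM, loosely standing in for the paper's Bi-LSTM-with-SHAP variant; the sizes, the sequence input, and the seven-class head are assumptions, and the inputs are random rather than derived from fundus images.

```python
# Sketch: SHAP gradient attributions for a toy Bi-LSTM classifier.
import shap
import torch
import torch.nn as nn

class BiLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, 7)               # seven DR severity levels
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # last time step

model = BiLSTM().eval()
background = torch.randn(50, 10, 16)               # reference sequences
explainer = shap.GradientExplainer(model, background)
sv = explainer.shap_values(torch.randn(3, 10, 16)) # attributions per severity class
```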

24 pages, 2794 KiB  
Article
Algorithmic Modeling of Generation Z’s Therapeutic Toys Consumption Behavior in an Emotional Economy Context
by Xinyi Ma, Xu Qin and Li Lv
Algorithms 2025, 18(8), 506; https://doi.org/10.3390/a18080506 - 13 Aug 2025
Abstract
The quantification of emotional value and the accurate prediction of purchase intention have emerged as a critical interdisciplinary challenge in the evolving emotional economy. Focusing on Generation Z (born 1995–2009), this study proposes a hybrid algorithmic framework integrating text-based sentiment computation, feature selection, and random forest modeling to forecast purchase intention for therapeutic toys and interpret its underlying drivers. First, 856 customer reviews were scraped from Jellycat's official website and subjected to polarity classification using a fine-tuned RoBERTa-wwm-ext model (F1 = 0.92), with the generated sentiment scores and high-frequency keywords mapped as interpretable features. Next, Boruta–SHAP feature selection was applied to 35 structured variables from 336 survey records, retaining 17 significant predictors. The core module employed a random forest (RF) model to estimate continuous "purchase intention" scores, achieving R² = 0.83 and MSE = 0.14 under 10-fold cross-validation. To enhance interpretability, the RF model was also used to evaluate feature importance, quantifying each feature's contribution to the model outputs and revealing Social Ostracism (β = 0.307) and Task Overload (β = 0.207) as dominant predictors. Finally, k-means clustering with gap statistics segmented consumers based on emotional relevance, value rationality, and interest level, with model performance compared across clusters. Experimental results demonstrate that the integrated predictive model balances forecasting accuracy and decision interpretability in emotional value computation, offering actionable insights for targeted product development and precision marketing in the therapeutic goods sector.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
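A sketch of the prediction module on synthetic survey data: a random forest regressor for the continuous purchase-intention score under 10-fold cross-validation, with permutation importance approximating the Boruta–SHAP selection step (a named substitution, since Boruta–SHAP needs a separate package).

```python
# Sketch: RF regression with 10-fold CV + permutation-based importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(336, 17))                   # 17 retained predictors
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=336)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("R2:", cross_val_score(rf, X, y, cv=10, scoring="r2").mean())

rf.fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
print(np.argsort(imp.importances_mean)[::-1][:5])  # top drivers, cf. Social Ostracism
```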

16 pages, 1932 KiB  
Article
2.5D Deep Learning and Machine Learning for Discriminative DLBCL and IDC with Radiomics on PET/CT
by Fei Liu, Wen Chen, Jianping Zhang, Jianling Zou, Bingxin Gu, Hongxing Yang, Silong Hu, Xiaosheng Liu and Shaoli Song
Bioengineering 2025, 12(8), 873; https://doi.org/10.3390/bioengineering12080873 - 12 Aug 2025
Abstract
We aimed to establish non-invasive diagnostic models comparable to pathology testing and to identify reliable digital imaging biomarkers to classify diffuse large B-cell lymphoma (DLBCL) and invasive ductal carcinoma (IDC). Our study enrolled 386 pathologically confirmed breast nodules from 279 patients with DLBCL and IDC, all of whom underwent ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) examination. Patients from two centers were separated into internal and external cohorts. Notably, we introduced 2.5D deep learning and machine learning to extract features, develop models, and discover biomarkers. Performance was assessed using the area under the curve (AUC) and confusion matrices. Additionally, the SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) techniques were employed to interpret the models. On the internal cohort, the optimal model, PT_TDC_SVM, achieved an accuracy of 0.980 (95% confidence interval (CI): 0.957–0.991) and an AUC of 0.992 (95% CI: 0.946–0.998), surpassing the other models. On the external cohort, the accuracy was 0.975 (95% CI: 0.913–0.993) and the AUC was 0.996 (95% CI: 0.972–0.999). The optimal imaging biomarker, PET_LBP-2D_gldm_DependenceEntropy, demonstrated an average accuracy of 0.923/0.937 on internal/external testing. Our study presents an innovative automated model for DLBCL and IDC, identifying reliable digital imaging biomarkers with significant potential.
(This article belongs to the Section Biosignal Processing)
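A hedged sketch of the optimal-model step: a probability-calibrated SVM on synthetic feature vectors, evaluated by AUC. "PT_TDC_SVM" in the paper names a specific feature-set-plus-classifier combination, which this toy example only mimics, and the internal/external split here is an arbitrary slice.

```python
# Sketch: standardized SVM classifier with AUC evaluation (synthetic features).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.normal(size=(386, 30))                   # radiomics + deep features
y = (X[:, 0] - X[:, 3] + rng.normal(size=386) > 0).astype(int)  # DLBCL vs. IDC

clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X[:279], y[:279])
proba = clf.predict_proba(X[279:])[:, 1]         # held-out cohort stand-in
print("AUC:", roc_auc_score(y[279:], proba))
```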

29 pages, 2939 KiB  
Article
Automated Sleep Stage Classification Using PSO-Optimized LSTM on CAP EEG Sequences
by Manjur Kolhar, Manahil Mohammed Alfuraydan, Abdulaziz Alshammary, Khalid Alharoon, Abdullah Alghamdi, Ali Albader, Abdulmalik Alnawah and Aryam Alanazi
Brain Sci. 2025, 15(8), 854; https://doi.org/10.3390/brainsci15080854 - 11 Aug 2025
Abstract
Background/Objectives: The automatic classification of sleep stages and Cyclic Alternating Pattern (CAP) subtypes from electroencephalogram (EEG) recordings remains a significant challenge in computational sleep research because of the short duration of CAP events and the inherent class imbalance in clinical datasets. This research introduces a domain-specific deep learning system that employs an LSTM network optimized through a hybrid PSO-Hyperband hyperparameter tuning method. Methods: The research enhances EEG-based sleep analysis through the implementation of hybrid optimization methods within an LSTM architecture that addresses CAP sequence classification requirements without requiring architectural changes. Results: The developed model demonstrates strong performance on the CAP Sleep Database, achieving 97% accuracy for REM and 96% for stage S0, with ROC AUC scores exceeding 0.92 across challenging CAP subtypes (A1–A3). Model transparency is improved through the application of SHAP-based interpretability techniques, which highlight the role of spectral and morphological EEG features in classification outcomes. Conclusions: The proposed framework demonstrates resistance to class imbalance and better discrimination between visually similar CAP subtypes. The results demonstrate how hybrid optimization methods improve the performance, generalizability, and interpretability of deep learning models for EEG-based sleep microstructure analysis.
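A sketch of the PSO half of such a hybrid tuner: pyswarms searches a two-dimensional hyperparameter space (hidden units, learning rate) against a stub objective. In the paper the objective would be LSTM validation loss on CAP EEG sequences; the quadratic stand-in and the bounds below are assumptions.

```python
# Sketch: particle swarm search over LSTM hyperparameters with a stub objective.
import numpy as np
import pyswarms as ps

def objective(params):                        # params: (n_particles, 2)
    hidden, lr = params[:, 0], params[:, 1]
    # stand-in for "train LSTM, return validation loss":
    return (hidden - 96) ** 2 / 1e4 + (np.log10(lr) + 3) ** 2

bounds = (np.array([16, 1e-4]), np.array([256, 1e-1]))
opt = ps.single.GlobalBestPSO(n_particles=20, dimensions=2,
                              options={"c1": 0.5, "c2": 0.3, "w": 0.9},
                              bounds=bounds)
best_cost, best_pos = opt.optimize(objective, iters=30)
print("hidden units ~", int(best_pos[0]), "lr ~", best_pos[1])
```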

23 pages, 888 KiB  
Article
Explainable Deep Learning Model for ChatGPT-Rephrased Fake Review Detection Using DistilBERT
by Rania A. AlQadi, Shereen A. Taie, Amira M. Idrees and Esraa Elhariri
Big Data Cogn. Comput. 2025, 9(8), 205; https://doi.org/10.3390/bdcc9080205 - 11 Aug 2025
Abstract
Customers depend heavily on reviews for product information, and fake reviews may distort the perception of product quality, making online reviews less effective. ChatGPT's (GPT-3.5 and GPT-4) ability to generate human-like reviews and responses across many disciplines has grown markedly, and a growing number of reviewers and applications now use ChatGPT to create fake reviews. Consequently, detecting fake reviews generated or rephrased by ChatGPT has become essential. This paper proposes a new approach that distinguishes ChatGPT-rephrased reviews, considered fake, from real ones, utilizing a balanced dataset to analyze the sentiment and linguistic patterns that characterize both. The proposed model further leverages Explainable Artificial Intelligence (XAI) techniques, including Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), for deeper insight into the model's predictions and classification logic. The approach applies a pre-processing phase that includes part-of-speech (POS) tagging, word lemmatization, and tokenization, and then uses a fine-tuned Transformer-based machine learning (ML) model, DistilBERT, for prediction. Experimental results indicate that the proposed fine-tuned DistilBERT, utilizing the constructed balanced dataset and the pre-processing phase, outperforms other state-of-the-art methods for detecting ChatGPT-rephrased reviews, achieving an accuracy of 97.25% and an F1-score of 97.56%. The LIME and SHAP techniques not only enhanced the model's interpretability but also offered valuable insight into the key factors that differentiate genuine reviews from ChatGPT-rephrased ones. According to the XAI analysis, ChatGPT's writing style is polite and grammatically regular, lacks specific descriptions and concrete information, favors ornate vocabulary, is impersonal, and is deficient in emotional expression. These findings underscore the effectiveness and reliability of the proposed approach.
(This article belongs to the Special Issue Natural Language Processing Applications in Big Data)
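A sketch of the LIME step applied to a DistilBERT text classifier via the transformers pipeline API. The public SST-2 checkpoint below is a generic stand-in used purely for illustration; the paper fine-tunes DistilBERT on its own review corpus, and the class names here are assumed.

```python
# Sketch: LIME explanations for a DistilBERT text-classification pipeline.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english",
               top_k=None)                # return scores for all classes

def predict_proba(texts):                 # LIME expects an (n, classes) array
    out = clf(list(texts))
    return np.array([[d["score"] for d in sorted(r, key=lambda d: d["label"])]
                     for r in out])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("The product is absolutely wonderful and arrived "
                                 "promptly.", predict_proba, num_features=8)
print(exp.as_list())                      # tokens with the largest local weights
```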
