Search Results (849)

Search Parameters:
Keywords = machine box

29 pages, 3930 KiB  
Article
KAN-Based Tool Wear Modeling with Adaptive Complexity and Symbolic Interpretability in CNC Turning Processes
by Zhongyuan Che, Chong Peng, Jikun Wang, Rui Zhang, Chi Wang and Xinyu Sun
Appl. Sci. 2025, 15(14), 8035; https://doi.org/10.3390/app15148035 - 18 Jul 2025
Abstract
Tool wear modeling in CNC turning processes is critical for proactive maintenance and process optimization in intelligent manufacturing. However, traditional physics-based models lack adaptability, while machine learning approaches are often limited by poor interpretability. This study develops Kolmogorov–Arnold Networks (KANs) to address the trade-off between accuracy and interpretability in lathe tool wear modeling. Three KAN variants (KAN-A, KAN-B, and KAN-C) with varying complexities are proposed, using feed rate, depth of cut, and cutting speed as input variables to model flank wear. The proposed KAN-based framework generates interpretable mathematical expressions for tool wear, enabling transparent decision-making. To evaluate the performance of KANs, this research systematically compares prediction errors, topological evolutions, and mathematical interpretations of derived symbolic formulas. For benchmarking purposes, MLP-A, MLP-B, and MLP-C models are developed based on the architectures of their KAN counterparts. A comparative analysis between KAN and MLP frameworks is conducted to assess differences in modeling performance, with particular focus on the impact of network depth, width, and parameter configurations. Theoretical analyses, grounded in the Kolmogorov–Arnold representation theorem and Cybenko’s theorem, explain KANs’ ability to approximate complex functions with fewer nodes. The experimental results demonstrate that KANs exhibit two key advantages: (1) superior accuracy with fewer parameters compared to traditional MLPs, and (2) the ability to generate white-box mathematical expressions. Thus, this work bridges the gap between empirical models and black-box machine learning in manufacturing applications. KANs uniquely combine the adaptability of data-driven methods with the interpretability of physics-based models, offering actionable insights for researchers and practitioners. Full article
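
A minimal sketch of the regression task described above, assuming synthetic data and illustrative parameter ranges: a plain scikit-learn MLP stands in for the paper's MLP-A/B/C comparators, mapping feed rate, depth of cut, and cutting speed to flank wear. A KAN itself would be built with a dedicated library; this is only a baseline-style illustration, not the authors' implementation.

```python
# Hedged sketch: an MLP baseline for flank-wear regression from cutting parameters,
# analogous in spirit to the paper's MLP counterparts. Data are synthetic placeholders;
# the study uses measured CNC turning data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
feed_rate = rng.uniform(0.05, 0.30, n)      # mm/rev (illustrative range)
depth_of_cut = rng.uniform(0.2, 2.0, n)     # mm
cutting_speed = rng.uniform(80, 250, n)     # m/min
X = np.column_stack([feed_rate, depth_of_cut, cutting_speed])

# Placeholder power-law-like wear relation standing in for measured flank wear VB.
vb = 0.02 * feed_rate**0.4 * depth_of_cut**0.2 * (cutting_speed / 100)**0.9
vb += rng.normal(0, 0.002, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, vb, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print("MLP baseline R^2:", r2_score(y_te, mlp.predict(X_te)))
```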

84 pages, 3825 KiB  
Systematic Review
Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
by Daniele Pelosi, Diletta Cacciagrano and Marco Piangerelli
Algorithms 2025, 18(7), 443; https://doi.org/10.3390/a18070443 - 18 Jul 2025
Abstract
Explainability and interpretability have emerged as essential considerations in machine learning, particularly as models become more complex and integral to a wide range of applications. In response to increasing concerns over opaque “black-box” solutions, the literature has seen a shift toward two distinct yet often conflated paradigms: explainable AI (XAI), which refers to post hoc techniques that provide external explanations for model predictions, and interpretable AI, which emphasizes models whose internal mechanisms are understandable by design. Meanwhile, the phenomenon of concept and data drift—where models lose relevance due to evolving conditions—demands renewed attention. High-impact events, such as financial crises or natural disasters, have highlighted the need for robust interpretable or explainable models capable of adapting to changing circumstances. Against this backdrop, our systematic review aims to consolidate current research on explainability and interpretability with a focus on concept and data drift. We gather a comprehensive range of proposed models, available datasets, and other technical aspects. By synthesizing these diverse resources into a clear taxonomy, we intend to provide researchers and practitioners with actionable insights and guidance for model selection, implementation, and ongoing evaluation. Ultimately, this work aspires to serve as a practical roadmap for future studies, fostering further advancements in transparent, adaptable machine learning systems that can meet the evolving needs of real-world applications. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
27 pages, 3704 KiB  
Article
Explainable Machine Learning and Predictive Statistics for Sustainable Photovoltaic Power Prediction on Areal Meteorological Variables
by Sajjad Nematzadeh and Vedat Esen
Appl. Sci. 2025, 15(14), 8005; https://doi.org/10.3390/app15148005 - 18 Jul 2025
Abstract
Precisely predicting photovoltaic (PV) output is crucial for reliable grid integration; so far, most models rely on site-specific sensor data or treat large meteorological datasets as black boxes. This study proposes an explainable machine-learning framework that simultaneously ranks the most informative weather parameters and reveals their physical relevance to PV generation. Starting from 27 local and plant-level variables recorded at 15 min resolution for a 1 MW array in Çanakkale region, Türkiye (1 August 2022–3 August 2024), we apply a three-stage feature-selection pipeline: (i) variance filtering, (ii) hierarchical correlation clustering with Ward linkage, and (iii) a meta-heuristic optimizer that maximizes a neural-network R2 while penalizing poor or redundant inputs. The resulting subset, dominated by apparent temperature and diffuse, direct, global-tilted, and terrestrial irradiance, reduces dimensionality without significantly degrading accuracy. Feature importance is then quantified through two complementary aspects: (a) tree-based permutation scores extracted from a set of ensemble models and (b) information gain computed over random feature combinations. Both views converge on shortwave, direct, and global-tilted irradiance as the primary drivers of active power. Using only the selected features, the best model attains an average R2 ≅ 0.91 on unseen data. By utilizing transparent feature-reduction techniques and explainable importance metrics, the proposed approach delivers compact, more generalized, and reliable PV forecasts that generalize to sites lacking embedded sensor networks, and it provides actionable insights for plant siting, sensor prioritization, and grid-operation strategies. Full article
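
A rough sketch of the three-stage selection pipeline outlined above, on synthetic data: variance filtering, Ward-linkage clustering of the feature-correlation matrix with one representative kept per cluster, and an importance ranking. Permutation importance stands in here for the meta-heuristic optimizer used in the study; all names and thresholds are illustrative assumptions.

```python
# Hedged sketch of a variance-filter -> correlation-clustering -> importance pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n, p = 500, 12
X = rng.normal(size=(n, p))
X[:, 5] = X[:, 0] + 0.05 * rng.normal(size=n)          # redundant copy of feature 0
y = 2 * X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=n)   # target driven by features 0 and 3

# (i) Drop near-constant columns.
X_var = VarianceThreshold(threshold=1e-3).fit_transform(X)

# (ii) Cluster correlated features (Ward linkage on 1 - |corr|) and keep one per cluster.
corr = np.corrcoef(X_var, rowvar=False)
dist = squareform(1 - np.abs(corr), checks=False)
clusters = fcluster(linkage(dist, method="ward"), t=0.5, criterion="distance")
keep = [np.where(clusters == c)[0][0] for c in np.unique(clusters)]
X_sel = X_var[:, keep]

# (iii) Rank surviving features by permutation importance (stand-in for the optimizer).
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_sel, y)
imp = permutation_importance(model, X_sel, y, n_repeats=10, random_state=0)
print(sorted(zip(keep, imp.importances_mean), key=lambda t: -t[1]))
```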

14 pages, 4726 KiB  
Article
Interpretable Prediction and Analysis of PVA Hydrogel Mechanical Behavior Using Machine Learning
by Liying Xu, Siqi Liu, Anqi Lin, Zichuan Su and Daxin Liang
Gels 2025, 11(7), 550; https://doi.org/10.3390/gels11070550 - 16 Jul 2025
Abstract
Polyvinyl alcohol (PVA) hydrogels have emerged as versatile materials due to their exceptional biocompatibility and tunable mechanical properties, showing great promise for flexible sensors, smart wound dressings, and tissue engineering applications. However, rational design remains challenging due to complex structure–property relationships involving multiple formulation parameters. This study presents an interpretable machine learning framework for predicting PVA hydrogel tensile strain properties with emphasis on mechanistic understanding, based on a comprehensive dataset of 350 data points collected from a systematic literature review. XGBoost demonstrated superior performance after Optuna-based optimization, achieving R2 values of 0.964 for training and 0.801 for testing. SHAP analysis provided unprecedented mechanistic insights, revealing that PVA molecular weight dominates mechanical performance (SHAP importance: 84.94) through chain entanglement and crystallization mechanisms, followed by degree of hydrolysis (72.46) and cross-linking parameters. The interpretability analysis identified optimal parameter ranges and critical feature interactions, elucidating complex non-linear relationships and reinforcement mechanisms. By addressing the “black box” limitation of machine learning, this approach enables rational design strategies and mechanistic understanding for next-generation multifunctional hydrogels. Full article
(This article belongs to the Special Issue Research Progress and Application Prospects of Gel Electrolytes)
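
A minimal sketch of the XGBoost-plus-SHAP workflow described in the abstract above, on synthetic stand-in data. The feature names mirror the kinds of formulation parameters discussed (molecular weight, degree of hydrolysis, cross-linker content) but the values and model settings are illustrative assumptions, not the study's dataset or tuned hyperparameters.

```python
# Hedged sketch: gradient-boosted regression plus SHAP-based global importance ranking.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(2)
n = 350
mol_weight = rng.uniform(30, 200, n)        # kDa, illustrative
hydrolysis = rng.uniform(80, 99, n)         # %
crosslinker = rng.uniform(0.1, 5.0, n)      # wt%
X = np.column_stack([mol_weight, hydrolysis, crosslinker])
y = 3 * mol_weight + 1.5 * hydrolysis - 10 * crosslinker + rng.normal(0, 20, n)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# Global importance as the mean absolute SHAP value per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
for name, s in zip(["mol_weight", "hydrolysis", "crosslinker"],
                   np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {s:.2f}")
```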

23 pages, 16046 KiB  
Article
A False-Positive-Centric Framework for Object Detection Disambiguation
by Jasper Baur and Frank O. Nitsche
Remote Sens. 2025, 17(14), 2429; https://doi.org/10.3390/rs17142429 - 13 Jul 2025
Abstract
Existing frameworks for classifying the fidelity for object detection tasks do not consider false positive likelihood and object uniqueness. Inspired by the Detection, Recognition, Identification (DRI) framework proposed by Johnson 1958, we propose a new modified framework that defines three categories as visible anomaly, identifiable anomaly, and unique identifiable anomaly (AIU) as determined by human interpretation of imagery or geophysical data. These categories are designed to better capture false positive rates and emphasize the importance of identifying unique versus non-unique targets compared to the DRI Index. We then analyze visual, thermal, and multispectral UAV imagery collected over a seeded minefield and apply the AIU Index for the landmine detection use-case. We find that RGB imagery provided the most value per pixel, achieving a 100% identifiable anomaly rate at 125 pixels on target, and the highest unique target classification compared to thermal and multispectral imaging for the detection and identification of surface landmines and UXO. We also investigate how the AIU Index can be applied to machine learning for the selection of training data and informing the required action to take after object detection bounding boxes are predicted. Overall, the anomaly, identifiable anomaly, and unique identifiable anomaly index prescribes essential context for false-positive-sensitive or resolution-poor object detection tasks with applications in modality comparison, machine learning, and remote sensing data acquisition. Full article

16 pages, 2622 KiB  
Article
Emulation of Variational Quantum Circuits on Embedded Systems for Real-Time Quantum Machine Learning Applications
by Ali Masoudian, Uffe Jakobsen and Mohammad Hassan Khooban
Designs 2025, 9(4), 87; https://doi.org/10.3390/designs9040087 - 11 Jul 2025
Abstract
This paper presents an engineering design framework for integrating Variational Quantum Circuits (VQCs) into industrial control systems via real-time quantum emulation on embedded hardware. In this work, we present a novel framework for fully embedded real-time quantum machine learning (QML), in which a four-qubit, four-layer VQC is both emulated and trained in situ on an FPGA-based embedded platform (dSPACE MicroLabBox 1202). The system achieves deterministic microsecond-scale response at a closed-loop frequency of 100 kHz, enabling its application in latency-critical control tasks. We demonstrate the feasibility of online VQC training within this architecture by approximating nonlinear functions in real time, thereby validating the potential of embedded QML for advanced signal processing and control applications. This approach provides a scalable and practical path toward real-time Quantum Reinforcement Learning (QRL) and other quantum-enhanced embedded controllers. The results validate the feasibility of real-time quantum emulation and establish a structured engineering design methodology for implementing trainable quantum machine learning (QML) models on embedded platforms, thereby enabling the development of deployable quantum-enhanced controllers. Full article
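
A software-side sketch of the kind of circuit described above: a four-qubit, four-layer variational circuit trained by gradient descent to approximate a nonlinear function, simulated here with PennyLane. The embedding, layer template, learning rate, and target function are assumptions for illustration; the paper emulates and trains the circuit on FPGA hardware rather than in a simulator.

```python
# Hedged sketch: a 4-qubit, 4-layer VQC fitted to a nonlinear target in simulation.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def vqc(x, weights):
    # Encode the scalar input on all four qubits, then apply entangling layers.
    qml.AngleEmbedding([x] * 4, wires=range(4))
    qml.StronglyEntanglingLayers(weights, wires=range(4))
    return qml.expval(qml.PauliZ(0))

def cost(weights, xs, ys):
    loss = 0.0
    for x, y in zip(xs, ys):
        loss = loss + (vqc(x, weights) - y) ** 2
    return loss / len(xs)

xs = np.linspace(-1, 1, 20)
ys = 0.5 * np.sin(np.pi * xs)              # illustrative nonlinear target

shape = qml.StronglyEntanglingLayers.shape(n_layers=4, n_wires=4)
weights = np.array(np.random.default_rng(3).normal(scale=0.1, size=shape),
                   requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(60):
    weights = opt.step(lambda w: cost(w, xs, ys), weights)
print("final MSE:", cost(weights, xs, ys))
```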

20 pages, 3465 KiB  
Article
Phase-Controlled Closing Strategy for UHV Circuit Breakers with Arc-Chamber Insulation Deterioration Consideration
by Hao Li, Qi Long, Xu Yang, Xiang Ju, Haitao Li, Zhongming Liu, Dehua Xiong, Xiongying Duan and Minfu Liao
Energies 2025, 18(13), 3558; https://doi.org/10.3390/en18133558 - 5 Jul 2025
Abstract
To address the impact of insulation medium degradation in the arc quenching chambers of ultra-high-voltage SF6 circuit breakers on phase-controlled switching accuracy caused by multiple operations throughout the service life, this paper proposes an adaptive switching algorithm. First, a modified formula for the breakdown voltage of mixed gases is derived based on the synergistic effect. Considering the influence of contact gap on electric field distortion, an adaptive switching strategy is designed to quantify the dynamic relationship among operation times, insulation strength degradation, and electric field distortion. Then, multi-round switching-on and switching-off tests are carried out under the condition of fixed single-arc ablation amount, and the laws of voltage–current, gas decomposition products, and pre-breakdown time are obtained. The test data are processed by the least squares method, adaptive switching algorithm, and machine learning method. The results show that the coincidence degree of the pre-breakdown time obtained by the adaptive switching algorithm and the test value reaches 90%. Compared with the least squares fitting, this algorithm achieves a reasonable balance between goodness of fit and complexity, with prediction deviations tending to be randomly distributed, no obvious systematic offset, and low dispersion degree. It can also explain the physical mechanism of the decay of insulation degradation rate with the number of operations. Compared with the machine learning method, this algorithm has stronger generalization ability, effectively overcoming the defects of difficult interpretation of physical causes and the poor engineering adaptability of the black box model. Full article
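
A small illustration of one ingredient of the comparison described above: least-squares fitting of a saturating-decay law for pre-breakdown time against the number of switching operations. The exponential functional form and the data are assumptions standing in for the paper's test measurements and its physics-based model, not the actual adaptive algorithm.

```python
# Hedged sketch: curve fit of pre-breakdown time vs. operation count with a decay law.
import numpy as np
from scipy.optimize import curve_fit

def decay_model(n_ops, t_inf, delta, k):
    # Pre-breakdown time approaches t_inf as the degradation rate slows with operations.
    return t_inf + delta * np.exp(-k * n_ops)

n_ops = np.arange(0, 200, 10)
rng = np.random.default_rng(4)
t_pre = decay_model(n_ops, 0.8, 0.6, 0.02) + rng.normal(0, 0.02, n_ops.size)  # ms, illustrative

params, _ = curve_fit(decay_model, n_ops, t_pre, p0=(1.0, 0.5, 0.01))
print("fitted (t_inf, delta, k):", params)
```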

20 pages, 3123 KiB  
Article
Cryogenic Distribution System and Entropy-Based Analysis of Chosen Design Options for the Example of the Polish FEL Facility
by Tomasz Banaszkiewicz, Maciej Chorowski and Paweł Duda
Energies 2025, 18(13), 3554; https://doi.org/10.3390/en18133554 - 5 Jul 2025
Abstract
The Polish Free-Electron Laser (PolFEL), which is currently under construction in the National Centre for Nuclear Research in Świerk near Warsaw, will comprise an electron gun and from four to six cryomodules, each accommodating two nine-cell TESLA RF superconducting resonant cavities. To cool the superconducting resonant cavities, the cryomodules will be supplied with superfluid helium at a temperature of 2 K. Other requirements regarding the cooling power of PolFEL result from the need to cool the power couplers for the accelerating cryomodules (5 K) and thermal shields, which limit the heat inleaks due to radiation (40–80 K). The machine will utilize several thermodynamic states of helium, including two-phase superfluid helium, supercritical helium, and low-pressure helium vapours. Supercritical helium will be supplied from a cryoplant by a cryogenic distribution system (CDS)—transfer line and valve boxes—where it will be thermodynamically transformed into a superfluid state. This article presents the architecture of the CDS, discusses several design solutions that could have been decided on with the use of second law analysis, and presents the design methodology of the chosen CDS elements. Full article
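
A toy example of the second-law bookkeeping such design comparisons rest on: entropy generation across an isenthalpic (Joule-Thomson) expansion of supercritical helium in a valve box, evaluated with CoolProp. The state points are illustrative assumptions, and CoolProp's helium backend is assumed to cover this range; the actual CDS analysis in the paper is far more detailed.

```python
# Hedged sketch: specific entropy generation of an isenthalpic helium expansion.
from CoolProp.CoolProp import PropsSI

T1, P1 = 4.6, 3.0e5      # K, Pa: supercritical helium entering the valve (assumed)
P2 = 1.2e5               # Pa: downstream pressure (assumed)

h1 = PropsSI("H", "T", T1, "P", P1, "Helium")
s1 = PropsSI("S", "T", T1, "P", P1, "Helium")

# Isenthalpic throttling: same specific enthalpy at the lower pressure.
s2 = PropsSI("S", "H", h1, "P", P2, "Helium")
print(f"specific entropy generation: {s2 - s1:.1f} J/(kg*K)")
```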

24 pages, 787 KiB  
Article
Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance
by Cagla Acun and Olfa Nasraoui
Appl. Sci. 2025, 15(13), 7544; https://doi.org/10.3390/app15137544 - 4 Jul 2025
Abstract
Post hoc explanations for black-box machine learning models have been criticized for potentially inaccurate surrogate models and computational burden at prediction time. We propose pre hoc and co hoc explainability frameworks that integrate interpretability directly into the training process through an inherently interpretable white-box model. Pre hoc uses the white-box model to regularize the black-box model, while co hoc jointly optimizes both models with a shared loss function. We extend these frameworks to generate instance-specific explanations using Jensen–Shannon divergence as a regularization term. Our two-phase approach first trains models for fidelity, then generates local explanations through neighborhood-based fine-tuning. Experiments on credit risk scoring and movie recommendation datasets demonstrate superior global and local fidelity compared to LIME, without compromising accuracy. The co hoc framework additionally enhances white-box model accuracy by up to 3%, making it valuable for regulated domains requiring interpretable models. Our approaches provide more faithful and consistent explanations at a lower computational cost than existing methods, offering a promising direction for making machine learning models more transparent and trustworthy while maintaining high prediction accuracy. Full article
(This article belongs to the Special Issue AI Horizons: Present Status and Visions for the Next Era)
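
A minimal PyTorch sketch of co hoc-style joint training as described in the abstract above: a black-box MLP and a white-box logistic model share a loss, with a Jensen-Shannon divergence term pushing their predicted probabilities to agree. The architectures, the regularization weight, and the synthetic data are assumptions; this is not the authors' implementation.

```python
# Hedged sketch: joint training of a black-box and a white-box model with JSD coupling.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).float()

black_box = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
white_box = nn.Linear(10, 1)   # inherently interpretable: one weight per feature
bce = nn.BCEWithLogitsLoss()

def jsd(p, q, eps=1e-6):
    # Jensen-Shannon divergence between Bernoulli predictions p and q.
    p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: a * torch.log(a / b) + (1 - a) * torch.log((1 - a) / (1 - b))
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()

opt = torch.optim.Adam(list(black_box.parameters()) + list(white_box.parameters()), lr=1e-2)
lam = 0.5   # coupling strength (assumed)
for epoch in range(200):
    opt.zero_grad()
    logit_b, logit_w = black_box(X).squeeze(1), white_box(X).squeeze(1)
    loss = bce(logit_b, y) + bce(logit_w, y) + lam * jsd(torch.sigmoid(logit_b),
                                                         torch.sigmoid(logit_w))
    loss.backward()
    opt.step()
print("shared loss:", loss.item())
```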

25 pages, 5231 KiB  
Article
Using AI for Optimizing Packing Design and Reducing Cost in E-Commerce
by Hayder Zghair and Rushi Ganesh Konathala
AI 2025, 6(7), 146; https://doi.org/10.3390/ai6070146 - 4 Jul 2025
Abstract
This research explores how artificial intelligence (AI) can be leveraged to optimize packaging design, reduce operational costs, and enhance sustainability in e-commerce. As packaging waste and shipping inefficiencies grow alongside global online retail demand, traditional methods for determining box size, material use, and logistics planning have become economically and environmentally inadequate. Using a three-phase framework, this study integrates data-driven diagnostics, AI modeling, and real-world validation. In the first phase, a systematic analysis of current packaging inefficiencies was conducted through secondary data, benchmarking, and cost modeling. Findings revealed significant waste caused by over-packaging, suboptimal box-sizing, and poor alignment between product characteristics and logistics strategy. In the second phase, a random forest (RF) machine learning model was developed to predict optimal packaging configurations using key product features: weight, volume, and fragility. This model was supported by AI simulation tools that enabled virtual testing of material performance, space efficiency, and damage risk. Results demonstrated measurable improvements in packaging optimization, cost reduction, and emission mitigation. The third phase validated the AI framework using practical case studies—ranging from a college textbook to a fragile kitchen dish set and a high-volume children’s bicycle. The model successfully recommended right-sized packaging for each product, resulting in reduced material usage, improved shipping density, and enhanced protection. Simulated cost-saving scenarios further confirmed that smart packaging and AI-generated configurations can drive efficiency. The research concludes that AI-based packaging systems offer substantial strategic value, including cost savings, environmental benefits, and alignment with regulatory and consumer expectations—providing scalable, data-driven solutions for e-commerce enterprises such as Amazon and others. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
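
A minimal sketch of the random-forest step described in the abstract above: predicting a packaging configuration from product weight, volume, and fragility. The class labels, the toy decision rule, and the query example are illustrative assumptions; the study's model is trained on real product and packaging records.

```python
# Hedged sketch: random forest recommending a box-size class from product features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 1000
weight = rng.uniform(0.1, 20, n)      # kg
volume = rng.uniform(0.5, 80, n)      # litres
fragility = rng.integers(0, 3, n)     # 0 = robust, 1 = moderate, 2 = fragile
X = np.column_stack([weight, volume, fragility])

# Toy rule standing in for historical packaging decisions: larger/heavier items get
# bigger boxes, and fragile items are bumped up one size for extra padding.
box_size = np.clip((volume > 10).astype(int) + (volume > 40) + (weight > 8) + (fragility == 2), 0, 3)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, box_size, cv=5).mean())
clf.fit(X, box_size)
print("suggested box class:", clf.predict([[1.2, 6.0, 2]]))   # e.g., a fragile dish set
```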

17 pages, 1691 KiB  
Article
Towards Explainable Graph Embeddings for Gait Assessment Using Per-Cluster Dimensional Weighting
by Chris Lochhead and Robert B. Fisher
Sensors 2025, 25(13), 4106; https://doi.org/10.3390/s25134106 - 30 Jun 2025
Abstract
As gait pathology assessment systems improve both in accuracy and efficiency, the prospect of using these systems in real healthcare applications is becoming more realistic. Although gait analysis systems have proven capable of detecting gait abnormalities in supervised tasks in laboratories and clinics, there is comparatively little investigation into making such systems explainable to healthcare professionals who would use gait analysis in practice in home-based settings. There is a “black box” problem with existing machine learning models, where healthcare professionals are expected to “trust” the model making diagnoses without understanding its underlying reasoning. To address this applicational barrier, an end-to-end pipeline is introduced here for creating graph feature embeddings, generated using a bespoke Spatio-temporal Graph Convolutional Network and per-joint Principal Component Analysis. The latent graph embeddings produced by this framework led to a novel semi-supervised weighting function which quantifies and ranks the most important joint features, which are used to provide a description for each pathology. Using these embeddings with a K-means clustering approach, the proposed method also outperforms the state of the art by between 4.53% and 16% in classification accuracy across three datasets with a total of 14 different simulated gait pathologies from minor limping to ataxic gait. The resulting system provides a workable improvement to at-home gait assessment applications by providing accurate and explainable descriptions of the nature of detected gait abnormalities without the need for prior labeled descriptions of detected pathologies. Full article
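
A compact sketch of the downstream steps named above: per-joint PCA to build a concatenated embedding, K-means clustering of the embeddings, and a simple per-cluster dimensional weighting that ranks which embedding dimensions separate a cluster from the rest. The array shapes, the weighting heuristic, and the injected "pathology" are illustrative assumptions; the paper's embeddings come from a spatio-temporal graph network.

```python
# Hedged sketch: per-joint PCA -> K-means -> per-cluster dimensional weighting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_walks, n_joints, n_feats = 120, 17, 30   # e.g., 30 temporal features per joint (assumed)
data = rng.normal(size=(n_walks, n_joints, n_feats))
data[:40, 3, :] += 1.5                     # one group with an altered joint, a stand-in pathology

# Per-joint PCA: compress each joint's features, then concatenate into one embedding.
per_joint = [PCA(n_components=2).fit_transform(data[:, j, :]) for j in range(n_joints)]
embeddings = np.hstack(per_joint)          # shape (n_walks, n_joints * 2)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

# Per-cluster weighting: score dimensions by how far the cluster mean sits from the
# global mean, in units of the global standard deviation.
mu, sigma = embeddings.mean(axis=0), embeddings.std(axis=0) + 1e-9
for c in range(3):
    w = np.abs(embeddings[labels == c].mean(axis=0) - mu) / sigma
    top = np.argsort(w)[::-1][:3]
    print(f"cluster {c}: most discriminative embedding dimensions {top} (joints {top // 2})")
```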

35 pages, 3147 KiB  
Article
Hybrid Optimization Approaches for Impeller Design in Turbomachinery: Methods, Metrics, and Design Strategies
by Abel Remache, Modesto Pérez-Sánchez, Víctor Hugo Hidalgo and Helena M. Ramos
Water 2025, 17(13), 1976; https://doi.org/10.3390/w17131976 - 30 Jun 2025
Abstract
Optimizing the design of impellers in turbomachinery is crucial for improving its energy efficiency, structural integrity, and hydraulic performance in various engineering applications. This work proposes a novel modular framework for impeller optimization that integrates high-fidelity CFD and FEM simulations, AI-based surrogate modeling, and multi-objective evolutionary algorithms. A comprehensive analysis of over one hundred recent studies was conducted, with a focus on advanced computational and hybrid optimization techniques, CFD, FEM, surrogate modeling, evolutionary algorithms, and machine learning approaches. Emphasis is placed on multi-objective and data-driven strategies that integrate high-fidelity simulations with metamodels and experimental validation. The findings demonstrate that hybrid methodologies such as combining response surface methodology (RSM), Box–Behnken design (BBD), non-dominated sorting genetic algorithm II (NSGA-II), and XGBoost lead to significant improvements in hydraulic efficiency (up to 6.7%), mass reduction (over 30%), and cavitation mitigation. This study introduces a modular decision-making framework for impeller optimization which considers design objectives, simulation constraints, and the physical characteristics of turbomachinery. Furthermore, emerging trends in open-source tools, additive manufacturing, and the application of deep neural networks are discussed as key enablers for future advancements in both research and industrial applications. This work provides a practical, results-oriented framework for engineers and researchers seeking to enhance the design of impellers in the next generation of turbomachinery. Full article
(This article belongs to the Special Issue Hydraulics and Hydrodynamics in Fluid Machinery, 2nd Edition)
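
A small illustration of one building block of the hybrid pipelines surveyed above: a two-objective NSGA-II search with pymoo over three impeller-like design variables, with cheap analytic surrogates standing in for CFD/FEM evaluations or trained metamodels. The objective functions, variable names, and bounds are purely illustrative assumptions.

```python
# Hedged sketch: NSGA-II over a toy impeller surrogate (two conflicting objectives).
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

class ImpellerSurrogate(Problem):
    def __init__(self):
        # x = [blade angle (deg), blade count, outlet width (mm)] - assumed variables
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([15.0, 5.0, 5.0]),
                         xu=np.array([45.0, 9.0, 20.0]))

    def _evaluate(self, X, out, *args, **kwargs):
        angle, blades, width = X[:, 0], X[:, 1], X[:, 2]
        head_loss = (angle - 30) ** 2 / 100 + (blades - 7) ** 2 / 10   # minimize
        mass = 0.5 * blades * width                                     # minimize
        out["F"] = np.column_stack([head_loss, mass])

res = minimize(ImpellerSurrogate(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
print("Pareto front size:", len(res.F))
```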

23 pages, 4984 KiB  
Article
Design and Experiment of the Belt-Tooth Residual Film Recovery Machine
by Zebin Gao, Xinlei Zhang, Jiaxi Zhang, Yichao Wang, Jinming Li, Shilong Shen, Wenhao Dong and Xiaoxuan Wang
Agriculture 2025, 15(13), 1422; https://doi.org/10.3390/agriculture15131422 - 30 Jun 2025
Abstract
To address poor film pickup, incomplete soil–film separation, and high soil content in conventional residual film recovery machines, this study designed a belt-tooth type residual film recovery machine. Its core component integrates flexible belts with nail-teeth, providing both overload protection and efficient conveying. EDEM simulations compared film pickup performance across tooth profiles, identifying an optimal structure. Based on the kinematics and mechanical properties of residual film, a film removal mechanism and packing device were designed, incorporating partitioned packing belts to reduce soil content rate in the collected film. Using Box–Behnken experimental design, response surface methodology analyzed the effects of machine forward speed, film-lifting tooth penetration depth, and pickup belt inclination angle. Key findings show: forward speed, belt angle, and tooth depth (descending order) primarily influence recovery rate; while tooth depth, belt angle, and forward speed primarily affect soil content rate. Multi-objective optimization in Design-Expert determined optimal parameters: 5.2 km/h speed, 44 mm tooth depth, and 75° belt angle. Field validation achieved a 90.15% recovery rate and 5.86% soil content rate. Relative errors below 2.73% confirmed the regression model’s reliability. Compared with common models, the recovery rate has increased slightly, while the soil content rate has decreased by more than 4%, meeting the technical requirements for resource recovery of residual plastic film. Full article
(This article belongs to the Section Agricultural Technology)
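
A minimal sketch of the experimental-design step described above: constructing a three-factor Box-Behnken design in coded units and fitting a quadratic response surface, analogous to the Design-Expert workflow. The design matrix is the standard BBD layout; the response values below are made up, not the field data.

```python
# Hedged sketch: three-factor Box-Behnken design plus a quadratic response surface fit.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# BBD for 3 factors: +/-1 on each pair of factors with the third at 0, plus centre points.
pairs = [(0, 1), (0, 2), (1, 2)]
rows = []
for i, j in pairs:
    for a in (-1, 1):
        for b in (-1, 1):
            r = [0, 0, 0]
            r[i], r[j] = a, b
            rows.append(r)
rows += [[0, 0, 0]] * 3
D = np.array(rows, dtype=float)   # 15 runs in coded units

# Made-up responses standing in for measured recovery rate (%).
rng = np.random.default_rng(7)
speed, depth, angle = D[:, 0], D[:, 1], D[:, 2]
recovery = 88 + 1.5 * speed - 2.0 * depth + 1.0 * angle - 1.2 * speed**2 \
           + rng.normal(0, 0.3, len(D))

# Fit the full quadratic model (main effects, interactions, squares).
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(D), recovery)
print("R^2 on design points:", model.score(quad.transform(D), recovery))
```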

25 pages, 418 KiB  
Review
Emerging Diagnostic Approaches for Musculoskeletal Disorders: Advances in Imaging, Biomarkers, and Clinical Assessment
by Rahul Kumar, Kiran Marla, Kyle Sporn, Phani Paladugu, Akshay Khanna, Chirag Gowda, Alex Ngo, Ethan Waisberg, Ram Jagadeesan and Alireza Tavakkoli
Diagnostics 2025, 15(13), 1648; https://doi.org/10.3390/diagnostics15131648 - 27 Jun 2025
Abstract
Musculoskeletal (MSK) disorders remain a major global cause of disability, with diagnostic complexity arising from their heterogeneous presentation and multifactorial pathophysiology. Recent advances across imaging modalities, molecular biomarkers, artificial intelligence applications, and point-of-care technologies are fundamentally reshaping musculoskeletal diagnostics. This review offers a novel synthesis by unifying recent innovations across multiple diagnostic imaging modalities, such as CT, MRI, and ultrasound, with emerging biochemical, genetic, and digital technologies. While existing reviews typically focus on advances within a single modality or for specific MSK conditions, this paper integrates a broad spectrum of developments to highlight how use of multimodal diagnostic strategies in combination can improve disease detection, stratification, and clinical decision-making in real-world settings. Technological developments in imaging, including photon-counting detector computed tomography, quantitative magnetic resonance imaging, and four-dimensional computed tomography, have enhanced the ability to visualize structural and dynamic musculoskeletal abnormalities with greater precision. Molecular imaging and biochemical markers such as CTX-II (C-terminal cross-linked telopeptides of type II collagen) and PINP (procollagen type I N-propeptide) provide early, objective indicators of tissue degeneration and bone turnover, while genetic and epigenetic profiling can elucidate individual patterns of susceptibility. Point-of-care ultrasound and portable diagnostic devices have expanded real-time imaging and functional assessment capabilities across diverse clinical settings. Artificial intelligence and machine learning algorithms now automate image interpretation, predict clinical outcomes, and enhance clinical decision support, complementing conventional clinical evaluations. Wearable sensors and mobile health technologies extend continuous monitoring beyond traditional healthcare environments, generating real-world data critical for dynamic disease management. However, standardization of diagnostic protocols, rigorous validation of novel methodologies, and thoughtful integration of multimodal data remain essential for translating technological advances into improved patient outcomes. Despite these advances, several key limitations constrain widespread clinical adoption. Imaging modalities lack standardized acquisition protocols and reference values, making cross-site comparison and clinical interpretation difficult. AI-driven diagnostic tools often suffer from limited external validation and transparency (“black-box” models), impacting clinicians’ trust and hindering regulatory approval. Molecular markers like CTX-II and PINP, though promising, show variability due to diurnal fluctuations and comorbid conditions, complicating their use in routine monitoring. Integration of multimodal data, especially across imaging, omics, and wearable devices, remains technically and logistically complex, requiring robust data infrastructure and informatics expertise not yet widely available in MSK clinical practice. Furthermore, reimbursement models have not caught up with many of these innovations, limiting access in resource-constrained healthcare settings. As these fields converge, musculoskeletal diagnostics methods are poised to evolve into a more precise, personalized, and patient-centered discipline, driving meaningful improvements in musculoskeletal health worldwide. Full article
(This article belongs to the Special Issue Advances in Musculoskeletal Imaging: From Diagnosis to Treatment)
22 pages, 1359 KiB  
Article
A Meta-Learning-Based Ensemble Model for Explainable Alzheimer’s Disease Diagnosis
by Fatima Hasan Al-bakri, Wan Mohd Yaakob Wan Bejuri, Mohamed Nasser Al-Andoli, Raja Rina Raja Ikram, Hui Min Khor, Zulkifli Tahir and The Alzheimer’s Disease Neuroimaging Initiative
Diagnostics 2025, 15(13), 1642; https://doi.org/10.3390/diagnostics15131642 - 27 Jun 2025
Abstract
Background/Objectives: Artificial intelligence (AI) models for Alzheimer’s disease (AD) diagnosis often face the challenge of limited explainability, hindering their clinical adoption. Previous studies have relied on full-scale MRI, which increases unnecessary features, creating a “black-box” problem in current XAI models. Methods: This study proposes an explainable ensemble-based diagnostic framework trained on both clinical data and mid-slice axial MRI from the ADNI and OASIS datasets. The methodology involves training an ensemble model that integrates Random Forest, Support Vector Machine, XGBoost, and Gradient Boosting classifiers, with meta-logistic regression used for the final decision. The core contribution lies in the exclusive use of mid-slice MRI images, which highlight the lateral ventricles, thus improving the transparency and clinical relevance of the decision-making process. Our mid-slice approach minimizes unnecessary features and enhances model explainability by design. Results: We achieved state-of-the-art diagnostic accuracy: 99% on OASIS and 97.61% on ADNI using clinical data alone; 99.38% on OASIS and 98.62% on ADNI using only mid-slice MRI; and 99% accuracy when combining both modalities. The findings demonstrated significant progress in diagnostic transparency, as the algorithm consistently linked predictions to observed structural changes in the dilated lateral ventricles of the brain, which serve as a clinically reliable biomarker for AD and can be easily verified by medical professionals. Conclusions: This research presents a step toward more transparent AI-driven diagnostics, bridging the gap between accuracy and explainability in XAI. Full article
(This article belongs to the Special Issue Explainable Machine Learning in Clinical Diagnostics)
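
A compact sketch of the ensemble architecture named above: four base classifiers stacked under a logistic-regression meta-learner. It runs here on a synthetic binary-classification stand-in rather than the ADNI/OASIS clinical or mid-slice MRI features, and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch: stacking RF, SVM, XGBoost, and gradient boosting with a logistic meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```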
