Search Results (1,349)

Search Parameters:
Keywords = cams model

11 pages, 1461 KiB  
Article
Comparative Analysis of Orbital Morphology Accuracy in 3D Models Based on Cone-Beam and Fan-Beam Computed Tomography Scans for Reconstructive Planning
by Natalia Bielecka-Kowalska, Bartosz Bielecki-Kowalski and Marcin Kozakiewicz
J. Clin. Med. 2025, 14(15), 5541; https://doi.org/10.3390/jcm14155541 - 6 Aug 2025
Abstract
Background/Objectives: Orbital reconstruction remains one of the most demanding procedures in maxillofacial surgery. It requires not only precise anatomical knowledge but also poses multiple intraoperative challenges. Limited surgical visibility—especially in transconjunctival or transcaruncular approaches—demands exceptional precision from the surgeon. At the same time, the complex anatomical structure of the orbit, its rich vascularization and innervation, and the risk of severe postoperative complications—such as diplopia, sensory deficits, impaired ocular mobility, or in the most serious cases, post-traumatic blindness due to nerve injury or orbital compartment syndrome—necessitate the highest level of surgical accuracy. In this context, patient-specific implants (PSIs), commonly fabricated from zirconium oxide or ultra-high-density polyethylene, have become invaluable. Within CAD-based reconstructive planning, especially for orbital implants, critical factors include the implant’s anatomical fit, passive stabilization on intact bony structures, and non-interference with orbital soft tissues. Above all, precise replication of the orbital dimensions is essential for optimal clinical outcomes. This study compares the morphological accuracy of orbital structures based on anthropometric measurements from 3D models generated from fan-beam computed tomography (FBCT) and cone-beam computed tomography (CBCT). Methods: A cohort group of 500 Caucasian patients aged 8 to 88 years was analyzed. 3D models of the orbits were generated from FBCT and CBCT scans. Anthropometric measurements were taken to evaluate the morphological accuracy of the orbital structures. The assessed parameters included orbital depth, orbital width, the distance from the infraorbital rim to the infraorbital foramen, the distance between the piriform aperture and the infraorbital foramen, and the distance from the zygomatico-orbital foramen to the infraorbital rim. Results: Statistically significant differences were observed between virtual models derived from FBCT and those based on CBCT in several key parameters. Discrepancies were particularly evident in measurements of orbital depth, orbital width, the distance from the infraorbital rim to the infraorbital foramen, the distance between the piriform aperture and the infraorbital foramen, and the distance from the zygomatico-orbital foramen to the infraorbital rim. Conclusions: The statistically significant discrepancies in selected orbital dimensions—particularly in regions of so-called thin bone—demonstrate that FBCT remains the gold standard in the planning and design of CAD/CAM patient-specific orbital implants. Despite its advantages, including greater accessibility and lower radiation dose, CBCT shows limited reliability in the context of orbital and infraorbital reconstruction planning. Full article
(This article belongs to the Special Issue State-of-the-Art Innovations in Oral and Maxillofacial Surgery)
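
As a minimal sketch of the kind of paired FBCT-vs-CBCT comparison this abstract describes — the measurement values, parameter choice, and significance level below are placeholders, not data from the study:

```python
# Paired comparison of one orbital parameter (e.g., orbital depth)
# measured on FBCT-derived vs. CBCT-derived 3D models.
import numpy as np
from scipy import stats

fbct_depth = np.array([42.1, 44.3, 41.8, 43.0, 45.2])   # mm, hypothetical
cbct_depth = np.array([41.2, 43.1, 40.9, 42.5, 44.0])   # mm, hypothetical

t_stat, p_value = stats.ttest_rel(fbct_depth, cbct_depth)
mean_diff = np.mean(fbct_depth - cbct_depth)
print(f"mean FBCT-CBCT difference = {mean_diff:.2f} mm, p = {p_value:.4f}")
if p_value < 0.05:
    print("statistically significant discrepancy between modalities")
```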

26 pages, 3940 KiB  
Article
In Vitro Proof-of-Concept Study: Lidocaine and Epinephrine Co-Loaded in a Mucoadhesive Liquid Crystal Precursor System for Topical Oral Anesthesia
by Giovana Maria Fioramonti Calixto, Aylla Mesquita Pestana, Arthur Antunes Costa Bezerra, Marcela Tavares Luiz, Jonatas Lobato Duarte, Marlus Chorilli and Michelle Franz-Montan
Pharmaceuticals 2025, 18(8), 1166; https://doi.org/10.3390/ph18081166 - 6 Aug 2025
Abstract
Background: Local anesthesia is essential for most dental procedures, but its parenteral administration is often painful. Topical anesthetics are commonly used to minimize local anesthesia pain; however, commercial formulations fail to fully prevent the discomfort of local anesthetic injection. Methods: We developed and characterized a novel lidocaine and epinephrine co-loaded liquid crystalline precursor system (LCPS) for topical anesthesia. The formulation was structurally characterized using polarized light microscopy (PLM) and small-angle X-ray scattering (SAXS). Rheological behavior was assessed through continuous and oscillatory rheological analyses. Texture profile analysis, in vitro mucoadhesive force evaluation, in vitro drug release and permeation studies, and an in vivo toxicity assay using the chicken chorioallantoic membrane (CAM) model were also conducted. Results: PLM and SAXS confirmed the transition of the LCPS from a microemulsion to a lamellar liquid crystalline structure upon contact with artificial saliva. This transition enhanced formulation consistency by over 100 times and tripled mucoadhesion strength. The LCPS also provided controlled drug release, reducing permeation flow by 93% compared to the commercial formulation. Importantly, the CAM assay indicated that the LCPS exhibited similar toxicity to the commercial product. Conclusions: The developed LCPS demonstrated promising physicochemical and biological properties for topical anesthesia, including enhanced mucoadhesion, controlled drug delivery, and acceptable biocompatibility. These findings support its potential for in vivo application and future clinical use to reduce pain during dental anesthesia procedures. Full article
(This article belongs to the Special Issue Advances in Topical and Mucosal Drug Delivery Systems)

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions. Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
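
A minimal sketch of the spectrogram-plus-DenseNet idea described here, assuming a 250 Hz sampling rate, simple channel stacking, and a two-class head; this is not the paper's exact preprocessing pipeline:

```python
# Turn two EEG channels (T5, O1) into log-power spectrograms and
# classify normal vs. abnormal with a DenseNet backbone.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision.models import densenet121

FS = 250  # Hz, assumed sampling rate

def eeg_to_spectrogram(signal_1d):
    """Log-power spectrogram of one EEG channel."""
    f, t, Sxx = spectrogram(signal_1d, fs=FS, nperseg=FS, noverlap=FS // 2)
    return np.log1p(Sxx).astype(np.float32)

def make_input(t5, o1):
    """Stack T5 and O1 spectrograms (repeating O1) into a 3-channel tensor."""
    s = np.stack([eeg_to_spectrogram(t5),
                  eeg_to_spectrogram(o1),
                  eeg_to_spectrogram(o1)])
    return torch.from_numpy(s).unsqueeze(0)          # (1, 3, F, T)

model = densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # normal / abnormal

x = make_input(np.random.randn(30 * FS), np.random.randn(30 * FS))
logits = model(x)  # training loop (cross-entropy, optimizer) omitted
print(logits.shape)  # torch.Size([1, 2])
```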

20 pages, 4095 KiB  
Article
Integrated Explainable Diagnosis of Gear Wear Faults Based on Dynamic Modeling and Data-Driven Representation
by Zemin Zhao, Tianci Zhang, Kang Xu, Jinyuan Tang and Yudian Yang
Sensors 2025, 25(15), 4805; https://doi.org/10.3390/s25154805 - 5 Aug 2025
Viewed by 52
Abstract
Gear wear degrades transmission performance, necessitating highly reliable fault diagnosis methods. To address the limitations of existing approaches—where dynamic models rely heavily on prior knowledge, while data-driven methods lack interpretability—this study proposes an integrated bidirectional verification framework combining dynamic modeling and deep learning for interpretable gear wear diagnosis. First, a dynamic gear wear model is established to quantitatively reveal wear-induced modulation effects on meshing stiffness and vibration responses. Then, a deep network incorporating Gradient-weighted Class Activation Mapping (Grad-CAM) enables visualized extraction of frequency-domain sensitive features. Bidirectional verification between the dynamic model and deep learning demonstrates enhanced meshing harmonics in wear faults, leading to a quantitative diagnostic index that achieves 0.9560 recognition accuracy for gear wear across four speed conditions, significantly outperforming comparative indicators. This research provides a novel approach for gear wear diagnosis that ensures both high accuracy and interpretability. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
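
One way a meshing-harmonic energy index of the kind alluded to here could be computed is sketched below; the index definition, sampling rate, shaft speed, and tooth count are illustrative assumptions, not the paper's quantitative diagnostic index:

```python
# Ratio of spectral energy near the gear-mesh frequency and its first
# harmonics to total spectral energy of a vibration record.
import numpy as np

FS = 20_000          # Hz, assumed sampling rate
SHAFT_HZ = 800 / 60  # assumed shaft speed (800 rpm)
TEETH = 23           # assumed tooth count
MESH_HZ = SHAFT_HZ * TEETH

def harmonic_index(vibration, n_harmonics=3, bw=5.0):
    """Energy in narrow bands around the meshing harmonics / total energy."""
    spec = np.abs(np.fft.rfft(vibration)) ** 2
    freqs = np.fft.rfftfreq(len(vibration), d=1.0 / FS)
    band_energy = 0.0
    for k in range(1, n_harmonics + 1):
        mask = np.abs(freqs - k * MESH_HZ) < bw
        band_energy += spec[mask].sum()
    return band_energy / spec.sum()

signal = np.random.randn(FS)          # stand-in for a measured vibration record
print(f"harmonic energy index = {harmonic_index(signal):.4f}")
```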

17 pages, 3807 KiB  
Article
2AM: Weakly Supervised Tumor Segmentation in Pathology via CAM and SAM Synergy
by Chenyu Ren, Liwen Zou and Luying Gui
Electronics 2025, 14(15), 3109; https://doi.org/10.3390/electronics14153109 - 5 Aug 2025
Viewed by 117
Abstract
Tumor microenvironment (TME) analysis plays an extremely important role in computational pathology. Deep learning shows tremendous potential for tumor tissue segmentation on pathological images, which is an essential part of TME analysis. However, fully supervised segmentation methods based on deep learning usually require a large number of manual annotations, which is time-consuming and labor-intensive. Recently, weakly supervised semantic segmentation (WSSS) works based on the Class Activation Map (CAM) have shown promising results to learn the concept of segmentation from image-level class labels but usually have imprecise boundaries due to the lack of pixel-wise supervision. On the other hand, the Segment Anything Model (SAM), a foundation model for segmentation, has shown an impressive ability for general semantic segmentation on natural images, while it suffers from the noise caused by the initial prompts. To address these problems, we propose a simple but effective weakly supervised framework, termed as 2AM, combining CAM and SAM for tumor tissue segmentation on pathological images. Our 2AM model is composed of three modules: (1) a CAM module for generating salient regions for tumor tissues on pathological images; (2) an adaptive point selection (APS) module for providing more reliable initial prompts for the subsequent SAM by designing three priors of basic appearance, space distribution, and feature difference; and (3) a SAM module for predicting the final segmentation. Experimental results on two independent datasets show that our proposed method boosts tumor segmentation accuracy by nearly 25% compared with the baseline method, and achieves more than 15% improvement compared with previous state-of-the-art segmentation methods with WSSS settings. Full article
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)
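
A minimal sketch of the CAM-to-point-prompts-to-SAM idea, using the public segment-anything API; the checkpoint path, threshold, and point-selection rule are placeholders and do not reproduce the paper's adaptive point selection (APS) module:

```python
# Threshold a class activation map, take a few high-activation pixels
# as positive point prompts, and let SAM refine the tumor mask.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def cam_to_points(cam, n_points=3, thresh=0.6):
    """Pick the n highest-activation pixels above a threshold as (x, y) prompts."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    ys, xs = np.where(cam >= thresh)
    order = np.argsort(cam[ys, xs])[::-1][:n_points]
    points = np.stack([xs[order], ys[order]], axis=1)   # SAM expects (x, y)
    return points, np.ones(len(points))                  # label 1 = foreground

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in pathology patch
cam = np.random.rand(512, 512)                    # stand-in tumor CAM

predictor.set_image(image)
pts, labels = cam_to_points(cam)
masks, scores, _ = predictor.predict(point_coords=pts, point_labels=labels,
                                     multimask_output=False)
print(masks.shape)  # (1, 512, 512)
```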

19 pages, 11665 KiB  
Article
Upregulating ANKHD1 in PS19 Mice Reduces Tau Phosphorylation and Mitigates Tau Toxicity-Induced Cognitive Deficits
by Xiaolin Tian, Nathan Le, Yuhai Zhao, Dina Alawamleh, Andrew Schwartz, Lauren Meyer, Elizabeth Helm and Chunlai Wu
Int. J. Mol. Sci. 2025, 26(15), 7524; https://doi.org/10.3390/ijms26157524 - 4 Aug 2025
Viewed by 136
Abstract
Using the fly eye as a model system, we previously demonstrated that upregulation of the fly gene mask protects against FUS- and Tau-induced photoreceptor degeneration. Building upon this finding, we investigated whether the protective role of mask is conserved in mammals. To this end, we generated a transgenic mouse line carrying Cre-inducible ANKHD1, the human homolog of mask. Utilizing the TauP301S-PS19 mouse model for Tau-related dementia, we found that expressing ANKHD1 driven by CamK2a-Cre reduced hyperphosphorylated human Tau in 6-month-old mice. Additionally, ANKHD1 expression was associated with a trend toward reduced gliosis and preservation of the presynaptic marker Synaptophysin, suggesting a protective role of ANKHD1 against TauP301S-linked neuropathology. At 9 months of age, novel object recognition (NOR) testing revealed cognitive impairment in female, but not male, PS19 mice. Notably, co-expression of ANKHD1 restored cognitive performance in the affected female mice. Together, this study highlights the novel effect of ANKHD1 in counteracting the adverse effects induced by the mutant human Tau protein. This finding underscores ANKHD1’s potential as a unique therapeutic target for tauopathies. Full article

19 pages, 3468 KiB  
Article
Fine-Tuning Models for Histopathological Classification of Colorectal Cancer
by Houda Saif ALGhafri and Chia S. Lim
Diagnostics 2025, 15(15), 1947; https://doi.org/10.3390/diagnostics15151947 - 3 Aug 2025
Viewed by 160
Abstract
Background/Objectives: This study aims to design and evaluate transfer learning strategies that fine-tune multiple pre-trained convolutional neural network architectures based on their characteristics to improve the accuracy and generalizability of colorectal cancer histopathological image classification. Methods: The application of transfer learning with pre-trained models on specialized and multiple datasets is proposed, where the proposed models, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep, are algorithmically fine-tuned at varying depths to improve the performance of colorectal cancer classification. These models were applied to datasets of 10,613 images from public and private repositories, external sources, and unseen data. To validate the models’ decision-making and improve transparency, we integrated Grad-CAM to provide visual explanations that influence classification decisions. Results and Conclusions: On average across all datasets, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep achieved test accuracies of 99.34%, 99.48%, and 99.45%, respectively, highlighting the effectiveness of fine-tuning in improving classification performance and generalization. Statistical methods, including paired t-tests, ANOVA, and the Kruskal–Wallis test, confirmed significant improvements in the proposed methods’ performance, with p-values below 0.05. These findings demonstrate that fine-tuning based on the characteristics of CNN’s architecture enhances colorectal cancer classification in histopathology, thereby improving the diagnostic potential of deep learning models. Full article
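
The core mechanic — fine-tuning a pretrained backbone from a chosen depth — can be sketched as below; the cut point, class count, and backbone weights are illustrative, since the paper's CRCHistoDense/CRCHistoIncep/CRCHistoXcep models tune each architecture according to its own characteristics:

```python
# Freeze everything below a chosen cut point in a pretrained DenseNet,
# replace the head, and train only the unfrozen layers.
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

NUM_CLASSES = 2                 # e.g., tumor vs. non-tumor tiles (assumed)
FINE_TUNE_FROM = "denseblock4"  # illustrative unfreezing depth

model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

unfreeze = False
for name, module in model.features.named_children():
    if name == FINE_TUNE_FROM:
        unfreeze = True
    for p in module.parameters():
        p.requires_grad = unfreeze

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```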

17 pages, 6494 KiB  
Article
Evaluation of a Passive-Assist Exoskeleton Under Different Assistive Force Profiles in Agricultural Working Postures
by Naoki Saito, Takumi Kobayashi, Kohei Akimoto, Toshiyuki Satoh and Norihiko Saga
Actuators 2025, 14(8), 381; https://doi.org/10.3390/act14080381 - 1 Aug 2025
Viewed by 174
Abstract
To enable the practical application of passive back-support exoskeletons employing pneumatic artificial muscles (PAMs) in tasks such as agricultural work, we evaluated their assistive effectiveness in a half-squatting posture with a staggered stance. In this context, assistive force profiles were adjusted according to body posture to achieve more effective support. The targeted assistive force profile was designed to be continuously active from the standing to the half-squatting position, with minimal variation across this range. The assistive force profile was developed based on a PAM contractile force model and implemented using a cam mechanism. The effectiveness of assistance was assessed by measuring body flexion angles and erector spinae muscle activity during lifting and carrying tasks. The results showed that the assistive effect was greater on the side with the forward leg. Compared to the condition without exoskeleton assistance, the conventional pulley-based system reduced muscle activity by approximately 20% whereas the cam-based system achieved a reduction of approximately 30%. Full article
(This article belongs to the Special Issue Actuation and Sensing of Intelligent Soft Robots)
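
For orientation, a commonly used simplified static PAM (McKibben) force model of the Chou-Hannaford form can be swept over contraction to check how flat an assistive profile is; the diameter, braid angle, and pressure are placeholders, and the paper's own contractile-force model and cam synthesis are not reproduced here:

```python
# Static PAM pulling force vs. contraction ratio under a fixed pressure.
import numpy as np

D0 = 0.010                    # m, initial muscle diameter (assumed)
THETA0 = np.radians(23.0)     # initial braid angle (assumed)
A = 3.0 / np.tan(THETA0) ** 2
B = 1.0 / np.sin(THETA0) ** 2

def pam_force(pressure_pa, contraction):
    """Simplified static PAM force for a given pressure and contraction ratio."""
    return (np.pi * D0 ** 2 * pressure_pa / 4.0) * (A * (1.0 - contraction) ** 2 - B)

# Sweep from standing (small contraction) to half-squat (larger contraction):
for eps in np.linspace(0.0, 0.20, 5):
    print(f"contraction {eps:.2f}: force {pam_force(500e3, eps):.1f} N")
```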

29 pages, 2495 KiB  
Article
AIM-Net: A Resource-Efficient Self-Supervised Learning Model for Automated Red Spider Mite Severity Classification in Tea Cultivation
by Malathi Kanagarajan, Mohanasundaram Natarajan, Santhosh Rajendran, Parthasarathy Velusamy, Saravana Kumar Ganesan, Manikandan Bose, Ranjithkumar Sakthivel and Baskaran Stephen Inbaraj
AgriEngineering 2025, 7(8), 247; https://doi.org/10.3390/agriengineering7080247 - 1 Aug 2025
Viewed by 146
Abstract
Tea cultivation faces significant threats from red spider mite (RSM: Oligonychus coffeae) infestations, which reduce yields and economic viability in major tea-producing regions. Current automated detection methods rely on supervised deep learning models requiring extensive labeled data, limiting scalability for smallholder farmers. This article proposes AIM-Net (AI-based Infestation Mapping Network) by evaluating SwAV (Swapping Assignments between Views), a self-supervised learning framework, for classifying RSM infestation severity (Mild, Moderate, Severe) using a geo-referenced, field-acquired dataset of RSM infested tea-leaves, Cam-RSM. The methodology combines SwAV pre-training on unlabeled data with fine-tuning on labeled subsets, employing multi-crop augmentation and online clustering to learn discriminative features without full supervision. Comparative analysis against a fully supervised ResNet-50 baseline utilized 5-fold cross-validation, assessing accuracy, F1-scores, and computational efficiency. Results demonstrate SwAV’s superiority, achieving 98.7% overall accuracy (vs. 92.1% for ResNet-50) and macro-average F1-scores of 98.3% across classes, with a 62% reduction in labeled data requirements. The model showed particular strength in Mild_RSM-class detection (F1-score: 98.5%) and computational efficiency, enabling deployment on edge devices. Statistical validation confirmed significant improvements (p < 0.001) over baseline approaches. These findings establish self-supervised learning as a transformative tool for precision pest management, offering resource-efficient solutions for early infestation detection while maintaining high accuracy. Full article
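
A minimal sketch of the fine-tuning stage only: a three-class severity head on a ResNet-50 encoder. The SwAV pre-training itself (multi-crop augmentation, online clustering) is not reproduced, and ImageNet weights stand in here for a SwAV-pretrained checkpoint:

```python
# Attach a Mild / Moderate / Severe head to a ResNet-50 encoder and
# train only the head on the labeled subset.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)  # stand-in for SwAV weights
# model.load_state_dict(torch.load("swav_pretrained.pth"), strict=False)  # hypothetical checkpoint

model.fc = nn.Linear(model.fc.in_features, 3)   # Mild / Moderate / Severe

# Freeze the encoder; only the new head remains trainable.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")
```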

17 pages, 1546 KiB  
Article
Design and Optimization of Valve Lift Curves for Piston-Type Expander at Different Rotational Speeds
by Yongtao Sun, Qihui Yu, Zhenjie Han, Ripeng Qin and Xueqing Hao
Fluids 2025, 10(8), 204; https://doi.org/10.3390/fluids10080204 - 1 Aug 2025
Viewed by 122
Abstract
The piston-type expander (PTE), as the primary output component, significantly influences the performance of an energy storage system. This paper proposes a non-cam variable valve actuation system for the PTE, supported by a mathematical model. An enhanced S-curve trajectory planning method is used to design the valve lift curve. The study investigates the effects of various valve lift design parameters on output power and efficiency at different rotational speeds, employing orthogonal design and SPSS Statistics 27 (Statistical Product and Service Solutions) simulations. A grey comprehensive evaluation method is used to identify optimal valve lift parameters for each speed. The results show that valve lift parameters influence PTE performance to varying degrees, with intake duration having the greatest effect, followed by maximum valve lift, while intake end time has the least impact. The non-cam PTE outperforms the cam-based PTE. At 800 rpm, the optimal design yields 7.12 kW and 53.5% efficiency; at 900 rpm, 8.17 kW and 50.6%; at 1000 rpm, 9.2 kW and 46.8%; and at 1100 rpm, 12.09 kW and 41.2%. At these speeds, output power increases by 18.37%, 11.42%, 11.62%, and 9.82%, while energy efficiency improves by 15.01%, 15.05%, 14.24%, and 13.86%, respectively. Full article
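
To illustrate the kind of parameterized lift curve being optimized, a generic smoothstep construction with maximum lift, intake duration, and intake end angle as parameters is sketched below; it is not the paper's enhanced S-curve trajectory, and the values are placeholders:

```python
# Valve-lift profile built from smooth S-shaped ramps.
import numpy as np

def smoothstep(x):
    """Cubic S-curve: 0 -> 1 with zero slope at both ends."""
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

def valve_lift(theta_deg, max_lift=8.0, intake_end=180.0,
               duration=150.0, ramp=40.0):
    """Lift (mm) vs. crank angle (deg): S-ramp up, dwell, S-ramp down."""
    start = intake_end - duration
    up = smoothstep((theta_deg - start) / ramp)
    down = smoothstep((intake_end - theta_deg) / ramp)
    return max_lift * np.minimum(up, down)

theta = np.linspace(0.0, 360.0, 721)
lift = valve_lift(theta)
print(f"peak lift {lift.max():.2f} mm at {theta[lift.argmax()]:.1f} deg")
```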

19 pages, 1160 KiB  
Article
Multi-User Satisfaction-Driven Bi-Level Optimization of Electric Vehicle Charging Strategies
by Boyin Chen, Jiangjiao Xu and Dongdong Li
Energies 2025, 18(15), 4097; https://doi.org/10.3390/en18154097 - 1 Aug 2025
Viewed by 216
Abstract
The accelerating integration of electric vehicles (EVs) into contemporary transportation infrastructure has underscored significant limitations in traditional charging paradigms, particularly in accommodating heterogeneous user requirements within dynamic operational environments. This study presents a differentiated optimization framework for EV charging strategies through the systematic classification of user types. A multidimensional decision-making environment is established for three representative user categories—residential, commercial, and industrial—by synthesizing time-variant electricity pricing models with dynamic carbon emission pricing mechanisms. A bi-level optimization architecture is subsequently formulated, leveraging deep reinforcement learning (DRL) to capture user-specific demand characteristics through customized reward functions and adaptive constraint structures. Validation is conducted within a high-fidelity simulation environment featuring 90 autonomous EV charging agents operating in a metropolitan parking facility. Empirical results indicate that the proposed typology-driven approach yields a 32.6% average cost reduction across user groups relative to baseline charging protocols, with statistically significant improvements in expenditure optimization (p < 0.01). Further interpretability analysis employing gradient-weighted class activation mapping (Grad-CAM) demonstrates that the model’s attention mechanisms are well aligned with theoretically anticipated demand prioritization patterns across the distinct user types, thereby confirming the decision-theoretic soundness of the framework. Full article
(This article belongs to the Section E: Electric Vehicles)
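
A hypothetical sketch of type-specific reward shaping for a charging agent: each user class weights energy cost, carbon cost, and a departure-deadline penalty differently. The weights and the penalty form are placeholders, not the paper's customized reward functions:

```python
# Per-step reward for one EV charging agent, differentiated by user type.
WEIGHTS = {
    "residential": {"cost": 1.0, "carbon": 0.3, "deadline": 2.0},
    "commercial":  {"cost": 0.6, "carbon": 0.2, "deadline": 4.0},
    "industrial":  {"cost": 1.2, "carbon": 0.5, "deadline": 1.0},
}

def step_reward(user_type, energy_kwh, price_per_kwh, carbon_price_per_kwh,
                soc, target_soc, hours_to_departure):
    w = WEIGHTS[user_type]
    cost = energy_kwh * price_per_kwh
    carbon = energy_kwh * carbon_price_per_kwh
    # Penalize falling short of the target state of charge as departure nears.
    shortfall = max(target_soc - soc, 0.0) / max(hours_to_departure, 0.25)
    return -(w["cost"] * cost + w["carbon"] * carbon + w["deadline"] * shortfall)

print(step_reward("residential", energy_kwh=3.0, price_per_kwh=0.18,
                  carbon_price_per_kwh=0.02, soc=0.55, target_soc=0.9,
                  hours_to_departure=4.0))
```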

23 pages, 3099 KiB  
Article
Explainable Multi-Scale CAM Attention for Interpretable Cloud Segmentation in Astro-Meteorological Applications
by Qing Xu, Zichen Zhang, Guanfang Wang and Yunjie Chen
Appl. Sci. 2025, 15(15), 8555; https://doi.org/10.3390/app15158555 - 1 Aug 2025
Viewed by 189
Abstract
Accurate cloud segmentation is critical for astronomical observations and solar forecasting. However, traditional threshold- and texture-based methods suffer from limited accuracy (65–80%) under complex conditions such as thin cirrus or twilight transitions. Although the deep-learning segmentation method based on U-Net effectively captures low-level and high-level features and achieves significant progress in accuracy, current methods still lack interpretability and multi-scale feature integration and usually produce fuzzy boundaries or fragmented predictions. In this paper, we propose multi-scale CAM, an explainable AI (XAI) framework that integrates class activation mapping (CAM) with hierarchical feature fusion to quantify pixel-level attention across hierarchical features, thereby enhancing the model’s discriminative capability. To achieve precise segmentation, we integrate CAM into an improved U-Net architecture, incorporating multi-scale CAM attention for adaptive feature fusion and dilated residual modules for large-scale context extraction. Experimental results on the SWINSEG dataset demonstrate that our method outperforms existing state-of-the-art methods, improving recall by 3.06%, F1 score by 1.49%, and MIoU by 2.21% over the best baseline. The proposed framework balances accuracy, interpretability, and computational efficiency, offering a trustworthy solution for cloud detection systems in operational settings. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence Technology and Its Applications)
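
A minimal sketch of multi-scale attention fusion in the spirit described: each encoder scale produces a CAM-like spatial map (1x1 convolution plus sigmoid) that gates its features before everything is resized and summed at the finest resolution. Channel sizes and the fusion rule are assumptions, not the paper's architecture:

```python
# Gate each scale's features with a CAM-like map, project, upsample, and sum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCAMFusion(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        self.attn = nn.ModuleList(nn.Conv2d(c, 1, kernel_size=1) for c in in_channels)
        self.proj = nn.ModuleList(nn.Conv2d(c, out_channels, kernel_size=1)
                                  for c in in_channels)

    def forward(self, feats):
        """feats: list of tensors (B, C_i, H_i, W_i), finest scale first."""
        target_hw = feats[0].shape[-2:]
        fused = 0
        for f, attn, proj in zip(feats, self.attn, self.proj):
            gate = torch.sigmoid(attn(f))            # CAM-like spatial attention
            gated = proj(f * gate)                   # gate, then project channels
            fused = fused + F.interpolate(gated, size=target_hw,
                                          mode="bilinear", align_corners=False)
        return fused

feats = [torch.randn(1, 64, 128, 128),
         torch.randn(1, 128, 64, 64),
         torch.randn(1, 256, 32, 32)]
print(MultiScaleCAMFusion()(feats).shape)  # torch.Size([1, 64, 128, 128])
```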

28 pages, 6624 KiB  
Article
YoloMal-XAI: Interpretable Android Malware Classification Using RGB Images and YOLO11
by Chaymae El Youssofi and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 52; https://doi.org/10.3390/jcp5030052 - 1 Aug 2025
Viewed by 322
Abstract
As Android malware grows increasingly sophisticated, traditional detection methods struggle to keep pace, creating an urgent need for robust, interpretable, and real-time solutions to safeguard mobile ecosystems. This study introduces YoloMal-XAI, a novel deep learning framework that transforms Android application files into RGB image representations by mapping DEX (Dalvik Executable), Manifest.xml, and Resources.arsc files to distinct color channels. Evaluated on the CICMalDroid2020 dataset using YOLO11 pretrained classification models, YoloMal-XAI achieves 99.87% accuracy in binary classification and 99.56% in multi-class classification (Adware, Banking, Riskware, SMS, and Benign). Compared to ResNet-50, GoogLeNet, and MobileNetV2, YOLO11 offers competitive accuracy with at least 7× faster training over 100 epochs. Against YOLOv8, YOLO11 achieves comparable or superior accuracy while reducing training time by up to 3.5×. Cross-corpus validation using Drebin and CICAndMal2017 further confirms the model’s generalization capability on previously unseen malware. An ablation study highlights the value of integrating DEX, Manifest, and Resources components, with the full RGB configuration consistently delivering the best performance. Explainable AI (XAI) techniques—Grad-CAM, Grad-CAM++, Eigen-CAM, and HiRes-CAM—are employed to interpret model decisions, revealing the DEX segment as the most influential component. These results establish YoloMal-XAI as a scalable, efficient, and interpretable framework for Android malware detection, with strong potential for future deployment on resource-constrained mobile devices. Full article
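
The RGB-encoding idea can be sketched as below: the raw bytes of classes.dex, AndroidManifest.xml, and resources.arsc each fill one color channel of a fixed-size square image. The 256x256 size and the pad/truncate rule are assumptions, not necessarily the paper's exact preprocessing:

```python
# Map an APK's DEX, Manifest and Resources bytes to the R, G, B channels
# of a square image for downstream classification.
import zipfile
import numpy as np
from PIL import Image

SIDE = 256  # assumed image side length

def bytes_to_channel(data, side=SIDE):
    """Pad or truncate a byte string into a (side, side) uint8 array."""
    arr = np.frombuffer(data, dtype=np.uint8)[: side * side]
    arr = np.pad(arr, (0, side * side - len(arr)))
    return arr.reshape(side, side)

def apk_to_rgb(apk_path):
    with zipfile.ZipFile(apk_path) as apk:
        channels = [bytes_to_channel(apk.read(name))
                    for name in ("classes.dex", "AndroidManifest.xml",
                                 "resources.arsc")]
    return Image.fromarray(np.stack(channels, axis=-1), mode="RGB")

# apk_to_rgb("sample.apk").save("sample.png")   # then classify with YOLO11
```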

14 pages, 2727 KiB  
Article
A Multimodal MRI-Based Model for Colorectal Liver Metastasis Prediction: Integrating Radiomics, Deep Learning, and Clinical Features with SHAP Interpretation
by Xin Yan, Furui Duan, Lu Chen, Runhong Wang, Kexin Li, Qiao Sun and Kuang Fu
Curr. Oncol. 2025, 32(8), 431; https://doi.org/10.3390/curroncol32080431 - 30 Jul 2025
Viewed by 182
Abstract
Purpose: Predicting colorectal cancer liver metastasis (CRLM) is essential for prognostic assessment. This study aims to develop and validate an interpretable multimodal machine learning framework based on multiparametric MRI for predicting CRLM, and to enhance the clinical interpretability of the model through SHapley Additive exPlanations (SHAP) analysis and deep learning visualization. Methods: This multicenter retrospective study included 463 patients with pathologically confirmed colorectal cancer from two institutions, divided into training (n = 256), internal testing (n = 111), and external validation (n = 96) sets. Radiomics features were extracted from manually segmented regions on axial T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI). Deep learning features were obtained from a pretrained ResNet101 network using the same MRI inputs. A least absolute shrinkage and selection operator (LASSO) logistic regression classifier was developed for clinical, radiomics, deep learning, and combined models. Model performance was evaluated by AUC, sensitivity, specificity, and F1-score. SHAP was used to assess feature contributions, and Grad-CAM was applied to visualize deep feature attention. Results: The combined model integrating features across the three modalities achieved the highest performance across all datasets, with AUCs of 0.889 (training), 0.838 (internal test), and 0.822 (external validation), outperforming single-modality models. Decision curve analysis (DCA) revealed enhanced clinical net benefit from the integrated model, while calibration curves confirmed its good predictive consistency. SHAP analysis revealed that radiomic features related to T2WI texture (e.g., LargeDependenceLowGrayLevelEmphasis) and clinical biomarkers (e.g., CA19-9) were among the most predictive for CRLM. Grad-CAM visualizations confirmed that the deep learning model focused on tumor regions consistent with radiological interpretation. Conclusions: This study presents a robust and interpretable multiparametric MRI-based model for noninvasively predicting liver metastasis in colorectal cancer patients. By integrating handcrafted radiomics and deep learning features, and enhancing transparency through SHAP and Grad-CAM, the model provides both high predictive performance and clinically meaningful explanations. These findings highlight its potential value as a decision-support tool for individualized risk assessment and treatment planning in the management of colorectal cancer. Full article
(This article belongs to the Section Gastrointestinal Oncology)
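
The combined model's final stage can be sketched as concatenating clinical, radiomics, and deep-learning feature vectors and fitting an L1-penalized (LASSO-style) logistic regression; the feature arrays, scaling choices, and regularization strength below are placeholders:

```python
# Fit a LASSO logistic regression on concatenated multimodal features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
clinical = rng.normal(size=(256, 8))      # stand-ins for the training cohort
radiomics = rng.normal(size=(256, 40))
deep = rng.normal(size=(256, 64))
y = rng.integers(0, 2, size=256)          # 1 = liver metastasis

X = np.hstack([clinical, radiomics, deep])
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
model.fit(X, y)
print("features kept by the L1 penalty:",
      int(np.count_nonzero(model.named_steps["logisticregression"].coef_)))
```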

34 pages, 2740 KiB  
Article
Lightweight Anomaly Detection in Digit Recognition Using Federated Learning
by Anja Tanović and Ivan Mezei
Future Internet 2025, 17(8), 343; https://doi.org/10.3390/fi17080343 - 30 Jul 2025
Viewed by 265
Abstract
This study presents a lightweight autoencoder-based approach for anomaly detection in digit recognition using federated learning on resource-constrained embedded devices. We implement and evaluate compact autoencoder models on the ESP32-CAM microcontroller, enabling both training and inference directly on the device using 32-bit floating-point arithmetic. The system is trained on a reduced MNIST dataset (1000 resized samples) and evaluated using EMNIST and MNIST-C for anomaly detection. Seven fully connected autoencoder architectures are first evaluated on a PC to explore the impact of model size and batch size on training time and anomaly detection performance. Selected models are then re-implemented in the C programming language and deployed on a single ESP32 device, achieving training times as short as 12 min, inference latency as low as 9 ms, and F1 scores of up to 0.87. Autoencoders are further tested on ten devices in a real-world federated learning experiment using Wi-Fi. We explore non-IID and IID data distribution scenarios: (1) digit-specialized devices and (2) partitioned datasets with varying content and anomaly types. The results show that small unmodified autoencoder models can be effectively trained and evaluated directly on low-power hardware. The best models achieve F1 scores of up to 0.87 in the standard IID setting and 0.86 in the extreme non-IID setting. Despite some clients being trained on corrupted datasets, federated aggregation proves resilient, maintaining high overall performance. The resource analysis shows that more than half of the models and all the training-related allocations fit entirely in internal RAM. These findings confirm the feasibility of local float32 training and collaborative anomaly detection on low-cost hardware, supporting scalable and privacy-preserving edge intelligence. Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)
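
A framework-light sketch of the core idea: a small fully connected autoencoder scores inputs by reconstruction error, and errors above a threshold are flagged as anomalies. Layer sizes, the threshold, and the 28x28 input follow common MNIST-style setups and are assumptions; the deployed on-device version described above is written in C with float32 arithmetic:

```python
# Reconstruction-error anomaly scoring with a compact autoencoder.
import numpy as np

rng = np.random.default_rng(0)
IN, HID = 28 * 28, 32                      # assumed compact architecture

W1 = rng.normal(scale=0.05, size=(IN, HID)); b1 = np.zeros(HID)
W2 = rng.normal(scale=0.05, size=(HID, IN)); b2 = np.zeros(IN)

def reconstruct(x):
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU encoder
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid decoder

def anomaly_score(x):
    return float(np.mean((x - reconstruct(x)) ** 2))

# After training (gradient descent on in-distribution digits, omitted here),
# a threshold is chosen from validation scores, e.g. mean + 3 * std:
threshold = 0.09                            # placeholder value
x = rng.random(IN)                          # stand-in image, values in [0, 1]
print("anomaly" if anomaly_score(x) > threshold else "normal")
```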