Search Results (3,534)

Search Parameters:
Keywords = perceptron

20 pages, 1131 KB  
Article
Imbalance-Aware APS Failure Classification Using Feature-Wise Attention Graph Convolutional Network
by Juhyeon Noh, Jihoon Lee, Seungmin Oh, Jaehyung Park, Minsoo Hahn, HoYong Ryu and Jinsul Kim
Processes 2026, 14(7), 1107; https://doi.org/10.3390/pr14071107 (registering DOI) - 29 Mar 2026
Abstract
Industrial equipment data often exhibit high dimensionality and class imbalance, which make it difficult to achieve both accurate failure detection and identification of the factors contributing to failures. To address this issue, this study proposes an explainable failure classification framework, Feature-Wise Attention Graph Convolutional Network (FWA-GCN), which combines Feature-Wise Attention (FWA) with a Graph Convolutional Network (GCN) to provide both high classification performance and variable-level interpretability. In the proposed model, tabular sensor records are treated as nodes, and a similarity-based graph is constructed to capture relationships among samples. Feature-Wise Attention learns the importance of each feature and reweights node features accordingly, and the reweighted features are then used as input to the GCN to classify failure occurrences. To alleviate the class imbalance problem, a weighted loss function is applied during training by assigning a higher weight to the failure class. Experiments conducted on the Air Pressure System (APS) dataset demonstrate that the proposed FWA-GCN achieves Precision of 79.95%, Recall of 85.07%, and F1-score of 82.43%, outperforming conventional machine learning models including Random Forest, XGBoost, CatBoost, and Multi-Layer Perceptron, as well as a standard GCN model. Furthermore, an ablation study was conducted by removing the top features selected by the attention mechanism. The results show a significant decrease in recall, confirming the effectiveness of the attention-based feature importance and supporting the interpretability of the proposed framework. Full article
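The imbalance handling described in the abstract above (a loss that weights the failure class more heavily) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name and the `pos_weight=10.0` default are assumptions for the example.

```python
import math

def weighted_bce(y_true, p_pred, pos_weight=10.0):
    """Binary cross-entropy that penalizes errors on the positive
    (failure) class pos_weight times more than on the negative class."""
    eps = 1e-12
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)          # clamp to avoid log(0)
        w = pos_weight if y == 1 else 1.0        # up-weight failures
        total += -w * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(y_true)
```

With the default weight, misclassifying a failure costs ten times as much as the symmetric mistake on a healthy sample.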

29 pages, 6898 KB  
Article
MDE-UNet: A Physically Guided Asymmetric Fusion Network for Multi-Source Meteorological Data Lightning Identification
by Yihua Chen, Yuanpeng Han, Yujian Zhang, Yi Liu, Lin Song, Jialei Wang, Xinjue Wang and Qilin Zhang
Remote Sens. 2026, 18(7), 1027; https://doi.org/10.3390/rs18071027 (registering DOI) - 29 Mar 2026
Abstract
Utilizing multi-source meteorological data for lightning identification is crucial for monitoring severe convective weather. However, several key challenges persist in this field: dimensional imbalance and modal competition among multi-source heterogeneous data, model training bias caused by the extreme sparsity of lightning samples, and an imbalance between false alarms and missed detections resulting from complex background noise. To address these challenges, this paper proposes a lightning identification network guided by physical priors and constrained by supervision. First, to tackle the issue of modal competition in fusing satellite (high-dimensional) and radar (low-dimensional) data, a physical prior-guided asymmetric radar information enhancement mechanism is introduced. This mechanism uses radar physical features as contextual guidance to selectively enhance the latent weak radar signatures. Second, at the architectural level, a multi-source multi-scale feature fusion module and a weighted sliding window–multilayer perceptron (MLP) enhanced decoding unit are constructed. The former achieves the coupling of multi-scale physical features at a 2 km grid scale through cross-level semantic alignment, building a highly consistent feature field that effectively improves the model’s ability to detect lightning signals. The latter leverages adaptive receptive fields and the nonlinear modeling capability of MLPs to effectively smooth spatially discrete noise, ensuring spatial continuity in the reconstructed results. Finally, to address the model bias caused by severe class imbalance between positive and negative samples—resulting from the extreme sparsity of lightning events—an asymmetrically weighted BCE-DICE loss function is designed. Its “asymmetric” characteristic is implemented by assigning different penalty weights to false-positive and false-negative predictions. This loss function balances pixel-level accuracy and inter-class equilibrium while imposing high-weight penalties on false-positive predictions, achieving synergistic optimization of feature enhancement and directional suppression. Experimental results show that the proposed method effectively increases the hit rate while substantially reducing the false alarm rate, enabling efficient utilization of multi-source data and high-precision identification of lightning strike areas. Full article
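An asymmetrically weighted BCE-Dice loss of the kind the abstract describes can be sketched as below. The weights `w_fp`, `w_fn`, the blend factor `alpha`, and the smoothing constant are illustrative assumptions, not values from the paper.

```python
import math

def dice_loss(y_true, p_pred, smooth=1.0):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), with smoothing."""
    inter = sum(y * p for y, p in zip(y_true, p_pred))
    return 1.0 - (2.0 * inter + smooth) / (sum(y_true) + sum(p_pred) + smooth)

def asym_bce_dice(y_true, p_pred, w_fp=2.0, w_fn=1.0, alpha=0.5):
    """BCE with separate penalty weights on false-positive and
    false-negative errors, blended with the Dice term."""
    eps = 1e-12
    bce = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)
        # w_fn scales misses (y=1, low p); w_fp scales false alarms (y=0, high p)
        bce += -(w_fn * y * math.log(p) + w_fp * (1 - y) * math.log(1.0 - p))
    bce /= len(y_true)
    return alpha * bce + (1.0 - alpha) * dice_loss(y_true, p_pred)
```

Raising `w_fp` makes predictions that fire on background pixels strictly more expensive, which is the "directional suppression" of false alarms the abstract refers to.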
19 pages, 1666 KB  
Article
MTLL: A Novel Multi-Task Learning Approach for Lymphocytic Leukemia Classification and Nucleus Segmentation
by Cuisi Ou, Zhigang Hu, Xinzheng Wang, Kaiwen Cao and Yipei Wang
Electronics 2026, 15(7), 1419; https://doi.org/10.3390/electronics15071419 (registering DOI) - 28 Mar 2026
Abstract
Bone marrow cell classification and nucleus segmentation in microscopic images are fundamental tasks for computer-aided diagnosis of lymphocytic leukemia. However, bone marrow cells from different subtypes exhibit high morphological similarity, and structural information is often constrained under optical microscopic imaging, posing challenges for stable and effective feature representation. To address this issue, we propose MTLL (Multitask Model on Lymphocytic Leukemia), a novel multitask approach that performs cell classification and nucleus segmentation within a unified network to exploit their complementary information. The model constructs a hybrid backbone for shared feature representation based on a CNN-Transformer architecture, in which Fuse-MBConv modules are tightly integrated with multilayer multi-scale transformers to enable deep fusion of local texture and global semantic information. For the segmentation branch, we design an AM (Atrous Multilayer Perceptron) decoder that combines atrous spatial pyramid pooling with multilayer perceptrons to fuse multi-scale information and accurately delineate nucleus boundaries. The classification branch incorporates prior knowledge of cell nuclei structures to capture subtle variations in cellular morphology and texture, thereby enhancing the model’s ability to distinguish between leukemia subtypes. Experimental results demonstrate that the MTLL model significantly outperforms existing advanced single-task and multi-task models in both lymphocytic leukemia classification and cell nucleus segmentation. These results validate the effectiveness of the multi-task feature-sharing strategy for lymphocytic leukemia diagnosis using bone marrow microscopic images. Full article

29 pages, 4423 KB  
Article
A Neighbor Feature Aggregation-Based Multi-Agent Reinforcement Learning Method for Fast Solution of Distributed Real-Time Power Dispatch Problem
by Baisen Chen, Chenghuang Li, Qingfen Liao, Wenyi Wang, Lingteng Ma and Xiaowei Wang
Electronics 2026, 15(7), 1415; https://doi.org/10.3390/electronics15071415 (registering DOI) - 28 Mar 2026
Abstract
To address the challenges posed by the strong uncertainty of high-proportion renewable energy sources (RES) to the secure and stable operation of distributed real-time power dispatch (D-RTPD) in new-type power systems, this paper proposes an integrated solution combining a neighborhood feature aggregation-based graph attention network (NFA-GAT) and multi-agent deep deterministic policy gradient (MADDPG). First, the D-RTPD problem is modeled as a decentralized partially observable Markov decision process (Dec-POMDP), which effectively captures the stochastic game characteristics of multi-regional agents and the partial observability of grid states. Second, the NFA-GAT is designed to enhance agents’ perception of grid operating states: by introducing a spatial discount factor, it realizes rational aggregation of multi-order neighborhood information while modeling the attenuation of electrical quantity influence with topological distance. Third, a prior-guided mechanism is integrated into the MADDPG framework to eliminate constraint-violating actions by setting their actor logits to negative infinity, improving training efficiency and strategy reliability. Simulation validations on the IEEE 118-bus test system (75.2% RES installed capacity ratio) show that the proposed method achieves efficient training convergence. Compared with the multi-layer perceptron (MLP) structure, it attains higher cumulative reward values and scenario win rates. When compared with traditional model-driven (ADMM) and data-driven (Q-MIX) methods, the proposed method balances solution efficiency, operational safety (98.7% maximum line load rate, zero power flow violation rate), and economic performance ($12,845 daily dispatch cost), providing reliable technical support for D-RTPD under high-proportion RES integration. Full article
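The action-masking trick the abstract mentions (setting actor logits of constraint-violating actions to negative infinity) has a standard one-function sketch; the function name here is illustrative, but the mechanism is exactly as described.

```python
import math

def masked_softmax(logits, valid):
    """Softmax over actions after setting the logits of
    constraint-violating actions to -inf, so those actions
    receive exactly zero probability."""
    masked = [l if ok else float("-inf") for l, ok in zip(logits, valid)]
    m = max(masked)                          # stabilize the exponentials
    exps = [math.exp(l - m) for l in masked] # exp(-inf) == 0.0
    s = sum(exps)
    return [e / s for e in exps]
```

Because `exp(-inf)` is exactly `0.0`, the invalid action can never be sampled, and gradients through the remaining probabilities are unaffected by the masked entry.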

16 pages, 8167 KB  
Article
Cascaded Polynomial and MLP Regression for High-Precision Geometric Calibration of Ultraviolet Single-Photon Imaging System
by Wanhong Yan, Lingping He, Chen Tao, Tianqi Ma, Zhenwei Han, Sibo Yu and Bo Chen
Photonics 2026, 13(4), 330; https://doi.org/10.3390/photonics13040330 (registering DOI) - 28 Mar 2026
Abstract
To meet the requirements of quantitative elemental analysis in the ultraviolet (UV) spectrum, a UV single-photon imaging system was developed, integrating a digital micromirror device (DMD) and a single photon-counting imaging detector, enabling high sensitivity, high resolution, and a wide dynamic range. However, intrinsic geometric distortion poses a significant challenge to accurate spectral calibration. A hybrid correction framework is proposed, cascading polynomial coarse correction with multilayer perceptron (MLP) fine regression, improving calibration accuracy. The method utilizes a full-field dot-array mask projected by the DMD to acquire distortion-reference image pairs. The polynomial model rapidly captures the dominant high-order distortion, while a lightweight MLP performs non-parametric fine regression of residual displacements, achieving a mean error of 0.84 pixels. This approach reduces the root mean square (RMS) error to 1.01 pixels, outperforming traditional direct linear transformation (5.35 pixels) and pure polynomial models (1.33 pixels), while the nonlinearity index decreases from 0.35° to 0.05°. In addition, the method demonstrates stable performance across multi-scale checkerboard patterns ranging from 128 to 280 pixels, with RMS errors remaining around the 1-pixel level. These results validate the high-precision distortion suppression and robust cross-scale performance of the proposed framework. By leveraging DMD-generated patterns for self-calibration, this method eliminates the need for external targets, offering a scalable solution for high-end spectrometer calibration. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)
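The cascade in the abstract above (a coarse parametric stage followed by a learned fine stage that regresses the residuals) can be illustrated with simple stand-ins: a least-squares line for the coarse polynomial, and a per-bin mean-residual correction for the fine MLP. All data and names below are synthetic assumptions for the sketch.

```python
def fit_line(xs, ys):
    """Least-squares line fit — stand-in for the coarse polynomial stage."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bin_means(xs, resid, nbins=5):
    """Per-bin mean residual — crude stand-in for the fine MLP regressor."""
    lo, hi = min(xs), max(xs)
    idx = lambda x: min(int((x - lo) / (hi - lo) * nbins), nbins - 1)
    sums, counts = [0.0] * nbins, [0] * nbins
    for x, r in zip(xs, resid):
        sums[idx(x)] += r
        counts[idx(x)] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)], idx

def rms(errs):
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# synthetic "distortion": a linear trend plus a structured residual
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + (0.3 if x > 2.5 else -0.3) for x in xs]

a, b = fit_line(xs, ys)
coarse = [y - (a + b * x) for x, y in zip(xs, ys)]      # residual after stage 1
means, idx = bin_means(xs, coarse)
fine = [r - means[idx(x)] for x, r in zip(xs, coarse)]  # residual after stage 2
```

The point of the cascade is visible in the residual norms: the fine stage removes structure the coarse model cannot express, so the RMS error drops, mirroring the paper's 5.35 → 1.33 → 1.01 pixel progression in spirit.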

22 pages, 3647 KB  
Article
Addressing Class Imbalance in Predicting Student Performance Using SMOTE and GAN Techniques
by Fatema Mohammad Alnassar, Tim Blackwell, Elaheh Homayounvala and Matthew Yee-king
Appl. Sci. 2026, 16(7), 3274; https://doi.org/10.3390/app16073274 (registering DOI) - 28 Mar 2026
Abstract
Virtual Learning Environments (VLEs) have become increasingly popular in education, particularly with the rise of remote learning during the COVID-19 pandemic. Assessing student performance in VLEs is challenging, and the accurate prediction of final results is of great interest to educational institutions. Machine learning classification models have been shown to be effective in predicting student performance, but the accuracy of these models depends on the dataset’s size, diversity, quality, and feature type. Class imbalance is a common issue in educational datasets, but there is a lack of research on addressing this problem in predicting student performance. In this paper, we present an experimental design that addresses class imbalance in predicting student performance by using the Synthetic Minority Over-sampling Technique (SMOTE) and Generative Adversarial Network (GAN) technique. We compared the classification performance of seven machine learning models (i.e., Multi-Layer Perceptron (MLP), Decision Trees (DT), Random Forests (RF), Extreme Gradient Boosting (XGBoost), Categorical Boosting (CATBoost), K-Nearest Neighbors (KNN), and Support Vector Classifier (SVC)) using different dataset combinations, and our results show that SMOTE techniques can improve model performance, and GAN models can generate useful simulated data for classification tasks. Among the SMOTE resampling methods, SMOTE NN produced the strongest performance for the RF model, achieving a Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) of 0.96 and a Type II error rate of 8%. For the generative data experiments, the XGBoost model demonstrated the best performance when trained on the GAN-generated dataset balanced using SMOTE NN, attaining a ROC AUC of 0.97 and a reduced Type II error rate of 3%. These results indicate that the combined use of class balancing techniques and generative synthetic data augmentation can enhance student outcome prediction performance. Full article
(This article belongs to the Topic Explainable AI in Education)
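SMOTE's core idea — synthesizing minority-class points by interpolating between a sample and one of its nearest minority neighbours — fits in one function. This is a from-scratch sketch, not the paper's pipeline (which would typically use `imbalanced-learn`); `k`, `n_new`, and the seed are illustrative.

```python
import random

def smote(minority, k=2, n_new=5, seed=0):
    """Generate synthetic minority samples: pick a minority point,
    find its k nearest minority neighbours (Euclidean), and place
    a new point at a random position on the connecting segment."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        nbrs = sorted((p for p in minority if p is not x),
                      key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(nbrs)
        t = rng.random()                      # interpolation factor in [0, 1)
        out.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return out
```

Because every synthetic point is a convex combination of two real minority points, the new samples stay inside the minority class's convex hull rather than being arbitrary noise.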

21 pages, 4699 KB  
Article
Leveraging Deep Learning to Construct a Programmed Cell Death-Driven Prognostic Signature in Acute Myeloid Leukemia
by Chunlong Zhang, Haisen Ni, Ziyi Zhao and Ning Zhao
Curr. Issues Mol. Biol. 2026, 48(4), 354; https://doi.org/10.3390/cimb48040354 - 27 Mar 2026
Abstract
Acute myeloid leukemia (AML) is an aggressive hematologic malignancy characterized by profound molecular heterogeneity and high relapse rates, posing significant clinical challenges. Programmed cell death (PCD), encompassing diverse regulated modalities such as apoptosis, necroptosis, and ferroptosis, plays a key role in leukemogenesis and therapeutic response; however, a comprehensive prognostic framework integrating multi-modal PCD pathways in AML remains elusive. In this study, we performed a systematic transcriptomic analysis of 1624 genes associated with 13 distinct PCD forms. A novel computational pipeline combining a variational autoencoder (VAE) for dimensionality reduction and a multilayer perceptron (MLP) for classification was employed to identify robust PCD-related biomarkers, interpreted via SHapley Additive exPlanations (SHAP) analysis. This approach identified 48 candidate genes with discriminative potential between AML and normal bone marrow. Unsupervised consensus clustering based on these genes delineated two molecular subtypes exhibiting divergent clinical outcomes and immune microenvironment profiles. One subtype demonstrated an immunosuppressive phenotype, characterized by enriched regulatory T cells, M2 macrophages, and elevated expression of inhibitory immune checkpoints, correlating with inferior survival. We developed an 8-gene prognostic signature (SORL1, PIK3R5, RIPK3, ELANE, GPX1, VNN1, CD74, and IL3RA) that effectively categorized patients into high- and low-risk groups with notable survival differences, validated across independent cohorts. A prognostic nomogram combining the risk score, age, and cytogenetic risk enhanced the prediction accuracy for overall survival. Our study presents an integrative model that connects multi-modal PCD pathways to AML prognosis, offering a new molecular subtyping system and a clinically applicable risk assessment tool for improved prognostication and personalized treatment strategies. Full article
(This article belongs to the Special Issue Linking Genomic Changes with Cancer in the NGS Era, 3rd Edition)

23 pages, 1545 KB  
Article
Advanced Hybrid Deep Learning Framework for Short-Term Solar Radiation Forecasting Using Temporal and Meteorological Features
by Farrukh Hafeez, Zeeshan Ahmad Arfeen, Muhammad I. Masud, Abdoalateef Alzhrani, Mohammed Aman, Nasser Alkhaldi and Mehreen Kausar Azam
Processes 2026, 14(7), 1081; https://doi.org/10.3390/pr14071081 - 27 Mar 2026
Abstract
Short-term forecasting of solar radiation is essential for the efficient operation of solar energy systems. This study presents a neural network-based approach for short-term solar radiation forecasting using a hybrid framework that integrates temporal characteristics with weather-based features. The proposed model combines a Gated Recurrent Unit (GRU) to capture short-term temporal dynamics, a Transformer Encoder, and a Multilayer Perceptron (MLP) to integrate these representations for final prediction. Key meteorological variables, including temperature, humidity, and wind speed, are incorporated along with engineered time-related features such as lagged values, rolling statistics, and cyclical time-of-day encodings. The results demonstrate that the hybrid model effectively integrates sequential learning and feature interaction, leading to improved forecasting accuracy. The proposed approach achieves a test Mean Absolute Error (MAE) of 0.056, Root Mean Square Error (RMSE) of 0.086, and coefficient of determination (R2) of 0.92, outperforming benchmark models such as AutoRegressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), GRU, and Extreme Gradient Boosting (XGBoost). The model maintains stable performance across cross-validation folds, multiple forecasting horizons, and varying weather conditions. These findings indicate that the proposed framework provides a reliable and practical solution for accurate short-term solar radiation forecasting, supporting real-time solar energy management and renewable energy system optimization. Full article
(This article belongs to the Special Issue Advanced Technologies of Renewable Energy Sources (RESs))
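The engineered time features the abstract lists — cyclical time-of-day encodings and lagged values — are straightforward to sketch. The function names and lag choices below are illustrative assumptions, not the paper's exact feature set.

```python
import math

def cyclical_hour(h):
    """Encode hour-of-day on the unit circle so 23:00 and 00:00
    end up close together instead of 23 units apart."""
    ang = 2.0 * math.pi * h / 24.0
    return math.sin(ang), math.cos(ang)

def add_lags(series, lags=(1, 2, 3)):
    """Build rows of lagged-value features plus the current target;
    rows without a full lag history are dropped."""
    rows = []
    for t in range(max(lags), len(series)):
        rows.append([series[t - l] for l in lags] + [series[t]])
    return rows
```

The sine/cosine pair removes the artificial midnight discontinuity that a raw hour feature would impose on any distance-based or gradient-based model.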
22 pages, 1692 KB  
Article
A Novel AAF-SwinT Model for Automatic Recognition of Abnormal Goat Lung Sounds
by Shengli Kou, Decao Zhang, Jiadong Yu, Yanling Yin, Weizheng Shen and Qiutong Cen
Animals 2026, 16(7), 1021; https://doi.org/10.3390/ani16071021 - 26 Mar 2026
Abstract
In abnormal goat lung sound recognition, high inter-class similarity and large intra-class variability pose significant challenges. To address this issue and improve recognition performance, we propose a deep learning model, AAF-SwinT, based on an improved Swin Transformer. The model replaces the original Swin Transformer self-attention module with Axial Decomposed Attention (ADA), modeling the temporal and frequency axes separately and integrating attention weights to mitigate inter-class feature similarity. Adaptive Spatial Aggregation for Patch Merging (ASAP) is designed to emphasize key time-frequency regions, and a Frequency-Aware Multi-Layer Perceptron (FAM) is introduced to model features across different frequency bands, further enhancing the discriminative ability for abnormal lung sounds. Experiments on a self-constructed goat lung sound dataset demonstrate that AAF-SwinT achieves an accuracy of 88.21%, outperforming existing mainstream Transformer-based models by 2.68–5.98%. Ablation studies further confirm the effectiveness of each proposed module, improving the accuracy of the baseline Swin Transformer model from 85.53% to 88.21%. These results indicate that the proposed approach exhibits strong robustness and practical potential for abnormal lung sound recognition in goats, providing technical support for early diagnosis and management of respiratory diseases in large-scale goat farming. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Veterinary Medicine)
44 pages, 11575 KB  
Article
GeoAI-Driven Land Cover Change Prediction Using Copernicus Earth Observation and Geospatial Data for Law-Compliant Territorial Planning in the Aosta Valley (Italy)
by Tommaso Orusa, Duke Cammareri and Davide Freppaz
Land 2026, 15(4), 533; https://doi.org/10.3390/land15040533 - 25 Mar 2026
Abstract
Mapping land cover, monitoring its changes, and simulating future alterations are essential tasks for sustainable land management. These processes enable accurate assessment of environmental impacts, support informed policymaking, and assist in the planning needed to mitigate risks related to urban expansion, deforestation, and climate change. This study proposes a GeoAI-based framework leveraging Multilayer Perceptron (MLP), a class of Artificial Neural Networks (ANNs), to predict land cover changes in the Aosta Valley region (NW Italy). The model uses Copernicus Earth Observation data, specifically Sentinel-1 and Sentinel-2 imagery, and is trained and validated on land cover maps derived from different time periods previously validated with ground truth data. The objective is to provide a predictive tool capable of simulating potential future landscape configurations, supporting proactive regional land use planning including regulatory constraints under the current land use plan. Model performance is evaluated using accuracy metrics. The land cover classification methodology follows established approaches in the scientific literature, adapted to the specific geomorphological characteristics of the Aosta Valley. To explore and visualize potential future land cover transitions, Sankey and chord diagrams are used in combination with zonal statistics and thematic plots. These provide detailed insights into the intensity, direction, and magnitude of landscape dynamics. Training data were stratified-sampled across the study area, covering a diverse set of land cover classes to ensure robustness and generalization of the MLP model. This GeoAI approach offers a scalable and replicable methodology for anticipating land cover dynamics, identifying vulnerable areas, and informing adaptive environmental management strategies at the regional scale, while simultaneously considering the latest urban planning regulations. Full article
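The stratified sampling of training data mentioned above — drawing the same fraction from every land cover class so rare classes stay represented — can be sketched as follows. The function name and parameters are illustrative, not from the paper.

```python
import random

def stratified_sample(items, labels, frac=0.5, seed=0):
    """Sample the same fraction from every class; each class keeps
    at least one sample, so rare classes are never dropped."""
    rng = random.Random(seed)
    by_cls = {}
    for it, lab in zip(items, labels):
        by_cls.setdefault(lab, []).append(it)
    out = []
    for lab, grp in by_cls.items():
        n = max(1, round(frac * len(grp)))   # floor of 1 per class
        out.extend(rng.sample(grp, n))
    return out
```

A plain random sample of the same size could easily miss a class that covers only a sliver of the region; stratification guarantees every class contributes to training.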

22 pages, 4755 KB  
Article
Comparative Assessment of Supervised Machine Learning Models for Predicting Water Uptake in Sorption-Based Thermal Energy Storage
by Milad Tajik Jamalabad, Elham Abohamzeh, Daud Mustafa Minhas, Seongbhin Kim, Dohyun Kim, Aejung Yoon and Georg Frey
Energies 2026, 19(7), 1619; https://doi.org/10.3390/en19071619 - 25 Mar 2026
Abstract
In this study, supervised machine learning (ML) regression models are employed to predict water uptake during the sorption process in a sorption reactor for thermal energy storage applications. Two main methods are used to study sorption storage systems: experimental studies and numerical simulations. Experimental studies involve physical testing and measurements but are often costly and time-consuming. Numerical simulations are more flexible and cost-effective, though they can require significant computational resources for large or complex systems. To address these challenges, researchers are increasingly employing various machine learning techniques, which offer strong potential for data analysis and predictive modeling. In this study, CFD-based sorption simulations are integrated with machine learning models to predict the spatiotemporal evolution of water uptake. Several ML techniques including support vector regression (SVR), Random Forest, XGBoost, CatBoost (gradient boosting decision trees), and multilayer perceptron neural networks (MLPs) are evaluated and compared. A fixed-bed reactor equipped with fins and tubes is considered within a closed adsorption thermal storage system. Numerical simulations are conducted for three different fin lengths (10 mm, 25 mm, and 35 mm) to generate a comprehensive dataset for training the ML models and capturing the complex temporal evolution of water uptake, thereby enabling predictions for unseen fin geometries. The results indicate that neural network-based models achieve superior predictive performance compared to the other methods. For water uptake training, the mean absolute error (MAE), root mean squared error (RMSE), and coefficient of determination R2 are approximately 2.83, 4.37, and 0.91, respectively. The predicted water uptake shows close agreement with the numerical simulation results. For the prediction cases, the MAE, MSE, and R2 values are approximately 1.13, 1.2, and 0.8, respectively. Overall, the study demonstrates that machine learning models can accurately predict water uptake beyond the training dataset, indicating strong generalization capability and significant potential for improving thermal management system design. Additionally, the proposed approach reduces simulation time and computational cost while providing an efficient and reliable framework for modeling complex sorption processes in thermal energy storage systems. Full article
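The three regression metrics this abstract reports (MAE, RMSE, R2) have compact textbook definitions, sketched here for reference; these are the standard formulas, not code from the paper.

```python
def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error."""
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    m = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - m) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```

Note that R2 = 0 corresponds to predicting the mean of the targets, and R2 = 1 to a perfect fit, which is why values around 0.8–0.91 indicate a useful but imperfect surrogate model.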

10 pages, 2178 KB  
Article
Pan-Cancer Prediction of Genomic Alterations from H&E Whole-Slide Images in a Real-World Clinical Cohort
by Dongheng Ma, Hinano Nishikubo, Tomoya Sano and Masakazu Yashiro
Genes 2026, 17(4), 371; https://doi.org/10.3390/genes17040371 (registering DOI) - 25 Mar 2026
Abstract
Background: Predicting genomic alterations from routine hematoxylin and eosin (H&E) whole-slide images (WSIs) may help triage molecular testing. Methods: We retrospectively enrolled 437 patients at Osaka Metropolitan University Hospital across 26 cancers, matched with clinical gene-panel data. We curated 1023 binary endpoints across SNV, CNV, and SV categories. We extracted slide embeddings from five pathology foundation models (Prism, GigaPath, Feather, Chief, and Titan) using a unified feature extraction pipeline and benchmarked them using a lightweight downstream Multi-Layer Perceptron (MLP) classifier. Using the best-performing patch feature system, we trained a multi-instance learning model to assess incremental benefit. Results: Titan achieved the highest and most stable transfer performance, with a median endpoint-wise Area Under the Receiver Operating Characteristic curve (AUROC) of 0.77 in the slide benchmarking; at the patch-level, prediction of APC_SNV reached an AUROC of 0.916, and prediction of KRAS_SNV reached an AUROC of 0.811 on the held-out test set. Conclusions: In a heterogeneous clinical gene-panel setting, pathology foundation models can provide strong baseline genomic-prediction signals without additional fine-tuning. We propose a practical, deployment-oriented two-stage workflow: rapid slide-embedding screening to prioritize robust representations and candidate endpoints, followed by patch-level training for high-value tasks where additional performance gains and interpretable regions are clinically worthwhile. Full article
(This article belongs to the Special Issue Computational Genomics and Bioinformatics of Cancer)
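The screening stage of the two-stage workflow above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the foundation-model embeddings and the hospital cohort are not public, so synthetic feature vectors and a synthetic binary endpoint stand in, with scikit-learn's `MLPClassifier` playing the role of the lightweight downstream classifier and endpoint-wise AUROC as the metric.

```python
# Sketch of the slide-embedding screening step: a frozen embedding per slide,
# a small MLP head, and AUROC on a held-out split. All data here is synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 400, 64                        # slides, embedding width (illustrative)
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w + rng.normal(size=n)) > 0  # synthetic binary genomic endpoint

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"endpoint-wise AUROC: {auroc:.3f}")
```

Because the MLP head is cheap to train, this step can be repeated per endpoint and per embedding model to rank candidates before committing to patch-level training.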

27 pages, 16965 KB  
Article
On-Device Motion Activity Intensity Recognition Using Smartwatch Accelerator
by Seungyeon Kim and Jaehyun Yoo
Electronics 2026, 15(7), 1351; https://doi.org/10.3390/electronics15071351 - 24 Mar 2026
Abstract
Wearable device-based Human Activity Recognition (HAR) is widely used in health management, rehabilitation, and personal safety. While contemporary HAR research effectively classifies a wide range of discrete activities, there remains a significant gap in organizing these heterogeneous motions into a structured intensity framework suitable for continuous risk assessment. Furthermore, many high-performing models rely on computationally intensive architectures that hinder real-time deployment on resource-constrained wearables. We propose an on-device method for estimating five-level activity intensity in real time using only accelerometer signals from a commercial smartwatch. To bridge the gap between simple identification and intensity modeling, 13 dynamic and emergency-like wrist motions were integrated with 11 daily activities from the PAMAP2 dataset, yielding 21 activities mapped onto an ordinal five-level intensity scale. A fine-tuned Multi-Layer Perceptron (MLP) classifier trained on this integrated dataset achieved 0.939 accuracy and a quadratic weighted kappa (QWK) of 0.971. The model was deployed on a Galaxy Watch 7, achieving inference latency under 1 ms and a model size under 0.1 MB, confirming real-time feasibility. This approach demonstrates that organizing diverse activities into a lightweight, intensity-aware framework provides a robust foundation for safety-aware monitoring systems under real-world, on-device constraints.
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
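The quadratic weighted kappa reported above is the natural metric for an ordinal five-level scale, because it penalizes a prediction one intensity level off far less than one four levels off. A small, self-contained illustration with made-up labels (not the paper's data) shows how QWK differs from plain accuracy:

```python
# Illustrative only: scoring ordinal five-level intensity predictions with
# quadratic weighted kappa (QWK) versus plain accuracy.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 2, 1, 0, 3, 4]  # made-up labels, 0 = rest .. 4 = highest
y_pred = [0, 1, 2, 3, 4, 1, 1, 0, 4, 4]  # two off-by-one errors

acc = accuracy_score(y_true, y_pred)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"accuracy: {acc:.3f}, QWK: {qwk:.3f}")
```

With only near-miss errors, QWK stays well above accuracy, which is why a model can report 0.939 accuracy alongside a higher QWK of 0.971.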

27 pages, 5821 KB  
Article
Experimental Comparative Evaluation of Machine Learning Methods for Early Multi-Fault Detection in Brushless DC Motors
by Mehmet Şen and Mümtaz Mutluer
Eng 2026, 7(4), 145; https://doi.org/10.3390/eng7040145 - 24 Mar 2026
Abstract
Early and reliable fault detection in Brushless Direct Current (BLDC) motors is essential for improving system reliability and reducing unplanned industrial downtime. This study presents a controlled experimental investigation of data-driven machine learning approaches for the classification of multiple common BLDC motor faults. Four representative fault-related indicators were obtained under systematically designed operating conditions, and a consistent feature extraction procedure was applied prior to model development. A comparative evaluation was conducted using Multi-Layer Perceptron (MLP), Support Vector Machines (SVM), k-Nearest Neighbour (kNN), and decision tree-based classifiers. All models were trained and tested on the same dataset using an identical validation protocol to ensure methodological fairness and reproducibility. Performance was assessed through standard classification metrics, enabling a transparent comparison of predictive capability and stability. The results show that the MLP model achieved the highest overall classification accuracy (91.6%), closely followed by SVM (91.4%) and kNN (90.2%). Although the performance differences are moderate, the neural network demonstrated more consistent behaviour in scenarios where fault signatures exhibited overlapping characteristics. These findings suggest that non-linear feature interactions play a significant role in BLDC fault discrimination and can be effectively captured by multi-layer architectures. The study provides a reproducible experimental framework and a balanced performance assessment that may support both academic research and the practical development of intelligent condition monitoring systems for BLDC-driven applications.
(This article belongs to the Section Electrical and Electronic Engineering)
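The "identical validation protocol" idea above, i.e. every classifier seeing the same features under the same splits, can be sketched with scikit-learn. The motor-current dataset is not public, so `make_classification` generates a stand-in four-class problem with four features, mirroring the four fault-related indicators:

```python
# Sketch of a fair comparison protocol: one synthetic dataset, one shared
# stratified CV scheme, one scaling pipeline per model.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(
    n_samples=600, n_features=4, n_informative=4, n_redundant=0,
    n_classes=4, n_clusters_per_class=1, random_state=0
)
models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Tree": DecisionTreeClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # shared splits
results = {}
for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=cv)
    results[name] = scores.mean()
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```

Fixing the CV splits and preprocessing across models is what makes accuracy differences of a percentage point or two, as reported in the study, meaningful rather than artifacts of the split.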

24 pages, 4424 KB  
Article
Hybrid Attribution-Based Interpretable Deep Reinforcement Learning for Autonomous Driving Behavior Decision-Making
by Yaxuan Liu, Jiakun Huang, Mingjun Li, Qing Ye and Xiaolin Song
Appl. Sci. 2026, 16(6), 3096; https://doi.org/10.3390/app16063096 - 23 Mar 2026
Abstract
With the increasing deployment of autonomous driving systems, the opaque nature of deep reinforcement learning (DRL) decision models hinders understanding and validation of driving decisions. To address this challenge, we propose a Hybrid Attribution-based Interpretable Deep Reinforcement Learning framework (HA-IDRL) for autonomous driving behavior decision-making. The framework introduces a Hybrid Gradient–LRP (HGL) attribution mechanism that integrates gradient-based attribution and Layer-wise Relevance Propagation (LRP) to capture complementary sensitivity and contribution information, producing more consistent and comprehensive post hoc explanations. In addition to post hoc interpretability, we enhance structural interpretability by replacing the conventional multilayer perceptron (MLP) in the Dueling Deep Q-Network (Dueling DQN) architecture with Kolmogorov–Arnold Networks (KAN). By representing nonlinear interactions through learnable univariate functions and explicit summation structures, KAN provides inherently interpretable functional decompositions. The proposed framework is evaluated on a highway lane-changing task using the highway-env simulator. Experimental results show that HA-IDRL achieves decision-making performance comparable to representative DRL baselines, including Dueling DQN and Soft Actor-Critic (SAC), while providing explanations that are more stable and better aligned with human driving semantics. Moreover, the proposed method produces explanations with low computational overhead, enabling efficient and real-time interpretability in practical autonomous driving applications. Overall, HA-IDRL advances trustworthy autonomous driving by enabling high-performance decision-making and rigorous, multi-level interpretability, thereby improving the transparency and operational reliability of DRL-based driving policies.
(This article belongs to the Section Computing and Artificial Intelligence)
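The Dueling DQN architecture that HA-IDRL builds on splits the Q-function into a state-value stream and an advantage stream, combined as Q(s, a) = V(s) + A(s, a) − mean over a′ of A(s, a′). In the paper those streams are KANs rather than MLPs; the combination step itself is the same either way. A tiny NumPy sketch (with made-up stream outputs, not a trained policy) shows the aggregation:

```python
# Dueling aggregation: merge a scalar state value and per-action advantages
# into Q-values; subtracting the mean advantage keeps V identifiable.
import numpy as np

def dueling_q(value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Toy example: five discrete actions, e.g. highway-env's lane-change set.
q = dueling_q(value=1.0, advantages=[0.2, -0.1, 0.4, 0.0, -0.5])
print(q)  # the mean of q recovers the state value
```

Subtracting the mean advantage removes the offset ambiguity between the two streams, which is also what makes per-stream attributions (gradient- or LRP-based) comparable across actions.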
