Search Results (22,549)

Search Parameters:
Keywords = convolution neural networks

34 pages, 5548 KB  
Article
Impact of Simulated Artifacts on the Classification Performance of Apical Views in Transthoracic Echocardiography Using Convolutional Neural Networks
by Gabriela Bernadeta Orzeł-Łomozik, Łukasz Łomozik, Maciej Podolski, Martyna Rożek, Kalina Światlak, Weronika Radwan, Zuzanna Przybylska, Paulina Michalska, Maciej Pruski and Katarzyna Mizia-Stec
Bioengineering 2026, 13(5), 522; https://doi.org/10.3390/bioengineering13050522 (registering DOI) - 30 Apr 2026
Abstract
Background: In recent years, artificial intelligence (AI) methods, including deep convolutional neural networks (CNNs), have gained increasing importance in supporting the automated analysis of echocardiograms. The aim of this study was to evaluate the impact of selected image artifacts—motion blur, acoustic shadowing, and speckle noise—on the performance of automatic classification of standard transthoracic echocardiographic (TTE) views using deep learning models. Methods: The analysis included 217 TTE video clips (2170 frames) covering apical views: two-chamber (A2C), three-chamber (A3C), four-chamber (A4C), and five-chamber (A5C). Two convolutional neural network architectures—ResNet-18 and ResNet-34—were applied, initialized with weights pretrained on the ImageNet dataset (transfer learning). In a limited comparative scope, EfficientNet-B0, a ViT model used as a frozen feature extractor combined with Logistic Regression, and a classical HOG + SVM model were also included as reference methods. Classification performance was evaluated under conditions of controlled image degradation caused by motion blur, acoustic shadowing, and speckle noise. Results: All analyzed artifacts reduced classification performance, although the magnitude of this effect depended on artifact type. Speckle noise proved to be the most destructive, causing performance collapse across all evaluated methods at high severity. Motion blur and acoustic shadowing produced more differentiated degradation profiles. The ResNet models achieved the highest performance under reference conditions; however, after degradation, the ranking of models was no longer stable. In the comparative analysis, HOG + SVM showed the smallest relative performance loss under motion blur and the highest balanced accuracy under severe acoustic shadowing, whereas severe speckle remained critical for all models.
Conclusions: Image quality degradation significantly impairs TTE view classification performance, and evaluation based solely on reference-quality images does not fully reflect model robustness to artifacts. These findings indicate the need to complement standard model evaluation with a structured robustness analysis under degraded imaging conditions and highlight the importance of training and validation settings that better reflect real clinical practice. Full article
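The paper does not publish its degradation code; a minimal NumPy sketch of the multiplicative speckle-noise artifact it simulates might look like the following, where `severity` is an assumed, illustrative knob rather than the authors' parameterization:

```python
import numpy as np

def add_speckle(image: np.ndarray, severity: float, seed: int = 0) -> np.ndarray:
    """Multiplicative speckle: pixel * (1 + severity * Gaussian noise),
    clipped back into the original [0, 1] intensity range."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(image.shape)
    return np.clip(image * (1.0 + severity * noise), 0.0, 1.0)

frame = np.full((8, 8), 0.5)            # toy stand-in for an echo frame
mild = add_speckle(frame, 0.1)
severe = add_speckle(frame, 0.9)

# Higher severity moves pixels further from their clean values on average.
assert np.abs(severe - frame).mean() > np.abs(mild - frame).mean()
```

Sweeping `severity` over a grid and re-evaluating a trained classifier at each level is one way to reproduce the kind of robustness curves the study reports.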
21 pages, 7396 KB  
Article
Convolutional Neural Network for Specimen-Invariant Structural Health Monitoring of FRC Under Flexural Loading
by George M. Sapidis, Ioannis Kansizoglou, Maria C. Naoum, Nikos A. Papadopoulos, Konstantinos A. Tsintotas, Maristella E. Voutetaki and Antonios Gasteratos
Sensors 2026, 26(9), 2788; https://doi.org/10.3390/s26092788 (registering DOI) - 29 Apr 2026
Abstract
Reinforced Concrete (RC) structures experience progressive degradation over their service life due to mechanical loading and environmental exposure, leading to reduced bearing capacity and compromised structural safety. Incorporating discrete fibers into concrete mitigates crack propagation and enhances ductility, resulting in fiber-reinforced concrete (FRC) with superior fracture energy, durability, and sustainability characteristics. Despite these advantages, research on Structural Health Monitoring (SHM) techniques for FRC elements remains limited. The Electromechanical Impedance (EMI) method, which exploits piezoelectric transducers as both actuators and sensors, offers high sensitivity for detecting early-stage damage by monitoring variations in local mechanical impedance. This study investigates the effectiveness of a deep learning-enabled EMI framework for assessing the structural condition of FRC beams under flexural loading. A one-dimensional convolutional neural network (1D-CNN) is proposed to automatically extract salient features from high-frequency EMI signatures and classify structural health into three predefined states. The model is rigorously evaluated using specimen-invariant validation to ensure generalization across different FRC specimens, addressing a critical limitation of conventional cross-validation approaches in SHM research. Experimental tests on FRC beams instrumented with surface-bonded PZT transducers provide a dataset of 264 EMI responses for training and validation, enabling direct comparison between common and specimen-invariant validation schemes. The results demonstrate the superior robustness of the specimen-invariant approach and confirm the capability of the proposed 1D-CNN to identify flexural damage progression in FRC elements accurately. An ablation study further highlights the contribution of each architectural component to overall model performance. 
The findings underscore the potential of integrating EMI-based sensing with advanced deep learning models for reliable, automated, and scalable SHM of next-generation resilient concrete infrastructures. Full article
(This article belongs to the Special Issue Sensor-Based Structural Health Monitoring of Civil Infrastructure)
22 pages, 6175 KB  
Article
Accurate Identification of Ilex (Aquifoliaceae) Taxa Based on Leaf Morphology Using Deep Learning
by Lin Yang, Yizhe Zhao, Cheng Jin, Shichang Wu, Zeyu Lu, Mingzhuo Hao, Changwei Bi and Kewang Xu
Plants 2026, 15(9), 1365; https://doi.org/10.3390/plants15091365 (registering DOI) - 29 Apr 2026
Abstract
Holly (Ilex L.) is a genus of woody dioecious plants with substantial ecological and economic value. However, its high species diversity and morphological similarity make accurate identification challenging. To address this, we constructed a multi-taxon Ilex leaf image dataset. We then trained six deep learning models—GoogLeNet, ResNet50, ResNet101, DenseNet121, DenseNet169, and EfficientNet-B3—using a unified PyTorch framework on cloud computing resources. Leaf images were preprocessed by background removal, resizing, cropping, and normalization. Model performance was evaluated using accuracy, F1-score, and Grad-CAM visualizations. Under an image-level data split that may overestimate generalization, all six models achieved over 99% classification accuracy on preprocessed leaf images under controlled laboratory conditions. DenseNet121 and DenseNet169 performed best, reaching 99.65% accuracy. Because images of the same leaf or same plant could appear in both training and test sets under this split, plant-level cross-validation is required to assess real-world generalizability. The reported accuracies represent an upper-bound estimate under image-level splitting. The framework offers a rapid and accurate tool for preliminary screening under controlled conditions, but its performance on raw field photographs and across different collection sites remains to be validated. Full article
(This article belongs to the Special Issue Origin and Evolution of the East Asian Flora (EAF)—2nd Edition)
34 pages, 13121 KB  
Article
Mortality Forecasting Using LSTM-CNN Model
by Ning Zhang, Jingyang Chen, Hao Chen and Jingzhen Liu
Axioms 2026, 15(5), 324; https://doi.org/10.3390/axioms15050324 (registering DOI) - 29 Apr 2026
Abstract
Accurate mortality prediction is essential to actuarial practice as it is directly linked to insurance pricing, reserving, and the management of longevity risk. This study proposes a deep neural network (DNN) model for the mortality rates of multiple populations; it is composed of long short-term memory (LSTM) and convolutional neural network (CNN) components. As mortality trends evolve over long time horizons, and as capturing the complex dependencies among mortality rates across countries or regions with a linear model is challenging, the LSTM and CNN were applied to mortality modeling. The former can automatically learn long-term dependencies of sequential data, whereas the latter can extract local features from grid or sequential data. Formulated as a nonlinear generalization of the Lee–Carter decomposition, the model maps the log-mortality matrix log M to future log m(x,t) end-to-end and generates multi-step forecasts through dynamic recursive prediction. Then, the DNN and baseline models were used to fit mortality data of 21 countries from the Human Mortality Database (HMD), which were divided into training and test sets with the year 2000 as the split point. Extensive numerical experiments from the perspectives of accuracy, stability, and reliability of long-term forecasting revealed that DNN models yield better predictive performance, particularly the LSTM-CNN model. It combines the LSTM, CNN, and fully connected network (FCN) layers and thus exploits each deep neural network to fit nonlinear age, period, and cohort effects as well as their interactive terms to achieve better predictive performance. However, the CNN still outperformed other models for certain groups. In addition, the conclusions hold for remaining life expectancy. Full article
(This article belongs to the Special Issue Financial Mathematics and Econophysics)
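The LSTM-CNN above is framed as a nonlinear generalization of the Lee–Carter decomposition log m(x,t) = a_x + b_x k_t. The classic linear version it generalizes can be sketched with a rank-1 SVD (a toy illustration, not the paper's code; rows are ages, columns are years):

```python
import numpy as np

def lee_carter(log_m: np.ndarray):
    """Lee-Carter decomposition log m(x,t) = a_x + b_x * k_t via a
    rank-1 SVD of the age-centred log-mortality matrix."""
    a_x = log_m.mean(axis=1)                 # age profile (mean over years)
    centred = log_m - a_x[:, None]
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    b_x = u[:, 0] / u[:, 0].sum()            # normalise so sum(b_x) = 1
    k_t = s[0] * vt[0] * u[:, 0].sum()       # rescale so b_x * k_t is unchanged
    return a_x, b_x, k_t

# Toy check: a matrix that is exactly rank-1 after centring is recovered.
ages, years = 5, 10
a = np.linspace(-8, -2, ages)
b = np.full(ages, 1 / ages)
k = np.linspace(3, -3, years)
log_m = a[:, None] + np.outer(b, k)
a_hat, b_hat, k_hat = lee_carter(log_m)
assert np.allclose(a_hat, a)
assert np.allclose(np.outer(b_hat, k_hat), np.outer(b, k))
```

In the paper's setting, the deep model replaces this single bilinear term with learned nonlinear interactions of age, period, and cohort effects.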
22 pages, 5221 KB  
Article
Hybrid Deep Neural Network with Natural Language Processing Techniques to Analyze Customer Satisfaction with Delivery Platform Manager Responses
by Salihah Alotaibi
Appl. Sci. 2026, 16(9), 4359; https://doi.org/10.3390/app16094359 - 29 Apr 2026
Abstract
Delivery services have drawn much attention and become of topmost significance in urban areas by presenting online food delivery selections for a diversity of dishes from a wide range of restaurants, decreasing both travel and waiting times. Customer data analysis acts as a cornerstone in corporate strategy, allowing enterprises to gather and interpret user feedback and helping them to make informed decisions that drive future business development. However, major knowledge gaps remain due to the scarcity of literature review studies on these delivery services, hindering a complete understanding of customer satisfaction in this sector. Furthermore, there has been little systematic research on managerial response tactics to online consumer complaints and negative reviews. Researchers have contributed by applying artificial intelligence, including deep learning and machine learning models, to analyze customer sentiment and understand customer brand perceptions. This study presents a Hybrid Deep Neural Network Model for Customer Satisfaction Analysis (HDNNM-CSA), with the aim of developing an efficient model which is capable of accurately classifying customer satisfaction levels in delivery apps based on textual responses provided by customer experience managers. To achieve this, the model initially pre-processes text data using text cleaning, emoji removal, normalization, tokenization, stop word removal, and stemming to clean and standardize the unstructured text data for further analysis. Following this, term frequency–inverse document frequency-based word embedding is utilized to transform the pre-processed text into meaningful feature representations. Lastly, an ensemble architecture involving bidirectional long short-term memory, temporal convolutional, and graph convolutional networks is deployed to classify customer satisfaction levels with managers’ responses. A series of experimental analyses are performed, and the results are examined for numerous features. 
A comparative analysis demonstrates the enhanced performance of the HDNNM-CSA technique with respect to existing approaches. Full article
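The TF-IDF feature-extraction step described above is standard; a minimal scikit-learn sketch (with hypothetical, already cleaned and stemmed manager responses standing in for the study's data) could be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical pre-processed manager responses (cleaned, stop words removed, stemmed).
responses = [
    "sorri delay order refund issu",
    "thank feedback glad enjoy order",
    "sorri wrong item replac sent",
]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(responses)   # sparse (n_docs, vocab) matrix

assert features.shape[0] == len(responses)
assert "refund" in vectorizer.vocabulary_
```

The resulting sparse matrix is what a downstream classifier ensemble (in the paper, BiLSTM, TCN, and GCN branches) would consume as its input representation.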
27 pages, 3810 KB  
Article
Real-Time Energy Management of a Series Hybrid Wheel Loader Using Operating-Stage Recognition and ISSA-Optimized ECMS
by Tao Yu, Zhiguo Lei, Yubo Xiao and Xuesheng Shen
Energies 2026, 19(9), 2149; https://doi.org/10.3390/en19092149 - 29 Apr 2026
Abstract
Driven by increasingly stringent requirements for energy saving and emission reduction in non-road machinery, hybrid wheel loaders have attracted growing attention as a practical pathway toward cleaner construction equipment. However, conventional energy management strategies often show limited adaptability to highly transient operating cycles and struggle to balance fuel economy, real-time applicability, and battery charge sustainability. To address these issues, this study proposes an improved sparrow-search-algorithm-based equivalent consumption minimization strategy (ISSA-ECMS) for a series hybrid wheel loader. A quasi-static powertrain model was established, while ISSA was used to optimize both the hyperparameters of a Convolutional Neural Network-Long Short-Term Memory (CNN–LSTM) stage-recognition model and the stage-dependent ECMS parameters. A hidden Markov model (HMM)-based post-processing framework was further introduced to improve temporal consistency in operating-stage recognition. The results show that the optimized ISSA-CNN–LSTM achieved 93.22% accuracy, 93.08% Macro-F1, and 93.21% Weighted-F1, while HMM refinement further improved recognition accuracy from 94.02% to 97.92%. In energy management simulations, ISSA-ECMS maintained the terminal state of charge (SOC) at 50.0069%, reduced fuel consumption by 2.1% and 1.4% compared with conventional ECMS and A-ECMS, respectively, and increased the proportion of engine operating points in the economical region to 77.549%. Compared with dynamic programming, its fuel-consumption increase was only 0.28%, while retaining online applicability. These results demonstrate that the proposed method provides an effective and practical solution for real-time energy management of series hybrid wheel loaders. Full article
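The HMM post-processing step above, which smooths frame-by-frame stage labels into temporally consistent sequences, is typically a Viterbi decode over the classifier's per-step scores. A self-contained NumPy sketch (illustrative values, not the paper's transition matrix):

```python
import numpy as np

def viterbi(emission_logp: np.ndarray, trans_logp: np.ndarray) -> np.ndarray:
    """Most likely state path given per-step log-probabilities
    (rows = time steps, cols = states) and a log transition matrix."""
    T, S = emission_logp.shape
    score = emission_logp[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans_logp      # (from_state, to_state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emission_logp[t]
    path = np.zeros(T, dtype=int)
    path[-1] = score.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# A "sticky" transition prior removes a one-frame blip in the stage labels.
stay, switch = np.log(0.9), np.log(0.05)
trans = np.full((3, 3), switch)
np.fill_diagonal(trans, stay)
# Raw classifier prefers stage 0 everywhere except one noisy frame.
em = np.log(np.array([[.6, .2, .2]] * 3 + [[.3, .5, .2]] + [[.6, .2, .2]] * 3))
assert viterbi(em, trans).tolist() == [0, 0, 0, 0, 0, 0, 0]
```

Because two stage switches cost more than one off-peak emission, the blip is smoothed away, which is the kind of temporal-consistency gain the abstract reports.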
23 pages, 1944 KB  
Article
Intelligent Localization of Cross-Sectional Structural Damage in Molten Salt Receiver Tubes Using Mel Spectrograms and TSA-Optimized 2D-CNN
by Peiran Leng, Man Liang, Weihong Sun, Tiefeng Shao, Luowei Cao and Sunting Yan
Sensors 2026, 26(9), 2780; https://doi.org/10.3390/s26092780 - 29 Apr 2026
Abstract
In this paper, an intelligent localization framework based on deep learning is proposed to address the limitations of insufficient accuracy and robustness in defect identification and localization during the ultrasonic guided-wave non-destructive testing (NDT) of receiver tubes in tower-type molten salt Concentrated Solar Power (CSP) stations. In the proposed method, a 1D convolutional neural network (1D-CNN) initially processes raw time-series guided-wave signals, achieving coarse identification and preliminary localization of defective segments. Then, Mel spectrograms are employed to exploit multi-dimensional features in the time–frequency domain and transform 1D signals into 2D representations, thereby enriching feature diversity. A regression-based 2D-CNN was designed to predict the start and end points of defect segments, enabling precise interval localization. Furthermore, the Tree Seed Algorithm (TSA) was integrated to jointly optimize key hyperparameters, enhancing training efficiency and prediction accuracy. Experimental validation on a dataset of ultrasonic guided-wave signals from molten salt receiver tubes demonstrates that the TSA-optimized Mel+2D-CNN model achieves superior performance, with a Mean Absolute Error (MAE) of 75.11 sampling points and a Coefficient of Determination (R²) of 0.90. At an Intersection over Union (IoU) threshold of 0.3, the model achieves a hit rate of 89.21%, exhibiting significantly higher localization accuracy and stability compared to the 1D-CNN baseline model. These findings indicate that the proposed method effectively enhances the accuracy and robustness of guided wave-based defect localization in slender structures. While promising, the model’s generalization capability remains dependent on the data distribution and operating conditions; future work will focus on validating its engineering applicability across diverse, multi-scenario industrial environments. Full article
(This article belongs to the Special Issue Ultrasonic Sensors and Ultrasonic Signal Processing)
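The IoU-thresholded hit rate used above generalizes naturally to 1-D defect intervals: a predicted (start, end) pair counts as a hit when its overlap with the ground-truth interval, relative to their union, exceeds the threshold. A minimal sketch of that metric (presumed definition, matching standard IoU usage):

```python
def interval_iou(pred, true):
    """IoU of two 1-D index intervals given as (start, end) pairs."""
    inter = max(0, min(pred[1], true[1]) - max(pred[0], true[0]))
    union = (pred[1] - pred[0]) + (true[1] - true[0]) - inter
    return inter / union if union > 0 else 0.0

def hit_rate(preds, trues, threshold=0.3):
    hits = sum(interval_iou(p, t) >= threshold for p, t in zip(preds, trues))
    return hits / len(preds)

# A 60-sample overlap on two 100-sample intervals gives IoU 60/140 > 0.3 (hit);
# a prediction far from the true defect gives IoU 0 (miss).
assert interval_iou((0, 100), (50, 150)) == 50 / 150
assert hit_rate([(0, 100), (0, 10)], [(40, 140), (500, 510)], 0.3) == 0.5
```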
36 pages, 14306 KB  
Article
Enhancing SDN Intrusion Detection via Multi-Hybrid Deep Learning Fusion and Explainable AI
by Usman Ahmed and Muhammad Tariq Sadiq
Mathematics 2026, 14(9), 1498; https://doi.org/10.3390/math14091498 - 29 Apr 2026
Abstract
Software-defined networking (SDN) represents a paradigm shift in network management, but its centralized control plane introduces new and severe security vulnerabilities. Conventional intrusion detection systems, including signature- and rule-based methods, lack adaptability and interpretability in the face of evolving threats. This paper proposes a multi-hybrid deep learning fusion ensemble (MHDLFE) to enhance intrusion detection in SDN environments. The framework integrates Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) models via feature fusion and a meta-classifier, thereby improving both detection performance and robustness. To address the critical need for transparency in security systems, the proposed approach incorporates Explainable AI techniques, specifically Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), providing interpretable insights into model decisions. The proposed model achieves strong performance on the NSL-KDD and CIC-IDS2017 datasets, attaining near-perfect binary classification scores of 97.91% and 93.30%, and multiclass accuracies of 98.61% and 97.91%, respectively. These results demonstrate that the proposed framework delivers an effective and trustworthy SDN intrusion detection system by combining deep learning, ensemble fusion, and explainable AI to support accurate, transparent, and reliable cybersecurity decision-making. Full article
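The fusion-plus-meta-classifier pattern described above can be sketched with scikit-learn's stacking API. The base learners here are lightweight stand-ins on synthetic data (the paper fuses DNN, CNN, RNN, and LSTM branches on NSL-KDD and CIC-IDS2017); only the stacking structure is illustrated:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for flow features labelled benign/attack.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # the meta-classifier
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
assert 0.5 < acc <= 1.0
```

Each base model's cross-validated predictions become the meta-classifier's input features, which is the "feature fusion and meta-classifier" idea in the abstract.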
22 pages, 11482 KB  
Article
Deployment-Oriented Lithium-Ion Battery Remaining Useful Life Prediction with Adaptive History Selection and Parameter-Efficient Updating
by Dongxiao Ren, Xinyu Zhong, Zixiang Ye and Xing-Liang Xu
Energies 2026, 19(9), 2135; https://doi.org/10.3390/en19092135 - 29 Apr 2026
Abstract
For battery management systems, accurate remaining useful life (RUL) prediction is important, yet models trained offline may not remain well matched to individual cells during operation, because degradation trajectories differ across cells and evolve over aging stages. This study examines a lightweight online personalization strategy under a representative convolutional neural network–long short-term memory (CNN–LSTM) online-transfer setting while keeping the backbone architecture and fixed input length unchanged. The proposed method restricts online updates to a small adaptation path and adjusts the effective history span according to recent degradation behavior. Experiments on 22 test cells under unseen protocols show that the method improves average post-adaptation RUL performance relative to the representative baseline, reducing the root mean square error (RMSE) from 186.00 to 160.58. The number of trainable parameters involved in online updating is reduced from 74,880 to 2193, while the average update time per step decreases slightly from 2.54 s to 2.29 s. Cell-level analysis further shows that the benefit is not uniform across all cells, motivating more selective updating for safer deployment. Overall, the results indicate that lightweight online personalization can improve the accuracy–cost trade-off of deployment-oriented battery prognostics. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
22 pages, 2593 KB  
Article
Revisiting CNN-Based Parkinson’s Disease Classification from DaT-SPECT Images: The Role of Training Protocols
by Denis Chegodaev, Lilies Handayani, Ray Steven, Takayuki Shibutani, Kenichi Nakajima and Kenji Satou
Electronics 2026, 15(9), 1883; https://doi.org/10.3390/electronics15091883 - 29 Apr 2026
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder for which dopamine transporter single-photon emission computed tomography (DaT-SPECT) is widely used to support clinical diagnosis. Recent convolutional neural network (CNN)-based studies have reported high classification accuracy on DaT-SPECT datasets. However, the relative contributions of network architecture and training protocol design to these results remain insufficiently explored, particularly for small and moderately sized medical imaging datasets. In this study, a training-oriented evaluation of CNNs for PD classification is conducted using two DaT-SPECT datasets derived from the Parkinson’s Progression Markers Initiative (PPMI). First, a previously published experimental setup is faithfully reproduced on a curated dataset of 645 DaT-SPECT images using identical preprocessing procedures, data splits, and model architectures. Under the reproduced experimental setting, previously reported classification accuracies for individual CNN architectures ranged from 93.02% to 95.34%, while an ensemble approach achieved 98.45% accuracy. The same architectures are then evaluated using a unified training protocol incorporating standardized optimization and regularization strategies. Using this protocol, ResNet50 achieves 100% classification accuracy, with all evaluation metrics reaching 1.0, while VGG16, Inception V3, and Xception each achieve an accuracy of 99.22%. On a larger, independently constructed PPMI-derived dataset with higher spatial resolution, previously reported classification accuracies ranged from 93.27% for PD vs. SWEDD to 95.33% for PD vs. control. Using the proposed unified training protocol, the evaluated CNN architectures achieve classification accuracies of 98% for PD vs. SWEDD and 100% for PD vs. control. These results indicate that training protocol design has a stronger influence on DaT-SPECT-based PD classification performance than the specific choice of CNN architecture. 
Full article
15 pages, 3122 KB  
Article
AttentionMS-Net: An Attention-Enhanced Multi-Scale Framework for Alzheimer’s Disease Classification with Subject-Level Validation
by Osman Yildiz and Abdulhamit Subasi
Appl. Sci. 2026, 16(9), 4338; https://doi.org/10.3390/app16094338 - 29 Apr 2026
Abstract
Many MRI-based Alzheimer’s disease (AD) classification studies report near-perfect accuracy; however, these results are often inflated by data leakage caused by slice-level splitting, where correlated slices of the same subject appear in both the training and test sets. In this study, we introduce AttentionMS-Net, an attention-enhanced multi-scale deep learning architecture that combines channel-spatial attention using the Convolutional Block Attention Module (CBAM) with multi-scale feature aggregation from intermediate EfficientNet-B3 layers for binary AD classification (Non-Demented vs. Demented). Using strict subject-level 10-fold cross-validation on the OASIS dataset (347 subjects, 86,437 slices), our experiments clearly show the impact of data leakage: image-level 10-fold CV achieves about 99.9% accuracy, whereas subject-level 10-fold CV with the same model results in 80.8% accuracy—a reduction of roughly 19 percentage points. Aggregating predictions at the subject level further improves accuracy to 82.4% (AUC: 0.889), suggesting that prediction errors are mainly slice-specific rather than subject-specific. Systematic ablation reveals a complementary interaction between attention and multi-scale components, neither of which performs as well alone. Post-hoc Grad-CAM++ visualization and SHAP analysis suggest that AttentionMS-Net’s attention patterns are focused on ventricular regions—visual biomarkers indicative of overall brain atrophy rather than early hippocampal degeneration. These findings highlight the unreliability of current benchmarks and establish methodologically rigorous baselines for future AD classification research. Full article
(This article belongs to the Special Issue MR-Based Neuroimaging, 2nd Edition)
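The subject-level splitting that drives the roughly 19-point accuracy drop above is exactly what group-aware cross-validation enforces: all slices from one subject stay on the same side of every split. A minimal scikit-learn sketch with hypothetical subject IDs:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Six hypothetical subjects, four MRI slices each; slices from one subject
# must never straddle train and test, or accuracy is inflated by leakage.
subjects = np.repeat(["s1", "s2", "s3", "s4", "s5", "s6"], 4)
X = np.arange(len(subjects)).reshape(-1, 1)       # stand-in slice features
y = np.repeat([0, 1, 0, 1, 0, 1], 4)              # per-subject labels

for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=subjects):
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```

Replacing a plain `KFold` over slices with `GroupKFold` over subjects is the one-line change that separates the 99.9% and 80.8% figures reported in the abstract.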
21 pages, 3762 KB  
Article
GIS Mechanical Fault Classification Method Based on Composite Dimensionally Upscaled Images of Vibration Signals and Vision Transformer
by Su Xu, Bin Jia, Yi Liu, Fei Wang, Xiaobao Hu, Ming Ma, Yulong Yang and Jingang Wang
Electronics 2026, 15(9), 1879; https://doi.org/10.3390/electronics15091879 - 29 Apr 2026
Abstract
To address the challenges of extracting mechanical fault features in Gas Insulated Switchgear (GIS) under complex operating conditions and the insufficient diagnostic accuracy associated with traditional one-dimensional time-series signals, this paper proposes a GIS fault-classification method based on composite dimensional upscaling images of vibration signals and the Vision Transformer (ViT) algorithm. This method first employs a sliding window slicing strategy to segment the raw long-sequence vibration signals into multiple overlapping time segments. Then, it utilizes the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF), and Markov Transition Field (MTF) to perform composite dimensional upscaling on these segmented signals, projecting the resulting features into a three-channel RGB composite two-dimensional image. Subsequently, the global self-attention mechanism of the Vision Transformer (ViT) processes the dimensionally upscaled data to achieve the fault classification of the GIS equipment. Experimental results demonstrate that, compared to single-channel ViT variants, Convolutional Neural Networks (CNN), and Residual Networks (ResNet), the proposed algorithm achieves the highest overall performance in the training set experiments, and the superiority of this method is verified through ablation studies and comparative experiments. Furthermore, the average accuracy of the algorithm on the testing set reaches 95.63%, proving the reliability and accuracy of the proposed method. Full article
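The Gramian Angular Field encodings above (two of the three channels; the MTF channel is omitted here) follow a standard recipe: rescale the signal to [-1, 1], map samples to polar angles, then form pairwise sums and differences of angles. A NumPy sketch of that recipe, not the paper's implementation:

```python
import numpy as np

def gramian_angular_fields(x: np.ndarray):
    """GASF and GADF of a 1-D signal: rescale to [-1, 1], take polar
    angles phi = arccos(x), then build cos(phi_i + phi_j) and
    sin(phi_i - phi_j) matrices."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return gasf, gadf

sig = np.sin(np.linspace(0, 4 * np.pi, 64))       # toy vibration segment
gasf, gadf = gramian_angular_fields(sig)
assert gasf.shape == gadf.shape == (64, 64)
assert np.allclose(np.diag(gadf), 0)              # sin(phi_i - phi_i) = 0
assert np.allclose(gasf, gasf.T) and np.allclose(gadf, -gadf.T)
```

Stacking GASF, GADF, and an MTF of the same window as RGB channels yields the composite 2-D image a ViT or CNN then classifies.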
30 pages, 8060 KB  
Article
Modeling and Optimization of Deep and Machine Learning Methods for Credit Card Fraud Risk Management
by Slavi Georgiev, Maya Markova, Vesela Mihova and Venelin Todorov
Mathematics 2026, 14(9), 1496; https://doi.org/10.3390/math14091496 - 29 Apr 2026
Abstract
As digital payment infrastructures expand, the incidence of card-not-present fraud has become a major source of operational and financial risk for banks, payment processors, and merchants. In response, financial institutions increasingly rely on data-driven decision systems, yet fraudsters continuously adapt their strategies to evade conventional rule-based controls. A promising way to strengthen risk management is to model transactional data so as to uncover non-trivial, high-dimensional patterns characteristic of fraudulent behavior and to embed these models into real-time decision pipelines. In this work, we develop and compare a suite of learning-based fraud detectors, including a convolutional neural network and several machine learning classifiers, within a unified quantitative risk-management framework. The problem is formulated as a supervised classification task in which the cost of missed fraud is particularly critical. The mathematical contribution is methodological rather than architectural: we design a leakage-safe and prevalence-faithful evaluation protocol for extremely imbalanced binary classification, combine cross-validated hyperparameter optimization with risk-aligned model selection based on metrics such as recall and the Matthews correlation coefficient, and quantify uncertainty with bootstrap confidence intervals and paired McNemar tests. In addition, we connect statistical evaluation with deployment-time decisioning through a decision-theoretic, cost-sensitive threshold rule, showing how institution-specific false-positive and false-negative costs determine the operating point of the classifier. Because fraudulent transactions constitute only a small proportion of the total volume, we employ resampling strategies to mitigate severe class imbalance and systematically calibrate the models via cross-validated hyperparameter optimization. The empirical analysis on real transaction data shows that carefully tuned deep and ensemble methods can achieve strong fraud-detection performance, while the proposed framework clarifies which performance differences are statistically meaningful and which operating points are most suitable under institution-specific false-positive and false-negative costs. Full article
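The cost-sensitive threshold rule this abstract refers to follows from comparing the expected costs of the two error types: with calibrated fraud probability p, flagging is optimal when p·c_FN ≥ (1−p)·c_FP, i.e. when p ≥ c_FP/(c_FP + c_FN). A minimal sketch, assuming calibrated scores and illustrative cost values (not figures from the paper):

```python
import numpy as np

def bayes_threshold(c_fp, c_fn):
    """Expected-cost-minimizing cut-off for calibrated fraud probabilities.
    Flag as fraud when p * c_fn >= (1 - p) * c_fp, i.e. p >= c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def expected_cost(y_true, p, thr, c_fp, c_fn):
    """Total misclassification cost at a given operating point."""
    y_hat = (p >= thr).astype(int)
    fp = np.sum((y_hat == 1) & (y_true == 0))   # false alarms
    fn = np.sum((y_hat == 0) & (y_true == 1))   # missed fraud
    return fp * c_fp + fn * c_fn

# When missed fraud is 50x costlier than a false alarm, the operating
# point moves far below the conventional 0.5 cut-off.
thr = bayes_threshold(c_fp=1.0, c_fn=50.0)      # = 1/51, about 0.0196
```

The same `expected_cost` function can then be evaluated over a grid of thresholds to visualize how institution-specific cost ratios shift the classifier's operating point.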
28 pages, 9613 KB  
Article
High-Frequency Skywave Source Geolocation Using Deep Learning-Based TDOA Estimation and Bias-Regularized Semidefinite Programming with Field Evaluation
by Chen Xu, Houlong Ai, Le He, Chaoyu Hu, Siyi Chen, Zhaoyang Li and Xijun Liu
Sensors 2026, 26(9), 2755; https://doi.org/10.3390/s26092755 - 29 Apr 2026
Abstract
High-frequency (HF) skywave propagation exploits ionospheric reflection for beyond-line-of-sight transmission, making time-difference-of-arrival (TDOA)-based geolocation a primary technique for localizing non-cooperative HF emitters. However, reliable TDOA estimation remains challenging due to time-varying ionospheric conditions, wideband multipath dispersion, and low signal-to-noise ratio (SNR). This paper proposes an integrated framework coupling realistic channel synthesis, deep learning-based TDOA estimation, and convex optimization-based localization. Three contributions are made. First, an improved wideband ionospheric channel model is constructed by integrating the International Reference Ionosphere (IRI) with region-specific calibration and a stochastic perturbation module, yielding time-varying multipath responses for physics-consistent waveform generation. Second, a convolutional neural network (CNN)-based TDOA estimator is designed to jointly exploit time-domain complex-baseband in-phase/quadrature (I/Q) waveforms, multi-weight generalized cross-correlation (GCC) feature maps, and channel-state information (CSI) within a unified regression network, achieving robust delay estimation under severe noise and multipath conditions. Third, the geolocation problem is formulated as a bias-regularized constrained least-squares problem with unknown ionospheric excess-delay surrogates, and a semidefinite programming (SDP) relaxation is derived to yield a tractable solution without prescribing a fixed virtual reflection height. Simulations show that the proposed estimator consistently outperforms competing algorithms across a wide SNR range and narrows the gap to the Cramér–Rao lower bound (CRLB) at high SNR. On field-recorded signals, the estimator reduces the mean absolute TDOA deviation by 51% relative to GCC with phase transform (GCC-PHAT), and the end-to-end pipeline achieves a mean geolocation error of 19.67 km across 100 field segments, outperforming all compared baselines. Full article
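The GCC-PHAT baseline against which the deep estimator is compared admits a compact NumPy sketch: normalize the cross-power spectrum to unit magnitude (keeping only phase), invert, and pick the correlation peak. The synthetic white-noise signals and the 40-sample delay below are assumptions for illustration, not the paper's field data:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay estimate (seconds) via GCC with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)                    # zero-pad for linear correlation
    X = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    X /= np.maximum(np.abs(X), 1e-12)          # PHAT weighting: phase only
    cc = np.fft.irfft(X, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 8000
rng = np.random.default_rng(1)
ref = rng.normal(size=2048)
sig = np.roll(ref, 40)                         # simulate a 40-sample delay
tau = gcc_phat(sig, ref, fs)                   # recovers roughly 40 / fs seconds
```

The PHAT weighting whitens the spectrum, which sharpens the correlation peak under reverberant multipath but, as the abstract notes, degrades under the dispersive, low-SNR conditions that motivate the learned estimator.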
(This article belongs to the Special Issue Smart Sensor Systems for Positioning and Navigation: 2nd Edition)

23 pages, 3967 KB  
Article
PULSE-KAN: Price-Aware Unified Linear-Attention and Smoothed-Trend Encoder with Kolmogorov–Arnold Network Head for Stock Movement Prediction
by Xingwang Zhang and Jiabo Li
Mathematics 2026, 14(9), 1494; https://doi.org/10.3390/math14091494 - 29 Apr 2026
Abstract
Accurate prediction of binary stock price movements remains a challenging task due to the coexistence of short-term noise and medium-term trend dynamics in financial time series. Existing recurrent models typically encode raw price sequences within a single representation stream and aggregate temporal information using softmax-based attention, which often entangles noisy fluctuations with underlying trends and limits nonlinear expressiveness in the final classification stage. In this paper, we propose PULSE-KAN (Price-aware Unified Linear-attention and Smoothed-trend Encoder with Kolmogorov–Arnold Network Head), a modular neural architecture designed to enhance binary stock movement prediction. The proposed framework introduces three plug-and-play components designed for LSTM-based pipelines as demonstrated here within the Adv-ALSTM framework. First, the P-EMA Trend Bridge constructs an explicit smoothed trend representation via a parameterized exponential moving average and fuses it with the raw price stream to improve trend awareness. Second, the Pola Pulse Router performs efficient temporal aggregation using linear-complexity polarized attention combined with local convolutional priors, enabling better capture of multi-scale temporal dependencies. Third, the KAN Signal Refiner replaces the conventional linear prediction head with learnable Chebyshev-polynomial activations, providing enhanced nonlinear modeling capacity for decision boundaries. Extensive experiments on two public benchmark datasets demonstrate that PULSE-KAN consistently outperforms strong recurrent and attention-based baselines in terms of both classification accuracy and the Matthews Correlation Coefficient. Further ablation studies verify that each proposed component contributes independently and significantly to the overall performance improvement. Full article
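Two ingredients of this architecture, the parameterized exponential moving average behind the P-EMA Trend Bridge and the Chebyshev-polynomial basis used in the KAN head, can be sketched from their standard definitions. This is a loose illustration under assumed parameter values and synthetic price data, not the authors' implementation:

```python
import numpy as np

def p_ema(x, alpha):
    """Parameterized exponential moving average: the smoothed-trend stream."""
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

def chebyshev_features(z, degree=4):
    """Chebyshev basis T_0..T_degree via T_{k+1}(z) = 2 z T_k(z) - T_{k-1}(z)."""
    z = np.tanh(z)                        # squash into [-1, 1], the Chebyshev domain
    feats = [np.ones_like(z), z]
    for _ in range(2, degree + 1):
        feats.append(2 * z * feats[-1] - feats[-2])
    return np.stack(feats, axis=-1)

# Fuse the raw price stream with its smoothed trend, then expand the last
# fused step into polynomial features a learnable head could weight.
prices = np.cumsum(np.random.default_rng(2).normal(size=128))
trend = p_ema(prices, alpha=0.2)          # smoothed-trend representation
fused = np.stack([prices, trend], -1)     # (T, 2): raw + trend streams
phi = chebyshev_features(fused[-1])       # (2, 5) polynomial feature bank
```

In the full model, alpha would be a learned parameter and the Chebyshev coefficients would form the trainable weights of the prediction head; here both are fixed purely for illustration.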
