Search Results (1,345)

Search Parameters:
Keywords = deep learning and traditional classification methods

24 pages, 38038 KB  
Article
Hyperspectral-Imaging-Based ECNN-1D for Accurate Origin Classification of Fragrant Pears
by Zhihao Liang, Xiaoyang Zhang, Fei Tan, Ruoyu Di, Jinbang Zhang, Wei Xu, Pan Gao and Li Zhang
Foods 2026, 15(9), 1552; https://doi.org/10.3390/foods15091552 - 30 Apr 2026
Abstract
Geographical origin identification of fragrant pears is crucial for ensuring fruit quality, protecting regional brand value, and maintaining market order. However, pears from different origins often exhibit highly similar appearance and physicochemical properties, making rapid and nondestructive identification challenging for traditional methods. This study proposes a hyperspectral origin identification method based on an enhanced one-dimensional convolutional neural network (ECNN-1D) incorporating an Efficient Channel Attention (ECA) mechanism, using visible–near-infrared (Vis–NIR) and short-wave infrared (SWIR) spectral data. To address the technical challenges of highly similar spectra, redundant features, and complex information distribution, ECNN-1D enhances discriminative spectral feature representation, overcoming limitations of conventional machine learning and standard deep learning models in feature extraction and classification stability. Systematic comparisons with machine learning models (LDA, RF, KNN, SVM) and deep learning models (VGG-1D, ResNet-1D, CNN-1D) showed that while all models performed well on Vis–NIR spectra, ECNN-1D achieved the highest test accuracy of 98.94% and F1 score of 98.95% on the more challenging SWIR spectra, outperforming other approaches. These results indicate that ECNN-1D enables high-precision, nondestructive origin identification of fragrant pears, with potential cost advantages, providing a reliable technical solution for fruit traceability and quality supervision.
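The Efficient Channel Attention (ECA) mechanism named in this abstract replaces the fully connected squeeze-and-excitation layers with a single 1-D convolution across channel descriptors. A minimal numpy sketch of that gating for 1-D spectral features, assuming a fixed kernel for illustration (the name `eca_1d` and the explicit `kernel` argument are not the authors' code; in the real block the kernel is learned and its size is chosen adaptively from the channel count):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca_1d(features, kernel):
    """Efficient Channel Attention for a 1-D feature map.

    features : (channels, length) output of a 1-D conv layer
    kernel   : (k,) weights of the cross-channel 1-D convolution (k odd)
    """
    # 1. Squeeze: global average pooling over the spectral axis
    descriptor = features.mean(axis=1)                       # (channels,)
    # 2. Local cross-channel interaction: 1-D conv over the descriptor
    k = len(kernel)
    padded = np.pad(descriptor, (k // 2, k // 2), mode="edge")
    interacted = np.convolve(padded, kernel, mode="valid")   # (channels,)
    # 3. Excite: sigmoid gate rescales each channel
    weights = sigmoid(interacted)
    return features * weights[:, None]
```

Because the gate is a sigmoid, a zero kernel leaves every channel scaled by 0.5; training pushes the gate toward emphasizing discriminative bands and suppressing redundant ones.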

22 pages, 3980 KB  
Article
A Contrastive Multi-Modal Time-Frequency Swin Transformer Network for Semi-Supervised Mechanical Equipment Fault Diagnosis
by Na Wu, Hao Song, Jianwei Yang, Tong Ji and Lingli Cui
Appl. Sci. 2026, 16(9), 4333; https://doi.org/10.3390/app16094333 - 29 Apr 2026
Abstract
Traditional deep learning networks in fault diagnosis tasks struggle to effectively utilize unlabeled data, and their diagnostic performance is constrained by the limited availability of labeled samples. To address this issue, this paper proposes a semi-supervised method based on multi-modal time-frequency fusion and an improved Swin Transformer, termed the Contrastive Multi-modal Time-Frequency Swin Transformer for Semi-Supervised Fault Diagnosis (CMTFST-SFD). First, a multi-scale shift window attention fusion module is designed as the backbone network, performing parallel computations with a 7 × 7 fine-grained window and a 14 × 14 coarse-grained window. Subsequently, a three-channel time-frequency encoder and a multi-frequency fusion convolution module are constructed to extract diverse frequency characteristics. Finally, by integrating a joint contrastive-cross-entropy loss function, the network backbone is trained in the first stage using a combination of unlabeled and labeled data, followed by training the classification head solely with labeled data in the second stage, thereby fully leveraging all available data for comprehensive model training. The proposed method is evaluated on two bearing fault datasets under three different labeling rates, consistently achieving a recognition rate exceeding 97%. Compared to other advanced techniques, the proposed method demonstrates superior recognition rates and enhanced clustering performance.

21 pages, 41291 KB  
Article
Unraveling the Spectral–Spatial Mechanisms of Mineral Identification: A Case Study on CASI Data Using SpectralFormer and Traditional Classifiers
by Huilin Yang, Kai Qin, Yuxi Hao, Ming Li, Ling Zhu, Yuechao Yang and Yingjun Zhao
Remote Sens. 2026, 18(9), 1365; https://doi.org/10.3390/rs18091365 - 29 Apr 2026
Abstract
Traditional diagnostic spectroscopy provides a physically interpretable basis for mineral identification. However, how modern classifiers balance spectral and spatial information remains insufficiently understood. This study investigates this issue using CASI airborne hyperspectral data from the Liuyuan area, China. A geologically constrained ground-truth dataset was constructed based on expert knowledge and a semi-automatic Spectral Hourglass workflow. We evaluated representative shallow machine learning methods and deep learning models, including a three-dimensional convolutional neural network (3D-CNN), Vision Transformer (ViT), and SpectralFormer. The Support Vector Machine (SVM) achieved the highest overall accuracy but showed a strong bias toward dominant background classes and failed to reliably detect rare minerals such as jarosite. Deep learning models improved class balance by incorporating broader spectral features. However, excessive spatial aggregation reduced their sensitivity to small and fragmented alteration zones. SpectralFormer models hyperspectral data as ordered spectral sequences and showed more stable performance for spectrally similar and rare minerals. Multi-scale experiments reveal a spectral-dominant discrimination mechanism. Increasing the spectral receptive field improves classification up to an optimal level. In contrast, overly large spatial patches introduce background interference and obscure diagnostic absorption features. These findings highlight the fundamental role of spectral continuity in airborne hyperspectral alteration mineral mapping and clarify the trade-offs involved in integrating spatial context.
(This article belongs to the Special Issue Advanced Hyperspectral Imaging and AI for Geological Applications)

7 pages, 845 KB  
Proceeding Paper
You Only Look Once-Based Bitter Melon Size Classification Enhanced by Harris Corner Detection and Douglas–Peucker Algorithm
by Julian Marc B. Surara, Charles Ivan Matthew C. Nangit, Analyn N. Yumang and Charmaine C. Paglinawan
Eng. Proc. 2026, 134(1), 85; https://doi.org/10.3390/engproc2026134085 - 27 Apr 2026
Abstract
Accurate size classification remains a persistent challenge for agricultural products with irregular morphology, such as bitter melon (Momordica charantia). Proper grading is essential for fair pricing, efficient packaging, and compliance with the Association of Southeast Asian Nations and Philippine National Standards, yet traditional manual sorting often results in inconsistencies. To address this, we introduce an automated classification framework built on the You Only Look Once Version 8 (YOLOv8) model. The system integrates Harris Corner Detection to enhance feature extraction and the Douglas–Peucker algorithm to simplify contour representations, thereby reducing noise and improving shape analysis. A dataset of Ampalaya images was trained and processed to detect and categorize fruit sizes, with evaluation conducted through a confusion matrix. Experimental results showed an overall classification accuracy of 93.75%, demonstrating that the combined approach effectively balances precision with computational efficiency. Beyond improving classification accuracy, the findings highlight the broader potential of combining deep learning and contour-based methods to advance agricultural automation, optimize post-harvest workflows, and strengthen competitiveness in both local and international markets.
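The Douglas–Peucker step this abstract relies on simplifies a detected contour by recursively discarding points that lie within a tolerance of the chord between two retained endpoints. A minimal pure-Python sketch of the classic algorithm, not the paper's implementation (`epsilon` here stands in for whatever tolerance the authors chose):

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / norm

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping points farther than epsilon from the chord."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]
    # Find the interior point farthest from the start-end chord
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], start, end)
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Farthest point is significant: split there and recurse
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right  # drop duplicated split point
    return [start, end]
```

Near-collinear noise collapses to the chord endpoints, while genuine corners (such as those flagged by Harris detection) survive, which is why the two techniques pair well for shape analysis.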

26 pages, 15962 KB  
Article
LECloud: Efficient Cloud and Cloud-Shadow Segmentation Based on Windowed State Space Model and Lightweight Attention Mechanism
by Ao Lu, Junzhe Wang, Tengyue Guo, Zhiwei Wang and Min Xia
Remote Sens. 2026, 18(9), 1341; https://doi.org/10.3390/rs18091341 - 27 Apr 2026
Abstract
Accurate cloud and cloud-shadow segmentation is a crucial step in optical remote sensing image preprocessing, playing a significant role in subsequent applications such as land-cover classification and change detection. However, the complexity of cloud/shadow shapes and noise interference (e.g., snow and ice, buildings, complex backgrounds, and atmospheric optics) make this task challenging. Although existing deep learning methods have achieved remarkable results in cloud segmentation tasks, a better balance between computational efficiency and segmentation accuracy is still needed. Traditional deep learning models have good detail and generalization capabilities due to their local feature extraction ability and spatial invariance, but they are relatively weak in processing global context information, leading to false positives and false negatives in complex scenarios. Encoders based on state space models (such as VMamba) can effectively capture global context through long-range dependency modeling, but there is still room for optimization in computational efficiency. Additionally, complex attention mechanisms (such as CBAM) can improve feature representation capability, but the large number of parameters limits the deployment efficiency of models. This paper conducts a systematic architectural exploration of the MCloudX cloud segmentation network, seeking a balance between efficiency and accuracy from three directions: backbone network modernization, encoder efficiency optimization, and attention mechanism lightweighting. Through comprehensive ablation experiments on SPARCS and L8-Biome datasets, we systematically evaluate the independent and synergistic effects of each component and validate them on Biome_3 and SPARCS datasets. Experimental results show that the proposed optimization configuration (ResNet50+LocalMamba+ECA-Net) significantly improves computational efficiency while maintaining comparable accuracy to the baseline. We name this optimization configuration LECloud, providing valuable empirical references for future research on efficient remote sensing segmentation architectures.

15 pages, 1007 KB  
Article
Fault Location Method for Distribution Networks Based on SimAM-GraphSAGE-GAT
by Wei Bao, Lei Wang, Wei Liu, Qilong Chen, Yanan Yang, Bingxuan Li, Kang Sun and Ming Yang
Energies 2026, 19(9), 2093; https://doi.org/10.3390/en19092093 - 27 Apr 2026
Abstract
In distribution networks, traditional fault location methods have insufficient anti-interference capability and low accuracy in locating high-resistance grounding faults. To address these issues, a distribution network fault location method based on SimAM-GraphSAGE-GAT is proposed. Firstly, the distribution network topology is converted into an adjacency matrix, and the electrical parameters of the faulty line are incorporated as node features into the graph structure of the network. Subsequently, the sampling and aggregation mechanism of GraphSAGE is used to learn node representations. Features are then refined using SimAM; as a parameter-free attention mechanism, SimAM improves the model's ability to capture important fault information. Next, the multi-head attention mechanism of GAT is introduced to enhance the representation of neighborhood relationships. Finally, GraphSAGE is applied once again for deep aggregation, localizing faults via node classification. An IEEE 33-node distribution network model is adopted to verify the effectiveness of the algorithm. The results show that the method maintains high positioning accuracy under the tested conditions, including high-resistance grounding, noise interference, and data loss.
(This article belongs to the Section F1: Electrical Power System)
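The GraphSAGE sampling-and-aggregation step described above can be illustrated with a single mean-aggregation layer. The sketch below uses fixed numpy weights for clarity; the names `w_self` and `w_neigh` are illustrative, and the paper's SimAM refinement and GAT attention stages are omitted:

```python
import numpy as np

def graphsage_mean_layer(h, adj, w_self, w_neigh):
    """One GraphSAGE layer with mean aggregation (illustrative, untrained).

    h       : (n_nodes, d_in) node features, e.g. per-node electrical measurements
    adj     : adjacency list {node: [neighbor indices]}
    w_self  : (d_in, d_out) weight applied to the node's own features
    w_neigh : (d_in, d_out) weight applied to the aggregated neighborhood
    """
    out = np.zeros((h.shape[0], w_self.shape[1]))
    for v in range(h.shape[0]):
        neigh = adj.get(v, [])
        # Aggregate: mean of sampled neighbor features (all neighbors here)
        agg = h[neigh].mean(axis=0) if neigh else np.zeros(h.shape[1])
        # Combine self and neighborhood representations, then ReLU
        out[v] = np.maximum(h[v] @ w_self + agg @ w_neigh, 0.0)
    return out
```

Stacking such layers lets each node's embedding absorb electrical measurements from progressively wider neighborhoods, which is what makes per-node fault classification over the adjacency matrix possible.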

36 pages, 9428 KB  
Article
Smart Diagnostics: Hierarchical Deep Learning of Acoustic Emission Signals for Early Crack Detection in Zirconia Dental Structures
by Kuson Tuntiwong, Rangsinee Wangman, Kanchana Kanchanatawewat, Boonjira Anucul, Hiranya Sritart, Pattarapong Phasukkit and Supan Tungjitkusolmun
Sensors 2026, 26(9), 2682; https://doi.org/10.3390/s26092682 - 26 Apr 2026
Abstract
Monolithic zirconia restorations are frequently affected by the unnoticed growth of subcritical cracks, a failure process that is not captured by traditional imaging methods such as radiographs and ultrasound in sophisticated dental architectures. To address this evaluative gap, this research introduces a hierarchical deep learning framework for microcrack detection and spatial localization that integrates Acoustic Emission (AE) sensing with signal processing. Raw AE signals acquired during dynamic loading are enhanced via Kalman filtering and the Continuous Wavelet Transform (CWT) to construct high-fidelity time–frequency scalograms. The diagnostic pipeline operates in two stages: first, a hybrid CNN–BiGRU network with temporal attention performs zirconia component-level classification; second, a ResNet-18 backbone integrated with a Bidirectional LSTM and Multi-Head Attention localizes defects across five anatomical crown regions. This hierarchical design effectively captures the non-stationary, transient nature of fracture-induced stress waves. The framework achieved an F1-score of 99.00% and an AUC of 0.994, significantly outperforming conventional convolutional networks. By enabling predictive maintenance through early, non-invasive damage localization, this study demonstrates a promising laboratory framework for AE-based crack detection in zirconia dental structures and prosthetics, pointing toward enhanced clinical reliability in digital dentistry.

22 pages, 3438 KB  
Article
Beyond Byte-Level Modeling: Structure-Aware and Adaptive Traffic Classification for Encrypted Networks
by Gyeong-Min Yu, Yoon-Seong Jang, Ju-Sung Kim, Seung-Woo Nam, Ji-Min Kim, Yang-Seo Choi and Myung-Sup Kim
Electronics 2026, 15(9), 1828; https://doi.org/10.3390/electronics15091828 - 25 Apr 2026
Abstract
The widespread adoption of encryption protocols such as TLS 1.3 has significantly reduced the visibility of packet payloads, limiting the effectiveness of traditional traffic analysis methods. Recent deep learning approaches attempt to learn representations directly from raw byte sequences; however, in encrypted environments, byte-level patterns often exhibit high entropy and unstable ordering, raising concerns about their reliability. In this work, we revisit the roles of content and structural information in traffic classification and argue that effective modeling should move beyond content-only representations. We propose a structure-aware framework that models hierarchical relationships across fields, layers, and sessions while representing byte information using compact, permutation-invariant summaries. In addition, we introduce a hierarchical shuffle pretraining strategy to capture relational dependencies and an adaptive inter-level gating mechanism to dynamically integrate multi-level representations. Extensive experiments on multiple datasets with varying levels of encryption demonstrate that byte-level sequential patterns are not always essential, while structural information provides consistent complementary cues. Furthermore, the importance of different structural levels varies across datasets, highlighting the need for adaptive multi-level modeling. The proposed method achieves strong performance across diverse datasets, including highly encrypted traffic, while maintaining robustness under domain shifts and limited data scenarios. These results suggest that combining compact content representations with structural context and adaptive integration is a promising direction for encrypted traffic analysis.
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 3rd Edition)
25 pages, 1078 KB  
Systematic Review
Evaluating Artificial Intelligence Models for ICU Length of Stay Prediction: A Systematic Review and Meta-Analysis
by Carlos Zepeda-Lugo, Andrea Insfran-Rivarola, Marcos Sanchez-Lizarraga, Sharon Macias-Velasquez, Ana-Pamela Arevalos, Yolanda Baez-Lopez and Diego Tlapa
Healthcare 2026, 14(9), 1131; https://doi.org/10.3390/healthcare14091131 - 23 Apr 2026
Abstract
Background/Objectives: Efficient management of intensive care unit (ICU) resources is a critical challenge for modern healthcare systems, which must balance high-quality patient care with operational and financial performance. ICU length of stay (LOS) is a key metric of clinical complexity and hospital efficiency. However, traditional methods for predicting LOS often fail to capture the complex, nonlinear interactions among physiological, demographic, and treatment-related variables. Machine learning (ML) and deep learning (DL) models have emerged as promising tools for enhancing predictive accuracy and supporting data-driven decision-making. Methods: This study presents a systematic review and meta-analysis of ML and DL approaches for predicting ICU LOS in adult patients. Following PRISMA guidelines, eight scientific databases were searched, yielding 33 eligible studies published between 2015 and 2025. Results: Mixed medical–surgical ICUs were the most common setting (51.5%), and 45.5% of datasets were sourced from public repositories. Most studies (19/33) focused on binary classification of prolonged stays, although thresholds ranged from >48 h to ≥14 days. The pooled results from ten studies yielded an AUROC of 0.9005 (95% CI: 0.8890–0.9121), indicating strong predictive capability across diverse clinical contexts. Subgroup analyses showed comparable performance between specialized surgical and general ICUs. Conclusions: These findings suggest that AI-driven LOS prediction models exhibit strong discriminatory power for ICU LOS prediction, supporting hospital capacity planning. However, to translate this into reliable clinical support, the methodological heterogeneity, scarcity of external validation, and near absence of calibration reporting identified in this review need to be addressed.
(This article belongs to the Section Healthcare and Sustainability)
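The abstract does not state which pooling model produced the 0.9005 AUROC, but one common approach for combining per-study estimates with a confidence interval is fixed-effect inverse-variance pooling, sketched here as a general illustration rather than the review's actual procedure:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling with a normal-approximation 95% CI.

    estimates  : per-study effect estimates (e.g. AUROC values)
    std_errors : their standard errors
    """
    # Each study is weighted by the inverse of its variance
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    # Variance of the pooled estimate is the reciprocal of the total weight
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

More precise studies (smaller standard errors) pull the pooled value toward themselves; random-effects variants additionally widen the interval to absorb between-study heterogeneity of the kind this review reports.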
53 pages, 2972 KB  
Review
Neural Computing Advancements in Cardiac Imaging: A Review of Deep Learning Approaches for Heart Disease Diagnosis
by Tarek Berghout
J. Imaging 2026, 12(5), 180; https://doi.org/10.3390/jimaging12050180 - 22 Apr 2026
Abstract
Heart disease remains a leading cause of mortality worldwide, and timely and accurate diagnosis is crucial for improving patient outcomes. Medical imaging plays a pivotal role in this process, yet traditional diagnostic methods often suffer from limitations, including dependency on manual interpretation, susceptibility to observer variability, and inefficiency in handling large-scale data. Deep learning has emerged as an innovative technology in medical imaging, providing unparalleled advancements in feature extraction, segmentation, classification, and prediction tasks. Despite its proven potential, comprehensive reviews of deep learning methods specifically targeted at cardiac imaging remain scarce. This review paper seeks to bridge this gap by analyzing the state-of-the-art deep learning applications for heart disease diagnosis, covering the period from 2015 to 2025. Employing a well-structured methodology, this review categorizes and examines studies based on imaging modalities: Ultrasound (US), Magnetic Resonance Imaging (MRI), X-ray, Computed Tomography (CT), and Electrocardiography (ECG). For each modality, the analysis focuses on utilized datasets, processing techniques (e.g., extraction, segmentation and classification), and paradigms (e.g., transfer learning, federated learning, explainability, interpretability, and uncertainty quantification). Additionally, the types of heart disease addressed and prediction accuracy metrics are also scrutinized. These findings point toward future opportunities, including the study of data quality, optimization, transfer learning, uncertainty quantification and model explainability or interpretability. Furthermore, exploring advanced techniques such as recurrent expansion, transformers, and other architectures may unlock new pathways in cardiac imaging research. This review is a critical synthesis offering a roadmap for researchers and practitioners to advance the application of deep learning in heart disease diagnosis.
(This article belongs to the Special Issue Advances and Challenges in Cardiovascular Imaging)
30 pages, 1435 KB  
Review
A Review of Machine Learning Modeling Approaches of Spatiotemporal Urbanization and Land Use Land Cover
by Farasath Hasan, Jian Liu and Xintao Liu
Smart Cities 2026, 9(5), 74; https://doi.org/10.3390/smartcities9050074 - 22 Apr 2026
Abstract
Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), is transforming the modeling of complex spatiotemporal urban processes such as urban growth, sprawl, shrinkage, redevelopment, and Land Use/Land Cover Change (LULCC). However, despite rapid methodological innovation, applications remain fragmented, and there is limited synthesis of how AI-based models complement, extend, or supersede conventional approaches. This study addresses this gap through a systematic review of 6356 records, from which 120 articles were selected for detailed analysis. It investigates: (i) how ML/DL techniques are embedded within spatiotemporal modeling frameworks; (ii) their use in simulating urbanization dynamics and land-use (LU) transitions; (iii) methodological and performance gains relative to traditional statistical and rule-based models; and (iv) emerging research frontiers and limitations. The review shows that LULCC dominates current applications, with Artificial Neural Networks (ANNs) as the most prevalent ML method, increasingly complemented by DL architectures. Across cases, AI is primarily used to learn non-linear transition dynamics, represent spatial and temporal dependencies, identify influential drivers, and improve classification performance and computational efficiency. Building on these insights, the paper synthesizes the roles of AI in spatiotemporal urban modeling and outlines forward-looking research directions to support more robust, transparent, and policy-relevant applications for urban sustainability.
16 pages, 1285 KB  
Article
A SMOTE–ViT Framework for Advanced Soil Classification on a Self-Generated Geotechnical Image Database
by Atousa Zohouri Rad, Ahmet Topal, Burcu Tunga and Müge Balkaya
Appl. Sci. 2026, 16(9), 4063; https://doi.org/10.3390/app16094063 - 22 Apr 2026
Abstract
Accurate soil type classification is fundamental to geotechnical engineering, yet traditional laboratory methods are often time consuming and labor intensive. This study investigates the potential of a Transformer-based deep learning framework for the automated classification of complex soil compositions. An image database for geotechnical analysis is constructed using six distinct geotechnical samples comprising gravel, sand, silt, and clay systematically blended into 80 ternary mixtures. To address the inherent class imbalances in the multi-component dataset, the Synthetic Minority Oversampling Technique (SMOTE) is employed, ensuring robust representation across all categories. The proposed framework utilizes a Vision Transformer (ViT) architecture, leveraging its self-attention mechanism to capture both intricate textural patterns and long-range structural dependencies within the soil matrices. Experimental results demonstrate that the SMOTE–ViT pipeline achieved an overall accuracy of 95.83%, with high precision and recall across diverse ternary compositions. This interdisciplinary approach provides a scalable and high-precision alternative for soil characterization, offering significant potential for real-time decision-making in geotechnical investigation workflows.
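SMOTE, as originally described by Chawla et al., creates synthetic minority-class samples by interpolating between a minority sample and one of its k nearest minority-class neighbors. A basic pure-Python sketch of that idea (the paper's exact variant and parameters are not specified, and `smote` here is an illustrative name):

```python
import math
import random

def smote(minority, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolation (basic SMOTE sketch).

    minority : list of minority-class feature vectors
    n_new    : number of synthetic samples to create
    k        : neighborhood size used when picking an interpolation partner
    """
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbors of base within the minority class
        neighbors = sorted(
            (p for p in minority if p is not base),
            key=lambda p: math.dist(base, p),
        )[:k]
        neighbor = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, neighbor)])
    return synthetic
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class fills out its own region of feature space instead of merely duplicating examples.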

18 pages, 9261 KB  
Article
MSResBiMamba: A Deep Cascaded Architecture for EEG Signal Decoding
by Ruiwen Jiang, Yi Zhou and Jingxiang Zhang
Mathematics 2026, 14(8), 1348; https://doi.org/10.3390/math14081348 - 17 Apr 2026
Abstract
Electroencephalogram (EEG) signals serve as the core information carrier for brain–computer interfaces (BCIs); however, their highly non-stationary nature, extremely low signal-to-noise ratio, and significant inter-individual variability pose considerable challenges for signal decoding. Existing deep learning methods struggle to strike a balance between multi-scale, fine-grained feature extraction and efficient long-range temporal modeling. To overcome this limitation, this study proposes a novel deep cascaded architecture, MSResBiMamba, which deeply integrates multi-scale spatiotemporal feature learning with cutting-edge long-sequence modeling techniques. The model first utilizes an enhanced multi-scale spatiotemporal convolutional network (MS-CNN) combined with a SE-channel attention mechanism to adaptively extract local multi-band features and dynamically suppress redundant artefacts. Subsequently, it innovatively introduces an enhanced bidirectional Mamba (Bi-Mamba) module to efficiently capture non-causal long-range temporal dependencies with linear computational complexity, whilst cascading multi-head self-attention mechanisms to establish global higher-order feature interactions. Extensive experiments on the BCI Competition IV-2a dataset demonstrate that MSResBiMamba achieves outstanding classification performance in multi-class motor imagery tasks, significantly outperforming traditional methods and existing state-of-the-art neural networks. Ablation studies and t-SNE visualisations further confirm the model's robustness in feature decoupling and cross-subject applications, providing a high-precision, high-efficiency decoding solution for BCI systems.
(This article belongs to the Section E1: Mathematics and Computer Science)

35 pages, 5529 KB  
Article
Occasion-Based Clothing Classification Using Vision Transformer and Traditional Machine Learning Models
by Hanaa Alzahrani, Maram Almotairi and Arwa Basbrain
Computers 2026, 15(4), 249; https://doi.org/10.3390/computers15040249 - 17 Apr 2026
Abstract
Clothing classification by occasion is an important area in computer vision and artificial intelligence (AI). This task is particularly challenging because of the subtle visual similarities among clothing categories such as formal, party, and casual attire. Variations in color, fabric, patterns, and lighting further increase the complexity of this task. To address this challenge, we used the Fashionpedia dataset to create a balanced subset of 15,000 images. Specifically, we adopted two different methods for labeling these images: automated classification, which relies on category identifications (IDs) and components, and manual labeling performed by human annotators. We then implemented our preprocessing pipeline, which includes several steps: resizing, image normalization, background removal using segmentation masks, and class balancing. We benchmarked traditional models, including artificial neural networks (ANNs), support vector machines (SVMs), and k-nearest neighbors (KNNs), which use a histogram of oriented gradient (HOG) features, as well as deep learning models such as convolutional neural networks (CNNs), the Visual Geometry Group 16 (VGG16) model utilizing transfer learning, and the vision transformer (ViT) model, all evaluated using identical data splits and preprocessing procedures. The traditional models achieved moderate accuracy, ranging from 54% to 66%. In contrast, the ViT model achieved an accuracy of 81.78% with automated classification and 98.09% with manual labeling. This indicates that a higher label accuracy, along with the preprocessing steps used, significantly enhances the performance. Together, these factors improve the effectiveness of ViT in context-aware apparel classification and establish a reliable baseline for future research.
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)

21 pages, 1353 KB  
Article
Chaos Theory with AI Analysis in IoT Network Scenarios
by Antonio Francesco Gentile and Maria Cilione
Cryptography 2026, 10(2), 25; https://doi.org/10.3390/cryptography10020025 - 10 Apr 2026
Abstract
While general network dynamics have been extensively modeled using stochastic methods, the emergence of dense Internet of Things (IoT) ecosystems demands a more specialized analytical framework. IoT environments are characterized by extreme non-linearity and sensitivity to initial conditions, where traditional models often fail to account for chaotic latency and packet loss. This paper introduces a specialized approach that integrates Chaos Theory with the innovative paradigm of Vibe Coding—an AI-assisted development and analysis methodology that allows for the 'encoding' and interpretation of the dynamic 'vibe' or signature of network fluctuations in real-time. By categorizing network behavior into four distinct scenarios (quiescent, perturbed, attacked, and perturbed–attacked), the proposed framework utilizes deep learning to transform chaotic signals into actionable intelligence. Our findings demonstrate that this specialized synergy between chaos analysis and Vibe Coding provides superior classification of adversarial threats, such as DoS and injection attacks, fostering intelligent native security for next-generation IoT infrastructures.
