Search Results (148)

Search Parameters:
Keywords = a representation of (sub)networks

23 pages, 1631 KiB  
Article
Detecting Malicious Anomalies in Heavy-Duty Vehicular Networks Using Long Short-Term Memory Models
by Mark J. Potvin and Sylvain P. Leblanc
Sensors 2025, 25(14), 4430; https://doi.org/10.3390/s25144430 - 16 Jul 2025
Cited by 1 | Viewed by 388
Abstract
Utilizing deep learning models to detect malicious anomalies within the traffic of application layer J1939 protocol networks, found on heavy-duty commercial vehicles, is becoming a critical area of research in platform protection. At the physical layer, the controller area network (CAN) bus is the backbone network for most vehicles. The CAN bus is highly efficient and dependable, which makes it a suitable networking solution for automobiles where reaction time and speed are of the essence due to safety considerations. Much recent research has been conducted on securing the CAN bus explicitly; however, the importance of protecting the J1939 protocol is becoming apparent. Our research utilizes long short-term memory models to predict the next binary data sequence of a J1939 packet. Our primary objective is to compare the performance of our J1939 detection system trained on data sub-fields against a published CAN system trained on the full data payload. We conducted a series of experiments to evaluate both detection systems by utilizing a simulated attack representation to generate anomalies. We show that each detection system outperforms the other on a case-by-case basis and conclude that there is a clear requirement for a multifaceted security approach for vehicular networks.
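The detection rule described above — flag a J1939 packet when the model's predicted payload diverges too far from the observed one — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the bit-error threshold is an assumed parameter.

```python
def bit_errors(predicted: bytes, observed: bytes) -> int:
    """Count mismatched bits between a predicted and an observed payload."""
    return sum(bin(p ^ o).count("1") for p, o in zip(predicted, observed))

def is_anomalous(predicted: bytes, observed: bytes, threshold: int = 4) -> bool:
    """Flag a packet whose payload deviates from the model's prediction
    by more than `threshold` bits (threshold is an illustrative value)."""
    return bit_errors(predicted, observed) > threshold
```

In a full system, `predicted` would come from the trained LSTM's next-sequence output and the threshold would be calibrated on attack-free traffic.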

30 pages, 17752 KiB  
Article
DMA-Net: Dynamic Morphology-Aware Segmentation Network for Remote Sensing Images
by Chao Deng, Haojian Liang, Xiao Qin and Shaohua Wang
Remote Sens. 2025, 17(14), 2354; https://doi.org/10.3390/rs17142354 - 9 Jul 2025
Viewed by 403
Abstract
Semantic segmentation of remote sensing imagery is a pivotal task for intelligent interpretation, with critical applications in urban monitoring, resource management, and disaster assessment. Recent advancements in deep learning have significantly improved RS image segmentation, particularly through the use of convolutional neural networks, which demonstrate remarkable proficiency in local feature extraction. However, due to the inherent locality of convolutional operations, prevailing methodologies frequently encounter challenges in capturing long-range dependencies, thereby constraining their comprehensive semantic comprehension. Moreover, the preprocessing of high-resolution remote sensing images by dividing them into sub-images disrupts spatial continuity, further complicating the balance between local feature extraction and global context modeling. To address these limitations, we propose DMA-Net, a Dynamic Morphology-Aware Segmentation Network built on an encoder–decoder architecture. The proposed framework incorporates three primary parts: a Multi-Axis Vision Transformer (MaxViT) encoder, which balances local feature extraction and global context modeling through multi-axis self-attention mechanisms; a Hierarchy Attention Decoder (HA-Decoder) enhanced with Hierarchy Convolutional Groups (HCG) for precise recovery of fine-grained spatial details; and a Channel and Spatial Attention Bridge (CSA-Bridge) to mitigate the encoder–decoder semantic gap while amplifying discriminative feature representations. Extensive experiments demonstrate the state-of-the-art performance of DMA-Net, which achieves 87.31% mIoU on Potsdam, 83.23% on Vaihingen, and 54.23% on LoveDA, surpassing existing methods.
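The mIoU figures quoted above are per-class intersection-over-union scores averaged across classes; a minimal sketch of the metric (generic, not tied to DMA-Net itself):

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes, skipping classes
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```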

31 pages, 5644 KiB  
Article
SWMD-YOLO: A Lightweight Model for Tomato Detection in Greenhouse Environments
by Quan Wang, Ye Hua, Qiongdan Lou and Xi Kan
Agronomy 2025, 15(7), 1593; https://doi.org/10.3390/agronomy15071593 - 29 Jun 2025
Viewed by 441
Abstract
The accurate detection of occluded tomatoes in complex greenhouse environments remains challenging due to the limited feature representation ability and high computational costs of existing models. This study proposes SWMD-YOLO, a lightweight multi-scale detection network optimized for greenhouse scenarios. The model integrates switchable atrous convolution (SAConv), which dynamically adjusts receptive fields for occlusion-adaptive feature extraction, and wavelet transform convolution (WTConv), which decomposes features into multi-frequency sub-bands, thus preserving critical edge details of obscured targets. Traditional down-sampling is replaced with a dynamic sample (DySample) operator to minimize information loss during resolution transitions, while a multi-scale convolutional attention (MSCA) mechanism prioritizes discriminative regions under varying illumination. Additionally, we introduce Focaler-IoU, a novel loss function that addresses sample imbalance by dynamically re-weighting gradients for partially occluded and multi-scale targets. Experiments on greenhouse tomato data sets demonstrate that SWMD-YOLO achieves 93.47% mAP50 with a detection speed of 75.68 FPS, outperforming baseline models in accuracy while reducing parameters by 18.9%. Cross-data set validation confirms the model’s robustness to complex backgrounds and lighting variations. Overall, the proposed model provides a computationally efficient solution for real-time crop monitoring in resource-constrained precision agriculture systems.
(This article belongs to the Section Precision and Digital Agriculture)
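For readers unfamiliar with the loss family, plain box IoU and the linear re-mapping that Focaler-style losses apply to focus training on a chosen difficulty band can be sketched as below; the interval bounds `d` and `u` are illustrative assumptions, not values from the paper.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def focaler_iou(iou, d=0.0, u=0.95):
    """Linearly re-map IoU onto the band [d, u] and clip to [0, 1], so
    gradients concentrate on samples inside the band (d, u assumed)."""
    return min(1.0, max(0.0, (iou - d) / (u - d)))
```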

17 pages, 956 KiB  
Article
Comparative Analysis of Attention Mechanisms in Densely Connected Network for Network Traffic Prediction
by Myeongjun Oh, Sung Oh, Jongkyung Im, Myungho Kim, Joung-Sik Kim, Ji-Yeon Park, Na-Rae Yi and Sung-Ho Bae
Signals 2025, 6(2), 29; https://doi.org/10.3390/signals6020029 - 19 Jun 2025
Viewed by 536
Abstract
Recently, STDenseNet (SpatioTemporal Densely connected convolutional Network) showed remarkable performance in predicting network traffic by leveraging the inductive bias of convolution layers. However, it is known that such convolution layers can only barely capture long-term spatial and temporal dependencies. To solve this problem, we propose Attention-DenseNet (ADNet), which effectively incorporates an attention module into STDenseNet to learn representations for long-term spatio-temporal patterns. Specifically, we explored the optimal positions and the types of attention modules in combination with STDenseNet. Our key findings are as follows: i) attention modules are very effective when positioned between the last dense module and the final feature fusion module, meaning that the attention module plays a key role in aggregating low-level local features with long-term dependency. Hence, the final feature fusion module can easily exploit both global and local information; ii) the best attention module is different depending on the spatio-temporal characteristics of the dataset. To verify the effectiveness of the proposed ADNet, we performed experiments on the Telecom Italia dataset, a well-known benchmark dataset for network traffic prediction. The experimental results show that, compared to STDenseNet, our ADNet improved RMSE performance by 3.72%, 2.84%, and 5.87% in call service (Call), short message service (SMS), and Internet access (Internet) sub-datasets, respectively.
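The kind of attention module explored here can be illustrated with a toy squeeze-and-excitation-style channel gate; a real module would include learned projection layers, which this numpy sketch omits.

```python
import numpy as np

def channel_attention(x: np.ndarray) -> np.ndarray:
    """Toy channel attention: squeeze (global average pool) each channel,
    pass it through a sigmoid, and re-weight the channel maps.
    x has shape (C, H, W)."""
    squeeze = x.mean(axis=(1, 2))             # (C,) channel descriptors
    weights = 1.0 / (1.0 + np.exp(-squeeze))  # sigmoid gate per channel
    return x * weights[:, None, None]
```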

18 pages, 1067 KiB  
Article
Fine-Grained Fault Sensitivity Analysis of Vision Transformers Under Soft Errors
by Jiajun He, Yi Liu, Changqing Xu, Xinfang Liao and Yintang Yang
Electronics 2025, 14(12), 2418; https://doi.org/10.3390/electronics14122418 - 13 Jun 2025
Viewed by 655
Abstract
Over the past decade, deep neural networks (DNNs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP), achieving unprecedented performance across a variety of tasks. The Vision Transformer (ViT) has emerged as a powerful alternative to convolutional neural networks (CNNs), leveraging self-attention mechanisms to capture long-range dependencies and global context. Owing to their flexible architecture and scalability, ViTs have been widely adopted in safety-critical applications such as autonomous driving, where system reliability is paramount. However, ViTs’ reliability issues induced by soft errors in large-scale digital integrated circuits have generally been overlooked. In this paper, we present a fine-grained fault sensitivity analysis of ViT variants under bit-flip fault injections, focusing on different ViT models, transformer encoder layers, weight matrix types, and attention-head dimensions. Experimental results demonstrate that the first transformer encoder layer is susceptible to soft errors due to its essential role in local and global feature extraction. Moreover, in the middle and later layers, the Multi-Layer Perceptron (MLP) sub-blocks dominate the computational workload and significantly influence representation learning, making them critical points of vulnerability. These insights highlight key reliability bottlenecks in ViT architectures when deployed in error-prone environments.
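Bit-flip fault injection of the sort used in this analysis amounts to toggling one bit in a weight's IEEE-754 encoding; a minimal sketch, independent of any specific framework:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB ... 31 = sign) in the IEEE-754
    float32 encoding of `value`, emulating a soft error in a stored weight."""
    (encoded,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", encoded ^ (1 << bit)))
    return flipped
```

Flipping high-order exponent bits produces the catastrophic magnitude changes that make some layers far more sensitive than others.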

28 pages, 975 KiB  
Article
Advanced Hyena Hierarchy Architectures for Predictive Modeling of Interest Rate Dynamics from Central Bank Communications
by Tao Song, Shijie Yuan and Rui Zhong
Appl. Sci. 2025, 15(12), 6420; https://doi.org/10.3390/app15126420 - 7 Jun 2025
Viewed by 1312
Abstract
Effective analysis of central bank communications is critical for anticipating monetary policy changes and guiding market expectations. However, traditional natural language processing models face significant challenges in processing lengthy and nuanced policy documents, which often exceed tens of thousands of tokens. This study addresses these challenges by proposing a novel integrated deep learning framework based on Hyena Hierarchy architectures, which utilize sub-quadratic convolution mechanisms to efficiently process ultra-long sequences. The framework employs Delta-LoRA (low-rank adaptation) for parameter-efficient fine-tuning, updating less than 1% of the total parameters without additional inference overhead. To ensure robust performance across institutions and policy cycles, domain-adversarial neural networks are incorporated to learn domain-invariant representations, and a multi-task learning approach integrates auxiliary hawkish/dovish sentiment signals. Evaluations conducted on a comprehensive dataset comprising Federal Open Market Committee statements and European Central Bank speeches from 1977 to 2024 demonstrate state-of-the-art performance, achieving over 6% improvement in macro-F1 score compared to baseline models while reducing inference latency by 65%. This work offers a powerful and efficient new paradigm for handling ultra-long financial policy texts and demonstrates the effectiveness of integrating advanced sequence modeling, efficient fine-tuning, and domain adaptation techniques for extracting timely economic signals, aiming to open new avenues for quantitative policy analysis and financial market forecasting.
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
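The parameter-efficiency argument behind LoRA-style adaptation is simple arithmetic: a rank-r update touches r·(d_out + d_in) trainable values instead of d_out·d_in. A sketch with illustrative dimensions (Delta-LoRA's additional step of folding the update delta into the frozen weight is omitted):

```python
import numpy as np

def lora_update(W: np.ndarray, A: np.ndarray, B: np.ndarray, alpha: float = 1.0):
    """Low-rank adaptation: the frozen weight W (d_out x d_in) is adjusted
    by a rank-r product A @ B (d_out x r times r x d_in)."""
    return W + alpha * (A @ B)

# Illustrative dimensions, not from the paper:
d_out, d_in, r = 768, 768, 8
full = d_out * d_in        # parameters a full fine-tune would touch
lora = r * (d_out + d_in)  # trainable parameters under the low-rank update
```

With these toy dimensions the trainable fraction is about 2%; the paper's sub-1% figure depends on its actual model dimensions and ranks.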

37 pages, 2359 KiB  
Article
CAG-MoE: Multimodal Emotion Recognition with Cross-Attention Gated Mixture of Experts
by Axel Gedeon Mengara Mengara and Yeon-kug Moon
Mathematics 2025, 13(12), 1907; https://doi.org/10.3390/math13121907 - 7 Jun 2025
Cited by 1 | Viewed by 1114
Abstract
Multimodal emotion recognition faces substantial challenges due to the inherent heterogeneity of data sources, each with its own temporal resolution, noise characteristics, and potential for incompleteness. For example, physiological signals, audio features, and textual data capture complementary yet distinct aspects of emotion, requiring specialized processing to extract meaningful cues. These challenges include aligning disparate modalities, handling varying levels of noise and missing data, and effectively fusing features without diluting critical contextual information. In this work, we propose a novel Mixture of Experts (MoE) framework that addresses these challenges by integrating specialized transformer-based sub-expert networks, a dynamic gating mechanism with sparse Top-k activation, and a cross-modal attention module. Each modality is processed by multiple dedicated sub-experts designed to capture intricate temporal and contextual patterns, while the dynamic gating network selectively weights the contributions of the most relevant experts. Our cross-modal attention module further enhances the integration by facilitating precise exchange of information among modalities, thereby reinforcing robustness in the presence of noisy or incomplete data. Additionally, an auxiliary diversity loss encourages expert specialization, ensuring the fused representation remains highly discriminative. Extensive theoretical analysis and rigorous experiments on benchmark datasets—the Korean Emotion Multimodal Database (KEMDy20) and the ASCERTAIN dataset—demonstrate that our approach significantly outperforms state-of-the-art methods in emotion recognition, setting new performance baselines in affective computing.
(This article belongs to the Section E1: Mathematics and Computer Science)
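Sparse Top-k gating, as referenced above, keeps only the k highest-scoring experts per input and renormalizes their weights; a minimal numpy sketch, not the paper's implementation:

```python
import numpy as np

def top_k_gate(logits: np.ndarray, k: int = 2) -> np.ndarray:
    """Sparse gating: softmax over the k largest gate logits, zeros
    elsewhere, so only k experts are evaluated per input."""
    weights = np.zeros_like(logits, dtype=float)
    top = np.argsort(logits)[-k:]
    exp = np.exp(logits[top] - logits[top].max())  # stable softmax
    weights[top] = exp / exp.sum()
    return weights
```

The expert outputs are then combined as a weighted sum using these gate weights, and a diversity loss can penalize gates that collapse onto the same experts.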

16 pages, 456 KiB  
Article
PathCare: Integrating Clinical Pathway Information to Enable Healthcare Prediction at the Neuron Level
by Dehao Sui, Lei Gu, Chaohe Zhang, Kaiwei Yang, Xiaocui Li, Liantao Ma, Ling Wang and Wen Tang
Bioengineering 2025, 12(6), 578; https://doi.org/10.3390/bioengineering12060578 - 28 May 2025
Viewed by 466
Abstract
Electronic Health Records (EHRs) offer valuable insights for healthcare prediction. Existing methods approach EHR analysis through direct imputation techniques in data space or representation learning in feature space. However, these approaches face the following two critical limitations: first, they struggle to model long-term clinical pathways due to their focus on isolated time points rather than continuous health trajectories; second, they lack mechanisms to effectively distinguish between clinically relevant and redundant features when observations are irregular. To address these challenges, we introduce PathCare, a neural framework that integrates clinical pathway information into prediction tasks at the neuron level. PathCare employs an auxiliary sub-network that models future visit patterns to capture temporal health progression, coupled with a neuron-level filtering gate that adaptively selects relevant features while filtering out redundant information. We evaluate PathCare on the following three real-world EHR datasets: CDSL, MIMIC-III, and MIMIC-IV, demonstrating consistent performance improvements in mortality and readmission prediction tasks. Our approach offers a practical solution for enhancing healthcare predictions in real-world clinical settings with varying data completeness.
(This article belongs to the Special Issue Artificial Intelligence for Better Healthcare and Precision Medicine)

24 pages, 6314 KiB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Viewed by 490
Abstract
Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. On the other hand, frequency-based approaches struggle to effectively integrate spatial and spectral cues, often neglecting the underlying information content distributions across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and producing images with higher perceptual quality.
(This article belongs to the Section Signal and Data Analysis)
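The DWT decomposition used to build wavelet-domain queries can be illustrated with a one-level 2-D Haar transform; sub-band naming conventions vary, and this sketch assumes an even-sized single-channel image.

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One-level 2-D Haar DWT: split an (even-sized) image into LL, LH,
    HL, HH sub-bands, each at half resolution."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # low-low: coarse approximation
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return ll, lh, hl, hh
```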

24 pages, 3352 KiB  
Article
A Stacking Ensemble-Based Multi-Channel CNN Strategy for High-Accuracy Damage Assessment in Mega-Sub Controlled Structures
by Zheng Wei, Xinwei Wang, Buqiao Fan and Muhammad Moman Shahzad
Buildings 2025, 15(11), 1775; https://doi.org/10.3390/buildings15111775 - 22 May 2025
Cited by 1 | Viewed by 478
Abstract
The Mega-Sub Controlled Structure System (MSCSS) represents an innovative category of seismic-resistant super high-rise building structural systems, and exploring its damage mechanisms and identification methods is crucial. Nonetheless, prevailing criteria for structural damage fail to provide a clear and comprehensible representation of the actual damage buildings sustain during seismic events. To address these challenges, the present study develops a finite element model of the MSCSS, conducts nonlinear time-history analyses to assess the MSCSS’s response to prolonged seismic motion records, and evaluates its damage progression. Moreover, based on the actual damage conditions experienced by the MSCSS, damage scenarios under seismic forces were formulated to delineate the damage patterns. A convolutional neural network recognition framework based on stacking ensemble learning is proposed for extracting damage features from the temporal response of structural systems and achieving damage classification. This framework accounts for the temporal and spatial interrelations among sensors distributed at disparate locations within the structure and addresses the issue of data imbalance arising from a limited quantity of damaged samples. The research results indicate that the proposed method achieves an accuracy of over 98% in dealing with damage in imbalanced datasets, while also demonstrating remarkable robustness.
(This article belongs to the Section Building Structures)
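Stacking, as used here, feeds base-model predictions to a meta-learner. The sketch below uses a least-squares meta-learner purely for illustration; the paper's framework uses CNN base learners and addresses class imbalance, which this toy omits.

```python
import numpy as np

def stack_predictions(base_probs: list) -> np.ndarray:
    """Stacking, level 0 -> level 1: concatenate each base classifier's
    class probabilities into one meta-feature matrix."""
    return np.hstack(base_probs)

def meta_fit_predict(X_meta: np.ndarray, y: np.ndarray, X_new: np.ndarray):
    """Toy meta-learner: least squares on the stacked probabilities,
    thresholded at 0.5 (a real system would use e.g. logistic regression)."""
    w, *_ = np.linalg.lstsq(X_meta, y.astype(float), rcond=None)
    return (X_new @ w > 0.5).astype(int)
```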

16 pages, 3751 KiB  
Article
Improved Face Image Super-Resolution Model Based on Generative Adversarial Network
by Qingyu Liu, Yeguo Sun, Lei Chen and Lei Liu
J. Imaging 2025, 11(5), 163; https://doi.org/10.3390/jimaging11050163 - 19 May 2025
Viewed by 794
Abstract
Image super-resolution (SR) models based on the generative adversarial network (GAN) face challenges such as unnatural facial detail restoration and local blurring. This paper proposes an improved GAN-based model to address these issues. First, a Multi-scale Hybrid Attention Residual Block (MHARB) is designed, which dynamically enhances feature representation in critical face regions through dual-branch convolution and channel-spatial attention. Second, an Edge-guided Enhancement Block (EEB) is introduced, generating adaptive detail residuals by combining edge masks and channel attention to accurately recover high-frequency textures. Furthermore, a multi-scale discriminator with a weighted sub-discriminator loss is developed to balance global structural and local detail generation quality. Additionally, a phase-wise training strategy with dynamic adjustment of learning rate (Lr) and loss function weights is implemented to improve the realism of super-resolved face images. Experiments on the CelebA-HQ dataset demonstrate that the proposed model achieves a PSNR of 23.35 dB, an SSIM of 0.7424, and an LPIPS of 24.86, outperforming classical models and delivering superior visual quality in high-frequency regions. Notably, this model also surpasses the SwinIR model (PSNR: 23.28 dB → 23.35 dB, SSIM: 0.7340 → 0.7424, and LPIPS: 30.48 → 24.86), validating the effectiveness of the improved model and the training strategy in preserving facial details.
(This article belongs to the Section AI in Imaging)
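The PSNR values reported above follow the standard definition, 10·log10(MAX²/MSE); a minimal sketch:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction (higher is better; infinite for identical images)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```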

21 pages, 5452 KiB  
Article
HFC-YOLO11: A Lightweight Model for the Accurate Recognition of Tiny Remote Sensing Targets
by Jinyin Bai, Wei Zhu, Zongzhe Nie, Xin Yang, Qinglin Xu and Dong Li
Computers 2025, 14(5), 195; https://doi.org/10.3390/computers14050195 - 18 May 2025
Cited by 1 | Viewed by 1374
Abstract
To address critical challenges in tiny object detection within remote sensing imagery, including resolution–semantic imbalance, inefficient feature fusion, and insufficient localization accuracy, this study proposes Hierarchical Feature Compensation You Only Look Once 11 (HFC-YOLO11), a lightweight detection model based on hierarchical feature compensation. Firstly, by reconstructing the feature pyramid architecture, we preserve the high-resolution P2 feature layer in shallow networks to enhance the fine-grained feature representation for tiny targets, while eliminating redundant P5 layers to reduce the computational complexity. In addition, a depth-aware differentiated module design strategy is proposed: GhostBottleneck modules are adopted in shallow layers to improve feature reuse efficiency, while standard Bottleneck modules are maintained in deep layers to strengthen the semantic feature extraction. Furthermore, an Extended Intersection over Union loss function (EIoU) is developed, incorporating boundary alignment penalty terms and scale-adaptive weight mechanisms to optimize the sub-pixel-level localization accuracy. Experimental results on the AI-TOD and VisDrone2019 datasets demonstrate that the improved model achieves mAP50 improvements of 3.4% and 2.7%, respectively, compared to the baseline YOLO11s, while reducing parameters by 27.4%. Ablation studies validate the balanced performance of the hierarchical feature compensation strategy in the preservation of resolution and computational efficiency. Visualization results confirm an enhanced robustness against complex background interference. HFC-YOLO11 exhibits superior accuracy and generalization capability in tiny object detection tasks, effectively meeting practical application requirements for tiny object recognition.

34 pages, 14771 KiB  
Article
Research on Intelligent Planning Method for Turning Machining Process Based on Knowledge Base
by Yante Li and Tingting Zhou
Machines 2025, 13(5), 417; https://doi.org/10.3390/machines13050417 - 15 May 2025
Viewed by 624
Abstract
Against the backdrop of accelerating transformation in traditional mechanical manufacturing toward intelligent production models integrating mechanical, electronic, and information technologies, coupled with increasing demands for mass customization, conventional machining methods are proving inadequate to meet modern manufacturing requirements. To address these challenges, this study proposes a knowledge-based intelligent process planning system. First, to address the heterogeneity issues in knowledge aggregation during machining processes, a process knowledge model comprising three sub-models was designed. Using ontological analysis methods with OWL language, inter-model relationships were formally expressed, achieving structured knowledge representation. Furthermore, to meet the system’s substantial knowledge demands, a MySQL-based knowledge framework was developed, enabling distributed storage and the intelligent retrieval of process planning knowledge. Second, to overcome limitations like low openness and decision-making rigidity in traditional process planning, a hybrid reasoning mechanism was proposed: on the one hand, an instance and rule-based reasoning system ensures adaptability to parameter variations; on the other hand, Generative Adversarial Networks are introduced to transcend the completeness limitations of traditional knowledge reasoning, enabling the dynamic evolution of process knowledge. Finally, the intelligent process planning system was implemented in Python on the VSCode platform. Validation via typical turning cases demonstrates the system’s autonomous process planning and execution capabilities.
(This article belongs to the Section Advanced Manufacturing)
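Instance- and rule-based reasoning of the kind described can be illustrated with a toy rule matcher; the feature keys, rule conditions, and process steps below are hypothetical, not drawn from the paper's knowledge base.

```python
def plan_turning_steps(feature: dict, rules: list) -> list:
    """Toy rule-based reasoning: return the process steps of the first
    rule whose conditions all match the part feature."""
    for rule in rules:
        if all(feature.get(k) == v for k, v in rule["when"].items()):
            return rule["steps"]
    return []

# Hypothetical rules, ordered most-specific first:
rules = [
    {"when": {"type": "external_cylinder", "tolerance": "fine"},
     "steps": ["rough turning", "semi-finish turning", "finish turning"]},
    {"when": {"type": "external_cylinder"},
     "steps": ["rough turning", "finish turning"]},
]
```

A production system would retrieve such rules from the MySQL-backed knowledge base and fall back to instance retrieval when no rule fires.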

18 pages, 1850 KiB  
Article
Cross-Subject Motor Imagery Electroencephalogram Decoding with Domain Generalization
by Yanyan Zheng, Senxiang Wu, Jie Chen, Qiong Yao and Siyu Zheng
Bioengineering 2025, 12(5), 495; https://doi.org/10.3390/bioengineering12050495 - 7 May 2025
Viewed by 748
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals in the brain–computer interface (BCI) can assist patients in accelerating motor function recovery. To realize the implementation of plug-and-play functionality for MI-BCI applications, cross-subject models are employed to alleviate time-consuming calibration and avoid additional model training for target subjects by utilizing EEG data from source subjects. However, the diversity in data distribution among subjects limits the model’s robustness. In this study, we investigate a cross-subject MI-EEG decoding model with domain generalization based on a deep learning neural network that extracts domain-invariant features from source subjects. Firstly, a knowledge distillation framework is adopted to obtain the internally invariant representations based on spectral features fusion. Then, the correlation alignment approach aligns mutually invariant representations between each pair of sub-source domains. In addition, we use distance regularization on two kinds of invariant features to enhance generalizable information. To assess the effectiveness of our approach, experiments are conducted on the BCI Competition IV 2a and the Korean University dataset. The results demonstrate that the proposed model achieves 8.93% and 4.4% accuracy improvements on the two datasets, respectively, compared with current state-of-the-art models. These results confirm that the proposed approach can effectively extract invariant features from source subjects and generalize to the unseen target distribution, paving the way for effective implementation of plug-and-play functionality in MI-BCI applications.
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
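Correlation alignment (CORAL), used above to align sub-source domains, penalizes the distance between the second-order statistics of two feature sets; a standard numpy sketch:

```python
import numpy as np

def coral_loss(Xs: np.ndarray, Xt: np.ndarray) -> float:
    """Correlation alignment: squared Frobenius distance between the
    feature covariances of two domains (n_samples x n_features each),
    with the conventional 1/(4 d^2) normalization."""
    d = Xs.shape[1]
    cs = np.cov(Xs, rowvar=False)
    ct = np.cov(Xt, rowvar=False)
    return float(np.sum((cs - ct) ** 2) / (4.0 * d * d))
```

Minimizing this term between each pair of sub-source domains pushes the network toward mutually invariant representations.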

20 pages, 4145 KiB  
Article
Multiscale Interaction Purification-Based Global Context Network for Industrial Process Fault Diagnosis
by Yukun Huang, Jianchang Liu, Peng Xu, Lin Jiang, Xiaoyu Sun and Haotian Tang
Mathematics 2025, 13(9), 1371; https://doi.org/10.3390/math13091371 - 23 Apr 2025
Viewed by 455
Abstract
The application of deep convolutional neural networks (CNNs) has gained popularity in the field of industrial process fault diagnosis. However, conventional CNNs primarily extract local features through convolution operations and have limited receptive fields. This leads to insufficient feature expression, as CNNs neglect the temporal correlations in industrial process data, ultimately resulting in lower diagnostic performance. To address this issue, a multiscale interaction purification-based global context network (MIPGC-Net) is proposed. First, we propose a multiscale feature interaction refinement (MFIR) module. The module aims to extract multiscale features enriched with combined information through feature interaction while refining feature representations by employing the efficient channel attention mechanism. Next, we develop a wide temporal dependency feature extraction sub-network (WTD) by integrating the MFIR module with the global context network. This sub-network can capture the temporal correlation information from the input, enhancing the comprehensive perception of global information. Finally, MIPGC-Net is constructed by stacking multiple WTD sub-networks to perform fault diagnosis in industrial processes, effectively capturing both local and global information. The proposed method is validated on both the Tennessee Eastman and the Continuous Stirred-Tank Reactor processes, confirming its effectiveness.
