Search Results (87)

Search Parameters:
Keywords = convolutional denoising autoencoder

17 pages, 11487 KB  
Article
ML-CDAE: Multi-Lead Convolutional Denoising Autoencoder for Denoising 12-Lead ECG Signals
by Malaz Alfa, Fars Samann and Thomas Schanze
Signals 2026, 7(1), 18; https://doi.org/10.3390/signals7010018 - 19 Feb 2026
Viewed by 688
Abstract
Background: Electrocardiography (ECG), particularly the 12-lead configuration, is a crucial method for identifying heart rhythm abnormalities. However, its effectiveness can be reduced by noise contamination. State-of-the-art denoising methods based on neural networks have demonstrated promising performance in denoising complex biosignals like ECG. However, most of these methods have focused on denoising single-lead ECG recordings. Methods: This research aims to leverage the inherent correlation among multi-lead ECG signals. Therefore, a multi-lead convolutional denoising autoencoder (ML-CDAE) model is proposed, to learn more effective representations, leading simultaneously to improved denoising performance and enhanced quality of 12-lead ECG recordings. Results: The findings indicate that ML-CDAE consistently outperforms a single-lead convolutional denoising autoencoder (SL-CDAE) and fully convolutional denoising autoencoder (FCN-DAE) model in denoising ECG signals corrupted by a mixture of physical noises. In particular, the mean squared error (MSE) and signal-to-noise ratio improvement (SNRimp) are used as evaluation metrics to assess the performance. Conclusions: The strong correlation among multi-lead ECG signals can be leveraged not only to enhance the denoising performance of the ML-CDAE model but also to simultaneously denoise 12-lead ECG signals more successfully compared to both the SL-CDAE and FCN-DAE models. Full article
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing II)
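The ML-CDAE abstract reports results in terms of mean squared error (MSE) and SNR improvement (SNRimp). As a quick reference, both metrics can be computed directly from the clean, noisy, and denoised signals; the sketch below uses only the Python standard library, and the toy sinusoid signals are illustrative, not the paper's data:

```python
import math

def mse(clean, est):
    # Mean squared error between the clean reference and the denoised estimate.
    return sum((c - e) ** 2 for c, e in zip(clean, est)) / len(clean)

def snr_db(signal, noise):
    # SNR in dB: ratio of signal power to noise power.
    p_sig = sum(s * s for s in signal)
    p_noise = sum(n * n for n in noise)
    return 10.0 * math.log10(p_sig / p_noise)

def snr_improvement(clean, noisy, denoised):
    # SNRimp = SNR after denoising minus SNR before denoising; positive
    # means the model removed more noise than the distortion it introduced.
    snr_in = snr_db(clean, [y - c for y, c in zip(noisy, clean)])
    snr_out = snr_db(clean, [d - c for d, c in zip(denoised, clean)])
    return snr_out - snr_in

# Toy example: a sinusoid with additive interference reduced to one third.
clean = [math.sin(0.1 * t) for t in range(200)]
noisy = [c + 0.3 * math.sin(3.1 * t) for t, c in enumerate(clean)]
denoised = [c + 0.1 * math.sin(3.1 * t) for t, c in enumerate(clean)]
print(round(snr_improvement(clean, noisy, denoised), 2))  # 9.54 dB = 10*log10(9)
```

For a pure noise-attenuation case like this toy example, SNRimp reduces to the input-to-output noise power ratio in dB.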

26 pages, 15341 KB  
Article
A Multimodal Three-Channel Bearing Fault Diagnosis Method Based on CNN Fusion Attention Mechanism Under Strong Noise Conditions
by Yingyong Zou, Chunfang Li, Yu Zhang, Zhiqiang Si and Long Li
Algorithms 2026, 19(2), 144; https://doi.org/10.3390/a19020144 - 10 Feb 2026
Viewed by 475
Abstract
Bearings, as core components of mechanical equipment, play a critical role in ensuring equipment safety and reliability. Early fault detection holds significant importance. Addressing the challenges of insufficient robustness in bearing fault diagnosis under industrial high-noise conditions and the difficulty of extracting fault features from a single modality, this study proposes a three-channel multimodal fault diagnosis method that integrates a Convolutional Auto-Encoder (CAE) with a dual attention mechanism (M-CNNBiAM). This approach provides an effective technical solution for the precise diagnosis of bearing faults in high-noise environments. To suppress substantial noise interference, a CAE denoising module was designed to filter out intense noise, providing high-quality input for subsequent diagnostic networks. To address the limitations of single-modal feature extraction and restricted generalization capabilities, a three-channel time–frequency signal joint diagnosis model combining the Continuous Wavelet Transform (CWT) with an attention mechanism was proposed. This approach enables deep mining and efficient fusion of multi-domain features, thereby enhancing fault diagnosis accuracy and generalization capabilities. Experimental results demonstrate that the designed CAE module maintains excellent noise reduction performance even under −10 dB strong noise conditions. When combined with the proposed diagnostic model, it achieves an average diagnostic accuracy of 98% across both the CWRU and self-test datasets, demonstrating outstanding diagnostic precision. Furthermore, under −4 dB noise conditions, it achieves a 94% diagnostic accuracy even without relying on the CAE denoising module. With a single training cycle taking only 6.8 s, it balances training efficiency and diagnostic performance, making it well-suited for real-time, reliable bearing fault diagnosis in industrial environments with high noise levels. Full article
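The CAE module above is trained and tested under noise as strong as −10 dB. Corrupted inputs at a prescribed SNR are commonly synthesized by rescaling a noise record before mixing; the following standard-library sketch shows one way to do that (the function name and toy signals are assumptions, not from the paper):

```python
import math
import random

def add_noise_at_snr(signal, noise, snr_db):
    # Rescale `noise` so the mixture hits the requested SNR (in dB),
    # then add it sample-by-sample to `signal`.
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = math.sqrt(p_sig / (p_noise * 10.0 ** (snr_db / 10.0)))
    return [s + scale * n for s, n in zip(signal, noise)]

random.seed(0)
clean = [math.sin(0.05 * t) for t in range(1000)]
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
# -10 dB: the injected noise carries ten times the signal's power.
noisy = add_noise_at_snr(clean, noise, -10.0)
```

Sweeping `snr_db` over a range (e.g. 10 dB down to −10 dB) yields the graded noise conditions under which such denoisers are usually benchmarked.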

20 pages, 2021 KB  
Article
Noise-Conditioned Denoising Autoencoder with Temporal Attention for Bearing RUL Prediction
by Zhongtian Jin, Chong Chen, Aris Syntetos and Ying Liu
Machines 2026, 14(1), 75; https://doi.org/10.3390/machines14010075 - 8 Jan 2026
Viewed by 594
Abstract
Bearings are important elements of mechanical systems and the correct forecasting of their remaining useful life (RUL) is key to successful predictive maintenance. Nevertheless, noise interference during different operating conditions is also a significant problem in predicting their RUL. Existing denoising-based RUL prediction models often show degraded performance when exposed to heterogeneous and non-stationary noise, resulting in unstable feature extraction and reduced generalisation. To address the challenge of heterogeneous and non-stationary noise in bearing RUL prediction, this study proposes a hybrid framework that combines a noise-conditioned convolutional denoising autoencoder (NC-CDAE) and a temporal attention transformer (TAT). The NC-CDAE adaptively suppresses diverse noise types through conditional modulation, while the TAT captures long-term temporal dependencies to enhance degradation trend learning. This synergistic design improves both the noise robustness and temporal modelling capability of the system. To further validate the model under varying conditions, synthetic datasets with different noise intensities were generated using a conditional generative adversarial network (cGAN). Comprehensive experiments show that the proposed NC-CDAE + TAT framework achieves lower and more stable errors than state-of-the-art methods, reducing RMSE by up to 23.6% and MAE by 18.2% on average and maintaining consistent performance (an RMSE between 0.155 and 0.194) across diverse conditions. Full article

26 pages, 709 KB  
Article
A Tabular Data Imputation Technique Using Transformer and Convolutional Neural Networks
by Charlène Béatrice Bridge-Nduwimana, Salah Eddine El Harrauss, Aziza El Ouaazizi and Majid Benyakhlef
Big Data Cogn. Comput. 2025, 9(12), 321; https://doi.org/10.3390/bdcc9120321 - 13 Dec 2025
Cited by 2 | Viewed by 1110
Abstract
Upstream processes strongly influence downstream analysis in sequential data-processing workflows, particularly in machine learning, where data quality directly affects model performance. Conventional statistical imputations often fail to capture nonlinear dependencies, while deep learning approaches typically lack uncertainty quantification. We introduce a hybrid imputation model that integrates a deep learning autoencoder with Convolutional Neural Network (CNN) layers and a Transformer-based contextual modeling architecture to address systematic variation across heterogeneous data sources. Performing multiple imputations in the autoencoder–transformer latent space and averaging the representations provides implicit batch correction that suppresses context-specific residual effects without explicit batch identifiers. We performed experiments on datasets in which 10% missing data was artificially introduced under missing-completely-at-random (MCAR) and missing-not-at-random (MNAR) mechanisms. The results demonstrate practical performance, with the method jointly ranking first among the imputation techniques evaluated. It reduced the root mean square error (RMSE) by 50% compared to denoising autoencoders (DAE) and by 46% compared to iterative imputation (MICE), performed comparably to adversarial (GAIN) and attention-based (MIDA) models, and provided interpretable uncertainty estimates (CV = 0.08–0.15). Validation on datasets from multiple sources confirmed the robustness of the technique: notably, on a forensic dataset from multiple laboratories, our imputation technique achieved a practical improvement over GAIN (0.146 vs. 0.189 RMSE), highlighting its effectiveness in mitigating batch effects. Full article
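The imputation abstract describes averaging multiple imputations in the latent space and reporting uncertainty as a coefficient of variation (CV). Abstracting away the autoencoder–transformer itself, the aggregation step can be sketched as follows (the draw-generating model is mocked with Gaussian noise; all names are hypothetical):

```python
import math
import random

def aggregate_imputations(draws):
    # Average several stochastic imputations of one missing cell and report
    # the coefficient of variation (CV = std / |mean|) as an uncertainty score.
    m = sum(draws) / len(draws)
    var = sum((d - m) ** 2 for d in draws) / len(draws)
    return m, math.sqrt(var) / abs(m)

random.seed(1)
# Hypothetical stand-in for the model: ten noisy decodings of one cell
# whose "true" value is 5.0.
draws = [5.0 + random.gauss(0.0, 0.5) for _ in range(10)]
value, cv = aggregate_imputations(draws)
```

Identical draws give CV = 0; wider disagreement between imputations raises it, which is the sense in which CV ranges such as 0.08–0.15 quantify uncertainty.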

26 pages, 5681 KB  
Article
Physiological Artifact Suppression in EEG Signals Using an Efficient Multi-Scale Depth-Wise Separable Convolution and Variational Attention Deep Learning Model for Improved Neurological Health Signal Quality
by Vandana Akshath Raj, Tejasvi Parupudi, Vishnumurthy Kedlaya K, Ananthakrishna Thalengala and Subramanya G. Nayak
Technologies 2025, 13(12), 578; https://doi.org/10.3390/technologies13120578 - 9 Dec 2025
Viewed by 1160
Abstract
Artifacts remain a major challenge in electroencephalogram (EEG) recordings, often degrading the accuracy of clinical diagnosis, brain computer interface (BCI) systems, and cognitive research. Although recent deep learning approaches have advanced EEG denoising, most still struggle to model long-range dependencies, maintain computational efficiency, and generalize to unseen artifact types. To address these challenges, this study proposes MDSC-VA, an efficient denoising framework that integrates multi-scale (M) depth-wise separable convolution (DSConv), variational autoencoder-based (VAE) latent encoding, and a multi-head self-attention mechanism. This unified architecture effectively balances denoising accuracy and model complexity while enhancing generalization to unseen artifact types. Comprehensive evaluations on three open-source EEG datasets, including EEGdenoiseNet, a Motion Artifact Contaminated Multichannel EEG dataset, and the PhysioNet EEG Motor Movement/Imagery dataset, demonstrate that MDSC-VA consistently outperforms state-of-the-art methods, achieving a higher signal-to-noise ratio (SNR), lower relative root mean square error (RRMSE), and stronger correlation coefficient (CC) values. Moreover, the model preserved over 99% of the dominant neural frequency band power, validating its ability to retain physiologically relevant rhythms. These results highlight the potential of MDSC-VA for reliable clinical EEG interpretation, real-time BCI systems, and advancement towards sustainable healthcare technologies in line with SDG-3 (Good Health and Well-Being). Full article
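MDSC-VA is evaluated with relative root mean square error (RRMSE) and the correlation coefficient (CC), alongside SNR. A minimal standard-library sketch of those two metrics (the definitions follow common EEG-denoising usage; the toy signals are illustrative):

```python
import math

def rrmse(reference, estimate):
    # Relative RMSE: RMS of the residual divided by RMS of the reference.
    err = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    ref = sum(r * r for r in reference)
    return math.sqrt(err / ref)

def cc(x, y):
    # Pearson correlation coefficient between reference and denoised signal.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ref = [math.sin(0.2 * t) for t in range(500)]
est = [0.9 * r for r in ref]          # a toy "denoised" output: pure rescale
print(rrmse(ref, est), cc(ref, est))  # ~0.1 and ~1.0
```

Note the complementary roles: a pure rescaling scores a nonzero RRMSE yet a perfect CC, which is why denoising papers typically report both.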

23 pages, 4765 KB  
Article
Physics-Informed SDAE-Based Denoising Model for High-Impedance Fault Detection
by Jianxin Lin, Xuchang Wang and Huaiyuan Wang
Processes 2025, 13(11), 3673; https://doi.org/10.3390/pr13113673 - 13 Nov 2025
Viewed by 598
Abstract
The accurate detection of high-impedance faults (HIFs) in distribution systems is fundamentally dependent on the extraction of weak fault signatures. However, these features are often obscured by complex and high-level noise present in current transformer (CT) measurement data. To address this challenge, an energy-proportion-guided channel-wise attention stacked denoising autoencoder (EPGCA-SDAE) model is proposed. In this model, wavelet decomposition is employed to transform the signal into informative frequency band components. A channel attention mechanism is utilized to adaptively assign weights to each component, thereby enhancing model interpretability. Furthermore, a physics-informed prior, based on energy distribution, is introduced to guide the loss function and regulate the attention learning process. Extensive simulations using both synthetic and real-world 10 kV distribution network data are conducted. The superiority of the EPGCA-SDAE over traditional wavelet-based methods, stacked denoising autoencoders (SDAE), the denoising convolutional neural network (DnCNN), and Transformer-based networks across various noise conditions is demonstrated. The lowest average mean squared error (MSE) is achieved by the proposed model (simulated: 50.60 × 10−5 p.u.; real: 76.45 × 10−5 p.u.), along with enhanced noise robustness, generalization capability, and physical interpretability. These results verify the method’s feasibility within the tested 10 kV distribution system, providing a reliable data recovery framework for fault diagnosis in noise-contaminated distribution network environments. Full article
(This article belongs to the Special Issue Process Safety Technology for Nuclear Reactors and Power Plants)
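The EPGCA-SDAE abstract builds a physics-informed prior from the energy distribution of wavelet frequency-band components. As a hedged illustration of where such proportions come from, here is a one-level orthonormal Haar split with per-band energy fractions; the paper's actual wavelet family and decomposition depth are not stated in the abstract:

```python
import math

def haar_step(x):
    # One level of the orthonormal Haar wavelet transform: a low-pass
    # (approximation) and a high-pass (detail) half-band component.
    a = [(x[i] + x[i + 1]) / math.sqrt(2.0) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2.0) for i in range(0, len(x), 2)]
    return a, d

def energy_proportions(bands):
    # Fraction of the total energy carried by each band; proportions of
    # this kind can serve as a prior for weighting band-wise attention.
    energies = [sum(v * v for v in b) for b in bands]
    total = sum(energies)
    return [e / total for e in energies]

# Toy signal: a slow component plus a weaker fast component.
x = [math.sin(0.05 * t) + 0.2 * math.sin(2.5 * t) for t in range(1024)]
approx, detail = haar_step(x)
props = energy_proportions([approx, detail])
# Because the transform is orthonormal, the band energies sum to the
# signal energy, so the proportions sum to 1.
```

Repeating `haar_step` on the approximation coefficients gives the multi-level band structure that deeper wavelet decompositions produce.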

20 pages, 1056 KB  
Article
Deep Learning Algorithms for Human Activity Recognition in Manual Material Handling Tasks
by Giulia Bassani, Carlo Alberto Avizzano and Alessandro Filippeschi
Sensors 2025, 25(21), 6705; https://doi.org/10.3390/s25216705 - 2 Nov 2025
Cited by 3 | Viewed by 1775
Abstract
Human Activity Recognition (HAR) is widely used for healthcare, but few works focus on Manual Material Handling (MMH) activities, despite their diffusion and impact on workers’ health. We propose four Deep Learning algorithms for HAR in MMH: Bidirectional Long Short-Term Memory (BiLSTM), Sparse Denoising Autoencoder (Sp-DAE), Recurrent Sp-DAE, and Recurrent Convolutional Neural Network (RCNN). We explored different hyperparameter combinations to maximize classification performance (F1-score) using wearable sensors’ data gathered from 14 subjects. We investigated the best three-parameter combinations for each network using the full dataset to select the two best-performing networks, which were then compared using 14 datasets with increasing subject numerosity, a 70–30% split, and Leave-One-Subject-Out (LOSO) validation, to evaluate whether they may perform better with a larger dataset. The benchmarking network DeepConvLSTM was tested on the full dataset. BiLSTM performs best in classification and complexity (95.7% 70–30% split; 90.3% LOSO). RCNN performed similarly (95.9%; 89.2%), with a positive trend with subject numerosity. DeepConvLSTM achieves similar classification performance (95.2%; 90.3%) but requires ×57.1 and ×31.3 more Multiply and ACcumulate (MAC) and ×100.8 and ×28.3 more Multiplication and Addition (MA) operations, which measure the complexity of the network’s inference process, than BiLSTM and RCNN, respectively. The BiLSTM and RCNN perform close to DeepConvLSTM while being computationally lighter, fostering their use in embedded systems. Such lighter algorithms can be readily used in automatic ergonomic and biomechanical risk assessment systems, enabling personalization of risk assessment and easing the adoption of safety measures in industrial practices involving MMH. Full article
(This article belongs to the Section Wearables)

17 pages, 1294 KB  
Article
SPARSE-OTFS-Net: A Sparse Robust OTFS Signal Detection Algorithm for 6G Ubiquitous Coverage
by Yunzhi Ling and Jun Xu
Electronics 2025, 14(17), 3532; https://doi.org/10.3390/electronics14173532 - 4 Sep 2025
Cited by 1 | Viewed by 1192
Abstract
With the evolution of 6G technology toward global coverage and multidimensional integration, OTFS modulation has become a research focus due to its advantages in high-mobility scenarios. However, existing OTFS signal detection algorithms face challenges such as pilot contamination, Doppler spread degradation, and diverse interference in complex environments. This paper proposes the SPARSE-OTFS-Net algorithm, which establishes a comprehensive signal detection solution by innovatively integrating sparse random pilot design, compressive sensing-based frequency offset estimation with closed-loop cancellation, and joint denoising techniques combining an autoencoder, residual learning, and multi-scale feature fusion. The algorithm employs deep learning to dynamically generate non-uniform pilot distributions, reducing pilot contamination by 60%. Through orthogonal matching pursuit algorithms, it achieves super-resolution frequency offset estimation with tracking errors controlled within 20 Hz, effectively addressing Doppler spread degradation. The multi-stage denoising mechanism of deep neural networks suppresses various interferences while preserving time-frequency domain signal sparsity. Simulation results demonstrate that, under large frequency offset, multipath, and low SNR conditions, multi-kernel convolution technology achieves a significant reduction in computational complexity while exhibiting outstanding performance in tracking error and weak multipath detection. In 1000 km/h high-speed mobility scenarios, Doppler error estimation accuracy reaches ±25 Hz (approaching the Cramér-Rao bound), with BER performance of 5.0 × 10−6 (a 7× improvement over the single-Gaussian CNN’s 3.5 × 10−5). In 1024-user interference scenarios with BER = 10−5 requirements, the SNR demand decreases from 11.4 dB to 9.2 dB (a 2.2 dB reduction), while EVM is maintained at 6.5% under 1024-user concurrency (compared to 16.5% for conventional MMSE), effectively increasing concurrent user capacity in 6G ultra-massive connectivity scenarios. These results validate the superior performance of SPARSE-OTFS-Net in 6G ultra-massive connectivity applications and provide critical technical support for realizing integrated space–air–ground networks. Full article
(This article belongs to the Section Microwave and Wireless Communications)

15 pages, 712 KB  
Article
A Novel Autoencoder-Based Design for Channel Estimation in Maritime OFDM Systems
by Yongjie Yang, Wenming Chao, Li Ma, Fandi Meng and Zhixuan Hu
Electronics 2025, 14(17), 3454; https://doi.org/10.3390/electronics14173454 - 29 Aug 2025
Cited by 1 | Viewed by 1417
Abstract
This paper introduces a novel autoencoder-based channel estimation framework specifically designed for OFDM systems in the complex and rapidly time-varying maritime channel. We design a novel autoencoder architecture that integrates attention mechanisms with long short-term memory networks to adapt to the challenges posed by maritime communication. Additionally, to enhance the OFDM system’s ability to acquire precise channel response and improve operational efficiency, we introduce an improved fast super-resolution convolutional neural network. This enhancement is achieved through the incorporation of a residual denoising module specifically designed to mitigate the adverse effects of additive noise. By jointly training the autoencoder and the channel estimation network, we significantly enhance the reliability of maritime OFDM communication systems. Simulation results demonstrate that the proposed channel estimation network accurately estimates channel response across different pilot numbers, and the joint channel estimation method based on the autoencoder can be extended to accommodate different transmission rates and sea states. Full article

23 pages, 19888 KB  
Article
Research on Loosening Fault Diagnosis Method of Escalator Drive Mainframe Anchor Bolts Based on Improved High-Strength Denoising RCDAE Model
by Dongdong Chen, Minghui Chen, Binxin Lang, Xiaoqing Wang, Qiang Xu, Jiong Shen, Lihua Liang and Qin Luo
Sensors 2025, 25(17), 5219; https://doi.org/10.3390/s25175219 - 22 Aug 2025
Cited by 2 | Viewed by 1162
Abstract
To address the challenges of weak early-stage loosening fault signals and strong environmental noise interference in escalator drive mainframe anchor bolts, which hinder effective fault feature extraction, this paper proposes an improved Residual Convolutional Denoising Autoencoder (RCDAE) for signal denoising in high-intensity noise environments. The model combines a DMS (Dynamically Multimodal Synergistic) loss function, a gated residual mechanism, and a CNN–Transformer architecture. The experimental results demonstrate that the proposed model achieves an average accuracy of 93.88% under noise intensities ranging from 10 dB to −10 dB, representing a 2.65% improvement over the baseline model without the improved RCDAE (91.23%). To verify the generalization performance of the model, the CWRU bearing dataset was used to conduct experiments under the same conditions; the results show that the accuracy of the proposed model is 1.30% higher than that of the baseline model without the improved RCDAE, validating the method’s significant advantages in noise suppression and feature representation. This study provides an effective solution for loosening fault diagnosis of escalator drive mainframe anchor bolts. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

25 pages, 26404 KB  
Review
Review of Deep Learning Applications for Detecting Special Components in Agricultural Products
by Yifeng Zhao and Qingqing Xie
Computers 2025, 14(8), 309; https://doi.org/10.3390/computers14080309 - 30 Jul 2025
Cited by 6 | Viewed by 2048
Abstract
The rapid evolution of deep learning (DL) has fundamentally transformed the paradigm for detecting special components in agricultural products, addressing critical challenges in food safety, quality control, and precision agriculture. This comprehensive review systematically analyzes many seminal studies to evaluate cutting-edge DL applications across three core domains: contaminant surveillance (heavy metals, pesticides, and mycotoxins), nutritional component quantification (soluble solids, polyphenols, and pigments), and structural/biomarker assessment (disease symptoms, gel properties, and physiological traits). Emerging hybrid architectures—including attention-enhanced convolutional neural networks (CNNs) for lesion localization, wavelet-coupled autoencoders for spectral denoising, and multi-task learning frameworks for joint parameter prediction—demonstrate unprecedented accuracy in decoding complex agricultural matrices. Particularly noteworthy are sensor fusion strategies integrating hyperspectral imaging (HSI), Raman spectroscopy, and microwave detection with deep feature extraction, achieving industrial-grade performance (RPD > 3.0) while reducing detection time by 30–100× versus conventional methods. Nevertheless, persistent barriers in the “black-box” nature of complex models, severe lack of standardized data and protocols, computational inefficiency, and poor field robustness hinder the reliable deployment and adoption of DL for detecting special components in agricultural products. This review provides an essential foundation and roadmap for future research to bridge the gap between laboratory DL models and their effective, trusted application in real-world agricultural settings. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

9 pages, 1717 KB  
Proceeding Paper
Generative AI Respiratory and Cardiac Sound Separation Using Variational Autoencoders (VAEs)
by Arshad Jamal, R. Kanesaraj Ramasamy and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 9; https://doi.org/10.3390/cmsf2025010009 - 1 Jul 2025
Cited by 1 | Viewed by 1705
Abstract
The separation of respiratory and cardiac sounds is a significant challenge in biomedical signal processing due to their overlapping frequency and time characteristics. Traditional methods struggle with accurate extraction in noisy or diverse clinical environments. This study explores the application of machine learning, particularly convolutional neural networks (CNNs), to overcome these obstacles. Advanced machine learning models, denoising algorithms, and domain adaptation strategies address challenges such as frequency overlap, external noise, and limited labeled datasets. This study presents a robust methodology for detecting heart and lung diseases from audio signals using advanced preprocessing, feature extraction, and deep learning models. The approach integrates adaptive filtering and bandpass filtering as denoising techniques and variational autoencoders (VAEs) for feature extraction. The extracted features are input into a CNN, which classifies audio signals into different heart and lung conditions. The results highlight the potential of this combined approach for early and accurate disease detection, contributing to the development of reliable diagnostic tools for healthcare. Full article

34 pages, 2216 KB  
Article
An Optimized Transformer–GAN–AE for Intrusion Detection in Edge and IIoT Systems: Experimental Insights from WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT Datasets
by Ahmad Salehiyan, Pardis Sadatian Moghaddam and Masoud Kaveh
Future Internet 2025, 17(7), 279; https://doi.org/10.3390/fi17070279 - 24 Jun 2025
Cited by 12 | Viewed by 2911
Abstract
The rapid expansion of Edge and Industrial Internet of Things (IIoT) systems has intensified the risk and complexity of cyberattacks. Detecting advanced intrusions in these heterogeneous and high-dimensional environments remains challenging. As the IIoT becomes integral to critical infrastructure, ensuring security is crucial to prevent disruptions and data breaches. Traditional IDS approaches often fall short against evolving threats, highlighting the need for intelligent and adaptive solutions. While deep learning (DL) offers strong capabilities for pattern recognition, single-model architectures often lack robustness. Thus, hybrid and optimized DL models are increasingly necessary to improve detection performance and address data imbalance and noise. In this study, we propose an optimized hybrid DL framework that combines transformer, generative adversarial network (GAN), and autoencoder (AE) components, referred to as Transformer–GAN–AE, for robust intrusion detection in Edge and IIoT environments. To enhance the training and convergence of the GAN component, we integrate an improved chimp optimization algorithm (IChOA) for hyperparameter tuning and feature refinement. The proposed method is evaluated using three recent and comprehensive benchmark datasets, WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT, widely recognized as standard testbeds for IIoT intrusion detection research. Extensive experiments are conducted to assess the model’s performance compared to several state-of-the-art techniques, including standard GAN, convolutional neural network (CNN), deep belief network (DBN), time-series transformer (TST), bidirectional encoder representations from transformers (BERT), and extreme gradient boosting (XGBoost). Evaluation metrics include accuracy, recall, AUC, and run time. Results demonstrate that the proposed Transformer–GAN–AE framework outperforms all baseline methods, achieving a best accuracy of 98.92%, along with superior recall and AUC values. The integration of IChOA enhances GAN stability and accelerates training by optimizing hyperparameters. Together with the transformer for temporal feature extraction and the AE for denoising, the hybrid architecture effectively addresses complex, imbalanced intrusion data. The proposed optimized Transformer–GAN–AE model demonstrates high accuracy and robustness, offering a scalable solution for real-world Edge and IIoT intrusion detection. Full article

28 pages, 4199 KB  
Article
Dose Reduction in Scintigraphic Imaging Through Enhanced Convolutional Autoencoder-Based Denoising
by Nikolaos Bouzianis, Ioannis Stathopoulos, Pipitsa Valsamaki, Efthymia Rapti, Ekaterini Trikopani, Vasiliki Apostolidou, Athanasia Kotini, Athanasios Zissimopoulos, Adam Adamopoulos and Efstratios Karavasilis
J. Imaging 2025, 11(6), 197; https://doi.org/10.3390/jimaging11060197 - 14 Jun 2025
Cited by 1 | Viewed by 1850
Abstract
Objective: This study proposes a novel deep learning approach for enhancing low-dose bone scintigraphy images using an Enhanced Convolutional Autoencoder (ECAE), aiming to reduce patient radiation exposure while preserving diagnostic quality, as assessed by both expert-based quantitative image metrics and qualitative evaluation. Methods: A supervised learning framework was developed using real-world paired low- and full-dose images from 105 patients. Data were acquired using standard clinical gamma cameras at the Nuclear Medicine Department of the University General Hospital of Alexandroupolis. The ECAE architecture integrates multiscale feature extraction, channel attention mechanisms, and efficient residual blocks to reconstruct high-quality images from low-dose inputs. The model was trained and validated using quantitative metrics—Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM)—alongside qualitative assessments by nuclear medicine experts. Results: The model achieved significant improvements in both PSNR and SSIM across all tested dose levels, particularly between 30% and 70% of the full dose. Expert evaluation confirmed enhanced visibility of anatomical structures, noise reduction, and preservation of diagnostic detail in denoised images. In blinded evaluations, denoised images were preferred over the original full-dose scans in 66% of all cases, and in 61% of cases within the 30–70% dose range. Conclusion: The proposed ECAE model effectively reconstructs high-quality bone scintigraphy images from substantially reduced-dose acquisitions. This approach supports dose reduction in nuclear medicine imaging while maintaining—or even enhancing—diagnostic confidence, offering practical benefits in patient safety, workflow efficiency, and environmental impact. Full article
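PSNR, one of the two quantitative metrics the study reports, has a simple closed form over the mean squared error between the full-dose reference and the reconstruction. A minimal numpy sketch (the arrays below are synthetic, not scintigraphy data):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((64, 64), 100.0)   # stand-in for a full-dose image
noisy = ref + 16.0               # constant error -> MSE = 256
print(round(psnr(ref, noisy), 2))  # → 24.05
```

Higher PSNR means lower reconstruction error; in practice SSIM is reported alongside it (as here) because PSNR alone is insensitive to structural distortions.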
30 pages, 52809 KB  
Article
Enhancing Border Learning for Better Image Denoising
by Xin Ge, Yu Zhu, Liping Qi, Yaoqi Hu, Jinqiu Sun and Yanning Zhang
Mathematics 2025, 13(7), 1119; https://doi.org/10.3390/math13071119 - 28 Mar 2025
Cited by 3 | Viewed by 3393
Abstract
Deep neural networks for image denoising typically follow an encoder–decoder model, with convolutional (Conv) layers as essential components. Conv layers apply zero padding at the borders of the input to keep output dimensions consistent. However, zero padding introduces ring-like artifacts at the borders of output images, known as border effects, which degrade the network’s ability to learn effective features. In traditional methods, these border effects of convolutional/deconvolutional operations have been mitigated with patch-based techniques. Inspired by this, we explored patch-wise denoising algorithms to derive a CNN architecture that avoids border effects. Specifically, we extend the patch-wise autoencoder to learn image mappings through patch-extraction and patch-averaging operations, and we show that the patch-wise autoencoder is equivalent to a specific convolutional neural network (CNN) architecture, yielding a novel residual block. This block, referred to as the Border-Enhanced Residual Block (BERBlock), includes a mask that enhances the CNN’s ability to learn border features and eliminates border artifacts. By stacking BERBlocks, we construct a U-Net denoiser (BERUNet). Experiments on public datasets demonstrate that BERUNet achieves outstanding performance. The network architecture is built on rigorous mathematical derivations, making its working mechanism highly interpretable. The code and all pretrained models are publicly available. Full article
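The patch-extraction and patch-averaging operations the abstract builds on form an exact identity when no processing happens between them; a small numpy sketch (patch size and image are illustrative, not taken from the paper):

```python
import numpy as np

def patch_average_identity(img, p=3):
    """Extract all overlapping p x p patches (stride 1), place them back,
    and average the overlaps. With no per-patch processing this
    reconstructs the input exactly -- the identity underlying the
    patch-wise autoencoder view."""
    H, W = img.shape
    acc = np.zeros((H, W), dtype=float)
    cnt = np.zeros((H, W), dtype=float)
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += img[i:i + p, j:j + p]  # place patch back
            cnt[i:i + p, j:j + p] += 1.0                    # overlap count
    return acc / cnt  # border pixels are covered by fewer patches

rng = np.random.default_rng(0)
x = rng.random((8, 8))
assert np.allclose(patch_average_identity(x), x)
```

Note that `cnt` is smaller near the image borders (a corner pixel lies in only one patch, an interior pixel in p * p of them); this uneven coverage is exactly where border behavior diverges from the interior, which is what a border-aware mask in the convolutional formulation has to account for.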
(This article belongs to the Special Issue Image Processing and Machine Learning with Applications)