Article

Denoising Adaptive Multi-Branch Architecture for Detecting Cyber Attacks in Industrial Internet of Services

Department of Computing Technologies, School of Science Computing and Emerging Technologies, Swinburne University of Technologies, Hawthorn, Melbourne, VIC 3122, Australia
*
Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2026, 6(1), 26; https://doi.org/10.3390/jcp6010026
Submission received: 28 May 2025 / Revised: 1 December 2025 / Accepted: 9 December 2025 / Published: 5 February 2026
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)

Abstract

The emerging scope of the Industrial Internet of Services (IIoS) requires a robust intrusion detection system to detect malicious attacks. The increasing frequency of sophisticated and high-impact cyber attacks has resulted in financial losses and catastrophes in IIoS-based manufacturing industries. However, existing solutions often struggle to adapt and generalize to new cyber attacks. This study proposes a unique approach designed for known and zero-day network attack detection in IIoS environments, called Denoising Adaptive Multi-Branch Architecture (DA-MBA). The proposed approach is a smart, conformal, and self-adjusting cyber attack detection framework featuring denoising representation learning, hybrid neural inference, and open-set uncertainty calibration. The model merges a denoising autoencoder (DAE) to generate noise-tolerant latent representations, which are processed using a hybrid multi-branch classifier combining dense and bidirectional recurrent layers to capture both static and temporal attack signatures. Moreover, it addresses challenges such as adaptability and generalizability by hybridizing a Multilayer Perceptron (MLP) and bidirectional LSTM (BiLSTM). The proposed hybrid model was designed to fuse feed-forward transformations with sequence-aware modeling, which can capture direct feature interactions and any underlying temporal and order-dependent patterns. Multiple approaches have been applied to strengthen the dual-branch architecture, such as class weighting and comprehensive hyperparameter optimization via Optuna, which collectively address imbalanced data, overfitting, and dynamically shifting threat vectors. The proposed DA-MBA is evaluated on two widely recognized IIoT-based datasets, Edge-IIoTset and WUSTL-IIoT-2021, and achieves over 99% accuracy and a near 0.02 loss, underscoring its effectiveness in detecting the most sophisticated attacks and outperforming recent deep learning IDS baselines.
The results confirm that coupling robust feature denoising, multi-branch classification, and automated hyperparameter tuning provides a scalable and flexible framework for improving cybersecurity within evolving IIoS environments. The proposed architecture offers an interpretable, risk-sensitive defense mechanism, advancing secure, adaptive, and trustworthy industrial cyber-resilience.

1. Introduction

Industry 4.0 is rapidly evolving to create a highly digital environment that will soon become a part of our daily lives. The IIoS, along with the IIoT, is a significant pillar of Industry 4.0, adding value and contributing to human interest. In a general context, the IIoT acts as the “eyes” of a system, providing visibility through connected sensors and devices that collect and transmit data. In contrast, the IIoS functions as the “brain,” interpreting data and providing intelligent services to improve industrial efficiency. The switch from traditional networks to IoT has recently revolutionized the global economy. IIoS infrastructure can support smart manufacturing, agriculture, health, government, cities, logistics, and home automation. The IIoS is still an emerging concept that ultimately allows products and services to work together more seamlessly for humans.
Figure 1 depicts the overall architecture of Industry 4.0, which blends the Industrial Internet of Things (IIoT) and cloud computing under an Industrial Internet of Services (IIoS) framework that offers more smart and intelligent services such as smart manufacturing systems. In Industry 4.0, incoming data from smart connected devices and sensors are transmitted via IIoT to cloud/edge-based platforms, which enables advanced analytics, real-time monitoring, and centralized control. Organizations can optimize processes, improve efficiency, and rapidly adapt to changing market or operational demands by layering service-oriented technologies, such as smart manufacturing and intelligent service delivery, atop this infrastructure.
According to a recent statistical report, the market value of IoT was $1.90 billion in 2018, $25 billion in 2020, and $925.2 billion as of 2023. It is forecasted to hit $6 trillion in 2025, which makes the compound annual growth rate (CAGR) from 2018 to 2025 approximately 15.12%. The report declared that 25 billion devices were connected before 2020, with 50 billion permanent connections and over 200 billion intermittent connections. The report also projected 29.4 billion connected devices by 2023 [1]. These trends suggest that IoT adoption and investment continue to accelerate, with manufacturing poised to capture a significant share. Intel projected that the market value for IoT could hit 6.2 trillion dollars by 2025, with a substantial percentage of it in manufacturing [2,3]. This rapid growth and high connectivity underscore the increasing importance of the IIoS. Factories are adopting more sensors, edge devices, and data-driven solutions, which provide opportunities for advanced analytics, real-time monitoring, and predictive smart services. These projected shifts increase operational efficiency and create new revenue models that ultimately transform manufacturing processes through connectivity-based industrial service paradigms.
The IIoS can underpin numerous smart manufacturing services and operations, connecting machines and sensors and controlling real-time systems. However, these systems remain vulnerable to cyber attacks that can disrupt production lines, damage machinery, or even endanger human operators. An unsecured IIoS architecture risks costly downtime, supply chain disruption, and theft of sensitive intellectual property (IP). Attackers manipulating complex networks can vandalize manufacturing runs, produce defective products, or terminate operations, leading to significant financial and reputational losses. Therefore, robust cybersecurity measures are essential for maintaining smooth, safe, and resilient industrial operations.
Although intrusion detection within the industrial ecosystem has progressed in recent years, existing solutions often rely on static architectures that struggle to adapt to rapidly evolving, sophisticated threats, and typically neglect inherent data imbalance problems that impact the overall performance. In industrial landscapes, benign network traffic extensively surpasses malicious events, making it crucial to address data imbalances such that rare yet highly consequential attacks are not overlooked. Moreover, industrial data contain problematic noise generated by industrial equipment, sensors, and complex network topologies, which can degrade the performance of traditional machine-learning methods. Similarly, IIoS/IIoT-enabled industries can produce random noise in operational data generated by robotic arms, sporadic sensor calibrations, and inconsistent power supply, which can cause disruptions in conventional machine learning-based cyber defense techniques because conventional machine learning models are highly dependent on data. Furthermore, traditional solutions often struggle with dynamic conditions, lacking mechanisms to adapt swiftly as new threats emerge and underlying data shifts.
Considering these cybersecurity challenges, the proposed DA-MBA model offers a solution that can clean corrupted inputs, addressing the challenge of noise and shifting data distributions by learning to reconstruct. In smart industries, sensor malfunctions, environmental disturbances, and complex network protocols can introduce anomalies that degrade the reliability of conventional machine-learning classifiers. The proposed solution overcomes this problem by training the autoencoder on noisy samples created via Gaussian noise, so that it learns a compressed and more consistent representation of network traffic that highlights essential features while canceling out noise. This DAE process yields a feature space that is significantly more robust to changes and irregularities in the data and allows the classifier to focus on meaningful patterns rather than spurious fluctuations. The proposed model, DA-MBA, is made more adaptive through hyperparameter tuning with the Optuna framework and suitable regularization, allowing the model to keep pace with the dynamic nature of the threats. After feature extraction shapes the incoming feature space, the pipeline employs a two-pronged hybrid classification: the MLP distinguishes direct, non-sequential relationships among the denoised features, quickly learning global patterns and correlations characteristic of malicious behavior, which is critical for detecting attacks that exhibit clear anomaly patterns. The BiLSTM, on the other hand, handles the sequential and contextual aspects of the data. The signatures of many recent attacks evolve even when the data points are not strictly chronological, such as escalating use of suspicious ports and increasing abnormal traffic over time.
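The denoising objective described above can be illustrated with a minimal numpy sketch: the autoencoder encodes a Gaussian-corrupted input but is scored against the clean target, so it is forced to learn noise-tolerant representations. The weights, sizes, and the single-layer linear decoder here are illustrative assumptions, not the paper's actual Keras architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, sigma=0.1):
    """Add zero-mean Gaussian noise, as used to build DAE training pairs."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def dae_loss(clean, noisy, W_enc, W_dec):
    """Denoising objective: encode the *noisy* input, decode, and
    compare the reconstruction against the *clean* target (MSE)."""
    z = np.tanh(noisy @ W_enc)   # compressed latent representation
    recon = z @ W_dec            # linear decoder (illustrative)
    return float(np.mean((recon - clean) ** 2))

# Toy example: 32 flows with 8 standardized features, latent size 4.
X = rng.normal(size=(32, 8))
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))
loss = dae_loss(X, corrupt(X), X @ 0 + W_enc if False else W_enc, W_dec)
```

Minimizing this loss over many noisy/clean pairs is what drives the encoder toward representations that suppress sensor and network noise.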
Sometimes, unfolding and identifying interrelated patterns is very difficult, but BiLSTM’s recurrent structure is well suited for this task. The model integrates the outputs of both MLP and BiLSTM to provide immediate feature-level insights with sequential and context-based insights, resulting in a more comprehensive view of potential cyber attacks.
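The branch-fusion idea can be sketched in numpy: a dense (MLP-style) branch captures static feature interactions, a bidirectional recurrent pass summarizes order-dependent structure, and the two are concatenated before a sigmoid head. This uses a plain tanh recurrence rather than a true LSTM cell, and all weights and sizes are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(v): return np.maximum(v, 0.0)
def sigmoid(v): return 1.0 / (1.0 + np.exp(-v))

def rnn_last_state(seq, Wx, Wh):
    """Simple tanh recurrence; returns the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ np.atleast_1d(x_t) + Wh @ h)
    return h

def hybrid_forward(x, params):
    """Dense (MLP) branch + bidirectional recurrent branch, fused by
    concatenation and passed through a sigmoid output head."""
    mlp_out = relu(params["W_mlp"] @ x)                          # static patterns
    fwd = rnn_last_state(x, params["Wx_f"], params["Wh_f"])      # left-to-right
    bwd = rnn_last_state(x[::-1], params["Wx_b"], params["Wh_b"])# right-to-left
    fused = np.concatenate([mlp_out, fwd, bwd])
    return sigmoid(params["w_out"] @ fused)                      # attack probability

d, h = 8, 4
params = {
    "W_mlp": rng.normal(scale=0.3, size=(h, d)),
    "Wx_f": rng.normal(scale=0.3, size=(h, 1)),
    "Wh_f": rng.normal(scale=0.3, size=(h, h)),
    "Wx_b": rng.normal(scale=0.3, size=(h, 1)),
    "Wh_b": rng.normal(scale=0.3, size=(h, h)),
    "w_out": rng.normal(scale=0.3, size=(3 * h,)),
}
p_attack = hybrid_forward(rng.normal(size=d), params)
```

Concatenation fusion keeps both views intact, letting the output head weigh feature-level and sequential evidence jointly.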
The proposed model is also integrated with automated hyperparameter search, dropout, L2 regularization, and class weighting, which allows the hybrid classifier to adapt to shifting conditions by retraining and refining its parameters as new threats or data distributions emerge. This flexibility is crucial for staying effective in continuously evolving industrial networks, where complex and sophisticated exploits and rare attack variations can rapidly sabotage static detection methods. The proposed model was trained and tested on two widely recognized IIoT-based datasets, Edge-IIoTset and WUSTL-IIoT-2021, and achieved over 99% accuracy and a near 0.02 loss, underscoring its effectiveness in detecting the most sophisticated attacks.
The proposed study incorporates multiple regularization strategies throughout the training process, which makes the model resistant to overfitting and maintains robust performance on unseen data. The first step is to apply dropout and L2 regularization in the key layers to penalize large weights and minimize co-adaptation among neurons. Furthermore, the DAE intervenes by adding Gaussian noise, forcing the encoder to learn fundamental feature representations rather than memorizing noisy and temporary patterns. This noise-injection process also makes the model more adaptive to evolving cyber attacks, which can reshape their behavior to exploit conventional machine-learning methods. The study also employs early stopping based on validation loss, halting training once performance no longer improves and preventing the DA-MBA from overfitting to erroneous fluctuations in the training set. Moreover, automated hyperparameter optimization via the Optuna framework further refines the dropout rates, learning rates, and encoding dimensions of the network to strike an optimal balance between capacity and generalization. The datasets were split into training, validation, and test sets to ensure unbiased performance estimation and robust hyperparameter tuning. Furthermore, the DA-MBA was statistically validated by comparing the training and test accuracy, loss, F1 scores, and ROC-AUC curves.
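The early-stopping rule described above can be sketched as a small pure-Python helper: training halts once the validation loss has failed to improve by a minimum margin for a fixed number of consecutive epochs. The `patience` and `min_delta` values are illustrative assumptions (frameworks such as Keras and Optuna's pruners expose equivalent knobs).

```python
def early_stop_index(val_losses, patience=3, min_delta=1e-4):
    """Return the epoch index at which training should halt: the first
    epoch after which the validation loss fails to improve by at least
    `min_delta` for `patience` consecutive epochs."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                return epoch      # stop here; restore weights from best_epoch
    return len(val_losses) - 1    # trained to completion

# Loss improves until epoch 2, then stagnates for 3 epochs -> stop at epoch 5.
stop_at = early_stop_index([0.9, 0.5, 0.3, 0.31, 0.30, 0.32, 0.29], patience=3)
```

Restoring the weights from the best epoch (rather than the stopping epoch) is what prevents the stagnation period from leaking into the final model.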

Need for Specialized Cyber Attack Detection in IIoS

The Industrial Internet of Services (IIoS) represents the convergence of advanced technologies and industrial systems, creating a paradigm shift in how industries operate and deliver services. Figure 2 depicts the overall architecture, comprising five layers: the perception layer, network layer, service layer, application layer, and business layer.
In Industry 4.0, numerous enterprises are adopting layered industrial architectures to exploit the synergy between the IIoT and IIoS. As illustrated in Figure 2, the perception layer comprises field-level devices and sensors for data capture: a network of sensors, actuators, and smart devices continuously streams precise data regarding equipment status, environmental conditions, and operational process variables. The network layer comprises communication protocols, routers, and gateways that securely transmit the collected information; here, low-latency communication protocols, edge computing nodes, and smart network hardware, such as intelligent routers and next-generation firewalls, ensure secure and efficient data routing. Networked data then travel to the IIoS-based service layer, where cloud-based services, virtualized microservices, and adaptive computing resources facilitate advanced analytics, real-time monitoring and reporting, and on-demand scalability. Moreover, the application layer merges sophisticated services from the service layer with end-user dashboards, SCADA systems, and machine-to-machine APIs, thereby promoting remote visibility, advanced process automation, and automated decision-making. Finally, the business layer relies on consolidated analytics to optimize production procedures, improve supply chain management, and discover novel revenue streams.
A complete smart industrial architecture requires robust cybersecurity measures across all the layers. However, the IIoS imposes additional challenges that necessitate specialized network-based cyber attack detection solutions. Compared to conventional industrial networks, IIoS relies heavily on cloud-based smart services, as it fundamentally provides industrial services through cloud platforms. The IIoS offers virtualized micro-services and dynamic resource configurations in industrial architecture, often transiting multiple geographic regions and heterogeneous complex network domains. Moreover, a complex ecosystem of IIoS invites complicated communication patterns and a broader attack surface, making IIoS vulnerable to cyber threats such as distributed denial-of-service (DDoS), supply chain vulnerabilities, and man-in-the-middle (MITM) attacks. Furthermore, the IIoS merges real-time data feeds from sensors and smart edge devices, making anomalies more challenging to detect within high-volume traffic. Implementing a network-based intrusion detection solution that incorporates technologies such as deep packet inspection, behavioral analytics, and machine learning-driven anomaly detection can enable industries to continuously monitor critical data flows, isolate suspicious activity, and adjust to emerging cyber exploits.
This paper is organized as follows: Section 2 describes the literature. Section 3 discusses the research methodology. Section 4 discusses the results and findings of this work, and finally, Section 5 concludes the work presented in this paper and gives recommendations for future work.

2. Literature Review

The rapid expansion of the IIoS, where modern service-centric architectures and industrial processes converge, has significantly increased the connectivity and complexity of modern industrial ecosystems. Although previous research on cybersecurity in industrial contexts has predominantly focused on IIoT, the unique operational dynamics of IIoS introduce additional attack vectors and vulnerabilities that necessitate more sophisticated defense methods. For example, service-level reliance, real-time resource management, and cross-domain data interactions significantly increase the attack surface and create new opportunities for intruders to launch sophisticated exploits. Researchers are working on continual learning and transformers, but conventional pipelines are still inadequate for evolved and zero-day cyber attacks. In the following section (Table 1), key advances in intrusion detection methods are critically reviewed, highlighting their contributions and results. Existing studies have highlighted the effectiveness of Machine Learning, Deep Learning, and Hybrid ML/DL heuristic methods for intrusion detection in IIoT infrastructures. Taking advantage of the above-mentioned heuristic approaches, ref. [4] systematically investigated DL approaches and explored how convolutional autoencoders (CAEs) can be utilized to enhance side-channel attacks, which are a critical threat vector in cryptographic systems. The authors compared multiple CAE architectures and hyperparameter configurations, providing a better understanding of the design choices that best capture and compress high-dimensional patterns without compromising critical leakage information and detecting network-based cyber attacks.
However, despite the promising results of this research, a principal limitation of this study is its focus on a relatively constrained set of experimental settings (e.g., a single device type or single cryptographic algorithm), which may limit the broader applicability of the findings. Another study [5] presented a more robust approach by proposing an adaptive cyber attack prediction framework that integrates an enhanced genetic algorithm (GA) with deep learning (DL) techniques to dynamically identify and respond to emerging cybersecurity threats. The proposed method optimizes both feature selection and hyperparameter configuration settings in a deep neural network (DNN), resulting in improved predictive accuracy and reduced false positives compared with baseline models. However, a key limitation is that this approach, which combines a genetic algorithm and a deep learning pipeline, may become computationally demanding when scaled to large datasets.
Further studies [6,7] have concentrated on automating and optimizing security mechanisms to mitigate sophisticated cyber threats. The author [6] implemented automated machine-learning (AutoML) pipelines specifically for DL-based malware detection, highlighting the reduced manual overhead in setting hyperparameter configurations and selecting the appropriate architecture. Through the systematic exploration of optimal network arrangements, the proposed approach facilitates rapid adaptation to evolving malware variations. In parallel, ref. [7] presented an intrusion-detection framework precisely designed for IoT domains that combines selective feature engineering with lightweight classification to achieve a balance between accuracy and computational efficiency. Although both proposed solutions originate from different applications, they work on the same objective of streamlining model development to generate quicker and more effective responses to cyber hazards. However, questions remain regarding their scalability and adaptability in highly dynamic scenarios. Other studies [8,9,10] focus on a shared objective: strengthening IoT and IIoT ecosystems against emerging cyber threats through advanced DL techniques. The study [8] highlights hybrid deep learning techniques that combine various neural architectures, thereby improving detection robustness while considering the heterogeneity often inherent in IoT infrastructures. Moreover, ref. [9] disseminated this perspective by proposing a hybrid deep random neural network for the industrial IoT (IIoT), demonstrating how specialized randomization mechanisms can mitigate overfitting and adapt quickly to new attack patterns. In parallel, ref. [10] introduces a distributed attack detection framework powered by a deep learning (DL) architecture, improving scalability by enabling intrusion detection tasks to be distributed and coordinated across multiple nodes in an Internet of Things (IoT) environment. 
Overall, these studies demonstrate that deep learning strategies, whether standalone or hybrid, can yield substantial improvements in the detection of sophisticated cyber attacks in complex, interconnected IoT environments. However, these studies also illustrate the ongoing challenges of computational overhead, real-time flexibility, and the evolving nature of threat profiles.
Moreover, further studies [11,12,13] attempt to overcome recent challenges in cyber attack detection. Ref. [11] proposes a DL-based intrusion detection pipeline tailored for IIoT, focusing on high detection accuracy and real-time responsiveness in typically under-resourced operational environments. Ref. [12] expanded this perspective by emphasizing robust feature selection and overfitting mitigation through hybrid machine-learning approaches, which are crucial for handling large-scale noisy datasets that can weaken intrusion indicators. Similarly, ref. [13] proposed a hybrid deep learning approach for network security, which highlights the potential of various neural network architectures to improve threat detection rates. Overall, these studies have demonstrated that sophisticated learning pipelines, whether strictly based on deep learning or on hybrid methods, benefit from careful data preprocessing, feature engineering, and computational efficiency considerations. They have also demonstrated the persistent need for adaptable, self-optimizing models that can survive evolving cyber hazards, particularly in complex IIoT scenarios, where even minor security breaches can lead to significant operational and financial consequences. In more detail, the author of [11] leveraged a CNN to enhance network-based intrusion detection, revealing notable progress in detection accuracy while maintaining manageable computational complexity. Ref. [12] extended the paradigm by merging neural networks with ML algorithms, effectively capturing network traffic data and detecting cyber hazards; their proposed model not only improves the detection precision for varying threat types but also sustains high performance. Meanwhile, ref. [13] focused on a hyper-tuned, compact LSTM framework for hybrid intrusion detection, aiming to optimize resource utilization without compromising performance. Together, these studies demonstrate the shift toward specialized and hybrid deep learning models that tackle both the high dimensionality and real-time limitations of complex networks, signifying that further improvements in architecture design, hyperparameter tuning, and scalability may generate even more robust intrusion-detection abilities.
However, despite the substantial benefits offered by the previously mentioned studies in applying deep learning (DL) and hybrid learning pipelines for intrusion detection, these approaches reveal constraints such as exposure to overfitting on noisy, imbalanced datasets, inadequate hyperparameter tuning, and insufficient measurement of real-time inference performance, particularly in resource-constrained IIoT contexts. Alternatively, the proposed model in this study integrates a denoising autoencoder (DAE) to handle noise robustly, and utilizes bidirectional long short-term memory (BiLSTM) alongside a multilayer perceptron (MLP) for complementary spatial-temporal feature extraction. The study also included automated hyperparameter optimization via the Optuna framework, ensuring that the proposed pipeline adapts effectively to diverse network conditions while minimizing manual intervention. Furthermore, the proposed research systematically measures decision time per detection, and the framework explicitly addresses operational latency concerns often overlooked in prior work, making it a more comprehensive and practical solution for real-world IIoT cyber defense.

State-of-the-Art Cyber Attacks and Available Datasets

A Comprehensive List of Cyber Attacks provides an organized and detailed catalog of known cyber threats, vulnerabilities, and attack techniques that have been observed across different domains and industries (Figure 3). This list typically includes various attack types such as phishing, ransomware, distributed denial-of-service (DDoS), man-in-the-middle (MITM), SQL injection, and advanced persistent threats (APTs), among others. For each attack type, the list often provides an overview of the methodology, impact, targeted systems, and historical examples of incidents. Such a resource is crucial for understanding the evolving landscape of cybersecurity threats, enabling organizations to identify potential risks, prioritize defense strategies, and implement robust incident response plans. A comprehensive attack list also serves as a foundation for researchers and practitioners to simulate real-world scenarios and test the resilience of their systems against emerging threats.
A Comprehensive List of Available Datasets offers an exhaustive inventory of datasets relevant to a variety of research and application domains, including cybersecurity, machine learning, healthcare, and industrial systems as given in Table 2. This list typically includes information about datasets that are publicly available or accessible through partnerships, specifying their focus areas, formats, and intended use cases. For instance, in cybersecurity, datasets might cover network traffic logs, malware samples, or intrusion detection system (IDS) events, while other datasets may focus on areas like IoT, financial fraud, or natural language processing. By providing detailed descriptions, licensing terms, and potential applications, such a list supports researchers, developers, and analysts in identifying the most appropriate datasets for their needs, fostering innovation and reproducibility in research while accelerating the development of advanced solutions.

3. Research Methodology

This section presents the proposed DA-MBA designed for generalized and robust intrusion and zero-day attack detection in the Industrial Internet of Services. The proposed pipeline integrates noise-robust feature learning, temporal analysis, mutual information (MI)-based feature selection, and adaptive thresholding, aided by explainability mechanisms.

3.1. Problem Formulation

In the era of IIoS, data are generated from heterogeneous multivariate telemetry, including sensor readings, protocol metadata, actuator commands, and device-level events. Cyber attacks blend into benign operations and traffic, appearing through both instantaneous anomalies and gradual temporal deviations. Given a dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^d$ represents the input features and $y_i \in \{0, 1\}$ indicates benign or malicious behavior, the ultimate goal is to learn the mapping
$f : x_i \mapsto p(y_i = 1 \mid x_i)$   (1)
The mapping in Equation (1) is required to generalize well to unseen families of attacks.

3.2. Datasets

The proposed model was evaluated on two industry-standard intrusion detection datasets from IIoT environments.
Edge IIoT dataset (Table 3): A renowned IIoT cyber range dataset containing cross-protocol network traffic (such as MQTT, CoAP, and Modbus), device synchronization, and more than 14 attack families, including scanning, spoofing, DDoS, data modification, and malware spreading. The Edge IIoT dataset includes “Attack type” as a family identifier and “Attack label” as the binary target.
The Edge IIoT Dataset is an extensive and diverse dataset specifically designed to facilitate research in the field of the Industrial Internet of Things (IIoT), as described in Table 2. This dataset captures the operational data from edge devices deployed in industrial settings, including sensors, actuators, and controllers. It often includes real-time data streams and structured data for monitoring and analyzing industrial processes such as manufacturing, energy management, and predictive maintenance. The Edge IIoT dataset is valuable for analyzing key areas such as anomaly detection, fault prediction, and cybersecurity in industrial IoT systems. A combination of temporal, spatial, and contextual features serves as a robust foundation for testing machine learning models, validating edge analytics solutions, and improving system reliability and scalability in industrial Internet of Things networks.
WUSTL-IIoT dataset (Table 4): Another well-known multi-domain IIoT dataset with operational traffic from industrial devices, such as PLC, RTU, and sensors, consisting of both benign and malicious events. In the WUSTL-IIoT dataset, the metadata included “Traffic” (family type) and “Target” (label). The WUSTL-IIoT dataset is significantly imbalanced and mimics real factory conditions.
The WUSTL Dataset serves as a benchmark for evaluating novel machine learning pipelines, enabling applications in diagnostics, real-time monitoring, and adaptive systems.
For each dataset, the proposed model has two main evaluation settings. Baseline Detection (seen-attack): the model sees samples from all attack families during training for binary classification. Zero-Day Detection (leave-one-family-out): for each attack family f (excluding normal traffic), the model builds a leave-one-family-out split. All samples belonging to family f form the unknown (zero-day) set and are entirely excluded from training. The remaining samples are considered known, and the dataset is split into subsets. Two test cases are assessed for each held-out family: Strict Zero-Day, which contains only normal traffic and zero-day attacks of family f; and Mixed, where normal traffic, known attack families, and zero-day attacks appear together.
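The leave-one-family-out protocol can be sketched as a small pure-Python helper. The record layout (`(family, features)` pairs with `"Normal"` marking benign traffic) and the family names are illustrative assumptions, not the datasets' actual column names.

```python
def leave_one_family_out(records, held_out):
    """Split labelled flows for zero-day evaluation.

    `records` is a list of (family, features) pairs; every record of the
    `held_out` family becomes the unknown (zero-day) set and is excluded
    from training entirely. The strict test set keeps only benign traffic
    plus the held-out family, mirroring the Strict Zero-Day setting.
    """
    known   = [r for r in records if r[0] != held_out]   # train/val/test pool
    zeroday = [r for r in records if r[0] == held_out]   # never seen in training
    strict  = [r for r in records if r[0] in ("Normal", held_out)]
    return known, zeroday, strict

# Toy flows: two DDoS, one MITM, two benign.
flows = [("Normal", 1), ("DDoS", 2), ("MITM", 3), ("DDoS", 4), ("Normal", 5)]
known, zeroday, strict = leave_one_family_out(flows, "DDoS")
```

The Mixed setting would simply evaluate on `known`-family attacks, benign traffic, and `zeroday` samples together.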

3.3. Machine Learning Process

This study employs a comprehensive machine learning pipeline to optimize and evaluate a hybrid model for binary classification tasks.
A crucial design principle of DA-MBA is leakage prevention: no transformation is allowed to “observe” test data during fitting. The preprocessing pipeline consists of the following steps. Split first: each dataset was split into three subsets, namely training, validation, and testing, to ensure robust model implementation and reliable performance assessment. The validation subset was used strictly for early stopping, hyperparameter tuning, and monitoring progress without influencing the final test results. The most effective data distribution was approximately 72% training, 8% validation, and 20% testing, which ensured a robust, leakage-free evaluation pipeline. To prevent cross-family information leakage, the pipeline removes identifier-related fields, reduces high-cardinality features, and transforms sensitive flow attributes. Leakage audits based on uniqueness ratios and mutual information scores verified that, after sanitization, no identifier-like or high-cardinality fields remained to bias the evaluation. Handling missing values: numerical features (such as packet counts, byte counts, and durations) were imputed with the median of the training distribution for each feature.
Categorical features (such as protocol types and event status codes) were imputed with their most frequent training values. The proposed process (Figure 4) preserves the empirical distribution while avoiding test-statistic bias. Encoding categorical attributes: categorical features were encoded as numerical values using ordinal encoding. In the zero-day sets, unknown categories are mapped to a dedicated “unknown” code, which ensures that the model does not fail when processing unseen values. Feature selection using Mutual Information (MI): following imputation and encoding, all numerical and encoded categorical features were consolidated, and the MI between each feature and the binary label was evaluated.
Only the top-K features with the highest MI are retained, where K is a tunable hyperparameter. This setup reduces dimensionality, eliminates irrelevant and noisy features, and makes training more reliable and efficient. Standardization: the selected features were standardized using the mean and standard deviation of the training set, giving each feature zero mean and unit variance. This benefits the subsequent deep learning stages, which assume comparable feature scales.
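The split-first preprocessing steps above can be sketched with scikit-learn. The toy data, column layout, split sizes, and K are illustrative placeholders, not the paper's actual configuration:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

rng = np.random.default_rng(42)
# toy flow records: two numeric columns (with gaps) and one categorical column
num = rng.normal(size=(200, 2))
num[rng.random((200, 2)) < 0.1] = np.nan
cat = rng.choice(['tcp', 'udp', 'icmp'], size=(200, 1))
y = rng.integers(0, 2, size=200)

# 1) split first; every transform below is fitted on the training rows only
tr, te = slice(0, 160), slice(160, 200)

# 2) median imputation with statistics taken from the training distribution
med = np.nanmedian(num[tr], axis=0)
num_f = np.where(np.isnan(num), med, num)

# 3) ordinal encoding; unseen categories get a dedicated "unknown" code (-1)
enc = OrdinalEncoder(handle_unknown='use_encoded_value',
                     unknown_value=-1).fit(cat[tr])
X = np.hstack([num_f, enc.transform(cat)])

# 4) retain the top-K features by mutual information with the binary label
K = 2
mi = mutual_info_classif(X[tr], y[tr], random_state=0)
top = np.argsort(mi)[::-1][:K]

# 5) standardize with the training mean and standard deviation only
scaler = StandardScaler().fit(X[tr][:, top])
X_train = scaler.transform(X[tr][:, top])
X_test = scaler.transform(X[te][:, top])
```

Because the encoder, MI scores, and scaler are all fitted on the training slice, the test rows never influence any statistic, matching the leakage-prevention principle.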
The DA-MBA model combines multiple components to optimize robustness and reliability under realistic IIoS operating conditions. First, the model acts as an implicit ensemble, coupling a DAE for compressed latent representation with parallel MLP and bidirectional LSTM pathways. This design allows the model to capture both static feature patterns and short-range temporal dependencies, significantly improving generalization across diverse traffic events. Furthermore, to strengthen decision reliability, DA-MBA employs threshold calibration, in which validation-time normal samples are used to derive an adaptive decision boundary that achieves a specified false-positive rate, ensuring that the classifier remains stable and interpretable in environments where benign traffic may vary over time.
In the proposed DA-MBA pipeline, a calibrated decision module refines the raw classifier outputs to ensure reliable predictions under IIoS operating conditions. After the hybrid MLP–BiLSTM classifier generates attack probabilities, the predictions are calibrated using normal validation samples to adjust the decision threshold. This calibration is implemented as an FPR-controlled quantile step that aligns the model's output with the desired false-positive bound. The calibrated predictions are then passed to an adaptive thresholding mechanism that selects the optimal cutoff for distinguishing benign from malicious events in both strict zero-day and mixed evaluation modes. This final calibration layer strengthens deployment reliability by minimizing unnecessary alerts while preserving sensitivity to genuinely anomalous patterns, ensuring that DA-MBA maintains precise and explainable decision-making across unseen or evolving IIoS attack patterns.
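A minimal sketch of FPR-controlled quantile calibration, assuming the classifier's attack probabilities for benign validation flows are available (the beta-distributed toy scores are illustrative):

```python
import numpy as np

def calibrate_threshold(benign_val_scores, target_fpr=0.01):
    """Set the decision threshold at the (1 - target_fpr) quantile of the
    attack probabilities assigned to *benign* validation flows, so roughly
    target_fpr of benign traffic exceeds it and raises an alert."""
    return float(np.quantile(benign_val_scores, 1.0 - target_fpr))

# toy benign scores: a well-behaved classifier keeps them near zero
rng = np.random.default_rng(0)
benign_scores = rng.beta(2, 20, size=10_000)
threshold = calibrate_threshold(benign_scores, target_fpr=0.01)
achieved_fpr = float(np.mean(benign_scores >= threshold))
```

At deployment time, a sample is flagged as an attack when its calibrated probability exceeds `threshold`, keeping the benign alarm rate near the chosen bound.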
Finally, the framework performs an extensive zero-day assessment using a leave-one-family-out (LOFO) approach, in which each attack family is omitted during training and later treated as entirely unseen during testing. This strict protocol assesses the model's ability to detect previously unseen attacks, emphasizing DA-MBA's suitability for real-world IIoS security scenarios, where adversaries continually introduce new and evolving threat patterns.

3.4. DA-MBA Model Architecture

The proposed DA-MBA architecture includes two main modules: a denoising autoencoder, which develops a reliable and concise representation of the input features, and a hybrid classifier, which integrates a fully connected (MLP) branch with a bidirectional LSTM branch.
DAE: The DAE takes the standardized top-K feature matrix as input and learns to reconstruct it. To enhance robustness, small Gaussian noise is injected into the input layer during training. The encoder is composed of dense layers with nonlinear activations and L2 regularization, producing a latent code. The decoder mirrors this structure, mapping the latent code back to the original feature dimension. The DAE is trained to minimize reconstruction error, enabling it to discover a low-dimensional manifold that captures the most salient structure of both benign and malicious traffic. After convergence, only the encoder is kept and used to transform each feature vector into a latent vector.
Hybrid Classifier: The proposed hybrid classifier takes the latent representation and outputs an estimate of the probability of an attack. The latent vector is fed into a dense branch of fully connected layers with nonlinear activations and dropout, producing a transformed feature vector. In parallel, the same latent vector is reshaped into a length-1 sequence and passed through a BiLSTM layer. Despite having a single time step, the BiLSTM branch provides additional representational power and can model complex nonlinear transformations beyond those of the purely feedforward branch. The outputs of the dense and BiLSTM branches are concatenated and fed into a final dense layer that yields a single scalar in [0, 1], expressing the probability of an attack. This hybrid design exploits the strengths of both fully connected and recurrent layers while keeping the overall model lightweight enough for edge deployment. The hybrid classifier is trained with class weighting and early stopping, monitoring the AUC on the validation set.
A denoising autoencoder is designed to extract robust features from the input data, as shown in Figure 5. Gaussian noise is added to the training data to simulate real-world conditions, and the autoencoder is trained to reconstruct the original data. The encoder component extracts compressed latent representations, which are then used as input features for the hybrid model. Regularization techniques, such as dropout and L2 regularization, are incorporated to enhance generalization and prevent overfitting.
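The DAE stage can be sketched in Keras as follows. The layer widths, noise level, and regularization strength are illustrative placeholders rather than the tuned values selected by Optuna:

```python
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

def build_dae(n_features, latent_dim=32, noise_std=0.1, l2=1e-4):
    """Denoising autoencoder sketch: corrupt, encode, reconstruct."""
    inp = layers.Input(shape=(n_features,))
    # Gaussian corruption is active only during training
    x = layers.GaussianNoise(noise_std)(inp)
    x = layers.Dense(64, activation='relu',
                     kernel_regularizer=regularizers.l2(l2))(x)
    latent = layers.Dense(latent_dim, activation='relu', name='latent',
                          kernel_regularizer=regularizers.l2(l2))(x)
    # the decoder mirrors the encoder back to the input dimension
    x = layers.Dense(64, activation='relu',
                     kernel_regularizer=regularizers.l2(l2))(latent)
    out = layers.Dense(n_features, activation='linear')(x)
    dae = Model(inp, out)
    dae.compile(optimizer='adam', loss='mse')  # reconstruction error
    encoder = Model(inp, latent)  # kept after convergence
    return dae, encoder
```

After fitting `dae` on clean targets, only `encoder` is retained to map each standardized feature vector to its latent representation.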

3.5. Training and Evaluation Process

The hybrid model combines the strengths of Multi-Layer Perceptron (MLP) and Bidirectional Long Short-Term Memory (BiLSTM) networks. The MLP pathway processes the encoded features, while the BiLSTM pathway models temporal relationships in reshaped encoded features. Outputs from both pathways are concatenated and passed through a dense layer with sigmoid activation for binary classification. Dropout layers are strategically integrated in both pathways to regularize the model.
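The dual-branch fusion described above can be sketched in Keras. Unit counts and the dropout rate are illustrative assumptions, not the Optuna-selected values:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

def build_hybrid(latent_dim=32, mlp_units=64, lstm_units=32, dropout=0.3):
    """Dual-branch classifier over the DAE latent vector."""
    inp = layers.Input(shape=(latent_dim,))
    # MLP branch: direct feature interactions
    d = layers.Dense(mlp_units, activation='relu')(inp)
    d = layers.Dropout(dropout)(d)
    # BiLSTM branch: latent vector reshaped into a length-1 sequence
    s = layers.Reshape((1, latent_dim))(inp)
    s = layers.Bidirectional(layers.LSTM(lstm_units))(s)
    s = layers.Dropout(dropout)(s)
    # fuse both branches and emit an attack probability in [0, 1]
    fused = layers.Concatenate()([d, s])
    out = layers.Dense(1, activation='sigmoid')(fused)
    model = Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.AUC(name='auc')])
    return model
```

Training would pass `class_weight` and an early-stopping callback monitoring validation AUC to `model.fit`, as described in the text.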
Optuna, an advanced hyperparameter optimization framework, is employed to identify the optimal set of hyperparameters for the denoising autoencoder and the hybrid model. The optimization process dynamically explores the hyperparameter search space, including encoding dimensions, noise factors, dropout rates, LSTM and MLP units, L2 regularization strength, and learning rate. Optuna leverages the Tree-Structured Parzen Estimator (TPE) to balance exploration and exploitation, while early stopping mechanisms prune underperforming trials to minimize computational overhead.

4. Results and Discussion

The performance evaluation of our proposed method was conducted on two benchmark datasets, WUSTL and EdgeIIoT. Both datasets demonstrated exceptional results, with accuracy metrics reaching nearly 99% across training, testing, and validation phases. These results underline the robustness and reliability of the approach in addressing the challenges inherent in the datasets and highlight its applicability to real-world IoT and edge computing environments.
Table 5 lists the computational resources, execution environment, and libraries used for the hybrid model.
Table 6 lists the hyperparameters and configurations automatically optimized and selected by the Optuna framework.
For the WUSTL dataset, the training, testing, and validation phases showed minimal variance, indicating the model’s ability to effectively generalize across different data splits. The training accuracy remained consistently high, reflecting the model’s capability to learn intricate patterns within the dataset. Similarly, the testing and validation accuracies validate the predictive power of the model and its potential to perform well on unseen data, as shown in Table 7. This uniform performance across all phases suggests that the model is free from overfitting and is well optimized for this dataset. The EdgeIIoT dataset, which presents unique challenges owing to its high-dimensional features and noise levels, also yielded nearly 99% accuracy across the training, testing, and validation phases. The results for this dataset confirm the scalability and adaptability of the proposed approach. The minimal loss observed during the training phase indicates efficient convergence of the model, whereas the low testing and validation losses corroborate its robustness against overfitting and its capability to retain high performance on diverse data distributions.
The results achieved in this study significantly outperform those reported in similar studies using the WUSTL and EdgeIIoT datasets. Table 8 also compares the baseline MLP and BiLSTM models with the proposed model. Whereas previous approaches have often been limited by overfitting, poor scalability, or inadequate performance on noisy datasets, our method demonstrates a high level of accuracy and stability. The standalone MLP and BiLSTM baselines achieved approximately 92% to 97% accuracy, whereas the proposed model reached approximately 99%. This underscores the importance of the incorporated features and training techniques. The high accuracy and low loss across all phases suggest that the approach is both theoretically robust and practically viable. In real-world applications, such reliability is crucial for decision-making in IoT and edge computing environments, where data integrity and rapid processing are paramount. The proposed study provides a strong foundation for further exploration and improvement, representing a significant step forward in addressing the challenges of edge computing and IoT applications.
The leave-one-family-out (LOFO) evaluation (Table 9) shows that DA-MBA sustains strong zero-day performance on the Edge-IIoT dataset. The model exhibits strong generalization, yielding ROC-AUC and PR-AUC scores of ≈0.99–1.00 across nearly all attack types. These results indicate that the learned latent representation and hybrid classifier can clearly separate the majority of malicious behaviors from benign events, even under the LOFO protocol. A notable exception is the MITM attack class, which shows a slightly lower recall (0.813) despite retaining a high ROC-AUC (0.9996) and PR-AUC (0.9883). This pattern indicates that the model ranks MITM samples correctly (high separability), but the fixed 0.5 decision threshold produces a modest number of missed detections. The lower recall arises because MITM traffic often overlaps with benign communication patterns and exhibits fewer statistical deviations than volumetric attacks such as DDoS and scanning; the limited number of MITM training samples also contributes. Although MITM attacks are more challenging to detect, DA-MBA maintains strong detection performance (>80% recall) and near-perfect ranking metrics, confirming its ability to detect both high-volume and stealthy attacks.
Table 10 presents the per-class results of the proposed DA-MBA pipeline on the WUSTL-IIoT dataset under strict zero-day and mixed evaluation conditions. The analysis demonstrates consistently strong detection across most attack classes, with ROC-AUC and PR-AUC values near 1.0, indicating excellent separability between malicious and benign events. The model offers good recall for DoS (0.95) and solid recall for Backdoor (0.76415) and Command Injection (0.80309), demonstrating its ability to identify a wide range of previously unseen attack types. Effectiveness is marginally lower for the Reconnaissance class, which shows reduced recall at a threshold of 0.5; nevertheless, this class retains a high ROC-AUC (0.95) and PR-AUC (0.87), indicating that the model ranks these samples well despite the complex characteristics of the dataset. Overall, the table shows that DA-MBA delivers robust and reliable zero-day detection across diverse attack classes, with good results against high-impact attacks such as Backdoor, Command Injection, and DoS.
Table 11 summarizes the computing environment used to execute the DA-MBA model. Table 12 shows the computational footprint of the DA-MBA model. Despite its small trainable parameter count (90,881) and compact FP32 footprint (0.35 MB), the overall process memory is high because of the Python runtime overheads, data loading, and temporary arrays generated during preprocessing and batch inference. The given values represent the execution environment of the entire pipeline. Based on the results, the proposed model appears to be lightweight and suitable for edge deployment.
Table 12 presents the latency metrics for the full DA-MBA pipeline, which consists of preprocessing, encoder forward pass, and classification in a batch-based analysis setting. Both industrial datasets exhibit extremely low processing times, with median latencies of 4.23 µs (EDGE-IIoT dataset) and 2.35 µs (WUSTL-IIoT dataset). Even at the 99th percentile, the latencies remained within a few microseconds, demonstrating highly stable real-time behavior.
The measured throughput was also high, surpassing 230 k samples/s on EDGE-IIoT and 425 k samples/s on WUSTL-IIoT. These statistics indicate that the proposed DA-MBA model is well suited to real-time industrial IoT environments that require fast and consistent detection across large data streams.
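Per-sample latency percentiles and throughput of this kind can be collected with a simple batched timing harness like the following (the harness, and the matrix-multiply stand-in for the detection pipeline, are illustrative assumptions):

```python
import time
import numpy as np

def batch_latency_stats(pipeline, X, batch=1024, reps=50):
    """Median/p99 per-sample latency (in microseconds) and throughput
    (samples/s) of a batched detection pipeline `pipeline`."""
    per_sample = []
    for _ in range(reps):
        t0 = time.perf_counter()
        pipeline(X[:batch])                      # preprocess + encode + classify
        per_sample.append((time.perf_counter() - t0) / batch)
    per_sample = np.asarray(per_sample)
    return {
        'p50_us': float(np.percentile(per_sample, 50) * 1e6),
        'p99_us': float(np.percentile(per_sample, 99) * 1e6),
        'samples_per_s': float(1.0 / per_sample.mean()),
    }
```

Running the full DA-MBA pipeline through such a harness yields the median and 99th-percentile latencies and the samples-per-second figures reported above.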
Figure 6 and Figure 7 present SHAP summary plots illustrating the contribution of the top features to the prediction behavior of the DA-MBA model on the WUSTL-IIoT and Edge-IIoT datasets. The visualization shows a clear and interpretable distribution of feature influences. In the WUSTL-IIoT dataset, high-impact features such as DIntPkt, DstJitter, Dport, and various byte- and packet-level attributes largely drive the model output. The balanced distribution of red (high feature values) and blue (low feature values) points illustrates that the model uses both large and small fluctuations in network statistics to distinguish benign from malicious events. This pattern demonstrates that the model does not depend on a single dominant feature; instead, it integrates information from multiple network-level feature spaces, suggesting stable and consistent learning. For the Edge-IIoT dataset, the SHAP values indicate that protocol-level attributes such as mqtt.protoname, mqtt.topic, dns.qry.name.len, and mqtt.conack.flags contribute strongly to distinguishing benign from malicious events, whereas HTTP- and ICMP-related fields also play significant roles. Again, the balanced spread of high and low feature values across both positive and negative SHAP contributions indicates that the model learns complex patterns rather than relying on any single dominant feature, drawing on a variety of contextually appropriate IIoT communication attributes.
The training behavior of the proposed DA-MBA model offers essential insights into its stability and generalization capability across heterogeneous IIoT environments, as reflected in both industrial datasets. As shown in Figure 8 and Figure 9, the training and validation accuracy curves for both datasets indicate that the classifier stabilizes quickly and exhibits highly consistent learning patterns.
In both datasets, the training and validation curves remained closely aligned throughout the training, indicating the absence of overfitting and confirming that the model learned robust features that generalize well to unseen events. Considering the complexity and volume of IIoS traffic, where temporal fluctuations, unreliable protocol characteristics, and class imbalance are critical challenges to the training stability of industrial intrusion detection models, such consistency is significant.
Complementing the ROC analysis, the PR curves presented in Figure 10 and Figure 11 for the WUSTL-IIoT dataset and Figure 12 and Figure 13 for the Edge-IIoT dataset provide a comprehensive analysis of performance under class-imbalanced conditions, a significant challenge in IIoS security. In summary, the visual analyses demonstrate that DA-MBA delivers consistent and robust detection performance across both datasets, positioning it as a reliable, deployment-ready solution for safeguarding modern IIoS infrastructures.

5. Conclusions

This study presents DA-MBA, a lightweight yet highly effective intrusion detection framework designed for the Industrial Internet of Services. The proposed model combines a DAE with a hybrid MLP–BiLSTM classifier, achieving robust data modeling and reliable detection across diverse industrial traffic patterns. When evaluated on the renowned industrial datasets EDGE-IIoT and WUSTL-IIoT, DA-MBA achieved high ROC-AUC and PR-AUC values for most attack classes, even under strict zero-day conditions. Moreover, SHAP-based explainability analysis verifies that the model's decisions are supported by meaningful structural features, enhancing trust and accountability in practical cybersecurity scenarios. The results highlight DA-MBA as a practical, reliable, and computationally efficient approach for securing modern industrial ecosystems, offering both high detection accuracy and deployment potential on resource-limited industrial platforms.

Author Contributions

Conceptualization, G.Q. and S.C.; methodology, G.Q.; software, validation, and formal analysis, G.Q.; investigation, resources, data curation, writing—original draft preparation, G.Q.; writing—review and editing, G.Q. and S.C.; visualization, G.Q.; supervision, S.C.; project administration, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data available in a publicly accessible repository.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yalli, J.S.; Hasan, M.H.; Badawi, A. Internet of things (IoT): Origin, embedded technologies, smart applications and its growth in the last decade. IEEE Access 2024, 12, 91357–91382. [Google Scholar] [CrossRef]
  2. Pohan, M.M.; Soewito, B. Injection attack detection on internet of things device with machine learning method. Jurasik (J. Ris. Sist. Inf. Dan Tek. Inform.) 2023, 8, 204–212. [Google Scholar]
  3. Chui, M.; Collins, M.; Patel, M. The Internet of Things: Catching up to An Accelerating Opportunity; McKinsey &Company: New York, NY, USA, 2021. [Google Scholar]
  4. van den Berg, D.; Slooff, T.; Brohet, M.; Papagiannopoulos, K.; Regazzoni, F. Data Under Siege: The Quest for the Optimal Convolutional Autoencoder in Side-Channel Attacks. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Queensland, Australia, 18–23 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–9. [Google Scholar]
  5. Ibor, A.E.; Oladeji, F.A.; Okunoye, O.B.; Uwadia, C.O. Novel adaptive cyberattack prediction model using an enhanced genetic algorithm and deep learning (AdacDeep). Inf. Secur. J. A Glob. Perspect. 2022, 31, 105–124. [Google Scholar] [CrossRef]
  6. Brown, A.; Gupta, M.; Abdelsalam, M. Automated machine learning for deep learning based malware detection. Comput. Secur. 2024, 137, 103582. [Google Scholar] [CrossRef]
  7. Qaddos, A.; Yaseen, M.U.; Al-Shamayleh, A.S.; Imran, M.; Akhunzada, A.; Alharthi, S.Z. A novel intrusion detection framework for optimizing IoT security. Sci. Rep. 2024, 14, 21789. [Google Scholar] [CrossRef] [PubMed]
  8. Maaz, M.; Ahmed, G.; Al-Shamayleh, A.S.; Akhunzada, A.; Siddiqui, S.; Al-Ghushami, A.H. Empowering IoT resilience: Hybrid deep learning techniques for enhanced security. IEEE Access 2024, 12, 180597–180618. [Google Scholar] [CrossRef]
  9. Huma, Z.E.; Latif, S.; Ahmad, J.; Idrees, Z.; Ibrar, A.; Zou, Z.; Alqahtani, F.; Baothman, F. A hybrid deep random neural network for cyberattack detection in the industrial internet of things. IEEE Access 2021, 9, 55595–55605. [Google Scholar] [CrossRef]
  10. Jullian, O.; Otero, B.; Rodriguez, E.; Gutierrez, N.; Antona, H.; Canal, R. Deep-learning based detection for cyber-attacks in iot networks: A distributed attack detection framework. J. Netw. Syst. Manag. 2023, 31, 33. [Google Scholar] [CrossRef]
  11. Nandanwar, H.; Katarya, R. Deep learning enabled intrusion detection system for Industrial IOT environment. Expert Syst. Appl. 2024, 249, 123808. [Google Scholar] [CrossRef]
  12. Ahmadi Abkenari, F.; Milani Fard, A.; Khanchi, S. Hybrid Machine Learning-Based Approaches for Feature and Overfitting Reduction to Model Intrusion Patterns. J. Cybersecur. Priv. 2023, 3, 544–557. [Google Scholar] [CrossRef]
  13. Qureshi, S.; He, J.; Tunio, S.; Zhu, N.; Akhtar, F.; Ullah, F.; Nazir, A.; Wajahat, A. A hybrid DL-based detection mechanism for cyber threats in secure networks. IEEE Access 2021, 9, 73938–73947. [Google Scholar] [CrossRef]
  14. Al-Turaiki, I.; Altwaijry, N. A convolutional neural network for improved anomaly-based network intrusion detection. Big Data 2021, 9, 233–252. [Google Scholar] [CrossRef]
  15. Halbouni, A.; Gunawan, T.S.; Habaebi, M.H.; Halbouni, M.; Kartiwi, M.; Ahmad, R. CNN-LSTM: Hybrid deep neural network for network intrusion detection system. IEEE Access 2022, 10, 99837–99849. [Google Scholar] [CrossRef]
  16. Bibi, A.; Sampedro, G.A.; Almadhor, A.; Javed, A.R.; Kim, T.-H. A hypertuned lightweight and scalable LSTM model for hybrid network intrusion detection. Technologies 2023, 11, 121. [Google Scholar] [CrossRef]
  17. Ferrag, M.A.; Friha, O.; Hamouda, D.; Maglaras, L.; Janicke, H. Edge-IIoTset: A new comprehensive realistic cyber security dataset of IoT and IIoT applications for centralized and federated learning. IEEE Access 2022, 10, 40281–40306. [Google Scholar] [CrossRef]
  18. Alanazi, M.; Mahmood, A.; Chowdhury, M.J.M. ICS-LTU2022: A dataset for ICS vulnerabilities. Comput. Secur. 2025, 148, 104143. [Google Scholar] [CrossRef]
  19. Lamshöft, K.; Neubert, T.; Krätzer, C.; Vielhauer, C.; Dittmann, J. Information hiding in cyber physical systems: Challenges for embedding, retrieval and detection using sensor data of the SWAT dataset. In Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security, Online, 22–25 June 2021; pp. 113–124. [Google Scholar]
  20. Highnam, K.; Arulkumaran, K.; Hanif, Z.; Jennings, N.R. Beth dataset: Real cybersecurity data for unsupervised anomaly detection research. In Proceedings of the Conference on Applied Machine Learning for Information Security, Arlington, Virginia, 4–5 November 2021; pp. 1–12. [Google Scholar]
  21. Al-Hawawreh, M.; Sitnikova, E.; Aboutorab, N. X-IIoTID: A connectivity-agnostic and device-agnostic intrusion data set for industrial Internet of Things. IEEE Internet Things J. 2021, 9, 3962–3977. [Google Scholar] [CrossRef]
  22. Ismail, S.; Dandan, S.; Qushou, A.a. Intrusion Detection in IoT and IIoT: Comparing Lightweight Machine Learning Techniques Using TON_IoT, WUSTL-IIOT-2021, and EdgeIIoTset Datasets. IEEE Access 2025, 13, 73468–73485. [Google Scholar] [CrossRef]
  23. Moustafa, N.; Slay, J. UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 10–12 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  24. Akgun, D.; Hizal, S.; Cavusoglu, U. A new DDoS attacks intrusion detection model based on deep learning for cybersecurity. Comput. Secur. 2022, 118, 102748. [Google Scholar] [CrossRef]
  25. Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A detailed analysis of the KDD CUP 99 data set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1–6. [Google Scholar]
  26. Liu, L.; Engelen, G.; Lynar, T.; Essam, D.; Joosen, W. Error prevalence in nids datasets: A case study on cic-ids-2017 and cse-cic-ids-2018. In Proceedings of the 2022 IEEE Conference on Communications and Network Security (CNS), Austin, TX, USA, 2–5 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 254–262. [Google Scholar]
  27. Chetouane, A.; Karoui, K.; Nemri, G. An Intelligent ML-Based IDS Framework for DDoS Detection in the SDN Environment. In Proceedings of the International Conference on Advances in Mobile Computing and Multimedia Intelligence, Venice, Italy, 28–30 November 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 18–31. [Google Scholar]
  28. Bala, R.; Nagpal, R. A review on kdd cup99 and nsl nsl-kdd dataset. Int. J. Adv. Res. Comput. Sci. 2019, 10, 64–67. [Google Scholar] [CrossRef]
  29. Ahmed, Y.; Beyioku, K.; Yousefi, M. Securing smart cities through machine learning: A honeypot-driven approach to attack detection in Internet of Things ecosystems. IET Smart Cities 2024, 6, 180–198. [Google Scholar] [CrossRef]
  30. Soheily-Khah, S.; Marteau, P.-F.; Béchet, N. Intrusion detection in network systems through hybrid supervised and unsupervised mining process-a detailed case study on the ISCX benchmark dataset. In Proceedings of the 2018 1st International Conference on Data Intelligence and Security (ICDIS), South Padre Island, TX, USA, 8–10 April 2018. [Google Scholar]
  31. Lippmann, R.; Haines, J.W.; Fried, D.J.; Korba, J.; Das, K. The 1999 DARPA off-line intrusion detection evaluation. Comput. Netw. 2000, 34, 579–595. [Google Scholar] [CrossRef]
  32. Sekhar, C.; Rao, K.V.; Prasad, M.K. Classification of the DDoS Attack over Flash Crowd with DNN using World Cup 1998 and CAIDA 2007 Datasets. I-Manag. J. Softw. Eng. 2021, 15, 29. [Google Scholar] [CrossRef]
  33. Nigmatullin, R.; Ivchenko, A.; Dorokhin, S. Differentiation of sliding rescaled ranges: New approach to encrypted and VPN traffic detection. In Proceedings of the 2020 International Conference Engineering and Telecommunication (En&T), Online, 25–26 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5. [Google Scholar]
  34. Jose, J.; Jose, D.V. Deep learning algorithms for intrusion detection systems in internet of things using CIC-IDS 2017 dataset. Int. J. Electr. Comput. Eng. (IJECE) 2023, 13, 1134–1141. [Google Scholar] [CrossRef]
  35. Mirsky, Y.; Doitshman, T.; Elovici, Y.; Shabtai, A. Kitsune: An ensemble of autoencoders for online network intrusion detection. arXiv 2018, arXiv:1802.09089. [Google Scholar] [CrossRef]
  36. Lefoane, M.; Ghafir, I.; Kabir, S.; Awan, I.-U. Unsupervised learning for feature selection: A proposed solution for botnet detection in 5g networks. IEEE Trans. Ind. Inform. 2022, 19, 921–929. [Google Scholar] [CrossRef]
  37. Abbasi, F.; Naderan, M.; Alavi, S.E. Anomaly detection in Internet of Things using feature selection and classification based on Logistic Regression and Artificial Neural Network on N-BaIoT dataset. In Proceedings of the 2021 5th International Conference on Internet of Things and Applications (IoT), Isfahan, Iran, 19–20 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  38. Peterson, J.M.; Leevy, J.L.; Khoshgoftaar, T.M. A review and analysis of the bot-iot dataset. In Proceedings of the 2021 IEEE International Conference on Service-Oriented System Engineering (SOSE), Oxford, UK, 23–26 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 20–27. [Google Scholar]
  39. Daoudi, N.; Allix, K.; Bissyandé, T.F.; Klein, J. A deep dive inside drebin: An explorative analysis beyond android malware detection scores. ACM Trans. Priv. Secur. 2022, 25, 13. [Google Scholar] [CrossRef]
  40. Düzgün, B.; Çayır, A.; Demirkıran, F.; Kahya, C.N.; Gençaydın, B.; Dağ, H. Benchmark Static API Call Datasets for Malware Family Classification. arXiv 2021, arXiv:2111.15205. [Google Scholar]
  41. Shinde, O.; Khobragade, A.; Agrawal, P. Static malware detection of Ember windows-PE API call using machine learning. In Proceedings of the AIP Conference Proceedings, Computational Intelligence and Network Security, Raipur, India, 3–4 March 2022; AIP Publishing: Melville, NY, USA, 2023. [Google Scholar]
  42. Sharma, A.; Babbar, H. Detecting Cyber Threats in Real-Time: A Supervised Learning Perspective on the CTU-13 Dataset. In Proceedings of the 2024 5th International Conference for Emerging Technology (INCET), Belgaum, India, 24–26 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
  43. Letteri, I.; Della Penna, G.; Di Vita, L.; Grifa, M.T. MTA-KDD’19: A Dataset for Malware Traffic Detection. In Proceedings of the Italian Conference on Cybersecurity (ITASEC), Ancona, Italy, 4–7 February 2020; pp. 153–165. [Google Scholar]
  44. Nkongolo, M.; Van Deventer, J.P.; Kasongo, S.M. Ugransome1819: A novel dataset for anomaly detection and zero-day threats. Information 2021, 12, 405. [Google Scholar] [CrossRef]
  45. Gad, A.R.; Nashat, A.A.; Barkat, T.M. Intrusion detection system using machine learning for vehicular ad hoc networks based on ToN-IoT dataset. IEEE Access 2021, 9, 142206–142217. [Google Scholar] [CrossRef]
  46. Zachos, G.; Essop, I.; Mantas, G.; Porfyrakis, K.; Ribeiro, J.C.; Rodriguez, J. Generating IoT edge network datasets based on the TON_IoT telemetry dataset. In Proceedings of the 2021 IEEE 26th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Porto, Portugal, 25–27 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
  47. Rustam, F.; Salauddin, M.; Saeed, U.; Jurcut, A.D. Dual-Approach Machine Learning for Robust Cyber-Attack Detection in Water Distribution System. In Proceedings of the 14th International Conference on the Internet of Things, Oulu, Finland, 19–22 November 2024; pp. 248–254. [Google Scholar]
  48. Khan, A.A.Z. Misuse intrusion detection using machine learning for gas pipeline SCADA networks. In Proceedings of the International Conference on Security and Management (SAM), Las Vegas, NV, USA, 29 July–1 August 2019; The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing WorldComp: Las Vegas, NV, USA, 2019; pp. 84–90. [Google Scholar]
  49. Tseng, S.-M.; Wang, Y.-Q.; Wang, Y.-C. Multi-Class Intrusion Detection Based on Transformer for IoT Networks Using CIC-IoT-2023 Dataset. Future Internet 2024, 16, 284. [Google Scholar] [CrossRef]
  50. Wardhani, R.W.; Putranto, D.S.C.; Jo, U.; Kim, H. Toward enhanced attack detection and explanation in intrusion detection system-based IoT environment data. IEEE Access 2023, 11, 131661–131676. [Google Scholar]
  51. Stoian, N.-A. Machine Learning for Anomaly Detection in IoT Networks: Malware Analysis on the IoT-23 Data Set. Bachelor’s Thesis, University of Twente, Enschede, The Netherlands, 2020. [Google Scholar]
  52. Selvam, R.; Velliangiri, S. An improving intrusion detection model based on novel CNN technique using recent CIC-IDS datasets. In Proceedings of the 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT), Bengaluru, India, 15–16 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  53. Kalidindi, A.; Koti, B.R.; Srilakshmi, C.; Buddaraju, K.M.; Kandi, A.R.; Makutam, G.S.S. Advanced Machine Learning Techniques for Enhancing Network Intrusion Detection and Classification Using DarkNet CIC2020. Int. J. Online Biomed. Eng. 2024, 20, 141–154. [Google Scholar] [CrossRef]
  54. Salehiyan, A.; Moghaddam, P.S.; Kaveh, M. An Optimized Transformer–GAN–AE for Intrusion Detection in Edge and IIoT Systems: Experimental Insights from WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT Datasets. Future Internet 2025, 17, 279. [Google Scholar] [CrossRef]
  55. Angelin, J.A.B.; Priyadharsini, C. Deep learning based network based intrusion detection system in industrial Internet of Things. In Proceedings of the 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), Bengaluru, India, 4–6 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 426–432. [Google Scholar]
  56. Mohy-Eddine, M.; Guezzaz, A.; Benkirane, S.; Azrour, M. An effective intrusion detection approach based on ensemble learning for IIoT edge computing. J. Comput. Virol. Hacking Tech. 2023, 19, 469–481. [Google Scholar] [CrossRef]
  57. Aslam, S.; Alshoweky, M.M.; Saad, M. Binary and multiclass classification of attacks in Edge IIoT networks. In Proceedings of the 2024 Advances in Science and Engineering Technology International Conferences (ASET), Phoenix, AZ, USA, 25–27 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
Figure 1. Industry 4.0 Architecture.
Figure 2. Architecture of Smart Manufacturing.
Figure 3. Comprehensive List of Cyber Attacks.
Figure 4. Machine Learning process of Proposed Hybrid Model.
Figure 5. DAE—Feature Extraction Model.
Figure 6. WUSTL-IIoT SHAP.
Figure 7. Edge-IIoT SHAP.
Figure 8. Edge-IIoT Classifier Accuracy.
Figure 9. WUSTL-IIoT Classifier Accuracy.
Figure 10. WUSTL-IIoT PR Curve.
Figure 11. WUSTL-IIoT ROC Curve.
Figure 12. Edge-IIoT PR Curve.
Figure 13. Edge-IIoT ROC Curve.
Table 1. Comparison of recent existing studies.
Ref | Dataset | Contribution | Attack Type | Category of Attack Types | Feature Selection | Feature Extraction | AI/ML-Based Attack Detection Approach | Hyperparameter Tuning | Hybrid Approach | Accuracy/Results
[4] | ASCAD | Optimization of convolutional autoencoders for side-channel attacks | Side-channel attack (power analysis) | Network-based | - | Convolutional Autoencoder (CAE) | MLP, CNN, Template Attack (TA) | Optuna | No | 37% fewer traces needed for attack; trainable parameters reduced by a factor of 29
[5] | CICIDS2017, UNSW-NB15 | AdacDeep: Enhanced Genetic Algorithm + Deep Autoencoder + DFFNN | Multiple attack types (e.g., DDoS, DoS, brute force) | Device-based | - | Deep Autoencoder (DAE) | Deep Feedforward Neural Network (DFFNN) | EGA | Yes, Enhanced Genetic Algorithm + Deep Autoencoder + DFFNN | Accuracy improvements of 0.22% to 35%
[6] | SOREL-20M, EMBER-2018 | AutoML for deep learning-based malware detection in both static and online environments | Malware detection | Cloud-services-based | - | Deep learning-driven automated feature extraction | Feedforward Neural Network (FFNN), CNN | TPE | Yes, AutoML combining static, dynamic, and online analysis methods | 95.8% on EMBER-2018 and 99% on SOREL-20M
[7] | IoTID20, UNSW-NB15 | Hybrid CNN-GRU approach capturing spatial and temporal dependencies in the data | IoT-related intrusion detection | Network-based | PSO | CNN | CNN-GRU | Grid Search | Yes, CNN-GRU | 99.60% on IoTID20 and 99.14% on UNSW-NB15
[8] | Kitsune, TON_IoT | Two hybrid models (CNN-LSTM and CNN-GRU) to enhance IoT security | Multiple attacks (DDoS, Telnet, password, injection, backdoor) | Network-based | - | CNN | CNN-LSTM and CNN-GRU | Grid Search | Yes, CNN-LSTM and CNN-GRU | 99.6% on Kitsune and 99.00% on TON_IoT
[9] | DS2OS, UNSW-NB15 | Novel hybrid deep random neural network for cyber attack detection in IIoT | Multiple attacks (DoS, worms, scan, spying, fuzzers) | Network-based | ARM | DRaNN | HDRaNN | Manual | Yes, HDRaNN | 98% on DS2OS and 99% on UNSW-NB15
[10] | NSL-KDD, BoT-IoT | Distributed deep learning framework to simultaneously control various sources of vulnerability | Multiple attacks (DDoS, keylogging, data theft, U2R, R2L) | Network-based | - | Standard feature selection methods | FFNN and LSTM | Hyperband | No | Up to 99.95% accuracy across various setups
[11] | N-BaIoT | Robust AttackNet model for detecting various botnet attacks in IIoT | Multiple attacks (DDoS, malware, MiTM, zero-day) | Network-based | - | CNN | CNN-GRU | - | Yes, CNN-GRU | 99.75% accuracy across 10 classes
[12] | CSE-CIC-IDS2018 | Hybrid model leveraging autoencoders for feature extraction and DT classifiers for high accuracy and reduced overfitting | Multiple attacks (DDoS, malware, MiTM, phishing, supply chain, zero-day) | Network-based | Autoencoders | LASSO, Random Forest, Boruta | Decision Tree, Naïve Bayes, neural networks, ensemble methods | - | Yes, Decision Tree, Naïve Bayes, neural networks, Random Forest, XGBoost | Overall accuracy of around 94.54%
[13] | N-BaIoT | DNN for feature extraction with LSTM to manage sequential data | DDoS, Mirai, Gafgyt | Network-based | - | Implicit in the DNN layers | DNN-LSTM | - | Yes, Deep Neural Network (DNN) and Long Short-Term Memory (LSTM) | 99.96%
[14] | NSL-KDD, UNSW-NB15 | Hybrid pre-processing combining PCA and DFS-based feature engineering for network intrusion detection | Multiple attacks (DDoS, malware, MiTM, phishing, supply chain, zero-day) | Network-based | PCA | CNN | CNN, Naive Bayes, Random Forest, Decision Tree, AdaBoost, Bagging | Manual tuning | No | 90.14% on NSL-KDD; 95.7% on UNSW-NB15
[15] | CIC-IDS 2017, UNSW-NB15, WSN-DS | CNN-LSTM hybrid deep learning IDS merging the strengths of both algorithms | Multiple attacks (DDoS, malware, MiTM, brute force, web-based, worms, blackhole) | Network-based | Select-K-Best | CNN | CNN-LSTM | Manual tuning | Yes, CNN-LSTM | 99.6% on CIC-IDS, 93.7% on UNSW-NB15, 99.5% on WSN-DS
[16] | IoT-23, CICIDS2017, NSL-KDD | Merged LSTM and autoencoder (AE) for feature-rich, scalable attack detection | Probe, R2L, U2R, DDoS, botnet, HeartBeat | Network-based | PCC | AE | LSTM and AE | Manual tuning | Yes, LSTM and AE | 97.7% on NSL-KDD, 99% on CICIDS2017, 98.7% on IoT-23
Our Study | Edge-IIoTset, WUSTL-IIoT-2021 | Multi-branch hybrid model based on MLP and BiLSTM | DDoS, scanning, injection, brute force, infiltration, MiTM, privilege escalation | Network-based | DAE | DAE | Hybrid model based on MLP and BiLSTM | Optuna | Yes, DA-MBA | Almost 99% accuracy on both datasets
Table 2. Comprehensive List of Available Datasets.
Dataset Categorization | Available Datasets | Ref.
IIoT and ICS Datasets | Edge IIoT Dataset | [17]
 | ICS-LTU2022 | [18]
 | SWaT Dataset | [19]
 | BETH Dataset | [20]
 | X-IIoTID Dataset | [21]
 | WUSTL-IIoT-2021 | [22]
Network Traffic Datasets | UNSW-NB15 | [23]
 | CIC-DDoS 2019 | [24]
 | KDD Cup 1999 | [25]
 | CSE-CIC-IDS 2018 | [26]
 | SDN-DDOS-TCP-SYN | [27]
 | NSL-KDD Dataset | [28]
 | Canadian Institute of Cybersecurity (CIC) honeynet | [29]
 | ISCX IDS 2012 | [30]
 | DARPA 1999 | [31]
 | CAIDA 2007 | [32]
 | ISCXVPN 2016 | [33]
 | CIC-IDS 2017 | [34]
 | Kitsune Network Attack | [35]
 | NSS Mirai Dataset | [36]
Botnet Datasets | N-BaIoT Dataset | [37]
 | Bot-IoT Dataset | [38]
 | The Drebin Dataset | [39]
Malware Analysis Datasets | VirusShare Datasets | [40]
 | EMBER-2018 | [41]
 | CTU-13 Dataset | [42]
Anomaly Detection Datasets | MTA-KDD’19 Dataset | [43]
 | UGR’16 (UGRansome) | [44]
 | TON-IoT Dataset | [45]
Telemetry Datasets | TON_IoT Telemetry 2021 | [46]
 | BATADAL (Battle of the Attack Detection Algorithms) | [47]
Operational Technology (OT) Datasets | Gas Pipeline Dataset | [48]
 | CIC IoT 2023 | [49]
General IoT Datasets | IoTID2020 | [50]
 | IoT-23 | [51]
 | CICIDS 2017–CICIDS 2022 | [52]
 | CICDarknet 2020 | [53]
Table 3. Edge IIoT Dataset.
No. | Type of Attack | Training | Testing | Validation
1 | Backdoor | 17,444 | 3716 | 3702
2 | DDoS HTTP | 34,890 | 7466 | 7555
3 | DDoS ICMP | 81,598 | 17,389 | 17,449
4 | DDoS TCP | 34,897 | 7593 | 7572
5 | DDoS UDP | 84,965 | 18,346 | 18,257
6 | Fingerprinting | 705 | 154 | 142
7 | MITM (Encoded Value) | 840 | 189 | 185
8 | Normal | 1,130,977 | 242,569 | 242,097
9 | Password | 35,084 | 7466 | 7603
10 | Port Scanning | 15,774 | 3440 | 3350
11 | Ransomware | 7718 | 1597 | 1610
12 | SQL Injection | 35,973 | 7578 | 7652
13 | Uploading | 26,159 | 5637 | 5838
14 | Vulnerability Scanner | 35,284 | 7335 | 7491
15 | XSS | 11,132 | 2405 | 2378
16 | Total Attack Dataset | 343,363 | 102,311 | 102,784
Table 4. WUSTL IIoT Dataset.
No. | Type of Attack | Training | Testing | Validation
1 | Normal | 775,152 | 166,264 | 166,032
2 | DoS | 54,817 | 11,608 | 11,880
3 | Reconnaissance | 5825 | 1220 | 1195
4 | Command Injection | 190 | 40 | 34
5 | Backdoor | 140 | 38 | 29
6 | Total Attack Dataset | 60,972 | 12,906 | 13,138
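The splits in Tables 3 and 4 are heavily skewed toward Normal traffic, which is why the abstract cites class weighting as one of the techniques strengthening the dual-branch architecture. A minimal sketch of how balanced class weights could be derived for the WUSTL-IIoT distribution in Table 4 (the label encoding 0–4 here is hypothetical, not taken from the paper):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical integer labels mirroring the WUSTL-IIoT training split in Table 4:
# 0 = Normal, 1 = DoS, 2 = Reconnaissance, 3 = Command Injection, 4 = Backdoor
y_train = np.array([0] * 775_152 + [1] * 54_817 + [2] * 5_825 + [3] * 190 + [4] * 140)

classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
class_weight = dict(zip(classes.tolist(), weights.tolist()))
# "balanced" assigns n_samples / (n_classes * count_c) to class c, so the rare
# Command Injection and Backdoor classes receive weights orders of magnitude
# larger than Normal; the dict can then be passed to Keras via
# model.fit(..., class_weight=class_weight).
```

This is an illustration of the standard scikit-learn weighting scheme under the assumed label encoding, not the paper's exact preprocessing code.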
Table 5. System Aspects for ML Process.
Aspect | Specification | Version
Resources | Processor | Intel(R) Core(TM) Processor
 | Generation | 13th
 | OS | Microsoft Windows 10 Enterprise
 | RAM | 32 GB
 | GPU | NVIDIA RTX A2000 12 GB
Environment | PyCharm IDE | PyCharm 2024.1 (Professional Edition)
 | Python Language | 3.9.18
Libraries | Pandas | 2.2.2
 | Numpy | 1.26.4
 | Tensorflow | 2.16.1
 | Scikit-Learn | 1.4.2
 | Keras | 3.3.3
 | Matplotlib | 3.9.0
 | Seaborn | 0.13.2
Table 6. Optuna Hyperparameters and Search Configurations.
Category | Hyperparameter | Model Component | Configuration
Architecture | Encoding Dimension | DAE | 32–256
 | Top-K MI | Preprocessing | 16–128
 | MLP Hidden Units | Classifier | {64, 128, 192}
 | BiLSTM Units | Classifier | {32, 64, 96}
Noise and Regularization | Noise Factor | DAE | 0.05–0.20
 | L2 Regularization | DAE and Classifier | 1 × 10⁻⁶ – 1 × 10⁻³
 | Dropout Rate (Dense) | Classifier | 0.2–0.6
 | Dropout Rate (LSTM) | Classifier | 0.2–0.6
Training | Batch Size | DAE and Classifier | {512, 1024, 2048, 4096, 8192}
 | Epochs | DAE and Classifier | 5–30 (DAE), 10–60 (Classifier)
 | Early Stopping Patience | DAE and Classifier | 10
 | Learning Rate | Classifier | 3 × 10⁻⁵ – 5 × 10⁻⁴
 | Optimizer | DAE and Classifier | Adam
Loss Function | Loss Type | DAE and Classifier | MSE (DAE), BCE (Classifier)
Table 7. Results and Comparisons.
Model | Dataset | Accuracy | Loss | Precision | Recall | F1 Score | Time per Attack Detection (s)
MLP | WUSTL | 0.9279 | 0.6878 | 0.9399 | 0.9323 | 0.9234 | 0.000027
MLP | Edge-IIoT | 0.9009 | 0.3172 | 0.8946 | 0.9146 | 0.9045 | 0.000029
BiLSTM | WUSTL | 0.9698 | 0.1134 | 0.9660 | 0.9754 | 0.9707 | 0.000020
BiLSTM | Edge-IIoT | 0.9717 | 0.1029 | 0.9655 | 0.9798 | 0.9726 | 0.000019
Proposed Model | WUSTL | 0.9948 | 0.0297 | 0.9952 | 0.9917 | 0.9929 | 0.000050
Proposed Model | Edge-IIoT | 0.9984 | 0.0408 | 0.9999 | 1.0 | 0.9999 | 0.000026
Table 8. Comparison with other methods.
Ref. | Dataset | Model | Accuracy | Precision | Recall | F1 Score
[54] 2025 | WUSTL | GAN-AE | 0.9786 | - | 0.9863 | -
[54] 2025 | Edge IIoT | GAN-AE | 0.9863 | - | 0.9879 | -
[22] 2025 | WUSTL | DT | 0.9625 | - | - | -
[55] 2024 | Edge IIoT | CNN-AE | 0.9234 | 0.9028 | 0.9169 | 0.8908
[56] 2023 | Bot-IoT | RF | 0.9429 | - | - | -
[57] 2024 | Edge IIoT | RNN | 0.9200 | 0.8400 | 1.0 | 0.9200
Proposed Model | WUSTL | DA-MBA | 0.9948 | 0.9952 | 0.9917 | 0.9929
Proposed Model | Edge IIoT | DA-MBA | 0.9984 | 0.9999 | 1.0 | 0.9999
Table 9. Edge IIoT Zero-Day Evaluation.
Attack Type—Edge IIoT | ROC-AUC | PR-AUC | Recall | Mixed ROC-AUC | Mixed PR-AUC | Unknown-Only Recall @ 0.5
Backdoor | 0.9993 | 0.9990 | 1.0 | 0.9999 | 0.9998 | 1.0
DDoS_HTTP | 0.9995 | 0.9995 | 0.9997 | 0.9996 | 0.9997 | 0.99997
DDoS_ICMP | 0.9997 | 0.9999 | 1.0 | 0.9998 | 0.9999 | 1.0
DDoS_TCP | 0.9997 | 0.9994 | 1.0 | 0.9997 | 0.9997 | 1.0
DDoS_UDP | 0.9993 | 0.9998 | 1.0 | 0.9999 | 0.9998 | 1.0
Fingerprinting | 0.9999 | 0.98457 | 1.0 | 0.9999 | 0.9996 | 1.0
MITM | 0.9996 | 0.9883 | 0.8130 | 0.9999 | 0.9997 | 0.8130
Password | 0.9997 | 0.9992 | 1.0 | 0.9999 | 0.9997 | 1.0
Port_Scanning | 0.9999 | 0.9989 | 1.0 | 0.9997 | 0.9997 | 1.0
Ransomware | 0.9999 | 0.9982 | 1.0 | 0.9992 | 0.9995 | 1.0
SQL_injection | 0.9999 | 0.99929 | 1.0 | 0.9996 | 0.9997 | 1.0
Uploading | 0.9999 | 0.9990 | 1.0 | 0.99994 | 0.9998 | 1.0
Scanner | 0.9999 | 0.9993 | 1.0 | 0.9997 | 0.9997 | 1.0
XSS | 0.9999 | 0.9983 | 1.0 | 0.99994 | 0.9996 | 1.0
Table 10. WUSTL IIoT Zero-Day Evaluation.
Attack Type—WUSTL IIoT | ROC-AUC | PR-AUC | Recall | Mixed ROC-AUC | Mixed PR-AUC | Unknown-Only Recall @ 0.5
Backdoor | 0.99810 | 0.71320 | 0.76415 | 0.99979 | 0.99356 | 0.70283
CommInj | 0.99858 | 0.80843 | 0.80309 | 0.99970 | 0.99516 | 0.78764
DoS | 0.99911 | 0.99771 | 0.94994 | 0.99906 | 0.99623 | 0.95486
Reconn | 0.94678 | 0.86768 | 0.72099 | 0.98152 | 0.96849 | 0.75121
Table 11. DA-MBA Model Complexity.
Dataset | Parameters | FP32 Model Size (MB)
Edge-IIoT | 90,881 | 0.347
WUSTL-IIoT | 90,881 | 0.347
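The FP32 model size in Table 11 follows directly from the parameter count, since each FP32 parameter occupies 4 bytes. A one-line check of the arithmetic:

```python
params = 90_881
size_mb = params * 4 / (1024 ** 2)  # FP32 = 4 bytes per parameter
# size_mb ≈ 0.347 MB, matching Table 11 for both datasets, since the same
# DA-MBA architecture (and hence the same parameter count) is used for each.
```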
Table 12. Summarized Batched Inference Latency Profile.
Dataset | Samples Measured | p50 | p90 | p95 | p99 | Min | Max | Approx. Throughput (Samples/s)
EDGE-IIoT | 443,841 | 0.00423 (4.23 μs) | 0.00505 (5.05 μs) | 0.00528 (5.28 μs) | 0.00546 (5.46 μs) | 0.00357 (3.57 μs) | 0.00550 (5.50 μs) | 236,295
WUSTL-IIoT | 238,893 | 0.00235 (2.35 μs) | 0.00276 (2.76 μs) | 0.00289 (2.89 μs) | 0.00299 (2.99 μs) | 0.00210 (2.10 μs) | 0.00302 (3.02 μs) | 425,578
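The percentile figures in Table 12 are order statistics over per-sample latencies in milliseconds, and the throughput column is consistent with roughly 1000 / p50 samples per second. A minimal sketch of how such a profile can be computed, assuming an array of measured per-sample latencies (the synthetic data here is illustrative, not the paper's measurements):

```python
import numpy as np

def latency_profile(latencies_ms: np.ndarray) -> dict:
    """Summarize per-sample inference latencies (ms) as in Table 12."""
    p50 = float(np.percentile(latencies_ms, 50))
    return {
        "p50": p50,
        "p90": float(np.percentile(latencies_ms, 90)),
        "p95": float(np.percentile(latencies_ms, 95)),
        "p99": float(np.percentile(latencies_ms, 99)),
        "min": float(latencies_ms.min()),
        "max": float(latencies_ms.max()),
        # Approximate throughput: samples/s at the median per-sample latency.
        "throughput": 1000.0 / p50,
    }

rng = np.random.default_rng(0)
# Synthetic latencies (ms), roughly spanning the Edge-IIoT range in Table 12.
sim = rng.uniform(0.0036, 0.0055, size=100_000)
profile = latency_profile(sim)
```

With the Edge-IIoT median of 0.00423 ms, this formula gives 1000 / 0.00423 ≈ 236,407 samples/s, in line with the reported 236,295.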
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Qaiser, G.; Chandrasekaran, S. Denoising Adaptive Multi-Branch Architecture for Detecting Cyber Attacks in Industrial Internet of Services. J. Cybersecur. Priv. 2026, 6, 26. https://doi.org/10.3390/jcp6010026
