# FLDID: Federated Learning Enabled Deep Intrusion Detection in Smart Manufacturing Industries


## Abstract


## 1. Introduction

#### Vulnerabilities in Smart Manufacturing

- Developed a federated learning-enabled framework to construct a comprehensive intrusion detection model, which collaboratively trains on data from multiple industries without any party disclosing its data to the others. Because the data never leaves an industry's premises, data privacy is preserved.
- Proposed a Deep Intrusion Detection (DID) model for cyber threat detection in SM, built from CNN, LSTM, and MLP layers. The model is shown to detect cyber threats efficiently and is incorporated into the federated learning framework.
- Proposed Paillier-based encryption to secure communication throughout the training process, safeguarding the privacy of model gradients and countering threats against the federated learning framework.
- Carried out tests on an IIoT-based dataset using the proposed FLDID framework to demonstrate its effectiveness in an industrial environment as well.

## 2. Related Work

## 3. System Model, Assumptions and Threat Model

#### 3.1. System Model

- Edge Devices: Each edge device represents an SM industry taking part in collaborative learning. A device holds the local data collected from its associated SM industry and is responsible for building the local DID model from this data. It also communicates with the cloud server: it sends its local model parameters, receives the aggregated model parameters back, and resumes training on the received model. These steps repeat until convergence. The local model resides on the edge device, which performs intrusion detection on its local data. In this work, we use a CNN+LSTM+MLP-based DL model for intrusion detection.
- Cloud Server: It is responsible for building the DID model collaboratively in a federated way. The cloud server has two major functions: (a) initializing the global model parameters and sharing them with the edge devices, and (b) aggregating the parameters uploaded by the edge devices and sending the result back until the model converges.
- Key Management Centre: The KMC is responsible for ensuring secure communication between the cloud server and the edge devices. It establishes a secure channel between them using a Paillier-based cryptosystem; the Paillier cryptosystem is a partially homomorphic encryption scheme that allows operations on encrypted data. The KMC is also responsible for generating the public and private keys used in the Paillier cryptosystem.

#### 3.2. Assumptions

- The cloud server is considered honest-but-curious: it faithfully performs its assigned task but is interested in learning the model gradients.
- The KMC is assumed to be a fully honest party, ensuring secure communication between the edge devices and the cloud server.
- Edge devices are considered partially trustworthy: they follow the protocol but may be curious about the data resources of other edge devices.

#### 3.3. Threat Model

## 4. Materials and Methods

**Algorithm 1** Secured FL procedure

**Input:** Set of edge devices ${E}_{n}$ and their associated local data ${D}_{n}\mid n\in N$; number of communication rounds R

**Output:** Global DID model

1: Cloud server initializes model parameters ${W}_{0}$ and $\eta ,B,\zeta ,\rho ,\tau $;

2: Each ${E}_{n}$ reports its data size $|{D}_{n}|$ to the server;

3: Server computes the contribution ratio ${\alpha}_{n}=|{D}_{n}|/(|{D}_{1}|+|{D}_{2}|+\dots +|{D}_{N}|)$;

4: For each ${E}_{n}$, the KMC generates a key pair $(PubK,PriK)$ using $KeyGen\left(n\right)$ and establishes a secure channel between each edge device and the server;

5: For r = 1 to R:

(i) ${E}_{n}$ computes the ${r}^{th}$-round model gradients ${W}_{n}^{r}$ using Algorithm 2;

(ii) Encrypt the model gradients ${W}_{n}^{r}$ as $E\left({W}_{n}^{r}\right)=Encrypt\_grad({W}_{n}^{r},PubK)$;

(iii) Upload $E\left({W}_{n}^{r}\right)$ to the cloud server;

End

6: Cloud server aggregates the $E\left({W}_{n}^{r}\right)$ as $C=Aggr\_grad(E\left({W}_{1}^{r}\right),E\left({W}_{2}^{r}\right),\dots ,E\left({W}_{N}^{r}\right),{\alpha}_{1},\dots ,{\alpha}_{N})$;

7: Server distributes C to all edge devices ${E}_{n}$;

8: To obtain the new global model, ${E}_{n}$ performs decryption as $\tilde{{W}^{r}}=Decrypt\_grad(C,PriK)$;

9: ${E}_{n}$ updates its model gradients using $\tilde{{W}^{r}}$;

10: $r\leftarrow r+1$;

**Step 1 (Model initialization):** In this step, the server selects the initial parameters ${W}_{0}$ of the DID model and sends them to the edge devices, along with the other parameters required for training: batch size $B$, learning rate $\eta$, loss function $\zeta$, momentum $\rho$, and decay $\tau$. Moreover, each edge device ${E}_{n}$ associated with an SM industry ${S}_{n}$, where $n\in \{1,2,\dots ,N\}$, informs the server of the size of the data ${D}_{n}$ it holds for that industry; this allows the cloud server to calculate the contribution ratio ${\alpha}_{n}$ of each edge device. A positive integer R defines the number of communication rounds between the edge devices and the cloud server.

**Step 2 (Key generation):** Next, the KMC generates the public key $PubK$ and the private key $PriK$ using $KeyGen\left(n\right)$; these keys are used in Paillier encryption to establish a secure path between the server and each edge device.

**Step 3 (Local model training):** Based on the parameters ${W}_{0}$, $\eta ,B,\zeta ,\rho ,\tau$ received from the cloud server, each edge device trains the model on its local data ${D}_{n}$. The model used for training at the edge devices is described in Section 4.1: a hybrid CNN+LSTM+MLP model for detecting intrusions in smart manufacturing. The training process is elaborated in Algorithm 2.

**Step 4 (Gradient encryption):** After training on its local data, each edge device ${E}_{n}$ encrypts the model gradients ${W}_{n}^{r}$ using $Encrypt\_grad({W}_{n}^{r},PubK)$, where ${W}_{n}^{r}$ are the gradients after training at edge device ${E}_{n}$ in the ${r}^{th}$ round. The encrypted gradients $E\left({W}_{n}^{r}\right)$ of each local model are then sent to the cloud for aggregation into the comprehensive global model.

**Step 5 (Global model construction):** The cloud server aggregates the encrypted model gradients received from every edge device participating in collaborative learning using $Aggr\_grad(E\left({W}_{1}^{r}\right),E\left({W}_{2}^{r}\right),\dots ,E\left({W}_{N}^{r}\right),{\alpha}_{1},\dots ,{\alpha}_{N})$, where ${\alpha}_{n}$ is the contribution ratio of each edge device. The aggregated encrypted gradients are then sent back to the edge devices as a ciphertext C.

**Step 6 (Local model update):** On receiving the global model (the aggregated gradients) as a ciphertext, each edge device performs decryption $Decrypt\_grad(C,PriK)$ using the private key. With the decrypted gradients $\tilde{{W}^{r}}$, the local DID models are updated and retrained on their local data.
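The contribution-ratio and aggregation computations in Steps 1 and 5 can be sketched in plain Python. The function names here are illustrative, not from the paper's implementation, and Paillier encryption is omitted so the weighted averaging itself is visible:

```python
# Sketch of contribution ratios alpha_n = |D_n| / sum_k |D_k| and the
# alpha-weighted aggregation of per-device gradients (encryption omitted).

def contribution_ratios(data_sizes):
    """alpha_n = |D_n| / (|D_1| + ... + |D_N|)."""
    total = sum(data_sizes)
    return [size / total for size in data_sizes]

def aggregate_gradients(gradients, ratios):
    """Weighted sum over devices; each gradient is a flat list of floats."""
    dim = len(gradients[0])
    return [sum(a * g[i] for a, g in zip(ratios, gradients)) for i in range(dim)]

# Two edge devices holding 600 and 400 samples -> alpha = [0.6, 0.4].
alphas = contribution_ratios([600, 400])
global_grad = aggregate_gradients([[1.0, 2.0], [3.0, 4.0]], alphas)
print(alphas)       # [0.6, 0.4]
print(global_grad)  # close to [1.8, 2.8]
```

Because the aggregation is a linear combination, it is exactly the operation that a partially homomorphic scheme such as Paillier can evaluate on ciphertexts.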

**Algorithm 2** DL model training

**Input:** ${W}_{0},\eta ,B,\zeta ,\rho ,\tau $

**Output:** ${W}_{n}^{r}$

1: Divide ${D}_{n}$ into equal-size batches of size B with feature vector x;

2: Set ${W}_{n}^{r}$ to initial values;

3: For each batch:

${c}_{1}\leftarrow $ Forward x to $Con{v}_{1}$;

${c}_{2}\leftarrow $ Forward ${c}_{1}$ to $Con{v}_{2}$;

$\lambda \leftarrow $ Flatten $\left({c}_{2}\right)$;

${H}^{\prime}\leftarrow $ Forward $\lambda $ to $LST{M}_{1}$;

$\mu \leftarrow $ Forward ${H}^{\prime}$ to $LST{M}_{2}$;

$M\leftarrow $ Forward $\mu $ to $Dense$;

$\gamma \leftarrow $ Dropout $\left(M\right)$;

$\nu \leftarrow $ Forward $\gamma $ to Output(Sigmoid);

4: Compute the loss function:

$\zeta =-\frac{1}{B}\sum_{i=1}^{B}\left[{x}_{i}\log {\widehat{x}}_{i}+(1-{x}_{i})\log (1-{\widehat{x}}_{i})\right]$;

5: Update ${W}_{n}^{r}$;

6: Repeat until $\zeta $ converges;
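The loss in step 4 is the batch binary cross-entropy, where ${x}_{i}$ is the true label and ${\widehat{x}}_{i}$ the sigmoid output. A quick numeric check, using illustrative values rather than the paper's data:

```python
# zeta = -(1/B) * sum_i [ x_i*log(x_hat_i) + (1 - x_i)*log(1 - x_hat_i) ]
import math

def binary_cross_entropy(labels, preds):
    B = len(labels)
    return -sum(x * math.log(p) + (1 - x) * math.log(1 - p)
                for x, p in zip(labels, preds)) / B

# A confident, mostly correct batch yields a small loss; predictions
# leaning toward the wrong class increase it.
good = binary_cross_entropy([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
bad = binary_cross_entropy([1, 0, 1, 0], [0.4, 0.6, 0.4, 0.6])
print(good < bad)  # True
```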

#### 4.1. Proposed Deep Intrusion Detection Model

#### 4.1.1. Pre-Processing Unit

#### 4.1.2. CNN Unit

#### 4.1.3. LSTM Unit

#### 4.1.4. MLP Unit

#### 4.2. Encryption Method for Secure Communication

- $KeyGen$: The key management centre generates the public key $PubK=(p,q)$ and the private key $PriK=(\delta ,\kappa )$ using the Paillier cryptosystem as described in [40].
- $Encrypt\_grad$: The model gradients ${W}_{n}^{r}$ are encrypted using $PubK=(p,q)$, yielding $E\left({W}_{n}^{r}\right)$. For example, if m is the plaintext, r a random value, and C the ciphertext, encryption is expressed as:$$C={q}^{m}\cdot {r}^{p}\bmod {p}^{2}$$
- $Decrypt\_grad$: Each edge device performs decryption upon receiving the ciphertext C from the cloud server and retrieves the updated gradients $\tilde{{W}^{r}}$. Decryption of C with the private key $PriK=(\delta ,\kappa )$ recovers the plaintext m:$$m=\frac{L\left({C}^{\delta}\bmod {p}^{2}\right)}{L\left({q}^{\delta}\bmod {p}^{2}\right)}\bmod p$$
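The additive homomorphism that the aggregation step relies on can be demonstrated with a minimal textbook Paillier implementation [40]. This sketch uses the standard $(n, g)$ notation (the paper's $(p,q)$ and $(\delta ,\kappa )$ symbols map onto these quantities) and tiny primes for readability; it is not the paper's implementation and is not secure:

```python
# Toy Paillier: the product of two ciphertexts decrypts to the SUM of the
# plaintexts, which is exactly what encrypted gradient aggregation needs.
import math
import random

def keygen(p=293, q=433):                   # small demo primes only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                # common choice of generator
    mu = pow(lam, -1, n)                     # with g = n+1, L(g^lam mod n^2) = lam
    return (n, g), (lam, mu)

def L(u, n):
    return (u - 1) // n

def encrypt(m, pub):
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:              # r must be invertible mod n
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(c, pub, priv):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen()
c = (encrypt(37, pub) * encrypt(5, pub)) % (pub[0] ** 2)
print(decrypt(c, pub, priv))  # 42: E(37)*E(5) decrypts to 37+5
```

This additive property is what lets the cloud server compute the aggregated gradients without ever seeing any device's plaintext gradients.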

#### Analysis of Encryption Scheme

## 5. Results and Discussion

#### 5.1. Environmental Setup and Parameters

#### 5.2. Dataset Description

#### 5.3. Performance Metrics Used

- TP: count of attack requests correctly predicted as attack;
- TN: count of benign samples correctly predicted as benign;
- FP: count of benign requests falsely predicted as attack;
- FN: count of attack requests falsely predicted as benign.

**Accuracy:** the fraction of attack and benign requests predicted correctly:$$\mathrm{Accuracy}=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}$$**Recall:** the ratio of correctly predicted attacks to all actual attacks:$$\mathrm{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$$**Precision:** the ratio of correctly predicted attacks to all requests predicted as attack:$$\mathrm{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$**F1-Score:** a measure of the test's accuracy, computed as the harmonic mean of Precision and Recall:$$\mathrm{F}1\text{-}\mathrm{Score}=2\cdot \frac{\mathrm{Recall}\cdot \mathrm{Precision}}{\mathrm{Recall}+\mathrm{Precision}}$$
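The four metrics follow directly from the confusion counts. A small worked example with hypothetical counts (not taken from the paper's experiments):

```python
# Compute accuracy, precision, recall, and F1 from TP/TN/FP/FN counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                  # caught attacks / all attacks
    precision = tp / (tp + fp)               # caught attacks / raised alarms
    f1 = 2 * (recall * precision) / (recall + precision)
    return accuracy, precision, recall, f1

# 90 attacks caught, 10 missed, 5 false alarms among 100 benign requests.
acc, prec, rec, f1 = metrics(tp=90, tn=95, fp=5, fn=10)
print(acc)  # 0.925
print(rec)  # 0.9
```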

#### 5.4. Result Evaluation

#### 5.4.1. Performance Comparison with Centralized and Isolated Models

#### 5.4.2. Performance Comparison with Baseline Studies

#### 5.4.3. Performance Comparison of Proposed Model with ML Classifiers

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Expansion |
|---|---|
| AI | Artificial Intelligence |
| CNN | Convolutional Neural Network |
| CPS | Cyber Physical Systems |
| C&C | Command and Control |
| DDoS | Distributed Denial of Service |
| DID | Deep Intrusion Detection |
| DL | Deep Learning |
| DoS | Denial of Service |
| FDAGMM | Federated Deep Autoencoding Gaussian Mixture Model |
| FL | Federated Learning |
| FLDID | Federated Learning enabled Deep Intrusion Detection |
| FN | False Negative |
| FP | False Positive |
| GRU | Gated Recurrent Unit |
| IDS | Intrusion Detection Systems |
| IIoT | Industrial Internet of Things |
| KMC | Key Management Centre |
| LSTM | Long Short-Term Memory |
| ML | Machine Learning |
| MLP | Multi-Layer Perceptron |
| MT-DNN-FL | Multi-Task Deep Neural Network in Federated Learning |
| RDoS | Ransom DoS |
| RNN | Recurrent Neural Network |
| SM | Smart Manufacturing |
| TN | True Negative |
| TP | True Positive |

## References

1. Tuptuk, N.; Hailes, S. Security of smart manufacturing systems. J. Manuf. Syst. 2018, 47, 93–106.
2. Sari, A.; Lekidis, A.; Butun, I. Industrial networks and IIoT: Now and future trends. In Industrial IoT; Springer: Berlin/Heidelberg, Germany, 2020; pp. 3–55.
3. Bao, H.; Lu, R.; Li, B.; Deng, R. BLITHE: Behavior rule-based insider threat detection for smart grid. IEEE Internet Things J. 2015, 3, 190–205.
4. Hao, M.; Li, H.; Luo, X.; Xu, G.; Yang, H.; Liu, S. Efficient and privacy-enhanced federated learning for industrial artificial intelligence. IEEE Trans. Ind. Inform. 2019, 16, 6532–6542.
5. Li, B.; Lu, R.; Wang, W.; Choo, K.K.R. DDOA: A Dirichlet-based detection scheme for opportunistic attacks in smart grid cyber-physical system. IEEE Trans. Inf. Forensics Secur. 2016, 11, 2415–2425.
6. Dhirani, L.L.; Armstrong, E.; Newe, T. Industrial IoT, cyber threats, and standards landscape: Evaluation and roadmap. Sensors 2021, 21, 3901.
7. Flechais, I.; Sasse, M.A.; Hailes, S.M. Bringing security home: A process for developing secure and usable systems. In Proceedings of the NSPW03: New Security Paradigms and Workshop, Ascona, Switzerland, 18–21 August 2003; pp. 49–57.
8. Ismail, M.; Shaaban, M.F.; Naidu, M.; Serpedin, E. Deep learning detection of electricity theft cyber-attacks in renewable distributed generation. IEEE Trans. Smart Grid 2020, 11, 3428–3437.
9. Yang, J.; Zhou, C.; Yang, S.; Xu, H.; Hu, B. Anomaly detection based on zone partition for security protection of industrial cyber-physical systems. IEEE Trans. Ind. Electron. 2017, 65, 4257–4267.
10. Tuptuk, N.; Hailes, S. The Cyberattack on Ukraine’s Power Grid Is a Warning of What’s to Come. Conversation 2016, 52832, 847–855. Available online: http://theconversation.com/thecyberattack-on-ukraines-power-grid-is-a-warning-ofwhats-tocome (accessed on 20 July 2022).
11. Maiziere, T. Die Lage Der It-Sicherheit in Deutschland 2014; Bundesamt für Sicherheit in der Informationstechnik: Bonn, Germany, 2014.
12. Zhang, K.; Zhu, Y.; Maharjan, S.; Zhang, Y. Edge intelligence and blockchain empowered 5G beyond for the industrial Internet of Things. IEEE Netw. 2019, 33, 12–19.
13. Qiu, C.; Yu, F.R.; Yao, H.; Jiang, C.; Xu, F.; Zhao, C. Blockchain-based software-defined industrial Internet of Things: A dueling deep Q-learning approach. IEEE Internet Things J. 2018, 6, 4627–4639.
14. Savazzi, S.; Nicoli, M.; Rampa, V. Federated learning with cooperating devices: A consensus approach for massive IoT networks. IEEE Internet Things J. 2020, 7, 4641–4654.
15. Samarakoon, S.; Bennis, M.; Saad, W.; Debbah, M. Federated learning for ultra-reliable low-latency V2V communications. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–7.
16. Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221.
17. Qi, J.; Yang, P.; Newcombe, L.; Peng, X.; Yang, Y.; Zhao, Z. An overview of data fusion techniques for Internet of Things enabled physical activity recognition and measure. Inf. Fusion 2020, 55, 269–280.
18. Qi, J.; Yang, P.; Waraich, A.; Deng, Z.; Zhao, Y.; Yang, Y. Examining sensor-based physical activity recognition and monitoring for healthcare using Internet of Things: A systematic review. J. Biomed. Inform. 2018, 87, 138–153.
19. Huang, C.; Min, G.; Wu, Y.; Ying, Y.; Pei, K.; Xiang, Z. Time series anomaly detection for trustworthy services in cloud computing systems. IEEE Trans. Big Data 2017, 8, 60–72.
20. Meng, W.; Tischhauser, E.W.; Wang, Q.; Wang, Y.; Han, J. When intrusion detection meets blockchain technology: A review. IEEE Access 2018, 6, 10179–10188.
21. Shone, N.; Ngoc, T.N.; Phai, V.D.; Shi, Q. A deep learning approach to network intrusion detection. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 41–50.
22. Diro, A.; Chilamkurti, N.; Nguyen, V.D.; Heyne, W. A Comprehensive Study of Anomaly Detection Schemes in IoT Networks Using Machine Learning Algorithms. Sensors 2021, 21, 8320.
23. Taghavinejad, S.M.; Taghavinejad, M.; Shahmiri, L.; Zavvar, M.; Zavvar, M.H. Intrusion detection in IoT-based smart grid using hybrid decision tree. In Proceedings of the 2020 6th International Conference on Web Research (ICWR), Tehran, Iran, 22–23 April 2020; pp. 152–156.
24. Wu, D.; Jiang, Z.; Xie, X.; Wei, X.; Yu, W.; Li, R. LSTM learning with Bayesian and Gaussian processing for anomaly detection in industrial IoT. IEEE Trans. Ind. Inform. 2019, 16, 5244–5253.
25. Doshi, R.; Apthorpe, N.; Feamster, N. Machine learning ddos detection for consumer internet of things devices. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 29–35.
26. Zuo, Y.; Wu, Y.; Min, G.; Huang, C.; Pei, K. An intelligent anomaly detection scheme for micro-services architectures with temporal and spatial data analysis. IEEE Trans. Cogn. Commun. Netw. 2020, 6, 548–561.
27. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237.
28. Ma, T.; Wang, F.; Cheng, J.; Yu, Y.; Chen, X. A hybrid spectral clustering and deep neural network ensemble algorithm for intrusion detection in sensor networks. Sensors 2016, 16, 1701.
29. Brun, O.; Yin, Y.; Gelenbe, E.; Kadioglu, Y.M.; Augusto-Gonzalez, J.; Ramos, M. Deep learning with dense random neural networks for detecting attacks against IoT-connected home environments. In Proceedings of the First International ISCIS Security Workshop 2018, London, UK, 26–27 February 2018; pp. 79–89.
30. Potluri, S.; Diedrich, C. Accelerated deep neural networks for enhanced intrusion detection system. In Proceedings of the 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), Berlin, Germany, 6–9 September 2016; pp. 1–8.
31. Rey, V.; Sánchez, P.M.S.; Celdrán, A.H.; Bovet, G. Federated learning for malware detection in iot devices. Comput. Netw. 2022, 204, 108693.
32. Nguyen, T.D.; Marchal, S.; Miettinen, M.; Fereidooni, H.; Asokan, N.; Sadeghi, A.R. DÏoT: A federated self-learning anomaly detection system for IoT. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019; pp. 756–767.
33. Zhao, Y.; Chen, J.; Wu, D.; Teng, J.; Yu, S. Multi-task network anomaly detection using federated learning. In Proceedings of the SoICT 2019: The Tenth International Symposium on Information and Communication Technology, Hanoi Ha Long Bay, Vietnam, 4–6 December 2019; pp. 273–279.
34. Chen, Y.; Zhang, J.; Yeo, C.K. Network anomaly detection using federated deep autoencoding Gaussian mixture model. In Proceedings of the International Conference on Machine Learning for Networking, Paris, France, 3–5 December 2019; pp. 1–14.
35. Al-Hawawreh, M.; Sitnikova, E.; Aboutorab, N. Asynchronous Peer-to-Peer Federated Capability-Based Targeted Ransomware Detection Model for Industrial IoT. IEEE Access 2021, 9, 148738–148755.
36. Makkar, A.; Kim, T.W.; Singh, A.K.; Kang, J.; Park, J.H. SecureIIoT Environment: Federated Learning empowered approach for Securing IIoT from Data Breach. IEEE Trans. Ind. Inform. 2022, 18, 6406–6414.
37. Al-Hawawreh, M.; Sitnikova, E.; Aboutorab, N. X-IIoTID: A connectivity-agnostic and device-agnostic intrusion data set for industrial Internet of Things. IEEE Internet Things J. 2021, 9, 3962–3977.
38. Patro, S.; Sahu, K.K. Normalization: A preprocessing stage. arXiv 2015, arXiv:1503.06462.
39. Sun, P.; Liu, P.; Li, Q.; Liu, C.; Lu, X.; Hao, R.; Chen, J. DL-IDS: Extracting features using CNN-LSTM hybrid network for intrusion detection system. Secur. Commun. Netw. 2020, 2020, 8890306.
40. Paillier, P. Public-key cryptosystems based on composite degree residuosity classes. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Prague, Czech Republic, 2–6 May 1999; pp. 223–238.
41. Schneble, W.; Thamilarasu, G. Attack detection using federated learning in medical cyber-physical systems. In Proceedings of the 28th International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 25–28 July 2022; pp. 1–8.
42. Chen, Y.; Qin, X.; Wang, J.; Yu, C.; Gao, W. Fedhealth: A federated transfer learning framework for wearable healthcare. IEEE Intell. Syst. 2020, 35, 83–93.

**Figure 5.** Performance comparison of FLDID with centralized and isolated models with different numbers of edge devices for R = 10: (**a**) N = 2, (**b**) N = 5, (**c**) N = 10, (**d**) N = 15.

**Figure 6.** Performance comparison of FLDID with state-of-the-art approaches with different numbers of edge devices for R = 10: (**a**) N = 2, (**b**) N = 5.

| Proposed Framework | CPU | Memory | Time (in s) |
|---|---|---|---|
| Without Paillier encryption | 20% | 72% | 3042 |
| With Paillier encryption | 83% | 88% | 76,920 |

| Model | Parameter Settings |
|---|---|
| FL Model | Learning rate (0.01), momentum (0.9), decay (0.01), loss function (binary cross-entropy), epochs (10), number of clients (K = 2, 5, 10, 15) |
| DID Model (CNN + LSTM + MLP) | No. of hidden layers (11), dropout rate (0.2), CNN layer 1 (filters (128), kernel size (3), activation function (ReLU)), CNN layer 2 (filters (64), kernel size (3), activation function (ReLU)), pooling size (2), strides (2), LSTM layers (perceptrons (50), activation function (tanh)), MLP layer (perceptrons (100), activation function (tanh), output layer activation function (sigmoid)), optimizer (Adam), loss (binary cross-entropy) |

| N | R | Accuracy | Precision | Recall | F-Score |
|---|---|---|---|---|---|
| 2 | 2 | 0.99032 | 0.99555 | 0.98449 | 0.98999 |
| | 4 | 0.99183 | 0.9946 | 0.98857 | 0.99157 |
| | 6 | 0.99259 | 0.9947 | 0.99004 | 0.99237 |
| | 8 | 0.99306 | 0.99368 | 0.99204 | 0.99286 |
| | 10 | 0.99348 | 0.99634 | 0.99022 | 0.99327 |
| 5 | 2 | 0.99373 | 0.99573 | 0.99135 | 0.99353 |
| | 4 | 0.99409 | 0.99621 | 0.99161 | 0.9939 |
| | 6 | 0.99415 | 0.99577 | 0.99219 | 0.99398 |
| | 8 | 0.99415 | 0.99626 | 0.99169 | 0.99397 |
| | 10 | 0.99428 | 0.99665 | 0.99157 | 0.9941 |
| 10 | 2 | 0.99432 | 0.99645 | 0.99186 | 0.99415 |
| | 4 | 0.99434 | 0.99637 | 0.99198 | 0.99417 |
| | 6 | 0.99437 | 0.99642 | 0.99198 | 0.99419 |
| | 8 | 0.99443 | 0.99654 | 0.99199 | 0.99426 |
| | 10 | 0.99438 | 0.99645 | 0.99198 | 0.99421 |
| 15 | 2 | 0.99441 | 0.99654 | 0.99195 | 0.99424 |
| | 4 | 0.99443 | 0.99644 | 0.9921 | 0.99426 |
| | 6 | 0.99443 | 0.99667 | 0.99185 | 0.99426 |
| | 8 | 0.99445 | 0.99657 | 0.99199 | 0.99428 |
| | 10 | 0.99447 | 0.99659 | 0.99203 | 0.9943 |

| Classifier | Accuracy | CPU | Memory | Time (in s) |
|---|---|---|---|---|
| SVM | 0.9227 | 35% | 90% | 4265 |
| LR | 0.9148 | 15% | 80% | 36.411 |
| KNN | 0.9847 | 14% | 75% | 0.6592 |
| DT | 0.9904 | 19% | 78% | 4.975 |
| Proposed | 0.9979 | 20% | 72% | 3042 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Verma, P.; Breslin, J.G.; O’Shea, D.
FLDID: Federated Learning Enabled Deep Intrusion Detection in Smart Manufacturing Industries. *Sensors* **2022**, *22*, 8974.
https://doi.org/10.3390/s22228974
