Applied Sciences
  • Article
  • Open Access

25 June 2024

Multiple Intrusion Detection Using Shapley Additive Explanations and a Heterogeneous Ensemble Model in an Unmanned Aerial Vehicle’s Controller Area Network

Department of Software Convergence and Communication Engineering, Sejong Campus, Hongik University, 2639, Sejong-ro, Jochiwon-eup, Sejong City 30016, Republic of Korea
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Integrating Artificial Intelligence in Renewable Energy Systems

Abstract

Recently, methods to detect DoS and spoofing attacks on In-Vehicle Networks via the CAN protocol have been studied using deep learning models such as CNNs, RNNs, and LSTMs. These studies have produced significant results in the field of In-Vehicle Network attack detection using deep learning models. However, they have typically focused on verifying single-model intrusion detection in drone networks. This study developed an ensemble model that can detect multiple types of intrusion simultaneously. In preprocessing, Feature Importance measures were used to distinguish the patterns within the payload between attack and normal data, which improved the accuracy of the ensemble model. In the experiments, a detection performance of 97% in both accuracy and F1-score verified the model's practical utility.

1. Introduction

As unmanned aerial vehicle (UAV) [1] and Internet of Things (IoT) technologies develop, they are used for weather observation [2], agriculture [3], and military purposes [4]. However, not only physical signal attacks, such as GPS signal spoofing and jamming [5], but also malware and malicious communication attacks, such as spoofed signals, are being launched [6]. These attacks are classified as a major threat because they can also be used against UAVs. Accordingly, cyber-attack attempts targeting UAVs are increasing [7], and research to detect them is actively underway [8,9]. These attacks include spoofing, Denial of Service (DoS), and replay attacks on CAN protocols against UAVs [10].
Research using machine learning (ML) and deep learning (DL) models to detect network intrusions occurring in the CAN protocol [11] is also underway, but the performance of these models is not stable because the feature learning process of ML/DL models cannot be verified; researchers cannot inspect the patterns the ML/DL model has learned [12,13,14,15,16]. In addition, stacking techniques that ensemble different types of models are used to improve detection, but this can result in poor performance. Recently, Lundberg's explainable artificial intelligence (XAI) research [15,16] and the SHAP technique have made it possible to analyze the Feature Importance and SHAP values of ML/DL models, and cases in which such problems have been solved are emerging [17,18].
This study validates an ensemble model that can simultaneously detect more than one intrusion in a drone network. Related studies have typically focused on verifying single-model intrusion detection in drone networks [19,20,21]. Our hypothesis is that ensemble models can effectively identify multiple intrusion types. The proposed ensemble model uses a stacking method that combines single models, each trained for the binary classification of one attack. This study therefore extends the detection range by assembling an ensemble model based on performance evaluation and Feature Importance analysis of ML/DL models trained on individual attacks. Additionally, this study validates an effective ensemble method for detecting multiple intrusions by analyzing the ML/DL models using SHAP value analysis.

3. Method and Materials

In this section, the dataset for each scenario type is analyzed. By comparing the SHAP analysis results of the attack scenario and the ML model, we confirmed whether the ML model effectively analyzes patterns within the dataset.

3.1. Dataset Preprocessing

The analyzed datasets were preprocessed as follows. The train/test datasets were split at a 7:3 ratio, stratified on the label. Timestamps were included for the time series LSTM [48] and simple RNN models, while datasets with the timestamp column removed were used for the ML models. Additionally, in all the scenario-type datasets, there was a discrepancy of less than 2 s between the data collection experiment and the timestamps in the collected dataset.
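As a rough sketch, the split described above might look as follows in Python (the filename and column names are assumptions; the actual dataset schema may differ):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical schema: a "timestamp" column, payload byte columns, and a "label".
df = pd.read_csv("scenario_type_01.csv")  # hypothetical filename

# 7:3 train/test split, stratified on the label to preserve the
# normal/attack ratio in both partitions.
train_df, test_df = train_test_split(
    df, test_size=0.3, stratify=df["label"], random_state=42
)

# Timestamps are kept for the time series LSTM/RNN models and dropped
# for the non-sequential ML models.
train_ml = train_df.drop(columns=["timestamp"])
test_ml = test_df.drop(columns=["timestamp"])
```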
The proportion of each scenario type in the attack datasets is shown in Table 9.
Table 9. Single attack dataset proportion.
The normal data are labeled 0, and the attack data are labeled 1. Types 01 and 03 have relatively balanced ratios, while types 02, 04, 05, and 06 have relatively unbalanced ratios. As shown by the Pearson correlation analysis in Table 10, the data length in all the scenario datasets shows a high correlation coefficient with the other data bytes. It is assumed that the composition of the payload varies depending on the data length.
Table 10. Pearson correlation heatmap of the dataset by scenario type.
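A heatmap like Table 10 can be produced with a standard Pearson correlation over the payload columns; a minimal sketch, assuming the hypothetical schema above:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("scenario_type_01.csv")  # hypothetical filename

# Pairwise Pearson correlation between data length and the payload bytes.
corr = df.drop(columns=["timestamp", "label"]).corr(method="pearson")

sns.heatmap(corr, cmap="coolwarm", center=0)
plt.title("Pearson correlation (scenario type 01)")
plt.tight_layout()
plt.show()
```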

3.2. Experiment Model

The base models were constructed according to the performance evaluation results in Section 4. The experiment model constructed from these base models is shown in Figure 6. Flooding and fuzzy attacks were each detected using an LSTM model, and the replay attack was detected using a decision tree (DT) model, all under the same conditions as the models trained during performance evaluation.
Figure 6. Structure of experiment model.
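The three base models might be constructed roughly as follows (layer sizes, input shapes, and other hyperparameters are illustrative assumptions; the paper's exact settings are listed in Table 13):

```python
from sklearn.tree import DecisionTreeClassifier
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(time_steps: int, n_features: int) -> keras.Model:
    """Binary-classification LSTM used for flooding and fuzzy detection."""
    model = keras.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        layers.LSTM(64),                        # illustrative unit count
        layers.Dense(1, activation="sigmoid"),  # attack probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

flooding_model = build_lstm(time_steps=1, n_features=14)  # feature count assumed
fuzzy_model = build_lstm(time_steps=1, n_features=14)
replay_model = DecisionTreeClassifier(max_depth=5)        # per Section 4.2
```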

3.3. Performance Evaluation Metrics

In this experiment, the relationship between the model’s detection results and the labels of the actual data was defined according to the confusion matrix in Table 11. A TP (True Positive) is when an actual attack is correctly detected, an FN (False Negative) is when an actual attack is not detected, an FP (False Positive) is when normal data are incorrectly detected as an attack, and a TN (True Negative) is when normal data are correctly classified as normal.
Table 11. Confusion matrix.
To verify the model, the TP, FP, TN, and FN counts (classified according to the definitions in Table 11) were substituted into the evaluation metric definitions in Table 12.
Table 12. Evaluation metric definitions.
Accuracy indicates how well a model classifies the actual data and is a commonly used indicator for model evaluation. However, accuracy can overlook bias caused by model over-fitting, so it is used together with the F1-Score, which combines precision and recall.
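For reference, the standard definitions underlying Table 12, in terms of the confusion matrix counts, are:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
\mathrm{F1} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```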

3.4. Experiment Environment

This experiment was conducted in a computing environment consisting of 32 GB of memory, an AMD Ryzen 7 5700X CPU, and an NVIDIA GeForce RTX 3060 GPU. The hyperparameter settings for each model are presented in Table 13.
Table 13. Hyperparameters for each model.
The criteria used to evaluate the performance of each model are as follows. The elements TP, FP, TN, and FN follow the confusion matrix definitions in Table 11.

4. Experiment Results

4.1. SHAP Value Analyses

In this experiment, to apply the SHAP analysis technique effectively, the dataset for each scenario consists of the 13 bytes that make up the payload section of messages exchanged on the CAN bus. Each byte stores information such as the message ID, CAN ID, data, and CRC details. The patterns within the payload are analyzed through SHAP to distinguish between attack data and normal data. Finally, SHAP value analyses with TreeExplainer (XGBoost) were performed on the datasets for each scenario to measure the Feature Importance and the SHAP interaction values between the important features.
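A hedged sketch of this analysis step, assuming an XGBoost classifier trained per scenario (variable names such as X_train and X_test are hypothetical):

```python
import shap
import xgboost as xgb

# X_*: payload features (13 data bytes plus data length); y_*: 0 = normal, 1 = attack.
model = xgb.XGBClassifier(n_estimators=100)  # illustrative settings
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global Feature Importance and per-feature SHAP value distributions.
shap.summary_plot(shap_values, X_test)

# Pairwise SHAP interaction values between the important features.
interaction_values = explainer.shap_interaction_values(X_test)
```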

4.1.1. Flooding Scenario Dataset Analysis

As a result of the SHAP analysis of the flooding scenario datasets (scenario types 01 and 02), the ninth and seventh data bytes and the data length were determined to be the most important features. These results are consistent with the flooding payload analysis presented in Table 14. The model identified key features of the flooding payload, such as the data length and Transfer ID.
Table 14. SHAP analysis for scenario type 01.
Table 14 shows the SHAP value distribution among the feature values, the Feature Importance, and the important features determined via the analysis of the scenario type 01 dataset. In the feature value graph, it can be seen that the data length is concentrated around two values near a SHAP value of 0.0. This reflects that the data lengths of the flooding payload are all seven or eight, and the feature values are concentrated in that section. Additionally, the blue dots with relatively low feature values are concentrated in the SHAP value range from −0.6 to −0.2, and most of the normal driving data are distributed with a length other than seven or eight.
Table 15. SHAP analysis for scenario type 02.
Figure 7 shows a SHAP force plot for three random rows of data. The red sections represent where the SHAP value is high, and the blue sections where it is low. The ninth data byte and data length in the first and second graphs in Figure 7 are clearly different from those in the third graph. This means the flooding payload carries an attack signature in the ninth data byte and is judged according to the data length, which confirms the evidence on which the classification was based.
Figure 7. SHAP force plot for 3 random rows of data.
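Force plots like Figure 7 can be generated with shap.force_plot; a sketch, reusing the explainer above and assuming X_test is a pandas DataFrame:

```python
import numpy as np

rows = np.random.choice(len(X_test), size=3, replace=False)
for i in rows:
    # Red pushes the prediction toward "attack"; blue pushes it toward "normal".
    shap.force_plot(
        explainer.expected_value, shap_values[i], X_test.iloc[i],
        matplotlib=True,  # render as a static matplotlib figure
    )
```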
SHAP value analysis was performed on the data length, which had a high correlation coefficient with other data, and the ninth data byte, which had the highest feature importance value.
Scenario datasets 01 and 02 are related to flooding attacks, and in the SHAP analysis results shown in Figure 8, the sixth, seventh, and ninth data bytes were commonly selected as important features. Additionally, when analyzing the SHAP value distribution of the sixth and seventh data bytes in scenario type 01, a specific pattern emerged.
Figure 8. SHAP force plot for 4 random rows of data (type 02).
Table 15 shows the SHAP value distribution between the feature values, the Feature Importance, and the important features from the analysis of the scenario type 02 dataset. It presents the analysis of the same flooding attack dataset as Table 14. Therefore, in the Feature Importance graph, the ninth and seventh data bytes and the data length are again selected as important features. However, in the SHAP value distribution graph, both the blue and red dots show a more biased distribution. Because type 02 has a more unbalanced normal-to-attack ratio than type 01, it is estimated that the TreeExplainer model for type 02 was more over-fitted than that for type 01.
The Feature Importance for each scenario type dataset was subjected to SHAP value analysis based on the XGBoost model as follows. The datasets of scenario types 01 and 02 contain flooding attack data, which represent a DDoS attack. The ninth, seventh, and sixth data bytes and the data length are commonly measured as highly important, and they were consistently observed despite the difference in class proportions between the two datasets.

4.1.2. Fuzzy Attack Scenario Dataset Analysis (Types 03 and 04)

We can examine the features of the fuzzy attack through the SHAP value distribution graph in Table 16. Except for the data length, all data byte items have high feature values in sections with high SHAP values and low feature values in sections with low SHAP values. This is presumed to be a pattern caused by the attacker injecting the payload using a brute-forcing method. Therefore, unlike the previous flooding datasets, the feature importance of the data length is relatively low.
Table 16. SHAP analysis for scenario type 03.
Figure 9 shows the SHAP force plot for three rows of the scenario dataset, used to check whether the pattern matched the previous SHAP analysis. Unlike the flooding dataset, a different data byte was selected for each row, and the SHAP value was analyzed separately, indicating that the fuzzy attack brute-forces each data byte.
Figure 9. SHAP force plot for 3 random rows of data (type 03).
Table 17 shows a dataset for the same fuzzy attack as scenario type 03, but it is unbalanced due to the low quantity of attack data. Nevertheless, the feature values remain high in sections where the SHAP value is high, the data length is unimportant, and the SHAP value distribution between the features is clustered into blue and red groups.
Table 17. SHAP analysis of scenario type 04.
The above pattern can be confirmed in the SHAP force plot for three random rows shown in Figure 10. In all three rows, the SHAP value was high in sections with a low base value and low in sections with a high base value, and the target features differed for each row.
Figure 10. SHAP force plot for 3 random rows of data (type 04).

4.1.3. Replay Attack Scenario Dataset Analysis (Types 05 and 06)

Table 18 shows the SHAP value analysis results for scenario type 05, which represents replay attacks. In the SHAP value distribution plots of the sixth, seventh, ninth, and eleventh data bytes, the blue and red dots are more clustered than in the previous datasets. This is presumed to result from the replay attack’s principle of repeatedly injecting the same payload. The normal driving data, which take various forms, can therefore be classified as the blue dots with low SHAP values clustered in a straight line.
Table 18. SHAP analysis for scenario type 05.
The pattern of the above replay attack can be seen in the SHAP force plot in Figure 11. The SHAP value for individual features is not high due to the repeated injection of the replay payload; instead, the SHAP value is evenly distributed over several features. The model determines whether a message is a replay payload by considering the data bytes of the entire message rather than finding attack patterns in a few specific features.
Figure 11. SHAP force plot for 3 random rows of data (type 05).
Table 19 consists of a SHAP value distribution plot, a Feature Importance plot, and a distribution graph showing the SHAP and feature values for each element for scenario type 06, which, like type 05, contains replay attack data. Clustering of blue and red dots can be seen in the SHAP value distribution plots of the sixth and seventh data bytes.
Table 19. SHAP analysis for scenario type 06.
In Figure 12, unlike Figure 11, the features with high SHAP values can be observed, which suggests that scenario type 06 collected more diverse normal driving data than scenario type 05 did. This is because the dataset of scenario type 06 consists of more rows of data, and the quantity of attack data is also higher.
Figure 12. SHAP force plot for 3 random rows of data (type 06).

4.1.4. Single Model Results Analysis

The accuracy, precision, recall, and F1-score in Table 12 were used as the indicators of model performance evaluation, and the performance evaluation results for each attack are shown in Table 20.
Table 20. Evaluation results for each single model.
Scenario types 01 and 02 are flooding attack datasets, representing a type of DoS attack. Regardless of how the timestamps and data are ordered, the time series RNN and LSTM models show an excellent performance of over 95% in both the accuracy and F1-Score indicators. In addition, scenario types 03 and 04, which represent fuzzy attack datasets, and scenario type 06, a replay attack dataset, showed an excellent performance of over 95% in both accuracy and F1-Score. However, on scenario type 05’s dataset, the F1-Score of both models fell below 90%. On the other hand, the LSTM model used for time series analysis shows an accuracy of over 99% [22].
The decision tree, Random Forest, K-Neighbors Classifier, and binary classification DNN models showed excellent performance, with accuracy and F1-Score over 95% on all the datasets, but the logistic regression model’s accuracy varied between 79% and 97%. The F1-Score of the ML models is 97%, comparable to the existing models. Over-fitting due to data imbalance was not observed.
In the performance evaluation in Section 3.2, the best models of each attack type were selected as base models to design a stacking-based ensemble model.
The base models independently detect flooding, fuzzy, and replay attacks from the input CAN traffic data and deliver these detection results to the meta-model shown in Table 21. The decision tree-based meta-model synthesizes the detection results of each base model to make a binary determination of whether an attack has occurred.
Table 21. Base model Analysis results for each attack.
As a result of analyzing the SHAP value interactions and the Pearson correlation of the detection results of each base model, it is estimated that the correlation and similarity between the detection results of the replay and fuzzy models are high.
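A minimal sketch of this stacking and redundancy check, assuming the base models and feature matrices defined earlier (X_seq for the sequence models, X_flat for the decision tree; names hypothetical):

```python
import pandas as pd
import shap
from sklearn.tree import DecisionTreeClassifier

# Base-model detection results become the meta-model's input features.
base_preds = pd.DataFrame({
    "flooding": (flooding_model.predict(X_seq) > 0.5).astype(int).ravel(),
    "fuzzy":    (fuzzy_model.predict(X_seq) > 0.5).astype(int).ravel(),
    "replay":   replay_model.predict(X_flat),
})

# Decision-tree meta-model: final binary attack/normal decision.
meta_model = DecisionTreeClassifier()
meta_model.fit(base_preds, y)

# Redundancy check: Pearson correlation between base-model outputs, and
# SHAP interaction values of the meta-model over those outputs
# (output shape depends on the SHAP version and model type).
print(base_preds.corr(method="pearson"))
interactions = shap.TreeExplainer(meta_model).shap_interaction_values(base_preds)
```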

4.2. Experiment Model Results

The LSTM models for the flooding and fuzzy attacks were trained for over 20 epochs, and the accuracy of the two models was over 98% during both training and testing. The accuracy changes over the training process are shown in Table 22.
Table 22. Optimization of flood and fuzzy models.
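The training runs summarized in Table 22 might look as follows (the batch size is an assumption; the epoch count follows the text):

```python
# Train the flooding LSTM; the fuzzy model is trained the same way.
history = flooding_model.fit(
    X_seq_train, y_train,
    validation_data=(X_seq_test, y_test),
    epochs=20,
    batch_size=64,  # assumed
)
# history.history["accuracy"] / ["val_accuracy"] trace the curves in Table 22.
```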
Unlike the previous two models, the replay attack was handled by a decision tree model with the max depth set to five, and its performance is shown in Figure 13. Replay attacks were detected with an accuracy and an F1-Score of over 97%. Tree visualization confirmed that the model underwent balanced training.
Figure 13. Structure of replay attack model.
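The replay-attack tree and its visualization might be reproduced as follows (max depth five per the text; other settings and variable names assumed):

```python
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree

replay_model = DecisionTreeClassifier(max_depth=5, random_state=42)
replay_model.fit(X_flat_train, y_train)

# Visualize the tree to check that training produced balanced splits
# (cf. Figure 13).
plt.figure(figsize=(16, 8))
plot_tree(replay_model, filled=True)
plt.show()
```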
The ensemble assembled from these base models had an accuracy of 83%, a precision of 62%, and an F1-Score of 68%. The cause of this poorer performance compared to the individual base models was therefore investigated through the SHAP analysis in Section 4.1.

4.3. Result Analysis

In this experiment, the timestamps were removed from the DL models that perform time series analysis, such as the RNN and LSTM, as well as from the ML models, and the experiments detected fuzzy, flooding, and replay attacks by learning data with a shuffled sequence. All the models showed significant detection performance.
In addition, SHAP analysis of the detection results showed that the models learned the data bytes associated with the algorithm used in each attack as meaningful features.
Although single models were effective for the binary classification of individual attacks, the performance of the stacking model for the binary classification of multiple attack types was poor. Based on the SHAP analysis and correlation analysis results, it is assumed that the high correlation between the replay and fuzzy models’ detection results caused over-fitting in the stacked ensemble model, adversely affecting its performance.

5. Discussion and Conclusions

According to the experiments in this study, the LSTM model showed relatively lower performance when learning non-time-series data than time-series data. In this experiment, the Feature Importance and SHAP force plot analyses confirmed more than 96% performance in both accuracy and F1-score for intrusion detection in the drone network. These results follow from the Explainer model’s ability to identify the intrusion pattern in the payload for each single model.
This study developed an ensemble model that can simultaneously detect multiple types of intrusion. In preprocessing, Feature Importance measures were used to distinguish the patterns within the payload between attack and normal data, which improved the accuracy of the ensemble model. In the experiments, a detection performance of 97% in both accuracy and F1-score verified the model’s practical utility. However, the limitation of this study to three types of intrusion must also be reconsidered. In future work, we plan to address these limitations to increase the practical applicability of the model.

Author Contributions

Conceptualization, methodology, software, validation, writing - original draft preparation, formal analysis, investigation, visualization: Y.-W.H.; validation, writing - review and editing, supervision, project administration, funding acquisition, resources: D.-Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Hongik University’s new faculty research support fund.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are available in a publicly accessible repository. The UAVCAN Attack dataset utilized in our experiments originates from the HCRL (Hacking and Countermeasure Research Lab). The dataset can be acquired at https://ocslab.hksecurity.net/Datasets/uavcan-attack-dataset (accessed on 19 November 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. OpenCyphal. DS-015 UAVCAN Drone Standard v1.0.1. 2021. Available online: https://legacy.uavcan.org/Specification/4._CAN_bus_transport_layer/ (accessed on 26 April 2024).
  2. Mademlis, I.; Nikolaidis, N.; Tefas, A.; Pitas, I.; Wagner, T.; Messina, A. Autonomous unmanned aerial vehicles filming in dynamic unstructured outdoor environments. IEEE Signal Process. Mag. 2018, 36, 147–153. [Google Scholar] [CrossRef]
  3. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned aerial vehicles in agriculture: A review of perspective of platform, control, and applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  4. Gargalakos, M. The role of unmanned aerial vehicles in military communications: Application scenarios, current trends, and beyond. J. Def. Model. Simul. 2021, 15485129211031668. [Google Scholar] [CrossRef]
  5. Altawy, R.; Youssef, A.M. Security, privacy, and safety aspects of civilian drones: A survey. ACM Trans. Cyber-Phys. Syst. 2016, 1, 1–25. [Google Scholar] [CrossRef]
  6. Shrestha, R.; Omidkar, A.; Roudi, S.A.; Abbas, R.; Kim, S. Machine-learning-enabled intrusion detection system for cellular connected UAV networks. Electronics 2021, 10, 1549. [Google Scholar] [CrossRef]
  7. Liu, J.; Yin, T.; Yue, D.; Karimi, H.R.; Cao, J. Event-based secure leader-following consensus control for multiagent systems with multiple cyber attacks. IEEE Trans. Cybern. 2020, 51, 162–173. [Google Scholar] [CrossRef] [PubMed]
  8. Cao, J.; Ding, D.; Liu, J.; Tian, E.; Hu, S.; Xie, X. Hybrid-triggered-based security controller design for networked control system under multiple cyber attacks. Inf. Sci. 2021, 548, 69–84. [Google Scholar] [CrossRef]
  9. CAN Specification, Version 2.0; Robert Bosch GmbH: Stuttgart, Germany, 1991.
  10. Sikora, R. A modified stacking ensemble machine learning algorithm using genetic algorithms. In Handbook of Research on Organizational Transformations through Big Data Analytics; IGi Global: Hershey, PA, USA, 2015; pp. 43–53. [Google Scholar]
  11. Kwon, H.; Park, J.; Lee, Y. Stacking ensemble technique for classifying breast cancer. Healthc. Inform. Res. 2019, 25, 283–288. [Google Scholar] [CrossRef] [PubMed]
  12. Charoenkwan, P.; Chiangjong, W.; Nantasenamat, C.; Hasan, M.M.; Manavalan, B.; Shoombuatong, W. StackIL6: A stacking ensemble model for improving the prediction of IL-6 inducing peptides. Brief. Bioinform. 2021, 22, bbab172. [Google Scholar] [CrossRef]
  13. Akyol, K. Stacking ensemble based deep neural networks modeling for effective epileptic seizure detection. Expert Syst. Appl. 2020, 148, 113239. [Google Scholar] [CrossRef]
  14. Rashid, M.; Kamruzzaman, J.; Imam, T.; Wibowo, S.; Gordon, S. A tree-based stacking ensemble technique with feature selection for network intrusion detection. Appl. Intell. 2022, 52, 9768–9781. [Google Scholar] [CrossRef]
  15. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Long Beach, CA, USA, 4–9 December 2017; Volume 30, pp. 4768–4777. [Google Scholar]
  16. Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent individualized feature attribution for tree ensembles. arXiv 2018, arXiv:1802.03888. [Google Scholar]
  17. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Lee, S.I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  18. Li, J.; Guo, Y.; Li, L.; Liu, X.; Wang, R. Using LightGBM with SHAP for predicting and analyzing traffic accidents severity. In Proceedings of the 2023 7th International Conference on Transportation Information and Safety (ICTIS), Xi’an, China, 4–6 August 2023; IEEE: New York, NY, USA, 2023; pp. 2150–2155. [Google Scholar]
  19. Lee, Y.G.; Oh, J.Y.; Kim, D.; Kim, G. Shap value-based feature importance analysis for short-term load forecasting. J. Electr. Eng. Technol. 2023, 18, 579–588. [Google Scholar] [CrossRef]
  20. OpenCyphal. Available online: https://legacy.uavcan.org/ (accessed on 26 April 2024).
  21. Sajid, J.; Hayawi, K.; Malik, A.W.; Anwar, Z.; Trabelsi, Z. A fog computing framework for intrusion detection of energy-based attacks on UAV-assisted smart farming. Appl. Sci. 2023, 13, 3857. [Google Scholar] [CrossRef]
  22. Tlili, F.; Ayed, S.; Chaari Fourati, L. Dynamic Intrusion Detection Framework for UAVCAN Protocol Using AI. In Proceedings of the 18th International Conference on Availability, Reliability and Security, Benevento, Italy, 28 August–1 September 2023; pp. 1–10. [Google Scholar]
  23. Hoang, T.N.; Islam, M.R.; Yim, K.; Kim, D. CANPerFL: Improve in-vehicle intrusion detection performance by sharing knowledge. Appl. Sci. 2023, 13, 6369. [Google Scholar] [CrossRef]
  24. Tanksale, V. Intrusion detection for controller area network using support vector machines. In Proceedings of the 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW), Monterey, CA, USA, 4–7 November 2019; IEEE: New York, NY, USA, 2019; pp. 121–126. [Google Scholar]
  25. Alsoliman, A.; Rigoni, G.; Callegaro, D.; Levorato, M.; Pinotti, C.M.; Conti, M. Intrusion Detection Framework for Invasive FPV Drones Using Video Streaming Characteristics. ACM Trans. Cyber-Phys. Syst. 2023, 7, 1–29. [Google Scholar] [CrossRef]
  26. Moulahi, T.; Zidi, S.; Alabdulatif, A.; Atiquzzaman, M. Comparative performance evaluation of intrusion detection based on machine learning in in-vehicle controller area network bus. IEEE Access 2021, 9, 99595–99605. [Google Scholar] [CrossRef]
  27. Kang, M.J.; Kang, J.W. Intrusion detection system using deep neural network for in-vehicle network security. PLoS ONE 2016, 11, e0155781. [Google Scholar] [CrossRef]
  28. Javed, A.R.; Ur Rehman, S.; Khan, M.U.; Alazab, M.; Reddy, T. CANintelliIDS: Detecting in-vehicle intrusion attacks on a controller area network using CNN and attention-based GRU. IEEE Trans. Netw. Sci. Eng. 2021, 8, 1456–1466. [Google Scholar] [CrossRef]
  29. Kou, L.; Ding, S.; Wu, T.; Dong, W.; Yin, Y. An intrusion detection model for drone communication network in sdn environment. Drones 2022, 6, 342. [Google Scholar] [CrossRef]
  30. Song, H.M.; Woo, J.; Kim, H.K. In-vehicle network intrusion detection using deep convolutional neural network. Veh. Commun. 2020, 21, 100198. [Google Scholar] [CrossRef]
  31. Tariq, S.; Lee, S.; Kim, H.K.; Woo, S.S. CAN-ADF: The controller area network attack detection framework. Comput. Secur. 2020, 94, 101857. [Google Scholar] [CrossRef]
  32. Seo, E.; Song, H.M.; Kim, H.K. GIDS: GAN based intrusion detection system for in-vehicle network. In Proceedings of the 2018 16th Annual Conference on Privacy, Security and Trust (PST), Belfast, Ireland, 28–30 August 2018; pp. 1–6. [Google Scholar]
  33. Qin, H.; Yan, M.; Ji, H. Application of controller area network (CAN) bus anomaly detection based on time series prediction. Veh. Commun. 2021, 27, 100291. [Google Scholar] [CrossRef]
  34. Khan, M.H.; Javed, A.R.; Iqbal, Z.; Asim, M.; Awad, A.I. DivaCAN: Detecting in-vehicle intrusion attacks on a controller area network using ensemble learning. Comput. Secur. 2024, 139, 103712. [Google Scholar] [CrossRef]
  35. Zhang, H.; Wang, J.; Wang, Y.; Li, M.; Song, J.; Liu, Z. ICVTest: A Practical Black-Box Penetration Testing Framework for Evaluating Cybersecurity of Intelligent Connected Vehicles. Appl. Sci. 2023, 14, 204. [Google Scholar] [CrossRef]
  36. Adly, S.; Moro, A.; Hammad, S.; Maged, S.A. Prevention of Controller Area Network (CAN) Attacks on Electric Autonomous Vehicles. Appl. Sci. 2023, 13, 9374. [Google Scholar] [CrossRef]
  37. Fang, S.; Zhang, G.; Li, Y.; Li, J. Windowed Hamming Distance-Based Intrusion Detection for the CAN Bus. Appl. Sci. 2024, 14, 2805. [Google Scholar] [CrossRef]
  38. Islam, R.; Refat, R.U.D.; Yerram, S.M.; Malik, H. Graph-based intrusion detection system for controller area networks. IEEE Trans. Intell. Transp. Syst. 2020, 23, 1727–1736. [Google Scholar] [CrossRef]
  39. Capuano, N.; Fenza, G.; Loia, V.; Stanzione, C. Explainable artificial intelligence in cybersecurity: A survey. IEEE Access 2022, 10, 93575–93600. [Google Scholar] [CrossRef]
  40. Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A review of trustworthy and explainable artificial intelligence (xai). IEEE Access. 2023, 11, 78994–79015. [Google Scholar] [CrossRef]
  41. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  42. Covington, P.; Adams, J.; Sargin, E. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 191–198. [Google Scholar]
  43. The Asimov Institute. Available online: https://www.asimovinstitute.org/neural-network-zoo/ (accessed on 8 March 2024).
  44. Martinez, G.J.; Dubrovskiy, G.; Zhu, S.; Mohammed, A.; Lin, H.; Laneman, J.N.; Striegel, A.; Pragada, R.; Castor, D.R. An open, real-world dataset of cellular UAV communication properties. In Proceedings of the 2021 International Conference on Computer Communications and Networks (ICCCN), Athens, Greece, 19–22 July 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  45. Chang, Y.; Cheng, Y.; Murray, J.; Huang, S.; Shi, G. The hdin dataset: A real-world indoor uav dataset with multi-task labels for visual-based navigation. Drones 2022, 6, 202. [Google Scholar] [CrossRef]
  46. Kim, D.; Song, Y.; Kwon, S.; Kim, H.; Yoo, J.D.; Kim, H.K. Uavcan dataset description. arXiv 2022, arXiv:2212.09268. [Google Scholar]
  47. Hartmann, K.; Steup, C. The vulnerability of UAVs to cyber attacks—An approach to the risk assessment. In Proceedings of the 2013 5th International Conference on Cyber Conflict (CYCON 2013), Tallinn, Estonia, 4–7 June 2013; pp. 1–23. [Google Scholar]
  48. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
