
Cyber Security and AI

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 30783

Special Issue Editors


Prof. Dr. Yushu Zhang
Guest Editor
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Interests: multimedia security; AI security; blockchain

Dr. Zhongyun Hua
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518000, China
Interests: cyberspace security; multimedia security; chaos theory

Special Issue Information

Dear Colleagues,

Cyber security is the protection of internet-connected systems, such as hardware, software, and data, from cyberthreats. Cyberattacks are usually aimed at accessing, changing, or destroying sensitive information, extorting money from users, or interrupting normal business processes. Recently, the rapid development of artificial intelligence (AI) has solved some classical problems in cyber security. However, the same technologies are gradually being adopted by criminals, creating new security issues for cyberspace. System vulnerabilities can be discovered more easily, making systems more susceptible to attack. When AI is applied to large-scale network attacks, attackers can adaptively generate attack programs and launch large numbers of automated attacks. Using AI, attackers can also conceal attack behaviors and attack paths, making such attacks more difficult for defenders to detect and defend against. AI may even be used to steal secret user data, causing still more serious security problems. In short, AI can be used to improve cyber security, but it can also create many new security issues for cyberspace. This Special Issue of Sensors is dedicated to original research and recent developments on cyber security and AI.

This Special Issue covers all topics relating to cyber security and AI.

Prof. Dr. Yushu Zhang
Dr. Zhongyun Hua
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI and machine learning for cybersecurity
  • adversarial attacks for cybersecurity
  • generative adversarial networks for cybersecurity
  • automated, intelligent, and data-driven cybersecurity models
  • AI security and neural networks
  • cloud computing security and AI
  • AI for multimedia network security

Published Papers (13 papers)


Research

Jump to: Review

20 pages, 4136 KiB  
Article
Scalable Learning Framework for Detecting New Types of Twitter Spam with Misuse and Anomaly Detection
by Jaeun Choi, Byunghwan Jeon and Chunmi Jeon
Sensors 2024, 24(7), 2263; https://doi.org/10.3390/s24072263 - 02 Apr 2024
Viewed by 446
Abstract
The growing popularity of social media has engendered the social problem of spam proliferation through this medium. New spam types that evade existing spam detection systems are being developed continually, necessitating corresponding countermeasures. This study proposes an anomaly detection-based framework to detect new Twitter spam, which works by modeling the characteristics of non-spam tweets and using anomaly detection to classify tweets deviating from this model as anomalies. However, because modeling varied non-spam tweets is challenging, the technique’s spam detection and false positive (FP) rates are low and high, respectively. To overcome this shortcoming, anomaly detection is performed on known spam tweets pre-detected using a trained decision tree while modeling normal tweets. A one-class support vector machine and an autoencoder with high detection rates are used for anomaly detection. The proposed framework exhibits superior detection rates for unknown spam compared to conventional techniques, while maintaining equivalent or improved detection and FP rates for known spam. Furthermore, the framework can be adapted to changes in spam conditions by adjusting the costs of detection errors.
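The anomaly-detection idea in the abstract above can be sketched in a few lines: model "normal" (non-spam) feature vectors, then flag anything that deviates beyond a threshold. The features, centroid-distance rule, and threshold below are illustrative assumptions, not the paper's actual one-class SVM / autoencoder setup.

```python
# Minimal sketch: fit a model of non-spam tweets, flag deviations as spam.

def fit_normal_model(normal_vectors):
    """Return the centroid of the non-spam training vectors."""
    dim = len(normal_vectors[0])
    return [sum(v[i] for v in normal_vectors) / len(normal_vectors)
            for i in range(dim)]

def anomaly_score(centroid, vector):
    """Euclidean distance from the normal centroid."""
    return sum((a - b) ** 2 for a, b in zip(centroid, vector)) ** 0.5

def is_spam(centroid, vector, threshold):
    return anomaly_score(centroid, vector) > threshold

# Toy features: [links_per_tweet, mentions_per_tweet]
normal = [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]]
centroid = fit_normal_model(normal)
print(is_spam(centroid, [5.0, 4.0], threshold=1.0))  # deviates strongly -> True
print(is_spam(centroid, [0.1, 0.2], threshold=1.0))  # close to normal -> False
```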
(This article belongs to the Special Issue Cyber Security and AI)

20 pages, 16168 KiB  
Article
A Synthetic Time-Series Generation Using a Variational Recurrent Autoencoder with an Attention Mechanism in an Industrial Control System
by Seungho Jeon and Jung Taek Seo
Sensors 2024, 24(1), 128; https://doi.org/10.3390/s24010128 - 26 Dec 2023
Viewed by 813
Abstract
Data scarcity is a significant obstacle for modern data science and artificial intelligence research communities. The fact that abundant data are a key element of a powerful prediction model is well known through various past studies. However, industrial control systems (ICS) are operated in a closed environment due to security and privacy issues, so collected data are generally not disclosed. In this environment, synthetic data generation can be a good alternative. However, ICS datasets have time-series characteristics and include features with short- and long-term temporal dependencies. In this paper, we propose the attention-based variational recurrent autoencoder (AVRAE) for generating time-series ICS data. We first extend the evidence lower bound of the variational inference to time-series data. Then, a recurrent neural-network-based autoencoder is designed to take this as the objective. AVRAE employs the attention mechanism to effectively learn the long-term and short-term temporal dependencies ICS data implies. Finally, we present an algorithm for generating synthetic ICS time-series data using the learned AVRAE. In a comprehensive evaluation using the ICS dataset HAI and various performance indicators, AVRAE successfully generated visually and statistically plausible synthetic ICS data.
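The attention mechanism mentioned above can be sketched as plain dot-product attention over time steps: a query is scored against each step's key, softmax turns scores into weights, and a context vector is the weighted sum. The toy vectors are invented; the paper's recurrent encoder/decoder and ELBO objective are not reproduced here.

```python
import math

def softmax(scores):
    m = max(scores)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Dot-product attention over a sequence of (key, value) pairs."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return weights, context

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # one key per time step
values = [[1.0], [2.0], [3.0]]
weights, context = attend([1.0, 0.0], keys, values)
print(weights)  # higher weight where the key aligns with the query
```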

12 pages, 483 KiB  
Article
Mobile Payment Protocol with Deniably Authenticated Property
by Yunzhuo Liu, Wen Huang, Ming Zhuo, Shijie Zhou and Mengshi Li
Sensors 2023, 23(8), 3927; https://doi.org/10.3390/s23083927 - 12 Apr 2023
Cited by 1 | Viewed by 1259
Abstract
Mobile payment services have been widely applied in our daily life, where users can conduct transactions in a convenient way. However, critical privacy concerns have arisen. Specifically, a risk of participating in a transaction is the disclosure of personal privacy. This might occur if, for example, the user pays for some special medicine, such as AIDS medicine or contraceptives. In this paper, we propose a mobile payment protocol that is suitable for mobile devices with only limited computing resources. In particular, the user in a transaction can confirm the identity of others in the same transaction while the user cannot show convincing evidence to prove that others also take part in the same transactions. We implement the proposed protocol and test its computation overhead. The experiment results corroborate that the proposed protocol is suitable for mobile devices with limited computing resources.
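The deniable-authentication property described above can be illustrated with a generic shared-key MAC: either party holding the key could have produced the tag, so it authenticates messages between them but is worthless as evidence to a third party. This is a textbook illustration of the property, not the paper's actual protocol; the key name is an assumption.

```python
import hmac
import hashlib

key = b"shared-session-key"   # assumed established between payer and merchant

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    return hmac.compare_digest(tag(message), t)

order = b"pay 10 USD for item 42"
t = tag(order)                 # sender authenticates the order
print(verify(order, t))        # receiver accepts: True
# Deniability: the receiver can forge an identical tag itself, so the
# tag proves nothing about the sender to outsiders.
print(tag(order) == t)         # True: receiver reproduces the same tag
```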

17 pages, 1700 KiB  
Article
Invertible Privacy-Preserving Adversarial Reconstruction for Image Compressed Sensing
by Di Xiao, Yue Li and Min Li
Sensors 2023, 23(7), 3575; https://doi.org/10.3390/s23073575 - 29 Mar 2023
Viewed by 1046
Abstract
Since the advent of compressed sensing (CS), many reconstruction algorithms have been proposed, most of which are devoted to reconstructing images with better visual quality. However, higher-quality images tend to reveal more sensitive information in machine recognition tasks. In this paper, we propose a novel invertible privacy-preserving adversarial reconstruction method for image CS. While optimizing the quality, the reconstructed images are made to be adversarial samples at the moment of generation. For semi-authorized users, they can only obtain the adversarial reconstructed images, which provide little information for machine recognition or training deep models. For authorized users, they can reverse adversarial reconstructed images to clean samples with an additional restoration network. Experimental results show that while keeping good visual quality for both types of reconstructed images, the proposed scheme can provide semi-authorized users with adversarial reconstructed images with a very low recognizable rate, and allow authorized users to further recover sanitized reconstructed images with recognition performance approximating that of the traditional CS.
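The invertibility idea above can be sketched on a toy linear "recognizer": an FGSM-style perturbation lowers the model's score, and a party who knows the perturbation subtracts it to recover the clean sample exactly. The weights, step size, and linear model are illustrative assumptions; the paper uses a restoration network rather than stored perturbations.

```python
weights = [1.0, -2.0, 0.5]          # toy linear model: score = w . x

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return (v > 0) - (v < 0)

def adversarial(x, eps=0.5):
    """FGSM-style step against the toy model; returns (x_adv, perturbation)."""
    # For a linear model the gradient of the score w.r.t. x is just `weights`.
    delta = [-eps * sign(w) for w in weights]    # push the score down
    return [xi + d for xi, d in zip(x, delta)], delta

x = [1.0, 0.5, 2.0]
x_adv, delta = adversarial(x)
x_rec = [a - d for a, d in zip(x_adv, delta)]    # authorized inversion
print(score(x_adv) < score(x))  # recognizer confidence drops: True
print(x_rec == x)               # clean sample recovered exactly: True
```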

19 pages, 816 KiB  
Article
Cyber Attacker Profiling for Risk Analysis Based on Machine Learning
by Igor Kotenko, Elena Fedorchenko, Evgenia Novikova and Ashish Jha
Sensors 2023, 23(4), 2028; https://doi.org/10.3390/s23042028 - 10 Feb 2023
Cited by 2 | Viewed by 2212
Abstract
The notion of the attacker profile is often used in risk analysis tasks such as cyber attack forecasting, security incident investigations and security decision support. The attacker profile is a set of attributes characterising an attacker and their behaviour. This paper analyzes the research in the area of attacker modelling and presents the analysis results as a classification of attacker models, attributes and risk analysis techniques that are used to construct the attacker models. The authors introduce a formal two-level attacker model that consists of high-level attributes calculated using low-level attributes that are in turn calculated on the basis of the raw security data. To specify the low-level attributes, the authors performed a series of experiments with datasets of attacks. Firstly, the requirements of the datasets for the experiments were specified in order to select the appropriate datasets, and, afterwards, the applicability of the attributes formed on the basis of such nominal parameters as bash commands and event logs to calculate high-level attributes was evaluated. The results allow us to conclude that attack team profiles can be differentiated using nominal parameters such as bash history logs. At the same time, accurate attacker profiling requires the extension of the low-level attributes list.
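The two-level structure described above can be sketched as follows: low-level attributes computed from raw data (here, a bash history), then a high-level attribute derived from them. The command lists and the "advanced tooling" rule are invented for illustration, not the paper's attribute set.

```python
from collections import Counter

# Hypothetical set of commands indicating more advanced tooling.
ADVANCED = {"nmap", "msfconsole", "hydra", "tcpdump"}

def low_level(bash_history):
    """Low-level attributes: per-command frequencies from raw bash logs."""
    return Counter(cmd.split()[0] for cmd in bash_history)

def high_level(freqs):
    """High-level attribute: fraction of commands that are 'advanced'."""
    total = sum(freqs.values())
    adv = sum(c for cmd, c in freqs.items() if cmd in ADVANCED)
    return adv / total if total else 0.0

history = ["nmap -sV 10.0.0.1", "ls", "hydra -l admin target", "cat notes.txt"]
freqs = low_level(history)
print(high_level(freqs))  # 0.5: half of the commands are advanced tooling
```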

15 pages, 1443 KiB  
Article
A Novel Fractional Accumulative Grey Model with GA-PSO Optimizer and Its Application
by Ruixiao Huang, Xiaofeng Fu and Yifei Pu
Sensors 2023, 23(2), 636; https://doi.org/10.3390/s23020636 - 05 Jan 2023
Cited by 5 | Viewed by 1054
Abstract
The prediction of cyber security situation plays an important role in early warning against cyber security attacks. The first-order accumulative grey model has achieved remarkable results in many prediction scenarios. Since recent events have a greater impact on future decisions, new information should be given more weight. The disadvantage of the first-order accumulative grey model is that the first-order accumulative method gives equal weight to all of the original data. In this paper, a fractional-order cumulative grey model (FAGM) is used to establish the prediction model, and an intelligent optimization algorithm known as particle swarm optimization (PSO) combined with a genetic algorithm (GA) is used to determine the optimal order. The model discussed in this paper is used for the prediction of Internet cyber security situations. The results of a comparison with the traditional grey model GM(1,1), the grey model GM(1,n), and the fractional discrete grey seasonal model FDGSM(1,1) show that our model is suitable for cases with insufficient data and irregular sample sizes, and the prediction accuracy and stability of the model are better than those of the other three models.
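The fractional-order accumulation at the heart of FAGM can be sketched directly from its standard definition: x_r(k) = Σ_{i=1..k} [Γ(r+k−i) / (Γ(k−i+1) Γ(r))] · x(i). For r = 1 this reduces to the ordinary first-order cumulative sum. The GA-PSO search for the optimal order is not reproduced here; the data values are illustrative.

```python
import math

def frac_accumulate(x, r):
    """Fractional-order accumulative generation of order r."""
    out = []
    for k in range(1, len(x) + 1):
        s = 0.0
        for i in range(1, k + 1):
            coef = math.gamma(r + k - i) / (math.gamma(k - i + 1) * math.gamma(r))
            s += coef * x[i - 1]
        out.append(s)
    return out

data = [1.0, 2.0, 3.0]
print(frac_accumulate(data, 1.0))   # [1.0, 3.0, 6.0]: ordinary cumulative sum
print(frac_accumulate(data, 0.5))   # fractional accumulation with r = 0.5
```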

22 pages, 11097 KiB  
Article
A Secrecy Transmission Protocol with Energy Harvesting for Federated Learning
by Ping Xie, Fan Li, Ilsun You, Ling Xing, Honghai Wu and Huahong Ma
Sensors 2022, 22(15), 5506; https://doi.org/10.3390/s22155506 - 23 Jul 2022
Viewed by 1210
Abstract
In federated learning (FL), model parameters of deep learning are communicated between clients and the central server. To better train deep learning models, the spectrum resource and transmission security need to be guaranteed. Toward this end, we propose a secrecy transmission protocol based on energy harvesting and jammer selection for FL, in which the secondary transmitters can harvest energy from the primary source. Specifically, a secondary transmitter STi is first selected, which can offer the best transmission performance for the secondary users to access the primary frequency spectrum. Then, another secondary transmitter STn, which has the best channel for eavesdropping, is also chosen as a friendly jammer to provide secrecy service. Furthermore, we use outage probability (OP) and intercept probability (IP) as metrics to evaluate performance. Meanwhile, we also derive closed-form expressions of the OP and IP of primary users and the OP of secondary users for the proposed protocol. We also conduct a theoretical analysis of the optimal secondary transmission selection (OSTS) protocol. Finally, the performance of the proposed protocol is validated through numerical experiments. The results show that the secrecy performance of the proposed protocol is better than that of the OSTS and OCJS protocols.

17 pages, 718 KiB  
Article
CTTGAN: Traffic Data Synthesizing Scheme Based on Conditional GAN
by Jiayu Wang, Xuehu Yan, Lintao Liu, Longlong Li and Yongqiang Yu
Sensors 2022, 22(14), 5243; https://doi.org/10.3390/s22145243 - 13 Jul 2022
Cited by 5 | Viewed by 2238
Abstract
Most machine learning algorithms only have a good recognition rate on balanced datasets. However, in the field of malicious traffic identification, benign traffic on the network is far greater than malicious traffic, and the network traffic dataset is imbalanced, which makes the algorithm have a low identification rate for small categories of malicious traffic samples. This paper presents a traffic sample synthesizing model named Conditional Tabular Traffic Generative Adversarial Network (CTTGAN), which uses a Conditional Tabular Generative Adversarial Network (CTGAN) algorithm to expand the small category traffic samples and balance the dataset in order to improve the malicious traffic identification rate. The CTTGAN model expands and recognizes feature data, which meets the requirements of a machine learning algorithm for training and prediction data. The contributions of this paper are as follows: first, the small category samples are expanded and the traffic dataset is balanced; second, the storage cost and computational complexity are reduced compared to models using image data; third, discrete variables and continuous variables in traffic feature data are processed at the same time, and the data distribution is described well. The experimental results show that the recognition rate of the expanded samples is more than 0.99 in MLP, KNN and SVM algorithms. In addition, the recognition rate of the proposed CTTGAN model is better than the oversampling and undersampling schemes.
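The imbalance problem above, and the naive oversampling baseline CTTGAN is compared against, can be sketched as follows: duplicate minority-class rows (with small noise) until the classes balance. CTGAN instead learns the joint distribution of the features; that model is not reproduced here, and the toy flows below are invented.

```python
import random

def oversample(minority_rows, target_count, noise=0.01, seed=0):
    """Random oversampling with jitter: a common imbalance baseline."""
    rng = random.Random(seed)
    out = list(minority_rows)
    while len(out) < target_count:
        row = rng.choice(minority_rows)
        out.append([v + rng.uniform(-noise, noise) for v in row])
    return out

benign = [[0.0] * 3] * 900                       # majority class (toy features)
malicious = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9]]   # rare malicious flows
balanced_malicious = oversample(malicious, target_count=len(benign))
print(len(balanced_malicious))  # 900: classes are now balanced
```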

19 pages, 5298 KiB  
Article
Sparse Adversarial Video Attacks via Superpixel-Based Jacobian Computation
by Zhenyu Du, Fangzheng Liu and Xuehu Yan
Sensors 2022, 22(10), 3686; https://doi.org/10.3390/s22103686 - 12 May 2022
Cited by 2 | Viewed by 1354
Abstract
Adversarial examples have aroused great attention during the past years owing to their threat to the deep neural networks (DNNs). Recently, they have been successfully extended to video models. Compared with image cases, the sparse adversarial perturbations in the videos can not only reduce the computation complexity, but also guarantee the crypticity of adversarial examples. In this paper, we propose an efficient attack to generate adversarial video perturbations with large sparsity in both the temporal (inter-frames) and spatial (intra-frames) domains. Specifically, we select the key frames and key pixels according to the gradient feedback of the target models by computing the forward derivative, and then add the perturbations on them. To overcome the problem of dimensional explosion in the video, we introduce super-pixels to decrease the number of pixels that need to compute gradients. The proposed method is finally verified under both the white-box and black-box settings. We estimate the gradients using natural evolution strategy (NES) in the black-box attacks. The experiments are conducted on two widely used datasets: UCF101 and HMDB51 versus two mainstream models: C3D and LRCN. Results show that compared with the state-of-the-art method, our method can achieve similar attacking performance, but it pollutes only <1% of pixels and costs less time to finish the attacks.
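The sparsity idea above can be sketched by ranking positions by gradient magnitude and perturbing only the top-k, leaving everything else untouched. The gradients below are toy numbers; in the paper they come from the video model's forward derivative, aggregated over superpixels.

```python
def top_k_positions(gradients, k):
    """Indices of the k entries with the largest |gradient|."""
    return sorted(range(len(gradients)), key=lambda i: -abs(gradients[i]))[:k]

def sparse_perturb(pixels, gradients, k, eps=0.1):
    """Perturb only the k most influential positions, by the gradient sign."""
    chosen = set(top_k_positions(gradients, k))
    sign = lambda g: (g > 0) - (g < 0)
    return [p + eps * sign(gradients[i]) if i in chosen else p
            for i, p in enumerate(pixels)]

pixels = [0.5, 0.5, 0.5, 0.5, 0.5]
grads = [0.01, -0.9, 0.02, 0.8, -0.03]
adv = sparse_perturb(pixels, grads, k=2)
changed = sum(a != p for a, p in zip(adv, pixels))
print(changed)  # 2: only the two highest-gradient pixels were touched
```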

15 pages, 645 KiB  
Article
Wet Paper Coding-Based Deep Neural Network Watermarking
by Xuan Wang, Yuliang Lu, Xuehu Yan and Long Yu
Sensors 2022, 22(9), 3489; https://doi.org/10.3390/s22093489 - 04 May 2022
Cited by 1 | Viewed by 2181
Abstract
In recent years, the wide application of deep neural network models has brought serious risks of intellectual property rights infringement. Embedding a watermark in a network model is an effective solution to protect intellectual property rights. Although researchers have proposed schemes to add watermarks to models, they cannot prevent attackers from adding and overwriting original information, and embedding rates cannot be quantified. Therefore, aiming at these problems, this paper designs a high embedding rate and tamper-proof watermarking scheme. We employ wet paper coding (WPC), in which important parameters are regarded as wet blocks and the remaining unimportant parameters are regarded as dry blocks in the model. To obtain the important parameters more easily, we propose an optimized probabilistic selection strategy (OPSS). OPSS defines the unimportant-level function and sets the importance threshold to select the important parameter positions and to ensure that the original function is not affected after the model parameters are changed. We regard important parameters as an unmodifiable part, and only modify the part that includes the unimportant parameters. We selected the MNIST, CIFAR-10, and ImageNet datasets to test the performance of the model after adding a watermark and to analyze the fidelity, robustness, embedding rate, and comparison schemes of the model. Our experiment shows that the proposed scheme has high fidelity and strong robustness along with a high embedding rate and the ability to prevent malicious tampering.
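The wet/dry split above can be sketched in a simplified form: "wet" positions (important parameters) are never modified, and watermark bits are written only into "dry" positions. The magnitude-based importance rule is a stand-in for the paper's OPSS, and this sketch lets the extractor recompute the wet/dry split, whereas full wet paper coding does not require the reader to know which positions are wet.

```python
def embed(params, watermark_bits, wet_threshold=0.5):
    """Write one bit into the sign of each dry (unimportant) parameter."""
    out, bits = list(params), iter(watermark_bits)
    for i, p in enumerate(params):
        if abs(p) >= wet_threshold:              # wet block: leave untouched
            continue
        b = next(bits, None)
        if b is None:
            break
        out[i] = abs(p) if b == 1 else -abs(p)   # encode the bit in the sign
    return out

def extract(params, n_bits, wet_threshold=0.5):
    bits = [1 if p >= 0 else 0 for p in params if abs(p) < wet_threshold]
    return bits[:n_bits]

params = [0.9, 0.1, -0.2, 0.8, 0.05]             # 0.9 and 0.8 are "wet"
marked = embed(params, [1, 0, 1])
print(extract(marked, 3))                        # [1, 0, 1]: watermark recovered
print(marked[0] == 0.9 and marked[3] == 0.8)     # wet params untouched: True
```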

28 pages, 2317 KiB  
Article
Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method
by Thi-Thu-Huong Le, Haeyoung Kim, Hyoeun Kang and Howon Kim
Sensors 2022, 22(3), 1154; https://doi.org/10.3390/s22031154 - 03 Feb 2022
Cited by 67 | Viewed by 6369
Abstract
In recent years, many methods for intrusion detection systems (IDS) have been designed and developed in the research community, which have achieved a perfect detection rate using IDS datasets. Deep neural networks (DNNs) are representative examples applied widely in IDS. However, DNN models are becoming increasingly complex in model architectures with high resource computing in hardware requirements. In addition, it is difficult for humans to obtain explanations behind the decisions made by these DNN models using large IoT-based IDS datasets. Many proposed IDS methods have not been applied in practical deployments, because of the lack of explanation given to cybersecurity experts, to support them in terms of optimizing their decisions according to the judgments of the IDS models. This paper aims to enhance the attack detection performance of IDS with big IoT-based IDS datasets as well as provide explanations of machine learning (ML) model predictions. The proposed ML-based IDS method is based on the ensemble trees approach, including decision tree (DT) and random forest (RF) classifiers which do not require high computing resources for training models. In addition, two big datasets are used for the experimental evaluation of the proposed method, NF-BoT-IoT-v2 and NF-ToN-IoT-v2 (new versions of the original BoT-IoT and ToN-IoT datasets), through the feature set of the net flow meter. In addition, the IoTDS20 dataset is used for experiments. Furthermore, SHapley Additive exPlanations (SHAP) is applied as an eXplainable AI (XAI) methodology to explain and interpret the classification decisions of the DT and RF models; this is not only effective in interpreting the final decision of the ensemble tree approach but also supports cybersecurity experts in quickly optimizing and evaluating the correctness of their judgments based on the explanations of the results.
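The ensemble-trees approach above can be sketched with one-feature "stumps" that each vote, with the majority deciding. The features, thresholds, and stumps are invented; the paper uses full DT/RF classifiers and explains them with SHAP, which is not reimplemented here.

```python
def stump(feature_index, threshold):
    """A one-level tree: votes 'attack' (1) when the feature exceeds threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

forest = [stump(0, 100.0),   # hypothetical feature: packet count
          stump(1, 0.8),     # hypothetical feature: fraction of SYN packets
          stump(2, 50.0)]    # hypothetical feature: unique destination ports

def predict(forest, x):
    votes = [t(x) for t in forest]
    label = 1 if sum(votes) * 2 > len(forest) else 0   # majority vote
    return label, votes

label, votes = predict(forest, [500.0, 0.95, 10.0])
print(label, votes)  # 1 [1, 1, 0]: two of three stumps flag an attack
```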

Review

Jump to: Research

18 pages, 3140 KiB  
Review
Detecting Malware by Analyzing App Permissions on Android Platform: A Systematic Literature Review
by Adeel Ehsan, Cagatay Catal and Alok Mishra
Sensors 2022, 22(20), 7928; https://doi.org/10.3390/s22207928 - 18 Oct 2022
Cited by 3 | Viewed by 3327
Abstract
Smartphone adoption in society has been progressing at a very high speed. Having the ability to run on a vast variety of devices, much of the user base possesses an Android phone. Its popularity and flexibility have played a major role in making it a target of different attacks via malware, causing loss to users, both financially and from a privacy perspective. Different malware and their variants are emerging every day, making it a huge challenge to come up with detection and preventive methodologies and tools. Research has spawned in various directions to yield effective malware detection mechanisms. Since malware can adopt different ways to attack and hide, accurate analysis is the key to detecting them. Like any usual mobile app, malware requires permission to take action and use device resources. There are 235 total permissions that an Android app can request on a device. Malware takes advantage of this to request unnecessary permissions, which enable it to take malicious actions. Since permissions are critical, it is important and challenging to identify if an app is exploiting permissions and causing damage. The focus of this article is to analyze the identified studies that have been conducted with a focus on permission analysis for malware detection. With this perspective, a systematic literature review (SLR) has been produced. Several papers have been retrieved and selected for detailed analysis. Current challenges and different analyses were presented using the identified articles.
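The family of techniques this review surveys can be sketched in its simplest form: represent an app by the permissions it requests and score it against a list of permissions commonly abused by malware. The "dangerous" set and threshold below are illustrative assumptions, not a vetted detection rule.

```python
# Hypothetical subset of permissions commonly abused by Android malware.
DANGEROUS = {"SEND_SMS", "READ_CONTACTS", "RECEIVE_BOOT_COMPLETED",
             "READ_SMS", "WRITE_SETTINGS"}

def risk_score(requested_permissions):
    """Fraction of requested permissions that are on the dangerous list."""
    requested = set(requested_permissions)
    if not requested:
        return 0.0
    return len(requested & DANGEROUS) / len(requested)

app = ["INTERNET", "SEND_SMS", "READ_SMS", "READ_CONTACTS"]
score = risk_score(app)
print(score)          # 0.75
print(score >= 0.5)   # True: flag the app for deeper (e.g. ML-based) analysis
```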

17 pages, 2085 KiB  
Review
Deep Learning for Encrypted Traffic Classification and Unknown Data Detection
by Madushi H. Pathmaperuma, Yogachandran Rahulamathavan, Safak Dogan and Ahmet M. Kondoz
Sensors 2022, 22(19), 7643; https://doi.org/10.3390/s22197643 - 09 Oct 2022
Cited by 6 | Viewed by 3311
Abstract
Despite the widespread use of encryption techniques to provide confidentiality over Internet communications, mobile device users are still susceptible to privacy and security risks. In this paper, a novel Deep Neural Network (DNN)-based user activity detection framework is proposed to identify fine-grained user activities performed on mobile applications (known as in-app activities) from a sniffed encrypted Internet traffic stream. One of the challenges is that there are countless applications, and it is practically impossible to collect and train a DNN model using all possible data from them. Therefore, in this work, we exploit the probability distribution of a DNN output layer to filter the data from applications that are not considered during the model training (i.e., unknown data). The proposed framework uses a time window-based approach to divide the traffic flow of activity into segments so that in-app activities can be identified just by observing only a fraction of the activity-related traffic. Our tests have shown that the DNN-based framework has demonstrated an accuracy of 90% or above in identifying previously trained in-app activities and an average accuracy of 79% in identifying previously untrained in-app activity traffic as unknown data.
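The unknown-data filter described above can be sketched with a softmax confidence threshold: a sample is treated as "unknown" when no class in the output distribution is predicted confidently. The logits and the threshold value are toy assumptions, not the paper's calibrated settings.

```python
import math

def softmax(logits):
    m = max(logits)                     # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits, threshold=0.7):
    """Return the class index, or 'unknown' when confidence is too low."""
    probs = softmax(logits)
    top = max(probs)
    return probs.index(top) if top >= threshold else "unknown"

print(classify([4.0, 0.1, 0.2]))   # confident -> class index 0
print(classify([1.0, 0.9, 1.1]))   # flat distribution -> 'unknown'
```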
