Security Challenges and Opportunities of Artificial Intelligence/Big Data Scenarios

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: 31 July 2025

Special Issue Editors


Dr. Xiaodan Yan
Guest Editor
School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: artificial intelligence and big data security; satellite internet security; optical communication systems and optical network security

Prof. Dr. Ke Yan
Guest Editor
Department of Mechanical and Electrical Engineering, Hunan University, Changsha 410082, China
Interests: big data privacy protection; intelligent buildings; information-based building management systems; energy consumption prediction

Dr. Muyi Sun
Guest Editor
School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: big data privacy protection; multi-modality learning; interactive AI for vision

Special Issue Information

Dear Colleagues,

The convergence of Artificial Intelligence (AI) and Big Data has transformative implications across sectors, offering unparalleled prospects for innovation and operational efficiency. AI systems, bolstered by the depth and breadth of Big Data, are poised to revolutionize decision making, predictive analytics, and personalized services. This fusion, however, also creates acute security challenges: as AI systems leverage vast amounts of data for enhanced decision making, they expose algorithmic vulnerabilities and put sensitive data at risk. The risks posed by data breaches, algorithmic bias, and the misuse of AI for malicious intent are significant and demand immediate attention.

This Special Issue is particularly interested in technical, experimental, and methodological contributions that explore the interface between security and the advancement of AI and Big Data. We encourage the submission of papers that offer innovative approaches, strategies, and applications aimed at mitigating security risks and enhancing the trustworthiness of AI and Big Data systems. Special consideration will be given to research that advances our understanding of how to secure these technologies for the betterment of industrial processes, user safety, and overall system integrity.


In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Intrinsic AI security;
  • Big Data security;
  • Derived AI security;
  • Data breaches;
  • AI-enabled security;
  • Malicious use of AI;
  • Ethical AI;
  • Regulatory frameworks;
  • Predictive analytics;
  • Privacy protection;
  • Trustworthy biometrics;
  • Security in satellite networks;
  • Cybersecurity;
  • Algorithmic bias;
  • AI data security;
  • AI model security.

Dr. Xiaodan Yan
Prof. Dr. Ke Yan
Dr. Muyi Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • security and privacy protection
  • vulnerability
  • cybersecurity
  • secure communication

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 3365 KiB  
Article
Robust Federated Learning Against Data Poisoning Attacks: Prevention and Detection of Attacked Nodes
by Pretom Roy Ovi and Aryya Gangopadhyay
Electronics 2025, 14(15), 2970; https://doi.org/10.3390/electronics14152970 - 25 Jul 2025
Abstract
Federated learning (FL) enables collaborative model building among a large number of participants without sharing sensitive data with a central server. Because of its distributed nature, FL has limited control over local data and the corresponding training process, leaving it susceptible to data poisoning attacks in which malicious workers train the model on corrupted data. Attackers on the worker side can easily mount such attacks by swapping the labels of training instances, adding noise to training instances, or injecting out-of-distribution instances into the local data. Local workers under attack then carry incorrect information to the server, poison the global model, and cause misclassifications, so preventing and detecting such attacks is crucial to building a robust federated training framework. To address this, we propose a prevention strategy, confident federated learning, that protects workers from data poisoning attacks. The strategy first validates the label quality of local training samples by characterizing and identifying label errors in the local training data, and then excludes the detected mislabeled samples from local training. Experiments in both the image and audio domains validate the robustness of the approach: it detects mislabeled training samples with above 85% accuracy and excludes them from the training set. The prevention strategy, however, is effective only up to a certain fraction of poisoned samples; beyond that fraction, detection of the attacked workers is needed. We therefore also propose a novel detection strategy for the federated learning framework that identifies malicious workers by building a class-wise cluster representation for every participating worker from the neuron activation maps of local models and analyzing the resulting clusters to filter out attacked workers before model aggregation. We experimentally demonstrate the efficacy of this detection strategy in identifying workers affected by data poisoning attacks, along with the attack type, e.g., label flipping or dirty labeling. Our results also show that the global model cannot converge even after a large number of training rounds in the presence of malicious workers, whereas detecting the malicious workers with our method and discarding them from model aggregation lets the global model converge within very few rounds. Furthermore, the approach stays robust under different data distributions and model sizes and does not require prior knowledge of the number of attackers in the system.
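
As a concrete illustration of the label-quality validation step, the sketch below applies a confident-learning-style filter: out-of-sample predicted probabilities are compared against per-class confidence thresholds, and samples whose given label falls below its class threshold are excluded from local training. This is a minimal sketch of the general technique, not the authors' implementation; the function name and thresholding rule are our own simplifications.

```python
import numpy as np

def filter_mislabeled(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return a boolean keep-mask over local training samples.

    probs  : (n, k) out-of-sample predicted class probabilities
    labels : (n,)   given (possibly noisy) integer labels

    Heuristic: trust a sample's label only if the predicted probability
    for that label reaches the label's class-wise mean (its confidence
    threshold), as in confident learning.
    """
    n, k = probs.shape
    thresholds = np.array([
        probs[labels == j, j].mean() if np.any(labels == j) else 1.0
        for j in range(k)
    ])
    return probs[np.arange(n), labels] >= thresholds[labels]

# Toy usage: the third sample looks mislabeled and is dropped.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
labels = np.array([0, 0, 0])
print(filter_mislabeled(probs, labels))  # [ True  True False]
```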

16 pages, 10129 KiB  
Article
PestOOD: An AI-Enabled Solution for Advancing Grain Security via Out-of-Distribution Pest Detection
by Jida Tian, Chuanyang Ma, Jiangtao Li and Huiling Zhou
Electronics 2025, 14(14), 2868; https://doi.org/10.3390/electronics14142868 - 18 Jul 2025
Abstract
Detecting stored-grain pests on the surface of the grain pile plays an important role in integrated pest management (IPM), which is crucial for grain security. Numerous deep learning-based pest detection methods have been proposed recently. However, a critical limitation of existing methods is their inability to detect out-of-distribution (OOD) categories that are unseen during training; when encountering such objects, they often misclassify them as in-distribution (ID) categories. To address this challenge, we propose a one-stage framework named PestOOD for out-of-distribution stored-grain pest detection via flow-based feature reconstruction. Specifically, we propose a novel Flow-Based OOD Feature Generation (FOFG) module that generates OOD features for detector training via feature reconstruction, which helps the detector recognize OOD objects more effectively. Additionally, to prevent network overfitting that may lead to an excessive focus on ID feature extraction, we propose a Noisy DropBlock (NDB) module and integrate it into the backbone network. Finally, to ensure effective network convergence, a Stage-Wise Training Strategy (STS) is proposed. Extensive experiments on our previously established multi-class stored-grain pest dataset show that PestOOD outperforms state-of-the-art methods, providing an effective AI-enabled solution for ensuring grain security.
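
For readers unfamiliar with density-based OOD rejection, the family that flow-based feature reconstruction belongs to, here is a deliberately simplified sketch: it fits a single Gaussian to in-distribution backbone features and flags detections whose Mahalanobis distance is large. The paper's FOFG module uses a normalizing flow rather than a Gaussian; the class name and usage below are hypothetical.

```python
import numpy as np

class GaussianOODScorer:
    """Toy stand-in for flow-based OOD scoring: model in-distribution
    (ID) feature density, flag low-density features as OOD."""

    def fit(self, id_feats: np.ndarray) -> "GaussianOODScorer":
        self.mu = id_feats.mean(axis=0)
        cov = np.cov(id_feats, rowvar=False)
        # Regularize so the covariance matrix is invertible.
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(id_feats.shape[1]))
        return self

    def score(self, feat: np.ndarray) -> float:
        d = feat - self.mu
        return float(d @ self.prec @ d)  # squared Mahalanobis distance

# Toy usage: a feature far from the ID cloud scores much higher.
rng = np.random.default_rng(0)
scorer = GaussianOODScorer().fit(rng.normal(0.0, 1.0, size=(500, 8)))
print(scorer.score(rng.normal(0.0, 1.0, size=8)))  # small -> likely ID
print(scorer.score(np.full(8, 6.0)))               # large -> likely OOD
```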

19 pages, 1821 KiB  
Article
Mitigating DDoS Attacks in LEO Satellite Networks Through Bottleneck Minimize Routing
by Fangzhou Meng, Xiaodan Yan, Yuanjian Zhang, Jian Yang, Ang Cao, Ruiqi Liu and Yongli Zhao
Electronics 2025, 14(12), 2376; https://doi.org/10.3390/electronics14122376 - 10 Jun 2025
Abstract
In this paper, we focus on defending against distributed denial-of-service (DDoS) attacks in a low-earth-orbit (LEO) satellite network (LSN). To enhance the security of the LSN, we propose the K-Bottleneck Minimize routing method. The algorithm ensures path diversity while avoiding vulnerable bottleneck paths, which significantly increases the cost for attackers and makes attacks easier to detect. The results show that the algorithm avoids bottleneck paths that are vulnerable to attack, raises the attacker's cost by about 13.1% on average and 16.6% in the median, and improves the detectability of attackers by 48.5% on average and 45.4% in the median. By generating multiple non-overlapping inter-satellite paths, the algorithm prevents the exploitation of bottleneck paths and ensures better robustness and attack resistance.
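
The multipath idea, routing over several inter-satellite paths that share no link an attacker could saturate, can be approximated with a successive-shortest-paths heuristic: take the shortest path, remove its edges, and repeat. This is a generic sketch rather than the paper's K-Bottleneck Minimize algorithm, which additionally scores candidate paths by their bottleneck exposure; the toy grid topology stands in for a real LEO constellation.

```python
import networkx as nx

def k_disjoint_paths(G: nx.Graph, src, dst, k: int = 3) -> list:
    """Up to k edge-disjoint src->dst paths: repeatedly take the
    shortest path, then remove its edges so no link (a potential
    bottleneck) is shared by two paths."""
    H = G.copy()
    paths = []
    for _ in range(k):
        try:
            p = nx.shortest_path(H, src, dst, weight="weight")
        except nx.NetworkXNoPath:
            break  # no further disjoint capacity between src and dst
        paths.append(p)
        H.remove_edges_from(zip(p, p[1:]))
    return paths

# Toy LEO-like mesh: a 4x4 grid of satellites with unit-weight ISLs.
G = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(G, 1, "weight")
for path in k_disjoint_paths(G, (0, 0), (3, 3), k=2):
    print(path)
```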

20 pages, 1186 KiB  
Article
A Practical Human-Centric Risk Management (HRM) Methodology
by Kitty Kioskli, Eleni Seralidou and Nineta Polemi
Electronics 2025, 14(3), 486; https://doi.org/10.3390/electronics14030486 - 25 Jan 2025
Cited by 1
Abstract
Various standards (e.g., ISO 27000x, ISO 31000:2018) and methodologies (e.g., NIST SP 800-53, NIST SP 800-37, NIST SP 800-161, ETSI TS 102 165-1, NISTIR 8286) are available for risk assessment. However, these standards often overlook the human element. Studies have shown that adversary profiles (AP), which capture the maturity of attackers, significantly affect vulnerability assessments and risk calculations. Similarly, the maturity of the users interacting with an Information and Communication Technologies (ICT) system in adopting security practices impacts risk calculations. In this paper, we identify and estimate the maturity of user profiles (UP) and propose an enhanced risk assessment methodology, Human-Centric Risk Management (HRM), based on ISO 27001, that incorporates the human element into risk evaluation. Social measures, such as awareness programs, training, and behavioral interventions, are included alongside technical controls in the HRM risk treatment phase. These measures enhance user security hygiene and resilience, reducing risks and supporting comprehensive security strategies in small and medium-sized enterprises (SMEs).
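
One hypothetical way to read the HRM idea numerically: scale a classic likelihood-times-impact score up with adversary-profile (AP) maturity and down with user-profile (UP) security maturity. The formula and its damping factor below are our illustration only; the paper defines its own profile scales and treatment phases on top of ISO 27001.

```python
def hrm_risk(likelihood: float, impact: float,
             adversary_maturity: float, user_maturity: float) -> float:
    """Illustrative human-centric risk score; all inputs in [0, 1].

    A more mature adversary profile inflates the effective likelihood;
    a more security-mature user population deflates it. The 0.5 damping
    factor is an arbitrary choice for this sketch.
    """
    effective_likelihood = (likelihood
                            * (1.0 + adversary_maturity)
                            * (1.0 - 0.5 * user_maturity))
    return min(1.0, effective_likelihood) * impact

# Same technical scenario, different human context:
print(hrm_risk(0.4, 0.8, adversary_maturity=0.9, user_maturity=0.1))  # ~0.58
print(hrm_risk(0.4, 0.8, adversary_maturity=0.2, user_maturity=0.9))  # ~0.21
```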
