Advancements in Distributed Intelligent Security Through AI-Driven Solutions

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 May 2025

Special Issue Editors


Guest Editor
Institutes of Artificial Intelligence, Guangzhou University, Guangzhou 510006, China
Interests: machine learning security; Internet of Things; edge computing

Guest Editor
Faculty of Data Science, City University of Macau, Macau 999074, China
Interests: differential privacy; optimization principle

Guest Editor
Department of Computing, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China
Interests: billion-scale RecSys; graph neural networks; large language models

Special Issue Information

Dear Colleagues,

Rapid advancements in artificial intelligence (AI) are revolutionizing the cybersecurity landscape, particularly in distributed computing environments. With the growing dependence on decentralized systems, such as edge computing, the Internet of Things (IoT), and federated learning, traditional centralized security frameworks are increasingly being outpaced by the complexity and scale of modern networks. These emerging environments require advanced, intelligent security solutions capable of adapting to ever-evolving threats and vulnerabilities.

This Special Issue seeks to explore the intersection of AI and distributed intelligent security, presenting groundbreaking research that leverages AI to address the unique challenges posed by distributed systems. AI-driven security solutions hold the promise of transforming cybersecurity by enabling real-time detection, prevention, and mitigation of attacks in distributed environments. Through adaptive, scalable, and autonomous capabilities, AI has the potential to create more resilient security architectures that proactively protect decentralized networks and data systems.

We invite researchers, academics, and practitioners to contribute their latest insights and advancements in this rapidly evolving field. Submissions are encouraged to focus on topics that explore AI’s role in enhancing security across distributed systems, whether through novel algorithms, theoretical advancements, or practical applications. Interdisciplinary research and case studies highlighting real-world implementations are also welcomed.

Topics of interest include, but are not limited to, the following:

  1. AI-enhanced Intrusion Detection and Prevention Systems: New methodologies that leverage machine learning, deep learning, or other AI techniques to detect and prevent unauthorized access or anomalies in distributed systems.
  2. Federated Learning Security and Privacy: Approaches for ensuring data privacy and security within federated learning frameworks, including secure model aggregation, adversarial attacks, and defense mechanisms.
  3. Blockchain-based Security Protocols: Applications of blockchain technology to secure distributed environments, with a focus on decentralized trust mechanisms, consensus protocols, and secure data sharing.
  4. Machine Learning for Anomaly Detection: Leveraging machine learning models to identify outliers or unusual behavior across decentralized networks, ensuring early threat detection.
  5. AI-driven Encryption Techniques: Innovations in encryption driven by AI that enhance data security and privacy across distributed systems, focusing on lightweight and scalable encryption models.
  6. Adaptive Cybersecurity Architectures: AI-enabled frameworks that automatically adapt to evolving threats, providing dynamic defense mechanisms for distributed systems.
  7. AI for Real-time Security Analytics: Leveraging AI to process vast amounts of distributed data in real time, providing predictive and actionable insights to prevent cyberattacks.
  8. AI-powered Privacy-preserving Techniques: Research on AI-driven methods for preserving user privacy across distributed systems, including differential privacy, homomorphic encryption, and secure multi-party computation.
  9. Application of Generative Models in Security: The use of generative models such as GANs for cybersecurity applications, including generating synthetic data for training and simulating attack scenarios.
  10. Deep Learning and Natural Language Processing for Security: Applying advanced deep learning models and NLP techniques to enhance the detection and mitigation of cyber threats, particularly in analyzing textual data like phishing emails or malicious code.
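Several of these topics admit compact illustrations. As a toy, self-contained sketch of topic 4 (anomaly detection), the baseline below flags points whose z-score is far from the mean; real distributed detectors use learned models and cross-node aggregation, so treat this purely as illustrative:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose |z-score| exceeds `threshold`.

    A minimal statistical baseline for anomaly detection; a distributed
    deployment would estimate the mean/std across nodes instead of locally.
    """
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return []  # constant stream: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]
```

For example, `zscore_anomalies([1.0] * 20 + [100.0])` flags only the final index.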

Dr. Kongyang Chen
Dr. Jianping Cai
Dr. Hao Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed systems
  • artificial intelligence security
  • privacy-preserving techniques
  • cybersecurity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

17 pages, 4549 KiB  
Article
Tampering Detection in Absolute Moment Block Truncation Coding (AMBTC) Compressed Code Using Matrix Coding
by Yijie Lin, Ching-Chun Chang and Chin-Chen Chang
Electronics 2025, 14(9), 1831; https://doi.org/10.3390/electronics14091831 - 29 Apr 2025
Abstract
With the increasing use of digital image compression technology, ensuring data integrity and security within the compression domain has become a crucial area of research. Absolute moment block truncation coding (AMBTC), an efficient lossy compression algorithm, is widely used for low-bitrate image storage and transmission. However, existing studies have primarily focused on tamper detection for AMBTC compressed images, often overlooking the integrity of the AMBTC compressed code itself. To address this gap, this paper introduces a novel anti-tampering scheme specifically designed for AMBTC compressed code. The proposed scheme utilizes shuffle pairing to establish a one-to-one relationship between image blocks. The hash value, calculated as verification data from the original data of each block, is then embedded into the bitmap of its corresponding block using the matrix coding algorithm. Additionally, a tampering localization mechanism is incorporated to enhance the security of the compressed code without introducing additional redundancy. The experimental results demonstrate that the proposed scheme effectively detects tampering with high accuracy, providing protection for AMBTC compressed code.
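The matrix coding step this abstract mentions can be illustrated with the classic F5-style (1, n, k) scheme, which hides k bits in n = 2^k − 1 cover bits while flipping at most one bit. This is a generic sketch of matrix embedding, not the authors' exact construction:

```python
def embed(cover_bits, message, k):
    """F5-style matrix embedding: hide a k-bit integer `message`
    in n = 2**k - 1 cover bits by flipping at most one bit."""
    n = 2 ** k - 1
    assert len(cover_bits) == n and 0 <= message <= n
    stego = list(cover_bits)
    # Syndrome: XOR of the (1-indexed) positions holding a 1-bit.
    syndrome = 0
    for i, bit in enumerate(stego, start=1):
        if bit:
            syndrome ^= i
    flip = syndrome ^ message   # position to flip; 0 means "already encodes it"
    if flip:
        stego[flip - 1] ^= 1
    return stego

def extract(stego_bits):
    """Recover the embedded message by recomputing the syndrome."""
    syndrome = 0
    for i, bit in enumerate(stego_bits, start=1):
        if bit:
            syndrome ^= i
    return syndrome
```

With k = 2, two message bits ride on three cover bits at a cost of at most one flip, which is why matrix coding adds verification data with so little distortion.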
26 pages, 7768 KiB  
Article
Privacy-Preserving Information Extraction for Ethical Case Studies in Machine Learning Using ChatGLM-LtMP
by Xindan Gao, Xinyi Ba, Jian Xing and Ying Liu
Electronics 2025, 14(7), 1352; https://doi.org/10.3390/electronics14071352 - 28 Mar 2025
Abstract
Ensuring privacy protection in machine learning is crucial for handling sensitive information, particularly in ethical case studies within computer engineering. Traditional information extraction methods often expose private data to risks such as membership inference and reconstruction attacks, compromising confidentiality. To address these concerns, we propose ChatGLM-LtMP, a privacy-preserving information extraction framework that integrates Least-to-Most Prompting and P-Tuning v2 for structured and secure data retrieval. By employing controlled prompting mechanisms, our approach minimizes data exposure while maintaining high accuracy (93.71%), outperforming baseline models. Additionally, we construct a knowledge graph using the Neo4j 4.4 database and integrate LangChain 0.2 for case-based intelligent question answering. This framework enables secure and interpretable extraction of ethical case data, making it suitable for applications in sensitive machine learning scenarios. The proposed method advances information extraction, safeguarding sensitive ethical cases from potential attacks in automated learning environments.

20 pages, 270 KiB  
Article
A Novel User Behavior Modeling Scheme for Edge Devices with Dynamic Privacy Budget Allocation
by Hua Zhang, Hao Huang and Cheng Peng
Electronics 2025, 14(5), 954; https://doi.org/10.3390/electronics14050954 - 27 Feb 2025
Abstract
Federated learning (FL) enables privacy-preserving collaborative model training across edge devices without exposing raw user data, but it is vulnerable to privacy leakage through shared model updates, making differential privacy (DP) essential. Existing DP-based FL methods, such as fixed-noise DP, suffer from excessive noise injection and inefficient privacy budget allocation, which degrade model accuracy. To address these limitations, we propose an adaptive differential privacy mechanism that dynamically adjusts the noise based on gradient sensitivity, optimizing the privacy–accuracy trade-off, along with a hierarchical privacy budget management strategy to minimize cumulative privacy loss. We also incorporate communication-efficient techniques like gradient sparsification and quantization to reduce bandwidth usage without sacrificing privacy guarantees. Experimental results on three real-world datasets showed that our adaptive DP-FL method improved accuracy by up to 8.1%, reduced privacy loss by 38%, and lowered communication overhead by 15–18%. While promising, our method’s robustness against advanced privacy attacks and its scalability in real-world edge environments are areas for future exploration, highlighting the need for further validation in practical FL applications such as personalized recommendation and privacy-sensitive user behavior modeling.
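The core mechanism behind noise-calibrated DP in federated updates can be sketched, in heavily simplified form, as per-round gradient clipping followed by Gaussian noise scaled to the clipping bound (a generic DP-SGD-style step; the function name and parameters are illustrative, not this paper's adaptive scheme):

```python
import math
import random

def clip_and_noise(grad, clip_norm, noise_multiplier, rng):
    """Clip a gradient vector to L2 norm `clip_norm`, then add
    Gaussian noise with std = noise_multiplier * clip_norm.

    Clipping bounds each client's sensitivity, which is what lets the
    added noise translate into a differential-privacy guarantee.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    std = noise_multiplier * clip_norm
    return [g + rng.gauss(0.0, std) for g in clipped]
```

An adaptive variant, as the abstract describes, would tune `clip_norm` or `noise_multiplier` per round from observed gradient sensitivity rather than fixing them in advance.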

19 pages, 291 KiB  
Article
Towards Federated Robust Approximation of Nonlinear Systems with Differential Privacy Guarantee
by Zhijie Yang, Xiaolong Yan, Guoguang Chen, Mingli Niu and Xiaoli Tian
Electronics 2025, 14(5), 937; https://doi.org/10.3390/electronics14050937 - 26 Feb 2025
Abstract
Nonlinear systems, characterized by their complex and often unpredictable dynamics, are essential in various scientific and engineering applications. However, accurately modeling these systems remains challenging due to their nonlinearity, high-dimensional interactions, and the privacy concerns inherent in data-sensitive domains. Existing federated learning approaches struggle to model such complex behaviors, particularly due to their inability to capture high-dimensional interactions and their failure to maintain privacy while ensuring robust model performance. This paper presents a novel federated learning framework for the robust approximation of nonlinear systems, addressing these challenges by integrating differential privacy to protect sensitive data without compromising model utility. The proposed framework enables decentralized training across multiple clients, ensuring privacy through differential privacy mechanisms that mitigate risks of information leakage via gradient updates. Advanced neural network architectures are employed to effectively approximate nonlinear dynamics, with stability and scalability ensured by rigorous theoretical analysis. We compare our approach with both centralized and decentralized federated models, highlighting the advantages of our framework, particularly in terms of privacy preservation. Comprehensive experiments on benchmark datasets, such as the Lorenz system and real-world climate data, demonstrate that our federated model achieves comparable accuracy to centralized approaches while offering strong privacy guarantees. The system efficiently handles data heterogeneity and dynamic nonlinear behavior, scaling well with both the number of clients and model complexity. These findings demonstrate a pathway for the secure and scalable deployment of machine learning models in nonlinear system modeling, effectively balancing accuracy, privacy, and computational performance.

19 pages, 262 KiB  
Article
Fine-Grained Encrypted Traffic Classification Using Dual Embedding and Graph Neural Networks
by Zhengyang Liu, Qiang Wei, Qisong Song and Chaoyuan Duan
Electronics 2025, 14(4), 778; https://doi.org/10.3390/electronics14040778 - 17 Feb 2025
Abstract
Encrypted traffic classification poses significant challenges in network security due to the growing use of encryption protocols, which obscure packet payloads. This paper introduces a novel framework that leverages dual embedding mechanisms and Graph Neural Networks (GNNs) to model both temporal and spatial dependencies in traffic flows. By utilizing metadata features such as packet size, inter-arrival times, and protocol attributes, the framework achieves robust classification without relying on payload content. The proposed framework demonstrates an average classification accuracy of 96.7%, F1-score of 96.0%, and AUC-ROC of 97.9% across benchmark datasets, including ISCX VPN-nonVPN, QUIC, and USTC-TFC2016. These results mark an improvement of up to 8% in F1-score and 10% in AUC-ROC compared to state-of-the-art baselines. Extensive experiments validate the framework’s scalability and robustness, confirming its potential for real-world applications like intrusion detection and network monitoring. The integration of dual embedding mechanisms and GNNs allows for accurate fine-grained classification of encrypted traffic flows, addressing critical challenges in modern network security.
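The message-passing idea underlying GNN-based flow classification can be reduced to a single neighbor-aggregation step over a flow graph. The sketch below is only the core aggregation primitive, in plain Python; the paper's dual-embedding architecture is far richer:

```python
def mean_aggregate(adj, feats):
    """One round of neighbor mean-aggregation, the core GNN step:
    each node's new feature vector is the average of its neighbors'
    features (with a self-loop so isolated nodes keep their own).

    `adj` is an n x n 0/1 adjacency matrix; `feats` is a list of
    per-node feature vectors (e.g., packet-size statistics per flow).
    """
    n = len(feats)
    dim = len(feats[0])
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # include self
        out.append([sum(feats[j][k] for j in neigh) / len(neigh)
                    for k in range(dim)])
    return out
```

Stacking such layers (with learned weights between them) lets structurally related flows share evidence, which is what enables classification from metadata alone.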

14 pages, 715 KiB  
Article
BATG: A Backdoor Attack Method Based on Trigger Generation
by Weixuan Tang, Haoke Xie, Yuan Rao, Min Long, Tao Qi and Zhili Zhou
Electronics 2024, 13(24), 5031; https://doi.org/10.3390/electronics13245031 - 21 Dec 2024
Abstract
Backdoor attacks aim to implant hidden backdoors into Deep Neural Networks (DNNs) so that the victim models perform well on clean images, whereas their predictions would be maliciously changed on poisoned images. However, most existing backdoor attacks lack the invisibility and robustness required for real-world applications, especially when it comes to resisting image compression techniques, such as JPEG and WEBP. To address these issues, in this paper, we propose a Backdoor Attack Method based on Trigger Generation (BATG). Specifically, a deep convolutional generative network is utilized as the trigger generation model to generate effective trigger images and an Invertible Neural Network (INN) is utilized as the trigger injection model to embed the generated trigger images into clean images to create poisoned images. Furthermore, a noise layer is used to simulate image compression attacks for adversarial training, enhancing the robustness against real-world image compression. Comprehensive experiments on benchmark datasets demonstrate the effectiveness, invisibility, and robustness of the proposed BATG.
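For contrast with BATG's generated, invisibly injected triggers, the classic patch-based (BadNets-style) poisoning that it improves upon amounts to stamping a fixed patch onto a fraction of training images and relabeling them. This sketch shows only that baseline, not the paper's generative INN-based method:

```python
def add_trigger(image, patch, x, y):
    """Stamp a small trigger patch onto a copy of a 2-D grayscale
    image (a list of pixel rows) with its top-left corner at (x, y).

    Classic patch-based poisoning: visible and fragile under
    compression, which is exactly what generated triggers avoid.
    """
    poisoned = [row[:] for row in image]  # leave the original intact
    for dy, patch_row in enumerate(patch):
        for dx, pixel in enumerate(patch_row):
            poisoned[y + dy][x + dx] = pixel
    return poisoned
```

A poisoning pipeline would apply `add_trigger` to a small fraction of the training set and flip those samples' labels to the attacker's target class.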
