
Privacy-Preserving and System Security Control Based on Machine Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2025 | Viewed by 3597

Special Issue Editors


Dr. Emil Pricop
Guest Editor
Automatic Control, Computers & Electronics Department, Petroleum-Gas University of Ploiesti, 100680 Ploiesti, Romania
Interests: cybersecurity; industrial control system security; personal identification methods; Industry 4.0 technologies

Dr. Jaouhar Fattahi
Guest Editor
Computer Science and Software Engineering Department, Laval University, Quebec City, QC G1V 0A6, Canada
Interests: security; cryptographic protocols; anomaly/intrusion detection; reverse engineering; AI/ML/DL

Dr. Grigore Stamatescu
Guest Editor
Department of Automation and Industrial Informatics, Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
Interests: networked-embedded sensing; information processing; control engineering; building automation; smart city; data analytics; computational intelligence; industry and energy applications

Special Issue Information

Dear Colleagues,

The widespread use of information and communication technologies has revolutionized our lives, offering unprecedented convenience and connectivity. However, this progress has also introduced significant challenges to privacy preservation and system security. Machine learning techniques have emerged as powerful tools for addressing these evolving security and privacy challenges.

This Special Issue on “Privacy-Preserving and System Security Control Based on Machine Learning” invites original research contributions and review articles exploring the latest advancements and applications of machine learning for privacy preservation and system security. We welcome submissions covering a broad range of topics, including but not limited to the following:

  • Privacy-preserving machine learning algorithms;
  • Differential privacy;
  • Adversarial machine learning for security control;
  • Anomaly detection using machine learning for system security;
  • Zero-day attack detection using machine learning;
  • Secure federated learning;
  • Privacy-enhancing technologies in machine learning.

We expect this Special Issue to provide a timely and significant platform for researchers and practitioners to present their latest findings and to foster collaborations in this swiftly advancing field. We look forward to your contributions.

Dr. Emil Pricop
Dr. Jaouhar Fattahi
Dr. Grigore Stamatescu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • privacy preserving
  • system security
  • machine learning
  • differential privacy
  • adversarial machine learning
  • secure federated learning
  • machine learning-based attack detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

17 pages, 1386 KiB  
Article
Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection
by Xavier Larriva-Novo, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas and Óscar Jover
Appl. Sci. 2024, 14(24), 11511; https://doi.org/10.3390/app142411511 - 10 Dec 2024
Viewed by 949
Abstract
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so that Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability in applications that rely on AI predictions, making it hard for cybersecurity operators to understand why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make AI decisions more understandable at both global and local levels. This not only boosts confidence in the AI but also aids in identifying the attributes commonly used in cyberattacks to exploit flaws or vulnerabilities. This study proposes two developments: first, the creation and evaluation of machine learning models for an IDS that use Reinforcement Learning (RL) to classify malicious network traffic, and second, a methodology to extract multi-level explanations from the RL model to identify, detect, and understand how different attributes affect uncertain types of attack categories.
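
The abstract above describes a post-hoc explanation layer built on top of a learned traffic classifier. The sketch below is only an illustration of that general idea, not the authors' pipeline: the flow features and data are synthetic placeholders, and a RandomForest stands in for the paper's Reinforcement Learning agent so that the global, permutation-based attribution step is easy to follow.

```python
# Minimal sketch (not the authors' method): global post-hoc feature
# attribution for an intrusion-detection classifier. All feature names and
# data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "syn_rate"]  # hypothetical flow attributes

# Synthetic "network flow" records: label 1 = malicious, 0 = benign.
X = rng.normal(size=(2000, len(feature_names)))
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>10}: {score:.3f}")
```

A ranking like this gives an operator a global view of which traffic attributes the model leans on; the paper goes further by extracting multi-level (global and local) explanations directly from the RL model.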

14 pages, 503 KiB  
Article
Robust Federated Learning for Mitigating Advanced Persistent Threats in Cyber-Physical Systems
by Ehsan Hallaji, Roozbeh Razavi-Far and Mehrdad Saif
Appl. Sci. 2024, 14(19), 8840; https://doi.org/10.3390/app14198840 - 1 Oct 2024
Viewed by 1487
Abstract
Malware triage is essential for the security of cyber-physical systems, particularly against Advanced Persistent Threats (APTs). Proper data for this task, however, are hard to come by, as organizations are often reluctant to share their network data due to security concerns. To tackle this issue, this paper presents a secure and distributed framework for the collaborative training of a global model for APT triage without compromising privacy. Using this framework, organizations can share knowledge of APTs without disclosing private data. Moreover, the proposed design employs robust aggregation protocols to safeguard the global model against potential adversaries. The proposed framework is evaluated using real-world data with 15 different APT mechanisms. To make the simulations more challenging, we assume that edge nodes have partial knowledge of APTs. The obtained results demonstrate that participants in the proposed framework can privately share their knowledge, resulting in a robust global model that accurately detects APTs with significant improvement across different model architectures. Under optimal conditions, the designed framework detects almost all APT scenarios with an accuracy of over 90 percent.
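
The abstract above centers on aggregating client updates so that a small number of compromised participants cannot poison the shared model. The sketch below is a minimal illustration under assumed details, not the paper's protocol: the client updates are synthetic, and a coordinate-wise median stands in for the robust aggregation rule, showing how it limits the influence of a single adversarial update compared with plain federated averaging.

```python
# Minimal sketch (not the paper's protocol): robust vs. plain aggregation of
# client updates in federated learning. All updates below are synthetic.
import numpy as np

def fed_avg(updates: np.ndarray) -> np.ndarray:
    """Plain FedAvg: element-wise mean of the client weight updates."""
    return updates.mean(axis=0)

def robust_aggregate(updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median, one common Byzantine-robust aggregation rule."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(loc=0.1, scale=0.02, size=(9, 4))  # 9 honest clients, 4 parameters each
poisoned = np.full((1, 4), 10.0)                        # 1 adversarial client with an outsized update
updates = np.vstack([honest, poisoned])

print("FedAvg :", np.round(fed_avg(updates), 3))           # pulled strongly toward the attacker
print("Median :", np.round(robust_aggregate(updates), 3))  # stays close to the honest updates
```

Because the server only ever sees model updates, participants never exchange raw network data; the robustness of the aggregation rule is what keeps a poisoned update from dominating the global model.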
