Advancements in Adversarial Machine Learning: Techniques, Applications and Security
A special issue of Applied Sciences (ISSN 2076-3417).
Deadline for manuscript submissions: closed (20 December 2023)
Special Issue Editor
Dr. Kishor Datta Gupta
Interests: adversarial ML; algorithm bias; computer vision; Troj-AI
Special Issue Information
Dear Colleagues,
In recent years, machine learning (ML) models have made significant strides in various domains, revolutionizing industries and enabling groundbreaking applications. However, with the growing reliance on these models, concerns surrounding their vulnerability to adversarial attacks have also intensified. Adversarial attacks refer to malicious attempts to exploit vulnerabilities in ML models, compromising their integrity, reliability, and security. This Special Issue aims to explore the emerging frontiers in adversarial attacks, focusing on topics such as poisoning attacks, evasion attacks, trojans, model manipulation, backdoors, hardware-based attacks (e.g., bit-flip), and other related techniques.
Scope and Objectives:
This Special Issue seeks to provide a comprehensive platform for researchers, practitioners, and experts to share their insights, discoveries, and innovations in the field of adversarial attacks on ML models. We encourage submissions that cover a wide range of topics, including but not limited to:
Poisoning Attacks: techniques that manipulate the training data, for example by inserting malicious samples, leading to biased or compromised model behavior.
Evasion Attacks: adversarial samples that are carefully crafted to deceive ML models during the inference phase, causing misclassification or incorrect outputs (see the illustrative sketch after this list).
Trojans: triggers or patterns covertly implanted within ML models that specific inputs can activate to induce malicious behavior or compromise model integrity.
Model Manipulation: techniques that exploit vulnerabilities in the model architecture or optimization process to manipulate model outputs or induce specific behaviors.
Backdoor Attacks: techniques that introduce hidden functionality or behavior in ML models that can be triggered by specific inputs or conditions, often without detection.
Hardware-Based Attacks: attacks that target the underlying hardware components of ML systems, including bit-flip attacks, fault injections, or other physical attacks to compromise model performance or security.
Defenses and Countermeasures: novel defense mechanisms, detection techniques, and mitigation strategies to enhance the resilience of ML models against adversarial attacks.
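For readers less familiar with these attack families, the brief sketch below illustrates two of them: an evasion attack via the fast gradient sign method (FGSM) and a toy backdoor trigger of the kind used in data poisoning. It assumes a differentiable PyTorch image classifier with inputs scaled to [0, 1]; the function names, the `epsilon` step size, and the corner-patch trigger are illustrative assumptions rather than a prescribed method.

```python
# Minimal, illustrative sketch of two attack families (assumes PyTorch;
# `model`, `x`, `y` are a differentiable classifier, an input batch in
# [0, 1], and its labels -- all names here are hypothetical).
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Evasion attack: craft adversarial examples with the fast gradient
    sign method by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input element in the direction that most increases the
    # loss, then clip back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def stamp_backdoor_trigger(x, patch_value=1.0, size=3):
    """Toy poisoning/backdoor trigger: overwrite a small corner patch.
    Training on triggered samples relabelled to an attacker-chosen class
    plants a backdoor that the same patch activates at inference time."""
    x_poisoned = x.clone()
    x_poisoned[..., -size:, -size:] = patch_value  # bottom-right corner
    return x_poisoned
```

In a poisoning or trojan scenario, an attacker would typically stamp such a trigger onto a small fraction of training images and relabel them to a target class; submissions on defenses might, for instance, study how detection performance varies with trigger size and poisoning rate.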
Dr. Kishor Datta Gupta
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.