
Adversarial Machine Learning: Theories and Applications

Topic Information

Dear Colleagues,

Adversarial Machine Learning has emerged as a critical and rapidly growing research area at the intersection of machine learning, cybersecurity, and artificial intelligence. It studies the vulnerabilities of machine learning models to adversarial attacks and the defenses against them. In recent years, machine learning has achieved remarkable success in applications such as computer vision, natural language processing, speech recognition, and autonomous systems. However, as these models are increasingly deployed in safety-critical systems, concern is growing about their susceptibility to adversarial attacks.

Adversarial attacks craft small input perturbations designed to deceive machine learning models into making incorrect predictions or decisions. These perturbations are often imperceptible to human observers, yet they can cause significant changes in model outputs. Such attacks can lead to severe consequences, including misclassified images, manipulated data, and compromised model integrity. The vulnerability of machine learning models to adversarial attacks therefore raises fundamental questions about their robustness, reliability, and safety in real-world scenarios, and the development of intelligent defense techniques is crucial to safeguarding the integrity and reliability of deployed models.

This Topic aims to explore recent advancements and applications of Adversarial Machine Learning. We invite researchers to submit original works that shed light on the theories and practical applications of Adversarial Machine Learning, and we encourage submissions that contribute novel insights, methodologies, or empirical findings in this rapidly evolving field. The topics of interest include, but are not limited to, the following:

  • Interpretable/explainable adversarial machine learning
  • Adversarial attacks in computer vision and pattern recognition
  • Adversarial challenges in natural language processing
  • Adversarial scene understanding: object segmentation, motion segmentation, and visual tracking in video/image sequences by machine learning
  • Adversarial correspondence learning: enhancing robustness in image matching
  • Adversarial robustness in deep learning
  • Embedding adversarial learning
  • Violence/anomaly detection
  • Robustness estimation or benchmarking of machine learning models
  • Privacy and security concerns in adversarial machine learning
  • Real-world applications and case studies of adversarial machine learning
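To make the notion of an imperceptible-yet-effective perturbation concrete, the following is a minimal, self-contained sketch of the fast gradient sign method (FGSM), a classic attack in this area. It is applied here to a toy logistic-regression model with made-up weights and inputs; the function names and all numbers are illustrative, not drawn from any submission to this Topic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Return an FGSM adversarial example x' = x + eps * sign(dL/dx).

    For logistic regression with binary cross-entropy loss L, the
    gradient with respect to the input has the closed form
    (p - y_true) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # dL/dx in closed form
    return x + eps * np.sign(grad_x)   # per-coordinate step of at most eps

# Toy model and input (illustrative values only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0  # true label

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_adv = sigmoid(w @ x_adv + b)

# Each coordinate moves by at most eps, yet the predicted probability
# of the true class drops.
print(p_clean, p_adv)
```

Because the toy model is linear in its input, the single signed gradient step provably lowers the score of the true class; against deep networks the same one-step idea is only a first-order approximation, which is why stronger iterative attacks and the defenses solicited above exist.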

Dr. Feiran Huang
Dr. Shuyuan Lin
Dr. Xiaoming Zhang
Dr. Yang Lu
Topic Editors

Keywords

  • adversarial attacks
  • machine learning
  • robust estimation
  • computer vision
  • natural language processing
  • deep learning
  • privacy preservation
  • correspondence learning

Participating Journals

Applied Sciences
  • Open Access; launched in 2011; 82,922 articles
  • Impact Factor: 2.5; CiteScore: 5.5
  • Median time to first decision: 20 days
  • Highest JCR category ranking: Q2

Machine Learning and Knowledge Extraction
  • Open Access; launched in 2019; 600 articles
  • Impact Factor: 6.0; CiteScore: 9.9
  • Median time to first decision: 26 days
  • Highest JCR category ranking: Q1

Mathematics
  • Open Access; launched in 2013; 25,178 articles
  • Impact Factor: 2.2; CiteScore: 4.6
  • Median time to first decision: 18 days
  • Highest JCR category ranking: Q1

Remote Sensing
  • Open Access; launched in 2009; 40,042 articles
  • Impact Factor: 4.1; CiteScore: 8.6
  • Median time to first decision: 25 days
  • Highest JCR category ranking: Q1

Published Papers