Securing Artificial Intelligence Against Attacks

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: 31 May 2026

Special Issue Editors


Prof. Dr. Peter Kieseberg
Guest Editor
Institute of IT Security Research, St. Pölten University of Applied Sciences, 3100 St. Pölten, Austria
Interests: IT security; privacy; interactive machine learning

Prof. Dr. Jungwoo Ryoo
Guest Editor
College of Information Sciences and Technology (IST), Penn State University, State College, PA 16801, USA
Interests: software security; network security; software engineering

Special Issue Information

Dear Colleagues,

In recent years, AI has begun to permeate many fields and industries with affordable and scalable services. Expectations are high that this trend will continue, providing new AI-based services to experts and the general public alike, while also transforming many classical approaches that can be greatly enhanced by AI.

This, of course, opens up new attack surfaces and attack vectors, with AI serving both as a tool for attacks and as a target. The latter is especially important, as currently prominent AI technologies such as LLMs and DNNs lack explainability and thus also transparency. Furthermore, many classical security approaches such as penetration testing need to be adapted to this situation: for many currently employed algorithms, for example, translating the discovery of a security-relevant error in an AI system into a fix is generally not possible. In addition, complex systems like neural networks can increase the attack surface quite drastically, requiring special treatment not only from a technical perspective but also from a risk management one.

This Special Issue is dedicated to research results in the area of security for AI systems. It calls for cutting-edge contributions, from fundamental theoretical research to its application in practice. Topics of interest include, but are not limited to, the following:

  • Attacks against AI systems, e.g., model and data poisoning and model extraction attacks (see the illustrative sketch after this list);
  • Privacy and AI;
  • Trustworthy AI and trust in data-driven systems;
  • Controllable AI;
  • Secure and trustworthy data provisioning;
  • Resilient data-driven systems;
  • AI risk analysis and AI risk management;
  • Implementing security requirements from the EU AI Act;
  • Security testing of AI systems, e.g., penetration testing;
  • Proofs of concept for secure AI systems.
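
As a brief illustration of the first topic above, the following minimal Python sketch (not part of the call itself; the scikit-learn setup and all names are illustrative assumptions) shows how even naive label-flipping data poisoning degrades a classifier trained on the tampered data:

    # Minimal sketch (assumption: scikit-learn and NumPy are available).
    # A fraction of training labels is flipped by an attacker; the model is
    # retrained on the tampered data and evaluated on an untouched test set.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic binary task standing in for a real training corpus.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0
    )

    def accuracy_after_poisoning(flip_fraction: float) -> float:
        """Flip labels of a random training subset, retrain, return test accuracy."""
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)

    for frac in (0.0, 0.1, 0.3, 0.45):
        print(f"flipped {frac:.0%} of training labels -> "
              f"test accuracy {accuracy_after_poisoning(frac):.3f}")

Flipping a growing fraction of training labels typically drives test accuracy toward chance level; detecting and withstanding such tampering is precisely the kind of problem addressed by the topics on resilient data-driven systems and secure data provisioning.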

Prof. Dr. Peter Kieseberg
Prof. Dr. Jungwoo Ryoo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • trustworthy AI
  • AI attacks
  • attacking AI
  • secure AI
  • controllable AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

27 pages, 548 KB  
Article
Social Engineering with AI
by Alexandru-Raul Matecas, Peter Kieseberg and Simon Tjoa
Future Internet 2025, 17(11), 515; https://doi.org/10.3390/fi17110515 - 12 Nov 2025
Abstract
The new availability of powerful Artificial Intelligence (AI) as an everyday copilot has instigated a new wave of attack techniques, especially in the area of Social Engineering (SE). The possibility of generating a multitude of different templates within seconds in order to carry out an SE attack lowers the entry barrier for potential threat actors. Still, the question remains whether this can be done using openly available tools without specialized expert skill sets on the attacker side, and how these tools compare to each other. This paper conducts three experiments based on a blueprint from a real-world CFO fraud attack, which utilized two of the most widely used social engineering attacks, phishing and vishing, and investigates the success rate of these SE attacks across different available LLMs. The third experiment centers on the training of an AI-powered chatbot to act as a social engineer and gather sensitive information from interacting users. As this work focuses on the offensive side of SE, all conducted experiments return promising results, demonstrating not only the ability and effectiveness of AI technology to act unethically, but also that little to no restrictions are imposed on it. Based on a reflection on the findings and potential countermeasures available, this research provides a deeper understanding of the development and deployment of AI-enhanced SE attacks, further highlighting potential dangers, as well as mitigation methods against this “upgraded” type of threat.
(This article belongs to the Special Issue Securing Artificial Intelligence Against Attacks)
