Security and Privacy for Artificial Intelligence: Opportunities and Challenges

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 February 2024) | Viewed by 9041

Special Issue Editor


Dr. Morteza Biglari-Abhari
Guest Editor
Department of Electrical, Computer, and Software Engineering, The University of Auckland, Auckland 1010, New Zealand
Interests: embedded computer vision systems; secure computer architecture; hardware/software co-design for artificial intelligence

Special Issue Information

Dear Colleagues,

There has recently been growing interest in employing Artificial Intelligence (AI) in applications such as autonomous vehicles, industrial robotics, medical devices, and smart home systems, which are required to guarantee reliability, energy efficiency, time predictability, and high accuracy. Because such applications process sensitive data and are connected to the Internet, security and privacy have become critical concerns. One important aspect of designing secure AI-based systems, especially on edge devices, is investigating the energy consumption overhead incurred while achieving the required level of processing accuracy and performance.

The aim of this Special Issue is to collect high-quality submissions on research that avoids or mitigates security vulnerabilities and achieves privacy preservation for AI applications, considering design at different levels of abstraction, from system-level design to the micro-architecture level and hardware/software co-design. In addition, research outcomes that identify critical challenges and suggest further research opportunities are also welcome.

Dr. Morteza Biglari-Abhari
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • privacy preserving
  • secure processing architectures
  • TinyML
  • hardware/software co-design

Published Papers (3 papers)

Research

26 pages, 911 KiB  
Article
Unveiling the Dark Side of ChatGPT: Exploring Cyberattacks and Enhancing User Awareness
by Moatsum Alawida, Bayan Abu Shawar, Oludare Isaac Abiodun, Abid Mehmood, Abiodun Esther Omolara and Ahmad K. Al Hwaitat
Information 2024, 15(1), 27; https://doi.org/10.3390/info15010027 - 2 Jan 2024
Cited by 1 | Viewed by 4571
Abstract
The Chat Generative Pre-training Transformer (GPT), also known as ChatGPT, is a powerful generative AI model that can simulate human-like dialogues across a variety of domains. However, this popularity has attracted the attention of malicious actors who exploit ChatGPT to launch cyberattacks. This paper examines the tactics that adversaries use to leverage ChatGPT in a variety of cyberattacks. Attackers pose as regular users and manipulate ChatGPT’s vulnerability to malicious interactions, particularly in the context of cyber assault. The paper presents illustrative examples of cyberattacks that are possible with ChatGPT and discusses the realm of ChatGPT-fueled cybersecurity threats. The paper also investigates the extent of user awareness of the relationship between ChatGPT and cyberattacks. A survey of 253 participants was conducted, and their responses were measured on a three-point Likert scale. The results provide a comprehensive understanding of how ChatGPT can be used to improve business processes and identify areas for improvement. Over 80% of the participants agreed that cyber criminals use ChatGPT for malicious purposes. This finding underscores the importance of improving the security of this novel model. Organizations must take steps to protect their computational infrastructure. This analysis also highlights opportunities for streamlining processes, improving service quality, and increasing efficiency. Finally, the paper provides recommendations for using ChatGPT in a secure manner, outlining ways to mitigate potential cyberattacks and strengthen defenses against adversaries.

20 pages, 4300 KiB  
Article
AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems
by Amira Guesmi, Muhammad Abdullah Hanif and Muhammad Shafique
Information 2023, 14(12), 634; https://doi.org/10.3390/info14120634 - 28 Nov 2023
Cited by 1 | Viewed by 1501
Abstract
Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are being used to acquire information about the surroundings and identify obstacles. Hence, accurate detection and classification are essential to reach appropriate decisions and take appropriate and safe actions at all times. Current studies have demonstrated that “printed adversarial attacks”, known as physical adversarial attacks, can successfully mislead perception models such as object detectors and image classifiers. However, most of these physical attacks are based on noticeable and eye-catching patterns for generated perturbations making them identifiable/detectable by the human eye, in-field tests, or in test drives. In this paper, we propose a camera-based inconspicuous adversarial attack (AdvRain) capable of fooling camera-based perception systems over all objects of the same class. Unlike mask-based FakeWeather attacks that require access to the underlying computing hardware or image memory, our attack is based on emulating the effects of a natural weather condition (i.e., Raindrops) that can be printed on a translucent sticker, which is externally placed over the lens of a camera whenever an adversary plans to trigger an attack. Note, such perturbations are still inconspicuous in real-world deployments and their presence goes unnoticed due to their association with a natural phenomenon. To accomplish this, we develop an iterative process based on performing a random search aiming to identify critical positions to make sure that the performed transformation is adversarial for a target classifier. Our transformation is based on blurring predefined parts of the captured image corresponding to the areas covered by the raindrop. We achieve a drop in average model accuracy of more than 45% and 40% on VGG19 for ImageNet dataset and Resnet34 for Caltech-101 dataset, respectively, using only 20 raindrops.
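
Editorial note: as a rough illustration of the attack loop described in this abstract, the Python sketch below blurs circular "raindrop" regions of an image and uses greedy random search to find drop positions that reduce a classifier's confidence in the true class. The model choice (VGG-19), drop radius, blur strength, and search budget are illustrative assumptions, not the authors' exact setup.

import torch
import torchvision.transforms.functional as TF
from torchvision.models import vgg19, VGG19_Weights

# Pretrained ImageNet classifier used as the target model (illustrative choice).
weights = VGG19_Weights.IMAGENET1K_V1
model = vgg19(weights=weights).eval()
preprocess = weights.transforms()

def apply_raindrops(img, centers, radius=12, sigma=4.0):
    # img: float tensor (C, H, W) in [0, 1]; blur circular patches at `centers`.
    blurred = TF.gaussian_blur(img, kernel_size=2 * radius + 1, sigma=sigma)
    _, h, w = img.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    out = img.clone()
    for cy, cx in centers:
        mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
        out[:, mask] = blurred[:, mask]
    return out

@torch.no_grad()
def true_class_confidence(img, centers, label):
    # Softmax probability the model assigns to the true label after the blur.
    x = preprocess(apply_raindrops(img, centers)).unsqueeze(0)
    return torch.softmax(model(x), dim=1)[0, label].item()

@torch.no_grad()
def random_search(img, label, n_drops=20, iters=200):
    # Greedy random search: move one drop at a time and keep the move only
    # if it lowers the model's confidence in the true label.
    _, h, w = img.shape
    rand_pos = lambda: (int(torch.randint(h, (1,))), int(torch.randint(w, (1,))))
    best = [rand_pos() for _ in range(n_drops)]
    best_conf = true_class_confidence(img, best, label)
    for _ in range(iters):
        cand = list(best)
        cand[int(torch.randint(n_drops, (1,)))] = rand_pos()
        conf = true_class_confidence(img, cand, label)
        if conf < best_conf:
            best, best_conf = cand, conf
    return best, best_conf

The returned drop layout only serves to show the structure of the search; in the paper the corresponding pattern is realized physically as a translucent sticker placed over the camera lens.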

18 pages, 1469 KiB  
Article
A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks
by Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina and Muhammad Shafique
Information 2023, 14(10), 537; https://doi.org/10.3390/info14100537 - 1 Oct 2023
Viewed by 1428
Abstract
Machine learning (ML) is widely used today, especially through deep neural networks (DNNs); however, increasing computational load and resource requirements have led to cloud-based solutions. To address this problem, a new generation of networks has emerged called spiking neural networks (SNNs), which mimic the behavior of the human brain to improve efficiency and reduce energy consumption. These networks often process large amounts of sensitive information, such as confidential data, and thus privacy issues arise. Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting them. This research compares traditional DNNs and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme. The LeNet-5 and AlexNet models, widely-used convolutional architectures, are used for both DNN and SNN models based on their respective architectures, and the networks are trained and compared using the FashionMNIST dataset. The results show that SNNs using HE achieve up to 40% higher accuracy than DNNs for low values of the plaintext modulus t, although their execution time is longer due to their time-coding nature with multiple time steps.
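
Editorial note: for readers unfamiliar with BFV, the short Python sketch below shows the homomorphic property such frameworks rely on, namely that integer arithmetic can be evaluated directly on ciphertexts. It uses the TenSEAL library; the parameters (poly_modulus_degree, plain_modulus) and the toy activations, weights, and bias are illustrative assumptions, not the configuration used in the paper.

import tenseal as ts

# BFV context with illustrative parameters; plain_modulus plays the role of
# the plaintext modulus t discussed in the abstract.
context = ts.context(ts.SCHEME_TYPE.BFV,
                     poly_modulus_degree=4096,
                     plain_modulus=1032193)

# A toy quantized activation vector, encrypted on the client side.
activations = [3, 1, 4, 1, 5]
enc_activations = ts.bfv_vector(context, activations)

# The (untrusted) server applies plaintext integer weights and a bias
# element-wise without ever decrypting the data.
weights = [2, 2, 2, 2, 2]
bias = [1, 1, 1, 1, 1]
enc_result = enc_activations * weights + bias

# Only the secret-key holder can decrypt the result.
print(enc_result.decrypt())  # -> [7, 3, 9, 3, 11]

Encrypted inference over a full LeNet-5 or AlexNet, as in the paper, chains many such operations and must manage noise growth and the choice of t, which is what drives the accuracy and runtime differences reported in the abstract.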
