Black-Box Algorithms and Their Applications

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (1 November 2022) | Viewed by 7177

Special Issue Editor

Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Interests: machine learning; optimization; adversarial robustness; trustworthy artificial intelligence

Special Issue Information

Dear Colleagues,

Black-box algorithms require little to no information from the system under study and are often versatile and general enough to be applied to a wide range of problems across domains. The most notable examples include black-box optimization algorithms, such as zeroth-order and Bayesian optimization methods, which do not require derivatives or other explicit information about the function being optimized. Such algorithms have become an asset in fields such as machine learning, artificial intelligence, and cybersecurity. The scientific community can greatly benefit from the development of more efficient and scalable black-box algorithms.
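For readers new to the area, the following minimal sketch (illustrative only, not tied to any paper in this issue) shows the basic idea behind two-point zeroth-order optimization: the objective is treated as a black box, and a descent direction is estimated purely from paired function evaluations along random directions; the objective, step sizes, and iteration budget here are arbitrary placeholders.

```python
import numpy as np

def zeroth_order_minimize(f, x0, step=0.05, mu=1e-2, iters=2000, seed=0):
    """Minimize f using only function evaluations (no gradients).

    At each step, the directional derivative along a random Gaussian
    direction u is estimated with a two-point finite difference and used
    as a stochastic gradient surrogate.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = rng.standard_normal(x.shape)                      # random search direction
        g = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u  # gradient surrogate
        x = x - step * g                                      # descent-style update
    return x

# Toy usage: minimize a quadratic treated as a black box (optimum at 3.0).
if __name__ == "__main__":
    f = lambda x: float(np.sum((x - 3.0) ** 2))
    print(zeroth_order_minimize(f, np.zeros(5)))
```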

We cordially invite you to submit high-quality research and review papers for this Special Issue on “Black-box Algorithms and Their Applications”, with subjects ranging from theories to applications of black-box algorithms. Submitted articles may focus on recent advances in black-box algorithms, including but not limited to:

  • The convergence, sample efficiency, acceleration, or other theoretical perspectives of black-box algorithms;
  • Novel applications of black-box algorithms, such as applications to machine learning, artificial intelligence, control theory, cybersecurity, and intelligent systems;
  • Software toolkits and benchmarks of black-box algorithms.

Dr. Huan Zhang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • black-box optimization
  • black-box function
  • zeroth-order optimization
  • Bayesian optimization
  • derivative-free optimization
  • stochastic approximation
  • black-box importance sampling
  • black-box robust optimization
  • black-box algorithms for machine learning
  • black-box algorithms for artificial intelligence
  • black-box algorithms for cybersecurity
  • black-box optimizers
  • black-box systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

15 pages, 607 KiB  
Article
LTU Attacker for Membership Inference
by Joseph Pedersen, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu and Isabelle Guyon
Algorithms 2022, 15(7), 254; https://doi.org/10.3390/a15070254 - 20 Jul 2022
Cited by 1 | Viewed by 2108
Abstract
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. We prove that, under certain conditions, even a “naïve” LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies, leading to concrete necessary conditions to protect privacy, including: preventing over-fitting and adding some amount of randomness. This attack is straightforward to implement against any model trainer, and we demonstrate its performance against MemGuard. However, we also show that such a naïve LTU Attacker can fail to attack the privacy of models known to be vulnerable in the literature, demonstrating that knowledge must be complemented with strong attack strategies to turn the LTU Attacker into a powerful means of evaluating privacy. The LTU Attacker can incorporate any existing attack strategy to compute individual privacy scores for each training sample. Our experiments on the QMNIST, CIFAR-10, and Location-30 datasets validate our theoretical results and confirm the roles of over-fitting prevention and randomness in the algorithms to protect against privacy attacks.
(This article belongs to the Special Issue Black-Box Algorithms and Their Applications)
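As a loose, hypothetical illustration of the kind of “naïve” strategy discussed in the abstract (not the authors' implementation), the sketch below guesses that, of two unlabeled samples, the one on which the released model incurs the smaller loss is the training member; `loss_fn`, `sample_a`, and `sample_b` are placeholders for the Defender model's per-sample loss and the two held-out samples.

```python
import numpy as np

def naive_ltu_guess(loss_fn, sample_a, sample_b):
    """Naive "lower loss => member" rule (illustrative only).

    Given two unlabeled samples -- one drawn from the Defender's training
    data and one from the Reserved data -- guess that the sample on which
    the released model incurs the smaller loss is the training member.
    Over-fitted models make this guess accurate; randomized training and
    regularization are what defeat it.
    """
    return "a" if loss_fn(sample_a) < loss_fn(sample_b) else "b"

# Toy usage with a dummy per-sample loss standing in for the Defender model.
if __name__ == "__main__":
    dummy_loss = lambda s: float(np.sum((np.asarray(s) - 1.0) ** 2))
    print(naive_ltu_guess(dummy_loss, [1.0, 1.1], [0.2, 2.5]))  # prints "a"
```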

12 pages, 2376 KiB  
Article
Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification
by Kazuki Koga and Kazuhiro Takemoto
Algorithms 2022, 15(5), 144; https://doi.org/10.3390/a15050144 - 22 Apr 2022
Cited by 6 | Viewed by 3528
Abstract
Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called universal adversarial perturbation (UAP), are a realistic security threat to the practical application of a DNN for medical imaging. Given that computer-based systems are generally operated under a black-box condition in which only input queries are allowed and outputs are accessible, the impact of UAPs seems to be limited because widely used algorithms for generating UAPs are limited to white-box conditions in which adversaries can access model parameters. Nevertheless, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs to demonstrate that UAPs are easily generatable using a relatively small dataset under black-box conditions with representative DNN-based medical image classifications. Black-box UAPs can be used to conduct both nontargeted and targeted attacks. Overall, the black-box UAPs showed high attack success rates (40–90%). The vulnerability of the black-box UAPs was observed in several model architectures. The results indicate that adversaries can also generate UAPs through a simple procedure under the black-box condition to foil or control diagnostic medical imaging systems based on DNNs, and that UAPs are a more serious security threat.
(This article belongs to the Special Issue Black-Box Algorithms and Their Applications)
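As a rough, hypothetical sketch of an output-only hill-climbing search of the kind described in the abstract (not the authors' exact algorithm), the snippet below perturbs a candidate UAP with small random changes and keeps a change only if it lowers the classifier's accuracy on a reference batch, while respecting an L-infinity budget; `predict`, `images`, and `labels` are placeholders for the black-box model and its query data.

```python
import numpy as np

def black_box_uap(predict, images, labels, eps=0.05, iters=1000, seed=0):
    """Hill-climbing search for a non-targeted universal perturbation.

    `predict` is treated as a black box mapping a batch of images (values
    in [0, 1]) to predicted labels; only its outputs are used. A small
    random change to the perturbation is kept only if it lowers accuracy
    on the reference batch, and the perturbation is clipped to an
    L-infinity budget of eps.
    """
    rng = np.random.default_rng(seed)
    uap = np.zeros_like(images[0])
    best_acc = np.mean(predict(np.clip(images + uap, 0.0, 1.0)) == labels)
    for _ in range(iters):
        candidate = np.clip(uap + rng.uniform(-0.01, 0.01, size=uap.shape), -eps, eps)
        acc = np.mean(predict(np.clip(images + candidate, 0.0, 1.0)) == labels)
        if acc < best_acc:          # keep only changes that hurt accuracy
            uap, best_acc = candidate, acc
    return uap
```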
