Advance of Applied Statistics to White-Boxing AI in Engineering

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 1805

Special Issue Editor


Prof. Bruno Apolloni
Guest Editor
Department of Information, Università di Milano, Milan, Italy
Interests: machine learning; data analytics

Special Issue Information

Dear Colleagues,

The implementation of artificial intelligence brings broad benefits to engineering. Engineers may treat pre-built AI applications either as black boxes, to be fed with suitable input, or as components of their own design tools, explicitly integrated into larger projects. The second option requires white-boxing these applications in terms of their statistical rationale, viewed from the current perspective of machine learning methods, so as to restore a good level of knowledge symmetry between the designer and the implementer of the applications. This Special Issue aims to collect papers reflecting advanced statistical practice and research that convert AI engineering applications from third-party facilities into regular tools of the design arsenal with which engineers are called to renew the industrial world.

Prof. Bruno Apolloni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • white-boxing
  • statistical rationale
  • machine learning
  • AI engineering applications

Published Papers (1 paper)


Research

23 pages, 501 KiB  
Article
Improving A/B Testing on the Basis of Possibilistic Reward Methods: A Numerical Analysis
by Miguel Martín, Antonio Jiménez-Martín, Alfonso Mateos and Josefa Z. Hernández
Symmetry 2021, 13(11), 2175; https://doi.org/10.3390/sym13112175 - 12 Nov 2021
Cited by 1 | Viewed by 1343
Abstract
A/B testing is used in digital contexts both to offer a more personalized service and to optimize the e-commerce purchasing process. A personalized service provides customers with the fastest possible access to the contents that they are most likely to use. An optimized e-commerce purchasing process reduces customer effort during online purchasing and assures that the largest possible number of customers place their order. The most widespread A/B testing method is to implement the equivalent of RCT (randomized controlled trials). Recently, however, some companies and solutions have addressed this experimentation process as a multi-armed bandit (MAB). This is known in the A/B testing market as dynamic traffic distribution. A complementary technique used to optimize the performance of A/B testing is to improve the experiment stopping criterion. In this paper, we propose an adaptation of A/B testing to account for possibilistic reward (PR) methods, together with the definition of a new stopping criterion also based on PR methods to be used for both classical A/B testing and A/B testing based on MAB algorithms. A comparative numerical analysis based on the simulation of real scenarios is used to analyze the performance of the proposed adaptations in both Bernoulli and non-Bernoulli environments. In this analysis, we show that the possibilistic reward method PR3 produced the lowest mean cumulative regret in non-Bernoulli environments, which proved to have a high confidence level and be highly stable as demonstrated by low standard deviation measures. PR3 behaves exactly the same as Thompson sampling in Bernoulli environments. The conclusion is that PR3 can be used efficiently in both environments in combination with the value remaining stopping criterion in Bernoulli environments and the PR3 bounds stopping criterion for non-Bernoulli environments. Full article
(This article belongs to the Special Issue Advance of Applied Statistics to White-Boxing AI in Engineering)
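As background to the setting the paper builds on, the sketch below shows classical A/B testing recast as a Bernoulli multi-armed bandit, with Thompson sampling for dynamic traffic distribution and a Monte Carlo "value remaining" stopping criterion. It is a minimal illustrative sketch, not the authors' method: the variant names, conversion rates, thresholds, and the value_remaining helper are assumptions introduced here, and the possibilistic reward (PR) machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true conversion rates of the two variants (unknown to the experimenter).
TRUE_RATES = {"A": 0.040, "B": 0.048}

# Beta(1, 1) priors on each variant's conversion rate.
successes = {"A": 1, "B": 1}
failures = {"A": 1, "B": 1}

def value_remaining(n_samples: int = 5000, quantile: float = 0.95) -> float:
    """Monte Carlo estimate of the 'value remaining' in the experiment:
    the plausible relative lift of switching away from the current leader."""
    draws = {v: rng.beta(successes[v], failures[v], n_samples) for v in TRUE_RATES}
    mat = np.column_stack(list(draws.values()))          # one column per variant
    best = mat.max(axis=1)                                # best variant in each posterior draw
    leader = mat[:, np.argmax(mat.mean(axis=0))]          # variant with highest posterior mean
    lift = (best - leader) / leader
    return float(np.quantile(lift, quantile))

for visitor in range(1, 100_001):
    # Thompson sampling: draw one sample from each posterior, route the visitor to the winner.
    theta = {v: rng.beta(successes[v], failures[v]) for v in TRUE_RATES}
    chosen = max(theta, key=theta.get)

    # Simulate the visitor's response under the chosen variant.
    converted = rng.random() < TRUE_RATES[chosen]
    successes[chosen] += converted
    failures[chosen] += 1 - converted

    # Stopping rule: stop when at most a 1% lift plausibly remains (checked periodically).
    if visitor % 1000 == 0 and value_remaining() < 0.01:
        print(f"Stopped after {visitor} visitors; posterior means:",
              {v: successes[v] / (successes[v] + failures[v]) for v in TRUE_RATES})
        break
```

In this generic baseline, the paper's PR3 method plays the role of the posterior draws (the abstract notes it behaves exactly like Thompson sampling in Bernoulli environments), while its PR3-bounds criterion replaces the value-remaining rule in non-Bernoulli environments.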
