Article

Property Checking with Interpretable Error Characterization for Recurrent Neural Networks

by Franz Mayr *,†, Sergio Yovine *,† and Ramiro Visca
Facultad de Ingeniería, Universidad ORT Uruguay, 11100 Montevideo, Uruguay
* Authors to whom correspondence should be addressed.
† Equal contribution.
Academic Editor: Yoichi Hayashi
Mach. Learn. Knowl. Extr. 2021, 3(1), 205-227; https://doi.org/10.3390/make3010010
Received: 16 December 2020 / Revised: 25 January 2021 / Accepted: 3 February 2021 / Published: 12 February 2021
(This article belongs to the Special Issue Selected Papers from CD-MAKE 2020 and ARES 2020)
This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On the one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability that the language of the black-box is nonempty. This implies that the property holds on the RNN with probabilistic guarantees. On the other hand, if the DFA is nonempty, it is certain that the language of the black-box is nonempty, which entails that the RNN does not satisfy the requirement. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Moreover, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
Keywords: recurrent neural networks; probably approximately correct learning; black-box explainability
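
The emptiness argument in the abstract can be illustrated with a minimal Python sketch. This is not the authors' tool, and all names here are hypothetical: it shows only the sampling-based emptiness test behind a PAC-style bound, whereas the paper's method additionally learns a DFA through queries to the black-box, which is what yields the interpretable characterization of the error rather than a single witness word.

import math
import random
from typing import Callable, List, Optional

Word = List[str]

def pac_sample_size(epsilon: float, delta: float) -> int:
    # If all m i.i.d. samples fall outside the target language L, then
    # (1 - epsilon)^m <= exp(-epsilon * m) <= delta, so with confidence
    # at least 1 - delta the probability mass of L is at most epsilon.
    return math.ceil(math.log(1.0 / delta) / epsilon)

def random_word(alphabet: List[str], max_len: int) -> Word:
    # Sampling distribution over words; the guarantee is relative to it.
    n = random.randint(0, max_len)
    return [random.choice(alphabet) for _ in range(n)]

def check_property(rnn_accepts: Callable[[Word], bool],
                   property_holds: Callable[[Word], bool],
                   alphabet: List[str],
                   epsilon: float = 0.01,
                   delta: float = 0.01,
                   max_len: int = 20) -> Optional[Word]:
    # Black-box composition: the "error language" is
    # L = { w : rnn_accepts(w) and not property_holds(w) }.
    for _ in range(pac_sample_size(epsilon, delta)):
        w = random_word(alphabet, max_len)
        if rnn_accepts(w) and not property_holds(w):
            return w  # certain witness: the RNN accepts a property-violating word
    return None  # L is probably empty: P(L) <= epsilon with confidence >= 1 - delta

# Toy usage: a stand-in "RNN" accepting words with an even number of a's,
# checked against the requirement that no accepted word contains "bb".
rnn = lambda w: w.count("a") % 2 == 0
prop = lambda w: "bb" not in "".join(w)
print(check_property(rnn, prop, alphabet=["a", "b"]))

As in the abstract, the two outcomes are asymmetric: a returned word certainly refutes the requirement, while a None answer only bounds the probability of the error language being nonempty as a function of the parameters epsilon and delta.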

MDPI and ACS Style

Mayr, F.; Yovine, S.; Visca, R. Property Checking with Interpretable Error Characterization for Recurrent Neural Networks. Mach. Learn. Knowl. Extr. 2021, 3, 205-227. https://doi.org/10.3390/make3010010

AMA Style

Mayr F, Yovine S, Visca R. Property Checking with Interpretable Error Characterization for Recurrent Neural Networks. Machine Learning and Knowledge Extraction. 2021; 3(1):205-227. https://doi.org/10.3390/make3010010

Chicago/Turabian Style

Mayr, Franz, Sergio Yovine, and Ramiro Visca. 2021. "Property Checking with Interpretable Error Characterization for Recurrent Neural Networks" Machine Learning and Knowledge Extraction 3, no. 1: 205-227. https://doi.org/10.3390/make3010010
