High-Dimensional Separability for One- and Few-Shot Learning

This work is driven by a practical question: corrections of Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modification of a legacy AI system, we propose special ‘external’ devices, correctors. Elementary correctors consist of two parts: a classifier that separates the situations with a high risk of error from the situations in which the legacy AI system works well, and a new decision rule that should be recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and outputs. If the intrinsic dimensionality of data is high enough, then the classifiers for correction of a small number of errors can be very simple. According to the blessing of dimensionality effects, even simple and robust Fisher’s discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology are demonstrated by examples of correcting errors and learning new classes of objects by a deep convolutional neural network on the CIFAR-10 dataset.
The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and the new domain adaptation PCA.


AI Errors and Correctors
The main driver of our research is the problem of Artificial Intelligence (AI) errors and their correction: all AI systems sometimes make errors and will make errors in the future. These errors must be detected and corrected immediately and locally in the networks of AI systems. If we do not solve this problem, then a new AI winter will come. Recall that the previous AI winters came after the hype peaks of inflated expectations and bold advertising: the general overconfidence of experts was a typical symptom of these inflated expectations.

The hypothesis of i.i.d. data samples is very popular in machine learning theory. It means that there exists a probability measure on the data space and the data points are drawn from the space according to this measure independently [11]. It is worth mentioning that a data point for supervised learning includes both the input and the desired output, and the probability is defined on the input × output space. The existence and stationarity of a probability distribution in real life is a very strong hypothesis. To weaken this assumption, many auxiliary concepts have been developed, such as concept drift. Nevertheless, i.i.d. samples remain a central assumption of statistical learning theory: the dataset is presumed to be an i.i.d. random sample drawn from a probability distribution [12].
Fundamental origins of AI errors could be different. Of course, they include software errors, unexpected human behaviour, and non-intended use as well as many other possible reasons. Nevertheless, the universal cause of errors is uncertainty in training data and in training process. The real world possibilities are not covered by the dataset.
The mistakes should be corrected. The systematic retraining of a big AI system seems to be rarely possible (here and below, AI skill means the ability to correctly solve a group of similar tasks):
• To preserve existing skills we must use the full set of training data;
• This approach requires significant computational resources for each error;
• However, new errors may appear in the course of retraining;
• The preservation of existing skills is not guaranteed;
• The probability of damage to skills is a priori unknown.
Therefore, quick non-iterative methods which are free from the disadvantages listed above are required. This is the main challenge for the one- and few-shot learning methods.
To provide fast error correction, we must consider developing correctors, external devices that complement legacy Artificial Intelligence systems, diagnose the risk of error, and correct errors. The original AI system remains a part of the extended 'system + corrector' complex. Therefore, the correction is reversible, and the original system can always be extracted from the augmented AI complex. Correctors have two different functions: (1) they should recognise potential errors and (2) provide corrected outputs for situations with potential errors. The idealised scheme of a legacy AI system augmented with an elementary corrector is presented in Figure 2. Here, the legacy AI system is represented as a transformation that maps the input signals into internal signals and then into output signals: inputs → internal → outputs. The elementary corrector takes all these signals as inputs and makes a decision about correction (see Figure 2).

Figure 2. A scheme of the operation of an elementary corrector of legacy AI systems. The elementary corrector receives the input signals of the legacy AI system, the internal signals generated by this system in the decision-making process, and its output signals. The corrector then assesses the need for correction. The elementary corrector includes a binary classifier that separates situations with a high risk of error from normal functioning. If correction is required, the corrector sends a warning signal and a modified output for further use.
The universal part of the AI corrector is a classifier that should separate situations with erroneous behaviour from normal operation. It is a binary classifier for all types of AI. The generalisation ability of this classifier is its ability to recognise errors that it has never seen before. The training set for the corrector consists of a collection of situations with normal operation of the legacy AI system (the 'normal' class) and a set of labelled errors. The detection and labelling of errors for training correctors can be performed by various methods, which include human inspection, decisions of other AI systems or their committees, signals of success or failure from the outer world, and other possibilities that are outside the scope of our work.
We can usually expect that the normal class of error-free operations includes many more examples than the set of labelled errors. Moreover, even the situation with one newly labelled error is of considerable interest. All the stochastic separation theorems were invented to develop the one- or few-shot learning rules for the binary error/normal operation classifiers.
A specific component of the AI corrector is the modified decision rule (the 'correction' itself). Of course, the general theory and algorithms are focused on the universal part of the correctors. For many classical families of data distributions, it is proved that the well-known Fisher discriminant is a surprisingly powerful tool for constructing correctors if the dimension of the data space is sufficiently high (most results of this type are collected in [13]). This is proven for a wide class of distributions, including log-concave distributions, their convex combinations, and product distributions with bounded support.
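The one-shot construction of such a corrector can be sketched numerically. The toy example below uses synthetic data and a hypothetical threshold choice (not the tuning used in the experiments of this paper): a Fisher-type discriminant is built from a single labelled error against the mean of the 'normal' class in an approximately whitened feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: many 'normal' operation points and a single
# labelled error in a high-dimensional, approximately whitened space.
n_dim = 200
normal = rng.standard_normal((1000, n_dim))   # error-free situations
error = rng.standard_normal(n_dim) + 2.0      # one newly labelled error

# One-shot Fisher-type direction: from the mean of the normal class
# to the single error (covariance is close to identity after whitening).
w = error - normal.mean(axis=0)

# Illustrative threshold: midway between the error's projection and the
# largest projection of the training 'normal' points.
threshold = 0.5 * (w @ error + (normal @ w).max())

def needs_correction(x):
    """Flag a situation as a potential error (True) or as normal (False)."""
    return x @ w > threshold

print(bool(needs_correction(error)))                       # the labelled error is flagged
print(float(np.mean([needs_correction(x) for x in normal])))  # false alarm rate on the normal class
```

In high dimension the single error projects far from the bulk of the normal class along w, so this one linear functional separates it without any iterative training.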
In this article, we reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. Moreover, the very notion of a probability distribution in high dimensions may sometimes create more questions than answers. Therefore, after developing new stochastic separation theorems for data with fine-grained clusters, we present a possibility to substitute the probabilistic approach to the foundations of the theory by more robust methods of functional analysis, with the limit transition to infinite dimension.
The idea of the presence of fine-grained structures in data seems to be very natural and universal: the observable world consists of things. The data points represent situations. The qualitative difference between situations is the existence or absence of notable things in them.
Many approaches to machine learning are based on the correction of errors. A well-known example is the backpropagation of errors, from the classical perceptron algorithms [14] to modern deep learning [15]. The need for correction of AI errors has been discussed in the reinforcement learning literature. In the area of model-based reinforcement learning, the motivation stems from inevitable discrepancies between the models of environments used for training an agent and the reality this agent operates in. In order to address the problem, a meta-learning approach, Hallucinated Replay, was suggested in [16]. In this approach, the agent is trained to predict correct states of the real environment from states generated by the model [17]. Formal justifications and performance bounds for Hallucinated Replay were established in [18]. Notwithstanding these successful developments, we note that the settings to which such strategies apply are largely Markov Decision Processes. Their practical relevance is therefore constrained by the dimensionality of the system's state: the cost of exploring all states grows exponentially with dimension and, as a result, alternative approaches are needed. Most error correction algorithms use large training sets to prevent new errors from being created in situations where the system was operating normally. These algorithms are iterative in nature. On the contrary, the corrector technology in high dimension aims at non-iterative one- or few-shot error corrections.

One- and Few-Shot Learning
A set of labelled errors is needed for the creation of an AI corrector. If we have such a set, then the main problem is the fast non-iterative training of classifiers that separate situations with a high risk of error from situations in which the legacy AI system works well. Thus, the corrector problem includes the one- or few-shot learning problem, with one class presented by a relatively small sample of errors.
Learning new concepts rapidly from small data samples is a key challenge in machine learning [19]. Despite the widespread perception of neural networks as monstrous giant systems, whose iterative training requires a lot of time and resources, mounting empirical evidence points to numerous successful examples of learning from modestly-sized datasets [20]. Moreover, training with one or several shots is possible. By a definition that has already become classic, "one-shot learning" consists of learning a class from a single labelled example [19]. In "few-shot learning", a classifier must generalise to new classes not seen in the training set, given only a small number of examples of each new class [21].
Several modern approaches to enabling this type of learning require preliminary training tasks that are similar but not fully identical to new tasks to be learned. After such preliminary training the system acquires new meta-skills: it can now learn new tasks, which are not crucially different from the previous ones, without the need for large training sets and training time. This heuristic is utilised in various constructions of one-and few-shot learning algorithms [22,23]. Similar meta-skills and learnability can also be gained through previous experience of solving various relevant problems or an appropriately organised meta-learning [21,24].
In general, a large body of one- and few-shot learning algorithms is based on combinations of reasonable preparatory learning, which aims to increase learnability and create meta-skills, with simple learning routines that facilitate learning from a small number of examples after this propaedeutics. These simple methods create appropriate latent feature spaces for the trained models, which are preconditioned for the task of learning from few or single examples. Typically, a copy of the same pretrained system is used for different one- and few-shot learning tasks. Nevertheless, many approaches also admit minor few-shot modifications of the features using the new tasks.
Although a large number of different algorithms implementing one- and few-shot learning schemes have been proposed to date, the effectiveness of simple one- and few-shot methods is based on either significant dimensionality reduction or the blessing of dimensionality effects [25,26]. A significant reduction in dimensionality means that several features have been extracted that are already sufficient for the purposes of learning. Thereafter, a well-elaborated library of efficient lower-dimensional statistical learning methods can be applied to solve new problems using the same features.
The blessing of dimensionality is a relatively new idea [27][28][29][30]. It means that simple classical techniques like linear Fisher's discriminants become unexpectedly powerful in high dimensions under some mild assumptions about regularity of probability distributions [31][32][33]. These assumptions typically require absence of extremely dense lumps of data, which can be defined as areas with relatively low volume but unexpectedly high probability (for more detail we refer to [13]). These lumps correspond to narrow but high peaks of probability density.
If a dataset consists of k such lumps then, for moderate values of k, this can be considered as a special case of dimensionality reduction. The centres of clusters are considered as 'principal points' to stress the analogy with principal components [34,35]. Such a clustered structure in system's latent space may emerge in the course of preparatory learning: images of data points in the latent space 'attract similar and repulse dissimilar' data points.
One- and few-shot learning can be organised in all three situations described above:
1. If the feature space is effectively reduced, then the challenge of a large dataset can be mitigated and we can rely on classical linear or non-linear methods of statistical learning.
2. In the situation of the 'blessing of dimensionality', with a sufficiently regular probability distribution in high dimensions, simple linear (or kernel [36]) one- and few-shot methods become effective [13,26,33].
3. If the data points in the latent space form dense clusters, then the position of new data with respect to these clusters can be utilised for solving new tasks. We can also expect that new data may introduce new clusters, but the persistence of the cluster structure seems to be important. The clusters themselves can be distributed in a multidimensional feature space. This is the novel and more general setting we focus on below in Section 3.
There is a rich set of tools for dimensionality reduction. It includes the classical prototype, principal component analysis (PCA) (see, [35,37] and Appendix A.2), and many generalisations, from principal manifolds [38] and kernel PCA [39] to principal graphs [35,40] and autoencoders [41,42]. We briefly describe some of these elementary tools in the context of data preprocessing (Appendix A), but the detailed analysis of dimensionality reduction is out of the main scope of the paper.
In a series of previous works, we focused on the second item [13,25,30–33,43]. The blessing of dimensionality effects that make one- and few-shot learning possible for regular distributions of data are based on the stochastic separation theorems. All these theorems have a similar structure: for large dimensions, even in an exponentially large (relative to the dimension) set of points, each point is separable from the rest by a linear functional, which is given by a simple explicit formula. These blessing of dimensionality phenomena are closely connected to the concentration of measure [44–48] and to various versions of the central limit theorem in probability theory [49]. Of course, there remain open questions about sharp estimates for some distribution classes, but the general picture seems to be clear now.
In this work, we focus mainly on the third point and explore the blessings of dimensionality and related methods of one-and few-shot learning for multidimensional data with rich cluster structure. Such datasets cannot be described by regular probability densities with a priori bounded Lipschitz constants. Even more general assumptions about absence of sets with relatively small volume but relatively high probability fail. We believe that this option is especially important for applications.

Bibliographic Comments
All references presented in the paper matter. However, a separate quick guide to the bibliographic references about the main ideas may be helpful:
• Blessing of dimensionality. In data analysis, the idea of the blessing of dimensionality was formulated by Kainen [27]. Donoho considered the effects of the dimensionality blessing to be the main direction of the development of modern data science [28]. The mathematical backgrounds of the blessing of dimensionality are in the measure concentration phenomena. The same phenomena form the background of statistical physics (Gibbs, Einstein, Khinchin; see the review [25]). Two modern books include most of the classical results and many new achievements of the concentration of measure phenomena needed in data science [44,45] (but they do not include the new stochastic separation theorems). Links between the blessing of dimensionality and the classical central limit theorems were recently discussed in [49].
• One-shot and few-shot learning. This is a new direction in machine learning. Two papers give a nice introduction to this area [19,20]. Stochastic separation theorems explained the ubiquity of one- and few-shot learning [26].
• AI errors. The problem of AI errors is widely recognised. It is becoming the most important issue of serious concern when trying to use AI in real life. The Council of Europe study report [10] demonstrates that the inevitability of errors of data-driven AI is now a big problem for society. Many discouraging examples of such errors have been published [50,51], collected in reviews [52], and accumulated in a special database, the Artificial Intelligence Incident Database (AIID) [53,54]. The research interest in this problem increases as an answer of the scientific community to the requests of AI users. There are several fundamental origins of AI errors, including uncertainty in the training data, uncertainty in the training process, and uncertainty of the real world: reality can deviate significantly from the fitted model. The systematic manifestations of these deviations are known as concept drift or model degradation phenomena [55].
• AI correctors. The idea of the elementary corrector together with its statistical foundations was proposed in [30]. The first stochastic separation theorems were proved for several simple data distributions (uniform distributions in a ball and product distributions with bounded support) [31]. A collection of results for many practically important classes of distributions, including convex combinations of log-concave distributions, is presented in [13]. A kernel version of the stochastic separation theorem was proved in [36]. The stochastic separation theorems were used for the development of correctors tested on various data and problems, from the straightforward correction of errors [32] to knowledge transfer between AI systems [56].
• Data compactness. This is an old and celebrated idea proposed by Braverman in the early 1960s [57]. Several methods for measuring the compactness of data clouds were invented [58]. The possibility to replace data points by compacta in the training of neural networks was discussed in [59]. Besides the theoretical backgrounds of AI and data mining, data compactness was used for unsupervised outlier detection in high dimensions [60] and other practical needs.

The Structure of the Paper
In Section 2 we briefly discuss the phenomenon of post-classical data. We begin with Donoho's definition of post-classical data analysis problems, where the number of attributes is greater than the number of data points [28]. Then we discuss alternative definitions and end with a real case study that started with a dataset of dimension 5 × 10^5 and ended with five features that give an effective solution of the initial classification problem. Section 3 includes the main theoretical results of the paper, the stochastic separation theorems for data distributions with fine-grained structure. For these theorems, we model clusters by geometric bodies (balls or ellipsoids) and work with distributions of ellipsoids in high dimensions. A hierarchical structure of the data universe is introduced, in which each data cluster has a granular internal structure, and so on. Separation theorems in infinite-dimensional limits are proven under assumptions of compact embedding of patterns into the data space.
In Section 4, the algorithms (multi-correctors) for the correction of AI errors that work for multiple clusters of errors are developed and tested. For such datasets, several elementary correctors and a dispatcher are required; the dispatcher distributes situations for analysis to the most appropriate elementary corrector. In a multi-corrector, each elementary corrector separates its own area of high-risk error situations and contains an alternative rule for making decisions in situations from this area. The input signals of the correctors are the input, internal, and output signals of the AI system to be corrected, as well as any other available attributes of the situation. The system of correctors is controlled by a dispatcher, which is formed on the basis of a cluster analysis of errors and distributes the situations specified by the signal vectors between elementary correctors for evaluation and, if necessary, correction.
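This dispatcher-plus-elementary-correctors architecture can be illustrated with a minimal sketch on synthetic data. All names, the hand-rolled k-means dispatcher, and the thresholds below are illustrative assumptions, not the implementation evaluated in Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: labelled errors form two clusters in feature space,
# while normal points are spread around the origin.
d = 50
normal = rng.standard_normal((500, d))
errors = np.vstack([rng.standard_normal((20, d)) + 4.0,
                    rng.standard_normal((20, d)) - 4.0])

# Dispatcher: cluster the errors (a few Lloyd iterations of k-means,
# seeded with one known point from each cluster for simplicity).
k = 2
centroids = errors[[0, 20]].copy()
for _ in range(10):
    labels = np.argmin(((errors[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([errors[labels == j].mean(axis=0) for j in range(k)])

# One elementary Fisher-type corrector per error cluster: a discriminant
# direction from the normal mean to the cluster centroid, with an
# illustrative midpoint threshold.
mu_normal = normal.mean(axis=0)
directions = centroids - mu_normal
thresholds = [0.5 * (directions[j] @ centroids[j] + (normal @ directions[j]).max())
              for j in range(k)]

def dispatch_and_check(x):
    """Route x to the nearest error cluster, then apply its elementary corrector."""
    j = int(np.argmin(((x - centroids) ** 2).sum(-1)))
    return bool(x @ directions[j] > thresholds[j])   # True: high risk of error

print(np.mean([dispatch_and_check(x) for x in errors]))   # errors mostly flagged
print(np.mean([dispatch_and_check(x) for x in normal]))   # normal points mostly passed
```

The dispatcher only decides which elementary corrector is responsible; each corrector remains a simple linear classifier that can be retrained in one shot when a new error cluster appears.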
Multi-correctors are tested on the CIFAR-10 dataset. In this case study, we illustrate how 'clustered' or 'granular universes' can arise in real data and show how a granular-representation-based multi-corrector structure can be used in challenging machine learning and Artificial Intelligence problems. These problems include learning new classes of data in legacy deep learning AI models and predicting AI errors. We present simple algorithms and workflows which can be used to solve these challenging tasks, circumventing the need for computationally expensive retraining. We also illustrate potential technical pitfalls and dichotomies requiring additional attention from the algorithms' users and designers.
In conclusion, we briefly review the results (Section 5). Discussion (Section 6) aims at explaining the main message: the success or failure of many machine learning algorithms, the possibility of meta-learning, and opportunities to learn continuously from relatively small data samples depend on the world structure. The capability of representing a real world situation as a collection of things with some features (properties) and relationships between these entities is the fundamental basis of knowledge of both humans and AI.
Appendices include auxiliary mathematical results and relevant technical information. In particular, in Appendix A we discuss the following preprocessing operations that may move a dataset from the postclassical area:
• Correlation transformation that maps the data space into the cross-correlation space between data samples;
• PCA;
• Supervised PCA;
• Semi-supervised PCA;
• Transfer Component Analysis (TCA);
• The novel expectation-maximization Domain Adaptation PCA ('DAPCA').

Postclassical Data
The high-dimensional post-classical world was defined in [28] by the inequality

d > N,   (1)

where d is the number of attributes and N is the sample size. This post-classical world is different from the 'classical world', where we can consider infinite growth of the sample size for a given number of attributes. The classical statistical methodology was developed for the classical world, based on the assumption that d is fixed while N → ∞. Thus, classical statistical learning theory is mostly useless in the multidimensional post-classical world: its results fail if d > N. The d > N case is not anomalous for modern big data problems. It is the generic case: both the sample size and the number of attributes grow, but in many important cases the number of attributes grows faster than the number of labelled examples [28].
High-dimensional effects of the curse and blessing of dimensionality appear in a much wider area than that specified by the inequality (1). A typical example is given by the phenomenon of quasiorthogonal dimension [61–63]: for given ε > 0 and ϑ > 0 (assumed small), a random set of N vectors x_i on a high-dimensional unit d-dimensional sphere satisfies the inequality

|(x_i, x_j)| < ε

for all i ≠ j with probability p > 1 − ϑ when N < a exp(bd), where a and b depend on ε and ϑ only. This means that the quasiorthogonal dimension of a Euclidean space grows exponentially with the dimension d. Such effects are important in machine learning [63]. Therefore, the Donoho boundary should be modified: the postclassical effects appear in high dimension when

d ≫ log N.   (2)
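The quasiorthogonality effect is easy to observe numerically. The sketch below draws random unit vectors and reports the largest pairwise inner product; the sample sizes and dimensions are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# N random unit vectors become pairwise almost orthogonal as the
# dimension d grows: max |(x_i, x_j)| shrinks roughly like 1/sqrt(d).
N = 200
maxima = []
for d in (3, 30, 3000):
    x = rng.standard_normal((N, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # N points on the unit sphere
    g = np.abs(x @ x.T)                             # |(x_i, x_j)| for all pairs
    np.fill_diagonal(g, 0.0)                        # ignore the trivial i = j terms
    maxima.append(float(g.max()))
    print(d, round(maxima[-1], 3))                  # largest |(x_i, x_j)|, i != j
```

In dimension 3 some pair of the 200 vectors is almost collinear, while in dimension 3000 all pairs are nearly orthogonal, consistent with the exponential growth of the quasiorthogonal dimension.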
The two different definitions of the postclassical area, (1) and (2), are illustrated in Figure 3a.
The definition of the postclassical data world needs one more comment. The inequalities (1) and (2) used the number of attributes as the equivalent of the dimension of the data space. Behind this approach is the hypothesis that there is no strong dependence between the attributes. In real situations, the data dimensionality can be much less than the number of attributes, for example, in the case of strong multicollinearity. If, say, the data are located along a straight line, then for most approaches the dimension of the dataset is 1 and the value of d does not matter. Therefore, the definition (2) of the postclassical world needs to be modified further, with the dimension of the dataset, dim(DataSet), instead of d:

dim(DataSet) ≫ log N.

There are many various definitions of data dimensionality; see a brief review in [64,65]. For all of them, we can assume that dim(DataSet) < N and dim(DataSet) ≤ d (see Figure 3b). It may happen that the intrinsic dimensionality of a dataset is surprisingly low and the variables have hidden interdependencies. The structure of multidimensional data point clouds can have a globally complicated organisation which is sometimes difficult to represent with regular mathematical objects (such as manifolds) [65,66]. The postclassical world effects include the blessing and curse of dimensionality. The blessing and curse are based on the concentration of measure phenomena [44,46–48] and are, in that sense, two sides of the same coin [33,43].
It may be possible to resolve the difficulties with data analysis in the Donoho area by adequate preprocessing, described in Appendix A. Consider an example of a successful descent from data dimension 5 × 10^5 to a five-dimensional decision space [67]. The problem was to develop an 'optical tongue' that recognises the toxicity of various substances. The optical assay included a mixture of sensitive fluorescent dyes and human skin cells, which generate fluorescence spectra distinctive for particular conditions. The system produced a characteristic response to toxic chemicals.
Two fluorescence images were received for each chemical: with growing cells and without them (control). The images were 511 × 511 arrays of fluorescence intensities as functions of emission and excitation. The dataset included 34 irritating and 28 non-irritating (Non-IRR) compounds (62 chemicals in total). The input data vector for each compound had dimension 522,242. This dataset belonged to the Donoho area.
After selection of a training set, each fluorescence image was represented by the vector of its correlation coefficients with the images from the training set. The size of the training set was 43 examples (with several randomised training set/test set splittings) or 61 examples (for leave-one-out cross-validation). After that, the data matrix was a 43 × 43 or 61 × 61 symmetric matrix. Then classical PCA was applied, with the standard selection of the number of components by the Kaiser rule, which returned five components. Finally, in the reduced space, classical classification algorithms were applied (kNN, decision tree, linear discriminant, and others). Both the sensitivity and specificity of the 3NN classifier with adaptive distance and of the decision tree exceeded 90% in leave-one-out cross-validation.
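The preprocessing chain of this case study (correlation transformation, then PCA, then a simple classifier) can be imitated on made-up data. The sketch below uses a much smaller synthetic dataset and replaces the kNN and decision tree of the original study with a nearest-class-mean rule; everything here is an illustrative assumption, not the published pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in for the 'optical tongue' data: 60 samples with a
# huge number of attributes (20000 here instead of ~522000), two classes.
n, d = 60, 20000
labels = np.array([0] * 30 + [1] * 30)
data = rng.standard_normal((n, d)) + 2.0 * labels[:, None] * rng.standard_normal(d)

# Step 1: correlation transformation - each sample is represented by its
# vector of correlation coefficients with the training samples.
corr = np.corrcoef(data)               # 60 x 60 symmetric matrix

# Step 2: classical PCA on the correlation representation.
centred = corr - corr.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:5].T            # keep five components (as the Kaiser rule did)

# Step 3: a simple classifier in the five-dimensional reduced space.
mu0 = scores[labels == 0].mean(axis=0)
mu1 = scores[labels == 1].mean(axis=0)
pred = (np.linalg.norm(scores - mu1, axis=1)
        < np.linalg.norm(scores - mu0, axis=1)).astype(int)
print((pred == labels).mean())         # training accuracy of the toy pipeline
```

The point of the sketch is the shape of the workflow: a 60 × 20000 postclassical matrix becomes a 60 × 60 correlation matrix and then a 60 × 5 classical dataset, on which elementary classifiers suffice.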
This case study demonstrates that simple preprocessing can sometimes return postclassical data to the classical domain. However, in truly multidimensional datasets, this approach can fail due to the quasiorthogonality effect [61][62][63]: centralised random vectors in large dimensions are nearly orthogonal under very broad assumptions, and the matrix of empirical correlation coefficients with high probability is often close to the identity matrix even for exponentially large data samples [63].

Fisher Separability
Recall that the classical Fisher discriminant between two classes with means µ1 and µ2 separates the classes by a hyperplane orthogonal to µ1 − µ2 in the inner product

⟨x, y⟩ = (x, S⁻¹y),

where (·, ·) is the standard inner product and S is the average (or the weighted average) of the sample covariance matrices of these two classes. Let the dataset be preprocessed. In particular, we assume that it is centralised, normalised, and approximately whitened. In this case, we use the standard inner product in the definition of Fisher's discriminant instead of ⟨·, ·⟩.

Definition 1.
A point x is Fisher separable from a set Y ⊂ R^n with threshold α ∈ (0, 1], or α-Fisher separable in short, if the inequality

(x, y) < α(x, x)   (5)

holds for all y ∈ Y.

Definition 2.
A finite set Y ⊂ R^n is Fisher separable with threshold α ∈ (0, 1], or α-Fisher separable in short, if inequality (5) holds for all x, y ∈ Y such that x ≠ y.
Separation of points by simple and explicit inner products (5) is, from the practical point of view, more convenient than general linear separability that can be provided by support vector machines, for example. Of course, linear separability is more general than Fisher separability. This is obvious from the everyday low-dimensional experience, but in high dimensions Fisher separability becomes a generic phenomenon [30,31].
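Fisher separability by inequality (5) is a one-line check. The sketch below (standard inner product, synthetic data with an arbitrary scaling) contrasts a low- and a high-dimensional sample.

```python
import numpy as np

def fisher_separable(x, Y, alpha=0.8):
    """Check whether x is alpha-Fisher separable from every point of Y:
    (x, y) < alpha * (x, x) for all y in Y (standard inner product,
    data assumed centred and approximately whitened)."""
    return bool(np.all(Y @ x < alpha * (x @ x)))

rng = np.random.default_rng(5)

# In high dimension, a random point is typically Fisher separable from a
# large random sample; in low dimension it typically is not.
results = {}
for n in (2, 1000):
    Y = rng.standard_normal((5000, n)) / np.sqrt(n)   # points with norms near 1
    x = rng.standard_normal(n) / np.sqrt(n)
    results[n] = fisher_separable(x, Y, alpha=0.8)
    print(n, results[n])
```

In dimension 2 some of the 5000 points almost inevitably land inside the excluded ball around x, while in dimension 1000 the inner products (x, y) concentrate near zero and the check passes, which is the generic high-dimensional behaviour referred to above.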
Theorem 1 below is a prototype of most stochastic separation theorems. Two heuristic conditions on the probability distribution of data points are used in the stochastic separation theorems:
• The probability distribution has no heavy tails;
• The sets of relatively small volume should not have large probability.
These conditions are not necessary and could be relaxed [13]. In the following Theorem 1 [32], the absence of heavy tails is formalised as a tail cut: the support of the distribution is a subset of the n-dimensional unit ball B_n. The absence of sets of small volume but large probability is formalised in this theorem by the inequality

ρ(x) < C / (r^n V_n(B_n)),   (6)

where ρ is the distribution density, C > 0 is an arbitrary constant, V_n(B_n) is the volume of the ball B_n, and 1 > r > 1/(2α). This inequality guarantees that the probability measure of each ball with radius R ≤ 1/(2α) decays for n → ∞ as a geometric progression with denominator R/r. The condition 1 > r > 1/(2α) is possible only if α > 0.5; hence, in Theorem 1 we assume α ∈ (0.5, 1].

Theorem 1. Let α ∈ (0.5, 1], 1 > r > 1/(2α), δ ∈ (0, 1), let Y be a finite subset of B_n, and let x be a random point of a distribution in the unit ball B_n with density ρ satisfying (6). If

|Y| < δ(2rα)^n / C,

then x is α-Fisher separable from Y with probability p > 1 − δ.
Proof. For a given y, the set of points x that are not α-Fisher separable from y by inequality (5) is the ball

‖x − y/(2α)‖ < ‖y‖/(2α).   (7)

This is the ball of excluded volume. Since ‖y‖ ≤ 1, the volume of the ball (7) does not exceed V = (1/(2α))^n V_n(B_n) for each y ∈ Y. By (6), the probability that the point x belongs to such a ball does not exceed C(1/(2rα))^n. The probability that x belongs to the union of |Y| such balls does not exceed |Y|C(1/(2rα))^n. For |Y| < δ(2rα)^n/C this probability is smaller than δ, and hence p > 1 − δ. □
Note that:
• The finite set Y in Theorem 1 is just a finite subset of the ball B_n without any assumption of its randomness. We only used the assumption about the distribution of x.
• The distribution of x may deviate significantly from the uniform distribution in the ball B_n. Moreover, this deviation may grow with dimension n as a geometric progression:

ρ(x)/ρ_uniform ≤ C/r^n,

where ρ_uniform = 1/V_n(B_n) is the density of the uniform distribution and 1/(2α) < r < 1 under the assumption that 1/2 < α ≤ 1.
Let, for example, α = 0.8, r = 0.9, C = 1, δ = 0.01. Table 1 shows the upper bounds on |Y| given by Theorem 1 in various dimensions n that guarantee α-Fisher separability of a random point x from Y with probability ≥ 0.99 if the ratio ρ(x)/ρ_uniform is bounded by the geometric progression 1/r^n. For example, for n = 100, we see that for any set with |Y| < 6.85 × 10^13 points in the unit ball and any distribution whose density ρ deviates from the uniform one by a factor of at most 3.76 × 10^4, a random point from this distribution is Fisher-separable (5) with α = 0.8 from all points in Y with 99% probability.
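The numbers quoted above can be reproduced directly from the bound |Y| < δ(2rα)^n/C and the density-ratio bound 1/r^n; a quick arithmetic check with the stated parameter values:

```python
alpha, r, C, delta = 0.8, 0.9, 1.0, 0.01

def max_set_size(n):
    # Theorem 1 bound: x is separable from all of Y with probability > 1 - delta
    # whenever |Y| < delta * (2 r alpha)^n / C
    return delta * (2 * r * alpha) ** n / C

def density_ratio_bound(n):
    # Allowed deviation from the uniform density: rho / rho_uniform <= C / r^n
    return C / r ** n

for n in (20, 50, 100):
    print(f"n = {n:3d}: |Y| < {max_set_size(n):.3g}, "
          f"rho/rho_uniform <= {density_ratio_bound(n):.3g}")
```

For n = 100 this gives |Y| bounded by roughly 6.9 × 10^13 and a density ratio of roughly 3.8 × 10^4, matching the figures cited from Table 1.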
If we consider Y as a random set in B_n that satisfies (6) for each point, then with high probability Y is α-Fisher separable (each point is separable from the rest of Y) under some constraints on |Y| from above. From Theorem 1 we get the following corollary.

Corollary 1. If Y = {y_1, . . . , y_|Y|} ⊂ B_n is a random set and for each j the conditional distribution of the vector y_j, for any given positions of the other y_k in B_n, satisfies the same conditions as the distribution of x in Theorem 1, then the probability of the random set Y being α-Fisher separable can be easily estimated:

p ≥ 1 − |Y|² C (1/(2rα))^n.

Thus, for example, p > 0.99 if |Y| < (1/10) C^{−1/2} (2rα)^{n/2} (Table 2).

Multiple generalisations of Theorem 1 have been proven with sharp estimates of |Y| for various families of probability distributions. In this section, we derive stochastic separation theorems for distributions with cluster structure that violate the assumption (6) significantly. For this purpose, in the following subsections we introduce models of cluster structures and modify the notion of Fisher separability to separate clusters. The structure of the separation functionals remains explicit, with one-shot non-iterative learning, but assimilates both information about the entire distribution and about the cluster being separated.

Granular Models of Clusters
The simplest model of a fine-grained distribution of data assumes that the data are grouped into dense clusters and each cluster is located inside a relatively small body (a granule) with random position. Under these conditions, the distributions of data inside the small granules do not matter and may be left out of consideration. What is important are the geometric characteristics of the granules and their distribution. This is a simple one-level version of the granular data representation [68,69]. The possibility of replacing points by compacta in neural network learning was considered by Kainen [59], who developed the idea that 'compacta can replace points'. In the discussion, we will also touch on a promising multilevel hierarchical granular representation.
Spherical granules allow a simple, straightforward generalisation of Theorem 1. Consider spherical granules G_z of radius R with centres z ∈ B_n:

G_z = {u ∈ R^n : ‖u − z‖ ≤ R}.

Let G_x and G_y be two such granules. Let us reformulate the Fisher separation condition with threshold α for granules:

α(x', x') > (x', y') for all x' ∈ G_x, y' ∈ G_y.   (8)

Elementary geometric reasoning gives that the separability condition (8) holds if x (the centre of G_x) does not belong to the ball with radius ‖y‖/(2α) + R(1 + 1/α) centred at y/(2α):

‖x − y/(2α)‖ > ‖y‖/(2α) + R(1 + 1/α).   (9)

This is the analogue of the ball of excluded volume (7) for spherical granules. The difference from (7) is that both x and y are inflated into balls of radius R.
Proof. Let B be the closure of the ball of excluded volume (7):

B = {u ∈ R^n : ‖u − y/(2α)‖ ≤ ‖y‖/(2α)}.

Condition (9) implies that the distance between x and B is at least

dist(x, B) ≥ R(1 + 1/α).   (10)

Let β ∈ [0, 1] be the largest number for which βx ∈ B (such β exists because 0 ∈ B). Then βx belongs to the boundary of B; hence (5) holds as an equality for βx:

(βx, y) = α(βx, βx), that is, (x, y) = αβ(x, x).

Thus, if x satisfies (9), then ‖x − βx‖ = (1 − β)‖x‖ ≥ R(1 + 1/α). Combination of these two last relations with (10), accounting for the inflation of x and y into balls of radius R, gives separability (8).
If the point y belongs to the unit ball B_n, then the radius of the ball of excluded volume (9) does not exceed

ξ = 1/(2α) + R(1 + 1/α).   (11)

Further on, the assumption ξ < 1 is used.

Theorem 2.
Consider a finite set of spherical granules G_y with radius R and set of centres Y in B_n. Let G_x be a granule with radius R and a randomly chosen centre x from a distribution in the unit ball with bounded probability density ρ(x). Assume that ρ(x) satisfies inequality (6) and that the upper estimate (11) of the radius of the excluded ball satisfies ξ < 1. Let 1 > r > ξ and

|Y| < (δ/C)(r/ξ)^n.

Then the separability condition (8) holds for G_x and all G_y (y ∈ Y) with probability p > 1 − δ.
Proof. The separability condition (8) holds for the granule G_x and all G_y (y ∈ Y) if x lies outside the excluded ball (9) for every y ∈ Y. The volume of the excluded ball does not exceed V = ξ^n V_n(B_n) for each y ∈ Y. The probability that the point x belongs to such a ball does not exceed C(ξ/r)^n, in accordance with the boundedness condition (6). Therefore, the probability that x belongs to the union of such balls does not exceed |Y|C(ξ/r)^n.
This probability is less than δ if |Y| < (δ/C)(r/ξ)^n. Table 3 shows how the number |Y| that guarantees separability (8) of a random granule G_x from an arbitrarily selected set of |Y| granules with probability 0.99 grows with dimension for α = 0.9, r = 0.9, C = 1, and R = 0.1.
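The growth of the bound in Table 3 can be reproduced from ξ = 1/(2α) + R(1 + 1/α) and |Y| < (δ/C)(r/ξ)^n; a small sketch with the parameter values quoted above:

```python
alpha, r, C, R, delta = 0.9, 0.9, 1.0, 0.1, 0.01

# Radius bound (11) for the excluded ball of a spherical granule
xi = 1 / (2 * alpha) + R * (1 + 1 / alpha)
assert xi < r < 1  # conditions of Theorem 2

def granule_bound(n):
    # Theorem 2: separability (8) holds with probability > 1 - delta
    # for any set of granules with |Y| below this bound
    return (delta / C) * (r / xi) ** n

for n in (50, 100, 200):
    print(f"n = {n}: |Y| < {granule_bound(n):.3g}")
```

The bound grows as a geometric progression in n, exactly as stated in the theorem, though with a smaller ratio r/ξ than the point-wise ratio 2rα of Theorem 1.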
The separability condition (8) can be considered as Fisher separability (5) with points inflated to granules. From this point of view, Theorem 2 is a version of Theorem 1 with inflated points. An inflated version of Corollary 1 also exists.

Table 3. The upper bound on |Y| that guarantees separation (8) of granules G_x and all G_y (y ∈ Y) with probability 0.99 according to Theorem 2 for α = 0.9, r = 0.9, C = 1, and R = 0.1 in various dimensions.

Corollary 2. Let Y = {y_1, . . . , y_|Y|} ⊂ B_n be a random set. Assume that for each j the density of the conditional distribution of the vector y_j, for any given positions of the other y_k in B_n, exists and satisfies inequality (6). Consider a finite set of spherical granules G_y with radius R and centres y ∈ Y in B_n. For the radius (11) of the excluded ball, assume ξ < r, where r < 1 is defined in (6). Then, with probability

p ≥ 1 − |Y|² C (ξ/r)^n,

the separability condition (8) holds for each pair of granules G_y, G_y' (y ≠ y'). In particular, p > 1 − δ if |Y| < δ^{1/2} C^{−1/2} (r/ξ)^{n/2}; this upper bound on |Y| grows with n in geometric progression.
The idea of spherical granules implies that, in relation to the entire dataset, the granules are more or less uniformly compressed in all directions and their diameter is relatively small (or, equivalently, the granules are inflated points, and this inflation is limited isotropically). Looking around, we can hypothesise quite different properties: in some directions the granules can have large variance, comparable to that of the whole set, but the dispersion decays along the sequence of the granule's principal components, while the entire set is assumed to be whitened. A large diameter of granules is not an obstacle to the stochastic separation theorems. The following proposition gives a simple but instructive example.
Proposition 1. Consider an arbitrary set of N intervals I_j = [u_j, v_j] ⊂ B_n (j = 1, . . . , N). Let x be a randomly chosen point from a distribution in the unit ball with bounded probability density ρ(x). Assume that ρ(x) satisfies inequality (6) and N < (δ/(2C))(2rα)^n. Then, with probability p > 1 − δ, the point x is Fisher-separable (5) with threshold α from every y ∈ ∪_j I_j.

Proof.
For given x and α, Fisher's separability inequality (5) defines a half-space for y. An interval I = [u, v] belongs to this half-space if and only if its ends, u and v, belong to it, that is, if x is α-Fisher separable from u and v. Therefore, we can apply Theorem 1 to prove α-Fisher separability of x from the set of 2N interval endpoints {u_j, v_j}. The same statements are true for separation of a point from a set of simplexes of various dimensions. For such estimates, only the number of vertices matters.
Consider granules in the form of ellipsoids with a decaying sequence of lengths of the principal axes. Let

d = (d_1, d_2, . . .), d_1 ≥ d_2 ≥ . . . > 0,   (12)

be an infinite sequence of upper bounds for the semi-axes. Each ellipsoid granule in R^n has a centre z, an orthonormal basis of principal axes E = {e_1, e_2, . . . , e_n}, and a sequence of semi-axes A = {a_1, . . . , a_n}. This ellipsoid is given by the inequality:

S_{z,E,A} = {u ∈ R^n : Σ_i ((u − z, e_i)/a_i)² ≤ 1}.   (13)

Let the sequence d_i → 0 as i → ∞.

Theorem 3. Consider a set of N elliptic granules (13) with centres z ∈ B_n and a_i ≤ d_i. Let D be the union of all these granules. Assume that x ∈ B_n is a random point from a distribution in the unit ball with bounded probability density ρ(x) ≤ ρ_max. Then, for positive ε, ς,

P((x, z') ≤ ε for all z' ∈ D, and ‖x‖ > 1 − ς) ≥ 1 − a exp(−bn),   (14)

where a and b do not depend on the dimensionality.
In the proof of Theorem 3 we construct explicit estimates of the probability in (14). This construction (Equation (21) below) is an important part of Theorem 3. It is based on the following lemmas about quasiorthogonality of random vectors.

Lemma 1.
Let e ∈ R^n be any normalised vector, ‖e‖ = 1. Assume that x ∈ B_n is a random point from a distribution in B_n with bounded probability density ρ(x) ≤ ρ_max. Then, for any ε > 0, the probability

P((x, e) ≥ ε) ≤ ρ_max V_n(B_n) (1/2)(√(1 − ε²))^n.   (15)

Proof. The inequality (x, e) ≥ ε defines a spherical cap. This spherical cap can be estimated from above by the volume of a hemisphere of radius √(1 − ε²) (Figure 4). The volume W of this hemisphere is

W = (1/2)(√(1 − ε²))^n V_n(B_n).

The probability that x belongs to this cap is bounded from above by the value ρ_max W, which gives the estimate (15).

Figure 4. Approximation of a spherical cap by a hemisphere. A spherical cap is the portion of B_n cut off by a plane at distance ε from the centre. It is approximated from above by a hemisphere of radius √(1 − ε²). The vector x should belong to this spherical cap to satisfy the inequality (x, e) ≥ ε.

Lemma 2. Let e_1, . . . , e_N ∈ R^n be normalised vectors, ‖e_i‖ = 1. Assume that x ∈ B_n is a random point from a distribution in B_n with bounded probability density ρ(x) ≤ ρ_max. Then, for any ε > 0, the probability

P((x, e_i) ≥ ε for some i) ≤ N ρ_max V_n(B_n) (1/2)(√(1 − ε²))^n.   (16)

Proof. Notice that

P((x, e_i) ≥ ε for some i) ≤ Σ_i P((x, e_i) ≥ ε).

According to Lemma 1, each term in the last sum is estimated from above by the expression ρ_max V_n(B_n)(1/2)(√(1 − ε²))^n. It is worth mentioning that the term (√(1 − ε²))^n decays exponentially as n increases.

Let S_{z,E,A} be an ellipsoid (13). Decompose a vector x ∈ R^n in the orthonormal basis E = {e_1, . . . , e_n}:

x = ‖x‖ Σ_i cos α_i e_i, where Σ_i cos²α_i = 1.

Lemma 3. For a given x ∈ R^n, maximisation of the linear functional (x, z') over the ellipsoid (13) gives

max_{z' ∈ S_{z,E,A}} (x, z') = (x, z) + ‖x‖ √(Σ_i a_i² cos²α_i),   (17)

and the maximiser has the following coordinates in the principal axes:

z'_i = z_i + a_i² cos α_i / √(Σ_j a_j² cos²α_j),   (18)

where z'_i = (z', e_i) and z_i = (z, e_i) are coordinates of the vectors z', z in the basis E.
Proof. Introduce coordinates in the ellipsoid S_{z,E,A} (13): ∆_i = z'_i − z_i. In these coordinates, the objective function is

(x, z') = (x, z) + ‖x‖ Σ_i ∆_i cos α_i.

For given x, z we have to maximise Σ_i ∆_i cos α_i under the equality constraint

Σ_i (∆_i/a_i)² = 1,

because the maximiser of a linear functional on a convex compact set belongs to the border of this compact. The method of Lagrange multipliers gives:

∆_i = a_i² cos α_i / (2λ).

To find the Lagrange multiplier λ, we use the equality constraint again and get

2λ = ±√(Σ_i a_i² cos²α_i),

where the '+' sign corresponds to the maximum and the '−' sign corresponds to the minimum of the objective function. Therefore, the required maximiser has the form (18) and the corresponding maximal value is given by (17).
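Both Lemma 1's cap bound and Lemma 3's closed-form maximum are easy to sanity-check numerically. A sketch with uniform samples in the ball (so ρ_max V_n(B_n) = 1) and a randomly generated ellipsoid; all sizes and seeds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Lemma 1: P((x, e) >= eps) <= (1/2)(1 - eps^2)^(n/2) for uniform x in B_n ---
def sample_ball(n, m, rng):
    # m points uniformly distributed in the unit ball B_n
    v = rng.standard_normal((m, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * rng.random((m, 1)) ** (1.0 / n)

eps = 0.2
fracs, bounds = [], []
for n in (10, 50, 200):
    pts = sample_ball(n, 100_000, rng)
    e = np.zeros(n); e[0] = 1.0                     # any fixed unit vector
    fracs.append(float(np.mean(pts @ e >= eps)))
    bounds.append(0.5 * (1 - eps ** 2) ** (n / 2))  # hemisphere bound of Lemma 1
    print(f"n = {n}: empirical {fracs[-1]:.4g} <= bound {bounds[-1]:.4g}")

# --- Lemma 3: closed-form maximum of (x, z') over an ellipsoid (13) ---
n = 6
z = rng.standard_normal(n)                        # centre
E, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal principal axes (columns)
a = np.sort(rng.random(n))[::-1] + 0.1            # decaying positive semi-axes
x = rng.standard_normal(n)

coords = E.T @ x                                  # coordinates (x, e_i) in the basis E
norm = np.sqrt(np.sum((a * coords) ** 2))
max_val = x @ z + norm                            # formula (17)
z_star = z + E @ (a ** 2 * coords / norm)         # maximiser (18)

s = rng.standard_normal((10_000, n))
s /= np.linalg.norm(s, axis=1, keepdims=True)
vals = (z + (s * a) @ E.T) @ x                    # random boundary points of (13)
print(f"formula: {max_val:.6f}, attained at maximiser: {x @ z_star:.6f}")
```

The empirical cap fractions decay with n and stay below the hemisphere bound, and no sampled boundary point of the ellipsoid exceeds the closed-form maximum (17).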
Proof of Theorem 3. The proof is organised as follows. Select a sufficiently small R > 0 and find k such that d_{k+1} < R. For each elliptic granule select the first k vectors of its principal axes. There will be N vectors of the first axes, N vectors of the second axes, etc. Denote these families of vectors E_1, E_2, ..., E_k: E_i is the set of vectors of the ith principal axis of the granules. Let E_0 be the set of the centres of the granules. Select a small ϑ > 0. Use Lemma 2 and find the probability that for all e ∈ E_i and for all i = 1, . . . , k the following quasiorthogonality condition holds: |(x, e)| ≤ ϑ. Under this condition, evaluate the value of the separation functionals (17) in all granules as

max_{z' ∈ S_{z,E,A}} (x, z') ≤ (x, z) + √(ϑ² Σ_{i≤k} d_i² + R²),   (19)

where z is the centre of the granule. Indeed, split the sum under the square root in (17) into the terms with i ≤ k and with i > k. The quasiorthogonality condition gives that the first sum does not exceed ϑ² Σ_{i≤k} d_i². Recall that ‖x‖ ≤ 1, Σ_i cos²α_i = 1, and a_i < R for i > k; therefore, the second sum does not exceed R². This gives us the required estimate (19). The first term, (x, z), is also small with high probability: this quasiorthogonality of x and the N vectors of the centres of granules follows from Lemma 2. It should be noted that the requirement of quasiorthogonality of x to several families of vectors (N centres and kN principal axes) increases the pre-exponential factor in the negative term in (16). This increase can be compensated by a slight increase in the dimensionality because of the exponential factor there.
Let us construct the explicit estimates for given ε > 0, ς > 0. Take R ≤ ε/2 and

ϑ = ε / (2(1 + √(Σ_{i≤k} d_i²))).   (20)

Under the conditions of Theorem 3, several explicit exponential estimates of probabilities hold:
1. The volume of a ball with radius 1 − ς is V_n(B_n)(1 − ς)^n; therefore, for the probability of x belonging to this ball, we have P(‖x‖ ≤ 1 − ς) ≤ ρ_max V_n(B_n)(1 − ς)^n.
2. By Lemma 2, the probability that |(x, e)| > ϑ for some e ∈ E_i, i = 0, 1, . . . , k, does not exceed (k + 1)N ρ_max V_n(B_n)(√(1 − ϑ²))^n.
Thus, with probability

p ≥ 1 − ρ_max V_n(B_n)[(1 − ς)^n + (k + 1)N(√(1 − ϑ²))^n],   (21)

we have ‖x‖ > 1 − ς and |(x, e)| ≤ ϑ for all e ∈ E_i, i = 0, 1, . . . , k; then, according to the choice of ϑ (20) and inequality (19), (x, z') ≤ ε for all points of the granules, z' ∈ D. Therefore, (21) proves Theorem 3 with an explicit estimate of the probability.
Thus, with the probability estimated in (21), we have (x, z') ≤ ε for all points of the granules, z' ∈ D, together with ‖x‖ > 1 − ς; if ε < α(1 − ς)², this gives α(x, x) > (x, z') for all z' ∈ D. This is the analogue of α-Fisher separability of the point x from the elliptic granules.
Theorem 3 describes stochastic separation of a random point in n-dimensional data space from a set of N elliptic granules. For given N, the probability of α-Fisher separability approaches 1 exponentially as the dimensionality grows. Equivalently, for a given probability, the upper bound on the number of granules that guarantees such a separation with this probability grows exponentially with the dimension of the data. We require two properties of the probability distribution: compact support and the existence of a probability density bounded from above. The interplay between the dependence of the maximal density on the dimension (similarly to (6)) and the exponents in the probability estimates (21) determines the estimate of the separation probability.
In Theorem 3 we analysed separation of a random point from a set of granules, but it seems much more practical to consider separation of a random granule from a set of granules. For the analysis of random granules, a joint distribution of the position of the centre and the basis of principal axes is needed. Strong dependencies between the position of the centre and the directions of the principal axes may, in special cases, destroy the separability phenomenon. For example, if the first principal axis has length 1 or more and is parallel to the vector of the centre (i.e., e_1 = x/‖x‖), then this granule is not separated even from the origin. On the other hand, independence of these distributions guarantees stochastic separability, as follows from Theorem 4 below. Independence by itself is not needed. The essential condition is that, for each orientation of the granule, the position of its centre remains rather uncertain.

Theorem 4. Consider a set of N elliptic granules (13) with centres z ∈ B_n and a_i ≤ d_i. Let D be the union of all these granules. Let x be the centre of a random elliptic granule S_x = S_{x,E_x,A_x} (13). Assume that for any basis of principal axes E and any sequence of semi-axes A = {a_i} (a_i ≤ d_i), the conditional distribution of the centre x given E_x = E, A_x = A has a density in B_n uniformly bounded from above by ρ_max. Then

P(α(x', x') > (x', z') for all x' ∈ S_x, z' ∈ D) ≥ 1 − a exp(−bn),   (22)

where a and b do not depend on the dimensionality.
In the proof of Theorem 4 we estimate the probability (22) by a sum of decaying exponentials, which gives explicit formulas for a and b, as was done for Theorem 3 in (21).
Proof. We will prove (22) for an ellipsoid S_x (13) with a given (non-random) basis E and semi-axes a_i ≤ d_i, and with a random centre x ∈ B_n, assuming that the distribution density of x is bounded from above by ρ_max.
Select a sufficiently small R > 0 and find k such that d_{k+1} < R. For each granule, including S_x with the centre x, select the first k vectors of its principal axes. There will be N + 1 vectors of the first axes, N + 1 vectors of the second axes, etc. Denote these families of vectors E_1, E_2, ..., E_k: E_i is the set of vectors of the ith principal axis for all granules, including S_x. Let E_0 be the set of the centres of the granules (excluding the centre x of the granule S_x). For a given ϑ > 0, the following estimate of probability holds (analogously to (21)):

p ≥ 1 − ρ_max V_n(B_n)[(1 − ς)^n + (k + 1)(N + 1)(√(1 − ϑ²))^n].   (23)
Additionally, for this choice, (x, x') ≥ 1 − ε for all x' ∈ S_x. Therefore, if ε < α/(1 + α), then α(x', x') > (x', z') for all z' ∈ D and x' ∈ S_x with the probability estimated in (23). This result can be considered as α-Fisher separability of elliptic granules in high dimensions with high probability.
Note that the proof does not actually use that d_i → 0. All that we use is that lim sup_{i→∞} d_i < R ≤ ε/2, where ε < α/(1 + α). Hence, the proof remains valid whenever lim sup_{i→∞} d_i is sufficiently small.
It may be useful to formulate, as a separate proposition, a version of Theorem 4 in which S_x is a granule of an arbitrary (non-random) shape but with a random centre.

Proposition 2.
Let D be the union of N elliptic granules (13) with centres in B_n and a_i ≤ d_i. Let S_{z,E,A} be one more such granule. Let x ∈ B_n be a random point from a distribution in the unit ball with bounded probability density ρ(x) ≤ ρ_max. Let S_x = S_{z,E,A} + (x − z) be the granule S_{z,E,A} shifted so that its centre becomes x. Then the conclusion of Theorem 4 holds for S_x.
The proof is the same as the proof of Theorem 4. The estimates (21) and (23) are far from being sharp. Detailed analysis for various classes of distributions may give better estimates, as was done for separation of finite sets [13]. This work needs to be done for separation of granules as well.

Superstatistic Presentation of 'Granules'
An alternative approach to the granular structure of distributions is soft clusters. They can be studied in the frame of the superstatistical approach, with representation of the data distribution by a random mixture of distributions of points in individual clusters. We start with the remark that Proposition 2 has the following easy corollary.

Corollary 3.
Let S_x and D be as in Proposition 2. Let x and z be points selected uniformly at random from S_x and D, correspondingly. Then

P(α(x, x) > (x, z)) ≥ 1 − a exp(−bn),

where the constants a, b are the same as in Theorem 4.
Corollary 3 follows from Proposition 2 by the law of total probability, and it is weaker than Proposition 2. While Proposition 2 states that, with probability at least 1 − f(n), the whole granule S_x can be separated from all points in D, Corollary 3 allows for the possibility that there could be small portions of S_x and D which are not separated from each other. As we will see below, this weakening allows us to prove the result in much greater generality, where the uniform distribution in granules is replaced by much more general log-concave distributions.
We say that the density ρ : R^n → [0, ∞) of a random vector x (and the corresponding probability distribution) is log-concave if the set K = {x : ρ(x) > 0} is convex and − log ρ(x) is a convex function on K. For example, the uniform distribution in any full-dimensional convex subset of R^n (and, in particular, the uniform distribution in granules (13)) has a log-concave density.
We say that ρ is whitened, or isotropic, if E[x] = 0 and

E[(x, θ)²] = 1 for every θ ∈ S^{n−1},   (24)

where S^{n−1} is the unit sphere in R^n. Equation (24) is equivalent to the statement that the variance-covariance matrix of the components of x is the identity matrix. This can be achieved by a linear transformation; hence, every log-concave random vector x can be represented as

x = x_0 + Σy,   (25)

where x_0 is a (non-random) vector, Σ is a (non-random) matrix, and y is some isotropic log-concave random vector. The example of the standard normal distribution shows that the support of an isotropic log-concave distribution may be the whole R^n. However, such distributions are known to be concentrated in a ball of radius √n(1 + δ) with high probability. Specifically, ([70], Theorem 1.1) implies that for any δ ∈ (0, 1) and any isotropic log-concave random vector x in R^n,

P(‖x‖ > (1 + δ)√n) ≤ c exp(−c'δ³√n),   (26)

where c, c' > 0 are some absolute constants. Note that we have √n but not n in the exponent, and this cannot be improved without requiring extra conditions on the distribution. We say that the density ρ is strongly log-concave if − log ρ(x) − c_0‖x‖² is convex for some c_0 > 0; for any δ ∈ (0, 1) and any isotropic strongly log-concave random vector x in R^n, the bound (26) improves to

P(‖x‖ > (1 + δ)√n) ≤ c exp(−c'δ²n).   (27)

Fix some δ > 0 and an infinite sequence d = (d_1, d_2, . . .). We call a log-concave distribution (δ, d)-admissible if the set x_0 + Σ B(0, (1 + δ)√n) is contained in some ellipsoid S_{x_0,E,A} (13) with a_i ≤ d_i, where Σ and x_0 are defined in (25) and B(0, (1 + δ)√n) is the ball with centre 0 and radius (1 + δ)√n. Then (26) and (27) imply that x ∈ S_{x_0,E,A} with high probability. In combination with Proposition 2, this implies the following results.
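Whitening to the isotropic position (24)-(25) is a standard linear transformation; a minimal sketch (using the sample mean and sample covariance in place of the true ones, which is an assumption for finite samples, and a Gaussian as the log-concave example):

```python
import numpy as np

rng = np.random.default_rng(7)

# Sample from a non-isotropic log-concave distribution (a shifted Gaussian here)
n, m = 5, 200_000
A = rng.standard_normal((n, n))
data = rng.standard_normal((m, n)) @ A.T + 3.0   # mean 3, covariance A A^T

# Whitening: y = Sigma^{-1/2} (x - x0), so that E[y] = 0 and Cov[y] = I
x0 = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # symmetric inverse square root
y = (data - x0) @ W.T

print(np.round(np.cov(y, rowvar=False), 3))      # approximately the identity matrix
```

After this transformation the data are in the isotropic position, which is exactly the normalisation assumed before the (δ, d)-admissibility condition is applied.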
Proposition 3. Let x be selected from a (δ, d)-admissible log-concave distribution and let z be selected from a mixture of N (δ, d)-admissible log-concave distributions. Then

P(α(x, x) > (x, z)) ≥ 1 − N ρ_max V_n(B_n) a exp(−bn) − 2c exp(−c'δ³√n),

for some constants a, b, c, c' that do not depend on the dimensionality.
Proof. It follows from (26) and the (δ, d)-admissibility of the distribution from which x has been selected that

P(x ∉ S_0) ≤ c exp(−c'δ³√n)

for some ellipsoid S_0 (13). Similarly, since z is selected from a mixture of N (δ, d)-admissible log-concave distributions, we have

P(z ∉ ∪_{i=1}^N S_i) ≤ c exp(−c'δ³√n)

for some ellipsoids S_1, . . . , S_N (13). Let E be the event that α(x, x) > (x, z). If E does not happen, then either (i) x ∉ S_0, or (ii) z ∉ ∪_{i=1}^N S_i, or (iii) x ∈ S_0 and z ∈ ∪_{i=1}^N S_i, but E still does not happen. The probabilities of (i) and (ii) are at most c exp(−c'δ³√n), while the probability of (iii) is at most N ρ_max V_n(B_n) a exp(−bn) by Proposition 2.
Exactly the same proof in combination with (27) implies the following version for strongly log-concave distributions.
Proposition 4. Under the conditions of Proposition 3, with the log-concave distributions replaced by strongly log-concave ones,

P(α(x, x) > (x, z)) ≥ 1 − N ρ_max V_n(B_n) a exp(−bn) − 2c exp(−c'δ²n),

for some constants a, b, c, c' that do not depend on the dimensionality.

The Superstatistic form of the Prototype Stochastic Separation Theorem
Theorem 1 evaluates the probability that a random point x ∈ B_n with bounded probability density is α-Fisher separable from an exponentially large finite set Y and demonstrates that, under some natural conditions, the probability of failure tends to zero as the dimension n tends to ∞. This phenomenon has a simple explanation: for any y ∈ B_n, the set of such x ∈ B_n that x is not α-Fisher separable from y is a ball with radius ‖y‖/(2α) < 1, and the fraction of this volume in B_n decays as (‖y‖/(2α))^n. These arguments can be generalised, with some effort, to the situation when we consider an elliptic granule instead of a random point x and an arbitrary probability distribution instead of a finite set Y. Instead of the estimate of the probability of a point x falling into the ball of excluded volume (7), we use the following proposition for separability of a random point of a granule S_x with a random centre x from an arbitrary point z ∈ B_n.
Proposition 5. Let S_x be a random granule with centre x selected from a (δ, d)-admissible log-concave distribution, and let z ∈ B_n be an arbitrary (non-random) point. Then

P(α(x, x) > (x, z)) ≥ 1 − a exp(−bn) − c exp(−c'δ³√n),

for some constants a, b, c, c' that do not depend on the dimensionality.
We remark that because Propositions 5-7 hold for an arbitrary (non-random) point z ∈ B_n, they also hold for a point selected from any probability distribution within B_n, and, in particular, for a point z selected uniformly at random from any set D ⊂ B_n.

Compact Embedding of Patterns and Hierarchical Universe
Stochastic separation theorems tell us that, in high dimensions, randomly selected data points (or clusters of data) can be separated by simple and explicit functionals from an existing dataset with high probability, as long as the dataset is not too large (or the number of data clusters is not too large). The number of data points (or clusters) allowed by the conditions of these theorems is bounded from above by an exponential function of the dimension. Such theorems for data points (see, for example, Theorem 1 and [13]) or clusters (Theorems 2-4) are valid for broad families of probability distributions. Explicit estimates of the probability of violating the separability property were found.
There is a circumstance that can devalue these (and many other) probabilistic results in high dimension. We almost never know the probability distribution of multivariate data beyond strong simplifying assumptions. In the postclassical world, observations cannot really help because we never have enough data to restore the probability density (again, strong simplifications like independence assumptions or dimensionality reduction can help, but this is not the general multidimensional case). A radical point of view is possible, according to which there is no such thing as a general multivariate probability distribution, since it is unobservable.
In the infinite-dimensional limit the situation can look simpler: instead of finite but small probabilities that decrease and tend to zero with increasing dimension (as in (21) and (23)), some statements become generic and hold 'almost always'. Such limits for concentrations on spheres and their equators were discussed by Lévy [71] as an important part of the measure concentration effects. In physics, this limit corresponds to the so-called thermodynamic limit of statistical mechanics [72,73]. In the infinite-dimensional limit, many statements about high or low probabilities transform into 0-1 laws: something happens almost always or almost never. The original Kolmogorov 0-1 law states, roughly speaking, that an event which depends on an infinite collection of independent random variables but is independent of any finite subset of these variables has probability zero or one (for the precise formulation we refer to the monograph [74]). The infinite-dimensional 0-1 asymptotics might bring more light and be more transparent than the probabilistic formulas.
From the infinite-dimensional point of view, the 'elliptic granule' (13) with a decaying sequence of diameters is a compact set. The specific elliptic shape used in Theorem 3 is not very important, and many generalisations are possible for granules with a decaying sequence of diameters. The main idea, from this point of view, is the compact embedding of specific patterns into the general population of data. This point of view was influenced by the hierarchy of Sobolev embedding theorems, where the balls of the embedded spaces appear to be compact in the image space.
The finite-dimensional hypothesis about the granular structure of datasets can be transformed into the infinite-dimensional view about compact embedding: the patterns correspond to compact subsets of the data space. Moreover, this hypothesis can be extended to a hypothesis about hierarchical structure (Figure 5): the data that correspond to a pattern also have an internal granular structure. To reveal this structure, we can apply centralisation and whitening to a granule. After that, the granule will transform into a new unit ball, the external set (the former 'Universe') will typically become infinitely far away ('invisible'), and the internal structure can be sought in the form of a collection of compact granules in the new topology. It should be stressed that this vision is not a theorem. It is proposed instead of the typical dominance of smooth or even uniform distributions that populate theoretical studies in machine learning. On the other hand, hierarchical structure has been observed in various data analytics exercises: if there exists a natural semantic structure, then we expect the data to have the corresponding cluster structure. Moreover, various preprocessing operations make this structure more visible (see, for example, the discussion of preprocessing in Appendix A).
The compact embedding idea was recently explicitly used in data analysis (see, for example, [75][76][77]). The infinite-dimensional representation and the compact embedding hypothesis shed light on the widely discussed phenomenon of vulnerability of AI decisions in the high-dimensional world. According to recent research, such vulnerability seems to be a generic property of various formalisations of learning and attack processes in high-dimensional systems [78][79][80].
Let Q be an infinite-dimensional Banach space. The patterns, representations of a pattern, their images in an observer system, etc., are modelled below by compact subsets of Q.
Theorem 5 (Theorem of high-dimensional vulnerability). Consider two compact sets K_0, K_1 ⊂ Q. For almost every y ∈ Q there exists a continuous linear functional l on Q, l ∈ Q*, such that

l(x) < l(z + y) for all x ∈ K_0, z ∈ K_1.   (28)

In particular, for every ε > 0 there exist y ∈ Q and a continuous linear functional l ∈ Q* such that ‖y‖ < ε and (28) holds. If (28) holds, then K_0 ∩ (K_1 + y) = ∅. The perturbation y takes K_1 out of the intersection with K_0. Moreover, linear separation (28) of K_0 and the perturbed K_1 (i.e., K_1 + y) is possible almost always (for almost any perturbation).
The definition of 'almost always' is clarified in detail in Appendix B. The set of exclusions, i.e., the perturbations that do not satisfy (28) in Theorem 5, is completely thin in the following sense (Definition A1). A set Y ⊂ Q is completely thin if, for any compact space K, the set of continuous maps Ψ : K → Q with non-empty intersection Ψ(K) ∩ Y ≠ ∅ is a set of the first Baire category in the Banach space C(K, Q) of continuous maps K → Q equipped with the maximum norm.
Proof of Theorem 5. Let co(V) be the closed convex hull of a set V ⊂ Q. The following sets are convex compacts in Q: co(K_0), co(K_1), and co(K_0) − co(K_1). Let

y ∉ co(K_0) − co(K_1).   (29)

Then the set co(K_1) + y − co(K_0) does not contain zero. It is a convex compact set. According to the Hahn-Banach separation theorem [81], there exists a continuous linear functional l ∈ Q* that separates the convex compact co(K_1) + y − co(K_0) from 0. The same functional separates its subset, K_1 + y − K_0, from zero, as required.
The set of exclusions, co(K_0) − co(K_1) (see (29)), is a compact convex set in Q. According to Riesz's theorem, it is nowhere dense in Q [81]. Moreover, for any compact space K, the set of continuous maps Ψ : K → Q with non-empty intersection Ψ(K) ∩ Y ≠ ∅ is a nowhere dense subset of the Banach space C(K, Q) of continuous maps K → Q equipped with the maximum norm.
Indeed, let Ψ : K → Q be a continuous map. The set Ψ(K) is compact. Therefore, as proven above, an arbitrarily small perturbation y exists that takes Ψ(K) out of the intersection with Y: (Ψ(K) + y) ∩ Y = ∅. The minimal distance between Ψ(K) + y and Y exists and is positive because of the compactness of Ψ(K) + y and Y. Therefore, Ψ'(K) ∩ Y = ∅ for all Ψ' from a ball of maps in C(K, Q) centred at Ψ + y with radius smaller than this minimal distance. This proves that the set of continuous maps Ψ : K → Q with non-empty intersection Ψ(K) ∩ Y ≠ ∅ is a nowhere dense subset of C(K, Q). Thus, the set of exclusions is completely thin.
The following corollary is simple, but it may seem counterintuitive.

Corollary 4. A compact set K_0 ⊂ Q can be separated from a countable set of compacts K_i ⊂ Q by a single, arbitrarily small perturbation y (‖y‖ < ε for an arbitrary ε > 0): for each i there exists l_i ∈ Q* with l_i(x) < l_i(z + y) for all x ∈ K_0, z ∈ K_i. Almost all perturbations y ∈ Q provide this separation, and the set of exclusions is completely thin.
Proof. First, refer to Theorem 5 (for separability of K_0 from one K_i). Then note that a countable union of completely thin sets of exclusions is completely thin, whereas the whole Q is not (according to the Baire category theorem, Q is not a set of the first category).
Separability theorems for compactly embedded patterns might explain why the vulnerability to adversarial perturbations and stealth attacks is typical for high-dimensional AI systems based on data [78,79]. Two properties are important simultaneously: high dimensionality and compactness of patterns.

Structure of Multi-Correctors
In this section, we present the construction of error correctors for multidimensional AI systems operating in a multidimensional world. It combines a set of elementary correctors (Figure 2) and a dispatcher that distributes the tasks between them. The population of possible errors is presented as a collection of clusters. Each elementary corrector works with its own cluster of situations with a high risk of error. It includes a binary classifier that separates that cluster from the rest of the situations. The dispatcher is based on an unsupervised classifier that performs cluster analysis of errors, selects the most appropriate cluster for each operating situation, transmits the signals for analysis to the corresponding elementary corrector, and requests the correction decision from it (Figure 6).

Figure 6. Multi-corrector: a system of elementary correctors, controlled by the dispatcher, for reversible correction of legacy AI systems. The dispatcher receives signals from the AI system to be corrected (input signals of the AI system, internal signals generated in the decision-making process, and output signals) and selects from the elementary correctors the one that most corresponds to the situation and will process this situation to resolve the issue of correction. The decision rule, on the basis of which the dispatcher distributes situations between elementary correctors, is formed as a result of a cluster analysis of situations with diagnosed errors. Each elementary corrector processes situations from one cluster. When new errors are detected, the dispatcher modifies the definition of clusters. Cluster models are prepared and modified using online data-stream algorithms.
In brief, the operation of multi-correctors (Figure 6) can be described as follows:
• The correction system is organised as a set of elementary correctors, controlled by the dispatcher;
• Each elementary corrector 'owns' a certain class of errors and includes a binary classifier that separates situations with a high risk of the errors it owns from other situations;
• For each elementary corrector, a modified rule is set for the operation of the corrected AI system in a situation with a high risk of error diagnosed by the classifier of this corrector;
• The input to the corrector is the complete vector of signals, consisting of the input, internal, and output signals of the corrected Artificial Intelligence system (as well as any other available attributes of the situation).
Flowcharts of these operations are presented in Appendix C. Multi-correctors satisfy the following requirements:
1. Simplicity of construction;
2. Correction should not damage the existing skills of the system;
3. Speed (fast non-iterative learning);
4. Correction of new errors without destroying previous corrections.
For the implementation of this structure, the construction of classifiers for elementary correctors and the online algorithms for clustering should be specified. For elementary correctors, many choices are possible, for example:
• Fisher's linear discriminant is simple, robust, and proven to be applicable in high-dimensional data analysis [13,32];
• Kernel versions of non-iterative linear discriminants extend the area of application of the proposed systems; their separability properties were quantified and tested [36];
• Decision trees built of the aforementioned elementary discriminants with bounded depth. These algorithms require a small (bounded) number of iterations.
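As an illustration, a minimal non-iterative Fisher discriminant for an elementary corrector might look as follows (a sketch in Python with NumPy; the regularisation term and the midpoint threshold rule are illustrative choices, not prescribed by the text):

```python
import numpy as np

def fisher_discriminant(X, Y):
    """Non-iterative Fisher linear discriminant separating the error
    cluster X from the 'correct' samples Y (rows are data points).
    Returns the normalised direction w and a threshold theta.
    The small regularisation and midpoint threshold are illustrative."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    # Pooled within-class covariance, regularised for robustness.
    Sw = np.cov(X, rowvar=False) + np.cov(Y, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mx - my)
    w /= np.linalg.norm(w)
    theta = 0.5 * (w @ mx + w @ my)  # midpoint between projected means
    return w, theta

def flags_error(x, w, theta):
    """Elementary corrector's classifier: True = high risk of error."""
    return float(x @ w) > theta
```

Training requires only means, covariances, and one linear solve, which is why such correctors are fast and non-iterative.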
The population of clustering algorithms is huge [82]. The first choice for testing of multi-correctors [83] was partitioning around centroids by the k-means algorithm. The closest candidates for future development are multi-centroid algorithms that present clusters by networks of centroids (see, for example, [84]). This approach to clustering meets the idea of compact embedding, when the network of centres corresponds to the ε-net approximating the compact.

Datasets
In what follows, our use cases will revolve around a standard problem of supervised multi-class classification. In order to be specific and to ensure the reproducibility of our observations and results, we work with the well-known and widely available CIFAR-10 dataset [85,86]. The CIFAR-10 dataset is a collection of 32 × 32 colour images split across 10 classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and 'truck', with 'airplane' being the label of Class 1 and 'truck' the label of Class 10. The original CIFAR-10 dataset is split into two subsets: a training set containing 5000 images per class (50,000 images in total) and a testing set with 1000 images per class (10,000 images in total).

Tasks and Approach
We focus on two fundamental tasks for a given legacy classifier:
• (Task 1) devise an algorithm to learn a new class without catastrophic forgetting and retraining, and
• (Task 2) develop an algorithm to predict classification errors of the legacy classifier.
Let us now specify these tasks in more detail. As a legacy classifier we used a deep convolutional neural network whose structure is shown in Table 4. The network's training set comprised 45,000 images corresponding to Classes 1-9 (5000 images per class), and the test set comprised 9000 images from the CIFAR-10 testing set (1000 images per class). No data augmentation was invoked as a part of the training process. The network was trained by stochastic gradient descent with momentum; the momentum parameter was set to 0.9 and the mini-batches were of size 128. Overall, we trained the network over 70 epochs executed in 7 training episodes of 10-epoch training, and the learning rate was equal to 0.1/(1 + 0.001k), where k is the index of a training instance (a mini-batch) within a training episode.
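The learning-rate schedule above can be written down directly (a trivial sketch; the parameter names are ours):

```python
def learning_rate(k, lr0=0.1, decay=0.001):
    """Learning-rate schedule used for the legacy network:
    lr = 0.1 / (1 + 0.001 * k), where k is the index of a mini-batch
    within a 10-epoch training episode. The schedule resets with each
    new training episode."""
    return lr0 / (1.0 + decay * k)
```

Because k resets every episode, the rate decays within an episode and jumps back to 0.1 at the start of the next one.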
The network's accuracy, expressed as the proportion of correct classifications, was 0.84 and 0.73 on the training and testing sets, respectively (rounded to the second decimal place). The network was trained in MATLAB R2021a. Each 10-epoch training episode took approximately 1.5 h to complete on an HP Zbook 15 G3 laptop with a Core i7-6820HQ CPU, 16 GB of RAM, and an Nvidia Quadro 1000M GPU.
Task 1 (learning a new class). Our first task was to equip the trained network with a capability to learn a new class without expensive retraining. In order to achieve this aim we adopted the approach and algorithms presented in [25,83]. According to this approach, for every input image u we generated its latent representation x, the composition of which is shown in Table 5. In our experiments we kept all dropout layers active after training. This was implemented by using the "forward" method instead of the "predict" method when accessing feature vectors of relevant layers in the trained network. The procedure enabled us to simulate an environment in which AI correctors operate on data that are subjected to random perturbations.
This process constituted our legacy AI system. Using these latent representations of images, we formed two sets, X and Y. The set X contained latent representations of the new class (Class 10, 'trucks') from the CIFAR-10 training set (5000 images), and the set Y contained latent representations of all other images in the CIFAR-10 training set (45,000 images). These sets were then used to construct a multi-corrector in accordance with the algorithm presented in [83] (Algorithm 1).
The integration logic of the multi-corrector into the final system was as follows [83]: since the set Y corresponds to data samples from previously learned classes, a positive response of the multi-corrector (the condition (w, x_w) > θ holds) 'flags' that the data point is to be associated with the classes that have already been learned (Classes 1-9). The absence of a positive response indicates that the data point is to be associated with the new class (Class 10).
Task 2 (predicting errors of a trained legacy classifier). In addition to learning a new class without retraining, we considered the problem of predicting correct performance of a trained legacy classifier. In this setting, the set X contained the vectors corresponding to incorrect classifications on the CIFAR-10 training set, and the set Y contained latent representations of images from the CIFAR-10 training set that have been correctly classified. Similar to the previous task, the predictor of the classifier's errors was constructed in accordance with Algorithms 1 and 2.

Algorithm 1 (construction of the multi-corrector):
1. Determine the centroid x̄ of the set X. Generate two sets: X_c, the centralised set X, and Y*, the set obtained from Y by subtracting x̄ from each of its elements.
2. Construct principal components for the centralised set X_c.
3. Using the Kaiser rule, the broken-stick rule, the conditioning rule, or otherwise, select m ≤ n principal components h_1, ..., h_m corresponding to the first largest eigenvalues λ_1 ≥ ... ≥ λ_m > 0 of the covariance matrix of the set X_c, and project the centralised set X_c as well as Y* onto these vectors. The operation returns the sets X_r and Y*_r, respectively.
4. Use λ_1, ..., λ_m to construct the whitening transformation for the set X_r. Apply the whitening transformation to the sets X_r and Y*_r. This returns the sets X_w and Y*_w.
5. Cluster the set Y*_w into k clusters Y*_{w,1}, ..., Y*_{w,k} (using, e.g., the k-means algorithm or otherwise). Let ȳ_1, ..., ȳ_k be their corresponding centroids.
6. For each pair (X_w, Y*_{w,i}), i = 1, ..., k, construct a (normalised) Fisher discriminant w_i. If multiple thresholds θ_i are given, then an element z is associated with the set Y*_{w,i} if (w_i, z) > θ_i and with the set X_w if (w_i, z) ≤ θ_i.

Algorithm 2 (deployment of the multi-corrector). Input: a vector x of signals from the legacy AI system.
1. Compute the latent representation x_w of x by applying the centring, projection, and whitening transformations constructed in Algorithm 1.
2. Select the cluster whose centroid ȳ_i is the closest to x_w.
3. Associate the vector x with the set Y if (w, x_w) > θ and with the set X otherwise. If multiple thresholds are given, then associate the vector x with the set Y if (w_i, x_w) > θ_i and with the set X otherwise.
Output: a label attributed to the vector x.
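Algorithms 1 and 2 can be sketched end-to-end as follows (Python/NumPy; the minimal k-means routine, the regularisation constant, and the midpoint thresholds are illustrative stand-ins for the choices discussed in the text):

```python
import numpy as np

def kmeans(Z, k, iters=50):
    """Minimal k-means with farthest-point seeding (a stand-in for the
    clustering algorithms discussed in the text)."""
    C = [Z[0]]
    for _ in range(k - 1):
        d2 = np.min([((Z - c) ** 2).sum(axis=1) for c in C], axis=0)
        C.append(Z[np.argmax(d2)])          # seed at the farthest point
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        C = np.array([Z[lab == i].mean(axis=0) if np.any(lab == i) else C[i]
                      for i in range(k)])
    return C, lab

def train_multicorrector(X, Y, m, k):
    """Sketch of Algorithm 1: centre on X, project onto m principal
    components of X, whiten, cluster Y, and build one Fisher-type
    discriminant per cluster. Helper names are illustrative."""
    xbar = X.mean(axis=0)
    Xc, Ys = X - xbar, Y - xbar
    lam, H = np.linalg.eigh(np.cov(Xc, rowvar=False))   # PCA of X_c
    lam, H = lam[::-1][:m], H[:, ::-1][:, :m]
    whiten = H / np.sqrt(lam)               # project + whiten in one step
    Xw, Yw = Xc @ whiten, Ys @ whiten
    C, lab = kmeans(Yw, k)
    discr = []
    for i in range(k):
        Yi = Yw[lab == i]
        S = np.cov(Xw, rowvar=False) + np.cov(Yi, rowvar=False) + 1e-6 * np.eye(m)
        w = np.linalg.solve(S, Yi.mean(axis=0) - Xw.mean(axis=0))
        w /= np.linalg.norm(w)
        theta = 0.5 * (w @ Yi.mean(axis=0) + w @ Xw.mean(axis=0))
        discr.append((w, theta))
    return xbar, whiten, C, discr

def correct(x, model):
    """Sketch of Algorithm 2: the dispatcher picks the nearest cluster;
    its discriminant decides (True -> associate x with Y)."""
    xbar, whiten, C, discr = model
    xw = (x - xbar) @ whiten
    i = np.argmin(((C - xw) ** 2).sum(axis=-1))
    w, theta = discr[i]
    return float(xw @ w) > theta
```

The construction again involves only linear algebra plus one clustering pass, in line with the non-iterative design requirement for correctors.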
Testing protocols. Performance of the algorithms was assessed on the CIFAR-10 testing set. For Task 1, we tested how well our new system (the legacy network shown in Table 4 combined with the multi-corrector constructed by Algorithms 1 and 2) performs on images from the CIFAR-10 testing set. For Task 2, we assessed how well the multi-corrector, trained on the CIFAR-10 training set, predicts errors of the legacy network for images of the 9 classes (Classes 1-9) taken from the CIFAR-10 testing set.

Results
Task 1 (learning a new class). Performance of the multi-corrector in the task of learning a new class is illustrated in Figure 7. In these experiments, we projected onto the first 20 principal components. The rationale for this choice was that, for these components, the ratio of the largest eigenvalue to the eigenvalue associated with each retained principal component is always smaller than 10. The figure shows ROC curves in which true positives are images from the new class that are identified as the new class, and false positives are images from already learned classes (Classes 1-9) that are identified as the new class (Class 10) by the combined system. As we can see from Figure 7, performance of the system saturates at about 10 clusters, which indicates a peculiar granular structure of the data universe in this example: clusters are apparently not equal in terms of their impact on the overall performance, and the benefit of using more clusters decays rapidly as the number of clusters grows.
We note that the system's performance and generalisation depend on both the ambient dimension (the number of principal components used) and the number of clusters. This phenomenon is illustrated in Figure 8. When the number of dimensions increases (top row in Figure 8), the gap between a single-cluster corrector and a multi-cluster corrector narrows. Yet, as can be observed from this experiment, the system generalises well.
When the number of clusters increases from 10 to 300, the system overfits. This is not surprising: given the size of our training set (50,000 images to learn from), splitting the data into 300 clusters implies that each 100-dimensional discriminant in Algorithm 1 is constructed, on average, from a mere 170 samples. The lack of data to learn from and the 'diffusion' and shattering of clusters in high dimension could be contributors to the instability. Nevertheless, as the right plot shows, the system still generalises at a level similar to the 10-cluster scenario. When the ambient dimension increases further, we observe a dramatic performance collapse for the multi-corrector constructed by Algorithms 1 and 2. Now 300-dimensional discriminant vectors are built from, on average, 170 points. The procedure is inherently unstable, and in this sense such results are expected in this limit.
Task 2 (predicting errors). A very similar picture occurs in the task of predicting errors of legacy classifiers. For our specific case, performance of the 10-cluster multi-corrector with projection onto 20 principal components is shown in Figure 9. In this task, true positives are errors of the original classifier which have been correctly identified as errors by the corrector. False positives are data correctly classified by the original deep neural network but which nevertheless have been labelled as errors by the corrector. According to Figure 9, the multi-corrector model generalises well and delivers circa 70% specificity and sensitivity on the test set.
Figure 9. Prediction of errors. Solid curves marked by green triangles (on the left) and red circles (on the right) correspond to the 10-cluster multi-corrector. Solid curves marked by squares (blue on the left and black on the right) are produced by a single-cluster elementary corrector. Dashed lines with the same marks show performance of the same system but constructed on data sets in the reduced feature space formed by attributes 1-137 (see Table 5).
Another interesting phenomenon illustrated by Figure 9 is the apparent importance of how the information from the legacy AI model is aggregated into correcting cascades. Dashed lines in Figure 9 show what happens if latent representations are formed by signals taken from layers 26 and 19 only. In this case the impact of clustering becomes less pronounced, suggesting the importance of feature selection for optimal performance.
Computational efficiency. The computational cost of constructing multi-correctors is remarkably small. For example, learning a new class with a 10-cluster multi-corrector and 20 principal components took 1.32 s on the same hardware used to train the original legacy classifier. When the number of clusters and the dimension increase to 300 and 300, respectively, the time needed to construct the multi-corrector was 37.7 s. These figures show that clustered universes and multi-correctors are not only feasible in applications but also extremely efficient computationally. We do not wish to suggest that they are a replacement for deeper retraining. Yet, as we see from these experiments, they can be particularly efficient in the tasks of incremental learning (learning an additional class in a multi-class problem) if implemented appropriately.

Dimensionality and Multi-Corrector Performance
The CIFAR-10 training set contains 5000 images per class, and the testing set contains 1000 images per class. The total number of data samples is 60,000. The dimension of the input space is 3072. The dimension of the space of latent representations is 393. The shortened feature space with coordinates x_1-x_137 is also used. Three versions of PCA dimensionality reduction were tested, with 20, 100, and 300 principal components. We can see that the number of samples significantly exceeds all the dimensions (60,000 versus 20, 100, 137, 300, and 393). The question arises: is this the classical or already the postclassical zone of data dimensionality (see Figure 3)?
Compare the number of samples to the critical size |Y| of the dataset Y that allows one to separate a random point x from the set Y by Fisher's discriminant (Definition 1) with threshold α = 0.8 and probability p = 1 − δ = 0.99. Theorem 1 gives this estimate: if x is uniformly distributed in a ball, then Theorem 1 produces Table 6. Table 6 assures us that for dimensions 100, 137, 300, and 393 the CIFAR-10 dataset lies very deeply in the postclassical area. The only question arises for dimension 20: Theorem 1 gives that for this dimension the postclassical area ends at |Y| > 121. Nevertheless, the multi-correctors work well in this dimension. The reason for this efficiency could be the fine-grained cluster structure of the dataset. Theorem 1 is true for any dataset Y without any hypothesis about the data distribution, and it estimates the number of points |Y|. On the contrary, according to Theorems 2 and 3, for a fine-grained structure the number of granules should be counted and not the number of points. Stochastic separation theorems are needed to evaluate the areas of applicability of machine learning algorithms in the multidimensional world of postclassical data. They also provide ideas for developing appropriate algorithms. The first stochastic separation theorems led to elementary correctors (Figure 2) [30,31]. The theorems for data with fine-grained distributions are related to the multi-corrector algorithm. Of course, the detailed structure of multi-correctors may vary, and in this work we considered the first and basic version.
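The dependence of Fisher separability on dimension can be probed empirically. The sketch below estimates the probability that a random point x, uniformly distributed in the unit ball, is Fisher-separable with threshold α from a random set Y, assuming the separability condition (x, y) ≤ α(x, x) for all y ∈ Y (our reading of Definition 1; the sampling procedure and trial counts are our choices):

```python
import numpy as np

def ball_sample(n, d, rng):
    """n points uniformly distributed in the unit ball in R^d:
    a random direction scaled by U^(1/d)."""
    g = rng.normal(size=(n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * rng.uniform(size=(n, 1)) ** (1.0 / d)

def separability_rate(d, n_Y, alpha=0.8, trials=300, seed=0):
    """Empirical probability that a random x is Fisher-separable from a
    random set Y of n_Y points: (x, y) <= alpha * (x, x) for all y."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        Y = ball_sample(n_Y, d, rng)
        x = ball_sample(1, d, rng)[0]
        ok += np.all(Y @ x <= alpha * (x @ x))
    return ok / trials
```

For |Y| = 121 and α = 0.8, the separability rate is near zero in low dimension and approaches one as the dimension grows, consistent with the boundary role of dimension 20 discussed above.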

Conclusions
In this work, we used the modified Donoho's definition of postclassical data (Section 2). Postclassical data are defined by the relation (2) between the intrinsic dimensionality of the data, dim(DataSet), and the logarithm of the number of data samples: dim(DataSet) ≳ log N. In the postclassical area (Figure 3), the classical statistical learning approaches may become useless and the phenomena of the curse and blessing of dimensionality become important. Among these phenomena are quasiorthogonality [61-63], systematically used in our work, and the stochastic separation theorems [13,31].
Distributions of data in real-life tasks can be far from any regular distribution. One of the typical phenomena is a rich cluster structure. Multi-clustering and the recently described hubness phenomena are important in high-dimensional data analysis, and it is impossible to analyse real-life datasets without taking them into account [66,87-89]. We used granular distributions as models for multi-clustered data. Three models of clusters are proposed: spherical clusters, elliptic clusters, and the superstatistical model, where clusters are represented by the peaks of the distribution density and the whole distribution is a random mixture of such peaks.
The hypothesis of compactness of granules has different forms for these cluster models. For spherical clusters, compactness means a relatively small diameter of the granules compared to the data standard deviation. This approach is close to the Duin measure of compactness [58]. For the elliptic granules, the diameter can be large, but the sequence of the main diameters should decay. This idea is borrowed from functional analysis, the theory of Kolmogorov n-widths [90-92] in its simplest form.
In Section 3, we formulated and proved stochastic separation theorems for fine-grained distributions. Instead of separation of random points, we considered separation of clusters. Multi-clustered datasets demonstrate the curse and blessing of dimensionality effects in smaller dimensions than classical distributions with the same number of data points, because these effects depend on the number of clusters and their compactness characteristics; see Theorem 2 for spherical granules, Theorems 3 and 4 for elliptic granules, and Propositions 3-5 for granules modelled by distribution peaks of different shapes.
The probability distribution of multivariate real-life data is usually unknown, and we never have enough data to restore the probability density for postclassical data. Therefore, in Section 3.5 we developed the infinite-dimensional approach that does not use the unobservable probability distributions. For measure concentration on spheres and equators, the infinite-dimensional limit was considered by Lévy in his functional analysis book [71]. Instead of spherical or elliptic granules, just compact subsets are considered, and Theorem 5 about separability in families of compact sets explains why the vulnerability to adversarial perturbations and stealth attacks is typical for high-dimensional AI systems based on data [78,79]. Two properties are important simultaneously: high dimensionality and compactness of patterns.
The multi-corrector, a special device for the correction of AI errors in the worlds of high-dimensional multi-clustered data, is developed and tested (Section 4). It includes a family of elementary correctors managed by a dispatcher (Figure 6). The dispatcher distributes situations between elementary correctors using a classification model created in the course of cluster analysis of diagnosed errors. Each elementary corrector deals with its own cluster. Multi-correctors were tested on the CIFAR-10 database by solving two tasks: (i) learn a new class (without catastrophic forgetting and retraining) and (ii) predict classification errors. Testing was organised for different numbers of principal components involved and for different numbers of clusters. The tests demonstrate that the multi-corrector model generalises well with appropriate specificity and sensitivity on the test set. The details are presented in Figures 7-9.
Several directions of future work are now open. The main challenge is to develop a technology for creating reliable and self-correcting augmented AI ecosystems in which each AI is dressed up with a cloud of correctors. These correctors increase the reliability of AI by removing errors and, at the same time, serve as a special storage device: a memory of detected errors for further interiorisation. The correctors also enable knowledge transfer between AIs and can be used to protect their "host" AI from various attacks by repairing the effects of malicious actions. In addition, they may model attacks on AIs [78,79], opening new ways to assess the efficiency of defence measures and protocols employed by AI owners. There are also many special technical questions that require further attention and work. These include the analysis of reducibility of multidimensional data and the development of precise criteria enabling one to decide whether a given dataset is a postclassical dataset, to which our current work applies, or a classical one, to which conventional statistical learning approaches may still be applicable.

Discussion
Preprocessing in the postclassical data world (Figure 3 and Appendix A) is a challenging task because no classical statistical methods are applicable when the sample size is much smaller than the data dimensionality (the Donoho area, Section 2, (1) [28]). The correlation transformation (Appendix A.1) moves data out of the Donoho area; yet certain specific non-classical effects still persist when the sample size remains much smaller than the exponential of the data dimensionality (2). Dimensionality reduction methods should combine two sets of goals: sensible grouping and extraction of relevant features. For these purposes, combining supervised and unsupervised learning techniques is necessary. Data labels from supervised approaches add sense and context to the subsequent analysis of unlabelled data. Simple geometric methods like supervised PCA, semi-supervised PCA (Appendix A.2), and Domain Adaptation PCA (DAPCA) (Appendix A.2) may serve as prototypes of more complex and less controllable approaches. They can also be used to simplify large deep learning systems [93].
Data in the postclassical world are rarefied. At the same time, values of regular functionals on data are concentrated near their median values [44,46]. Combinations of these properties produce the 'blessing of dimensionality' [27,28,71]. The most important manifestations of these effects for applied data analysis beyond the central limit theorem are quasiorthogonality [61-63] and the stochastic separation theorems [13,31]. These results give the theoretical background for the creation of intellectual devices of a new type: correctors of AI systems. In this paper, we presented a new family of stochastic separation theorems for fine-grained data distributions with different geometries of clusters (Section 3). These results enable the development of multi-correctors for multidimensional AI with a granular distribution of errors. On real data, such correctors showed better performance than simple correctors.
Various versions of multi-correctors that provide fast and reversible correction of AI errors should be supplemented by an additional special operation: interiorisation of corrections. The accumulation of many corrections will, step by step, spend the blessing-of-dimensionality resource: after implementing elementary corrections, the probability of success for new correctors may decrease. This can be considered as an accumulation of technical debt. In psychology, interiorisation is the process of making skills, attitudes, thoughts, and knowledge an integrated part of one's own being. For large legacy AI systems, interiorisation of corrections means the supervised retraining of the system. Here, the complex "legacy system + multi-corrector" acts as a supervisor and labels the data, while the system itself learns by assimilating the fast flow of generated data.
The construction of correctors with their subsequent interiorisation can be considered as a tool for solving the problem of model degradation and concept drift. An increase in the error rate is a signal of degradation of the model and of a systematic decrease in performance [55]. The nature of data changes in time due to the evolution of the system under analysis. Coping with these phenomena requires a combination of supervised, semi-supervised, and even unsupervised learning. Semi-supervised and unsupervised methods help to self-assess model degradation in preprocessing mode in real time and to modify the classification model and features before actual errors occur [94]. Error correctors provide reversible modification of AI systems without iterative retraining and can assimilate significant concept drift.
We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. In this work, we generalise this framework and these ideas to a much richer class of distributions. We introduce a new model of data: a possibly infinite-dimensional data universe with hierarchical structure, in which each data cluster has a granular internal structure, and so on. The idealised concept of the granular Hierarchical Universe (Figure 5) is intended to replace the ideal picture of a smooth unimodal distribution popular in statistical science.
The infinite-dimensional version of the theorems about separation of compact clusters and families of such clusters demonstrates the importance of the hypothesis about compact embedding of data clusters (Section 3.5). The hypothesis of image compactness has appeared in data analysis and machine learning several times in many different forms. Perhaps it was first introduced by E.M. Braverman [57]. This was a guess about the data structure in the real world. It is now widely accepted that real data are rarely i.i.d. samples from a regular distribution. Getting the right guess about the distribution of data is essential to the success of machine learning.
According to a modern deep learning textbook, "the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the 'real world' that an AI agent experiences and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about" ([15], Section 5.5.2).

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:

Appendix A.1. Correlation Transformation
Assume that the number of data points is less than the number of attributes (1). In this situation, we can decrease the dimension of the space by many simple transformations. It is possible to apply PCA and delete all the components with vanishing eigenvalues. This could be a non-optimal approach if the original dimension d is very large. It is also possible to restrict the analysis to the space spanned by the data vectors. Let the data sample be a set of N vectors x_i in R^d. One way to reduce the description is the following correlation transformation that maps the data space into the cross-correlation space:
1. Centralise the data (subtract the mean);
2. Delete coordinates with vanishing variance. (Caution: signals with small variance may be important, whereas signals with large variance may be irrelevant for the target task! This standard operation can help but can also impair the results.)
3. Standardise the data (normalise to unit standard deviation in each coordinate), or use another normalisation if this is more appropriate. (Caution: transformation to dimensionless variables is necessary, but the selection of the scale (standard deviation) affects the relative importance of the signals and can impair the results.)
4. Normalise the data vectors to unit length: x_i → x_i/‖x_i‖. (Caution: this simple normalisation is convenient but deletes one attribute, the length. If this attribute is expected to be important, then it could be reasonable to normalise by the mean value of ‖x_i‖, which gives normalisation to unit average length.)
5. Introduce coordinates in the subspace spanned by the dataset, Span{x_i}, using projections onto the x_i;
6. Each new data point y is then represented by an N-dimensional vector of inner products with coordinates (y, x_i).
After this transformation, the data matrix becomes the Gram matrix (x_i, x_j). For centralised and normalised data, these inner products can be considered as correlation coefficients. For such datasets, the number of attributes coincides with the number of data points. The next step may be PCA or another method of dimensionality reduction. These simple and routine normalisation operations can significantly affect the results of data analysis, and choosing the right option cannot be done a priori.
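The correlation transformation steps above can be sketched as follows (Python/NumPy; the handling of zero-variance coordinates follows the cautions listed in the text, and the function names are ours):

```python
import numpy as np

def correlation_transformation(X):
    """Sketch of the correlation transformation: X is an N x d data
    matrix with N < d. After centring, standardising, and normalising
    rows to unit length, the data matrix becomes the N x N Gram
    (cross-correlation) matrix G."""
    X = X - X.mean(axis=0)                  # 1. centralise
    sd = X.std(axis=0, ddof=1)
    X = X[:, sd > 0] / sd[sd > 0]           # 2-3. drop constant coords, standardise
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # 4. unit length
    G = X @ X.T                             # 5-6. Gram matrix of correlations
    return G, X

def represent(y, Xn):
    """Step 6 for a new point: its N-dimensional vector of inner
    products with the transformed data vectors (y is assumed to be
    preprocessed consistently with Xn)."""
    return Xn @ y
```

For a truly multidimensional sample, the off-diagonal entries of G are small (quasi-orthogonality), which produces the diagonal dominance mentioned below.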
However, if the dataset is truly multidimensional, then the correlation transformation can return a data matrix with strong diagonal dominance. Centralised random vectors will be almost orthogonal due to the phenomenon of quasi-orthogonality [61,63]. This effect can make the application of PCA after the correlation transformation less efficient.
There is a different approach to dealing with relatively small samples in multidimensional data spaces. In the Donoho area (see (1) and Figure 3a) we can try to produce a probabilistic generative model and then use it for generating additional data.
The zeroth approximation is the naïve Bayes model, which assumes that the attributes are independent: the probability distribution is the product of the distributions of the attribute values. In dimension d, we need to fit d one-dimensional densities, which is much easier than reconstructing the d-dimensional density in the entire data space. The naïve Bayes approximation can be augmented by accounting for strong pairwise correlations, etc. The resulting approximation may be represented in the form of a Bayesian network [95,96].
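A minimal sketch of this zeroth approximation with Gaussian marginals (the Gaussian choice and the function names are illustrative; any one-dimensional density estimator could be substituted):

```python
import numpy as np

def fit_naive_bayes_gaussian(X):
    """Naive Bayes zeroth approximation: attributes are assumed
    independent, and each is fitted by a one-dimensional Gaussian,
    so only d one-dimensional densities are estimated instead of a
    joint d-dimensional density."""
    return X.mean(axis=0), X.std(axis=0, ddof=1)

def sample_naive_bayes(mu, sd, n, seed=0):
    """Generate n synthetic samples from the product distribution,
    e.g. to augment a small dataset and leave the Donoho area."""
    rng = np.random.default_rng(seed)
    return rng.normal(mu, sd, size=(n, len(mu)))
```

Such a generative model can produce the additional samples mentioned above, although, as discussed below, the statistical status of the augmentation must be checked.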
There are many methods for generating the probability distribution from data, based on the maximum likelihood estimation married with the network representation of the distribution, like deep latent Gaussian models [97].
The physical interpretation of the log-likelihood as energy (or free energy) gave rise to many popular heuristic approaches like the Boltzmann machine or restricted Boltzmann machine [98] that create approximation of the energy.
Extensive experience has been accumulated in the use of various generative models of probability distributions. They can be used to leave the Donoho area by augmenting the dataset with additional samples generated by the model. The statistical status of such augmentation is not always clear because selection of the best model is an intractable problem, and we never have enough data and time to solve it. In large dimensions, the models are tested on a standard task: accurate imputation of missing data for samples never seen before. These tests should check whether the majority of correlations captured by the model are significant (and not spurious) and may be used to evaluate the False Discovery Rate (FDR).
A good heuristic should provide a reasonable balance between the risk of missing significant correlations and the risk of including spurious correlations. This is a typical multiple testing problem, and in the postclassical data world we cannot always be sure that we have solved this problem properly. The standard correction for multiplicity (see, for example, [99]) may result in too many false negative errors (missed correlations). However, without such corrections, any findings should be seen as hypothesis-generating and not as definitive results [100]. This difficulty can be considered as a fundamental incompleteness of the postclassical datasets.
Appendix A.2. Unsupervised, Supervised, and Semisupervised PCA
PCA remains the standard and very popular tool for dimensionality reduction and unsupervised data preprocessing. It was introduced by K. Pearson in 1901 as a tool for data approximation by straight lines and planes of best fit. Of course, minimisation of the mean square distance from a data point to its projection on a plane (i.e., the mean square error of the approximation) is equivalent to maximisation of the variance of the projections (by the Pythagorean theorem). This second formulation became the main definition of PCA in textbooks [37]. The third definition of PCA, which we will use below, is more convenient for developing various generalisations [35].
Let a data sample $x_i \in \mathbb{R}^d$ ($i = 1, \ldots, N$) be given and centralised, and let $\Pi$ be a projector of $\mathbb{R}^d$ onto a $q$-dimensional plane. The problem is to find the $q$-dimensional plane that maximises the scattering of the data projections

$$H = \frac{1}{2} \sum_{i,j=1}^{N} \|\Pi(x_i) - \Pi(x_j)\|^2. \tag{A1}$$

For projection on a straight line (1D subspace) with the normalised basis vector $e$, the scattering (A1) is

$$H = N(N-1)\,(e, Qe), \tag{A2}$$

where the coefficients of the quadratic form $(e, Qe)$ are the sample covariance coefficients $q_{lm} = \frac{1}{N-1} \sum_i x_{il} x_{im}$, and $x_{il}$ ($l = 1, \ldots, d$) are the coordinates of the data vector $x_i$. If $\{e_1, \ldots, e_q\}$ is an orthonormal basis of the $q$-dimensional plane in data space, then the maximum scattering of data projections (A1) is achieved when $e_1, \ldots, e_q$ are eigenvectors of $Q$ that correspond to the $q$ largest eigenvalues of $Q$ (taking into account possible multiplicity), $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_q$. This is exactly the standard PCA. A deep problem with using PCA in data analysis is that the major components are not necessarily the most important, or even relevant, for the target task. Users rarely need to simply explain a certain fraction of variance. Instead, they need to solve a classification, prediction, or other meaningful task. Discarding certain major principal components is a common practice in many applications. In the analysis of omics datasets in bioinformatics, the first principal components are frequently considered to be associated with technical artifacts, and their removal might improve the downstream analyses [101,102]. Sometimes even more than ten leading principal components have to be removed in order to increase the signal/noise ratio [103].
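As a minimal illustration (a sketch with illustrative names, not the authors' code), the spectral formulation of PCA above can be implemented directly; checking the scattering identity against the covariance quadratic form is a useful sanity test.

```python
import numpy as np

def pca_components(X, q):
    """Top-q principal components of a centred sample X (N x d):
    eigenvectors of the sample covariance matrix Q for the q largest
    eigenvalues, which maximise the scattering of projections (A1)."""
    N = X.shape[0]
    Q = (X.T @ X) / (N - 1)                # q_lm = (1/(N-1)) sum_i x_il x_im
    eigvals, eigvecs = np.linalg.eigh(Q)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]      # sort eigenvalues descending
    return eigvals[order][:q], eigvecs[:, order][:, :q]

# Usage: centralise the data, then project onto the first two components.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X -= X.mean(axis=0)                        # centralisation
lam, E = pca_components(X, 2)
projections = X @ E                        # N x 2 coordinates in the PCA plane
```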
The component ranking can be made more meaningful if we change the form (A1) and include additional information about the target problem in the principal component definition. The form (A1) allows many useful generalisations. Introduce a weight $W_{ij}$ for each pair of data points:

$$H = \frac{1}{2} \sum_{i,j=1}^{N} W_{ij}\, \|\Pi(x_i) - \Pi(x_j)\|^2. \tag{A3}$$

The weight $W_{ij}$ may be positive for some pairs (repulsion) or negative for some other pairs (attraction). The weight matrix is symmetric, $W_{ij} = W_{ji}$. Again, the problem of maximising $H$ leads to the diagonalisation of a symmetric matrix. Consider the projection on a 1D subspace with the normalised basis vector $e$ and define a new quadratic form $(e, Q^W e)$ with the coefficients

$$q^W_{lm} = \frac{1}{2} \sum_{i,j} W_{ij} (x_{il} - x_{jl})(x_{im} - x_{jm}). \tag{A4}$$

The maximum of $H$ (A3) on $q$-dimensional planes is achieved when the plane is spanned by $q$ eigenvectors of the matrix $Q^W = (q^W_{lm})$ that correspond to the $q$ largest eigenvalues of $Q^W$ (taking into account possible multiplicity). To prove this statement, note that the functional $H$ (A3) for a $q$-dimensional plane is the sum of the quadratic forms $(e_i, Q^W e_i)$ calculated for the vectors of any orthonormal basis of this plane. Let this basis be $\{e_1, \ldots, e_q\}$. Decompose each $e_i$ in the orthonormal basis of $Q^W$ eigenvectors and follow the classical proof for PCA.
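A sketch of the computation of $Q^W$ (function names are illustrative): expanding the pairwise sum in (A4) gives a Laplacian-style identity that avoids the explicit double loop over pairs.

```python
import numpy as np

def weighted_scatter_matrix(X, W):
    """Q^W from (A4): q^W_lm = (1/2) sum_ij W_ij (x_il - x_jl)(x_im - x_jm).
    Expanding the square gives Q^W = X^T (D - W) X with D = diag(W.sum(1)),
    so no explicit loop over pairs is needed."""
    D = np.diag(W.sum(axis=1))
    return X.T @ (D - W) @ X

def weighted_pca(X, W, q):
    """Top-q eigenvectors of Q^W: the plane maximising the weighted H (A3)."""
    eigvals, eigvecs = np.linalg.eigh(weighted_scatter_matrix(X, W))
    order = np.argsort(eigvals)[::-1]
    return eigvals[order][:q], eigvecs[:, order][:, :q]
```

With $W_{ij} \equiv 1$ and centred data, $Q^W$ reduces to $N X^T X$, so classical PCA is recovered up to a constant factor.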
There are several methods for the weight assignment:
• Classical PCA: $W_{ij} \equiv 1$.
• Supervised PCA for classification tasks [104,105]. The dataset is split into several classes $K_v$ ($v = 1, 2, \ldots, r$). Follow the strategy 'attract similar and repel dissimilar'. If $x_i$ and $x_j$ belong to the same class, then $W_{ij} = -\alpha < 0$ (attraction). If $x_i$ and $x_j$ belong to different classes, then $W_{ij} = 1$ (repulsion).
This preprocessing can substitute for several layers of a feature-extracting deep learning network [93].
• Supervised PCA for any supervised task. The dataset for a supervised task is augmented by labels (the desired outputs). There is a proximity (or a distance, if possible) between these desired outputs. The weight $W_{ij}$ is defined as a function of this proximity. The closer the desired outputs are, the smaller the weights should be. They can change sign (from classical repulsion, $W_{ij} > 0$, to attraction, $W_{ij} < 0$) or simply change the strength of repulsion.
• Semi-supervised PCA was defined for a mixture of labelled and unlabelled data [106]. The data are labelled for a classification task. For the labelled data, the weights are defined as above for supervised PCA. Inside the set of unlabelled data, the classical PCA repulsion is used.
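The class-based weight rules above can be sketched as follows (the default value of $\alpha$ and the treatment of mixed labelled/unlabelled pairs as classical repulsion are illustrative assumptions, not prescriptions from the text):

```python
import numpy as np

def supervised_pca_weights(labels, alpha=0.5):
    """Supervised PCA weights: same class -> -alpha (attraction),
    different classes -> +1 (repulsion). Diagonal entries are irrelevant,
    since x_i - x_i = 0 in the functional H."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return np.where(same, -alpha, 1.0)

def semi_supervised_pca_weights(labels, alpha=0.5):
    """Semi-supervised PCA weights: supervised rules for labelled pairs,
    classical repulsion (+1) among unlabelled points (label None);
    mixed pairs also get +1 here (an assumption)."""
    n = len(labels)
    W = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            if labels[i] is not None and labels[i] == labels[j]:
                W[i, j] = -alpha   # both labelled, same class: attraction
    return W
```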
All these modifications of PCA are formally very close. They are defined by maximisation of the functional (A3) for different distributions of weights. This maximisation is transformed into the spectral problem for a symmetric matrix $Q^W$ (see (A4) or its simple modification (A5)). The dimensionality reduction is achieved by projection of the data onto the linear span of the $q$ eigenvectors of $Q^W$ that correspond to the largest eigenvalues.
How many components to retain is a nontrivial question even for classical PCA [107]. The methods based on the evaluation of the fraction of unexplained variance or, what is the same, the relative mean square error of the approximation of the data by the projection, are popular, but we should keep in mind that this projection should not only approximate the data but also act as a filter that selects meaningful features. Therefore, the selection of components to keep depends on the problem we aim to solve, and heuristic approaches with several trials of different numbers of components may be more useful than an unambiguous formal criterion. Special attention is needed in the cases when some eigenvalues of $Q^W$ become negative. Let $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_r > 0$, while for the other eigenvalues $0 \geq \lambda_{r+1} \geq \ldots$. In this case, a further increase in the dimension of the approximating plane above $r$ does not lead to an increase in $H$ but definitely increases the quality of the data approximation. The standard practice is not to use the eigenvectors that correspond to non-positive eigenvalues [93].
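The retention rule for non-positive eigenvalues can be sketched as a small helper (an illustrative name, not a library function):

```python
import numpy as np

def retain_components(eigvals, eigvecs, q):
    """Keep at most q leading components of Q^W, discarding eigenvectors
    with non-positive eigenvalues: they cannot increase H, even though
    they would improve the pure approximation error."""
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    r = int(np.sum(eigvals > 0))               # count positive eigenvalues
    keep = min(q, r)
    return eigvals[:keep], eigvecs[:, :keep]
```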
The classical hypothesis of machine learning is the existence of a probability distribution and the sameness (even if unknown) of the distributions of the training and test sets. The problem of domain adaptation arises when the training set differs from the data that the system has to work with under operational conditions. Such situations are typical. The problem is that the new data have no known labels. We have to utilise a known labelled training set (from the 'source domain') and a new unlabelled training set (from the 'target domain'). The idea is to modify the data to make the non-labelled data as close to the labelled data as possible. This transformation should erase the difference between the data distributions of the two sets and, at the same time, not destroy the possibility of effectively solving the machine learning problem for the labelled set.
The key question in domain adaptation is the definition of the objective functional: how to measure the difference in distributions between the source domain sample and the target domain sample. The clue to the answer is given by the following idea [108]:
• Select a family of classifiers in data space;
• Choose the best classifier from this family for separating the source domain samples from the target domain samples;
• Take the error of this classifier as an objective function for maximisation (a large classification error means that the samples are indistinguishable by the selected family of classifiers).
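A toy illustration of this principle (all names are illustrative, and a nearest-centroid rule stands in for the 'family of classifiers'): the closer the classification error is to 0.5, the less distinguishable the two samples are for that family.

```python
import numpy as np

def domain_discrepancy_error(XS, XT):
    """Error of a nearest-centroid classifier that tries to separate the
    source sample XS from the target sample XT. An error near 0.5 means
    the two samples are indistinguishable for this (deliberately weak)
    classifier family; an error near 0 means they are easy to separate."""
    cS, cT = XS.mean(axis=0), XT.mean(axis=0)     # the two centroids
    Z = np.vstack([XS, XT])
    y = np.concatenate([np.zeros(len(XS)), np.ones(len(XT))])
    pred = np.linalg.norm(Z - cT, axis=1) < np.linalg.norm(Z - cS, axis=1)
    return float(np.mean(pred.astype(float) != y))

# Usage: a shifted target domain is easy to detect; an identically
# distributed one is not.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3))
err_far = domain_discrepancy_error(src, rng.normal(size=(500, 3)) + 5.0)
err_same = domain_discrepancy_error(src, rng.normal(size=(500, 3)))
```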
Ideally, there are two systems: a classifier that labels a feature vector as coming from either the source or the target, and a feature generator that learns a combination of tasks: to fool the discriminator and to ensure successful learning in the source domain. There are many attempts to implement this idea [109,110]. In particular, an effective neural network realisation trains a deep neural network to accurately classify source samples while decreasing the ability of an associated classifier, which uses the same feature set, to detect whether each example belongs to the source or the target domain [111]. The scattering objective function (A3) can combine these two targets for the learning of feature generation: success of learning in the source domain and indistinguishability of the source and target datasets.
Transfer Component Analysis (TCA) was proposed to specify attraction between the clouds of projections of labelled and unlabelled data [112]. The distance between the source and target samples was defined as the distance between the projections of their mean points, and attraction between the mean points of the labelled and unlabelled data was postulated. Let $\mu_L$ and $\mu_U$ be these mean points. Their attraction means that a new term should be added to $Q^W$ (A4):

$$q^{W,\beta}_{lm} = q^W_{lm} - \beta\,(\mu_{L,l} - \mu_{U,l})(\mu_{L,m} - \mu_{U,m}), \tag{A5}$$

where the weights $W_{ij}$ are assigned by the same rules as in semisupervised PCA, and $\beta > 0$ is the attraction coefficient between the mean points of the labelled and unlabelled data samples. Domain Adaptation PCA (DAPCA) also takes advantage of this idea of a task mix within the weighted PCA framework (A3). The classifier used is the classical kNN ($k$ nearest neighbours). Let the source dataset (of input vectors) be $X$ and the target dataset be $Y$; $X$ is split into different classes: $X = K_1 \cup \ldots \cup K_r$. Enumerate the points in $Y \cup X$. The weights are:
• If $x_i, x_j \in K_v$, then $W_{ij} = -\alpha < 0$ (source samples from one class, attraction);
• If $x_i \in K_u$, $x_j \in K_v$ ($u \neq v$), then $W_{ij} = 1$ (source samples from different classes, repulsion);
• If $x_i, x_j \in Y$, then $W_{ij} = \beta > 0$ (target samples, repulsion);
• For each target sample $x_i \in Y$, find the $k$ closest source samples in $X$ and denote this set $E_i$. For each $x_j \in E_i$, $W_{ij} = -\gamma < 0$ (the weight for connections of a target sample with the $k$ closest source samples, attraction).
The weights in this method depend on three non-negative numbers, $\alpha$, $\beta$, and $\gamma$, and on the number of nearest neighbours, $k$. Of course, the values of these constants can vary for different samples and classes, if there is sufficient reason for such a generalisation.
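The DAPCA weight rules can be sketched directly (names and default values are illustrative; points are ordered with the source block first):

```python
import numpy as np

def dapca_weights(XS, yS, XT, alpha=1.0, beta=1.0, gamma=1.0, k=3):
    """DAPCA weight matrix (a sketch, not the authors' code).
    Rows/columns are ordered [source samples; target samples]."""
    nS, nT = len(XS), len(XT)
    W = np.zeros((nS + nT, nS + nT))
    yS = np.asarray(yS)
    same = yS[:, None] == yS[None, :]
    W[:nS, :nS] = np.where(same, -alpha, 1.0)   # source: attract/repel
    W[nS:, nS:] = beta                          # target-target repulsion
    for t in range(nT):                         # attract each target sample
        d = np.linalg.norm(XS - XT[t], axis=1)  # to its k nearest sources
        for s in np.argsort(d)[:k]:
            W[nS + t, s] = W[s, nS + t] = -gamma   # keep W symmetric
    return W
```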
kNN classification can be affected by irrelevant features that create the difference between the source and target domains and should be erased in the feature selection procedure. This difficulty can be resolved by iterative DAPCA. Use the basic algorithm as the first iteration. It gives the $q$-dimensional plane of major components (spanned by eigenvectors of $Q^W$) with the orthogonal projector $\Pi_1$ onto it. For each target sample, find the $k$ nearest neighbours among the source samples in the projection onto this plane (for the definition of the $k$ nearest neighbours, use the seminorm $\|\Pi_1(x) - \Pi_1(y)\|$). Assign new $W_{ij}$ using these nearest neighbours. Find the new projector $\Pi_2$ and new nearest neighbours. Iterate. The iterations converge in a finite number of steps, because the functional $H$ (A3) increases at each step (as in $k$-means and similar splitting algorithms). Even if convergence (in high dimensions) takes too long, an early stop can produce a useful feature set. Iterative DAPCA also helps to resolve the classical distance concentration difficulty: in essentially high-dimensional distributions, the kNN search may be affected by the distance concentration phenomena, where most of the distances are close to the median value [113]. Even the use of fractional norms or quasinorms does not save the situation [114], but dimensionality reduction with deletion of the irrelevant features may help.
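The iteration can be folded into one self-contained sketch (all names, defaults, and the fixed iteration count are illustrative; a production version would monitor the growth of $H$ for convergence):

```python
import numpy as np

def iterative_dapca(XS, yS, XT, q=2, alpha=1.0, beta=1.0, gamma=1.0,
                    k=3, n_iter=5):
    """Iterative DAPCA sketch: at each step, find target-to-source nearest
    neighbours in the current projection, rebuild the weights, and re-solve
    the spectral problem for Q^W = X^T (D - W) X."""
    nS, nT = len(XS), len(XT)
    X = np.vstack([XS, XT])
    yS = np.asarray(yS)
    E = np.eye(X.shape[1])[:, :q]                 # initial projector basis
    for _ in range(n_iter):
        P = X @ E                                 # project current data
        W = np.zeros((nS + nT, nS + nT))
        same = yS[:, None] == yS[None, :]
        W[:nS, :nS] = np.where(same, -alpha, 1.0) # source-source weights
        W[nS:, nS:] = beta                        # target-target repulsion
        for t in range(nT):                       # kNN in the projection
            d = np.linalg.norm(P[:nS] - P[nS + t], axis=1)
            for s in np.argsort(d)[:k]:
                W[nS + t, s] = W[s, nS + t] = -gamma
        D = np.diag(W.sum(axis=1))
        QW = X.T @ (D - W) @ X                    # weighted scatter matrix
        vals, vecs = np.linalg.eigh(QW)
        E = vecs[:, np.argsort(vals)[::-1][:q]]   # top-q eigenvectors
    return E
```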
If the target domain is empty, then TCA, DAPCA, and iterative DAPCA degenerate to semi-supervised PCA in the source domain. If there is no source domain, then they turn into classical PCA in the target domain.
The described procedures of supervised PCA, semi-supervised PCA, TCA, DAPCA, or iterative DAPCA prepare a relevant feature space. The distribution of data in this space is expected to be far from a regular unimodal distribution. It is assumed that in this space the samples will form dense clumps with a lower data density between them.

Appendix B. 'Almost Always' in Infinite-Dimensional Spaces
As mentioned in Section 3.5, in the infinite-dimensional limit many statements about high or low probabilities transform into 0-1 laws: something happens almost always or almost never. Such limits for concentrations on spheres and their equators were discussed by Lévy [71] as an important part of the measure concentration effects. In physics, this limit corresponds to the so-called thermodynamic limit of statistical mechanics [72,73]. The original Kolmogorov 0-1 law states, roughly speaking, that an event which depends on an infinite collection of independent random variables but is independent of any finite subset of these variables has probability zero or one (for the precise formulation we refer to the monograph [74]). The infinite-dimensional 0-1 asymptotics may shed more light and be more transparent than the probabilistic formulas.
This may be surprising, but the problem is what 'almost always' means. Formally, various definitions of genericity are constructed as follows. All systems (or cases, or situations, and so on) under consideration are somehow parameterised, by sets of vectors, functions, matrices, etc. Thus, the 'space of systems' Q can be described. Then the 'meagre (or thin) sets' are introduced into Q, i.e., the sets which we shall later neglect. The union of a finite or countable number of meagre sets, as well as the intersection of any number of them, should again be a meagre set, while the whole Q is not thin. There are two traditional ways to determine thinness.
1. The sets of measure zero are negligible.
2. The sets of first Baire category are negligible.
The first definition requires the existence of a special measure such that all relevant distributions are expected to be absolutely continuous with respect to it. In Theorem 1, for example, we assumed that the (yet unknown) probability distribution has a density and is absolutely continuous with respect to the Lebesgue measure. Moreover, we used a version of the 'Smeared (or Smoothed) Absolute Continuity' (SmAC) condition (6) [9,32], which means that sets of relatively small volume cannot have high probability, whereas absolute continuity means that sets of zero volume have probability zero. Unfortunately, in infinite-dimensional spaces we usually do not have such a sensible measure. This is very easy to understand if we look at the volumes of balls in a Hilbert space with an orthonormal basis {e_i}. Assume that the measure of a ball is a function of its radius and that the measure of a ball of radius R is finite. Then the balls of radius R/4 have zero measure, because infinitely many disjoint balls of radius R/4 with centres at the points Re_i/2 can be packed into the ball of radius R. Therefore, the ball of radius R itself has zero measure, because it can be covered by a countable set of balls of radius R/4. Hence, all balls have either zero or infinite measure.
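The packing argument can be written out as a short derivation (a sketch under the stated homogeneity assumption that the measure of a ball depends only on its radius):

```latex
% Assume a measure \mu on an infinite-dimensional Hilbert space with
% \mu(B_r) = f(r) for every ball of radius r, and suppose f(R) < \infty.
\begin{align*}
&\text{The balls } B_{R/4}\!\left(\tfrac{R}{2} e_i\right),\ i = 1, 2, \ldots,
 \text{ are pairwise disjoint, since for } i \neq j\\
&\qquad \left\|\tfrac{R}{2} e_i - \tfrac{R}{2} e_j\right\|
 = \tfrac{R}{\sqrt{2}} > 2 \cdot \tfrac{R}{4},\\
&\text{and they all lie in } B_R(0) \text{ because } \tfrac{R}{2} + \tfrac{R}{4} < R.
 \text{ Hence } n\, f\!\left(\tfrac{R}{4}\right) \le f(R) \text{ for all } n,\\
&\text{so } f\!\left(\tfrac{R}{4}\right) = 0. \text{ By separability, } B_R(0)
 \text{ is covered by countably many balls of radius } \tfrac{R}{4},\\
&\text{so } f(R) = 0 \text{ as well: every ball has measure } 0 \text{ or } \infty.
\end{align*}
```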
The second definition is widely accepted when we deal with functional parameters. The construction begins with nowhere dense sets. The set Y is nowhere dense in Q if in any non-empty open set V ⊂ Q (for example, in a ball) there exists a non-empty open subset W ⊂ V (for example, a ball) which does not intersect Y: W ∩ Y = ∅. Roughly speaking, Y is 'full of holes': in any neighbourhood of any point of the set Y there is an open hole. A countable union of nowhere dense sets is called a set of first category. The second usual way is to define thin sets as the sets of first category. A residual set (a 'thick' set) is the complement of a set of first category. If a set is not meagre, it is said to be of second category. The Baire classification is nontrivial in the so-called Baire spaces, where every intersection of a countable collection of open dense sets is also dense. Complete metric spaces and, in particular, Banach spaces are Baire spaces. Therefore, for Banach spaces of functions, the common definition of a negligible set is 'a set of first Baire category'. Such famous results as the transversality theorem in differential topology [115], the Pugh closing lemma [116], and the Kupka-Smale theorem [117] in differential dynamics were obtained in this sense of genericity.
Despite these great successes, it is also widely recognised that the Baire category approach to generic properties requires, at the very least, great care. Here are some examples of correct but useless statements about 'generic' properties of functions: almost every continuous function is not differentiable; almost every C^1-function is not convex. Their meaning for applications is most probably this: the genericity used above for continuous functions or for C^1-functions is irrelevant to the subject.
Contradictions between the measure-based and category-based definitions of negligible sets are well known even in dimension one: the real line R can be divided into two sets, one of which has zero measure, while the other is of first category [118]. Genericity in the sense of measure and genericity in the sense of category differ significantly in applications where both concepts can be used.
The conflict between the two main views on genericity and negligibility stimulated efforts to invent new and stronger approaches. The formal requirements for the new definitions are:
• A union of a countable family of thin sets should be thin;
• Any subset of a thin set should be thin;
• The whole space is not thin.
Of course, if we take care not to throw the baby out with the bathwater, then in R^n, where both classical definitions are applicable, we expect that thin sets should be of first category and have zero measure. It was not clear a priori whether such a theory is possible with proofs of nontrivial and important generic properties. It turned out that it is possible. To substantiate the effectiveness of evolutionary optimisation, a theory of completely negligible sets in Banach spaces was developed [119,120].
Let Q be a real Banach space. Consider compact subsets of Q parameterised by points of a compact space K. Such a family can be presented as an element of the Banach space C(K, Q) of continuous maps K → Q with the maximum norm.
Definition A1. A set Y ⊂ Q is completely thin if, for any compact space K, the set of continuous maps Ψ : K → Q with non-empty intersection Ψ(K) ∩ Y ≠ ∅ is a set of first Baire category.
The union of a finite or countable number of completely thin sets is completely thin. Any subset of a completely thin set is completely thin, while the whole Q is not. A set Y in the Banach space Q is completely thin if, for any compact set K in Q and arbitrary ε > 0, there exists a vector q ∈ Q such that ‖q‖ < ε and K + q does not intersect Y: (K + q) ∩ Y = ∅. All compact sets in infinite-dimensional Banach spaces and all closed linear subspaces with infinite codimension are completely thin.
Only the empty set is completely thin in a finite-dimensional space R^n. The examples below demonstrate that almost all continuous functions have very natural properties: the set of zeros is nowhere dense, and the (global) maximiser is unique. Below, the wording 'almost always' means that the set of exclusions is completely thin.
Proposition A1 ([119,120]). Let X have no isolated points. Then:
• Almost always a function f ∈ C(X) has a nowhere dense set of zeros {x ∈ X | f(x) = 0} (the set of exclusions is completely thin in C(X));
• Almost always a function f ∈ C(X) has only one point of global maximum.
The following proposition is a tool for proving that some typical properties of functions hold almost always for all functions from a generic compact set.
Proposition A2 ([119,120]). If a set Y in the Banach space Q is completely thin, then for any compact metric space K the set of continuous maps Ψ : K → Q with non-empty intersection Ψ(K) ∩ Y ≠ ∅ is completely thin in the Banach space C(K, Q).
Proposition A3 ([119,120]). Let X have no isolated points. Then, for any compact space K and almost every continuous map Ψ : K → C(X), all functions f ∈ Ψ(K) have nowhere dense sets of zeros (the set of exclusions is completely thin in C(K, C(X))).
In other words, in almost every compact family of continuous functions all the functions have nowhere dense sets of zeros.
Qualitatively, the concept of a completely thin set was introduced as a tool for identifying typical properties of infinite-dimensional objects whose violation is unlikely ('improbable') in any reasonable sense.

Appendix C. Flowchart of Multi-Corrector Operation
In Section 4, we introduced multi-correctors of AI systems. The basic scheme of this device is presented in Figure 6. It includes several elementary correctors (see Figure 2) and a dispatcher. Each elementary corrector owns a cluster of errors. An elementary corrector evaluates the risk of errors from its own cluster for an arbitrary operational situation and decides whether to correct the legacy AI decision for this situation. For any situation, the dispatcher selects the most appropriate elementary corrector to make a decision about correction. To find a suitable corrector, it uses a cluster model of errors. When new errors are found, the cluster model changes. A more detailed presentation of multi-corrector operation is given by the following flowcharts. The notations are described in Figure A1.

Figure A1. Notations used in the flowcharts. All flowcharts use a unified set of blocks: blocks in the form of parallelograms display data, rectangular blocks display procedures, and blocks in the form of rhombuses display the branching points of processes (algorithms) or decision points. The arrows reflect the transfer of data and control.
Flowcharts and blocks are numbered. The flowchart number is indicated at the top of the drawings. If a block is present in several flowcharts, then it carries the number assigned to it in the top-level flowchart. The relations between the different flowcharts are presented in Figure A2.
Figure A2. The tree of flowcharts: 10, Operation of the modified AI system (Figure A3); 12, Operation of the legacy AI system (Figure A4); 14, Operation of the correction system (Figure A5); 143, Single corrector operation (Figure A6); 141, The work of the dispatcher (Figure A7); 1415, Online modification of the cluster model (Figure A8).
Figure A3. Operation of the modified AI system (10). Input signals (11) are fed to the input of the AI system (12), which outputs the complete signal vector (13) that can be used for correction. The complete signal vector (13) is fed to the input of the correction system (14). The correction system (14) calculates the correction of the output signals (15).

Figure A4. Operation of the legacy AI system (12). Input signals (11) are fed to the input of the AI system. The AI system generates vectors of internal signals (123) and output signals (122). Input signals (11), internal signals (123), and output signals (122) form the complete signal vector (13).
Figure A5. Operation of the correction system (14). The complete vector of signals (13) is fed to the dispatcher input (141). The dispatcher (141) selects from the correctors the one that most closely matches the situation (142). The selected corrector (142) and the complete signal vector (13) are used to correct the signals. The corrected outputs (15) are computed.