Non-Parametric Probability Distributions Embedded Inside of a Linear Space Provided with a Quadratic Metric

Abstract: There exist uncertain situations in which a random event is not a measurable set but a point of a linear space inside which it is possible to study different random quantities characterized by non-parametric probability distributions. We show that if an event is not a measurable set, then it is contained in a closed structure which is not a σ-algebra but a linear space over R. We think of probability as being a mass: it is a mass with respect to problems of statistical sampling, with respect to problems of the social sciences and, in particular, with regard to economic situations studied by means of the subjective notion of utility. We are able to decompose a random quantity meant as a geometric entity inside a metric space, and it is also possible to decompose its prevision and variance inside it. We introduce a quadratic metric in order to obtain the variance of a random quantity. The origin of the notion of variability is not standardized within this context; it always depends on the state of information and knowledge of an individual. We study different intrinsic properties of non-parametric probability distributions as well as of probabilistic indices summarizing them. We define the notion of α-distance between two non-parametric probability distributions.


Introduction
We propose a mathematical model where the probability of an event has a concrete image [1]. On the other hand, the difference between two opposite points of view is well known: some scholars interpret probability as a subjective measure of the degree of belief, whereas others consider it as an objective measure connected with measurable sets [2]. We will refer to those situations characterizing economic science, statistics and other related fields of interest in which such a difference has no reason to exist because it is evident that an event cannot naturally be meant as a measurable set [3]. We have elsewhere shown that the subjective approach to decisions under uncertainty, as we propose it, has innovative contributions to offer because the probability is basically viewed as the solution to a specific decision problem rather than an opening assumption [4].

Probability Viewed as a Mass
There exist situations for which a systematic set-theoretical interpretation of events is not conceptually satisfactory. It follows that an event is not always a measurable set, so a mechanical transposition of all that concerns measure theory into the calculus of probability is not always appropriate [5]. There exist situations in which an event is an unequivocal proposition such that, by betting on it, it is possible to establish whether the event is true or false, that is, whether it has occurred or not. It is consequently possible to establish whether the bet has been won or lost. We will show in this paper that an event is contained in a closed structure which is not a σ-algebra but a linear space over R. An event has an intrinsic meaning which is independent of the mathematical notion of measurable set; for this reason, we do not select a specific orthonormal basis of the linear space under consideration among all its possible orthonormal bases. Uncertainty about an event depends on a lack of information [6]. It ceases only when a given individual receives certain information about it. Probability deals with events in the sense of single and well-defined cases. It always depends on a given state of information and knowledge of the individual evaluating it. It is then a subjective probability [7]. We think of probability as being a mass. It is always a function defined on the entire space of events: the sum of the non-negative probabilities of the events of a partition is equal to 1, so axiomatic probability theory is satisfied. Nevertheless, the mass can freely be distributed without altering its geometric support, namely the space of random quantities coinciding with a linear space over R, or the measure that appears most natural in this context. We observe that different distributions of mass are different measures, but the notion of measure has no special status here, unlike what happens in measure theory.
When we speak about mass as a measure, we mean something that can coherently be moved in whatever way an individual likes, not something fixed.

What We Mean about a Random Quantity
A random quantity is a mathematical function on the space of its possible outcomes. Its domain is a collection of possible events, where every event is expressed by a real number [8]. A quantity is random for an individual because he does not know its true value. The true value of a random quantity is unique: if an individual calls it random, then it is unknown to him, so he is in doubt between at least two possible values [9]. How large the domain of the possible is for an individual depends on his state of uncertainty at a given moment. Such a domain contains all logically possible values of a random quantity for a given individual at a given instant [10]. In particular, we consider the finest possible partition into atomic events within this context. In other words, we consider the different points constituting the space of possible outcomes. It is embedded in a linear space over R. The alternative which will turn out to be verified a posteriori is nothing but a random point in a linear space over R. It expresses everything there is to be said. We have to note that a set is always subdivisible into subsets; nevertheless, its subdivision necessarily stops when it reaches its constituent elements. It cannot evidently continue. An event is conversely subdivisible without ever stopping, even if in any situation it is appropriate to stop as soon as the subdivision is sufficient for the problem under consideration. All logically possible values of a random quantity are then points that are not further subdivisible for the purposes of the problem under consideration. Each random quantity can coherently be assigned a distribution of probability as an expression of the attitude of the individual under consideration. We do not admit that a probability distribution is already attached to it. We say that a probability distribution can vary from individual to individual, in accordance with the state of information and knowledge of each individual.

The Logic of Certainty
An individual is in a state of uncertainty, at a precise moment, when she/he is in a state of lack of knowledge due to imperfect information at that instant [11]. The latter could be connected with many facts or events [12]. We are not, however, interested in understanding why the information and knowledge of an individual at a given instant are imperfect. We are only interested in observing that such an imperfection objectively exists [13]. We are always dominated by uncertainty, even when we think, reason and decide. If we are always dominated by uncertainty, then we should use only the language of uncertainty characterized by probability. Nevertheless, we do not immediately use it: we first use the language of certainty, where the logic of certainty is a structure built above it, used in order to reduce the risk of falling into error when we reason deductively. The logic of certainty separates what is determined, because it is true or false, from what is not determined, because it is neither true nor false. Such a separation is made by an individual on the basis of the data he uses at a given instant. In other words, the logic of certainty enables us to identify the domain of a mathematical function representing the random quantity under consideration [14]. With regard to a specific situation that an individual has to take into account, there could exist a huge number of possible alternatives. Nevertheless, his information and knowledge will permit him to exclude some of them as impossible. All the others will remain possible for him; they constitute the domain of the above function. On the other hand, if an individual is faced with an infinite number of conceivable alternatives, where their number could be countably infinite or uncountable, then his information and knowledge will permit him to consider a finite number of them.
We are therefore able to speak about a limitation or approximation of expectations, where the latter is nothing but a discretization of continuous alternatives. It follows that it will be appropriate to assign probabilities to a finite number of points contained in the set of possible values of the random quantity under consideration. This set is the domain of the mathematical function identifying it. It will be decomposed by us through geometry. We then say that the logic of certainty also coincides with geometry. We propose a geometric superstructure more effective than any other superstructure for representing all logically possible alternatives that can objectively be considered with reference to a random quantity.

Methodological Aspects Concerning Non-Parametric Probability Distributions
Let X be a random quantity whose possible values are denoted by I(X) = {x_1, x_2, ..., x_m}, where we have x_1 < x_2 < ... < x_m without loss of generality. We evidently deal with a finite partition of incompatible and exhaustive events. It follows that x_1 is the true value of X if E_1 occurs, with subjective probability equal to p_1; x_2 is the true value of X if E_2 occurs, with subjective probability equal to p_2; ...; x_m is the true value of X if E_m occurs, with subjective probability equal to p_m, where it turns out to be ∑_{i=1}^m p_i = 1. An individual distributes a unit mass of probability among all possible events contained in X and expressed by means of the real numbers identifying I(X) [15]. He enters into the logic of prevision in order to do this [16]. He is able to attribute to the different possible events a greater or lesser degree of belief. It is nothing but a new, extralogical, subjective, personal and relative factor expressing his attitude, viewed as his inclination to expect that a particular event rather than others will be true at the right time [17]. We have to observe a very important point: the logic of certainty obeys the laws of mathematics. On the other hand, we reason deductively when we are faced with mathematics. In particular, it obeys the laws of geometry within this context. This is because all possible and elementary events contained in a random quantity are geometrically represented as points inside a linear space over R. The logic of prevision conversely obeys the conditions of coherence pertaining to the meaning of probability, not to motives of a mathematical nature [18]. Given a finite partition of incompatible and exhaustive elementary events characterizing a random quantity, the conditions of coherence impose no limits on the probabilities that an individual may subjectively assign, except that the sum of all the non-negative probabilities under consideration has to be equal to 1.
A combination of bets on all events of a finite partition of incompatible and exhaustive elementary events coincides with a single bet on the certain event. It is a fair bet if and only if the sum of all the non-negative probabilities under consideration is equal to 1. Such a bet is consequently acceptable by an individual in both senses indifferently. If an individual conversely accepts the same kind of bet where the sum of the probabilities is not equal to 1, then his decision-making leads him to a certain and undesirable loss. A bet viewed as a real or conceptual experiment concerning a probability distribution known over a finite partition of elementary events permits us to measure the notion of prevision of a random quantity and, especially, of probability of an event from an operational point of view.
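The coherence condition above can be made concrete with a small numeric sketch (the prices are illustrative assumptions, not taken from the paper): with unit-stake bets on every event of an exhaustive partition, exactly one event occurs, so the bettor receives 1 in total while paying the sum of the prices; a price system summing to anything other than 1 therefore produces a certain gain for one side of the bet.

```python
# Hypothetical illustration of coherence for a combined bet on an
# exhaustive partition. Exactly one event occurs, so the bettor is paid
# 1 in total while paying sum(prices); the bookie's certain gain is
# sum(prices) - 1, whatever happens.

def sure_gain_of_bookie(prices):
    """Certain gain of the bookie for unit-stake bets at the given prices."""
    return sum(prices) - 1.0

coherent = [0.2, 0.5, 0.3]    # sums to 1: fair in both senses
incoherent = [0.3, 0.5, 0.3]  # sums to 1.1: certain loss for the bettor

print(sure_gain_of_bookie(coherent))    # ~0.0
print(sure_gain_of_bookie(incoherent))  # ~0.1
```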

A Geometric Definition of a Random Quantity
Let E^m be a linear space over R provided with a quadratic metric and let {e_j}, j = 1, ..., m, be an orthonormal basis of it [19]. Any element of E^m is uniquely determined by a linear combination of the basis vectors. In particular, given x ∈ E^m, it is possible to obtain

  x = x^1 e_1 + x^2 e_2 + ... + x^m e_m    (1)

as well as

  x = x^j e_j,    (2)

where we have x^j ∈ R, j = 1, ..., m. We have evidently used the Einstein summation convention in Equation (2).
Having said that, we prove the following

Proposition 1. Let I(X) = {x_1, x_2, ..., x_m} be the set of all logically possible values of X. Each logically possible value of X is then associated with a single and well-defined random event belonging to one of m straight lines of E^m on which a same Cartesian coordinate system is established.
Proof. Each contravariant component of x ∈ E^m can be seen as a vector of E^m written in the form given by (i)x, i = 1, ..., m. Thus, we write

  (1)x = x^1 e_1,

with x^1 ∈ R, as well as

  (m)x = x^m e_m,

with x^m ∈ R. It is evident that (1)x and e_1 are collinear, as are (m)x and e_m. It follows that it is possible to write

  x = (1)x + (2)x + ... + (m)x,

where each (i)x is an element of a subspace of E^m denoted by E^m_(i), i = 1, ..., m, whose dimension is equal to 1. We obtain

  E^m = E^m_(1) ⊕ E^m_(2) ⊕ ... ⊕ E^m_(m),

because the direct sum of the m subspaces of E^m is nothing but E^m itself. Such a direct sum is also orthogonal. We note that one has

  dim E^m_(1) + dim E^m_(2) + ... + dim E^m_(m) = m,

with dim E^m = m. The contravariant components of (i)x are given by

  ((i)x)^j = x^i δ_i^j,    (9)

with i = 1, ..., m. We observe that δ_i^j is the Kronecker delta: if it turns out to be i = j, then we get δ_i^j = 1; instead, when it turns out to be i ≠ j, we get δ_i^j = 0. We observe that Equation (9) is characterized by the Einstein summation convention. Hence, we can write

  (i)x = (x^i δ_i^j) e_j.

We then consider m oriented straight lines of E^m which are expressed in the same unit of length [20]. They are pairwise orthogonal and meet in the origin of E^m, which is the zero vector of E^m. We do not contemplate particular m-tuples of real numbers belonging to any straight line of E^m, but we consider only the real numbers associated with each of them. All of this results from the geometric property of collinearity just shown. Each straight line of E^m expresses the whole of the space of alternatives with respect to one of the m alternatives of X. Each straight line of E^m contains infinitely many possible alternatives. Regarding one of the m alternatives of X, we note that the knowledge and information of an individual, at a given instant of time, do not allow him to single out one real number only: any real number on that line is still possible for him because it is neither true nor false [21]. The same holds when thinking about all the other m − 1 alternatives of X.
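The decomposition in the proof can be sketched numerically (the values are illustrative assumptions): each (i)x keeps the single component x^i and zeroes the others, and the sum of the m collinear pieces recovers x.

```python
# Sketch of x = (1)x + ... + (m)x, where (i)x is collinear with e_i and
# its only nonzero component is x^i. Values are illustrative.
x = [3.0, -1.0, 2.0]
m = len(x)
parts = [[x[i] if j == i else 0.0 for j in range(m)] for i in range(m)]
recombined = [sum(part[j] for part in parts) for j in range(m)]
print(recombined)  # [3.0, -1.0, 2.0]
```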

A Canonical Expression of a Random Quantity
We observe that all the events contained in X are embedded in E^m. Probability meant as a mass is defined inside a linear space provided with a quadratic metric. The same symbol P is used in order to denote both the notion of prevision, or mathematical expectation, of a random quantity and the notion of probability of an event. This is because an event is nothing but a particular random quantity [22]. Anyway, we deal with m masses denoted by p_1, p_2, ..., p_m such that it is possible to write p_1 + p_2 + ... + p_m = 1. They are located on the m components denoted by x^1, x^2, ..., x^m of the m vectors denoted by (1)x, (2)x, ..., (m)x of E^m. We consider a probability distribution on R inside of E^m in this way. This is because x^1, x^2, ..., x^m are real numbers. We evidently have

  w = w^1 e_1 + w^2 e_2 + ... + w^m e_m,

with w ∈ E^m, where {e_j}, j = 1, ..., m, is an orthonormal basis of E^m, and it turns out to be

  (i)x = x^i e_i,    (11)

where we have x^i ∈ R for every i = 1, ..., m. It follows that it is possible to establish the following

Definition 1. Let id_R : R → R be the identity function on R, where R is a linear space over itself. Given m elementary events of a finite partition of incompatible and exhaustive events, a random quantity denoted by X is the restriction of id_R to I(X) ⊂ R such that it turns out to be id_R|I(X) : I(X) → R.
In particular, we say that X is a linear operator whose canonical expression coincides with Equation (12), given by

  X = x^1 |E_1| + x^2 |E_2| + ... + x^m |E_m|,    (12)

where |E_i| denotes the indicator of the event E_i. We say that X is an isometry. This means that each single event could uniquely be represented by infinitely many real numbers; therefore, we can write I(X) = {x_1 + a, x_2 + a, ..., x_m + a}, where a ∈ R is an arbitrary constant. In this way, we are clearly considering infinitely many translations, hence different quantities from a geometric perspective. Notwithstanding, they are the same quantity from a randomness point of view because the events and their associated probabilities do not vary. On the other hand, if two or more propositions could represent the same event encompassed in X, then two or more real numbers can uniquely identify it [23]. Hence, a change of origin is not essential from a randomness perspective. We can always bring the changed origin back to the initial one. Therefore, in this way, we consider a different closed structure. The latter is not a σ-algebra but a linear subspace over R, where every linear subspace is nothing but a linear space contained in another whose dimension is higher. Since every event contained in X belongs to one of them according to Equation (11), we deal with m subspaces of dimension 1. A random quantity X whose logically possible values identify an m-dimensional vector of E^m is an element of a set of random quantities denoted by (1)S [24]. We observe that it is possible to write

  (1)S ⊂ E^m,    (14)

where (1)S is an m-dimensional linear space contained in E^m. The sum of two vectors belonging to (1)S belongs to (1)S if and only if its components are all different; the same holds when considering the multiplication of a vector of (1)S by a real number that is non-zero.
Hence, we say that (1)S is closed with respect to the sum of two vectors of it and the multiplication of a vector of it by a real number that is different from zero. We consider a closed structure coinciding with an m-dimensional linear space contained in E^m in this way. We observe that E^m can also be read as an affine space over itself, and each element of E^m can be read as a point of an affine space whose origin is the zero vector of E^m. We could then be faced with a point of an affine space or a vector of a linear space. We choose a covariant notation with respect to the components of p ∈ E^m, so we write

  p = p_1 e^1 + p_2 e^2 + ... + p_m e^m,

with ∑_{i=1}^m p_i = 1, where p_i represents a subjective probability assigned to x_i, i = 1, ..., m, by an individual according to his/her degree of belief in the occurrence of x_i [18]. If we write the pairs (x^i, p_i), i = 1, ..., m, then we identify a distribution of probability embedded inside of a linear space provided with a quadratic metric. A coherent prevision of X is given by

  P(X) = x^i p_i.

It is linear and homogeneous [25]. From P(E_i) = p_i, i = 1, ..., m, it follows that it turns out to be

  P(X) = x^1 P(E_1) + x^2 P(E_2) + ... + x^m P(E_m).

We note that the covariant components of every vector of E^m coincide with the contravariant ones because we deal with an orthonormal basis of E^m. Nevertheless, we want to stress that x and p are of a diverse nature.
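The prevision P(X) = x^i p_i is simply the mass-weighted sum of the possible values; a quick numeric sketch (values and masses are illustrative assumptions):

```python
# Prevision as the mass-weighted sum of the possible values:
# P(X) = x^1 p_1 + ... + x^m p_m. Numbers are illustrative.
x = [1.0, 2.0, 4.0]      # possible values x^i
p = [0.25, 0.25, 0.5]    # coherent masses, summing to 1
P_X = sum(xi * pi for xi, pi in zip(x, p))
print(P_X)  # 2.75
```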

A Coherent Prevision of a Random Quantity Viewed as an m-Dimensional Vector Coinciding with Its Center of Mass
After decomposing X into m single random events, we note that its coherent prevision is given by

  x̄ = P(X) e_1 + P(X) e_2 + ... + P(X) e_m,    (19)

where we have 0 ≤ p_i ≤ 1, i = 1, ..., m, and ∑_{i=1}^m p_i = 1 [26]. We say that Equation (19) is an m-dimensional vector belonging to E^m whose contravariant components are all equal. We then write

  x̄^1 = x̄^2 = ... = x̄^m = P(X).    (20)

We note that Equation (19) holds when the zero vector of E^m coincides with the origin of E^m. We note that the i-th contravariant component of x̄ is given by

  x̄^i = x^j p_j,    (21)

where we have i = 1, ..., m. Each contravariant component of x̄ is then obtained by means of a linear combination, characterized by Equation (21), with regard to which the Einstein summation convention holds. The m contravariant components of x̄ are therefore originated by m groups of numbers, where every group consists of m numbers that are added.

A Decomposition of a Coherent Prevision of a Random Quantity
If it is possible to decompose a random quantity denoted by X, then it is also possible to decompose its coherent prevision denoted by P(X). We therefore consider the following

Proposition 2. Let I(X) = {x_1, x_2, ..., x_m} be the set of all logically possible values of X, where it turns out to be x_1 < x_2 < ... < x_m. If {e_j}, j = 1, ..., m, is an orthonormal basis of E^m, then y = (x^1 p_1) e_1 + ... + (x^m p_m) e_m, with y ∈ E^m, is a direct and orthogonal sum of m vectors belonging to m one-dimensional subspaces of E^m.

Proof.
We have

  (1)x = x^1 e_1,

as well as

  (1)y = (x^1 p_1) e_1,    (23)

with regard to the first subspace of E^m. A same probability expressed by p_1 is associated with Equation (23) when we consider x_1 + a, where a ∈ R is an arbitrary constant. We identify different vectors on a same straight line of E^m in this way [24]. Its direction is established by e_1. All collinear vectors lying on the straight line established by e_1 represent the same event from a randomness point of view, on condition that the starting inequalities given by x_1 < x_2 < ... < x_m remain the same when we write them in the form expressed by x_1 + a < x_2 + a < ... < x_m + a. Such an event is then verified when the true value of X which has occurred a posteriori coincides with the lowest possible value of X. Conversely, we write

  (m)x = x^m e_m,

as well as

  (m)y = (x^m p_m) e_m,    (25)

with regard to the m-th subspace of E^m. A same probability denoted by p_m is associated with Equation (25) when we consider x_m + a, where a ∈ R is an arbitrary constant. All collinear vectors lying on the straight line established by e_m represent the same event from a randomness point of view, on condition that the starting inequalities given by x_1 < x_2 < ... < x_m remain the same when we write them in the form expressed by x_1 + a < x_2 + a < ... < x_m + a. Such an event is then verified when the true value of X which has occurred a posteriori coincides with the highest possible value of X. What we have just said does not change by considering all the other subspaces of E^m. A coherent prevision of X always coincides with the direct sum of m vectors connected with m incompatible and exhaustive elementary events. Such a direct sum is also orthogonal. Let y be a vector belonging to E^m obtained by means of a linear combination of m vectors that are linearly independent. The contravariant components of y are m scalars whose sum coincides with a coherent prevision of X. Such a sum is connected with m incompatible and exhaustive elementary events [27]. We therefore write

  y = (x^1 p_1) e_1 + (x^2 p_2) e_2 + ... + (x^m p_m) e_m,

where we have y ∈ E^m.
If we consider x_i + a, with a ∈ R, instead of x_i, i = 1, ..., m, then we write

  y = ((x^1 + a) p_1) e_1 + ((x^2 + a) p_2) e_2 + ... + ((x^m + a) p_m) e_m,

where we have y ∈ E^m. The contravariant components of y are m scalars whose sum coincides with a coherent prevision of X + a denoted by P(X + a).
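A small numeric check of the linearity just used, P(X + a) = P(X) + a (values, masses and the constant a are illustrative assumptions):

```python
# Translating every possible value by a translates the prevision by a.
# Numbers are illustrative.
x = [1.0, 2.0, 4.0]
p = [0.25, 0.25, 0.5]
a = 10.0
prevision = lambda vals: sum(v * w for v, w in zip(vals, p))
print(prevision(x) + a)                # 12.75
print(prevision([v + a for v in x]))   # 12.75
```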

Quadratic Indices and a Decomposition of the Variance of a Random Quantity
Given a coherent prevision of X, we are able to establish the following

Definition 2. Let X_d be a transformed random quantity whose possible values represent all deviations from P(X) = x̄ ∈ E^m. It is then represented by the vector x_d = x − x̄ ∈ E^m whose contravariant components are given by (x_d)^i = x^i − x̄^i, i = 1, ..., m.
If we consider X_d, then we evidently deal with a linear transformation of X. It is a change of origin. It always depends on the state of information and knowledge of the individual evaluating. A coherent prevision of X_d is necessarily given by

  P(X_d) = P(X) − P(X) = 0.

Having said that, we firstly observe that the α-norm of the vector x ∈ E^m identifying X is expressed by

  ‖x‖²_α = x^i x^j p_ij.

We use the term α-norm because we refer to the α-criterion of concordance introduced by Gini [28]. The notion of α-norm is a consequence of the notion of α-product with respect to two vectors belonging to E^m and representing the logically possible values of two random quantities which are jointly considered. They are logically independent. They identify a bivariate random quantity which is generically denoted by X_12 = {1X, 2X}. The number of its logically possible values is overall equal to m². We evidently deal with a partition of m² incompatible and exhaustive elementary events. The joint distribution of X_12 is geometrically represented by the covariant components of the tensor p = (p_{i1 i2}), where p is an affine tensor of order 2. The notion of α-norm results from the one of α-product because the former is nothing but an α-product between two random quantities whose possible values are all equal. The covariant components of the tensor p = (p_{i1 i2}) having different numerical values as indices are then equal to 0. We therefore say that the absolute maximum of concordance is actually obtained. We note that it turns out to be ‖x‖²_α ≥ 0. Secondly, the α-norm of the vector representing X_d is given by

  ‖x_d‖²_α = (x^i − x̄^i)(x^j − x̄^j) p_ij.    (30)

It therefore represents the variance of X. The origin of the notion of variability is evidently not standardized within this context. The standard deviation of X is given by

  σ_X = √(‖x_d‖²_α).

A metric connection between E^m and a random quantity whose possible values are geometrically represented by an m-dimensional vector belonging to E^m is therefore obtained by using the notion of α-norm.
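In the case in which the probability tensor is diagonal (p_ij = p_i for i = j and 0 otherwise, as when a quantity is α-multiplied by itself), the α-norm of x_d reduces to the familiar mass-weighted variance; a numeric sketch with illustrative values:

```python
# Variance as the α-norm of the deviation vector x_d = x - x̄, computed
# with a diagonal probability tensor. Numbers are illustrative.
x = [1.0, 2.0, 4.0]
p = [0.25, 0.25, 0.5]
mean = sum(xi * pi for xi, pi in zip(x, p))            # P(X) = 2.75
dev = [xi - mean for xi in x]                          # components of x_d
variance = sum(d * d * pi for d, pi in zip(dev, p))    # ||x_d||^2_α
sd = variance ** 0.5                                   # σ_X
print(variance)  # 1.6875
```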
It is consequently possible to decompose a random quantity whose possible values represent all deviations from x̄, as well as the variance of X, by using the geometric property of collinearity. After decomposing X and P(X), we are able to show the following

Proposition 3. The variance of X coincides with the sum of the contravariant components of an m-dimensional vector z ∈ E^m which is a direct and orthogonal sum of m vectors belonging to m one-dimensional subspaces of E^m.

Proof. What we have said with respect to X and P(X) continues to be valid when we consider a random quantity whose possible values represent all deviations from x̄. We write

  z = ((x^1 − x̄^1)² p_1) e_1 + ... + ((x^m − x̄^m)² p_m) e_m,

as well as

  z = ((x^1 + a − x̄^1 − a)² p_1) e_1 + ... + ((x^m + a − x̄^m − a)² p_m) e_m,

where we have z ∈ E^m. The contravariant components of z are then m scalars whose sum coincides with the variance of X. Such a sum is connected with m incompatible and exhaustive elementary events.

Invariance of a Random Quantity Subjected to a Translation
If we transform all logically possible values of X by using a same m-dimensional constant denoted by a ∈ E^m, then we obtain a transformed random quantity. We denote it by X + a. It represents a translation of X. Its contravariant components are then expressed by x^i + a, where we have i = 1, ..., m. We assign to them the same subjective probabilities assigned to x^i, i = 1, ..., m. A coherent prevision of X + a is then denoted by P(X + a) = P(X) + a = x̄ + a.
We observe that all deviations from x̄ + a of the possible values of X + a are the same as the ones from x̄ of the possible values of X. We now transform all possible values of X by using a same m-dimensional constant which is different from a. We denote it by b ∈ E^m. We obtain another transformed random quantity in this way. We denote it by X + b. It represents another translation of X. Its contravariant components are then expressed by x^i + b, where we have i = 1, ..., m.
We assign to them the same subjective probabilities assigned to x^i and x^i + a, i = 1, ..., m. A coherent prevision of X + b is then denoted by

  P(X + b) = P(X) + b = x̄ + b.

We observe that all deviations from x̄ + b of the possible values of X + b are the same as the ones from x̄ + a of the possible values of X + a. They are also the same as the ones from x̄ of the possible values of X. We then note that X_d, (X + a)_d and (X + b)_d have the same possible values, so we write

  X_d = (X + a)_d = (X + b)_d.

We therefore say that the random quantity denoted by X_d is invariant when X is subjected to different translations. We make clear a basic point: all possible translations of X are characterized by the same subjective probabilities assigned to the possible values of the random quantities under consideration. This means that each event contained in those random quantities is always the same from a randomness point of view [29]. We also observe that the weighted summation of the possible values of X_d = (X + a)_d = (X + b)_d is always equal to 0. Such a property must always characterize any random quantity whose possible values represent all deviations from a mean value. If it does not hold, then we are not able to speak about the invariance of a random quantity whose possible values represent all deviations from a mean value.
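A numeric sketch of this invariance (values, masses and the translation constant are illustrative assumptions): the deviation vector computed from X and from X + a is the same, and its mass-weighted sum is 0.

```python
# X_d is invariant under translation: the deviations from the prevision
# do not change when every possible value is shifted by a. Illustrative numbers.
x = [1.0, 2.0, 4.0]
p = [0.25, 0.25, 0.5]
a = 10.0

def deviations(vals):
    m = sum(v * w for v, w in zip(vals, p))
    return [v - m for v in vals]

print(deviations(x))                   # [-1.75, -0.75, 1.25]
print(deviations([v + a for v in x]))  # [-1.75, -0.75, 1.25]
```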

A Particular Random Quantity Subjected to a Rotation
We establish the following

Definition 3. Let A = (a^i_j) be an m × m orthogonal matrix. The random quantity denoted by *X_d is the one obtained when x_d is subjected to the rotation determined by A; it is represented by the vector *x_d ∈ E^m.

We note that the contravariant indices of the generic element of A represent the rows of A. The covariant indices of the generic element of A represent its columns. After noting that the contravariant components of x_d are given by (x_d)^i = x^i − x̄^i, i = 1, ..., m, we observe that it turns out to be

  (*x_d)^i = a^i_j (x_d)^j.

This means that there exist m linear and homogeneous relationships between the contravariant components of x_d and the ones of *x_d. We wonder whether *X_d is invariant with respect to a rotation of x_d. We then prove the following

Proposition 4. *X_d is not, in general, invariant with respect to a rotation of x_d determined by an m × m orthogonal matrix denoted by A.

Proof. If we write

  *x^j = a^j_h x^h,    (37)

as well as

  *x̄^j = a^j_h x̄^h,    (38)

where we have j = 1, ..., m, then we note that the transformation of the components of x is the same as the one connected with the components of x̄. We also note that the m products given by (x^j − x̄^j) p_j, j = 1, ..., m, sum to 0, because a coherent prevision of X_d is equal to 0. Invariance would require

  (*x_d)^j = (x_d)^j,    (40)

for every j = 1, ..., m, where we have

  (x_d)^j = x^j − x̄^j,    (41)

as well as

  (*x_d)^j = a^j_h x^h − a^j_h x̄^h.    (42)

We note that the m subtrahends appearing in Equation (41) are all equal, unlike the m subtrahends appearing in Equation (42). This means that Equation (40) does not necessarily hold, so *X_d is not invariant with respect to a rotation of x_d.
We observe that X_d and *X_d are the same quantity from a randomness point of view. They are different quantities from a geometric point of view. This means that there always exists a one-to-one correspondence between the events of the set of events characterizing X_d and the ones of the set of events characterizing *X_d. It is explained by the same probabilities which are coherently assigned to the corresponding events [30].

Intrinsic Properties of Probabilistic Indices
We prove the following

Proposition 5. The variance of X and its standard deviation are invariant with respect to a rotation of x_d determined by an m × m orthogonal matrix denoted by A.
Proof. Given x_d ∈ E^m, its α-norm is coherently expressed by Equation (30) [31]. We use covariant indices when we are faced with probabilities. We are then able to write

  ‖x_d‖²_α = (x_d)^i (x_d)^j p_ij.

It follows that it turns out to be

  ‖x_d‖²_α = σ²_X.

Given *X, where *X is a random quantity obtained when X is subjected to a rotation, let *x_d be the m-dimensional vector of E^m representing *X_d. We then write

  ‖*x_d‖²_α = (*x_d)^i (*x_d)^j p_ij.    (45)

We refer to Equation (37) and Equation (38) in Equation (45), so we are able to write

  ‖*x_d‖²_α = (a^i_h (x_d)^h)(a^j_k (x_d)^k) p_ij.

Since it turns out to be

  a^i_h a^i_k = δ^k_h,

where δ^k_h represents the Kronecker delta, we consequently obtain

  ‖*x_d‖²_α = (x_d)^h (x_d)^k p_hk = ‖x_d‖²_α.

We can evidently write

  σ²_{*X} = ‖*x_d‖²_α,

as well as

  σ²_X = ‖x_d‖²_α.

All of this shows that the variance of X is invariant with respect to a rotation of x_d determined by A. Since it is possible to write

  σ_X = √(‖x_d‖²_α),

we note that the same thing goes when we consider the standard deviation associated with X and *X.
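The mechanism invoked in the proof is the standard fact that an orthogonal matrix preserves norms (its columns satisfy a^i_h a^i_k = δ^k_h, i.e. AᵀA = I). A minimal numeric sketch in two dimensions, with an illustrative rotation angle and vector:

```python
import math

# An orthogonal (rotation) matrix preserves the Euclidean norm of any
# vector: ||A v|| = ||v||. Angle and vector are illustrative.
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
v = [3.0, 4.0]
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
norm = lambda u: sum(c * c for c in u) ** 0.5
print(norm(v), norm(Av))  # both 5.0, up to rounding
```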

Variations Connected with the Bravais-Pearson Correlation Coefficient
Given X, let X_v be a random quantity identifying variations. Its logically possible values are obtained by means of a relationship between two quantities expressed in the form of a ratio. We firstly consider the logically possible values of a random quantity representing all deviations from a mean value, defined with respect to X [32]. We secondly consider the standard deviation of X denoted by σ_X. We then establish the following

Definition 4. A random quantity identifying variations denoted by X_v is geometrically represented by an m-dimensional vector of E^m denoted by x_v ∈ E^m whose contravariant components are given by

  (x_v)^i = (x^i − x̄^i) / σ_X,  i = 1, ..., m.

We note that the variance of X_v as well as its standard deviation are always equal to 1. We therefore write

  σ²_{X_v} = 1,

as well as

  σ_{X_v} = 1.

We observe that the logically possible values of a random quantity representing variations are invariant with respect to an affine transformation of them expressed by

  x^i → a x^i + b,

where we have a > 0. Having said that, we now consider a generic bivariate random quantity denoted by X_12 = {1X, 2X}. Its possible values represent a partition of m² incompatible and exhaustive elementary events. We transform the logically possible values of 1X and 2X. We therefore obtain two random quantities representing two variations whose m-dimensional vectors of E^m are respectively given by (1)v and (2)v. They geometrically represent 1X_v and 2X_v. We have to note a very important point: even if the logically possible values of 1X and 2X change, we observe that their joint probabilities as well as their marginal probabilities are always the same. This means that we always consider the same events from a randomness point of view. We represent the joint probabilities of the joint distribution by using an affine tensor of order 2 denoted by p. We note that its components are represented by using covariant indices, so we write p = (p_ij).
Having said that, we consider the α-product between (1)v and (2)v. It is a scalar product obtained by using the joint probabilities together with two equal-length sequences of contravariant components of m-dimensional vectors of E_m. We write (1)v^i (2)v^j p_ij. We note that it turns out to be (2)v^j p_ij = (2)v_i because we deal with a vector homography, so we are also able to write (1)v^i (2)v_i. It follows that the α-product between (1)v and (2)v coincides with the Bravais-Pearson correlation coefficient, so we obtain r = (1)v^i (2)v^j p_ij in this way.

We want to show that the Bravais-Pearson correlation coefficient is invariant with respect to a rotation characterized by an m × m orthogonal matrix denoted by A. Given (1)v, if it is subjected to a rotation established by A then its contravariant components are given by *(1)v^i = a^i_j (1)v^j. Given (2)v, if it is subjected to a rotation established by A then its contravariant components are given by *(2)v^i = a^i_j (2)v^j. The α-product between *(1)v and *(2)v, whose components are not invariant with respect to a rotation determined by A, is then given by *(1)v^i *(2)v^j p_ij. Since it turns out to be *(1)v^i *(2)v^j p_ij = (1)v^i (2)v^j p_ij, we are able to establish that the Bravais-Pearson correlation coefficient is invariant with respect to a rotation determined by A. We therefore write *r = r.
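The identity between the α-product of the two standardized vectors and the Bravais-Pearson correlation coefficient can be verified on a toy joint distribution. The values and joint probabilities below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: the alpha-product of the standardized vectors,
# taken with the joint probability tensor p_ij, yields the
# Bravais-Pearson correlation coefficient.
x1 = np.array([0.0, 1.0])              # possible values of 1X
x2 = np.array([0.0, 2.0])              # possible values of 2X
p = np.array([[0.3, 0.2],              # joint probabilities p_ij
              [0.1, 0.4]])

p1, p2 = p.sum(axis=1), p.sum(axis=0)  # marginal probabilities

def standardize(x, q):
    m = q @ x
    return (x - m) / np.sqrt(q @ (x - m)**2)

v1, v2 = standardize(x1, p1), standardize(x2, p2)

# r = (1)v^i (2)v^j p_ij  (alpha-product with the covariant tensor p)
r = v1 @ p @ v2

# Agreement with the usual covariance/standard-deviation formula.
cov = sum(p[i, j] * (x1[i] - p1 @ x1) * (x2[j] - p2 @ x2)
          for i in range(2) for j in range(2))
r_classic = cov / (np.sqrt(p1 @ (x1 - p1 @ x1)**2)
                   * np.sqrt(p2 @ (x2 - p2 @ x2)**2))
assert np.isclose(r, r_classic)
```

Expanding v1 @ p @ v2 term by term reproduces the covariance divided by the product of the standard deviations, which is exactly the classical coefficient.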

A Measure of Distance between Two Non-Parametric Probability Distributions
After considering Equation (14), we now introduce (2)S(1), where (2)S(1) is an m-dimensional linear space contained in E_m. Let X_12 = {1X, 2X} be a generic bivariate random quantity. Its possible values represent a partition of m² incompatible and exhaustive elementary events. Its marginal components denoted by 1X and 2X are geometrically represented by two m-dimensional vectors of E_m denoted by (1)x and (2)x, where we have (1)x, (2)x ∈ (2)S(1). We note that (2)S(1) contains all ordered pairs of m-dimensional vectors of E_m identifying the logically possible values of the marginal components of a bivariate random quantity. An affine tensor of order 2 denoted by p uniquely corresponds to every ordered pair of vectors belonging to (2)S(1). The components of p identify all joint probabilities characterizing a bivariate random quantity.

Having said that, we write y = (1)x + λ (2)x, where it turns out to be λ ∈ R. It is possible to suppose that (1)x and (2)x are linearly independent without loss of generality. We need to solve a linear equation having m − 1 unknowns in order to compute the marginal probabilities corresponding to y ∈ (2)S(1). In particular, if it turns out to be λ = −1 then we obtain y = (1)x − (2)x. We therefore establish the following

Definition 5. Given two non-parametric probability distributions, their α-distance coincides with ‖y‖²_α = ‖(1)x − (2)x‖²_α. It is the α-norm of an m-dimensional vector denoted by y belonging to (2)S(1). Such a vector is one of infinitely many possible linear combinations of (1)x and (2)x, with (1)x, (2)x ∈ (2)S(1).
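A numerical sketch of Definition 5 follows. It assumes, as a working hypothesis consistent with the bilinearity of the α-product, that ‖(1)x − (2)x‖²_α expands as ‖(1)x‖²_α − 2⟨(1)x, (2)x⟩_α + ‖(2)x‖²_α, with the marginal probabilities in the norms and the joint tensor p_ij in the α-product; the distribution is invented for illustration.

```python
import numpy as np

# Illustrative sketch of the alpha-distance between two marginal
# distributions, via the bilinear expansion of ||y||^2 with
# y = (1)x - (2)x (an assumption stated in the text above).
x1 = np.array([1.0, 3.0, 5.0])         # possible values of 1X
x2 = np.array([2.0, 2.0, 6.0])         # possible values of 2X
p = np.array([[0.10, 0.05, 0.05],      # joint probabilities p_ij
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.30]])
p1, p2 = p.sum(axis=1), p.sum(axis=0)  # marginal probabilities

norm1_sq = p1 @ x1**2                  # ||(1)x||^2_alpha
norm2_sq = p2 @ x2**2                  # ||(2)x||^2_alpha
alpha_prod = x1 @ p @ x2               # <(1)x, (2)x>_alpha

alpha_dist_sq = norm1_sq - 2 * alpha_prod + norm2_sq
assert alpha_dist_sq >= 0              # it equals E[(1X - 2X)^2] >= 0
```

Under this expansion the squared α-distance coincides with the expectation of the squared difference of the two marginal components, so it is always non-negative.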

From Equation (65), it is possible to derive Schwarz's α-generalized inequality given by |⟨(1)x, (2)x⟩_α| ≤ ‖(1)x‖_α ‖(2)x‖_α. If λ = 1 then one has y = (1)x + (2)x, so it is possible to write the α-triangle inequality given by ‖(1)x + (2)x‖_α ≤ ‖(1)x‖_α + ‖(2)x‖_α. The reverse α-triangle inequality is expressed by |‖(1)x‖_α − ‖(2)x‖_α| ≤ ‖(1)x − (2)x‖_α. We also write cos γ = ⟨(1)x, (2)x⟩_α / (‖(1)x‖_α ‖(2)x‖_α), so we say that (2)S(1) is a metric space [33]. What we have said can be referred to random quantities whose possible values represent all deviations from a mean value. On the other hand, the variance of a probability distribution is a reasonable measure of the riskiness involved. Given two transformed random quantities, we are also able to realize whether they are equally risky or not, and we can understand what their distance is in terms of riskiness.
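The three relations above (Schwarz's α-generalized inequality, the α-triangle inequality, and the bound on cos γ) can be checked numerically. The sketch below reuses an illustrative joint distribution and assumes the same bilinear expansion of the α-norm of a sum; nothing in it is taken from the paper's own data.

```python
import numpy as np

# Illustrative numerical check of the alpha-inequalities and cos(gamma).
x1 = np.array([1.0, 3.0, 5.0])
x2 = np.array([2.0, 2.0, 6.0])
p = np.array([[0.10, 0.05, 0.05],      # joint probabilities p_ij
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.30]])
p1, p2 = p.sum(axis=1), p.sum(axis=0)

n1 = np.sqrt(p1 @ x1**2)               # ||(1)x||_alpha
n2 = np.sqrt(p2 @ x2**2)               # ||(2)x||_alpha
prod = x1 @ p @ x2                     # <(1)x, (2)x>_alpha

# Schwarz's alpha-generalized inequality: |<x1, x2>| <= ||x1|| ||x2||.
assert abs(prod) <= n1 * n2 + 1e-12

# Alpha-triangle inequality (lambda = 1), via the bilinear expansion
# of ||x1 + x2||^2 (an assumption): <= ||x1|| + ||x2||.
sum_norm = np.sqrt(n1**2 + 2 * prod + n2**2)
assert sum_norm <= n1 + n2 + 1e-12

# cos(gamma) always lies in [-1, 1], consistently with Schwarz.
cos_gamma = prod / (n1 * n2)
assert -1.0 <= cos_gamma <= 1.0
```

The bound on cos γ is an immediate consequence of Schwarz's inequality, which is why (2)S(1) can be treated as a metric space with well-defined angles.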

Some Future Works
If we consider n logically independent random quantities, where each of them is a partition of m (with m > n) incompatible and exhaustive elementary events, then it is also possible to consider a multivariate random quantity of order n. Every partition characterizing one of the n random quantities is uniquely determined by m possible values that are necessarily all distinct. It is possible to study both the n random quantities and the multivariate random quantity of order n inside of a linear space provided with a quadratic metric. It is analytically possible to decompose a multivariate random quantity of order n inside of such a space in order to compute further summary indices. They can usefully be employed by an individual in order to compare different non-parametric probability distributions [34]. If we decompose a multivariate random quantity of order n inside of a linear space provided with a quadratic metric then we observe that it is not possible to consider more than two random quantities at a time. It is possible to study probability inside of a linear space provided with a quadratic metric because the most important role in probability theory is played by the notion of linearity. On the other hand, linearity can be extended by considering the notion of multilinearity, so we are also able to interpret principal component analysis connected with non-parametric probability distributions in a new and profitable way. It is also possible to consider two different quadratic metrics in order to compare more than two probability distributions, where a linear quadratic metric is different from a non-linear quadratic metric. In particular, non-parametric probability distributions based on the dichotomy between possibility and probability can also be used in order to study statistical issues connected with sampling as well as risky assets in problems of an economic nature characterizing decision theory [35].
They can consequently be used in order to treat cardinal utility functions in problems of an economic nature involving decisions under uncertainty and riskiness made by an individual [36].

Conclusions
We have considered a linear space provided with a quadratic metric in order to represent all logically possible alternatives of a random quantity meant as a geometric entity. We did not assume a probability distribution as already attached to it. We have decomposed a random quantity as well as its coherent prevision inside of a linear space provided with a quadratic metric by using the geometric property of collinearity. We have shown a quadratic and linear metric by taking the α-criterion of concordance introduced by Gini into account. We have decomposed a random quantity whose possible values represent all deviations from a mean value as well as its variance inside of a linear space provided with a quadratic metric by using the geometric property of collinearity. We have realized that the origin of the notion of variability is not standardized but it always depends on the state of information and knowledge of an individual. We have shown different intrinsic properties of non-parametric probability distributions as well as of probabilistic indices summarizing them. We have defined the notion of α-distance between two non-parametric probability distributions. All of this works when an event is not a measurable set but it is an unequivocal proposition susceptible of being true or false at the right time. Probability viewed as a mass is then a non-negative and finitely additive function taking the value 1 on the whole space of events coinciding with a finite partition of incompatible and exhaustive outcomes characterizing a random quantity. All of this is interesting partly because it can be extended to more than two random quantities which are jointly considered.
Author Contributions: Both the authors contributed to the manuscript; however, P.A. had a central role in the conceptualization whereas F.M. contributed to improve the paper; writing-original draft preparation, P.A.; writing-review and editing, P.A. and F.M.; supervision, F.M.; funding acquisition, P.A. All authors have read and agreed to the published version of the manuscript.