Abstract
There exist uncertain situations in which a random event is not a measurable set, but a point of a linear space inside of which it is possible to study different random quantities characterized by non-parametric probability distributions. We show that if an event is not a measurable set, then it is contained in a closed structure which is not a σ-algebra but a linear space over ℝ. We think of probability as being a mass. It is really a mass with respect to problems of statistical sampling. It is a mass with respect to problems of the social sciences. In particular, it is a mass with regard to economic situations studied by means of the subjective notion of utility. We are able to decompose a random quantity meant as a geometric entity inside of a metric space. It is also possible to decompose its prevision and variance inside of it. We show a quadratic metric in order to obtain the variance of a random quantity. The origin of the notion of variability is not standardized within this context: it always depends on the state of information and knowledge of an individual. We study different intrinsic properties of non-parametric probability distributions as well as of the probabilistic indices summarizing them. We define the notion of α-distance between two non-parametric probability distributions.
1. Introduction
We propose a mathematical model where the probability of an event has a concrete image [1]. On the other hand, the difference between two opposite points of view is well known: some scholars interpret probability as a subjective measure of the degree of belief, whereas others consider it as an objective measure connected with measurable sets [2]. We will refer to those situations characterizing economic science, statistics and other related fields of interest in which such a difference has no reason to exist because it is evident that an event cannot naturally be meant as a measurable set [3]. We have elsewhere shown that the subjective approach to decisions under uncertainty, as we propose it, has innovative contributions to offer because the probability is basically viewed as the solution to a specific decision problem rather than an opening assumption [4].
2. Probability Viewed as a Mass
There exist situations for which a systematic set-theoretical interpretation of events is not conceptually satisfactory. It follows that an event is not always a measurable set, so a mechanical transposition of everything concerning measure theory into the calculus of probability is not always appropriate [5]. There exist situations for which an event is then an unequivocal proposition such that, by betting on it, it is possible to establish whether the event is true or false because it has occurred or not. It is consequently possible to establish whether the bet has been won or lost. We will show in this paper that an event is contained in a closed structure which is not a σ-algebra but a linear space over ℝ. An event has an intrinsic meaning which is independent of the mathematical notion of measurable set; for this reason, we do not select a specific orthonormal basis of the linear space under consideration among all its possible orthonormal bases. Uncertainty about an event depends on a lack of information [6]. It ceases only when a given individual receives certain information about it. Probability deals with events in the sense of single and well-defined cases. It always depends on a given state of information and knowledge of the individual evaluating it. It is then a subjective probability [7]. We think of probability as being a mass. It is always a function defined on the entire space of events: the sum of the non-negative probabilities of the events of a finite partition is equal to 1, so axiomatic probability theory is satisfied. Nevertheless, such a mass can freely be distributed without altering its geometric support, which is the space of random quantities coinciding with a linear space over ℝ. We observe that different distributions of mass are different measures, but the notion of measure has no special status here, unlike what happens when we refer to measure theory. When we speak about mass as a measure, we mean that it can coherently be moved in whatever way an individual likes. When we speak about mass as a measure, we do not mean something fixed.
3. What We Mean by a Random Quantity
A random quantity is a mathematical function defined on the space of its possible outcomes. Its domain is a collection of possible events, where every event is expressed by a real number [8]. A quantity is random for an individual because he does not know its true value. The true value of a random quantity is unique. If an individual calls it random, then it is unknown to him. He is therefore in doubt between at least two possible values [9]. How large the domain of the possible is for an individual depends on his state of uncertainty at a given moment. Such a domain contains all the logically possible values of a random quantity for a given individual at a given instant [10]. In particular, we consider the finest possible partition into atomic events within this context. In other words, we consider the different points constituting the space of possible outcomes. It is embedded in a linear space over ℝ. That alternative which will turn out to be verified a posteriori is nothing but a random point in a linear space over ℝ. It expresses everything there is to be said. We have to note that a set is always subdivisible into subsets. Nevertheless, its subdivision necessarily stops when it reaches its constituent elements: it cannot evidently continue. An event is conversely subdivisible without ever stopping, even if in any situation it is appropriate to stop as soon as the subdivision is sufficient for the problem under consideration. All logically possible values of a random quantity are then points that are not further subdivisible for the purposes of the problem under consideration. Each random quantity can coherently be assigned a distribution of probability as an expression of the attitude of the individual under consideration. We do not admit that a probability distribution is already attached to it. We say that a probability distribution can vary from individual to individual. It can vary in accordance with the state of information and knowledge of each individual.
4. The Logic of Certainty
An individual is in a state of uncertainty, at a precise moment, when she/he is in a state of lack of knowledge due to imperfect information at that instant [11]. The latter could be connected with many facts or events [12]. We are not, however, interested in understanding why the information and knowledge of an individual at a given instant are imperfect. We are only interested in observing that such an imperfection objectively exists [13]. We are always dominated by uncertainty, even when we think, reason and decide. If we are always dominated by uncertainty, then we should use only the language of uncertainty characterized by probability. Nevertheless, we do not immediately use it: we firstly use the language of certainty, where the logic of certainty is a structure built above it because it is used in order to reduce the risk of falling into error when we reason deductively. The logic of certainty separates what is determined, because it is true or false, from what is not determined, because it is neither true nor false. Such a separation is made by an individual on the basis of the data he uses at a given instant. In other words, the logic of certainty enables us to identify the domain of a mathematical function representing the random quantity under consideration [14]. With regard to a specific situation that an individual has to take into account, there could exist a huge number of possible alternatives. Nevertheless, his information and knowledge will permit him to exclude some of them as impossible. All the others will remain possible for him. They constitute the domain of the above function. On the other hand, if an individual is faced with an infinite number of conceivable alternatives, where their number could be countably infinite or uncountable, then his information and knowledge will permit him to consider a finite number of them. We are therefore able to speak about a limitation or approximation of expectations, where the latter is nothing but a discretization of continuous alternatives. It follows that it will be appropriate to assign probabilities to a finite number of points contained in the set of possible values of the random quantity under consideration. It is the domain of the mathematical function identifying it. It will be decomposed by us through geometry. We then say that the logic of certainty also coincides with geometry. We propose a geometric superstructure more effective than any other superstructure for representing all the logically possible alternatives that can objectively be considered with reference to a random quantity.
5. Methodological Aspects Concerning Non-Parametric Probability Distributions
Let X be a random quantity whose possible values are denoted by x^1, x^2, …, x^m, where we have x^1 < x^2 < ⋯ < x^m without loss of generality. We evidently deal with a finite partition of incompatible and exhaustive events denoted by E_1, E_2, …, E_m. It follows that x^1 is the true value of X if E_1 occurs with subjective probability equal to p_1, x^2 is the true value of X if E_2 occurs with subjective probability equal to p_2, …, x^m is the true value of X if E_m occurs with subjective probability equal to p_m, where it turns out to be ∑_{i=1}^m p_i = 1. An individual distributes a unit mass of probability among all the possible events contained in X and expressed by means of the real numbers identifying I(X) = {x^1, …, x^m} [15]. He enters into the logic of prevision in order to carry out this task [16]. He is able to attribute to the different possible events a greater or lesser degree of belief. It is nothing but a new, extralogical, subjective, personal and relative factor expressing his attitude viewed as his inclination to expect that a particular event rather than others will be true at the right time [17]. We have to observe a very important point: the logic of certainty obeys the laws of mathematics. On the other hand, we reason deductively when we are faced with mathematics. In particular, it obeys the laws of geometry within this context. This is because all the possible and elementary events contained in a random quantity are geometrically represented as points inside of a linear space over ℝ. The logic of prevision conversely obeys the conditions of coherence pertaining to the meaning of probability, not to motives of a mathematical nature [18]. Given a finite partition of incompatible and exhaustive elementary events characterizing a random quantity, the conditions of coherence impose no limits on the probabilities that an individual may subjectively assign, except that the sum of all the non-negative probabilities under consideration has to be equal to 1. A combination of bets on all the events of a finite partition of incompatible and exhaustive elementary events coincides with a single bet on the certain event. It is a fair bet if and only if the sum of all the non-negative probabilities under consideration is equal to 1. Such a bet is consequently acceptable by an individual in both senses indifferently. If an individual conversely accepts the same kind of bet, where the sum of the probabilities under consideration is not equal to 1, then his decision-making leads him to a certain and undesirable loss. A bet viewed as a real or conceptual experiment concerning a probability distribution known over a finite partition of elementary events makes it possible to measure the notion of prevision of a random quantity and, especially, of probability of an event from an operational point of view.
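The coherence condition just described can be made concrete with a minimal numerical sketch (in Python). The function and variable names, prices and stakes below are illustrative, not taken from the paper: a combination of unit bets on a partition of incompatible and exhaustive events is a single bet on the certain event, fair exactly when the betting prices sum to 1.

```python
import numpy as np

# A minimal sketch of the coherence condition: exactly one event of the
# partition occurs, so exactly one bet pays its stake.

def overall_gain(prices, stake=1.0):
    """Gain of a bettor who pays prices[i] for a bet returning `stake`
    if E_i occurs; the gain is the same whichever event occurs."""
    return stake - np.sum(prices)

coherent = np.array([0.2, 0.5, 0.3])    # probabilities summing to 1
incoherent = np.array([0.3, 0.5, 0.4])  # "probabilities" summing to 1.2

print(overall_gain(coherent))    # 0.0: a fair bet on the certain event
print(overall_gain(incoherent))  # about -0.2: a certain and undesirable loss
```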
6. A Geometric Definition of a Random Quantity
Let V be a linear space over ℝ provided with a quadratic metric and let B_m = {e_1, e_2, …, e_m} be an orthonormal basis of it [19]. Any element x of V is uniquely determined by a linear combination of basis vectors. In particular, it is possible to obtain

x = x^1 e_1 + x^2 e_2 + ⋯ + x^m e_m (1)

as well as

x = x^i e_i, (2)

where we have

⟨e_i, e_j⟩ = δ_{ij}, (3)

with i, j = 1, …, m. We have evidently used the Einstein summation convention in Equation (2). Having said that, we prove the following
Proposition 1.
Let I(X) = {x^1, …, x^m} be the set of all logically possible values of X. Each logically possible value of X is then associated with a single and well-defined random event belonging to one of m straight lines of V on which a same Cartesian coordinate system is established.
Proof.
Each contravariant component x^i of x can be seen as a vector of V written in the form given by x^i e_i (no summation over i), i = 1, …, m. Thus, we write

x_(1) = x^1 e_1, (4)

with x^1 ∈ ℝ, as well as

x_(m) = x^m e_m, (5)

with x^m ∈ ℝ. It is evident that x_(1) and e_1 are collinear as well as x_(m) and e_m. It follows that it is possible to write

x = x_(1) + x_(2) + ⋯ + x_(m), (6)

where each x_(i) is an element of a subspace of V denoted by V_i, i = 1, …, m, whose dimension is equal to 1. We obtain

V = V_1 ⊕ V_2 ⊕ ⋯ ⊕ V_m (7)

because the direct sum of the m subspaces of V is nothing but V itself. Such a direct sum is also orthogonal. We note that one has

dim V = dim V_1 + dim V_2 + ⋯ + dim V_m = m, (8)

with dim V_i = 1, i = 1, …, m. The contravariant components of x_(i) are given by

(x_(i))^j = x^i δ_i^j, (9)

with i, j = 1, …, m. We observe that δ_i^j is the Kronecker delta. If it turns out to be i = j, then we get (x_(i))^j = x^i. Instead, when it turns out to be i ≠ j, we get (x_(i))^j = 0. We observe that Equation (10) below is characterized by the Einstein summation convention. Hence, we can write

x = x^i δ_i^j e_j. (10)

□
We then consider m oriented straight lines of V which are expressed in the same unit of length [20]. They are pairwise orthogonal and meet in the origin of V, which is the zero vector of V. We do not contemplate particular m-tuples of real numbers belonging to any straight line of V; we consider only the real numbers associated with each of them. All of this results from the geometric property of collinearity just shown. Each straight line of V expresses the whole of the space of alternatives with respect to one of the m alternatives of X. Each straight line of V contains infinitely many possible alternatives. Regarding one of the m alternatives of X, we note that the knowledge and information of an individual, at a given instant of time, do not allow him to exclude a single real number: it is still possible for him because it is not yet either true or false [21]. The same holds when thinking about all the other alternatives of X.
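The decomposition stated in Proposition 1 can be illustrated numerically. The short sketch below assumes the notation adopted in this rewrite (values x^i, standard basis e_i); all numbers are arbitrary.

```python
import numpy as np

# A small sketch of Proposition 1: the vector x = (x^1, ..., x^m) is the
# direct (and orthogonal) sum of m collinear components x^i * e_i, one per
# one-dimensional subspace V_i.

m = 4
x = np.array([1.0, 2.5, 4.0, 7.0])    # logically possible values of X
basis = np.eye(m)                      # orthonormal basis e_1, ..., e_m

components = [x[i] * basis[i] for i in range(m)]   # elements of V_1, ..., V_m

assert np.allclose(sum(components), x)  # the direct sum restores x
```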
7. A Canonical Expression of a Random Quantity
We observe that all the events contained in X are embedded in V. Probability meant as a mass is defined inside of a linear space provided with a quadratic metric. The same symbol P is used in order to denote both the notion of prevision or mathematical expectation of a random quantity and the notion of probability of an event. This is because an event is nothing but a particular random quantity [22]. Anyway, we deal with m masses denoted by p_1, …, p_m such that it is possible to write ∑_{i=1}^m p_i = 1. They are located on the m components denoted by x^1, …, x^m of the m vectors denoted by x_(1), …, x_(m) of V. We consider a probability distribution on I(X) inside of V in this way. This is because x^1, …, x^m are real numbers. We have evidently P(E_1) = p_1, with 0 ≤ p_1 ≤ 1, …, P(E_m) = p_m, with 0 ≤ p_m ≤ 1. After writing

x = x^i e_i, (11)

with i = 1, …, m, where B_m = {e_1, …, e_m} is an orthonormal basis of V, it turns out to be

X = x^1 |E_1| + x^2 |E_2| + ⋯ + x^m |E_m|, (12)

where we have

|E_i| = 1 if E_i turns out to be true, and |E_i| = 0 otherwise, (13)

for every i = 1, …, m. It follows that it is possible to establish the following
Definition 1.
Let id be the identity function on ℝ, where ℝ is viewed as a linear space over itself. Given m elementary events of a finite partition of incompatible and exhaustive events, a random quantity denoted by X is the restriction of id to I(X) = {x^1, …, x^m} such that it turns out to be X(x^i) = x^i, i = 1, …, m.
In particular, we say that X is a linear operator whose canonical expression coincides with Equation (12). We say that X is an isometry. This means that each single event could uniquely be represented by infinitely many numbers; therefore, we can write x^i + k, where k is an arbitrary constant. In this way, we are clearly considering infinite translations and different quantities from a geometric perspective. Notwithstanding, they are the same quantity from a randomness point of view because the events and their associated probabilities do not vary. On the other hand, if two or more propositions could represent the same event encompassed in X, then two or more real numbers can uniquely identify it [23]. Hence, a change of origin is not essential from a randomness perspective. We can always lead the changed origin back to the original one. Therefore, in this way, we consider a different closed structure. The latter is not a σ-algebra but a linear subspace over ℝ, where every linear subspace is nothing but a linear space contained in another whose dimension is higher. Since every event contained in X belongs to one of them according to Equation (11), we deal with m subspaces of dimension 1. A random quantity X whose logically possible values identify an m-dimensional vector of V is an element of a set of random quantities denoted by S [24]. We observe that it is possible to write
S ⊂ V, (14)

where S is an m-dimensional linear space contained in V. The reason is that the sum of two vectors belonging to S must be a vector whose components are all different: it then belongs to S, and it belongs to S if and only if its components are all different. The same holds when considering the multiplication of a vector of S by a non-zero real number. Hence, we say that S is closed with respect to the sum of two vectors of it and the multiplication of a vector of it by a real number that is different from zero. We consider a closed structure coinciding with an m-dimensional linear space contained in V in this way. We observe that V can also be read as an affine space over itself and each element of V can be read as a point of an affine space, where the zero vector of V is the origin of it. We could then be faced with a point of an affine space or a vector of a linear space. We choose a covariant notation with respect to the components of the probability distribution, so we write
p_i = P(E_i), (15)

with i = 1, …, m, where p_i represents a subjective probability assigned to x^i, i = 1, …, m, by an individual according to his/her degree of belief in the occurrence of E_i [18]. If we write

{(x^1, p_1), (x^2, p_2), …, (x^m, p_m)}, (16)

then we identify a distribution of probability embedded inside of a linear space provided with a quadratic metric. A coherent prevision of X is given by

P(X) = x^1 p_1 + x^2 p_2 + ⋯ + x^m p_m = x^i p_i. (17)

It is linear and homogeneous [25]. From x^1 < x^2 < ⋯ < x^m, with ∑_{i=1}^m p_i = 1, it follows that it turns out to be

x^1 ≤ P(X) ≤ x^m. (18)

We note that the covariant components of every vector of V coincide with the contravariant ones because we deal with an orthonormal basis of V. Nevertheless, we want to stress that x^i and p_i are of a diverse nature.
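A minimal numerical sketch of Equations (17) and (18), under the notation assumed above; the values and probabilities are illustrative.

```python
import numpy as np

# The coherent prevision P(X) is the probability-weighted sum of the
# possible values, and it lies between the lowest and the highest of them.

x = np.array([1.0, 2.5, 4.0, 7.0])   # contravariant components x^i
p = np.array([0.1, 0.4, 0.3, 0.2])   # covariant components p_i, sum to 1

P_X = x @ p                           # P(X) = x^i p_i (Einstein convention)
assert x.min() <= P_X <= x.max()      # coherence bound of Equation (18)
print(P_X)                            # 3.7
```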
8. A Coherent Prevision of a Random Quantity Viewed as an m-Dimensional Vector Coinciding with Its Center of Mass
After decomposing X into m single random events, we note that its coherent prevision is given by

P = P(X) (e_1 + e_2 + ⋯ + e_m), (19)

where we have P(X) = x^i p_i, i = 1, …, m [26]. We say that Equation (19) is an m-dimensional vector belonging to V whose contravariant components are all equal. We then write

P = P(X) e_1 + P(X) e_2 + ⋯ + P(X) e_m. (20)

We note that Equation (19) holds when the zero vector of V coincides with the origin of the Cartesian coordinate system under consideration. We note that the i-th contravariant component of P is given by

P^i = x^j p_j, (21)

where we have i = 1, …, m. Each contravariant component of P is then obtained by means of a linear combination characterized by Equation (21). We observe that the Einstein summation convention holds with regard to Equation (21). The m contravariant components of P are therefore originated by m groups of numbers, where every group consists of m numbers that are added.
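The following sketch illustrates Equations (19)-(21): the prevision vector is the center of mass of the distribution, with m equal contravariant components. The data are the same illustrative numbers used above.

```python
import numpy as np

# The prevision vector has m equal contravariant components, each computed
# as the same probability-weighted sum x^j p_j.

x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

P_vec = np.full(x.shape, x @ p)       # (P(X), ..., P(X)), the center of mass
print(P_vec)                           # [3.7 3.7 3.7 3.7]
```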
9. A Decomposition of a Coherent Prevision of a Random Quantity
If it is possible to decompose a random quantity denoted by X, then it is also possible to decompose its coherent prevision denoted by P(X). We therefore consider the following
Proposition 2.
Let I(X) = {x^1, …, x^m} be the set of all logically possible values of X, where it turns out to be x^1 < x^2 < ⋯ < x^m. If B_m = {e_1, …, e_m} is an orthonormal basis of V, then π, with π^i = x^i p_i, i = 1, …, m, is a direct and orthogonal sum of m vectors belonging to m one-dimensional subspaces of V.
Proof.
We have

x_(1) = x^1 e_1 (22)

as well as

π_(1) = x^1 p_1 e_1 (23)

with regard to the first subspace V_1 of V. A same probability expressed by p_1 is associated with Equation (23) when we consider x^1 + k instead of x^1, where k is an arbitrary constant. We identify different vectors on a same straight line in V in this way [24]. Its direction is established by e_1. All collinear vectors lying on the straight line established by e_1 represent the same event from a randomness point of view on condition that the starting inequalities given by x^1 < x^2 < ⋯ < x^m remain the same when we write them in the form expressed by x^1 + k < x^2 + k < ⋯ < x^m + k. Such an event is then verified when the true value of X which has occurred a posteriori coincides with the lowest possible value of X. Conversely, we write

x_(m) = x^m e_m (24)

as well as

π_(m) = x^m p_m e_m (25)

with regard to the m-th subspace V_m of V. A same probability denoted by p_m is associated with Equation (25) when we consider x^m + k, where k is an arbitrary constant. All collinear vectors lying on the straight line established by e_m represent the same event from a randomness point of view on condition that the starting inequalities given by x^1 < x^2 < ⋯ < x^m remain the same when we write them in the form expressed by x^1 + k < x^2 + k < ⋯ < x^m + k. Such an event is then verified when the true value of X which has occurred a posteriori coincides with the highest possible value of X. What we have just said does not change by considering all the other subspaces of V. A coherent prevision of X always coincides with the direct sum of m vectors connected with m incompatible and exhaustive elementary events. Such a direct sum is also orthogonal. Let π be a vector belonging to V and obtained by means of a linear combination of m vectors that are linearly independent. The contravariant components of π are m scalars whose sum coincides with a coherent prevision of X. Such a sum is connected with m incompatible and exhaustive elementary events [27]. We therefore write

π = x^1 p_1 e_1 ⊕ x^2 p_2 e_2 ⊕ ⋯ ⊕ x^m p_m e_m, (26)

where we have P(X) = x^1 p_1 + x^2 p_2 + ⋯ + x^m p_m. □
If we consider x̄^i = x^i − P(X), with i = 1, …, m, instead of x^i, i = 1, …, m, then we write

π̄ = x̄^1 p_1 e_1 ⊕ x̄^2 p_2 e_2 ⊕ ⋯ ⊕ x̄^m p_m e_m, (27)

where we have π̄^i = x̄^i p_i. The contravariant components of π̄ are m scalars whose sum coincides with a coherent prevision of X̄ denoted by P(X̄).
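A short sketch of the decompositions given by Equations (26) and (27), with illustrative data: the m scalar contributions x^i p_i sum to the prevision, and the corresponding contributions of the deviations sum to zero.

```python
import numpy as np

# The prevision decomposes into m scalar contributions, one per
# one-dimensional subspace V_i.

x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

contributions = x * p                        # contravariant components of pi
assert np.isclose(contributions.sum(), x @ p)

x_bar = x - x @ p                            # deviations from P(X)
assert np.isclose((x_bar * p).sum(), 0.0)    # prevision of the deviations is 0
```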
10. Quadratic Indices and a Decomposition of the Variance of a Random Quantity
Given a coherent prevision of X, we are able to establish the following
Definition 2.
Let X̄ = X − P(X) be a transformed random quantity whose possible values represent all deviations from P(X). It is then represented by the vector x̄ whose contravariant components are given by x̄^i = x^i − P(X), i = 1, …, m.
If we consider X̄ = X − P(X), then we evidently deal with a linear transformation of X. It is a change of origin. It always depends on the state of information and knowledge of the individual evaluating. A coherent prevision of X̄ is necessarily given by

P(X̄) = P(X) − P(X) = 0. (28)

Having said that, we firstly observe that the α-norm of the vector x identifying X is expressed by

‖x‖²_α = x^i x^j p_{ij}. (29)

We use the term α-norm because we refer to the α-criterion of concordance introduced by Gini [28]. The notion of α-norm is a consequence of the notion of α-product with respect to two vectors belonging to V and representing the logically possible values of two random quantities which are jointly considered. They are logically independent. They identify a bivariate random quantity which is generically denoted by (X^1, X^2). The number of its logically possible values is overall equal to m². We evidently deal with a partition of m² incompatible and exhaustive elementary events. The joint distribution of (X^1, X^2) is geometrically represented by the covariant components p_{ij} of the tensor p, where p is an affine tensor of order 2. From the notion of α-product results the one of α-norm because the latter is nothing but an α-product between two random quantities whose possible values are all equal. The covariant components of the tensor p having different numerical values as indices are then equal to 0. We therefore say that the absolute maximum of concordance is actually obtained. We note that it turns out to be p_{ii} = p_i, i = 1, …, m. Secondly, the α-norm of the vector x̄ representing X̄ is given by

‖x̄‖²_α = ∑_{i=1}^m (x^i − P(X))² p_i = σ²_X. (30)

It therefore represents the variance of X. The origin of the notion of variability is evidently not standardized within this context. The standard deviation of X is given by

σ_X = √(‖x̄‖²_α). (31)
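A numerical sketch of Equations (29)-(31), assuming the diagonal form of the tensor p discussed above; the data are illustrative.

```python
import numpy as np

# The alpha-norm of the deviation vector gives the variance of X; its
# square root gives the standard deviation.

x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

P_X = x @ p
var_X = ((x - P_X) ** 2) @ p           # squared alpha-norm of the deviations
sigma_X = np.sqrt(var_X)
print(var_X, sigma_X)
```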
A metric connection between X̄ and a random quantity whose possible values are geometrically represented by an m-dimensional vector belonging to V is therefore obtained by using the notion of α-norm. It is consequently possible to decompose a random quantity whose possible values represent all deviations from P(X), as well as the variance of X, by using the geometric property of collinearity. After decomposing X and X̄, we are able to show the following
Proposition 3.
Let I(X̄) = {x̄^1, …, x̄^m} be the set of all logically possible values of X̄, where it turns out to be x̄^i = x^i − P(X). If B_m = {e_1, …, e_m} is an orthonormal basis of V, then v, with v^i = (x̄^i)² p_i, i = 1, …, m, is a direct and orthogonal sum of m vectors belonging to m one-dimensional subspaces of V.
Proof.
What we have said with respect to X and P(X) continues to be valid when we consider a random quantity whose possible values represent all deviations from P(X). We write

x̄ = x̄^1 e_1 ⊕ x̄^2 e_2 ⊕ ⋯ ⊕ x̄^m e_m (32)

as well as

v = (x̄^1)² p_1 e_1 ⊕ (x̄^2)² p_2 e_2 ⊕ ⋯ ⊕ (x̄^m)² p_m e_m, (33)

where we have σ²_X = ∑_{i=1}^m (x̄^i)² p_i. The contravariant components of v are then m scalars whose sum coincides with the variance of X. Such a sum is connected with m incompatible and exhaustive elementary events. □
11. Invariance of a Random Quantity Subjected to a Translation
If we transform all the logically possible values of X by using a same m-dimensional constant denoted by a, whose contravariant components are all equal to a ∈ ℝ, then we obtain a transformed random quantity in this way. We denote it by X + a. It represents a translation of X. Its contravariant components are then expressed by x^i + a, where we have i = 1, …, m. We assign to them the same subjective probabilities assigned to x^1, …, x^m. A coherent prevision of X + a is then denoted by

P(X + a) = P(X) + a. (34)

We observe that all the deviations from P(X + a) of the possible values of X + a are the same as those from P(X) of the possible values of X. We now transform all the possible values of X by using a same m-dimensional constant which is different from a. We denote it by b. We obtain another transformed random quantity in this way. We denote it by X + b. It represents another translation of X. Its contravariant components are then expressed by x^i + b, where we have i = 1, …, m. We assign to them the same subjective probabilities assigned to x^i and x^i + a, i = 1, …, m. A coherent prevision of X + b is then denoted by

P(X + b) = P(X) + b. (35)

We observe that all the deviations from P(X + b) of the possible values of X + b are the same as those from P(X + a) of the possible values of X + a. They are also the same as those from P(X) of the possible values of X. We then note that X − P(X), (X + a) − P(X + a) and (X + b) − P(X + b) have the same possible values, so we write

X̄ = (X + a) − P(X + a) = (X + b) − P(X + b). (36)

We therefore say that a random quantity denoted by X̄ is invariant when X is subjected to different translations. We make clear a basic point: all possible translations of X are characterized by the same subjective probabilities assigned to the possible values of the random quantities under consideration. This means that each event contained in those random quantities is always the same from a randomness point of view [29]. We also observe that the weighted summation of the possible values of X̄, namely ∑_{i=1}^m x̄^i p_i, is always equal to 0. Such a property must always characterize any random quantity whose possible values represent all deviations from a mean value. If it does not hold, then we are not able to speak about the invariance of a random quantity whose possible values represent all deviations from a mean value.
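The invariance just described can be checked numerically. In the sketch below the translation constants are arbitrary illustrative choices.

```python
import numpy as np

# Deviations from the prevision are invariant under translations of X,
# and their weighted summation is always zero.

x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

def deviations(values):
    return values - values @ p         # subtract the coherent prevision

for shift in (0.0, 3.0, -11.5):        # illustrative translation constants
    d = deviations(x + shift)
    assert np.allclose(d, deviations(x))   # same deviations for every shift
    assert np.isclose(d @ p, 0.0)          # weighted summation equal to 0
```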
12. A Particular Random Quantity Subjected to a Rotation
We establish the following
Definition 3.
Let x̄ be an m-dimensional vector of V identifying a random quantity whose possible values represent all deviations from P(X). Given an orthogonal matrix denoted by A = (a^i_j), where it turns out to be AᵀA = A Aᵀ = I as well as A⁻¹ = Aᵀ, it is possible to write

x̄′ = A x̄, (37)

where x̄′ represents an m-dimensional vector of V originated by a specific rotation of x̄ determined by A.
We note that the contravariant indices of the generic element a^i_j of A represent the rows of A, whereas the covariant indices represent its columns. After noting that the contravariant components of x̄′ are given by x̄′^i, i = 1, …, m, we observe that it turns out to be x̄′^i = a^i_j x̄^j. This means that there exist m linear and homogeneous relationships between the contravariant components of x̄′ and the ones of x̄. We ask whether X̄ is invariant with respect to a rotation of x̄. We then prove the following
Proposition 4.
The weighted summation of the possible values of X̄′, where x̄′ identifies a rotated random quantity whose possible values represent all deviations from a mean value, is not necessarily equal to 0.
Proof.
Given Equation (37), we write

x̄′^i = a^i_j x̄^j, i = 1, …, m. (38)

We denote by covariant indices the m results, where each of them is originated by the multiplication of each possible value of X̄ by the corresponding probability. We evidently consider m possible values of X̄ as well as m corresponding probabilities. If we use covariant indices, then we have to write

x̄^i p_i = 0 (39)

in order to express the property connected with the possible values of a random quantity representing all deviations from a mean value and their corresponding probabilities. We have consequently to verify if it turns out to be

x̄′^i p_i = 0. (40)

If we write

x̄^i = x^i − P(X), (41)

where we have i = 1, …, m, then we note that we have m products, where each of them is given by x̄^i p_i, i = 1, …, m. We have also ∑_{i=1}^m x̄^i p_i = 0. We consequently observe that Equation (38) can be expressed by

x̄′^i = a^i_j x^j − a^i_j P(X), i = 1, …, m. (42)

We observe that the transformation of the components of x̄ is the same as the one connected with the components of x. We note that the m subtrahends appearing in Equation (41) are all equal, unlike the m subtrahends a^i_j P(X) appearing in Equation (42). This means that Equation (40) does not necessarily hold, so X̄′ is not invariant with respect to a rotation of x̄. □
We observe that X̄ and X̄′ are the same quantity from a randomness point of view. They are different quantities from a geometric point of view. This means that there always exists a one-to-one correspondence between the events of the set of events characterizing X̄ and the ones of the set of events characterizing X̄′. It is explained by the same probabilities which are coherently assigned to the corresponding events [30].
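A numerical sketch of Proposition 4; the orthogonal matrix is generated at random purely for illustration.

```python
import numpy as np

# After an orthogonal rotation, the weighted summation of the rotated
# deviations need not vanish.

rng = np.random.default_rng(0)
x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

x_bar = x - x @ p
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a random orthogonal matrix
x_bar_rot = A @ x_bar

print(x_bar @ p)      # 0.0 (up to rounding): Equation (39)
print(x_bar_rot @ p)  # generally non-zero: Equation (40) fails
```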
13. Intrinsic Properties of Probabilistic Indices
We prove the following
Proposition 5.
The variance of X and its standard deviation are invariant with respect to a rotation of x̄ determined by an orthogonal matrix denoted by A.
Proof.
Given x̄, its α-norm is coherently expressed by Equation (30) [31]. Since it is possible to write x̄_i = p_{ij} x̄^j, we note that it turns out to be ‖x̄‖²_α = x̄^i x̄_i. We use covariant indices when we are faced with probabilities. We are then able to write

‖x̄‖²_α = x̄^i x̄^j p_{ij}. (43)

It follows that it turns out to be

‖x̄‖²_α = ∑_{i=1}^m (x̄^i)² p_i = σ²_X. (44)

Given X′, where X′ is a random quantity obtained when X is subjected to a rotation, let x̄′ be an m-dimensional vector of V representing all deviations from P(X′). We then write

x̄′^i = a^i_j x̄^j, x̄′_i = a_i^j x̄_j, (45)

where a_i^j denotes the generic element of Aᵀ. Since it turns out to be

a^i_j a_i^k = δ_j^k, (46)

where δ_j^k represents the Kronecker delta, we consequently obtain

x̄′^i x̄′_i = a^i_j x̄^j a_i^k x̄_k = δ_j^k x̄^j x̄_k = x̄^j x̄_j. (47)

We can evidently write

‖x̄′‖²_α = ‖x̄‖²_α (48)

as well as

σ²_X′ = σ²_X. (49)

All of this shows that the variance of X is invariant with respect to a rotation of x̄ determined by A. Since it is possible to write

σ_X′ = √(‖x̄′‖²_α) = √(‖x̄‖²_α) = σ_X, (50)

we note that the same thing goes when we consider the standard deviation associated with X and X′. □
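A sketch of the invariance argument of Equations (45)-(47): the contraction of contravariant components with covariant components is preserved when both families transform under the same orthogonal matrix A. The data and the matrix are illustrative.

```python
import numpy as np

# The contraction x_bar^i x_bar_i equals the variance (Equation (44)) and
# is unchanged by an orthogonal transformation applied to both families.

rng = np.random.default_rng(1)
x = np.array([1.0, 2.5, 4.0, 7.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

x_bar = x - x @ p                     # contravariant components
x_bar_cov = p * x_bar                 # covariant components p_i * x_bar^i

A, _ = np.linalg.qr(rng.normal(size=(4, 4)))
variance = x_bar @ x_bar_cov                      # Equation (44)
variance_rot = (A @ x_bar) @ (A @ x_bar_cov)      # Equation (47)
assert np.isclose(variance, variance_rot)         # invariance
```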
14. Variations Connected with the Bravais–Pearson Correlation Coefficient
Given X, let Z be a random quantity identifying variations. Its logically possible values are obtained by means of a relationship between two quantities expressed in the form of a ratio. We firstly consider the logically possible values of a random quantity representing all deviations from a mean value; it is defined with respect to X [32]. We secondly consider the standard deviation of X denoted by σ_X. We then establish the following
Definition 4.
A random quantity identifying variations denoted by Z is geometrically represented by an m-dimensional vector of V denoted by z whose contravariant components are given by

z^i = (x^i − P(X))/σ_X = x̄^i/σ_X, i = 1, …, m. (51)

We note that the variance of Z as well as its standard deviation are always equal to 1. We therefore write

σ²_Z = ∑_{i=1}^m (z^i)² p_i = 1 (52)

as well as

σ_Z = 1. (53)

We observe that the logically possible values of a random quantity representing variations are invariant with respect to an affine transformation of them expressed by

x^i ↦ a x^i + b, i = 1, …, m, (54)

where we have a > 0 and b ∈ ℝ. Having said that, we now consider a generic bivariate random quantity denoted by (X^1, X^2). Its possible values represent a partition of m² incompatible and exhaustive elementary events. We transform the logically possible values of X^1 and X^2. We therefore obtain two random quantities representing two variations whose m-dimensional vectors of V are respectively given by z_1 and z_2. They geometrically represent Z^1 and Z^2. We have to note a very important point: even if the logically possible values of X^1 and X^2 change, we observe that their joint probabilities as well as their marginal probabilities are always the same. This means that we always consider the same events from a randomness point of view. We represent the joint probabilities of the joint distribution using an affine tensor of order 2 denoted by p. We note that its components are represented by using covariant indices, so we write p_{ij}, i, j = 1, …, m. Having said that, we consider the α-product between z_1 and z_2, where it is a scalar product obtained by using the joint probabilities together with two equal-length sequences of contravariant components of m-dimensional vectors of V. We write

⟨z_1, z_2⟩_α = z_1^i z_2^j p_{ij}. (55)
We note that it turns out to be ⟨z_1, z_2⟩_α = ⟨z_2, z_1⟩_α because we deal with a vector homography, so we are also able to write

⟨z_2, z_1⟩_α = z_2^i z_1^j p_{ij}. (56)

It follows that it turns out to be

⟨z_1, z_2⟩_α = ∑_{i,j=1}^m ((x_1^i − P(X^1))/σ_X^1) ((x_2^j − P(X^2))/σ_X^2) p_{ij} = r(X^1, X^2), (57)

so we obtain the Bravais-Pearson correlation coefficient in this way. We now want to verify that the Bravais-Pearson correlation coefficient is invariant with respect to a rotation characterized by an orthogonal matrix denoted by A. Given z_1, if it is subjected to a rotation established by A, then its contravariant components are given by

z_1′^i = a^i_j z_1^j. (58)

Given z_2, if it is subjected to a rotation established by A, then its contravariant components are given by

z_2′^i = a^i_j z_2^j. (59)

The α-product between z_1′ and z_2′, where both z_1 and z_2 are not invariant with respect to a rotation determined by A, is then given by

⟨z_1′, z_2′⟩_α = z_1′^i z_2′^j p′_{ij}, with p′_{ij} = a_i^k a_j^l p_{kl}. (60)

Since it turns out to be

⟨z_1′, z_2′⟩_α = a^i_h z_1^h a^j_k z_2^k a_i^s a_j^t p_{st} = δ_h^s δ_k^t z_1^h z_2^k p_{st} = ⟨z_1, z_2⟩_α, (61)

we are able to establish that the Bravais-Pearson correlation coefficient is invariant with respect to a rotation determined by A. We therefore write

r(X^1, X^2) = ⟨z_1′, z_2′⟩_α = ⟨z_1, z_2⟩_α. (62)
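A numerical sketch of Equation (57); the joint probabilities below are an illustrative choice.

```python
import numpy as np

# The alpha-product of the two standardized marginal vectors, weighted by
# the joint probabilities p_ij, yields the Bravais-Pearson correlation
# coefficient.

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([10.0, 20.0, 40.0])
p = np.array([[0.10, 0.05, 0.05],     # joint probabilities p_ij
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.30]])    # all entries sum to 1

p1, p2 = p.sum(axis=1), p.sum(axis=0)           # marginal probabilities
z1 = (x1 - x1 @ p1) / np.sqrt(((x1 - x1 @ p1) ** 2) @ p1)
z2 = (x2 - x2 @ p2) / np.sqrt(((x2 - x2 @ p2) ** 2) @ p2)

r = z1 @ p @ z2                       # alpha-product z1^i z2^j p_ij
print(r)                              # correlation, between -1 and 1
```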
15. A Measure of Distance between Two Non-Parametric Probability Distributions
After considering Equation (14), we now write

S × S ⊂ V × V, (63)

where S is an m-dimensional linear space contained in V. Let (X^1, X^2) be a generic bivariate random quantity. Its possible values represent a partition of m² incompatible and exhaustive elementary events. Its marginal components denoted by X^1 and X^2 are geometrically represented by two m-dimensional vectors of V denoted by x_1 and x_2, where we have x_1, x_2 ∈ S. We note that S × S contains all the ordered pairs of m-dimensional vectors of V identifying the logically possible values of the marginal components of a bivariate random quantity. An affine tensor of order 2 denoted by p uniquely corresponds to every ordered pair of vectors belonging to S × S. The components of p identify all the joint probabilities characterizing a bivariate random quantity. Having said that, we write

(x_1, x_2) ∈ S × S, (64)

where it turns out to be x_1 = x_1^i e_i and x_2 = x_2^i e_i. It is possible to suppose that x_1 and x_2 are linearly independent without loss of generality. We write

⟨x_1, x_2⟩_α = x_1^i x_2^j p_{ij}, (65)

where the marginal probabilities corresponding to x_1 and x_2 are obtained by summing the components of p over one of its two covariant indices. In particular, if it turns out to be p_{ij} = 0 whenever i ≠ j, then we obtain

⟨x_1, x_2⟩_α = ∑_{i=1}^m x_1^i x_2^i p_{ii}. (66)
We therefore establish the following
Definition 5.
Given two non-parametric probability distributions, their α-distance coincides with d_α(x_1, x_2) = ‖x_1 − x_2‖_α. It is the α-norm of an m-dimensional vector denoted by x_1 − x_2 belonging to S. Such a vector is one of the infinite possible linear combinations λ_1 x_1 + λ_2 x_2 of x_1 and x_2, with λ_1 = 1 and λ_2 = −1.
From Equation (65), it is possible to derive Schwarz's α-generalized inequality given by

|⟨x_1, x_2⟩_α| ≤ ‖x_1‖_α ‖x_2‖_α. (67)

If x_1 + x_2 ∈ S, then one has ‖x_1 + x_2‖²_α = ‖x_1‖²_α + 2⟨x_1, x_2⟩_α + ‖x_2‖²_α, so it is possible to write the α-triangle inequality given by

‖x_1 + x_2‖_α ≤ ‖x_1‖_α + ‖x_2‖_α. (68)

The reverse α-triangle inequality is expressed by

‖x_1 − x_2‖_α ≥ |‖x_1‖_α − ‖x_2‖_α|. (69)

We also write

d_α(x_1, x_2) = ‖x_1 − x_2‖_α ≥ 0, with d_α(x_1, x_2) = 0 if and only if x_1 = x_2, (70)

so we say that (S, d_α) is a metric space [33]. What we have said can also be referred to random quantities whose possible values represent all deviations from a mean value. On the other hand, the variance of a probability distribution is a reasonable measure of the riskiness involved. Given two transformed random quantities, we are also able to realize whether they are equally risky or not. We can understand what their distance is in terms of riskiness.
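A final sketch of Definition 5 and Equation (67), assuming, as in the illustrative data below, a symmetric and positive semidefinite joint tensor p; all numbers are arbitrary.

```python
import numpy as np

# The alpha-distance between two distributions is the alpha-norm of
# x1 - x2, with every alpha-product weighted by the joint tensor p.

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([10.0, 20.0, 40.0])
p = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.30]])

def alpha_product(u, v):
    return u @ p @ v                  # u^i v^j p_ij

d2 = alpha_product(x1, x1) - 2 * alpha_product(x1, x2) + alpha_product(x2, x2)
d_alpha = np.sqrt(d2)                 # alpha-norm of x1 - x2
print(d_alpha)

# Schwarz's alpha-generalized inequality, Equation (67):
assert abs(alpha_product(x1, x2)) <= np.sqrt(
    alpha_product(x1, x1) * alpha_product(x2, x2))
```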
16. Some Future Works
If we consider n random quantities that are logically independent, where each of them is a partition of m (with m ≥ 2) incompatible and exhaustive elementary events, then it is also possible to consider a multivariate random quantity of order n. Every partition characterizing one of the n random quantities is uniquely determined by m possible values that are necessarily all distinct. It is possible to study both the n random quantities and a multivariate random quantity of order n inside of a linear space provided with a quadratic metric. It is analytically possible to decompose a multivariate random quantity of order n inside of a linear space provided with a quadratic metric in order to compute further summary indices. They can usefully be employed by an individual in order to compare different non-parametric probability distributions [34]. If we decompose a multivariate random quantity of order n inside of a linear space provided with a quadratic metric, then we observe that it is not possible to consider more than two random quantities at a time. It is possible to study probability inside of a linear space provided with a quadratic metric because the most important role in probability theory is played by the notion of linearity. On the other hand, linearity can be extended by considering the notion of multilinearity, so we are also able to interpret principal component analysis connected with non-parametric probability distributions in a new and profitable way. It is also possible to consider two different quadratic metrics in order to compare more than two probability distributions, where a linear quadratic metric is different from a non-linear quadratic metric. In particular, non-parametric probability distributions based on the dichotomy between possibility and probability can also be used in order to study statistical issues connected with sampling as well as risky assets in problems of an economic nature characterizing decision theory [35]. They can consequently be used in order to treat cardinal utility functions in problems of an economic nature involving decisions under uncertainty and riskiness made by an individual [36].
17. Conclusions
We have considered a linear space provided with a quadratic metric in order to represent all the logically possible alternatives of a random quantity meant as a geometric entity. We did not assume a probability distribution as already attached to it. We have decomposed a random quantity as well as its coherent prevision inside of a linear space provided with a quadratic metric by using the geometric property of collinearity. We have shown a quadratic and linear metric by taking the α-criterion of concordance introduced by Gini into account. We have decomposed a random quantity whose possible values represent all deviations from a mean value, as well as its variance, inside of a linear space provided with a quadratic metric by using the geometric property of collinearity. We have realized that the origin of the notion of variability is not standardized but always depends on the state of information and knowledge of an individual. We have shown different intrinsic properties of non-parametric probability distributions as well as of the probabilistic indices summarizing them. We have defined the notion of α-distance between two non-parametric probability distributions. All of this works when an event is not a measurable set but an unequivocal proposition susceptible of being true or false at the right time. Probability viewed as a mass is then a non-negative and finitely additive function taking the value 1 on the whole space of events coinciding with a finite partition of incompatible and exhaustive outcomes characterizing a random quantity. All of this is interesting partly because it can be extended to more than two random quantities which are jointly considered.
Author Contributions
Both the authors contributed to the manuscript; however, P.A. had a central role in the conceptualization whereas F.M. contributed to improve the paper; writing–original draft preparation, P.A.; writing–review and editing, P.A. and F.M.; supervision, F.M.; funding acquisition, P.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Echenique, F. New developments in revealed preference theory: Decisions under risk, uncertainty, and intertemporal choice. Annu. Rev. Econ. 2020, 12, 299–316.
- Berti, P.; Dreassi, E.; Rigo, P. A notion of conditional probability and some of its consequences. Decis. Econ. Financ. 2020, 43, 3–15.
- Sudderth, W.D. Optimal Markov strategies. Decis. Econ. Financ. 2020, 43, 43–54.
- Denti, T.; Marinacci, M.; Montrucchio, L. A note on rational inattention and rate distortion theory. Decis. Econ. Financ. 2020, 43, 75–89.
- Good, I.J. Subjective probability as the measure of a non-measurable set. In Logic, Methodology and Philosophy of Science; Nagel, E., Suppes, P., Tarski, A., Eds.; Stanford University Press: Stanford, CA, USA, 1962; pp. 319–329.
- Piccinato, L. De Finetti’s logic of uncertainty and its impact on statistical thinking and practice. In Bayesian Inference and Decision Techniques; Goel, P.K., Zellner, A., Eds.; North-Holland: Amsterdam, The Netherlands, 1986; pp. 13–30.
- Anscombe, F.J.; Aumann, R.J. A definition of subjective probability. Ann. Math. Stat. 1963, 34, 199–205.
- Coletti, G.; Petturiti, D.; Vantaggi, B. Conditional belief functions as lower envelopes of conditional probabilities in a finite setting. Inf. Sci. 2016, 339, 64–84.
- Gilio, A.; Sanfilippo, G. Conditional random quantities and compounds of conditionals. Stud. Log. 2014, 102, 709–729.
- Coletti, G.; Petturiti, D.; Vantaggi, B. When upper conditional probabilities are conditional possibility measures. Fuzzy Sets Syst. 2016, 304, 45–64.
- Capotorti, A.; Coletti, G.; Vantaggi, B. Standard and nonstandard representability of positive uncertainty orderings. Kybernetika 2014, 50, 189–215.
- Savage, L.J. The theory of statistical decision. J. Am. Stat. Assoc. 1951, 46, 55–67.
- Coletti, G.; Scozzafava, R.; Vantaggi, B. Possibilistic and probabilistic logic under coherence: Default reasoning and System P. Math. Slovaca 2015, 65, 863–890.
- Koopman, B.O. The axioms and algebra of intuitive probability. Ann. Math. 1940, 41, 269–292.
- Regazzini, E. Finitely additive conditional probabilities. Rend. Semin. Mat. Fis. Milano 1985, 55, 69–89.
- Berti, P.; Regazzini, E.; Rigo, P. Strong previsions of random elements. Stat. Methods Appl. (J. Ital. Stat. Soc.) 2001, 10, 11–28.
- Pfanzagl, J. Subjective probability derived from the Morgenstern-von Neumann utility theory. In Essays in Mathematical Economics in Honor of Oskar Morgenstern; Shubik, M., Ed.; Princeton University Press: Princeton, NJ, USA, 1967; pp. 237–251.
- Berti, P.; Rigo, P. On coherent conditional probabilities and disintegrations. Ann. Math. Artif. Intell. 2002, 35, 71–82.
- Pompilj, G. On intrinsic independence. Bull. Int. Stat. Inst. 1957, 35, 91–97.
- Von Neumann, J. Examples of continuous geometries. Proc. Natl. Acad. Sci. USA 1936, 22, 101–108.
- Nunke, R.J.; Savage, L.J. On the set of values of a nonatomic, finitely additive, finite measure. Proc. Am. Math. Soc. 1952, 3, 217–218.
- Cassese, G.; Rigo, P.; Vantaggi, B. A special issue on the mathematics of subjective probability. Decis. Econ. Financ. 2020, 43, 1–2.
- Gilio, A. Probabilistic reasoning under coherence in System P. Ann. Math. Artif. Intell. 2002, 34, 5–34.
- Angelini, P. A risk-neutral consumer and his coherent decisions under uncertainty: A methodological analysis. Int. J. Econ. Financ. 2020, 12, 95–105.
- Coletti, G.; Petturiti, D.; Vantaggi, B. Bayesian inference: The role of coherence to deal with a prior belief function. Stat. Methods Appl. 2014, 23, 519–545.
- Pfeifer, N.; Sanfilippo, G. Probabilistic squares and hexagons of opposition under coherence. Int. J. Approx. Reason. 2017, 88, 282–294.
- De Finetti, B. The proper approach to probability. In Exchangeability in Probability and Statistics; Koch, G., Spizzichino, F., Eds.; North-Holland Publishing Company: Amsterdam, The Netherlands, 1982; pp. 1–6.
- Forcina, A. Gini’s contributions to the theory of inference. Int. Stat. Rev. 1982, 50, 65–70.
- De Finetti, B. Probabilism: A critical essay on the theory of probability and on the value of science. Erkenntnis 1989, 31, 169–223.
- De Finetti, B. The role of “Dutch books” and of “proper scoring rules”. Br. J. Philos. Sci. 1981, 32, 55–56.
- De Finetti, B. Probability: The different views and terminologies in a critical analysis. In Logic, Methodology and Philosophy of Science VI; Cohen, L.J., Łoś, J., Pfeiffer, H., Podewski, K.P., Eds.; North-Holland Publishing Company: Amsterdam, The Netherlands, 1982; pp. 391–394.
- De Finetti, B. Probability and exchangeability from a subjective point of view. Int. Stat. Rev. 1979, 47, 129–135.
- Chung, J.K.; Kannappan, P.; Ng, C.T.; Sahoo, P.K. Measures of distance between probability distributions. J. Math. Anal. Appl. 1989, 138, 280–292.
- Acciaio, B.; Svindland, G. Are law-invariant risk functions concave on distributions? Depend. Model. 2013, 1, 54–64.
- Rockafellar, R.T.; Uryasev, S.; Zabarankin, M. Generalized deviations in risk analysis. Financ. Stoch. 2006, 10, 51–74.
- Drapeau, S.; Kupper, M. Risk preferences and their robust representation. Math. Oper. Res. 2013, 38, 28–62.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).