Comparisons Between Frequency Distributions Based on Gini’s Approach: Principal Component Analysis Addressed to Time Series

by
Pierpaolo Angelini
Department of Statistical Sciences, Sapienza University of Rome, 00185 Rome, Italy
Econometrics 2025, 13(3), 32; https://doi.org/10.3390/econometrics13030032
Submission received: 4 May 2025 / Revised: 18 July 2025 / Accepted: 5 August 2025 / Published: 13 August 2025

Abstract

In this paper, time series of length T are seen as frequency distributions. Each distribution is defined with respect to a statistical variable having T observed values. A methodological system based on Gini’s approach is put forward, so the statistical model through which time series are handled is a frequency distribution studied inside a linear system. In addition to the starting frequency distributions that are observed, other frequency distributions are treated. Thus, marginal distributions based on the notion of proportionality are introduced together with joint distributions. Both distributions are statistical models. A fundamental invariance property related to marginal distributions is made explicit in this research work, so one can focus on collections of marginal frequency distributions, identifying multiple frequency distributions. For this reason, the latter is studied via a tensor. As frequency distributions are practical realizations of nonparametric probability distributions over R , one passes from frequency distributions to discrete random variables. In this paper, a mathematical model that generates time series is put forward. It is a stochastic process based on subjective previsions of random variables. A subdivision of the exchangeability of variables of a statistical nature is shown, so a reinterpretation of principal component analysis that is based on the notion of proportionality also characterizes this research work.

1. Introduction

In each sector of the development of scientific research, two lines of research can schematically be identified. These lines also merge together. The first line of research deals with the study of new problems and the deepening of issues that are already outlined. The second one deals with a careful analysis of the conceptual premises that underlie known knowledge. This analysis takes place to try to penetrate the intimate nature of known knowledge and to attempt to trace apparently distinct phenomena or tools back to some common ideas. The current paper addresses both such lines. Thus, the construction of specific techniques is handled, and some conceptual premises that lead to a reinterpretation of principal component analysis are shown too. It is up to the researcher, in the context of a specific piece of scientific research, to establish which is the most suitable tool associated with the hypotheses and knowledge purposes. It is therefore a matter of studying a system of hypotheses that leads to a plurality of solutions and of identifying the one, among many alternatives, that is able to recover the instrumentation associated with principal component analysis (Hotelling, 1933; 1936).
The underlying theme of this paper is the following: time series of length T can be studied as frequency distributions inside linear systems. Finite-dimensional linear spaces over R and their subspaces are linear systems. The idea on which the current paper is technically based is about the notion of distance studied according to a pre-assigned direction. This idea was put forward by Luigi De Lucia, an Italian statistician and researcher who taught at the Sapienza University of Rome a few decades ago (De Lucia, 1965). In the current research work, one is particularly concerned with giving a statistical meaning to the concept of direction. The notion from which this research work starts is about the concept of proportion. The vector representation of frequency distributions can bring the theory of principal components back to a statistical technique that hinges on the notion of proportion. This representation can give to the notion of direction the statistical meaning previously suggested to be an essential requirement. Even specific probabilistic issues can be treated using a vector representation within which two distinct logics are considered (Angelini & Maturo, 2022a; 2023; Angelini, 2024a; 2024c). They are ordinary logic (two-valued logic) and the logic of uncertainty (infinitely many-valued logic). After transforming a frequency distribution into a discrete random variable, previsions of a random variable are treated. Previsions of random variables to which the theory of probability, or the logic of uncertainty, leads consist of a distribution, in accordance with the opinions of a given individual, of subjective expectations among all possible alternatives, whose number is finite. Only the distinction between possible and impossible alternatives is handled by ordinary logic. Each single alternative is therefore true or false whenever uncertainty ceases. A prevision is about the judgment of a given individual, at a certain moment, based on his state of information and knowledge at that given moment. Whenever an individual wants to critically judge the past judgment made by another one, it is possible to verify whether the one who made the prevision did not rightly consider some circumstances that could lead him to a better prevision. A new piece of information that can be obtained is used to make other previsions. If the state of information and knowledge of an individual changes, then his previsions, which are based on that state, also change. If one wants to criticize previsions that are made by an individual based on a specific state of information and knowledge using a different state of information and knowledge, then such a criticism is wrong.
Previsions are not predictions. Two distinct terms correspond to two distinct concepts. It is appropriate to highlight a contrast between a prevision and a prediction. It is possible to use the term prediction to indicate the statement that something will not happen even though it is logically possible, or that something will happen even though it is not logically certain. Thus, a prediction is a prophecy. A prediction is always right or wrong “a posteriori”. Conversely, regarding prevision, no matter what happens, nothing similar can be said in any sense. For instance, given a set of possible values for which one and only one is true “a posteriori”, a prediction is wrong whenever one tries to guess a value of the set under consideration and fails. Hence, a value is claimed to be true while it turns out to be false “a posteriori”. What is uncertain remains uncertain. It is not therefore possible to transform what is uncertain or possible into what is over-optimistically claimed to be certain or sure. Starting from an infinite number of alternatives, which is intrinsically illusory, one can practically obtain a “sure prediction”, for example, through a mathematical calculation given by
$$E(X) = \int_{-\infty}^{+\infty} x\, f(x)\, dx,$$
where $f(x)$ is the probability density function of the continuous random variable denoted by X. Here, an infinite number of alternatives have to be considered to calculate $E(X)$, $f(x)$ is supposed to be known, and $E(X)$ is supposed to exist. An event is a measurable set, so its probability is objectively a measure coinciding with the area under the graph of f between $x = a$ and $x = b$, expressed by
$$P(a \le X \le b) = \int_a^b f(x)\, dx.$$
One does not want to deny that the well-foundedness of the language of calculus leads to obtaining a value that is passed off as a “sure prediction”. After establishing the possible and observable outcomes of an experiment, what this paper does not admit is that an infinite number of alternatives is mathematically imagined in such a way that a functional scheme in the continuum appears. It will be shown that the image of a discrete random variable is a set of possible alternatives coinciding with the numerical values of a time series of length T. One therefore focuses on considering observed data, which are intrinsically cases immediately at hand and directly interesting, rather than mathematically imagining an infinite and illusory number of values for a continuous random variable. If two discrete variables are jointly studied, then how marginal and joint distributions can coexist is investigated. Similarly, if three or more discrete variables are studied together, then how marginal and multiple distributions can coexist is taken into account. In such investigations, the role played by the Fréchet class is essential. A fundamental invariance property related to marginal distributions is made explicit.
Section 2 deals with time series that are seen as frequency distributions. The notion of proportionality is handled. A numerical simulation is put forward. An essential metric element coinciding with a measure of the joint variability of two variables is pulled out. Section 3 studies an α -metric tensor defined with respect to a finite-dimensional linear manifold over R . Eigenvalues, eigenvectors, and eigenspaces, referring to the same α -metric tensor, are considered too. Section 4 focuses on the definition of the principal components of a multiple statistical variable and their properties. The geometric and statistical meaning of a particular linear manifold over R is analyzed in Section 5. Interdependence relationships between observed time series data are studied via a tensor. Proportionality equations are studied in Section 6. The structure of a specific characteristic equation is dealt with in Section 7. Section 8 focuses on how to pass from frequency distributions to random variables. A subdivision of the exchangeability of random variables is put forward. Stationary processes having statistical properties that do not change over time are taken into account. Finally, Section 9 contains the conclusions and future perspectives.

2. Time Series Seen as Frequency Distributions

Let Y be a variable of a statistical nature, such as the Gross Domestic Product (GDP) of a certain country. Based on what Vittorio Castellano, an Italian statistician and researcher who also helped to develop Corrado Gini’s ideas, put forward, it is possible to establish the following:
Definition 1.
A statistical variable is a generic set of potential values that an empirical quantity can come to have in a given ambit subject to observation.
As the values of a statistical variable are potential, it is not necessary that they all be distinct. An observation at time t is denoted by $Y_t$, where t is an integer that varies from 1 to T. Hence, the total number of intervals or time periods under consideration is equal to T. The observation at time t denoted by $Y_t$ is an actual value of Y at time t. A time series is formally given by
$$Y_t = \{Y_1, Y_2, Y_3, \ldots, Y_T\}.$$
The elements of the finite set given by (3) are intrinsically all different, in the sense that time 1 is different from time 2, time 3, and so on. Nevertheless, if one wants to focus on actual numerical values, then it is appropriate to use a row vector or a column one in order to identify the numerical values of a time series of length T. Geometrically, whenever one writes
$$Y_t = \begin{pmatrix} Y_1 & Y_2 & Y_3 & \cdots & Y_T \end{pmatrix} = \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ \vdots \\ Y_T \end{pmatrix},$$
one means that the numerical value which is observed at time 1 can even be equal to the one which is observed at time 2, or at time 3, …, or at time T.
It is possible to establish the following:
Definition 2.
A frequency distribution related to a single statistical variable does not show potential values, but it shows actual values. The actual values that a single statistical variable comes to have within a statistical population are caught by a marginal frequency distribution.
Each value is considered together with a relative frequency. The latter is a statistical weight. This means that an ordered pair of vectors can be studied inside a finite-dimensional linear space over R . For example, the following ordered pairs of vectors given by
$$\left( \begin{pmatrix} 3 \\ 5 \\ 5 \\ 5 \\ 7 \\ 7 \\ 8 \\ 9 \end{pmatrix},\ \begin{pmatrix} 1/8 \\ 1/8 \\ 1/8 \\ 1/8 \\ 1/8 \\ 1/8 \\ 1/8 \\ 1/8 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 3 \\ 5 \\ 7 \\ 8 \\ 9 \end{pmatrix},\ \begin{pmatrix} 1/8 \\ 3/8 \\ 2/8 \\ 1/8 \\ 1/8 \end{pmatrix} \right)$$
identify the same frequency distribution. The first distribution is a pair of vectors belonging to a linear space over R of dimension 8. The second one is a pair of vectors belonging to a linear space over R of dimension 5. Thus, for example, GDP per capita of a certain country is a time series expressed as a frequency distribution given by
$$\left( \begin{pmatrix} Y_{2019} = 48{,}000 \\ Y_{2020} = 48{,}250 \\ Y_{2021} = 48{,}500 \\ Y_{2022} = 49{,}000 \\ Y_{2023} = 49{,}300 \end{pmatrix},\ \begin{pmatrix} 1/5 \\ 1/5 \\ 1/5 \\ 1/5 \\ 1/5 \end{pmatrix} \right),$$
where all data of the first vector of the pair under consideration are expressed in United States dollars. A set of similar items that is of interest is a group of years. Such a set constitutes the statistical population of interest. Each year shows an actual value of GDP, which is weighted using a relative frequency. Here, the total number of annual intervals is equal to 5, but it can generally be greater than 5.
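To make the two equivalent representations concrete, the following minimal sketch (in Python with NumPy; the variable names are illustrative and are not part of the paper) builds both pairs of vectors from the same observed series:

```python
import numpy as np

# A time series of length T = 8, seen as a frequency distribution.
y = np.array([3, 5, 5, 5, 7, 7, 8, 9])

# First representation: one statistical weight 1/T per observation
# (a pair of vectors in a linear space of dimension 8).
values_full = y
weights_full = np.full(len(y), 1 / len(y))

# Second representation: distinct values with their relative
# frequencies (a pair of vectors in a linear space of dimension 5).
values_distinct, counts = np.unique(y, return_counts=True)
weights_distinct = counts / len(y)

print(values_distinct)   # [3 5 7 8 9]
print(weights_distinct)  # [0.125 0.375 0.25  0.125 0.125]

# Both pairs identify the same frequency distribution: the weighted
# means (and all weighted moments) coincide.
assert np.isclose(values_full @ weights_full,
                  values_distinct @ weights_distinct)
```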

2.1. The Notion of Proportionality: Finite Sets and Vectors

Let $A = \{a_1, a_2\}$ and $B = \{b_1, b_2\}$ be two finite sets, with $a_1 < a_2$ and $b_1 < b_2$. They are therefore two ordered sets. Suppose $a_1 > 0$, $a_2 > 0$, $b_1 > 0$, and $b_2 > 0$. If one writes
$$a_1 : b_1 = a_2 : b_2,$$
then A and B are said to be proportional. This means that there exists a number denoted by h such that one writes
$$\frac{a_1}{b_1} = \frac{a_2}{b_2} = h.$$
In general, let $A = \{a_i\}$ and $B = \{b_i\}$ be two finite and ordered sets. Each set contains m elements, with $m > 2$. If one writes
$$a_i = h\, b_i, \quad \forall i \in I_m = \{1, 2, \ldots, m\},$$
then A and B are said to be proportional. From (10), it follows that
$$a_i - b_i = k\, b_i, \quad \forall i \in I_m,$$
where one has $k = h - 1$. Suppose that the direct difference between A and a set that is homothetic to B is proportional to C, where C is a set containing m elements too. Hence, one writes
$$a_i - x\, b_i = y\, c_i, \quad \forall i \in I_m,$$
where y is a constant of proportionality in the same way as h. It is very likely that the m equalities characterizing (12) do not hold. This is because A, B, and C are actually observed. It is then a question of establishing a criterion by which to appropriately construct a set C′ that has certain pre-established requirements with respect to the set C. The elements of C′ then have to satisfy the following equalities
$$a_i - x\, b_i = y\, c'_i, \quad \forall i \in I_m.$$
The formal analogy between (12) and (13) is evident. The criterion that leads to the construction of C′ is the following. First, the elements of C″ are obtained by multiplying the elements of C by one and the same constant. Second, in this way, one obtains the elements of the set C′, where one observes
$$C' = C - C''.$$
In vector terms, (14) is expressed as follows
$$\mathbf{c}' = \mathbf{c} - \mathbf{c}'',$$
with $\mathbf{c}' = (c'_1, \ldots, c'_m)$, $\mathbf{c} = (c_1, \ldots, c_m)$, and $\mathbf{c}'' = (c''_1, \ldots, c''_m)$ being vectors belonging to a linear space over R of dimension m. Third, if the following linear equation identifying a hyperplane
$$d_1 x_1 + d_2 x_2 + \cdots + d_m x_m = 0$$
is satisfied by both the vector $\mathbf{a} = (a_1, \ldots, a_m)$ and the vector $\mathbf{b} = (b_1, \ldots, b_m)$, so one writes
$$d_1 a_1 + d_2 a_2 + \cdots + d_m a_m = 0,$$
and
$$d_1 b_1 + d_2 b_2 + \cdots + d_m b_m = 0,$$
where $\mathbf{d} = (d_1, \ldots, d_m)$ is the vector of coefficients, then the construction of the vector $\mathbf{c}'$ leads to the following expression given by
$$d_1 c'_1 + d_2 c'_2 + \cdots + d_m c'_m = 0.$$
It is then said that $\mathbf{c}'$ is orthogonal to the vector $\mathbf{d}$ containing the coefficients of the hyperplane. Moreover, the vector $\mathbf{c}''$ is the orthogonal projection along $\mathbf{c}$ given by
$$\mathrm{proj}_{\mathbf{c}}(\mathbf{c}'') = \frac{\langle \mathbf{c}'', \mathbf{c} \rangle}{\langle \mathbf{c}, \mathbf{c} \rangle}\, \mathbf{c}.$$
In vector terms, (13) is expressed as follows
$$\mathbf{a} - x\, \mathbf{b} = y\, \mathbf{c}',$$
so the vector $\mathbf{c}'$ is a linear combination of $\mathbf{a}$ and $\mathbf{b}$. In vector terms, (10) is expressed as follows
$$\mathbf{a} = h\, \mathbf{b},$$
so $\mathbf{a}$ is said to be proportional to $\mathbf{b}$, with $h \in \mathbb{R}$.
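The proportionality test and the projection formula in (20) can be checked numerically. The following is a minimal sketch under illustrative data (the vectors below are examples, not taken from the paper):

```python
import numpy as np

# Proportionality between two ordered sets A = {a_i} and B = {b_i}:
# a_i = h * b_i for every i, as in (10).
a = np.array([3.0, 6.0, 9.0, 12.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
ratios = a / b
is_proportional = np.allclose(ratios, ratios[0])
h = ratios[0]                  # the constant of proportionality
print(is_proportional, h)      # True 3.0

# Orthogonal projection along c, as in (20):
# proj_c(v) = (<v, c> / <c, c>) c.
def proj(v, c):
    return (v @ c) / (c @ c) * c

c = np.array([1.0, 2.0, 2.0, 0.0])
v = np.array([2.0, 1.0, 0.0, 5.0])
p = proj(v, c)
# The residual v - p is orthogonal to c.
assert np.isclose((v - p) @ c, 0.0)
```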

2.2. A Numerical Simulation

The Institute of Statistics of a given country published a collection of data relating to the final consumption expenditure of households divided by the total population of the same country. Time series related to the final consumption expenditure of households are observed. Household final consumption per capita is first an aggregate amount. Second, it is expressed in a disaggregated form. This is because it is divided among 20 regions, distinguishing the amounts of consumption related to the north of the country, together with its central part, from the ones related to the south of the same country. The published amounts of consumption, expressed in United States dollars, lead to an estimate that households consume more in the north, together with the centre, than in the south. More precisely, consumption in the north and in the centre is double that in the south. Thus, one obtains
$$\{\mathbf{x}_1, \mathbf{x}_2\},$$
with
$$\mathbf{x}_1 = \begin{pmatrix} 20{,}000 \\ 40{,}000 \\ 60{,}000 \\ 80{,}000 \\ 160{,}000 \end{pmatrix},$$
and
$$\mathbf{x}_2 = \begin{pmatrix} 40{,}000 \\ 80{,}000 \\ 120{,}000 \\ 160{,}000 \\ 320{,}000 \end{pmatrix}.$$
From the following ordered triple of vectors
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} 20{,}000 \\ 40{,}000 \\ 60{,}000 \\ 80{,}000 \\ 160{,}000 \end{pmatrix},\ \begin{pmatrix} 1/5 \\ 1/5 \\ 1/5 \\ 1/5 \\ 1/5 \end{pmatrix} \right),$$
where the first vector identifies annual intervals, it follows that the last two vectors identify a frequency distribution. The same holds with respect to (25).
Time series are studied with respect to GDP per capita too (Testik & Sarikulak, 2021; Oancea & Simionescu, 2024). Observed GDP per capita of the country under consideration is expressed in United States dollars, and it is divided in the same way as final consumption, so one has
$$\{\mathbf{u}_1, \mathbf{u}_2\},$$
with
$$\mathbf{u}_1 = \begin{pmatrix} 30{,}000 \\ 60{,}000 \\ 90{,}000 \\ 120{,}000 \\ 240{,}000 \end{pmatrix},$$
and
$$\mathbf{u}_2 = \begin{pmatrix} 50{,}000 \\ 70{,}000 \\ 130{,}000 \\ 170{,}000 \\ 330{,}000 \end{pmatrix}.$$
Gross domestic product is not now estimated in a disaggregated form, but it is actually observed in this form. Conversely, gross domestic product that is constructed is given by
$$\{\hat{\mathbf{u}}_1, \hat{\mathbf{u}}_2\},$$
with
$$\hat{\mathbf{u}}_1 = \begin{pmatrix} 60{,}000 \\ 120{,}000 \\ 180{,}000 \\ 240{,}000 \\ 480{,}000 \end{pmatrix},$$
and
$$\hat{\mathbf{u}}_2 = \begin{pmatrix} 150{,}000 \\ 210{,}000 \\ 390{,}000 \\ 510{,}000 \\ 990{,}000 \end{pmatrix}.$$
It is clear that the following expressions given by
$$\hat{\mathbf{u}}_1 = 2\, \mathbf{u}_1,$$
and
$$\hat{\mathbf{u}}_2 = 3\, \mathbf{u}_2$$
hold, so $\hat{\mathbf{u}}_1$ is said to be proportional to $\mathbf{u}_1$, and $\hat{\mathbf{u}}_2$ is said to be proportional to $\mathbf{u}_2$. What is said in the previous subsection about the construction of C′ works in this subsection regarding the construction of (30). One writes
$$\mathbf{c} = \mathbf{u}, \quad \mathbf{c}'' = \mathbf{u}', \quad \mathbf{c}' = \hat{\mathbf{u}},$$
so one observes
$$\begin{cases} \mathbf{u}'_1 = \mathbf{u}_1 - \hat{\mathbf{u}}_1 \\ \mathbf{u}'_2 = \mathbf{u}_2 - \hat{\mathbf{u}}_2. \end{cases}$$
From (21), it follows that the expressions given by
$$\mathbf{x}_1 - x\, \mathbf{x}_2 = y\, \hat{\mathbf{u}}_1,$$
and
$$\mathbf{x}_1 - x\, \mathbf{x}_2 = y\, \hat{\mathbf{u}}_2$$
hold. They are proportionality equations. For example, one writes
$$\begin{cases} 20{,}000 - 3 \times 40{,}000 = -\tfrac{5}{3} \times 60{,}000 \\ 40{,}000 - 3 \times 80{,}000 = -\tfrac{5}{3} \times 120{,}000 \\ 60{,}000 - 3 \times 120{,}000 = -\tfrac{5}{3} \times 180{,}000 \\ 80{,}000 - 3 \times 160{,}000 = -\tfrac{5}{3} \times 240{,}000 \\ 160{,}000 - 3 \times 320{,}000 = -\tfrac{5}{3} \times 480{,}000, \end{cases}$$
and
$$\begin{cases} 20{,}000 - \tfrac{1}{2} \times 40{,}000 = 0 \times 150{,}000 \\ 40{,}000 - \tfrac{1}{2} \times 80{,}000 = 0 \times 210{,}000 \\ 60{,}000 - \tfrac{1}{2} \times 120{,}000 = 0 \times 390{,}000 \\ 80{,}000 - \tfrac{1}{2} \times 160{,}000 = 0 \times 510{,}000 \\ 160{,}000 - \tfrac{1}{2} \times 320{,}000 = 0 \times 990{,}000, \end{cases}$$
where x and y in (37) and (38) are two parameters which are made explicit in (39) and (40) using the Rouché–Capelli theorem. The vectors $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$ represent the values of specific variables, qualifying the reference models against which to measure the direct differences between the numerical values characterizing the starting variables expressed by $\mathbf{x}_1$ and $\mathbf{x}_2$. In particular, x is said to be the adjustment coefficient of $\mathbf{x}_1$ to $\mathbf{x}_2$ with respect to the models that coincide with the vectors given by $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$. In this paper, models are frequency distributions according to Gini’s approach aimed at studying the comparison between frequency distributions (Bettuzzi, 1986; Gili & Bettuzzi, 1986). Gross domestic product that is constructed leads to the following two linear systems:
$$\begin{cases} \langle \mathbf{u}_1, \mathbf{x}_1 \rangle_\alpha = \langle \hat{\mathbf{u}}_1, \mathbf{x}_1 \rangle_\alpha \\ \langle \mathbf{u}_1, \mathbf{x}_2 \rangle_\alpha = \langle \hat{\mathbf{u}}_1, \mathbf{x}_2 \rangle_\alpha, \end{cases}$$
and
$$\begin{cases} \langle \mathbf{u}_2, \mathbf{x}_1 \rangle_\alpha = \langle \hat{\mathbf{u}}_2, \mathbf{x}_1 \rangle_\alpha \\ \langle \mathbf{u}_2, \mathbf{x}_2 \rangle_\alpha = \langle \hat{\mathbf{u}}_2, \mathbf{x}_2 \rangle_\alpha. \end{cases}$$
In particular, if one writes
$$\begin{cases} \langle \mathbf{u}_{1d}, \mathbf{x}_{1d} \rangle_\alpha = \langle \hat{\mathbf{u}}_{1d}, \mathbf{x}_{1d} \rangle_\alpha \\ \langle \mathbf{u}_{1d}, \mathbf{x}_{2d} \rangle_\alpha = \langle \hat{\mathbf{u}}_{1d}, \mathbf{x}_{2d} \rangle_\alpha, \end{cases}$$
and
$$\begin{cases} \langle \mathbf{u}_{2d}, \mathbf{x}_{1d} \rangle_\alpha = \langle \hat{\mathbf{u}}_{2d}, \mathbf{x}_{1d} \rangle_\alpha \\ \langle \mathbf{u}_{2d}, \mathbf{x}_{2d} \rangle_\alpha = \langle \hat{\mathbf{u}}_{2d}, \mathbf{x}_{2d} \rangle_\alpha, \end{cases}$$
then each vector contained in the linear systems given by (43) and (44) identifies values that are deviations from the arithmetic mean of the corresponding marginal variables. Conditions of invariance of the covariances summarizing joint frequency distributions are expressed by (43) and (44). An immediate statistical meaning is associated with the construction of GDP. The existence of relative frequencies characterizing each joint frequency distribution that appears in (41)–(44) is indicated by the α symbol. Such a symbol denotes an inner product, called the α-product, that also identifies a notion of distance. Given two distinct marginal statistical variables, the two marginal frequency distributions of the corresponding joint distribution remain fixed, so the set of all joint distributions with the same given marginal frequencies, which are invariant, constitutes the Fréchet class. As all elements of the Fréchet class are equivalent, an element of this class can be chosen based on a particular working hypothesis, which is made explicit by a given individual. Such a class is remarkable because it shows that the origin of the variability of a joint distribution is not standardized; it depends on the knowledge hypothesis that can be made by a given researcher, and that underlies the phenomenon which is statistically studied. This is the innovative nature of the notion of comparison between frequency distributions as it is understood in Gini’s approach (Gini, 1921; Forcina, 1982; Giorgi, 2005; Langel & Tillé, 2011). This validates the methodical warning issued by Gaetano Pietra, an Italian statistician who founded the School of Statistics of the University of Padua in 1927 and then directed it. He asked “One or more indices?” in 1936, and the answer to this methodological question is fully clear: more indices. If one writes
$$\{A, B\} = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}\},$$
$$C = \{\mathbf{u}_{1d}, \mathbf{u}_{2d}\},$$
and
$$C' = \{\hat{\mathbf{u}}_{1d}, \hat{\mathbf{u}}_{2d}\},$$
then both (43) and (44) can be expressed in the following form
$$\begin{cases} \sigma_{AC} = \sigma_{AC'} \\ \sigma_{BC} = \sigma_{BC'}, \end{cases}$$
from which the statistical meaning of the construction of GDP appears more explicitly.
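Under the sign reconstruction adopted above, the proportionality equations of this subsection can be verified numerically. The following sketch checks, via the rank condition of the Rouché–Capelli theorem, that the systems are consistent and that the reported pairs (x, y) solve them (NumPy usage is an assumption of this sketch):

```python
import numpy as np

# Data of the numerical simulation (household consumption per capita
# and constructed GDP per capita, in US dollars).
x1     = np.array([ 20_000,  40_000,  60_000,  80_000, 160_000])
x2     = np.array([ 40_000,  80_000, 120_000, 160_000, 320_000])
u1_hat = np.array([ 60_000, 120_000, 180_000, 240_000, 480_000])
u2_hat = np.array([150_000, 210_000, 390_000, 510_000, 990_000])

# Proportionality equation (37): x1 - x * x2 = y * u1_hat, i.e. the
# linear system A @ (x, y) = x1 with coefficient matrix [x2 | u1_hat].
A = np.column_stack([x2, u1_hat])
# Rouché-Capelli: the system is consistent iff the coefficient matrix
# and the augmented matrix have the same rank.
assert (np.linalg.matrix_rank(A)
        == np.linalg.matrix_rank(np.column_stack([A, x1])))

# The pair (x, y) = (3, -5/3) reported in (39) solves the system.
assert np.allclose(x1 - 3 * x2, (-5 / 3) * u1_hat)
# The pair (x, y) = (1/2, 0) reported in (40) solves (38).
assert np.allclose(x1 - 0.5 * x2, 0 * u2_hat)
```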

2.3. An Essential Metric Element Coinciding with a Measure of the Joint Variability of Two Variables

Note the following:
Remark 1.
Given two or more marginal statistical variables, their marginal frequency distributions are known after having obtained information about the similar items of the statistical population under consideration. If two or more marginal statistical variables are studied together, then a multiple statistical variable arises. Whenever two or more marginal statistical variables are studied together, their marginal frequency distributions remain fixed. They are invariant. As a fundamental invariance property related to marginal frequency distributions is pulled out, a multiple statistical variable is studied via a tensor.
Remark 2.
Given m marginal statistical variables (where $m \ge 2$ is an integer) characterized by m marginal frequency distributions that are invariant, $m^2$ joint frequency distributions can be studied. This is because $m^2$ joint distributions divide a multiple frequency distribution of order m characterizing a multiple variable of the same order. The latter consists of m marginal statistical variables.
Let X and Y be two distinct statistical variables. The values of each of them are observed time series data. Deviations from the arithmetic mean of X and Y are treated. As
$$\mathrm{Var}(X) = \mathrm{Cov}(X, X)$$
holds, $\mathrm{Var}(X)$ is an α-distance between a marginal distribution, within which the squares of the deviations multiplied by the corresponding weights appear, and a degenerate one coinciding with the zero vector. $\mathrm{Var}(X)$ is a particular α-product, called the α-norm of the vector denoted by $\mathbf{x}_d$. Such a vector is constructed with respect to $\mathbf{x}$, which represents X. One writes
$$\mathrm{Var}(X) = \|\mathbf{x}_d\|_\alpha^2,$$
where it usually turns out to be
$$\|\mathbf{x}_d\|_\alpha^2 > 0.$$
It is possible to observe
$$\mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X),$$
so the covariance is an essential metric element qualifying every multiple statistical variable of order 2 characterized by two marginal statistical variables. Here, X and Y are therefore the two components of a multiple statistical variable of order 2. One also writes
$$\mathrm{Var}(Y) = \mathrm{Cov}(Y, Y),$$
so the following square matrix of order 2 given by
$$\begin{pmatrix} \mathrm{Var}(X) & \mathrm{Cov}(X, Y) \\ \mathrm{Cov}(Y, X) & \mathrm{Var}(Y) \end{pmatrix}$$
arises. Four joint frequency distributions that are summarized give rise to the elements of a tensor expressed by (54). Var ( X ) is obtained from a marginal distribution arranged in the form of a joint one, and the same is true for Var ( Y ) . The weights of the joint distribution used to obtain Cov ( X , Y ) coincide with the ones used to obtain Cov ( Y , X ) . The two marginal distributions of the joint distribution under consideration are invariant. The weights of the joint distribution under consideration can be chosen in such a way that Cov ( X , Y ) = Cov ( Y , X ) = 0 . After arranging into a table having an equal number of rows and columns all deviations related to X and Y in such a way that one can go from the smallest deviation to the largest one with respect to each variable, it is possible to choose the weights of the joint distribution putting them on the main diagonal of the table under consideration. On the other hand, it is also possible to choose the weights of the joint distribution, putting them on its antidiagonal. The weights of the joint distribution can therefore be chosen based on a particular working hypothesis, which is made explicit by a given individual. Such weights are not practically observed, unlike the marginal ones. Finally, the weights of the joint distribution could be chosen in such a way that any intermediate case with respect to the previous limit cases appears.
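The freedom in choosing the association frequencies while the marginal frequencies stay invariant can be illustrated as follows. The sketch below (with deviation vectors borrowed from the later examples) computes the α-product under the main-diagonal and antidiagonal choices described above:

```python
import numpy as np

# Deviations from the mean of two marginal variables, each with
# marginal weights 1/N, sorted in increasing order.
xd = np.array([-78_000., -48_000., -18_000., 12_000., 132_000.])
yd = np.array([-100_000., -80_000., -20_000., 20_000., 180_000.])
N = len(xd)

def alpha_product(xd, yd, n):
    """alpha-product of two deviation vectors, given the N x N table n
    of association frequencies (which must sum to 1)."""
    return xd @ n @ yd

# Weights on the main diagonal of the table: smallest deviation paired
# with smallest, largest with largest.
n_diag = np.diag(np.full(N, 1 / N))
# Weights on the antidiagonal: smallest paired with largest.
n_anti = np.fliplr(n_diag)

# Both choices coherently divide the same marginal frequencies ...
assert np.allclose(n_diag.sum(axis=1), 1 / N)
assert np.allclose(n_anti.sum(axis=1), 1 / N)
# ... but they summarize different joint distributions.
print(alpha_product(xd, yd, n_diag))   # large positive covariance
print(alpha_product(xd, yd, n_anti))   # large negative covariance
```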

3. Multiple Statistical Variables and Their Multiple Frequency Distributions

3.1. Preliminaries

The numerical values of each marginal statistical variable can be expressed as deviations from the arithmetic mean of the corresponding variable. For example, the following multiple variable of order 2 formally expressed by
$$U_{12} = \{U_1, U_2\} = \{\mathbf{u}_{1d}, \mathbf{u}_{2d}\}$$
is characterized by two ordered triples of vectors given by
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -78{,}000 \\ -48{,}000 \\ -18{,}000 \\ 12{,}000 \\ 132{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right).$$
The second elements of (56) and (57) are vectors containing deviations from the arithmetic mean of the corresponding marginal statistical variables. The third elements of (56) and (57) are vectors containing relative frequencies.
The following inequality
$$N > m$$
must be satisfied so that the adopted classification scheme has a heuristic meaning. In the above inequality, N denotes the number of items of a statistical population which is of interest. The number of statistical variables which are studied is denoted by m. Here, in particular, it turns out that N = 5 and m = 2. In general, let $U_i$ be a marginal statistical variable of a multiple variable of order m, where $i \in I_m$. If the generic value with respect to the i-th marginal variable is denoted by ${}_{.i}U^\beta$, $\beta$ being an index such that $\beta \in I_N = \{1, 2, \ldots, N\}$, then it is possible to write, in particular,
$$U_1 = \begin{pmatrix} {}_{.1}U^1 \\ {}_{.1}U^2 \\ {}_{.1}U^3 \\ {}_{.1}U^4 \\ {}_{.1}U^5 \end{pmatrix},$$
and
$$U_2 = \begin{pmatrix} {}_{.2}U^1 \\ {}_{.2}U^2 \\ {}_{.2}U^3 \\ {}_{.2}U^4 \\ {}_{.2}U^5 \end{pmatrix}.$$
In general, a multiple frequency distribution of order m characterizing a multiple statistical variable of the same order must have the property according to which a frequency is associated with each ordered m-tuple of numerical values of corresponding m variables expressed by
$$({}_{.1}U^\beta,\ {}_{.2}U^\beta,\ \ldots,\ {}_{.m}U^\beta).$$
One or more association frequencies can be equal to zero. Moreover, all association frequencies sum to 1.

3.2. A Metric Tensor Characterizing a Finite-Dimensional Linear Space over R

As N is the number of actual values of each marginal statistical variable, it is appropriate to study each variable inside a linear space over R of dimension N having a Euclidean structure and denoted by $E_N$. Let ${}_{.N}B_e = \{\mathbf{e}_\beta;\ \beta \in I_N\}$ be an orthonormal basis of $E_N$. Hence, the generic component of a specific tensor defined with respect to ${}_{.N}B_e$, called the metric tensor, is given by
$${}_{.e}g_{\beta\gamma} = \langle \mathbf{e}_\beta, \mathbf{e}_\gamma \rangle = \delta_{\beta\gamma},$$
where $\delta_{\beta\gamma}$ denotes the Kronecker delta. One writes the generic component of a tensor to identify the whole tensor. The metric tensor that is defined with respect to ${}_{.N}B_e$ gives rise to a square matrix of order N, whose entries are zeroes except the ones characterizing its main diagonal. They are all equal to 1. Each of the two subscripts in the following N × N matrix
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{pmatrix}$$
identifies a basis vector. For example, $a_{12}$ represents the element of (63) to which $\langle \mathbf{e}_1, \mathbf{e}_2 \rangle$ corresponds. With respect to ${}_{.N}B_e$, the components of the vector expressed by
$$\mathbf{x}_i = x_i^\beta\, \mathbf{e}_\beta,$$
where the Einstein summation notation is used, represent the numerical values of the i-th marginal statistical variable. Whenever a multiple statistical variable of order m is studied, it is necessary to write
$$\mathbf{x}_i = x_i^\beta\, \mathbf{e}_\beta, \quad \forall i \in I_m.$$
By hypothesis, one observes N > m . Moreover, it is heuristically convenient to suppose that the m vectors given by (65) are linearly independent. Check the following:
Example 1.
If N = 5 and m = 2 , then the following vectors given by
$$\mathbf{u}_1 = 30{,}000\, \mathbf{e}_1 + 60{,}000\, \mathbf{e}_2 + 90{,}000\, \mathbf{e}_3 + 120{,}000\, \mathbf{e}_4 + 240{,}000\, \mathbf{e}_5,$$
and
$$\mathbf{u}_2 = 50{,}000\, \mathbf{e}_1 + 70{,}000\, \mathbf{e}_2 + 130{,}000\, \mathbf{e}_3 + 170{,}000\, \mathbf{e}_4 + 330{,}000\, \mathbf{e}_5$$
are linearly independent. The coefficients of the linear combinations through which $\mathbf{u}_1$ and $\mathbf{u}_2$ are expressed identify the components of $\mathbf{u}_1$ and $\mathbf{u}_2$ with respect to the orthonormal basis under consideration given by the set $\{\mathbf{e}_1, \ldots, \mathbf{e}_5\}$. They are $(30{,}000, \ldots, 240{,}000)$ and $(50{,}000, \ldots, 330{,}000)$. Whenever an orthonormal basis is chosen, only the coordinate vectors have to be taken into account. In other terms, only the components of $\mathbf{u}_1$ and $\mathbf{u}_2$ have to be taken into consideration. They are contravariant components of $\mathbf{u}_1$ and $\mathbf{u}_2$ with respect to the orthonormal basis under consideration. Here, the mechanism that generates the observed time series data is caught by linear combinations of vectors constituting an orthonormal basis of a Euclidean space. Additionally, as $\mathbf{u}_{1d}$ and $\mathbf{u}_{2d}$ are the components of a multiple statistical variable of order 2, they constitute a basis of a linear subspace of a Euclidean space.
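Linear independence of the coordinate vectors can be verified through the rank of the matrix of their contravariant components, as in the following minimal sketch:

```python
import numpy as np

# Contravariant components of u1 and u2 with respect to the
# orthonormal basis {e_1, ..., e_5} of E_5.
u1 = np.array([30_000, 60_000, 90_000, 120_000, 240_000])
u2 = np.array([50_000, 70_000, 130_000, 170_000, 330_000])

# The two vectors are linearly independent iff the 5 x 2 matrix of
# their components has rank m = 2 (here N = 5 > m = 2).
M = np.column_stack([u1, u2])
print(np.linalg.matrix_rank(M))  # 2
```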
Note the following:
Remark 3.
Let $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N$ be N basis vectors identifying N axis lines which are mutually perpendicular. The point where they meet is called the origin of an N-dimensional Euclidean space. Located vectors at the origin of a finite-dimensional linear space over R are fully identified via their endpoints. This is because their beginning point is always the origin of an N-dimensional Euclidean space expressed by the corresponding zero vector. Thus, an ordered N-tuple of real numbers can be either a point or a vector. A point is expressed by its coordinates. A vector is expressed by its components. The components of a vector can be contravariant or covariant. In a linear combination of basis vectors, coefficients that appear in an upper position express contravariant components of a vector. Hence, if one writes $\mathbf{x} = x^i\, \mathbf{e}_i$, then $\{x^i\}$ denotes contravariant components of $\mathbf{x}$ with respect to N basis vectors. Conversely, if one writes $\mathbf{x} = x_i\, \mathbf{e}^i$, then $\{x_i\}$ denotes covariant components of $\mathbf{x}$ with respect to N basis vectors. Whenever an orthonormal basis is chosen, the contravariant and covariant components of the same vector coincide. The contravariant component of $\mathbf{x}$ denoted by $x^1$ is geometrically the projection of $\mathbf{x}$ along $\mathbf{e}_1$. Such a projection is taken into account according to the parallel position to the hyperplane determined by the set of vectors $\{\mathbf{e}_2, \ldots, \mathbf{e}_N\} = \{\mathbf{e}_j;\ j \ne 1\}$, so $x^1$ is a signed distance from an axis line. The contravariant component of $\mathbf{x}$ denoted by $x^2$ is the projection of $\mathbf{x}$ along $\mathbf{e}_2$, and so on. Even the covariant component of $\mathbf{x}$ denoted by $x_1$ is geometrically the projection of $\mathbf{x}$ along $\mathbf{e}_1$, but such a projection is now considered according to the perpendicular position to $\mathbf{e}_1$. It is then possible to verify that one writes $\langle \mathbf{x}, \mathbf{e}_1 \rangle = x_1$, where $x_1 = x^1$ within this context. The covariant component of $\mathbf{x}$ denoted by $x_2$ is the projection of $\mathbf{x}$ along $\mathbf{e}_2$ that is considered according to the perpendicular position to $\mathbf{e}_2$. One writes $\langle \mathbf{x}, \mathbf{e}_2 \rangle = x_2$, where $x_2 = x^2$ within this context, and so on.
Remark 4.
The contravariant and covariant components of the same vector coincide whenever Euclidean spaces characterized by orthonormal bases are treated. Distinguishing between contravariant and covariant components of a vector is therefore substantively inappropriate; only the corresponding notation is retained, because it is followed for statistical needs.

3.3. A Finite-Dimensional Linear Manifold over R

A linear manifold over R of dimension m denoted by ${}_{.x}M^m$ is a linear subspace over R of dimension m. Its basis expressed by
$$I_m[\mathbf{x}] = \{\mathbf{x}_i;\ i \in I_m\}$$
consists of the m vectors given by (65) that are supposed to be linearly independent. ${}_{.x}M^m$ is embedded in $E_N$. If $\bar{\mathbf{x}}_i$ denotes the vector whose contravariant components are all equal to the arithmetic mean of the i-th marginal statistical variable and ${}_{.\bar{x}}M^m$ denotes the linear manifold over R of dimension m related to the vectors of this kind as i varies in $I_m$, then the linear manifold over R obtained as a direct difference between ${}_{.x}M^m$ and ${}_{.\bar{x}}M^m$ is given by
$${}_{.x}M_{(0)}^m = {}_{.x}M^m - {}_{.\bar{x}}M^m.$$
Each vector $\mathbf{x}_{id} \in {}_{.x}M_{(0)}^m$ represents the deviations from the arithmetic mean of the i-th marginal statistical variable. Moreover, the set given by
$${}_{(0)}I_m[\mathbf{x}] = \{\mathbf{x}_i - \bar{\mathbf{x}}_i;\ i \in I_m\} = \{\mathbf{x}_{id};\ i \in I_m\}$$
is a basis of ${}_{.x}M_{(0)}^m$ denoted by ${}_{.m}B_x$. Check the following:
Example 2.
If N = 5 and m = 2 , then the following vectors
$$\mathbf{u}_1 = \begin{pmatrix} 30{,}000 \\ 60{,}000 \\ 90{,}000 \\ 120{,}000 \\ 240{,}000 \end{pmatrix},$$
and
$$\mathbf{u}_2 = \begin{pmatrix} 50{,}000 \\ 70{,}000 \\ 130{,}000 \\ 170{,}000 \\ 330{,}000 \end{pmatrix}$$
form a basis of ${}_{.u}M^2$. One writes
$$\bar{\mathbf{u}}_1 = \begin{pmatrix} 108{,}000 \\ 108{,}000 \\ 108{,}000 \\ 108{,}000 \\ 108{,}000 \end{pmatrix},$$
and
$$\bar{\mathbf{u}}_2 = \begin{pmatrix} 150{,}000 \\ 150{,}000 \\ 150{,}000 \\ 150{,}000 \\ 150{,}000 \end{pmatrix},$$
so a basis of the linear manifold over R denoted by ${}_{.\bar{u}}M^2$ is given by the following ordered pair of vectors
$$\left( \begin{pmatrix} 108{,}000 \\ 108{,}000 \\ 108{,}000 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 108{,}000 \\ 108{,}000 \end{pmatrix} \right).$$
The deviations characterizing the numerical values of the first marginal statistical variable are given by
$$1 \cdot \begin{pmatrix} 30{,}000 \\ 60{,}000 \\ 90{,}000 \\ 120{,}000 \\ 240{,}000 \end{pmatrix} + 0 \cdot \begin{pmatrix} 50{,}000 \\ 70{,}000 \\ 130{,}000 \\ 170{,}000 \\ 330{,}000 \end{pmatrix} - \begin{pmatrix} 108{,}000 \\ 108{,}000 \\ 108{,}000 \\ 0 \\ 0 \end{pmatrix} - \begin{pmatrix} 0 \\ 0 \\ 0 \\ 108{,}000 \\ 108{,}000 \end{pmatrix} = \begin{pmatrix} -78{,}000 \\ -48{,}000 \\ -18{,}000 \\ 12{,}000 \\ 132{,}000 \end{pmatrix}.$$
The deviations characterizing the numerical values of the second marginal statistical variable are obtained in a similar way. The set of vectors containing deviations as their components is a basis of ${}_{.u}M_{(0)}^2$ denoted by ${}_{.2}B_u = \{\mathbf{u}_{1d}, \mathbf{u}_{2d}\}$. Observed time series data are treated by means of deviations. All elements of a linear subspace of $E_N$ are generated by $\mathbf{u}_{1d}$ and $\mathbf{u}_{2d}$ via linear combinations. This is therefore the mechanism that gives rise to all data which can rightly be handled by means of $\mathbf{u}_{1d}$ and $\mathbf{u}_{2d}$. Linear and multilinear elements appear.
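A minimal sketch of this construction: subtracting from each observed vector the vector of its arithmetic mean yields the deviation basis of ${}_{.u}M_{(0)}^2$:

```python
import numpy as np

# Basis of the manifold: the observed marginal variables.
u1 = np.array([30_000., 60_000., 90_000., 120_000., 240_000.])
u2 = np.array([50_000., 70_000., 130_000., 170_000., 330_000.])

# Subtracting from each vector the vector of its arithmetic mean
# yields the deviation vectors u1_d and u2_d.
u1_d = u1 - u1.mean()   # [-78000. -48000. -18000.  12000. 132000.]
u2_d = u2 - u2.mean()   # [-100000. -80000. -20000.  20000. 180000.]

# Each deviation vector sums to zero, so the deviation manifold is
# embedded in the hyperplane of E_5 whose components sum to zero.
assert np.isclose(u1_d.sum(), 0) and np.isclose(u2_d.sum(), 0)
```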

3.4. An α -Metric Tensor Defined with Respect to a Linear Manifold over R

Let $E_N$ be a linear space over R and let ${}_{.N}B_e = \{\mathbf{e}_\beta;\ \beta \in I_N\}$ be an orthonormal basis of it. A multiple frequency distribution of order m is determined by an ordered pair of affine tensors of order m. Both affine tensors of order m belong to the linear space denoted by $(E_N)^m$. A basis of $(E_N)^m$ is denoted by $B_N^m = \{\mathbf{e}_{\beta_1} \otimes \cdots \otimes \mathbf{e}_{\beta_m}\}$. The first affine tensor of the pair has $N^m$ contravariant components. By definition, each component of this tensor is the product of one of the N contravariant components of each of the m vectors. Each vector of the m vectors identifies a marginal frequency distribution associated with the corresponding marginal statistical variable. This is because its components represent the deviations from the arithmetic mean of the variable under consideration, so calculating this index of central tendency requires knowing the corresponding distribution. The second affine tensor of the pair has $N^m$ covariant components. They identify association frequencies. Each frequency is associated with the product of one of the N contravariant components of each of the m vectors. Whenever m marginal statistical variables are studied, the relative frequencies of the corresponding marginal distributions remain fixed. They are invariant. If two or more marginal statistical variables are studied together, then the relative frequencies of the corresponding marginal distributions are coherently divided. In this way, association frequencies arise. It is possible to establish the following:
Definition 3.
An α-metric tensor defined with respect to a linear manifold over R of dimension m gives rise to a square matrix of order m. Each element of such a matrix is an inner product, called the α-product of two vectors, based on an ordered pair of affine tensors of order 2. The first tensor of the pair has contravariant components, the second one has covariant components.
A finite-dimensional linear manifold over R is denoted by ${}_{.x}M_{(0)}^m$. Let ${}_{.m}B_x$ be a basis of it. It is then possible to study $m^2$ ordered pairs of vectors denoted by $(\mathbf{x}_{id}, \mathbf{x}_{jd})$ identifying deviations from the arithmetic mean of the corresponding marginal statistical variables $X_i$ and $X_j$. Such variables identify a multiple statistical variable of order 2 denoted by $\{X_i, X_j\}$, and they are obtained from a multiple statistical variable of order m denoted by $X_{12\ldots m} = \{X_1, X_2, \ldots, X_m\}$. Let $(E_N)^2$ be the linear space containing affine tensors of order 2 and let $B_N^2 = \{\mathbf{e}_\beta \otimes \mathbf{e}_\gamma\}$ be a basis of it. The association frequencies are expressed by the following affine tensor of order 2, whose generic component is given by
$$\mathbf{n}_{ij} = {}_{.ij}n_{\beta\gamma}\ \mathbf{e}^\beta \otimes \mathbf{e}^\gamma.$$
Contravariant components that appear in (77) are inappropriately used. Thus, the generic component of an α-metric tensor defined with respect to ${}_{.x}M_{(0)}^m$ is given by the following inner product
$${}_{.x}g_{ij} = \langle \mathbf{x}_{id}, \mathbf{x}_{jd} \rangle_\alpha = x_i^\beta\, x_j^\gamma\ {}_{.ij}n_{\beta\gamma}.$$
It is an α-product. The set given by $\{x_i^\beta x_j^\gamma\}$ denotes the $N^2$ contravariant components of an affine tensor of order 2. Each contravariant component is the product of the contravariant components of two vectors. The set given by $\{{}_{.ij}n_{\beta\gamma}\}$ denotes the $N^2$ covariant components of an affine tensor of order 2. They are association frequencies. In other terms, the generic component of an α-metric tensor is obtained by taking an ordered pair of vectors into account, to which corresponds, by construction, an affine tensor of order 2 representing association frequencies. Each vector of the previous ordered pair belongs to $E_N$. Since a symmetric matrix arises, the number of distinct components of an α-metric tensor is given by
$$C_{m,2}^r = \frac{1}{2}\, m(m+1),$$
where $C_{m,2}^r$ expresses combinations with repetition. It is possible that the two indices i and j characterizing (78) are equal, so the notion of the α-norm of a vector given by
$${}_{.x}g_{ii} = \|\mathbf{x}_{id}\|_\alpha^2 = x_i^\beta\, x_i^\beta\ {}_{.i}n_\beta$$
takes place as a fundamental part of the elements of an α-metric tensor. Such a part properly expresses an α-distance. The following inequality
$$|{}_{.x}g_{ij}| \le \sqrt{{}_{.x}g_{ii}\ {}_{.x}g_{jj}}$$
is called the α-generalized Cauchy–Schwarz inequality and characterizes the α-metric structure of ${}_{.x}M_{(0)}^m$. Note the following:
Remark 5.
An α-metric tensor defined with respect to a linear manifold over R of dimension m denoted by ${}_{.x}M_{(0)}^m$ gives rise to a square matrix of order m expressed by
$$\begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{12} & \cdots & {}_{.x}g_{1m} \\ {}_{.x}g_{21} & {}_{.x}g_{22} & \cdots & {}_{.x}g_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ {}_{.x}g_{m1} & {}_{.x}g_{m2} & \cdots & {}_{.x}g_{mm} \end{pmatrix}.$$
In this way, a subdivision of the exchangeability of the m marginal statistical variables constituting the set given by $\{X_1, X_2, \ldots, X_m\}$ is shown. Ordered pairs of the m marginal statistical variables are studied. There exists a fundamental invariance property related to the m marginal frequency distributions, so multilinear relationships between the m marginal variables are caught by (82).
Check the following:
Example 3.
If N = 4 and m = 3, then a multiple statistical variable of order 3 is divided into three ordered triples of vectors. Each component of the first vector of each ordered triple denotes an annual interval. If $\mathbf{x}_1 = (30{,}000,\ 60{,}000,\ 90{,}000,\ 120{,}000)$, then the first ordered triple of vectors is given by
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \end{pmatrix},\ \begin{pmatrix} -45{,}000 \\ -15{,}000 \\ 15{,}000 \\ 45{,}000 \end{pmatrix},\ \begin{pmatrix} 1/4 \\ 1/4 \\ 1/4 \\ 1/4 \end{pmatrix} \right).$$
If $\mathbf{x}_2 = (31{,}000,\ 62{,}000,\ 93{,}000,\ 121{,}000)$, then the second ordered triple of vectors is given by
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \end{pmatrix},\ \begin{pmatrix} -45{,}750 \\ -14{,}750 \\ 16{,}250 \\ 44{,}250 \end{pmatrix},\ \begin{pmatrix} 1/4 \\ 1/4 \\ 1/4 \\ 1/4 \end{pmatrix} \right).$$
If $\mathbf{x}_3 = (50{,}000,\ 70{,}000,\ 130{,}000,\ 170{,}000)$, then the third ordered triple of vectors is given by
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \end{pmatrix},\ \begin{pmatrix} -55{,}000 \\ -35{,}000 \\ 25{,}000 \\ 65{,}000 \end{pmatrix},\ \begin{pmatrix} 1/4 \\ 1/4 \\ 1/4 \\ 1/4 \end{pmatrix} \right).$$
The last two vectors of each ordered triple identify a marginal frequency distribution. The three vectors given by $\mathbf{x}_{1d}$, $\mathbf{x}_{2d}$, and $\mathbf{x}_{3d}$ are linearly independent. They form a basis of a linear manifold over R of dimension 3 embedded in $E_4$. A multiple frequency distribution of order 3 is intrinsically divided into $3^2 = 9$ inner products summarizing $3^2 = 9$ joint or bivariate distributions. The latter characterize
$$X_{12} = \{X_1, X_2\} = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}\},$$
$$X_{13} = \{X_1, X_3\} = \{\mathbf{x}_{1d}, \mathbf{x}_{3d}\},$$
and
$$X_{23} = \{X_2, X_3\} = \{\mathbf{x}_{2d}, \mathbf{x}_{3d}\},$$
where $X_{12}$, $X_{13}$, and $X_{23}$ are multiple statistical variables of order 2 obtained from a multiple statistical variable of order 3 denoted by $X_{123} = \{X_1, X_2, X_3\} = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}, \mathbf{x}_{3d}\}$. From $X_{12}$, it follows
$$\begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{12} \\ {}_{.x}g_{21} & {}_{.x}g_{22} \end{pmatrix}.$$
From $X_{13}$, it follows
$$\begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{13} \\ {}_{.x}g_{31} & {}_{.x}g_{33} \end{pmatrix}.$$
From $X_{23}$, it follows
$$\begin{pmatrix} {}_{.x}g_{22} & {}_{.x}g_{23} \\ {}_{.x}g_{32} & {}_{.x}g_{33} \end{pmatrix}.$$
Putting all the α-products together, one obtains the following 3 × 3 matrix
$$\begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{12} & {}_{.x}g_{13} \\ {}_{.x}g_{21} & {}_{.x}g_{22} & {}_{.x}g_{23} \\ {}_{.x}g_{31} & {}_{.x}g_{32} & {}_{.x}g_{33} \end{pmatrix}$$
characterizing the corresponding α-metric tensor. It is defined with respect to a linear manifold over R of dimension 3 embedded in $E_4$. Whenever the α-norm of a vector is calculated, the association frequencies are fixed. Thus, one obtains Table 1, which identifies the corresponding joint distribution.
The same is true for the other α-norms. Whenever the α-product of two distinct vectors is calculated, the association frequencies can be chosen. They coherently divide the marginal frequencies. Thus, one obtains Table 2, which identifies the corresponding joint distribution.
The same is true for the other α-products. From Table 1 and Table 2, ordered pairs of affine tensors of order 2 appear. The first tensor of each pair has $4^2 = 16$ contravariant components, the second one has $4^2 = 16$ covariant components. For instance, one writes
$$\left( \begin{array}{c} (-45{,}000) \cdot (-45{,}000);\ (-45{,}000) \cdot (-15{,}000);\ \ldots;\ 45{,}000 \cdot 45{,}000 \\ 1/4;\ 0;\ \ldots;\ 1/4 \end{array} \right)$$
with respect to Table 1. A multiple statistical variable intrinsically studies interdependence relationships between the marginal statistical variables, which are the components of it.
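Since Tables 1 and 2 are not reproduced here, the following sketch adopts one admissible choice of association frequencies (1/4 on the main diagonal, pairing the observations year by year) to compute the α-metric tensor of Example 3; any other coherent division of the marginal frequencies would serve equally well:

```python
import numpy as np

# Deviation vectors of Example 3 (N = 4 years, m = 3 variables).
x1d = np.array([-45_000., -15_000., 15_000., 45_000.])
x2d = np.array([-45_750., -14_750., 16_250., 44_250.])
x3d = np.array([-55_000., -35_000., 25_000., 65_000.])
X = np.vstack([x1d, x2d, x3d])          # m x N matrix
N, m = 4, 3

# Generic component of the alpha-metric tensor, as in (78):
# g_ij = x_i^beta x_j^gamma n_betagamma.
def g(xi, xj, n):
    return xi @ n @ xj

# One admissible choice of association frequencies pairs the
# observations year by year (1/4 on the main diagonal); the missing
# Tables 1 and 2 may make a different, equally coherent choice.
n = np.diag(np.full(N, 1 / N))

G = np.array([[g(X[i], X[j], n) for j in range(m)] for i in range(m)])
print(G)                # 3 x 3 symmetric matrix of alpha-products
# The alpha-generalized Cauchy-Schwarz inequality holds.
assert all(abs(G[i, j]) <= np.sqrt(G[i, i] * G[j, j]) + 1e-9
           for i in range(m) for j in range(m))
```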

3.5. Eigenvalues, Eigenvectors, and Eigenspaces Associated with an α -Metric Tensor

Given the following square matrix of order m
$$A = \begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{12} & \cdots & {}_{.x}g_{1m} \\ {}_{.x}g_{21} & {}_{.x}g_{22} & \cdots & {}_{.x}g_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ {}_{.x}g_{m1} & {}_{.x}g_{m2} & \cdots & {}_{.x}g_{mm} \end{pmatrix},$$
suppose that the solutions with algebraic multiplicity 1 of the characteristic equation, obtained by equating the characteristic polynomial to zero, are m (Frank, 1946). It is then possible to write
$$A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m \end{pmatrix},$$
where $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the m distinct eigenvalues of A (Tao & Vu, 2011; Landon et al., 2020; Denton et al., 2022). For each eigenvalue of A, one observes the following eigenvalue equations
$$A\mathbf{v} = \lambda_i\, \mathbf{v}, \quad i = 1, \ldots, m,$$
where $\mathbf{v}$ is a nonzero m × 1 column matrix. It is called an eigenvector of A, where $\lambda_i \in \mathbb{R}$ is the corresponding eigenvalue. All eigenvectors associated with a given eigenvalue of A, together with the zero vector, give rise to a linear subspace over R of dimension 1. It is called the eigenspace of A associated with a specific eigenvalue of the same square matrix. The eigenvectors associated with $\lambda_1, \lambda_2, \ldots, \lambda_m$ are orthogonal in pairs. If such eigenvectors are normalized, then the resulting set of eigenvectors is orthonormal (Tipping & Bishop, 1999; Jolliffe & Cadima, 2016). The m × m matrix containing the set of normalized eigenvectors associated with $\lambda_1, \lambda_2, \ldots, \lambda_m$ is orthogonal. The normalized eigenvector associated with $\lambda_1$ is an m × 1 column matrix embedded in the orthogonal matrix under consideration, the normalized eigenvector associated with $\lambda_2$ is an m × 1 column matrix embedded in the same orthogonal matrix, and so on. Each eigenvector can be written as
$$\mathbf{v} = v^i\, \mathbf{e}_i,$$
where $\mathbf{e}_i$ is an element of an orthonormal basis of a linear manifold over R of dimension m. The set given by $\{v^i\}$ contains m elements. They are the components of $\mathbf{v}$. The identity matrix of order m arises, and each column vector of it contains the components of a normalized eigenvector. Check the following:
Example 4.
From the following ordered triples of vectors
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -78{,}000 \\ -48{,}000 \\ -18{,}000 \\ 12{,}000 \\ 132{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
the following 2 × 2 matrix
$$A = \begin{pmatrix} 5{,}256{,}000{,}000 & 0 \\ 0 & 9{,}920{,}000{,}000 \end{pmatrix}$$
arises. Here, $\lambda_1 = 5{,}256{,}000{,}000$ and $\lambda_2 = 9{,}920{,}000{,}000$. The corresponding eigenvectors are $\mathbf{v}_1 = (5{,}256{,}000{,}000,\ 0)$ and $\mathbf{v}_2 = (0,\ 9{,}920{,}000{,}000)$. All eigenvectors associated with the same eigenvalue $\lambda_i$, $i = 1, 2$, together with the zero vector, give rise to a linear subspace over R of dimension 1. It is the eigenspace of A associated with a specific eigenvalue $\lambda_i$, $i = 1, 2$, of A. The α-metric tensor associated with a linear manifold over R of dimension 2 is a 2 × 2 diagonal matrix denoted by A. Hence, the covariance between $(-78{,}000, -48{,}000, \ldots, 132{,}000)$ and $(-100{,}000, -80{,}000, \ldots, 180{,}000)$ is taken to be equal to zero. In other terms, when $(-78{,}000, \ldots, 132{,}000)$ and $(-100{,}000, \ldots, 180{,}000)$ are jointly studied, the association frequencies are chosen in such a way that the α-product between them is equal to zero. Such an α-product is commutative.
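The eigenvalues and normalized eigenvectors of the symmetric matrix A can be obtained with a standard routine, as in the following minimal sketch:

```python
import numpy as np

# The alpha-metric tensor of Example 4 (a 2 x 2 diagonal matrix).
A = np.array([[5_256_000_000., 0.],
              [0., 9_920_000_000.]])

# Eigenvalues and orthonormal eigenvectors of the symmetric matrix A.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)       # [5.256e+09 9.920e+09]
print(eigenvectors)      # columns: normalized eigenvectors

# Each eigenvalue equation A v = lambda v holds.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
# The matrix of normalized eigenvectors is orthogonal.
assert np.allclose(eigenvectors.T @ eigenvectors, np.eye(2))
```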

4. The Principal Components of a Multiple Statistical Variable and Their Properties

Let
$$X_{12\ldots m} = \{X_1, X_2, \ldots, X_m\} = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}, \ldots, \mathbf{x}_{md}\}$$
be a multiple statistical variable of order m. The vectors given by $\mathbf{x}_{1d}, \mathbf{x}_{2d}, \ldots, \mathbf{x}_{md}$ are supposed to be linearly independent. Hence, they form a basis denoted by ${}_{.m}B_x = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}, \ldots, \mathbf{x}_{md}\}$ of a linear manifold over R of dimension m embedded in $E_N$. An α-metric tensor referring to this linear manifold over R of dimension m gives rise to a square matrix of order m. The set given by
$$\{{}_{.x}\lambda_{(k)};\ (k) \in I_m\}$$
identifies m distinct eigenvalues. The set given by
$$\{\mathbf{v}_{(k)};\ (k) \in I_m\}$$
contains m normalized eigenvectors. Such eigenvectors are α-orthogonal in pairs. The eigenvalues belonging to (88) and the eigenvectors belonging to (89) refer to the same α-metric tensor. It is possible to establish the following:
Definition 4.
Given a multiple statistical variable of order m denoted by X 12 m = { X 1 , , X m } , the principal components referring to its multiple frequency distribution of the same order are expressed by linear combinations of m vectors, where each vector identifies a marginal frequency distribution. The coefficients of each linear combination are the components of a normalized eigenvector associated with the corresponding eigenvalue.
By definition, the principal components are expressed by
$$\mathbf{w}_{(h)d} = v_{(h)}^i\, \mathbf{x}_{id}, \quad (h) \in I_m,$$
where the set given by $\{v_{(h)}^i\}$ denotes the components of a normalized eigenvector. Note the following:
Remark 6.
As the components of each vector $\mathbf{x}_{id}$, $i = 1, \ldots, m$, represent the deviations from the arithmetic mean of the corresponding marginal statistical variable, each vector $\mathbf{x}_{id}$, $i = 1, \ldots, m$, includes a marginal frequency distribution that characterizes the corresponding marginal statistical variable. Thus, from
$$\mathbf{x}_i - \bar{\mathbf{x}}_i = \mathbf{x}_{id}, \quad i = 1, \ldots, m,$$
it follows
$$\mathbf{x}_{id} + \bar{\mathbf{x}}_i = \mathbf{x}_i, \quad i = 1, \ldots, m.$$
The set of the principal components referring to a multiple frequency distribution of order m associated with $X_{12\ldots m}$ is an α-orthogonal basis, denoted by ${}_{.m}B_w^x = \{\mathbf{w}_{(1)d}, \ldots, \mathbf{w}_{(m)d}\}$, of the same linear manifold over R of dimension m embedded in $E_N$. One writes
$${}_{.w}g_{(k)(h)} = \langle \mathbf{w}_{(k)d}, \mathbf{w}_{(h)d} \rangle_\alpha = \lambda_{(k)}\, \delta_{(k)(h)}$$
to denote the covariant components of an α-metric tensor as (k) and (h) vary in $I_m$. This tensor makes evident the fundamental properties of the principal components. Hence, note the following:
Remark 7.
The principal components referring to a multiple frequency distribution of order m associated with X 12 m are α-orthogonal in pairs and the α-norm of each of them is equal to the eigenvalue corresponding to the eigenvector, whose components are the ones of the linear combination given by (90) and identifying the principal component itself.
From the following square matrix of order m
$$B = \begin{pmatrix} {}_{.w}g_{(1)(1)} & {}_{.w}g_{(1)(2)} & \cdots & {}_{.w}g_{(1)(m)} \\ {}_{.w}g_{(2)(1)} & {}_{.w}g_{(2)(2)} & \cdots & {}_{.w}g_{(2)(m)} \\ \vdots & \vdots & \ddots & \vdots \\ {}_{.w}g_{(m)(1)} & {}_{.w}g_{(m)(2)} & \cdots & {}_{.w}g_{(m)(m)} \end{pmatrix},$$
one observes
$$B = \begin{pmatrix} {}_{.x}\lambda_{(1)} & 0 & \cdots & 0 \\ 0 & {}_{.x}\lambda_{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {}_{.x}\lambda_{(m)} \end{pmatrix},$$
with ${}_{.x}\lambda_{(1)} \ge {}_{.x}\lambda_{(2)} \ge \cdots \ge {}_{.x}\lambda_{(m)}$ by hypothesis. Since the principal components are defined with respect to ${}_{.m}B_x = \{\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}\}$, one writes
$$\begin{pmatrix} {}_{.x}\lambda_{(1)} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {}_{.x}\lambda_{(m)} \end{pmatrix} = \begin{pmatrix} {}_{.w}\lambda_{(1)} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {}_{.w}\lambda_{(m)} \end{pmatrix}.$$
Let ${}_{.w}g$ be the determinant of the matrix of the covariant components of the α-metric tensor given by (92). If the cofactor of ${}_{.w}g_{(k)(h)}$ is denoted by ${}_{.w}a_{(k)(h)}$, then the contravariant components of the same α-metric tensor are expressed by
$${}_{.w}g^{(k)(h)} = \frac{{}_{.w}a_{(k)(h)}}{{}_{.w}g}$$
as (k) and (h) vary in $I_m$, so it is possible to write
$$B^{-1} = \begin{pmatrix} {}_{.w}g^{(1)(1)} & {}_{.w}g^{(1)(2)} & \cdots & {}_{.w}g^{(1)(m)} \\ {}_{.w}g^{(2)(1)} & {}_{.w}g^{(2)(2)} & \cdots & {}_{.w}g^{(2)(m)} \\ \vdots & \vdots & \ddots & \vdots \\ {}_{.w}g^{(m)(1)} & {}_{.w}g^{(m)(2)} & \cdots & {}_{.w}g^{(m)(m)} \end{pmatrix}.$$
It is clear that one observes
$$B^{-1} = \begin{pmatrix} \frac{1}{{}_{.x}\lambda_{(1)}} & 0 & \cdots & 0 \\ 0 & \frac{1}{{}_{.x}\lambda_{(2)}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{{}_{.x}\lambda_{(m)}} \end{pmatrix},$$
so one writes
$$\begin{pmatrix} \frac{1}{{}_{.x}\lambda_{(1)}} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{1}{{}_{.x}\lambda_{(m)}} \end{pmatrix} = \begin{pmatrix} \frac{1}{{}_{.w}\lambda_{(1)}} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{1}{{}_{.w}\lambda_{(m)}} \end{pmatrix}.$$
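A minimal sketch of the construction of the principal components, using the deviation vectors of Example 5 and, as an illustrative working hypothesis, year-by-year association frequencies (so that the α-metric tensor reduces to the ordinary covariance matrix):

```python
import numpy as np

# Deviation vectors forming the basis {x_1d, x_2d} and their
# alpha-metric tensor with 1/5 on the main diagonal of the table.
x1d = np.array([-52_000., -32_000., -12_000., 8_000., 88_000.])
x2d = np.array([-100_000., -80_000., -20_000., 20_000., 180_000.])
X = np.vstack([x1d, x2d])
n = np.diag(np.full(5, 1 / 5))
G = X @ n @ X.T                   # 2 x 2 alpha-metric tensor

# Normalized eigenvectors of G supply the coefficients v_(h)^i of the
# linear combinations w_(h)d = v_(h)^i x_id in (90).
lams, V = np.linalg.eigh(G)
W = V.T @ X                       # rows: the principal components

# The principal components are alpha-orthogonal in pairs, and the
# squared alpha-norm of each equals the corresponding eigenvalue.
assert np.allclose(W @ n @ W.T, np.diag(lams))
```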

5. About the Geometric and Statistical Meaning of a Particular Linear Manifold over R

A finite-dimensional linear manifold over R is generated by a finite set of marginal frequency distributions. Its statistical meaning consists of this. Let ${}_{.m}B_x = \{\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}\}$ be a basis of a linear manifold over R of dimension m embedded in $E_N$, with N > m. Let ${}_{.m}B_u = \{\mathbf{u}_{1d}, \ldots, \mathbf{u}_{md}\}$ be a basis of another linear manifold over R of dimension m embedded in $E_N$, with N > m. If one writes
$$\hat{\mathbf{u}}_{jd} = h\, \mathbf{u}_{jd}, \quad j = 1, \ldots, m,$$
with $h \in \mathbb{R}$, then $\hat{\mathbf{u}}_{1d}$ is said to be proportional to $\mathbf{u}_{1d}$, $\hat{\mathbf{u}}_{2d}$ is said to be proportional to $\mathbf{u}_{2d}$, …, and $\hat{\mathbf{u}}_{md}$ is said to be proportional to $\mathbf{u}_{md}$. It is therefore possible to construct a basis, denoted by ${}_{.m}B_{\hat{u}} = \{\hat{\mathbf{u}}_{1d}, \ldots, \hat{\mathbf{u}}_{md}\}$, of a specific linear manifold over R, whose vectors are said to be proportional to the ones of a basis, denoted by ${}_{.m}B_u = \{\mathbf{u}_{1d}, \ldots, \mathbf{u}_{md}\}$, of a finite-dimensional linear manifold over R denoted by ${}_{.u}M_{(0)}^m$. In this paper, observed data are analyzed within a finite-dimensional mathematical structure (a linear space over R and its subspaces) that also includes unobserved data. Unobserved data are treated under a specific knowledge hypothesis that is made explicit by a given individual. The proportionality hypothesis is made explicit by him. The mathematical properties of the closed structure under consideration are therefore used to examine observed and unobserved data.
Each vector belonging to ${}_{.m}B_{\hat{u}}$ can be written in the form given by
$$\hat{\mathbf{u}}_{jd} = u_j^h\, \mathbf{x}_{hd}, \quad j = 1, \ldots, m,$$
and the same is true for every vector of the linear manifold over R of dimension m whose basis is given by ${}_{.m}B_{\hat{u}}$. Thus, also the generic vector denoted by $\hat{\mathbf{u}}_d$ can be expressed as a linear combination of the vectors belonging to ${}_{.m}B_x = \{\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}\}$. The geometrical meaning of a finite-dimensional linear manifold over R is that every vector of it can be expressed as a linear combination of a finite number of basis vectors. It is possible to determine the covariant and contravariant components of $\hat{\mathbf{u}}_{jd}$, taking advantage of the covariant and contravariant components of the α-metric tensor that is constructed with respect to ${}_{.m}B_x = \{\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}\}$. Hence, the covariant components of $\hat{\mathbf{u}}_{jd}$ are given by
$$u_{ji} = u_j^h\ {}_{.x}g_{hi},$$
and the contravariant ones are expressed by
$$u_j^k = u_{ji}\ {}_{.x}g^{ki}.$$
Interdependence relationships between the marginal distributions given by $\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}$ can be studied. Interdependence relationships between observed time series data expressed by $\mathbf{x}_{1d}, \ldots, \mathbf{x}_{md}$ are studied via a tensor. Such relationships are of a multilinear nature. Additionally, given the linear combination expressed by (100), if one writes
$$u_j^h\, \mathbf{x}_h,$$
where $\mathbf{x}_h$ is different from $\mathbf{x}_{hd}$ because $\mathbf{x}_h$ contains actual values rather than deviations, then one can obtain a vector having N components that can be traced back to $\hat{\mathbf{u}}_{jd}$ using the same relative frequencies of the corresponding marginal frequency distribution. Check the following:
Example 5.
From the following ordered triples of vectors
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8{,}000 \\ 88{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
it follows that ${}_{.2}B_x = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}\}$ is a basis of a linear manifold over R of dimension 2 embedded in $E_5$. The second elements of each triple of vectors are $\mathbf{x}_{1d}$ and $\mathbf{x}_{2d}$. The following 2 × 2 matrix
$$C = \begin{pmatrix} {}_{.x}g_{11} & {}_{.x}g_{12} \\ {}_{.x}g_{21} & {}_{.x}g_{22} \end{pmatrix} = \begin{pmatrix} 2{,}336{,}000{,}000 & 0 \\ 0 & 9{,}920{,}000{,}000 \end{pmatrix}$$
identifies the covariant components of the α-metric tensor that is constructed with respect to ${}_{.2}B_x = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}\}$. The vectors belonging to ${}_{.2}B_u = \{\mathbf{u}_{1d}, \mathbf{u}_{2d}\}$ are the second elements of the following ordered triples of vectors
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -78{,}000 \\ -48{,}000 \\ -18{,}000 \\ 12{,}000 \\ 132{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right).$$
The vectors belonging to ${}_{.2}B_{\hat{u}} = \{\hat{\mathbf{u}}_{1d}, \hat{\mathbf{u}}_{2d}\}$ are the second elements of the following ordered triples of vectors
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right),$$
and
$$\left( \begin{pmatrix} 2019 \\ 2020 \\ 2021 \\ 2022 \\ 2023 \end{pmatrix},\ \begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix},\ \begin{pmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{pmatrix} \right).$$
One can write
$$\hat{\mathbf{u}}_{1d} = 3\, \mathbf{x}_{1d} + 0\, \mathbf{x}_{2d},$$
and
$$\hat{\mathbf{u}}_{2d} = 0\, \mathbf{x}_{1d} + 3\, \mathbf{x}_{2d}.$$
Thus, the covariant components of $\hat{\mathbf{u}}_{1d}$ and $\hat{\mathbf{u}}_{2d}$ are given by
$$u_{11} = u_1^1\ {}_{.x}g_{11} + u_1^2\ {}_{.x}g_{21} = 7{,}008{,}000{,}000,$$
$$u_{12} = u_1^1\ {}_{.x}g_{12} + u_1^2\ {}_{.x}g_{22} = 0,$$
$$u_{21} = u_2^1\ {}_{.x}g_{11} + u_2^2\ {}_{.x}g_{21} = 0,$$
$$u_{22} = u_2^1\ {}_{.x}g_{12} + u_2^2\ {}_{.x}g_{22} = 29{,}760{,}000{,}000.$$
Conversely, the contravariant components of $\hat{\mathbf{u}}_{1d}$ and $\hat{\mathbf{u}}_{2d}$ are expressed by
$$u_1^1 = u_{11}\ {}_{.x}g^{11} + u_{12}\ {}_{.x}g^{12} = 7{,}008{,}000{,}000 \cdot \frac{1}{2{,}336{,}000{,}000} = 3,$$
$$u_1^2 = u_{11}\ {}_{.x}g^{21} + u_{12}\ {}_{.x}g^{22} = 0,$$
$$u_2^1 = u_{21}\ {}_{.x}g^{11} + u_{22}\ {}_{.x}g^{12} = 0,$$
$$u_2^2 = u_{21}\ {}_{.x}g^{21} + u_{22}\ {}_{.x}g^{22} = 29{,}760{,}000{,}000 \cdot \frac{1}{9{,}920{,}000{,}000} = 3.$$
It is possible to determine in this way the covariant and contravariant components of the generic vector denoted by $\hat{\mathbf{u}}_d$ that is expressed as a linear combination of the vectors belonging to ${}_{.2}B_x = \{\mathbf{x}_{1d}, \mathbf{x}_{2d}\}$. Here, ${}_{.2}B_x$ is not an orthonormal basis. Additionally, from
$$3 \cdot \begin{pmatrix} 20{,}000 \\ 40{,}000 \\ 60{,}000 \\ 80{,}000 \\ 160{,}000 \end{pmatrix} + 4 \cdot \begin{pmatrix} 50{,}000 \\ 70{,}000 \\ 130{,}000 \\ 170{,}000 \\ 330{,}000 \end{pmatrix} = \begin{pmatrix} 260{,}000 \\ 400{,}000 \\ 700{,}000 \\ 920{,}000 \\ 1{,}800{,}000 \end{pmatrix},$$
it follows that it is possible to find the vector that coincides with the one containing deviations only. Such a vector is given by
$$3 \cdot \begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8{,}000 \\ 88{,}000 \end{pmatrix} + 4 \cdot \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix} = \begin{pmatrix} -556{,}000 \\ -416{,}000 \\ -116{,}000 \\ 104{,}000 \\ 984{,}000 \end{pmatrix},$$
so the following expressions
$$\begin{cases} 260{,}000 - 816{,}000 = -556{,}000 \\ 400{,}000 - 816{,}000 = -416{,}000 \\ 700{,}000 - 816{,}000 = -116{,}000 \\ 920{,}000 - 816{,}000 = 104{,}000 \\ 1{,}800{,}000 - 816{,}000 = 984{,}000 \end{cases}$$
that characterize the right-hand side of the previous equality hold. The arithmetic mean of the marginal statistical variable, whose actual values are given by
$$\begin{pmatrix} 260{,}000 \\ 400{,}000 \\ 700{,}000 \\ 920{,}000 \\ 1{,}800{,}000 \end{pmatrix},$$
is equal to 816,000.
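The raising and lowering of indices carried out in Example 5 can be reproduced as follows, in a sketch that assumes the matrix C above:

```python
import numpy as np

# Covariant components of the alpha-metric tensor of Example 5.
C = np.array([[2_336_000_000., 0.],
              [0., 9_920_000_000.]])
C_inv = np.linalg.inv(C)        # contravariant components

# Contravariant components of u1_hat_d and u2_hat_d with respect to
# the basis {x_1d, x_2d}: u1_hat_d = 3 x_1d, u2_hat_d = 3 x_2d.
u_contra = np.array([[3., 0.],
                     [0., 3.]])

# Lowering the index: u_ji = u_j^h g_hi.
u_cov = u_contra @ C
print(u_cov)    # [[7.008e+09 0.] [0. 2.976e+10]]
# Raising it back recovers the contravariant components.
assert np.allclose(u_cov @ C_inv, u_contra)
```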

6. Proportionality Equations

The vectors belonging to the linear manifold over R denoted by . u ^ M ( 0 ) m represent the logical and formal qualification of the statistical model. Instead, the vectors belonging to the linear manifold over R denoted by . x M ( 0 ) m express the starting frequency distributions. The vectors belonging to . u ^ M ( 0 ) m get involved with respect to the starting frequency distributions because specific knowledge purposes are made explicit. In this paper, the proportionality purposes are made explicit. If the vectors belonging to . u ^ M ( 0 ) m characterize the model, and therefore, represent the units of measurement with respect to which to measure the vectors that identify the starting frequency distributions, then proportionality equations must be expressed with respect to the vectors identifying a basis of . u ^ M ( 0 ) m . One writes
$$x_{id} - x_i^{k}\,\hat{u}_{kd} = x_i^{k'}\,\hat{u}_{k'd}, \quad i \in I_m,$$

where $k \in K(s)$ and $k' \in K(s')$. By definition, $K(s)$ and $K(s')$ are two sets that represent a partition of $I_m$. Such sets contain $s$ and $s'$ indices, respectively; $s$ and $s'$ are positive natural numbers such that $s + s' = m$. Thus, one observes $K(s) \subset I_m$, $K(s') \subset I_m$, $K(s) \cap K(s') = \emptyset$, and $K(s) \cup K(s') = I_m$.

6.1. Particular Proportionality Equations

If $K(s)$ is a set containing $m - 1$ positive natural numbers and $K(s')$ is consequently a set containing one element only, then particular proportionality equations take place. Hence, one writes

$$x_{id} - x_i^{\bar{k}}\,\hat{u}_{\bar{k}d} = x_i^{\underline{k}}\,\hat{u}_{\underline{k}d}, \quad i \in I_m, \tag{105}$$
where the right-hand side of (105) is a monomial. Each vector denoted by $x_{id}$ includes a marginal frequency distribution identifying the corresponding observed time series. This distribution has an influence on the way of being of the frequency distributions associated with other observed time series and is, in turn, influenced by them. Each vector denoted by $\hat{u}_{jd}$ must be interpreted in the same way with regard to its mutual influence on the other frequency distributions denoted using similar symbols. Unlike $x_{id}$, however, the vectors denoted by $\hat{u}_{jd}$ represent the logical and formal expression of a hypothesis about the structure of marginal frequency distributions in the statistical population. The set of similar items that is of interest is therefore the result of the mutual influences of distinct time series. The left-hand side of (105) expresses the difference between an observed time series, which is determined by the concurrence of $m$ time series, and a linear combination of the remaining $m - 1$ vectors expressed in terms of the optimal situation represented by the model. This difference is what must be expressed by $x_{id}$ whenever the concurrence of the remaining $m - 1$ vectors is eliminated from $x_{id}$ itself. If the coefficients $x_i^{\bar{k}}$ of the linear combination given by $x_i^{\bar{k}}\,\hat{u}_{\bar{k}d}$ are different from $+1$, then the concurrence of the remaining $m - 1$ vectors must be eliminated from $x_{id}$ because it is considered to be anomalous. Such a concurrence is considered to be abnormal with respect to a specific and formulated hypothesis that is associated with the frequency distributions expressed by the $m - 1$ vectors denoted by $\hat{u}_{\bar{k}d}$. Instead, if the coefficients $x_i^{\bar{k}}$ are all equal to $+1$, then the contribution of the already optimal $m - 1$ vectors is eliminated from $x_{id}$, in the sense that such vectors are in accordance with a specific and formulated hypothesis. The particular proportionality equations shown in this subsection do not seem to identify an “ad hoc” empirical method (Keogh & Lin, 2005). They are instead based on logical elements that have to be taken into account in the analysis of real data (Kendrick & Jaycox, 1965; Ram, 1986; Granger, 2004). Check the following:
Example 6.
A basis of a linear manifold over R of dimension 2 embedded in $E^5$ is denoted by ${}_2\mathcal{B}_x = \{x_{1d}, x_{2d}\}$. One observes

$$x_{1d} = \begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8000 \\ 88{,}000 \end{pmatrix},$$

and

$$x_{2d} = \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix}.$$
Let ${}_2\mathcal{B}_{\hat{u}} = \{\hat{u}_{1d}, \hat{u}_{2d}\}$ be a basis of another linear manifold over R of dimension 2 embedded in $E^5$. Thus, one writes

$$\hat{u}_{1d} = \begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix},$$

and

$$\hat{u}_{2d} = \begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix}.$$
It is therefore possible to consider the following proportionality equations given by

$$\begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8000 \\ 88{,}000 \end{pmatrix} - x\begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix} = y\begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix},$$

and

$$\begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8000 \\ 88{,}000 \end{pmatrix} - y\begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix} = x\begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix}.$$

The two parameters x and y are made explicit using the Rouché–Capelli theorem. Additionally, other proportionality equations are given by

$$\begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix} - \alpha\begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix} = \beta\begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix},$$

and

$$\begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix} - \beta\begin{pmatrix} -156{,}000 \\ -96{,}000 \\ -36{,}000 \\ 24{,}000 \\ 264{,}000 \end{pmatrix} = \alpha\begin{pmatrix} -300{,}000 \\ -240{,}000 \\ -60{,}000 \\ 60{,}000 \\ 540{,}000 \end{pmatrix},$$
where the two parameters α and β are again made explicit using the Rouché–Capelli theorem.
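A minimal sketch of how such parameters can be computed in practice (the paper invokes the Rouché–Capelli theorem to establish consistency; the least-squares routine below is a substitute that recovers the exact coefficients of a consistent system, and all names are illustrative):

```python
import numpy as np

x1 = np.array([-52_000, -32_000, -12_000, 8_000, 88_000], dtype=float)
x2 = np.array([-100_000, -80_000, -20_000, 20_000, 180_000], dtype=float)
u1_hat, u2_hat = 3 * x1, 3 * x2

# x_1d - x*u_hat_1d = y*u_hat_2d  <=>  A @ [x, y] = x_1d.
A = np.column_stack([u1_hat, u2_hat])
xy, *_ = np.linalg.lstsq(A, x1, rcond=None)
print(xy)  # [1/3, 0]: consistent, since x_1d lies in the span of the basis

# x_2d - alpha*u_hat_2d = beta*u_hat_1d  <=>  A @ [beta, alpha] = x_2d.
beta_alpha, *_ = np.linalg.lstsq(A, x2, rcond=None)
print(beta_alpha)  # [0, 1/3]
```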

6.2. Particular Proportionality Equations Having an α-Orthogonal Direction

Particular proportionality equations can be written by focusing on a specific basis of a linear manifold over R of dimension m. Such a basis contains the principal components referring to a multiple frequency distribution of order m. As the vectors identifying principal components are α-orthogonal in pairs, particular proportionality equations having an α-orthogonal direction are obtained. One writes

$$v_i^{(h)}\,w_{(h)d} - v_i^{(\bar{k})}\,w_{(\bar{k})d} = v_i^{(\underline{k})}\,w_{(\underline{k})d}, \tag{106}$$

where the right-hand side of (106) is a vector expressing the α-orthogonal direction of the difference that appears as a vector on the left-hand side of it. Note the following:
Remark 8.
The vector on the left-hand side of (106) is obtained as a difference. Such a vector is a distance. The vector that appears on the right-hand side of (106) expresses the direction of the vector appearing on the left-hand side of it. This direction is an α-orthogonal direction. This is because principal components are involved. One of the properties of principal components is that they are α-orthogonal in pairs.
It is possible to highlight the ideal structure of a specific time series in the case in which this time series does not undergo alterations due to the way of being of the other time series within the statistical population. The minuend of (106) represents an observed time series, while the subtrahend of (106) expresses a linear combination of distributions, where each distribution has an ideal structure in itself. Hence, (106) shows that an observed time series is set against a theoretical one having an ideal structure in itself. Here, one can see a particular conception of statistical population, as it appears in the thought of Paolo Fortunati, an Italian statistician who taught at the University of Bologna a few decades ago and was also inspired by the research work of Corrado Gini.
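A loose numerical illustration of (106), assuming the diagonal α-metric tensor of the running example so that the principal directions coincide with the coordinate axes of ${}_2\mathcal{B}_x$; the vector v and all names are hypothetical:

```python
import numpy as np

# Covariant components of the alpha-metric tensor (running example).
g = np.diag([2.336e9, 9.92e9])
eigvals, eigvecs = np.linalg.eigh(g)
e1, e2 = eigvecs[:, 0], eigvecs[:, 1]  # principal directions (here, the axes)

# Coordinates, w.r.t. 2B_x, of an arbitrary vector in the manifold.
v = np.array([2.0, 5.0])

# Removing the component along the first principal direction leaves a
# residual lying along the second one, which is alpha-orthogonal to it.
residual = v - (v @ e1) * e1  # = (0, 5) here
assert np.isclose(e1 @ g @ e2, 0.0)  # alpha-product of the two directions is 0
```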

7. The Structure of a Specific Characteristic Equation

A fundamental theorem, called the theorem of α-orthogonality, is the following:
Theorem 1.
Let $\hat{u}_{1d}, \ldots, \hat{u}_{md}$ be vectors such that one writes $\hat{u}_{jd} = u_j^h\,x_{hd}$, $j = 1, \ldots, m$. If the following $\binom{m}{2}$ expressions

$$\langle \hat{u}_{id}, \hat{u}_{jd} \rangle_\alpha = 0, \quad i < j \in I_m,$$

are true, then the vectors $\hat{u}_{1d}, \ldots, \hat{u}_{md}$ coincide with the principal components.
Such a theorem is proved in Appendix A of this paper. As one writes

$${}_{\hat{u}}g_{ij} = \langle \hat{u}_{id}, \hat{u}_{jd} \rangle_\alpha = u_i^k\,u_j^h\,{}_xg_{kh} = 0, \quad i < j \in I_m,$$

it is possible to determine a specific characteristic equation associated with the following $m \times m$ matrix given by

$$\begin{pmatrix} {}_xg_{11} & {}_xg_{12} & \cdots & {}_xg_{1m} \\ {}_xg_{21} & {}_xg_{22} & \cdots & {}_xg_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ {}_xg_{m1} & {}_xg_{m2} & \cdots & {}_xg_{mm} \end{pmatrix}.$$
Hence, one focuses on the covariant components of the α-metric tensor that is constructed with respect to ${}_m\mathcal{B}_x = \{x_{1d}, \ldots, x_{md}\}$. Check the following:
Example 7.
Let ${}_2\mathcal{B}_x = \{x_{1d}, x_{2d}\}$ be a basis of a linear manifold over R of dimension 2 embedded in $E^5$. One observes

$$x_{1d} = \begin{pmatrix} -52{,}000 \\ -32{,}000 \\ -12{,}000 \\ 8000 \\ 88{,}000 \end{pmatrix},$$

and

$$x_{2d} = \begin{pmatrix} -100{,}000 \\ -80{,}000 \\ -20{,}000 \\ 20{,}000 \\ 180{,}000 \end{pmatrix}.$$

The following $2 \times 2$ matrix

$$C = \begin{pmatrix} {}_xg_{11} & {}_xg_{12} \\ {}_xg_{21} & {}_xg_{22} \end{pmatrix} = \begin{pmatrix} 2{,}336{,}000{,}000 & 0 \\ 0 & 9{,}920{,}000{,}000 \end{pmatrix}$$
identifies the covariant components of the α-metric tensor that is constructed with respect to ${}_2\mathcal{B}_x = \{x_{1d}, x_{2d}\}$. Here, $\lambda_{(1)} = 2{,}336{,}000{,}000$ and $\lambda_{(2)} = 9{,}920{,}000{,}000$ are the two eigenvalues. The corresponding eigenvectors are $v_{(1)} = (2{,}336{,}000{,}000,\ 0)$ and $v_{(2)} = (0,\ 9{,}920{,}000{,}000)$. All eigenvectors associated with the same eigenvalue $\lambda_{(i)}$, $i = 1, 2$, together with the zero vector, give rise to a linear subspace over R of dimension 1. It is the eigenspace of C related to a specific eigenvalue $\lambda_{(i)}$, $i = 1, 2$, of C. The corresponding characteristic equation of C is given by

$$\left({}_xg_{kh} - \lambda_{(k)}\,\delta_{kh}\right)v_{(k)}^k = 0. \tag{108}$$
This equation can be written in the following form

$${}_xg_{kh}\,v_{(k)}^k = \lambda_{(k)}\,\delta_{kh}\,v_{(k)}^k,$$

where the two sides of it are expressed by two matrix products that give as their results two equal real numbers. The result of their subtraction is therefore equal to zero. Every eigenvector is normalized. Its components are used in the linear combination that defines the corresponding principal component. From

$$\lambda I = \begin{pmatrix} \lambda\,\delta_{11} & \lambda\,\delta_{12} \\ \lambda\,\delta_{21} & \lambda\,\delta_{22} \end{pmatrix} = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix},$$
where I is the identity matrix of order 2, it follows that one observes

$$\begin{pmatrix} 2{,}336{,}000{,}000 & 0 \\ 0 & 9{,}920{,}000{,}000 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = 2{,}336{,}000{,}000\begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

so it turns out that $2{,}336{,}000{,}000 = 2{,}336{,}000{,}000$, and

$$\begin{pmatrix} 2{,}336{,}000{,}000 & 0 \\ 0 & 9{,}920{,}000{,}000 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = 9{,}920{,}000{,}000\begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

so it turns out that $9{,}920{,}000{,}000 = 9{,}920{,}000{,}000$.
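These eigenvalue checks can be reproduced with any standard linear-algebra routine; a minimal sketch (names illustrative):

```python
import numpy as np

C = np.diag([2.336e9, 9.92e9])
eigvals, eigvecs = np.linalg.eigh(C)  # ascending: [2.336e9, 9.92e9]

# Each normalized eigenvector satisfies C v = lambda v, i.e. the
# characteristic equation (g_kh - lambda delta_kh) v^k = 0.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(C @ v, lam * v)
```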

8. From Frequency Distributions to Random Variables: The Two Sides of the Same Coin

A statistical variable denoted by X is an “a priori” mathematical variable, in the sense that it identifies a collection of potential values that an empirical quantity denoted by X can come to have. A frequency distribution is an “a posteriori” empirical function from a set containing similar items that characterize a statistical population of interest to a set containing actual values of the same statistical variable X. An empirical quantity X has actual values after information about the similar items of the statistical population under consideration has been obtained. A frequency distribution assigns to each element of the domain of the function exactly one element of the codomain of the same function. A random variable denoted by X is an “a priori” mathematical function. After considering distinct values that a statistical variable comes to have, it is possible to pass from a frequency distribution to a random variable in order to make coherent previsions of the same random variable. In general, a random variable X on a sample space S is a function from S into the set R of real numbers such that the pre-image of any interval of R is an event in S (Coletti et al., 2014; Sanfilippo et al., 2020; Berti & Rigo, 2021). Here, a random variable X on a sample space S is a function from S into the set R of real numbers such that the pre-image of $[a, a]$, where a is a real number, is an event in S. The image of X is the finite set of those numbers assigned by X to S. Hence, a discrete random variable X on S induces a function that assigns probabilities to the points identifying the image of X. The image of X contains the distinct values that the same statistical variable X treated by the above empirical function representing a frequency distribution comes to have. Each time series of length T is seen as a frequency distribution, so the image of X is given by the set $\{x_1, x_2, \ldots, x_N\}$, where it is possible to assume $x_1 < x_2 < \cdots < x_N$. The image of X therefore contains the numerical values of a time series of length T. The components of the following vector

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}$$

belonging to a Euclidean space of dimension N represent such values. Probability is not a primitive notion within this context; it is the degree of belief in the occurrence of a single event assigned by an individual at a given moment and with a specific set of information and knowledge. Such a set of information and knowledge is not unchangeable: it can change from moment to moment. Making a prevision of X means distributing, among all the possible alternatives that identify the image of X, one’s own expectations. At a first stage, it is possible to consider infinitely many nonparametric probability distributions over R related to X. As the numbers $x_1, x_2, \ldots, x_N$ assigned by X to S are on a real number line after making a reduction in dimension, making a prevision of X means that, at a second stage, it is possible to choose a point belonging to a closed convex set (Angelini, 2024b). In this way, Bayes’ theorem implicitly appears. A convergence process takes place. A closed convex set is a closed line segment obtained via a linear interpolation. New prevision points based on the range of a discrete set of known possible points expressed by $\{x_1, x_2, \ldots, x_N\}$ are obtained. Such prevision points are the elements of an uncountable set. This set contains all admissible previsions of X at a first stage. All the points that are contained between two distinct endpoints, given by $x_1$ and $x_N$, respectively, of a closed line segment can be chosen by a given individual as a prevision or mathematical expectation of X at a first stage. One writes
$$P(X) = x_i p^i = x_1 p_1 + x_2 p_2 + \cdots + x_N p_N,$$
where P stands for prevision or expected value. In other terms, since the N probabilities denoted by $p_1, p_2, \ldots, p_N$ are non-negative and sum to 1, only $N - 1$ of them can vary freely, in such a way that one obtains

$$x_1 \leq P(X) \leq x_N$$

at a first stage. It is always admissible to attribute an objective value to the reasons underlying the choice of $p_1, p_2, \ldots, p_N$. At a second stage, an element of the set of all admissible previsions is chosen by a given individual based on a different state of information and knowledge associated with him. The notion of the prevision of a random variable does not use particularly powerful mathematical methods. However, it is logically powerful. Within this context, the subjective opinion is a reasonable object of a rigorous study. Uncertainty about an event is of a personalistic nature, in the sense that it depends on an incomplete state of information and knowledge that a given individual detects, so uncertainty ceases when sure information is received by him. Until that time, it is possible to attribute a subjective probability to the event under consideration (Edwards et al., 1963; de Finetti, 1989). The same is true whenever a given number of mutually exclusive events numerically expressed by $x_1, x_2, \ldots, x_N$ is considered (Angelini & Maturo, 2022b). If the set of all admissible previsions of X at a first stage is denoted by A, then a σ-algebra on a real number line given by

$$\Sigma = \{A,\ A^c,\ \emptyset,\ \mathcal{U}\}$$

holds, where the complement of A is denoted by $A^c$, and a universal set is denoted by $\mathcal{U}$. If two or more time series of length T are studied, then a time series of length T corresponds to a random variable and vice versa. Statistical and random variables are the two sides of the same coin.
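As a small worked instance of the two stages just described (the uniform assignment of the probabilities below is one admissible subjective choice among infinitely many, not a prescription; the values of the image of X are borrowed from the examples of this paper):

```python
import numpy as np

x = np.array([20_000, 40_000, 60_000, 80_000, 160_000], dtype=float)  # image of X
p = np.full(5, 0.2)  # one admissible subjective probability assignment

prevision = float(np.dot(x, p))  # P(X) = x_1 p_1 + ... + x_N p_N
assert x.min() <= prevision <= x.max()  # x_1 <= P(X) <= x_N at a first stage
print(prevision)  # 72,000 under the uniform assignment
```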
The possible alternatives that identify the image of X are studied using the notion of vector contained in a given finite-dimensional linear space over R. The contravariant components of such a vector represent the possible alternatives that identify the image of X. Hence, an event is not necessarily a measurable set, but it can be a number coinciding with a component of a vector. By focusing on a sequence of real numbers that is contained in a finite-dimensional linear space over R, it is always possible to take an appropriate number of dimensions into account to outline linearly the study in progress. More specifically, it is always possible to take a higher number of dimensions into account, so one can focus on a longer sequence of real numbers. In fact, a sequence of real numbers that is contained in a finite-dimensional linear space over R is usually defined regardless of the exact indication of the dimension of the linear space over R to which it refers. Subsequently, a sequence of real numbers can always be put on a straight line, which is a linear space over R of dimension 1. In this way, a reduction in dimension takes place. Conversely, handling the image of X by means of the notion of set implies that, in general, if a given finite set, which is intrinsically a well-defined collection of elements, is subdivided into its constituent elements, then it is not possible to divide it further. In other words, it is not possible for its constituent elements to increase: the cardinality of a given set cannot change. Instead, the mathematical properties of a vector remain fixed even if its components increase before focusing on a reduction in dimension obtained whenever a linear space over R of dimension 1 is taken into account.

8.1. A Subdivision of the Exchangeability of Random Variables

Even marginal probabilities can be subdivided, and this leads to a subdivision of the exchangeability of random variables. The notion of exchangeability characterizes the Bayesian interpretation of probability (Diaconis, 1977; Diaconis & Freedman, 1980; Spizzichino, 2009). For example, let $X_{12 \ldots m} = \{X_1, X_2, \ldots, X_m\}$ be a multiple random variable consisting of m marginal random variables. Each marginal random variable has a probability distribution that remains fixed after bringing it out. It is therefore invariant. A subdivision of the exchangeability of random variables holds because it is possible to consider different pairs of marginal random variables. It is also possible to consider pairs of marginal random variables such that each element of the pair is the same marginal random variable. The number of permutations of 2 distinct marginal random variables is equal to $2!$. The number of permutations of 2 equal marginal random variables is given by $\frac{2!}{2!} = 1$. If two distinct marginal random variables having two probability distributions that remain fixed are jointly studied, then the masses of the corresponding joint probability distribution can be chosen in such a way that marginal masses are coherently subdivided. A square matrix of order m is therefore given by
$$\begin{pmatrix} P(X_1X_1) & P(X_1X_2) & \cdots & P(X_1X_m) \\ P(X_2X_1) & P(X_2X_2) & \cdots & P(X_2X_m) \\ \vdots & \vdots & \ddots & \vdots \\ P(X_mX_1) & P(X_mX_2) & \cdots & P(X_mX_m) \end{pmatrix}. \tag{113}$$
The above matrix is symmetric. One has $P(X_1X_2) = P(X_2X_1)$, …, $P(X_1X_m) = P(X_mX_1)$, …, $P(X_2X_m) = P(X_mX_2)$, so one observes an invariance property of the notion of prevision or expected value with respect to permutations of marginal random variables. From (113), it follows that it is possible to rank $X_1$, $X_2$, …, $X_m$. One of my forthcoming research papers is going to show that, at a first stage, the size of the difference between any two previsions of two random variables may not matter.
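A sketch of how the entries of (113) can be computed for m = 2, assuming joint masses of the kind shown in Table 2, namely a uniform choice that keeps the marginal masses fixed; all names and values are illustrative:

```python
import numpy as np

x1 = np.array([-45_000, -15_000, 15_000, 45_000], dtype=float)
x2 = np.array([-45_750, -14_750, 16_250, 44_250], dtype=float)

p = np.full(4, 0.25)           # fixed marginal masses (invariance property)
joint = np.full((4, 4), 1 / 16)  # chosen joint masses, cf. Table 2
assert np.allclose(joint.sum(axis=1), p) and np.allclose(joint.sum(axis=0), p)

P11 = np.sum(p * x1 * x1)                 # P(X1 X1): each value paired with itself
P12 = np.sum(joint * np.outer(x1, x2))    # P(X1 X2) under the chosen joint masses
P21 = np.sum(joint.T * np.outer(x2, x1))  # P(X2 X1)
assert np.isclose(P12, P21)  # the matrix (113) is symmetric
```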

8.2. Variances and Covariances

If one focuses on deviations from the corresponding mean, then it is possible to write
$$\begin{pmatrix} \operatorname{Var}(X_1) & \operatorname{Cov}(X_1, X_2) & \cdots & \operatorname{Cov}(X_1, X_m) \\ \operatorname{Cov}(X_2, X_1) & \operatorname{Var}(X_2) & \cdots & \operatorname{Cov}(X_2, X_m) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{Cov}(X_m, X_1) & \operatorname{Cov}(X_m, X_2) & \cdots & \operatorname{Var}(X_m) \end{pmatrix}, \tag{114}$$
and
$$\begin{pmatrix} 1 & \frac{\operatorname{Cov}(X_1, X_2)}{\sqrt{\operatorname{Var}(X_1)}\sqrt{\operatorname{Var}(X_2)}} & \cdots & \frac{\operatorname{Cov}(X_1, X_m)}{\sqrt{\operatorname{Var}(X_1)}\sqrt{\operatorname{Var}(X_m)}} \\ \frac{\operatorname{Cov}(X_2, X_1)}{\sqrt{\operatorname{Var}(X_2)}\sqrt{\operatorname{Var}(X_1)}} & 1 & \cdots & \frac{\operatorname{Cov}(X_2, X_m)}{\sqrt{\operatorname{Var}(X_2)}\sqrt{\operatorname{Var}(X_m)}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\operatorname{Cov}(X_m, X_1)}{\sqrt{\operatorname{Var}(X_m)}\sqrt{\operatorname{Var}(X_1)}} & \frac{\operatorname{Cov}(X_m, X_2)}{\sqrt{\operatorname{Var}(X_m)}\sqrt{\operatorname{Var}(X_2)}} & \cdots & 1 \end{pmatrix}. \tag{115}$$
Both matrices given by (114) and (115) are symmetric. Gini’s approach is based on a fundamental invariance property that characterizes each marginal distribution. According to this approach, the way of understanding the model is such that the weights of the corresponding joint distributions can be chosen based on a particular working hypothesis. If m marginal variables are supposed to be uncorrelated, then the weights of all joint distributions characterizing two distinct variables out of m are chosen in such a way that the corresponding covariances are equal to 0. Note the following:
Remark 9.
If X and Y are two distinct variables and each of them is characterized by N deviations from the arithmetic mean of the corresponding variable, then such deviations can be arranged into a table having N rows and N columns in such a way that it is possible to go from the smallest deviation to the largest one with respect to each variable. An index of concordance is expressed by
$$\frac{\operatorname{Cov}(X, Y)}{\operatorname{Cov}^{(1)}(X, Y)}. \tag{116}$$

It was put forward by Corrado Gini. Its possible values are contained in the closed interval $[0, 1]$. The covariance between X and Y, which is practically observed, is denoted by $\operatorname{Cov}(X, Y)$. The covariance between X and Y, which is theoretically obtained by placing the joint statistical weights only on the main diagonal of the $N \times N$ table under consideration, is denoted by $\operatorname{Cov}^{(1)}(X, Y)$. The marginal statistical weights remain fixed. The statistical model is given by the joint distribution that leads to determining $\operatorname{Cov}^{(1)}(X, Y)$, so two joint distributions are compared. The former is of a real nature: it is observed. The latter is of a theoretical nature: it is the joint distribution that leads to determining $\operatorname{Cov}^{(1)}(X, Y)$. Based on what is shown in this paper, (116) is equal to 1. This is because the joint statistical weights are not practically observed, but they are chosen based on a particular working hypothesis, which is made explicit by a given individual. In general, this implies that two or more marginal distributions can be compared based on a specific hypothesis using joint weights characterizing joint distributions that are not practically observed.
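A numerical sketch of the index (116) using the values of Tables 1 and 2: placing the joint weights on the main diagonal yields $\operatorname{Cov}(X, Y) = \operatorname{Cov}^{(1)}(X, Y)$, so the index equals 1, while the uniform choice with the same fixed marginals yields 0 instead.

```python
import numpy as np

# Deviations of X and Y arranged in increasing order (cf. Tables 1 and 2).
dx = np.array([-45_000, -15_000, 15_000, 45_000], dtype=float)
dy = np.array([-45_750, -14_750, 16_250, 44_250], dtype=float)

def cov(joint):
    """Covariance of the zero-mean pair under the given joint weights."""
    return np.sum(joint * np.outer(dx, dy))

cov_1 = cov(np.diag(np.full(4, 0.25)))   # Cov^(1): all mass on the main diagonal
cov_unif = cov(np.full((4, 4), 1 / 16))  # uniform choice, same fixed marginals

print(cov_1 / cov_1)     # 1.0
print(cov_unif / cov_1)  # 0.0
```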

8.3. Stationary Processes

In this paper, observed time series data are geometrically handled. Observed time series are practical realizations of stochastic processes. For example, if the translates of (28) and (29) are expressed by
$$\begin{aligned} 30{,}000 + 2000 &= 32{,}000, \\ 60{,}000 + 2000 &= 62{,}000, \\ 90{,}000 + 2000 &= 92{,}000, \\ 120{,}000 + 2000 &= 122{,}000, \\ 240{,}000 + 2000 &= 242{,}000, \end{aligned}$$
and
$$\begin{aligned} 50{,}000 + 3000 &= 53{,}000, \\ 70{,}000 + 3000 &= 73{,}000, \\ 130{,}000 + 3000 &= 133{,}000, \\ 170{,}000 + 3000 &= 173{,}000, \\ 330{,}000 + 3000 &= 333{,}000, \end{aligned}$$
then the deviations from the corresponding arithmetic means are the same. The arithmetic means of the deviations are all equal to 0. After considering actual translations, the arithmetic means of the corresponding deviations are all equal to 0. They do not change over time. Even variances and standard deviations do not change because all deviations remain unchanged (Eberlein, 1986). In the international literature, a class or collection of non-stationary models contains models such as, for example, ARIMA (Ho & Xie, 1998). Unlike non-stationary models, strongly stationary processes are singled out here. They are stochastic processes having statistical properties that do not change over time (Diaconis & Fill, 1990; Matthews, 1992; Liu & Lin, 2009). The joint probability distributions of the processes remain the same when shifted in time. The role played by the Fréchet class is therefore essential. This is because a fundamental invariance property related to marginal distributions is made explicit. Here, what is described by the probabilistic law with which a given phenomenon evolves over time is a mathematical model based on the notion of prevision of a random variable X, whose possible values are expressed by observed time series data denoted by $x_1, x_2, \ldots, x_N$. It is admissible that the state of information and knowledge associated with a given individual leads him to determine $(x_1 + k), (x_2 + k), \ldots, (x_N + k)$ as possible values for X, where $k \in \mathbb{R}$. Thus, the set of all admissible previsions of X at a first stage is an uncountable set coinciding with a closed line segment, whose endpoints are $(x_1 + k)$ and $(x_N + k)$, respectively. After choosing a value as a prevision of X at a second stage using Bayes’ theorem, an ordered list of $N + 1$ real numbers belonging to a linear space over R that has a higher dimension with respect to the previous one, whose dimension is equal to N, takes place.
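A quick check of the translation invariance described above, using the first translated series (a sketch; names illustrative):

```python
import numpy as np

series = np.array([30_000, 60_000, 90_000, 120_000, 240_000], dtype=float)
translate = series + 2_000  # the translate shown above

dev = series - series.mean()
dev_t = translate - translate.mean()

assert np.allclose(dev, dev_t)             # deviations are unchanged
assert np.isclose(dev.mean(), 0.0)         # their arithmetic mean is 0
assert np.isclose(dev.var(), dev_t.var())  # variance does not change
```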

9. Conclusions

In this paper, time series of length T are seen as frequency distributions. They are studied inside linear systems. According to Gini’s approach followed in this paper, the statistical model with which observed frequency distributions are compared is a frequency distribution. Such a distribution is not of a theoretical nature, so it is not a functional scheme in the continuum such as, for example, the normal distribution, but it plays a practical role that must be specified in order to operate the comparison between observed frequency distributions. Thus, marginal frequency distributions based on the notion of proportionality are taken into consideration together with joint frequency distributions. The latter are elements of the Fréchet class. Such a class shows that the origin of the variability of a joint distribution is not standardized, but it depends on the knowledge hypothesis that can be made by a given individual, and that is the basis characterizing the phenomenon that is statistically studied. This research work focuses on multiple statistical variables by means of which it is possible to study interdependence relationships between marginal statistical variables that are the components of the multiple statistical variables under consideration. Just as it is illusory to think of an infinite number of alternatives whenever a finite number of outcomes of an experiment is practically observed, it is equally illusory to consider the weights of a joint distribution as elements that are fixed once and for all when an invariance property related to observed marginal distributions is made explicit. This is what Gini’s approach is about. Frequency distributions are practical realizations of nonparametric probability distributions over R . Hence, it is possible to pass from frequency distributions to random variables. It follows that a subdivision of the exchangeability of random variables can be realized. A subdivision of the exchangeability of variables of a statistical nature is first shown. The mechanism that generates the numerical values of a time series of length T is made explicit using linear combinations of vectors. Observed time series data are treated by means of deviations. Such deviations are the contravariant components of vectors that constitute a basis of a linear subspace of a Euclidean space. These basis vectors generate all elements of a linear subspace via linear combinations. Interdependence relationships between observed time series data can be studied via a tensor. In this paper, observed data are analyzed within a mathematical structure that also includes unobserved data. Unobserved data are treated under a specific knowledge hypothesis that is always made explicit by an individual. The mathematical properties of the closed structure under consideration are used to examine both types of data. It is possible to make previsions about time series in an analogous way to previsions about random variables. The latter can be made using a Bayesian approach based on an operational notion of probability that is not therefore seen as a primitive concept, unlike, for example, point and line in geometry. As points and lines are primitive concepts in classical Euclidean geometry, they are axiomatically handled. According to the approach followed in this research work, the logical aspects of the concepts must not be merged with the empirical ones, as unfortunately it seems to be now usual in the international literature, but they have to be kept distinct. 
The notion of the prevision of a random variable is based on such a distinction. Even the stationary processes that are singled out in this research work are in accordance with such a distinction. It follows that statistical issues treated by Corrado Gini and his followers can be merged with the probabilistic ones treated by Bruno de Finetti and his Bayesian followers. These issues are the two sides of the same coin. A reinterpretation of principal component analysis that is based on the notion of proportionality is shown. The characteristic polynomial of a specific square matrix, the characteristic equation of the same square matrix, eigenvalues, eigenvectors, and eigenspaces referring to the same specific square matrix are studied through a vector representation of frequency distributions having a heuristic nature. Inner products coinciding with α-products also identify α-distances between two marginal distributions. Particular proportionality equations are studied in such a way that a vector obtained as a difference of two vectors expresses a distance. An α-orthogonal direction of this distance is treated via principal components. Having deepened the logical bases of the techniques used in this paper, it is possible to think of the algorithms that can be associated with such techniques as parts of some future research papers.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The author confirms that all relevant data are included in the article.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Proof of the Theorem of α-Orthogonality

Let $m = 2$. Thus, it is possible to focus on

$$\langle \hat{u}_{id}, \hat{u}_{jd} \rangle_\alpha = 0 \tag{A1}$$

only, where $i = 1$ and $j = 2$, so $i < j$. There is a bilinear relationship between the α-metric tensor defined with respect to ${}_m\mathcal{B}_{\hat{u}}$ and the one defined with respect to ${}_m\mathcal{B}_x$. One writes

$${}_{\hat{u}}g_{ij} = \langle \hat{u}_{id}, \hat{u}_{jd} \rangle_\alpha, \tag{A2}$$

and

$${}_{\hat{u}}g_{ij} = \langle \hat{u}_{id}, \hat{u}_{jd} \rangle_\alpha = u_i^k\,u_j^h\,{}_xg_{kh}, \tag{A3}$$

so

$${}_{\hat{u}}g_{ij} = u_i^k\,u_j^h\,{}_xg_{kh} = 0 \tag{A4}$$
holds. The set of all eigenvectors of a square matrix of order m corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of the same square matrix of order m associated with that eigenvalue. Let $N^{(k)}$ be the characteristic space associated with $\lambda_{(k)}$. By hypothesis, all the eigenvalues are distinct, so the corresponding characteristic spaces are α-orthogonal in pairs. It follows that ${}_xM_{(0)}^m$ is the direct sum of the characteristic spaces, so every element of ${}_xM_{(0)}^m$ can uniquely be expressed as a sum of vectors, each belonging to a specific characteristic space. In general, one writes
$${}_xM_{(0)}^m = N^{(1)} \oplus N^{(2)} \oplus \cdots \oplus N^{(m)}. \tag{A5}$$
In particular, if m = 2 , then one writes
$${}_xM_{(0)}^2 = N^{(1)} \oplus N^{(2)}. \tag{A6}$$
Let
$$Z_i : z_{id} = \mu_i\,\hat{u}_{id}, \quad \mu_i \in \mathbb{R}, \tag{A7}$$
be a one-dimensional linear manifold over R. Let
$$Z_i^{\perp} : \{z_{jd}\,;\ j \neq i\} \tag{A8}$$
be the complementary linear manifold over R, whose dimension is equal to $m - 1$. If $m = 2$, then its dimension is equal to 1. It is possible to write
$${}_xM_{(0)}^m = Z_i \oplus Z_i^{\perp}, \tag{A9}$$

where $Z_i$ and $Z_i^{\perp}$ are α-orthogonal. In particular, one can observe $\hat{u}_{id} \in Z_i$ and $\mu_j\,\hat{u}_{jd} \in Z_i^{\perp}$, with $j \neq i$, so (A4) becomes

$$u_i^k\,\mu_j\,u_j^h\,{}_xg_{kh} = 0, \tag{A10}$$
where $j \neq i$. The set given by $\{u_i^k\}$ identifies the contravariant components of $\hat{u}_{id} \in Z_i$ with respect to ${}_m\mathcal{B}_x$. The set given by $\{\mu_j\,u_j^h\}$ identifies the contravariant components of $\mu_j\,\hat{u}_{jd} \in Z_i^{\perp}$ with respect to ${}_m\mathcal{B}_x$, where $j \neq i$. The set given by $\{u_i^k\,{}_xg_{kh}\}$ identifies the covariant components of $\hat{u}_{id} \in Z_i$ with respect to ${}_m\mathcal{B}_x$. Since (A7) holds, the covariant components of $z_{id}$ are also given by $z_{ih} = \mu_i\,u_{ih}$. It follows that the vectors having covariant components given by $u_i^k\,{}_xg_{kh}$ and $\mu_i\,u_{ih}$ belong to the same eigenspace denoted by $Z_i$, so there is one and only one real number denoted by $\tau_i \in \mathbb{R}$ such that one writes
$$u_i^k\,{}_xg_{kh} = \tau_i\,\mu_i\,u_{ih}. \tag{A11}$$
From (A11), the following characteristic equation
$$\left({}_xg_{kh} - \tau_i\,\mu_i\,\delta_{kh}\right)u_i^k = 0 \tag{A12}$$
can be written. If one compares (108) with (A12), then one observes that the eigenvalues and eigenvectors associated with (108) and (A12) are the same. From (A1), it follows that $\hat{u}_{id}$ and $\hat{u}_{jd}$, where $i = 1$ and $j = 2$, so $i < j$, are principal components. By definition, each principal component is a linear combination of m basis vectors. The same conclusion can be obtained whenever it turns out that $m > 2$.
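The theorem can be verified numerically on the running example, where $\hat{u}_{1d} = 3x_{1d}$ and $\hat{u}_{2d} = 3x_{2d}$; a sketch under the diagonal-metric assumption used throughout:

```python
import numpy as np

g_x = np.diag([2.336e9, 9.92e9])  # covariant alpha-metric w.r.t. 2B_x

# Rows: contravariant components of u_hat_1d = 3*x_1d and u_hat_2d = 3*x_2d.
U = np.array([[3.0, 0.0],
              [0.0, 3.0]])

# g^hat_ij = u_i^k u_j^h g_kh: alpha-products of the u_hat vectors.
g_u = U @ g_x @ U.T
assert np.isclose(g_u[0, 1], 0.0)  # alpha-orthogonal in pairs => principal components
```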

References

  1. Angelini, P. (2024a). Extended least squares making evident nonlinear relationships between variables: Portfolios of financial assets. Journal of Risk and Financial Management, 17(8), 336.
  2. Angelini, P. (2024b). Financial decisions based on zero-sum games: New conceptual and mathematical outcomes. International Journal of Financial Studies, 12, 56.
  3. Angelini, P. (2024c). Invariance of the mathematical expectation of a random quantity and its consequences. Risks, 12, 14.
  4. Angelini, P., & Maturo, F. (2022a). Jensen’s inequality connected with a double random good. Mathematical Methods of Statistics, 31(2), 74–90.
  5. Angelini, P., & Maturo, F. (2022b). The price of risk based on multilinear measures. International Review of Economics and Finance, 81, 39–57.
  6. Angelini, P., & Maturo, F. (2023). Tensors associated with mean quadratic differences explaining the riskiness of portfolios of financial assets. Journal of Risk and Financial Management, 16, 369.
  7. Berti, P., & Rigo, P. (2021). Finitely additive mixtures of probability measures. Journal of Mathematical Analysis and Applications, 500(1), 125114.
  8. Bettuzzi, G. (1986). On the definition of quadratic indices of correlation between standardized deviations. Statistica, 46(3), 325–341.
  9. Coletti, G., Petturiti, D., & Vantaggi, B. (2014). Possibilistic and probabilistic likelihood functions and their extensions: Common features and specific characteristics. Fuzzy Sets and Systems, 250, 25–51.
  10. de Finetti, B. (1989). Probabilism: A critical essay on the theory of probability and on the value of science. Erkenntnis, 31(2–3), 169–223.
  11. De Lucia, L. (1965). Variabilità superficiale e dissomiglianza tra distribuzioni semplici. Metron, XXIV(1–4). Available online: https://hdl.handle.net/2027/mdp.39015079393073 (accessed on 1 May 2025).
  12. Denton, P. B., Parke, S. J., Tao, T., & Zhang, X. (2022). Eigenvectors from eigenvalues: A survey of a basic identity in linear algebra. Bulletin of the American Mathematical Society, 59(1), 31–58.
  13. Diaconis, P. (1977). Finite forms of de Finetti’s theorem on exchangeability. Synthese, 36(2), 271–281.
  14. Diaconis, P., & Fill, J. A. (1990). Strong stationary times via a new form of duality. The Annals of Probability, 18(4), 1483–1522.
  15. Diaconis, P., & Freedman, D. (1980). Finite exchangeable sequences. The Annals of Probability, 8(4), 745–764.
  16. Eberlein, E. (1986). On strong invariance principles under dependence assumptions. The Annals of Probability, 14(1), 260–270.
  17. Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70(3), 193–242.
  18. Forcina, A. (1982). Gini’s contributions to the theory of inference. International Statistical Review/Revue Internationale de Statistique, 50(1), 65–70.
  19. Frank, E. (1946). On the zeros of polynomials with complex coefficients. Bulletin of the American Mathematical Society, 52(2), 144–157.
  20. Gili, A., & Bettuzzi, G. (1986). About concordance square indexes among deviations: Correlation indexes. Statistica, 46(1), 17–46.
  21. Gini, C. (1921). Measurement of inequality of incomes. The Economic Journal, 31(121), 124–126.
  22. Giorgi, G. M. (2005). Gini’s scientific work: An evergreen. Metron-International Journal of Statistics, 63(3), 299–315.
  23. Granger, C. W. J. (2004). Time series analysis, cointegration, and applications. American Economic Review, 94(3), 421–425.
  24. Ho, S. L., & Xie, M. (1998). The use of ARIMA models for reliability forecasting and analysis. Computers & Industrial Engineering, 35(1–2), 213–216.
  25. Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6), 417–441.
  26. Hotelling, H. (1936). Relations between two sets of variates. Biometrika, 28(3–4), 321–377.
  27. Jolliffe, I. T., & Cadima, J. (2016). Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065), 20150202.
  28. Kendrick, J. W., & Jaycox, C. M. (1965). The concept and estimation of gross state product. Southern Economic Journal, 32(2), 153–168.
  29. Keogh, E., & Lin, J. (2005). Clustering of time-series subsequences is meaningless: Implications for previous and future research. Knowledge and Information Systems, 8(2), 154–177.
  30. Landon, B., Lopatto, P., & Marcinek, J. (2020). Comparison theorem for some extremal eigenvalue statistics. The Annals of Probability, 48(6), 2894–2919.
  31. Langel, M., & Tillé, Y. (2011). Corrado Gini, a pioneer in balanced sampling and inequality theory. Metron-International Journal of Statistics, 69(1), 45–65.
  32. Liu, W., & Lin, Z. (2009). Strong approximation for a class of stationary processes. Stochastic Processes and Their Applications, 119(1), 249–280.
  33. Matthews, P. (1992). Strong stationary times and eigenvalues. Journal of Applied Probability, 29(1), 228–233.
  34. Oancea, B., & Simionescu, M. (2024). Gross domestic product forecasting: Harnessing machine learning for accurate economic predictions in a univariate setting. Electronics, 13(24), 4918.
  35. Ram, R. (1986). Government size and economic growth: A new framework and some evidence from cross-section and time-series data. The American Economic Review, 76(1), 191–203.
  36. Sanfilippo, G., Gilio, A., Over, D. E., & Pfeifer, N. (2020). Probabilities of conditionals and previsions of iterated conditionals. International Journal of Approximate Reasoning, 121, 150–173.
  37. Spizzichino, F. (2009). A concept of duality for multivariate exchangeable survival models. Fuzzy Sets and Systems, 160(3), 325–333.
  38. Tao, T., & Vu, V. (2011). Random matrices: Universality of local eigenvalue statistics. Acta Mathematica, 206(1), 127–204.
  39. Testik, M. C., & Sarikulak, O. (2021). Change points of real GDP per capita time series corresponding to the periods of industrial revolutions. Technological Forecasting and Social Change, 170, 120911.
  40. Tipping, M. E., & Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society Series B: Statistical Methodology, 61(3), 611–622.
Table 1. How the association frequencies are fixed.

| Vector 1 \ Vector 2 | −45,000 | −15,000 | 15,000 | 45,000 | Sum |
|---|---|---|---|---|---|
| −45,000 | 1/4 | 0 | 0 | 0 | 1/4 |
| −15,000 | 0 | 1/4 | 0 | 0 | 1/4 |
| 15,000 | 0 | 0 | 1/4 | 0 | 1/4 |
| 45,000 | 0 | 0 | 0 | 1/4 | 1/4 |
| Sum | 1/4 | 1/4 | 1/4 | 1/4 | 1 |

Table 2. How the association frequencies are chosen.

| Vector 1 \ Vector 2 | −45,750 | −14,750 | 16,250 | 44,250 | Sum |
|---|---|---|---|---|---|
| −45,000 | 1/16 | 1/16 | 1/16 | 1/16 | 1/4 |
| −15,000 | 1/16 | 1/16 | 1/16 | 1/16 | 1/4 |
| 15,000 | 1/16 | 1/16 | 1/16 | 1/16 | 1/4 |
| 45,000 | 1/16 | 1/16 | 1/16 | 1/16 | 1/4 |
| Sum | 1/4 | 1/4 | 1/4 | 1/4 | 1 |