Polynomial Representations of High-Dimensional Observations of Random Processes

The paper investigates the problem of performing correlation analysis when the number of observations is very large. In such a case, it is often necessary to combine the random observations to achieve dimensionality reduction of the problem. A novel class of statistical measures is obtained by approximating the Taylor expansion of a general multivariate scalar function by a univariate polynomial in the variable given as a simple sum of the original random variables. The mean value of the polynomial is then a weighted sum of statistical central sum-moments with the weights being application dependent. Computing the sum-moments is computationally efficient and amenable to mathematical analysis, provided that the distribution of the sum of random variables can be obtained. Among several auxiliary results also obtained, the first order sum-moments corresponding to sample means are used to reduce the numerical complexity of linear regression by partitioning the data into disjoint subsets. Illustrative examples are provided assuming the first and the second order Markov processes.


Introduction
The interest in developing and improving statistical methods and models is driven by the ever increasing volume and variety of data. Making sense of data often requires uncovering the existing patterns and relationships in the data. One of the most commonly used universal tools for data exploration is the evaluation of cross-correlation among different data sets and of auto-correlation of a data sequence with itself. For example, correlation values can be utilized to carry out sensitivity analysis, to forecast future values, to visualize complex relationships among subsystems, and to evaluate other important spatio-temporal statistical properties. However, it has been well established that correlation does not imply causation.
The correlations measure linear dependencies between pairs of random variables, which is implicitly exploited in linear regression. The correlation values rely on estimated or empirical statistical moments such as the sample mean and sample variance. If the mean values are removed from the data, the correlations are referred to as covariances. The pairwise correlations can be represented as a fully connected graph with edges parameterized by time shifts, and possibly by amplitude adjustments between the corresponding data sequences. However, the graph may become excessively complex when the number of random variables considered is large; for example, when analyzing multiple long sequences of random observations. The problem of extending the notion of correlations and covariances to more than two random variables has been considered previously in the literature. In particular, the multirelation has been defined as a measure of linearity for multiple random variables in [1]. This measure is based on a geometric analysis of linear regression. Under the assumption of a multivariate Student-t distribution, the statistical significance of the multirelation coefficient is defined in terms of eigenvalues of the correlation matrix in [2]. A univariate correlation measure for multiple random variables is defined in [3] to be the sum of elements on the main diagonal of the covariance matrix. Linear regression is again utilized in [4] to define the multiple correlation coefficient. It is derived from the coefficient of determination of linear regression as the proportion of the variance in the dependent variable which is predictable from the independent variables.
The distance correlation measure between two random vectors based on the hypothesis testing of independence of random variables is proposed in [5]. Different operations on random variables including ordering, weighting and a non-linear transformation are assumed in [6] to define a family of Minkowski distance measures for random vectors. A maximal correlation measure for random vectors is introduced in [7]. It generalizes the concept of maximal information coefficient while non-linear transformations are also used to allow assessment of non-linear correlations. Similarly to [3], the sample cross-correlation between multiple realizations of two random vectors is shown in [8] to be proportional to the sum of diagonal elements of the product of the corresponding correlation matrices. The reference [9] investigates two-dimensional general and central moments for random matrices which are invariant to similarity transformations. Two new distance measures for random vectors are defined in [10] assuming the joint characteristic function. These measures can be also used to test for statistical independence of the random vectors. The most recent paper [11] derives a linear correlation measure for multiple random variables using determinants of the correlation sub-matrices.
This brief literature survey indicates that there are still no commonly accepted correlation measures for multiple random variables. The measures considered in the literature are either rigorously derived but mathematically rather complicated, or they are modifications of existing, simpler, but well understood measures. In this paper, it is shown that by constraining the complexity of the multivariate Taylor series to reduce the number of its parameters or degrees-of-freedom, the Taylor series can be rewritten as a finite degree univariate polynomial. The independent variable of the polynomial is equal to a simple sum of the random variables considered. The polynomial defines a many-to-one transformation of multiple random variables to another scalar random variable. The polynomial coefficients are real-valued constants, and they are application dependent. The mean value of the polynomial represents a broad class of polynomial measures which can be used for any number of random variables. The mean value of each polynomial element corresponds to a general or central moment of the sum of random variables. Therefore, these moments are referred to here as sum-moments. In case of multiple random vectors, similarly to computing the correlation or covariance matrix by first concatenating the vectors into one long vector, the sum-moments can be readily defined and computed for such a concatenated random vector. The main advantages of assuming sum-moments to study statistical properties of multiple random variables or multiple random vectors are the clarity in understanding their statistical significance, the mathematical simplicity of their definitions, and, as long as the distribution of the sum of random variables can be found, the availability of closed-form expressions for the sum-moments.
Before introducing polynomial representations of random vectors in Section 4 together with central and general sum-moments, a number of auxiliary results are presented in Section 2 and Section 3. In particular, Section 2 summarizes the key results and concepts from the literature concerning stationary random processes, their parameter estimation via linear regression and method of moments, and how to generate the 1st and the 2nd order Markov processes is also discussed. Section 3 extends the results from Section 2 by deriving additional results which are used in Section 4 or in Section 5 such as a low complexity approximation of linear regression and a procedure to generate multiple Gaussian processes with defined auto-correlation and cross-correlation. The main results of the paper are obtained in Section 4 including defining a class of polynomial statistical measures and sum-moments for multiple random variables and random processes. Other related concepts involving sums of random variables are also reviewed. Section 5 provides several examples to illustrate and evaluate the results obtained in the paper. The paper is concluded in Section 6.

Background
This section reviews key concepts and results from the literature which are used to develop the new results in the subsequent sections. Specifically, the following concepts are briefly summarized: stationarity of random processes, estimation of general and central moments and of correlation and covariance using the method of moments, definitions of the cosine similarity and the Minkowski distance, parameter estimation via linear regression, generation of the 1st and the 2nd order Markov processes, and selected properties of polynomials and multivariate functions. Moreover, note that the more straightforward proofs of some lemmas are only indicated, and not fully and rigorously elaborated.

Random processes
Consider a real-valued one-dimensional random process x(t) ∈ R over a continuous time t ∈ R. The process is observed at N discrete time instances t_1 < t_2 < … < t_N. The random variables X = {X_1, …, X_N} ∈ R^N, X_i = x(t_i), are completely statistically described by their joint density function f_x(x_1, …, x_N). The process x(t) is further assumed to be K-th order stationary [12].
Lemma 1. The K-th order stationary process is also (K − k)-th order stationary, k = 1, 2, …, K − 1, for any subset of (K − k) of the random variables.
Proof. The unwanted random variables can be integrated out from the joint density.
Unless otherwise stated, all observations of random processes are assumed to be stationary. The expectation, E[X] = ∫ x f_x(x) dx, of a random variable X is a measure of its mean value. A linear correlation between two random variables x(t_1) and x(t_2) is defined as [12], R_x(t_1, t_2) = E[x(t_1) x(t_2)]. The (auto-) covariance measures a linear dependency between two zero-mean random variables, i.e., C_x(t_1, t_2) = E[(x(t_1) − E[x(t_1)]) (x(t_2) − E[x(t_2)])]. It can be shown that the maximum of C_x(τ), τ = t_2 − t_1, occurs for τ = 0, corresponding to the variance of a stationary process x(t), i.e., C_x(0) = var[x(t)]. The covariance C_x(τ) can be normalized, so that −1 ≤ C_x(τ)/C_x(0) ≤ 1. Furthermore, for real-valued regular processes, the covariance has an even symmetry, i.e., C_x(τ) = C_x(−τ), [12].
For N > 2, it is convenient to define a random vector, X = [X_1, …, X_N]^T, and its mean as, X̄ = E[X], where (·)^T denotes the vector or matrix transpose. The corresponding covariance matrix, C_x = E[(X − X̄)(X − X̄)^T], has as its elements the pairwise covariances, [C_x]_ij = C_x(t_j − t_i), i, j = 1, 2, …, N.
In addition to the first order (mean value) and the second order (covariance) statistical moments, higher order statistics are given by the general and the central moments, respectively, defined as [13],

g_m(X) = E[|X|^m],  µ_m(X) = E[|X − E[X]|^m],   (1)

where | · | denotes the absolute value, and note, g_1(X) = E[X]. The positive integer-valued moments (1) facilitate mathematically tractable integration, and prevent producing complex numbers if the argument is negative. Note also that the absolute value in (1) is necessary if X is complex valued, or if m is odd, in order to make the moments real-valued and convex. The central moment can be normalized by the variance as, µ̃_m(X) = µ_m(X)/µ_2^{m/2}(X). The cosine similarity between two equal-length vectors, X_1 = [X_11, …, X_1N]^T and X_2 = [X_21, …, X_2N]^T, is defined as [14], S(X_1, X_2) = X_1^T X_2/(‖X_1‖_2 ‖X_2‖_2). The Minkowski distance between two equal-length vectors X_1 and X_2 is defined as the l_m-norm [6], i.e., D_m(X_1, X_2) = ‖X_1 − X_2‖_m = (∑_{i=1}^{N} |X_1i − X_2i|^m)^{1/m}.

Estimation methods
Statistical moments can be empirically estimated from measured data using sample moments. Such an inference strategy is referred to as the method of moments [15]. In particular, under the ergodicity assumption, the first moment of a random variable X is estimated as a sample mean from the measurements X_i = x_i, i.e., X̂ = (1/N) ∑_{i=1}^{N} x_i. It is straightforward to show that the sample mean estimator is unbiased and consistent [15]. More generally, the expected value of a transformed random variable h(X) is calculated as, Ê[h(X)] = (1/(N − d)) ∑_{i=1}^{N} h(x_i), where d is the number of degrees of freedom used in the transformation h, i.e., the number of other parameters which must be estimated. For example, the variance of X is estimated as, v̂ar[X] = (1/(N − 1)) ∑_{i=1}^{N} (x_i − X̂)^2, where X̂ is the estimated mean value of X.
Assuming random sequences {X_1i}_{1:N_1} and {X_2i}_{1:N_2}, their auto-covariance and cross-covariance, respectively, are estimated as [15],

Ĉ_x1(k) = (1/(N_1 − k)) ∑_{i=1}^{N_1−k} (x_1i − X̂_1)(x_1,i+k − X̂_1),
Ĉ_x1x2(k) = (1/(N − k)) ∑_{i=1}^{N−k} (x_1i − X̂_1)(x_2,i+k − X̂_2), N = min(N_1, N_2).

The condition k ≪ N is necessary, since the accuracy of the estimates decreases with k.
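As an illustrative sketch (not part of the paper), the sample auto-covariance estimator at a lag k can be implemented directly from its definition; the function name and the white-noise test signal are hypothetical choices:

```python
import numpy as np

def sample_autocovariance(x, k):
    """Estimate C_x(k) from one realization, assuming ergodicity and k << N."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xbar = x.mean()
    # Average the lagged products of the mean-removed samples
    return np.sum((x[:N - k] - xbar) * (x[k:] - xbar)) / (N - k)

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)       # white Gaussian noise, unit variance
c0 = sample_autocovariance(x, 0)   # estimates the variance, close to 1
c5 = sample_autocovariance(x, 5)   # white noise: close to 0 at non-zero lags
```

For a correlated process, the same estimator recovers the lag-dependent covariance, with the accuracy degrading as k approaches N.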
The parameters of a random process can be estimated by fitting a suitable data model to the measurements. Denote such a data model as, X_i = M_i(P), i = 1, 2, …, N. Assuming the least-squares (LS) criterion, the vector of parameters, P = [P_1, …, P_D]^T ⊆ R^D, is estimated as,

P̂ = argmin_P ∑_{i=1}^{N} (X_i − M_i(P))².

For continuous parameters, the minimum is obtained using the derivatives, i.e., let (d/dP) M_i(P) = Ṁ_i(P), and,

∑_{i=1}^{N} (X_i − M_i(P̂)) Ṁ_i(P̂) = 0.

For a linear data model, M_i(P) = w_i^T P, where w_i = [w_1i, …, w_Di]^T are known coefficients, the LS estimate can be obtained in the closed form, i.e.,

P̂ = (∑_{i=1}^{N} w_i w_i^T)^{−1} ∑_{i=1}^{N} w_i X_i,   (2)

where (·)^{−1} denotes the matrix inverse.
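The closed-form linear LS estimate can be sketched numerically as follows; the data model, the true parameter values, and the noise level are arbitrary illustrative assumptions:

```python
import numpy as np

# Hypothetical linear data model X_i = w_i^T P + noise, with known coefficients w_i
rng = np.random.default_rng(1)
N, D = 200, 3
W = rng.normal(size=(N, D))                 # rows are the coefficient vectors w_i^T
P_true = np.array([1.0, -2.0, 0.5])
X = W @ P_true + 0.01 * rng.normal(size=N)  # nearly noiseless measurements

# Closed-form LS estimate: solve (W^T W) P_hat = W^T X instead of inverting explicitly
P_hat = np.linalg.solve(W.T @ W, W.T @ X)
```

Solving the normal equations with `np.linalg.solve` is numerically preferable to forming the matrix inverse explicitly.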

Generating random processes
The task is to generate a discrete time stationary random process with a given probability density and a given auto-covariance. The usual strategy is to generate a correlated Gaussian process followed by a non-linear memoryless transformation. For instance, the autoregressive (AR) process described by the second order difference equation with constants 0 < a < 1 and 0 < b [12], i.e.,

x(n) = 2a x(n − 1) − a² x(n − 2) + b u(n),

generates the 2nd order Markov process from a zero-mean white (i.e., uncorrelated) process u(n). The resulting process x(n) has the auto-covariance,

C_x(k) = σ² a^{|k|} (1 + |k| (1 − a²)/(1 + a²)),

where the variance σ² is set by the coefficient b. On the other hand, the AR process,

x(n) = a x(n − 1) + b u(n),

generates the 1st order Markov process with the auto-covariance, C_x(k) = σ² a^{|k|}, where σ² = b² var[u(n)]/(1 − a²). Lemma 3.
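A minimal sketch of generating the 1st order Markov (AR(1)) process and checking its autocovariance numerically; the pole a, the target unit variance, and the sequence length are illustrative assumptions, and the scaling b = sqrt(σ²(1 − a²)) assumes a unit-variance white input:

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma2 = 0.9, 1.0               # AR(1) pole and target stationary variance
b = np.sqrt(sigma2 * (1 - a**2))   # scales the unit-variance white input
n = 200_000
u = rng.normal(size=n)

x = np.empty(n)
x[0] = rng.normal() * np.sqrt(sigma2)   # start in the stationary distribution
for i in range(1, n):
    x[i] = a * x[i - 1] + b * u[i]      # 1st order Markov recursion

def acov(x, k):
    """Empirical autocovariance at lag k; theory predicts sigma2 * a**k."""
    xc = x - x.mean()
    return np.mean(xc[:len(x) - k] * xc[k:])

c1 = acov(x, 1)   # theory: 0.9
c3 = acov(x, 3)   # theory: 0.9**3 = 0.729
```

The empirical lags decay geometrically as a^{|k|}, consistent with the stated autocovariance of the 1st order Markov process.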
[16] The stationary random process x(k) with auto-covariance C_x(k) is transformed by a linear time-invariant system with real-valued impulse response h(k) into another stationary random process y(k) = ∑_l h(l) x(k − l), having the auto-covariance, C_y(k) = C_x(k) ∗ h(k) ∗ h(−k), where ∗ denotes the discrete-time convolution.
Proof. By definition, the output covariance, C_y(k) = E[y(n) y(n + k)], is expanded, and the expectation is moved inside the double sum over the impulse response.
Lemma 4. For a K-th order stationary process x(t), the expectation E[h(x(t_1 + t_0), …, x(t_N + t_0))] does not depend on the time shift t_0.
Proof. For any multivariate function h(x) ∈ R and any t_0 ∈ R, the expectation is invariant to the shift t_0, assuming Definition 1, and provided that the dimension N is at most equal to the stationarity order K.
For shorter sequences, the linear transformation, X = T U, can be used to generate a normally distributed vector X ∈ R^N having the covariance matrix, C_x = T T^T, from an uncorrelated Gaussian vector U ∈ R^N. The mean, X̄ = T E[U], is zero for a zero-mean U. For longer sequences, equivalently, a linear time-invariant filter can be used as indicated by Lemma 3.
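As a sketch of this construction, a Cholesky factor is one valid choice of T satisfying T T^T = C_x; the Markov-type target covariance and the sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
# Target covariance of a stationary 1st order Markov process, [C]_ij = a^|i-j|
a = 0.8
C = a ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

T = np.linalg.cholesky(C)   # one valid factor with T @ T.T == C

# Columns of `samples` are independent realizations of the correlated vector X = T U
samples = T @ rng.normal(size=(N, 500_000))
C_hat = samples @ samples.T / samples.shape[1]   # empirical covariance
```

The empirical covariance of the transformed samples matches the target C up to Monte Carlo error.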
Consequently, convex polynomials can be generated as follows.
Lemma 6. Let Q ∈ R^{m×m} be a positive semi-definite matrix, and assume a polynomial, p̃(x) = x̃^T Q x̃ ≥ 0, where x̃ = [x, x², …, x^m]^T. Then, for any q_0, q_1 ∈ R, the polynomial p(x) of degree 2m, p(x) = q_0 + q_1 x + p̃(x), is convex.
Assuming a non-negative integer m ∈ {0} ∪ N_+ = {0, 1, 2, …}, define the following notations to simplify the subsequent mathematical expressions [17]: for a multi-index n = [n_1, …, n_N] ∈ ({0} ∪ N_+)^N and a vector x = [x_1, …, x_N]^T ∈ R^N, let n! = n_1! ⋯ n_N!, x^n = x_1^{n_1} ⋯ x_N^{n_N}, |n| = ∑_{i=1}^{N} n_i, and ∂^n f = ∂^{|n|} f/(∂x_1^{n_1} ⋯ ∂x_N^{n_N}). Note that |x| denotes the sum of elements of x whereas ‖x‖_1 is the sum of absolute values of the elements.
Lemma 7. The m-th power of a finite sum can be expanded as [18],

|x|^m = ∑_{|n|=m} (m!/n!) x^n.
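The multinomial expansion in Lemma 7 can be verified numerically with a brute-force sketch over all multi-indices n with |n| = m; the example vector and the power m = 4 are arbitrary illustrative choices:

```python
from itertools import product
from math import factorial, prod

def multinomial_expansion(x, m):
    """Sum over multi-indices n with |n| = m of (m!/n!) * x^n  (Lemma 7)."""
    N = len(x)
    total = 0.0
    for n in product(range(m + 1), repeat=N):
        if sum(n) != m:
            continue
        coeff = factorial(m)
        for ni in n:
            coeff //= factorial(ni)          # m! / (n_1! ... n_N!)
        total += coeff * prod(xi ** ni for xi, ni in zip(x, n))
    return total

x = [1.5, -0.5, 2.0]
lhs = sum(x) ** 4                  # |x|^4 with |x| the sum of elements
rhs = multinomial_expansion(x, 4)
```

Both sides agree to floating-point precision, since the expansion is an exact algebraic identity.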
Theorem 1. The multivariate Taylor expansion of an (m + 1)-times differentiable function f : R^N → R about the point x ∈ R^N is written as [18],

f(x + h) = ∑_{|n| ≤ m} (∂^n f(x)/n!) h^n + R_m(h),  R_m(h) = ∑_{|n| = m+1} (∂^n f(x + t h)/n!) h^n,

for some t ∈ (0, 1).

Background Extensions
In this section, additional results are obtained which are used in the next section to introduce statistical sum-moments for random vectors. In particular, the mean cosine similarity, the mean Minkowski distance as well as higher central moments for random vectors are defined. A polynomial approximation of univariate functions is shown to be a linear regression problem. A numerically efficient solution of the LS problem is derived. Finally, a procedure to generate multiple Gaussian processes with defined auto-covariance and cross-covariance is devised.
Recall the second moment of a random variable X, i.e., µ_2(c) = E[(X − c)²]. It is straightforward to show that µ_2(c) is minimized for c = E[X], giving the variance of X. On the other hand, for c = 0, the second moment is E[X²] = var[X] + (E[X])². For random vectors, both the cosine similarity and the Minkowski distance are random variables. Assuming that the vectors are jointly stationary, and their elements are identically distributed, the mean cosine similarity can be defined as, S̄(X_1, X_2) = E[X_1^T X_2/(‖X_1‖_2 ‖X_2‖_2)], and, the average cross-correlation is calculated dimension-wise as, R̄_12 = (1/N) ∑_{i=1}^{N} E[X_1i X_2i]. The mean Minkowski distance for random vectors can be defined as, D̄_m(X_1, X_2) = E[(∑_{i=1}^{N} |X_1i − X_2i|^m)^{1/m}]. Recognizing the m-th general moment in (5), the m-th power of the mean Minkowski distance can be normalized as, (1/N) ∑_{i=1}^{N} E[|X_1i − X_2i|^m], where the average Minkowski distance between two random vectors is taken per dimension. Furthermore, note that, for m = 2, this normalized distance is the mean squared error between the two vectors. Assuming positive integers m = {m_1, m_2, …, m_N}, the higher order joint central moments for a random vector X = {X_i}_{1:N} can be defined as, µ_m(X) = E[∏_{i=1}^{N} |X_i − X̄_i|^{m_i}], or, using the more compact index notation, as, µ_m(X) = E[|X − X̄|^m].

Linear LS estimation
The linear LS estimation can be used to fit a degree (D − 1) polynomial to N samples of a random process x(t) at discrete time instances t_1, t_2, …, t_N. Hence, consider the polynomial data model, M_i(P) = ∑_{d=1}^{D} P_d t_i^{d−1}, so that (2) gives the estimates. Assuming D = 2 parameters, i.e., the linear LS regression for a straight line, the parameters P_1 and P_2 to be estimated must satisfy the following equality,

∑_{i=1}^{N} (X_i − P_1 w_1i − P_2 w_2i) w_li = 0, l = 1, 2,

where w_1i = 1 and w_2i = t_i. Denoting the weighted averages, w̄_1 = (1/N) ∑_{i} w_1i, w̄_2 = (1/N) ∑_{i} w_2i, and X̄ = (1/N) ∑_{i} X_i, a necessary but not sufficient condition for the linear LS estimation of the parameters P_1 and P_2 is,

X̄ = P_1 w̄_1 + P_2 w̄_2.   (7)

In the LS terminology, (w̄_1, w̄_2) represent independent variables whereas X̄ is a dependent variable.
Note that all N measurements are used in (7). However, if N is sufficiently large, the data points can be split into two disjoint sets of N_1 and N_2 elements, N_1 + N_2 = N, respectively. The parameters P_1 and P_2 can then be readily estimated by solving the following set of two equations, i.e.,

X̄_k = P_1 w̄_1k + P_2 w̄_2k, k = 1, 2,   (9)

where the dependent and independent variables are computed over the two subsets as,

X̄_k = (1/a_k) ∑_{i∈I_k} X_i, w̄_lk = (1/a_k) ∑_{i∈I_k} w_li, l, k = 1, 2,   (8)

where a_1, a_2 > 1 are some constants (to be determined later), and I_1 and I_2 are two disjoint index sets, such that I_1 ∪ I_2 = {1, 2, …, N}, and their cardinality, |I_1| = N_1 and |I_2| = N_2. Note that, for a_k = N_k, the quantities in (8) become arithmetic averages. Consequently, the approximate LS estimates are computed as,

[P̂_1, P̂_2]^T = [w̄_11, w̄_21; w̄_12, w̄_22]^{−1} [X̄_1, X̄_2]^T.   (10)

There are 2^N possibilities how to split the N data points into two disjoint subsets indexed by I_1 and I_2. More importantly, the estimates (10) do not guarantee the minimum LS fit, i.e., achieving the minimum squared error (MSE). However, the complexity of performing the LS fit is greatly reduced by splitting the data, since independently of the value of N, only a 2 × 2 matrix needs to be inverted. The optimum LS fit and the approximate LS fit (10) are depicted in Figure 1. The points A_1 and A_2 in Figure 1 correspond to the data indexes I_1 and I_2, respectively.
The mid-point, B = a_1 A_1 + a_2 A_2, follows from (9). Note that B is always located at the intersection of the optimum and approximate lines LS_opt and LS_apr, respectively. The vertical arrows at points A_1 and A_2 in Figure 1 indicate that the dependent values X̄_1 and X̄_2 are random variables. The larger the variation of the gradient of the line LS_apr in Figure 1, the larger the uncertainty and the probability that the line LS_apr deviates from the optimum line LS_opt. Given 0 < p < 1, there exists ξ > 0, such that the probability of the gradient G_apr of LS_apr being within the given bounds is,

Pr(|G_apr − E[G_apr]| < ξ) ≥ 1 − p.   (11)

For stationary measurements (cf. (8)), the means and the variances in (11) are equal to,

E[G_apr] = (E[X̄_2] − E[X̄_1])/(w̄_22 − w̄_21), var[G_apr] = (var[X̄_1] + var[X̄_2])/(w̄_22 − w̄_21)².   (12)

The uncertainty in computing G_apr from the data, and thus, also the probability of the line LS_apr deviating from the line LS_opt, is inversely proportional to the width of the interval in (11). Consequently, the numerator of (12) must be minimized, and the denominator maximized.
In order to minimize the numerator in (12), it is convenient to choose, a_1 = N_1^ν and a_2 = N_2^ν, where ν ∈ R_+ is a constant to be optimized. It is straightforward to show that the expression, (N_1^{1−2ν} + N_2^{1−2ν}), decreases for ∀ν > 1/2. Hence, a necessary condition to reduce the approximation error is that a_1 > N_1^{1/2} and a_2 > N_2^{1/2}. For numerical convenience, let a_1 = N_1 and a_2 = N_2. Then, the definitions of the dependent and independent variables in (8) become arithmetic averages, and the optimum index subsets have the cardinality,

N_1 = N_2 = N/2.   (13)

Furthermore, in order to maximize the denominator in (12), assume that the independent variables in (8) are sorted by w_2i, i.e., let w_21 < w_22 < … < w_2N. This ordering together with the condition (13) maximizes the separation of the average points A_1 and A_2.
In summary, the approximate linear LS regression can be efficiently computed with a good accuracy by splitting the data into multiple disjoint subsets, calculating the average data points in each of these subsets, and then solving the set of linear equations with the same number of unknown parameters. The data splitting should exploit the ordering of the data points by one of the independent variables. Moreover, it can be expected that the accuracy of the approximate LS regression will improve with the number of data points N. Numerical evaluation of the approximate LS regression is considered in Section 5.
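The data-splitting procedure above can be sketched as follows for a straight-line fit; the true parameters, the noise level, and the sample size are illustrative assumptions, and the exact LS solution is included for comparison:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400
t = np.arange(N, dtype=float)
P1, P2 = 1.5, 0.3
x = P1 + P2 * t + rng.normal(size=N)     # noisy straight line

# Exact LS fit via the normal equations
W = np.column_stack([np.ones(N), t])
p_opt = np.linalg.solve(W.T @ W, W.T @ x)

# Approximate LS: order by the independent variable, split into two halves,
# average each half, and fit a line through the two average points.
idx = np.argsort(t)
I1, I2 = idx[: N // 2], idx[N // 2 :]
t1, x1 = t[I1].mean(), x[I1].mean()
t2, x2 = t[I2].mean(), x[I2].mean()
p2_apr = (x2 - x1) / (t2 - t1)           # gradient from the two average points
p1_apr = x1 - p2_apr * t1                # intercept
```

Only subset averages and a trivial 2 × 2 system are needed, yet the approximate estimates are close to the exact LS solution for large N.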

Generating pairwise-correlated Gaussian processes
How to generate a single correlated Gaussian process is well established in the literature, and it has been described in Section 2.3. Note also that the linear transformation matrix assumed in Section 2.3 does not have to be square.
Lemma 8. The matrix C_x = T T^T, generated by a transformation matrix T ∈ R^{N_1×N_2}, is positive definite only if N_1 ≤ N_2.
Proof. The matrix is positive definite, if U^T T T^T U > 0 for all non-zero U ∈ R^{N_1}. The product, U^T T T^T U = (T^T U)^T (T^T U) = ‖T^T U‖_2² ≥ 0, where ‖·‖_2 is the Euclidean norm of a vector. If N_1 > N_2, then some rows in T must be linearly dependent, and it is then possible to find a non-zero vector U, such that T^T U = 0.
Consequently, a linear transformation of an uncorrelated Gaussian vector U ∈ R^{N_2} can be used to generate a correlated Gaussian vector X ∈ R^{N_1}, provided that, N_1 ≤ N_2.
Furthermore, it is often necessary to generate multiple mutually correlated Gaussian processes with given auto-correlation and cross-correlation.
Lemma 9. The linear transformation,

[x_1; x_2] = [T_1, K; 0, T_2] [u_1; u_2],   (14)

generates a pair of correlated Gaussian vectors x_1 ∈ R^{N_1} and x_2 ∈ R^{N_2} from uncorrelated zero-mean Gaussian vectors u_1, u_2 ∈ R^N, where 0 denotes a zero matrix, and according to Lemma 8, it is necessary that, max(N_1, N_2) ≤ N.
The corresponding (auto-) correlation and cross-correlation matrices are,

C_x1 = σ_1² T_1 T_1^T + σ_2² K K^T, C_x2 = σ_2² T_2 T_2^T, C_x1x2 = σ_2² K T_2^T,

where E[u_1 u_1^T] = σ_1² I, E[u_2 u_2^T] = σ_2² I, E[u_1 u_2^T] = 0, and I denotes an identity matrix.
Proof. The proof is straightforward by substituting (14) to definitions of C C C x 1 , C C C x 2 and C C C x 1 x 2 .
The following corollary details the procedure described in Lemma 9.
Corollary 1. Given T_2, calculate the (auto-) correlation matrix, C_x2 = T_2 T_2^T, or vice versa. Then, given the cross-correlation matrix, C_x1x2, compute K = C_x1x2 C_x2^{−T} T_2. Finally, given T_1, calculate the (auto-) correlation matrix, C_x1 = T_1 T_1^T + K K^T, or, obtain T_1 by solving the matrix equation, T_1 T_1^T = C_x1 − K K^T. Note that the matrix equation, T T^T = C, can be solved for T using the eigenvalue decomposition, C = U Λ U^T, where U is a unitary matrix and Λ is a diagonal matrix of eigenvalues. Then, T = U √Λ.
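The final step of Corollary 1, solving T T^T = C via the eigendecomposition, can be sketched as follows; the target covariance matrix is an arbitrary positive semi-definite example:

```python
import numpy as np

# Hypothetical target covariance matrix (symmetric, positive semi-definite)
C = np.array([[2.0, 0.6, 0.2],
              [0.6, 1.0, 0.3],
              [0.2, 0.3, 0.5]])

# Solve T T^T = C via C = U Lambda U^T, then T = U sqrt(Lambda)
lam, U = np.linalg.eigh(C)
T = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))  # clip guards tiny negatives

reconstructed = T @ T.T
```

Clipping the eigenvalues at zero guards against small negative values caused by floating-point round-off in nearly singular covariance matrices.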

Polynomial Statistics and Sum-Moments for Vectors of Random Variables
The main objective of this section is to define a universal function to effectively measure the statistics of random vectors and random processes observed at multiple discrete time instances. The function should: (1) be universally applicable to an arbitrary number of random variables and random vectors, (2) be symmetric, so that all random variables are considered equally, (3) lead to mathematically tractable expressions, and (4) be convex, so its parameters can be optimized. Let f : R^N → R denote such a mapping from N random variables X_i to a scalar random variable Y, i.e.,

Y = f(X_1, X_2, …, X_N),   (15)

where, for a random process, X_i = x(t_i). In order to satisfy the symmetry requirement, it can be assumed that the random variables X_i are first combined as,

Y = f(X_1 • X_2 • ⋯ • X_N),   (16)

using a binary commutative operator • such as addition or multiplication. More importantly, if the random variables in (16) are combined using addition, then the function f must be non-linear. The mapping (15) or (16) defines a scalar random field over discrete time instances corresponding to the discrete time samples of the random process x(t).

Definition 4.
A random process x(t) observed at N discrete time instances t = {t_i}_{1:N} defines an N-dimensional scalar random field, F(t) = f(x(t_1), …, x(t_N)). Assuming the multivariate Taylor expansion of the function f(x) defined in Theorem 1 on p. 8 about its mean x̄ = E[x], the random field can be expressed as,

f(x̄ + h) = f(x̄) + ∑_{1≤|n|≤m} (∂^n f(x̄)/n!) h^n + R_m,   (17)

where R_m is the remainder. Thus, given x̄, the value of f(x̄ + h) is a weighted sum of h^n plus an offset f(x̄).
The key realization is that any function f having all the desired properties can be defined by combining the equations (16) and (17). In particular, neglecting the remainder R_m, the number of parameters in (17) can be greatly reduced by approximating the weights ∂^n f(x̄) in (17) with the coefficients (l! p_l) which are independent of n.
Moreover, instead of precisely determining the values of p_l ∈ R to obtain the best possible approximation of the original function f, it is useful as well as sufficient to constrain the Taylor expansion (17) to the class of functions that are exactly constructed as,

Y = f(x) = ∑_{l=0}^{m} p_l |x − x̄|^l,   (18)

where p_0 = f(x̄), x̄ = E[x], and the value m and the coefficients p_l ∈ R are determined by the requirements of a given application.
Proof. The proof would repeat the results and discussion given in the text preceding the lemma.
The most important statistical property of the random variable Y = f (x x x) is its mean value.
Lemma 11. The mean value of the random field F(t) created from N ≤ K discrete time observations of the K-th order stationary random process x(t) is an (N − 1)-dimensional scalar field,

E[F(t)] = ∑_{l=0}^{m} p_l µ_l(x), µ_l(x) = E[|x − x̄|^l].

Proof. Assuming the polynomial field constructor f defined in Lemma 10, its mean value is equal to the weighted sum of the central moments, µ_l(x) = E[|x − x̄|^l]. According to Lemma 1, the condition N ≤ K is sufficient to reduce the dimension of the expectation by one. For example, choosing the first sample X_1 = x(t_1) as a reference, the joint probability density of the samples becomes a function of the (N − 1) time differences, t_2 − t_1, …, t_N − t_1.

Related concepts
Assume the scalar field constructor f defined in (18) for a K-th order stationary random process x(t). Define the auxiliary random variable, Z(a) = |X|/a, where a is a normalization constant, a ≠ 0. The expression (18) can then be rewritten as, Y(a) = ∑_{l=0}^{m} p_l Z^l(a). For a = 1 and a zero-mean process, Z(1) has a zero mean, and the variance, var[Z(1)] = ∑_{i,j} [C_x]_ij. In data processing, the sample mean is intended to be an estimate of the true population mean, i.e., N ≫ 1 is required. Here, Z(a) is calculated over a finite number of vector or process dimensions, so it is a random variable for ∀a ∈ R \ {0}. The variable Z(N) should then be referred to as an arithmetic average or a center of gravity of the random vector X in the Euclidean space R^N, i.e.,

Z(N) = |X|/N = (1/N) ∑_{i=1}^{N} X_i.   (19)

Note that (19) is not an l_1-norm, since the variables X_i are not summed with absolute values. If the random variables X_i are independent, the distribution of Z(a) is given by the convolution of their marginal distributions. For correlated observations, if the multivariate characteristic function, f̂(s) = E[e^{j s·X}], of X can be obtained, the distribution of Z(a) = |X|/a is calculated as,

f_Z(z) = (1/2π) ∫ f̂(s_a 1) e^{−j s z} ds, s_a = s/a,

where the dot product, s_a · 1 = ⟨s_a, 1⟩, and 1 denotes an all-ones vector.
Many other properties involving the sums of random variables can be obtained. For instance, if the random variables X_i are independent and have zero mean, then, E[|X|²] = ∑_{i=1}^{N} E[X_i²]. Considering Lemmas 10 and 11, an important statistic for a random vector can be defined as the m-th central sum-moment.
Definition 5. The m-th central sum-moment of a random vector X ∈ R^N is computed as,

∑µ_m(X) = E[|X − X̄|^m] = E[(∑_{i=1}^{N} (X_i − X̄_i))^m].

Lemma 12.
The second central sum-moment of a random vector is equal to the sum of all elements of its covariance matrix, i.e.,

∑µ_2(X) = ∑_{i=1}^{N} ∑_{j=1}^{N} [C_x]_ij.

Furthermore, the second central sum-moment is also equal to the variance of |X|, i.e., ∑µ_2(X) = var[|X|].
Proof. The first equality is shown by expanding the expectation, and substituting for the elements of the covariance matrix, [C_x]_ij = cov[X_i, X_j]. The second expression follows by noting that ∑_{i=1}^{N} (X_i − X̄_i) has zero mean.
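Lemma 12 can be checked with a Monte Carlo sketch; the Markov-type covariance matrix and the sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 5, 400_000

# Correlated Gaussian samples with a known covariance matrix [C]_ij = a^|i-j|
a = 0.7
C = a ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
X = rng.multivariate_normal(np.zeros(N), C, size=M)

# Second central sum-moment: E[(sum_i (X_i - mean_i))^2]
s = (X - X.mean(axis=0)).sum(axis=1)
sum_moment = np.mean(s ** 2)

sum_of_cov = C.sum()   # Lemma 12: the two quantities coincide
```

The empirical sum-moment matches the sum of all covariance matrix entries up to Monte Carlo error, which also equals var[|X|] since the centered sum has zero mean.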
In the literature, there are other measures involving sums of random variables. For instance, in Mean-Field Theory, the model dimensionality is reduced by representing N-dimensional vectors by their center of gravity [19]. The central point of a vector is also used in the first order approximation of multivariate functions in [20], and in the model overall variance in [21].
In Measure Theory [22], the total variation (TV) of a real-valued univariate function, x : (t_0, t_N) → R, is defined as the supremum over all N-segment partitionings of the interval (t_0, t_N), i.e.,

TV(x) = sup ∑_{i=1}^{N} |x(t_i) − x(t_{i−1})|.

The TV concept can be adopted for observations X_i = x(t_i) of a stationary random process x(t) at discrete time instances, {t_i}_{0:N}. A mathematically tractable mean TV measure can be defined as,

E[TV] = ∑_{i=1}^{N} E[|X_i − X_{i−1}|].

Jensen's inequality for a random vector assuming equal weights can be stated as [23],

φ((1/N) ∑_{i=1}^{N} X_i) ≤ (1/N) ∑_{i=1}^{N} φ(X_i),   (20)

for a convex function φ. Alternatively, exchanging the expectation and summation, Jensen's inequality becomes, φ(E[X]) ≤ E[φ(X)]. Moreover, if the right-hand side of (20) is to be minimized and m = 2, the inequality in (20) changes to an equality. In particular, consider the minimum mean square error (MMSE) estimation of a vector of random parameters P = {P_i}_{1:N} from measurements X. Denoting P̂_i(X) = E[P_i|X], the MMSE estimator P̂(X) minimizes [15], E[‖P − P̂(X)‖_2²], where the expectations are over the conditional distribution f_{P|X}. In signal processing, a length N moving average (MA) filter transforms the input sequence X_i into an output sequence Y_i by the discrete-time convolution, i.e.,

Y_i = (1/N) ∑_{l=0}^{N−1} X_{i−l}.

The (auto-) correlations of the input and output sequences are related by Lemma 3 on p. 6. Note that if the input process is stationary, then the input and output processes are jointly stationary.

Multiple random processes
The major complication with observing, evaluating and processing multiple random processes is how to achieve their time alignment and amplitude normalization (scaling). Focusing here on the time alignment problem only, denote the discrete time observation instances of the L random processes as, t_l = {t_li}_{1:N_l}, l = 1, 2, …, L. Assume that the first time instance t_l1 of every process serves as a reference. Then, there are (L − 1) uncertainties in the time alignment of the L processes, i.e., ∆_l = (t_l1 − t_11) ∈ R, l = 2, 3, …, L. The (L − 1) values ∆_l are unknown parameters which must be estimated. Note also that the difference, ∆_lk = ∆_l − ∆_k, represents an unknown time shift between the processes x_l(t) and x_k(t).
For any multivariate stationary distribution of observations of random processes, the corresponding cross-correlation normally attains a maximum for some time shift between any two processes [12]. Hence, a usual strategy for aligning the observed sequences is to locate the maximum value of their cross-correlation. The time shifts ∆_l, l = 2, 3, …, L, are then estimated as,

∆̂_l = argmax_τ Ĉ_x1xl(τ),   (22)

where Ĉ_x1xl(τ) is the sample cross-covariance between the reference process x_1(t) and the process x_l(t). Assuming the center values X̄_1 and X̄_2 defined in (19) as scalar representations of the vectors X_1 and X_2, their cross-covariance can be computed as,

cov[X̄_1, X̄_2] = (1/(N_1 N_2)) ∑_{i=1}^{N_1} ∑_{j=1}^{N_2} cov[X_1i, X_2j].   (23)

The task is now how to generalize the pairwise cross-covariance (23) to the case of multiple random vectors having possibly different lengths. If all random vectors of interest are concatenated into one single vector, the m-th joint central sum-moment can then be defined by utilizing Lemma 10 and Lemma 11.
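The alignment strategy above can be sketched numerically; the smoothed process, the noise level, and the true shift are illustrative assumptions, and the helper `xcov` is a hypothetical name:

```python
import numpy as np

rng = np.random.default_rng(6)
n, true_shift = 20_000, 3

# A smooth (correlated) process and a delayed, noisy copy of it
base = np.convolve(rng.normal(size=n + 100), np.ones(5) / 5, mode="same")
x1 = base[50 : 50 + n]
x2 = base[50 - true_shift : 50 - true_shift + n] + 0.05 * rng.normal(size=n)
# By construction, x2(i) = x1(i - true_shift) plus noise

def xcov(a, b, lag):
    """Sample cross-covariance between a(i) and b(i + lag)."""
    a = a - a.mean()
    b = b - b.mean()
    if lag >= 0:
        return np.mean(a[: len(a) - lag] * b[lag:])
    return np.mean(a[-lag:] * b[: len(b) + lag])

# Estimate the shift as the argmax of the sample cross-covariance over candidate lags
lags = np.arange(-10, 11)
shift_hat = lags[np.argmax([xcov(x1, x2, int(l)) for l in lags])]
```

The cross-covariance peaks at the true delay, so the argmax recovers the time shift between the two sequences.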
Definition 6. The m-th central sum-moment for L random processes with N_l observations, l = 1, 2, …, L, is computed as,

∑µ_m(X_1, …, X_L) = E[(∑_{l=1}^{L} ∑_{i=1}^{N_l} (X_li − X̄_li))^m].

Lemma 13. The second central sum-moment for L random processes with N_l observations, l = 1, 2, …, L, is equal to the sum of all pairwise covariances, i.e.,

∑µ_2(X_1, …, X_L) = ∑_{l=1}^{L} ∑_{k=1}^{L} ∑_{i=1}^{N_l} ∑_{j=1}^{N_k} cov[X_li, X_kj].

Proof. The expression can be obtained by expanding the sum, and then applying the expectation.
Many other properties of central and non-central sum-moments can be obtained. For example, assuming two equal-length vectors X X X 1 and X X X 2 , it is straightforward to show that,

Illustrative Examples
This section provides examples to quantitatively evaluate the results developed in the previous sections. In particular, the accuracy of approximate linear LS regression proposed in Section 3.1 is assessed to justify its lower computational complexity. The central sum-moments introduced in Section 4 are compared assuming correlated Gaussian processes. Finally, several signal processing problems involving the 1st order Markov process are investigated.

Linear regression
Consider a classical one-dimensional linear LS regression problem with independent and identically normally distributed errors. The data points are generated as X_i = P_1 + P_2 ∆i + E_i, where the E_i are zero-mean, uncorrelated Gaussian samples having the equal variance σ²_e, and P_1 and P_2 are unknown parameters to be estimated. This LS problem can be solved exactly using the expression (2), substituting w_{1i} = 1 and w_{2i} = ∆i, ∀i = 1, 2, . . . , N. Alternatively, to avoid inverting the N × N data matrix, the procedure devised in Section 3.1 suggests splitting the data into two equal-size subsets, computing the average data point for each subset, and then solving the corresponding set of two equations with two unknowns. Specifically, assuming N is even, the set of two equations with the unknown parameters P̂_1 and P̂_2 is obtained from the two subset averages as in (24). As a numerical example, assume the true values P_1 = 1.5, P_2 = 0.3, E[E_i] = 0, var[E_i] = 1, and N = 40 data points. Figure 2 shows the intervals (T̄ − var[T], T̄ + var[T]) versus the subset size 1 ≤ N_1 ≤ N/2 for the random variable T defined as the difference of the total MSEs S_apr(N) and S_opt(N) of the approximate and the exact LS solutions, respectively. In the limit, lim_{N→∞} (S_apr(N) − S_opt(N)) = 0, since a sufficiently large subset of data is as good as the complete set of data. For finite N, it is likely that S_apr(N) > S_opt(N), so the lower bounds in Figure 2 converge much faster to zero than the upper bounds.
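The two-subset approximation can be sketched as follows; the regressor spacing ∆ = 1 and the random seed are illustrative assumptions, and the exact LS solution is included only for comparison.

```python
import numpy as np

# A minimal sketch of the two-subset LS approximation described above: the
# data are split into two halves, each half is replaced by its average point,
# and the line P1 + P2*t is solved from the resulting 2x2 system.
rng = np.random.default_rng(2)
P1, P2, N = 1.5, 0.3, 40
t = np.arange(1, N + 1, dtype=float)        # regressor ∆i with ∆ = 1
y = P1 + P2 * t + rng.standard_normal(N)    # data with unit-variance errors

# Exact LS solution for reference.
W = np.column_stack([np.ones(N), t])
p_opt, *_ = np.linalg.lstsq(W, y, rcond=None)

# Approximate solution: one average point per half, then two equations
# in two unknowns solved directly.
t1, t2 = t[: N // 2].mean(), t[N // 2 :].mean()
y1, y2 = y[: N // 2].mean(), y[N // 2 :].mean()
p2_apr = (y2 - y1) / (t2 - t1)
p1_apr = y1 - p2_apr * t1
print(p_opt, (p1_apr, p2_apr))   # both close to the true (1.5, 0.3)
```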

Comparison of central moments
Assuming Lemma 12 and eq. (3), the second central sum-moment of the 2nd order Markov process of length N can be obtained in a closed form. These moments are compared in Figure 3 for different values of the sequence length N, three values of the parameter α, and σ² = 1. It can be observed that the values of the second central sum-moment increase with N and with the level of correlation, e^{−α}. Consider now the following three central moments of order m = 1, 2, . . .: the mean Minkowski distance S̄_mnk, the mean sum-moment ∑µ_m, and the mean sum-moment ∑µ̃_m having the samples summed as the l_1 norm. The scaling of the Minkowski distance by √N is introduced to facilitate the comparison with the other two moments. More importantly, assuming a correlated Gaussian process X_i = x(t_i), the central sum-moment can be readily obtained in a closed form, whereas obtaining closed-form expressions for the other two metrics may be mathematically intractable. In particular, by factoring the covariance matrix as C_x = T_x T_x^T, the correlated Gaussian vector can be expressed as X = T_x U. Then the sum |X| = 1^T T_x U is a zero-mean Gaussian variable, and the m-th central sum-moment can be computed in a closed form involving a zero-mean, unit-variance Gaussian random variable U and the gamma function Γ. Figure 4 shows all three moments as a function of the sequence length N for three values of the parameter α assuming the 1st order Markov process. The vertical axis in Figure 4 is scaled by 1/N for convenience. Note that, for uncorrelated, i.e., independent Gaussian samples, the moments ∑µ_m and ∑µ̃_m are identical. More importantly, all three moments are strictly increasing with the number of samples N and with the order m.
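The closed form can be checked numerically; the sketch below assumes the standard Gaussian absolute moments E|U|^m = 2^{m/2} Γ((m+1)/2)/√π, and the exponential covariance, sequence length, and moment order are illustrative choices mimicking a 1st order Markov process.

```python
import numpy as np
from math import gamma, sqrt, pi

# Sketch of the closed-form m-th central sum-moment of a correlated Gaussian
# vector: the sum 1^T X = 1^T T_x U is itself zero-mean Gaussian with
# variance 1^T C_x 1, so its absolute m-th moment follows from the standard
# Gaussian absolute moments E|U|^m = 2^(m/2) * Gamma((m+1)/2) / sqrt(pi).
alpha, N, m = 0.5, 16, 3
i = np.arange(N)
C = np.exp(-alpha * np.abs(i[:, None] - i[None, :]))  # C_x(i,j) = e^{-a|i-j|}

s2 = C.sum()                                   # variance of the sum, 1^T C 1
closed_form = s2 ** (m / 2) * 2 ** (m / 2) * gamma((m + 1) / 2) / sqrt(pi)

# Monte Carlo check using the factorization X = T_x U (Cholesky).
rng = np.random.default_rng(3)
T = np.linalg.cholesky(C)
S = (T @ rng.standard_normal((N, 200_000))).sum(axis=0)
print(closed_form, np.mean(np.abs(S) ** m))    # the two values nearly agree
```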

Signal processing problems for the 1st order Markov process
Consider the 1st order Markov process observed at the output of a length-N MA filter. According to (22), the (auto-) covariance of the output process is given by (25). Conjecture 1. The MA filtering of the 1st order Markov process generates nearly a 2nd order Markov process.
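The conjecture can be illustrated numerically; the sketch below is not the paper's derivation, but relies on the standard result that MA filtering convolves the input (auto-) covariance with the deterministic autocorrelation of the filter impulse response, and on the 2nd order Markov covariance form e^{−a|k|}(1 + a|k|) used in (3) and (29). The values of α, the filter length, and the lag range are illustrative assumptions.

```python
import numpy as np

# Illustration of Conjecture 1 (a sketch, not the paper's derivation):
# the output covariance of a 1st order Markov input, C(k) = e^{-alpha|k|},
# passed through a length-N MA filter is compared with the 2nd order
# Markov form e^{-a|k|}(1 + a|k|) for a crudely fitted parameter a.
alpha, N = 0.5, 8                       # illustrative values
lags = np.arange(-40, 41)
C_in = np.exp(-alpha * np.abs(lags))    # 1st order Markov covariance
h = np.ones(N) / N                      # length-N MA filter
r_h = np.convolve(h, h[::-1])           # deterministic filter autocorrelation
C_out = np.convolve(C_in, r_h)          # combined (output) covariance
C_out /= C_out.max()                    # normalize to unit peak

# Crude grid fit of the 2nd order Markov model to the output covariance.
k = np.arange(-(len(C_out) // 2), len(C_out) // 2 + 1)
grid = np.linspace(0.01, 1.0, 200)
mse = [np.mean((C_out - np.exp(-a * np.abs(k)) * (1 + a * np.abs(k))) ** 2)
       for a in grid]
a_hat = grid[int(np.argmin(mse))]
print(a_hat, min(mse))                  # a small MSE supports the conjecture
```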
The parameter of the 2nd order Markov process approximating the combined (auto-) covariance (25) can be obtained by the LS regression fit (26). Substituting (3) and (25) into (26), and setting the first derivative equal to zero, the LS estimate α̂ must satisfy a linear equation which can be readily solved for α̂ in terms of the Lambert function branch W_{−1} [24]. A discrete time sequence of N elements has the (auto-) covariance constrained to (2N − 1) time indexes as indicated in (25). Assuming the length-N MA filter, and that there are (n_x N) samples, n_x = 1, 2, . . ., of the 1st order Markov process available, the (auto-) covariance (25) has the overall length of 2N(n_x + 1) − 3 samples. Figure 5 compares the MSE of the LS fit of the (auto-) covariance of the 2nd order Markov process to the combined (auto-) covariance of the 1st order Markov process and the impulse response of the MA filter, assuming three values of α and two values of n_x. Given α and n_x, Figure 5 shows that the best LS fit occurs for a certain value of the MA filter length N. It can be concluded that, in general, the 1st order Markov process changes to the 2nd order Markov process at the output of a MA filter. The second problem to investigate is the linear MMSE (LMMSE) prediction of the 1st order Markov process observed at the output of a MA filter. In particular, given N samples X_i = x(t_i), i = 1, 2, . . . , N, of a random process having the (auto-) covariance (25), the task is to predict its future value X_{N+1} = x(t_{N+1}), t_{N+1} > t_N.
In general, the impulse response h of the LMMSE filter to estimate an unknown scalar parameter P from measurements X is computed as in [15]. Here, the unknown parameter is P = X_{N+1}, with E[X_{N+1} X_i] = C_1MP+MA(N + 1 − i) and E[X_i X_j] = C_1MP+MA(i − j). Consequently, the predicted value is X̂_{N+1} = X_N C_1MP+MA(1). Note that the same procedure, but excluding the MA filter, gives the LMMSE estimate X̂_{N+1} = X_N C_1MP(1). The last problem to consider is the time alignment of two zero-mean, jointly stationary processes. It is assumed that the normalized cross-covariance of these two processes is given by (28). Denote the uncertainty in determining the difference (i − j) as ∆. In order to estimate the unknown parameters α and ∆, the left-hand side of (28) can be estimated by the method of moments. The cross-covariance (28) can then be rewritten as ṽ_k = e^{−α|∆+k|}(1 + α|∆ + k|), k = 0, 1, 2, . . .
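The general LMMSE construction h = C^{-1} c can be sketched for the plain 1st order Markov process (no MA filter), where the exponential covariance is known to make the predictor collapse onto the last sample; the values of α and N are illustrative assumptions.

```python
import numpy as np

# Sketch of the LMMSE predictor h = C^{-1} c applied to the plain 1st order
# Markov process (no MA filter), with covariance E[X_i X_j] = e^{-alpha|i-j|}.
# Due to the Markov property, the prediction collapses onto the last sample:
# the weight vector is h = [0, ..., 0, e^{-alpha}].
alpha, N = 0.5, 10
idx = np.arange(1, N + 1)
C = np.exp(-alpha * np.abs(idx[:, None] - idx[None, :]))  # E[X_i X_j]
c = np.exp(-alpha * np.abs(N + 1 - idx))                  # E[X_{N+1} X_i]
h = np.linalg.solve(C, c)                                 # LMMSE weights
print(h)   # all weight on the last sample: h ≈ [0, ..., 0, e^{-alpha}]
```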
Assuming, without loss of generality, that ∆ ≥ 0, the absolute value in (29) can be ignored. Consequently, the unknown parameters α and ∆ can be obtained as a linear LS fit to the N measured and calculated values ṽ_k in the linear model (29).

Conclusions
The development of a novel statistical measure to enable correlation analysis for multiple random vectors began by summarizing background knowledge on the statistical description of discrete time random processes. This was then extended with the derivation of several supporting results which were used in the subsequent sections. Specifically, it was shown that linear regression can be effectively approximated by splitting the data into disjoint subsets, and assuming only one average data point within each subset. In addition, a procedure for generating multiple Gaussian processes with prescribed auto-covariance and cross-covariance was devised. The main result of the paper was obtained by assuming the Taylor expansion of a multivariate scalar function, and then approximating this expansion by a univariate polynomial. The single polynomial variable is a simple sum of the variables of the original multivariate function. The polynomial approximation represents a mapping from multiple discrete time observations of a random process to a scalar random field. The mean field value is a weighted sum of canonical central moments of increasing orders. These moments were named central sum-moments to reflect how they are defined. The sum-moments were then discussed in light of other similar concepts such as the total variance, Mean Field Theory, and moving average sequence filtering. Illustrative examples were studied in the last section of the paper. In particular, the accuracy of approximate linear regression was evaluated quantitatively assuming two disjoint data subsets. Assuming the 1st and the 2nd order Markov processes, the central sum-moments were compared with the mean Minkowski distance. For Gaussian processes, the central sum-moments can be obtained in a closed form. The remaining problems investigated the moving average filtering of the 1st order Markov process, and its prediction using a linear MMSE filter.
Funding: This research received no external funding.

Conflicts of Interest:
The author declares no conflict of interest.

Abbreviations
The following abbreviations are used in the paper:

1MP    1st order Markov process
2MP    2nd order Markov process
AR     autoregressive
LMMSE  linear minimum mean square error
LS     least squares
MMSE   minimum mean square error
MSE    mean square error
MA     moving average
TV     total variance