Analytical Formulas for Conditional Mixed Moments of Generalized Stochastic Correlation Process

This paper proposes a simple and novel approach based on solving a partial differential equation (PDE) to establish concise analytical formulas for the conditional moments and mixed moments of the Jacobi process with constant parameters, accomplished by including random fluctuations with an asymmetric Wiener process and without any knowledge of the transition probability density function. Our idea involves a system of recurrence differential equations that leads to the PDE through an asymmetric matrix. Then, by using Itô's lemma, all formulas for the Jacobi process with constant as well as time-dependent parameters are extended to the generalized stochastic correlation processes. In addition, their statistical properties are provided in closed form. Finally, to illustrate applications of the proposed formulas in practice, parametric estimation methods based on the moments are discussed, particularly the method of moments estimators.


Introduction
The diffusion process has been studied thoroughly in seeking a solution for a stochastic differential equation (SDE) as well as for its properties, such as the conditional moments and mixed moments, which play significant roles in various applications and are especially beneficial for the estimation of rate parameters. Usually, these moments can be directly evaluated by utilizing the transition probability density function (PDF), which is sometimes unknown, complicated, or unavailable in closed form. Hence, the analytical formula for the moments of the SDE may be unavailable. The important application of these moments is parameter estimation. There are many tools to estimate parameters, such as the maximum likelihood estimator (MLE), which is one of the most efficient tools. Sometimes, however, it cannot be performed directly for the data of processes for which the transition PDFs are unknown or complicated. Thus, the moments are required for estimating parameters; this can be performed via several methods, e.g., martingale estimating functions, quasi-likelihood methods, nonlinear weighted least squares estimation, and method of moments (MM).
The aim of this paper is mainly to propose a simple analytical formula for conditional mixed moments of a generalized stochastic correlation process without requiring the transition PDF. More specifically, we let (Ω, F, {F_t}_{s≤t≤T_2}, P) be a filtered probability space generated by an adapted stochastic process {ρ_t}_{s≤t≤T_2}. This paper focuses on the conditional expectation of the product of the polynomial functions ρ_{T_1}^{n_1} and ρ_{T_2}^{n_2}, called a conditional mixed moment up to order n_1 + n_2 for n_1, n_2 ∈ Z_0^+,

E[ρ_{T_1}^{n_1} ρ_{T_2}^{n_2} | ρ_s = ρ],  (1)

the analytical formula of which has not previously been provided, where ρ ∈ [ρ_min, ρ_max] ⊆ R and ρ_t evolves according to a generalized stochastic correlation process (with time-dependent parameters) governed by the SDE

dρ_t = θ*(t)(µ*(t) − ρ_t) dt + σ*(t) √((ρ_t − ρ_min)(ρ_max − ρ_t)) dW_t,  (2)

where W_t is an asymmetric Wiener process, θ*(t) > 0, σ*(t) > 0, and ρ_min < µ*(t) < ρ_max for all t ∈ [s, T_2]. The parameter θ*(t) corresponds to the mean-reverting parameter, µ*(t) represents the mean of the process, and σ*(t) is the volatility coefficient, which determines the state space of the diffusion. Emmerich [1] showed that the stochastic correlation process, i.e., (2) when the parameters θ*(t), µ*(t), and σ*(t) are constant, ρ_min = −1, and ρ_max = 1, fulfills the natural features that a correlation is expected to possess. In fact, this process is a transformed version of the Jacobi process [2]; in other words, the Jacobi process is the generalized stochastic correlation process (2) when the parameters θ*(t), µ*(t), and σ*(t) are constant, ρ_min = 0, and ρ_max = 1. Moreover, the Jacobi process is commonly used to describe the dynamics of discretely sampled data with range [0, 1], such as a regime probability or default probability, a discount coefficient, or an arbitrage-free pure discount bond price; see, e.g., [2,3]. The conditional mixed moment (1) becomes the well-known conditional moment when n_1 = 0.
It is worth noting that the conditional moment, which is widely used in many branches of mathematical science (especially in describing the dynamics of observed data), has been studied extensively from a probabilistic viewpoint. In 2002, to study the moment evaluation of interest rates, Delbaen and Shirakawa [2] provided an analytical formula for the transition PDF of the Jacobi process by solving the Fokker–Planck equation with orthogonal polynomials, namely the Jacobi polynomials. In addition, an analytical formula for the conditional moments of the Jacobi process was obtained algebraically by applying the transition PDF; see Figure 1. The transition PDF of the Jacobi process is very complicated and involves the Jacobi polynomials; the resulting formula is difficult to work with, especially when extending it to a formula for the conditional mixed moments (1). The authors showed that the Jacobi process, which is bounded on [0, 1], becomes a more general bounded process on [ρ_min, ρ_max] by using Itô's lemma; see more details in [2]. An analytical formula for the conditional moments of this new bounded process is provided in [2] as well. In 2004, Gouriéroux and Valéry [3] proposed a method for finding the conditional mixed moments in order to calibrate parameter values based on suitable conditional moments. Their idea applies the tower property of conditional expectation to the conditional moments. However, their formula for the conditional moments requires solving a system of conditional moments recursively.
In this work, by utilizing the Feynman-Kac formula, which is transformed from the Kolmogorov equation by using Itô's lemma, we provide a simple analytical formula for conditional moments of the Jacobi process. The key interesting element of our work is that we successfully solve the partial differential equation (PDE) given in the Feynman-Kac formula, as shown in Figure 1. The obtained formula does not require solving any recursive system, as is the case in the literature to date. In addition, by applying the obtained formula with the binomial theorem, we immediately obtain a simple analytical formula for conditional moments of the generalized stochastic correlation process (2). Moreover, we extend the obtained formulas to the conditional mixed moments (1) using the tower property. We propose an analytical formula for several mathematical properties, such as the conditional variance, covariance, central moment and correlation, as consequences of our results.
The overall idea of our results relies on a PDE solution provided by the Feynman–Kac formula, which corresponds to the solution of (1). Roughly speaking, by assuming that the solution of the PDE is a polynomial expression, we can solve for the coefficients to obtain a closed-form formula directly. The key motivation for the form of the conditional moments, that is, a solution of the PDE, is based on [4–7]. Because the SDE of the Jacobi process has a linear drift coefficient and a polynomial squared diffusion coefficient, the closed-form solutions of the conditional moments can be assumed to admit a polynomial expansion; see more details in [4,8–11].
The rest of this paper is organized as follows. Section 2 provides a brief overview of the extended Jacobi process and the generalized stochastic correlation process. The key methodology and main theorems are proposed in Section 3. Experimental validations of proposed formulas are shown in Section 4 via Monte Carlo (MC) simulations. To illustrate applications in practice, parameter estimation methods based on conditional moments are mentioned in Section 5.

Jacobi and Generalized Stochastic Correlation Processes
The Jacobi process is a class of solvable diffusion processes whose solution satisfies the Pearson equation [12]. It arises in a wide variety of problems in many fields, such as chemistry, physics, and engineering; see more details in [13]. Over the past decade, the Jacobi process has been considered as one class of the Pearson diffusion processes [4], sometimes called a generalized Jacobi process. The Pearson diffusion process is presented via an Itô process having a linear drift coefficient and a quadratic squared diffusion coefficient, whose dynamics follow

dX_t = θ(µ − X_t) dt + √(2θ(aX_t² + bX_t + c)) dW_t,  (3)

where W_t is an asymmetric Wiener process, X_t is in the state space, θ > 0, and a, b, and c are constants which ensure that the quadratic squared diffusion coefficient in (3) is well-defined for all t in the time space. By considering the transition PDF of the Pearson diffusion process through the Fokker–Planck equation, Forman and Sørensen [4] classified it into six classes based on the stationary solution, including the Jacobi process. Under the classification of Forman and Sørensen [4], the Pearson diffusion process becomes the Jacobi process under the conditions a < 0 and b² − 4ac > 0. The simplest form of the Jacobi process follows the SDE (3) when a = −b < 0 and c = 0, and its dynamics follow

dX_t = θ(µ − X_t) dt + √(−2aθ X_t(1 − X_t)) dW_t.  (4)

Unlike the Cox–Ingersoll–Ross process [14], which is only bounded below, all values produced by the Jacobi process (4) are bounded both below and above. To make the boundary points 0 and 1 inaccessible almost surely with respect to the probability measure P, we need the sufficient condition −a ≤ µ ≤ 1 + a; see, e.g., [2,15]. Under this condition, a generalized case of the Jacobi process (4) can be obtained by applying Itô's lemma with ρ_t = ρ_min + (ρ_max − ρ_min)X_t. In this work, we call the result the generalized stochastic correlation process (constant parameters), governed by the SDE

dρ_t = θ((ρ_max − ρ_min)µ + ρ_min − ρ_t) dt + √(−2aθ (ρ_t − ρ_min)(ρ_max − ρ_t)) dW_t.  (5)

Comparing (2) with (5) yields θ*(t) = θ, µ*(t) = (ρ_max − ρ_min)µ + ρ_min and σ*(t) = √(−2aθ).
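The parameter mapping between the Jacobi and stochastic correlation parameters is easy to check numerically. The following minimal sketch (the function name is ours) converts constant Jacobi parameters (θ, µ, a) into (θ*, µ*, σ*); with the constants θ = 1.15, µ = 0.59, a = −0.14 used later in the experimental section and [ρ_min, ρ_max] = [−1, 1], it recovers values close to the reported σ* = 0.56 and µ* = 0.17 up to rounding (the exact image of µ = 0.59 is 0.18):

```python
import math

def jacobi_to_correlation(theta, mu, a, rho_min, rho_max):
    """Map constant Jacobi parameters (theta, mu, a) to the generalized
    stochastic correlation parameters (theta*, mu*, sigma*)."""
    if a >= 0:
        raise ValueError("the Jacobi condition a < 0 is required")
    theta_star = theta                            # theta* = theta
    mu_star = (rho_max - rho_min) * mu + rho_min  # mu* = (rho_max - rho_min) mu + rho_min
    sigma_star = math.sqrt(-2.0 * a * theta)      # sigma* = sqrt(-2 a theta)
    return theta_star, mu_star, sigma_star

# Constants used in the experimental section, on [rho_min, rho_max] = [-1, 1]:
print(jacobi_to_correlation(1.15, 0.59, -0.14, -1.0, 1.0))
```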
Figure 2 summarizes the relations among processes (2)–(5) and (8); we return to the extended Jacobi process (8) in Section 3.
The Jacobi process (4) is a special case of particular interest. In the context of conditional expectation, a natural question is whether the conditional expectation can be calculated directly by using the transition PDF. We begin with the transition PDF of the Jacobi process, which is associated with the Jacobi polynomials through the eigenfunctions of the Jacobi generator; see more details in [16,17]. Here, we discuss only the simplest case provided in (4). We use the transition PDF following Leonenko's version [17], which can be rewritten as in (6), where beta(x) = x^β(1 − x)^α / B(α + 1, β + 1) is the invariant distribution, B(α, β) = Γ(α)Γ(β)/Γ(α + β) is the beta function, Γ(·) is the gamma function, and α, β ∈ (−1, ∞). The key parameter in (7) is λ_j, the discrete spectrum of the generator corresponding to the Jacobi polynomial P_j^(α,β)(z). As shown in (6) and (7), conditional expectations such as the moments are difficult to calculate using the transition PDF, and this becomes even more complicated for the conditional mixed moments (1). To overcome this issue, the Feynman–Kac formula is applied here.

Main Results
As strong empirical evidence indicates that movements in financial practice tend to involve time-varying parameters (see more details in [18–20]), we extend the dynamics of the Jacobi process (4) to time-varying parameters, called the extended Jacobi process,

dX_t = θ(t)(µ(t) − X_t) dt + √(−2a(t)θ(t) X_t(1 − X_t)) dW_t,  (8)

where W_t is an asymmetric Wiener process, θ(t) > 0, a(t) < 0, and 0 < µ(t) < 1 for all t. Well-known instances of SDE processes governed by time-dependent parameters are the extended Ornstein–Uhlenbeck [19] and the extended Cox–Ingersoll–Ross [21] processes. However, to ensure the existence and uniqueness of the process (8), it is required that the parameter functions θ(t), µ(t), and a(t) be Borel-measurable and satisfy the local Lipschitz and linear growth conditions; see more details in [22]. This section, which is partitioned into three subsections consisting of ten theorems and two lemmas, presents the key methodology used in this paper as well as the main results. To achieve our aim (1), we first study the extended Jacobi process (8). The generalized stochastic correlation process, together with its properties, is then obtained by transforming the extended Jacobi process. Several consequences of the obtained theorems are investigated in the later part of this section.

Extended Jacobi Process
By solving the PDE in the Feynman–Kac formula, Theorem 1 provides an analytical formula for the γth conditional moments of the extended Jacobi process (8), where γ ∈ R. Unlike previous works in the literature, the obtained formula is given as an infinite sum, which is assumed to converge uniformly.

Theorem 1.
Suppose that X_t follows the extended Jacobi process (8). The γth conditional moment for γ ∈ R is

U_E^γ(x, τ) := E[X_T^γ | X_s = x] = Σ_{k=0}^∞ P_k^γ(τ) x^{γ−k},  (9)

where τ = T − s, given that the infinite series in (9) converges uniformly on D_E^γ, and where the coefficients in (9) are expressed by (10) and (11).

Proof. By the Feynman–Kac formula [23], U := U_E^γ satisfies, for all (x, τ) ∈ D_E^γ, the PDE

U_τ = θ(τ)(µ(τ) − x) U_x − a(τ)θ(τ) x(1 − x) U_xx,  (12)

subject to the initial condition

U(x, 0) = x^γ.  (13)

By comparing the coefficients of (9) and (13), we obtain the conditions P_0^γ(0) = 1 and P_k^γ(0) = 0 for k ∈ Z^+. To solve (12), we use (9) to find the partial derivatives U_τ, U_x and U_xx. After substituting these partial derivatives into (12) and collecting the coefficients of x^{γ−k}, the equation can be simplified. Under the assumption of uniform convergence of the infinite series in (9) over D_E^γ, it can be solved through the following system of recurrence differential equations:

d/dτ P_k^γ(τ) = θ(τ)(γ − k)(a(τ)(γ − k − 1) − 1) P_k^γ(τ) + θ(τ)(γ − k + 1)(µ(τ) − a(τ)(γ − k)) P_{k−1}^γ(τ),  (14)

for k ∈ Z_0^+ with P_{−1}^γ ≡ 0, subject to the initial conditions P_0^γ(0) = 1 and P_k^γ(0) = 0 for k ∈ Z^+. As the system (14) consists only of general linear first-order differential equations, the coefficients in (9) are obtained by solving the system (14) as a recursive relation, which provides the results (10).
According to the infinite sum (9), a convergent case needs to be mentioned. Theorem 2 is a special case of Theorem 1 when γ is a non-negative integer. In such a case, the infinite sum, which can cause a truncation error in practice, can be reduced to a finite sum. It should be noted that our proposed formulas for the extended Jacobi process are more general, covering the formulas provided in [2,3].
The other formula, in the form of a finite sum, is presented in Corollary 1 for the case of a constant γ = m + µ(τ)/a(τ) + 1 for all τ ≥ 0 and m ∈ N.

Proof. The result is directly obtained by inserting this value of γ into (10) and (11).
To establish the results for the system of linear recurrence differential equations (14) when all parameters are constant, we provide an efficient tool in Lemma 1, which is then used for the conditional moments of the Jacobi process (4) and their consequences.

Lemma 1. Let α_k, β_k ∈ R and n ∈ Z^+. For distinct α_0, α_1, α_2, . . . , α_n, the recurrence differential equations provided by

d/dt y_k(t) = α_k y_k(t) + β_k y_{k−1}(t), k ∈ {0, 1, 2, . . . , n},  (17)

where y_{−1} ≡ 0 and the initial conditions are y_k(0) = φ_k ∈ R for k ∈ {0, 1, 2, . . . , n}, have unique closed-form solutions.

Proof. For k ∈ {0, 1, 2, . . . , n}, we can rewrite (17) in the matrix form denoted by d/dt y(t) = Ly(t), subject to the initial condition y(0) = Φ. Although L has an asymmetric structure, it is easy to see that the solution is y(t) = e^{tL}Φ. Note that the coefficient matrix L is lower triangular. It is well known that the eigenvalues of L are its diagonal entries, i.e., α_j for j ∈ {0, 1, 2, . . . , n}. As these eigenvalues are all distinct, the matrix L is diagonalizable; in other words, L = SΛS^{−1}. Thus, the solution can be expressed in the following form:

y(t) = S e^{tΛ} S^{−1} Φ,  (18)

where Λ = diag{α_0, α_1, α_2, . . . , α_n} is the eigenvalue matrix of L and S := [s_{k,j}] is the eigenvector matrix of L for all k, j ∈ {0, 1, 2, . . . , n}. Let the jth column of S, which is the eigenvector corresponding to α_j, be denoted by s_j = [s_{0,j}, s_{1,j}, s_{2,j}, . . . , s_{n,j}]^⊤. Then,

Ls_j = α_j s_j.  (19)

Because the matrix L has all distinct eigenvalues, it is simple and has a complete set of n + 1 eigenvectors. Hence, for each eigenvalue α_j, the system (19) has only one free variable. We let s_{j,j} be the free variable and set it equal to one. Thus, we can directly solve (19) with s_{j,j} = 1 to obtain s_{k,j} = 0 for k ∈ {0, 1, 2, . . . , j − 1} and an explicit expression for k ∈ {j + 1, j + 2, j + 3, . . . , n}. After varying all column indices j from 0 to n, we have the eigenvector matrix S as a lower triangular matrix with elements s_{k,j}.
Next, the inverse of the eigenvector matrix S, denoted by S^{−1} := [r_{k,j}], can be calculated directly; it is likewise lower triangular, with entries r_{k,j}. We can then explicitly express the elements s_{k,j} and r_{k,j} and substitute the obtained matrices into (18) to conclude the proof.
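The structure exploited in Lemma 1 can be mirrored computationally: because each y_k is forced only by y_{k−1} and the diagonal entries are distinct, each solution is a linear combination of the exponentials e^{α_j t} for j ≤ k, with coefficients obtained by variation of parameters. A minimal sketch (the helper names are ours, assuming the chain form y_k' = α_k y_k + β_k y_{k−1}):

```python
import math

def solve_chain(alpha, beta, phi):
    """Solve y_0' = a_0 y_0 and y_k' = a_k y_k + b_k y_{k-1} (distinct a_k)
    with y_k(0) = phi[k]. Each solution is returned as a dict mapping an
    exponent a_j to its coefficient, i.e. y_k(t) = sum c_j * exp(a_j * t)."""
    sols = []
    for k in range(len(alpha)):
        c = {}
        if k > 0:
            # particular solution driven by y_{k-1}; distinctness of the
            # alphas guarantees the denominators are nonzero
            for aj, cj in sols[k - 1].items():
                c[aj] = c.get(aj, 0.0) + beta[k] * cj / (aj - alpha[k])
        # homogeneous coefficient fixed by the initial condition y_k(0) = phi[k]
        c[alpha[k]] = phi[k] - sum(c.values())
        sols.append(c)
    return sols

def eval_chain(c, t):
    return sum(coef * math.exp(a * t) for a, coef in c.items())

# Three-equation chain: y_0 = e^{-t}, y_1 = e^{-t} - e^{-2t}, etc.
sols = solve_chain([-1.0, -2.0, -3.0], [0.0, 1.0, 2.0], [1.0, 0.0, 0.0])
print(eval_chain(sols[1], 0.5))
```

This avoids forming the matrices S and S^{−1} explicitly but follows the same decomposition into exponential modes.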
Theorem 3. Suppose that X_t follows the Jacobi process (4). Then, the γth conditional moment is given by (20) with coefficients (21), where A_j^γ and B_j^γ are as provided in (22).

Proof. For the Jacobi process (4), the parameters in (8) are constants, which gives the constants A_j^γ and B_j^γ provided in (22). The key idea of the proof is to solve the coefficients P_k^γ(τ) in (14), which can be accomplished straightforwardly using Lemma 1. We consider a partial sum of (20) from k = 0 to k = n. Recalling the system (14), we now have (23) with distinct A_k^γ for k ∈ {0, 1, 2, . . . , n}. By applying Lemma 1, the solution of the coefficients P_k^γ(τ) in (23) is (21) for all k ∈ {0, 1, 2, . . . , n}. Hence, under the assumption that the infinite series in (20) converges uniformly on D_J^γ, (21) holds for all k ∈ Z^+, as required.
In the case that γ = n ∈ Z + 0 , U γ J (x, τ) can be expressed as a power series in terms of x which terminates at finite order. This means that Theorem 4 reduces the result (15) in Theorem 2 to a finite sum of order n.

Theorem 4.
Suppose that X_t follows the Jacobi process (4). Then, the nth conditional moment is given by (24), where A_j^n and B_j^n are as provided in (22).
Proof. The proof is rather trivial by combining Theorems 2 and 3.
The following corollary can be reduced from Theorem 3 using the same idea as in Corollary 1.

Corollary 2. According to Theorem 3, with a constant γ as in Corollary 1, the conditional moment reduces to a finite sum, where A_j^γ and B_j^γ are as provided in (22).
Proof. The proof is rather trivial by combining the idea of the proofs in Theorem 3 and Corollary 1.
In addition, Theorem 5 passes from (24) in Theorem 4 to the unconditional moment by letting τ → ∞; the obtained result no longer depends on x.
Theorem 5. Suppose that X_t follows the Jacobi process (4). Then, the nth unconditional moment at equilibrium for n ∈ Z_0^+, 0 < x < 1 and τ = T − s ≥ 0 is provided by

lim_{τ→∞} U_J^n(x, τ) = ∏_{k=0}^{n−1} (µ − ka)/(1 − ka).  (26)

Proof. According to (24) in Theorem 4, because A_j^n < 0 for all j < n, the coefficient terms of x^{n−k} provided in (21) approach 0 as τ → ∞ for j, k ∈ {0, 1, 2, . . . , n − 1}, except in the case k = j = n. We have A_n^n = 0, so the corresponding term survives in the limit and yields (26).
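The equilibrium moments admit an independent cross-check: under the stated conditions, the stationary law of the Jacobi process (4) is a Beta distribution with shape parameters p = µ/(−a) and q = (1 − µ)/(−a), whose nth moment is ∏_{k=0}^{n−1}(p + k)/(p + q + k). A small sketch (the helper name is ours), using the constant parameters from the experimental section:

```python
def jacobi_stationary_moment(n, mu, a):
    """n-th moment of the stationary Beta(mu/(-a), (1 - mu)/(-a))
    distribution of the Jacobi process; assumes a < 0 and 0 < mu < 1."""
    p = mu / (-a)
    q = (1.0 - mu) / (-a)
    m = 1.0
    for k in range(n):
        m *= (p + k) / (p + q + k)
    return m

# The first stationary moment is the long-run mean mu itself:
print(jacobi_stationary_moment(1, 0.59, -0.14))
```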

Generalized Stochastic Correlation Process
Theorem 6 provides a relation between the extended Jacobi process (8) and the generalized stochastic correlation process through Itô's lemma, and provides a closed-form formula for the conditional moments of the latter.

Theorem 6. Let X_t follow the extended Jacobi process (8), where X_t ∈ (0, 1) for all t ∈ [s, T]. Suppose that ρ_t = ρ_min + (ρ_max − ρ_min)X_t for all t ∈ [s, T]. Then, (8) becomes a generalized stochastic correlation process

dρ_t = θ(t)((ρ_max − ρ_min)µ(t) + ρ_min − ρ_t) dt + √(−2a(t)θ(t) (ρ_t − ρ_min)(ρ_max − ρ_t)) dW_t  (27)

and ρ_t ∈ (ρ_min, ρ_max) for all t ∈ [s, T]. In addition, its nth conditional moment is

U_G^n(ρ, τ) := E[ρ_T^n | ρ_s = ρ] = Σ_{k=0}^n C(n, k) ρ_min^{n−k} (ρ_max − ρ_min)^k U_E^k(x, τ),  (28)

which reduces to ρ_max^n U_E^n(x, τ) when ρ_min = 0, where x = (ρ − ρ_min)/(ρ_max − ρ_min), τ = T − s and U_E^k is defined in (15).
Proof. Applying Itô's lemma with ρ_t = ρ_min + (ρ_max − ρ_min)X_t provides the SDE shown in (27). As ρ_T is an affine transformation of X_T, the analytical formula for the conditional moments is determined in two cases. For the case ρ_min = 0, we have E[ρ_T^n | ρ_s = ρ] = ρ_max^n E[X_T^n | X_s = x]. For the other case, ρ_min ≠ 0, the binomial theorem results in E[ρ_T^n | ρ_s = ρ] = Σ_{k=0}^n C(n, k) ρ_min^{n−k} (ρ_max − ρ_min)^k E[X_T^k | X_s = x]. As X_t follows the extended Jacobi process (8), applying Theorem 2 yields the two cases in (28).
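The binomial step used in the proof is a purely algebraic identity, so it can be sanity-checked on any sample: the empirical moments of ρ = ρ_min + (ρ_max − ρ_min)X must match the expansion in powers of X exactly, up to floating-point rounding. A small illustrative sketch (sample and names are ours):

```python
import math, random

random.seed(1)
rho_min, rho_max = -1.0, 1.0
xs = [random.random() for _ in range(1000)]  # stand-in sample of X values in (0, 1)

def emp_moment(vals, k):
    """Empirical k-th raw moment of a sample."""
    return sum(v ** k for v in vals) / len(vals)

n = 3
lhs = emp_moment([rho_min + (rho_max - rho_min) * x for x in xs], n)
rhs = sum(math.comb(n, k) * rho_min ** (n - k) * (rho_max - rho_min) ** k * emp_moment(xs, k)
          for k in range(n + 1))
print(abs(lhs - rhs))  # zero up to floating-point rounding
```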
In addition, Theorem 6 reduces to Theorem 7 under constant parameters; the stationary property as T → ∞ is also studied in Theorem 7.
By applying the tower property, we derive an interesting result for the conditional mixed moments (1) of process (2). To the best of our knowledge, a formula as simple as that in Theorem 8 has not appeared before. First, the following lemma is needed.

Lemma 2.
Suppose that X_t follows the extended Jacobi process (8) and 0 ≤ s ≤ T_1 ≤ T_2. The conditional mixed moment up to order n_1 + n_2 for n_1, n_2 ∈ Z_0^+ is

E[X_{T_1}^{n_1} X_{T_2}^{n_2} | X_s = x] = Σ_{k=0}^{n_2} P_k^{n_2}(T_2 − T_1) U_E^{n_1+n_2−k}(x, T_1 − s),

where the time-dependent coefficients are provided in (10). In the special case of the Jacobi process (4), the coefficients are defined in (21).
Proof. Using the tower property for 0 ≤ s < T_1 ≤ T_2, the conditional mixed moment of the extended Jacobi process (8) can be expressed as

E[X_{T_1}^{n_1} X_{T_2}^{n_2} | X_s = x] = E[ X_{T_1}^{n_1} E[X_{T_2}^{n_2} | X_{T_1}] | X_s = x ].

After applying Theorem 2 twice, we have the result.

Theorem 8. According to Theorem 6 with 0 ≤ s ≤ T_1 ≤ T_2, the conditional mixed moment of the generalized stochastic correlation process (27) up to order n_1 + n_2 for n_1, n_2 ∈ Z^+ is

E[ρ_{T_1}^{n_1} ρ_{T_2}^{n_2} | ρ_s = ρ] = Σ_{k=0}^{n_1} Σ_{j=0}^{n_2} C(n_1, k) C(n_2, j) ρ_min^{n_1+n_2−k−j} (ρ_max − ρ_min)^{k+j} E[X_{T_1}^k X_{T_2}^j | X_s = x],

where x = (ρ − ρ_min)/(ρ_max − ρ_min) and the conditional mixed moments E[X_{T_1}^k X_{T_2}^j | X_s = x] of the extended Jacobi process (8) are provided in Lemma 2.
Proof. For the case ρ_min = 0, the argument is similar to the proof of Theorem 6; it is not difficult to check and is thus omitted here. For the other case, ρ_min ≠ 0, applying the binomial theorem twice yields the stated double sum over 0 ≤ k ≤ n_1 and 0 ≤ j ≤ n_2, where the analytical formula of the conditional mixed moments E[X_{T_1}^k X_{T_2}^j | X_s = x] is provided in Lemma 2. This completes the proof.
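The tower-property step can be illustrated by simulation for the lowest-order mixed moment of the Jacobi process with the constant parameters used later in the experimental section. Because the drift is linear, the first-moment equation closes, so E[X_{T_2} | X_{T_1}] = µ + (X_{T_1} − µ)e^{−θ(T_2−T_1)}. A rough Monte Carlo sketch (Euler discretization, the clamp to [0, 1], and all names are our own choices):

```python
import math, random

# Check E[X_{T1} X_{T2} | X_s] = E[ X_{T1} * E[X_{T2} | X_{T1}] | X_s ]
# using the exact conditional mean E[X_{T2}|X_{T1}] = mu + (X_{T1}-mu)e^{-theta(T2-T1)}.
random.seed(7)
theta, mu, a = 1.15, 0.59, -0.14      # constants from the experimental section
x0, T1, T2, dt = 0.5, 0.5, 1.0, 0.01
n_paths = 10000

def em_step(x):
    """One Euler-Maruyama step of the Jacobi SDE, clamped to [0, 1]."""
    diffusion = math.sqrt(max(-2.0 * a * theta * x * (1.0 - x), 0.0))
    x = x + theta * (mu - x) * dt + diffusion * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return min(max(x, 0.0), 1.0)

direct, tower = 0.0, 0.0
for _ in range(n_paths):
    x = x0
    for _ in range(round(T1 / dt)):
        x = em_step(x)
    x1 = x                             # simulated X_{T1}
    for _ in range(round((T2 - T1) / dt)):
        x = em_step(x)
    direct += x1 * x                   # direct product estimate
    tower += x1 * (mu + (x1 - mu) * math.exp(-theta * (T2 - T1)))
direct /= n_paths
tower /= n_paths
print(abs(direct - tower))             # both estimate the same mixed moment
```

The two averages agree up to Monte Carlo noise and discretization bias, which illustrates why conditioning on the intermediate time reduces the mixed moment to ordinary conditional moments.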

Remark 3.
Applying the idea in the proofs of Lemma 2 and Theorem 8, the general formula for conditional mixed moments over more than two time points can be obtained directly. The advantage of our formula for the conditional mixed moments is its simple closed form, which can be used in many applications, especially for estimating functions of powers of the observed process, as appear in Sørensen [24], Leonenko and Šuvak [25,26], and Avram et al. [27]. Moreover, in order to study the integrated Jacobi process, the conditional mixed moments need to be evaluated; however, the previously proposed formulas are very complicated (see Forman and Sørensen [4]), whereas our results can be applied easily.
Before finishing this section, we summarize the relationships among the presented formulas in the diagram displayed in Figure 3, which shows the development of the formulas, consisting of ten theorems and two lemmas, categorized according to processes (2), (4), (5) and (8).

Statistical Properties
The conditional variance of the generalized stochastic correlation process (27) can be expressed as

Var[ρ_T | ρ_s = ρ] = U_G^2(ρ, T − s) − (U_G^1(ρ, T − s))²,

where U_G^γ(ρ, T − s) is defined in Theorem 6. Furthermore, the nth moment about the mean, that is, the nth central moment, can be expressed as

µ_n := E[(ρ_T − E[ρ_T | ρ_s = ρ])^n | ρ_s = ρ] = Σ_{k=0}^n C(n, k) (−1)^{n−k} U_G^k(ρ, T − s) (U_G^1(ρ, T − s))^{n−k}.

Well-known instances of the central moments are the zeroth moment µ_0 = 1, the first central moment µ_1 = 0, the second central moment µ_2 = Var[ρ_T | ρ_s = ρ] (the conditional variance), and the third and fourth central moments µ_3 and µ_4, which provide the skewness and kurtosis upon normalization, respectively.
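The conversion from raw moments U_G^k to central moments is itself a binomial identity, independent of the underlying process. The following sketch (names ours) implements it and verifies the n = 2 case against a directly computed variance on a toy sample:

```python
import math

def central_moments(raw, n):
    """Convert raw moments raw[0..n] (with raw[0] = 1) into the n-th central
    moment via mu_n = sum_k C(n,k) (-1)^(n-k) raw[k] * mean^(n-k)."""
    mean = raw[1]
    return sum(math.comb(n, k) * (-1) ** (n - k) * raw[k] * mean ** (n - k)
               for k in range(n + 1))

# Toy sample: empirical raw moments -> 2nd central moment equals the variance.
xs = [0.1, 0.4, 0.5, 0.9]
raw = [sum(x ** k for x in xs) / len(xs) for k in range(5)]
var = sum((x - raw[1]) ** 2 for x in xs) / len(xs)
print(abs(central_moments(raw, 2) - var))  # zero up to rounding
```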
We now move our focus to the conditional covariance and correlation. By applying

Cov[ρ_{T_1}, ρ_{T_2} | ρ_s = ρ] = E[ρ_{T_1} ρ_{T_2} | ρ_s = ρ] − E[ρ_{T_1} | ρ_s = ρ] E[ρ_{T_2} | ρ_s = ρ],

the conditional correlation of the generalized stochastic correlation process (27) is

Corr[ρ_{T_1}, ρ_{T_2} | ρ_s = ρ] = Cov[ρ_{T_1}, ρ_{T_2} | ρ_s = ρ] / √(Var[ρ_{T_1} | ρ_s = ρ] Var[ρ_{T_2} | ρ_s = ρ]).
It should be noted that the analytical formulas for the conditional covariance and correlation can be extended to analytical formulas for Cov[ρ_{T_1}^{n_1}, ρ_{T_2}^{n_2} | ρ_s = ρ] and the corresponding correlation. Several related applications as estimation tools are mentioned in [24–28].

Experimental Validation
As the results proposed in Section 3 are mainly based on the extended Jacobi process (8), this experimental validation section discusses this process first. In this experiment, we applied the Euler–Maruyama (EM) discretization method with MC simulations to process (8). Let X̂_t be a time-discretized approximation of X_t generated by partitioning the time interval [0, T] into N steps, i.e., 0 = t_0 < t_1 < t_2 < . . . < t_N = T. Then, the EM approximation is defined by

X̂_{t_{i+1}} = X̂_{t_i} + θ(t_i)(µ(t_i) − X̂_{t_i})∆t + √(−2a(t_i)θ(t_i) X̂_{t_i}(1 − X̂_{t_i})) √∆t Z_i,  (33)

where the initial value is X̂_{t_0} = X_{t_0}, ∆t = t_{i+1} − t_i is the size of the time step and Z_i is a standard normal random variable. We illustrate the validation of the 1st moment (n = 1) of the formula (15) via the parameters studied by Ardian and Kumral [29] for the evolution of gold prices and interest rates. For the generalized stochastic correlation process (2), their estimated parameters are θ*(t) = 1.15, µ*(t) = 0.17 and σ*(t) = 0.56 with ρ_min = −1 and ρ_max = 1. Thus, for the extended Jacobi process (8), these correspond to θ(t) = 1.15, µ(t) = 0.59 and a(t) = −0.14 for t ∈ [0, τ]; note that these parameters are all constants. This then corresponds to the Jacobi process (4) as well, for which the 1st conditional moment can be computed using formula (24) directly. This work was implemented with the MATLAB libraries available in the GitHub repository https://github.com/TyMathAD/Conditional_Mixed_Moments, accessed on 21 April 2022.
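A compact Python port of the EM scheme (33) for the constant-parameter case (path count reduced, and the clamp to [0, 1] is our implementation choice rather than part of the scheme) can be checked against the exact first conditional moment, which for the linear drift is E[X_τ | X_0 = x] = µ + (x − µ)e^{−θτ}, the n = 1 case of the moment formulas:

```python
import math, random

random.seed(0)
theta, mu, a = 1.15, 0.59, -0.14     # parameters from Ardian and Kumral [29]
x0, tau, dt = 0.3, 1.0, 0.01
n_paths, n_steps = 5000, round(1.0 / 0.01)

total = 0.0
for _ in range(n_paths):
    x = x0
    for _ in range(n_steps):
        diffusion = math.sqrt(max(-2.0 * a * theta * x * (1.0 - x), 0.0))
        x += theta * (mu - x) * dt + diffusion * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)    # keep the discretized path inside [0, 1]
    total += x
mc_mean = total / n_paths
exact = mu + (x0 - mu) * math.exp(-theta * tau)
print(mc_mean, exact)                # the two values should be close
```

As in the paper's experiment, the MC estimate approaches the analytical value as the number of paths grows, at a much higher computational cost.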
To test the efficiency of the 1st moment U_J^1(x, τ), we compared the obtained results with MC simulations at various points (x, τ), where x, τ ∈ {0.1, 0.2, 0.3, . . . , 1}. These simulations were examined with the time step ∆t = 0.0001 and sample paths varied over 100, 1000, and 10,000, as depicted in Figure 4, which contains contour plots of the absolute errors between our formula and the MC simulations. From Figure 4, we can see that the contour colors tend toward the dark blue shade for larger path numbers, meaning that the absolute errors approach zero. Figure 4a–c produce average absolute errors equal to 1.77 × 10⁻², 5.67 × 10⁻³ and 6.42 × 10⁻⁴, respectively. Hence, the MC simulations converge to our formula. In this validation, the results of our formula and of the MC simulations based on the EM method (33) were computed in MATLAB R2021a on a laptop computer configured as follows: Intel(R) Core(TM) i7-5700HQ, CPU @2.70 GHz, 16.0 GB RAM, Windows 10, 64-bit operating system. The computational run time of our analytical formula is around 0.0145 s, while the MC simulations consume run times of 1.43, 4.32, and 40.21 s for 100, 1000, and 10,000 sample paths, respectively. Thus, the MC simulations are tremendously more expensive than our formula, especially for large path numbers. Notably, even the MC simulation with just 100 paths spent almost 100 times as much computing time as our formula. Hence, for a more accurate result, the use of MC simulations may not be a good choice in terms of computing time. In contrast, the proposed formula is independent of any discretization and has a very low computational cost. Therefore, the formulas presented here are efficient and suitable for practical use.
Moreover, we used the above parameters to compute the 1st and 2nd conditional moments, U_G^1(ρ, τ) and U_G^2(ρ, τ), in order to model the correlation between gold prices and interest rates. We computed these moments using the presented formula (28) at different values (ρ, τ) ∈ [−1, 1] × [0, 1.5]. The obtained results are demonstrated by the surface plots in Figure 5. In addition, we plotted the graphical contours of the 1st and 2nd conditional moments for (ρ, τ) ∈ [−1, 1] × [0, 5]. As τ increases, the obtained results converge to a certain value for both moments; this can be seen in Figure 6, where the contour colors tend toward a light blue shade with an approximate value of 0.1. Using Theorem 7, it is confirmed that as τ → ∞ the 1st and 2nd conditional moments approach 0.17 and 0.15, respectively, corresponding to Figure 6.
Note that one primary concern with our proposed formula in Theorem 1 is that the coefficients P_k^γ(τ) for k ∈ Z_0^+ in (10) may not be integrable in closed form. Thus, numerical integration methods are needed to manipulate the integral terms, such as the trapezoidal rule, Simpson's rule, etc. One efficient method that we suggest for handling these integral terms is the Chebyshev integration method provided by Boonklurb et al. [30–33], which provides higher accuracy than other integration methods under the same discretization.

Method of Moments Estimator
In certain cases, the MM is superseded by Fisher's method when estimating the parameters of a known family of probability distributions, as the MLEs have a higher probability of being close to the quantities to be estimated. However, in certain cases, such as the gamma and beta distributions, the MLEs may be intractable without computer programming. In such cases, estimation by the MM can be used as a first approximation to the solutions of the MLEs; the MM and the method of MLEs are symbiotic in this respect.
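As a concrete instance of the beta-distribution case mentioned above (a sketch with our own names; this is the standard textbook MM for Beta(p, q), not the paper's estimator), matching the sample mean m̄ and variance v̄ gives p̂ = m̄·c and q̂ = (1 − m̄)·c with c = m̄(1 − m̄)/v̄ − 1:

```python
import random

def beta_mm(sample):
    """Method-of-moments estimates (p, q) for a Beta(p, q) sample."""
    n = len(sample)
    m = sum(sample) / n                          # sample mean
    v = sum((x - m) ** 2 for x in sample) / n    # sample variance
    c = m * (1.0 - m) / v - 1.0
    return m * c, (1.0 - m) * c

random.seed(42)
sample = [random.betavariate(2.0, 5.0) for _ in range(100000)]
p_hat, q_hat = beta_mm(sample)
print(p_hat, q_hat)  # should land near the true shape parameters (2, 5)
```

Such MM estimates are often used to initialize an iterative MLE search, which is exactly the symbiosis described above.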
The key idea of the MM is to calibrate a suitable set of parameter values based on appropriate conditional moments. In this section, suppose that we need to calibrate an unknown parameter vector θ = (θ*, µ*, σ*) ∈ R³ of the generalized stochastic correlation process (2), whose true value is the vector θ_0, from discretely observed data {ρ_{t_i}}_{1≤i≤n}, where t_{i−1} < t_i for all i ∈ {1, 2, 3, . . . , n}. Normally, the basic conditional moments selected for calibration may be the first three conditional moments of the form provided in Theorem 6, which suffice to solve for the unknown vector θ; however, in 2004, Gouriéroux and Valéry [3] suggested choosing those conditional moments that match the characteristics of the observed data of interest.
They further determined moments that are adequately informative, such as the 1st, 2nd, 3rd and 4th conditional moments (related to skewness and kurtosis) together with the mixed moments E[ρ_{t_i} ρ_{t_{i−1}}² | ρ_{t_0} = ρ] and E[ρ_{t_i}² ρ_{t_{i−1}}² | ρ_{t_0} = ρ], which capture the dynamics of the risk premium and the possible volatility persistence, respectively. Their set of conditional moments selected for implementing the MM consists of the conditional moments and mixed moments above, which are provided by Theorems 6 and 8, respectively. In order to estimate the parameters, we suppose that the conditional expectations E[f(ρ_{t_i}, θ) | F_t] exist as real numbers for all i ∈ {1, 2, 3, . . . , n} and satisfy E[f(ρ_{t_i}, θ_0) | F_t] = 0, and we let f_n(θ) denote the corresponding sample moment conditions. The MM estimator of θ_0 based on the conditional expectations E[f(ρ_{t_i}, θ) | F_t] is the solution to the system of equations f_n(θ) = 0. If we cannot solve exactly for θ, a good estimate of the true value θ_0, called θ̂, is needed; in other words, we need a θ̂ that makes f_n(θ̂) close to 0; see more details in [34]. In any event, the algorithm that we suggest would use Newton's method or other iterative methods to solve the system of nonlinear equations f_n(θ̂) = 0.
It should be noted that in certain cases, infrequent with large sample sizes and less infrequent with small ones, the estimates provided by the MM are not suitable: they may fall outside the parameter space, in which case it does not make sense to rely on them. Regarding the properties of the MM and its generalized version, under sufficient conditions the estimators are consistent and asymptotically normally distributed; see [34,35] for more details.

Conclusions
Without knowledge of the transition PDF, this paper presents a simple and novel approach for obtaining analytical formulas for the conditional moments and mixed moments of the extended Jacobi process. These analytical formulas take concise forms for the Jacobi process. In addition, analytical formulas for the unconditional moments are provided. By applying Itô's lemma to the extended Jacobi process, we obtain the generalized stochastic correlation process, and analytical formulas for its conditional moments are proposed. Statistical properties, namely, the conditional variance, central moments, covariance, and correlation, are formulated. Our formulas are validated by comparing their results with MC simulations. Our results can be used to find the parameters of correlation processes between financial product prices, and provide additional support for those who require statistical tools to study data governed by generalized stochastic correlation processes. Finally, a tool for estimating parameters based on the calculation of moments is provided as well.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.
Acknowledgments: This research project is supported by Second Century Fund (C2F), Chulalongkorn University. We are grateful for a variety of valuable suggestions from the anonymous referees which have substantially improved the quality and presentation of the results.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

EM   Euler–Maruyama
MC   Monte Carlo
MLE  maximum likelihood estimator
MM   method of moments
PDE  partial differential equation
PDF  probability density function
SDE  stochastic differential equation