1. Introduction
The correlation matrices $C \in \mathbb{R}^{n \times n}$ are real, symmetric, and positive semi-definite, with $\Lambda(C) \subseteq [0, \infty)$ being the spectrum of $C$, and have a unit diagonal, $c_{ii} = 1$ for $i = 1, \dots, n$. These correlation matrices appear when one is interested in computing correlations between pairs of random variables. For example, in the finance industry, correlations between stocks are measured over a fixed period of time, and missing data can compromise the correlations and hence lead to a matrix that is not positive semi-definite. Similarly, a practitioner may explore the effect on a portfolio of assigning correlations between assets that differ from the historical values, which can easily destroy the positive semi-definiteness of the matrix (or matrices); for more details, refer to [1,2,3].
The approximation of correlation matrices has attracted much attention and interest in finding a nearest correlation matrix $X$ that minimizes the Frobenius norm of $A - X$ in the matrix nearness problem

        $$\min \left\{ \|A - X\|_F \;:\; X = X^T, \; X \succeq 0, \; x_{ii} = 1, \; i = 1, \dots, n \right\},$$

where $X = X^T$ means that $X$ belongs to the symmetric matrices, the quantity $X \succeq 0$ is a positive semi-definite matrix, and $\|\cdot\|_F$ is the norm induced by the inner product $\langle A, B \rangle = \operatorname{trace}(A^T B)$ on the Hilbert space of symmetric $n \times n$ matrices. The constraint sets in the above minimization problem are closed and convex, so the matrix nearness problem has a unique solution [4].
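To make the problem concrete, the objective and the feasibility of a candidate $X$ can be checked numerically. The following is a minimal sketch (assuming NumPy; the function names and the tolerance are our own choices, not part of the formulation):

    import numpy as np

    def nearness_objective(A, X):
        # Frobenius-norm distance ||A - X||_F between the data and a candidate.
        return np.linalg.norm(A - X, 'fro')

    def is_feasible(X, tol=1e-10):
        # Feasibility for the matrix nearness problem: symmetric,
        # unit diagonal, and positive semi-definite up to a tolerance.
        symmetric = np.allclose(X, X.T, atol=tol)
        unit_diag = np.allclose(np.diag(X), 1.0, atol=tol)
        psd = np.linalg.eigvalsh((X + X.T) / 2).min() >= -tol
        return symmetric and unit_diag and psd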
Matrix nearness problems have been studied extensively over the last thirty-two years. Much of the literature concerns ad hoc methods that are not guaranteed to give the best solution to the problem. The method given by Knol and ten Berge [5] decomposes the matrix $X$ and then minimizes the objective function over its columns one at a time. Lurie and Goldberg [6] minimize the quantity $\|A - R^T R\|_F$, with $R$ an upper triangular matrix whose columns have unit 2-norm, by using the Gauss–Newton method. In [7], Higham proposed an alternating projections algorithm based on convex analysis, which converges linearly to the solution of the matrix nearness problem.
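For later comparison, the alternating projections idea of [7] admits a compact implementation; this is a minimal sketch in the spirit of Higham's algorithm (with Dykstra-type correction), where the iteration count is our own choice:

    import numpy as np

    def nearest_correlation_ap(A, iters=100):
        # Alternating projections between the PSD cone and the
        # unit-diagonal set, with Dykstra's correction, cf. [7].
        Y = (A + A.T) / 2                 # work with the symmetric part
        dS = np.zeros_like(Y)             # Dykstra correction term
        for _ in range(iters):
            R = Y - dS
            w, V = np.linalg.eigh(R)
            X = (V * np.maximum(w, 0)) @ V.T   # project onto the PSD cone
            dS = X - R
            Y = X.copy()
            np.fill_diagonal(Y, 1.0)           # restore the unit diagonal
        return Y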
A more general matrix nearness problem than the one above was studied by Malick in [8], where a convex set and general linear constraints replace positive semi-definiteness and the unit diagonal. Malick applied a quasi-Newton method to the dual problem obtained after dualizing the linear constraints. A quadratically and globally convergent Newton method for solving the matrix nearness problem was studied by Qi and Sun [9], who proposed a dual of the matrix nearness problem and then computed its solution. The numerical method developed by Toh [10] demands the solution of large dense linear systems, the mathematical construction of preconditioners, and their application in order to compute the nearest correlation matrix $X$ solving the matrix nearness problem.
In this work, we propose a low-rank ODE-based method to compute a nearest correlation matrix $X$ for a given $n$-dimensional matrix $A$. Our method works in two ways. First, it allows us to shift the smallest negative eigenvalue $\lambda_{\min}$ of the perturbed matrix $A + \epsilon E$ so that it becomes non-negative, that is, $\lambda_{\min}(A + \epsilon E) \geq 0$. We compute the perturbation matrix $E$, with zero diagonal and $\|E\|_F \leq 1$, by constructing and then solving an associated optimization problem. Second, the construction and solution of the optimization problem allow us to shift all negative eigenvalues of the given matrix $A$ so that its spectrum becomes non-negative, that is, $\Lambda(A + \epsilon E) \subseteq [0, \infty)$.
  Overview of the Article
In Section 2, we provide the preliminaries of our article and give the definitions of a symmetric matrix, a positive semi-definite matrix, a correlation matrix, and a nearest correlation matrix. Section 3 is devoted to the problem under consideration; we briefly discuss the aim of the matrix nearness problem. Section 4 is dedicated to the computation of a gradient system of ordinary differential equations to localize the smallest negative eigenvalue $\lambda_{\min}$ in the spectrum of an admissible perturbed matrix $A + \epsilon E$ for given $\epsilon > 0$. The simultaneous localization of several negative eigenvalues in the spectrum of the admissibly perturbed matrix is addressed in Section 5. Finally, numerical experimentation and conclusions are presented in Section 6 and Section 7, respectively.
  2. Preliminaries
Definition 1. A matrix $A \in \mathbb{R}^{n \times n}$ is called symmetric if $A = A^T$.

Definition 2. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is called positive semi-definite if $x^T A x \geq 0$ for all $x \in \mathbb{R}^n$, or equivalently, if $\Lambda(A) \subseteq [0, \infty)$, where $\Lambda(A)$ denotes the spectrum of the matrix $A$.

Definition 3. A square symmetric matrix $C$ is called a correlation matrix if its $(i,j)$-entry is the correlation between the columns $i$ and $j$ of a data matrix $A$; such a matrix has a unit diagonal and is positive semi-definite.

Definition 4. A matrix $X$ is called a nearest correlation matrix for $A$ if $X$ is a correlation matrix and $\|A - X\|_F \leq \|A - Y\|_F$ for every correlation matrix $Y$.
   3. Matrix Nearness Problem
The matrix nearness problem aims at the computation of a matrix $X$ that is symmetric, positive semi-definite, has a unit diagonal, and has off-diagonal entries between $-1$ and $1$, for a given matrix $A \in \mathbb{R}^{n \times n}$. The matrix $A$ need not have its entries in the interval $[-1, 1]$; in fact, it can have arbitrary real entries. It is possible that all the eigenvalues of $A$ are negative, or that some of them are negative while others are positive. The negative eigenvalues are the basic obstacle to $A$ being a correlation matrix. Moreover, one must remain careful about the structure: the computed $X$ must have all the properties defined in the preliminaries section. For this purpose, we aim to compute a perturbation matrix $E$ having zero diagonal and a Frobenius norm bounded above by 1. In the next section, we present our methodology for computing the matrix $E$, which allows us to shift all negative eigenvalues of the given $A$ to make them non-negative.
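As a small illustration of this setting (the function names are our own), one can inspect the negative part of the spectrum of a given $A$ and enforce the required structure on a candidate perturbation $E$:

    import numpy as np

    def negative_part_of_spectrum(A):
        # Eigenvalues of the symmetric part of A that obstruct
        # positive semi-definiteness.
        w = np.linalg.eigvalsh((A + A.T) / 2)
        return w[w < 0]

    def admissible(E):
        # Enforce the structure of the perturbation matrix:
        # symmetric, zero diagonal, Frobenius norm at most 1.
        E = (E + E.T) / 2
        np.fill_diagonal(E, 0.0)
        return E / max(np.linalg.norm(E, 'fro'), 1.0)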
4. Localization of the Smallest Negative Eigenvalue $\lambda_{\min}$

In this section, we aim to localize the smallest negative eigenvalue $\lambda_{\min}$ of a given matrix $A$. This involves shifting the smallest negative eigenvalue of the perturbed matrix $A + \epsilon E$, with $\epsilon$ a small positive parameter. The matrix $E$ carries the structure fixed in Section 3; that is,

        $$E = E^T, \qquad e_{ii} = 0 \ \ (i = 1, \dots, n), \qquad \|E\|_F \leq 1.$$
  4.1. Construction of Perturbation Matrix
We compute the perturbation matrix $E(t)$ and then determine the direction $Z = \dot{E}(t)$. The computation of $\dot{E}(t)$ indicates how fast the smallest negative eigenvalue $\lambda(t)$ grows, that is, it gives the derivative $\dot{\lambda}(t)$. We need to compute the perturbed matrix $A + \epsilon E(t)$ with $\|E(t)\|_F \leq 1$. For this purpose, we make use of the eigenvalue problem

        $$(A + \epsilon E(t))\, x(t) = \lambda(t)\, x(t), \tag{1}$$

with $x(t)$ being an eigenvector for the smallest negative eigenvalue $\lambda(t)$. Furthermore, the eigenvector is normalized,

        $$x(t)^* x(t) = 1. \tag{2}$$

By differentiating Equation (1) w.r.t. $t$, we have

        $$\epsilon \dot{E}(t)\, x(t) + (A + \epsilon E(t))\, \dot{x}(t) = \dot{\lambda}(t)\, x(t) + \lambda(t)\, \dot{x}(t).$$

Multiplying both sides with $x(t)^*$ gives us

        $$\epsilon\, x^* \dot{E} x + x^* (A + \epsilon E)\, \dot{x} = \dot{\lambda}\, x^* x + \lambda\, x^* \dot{x}.$$

This further implies, since $x^* (A + \epsilon E) = \lambda x^*$ by the symmetry of $A + \epsilon E$, that

        $$\epsilon\, x^* \dot{E} x + \lambda\, x^* \dot{x} = \dot{\lambda}\, x^* x + \lambda\, x^* \dot{x}. \tag{3}$$

Using Equation (2), we have that $x^* x = 1$, while the terms $\lambda\, x^* \dot{x}$ cancel. Thus, Equation (3) becomes

        $$\dot{\lambda}(t) = \epsilon\, x(t)^* \dot{E}(t)\, x(t). \tag{4}$$
We take $Z = \dot{E}(t)$ in Equation (4), and maximizing the growth $\dot{\lambda}(t)$ over all admissible directions results in the optimization problem formulated in the next subsection; Equation (4) itself can be checked numerically, as sketched below.
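The sketch below verifies Equation (4) with a finite difference; it assumes a simple (non-repeated) smallest eigenvalue and uses randomly generated data of our own:

    import numpy as np

    rng = np.random.default_rng(0)
    n, eps, h = 6, 0.5, 1e-7

    def sym_zero_diag(M):
        # Symmetrize and remove the diagonal, as required of E and Z.
        M = (M + M.T) / 2
        np.fill_diagonal(M, 0.0)
        return M

    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    E = sym_zero_diag(rng.standard_normal((n, n)))
    Z = sym_zero_diag(rng.standard_normal((n, n)))   # a candidate direction E'(t)

    lam = lambda M: np.linalg.eigvalsh(M)[0]         # smallest eigenvalue
    x = np.linalg.eigh(A + eps * E)[1][:, 0]         # its unit eigenvector

    analytic = eps * x @ Z @ x                       # Equation (4)
    numeric = (lam(A + eps * (E + h * Z)) - lam(A + eps * E)) / h
    print(analytic, numeric)                         # agree up to O(h)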
  4.2. Formulation of Optimization Problem
The optimization problem given below allows one to determine the direction $Z = \dot{E}(t)$ so that the solution of the system of ODEs indicates the maximum growth of the smallest negative eigenvalue $\lambda(t)$:

        $$Z^* = \arg\max \left\{ x(t)^* Z\, x(t) \;:\; Z = Z^T, \; z_{ii} = 0, \; \|Z\|_F = 1 \right\}, \tag{5}$$

with $x(t)$ the eigenvector corresponding to the smallest negative eigenvalue $\lambda(t)$. The notation $*$ represents the complex conjugate transpose. The solution of the maximization problem addressed in Equation (5) is given as follows.
  4.3. Lemma 4.2.1.
Consider that $E(t)$ is a non-zero matrix defining an admissible perturbed matrix

        $$A + \epsilon E(t).$$

Let $x(t)$ be a non-zero eigenvector associated with the smallest negative eigenvalue $\lambda(t)$. Then the solution $Z^*$ of the maximization problem in Equation (5) is of the form

        $$\mu Z^* = P\!\left(x(t)\, x(t)^*\right), \tag{6}$$

having $\mu > 0$ as a normalization constant and $P$ as the orthogonal projection of a direction matrix $Z$ onto the manifold of symmetric matrices with zero diagonal.
Proof.  The proof is based on the idea of computing the orthogonal projection onto the manifold of matrices; for this, we refer to [11]. □
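A short sketch of the objects in the lemma, assuming the manifold in question is that of symmetric matrices with zero diagonal fixed in Section 3 (function names are ours):

    import numpy as np

    def P(M):
        # Orthogonal projection onto symmetric matrices with zero diagonal.
        M = (M + M.T) / 2
        np.fill_diagonal(M, 0.0)
        return M

    def optimal_direction(x):
        # Equation (6): the maximizer is the normalized projection of the
        # rank-one matrix x x^T built from the chosen eigenvector.
        Z = P(np.outer(x, x))
        return Z / np.linalg.norm(Z, 'fro')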
   4.4. A Gradient System of ODE’s
The solution matrix

        $$\mu Z^*(t) = P\!\left(x(t)\, x(t)^*\right)$$

of the maximization problem addressed in Equation (5) yields the following gradient system of ODEs on the manifold of symmetric matrices with zero diagonal:

        $$\dot{E}(t) = P\!\left(x(t)\, x(t)^*\right) - \left\langle E(t),\, P\!\left(x(t)\, x(t)^*\right) \right\rangle E(t). \tag{7}$$
  4.5. Characterization of Gradient System of ODE’s
The solution of the gradient system of ODEs addressed in Equation (7) possesses the following characteristic properties: the Frobenius norm $\|E(t)\|_F$ is conserved along solutions, and the smallest eigenvalue $\lambda(t)$ of $A + \epsilon E(t)$ is non-decreasing along solution trajectories, so that the stationary points of the flow are the maximizers characterized in the lemma above.
5. Localization of $\lambda_1$ and $\lambda_2$

Our goal in this section is to move simultaneously the smallest negative eigenvalue $\lambda_1$, and the negative eigenvalue $\lambda_2$ next to it, out of the negative part of the spectrum of the perturbed matrix $A + \epsilon E(t)$, so that $\lambda_i$, $i = 1, 2$, become strictly positive.
  5.1. Optimization Problem
The following maximization problem allows one to determine the direction matrix $Z = \dot{E}(t)$ so that the solution of the associated gradient system of ODEs indicates maximum growth for $\lambda_1(t)$ and $\lambda_2(t)$, respectively. The maximization problem, designed to achieve a simultaneous maximum growth of both $\lambda_1$ and $\lambda_2$, is given as

        $$Z^* = \arg\max \left\{ x_1(t)^* Z\, x_1(t) + x_2(t)^* Z\, x_2(t) \;:\; Z = Z^T, \; z_{ii} = 0, \; \|Z\|_F = 1 \right\}, \tag{8}$$

with $x_1(t)$ and $x_2(t)$ eigenvectors associated with $\lambda_1(t)$ and $\lambda_2(t)$. Now, our aim is to give the solution corresponding to the maximization problem addressed in Equation (8); a direction of the form sketched below serves as a natural candidate.
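By analogy with the single-eigenvalue case, a candidate ascent direction for Equation (8) combines the projected rank-one matrices of both eigenvectors; a sketch under the assumption that $\lambda_1$ and $\lambda_2$ are simple:

    import numpy as np

    def combined_direction(x1, x2):
        # Normalized projection of x1 x1^T + x2 x2^T onto symmetric
        # matrices with zero diagonal (cf. Equation (8)).
        Z = np.outer(x1, x1) + np.outer(x2, x2)
        Z = (Z + Z.T) / 2
        np.fill_diagonal(Z, 0.0)
        return Z / np.linalg.norm(Z, 'fro')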
  5.2. A Gradient System of ODE’s
The optimal solution to the maximization problem addressed in Equation (8) is given by the gradient system of ODEs as
        
The gradient system of ODEs addressed in Equation (9) can be written as a function with

The choice of the parameter could be sufficiently large so that

For the first parameter, we have the corresponding relation; next, we fix it, and for the second parameter, we obtain

The admissible perturbation matrix $E(t)$ takes the form

The initial perturbation matrix $E(0)$ is given as

The initial perturbation matrix $E(0)$ is decomposed into $U$ and $L$, the upper and lower triangular matrices, respectively, as
        
In a similar manner, the perturbation matrix $E(t)$ has the structure
        
        and
        
The maximization of the eigenvalues $\lambda_1$ and $\lambda_2$ demands the computation of the perturbation matrix $E(t)$ and the projection of $Z$ onto the manifold of the family of matrices. The change in the admissible perturbation matrix $E(t)$ is given as
        
Both quantities are computed as
        
This computation causes the maximum growth in the maximization problem as
        
The solution to the maximization problem addressed in Equation (12) is given by an ODE as
        
        having
        
        and has the form
        
The matrix is decomposed into an upper triangular matrix $U$ and a lower triangular matrix $L$ as
        
Remark 1. Since the perturbation matrix is symmetric with zero diagonal, its upper triangular part determines it completely. The optimal solution to the maximization problem addressed in Equation (8) has the form given above. The solution addressed in Equation (13) is determined with the help of Euler's method. Finally, the eigenvalue problem computes the non-negative spectrum of the perturbed system and shows that the matrix $X$ is a nearest correlation matrix, as it is symmetric, positive semi-definite, has a unit diagonal, and has off-diagonal entries lying in the interval $[-1, 1]$.
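A minimal sketch of the overall iteration suggested by the remark: Euler steps on the gradient system, renormalization to keep the perturbation admissible, and a stopping test on the smallest eigenvalue of $A + \epsilon E(t)$. The step size, iteration bound, and the fixed $\epsilon$ below are assumptions of ours:

    import numpy as np

    def shift_spectrum(A, eps=1.0, h=0.05, steps=1000):
        # Euler integration of a gradient flow of the form in Equation (7).
        n = A.shape[0]
        E = np.zeros((n, n))
        for _ in range(steps):
            w, V = np.linalg.eigh(A + eps * E)
            if w[0] >= 0:
                break                           # non-negative spectrum reached
            x = V[:, 0]
            G = np.outer(x, x)
            np.fill_diagonal(G, 0.0)            # projection P(x x^T)
            E = E + h * (G - np.sum(E * G) * E) # Euler step
            nrm = np.linalg.norm(E, 'fro')
            if nrm > 1.0:
                E /= nrm                        # keep ||E||_F <= 1
        return A + eps * E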
6. Numerical Experimentation

This section presents numerical experimentation on the computation of nearest correlation matrices for a given matrix $A$. For simplicity, we take .
Example 1. We consider a ten-dimensional matrix $A$ whose spectrum clearly contains negative eigenvalues. The perturbation matrix $E$ has a zero diagonal and helps us to shift the negative eigenvalues of the perturbed matrix $A + \epsilon E$. The perturbation matrix $E$ is computed accordingly. We compute the perturbed matrix, which alters the eigenvalues of the given matrix $A$; at the initial stage, this matrix also has three negative eigenvalues, but different from those corresponding to $A$, and it is clearly not a nearest correlation matrix. The next perturbed matrix is also not a nearest correlation matrix, but it has shifted the eigenvalues such that only one eigenvalue of the spectrum remains negative. The final matrix $X$ is a nearest correlation matrix, as not only does it have a unit diagonal, but it is also symmetric and positive semi-definite, and its off-diagonal entries lie within the interval $[-1, 1]$. The matrix $X$ and its positive spectrum are given as follows. The positive spectrum of the nearest correlation matrix is {0.061, 0.936, 1.004, 1.025, 1.067, 1.084, 1.131, 1.153, 1.217, 1.319}.
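The stated properties of the computed matrix can be confirmed programmatically; a usage sketch, with X standing for the nearest correlation matrix reported above:

    import numpy as np

    def confirm_correlation_matrix(X, tol=1e-8):
        # Unit diagonal, symmetry, entries in [-1, 1], and a
        # non-negative spectrum, as required of a correlation matrix.
        assert np.allclose(np.diag(X), 1.0, atol=tol)
        assert np.allclose(X, X.T, atol=tol)
        assert np.all(np.abs(X) <= 1 + tol)
        assert np.linalg.eigvalsh(X).min() >= -tol
        return True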
Example 2. We consider an eight-dimensional matrix $A$. The perturbation matrix $E$ has a zero diagonal and helps us to shift the negative eigenvalues of the perturbed matrix $A + \epsilon E$. The perturbation matrix $E$ is computed accordingly. We compute the perturbed matrix, which alters the eigenvalues of the given matrix $A$; at the initial stage, this matrix also has three negative eigenvalues, but different from those corresponding to $A$, and it is clearly not a nearest correlation matrix. The next perturbed matrix is also not a nearest correlation matrix, but it has shifted the eigenvalues such that two eigenvalues of the spectrum remain negative. The final matrix $X$ is a nearest correlation matrix, as not only does it have a unit diagonal, but it is also symmetric and positive semi-definite, and its off-diagonal entries lie within the interval $[-1, 1]$. The matrix $X$ and its positive spectrum are given as follows.
   7. Conclusions
In this article, we have presented a low-rank ordinary differential equation-based technique to compute the nearest correlation matrix, that is, a matrix that is symmetric, positive semi-definite, has a unit diagonal, and has off-diagonal entries between $-1$ and $1$, for a given $n$-dimensional real-valued matrix. The proposed methodology is based on transforming the negative spectrum of the original matrix, which involves the computation of a perturbation level and an admissible perturbation having zero diagonal. The numerical experimentation delivers the desired perturbations for the computation of the nearest correlation matrix for randomly chosen real-valued matrices.