
*Symmetry* **2019**, *11*(10), 1311; https://doi.org/10.3390/sym11101311

Article

Structure Learning of Gaussian Markov Random Fields with False Discovery Rate Control

^{1} Computer Science, Hanyang University ERICA, Ansan 15588, Korea

^{2} Department of Mathematics, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland

^{3} Institute of Mathematics, University of Wroclaw, 50-384 Wroclaw, Poland

^{*} Author to whom correspondence should be addressed.

Received: 11 September 2019 / Accepted: 15 October 2019 / Published: 18 October 2019

## Abstract

In this paper, we propose a new estimation procedure for discovering the structure of Gaussian Markov random fields (MRFs) with false discovery rate (FDR) control, making use of the sorted ${\ell}_{1}$-norm (SL1) regularization. A Gaussian MRF is an undirected graph representing a multivariate Gaussian distribution, where nodes are random variables and edges represent the conditional dependence between the connected nodes. Since it is possible to learn the edge structure of Gaussian MRFs directly from data, Gaussian MRFs provide an excellent way to understand complex data by revealing the dependence structure among many input features, such as genes, sensors, users and documents. In learning the graphical structure of Gaussian MRFs, we wish to discover the actual edges of the underlying but unknown probabilistic graphical model; this becomes more complicated as the number of random variables (features) p increases relative to the number of data points n. In particular, when $p\gg n$, it is statistically unavoidable for any estimation procedure to include false edges. Therefore, there have been many attempts to reduce the false detection of edges, in particular using different types of regularization on the learning parameters. Our method makes use of the SL1 regularization, introduced recently for model selection in linear regression. We focus on one particular benefit of SL1 regularization: it can be used to control the FDR of detecting important random variables. Adapting SL1 to probabilistic graphical models, we show that SL1 can be used for the structure learning of Gaussian MRFs through our suggested procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the FDR of detecting edges.

Keywords: Gaussian Markov random field; inverse covariance matrix estimation; FDR control

## 1. Introduction

Estimation of the graphical structure of Gaussian Markov random fields (MRFs) has been a topic of active research in machine learning, data analysis and statistics. The reason is that they provide an efficient means for representing complex statistical relations among many variables in the form of a simple undirected graph, disclosing new insights about the interactions of genes, users, news articles and operational parts of a human driver, to name a few.

One mainstream of the research is to estimate the structure by maximum likelihood estimation (MLE), penalizing the ${\ell}_{1}$-norm of the learning parameters. In this framework, structure learning of a Gaussian MRF is equivalent to finding a sparse inverse covariance matrix of a multivariate Gaussian distribution. To formally describe the connection, let us consider n samples ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ of p jointly Gaussian random variables following $\mathcal{N}(0,{\Sigma}_{p\times p})$, where the mean is zero without loss of generality and ${\Sigma}_{p\times p}$ is the covariance matrix. The estimation is essentially the MLE of the inverse covariance matrix $\mathsf{\Theta}:={\Sigma}^{-1}$ under an ${\ell}_{1}$-norm penalty, which can be stated as a convex optimization problem [1]:

$$\widehat{\mathsf{\Theta}}=\underset{\mathsf{\Theta}\in {\mathbb{R}}^{p\times p},{\mathsf{\Theta}}^{\prime}=\mathsf{\Theta}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}-logdet\mathsf{\Theta}+\mathrm{tr}\left(S\mathsf{\Theta}\right)+\lambda {\parallel \mathsf{\Theta}\parallel}_{1}$$

Here, $S:=\frac{1}{n}{\sum}_{i=1}^{n}{x}_{i}{x}_{i}^{\prime}$ is the sample covariance matrix, ${\parallel \mathsf{\Theta}\parallel}_{1}:={\sum}_{i,j}|{\mathsf{\Theta}}_{ij}|$ and $\lambda >0$ is a tuning parameter that determines the element-wise sparsity of $\widehat{\mathsf{\Theta}}$.
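To make the objective in (1) concrete, here is a minimal NumPy sketch (our own illustration; `glasso_objective` is a name we introduce, not part of any package) that evaluates the penalized negative log-likelihood for a candidate $\mathsf{\Theta}$:

```python
import numpy as np

def glasso_objective(Theta, S, lam):
    """Evaluate -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1 from (1)."""
    sign, logdet = np.linalg.slogdet(Theta)   # stable log-determinant
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()
```

For example, with $S=\mathsf{\Theta}={I}_{2}$ and $\lambda =0$ the value is $\mathrm{tr}(S\mathsf{\Theta})=2$; increasing $\lambda $ adds the element-wise ${\ell}_{1}$ penalty on top.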

The ${\ell}_{1}$-regularized MLE approach (1) has been addressed quite extensively in the literature. The convex optimization formulation was first discussed in References [2,3]. A block-coordinate descent type algorithm was developed in Reference [4], revealing that the sub-problems of (1) can be solved in the form of LASSO regression problems [5]. More efficient solvers have been developed to deal with high-dimensional cases [6,7,8,9,10,11,12,13]. On the theoretical side, we are interested in two aspects: one is the estimation quality of $\widehat{\mathsf{\Theta}}$ and the other is the variable selection quality of $\widehat{\mathsf{\Theta}}$. These aspects are still under investigation in theory and experiments [1,3,14,15,16,17,18,19,20], as new analyses become available for the closely related ${\ell}_{1}$-penalized LASSO regression in vector spaces.

Among these, our method inherits the spirit of References [14,19] in particular, where the authors considered the problem (1) in terms of a collection of local regression problems defined for each random variable.

#### LASSO and SLOPE

Under a linear data model $b=A\beta +\epsilon $ with a data matrix $A\in {\mathbb{R}}^{n\times p}$ and $\epsilon \sim \mathcal{N}(0,{\sigma}^{2}{I}_{n})$, the ${\ell}_{1}$-penalized estimation of $\beta $ is known as the LASSO regression [5], whose estimate is given by solving the following convex optimization problem:

$$\widehat{\beta}=\underset{\beta \in {\mathbb{R}}^{p}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{2}{\parallel b-A\beta \parallel}_{2}^{2}+\lambda {\parallel \beta \parallel}_{1}$$

where $\lambda >0$ is a tuning parameter which determines the sparsity of the estimate $\widehat{\beta}$. Two important statistical properties of $\widehat{\beta}$ are (i) the distance between the estimate $\widehat{\beta}$ and the population parameter vector $\beta $ and (ii) the detection of the non-zero locations of $\beta $ by $\widehat{\beta}$. We refer to the former as the estimation error and to the latter as the model selection error. Both types of error depend on the choice of the tuning parameter $\lambda $. Regarding model selection, it is known that the LASSO regression can control the family-wise error rate (FWER) at level $\alpha \in [0,1]$ by choosing $\lambda =\sigma {\mathsf{\Phi}}^{-1}(1-\alpha /2p)$ [21]. The FWER is essentially the probability of including at least one entry as non-zero in $\widehat{\beta}$ which is zero in $\beta $. In high-dimensional cases where $p\gg n$, controlling the FWER is quite restrictive, since some false positive detection of nonzero entries is unavoidable. As a result, FWER control can lead to weak detection power for nonzero entries.

The SLOPE is an alternative procedure for the estimation of $\beta $, using the sorted ${\ell}_{1}$-norm penalty instead. The SLOPE solves a modified convex optimization problem,

$$\widehat{\beta}=\underset{\beta \in {\mathbb{R}}^{p}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{2}{\parallel b-A\beta \parallel}_{2}^{2}+{J}_{\lambda}\left(\beta \right)$$

where ${J}_{\lambda}(\cdot)$ is the sorted ${\ell}_{1}$-norm defined by

$${J}_{\lambda}\left(\beta \right):=\sum _{i=1}^{p}{\lambda}_{i}{|\beta |}_{\left(i\right)}$$

where ${\lambda}_{1}>{\lambda}_{2}>\cdots >{\lambda}_{p}>0$ and ${|\beta |}_{\left(k\right)}$ is the kth largest component of $\beta $ in magnitude. In Reference [21], it has been shown that, for linear regression, the SLOPE procedure can control the false discovery rate (FDR) of model selection at level $q\in [0,1]$ by choosing ${\lambda}_{i}={\mathsf{\Phi}}^{-1}(1-i\cdot q/2p)$. The FDR is the expected ratio of false discoveries (i.e., the number of falsely detected nonzero entries) to total discoveries. Since controlling the FDR is less restrictive for model selection than controlling the FWER, FDR control can lead to a significant increase in detection power, while it may slightly increase the total number of false discoveries [21].
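Evaluating the sorted ${\ell}_{1}$-norm takes a single sort; a minimal NumPy sketch (the function name is ours, for illustration only):

```python
import numpy as np

def sorted_l1_norm(beta, lam):
    """J_lambda(beta) = sum_i lam[i] * |beta|_(i), with lam non-increasing."""
    mags = np.sort(np.abs(beta))[::-1]   # |beta| sorted in decreasing order
    return float(mags @ lam)
```

When all ${\lambda}_{i}$ are equal, this reduces to the ordinary ${\ell}_{1}$-norm scaled by $\lambda $.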

This paper is motivated by the SLOPE method [21] for its use of the SL1 regularization, which brings many benefits not available with the popular ${\ell}_{1}$-based regularization: the capability of false discovery rate (FDR) control [21,22], adaptivity to unknown signal sparsity [23] and clustering of coefficients [24,25]. Also, efficient optimization methods [13,21,26] and further theoretical analyses [23,27,28,29] are under active research.

In this paper, we propose a new procedure to find a sparse inverse covariance matrix estimate, which we call nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation). Our nsSLOPE procedure uses the sorted ${\ell}_{1}$-norm for penalized model selection, whereas the existing gLASSO (1) method uses the ${\ell}_{1}$-norm for this purpose. We investigate our method in theory and in experiments, showing (i) how the estimation error can be bounded and (ii) how model selection (more specifically, neighborhood selection [14]) can be done with FDR control on the edge structure of the Gaussian Markov random field. We also provide a simple but efficient estimation algorithm that is well suited for parallel computation.

## 2. nsSLOPE (Neighborhood Selection Sorted L-One Penalized Estimation)

Our method is based on the idea that the estimation of the inverse covariance matrix of a multivariate normal distribution $\mathcal{N}(0,{\Sigma}_{p\times p})$ can be decomposed into multiple regressions on conditional distributions [14,19].

For a formal description of our method, let us consider a p-dimensional random vector $\mathrm{x}\sim \mathcal{N}(0,{\Sigma}_{p\times p})$, denoting its ith component as ${\mathrm{x}}_{i}\in \mathbb{R}$ and the sub-vector without the ith component as ${\mathrm{x}}_{-i}\in {\mathbb{R}}^{p-1}$. For the inverse covariance matrix $\mathsf{\Theta}:={\Sigma}^{-1}$, we use ${\mathsf{\Theta}}_{i}$ and ${\mathsf{\Theta}}_{-i}$ to denote the ith column of the matrix and the rest of $\mathsf{\Theta}$ without the ith column, respectively.

From the Bayes rule, we can decompose the full log-likelihood into the following parts:

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \sum _{j=1}^{n}log{P}_{\mathsf{\Theta}}(\mathrm{x}={x}^{j})=\sum _{j=1}^{n}log{P}_{{\mathsf{\Theta}}_{i}}({\mathrm{x}}_{i}={x}_{i}^{j}|{\mathrm{x}}_{-i}={x}_{-i}^{j})\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}+\sum _{j=1}^{n}log{P}_{{\mathsf{\Theta}}_{-i}}({\mathrm{x}}_{-i}={x}_{-i}^{j}).\hfill \end{array}$$

This decomposition allows a block-wise optimization of the full log-likelihood, which iteratively optimizes ${P}_{{\mathsf{\Theta}}_{i}}({\mathrm{x}}_{i}|{\mathrm{x}}_{-i})$ while the parameters in ${P}_{{\mathsf{\Theta}}_{-i}}\left({\mathrm{x}}_{-i}\right)$ are fixed at their current values.

#### 2.1. Sub-Problems

In the block-wise optimization approach we mentioned above, we need to deal with the conditional distribution ${P}_{{\mathsf{\Theta}}_{i}}({\mathrm{x}}_{i}|{\mathrm{x}}_{-i})$ in each iteration. When $\mathrm{x}\sim \mathcal{N}(0,\Sigma )$, the conditional distribution also follows the Gaussian distribution [30], in particular:

$$\begin{array}{c}\hfill {\mathrm{x}}_{i}|{\mathrm{x}}_{-i}\sim \mathcal{N}({\mu}_{i},{\sigma}_{i}^{2}),\phantom{\rule{0.277778em}{0ex}}\left\{\begin{array}{c}{\mu}_{i}:={\mathrm{x}}_{-i}{\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i}\hfill \\ {\sigma}_{i}^{2}:={\Sigma}_{ii}-{\Sigma}_{i,-i}{\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i}.\hfill \end{array}\right.\end{array}$$

Here ${\Sigma}_{-i,i}\in {\mathbb{R}}^{p-1}$ denotes the ith column of $\Sigma $ without the ith row element, ${\Sigma}_{-i,-i}\in {\mathbb{R}}^{(p-1)\times (p-1)}$ denotes the sub-matrix of $\Sigma $ without the ith column and the ith row and ${\Sigma}_{ii}={\Sigma}_{i,i}$ is the ith diagonal component of $\Sigma $. Once we define ${\beta}^{i}$ as follows,

$${\beta}^{i}:={\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i}\in {\mathbb{R}}^{p-1},$$

the conditional distribution becomes:

$$\begin{array}{c}\hfill {\mathrm{x}}_{i}|{\mathrm{x}}_{-i}\sim \mathcal{N}({\mu}_{i},{\sigma}_{i}^{2}),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\left\{\begin{array}{c}{\mu}_{i}={\mathrm{x}}_{-i}{\beta}^{i}\hfill \\ {\sigma}_{i}^{2}={\Sigma}_{ii}-{\left({\beta}^{i}\right)}^{\prime}{\Sigma}_{-i,-i}{\beta}^{i}.\hfill \end{array}\right.\end{array}$$

This indicates that the maximization of the conditional log-likelihood can be understood as a local regression for the random variable ${\mathrm{x}}_{i}$, under the data model:

$${\mathrm{x}}_{i}={\mathrm{x}}_{-i}{\beta}^{i}+{\nu}_{i},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{\nu}_{i}\sim \mathcal{N}(0,{\sigma}_{i}^{2}).$$

To obtain the solution of the local regression problem (2), we consider a convex optimization problem based on the sorted ${\ell}_{1}$-norm [21,25]. In particular, for each local problem index $i\in \{1,\cdots ,p\}$, we solve

$${\widehat{\beta}}^{i}\in \underset{{\beta}^{i}\in {\mathbb{R}}^{p-1}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{2}{\parallel {X}_{i}-{X}_{-i}{\beta}^{i}\parallel}_{2}^{2}+{\widehat{\sigma}}_{i}\cdot {J}_{\lambda}\left({\beta}^{i}\right).$$

Here, the matrix $X\in {\mathbb{R}}^{n\times p}$ consists of n i.i.d. p-dimensional samples in its rows, ${X}_{i}$ is the ith column of X and ${X}_{-i}\in {\mathbb{R}}^{n\times (p-1)}$ is the sub-matrix of X without the ith column. Note that the sub-problem (3) requires an estimate of ${\sigma}_{i}$, namely ${\widehat{\sigma}}_{i}$, which is computed using information from the other sub-problem indices (namely, ${\Sigma}_{-i,-i}$). Because of this, the problems (3) for indices $i=1,2,\cdots ,p$ are not independent. This contrasts our method with the neighborhood selection algorithms [14,19] based on the ${\ell}_{1}$-regularization.

#### 2.2. Connection to Inverse Covariance Matrix Estimation

With ${\widehat{\beta}}^{i}$ obtained from (3) as an estimate of ${\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i}$ for $i=1,2,\cdots ,p$, the question now is how to use this information for the estimation of $\mathsf{\Theta}={\Sigma}^{-1}$ without an explicit inversion of the matrix. For this purpose, we first consider the block-wise formulation of $\Sigma \mathsf{\Theta}=I$:

$$\left[\begin{array}{cc}{\Sigma}_{-i,-i}& {\Sigma}_{-i,i}\\ {\Sigma}_{i,-i}& {\Sigma}_{ii}\end{array}\right]\left[\begin{array}{cc}{\mathsf{\Theta}}_{-i,-i}& {\mathsf{\Theta}}_{-i,i}\\ {\mathsf{\Theta}}_{i,-i}& {\mathsf{\Theta}}_{ii}\end{array}\right]=\left[\begin{array}{cc}{I}_{(p-1)\times (p-1)}& 0\\ 0& 1\end{array}\right]$$

putting the ith row and the ith column in the last positions whenever necessary. From this we see that ${\Sigma}_{-i,-i}{\mathsf{\Theta}}_{-i,i}+{\Sigma}_{-i,i}{\mathsf{\Theta}}_{ii}=0$. Also, from the block-wise inversion of $\mathsf{\Theta}={\Sigma}^{-1}$, we have (unneeded block matrices are replaced with $\ast $):

$$\begin{array}{cc}{\left[\begin{array}{cc}{\Sigma}_{-i,-i}& {\Sigma}_{-i,i}\\ {\Sigma}_{i,-i}& {\Sigma}_{ii}\end{array}\right]}^{-1}\hfill & \hfill =\left[\begin{array}{cc}\phantom{\rule{3.33333pt}{0ex}}\ast & \ast \\ \phantom{\rule{3.33333pt}{0ex}}\ast & {({\Sigma}_{ii}-{\Sigma}_{i,-i}{\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i})}^{-1}\end{array}\right]\end{array}$$

From these and the definition of ${\beta}^{i}$, we can establish the following relations:

$$\begin{array}{c}\hfill \left\{\begin{array}{cc}{\beta}^{i}\hfill & ={\Sigma}_{-i,-i}^{-1}{\Sigma}_{-i,i}=-\frac{{\mathsf{\Theta}}_{-i,i}}{{\mathsf{\Theta}}_{ii}}\hfill \\ {\mathsf{\Theta}}_{ii}\hfill & ={({\Sigma}_{ii}-{\left({\beta}^{i}\right)}^{\prime}{\Sigma}_{-i,-i}{\beta}^{i})}^{-1}=1/{\sigma}_{i}^{2}.\hfill \end{array}\right.\end{array}$$

Note that ${\mathsf{\Theta}}_{ii}>0$ for a positive definite matrix $\mathsf{\Theta}$ and that (4) implies the sparsity patterns of ${\mathsf{\Theta}}_{-i,i}$ and ${\beta}^{i}$ must be the same. Also, the updates (4) for $i=1,2,\cdots ,p$ are not independent, since the computation of ${\sigma}_{i}^{2}=1/{\mathsf{\Theta}}_{ii}$ depends on ${\Sigma}_{-i,-i}$. However, if we estimate ${\sigma}_{i}$ based on the sample covariance matrix instead of $\Sigma $, (i) our updates (4) no longer need to explicitly compute or store the ${\Sigma}_{p\times p}$ matrix, unlike the gLASSO and (ii) our sub-problems become mutually independent and thus solvable in parallel.
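As a quick sanity check of the relations in (4), the following NumPy snippet (our own illustration, with a small hand-picked positive definite covariance matrix) confirms that ${\beta}^{i}$ and ${\mathsf{\Theta}}_{ii}$ computed from $\Sigma $ agree with the corresponding entries of $\mathsf{\Theta}={\Sigma}^{-1}$:

```python
import numpy as np

# A small positive definite covariance matrix (hand-picked for illustration)
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
Theta = np.linalg.inv(Sigma)

i = 0
mask = np.arange(3) != i
# beta^i = Sigma_{-i,-i}^{-1} Sigma_{-i,i}
beta_i = np.linalg.solve(Sigma[np.ix_(mask, mask)], Sigma[mask, i])
# relation (4): beta^i = -Theta_{-i,i} / Theta_ii
assert np.allclose(beta_i, -Theta[mask, i] / Theta[i, i])
# relation (4): Theta_ii = 1 / sigma_i^2
sigma2_i = Sigma[i, i] - beta_i @ Sigma[np.ix_(mask, mask)] @ beta_i
assert np.isclose(Theta[i, i], 1.0 / sigma2_i)
```

Both identities are exact consequences of the Schur complement, so the assertions hold for any positive definite $\Sigma $.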

## 3. Algorithm

Our algorithm, called nsSLOPE, is summarized in Algorithm 1; it is essentially a block-coordinate descent algorithm [12,31]. Our algorithm may look similar to that of Yuan [19], but there are several important differences: (i) each sub-problem in our algorithm solves a SLOPE formulation (SL1 regularization), while Yuan’s sub-problem is either a LASSO or a Dantzig selector (${\ell}_{1}$ regularization); (ii) our sub-problem makes use of the estimate ${\widehat{\mathsf{\Theta}}}_{ii}$ in addition to ${\widehat{\mathsf{\Theta}}}_{-i,i}$.

Algorithm 1: The nsSLOPE Algorithm

Input: $X\in {\mathbb{R}}^{n\times p}$ with zero-centered columns
Input: $S=\frac{1}{n}{X}^{\prime}X$
Input: the target FDR level q
Set $\lambda =({\lambda}_{1},\cdots ,{\lambda}_{p-1})$ according to Section 3.2;

#### 3.1. Sub-Problem Solver

To solve our sub-problems (3), we use the SLOPE R package, which implements the proximal gradient descent algorithm of Reference [32] with acceleration based on Nesterov’s original idea [33]. The algorithm requires computing the proximal operator involving ${J}_{\lambda}(\cdot)$, namely

$$\mathrm{pro}{\mathrm{x}}_{{J}_{\lambda}}\left(z\right):=\underset{x\in {\mathbb{R}}^{p-1}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{2}{\parallel x-z\parallel}^{2}+{J}_{\lambda}\left(x\right).$$

This can be computed in $\mathcal{O}(p\phantom{\rule{0.166667em}{0ex}}\mathrm{log}\phantom{\rule{0.166667em}{0ex}}p)$ time using an algorithm from Reference [21]. Optimality of a sub-problem solution is declared by the primal-dual gap, for which we use a tight threshold value of ${10}^{-7}$.
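A self-contained sketch of this proximal operator (our own NumPy re-implementation of the stack-based procedure described in Reference [21], not the SLOPE package code itself) is:

```python
import numpy as np

def prox_sorted_l1(z, lam):
    """prox of J_lambda at z: argmin_x 0.5*||x - z||^2 + J_lambda(x).

    lam must be non-negative and non-increasing, with len(lam) == len(z).
    """
    sign = np.sign(z)
    v = np.abs(z)
    order = np.argsort(v)[::-1]          # indices sorting |z| in decreasing order
    w = v[order] - lam                   # shifted magnitudes
    # Project w onto the non-increasing cone (pool adjacent violators):
    # maintain blocks [start, end, total, length] with decreasing averages.
    blocks = []
    for idx in range(len(w)):
        blocks.append([idx, idx, w[idx], 1])
        while len(blocks) > 1 and \
                blocks[-2][2] / blocks[-2][3] <= blocks[-1][2] / blocks[-1][3]:
            last = blocks.pop()
            blocks[-1][1] = last[1]
            blocks[-1][2] += last[2]
            blocks[-1][3] += last[3]
    x_sorted = np.empty_like(w)
    for start, end, total, length in blocks:
        x_sorted[start:end + 1] = max(total / length, 0.0)   # clip at zero
    # Undo the sorting and restore the signs
    x = np.empty_like(x_sorted)
    x[order] = x_sorted
    return sign * x
```

With all ${\lambda}_{i}$ equal, this reduces to ordinary element-wise soft-thresholding.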

#### 3.2. Choice of $\lambda =({\lambda}_{1},\cdots ,{\lambda}_{p-1})$

For the sequence of $\lambda $ values in the sub-problems, we use the so-called Benjamini-Hochberg (BH) sequence [21]:

$${\lambda}_{i}^{BH}={\mathsf{\Phi}}^{-1}(1-iq/2(p-1)),\phantom{\rule{0.277778em}{0ex}}i=1,\cdots ,p-1.$$

Here $q\in [0,1]$ is the target FDR level we discuss later and ${\mathsf{\Phi}}^{-1}\left(\alpha \right)$ is the $\alpha $th quantile of the standard normal distribution. In fact, when the design matrix ${X}_{-i}$ in the SLOPE sub-problem (3) is not orthogonal, it is beneficial to use an adjusted version of this sequence, generated by:

$${\lambda}_{i}={\lambda}_{i}^{BH}\sqrt{1+w(i-1)\sum _{j<i}{\lambda}_{j}^{2}},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}w\left(k\right)=\frac{1}{n-k-1},$$

If the sequence ${\lambda}_{1},\cdots ,{\lambda}_{k}$ is non-increasing up to some index ${k}^{\ast}$ but begins to increase afterwards, we set all the remaining values equal to ${\lambda}_{{k}^{\ast}}$, so that the resulting sequence is non-increasing. For more discussion about the adjustment, we refer to Section 3.2.2 of Reference [21].
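The construction above can be sketched as follows (our own Python code, using the standard library's inverse normal CDF; the SLOPE R package provides its own implementation):

```python
from statistics import NormalDist
import math

def bh_lambda_sequence(n, p, q):
    """BH-based lambda sequence with the non-orthogonality adjustment."""
    d = p - 1
    ppf = NormalDist().inv_cdf                 # Phi^{-1}
    lam_bh = [ppf(1 - (i * q) / (2 * d)) for i in range(1, d + 1)]
    lam = [lam_bh[0]]                          # i = 1: no adjustment term
    for k in range(1, d):                      # 0-based k corresponds to i = k + 1
        w = 1.0 / (n - k - 1)                  # w(i - 1) = 1 / (n - i)
        lam.append(lam_bh[k] * math.sqrt(1 + w * sum(x * x for x in lam)))
    # If the sequence starts to increase at some k*, flatten the tail there.
    for k in range(1, d):
        if lam[k] > lam[k - 1]:
            lam[k:] = [lam[k - 1]] * (d - k)
            break
    return lam
```

For example, `bh_lambda_sequence(100, 3, 0.1)` yields a non-increasing pair starting at ${\mathsf{\Phi}}^{-1}(0.975)\approx 1.96$.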

#### 3.3. Estimation of ${\mathsf{\Theta}}_{ii}$

To solve the ith sub-problem in our algorithm, we need to estimate the value of ${\mathsf{\Theta}}_{ii}$. This can be done using (4), that is, ${\mathsf{\Theta}}_{ii}={({\Sigma}_{ii}-{\left({\beta}^{i}\right)}^{\prime}{\Sigma}_{-i,-i}{\beta}^{i})}^{-1}$. However, this implies that (i) we would additionally need to keep an estimate of $\Sigma \in {\mathbb{R}}^{p\times p}$ and (ii) the computation of the ith sub-problem would depend on all the other indices, as it needs to access ${\Sigma}_{-i,-i}$, requiring the algorithm to run sequentially.

To avoid these overheads, we compute the estimate using the sample covariance matrix $S=\frac{1}{n}{X}^{\prime}X$ instead (we assume that the columns of $X\in {\mathbb{R}}^{n\times p}$ are centered):

$${\tilde{\mathsf{\Theta}}}_{ii}={({S}_{ii}-2{\left({\widehat{\beta}}^{i}\right)}^{\prime}{S}_{-i,i}+{\left({\widehat{\beta}}^{i}\right)}^{\prime}{S}_{-i,-i}{\widehat{\beta}}^{i})}^{-1}.$$

This allows us to compute the inner loop of Algorithm 1 in parallel.
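A minimal NumPy sketch of this per-column update (our own illustration; `update_theta_column` is a hypothetical helper name): given a sub-problem solution ${\widehat{\beta}}^{i}$, it forms ${\tilde{\mathsf{\Theta}}}_{ii}$ from the sample covariance S as in (5) and fills in the off-diagonal entries via ${\widehat{\mathsf{\Theta}}}_{-i,i}=-{\widehat{\mathsf{\Theta}}}_{ii}{\widehat{\beta}}^{i}$:

```python
import numpy as np

def update_theta_column(S, beta_hat, i):
    """Column i of Theta: Theta_ii from (5), off-diagonals as -Theta_ii * beta_hat."""
    p = S.shape[0]
    mask = np.arange(p) != i
    theta_ii = 1.0 / (S[i, i] - 2 * beta_hat @ S[mask, i]
                      + beta_hat @ S[np.ix_(mask, mask)] @ beta_hat)
    col = np.empty(p)
    col[i] = theta_ii
    col[mask] = -theta_ii * beta_hat
    return col
```

As a check on the formula: if `beta_hat` is the unregularized solution ${S}_{-i,-i}^{-1}{S}_{-i,i}$, the returned vector is exactly the ith column of ${S}^{-1}$; with a sparse SLOPE solution, it gives the corresponding sparse column of $\widehat{\mathsf{\Theta}}$.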

#### 3.4. Stopping Criterion of the External Loop

To terminate the outer loop in Algorithm 1, we check whether the diagonal entries of $\widehat{\mathsf{\Theta}}$ have converged; that is, the algorithm is stopped when the ${\ell}_{\infty}$-norm difference between two consecutive iterates falls below a threshold value, ${10}^{-3}$. The value is slightly loose, but we have found no practical difference by making it tighter. Note that it suffices to check the optimality of the diagonal entries of $\widehat{\mathsf{\Theta}}$, since the optimality of the ${\widehat{\beta}}^{i}$’s is enforced by the sub-problem solver and ${\widehat{\mathsf{\Theta}}}_{-i,i}=-{\widehat{\mathsf{\Theta}}}_{ii}{\widehat{\beta}}^{i}$.

#### 3.5. Uniqueness of Sub-Problem Solutions

When $p>n$, our sub-problems may have multiple solutions, which may prevent the global convergence of our algorithm. We could adopt the technique in Reference [34], injecting a strongly convex proximity term into each sub-problem objective so that each sub-problem has a unique solution. In our experience, however, we encountered no convergence issues using stopping threshold values in the range of ${10}^{-3}$ to ${10}^{-7}$ for the outer loop.

## 4. Analysis

In this section, we provide two theoretical results on our nsSLOPE procedure: (i) an estimation error bound on the distance between our estimate $\tilde{\mathsf{\Theta}}$ and the true model parameter $\mathsf{\Theta}$ and (ii) group-wise FDR control on discovering the true edges in the Gaussian MRF corresponding to $\mathsf{\Theta}$.

We first discuss the estimation error bound, for which we divide our analysis into two parts regarding (i) the off-diagonal entries and (ii) the diagonal entries of $\mathsf{\Theta}$.

#### 4.1. Estimation Error Analysis

#### 4.1.1. Off-Diagonal Entries

From (4), we see that ${\widehat{\beta}}^{i}=-{\widehat{\mathsf{\Theta}}}_{-i,i}/{\widehat{\mathsf{\Theta}}}_{ii}$; in other words, when ${\widehat{\mathsf{\Theta}}}_{ii}$ is fixed, the off-diagonal entries ${\widehat{\mathsf{\Theta}}}_{-i,i}$ are determined solely by ${\widehat{\beta}}^{i}$, and therefore we can focus on the estimation error of ${\widehat{\beta}}^{i}$.

To discuss the estimation error of ${\widehat{\beta}}^{i}$, it is convenient to consider a constrained reformulation of the sub-problem (3):

$$\begin{array}{c}\hfill \widehat{\beta}\in \underset{\beta \in {\mathbb{R}}^{d}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{J}_{\lambda}\left(\beta \right)\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{s}.\mathrm{t}.\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}{\parallel b-A\beta \parallel}_{1}\le \epsilon .\end{array}$$

Hereafter, for the sake of simplicity, we use the notation $b:={X}_{i}\in {\mathbb{R}}^{n}$ and $A:={X}_{-i}\in {\mathbb{R}}^{n\times d}$ with $d:=p-1$, also dropping sub-problem indices in $\widehat{\beta}$ and $\epsilon >0$. In this view, the data model considered in each sub-problem is as follows,

$$b=A{\beta}^{\ast}+\nu ,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\nu \sim \mathcal{N}(0,{\sigma}_{i}^{2}{I}_{n}).$$

For the analysis, we make the following assumptions:

- The true signal ${\beta}^{\ast}\in {\mathbb{R}}^{d}$ satisfies $\parallel {\beta}^{\ast}{\parallel}_{1}\le \sqrt{s}$ for some $s>0$ (this condition is satisfied for example, if $\parallel {\beta}^{\ast}{\parallel}_{2}\le 1$ and ${\beta}^{\ast}$ is s-sparse, that is, it has at most s nonzero elements).
- The noise satisfies the condition $\frac{1}{n}{\parallel \nu \parallel}_{1}\le \epsilon .$ This allows us to say that the true signal ${\beta}^{\ast}$ is feasible with respect to the constraint in (6).

We provide the following result, which shows that $\widehat{\beta}$ approaches the true ${\beta}^{\ast}$ with high probability:

**Theorem**

**1.**

Suppose that $\widehat{\beta}$ is an estimate of ${\beta}^{\ast}$ obtained by solving the sub-problem (6). Consider the factorization $A:={X}_{-i}=BC$ where $B\in {\mathbb{R}}^{n\times k}$ ($n\le k$) is a Gaussian random matrix whose entries are sampled i.i.d. from $\mathcal{N}(0,1)$ and $C\in {\mathbb{R}}^{k\times d}$ is a deterministic matrix such that ${C}^{\prime}C={\Sigma}_{-i,-i}$. Such a decomposition is possible since the rows of A are independent samples from $\mathcal{N}(0,{\Sigma}_{-i,-i})$. Then we have,

$$\begin{array}{c}\hfill \parallel \widehat{\beta}-{\beta}^{\ast}{\parallel}_{C}^{2}\le \sqrt{2\pi}\left(8\sqrt{2}{\parallel C\parallel}_{1}\frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{\frac{s\phantom{\rule{0.166667em}{0ex}}\mathrm{log}\phantom{\rule{0.166667em}{0ex}}k}{n}}+\epsilon +t\right)\end{array}$$

with probability at least $1-2\mathrm{exp}\left(-\frac{n{t}^{2}}{2s}\frac{{\overline{\lambda}}^{2}}{{\lambda}_{1}^{2}}\right)$, where $\overline{\lambda}=\frac{1}{d}{\sum}_{i=1}^{d}{\lambda}_{i}$ and ${\parallel C\parallel}_{1}={\mathrm{max}}_{j=1,\cdots ,d}{\sum}_{i=1}^{k}|{C}_{ij}|$.

We need to discuss a few results before proving Theorem 1.

**Theorem**

**2.**

Let T be a bounded subset of ${\mathbb{R}}^{d}$. For an $\epsilon >0$, consider the set

$${T}_{\epsilon}:=\left\{u\in T:\frac{1}{n}{\parallel Au\parallel}_{1}\le \epsilon \right\}.$$

Then

$$\underset{u\in {T}_{\epsilon}}{\mathrm{sup}}{\left({u}^{\prime}{C}^{\prime}Cu\right)}^{1/2}\le \sqrt{\frac{8\pi}{n}}\mathbb{E}\underset{u\in T}{\mathrm{sup}}|\langle {C}^{\prime}g,u\rangle |+\sqrt{\frac{\pi}{2}}(\epsilon +t),$$

holds with probability at least $1-2\mathrm{exp}\left(-\frac{n{t}^{2}}{2d{\left(T\right)}^{2}}\right)$, where $g\in {\mathbb{R}}^{k}$ is a standard Gaussian random vector and $d\left(T\right):={\mathrm{max}}_{u\in T}{\parallel u\parallel}_{2}$.

**Proof.**

The result follows from an extended general ${M}^{\ast}$ inequality in expectation [35]. □

The next result shows that the first term of the upper bound in Theorem 2 can be bounded without using expectation.

**Lemma**

**1.**

The quantity $\mathbb{E}{\mathrm{sup}}_{u\in \mathcal{K}-\mathcal{K}}|\langle {C}^{\prime}g,u\rangle |$ is called the width of $\mathcal{K}$ and is bounded as follows,

$$\mathbb{E}\underset{u\in \mathcal{K}-\mathcal{K}}{sup}|\langle {C}^{\prime}g,u\rangle |\le 4\sqrt{2}{\parallel C\parallel}_{1}\frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{slogk}.$$

**Proof.**

This result is a part of the proof for Theorem 3.1 in Reference [35]. □

Using Theorem 2 and Lemma 1, we can derive a high-probability error bound on the estimation from noisy observations,

$$b=A{\beta}^{\ast}+\nu ,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}{\parallel \nu \parallel}_{1}\le \epsilon ,$$

where the true signal ${\beta}^{\ast}$ belongs to a bounded subset $\mathcal{K}\subset {\mathbb{R}}^{d}$. The following corollaries are straightforward extensions of Theorems 3.3 and 3.4 of Reference [35], given our Theorem 2 (so we skip the proofs).

**Corollary**

**1.**

Choose $\widehat{\beta}$ to be any vector satisfying

$$\widehat{\beta}\in \mathcal{K},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}{\parallel b-A\widehat{\beta}\parallel}_{1}\le \epsilon .$$

Then,

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \underset{\beta \in \mathcal{K}}{\mathrm{sup}}{\left[{(\widehat{\beta}-\beta )}^{\prime}{C}^{\prime}C(\widehat{\beta}-\beta )\right]}^{1/2}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\le \sqrt{2\pi}\left(8\sqrt{2}{\parallel C\parallel}_{1}\frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{\frac{s\phantom{\rule{0.166667em}{0ex}}\mathrm{log}\phantom{\rule{0.166667em}{0ex}}k}{n}}+\epsilon +t\right)\hfill \end{array}$$

with probability at least $1-2\mathrm{exp}\left(-\frac{2n{t}^{2}}{d{\left(T\right)}^{2}}\right)$.

Now we show the error bound for the estimates obtained by solving the optimization problem (6). For this purpose, we make use of the Minkowski functional of the set $\mathcal{K}$,

$${\parallel \beta \parallel}_{\mathcal{K}}:=inf\{r>0:r\beta \in \mathcal{K}\}$$

If $\mathcal{K}\subset {\mathbb{R}}^{p}$ is a compact and origin-symmetric convex set with non-empty interior, then ${\parallel \xb7\parallel}_{\mathcal{K}}$ defines a norm in ${\mathbb{R}}^{p}$. Note that $\beta \in \mathcal{K}$ if and only if ${\parallel \beta \parallel}_{\mathcal{K}}\le 1$.

**Corollary**

**2.**

Choose $\widehat{\beta}$ to be a solution of

$$\widehat{\beta}\in \underset{\beta \in {\mathbb{R}}^{p}}{\mathrm{arg}\mathrm{min}}{\phantom{\rule{0.277778em}{0ex}}\parallel \beta \parallel}_{\mathcal{K}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathit{subject}\phantom{\rule{4.pt}{0ex}}\mathit{to}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}{\parallel b-A\beta \parallel}_{1}\le \epsilon .$$

Then

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \underset{\beta \in \mathcal{K}}{\mathrm{sup}}{\left[{(\widehat{\beta}-\beta )}^{\prime}{C}^{\prime}C(\widehat{\beta}-\beta )\right]}^{1/2}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\le \sqrt{2\pi}\left(8\sqrt{2}{\parallel C\parallel}_{1}\frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{\frac{s\phantom{\rule{0.166667em}{0ex}}\mathrm{log}\phantom{\rule{0.166667em}{0ex}}k}{n}}+\epsilon +t\right)\hfill \end{array}$$

with probability at least $1-2\mathrm{exp}\left(-\frac{2n{t}^{2}}{d{\left(T\right)}^{2}}\right)$.

Finally, we show that solving the constrained form of the sub-problems (6) satisfies essentially the same error bound as in Corollary 2.

**Proof**

**of Theorem 1**

Since we assumed that ${\parallel {\beta}^{\ast}\parallel}_{1}\le \sqrt{s}$, we construct the subset $\mathcal{K}$ so that all vectors $\beta $ with ${\parallel \beta \parallel}_{1}\le \sqrt{s}$ are contained in $\mathcal{K}$. That is,

$$\mathcal{K}:=\{\beta \in {\mathbb{R}}^{p}:{J}_{\lambda}\left(\beta \right)\le {\lambda}_{1}\sqrt{s}\}.$$

This is a ball defined in the SL1-norm ${J}_{\lambda}(\cdot)$: in this case, the Minkowski functional ${\parallel \cdot \parallel}_{\mathcal{K}}$ is proportional to ${J}_{\lambda}(\cdot)$ and thus the same solution minimizes both ${J}_{\lambda}(\cdot)$ and ${\parallel \cdot \parallel}_{\mathcal{K}}$.

Recall that $d\left(T\right):={\mathrm{max}}_{u\in T}{\parallel u\parallel}_{2}$ and we choose $T=\mathcal{K}-\mathcal{K}$. Since ${\parallel \beta \parallel}_{2}\le {\parallel \beta \parallel}_{1}\le \frac{1}{\overline{\lambda}}{J}_{\lambda}\left(\beta \right)$, for $\beta \in \mathcal{K}$ we have ${\parallel \beta \parallel}_{2}\le \frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{s}$. This implies that $d\left(T\right)\le 2\frac{{\lambda}_{1}}{\overline{\lambda}}\sqrt{s}$. □

#### 4.1.2. Diagonal Entries

We estimate ${\mathsf{\Theta}}_{ii}$ based on the residual sum of squares (RSS), as suggested in Reference [19],

$$\begin{array}{cc}\hfill {\widehat{\sigma}}_{i}^{2}={\left({\widehat{\mathsf{\Theta}}}_{ii}\right)}^{-1}& =\parallel {X}_{i}-{X}_{-i}{\widehat{\beta}}^{i}{\parallel}^{2}/n\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& ={S}_{ii}-2{\left({\widehat{\beta}}^{i}\right)}^{\prime}{S}_{-i,i}+{\left({\widehat{\beta}}^{i}\right)}^{\prime}{S}_{-i,-i}{\widehat{\beta}}^{i}.\hfill \end{array}$$

Unlike in Reference [19], we directly analyze the estimation error of ${\widehat{\mathsf{\Theta}}}_{ii}$ based on a chi-square tail bound.

**Theorem**

**3.**

For all small enough $\alpha >0$ such that $\alpha /{\sigma}_{i}^{2}\in [0,1/2)$, we have

$$\mathbb{P}(|{\widehat{\mathsf{\Theta}}}_{ii}^{-1}-{\mathsf{\Theta}}_{ii}^{-1}-\delta ({\left({\beta}^{i}\right)}^{\ast},{\widehat{\beta}}^{i})|\ge \alpha )\le \mathrm{exp}\left(-\frac{3}{16}n{\alpha}^{2}\right)$$

where, for $\nu \sim \mathcal{N}(0,{\sigma}_{i}^{2}{I}_{n})$,

$$\delta ({\beta}^{\ast},\widehat{\beta}):={({\beta}^{\ast}-\widehat{\beta})}^{\prime}{S}_{-i,-i}({\beta}^{\ast}-\widehat{\beta})+2{\nu}^{\prime}{X}_{-i}({\beta}^{\ast}-\widehat{\beta})/n.$$

**Proof.**

Using the same notation as in the previous section, that is, $b={X}_{i}$ and $A={X}_{-i}$, consider the estimate in discussion,

$${\widehat{\sigma}}_{i}^{2}=\parallel b-A\widehat{\beta}{\parallel}^{2}/n={\parallel A({\beta}^{\ast}-\widehat{\beta})+\nu \parallel}^{2}/n,$$

where the last equality is from $b=A{\beta}^{\ast}+\nu $. Therefore,

$${\widehat{\sigma}}_{i}^{2}={({\beta}^{\ast}-\widehat{\beta})}^{\prime}{S}_{-i,-i}({\beta}^{\ast}-\widehat{\beta})+2{\nu}^{\prime}A({\beta}^{\ast}-\widehat{\beta})/n+{\nu}^{\prime}\nu /n,$$

where ${S}_{-i,-i}:={A}^{\prime}A/n={X}_{-i}^{\prime}{X}_{-i}/n$. The last term is the sum of squares of independent ${\nu}_{i}\sim \mathcal{N}(0,{\sigma}_{i}^{2})$ and therefore it follows the chi-square distribution, that is, $\left({\nu}^{\prime}\nu \right)/{\sigma}_{i}^{2}\sim {\chi}_{n}^{2}$. Applying the tail bound [36] for a chi-square random variable Z with d degrees of freedom,

$$\mathbb{P}(|{d}^{-1}Z-1|\ge \alpha )\le \exp\left(-\frac{3}{16}d{\alpha}^{2}\right),\quad \alpha \in [0,1/2),$$

we get for all small enough $\alpha >0$,

$$\mathbb{P}(|{\nu}^{\prime}\nu /n-{\sigma}_{i}^{2}|\ge \alpha )\le \exp\left(-\frac{3}{16}n{\alpha}^{2}\right).$$

□
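The chi-square tail bound used here is easy to sanity-check by simulation. A sketch (our own Monte Carlo check, with the constant $3/16$ taken from [36]):

```python
import math
import random

random.seed(1)
d, alpha, trials = 50, 0.4, 20000

# estimate P(|Z/d - 1| >= alpha) for Z ~ chi-square with d degrees of freedom
hits = 0
for _ in range(trials):
    z = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    if abs(z / d - 1.0) >= alpha:
        hits += 1
empirical = hits / trials

bound = math.exp(-3.0 / 16.0 * d * alpha ** 2)   # exp(-3 d alpha^2 / 16)
print(empirical <= bound)   # the empirical tail probability respects the bound
```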

#### 4.1.3. Discussion on Asymptotic Behaviors

Our two main results above, Theorems 1 and 3, indicate how well our estimates of the off-diagonal entries ${\widehat{\mathsf{\Theta}}}_{-i,i}$ and the diagonal entries ${\widehat{\mathsf{\Theta}}}_{ii}$ behave. Based on these results, we can discuss the estimation error of the full matrix $\widehat{\mathsf{\Theta}}$ compared to the true precision matrix $\mathsf{\Theta}$.

From Theorem 1, we can deduce that with ${C}^{\prime}C={\Sigma}_{-i,-i}$ and $z\left(n\right)=O\left(\sqrt{s\log k/n}\right)$,

$$\begin{array}{cc}\hfill {\mathrm{ev}}_{min}\left({\Sigma}_{-i,-i}\right)\parallel \widehat{\beta}-{\beta}^{\ast}\parallel & \le {\left[{(\widehat{\beta}-{\beta}^{\ast})}^{\prime}{\Sigma}_{-i,-i}(\widehat{\beta}-{\beta}^{\ast})\right]}^{1/2}\hfill \\ \hfill & \le \sqrt{2\pi}(z\left(n\right)+\epsilon +t),\hfill \end{array}$$

where ${\mathrm{ev}}_{min}\left({\Sigma}_{-i,-i}\right)$ is the smallest eigenvalue of the symmetric positive definite ${\Sigma}_{-i,-i}$. That is, using the interlacing property of eigenvalues, we have

$$\parallel \widehat{\beta}-{\beta}^{\ast}\parallel \le \frac{\sqrt{2\pi}(z\left(n\right)+\epsilon +t)}{{\mathrm{ev}}_{min}\left({\Sigma}_{-i,-i}\right)}\le \sqrt{2\pi}\parallel \mathsf{\Theta}\parallel (z\left(n\right)+\epsilon +t),$$

where $\parallel \mathsf{\Theta}\parallel ={\mathrm{ev}}_{max}\left(\mathsf{\Theta}\right)$ is the spectral radius of $\mathsf{\Theta}$. Therefore, when $n\to \infty $, the distance between $\widehat{\beta}$ and ${\beta}^{\ast}$ is bounded by $\sqrt{2\pi}\parallel \mathsf{\Theta}\parallel (\epsilon +t)$. Here, we can let $t\to 0$ in a way such that $n{t}^{2}\to \infty $ as n increases, so that the bound in Theorem 1 holds with probability approaching one. That is, in rough asymptotics,

$$\parallel \widehat{\beta}-{\beta}^{\ast}\parallel \le \epsilon \sqrt{2\pi}\parallel \mathsf{\Theta}\parallel .$$

Under the conditions above, Theorem 3 indicates that

$$\begin{array}{cc}\hfill \delta ({\beta}^{\ast},\widehat{\beta})& \le \parallel S\parallel \parallel {\beta}^{\ast}-\widehat{\beta}{\parallel}^{2}+2{\parallel \nu /n\parallel}_{1}{\parallel {X}_{-i}({\beta}^{\ast}-\widehat{\beta})\parallel}_{\infty}\hfill \\ \hfill & \le 2{\epsilon}^{2}\parallel \mathsf{\Theta}\parallel (\pi \parallel S\parallel \parallel \mathsf{\Theta}\parallel +\sqrt{2\pi}\parallel {X}_{-i}{\parallel}_{\infty}),\hfill \end{array}$$

using our assumption that ${\parallel \nu /n\parallel}_{1}\le \epsilon $. We can further introduce assumptions on $\parallel \mathsf{\Theta}\parallel $ and $S\mathsf{\Theta}$ as in Reference [19] to quantify the upper bound, but here we will simply say that $\delta ({\beta}^{\ast},\widehat{\beta})\le 2c{\epsilon}^{2}$, where c is a constant depending on the properties of the full matrices S and $\mathsf{\Theta}$. If this is the case, then from Theorem 3, for an $\alpha \to 0$ such that $n{\alpha}^{2}\to \infty $, we see that

$$|{\widehat{\mathsf{\Theta}}}_{ii}^{-1}-{\mathsf{\Theta}}_{ii}^{-1}|\le \delta ({\left({\beta}^{i}\right)}^{\ast},{\widehat{\beta}}^{i})\le 2c{\epsilon}^{2},$$

with probability approaching one.

Therefore, if we can drive $\epsilon \to 0$ as $n\to \infty $, then (7) and (8) imply the convergence of the diagonal ${\widehat{\mathsf{\Theta}}}_{ii}\to {\mathsf{\Theta}}_{ii}$ and the off-diagonal ${\widehat{\mathsf{\Theta}}}_{-i,i}\to {\mathsf{\Theta}}_{-i,i}$ in probability. Since ${\parallel {X}_{i}-{X}_{-i}\beta \parallel}_{1}\le \epsilon $ (or, more precisely, $\parallel {X}_{i}-{X}_{-i}{\beta}^{i}\parallel \le {\epsilon}_{i}$), $\epsilon \to 0$ is likely to happen when $(p-1)/n\ge 1$, implying that $p\to \infty $ as well as $n\to \infty $.

#### 4.2. Neighborhood FDR Control under Group Assumptions

Here we consider $\widehat{\beta}$ obtained by solving the unconstrained form (3), with the same data model we discussed above for the sub-problems:

$$b=A\beta +\nu ,\quad \nu \sim \mathcal{N}(0,{\sigma}^{2}{I}_{n}),$$

with $b={X}_{i}\in {\mathbb{R}}^{n}$ and $A={X}_{-i}\in {\mathbb{R}}^{n\times d}$, where $d=p-1$. Here we focus on a particular but interesting case where the columns of A form orthogonal groups, that is, under the decomposition $A=BC$, ${C}^{\prime}C={\Sigma}_{-i,-i}$ forms a block-diagonal matrix. We also assume that the columns of A belonging to the same group are highly correlated, in the sense that for any columns ${a}_{i}$ and ${a}_{j}$ of A corresponding to the same group, their correlation is high enough to satisfy

$$\parallel {a}_{i}-{a}_{j}\parallel \le \underset{i=1,\cdots ,d-1}{min}\{{\lambda}_{i}-{\lambda}_{i+1}\}/\parallel b\parallel .$$

This implies that ${\widehat{\beta}}_{i}={\widehat{\beta}}_{j}$ by Theorem 2.1 of Reference [25], which simplifies our analysis. Note that if ${a}_{i}$ and ${a}_{j}$ belong to different blocks, then our assumption above implies ${a}_{i}^{\prime}{a}_{j}=0$. Finally, we further assume that $\parallel {a}_{i}\parallel \le 1$ for $i=1,\cdots ,d$.

Consider a collection G of non-overlapping index subsets $g\subset \{1,\cdots ,d\}$, the groups. Under the block-diagonal covariance matrix assumption above, we see that

$$A\beta =\sum _{g\in G}\sum _{i\in g}{a}_{i}{\beta}_{i}=\sum _{g\in G}\left(\sum _{i\in g}{a}_{i}\right){\beta}_{g}=\sum _{g\in G}\frac{{\tilde{a}}_{g}}{\parallel {\tilde{a}}_{g}\parallel}{\beta}_{g}^{\prime},$$

where ${\beta}_{g}$ denotes the representative of the identical coefficients ${\beta}_{i}$ for all $i\in g$, ${\tilde{a}}_{g}:={\sum}_{i\in g}{a}_{i}$ and ${\beta}_{g}^{\prime}:=\parallel {\tilde{a}}_{g}\parallel {\beta}_{g}$. This tells us that we can replace $A\beta $ by $\tilde{A}{\beta}^{\prime}$ if we define $\tilde{A}\in {\mathbb{R}}^{n\times |G|}$ containing $\frac{{\tilde{a}}_{g}}{\parallel {\tilde{a}}_{g}\parallel}$ in its columns (so that ${\tilde{A}}^{\prime}\tilde{A}={I}_{|G|}$) and consider the vector of group-representative coefficients ${\beta}^{\prime}\in {\mathbb{R}}^{|G|}$.
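To make the collapsing step concrete, here is a small self-contained check (our own toy example, taking the within-group columns exactly equal, a special case of the high-correlation assumption above): the collapsed matrix $\tilde{A}$ is orthonormal, and $A\beta =\tilde{A}{\beta}^{\prime}$ with ${\beta}_{g}^{\prime}=\parallel {\tilde{a}}_{g}\parallel {\beta}_{g}$.

```python
import math

# two orthogonal groups in R^4; within a group the columns are identical
a1 = [1.0, 0.0, 0.0, 0.0]                              # group g1 = {0, 1}
a2 = [0.0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]    # group g2 = {2}
A = [a1, a1, a2]                   # columns of A, stored as rows here
groups = [[0, 1], [2]]

def add(u, v): return [x + y for x, y in zip(u, v)]
def scale(c, u): return [c * x for x in u]
def dot(u, v): return sum(x * y for x, y in zip(u, v))

# collapse each group: a_tilde_g = sum_{i in g} a_i, column = a_tilde_g / ||a_tilde_g||
A_tilde, norms = [], []
for g in groups:
    s = [0.0] * 4
    for i in g:
        s = add(s, A[i])
    nrm = math.sqrt(dot(s, s))
    norms.append(nrm)
    A_tilde.append(scale(1.0 / nrm, s))

# check A_tilde' A_tilde = I_{|G|}
gram = [[dot(u, v) for v in A_tilde] for u in A_tilde]
ok_orth = all(abs(gram[i][j] - (1.0 if i == j else 0.0)) < 1e-12
              for i in range(2) for j in range(2))

# check A beta = A_tilde beta' with beta'_g = ||a_tilde_g|| * beta_g
beta_g = [0.7, -0.4]               # one shared coefficient per group
Ab = [0.0] * 4
for g, bg in zip(groups, beta_g):
    for i in g:
        Ab = add(Ab, scale(bg, A[i]))
At_bp = [0.0] * 4
for col, nrm, bg in zip(A_tilde, norms, beta_g):
    At_bp = add(At_bp, scale(nrm * bg, col))
ok_equal = all(abs(x - y) < 1e-12 for x, y in zip(Ab, At_bp))

print(ok_orth and ok_equal)
```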

The regularizer can be rewritten similarly,

$${J}_{\lambda}\left(\beta \right)=\sum _{i=1}^{|G|}{\lambda}_{i}^{\prime}{|{\beta}^{\prime}|}_{\left(i\right)}={J}_{{\lambda}^{\prime}}\left({\beta}^{\prime}\right),$$

where ${\lambda}_{i}^{\prime}:=({\lambda}_{{\sum}_{j=1}^{i-1}|{g}_{\left(j\right)}|+1}+\cdots +{\lambda}_{{\sum}_{j=1}^{i}|{g}_{\left(j\right)}|})/\parallel {\tilde{a}}_{{g}_{\left(i\right)}}\parallel $, denoting by ${g}_{\left(i\right)}$ the group which has the ith largest coefficient of ${\beta}^{\prime}$ in magnitude and by $|{g}_{\left(i\right)}|$ the size of that group.

Using the fact that ${\tilde{A}}^{\prime}\tilde{A}={I}_{|G|}$, we can recast the regression model (9) as ${b}^{\prime}={\tilde{A}}^{\prime}b={\beta}^{\prime}+{\tilde{A}}^{\prime}\nu \sim \mathcal{N}({\beta}^{\prime},{\sigma}^{2}{I}_{|G|})$ and consider a much simpler form of the problem (3),

$${\widehat{\beta}}^{\prime}=\underset{{\beta}^{\prime}\in {\mathbb{R}}^{|G|}}{\mathrm{arg}\mathrm{min}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{2}{\parallel {b}^{\prime}-{\beta}^{\prime}\parallel}^{2}+\sigma {J}_{{\lambda}^{\prime}}\left({\beta}^{\prime}\right),$$

where we can easily check that ${\lambda}^{\prime}=({\lambda}_{1}^{\prime},\cdots ,{\lambda}_{|G|}^{\prime})$ satisfies ${\lambda}_{1}^{\prime}\ge \cdots \ge {\lambda}_{|G|}^{\prime}$. This is exactly the form of the SLOPE problem with an orthogonal design matrix in Reference [21], except that the new ${\lambda}^{\prime}$ sequence is not exactly the Benjamini-Hochberg sequence (5).
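In this orthogonal form, the minimizer is exactly the proximal operator of the SL1 norm, which can be computed in $O(n\log n)$ with a pool-adjacent-violators (PAVA) pass. A minimal sketch (our own implementation following the stack-based idea in Reference [21]; names are ours):

```python
import math

def prox_sorted_l1(y, lam):
    """Proximal operator of beta -> sum_i lam_i |beta|_(i) at the point y,
    for a non-negative, non-increasing sequence lam (stack-based PAVA)."""
    n = len(y)
    order = sorted(range(n), key=lambda i: -abs(y[i]))   # sort |y| descending
    z = [abs(y[order[k]]) - lam[k] for k in range(n)]
    # pool adjacent violators so that the fitted sequence is non-increasing
    blocks = []   # each block is (sum, count)
    for v in z:
        blocks.append((v, 1))
        while len(blocks) > 1:
            (t1, c1), (t2, c2) = blocks[-2], blocks[-1]
            if t1 * c2 <= t2 * c1:     # earlier average <= later average: violation
                blocks[-2:] = [(t1 + t2, c1 + c2)]
            else:
                break
    x = [0.0] * n
    k = 0
    for t, c in blocks:
        val = max(t / c, 0.0)          # positive part gives the prox solution
        for _ in range(c):
            x[order[k]] = math.copysign(val, y[order[k]])
            k += 1
    return x

# with all lam equal, the prox reduces to ordinary soft-thresholding
print(prox_sorted_l1([3.0, -1.0, 0.5], [1.0, 1.0, 1.0]))   # → [2.0, -0.0, 0.0]
```

Problem (10) would then be solved by `prox_sorted_l1(b_prime, [sigma * l for l in lam_prime])`, under the assumption that `b_prime` and `lam_prime` hold ${b}^{\prime}$ and ${\lambda}^{\prime}$.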

We can view the problem (3) (respectively, (10)) as multiple hypothesis testing of d (resp. $|G|$) null hypotheses ${H}_{i}:{\beta}_{i}=0$ (resp. ${H}_{{g}_{i}}:{\beta}_{i}^{\prime}=0$) for $i=1,\cdots ,d$ (resp. $i=1,\cdots ,|G|$), where we reject ${H}_{i}$ (resp. ${H}_{{g}_{i}}$) if and only if ${\widehat{\beta}}_{i}\ne 0$ (resp. ${\widehat{\beta}}_{i}^{\prime}\ne 0$). In this setting, Lemmas B.1 and B.2 in Reference [21] still hold for our problem (10), since they are independent of the particular choice of the $\lambda $ sequence.

In the following, V is the number of individual false rejections, R is the number of total individual rejections and ${R}_{g}$ is the number of total group rejections. The following lemmas are slightly modified versions of Lemmas B.1 and B.2 from Reference [21], respectively, to fit our group-wise setup.

**Lemma 2.**

Let ${H}_{g}$ be a null hypothesis and let $r\ge 1$. Then

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \{{b}^{\prime}:{H}_{g}\phantom{\rule{0.277778em}{0ex}}\mathit{is}\phantom{\rule{4.pt}{0ex}}\mathit{rejected}\phantom{\rule{0.166667em}{0ex}}\wedge \phantom{\rule{0.166667em}{0ex}}{R}_{g}=r\}=\{{b}^{\prime}:|{b}_{g}^{\prime}|>\sigma {\lambda}_{r}^{\prime}\phantom{\rule{0.166667em}{0ex}}\wedge \phantom{\rule{0.166667em}{0ex}}{R}_{g}=r\}.\hfill \end{array}$$

**Lemma 3.**

Consider applying the procedure (10) to the new data ${\tilde{b}}^{\prime}=({b}_{1}^{\prime},\cdots ,{b}_{g-1}^{\prime},{b}_{g+1}^{\prime},\cdots ,{b}_{|G|}^{\prime})$ with $\tilde{\lambda}=({\lambda}_{2}^{\prime},\cdots ,{\lambda}_{|G|}^{\prime})$ and let ${\tilde{R}}_{g}$ be the number of group rejections made. Then with $r\ge 1$,

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \{{b}^{\prime}:|{b}_{g}^{\prime}|>\sigma {\lambda}_{r}^{\prime}\phantom{\rule{0.166667em}{0ex}}\wedge \phantom{\rule{0.166667em}{0ex}}{R}_{g}=r\}\subset \{{b}^{\prime}:|{b}_{g}^{\prime}|>\sigma {\lambda}_{r}^{\prime}\phantom{\rule{0.166667em}{0ex}}\wedge \phantom{\rule{0.166667em}{0ex}}{\tilde{R}}_{g}=r-1\}.\hfill \end{array}$$

Using these, we can show our FDR control result.

**Theorem 4.**

Consider the procedure (10), used in a sub-problem of nsSLOPE, as a multiple testing of group-wise hypotheses, where we reject the null group hypothesis ${H}_{g}:{\beta}_{g}^{\prime}=0$ when ${\widehat{\beta}}_{g}^{\prime}\ne 0$, rejecting all individual hypotheses in the group, that is, all ${H}_{i}$, $i\in g$. Using ${\lambda}_{1},\cdots ,{\lambda}_{p-1}$ defined as in (5), the procedure controls the FDR at the level $q\in [0,1]$:

$$\mathrm{FDR}=\mathbb{E}\left[\frac{V}{R\vee 1}\right]\le \frac{{G}_{0}q}{d}\le q,$$

where

$$\left\{\begin{array}{ll}{G}_{0}\hfill & :=|\{g:{\left({\beta}_{g}^{\prime}\right)}^{\ast}=0\}|\quad (\#\text{ true null group hypotheses})\hfill \\ V\hfill & :=|\{i:{\beta}_{i}^{\ast}=0,\phantom{\rule{0.277778em}{0ex}}{\widehat{\beta}}_{i}\ne 0\}|\quad (\#\text{ false individual rejections})\hfill \\ R\hfill & :=|\{i:{\widehat{\beta}}_{i}\ne 0\}|\quad (\#\text{ all individual rejections})\hfill \end{array}\right.$$

**Proof.**

Suppose that ${H}_{g}$ is rejected. Then,

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \mathbb{P}({H}_{g}\phantom{\rule{0.277778em}{0ex}}\mathrm{rejected}\phantom{\rule{0.277778em}{0ex}}\wedge \phantom{\rule{0.277778em}{0ex}}{R}_{g}=r)\stackrel{\left(i\right)}{\le}\mathbb{P}(|{b}_{g}^{\prime}|\ge \sigma {\lambda}_{r}^{\prime}\phantom{\rule{0.277778em}{0ex}}\wedge \phantom{\rule{0.277778em}{0ex}}{\tilde{R}}_{g}=r-1)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\stackrel{\left(ii\right)}{=}\mathbb{P}(|{b}_{g}^{\prime}|\ge \sigma {\lambda}_{r}^{\prime})\phantom{\rule{0.277778em}{0ex}}\mathbb{P}({\tilde{R}}_{g}=r-1)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\stackrel{\left(iii\right)}{\le}\mathbb{P}\left(|{b}_{g}^{\prime}|\ge \sigma \frac{|{g}_{\left(r\right)}|{\lambda}_{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|}}{\parallel {\tilde{a}}_{{g}_{\left(r\right)}}\parallel}\right)\phantom{\rule{0.277778em}{0ex}}\mathbb{P}({\tilde{R}}_{g}=r-1)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\stackrel{\left(iv\right)}{\le}\mathbb{P}(|{b}_{g}^{\prime}|\ge \sigma {\lambda}_{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|})\phantom{\rule{0.277778em}{0ex}}\mathbb{P}({\tilde{R}}_{g}=r-1)\hfill \end{array}$$

The derivations above are: (i) by Lemmas 2 and 3; (ii) from the independence between ${b}_{g}^{\prime}$ and ${\tilde{b}}^{\prime}$; (iii) by taking the smallest term in the summation defining ${\lambda}_{r}^{\prime}$, multiplied by the number of terms; (iv) due to the assumption that $\parallel {a}_{i}\parallel \le 1$ for all i, so that $\parallel {\tilde{a}}_{{g}_{\left(r\right)}}\parallel \le |{g}_{\left(r\right)}|$ by the triangle inequality.

Now, suppose the group-wise hypothesis testing in (10) is configured so that the first ${G}_{0}$ hypotheses are null in truth, that is, ${H}_{{g}_{i}}:{\beta}_{i}^{\prime}=0$ for $i\le {G}_{0}$. Then we have:

$$\begin{array}{cc}\hfill \mathrm{FDR}& =\mathbb{E}\left[\frac{V}{R\vee 1}\right]=\sum _{r=1}^{|G|}\mathbb{E}\left[\frac{V}{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|}{\mathbb{1}}_{\{{R}_{g}=r\}}\right]\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\sum _{r=1}^{|G|}\frac{1}{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|}\sum _{i=1}^{{G}_{0}}\mathbb{E}\left[{\mathbb{1}}_{\left\{{H}_{{g}_{i}}\phantom{\rule{4.pt}{0ex}}\mathrm{rejected}\right\}}{\mathbb{1}}_{\{{R}_{{g}_{i}}=r\}}\right]\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\sum _{r=1}^{|G|}\frac{1}{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|}\sum _{i=1}^{{G}_{0}}\mathbb{P}({H}_{{g}_{i}}\phantom{\rule{4.pt}{0ex}}\mathrm{rejected}\wedge {R}_{{g}_{i}}=r)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \le \sum _{r=1}^{|G|}\frac{1}{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|}\sum _{i=1}^{{G}_{0}}\frac{{\sum}_{j=1}^{r}|{g}_{\left(j\right)}|q}{d}\mathbb{P}({\tilde{R}}_{{g}_{i}}=r-1)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\sum _{r\ge 1}q\frac{{G}_{0}}{d}\mathbb{P}({\tilde{R}}_{{g}_{i}}=r-1)=q\frac{{G}_{0}}{d}\le q.\hfill \end{array}$$

□

Since the theorem above applies to each sub-problem, which finds, for the ith random variable, its neighbors to be connected in the Gaussian Markov random field defined by $\mathsf{\Theta}$, we call this result neighborhood FDR control.

## 5. Numerical Results

We show that the theoretical properties discussed above also hold in simulated settings.

#### 5.1. Quality of Estimation

For all numerical examples here, we generate $n=100,200,300,400$ i.i.d. random samples from $\mathcal{N}(0,{\Sigma}_{p\times p})$, where $p=500$ is fixed. We plant a simple block-diagonal structure into the true matrix $\Sigma $, which is also preserved in the precision matrix $\mathsf{\Theta}={\Sigma}^{-1}$. All blocks have the same size of 4, so there are 125 blocks in total on the diagonal of $\mathsf{\Theta}$. We set ${\mathsf{\Theta}}_{ii}=1$ and set the entries ${\mathsf{\Theta}}_{ij}$, $i\ne j$, to a high enough value whenever $(i,j)$ belongs to one of those blocks, in order to represent groups of highly correlated variables. All experiments were repeated 25 times.
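A sketch of how such a block can be planted (our own code, not the authors' experiment script; the within-block value $\rho =0.3$ is our illustrative choice of "a high enough value"). A compound-symmetric block $(1-\rho )I+\rho J$ has a closed-form inverse, so $\Sigma ={\mathsf{\Theta}}^{-1}$ keeps the same block-diagonal pattern:

```python
m, rho = 4, 0.3           # block size and within-block entry (our choice)
# Theta block: ones on the diagonal, rho elsewhere -> (1-rho) I + rho J
theta_b = [[1.0 if i == j else rho for j in range(m)] for i in range(m)]

# closed-form inverse of a*I + b*J: (1/a) * (I - b/(a + m*b) * J),
# positive definite here since 1 - rho > 0 and 1 + (m-1)*rho > 0
a, b = 1.0 - rho, rho
c1 = 1.0 / a
c2 = -b / (a * (a + m * b))
sigma_b = [[c1 * (1.0 if i == j else 0.0) + c2 for j in range(m)]
           for i in range(m)]

# check Theta_block @ Sigma_block = I
prod = [[sum(theta_b[i][k] * sigma_b[k][j] for k in range(m)) for j in range(m)]
        for i in range(m)]
ok = all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
         for i in range(m) for j in range(m))
print(ok)
```

Sampling then only requires a Cholesky factor of each small $\Sigma $-block, since the blocks are independent.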

The $\lambda $ sequence for nsSLOPE was chosen according to Section 3.2 with respect to a target FDR level $q=0.05$. For gLASSO, we used the $\lambda $ value discussed in Reference [3], which controls the family-wise error rate (FWER, the chance of any false rejection of null hypotheses) at $\alpha =0.05$.
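Assuming (5) is the usual Benjamini-Hochberg-style SLOPE sequence ${\lambda}_{i}={\Phi}^{-1}(1-iq/(2p))$ (our paraphrase; the equation itself appears earlier in the paper), the sequence for one sub-problem with $d=p-1=499$ coefficients can be computed with the standard library alone:

```python
from statistics import NormalDist

def bh_lambda(d, q):
    """Benjamini-Hochberg-style sequence lambda_i = Phi^{-1}(1 - i*q/(2d)),
    i = 1..d (our sketch of the sequence referenced as (5))."""
    phi_inv = NormalDist().inv_cdf
    return [phi_inv(1.0 - (i + 1) * q / (2.0 * d)) for i in range(d)]

lam = bh_lambda(d=499, q=0.05)    # one sequence per sub-problem
print(len(lam), all(x >= y for x, y in zip(lam, lam[1:])))
```

The sequence is non-increasing by construction, as required by the SL1 norm.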

#### 5.1.1. Mean Square Estimation Error

Recall our discussion of the error bounds in Theorem 1 (off-diagonal, scaled by ${\widehat{\mathsf{\Theta}}}_{ii}$) and Theorem 3 (diagonal), followed by Section 4.1.3, where we roughly sketched the asymptotic behaviors of ${\widehat{\mathsf{\Theta}}}_{ii}$ and ${\widehat{\mathsf{\Theta}}}_{-i,i}$.

The top panel of Figure 1 shows the mean square error (MSE) between the estimated quantities and the true models we created, where estimates are obtained by our method nsSLOPE, with or without symmetrization at the end of Algorithm 1, as well as by gLASSO [4], which solves the ${\ell}_{1}$-based MLE problem (1) with a block coordinate descent strategy. The off-diagonal estimation was consistently good over all settings, while the estimation error of the diagonal entries kept improving as n increased, as we predicted in Section 4.1.3. We believe our estimation of the diagonal has room for improvement, for example, using a more accurate reverse-scaling to compensate for the normalization within the SLOPE procedure.

#### 5.1.2. FDR Control

A more exciting part of our results is the FDR control discussed in Theorem 4, and here we check how well the sparsity structure of the precision matrix is recovered. For comparison, we measure the power (the fraction of the true nonzero entries discovered by the algorithm) and the FDR (over the whole precision matrix).

The bottom panel of Figure 1 shows the statistics. In all cases, the empirical FDR was controlled around the desired level $0.05$ by all methods, although our method kept the level quite strictly while having significantly larger power than gLASSO. This is understandable, since the FWER control of gLASSO is often too restrictive, thereby limiting the power to detect true positive entries. It is also consistent with the results reported for SL1-based penalized regression [21,23]; this larger power is indeed one of the key benefits of SL1-based methods.

#### 5.2. Structure Discovery

To further investigate the structure discovery by nsSLOPE, we experimented with two different types of covariance matrices: one with a block-diagonal structure and another with a hub structure. The covariance matrix in the block-diagonal case was constructed using the data.simulation function from the `varclust` R package, with $n=100$, $SNR=1$, $K=4$, $numb.vars=10$ and $max.dim=3$ (this gives a data matrix with 100 examples and 40 variables). In the hub-structure case, we created a 20-dimensional covariance matrix with ones on the diagonal and $0.2$ in the first column and the last row of the matrix. Then we used the mvrnorm function from the `MASS` R package to sample 500 data points from a multivariate Gaussian distribution with zero mean and the constructed covariance matrix.

The true covariance matrix and the two estimates from gLASSO and nsSLOPE are shown in Figure 2. In nsSLOPE, the FDR control was based on the Benjamini-Hochberg sequence (5) with a target FDR value $q=0.05$; in gLASSO, the FWER control [3] was used with a target value $\alpha =0.05$. Our method nsSLOPE appears to be more sensitive in finding the true structure, although it may produce slightly more false discoveries.

## 6. Conclusions

We introduced a new procedure, based on the recently proposed sorted ${\ell}_{1}$ regularization, for sparse precision matrix estimation with more attractive statistical properties than the existing ${\ell}_{1}$-based frameworks. We believe there are many aspects of SL1 in graphical models to be investigated, especially when the inverse covariance has a more complex structure. Still, we hope our results will provide a basis for further research and practical applications.

Our selection of the $\lambda $ values in this paper requires independence assumptions on features or blocks of features. Although some extensions are possible [21], it would be desirable to consider a general framework, for example, based on Bayesian inference considering the posterior distribution derived from the loss and the regularizer [37,38], which enables us to evaluate the uncertainty of edge discovery and to find $\lambda $ values from data.

## Author Contributions

Conceptualization, S.L. and P.S.; methodology, S.L. and P.S.; software, S.L. and P.S.; validation, S.L., P.S. and M.B.; formal analysis, S.L.; writing–original draft preparation, S.L.; writing–review and editing, S.L.

## Funding

This work was supported by the research fund of Hanyang University (HY-2018-N).

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Yuan, M.; Lin, Y. Model selection and estimation in the Gaussian graphical model. *Biometrika* **2007**, *94*, 19–35.
- D’Aspremont, A.; Banerjee, O.; El Ghaoui, L. First-Order Methods for Sparse Covariance Selection. *SIAM J. Matrix Anal. Appl.* **2008**, *30*, 56–66.
- Banerjee, O.; Ghaoui, L.E.; d’Aspremont, A. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. *J. Mach. Learn. Res.* **2008**, *9*, 485–516.
- Friedman, J.; Hastie, T.; Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso. *Biostatistics* **2008**, *9*, 432–441.
- Tibshirani, R. Regression Shrinkage and Selection via the Lasso. *J. R. Stat. Soc. (Ser. B)* **1996**, *58*, 267–288.
- Oztoprak, F.; Nocedal, J.; Rennie, S.; Olsen, P.A. Newton-Like Methods for Sparse Inverse Covariance Estimation. In *Advances in Neural Information Processing Systems 25*; MIT Press: Cambridge, MA, USA, 2012; pp. 764–772.
- Rolfs, B.; Rajaratnam, B.; Guillot, D.; Wong, I.; Maleki, A. Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation. In *Advances in Neural Information Processing Systems 25*; MIT Press: Cambridge, MA, USA, 2012; pp. 1574–1582.
- Hsieh, C.J.; Dhillon, I.S.; Ravikumar, P.K.; Sustik, M.A. Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation. In *Advances in Neural Information Processing Systems 24*; MIT Press: Cambridge, MA, USA, 2011; pp. 2330–2338.
- Hsieh, C.J.; Banerjee, A.; Dhillon, I.S.; Ravikumar, P.K. A Divide-and-Conquer Method for Sparse Inverse Covariance Estimation. In *Advances in Neural Information Processing Systems 25*; MIT Press: Cambridge, MA, USA, 2012; pp. 2330–2338.
- Hsieh, C.J.; Sustik, M.A.; Dhillon, I.; Ravikumar, P.; Poldrack, R. BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables. In *Advances in Neural Information Processing Systems 26*; MIT Press: Cambridge, MA, USA, 2013; pp. 3165–3173.
- Mazumder, R.; Hastie, T. Exact Covariance Thresholding into Connected Components for Large-scale Graphical Lasso. *J. Mach. Learn. Res.* **2012**, *13*, 781–794.
- Treister, E.; Turek, J.S. A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation. In *Advances in Neural Information Processing Systems 27*; MIT Press: Cambridge, MA, USA, 2014; pp. 927–935.
- Zhang, R.; Fattahi, S.; Sojoudi, S. Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion; International Conference on Machine Learning, PMLR: Stockholm, Sweden, 2018.
- Meinshausen, N.; Bühlmann, P. High-dimensional graphs and variable selection with the Lasso. *Ann. Stat.* **2006**, *34*, 1436–1462.
- Meinshausen, N.; Bühlmann, P. Stability selection. *J. R. Stat. Soc. (Ser. B)* **2010**, *72*, 417–473.
- Rothman, A.J.; Bickel, P.J.; Levina, E.; Zhu, J. Sparse permutation invariant covariance estimation. *Electron. J. Stat.* **2008**, *2*, 494–515.
- Lam, C.; Fan, J. Sparsistency and rates of convergence in large covariance matrix estimation. *Ann. Stat.* **2009**, *37*, 4254–4278.
- Raskutti, G.; Yu, B.; Wainwright, M.J.; Ravikumar, P.K. Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of ${\ell}_{1}$-regularized MLE. In *Advances in Neural Information Processing Systems 21*; MIT Press: Cambridge, MA, USA, 2009; pp. 1329–1336.
- Yuan, M. High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. *J. Mach. Learn. Res.* **2010**, *11*, 2261–2286.
- Fattahi, S.; Zhang, R.Y.; Sojoudi, S. Sparse Inverse Covariance Estimation for Chordal Structures. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 837–844.
- Bogdan, M.; van den Berg, E.; Sabatti, C.; Su, W.; Candes, E.J. SLOPE—Adaptive Variable Selection via Convex Optimization. *Ann. Appl. Stat.* **2015**, *9*, 1103–1140.
- Brzyski, D.; Su, W.; Bogdan, M. Group SLOPE—Adaptive selection of groups of predictors. arXiv **2015**, arXiv:1511.09078.
- Su, W.; Candès, E. SLOPE is adaptive to unknown sparsity and asymptotically minimax. *Ann. Stat.* **2016**, *44*, 1038–1068.
- Bondell, H.D.; Reich, B.J. Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR. *Biometrics* **2008**, *64*, 115–123.
- Figueiredo, M.A.T.; Nowak, R.D. Ordered Weighted L1 Regularized Regression with Strongly Correlated Covariates: Theoretical Aspects. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), Cadiz, Spain, 9–11 May 2016; pp. 930–938.
- Lee, S.; Brzyski, D.; Bogdan, M. Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with the Ordered l1-Norm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), Cadiz, Spain, 9–11 May 2016; Volume 51, pp. 780–789.
- Chen, S.; Banerjee, A. Structured Matrix Recovery via the Generalized Dantzig Selector. In *Advances in Neural Information Processing Systems 29*; Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 3252–3260.
- Bellec, P.C.; Lecué, G.; Tsybakov, A.B. Slope meets Lasso: Improved oracle bounds and optimality. arXiv **2017**, arXiv:1605.08651v3.
- Derumigny, A. Improved bounds for Square-Root Lasso and Square-Root Slope. *Electron. J. Stat.* **2018**, *12*, 741–766.
- Anderson, T.W. An Introduction to Multivariate Statistical Analysis; Wiley-Interscience: London, UK, 2003.
- Beck, A.; Tetruashvili, L. On the Convergence of Block Coordinate Descent Type Methods. *SIAM J. Optim.* **2013**, *23*, 2037–2060.
- Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. *SIAM J. Imaging Sci.* **2009**, *2*, 183–202.
- Nesterov, Y. A Method of Solving a Convex Programming Problem with Convergence Rate $O(1/{k}^{2})$. *Soviet Math. Dokl.* **1983**, *27*, 372–376.
- Razaviyayn, M.; Hong, M.; Luo, Z.Q. A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization. *SIAM J. Optim.* **2013**, *23*, 1126–1153.
- Figueiredo, M.; Nowak, R. Sparse estimation with strongly correlated variables using ordered weighted ${\ell}_{1}$ regularization. arXiv **2014**, arXiv:1409.4005.
- Johnstone, I.M. Chi-square oracle inequalities. *Lect. Notes-Monogr. Ser.* **2001**, *36*, 399–418.
- Park, T.; Casella, G. The Bayesian Lasso. *J. Am. Stat. Assoc.* **2008**, *103*, 681–686.
- Mallick, H.; Yi, N. A New Bayesian Lasso. *Stat. Its Interface* **2014**, *7*, 571–582.

**Figure 1.** Quality of estimation. Top: empirical false discovery rate (FDR) levels (averaged over 25 repetitions) and the nominal level of $q=0.05$ (solid black horizontal line). Bottom: mean square error of diagonal and off-diagonal entries of the precision matrix. $p=500$ was fixed for both panels and $n=100$, 200, 300 and 400 were tried. (“nsSLOPE”: nsSLOPE without symmetrization; “+symm”: with symmetrization; and gLASSO.)

**Figure 2.** Examples of structure discovery. Top: a covariance matrix with block-diagonal structure. Bottom: a hub structure. The true covariance matrix is shown on the left, and the gLASSO and nsSLOPE estimates (only the nonzero patterns) of the precision matrix are shown in the middle and right panels, respectively.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).