Parameter Estimation with the Ordered $\ell_{2}$ Regularization via an Alternating Direction Method of Multipliers

Regularization is a popular technique in machine learning for model estimation and for avoiding overfitting. Prior studies have found that modern ordered regularization can be more effective than traditional regularization in handling highly correlated, high-dimensional data, because ordered regularization can reject irrelevant variables and yield an accurate estimate of the parameters. How to scale up ordered regularization problems to large-scale training data remains an open question. This paper explores parameter estimation with the ordered $\ell_{2}$ regularization via the Alternating Direction Method of Multipliers (ADMM), called ADMM-O$\ell_{2}$. The advantages of ADMM-O$\ell_{2}$ include (i) scaling the ordered $\ell_{2}$ up to large-scale datasets, (ii) estimating parameters correctly by automatically excluding irrelevant variables, and (iii) a fast convergence rate. Experimental results on both synthetic and real data indicate that ADMM-O$\ell_{2}$ performs better than, or comparably to, several state-of-the-art baselines.


Introduction
In the machine learning literature, one of the most important challenges involves estimating parameters accurately and selecting relevant variables from highly correlated, high-dimensional data. Researchers have observed many highly correlated features in high-dimensional data (Tibshirani, 1996). Models often overfit or underfit high-dimensional data because such data have a large number of variables of which only a few are actually relevant; most others are irrelevant or redundant. An underfitting model contributes to estimation bias (i.e., high bias and low variance) because it leaves out relevant variables, whereas an overfitting model raises estimation variance (i.e., low bias and high variance) because it includes irrelevant variables.
To illustrate an application of our proposed method, consider a study of gene expression data. Such a dataset is high-dimensional and contains highly correlated genes. Geneticists often want to determine which variants/genes contribute to changes in biological phenomena (e.g., increases in blood cholesterol level); the aim is therefore to explicitly identify all relevant variants. Penalized regularization models such as ℓ1, ℓ2, and so forth have recently become a topic of great interest within the machine learning, statistics (Tibshirani, 1996), and optimization (Bach et al., 2012) communities as classic approaches to parameter estimation. The ℓ1-based method is not a preferred selection method for groups of variables among which pairwise correlations are significant, because the lasso arbitrarily selects a single variable from the group without any consideration of which one to select (Efron et al., 2004). Furthermore, if the selected value of the regularization parameter is too small, the ℓ1-based method selects many irrelevant variables, degrading its performance; on the other hand, a large value of the parameter yields a large bias (Bogdan et al., 2013). Another point worth noting is that existing ℓ1 regularization methods are adaptive, computationally tractable, or distributed, but no single method has all three properties together. Therefore, the aim of this study is to develop a model for parameter estimation and for determining relevant variables in highly correlated, high-dimensional data based on the ordered ℓ2. This model has all three properties together: it is adaptive (it reduces the cost of including new relevant variables as more variables are added to the model, owing to its rank-based penalization), tractable (its solution can be computed in a practical amount of time, as opposed to a computationally intractable method, which takes prohibitively long to execute), and distributed.
Several adaptive and nonadaptive methods have been proposed for parameter estimation and variable selection in large-scale datasets, adopting different principles to estimate parameters. For example, an adaptive solution, the ordered ℓ1 (Bogdan et al., 2013), is a norm and therefore convex. In the ordered ℓ1, the regularization parameters are sorted in non-increasing order, so that regression coefficients are penalized according to their rank: coefficients closer to the top receive larger penalties. Pan et al. (2017) proposed a partial sorted ℓp norm, which is non-convex and non-smooth. In contrast, the ordered ℓ2 regularization is convex and smooth, just as the standard ℓ2 norm is (Azghani et al., 2015). Pan et al. (2017) considered values of p in 0 < p ≤ 1, which do not cover the ℓ2, ℓ∞, and related norms; they did not provide details of other partial sorted norms for p ≥ 2, and they used random projection with the partial sorted ℓp norm to complete the parameter estimation, whereas we use ADMM with the ordered ℓ2. A nonadaptive solution, the elastic net (Zou and Hastie, 2005), is a mixture of the ordinary ℓ1 and ℓ2; it is particularly useful when the number of predictors (p) is much larger than the number of observations (n) or in any situation where the predictor variables are correlated. Table 1 presents the important properties of the regularizers. As seen in Table 1, the ℓ2 and ordered ℓ2 regularizers are better suited than the ℓ1 and ordered ℓ1 regularizers to highly correlated, high-dimensional grouping data. The ordered ℓ2 encourages grouping, whereas most ℓ1-based methods promote sparsity. Here, grouping signifies a group of strongly correlated variables in high-dimensional data.
We use the ordered ℓ2 regularization in our method instead of the ℓ2 regularization because the ordered ℓ2 regularization is adaptive. Finally, ADMM behaves in a parallel manner when solving large-scale convex optimization problems. Our model employs ADMM and inherits the distributed properties of native ADMM; hence, our model is also distributed. Bogdan et al. (2013) did not provide details about how they applied ADMM to the ordered ℓ1 regularization.
In this paper, we propose "Parameter Estimation with the Ordered ℓ2 Regularization via ADMM", called ADMM-Oℓ2, to find the relevant parameters of a model. Just as ℓ2 regularization yields ridge regression, the ordered ℓ2 yields an ordered ridge regression. The main contribution of this paper is not to present a superior method but rather to introduce a quasi-version of the ℓ2 regularization method and to concurrently raise awareness of the existing methods. As part of this research, we introduce a modern ordered ℓ2 regularization method and prove that the square root of the ordered ℓ2 is a norm and thus convex; therefore, it is also tractable. In addition, we propose an ordered elastic net, which combines the widely used ordered ℓ1 penalty with the modern ordered ℓ2 penalty for ridge regression. To the best of our knowledge, this is one of the first methods to use the ordered ℓ2 regularization with ADMM for parameter estimation and variable selection. Sections 3 and 4 explain the integration of ADMM with the ordered ℓ2 in further detail.
The rest of the paper is arranged as follows. Related work is discussed in Section 2, and the ordered ℓ2 regularization is presented in Section 3. Section 4 describes the application of ADMM to the ordered ℓ2. Section 5 presents the experiments conducted. Finally, Section 6 closes the paper with a conclusion.


Related Work
Deng et al. (2013) presented efficient algorithms for group sparse optimization with mixed ℓ2,1 regularization for the estimation and reconstruction of signals; their technique is rooted in a variable-splitting strategy and ADMM. Zou and Hastie (2005) suggested the elastic net, a generalization of the lasso that is a linear combination of the ℓ1 and ℓ2 norms; it contributes to sparsity without permitting any coefficient to become too large. Candes and Tao (2007) introduced a new estimator, the Dantzig selector, for linear models in which the number of parameters exceeds the number of observations, and established optimal ℓ2 rate properties under a sparsity assumption. Chen et al. (2015) applied sparse embedding to ridge regression, obtaining solutions x̂ with ∥x̂ − x*∥₂ ≤ ε∥x*∥₂ small, where x* is optimal, in O(nnz(A) + n³/ε²) time, where nnz(A) is the number of nonzero entries of A. Recently, Bogdan et al. (2013) proposed an ordered ℓ1 regularization technique inspired by a statistical viewpoint, in particular by a focus on controlling the false discovery rate (FDR) for variable selection in linear regression. Our proposed method is similar but focuses on parameter estimation based on the ordered ℓ2 regularization and ADMM. Several methods have been proposed based on Bogdan et al. (2013) and similar ideas; for example, the Sorted L-One Penalized Estimation (SLOPE) model-fitting strategy regularizes least-squares estimates with rank-dependent penalty coefficients.
Zeng and Figueiredo (2014) proposed DWSL1 as a generalization of the octagonal shrinkage and clustering algorithm for regression (OSCAR) that aims to promote feature grouping without prior knowledge of the group structure. Pan et al. (2017) introduced an image restoration method based on random projection and a partial sorted ℓp norm. In their method, an input signal is decomposed into two components: a low-rank component, approximated by random projection, and a sparse component, recovered by the partial sorted ℓp norm. Our method can potentially be used in various other domains such as cyber security (Albanese et al., 2014) and recommendation (Amato et al., 2017).

ADMM
Researchers have paid a significant amount of attention to ADMM because of its capability of dealing with objective functions independently and simultaneously, and because it has proved to be a genuine fit for large-scale distributed data optimization. ADMM is not a new algorithm, however; it was first introduced in the mid-1970s (Glowinski and Marroco, 1975; Gabay and Mercier, 1976), with roots as far back as the mid-1950s, and it originated from the augmented Lagrangian method of multipliers (Hestenes, 1969). It became more popular after a series of widely read papers about ADMM were published. The classic ADMM algorithm applies to problems in the following "ADMM-ready" form:

min_{x,z} f(x) + g(z)  subject to  Ax + Bz = c,  (1)

where f and g are convex functions.
The wide range of applications has also inspired the study of the convergence properties of ADMM. Under mild assumptions, ADMM converges for all choices of the step size. Ghadimi et al. (2015) provided advice on tuning over-relaxed ADMM for quadratic problems. Deng and Yin (2016) established linear convergence results under the assumption of only a single strongly convex term, given that the linear operators A and B are full-rank matrices; these results bound the error as measured by an approximation to the primal-dual gap. Goldstein et al. (2014) created an accelerated version of ADMM that converges more quickly than traditional ADMM under the assumption that both objective functions are strongly convex. Yan and Yin (2016) explained in detail the different kinds of convergence properties of ADMM and the conditions required for convergence.

The Ordered ℓ 2 Regularization
The parameter estimation and variable selection method proposed in this paper is computationally manageable and adaptive. The procedure depends on the ordered ℓ2 regularization. Let λ = (λ1, λ2, . . . , λp) be a non-increasing sequence of positive scalars satisfying

λ1 ≥ λ2 ≥ . . . ≥ λp ≥ 0.  (2)

The ordered ℓ2 regularization of a vector x ∈ R^p when λ1 > 0 can be defined as

J_λ(x) = Σ_{k=1}^{p} λk x_(k)² = λ1 x_(1)² + λ2 x_(2)² + . . . + λp x_(p)²,  (3)

where λk = λ_BH(k) is generated by the BHq method (Benjamini and Hochberg, 1995), which produces an adaptive, non-increasing sequence of values. The details of λ_BH(k) are given in Section 4.2. For ease of presentation, we write λk in place of λ_BH(k) in the rest of the paper.
Here, x_(k) denotes the k-th order statistic of the magnitudes of x (David and Nagaraja, 2003); the subscript k enclosed in parentheses indicates the k-th largest magnitude in the sample. Suppose that x is a sample of size 4 with values x = (−2.1, −0.5, 3.2, 7.2). The order statistics of the squared magnitudes are x_(1)² = 7.2², x_(2)² = 3.2², x_(3)² = 2.1², x_(4)² = 0.5². The ordered ℓ2 regularization is thus the largest value of λ times the square of the largest entry of x in magnitude, plus the second largest value of λ times the square of the second largest entry, and so on. Let A ∈ R^{n×p} be a matrix and b ∈ R^n a vector. The ordered ℓ2 regularized loss minimization can be expressed as

min_x (1/2)∥Ax − b∥₂² + (1/2)J_λ(x).  (4)

Theorem 1 The square root of J_λ(x) (Equation (3)) is a norm on R^p, i.e., a function ∥·∥ : R^p → R satisfying the following three properties; consequently, Corollaries 1 and 2 below hold.
i (Positivity) ∥x∥ ≥ 0 for any x ∈ R^p, and ∥x∥ = 0 if and only if x = 0.
ii (Homogeneity) ∥cx∥ = |c| ∥x∥ for any x ∈ R^p and c ∈ R.
iii (Triangle inequality) ∥x + y∥ ≤ ∥x∥ + ∥y∥ for any x, y ∈ R^p. Note: ∥x∥ and ∥x∥₂ are used interchangeably.
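To make the definition concrete, the following pure-Python sketch (ours, not from the paper; the function name `ordered_l2_penalty` is our own) evaluates J_λ(x) by sorting the squared magnitudes in decreasing order and pairing them with the non-increasing sequence λ:

```python
def ordered_l2_penalty(x, lam):
    """Compute J_lambda(x) = sum_k lam[k] * x_(k)^2, where x_(k) is the
    k-th largest entry of x in absolute value and lam is non-increasing."""
    if len(x) != len(lam):
        raise ValueError("x and lam must have the same length")
    if any(lam[i] < lam[i + 1] for i in range(len(lam) - 1)):
        raise ValueError("lam must be non-increasing")
    # Sort squared magnitudes in decreasing order and pair them with lam.
    mags = sorted((v * v for v in x), reverse=True)
    return sum(l * m for l, m in zip(lam, mags))
```

On the worked example from the text, x = (−2.1, −0.5, 3.2, 7.2), the sorted squared magnitudes are 7.2², 3.2², 2.1², 0.5²; with all λk equal, the penalty reduces to that constant times ∥x∥₂² (cf. Corollary 2).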

Table 2: Notations used in this paper.

Notation | Explanation
Uppercase letter (e.g., A) | matrix
Lowercase letter (e.g., x) | vector
f | convex loss function
g | regularizer part (ℓ1, ℓ2, etc.)
∥·∥ | the ordered norm
Eq. | equation

Note: We often use the ordered ℓ2 norm/regularization, OL2, and ADMM-Oℓ2 interchangeably.
Corollary 2 When all the λk's take on an equal positive value, J_λ(x) reduces to that value times the square of the usual ℓ2 norm.
Proofs of the theorem and corollaries are provided in Appendix A. Table 2 shows the notations used in this paper and their meanings.

The Ordered Ridge Regression
We propose an ordered ridge regression in Equation (5); we call it the ordered ridge regression because we use the ordered ℓ2 regularization in the objective function instead of the standard ℓ2 regularization. Ridge regression is commonly used for parameter estimation and variable selection, particularly when data are strongly correlated and high-dimensional. The ordered ridge regression can be defined as

min_x (1/2)∥Ax − b∥₂² + (1/2)J_λ(x),  (5)

where x ∈ R^p denotes the unknown regression coefficients, A ∈ R^{n×p} (p ≫ n) is a known matrix, b ∈ R^n represents a response vector, and J_λ(x) is the ordered ℓ2 regularization. The optimal parameter choice for the ordered ridge regression is much more stable than that for a regular lasso; it also achieves adaptivity in the following senses.
i For decreasing (λk), each parameter λk marks the entry or removal of some variable from the current model (i.e., its coefficient becomes nonzero or zero); thus, the set of coefficients in the model remains stable. We achieve this by putting threshold values on λk (Bogdan et al. (2013), Section 1.4).
ii We observed that the price of including new variables declines as more variables are added to the model as λk decreases.

Applying ADMM to the Ordered Ridge Regression
In order to apply ADMM to the problem in Equation (5), we first transform it into an equivalent form of the problem in Equation (1) by introducing an auxiliary variable z:

min_{x,z} (1/2)∥Ax − b∥₂² + (1/2)J_λ(z)  subject to  x − z = 0.  (6)
We can see that Equation (6) has two blocks of variables (i.e., x and z), and its objective function is separable in the form of Equation (1), with f(x) = (1/2)∥Ax − b∥₂² and g(z) = (1/2)Σ_k λk z_(k)², where A = I and B = −I. Therefore, ADMM is applicable to Equation (6). The augmented Lagrangian of Equation (6) can be defined as

L_ρ(x, z, y) = (1/2)∥Ax − b∥₂² + (1/2)J_λ(z) + yᵀ(x − z) + (ρ/2)∥x − z∥₂²,  (7)

where y ∈ R^p is a Lagrangian multiplier and ρ > 0 denotes a penalty parameter. Next, we apply ADMM to the augmented Lagrangian of Equation (7), which renders the ADMM iterations as

x^{k+1} = argmin_x L_ρ(x, z^k, y^k),
z^{k+1} = argmin_z L_ρ(x^{k+1}, z, y^k),  (8)
y^{k+1} = y^k + ρ(x^{k+1} − z^{k+1}).

Proximal gradient methods are well known for solving convex optimization problems in which the objective function is the sum of a smooth loss function and a non-smooth penalty function (Schmidt et al., 2011; Parikh et al., 2014); a well-studied example is ℓ1-regularized least squares (Bogdan et al., 2013; Tibshirani, 1996). The ordered ℓ1 norm is convex but not smooth, which is why those researchers used a proximal gradient method. In contrast, we employ ADMM because it can solve convex optimization problems in which the objective is either the sum of a smooth loss and a non-smooth penalty or the sum of a smooth loss and a smooth penalty, and because ADMM also supports parallelism. In the ordered ridge regression, both the loss and the penalty function are smooth, whereas in the ordered elastic net the loss function is smooth and the penalty function is non-smooth.

Scaled Form
We can also express ADMM in scaled form by merging the linear and quadratic terms of the augmented Lagrangian and introducing a scaled dual variable, which is shorter and more convenient. The scaled-form ADMM iterations corresponding to Equation (8) can be expressed as

x^{k+1} = argmin_x ( (1/2)∥Ax − b∥₂² + (ρ/2)∥x − z^k + u^k∥₂² ),  (9a)
z^{k+1} = argmin_z ( (1/2)J_λ(z) + (ρ/2)∥x^{k+1} − z + u^k∥₂² ),  (9b)
u^{k+1} = u^k + x^{k+1} − z^{k+1},  (9c)

where u = (1/ρ)y is the scaled dual variable. Next, we minimize the augmented Lagrangian in Equation (7) with respect to x and z successively. Minimizing Equation (7) with respect to x yields the x-subproblem of Equation (9a):

min_x (1/2)∥Ax − b∥₂² + (ρ/2)∥x − z^k + u^k∥₂².  (10a)

Taking the derivative of Equation (10a) with respect to x and setting it equal to zero (this is a convex problem), the minimization reduces to solving the linear system

(AᵀA + ρI) x^{k+1} = Aᵀb + ρ(z^k − u^k).  (10b)

Minimizing Equation (7) with respect to z, we obtain Equation (9b), which results in the z-subproblem

min_z (1/2)Σ_k λk z_(k)² + (ρ/2)∥x^{k+1} − z + u^k∥₂².  (11a)

Taking the derivative of Equation (11a) with respect to z and setting it equal to zero (again a convex problem), the minimization reduces to solving the linear system

(Λ + ρI) z^{k+1} = ρ(x^{k+1} + u^k),  (11b)

where Λ is the diagonal matrix of the λk's matched to the rank, in magnitude, of the corresponding entries of x^{k+1} + u^k. Finally, the multiplier (i.e., the scaled dual variable u) is updated as

u^{k+1} = u^k + x^{k+1} − z^{k+1}.  (12)

Optimality conditions: primal and dual feasibility are necessary and sufficient optimality conditions for ADMM on Equation (6). The dual residual (S^{k+1}) and primal residual (γ^{k+1}) can be defined as

S^{k+1} = ρ(z^{k+1} − z^k),   γ^{k+1} = x^{k+1} − z^{k+1}.  (13)

Stopping criteria: the stopping criterion for the ordered ridge regression is that the primal and dual residuals must be small:

∥γ^k∥₂ ≤ ε^pri = √p · ε_abs + ε_rel · max{∥x^k∥₂, ∥z^k∥₂},
∥S^k∥₂ ≤ ε^dual = √p · ε_abs + ε_rel · ∥ρ u^k∥₂.

We set ε_abs = 10⁻⁴ and ε_rel = 10⁻². For further details about this choice, see Reference (Section 3).
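The scaled iterations can be sketched numerically. The toy implementation below is ours, not the paper's Scala/Spark code; all names are assumptions. It fixes p = 2 so the x-subproblem's linear system can be solved by Cramer's rule, and it applies a rank-matched shrinkage for the z-subproblem under the simplifying assumption that the magnitude ordering of the entries survives the shrinkage:

```python
def admm_ordered_ridge(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM sketch for min 0.5*||Ax-b||^2 + 0.5*sum_k lam[k]*x_(k)^2
    with p = 2 features; lam must be non-increasing."""
    p = 2
    n = len(A)
    # Precompute A^T A + rho*I and A^T b once, outside the loop (cacheable).
    AtA = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(p)]
           for r in range(p)]
    M = [[AtA[0][0] + rho, AtA[0][1]], [AtA[1][0], AtA[1][1] + rho]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    Atb = [sum(A[i][r] * b[i] for i in range(n)) for r in range(p)]
    x, z, u = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        # x-update: solve (A^T A + rho*I) x = A^T b + rho*(z - u) by Cramer.
        r0 = Atb[0] + rho * (z[0] - u[0])
        r1 = Atb[1] + rho * (z[1] - u[1])
        x = [(M[1][1] * r0 - M[0][1] * r1) / det,
             (M[0][0] * r1 - M[1][0] * r0) / det]
        # z-update: z_i = rho*v_i / (lam_rank + rho), pairing the largest |v_i|
        # with the largest lam (assumes the ordering survives the shrinkage).
        v = [x[0] + u[0], x[1] + u[1]]
        order = sorted(range(p), key=lambda i: -abs(v[i]))
        z = list(v)
        for rank, i in enumerate(order):
            z[i] = rho * v[i] / (lam[rank] + rho)
        # u-update: scaled dual variable.
        u = [u[0] + x[0] - z[0], u[1] + x[1] - z[1]]
    return x
```

With all λk equal to λ, the fixed point coincides with the ordinary ridge solution (AᵀA + λI)⁻¹Aᵀb, which gives a convenient sanity check on the implementation.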

Over-Relaxed ADMM Algorithm
By comparing Equations (1) and (6), we can write Equation (6) in an ADMM-ready form and solve it with the over-relaxed ADMM in Algorithm 1 (Over-relaxed ADMM for the ordered ridge regression). Algorithm 1 computes an exact solution for each subproblem, and its convergence is guaranteed by existing ADMM theory (Glowinski, 2008; Deng and Yin, 2016; Goldstein et al., 2014). The most computationally intensive operation is the matrix inversion in line 3 of Algorithm 1. Here, the matrix A is high-dimensional (p ≫ n): forming (AᵀA + ρI) takes O(np²), and its inverse (AᵀA + ρI)⁻¹ takes O(p³). We compute (AᵀA + ρI)⁻¹ and Aᵀb outside the loop; we are then left with the product (AᵀA + ρI)⁻¹(Aᵀb + ρ(z^k − u^k)), which is O(p²) per iteration, while the additions and subtractions take O(p). Since (AᵀA + ρI)⁻¹ is cacheable, the overall complexity is heuristically O(np² + p³) + k · O(p² + p), where k is the number of iterations.
Generating the ordered parameter (λk): as mentioned at the outset, we set out to identify a computationally tractable and adaptive solution, and the regularizing sequence plays a vital role in achieving this goal. We therefore generate adaptive values of (λk) such that regression coefficients are penalized according to their respective rank. Our regularizing sequence procedure is motivated by the BHq procedure (Benjamini and Hochberg, 1995). The BHq method generates the (λk) sequence as

λ_BH(k) = Φ⁻¹(1 − k · q / (2p)),  (14)

where k > 0, Φ⁻¹(α) is the α-th quantile of the standard normal distribution, and q ∈ [0, 1] is a parameter. We start with λ1 = λ_BH(1) as the initial value of the ordered parameter sequence. Algorithm 2 presents the method for generating the sorted (λk). The difference between lines 5 and 6 of Algorithm 2 is that line 5 is for low-dimensional (p ≤ n) data, whereas line 6 is for high-dimensional data (p ≫ n). Finally, we use the ordered (λk) from Algorithm 2 (i.e., the adaptive values of (λk)) in the ordered ridge regression of Equations (6) and (7) instead of an ordinary scalar λ. This makes the ordered ℓ2 adaptive and distinguishes it from the standard ℓ2.
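As a sketch of this step (our code, assuming the BHq form λ_BH(k) = Φ⁻¹(1 − kq/(2p)) used in SLOPE-type sequences; the function name is our own), the sequence can be generated with the standard-normal quantile function from Python's standard library:

```python
from statistics import NormalDist

def bhq_lambdas(p, q):
    """Generate the BHq regularizing sequence lam[k] = Phi^{-1}(1 - k*q/(2p))
    for k = 1..p. The sequence is non-increasing for q in (0, 1)."""
    phi_inv = NormalDist().inv_cdf  # quantile of the standard normal
    return [phi_inv(1 - k * q / (2 * p)) for k in range(1, p + 1)]
```

Consistent with the discussion of Figure 1, a larger q yields smaller λ values, and the whole sequence is non-increasing in k.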

The Ordered Elastic Net
A standard ℓ2 (or ordered ℓ2) regularization is a commonly used tool for estimating parameters in microarray datasets (strongly correlated grouping data). However, a key drawback of the ℓ2 regularization is that it cannot automatically select relevant variables, because it shrinks coefficient estimates toward zero but not exactly to zero (James et al. (2013), Chapter 6.2). On the other hand, a standard ℓ1 (or ordered ℓ1) regularization can automatically determine relevant variables owing to its sparsity property, but the ℓ1 regularization also has a limitation: when different variables are highly correlated, it tends to pick only a few of them and to remove the remaining ones, even important ones that might be better predictors. To overcome the limitations of both the ℓ1 and ℓ2 regularization, we propose another method, the ordered elastic net (the ordered ℓ1,2 regularization, ADMM-Oℓ1,2, or Oℓ1,2), similar to the standard elastic net (Zou and Hastie, 2005), obtained by combining the ordered ℓ1 regularization with the ordered ℓ2 regularization. The ordered ℓ1,2 regularization automatically selects relevant variables in a way similar to the ordered ℓ1 regularization; in addition, it can select groups of strongly correlated variables. The key difference between the ordered elastic net and the standard elastic net is the regularization term: we apply the ordered ℓ1 and ℓ2 regularization instead of the standard ℓ1 and ℓ2 regularization, so the ordered elastic net inherits the sparsity, grouping, and adaptive properties of the ordered ℓ1 and ℓ2 regularization. We employ ADMM to solve the ordered ℓ1,2 regularized loss minimization:

min_x (1/2)∥Ax − b∥₂² + α Σ_k λk |x|_(k) + ((1 − α)/2) Σ_k λk x_(k)².

For simplicity, let λ1 = αλ_BH and λ2 = (1 − α)λ_BH.
The ordered elastic net then becomes

min_x (1/2)∥Ax − b∥₂² + Σ_k λ1,k |x|_(k) + (1/2) Σ_k λ2,k x_(k)².

Now, we can transform the ordered elastic net equation above into an equivalent form of Equation (1) by introducing an auxiliary variable z:

min_{x,z} (1/2)∥Ax − b∥₂² + Σ_k λ1,k |z|_(k) + (1/2) Σ_k λ2,k z_(k)²  subject to  x − z = 0.  (16)
We can minimize Equation (16) with respect to x and z in the same way as we minimized the ordered ℓ2 regularization in Sections 4.1 and 4.2; therefore, we present the final results directly, without detailed derivations. The subscript + denotes the positive part, i.e., (t)+ = max(0, t).
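The resulting z-update can be sketched as follows (our code, not the paper's; it combines the soft-thresholding from the ordered ℓ1 part with the ridge-style scaling from the ordered ℓ2 part, rank-matching λ1,k and λ2,k to the magnitude order of v = x^{k+1} + u^k, under the simplifying assumption that this ordering survives the shrinkage):

```python
def oen_z_update(v, lam1, lam2, rho):
    """Rank-matched z-update sketch for the ordered elastic net:
    z_i = sign(v_i) * max(0, rho*|v_i| - lam1[rank]) / (lam2[rank] + rho),
    pairing the largest |v_i| with the largest penalties."""
    order = sorted(range(len(v)), key=lambda i: -abs(v[i]))
    z = [0.0] * len(v)
    for rank, i in enumerate(order):
        # The (.)_+ operation: soft-threshold by lam1, then scale by lam2 + rho.
        mag = max(0.0, rho * abs(v[i]) - lam1[rank]) / (lam2[rank] + rho)
        z[i] = mag if v[i] >= 0 else -mag
    return z
```

With λ2 = 0 and ρ = 1 this reduces to plain soft-thresholding (the ordered ℓ1 part), and with λ1 = 0 it reduces to the ridge-style shrinkage of the ordered ℓ2 z-update.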

Experiments
A series of experiments was conducted on both simulated and real data to examine the performance of the proposed method. In this section, we first discuss how to select a correct sequence of (λk)'s. Second, an experiment on synthetic data is presented that describes the convergence of the lasso, SortedL1, ADMM-Oℓ2, and ADMM-Oℓ1,2. Finally, the proposed method is applied to a real feature selection dataset. The performance of the ADMM-Oℓ1,2 method is analyzed in comparison with two state-of-the-art methods: the lasso and SortedL1. These two methods are chosen for comparison because they are very similar to the ADMM-Oℓ1,2 method, except that they use the regular lasso and the ordered lasso, respectively, whereas the ADMM-Oℓ1,2 model employs the ordered ℓ1,2 regularization with ADMM.
[Figure 1: the solid line of λk is given by Equation (14), while the dashed and dotted lines are given by Equation (15) for n = p and n = 2p, respectively.]
Experimental setting: the algorithms were implemented in Scala on Apache Spark™, in both distributed and non-distributed versions. The distributed experiments were carried out on a cluster of virtual machines with four nodes (one master and three slaves); each node had 10 GB of memory, 8 cores, CentOS release 6.2, and amd64:core-4.0-noarch, with Apache Spark™ 1.5.1 deployed. We used IntelliJ IDEA 15 Ultimate as the Scala editor, the interactive build tool sbt version 0.13.8, and Scala version 2.10.4. The standalone machine was a Lenovo desktop running Windows 7 Ultimate with an Intel™ Core™ i3 Duo 3.20 GHz CPU and 4 GB of memory. We used MATLAB™ version 8.2.0.701 on a single machine to draw all figures. The source code for the lasso, SortedL1, and ADMM-Oℓ1,2 is available in References (Boyd, 2011; Bogdan, 2015; Anonymous, 2017), respectively.

Adjusting the Regularizing Sequence (λk) for the Ordered Ridge Regression
Figure 1 was drawn using Algorithm 2 with p = 5000. As seen in Figure 1, when the value of the parameter q becomes larger (q = 0.4), the sequence (λk) decreases, while (λk) increases for a small value of q = 0.055. The goal is to obtain a non-increasing sequence (λk) by adjusting the value of q, which stimulates convergence. Here, adjusting means tuning the value of the parameter q in the BHq procedure to yield a suitable sequence (λk) that improves performance.

Experimental Results of Synthetic Data
In this section, numerical examples show the convergence of ADMM-Oℓ1,2, ADMM-Oℓ2, and other methods. A small, dense example of the ordered ℓ2 regularization is examined, where the feature matrix A has n = 1500 examples and p = 5000 features. The synthetic data are generated as follows: create a matrix A with entries A_{i,j} drawn from N(0, 1) and then normalize the columns of A to have unit ℓ2 norm. The vector x0 ∈ R^p is generated with each entry sampled from the Gaussian distribution N(0, 0.02). The label b is calculated as b = A x0 + v, where v ∼ N(0, 10⁻³ · I) is Gaussian noise. A penalty parameter ρ = 1.0, an over-relaxation parameter α = 1.0, and termination tolerances ε_abs = 10⁻⁴ and ε_rel = 10⁻² are used. The variables u⁰ ∈ R^p and z⁰ ∈ R^p are initialized to zero, and λ ∈ R^p is a non-increasing ordered sequence generated according to Section 5.1 and Algorithm 2. Figure 2a,b shows the convergence of ADMM-Oℓ2 and ADMM-Oℓ1,2, respectively, and Figure 3a,b shows the convergence of the ordered ℓ1 regularization and the lasso, respectively. It can be seen from Figures 2 and 3 that the ordered ℓ2 regularization converges faster than all the other algorithms: the ordered ℓ1, lasso, ordered ℓ1,2, and ordered ℓ2 take fewer than 80, 30, 30, and 10 iterations, respectively, to converge. The dual is not guaranteed to be feasible, so the level of dual infeasibility must also be computed; the numerical experiment for the ordered ℓ1 regularization terminates whenever both the infeasibility (ŵ) and the relative primal-dual gap (δ(b)) fall below their tolerances (TolInfeas ε_infeas = 10⁻⁶ and TolRelGap ε_gap = 10⁻⁶, respectively). The ordered ℓ1 regularization harnesses the synthetic data provided by Bogdan et al. (2013). The same data are generated for the lasso as for the ordered ℓ2 regularization, except for the initial value of λ: for the lasso, we set λ = 0.1 · λ_max, where λ_max = ∥Aᵀb∥_∞. We also use 10-fold cross-validation (CV) with the lasso.
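The data-generation recipe above can be sketched in pure Python (our code; the text's N(0, 0.02) is ambiguous between variance and standard deviation, so we treat 0.02 as the variance, which is an assumption):

```python
import math
import random

def make_synthetic(n, p, seed=0):
    """Generate A (n x p) with N(0,1) entries, normalize columns to unit
    l2 norm, draw x0 entrywise from N(0, 0.02) (0.02 taken as variance),
    and set b = A @ x0 + v with v ~ N(0, 1e-3 * I)."""
    rng = random.Random(seed)
    A = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
    for j in range(p):  # normalize each column to unit l2 norm
        norm = math.sqrt(sum(A[i][j] ** 2 for i in range(n)))
        for i in range(n):
            A[i][j] /= norm
    x0 = [rng.gauss(0.0, math.sqrt(0.02)) for _ in range(p)]
    b = [sum(A[i][j] * x0[j] for j in range(p)) + rng.gauss(0.0, math.sqrt(1e-3))
         for i in range(n)]
    return A, x0, b
```

The generated A, b, together with a λ sequence from Algorithm 2, are what the ADMM iterations of Section 4 consume.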

Experimental Results of Real Data
Variable selection difficulty arises when the number of features (p) is greater than the number of instances (n); the proposed method genuinely handles these types of issues. The practical applications of the ADMM method span many domains, such as computer vision and graphics (Liu et al., 2012), analysis of biological data (Bien et al., 2013; Danaher et al., 2014), and smart electric grids (Kraning et al., 2014; Kekatos and Giannakis, 2012). A biological leukemia dataset (Chih-Jen, 2017) was used to demonstrate the performance of the proposed method. Leukemia is a type of cancer that impairs the body's ability to build healthy blood cells and begins in the bone marrow. There are many types of leukemia, such as acute lymphoblastic leukemia, acute myeloid leukemia, and chronic lymphocytic leukemia; the following two types are used in this experiment: acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). The leukemia dataset consists of 7129 genes and 72 samples (Golub et al., 1999). We randomly split the data into training and test sets. The training set contains 38 samples, among which 27 are type I (ALL) and 11 are type II (AML); the remaining 34 samples, 20 type I (ALL) and 14 type II (AML), allow us to test the prediction accuracy. The data were labeled according to the type of leukemia, so before applying the ordered elastic net, the type of leukemia is converted to a (−1, 1) response y (ALL = −1, AML = 1). The predicted response ŷ is set to 1 if ŷ > 0; otherwise, it is set to −1. λ ∈ R^p is a non-increasing ordered sequence generated according to Section 5.1 and Algorithm 2; for the regular lasso, λ is a single scalar value generated using Equation (14). We use α = 0.1 for the leukemia dataset. All other settings are the same as in the experiment with synthetic data.
Table 3 shows the experimental results on the leukemia dataset for the different types of regularization. The lowest average mean square error (MSE) is achieved by the ordered ℓ2, followed by the ordered ℓ1,2 and the lasso, while the highest average MSE is seen for the ordered ℓ1. Table 3 also makes clear that the ordered ℓ2 converges the fastest among all the regularizations, the second fastest being the ordered ℓ1,2 and the slowest the ordered ℓ1. The ordered ℓ2 takes around 190 iterations and around 0.15 s on average to converge; the ordered ℓ1,2, the ordered ℓ1, and the lasso take around 1381, 10,000, and 10,000 iterations, respectively, and around 1.0, 14.0, and 5.0 s, respectively. It can also be seen from Table 3 that the ordered ℓ2 selects all the variables, whereas the goal is to select only the relevant variables from a strongly correlated, high-dimensional dataset; this is why we proposed the ordered elastic net, which selects only relevant variables and discards irrelevant ones. As can be seen from Table 3, the average MSE, time, and iterations of the ordered ℓ1 regularization and the lasso are significantly higher than those of the ordered ℓ1,2 regularization, although the ordered ℓ1,2 regularization selects more genes on average than the ordered ℓ1 regularization and the lasso: the ordered ℓ1 and the lasso select on average around 84 and 7 variables, respectively, whereas the ordered ℓ1,2 selects on average around 107 variables. The lasso performs poorly on the leukemia dataset because strongly correlated variables are present in it. In general, the ordered elastic net performs better than the ordered ℓ1 and the lasso. Figure 4 shows the ordered elastic net solution paths and the variable selection results.

Conclusions
In this paper, we presented a method for optimizing an ordered ℓ2 problem under an ADMM framework, called ADMM-Oℓ2. As an implementation of ADMM-Oℓ2, ridge regression with the ordered ℓ2 regularization was shown. We also presented a method for variable selection, called ADMM-Oℓ1,2, which employs the ordered ℓ1 and ℓ2. We see the ordered ℓ1,2 as a generalization of the ordered ℓ1, which has been shown to be an important tool for model fitting, feature selection, and parameter estimation. Experimental results show that the ADMM-Oℓ1,2 method correctly estimates parameters, selects relevant variables, and excludes irrelevant variables for microarray data. [Figure 4: the ordered elastic net model is given by the fit at an average of 1380.6 iterations with an average of 106.6 selected genes (indicated by a dotted line), on the input leukemia data.] Our method is also computationally tractable, adaptive, and distributed. The ordered ℓ2 regularization is convex and can be optimized efficiently with a fast convergence rate. Additionally, we have shown that our algorithm has complexity O(np² + p³) + k · O(p² + p) heuristically, where k is the number of iterations. In future work, we plan to apply our method to other regularization models with complex penalties.