
# An Algebraic-Based Primal–Dual Interior-Point Algorithm for Rotated Quadratic Cone Optimization

by
Karima Tamsaouete
and
Baha Alzalg
*,†
Department of Mathematics, The University of Jordan, Amman 11942, Jordan
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Computation 2023, 11(3), 50; https://doi.org/10.3390/computation11030050
Submission received: 21 January 2023 / Revised: 18 February 2023 / Accepted: 22 February 2023 / Published: 2 March 2023

## Abstract

In rotated quadratic cone programming problems, we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of rotated quadratic cones. In this paper, we introduce the rotated quadratic cone programming problems as a “self-made” class of optimization problems. Based on our own Euclidean Jordan algebra, we present a glimpse of the duality theory associated with these problems and develop a special-purpose primal–dual interior-point algorithm for solving them. The efficiency of the proposed algorithm is shown by providing some numerical examples.

## 1. Introduction

Rotated quadratic cone programming (RQCP) problems are convex conic optimization problems [1,2,3,4,5,6] in which we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of rotated quadratic cones, where the nth-dimensional rotated quadratic cone is defined as
$𝓡_+^n \triangleq \{ x = (x_1; x_2; \bar{x}) \in R_+ \times R_+ \times R^{n-2} : x_1 x_2 \geq \|\bar{x}\|^2 \},$
where $R_+$ is the set of positive real numbers and $\|\cdot\|$ is the Euclidean norm.
Many optimization problems can be formulated as RQCPs (see, for example, Section 2.3 in [7] and Section 4 in [8]), including, but not limited to, the problem of minimizing the harmonic mean of positive affine functions, the problem of maximizing the geometric mean of non-negative affine functions, the logarithmic Tchebychev approximation problem, problems involving fractional quadratic functions, problems with inequalities involving rational powers, problems with inequalities involving p-norms, and problems involving pairs of quadratic forms (such as minimum-volume covering ellipsoid problems).
It is known that the rotated quadratic cone is mapped onto the second-order cone under a linear transformation. In fact, the restricted hyperbolic constraint $x_1 x_2 \geq \|\bar{x}\|^2$ (with $x_1, x_2 \geq 0$) is equivalent to the set of linear and second-order cone constraints $u = x_1 + x_2$, $v = x_1 - x_2$, $w = (v; 2\bar{x})$, and $u \geq \|w\|$. Based on this observation, all earlier work on RQCP problems has converted them into second-order cone programming problems. While this conversion can be easier than developing special-purpose algorithms for solving RQCPs, it may not always be the cheapest approach in terms of computational cost.
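As a quick sanity check of this equivalence, the following sketch (our own illustration, not from the paper) samples random points satisfying the restricted hyperbolic constraint and verifies that the mapped point $(u; w)$ lands in the second-order cone:

```python
import math
import random

random.seed(0)
# Sample x with x1*x2 >= ||xbar||^2 and x1, x2 > 0 (a rotated-cone point),
# then check that (u; w) with u = x1 + x2, v = x1 - x2, w = (v; 2*xbar)
# satisfies the second-order cone constraint u >= ||w||.
for _ in range(100):
    xbar = [random.uniform(-1, 1) for _ in range(3)]
    x1 = random.uniform(0.1, 2.0)
    x2 = sum(t*t for t in xbar)/x1 + random.uniform(0.0, 1.0)  # x1*x2 >= ||xbar||^2
    u, v = x1 + x2, x1 - x2
    w = [v] + [2*t for t in xbar]
    # u^2 - ||w||^2 = 4*(x1*x2 - ||xbar||^2) >= 0 and u >= 0, hence u >= ||w||
    assert u >= math.sqrt(sum(t*t for t in w)) - 1e-12
```

The key identity behind the check is $u^2 - \|w\|^2 = (x_1 + x_2)^2 - (x_1 - x_2)^2 - 4\|\bar{x}\|^2 = 4(x_1 x_2 - \|\bar{x}\|^2)$.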
Mathematical optimization, together with evolutionary algorithms, is today a state-of-the-art methodology for solving hard problems in machine learning and artificial intelligence; see, for example, [9,10,11]. Going back in time, the introduction of interior-point methods (IPMs) in the 1980s was perhaps one of the most notable developments in the field of mathematical programming since its origination in the 1940s. Karmarkar [12] was the first to propose them for linear programming, and his work generated a stir due to the superiority of its polynomial complexity results over those of the simplex method. It then seemed natural to extend these methods, created for linear programming, to solve nonlinear programs.
Nesterov and Nemirovskii [3] laid the groundwork for IPMs to solve convex programming problems, where primal (and dual) IPMs based on the so-called self-concordant barrier functions were taken into consideration. Nesterov and Todd [4] later presented symmetric primal–dual IPMs for problems over a specific class of cones termed as self-scaled cones, allowing for a symmetric approach to the primal and dual problems.
We point out that Nesterov and Todd’s work [4] did not take a Jordan-algebraic approach; rather, Güler’s work [13] is credited with being the first to link Jordan algebras and optimization. Güler [13] noted that the family of self-scaled cones is the same as the family of the so-called symmetric cones, for which a thorough classification theory is available [14]. The characteristics of these algebras act as a key toolkit for the analysis of IPMs for optimization over symmetric cones. Due to their diverse applications, the most important classes of symmetric cone optimization problems are linear programming, second-order cone programming [7], and semi-definite programming [15] (see also Part IV in [16], which gives a thorough presentation of these three classes of optimization problems). Several IPMs have been developed for these classes of conic optimization problems; see, for example, [2,7,15,17,18,19,20,21,22,23,24].
There are two classes of IPMs for solving linear and non-linear convex optimization problems. The first class solely uses dual or primal methods (see, for example, [25,26,27]). The second class is based on primal–dual methods, which were developed by [23] and [24] and are more useful and efficient than the first. These methods involve applying Newton’s method to the Karush–Kuhn–Tucker (KKT) system up until a convergence condition is satisfied.
In [28], the authors set up the Euclidean Jordan algebra (EJA) associated with the rotated quadratic cone and presented several spectral and algebraic characteristics of this EJA, finding that the rotated quadratic cone is the cone of squares of some EJA (which confirms that it is a symmetric cone). To our knowledge, no specialized algorithms exist for RQCPs that make use of the EJA of the underlying rotated quadratic cones. This paper is an attempt to introduce RQCPs as another self-contained paradigm of symmetric cone optimization: we introduce RQCP as a “self-made” class of optimization problems and develop special-purpose primal–dual interior-point algorithms (the second class of IPMs) for solving RQCP problems based on the EJA in [28], which offers a useful set of tools for the analysis of IPMs related to RQCPs.
The so-called commutative class of primal–dual IPMs was designed by Monteiro and Zhang [29] for semi-definite programming, and by Alizadeh and Goldfarb [7] for second-order cone programming, and then extended by Schmieta and Alizadeh [30] for symmetric cone programming. This paper uses the machinery of EJA built in [28] to carry out an extension of this commutative class to RQCP. We prove polynomial complexities for versions of the short-, semi-long-, and long-step path-following IPMs using NT, HRVW/KSH/M, or dual HRVW/KSH/M directions (equivalent to NT, $X S$, or $S X$ directions in semi-definite programming).
This paper is organized as follows: In Section 2, we calculate the derivatives of the logarithmic barrier function associated with the rotated quadratic cone and prove the corresponding self-concordance property. The formulation of the RQCP problems with the optimality conditions are provided in Section 3. Section 4 applies Newton’s method and discusses the commutative direction. The proposed path-following algorithm for RQCP and its complexity are given in Section 5. Section 6 shows the efficiency of the proposed algorithm by providing some numerical results. We close this paper with Section 7, which contains some concluding remarks.

## 2. The Algebra and the Logarithmic Barrier of the Rotated Quadratic Cone

In Table 1, we summarize the Jordan algebraic notions associated with the cone $𝓡 + n$.
In this section, we compute the derivatives of the logarithmic barrier associated with our cone and use them to prove the self-concordance of this barrier. To obtain these results, we do not use concepts outside of the EJA established in [28] and summarized in Table 1.
Associated with the cone $𝓡_+^n$, the logarithmic barrier function $b : \mathrm{Int}(𝓡_+^n) \to R$ is defined as
$b(x) \triangleq -\ln \det(x) = -\ln(x_1 x_2 - \|\bar{x}\|^2).$
We provide a proof of the following lemma, which is a useful tool for proving some fundamental properties of our barrier. The inner product •, inverse $x^{-1}$, norm $\|\cdot\|_F$, and matrices $L(\cdot)$ and $Q_{(\cdot)}$ used in Lemma 1 are defined in Table 1.
Lemma 1.
Let $x \in 𝓡_+^n$ with $x \succ 0$, and let $h \in 𝓡^n$ be a non-zero vector. Then,
(i)
The gradient $∇ x b ( x ) = − 2 H x − 1$. Therefore, $∇ x b ( x ) [ h ] = − 2 h • x − 1$.
(ii)
The Hessian $∇ x x 2 b ( x ) = 2 H Q x − 1$. Therefore, $∇ x x 2 b ( x ) [ h , h ] = ∥ Q x − 1 / 2 h ∥ F 2$.
(iii)
The third derivative $∇ x x x 3 b ( x ) [ h , h , h ] = − 4 ( Q x − 1 / 2 h ) • ( Q x − 1 / 2 h ) 2$.
Proof of Lemma 1.
For item (i), we have
$\nabla_x b(x) = -\frac{1}{\det(x)} \begin{pmatrix} x_2 \\ x_1 \\ -2\bar{x} \end{pmatrix} = -\frac{1}{\det(x)} R \begin{pmatrix} x_1 \\ x_2 \\ 2\bar{x} \end{pmatrix} = -\frac{2}{\det(x)} R \begin{pmatrix} x_1/2 \\ x_2/2 \\ \bar{x} \end{pmatrix} = -\frac{2}{\det(x)} R H \begin{pmatrix} x_1 \\ x_2 \\ \bar{x} \end{pmatrix} = -2 H x^{-1}.$
Item (ii) follows by using item (i) and noting that the Jacobian of $x^{-1}$ is
$J_x x^{-1} = J_x \frac{1}{\det(x)} \begin{pmatrix} x_2 \\ x_1 \\ -\bar{x} \end{pmatrix} = -\frac{1}{\det(x)^2} \begin{pmatrix} x_2^2 & \|\bar{x}\|^2 & -2 x_2 \bar{x}^T \\ \|\bar{x}\|^2 & x_1^2 & -2 x_1 \bar{x}^T \\ -x_2 \bar{x} & -x_1 \bar{x} & \det(x) I_{n-2} - 2\bar{x}\bar{x}^T \end{pmatrix} = -Q_{x^{-1}},$
and that
For item (iii), note that
It follows that
where we used the fact that $L ( Q x − 1 / 2 h ) Q x − 1 / 2 h = ( Q x − 1 / 2 h ) 2$ to obtain the last equality. The proof is complete.   □
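The closed-form gradient in item (i) can be checked numerically. The sketch below (our own illustration) assumes the barrier $b(x) = -\ln(x_1 x_2 - \|\bar{x}\|^2)$ and the inverse $x^{-1} = (x_2; x_1; -\bar{x})/\det(x)$ implied by the proof above, and compares $-2Hx^{-1}$ against central finite differences:

```python
import math

def det2(x):  # Jordan determinant: det(x) = x1*x2 - ||xbar||^2
    return x[0]*x[1] - sum(t*t for t in x[2:])

def barrier(x):  # b(x) = -ln det(x)
    return -math.log(det2(x))

def grad_closed(x):
    # -2 H x^{-1}, with H = diag(1/2, 1/2, I) and x^{-1} = (x2; x1; -xbar)/det(x)
    d = det2(x)
    return [-x[1]/d, -x[0]/d] + [2*t/d for t in x[2:]]

x = [1.5, 2.0, 0.3, -0.4]   # interior point: det(x) = 2.75 > 0
h = 1e-6
for i in range(len(x)):
    xp = x[:]; xp[i] += h
    xm = x[:]; xm[i] -= h
    fd = (barrier(xp) - barrier(xm)) / (2*h)   # central finite difference
    assert abs(fd - grad_closed(x)[i]) < 1e-6
```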
The notion of self-concordance introduced by Nesterov and Nemirovskii [3] is essential to the existence of polynomial-time interior-point algorithms for convex conic programming problems. We have the following definition.
Definition 1
(Definition 2.1.1 in [3]). Let V be a finite-dimensional real vector space, G be an open non-empty convex subset of V, and let f be a $C^3$, convex mapping from G to $R$. Then, f is called α-self-concordant on G with the parameter $α > 0$ if, for every $x \in G$ and $h \in V$, the following inequality holds:
$| \nabla^3 f(x)[h,h,h] | \leq 2 \alpha^{-1/2} \left( \nabla^2 f(x)[h,h] \right)^{3/2}.$
An α-self-concordant function f on G is called strongly α-self-concordant if f tends to infinity for any sequence approaching a boundary point of G.
In the proof of the next theorem, we use the inequalities that
$| x • y | ≤ 1 2 ∥ x ∥ F ∥ y ∥ F , and ∥ x 2 ∥ F ≤ ∥ x ∥ F 2 ,$
for any $x$ and $y$ residing in a Jordan algebra. These two inequalities can be seen by noting that
$∥ x 2 ∥ F = λ 1 4 ( x ) + λ 2 4 ( x ) ≤ λ 1 2 ( x ) + λ 2 2 ( x ) = ∥ x ∥ F 2 ,$
and
We are now ready to provide a proof for the following theorem.
Theorem 1.
The logarithmic barrier function $b ( x )$ is a 1-strongly self-concordant barrier on $Int ( 𝓡 + n )$.
Proof of Theorem 1.
Note that, for any sequence approaching a boundary point of $𝓡_+^n$, $b(\cdot)$ goes to infinity. Using items (ii) and (iii) in Lemma 1 together with the two inequalities above, we have
$| \nabla^3_{xxx} b(x)[h,h,h] | = 4 \left| (Q_{x^{-1/2}} h) \bullet (Q_{x^{-1/2}} h)^2 \right| \leq 2 \| Q_{x^{-1/2}} h \|_F \| (Q_{x^{-1/2}} h)^2 \|_F \leq 2 \| Q_{x^{-1/2}} h \|_F^3 = 2 \left( \nabla^2_{xx} b(x)[h,h] \right)^{3/2}.$
Thus, the inequality in (3) holds. Hence, the logarithmic barrier $b ( x )$ on $Int ( 𝓡 + n )$ is 1-strongly self-concordant.    □
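The self-concordance inequality can also be probed numerically along random lines. The sketch below is our own illustration under the assumed barrier $b(x) = -\ln(x_1 x_2 - \bar{x}^2)$ with $n = 3$: it estimates the second and third directional derivatives by finite differences and checks $|\nabla^3 b[h,h,h]| \leq 2 (\nabla^2 b[h,h])^{3/2}$ up to discretization error:

```python
import math
import random

def barrier(x):  # n = 3, so xbar is a scalar
    return -math.log(x[0]*x[1] - x[2]*x[2])

random.seed(1)
for _ in range(50):
    x = [random.uniform(0.5, 2), random.uniform(0.5, 2), random.uniform(-0.3, 0.3)]
    h = [random.gauss(0, 1) for _ in range(3)]
    phi = lambda t, x=x, h=h: barrier([xi + t*hi for xi, hi in zip(x, h)])
    dt = 1e-3
    # central finite differences for the 2nd and 3rd directional derivatives
    d2 = (phi(dt) - 2*phi(0) + phi(-dt)) / dt**2
    d3 = (phi(2*dt) - 2*phi(dt) + 2*phi(-dt) - phi(-2*dt)) / (2*dt**3)
    # 1-self-concordance, with a small allowance for finite-difference error
    assert d2 > 0
    assert abs(d3) <= 2*d2**1.5 + 1e-3*max(1.0, abs(d3))
```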

## 3. Rotated Quadratic Cone Programming Problem and Duality

In this section, we introduce and define the RQCP problem along with a discussion of the duality theory and the optimality conditions for these problems.
Let $A \in R^{m \times n}$ be a real matrix whose m rows reside in the Euclidean Jordan algebra $(𝓡^n, \circ, \bullet)$, and let $A^T$ be its transpose. Associated with A, we define the matrix–vector product “$𝓐 x$” as
$𝓐 x \triangleq A H x = (a_1 \bullet x; a_2 \bullet x; \ldots; a_m \bullet x),$
where $x \in 𝓡^n$, $a_i \in 𝓡^n$ is the ith row of A for $i = 1, 2, \ldots, m$, and H is the half-identity matrix defined in Table 1. The operator $𝓐 \triangleq A H$ maps $(𝓡^n, \circ, \bullet)$ into $R^m$, while the transpose $A^T$ maps $R^m$ into $(𝓡^n, \circ, \bullet)$. If $x \in 𝓡^n$ and $y \in R^m$, one can easily show that
$( 𝓐 x ) T y = x • ( A T y )$
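This adjoint identity can be verified numerically. The sketch below (ours, not from the paper) assumes $𝓐 = AH$ and $u \bullet v = u^T H v$ with the half-identity $H = \mathrm{diag}(1/2, 1/2, I)$, as suggested by Table 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
A = rng.normal(size=(m, n))                 # rows a_i of A live in R^n
H = np.diag([0.5, 0.5] + [1.0]*(n - 2))     # half-identity matrix (Table 1)

x = rng.normal(size=n)
y = rng.normal(size=m)
lhs = (A @ H @ x) @ y        # (𝓐 x)^T y with 𝓐 = A H
rhs = x @ (H @ A.T @ y)      # x • (A^T y) with u • v = u^T H v
assert np.isclose(lhs, rhs)  # holds because H is symmetric
```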
An RQCP problem in the primal form, together with its dual, is the pair of problems
$(P): \; \min \; c \bullet x \;\; \text{s.t.} \;\; 𝓐 x = b, \; x \succeq 0, \qquad (D): \; \max \; b^T y \;\; \text{s.t.} \;\; A^T y + s = c, \; s \succeq 0,$
where $x ∈ 𝓡 n$ is the primal variable, $y ∈ R m$ is the dual variable, and $s ∈ 𝓡 n$ is the dual slack variable.
Let $𝓕 ≜ 𝓕 P × 𝓕 D$ and $𝓕 ∘ ≜ 𝓕 P ∘ × 𝓕 D ∘$ denote the feasible and strictly feasible primal-dual sets for the pair (P, D), respectively, where
Problem (P) (respectively, Problem (D)) is said to be strictly feasible if $𝓕_P^{\circ} \neq \emptyset$ (respectively, $𝓕_D^{\circ} \neq \emptyset$). We now make two assumptions about the pair (P, D). First, we assume that the matrix A has full row rank; this assumption is standard and is added for convenience. Second, we assume that the strictly feasible set $𝓕^{\circ}$ is non-empty; this assumption guarantees that strong duality holds for the RQCP problem and ensures that both Problems (P) and (D) attain optimal solutions.
We now state and prove the following weak duality result.
Lemma 2
(Weak duality). If $x ∈ 𝓕 P$ and $( y , s ) ∈ 𝓕 D$, then the duality gap is $c • x − b T y = x • s ≥ 0$.
Proof of Lemma 2.
Let $x \in 𝓕_P$ and $(y, s) \in 𝓕_D$; then $A^T y + s = c$, $𝓐 x = b$, and $x, s \succeq 0$. Using (5), it follows that
$c • x − b T y = ( A T y + s ) • x − ( 𝓐 x ) T y = ( A T y ) • x + x • s − ( 𝓐 x ) T y = x • s .$
Because $x , s ⪰ 0$, we have that
$x 1 x 2 s 1 s 2 ≥ ∥ x ¯ ∥ 2 ∥ s ¯ ∥ 2 .$
Applying the arithmetic inequality to $x 1 s 1$ and $x 2 s 2$, we obtain
$1 4 ( x 1 s 1 + x 2 s 2 ) 2 ≥ x 1 x 2 s 1 s 2 .$
Combining (6) and (7), we have $(x_1 s_1 + x_2 s_2)^2 / 4 \geq \|\bar{x}\|^2 \|\bar{s}\|^2$. Taking the square root of both sides and applying the Cauchy–Schwarz inequality, we obtain
$x • s = 1 2 ( x 1 s 1 + x 2 s 2 ) + x ¯ T s ¯ ≥ ∥ x ¯ ∥ ∥ s ¯ ∥ + x ¯ T s ¯ ≥ | x ¯ T s ¯ | + x ¯ T s ¯ ≥ 0 .$
The proof is complete.    □
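A numerical illustration of Lemma 2 (ours, not from the paper): we sample random pairs in the cone and check that the gap $x \bullet s$ is non-negative, exactly as the chain of inequalities in the proof guarantees:

```python
import random

def inner(u, v):  # u • v = (u1 v1 + u2 v2)/2 + ubar^T vbar
    return 0.5*(u[0]*v[0] + u[1]*v[1]) + sum(a*b for a, b in zip(u[2:], v[2:]))

def cone_point(n):  # random point with u1 u2 >= ||ubar||^2 and u1, u2 > 0
    ub = [random.uniform(-1, 1) for _ in range(n - 2)]
    u1 = random.uniform(0.1, 2.0)
    u2 = sum(t*t for t in ub)/u1 + random.uniform(0.0, 1.0)
    return [u1, u2] + ub

random.seed(3)
for _ in range(200):
    x, s = cone_point(6), cone_point(6)
    assert inner(x, s) >= -1e-12   # the duality gap x • s is non-negative
```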
It is known that the strong duality property can fail in general conic programming problems, but a slightly weaker property can be shown for them [3]. Using the Karush–Kuhn–Tucker (KKT) conditions, we provide a proof of the following semi-strong duality result, which provides conditions for such a slightly weaker property to hold in RQCP.
Lemma 3
(Semi-strong duality). Let Problems (P) and (D) be strictly feasible. If Problem (P) is solvable, then so is its dual (D) and their optimal values are equal.
Proof of Lemma 3.
Since (P) is strictly feasible and solvable, there is a solution $x \in 𝓕_P$ to which the KKT conditions can be applied. Accordingly, there must be Lagrange multiplier vectors $y$ and $s$ such that $(x, y, s)$ satisfies the conditions
$𝓐 x = b , A T y + s = c , x • s = 0 , x , s ⪰ 0 .$
It follows that $(y, s)$ is a feasible solution of (D). Now, let $(u, v)$ be any feasible solution to the dual problem (D); then,
$b T u ≤ c • x = x • s + b T y = b T y ,$
where the inequality was obtained using weak duality (Lemma 2) and the last equality was obtained using the complementarity equation $x \bullet s = 0$ in (8). Since $(u, v)$ was chosen arbitrarily, $(y, s)$ is an optimal solution to Problem (D) and $c \bullet x = b^T y$. The proof is complete.   □
The strong duality result in the following theorem can be obtained by applying the conic duality relations to our problem formulation.
Theorem 2
(Strong duality). Let Problems (P) and (D) be strictly feasible; then, they must also have optimal solutions, say $x ★$ and $( y ★ , s ★ )$, respectively, and $c • x ★ = b T y ★ ( i . e . , x ★ • s ★ = 0 ) .$
As one of the optimality conditions of RQCP, we describe the complementarity condition in the following lemma.
Lemma 4
(Complementarity condition). Let $x , s ∈ 𝓡 n$ have $x , s ⪰ 0$. Then $x • s = 0$ if and only if $x ∘ s = 0$.
Proof of Lemma 4.
Let $x , s ∈ 𝓡 n$ have $x , s ⪰ 0$. First, we prove the direction from left to right. Assume that $x • s = 0$. To show that $x ∘ s = 0$, we must show that (see the definition of the Jordan product “∘” used in Table 1)
$( i ) x 1 s 1 + x ¯ T s ¯ = 0 ; ( ii ) x 2 s 2 + x ¯ T s ¯ = 0 ; ( iii ) 1 2 ( s 1 + s 2 ) x ¯ + 1 2 ( x 1 + x 2 ) s ¯ = 0 .$
If ($x_1 = 0$ and $x_2 = 0$) or ($s_1 = 0$ and $s_2 = 0$), then $x = 0$ in the first case and $s = 0$ in the second, so (i), (ii), and (iii) trivially hold. As a result, we need only consider $x_1, x_2 > 0$ and $s_1, s_2 > 0$. Then, by taking the square root of both sides in (7), using the fact that $x, s \succeq 0$, and applying the Cauchy–Schwarz inequality, we obtain
$− 1 2 ( x 1 s 1 + x 2 s 2 ) ≤ − ( x 1 x 2 s 1 s 2 ) 1 / 2 ≤ − ∥ x ¯ ∥ ∥ s ¯ ∥ ≤ x ¯ T s ¯ .$
Therefore, $x \bullet s = \frac{1}{2}(x_1 s_1 + x_2 s_2) + \bar{x}^T \bar{s} = 0$ if and only if $\bar{x}^T \bar{s} = -\frac{1}{2}(x_1 s_1 + x_2 s_2)$. This is true if and only if the inequalities in (9) are satisfied as equalities, which holds if and only if either $x = 0$ or $s = 0$, in which case (i), (ii), and (iii) trivially hold, or
$x ≩ 0 , s ≩ 0 , x ¯ = − β s ¯ , x 1 = ∥ x ¯ ∥ = β ∥ s ¯ ∥ = β s 2 , and x 2 = ∥ x ¯ ∥ = β ∥ s ¯ ∥ = β s 1 ,$
where $β > 0$.
Note that the first equation in (10), or equivalently $x ¯ + β s ¯ = 0$, implies that $∥ x ¯ ∥ 2 + β x ¯ T s ¯ = 0$. Using (10), this can be written as
$β s 2 2 + x ¯ T s ¯ = 0 and β s 1 2 + x ¯ T s ¯ = 0 , or as x 1 s 2 + x ¯ T s ¯ = 0 and x 2 s 1 + x ¯ T s ¯ = 0 .$
From (10), we have that $s 1 = s 2$. Then, $x 1 s 1 + x ¯ T s ¯ = 0 and x 2 s 2 + x ¯ T s ¯ = 0 ,$ as desired in (i) and (ii). For (iii), using (10) again, we have
$x ¯ + β s ¯ = x ¯ + x 1 s 2 s ¯ = 0 and x ¯ + β s ¯ = x ¯ + x 2 s 1 s ¯ = 0 .$
This implies that $(x_1 + x_2)\bar{s} + (s_1 + s_2)\bar{x} = 0$, as desired in (iii).
Now, we prove the direction from right to left. Let us assume that $x ∘ s = 0$. From (i) and (ii), we have that $x 1 s 1 + x 2 s 2 + 2 x ¯ T s ¯ = 0$, or $x • s = 1 2 ( x 1 s 1 + x 2 s 2 ) + x ¯ T s ¯ = 0$ as desired. The proof is complete.   □
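The complementarity pattern (10) used in the proof can be illustrated concretely. The sketch below (our own example) builds a boundary point $s$ with $s_1 = s_2 = \|\bar{s}\|$ and the matching $x = \beta(s_2; s_1; -\bar{s})$, and confirms that both $x \circ s = 0$ and $x \bullet s = 0$:

```python
def jprod(u, v):  # Jordan product "∘" with the components used in (i)-(iii)
    ub, vb = u[2:], v[2:]
    dot = sum(a*b for a, b in zip(ub, vb))
    return ([u[0]*v[0] + dot, u[1]*v[1] + dot]
            + [0.5*(v[0] + v[1])*a + 0.5*(u[0] + u[1])*b for a, b in zip(ub, vb)])

sbar = [0.6, -0.8]                    # ||sbar|| = 1
s = [1.0, 1.0] + sbar                 # boundary point: s1 s2 = ||sbar||^2
beta = 2.5
x = [beta*s[1], beta*s[0]] + [-beta*t for t in sbar]   # the pattern in (10)
assert all(abs(t) < 1e-12 for t in jprod(x, s))        # x ∘ s = 0
gap = 0.5*(x[0]*s[0] + x[1]*s[1]) + sum(a*b for a, b in zip(x[2:], s[2:]))
assert abs(gap) < 1e-12                                # x • s = 0
```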
From the above results, the following corollary is now immediate.
Corollary 1
(Optimality conditions). Let us assume that both Problems (P) and (D) are strictly feasible. Then, $( x , ( y , s ) ) ∈ 𝓡 n × R m × 𝓡 n$ is an optimal solution to the pair (P, D) if and only if
$𝓐 x = b , A T y + s = c , x ∘ s = 0 , x , s ⪰ 0 .$

## 4. The Newton System and Commutative Directions

In this section, we present the logarithmic barrier problems for the pair (P, D) and the Newton system corresponding to them, as well as a subclass of the MZ family of search directions known as the commutative directions.
The logarithmic barrier problems associated with the pair (P, D) are the problems
$(P_\mu): \; \min \; c \bullet x + \mu \, b(x) \;\; \text{s.t.} \;\; 𝓐 x = b, \qquad (D_\mu): \; \max \; b^T y - \mu \, b(s) \;\; \text{s.t.} \;\; A^T y + s = c,$
where $\mu \triangleq \frac{1}{2} x \bullet s$ is a small positive scalar, typically referred to as the barrier parameter.
The solutions of the pair $( P μ , D μ )$ can be characterized by the following perturbed KKT optimality conditions.
$𝓐 x = b , A T y + s = c , x ∘ s = δ μ e , x , s ≻ 0 ,$
where $e = ( 1 ; 1 ; 0 )$ is the identity vector of $𝓡 n$ as defined in Table 1, and $δ ∈ ( 0 , 1 )$ is a centering parameter that reduces the barrier term $μ$. For any $μ > 0$, System (11) has a unique solution, indicated by $( x μ , y μ , s μ )$, where $x μ$ is called the $μ$-center for (P) and the pair $( y μ , s μ )$ is called the $μ$-center for (D). The set of all $μ$-centers that solve the perturbed KKT system (11) is called the central path of the pair (P, D), and is defined as
Due to the assumption that $𝓕^{\circ} \neq \emptyset$, the central path is well defined. As $\mu$ approaches zero, the $\mu$-center $(x_\mu, y_\mu, s_\mu)$ converges toward an $\epsilon$-approximate solution $(x^\star, y^\star, s^\star)$ of (P, D).
Now, we reformulate the complementarity condition $x \circ s = \delta \mu e$ in (11). The following lemma is a direct consequence of Lemma 28 in [30].
Lemma 5.
Let $x , s , p ∈ ( 𝓡 n , ∘ , • )$ be such that $x , s ≻ 0$, and $p$ is invertible (i.e., $det ( p ) ≠ 0$). Then $x ∘ s = δ μ e$ if and only if $Q p x ∘ Q p − 1 s = δ μ e$.
In order to solve System (11), we apply Newton’s method to this system and obtain
$𝓐 Δ x = b − 𝓐 x , A T Δ y + Δ s = c − A T y − s , Q p x ∘ Q p − 1 Δ s + Q p Δ x ∘ Q p − 1 s = δ μ e − Q p x ∘ Q p − 1 s ,$
where $( Δ x , Δ y , Δ s ) ∈ ( 𝓡 n , ∘ , • ) × R m × ( 𝓡 n , ∘ , • )$ is called the Newton search direction.
In the theory of Jordan algebras, two elements of a Jordan algebra operator commute if they share the same set of eigenvectors. In particular, two vectors $u, v \in 𝓡^n$ operator commute if $c_i = c_i(u) = c_i(v)$ for $i = 1, 2$, i.e., $u = \lambda_1(u) c_1 + \lambda_2(u) c_2$ and $v = \lambda_1(v) c_1 + \lambda_2(v) c_2$. The vectors $x$ and $s$ in System (12) may not operator commute, so we scale the underlying system so that the scaled vectors operator commute. In practice, this scaling is needed to guarantee that we iterate in the interior of the rotated quadratic cone.
Let $𝓒(x, s)$ be the set of nonsingular elements for which the scaled vectors operator commute, i.e.,
$𝓒(x, s) \triangleq \{ p : p \text{ is invertible, and } Q_p x \text{ and } Q_{p^{-1}} s \text{ operator commute} \}.$
We call the set of directions $( Δ x , Δ y , Δ s )$ arising by choosing $p$ from $𝓒 ( x , s )$ the commutative class of directions for RQCP, and call a direction in this class a commutative direction.
As mentioned earlier, the commutative class of primal-dual IPMs was designed by Monteiro and Zhang [29] for semidefinite programming, and by Alizadeh and Goldfarb [7] for second-order cone programming, and then extended by Schmieta and Alizadeh [30] for symmetric cone programming. We concentrate on three prominent choices of $p$, and each choice leads to a different direction in the commutative class of search directions: First, the choice $p = s 1 / 2$ is referred to as the HRVW/KSH/M direction, and is equivalent to the XS direction in semidefinite programming (introduced by Helmberg, Rendl, Vanderbei, and Wolkowicz [31], and Kojima, Shindoh, and Hara [32] independently, and then rediscovered by Monteiro [29]). Second, the choice $p = x − 1 / 2$ is referred to as the dual HRVW/KSH/M direction, and is equivalent to the SX direction in semidefinite programming. Third, the choice $p = ( Q x 1 / 2 ( Q x 1 / 2 s ) − 1 / 2 ) − 1 / 2 = ( Q s − 1 / 2 ( Q s 1 / 2 x ) 1 / 2 ) − 1 / 2$ is equivalent to the NT direction in semidefinite programming (introduced by Nesterov and Todd [4]).
Now, associated with $p ∈ 𝓒 ( x , s )$, we make the following change of variables:
$x → x ¯ ≜ Q p x , s → s ̲ ≜ Q p − 1 s , c → c ̲ ≜ Q p − 1 c , A → A ̲ ≜ A Q p − 1 , 𝓐 → 𝓐 ̲ ≜ A H Q p − 1 .$
Because $Q_p^{-1} = Q_{p^{-1}}$ (see Theorem 4.3 in [28]), System (12) is equivalent to
$𝓐 ̲ Δ x ¯ = r p , A ̲ T Δ y + Δ s ̲ = r d , x ¯ ∘ Δ s ̲ + Δ x ¯ ∘ s ̲ = r c , or equivalently 𝓐 ̲ O O O A ̲ T I L ( s ̲ ) O L ( x ¯ ) Δ x ¯ Δ y Δ s ̲ = r p r d r c ,$
where $r p , r d$, and $r c$ are given by
$r p ≜ b − 𝓐 ̲ x ¯ , r d ≜ c ̲ − A ̲ T y − s ̲ , r c ≜ δ μ e − x ¯ ∘ s ̲ .$
Applying block Gaussian elimination to (13), we obtain the Newton search directions $(\Delta \bar{x}, \Delta y, \Delta \underline{s})$:
$\Delta y = \big( \underline{𝓐} L(\underline{s})^{-1} L(\bar{x}) \underline{A}^T \big)^{-1} \big( r_p - \underline{𝓐} L(\underline{s})^{-1} ( r_c - L(\bar{x}) r_d ) \big), \quad \Delta \underline{s} = r_d - \underline{A}^T \Delta y, \quad \Delta \bar{x} = L(\underline{s})^{-1} \big( r_c - L(\bar{x}) \Delta \underline{s} \big).$
To obtain the search directions $Δ x$ and $Δ s$ for (12), we apply inverse scaling to $Δ x ¯$ and $Δ s ̲$ as follows:
$Δ x ≜ Q p − 1 Δ x ¯ and Δ s ≜ Q p Δ s ̲ .$
Finally, we take a step size $α$ so that the new point $( x + , y + , s + ) ≜ ( x , y , s ) + α ( Δ x , Δ y , Δ s )$ is generated in the neighborhood of the central path; see Figure 1.
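The identity $Q_p^{-1} = Q_{p^{-1}}$ invoked in the change of variables above (Theorem 4.3 in [28]) can be checked numerically. The sketch below is our own illustration; it assumes the standard quadratic representation $Q_u = 2L(u)^2 - L(u^2)$ and the inverse $u^{-1} = (u_2; u_1; -\bar{u})/\det(u)$:

```python
import numpy as np

def jprod(u, v):  # Jordan product of the rotated-quadratic-cone algebra
    ub, vb = u[2:], v[2:]
    return np.concatenate((
        [u[0]*v[0] + ub @ vb, u[1]*v[1] + ub @ vb],
        0.5*(v[0] + v[1])*ub + 0.5*(u[0] + u[1])*vb))

def Lmat(u):  # linear representation: L(u) v = u ∘ v
    n = len(u); ub = u[2:]
    M = np.zeros((n, n))
    M[0, 0], M[0, 2:] = u[0], ub
    M[1, 1], M[1, 2:] = u[1], ub
    M[2:, 0] = M[2:, 1] = 0.5*ub
    M[2:, 2:] = 0.5*(u[0] + u[1])*np.eye(n - 2)
    return M

def Qmat(u):  # quadratic representation: Q_u = 2 L(u)^2 - L(u ∘ u)
    return 2*Lmat(u) @ Lmat(u) - Lmat(jprod(u, u))

def jinv(u):  # u^{-1} = (u2; u1; -ubar)/det(u), det(u) = u1 u2 - ||ubar||^2
    d = u[0]*u[1] - u[2:] @ u[2:]
    return np.concatenate(([u[1], u[0]], -u[2:])) / d

p = np.array([1.5, 2.0, 0.3, -0.2, 0.1])   # interior point: det(p) = 2.86 > 0
assert np.allclose(Qmat(p) @ jinv(p), p)                   # Q_p p^{-1} = p
assert np.allclose(np.linalg.inv(Qmat(p)), Qmat(jinv(p)))  # Q_p^{-1} = Q_{p^{-1}}
```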

## 5. Path-Following Interior-Point Algorithms

Primal–dual path-following IPMs for solving the pair (P, D) are introduced in this section in three different lengths: short, semi-long, and long-step.
Generating each iterate $(x_\mu^{(k)}, y_\mu^{(k)}, s_\mu^{(k)})$ in a neighborhood of the central path $𝓒𝓟$ is one of the main issues in path-following IPMs, and we use proximity measure functions to handle it. By adhering to the central path, the duality gap sequence $\{\mu_k\}$ converges, and the number of iterations needed to obtain an optimal solution of the pair (P, D) is polynomially bounded.
The standard way to classify the proximity measures is to measure the distance to a specific point on the central path $𝓒𝓟$. More specifically, following [30], the proximity measures for $x, s \succ 0$ are given as
$d_F(x,s) \triangleq \| Q_{x^{1/2}} s - \mu e \|_F, \quad d_2(x,s) \triangleq \| Q_{x^{1/2}} s - \mu e \|_2, \quad d_{-\infty}(x,s) \triangleq \mu - \lambda_{\min}(Q_{x^{1/2}} s),$
where $\|\cdot\|_2$ denotes the largest eigenvalue in absolute value and $\lambda_{\min}(\cdot)$ the smallest eigenvalue.
The three different distances in (14) lead to the following three neighborhoods along the central path:
$𝓝_\rho(\gamma) \triangleq \{ (x, y, s) \in 𝓕^{\circ} : d_\rho(x, s) \leq \gamma \mu \}, \quad \rho \in \{F, 2, -\infty\},$
where $γ ∈ ( 0 , 1 )$ is a constant known as the neighborhood parameter.
By Proposition 21 in [30], both $Q x 1 / 2 s$ and $Q x ¯ 1 / 2 s ̲$ have the same eigenvalues, and since all neighborhoods $𝓝 ρ ( γ )$ may be described in terms of the eigenvalue of $Q x 1 / 2 s$, one can see that the neighborhoods defined in (15) are scaling-invariant, i.e., $( x , s ) ∈ 𝓝 ρ ( γ ) if and only if ( x ¯ , s ̲ ) ∈ 𝓝 ρ ( γ ) ,$ where $ρ$ can be selected as $F , 2$, or $− ∞$.
Furthermore, given the eigenvalue characterization of $d ρ ( x , s )$, we can find that $d − ∞ ( x , s ) ≤ d 2 ( x , s ) ≤ d F ( x , s )$, and hence $𝓝 F ( γ ) ⊆ 𝓝 2 ( γ ) ⊆ 𝓝 − ∞ ( γ ) .$
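This ordering follows directly from the eigenvalue characterization and is easy to confirm numerically. The sketch below (our own illustration, assuming the eigenvalue-based distances of [30] as written in (14)) works directly with a sampled eigenvalue vector of $Q_{x^{1/2}} s$ (two eigenvalues per block):

```python
import math
import random

random.seed(5)
for _ in range(100):
    # eigenvalues of Q_{x^{1/2}} s for a 4-block problem (2 eigenvalues per block)
    lam = [random.uniform(0.1, 3.0) for _ in range(8)]
    mu = sum(lam) / len(lam)                         # duality measure
    d_F = math.sqrt(sum((t - mu)**2 for t in lam))   # Frobenius distance
    d_2 = max(abs(t - mu) for t in lam)              # spectral distance
    d_minf = mu - min(lam)                           # one-sided distance
    assert d_minf <= d_2 + 1e-12
    assert d_2 <= d_F + 1e-12
```

Since $d_{-\infty} \leq d_2 \leq d_F$, a point passing the $F$-test also passes the other two, which is exactly the inclusion $𝓝_F(\gamma) \subseteq 𝓝_2(\gamma) \subseteq 𝓝_{-\infty}(\gamma)$.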
The performance of the path-following IPMs for RQCP problems greatly depends on the neighborhood $𝓝_\rho(\gamma)$ of the central path and the centering parameter $\delta$ that we select. These options allow us to divide the path-following IPMs for our problem into three categories: short-, semi-long-, and long-step. More specifically:
• Selecting $𝓝 F ( γ )$ as the neighborhood yields the short-step algorithm;
• Selecting $𝓝 2 ( γ )$ as the neighborhood yields the semi-long-step algorithm;
• Selecting $𝓝 − ∞ ( γ )$ as the neighborhood yields the long-step algorithm.
We indicate that the long-step version of the algorithm seems to outperform the short-step version of the algorithm in practical performance. In Table 2, we compare some of the categorized versions of this algorithm. The proposed path-following interior-point algorithm for solving the pair of problems (P, D) is described in Algorithm 1 and Figure 2.
 Algorithm 1: A path-following interior-point algorithm for solving the RQCP problem.
The convergence and iteration complexity of Algorithm 1 are given in the following theorem. This theorem is a consequence of Theorem 1, which we verified in Section 2, and Theorem 37 in [30], after taking the rotated quadratic cone $𝓡_+^n$ as the underlying symmetric cone.
Theorem 3.
If each iteration in Algorithm 1 follows the NT direction, then the short-step algorithm terminates in $𝓞 ( ( 2 r ) 1 / 2 log ( 1 / ϵ ) )$ iterations, and the semi-long and long-step algorithms terminate in $𝓞 ( 2 r log ( 1 / ϵ ) )$ iterations. If each iteration in Algorithm 1 follows the HRVW/KSH/M direction or dual HRVW/KSH/M direction, then the short-step algorithm terminates in $𝓞 ( ( 2 r ) 1 / 2 log ( 1 / ϵ ) )$ iterations, the semi-long-step algorithm terminates in $𝓞 ( 2 r log ( 1 / ϵ ) )$ iterations, and the long-step algorithm terminates in $𝓞 ( ( 2 r ) 3 / 2 log ( 1 / ϵ ) )$ iterations.
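To make the overall scheme concrete, the following is a simplified, self-contained sketch of a primal–dual path-following iteration on a toy single-block RQCP. It is our own illustration, not the paper's implementation: it uses the unscaled direction ($p = e$) instead of the scaled commutative directions of Section 4, an infeasible start, a plain backtracking step size instead of the neighborhood tests of Algorithm 1, and the normalization $\mu = x \bullet s$; the name `solve_rqcp` and all parameter values are ours:

```python
import numpy as np

def jprod(u, v):  # Jordan product for one rotated quadratic cone
    ub, vb = u[2:], v[2:]
    return np.concatenate((
        [u[0]*v[0] + ub @ vb, u[1]*v[1] + ub @ vb],
        0.5*(v[0] + v[1])*ub + 0.5*(u[0] + u[1])*vb))

def Lmat(u):  # matrix of L(u), so that Lmat(u) @ v == jprod(u, v)
    n = len(u); ub = u[2:]
    M = np.zeros((n, n))
    M[0, 0], M[0, 2:] = u[0], ub
    M[1, 1], M[1, 2:] = u[1], ub
    M[2:, 0] = M[2:, 1] = 0.5*ub
    M[2:, 2:] = 0.5*(u[0] + u[1])*np.eye(n - 2)
    return M

def interior(u):  # strict interior of the rotated quadratic cone
    return u[0] > 0 and u[1] > 0 and u[0]*u[1] > u[2:] @ u[2:]

def solve_rqcp(A, b, c, x, y, s, delta=0.5, eps=1e-8, max_iter=300):
    m, n = A.shape
    H = np.diag([0.5, 0.5] + [1.0]*(n - 2))      # half-identity matrix
    e = np.concatenate(([1.0, 1.0], np.zeros(n - 2)))
    for _ in range(max_iter):
        mu = x @ H @ s                           # duality measure x • s
        rp = b - A @ H @ x                       # primal residual
        rd = c - A.T @ y - s                     # dual residual
        if mu < eps and np.linalg.norm(rp) < eps and np.linalg.norm(rd) < eps:
            break
        rc = delta*mu*e - jprod(x, s)            # centering residual
        K = np.block([                           # Newton system (p = e)
            [A @ H,            np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [Lmat(s),          np.zeros((n, m)), Lmat(x)]])
        d = np.linalg.solve(K, np.concatenate((rp, rd, rc)))
        dx, dy, ds = d[:n], d[n:n+m], d[n+m:]
        alpha = 1.0                              # backtrack to stay interior
        while alpha > 1e-13 and not (interior(x + alpha*dx)
                                     and interior(s + alpha*ds)):
            alpha *= 0.5
        x, y, s = x + alpha*dx, y + alpha*dy, s + alpha*ds
    return x, y, s

# Toy instance: minimize c • x = x1 subject to (x1 + x2)/2 = 1, x in R^3_+
A = np.array([[1.0, 1.0, 0.0]])
b = np.array([1.0])
c = np.array([2.0, 0.0, 0.0])
x, y, s = solve_rqcp(A, b, c,
                     x=np.array([1.0, 1.0, 0.0]),
                     y=np.array([0.0]),
                     s=np.array([1.0, 1.0, 0.0]))
```

On this instance, the common optimal value of (P) and (D) is 0, and the iterates drive $x_1$, the dual variable $y$, and the duality gap toward zero while maintaining primal feasibility.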
Before closing this section and moving forward to our numerical results below, we indicate that the above analysis carries over verbatim when the pair (P, D) is given in the multiple-block setting, using the notations introduced in Table 1.

## 6. Numerical Results

In this section, we assess how well the proposed method performs when applied to some RQCP problems: the problem of minimizing the harmonic mean of positive affine functions, and some randomly generated problems. We also contrast the numerical results of the proposed algorithm on the randomly generated problems with those of two symmetric cone programming software packages: CVX [33] and MOSEK [34].
We produced our numerical results using MATLAB version R2021a on a PC with an Intel (R) Core (TM) i3-1005G1 processor operating at 1.20 GHz and 4 GB of physical memory. In the first numerical example, we consider a problem in the multiple-block setting. The dimensions of our test problems are denoted by the letters m and n, and the number of block splittings is denoted by r. “Iter” stands for the average number of iterations needed to obtain $\epsilon$-optimal solutions, while “CPU” stands for the average CPU time needed to reach an $\epsilon$-optimal solution of the underlying problem.
In all of our tests, we use the long-step path-following version of Algorithm 1 and choose the dual HRVW/KSH/M direction (i.e., our scaling vector is $p = x^{-1/2}$). Furthermore, when the convergence condition is not satisfied at an iteration, the algorithm generates a new point that must lie in the neighborhood of the central path. We choose the step size $\alpha$ proposed in [30].
Example 1
(Minimizing the harmonic mean of affine functions). We consider the problem of minimizing the harmonic mean of (positive) affine functions of $x ∈ R n$ [7]:
$\min \; \sum_{i=1}^{r} \frac{1}{a_i^T x + \beta_i} \quad \text{s.t.} \quad a_i^T x + \beta_i > 0, \; i = 1, 2, \cdots, r; \quad d_j^T x + h_j \geq 0, \; j = 1, 2, \cdots, m,$
which can be cast as the RQCP problem
$\min \; \sum_{i=1}^{r} u_i \quad \text{s.t.} \quad u_i (a_i^T x + \beta_i) \geq 1, \; u_i \geq 0, \; i = 1, 2, \cdots, r; \quad d_j^T x + h_j \geq 0, \; j = 1, 2, \cdots, m.$
We implement this problem for sizes $n = 6, 12, 24, 30, 36$, $m = 3, 9, 15$, and numbers of blocks $r = 5, 10, 15, 20$. We generate the coefficients $a_i, \beta_i, d_j$, and $h_j$ randomly from a uniform distribution on $[-1, 1]$ for all $i = 1, 2, \cdots, r$ and $j = 1, 2, \cdots, m$. We take the parameters $\epsilon = 10^{-6}$, $\sigma = 0.55$, and $\gamma = 0.8$. The initial solutions $x^{(0)}, s^{(0)}$ are also chosen randomly from a uniform distribution on $[-1, 1]$, while $y^{(0)}$ is chosen as the zero vector, and $u_i$, $i = 1, 2, \cdots, r$, are all set to take values between 0 and 1. We display the numerical results obtained for this example in Table 3 and visualize them graphically in Figure 3.
Example 2
(Randomly generated problems). In this example, the coefficients A and b are generated at random from a list of uniformly distributed numbers between −1 and 1. We set $b = 𝓐 x ( 0 )$ and $c = A T y ( 0 ) + s ( 0 )$, and choose the parameters $ϵ = 10 − 4$, $σ = 0.33$ and $γ = 0.9$. The size of the problem is given so that $n = 2 m$, where m is ranging from 5 to 1000. The initial solutions are chosen as follows: $x ( 0 ) = e$, $s ( 0 ) = e$ and $y ( 0 ) = 0$. We display our numerical results in Table 4, and visualize them graphically in Figure 4. The results from the CVX and MOSEK solvers are also presented in Table 4 and Figure 4 for comparison purposes.
Overall, the computational results show that Algorithm 1 performs well in practice. We can see from the numerical results shown in Table 3 and Table 4 and represented in Figure 3 and Figure 4 that the number of iterations and the CPU time required by Algorithm 1 increase as the dimension of the underlying problem increases, indicating that the dimension of the problem and the dimension of the rotated quadratic cones have an impact on the number of iterations and the amount of time required by the proposed algorithm. Furthermore, when the randomly generated problems are solved using the CVX and MOSEK solvers, most of the comparisons slightly favor Algorithm 1 in terms of both the number of iterations and CPU time. This is most likely because these solvers begin at infeasible points or because their stopping conditions differ from that of Algorithm 1.

## 7. Concluding Remarks

All earlier work on optimization problems over the rotated quadratic cones has formulated these problems as second-order cone programming problems, and while doing this can be easier than developing special-purpose algorithms for solving this class of optimization problems, this approach may not always be the cheapest one in terms of computational cost. In this paper, we have introduced the rotated quadratic cone programming problems as a “self-made” class of optimization problems. We have proved that the barrier function associated with our cone is strongly self-concordant. We have discussed the duality theory associated with these problems, along with the development of the commutative class of search directions, and have developed a primal–dual interior-point algorithm for rotated quadratic cone optimization problems based on our own Euclidean Jordan algebra. The efficiency of the proposed algorithm is shown by providing some numerical examples and comparing some of them with results from the MOSEK and CVX solvers.
The proposed algorithm is attractive from an algebraic point of view. Most of this attractiveness comes from exploiting the algebraic structure of the rotated quadratic cone, which allowed us to give explicit expressions for the inverse operator, the linear representation, and the quadratic operator, and to use these operators to compute the derivatives of the barrier function explicitly. In spite of its attractiveness, the algorithm has several limitations, such as the need to produce a good starting point, to develop a practical step-length selection procedure in the primal space, and to reduce the barrier parameter with a practical strategy in our setting. These limitations could, however, be addressed in future research developing practical implementations. Future work can also be performed on developing algorithms for solving mixed-integer rotated quadratic cone optimization problems.

## Author Contributions

K.T. and B.A. conceived the idea, set up the analysis, wrote the proofs, designed and performed the experiments, analyzed the results, drafted the initial manuscript, and revised the final manuscript. All authors have read and agreed to this version of the manuscript.

## Funding

This research received no external funding.


## Data Availability Statement

Data are contained within the article.

## Acknowledgments

The authors thank Blake Whitman from The Ohio State University for reading the manuscript and pointing out some printing errors. The authors also thank the two anonymous referees for their constructive comments and suggestions for improvements.

## Conflicts of Interest

The authors have no competing interests to declare.

## Abbreviations

The following abbreviations are used in this manuscript:
- RQCP: Rotated quadratic cone programming
- EJA: Euclidean Jordan algebra
- IPM: Interior-point method
- KKT: Karush–Kuhn–Tucker
- Iter: Number of iterations
- CPU: Central processing unit

## References

1. Montoya, O.; Gil-González, W.; Garcés, A. On the conic convex approximation to locate and size fixed-step capacitor banks in distribution networks. Computation 2022, 10, 32.
2. Alzalg, B. A primal-dual interior-point method based on various selections of displacement step for symmetric optimization. Comput. Optim. Appl. 2019, 72, 363–390.
3. Nesterov, Y.; Nemirovskii, A. Interior-Point Polynomial Algorithms in Convex Programming; SIAM: Philadelphia, PA, USA, 1994.
4. Nesterov, Y.E.; Todd, M.J. Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 1997, 22, 1–42.
5. Manshadi, S.D.; Liu, G.; Khodayar, M.E.; Wang, J.; Dai, R. A convex relaxation approach for power flow problem. J. Mod. Power Syst. Clean Energy 2019, 7, 1399–1410.
6. Manshadi, S.D.; Liu, G.; Khodayar, M.E.; Wang, J.; Dai, R. A distributed convex relaxation approach to solve the power flow problem. IEEE Syst. J. 2019, 14, 803–812.
7. Alizadeh, F.; Goldfarb, D. Second-order cone programming. Math. Program. 2003, 95, 3–51.
8. Alzalg, B.; Pirhaji, M. Elliptic cone optimization and primal–dual path-following algorithms. Optimization 2017, 66, 2245–2274.
9. Malakar, S.; Ghosh, M.; Bhowmik, S.; Sarkar, R.; Nasipuri, M. A GA based hierarchical feature selection approach for handwritten word recognition. Neural Comput. Appl. 2020, 32, 2533–2552.
10. Bacanin, N.; Stoean, R.; Zivkovic, M.; Petrovic, A.; Rashid, T.A.; Bezdan, T. Performance of a novel chaotic firefly algorithm with enhanced exploration for tackling global optimization problems: Application for dropout regularization. Mathematics 2021, 9, 2705.
11. Tuba, E.; Bacanin, N. An algorithm for handwritten digit recognition using projection histograms and SVM classifier. In Proceedings of the 2015 23rd Telecommunications Forum Telfor (TELFOR), Belgrade, Serbia, 24–26 November 2015; pp. 464–467.
12. Karmarkar, N. A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, Washington, DC, USA, 30 April–2 May 1984; pp. 302–311.
13. Güler, O. Barrier functions in interior point methods. Math. Oper. Res. 1996, 21, 860–885.
14. Faraut, J. Analysis on Symmetric Cones; Oxford Mathematical Monographs: Oxford, UK, 1994.
15. Todd, M. Semidefinite optimization. Acta Numer. 2001, 10, 515–560.
16. Alzalg, B. Combinatorial and Algorithmic Mathematics: From Foundation to Optimization; Kindle Direct Publishing: Seattle, WA, USA, 2022.
17. Alzalg, B. Homogeneous self-dual algorithms for stochastic second-order cone programming. J. Optim. Theory Appl. 2014, 163, 148–164.
18. Alzalg, B. Volumetric barrier decomposition algorithms for stochastic quadratic second-order cone programming. Appl. Math. Comput. 2015, 265, 494–508.
19. Alzalg, B. A logarithmic barrier interior-point method based on majorant functions for second-order cone programming. Optim. Lett. 2020, 14, 729–746.
20. Alzalg, B.; Badarneh, K.; Ababneh, A. An infeasible interior-point algorithm for stochastic second-order cone optimization. J. Optim. Theory Appl. 2019, 181, 324–346.
21. Alzalg, B.; Gafour, A.; Alzaleq, L. Volumetric barrier cutting plane algorithms for stochastic linear semi-infinite optimization. IEEE Access 2019, 8, 4995–5008.
22. Alzalg, B.; Pirhaji, M. Primal-dual path-following algorithms for circular programming. Commun. Comb. Optim. 2017, 2, 65–85.
23. Kojima, M.; Mizuno, S.; Yoshise, A. A primal-dual interior point algorithm for linear programming. In Progress in Mathematical Programming; Springer: New York, NY, USA, 1989; pp. 29–47.
24. Monteiro, R.D.; Adler, I. Interior path following primal-dual algorithms. Part I: Linear programming. Math. Program. 1989, 44, 27–41.
25. Alzalg, B. Primal interior-point decomposition algorithms for two-stage stochastic extended second-order cone programming. Optimization 2018, 67, 2291–2323.
26. Goldfarb, D.; Liu, S. An O(n3L) primal interior point algorithm for convex quadratic programming. Math. Program. 1991, 49, 325–340.
27. Tiande, G.; Shiquan, W. Properties of primal interior point methods for QP. Optimization 1996, 37, 227–238.
28. Alzalg, B.; Tamsaouete, K.; Benakkouche, L.; Ababneh, A. The Jordan Algebraic Structure of the Rotated Quadratic Cone. Submitted for Publication. 2017. Available online: https://optimization-online.org/wp-content/uploads/2023/01/RQC.pdf (accessed on 31 January 2023).
29. Monteiro, R.D.C.; Zhang, Y. A unified analysis for a class of path-following primal-dual interior-point algorithms for semidefinite programming. Math. Program. 1998, 81, 281–299.
30. Schmieta, S.H.; Alizadeh, F. Extension of primal-dual interior point algorithms to symmetric cones. Math. Program. 2003, 96, 409–438.
31. Helmberg, C.; Rendl, F.; Vanderbei, R.J.; Wolkowicz, H. An interior-point method for semidefinite programming. SIAM J. Optim. 1996, 6, 342–361.
32. Kojima, M.; Shindoh, S.; Hara, S. Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM J. Optim. 1997, 7, 86–125.
33. Grant, M.; Boyd, S.; Ye, Y. CVX: Matlab Software for Disciplined Convex Programming (Webpage and Software); CVX Research, Inc.: Austin, TX, USA, 2009.
34. Mosek ApS. Mosek Optimization Toolbox for Matlab. In User’s Guide and Reference Manual; Version 4; Mosek ApS: Copenhagen, Denmark, 2019.
Figure 1. The alteration between steps to follow the central path.
Figure 2. A pseudocode for Algorithm 1.
Figure 3. Dot plots of the numerical results obtained in Example 1.
Figure 4. Two-dimensional plots for the numerical results in Example 2.
Table 1. The algebraic notions and concepts associated with $\mathcal{R}^n_+$ (for more details, refer to [28]).

| Notion/Concept | Definition |
| --- | --- |
| **In single-setting** | |
| Space | $\mathcal{R}^n \triangleq \{ x = (x_1; x_2; \bar{x}) : x_1, x_2 \in \mathbb{R}_+,\ \bar{x} \in \mathbb{R}^{n-2} \}$ |
| Rotated quadratic cone | $\mathcal{R}^n_+ \triangleq \{ x \in \mathcal{R}^n : x_1^{1/2} x_2^{1/2} \geq \lVert \bar{x} \rVert \}$ |
| Half-identity matrix | $H_n \triangleq \begin{pmatrix} \frac{1}{2} & 0 & \mathbf{0}^T \\ 0 & \frac{1}{2} & \mathbf{0}^T \\ \mathbf{0} & \mathbf{0} & I_{n-2} \end{pmatrix}$ |
| Rotation–reflection matrix | $R_n \triangleq \begin{pmatrix} 0 & 1 & \mathbf{0}^T \\ 1 & 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{0} & -I_{n-2} \end{pmatrix}$ |
| Positive semi-definiteness | $x \succeq_{\mathcal{R}^n_+} 0$ (or simply $x \succeq 0$), which means that $x \in \mathcal{R}^n_+$, i.e., $x_1^{1/2} x_2^{1/2} \geq \lVert \bar{x} \rVert$ |
| Positive definiteness | $x \succ_{\mathcal{R}^n_+} 0$ (or simply $x \succ 0$), which means that $x \in \operatorname{Int}(\mathcal{R}^n_+)$, i.e., $x_1^{1/2} x_2^{1/2} > \lVert \bar{x} \rVert$ |
| Eigenvalues | $\lambda_{1,2}(x) \triangleq \dfrac{(x_1 + x_2) \pm \sqrt{(x_1 - x_2)^2 + 4 \bar{x}^T \bar{x}}}{2}$ |
| Eigenvectors | $c_{1,2}(x) \triangleq \left( \dfrac{1}{2} \pm \dfrac{x_1 - x_2}{2\sqrt{(x_1 - x_2)^2 + 4 \bar{x}^T \bar{x}}};\ \dfrac{1}{2} \mp \dfrac{x_1 - x_2}{2\sqrt{(x_1 - x_2)^2 + 4 \bar{x}^T \bar{x}}};\ \pm \dfrac{\bar{x}}{\sqrt{(x_1 - x_2)^2 + 4 \bar{x}^T \bar{x}}} \right)$ |
| Trace | $\operatorname{trace}(x) \triangleq \lambda_1(x) + \lambda_2(x) = x_1 + x_2$ |
| Determinant | $\det(x) \triangleq \lambda_1(x) \lambda_2(x) = x_1 x_2 - \lVert \bar{x} \rVert^2$ |
| Identity | $e \triangleq c_1(x) + c_2(x) = (1; 1; \mathbf{0})$ |
| Spectral decomposition of $f(x)$ | $f(x) \triangleq f(\lambda_1(x)) c_1(x) + f(\lambda_2(x)) c_2(x)$; $f$ is any real-valued continuous function |
| Inverse | $x^{-1} \triangleq \lambda_1^{-1}(x) c_1(x) + \lambda_2^{-1}(x) c_2(x) = \frac{1}{\det(x)} R_n x$ |
| Linear representation | $L(x) \triangleq \begin{pmatrix} x_1 & 0 & \bar{x}^T \\ 0 & x_2 & \bar{x}^T \\ \frac{1}{2}\bar{x} & \frac{1}{2}\bar{x} & \frac{1}{2}(x_1 + x_2) I_{n-2} \end{pmatrix}$ |
| Quadratic representation | $Q_x \triangleq 2L(x)^2 - L(x^2) = \begin{pmatrix} x_1^2 & \lVert \bar{x} \rVert^2 & 2 x_1 \bar{x}^T \\ \lVert \bar{x} \rVert^2 & x_2^2 & 2 x_2 \bar{x}^T \\ x_1 \bar{x} & x_2 \bar{x} & 2 \bar{x} \bar{x}^T + \det(x) I_{n-2} \end{pmatrix}$ |
| Quadratic operator $Q_{x,y} : \mathcal{R}^n \longrightarrow \mathcal{R}^n$ | $Q_{x,y} \triangleq L(x)L(y) + L(y)L(x) - L(x \circ y)$ |
| Jordan product $\circ : \mathcal{R}^n \times \mathcal{R}^n \longrightarrow \mathcal{R}^n$ | $x \circ y \triangleq L(x)y = \left( x_1 y_1 + \bar{x}^T \bar{y};\ x_2 y_2 + \bar{x}^T \bar{y};\ \frac{1}{2}(y_1 + y_2)\bar{x} + \frac{1}{2}(x_1 + x_2)\bar{y} \right)$ |
| Inner product $\bullet : \mathcal{R}^n \times \mathcal{R}^n \longrightarrow \mathbb{R}$ | $x \bullet y \triangleq \frac{1}{2}\operatorname{trace}(x \circ y) = x^T H_n y = \frac{1}{2}(x_1 y_1 + x_2 y_2) + \bar{x}^T \bar{y}$ |
| Frobenius norm | $\lVert x \rVert_F \triangleq \sqrt{\lambda_1^2(x) + \lambda_2^2(x)} = \sqrt{x_1^2 + x_2^2 + 2\lVert \bar{x} \rVert^2} = \sqrt{2\, x \bullet x}$ |
| Rank | $\operatorname{rk}(\mathcal{R}^n) \triangleq 2$ |
| **In block-setting ($r$ blocks)** | |
| Space | $\mathcal{R} \triangleq \mathcal{R}^{n_1} \times \mathcal{R}^{n_2} \times \cdots \times \mathcal{R}^{n_r}$ |
| Cone | $\mathcal{R}_+ \triangleq \mathcal{R}^{n_1}_+ \times \mathcal{R}^{n_2}_+ \times \cdots \times \mathcal{R}^{n_r}_+$ |
| Elements/vectors | $x = (x^{(1)}; x^{(2)}; \cdots; x^{(r)})$, with $x^{(i)} \in \mathcal{R}^{n_i}$ for each $i = 1, 2, \cdots, r$ |
| Half-identity matrix | $H \triangleq H_{n_1} \oplus H_{n_2} \oplus \cdots \oplus H_{n_r}$ |
| Rotation–reflection matrix | $R \triangleq R_{n_1} \oplus R_{n_2} \oplus \cdots \oplus R_{n_r}$ |
| Positive semi-definiteness | $x \succeq 0$, which means $x \in \mathcal{R}_+$, i.e., $x^{(i)} \in \mathcal{R}^{n_i}_+$ for each $i = 1, 2, \cdots, r$ |
| Positive definiteness | $x \succ 0$, which means $x \in \operatorname{Int}(\mathcal{R}_+)$, i.e., $x^{(i)} \in \operatorname{Int}(\mathcal{R}^{n_i}_+)$ for each $i = 1, 2, \cdots, r$ |
| Trace | $\operatorname{trace}(x) \triangleq \sum_{i=1}^{r} \operatorname{trace}(x^{(i)})$ |
| Determinant | $\det(x) \triangleq \prod_{i=1}^{r} \det(x^{(i)})$ |
| Identity | $e \triangleq (e^{(1)}; e^{(2)}; \cdots; e^{(r)})$ |
| Vector function $f(x)$ | $f(x) \triangleq (f(x^{(1)}); f(x^{(2)}); \cdots; f(x^{(r)}))$; $f$ is any real-valued continuous function |
| Jordan product | $x \circ y \triangleq (x^{(1)} \circ y^{(1)}; x^{(2)} \circ y^{(2)}; \cdots; x^{(r)} \circ y^{(r)})$ |
| Inner product | $x \bullet y \triangleq x^{(1)} \bullet y^{(1)} + x^{(2)} \bullet y^{(2)} + \cdots + x^{(r)} \bullet y^{(r)}$ |
| Dot product | $x^T y \triangleq x^{(1)T} y^{(1)} + \cdots + x^{(r)T} y^{(r)}$ |
| Linear representation | $L(x) \triangleq L(x^{(1)}) \oplus L(x^{(2)}) \oplus \cdots \oplus L(x^{(r)})$ |
| Quadratic representation | $Q_x \triangleq Q_{x^{(1)}} \oplus Q_{x^{(2)}} \oplus \cdots \oplus Q_{x^{(r)}}$ |
| Quadratic operator | $Q_{x,y} \triangleq Q_{x^{(1)},y^{(1)}} \oplus Q_{x^{(2)},y^{(2)}} \oplus \cdots \oplus Q_{x^{(r)},y^{(r)}}$ |
| Frobenius norm | $\lVert x \rVert_F \triangleq \sqrt{\sum_{i=1}^{r} \lVert x^{(i)} \rVert_F^2}$ |
| Rank | $\operatorname{rk}(\mathcal{R}) \triangleq 2r$ |
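To make the single-block operations in Table 1 concrete, the following Python sketch (our illustration, not the authors' code; NumPy assumed) implements the eigenvalues, Jordan product, and inverse of a point $x = (x_1; x_2; \bar{x})$, and checks the trace, determinant, and $x \circ x^{-1} = e$ identities on a sample interior point:

```python
import numpy as np

# Single-block algebra of the rotated quadratic cone; x = (x1; x2; xbar).

def eigvals(x):
    """lam_{1,2}(x) = ((x1 + x2) +/- sqrt((x1 - x2)^2 + 4 xbar^T xbar)) / 2."""
    x1, x2, xb = x[0], x[1], x[2:]
    s = np.sqrt((x1 - x2) ** 2 + 4 * (xb @ xb))
    return (x1 + x2 + s) / 2, (x1 + x2 - s) / 2

def jordan_prod(x, y):
    """x o y = (x1 y1 + xbar.ybar; x2 y2 + xbar.ybar; ((y1+y2) xbar + (x1+x2) ybar)/2)."""
    xb, yb = x[2:], y[2:]
    return np.concatenate((
        [x[0] * y[0] + xb @ yb, x[1] * y[1] + xb @ yb],
        ((y[0] + y[1]) * xb + (x[0] + x[1]) * yb) / 2,
    ))

def inverse(x):
    """x^{-1} = R_n x / det(x), where R_n x = (x2; x1; -xbar)."""
    det = x[0] * x[1] - x[2:] @ x[2:]
    return np.concatenate(([x[1], x[0]], -x[2:])) / det

x = np.array([2.0, 3.0, 1.0, 1.0])        # sqrt(x1 x2) = sqrt(6) > ||xbar||, so x is interior
lam1, lam2 = eigvals(x)
e = np.array([1.0, 1.0, 0.0, 0.0])        # identity element e = (1; 1; 0)

assert np.isclose(lam1 + lam2, x[0] + x[1])                    # trace(x) = x1 + x2
assert np.isclose(lam1 * lam2, x[0] * x[1] - x[2:] @ x[2:])    # det(x) = x1 x2 - ||xbar||^2
assert np.allclose(jordan_prod(x, inverse(x)), e)              # x o x^{-1} = e
```

For the sample point $x = (2; 3; 1; 1)$ the eigenvalues come out as $4$ and $1$, consistent with $\operatorname{trace}(x) = 5$ and $\det(x) = 4$; the block-setting operators of Table 1 would apply these functions blockwise.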
Table 2. Contrasting some features of path-following IPMs in short, semi-long, and long-step versions.

| Features | Short-Step Algorithm | Semi-Long-Step Algorithm | Long-Step Algorithm |
| --- | --- | --- | --- |
| Neighborhood $\mathcal{N}_\rho(\gamma)$ | $\mathcal{N}_F(\gamma)$ | $\mathcal{N}_2(\gamma)$ | $\mathcal{N}_{-\infty}(\gamma)$ |
| Factor $\sigma$ | $(0,1)$ | $(0,1)$ | $(0,1)$ |
| Centering parameter $\delta$ | $1 - \sigma/\sqrt{2r}$ | $(0,1)$ | $(0,1)$ |
| No. of iter. in NT direction | $\mathcal{O}(\sqrt{2r}\log(1/\epsilon))$ | $\mathcal{O}(2r\log(1/\epsilon))$ | $\mathcal{O}(2r\log(1/\epsilon))$ |
| No. of iter. in HRVW/KSH/M direction or its dual | $\mathcal{O}(\sqrt{2r}\log(1/\epsilon))$ | $\mathcal{O}(2r\log(1/\epsilon))$ | $\mathcal{O}((2r)^{3/2}\log(1/\epsilon))$ |
Table 3. The numerical results obtained for minimizing harmonic mean of affine functions in Example 1.

| m | n | r | Iter. | CPU(s) | m | n | r | Iter. | CPU(s) | m | n | r | Iter. | CPU(s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | 6 | 5 | 2 | 0.0156 | 9 | 6 | 5 | 8 | 0.1562 | 15 | 6 | 5 | 32 | 0.4062 |
| 3 | 6 | 10 | 4 | 0.0781 | 9 | 6 | 10 | 16 | 0.0781 | 15 | 6 | 10 | 42 | 0.3125 |
| 3 | 6 | 15 | 5 | 0.1094 | 9 | 6 | 15 | 20 | 0.4844 | 15 | 6 | 15 | 44 | 0.5781 |
| 3 | 6 | 20 | 5 | 0.1562 | 9 | 6 | 20 | 15 | 0.5938 | 15 | 6 | 20 | 57 | 0.4844 |
| 3 | 12 | 5 | 5 | 0.0312 | 9 | 12 | 5 | 24 | 0.1094 | 15 | 12 | 5 | 27 | 0.5406 |
| 3 | 12 | 10 | 7 | 0.1094 | 9 | 12 | 10 | 27 | 0.2188 | 15 | 12 | 10 | 32 | 0.5546 |
| 3 | 12 | 15 | 8 | 0.2031 | 9 | 12 | 15 | 27 | 0.4804 | 15 | 12 | 15 | 24 | 0.7679 |
| 3 | 12 | 20 | 11 | 0.2931 | 9 | 12 | 20 | 32 | 0.5000 | 15 | 12 | 20 | 31 | 0.5932 |
| 3 | 24 | 5 | 11 | 0.1719 | 9 | 24 | 5 | 22 | 0.1875 | 15 | 24 | 5 | 36 | 0.5381 |
| 3 | 24 | 10 | 15 | 0.2500 | 9 | 24 | 10 | 25 | 0.2969 | 15 | 24 | 10 | 31 | 0.8344 |
| 3 | 24 | 15 | 16 | 0.1562 | 9 | 24 | 15 | 29 | 0.3750 | 15 | 24 | 15 | 52 | 0.9306 |
| 3 | 24 | 20 | 19 | 0.3901 | 9 | 24 | 20 | 33 | 0.5156 | 15 | 24 | 20 | 49 | 0.8656 |
| 3 | 30 | 5 | 16 | 0.1904 | 9 | 30 | 5 | 23 | 0.1875 | 15 | 30 | 5 | 40 | 0.6265 |
| 3 | 30 | 10 | 20 | 0.1406 | 9 | 30 | 10 | 39 | 0.2969 | 15 | 30 | 10 | 44 | 0.9018 |
| 3 | 30 | 15 | 23 | 0.2812 | 9 | 30 | 15 | 23 | 0.2344 | 15 | 30 | 15 | 49 | 0.8031 |
| 3 | 30 | 20 | 26 | 0.3112 | 9 | 30 | 20 | 41 | 0.5469 | 15 | 30 | 20 | 54 | 0.8750 |
| 3 | 36 | 5 | 32 | 0.3938 | 9 | 36 | 5 | 28 | 0.1719 | 15 | 36 | 5 | 52 | 0.6925 |
| 3 | 36 | 10 | 19 | 0.2998 | 9 | 36 | 10 | 35 | 0.3906 | 15 | 36 | 10 | 54 | 0.7562 |
| 3 | 36 | 15 | 34 | 0.3438 | 9 | 36 | 15 | 39 | 0.3438 | 15 | 36 | 15 | 69 | 0.7906 |
| 3 | 36 | 20 | 26 | 0.5469 | 9 | 36 | 20 | 47 | 0.3125 | 15 | 36 | 20 | 72 | 0.9789 |
Table 4. The numerical results obtained for the randomly generated problems in Example 2.

| Problem Size $(m,n)$ | Algorithm 1 Iter. | Algorithm 1 CPU(s) | CVX Iter. | CVX CPU(s) | MOSEK Iter. | MOSEK CPU(s) |
| --- | --- | --- | --- | --- | --- | --- |
| (5, 10) | 3 | 0.0156 | 4 | 0.28 | 5 | 0.22 |
| (10, 20) | 5 | 0.0313 | 6 | 0.30 | 8 | 0.18 |
| (20, 40) | 5 | 0.0625 | 6 | 0.39 | 9 | 0.20 |
| (30, 60) | 6 | 0.1406 | 6 | 0.20 | 9 | 0.19 |
| (40, 80) | 6 | 0.1250 | 5 | 0.33 | 8 | 0.38 |
| (50, 60) | 9 | 0.0781 | 6 | 0.25 | 8 | 0.62 |
| (60, 120) | 12 | 0.2086 | 7 | 0.30 | 10 | 0.41 |
| (70, 140) | 11 | 0.2188 | 12 | 0.52 | 11 | 0.79 |
| (80, 160) | 14 | 0.3100 | 21 | 0.38 | 17 | 0.60 |
| (90, 180) | 15 | 0.3343 | 13 | 0.49 | 15 | 0.79 |
| (100, 200) | 10 | 0.4531 | 14 | 0.66 | 19 | 0.74 |
| (200, 400) | 12 | 1.0900 | 15 | 0.44 | 8 | 1.85 |
| (300, 600) | 7 | 4.0313 | 9 | 0.53 | 9 | 5.41 |
| (400, 800) | 10 | 9.804 | 14 | 1.02 | 19 | 11.97 |
| (500, 1000) | 22 | 18.6406 | 29 | 11.78 | 31 | 16.90 |
| (600, 1200) | 25 | 27.5938 | 30 | 41.03 | 37 | 31.67 |
| (700, 1400) | 39 | 48.2500 | 43 | 51.97 | 50 | 65.36 |
| (800, 1600) | 61 | 61.2813 | 73 | 72.77 | 86 | 94.64 |
| (900, 1800) | 53 | 65.4651 | 34 | 70.50 | 71 | 83.45 |
| (1000, 2000) | 66 | 74.7968 | 69 | 79.32 | 76 | 91.54 |