Article

Minimax Duality for MIMO Interference Networks

by Andreas Dotzler *, Maximilian Riemensberger and Wolfgang Utschick
Associate Institute for Signal Processing, Technische Universität München, 80290 Munich, Germany
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Information 2016, 7(2), 19; https://doi.org/10.3390/info7020019
Submission received: 1 November 2015 / Revised: 25 January 2016 / Accepted: 6 February 2016 / Published: 23 March 2016
(This article belongs to the Special Issue Communication Theory)

Abstract:
A minimax duality for a Gaussian mutual information expression was introduced by Yu. An interesting observation is the relationship between cost constraints on the transmit covariances and noise covariances in the dual problem via Lagrangian multipliers. We introduce a minimax duality for general MIMO interference networks, where noise and transmit covariances are optimized subject to linear conic constraints. We observe a fully symmetric relationship between the solutions of both networks, where the roles of the optimization variables and Lagrangian multipliers are inverted. The least favorable noise covariance itself provides a Lagrangian multiplier for the linear conic constraint on the transmit covariance in the dual network, while the transmit covariance provides a Lagrangian multiplier for the constraint on the interference plus noise covariance in the dual network. The degrees of freedom available for optimization are constituted by linear subspaces, where the orthogonal subspaces induce the constraints in the dual network. For the proof of our duality we make use of the existing polite water-filling network duality, and as a by-product we are able to show that maximization problems in MIMO interference networks have a zero duality gap for a special formulation of the dual function. Our minimax duality unifies and extends several results, including the original minimax duality and other known network dualities. New results and applications include MIMO transmission strategies that handle uncertainty due to unknown inter-cell interference, as well as information-theoretic proofs concerning cooperation in networks and the optimality of proper signaling.

1. Introduction

From the literature, we know network dualities of the following three types: signal-to-interference-and-noise-ratio (SINR) dualities [1,2,3,4], mean-square-error (MSE) dualities [5,6,7], and rate dualities [4,8,9,10,11,12]. These dualities show that the regions of achievable SINRs, MSEs, and rates, respectively, are the same for two networks. The most prominent and important application of network dualities is the duality of the broadcast channel (BC) and the multiple access channel (MAC) with interference (pre-)subtraction, where the dualities show that the SINR, MSE, and achievable rate regions are identical for Gaussian signaling. The main benefit of converting the BC problem (downlink) into the MAC (uplink) domain is that, for the optimal decoding order of the users, the problem formulation obtained for the uplink can be recast as a convex optimization problem.
The network dualities for the BC-MAC scenario with interference (pre-)subtraction and a sum-power constraint have been extended along several directions. A more general BC-MAC duality, which also holds for linear signal processing, i.e., without non-linear interference cancellation, is introduced in [11]. In [12,13], dualities for general interference networks are considered. Furthermore, the solutions are characterized by polite water-filling, a result we use in the proof of our duality. While most network dualities only hold for a sum-power constraint, there are some network dualities that extend to more general constraints. Dualities for scenarios with multiple linear constraints, or a per-antenna power constraint as a special case, can be found in [13,14,15,16]. In [10], a worst-case noise optimization is considered by a minimax duality, and the connection between network duality and Lagrangian duality is revealed. Worst-case noise optimization for MIMO communications is also considered in [17,18]. Minimax expressions also appear when considering the compound capacity of a MIMO channel [19,20], where the uncertainty is in the channel knowledge instead of the noise.
SINR dualities and conic optimization, for example semidefinite programming and second order cone programming, are frequently used in the context of power minimization problems under SINR constraints, or the related max-min SINR optimization [1,21,22]. Usually, transmit filters are reparameterized as rank-one matrices, and relaxation of the rank-one constraint reveals a convex reformulation of the problem, where the SINR constraints become conic constraints. Note that this is very different from the linear conic constraints we use. Our result generalizes the original minimax duality by Yu [10] to a larger class of constraints and to interference networks, which contain broadcast and multiple access channels (with and without non-linear interference cancellation) as special cases.
Yu [10] introduced a minimax duality for the MIMO channel. For a channel matrix $H$, noise symbol covariance $R$, and transmit symbol covariance $Q$, he considers a saddle point problem of the mutual information $W(R, Q)$ under linear constraints:
$$ \min_{R \succeq 0} \; \max_{Q \succeq 0} \; \left\{ W(R, Q) : \operatorname{tr}(Q \Omega) \leq \lambda_{\Omega} \right\} : \operatorname{tr}(R \Sigma) \leq \lambda_{\Sigma} \qquad (1) $$
The dual saddle point problem with flipped channel matrix $H^H$, noise symbol covariance $\Omega$, and transmit symbol covariance $\Sigma$ is given by
$$ \min_{\Omega \succeq 0} \; \max_{\Sigma \succeq 0} \; \left\{ W(\Omega, \Sigma) : \operatorname{tr}(\Sigma R) \leq \lambda_{R} \right\} : \operatorname{tr}(\Omega Q) \leq \lambda_{Q} \qquad (2) $$
where the constraint matrices $Q$, $R$ are a solution of Equation (1) with $\lambda_{Q}$, $\lambda_{R}$ as Lagrangian multipliers. The other way round, the constraint matrices $\Sigma$, $\Omega$ of Equation (1) are a solution of Equation (2) with Lagrangian multipliers $\lambda_{\Sigma} = \lambda_{R}$, $\lambda_{\Omega} = \lambda_{Q}$. By Lagrangian duality, Yu reveals the relationship between the cost constraints in one problem, the optimization variables in the dual problem, and the Lagrangian multipliers.
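The objective in these saddle point problems is the Gaussian mutual information. As a hedged numerical sketch (all matrices and the seed are arbitrary illustration choices, not taken from [10]), the following verifies the monotonicity that makes the min-max a meaningful game: $W(R,Q) = \log|I + R^{-1} H Q H^H|$ increases in the transmit covariance and decreases in the noise covariance.

```python
import numpy as np

# Illustrative sketch: Gaussian mutual information of a MIMO link with
# invertible noise covariance R, transmit covariance Q, and channel H.
rng = np.random.default_rng(6)
M, N = 2, 2
H = rng.standard_normal((M, N))   # arbitrary real channel for illustration

def W(R, Q):
    # log det(I + R^{-1} H Q H^H); slogdet returns (sign, log|det|)
    return np.linalg.slogdet(np.eye(M) + np.linalg.inv(R) @ H @ Q @ H.T)[1]

R, Q = np.eye(M), np.eye(N)
assert W(R, 2 * Q) > W(R, Q)      # more transmit power increases the rate
assert W(2 * R, Q) < W(R, Q)      # more noise decreases the rate
```

The minimization over the noise thus pushes against the maximization over the transmit covariance, which is why the linear constraints on both sides are essential for a finite saddle point value.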
We consider a minimax optimization of a weighted sum of the mutual informations of the links in a network, where the minimization is over the noise covariances and the maximization is with respect to the transmit covariances. The feasible sets of the covariances are modeled by linear conic constraints, i.e., the covariance matrix has to be smaller than a constraint matrix in the sense of positive-semidefiniteness. The use of the linear conic constraints is not motivated by a specific application, instead we will see that they are general enough to model a large class of constraints, including sum-power constraints, per antenna power constraints, or shaping constraints. Shaping constraints for the optimization of transmit covariances in MIMO systems are for example considered in [23,24,25,26,27].
The contributions of this paper are as follows. We introduce the downlink and uplink minimax problems under linear conic constraints and show that they not only exhibit an equal optimal value, but share a fully symmetric system of optimality conditions with inverted roles of the optimization variables and Lagrangian multipliers. The least favorable noise covariance itself provides a Lagrangian multiplier for the linear conic constraint on the transmit covariance in the dual network, while the transmit covariance provides a Lagrangian multiplier for the constraint on the interference plus noise covariance in the dual network. (The idea to include the interference plus noise covariances as optimization variables, which then appear in the optimality conditions, was independently developed in [28]. Unfortunately, the current version of [28] available at arXiv contains several flaws, the most significant one being the following: the authors claim that max-min equals min-max for their problem Equations (7)–(9) and that the solution is a saddle point. This is wrong; here is a counterexample: consider an interference network with square channels that are full rank. For the min-max version, the outer minimization over $\Omega$ results in $\Omega_l = I \; \forall l$, which leads to $\Sigma_l = 0 \; \forall l$ and a shut-down of the network. This is certainly not the result of the max-min optimization.) Further, we reveal the connection between the noise covariance constraint in one problem and the transmit covariance in the other via linear subspaces and their orthogonal complements. Unlike existing work, we directly include scenarios where the noise covariance is non-invertible by using the rather general Lagrangian Multiplier Rule to formulate the optimality conditions. Computing the full subdifferential of the utility allows us to parameterize the complete solution set. The direct connection of downlink and uplink solutions via the optimality conditions renders explicit transformation rules, as discussed in [9,11], obsolete.
The novel minimax duality unifies and extends several existing results, for example the least favorable noise capacity [17,18] and optimization of the transmit covariances under a per antenna power constraint [14,16].
Our work extends the traditional BC-MAC duality in two dimensions: from the BC-MAC scenario to arbitrary interference networks and from simple linear power constraints to a more general type of constraints. Many optimization problems in interference networks are proven to be NP-hard, so our duality can not help to solve those efficiently. Usually, efficient solvability is tied to the BC-MAC. By modelling this scenario as a specially structured interference network we are able to state conditions for efficient solvability by the network structure and the constraints imposed. This enables future use for other scenarios, for example relay channels where interference is removed by other means or cooperative transmission of multiple transmitters (with individual constraints), single frequency broadcast networks, or network MIMO. Concerning practical applications, there are several applications and problems that are intractable without our duality for example MIMO transmission strategies that manage and handle uncertainty by inter-cell interference. Numerically computing the worst-case noise covariances was previously not possible for more general networks. The worst-case noise approach can be used to handle the otherwise complicated coupling of interference networks, for example in cognitive radio networks [29] or device-to-device networks [30]. Optimizing the BC channel for shaping constraints was not possible without the extension to the more general constraints by our duality. The shaping constraints reduce the uncertainty in interference networks. In [31] the duality is extended to multiple linear constraints and practical algorithms for numerical computation are presented in [32]. Combining the worst case noise approach and the shaping constraints facilitates a very general cognitive radio concept that also holds for receivers with multiple receive antennas and enables efficient solvability of the underlying problems, see [29] for details. 
Further, our duality enables information theoretic proofs concerning cooperation in networks [33] and optimality of proper signaling [26,34].
The paper is organized as follows. In Section 2, we discuss our system model and introduce our idea of linear conic constraints. In Section 3, we present our main contribution: the minimax duality with linear conic constraints is stated in Theorem 1. Applications of the minimax duality are discussed in Section 4. In Section 4.1, we relate our work to existing results and show how they can be reproduced by reformulating them for our minimax duality, and in Section 4.2 we discuss novel results enabled by our minimax duality. In Section 5, we discuss conditions for existence of a solution to the minimax problem, we state and investigate the optimality conditions, and prove the minimax duality.
The following notations are used. Vectors are denoted by lower case bold and matrices by upper case bold letters. Additionally, we use upper case bold letters for tuples of matrices, such as $A = (A_1, \dots, A_K)$. The operator $(\cdot)^H$ stands for the Hermitian transpose, $\operatorname{tr}(\cdot)$ is the trace of a matrix, and $|\cdot|$ is the determinant of a matrix. The order relationships $\succeq$ and $\preceq$ are understood in the positive-semidefinite sense for matrices. For matrix tuples, $A \succeq B$ means $A_k \succeq B_k \; \forall k$ and $A \preceq B$ means $A_k \preceq B_k \; \forall k$. We use $I$ for the identity matrix, or a tuple of identity matrices, and $0$ for the all zero matrix, or a tuple of all zero matrices. The dimensions of $I$ and $0$ are either clear from the context or explicitly mentioned. We denote sets by upper case calligraphic letters. The linear subspace of $N \times N$ Hermitian matrices is denoted as $\mathcal{S}^{N \times N}$ and the subspace of tuples of Hermitian matrices, where the $k$th element is $N_k \times N_k$, is denoted as $\bigoplus_{k \in \mathcal{K}} \mathcal{S}^{N_k \times N_k}$. For linear subspaces, the operator $\oplus$ denotes the direct sum. Given $S$, let $T$, $T_\perp$ be such that $[\,T \; T_\perp\,]$ is unitary and full rank, $T^H S T \succ 0$, and $T_\perp^H S T_\perp = 0$; then we say that $T$, $T_\perp$ provide a full unitary basis for the fundamental subspaces of $S$. By this we can parameterize $S$ as
$$ S = [\, T \;\; T_\perp \,] \begin{bmatrix} T^H S T & 0 \\ 0 & 0 \end{bmatrix} [\, T \;\; T_\perp \,]^H $$
and the pseudo inverse, denoted as $S^{+}$, is given by $S^{+} = T \left( T^H S T \right)^{-1} T^H$.
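This parameterization can be checked numerically. The sketch below (an illustration, not code from the paper) builds the basis $[\,T \; T_\perp\,]$ of a rank-deficient positive semidefinite matrix from its eigendecomposition and verifies the parameterization of $S$ and of its pseudo inverse $S^+$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient PSD matrix S (rank 2 inside a 4x4 Hermitian space).
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
S = G @ G.conj().T

# Eigenvectors of the nonzero eigenvalues form T, the rest form T_perp,
# so [T T_perp] is unitary and spans the fundamental subspaces of S.
eigval, eigvec = np.linalg.eigh(S)
tol = 1e-10
T = eigvec[:, eigval > tol]
T_perp = eigvec[:, eigval <= tol]

# T^H S T is positive definite, while T_perp^H S T_perp vanishes.
assert np.all(np.linalg.eigvalsh(T.conj().T @ S @ T) > tol)
assert np.allclose(T_perp.conj().T @ S @ T_perp, 0)

# Parameterization of S and the pseudo inverse S^+ = T (T^H S T)^{-1} T^H.
S_param = T @ (T.conj().T @ S @ T) @ T.conj().T
S_pinv = T @ np.linalg.inv(T.conj().T @ S @ T) @ T.conj().T
assert np.allclose(S_param, S)
assert np.allclose(S_pinv, np.linalg.pinv(S))
```

The last assertion confirms that this construction agrees with the Moore–Penrose pseudo inverse, which is the form used for singular interference plus noise covariances later in the paper.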

2. System Model

We consider a MIMO interference network with a set of data links $\mathcal{K}$, with $K = |\mathcal{K}|$. The transmitter of link $k$ has $N_k$ transmit antennas and the receiver has $M_k$ receive antennas. The fixed and flat-fading channel between the transmitter of link $i$ and the receiver of link $k$ is $H_{ki} \in \mathbb{C}^{M_k \times N_i}$. The zero-mean proper Gaussian input symbol of link $k$ is $y_k \in \mathbb{C}^{N_k}$ with covariance $Q_k = \mathbb{E}[y_k y_k^H]$, and the output symbol is $x_k \in \mathbb{C}^{M_k}$. The input and output of link $k$ are related by
$$ x_k = H_{kk} y_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ki} y_i + n_k \qquad (3) $$
where $n_k \in \mathbb{C}^{M_k}$ is additive zero-mean circularly symmetric Gaussian noise with covariance $R_k = \mathbb{E}[n_k n_k^H]$. Assuming all transmit and noise signals to be independent, the interference plus noise covariance at the receiver of link $k$ is
$$ S_k = R_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ki} Q_i H_{ki}^H \qquad (4) $$
The dual network has the same set of links with the roles of receiver and transmitter inverted and flipped channels, meaning the channel between the uplink transmitter of link $i$ and the uplink receiver of link $k$ is $H_{ik}^H \in \mathbb{C}^{N_k \times M_i}$. We refer to the original problem as the downlink and to the dual network as the uplink. In the uplink, the zero-mean proper Gaussian input symbol of link $k$ is $\xi_k \in \mathbb{C}^{M_k}$ with covariance $\Sigma_k = \mathbb{E}[\xi_k \xi_k^H]$ and the output symbol is $\rho_k \in \mathbb{C}^{N_k}$. The input and output of link $k$ are related by
$$ \rho_k = H_{kk}^H \xi_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ik}^H \xi_i + \eta_k \qquad (5) $$
where $\eta_k \in \mathbb{C}^{N_k}$ is additive noise with covariance $\Omega_k = \mathbb{E}[\eta_k \eta_k^H]$. The interference plus noise covariance at the receiver of link $k$ is
$$ \Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ik}^H \Sigma_i H_{ik} \qquad (6) $$
Choosing the channel matrices accordingly, this model is general enough to cover various forms of interference networks including interfering broadcast and multiple access channels as well as mixtures thereof, see [12] for details. In Section 4.1.5, we discuss in more detail how to model a broadcast channel and the corresponding multiple access channel with non-linear interference cancellation.
For the downlink, the mutual information between $x_k$ and $y_k$, denoted as $I_k(x_k; y_k)$, can be expressed as a function of the noise covariance $R_k$ and the transmit covariances $Q = (Q_1, \dots, Q_K)$. For the uplink, the mutual information between $\rho_k$ and $\xi_k$, denoted as $I_k(\rho_k; \xi_k)$, can be expressed as a function of the noise covariance $\Omega_k$ and the transmit covariances $\Sigma = (\Sigma_1, \dots, \Sigma_K)$. For the considered network optimization problems the utilities are weighted sums of the mutual informations of the links:
$$ W(R, Q) = \sum_{k \in \mathcal{K}} w_k I_k(x_k; y_k) \qquad (7) $$
$$ W(\Omega, \Sigma) = \sum_{k \in \mathcal{K}} w_k I_k(\rho_k; \xi_k) \qquad (8) $$
where $w_1, \dots, w_K$ are non-negative weights, which can be used to prioritize links or to explore boundary points of the achievable rate region. We assume that interference is treated as additional noise, which allows us to express the utility by the interference plus noise covariances $S = (S_1, \dots, S_K)$ and $\Psi = (\Psi_1, \dots, \Psi_K)$ defined by Equations (4) and (6):
$$ W(R, Q) = U(S, Q) : S_k = R_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ki} Q_i H_{ki}^H \;\; \forall k \in \mathcal{K} \qquad (9) $$
$$ W(\Omega, \Sigma) = U(\Psi, \Sigma) : \Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus \{k\}} H_{ik}^H \Sigma_i H_{ik} \;\; \forall k \in \mathcal{K} \qquad (10) $$
where, in case the mutual information is finite,
$$ U(S, Q) = \sum_{k \in \mathcal{K}} w_k \log\left| I + S_k^{+} H_{kk} Q_k H_{kk}^H \right| \qquad (11) $$
$$ U(\Psi, \Sigma) = \sum_{k \in \mathcal{K}} w_k \log\left| I + \Psi_k^{+} H_{kk}^H \Sigma_k H_{kk} \right| \qquad (12) $$
The conditions for finite mutual information and a derivation of Equations (11) and (12) can be found in Appendix A.
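As a numerical sketch of Equations (4) and (11) (real-valued channels, unit covariances, and the dimensions below are arbitrary illustration choices), the downlink utility of a two-link network can be evaluated as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
K, w = 2, [1.0, 1.0]       # two links with unit weights
M = [2, 3]                 # receive antennas per link
N = [2, 2]                 # transmit antennas per link

# Channels H[k][i] from the transmitter of link i to the receiver of link k.
H = [[rng.standard_normal((M[k], N[i])) for i in range(K)] for k in range(K)]
Q = [np.eye(N[k]) for k in range(K)]   # transmit covariances
R = [np.eye(M[k]) for k in range(K)]   # noise covariances

def logdet(A):
    return np.linalg.slogdet(A)[1]

# Interference plus noise covariance of each link, Equation (4).
S = [R[k] + sum(H[k][i] @ Q[i] @ H[k][i].T for i in range(K) if i != k)
     for k in range(K)]

# Downlink utility, Equation (11); S_k is invertible here, so S^+ = S^{-1}.
U = sum(w[k] * logdet(np.eye(M[k])
                      + np.linalg.pinv(S[k]) @ H[k][k] @ Q[k] @ H[k][k].T)
        for k in range(K))
assert U > 0
```

The uplink utility of Equation (12) follows the same pattern with the flipped channels $H_{ik}^H$ and the covariances $\Sigma$, $\Omega$.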

2.1. Linear Conic Constraints

The transmit and noise covariances are constrained by linear conic constraints:
$$ \begin{aligned} Q &\preceq C + Z, \quad Z \in \mathcal{Z} &\quad& (13) \\ R &\preceq B + Y, \quad Y \in \mathcal{Y} &\quad& (14) \\ \Sigma &\preceq B + Y', \quad Y' \in \mathcal{Y}' &\quad& (15) \\ \Omega &\preceq C + Z', \quad Z' \in \mathcal{Z}' &\quad& (16) \end{aligned} $$
where $B = (B_1, \dots, B_K)$ and $C = (C_1, \dots, C_K)$ are fixed, and the degrees of freedom available for the optimization of $Z = (Z_1, \dots, Z_K)$, $Z' = (Z'_1, \dots, Z'_K)$, $Y = (Y_1, \dots, Y_K)$, and $Y' = (Y'_1, \dots, Y'_K)$ are given by $\mathcal{Z}$, $\mathcal{Z}'$, $\mathcal{Y}$, $\mathcal{Y}'$, which are linear subspaces of tuples of appropriately sized Hermitian matrices; in particular, $\mathcal{Z} \subseteq \bigoplus_{k \in \mathcal{K}} \mathcal{S}^{N_k \times N_k}$, $\mathcal{Z}' \subseteq \bigoplus_{k \in \mathcal{K}} \mathcal{S}^{N_k \times N_k}$, $\mathcal{Y} \subseteq \bigoplus_{k \in \mathcal{K}} \mathcal{S}^{M_k \times M_k}$, and $\mathcal{Y}' \subseteq \bigoplus_{k \in \mathcal{K}} \mathcal{S}^{M_k \times M_k}$.
The prototype for the linear conic constraints Equations (13)–(16) is
$$ W \preceq A + X, \quad X \in \mathcal{X} \qquad (17) $$
The choices for $X \in \mathcal{X}$ that allow for a positive semidefinite $W$ are given by $A + X \succeq 0$. If $A$ and $X$ are matrices (instead of tuples of matrices), this is a linear matrix inequality (LMI): the intersection of an affine subspace with the cone of positive semidefinite matrices. The constant nullspace of an LMI is the set of vectors $\lambda$ for which $\lambda^H (A + X) \lambda = 0$ for all $X \in \mathcal{X}$ with $A + X \succeq 0$, see [35] for details. In this work, we additionally regard tuples of matrices, where the positive semidefinite constraint has to hold for every element of the tuple, while the subspace is defined for the tuple. Hence, we define a set that is similar to the constant nullspace, but is defined in the space of matrix tuples instead of the space of vectors:
$$ \mathcal{N}(A, \mathcal{X}) = \left\{ \left( \Lambda_1 \Lambda_1^H, \dots, \Lambda_K \Lambda_K^H \right) : \Lambda_k^H (A_k + X_k) \Lambda_k = 0 \;\; \forall k, \; \forall X \in \mathcal{X} \text{ with } A + X \succeq 0 \right\} \qquad (18) $$
Additionally, we define a set that contains all the directions where all elements of the tuple are positive semidefinite:
$$ \mathcal{P}(\mathcal{X}) = \left\{ X \in \mathcal{X} : X \succeq 0 \right\} \qquad (19) $$
Lemma 1. 
Let $\mathcal{A} = \{\lambda A : \lambda \in \mathbb{R}\}$ and let $\mathcal{X}'$ be the orthogonal complement of $\mathcal{A} \oplus \mathcal{X}$ with respect to the appropriately sized Hermitian matrices; then $\mathcal{P}(\mathcal{X}') \subseteq \mathcal{N}(A, \mathcal{X})$.
Proof. 
See Appendix B. ☐
Note that in general $\mathcal{N}(A, \mathcal{X}) \not\subseteq \mathcal{P}(\mathcal{X}')$. (Take the following example: $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $\mathcal{X} = \left\{ \lambda \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} : \lambda \in \mathbb{R} \right\}$, $\mathcal{X}' = \left\{ \lambda \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} : \lambda \in \mathbb{R} \right\}$, where $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \in \mathcal{N}(A, \mathcal{X})$, but $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \notin \mathcal{P}(\mathcal{X}') = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \right\}$.) If $A \succ 0$, then $\mathcal{N}(A, \mathcal{X}) = \mathcal{P}(\mathcal{X}') = \{0\}$.

2.1.1. Relationship to Linear Constraints

There exist several network dualities where the transmit covariances are constrained by one or multiple linear constraints [13,14,15,16]. So how do the linear conic constraints relate to linear constraints? As known from semidefinite programming, an LMI can always be expressed in primal form with a finite set of linear constraints and one positive-semidefinite constraint. In principle, this can be done for Equation (17), but this does not result in linear constraints on the covariance. The following lemma states a condition under which Equation (17) can be replaced by multiple linear constraints on $W$, which can then be handled by the existing dualities [13,14,15,16].
Lemma 2. 
Let $\mathcal{A} = \{\lambda A : \lambda \in \mathbb{R}\}$ and $\mathcal{X}$ be two subspaces such that $\mathcal{A} \oplus \mathcal{X}$ is the orthogonal complement of $\mathcal{X}'$. If the set $\mathcal{E} = \{ E : E \succeq 0, \; E \in \mathcal{A} \oplus \mathcal{X}' \}$ is polyhedral, then the linear conic constraint $W \preceq A + X$, $X \in \mathcal{X}$, can be formulated by a finite number of linear constraints on $W$, i.e.,
$$ \sum_{k \in \mathcal{K}} \operatorname{tr}\left( (W_k - A_k) E_k^e \right) \leq 0 \quad \forall e \in \{1, \dots, E\} \qquad (20) $$
where $E^1, \dots, E^E$ are the extreme directions of $\mathcal{E}$.
Proof. 
See Appendix C. ☐
For the inverse direction: a single linear constraint can always be replaced by a linear conic constraint, see for example Section 2.1.2. Thus multiple linear constraints can be transformed into multiple conic constraints, a case which is not regarded in this work but suggested as a future extension in Section 6. For special cases, such as in Section 2.1.2, multiple linear constraints can be represented by a single conic constraint. Linear conic constraints allow us to model a variety of constraints on the covariances, such as sum-power constraints, per-antenna power constraints, or shaping constraints. The constraints can be joint constraints on all links, on groups of links, or individual constraints per link. In the following, we present some examples of how several relevant constraints can be modeled by linear conic constraints.

2.1.2. Examples

Network Sum-Power Constraint

Consider a constraint on the downlink transmit covariances that constrains the sum-power output of the network by
$$ \sum_{k \in \mathcal{K}} \operatorname{tr}(A_k Q_k) \leq P \qquad (21) $$
By Lemma 2, we identify the linear conic version of the constraint as
$$ C = \frac{P}{\sum_{k \in \mathcal{K}} \operatorname{tr}(A_k A_k)} A, \quad \mathcal{Z} = \left\{ Z : \sum_{k \in \mathcal{K}} \operatorname{tr}(A_k Z_k) = 0 \right\}, \quad \mathcal{Z}' = \{0\} \qquad (22) $$
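One direction of this equivalence can be checked numerically. The sketch below (an illustration with a single link and the arbitrary choice $A = I$, so the linear constraint is $\operatorname{tr}(Q) \le P$) picks a feasible point of the conic form and confirms that it satisfies the sum-power constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
n, P = 3, 5.0
C = (P / n) * np.eye(n)            # C = P / tr(A A) * A with A = I

# A random direction in the subspace {Z : tr(A Z) = 0}, i.e. traceless here.
Z = rng.standard_normal((n, n))
Z = (Z + Z.T) / 2
Z -= (np.trace(Z) / n) * np.eye(n)
assert abs(np.trace(Z)) < 1e-12

# Scale Z so that C + Z stays positive semidefinite.
Z *= 0.5 * (P / n) / np.abs(np.linalg.eigvalsh(Z)).max()

# Any transmit covariance Q with Q ⪯ C + Z meets the power constraint,
# e.g. Q = 0.9 (C + Z): tr(Q) = 0.9 (tr(C) + tr(Z)) = 0.9 P <= P.
Q = 0.9 * (C + Z)
assert np.all(np.linalg.eigvalsh(C + Z - Q) >= -1e-9)   # Q ⪯ C + Z
assert np.trace(Q) <= P + 1e-9                          # tr(Q) <= P
```

The key point is that shifting $C$ along the subspace $\mathcal{Z}$ leaves the trace against $A$ unchanged, so the conic upper bound never allows more than the sum power $P$.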

Per Link Sum-Power Constraint

If the power per link is constrained individually, i.e.,
$$ \operatorname{tr}(Q_k) \leq P_k \quad \forall k \in \mathcal{K} \qquad (23) $$
then by Lemma 2 we identify the linear conic version of the constraint as
$$ \begin{aligned} C &= \left( \tfrac{P_1}{N_1} I, \dots, \tfrac{P_K}{N_K} I \right) &\quad& (24) \\ \mathcal{Z} &= \left\{ Z : \operatorname{tr}(Z_k) = 0 \;\; \forall k \in \mathcal{K} \right\} &\quad& (25) \\ \mathcal{Z}' &= \left\{ Z : Z = (\lambda_1 I, \dots, \lambda_K I), \; \sum_{k \in \mathcal{K}} P_k \lambda_k = 0 \right\} &\quad& (26) \end{aligned} $$

Shaping Constraint

A shaping constraint, that is $Q \preceq C$, is already given in the form of a linear conic constraint, where $\mathcal{Z} = \{0\}$ and $\mathcal{Z}' = \left\{ Z : \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k Z_k) = 0 \right\}$. A shaping constraint cannot be formulated by a finite number of linear constraints, as the set $\mathcal{E}$ is the cone of positive semidefinite matrices, or the tuple version thereof, which is not polyhedral and has infinitely many extreme directions.

3. Minimax Duality with Linear Conic Constraints

Theorem 1. 
Given a MIMO interference network (downlink) with data links $\mathcal{K}$ and the dual network (uplink) with the same set of data links but the roles of receivers and transmitters inverted and flipped channels. Consider minimax problems with a weighted sum of the links' mutual informations as utility and linear conic constraints on the transmit and noise covariance matrices:
$$ C_D = \inf_{R \succeq 0, \, Y \in \mathcal{Y}} \; \sup_{Q \succeq 0, \, Z \in \mathcal{Z}} \; \left\{ W(R, Q) : Q \preceq C + Z \right\} : R \preceq B + Y $$
$$ C_U = \inf_{\Omega \succeq 0, \, Z' \in \mathcal{Z}'} \; \sup_{\Sigma \succeq 0, \, Y' \in \mathcal{Y}'} \; \left\{ W(\Omega, \Sigma) : \Sigma \preceq B + Y' \right\} : \Omega \preceq C + Z' $$
The subspaces that define the constraint sets are $\mathcal{B} = \{\lambda B : \lambda \in \mathbb{R}\}$, $\mathcal{Y}$, $\mathcal{Y}'$ and $\mathcal{C} = \{\lambda C : \lambda \in \mathbb{R}\}$, $\mathcal{Z}$, $\mathcal{Z}'$, where $\sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k)$. The subspace triples $\mathcal{B}, \mathcal{Y}, \mathcal{Y}'$ and $\mathcal{C}, \mathcal{Z}, \mathcal{Z}'$, respectively, are mutually orthogonal and their direct sums are the spaces of all $K$-tuples $\bigoplus_{k \in \mathcal{K}} \mathcal{S}^{M_k \times M_k}$ and $\bigoplus_{k \in \mathcal{K}} \mathcal{S}^{N_k \times N_k}$, respectively. If
(C1) $H_{kk} \Lambda_k = 0 \;\; \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(C, \mathcal{Z})$
(C2) $H_{kk} \Lambda_k = 0 \;\; \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(C, \mathcal{Z}')$
(C3) $\Lambda_k H_{kk} = 0 \;\; \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(B, \mathcal{Y})$
(C4) $\Lambda_k H_{kk} = 0 \;\; \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(B, \mathcal{Y}')$,
both problems have a solution and it holds that $C_D = C_U$.
Furthermore, there exists a solution of the downlink problem
$$ \left( \check{R} = B + \check{Y}, \;\; \check{Q} = C + \check{Z} - \check{N}, \;\; \check{S} \right), \quad \left( \check{\Omega} = \check{\gamma} C + \check{Z}', \;\; \check{\Sigma} = \check{\gamma} B + \check{Y}' - \check{L}, \;\; \check{\Psi}, \;\; \check{K} \right) $$
that is connected to a solution of the uplink problem
$$ \left( \hat{\Omega} = C + \hat{Z}', \;\; \hat{\Sigma} = B + \hat{Y}' - \hat{L}, \;\; \hat{\Psi} \right), \quad \left( \hat{R} = \hat{\gamma} B + \hat{Y}, \;\; \hat{Q} = \hat{\gamma} C + \hat{Z} - \hat{N}, \;\; \hat{S}, \;\; \hat{M} \right) $$
by inverting the roles of the optimization variables and Lagrangian multipliers, that is
$$ \left( \check{R}, \check{S}, \check{Q}, \check{Z}, \check{Y}, \check{N} \right) = \frac{1}{\gamma} \left( \hat{R}, \hat{S}, \hat{Q}, \hat{Z}, \hat{Y}, \hat{N} \right) \quad \text{and} \quad \left( \check{\Omega}, \check{\Psi}, \check{\Sigma}, \check{Y}', \check{Z}', \check{L} \right) = \gamma \left( \hat{\Omega}, \hat{\Psi}, \hat{\Sigma}, \hat{Y}', \hat{Z}', \hat{L} \right) $$
where $\gamma = \check{\gamma} = \hat{\gamma}$. (We introduce additional optimization variables $\check{S}$ and $\hat{\Psi}$, which are defined by Equations (4) and (6). The connection of the Lagrangian multipliers to the constraints can be found in Table 1.)
Remark 1. 
The Theorem holds for arbitrary channel matrix sizes and there are no assumptions on the rank of the channel matrices. Whether the minimax problems have a solution does not solely depend on the channel matrices; instead, we need to jointly consider the constraints and channel matrices. In general, this cannot be done by analyzing individual links, as the tuple subspaces can couple all links of the network. The derivation of the joint conditions on channels and constraints for the existence of a solution, (C1)–(C4), can be found in Section 5.1.
Remark 2. 
The mutual information of a link is independent of the noise covariances of the other links and the utility is non-increasing in the noise covariance, thus we can always find a solution where the noise covariance constraint holds with equality.
Remark 3. 
If the feasible set for the downlink noise covariances is given by
$$ \left\{ R : \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k R_k) = P, \; R \in \mathcal{B} \oplus \mathcal{Y} \right\} $$
this is mathematically equivalent to
$$ \left\{ R : R = B + Y, \; Y \in \mathcal{Y} \right\} $$
if $\sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = P$. This formulation was used in work based on earlier versions of the duality, see for example [36].
The Theorem requires $\sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k)$, which is not always the case for the problems considered in the following. By scaling the channel matrices and optimization variables we can artificially enforce this condition for the downlink. By reverting this scaling of the channel in the uplink we obtain an asymmetric version of the minimax duality, which is formally stated by the following lemma:
Lemma 3. 
Let $b = \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k)$ and $c = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k)$. If $p$ and $n$ are such that $\frac{p}{n} = \frac{c}{b}$, then
$$ \inf_{R \succeq 0, \, Y \in \mathcal{Y}} \; \sup_{Q \succeq 0, \, Z \in \mathcal{Z}} \; \left\{ W(R, Q) : Q \preceq C + Z \right\} : R \preceq B + Y $$
$$ = \inf_{\Omega \succeq 0, \, Z' \in \mathcal{Z}'} \; \sup_{\Sigma \succeq 0, \, Y' \in \mathcal{Y}'} \; \left\{ W(\Omega, \Sigma) : \Sigma \preceq p B + Y' \right\} : \Omega \preceq n C + Z' $$
Proof. 
See Appendix D. ☐

4. Applications

We directly proceed with applications of the minimax duality and present the rather technical proof in Section 5.

4.1. Relationship to Existing Work

4.1.1. Original Minimax Duality

As our result is an extension and generalization of the result in [10], we are able to reestablish the original minimax duality as a special case. Specifically, we reproduce the proof that the Sato bound is tight for the sum-rate in the broadcast channel with non-linear interference presubtraction. The Sato upper bound on the sum-capacity of the multi-user downlink is established by assuming full cooperation of the receivers [37], which makes the scenario equivalent to the optimization of a point-to-point link, where the minimization is with respect to the joint noise covariance, the user noise covariances are fixed blocks on the diagonal, and the off-diagonal blocks can be freely chosen.
The set of users is $\mathcal{U}$, with $U = |\mathcal{U}|$, where $M_u$ is the number of receive antennas of user $u$. The transmitter has $N$ transmit antennas and $M = \sum_{u \in \mathcal{U}} M_u$ is the total number of receive antennas. The channel of user $u$ is $H_u \in \mathbb{C}^{M_u \times N}$ and the composite channel of the point-to-point channel where all receivers cooperate is $H = [H_1^T, \dots, H_U^T]^T \in \mathbb{C}^{M \times N}$. The receive symbol covariance in the downlink, and therefore the transmit symbol covariance in the uplink, is block partitioned such that block $i,j$ is the covariance of the symbols received by users $i$ and $j$, and the $k$-th block on the diagonal is the marginal covariance of user $k$. We assume the downlink noise to be the identity matrix scaled by $\sigma^2$ for every user. The appropriate model for the subspaces $\mathcal{B}$, $\mathcal{Y}$, $\mathcal{Y}'$, which are subspaces of $\mathcal{S}^{M \times M}$, is $B = \sigma^2 I$. $\mathcal{Y}$ is the subspace of Hermitian matrices that have all zero matrices as blocks on the diagonal and allow for an arbitrary choice of the other blocks, which implies that $\mathcal{Y}'$ is the set of all blockdiagonal matrices with trace equal to zero.
Let the structure of the patterns $*$ and $\bar{*}$ be as follows
$$ * = \begin{bmatrix} *_{1,1} & 0 & \cdots & 0 \\ 0 & *_{2,2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & *_{U,U} \end{bmatrix}, \qquad \bar{*} = \begin{bmatrix} 0 & \bar{*}_{1,2} & \cdots & \bar{*}_{1,U} \\ \bar{*}_{2,1} & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \bar{*}_{U-1,U} \\ \bar{*}_{U,1} & \cdots & \bar{*}_{U,U-1} & 0 \end{bmatrix} $$
then
$$ B = \begin{bmatrix} \sigma^2 I & & 0 \\ & \ddots & \\ 0 & & \sigma^2 I \end{bmatrix}, \quad \mathcal{Y} = \left\{ Y : Y = \bar{*} \right\}, \quad \mathcal{Y}' = \left\{ Y : Y = *, \; \operatorname{tr}(Y) = 0 \right\} $$
According to Section 2.1.2, the linear conic version of the sum-power constraint on the downlink transmit covariances is
$$ C = \frac{P}{N} I, \quad \mathcal{Z} = \left\{ Z : \operatorname{tr}(Z) = 0 \right\}, \quad \mathcal{Z}' = \{0\} $$
where $\mathcal{C}$, $\mathcal{Z}$, $\mathcal{Z}'$ are subspaces of $\mathcal{S}^{N \times N}$. The downlink problem is
$$ \min_{R \succeq 0, \, \bar{*}} \; \max_{Q \succeq 0, \, Z} \; \left\{ W(R, Q) : Q \preceq \frac{P}{N} I + Z, \; \operatorname{tr}(Z) = 0 \right\} : R \preceq \sigma^2 I + \bar{*} $$
With $B \succ 0$ and $C \succ 0$, conditions (C1)–(C4) hold true, and using the asymmetric minimax duality of Lemma 3 the uplink problem is
$$ \min_{\Omega \succeq 0, \, Z'} \; \max_{\Sigma \succeq 0, \, Y'} \; \left\{ W(\Omega, \Sigma) : \Sigma \preceq p \sigma^2 I + Y', \; Y' \in \mathcal{Y}' \right\} : \Omega \preceq n \frac{P}{N} I + Z', \; Z' \in \mathcal{Z}' $$
$$ = \min_{\Omega \succeq 0} \; \max_{\Sigma \succeq 0, \, *} \; \left\{ W(\Omega, \Sigma) : \Sigma \preceq p \sigma^2 I + *, \; \operatorname{tr}(*) = 0 \right\} : \Omega \preceq n \frac{P}{N} I $$
With $b = \operatorname{tr}(BB) = M \sigma^4$, $c = \operatorname{tr}(CC) = \frac{P^2}{N}$, and $\frac{c}{b} = \frac{P^2}{N M \sigma^4}$, we can select $p = \frac{P}{\sigma^2 M}$ and $n = \frac{\sigma^2 N}{P}$, as $\frac{p}{n} = \frac{P^2}{N M \sigma^4} = \frac{c}{b}$, and we obtain
$$ \min_{\Omega \succeq 0} \; \max_{\Sigma \succeq 0, \, *} \; \left\{ W(\Omega, \Sigma) : \Sigma \preceq \frac{P}{M} I + *, \; \operatorname{tr}(*) = 0 \right\} : \Omega \preceq \sigma^2 I $$
With $\mathcal{Z}' = \{0\}$, $\Omega = \sigma^2 I$ is a solution for the uplink noise, see Remark 2. For the constraint on the uplink transmit covariance, we observe that $B$ and all elements of $\mathcal{Y}'$ are blockdiagonal. Thus, $\Sigma$ will be blockdiagonal, and the users do not cooperate in the uplink. Furthermore, $\Sigma \preceq \frac{P}{M} I + *$ with $\operatorname{tr}(*) = 0$ implies a sum-power constraint of $P$ on the uplink transmit covariances. Let $\Sigma_1, \dots, \Sigma_U$ be the blocks on the diagonal of the blockdiagonal $\Sigma$; then the uplink problem is
$$ \max_{\Sigma \succeq 0} \; \left\{ \log\left| I + \frac{1}{\sigma^2} H^H \Sigma H \right| : \operatorname{tr}(\Sigma) \leq P, \; \Sigma \text{ blockdiagonal} \right\} $$
$$ = \max_{\Sigma \succeq 0} \; \left\{ \log\left| I + \frac{1}{\sigma^2} \sum_{u \in \mathcal{U}} H_u^H \Sigma_u H_u \right| : \sum_{u \in \mathcal{U}} \operatorname{tr}(\Sigma_u) \leq P \right\} $$
We can see that the value of the uplink problem is equal to the sum-capacity of a multiple access channel with a sum-power constraint. It is well known that for the same sum-power constraint the same sum-rate can be achieved in the broadcast channel [4,8,9], which proves the Sato bound to be tight for the sum-rate.
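The step from the cooperative uplink objective to the MAC form rests on a simple block identity: for blockdiagonal $\Sigma = \operatorname{diag}(\Sigma_1, \dots, \Sigma_U)$ and stacked $H = [H_1^T, \dots, H_U^T]^T$ we have $H^H \Sigma H = \sum_u H_u^H \Sigma_u H_u$. The sketch below checks this numerically (real channels and arbitrary dimensions, chosen only for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)
N, Ms = 3, [2, 2]                               # tx antennas, per-user rx antennas
Hs = [rng.standard_normal((m, N)) for m in Ms]  # per-user channels H_u
H = np.vstack(Hs)                               # composite channel

Sigmas = []
for m in Ms:                                    # random PSD per-user covariances
    G = rng.standard_normal((m, m))
    Sigmas.append(G @ G.T)

Sigma = np.zeros((sum(Ms), sum(Ms)))            # assemble blockdiagonal Sigma
r = 0
for m, Su in zip(Ms, Sigmas):
    Sigma[r:r + m, r:r + m] = Su
    r += m

lhs = H.T @ Sigma @ H                           # cooperative form H^H Sigma H
rhs = sum(Hu.T @ Su @ Hu for Hu, Su in zip(Hs, Sigmas))
assert np.allclose(lhs, rhs)                    # MAC decomposition

sigma2 = 1.0                                    # identical sum-rate objectives
rate = np.linalg.slogdet(np.eye(N) + lhs / sigma2)[1]
assert rate > 0
```

Since the two quadratic forms agree for every blockdiagonal $\Sigma$, the cooperative uplink problem and the MAC sum-capacity problem have identical feasible objectives.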

4.1.2. Structure of the Worst Case Noise Covariance

In [9] it is shown that Equation (80) is a solution to the worst case downlink noise covariance problem Equation (70) if $R$ is non-singular. Further, it is conjectured that it is also a solution for the singular case, but a proof is missing. With Lemma 4 we restate this result and provide an alternative proof that also holds for the singular case. The uplink problem Equation (74) can be reformulated as (for simplicity, but without loss of generality, we set $\sigma^2 = 1$)
$$ \max_{D, \, \Sigma \succeq 0} \; \left\{ \log\left| D \right| : D = I + H^H \Sigma H, \; \operatorname{tr}(\Sigma) \leq P, \; \Sigma \text{ blockdiagonal} \right\} $$
Let $A$, $\lambda$, $F$, and $M$ be the Lagrangian multipliers for the constraints $D = I + H^H \Sigma H$, $\operatorname{tr}(\Sigma) \leq P$, $\Sigma$ blockdiagonal, and $\Sigma \succeq 0$, respectively.
Lemma 4. 
Given a solution $D$, $\Sigma$ of the uplink problem with Lagrangian multipliers $A$, $\lambda$, the remaining Lagrangian multipliers can be selected as
$$ F = H A H^H - \begin{bmatrix} H_1 A H_1^H & & 0 \\ & \ddots & \\ 0 & & H_U A H_U^H \end{bmatrix} $$
$$ M = \begin{bmatrix} \lambda I & & 0 \\ & \ddots & \\ 0 & & \lambda I \end{bmatrix} - \begin{bmatrix} H_1 A H_1^H & & 0 \\ & \ddots & \\ 0 & & H_U A H_U^H \end{bmatrix} $$
and a solution to the worst case downlink noise covariance problem Equation (70) is given by
$$ \begin{aligned} R &= I + \frac{1}{\lambda} F = \frac{1}{\lambda} H A H^H + \frac{1}{\lambda} M &\quad& (79) \\ &= \begin{bmatrix} I & \frac{1}{\lambda} H_1 A H_2^H & \cdots & \frac{1}{\lambda} H_1 A H_U^H \\ \frac{1}{\lambda} H_2 A H_1^H & I & \cdots & \frac{1}{\lambda} H_2 A H_U^H \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{\lambda} H_U A H_1^H & \frac{1}{\lambda} H_U A H_2^H & \cdots & I \end{bmatrix} &\quad& (80) \end{aligned} $$
Proof. 
See Appendix E. ☐
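The matrix identities behind Equations (79) and (80) can be checked for arbitrary placeholder multipliers. In the sketch below, $A$ (PSD) and $\lambda > 0$ are arbitrary values, not an actual solution of the uplink problem; the check only concerns the algebraic structure of $R$.

```python
import numpy as np

rng = np.random.default_rng(4)
N, Ms = 3, [2, 2]
Hs = [rng.standard_normal((m, N)) for m in Ms]  # per-user channels
H = np.vstack(Hs)                               # composite channel
G = rng.standard_normal((N, N))
A = G @ G.T                                     # placeholder multiplier A
lam = 2.0                                       # placeholder multiplier lambda
Mtot = sum(Ms)

D = np.zeros((Mtot, Mtot))                      # blockdiag(H_u A H_u^H)
r = 0
for m, Hu in zip(Ms, Hs):
    D[r:r + m, r:r + m] = Hu @ A @ Hu.T
    r += m

F = H @ A @ H.T - D                             # multiplier F: zero diagonal blocks
M_mult = lam * np.eye(Mtot) - D                 # multiplier M
R1 = np.eye(Mtot) + F / lam                     # Equation (79), first form
R2 = (H @ A @ H.T + M_mult) / lam               # Equation (79), second form
assert np.allclose(R1, R2)

# Equation (80): identity blocks on the diagonal, H_i A H_j^H / lambda elsewhere.
assert np.allclose(R1[:2, :2], np.eye(2))
assert np.allclose(R1[:2, 2:], Hs[0] @ A @ Hs[1].T / lam)
```

The construction makes the block structure of the worst-case noise transparent: the per-user noise covariances stay at identity, while the freely chosen cross-covariances carry the coupling $H_i A H_j^H / \lambda$.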

4.1.3. Worst Case Noise Capacity

The capacity of a point-to-point link with channel $H \in \mathbb{C}^{M \times N}$ under a worst case noise assumption is given by
$$ \max_{Q \succeq 0} \; \min_{R \succeq 0} \; \left\{ \log\left| I + R^{+} H Q H^H \right| : \operatorname{tr}(R) \leq \sigma^2 \right\} : \operatorname{tr}(Q) \leq P \qquad (81) $$
In [17], it is shown that this worst case noise capacity is equivalent to the capacity without channel knowledge and white noise,
$$ \log\left| I + \frac{P}{\sigma^2} H^H H \right| $$
While in [17] this is proven by matrix majorization theory, it can alternatively be reproduced by the minimax duality. Computation of Equation (81) means solving a convex-concave game with individual convex constraint sets, for which max and min can be exchanged, and we can instead use
$$ \min_{R \succeq 0} \; \max_{Q \succeq 0} \; \left\{ \log\left| I + R^{+} H Q H^H \right| : \operatorname{tr}(Q) \leq P \right\} : \operatorname{tr}(R) \leq \sigma^2 $$
The sum-power constraints are modelled according to Section 2.1.2:
$$ B = \frac{\sigma^2}{M} I, \quad \mathcal{Y} = \left\{ Y : \operatorname{tr}(Y) = 0 \right\}, \quad \mathcal{Y}' = \{0\} $$
and
$$ C = \frac{P}{N} I, \quad \mathcal{Z} = \left\{ Z : \operatorname{tr}(Z) = 0 \right\}, \quad \mathcal{Z}' = \{0\} $$
With B 0 and C 0 conditions (C1)–(C4) hold true and using the asymmetric minimax duality by Lemma 3 the uplink problem is
min Ω 0 max Σ 0 log I + Ω + H H Σ H : Σ p σ 2 M I : Ω n P N I
With b = tr ( B B ) = σ 4 M , c = tr ( C C ) = P 2 N , and c b = M P 2 N σ 4 we can select p = M P σ 2 and n = N σ 2 P , as p n = M P 2 N σ 4 = c b , and we obtain
\[ \min_{\Omega \succeq 0} \Bigl\{ \max_{\Sigma \succeq 0} \bigl\{ \log\bigl|I + \Omega^{+} H^H \Sigma H\bigr| : \Sigma \preceq P I \bigr\} : \Omega \succeq \sigma^2 I \Bigr\} = \log\Bigl|I + \frac{P}{\sigma^2} H^H H\Bigr| \tag{87} \]
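The scaling arithmetic and the resulting closed form are easy to verify numerically; the dimensions and power levels below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions and powers (assumptions for this sketch)
M, N, P, sigma2 = 3, 4, 2.0, 0.5

b = sigma2**2 / M          # tr(B B) for B = (sigma^2 / M) I_M
c = P**2 / N               # tr(C C) for C = (P / N) I_N
p = M * P / sigma2
n = N * sigma2 / P
assert np.isclose(p / n, c / b)          # the scaling condition p/n = c/b

# With these scalings the constraints become Sigma <= P I and Omega >= sigma^2 I
assert np.isclose(p * sigma2 / M, P)
assert np.isclose(n * P / N, sigma2)

# Closed-form worst-case-noise capacity log|I + (P/sigma^2) H^H H|
H = rng.standard_normal((M, N))
cap = np.linalg.slogdet(np.eye(N) + (P / sigma2) * H.T @ H)[1]
assert np.isfinite(cap) and cap > 0
```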

4.1.4. Network Duality under (Generalized) Sum-Power Constraints

Here we regard the special case where the noise covariances are fixed and appear as a generalized sum-power constraint in the dual network. Let
\[ S^D(R, \Omega) = \max_{Q \succeq 0} \Bigl\{ W(R, Q) : \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) = P \Bigr\} \tag{88} \]
and
\[ S^U(\Omega, R) = \max_{\Sigma \succeq 0} \Bigl\{ W(\Omega, \Sigma) : \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \Sigma_k) = P \Bigr\} \tag{89} \]
where we assume $R$, $\Omega$, and the channels such that the maxima exist. This scenario is, for example, considered in [12], where by using an SINR duality it is proven that $S^D(R,\Omega) = S^U(\Omega,R)$ and the polite waterfilling result is established. In the following, we show how the network maximization problems under generalized sum-power constraints, Equations (88) and (89), can be formulated in our minimax framework. It can be verified that the conditions for existence of the maxima of Equations (88) and (89) imply conditions (C1)–(C4). For the downlink, the absence of a minimization over the noise covariance can be modelled by selecting
\[ B = R, \quad \mathcal{Y} = \{0\}, \quad \mathcal{Y}' = \Bigl\{Y : \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k Y_k) = 0\Bigr\} \tag{90} \]
which means that R is a solution for the downlink noise covariances, see Remark 2. The conic version of the generalized sum-power constraint is introduced in Section 2.1.2:
\[ C = \frac{P}{\sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)} \, \Omega, \quad \mathcal{Z} = \Bigl\{Z : \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Z_k) = 0\Bigr\}, \quad \mathcal{Z}' = \{0\} \tag{91} \]
Using the asymmetric minimax duality of Lemma 3, the uplink problem is
\[ \min_{\Omega \succeq 0, Z} \Bigl\{ \max_{\Sigma \succeq 0, Y} \bigl\{ W(\Sigma, \Omega) : \Sigma \preceq p R + Y, \; Y \in \mathcal{Y}' \bigr\} : \Omega \succeq n \frac{P}{\sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)} \, \Omega + Z, \; Z \in \mathcal{Z}' \Bigr\} \tag{92} \]
With $b = \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k R_k)$, $c = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k) = \frac{P^2}{\sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)}$, and $\frac{c}{b} = \frac{P^2}{\sum_{k \in \mathcal{K}} \operatorname{tr}(R_k R_k) \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)}$, we can select $p = \frac{P}{\sum_{k \in \mathcal{K}} \operatorname{tr}(R_k R_k)}$ and $n = \frac{\sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)}{P}$, as $\frac{p}{n} = \frac{P^2}{\sum_{k \in \mathcal{K}} \operatorname{tr}(R_k R_k) \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \Omega_k)} = \frac{c}{b}$, and we obtain
\[ \min_{\Omega \succeq 0, Z} \Bigl\{ \max_{\Sigma \succeq 0, Y} \bigl\{ W(\Sigma, \Omega) : \Sigma \preceq \frac{P}{\sum_{k \in \mathcal{K}} \operatorname{tr}(R_k R_k)} \, R + Y, \; Y \in \mathcal{Y}' \bigr\} : \Omega \succeq \Omega + Z, \; Z \in \mathcal{Z}' \Bigr\} \tag{93} \]
which is the minimax version of Equation (89).

4.1.5. Duality of Broadcast and Multiple Access Channel 

As in Section 4.1.1, the set of users is $\mathcal{U}$, with $U = |\mathcal{U}|$, and the channel of user $u$ with $M_u$ receive antennas is $H_u \in \mathbb{C}^{N \times M_u}$, where $N$ is the number of antennas at the broadcast transmitter. Broadcast and multiple access channels are modelled by considering one link per user and first setting $H_{u1} = H_{u2} = \dots = H_{uK} = H_u \; \forall u \in \mathcal{K}$ and $N_k = N \; \forall k \in \mathcal{K}$. Second, if the interference between a user $u$ and a user $v$ is removed by non-linear interference cancellation, we set $H_{vu} = 0$ instead. This leads to the well-known connection: given some ordering of the users, removing interference in the downlink such that a user does not interfere with a user of higher order matches the uplink model where interference is successively canceled in the inverse order. Thus, our minimax duality provides a minimax BC-MAC duality for any type of constraints that can be modeled as linear conic constraints. It holds for linear precoding as well as non-linear interference cancellation in an arbitrary order.
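This modeling step can be sketched as follows; the array layout and the convention of which cross-channels are zeroed (here: a later-encoded user does not interfere with earlier ones) are illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative broadcast setup: U = 3 users, N = 4 transmit antennas,
# M = 2 receive antennas per user; encoding order = user index.
U, N, M = 3, 4, 2
H_user = [rng.standard_normal((N, M)) for _ in range(U)]  # H_u (real-valued here)

# One link per user: H_net[v][u] couples link u into the receiver of
# link v; initially every link sees the physical channel of its
# receiving user, H_{vu} = H_v.
H_net = [[H_user[v].copy() for u in range(U)] for v in range(U)]

# Non-linear interference cancellation: set H_{vu} = 0 for cancelled
# pairs (which pairs are zeroed follows the chosen encoding order).
for v in range(U):
    for u in range(U):
        if u > v:
            H_net[v][u] = np.zeros((N, M))

assert all(np.allclose(H_net[v][u], H_user[v])
           for v in range(U) for u in range(U) if u <= v)
assert all(np.allclose(H_net[v][u], 0.0)
           for v in range(U) for u in range(U) if u > v)
```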
A particularly interesting case is where the decoding order of the uplink is matched to the weights of the users. If the decoding order is equal to the ascending order of the weights and additionally the noise covariance is the same for every user in the uplink, then the inner maximization of the uplink problem can be reformulated as a convex optimization problem. This renders the uplink problem a convex-concave game, for which efficient algorithms to find a solution are available. The following lemma describes the structure of the linear conic constraints and the subspaces that imply an equal uplink noise covariance for all users.
Lemma 5. 
Let the subspaces $\mathcal{C} = \{\lambda C : \lambda \in \mathbb{R}\}$, $\mathcal{Z}$, $\mathcal{Z}'$ be such that they are mutually orthogonal and their direct sum is $\mathbb{S}^{N \times N}$. If the constraint on the downlink transmit covariances solely acts on the sum of the transmit covariances, i.e.,
\[ \sum_{k \in \mathcal{K}} Q_k \preceq C + Z, \quad Z \in \mathcal{Z} \tag{94} \]
then the constraint on the uplink noise covariances is
\[ \Omega_k \succeq C + Z, \quad Z \in \mathcal{Z}' \quad \forall k \in \mathcal{K} \tag{95} \]
which allows for a solution where all noise covariances are equal.
Proof. 
See Appendix F. ☐
Lemma 6. 
Consider a broadcast problem. If the encoding order is equal to the descending order of the user weights, the constraint on the downlink transmit covariances solely acts on the sum of the transmit covariances, and conditions (C1)–(C4) hold true, then every point that fulfills the optimality conditions is a solution.
Proof. 
The uplink problem can be reformulated as a convex-concave game, where every point that fulfills the optimality conditions is a solution. By Lemma 11 every point that fulfills the downlink optimality conditions also fulfills the uplink optimality conditions and therefore is a solution. ☐
Thus, for these scenarios we have the freedom to either solve for the uplink or downlink optimality conditions, whatever is more convenient.
Lemma 7. 
Consider a broadcast problem. If the encoding order is equal to the descending order of the user weights, the constraint on the downlink transmit covariances solely acts on the sum of the transmit covariances, and conditions (C1)–(C4) hold true, then
\[ \min_{R \succeq 0, Y} \Bigl\{ \max_{Q \succeq 0, Z} \bigl\{ W(R, Q) : \sum_{k \in \mathcal{K}} Q_k \preceq C + Z, \; Z \in \mathcal{Z} \bigr\} : R \preceq B + Y, \; Y \in \mathcal{Y} \Bigr\} \tag{96} \]
\[ = \max_{Q \succeq 0, Z} \Bigl\{ \min_{R \succeq 0, Y} \bigl\{ W(R, Q) : R \preceq B + Y, \; Y \in \mathcal{Y} \bigr\} : \sum_{k \in \mathcal{K}} Q_k \preceq C + Z, \; Z \in \mathcal{Z} \Bigr\} \tag{97} \]
Proof. 
By Lemma 6 every point that fulfills the optimality conditions of Equation (96) is a solution. The optimality conditions are the same for Equations (96) and (97) and therefore both problems have an equal optimal value. ☐

4.2. Novel Results

4.2.1. Interference Robust Multi-User MIMO

Our multi-user minimax duality enables new results concerning robustness to interference in MIMO networks, where uncertainty in the spatial signature of interfering signals is a major source of performance degradation. In the companion work [36], two possible techniques to provide robustness for interfering broadcast channels are discussed. Both decouple the optimization of the network into individual problems per base station: one treats the interference from the other cells by a worst-case approximation, and the other enforces fixed sum-transmit covariances at each transmitter, which removes the uncertainty in the interference.
For the worst-case approximation, the minimization of the noise plus other-cell interference covariance in the downlink is constrained by a noise power constraint for every user individually, which translates to scaled identity matrices as transmit covariances in the uplink, see Section 2.1.2. The sum-power constraint on the downlink transmit covariances leads to fixed uplink noise covariances, see Section 2.1.2. Thus, the uplink problem is a maximization of the scaling coefficients of the uplink transmit covariances under a constraint on the sum of the coefficients. For details see Appendix G. This result provides a generalization of the worst-case noise capacity from point-to-point channels [17] or sum-rate in the broadcast channel [18] to the complete capacity region of the MIMO broadcast channel.
The second approach constrains the sum of the downlink transmit covariances by a shaping constraint, which can be fulfilled with equality for some solution. (Whether the shaping constraint is fulfilled with equality depends on the number of receive antennas and the rank of the channels. Equality can be artificially enforced by adding a virtual user whose interference to the other users is canceled by non-linear interference pre-subtraction. Whether the gain by increased robustness outweighs the pain of additional interference to the other cells depends on the scenario.) Assuming the channels to be static (for some time period), the constant transmit covariances lead to a constant inter-cell interference covariance at the users, which removes the uncertainty. The fixed noise covariances in the downlink translate to a sum-power constraint on the uplink transmit covariances. The shaping constraint on the sum of the transmit covariances leads to equal uplink noise covariances for all users in the uplink, where the minimization is under a noise power constraint, see Section 4.1.5.
Both approaches can be modelled by our minimax duality, see [36] for details. As for both approaches the constraints on the downlink transmit covariances act solely on the sum of the transmit covariances, i.e., a sum-power constraint and a sum shaping constraint, the uplink problem has a convex-concave game reformulation and can be solved efficiently for non-linear interference (pre-)subtraction.

4.2.2. Information Theoretic Proofs

Cooperation in Wireless Networks

The connection of uplink and downlink via orthogonal subspaces allows us to model cooperation in wireless networks, for example the cooperation of the receivers to compute the Sato bound. Reconsider the result of Section 4.1.1: by letting the receivers cooperate, the correlations among the noise symbols of different users are included into the optimization and can be freely chosen by allowing all degrees of freedom on the off-diagonal blocks of the joint noise covariance, which induces a linear subspace. As a result, the joint uplink covariances are confined to the orthogonal subspace, which results in a block-diagonal matrix and thus no cooperation of the transmitters. Without this cooperation, the uplink is equivalent to a multiple access channel and by the BC-MAC duality translates to a broadcast channel without cooperation. Thus, whenever the additional degrees of freedom introduced by cooperation can be modeled as a linear subspace, we can use the minimax duality to transfer the problem into the uplink, which then might reveal whether cooperation is beneficial. An example can be found in [33], where carrier-cooperation is modelled in our minimax duality.

Optimality of Proper Signaling

Results from information theory show that improper signals are beneficial for some communication scenarios. Recent results use our minimax duality to prove the optimality of proper signaling for the MIMO broadcast channel under a shaping constraint [26] and for the MIMO relay channel with partial decode-and-forward [34]. A key ingredient for the proof is the observation that in a composite real representation a covariance matrix can be partitioned into a power shaping component and an impropriety component, such that the components are elements of linear subspaces that are orthogonal complements. Using such a formulation in our minimax duality, the problem can be transferred to an uplink problem, where the worst-case noise is proper. For the considered scenarios, it can be shown that there exist solutions for the uplink with proper transmit symbols. The translation back into a downlink solution preserves this property.
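The orthogonal partitioning in the composite real representation can be illustrated numerically. The block parameterizations below (power shaping component $\begin{smallmatrix} X & -Y \\ Y & X \end{smallmatrix}$, impropriety component $\begin{smallmatrix} U & V \\ V & -U \end{smallmatrix}$) are one common formalization and are assumptions of this sketch, not notation taken from [26,34].

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3

# Arbitrary symmetric 2N x 2N matrix (a composite real covariance candidate)
W = rng.standard_normal((2 * N, 2 * N))
W = (W + W.T) / 2
W11, W12 = W[:N, :N], W[:N, N:]
W21, W22 = W[N:, :N], W[N:, N:]

# Power shaping (proper) component: blocks [[X, -Y], [Y, X]],
# X symmetric, Y skew-symmetric
X = (W11 + W22) / 2
Y = (W21 - W12) / 2
P = np.block([[X, -Y], [Y, X]])

# Impropriety component: blocks [[U, V], [V, -U]], U, V symmetric
U = (W11 - W22) / 2
V = (W12 + W21) / 2
Imp = np.block([[U, V], [V, -U]])

assert np.allclose(P + Imp, W)             # the two components recover W
assert np.isclose(np.trace(P @ Imp), 0.0)  # orthogonal in the trace inner product
```

Since the two subspaces are orthogonal complements within the symmetric matrices, the decomposition is unique, which is what allows the worst-case noise argument to separate the proper part from the impropriety part.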

5. Proof of the Theorem

5.1. Conditions for Existence of a Solution

To derive conditions (C1)–(C4), we use the generalization of the constant nullspace, Equation (18), and the positive semidefinite directions of a subspace, Equation (19). For the downlink, the following conditions ensure that those directions in which the noise or transmit covariance can become arbitrarily large have no influence on the utility:
\[ H_{kk} \Lambda_k = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{P}(\mathcal{Z}) \tag{98} \]
\[ \Lambda_k H_{kk} = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{P}(\mathcal{Y}) \tag{99} \]
The constraints on the transmit covariance can be used to prohibit certain spatial directions of the transmit signal, resulting in a non-trivial constant nullspace $\mathcal{N}(C, \mathcal{Z}) \neq \{0\}$. By a projection of the channel matrix we can artificially enforce
\[ H_{kk} \Lambda_k = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(C, \mathcal{Z}) \tag{100} \]
without changing the solutions. By imposing this condition we are able to formulate the conditions for the inner optimization to have a finite value as a condition on the channels, independent of the constraints on the transmit covariance:
\[ \Lambda_k H_{kk} = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(B, \mathcal{Y}) \tag{101} \]
Assume the condition does not hold; then there exists a $\Lambda \in \mathcal{N}(B, \mathcal{Y})$ with $\Lambda_k = \lambda_k \lambda_k^H \; \forall k$ and $\lambda_k^H H_{kk} \neq 0$ for some $k$. Condition Equation (100) implies that for any $\lambda_k$ with $\lambda_k^H H_{kk} \neq 0$ there exists a feasible $Q$ with $\lambda_k^H H_{kk} Q_k H_{kk}^H \lambda_k > 0$. The inequality of the transmit covariance constraint allows us to remove any interference to $k$ by setting all other transmit covariances to zero, so that $S_k = R_k$. As $\lambda_k^H R_k \lambda_k = 0$ for all feasible $R_k$, we can use $\lambda_k^H$ as a receive filter to obtain a noise-free communication channel, and the inner optimization does not have a finite value. For the uplink, the equivalent conditions are
\[ \Lambda_k H_{kk} = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{P}(\mathcal{Y}') \tag{102} \]
\[ H_{kk} \Lambda_k = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{P}(\mathcal{Z}') \tag{103} \]
\[ \Lambda_k H_{kk} = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(B, \mathcal{Y}') \tag{104} \]
\[ H_{kk} \Lambda_k = 0 \quad \forall k \in \mathcal{K}, \; \forall \Lambda \in \mathcal{N}(C, \mathcal{Z}') \tag{105} \]
Combining the conditions for the downlink, Equations (98)–(101), with the conditions for the uplink, Equations (102)–(105), we can see that by Lemma 1 the conditions Equations (98), (99), (102) and (103) are superfluous. The remaining ones are conditions (C1)–(C4) stated in the Theorem. For $B \succ 0$ and $C \succ 0$ we have $\mathcal{P}(\mathcal{Y}) = \mathcal{N}(B, \mathcal{Y}) = \{0\}$ and $\mathcal{P}(\mathcal{Z}) = \mathcal{N}(C, \mathcal{Z}) = \{0\}$ (and likewise for $\mathcal{Y}'$ and $\mathcal{Z}'$), and the conditions (C1)–(C4) hold true. Throughout the paper we assume (C1)–(C4) to hold true.

5.2. Properties of the Solutions – Optimality Conditions

Before we present the proof of Theorem 1 in Section 5.4, we investigate some properties of the solutions and necessary optimality conditions of the minimax problems. A natural candidate for optimality conditions are the KKT conditions, but as the utility fails to be differentiable everywhere we use the more general Lagrange multiplier rule, see for example [38, Th. 6.1.1]. The conditions are necessary, but not sufficient, for optimality.
By Remark 2, there is always a solution for the noise covariances where the linear conic constraint holds with equality. This allows us to actively enforce equality, which may reduce the set of solutions but does not affect the optimal value. For the proof, we will introduce $N \succeq 0$ and $L \succeq 0$ as slack variables for the constraints $Q \preceq C + Z$ and $\Sigma \preceq B + Y$, such that $N = C + Z - Q$ and $L = B + Y - \Sigma$. Additionally, we will use scaled versions of the optimization problems, and we explicitly include the interference-plus-noise covariances into the optimization:
\[ \tilde{C}^D(\alpha) = \min_{\substack{R \succeq 0 \\ Y \in \mathcal{Y}}} \Bigl\{ \max_{\substack{S, \, Q \succeq 0 \\ Z \in \mathcal{Z}}} \bigl\{ U(S, Q) : S_k = R_k + \textstyle\sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H \; \forall k, \; Q \preceq \alpha C + Z \bigr\} : R = \alpha B + Y \Bigr\} \tag{106} \]
\[ \tilde{C}^U(\beta) = \min_{\substack{\Omega \succeq 0 \\ Z' \in \mathcal{Z}'}} \Bigl\{ \max_{\substack{\Psi, \, \Sigma \succeq 0 \\ Y' \in \mathcal{Y}'}} \bigl\{ U(\Psi, \Sigma) : \Psi_k = \Omega_k + \textstyle\sum_{i \in \mathcal{K} \setminus k} H_{ik}^H \Sigma_i H_{ik} \; \forall k, \; \Sigma \preceq \beta B + Y' \bigr\} : \Omega = \beta C + Z' \Bigr\} \tag{107} \]
Apart from the obvious equalities $C^D = \tilde{C}^D(\alpha)\bigr|_{\alpha=1}$ and $C^U = \tilde{C}^U(\beta)\bigr|_{\beta=1}$, the relationship between the original problem and the scaled problem is expressed by the following lemma:
Lemma 8. 
(a)
Given $\alpha > 0$ and a point which fulfills the optimality conditions for $\tilde{C}^D(\alpha)$, a point with equal utility, which fulfills the optimality conditions for $\alpha = 1$, can be found by scaling the optimization variables by $\frac{1}{\alpha}$ and the Lagrangian multipliers by $\alpha$.
(b)
Given $\beta > 0$ and a point which fulfills the optimality conditions for $\tilde{C}^U(\beta)$, a point with equal utility, which fulfills the optimality conditions for $\beta = 1$, can be found by scaling the optimization variables by $\frac{1}{\beta}$ and the Lagrangian multipliers by $\beta$.
Proof. 
We introduce Lagrangian multipliers as shown in Table 1. The Lagrangian multiplier for a subspace constraint is an element of the orthogonal complement of the subspace. Therefore, the Lagrangian multiplier for $Z \in \mathcal{Z}$ is an element of $\mathcal{C} \oplus \mathcal{Z}'$, which we parameterize by $\nu C + Z'$, $Z' \in \mathcal{Z}'$, in order to explicitly highlight the component in the direction of $C$. The same explanation holds for the Lagrangian multiplier $\mu B + Y'$, $Y' \in \mathcal{Y}'$, for the constraint $Y \in \mathcal{Y}$. The Lagrangian function for $\tilde{C}^D(\alpha)$ can be written as
\[ \begin{aligned} L^D = {} & \sum_{k \in \mathcal{K}} w_k \log\bigl|I + S_k^{+} H_{kk} Q_k H_{kk}^H\bigr| + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl(\Sigma_k (S_k - R_k - \textstyle\sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H)\bigr) \\ & - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl(\Omega_k (Q_k - \alpha C_k - Z_k)\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\nu C_k + Z_k') Z_k\bigr) + \sum_{k \in \mathcal{K}} \operatorname{tr}(K_k Q_k) \\ & + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl(X_k (R_k - \alpha B_k - Y_k)\bigr) + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\mu B_k + Y_k') Y_k\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}(L_k R_k) \end{aligned} \tag{108} \]
where the signs are selected such that Ω , K , L are positive semi-definite. As
\[ \operatorname{tr}\bigl(\Sigma_k H_{ki} Q_i H_{ki}^H\bigr) = \operatorname{tr}\bigl(H_{ki}^H \Sigma_k H_{ki} Q_i\bigr) \tag{109} \]
and by introducing the auxiliary variable Ψ, where
\[ \Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus k} H_{ik}^H \Sigma_i H_{ik}, \quad k \in \mathcal{K} \tag{110} \]
the Lagrangian function can be written as
\[ \begin{aligned} L^D = {} & \sum_{k \in \mathcal{K}} w_k \log\bigl|I + S_k^{+} H_{kk} Q_k H_{kk}^H\bigr| + \sum_{k \in \mathcal{K}} \operatorname{tr}(\Sigma_k S_k) + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((K_k - \Psi_k) Q_k\bigr) \\ & + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\Omega_k - \nu C_k - Z_k') Z_k\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((X_k - \mu B_k - Y_k') Y_k\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\Sigma_k + L_k - X_k) R_k\bigr) \\ & + \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k \, \alpha C_k) - \sum_{k \in \mathcal{K}} \operatorname{tr}(X_k \, \alpha B_k) \end{aligned} \tag{111} \]
The optimality conditions for $\tilde{C}^D(\alpha)$ are shown in Table 1. Equations (27)–(33) are the constraints on the optimization variables, Equations (34) and (35) are the properties of the Lagrangian multipliers due to the chosen parameterization, Equation (36) is the substitution introduced by Equation (110), Equations (37)–(39) are the complementary slackness conditions for the inequality constraints, and Equations (40)–(44) are the stationarity conditions. The Lagrangian function is differentiable with respect to $Z, Y, R, Q$, which in this order gives us Equations (40)–(43). The utility is not differentiable with respect to $S$ at a point where $S$ is not invertible, and we have to compute and parameterize the complete subdifferential to obtain Equation (44). This can be done by using [38, Th. 2.5.1], see Appendix H for details and the definition of the set $E_k^\Delta$. As a remark, $E_k^\Delta = \{0\}$ if $S_k$ is invertible.
Let
\[ \bigl( R = \alpha B + Y, \; Q = \alpha C + Z - N, \; S \bigr), \quad \bigl( \Omega = \nu C + Z', \; \Sigma = \mu B + Y' - L, \; \Psi, \; K \bigr) \tag{112} \]
be a solution of the scaled downlink, and substitute $(R, Y, Q, Z, N, S)$ by $\alpha(\check{R}, \check{Y}, \check{Q}, \check{Z}, \check{N}, \check{S})$ and $(\Sigma, \Omega, \nu, Z', K, X, \mu, Y', L, \Psi)$ by $\frac{1}{\alpha}(\check{\Sigma}, \check{\Omega}, \check{\nu}, \check{Z}', \check{K}, \check{X}, \check{\mu}, \check{Y}', \check{L}, \check{\Psi})$. Thus, the scaling factor $\alpha$ cancels out everywhere and
\[ \bigl( \check{R} = B + \check{Y}, \; \check{Q} = C + \check{Z} - \check{N}, \; \check{S} \bigr), \quad \bigl( \check{\Omega} = \check{\nu} C + \check{Z}', \; \check{\Sigma} = \check{\mu} B + \check{Y}' - \check{L}, \; \check{\Psi}, \; \check{K} \bigr) \tag{113} \]
provides a point that fulfills the optimality conditions of the unscaled problem, where α = 1 . The utility Equation (11) is not changed by an equal scaling of interference-plus-noise covariance and transmit covariance. This proves (a).
The conversion between scaled and unscaled problem is
\[ \frac{1}{\alpha}\bigl( R = \alpha B + Y, \; Q = \alpha C + Z - N, \; S \bigr), \; \alpha\bigl( \Omega = \nu C + Z', \; \Sigma = \mu B + Y' - L, \; \Psi, \; K \bigr) = \bigl( \check{R} = B + \check{Y}, \; \check{Q} = C + \check{Z} - \check{N}, \; \check{S} \bigr), \; \bigl( \check{\Omega} = \check{\nu} C + \check{Z}', \; \check{\Sigma} = \check{\mu} B + \check{Y}' - \check{L}, \; \check{\Psi}, \; \check{K} \bigr) \tag{114} \]
For the uplink, we use the substitution
\[ S_k = R_k + \sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H \quad \forall k \in \mathcal{K} \tag{115} \]
and the Lagrangian function can be written as
\[ \begin{aligned} L^U = {} & \sum_{k \in \mathcal{K}} w_k \log\bigl|I + \Psi_k^{+} H_{kk}^H \Sigma_k H_{kk}\bigr| + \sum_{k \in \mathcal{K}} \operatorname{tr}(Q_k \Psi_k) + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((M_k - S_k) \Sigma_k\bigr) \\ & + \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((R_k - \sigma B_k - Y_k) Y_k'\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((W_k - \tau C_k - Z_k) Z_k'\bigr) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((Q_k + N_k - W_k) \Omega_k\bigr) \\ & + \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \, \beta B_k) - \sum_{k \in \mathcal{K}} \operatorname{tr}(W_k \, \beta C_k) \end{aligned} \tag{116} \]
The explanation of the optimality conditions of the uplink, the definition of $\mathcal{Q}_k^\Delta$, and the proof of (b) follow by symmetry. The conversion between the scaled and the unscaled problem is
\[ \frac{1}{\beta}\bigl( \Omega = \beta C + Z', \; \Sigma = \beta B + Y' - L, \; \Psi \bigr), \; \beta\bigl( R = \sigma B + Y, \; Q = \tau C + Z - N, \; S, \; M \bigr) = \bigl( \hat{\Omega} = C + \hat{Z}', \; \hat{\Sigma} = B + \hat{Y}' - \hat{L}, \; \hat{\Psi} \bigr), \; \bigl( \hat{R} = \hat{\sigma} B + \hat{Y}, \; \hat{Q} = \hat{\tau} C + \hat{Z} - \hat{N}, \; \hat{S}, \; \hat{M} \bigr) \tag{117} \]
The characterization of the solutions to optimization problems for MIMO interference networks by polite waterfilling is introduced in [12]. The next lemma states the polite waterfilling result for the special case of an isolated point-to-point link. Although a proof can be found in the original work [12], we include an alternative proof that especially highlights scenarios with non-invertible noise covariances and the conditions under which these have a solution. Further, we present the proof in a way that is directly connected to our problem, which allows us to introduce an additional result as a corollary to the proof.
Lemma 9. 
Consider a MIMO link with channel matrix $H$ and non-zero noise covariance $S$, and a link with flipped channel $H^H$ and non-zero noise covariance $\Psi$. The matrices $T, T'$ and $\Phi, \Phi'$ provide a full unitary basis for the fundamental subspaces of $S$ and $\Psi$, respectively, where $T$ and $\Phi$ span the nullspaces.
If $H\Phi = 0$ and $T^H H = 0$, then the mutual information of both links is finite, the problems
\[ \sup_{Q \succeq 0} \; w \log\bigl|I + S^{+} H Q H^H\bigr| - \operatorname{tr}(\Psi Q) \tag{118} \]
and
\[ \sup_{\Sigma \succeq 0} \; w \log\bigl|I + \Psi^{+} H^H \Sigma H\bigr| - \operatorname{tr}(S \Sigma) \tag{119} \]
have a solution, and the following equalities hold for any pair of solutions:
(a)
\[ \Phi'^H H^H S^{+} H Q \, \Phi' = \Phi'^H H^H \Sigma H \Psi^{+} \Phi' \]
(b)
\[ T'^H S^{+} H Q H^H T' = T'^H \Sigma H \Psi^{+} H^H T' \]
(c)
\[ \Sigma \in \bigl\{ \Sigma : \Sigma = w S^{+} - w \bigl(S + H Q H^H\bigr)^{+} + w \Sigma^\Delta, \; \Sigma^\Delta \in E^\Delta \bigr\} \]
(d)
\[ Q \in \bigl\{ Q : Q = w \Psi^{+} - w \bigl(\Psi + H^H \Sigma H\bigr)^{+} + w Q^\Delta, \; Q^\Delta \in \mathcal{Q}^\Delta \bigr\} \]
(e)
\[ \log\bigl|I + S^{+} H Q H^H\bigr| = \log\bigl|I + \Psi^{+} H^H \Sigma H\bigr| \]
(f)
\[ \operatorname{tr}(\Psi Q) = \operatorname{tr}(S \Sigma) \]
where the definitions of $E^\Delta$ and $\mathcal{Q}^\Delta$ are given in Lemma 12, presented in Appendix H.
We say that Q , Σ , S , Ψ are connected by polite waterfilling if they fulfill the equalities of the Lemma.
Proof. 
The condition for the downlink mutual information to be finite is $T^H H = 0$, see Lemma 12 in Appendix A. If $H\Phi \neq 0$, there exists a $q$ with $Hq \neq 0$ and $\operatorname{tr}(\Psi q q^H) = 0$, and for $Q = \alpha q q^H$ the utility becomes arbitrarily large for increasing $\alpha$. So, if $H\Phi \neq 0$, then Equation (118) does not have a solution. Conversely, $H\Phi = 0$ implies $\operatorname{tr}(\Psi Q) > 0$ whenever $HQ \neq 0$, and as the logarithm grows slower than any linear function, the supremum is finite. By symmetry, the condition for a finite uplink mutual information is $H\Phi = 0$ and the condition for Equation (119) to have a solution is $T^H H = 0$.
Assume the conditions of the lemma hold true and consider the problem
\[ \max_{S = S} \; \max_{Q \succeq 0} \; w \log\bigl|I + S^{+} H Q H^H\bigr| - \operatorname{tr}(\Psi Q) \tag{120} \]
which has the same solutions as Equation (118).
As the utility fails to be differentiable everywhere, the popular KKT conditions cannot be applied, and we use the more general Lagrange multiplier rule, see for example [38, Th. 6.1.1]. Let $\Sigma$ and $K$ be the Lagrangian multipliers for $S = S$ and $Q \succeq 0$; the optimality conditions are
\[ \Sigma \in \bigl\{ \Sigma : \Sigma = w S^{+} - w \bigl(S + H Q H^H\bigr)^{+} + w \Sigma^\Delta, \; \Sigma^\Delta \in E^\Delta \bigr\} \tag{121} \]
\[ \Psi = w H^H \bigl(S + H Q H^H\bigr)^{+} H + K \tag{122} \]
\[ \operatorname{tr}(K Q) = 0, \quad K \succeq 0, \quad Q \succeq 0 \tag{123} \]
where E Δ is given by Lemma 13 in Appendix H. Furthermore, consider the following problem
\[ \max_{\Psi = \Psi} \; \max_{\Sigma \succeq 0} \; w \log\bigl|I + \Psi^{+} H^H \Sigma H\bigr| - \operatorname{tr}(S \Sigma) \tag{124} \]
which has the same solutions as Equation (119). Let $Q$ and $M$ be the Lagrangian multipliers for $\Psi = \Psi$ and $\Sigma \succeq 0$. The optimality conditions are
\[ Q \in \bigl\{ Q : Q = w \Psi^{+} - w \bigl(\Psi + H^H \Sigma H\bigr)^{+} + w Q^\Delta, \; Q^\Delta \in \mathcal{Q}^\Delta \bigr\} \tag{125} \]
\[ S = w H \bigl(\Psi + H^H \Sigma H\bigr)^{+} H^H + M \tag{126} \]
\[ \operatorname{tr}(M \Sigma) = 0, \quad M \succeq 0, \quad \Sigma \succeq 0 \tag{127} \]
where $\mathcal{Q}^\Delta$ is defined analogously to $E^\Delta$ given by Lemma 13 in Appendix H. By standard calculus, it can be shown that every solution to Equations (121)–(123) is a solution to Equations (125)–(127) and vice versa, and that these solutions fulfill the equalities of the Lemma. See Appendix I for details. ☐
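For the invertible point-to-point case (square $H$, $S \succ 0$, $K = M = 0$), the equalities (e) and (f) of Lemma 9 can be verified numerically: choose any $Q \succ 0$, construct $\Psi$ from the stationarity condition Equation (122) and $\Sigma$ from Equation (121) with $\Sigma^\Delta = 0$. The instance below is an illustrative sketch under these assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, w = 3, 2.0

H = rng.standard_normal((N, N))   # square channel, invertible a.s.
S = np.eye(N)                     # noise covariance S > 0
G = rng.standard_normal((N, N))
Q = G @ G.T                       # arbitrary transmit covariance Q > 0

T = S + H @ Q @ H.T
Psi = w * H.T @ np.linalg.inv(T) @ H                # Equation (122) with K = 0
Sigma = w * (np.linalg.inv(S) - np.linalg.inv(T))   # Equation (121), Sigma_Delta = 0

# equality (e): both links achieve the same mutual information
lhs = np.linalg.slogdet(np.eye(N) + np.linalg.inv(S) @ H @ Q @ H.T)[1]
rhs = np.linalg.slogdet(np.eye(N) + np.linalg.inv(Psi) @ H.T @ Sigma @ H)[1]
assert np.isclose(lhs, rhs)

# equality (f): equal penalty terms
assert np.isclose(np.trace(Psi @ Q), np.trace(S @ Sigma))
```

Note that the check holds for any $Q \succ 0$, which reflects that (c)–(f) couple the two links pointwise through the stationarity conditions rather than only at the global optimum of a specific power budget.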
Corollary 1. 
For both $\tilde{C}^D(\alpha)$ and $\tilde{C}^U(\beta)$, if a point fulfills the optimality conditions, then for every link $k \in \mathcal{K}$ the tuple $Q_k, \Sigma_k, S_k, \Psi_k$ is connected by polite waterfilling.
Proof. 
Consider the downlink and notice that Equations (121)–(123) are fulfilled by Equations (30), (34), (38), (43), and (44), meaning that $Q_k, \Sigma_k, S_k, \Psi_k$ are connected by polite waterfilling if the conditions of the Lemma, namely $H_{kk}\Phi_k = 0$ and $T_k^H H_{kk} = 0$, hold. By conditions (C1)–(C4) we know that unless $T_k^H H_{kk} = 0$ the mutual information is infinite, and such a point does not fulfill the optimality conditions. By multiplying Equation (43) with $\Phi_k^H$ from the left and $\Phi_k$ from the right we obtain
\[ 0 = \Phi_k^H H_{kk}^H \bigl(S_k + H_{kk} Q_k H_{kk}^H\bigr)^{+} H_{kk} \Phi_k + \Phi_k^H K_k \Phi_k \tag{128} \]
\[ = \Phi_k^H H_{kk}^H T_k' \bigl(T_k'^H S_k T_k' + T_k'^H H_{kk} Q_k H_{kk}^H T_k'\bigr)^{-1} T_k'^H H_{kk} \Phi_k + \Phi_k^H K_k \Phi_k \tag{129} \]
As both summands are positive semidefinite matrices, they must both be equal to $0$, and with $\bigl(T_k'^H S_k T_k' + T_k'^H H_{kk} Q_k H_{kk}^H T_k'\bigr)^{-1} \succ 0$ we must have $T_k'^H H_{kk} \Phi_k = 0$. As $T_k^H H_{kk} = 0$ and $[T_k \; T_k']$ has full rank, we have $[T_k \; T_k']^H H_{kk} \Phi_k = 0$, and $H_{kk} \Phi_k = 0$ follows. The proof for the uplink follows by symmetry. ☐
Lemma 10. 
Independent of the scaling factors α and β, the following holds:
(a)
for every point that fulfills the downlink optimality conditions, we have
\[ \mu \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \nu \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k) \]
(b)
for every point that fulfills the uplink optimality conditions, we have
\[ \sigma \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \tau \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k) \]
Proof. 
The mutual orthogonality of the linear subspace triples $B, \mathcal{Y}, \mathcal{Y}'$ and $C, \mathcal{Z}, \mathcal{Z}'$, respectively, implies
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k Y_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k Y_k') = \sum_{k \in \mathcal{K}} \operatorname{tr}(Y_k Y_k') = 0 \]
and
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k Z_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k Z_k') = \sum_{k \in \mathcal{K}} \operatorname{tr}(Z_k Z_k') = 0 \]
respectively. Consider the downlink. By combining Equations (37) and (40), we have
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) = \alpha \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k C_k) \tag{134} \]
\[ = \alpha \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\nu C_k + Z_k') C_k\bigr) \tag{135} \]
\[ = \alpha \nu \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k) \tag{136} \]
Multiplying Equation (31) with B from the right, applying the trace operator on both sides, and summing over all links, we obtain
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k B_k) = \alpha \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) + \sum_{k \in \mathcal{K}} \operatorname{tr}(Y_k B_k) \tag{137} \]
\[ = \alpha \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) \tag{138} \]
Further, combining Equations (41) and (42) we have $\Sigma = \mu B + Y' - L$, and multiplying with $R$ from the left, applying the trace operator on both sides, and summing over all links gives us
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \Sigma_k) = \mu \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k B_k) + \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k Y_k') - \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k L_k) \tag{139} \]
\[ = \mu \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k B_k) \tag{140} \]
where we used $\operatorname{tr}(R_k L_k) = 0$ from Equation (39) and $\sum_{k \in \mathcal{K}} \operatorname{tr}(R_k Y_k') = 0$, which follows from $R = \alpha B + Y$ and the orthogonality relations above. By combining Equations (138) and (140) it holds:
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \Sigma_k) = \alpha \mu \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) \tag{141} \]
Multiplying Equation (27) by Σ k from the right and Equation (43) by Q k from the right, applying the trace operator on both sides, and summing over all links gives us
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(S_k \Sigma_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \Sigma_k) + \sum_{k \in \mathcal{K}} \sum_{i \in \mathcal{K} \setminus k} \operatorname{tr}\bigl(H_{ki} Q_i H_{ki}^H \Sigma_k\bigr) \tag{142} \]
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(\Psi_k Q_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) + \sum_{k \in \mathcal{K}} \sum_{i \in \mathcal{K} \setminus k} \operatorname{tr}\bigl(H_{ik}^H \Sigma_i H_{ik} Q_k\bigr) \tag{143} \]
With the trace property Equation (109) and the equivalence of the penalty terms from the polite waterfilling property, Lemma 9 (f), we obtain
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(R_k \Sigma_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) \tag{144} \]
Combining this with Equations (136) and (141), we have
\[ \alpha \mu \sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \alpha \nu \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k) \tag{145} \]
The proof for the uplink follows by symmetry. ☐
Lemma 11. 
If $\sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k)$, then the downlink and uplink problems share the following optimality conditions:
\[ S_k = R_k + \sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H \quad \forall k \tag{146} \]
\[ \Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus k} H_{ik}^H \Sigma_i H_{ik} \quad \forall k \tag{147} \]
\[ Q = \alpha C + Z - N \tag{148} \]
\[ \Sigma = \beta B + Y' - L \tag{149} \]
\[ R = \alpha B + Y \tag{150} \]
\[ \Omega = \beta C + Z' \tag{151} \]
\[ \operatorname{tr}(K_k Q_k) = 0 \quad \forall k \tag{152} \]
\[ \operatorname{tr}(L_k R_k) = 0 \quad \forall k \tag{153} \]
\[ \operatorname{tr}(M_k \Sigma_k) = 0 \quad \forall k \tag{154} \]
\[ \operatorname{tr}(N_k \Omega_k) = 0 \quad \forall k \tag{155} \]
\[ S_k = w_k H_{kk} \bigl(\Psi_k + H_{kk}^H \Sigma_k H_{kk}\bigr)^{+} H_{kk}^H + M_k \quad \forall k \tag{156} \]
\[ \Psi_k = w_k H_{kk}^H \bigl(S_k + H_{kk} Q_k H_{kk}^H\bigr)^{+} H_{kk} + K_k \quad \forall k \tag{157} \]
\[ Q_k \in \bigl\{ Q_k : Q_k = w_k \Psi_k^{+} - w_k \bigl(\Psi_k + H_{kk}^H \Sigma_k H_{kk}\bigr)^{+} + w_k Q_k^\Delta, \; Q_k^\Delta \in \mathcal{Q}_k^\Delta \bigr\} \quad \forall k \tag{158} \]
\[ \Sigma_k \in \bigl\{ \Sigma_k : \Sigma_k = w_k S_k^{+} - w_k \bigl(S_k + H_{kk} Q_k H_{kk}^H\bigr)^{+} + w_k \Sigma_k^\Delta, \; \Sigma_k^\Delta \in E_k^\Delta \bigr\} \quad \forall k \tag{159} \]
\[ R \succeq 0, \; \Omega \succeq 0, \; Q \succeq 0, \; \Sigma \succeq 0, \; S \succeq 0, \; \Psi \succeq 0, \; K \succeq 0, \; M \succeq 0, \; L \succeq 0, \; N \succeq 0 \tag{160} \]
\[ Z \in \mathcal{Z}, \quad Y \in \mathcal{Y}, \quad Y' \in \mathcal{Y}', \quad Z' \in \mathcal{Z}' \tag{161} \]
where $T_k, T_k'$ and $\Phi_k, \Phi_k'$ provide a full unitary basis for the fundamental subspaces of $S_k$ and $\Psi_k$, respectively. Specifically, the following relationships hold:
(a)
given $\alpha > 0$, $\beta > 0$, and a point that fulfills the shared conditions, this point fulfills Equations (27)–(44) for $\nu = \mu = \beta$, and Equations (45)–(62) for $\tau = \sigma = \alpha$.
(b)
given a scaling factor $\alpha > 0$, any point that fulfills the downlink optimality conditions Equations (27)–(44) also fulfills the shared conditions with $\beta = \nu = \mu$,
(c)
given a scaling factor $\beta > 0$, any point that fulfills the uplink optimality conditions Equations (45)–(62) also fulfills the shared conditions with $\alpha = \tau = \sigma$.
Proof. 
First, note that if $\sum_{k \in \mathcal{K}} \operatorname{tr}(B_k B_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(C_k C_k)$, then Lemma 10 implies $\mu = \nu$ and $\sigma = \tau$. To prove (a) for the downlink, we show that all conditions in Equations (27)–(44) that are not explicitly stated in the shared conditions hold for any point that fulfills the shared conditions. For the downlink these are condition Equation (37), which holds by combining Equations (148) and (155), and conditions Equations (41) and (42), which are equivalent to Equation (149) by eliminating the superfluous variable $X$. The proof for the uplink follows by symmetry.
To prove (b), we show that all shared conditions that are not explicitly stated in Equations (27)–(44) hold for any point that fulfills Equations (27)–(44). Conditions Equations (148) and (155) hold by Equation (37) and $N = \alpha C + Z - Q$, as $N$ is the slack variable for Equation (148). $N \succeq 0$ in Equation (160) holds by the definition of $N$. The conditions Equations (154), (156) and (158) hold true by the polite waterfilling connection due to Corollary 1. The proof for (c) follows from the proof for (b) by symmetry. ☐

5.3. Proof of the Network Duality for Fixed Noise and (Generalized) Sum-Power Constraint

Before we prove $C^D = C^U$ for the general minimax problems, we discuss the special case where the noise covariances are fixed and appear as a generalized sum-power constraint in the dual network. This is the scenario from [12] introduced in Section 4.1.4, where the functions $S^D(R,\Omega)$ and $S^U(\Omega,R)$ are defined by Equations (88) and (89), respectively. As we have used the polite waterfilling result from [12] to prove Lemma 11, it is not surprising that we can in turn use Lemma 11 to prove $S^D(R,\Omega) = S^U(\Omega,R)$. To do so, we have to reformulate $S^D(R,\Omega)$ and $S^U(\Omega,R)$ in the minimax framework, see Section 4.1.4 for details. By Lemma 11 combined with Corollary 1, every solution to Equation (88) provides a point with equal utility that fulfills the optimality conditions of Equation (89), and vice versa. Assume there is a solution to Equation (88) whose corresponding point is not a solution to Equation (89); then there exists a point with higher utility in Equation (89). As this point needs to fulfill the optimality conditions, there must be a point with equal utility in Equation (88), which contradicts the original point being a solution; therefore $S^D(R,\Omega) = S^U(\Omega,R)$.

5.4. Proof of the Minimax Duality

Unfortunately, the argumentation of the previous section cannot be extended to minimax problems, which might have multiple points fulfilling the optimality conditions with different utilities. In addition to the solutions, there might exist points which fulfill the optimality conditions with higher or lower utility than the optimal value. To prove C D = C U for general minimax problems, we first show that both problems share an upper bound, and second, we show that this upper bound is tight. Consider a reformulation of the downlink problem as
\[ C^D = \min_{R \succeq 0, Y} \Bigl\{ \max_{Q \succeq 0, Z} \bigl\{ W(R,Q) : Q \preceq C + Z, \; Z \in \mathcal{Z} \bigr\} : R \preceq B + Y, \; Y \in \mathcal{Y} \Bigr\} \tag{162} \]
\[ = \min_{R \succeq 0, Y} \bigl\{ F^D(R) : R \preceq B + Y, \; Y \in \mathcal{Y} \bigr\} \tag{163} \]
where
\[ F^D(R) = \max_{Q \succeq 0, Z \in \mathcal{Z}} \bigl\{ W(R,Q) : Q \preceq C + Z \bigr\} \tag{164} \]
We bound the inner, potentially non-convex, maximization from above by a Lagrangian dual problem:
\[ F^D(R) \le \inf_{\Omega \succeq 0} \; \sup_{Q \succeq 0, Z \in \mathcal{Z}} \; W(R,Q) - \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl(\Omega_k (Q_k - C_k - Z_k)\bigr) \tag{165} \]
where we assumed R such that the maximum of Equation (164) exists.
The infimum does not change if we remove those $\Omega$ for which the inner problem does not have a solution. As $Z$ is only constrained by $Z \in \mathcal{Z}$, the inner problem of Equation (165) does not have a solution unless $\Omega$ is in the orthogonal complement of $\mathcal{Z}$, which we can parameterize as $\Omega = \nu C + Z'$, $Z' \in \mathcal{Z}'$.
Thus, we can formulate the upper bound as
\[ F^D(R) \le \inf_{\nu, \, \Omega \succeq 0, \, Z'} \Bigl\{ \sup_{Q \succeq 0} \; W(R,Q) - \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) + \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k C_k) : \Omega = \nu C + Z', \; Z' \in \mathcal{Z}' \Bigr\} \tag{166} \]
Furthermore, by Corollary 1, we know there exists a solution to $C^D$ where optimization variables and Lagrangian multipliers are connected by polite waterfilling. Using such a solution for $\Omega$ to evaluate the inner problem of Equation (166), $\Omega$ and $\sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k C_k)$ are constants for the inner problem. As we assumed that $R$ is such that the mutual information of all links is finite for finite transmit symbols, the inner problem has a solution. This implies that the infimum and supremum of Equation (166) are attained.
For every minimizing dual variable, the following conditions can be obtained from the complementary slackness conditions:
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}\bigl(\Omega_k (Q_k - C_k - Z_k)\bigr) = 0 \tag{167} \]
\[ \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k C_k) \tag{168} \]
since $\sum_{k \in \mathcal{K}} \operatorname{tr}\bigl((\nu C_k + Z_k') Z_k\bigr) = 0$. The upper bound can be reformulated as
\[ F^D(R) \le \min_{\Omega \succeq 0, Z'} \Bigl\{ \max_{Q \succeq 0} \bigl\{ W(R,Q) : \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k Q_k) = \sum_{k \in \mathcal{K}} \operatorname{tr}(\Omega_k C_k) \bigr\} : \Omega = \nu C + Z', \; Z' \in \mathcal{Z}' \Bigr\} \tag{169} \]
The value of the inner problem is the same if Ω is scaled (leading to an inverse scaling of the optimization variables). This allows us to fix the scaling by picking a P with
P = k K tr ( C k C k )
and scale Ω such that
k K tr ( Ω k Q k ) = k K tr ( Ω k C k ) = P
Combining Equations (170), (171), and Ω = ν C + Z , Z Z , we obtain ν = 1 , and the upper bound is
(172) F D ( R ) min Ω 0 , Z max Q 0 W ( R , Q ) : k K tr ( Ω k Q k ) = P : Ω = C + Z , Z Z (173) = min Ω 0 , Z S D ( R , Ω ) : Ω = C + Z , Z Z
where S D ( R , Ω ) is defined by Equation (88). Thus, we can formulate an upper bound on C D by
(174) C D min R 0 , Y min Ω 0 , Z S D ( R , Ω ) : Ω = C + Z , Z Z : R = B + Y , Y Y (175) = min R 0 , Y , Ω 0 , Z S D ( R , Ω ) : Ω = C + Z , Z Z , R = B + Y , Y Y
Now consider the uplink problem. By using R = τ B + Y , Y ∈ Y as a Lagrangian multiplier to relax the constraint on the uplink transmit covariances and selecting
P = k K tr ( B k B k ) = k K tr ( R k Σ k ) = k K tr ( R k B k )
we can formulate an upper bound for the uplink as
C U min R 0 , Y , Ω 0 , Z S U ( Ω , R ) : Ω = C + Z , Z Z , R = B + Y , Y Y
where S U ( Ω , R ) is defined by Equation (89). We have P = ∑ k ∈ K tr ( B k B k ) = ∑ k ∈ K tr ( C k C k ) , and as S D ( R , Ω ) = S U ( Ω , R ) (see Section 5.3), the bounds in Equations (175) and (177) have the same value; the problems C D and C U thus share an upper bound.
We can prove C D = C U by showing that the upper bound is tight. The bound by the Lagrangian dual problem on the inner problem of the downlink can be reformulated as an uplink minimax problem:
(178) F D ( R ) = max Q 0 , Z Z W ( R , Q ) : Q C + Z (179) min Ω 0 , Z S D ( R , Ω ) : Ω = C + Z , Z Z (180) = min Ω 0 , Z S U ( Ω , R ) : Ω = C + Z , Z Z (181) = min Ω 0 , Z max Σ ¯ 0 W ( Ω , Σ ¯ ) : k K tr ( R k Σ ¯ k ) = P : Ω = C + Z , Z Z (182) = min Ω 0 , Z max Σ ¯ 0 , Y ¯ W ( Ω , Σ ¯ ) : Σ ¯ B ¯ + Y ¯ , Y ¯ Y ¯ : Ω C + Z , Z Z (183) = min Ω 0 , Z max Σ ¯ 0 , Y ¯ W ( Ω , Σ ¯ ) : Σ ¯ β ¯ B ¯ + Y ¯ , Y ¯ Y ¯ : Ω β ¯ C + Z , Z Z
where we used S D ( R , Ω ) = S U ( Ω , R ) and replaced the generalized sum-power constraint ∑ k ∈ K tr ( R k Σ k ) = P by a conic constraint with
B ¯ = ( P / ∑ k ∈ K tr ( R k R k ) ) R
and
Y ¯ = Y : tr ( B ¯ k Y k ) = 0
which implies Y ¯ = { 0 } , see Section 2.1.2 for an explanation. Equivalence of Equations (182) and (183) is given by Lemma 8.
By Lemma 11, every solution of Equation (183) fulfills the conditions Equations (45)–(62) and provides Lagrangian multipliers S ¯ , R ¯ , Q ¯ , Z ¯ , N ¯ , τ ¯ with
(186) Q ¯ = τ ¯ C + Z ¯ N ¯ , Z ¯ Z , Q ¯ 0 , N ¯ 0 (187) Q ¯ τ ¯ C + Z ¯ , Z ¯ Z , Q ¯ 0
and as Y ¯ = { 0 } , we have
(188) S ¯ k = R ¯ k + i K \ k H k i Q ¯ i H k i H k (189) R ¯ = σ ¯ B ¯
Lemma 10 holds true even if ∑ k ∈ K tr ( B ¯ k B ¯ k ) ≠ ∑ k ∈ K tr ( C k C k ) , and by Equation (131), we have
(190) σ ¯ = τ ¯ ∑ k ∈ K tr ( C k C k ) / ∑ k ∈ K tr ( B ¯ k B ¯ k ) (191) = τ ¯ P ∑ k ∈ K tr ( R k R k ) / P 2 (192) = τ ¯ ∑ k ∈ K tr ( R k R k ) / P
where we used ∑ k ∈ K tr ( B ¯ k B ¯ k ) = P 2 / ∑ k ∈ K tr ( R k R k ) , which can be derived from Equation (184). Combined with Equation (189), we obtain
R ¯ = σ ¯ ( P / ∑ k ∈ K tr ( R k R k ) ) R = τ ¯ R .
By Lemma 8(b), τ ¯ scales inversely with β ¯ without affecting the utility, so we are free to pick β ¯ such that τ ¯ = 1 . Given such a solution to Equation (183), the Lagrangian multipliers S ¯ , Q ¯ , Z ¯ fulfill Equations (187) and (188). By using τ ¯ = 1 and R ¯ = R , Equations (187) and (188) can be written as
(194) Q ¯ C + Z ¯ , Z ¯ Z , Q ¯ 0 (195) S ¯ k = R k + i K \ k H k i Q ¯ i H k i H k
Directly including the explicit parameterization of the interference plus noise covariance in Equation (178), we have
F D ( R ) = max Q 0 , S , Z Z U ( S , Q ) : S k = R k + i K \ k H k i Q i H k i H k K , Q C + Z
and we can see that Q ¯ , Z ¯ are a feasible solution to Equation (178) and have the same utility by the polite waterfilling connection, Lemma 9(e). Thus, Equation (165) holds with equality and the upper bound Equation (175) is tight for the downlink. Tightness of the upper bound for the uplink follows by symmetry and we have proven C D = C U .
By Lemma 10, we have σ ^ = τ ^ , and by introducing γ ^ = σ ^ = τ ^ , a solution to the unscaled uplink problem can be stated as
( Ω ^ = C + Z ^ , Σ ^ = B + Y ^ L ^ , Ψ ^ ) , ( R ^ = γ ^ B + Y ^ , Q ^ = γ ^ C + Z ^ N ^ , S ^ )
For any β and the conversion rule Equation (117), introduced with Lemma 8, a solution to C U ( β ) is given by
β ( Ω ^ = C + Z ^ , Σ ^ = B + Y ^ L ^ , Ψ ^ ) , 1 β ( R ^ = γ ^ B + Y ^ , Q ^ = γ ^ C + Z ^ N ^ , S ^ )
Inverting the roles of Lagrangian multipliers and optimization variables we obtain a solution to C D ( α ) for some α as
1 β ( R ^ = γ ^ B + Y ^ , Q ^ = γ ^ C + Z ^ N ^ , S ^ ) , β ( Ω ^ = C + Z ^ , Σ ^ = B + Y ^ L ^ , Ψ ^ )
By the conversion rules Equation (114) a solution to the unscaled downlink problem is given by
1 α β ( R ^ = σ ^ B + Y ^ , Q ^ = γ ^ C + Z ^ N ^ , S ^ ) , α β ( Ω ^ = C + Z ^ , Σ ^ = B + Y ^ L ^ , Ψ ^ )
Introducing γ ˇ = ν ˇ = μ ˇ and comparing the solution obtained from the uplink problem to the prototype of the downlink solution
( R ˇ = B + Y ˇ , Q ˇ = C + Z ˇ N ˇ , S ˇ ) , ( Ω ˇ = γ ˇ C + Z ˇ , Σ ˇ = γ ˇ B + Y ˇ L ˇ , Ψ ˇ , K ˇ )
we identify γ ^ α β = 1 and α β = γ ˇ . Introducing γ = γ ^ = γ ˇ = α β , we can see that the uplink and downlink solutions are connected by inverting the roles of optimization variables and Lagrangian multipliers together with a scaling by γ and 1 / γ , respectively, which proves the last part of the theorem.

5.5. Zero Duality Gap

The upper bound in Equation (165) by a Lagrangian dual problem was shown to be tight. Therefore, maximization problems in interference networks with linear conic constraints have a zero-duality gap for this special choice of the dual function. The zero-duality gap result does not hold in general. For example, it does not hold if the dual function is chosen as the optimization of the Lagrangian function given by Equation (108). Although we have L D = C D for every solution, the duality gap does not vanish, as
sup S k w k log I + S k + H k k Q k H k k H + tr ( Σ k S k ) = ∞
Here, the linear penalty term is not powerful enough to prevent the unbounded growth of the utility as S k tends to zero.
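This divergence can be reproduced numerically: as the noise covariance shrinks, the log-det term grows without bound while the linear penalty vanishes. A minimal sketch with a hypothetical single-link instance (the channel H, covariance Q, and multiplier Σ below are arbitrary stand-ins, not quantities from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
H = rng.standard_normal((n, n))        # hypothetical channel H_kk
Q = np.eye(n)                          # hypothetical transmit covariance Q_k
Sigma = np.eye(n)                      # hypothetical multiplier Sigma_k
w = 1.0

def lagrangian(t):
    """Utility plus linear penalty for the noise covariance S_k = t * I."""
    S = t * np.eye(n)
    util = w * np.linalg.slogdet(np.eye(n) + np.linalg.inv(S) @ H @ Q @ H.T)[1]
    return util + np.trace(Sigma @ S)

# the value grows without bound as S_k -> 0
vals = [lagrangian(t) for t in (1.0, 1e-2, 1e-4, 1e-6)]
```

Each halving of the noise scale adds a roughly constant amount to the log-det term, so no linear penalty can compensate.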
A zero-duality gap can also be observed in the optimization of linear precoders for power minimization under SINR constraints [14,22], which allows the apparently non-convex problem to be transformed into a convex problem that can be solved efficiently. For our scenario, the dual problem is always convex in the dual variables, but evaluating the dual function S D ( R , Ω ) still requires solving the non-convex problem in Equation (88). So, the zero-duality gap is a property worth mentioning, but it is of no use for the design of efficient algorithms.

5.6. Uplink-Downlink Transformation Rules

Several network dualities include a transformation rule that maps the solution of one network to the dual network. In our approach, the complete characterization of the uplink-downlink duality by a shared system of optimality conditions reveals such a transformation as a connection between optimization variables and Lagrangian multipliers. For example, the transformation of the transmit covariances from one problem to the other is given by Equations (44) and (62). If the noise covariance of a user k has full rank, the only valid choice for Q k Δ , respectively Σ k Δ , is the zero matrix, and the transformations are identical to those in [12]. Otherwise, Q k Δ and Σ k Δ describe the additional degrees of freedom: directions where the conic constraint is not binding but the choice has no influence on the utility. To describe the full dependencies, we start with the slack variables L , N , which are the Lagrangian multipliers for the positive semi-definiteness constraints on the noise covariances. Thus, L and N indicate if and how the interference plus noise covariance can be rank deficient, which sets the degrees of freedom for Q k Δ and Σ k Δ and therefore the degrees of freedom in the linear conic constraints on Q and Σ, implying the set of possible slack variables.

5.7. Possible Extensions

Another direction for future work is to investigate whether other types of constraints can be included in the duality. This is straightforward for the noise covariance. Let R be an arbitrary constraint set for the noise covariance R . We restate the downlink problem as
(203) inf R 0 , R R sup Q 0 , Z W ( R , Q ) : Q C + Z , Z Z (204) = inf R 0 inf R 0 sup Q 0 , Z W ( R , Q ) : Q C + Z , Z Z : R = R (205) = inf R 0 , R R inf R 0 , Y sup Q 0 , Z W ( R , Q ) : Q C + Z , Z Z : R = B + Y , Y Y
where B = R and Y = { 0 } . Now, we can apply the minimax duality to the inner problem and obtain
(206) inf R 0 , R R inf Ω 0 , Z sup Σ 0 , Y W ( Ω , Σ ) : k K tr ( Σ k R ) P : Ω C + Z , Z Z (207) = inf R 0 , R R inf Ω 0 , Z sup Σ 0 , Y W ( Ω , Σ ) : k K tr ( Σ k R ) P : Ω C + Z , Z Z
where the formulation by the network sum-power constraint is explained in Section 2.1.2. Concerning the generalization of constraints on the transmit covariances, we suggest first extending the duality to multiple conic constraints for each link. Furthermore, an especially interesting direction is to include more general (convex) constraints on the transmit covariance in our duality. In [16], convex constraints are handled by replacing them with a single linear constraint, where the parameter needs to be found algorithmically. If multiple linear constraints can be combined into a single linear conic constraint, this is inherently covered by our model. The general case is an open issue and suggested as future work.

6. Conclusions and Future Work

We presented a minimax duality for MIMO interference networks, where noise and transmit covariances are optimized subject to linear conic constraints. We illustrated how these constraints are general enough to model not only typically used constraints such as the network sum-power constraint, the per-link sum-power constraint, and shaping constraints, but also constraints on cooperation within the network or the propriety of signals. The downlink and uplink problems are interconnected by the flipped channel matrices, the common constraint matrices, and the orthogonal subspaces. We showed that every solution has a polite water-filling structure and that the downlink and uplink problems share a system of optimality conditions, where the solutions are connected by inverting the roles of the optimization variables and Lagrangian multipliers. We prove our minimax duality by using an upper bound from Lagrangian duality, which can be shown to be tight, thus revealing a zero-duality gap for maximization problems in MIMO interference networks under linear conic constraints.
Especially interesting is the BC-MAC duality, which can also be modeled within our framework. We were able to extend the existing BC-MAC dualities in two directions. First, we allow for more general constraints (for example, shaping constraints) on the downlink transmit covariance and investigate the conditions under which the uplink problem has a convex-concave game structure. Second, we extended the worst-case noise capacity from the sum-rate to the complete capacity region of the MIMO broadcast channel with non-linear interference pre-subtraction. New applications that are enabled by our duality are discussed in Section 4.2. These are the design of interference-robust multi-user MIMO transmission strategies for interfering broadcast channels and information theoretic proofs concerning cooperation in networks and the optimality of proper signaling.
Concerning the generalization of constraints on the transmit covariances, we suggest first extending the duality to multiple conic constraints for each link. An especially interesting direction is to include more general (convex) constraints on the transmit covariance in our duality. In [16], convex constraints are handled by replacing them with a single linear constraint, where the parameter needs to be found algorithmically. Finally, we believe that our result can serve as the theoretical basis for further research in the fields of robust transmission strategies, cognitive radio networks, and secrecy capacity, where conic constraints and minimax problems appear naturally in the proofs and numerical computations.

Author Contributions

Andreas Dotzler discovered the minimax duality; Maximilian Riemensberger assisted with the proof; Wolfgang Utschick contributed his expertise in interference networks and network dualities.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A Mutual Information For Non-Invertible Noise Covariances

Lemma 12. 
Consider a MIMO link with channel matrix H and noise covariance S . The transmit covariance Q is constrained by Q ∈ Q , but can be freely chosen in all directions of the channel. Let T , T ⊥ provide a full unitary basis for the fundamental subspaces of S . If T ⊥ H H = 0 , then the mutual information is finite and given by
max Q Q log I + S + H Q H H
Proof. 
The mutual information is infinite unless we have T ⊥ H H = 0 , as otherwise there exists a Q for which we obtain a noise-free communication channel by using T ⊥ H as a receive filter. For finite mutual information, we can compute the mutual information for any Q by taking a limit:
(A2) lim ϵ 0 log I + ( S + ϵ T T H ) 1 H Q H H (A3) = lim ϵ 0 log I + T T ( T H S T ) 1 0 0 ϵ 1 I T T H H Q H H (A4) = lim ϵ 0 log I + H H T T ( T H S T ) 1 0 0 ϵ 1 I T T H H Q (A5) = lim ϵ 0 log I + H H T 0 ( T H S T ) 1 0 0 ϵ 1 I T H H 0 Q (A6) = log I + H H T ( T H S T ) 1 T H H Q (A7) = log I + S + H Q H H
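The limit can be checked numerically: whenever T ⊥ H H = 0 , regularizing the noise covariance along T ⊥ and inverting gives the same value as the pseudo-inverse expression. A sketch with a hypothetical rank-deficient instance (all matrices below are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 4, 2
# unitary basis [T, T_perp] for the fundamental subspaces of S
Tfull, _ = np.linalg.qr(rng.standard_normal((n, n)))
T, T_perp = Tfull[:, :r], Tfull[:, r:]
S = T @ np.diag([2.0, 0.5]) @ T.T      # rank-deficient noise covariance
H = T @ rng.standard_normal((r, n))    # ensures T_perp^H H = 0
A = rng.standard_normal((n, n))
Q = A @ A.T                            # hypothetical transmit covariance

def mi(eps):
    """Mutual information with the regularized noise S + eps * T_perp T_perp^H."""
    Se = S + eps * T_perp @ T_perp.T
    return np.linalg.slogdet(np.eye(n) + np.linalg.inv(Se) @ H @ Q @ H.T)[1]

limit = np.linalg.slogdet(np.eye(n) + np.linalg.pinv(S) @ H @ Q @ H.T)[1]
assert abs(mi(1e-6) - limit) < 1e-6
```

Because H Q H H has no component along T ⊥ , the ϵ −1 block of the regularized inverse never contributes, which is exactly the step from (A4) to (A6).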

Appendix B Proof of Lemma 1

For every Λ ˜ P ( X ) , we have Λ ˜ 0 , which implies every element of the tuple is Hermitian. If A + X 0 , then tr ( Λ ˜ k ( A k + X k ) ) 0 k K . Further, by the orthogonal subspaces we have
k K tr ( Λ ˜ k ( A k + X k ) ) = 0 X X
and therefore
tr ( Λ ˜ k ( A k + X k ) ) = 0 k K , X X , A + X 0
For Λ ˜ k ∈ C N k × N k , let Λ ˜ k = ∑ i = 1 N k γ k i λ k i λ k i H with γ k i ≥ 0 ∀ k , i . Then, for all k ∈ K , X ∈ X with A + X ⪰ 0 , it follows that
(B3) tr ( Λ ˜ k ( A k + X k ) ) = 0 (B4) i = 1 N k tr ( γ k i λ k i H ( A k + X k ) λ k i γ k i ) = 0
(B5) i = 1 N k γ k i λ k i H ( A k + X k ) λ k i γ k i = 0 (B6) Λ k H ( A k + X k ) Λ k = 0
and it follows that for Λ ˜ k = Λ k Λ k H k K we have Λ ˜ N ( A , X ) .

Appendix C Proof of Lemma 2

W 0 fulfills the linear conic constraint if
(C1) X X : W A + X (C2) X X : tr ( E k ( W k A k X k ) ) 0 , E 0 , k K (C3) X X : k K tr ( E k ( W k A k X k ) ) 0 , E 0 (C4) X X : k K tr ( E k ( W k A k X k ) ) 0 , E E (C5) k K tr ( E k ( W k A k ) ) 0 , E E
Equivalence of Equations (C1) and (C2) is clear by Equations (B3)–(B5). By summing over all links, Equation (C3) follows from Equation (C2). For the inverse direction, as Equation (C3) holds for any E , we can select an arbitrary E k , set all other elements of the tuple to the all-zero matrix, and Equation (C2) follows. As Equation (C3) holds for all E , it obviously holds for those with E ∈ E ; this is Equation (C4). For the inverse direction, if E ∉ E , then ∑ k ∈ K tr ( E k X k ) ≠ 0 , and as X can be scaled arbitrarily, there exists an X that fulfills the inequality. Equivalence of Equations (C4) and (C5) holds as ∑ k ∈ K tr ( E k X k ) = 0 for any X ∈ X , E ∈ E by definition. If E is polyhedral, every point can be expressed by a positive combination of the finitely many extreme points, and Equation (C5) is equivalent to Equation (20).
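The first equivalence, between Equations (C1) and (C2), can be illustrated numerically for a single block: a W with W ⪯ A passes every psd trace test, while for W ⋠ A a violating E = v v H is built from an eigenvector of W − A with positive eigenvalue. All matrices below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = np.eye(n)                           # hypothetical constraint matrix

def violating_E(W, trials=200):
    """Search for a psd E with tr(E (W - A)) > 0; None means no violation found."""
    for _ in range(trials):
        B = rng.standard_normal((n, n))
        E = B @ B.T
        if np.trace(E @ (W - A)) > 1e-9:
            return E
    return None

W_ok = 0.5 * np.eye(n)                  # satisfies W_ok <= A in the psd order
assert violating_E(W_ok) is None

W_bad = np.diag([2.0, 0.1, 0.1])        # violates the psd order
lam, V = np.linalg.eigh(W_bad - A)
v = V[:, np.argmax(lam)]                # eigenvector with positive eigenvalue
assert np.trace(np.outer(v, v) @ (W_bad - A)) > 0
```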

Appendix D Proof of Lemma 3

Consider the downlink. We scale the constraint matrices and channels by factors θ and η such that
H ¯ k i = √ ( η / θ ) H k i , B ¯ = ( 1 / θ ) B , C ¯ = ( 1 / η ) C
and substituting
( R ¯ , S ¯ , Q ¯ , Z ¯ , Y ¯ ) = 1 θ R , 1 θ S , 1 η Q , 1 η Z , 1 θ Y
we obtain the following equivalences:
inf R ¯ 0 Y ¯ Y sup S ¯ Q ¯ 0 Z ¯ Z k K w k log I + S ¯ + H ¯ k k Q ¯ k H ¯ k k H : S ¯ k = R ¯ k + i K \ k H ¯ k i Q ¯ i H ¯ k i H k , Q ¯ C ¯ + Z ¯ : R ¯ = B ¯ + Y ¯
= inf R ¯ 0 Y ¯ Y sup S ¯ Q ¯ 0 Z ¯ Z k K w k log I + S ¯ + H ¯ k k Q ¯ k H ¯ k k H : S ¯ k = R ¯ k + i K \ k H ¯ k i Q ¯ i H ¯ k i H k , η Q ¯ η C ¯ + η Z ¯ : θ R ¯ = θ B ¯ + θ Y ¯
= inf R 0 Y Y sup S Q 0 Z Z k K w k log I + 1 θ S + H ¯ k k 1 η Q k H ¯ k k H : 1 θ S k = 1 θ R k + i K \ k H ¯ k i 1 η Q i H ¯ k i H k , Q η C ¯ + Z : R = θ B ¯ + Y
= inf R 0 Y Y sup S Q 0 Z Z k K w k log I + S + θ H ¯ k k 1 η Q k 1 η H ¯ k k H θ : S k = R k + i K \ k θ H ¯ k i 1 η Q i 1 η H ¯ k i H θ k , Q η C ¯ + Z : R = θ B ¯ + Y
= inf R 0 Y Y sup S Q 0 Z Z k K w k log I + S + H k k Q k H k k H : S k = R k + i K \ k H k i Q i H k i H k , Q C + Z : R = B + Y .
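The per-link invariance behind this chain of equalities can be verified numerically: scaling the noise by 1 / θ , the transmit covariance by 1 / η , and the channel by √ ( η / θ ) leaves the utility unchanged. A minimal sketch with a hypothetical single link (all matrices are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
H = rng.standard_normal((n, n))
A = rng.standard_normal((n, n)); Q = A @ A.T              # hypothetical transmit covariance
B = rng.standard_normal((n, n)); S = B @ B.T + np.eye(n)  # hypothetical noise covariance
theta, eta = 0.7, 2.5                                     # arbitrary positive scaling factors

Hbar = np.sqrt(eta / theta) * H
Sbar, Qbar = S / theta, Q / eta

orig = np.linalg.slogdet(np.eye(n) + np.linalg.inv(S) @ H @ Q @ H.T)[1]
scaled = np.linalg.slogdet(np.eye(n) + np.linalg.inv(Sbar) @ Hbar @ Qbar @ Hbar.T)[1]
assert abs(orig - scaled) < 1e-9
```

The interference terms scale consistently, since every product H ¯ Q ¯ H ¯ H picks up exactly the factor 1 / θ that the noise does.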
Let the functions W ( R , Q ; s ) and W ( Ω , Σ ; s ) follow the definitions of W ( R , Q ) and W ( Ω , Σ ) in Equations (9) and (10), respectively, but with all channels scaled by s . Let B n = ( 1 / b ) B and C n = ( 1 / c ) C , where b = ∑ k ∈ K tr ( B k B k ) and c = ∑ k ∈ K tr ( C k C k ) ; then
(D8) inf R 0 , Y sup Q 0 , Z W ( R , Q ) : Q C + Z , Z Z : R B + Y , Y Y (D9) = inf R 0 , Y sup Q 0 , Z W R , Q ; c b : Q 1 c C + Z , Z Z : R 1 b B + Y , Y Y (D10) = inf R 0 , Y sup Q 0 , Z W R , Q ; c b : Q C n + Z , Z Z : R B n + Y , Y Y
For the uplink, we have
(D11) inf Ω 0 , Z sup Σ 0 , Y W ( Σ , Ω ) : Σ B + Y , Y Y : Ω C + Z , Z Z (D12) = inf Ω 0 , Z sup Σ 0 , Y W Σ , Ω ; b c : Σ 1 b B + Y , Y Y : Ω 1 c C + Z , Z Z (D13) = inf Ω 0 , Z sup Σ 0 , Y W Σ , Ω ; b c : Σ B n + Y , Y Y : Ω C n + Z , Z Z
Combining the reformulations, we can prove the asymmetric minimax duality:
(D14) inf R 0 , Y sup Q 0 , Z W ( R , Q ) : Q C + Z , Z Z : R B + Y , Y Y (D15) = inf R 0 , Y sup Q 0 , Z W R , Q ; c b : Q C n + Z , Z Z : R B n + Y , Y Y (D16) = inf Ω 0 , Z sup Σ 0 , Y W Σ , Ω ; c b : Σ B n + Y , Y Y : Ω C n + Z , Z Z (D17) = inf Ω 0 , Z sup Σ 0 , Y W ( Σ , Ω ) : Σ c B n + Y , Y Y : Ω b C n + Z , Z Z (D18) = inf Ω 0 , Z sup Σ 0 , Y W ( Σ , Ω ) : Σ c b B + Y , Y Y : Ω b c C + Z , Z Z (D19) = inf Ω 0 , Z sup Σ 0 , Y W ( Σ , Ω ) : Σ β c b B + Y , Y Y : Ω β b c C + Z , Z Z
The step from Equation (D14) to Equation (D15) is by the equivalence of Equations (D8) and (D10). With ∑ k ∈ K tr ( B n k B n k ) = ∑ k ∈ K tr ( C n k C n k ) = 1 , we can apply the minimax duality of Theorem 1 to show the equivalence of Equations (D15) and (D16). We obtain Equation (D17) by inverting the step from Equation (D11) to Equation (D12). For Equation (D18), we applied the definitions of B n and C n , and the equivalence of Equations (D18) and (D19) is by Lemma 8. Let p = β c b and n = β b c . By choosing β, we can select any p and n as long as p n = β c b 1 β c b = c b .

Appendix E Proof of Lemma 4

The optimality conditions of Equation (76) are
(E1) D 1 = A (E2) λ I + F = H A H H + M (E3) F = 0 * * 0 (E4) Σ = Σ 1 0 0 Σ U (E5) M 0 (E6) tr ( M Σ ) = 0 (E7) tr ( Σ ) = P (E8) Σ 0
First, we show that Equations (77) and (78) provide valid Lagrangian multipliers by showing that they fulfill Equations (E2), (E3), (E5), and (E6). It is straightforward to verify conditions (E2) and (E3). Condition (E2) combined with Equation (E3) implies
λ I H u A H u H u
as otherwise there would be no M such that M ⪰ 0 . Comparing Equation (E9) to the definition of M implies M ⪰ 0 , which is Equation (E5). From Equations (E3) and (E4) follows tr ( F Σ ) = 0 , and from Equation (E2) we obtain
(E10) λ tr ( Σ ) + tr ( F Σ ) = tr ( H A H H Σ ) + tr ( M Σ ) (E11) λ P = tr ( H A H H Σ )
where we used Equations (E6) and (E7). Finally, we combine Equations (78) and (E6). As the off-diagonal blocks do not contribute to the trace, we have
tr ( M Σ ) = λ tr ( Σ ) tr ( H A H H Σ )
and by Equations (E7) and (E11) we have tr ( M Σ ) = 0 , which is Equation (E6). Second, we show that Equation (80) fulfills the optimality conditions of Equation (70). Introducing Σ as a Lagrangian multiplier for the structure of R and R ⪰ 0 , which implies that Σ is block-diagonal and Σ ⪰ 0 , and μ for tr ( Q ) ≤ P and K for Q ⪰ 0 , these optimality conditions are
(E13) Σ = R + H Q H H + + R + + Σ Δ (E14) μ I = H H R + H Q H H + H + K (E15) tr ( K Q ) = 0 , K 0 , Q 0
We now introduce Ψ = I and
(E16) Q = ( Φ + H H Σ H ) + + Ψ (E17) = ( I + H H Σ H ) 1 + I (E18) S = H ( Ψ + H H Σ H ) + H H + M (E19) = H A H H + M
By the proof of Lemma 9 in Appendix I, we know that Equations (E16), (E18), (E5), (E6), and (E8) imply that there exist Σ Δ , K that fulfill the following set of equations:
(E20) Σ = S + H Q H H + + S + + Σ Δ (E21) I = H H S + H Q H H + H + K (E22) tr ( K Q ) = 0 , K 0 , Q 0
Comparing Equations (E20)–(E22) to Equations (E13)–(E15), we can see that μ = λ , R = 1 λ S , Q = 1 λ Q , Σ = λ Σ , K = λ K fulfill the optimality conditions for the downlink problem. As R = 1 λ S matches Equation (80) the proof is complete.

Appendix F BC-MAC Duality – Conditions for a Convex Reformulation of the MAC Problem

Let C = ( C , , C ) k K S N × N and Z k K S N × N with C S N × N and
Z = Z : Z k = Z k K , Z Z
where Z is a subspace of S N × N that is orthogonal to C = { λ C : λ R } . Then the uplink noise constraint is
Ω k 1 K C + Z , Z Z k K
and there exists an uplink solution where all noise covariances are equal, see Remark 2 in Section 3. For this special choice, the subspace Z is given by
Z = Z : k K tr ( C Z k ) = 0 , k K tr ( Z Z k ) = 0 Z Z
In the following, we show that Equation (94) can be expressed by this model. The constraint on the downlink transmit covariance is
Q k 1 K C + Z k , Z Z k K
which implies
(F5) k K Q k 1 K k K C + k K Z k (F6) Q C + Z
where Q = ∑ k ∈ K Q k and Z = ∑ k ∈ K Z k . In the following, we show that not only does Equation (F4) imply Equation (F6), but that both are equivalent. We do so by showing that, given Q = ( Q 1 , … , Q K ) , Q = ∑ k ∈ K Q k , and Z that fulfill Equation (F6), we can construct a Z = ( Z 1 , … , Z K ) with ∑ k ∈ K Z k = Z such that Equation (F4) holds. For the first K − 1 users, we select Z k = Q k − ( 1 / K ) C , so that Equation (F4) is fulfilled for k = 1 , … , K − 1 , and we have
k K \ { K } Q k = ( K 1 ) 1 K C + k K \ { K } Z k
By subtracting Equation (F7) from Equation (F6), we obtain
(F8) Q k K \ K Q k 1 K C + Z k K \ K Z k (F9) Q K 1 K C + Z K
By Equation (F3) we have
k K tr ( C Z k ) = 0 tr ( C Z ) = 0
and
k K tr ( Z Z k ) = 0 tr ( Z Z ) = 0
and therefore Z Z , where Z is the orthogonal complement to C Z . Thus, if the constraint on the downlink transmit covariance is a constraint that solely acts on the sum of the transmit covariances, i.e.,
k K Q k C + Z , Z Z
then the uplink has a solution where all noise covariances are equal.
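The splitting construction used in this proof can be sketched numerically (hypothetical covariances; membership of the slacks in the subspaces of Equation (F3) is not checked here):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 3, 4
C = np.eye(n)                            # hypothetical constraint matrix
Qs = []                                  # hypothetical per-user transmit covariances
for _ in range(K):
    A = rng.standard_normal((n, n))
    Qs.append(0.1 * A @ A.T)
Qsum = sum(Qs)
Z = Qsum - C + 0.5 * np.eye(n)           # slack chosen so that C + Z - Qsum = 0.5 I >= 0

Zs = [Qs[k] - C / K for k in range(K - 1)]   # equality for the first K-1 users
Zs.append(Z - sum(Zs))                       # the remainder goes to user K

assert np.allclose(sum(Zs), Z)
for k in range(K - 1):
    assert np.allclose(Qs[k], C / K + Zs[k])
resid = C / K + Zs[-1] - Qs[-1]              # Q_K <= C/K + Z_K in the psd order
assert np.linalg.eigvalsh(resid).min() > -1e-9
```

The residual for the last user equals C + Z − ∑ k Q k by construction, so the per-user constraints hold whenever the sum constraint does.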

Appendix G Details of the Worst Case Noise Approximation in the Broadcast

The optimization under a worst-case noise approximation is given by
sup Q 0 inf R 0 W ( R , Q ) : tr ( R k ) σ k 2 k : k K tr ( Q k ) P
According to Section 2.1.2, the conic versions of the downlink constraint sets are
(G2) B = ( σ 1 2 / M 1 I , … , σ K 2 / M K I ) (G3) Y = { Y : tr ( Y k ) = 0 ∀ k } (G4) Y ⊥ = { Y = ( λ 1 I , … , λ K I ) , ∑ k ∈ K λ k σ k 2 = 0 } (G5) C = P / ( K N ) I (G6) Z = { Z : ∑ k ∈ K tr ( Z k ) = 0 } (G7) Z ⊥ = { 0 }
Assuming the optimal encoding order and considering that the constraint on the downlink transmit covariance is a constraint on the sum covariance (Lemma 7) allows us to exchange infimum and supremum. The asymmetric minimax duality of Lemma 3 translates the downlink problem into the following uplink constraints:
(G8) Σ ⪯ p ( σ 1 2 / M 1 I , … , σ K 2 / M K I ) + ( λ 1 I , … , λ K I ) , ∑ k ∈ K λ k σ k 2 = 0 (G9) Ω ⪯ n P / ( K N ) I
By substituting
(G10) P k / σ k 2 = p σ k 2 / M k + λ k (G11) λ k = P k / σ k 2 − p σ k 2 / M k
We have
(G12) ∑ k ∈ K λ k σ k 2 = 0 (G13) ∑ k ∈ K ( P k / σ k 2 − p σ k 2 / M k ) σ k 2 = 0 (G14) ∑ k ∈ K P k = p ∑ k ∈ K σ k 4 / M k
With b = ∑ k ∈ K tr ( B k B k ) = ∑ k ∈ K σ k 4 / M k , c = ∑ k ∈ K tr ( C k C k ) = P 2 / ( K N ) , and c b = p n , selecting p = P / ( ∑ k ∈ K σ k 4 / M k ) implies n = K N / P . The uplink problem becomes
inf Ω 0 sup Σ 0 P 1 , , P K W ( Σ , Ω ) : Σ k P k σ k 2 I k , k K P k = P : Ω I
= sup Σ 0 P 1 , , P K W ( Σ , I ) : Σ k P k σ k 2 I k , k K P k = P
Assuming the users are sorted according to the encoding order, we introduce α k = w k − w k + 1 , k = 1 , … , K − 1 and α K = w K , and the uplink problem can be rewritten as
sup Σ 0 P 1 , , P K k K α k log I + j k H j j H Σ j H j j : Σ k P k σ k 2 I k , k K P k = P
= sup P 1 , , P K k K α k log I + j k P j σ j 2 H j j H H j j : k K P k = P
where we have used that the reformulated utility is non-decreasing in the transmit covariances, and therefore Σ k = ( P k / σ k 2 ) I .
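A small worked instance of the resulting power allocation for K = 2 scalar links may help make the final problem concrete (gains, weights, and noise powers below are hypothetical; since P 2 = P − P 1 , a one-dimensional grid search suffices):

```python
import numpy as np

P = 10.0                               # total transmit power
sig2 = np.array([1.0, 2.0])            # hypothetical noise powers sigma_k^2
g = np.array([0.8, 1.5])               # hypothetical channel gains |h_kk|^2
w = np.array([2.0, 1.0])               # weights, sorted to the encoding order
alpha = np.array([w[0] - w[1], w[1]])  # alpha_k = w_k - w_{k+1}, alpha_K = w_K

def utility(P1):
    """Weighted utility with partial sums over j <= k, scalar version."""
    p = np.array([P1, P - P1]) / sig2
    s1 = 1.0 + p[0] * g[0]
    s2 = s1 + p[1] * g[1]
    return alpha[0] * np.log(s1) + alpha[1] * np.log(s2)

grid = np.linspace(0.0, P, 2001)
best = grid[np.argmax([utility(x) for x in grid])]
```

For scalar links the objective is concave in P 1 , so the grid maximum is a reliable sketch of the solution.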

Appendix H Subdifferential of the Mutual Information at Points where the Noise Covariances are not Invertible

Lemma 13. 
Consider a MIMO channel H with given noise covariance S and transmit covariance Q , such that the mutual information is finite and given by
f ( S ) = log I + S + H Q H H
Let T , T provide a full unitary basis for the fundamental subspaces of S . The subdifferential of f at S is given by
f ( S ) S = Σ : Σ = S + H Q H H + + S + + Σ Δ , Σ Δ E Δ
where
E Δ = conv J Σ Δ : Σ Δ = L T J T H T J H T H L + T J H T H L T J T H , L = S + H Q H H + + S +
and T H Σ Δ T = 0 Σ Δ E Δ .
Proof. 
According to [38] (Theorem 2.5.1), the subdifferential of a function f at x is the convex hull of the limit points of sequences ∇ f ( x i ) , where the limit x i → x exists and x i avoids points where f is not differentiable. We introduce
S ˜ = T T T H S T 0 0 0 T T H + ϵ T T A B B H C T T H
where C 0 and
Δ = A B C 1 B H 0
We use S ˜ to parameterize S i by the choice of ϵ and S i S for ϵ 0 . As S ˜ 0 for every ϵ > 0 , we have
(H6) f ( S ˜ ) = log I + S ˜ 1 H Q H H (H7) = log S ˜ + H Q H H log S ˜
and
f ( S ˜ ) S ˜ = ( S ˜ + H Q H H ) 1 S ˜ 1
Using T H H = 0 and substituting
X = T H S T + T H H Q H H T
We obtain
(H10) [ T T ] H f ( S ˜ ) S ˜ [ T T ] (H11) = X + ϵ A ϵ B ϵ B H ϵ C 1 T H S T + ϵ A ϵ B ϵ B H ϵ C 1 (H12) = ( X + ϵ Δ ) 1 ( X + ϵ Δ ) 1 B C 1 C 1 B H ( X + ϵ Δ ) 1 1 ϵ C 1 + C 1 B H ( X + ϵ Δ ) 1 B C 1 (H13) ( T H S T + ϵ Δ ) 1 ( T H S T + ϵ Δ ) 1 B C 1 C 1 B H ( T H S T + ϵ Δ ) 1 1 ϵ C 1 C 1 B H ( T H S T + ϵ Δ ) 1 B C 1 (H14) = ( X + ϵ Δ ) 1 ( T H S T + ϵ Δ ) 1 ( X + ϵ Δ ) 1 ( T H S T ϵ Δ ) 1 B C 1 C 1 B H ( X + ϵ Δ ) 1 ( T H S T + ϵ Δ ) 1 + C 1 B H ( X + ϵ Δ ) 1 ( T H S T + ϵ Δ ) 1 B C 1
and
(H15) lim ϵ 0 f ( S ˜ ) S ˜ = [ T T ] X 1 ( T H S T ) 1 X 1 ( T H S T ) 1 B C 1 C 1 B H X 1 ( T H S T ) 1 C 1 B H X 1 ( T H S T ) 1 B C 1 [ T T ] H = T X 1 ( T H S T ) 1 T H T X 1 ( T H S T ) 1 B C 1 T H T C 1 B H X 1 ( T H S T ) 1 T H + (H16) + T C 1 B H X 1 ( T H S T ) 1 B C 1 T H = T ( T H S T + T H H Q H H T ) 1 ( T H S T ) 1 T H T ( T H S T + T H H Q H H T ) 1 ( T H S T ) 1 T H T B C 1 T H T C 1 B H T H T ( T H S T + T H H Q H H T ) 1 ( T H S T ) 1 T H + (H17) + T C 1 B H T H T ( T H S T + T H H Q H H T ) 1 ( T H S T ) 1 T H T B C 1 T H = S + H Q H H + S + S + H Q H H + S + T B C 1 T H T C 1 B H T H S + H Q H H + S + + (H18) + T C 1 B H T H S + H Q H H + S + T B C 1 T H
We observe that the limit is independent of A , and when considering all possible choices for limit points, we can freely select B and C ≻ 0 , as we can always select A such that S ˜ ⪰ 0 . If C is not strictly positive definite, then S ˜ is not strictly positive definite and f is not differentiable; therefore, the limit points of such sequences are not relevant. We conclude that Equation (H4) allows us to parameterize all relevant limit points.
We introduce the following substitutions
(H19) Σ Δ = L L T J T H T J H T H L + T J H T H L T J T H (H20) L = S + H Q H H + S + (H21) J = B C 1
As C ≻ 0 and B can take arbitrary values, J can take any value, and we can parameterize the subdifferential, given by the convex hull of the limit points, as
(H22) f ( S ) S = conv J L L T J T H T J H T H L + T J H T H L T J T H , L = S + H Q H H + S + (H23) = Σ : Σ = S + H Q H H + + S + Σ Δ , Σ Δ E Δ
where
E Δ = conv J Σ Δ : Σ Δ = L T J T H T J H T H L + T J H T H L T J T H , L = S + H Q H H + S +
With T H T = 0 we have T H Σ Δ T = 0 Σ Δ E Δ . ☐
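For invertible S the set E Δ collapses and the subdifferential reduces to the ordinary gradient ( S + H Q H H ) −1 − S −1 , which can be checked against a finite difference. A sketch on a hypothetical instance (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
H = rng.standard_normal((n, n))
A = rng.standard_normal((n, n)); Q = A @ A.T              # hypothetical transmit covariance
B = rng.standard_normal((n, n)); S = B @ B.T + np.eye(n)  # invertible noise covariance

def f(S):
    """f(S) = log det(S + H Q H^H) - log det(S)."""
    return np.linalg.slogdet(S + H @ Q @ H.T)[1] - np.linalg.slogdet(S)[1]

G = np.linalg.inv(S + H @ Q @ H.T) - np.linalg.inv(S)     # gradient of f at S

D = rng.standard_normal((n, n)); D = (D + D.T) / 2        # symmetric perturbation direction
h = 1e-6
fd = (f(S + h * D) - f(S - h * D)) / (2 * h)              # central finite difference
assert abs(fd - np.trace(G @ D)) < 1e-5
```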

Appendix I Details of the Proof for the Polite Waterfilling Lemma 9

In the following, we show that every solution to Equations (121)–(123) is a solution to Equations (125)–(127) and vice versa, and that these solutions fulfill the equalities of Lemma 9. There are several implications of the conditions of the lemma that we use in the following:
(I1) [ T T ]  unitary and full rank  T H T = I (I2) [ Φ Φ ]  unitary and full rank  Φ H Φ = I (I3) T H S T 0 , T H S T = 0 S + = T ( T H S T ) 1 T H  and  S = T T H S T T H (I4) Φ H Ψ Φ 0 , Φ H Ψ Φ = 0 Ψ + = Φ ( Φ H Ψ Φ ) 1 Φ  and  Ψ = Φ Φ H Ψ Φ Φ H (I5) H Φ = 0 H = H Φ Φ H (I6) T H = 0 H = T T H H (I7) H Φ = 0 , Φ H Q Δ Φ = 0 H Q Δ H H = 0 (I8) T H = 0 , T H Σ Δ T = 0 H H Σ Δ H = 0
With H H Σ Δ H = 0 and Equation (121), we have
H H Σ H = w H H ( S + H Q H H ) + H + w H H S + H
and by adding it to Equation (122), we obtain
Ψ + H H Σ H = w H H S + H + K
Furthermore, tr ( K Q ) = 0 , K 0 , Q 0 implies K Q = 0 . Multiplying Equation (122) with Φ H from the left and Φ from the right and using H Φ = 0 and Φ H Ψ Φ = 0 yields Φ H K Φ = 0 , which implies K = Φ Φ H K Φ Φ H . Combined with K Q = 0 and Φ H Φ = I we have Φ H K Φ Φ H Q = 0 .
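The implication tr ( K Q ) = 0 , K ⪰ 0 , Q ⪰ 0 ⇒ K Q = 0 used above follows from tr ( K Q ) = ‖ K 1/2 Q 1/2 ‖ F 2 ; a small numerical illustration with hypothetical psd matrices of orthogonal range:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
# K and Q are psd with orthogonal ranges, so tr(KQ) = 0
K = U[:, :2] @ np.diag([1.0, 3.0]) @ U[:, :2].T
Q = U[:, 2:] @ np.diag([2.0, 0.5]) @ U[:, 2:].T

def psd_sqrt(M):
    """Symmetric psd square root via the eigendecomposition."""
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

# tr(KQ) equals the squared Frobenius norm of K^{1/2} Q^{1/2},
# so a zero trace forces K^{1/2} Q^{1/2} = 0 and hence K Q = 0
X = psd_sqrt(K) @ psd_sqrt(Q)
assert abs(np.trace(K @ Q) - np.linalg.norm(X, 'fro') ** 2) < 1e-12
assert np.allclose(K @ Q, 0)
```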
Now we are prepared to show that any solution of Equations (121)–(123) also fulfills Equation (125). As Q Δ is only constrained by Φ H Q Δ Φ = 0 and can be freely chosen otherwise, as long as Q 0 , it is sufficient to verify
(I11) Φ H Q Φ = w Φ H ( Ψ + H H Σ H ) + Φ + w ( Φ H Ψ + Φ ) (I12) Φ H Q Φ = w ( Φ H ( Ψ + H H Σ H ) Φ ) 1 + w ( Φ H Ψ Φ ) 1 (I13) Φ H ( Ψ + H H Σ H ) Φ Φ H Q Φ Φ H Ψ Φ = w Φ H Ψ Φ + w Φ H ( Ψ + H H Σ H ) Φ (I14) Φ H ( w H H S + H + K ) Φ Φ H Q Φ Φ H Ψ Φ = w Φ H H H Σ H Φ (I15) Φ H ( H H S + H ) Φ Φ H Q Φ Φ H Ψ Φ = Φ H H H Σ H Φ
where we used Equation (122) and Φ H K Φ Φ H Q = 0 . The left side of Equation (I15) is
Φ H ( H H S + H ) Φ Φ H Q Φ Φ H Ψ Φ (I16) = Φ H ( H H S + H ) Φ Φ H Q Φ Φ H ( w H H ( S + H Q H H ) + H + K ) Φ (I17) = w Φ H ( H H S + H ) Φ Φ H Q Φ Φ H H H ( S + H Q H H ) + H Φ (I18) = w Φ H H H T ( T H S T ) 1 T H H Φ Φ H Q Φ Φ H H H T ( T H S T + T H H Φ Φ H Q Φ Φ H H H T ) 1 T H H Φ (I19) = w H ¯ H S ¯ 1 H ¯ Q ¯ H ¯ H S ¯ + H ¯ Q ¯ H ¯ H 1 H ¯
where we substituted H ¯ = T H H Φ , S ¯ = T H S T , and Q ¯ = Φ H Q Φ . By Equation (I9), the right side of Equation (I15) can be reformulated as
Φ H H H Σ H Φ (I20) = w Φ H H H ( S + H Q H H ) + H Φ + w Φ H H H S + H Φ (I21) = w Φ H H H T ( T H S T + T H H Φ Φ H Q Φ Φ H H H T ) 1 T H H Φ + w Φ H H H T ( T H S T ) 1 T H H Φ (I22) = w H ¯ H ( S ¯ + H ¯ Q ¯ H ¯ H ) 1 H ¯ + w H ¯ H S ¯ 1 H ¯
Now, combining Equations (I16) and (I20) we can rewrite Equation (I15) as
(I23) H ¯ H S ¯ 1 H ¯ Q ¯ H ¯ H S ¯ + H ¯ Q ¯ H ¯ H 1 H ¯ = H ¯ H ( S ¯ + H ¯ Q ¯ H ¯ H ) 1 H ¯ + H ¯ H S ¯ 1 H ¯ (I24) ( H ¯ H S ¯ 1 H ¯ Q ¯ + I ) H ¯ H S ¯ + H ¯ Q ¯ H ¯ H 1 H ¯ = H ¯ H S ¯ 1 H ¯ (I25) H ¯ H S ¯ + H ¯ Q ¯ H ¯ H 1 H ¯ = ( H ¯ H S ¯ 1 H ¯ Q ¯ + I ) 1 H ¯ H S ¯ 1 H ¯
which holds true by the matrix inversion lemma.
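The identity of Equation (I25) is an instance of the matrix inversion lemma and holds for any positive definite S̄ and psd Q̄ ; a quick numerical check on a hypothetical instance:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 4, 3
Hb = rng.standard_normal((m, n))                           # hypothetical effective channel H-bar
A = rng.standard_normal((m, m)); Sb = A @ A.T + np.eye(m)  # S-bar, positive definite
B = rng.standard_normal((n, n)); Qb = B @ B.T              # Q-bar, psd

Sinv = np.linalg.inv(Sb)
# left side of Equation (I25)
lhs = Hb.T @ np.linalg.inv(Sb + Hb @ Qb @ Hb.T) @ Hb
# right side of Equation (I25)
rhs = np.linalg.inv(Hb.T @ Sinv @ Hb @ Qb + np.eye(n)) @ Hb.T @ Sinv @ Hb
assert np.allclose(lhs, rhs)
```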
Any solution of Equations (121)–(123) will satisfy Equation (126) by selecting M as
M = S w H Ψ + H H Σ H + H H
which leaves us to show that this choice is consistent with Equation (127).
By H Q Δ H H = 0 and multiplying Equation (125) with H from the left and with H H from the right, we have
w H Ψ + H H Σ H + H H = H Q H H w H Ψ + H H
and combined with Equation (I26), we have
(I28) M = S + H Q H H w H Ψ + H H (I29) Σ M = Σ S + Σ H Q H H w Σ H Ψ + H H (I30) tr ( Σ M ) = tr ( Σ S ) + tr ( Σ H Q H H ) w tr ( Σ H Ψ + H H )
From Equation (121) we have
(I31) T H Σ T = w ( T H S T + T H H Q H H T ) 1 + w ( T H S T ) 1 (I32) T H S T T H Σ T + T H H Q H H T T H Σ T = w T H H Q H H T ( T H S T ) 1 (I33) tr ( T H S T T H Σ T ) + tr ( T H H Q H H T T H Σ T ) = w tr ( T H H Q H H T ( T H S T ) 1 ) (I34) tr ( T T H S T T H Σ ) + tr ( T T H H Q H H T T H Σ ) = w tr ( H Q H H T ( T H S T ) 1 T H ) (I35) tr ( S Σ ) + tr ( H Q H H Σ ) = w tr ( H Q H H S + )
Combining Equations (I30) and (I35) we have
tr ( Σ M ) = w tr ( S + H Q H H ) w tr ( Σ H Ψ + H H )
Thus, tr ( Σ M ) = 0 holds if
Φ H H H S + H Q Φ = Φ H H H Σ H Ψ + Φ
and
Φ H H H S + H Q Φ = Φ H H H Σ H Ψ + Φ
As Φ H H = 0 and Ψ + Φ = 0 , it is sufficient to verify
(I39) Φ H H H S + H Q Φ = Φ H H H Σ H Ψ + Φ (I40) Φ H H H S + H Q Φ = Φ H H H Σ H Φ ( Φ H Ψ Φ ) 1 (I41) Φ H H H S + H Φ Φ H Q Φ Φ H Ψ Φ = Φ H H H Σ H Φ
which is Equation (I15) and was shown to hold true.
We have shown that every solution to Equations (121)–(123) is a solution to Equations (125)–(127); the inverse direction follows by symmetry. It remains to show that Equations (121)–(123) and (125)–(127) imply the equalities of the lemma. Equality (a) is Equation (I37), and a proof of (b) follows by symmetry. Equalities (c) and (d) are Equations (121) and (125). Equality (e) follows from (a) or (b). By symmetry, in the same way as we derived Equation (I35), we derive
$\operatorname{tr}(\Psi Q) + \operatorname{tr}\big(H^H \Sigma H Q\big) = w \operatorname{tr}\big(H^H \Sigma H \Psi^+\big)$
which, combined with Equation (I35) and equality (a), implies (f).

References

1. Rashid-Farrokhi, F.; Liu, K.; Tassiulas, L. Transmit beamforming and power control for cellular wireless systems. IEEE J. Sel. Areas Commun. 1998, 16, 1437–1450.
2. Visotsky, E.; Madhow, U. Optimum beamforming using transmit antenna arrays. In Proceedings of the IEEE 49th Vehicular Technology Conference, Houston, TX, USA, 16–20 May 1999; pp. 851–856.
3. Boche, H.; Schubert, M. A general duality theory for uplink and downlink beamforming. In Proceedings of the IEEE 56th Vehicular Technology Conference (VTC 2002-Fall), Vancouver, BC, Canada, 24–28 September 2002; pp. 87–91.
4. Viswanath, P.; Tse, D. Sum capacity of the vector Gaussian broadcast channel and uplink-downlink duality. IEEE Trans. Inf. Theory 2003, 49, 1912–1921.
5. Jorswieck, E.; Boche, H. Transmission strategies for the MIMO MAC with MMSE receiver: Average MSE optimization and achievable individual MSE region. IEEE Trans. Signal Process. 2003, 51, 2872–2881.
6. Shi, S.; Schubert, M.; Boche, H. Downlink MMSE transceiver optimization for multiuser MIMO systems: Duality and sum-MSE minimization. IEEE Trans. Signal Process. 2007, 55, 5436–5446.
7. Hunger, R.; Joham, M.; Utschick, W. On the MSE-duality of the broadcast channel and the multiple access channel. IEEE Trans. Signal Process. 2009, 57, 698–713.
8. Jindal, N.; Vishwanath, S.; Goldsmith, A. On the duality of Gaussian multiple-access and broadcast channels. IEEE Trans. Inf. Theory 2004, 50, 768–783.
9. Vishwanath, S.; Jindal, N.; Goldsmith, A. Duality, achievable rates, and sum-rate capacity of Gaussian MIMO broadcast channels. IEEE Trans. Inf. Theory 2003, 49, 2658–2668.
10. Yu, W. Uplink-downlink duality via minimax duality. IEEE Trans. Inf. Theory 2006, 52, 361–374.
11. Hunger, R.; Joham, M. A general rate duality of the MIMO multiple access channel and the MIMO broadcast channel. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM), New Orleans, LA, USA, 30 November–4 December 2008.
12. Liu, A.; Liu, Y.; Xiang, H.; Luo, W. Duality, Polite Water-Filling, and Optimization for MIMO B-MAC Interference Networks and Itree Networks. 2010.
13. Liu, A.; Liu, Y.; Xiang, H.; Luo, W. Polite water-filling for weighted sum-rate maximization in MIMO B-MAC networks under multiple linear constraints. IEEE Trans. Signal Process. 2012, 60, 834–847.
14. Yu, W.; Lan, T. Transmitter optimization for the multi-antenna downlink with per-antenna power constraints. IEEE Trans. Signal Process. 2007, 55, 2646–2660.
15. Huh, H.; Papadopoulos, H.C.; Caire, G. Multiuser MISO transmitter optimization for intercell interference mitigation. IEEE Trans. Signal Process. 2010, 58, 4272–4285.
16. Zhang, L.; Zhang, R.; Liang, Y.C.; Xin, Y.; Poor, H. On Gaussian MIMO BC-MAC duality with multiple transmit covariance constraints. IEEE Trans. Inf. Theory 2012, 58, 2064–2078.
17. Jorswieck, E.A.; Boche, H. Performance analysis of capacity of MIMO systems under multiuser interference based on worst-case noise behavior. EURASIP J. Wirel. Commun. Netw. 2004, 2004, 273–285.
18. Vishwanath, S.; Boyd, S.; Goldsmith, A. Worst-case capacity of Gaussian vector channels. In Proceedings of the 2003 Canadian Workshop on Information Theory, Waterloo, ON, Canada, 18–21 May 2003.
19. Loyka, S.; Charalambous, C. On the compound capacity of a class of MIMO channels subject to normed uncertainty. IEEE Trans. Inf. Theory 2012, 58, 2048–2063.
20. Wiesel, A.; Eldar, Y.; Shamai, S. Optimization of the MIMO compound capacity. IEEE Trans. Wirel. Commun. 2007, 6, 1094–1101.
21. Bengtsson, M.; Ottersten, B. Optimal downlink beamforming using semidefinite optimization. In Proceedings of the 37th Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA, 22–24 September 1999; pp. 987–996.
22. Wiesel, A.; Eldar, Y.; Shamai, S. Linear precoding via conic optimization for fixed MIMO receivers. IEEE Trans. Signal Process. 2006, 54, 161–176.
23. Palomar, D. Unified framework for linear MIMO transceivers with shaping constraints. IEEE Commun. Lett. 2004, 8, 697–699.
24. Scutari, G.; Palomar, D.; Barbarossa, S. MIMO cognitive radio: A game theoretical approach. In Proceedings of the 9th IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Recife, Brazil, 6–9 July 2008; pp. 426–430.
25. Hammarwall, D.; Bengtsson, M.; Ottersten, B. On downlink beamforming with indefinite shaping constraints. IEEE Trans. Signal Process. 2006, 54, 3566–3580.
26. Hellings, C.; Weiland, L.; Utschick, W. Optimality of proper signaling in Gaussian MIMO broadcast channels with shaping constraints. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 3474–3478.
27. Lameiro, C.; Santamaria, I.; Utschick, W. Interference shaping constraints for underlay MIMO interference channels. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 7313–7317.
28. Chen, L.; You, S. The weighted sum rate maximization in MIMO interference networks: The minimax Lagrangian duality and algorithm. 2013.
29. Dotzler, A.; Newinger, M.; Utschick, W. Covariance shapes as a cognitive radio concept for receivers with multiple antennas. In Proceedings of the 19th International ITG Workshop on Smart Antennas (WSA), Ilmenau, Germany, 3–5 March 2015.
30. Newinger, M.; Dotzler, A.; Utschick, W. Interference shaping for device-to-device communication in cellular networks. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 4120–4125.
31. Brunner, H.H.; Dotzler, A.; Utschick, W.; Nossek, J.A. Weighted sum rate maximization with multiple linear conic constraints. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 4635–4640.
32. Brunner, H.H.; Dotzler, A.; Utschick, W.; Nossek, J.A. Intercell interference robustness tradeoff with loosened covariance shaping. In Proceedings of the 18th International ITG Workshop on Smart Antennas (WSA), Erlangen, Germany, 12–13 March 2014.
33. Hellings, C.; Utschick, W. On carrier-cooperation in parallel Gaussian MIMO relay channels with partial decode-and-forward. In Proceedings of the 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014; pp. 220–224.
34. Hellings, C.; Gerdes, L.; Weiland, L.; Utschick, W. On optimal Gaussian signaling in MIMO relay channels with partial decode-and-forward. IEEE Trans. Signal Process. 2014, 62, 3153–3164.
35. Ramana, M.; Goldman, A.J. Some geometric results in semidefinite programming. J. Glob. Optim. 1995, 7, 33–50.
36. Dotzler, A.; Riemensberger, M.; Utschick, W.; Dietl, G. Interference robustness for cellular MIMO networks. In Proceedings of the 13th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cesme, Turkey, 17–20 June 2012; pp. 229–233.
37. Sato, H. An outer bound to the capacity region of broadcast channels. IEEE Trans. Inf. Theory 1978, 24, 374–377.
38. Clarke, F.H. Optimization and Nonsmooth Analysis; Classics in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1990.
Table 1. Uplink and Downlink Optimality Conditions.

| Downlink: Lagrangian multiplier | Downlink: Constraint | Uplink: Lagrangian multiplier | Uplink: Constraint |
|---|---|---|---|
| $\Sigma_k$ | $S_k = R_k + \sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H \ \forall k$ | $Q_k$ | $\Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus k} H_{ik}^H \Sigma_i H_{ik} \ \forall k$ |
| $\Omega$ | $Q \preceq \alpha C + Z$ | $R$ | $\Sigma \preceq \beta B + Y$ |
| $\nu C + Z$ | $Z \in \mathcal{Z}$ | $\sigma B + Y$ | $Y \in \mathcal{Y}$ |
| $K$ | $Q \succeq 0$ | $M$ | $\Sigma \succeq 0$ |
| $X$ | $R = \alpha B + Y$ | $W$ | $\Omega = \beta C + Z$ |
| $\mu B + Y$ | $Y \in \mathcal{Y}$ | $\tau C + Z$ | $Z \in \mathcal{Z}$ |
| $L$ | $R \succeq 0$ | $N$ | $\Omega \succeq 0$ |

Substitutions and constraints on dual variables:
- Downlink: $\Omega \succeq 0$, $K \succeq 0$, $L \succeq 0$; $Z \in \mathcal{Z}$, $Y \in \mathcal{Y}$; $\Psi_k = \Omega_k + \sum_{i \in \mathcal{K} \setminus k} H_{ik}^H \Sigma_i H_{ik} \ \forall k$
- Uplink: $R \succeq 0$, $M \succeq 0$, $N \succeq 0$; $Y \in \mathcal{Y}$, $Z \in \mathcal{Z}$; $S_k = R_k + \sum_{i \in \mathcal{K} \setminus k} H_{ki} Q_i H_{ki}^H \ \forall k$

Complementary slackness:
- Downlink: $\operatorname{tr}\big(\Omega_k (Q_k - \alpha C_k - Z_k)\big) = 0 \ \forall k$; $\operatorname{tr}(K_k Q_k) = 0 \ \forall k$; $\operatorname{tr}(L_k R_k) = 0 \ \forall k$
- Uplink: $\operatorname{tr}\big(R_k (\Sigma_k - \beta B_k - Y_k)\big) = 0 \ \forall k$; $\operatorname{tr}(M_k \Sigma_k) = 0 \ \forall k$; $\operatorname{tr}(N_k \Omega_k) = 0 \ \forall k$

Stationarity:
- Downlink: $\Omega = \nu C + Z$; $X = \mu B + Y$; $\Sigma = X - L$; $\Psi_k = w_k H_{kk}^H \big(S_k + H_{kk} Q_k H_{kk}^H\big)^+ H_{kk} + K_k \ \forall k$; $\Sigma_k \in \big\{\Sigma_k : \Sigma_k = -w_k \big(S_k + H_{kk} Q_k H_{kk}^H\big)^+ + w_k S_k^+ + w_k \Sigma_k^\Delta,\ \Sigma_k^\Delta \in \mathcal{E}_k^\Delta\big\} \ \forall k$
- Uplink: $R = \sigma B + Y$; $W = \tau C + Z$; $Q = W - N$; $S_k = w_k H_{kk} \big(\Psi_k + H_{kk}^H \Sigma_k H_{kk}\big)^+ H_{kk}^H + M_k \ \forall k$; $Q_k \in \big\{Q_k : Q_k = -w_k \big(\Psi_k + H_{kk}^H \Sigma_k H_{kk}\big)^+ + w_k \Psi_k^+ + w_k Q_k^\Delta,\ Q_k^\Delta \in \mathcal{Q}_k^\Delta\big\} \ \forall k$
