BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition

Makkeh, Theis, and Vicente found that the Cone Programming model is the most robust for computing the Bertschinger et al. partial information decomposition (BROJA PID) measure. We developed production-quality, robust software that computes the BROJA PID measure based on the Cone Programming model. In this paper, we prove the important property of strong duality for the Cone Program and prove an equivalence between the Cone Program and the original Convex Program. We then describe our software in detail, explain how to use it, and report experiments comparing it to other estimators. Finally, we show that the software can be extended to compute some quantities of a trivariate PID measure.


Introduction
For random variables X, Y, Z with finite range, consider the mutual information MI(X; Y, Z): the amount of information that the pair (Y, Z) contains about X. How can we quantify the contributions of Y and of Z to MI(X; Y, Z)? This question is at the heart of (bivariate) partial information decomposition (PID) [1][2][3][4]. Information theorists agree that there can be: information shared redundantly by Y and Z; information contained uniquely within Y but not within Z; information contained uniquely within Z but not within Y; and information that synergistically results from combining both Y and Z. These quantities are denoted by SI(X; Y, Z), UI(X; Y\Z), UI(X; Z\Y), and CI(X; Y, Z), respectively. All four of these quantities add up to MI(X; Y, Z). Moreover, the total information that Y has about X decomposes into the unique information that Y has about X and the information that Y shares with Z about X, and similarly for the total information that Z has about X; thus SI(X; Y, Z) + UI(X; Y\Z) = MI(X; Y) and SI(X; Y, Z) + UI(X; Z\Y) = MI(X; Z). Hence, if the joint distribution of (X, Y, Z) is known, then there is (at most) one degree of freedom in defining a bivariate PID: fixing the value of one of the four information quantities determines the whole bivariate PID.
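The accounting identities above leave a single degree of freedom. A minimal Python sketch illustrates this (the distribution is the AND gate used later in the paper; the fact that AND carries zero unique information, so that SI = MI(X; Y), is taken from the literature and assumed here for illustration):

```python
from math import log2

# Joint distribution of the AND gate: X = Y AND Z, with Y, Z independent
# and uniform on {0, 1}.
p = {(0, 0, 0): 0.25, (0, 0, 1): 0.25, (0, 1, 0): 0.25, (1, 1, 1): 0.25}

def marginal(pmf, idx):
    # Marginal distribution over the coordinate positions in idx.
    out = {}
    for k, v in pmf.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def entropy(pmf):
    return -sum(v * log2(v) for v in pmf.values() if v > 0)

def mi(pmf, a, b):
    # MI(A; B) = H(A) + H(B) - H(A, B).
    return entropy(marginal(pmf, a)) + entropy(marginal(pmf, b)) \
        - entropy(marginal(pmf, a + b))

mi_x_yz = mi(p, (0,), (1, 2))  # MI(X; Y,Z)
mi_x_y = mi(p, (0,), (1,))     # MI(X; Y)
mi_x_z = mi(p, (0,), (2,))     # MI(X; Z)

# One degree of freedom: fixing SI determines the other three quantities.
SI = mi_x_y                    # AND gate: UI(X;Y\Z) = 0, so SI = MI(X;Y)
UIY = mi_x_y - SI              # = 0
UIZ = mi_x_z - SI              # = 0 (by symmetry, MI(X;Y) = MI(X;Z))
CI = mi_x_yz - SI - UIY - UIZ  # = 0.5 bit for the AND gate
```
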
Bertschinger et al. [1] have given a definition of a bivariate PID in which the synergistic information is defined as follows:

CI(X; Y, Z) := max (MI(X; Y, Z) − MI(X′; Y′, Z′))   (1)

where the maximum extends over all triples of random variables (X′, Y′, Z′) with the same 12,13-marginals as (X, Y, Z), i.e., P(X′ = x, Y′ = y) = P(X = x, Y = y) for all x, y and P(X′ = x, Z′ = z) = P(X = x, Z = z) for all x, z. It can easily be verified that this amounts to maximizing a concave function over a compact, polyhedral set described by inequalities [5]:

minimize ∑_{x,y,z} q_{x,y,z} ln(q_{x,y,z}/q_{*,y,z})  over q ∈ R^{X×Y×Z}
subject to q_{x,y,*} = p_{x,y,*}  for all (x, y) ∈ X × Y
           q_{x,*,z} = p_{x,*,z}  for all (x, z) ∈ X × Z
           q_{x,y,z} ≥ 0          for all (x, y, z) ∈ X × Y × Z.   (CP)

Cone Programming Model for Bivariate PID
In [5], we introduced a model for computing the BROJA bivariate PID based on so-called "Cone Programming". Cone Programming is a far-reaching generalization of Linear Programming: the usual inequality constraints which occur in Linear Programs can be replaced by so-called "generalized inequalities"; see below for details. As for Linear Programs, dedicated software is available for Cone Programs, but each type of generalized inequality (i.e., each cone) requires its own algorithms. The specific type of generalized inequality needed for the computation of the BROJA bivariate PID requires solvers for the so-called "Exponential Cone", of which we are aware of two: ECOS [9] and SCS [10].
In the computational results of [5], we found that the Cone Programming approach (based on one of the available solvers) was, while not the fastest, the most robust of all methods for computing the BROJA bivariate PID which we tried, such as projected gradient descent, interior point for general convex programs, geometric programming, etc. The reason for this success is that the interior point method for Cone Programming is an extension of the efficient interior point methods of Linear Programming, see [11] for more details. This is why our software is based on the Exponential Cone Programming model.
In this section, we review the mathematical definitions to the extent necessary to understand our model and the properties of the software based on it.

Background on Cone Programming
A nonempty closed convex cone K ⊆ R^m is a closed set which is convex, i.e., for any x, y ∈ K and 0 ≤ θ ≤ 1 we have θx + (1 − θ)y ∈ K, and is a cone, i.e., for any x ∈ K and θ ≥ 0 we have θx ∈ K; for example, R^n_+ is a closed convex cone. Cone Programming is a far-reaching generalization of Linear Programming, which may contain so-called generalized inequalities: for a fixed closed convex cone K ⊆ R^m, the generalized inequality "a ≤_K b" denotes b − a ∈ K for any a, b ∈ R^m.

Recall the primal-dual pair of Linear Programming. The primal problem is

minimize c^T w  over w ∈ R^n
subject to Aw = b
           Gw ≤ h,   (2)

and the dual problem is, over variables η ∈ R^{m1} and θ ∈ R^{m2},

maximize −b^T η − h^T θ
subject to −A^T η − G^T θ = c
           θ ≥ 0.   (3)

There are two properties that the pair (2) and (3) may or may not have, namely, weak and strong duality. The following defines the duality properties.

Definition 1. Consider the primal-dual pair (2) and (3). Then, we define the following:
1. A vector w is a feasible solution of (2) if it satisfies all constraints of (2); a pair (η, θ) is a feasible solution of (3) if it satisfies all constraints of (3).
2. We say that (2) and (3) satisfy weak duality if, for all feasible solutions w and (η, θ) of (2) and (3), c^T w ≥ −b^T η − h^T θ.
3. If w is a feasible solution of (2) and (η, θ) is a feasible solution of (3), then the duality gap is d := c^T w + b^T η + h^T θ.
4. We say that (2) and (3) satisfy strong duality when the feasible solutions w and (η, θ) are optimal in (2) and (3), respectively, if and only if d is zero.

Weak duality always holds for a Linear Program; strong duality holds for a Linear Program whenever a feasible solution of (2) or (3) exists. These duality properties are used to certify the optimality of w and (η, θ). The same concept of duality exists for Cone Programming. The primal cone problem is

minimize c^T w  over w ∈ R^n
subject to Aw = b
           Gw ≤_K h,   (P)

where A ∈ R^{m1×n}, G ∈ R^{m2×n}, c ∈ R^n, b ∈ R^{m1}, and h ∈ R^{m2}. The dual cone problem is

maximize −b^T η − h^T θ  over η ∈ R^{m1}, θ ∈ R^{m2}
subject to −A^T η − G^T θ = c
           θ ∈ K*,   (D)

where K* := {u ∈ R^{m2} | u^T v ≥ 0 for all v ∈ K} is the dual cone of K. The entries of the vector η ∈ R^{m1} are called the dual variables for the equality constraints Aw = b; those of θ ∈ R^{m2} are the dual variables for the generalized inequalities Gw ≤_K h. The primal-dual pair of Conic Optimization (P) and (D) satisfies weak and strong duality in the same manner as the Linear Programming pair. In the following, we define the interior point of a Cone Program, whose existence is a sufficient condition for strong duality (see Definition 1) to hold for the Conic Programming pair.

Definition 2.
Consider a primal-dual pair of the Conic Optimization (P) and (D). Then, the primal problem (P) has an interior point x̄ if:
• x̄ is a feasible solution of (P);
• there exists ε > 0 such that for any y ∈ R^{m2}, we have y ∈ K whenever ‖h − Gx̄ − y‖₂ ≤ ε.

Theorem 1. Consider a primal-dual pair of the Conic Optimization (P) and (D). Then:
1. Weak duality always holds for (P) and (D).
2. If the optimal value of (P) is finite and (P) has an interior point w̄, then strong duality holds for (P) and (D).
If the requirements of Theorem 1 are met for a conic optimization problem, then weak and strong duality can be used as guarantees that the given solution of a Cone Program is optimal.
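As a small numerical illustration of the duality notions above (a hand-built example of ours, not taken from the paper), consider the LP minimize w1 + w2 subject to w ≥ 0 and w1 + w2 ≥ 1, written in the form (2) with no equality constraints:

```python
# Primal (2): minimize c^T w subject to G w <= h (no equality constraints here).
c = [1.0, 1.0]
G = [[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]]
h = [0.0, 0.0, -1.0]             # encodes w1 >= 0, w2 >= 0, w1 + w2 >= 1

w = [0.5, 0.5]                   # a primal optimal solution
theta = [0.0, 0.0, 1.0]          # a dual optimal solution of (3)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# Feasibility: G w <= h, theta >= 0, and -G^T theta = c.
assert all(dot(row, w) <= hv + 1e-12 for row, hv in zip(G, h))
assert all(t >= 0 for t in theta)
assert all(abs(-sum(G[i][j] * theta[i] for i in range(3)) - c[j]) < 1e-12
           for j in range(2))

# Duality gap d = c^T w + b^T eta + h^T theta; with no equality constraints,
# the b^T eta term is absent.  d = 0 certifies that both solutions are optimal.
d = dot(c, w) + dot(h, theta)
print(d)  # 0.0
```
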
One of the closed convex cones which we use throughout the paper is the exponential cone, K_exp (see Figure 1), defined in [13] as

{(r, t, q) ∈ R^3 | q > 0 and q·e^{r/q} ≤ t} ∪ {(r, t, 0) ∈ R^3 | r ≤ 0 and t ≥ 0},   (4)

which is the closure of the set {(r, t, q) ∈ R^3 | q > 0 and q·e^{r/q} ≤ t}. Its dual cone, K*_exp (see Figure 1), is

{(u, v, w) ∈ R^3 | u < 0 and −u·e^{w/u} ≤ e·v} ∪ {(0, v, w) ∈ R^3 | v ≥ 0 and w ≥ 0},   (5)

which is the closure of the set {(u, v, w) ∈ R^3 | u < 0 and −u·e^{w/u} ≤ e·v}. When K = K_exp in (P), the Cone Program is referred to as an "Exponential Cone Program".
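Membership in K_exp per definition (4) can be checked numerically; the following is a minimal sketch of ours (the tolerance handling is an assumption, not part of any solver API):

```python
from math import exp, log

def in_Kexp(r, t, q, tol=1e-12):
    # Membership test for the exponential cone (4).
    if q > 0:
        return q * exp(r / q) <= t + tol       # q * e^(r/q) <= t
    if q == 0:
        return r <= tol and t >= -tol          # r <= 0 and t >= 0
    return False                               # q < 0 is never in the cone

q0, t0 = 0.25, 0.5
r0 = q0 * log(t0 / q0)          # boundary point: r = q * ln(t/q)
assert in_Kexp(r0, t0, q0)      # boundary points belong to the closed cone
assert not in_Kexp(r0 + 0.1, t0, q0)  # increasing r leaves the cone
assert in_Kexp(-1.0, 2.0, 0.0)  # a point of the q = 0 part of (4)
```
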

The Exponential Cone Programming Model
The Convex Program (CP) which computes the bivariate partial information decomposition can be formulated as an Exponential Cone Program. Consider the following Exponential Cone Program, where the variables are r, t, q ∈ R^{X×Y×Z}:

minimize −∑_{x,y,z} r_{x,y,z}
subject to q_{x,y,*} = p_{x,y,*}  for all (x, y) ∈ X × Y
           q_{x,*,z} = p_{x,*,z}  for all (x, z) ∈ X × Z
           q_{*,y,z} − t_{x,y,z} = 0  for all (x, y, z) ∈ X × Y × Z
           (−r_{x,y,z}, −t_{x,y,z}, −q_{x,y,z}) ≤_{K_exp} 0  for all (x, y, z) ∈ X × Y × Z.   (EXP)

The first two sets of constraints are the marginal equations of (CP). The third set of constraints connects the t-variables with the q-variables; we refer to them as coupling equations. The generalized inequalities connect these to the variables forming the objective function.

Proposition 1. The Exponential Cone Program (EXP) is equivalent to the Convex Program (CP); in particular, both have the same optimal value.

Proof. Let P_CP and P_EXP be the feasible regions of (CP) and (EXP), respectively. We define the map f by

f(q)_{x,y,z} := (q_{x,y,z} ln(q_{*,y,z}/q_{x,y,z}), q_{*,y,z}, q_{x,y,z})  if q_{x,y,z} > 0,
f(q)_{x,y,z} := (0, q_{*,y,z}, q_{x,y,z})  if q_{x,y,z} = 0.
For q ∈ P_CP, we have f(q) ∈ P_EXP, and since the contribution to the conditional entropy at q_{x,y,z} = 0 vanishes, the objective function of (CP) evaluated at q ∈ P_CP is equal to that of (EXP) evaluated at f(q). If (r, t, q) ∈ P_EXP \ Im(f), then there exist x, y, z such that r_{x,y,z} < q_{x,y,z} ln(t_{x,y,z}/q_{x,y,z}), and so the objective value of (EXP) at (r, t, q) is strictly larger than at the corresponding image point; hence optimal solutions can be sought in Im(f). The dual problem of (EXP) is obtained from (D). Using the definition of K*_exp, the system consisting of (9a)-(9d) can be rewritten, and so the dual problem of (EXP) can be formulated as the program (D-EXP).

Proposition 2. Strong duality holds for the primal-dual pair (EXP) and (D-EXP).
Proof. One verifies that (EXP) has an interior point; we refer to [14] for the construction. Hence, by Theorem 1, strong duality holds for the primal-dual pair (EXP) and (D-EXP).
Weak and strong duality, in turn, provide a measure for the quality of the returned solution; for more details, see Section 3.4.
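The equivalence behind Proposition 1 can be checked numerically: under the map f from the proof, the objective of (CP) at q coincides with the objective of (EXP), −∑ r, at f(q). A small self-contained sketch of ours, with an arbitrary joint distribution as the feasible q:

```python
from math import log

# An arbitrary feasible q for (CP), i.e., a joint pmf of (X, Y, Z).
q = {(0, 0, 0): 0.2, (0, 0, 1): 0.2, (0, 1, 0): 0.2,
     (0, 1, 1): 0.2, (1, 1, 1): 0.2}

# q_{*,y,z}: marginalize out x.
q_yz = {}
for (x, y, z), v in q.items():
    q_yz[(y, z)] = q_yz.get((y, z), 0.0) + v

# The map f from the proof: q -> (r, t, q) per triplet.
f = {k: ((v * log(q_yz[k[1:]] / v) if v > 0 else 0.0), q_yz[k[1:]], v)
     for k, v in q.items()}

# Objective of (CP), sum q ln(q / q_{*,y,z}), equals the objective of (EXP),
# -sum r, evaluated at f(q).
obj_cp = sum(v * log(v / q_yz[(y, z)]) for (x, y, z), v in q.items() if v > 0)
obj_exp = -sum(r for (r, t, qq) in f.values())
assert abs(obj_cp - obj_exp) < 1e-12
```
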

The BROJA_2PID Estimator
We implemented the exponential cone program (EXP) in Python and used a conic optimization solver to get the desired solution. Note that we are aware of only two conic optimization software toolboxes which allow solving Exponential Cone Programs, ECOS and SCS. The current version of BROJA_2PID utilizes ECOS to solve the Exponential Cone Program (EXP). ECOS (we use the version from 8 November 2016) is a lightweight numerical software for solving Convex Cone programs [9].
This section describes the BROJA_2PID package from the user's perspective. We briefly explain how to install BROJA_2PID. Then, we illustrate the framework of BROJA_2PID and its functions. Further, we describe the input, tuning parameters, and output.

Installation
To install BROJA_2PID, you need Python installed on your machine. Currently, you also need to install ECOS, the Exponential Cone solver; most likely, pip3 install ecos will do. If there is trouble installing ECOS, we refer to its GitHub repository https://github.com/embotech/ecos-python. Finally, git clone the GitHub repository of BROJA_2PID, and it is ready to be used.

Computing Bivariate PID
In this subsection, we explain how BROJA_2PID works. In Figure 2, we present a script as an example of using the BROJA_2PID package to compute the partial information decomposition of the AND distribution, X = Y AND Z, where Y and Z are independent and uniformly distributed in {0, 1}. The script begins as follows:

# test_and_gate.py
from BROJA_2PID import pid, BROJA_2PID_Exception

We will go through the example (Figure 2) to explain how BROJA_2PID works. The main function in the BROJA_2PID package is pid(). It is a wrapper function which is used to compute the partial information decomposition. First, pid() prepares the "ingredients" of (EXP). Then, it calls the Cone Programming solver to find the optimal solution of (EXP). Finally, it receives from the Cone Programming solver the required solution to compute the decomposition.
The "ingredients" of (EXP) are the marginal and coupling equations, the generalized inequalities, and the objective function. Thus, pid() needs to compute and store p_{x,y,*} and p_{x,*,z}, the marginal distributions of (X, Y) and (X, Z). For this, pid() requires the joint distribution of (X, Y, Z). In Figure 2, the distribution comes from the AND gate, where X = Y AND Z.
Distributions are stored as a Python dictionary in which the outcome (x, y, z) is a triplet key and the probability of this outcome is the value. This provides an associative structure where the value of the random variables is a reference to the probability at that point. For example, the triplet (0, 0, 0) occurs with probability 1/4, and so on for the other triplets. Thus, the AND distribution is defined as a Python dictionary, andgate=dict(), where andgate[(0,0,0)]=0.25 assigns the key "(0, 0, 0)" the value "0.25", and so on.
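The dictionary representation and the marginals p_{x,y,*} and p_{x,*,z} that pid() derives internally can be sketched as follows (marginal_xy and marginal_xz are hypothetical helper names of ours for illustration, not functions of the package):

```python
# The AND distribution in the format pid() expects: triplet keys, probabilities.
andgate = dict()
andgate[(0, 0, 0)] = 0.25
andgate[(0, 0, 1)] = 0.25
andgate[(0, 1, 0)] = 0.25
andgate[(1, 1, 1)] = 0.25

def marginal_xy(p):
    # p_{x,y,*}: marginal distribution of (X, Y).
    m = {}
    for (x, y, z), v in p.items():
        m[(x, y)] = m.get((x, y), 0.0) + v
    return m

def marginal_xz(p):
    # p_{x,*,z}: marginal distribution of (X, Z).
    m = {}
    for (x, y, z), v in p.items():
        m[(x, z)] = m.get((x, z), 0.0) + v
    return m

print(marginal_xy(andgate))  # {(0, 0): 0.5, (0, 1): 0.25, (1, 1): 0.25}
```
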
Note that the user does not need to add the triplets with zero probability to the dictionary, since pid() will always discard such triplets. In [5], the authors discussed in detail how to handle the triplets with zero probability. The input of pid() is explained in detail in the following subsection. Now, we briefly describe how pid() proceeds to return the promised decomposition. pid() calls the Cone Programming solver and provides it with the "ingredients" of (EXP) as part of the solver's input. The solver finds the optimal solution of (EXP) and (D-EXP). When the solver halts, it returns the primal and dual solutions. Using the returned solutions, pid() computes the decomposition based on (1). The full process is explained in Figure 3. Finally, pid() returns a Python dictionary, returndata, containing the partial information decomposition and information about the quality of the Cone Programming solver's solution. In Section 3.4, we give a detailed explanation of how the quality of the solution is computed, and Table 3 contains a description of the keys and values of returndata.
For example, in the returned dictionary returndata for the AND gate, returndata['CI'] contains the quantity of synergistic information and returndata['Num_err'][0] the maximum primal feasibility violation of (EXP).
Note that the conic optimization solver is always supposed to return a solution. Thus, BROJA_2PID will raise an exception, BROJA_2PID_Exception, when no solution is returned.

Input and Parameters
In the BROJA_2PID package, pid() is the function which the user calls to compute the partial information decomposition. The function pid() takes a Python dictionary as input.
The Python dictionary represents a probability distribution; from it, pid() computes the vectors p_{x,y,*} and p_{x,*,z} for the marginal equations in (EXP). A key of the Python dictionary is a triplet (x, y, z), a possible outcome of the random variables X, Y, and Z. The value of the key (x, y, z) is the probability of X = x, Y = y, and Z = z.
While seeking the optimal solution of (EXP), the Cone Programming solver has to make sure that w and (η, θ) are feasible, and (ideally) it should halt when the duality gap is zero, i.e., when w and (η, θ) are optimal. However, the entries of w and (η, θ) belong to R, and computers represent real numbers only up to floating-point precision. Thus, the Cone Programming solver considers a solution feasible when none of the constraints is violated beyond a numerical tolerance, and optimal when the duality gap is zero up to a numerical tolerance. The Cone Programming solver allows the user to modify the feasibility and optimality tolerances along with a couple of other parameters, which are described in Table 1. To change the default Cone Programming solver parameters, the user should pass them to pid() as a dictionary. For example, in Figure 4, we change the maximum number of iterations which the solver can perform. For this, we created a dictionary, parms=dict(). Then, we set the desired value, 1000, for the key 'max_iter'. Finally, we pass parms to pid() as pid(andgate, **parms). Note that, in the dictionary parms, the user only needs to define the keys whose values the user wants to change.

The parameter output determines the printing mode of pid() and is an integer in {0, 1, 2}; it allows the user to control what will be printed on the screen. Table 2 gives a detailed description of the printing modes:
0: pid() prints only its output (a Python dictionary, see Section 3.4).
1: In addition to output=0, pid() prints a flag when it starts preparing (EXP) and another flag when it calls the conic optimization solver.
2: In addition to output=1, pid() prints the conic optimization solver's output.
(The conic optimization solver usually prints out the problem statistics and the status of optimization.) Currently, we only use ECOS to solve the Exponential Cone Program, but in the future we will add the SCS solver. Thus, the user can determine which solver to use in the computations. For example, setting cone_solver="ECOS" will utilize ECOS in the computations.
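A representative parameter dictionary might look as follows (a sketch; only the keys the user wishes to override need to appear, and the pid() call itself is shown as a comment since it requires the installed package and ECOS):

```python
# Tuning dictionary for pid(); see Table 1 for the full list of parameters.
parms = dict()
parms['max_iter'] = 1000       # maximum number of solver iterations
parms['output'] = 2            # most verbose printing mode (Table 2)
parms['cone_solver'] = "ECOS"  # currently the only supported solver

# The call would then be:
#   returndata = pid(andgate, **parms)
print(parms)
```
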

Returned Data
The function pid() returns a Python dictionary called returndata. Table 3 describes the returned dictionary. Table 3. Description of returndata, the Python dictionary returned by pid().
(All information quantities are returned in bits.)
'Num_err': information about the quality of the solution.
'Solver': name of the solver used to optimize (CP).
(In this version, we only use ECOS, but other solvers might be added in the future.) Let w, η, and θ be the lists returned by the Cone Programming solver, where w_{x,y,z} = [r_{x,y,z}, t_{x,y,z}, q_{x,y,z}], η_{x,y,z} = [λ_{x,y}, λ_{x,z}, µ_{x,y,z}], and θ_{x,y,z} = [ν_{x,y,z}]. Note that w is the primal solution and (η, θ) is the dual solution. The dictionary returndata gives the user access to the partial information decomposition, namely, shared, unique, and synergistic information. The partial information decomposition is computed using only the positive values of q_{x,y,z}. The value of the key 'Num_err' is a triplet such that returndata['Num_err'][0] is the primal feasibility violation, returndata['Num_err'][1] is the dual feasibility violation, and returndata['Num_err'][2] is the duality gap violation. In the following, we explain how we compute the violations of primal and dual feasibility in addition to that of the duality gap.
The primal feasibility conditions of (EXP) are

q_{x,y,*} = p_{x,y,*}
q_{x,*,z} = p_{x,*,z}
q_{*,y,z} = t_{x,y,z}
(−r_{x,y,z}, −t_{x,y,z}, −q_{x,y,z}) ≤_{K_exp} 0.

We check the violation of q_{x,y,z} ≥ 0, which is required by K_exp. Since all non-positive q_{x,y,z} are discarded when computing the decomposition, we check whether the marginal equations are violated using only the positive q_{x,y,z}. The coupling equations are ignored, since they merely assign values to the t_{x,y,z} variables. Thus, returndata['Num_err'][0] is the maximum violation among the entries of the vector

(q_{x,y,*} − p_{x,y,*}, q_{x,*,z} − p_{x,*,z}, −q_{x,y,z}).

For the dual feasibility violation of (D-EXP), we check the non-negativity of (12); returndata['Num_err'][1] is the resulting error. When w is the optimal solution of (EXP), we have −∑_{x,y,z} r_{x,y,z} = ∑_{x,y,z} q_{x,y,z} log(q_{x,y,z}/q_{*,y,z}) = −H(X | Y, Z).

The duality gap of (EXP) and (D-EXP) is

λ^T b − H(X | Y, Z),   (13)

where λ^T b = ∑_{x,y} λ_{x,y} p_{x,y,*} + ∑_{x,z} λ_{x,z} p_{x,*,z}. Since weak duality implies H(X | Y, Z) ≤ λ^T b, we are left to check the non-negativity of (13) to inspect the duality gap. Thus, returndata['Num_err'][2] is given by this gap.
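The primal feasibility violation returndata['Num_err'][0] described above can be sketched as follows (our illustration; the package's internal implementation may differ):

```python
def primal_violation(q, p_xy, p_xz):
    # Max violation of the marginal equations and of q >= 0; the marginal
    # checks use only the strictly positive q entries, as in the text.
    qpos = {k: v for k, v in q.items() if v > 0}
    q_xy, q_xz = {}, {}
    for (x, y, z), v in qpos.items():
        q_xy[(x, y)] = q_xy.get((x, y), 0.0) + v
        q_xz[(x, z)] = q_xz.get((x, z), 0.0) + v
    err = max((abs(q_xy.get(k, 0.0) - v) for k, v in p_xy.items()),
              default=0.0)
    err = max(err, max((abs(q_xz.get(k, 0.0) - v) for k, v in p_xz.items()),
                       default=0.0))
    err = max(err, max((-v for v in q.values()), default=0.0))  # q >= 0
    return err

# Sanity check: the true joint distribution itself has zero violation.
p = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
p_xy = {(0, 0): 0.5, (1, 1): 0.5}
p_xz = {(0, 0): 0.5, (1, 1): 0.5}
assert primal_violation(p, p_xy, p_xz) == 0.0
```
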

Tests
In this section, we test the performance of BROJA_2PID on three types of instances. We describe the instances on which BROJA_2PID is tested, report the results, and finally compare the performance of other estimators on the same instances. The two estimators that we compare BROJA_2PID to, which produce reasonable results and of which we are aware, are COMPUTEUI and IBROJA. The first two types of instances are used as primitive validation tests, whereas the last type is used to evaluate the accuracy and efficiency of BROJA_2PID in computing the partial information decomposition. We used a computer server with an Intel(R) Core(TM) i7-4790K CPU (4 cores) and 16 GB of RAM to compute the PID for the instances. All computations of BROJA_2PID and COMPUTEUI were done using one core, whereas IBROJA used multiple cores.

Paradigmatic Gates
The following set of instances has been studied extensively throughout the literature, and its partial information decomposition is known [2]. Despite their simplicity, these instances exhibit the desired properties of shared or synergistic information.

Data
The first type of instances is based on the "gates" (RDN, UNQ, XOR, AND, RDNXOR, RDNUNQXOR, and XORAND) described in Table 1 of [1]. Each gate is given as a function (x, y, z) = G(W) which maps a (random) input W to a triple (x, y, z). The inputs are sampled uniformly at random, whereas, in Table 1 of [1], the inputs are independent and identically distributed.

Testing
All the gates are implemented as dictionaries, and pid() is called successively with different printing modes to compute them; this is coded in the script Testing/test_gates.py in the GitHub repository. The values of the partial information decomposition for all the gate distributions (when computed by pid()) were equal to the actual values up to a precision error of order 10^−9, and the slowest computation took less than a millisecond.

Comparison with Other Estimators
Both estimators, COMPUTEUI and IBROJA, produced values of the partial information decomposition for all the gate distributions equal to the actual values up to a precision error of order 10^−10, but the slowest computation took more than ten milliseconds for COMPUTEUI and 12 s for IBROJA.

COPY Gate
The COPY gate requires a large number of variables and constraints; see below for details. Thus, we used it to test the memory efficiency of the BROJA_2PID estimator. Since its decomposition is known, it also provides, to some extent, a validation of the correctness of the solution in large systems.

Data
The COPY gate maps (y, z), chosen uniformly at random, to the triplet (x, y, z) where x = (y, z). The overall size of the COPY distribution thus scales as |Y|^2 × |Z|^2, since x ranges over Y × Z. Proposition 18 in [1] gives the partial information decomposition of the COPY gate: since Y and Z are independent random variables, UI(X; Y\Z) = H(Y), UI(X; Z\Y) = H(Z), and SI(X; Y, Z) = 0.
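A minimal generator for the COPY distribution in the dictionary format of Section 3 (our sketch; the actual code in the test script may differ):

```python
def copy_gate(m, n):
    # COPY distribution: x = (y, z) with (y, z) uniform on [m] x [n].
    pr = 1.0 / (m * n)
    return {((y, z), y, z): pr for y in range(m) for z in range(n)}

d = copy_gate(3, 4)
assert len(d) == 3 * 4                     # support size is |Y| * |Z|
assert abs(sum(d.values()) - 1.0) < 1e-12  # a valid probability distribution
# Known decomposition (Proposition 18 in [1], Y and Z independent):
# UI(X;Y\Z) = H(Y) = log2(m), UI(X;Z\Y) = H(Z) = log2(n), SI = 0.
```
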

Testing
The COPY distribution is generated for different sizes of Y and Z, where Y = [m] and Z = [n] for m, n ∈ N\{0}. Then, pid() is called to compute the partial information decomposition for each pair m, n. Finally, the returndata dictionary is printed along with the running time of the BROJA_2PID estimator and the deviations of returndata['UIY'] and returndata['UIZ'] from H(Y) and H(Z), respectively. This process is implemented in Testing/test_large_copy.py.
The worst relative deviation was at most 10^−8 for any m, n ≤ 90.

Comparison with Other Estimators
The IBROJA estimator failed to give a solution for any instance, since the machine ran out of memory. The COMPUTEUI estimator could solve instances of size up to 2.5 × 10^7, but, for larger instances, the machine ran out of memory. COMPUTEUI was slower than BROJA_2PID by a factor of at least 40 and at most 113; see Figure 5 for a comparison.

Random Probability Distributions
This is the main set of instances with which we test the efficiency of the BROJA_2PID estimator. It has three subsets of instances, each one probing an aspect of efficiency when the estimator is used on large systems. This set of instances contains some hard distributions, in the sense that the optimal solution lies on the boundary of the feasible region of problem (1).

Data
The last type of instances consists of joint distributions of (X, Y, Z) sampled uniformly at random from the probability simplex. We have three different sets of joint distributions, depending on the sizes of X, Y, and Z.
Note that, in each set, instances are grouped according to the varying value, i.e., |Y|, |Z|, and s, respectively.
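Sampling a joint distribution uniformly from the probability simplex can be done by normalizing i.i.d. exponential weights (equivalently, drawing from a flat Dirichlet); a sketch of ours in the dictionary format of Section 3:

```python
import random

def random_pdf(nX, nY, nZ, seed=0):
    # Uniform sample from the probability simplex over X x Y x Z via
    # normalized i.i.d. exponential weights (a standard construction).
    rng = random.Random(seed)
    keys = [(x, y, z) for x in range(nX) for y in range(nY) for z in range(nZ)]
    w = [rng.expovariate(1.0) for _ in keys]
    s = sum(w)
    return {k: v / s for k, v in zip(keys, w)}

p = random_pdf(2, 2, 7)
assert len(p) == 2 * 2 * 7
assert abs(sum(p.values()) - 1.0) < 1e-12
assert min(p.values()) >= 0.0
```
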

Testing
The instances were generated using the Python script Testing/test_large_randompdf.py.
The latter script takes as command-line arguments |X|, |Y|, |Z| and the number of joint distributions of (X, Y, Z) the user wants to sample from the probability simplex. For example, to create the instances of Set 1 with |Z| = 7, the corresponding command line is python3 test_large_randompdf.py 2 2 7 500. The script outputs returndata along with the running time of the BROJA_2PID estimator for each distribution; finally, it prints the empirical averages, over all the distributions, of SI(X; Y, Z), UI(X; Y\Z), UI(X; Z\Y), CI(X; Y, Z), and of the running time of the BROJA_2PID estimator.
In the following, for each of the sets, we look at UI(X; Y\Z) to validate the solution, at the returndata['Num_err'] triplet to examine the quality of the solution, and at the running time to analyze the efficiency of the estimator. Validation. Sets 1 and 2 are mainly used to validate the solution of BROJA_2PID. For Set 1, when |Z| is considerably larger than |Y|, the amount of unique information that Y has about X is more likely to be small for any sampled joint distribution. Thus, for Set 1, the average UI(X; Y\Z) is expected to decrease as the size of Z increases. For Set 2, UI(X; Y\Z) is expected to increase as the size of Y increases, i.e., when |Y| is considerably larger than |Z|. BROJA_2PID shows such behavior of UI(X; Y\Z) on the instances of Sets 1 and 2 (see Figure 6). Quality. The estimator did well on most of the instances. The percentage of instances solved to optimality was at least 99% for each size in every set of instances. In Figure 7, we plot the successfully solved instances against the maximum value of the numerical error triplet returndata['Num_err'].
On the one hand, these plots show that, whenever an instance is solved successfully, the quality of the solution is good. On the other hand, we noticed that the duality gap, returndata['Num_err'][2], was very large whenever the Cone Programming solver failed to find an optimal solution for an instance, i.e., whenever the primal feasibility, the dual feasibility, or the duality gap was violated. In addition, even when BROJA_2PID fails to solve an instance to optimality, it still returns a solution. (BROJA_2PID raises an exception only if the conic optimization solver fails to return a solution.) Thus, these results reflect the reliability of the solution returned by BROJA_2PID. Efficiency. To test the efficiency of BROJA_2PID in terms of running time, we looked at Set 3, because Sets 1 and 2 are small-scale systems. Set 3 has a large input size, mimicking large-scale systems. Testing the Set 3 instances also reveals how the estimator empirically scales with the size of the input. Figure 8 shows that the running time of the BROJA_2PID estimator on large instances was below 50 min. Furthermore, the estimator has a scaling of |X| × |Y| × |Z|, so, on Set 3, it scales as N^3, where N is the size of the input for the sampled distributions such that |X| = |Y| = |Z| = N.

Comparison with Other Estimators
We compare the BROJA_2PID estimator with the COMPUTEUI and IBROJA estimators on instances of the same type as Set 3 for s ∈ {2, . . . , 17}.
ComputeUI: We ran COMPUTEUI with the default parameters, which are ε-distance from optimality 10^−7, maximum outer iterations 1000, and maximum inner iterations 1000; for more details, see [7]. The estimator COMPUTEUI was slower than BROJA_2PID on the instances of sizes |X| = |Y| = |Z| ≤ 12 and faster on the larger instances. For |X| = |Y| = |Z| ≤ 12, COMPUTEUI was slower than BROJA_2PID by a factor of at least 1.4 and at most 1330; see Figure 9 for the actual running times. For 13 ≤ |X| = |Y| = |Z| ≤ 17, COMPUTEUI was faster than BROJA_2PID by a factor of at least 3.2 and at most 39; see Figure 9 for the actual running times. The comparison shows that COMPUTEUI scales better than BROJA_2PID on large instances, whereas in the regime |X| = |Y| = |Z| ≤ 12, which is the one needed in practice, BROJA_2PID scales better than COMPUTEUI. Moreover, even though the optimality criterion of COMPUTEUI is 10^−7, the solutions of BROJA_2PID were closer to the optimum than those of COMPUTEUI by a magnitude of 10^−5, which suggests that BROJA_2PID produces more accurate solutions than COMPUTEUI. The test is implemented in Testing/test_from_file_broja_2pid_computeUI.py, where the distributions in the folder randompdfs/ are the inputs.

Ibroja:
The estimator IBROJA was slower than BROJA_2PID on all instances. For |X| = |Y| = |Z| ≤ 7, IBROJA was slower than BROJA_2PID by a factor of at least 206 and at most 6626. Note that the factor increased as |X| = |Y| = |Z| increased. We did not compute the instances of sizes |X| = |Y| = |Z| ≥ 8, since IBROJA started taking an immensely long time to obtain the solutions for these instances.
The solutions of BROJA_2PID were closer to the optimum than those of IBROJA by a magnitude of 10^−3 for instances with s = ... Note that the results of this comparison align with the claims made in [5] that first-order methods are not suitable for tackling this problem. The test is implemented in Testing/test_from_file_dit.py, where the distributions in the folder randompdfs/ are the inputs.

Cone Programming Model for Multivariate PID
Chicharro [8] introduced a multivariate PID measure using so-called tree-based decompositions. The measure is similar to the bivariate BROJA PID measure, yet it is not an extension of the BROJA PID measure. In this section, we show how to model some of the trivariate PID quantities using the exponential cone. Thus, with some modification, BROJA_2PID can be extended to compute some of the trivariate PID quantities. Note that, due to time constraints, we could not check whether the other trivariate PID quantities can also be fitted into the exponential cone.
Let S, X, Y, Z be random variables with finite range, where S is the target and X, Y, Z are the sources. Chicharro [8] defined the quantity of synergistic information that the sources X, Y, Z hold about the target S as:

CI(S; X, Y, Z) := max (MI(S; X, Y, Z) − MI(S′; X′, Y′, Z′))   (14)

where the maximum extends over the quadruples of random variables (S′, X′, Y′, Z′) with the same 12,13,14-marginals as (S, X, Y, Z), i.e., P(S′ = s, X′ = x) = P(S = s, X = x) for all s, x, P(S′ = s, Y′ = y) = P(S = s, Y = y) for all s, y, and P(S′ = s, Z′ = z) = P(S = s, Z = z) for all s, z. The objective function of (14), given the marginal conditions, is equal, up to a constant not depending on (S′, X′, Y′, Z′), to the conditional entropy H(S′ | X′, Y′, Z′). Thus, we find that (14) is equivalent to the following Convex Program:

minimize ∑_{s,x,y,z} q_{s,x,y,z} ln(q_{s,x,y,z}/q_{*,x,y,z})  over q ∈ R^{S×X×Y×Z}
subject to q_{s,x,*,*} = p_{s,x,*,*}  for all (s, x) ∈ S × X
           q_{s,*,y,*} = p_{s,*,y,*}  for all (s, y) ∈ S × Y
           q_{s,*,*,z} = p_{s,*,*,z}  for all (s, z) ∈ S × Z
           q_{s,x,y,z} ≥ 0  for all (s, x, y, z) ∈ S × X × Y × Z.
Proof. The proof follows in the same manner as that of Proposition 1.
Proof. The proof follows in the same manner as that of Proposition 2.
Chicharro [8] defined the quantity of unique information that the source X holds about the target S as: UI(S; X\Y, Z) = min MI(S′; X′, Y′, Z′) − min MI(S′; Y′, Z′), where both minima extend over the quadruples of random variables (S′, X′, Y′, Z′) with the same
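The pairwise marginal constraints fixed in (14) can be extracted from a joint distribution of (S, X, Y, Z) as follows (a sketch of ours, using the trivariate XOR as the example distribution):

```python
def pairwise_marginals(pmf):
    # The marginals p_{s,x,*,*}, p_{s,*,y,*}, p_{s,*,*,z} fixed in (14).
    m_sx, m_sy, m_sz = {}, {}, {}
    for (s, x, y, z), v in pmf.items():
        m_sx[(s, x)] = m_sx.get((s, x), 0.0) + v
        m_sy[(s, y)] = m_sy.get((s, y), 0.0) + v
        m_sz[(s, z)] = m_sz.get((s, z), 0.0) + v
    return m_sx, m_sy, m_sz

# Trivariate XOR: S = X xor Y xor Z with X, Y, Z uniform i.i.d.
p = {(x ^ y ^ z, x, y, z): 0.125
     for x in (0, 1) for y in (0, 1) for z in (0, 1)}
m_sx, m_sy, m_sz = pairwise_marginals(p)
# S is pairwise independent of each source, so every pairwise marginal is 1/4.
assert all(abs(v - 0.25) < 1e-12 for v in m_sx.values())
```
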