Entropic Matroids and Their Representation

This paper investigates entropic matroids, that is, matroids whose rank function is given as the Shannon entropy of random variables. In particular, we consider p-entropic matroids, for which the random variables each have support of cardinality p. We draw connections between such entropic matroids and secret-sharing matroids and show that entropic matroids are linear matroids when p=2,3 but not when p=9. Our results leave open the possibility for p-entropic matroids to be linear whenever p is prime, with particular cases proved here. Applications of entropic matroids to coding theory and cryptography are also discussed.


Introduction
Matroid theory generalizes the notion of independence and rank beyond vector spaces. In a graphical matroid, for example, the rank of a subset of edges is the size of a maximal acyclic subset of these edges, in analogy with the rank of a subset of vectors, which is the size of a maximal linearly independent subset. It is natural to ask whether such combinatorial structures can also be obtained from probabilistic notions of independence, based on random variables. In particular, the Shannon entropy measures dependencies between random variables and can be used to define a matroid rank function, as discussed below. One can then investigate how such entropic matroids relate to other matroids, in particular whether they admit linear representations as graphical matroids do. Before giving formal definitions of such entropic matroids, we give some general definitions for matroids.

Definitions
We recall a few standard definitions related to matroids; see, for example, Oxley [1]. A matroid is a pair M = (E, r), where the ground set E is a finite set (typically E = [m], m ∈ Z_+) and where the rank function r : 2^E → Z_+ satisfies, for all A, B ⊆ E:
1. r(A) ≤ |A|;
2. r(A) ≤ r(B) whenever A ⊆ B (monotonicity);
3. r(A ∪ B) + r(A ∩ B) ≤ r(A) + r(B) (submodularity).
The submodularity property can be interpreted as a diminishing return property: for every A ⊆ B and x ∈ E, r(A ∪ {x}) − r(A) ≥ r(B ∪ {x}) − r(B); that is, the larger the set, the smaller the increase in rank when adding a new element. Independent sets in a matroid are the subsets S ⊆ E such that r(S) = |S| and maximal independent sets are called bases, whereas minimal dependent sets are called circuits.
A matroid M = (E, r) is linear if there is a vector space V and a map f : E → V such that r(S) = rank( f (S)) for all S ⊆ E, where rank denotes the rank function of V, that is, rank( f (S)) = dim span(f(S)). We say that a matroid is F-representable if in addition, V can be chosen as a vector space over the field F.
Given a matroid M = (E, F), where F denotes its family of independent sets, a minor of M is a matroid that can be obtained from M by a finite sequence of the following two operations:
1. Deletion: Given a subset A ⊆ E, we define the matroid M \ A = (E \ A, {B ⊆ E \ A : B ∈ F}).
2. Contraction: Given an independent set A ∈ F, we define the matroid M/A = (E \ A, {B ⊆ E \ A : B ∪ A ∈ F}).
The dual M* = (E, r*) of a matroid M = (E, r) is defined by letting r*(A) = r(E \ A) + |A| − r(E) for all A ⊆ E. A matroid property is a dual property if M has the property if and only if M* does.
Theorem 1 (Woodall [2]). Being an F-representable matroid is a dual property, that is, M is F-representable if and only if M* is.
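The dual rank formula above can be checked directly on a small example. The following sketch (with U_{2,4} as an illustrative matroid, not taken from the paper) verifies that r* satisfies the rank axioms and that U_{2,4} is self-dual:

```python
from itertools import combinations

def subsets(E):
    """All subsets of E, as tuples."""
    for k in range(len(E) + 1):
        yield from combinations(E, k)

# Rank function of U_{2,4}: r(A) = min(|A|, 2).
E = (1, 2, 3, 4)
r = lambda A: min(len(A), 2)

# Dual rank: r*(A) = r(E \ A) + |A| - r(E).
def r_dual(A):
    comp = tuple(e for e in E if e not in A)
    return r(comp) + len(A) - r(E)

# Check the rank axioms (bounded, monotone, submodular) for r*.
for A in subsets(E):
    assert 0 <= r_dual(A) <= len(A)
    for B in subsets(E):
        if set(A) <= set(B):
            assert r_dual(A) <= r_dual(B)
        union = tuple(sorted(set(A) | set(B)))
        inter = tuple(sorted(set(A) & set(B)))
        assert r_dual(union) + r_dual(inter) <= r_dual(A) + r_dual(B)

# U_{2,4} happens to be self-dual: r* = r.
assert all(r_dual(A) == r(A) for A in subsets(E))
print("dual rank axioms verified; U_{2,4} is self-dual")
```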

Entropic Matroids
One may expect that matroids could also result from probabilistic structures. Perhaps the first possibility would be to define a matroid to be 'probabilistic' if its elements can be represented by random variables (with a joint distribution on some domain), such that a subset S is independent if the random variables indexed by S are mutually independent. This, however, does not necessarily give a matroid. For example, let X_1 and X_2 be independent random variables (for example, normally distributed) and let X_3 = X_1 + X_2. Taking the rank of a set to be the size of a largest mutually independent subset, we have r({X_1, X_3}) = r({X_2, X_3}) = r({X_3}) = 1, while r({X_1, X_2, X_3}) = 2, since {X_1, X_2} contains two independent random variables. So this violates the submodularity requirement.
On the other hand, it is well known that the entropy function satisfies the monotonicity and submodularity properties [3,4]. Namely, for a probability measure µ on a discrete set X, the entropy of µ in base q is defined by H(µ) = −∑_{x ∈ X} µ(x) log_q µ(x). For two random variables X and Y with values in X and Y, respectively, and with joint distribution µ, we define the conditional entropy H(X|Y) = −∑_{x, y} µ(x, y) log_q µ(x|y). In particular, we have the chain rule of entropy H(X|Y) = H(X, Y) − H(Y). We also define the Hamming distance of two vectors x and y as d(x, y) = |{1 ≤ i ≤ n : x_i ≠ y_i}| and the Hamming ball of radius r around x as B_r(x) = {y : d(x, y) ≤ r}.
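The chain rule stated above can be illustrated numerically. The following sketch (the joint distribution is an illustrative choice, not from the paper) computes H(X|Y) as H(X, Y) − H(Y):

```python
import math
from collections import defaultdict

def entropy(dist, q=2):
    """Entropy in base q of a distribution given as {outcome: probability}."""
    return -sum(p * math.log(p, q) for p in dist.values() if p > 0)

def marginal(joint, coords):
    """Marginal of a joint distribution on the given coordinates."""
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in coords)] += p
    return dict(m)

# Joint distribution of (X, Y): X a uniform bit, Y = X with probability 3/4.
joint = {(0, 0): 3/8, (0, 1): 1/8, (1, 0): 1/8, (1, 1): 3/8}

H_XY = entropy(joint)
H_Y = entropy(marginal(joint, [1]))
H_X_given_Y = H_XY - H_Y  # chain rule: H(X|Y) = H(X,Y) - H(Y)
print(round(H_X_given_Y, 6))
```

Here H(X|Y) equals the binary entropy of 1/4, as expected since Y differs from X with probability 1/4.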
Furthermore, for a probability measure µ of m random variables defined each on a domain X, that is, for a probability distribution µ on X^m, one can define the function r(S) = H(µ_S), S ⊆ [m], where µ_S is the marginal of µ on the coordinates in S. By choosing the base q for the entropy to be |X|, we also get that r(S) ≤ |S|, with equality for uniform measures. Therefore, the above r satisfies the three axioms of a rank function, with the exception that r is not necessarily integral. In fact, this defines a polymatroid (and r is also called a β-function [5]) and entropic polymatroids (i.e., polymatroids derived from such entropic β-functions) have been studied extensively in the literature; see References [6][7][8][9] and references therein. Using the Shannon entropy to study matroid structures already emerged in the works [10,11], which study the family of pairs (i, j) and K with i, j ∈ [m] and K ⊆ [m] such that X_i and X_j are conditionally independent given X_K, with the latter expressed in terms of the Shannon entropy as r(i, K) + r(j, K) − r(i, j, K) − r(K) = 0.
However, we can also investigate what happens if this function r is in fact integral. This is the object of study in this paper. We call a matroid M = ([m], r) q-entropic if there is a probability distribution µ on [q]^m such that r(S) = H(µ_S) for all S ⊆ [m], where µ_S is the marginal of µ on S and H is the Shannon entropy in base q.
Note that the entropy does not depend on the support of the random variables but only on their joint distribution. For this reason, the restriction that µ takes values in [q]^m is in fact equivalent to requiring that each random variable has a support of cardinality at most q. When working with the m underlying random variables X_1, . . . , X_m distributed according to µ, we write H(S) for H(µ_S), S ⊆ [m]. With the integrality constraint, the random variables representing a q-entropic matroid must be marginally either uniformly distributed or deterministic, each pair of random variables must be either independent or a deterministic function of each other, and so on. These represent therefore extremal dependencies. As discussed in Section 8, such distributions (with extremal dependencies) have recently emerged in the context of polarization theory and multi-user polar codes [12], which has motivated in part this paper. In Section 4, we also comment on the connection between entropic matroids and secret sharing from cryptography.
It is well known and easy to check that entropic matroids generalize linear matroids; see, for example, References [7,13]. For completeness, we recall the proof, making explicit the dependency on the field size.

Lemma 1. Let F be a finite field. If a matroid is F-representable, then it is |F|-entropic.
Proof. Let M be an F-representable matroid and A be a matrix in F |E|×n whose rows correspond to elements of E so that a subset of rows is linearly independent in F n if and only if the corresponding subset of E is independent in M. Let Y 1 , . . . , Y n be mutually independent and uniformly distributed random variables over F and let Y = (Y 1 , . . . , Y n ). Then the vector of random variables (X 1 , . . . , X |E| ) = A · Y satisfies that for any B ⊆ E, H({X i : i ∈ B}) = rank {A i : i ∈ B}. Thus the entropy function on X 1 , . . . , X |E| recovers the rank function of M and M is |F|-entropic.
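The construction in the proof of Lemma 1 can be checked computationally. The following sketch works over F = F_2 with an illustrative representation of U_{2,3} (the matrix A is our choice, not from the paper), and verifies that the entropy of A·Y equals the linear rank on every subset:

```python
import math
from itertools import product, combinations

def rank_f2(rows):
    """Rank of a list of F_2 vectors via Gaussian elimination."""
    rows = [list(r) for r in rows]
    if not rows:
        return 0
    rank, n = 0, len(rows[0])
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# A representation of U_{2,3} over F_2: three pairwise independent rows.
A = [(1, 0), (0, 1), (1, 1)]
n = 2

def H(B):
    """Entropy in base 2 of (X_i)_{i in B}, where X = A·Y and Y is uniform on F_2^n."""
    counts = {}
    for y in product((0, 1), repeat=n):
        x = tuple(sum(a * b for a, b in zip(A[i], y)) % 2 for i in B)
        counts[x] = counts.get(x, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Lemma 1: the entropy function coincides with the linear rank function.
for k in range(len(A) + 1):
    for B in combinations(range(len(A)), k):
        assert abs(H(B) - rank_f2([A[i] for i in B])) < 1e-9
print("entropy function equals rank function for all subsets")
```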
Our main goal throughout the remainder of this paper is to investigate whether entropic matroids are always representable over fields. As discussed in the next section, we will approach this question by checking whether the forbidden minors of representable matroids are entropic or not. This strategy is justified by the fact that for the Shannon entropy, entropic matroids are a minor-closed class, as we will show in Lemma 2.

Results
We prove that for every p, a matroid is p-entropic if and only if it is secret-sharing with a ground set of size p, which is equivalent to being the matroid of an almost affine code with alphabet size p. Furthermore, we prove that for every p, being p-entropic is closed under taking matroid minors.
We give alternative proofs that for p = 2 and p = 3, being p-entropic is equivalent to being F p -representable by examining known forbidden minor characterizations. We also make some partial progress towards proving the same for other primes p. In the final section of the paper, we mention some applications of entropic matroids in coding.

Further Related Literature
Matroid representations and forbidden minors were studied in Reference [14] for GF(3), References [15,16] for GF(4), and some results for general fields were obtained in References [17][18][19]. Linearly representable matroids are also intimately related to linear solutions to network coding problems, in particular in Reference [20], in which a network-constrained matroid enumeration algorithm is developed, as well as in Reference [21], which considers integer-valued polymatroids, and in References [22,23], which consider representable polymatroids. Matroid minors and the connection to the Zhang-Yeung inequality were discussed in Reference [24], which shows in particular that almost entropic matroids have infinitely many excluded minors. Matroids, secret sharing and linearity are also discussed in several papers as mentioned in part earlier. Reference [25] gave the first example of an access structure (i.e., the parties that can recover the secret from their share) induced by a matroid, namely the Vamos matroid, that is non-ideal (a measure of optimality of the secret share lengths); Reference [26] presented the first non-trivial lower bounds on the size of the domain of the shares for secret-sharing schemes realizing an access structure induced by the Vamos matroid, and this was later improved in Reference [27] using non-Shannon inequalities for the entropy function. As mentioned earlier, an important line of work is also dedicated to understanding the representation of entropic polymatroids for a fixed ground set cardinality [9], which is well understood for cardinality 2 and 3 and more complicated for larger cardinality, where non-Shannon inequalities emerge.

Minors of Entropic Matroids
In this section, we prove the following:

Lemma 2. Let M be an entropic matroid on random variables X_1, . . . , X_m with values in F_p, with entropy H and joint distribution µ. Then:
(i) For any A ⊆ {X_1, . . . , X_m}, the restriction of M to A is entropic.
(ii) For any X_i with H(X_i) = 1, M/{X_i} is entropic.
(iii) For any independent set A, M/A is entropic.
Proof. For each of the claims, we construct random variables and a probability distribution whose entropy agrees with the rank function of the matroid in question.
To prove (i), we consider the variable set A with the marginal distribution given by µ. Then H is integral on any subset of A, since it is integral on any subset of {X 1 , . . . , X m }. This implies (i).
To prove (ii), we consider two cases. If H(A ∪ {X_i}) = H(A) + 1 for every A ⊆ {X_1, . . . , X_m} \ {X_i}, then X_i is a coloop and the rank function of M/{X_i} coincides with that of the restriction of M to {X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_m}; the result then follows from (i).
Otherwise, we define a distribution on {X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_m} by fixing any value x for X_i with P[X_i = x] > 0 and considering the probability distribution obtained by conditioning on the event {X_i = x}. Now let A ⊆ {X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_m}. There are two cases. If there is no circuit C with X_i ∈ C ⊆ A ∪ {X_i}, then H(A, X_i) = H(A) + 1 and therefore H(X_i|A) = 1, so X_i and A are independent. In this case, H(A|X_i = x) = H(A), thus H agrees with the rank function of M/{X_i} on A.
If adding X_i to A creates a circuit C with X_i ∈ C ⊆ A ∪ {X_i}, then H(A, X_i) = H(A) and H(A|X_i) = H(A) − 1. Let X(A) denote the vector with components X_j, j ∈ A, and let Y = F_p^A denote the set of possible values of X(A).

Since H(C) = |C| − 1, the marginal distribution on C is uniform on a support of size p^(|C|−1); in particular, the variables in (A ∪ {X_i}) \ C are independent of X_i in the marginal distribution on A ∪ {X_i}, and thus the probabilities P[X(C \ {X_i}) = y | X_i = k] take only the values 0 and p^(−(|C|−2)). Since these probabilities add up to one, it follows that exactly p^(|C|−2) of them are non-zero, which yields H(C \ {X_i} | X_i = k) = |C| − 2 for every k ∈ F_p. This implies that H(A|X_i = k) does not depend on k; since H(A|X_i) = ∑_k P[X_i = k] H(A|X_i = k) = H(A) − 1, it follows that we have H(A|X_i = k) = H(A) − 1 for all summands. This implies that the entropy of the conditional distribution yields the entropic matroid M/{X_i} and this proves (ii). Finally, (iii) follows by applying (ii) repeatedly.
This lemma proves that the property of being an entropic matroid is closed under taking minors. This means that in order to show entropic matroids belong to a minor-closed class of matroids, it suffices to show that the forbidden minors of this class are not entropic.
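The contraction-by-conditioning step of the lemma can be illustrated on a small example. The sketch below (the even-weight code realizing U_{2,3} is our illustrative choice, not from the paper) conditions on X_0 = 0 and checks that the resulting entropies are the rank function of the contraction U_{1,2}:

```python
import math
from collections import defaultdict
from itertools import combinations

def H(dist, S):
    """Entropy (base 2) of the marginal of `dist` on coordinates S."""
    marg = defaultdict(float)
    for x, p in dist.items():
        marg[tuple(x[i] for i in S)] += p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

# Uniform distribution on the even-weight code in F_2^3: a 2-entropic
# realization of U_{2,3}.
mu = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}
for k in range(4):
    for S in combinations(range(3), k):
        assert abs(H(mu, S) - min(len(S), 2)) < 1e-9

# Contract element 0 as in the proof of Lemma 2: condition on X_0 = 0
# and drop the first coordinate.
cond = {x[1:]: p for x, p in mu.items() if x[0] == 0}
total = sum(cond.values())
cond = {x: p / total for x, p in cond.items()}

# The conditional entropies realize the contraction U_{2,3}/{0} = U_{1,2}.
for k in range(3):
    for S in combinations(range(2), k):
        assert abs(H(cond, S) - min(len(S), 1)) < 1e-9
print("conditioning on X_0 = 0 realizes the contraction")
```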

Secret-Sharing and Almost Affine Matroids
Secret-sharing matroids were introduced in Reference [28]. These matroids are motivated by the problem of secret-sharing in cryptography [29,30], which refers to distributing a secret among a collection of parties via secret shares, such that the secret can be reconstructed by combining a sufficient number (of possibly different types) of secret shares, while individual shares are of no use on their own.
We use the following definitions from Reference [25]. Let A ∈ S^(I×E) be a matrix, where S, I and E are finite sets. For i ∈ I, e ∈ E and Y ⊆ E \ {e}, we define n(i, e, Y) = {a_je : j ∈ I, a_jy = a_iy for all y ∈ Y}. Then A is a secret-sharing matrix if for every e ∈ E and Y ⊆ E \ {e}, either n(i, e, Y) = S for all i ∈ I or |n(i, e, Y)| = 1 for all i ∈ I. Any secret-sharing matrix induces a secret-sharing matroid with ground set E and rank function r(Y) given by the logarithm in base |S| of the number of distinct rows of the submatrix A[Y] formed by the columns indexed by Y. The interpretation is as follows. Suppose some row i ∈ I has been chosen in A but its value has been kept secret. Knowing A, one wishes to determine as much as possible about the values a_ie, e ∈ E, without knowing which row has been selected. If by some means one has been able to determine the values a_if for all f ∈ Y ⊆ E, then the possible values of a_ie for some e ∈ E \ Y, consistent with the available information, are precisely the members of n(i, e, Y) (and this set can be determined despite not knowing i).
Secret-sharing matroids were connected to entropy rank functions in Reference [31], as further discussed below. We now formally connect the two classes of matroids.

Lemma 3.
If a matroid is p-entropic, then it is a secret-sharing matroid with a ground set of size p.
Proof. Given a p-entropic matroid M with ground set E and rank (entropy) function H, we let A be the matrix whose rows are all vectors in Z_p^E which correspond to outcomes of positive probability in M. For every set Y of variables, A[Y] contains the possible outcomes of these variables. These outcomes are all equally likely and the number of distinct outcomes with positive probability is p^H(Y). This implies that to prove that M is a secret-sharing matroid, it suffices to prove that A is a secret-sharing matrix.

Let e ∈ E and Y ⊆ E \ {e}. Then n(i, e, Y) is the set of possible values of the random variable X_e associated with e when the variables in Y are fixed to their values in outcome i. But H(X_e|Y) ∈ {0, 1}; if H(X_e|Y) = 0, then X_e is determined by the values of Y and |n(i, e, Y)| = 1 for all i; if H(X_e|Y) = 1, then X_e is independent of the values of the variables in Y and thus n(i, e, Y) = Z_p. This proves that A is a secret-sharing matrix.
Note that this proof remains true for any p ∈ N ≥1 , that is, it does not require the ground set to be a field. The converse of Lemma 3 is true as well: every secret-sharing matroid is p-entropic for some p. This was observed in Reference [31] and we include a proof for completeness. Together, this observation and Lemma 3 provide an alternative characterization of entropic matroids as secret-sharing matroids.

Lemma 4.
Every secret-sharing matroid with ground set S is |S|-entropic.
Proof. Let M be a secret-sharing matroid and A a secret-sharing matrix inducing M. Without loss of generality, we may assume that A does not contain two identical rows, since this does not affect the structure of the matroid. The definition of secret-sharing matroids implies that the number of rows of A is a power |S|^r of |S|. We define a probability distribution on the set of random variables {X_e : e ∈ E} by setting the probability that (X_e)_{e∈E} = a as |S|^(−r) for every row a of A.

We proceed by induction on |E \ Y| to show that H(Y) (with the Shannon entropy in base |S|) is integral for every Y ⊆ E and, moreover, that the resulting probability distribution on Y is the uniform distribution on the distinct rows of A[Y]. This is clearly true for Y = E, since H(E) = r. Let Y ⊂ E and let e ∈ E \ Y; then, by the induction hypothesis, H(Y ∪ {e}) = k ∈ N. The matrix A[Y ∪ {e}] has |S|^k distinct rows and each distinct row has the same probability |S|^(−k). If H(X_e|Y) = 0, then H(Y) = k and distinct rows in A[Y ∪ {e}] correspond to distinct rows of A[Y], and thus the distribution of the variables in Y is the same as for the variables of Y ∪ {e}. Therefore, we may assume that fixing the values of the variables in Y does not always determine X_e. This means that |n(i, e, Y)| = |S| for all i. In particular, every distinct row of A[Y] gives rise to |S| distinct rows in A[Y ∪ {e}] and thus A[Y] has |S|^(k−1) distinct rows. Each distinct row has the same multiplicity |S|^(r−k) in A[Y ∪ {e}] by the induction hypothesis and thus each distinct row of A[Y] has multiplicity |S|^(r−k+1). Now the resulting distribution of the variables in Y is a uniform distribution with |S|^(k−1) distinct outcomes; therefore, H(Y) = k − 1. Clearly, r_M(Y) = k − 1 and therefore this induction allows us to conclude that the rank in M coincides with the entropy of the constructed distribution. This implies the result.
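The construction in Lemma 4 can be checked on a small example. The following sketch takes the rows of A to be the codewords of an illustrative ternary linear code (our choice, not from the paper), verifies the secret-sharing property and checks that the induced rank function is integral:

```python
import math
from itertools import combinations

# Rows of A: the nine codewords (u, v, u + v) of a ternary linear code;
# alphabet S = F_3 = {0, 1, 2}, column set E = {0, 1, 2}.
A = sorted((u, v, (u + v) % 3) for u in range(3) for v in range(3))
E = (0, 1, 2)

def n_set(i, e, Y):
    """Values a_je over rows j agreeing with row i on the columns in Y."""
    return {row[e] for row in A if all(row[y] == A[i][y] for y in Y)}

# Secret-sharing property: every n(i, e, Y) has size 1 or |S| = 3.
for e in E:
    for size in range(len(E)):
        for Y in combinations([f for f in E if f != e], size):
            assert all(len(n_set(i, e, Y)) in (1, 3) for i in range(len(A)))

def r(Y):
    """log_3 of the number of distinct rows of A[Y]."""
    return math.log(len({tuple(row[y] for y in Y) for row in A}), 3)

# The rank is integral on every subset, as Lemma 4 asserts.
for k in range(len(E) + 1):
    for Y in combinations(E, k):
        assert abs(r(Y) - round(r(Y))) < 1e-9
print("A is a secret-sharing matrix and its rank function is integral")
```

The induced matroid here is U_{2,3}: any two columns determine the third.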
Seymour [25] proved that the Vamos matroid is not a secret sharing matroid. This implies that it is not an entropic matroid for any p.
Moreover, there is a secret-sharing matroid which is not representable over the corresponding field (with |S| elements), discovered by Simonis and Ashikhmin [32]. This example is the non-Pappus matroid, shown in Figure 1. This matroid has nine elements {1, . . . , 9} as its ground set E and each X ⊆ E has rank min(|X|, 3), with the exception of the eight 3-element sets shown as colored lines, which each have rank 2. Pappus' theorem proves that this matroid is not representable over any field. Simonis and Ashikhmin [32] show that the row space of a suitable matrix, given in Reference [32], is a secret-sharing matrix, where each entry of the matrix is considered as an element of F_9 = F_(3^2). They introduce another definition of entropic matroids via codes: a code (subset) C ⊆ S^E is almost affine if r(Y) := log_|S|(|C_Y|) ∈ N_0 for all Y ⊆ E, where C_Y denotes the projection of C to the variables in Y. The corresponding matroid M with ground set E and rank function r is called an almost affine matroid. It is not hard to see that this definition coincides with secret-sharing matroids by using the codewords in C as the rows of the secret-sharing matrix A and vice versa. These results show that not all entropic matroids are representable, by giving a 9-entropic matroid which is not representable over any field.

The Case p = 2
An F 2 -representable matroid is called binary. The goal of this section is to prove the following.

Theorem 2. Every 2-entropic matroid is binary.
To prove this, we use the characterization of binary matroids proved by Tutte [33] stating that a matroid is binary if and only if it has no U 2,4 -minor. U 2,4 is the uniform matroid of rank two on four elements: E = [4] and F consists of all subsets of E of cardinality at most two. Using Tutte's characterization, the theorem follows from the next lemma.

Lemma 5. U 2,4 is not 2-entropic.
Proof. Suppose for a contradiction that µ is a probability distribution on four random variables X_1, . . . , X_4 whose entropy is the rank function of U_{2,4}. Then H(X_i) = 1 for all i and H(X_i, X_j) = 2 for all i ≠ j; furthermore, H(X_1, X_2, X_3, X_4) = 2. This implies that P[X_i = a, X_j = b] = 1/4 for all i ≠ j and a, b ∈ F_2, because the marginal distribution of X_i and X_j has to be the product of two independent Ber(1/2) distributions to achieve an entropy of two. Furthermore, H(X_i, X_j | X_k, X_l) = 0 for {i, j, k, l} = [4] by the chain rule and therefore P[X_1 = a, X_2 = b, X_3 = c, X_4 = d] ∈ {0, 1/4} for all a, b, c, d. Without loss of generality, we may assume that P[X_1 = 0, X_2 = 0, X_3 = 0, X_4 = 0] = 1/4; but then every other event in which at least two different variables X_i and X_j are zero must have probability zero, since P[X_i = 0, X_j = 0] = 1/4. Since P[X_i = 0, X_j = 1] = 1/4, it follows that all four outcomes with exactly three ones have probability 1/4. Together with the all-zero outcome, these probabilities add up to 5/4 > 1, a contradiction.
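Lemma 5 can also be confirmed by exhaustive search: a 2-entropic realization of U_{2,4} would have to be uniform on four outcomes in {0,1}^4 such that every pair of coordinates takes all four values. The sketch below (our own verification, not from the paper) checks all 4-point supports:

```python
from itertools import combinations, product

outcomes = list(product((0, 1), repeat=4))

def realizes_U24(support):
    # Uniform distribution on `support` must make every pair of coordinates
    # uniform on {0,1}^2, i.e., the pair projection must be a bijection.
    for i, j in combinations(range(4), 2):
        if len({(x[i], x[j]) for x in support}) != 4:
            return False
    return True

# H(X_1,...,X_4) = 2 forces exactly four equiprobable outcomes; no choice
# of four outcomes realizes U_{2,4}.
assert not any(realizes_U24(S) for S in combinations(outcomes, 4))
print("no 4-point support in {0,1}^4 realizes U_{2,4}")
```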

The Case p = 3
An F_3-representable matroid is called ternary. The following structure theorem has been proved independently by Seymour [34] and Bixby [35], who attributed it to Reid.

Theorem 3. A matroid is ternary if and only if it has no minor isomorphic to U_{2,5}, U_{3,5}, the Fano plane F_7 or its dual F_7*.

The Fano plane, shown in Figure 2, has a ground set E = [7] and can be represented over F_2. By Theorem 3, it suffices to show that none of these four matroids is 3-entropic.

Lemma 6. U_{2,5} is not 3-entropic.

Proof. Suppose for a contradiction that X_1, . . . , X_5 are random variables whose entropy is the rank function of U_{2,5}. Then H(X_1, . . . , X_5) = 2 and H(X_i, X_j) = 2 for all i ≠ j, so every pair of variables determines all the others and P[X = x] ∈ {0, 1/9}. As in the proof for U_{2,4}, we may assume that P[X = 0] = 1/9; but then any other outcome with at least two zeros must have probability 0. For each i, we have P[X_i = 0] = 1/3, so besides the all-zero outcome there are two further outcomes with X_i = 0, and each of these has no other zero coordinate. Over the five coordinates, this requires 5 × 2 = 10 distinct outcomes in addition to the all-zero one, whereas only eight of the nine outcomes remain, a contradiction.

Lemma 7. U_{3,5} is not 3-entropic.
Proof. Every three distinct variables are independent and they determine the other two variables. It follows that, for every outcome, its probability is either zero or 1/27. There are 3^5 = 243 possible outcomes and 27 of them occur with positive probability. Each of those 27 must differ from the others in at least three places, because if two outcomes agree in three positions, the other two positions are determined and thus the outcomes are equal. This means that the Hamming balls of radius 1 around the outcomes with positive probability are disjoint. Each of these Hamming balls contains 11 elements: the outcome with positive probability and the outcomes in which one variable is flipped to one of the two other possible values. Therefore, we would need at least 27 × 11 = 297 distinct outcomes, exceeding 243, a contradiction.
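The sphere-packing count in this proof is small enough to spell out explicitly (a sketch of our own, mirroring the argument above):

```python
n, q, M = 5, 3, 27  # word length, alphabet size, required number of outcomes

# A Hamming ball of radius 1 in F_3^5 contains the center plus the
# n * (q - 1) words obtained by changing exactly one coordinate.
ball = 1 + n * (q - 1)
assert ball == 11

# 27 pairwise disjoint balls would need M * ball <= q^n points,
# but 27 * 11 = 297 > 243 = 3^5.
assert M * ball > q ** n
print(f"{M} x {ball} = {M * ball} > {q ** n}: the balls cannot be disjoint")
```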

Lemma 8. The Fano plane is not 3-entropic.
Proof. Suppose for a contradiction that the Fano plane is 3-entropic and that X = {X 1 , . . . , X 7 } is a set of random variables whose entropy corresponds to their rank in the Fano matroid as shown in Figure 2. Since the maximum size of an independent set in the Fano matroid is three, any three independent variables determine the values of all the others; in particular, there are at most 27 outcomes with positive probability, which we denote by their values on the independent set X 1 , X 2 , X 3 . Since H(X 1 , X 2 , X 3 ) = 3, each of these outcomes has probability 1 27 , whereas all other outcomes have probability zero. It follows that we have a map f : F 3 3 → F 4 3 mapping the values on X 1 , X 2 , X 3 to the values on X 4 , . . . , X 7 , where X 2 and X 3 determine X 7 , X 1 and X 2 determine X 5 and X 3 and X 1 determine X 6 but every change of one of X 1 , X 2 , X 3 must change X 4 .
We consider the set of nine assignments of X 1 , X 2 , X 3 for which X 4 = 0. If every two of these have pairwise distance at least three, we can only have three distinct assignments. This implies that we may assume that there are two assignments with distance two. Furthermore, if we fix any two digits, exactly one choice is valid for the remaining digit. Therefore, up to isomorphism (exchanging symbols), the set looks as follows: {000, 012, 021, 102, 111, 120, 201, 210, 222}; and thus X 4 = X 1 + X 2 + X 3 .
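The normalized nine-element set claimed above can be verified directly; the following sketch (our own check) confirms that it is exactly the solution set of x_1 + x_2 + x_3 = 0 over F_3 and that fixing two digits leaves at most one valid third digit:

```python
# The nine assignments with X_4 = 0, as normalized in the proof.
S = {"000", "012", "021", "102", "111", "120", "201", "210", "222"}

# They are exactly the solutions of x_1 + x_2 + x_3 = 0 over F_3 ...
plane = {f"{a}{b}{c}" for a in range(3) for b in range(3)
         for c in range(3) if (a + b + c) % 3 == 0}
assert S == plane

# ... and any two distinct assignments differ in at least two positions,
# consistent with "fixing two digits determines the third".
assert all(sum(a != b for a, b in zip(x, y)) >= 2
           for x in S for y in S if x != y)
print("the normalized set is the plane x_1 + x_2 + x_3 = 0")
```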
The random variables X_2, X_3, X_4 determine X_5, X_6, X_7 and X_1. In particular, both of the pairs (X_2, X_3) and (X_1, X_4) = (X_1, X_1 + X_2 + X_3) determine X_7. Given X_1 and X_4, only X_2 + X_3 = X_4 − X_1 is known, so X_7 must be a function of X_2 + X_3 alone; since H(X_7) = 1 and X_2 + X_3 is uniform, this function is a bijection. Analogously, X_5 is a bijective function of X_1 + X_2 and X_6 is a bijective function of X_1 + X_3. Now {X_5, X_6, X_7} has rank two, so X_5 and X_6 determine X_7; that is, X_1 + X_2 and X_1 + X_3 determine X_2 + X_3. But (X_1 + X_2) + (X_1 + X_3) = 2X_1 + X_2 + X_3 and 2 ≠ 0 in F_3, so for fixed values of X_1 + X_2 and X_1 + X_3, the sum X_2 + X_3 still takes all three values as X_1 varies, a contradiction.
The above proof actually shows that the Fano plane is not p-entropic for any p > 2, which gives an alternative proof that it is not F_p-representable for p > 2 either.

We now consider the dual F_7* of the Fano plane, whose rank function is r*(A) = r(E \ A) + |A| − 3. Every 4-element subset of the Fano plane has rank 3, so r*(A) = |A| for every 3-element set A. This shows that every 3-element set is independent in F_7*; thus its circuits are exactly the complements of the three-element circuits of the Fano plane. To give a better understanding of these matroids, we expanded the symmetrical representation of F_7 given in Reference [36] and shown in Figure 3a to F_7*. The result is shown in Figure 3b. Each color connects the elements of a circuit in one figure and the corresponding circuit given by its complement in the other figure.

Lemma 9. F_7* is not 3-entropic.

Proof. Suppose for a contradiction that X = (X_1, . . . , X_7) is a vector of random variables whose entropy coincides with the rank function of F_7*. Since H(X_2, X_3, X_4, X_5) = 4 and H(X) = 4, we have P[X = x] ∈ {0, 1/81} for all x ∈ F_3^7. We refer to the events with positive probability as outcomes. By permuting the symbols, we may assume that 0000000 is a possible outcome. We consider the other outcomes of (X_1, X_6, X_7) for X_2 = 0. No two of these outcomes can have distance one, because {X_1, X_2, X_6, X_7} is a circuit; so for fixed X_2, any two distinct possible outcomes must have distance at least two on their restriction to (X_1, X_6, X_7). In the proof of the previous lemma, we have already shown that, by switching digits, we may assume that this set of outcomes is {000, 012, 021, 102, 111, 120, 201, 210, 222}. As shown in Figure 4, this also determines the other two sets (but not necessarily which of them is which). This shows that X_1 + X_6 + X_7 is sufficient to determine X_2 and vice versa; by flipping symbols 1 and 2 for X_2, we may assume that X_1 + X_6 + X_7 = X_2.
We now fix X_3. Then X_4 is determined by either X_1, X_2 = X_1 + X_6 + X_7 or X_6, X_7, and thus changing X_1, or adding k to X_6 and subtracting it from X_7, does not change X_4. This implies that X_4 depends only on X_6 + X_7 (and X_3) and thus H(X_3, X_4, X_6 + X_7) = 2. Analogously, H(X_3, X_5, X_1 + X_7) = 2 and H(X_4, X_5, X_1 + X_6) = 2. Therefore, X_3, X_4 and X_5 determine (X_6 + X_7) + (X_1 + X_7) + (X_1 + X_6) = 2(X_1 + X_6 + X_7) = 2X_2 and, since 2 ≠ 0 in F_3, this shows that H(X_2, X_3, X_4, X_5) = 3, contradicting the assumption that X had the entropy function given by the rank in F_7*.

Combining these four lemmas with the characterization of ternary matroids, we have proved the following theorem (the interesting part being the only if part).

Theorem 4. A matroid is 3-entropic if and only if it is ternary.

Comments for General Primes p
For ground sets of arbitrary size p, being representable is a stronger requirement than being p-entropic, as the example of Simonis and Ashikhmin [32] of the non-Pappus matroid (see Figure 1) shows. However, no such counterexamples are known when p is prime.
In this section, we show that for primes p, every p-entropic matroid of rank at most two is linear; that is, if M is an entropic matroid with ground set E and H(E) ≤ 2, then M is linear. If H(E) < 2, this is true since any basis has at most one element. Furthermore, we may assume that every X ∈ E satisfies H(X) = 1, for otherwise X is deterministic and is represented by the zero vector in every linear representation.

Lemma 10. Let M be a p-entropic matroid and let X, Y ∈ E be distinct elements with H({X, Y}) = 1 (that is, X and Y are parallel). Then M is F_p-representable if and only if M \ {X} is.

Proof. If M is F_p-representable, then so is M \ {X}, since being F_p-representable is a minor-closed property. Conversely, suppose that M \ {X} is representable, let f : E \ {X} → V be a representation and let g : E → V be defined by g(Z) = f(Z) for Z ≠ X and g(X) = f(Y). Let S ⊆ E. If X ∉ S, then dim(span(g(S))) = dim(span(f(S))) = H(S). If X ∈ S but Y ∉ S, then dim(span(g(S))) = dim(span(f((S \ {X}) ∪ {Y}))) = H((S \ {X}) ∪ {Y}) = H(S), since X and Y are parallel. If X, Y ∈ S, then dim(span(g(S))) = dim(span(f(S \ {X}))) = H(S \ {X}) = H(S), by applying submodularity to the sets {X, Y} and S \ {X}. This proves that g is an F_p-representation of M.
Consider the p + 1 vectors (0, 1) and (1, k) for k ∈ F_p in F_p^2. Each pair of these p + 1 vectors is linearly independent and thus a basis of F_p^2, so they represent U_{2,p+1}. The following lemma shows that any larger uniform matroid of rank two is neither p-entropic nor F_p-representable.

Lemma 11. U_{2,p+2} is not p-entropic.
Proof. Suppose not and let C denote the set of possible outcomes for a probability distribution on p + 2 variables representing U 2,p+2 . By changing symbols, we may assume that (0, . . . , 0) is a possible outcome. Furthermore, there are p 2 outcomes and hence p of them begin with a zero. These p outcomes have the same value at the first coordinate X 1 but all other values are distinct (i.e., each X i for i > 1 takes all of its p possible values exactly once among these p outcomes, including value zero for outcome (0, . . . , 0)). Therefore, we can simultaneously change the other symbols so that these p outcomes become (0, 0, . . . , 0), (0, 1, . . . , 1), (0, 2, . . . , 2), . . . , (0, p − 1, . . . , p − 1). But then any other outcome not starting with zero satisfies that X 2 , . . . , X p+2 all take different values in Z p . Since there are only p values but p + 1 variables, this is a contradiction.
This shows that the line (rank-two uniform) matroids, which are among the forbidden minors of binary and ternary matroids, are p-entropic if and only if they are F_p-representable.
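The p + 1 pairwise independent vectors in F_p^2 mentioned above can be checked for a small prime. The sketch below (with p = 5 as an illustrative choice) verifies that every pair is a basis and that no further vector can be added, consistent with U_{2,p+2} not being F_p-representable:

```python
from itertools import combinations, product

p = 5  # an illustrative small prime

# The p + 1 vectors (0,1) and (1,k), k in F_p, in F_p^2.
vectors = [(0, 1)] + [(1, k) for k in range(p)]

def independent(u, v, p):
    """Two vectors of F_p^2 are linearly independent iff det(u, v) != 0 mod p."""
    return (u[0] * v[1] - u[1] * v[0]) % p != 0

# Each pair is a basis of F_p^2, so the vectors represent U_{2,p+1}.
assert all(independent(u, v, p) for u, v in combinations(vectors, 2))

# No vector can be added: every nonzero vector of F_p^2 is parallel
# to one of them.
for v in product(range(p), repeat=2):
    if v != (0, 0):
        assert any(not independent(u, v, p) for u in vectors)
print(f"{p + 1} pairwise independent vectors in F_{p}^2 represent U_2,{p + 1}")
```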

Application: Entropic Matroids in Coding
We recall here a result proved in Reference [12] that makes entropic matroids emerge in a probabilistic context and which gives further motivation for studying entropic matroids. The result gives in particular a rate-optimal code for compressing correlated sources, similarly to the channel counterpart developed in Reference [37].
Let X^n = (X_1, . . . , X_n) be an i.i.d. sequence of discrete random variables taking values in X^m. That is, X^n is an m × n random matrix with i.i.d. columns of distribution µ on X^m. One can assume that the support of X is finite (countable supports can be handled with truncation arguments) and, to further simplify, we assume that X is binary, identifying each element with an element of the binary field, that is, X = GF(2).
Due to the i.i.d. nature of the sequence, the entropy of X^n is the sum of the entropies H(µ) of its columns, that is, H(X^n) = nH(µ).
The next result shows that it is possible to transform the sequence X n with an invertible map that extracts the entropy in subsets of the components. In words, the transformation takes the i.i.d. vectors under an arbitrary µ to a sequence of distributions that correspond in the limit to entropic matroids.
Theorem 5 (Abbe [12]). Let m be a positive integer, n be a power of 2 and X^n be an m × n random matrix with i.i.d. columns of distribution µ on F_2^m. Let Y^n = X^n G_n over F_2, where G_n = (1 0; 1 1)^(⊗ log_2 n) is the polarization transform. Then, as n → ∞, for all but a vanishing fraction of indices i ∈ [n], the polymatroid defined by S ↦ H(Y_i[S] | Y^(i−1)), S ⊆ [m], approaches a matroid, that is, this function tends to integral values on all subsets.

In other words, one starts with an i.i.d. sequence of random vectors under a distribution µ that defines an entropic polymatroid [m] ⊇ S ↦ H(S) and, after the transformation G_n, one obtains a sequence of random vectors which is no longer i.i.d. but where each random vector given the past defines an entropic matroid in the limit. Having a matroid structure is of course much easier to handle for compression purposes: one simply has to pick a basis for each matroid, store the components in that basis, and the other components are fully dependent on these, so they can be recovered without being stored. Of course, in practice n is large but finite and each random vector defines a polymatroid that is close to a matroid, but a continuity argument allows one to show that the components outside of the bases can still be recovered, though only with high probability. Since a compression code is allowed to fail with a low probability of error, this is not an issue. Understanding the structure of these entropic matroids then allows one to better understand how the stored components can be allocated over the different components; see Reference [12] for further details.
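The polarization phenomenon can be observed numerically even for small n. The following sketch (our own illustration with m = 1, a Ber(0.11) source and n = 8; all parameter choices are assumptions for the example) computes the conditional entropies H(Y_i | Y^(i−1)) exactly and shows them spreading towards 0 and 1, while their sum stays equal to nH(µ) since G_n is invertible:

```python
import math
from itertools import product
from collections import defaultdict

def kron(A, B):
    """Kronecker product of 0/1 matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

G = [[1, 0], [1, 1]]
G8 = kron(kron(G, G), G)  # the 8 x 8 polarization transform over F_2

p, n = 0.11, 8  # i.i.d. Ber(p) source, so H(mu) is about 0.5 bit per symbol

# Joint distribution of Y = X * G8 (mod 2) for X with i.i.d. Ber(p) entries.
joint = defaultdict(float)
for x in product((0, 1), repeat=n):
    y = tuple(sum(x[i] * G8[i][j] for i in range(n)) % 2 for j in range(n))
    pr = 1.0
    for b in x:
        pr *= p if b else 1 - p
    joint[y] += pr

def H_prefix(k):
    """Entropy of (Y_1, ..., Y_k)."""
    marg = defaultdict(float)
    for y, pr in joint.items():
        marg[y[:k]] += pr
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# Conditional entropies H(Y_i | Y^{i-1}): these polarize towards 0 and 1.
cond = [H_prefix(i + 1) - H_prefix(i) for i in range(n)]
print([round(h, 3) for h in cond])
```

By the chain rule and invertibility of G8, the values in `cond` sum to 8·H(0.11), while individually they are already noticeably closer to 0 or 1 than the per-symbol entropy.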