Article

Generalized Index Coding Problem and Discrete Polymatroids †

by
Anoop Thomas
1 and
Balaji Sundar Rajan
2,*
1
School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Odisha 752050, India
2
Department of Electrical Communication Engineering, Indian Institute of Science Bangalore, Bangalore 560012, India
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in IEEE International Symposium on Information Theory, Aachen, Germany, 2017
Entropy 2020, 22(6), 646; https://doi.org/10.3390/e22060646
Submission received: 20 April 2020 / Revised: 5 June 2020 / Accepted: 7 June 2020 / Published: 10 June 2020

Abstract:
The connections between index coding and matroid theory have been well studied in the recent past. Index coding solutions were first connected to the multi-linear representation of matroids. For vector linear index codes, discrete polymatroids, which can be viewed as a generalization of matroids, were used. The index coding problem has been generalized recently to accommodate receivers that demand functions of messages and possess functions of messages. In this work, we explore the connections between generalized index coding and discrete polymatroids. The conditions that need to be satisfied by a representable discrete polymatroid for a generalized index coding problem to have a vector linear solution are established. From a discrete polymatroid, an index coding problem with coded side information is constructed and it is shown that if the index coding problem has a certain optimal length solution then the discrete polymatroid is representable. If the generalized index coding problem is constructed from a matroid, it is shown that the index coding problem has a binary scalar linear solution of optimal length if and only if the matroid is binary representable.

1. Introduction

The broadcast nature of the wireless medium is utilized by many applications such as multimedia content delivery, audio and video on-demand and ad-hoc wireless networking. The index coding problem introduced by Birk and Kol [1] aims to increase the throughput of wireless networks. The model considered in [1] involves a source that possesses a set of messages and a set of receivers that demand messages. Each receiver knows a proper subset of messages, which is referred to as the side information. The source also knows the side information available to the receivers. It uses this knowledge to develop proper encoding techniques to satisfy the demands of the receivers at an increased throughput. The source needs to transmit functions of messages to ensure that the receivers are able to decode their demanded messages. An index code is an encoding scheme developed by the source to satisfy all the receivers. An encoding scheme with the minimum number of transmissions that enables all the receivers to decode their demanded messages is referred to as an optimal index code.
Bar-Yossef et al. [2] studied a special case of the index coding problem and found that the length of the optimal linear index code is equal to the minrank of a related graph. Graph theory techniques were used to find optimal index codes for a certain class of index coding problems in [3,4]. The case in which the side information can be represented by a special structure, referred to as interlinked cycles, was studied in [5], and optimal codes were constructed for them in [6].
An instance of the conventional index coding problem involves a source that possesses all the messages and a set of receivers. Each receiver possesses a subset of messages called the side information or the Has-set and demands another subset of messages called the Want-set. The wireless broadcast channel is assumed to be noiseless. The source is aware of the messages possessed by each receiver and it aims to reduce the number of transmissions required to satisfy the demands of all the receivers.
The problem of index coding has been extended in many directions. The problem of index coding with restricted information was introduced in [7]. In the index coding with restricted information problem, for each receiver, there is a certain subset of messages that the receiver should not be able to decode, referred to as restricted messages. The source has to design encoding schemes that satisfy the demands of all the receivers while ensuring that no receiver will be able to decode its restricted messages. The problem of index coding with erroneous transmissions was studied by Dau et al. [8]. The problem of finding the minimum length index code that enables all the receivers to correct a specific number of errors is addressed. Error-correction is achieved by using extra transmissions. In [8], only the transmitted symbols were error prone. This was extended in [9], where the side information possessed by the receivers is also error prone. In this paper, we consider another extension to the index coding problem referred to as the Generalized Index Coding (GIC) problem. The generalized index coding problem is a special case of the functional index coding problem introduced in [10]. In functional index coding, the receivers and the source possess functions of messages rather than the actual messages. The generalized index coding problem is a special case of functional index coding when the functions are restricted to be linear [11].
In a functional index coding problem, the Has-set and the Want-set may contain functions of messages rather than subsets of messages. Note that the conventional index coding problem is a special case of the functional index coding problem. The problem with the Has-sets being linear combinations of messages was studied in [11,12], where it was called index coding with coded side information. This was motivated by the fact that certain clients may fail to receive some coded transmissions, possibly due to a power outage. The clients will now possess a few coded transmissions as side information and the new problem is an index coding problem with coded side information. Dai et al. [13] considered both Has-sets and Want-sets to be linear combinations of the messages and the corresponding index coding problem is referred to as the generalized index coding problem. For the generalized index coding problem, a generalized minrank parameter, which gives the optimal length for linear encodings, was found in [14]. The paper also considers error correcting index codes for generalized index coding problems and obtains bounds on the lengths of optimal index codes. For some specific classes of generalized index coding problems, optimal error correcting index codes are found in [15].
Consider the scenario with one source and four receivers R 1 , R 2 , R 3 and R 4 . The source node possesses four packets X 1 , X 2 , X 3 and X 4 . Receiver R i wants packet X i for i = 1 , … , 4 . The packets in the Has-sets of the receivers are as follows: R 1 has packet X 2 , R 2 has packet X 1 , R 3 has packet X 4 and R 4 has packet X 3 . In the classical index coding problem, the source has to transmit two coded packets X 1 + X 2 and X 3 + X 4 to satisfy all receivers. The transmissions are over an erasure broadcast channel. Suppose receivers R 1 and R 2 fail to receive X 1 + X 2 but receive X 3 + X 4 . Similarly, receivers R 3 and R 4 fail to receive X 3 + X 4 but receive X 1 + X 2 . At this point, if only classical index coding is considered, the source needs to transmit the two coded packets again. However, if we consider the coded packets available to the receivers then the source only needs to transmit X 1 + X 2 + X 3 + X 4 to satisfy all the receivers. Hence, by using generalized index coding, the source is able to save one transmission.
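The saving can be checked mechanically. The short script below (a sketch of ours, not part of the paper) runs over all message assignments in F 2 and confirms that, after the erasures described above, the single transmission X 1 + X 2 + X 3 + X 4 lets every receiver recover its demanded packet from its side information; the decoding expressions in the comments are the obvious ones.

```python
from itertools import product

for x1, x2, x3, x4 in product((0, 1), repeat=4):
    s = (x1 + x2 + x3 + x4) % 2              # the single new transmission
    # R1 knows x2 and the surviving packet x3 + x4
    assert (s + x2 + x3 + x4) % 2 == x1
    # R2 knows x1 and the surviving packet x3 + x4
    assert (s + x1 + x3 + x4) % 2 == x2
    # R3 knows x4 and the surviving packet x1 + x2
    assert (s + x4 + x1 + x2) % 2 == x3
    # R4 knows x3 and the surviving packet x1 + x2
    assert (s + x3 + x1 + x2) % 2 == x4
print("one extra transmission satisfies all four receivers")
```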
Index coding also finds application in the field of coded caching [16]. Once the caching scheme is fixed, for a specific demand, the design of the delivery scheme becomes an index coding problem. If the contents of the cache are encoded, then any delivery scheme corresponds to a solution of the corresponding generalized index coding problem. The relationship between index coding and coded caching is made use of in [17,18] to show the optimality of uncoded caching schemes. In [19], index coding techniques are used for caching problems to obtain solutions meeting outer bounds. The connections between index coding and coded caching are also used to design optimal error correcting delivery schemes for coded caching problems in [20,21,22,23].
The equivalence between index coding problems and network coding problems is established in [24]. From a network coding problem, an index coding problem is constructed and it is shown that a linear solution to the network coding problem exists if and only if a linear solution to the index coding problem exists. These results were extended to the non-linear case in [25]. The connections between network coding and matroid theory are established in [26]. Using these connections, it was shown in [27] that non-linear solutions perform better than linear solutions for the general network coding problem. The connection between index coding problems and multi-linear representation of matroids was studied in [24]. It was shown in [28] that a vector linear solution to an index coding problem exists if and only if there exists a representable discrete polymatroid satisfying certain conditions, which are determined by the index coding problem. The relationship between the network computing problem and the functional index coding problem is established in [29]. In this paper, we establish the connections between generalized index coding problems, which are a special class of functional index coding problems, and discrete polymatroids. The major contributions of this paper are as follows.
  • We establish a connection between vector linear index codes for a generalized index coding problem and representable discrete polymatroids. It is shown that the existence of a linear solution for a generalized index coding problem is connected to the existence of a representable discrete polymatroid satisfying certain conditions determined by the generalized index coding problem.
  • From a discrete polymatroid, we construct a generalized index coding problem and show that if the generalized index coding problem has a vector linear solution of optimal length over the binary field then the discrete polymatroid is representable over the binary field. An example illustrating that the converse of the above result is not true is also provided.
  • A generalized index coding problem is constructed from matroids and it is shown that the constructed problem has an optimal binary scalar linear solution if and only if the matroid is binary representable. This enables us to construct, from a binary representable matroid, a generalized index coding problem that has an optimal binary scalar linear solution. Using the above result, it is also shown that certain generalized index coding problems do not have a binary scalar linear solution of optimal length.
A part of the content of this paper was presented in [30]. In this journal version, the proofs of all the claims are added along with detailed examples. The results in this paper are an extension of the results in [24,28]. Specifically, Theorem 3 in [28] establishes the connections between a vector linear index code for an index coding problem and a representable discrete polymatroid. It is shown that a vector linear index coding solution exists if and only if there exists a discrete polymatroid satisfying certain conditions derived from the index coding problem. In this paper, we extend these results to the case of generalized index coding problems. The extension is established by modifying the conditions that the corresponding discrete polymatroid has to satisfy. We also note that the result in [28] can be obtained as a special case of the theorem derived in this paper. Starting from a discrete polymatroid, an index coding problem is constructed in Theorem 4 in [28]. The constructed index coding problem is shown to have an optimal perfect linear solution. In this paper, the construction is modified to get generalized index coding problems from discrete polymatroids. The paper identifies the modifications required from the existing scenario to get new results and also discusses the limitations of the extension by providing counterexamples. In the case of conventional index coding problems, necessary and sufficient conditions for the constructed index coding problem to have an optimal linear solution are provided in terms of the representability of discrete polymatroids. For the generalized index coding problem, we show via a counterexample that representability is not guaranteed to yield an optimal linear index coding solution. Figure 1 summarizes the above results connecting discrete polymatroids and generalized index coding problems constructed from discrete polymatroids. The paper also identifies the modifications required to obtain a stronger result. This is achieved by restricting the construction to matroids rather than discrete polymatroids. This construction is similar to the construction in [24]. Specifically, Theorem 12 in [24] shows that the index coding problem constructed from a matroid has an optimal perfect linear solution if the matroid has an n-linear representation. In this paper, we show that the generalized index coding problem constructed from binary representable matroids has optimal perfect linear solutions. It is also shown that if the generalized index coding problem has an optimal perfect linear solution, the matroid is binary representable. The above result can be used in the construction of generalized index coding problems having optimal scalar linear solutions. Figure 2 summarizes the results connecting matroids and generalized index coding problems.
The organization of the paper is as follows. In Section 2, we present a short review of functional index coding. In Section 3, basic results on matroids and discrete polymatroids are reviewed. In Section 4, the connections between generalized index coding and discrete polymatroids are established. In Section 5, a generalized index coding problem is constructed from discrete polymatroids and it is shown that the constructed index coding problem has an optimal vector linear solution only if the discrete polymatroid is representable. In Section 6, a similar construction is employed on matroids and it is shown that the constructed generalized index coding problem has an optimal binary scalar linear solution if and only if the matroid is binary representable. We conclude with a summary of results in Section 7 along with a few directions for further research.
Notations: The set { 1 , 2 , … , m } is denoted as m and Z 0 denotes the set of non-negative integers. A vector of length r in which the i th component is one and all other components are zeros is denoted as ϵ i , r . For a vector v of length r and A r , v ( A ) is the vector obtained by taking only the components of v indexed by the elements of A. For u , v Z 0 r , u v if all the components of v u are non-negative and u < v if u v and u v . For a set S, | S | denotes the cardinality of the set S and for a vector v Z 0 r , | v | denotes the absolute sum of the components of v. For u , v Z 0 r , u v is the vector in which the i th component is the maximum of the i th components of u and v. For a vector v Z 0 r , ( v ) > 0 denotes the set of indices corresponding to the non-zero components of v. For a matrix M, M i denotes the i th column of matrix M and for a set S, M S denotes the submatrix obtained by concatenating the columns of M indexed by the set S. For vector subspaces V 1 , V 2 , , V m of a vector space V, the sum of the vector spaces is the vector space i = 1 m V i = { i = 1 m v i | v i V i } .

2. Functional Index Coding

An index coding problem I ( X , R ) includes
  • a set of messages X = { x 1 , x 2 , , x m } and
  • a set of receiver nodes R { ( x , H ) ; x X , H X { x } } .
For a receiver node R = ( x , H ) R , x denotes the message demanded by the receiver R and H denotes the side information possessed by R. Each one of the messages x i , i m belongs to F q where F q is the finite field with q elements. In an index coding problem, the source can take n instances of each message and encode them together such that each receiver is able to decode all n instances of the demanded messages.
An index code over F q of length l and dimension n for the index coding problem I ( X , R ) is a function f : F q m n F q l , which satisfies the following condition. For every receiver R = ( x , H ) R , there exists a function ψ R : F q n | H | + l F q n such that ψ R ( ( x i ) i H , f ( y ) ) = x , y F q m n . The function ψ R is referred to as the decoding function at the receiver R. An index coding solution for which n = 1 is called scalar index code and if n > 1 , it is called a vector index code. An index code is called linear if the function f is linear.
The index coding problem was generalized to the functional index coding problem in [10]. In the functional index coding problem, the side information and the demands of the receivers may be functions of messages rather than only a subset of messages. The side information possessed by the receivers is described by a Has-set, which consists of functions of messages. The demands of the receiver are described by a Want-set. Each receiver R i is described by a tuple ( W i , H i ) , where W i , H i are sets of functions from F q m to F q .
In this paper, we consider those generalized index coding problems for which the functions demanded and possessed by the receivers are linear combinations of the messages.
Definition 1.
An instance I ( X , R ) of a generalized index coding problem comprises of
1. 
A source equipped with the message vector X = ( x 1 , x 2 , , x m ) , where x i F q , i m .
2. 
A set of clients or receivers R = { R 1 , R 2 , , R | R | } , where R i = ( W i , H i ) for all R i R . For any receiver R i , H i = { h i , 1 ( X ) , h i , 2 ( X ) , , h i , | H i | ( X ) } is the Has-set, where h i , j : F q m F q for 1 j | H i | and W i = { w i , 1 ( X ) , w i , 2 ( X ) , , w i , | W i | ( X ) } is the Want-set, where w i , k : F q m F q for 1 k | W i | .
The source can combine n instances of the messages and perform encoding operations such that the demands of the receivers are satisfied. Since the functions in the Has-set of a receiver R i are linear, they can be represented as inner products as follows. Each function h i , j H i can be expressed as the inner product h i , j ( X ) = X K i , j where K i , j F q m n × n is a matrix. For the receiver R i , we have | H i | functions in the Has-set, each represented by a matrix K i , j , 1 j | H i | . All the functions in the Has-set of receiver R i can be represented by a matrix K i F q m n × n | H i | called the knowledge matrix. Note that K i = [ K i , 1 , K i , 2 , , K i , | H i | ] . Similarly, the demand functions in the Want-set W i can be represented by demand matrices. Each function w i , j W i can be expressed as w i , j ( X ) = X D i , j where the matrix D i , j F q m n × n and all the functions in the Want-set of receiver R i can be described by the m n × n | W i | matrix D i = [ D i , 1 , D i , 2 , , D i , | W i | ] called the demand matrix.
An index code over F q of length l and dimension n for the generalized index coding problem I ( X , R ) is a function f : F q m n F q l , which satisfies the following condition. For every receiver R i = ( W i , H i ) R , there exists a function ψ R i : F q n | H i | + l F q n | W i | such that ψ R i ( X K i , f ( X ) ) = X D i , X F q m n . The definitions of linearity, scalar and vector index codes remains the same as that of conventional index codes.
When the index code f for a generalized index coding problem is linear, it can be described as f ( X ) = X L , X F q m n , where L is a matrix of order m n × l over F q . The matrix L is called the matrix corresponding to the linear index code f and the code f is referred to as the linear index code based on L.
For an index coding problem I ( X , R ) , define μ ( I ( X , R ) ) as the maximum number of receivers having the same Has-set. The length l and dimension n of an index coding solution for the index coding problem I ( X , R ) satisfy the condition l / n μ ( I ( X , R ) ) [24]. Computing the optimal length of an index coding solution is shown to be an NP-hard problem [31]. The lower bound offers a method to check whether the solution obtained is optimal.
Definition 2
([24]). An index coding solution for which l / n = μ ( I ( X , R ) ) is defined to be a perfect index coding solution.
Example 1.
Consider the generalized index coding problem with the message vector X = [ x 1 x 2 x 5 ] , x i F 2 . There are five receivers R 1 = ( x 1 , { x 2 } ) , R 2 = ( x 2 , { x 1 + x 5 } ) , R 3 = ( x 3 , { x 1 , x 4 } ) , R 4 = ( x 4 , { x 1 + x 2 + x 3 } ) and R 5 = ( x 5 + x 4 + x 3 , { x 2 , x 1 + x 3 } ) . Consider receiver R 5 = ( W 5 , H 5 ) . The knowledge matrix K 5 and the demand matrix D 5 are as follows.
$$ K_5 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad D_5 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}. $$
The source can satisfy the demands of all the receivers by transmitting three messages x 1 + x 2 , x 3 + x 4 and x 5 . The index code is scalar linear and is described by the matrix
$$ L = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
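As a sanity check (our own sketch, not code from the paper), the script below enumerates all 32 binary message vectors and confirms that, for every receiver of Example 1, the pair ( X K i , X L ) determines the demand X D i ; the matrices are entered exactly as above.

```python
from itertools import product
import numpy as np

# Code matrix L of Example 1 (columns: x1+x2, x3+x4, x5).
L = np.array([[1, 0, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])

# (demand matrix D_i, knowledge matrix K_i) for each receiver, columns as in the text.
receivers = [
    (np.array([[1, 0, 0, 0, 0]]).T, np.array([[0, 1, 0, 0, 0]]).T),                   # R1
    (np.array([[0, 1, 0, 0, 0]]).T, np.array([[1, 0, 0, 0, 1]]).T),                   # R2
    (np.array([[0, 0, 1, 0, 0]]).T, np.array([[1, 0, 0, 0, 0], [0, 0, 0, 1, 0]]).T),  # R3
    (np.array([[0, 0, 0, 1, 0]]).T, np.array([[1, 1, 1, 0, 0]]).T),                   # R4
    (np.array([[0, 0, 1, 1, 1]]).T, np.array([[0, 1, 0, 0, 0], [1, 0, 1, 0, 0]]).T),  # R5
]

for idx, (D, K) in enumerate(receivers, start=1):
    seen = {}
    for bits in product((0, 1), repeat=5):
        X = np.array(bits)
        key = (tuple(X @ K % 2), tuple(X @ L % 2))   # what receiver R_idx observes
        val = tuple(X @ D % 2)                        # what it must decode
        assert seen.setdefault(key, val) == val, f"R{idx} cannot decode"
print("every receiver of Example 1 can decode its demand from its Has-set and X L")
```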

3. Matroids and Discrete Polymatroids

Matroids are mathematical structures that capture the fundamental properties of dependence, which are common to graphs and matrices. In [24], the connections between network coding, index coding and multi-linear representation of matroids are established. It was observed that each receiver imposes a certain dependency between the index coding solutions, side information and the demand. This dependency arises from the fact that, using the transmitted messages of an index coding solution, along with the messages present as side information, each receiver should be able to decode the demanded messages. It was observed in [28] that matroids cannot fully capture all the dependencies of a vector linear index coding solution, and discrete polymatroids having a more general structure were used to establish these connections. In this paper, we use discrete polymatroids to capture the dependencies of the generalized index coding problem and also show that the problem of finding a representation for a discrete polymatroid can be reduced to the problem of obtaining an optimal perfect linear index coding solution. In Section 3.1 and Section 3.2, we review the definitions and establish notations related to matroids and discrete polymatroids. In Section 3.2, we review how discrete polymatroids can be viewed as a generalization of matroids.

3.1. Matroids

In this subsection, we list a few basic definitions and results from the matroid theory. For a comprehensive treatment, the readers are referred to [32,33].
Definition 3.
Let E be a finite set. A matroid M on E is an ordered pair ( E , I ) , where the set I is a collection of subsets of E satisfying the following three conditions
(I1) 
ϕ I
(I2) 
If X I and X X , then X I .
(I3) 
If X 1 and X 2 are in I and | X 1 | < | X 2 | , then there is an element e X 2 X 1 such that X 1 e I .
The set E is called the ground set of the matroid and is also referred to as E ( M ) . The members of the set I are called the independent sets of M . Independent sets are also denoted by I ( M ) . The set of independent sets in a matroid generalizes the notion of linear independence of vectors in a vector space. Note that the property ( I 2 ) , when specialized to the vectors of a vector space, implies that a subset of a linearly independent set is linearly independent. Property ( I 3 ) is a generalization of the extension of a linearly independent set by using vectors from a larger linearly independent set. Similar to a vector space, a matroid can also be defined in different ways. The notion of the basis of a vector space and the dependent vectors of a vector space is also generalized in matroids and is provided below.
A maximal independent subset of E is called a basis of M and the set of all bases of M is denoted by B ( M ) . A subset of E that is not in I is called a dependent set. A minimal dependent set C E is referred to as a circuit. The set of all circuits of matroid M is denoted by C ( M ) . The circuits of a matroid satisfy the following conditions.
(C1)
No proper subset of a circuit is a circuit.
(C2)
If C 1 and C 2 are distinct circuits and c C 1 C 2 , then C 1 C 2 { c } contains a circuit.
The axioms ( C 1 ) and ( C 2 ) can be viewed as generalizations of the properties of a minimal dependent set in a vector space. The axiom ( C 1 ) implies that any proper subset of a circuit is independent and is easily proved using the minimality given in the definition. The axiom ( C 2 ) shows that if there are two minimal dependent sets with a vector in common, then the union of the two sets with the common vector removed is a dependent set. This follows from the fact that the common vector can be expressed as a combination of the other vectors in C 1 and C 2 , respectively, thus establishing a dependency between the vectors in C 1 C 2 { c } . The notion of rank in a vector space is also generalized in the matroid as given below.
Each circuit of a matroid is a set that captures the dependencies existing in a matroid. An index coding problem induces certain dependencies. However, these dependencies are not minimal. Rather than trying to find the minimal dependent sets, a common approach is to use the notion of a rank function of matroid defined below to characterize the dependencies.
Associated with a matroid is a function called the rank function, whose domain is the power set of E and whose codomain is the set of non-negative integers. The rank of any X E in M , denoted by r M ( X ) , is defined as the maximum cardinality of a subset of X that is a member of I ( M ) . The rank of the matroid is the rank of its ground set. The rank function of the matroid satisfies the following properties.
(R1)
r M ( X ) | X | , for all X E .
(R2)
r M ( X ) r M ( Y ) , for all X Y E .
(R3)
r M ( X Y ) + r M ( X Y ) r M ( X ) + r M ( Y ) , for all X , Y E .
Note that the rank of an independent set is equal to the cardinality of the independent set. A matroid is fully described by its rank function, and a matroid M on ground set E with rank function r M is denoted as M ( E , r M ) .
Example 2.
Consider the matroid M on the ground set 4 with the rank function r M defined as r M ( X ) = min { | X | , 2 } , X 4 . It follows from the definition of the rank function that the rank of an independent set is equal to the cardinality of the set. It also follows that any set with cardinality equal to the rank is an independent set. The set of independent sets of the matroid M is I ( M ) = { ϕ , { 1 } , { 2 } , { 3 } , { 4 } , { 1 , 2 } , { 1 , 3 } , { 1 , 4 } , { 2 , 3 } , { 2 , 4 } , { 3 , 4 } } . This matroid is referred to as the uniform matroid U 2 , 4 . The rank of the matroid is r M ( 4 ) = 2 . The set of circuits of the matroid is C ( M ) = { X 4 : | X | = 3 } . The set of all bases of M is B ( M ) = { X 4 : | X | = 2 } .
A matroid M is said to be representable over F q if there exists one-dimensional vector subspaces V 1 , V 2 , , V | E | of a vector space V such that dim ( i X V i ) = r M ( X ) , X E and the set of vector subspaces V i , i | E | , is said to form a representation of M . The one-dimensional vector subspaces V i , i | E | , can be described by a matrix A over F q , the i th column of which spans V i . A matroid M with matrix A as its representation is called the vector matroid of A and is denoted by M ( A ) . Each element in the ground set of M ( A ) corresponds to a column in A. For a subset S of the ground set E ( M ) , A S denotes the submatrix of A with columns corresponding to the elements of the ground set in S.
Example 3.
For the matroid considered in Example 2, consider the matrix $A = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 2 \end{bmatrix}$. Let V i , i 4 denote the space spanned by the i th column of A over F 3 . It can be observed that any two columns of the matrix are linearly independent. Hence, for any set X 4 , such that | X | 2 , dim i X V i = | X | . Note that the number of rows of the matrix is 2 and hence dim i X V i = 2 , X 4 , | X | 3 . Hence, the matrix A is a representation of the matroid.
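The claim can be verified by brute force. The sketch below (illustrative only; the helper gf_rank is a small Gaussian-elimination routine written here, not a library call) checks that every non-empty subset of columns of A has rank min ( | X | , 2 ) over F 3 .

```python
from itertools import combinations

def gf_rank(rows, p):
    """Rank over the prime field F_p of a matrix given as a list of rows."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)            # inverse via Fermat's little theorem
        rows[rank] = [(v * inv) % p for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

A = [[1, 0, 1, 1],
     [0, 1, 1, 2]]          # the representing matrix of Example 3, columns over F_3

for size in range(1, 5):
    for S in combinations(range(4), size):
        sub = [[row[j] for j in S] for row in A]
        assert gf_rank(sub, 3) == min(size, 2)
print("A is a representation of U_{2,4} over F_3")
```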
It was established in [24], that the problem of finding a multi-linear representation of matroids can be reduced to finding an optimal perfect linear index code for a corresponding index coding problem. Multi-linear representation of matroids was introduced in [34,35]. A matroid M on the ground set E is said to be multi-linearly representable of dimension n over F q if there exist vector subspaces V 1 , V 2 , , V | E | of a vector space V over F q such that dim ( i X V i ) = n r M ( X ) , X E . The vector subspaces V 1 , V 2 , , V | E | are said to form a multi-linear representation of dimension n over F q for the matroid M . The vector subspaces V i , i | E | can be described by matrices M 1 , M 2 , , M | E | of order n k × k over F q , where k is the rank of the matroid. Let M be the matrix obtained by concatenating the matrices M 1 , M 2 , , M | E | , M = [ M 1 M 2 M | E | ] . For every subset X E , rank ( M X ) = n r M ( X ) .

3.2. Discrete Polymatroids

Discrete polymatroids are the multi-set analog of matroids. Linear representations of discrete polymatroids generalize the notion of linear and multi-linear representations of matroids. In this paper, we establish connections between vector linear index codes for the generalized index coding problem and representations of discrete polymatroids. In this subsection, we review the definitions and results on discrete polymatroids. For a comprehensive treatment, interested readers are referred to [36,37].
Definition 4
([36]). A discrete polymatroid D on the ground set m is a non-empty finite set of vectors in Z 0 m satisfying the following conditions:
  • If u D and v < u , then v D .
  • For all u , v D with | u | < | v | , there exists w D such that u < w u v .
Let 2 m denote the power set of the set m . For a discrete polymatroid D , the rank function ρ : 2 m Z 0 is defined as ρ ( A ) = max { | u ( A ) | , u D } , where A m and ρ ( ) = 0 . Alternatively, a discrete polymatroid D can be written in terms of its rank function as D = { x Z 0 m : | x ( A ) | ρ ( A ) , A m } . A discrete polymatroid is completely described by the rank function. So the discrete polymatroid D on m is also denoted by ( m , ρ ) . The ground set of discrete polymatroid is also denoted by E ( D ) .
A function ρ : 2 m Z 0 is the rank function of a discrete polymatroid if and only if it satisfies the following conditions [38]:
(D1)
For A B m , ρ ( A ) ρ ( B ) .
(D2)
A , B m , ρ ( A B ) + ρ ( A B ) ρ ( A ) + ρ ( B ) .
(D3)
ρ ( ) = 0 .
The difference between discrete polymatroids and matroids is better understood by comparing the properties of the rank functions of each of these structures. It can be observed that the main difference is that the rank of a matroid r M has to satisfy the additional property r M ( X ) | X | , X E ( M ) . This restriction is generalized and the advantage is that each element in the ground set can have different values for the rank. In particular, when the rank of every element in the ground set becomes one, the structure reduces to a matroid and when the rank of every element becomes n > 1 , it reduces to a matroid with multi-linear representation. This additional generalization is required to fully capture all the dependencies of a generalized index coding problem and this is illustrated in Theorem 1 in Section 4.
The notion of basis and circuits is also extended to the case of discrete polymatroids. A vector u D for which there does not exist v D such that u < v , is called a basis vector of D . Let B ( D ) denote the set of basis vectors of D . The sum of the components of a basis vector of D is referred to as the rank of D , denoted by ρ ( D ) . Note that ρ ( D ) = ρ ( m ) . For all the basis vectors, the sum of the components will be equal [37]. A discrete polymatroid can also be defined as the set of all integral subvectors of its basis vectors.
Example 4.
Consider a discrete polymatroid on the ground set 3 defined by the set of basis vectors B ( D ) = { ( 1 , 1 , 1 ) , ( 1 , 2 , 0 ) , ( 2 , 0 , 1 ) , ( 2 , 1 , 0 ) } . The set of vectors belonging to the discrete polymatroid is the integral subvectors of its basis vectors. Hence, the discrete polymatroid D is { ( 0 , 0 , 0 ) , ( 1 , 0 , 0 ) , ( 0 , 1 , 0 ) , ( 0 , 0 , 1 ) , ( 1 , 0 , 1 ) , ( 1 , 1 , 0 ) , ( 0 , 1 , 1 ) , ( 1 , 1 , 1 ) , ( 2 , 0 , 0 ) , ( 2 , 0 , 1 ) , ( 2 , 1 , 0 ) , ( 1 , 2 , 0 ) , ( 0 , 2 , 0 ) } . The rank function ρ of the discrete polymatroid is given by ρ ( { 1 } ) = ρ ( { 2 } ) = ρ ( { 2 , 3 } ) = 2 , ρ ( { 3 } ) = 1 and ρ ( { 1 , 2 } ) = ρ ( { 1 , 3 } ) = ρ ( { 1 , 2 , 3 } ) = 3 . It can be verified that the rank function satisfies the axioms (D1), (D2) and (D3).
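Both the vector set and the rank function of Example 4 can be regenerated from the basis vectors alone. The sketch below (ours, for illustration) expands all integral subvectors of the basis vectors and evaluates ρ on every non-empty subset of the ground set.

```python
from itertools import product, combinations

basis = [(1, 1, 1), (1, 2, 0), (2, 0, 1), (2, 1, 0)]   # B(D) from Example 4

# The discrete polymatroid is the set of all integral subvectors of its basis vectors.
D = {u for b in basis for u in product(*(range(c + 1) for c in b))}
print(len(D), "vectors in D")          # 13, matching the list in Example 4

def rho(A):
    """Rank function rho(A) = max over u in D of |u(A)| (A given as 0-indexed positions)."""
    return max(sum(u[i] for i in A) for u in D)

for size in range(1, 4):
    for A in combinations(range(3), size):
        print({i + 1 for i in A}, "->", rho(A))
# Matches Example 4: rho({1}) = rho({2}) = rho({2,3}) = 2, rho({3}) = 1,
# and rho({1,2}) = rho({1,3}) = rho({1,2,3}) = 3.
```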
Consider a discrete polymatroid D with rank function ρ on the ground set m . Consider the function ρ ( X ) = n ρ ( X ) , X m . The function ρ satisfies the conditions (D1), (D2) and (D3). The discrete polymatroid on the ground set m with the rank function ρ is denoted by n D .
Definition 5
([38]). A discrete polymatroid D on the ground set m with rank function ρ is said to be representable over F q if there exist vector subspaces V 1 , V 2 , , V m of a vector space E over F q such that d i m ( i X V i ) = ρ ( X ) , X m . The set of vector subspaces V i , i m , is said to form a representation of D . A discrete polymatroid is said to be representable if it is representable over some field. Each V i can be expressed as the column span of a ρ ( m ) × ρ ( { i } ) matrix A i . The concatenated matrix A = [ A 1 A 2 A m ] is referred to as the representing matrix of the discrete polymatroid D . It is shown in [39] that performing elementary row operations or column operations on A does not change the discrete polymatroid. In particular, pre-multiplication and post-multiplication by full rank matrices do not change the discrete polymatroid.
A discrete polymatroid can be constructed from any finite set of vector subspaces of a vector space. Let V 1 , V 2 , , V m be a collection of vector subspaces of a vector space V. For any subset A m , define r ( A ) = d i m ( i A V i ) . The function r satisfies the conditions (D1), (D2) and (D3) and hence it is the rank function of a discrete polymatroid on the ground set m . Let D ( V 1 , V 2 , , V m ) denote the representable discrete polymatroid on m with V 1 , V 2 , , V m as its representation.
Example 5.
Consider the set of matrices $A_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$, $A_2 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 1 & 1 \end{bmatrix}$ and $A_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ over F 2 . Let V i , i 3 denote the column span of A i . The set of vector spaces V i , i 3 is a representation for the discrete polymatroid in Example 4.
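Example 5 can be confirmed directly: with V i the column span of A i over F 2 , the dimension of the sum of subspaces indexed by every non-empty subset should match the rank function of Example 4. A sketch (the binary rank routine gf2_rank is written inline and is not from the paper):

```python
def gf2_rank(rows):
    """Rank over F_2 of a matrix given as a list of rows with 0/1 entries."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# Rows of A_1, A_2, A_3 from Example 5 (three coordinates each).
A = {1: [[1, 0], [0, 1], [0, 0]],
     2: [[0, 1], [0, 1], [1, 1]],
     3: [[0], [0], [1]]}

# Rank function of the discrete polymatroid of Example 4.
rho = {(1,): 2, (2,): 2, (3,): 1, (1, 2): 3, (1, 3): 3, (2, 3): 2, (1, 2, 3): 3}

for X, expected in rho.items():
    rows = [sum((A[i][r] for i in X), []) for r in range(3)]   # concatenate the column blocks
    assert gf2_rank(rows) == expected
print("the subspaces of Example 5 represent the discrete polymatroid of Example 4 over F_2")
```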
Definition 6
([28]). For a discrete polymatroid D with rank function ρ on the ground set m , a vector u Z 0 m is said to be an excluded vector if the i th component of u is less than or equal to ρ ( { i } ) , i m and u D . The set of excluded vectors for the discrete polymatroid D is denoted by D ( D ) . An excluded vector u D ( D ) is said to be a minimal excluded vector, if there does not exist v D ( D ) for which v < u . The set of minimal excluded vectors for the discrete polymatroid D is denoted by C ( D ) .
Discrete polymatroids can be viewed as a generalization of matroids [28,36,37]. There is a one-to-one correspondence between the independent sets, basis sets, dependent sets and circuits of a matroid to the vectors of an associated discrete polymatroid. For a matroid M there is an associated discrete polymatroid D ( M ) . This arises from the fact that the rank function of a matroid M satisfies the conditions (D1), (D2) and (D3). Hence, the rank function of matroid M also serves as the rank function of a corresponding discrete polymatroid D ( M ) . Consider an independent set I of the matroid M . Corresponding to the set I there exists a unique vector i I ϵ i , r belonging to D ( M ) . Discrete polymatroid D ( M ) can be written as { i I ϵ i , r : I I } where I is the set of independent sets of matroid M . For a basis set B of a matroid M , the vector i B ϵ i , r is a basis vector of D ( M ) and for a basis vector b of D ( M ) , the set ( b ) > 0 is a basis set of M . For a dependent set D of M , the vector i D ϵ i , r is an excluded vector of D ( M ) and conversely for an excluded vector d D ( D ( M ) ) , the set ( d ) > 0 is a dependent set of M . Similarly the set of minimal excluded vectors of D ( M ) and circuits of M are also related as follows. The set of circuits of matroid M is given by { ( u ) > 0 : u C ( D ( M ) ) } . For a circuit C of matroid M the vector i C ϵ i , r is a minimal excluded vector for D ( M ) .
Example 6.
Consider the uniform matroid U 2 , 4 considered in Example 2. The discrete polymatroid D ( U 2 , 4 ) is
{ ( 0 , 0 , 0 , 0 ) , ( 1 , 0 , 0 , 0 ) , ( 0 , 1 , 0 , 0 ) , ( 0 , 0 , 1 , 0 ) , ( 0 , 0 , 0 , 1 ) , ( 1 , 1 , 0 , 0 ) , ( 1 , 0 , 1 , 0 ) , ( 1 , 0 , 0 , 1 ) , ( 0 , 1 , 1 , 0 ) , ( 0 , 1 , 0 , 1 ) , ( 0 , 0 , 1 , 1 ) } .
The set of vectors belonging to the discrete polymatroid D ( U 2 , 4 ) can be obtained from the independent sets of the matroid. The set of excluded vectors can be obtained from the dependent sets. The set of excluded vectors for D ( U 2 , 4 ) is
{ ( 1 , 1 , 1 , 0 ) , ( 1 , 1 , 0 , 1 ) , ( 1 , 0 , 1 , 1 ) , ( 0 , 1 , 1 , 1 ) , ( 1 , 1 , 1 , 1 ) } .
The set of minimal excluded vectors of D ( U 2 , 4 ) corresponds to the circuits of the matroid U 2 , 4 . The set of minimal excluded vectors of D ( U 2 , 4 ) is given by { ( 1 , 1 , 1 , 0 ) , ( 1 , 1 , 0 , 1 ) , ( 1 , 0 , 1 , 1 ) , ( 0 , 1 , 1 , 1 ) } .
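The correspondence just described can be made concrete. The sketch below (illustrative code, not from the paper) builds D ( U 2 , 4 ) from the independent sets of U 2 , 4 and recovers the excluded and minimal excluded vectors listed in Example 6.

```python
from itertools import combinations, product

E = range(4)
independent = [set(S) for size in range(3) for S in combinations(E, size)]  # all |S| <= 2

# D(M): one 0/1 indicator vector per independent set of U_{2,4}
D = {tuple(1 if i in I else 0 for i in E) for I in independent}

# Excluded vectors: 0/1 vectors (since rho({i}) = 1) that are not in D(M).
excluded = {u for u in product((0, 1), repeat=4) if u not in D}

def less(u, v):
    return u != v and all(a <= b for a, b in zip(u, v))

minimal = {u for u in excluded if not any(less(v, u) for v in excluded)}
print(sorted(excluded))   # the five excluded vectors listed in Example 6
print(sorted(minimal))    # the four weight-3 vectors, matching the circuits of U_{2,4}
```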

4. Generalized Index Coding Problem and Discrete Polymatroids

In this section, we explore the connections between the generalized index coding problem and representable discrete polymatroids. Theorem 1 below connects the existence of a linear index code of length l and dimension n for a generalized index coding problem to the problem of representation of a discrete polymatroid satisfying certain conditions.
Theorem 1.
A linear index code over F q of length l and dimension n exists for a generalized index coding problem I ( X , R ) if and only if there exists a discrete polymatroid D = ( m + 1 , ρ ) representable over F q with ρ ( D ) = m n and with A 1 , A 2 , , A m + 1 , as the representation matrices satisfying the following conditions:
(C1) 
ρ ( { i } ) = n , i m , ρ ( m ) = m n and ρ ( { m + 1 } ) = l .
(C2) 
For every receiver R i = ( W i , H i ) R described by ( D i , K i ) , rank ( [ A D i A K i A m + 1 ] ) = rank ( [ A K i A m + 1 ] ) , where A = [ A 1 A 2 A m ] .
Proof. 
First we prove the if part. Consider a discrete polymatroid D of rank m n representable over F q with representation A 1 , A 2 , , A m + 1 , satisfying conditions (C1) and (C2). The matrix A is the concatenation of the matrices A 1 , A 2 , , A m . Condition (C1) implies that A i is an m n × n matrix for i m and A m + 1 is an m n × l matrix. From (C1) we have that rank ( A ) = m n , making it invertible. Define A i ′ = A⁻¹ A i , i m + 1 . Consider the map f : F q m n F q l given by f ( X ) = X A m + 1 ′ . We show that the map f forms an index code of length l and dimension n over F q . Consider any receiver R i = ( W i , H i ) described by ( D i , K i ) . From (C2) we have that the column span of the matrix A D i belongs to the span of the columns of A K i and A m + 1 . The matrix A D i can be written as [ A K i A m + 1 ] M i where M i is an ( n | H i | + l ) × n | W i | matrix. Pre-multiplying by A⁻¹ , we have [ K i A m + 1 ′ ] M i = D i . Hence X D i can be obtained at receiver R i from X K i and X A m + 1 ′ .
To prove the only if part, we assume that a vector linear index code f over F q of length l and dimension n exists for the generalized index coding problem I ( X , R ) . The vector linear index code f can be written as f ( X ) = X A m + 1 where A m + 1 is a matrix of size m n × l . Let I be the identity matrix of size m n × m n . For i m , let A i be the matrix obtained by taking only the ( ( i − 1 ) n + 1 ) th to ( i n ) th columns of I. Let V i be the column span of A i . Consider the discrete polymatroid D ( V 1 , V 2 , , V m + 1 ) . We claim that the discrete polymatroid D ( V 1 , V 2 , , V m + 1 ) satisfies the conditions (C1) and (C2). Since the concatenation of the matrices A i , i m forms an identity matrix, condition (C1) is satisfied. Consider a receiver ( D i , K i ) R . Since the vector index code X A m + 1 satisfies the receiver, X D i can be obtained from X K i and X A m + 1 , that is, the columns of D i lie in the column span of [ K i A m + 1 ] . Since A is the identity matrix, this is exactly condition (C2). □
Theorem 1 is a generalization of the result obtained in [28], where the vector linear solution of a conventional index coding problem was connected to discrete polymatroids. The fact that the receivers demand and possess functions of messages changes condition (C2) in Theorem 1. The concept of the representing matrix is used to extend the result in [28]. The result in [28] can be obtained from this result by imposing restrictions on the structure of the matrices D i and K i .
Corollary 1.
Corresponding to a receiver R i = ( x i , H i ) of the conventional index coding problem, in which receiver R i demands the message x i and possess a subset of messages H i = { x j 1 , x j 2 , , x j k } as its side information, condition (C2) reduces to ρ ( { i } { j 1 , j 2 , , j k } { m + 1 } ) = ρ ( { j 1 , j 2 , , j k } { m + 1 } ) .
For conventional index coding, the matrix D i reduces to a vector with one non-zero entry corresponding to the demanded message. The matrix K i takes the structure of a submatrix of the identity matrix. Hence, A D i becomes A i and A K i corresponds to certain columns of the matrix A. The columns of the matrix A correspond to the elements of the ground set of the discrete polymatroid. By imposing these restrictions, condition (C2) can be expressed in terms of the elements of the ground set.
The necessary and sufficient conditions for a matrix L to correspond to a linear index code for a generalized index coding problem I ( X , R ) were found in [14]. Theorem 1 expresses the above condition in terms of properties of the corresponding discrete polymatroid D = ( m + 1 , ρ ) . In the remaining part of this section, we illustrate Theorem 1 with examples.
Example 7.
Consider the generalized index coding problem of Example 1. There are five messages and, since the solution is scalar, the dimension is one. Consider the set of matrices
$$ A_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad A_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad A_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad A_5 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}. $$
Also let A 6 = L , the matrix corresponding to the index code of Example 1. Let V i denote the column span of A i for i 6 . The discrete polymatroid D ( V 1 , V 2 , , V 6 ) satisfies the conditions (C1) and (C2) of Theorem 1. The rank of the discrete polymatroid is equal to five since the vector spaces V 1 , V 2 , , V 5 are linearly independent. The dimension of the vector space V 6 is equal to three, which is the length of the index code. We illustrate condition (C2) for receiver R 5 . For receiver R 5 , the matrices A D 5 , A K 5 and A 6 are as follows:
$$ A_{D_5} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad A_{K_5} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad A_6 = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
Clearly A D 5 lies in the column span of the matrix [ A K 5 A 6 ] . Condition (C2) can be similarly verified for every receiver.
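Condition (C2) can also be checked mechanically for all five receivers. The sketch below (ours; gf2_rank is again an inline helper) verifies that rank ( [ A D i A K i A 6 ] ) = rank ( [ A K i A 6 ] ) over F 2 for every receiver of Example 7.

```python
def gf2_rank(rows):
    """Rank over F_2 of a matrix given as a list of rows with 0/1 entries."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# Columns of the demand and knowledge matrices of Example 1; A is the identity,
# so A D_i = D_i and A K_i = K_i, and A_6 = L.
L_cols = [(1, 1, 0, 0, 0), (0, 0, 1, 1, 0), (0, 0, 0, 0, 1)]
receivers = [  # (columns of D_i, columns of K_i)
    ([(1, 0, 0, 0, 0)], [(0, 1, 0, 0, 0)]),
    ([(0, 1, 0, 0, 0)], [(1, 0, 0, 0, 1)]),
    ([(0, 0, 1, 0, 0)], [(1, 0, 0, 0, 0), (0, 0, 0, 1, 0)]),
    ([(0, 0, 0, 1, 0)], [(1, 1, 1, 0, 0)]),
    ([(0, 0, 1, 1, 1)], [(0, 1, 0, 0, 0), (1, 0, 1, 0, 0)]),
]

def rank_of(cols):
    # assemble the 5-row matrix whose columns are `cols`
    return gf2_rank([[c[r] for c in cols] for r in range(5)])

for i, (D_cols, K_cols) in enumerate(receivers, start=1):
    assert rank_of(D_cols + K_cols + L_cols) == rank_of(K_cols + L_cols), f"R{i}"
print("condition (C2) holds for every receiver of Example 7")
```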
Example 8.
Consider the generalized index coding problem with the message vector X = [ x 1 , x 2 , , x 5 ] , x i F 2 . There are five receivers R 1 = ( x 1 , { x 2 + x 5 } ) , R 2 = ( x 2 , { x 1 + x 3 } ) , R 3 = ( x 3 , { x 2 + x 4 } ) , R 4 = ( x 4 , { x 3 + x 5 } ) and R 5 = ( x 5 , { x 1 + x 2 } ) . The generalized index coding problem has an index code over F 2 of length six and dimension two. Note that the index code considered is a vector linear index code. For a receiver R i , i 5 the demand matrix D i is equal to [ ϵ 2 i 1 , 10 ϵ 2 i , 10 ] and the knowledge matrix K i = [ ϵ 2 i + 1 , 10 + ϵ 2 i 3 , 10 ϵ 2 i + 2 , 10 + ϵ 2 i 2 , 10 ] where the operations on the indices are modulo ten with 0 = 10 . The knowledge matrix K 1 and the demand matrix D 1 are as given below:
$$ K_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}. $$
The vector linear index code of dimension two is described by the matrix
$$ L = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}. $$
For i 5 , let A i be the matrix [ ϵ 2 i 1 , 10 ϵ 2 i , 10 ] and let A 6 = L . Let V i denote the column span of A i , i 6 . The discrete polymatroid D ( V 1 , V 2 , , V 6 ) satisfies the conditions of Theorem 1. For i 5 the dimension of the vector space V i is equal to two. The rank of the discrete polymatroid is equal to ten since the vector spaces V 1 , V 2 , , V 5 are linearly independent. The dimension of the vector space V 6 is equal to six, which is the length of the vector linear index code. For receiver R 1 = ( x 1 , { x 2 + x 5 } ) , the matrices A D 1 and A K 1 are D 1 and K 1 , respectively. Observe that the columns of D 1 lie in the column span of the concatenated matrix [ K 1 L ] . Hence, condition (C2) is satisfied for receiver R 1 . Condition (C2) can be similarly verified for all other receivers.
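The same rank test applies to the vector code of Example 8. The sketch below (illustrative, not from the paper) builds K i and D i from the index formulas given in the text, takes the columns of the matrix L above as A 6 , and checks condition (C2) over F 2 for all five receivers.

```python
def gf2_rank(rows):
    """Rank over F_2 of a matrix given as a list of rows with 0/1 entries."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def eps(j):
    """Unit column epsilon_{j,10}; indices are taken modulo ten with 0 = 10."""
    v = [0] * 10
    v[(j - 1) % 10] = 1
    return tuple(v)

def add(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

L_rows = [(1, 0, 0, 1, 0, 0), (0, 1, 1, 1, 0, 0), (0, 1, 1, 0, 0, 1), (1, 1, 0, 1, 1, 1),
          (0, 0, 0, 1, 1, 0), (0, 0, 1, 1, 0, 1), (0, 0, 0, 0, 0, 1), (0, 0, 0, 0, 1, 1),
          (0, 1, 0, 0, 0, 0), (1, 1, 0, 0, 0, 0)]
L_cols = [tuple(row[c] for row in L_rows) for c in range(6)]   # columns of A_6

def rank_of(cols):
    return gf2_rank([[c[r] for c in cols] for r in range(10)])

for i in range(1, 6):
    D_cols = [eps(2 * i - 1), eps(2 * i)]
    K_cols = [add(eps(2 * i + 1), eps(2 * i - 3)), add(eps(2 * i + 2), eps(2 * i - 2))]
    assert rank_of(D_cols + K_cols + L_cols) == rank_of(K_cols + L_cols), f"R{i}"
print("condition (C2) holds for every receiver of Example 8")
```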

5. Generalized Index Coding from Discrete Polymatroids

Discrete polymatroids can be viewed as a generalization of matroids, as explained in Section 3. In Section 4, we established the connections between a generalized index coding problem having a vector linear solution and representable discrete polymatroids. In this section, starting from a discrete polymatroid, we construct generalized index coding problems. This was motivated by the previous works, which construct index coding problems from matroids [24] and discrete polymatroids [28]. We establish a connection between an optimal perfect linear solution for the constructed generalized index coding problems and representable discrete polymatroids in Theorem 2. This technique helps to generate a class of generalized index coding problems and also helps in identifying representations of discrete polymatroids from the solutions of generalized index coding problems.
Consider a discrete polymatroid D on the ground set r with rank function ρ and ρ ( r ) = k . The generalized index coding problem I D ( Z , R ) is given below.
(i)
The set of source messages Z = X Y , where X = { x 1 , x 2 , , x k } and
Y = { y 1 1 , y 1 2 , , y 1 ρ ( { 1 } ) , y 2 1 , y 2 2 , , y 2 ρ ( { 2 } ) , , y r 1 , y r 2 , , y r ρ ( { r } ) } .
(ii)
The set of receivers R is a union of three sets of receivers R 1 , R 2 and R 3 defined below. Let ζ i = { y i 1 , y i 2 , , y i ρ ( { i } ) } .
Receivers in R 1 : For a basis vector b = i r b i ϵ i , r B ( D ) , we define the set S 1 ( b ) = { ( x j , l ( b ) > 0 η l ) for all j k and for all η l ζ l such that | η l | = b l } . R 1 = b B ( D ) S 1 ( b ) is the union of all such receivers for every basis of the discrete polymatroid D .
Receivers in R 2 : For a minimal excluded vector c = i r c i ϵ i , r C ( D ) , j ( c ) > 0 and p ρ ( { j } ) , define the set S 2 ( c , j , p ) as follows:
S 2 ( c , j , p ) = { ( y j p , y y Γ 1 Γ 2 ) : Γ 1 = l ( c ) > 0 { j } η l , η l ζ l , | η l | = c l , Γ 2 ζ j { y j p } , | Γ 2 | = c j 1 } .
Define R 2 = c C ( D ) j ( c ) > 0 p ρ ( { j } ) S 2 ( c , j , p ) .
Receivers in R 3 : Define R 3 = { ( y i j , X ) : i r , j ρ ( { i } ) }.
The presence of receivers in the set R 3 ensures that the minimum number of transmissions required by the above problem is n i r ρ ( { i } ) . Note that the receivers in R 3 have the same Has-set and μ ( I D ( Z , R ) ) = i r ρ ( { i } ) . The construction is similar to the construction provided in [28]. The difference is in the set of receivers constructed from the minimal excluded vectors of the discrete polymatroid D . The set of receivers in R 1 ensures that if a linear index coding solution exists then, from the messages corresponding to the elements in a basis of the discrete polymatroid and the transmitted messages, x j , j k is decodable. Receivers in R 2 have a function of messages as their side information. Corresponding to every minimal excluded vector, a set of receivers is constructed, which captures the dependency existing in the minimal excluded vector. The construction ensures that all the dependencies existing in the minimal excluded vectors are captured by the receivers. The proof in [28] is modified for the case of this new set of receivers. We show that if a linear index coding solution exists, then a representation can be constructed from the linear index code. The construction of receivers from a discrete polymatroid is made clear in the example below.
Example 9.
Consider the discrete polymatroid D on the ground set 3 with the rank function ρ given by ρ { 1 } = ρ { 2 } = 1 , ρ { 1 , 2 } = ρ { 3 } = 2 and ρ { 1 , 3 } = ρ { 2 , 3 } = ρ { 1 , 2 , 3 } = 3 . From the discrete polymatroid D we construct the generalized index coding problem I D ( Z , R ) .
The set of messages possessed by the source is Z = { x 1 , x 2 , x 3 } { y 1 1 , y 2 1 , y 3 1 , y 3 2 } . The set of receivers is constructed as described above. The set of basis vectors of the discrete polymatroid D is B ( D ) = { ( 1 , 1 , 1 ) , ( 1 , 0 , 2 ) , ( 0 , 1 , 2 ) } . We have,
S 1 ( ( 1 , 1 , 1 ) ) = { ( x i , { y 1 1 , y 2 1 , y 3 j } ) : i 3 , j 2 } , S 1 ( ( 1 , 0 , 2 ) ) = { ( x i , { y 1 1 , y 3 1 , y 3 2 } ) : i 3 } , S 1 ( ( 0 , 1 , 2 ) ) = { ( x i , { y 2 1 , y 3 1 , y 3 2 } ) : i 3 } , and R 1 = S 1 ( ( 1 , 1 , 1 ) ) S 1 ( ( 1 , 0 , 2 ) ) S 1 ( ( 0 , 1 , 2 ) ) .
There is only one excluded vector, ( 1 , 1 , 2 ) , which is therefore also the only minimal excluded vector. We have,
S 2 ( ( 1 , 1 , 2 ) , 1 , 1 ) = { ( y 1 1 , { y 2 1 + y 3 1 + y 3 2 } ) } , S 2 ( ( 1 , 1 , 2 ) , 2 , 1 ) = { ( y 2 1 , { y 1 1 + y 3 1 + y 3 2 } ) } , S 2 ( ( 1 , 1 , 2 ) , 3 , 1 ) = { ( y 3 1 , { y 1 1 + y 2 1 + y 3 2 } ) } ,
S 2 ( ( 1 , 1 , 2 ) , 3 , 2 ) = { ( y 3 2 , { y 1 1 + y 2 1 + y 3 1 } ) } , and R 2 = j ( c ) > 0 p ρ ( { j } ) S 2 ( ( 1 , 1 , 2 ) , j , p ) .
The third set of receivers R 3 is a collection of four receivers ( y 1 1 , X ) , ( y 2 1 , X ) , ( y 3 1 , X ) and ( y 3 2 , X ) where X = { x 1 , x 2 , x 3 } . From the set of receivers in R 3 it is clear that μ ( I D ( Z , R ) ) = 4 .
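The ingredients of the construction, namely the basis vectors and the minimal excluded vectors, can be recomputed from the rank function of Example 9. A small sketch (ours), using the characterization of D in terms of its rank function:

```python
from itertools import product

# Rank function of the discrete polymatroid of Example 9 (1-indexed ground set).
rho = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 2,
       frozenset({1, 2}): 2, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
       frozenset({1, 2, 3}): 3}

def in_D(u):
    """u belongs to D iff |u(A)| <= rho(A) for every subset A of the ground set."""
    return all(sum(u[i - 1] for i in A) <= r for A, r in rho.items())

box = list(product(range(2), range(2), range(3)))   # all u with u_i <= rho({i})
D = [u for u in box if in_D(u)]

def less(u, v):
    return u != v and all(a <= b for a, b in zip(u, v))

bases = [u for u in D if not any(less(u, v) for v in D)]
excluded = [u for u in box if u not in D]
minimal_excluded = [u for u in excluded if not any(less(v, u) for v in excluded)]

print("basis vectors:", bases)                 # the three basis vectors listed in Example 9
print("minimal excluded:", minimal_excluded)   # only (1, 1, 2)
```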
A connection between an optimal perfect linear solution for the generalized index coding problem I D ( Z , R ) and the representability of the discrete polymatroid D is established in Theorem 2 below.
Theorem 2.
If an optimal perfect linear index coding solution of dimension n over F 2 exists for the generalized index coding problem I D ( Z , R ) , then the discrete polymatroid n D is representable over F 2 .
Proof. 
Let t = k + i = 1 r ρ ( { i } ) denote the number of messages in the index coding problem I D ( Z , R ) . If an optimal perfect linear index coding solution of dimension n over F 2 exists for the index coding problem I D ( Z , R ) , then from Theorem 1, there exists a discrete polymatroid D representable over F 2 satisfying conditions (C1) and (C2). The discrete polymatroid D has rank n t and is over the ground set t + 1 . Let V 1 , V 2 , , V t + 1 be the vector spaces over F 2 , which form the representation of D . The vector spaces V i , i t can be expressed as the column spans of matrices A i of order n t × n . The vector space V t + 1 can be written as the column span of A t + 1 of order n t × n i = 1 r ρ ( { i } ) . This is because the linear index code for the index coding problem is perfect. The matrix B = [ A 1 , A 2 , , A t ] is invertible from (C1). We can assume it to be the identity without loss of generality. Otherwise, define A i ′ = B⁻¹ A i , i t + 1 , and the vector spaces given by the column spans of A i ′ will form a representation of D . This is possible because pre-multiplication by a full rank matrix does not change the discrete polymatroid.
The matrix A t + 1 is an n t × n i = 1 r ρ ( { i } ) matrix and we can also assume the matrix to have a specific structure. This is because of the presence of receivers belonging to R 3 . Let A t + 1 = [ C T D T ] T where C is of order n k × n i = 1 r ρ ( { i } ) and D is of order ( n i = 1 r ρ ( { i } ) ) × ( n i = 1 r ρ ( { i } ) ) . The matrix A = [ A 1 A 2 A t + 1 ] is of the form $A = \begin{bmatrix} I_{nk} & 0 & C \\ 0 & I_{n(t-k)} & D \end{bmatrix}$. The presence of receivers in R 3 ensures that the columns corresponding to the messages y i j , i r , j ρ ( { i } ) lie in the linear span of the columns corresponding to the messages X and the columns of the matrix A t + 1 . We have
$$ n t = \operatorname{rank} \begin{bmatrix} I_{nk} & 0 & C \\ 0 & I_{n(t-k)} & D \end{bmatrix} = \operatorname{rank} \begin{bmatrix} I_{nk} & C \\ 0 & D \end{bmatrix} = n k + \operatorname{rank}(D). $$
From this it follows that the matrix D has rank n ( t k ) and hence it is a full rank matrix. We can assume D to be the identity because, if not, we can define A t + 1 ′ = A t + 1 D⁻¹ and it still continues to be a valid representation. Let C i , i r denote the matrix obtained by taking only the ( n j = 1 i 1 ρ ( { j } ) + 1 ) th to ( n j = 1 i ρ ( { j } ) ) th columns of C. Let C i , j , j ρ ( { i } ) denote the n k × n matrix obtained by taking the ( ( j 1 ) n + 1 ) th to ( j n ) th columns of C i . Let V i denote the column span of C i and V i , j denote the column span of C i , j . We show that the vector subspaces V i , i r form a representation for the discrete polymatroid n D .
Consider a set S r with | S | = l . Let b S = arg max b D | b ( S ) | . Let b i S denote the i th component of b S . Choose b i S independent vector subspaces from the set V i = { V i , j : j ρ ( { i } ) } , denoted as V i , o 1 , V i , o 2 , , V i , o b i S for every i r . These vector subspaces can be chosen because the set V i contains ρ ( { i } ) independent vector subspaces. Note that b i S ≤ ρ ( { i } ) , since b S D . Let V i ^ = j b i S V i , o j and C i ^ = j b i S C i , o j . The vector C i ^ is a column vector, which is the sum of | b i S | column vectors of matrix C i . The receivers in S 1 ( b S ) demand the messages X and possess b i S messages from the set of messages y i j , j ρ ( { i } ) for all i r . The demand matrix of these receivers is I n k . From the fact that (C2) needs to be satisfied for the receivers belonging to S 1 ( b S ) , we have d i m ( i r V i ^ ) = rank [ I n k ] = n k = n ρ ( D ) . This implies that d i m ( i S V i ^ ) = n | b S ( S ) | . Since the vector space V i ^ is a subspace of V i , we have d i m ( i S V i ) n ρ ( S ) .
The elements of the subset S can be divided into two categories depending upon the value of b i S . The first category corresponds to the elements in the ground set for which b s i S < ρ ( { s i } ) and the second category is the set of elements in the ground set for which b s i S = ρ ( { s i } ) . Thus the elements of S can be written as S = { s 1 , s 2 , , s m } { s m + 1 , s m + 2 , , s l } , where s 1 , s 2 , , s m are the elements for which b s i S < ρ ( { s i } ) and the remaining elements satisfy b s i S = ρ ( { s i } ) . Consider the vector u = ( b s 1 S + 1 ) ϵ s 1 , r + i S { s 1 } b i S ϵ i , r . Since | u ( S ) | = | b S ( S ) | + 1 > ρ ( S ) , the vector u does not belong to the discrete polymatroid, and since its i th component does not exceed ρ ( { i } ) for any i, it is an excluded vector. This implies that there exists a minimal excluded vector u m for which u m u . The s 1 th component of u m has to be b s 1 S + 1 , since otherwise u m would satisfy u m ≤ b S , hence belong to D , and could not be an excluded vector. The vector u m can be written as ( b s 1 S + 1 ) ϵ s 1 , r + i S s 1 c i S ϵ i , r , where c i S b i S . From the receivers belonging to the set S 2 ( u m , s 1 , p ) , where p ρ ( { s 1 } ) { o 1 , o 2 , , o b s 1 S } , it follows that
C s 1 , p = i ( u m ) > 0 { s 1 } C i ^ + C s 1 ^ .
This is true for every p ρ ( { s 1 } ) { o 1 , o 2 , , o b s 1 S } . Note that the vector space V s 1 , p is the column span of matrix C s 1 , p . It is true for any | b s 1 S | columns chosen in C s 1 ^ . It follows that the vector space V s 1 , p is a subspace of i ( u m ) > 0 V i ^ for all p ρ ( { s 1 } ) . From this, we obtain that p ρ ( { s 1 } ) V s 1 , p i ( u m ) > 0 V i ^ i S V i ^ . By a similar reasoning, V s j i S V i ^ , j m . Since b s j S = ρ ( { s j } ) , for j { m + 1 , m + 2 , , l } , we have V s j = V s j ^ for j { m + 1 , m + 2 , , l } . From the above facts we have i S V i i S V i ^ . Hence, d i m ( i S V i ) d i m ( i S V i ^ ) = n ρ ( S ) . Thus, we have established that d i m ( i S V i ) = n ρ ( S ) for an arbitrary subset S r . □
In Theorem 2, a generalized index coding problem is constructed from a discrete polymatroid, and it is then shown that the discrete polymatroid is representable over the field $\mathbb{F}_2$ if an optimal perfect linear index coding solution exists for the constructed problem. The proof of Theorem 2 closely follows the proof of a similar theorem in [28], because in the constructed generalized index coding problem only the receivers constructed from the minimal excluded vectors differ. We illustrate this theorem in Example 10. The converse of this result, however, is not true: in Example 11, we construct, from a binary representable discrete polymatroid, a generalized index coding problem for which there is no optimal perfect linear index coding solution.
Example 10.
Consider the discrete polymatroid described in Example 9. The set of receivers constructed from the discrete polymatroid is provided in Example 9. Consider the perfect index code in which the source transmits $y_1^1 + x_1$, $y_2^1 + x_2$, $y_3^1 + x_3$ and $y_3^2 + x_1 + x_2 + x_3$. From Theorem 1, there exists a representable discrete polymatroid on the ground set $\lfloor 8 \rfloor$. Each element $i \in \lfloor 8 \rfloor$ is represented by the column space of the matrix $A_i$. For $i \in \lfloor 7 \rfloor$, $A_i = \epsilon_{i,7}$, and
$$A_8 = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
It was shown in the proof of Theorem 2 that the matrix $A_8$ has a specific structure of the form $[C^T\ D^T]^T$, with the matrix $D$ taking the form of an identity matrix. It was also shown that the matrix $C$ forms a valid representation of the discrete polymatroid $\mathbb{D}$. From the structure of $A_8$ we have
$$C = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix},$$
where the column blocks $C_1$, $C_2$ and $C_3$ consist of the first column, the second column, and the last two columns, respectively.
It can be verified that the matrix C is a representing matrix for the discrete polymatroid D .
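To make the verification concrete, the following is a small computational sketch (our own illustration; the helper names are ours and not from the paper). For every subset $S \subseteq \{1,2,3\}$ it computes the dimension over $\mathbb{F}_2$ of the span of the column blocks of $C$ indexed by $S$; these dimensions should coincide with the values of the rank function $\rho$ of the discrete polymatroid of Example 9.

```python
from itertools import combinations

C = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
blocks = {1: [0], 2: [1], 3: [2, 3]}     # columns forming C_1, C_2, C_3

def dim_f2(cols):
    """Dimension of the F_2 span of the given column vectors (length-3 tuples)."""
    span = {(0, 0, 0)}
    for c in cols:
        span |= {tuple(a ^ b for a, b in zip(v, c)) for v in span}
    return len(span).bit_length() - 1    # |span| = 2^dim

for size in (1, 2, 3):
    for S in combinations((1, 2, 3), size):
        cols = [tuple(C[r][j] for r in range(3)) for i in S for j in blocks[i]]
        print(S, dim_f2(cols))
```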
Example 11.
Consider the discrete polymatroid $\mathbb{D}$ on the ground set $\lfloor 3 \rfloor$ with the rank function $\rho$ given by $\rho(\{1\}) = \rho(\{2\}) = \rho(\{2,3\}) = 2$, $\rho(\{3\}) = 1$ and $\rho(\{1,2\}) = \rho(\{1,3\}) = \rho(\{1,2,3\}) = 3$. The generalized index coding problem $\mathcal{I}_{\mathbb{D}}(Z, R)$ constructed from the discrete polymatroid is as follows. The set of messages is $Z = \{x_1, x_2, x_3\} \cup \{y_1^1, y_1^2, y_2^1, y_2^2, y_3^1\}$.
The set of basis vectors is $\mathcal{B}(\mathbb{D}) = \{(1,1,1), (1,2,0), (2,0,1), (2,1,0)\}$. We have
$S_1((1,1,1)) = \{(x_i, \{y_1^j, y_2^k, y_3^1\}) : i \in \lfloor 3 \rfloor,\ j, k \in \lfloor 2 \rfloor\}$, $S_1((1,2,0)) = \{(x_i, \{y_1^j, y_2^1, y_2^2\}) : i \in \lfloor 3 \rfloor,\ j \in \lfloor 2 \rfloor\}$, $S_1((2,0,1)) = \{(x_i, \{y_1^1, y_1^2, y_3^1\}) : i \in \lfloor 3 \rfloor\}$, $S_1((2,1,0)) = \{(x_i, \{y_1^1, y_1^2, y_2^j\}) : i \in \lfloor 3 \rfloor,\ j \in \lfloor 2 \rfloor\}$ and $R_1 = S_1((1,1,1)) \cup S_1((1,2,0)) \cup S_1((2,0,1)) \cup S_1((2,1,0))$.
The minimal excluded vectors of $\mathbb{D}$ are $c_1 = (0,2,1)$, $c_2 = (2,1,1)$ and $c_3 = (2,2,0)$. We have
$S_2(c_1, 2, 1) = \{(y_2^1, \{y_2^2 + y_3^1\})\}$, $S_2(c_1, 2, 2) = \{(y_2^2, \{y_2^1 + y_3^1\})\}$, $S_2(c_1, 3, 1) = \{(y_3^1, \{y_2^1 + y_2^2\})\}$, $S_2(c_2, 1, 1) = \{(y_1^1, \{y_1^2 + y_2^i + y_3^1\}) : i \in \lfloor 2 \rfloor\}$, $S_2(c_2, 1, 2) = \{(y_1^2, \{y_1^1 + y_2^i + y_3^1\}) : i \in \lfloor 2 \rfloor\}$, $S_2(c_2, 2, 1) = \{(y_2^1, \{y_1^1 + y_1^2 + y_3^1\})\}$, $S_2(c_2, 2, 2) = \{(y_2^2, \{y_1^1 + y_1^2 + y_3^1\})\}$, $S_2(c_2, 3, 1) = \{(y_3^1, \{y_1^1 + y_1^2 + y_2^i\}) : i \in \lfloor 2 \rfloor\}$, $S_2(c_3, 1, 1) = \{(y_1^1, \{y_1^2 + y_2^1 + y_2^2\})\}$, $S_2(c_3, 1, 2) = \{(y_1^2, \{y_1^1 + y_2^1 + y_2^2\})\}$, $S_2(c_3, 2, 1) = \{(y_2^1, \{y_1^1 + y_1^2 + y_2^2\})\}$, $S_2(c_3, 2, 2) = \{(y_2^2, \{y_1^1 + y_1^2 + y_2^1\})\}$ and $R_2 = \bigcup_{c \in \{c_1, c_2, c_3\}} \bigcup_{j : (c)_j > 0} \bigcup_{p \in \lfloor \rho(\{j\}) \rfloor} S_2(c, j, p)$.
The third set of receivers $R_3$ is a collection of five receivers $(y_1^1, X)$, $(y_1^2, X)$, $(y_2^1, X)$, $(y_2^2, X)$, $(y_3^1, X)$, where $X = \{x_1, x_2, x_3\}$.
The discrete polymatroid D has a binary representation given by the representing matrix
$$A = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix},$$
where $A_1$ consists of the first two columns, $A_2$ of the next two columns, and $A_3$ of the last column.
Though the discrete polymatroid has a binary representation, the generalized index coding problem $\mathcal{I}_{\mathbb{D}}(Z, R)$ constructed from it does not have a perfect binary solution. By Theorem 2, representability of $\mathbb{D}$ over $\mathbb{F}_2$ is necessary for an optimal perfect scalar linear solution to exist; we now show that it is not sufficient. Every optimal perfect scalar linear solution for $\mathcal{I}_{\mathbb{D}}(Z, R)$ can be written as $f(Z) = [y_1^1\ y_1^2\ y_2^1\ y_2^2\ y_3^1]A + [x_1\ x_2\ x_3]G$, where $A$ is a $5 \times 5$ matrix over $\mathbb{F}_2$ and $G$ is a $3 \times 5$ matrix over $\mathbb{F}_2$. The matrix $A$ needs to be full rank to ensure that the receivers belonging to $R_3$ are satisfied, which allows us to assume $A$ to be the identity matrix. It can be shown by checking all possible solutions that there does not exist a $3 \times 5$ matrix $G$ over $\mathbb{F}_2$ that solves the generalized index coding problem. We provide an alternate proof here. Let $G_i$ denote the $i$th column of $G$. The first three columns of the matrix $G$ can be assumed to be the columns of the $3 \times 3$ identity matrix. The column $G_5$ then has to be $[1\ 1\ 1]^T$: it cannot be $[1\ 0\ 0]^T$, $[0\ 1\ 0]^T$ or $[1\ 1\ 0]^T$, since $\dim(V_1 + V_3) = 3$; if $G_5 = [0\ 1\ 1]^T$, the receivers $(x_i, \{y_1^2, y_2^1, y_3^1\})$, $i \in \lfloor 3 \rfloor$, fail to decode their demands; similarly, if $G_5 = [0\ 0\ 1]^T$, the receivers $(x_i, \{y_1^1, y_2^1, y_3^1\})$, $i \in \lfloor 3 \rfloor$, fail; and if $G_5 = [1\ 0\ 1]^T$, the receivers $(x_i, \{y_1^1, y_2^1, y_3^1\})$, $i \in \lfloor 3 \rfloor$, fail to decode their demands. From the restrictions $\dim(V_2 + V_3) = 2$ and $\dim(V_2) = 2$, the column vector $G_4$ has only two possibilities, $[1\ 1\ 0]^T$ and $[1\ 1\ 1]^T$. If $G_4 = [1\ 1\ 0]^T$, then the receivers $(x_i, \{y_1^1, y_1^2, y_2^2\})$ fail, and if $G_4 = [1\ 1\ 1]^T$, then the receivers $(x_i, \{y_1^1, y_2^2, y_3^1\})$ fail to decode their demands. This shows that there does not exist an optimal perfect scalar linear solution over $\mathbb{F}_2$ for $\mathcal{I}_{\mathbb{D}}(Z, R)$.
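The exhaustive check mentioned above can also be carried out with a short program. The sketch below is our own formalization (not code from the paper): taking $A$ to be the identity as argued in the example, transmission $m$ is $t_m = y_m + x\,G_m$; a receiver in $R_1$ can decode only if the columns of $G$ indexed by its known $y$-messages span $\mathbb{F}_2^3$, and a receiver in $R_2$ demanding $y_d$ with Has-set $\{\sum_{m \in K} y_m\}$ can decode only if $G_d$ equals $\sum_{m \in K} G_m$ (or $G_d = 0$). Enumerating all $2^{15}$ candidate matrices $G$ confirms that none of them satisfies all receivers.

```python
from itertools import product

def add(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def span_f2(cols):
    """All F_2 linear combinations of the given length-3 column vectors."""
    s = {(0, 0, 0)}
    for c in cols:
        s |= {add(v, c) for v in s}
    return s

# y-message ordering: 0:y_1^1, 1:y_1^2, 2:y_2^1, 3:y_2^2, 4:y_3^1
# R1 receivers: the columns of G indexed by the known y-messages must span F_2^3.
R1_sides = [(0, 2, 4), (0, 3, 4), (1, 2, 4), (1, 3, 4),   # from basis (1,1,1)
            (0, 2, 3), (1, 2, 3),                         # from basis (1,2,0)
            (0, 1, 4),                                    # from basis (2,0,1)
            (0, 1, 2), (0, 1, 3)]                         # from basis (2,1,0)

# R2 receivers: (index d of the demanded y-message, indices K whose sum is known).
R2 = [(2, (3, 4)), (3, (2, 4)), (4, (2, 3)),                              # from c1
      (0, (1, 2, 4)), (0, (1, 3, 4)), (1, (0, 2, 4)), (1, (0, 3, 4)),
      (2, (0, 1, 4)), (3, (0, 1, 4)), (4, (0, 1, 2)), (4, (0, 1, 3)),     # from c2
      (0, (1, 2, 3)), (1, (0, 2, 3)), (2, (0, 1, 3)), (3, (0, 1, 2))]     # from c3

def valid(G):
    if any(len(span_f2([G[m] for m in side])) < 8 for side in R1_sides):
        return False
    for d, K in R2:
        s = (0, 0, 0)
        for m in K:
            s = add(s, G[m])
        # y_d is decodable only if x.G_d can be formed from x.s (or G_d = 0).
        if G[d] != s and G[d] != (0, 0, 0):
            return False
    return True

columns = list(product((0, 1), repeat=3))
print(sum(valid(G) for G in product(columns, repeat=5)))   # prints 0
```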

6. Matroids and Generalized Index Coding Problems

In this section, we construct a generalized index coding problem from a matroid. The construction explained in Section 5 is more general, since discrete polymatroids can be viewed as a generalization of matroids. The advantage of the present construction is the existence of an if-and-only-if relationship between the constructed index coding problem and the representability of the matroid, as shown in Theorem 3. The construction enables us to start with any representable matroid and convert it into a generalized index coding problem with an optimal perfect solution. This paves the way for the construction of a large class of generalized index coding problems with optimal perfect solutions. The index coding problem constructed from the matroid is similar to the construction provided in [24]; only the receivers belonging to the set $R_2$, which are constructed from the circuits of the matroid, are different, as explained below.
Definition 7.
Given a matroid $M(Y, r)$ of rank $k$ over the ground set $Y = \{y_1, \ldots, y_m\}$, we define a corresponding index coding with coded side information problem $\mathcal{I}_M(Z, R)$ as follows:
1. $Z = Y \cup X$, where $X = \{x_1, \ldots, x_k\}$;
2. $R = R_1 \cup R_2 \cup R_3$, where
   (a) $R_1 = \{(x_i, B) : B \in \mathcal{B}(M),\ i = 1, \ldots, k\}$,
   (b) $R_2 = \{(y, \{\sum_{y_j \in C \setminus \{y\}} y_j\}) : C \in \mathcal{C}(M),\ y \in C\}$,
   (c) $R_3 = \{(y_i, X) : i = 1, \ldots, m\}$.
There is a one-to-one correspondence between a matroid $M$ and the discrete polymatroid $\mathbb{D}(M)$. From the discrete polymatroid $\mathbb{D}(M)$, we can construct a generalized index coding problem $\mathcal{I}_{\mathbb{D}(M)}(Z, R)$, and this problem reduces to the generalized index coding problem $\mathcal{I}_M(Z, R)$. The rank of every element in the ground set is one, and hence the set $Y$ remains the same in both $\mathcal{I}_{\mathbb{D}(M)}(Z, R)$ and $\mathcal{I}_M(Z, R)$. It can also be seen that the receiver sets $R_1$, $R_2$ and $R_3$ are the same for $\mathcal{I}_{\mathbb{D}(M)}(Z, R)$ and $\mathcal{I}_M(Z, R)$.
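The construction of Definition 7 is easy to mechanize. The following sketch (our own illustration, with hypothetical helper names; the rank oracle shown describes the uniform matroid $U_{2,3}$ used in Example 12) enumerates the bases and circuits of a matroid from its rank function and lists the receiver sets $R_1$, $R_2$ and $R_3$.

```python
from itertools import combinations

m, k = 3, 2                          # ground set {y1, ..., ym}, matroid rank k

def rank(S):                         # rank function of the uniform matroid U_{2,3}
    return min(len(S), 2)

ground = list(range(1, m + 1))
bases = [set(B) for B in combinations(ground, k) if rank(B) == k]
# circuits are the minimal dependent sets
dependent = [set(S) for r in range(1, m + 1) for S in combinations(ground, r)
             if rank(S) < len(S)]
circuits = [C for C in dependent if not any(D < C for D in dependent)]

X = {f"x{i}" for i in range(1, k + 1)}
R1 = [(f"x{i}", {f"y{j}" for j in B}) for B in bases for i in range(1, k + 1)]
R2 = [(f"y{j}", "+".join(f"y{i}" for i in sorted(C) if i != j))
      for C in circuits for j in sorted(C)]
R3 = [(f"y{j}", X) for j in ground]

print("bases:", bases)          # [{1, 2}, {1, 3}, {2, 3}]
print("circuits:", circuits)    # [{1, 2, 3}]
print("R1:", R1)
print("R2:", R2)                # ('y1', 'y2+y3'), ('y2', 'y1+y3'), ('y3', 'y1+y2')
print("R3:", R3)
```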
Remark 1.
Every receiver in an index coding problem introduces a dependency between the demanded messages, the messages available as side information and the transmitted messages. Matroids and discrete polymatroids capture such dependencies among their elements: each message and each transmitted symbol of the index code is viewed as an element of the ground set, and the dependencies among them are captured. However, for a set of dependence relations to characterize a matroid, it needs to satisfy certain axioms, referred to as the circuit axioms. The generalized index coding problem constructed in Definition 7 captures all these dependencies.
Theorem 3.
Consider a matroid $M(Y, r)$ on the ground set $Y = \{y_1, \ldots, y_m\}$, and let $\mathcal{I}_M(Z, R)$ be the corresponding generalized index coding problem constructed from it. Then, the matroid $M$ has a linear representation over $\mathbb{F}_2$ if and only if there exists an optimal perfect scalar linear index code for $\mathcal{I}_M(Z, R)$ over $\mathbb{F}_2$.
Proof. 
From Theorem 2, and from the fact that there is a one-to-one correspondence between the discrete polymatroid $\mathbb{D}(M)$ and the matroid $M$, it follows that the matroid $M$ has a linear representation over $\mathbb{F}_2$ if there exists an optimal perfect scalar linear index code for $\mathcal{I}_M(Z, R)$. We prove the only if part here: we assume that the matroid $M$ is representable and show the existence of an optimal perfect scalar linear index code for the index coding problem $\mathcal{I}_M(Z, R)$.
Let $M$ be a matrix representing the matroid. Since the matroid is of rank $k$, the matrix $M$ is a $k \times m$ matrix, and we denote its $i$th column by $M_i$. Let $\xi = (x_1, \ldots, x_k) \in \mathbb{F}_2^k$ and $\chi = (y_1, \ldots, y_m, x_1, \ldots, x_k) \in \mathbb{F}_2^{m+k}$.
Consider the following linear map $f(\chi) = (f_1(\chi), \ldots, f_m(\chi))$, where
$$f_i(\chi) = y_i + \xi M_i \in \mathbb{F}_2, \qquad i = 1, \ldots, m.$$
Note that $f$ is a map from $\mathbb{F}_2^{m+k}$ to $\mathbb{F}_2^m$. We show that $f$ is an optimal perfect scalar linear index code for $\mathcal{I}_M(Z, R)$ by verifying that every receiver is able to satisfy its demand using its Has-set and the transmitted messages.
  • Receivers in $R_1$: Consider a basis $B = \{y_{i_1}, \ldots, y_{i_k}\} \in \mathcal{B}(M)$, and let $\rho_i = (x_i, B) \in R_1$, $i = 1, \ldots, k$. We have $f_{i_j}(\chi) = y_{i_j} + \xi M_{i_j}$, $j = 1, 2, \ldots, k$. Combining these equations we obtain
$$[f_{i_1}(\chi)\ f_{i_2}(\chi)\ \cdots\ f_{i_k}(\chi)] = [y_{i_1}\ y_{i_2}\ \cdots\ y_{i_k}] + \xi\,[M_{i_1}\ M_{i_2}\ \cdots\ M_{i_k}].$$
    Since $\{y_{i_1}, \ldots, y_{i_k}\} \in \mathcal{B}(M)$, the matrix formed by the concatenation of $M_{i_1}, M_{i_2}, \ldots, M_{i_k}$ is invertible. Let $\mathbf{B} = [M_{i_1}\ M_{i_2}\ \cdots\ M_{i_k}]$. The receivers can obtain $\xi$ using the relation
$$\xi = [f_{i_1}(\chi) + y_{i_1}\ \ f_{i_2}(\chi) + y_{i_2}\ \ \cdots\ \ f_{i_k}(\chi) + y_{i_k}]\,\mathbf{B}^{-1},$$
    and from $\xi$ each receiver $\rho_i$ recovers its demanded message $x_i$.
  • Receivers in $R_2$: Let $C = \{y_{i_1}, \ldots, y_{i_c}\} \in \mathcal{C}(M)$ and $\rho = (y_{i_1}, \{\sum_{y_j \in C \setminus \{y_{i_1}\}} y_j\}) \in R_2$. Let $C' = C \setminus \{y_{i_1}\}$; the receiver possesses $\sum_{y_j \in C'} y_j = y_{i_2} + \cdots + y_{i_c}$ as its side information. We have $f_{i_j}(\chi) = y_{i_j} + \xi M_{i_j}$, $j = 1, 2, \ldots, c$. From this we can establish the relation
$$f_{i_2}(\chi) + \cdots + f_{i_c}(\chi) = y_{i_2} + \cdots + y_{i_c} + \xi\,(M_{i_2} + \cdots + M_{i_c}).$$
    Since the matroid is representable over the binary field, the columns indexed by a circuit sum to zero, so $M_{i_1} = M_{i_2} + M_{i_3} + \cdots + M_{i_c}$, and the receiver can decode its demanded message $y_{i_1}$ using the relation
$$y_{i_1} = \left(f_{i_1}(\chi) + f_{i_2}(\chi) + \cdots + f_{i_c}(\chi)\right) + \left(y_{i_2} + \cdots + y_{i_c}\right).$$
    In a similar way, all receivers belonging to R 2 can decode their demanded messages.
  • Receivers in $R_3$: For every $\rho = (y_i, X) \in R_3$, the receiver knows $\xi$ and can obtain its demanded message using the relation $y_i = f_i(\chi) + \xi M_i$.
The index code is clearly linear and of length $m$, and $\mu(\mathcal{I}_M(Z, R)) = m$. Hence, the code defined by the map $f$ is an optimal perfect linear index code. This completes the proof. □
Theorem 3 shows the existence of a relationship between the binary representability of matroids and the solutions of certain generalized index coding problems. Theorem 3 is illustrated in Example 12. In Example 13, we use the theorem to show that not every generalized index coding problem has a perfect binary solution.
Example 12.
The uniform matroid $U_{2,3}$ is defined on a ground set $Y = \{y_1, y_2, y_3\}$ of three elements such that $r(I) = |I|$ for every $I \subseteq Y$ with $|I| \leq 2$, and $r(Y) = 2$. Consider the binary linear representation of $U_{2,3}$ given by $M = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$. The index coding with coded side information problem corresponding to this matroid has the source message set $Z = \{y_1, y_2, y_3, x_1, x_2\}$, where each message belongs to the finite field $\mathbb{F}_2$. There are three sets of receivers, given below.
  • Receivers in $R_1$: $(x_1, \{y_1, y_2\})$, $(x_2, \{y_1, y_2\})$, $(x_1, \{y_1, y_3\})$, $(x_2, \{y_1, y_3\})$, $(x_1, \{y_2, y_3\})$, $(x_2, \{y_2, y_3\})$.
  • Receivers in $R_2$: $(y_1, \{y_2 + y_3\})$, $(y_2, \{y_1 + y_3\})$, $(y_3, \{y_1 + y_2\})$.
  • Receivers in $R_3$: $(y_1, \{x_1, x_2\})$, $(y_2, \{x_1, x_2\})$, $(y_3, \{x_1, x_2\})$.
An optimal perfect linear index coding solution for this index coding problem is the map $f : \mathbb{F}_2^5 \to \mathbb{F}_2^3$ defined as
$$f(\chi) = [y_1\ y_2\ y_3] + [x_1\ x_2]\,M.$$
The index code is as follows: $c_1 = y_1 + x_1$, $c_2 = y_2 + x_2$, $c_3 = y_3 + x_1 + x_2$.
A receiver $(x_i, \{y_j, y_k\}) \in R_1$ can decode its demand from the transmissions $c_j$ and $c_k$. Note that $c_1 + c_2 + c_3 = y_1 + y_2 + y_3$. Receivers belonging to the set $R_2$ can decode their demands by adding the information available in their Has-sets to $c_1 + c_2 + c_3$. The receivers $(y_1, \{x_1, x_2\})$, $(y_2, \{x_1, x_2\})$ and $(y_3, \{x_1, x_2\})$ belonging to $R_3$ can decode their demands from the transmissions $c_1$, $c_2$ and $c_3$, respectively. Thus, all the receivers in the index coding problem constructed from the uniform matroid $U_{2,3}$ are able to decode their required messages. This is made more explicit in [40].
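The decoding arguments above can be checked mechanically. The following small simulation (a sketch we add for illustration; the function names are ours) encodes all $2^5$ message tuples with $c_1 = y_1 + x_1$, $c_2 = y_2 + x_2$, $c_3 = y_3 + x_1 + x_2$ and verifies that each of the three receiver types of Example 12 recovers its demand.

```python
from itertools import product

M = [[1, 0, 1],
     [0, 1, 1]]          # binary representation of U_{2,3}; column i is M_i

def encode(y, x):
    """c_i = y_i + x . M_i over F_2."""
    return [(y[i] + sum(x[j] * M[j][i] for j in range(2))) % 2 for i in range(3)]

for y in product((0, 1), repeat=3):
    for x in product((0, 1), repeat=2):
        c = encode(y, x)
        # R1 receivers (x, {y_j, y_k}): strip the known y's and solve for x.
        for j, k in [(0, 1), (0, 2), (1, 2)]:
            s_j, s_k = (c[j] + y[j]) % 2, (c[k] + y[k]) % 2   # equal x.M_j and x.M_k
            sols = [xh for xh in product((0, 1), repeat=2)
                    if sum(xh[t] * M[t][j] for t in range(2)) % 2 == s_j
                    and sum(xh[t] * M[t][k] for t in range(2)) % 2 == s_k]
            assert sols == [x]
        # R2 receivers, e.g. (y_1, {y_2 + y_3}): add the Has-set to c_1 + c_2 + c_3.
        total = sum(c) % 2                                    # equals y_1 + y_2 + y_3
        for d in range(3):
            side = (y[(d + 1) % 3] + y[(d + 2) % 3]) % 2
            assert (total + side) % 2 == y[d]
        # R3 receivers (y_i, {x_1, x_2}): y_i = c_i + x.M_i.
        for i in range(3):
            assert (c[i] + sum(x[j] * M[j][i] for j in range(2))) % 2 == y[i]

print("all receivers of Example 12 decode correctly")
```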
Example 13.
Consider the index coding problem with coded side information I ( Z , R ) :
The set of messages is $Z = \{y_1, y_2, y_3, y_4, x_1, x_2\}$. The sets of receivers are given below.
  • Receivers in $R_1$: $\{(x_i, \{y_1, y_2\}) : i \in \lfloor 2 \rfloor\}$, $\{(x_i, \{y_1, y_3\}) : i \in \lfloor 2 \rfloor\}$, $\{(x_i, \{y_1, y_4\}) : i \in \lfloor 2 \rfloor\}$, $\{(x_i, \{y_2, y_3\}) : i \in \lfloor 2 \rfloor\}$, $\{(x_i, \{y_2, y_4\}) : i \in \lfloor 2 \rfloor\}$, $\{(x_i, \{y_3, y_4\}) : i \in \lfloor 2 \rfloor\}$.
  • Receivers in $R_2$: $(y_1, \{y_2 + y_3\})$, $(y_2, \{y_1 + y_3\})$, $(y_3, \{y_1 + y_2\})$, $(y_1, \{y_2 + y_4\})$, $(y_2, \{y_1 + y_4\})$, $(y_4, \{y_1 + y_2\})$, $(y_1, \{y_3 + y_4\})$, $(y_3, \{y_1 + y_4\})$, $(y_4, \{y_1 + y_3\})$, $(y_2, \{y_3 + y_4\})$, $(y_3, \{y_2 + y_4\})$ and $(y_4, \{y_2 + y_3\})$.
  • Receivers in $R_3$: $\{(y_i, \{x_1, x_2\}) : i \in \lfloor 4 \rfloor\}$.
The above index coding problem is constructed from the uniform matroid $U_{2,4}$, which is defined on a ground set $Y = \{y_1, y_2, y_3, y_4\}$ such that $r(I) = |I|$ for every $I \subseteq Y$ with $|I| \leq 2$, and $r(Y) = 2$. The matroid $U_{2,4}$ does not have a binary representation, but it has a representation over the ternary field $GF(3)$ given by the columns
$$V_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad V_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad V_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad V_4 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
As implied by Theorem 3, the above generalized index coding problem does not have an optimal perfect scalar binary linear solution: since the matroid does not have a linear representation over the binary field, the generalized index coding problem constructed from it does not have an optimal perfect scalar linear solution over $\mathbb{F}_2$.
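The non-representability of $U_{2,4}$ over $\mathbb{F}_2$ can also be confirmed by brute force: a binary representation would require four nonzero columns in $\mathbb{F}_2^2$ that are pairwise linearly independent, but $\mathbb{F}_2^2$ contains only three nonzero vectors. The sketch below (our own illustration) checks all assignments.

```python
from itertools import product

# nonzero vectors of F_2^2
vecs = [v for v in product((0, 1), repeat=2) if v != (0, 0)]

def pairwise_independent(cols):
    # over F_2, two nonzero vectors of length 2 are dependent iff they are equal
    return all(cols[a] != cols[b] for a in range(4) for b in range(a + 1, 4))

found = any(pairwise_independent(cols) for cols in product(vecs, repeat=4))
print(found)   # False: U_{2,4} has no representation over F_2
```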

7. Conclusions

In this work, we established a few connections between generalized index coding and discrete polymatroids. It is shown that the existence of a linear solution for a generalized index coding problem is connected to the existence of a representable discrete polymatroid satisfying certain conditions determined by the problem. From a discrete polymatroid, a corresponding generalized index coding problem is constructed, and it is shown that a representation of the discrete polymatroid exists if an optimal perfect vector linear solution exists for the constructed problem. An example is provided to illustrate that the converse of this result is not true. When a similar generalized index coding problem is constructed from a matroid, we show that a binary representation of the matroid exists if and only if the constructed index coding problem has an optimal binary scalar linear solution. This connection is helpful in determining whether the index coding problem has an optimal perfect binary scalar linear solution.
The results of this paper could be extended in several directions. The construction explained in Section 5 is general and can be applied to any discrete polymatroid; a generalized index coding problem can therefore be constructed from a non-representable discrete polymatroid, and further connections could be explored. Also, for the constructed index coding problem, certain receivers (those belonging to the set $R_2$) possess the sum of certain elements as their Has-sets; the Has-sets could instead consist of other linear combinations of the elements, and this deserves further study. Another possible direction is to construct a discrete polymatroid from an arbitrary generalized index coding problem. There may exist generalized index coding problems having non-linear solutions for which a connection to a non-representable discrete polymatroid could be established; this has not yet been explored and requires further investigation. Similar extensions can be considered for Theorem 3. The connection between matroids representable over non-binary fields and the generalized index coding problems constructed from those matroids is another open problem. The connections between error correcting index codes for generalized index coding problems and discrete polymatroids could also be explored.

Author Contributions

Conceptualization, A.T. and B.S.R. All authors equally contributed to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.

Acknowledgments

This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Birk, Y.; Kol, T. Coding-on-demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients. IEEE Trans. Inf. Theory 2006, 52, 2825–2830.
  2. Bar-Yossef, Z.; Birk, Y.; Jayram, T.; Kol, T. Index coding with side information. In Proceedings of the 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), Berkeley, CA, USA, 22–24 October 2006; pp. 197–206.
  3. Alon, N.; Hassidim, A.; Lubetzky, E.; Stav, U.; Weinstein, A. Broadcasting with side information. In Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science, Philadelphia, PA, USA, 25–28 October 2008; pp. 823–832.
  4. Ong, L.; Ho, C.K. Optimal index codes for a class of multicast networks with receiver side information. In Proceedings of the 2012 IEEE International Conference on Communications, Ottawa, ON, Canada, 10–15 June 2012; pp. 2213–2218.
  5. Thapa, C.; Ong, L.; Johnson, S.J. Interlinked cycles for index coding: Generalizing cycles and cliques. IEEE Trans. Inf. Theory 2017, 63, 3692–3711.
  6. Vaddi, M.B.; Rajan, B.S. Optimal index codes for a new class of interlinked cycle structure. IEEE Commun. Lett. 2018, 22, 684–687.
  7. Dau, S.H.; Skachek, V.; Chee, Y.M. On the security of index coding with side information. IEEE Trans. Inf. Theory 2012, 58, 3975–3988.
  8. Dau, S.H.; Skachek, V.; Chee, Y.M. Error correction for index coding with side information. IEEE Trans. Inf. Theory 2013, 59, 1517–1531.
  9. Kim, J.-W.; No, J.-S. Index coding with erroneous side information. IEEE Trans. Inf. Theory 2017, 63, 7687–7697.
  10. Gupta, A.; Rajan, B.S. Error-correcting functional index codes, generalized exclusive laws and graph coloring. In Proceedings of the 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 October 2016; pp. 1–7.
  11. Shum, K.W.; Dai, M.; Sung, C.W. Broadcasting with coded side information. In Proceedings of the 2012 IEEE 23rd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, 9–12 September 2012; pp. 89–94.
  12. Lee, N.; Dimakis, A.G.; Heath, R.W. Index coding with coded side-information. IEEE Commun. Lett. 2015, 19, 319–322.
  13. Dai, M.; Shum, K.W.; Sung, C.W. Data dissemination with side information and feedback. IEEE Trans. Wirel. Commun. 2014, 13, 4708–4720.
  14. Byrne, E.; Calderini, M. Error correction for index coding with coded side information. IEEE Trans. Inf. Theory 2017, 63, 3712–3728.
  15. Karat, N.S.; Samuel, S.; Rajan, B.S. Optimal error correcting index codes for some generalized index coding problems. IEEE Trans. Commun. 2019, 67, 929–942.
  16. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867.
  17. Parrinello, E.; Ünsal, A.; Elia, P. Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching. IEEE Trans. Inf. Theory 2020, 66, 2252–2268.
  18. Wan, K.; Tuninetti, D.; Piantanida, P. On the optimality of uncoded cache placement. In Proceedings of the 2016 IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 161–165.
  19. Wan, K.; Tuninetti, D.; Piantanida, P. A novel index coding scheme and its application to coded caching. In Proceedings of the 2017 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 12–17 February 2017; pp. 1–6.
  20. Karat, N.S.; Thomas, A.; Rajan, B.S. Error correction in coded caching with symmetric batch prefetching. IEEE Trans. Commun. 2019, 67, 5264–5274.
  21. Karat, N.S.; Thomas, A.; Rajan, B.S. Optimal error correcting delivery scheme for an optimal coded caching scheme with small buffers. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 1710–1714.
  22. Karat, N.S.; Thomas, A.; Rajan, B.S. Optimal error correcting delivery scheme for coded caching with symmetric batch prefetching. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 2092–2096.
  23. Karat, N.S.; Dey, S.; Thomas, A.; Rajan, B.S. An optimal linear error correcting delivery scheme for coded caching with shared caches. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1217–1221.
  24. El Rouayheb, S.; Sprintson, A.; Georghiades, C. On the index coding problem and its relation to network coding and matroid theory. IEEE Trans. Inf. Theory 2010, 56, 3187–3195.
  25. Effros, M.; El Rouayheb, S.; Langberg, M. An equivalence between network coding and index coding. IEEE Trans. Inf. Theory 2015, 61, 2478–2487.
  26. Dougherty, R.; Freiling, C.; Zeger, K. Networks, matroids, and non-Shannon information inequalities. IEEE Trans. Inf. Theory 2007, 53, 1949–1969.
  27. Dougherty, R.; Freiling, C.; Zeger, K. Insufficiency of linear coding in network information flow. IEEE Trans. Inf. Theory 2005, 51, 2745–2759.
  28. Muralidharan, V.T.; Rajan, B.S. Linear network coding, linear index coding and representable discrete polymatroids. IEEE Trans. Inf. Theory 2016, 62, 4096–4119.
  29. Gupta, A.; Rajan, B.S. A relation between network computation and functional index coding problems. IEEE Trans. Commun. 2017, 65, 705–714.
  30. Thomas, A.; Rajan, B.S. Generalized index coding problem and discrete polymatroids. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2553–2557.
  31. Peeters, R. Orthogonal representations over finite fields and the chromatic number of graphs. Combinatorica 1996, 16, 417–431.
  32. Welsh, D.J.A. Matroid Theory; Academic Press: London, UK, 1976.
  33. Oxley, J.G. Matroid Theory; Oxford University Press: New York, NY, USA, 1993.
  34. Simonis, J.; Ashikhmin, A. Almost affine codes. Des. Codes Cryptogr. 1998, 14, 179–197.
  35. Matus, F. Matroid representations by partitions. Discret. Math. 1999, 203, 169–194.
  36. Herzog, J.; Hibi, T. Discrete polymatroids. J. Algebr. Combinat. 2002, 16, 239–268.
  37. Vladoiu, M. Discrete polymatroids. Analele Stiintifice Univ. Ovidius Constanta 2006, 14, 97–120.
  38. Farras, O.; Marti-Farre, J.; Padro, C. Ideal multipartite secret sharing schemes. J. Cryptol. 2012, 25, 434–463.
  39. Thomas, A.; Rajan, B.S. Vector linear error correcting index codes and discrete polymatroids. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 1039–1043.
  40. Thomas, A.; Rajan, B.S. Generalized Index Coding Problem and Discrete Polymatroids. arXiv 2017, arXiv:1701.03559v1.
Figure 1. Diagram illustrating the connections between discrete polymatroids and generalized index coding problems.
Figure 2. Diagram illustrating the connections between matroids and generalized index coding problems.
