Next Article in Journal
Random Search Walks Inside Absorbing Annuli
Previous Article in Journal
A Max-Flow Approach to Random Tensor Networks
Previous Article in Special Issue
Universal Encryption of Individual Sequences Under Maximal Information Leakage
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Graph-Theoretic Limits of Distributed Computation: Entropy, Eigenvalues, and Chromatic Numbers †

by
Mohammad Reza Deylam Salehi
* and
Derya Malak
*
Communication Systems Department, EURECOM, Sophia Antipolis, 06410 Biot, France
*
Authors to whom correspondence should be addressed.
An early version of this work appeared in Proceedings of the 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 26–29 September 2023.
Entropy 2025, 27(7), 757; https://doi.org/10.3390/e27070757
Submission received: 30 May 2025 / Revised: 8 July 2025 / Accepted: 11 July 2025 / Published: 15 July 2025
(This article belongs to the Special Issue Information Theory and Data Compression)

Abstract

We address the problem of the distributed computation of arbitrary functions of two correlated sources, X 1 and X 2 , residing in two distributed source nodes, respectively. We exploit the structure of a computation task by coding source characteristic graphs (and multiple instances using the n-fold OR product of this graph with itself). For regular graphs and general graphs, we establish bounds on the optimal rate—characterized by the chromatic entropy for the n-fold graph products—that allows a receiver for asymptotically lossless computation of arbitrary functions over finite fields. For the special class of cycle graphs (i.e., 2-regular graphs), we establish an exact characterization of chromatic numbers and derive bounds on the required rates. Next, focusing on the more general class of d-regular graphs, we establish connections between d-regular graphs and expansion rates for n-fold graph products using graph spectra. Finally, for general graphs, we leverage the Gershgorin Circle Theorem (GCT) to provide a characterization of the spectra, which allows us to derive new bounds on the optimal rate. Our codes leverage the spectra of the computation and provide a graph expansion-based characterization to succinctly capture the computation structure, providing new insights into the problem of distributed computation of arbitrary functions.

1. Introduction

Data compression is the process of using fewer bits than the original size of the source, which is given by Shannon’s entropy of a source [1] in the case of a point-to-point communication model. In the setting of distributed sources, where the goal is to recover them jointly at a receiver, the Slepian–Wolf theorem gives the fundamental limits of rate for compression [2]. In the case when the receiver wants to compute a deterministic function of distributed sources, a further reduction in compression is possible via accounting for the structure of the function as well as the structure of the joint source distribution [3]. This approach is known as distributed functional compression, in which a function represents an abstraction of a particular task, the sources separately compress their data and send color encodings of their data to a common receiver, and the receiver, from the obtained colors, recovers the desired function of the sources. This approach differs from the conventional approach [2], where the receiver jointly decodes source sequences.

1.1. Motivation and Literature Review

Let us begin with an example. Consider a college student database with information including the rental records, demographics, and health of individuals. The Ministry of Science wants to offer housing aid to a particular group of students by requiring information solely on the rental contracts and payslips of the students, and without disclosing their personal data, due to privacy and redundancy constraints. In such settings, conventional data compression or joint source compression schemes are suboptimal, as they store and transmit streams of source data regardless of task relevance. Our work targets this gap by investigating a framework for distributed functional compression, where sources separately encode their inputs using graph-based colorings, and a receiver computes the desired function without decoding all of the source variables. The scenario of student housing is an example of realizing functional compression, which avoids compressing and transmitting large volumes of all available data and is instead tailored to the specifics of the function, i.e., a student’s eligibility for getting housing aid or not.
In Shannon’s breakthrough work in [1], the function to be recovered at the receiver is the identity function of the source variable, i.e., the source itself. Generalizing the noiseless coding of a discrete information source, given in [1], to distributed compression and joint decoding of two jointly distributed and finite alphabet random variables X 1 and X 2 , the Slepian–Wolf theorem gives a theoretical lower bound for the lossless coding rate of distributed coding of such sources [2], where the two data sequences of memoryless correlated sources with finite alphabets X 1 n and X 2 n are obtained by n repeated independent drawings from a discrete bivariate distribution. Practical implementation schemes for Slepian–Wolf compression have been proposed, including [4,5,6]. In functional compression of distributed sources X 1 and X 2 , however, the goal is to compress the sources separately while ensuring that a deterministic function f ( X 1 , X 2 ) of these sources can be calculated by a user. Prior attempts at functional compression can be categorized into works focusing on lossless and zero-error compression of functions [1,2,3,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] and those for which the compression schemes tolerate distortion for lossy reconstruction [24,25,26,27,28,29,30].
Several special cases of distributed compression have been studied. In [14], Ahlswede and Körner have determined the rate region for a distributed compression setting where separate encoders encode the n-th realizations of correlated sources X 1 and X 2 observed by sources one and two, respectively, and a receiver aims to only recover X 2 n . Körner and Marton, in [13], have dealt with the problem of distributed encoding of two binary sources X 1 and X 2 to compute their modulo-two sum, i.e., f ( X 1 , X 2 ) = ( X 1 + X 2 ) mod 2 , at the receiver. In [26], Yamamoto has extended the Wyner–Ziv model [24], which has addressed lossy source coding with side information at the decoder, to a setting where the decoder estimates a function f ( X 1 , X 2 ) of the source X 1 given side information X 2 . In [16], Han and Kobayashi have established an achievable distributed functional reconstruction scheme, which depends on the structure of f ( X 1 , X 2 ) and the joint distribution of ( X 1 , X 2 ) . Building on [26], optimal coding schemes and achievable rate regions have been derived for lossless and lossy compression of source X 1 for distributed computation of f ( X 1 , X 2 ) given side information X 2 [9,10,31,32], for distributed compression of sources X 1 and X 2 for the computation of f ( X 1 , X 2 ) [11,33]. More specifically, in [10], Orlitsky and Roche have provided a single letter characterization for general functions of two variables using the notion of source characteristic graphs (confusion graphs) introduced by Körner [34], where the vertices are the possible realizations of a source and the edges capture the function structure.

1.2. Overview and Contributions

In this work, we design a coding framework for the problem of distributed functional compression with two distributed sources having access to X 1 and X 2 , respectively, each with a finite alphabet, and a receiver that wants to reconstruct the function f ( X 1 , X 2 ) in an asymptotically lossless manner. To capture the structure of the function f in compression, we exploit the notion of source characteristic graphs. Given source variable X 1 , the limits of color reuse are determined by chromatic numbers χ ( G X 1 ) [22], where Witsenhausen studied zero-error compression using characteristic graphs with side information. The fundamental limits of compression rate are characterized by Körner’s graph entropy of G X 1 , i.e., H G ( X 1 ) [34]. While prior works have focused on specific function classes or leveraged receivers’ side information, we lack a framework that captures general computation tasks and the fundamental compression limits of distributed functional compression. To capture the computation structure abstracted by the function and characterize minimum achievable rates, we devise a graph-theoretic framework. This approach enables us to attain graph entropy for the given function, realized by computing minimum-entropy colorings of the n-fold OR products of the source characteristic graphs. As the vertex set size grows with n, determining minimum entropy colorings and bounding chromatic numbers become increasingly complex, even under structural regularities. Our framework addresses these challenges through spectral and expansion-based techniques for bounding rates for various classes of characteristic graphs.
To address different computation scenarios, we examine several characteristic graph topologies, including cyclic graphs, denoted by C i , where i is the number of vertices; their generalizations to d-regular graphs, denoted by G d , V ; and general characteristic graphs, denoted by G, as motivated next.
Cycle graphs (or cyclic graphs) appear in many practical scenarios, such as periodic functions and mod functions, which are widely used in cryptography, computer algebra and science, and musical arts [35]. In cryptography, Caesar Ciphers, Rivest-Shamir-Adleman (RSA) algorithm [36], Diffie-Hellman [37], the Advanced Encryption Standard (AES) [38], and the International Data Encryption Algorithm (IDEA) are widely used for secure data transmission [39]. The calculation of checksums within serial numbers is another application of interest [40]. For example, ISBNs (International Standard Book Numbers) use mod 11 arithmetic for 10-digit ISBNs or mod 10 for 13-digit ISBNs to detect errors. In addition, International Bank Account Numbers (IBANs) use mod 97 arithmetic to identify mistakes in bank account numbers entered by users. Cyclic characteristic graphs allow for a more efficient reuse of colors compared with acyclic graphs with a higher average degree. Furthermore, cycles and their products—built to capture a source sequence X 1 n —have good connectivity properties, enabling an exact characterization of their chromatic numbers (and bounding their graph entropies).
d-regular graphs have broad applications ranging from representing network topologies to modeling social networks, coding theory for constructing error-correcting codes [41], random walks and Markov chains in analyzing state transitions [42], spectral graph theory providing insights into graphs’ characteristics [43,44], and fault-tolerant systems [41,42,45,46,47,48]. In general, d-regular graphs help model frameworks for structured data, such as graph neural networks [49], making them one of the main topics of investigation in this paper.
The main contributions of this paper can be summarized as follows:
  • Cyclic characteristic graphs: We derive exact expressions for the degree of a vertex x n [ V n ] in the n-fold OR product C i n (as detailed by Alon and Orlitsky in [3]) of cycles C i (where i = V , see Proposition 2), denoted by d e g ( x ) , and for the chromatic number of even cycles C 2 k , denoted by χ ( C 2 k ) where k Z + . Then, we devise a polynomial-time (finding a minimum entropy coloring in general graphs is an NP-hard problem [50]) achievable coloring scheme for odd cycles C 2 k + 1 , leveraging the structure of C i and its OR products (see Proposition 3). Given C i , we investigate the largest eigenvalue of its adjacency matrix, and using that, we present bounds on the chromatic number (see Proposition 7). We also provide bounds on Körner’s graph entropy of C i (see Proposition 5).
  • d-regular characteristic graphs: We characterize the exact degree of a vertex and the chromatic number of d-regular graphs, denoted by G d , V and their n-fold OR products G d , V n (see Propositions 8 and 9). Additionally, given a d-regular graph, the concept of graph expansion helps determine how the corresponding OR products are related. Capturing the structure of the OR products graphs, we then present a lower bound on the expansion rate of G d , V n (see Proposition 10).
  • General characteristic graphs: Given a general graph, G ( V , E ) , we calculate the degree of each vertex for its n-fold OR product (see Corollary 4). We present upper and lower bounds on the expansion rate (see Corollary 7). We then investigate the entropy of general characteristic graphs (see Proposition 11). We derive bounds on the largest eigenvalue (see Corollary 6) and the chromatic number (see Corollary 5) using the adjacency matrix of the n-fold OR product graph and the famous Gershgorin Circle Theorem (GCT), which is a theorem that identifies the range of the eigenvalues for a given square matrix [51]. We use GCT to bound eigenvalues of the adjacency matrix of a given graph n-fold OR product via exploiting the structure of OR products (see Theorem 2 and Corollary 9).
By leveraging chromatic numbers and graph entropy bounds, our results, as outlined above, illustrate the connection between graph structures and entropy-based communication cost and provide insights for functional compression in distributed settings.

1.3. Organization

The rest of this manuscript is organized as follows. In Section 2, we review the literature and provide a technical preliminary on graphs, their valid colorings, and the n-fold OR products of characteristic graphs. In Section 3, we present the main results of the paper, including the achievable coloring schemes and the bounds on the degrees, eigenvalues, and chromatic numbers for cycles, d-regular graphs, and general graphs. We further derive upper and lower bounds on the graph entropy and expansion rate for the n-fold OR product of characteristic graphs. In Section 5, we summarize our key results and outline potential exploration directions. Proofs of the main results are presented in the Appendix.

1.4. Notation

Letter X denotes a discrete random variable with distribution p ( x ) over the finite alphabet X , where x is a realization of X, and X n = X 1 , X 2 , X n is an independent and identically distributed (i.i.d.) sequence where each element is distributed according to p ( x ) . We denote matrices and vectors by boldface letters, e.g., A . The joint distribution of the source variables X 1 and X 2 is denoted by p ( x 1 , x 2 ) . We denote by G X 1 and G X 2 the characteristic graphs that sources X 1 and X 2 build for computing a given function f ( X 1 , X 2 ) , respectively. We use the boldface notation x 1 n = x 11 , , x 1 n to represent the length n-th realization of X 1 , and similarly for X 2 . We let [ n ] = { 1 , , n } for n Z + , and [ a , b ] = { a , a + 1 , , b } for a , b Z + .
Given a graph G ( V , E ) , we denote by χ ( G ) and χ f ( G ) its chromatic and fractional chromatic numbers, respectively. We denote by d e g ( x k ) the degree of the vertex x k [ V ] , d max = max k [ V ] d e g ( x k ) is the maximum vertex degree of G, and d e g avg ( x k ) denotes the average degree over all x k [ V ] in G. We denote by C i = G ( V , E ) a cycle graph with i = V vertices, by C i n = G ( V n , E n ) its n-fold OR product, and by C i j ( l ) the set of distinct colors in sub-graph l [ V ] of the j-fold OR product of C i . We denote the coloring distribution of G by C G and the set of distinct colors of G by C ( G ) . We denote a d-regular graph on V vertices by G d , V . We denote a complete graph with i = V vertices and its n-fold OR product by K i and K i n , respectively. We denote C G as the PMF of a valid coloring of a graph G.
Given a graph G ( V , E ) , we denote by J V and I V an all-one and identity matrix of size V × V each, by A f the adjacency matrix, where A f = ( a x x ) 1 x , x V is a symmetric ( 0 , 1 ) -matrix with zeros on its diagonal, i.e., a x x = 0 , and a x x = 1 indicates that two distinct vertices x , x [ V ] are adjacent, and a x x = 0 when there is no edge between them. The determinant and trace of A f are denoted by det ( A f ) and t r a c e ( A f ) , respectively. The largest and the smallest eigenvalues of A f are denoted by λ 1 ( A f ) , and λ V ( A f ) , respectively, σ ( A f ) is the set of all eigenvalues of A f , and ϑ ( G j ) is the set of distinct eigenvalues of the adjacency matrix of G j , i.e., A f j . Exploiting GCT to characterize the eigenvalues of A f , the k-th interval that contains an eigenvalue is denoted by δ k , k [ V ] . LHS and RHS represent the left- and right-hand sides of an equation, respectively.

2. Technical Preliminary

This section introduces the fundamental concepts related to graph theory, such as degrees, independent sets, paths, cycles, d-regular graphs, and the notion of graph expansion [34,52,53,54,55]. Furthermore, it discusses the concepts of characteristic graphs, the n-fold OR products of characteristic graphs, and traditional and fractional coloring of graphs [55,56,57].

2.1. Source Characteristic Graphs and Their OR Products

A graph is represented by G ( V , E ) , where V = [ V ] denotes the set of its vertices, with cardinality | V | = V , and E is the set of its edges, with cardinality | E | = E .
Definition 1
(Degree of a vertex [52]). Given G ( V , E ) , the degree of a vertex x k [ V ] for k [ V ] , represented by d e g ( x k ) , is the number of edges it is connected to, i.e., the number of neighbors of x k [ V ] . The average degree across nodes in G is denoted by d avg = x k [ V ] d e g ( x k ) V .
We next introduce the concept of an independent set, which plays a critical role in determining a valid coloring of characteristic graphs that we detail in Section 2.2.
Definition 2
(Independent set and maximal independent set [53]). An independent set, I S G , in G ( V , E ) is a subset of vertices of V such that no two are adjacent. A maximal independent set, M I S G , is an independent set in G that is not a subset of any other independent set of G. A maximum independent set is an I S G with maximum cardinality, and its size is referred to as the independence number, denoted by α ( G ) .
In distributed functional compression with M source nodes, each holding X k X k , k [ M ] , a receiver aims to reconstruct f ( X 1 , X 2 , , X M ) . To aid in distinguishing function outcomes, each source k builds a characteristic graph G X k with vertex set X k and edges determined by the function and the joint source PMF. Next, we define characteristic graphs for a bivariate setting.
Definition 3
(Source characteristic graphs [34]). Let X 1 and X 2 be two distributed source variables with a joint distribution p ( x 1 , x 2 ) . Source one builds a characteristic graph G X 1 = G ( V , E ) for distinguishing the outcomes of a function f ( X 1 , X 2 ) , where V = X 1 , and an edge ( x 1 1 , x 1 2 ) E if and only if there exists a x 2 1 X 2 such that p ( x 1 1 , x 2 1 ) · p ( x 1 2 , x 2 1 ) > 0 and f ( x 1 1 , x 2 1 ) f ( x 1 2 , x 2 1 ) , i.e., these two vertices of G X 1 should be distinguished.
Definition 4
(Entropy of a characteristic graph [34]). Given a source random variable X 1 with characteristic graph G X 1 = G ( V , E ) , the entropy of G X 1 is defined as
H G X 1 ( X 1 ) = min X 1 U 1 M I S G X 1 I ( X 1 ; U 1 ) ,
where M I S G X 1 represents the set of all MISs of G X 1 [3]. The notation X 1 U 1 M I S G X 1 indicates that the minimization is performed over all distributions p ( u 1 , x 1 ) such that p ( u 1 , x 1 ) > 0 implies x 1 u 1 , where U 1 is an MIS of G X 1 .
From (1), it follows that H G X 1 ( X 1 ) H ( X 1 ) , yielding savings over [1]. Next, we introduce a path, which refers to a sequence of edges connecting a subset of vertices within a graph.
Definition 5
(Path and Hamiltonian path [58]). Given an undirected graph G ( V , E ) , a path is a sequence of vertices starting and ending with distinct vertices, where each pair of consecutive vertices is connected by an edge, and no vertex is repeated. A path that includes every vertex of a graph exactly once is called a Hamiltonian path.
We next define the class of d-regular graphs that embed the special case of cycles.
Definition 6
(d-regular graphs [54,55]). A d-regular graph G d , V ( V , E ) is a graph where each vertex has the same degree d, i.e., d = d e g ( x k ) for all x k [ V ] . A d-regular graph G d , V , where d , V Z + , satisfies V d + 1 . Furthermore, if d is odd, the total number of vertices V must be even [59]. Cyclic graphs are 2-regular graphs with a Hamiltonian path.
We next define the expansion rate of graphs.
Definition 7
(Expansion rate [60]). An undirected expander graph G is a graph having relatively few edges in comparison to its number of vertices while maintaining strong connectivity properties, which ensures that each vertex is reachable by paths from at least 2 directions [60]. The expansion of G with respect to a subset of its vertices Y [ V ] is determined as follows:
E θ ( G ) = | N G ( Y ) | | Y | ,
where N G ( Y ) = { u [ V ] , u Y : v Y , ( v , u ) E } denotes the set of neighbors of Y.
We next illustrate the concept of characteristic graphs with an example.
Example 1.
Consider the problem of distributed functional compression of f ( X 1 , X 2 ) = ( X 1 + X 2 ) mod 2 , with two source variables X 1 and X 2 and one receiver, where X 1 is uniform over the alphabet X 1 = { 0 , 1 , 2 , 3 } , and X 2 is uniform over X 2 = { 0 , 1 } .
For even values, i.e., X 1 { 0 , 2 } , the output is f ( X 1 , X 2 ) = X 2 , and for odd values, i.e., when X 1 { 1 , 3 } , we have f ( X 1 , X 2 ) = ( X 2 + 1 ) mod 2 . In the characteristic graph built for X 1 , namely G X 1 , we do not need to distinguish the elements of { 0 ,   2 } from each other, and similarly for the elements of { 1 ,   3 } . However, these two sets must be distinguished, which is possible via using two distinct colors, B and O. To that end, we assign the elements of { 0 ,   2 } and { 1 ,   3 } colors B and O, respectively. Similarly, the outcome X 2 = 1 is assigned R, and X 2 = 0 is assigned Y. We illustrate the coloring of G X 1 and G X 2 in Figure 1. Transmission of an assigned color pair from distributed sources to a common receiver according to the described rule is sufficient for the receiver to determine the corresponding outcome of f via a look-up table [11]. The scheme satisfies the necessary condition since both G X 1 and G X 2 require at least 2 colors (1 bit per source). Using fewer colors by assigning the same color to adjacent vertices violates the condition f ( x 1 1 , x 2 1 ) f ( x 1 2 , x 2 1 ) when p ( x 1 1 , x 2 1 ) · p ( x 1 2 , x 2 1 ) > 0 .
To determine the fundamental limits of lossless compression of a source sequence [3], we exploit the notion of n-fold OR products of graphs, as introduced next. Here, we exclusively focus on the OR product graphs for realizing lossless compression of multi-letter schemes [61]. If the goal is instead to perform zero-error source compression, one needs to employ the n-fold AND products of graphs [62,63].
Given a source variable X, we next detail the construction for the n-fold OR products graphs by generalizing the rule in Definition 3 for building G X to multiple source instances X n .
Definition 8
(n-fold OR product graph [55,56,57]). For n > 1 , the n-fold OR product of G X = G ( V , E ) is represented as G X n = G ( V n , E n ) , where V n = X n , and given two distinct vertices x 1 n = x 11 1 , , x 1 n 1 [ V n ] and x 2 n = x 21 1 , , x 2 n 1 [ V n ] , it holds that ( x 1 n , x 2 n ) E n whenat least one k [ n ] such that ( x 1 k 1 , x 1 k 2 ) E .
The n-fold OR product has been extensively used in the asymptotically lossless compression of distributed sources for function computation; see, e.g., [10,11,12]. Building on [34], the authors in [10] have demonstrated that given two distributed source variables X 1 and X 2 , the lowest sum rate needed for distributed computing of f ( X 1 , X 2 ) can be achieved by encoding the n-fold OR product graphs G X 1 n and G X 2 n , in the limit as n goes to infinity.
We next describe how to determine the n-fold OR product G X n of a characteristic graph iteratively, from the ( n 1 ) -fold OR product graph G X n 1 .
Definition 9
(Sub-graphs of G X n ). Given an n-fold OR product graph G X n , we denote the collection of graphs { G X n ( l ) } l [ V ] in G X n as the sub-graphs of G X n , where each of G X n ( l ) represents ( n 1 ) -fold OR product of G X with itself.
We next define ‘full connection’, which refers to links between sub-graphs in an OR product.
Definition 10
(Full connection between two graphs). Given G 1 ( V 1 , E 1 ) and G 2 ( V 2 , E 2 ) , if for each x 1 k [ V 1 ] and each x 2 t [ V 2 ] , there exists an edge ( x 1 k , x 2 t ) , i.e., the bipartite graph formed between [ V 1 ] and [ V 2 ] is complete, we describe the two graphs as having a full connection.
With these principles established, we next detail their application to functional compression.

2.2. Coloring of Characteristic Graphs

A valid (proper) vertex coloring of G X 1 assigns a color to each vertex such that adjacent vertices receive distinct colors, indicating which source realizations need different codes (colors). Non-adjacent vertices may share the same color, which is known as traditional graph coloring. A valid coloring that achieves the minimum entropy among all valid colorings provides a lower bound to the compression rate for the lossless reconstruction of the desired function. The minimum number of colors required to achieve a valid coloring of G X 1 is called the chromatic number, χ ( G X 1 ) (in general, the problem of determining χ ( G X 1 ) is NP-complete [50]). Fractional graph coloring generalizes the concept of traditional coloring by assigning a fixed number of distinct colors from a set of available colors to each x k [ V ] such that adjacent vertices have non-overlapping sets of colors [12,64]. Moreover, given the connection between the coloring distribution and the minimum entropy of graphs [34], the fractional chromatic number of G, which is a lower bound on the chromatic number of G, is important to investigate.
We next detail how to obtain a valid b-fold coloring out of a available colors.
Definition 11
(The fractional chromatic number). A valid b-fold coloring assigns sets of distinct colors with cardinality b to each vertex such that adjacent vertices receive disjoint sets of b colors. A valid a : b coloring is a valid b-fold coloring that uses a total of a available colors. The notation χ b ( G ) represents the b-fold chromatic number of G, with the smallest a number of colors such that an a : b coloring exists. The fractional chromatic number of G is given as
χ f ( G ) = lim b χ b ( G ) b = inf b χ b ( G ) b ,
where χ b ( G ) Z + , and the second equality follows from the subadditivity of χ b ( G ) [64].
Given a graph G, the fractional chromatic number of its n-fold product is computed as [12]:
χ f ( G n ) = χ f ( G ) n .
Traditional coloring is a special case of the valid a : b coloring of G n where b = 1 and a = χ ( G n ) .
To investigate a general characteristic graph G and its chromatic number and entropy, we aim to leverage the eigenvalue relationships of its adjacency matrix. Therefore, we define the GCT for square and block matrix representations to address coloring in general characteristic graphs.

2.3. Gershgorin Circle Theorem (GCT)

We next introduce the GCT for the eigenvalues of square matrices and later establish their connection to the chromatic number in Section 3.3.
Definition 12
(GCT for square matrices [51]). Given a square matrix A C V × V with elements a k t , where k , t [ V ] , we define D k as a circle that contains the eigenvalue λ k ( A ) as follows:
D k = λ k ( A ) C : | λ k ( A ) a k k | t : t k V | a k t | ,
where t : t k V | a k t | is the sum of the absolute values of the non-diagonal entries in the k-th row of A , and the set of all eigenvalues σ ( A ) = { λ k ( A ) : k [ V ] } satisfies σ ( A ) D 1 D 2 D V .
Given that A f n is a block matrix, we next define GCT for block matrices.
Definition 13
(GCT for block matrices [51,65]). Consider a symmetric matrix A R m n × m n , composed of n block matrices, where each block matrix is denoted by A k t R m × m for k , t [ m ] . Let σ ( A ) represent the set of all eigenvalues of A . The circle corresponding to eigenvalue λ k of the block matrix A is then defined as follows:
D k b = λ k ( A ) σ ( A ) : λ k ( A ) λ k ( A k k ) t : t k m | A k t | ,
where t : t k m | A k t | is the sum of the absolute values of the non-diagonal entries in the k-th block row of A , i.e., A k = [ A k 1 , A k 2 , , A k n ] . The set of eigenvalues of A , i.e., σ ( A ) , satisfies
σ ( A ) k = 1 n D k b .
From (6), we deduce that the regions (circles) covering the eigenvalues of A are centered at eigenvalues of A k k , and the circles’ radii are enlarged by the size of the non-diagonal matrices A k t . Given an n-fold OR product graph with an adjacency matrix A f n , where A f n is a V n × V n binary and symmetric matrix, the circles D k b in (6) can be simplified to block intervals δ k b .

3. Bounds on Cyclic and Regular Graphs

We here detail characteristic graphs that are d-regular and derive lower and upper bounds on their chromatic numbers and graph entropies. Our novel contributions include the characterization of chromatic numbers for n-fold OR products of cycles, as well as a novel coloring scheme for OR products of odd cycles, as detailed in Section 3.1. In Section 3.2, we bound the entropy of characteristic graphs for cycles. Given a cyclic graph, in Section 3.3, we analyze its key properties using the eigenvalues of its adjacency matrix, and in Section 3.4, we bound its chromatic number. In Section 3.5, we characterize the degrees and chromatic numbers for regular graphs and their OR products. Section 3.6 details the expansion rates of OR products of regular graphs, with implications on the fundamental limits of compressibility of such graphs.

3.1. Coloring Cyclic Graphs

Let C i be a cycle graph with i vertices that represents the characteristic graph that source X 1 builds for computing f (similarly for source X 2 ). For an even cycle, i = 2 k , and for an odd cycle, i = 2 k + 1 , for some k Z + . We seek to compress G X 1 and G X 2 to recover the desired function outcome at a receiver in an asymptotically lossless manner. To that end, we determine the minimum entropy coloring for the n-fold OR product of C i , denoted by C i n (and similarly for G X 2 ), for the receiver to decode f from the received colors.
We start by determining the degree of each vertex in C i j for j [ n ] .
Proposition 1
(Degree of vertices in C i n ). The degrees in the n-fold OR product of a cycle graph, C i n for n 2 , are calculated as follows:
d e g ( x n ) = 2 · V n 1 V 1 , x n [ V n ] .
Proof. 
See Appendix A. □
In regular graphs, where all vertices have the same degree, we omit the index k of x k in Propositions 1 and 8. From Proposition 1, we infer that for a given pair of vertices x t , x k [ V ] where t k , if d e g ( x t ) = d e g ( x k ) , then for the n-fold OR product, d e g ( x t n ) = d e g ( x k n ) , for x t n , x k n [ V n ] , i.e., taking the n-fold OR products does not alter the equality of degrees. Therefore, for any d-regular graph (including cycles), we derive the following result about the regularity of its OR products.
Corollary 1.
Given a d-regular graph G d , V , where d = d e g ( x ) , its n-fold OR product with itself for n 1 , i.e., G d , V n , is also a regular graph, with degree d e g ( x n ) , and total number of edges E n = k = 1 V n d e g ( x n ) / 2 .

3.1.1. Even Cycles

Here, we consider even cycles, denoted by C 2 k , k Z + . The vertices of C 2 k are sequentially numbered clockwise from 0 to 2 k 1 (e.g., see G X 1 in Figure 1), and alternatingly colored. Vertices with even indices are assigned one color, while those with odd indices receive another. We next determine the chromatic number of C 2 k n , denoted as χ ( C 2 k n ) .
Proposition 2
(Chromatic number of C 2 k n ). The chromatic number of C 2 k n is given as
χ ( C 2 k n ) = 2 n , k Z + , n 1 .
Proof. 
Given C 2 k , with χ ( C 2 k ) = 2 , its 2-fold OR product C 2 k 2 consists of ( 2 k ) 2 vertices and 2 k sub-graphs, { C 2 k 2 ( 1 ) , C 2 k 2 ( 2 ) , , C 2 k 2 ( 2 k ) } , each containing 2 k vertices. Since each sub-graph is two-colorable and fully connected to its neighbors, adjacent sub-graphs must use different colors. For instance, { C 2 k 2 ( 1 ) , C 2 k 2 ( 2 ) } requires four colors in total. However, due to the cyclic structure of OR products, alternating colors from C 2 k 2 ( 1 ) can cover C 2 k 2 ( 3 ) , and similarly for odd-indexed sub-graphs, meaning that χ ( C 2 k 2 ) = 4 . This method can also calculate χ ( C 2 k n ) from ( n 1 ) -fold to n-fold OR products. Figure 2 shows a valid coloring for C 4 3 , where χ ( C 4 3 ) = 8 . Similarly, by induction, χ ( C 2 k n ) satisfies (9). □

3.1.2. Odd Cycles

We here focus on odd cycles, namely C 2 k + 1 , k Z + , and their n-fold OR products. In the special case with 3 vertices, C 3 is a complete graph, and a valid coloring requires 3 distinct colors for a receiver to successfully recover the function. Furthermore, for a valid coloring of C 3 n , χ ( C 3 n ) = 3 n for n 1 . For coloring of an odd cycle with the length i = 2 k + 1 , for k 2 , one could reuse the colors. For instance, given C 5 , we have χ ( C 5 ) = 3 (for C 5 2 , a valid coloring is illustrated in Figure 3). We next present an achievable scheme for valid colorings of general odd cycles.
Proposition 3
(Chromatic numbers of odd cycles). The chromatic number of C i n + 1 , denoted as χ ( C i n + 1 ) , can be recursively computed from χ ( C i n ) as follows:
χ ( C i n + 1 ) = 2 χ ( C i n ) + χ ( C i n ) 2 , i = 2 k + 1 and k Z 2 .
Proof. 
See Appendix B. □
For even cycles, from Proposition 2, χ ( C i n ) = 2 n . For odd cycles, in Section 3.4, we will establish upper and lower bounds on χ ( C i n ) using the adjacency matrix of C i n , denoted as A f n .
We next demonstrate the gain in terms of the required number of colors of our approach in Proposition 3 over a greedy algorithm that does not leverage the structure of C 2 k + 1 n in coloring (see Figure 4).
Proposition 4
(The multiplicative gain of our approach over a greedy coloring algorithm). The gain of the recursive coloring approach in Proposition 3 for C 2 k + 1 n , k Z 2 , over the greedy algorithm, which calculates χ ( C 2 k + 1 ) and uses ( χ ( C 2 k + 1 ) ) n colors for coloring C 2 k + 1 n , is
η n = χ ( C 2 k + 1 ) n χ ( C 2 k + 1 n ) 1 . 2 n ,
which is exponential and unbounded as n , i.e., η = lim n η n = .
Proof. 
See Appendix C. □

3.2. Bounding the Chromatic Entropy of Cycles

Next, we establish an upper bound on the chromatic entropy of C i n , for i Z + .

3.2.1. Entropy of an Even Cycle

From Proposition 2, χ ( C 2 k n ) = 2 n . We recall that the chromatic entropy H C 2 k n χ ( X 1 ) , which is the minimum achievable entropy of a valid coloring of C 2 k n [3,34], and the characteristic graph entropy H C 2 k ( X 1 ) are related as follows [3] [Theorem 5]:
H C 2 k ( X 1 ) = lim n 1 n H C 2 k n χ ( X 1 ) ,
where
H C 2 k n χ ( X 1 ) = min C C 2 k n H ( C C 2 k n ) .
If the distribution of C 2 k n is uniform, then H C 2 k ( X 1 ) is given as follows:
H C 2 k ( X 1 ) = lim n 1 n log 2 2 n = 1 bits .

3.2.2. Entropy of an Odd Cycle

We next examine the chromatic and graph entropies of C 2 k + 1 . Unlike C 2 k , the coloring PMF for odd cycles, p ( C C 2 k + 1 n ) , is non-uniform (i.e., H C 2 k + 1 n χ ( X 1 ) < log 2 χ ( C 2 k + 1 n ) ), as demonstrated through the examples below.
Example 2.
Given that X 1 follows a uniform distribution over an alphabet X 1 such that | X 1 | = 5 and with a characteristic graph G X 1 = C 5 , the coloring PMF satisfies, p ( C C 5 ) = { 1 5 , 2 5 , 2 5 } with χ ( C 5 ) = 3 , and using (13) for chromatic entropy, yields H C 5 χ ( X 1 ) = min C C 5 H ( C C 5 ) = 1.52 . For the 2-fold OR product graph C 5 2 , where the PMF that satisfies the minimum coloring entropy is p ( C C 5 2 ) = { 4 25 , 4 25 , 4 25 , 4 25 , 4 25 , 2 25 , 2 25 , 1 25 } with χ ( C 5 2 ) = 8 , similarly using (13) we obtain
1 2 H ( C C 5 2 ) = 1.37 < H ( C C 5 ) = 1.52 b i t s .
While from Example 2, we can determine H C 5 χ ( X 1 ) and H C 5 2 χ ( X 1 ) , determining H C 5 n χ ( X 1 ) , which corresponds to the minimum entropy among all possible valid colorings, becomes complex for large n. Next, given a characteristic graph G X 1 = C 2 k + 1 , we establish an upper bound on H C 2 k + 1 ( X 1 ) by employing (12) and (13) and devising valid colorings for the MISs of C 2 k + 1 n .
Proposition 5
(An upper bound on H C 2 k + 1 ( X 1 ) ). The entropy is upper bounded as follows:
H C 2 k + 1 ( X 1 ) 1 n H ζ n · k n ( 2 k + 1 ) n , ζ n 1 · k n 1 ( 2 k + 1 ) n , , ζ 0 · 1 ( 2 k + 1 ) n ,
where ζ t , t [ n ] { 0 } , represents the number of maximum independent sets with size α ( C 2 k + 1 t ) , and ζ n satisfies
( 2 k + 1 ) n · ( k 2 1 ) k 2 ( n + 1 ) 1 · k n < ζ n < ( k 2 ( n + 1 ) 1 ) · ( 2 k + 1 ) n · k ( n 1 ) ( 2 k + 1 ) 2 n · ( k 2 1 ) · k ( n 1 ) ( k 2 ( n + 1 ) 1 ) .
Proof. 
See Appendix D. □
In (16), as color reuse increases (i.e., independent sets with high cardinality and high ζ n ), H C 2 k + 1 ( X 1 ) decreases. We later generalize Proposition 5 to general graphs in Section 4.3.
Next, we apply Proposition 5 to C 5 3 to derive an upper bound to H ( C C 5 3 ) .
Example 3.
The cardinalities of maximum independent sets for different graph products C 5 j , i.e., α ( C 5 j ) , where j [ 3 ] { 0 } , are given as α ( C 5 0 ) = 2 0 , α ( C 5 1 ) = 2 1 , α ( C 5 2 ) = 2 2 , and α ( C 5 3 ) = 2 3 , respectively, where χ ( C 5 3 ) = 20 , which can be recursively computed using (10). Employing Proposition 5 yields
ζ 3 · 8 125 + ζ 2 · 4 125 + ζ 1 · 2 125 + ζ 0 · 1 125 = 1 .
Employing the ordering ζ 3 > ζ 2 > ζ 1 > ζ 0 in (18) leads to the simplification, 64 ζ 0 + 16 ζ 0 + 4 ζ 0 + ζ 0 125 . Hence, ζ 0 25 17 . Because ζ t Z + , t [ 3 ] { 0 } , we have ζ 0 = 1 . Employing the same ordering in (18) also yields the simplification 8 ζ 3 + 4 ( ζ 3 2 ) + 2 ( ζ 3 4 ) + ζ 3 8 125 , which yields the condition ζ 3 200 17 . Because ζ t Z + , (18) yields the upper bound ζ 3 15 . Employing these lower and upper bounds to (16) leads to 1.37 1 3 H C 5 3 χ ( X 1 ) 1.41 .
Next, we consider another example for C 5 3 , where we employ Proposition 5 to determine ζ t ’s and p ( C C 5 3 ) (similar to Examples 2–3). We then apply Huffman coding [66] to the coloring random variable to optimize the compression rate of C 5 3 .
Example 4
(Huffman coding for a given characteristic graph coloring). Consider the setting of Example 2, where χ ( C 5 ) = 3 . Maximum independent sets of C 5 are represented by the colors Y, B, and R with PMF p ( C C 5 ) = 1 5 , 2 5 , 2 5 . To achieve H C 5 χ ( X 1 ) approximately, a binary Huffman tree is constructed for each color. The assigned codes are Y : 1 , R : 00 , and B : 01 . Similarly, for C 5 2 , from (10), χ ( C 5 2 ) = 8 . The color set is denoted as C ( C 5 3 ) = { c 1 , c 2 , , c 8 } , with the corresponding Huffman codes: c 1 : 11 , c 2 : 000 , c 3 : 001 , c 4 : 010 , c 5 : 011 , c 6 : 101 , c 7 : 1000 , and c 8 : 1001 . For C 5 3 , χ ( C 5 3 ) = 20 , we have the following coloring PMF:
p ( C C 5 3 ) = 8 125 , , 8 125 , 4 125 , , 4 125 , 2 5 , 2 5 , 1 5 ,
where there are ζ 3 = 13 colors with probability 8 125 , ζ 2 = 4 colors with probability 4 125 , ζ 1 = 2 colors with probability 2 125 , and ζ 0 = 1 color with probability 1 125 , where the set of ζ t ’s are uniquely determined employing χ ( C 5 3 ) = t = 0 3 ζ t = 20 and ζ t ζ t 1 . Building the binary Huffman encoding tree using p ( C C 5 3 ) in (19), helps assign codes for encoding C C 5 3 , that achieves the minimum average code length.
Next, we examine the relationship between the chromatic numbers of cycles and the eigenvalues of their adjacency matrices.

3.3. Eigenvalues of the Adjacency Matrices of C i

Let A f { 0 , 1 } V × V be the adjacency matrix of G ( V , E ) , where A f ( l 1 , l 2 ) = 1 if distinct vertices l 1 , l 2 [ V ] must be distinguished, and 0 otherwise. Since there are no self-loops, A f ( l 1 , l 1 ) = 0 . The adjacency matrix of the 2-fold OR product, A f 2 , is composed of diagonal blocks of A f , where entries equal to 1 are replaced with all-one matrices J V , and zeros with all-zero matrices Z V , representing full or no connectivity between sub-graphs C i 2 ( l ) for l [ V ] .
Similarly, we can construct A f n for the n-fold OR product using induction. Given the n-fold OR product of C i with itself, the adjacency matrix of C i n , has the following block structure:
A f n = A f n 1 J V n 1 Z V n 1 Z V n 1 J V n 1 J V n 1 A f n 1 J V n 1 Z V n 1 Z V n 1 J V n 1 Z V n 1 Z V n 1 J V n 1 A f n 1 ,
consisting of V row-block matrix partitions of size V n 1 each. In every block row of A f n , there are exactly two J V n 1 matrices and one A f n 1 , i.e., the adjacency matrix of the ( n 1 ) -fold OR product of C i . We next characterize the eigenvalues of the all-ones matrix J V of size V × V , which represents full connectivity between adjacent sub-graphs in the 2-fold OR product (see Definition 8). The proof of the following lemma is given in [67] [Lemma 1].
Lemma 1.
The eigenvalues of the all-ones matrix J V 1 V × V are 0 and V, with algebraic multiplicities V 1 and 1, respectively.
From Lemma 1, we have λ 1 ( J V ) = V . To calculate the eigenvalues of A f n , one needs to solve for σ ( A f n ) { λ R : det ( A f n λ I V n ) = 0 } , which is the set of all eigenvalues of A f n . Let { λ k ( A f ) , k [ V ] } be the set of eigenvalues of A f , and { ν k ( A f 2 ) , k [ V 2 ] } be the set of eigenvalues of the V 2 × V 2 matrix A f 2 . We also let { u k , k [ V ] } and { v k , k [ V 2 ] } be the sets of eigenvectors of A f and A f 2 , respectively. Solving A f u = λ k ( A f ) u determines λ k ( A f ) for k [ V ] , associated with u . Similarly, for the 2-fold OR product graph, the eigenvalues of A f 2 are determined by solving
A f 2 v = ν ( A f 2 ) v ,
where v = [ v 1 , v 2 , , v V ] , and v k is a V × 1 vector, for each k [ V ] . In other words, for each ν ( A f 2 ) , we have a set of V block row equations, each containing V scalar equations. More specifically, ν ( A f 2 ) in (21) satisfies the following V block equations:
A f v k + J V v k + 1 + J V v k 1 = ν ( A f 2 ) v k , k [ V ] .
Using (22), we next derive the eigenvalues of A f n .
Theorem 1
(Distinct eigenvalues of C i n ). The adjacency matrix A f n for C i n where n 2 , has the same distinct eigenvalues of A f n 1 as well as two new distinct eigenvalues.
Proof. 
See Appendix E. □
Next, in Section 3.4, we examine the relation between the eigenvalues and the chromatic number of C i n to establish lower and upper bounds on χ ( C i n ) .

3.4. Bounding the Chromatic Number of C i Using the Eigenvalues of A f

Given a cycle C i with an adjacency matrix A f , where we denote by ϑ ( C i ) the set of its distinct eigenvalues, λ 1 ( A f ) its largest, and λ V ( A f ) its smallest eigenvalue [68]. Exploiting these, we can derive the following lower and upper bounds on χ ( C i ) as follows [69,70]:
1 λ 1 ( A f ) λ V ( A f ) χ ( C i ) λ 1 ( A f ) + 1 ,
where two lower bounds on λ V have been derived in [71] and [72], respectively:
λ V ( A f ) 2 E · ( ( V 1 ) / 2 ) ,
λ V ( A f ) V / 2 · ( ( V + 1 ) / 2 ) .
We next apply the bound in (23) to χ ( C i j ) corresponding to A f j , j [ n ] . To that end, let us consider an example where i = 5 and j = 2 . Given C 5 , the set of distinct eigenvalues of A f , using numerical evaluation, are given as ϑ ( C 5 ) = { 1.618 ,   0.618 ,   2 } . Note also that ϑ ( C 5 2 ) = { 6.09 ,   1.61803 ,   0.61803 ,   5.09016 ,   12 } for A f 2 . Consider the set ϑ ( C 5 2 ) , where λ V 2 ( A f 2 ) = 6.09 ; for the 2-fold OR product graph C 5 2 , an application of (24) yields λ V 2 ( A f 2 ) 60 . Similarly, an application of (25) yields that λ V 2 ( A f 2 ) 12.748 . Hence, for the 2-fold OR product, one can find that the bound in [72] is tighter for cycles and their n-fold OR products.
We do not have an exact characterization for λ V n ( A f n ) . However, using the lower bounds in (24) and (25) can help derive a lower bound on χ ( C i n ) . On the other hand, for λ 1 ( A f n ) , recalling from Proposition 1 that all vertices of C i n for n 2 have equal degrees, and exploiting this feature, we next derive an exact characterization of λ 1 ( A f n ) for n 2 .
Proposition 6
(The largest eigenvalue for the adjacency matrix of C i n ). The largest eigenvalue of C i is λ 1 ( A f ) = 2 , and the largest eigenvalue of the n-fold OR product C i n is determined as
λ 1 ( A f n ) = 2 + j [ n 1 ] 2 V j , n 2 .
Proof. 
We prove it using Definition 1, (23), and the proof of Theorem 1—which leverages the fact that the eigenvalues of the sum of adjacency matrices equal the sum of their individual eigenvalues. The calculation of eigenvalues for the adjacency matrix of C i j is given in (22), which links the eigenvalues of A f j with the two J V matrices of size V j 1 × V j 1 in each block row. From (A10) and (A11), the largest eigenvalue for a power graph C i n is obtained by adding λ 1 ( A f n 1 ) for a sub-graph C i n 1 and twice the largest eigenvalue of J V n 1 , which is V n 1 (see Lemma 1). Thus, λ 1 ( A f ) = 2 , and for n 2 , we achieve (26). □
Next, we present new bounds on χ ( C i j ) by exploiting (23), (24) and (25) which lower bound λ V j ( A f j ) , and Proposition 6, which gives the exact value of λ 1 ( A f j ) .
Proposition 7
(Bounding χ ( C i n ) using eigenvalues of A f n ). The chromatic number χ ( C i n ) is lower and upper bounded as
1 2 + j = 1 n 1 2 V j max V n 2 · ( V n + 1 ) 2 , 2 E n · ( V n 1 ) 2 χ ( C i n ) j = 1 n 1 2 V j + 3 .
Proof. 
Combining (23), which bounds λ 1 and λ V of A f , with Proposition 6, as well as (25) that lower bounds λ V [72], we have 1 2 + j = 1 n 1 2 V j λ V n χ ( C i n ) 2 + j = 1 n 1 2 V j + 1 . We further simplify · , because λ 1 is a positive integer (see Proposition 6). Finally, substituting λ V n with the maximum of the lower bounds in (24) and (25) leads to (27). □
To complement Proposition 7, we next derive another bound that depicts the relation between the degree of each node in C i n and χ ( C i n ) .
Corollary 2
(Bounding χ ( C i n ) using degrees of C i n ).  χ ( C i n ) satisfies the following relation:
1 + d e g ( x n ) 2 E n ( V n 1 ) · ( d e g ( x n ) ) + ( 1 + i = 1 n 1 2 V i ) · ( d e g ( x n ) ) χ ( C i n ) d e g ( x n ) + 1 .
Proof. 
To characterize d e g ( x n ) , we apply (8) from Proposition 1, and to bound χ ( C i n ) , we use (23). Then, we apply the lower bound on λ V n given in [73], which yields
λ V ( A f ) 2 E ( V 1 ) · ( min x [ V ] d e g ( x ) ) + min x [ V ] d e g ( x ) 1 · ( max x [ V ] d e g ( x ) ) ,
giving the lower bound in (28). For the upper bound, we use (26), where d e g ( x n ) = λ 1 ( A f n ) . □
Next, we consider d-regular graphs, and characterize the vertex degrees and chromatic numbers for the n-fold OR products of d-regular graphs.

3.5. From Cycles to d-Regular Graphs

Building on our analysis of C i n in Section 3.1, Section 3.2 and Section 3.3, we now focus on d-regular source characteristic graphs. Next, we derive a closed-form expression for the degree of G d , V n .
Proposition 8
(Degrees of vertices in G d , V n ). Given a d-regular graph G d , V = G ( V , E ) , the degree of each vertex in the n-fold OR product, denoted by G d , V n , for n 2 , is expressed as:
d e g ( x n ) = d · V n 1 V 1 , x n [ V n ] .
Proof. 
The proof follows from employing Definition 6 for G d , V . For details, see Appendix F. □
Using (30), for the n-fold OR product of G d , V where V is even, we next determine χ ( G d , V n ) .
Proposition 9
(The chromatic number of G d , V n ). The chromatic number of the n-fold OR product of G d , V with an even number of vertices, i.e., V = 2 k , k Z + , is determined as
χ ( G d , V n ) = d n , n 1 .
Proof. 
See Appendix G. □
For example, the 3-regular graph G 3 , 6 (see Figure 5) has χ ( G 3 , 6 ) = 3 . For n = 2 , there are 6 sub-graphs, namely { G 3 , 6 2 ( l ) } l [ 6 ] , where χ ( G 3 , 6 2 ( l ) ) = 3 for all l [ 6 ] . Let us choose a set of vertices in G 3 , 6 2 belonging to { G 3 , 6 2 ( 5 ) , G 3 , 6 2 ( 6 ) , G 3 , 6 2 ( 1 ) } . We observe that this chosen subset is a complete graph, indicating that χ ( G 3 , 6 2 ) 9 . Reusing the same colors for the vertices of the remaining sub-graphs ( G 3 , 6 2 ( 2 ) , G 3 , 6 2 ( 3 ) , G 3 , 6 2 ( 4 ) ), we deduce that χ ( G 3 , 6 2 ) = 3 2 = 9 .
Next, given n-fold OR products of d-regular graphs, G d , V n , we examine their expansion properties and present bounds on their expansion rates.

3.6. d-Regular Graphs and Graph Expansion

Graph expansion quantifies how well-connected a graph is by measuring how easily subsets of vertices are connected to the remaining vertices of the graph. Graph expansion has applications in fields such as parallel computation [74,75], complexity theory, and cryptography [76,77] due to its strong vertex connectivity and robustness properties [44]. In expander graphs, for any pair of distinct vertices u , v [ V ] , a path from u to v exists (see Definition 5). We next examine expansion rates for several classes of expander graphs, including G d , V , C i , and K i .
Given G d , V , the second largest eigenvalue of its adjacency matrix A f , namely λ 2 ( A f ) , contributes to the linear expansion of G d , V , where the number of edges grows linearly with the total number of vertices. Since the n-fold OR product of G d , V yields a deg ( x n ) -regular graph, it exhibits the connectivity properties of expander graphs.
Proposition 10
(A lower bound on expansion rates of G d , V n ). The expansion rate of G d , V n , given its adjacency matrix A f n , is lower bounded as follows:
E θ ( G d , V n ) ( d · V n 1 V 1 ) 2 Λ 2 ( G d , V n ) + ( d · V n 1 V 1 ) 2 Λ 2 ( G d , V n ) · | Y | V n ,
where
Λ ( G d , V n ) = max λ 2 ( A f n ) , | λ V n ( A f n ) | ,
with λ 2 ( A f n ) and λ V n ( A f n ) being the second and the smallest eigenvalues of A f n , respectively.
Proof. 
As noted in Section 2, for a given G d , V with A f , λ 1 ( A f ) = d , and the eigenvalues of A f n for any connected G d , V n are ordered as { d λ 1 > λ 2 > > λ V n } . For any subset Y of G d , V , [78] shows that the size of | N G ( Y ) | satisfies the following:
| N G ( Y ) | d 2 · | Y | Λ 2 ( G ) + ( d 2 Λ 2 ( G ) ) · | Y | V .
In (34), Λ ( G ) = max ( λ 2 , | λ V | ) d , equality holds if and only if G is bipartite or is disconnected (composed of singleton vertices) [44]. Dividing both sides of (34) by | Y | , letting E θ ( G n ) = | N G n ( Y ) | / | Y | , and taking in the total number of vertices, V n , in (34) yields (32). □
In Proposition 10, given a graph G d , V , Λ ( G d , V ) captures its connectivity and structural balance. We infer that as | Y | increases, the expansion rate is small, indicating less connectivity between vertices. Given a connected G d , V , as evident from (2), the maximum expansion rate, denoted by E θ u b , is achieved when G d , V n is a complete graph, K i n , where i [ V n ] , as follows:
E θ u b ( V n 1 ) 2 1 + ( ( V n 1 ) 2 1 ) · | Y | V n .
The minimum expansion rate, denoted by E θ l b , is achieved when G d , V n = C i n because C i n has the minimum number of edges to ensure a Hamiltonian path for each x [ V ] . This yields
E θ l b ( 2 · V n 1 V 1 ) 2 λ V n 2 ( C i n ) + ( ( 2 · V n 1 V 1 ) 2 λ V n 2 ( C i n ) ) · | Y | V n .
We next examine the relationship between λ 1 ( A f ) and λ 2 ( A f ) for sub-graphs of an expander graph, such as G d , V . To this end, we apply [43] [Lemma 3], which states that for any subset of vertices, Y [ V ] in G d , V , the induced sub-graph S (An induced sub-graph is formed by selecting a subset of vertices S [ V ] includes all edges in E whose endpoints lie in S.) satisfies the following property:
λ 1 ( S ) λ 2 ( A f ) + ( d λ 2 ( A f ) ) · ( | Y | / V ) .
For a connected 2-regular graph ( C i ), adapting (37) and selecting vertex sets S and Y such that Y S = C i n , we establish a simplified relation between λ 1 ( A f n 1 ) and λ 2 ( A f n ) , as follows.
Corollary 3
(The spectral relation between λ 1 ( A f n 1 ) and λ 2 ( A f n ) in 2-regular graphs). The relationship between λ 2 ( A f n ) of the adjacency matrix of C i n and λ 1 ( A f n 1 ) of the adjacency matrix of C i n 1 is detailed as follows:
λ 1 ( A f n 1 ) λ 2 ( A f n ) + 2 · V n 1 V 1 λ 2 ( A f n ) · 1 V .
Proof. 
Given the relation between λ 1 ( S ) and λ 2 ( A f ) for G d , V in (37), we consider that G d , V n = C i n , and we choose sub-graphs Y and S accordingly, as sub-graphs of C i n , where
Y t l , t [ V ] C i n ( t ) = { C i n ( 1 ) , C i n ( 2 ) , , C i n ( V ) } { C i n ( l ) } .
Corollary 3 investigates how the largest eigenvalue of the ( n 1 ) -fold sub-graph of a cycle is upper bounded in terms of the second-largest eigenvalue of its n-fold OR product, offering insight into spectral behavior across recursive graph products.
Next, we investigate general functions with arbitrary characteristic graphs. Building on Section 3, we present bounds on the chromatic number of their n-fold OR products.

4. Bounds for General Characteristic Graphs

Here, we focus on general characteristic graphs G ( V , E ) and their n-fold OR products, G n . Given a graph G with an adjacency matrix A f , we first evaluate d e g ( x k ) for x k [ V ] and derive bounds on λ 1 ( A f ) and χ ( G ) in Section 4.1. We also present bounds on expansion rates and establish lower and upper bounds on entropies of G n in Section 4.2 and Section 4.3, respectively. Finally, for the n-fold OR product G n , we introduce an approach that decomposes A f n to two symmetric block matrices and leverages GCT to investigate the spectrum of G n in Section 4.4.

4.1. Degrees and Chromatic Numbers of General Graphs

Given a general graph G, we next derive a recursive relation for d e g ( x k n ) for x k n [ V n ] , where d e g ( x k n ) may vary across vertices, providing a generalization of Propositions 1 and 8.
Corollary 4
(Degrees of vertices in G n ). Given a general graph G ( V , E ) , the degrees of vertices of G n are calculated as follows:
d e g ( x k n ) = d e g ( x k ) + j = 1 n 1 d e g ( x k ) · V j , x k [ V n ] .
Proof. 
Similarly to Propositions 1 and 8, we can compute d e g ( x k ) for x k [ V ] , with the distinction that each x k may have a different degree. For the 2-fold OR product, each vertex, x k 2 ( l ) for l V , connects to d e g ( x k ) adjacent sub-graphs. Since neighboring nodes differ across x k , we iteratively compute the degrees of x k n separately for each x k . □
Corollary 4 immediately implies that if vertices x t , x k [ V ] for t k have equal degrees in G, i.e., d e g ( x t ) = d e g ( x k ) , then d e g ( x t n ) = d e g ( x k n ) in the n-fold OR product, G n , n 2 .
Given a general graph G, we next devise lower and upper bounds on χ ( G n ) . To that end, we exploit the block matrix representation of A f n (see (20)) and use the maximum number of sub-matrices J V n 1 in the rows of A f n .
Corollary 5
(Bounds on χ ( G n ) for a general G). Given a general characteristic graph G ( V , E ) , the chromatic number of G n , χ ( G n ) , is bounded as follows:
1 λ 1 ( A f ) + d max · j = 1 n 1 V j λ V n ( A f n ) χ ( G n ) λ 1 ( A f ) + d max · j = 1 n 1 V j ,
Proof. 
The non-sparsity of an adjacency matrix, i.e., more 1s in its entries, is directly related to its largest eigenvalue. The full connection between sub-graphs in the OR product is represented by J V , with λ 1 ( J V ) = V . Thus, the row with the most J V matrices provides an upper bound for the largest eigenvalue. Additionally, the average number of J V n 1 matrices across block rows of A f n , multiplied by V n 1 (which represents λ 1 ( J V n 1 ) ), approximates λ 1 ( A f n ) . By modifying (23) and using d max of G to approximate λ 1 ( A f n ) , we can establish a bound for χ ( G n ) . □
Corollary 5 refines the bounds in (23), Proposition 7, and Corollary 2 by leveraging the exact value of λ 1 ( A f n ) and accounting for the specific structure of G. Given Corollary 5, let us investigate the computational complexity for determining the eigenvalues λ k ( A f n ) and contrast it with the QR method. The QR transformation iteratively decomposes a matrix A ( t ) , where t Z + denotes the iteration index, into an orthogonal matrix Q ( t ) and an upper triangular matrix R ( t ) , satisfying A ( t ) = Q ( t ) R ( t ) . Then, the next iteration is given by A ( t + 1 ) = R ( t ) Q ( t ) . Under typical conditions (e.g., A is diagonalizable with distinct eigenvalues), it converges to an upper triangular matrix whose diagonal approximates λ k ( A ) . For an m × m matrix, QR requires a computational complexity of O ( m 3 ) [79]. This is used for calculating λ k ( A f n ) , which has a computational complexity of O ( V 3 n ) . However, using Corollary 5, the overall complexity remains at O ( V 3 ) . This is because computing the eigenvalues of G has a complexity of O ( V 3 ) ; the summations on the LHS and RHS of (41) each have a complexity of O ( n ) , and the maximization step (determining d max ) has a complexity of O ( V ) . Hence, the dominant term, i.e., O ( V 3 ) , dominates the final complexity.
Next, given a general G ( V , E ) with an adjacency matrix A f , we derive lower and upper bounds on λ 1 ( A f n ) using Lemma 1, where the lower and upper bounds are functions of the minimum and maximum values of d e g ( x k ) for x k [ V ] .
Corollary 6
(Bounds on λ 1 ( A f n ) for a general G). Given a general graph G, the largest eigenvalue for the adjacency matrix of G n , denoted by λ 1 ( A f n ) , is bounded as follows:
min_{k ∈ [V]} (deg(x_k)) · λ_1(J_{V^{n−1}}) ≤ λ_1(A_f^n) ≤ max_{k ∈ [V]} (deg(x_k)) · λ_1(J_{V^{n−1}}),
where we infer that
λ_1(A_f^n) ≈ deg_avg(x_k) · λ_1(J_{V^{n−1}}).
Corollary 6 illustrates how deg(x_k) determines the spectral properties of G^n, and thus the achievable rate in distributed compression.
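A small numerical sketch (reusing the same illustrative graph and the block construction of A_f^n; the choice n = 2 is arbitrary) compares the degree-based quantities of Corollary 6 with the exact λ_1(A_f^n) returned by an eigensolver:

```python
import numpy as np

def or_power(A, n):
    V = A.shape[0]
    An = A.copy()
    for _ in range(n - 1):
        m = An.shape[0]
        An = np.kron(np.eye(V, dtype=int), An) + np.kron(A, np.ones((m, m), dtype=int))
    return An

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 0]])
V, n = A.shape[0], 2
deg = A.sum(axis=1)
scale = V ** (n - 1)                                  # lambda_1(J_{V^{n-1}})

print("d_min, d_avg, d_max estimates:",
      deg.min() * scale, deg.mean() * scale, deg.max() * scale)
print("exact lambda_1(A_f^n):",
      round(float(np.linalg.eigvalsh(or_power(A, n).astype(float)).max()), 3))
```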

4.2. Bounds on Expansion Rates of General Graphs

Here, given a general graph G, we investigate the expansion of G n . We exploit (35), derived from the characteristics of the n-fold OR products of complete graphs, to obtain the upper bound, E θ u b , and (36), derived from the n-fold OR products of cycles, to obtain the lower bound, i.e., E θ l b . We next establish lower and upper bounds on E θ ( G n ) .
Corollary 7
(Bounds on E θ ( G n ) ). The expansion rate for the n-fold OR product of a general characteristic graph G ( V , E ) is lower and upper bounded as follows:
E_θ^{lb} ≤ E_θ(G^n) ≤ E_θ^{ub},
where E_θ^{ub} is derived from K_i^n (representing a fully connected characteristic graph) and E_θ^{lb} from C_i^n (representing a connected graph with the minimum number of edges), for i = V.
Proof. 
See Appendix H. □
Recall that the lower bound for E_θ(G_{d,V}^n) in (32) is given in terms of Λ(G^n) = max(λ_2(A_f^n), |λ_{V^n}(A_f^n)|). In Corollary 7, by contrast, we use the exact values of Λ(K_i^n) and Λ(C_i^n) for the upper and lower bounds, respectively, with A_f^n denoting each graph's adjacency matrix. Given G^n, E_θ(G^n) reflects its connectivity, with higher values leading to limited savings in source compression.
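As a brief numerical sketch (V = 5 and n = 2 are illustrative choices, and or_power() is the block construction used in the earlier sketches), the quantity Λ(·) = max(λ_2, |λ_{V^n}|) that drives the two sides of Corollary 7 can be evaluated for K_V^n and C_V^n:

```python
import numpy as np

def or_power(A, n):
    V = A.shape[0]
    An = A.copy()
    for _ in range(n - 1):
        m = An.shape[0]
        An = np.kron(np.eye(V, dtype=int), An) + np.kron(A, np.ones((m, m), dtype=int))
    return An

def Lambda(M):
    ev = np.sort(np.linalg.eigvalsh(M.astype(float)))
    return max(ev[-2], abs(ev[0]))        # max(lambda_2, |lambda_min|)

V, n = 5, 2
K = np.ones((V, V), dtype=int) - np.eye(V, dtype=int)     # complete graph K_V
C = np.zeros((V, V), dtype=int)                            # cycle C_V
for i in range(V):
    C[i, (i + 1) % V] = C[(i + 1) % V, i] = 1

print("Lambda(K_V^n) =", round(Lambda(or_power(K, n)), 4))   # equals 1
print("Lambda(C_V^n) =", round(Lambda(or_power(C, n)), 4))
```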

4.3. Bounds on Entropies of General Graphs

Here, we derive upper and lower bounds on the graph entropies for general characteristic graphs. For the upper bound, we use a similar achievability approach as in the case of C i n (see Proposition 5), which relies on coloring the MISs of sub-graphs of G n , i.e., G j for j [ n ] . For the lower bound, we employ fractional coloring applied to the n-fold OR products of general graphs to establish a bound on H G ( X 1 ) . We next derive an upper bound on H G ( X 1 ) .
Proposition 11
(An upper bound on H G ( X 1 ) ). Given a characteristic graph G ( V , E ) , the entropy of G n is upper bounded as follows:
H_G(X_1) ≤ (1/n) H( ζ_n · α(G^n)/V^n, ζ_{n−1} · α(G^{n−1})/V^n, …, ζ_0 · 1/V^n ),
where ζ_t, t ∈ [n] ∪ {0}, represents the number of maximum independent sets of G^t with a size of α(G^t).
Proof. 
See Appendix I. □
While Proposition 11 provides an upper bound, to the best of our knowledge, traditional coloring schemes yield no established lower bound on H_G(X_1) for G(V, E) when the total number of vertices is odd, i.e., V = 2k + 1 for k ∈ Z, k ≥ 2. To that end, in Corollary 8, we derive a lower bound on H_G(X_1) by employing the concept of fractional coloring (see Definition 11).
Corollary 8
(A lower bound on H_G(X_1)). The graph entropy H_G(X_1) of a general connected characteristic graph G(V, E) with V = 2k + 1 vertices that contains a Hamiltonian path, where k ∈ Z, k ≥ 2, and under a uniform distribution of X_1, is lower bounded by
log_2( (2k + 1)/k ) ≤ H_G(X_1).
Proof. 
See Appendix J. □
From Corollary 8, we infer that employing fractional coloring together with (4) yields a lower bound on H_G(X_1). For k = 2, the lower bound using C_5 for the graph G(V, E) with V = 5 is given by 1.32 ≤ H_G(X_1), which matches the Shannon capacity of the pentagon [7].
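The bound is easy to evaluate numerically (a small sketch; the range of k is an illustrative choice), and k = 2 reproduces the 1.32-bit value quoted above:

```python
import math

for k in range(2, 7):
    lb = math.log2((2 * k + 1) / k)       # lower bound of Corollary 8
    print(f"k = {k}: H_G(X_1) >= {lb:.4f} bits")
```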

4.4. Spectra of General Graphs

Given a general graph G, we analyze the spectrum of A_f using the concept of GCT, as detailed in Definition 12. We then exploit this spectrum to establish bounds on χ(G^n). For G^n, using the symmetry of A_f^n = ( A_f^{n−1} ⊗ I_V + J_V ⊗ A_f^{n−1} ) ∈ F_2^{V^n × V^n}, where ⊗ denotes the Kronecker product, we infer that the circle D_k, in which an eigenvalue λ_k(A_f^n) is contained, simplifies to an interval:
δ_k = { λ_k(A_f^n) ∈ R : |λ_k(A_f^n) − a_kk^n| ≤ Σ_{t ∈ [V^n]: t ≠ k} |a_kt^n| }, k ∈ [V^n],
where a_kk^n and Σ_{t ∈ [V^n]: t ≠ k} |a_kt^n| denote the diagonal element and the sum of the non-diagonal elements in the k-th row of A_f^n, respectively; then, for k = V k′ + i and t = V t′ + j, the elements are defined as:
a_kt^n = ( a_{k′t′}^{n−1} · Δ_ij + a_ij^{n−1} ) mod 2,
where Δ_ij denotes the Kronecker delta function (i.e., Δ_ij = 1 if i = j, and Δ_ij = 0 otherwise). The index mappings k = V k′ + i and t = V t′ + j reflect the structure of the Kronecker product, where k′, t′ ∈ [V^{n−1}] index the block position and i, j ∈ [V] index the position within each block. The Kronecker delta Δ_ij ensures that a_{k′t′}^{n−1} contributes only when i = j, capturing the effect of A_f^{n−1} ⊗ I_V, while a_ij^{n−1} comes from J_V ⊗ A_f^{n−1}.
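Since the diagonal entries of A_f^n are zero, each δ_k reduces to an interval centered at 0 whose radius is the k-th off-diagonal row sum, i.e., the degree of the corresponding vertex of G^n. The sketch below (using the adjacency matrix of Example 5 and the block construction of A_f^n as illustrative assumptions) lists the distinct GCT intervals for n = 2:

```python
import numpy as np

def or_power(A, n):
    V = A.shape[0]
    An = A.copy()
    for _ in range(n - 1):
        m = An.shape[0]
        An = np.kron(np.eye(V, dtype=int), An) + np.kron(A, np.ones((m, m), dtype=int))
    return An

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 0]])
An = or_power(A, 2)

radii = An.sum(axis=1) - np.diag(An)                  # Gershgorin radii (= degrees)
intervals = sorted({(-int(r), int(r)) for r in radii})
print("distinct GCT intervals for A_f^2:", intervals)
print("all eigenvalues lie in [-%d, %d]" % (radii.max(), radii.max()))
```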
We next refine δ k by exploiting the concept of block GCT, where using Definition 13 helps enclose each given λ k ( A f n ) within an interval denoted by δ k b . Interval k is characterized by the diagonal sub-matrices A k k , corresponding to A f n 1 , and the non-diagonal sub-matrices A k t , consisting of Z V n 1 and J V n 1 , which represent disconnected and connected components of G n , respectively. However, the block GCT representation leads to loose bounds on λ k ( A f n ) and subsequently on χ ( G n ) via (23). To tighten these bounds, in Theorem 2 and Corollary 9, we derive λ k ( A f n ) by leveraging GCT intervals from (5) and (7), and using a decomposition-based approach that splits A f n into two symmetric matrices.
Theorem 2
(Computing λ k ( A f n ) via splitting A f n into two symmetric matrices). The eigenvalues of A f n , denoted by λ k ( A f n ) , are given as follows:
λ k ( A f n ) = λ k ( A G r n ) + λ k ( A f c n ) , k [ V n ] ,
where A_Gr^n is a block diagonal matrix with diagonal blocks formed from A_f^{n−1}, and A_fc^n = A_f^n − A_Gr^n captures the non-diagonal blocks of A_f^n.
Proof. 
See Appendix K. □
Theorem 2 demonstrates that by decomposing A f n into A G r n and A f c n , we can capture the connections between A f n 1 ( l ) , l [ V ] , corresponding to the sub-graphs of G n .
Next, we describe an iterative technique to determine λ k ( A f n ) from λ k ( A f ) .
Corollary 9.
The eigenvalues of A f n can be iteratively calculated as follows:
λ k ( A f n ) = λ k ( A f ) + j = 2 n λ k ( A f c j ) , k [ V n ] .
Proof. 
See Appendix L. □
Theorem 2 and Corollary 9 illustrate that leveraging the block structure of A f n reduces complexity compared with [80]. In Proposition 12, we use Theorem 2 to derive a tighter bound on χ ( G n ) than those from the block GCT representation intervals in (7).
Proposition 12
(Bounds on χ ( G n ) using GCT). Given a general graph G ( V , E ) , the chromatic number χ ( G n ) is bounded as follows:
1 + (λ_1(A_Gr^n) + λ_1(A_fc^n)) / (⌊V^n/2⌋ · ⌊(V^n + 1)/2⌋) ≤ χ(G^n) ≤ λ_1(A_Gr^n) + λ_1(A_fc^n) + 1.
Proof. 
To prove this result, we use the bounds on χ(G^n) in (23) in terms of the eigenvalues of A_f^n, and adjust λ_1(A_f^n) and λ_{V^n}(A_f^n) by utilizing (49) from Theorem 2 and (25), respectively. □
For the LHS and RHS of (51), Corollary 6 (cf. (43)) provides a tighter approximation of λ_1(A_f^n) than (42), which stems from employing [81] [Lemma 5] and the relation deg_avg(x_k) · V^{n−1} ≤ δ_b(λ_1(A_f^n))/2.
We next illustrate the utility of Theorem 2 and Corollary 9 via an example.
Example 5.
Consider a characteristic graph G 1 with an adjacency matrix
A_{f_1} =
[ 0 1 0 0 1 ]
[ 1 0 1 1 0 ]
[ 0 1 0 1 0 ]
[ 0 1 1 0 1 ]
[ 1 0 0 1 0 ] ,
with a set of distinct eigenvalues ϑ(G_1) = {2.4812, 0.6889, 0, −1.1701, −2}. Using GCT, we derive five intervals {δ_k}_{k ∈ [5]} = {[−2, 2], [−2, 2], [−2, 2], [−3, 3], [−3, 3]} for A_{f_1}, one for each eigenvalue, where each δ_k is centered at 0 since the diagonal entries of A_{f_1} (and hence trace(A_{f_1})) are zero. The two unique intervals with the largest lengths are δ_1 = [−2, 2] and δ_2 = [−3, 3], which are used to determine λ_1(A_{f_1}).
From Theorem 2 and Corollary 9 (see Figure 6), we have λ_1(A_{f_1}^2) ∈ [12, 18]. Applying the GCT for block matrices (see (7) in Section 2.3), we obtain σ(A_{f_1}^2) ⊆ ∪_{k=1}^{5} δ_k^b, where δ^b = [−18, 18] includes all possible eigenvalues but provides a less precise estimate than (49). Refining the bounds for λ_1(A_{f_1}^2) using d_avg(x_k) in (42) and the upper bound in (51) gives a more precise interval of [12, 15]. This range shows that the upper bound is 3 units tighter than that of the maximum-degree method.
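The base-graph quantities in Example 5 can be reproduced in a few lines (a sketch; values are rounded to four decimals):

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

print("eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(A))[::-1], 4))
# -> approximately [ 2.4812  0.6889  0.     -1.1701 -2. ]
radii = A.sum(axis=1).astype(int)                      # Gershgorin radii (= degrees)
print("GCT intervals:", [(-int(r), int(r)) for r in radii])
# -> [(-2, 2), (-3, 3), (-2, 2), (-3, 3), (-2, 2)]
```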

5. Conclusions

In this paper, we addressed the problem of distributed functional compression by introducing novel coloring-based encoding schemes for source characteristic graphs. We analyzed various graph topologies—cycles ( C i ), d-regular graphs ( G d , V ), and general graphs (G)—and their n-fold OR product realizations ( C i n , G d , V n , and G n ), exploring the interplay between adjacency matrix eigenvalues and chromatic numbers to develop low-entropy coloring schemes and derive bounds on the compression rate for asymptotically lossless function compression.
For cycles, we derived bounds on the degrees of C i n and proposed a recursive coloring scheme for C 2 k + 1 n , which computes valid colorings in polynomial time by leveraging their structural properties. We also investigated the relationship between the spectra of C i n and the chromatic numbers χ ( C i n ) to establish bounds on the minimum entropy colorings.
For d-regular graphs, we analyzed the degrees and eigenvalues of G_{d,V}^n. We also investigated the connection between the OR products of G_{d,V} and graph expansion, described by the spectral properties of G_{d,V}. This enabled us to derive upper and lower bounds on the expansion rates of general characteristic graphs, where the expansion rate reflects the connectivity of sub-graphs, and higher connectivity implies an increase in H_{G^n}^χ(X_1).
For general characteristic graph topologies G, we derived bounds on the degrees and eigenvalues of G n using the block matrix representation of the adjacency matrix of G n , denoted by A f n , in order to achieve a reduced computational complexity for deriving the eigenvalues λ k ( A f n ) for k [ V n ] compared with the QR method [79,82]. By leveraging the GCT approach, we provided upper and lower bounds on λ ( A f n ) and compared these with the iterative A f n decomposition method (see Theorem 2 and Corollary 9), which exploits the properties of eigenvalues of symmetric matrices to produce tighter bounds for λ ( A f n ) .
In conclusion, our results present a unified framework for distributed functional compression, linking graph-theoretic structures with information-theoretic bounds. By relating spectral properties to achievable rates, our work offers a coloring-based approach for designing functional compression schemes. We believe that these insights impact both theoretical developments and practical implementations of low-overhead, structure-adaptive coding methods. Future directions include examining connections between functional compression and other graph properties, such as diameter, graph decomposition, and graph complement, and devising lossy compression schemes. Our compression approach captures various characteristic graph topologies, which can be exploited to represent unions of source characteristic graphs and user demands in multicast and broadcast computation settings to establish fundamental lower bounds on communication costs.

Author Contributions

Conceptualization, M.R.D.S.; Methodology, M.R.D.S. and D.M.; Software, M.R.D.S.; Validation, M.R.D.S. and D.M.; Formal analysis, M.R.D.S.; Resources, D.M.; Writing—original draft, M.R.D.S.; Writing—review & editing, M.R.D.S. and D.M.; Supervision, D.M.; Project administration, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by a Huawei France-funded Chair towards Future Wireless Networks and supported by the program “PEPR Networks of the Future” of France 2030. Co-funded by the European Union (ERC, SENSIBILITÉ, 101077361). Views and opinions expressed are, however, those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Acknowledgments

The authors acknowledge the constructive comments of V. Kizhakke and A. Tanha.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Proposition 1

Consider C i = G ( V , E ) , where for each x [ V ] , d e g ( x ) = 2 (see Definition 6). The 2-fold OR product is denoted by C i 2 , and it has V sub-graphs { C i 2 ( 1 ) , C i 2 ( 2 ) , , C i 2 ( V ) } . For a given sub-graph, say C i 2 ( l ) , l [ V ] , any x 2 ( l ) C i 2 ( l ) , where x 2 ( l ) denotes a vertex in the l-th replica of C i 2 , is connected to any vertex of the adjacent sub-graphs, namely { C i 2 ( ( l 1 ) mod ( V ) ) , C i 2 ( ( l + 1 ) mod ( V ) ) } . Therefore, each vertex in C i 2 ( l ) has V edges to C i 2 ( ( l 1 ) mod ( V ) ) and C i 2 ( ( l + 1 ) mod ( V ) ) each. Accounting for the set of edges between adjacent sub-graphs, and each vertex’s degree in the sub-graph itself, the degree of each vertex in C i 2 is d e g ( x 2 ) = 2 + 2 V . For n = 3 , this results in d e g ( x 3 ) = 2 + 2 V + 2 V 2 .
For a given C i n 1 ( l ) , each vertex x n 1 ( l ) is connected to all vertices in the adjacent sub-graphs, C i n 1 ( ( l 1 ) mod ( V ) ) and C i n 1 ( ( l + 1 ) mod ( V ) ) . Similarly, the degree for the ( n 1 ) -fold OR product, by considering the previous OR product’s degree and using induction, is going to be d e g ( x n 1 ) = 2 + 2 V + 2 V 2 + + 2 V n 2 = 2 + j = 1 n 2 2 V j . Therefore, for building C i n , using the same procedure, there are V sub-graphs { C i n ( 1 ) , C i n ( 2 ) , , C i n ( V ) } . Similar to C i 2 , the sub-graph C i n ( l ) , l [ V ] , is fully connected to the adjacent sub-graphs { C i n ( ( l 1 ) mod ( V ) ) , C i n ( ( l + 1 ) mod ( V ) ) } . Therefore, using induction and the geometric series j = 1 n 1 V j , we can determine d e g ( x n ) in closed form using (8).
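As a short numerical check (V = 5 and n = 3 are arbitrary illustrative choices; or_power() is the block construction used in the sketches of Section 4), every vertex of C_V^n indeed attains the closed-form degree 2 + Σ_{j=1}^{n−1} 2V^j:

```python
import numpy as np

def or_power(A, n):
    V = A.shape[0]
    An = A.copy()
    for _ in range(n - 1):
        m = An.shape[0]
        An = np.kron(np.eye(V, dtype=int), An) + np.kron(A, np.ones((m, m), dtype=int))
    return An

V, n = 5, 3
C = np.zeros((V, V), dtype=int)
for i in range(V):
    C[i, (i + 1) % V] = C[(i + 1) % V, i] = 1

degrees = set(int(d) for d in or_power(C, n).sum(axis=1))
closed_form = 2 + sum(2 * V**j for j in range(1, n))
print(degrees, closed_form)        # both give 62 for V = 5, n = 3
```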

Appendix B. Proof of Proposition 3

Given a cycle C 5 with χ ( C 5 ) = 3 , for coloring C 5 2 , χ ( C 5 2 ) = 8 , as shown in Figure 3, saving one color compared with the greedy algorithm’s ( χ ( C 5 ) ) 2 = 9 colors. Instead of coloring the entire graph, our approach assigns coloring sets to sub-graphs to determine an optimal coloring. The coloring sets for valid coloring of sub-graphs, namely C i ( l ) for sub-graph l, ensure that no two neighboring sets share colors. To achieve valid colorings, vertices are divided into sub-graphs, each assigned a set of colors. For C 5 , a valid coloring requires three distinct colors { c 1 , c 2 , c 3 } .
The cardinality of the coloring set C i j ( l ) changes based on the chromatic number of the previous OR product χ ( C i j 1 ) . However, the number of coloring sets is always constant and equal to the number of vertices in C i , for i = V . Given C 5 2 , 5 coloring sets are assigned to the sub-graphs. To cover the sub-graphs assigned to the first two color sets, the adjacent coloring sets cannot share the same colors. Thus, { c 1 , c 2 , , c 6 } are required in the first two coloring sets, namely C 5 2 ( 1 ) and C 5 2 ( 2 ) . Consequently, we can reuse the colors from the first set, since there is no edge between sub-graphs C 5 2 ( 1 ) and C 5 2 ( 3 ) . Hence, by adding only 2 colors to the third color set and using them in a cyclic manner, all vertices are colored with only 8 colors, allowing us to express C 5 2 ( l ) for l [ 5 ] as: C 5 2 ( 1 ) = { c 1 , c 2 , c 3 } , C 5 2 ( 2 ) = { c 4 , c 5 , c 6 } , C 5 2 ( 3 ) = { c 7 , c 8 , c 1 } , C 5 2 ( 4 ) = { c 2 , c 3 , c 4 } , C 5 2 ( 5 ) = { c 5 , c 6 , c 7 } . For C 5 3 , we have | C 5 3 ( l ) | = 8 , and applying the same method yields χ ( C 5 3 ) = 20 . Thus, C 5 3 ( l ) for l [ 5 ] can be expressed as:
C 5 3 ( 1 ) = { c 1 , c 2 , c 3 , c 4 , c 5 , c 6 , c 7 , c 8 } , C 5 3 ( 2 ) = { c 9 , c 10 , c 11 , c 12 , c 13 , c 14 , c 15 , c 16 } , C 5 3 ( 3 ) = { c 17 , c 18 , c 19 , c 20 , c 1 , c 2 , c 3 , c 4 } , C 5 3 ( 4 ) = { c 5 , c 6 , c 7 , c 8 , c 9 , c 10 , c 11 , c 12 } , C 5 3 ( 5 ) = { c 13 , c 14 , c 15 , c 16 , c 17 , c 18 , c 19 , c 20 } .
Using induction we show that χ(C_5^1) = 3, χ(C_5^2) = 8, χ(C_5^3) = 20, χ(C_5^4) = 50, χ(C_5^5) = 125, χ(C_5^6) = 313, and so on. Following this pattern for n + 1, we reach χ(C_i^{n+1}) = 2χ(C_i^n) + ⌈χ(C_i^n)/2⌉.
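The color counts above, and the gain over the greedy 3^n coloring analyzed in Appendix C, follow directly by iterating the recursion; a short illustrative sketch:

```python
import math

chi = 3                                   # chi(C_5)
for n in range(1, 7):
    greedy = 3 ** n                       # greedy coloring of C_5^n
    print(f"n={n}: chi(C_5^n)={chi}, greedy={greedy}, gain={greedy / chi:.3f}")
    chi = 2 * chi + math.ceil(chi / 2)    # recursion of Proposition 3
```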

Appendix C. Proof of Proposition 4

From (10) in Proposition 3, and by applying lower and upper bounds to the ceiling function, we infer that
2χ(C_{2k+1}^{n−1}) + χ(C_{2k+1}^{n−1})/2 ≤ χ(C_{2k+1}^n) ≤ 2χ(C_{2k+1}^{n−1}) + χ(C_{2k+1}^{n−1})/2 + 1.
Exploiting the number of colors needed for coloring C_{2k+1}^n using the greedy algorithm, which is given as (χ(C_{2k+1}))^n = 3^n, together with (A1), the gain, η_n, lies in the following interval:
3^n / (2χ(C_{2k+1}^{n−1}) + χ(C_{2k+1}^{n−1})/2 + 1) ≤ η_n ≤ 3^n / (2χ(C_{2k+1}^{n−1}) + χ(C_{2k+1}^{n−1})/2).
For n = 1, we have χ(C_{2k+1}) = χ_Greedy(C_{2k+1}) = 3 = χ_1. For n ≥ 2, χ_Greedy(C_{2k+1}^n) = 3^{n−1} · χ_1, whereas our cyclic approach yields
χ(C_{2k+1}^n) ≥ (5/2)^{n−1} · χ_1, n ≥ 2.
Substituting χ(C_{2k+1}^{n−1}) ≥ (5/2)^{n−2} · χ_1 into the denominator of the upper bound in (A2), the ratio of the numerator to the denominator leads to η_n ≤ 3^{n−1} / (5/2)^{n−1}. Furthermore, using (A3), the denominator of the lower bound in (A2) is lower bounded as:
2χ(C_{2k+1}^{n−1}) + χ(C_{2k+1}^{n−1})/2 + 1 ≥ 2 · (5/2)^{n−2} · χ_1 + (1/2) · (5/2)^{n−2} · χ_1 + 1 = (5/2)^{n−1} · χ_1 + 1,
and from (A4), η_n ≈ 3^{n−1} · χ_1 / ((5/2)^{n−1} · χ_1 + 1), and as n → ∞, the lower and upper bounds match.

Appendix D. Proof of Proposition 5

We build our achievable scheme by applying the recursive coloring approach for χ ( C 2 k + 1 n ) in (10) and decomposing C 2 k + 1 n into sub-graphs that cover all vertices of C 2 k + 1 n . This decomposition involves two types of sets: (i) non-overlapping maximum independent sets of each power graph C 2 k + 1 t for t [ n ] by noting that the size of a maximum independent set is α ( C 2 k + 1 t ) = k t (i.e., given the full connection of sub-graphs in the t-fold OR product C 2 k + 1 t , α ( C 2 k + 1 t ) = k t for all t [ n ] [83]), and (ii) the remaining singleton set because V t = ( 2 k + 1 ) t is odd for all t [ n ] .
We next derive a coloring PMF for the proposed decomposition of C_{2k+1}^n, given by p(C_{C_{2k+1}^n}) = p( ζ_0 · 1/V^n, ζ_1 · k/V^n, …, ζ_n · k^n/V^n ), with coefficients {ζ_t, t ∈ {0} ∪ [n]} that represent the weights assigned to color each set and satisfy the following constraints:
(a) Σ_{t=1}^{n} ζ_t · k^t/V^n + ζ_0 · 1/V^n = 1, (b) ζ_t ≥ k · ζ_{t−1}, (c) ζ_0 = 1,
where ( a ) indicates that the coloring assignment forms a valid PMF. Constraint ( b ) minimizes the total number of colors required. In constraint ( c ) , α ( C 2 k + 1 0 ) = k 0 = 1 represents the additional color needed for the singleton set. Incorporating constraints in (A5), we have
ζ n · k n ( 2 k + 1 ) n + ζ n 1 · k n 1 ( 2 k + 1 ) n + + ζ 0 · 1 ( 2 k + 1 ) n 1 .
Using (A6), and the properties of geometric series, we obtain
1 = Σ_{t=0}^{n} ζ_t k^t/(2k+1)^n < (ζ_n/(2k+1)^n) · Σ_{t=0}^{n} k^t/k^{n−t} = (ζ_n/(2k+1)^n) · (1/k^n) · ((k^2)^{(n+1)} − 1)/(k^2 − 1),
where the final equality follows from computing the summation, which leads to ζ_n > (2k+1)^n k^n (k^2 − 1)/(k^{2(n+1)} − 1). Next, to derive the upper bound on ζ_n, we assign ζ_1 = 0 and maximize ζ_n. From (A7), we have
1 = Σ_{t=0}^{n} ζ_t k^t/(2k+1)^n > (ζ_n/(2k+1)^n) · Σ_{t=0}^{n} k^t/k^{n−t},
where reordering (A8), we have the following upper bound on ζ n :
ζ n < ( k 2 ( n + 1 ) 1 ) · ( 2 k + 1 ) n · k ( n 1 ) ( 2 k + 1 ) 2 n · ( k 2 1 ) · k ( n 1 ) ( k 2 ( n + 1 ) 1 ) .
Similarly, using (A9) we can bound ζ t where t < n , providing an upper bound on H C 2 k + 1 ( X 1 ) , which concludes our proof.

Appendix E. Proof of Theorem 1

Assume A f has V eigenvalues. To find the eigenvalues of A f 2 , one must solve (22), derived from evaluating (21), yielding V equations per block row and V 2 equations in total. Hence, there are V 2 equations, from which the remaining ( V 1 ) × V equations are just replicas of the eigenvalues of any given block row, due to cyclic symmetry. In (22), the terms with indices k + 1 and k 1 are in modulo V. Two additional equations are necessary to calculate λ k ( A f j ) where k [ V j ] , as the power j increases from n 1 to n. Both A f and J V are diagonalizable, i.e., there exists an invertible matrix P such that A f = P 1 H A f P and J V = P 1 H J V P , where H A f and H J V are diagonal matrices. Hence, the sum A f + J V satisfies
A_f + J_V = P^{−1} H_{A_f} P + P^{−1} H_{J_V} P = P^{−1} (H_{A_f} + H_{J_V}) P.
The eigenvalues of A f + J V can now be computed using the diagonal of H A f + H J V , where λ k ( A f ) and λ k ( J V ) represent the k-th eigenvalues of A f and J V matrices for k [ V ] , respectively.
λ k ( A f + J V ) = λ k ( A f ) + λ k ( J V ) , k [ V ] .
From Lemma 1, the number of distinct eigenvalues of A f 2 differs from the eigenvalues of A f at most by two, and similarly for the eigenvalues of A f j , j Z + 2 derived from A f j 1 of the ( j 1 ) -fold OR product graph. From (20), the sub-matrices A f j 1 in A f j take the following form
A_f^{j−1} v_k + J_{V^{j−1}} v_{k+1} + J_{V^{j−1}} v_{k−1} = ν v_k, k ∈ [V],
where j Z + 2 , and the column vector v k has dimensions V j 1 × 1 . For the n-fold OR product, A f n , is constructed by combining A f n 1 , J V n 1 from the ( n 1 ) -fold OR product, and Z V n 1 matrices, as in (20). The first block row of A f n includes A f n 1 , two J V n 1 matrices representing full connections to adjacent sub-graphs, and Z V n 1 matrices indicating no connections, all of which affect eigenvalue computation. Thus, C i n has two more distinct eigenvalues than C i n 1 .

Appendix F. Proof of Proposition 8

Given G d , V = G ( V , E ) , its 2-fold OR product is denoted by G d , V 2 , consisting of V sub-graphs, G d , V 2 = { G d , V 2 ( 1 ) , G d , V 2 ( 2 ) , , G d , V 2 ( V ) } . For a given sub-graph G d , V 2 ( l ) , where l [ V ] , any vertex x 2 ( l ) is connected to vertices in d adjacent sub-graphs between following indices l d 2 , l + d 2 , i.e., { G d , V 2 ( ( l d 2 ) mod ( V ) ) , , G d , V 2 ( ( l 1 ) mod ( V ) ) , G d , V 2 ( ( l + 1 ) mod ( V ) ) , , G d , V 2 ( ( l + d 2 ) mod ( V ) ) } . Thus, each vertex in G d , V 2 ( l ) has d × V edges to these adjacent sub-graphs. For n = 2 , d e g ( x 2 ) = d + d V , accounting for both edges within sub-graphs and edges to adjacent sub-graphs. For n = 3 , accounting for edges to adjacent sub-graphs and those within each sub-graph (i.e., d e g ( x 2 ) ) yields d e g ( x 3 ) = d + d V + d V 2 . For a given G d , V n 1 ( l ) , each vertex x n 1 ( l ) is connected to all vertices in the adjacent sub-graphs { G d , V n 1 ( ( l d 2 ) mod ( V ) ) , , G d , V n 1 ( ( l 1 ) mod ( V ) ) , G d , V n 1 ( ( l + 1 ) mod ( V ) ) , , G d , V n 1 ( ( l + d 2 ) mod ( V ) ) } . Similarly, for the ( n 1 ) -fold OR product, d e g ( x n 1 ) is calculated by induction, starting from the degree in the previous product, which yields d e g ( x n 1 ) = d + d V + d V 2 + + d V n 2 = d · j = 0 n 2 V j . Therefore, for G d , V n , there are V sub-graphs, where G d , V n ( l ) , l [ V ] , is fully connected to the adjacent sub-graphs, and for x n [ V n ] , d e g ( x n ) = d + d · j = 1 n 1 V j . Thus, d e g ( x n ) is given in closed form by (30).

Appendix G. Proof of Proposition 9

For any graph G ( V , E ) where max k [ V ] d e g ( x k ) 3 and no complete sub-graph exists of size max k [ V ] d e g ( x k ) + 1 , G can be colored with at most max k [ V ] d e g ( x k ) colors, making it max k [ V ] d e g ( x k ) -colorable [84]. For any G d , V , the chromatic number satisfies χ ( G d , V ) ω ( G d , V ) , where ω ( G d , V ) represents the clique number, i.e., the size of the largest complete sub-graph (clique) in G d , V [85].
Upper and lower bounds on χ ( G d , V ) hold for d , V Z + with V d + 1 . From Proposition 8, G d , V n is a regular graph. In G d , V 2 , a clique of size d exists among d adjacent sub-graphs, leading to χ ( G d , V 2 ) = d 2 , as at least d colors are needed per sub-graph. Similarly, in the j-fold OR product, each sub-graph connects to d others, forming a d j clique, resulting in χ ( G d , V n ) = d n .

Appendix H. Proof of Corollary 7

For the upper bound in (44), note that for K_i where i = V, the largest eigenvalue is V − 1 with multiplicity 1, and −1 has multiplicity V − 1 [86]. Thus, Λ(K_i^n) = max(λ_2(A_f^n), |λ_{V^n}(A_f^n)|) = 1, and using (32), we obtain the RHS of (44). Subsequently, for the lower bound in (44), we adjust (32) for C_i^n and determine E_θ(C_i^n). For computing Λ(C_i^n), we note that C_i^n is a d-regular graph as specified in Proposition 1. Therefore, the eigenvalues satisfy λ_1 > λ_2 > ⋯ > λ_{V^n}. Noting that the diagonal entries of A_f^n are zero allows us to deduce that trace(A_f^n) = 0. Given that, for C_i^n, λ_1(A_f^n) is positive and has the largest magnitude, λ_{V^n}(A_f^n) must be negative, approximately equal to λ_1(A_f^n) in magnitude, and larger in magnitude than λ_2(A_f^n) so that the trace becomes zero, as numerically demonstrated for C_i^n in Section 3.4; this indicates that the magnitudes of the eigenvalues follow the order d = |λ_1| > |λ_{V^n}| > |λ_2| > ⋯. Therefore, by replacing Λ(C_i^n), where max(λ_2(A_f^n), |λ_{V^n}(A_f^n)|) = |λ_{V^n}(A_f^n)|, in the denominator of (32), we reach (44).

Appendix I. Proof of Proposition 11

For the n-fold OR product graph G n , α ( G n ) is calculated as follows:
α ( G n ) = ( α ( G ) ) n ,
which follows by induction. For n = 2 , each sub-graph in G 2 is isomorphic to G with an independence number α ( G ) . Hence, the size of the maximum independent set in G 2 is α ( G ) 2 . Similarly, for the 3-fold OR product, the sub-graphs are isomorphic to G 2 , which implies α ( G 3 ) = α ( G ) 3 . This pattern extends to higher-order products [83] [Chapter 5], leading to (A12).
Next, using (A12) and (1), we derive an upper bound on H_G(X_1) for a given G^n. Exploiting the graph decomposition technique in the Proof of Proposition 5 (see Appendix D), we devise an achievable coloring for G^n, given by p(C_{G^n}) = p( ζ_0 · 1/V^n, ζ_1 · α(G)/V^n, …, ζ_n · α(G)^n/V^n ), with coefficients {ζ_t, t ∈ {0} ∪ [n]} that represent the weights of the decomposition and satisfy the following constraints:
(a) Σ_{t=1}^{n} ζ_t · α(G)^t/V^n + ζ_0 · α(G^0)/V^n = 1, (b) ζ_t ≥ α(G) · ζ_{t−1}, (c) ζ_0 = 1.
In (A13), the constraint ( a ) ensures that the coloring assignment forms a valid PMF. Constraints ( b ) and ( c ) are identical to those in the proof of Proposition 5. Employing (A13) yields
H( ζ_n · α(G^n)/V^n, ζ_{n−1} · α(G^{n−1})/V^n, …, ζ_0 · 1/V^n ),
where (A14) can be optimized over { ζ t , t { 0 } [ n ] } to minimize H ( C G n ) . Subsequently, to derive H G n χ ( X 1 ) , we use (13) and normalize the entropy calculated in (A14) for the n-fold OR product by a factor of 1 n . Finally, by using (A12) along with (A14), which upper bounds H G ( X 1 ) , we reach the statement of the proposition.

Appendix J. Proof of Corollary 8

For any connected G(V, E) with V = 2k + 1, we have χ(C_{2k+1}) ≤ χ(G), so χ(C_{2k+1}^n) gives a lower bound on χ(G^n). Therefore, we first calculate χ_f(C_{2k+1}^n), then use it to derive a lower bound on H_{C_{2k+1}}(X_1) and subsequently on H_G(X_1). Using a : b coloring for C_{2k+1}, we show that at least 2b + 1 colors are needed and claim that the infimum of χ_f(C_{2k+1}^n) occurs at b = k. We prove χ_b(C_{2k+1}) = 2b + 1 by contradiction: Assume that χ_b(C_{2k+1}) ≤ 2b and assign each vertex i ∈ [2k + 1] a coloring set C_i of size b. For i ∈ [2k], the coloring sets are defined as C_1 = [b], C_2 = [b + 1, 2b], …, and C_{2k} = [b]. Using 2b colors recursively over the vertices, disjoint color sets are assigned to each node up to the last node x_{2k+1}. However, reusing the previous 2b colors for C_{2k+1} is impossible, leading to a contradiction of the initial assumption. Next, from (3), we show that (2k + 1)/k ≤ χ_f(C_{2k+1}) holds. From [85], a lower bound on χ_f(G) is given by χ_f(G) ≥ V/α(G), which, by applying G = C_{2k+1}, proves the assumption that b = k.
To compute H_{C_{2k+1}^n}^χ(X_1), we use the bound H_{G_f}(X_1) ≤ H_G(X_1) from [12] [Lemma 1] and leverage the coloring PMF in [12] [Proposition 2], yielding, as n → ∞:
H_{G_f}(X_1) = lim_{n→∞} (1/n) inf_b (1/b) min_{C_{C_i^n}^f} H(C_{C_i^n}^f).
Given a uniform C i n , the entropy is simplified and calculated as H C 2 k + 1 n χ f ( X 1 ) = log 2 | χ f ( C 2 k + 1 n ) | . From (3), the graph entropy of C 2 k + 1 n , incorporating 1 n normalization for the n-th realization, is:
(1/n) H_{C_{2k+1}^n}^χ(X_1) = (1/n) · log_2( ((2b + 1)/b)^n ) = log_2( (2b + 1)/b ),
where the infimum over b is attained at b = k in (A16); this captures H_{C_{2k+1}}(X_1) and validates (46) for any such G.

Appendix K. Proof of Theorem 2

By Lemma 1, the eigenvalues of the sum of two symmetric matrices equal the sums of their respective eigenvalues. To calculate λ_k(A_f^n) for k ∈ [V^n], we express A_f^n as the sum of two symmetric matrices and determine the eigenvalues by summing those of the individual matrices. For n = 1, the eigenvalues λ_k(A_f) can be calculated using GCT or QR decomposition methods. For n = 2, the adjacency matrix A_f^2 is partitioned into A_f^2 = A_Gr^2 + A_fc^2, where A_Gr^2 is a block diagonal matrix with A_f on the diagonal blocks and Z_V elsewhere. The term A_fc^2 = A_f^2 − A_Gr^2 is constructed with all-zero matrices Z_V on the diagonal and J_V or Z_V on the off-diagonal blocks, based on the sub-graph connections. This decomposition allows:
λ k ( A f 2 ) = λ k ( A G r 2 ) + λ k ( A f c 2 ) , k [ V 2 ] .
Assume that the decomposition holds for n = j, i.e., A_f^j = A_Gr^j + A_fc^j and λ_k(A_f^j) = λ_k(A_Gr^j) + λ_k(A_fc^j). For n = j + 1, decompose A_f^{j+1} as A_f^{j+1} = A_Gr^{j+1} + A_fc^{j+1}. Here, A_Gr^{j+1} is block diagonal with A_f^j on the diagonal blocks, and A_fc^{j+1} contains the off-diagonal blocks, consisting of all-ones and all-zero matrices. Thus, by induction, λ_k(A_f^{n−1}) is computed using the GCT intervals δ_k according to (47), and the decomposition and eigenvalue sum hold for all n ≥ 2, as shown in (49).

Appendix L. Proof of Corollary 9

Consider a block diagonal matrix (e.g., A G r j F 2 V j × V j , j [ n ] ), where each diagonal block is identical to A f j 1 F 2 V j 1 × V j 1 , and all off-diagonal entries are zero. The eigenvalues of A G r j , j [ n ] is equal to λ k ( A f j 1 ) , k [ V j 1 ] , but the algebraic multiplicity of each eigenvalue is scaled by the number of times the block A f j 1 appears on the diagonal. Thus, for the 2-fold OR product, we have λ k ( A f 2 ) = λ k ( A G r 2 ) + λ k ( A f c 2 ) . Furthermore, for n = 3 , we can substitute λ k ( A G r 3 ) with λ k ( A G r 2 ) + λ k ( A f c 2 ) , as follows λ k ( A f 3 ) = λ k ( A G r 2 ) + λ k ( A f c 2 ) + λ k ( A f c 3 ) . Similarly, for j-fold OR product, λ k ( A f j ) = λ k ( A G r 2 ) + λ k ( A f c 2 ) + + λ k ( A f c j 1 ) + λ k ( A f c j ) , and for n = j + 1 , the relation is λ k ( A f j + 1 ) = λ k ( A G r 2 ) + + λ k ( A f c j + 1 ) . Similarly, for the n-fold OR product λ k ( A f n ) is calculated as follows:
λ k ( A f n ) = λ k ( A G r 2 ) + λ k ( A f c 2 ) + + λ k ( A f c n 1 ) + λ k ( A f c n ) , k [ V n ] .
By substituting λ k ( A G r 2 ) in (A18) with λ k ( A f ) , we reach the statement of the corollary in (50).

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480. [Google Scholar] [CrossRef]
  3. Alon, N.; Orlitsky, A. Source coding and graph entropies. IEEE Trans. Inf. Theory 1996, 42, 1329–1339. [Google Scholar] [CrossRef]
  4. Pawlak, M.; Rafajlowicz, E.; Krzyzak, A. Postfiltering versus prefiltering for signal recovery from noisy samples. IEEE Trans. Inf. Theory 2003, 49, 3195–3212. [Google Scholar] [CrossRef]
  5. Ho, T.; Médard, M.; Koetter, R.; Karger, D.R.; Effros, M.; Shi, J.; Leong, B. A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52, 4413–4430. [Google Scholar] [CrossRef]
  6. Ahlswede, R.; Cai, N.; Li, S.Y.; Yeung, R.W. Network information flow. IEEE Trans. Inf. Theory 2000, 46, 1204–1216. [Google Scholar] [CrossRef]
  7. Shannon, C. The zero error capacity of a noisy channel. IRE Trans. Inf. Theory 1956, 2, 8–19. [Google Scholar] [CrossRef]
  8. Doshi, V.; Shah, D.; Medard, M.; Jaggi, S. Distributed functional compression through graph coloring. In Proceedings of the 2007 Data Compression Conference (DCC’07), Snowbird, UT, USA, 27–29 March 2007; pp. 93–102. [Google Scholar]
  9. Doshi, V.; Shah, D.; Médard, M.; Effros, M. Functional compression through graph coloring. IEEE Trans. Inf. Theory 2010, 56, 3901–3917. [Google Scholar] [CrossRef]
  10. Orlitsky, A.; Roche, J.R. Coding for computing. IEEE Trans. Inf. Theory 2001, 47, 903–917. [Google Scholar] [CrossRef]
  11. Feizi, S.; Médard, M. On network functional compression. IEEE Trans. Inf. Theory 2014, 60, 5387–5401. [Google Scholar] [CrossRef]
  12. Malak, D. Fractional graph coloring for functional compression with side information. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022. [Google Scholar]
  13. Korner, J.; Marton, K. How to encode the modulo-two sum of binary sources (corresp.). IEEE Trans. Inf. Theory 1979, 25, 219–221. [Google Scholar] [CrossRef]
  14. Ahlswede, R.; Korner, J. Source coding with side information and a converse for degraded broadcast channels. IEEE Trans. Inf. Theory 1975, 21, 629–637. [Google Scholar] [CrossRef]
  15. Coleman, T.P.; Lee, A.H.; Médard, M.; Effros, M. Low-complexity approaches to slepian–wolf near-lossless distributed data compression. IEEE Trans. Inf. Theory 2006, 52, 3546–3561. [Google Scholar] [CrossRef]
  16. Han, T.; Kobayashi, K. A dichotomy of functions F(X, Y) of correlated sources (X, Y). IEEE Trans. Inf. Theory 1987, 33, 69–76. [Google Scholar] [CrossRef]
  17. Pradhan, S.S.; Ramchandran, K. Distributed source coding using syndromes (discus): Design and construction. IEEE Trans. Inf. Theory 2003, 49, 626–643. [Google Scholar] [CrossRef]
  18. Malak, D.; Deylam Salehi, M.R.; Serbetci, B.; Elia, P. Multi-server multi-function distributed computation. Entropy 2024, 26, 448. [Google Scholar] [CrossRef] [PubMed]
  19. Malak, D.; Deylam Salehi, M.R.; Serbetci, B.; Elia, P. Multi-functional distributed computation. In Proceedings of the 2024 60th Annual Allerton Conference on Communication, Control, and Computing, Urbana, IL, USA, 24–27 September 2024; pp. 1–8. [Google Scholar]
  20. Tanha, A.; Malak, D. The influence of placement on transmission in distributed computing of boolean functions. In Proceedings of the 2024 IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Lucca, Italy, 10–13 September 2024. [Google Scholar]
  21. Khalesi, A.; Elia, P. Perfect multi-user distributed computing. In Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024; pp. 1349–1354. [Google Scholar]
  22. Witsenhausen, H. The zero-error side information problem and chromatic numbers (corresp.). IEEE Trans. Inf. Theory 1976, 22, 592–593. [Google Scholar] [CrossRef]
  23. Deylam Salehi, M.R.; Purakkal, V.K.K.; Malak, D. Non-linear function computation broadcast. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Ann Arbor, MI, USA, 22–27 June 2025. [Google Scholar]
  24. Wyner, A.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory 1976, 22, 1–10. [Google Scholar] [CrossRef]
  25. Feng, H.; Effros, M.; Savari, S. Functional source coding for networks with receiver side information. In Proceedings of the Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–1 October 2004; pp. 1419–1427. [Google Scholar]
  26. Yamamoto, H. Wyner-Ziv theory for a general function of the correlated sources (corresp.). IEEE Trans. Inf. Theory 1982, 28, 803–807. [Google Scholar] [CrossRef]
  27. Berger, T.; Yeung, R.W. Multiterminal source encoding with one distortion criterion. IEEE Trans. Inf. Theory 1989, 35, 228–236. [Google Scholar] [CrossRef]
  28. Barros, J.; Servetto, S.D. On the rate-distortion region for separate encoding of correlated sources. In Proceedings of the IEEE International Symposium on Information Theory, Yokohama, Japan, 29 June–4 July 2003; p. 171. [Google Scholar]
  29. Wagner, A.B.; Tavildar, S.; Viswanath, P. Rate region of the quadratic gaussian two-encoder source-coding problem. IEEE Trans. Inf. Theory 2008, 54, 1938–1961. [Google Scholar] [CrossRef]
  30. Rebollo-Monedero, D.; Forne, J.; Domingo-Ferrer, J. From t-closeness-like privacy to postrandomization via information theory. IEEE Trans. Know. Data Eng. 2009, 22, 1623–1636. [Google Scholar] [CrossRef]
  31. Shirani, F.; Pradhan, S.S. A new achievable rate-distortion region for distributed source coding. IEEE Trans. Inf. Theory 2021, 67, 4485–4503. [Google Scholar] [CrossRef]
  32. Yuan, D.; Guo, T.; Bai, B.; Han, W. Lossy computing with side information via multi-hypergraphs. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 344–349. [Google Scholar]
  33. Sefidgaran, M.; Tchamkerten, A. Computing a function of correlated sources: A rate region. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 31 July 2011–5 August 2011; pp. 1856–1860. [Google Scholar]
  34. Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. In Proceedings of the 6th Prague Conference on Information Theory, Prague, Czechoslovakia, 17–22 September 1973; pp. 411–425. [Google Scholar]
  35. Pettofrezzo, A.J.; Byrkit, D.R. Elements of Number Theory; Prentice-Hall: Saddle River, NJ, USA, 1970. [Google Scholar]
  36. Paar, C.; Pelzl, J. Understanding Cryptography: Textbook for Students; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  37. Diffie, W.; Hellman, M.E. Multiuser cryptographic techniques. In Proceedings of the National Computer Conference and Exposition, New York, NY, USA, 7–10 June 1976; pp. 109–112. [Google Scholar]
  38. Selent, D. Advanced encryption standard. Rivier Acad. J. 2010, 6, 1–14. [Google Scholar]
  39. Basu, S. International data encryption algorithm (idea)—A typical illustration. J. Glob. Res. Comput. Sci. 2011, 2, 116–118. [Google Scholar]
  40. Knill, G. Applications: International standard book numbers. Math. Teach. 1981, 74, 47–48. [Google Scholar] [CrossRef]
  41. Friedman, J.; Tillich, J.-P. Generalized alon—Boppana theorems and error-correcting codes. SIAM J. Discret. Math. 2005, 19, 700–718. [Google Scholar] [CrossRef]
  42. Berestycki, N.; Lubetzky, E.; Peres, Y.; Sly, A. Random walks on the random graph. Ann. Probab. 2018, 46, 456–490. [Google Scholar] [CrossRef]
  43. Kahale, N. Better expansion for ramanujan graphs. In Proceedings of the IEEE Symposium of Foundations of Computer Science, San Juan, PR, USA, 1–4 October 1991; pp. 398–404. [Google Scholar]
  44. Kahale, N. On the second eigenvalue and linear expansion of regular graphs. In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, PA, USA, 24–27 October 1992; pp. 296–303. [Google Scholar]
  45. Felber, P.; Kropf, P.; Schiller, E.; Serbu, S. Survey on load balancing in peer-to-peer distributed hash tables. IEEE Commun. Surv. Tutor. 2013, 16, 473–492. [Google Scholar] [CrossRef]
  46. Ganesh, A.; Massoulié, L.; Towsley, D. The effect of network topology on the spread of epidemics. In Proceedings of the IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies, Miami, FL, USA, 13–17 March 2005; Volume 2, pp. 1455–1466. [Google Scholar]
  47. Tentes, A. Expander Graphs, Randomness Extractors and Error Correcting Codes. Ph.D. Thesis, University of Patras, Patras, Greece, 2009. [Google Scholar]
  48. Szabó, G.; Fath, G. Evolutionary games on graphs. Phys. Rep. 2007, 446, 97–216. [Google Scholar] [CrossRef]
  49. You, J.; Gomes-Selman, J.M.; Ying, R.; Leskovec, J. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 2–9 February; Volume 35, pp. 737–745.
  50. Cardinal, J.; Fiorini, S.; Joret, G. Tight results on minimum entropy set cover. Algorithmica 2008, 51, 49–60. [Google Scholar] [CrossRef]
  51. Tretter, C. Spectral Theory of Block Operator Matrices and Applications; World Scientific: Singapore, 2008. [Google Scholar]
  52. Havel, V. A remark on the existence of finite graphs. Casopis Pest. Mat. 1955, 80, 477–480. [Google Scholar] [CrossRef]
  53. Beigel, R. Finding maximum independent sets in sparse and general graphs. In SODA; Society for Industrial and Applied Mathematics (SIAM): Baltimore, MD, USA, 1999; Volume 99, pp. 856–857. [Google Scholar]
  54. Chen, W.-K. Graph Theory and Its Engineering Applications; World Scientific: Singapore, 1997; Volume 5. [Google Scholar]
  55. Nagle, J.F. On ordering and identifying undirected linear graphs. J. Math. Phys. 1966, 7, 1588–1592. [Google Scholar] [CrossRef]
  56. Nilli, A. On the second eigenvalue of a graph. Discret. Math. 1991, 91, 207–210. [Google Scholar] [CrossRef]
  57. Alon, N. Graph powers. Contemp. Comb. 2002, 10, 11–28. [Google Scholar]
  58. Bermond, J.-C. Hamiltonian graphs. In Selected Topics in Graph Theory; Academic Press: Cambridge, MA, USA, 1979; pp. 127–167. [Google Scholar]
  59. Axenovich, M. Lecture Notes Graph Theory; Karlsruher Institut für Technologie: Karlsruhe, Germany, 2014. [Google Scholar]
  60. Bang-Jensen, J.; Gutin, G.Z. Digraphs: Theory, Algorithms and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  61. Körner, J.; Orlitsky, A. Zero-error information theory. IEEE Trans. Inf. Theory 1998, 44, 2207–2229. [Google Scholar] [CrossRef]
  62. Koulgi, P.; Tuncel, E.; Regunathan, S.L.; Rose, K. On zero-error source coding with decoder side information. IEEE Trans. Inf. Theory 2003, 49, 99–111. [Google Scholar] [CrossRef]
  63. Tuncel, E. Kraft inequality and zero-error source coding with decoder side information. IEEE Trans. Inf. Theory 2007, 53, 4810–4816. [Google Scholar] [CrossRef]
  64. Scheinerman, E.R.; Ullman, D.H. Fractional Graph Theory: A Rational Approach to the Theory of Graphs; Courier Corporation: Mumbai, India, 2011. [Google Scholar]
  65. Salas, H.N. Gershgorin’s theorem for matrices of operators. Linear Algebra Its Appl. 1999, 291, 15–36. [Google Scholar] [CrossRef]
  66. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
  67. Deylam Salehi, M.R.; Malak, D. An achievable low complexity encoding scheme for coloring cyclic graphs. In Proceedings of the 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 26–29 September 2023; pp. 1–8. [Google Scholar]
  68. Aspvall, B.; Gilbert, J.R. Graph coloring using eigenvalue decomposition. Siam J. Algebr. Discret. Methods 1984, 5, 526–538. [Google Scholar] [CrossRef]
  69. Hoffman, A.J.; Howes, L. On eigenvalues and colorings of graphs, ii. Ann. N. Y. Acad. Sci. 1970, 175, 238–242. [Google Scholar] [CrossRef]
  70. Wilf, H.S. The eigenvalues of a graph and its chromatic number. J. Lond. Math. Soc. 1967, 1, 330–332. [Google Scholar] [CrossRef]
  71. Brigham, R.C.; Dutton, R.D. Bounds on graph spectra. J. Comb. Theory Ser. 1984, 37, 228–234. [Google Scholar] [CrossRef]
  72. Hong, Y. Bounds of eigenvalues of a graph. Acta Math. Appl. Sin. 1988, 4, 165–168. [Google Scholar] [CrossRef]
  73. Das, K.C.; Kumar, P. Some new bounds on the spectral radius of graphs. Disc. Math. 2004, 281, 149–161. [Google Scholar] [CrossRef]
  74. Ajtai, M.; Komlós, J.; Szemerédi, E. Sorting in c log n parallel steps. Combinatorica 1983, 3, 1–19. [Google Scholar] [CrossRef]
  75. Arora, S.; Leighton, T.; Maggs, B. On-line algorithms for path selection in a nonblocking network. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, 13–17 May 1990; pp. 149–158. [Google Scholar]
  76. Bellare, M.; Goldreich, O.; Goldwasser, S. Randomness in interactive proofs. Comput. Comp. 1993, 3, 319–354. [Google Scholar] [CrossRef]
  77. Ajtai, M.; Komlós, J.; Szemerédi, E. Deterministic simulation in logspace. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 25–27 May 1987; pp. 132–140. [Google Scholar]
  78. Tanner, R.M. Explicit concentrators from generalized N-gons. SIAM J. Algebr. Discret. Methods 1984, 5, 287–293. [Google Scholar] [CrossRef]
  79. Watkins, D.S. Understanding the QR algorithm. SIAM Rev. 1982, 24, 427–440. [Google Scholar] [CrossRef]
  80. Echeverría, C.; Liesen, J.; Nabben, R. Block diagonal dominance of matrices revisited: Bounds for the norms of inverses and eigenvalue inclusion sets. Linear Algebra Its Appl. 2018, 553, 365–383. [Google Scholar] [CrossRef]
  81. Mehlhorn, K.; Sun, H. Great Ideas in Theoretical Computer Science; Max Planck Inst.: Munich, Germany, 2014. [Google Scholar]
  82. Francis, J.G. The QR transformation a unitary analogue to the LR transformation—Part 1. Comput. J. 1961, 4, 265–271. [Google Scholar] [CrossRef]
  83. Klavžar, S.; Imrich, W. Product Graphs: Structure and Recognition; Wiley: Hoboken, NJ, USA, 2000. [Google Scholar]
  84. Lovász, L. Three short proofs in graph theory. J. Comb. Theory Ser. B 1975, 19, 269–271. [Google Scholar] [CrossRef]
  85. Lovász, L. On the ratio of optimal integral and fractional covers. Discret. Math. 1975, 13, 383–390. [Google Scholar] [CrossRef]
  86. Qi, L.; Miao, L.; Zhao, W.; Liu, L. Characterization of graphs with an eigenvalue of large multiplicity. Adv. Math. Phys. 2020, 672. [Google Scholar] [CrossRef]
Figure 1. Distributed functional compression with two sources and a receiver, where G_{X_1} is cyclic.
Figure 2. A valid coloring of C_4^3 with 8 colors.
Figure 3. The 2-fold OR product of C_5, i.e., C_5^2, and its valid coloring.
Figure 4. (Left) χ(C_{2k+1})^n (dashed (blue) curve) and χ(C_{2k+1}^n) (solid (orange) curve) for any k ≥ 2. (Right) The gain, i.e., η_n, of the coloring approach in Proposition 3 compared with the Greedy algorithm.
Figure 5. A 3-regular graph, G_{3,6}, distinguished by a dashed square, with χ(G_{3,6}^2) = 9.
Figure 6. Splitting the adjacency matrix A_{f_1}^2 into two symmetric matrices A_{Gr}^2 and A_{f_1 c}^2.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
