Multi-Server Multi-Function Distributed Computation

This work studies the communication cost of a multi-server multi-task distributed computation framework, for a broad class of functions and data statistics. Considering the framework where a user seeks the computation of multiple complex (conceivably non-linear) tasks from a set of distributed servers, we establish upper bounds on the communication cost for a variety of data statistics, function classes, and data placements across the servers. To do so, we apply, for the first time here, Körner's characteristic graph approach—which is known to capture the structural properties of data and functions—to the promising framework of multi-server multi-task distributed computing. Going beyond the general expressions, and in order to offer clearer insight, we also consider the well-known scenario of cyclic dataset placement and linearly separable functions over the binary field, in which case our approach exhibits considerable gains over the state of the art. Similar gains are identified for the case of multi-linear functions.


I. INTRODUCTION
Distributed computing plays an increasingly significant role in accelerating the execution of computationally challenging and complex tasks. This growth in influence is rooted in the innate capability of distributed computing to parallelize computational loads across multiple servers. This same parallelization renders distributed computing an indispensable tool for addressing a wide array of complex computational challenges, spanning scientific simulations, extracting various spatial data distributions [1], data-intensive analyses for cloud computing [2], machine learning [3], as well as applications in various other fields such as computational fluid dynamics [4], high-quality graphics for movie and game rendering [5], and a variety of medical applications [6], to name just a few. At the center of this ever-increasing presence of parallelized computing stand modern parallel processing techniques, such as MapReduce [7]-[9] and Spark [10], [11].
For distributed computing to achieve the desirable parallelization effect, though, there is an undeniable need for massive information exchange to and from the various network nodes. Reducing this communication load is essential for scalability [12]-[15] in various topologies [16]-[18]. Central to the effort to reduce communication costs stand coding techniques such as those found in [19]-[37], including gradient coding [21] and different variants of coded distributed computing that yield gains in reliability, scalability, computation speed, and cost-effectiveness [24]. Similar communication-load aspects are often addressed via polynomial codes [38], which can mitigate stragglers and enhance the recovery threshold, while MatDot codes, devised in [32], [39] for secure distributed matrix multiplication, can decrease the number of transmissions for distributed matrix multiplication. This same emphasis on reducing communication costs is even more prominent in works like [32], [35], [36], [39]-[47], which again focus on distributed matrix multiplication. For example, focusing on a cyclic dataset placement model, the work in [40] provided useful achievability results, while the authors of [36] characterized achievability and converse bounds for secure distributed matrix multiplication. Furthermore, the work in [35] found creative methods to exploit the correlation between the entries of the matrix product in order to reduce the cost of communication.

Derya Malak, Mohammad Reza Deylam Salehi, and Petros Elia are with the Communication Systems Department, EURECOM, Biot Sophia Antipolis, 06904 France (emails: {malak, deylam, elia}@eurecom.fr). This work was conducted when B. Serbetci was a Postdoctoral Researcher at EURECOM (fberks@gmail.com). This research was partially supported by a Huawei France-funded Chair towards Future Wireless Networks, and supported by the program "PEPR Networks of the Future" of France 2030. Co-funded by the European Union (ERC, SENSIBILITÉ, 101077361, and ERC-PoC, LIGHT, 101101031). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. arXiv:2405.08732v1 [cs.IT] 14 May 2024.
A. The multi-server multi-function distributed computing setting, and the need for accounting for general non-linear functions

As computing requirements become increasingly challenging, distributed computing models have also evolved to be increasingly complex. One such recent model is the multi-server multi-function distributed computing model, which consists of a master node, a set of distributed servers, and a user demanding the computation of multiple functions. The master contains the set of all datasets and allocates them to the servers, which are then responsible for computing a set of specific subfunctions of the datasets. This multi-server multi-function setting was recently studied by Wan et al. in [40] for the class of linearly separable functions, which nicely captures a wide range of real-world tasks [7] such as convolution [42], the discrete Fourier transform [48], and a variety of other cases as well. This same work bounded the communication cost, employing linear encoding and linear decoding that leverage the structure of the requests.
At the same time, though, there is a growing need to consider more general classes of functions, including non-linear functions, as is often the case with subfunctions that produce intermediate values in MapReduce operations [7], or that relate to quantization [49], classification [50], and optimization [51]. Intense interest can also be identified in the aforementioned problem of distributed matrix multiplication, which has been explored in a plethora of works that include [36], [43], [46], [52]-[54], with a diverse focus that entails secrecy [46], [52], [54], as well as precision and stragglers [14], [36], [43], [53], to name a few. In addition to matrix multiplication, other important non-linear function classes include sparse polynomial multiplication [55], permutation invariant functions [56]—which often appear in multiagent settings and have applications in learning, combinatorics, and graph neural networks—as well as nomographic functions [57], [58], which can appear in the context of sensor networks and which have strong connections with interference exploitation and lattice codes, as nicely revealed in [57], [58].
Our own work here is indeed motivated by this emerging need for distributed computing of non-linear functions, and our goal is to now consider general functions in the context of the multi-server, multi-function distributed computing framework, while also capturing dataset statistics and correlations, and while exploiting the structural properties of the (possibly non-linear) functions requested by the user. To do so, we go beyond the linear coding approaches in [40], [59], [60], and devise demand-based encoding-decoding solutions. In particular, we adopt—in the context of the multi-server multi-function framework—the powerful tools of characteristic graphs, which are specifically geared toward capturing both the statistical structure of data and the properties of functions beyond the linear case. To help the reader better understand our motivation and contribution, we proceed with a brief discussion on data structure and characteristic graphs.

B. Data correlation and structure
Crucial in reducing the communication bottleneck of distributed computing is the ability to capture the structure that appears in modern datasets. Indeed, even before computing considerations come into play, capturing the general structure of data has been crucial in reducing the communication load in various scenarios, such as those in the seminal works by Slepian-Wolf [61] and Cover [62]. Similarly, when function computation is introduced, data structure can be a key component. In the context of computing, we have seen the seminal work by Körner and Marton [63], which focused on efficient compression of the modulo 2 sum of two statistically dependent sources, while Lalitha et al. [64] explored linear combinations of multiple statistically dependent sources. Furthermore, for general bivariate functions of correlated sources, when one of the sources is available as side information, the work of Yamamoto [65] generalized the pioneering work of Wyner and Ziv [66] to provide a rate-distortion characterization for the function computation setting.
It is the case, though, that when the computational model becomes more involved—as is the case in our multi-server multi-function scenario here—data may often be treated as unstructured and independent [40], [59], [67]-[69]. This naturally allows for crucial analytical tractability, but it may often ignore the potential benefits of accounting for statistical skews and correlations in data when aiming to reduce communication costs in distributed computing. Furthermore, this comes at a time when more and more function computation settings—such as in medical imaging analysis [70], data fusion and group inferences [71], as well as predictive modeling for artificial intelligence [72]—entail datasets with prominent dependencies and correlations. While various works, such as those by Körner-Marton [63], Han-Kobayashi [73], Yamamoto [65], Alon-Orlitsky [74], and Orlitsky-Roche [75], provide crucial breakthroughs in exploiting data structure, to the best of our knowledge, in the context of fully distributed function computation, the structure in functions and data has yet to be considered simultaneously.

C. Characteristic graphs
To jointly account for this structure in both data and functions, we draw from the powerful literature on characteristic graphs, introduced by Körner for source coding [76], and used in data compression [63], [74], [75], [77]-[79], cryptography [80], image processing [81], and bioinformatics [82]. For example, toward understanding the fundamental limits of distributed functional compression, the work in [76] devised the graph entropy approach in order to provide the best possible encoding rate of an information source with vanishing error probability. This approach, while capturing both function structure and source structure, was presented for the case of one source, and it is not directly applicable to the distributed computing setting. Similarly, the zero-error side information setting in [74] and the lossy encoding settings in [65], [75] use Körner's graph entropy [76] to capture both function structure and source structure, but were again presented for the case of one source. A similar focus can be found in the works in [74], [75], [77], [78], [80]. The same characteristic graph approach has been nicely used by Feizi and Médard in [83] for a simple distributed computing framework, in the absence of considerations for data structure.
Characteristic graphs, which are used in fully distributed architectures to compress information, can allow us to capture various data statistics and correlations, various data placement arrangements, and various function types.This versatility motivates us to employ characteristic graphs in our multi-server, multi-function architecture for distributed computing of non-linear functions.

D. Contributions
In this paper, leveraging fundamental principles from source and functional compression as well as graph theory, we study a general multi-server multi-function distributed computing framework composed of a single user requesting a set of functions, which are computed with the assistance of distributed servers that have partial access to the datasets. To achieve our goal, we consider the use of Körner's characteristic graph framework [76] in our multi-server multi-function setting, and proceed to establish upper bounds on the achievable sum-rates reflecting the setting's communication requirements.
By extending, for the first time here, Körner's characteristic graph framework [76] to the new multi-server multi-function setting, we are able to reflect the nature of the functions and data statistics, in order to allow each server to build a codebook of encoding functions that determine the transmitted information. Each server, using its own codebook, can transmit a function (or a set of functions) of the subfunctions of the data available in its storage, and then provide the user with sufficient information for evaluating the demanded functions. The codebooks allow for a substantial reduction in the communication load.
The employed approach allows us to account for general dataset statistics, correlations, dataset placement, and function classes, thus yielding gains over the state of the art [40], [61], as showcased in our examples for the case of linearly separable functions in the presence of statistically skewed data, as well as for the case of multi-linear functions, where the gains are particularly prominent, again under statistically skewed data. For this last case of multi-linear functions, we provide an upper bound on the achievable sum-rate (see Subsection IV-B), under a cyclic placement of data residing in the binary field. We also provide a generalization of some elements of existing works on linearly separable functions [40], [59].
In the end, our work demonstrates the power of characteristic graph-based encoding for exploiting the structural properties of functions and data in distributed computing, and provides insights into fundamental compression limits, all for the broad scenario of multi-server, multi-function distributed computation.

E. Paper organization
The rest of this paper is structured as follows. Section II describes the system model for the multi-server multi-function architecture. Section III details the main results on the communication cost or sum-rate bounds under general dataset distributions and correlations, dataset placement models, and general function classes requested by the user, over a field of characteristic q ≥ 2, through employing the characteristic graph approach, and contrasts the sum-rate with relevant prior works, e.g., [40], [61]. Section IV provides numerical evaluations that demonstrate the achievable gains. Finally, we summarize our key results and outline possible future directions in Section V. We provide a primer on the key definitions and results on characteristic graphs and their fundamental compression limits in Appendix A, and give the proofs of our main results in Appendix B.
Notation: We denote by H(X) = E[− log P_X(X)] the Shannon entropy of a random variable X drawn from the distribution or probability mass function (PMF) P_X. Let P_{X_1, X_2} be the joint PMF of two random variables X_1 and X_2, where X_1 and X_2 are not necessarily independent and identically distributed (i.i.d.), i.e., the joint PMF is not necessarily in product form. The notation X ∼ Bern(ϵ) denotes that X is Bernoulli distributed with parameter ϵ ∈ [0, 1]. Let h(·) denote the binary entropy function, and H_B(B(n, ϵ)) denote the entropy of a Binomial random variable of size n ∈ N, with ϵ ∈ [0, 1] modeling the success probability of each Boolean-valued outcome. The notation X_S = {X_i : i ∈ S} denotes the variables of the subset of servers with indices i ∈ S, for S ⊆ Ω, and S^c = Ω \ S denotes the complement of S. We denote the probability of an event A by P(A). The notation 1_{x∈A} denotes the indicator function, which takes the value 1 if x ∈ A and 0 otherwise. The notation G_{X_i} denotes the characteristic graph that server i ∈ Ω builds for computing F(X_Ω). The measures H_{G_X}(X) and H_{G_X}(X | Y) denote the entropy of the characteristic graph G_X, and the conditional graph entropy for random variable X given Y, respectively. The notation T(N, K, K_c, M, N_r) denotes the topology of the distributed system. We note that Z_i denotes the indices of the datasets stored at server i ∈ Ω, and K_n(S) = |Z_S| = |∪_{i∈S} Z_i| represents the cardinality of the union of the dataset index sets of the servers in a given subset S ⊆ Ω. We also note that [N] = {1, 2, . . . , N} for N ∈ Z^+, and [a : b] = {a, a + 1, . . . , b} for a, b ∈ Z^+ such that a < b. We use the convention mod{b, a} = a if a divides b. We summarize the notation in Table I.
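As a quick illustration of the entropy quantities in the notation above, the following minimal Python sketch (our own, not part of the paper) computes H(X), the binary entropy h(ϵ), and H_B(B(n, ϵ)) directly from their standard definitions:

```python
import math

def shannon_entropy(pmf):
    """H(X) = E[-log2 P_X(X)], in bits."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

def binary_entropy(eps):
    """h(eps): entropy of a Bern(eps) variable."""
    return shannon_entropy([eps, 1 - eps])

def binomial_entropy(n, eps):
    """H_B(B(n, eps)): entropy of a Binomial(n, eps) variable."""
    pmf = [math.comb(n, k) * eps**k * (1 - eps)**(n - k) for k in range(n + 1)]
    return shannon_entropy(pmf)

print(binary_entropy(0.5))   # 1.0 (a fair coin carries one bit)
print(binomial_entropy(4, 0.2))
```

Note that h(ϵ) < 1 for any skewed ϵ ≠ 1/2, which is precisely the source of the compression gains exploited later for skewed Bernoulli subfunctions.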

II. SYSTEM MODEL
This section outlines our multi-server multi-function architecture and details our main technical contributions, namely the communication cost for the problem of distributed computing of general non-linear functions, and the cost for special instances of the computation problem under some simplifying assumptions on the dataset statistics, dataset correlations, placement, and the structures of the functions.
In the multi-server, multi-function distributed computation framework, the master has access to the set of all datasets and distributes the datasets across the servers. The total number of servers is N, and each server has a capacity of M. Communication from the master to the servers is allowed, whereas the servers are distributed and cannot collaborate. The user requests K_c functions that could be non-linear. Given the dataset assignment to the servers, any subset of N_r servers is sufficient to compute the requested functions. We denote by T(N, K, K_c, M, N_r) the topology of the described multi-server multi-function distributed computing setting, which we detail in the following.

A. Datasets, subfunctions, and placement into distributed servers
There are K datasets in total, each denoted by D_k for k ∈ [K]. The master assigns the datasets to the servers, where the assignments possibly overlap.
Each server computes a set of subfunctions W_k = h_k(D_k) for the dataset indices k ∈ Z_i assigned to it. We denote the number of symbols in each W_k by L, which equals the blocklength n. Let X_i = {W_k}_{k∈Z_i} = W_{Z_i} = {h_k(D_k)}_{k∈Z_i} denote the set of subfunctions of the i-th server, X_i be the alphabet of X_i, and X_Ω = (X_1, X_2, . . . , X_N) be the set of subfunctions of all servers. We denote by W_k = (W_{k1}, W_{k2}, . . . , W_{kn}) and X_i = (X_{i1}, X_{i2}, . . . , X_{in}) ∈ F_q^{|Z_i|×n} the length-n sequences of subfunction W_k and of W_{Z_i} assigned to server i ∈ Ω, respectively.

B. Cyclic dataset placement model, computation capacity, and recovery threshold
We assume that the total number of datasets K is divisible by the number of servers N, i.e., K/N := ∆ ∈ Z^+. The dataset placement on the N distributed servers is conducted in a circular or cyclic manner, with ∆ circular shifts between two consecutive servers, where the shifts are to the right and the final entries are moved to the first positions, if necessary. As a result of the cyclic placement, any subset of N_r servers covers the set of all datasets needed to compute the functions requested by the user. Given N_r ∈ [N], each server has a storage size or computation cost of |Z_i| = M = ∆(N − N_r + 1), and the amount of dataset overlap between consecutive servers is ∆(N − N_r).
Hence, the set of dataset indices assigned to server i ∈ Ω is given as

Z_i = { mod{(i − 1)∆ + m, K} : m ∈ [M] }. (1)

As a result of (1), the cardinality of the datasets assigned to each server meets the storage capacity constraint M with equality, i.e., |Z_i| = M, for all i ∈ Ω.
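The cyclic placement rule above can be sketched in a few lines of Python. The helper name `cyclic_placement` and its indexing conventions are our own illustrative choices; the sketch is consistent with ∆ = K/N shifts between consecutive servers, |Z_i| = M = ∆(N − N_r + 1), and the coverage property that any N_r servers jointly hold all K datasets:

```python
from itertools import combinations

def cyclic_placement(K, N, N_r):
    """Cyclic dataset placement: server i (0-based here) stores M = Delta*(N - N_r + 1)
    consecutive dataset indices (1-based, wrapping mod K), shifted by Delta = K // N
    relative to the previous server."""
    assert K % N == 0, "K must be divisible by N"
    Delta = K // N
    M = Delta * (N - N_r + 1)
    return [{(i * Delta + m) % K + 1 for m in range(M)} for i in range(N)]

# Example: K = 8 datasets, N = 4 servers, recovery threshold N_r = 3, so M = 4.
# Server 0 holds {1,2,3,4}; server 3 wraps around to {7,8,1,2}.
Z = cyclic_placement(8, 4, 3)
print(Z)

# Any N_r = 3 servers jointly cover all K datasets, and consecutive servers
# overlap in Delta*(N - N_r) = 2 datasets.
for S in combinations(range(4), 3):
    assert set().union(*(Z[i] for i in S)) == set(range(1, 9))
assert len(Z[0] & Z[1]) == 2
```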

C. User demands and structure of the computation
We address the problem of distributed lossless compression of a set of K_c ≥ 1 general multi-variate functions requested by the user from the set of servers, where the functions are known to the servers and the user. More specifically, from a subset of distributed servers, the user aims to compute in a lossless manner the following length-n sequence as n tends to infinity:

F_j(X_Ω) = (F_j(X_{11}, X_{21}, . . . , X_{N1}), . . . , F_j(X_{1n}, X_{2n}, . . . , X_{Nn})), j ∈ [K_c], (2)

where F_j(X_{1l}, X_{2l}, . . . , X_{Nl}) is the function outcome for the l-th realization, l ∈ [n], of the length-n sequence. We note that the representation in (2) is the most general form of a (conceivably non-linear) multi-variate function, which encompasses the special cases of separable functions and linearly separable functions, which we discuss next.
In this work, the user seeks to compute functions that are separable with respect to the datasets. Each demanded function can be written in terms of the subfunctions W_k = h_k(D_k), where each h_k is a general function (linear or non-linear) of dataset D_k. Hence, using the relation X_i = W_{Z_i}, each demanded function, j ∈ [K_c], can be written in the following form:

F_j(X_Ω) = f_j(W_K), j ∈ [K_c]. (3)

In the special case of linearly separable functions [40], the demanded functions take the form

F_j(X_Ω) = f_j(W_K) = Σ_{k∈[K]} γ_{jk} W_k, j ∈ [K_c], (4)

where W_K = (W_1, W_2, . . . , W_K) is the subfunction vector, and the coefficient matrix Γ = {γ_{jk}} ∈ F_q^{K_c×K} is known to the master node, the servers, and the user, i.e., {F_j(X_Ω)}_{j∈[K_c]} = ΓW. The functions we consider are, however, not restricted to linearly separable functions, i.e., it may hold that {F_j(X_Ω)}_{j∈[K_c]} ≠ ΓW.
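For concreteness, a linearly separable demand {F_j(X_Ω)}_{j∈[K_c]} = ΓW over F_q as in (4) is simply a matrix-vector product with arithmetic mod q. The sketch below, with a toy Γ and W of our own choosing, illustrates this:

```python
def linearly_separable_demands(Gamma, W, q):
    """Compute {f_j(W_K)}_{j} = Gamma @ W with all arithmetic over F_q (mod q)."""
    return [sum(g * w for g, w in zip(row, W)) % q for row in Gamma]

# Toy example over F_3: K_c = 2 demanded functions of K = 4 subfunctions.
Gamma = [[1, 2, 0, 1],
         [0, 1, 1, 2]]
W = [2, 1, 0, 1]
print(linearly_separable_demands(Gamma, W, 3))  # [2, 0]
```

A general (non-linearly-separable) demand replaces this mod-q inner product with an arbitrary map f_j of W_K, which is what the characteristic graph machinery below is designed to handle.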

D. Communication cost for the characteristic graph-based computing approach
To compute {F_j(X_Ω)}_{j∈[K_c]}, each server i ∈ Ω constructs a characteristic graph, denoted by G_{X_i}, for compressing X_i. More specifically, for asymptotic lossless computation of the demanded functions, the server builds the n-th OR power G^n_{X_i} of G_{X_i} for compressing X_i to determine the transmitted information. The minimal code rate achievable for distinguishing the edges of G^n_{X_i} as n → ∞ is given by the characteristic graph entropy, H_{G_{X_i}}(X_i). For a primer on key graph-theoretic concepts, characteristic graph-related definitions, and the fundamental compression limits of characteristic graphs, we refer the reader to [77], [80], [83]. In this work, we solely focus on the characterization of the total communication cost from all servers to the user, i.e., the achievable sum-rate, without accounting for the costs of communication between the master and the servers, or of the computations performed at the servers and the user.
Each server i ∈ Ω builds a coloring of G^n_{X_i} that specifies the color classes of X_i, which form independent sets that distinguish the demanded function outcomes. Let g_i be the encoding function that models the transmission of server i ∈ Ω, i.e., the color encoding performed by server i ∈ Ω for X_i. Hence, the communication rate of server i ∈ Ω, for a sufficiently large blocklength n, where T_i is the length of the color encoding performed at i ∈ Ω, satisfies

R_i = T_i / n ≥ (1/n) H_{χ, G^n_{X_i}}(X_i) → H_{G_{X_i}}(X_i), as n → ∞, (5)

where the inequality follows from exploiting the achievability of the chromatic entropy H_{χ, G^n_{X_i}}(X_i), whose normalized limit is the graph entropy [76]. We refer the reader to Appendix B for a detailed description of the notions of chromatic and graph entropies (cf. (33) and (34), respectively).
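To make the coloring-based encoding concrete, the following sketch builds a single-letter (n = 1) characteristic graph for a toy two-server example and colors it greedily; the function names and the full-support assumption on the joint distribution are our own illustrative simplifications, not the paper's construction. For the parity demand f(X_1, X_2) = (X_1 + X_2) mod 2 with X_1 ∈ {0, 1, 2, 3}, two values of X_1 are adjacent exactly when they have different parities, so two colors (one bit) suffice for server 1 instead of the two bits needed to describe X_1 itself:

```python
from itertools import product

def characteristic_graph(X1_vals, X2_vals, f):
    """Single-letter characteristic graph for server 1: connect x, x' when some
    value x2 of the other server (full support assumed) gives f(x, x2) != f(x', x2)."""
    return {(x, xp) for x, xp in product(X1_vals, repeat=2)
            if x < xp and any(f(x, x2) != f(xp, x2) for x2 in X2_vals)}

def greedy_coloring(vals, edges):
    """Greedy coloring; each color class is an independent set mapped to one codeword."""
    adj = {v: set() for v in vals}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for v in vals:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(vals)) if c not in used)
    return color

# Parity demand f(X1, X2) = (X1 + X2) mod 2, with X1 in {0,1,2,3}, X2 in {0,1}.
X1_vals, X2_vals = [0, 1, 2, 3], [0, 1]
E = characteristic_graph(X1_vals, X2_vals, lambda x1, x2: (x1 + x2) % 2)
coloring = greedy_coloring(X1_vals, E)
print(coloring)  # values of X1 with equal parity share a color: 1 bit instead of 2
```

Sending the color index (here, the parity) rather than X_1 itself is exactly the kind of saving that the graph entropy H_{G_{X_i}}(X_i) quantifies in the asymptotic regime.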
For the multi-server multi-function distributed setup, using the characteristic graph-based fundamental limit in (5), an achievable sum-rate for asymptotic lossless computation is given by

Σ_{i∈Ω} R_i = Σ_{i∈Ω} H_{G_{X_i}}(X_i). (6)

We next provide our main results in Section III.

III. MAIN RESULTS
In this section, we analyze the multi-server multi-function distributed computing framework, exploiting the characteristic graph-based approach in [76]. In contrast to previous research attempts in this direction, our solution method is general, and it captures (i) general input statistics, dataset distributions, or skew in the data, instead of assuming uniform distributions, (ii) correlations across datasets, (iii) any dataset placement model across servers, beyond the cyclic [40] or Maddah-Ali and Niesen [84] placements, and (iv) general function classes requested by the user, instead of a particular function type (see, e.g., [40], [68], [85]).
Subsequently, we delve into specific function computation scenarios. First, we present our main result (Theorem 1), which is the most general form capturing (i)-(iv). We then demonstrate (in Proposition 1) that the celebrated result of Wan et al. [40, Theorem 2] can be obtained as a special case of Theorem 1, given that: (i) the datasets are i.i.d. and uniform over q-ary fields, (ii) the placement of datasets across servers is cyclic, and (iii) the demanded functions are linearly separable, given as in (4). Under a correlated and identically distributed Bernoulli dataset model with a skewness parameter ϵ ∈ (0, 1), we next present, in Proposition 2, the achievable sum-rate for computing Boolean functions. Finally, in Proposition 3, we analyze our characteristic graph-based approach for evaluating multi-linear functions, a pertinent class of non-linear functions, under the assumption of cyclic placement and i.i.d. Bernoulli distributed datasets with parameter ϵ, and derive an upper bound on the sum-rate needed. To gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples.
We next present our main theorem (Theorem 1) on the achievable communication cost for the multi-server, multi-function topology, which holds for all input statistics, under any correlation model across datasets, and for distributed computing of all function classes requested by the user, regardless of the data assignment across the servers' caches. The key to capturing the structure of general functions in Theorem 1 is the utilization of a characteristic graph-based compression technique, as proposed by Körner in [76].

Theorem 1. (Achievable sum-rate using the characteristic graph approach for general functions and distributions.) In the multi-server, multi-function distributed computation model, denoted by T(N, K, K_c, M, N_r), under general placement of datasets, for a set of K_c general functions {f_j(W_K)}_{j∈[K_c]} requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across datasets, the characteristic graph-based compression yields the following upper bound on the achievable communication rate:

Σ_{i∈Ω} R_i ≤ Σ_{i∈Ω} H_{G^∪_{X_i}}(X_i), (7)

where G^∪_{X_i} = ∪_{j∈[K_c]} G_{X_i,j} is the union characteristic graph that server i ∈ Ω builds for computing {f_j(W_K)}_{j∈[K_c]}, and each subfunction W_k, k ∈ [K], is defined over a q-ary field whose characteristic is at least 2.

Theorem 1 provides a general upper bound on the sum-rate for computing functions under general dataset statistics, correlations, and placement models, and allows any function type over a field of characteristic q ≥ 2.
We note that in (7), the codebook C_i determines the structure of the union characteristic graph G^∪_{X_i}, which, in turn, determines the distribution of the transmission Z_i. Therefore, the tightness of the rate upper bound relies essentially on the codebook selection. We also note that it is possible to analyze the computational complexity of building a characteristic graph and computing the bound in (7) by evaluating the complexity of the transmissions Z_i determined by {f_j(W_K)}_{j∈[K_c]} for a given i ∈ Ω. However, the current manuscript focuses primarily on the cost of communication, and we leave the computational complexity analysis to future work. Because (7) is not analytically tractable, in the following we focus on special instances of Theorem 1, to gain insights into the effects of input statistics, dataset correlations, and special function classes on the total communication cost.
We next demonstrate that the achievable communication cost for the special scenario of the distributed linearly separable computation framework given in [40, Theorem 2] is embedded in the characterization provided in Theorem 1. We now showcase the achievable sum-rate result for linearly separable functions.

Proposition 1. (Achievable sum-rate using the characteristic graph approach for linearly separable functions and i.i.d. subfunctions over F_q.) In the multi-server, multi-function distributed computation model, denoted by T(N, K, K_c, M, N_r), under the cyclic placement of datasets, where K/N = ∆ ∈ Z^+, for a set of K_c linearly separable functions, given as in (4), requested by the user, and given i.i.d. uniformly distributed subfunctions over a field of characteristic q ≥ 2, the characteristic graph-based compression yields an achievable communication rate matching the cost of [40, Theorem 2].

Proof. See Appendix D.
We note that Theorem 1 results in Proposition 1 when three conditions hold: (i) the dataset placement across servers is cyclic, following the rule in (1), (ii) the subfunctions W_K are i.i.d. and uniform over F_q (see (45) in Appendix D), and (iii) the codebook C_i is restricted to linear combinations of the subfunctions W_K, which yields that the independent sets of G^∪_{X_i} satisfy a set of linear constraints in the variables {W_k}_{k∈Z_i}. Note that the linear encoding and decoding approach for computing linearly separable functions, proposed by Wan et al. in [40, Theorem 2], is valid over a field of characteristic q > 3. In Proposition 1, however, the characteristic of F_q need only be at least 2, i.e., q ≥ 2, generalizing [40, Theorem 2] to larger classes of input alphabets.
Next, we aim to demonstrate the merits of the characteristic graph-based compression in capturing dataset correlations within the multi-server, multi-function distributed computation framework. More specifically, we restrict the general input statistics in Theorem 1 such that the datasets are correlated and identically distributed, where each subfunction follows a Bernoulli distribution with the same parameter ϵ, i.e., W_k ∼ Bern(ϵ) with ϵ ∈ (0, 1), and the user demands K_c arbitrary Boolean functions. Similarly to Theorem 1, the following proposition (Proposition 2) holds for general (Boolean) function types regardless of the data assignment.

Proposition 2. (Achievable sum-rate using the characteristic graph approach for general functions and identically distributed subfunctions over F_2.) In the multi-server multi-function distributed computing setting, denoted by T(N, K, K_c, M, N_r), under the general placement of datasets, for a set of K_c Boolean functions {f_j(W_K)}_{j∈[K_c]} requested by the user, and given identically distributed and correlated subfunctions with W_k ∼ Bern(ϵ), where ϵ ∈ (0, 1), the characteristic graph-based compression yields a bound on the achievable communication rate determined by the maximal independent sets (MISs) s_0(G^∪_{X_i}) and s_1(G^∪_{X_i}) of each server i ∈ Ω, and by the probability that W_{Z_i} yields the function value Z_i = 1.

Proof. See Appendix E.
While admittedly the above approach (Proposition 2) may not directly offer sufficient insight, it does employ the new machinery to offer a generality that allows us to plug in any set of parameters to determine the achievable performance.
Contrasting Propositions 1 and 2, which give the total communication costs for computing linearly separable and Boolean functions, respectively, over F_2: by exploiting the skew and correlations of the datasets indexed by Z_i, as well as the functions' structures via the MISs s_0(G^∪_{X_i}) and s_1(G^∪_{X_i}) of each server i ∈ Ω, Proposition 2 demonstrates that harnessing the correlation across the datasets can indeed reduce the total communication cost versus the setting in Proposition 1, which was devised under the assumption of i.i.d. and uniformly distributed subfunctions.
Prior works have focused on devising distributed computation frameworks and exploring their communication costs for specific function classes. For instance, in [63], Körner and Marton restricted the computation to the binary sum function, and in [73], Han and Kobayashi classified functions into two categories depending on whether they can be computed at a sum-rate lower than that of [61]. Furthermore, the computation problem has been studied for specific topologies, e.g., the side information settings in [74], [75]. Despite the existing efforts, see, e.g., [63], [73]-[75], to the best of our knowledge, for the given multi-server, multi-function distributed computing scenario, there is still no general framework for determining the fundamental limits of the total communication cost for computing general non-linear functions. Indeed, for this setting, the most pertinent existing work that applies to general non-linear functions and provides an upper bound on the achievable sum-rate is that of Slepian-Wolf [61]. On the other hand, the achievable scheme presented in Theorem 1 can provide savings in the communication cost over [61] for functions including linearly separable functions and beyond. To that end, we exploit Theorem 1 to determine an upper bound on the achievable sum-rate for distributed computing of a multi-linear function of the form

f(W_K) = Π_{k∈[K]} W_k. (11)

Note that (11) is used in various scenarios, including distributed machine learning, e.g., to reduce variance in noisy datasets via ensemble learning [86] or weighted averaging [87], sensor network applications that aggregate readings for improved data analysis [88], as well as distributed optimization and financial modeling, where such functions play pivotal roles in establishing global objectives and managing risk and return [89], [90].
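To illustrate why multi-linear functions over F_2 are amenable to communication savings, the sketch below (our own toy construction, not the scheme of Proposition 3) evaluates f(W_K) = Π_k W_k via disjoint per-server partial products. Under i.i.d. Bern(ϵ) subfunctions, each 1-bit partial product over a block of M subfunctions equals 1 with probability ϵ^M, and such a skewed bit is compressible to roughly h(ϵ^M) bits, which is where the gains over a Slepian-Wolf-style description of the raw subfunctions come from:

```python
import random
from functools import reduce

def multilinear(W):
    """f(W_K) = prod_k W_k over F_2: equals 1 iff every W_k equals 1."""
    return reduce(lambda a, b: a & b, W)

def distributed_product(W, M):
    """Each server holding a disjoint block of M subfunctions sends its 1-bit
    partial product; the user ANDs the received partial products together."""
    partials = [multilinear(W[j:j + M]) for j in range(0, len(W), M)]
    return reduce(lambda a, b: a & b, partials), partials

# K = 8 i.i.d. Bern(eps) subfunctions, M = 4 subfunctions per server block.
eps, K, M = 0.7, 8, 4
W = [int(random.random() < eps) for _ in range(K)]
value, partials = distributed_product(W, M)
assert value == multilinear(W)  # the distributed evaluation is exact
print(W, partials, value)
```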
Drawing on the utility of characteristic graphs in capturing the structures of data and functions, as well as input statistics and correlations, and on the general result in Theorem 1, our next result, Proposition 3, demonstrates a new upper bound on the achievable sum rate for computing multi-linear functions within the multi-server, multi-function distributed computing framework by exploiting conditional graph entropies.
Proposition 3. (Achievable sum-rate using the characteristic graph approach for multi-linear functions and i.i.d. subfunctions over F2.) In the multi-server multi-function distributed computing setting T(N, K, Kc, M, Nr), under the cyclic placement of datasets, where K/N = ∆ ∈ Z+, and for computing the multi-linear function (Kc = 1) given as in (11), requested by the user, with i.i.d. subfunctions W_k ∼ Bern(ϵ), k ∈ [K], for some ϵ ∈ (0, 1), the characteristic graph-based compression yields the bound (12) on the achievable communication rate, where N* = ⌊N/(N − Nr + 1)⌋ denotes the minimum number of servers needed to compute f(W_K), given as in (11), with each of these servers computing a disjoint product of M subfunctions; ∆_N = N − N*(N − Nr + 1) represents whether an additional server is needed to aid the computation; and, if ∆_N ≥ 1, then ξ_N denotes the number of subfunctions to be computed by the additional server, where, similarly as above, P(Π_{k∈S: |S|=ξ_N} W_k = 1) = ϵ^{ξ_N}. We will next detail two numerical examples (Subsections IV-A and IV-B) to showcase the achievable gains in the total communication cost for Propositions 2 and 3, respectively.
IV. NUMERICAL EVALUATIONS TO DEMONSTRATE THE ACHIEVABLE GAINS
Given T(N, K, Kc, M, Nr), to gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples. To demonstrate Proposition 2, in Subsection IV-A we focus on computing linearly separable functions, and in Subsection IV-B (cf. Proposition 3) we focus on multi-linear functions.
To that end, to characterize the performance of our characteristic graph-based approach for linearly separable functions, we denote by η_lin the gain of the sum-rate of the characteristic graph-based approach in (9) over the sum-rate of the distributed scheme of Wan et al. [40], given in (8), and by η_SW the gain of the sum-rate in (9) over the sum-rate of the fully distributed approach of Slepian-Wolf [61]. To capture general statistics, i.e., dataset skewness and correlations, and to make a fair comparison, we adapt the transmission model of Wan et al. [40] by modifying the i.i.d. dataset assumption. We next study an example scenario (Subsection IV-A) for computing a class of linearly separable functions (4) over F2, where each demanded function takes the form f_j(W_K) = Σ_{k∈[K]} γ_{jk} W_k. Furthermore, we assume for Kc > 1 that Γ = {γ_jk} ∈ F2^{Kc×K} is full rank. For the proposed setting, we next demonstrate the achievable gains η_lin of our proposed technique versus ϵ for computing (4), as a function of the skew ϵ and correlation ρ of the datasets, Kc ∈ [Nr] with Kc < K, and other system parameters, and showcase the results via Figures 1, 3, 4, and 5.

A. Example case: Distributed computing of linearly separable functions over F 2
We consider the computation of the linearly separable functions given in (4) for general topologies, with general N, K, M, Nr, Kc, over F2, with an identical skew parameter ϵ ∈ [0, 1] for each subfunction, where W_k ∼ Bern(ϵ), k ∈ [K], using the cyclic placement in (1), and incorporating the correlation between the subfunctions, with correlation coefficient denoted by ρ. We consider three scenarios, as described next: a) Scenario I. The number of demanded functions is Kc = 1, where the subfunctions could be uncorrelated or correlated: This scenario is similar to the setting in [40]; however, unlike [40], which is valid over a field of characteristic q > 3, we consider F2, and in the case of correlations, i.e., when ρ > 0, we capture the correlations across the transmissions (evaluated from subfunctions of datasets) from distributed servers, as detailed earlier in Section III. We first assume that the subfunctions are not correlated, i.e., ρ = 0, and evaluate η_lin for f(W_K) = Σ_{k∈[K]} W_k mod 2. The parameter of f(W_K), i.e., the probability that f(W_K) takes the value 1, can be computed using the recursive relation ϵ_l = ϵ_{l−1}(1 − ϵ) + (1 − ϵ_{l−1})ϵ, or equivalently as the sum of the binomial PMF B(K, ϵ) over odd outcomes, where ϵ_l is the probability that the modulo-2 sum of any 1 < l ≤ K subfunctions takes the value one, with W_k ∼ Bern(ϵ) i.i.d. across k ∈ S, and with the convention ϵ_1 = ϵ.
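The recursion for ϵ_l can be sketched as follows; the closed form (1 − (1 − 2ϵ)^l)/2 is a standard parity identity for i.i.d. Bernoulli variables that we add for cross-checking, and the function names are ours:

```python
import math

def eps_l(eps: float, l: int) -> float:
    """Probability that the modulo-2 sum of l i.i.d. Bern(eps) variables equals 1,
    via the recursion eps_l = eps_{l-1}(1 - eps) + (1 - eps_{l-1})*eps, eps_1 = eps."""
    p = eps
    for _ in range(l - 1):
        p = p * (1 - eps) + (1 - p) * eps
    return p

def eps_l_closed(eps: float, l: int) -> float:
    """Equivalent closed form (1 - (1 - 2*eps)**l) / 2."""
    return (1 - (1 - 2 * eps) ** l) / 2

def eps_l_binomial(eps: float, l: int) -> float:
    """Sum of the binomial PMF B(l, eps) over odd outcomes."""
    return sum(math.comb(l, j) * eps**j * (1 - eps) ** (l - j)
               for j in range(1, l + 1, 2))
```

All three evaluations agree, which makes the recursion convenient for the numerical gains reported below.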
Given Nr, we denote by N* = ⌊N/(N − Nr + 1)⌋ the minimum number of servers, corresponding to a subset N* ⊆ Ω, needed to compute f(W_K), where each server, with a cache size of M, computes a sum of M subfunctions, and across these N* servers the sets of subfunctions are disjoint. Hence, ∆_N = N − N*(N − Nr + 1) represents whether servers in addition to the N* servers are needed to aid the computation, and if ∆_N ≥ 1, then ξ_N = ∆·∆_N denotes the number of subfunctions to be computed by the set of additional servers, namely I* ⊆ Ω, and, similarly as above, P(Σ_{k∈S: |S|=ξ_N} W_k = 1) = ϵ_{ξ_N}, which is obtained by evaluating ϵ_l at l = ξ_N. Adapting (8) for F2, we obtain the total communication cost R_ach(lin), given in (14), for computing the linearly separable function f. Using Proposition 2 and (13), we derive in (15) the sum rate for distributed lossless computing of f, where the indicator function 1_{∆_N>0} captures the rate contribution from the additional server, if any. Using (15), the gain η_lin over the linearly separable solution of [40] is presented in (16), where h(ϵ_{ξ_N}) represents the rate needed from the set of additional servers I* ⊆ Ω, which aid the computation by communicating the sum Σ_{k∈C} W_k of the remaining subfunctions in the set C ⊆ Z_{I*}, chosen such that k ∉ ∪_{i∈N*} Z_i and |C| = ξ_N, i.e., the subfunctions that cannot be captured by the set N*.
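The bookkeeping quantities N*, ∆_N, and ξ_N can be sketched as below; taking the floor in N* is our reading, chosen to be consistent with the definition ∆_N = N − N*(N − Nr + 1) ≥ 0 used in the appendix, and the function name is ours:

```python
def topology_params(N: int, Nr: int, Delta: int):
    """Cyclic-placement bookkeeping for T(N, K, Kc, M, Nr) with K = N * Delta
    and cache size M = Delta * (N - Nr + 1). Returns (N_star, Delta_N, xi_N)."""
    M = Delta * (N - Nr + 1)
    K = N * Delta
    # Floor is assumed so that Delta_N = N - N_star*(N - Nr + 1) >= 0.
    N_star = N // (N - Nr + 1)
    Delta_N = N - N_star * (N - Nr + 1)
    xi_N = Delta * Delta_N
    # Disjoint-coverage identity: N_star full servers plus the remainder cover all K.
    assert N_star * M + xi_N == K
    return N_star, Delta_N, xi_N
```

For example, N = 5, Nr = 3, ∆ = 2 gives N* = 1, ∆_N = 2, ξ_N = 4, so one full server plus an additional server covering 4 subfunctions suffice.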
Given Kc = 1 for the given modulo-2 sum function, we next incorporate the correlation model in [91] for each W_k, identically distributed with W_k ∼ Bern(ϵ) and correlation ρ across any two subfunctions. The formulation in [91] yields the PMF of f(W_K) given in (17), where 1_{y∈A_1} and 1_{y∈A_2} are indicator functions. We depict the behavior of our gain η_lin, using the same topology T(N, K, Kc, M, Nr) as in [40] with different system parameters (N, K, M, Nr), under ρ = 0 in Figure 1-(Left). As we increase both N and K, along with the number of active servers Nr, the gain η_lin of the characteristic graph approach increases. This stems from the ability of the characteristic graph approach to compute the function f(W_K) of W_K using N* servers. From Figure 1-(Right), it is evident that by capturing correlations between the subfunctions, and hence across the servers' caches, η_lin grows more rapidly until it reaches the maximum of (16), corresponding to η_lin = Nr/N* = 10, attributed to full correlation (see Figure 1-(Right)). We also see that for ρ = 0, the gain rises with increasing ϵ and grows linearly with Nr/N*. As ρ increases, reaching its maximum at ρ = 1, the gain is maximized, yielding the minimum communication cost that can be achieved with our technique. Here, the gain η_lin is dictated by the topology and is given as η_lin = Nr/N*. This linear relation shows that this specific topology can provide a very substantial reduction in the total communication cost, as ρ goes to 1, over the state of the art [40], as shown in Figure 1-(Right) via the purple (solid) curve. Furthermore, one can draw a comparison between the characteristic graph approach and the approach in [61]; we denote this gain by η_SW. It is noteworthy that the sum rate of all servers using the coding approach of Slepian-Wolf [61] is R_ach(SW) = H(W_K). With ρ = 0, this expression simplifies to R_ach(SW) = K·H(W_k), resulting again in a substantial reduction in the communication cost, as we see from R_ach(lin) in (14), for the same topology as the purple (solid) curve in Figure 1-(Right).
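A pairwise correlation model of the kind used above can be sketched with a standard construction for identically distributed correlated Bernoulli pairs; the exact model of [91] may differ, and the function names are ours:

```python
def corr_bern_pmf(eps: float, rho: float):
    """Joint PMF of an identically distributed Bern(eps) pair with Pearson
    correlation rho (a standard construction). Returns {(w1, w2): prob}."""
    q = eps * (1 - eps)
    return {
        (1, 1): eps**2 + rho * q,
        (1, 0): q * (1 - rho),
        (0, 1): q * (1 - rho),
        (0, 0): (1 - eps) ** 2 + rho * q,
    }

def pearson(pmf):
    """Pearson correlation coefficient of a binary pair described by its PMF."""
    m1 = sum(p for (a, b), p in pmf.items() if a == 1)
    m2 = sum(p for (a, b), p in pmf.items() if b == 1)
    cov = pmf[(1, 1)] - m1 * m2
    return cov / ((m1 * (1 - m1)) ** 0.5 * (m2 * (1 - m2)) ** 0.5)
```

Both marginals equal ϵ and the correlation evaluates exactly to ρ, so sweeping ρ from 0 to 1 reproduces the regimes compared in Figure 1.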
b) Scenario II. The number of demanded functions is Kc = 2, where the subfunctions could be uncorrelated or correlated: To gain insight into the behavior of η_lin, we consider an example distributed computation model with K = N = 3 and Nr = 2, where the subfunctions W_1, W_2, W_3 are assigned to X_1, X_2, and X_3 in a cyclic manner, with H(W_k) = h(ϵ), k ∈ [3]. Given Nr = 2, using the characteristic graph approach for individual servers, an achievable compression scheme, for a given ordering i and j of server transmissions, relies on first compressing the characteristic graph G_{X_i} constructed by server i ∈ Ω, which has no side information, and then on the conditional rate needed for compressing the colors of G_{X_j} for any other server j ∈ Ω\{i}, incorporating the side information Z_i = g_i(X_i) obtained from server i ∈ Ω. Thus, contrasting the total communication costs associated with the possible orderings, the minimum total communication cost R_ach(G) can be determined (see footnote 6). The achievable sum rate here takes the form given in (18). Focusing on the characteristic graph approach, we illustrate in Figure 2 how each server builds its union characteristic graph for simultaneously computing f_1 and f_2 according to (36) (as detailed in Appendix B1). In (18), the first term corresponds to G_{X_1} = (V_{X_1}, E_{X_1}), where V_{X_1} = {0,1}^2 is built using the support of W_1 and W_2, and the edges E_{X_1} connect the realizations that must be distinguished to compute f_1 and f_2, which, as we see here, requires 2 colors. Similarly, server 2 constructs G_{X_2} = (V_{X_2}, E_{X_2}) given Z_1, where V_{X_2} = {0,1}^2 is built using the support of W_2 and W_3, and where Z_1 determines f_1 = W_2; hence, server 2 only needs to distinguish the outcomes relevant to computing f_2.
Hence, we require 2 distinct colors for G_{X_2}. As a result, the first term yields a sum rate of h(ϵ) + h(ϵ) = 2h(ϵ). Similarly, the second term of (18) captures the impact of G_{X_2} = (V_{X_2}, E_{X_2}), where server 2 builds G_{X_2} using the support of W_2 and W_3, and G_{X_2} is a complete graph, distinguishing all possible binary pairs to compute f_1 and f_2, which requires 4 different colors. Given Z_2, both f_1 and f_2 are deterministic. Hence, given Z_2, G_{X_1} has no edges, which means that H_{G_{X_1}}(X_1 | Z_2) = 0. As a result, the ordering of server transmissions given by the second term of (18) yields the same sum rate of 2h(ϵ) + 0 = 2h(ϵ). For this setting, the minimum required rate is R_ach(G) = 2h(ϵ), and the configuration captured by the second term provides a lower recovery threshold of Nr = 1, versus Nr = 2 for the configuration of server transmissions given by the first term of (18). The different Nr achieved by these two configurations is also captured by Figure 2.
Alternatively, in the linearly separable approach [40], Nr servers transmit the requested function of the datasets stored in their caches. For distributed computing of f_1 and f_2, servers 1 and 2 transmit at rate H(W_2) = h(ϵ) for computing f_1 and at rate H(W_2 + W_3) for f_2. As a result, the achievable communication cost is R_ach(lin) = h(ϵ) + H(W_2 + W_3). Here, for a fair comparison, we update the model studied in [40] to capture the correlation within each server without accounting for the correlation across the servers. Under this setting, for ρ = 0, the gain η_lin of the characteristic graph approach over the linearly separable solution of [40] for computing f_1 and f_2, as a function of ϵ ∈ [0, 1], takes the form given in (19), where η_lin(ϵ) > 1 for ϵ ≠ 1/2 follows from the concavity of h(·), which yields the inequality h(2ϵ(1−ϵ)) ≥ h(ϵ). Furthermore, η_lin approaches 1.5 as ϵ → {0, 1} (see Figure 3). We next examine the setting where the correlation coefficient ρ is nonzero, using the joint PMF P_{W_2,W_3}, depicted in Table II, of the subfunctions (W_2 and W_3) required to compute f_1 and f_2. This PMF corresponds to a binary non-symmetric channel model in which the correlation coefficient between W_2 and W_3 is ρ = (1−p)/(1−ϵ), with p′ = ϵp/(1−ϵ). The resulting gain over the linearly separable encoding and decoding approach of [40] is given in (20). We now consider the correlation model in Table II, where the coefficient ρ rises with ϵ for a fixed p. In Figure 4-(Left), we illustrate the behavior of η_lin, given by (20), for computing f_1 and f_2 for Nr = 2 as a function of p and ϵ; for this setting, the correlation coefficient ρ is a decreasing function of p and an increasing function of ϵ. We observe from (20) that the gain η_lin satisfies η_lin ≥ 1 for all ϵ ∈ [0, 1], and that it monotonically increases in p (and hence monotonically decreases in ρ, due to the relation ρ = (1−p)/(1−ϵ)) as a function of the deviation of ϵ from 1/2. For ϵ ∈ (0.5, 1], η_lin increases in ϵ; for example, for p = 0.1, η_lin(1) = 1.28, as depicted by the green (solid) curve. Similarly, for ϵ ∈ [0, 0.5), decreasing ϵ causes η_lin to rise, e.g., for p = 0.9, η_lin(0) = 1.36, as shown by the red (dash-dotted) curve. As p approaches one, η_lin goes to 1.5 as ϵ tends to zero, which can be derived from (20). We note that these gains are generally smaller than in the previous set of comparisons, shown in Figure 3.
Footnote 6: We can generalize (18) to Nr > 2: for a given ordering of server transmissions, each consecutive transmitting server sees all previous transmissions as side information, and we select the ordering with the minimum total communication cost, i.e., R_ach(G). Footnote 7: Here, (x^1_2, x^1_3) and (x^2_2, x^2_3) represent two different realizations of the pair of subfunctions W_2 and W_3.
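Returning to the ρ = 0 case, the gain can be checked numerically by assembling the two rates stated in the text, R_ach(G) = 2h(ϵ) and R_ach(lin) = h(ϵ) + H(W_2 + W_3), with H(W_2 + W_3) = h(2ϵ(1−ϵ)) for independent Bern(ϵ) subfunctions; this assembled expression is our reading of (19):

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def eta_lin(eps: float) -> float:
    """Gain of the graph-based rate 2*h(eps) over the linearly separable rate
    h(eps) + h(2*eps*(1-eps)) for independent Bern(eps) subfunctions (rho = 0)."""
    return (h(eps) + h(2 * eps * (1 - eps))) / (2 * h(eps))
```

Numerically, η_lin(1/2) = 1, η_lin(ϵ) > 1 for ϵ ≠ 1/2, and the gain approaches (but stays below) 1.5 as ϵ → 0, matching the concavity argument and Figure 3.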
More generally, given a user request consisting of Kc = 2 linearly separable functions (i.e., satisfying (4)), and considering (20) beyond Nr = 2, we see that η_lin is at most Nr as ρ approaches one. We next use the joint PMF model used in obtaining (17) to see that the gain takes the form given in (21). For this model, we illustrate η_lin versus ϵ in Figure 4-(Right) for different ρ values. Evaluating (21), the peak achievable gain is attained as ρ approaches one, yielding a gain of η_lin = Nr = 2, as shown by the purple (solid) curve. On the other hand, for ρ = 0, it can be shown that the gain is lower bounded as η_lin ≥ 1.25.
c) Scenario III. The number of demanded functions is Kc ∈ [Nr], and the number of datasets is equal to the number of servers, i.e., K = N, where the subfunctions are uncorrelated: We now provide an achievable rate comparison between the approach in [40] and our graph-based approach, as summarized by our Proposition 1, which generalizes the result in [40, Theorem 2] to finite fields of characteristic q ≥ 2, for the case of ρ = 0.
Here, to capture dataset skewness and make a fair comparison, we adapt the transmission model of Wan et al. [40] by modifying the i.i.d. dataset assumption and taking into account the skewness incurred within each server in determining the local computations Σ_{k∈S: |S|=M} W_k at each server.
For the linearly separable model in (4), adapted to account for our setting, exploiting the summation Σ_{k∈Z_i} W_k and the ϵ_M given in (15), the communication cost for a general number Kc of demanded functions with ρ = 0 is expressed as in (22). In (22), as ϵ approaches 0 or 1, h(ϵ_M) → 0. Subsequently, the achievable communication cost for the characteristic graph model can be determined as in (23). To understand the behavior of η_lin, knowing that Nr/(Kc·N*) is a fixed parameter, we need to examine the dynamic component h(ϵ_M)/h(ϵ). Exploiting the Schur concavity of the binary entropy function, which tells us that h(E[X]) ≥ E[h(X)], we can see that, as ϵ approaches 0 or 1, the upper bound in (24) holds, where the inequality between the left- and right-hand sides becomes loose as a function of M. As a result, as ϵ approaches 0 or 1, η_lin ≈ M·Nr/(Kc·N*), which follows from exploiting (22), (23), and the achievability of the upper bound in (24). We illustrate the upper bound on η_lin in Figure 5 and demonstrate the behavior of η_lin for Kc demanded functions across various topologies with circular dataset placement, namely for various K = N, i.e., when the amount of circular shift between two consecutive servers is ∆ = K/N = 1 and the cache size is M = N − Nr + 1, for ρ = 0. Accounting for the symmetry of the entropy function, we only plot η_lin for ϵ ≤ 1/2. The multiplicative coefficient Nr/(Kc·N*) of η_lin determines the growth depicted by the curves. Thus, for a given topology T(N, K, Kc, M, Nr) with Kc demanded functions and ρ = 0, using (24), we see that η_lin grows exponentially with the term 1 − ϵ for ϵ ∈ [0, 1/2], and a very substantial reduction in the total communication cost is possible as ϵ approaches {0, 1}, as shown in Figure 5 by the blue (solid) curve. The gain η_lin over [40, Theorem 2] for a given topology changes proportionally to Nr/(Kc·N*). The gain η_SW over [61] for ρ = 0 scales linearly with K/(Kc·N*). For instance, the gain for the blue (solid) curve in Figure 5 is η_SW = 10.
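The limiting behavior h(ϵ_M)/h(ϵ) → M discussed above can be checked numerically, with ϵ_M = (1 − (1 − 2ϵ)^M)/2 as defined for the modulo-2 sum; the function names are ours:

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def ratio(eps: float, M: int) -> float:
    """h(eps_M)/h(eps), where eps_M is the probability that the XOR of M
    i.i.d. Bern(eps) subfunctions equals one."""
    eps_M = (1 - (1 - 2 * eps) ** M) / 2
    return h(eps_M) / h(eps)
```

For M = 3, the ratio rises toward 3 as ϵ decreases toward 0, consistent with the approximation η_lin ≈ M·Nr/(Kc·N*) in the extreme-skew regime; the convergence is slow (logarithmic in 1/ϵ), which matches the remark that the bound in (24) is loose for moderate ϵ.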
In general, other functions over F2, such as the bitwise AND and the multi-linear function (see, e.g., Proposition 3), are more skewed and have lower entropies than linearly separable functions, and hence are easier to compute. Therefore, the cost given in (23) can serve as an upper bound on the communication costs of those more skewed functions over F2. We have here provided insights into the achievable gains in communication cost for several scenarios. We leave the study of η_lin for more general topologies T(N, K, Kc, M, Nr), for correlation models beyond (17), devised for linearly separable functions, and beyond the joint PMF model in Table II, as future work.
Proposition 3 illustrates the power of the characteristic graph approach in decreasing the communication cost of distributed computing of multi-linear functions, given as in (11), compared to recovering the local computations Π_{k∈S: |S|=M} W_k using [61]. We denote by η_SW the gain of the sum-rate of the graph entropy-based approach given in (12), using the conditional entropy-based sum-rate expression in (54), over the sum-rate of the fully distributed scheme of Slepian-Wolf [61] for computing (11). For the proposed setting, we next showcase the achievable gains η_SW of Proposition 3 via an example and present the results in Figure 6.

B. Distributed computation of K-multi-linear functions over F 2
We study the behavior of η_SW versus the skewness parameter ϵ for computing the multi-linear function given in (11) for i.i.d. W_k ∼ Bern(ϵ), ϵ ∈ [0, 1/2], across k ∈ [K], and for a given T(N, K, Kc, M, Nr) with parameters N, K, M = ∆(N − Nr + 1), such that Nr = N − 1, Kc = 1, ρ = 0, and a number of replicas per dataset of MN/K = 2. We use Proposition 3 to determine the sum-rate upper bound and illustrate the gains 10·log10(η_SW) in decibels versus ϵ in Figure 6.
From the numerical results in Figure 6 (Left), we observe that the sum-rate gain η_SW of the graph entropy-based approach versus the fully distributed approach of [61] can exceed a 10-fold reduction in compression rate for uniform data and reach up to a 10^6-fold reduction for skewed data. The results on η_SW showcase that our proposed scheme can guarantee an exponential rate reduction over [61] as ϵ decreases. Furthermore, the sum-rate gains scale linearly with the cache size M, which scales with K given Nr = N − 1. Note that η_SW diminishes with increasing N when M and ∆ are kept fixed. In Figure 6 (Right), for M ≪ K and a fixed total cache size MN, and hence fixed K, the gain η_SW for large N and small M is higher than for small N and large M, demonstrating the power of the graph-based approach as the topology becomes more and more distributed.
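The orders of magnitude reported above can be reproduced in rough form from the geometric-series structure sketched in the proof of Proposition 3 (Appendix); the formula below is our assembled reading of that bound, not the paper's exact expression (12), and the function names are ours:

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def eta_sw(eps: float, N: int, Delta: int) -> float:
    """Approximate gain over Slepian-Wolf for the K-multi-linear function with
    Nr = N - 1 (so M = 2*Delta, K = N*Delta, and MN/K = 2 replicas per dataset).
    R_graph follows the geometric-series form h(eps^M)*(1-(eps^M)**N_star)/(1-eps^M)
    plus an extra-server term when Delta_N > 0; R_SW = K*h(eps)."""
    Nr = N - 1
    M = Delta * (N - Nr + 1)               # = 2*Delta
    K = N * Delta
    N_star = N // (N - Nr + 1)
    Delta_N = N - N_star * (N - Nr + 1)
    xi_N = Delta * Delta_N
    pM = eps ** M                          # P(product of M Bern(eps) terms = 1)
    r_graph = h(pM) * (1 - pM ** N_star) / (1 - pM)
    if Delta_N > 0:
        r_graph += (pM ** N_star) * h(eps ** xi_N)
    return K * h(eps) / r_graph
```

Under this reading, for N = 4 and ∆ = 2, the gain exceeds 10x at the uniform point ϵ = 1/2 and exceeds 10^6x at ϵ = 0.01, in line with the scales quoted from Figure 6.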

V. CONCLUSION
In this paper, we devised a distributed computation framework for general function classes in multi-server, multi-function, single-user topologies. Specifically, we analyzed upper bounds on the communication cost for computing in such topologies, exploiting Körner's characteristic graph entropy and incorporating the structures of the datasets and functions, as well as the dataset correlations. To showcase the achievable gains of our framework, and to discern the roles of dataset statistics, correlations, and function classes, we performed several experiments under cyclic dataset placement over a field of characteristic two. Our numerical evaluations for distributed computing of linearly separable functions, as demonstrated in Subsection IV-A via three scenarios, indicate that by incorporating dataset correlations and skew, it is possible to achieve a very substantial reduction in the total communication cost over the state of the art. Similarly, for distributed computing of multi-linear functions, in Subsection IV-B, we demonstrate a very substantial reduction in the total communication cost versus the state of the art. Our main results (Theorem 1 and Propositions 1, 2, and 3) and observations through the examples help us gain insight into reducing the communication cost of distributed computation by taking into account the structures of the datasets (skew and correlations) and functions (characteristic graphs).
Potential directions include providing a tighter achievability result for Theorem 1 and devising a converse bound on the sum-rate. They involve conducting experiments under the coded scheme of Maddah-Ali and Niesen detailed in [84] in order to capture the finer granularity of placement that can help tighten the achievable rates. They also involve, beyond the special cases detailed in Propositions 1, 2, and 3, exploring the achievable gains for a broader set of distributed computation scenarios, e.g., over-the-air computing, cluster computing, coded computing, distributed gradient descent, or, more generally, distributed optimization and learning, as well as goal-oriented and semantic communication frameworks, all of which can be reinforced by compression that captures the skewness, correlations, and placement of the datasets, the structures of the functions, and the topology.

B. Characteristic graphs, distributed functional compression, and communication cost
In this section, we provide a summary of the key graph-theoretic tools devised by Körner [76] and further studied by Alon and Orlitsky [74] and by Orlitsky and Roche [75] to understand the fundamental limits of distributed computation.
Let us consider the canonical scenario with two servers, storing X_1 and X_2, respectively. The user requests a bivariate function F(X_1, X_2) that could be linearly separable or, in general, non-linear. Associated with the source pair (X_1, X_2) is a characteristic graph G, as defined by Witsenhausen [93]. We denote by G_{X_1} = (V_{G_{X_1}}, E_{G_{X_1}}) the characteristic graph that server one builds (server two similarly builds G_{X_2}) for computing F(X_1, X_2), determined as a function of X_1, X_2, and F, where V_{G_{X_1}} = X_1, and an edge (x^1_1, x^2_1) ∈ E_{G_{X_1}} if and only if there exists an x_2 ∈ X_2 such that P(x^1_1, x_2)·P(x^2_1, x_2) > 0 and F(x^1_1, x_2) ≠ F(x^2_1, x_2). Note that the idea of building G_{X_1} can also be generalized to multivariate functions F(X_Ω), where Ω = [N] for N > 2 [83]. In this paper, we only consider vertex colorings. A valid coloring of a graph G_{X_1} is such that each vertex of G_{X_1} is assigned a color (code) such that adjacent vertices receive distinct colors (codes). Vertices that are not connected can be assigned the same or different colors. The chromatic number χ(G_{X_1}) of a graph G_{X_1} is the minimum number of colors needed for a valid coloring of G_{X_1} [77], [78], [80].
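The edge rule and valid colorings can be sketched on a toy example of our choosing, F(x_1, x_2) = (x_1 + x_2) mod 2 with X_1 uniform on {0, 1, 2, 3} and X_2 on {0, 1}; since F depends only on the parity of x_1, two colors suffice instead of four codewords:

```python
from itertools import product

# Characteristic graph of server one: connect x1, x1' iff some x2 with
# positive probability yields different function values.
X1, X2 = range(4), range(2)
F = lambda x1, x2: (x1 + x2) % 2
edges = {(a, b) for a in X1 for b in X1 if a < b
         and any(F(a, x2) != F(b, x2) for x2 in X2)}

def chromatic_number(vertices, edges):
    """Smallest number of colors in a valid coloring: adjacent vertices must
    receive distinct colors (brute force, fine for tiny graphs)."""
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[idx[a]] != coloring[idx[b]] for a, b in edges):
                return k
    return n
```

Here the edges pair up vertices of opposite parity, and the chromatic number is 2, illustrating how coloring compresses the source down to what the function actually needs.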
Definition 1. (Characteristic graph entropy [74], [76].) Given a random variable X_1 with characteristic graph G_{X_1} = (V_{X_1}, E_{X_1}) for computing the function f(X_1, X_2), the entropy of the characteristic graph is expressed as
H_{G_{X_1}}(X_1) = min_{X_1 ∈ U_1 ∈ S(G_{X_1})} I(X_1; U_1),   (27)
where S(G_{X_1}) is the set of all MISs of G_{X_1}; an MIS is an independent set that is not a subset of any other independent set, where an independent set of a graph is a set of its vertices in which no two vertices are adjacent [94]. The notation X_1 ∈ U_1 ∈ S(G_{X_1}) means that the minimization is over all distributions P_{U_1,X_1}(u_1, x_1) such that P_{U_1,X_1}(u_1, x_1) > 0 implies x_1 ∈ u_1, where U_1 is an MIS of G_{X_1}.
Similarly, the conditional graph entropy of X_1 with characteristic graph G_{X_1} for computing f(X_1, X_2), given X_2 as side information, is defined in [75] using the notation U_1 − X_1 − X_2, which indicates a Markov chain:
H_{G_{X_1}}(X_1 | X_2) = min_{U_1 − X_1 − X_2, X_1 ∈ U_1 ∈ S(G_{X_1})} I(X_1; U_1 | X_2).   (28)
The Markov chain relation in (28) implies that H_{G_{X_1}}(X_1 | X_2) ≤ H_{G_{X_1}}(X_1). In (28), the goal is to determine the MISs U_1 that can represent X_1 for all pairs (x_1, x_2) with P(x_1, x_2) > 0. We next consider an example to clarify the distinction between the characteristic graph entropy H_{G_{X_1}}(X_1) and the conditional graph entropy H_{G_{X_1}}(X_1 | X_2). 1) Let P_{X_1} be a uniform PMF over the set {1, 2, 3}. Assume that G_{X_1} has only one edge, i.e., E_{X_1} = {(1, 3)}. Hence, the set of MISs is given as S(G_{X_1}) = {{1, 2}, {2, 3}}.
To determine the entropy of a characteristic graph, i.e., H_{G_{X_1}}(X_1), from (27), our objective is to minimize I(X_1; U_1), which is a convex function of P(U_1 | X_1). Hence, I(X_1; U_1) is minimized by choosing the conditional distribution P(U_1 | X_1) accordingly; in this example, x_1 = 1 must map to U_1 = {1, 2}, x_1 = 3 must map to U_1 = {2, 3}, and only x_1 = 2 can be split between the two MISs. In our proposed framework, for the distributed computing of the demanded functions, we leverage characteristic graphs that can capture the structure of the subfunctions. To determine the achievable rate of distributed lossless functional compression, we determine the colorings of these graphs and evaluate the entropy of such colorings. In the case of Kc > 1 functions, let G_{X_i,j} = (V_{X_i}, E_{X_i,j}) be the characteristic graph that server i ∈ Ω builds for computing function j ∈ [Kc]. The graphs {G_{X_i,j}}_{j∈[Kc]} are defined on the same vertex set.
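For this example, the minimization reduces to one free parameter, q = P(U_1 = {1,2} | X_1 = 2), giving I(X_1; U_1) = H_b((1+q)/3) − (1/3)H_b(q); the grid search below is our numerical check, and it finds the optimum at q = 1/2 with graph entropy 2/3 bit:

```python
import math

def hb(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info(q: float) -> float:
    """I(X1; U1) when x1=2 maps to the MIS {1,2} with probability q:
    H(U1) = H_b((1+q)/3) and H(U1|X1) = (1/3) * H_b(q)."""
    return hb((1 + q) / 3) - hb(q) / 3

grid = [i / 10000 for i in range(10001)]
q_star = min(grid, key=mutual_info)
H_graph = mutual_info(q_star)   # graph entropy H_{G_X1}(X1) for this example
```

Splitting x_1 = 2 evenly across the two MISs is optimal, and H_{G_{X_1}}(X_1) = 2/3 < log2(3), showing the rate saving over encoding X_1 itself.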
Union graphs for simultaneously computing a set of functions with side information have been considered in [83], using multi-functional characteristic graphs. A multi-functional characteristic graph is an OR function of the individual characteristic graphs for the different functions [83, Definition 45]. To that end, server i ∈ Ω creates a union of graphs on the same set of vertices V_{X_i} with the set of edges E^∪_{X_i} = ∪_{j∈[Kc]} E_{X_i,j}. In other words, we need to distinguish the outcomes x^1_i and x^2_i of server i if there exists at least one function f_j, j ∈ [Kc], whose computation requires distinguishing them. The server then compresses the union graph G^∪_{X_i} by exploiting (33) and (34).
In the special case when the number of demanded functions Kc is large (or tends to infinity), such that the union of all subspaces spanned by the independent sets of each G_{X_i,j}, j ∈ [Kc], equals the subspace spanned by X_i, the MISs of G^∪_{X_i} in (36) for server i ∈ Ω become singletons, rendering G^∪_{X_i} a complete graph. In this case, the problem boils down to the paradigm of distributed source compression (see Appendix A).
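The union (OR) of edge sets, and its degeneration to a complete graph as functions accumulate, can be sketched on a small example of our choosing, with vertices (w_1, w_2) ∈ F2^2 and a few linear demanded functions:

```python
from itertools import combinations

# Union characteristic graph sketch: for each demanded function f_j, connect
# realizations with different f_j values; the union takes the OR of edge sets.
V = [(w1, w2) for w1 in (0, 1) for w2 in (0, 1)]
funcs = [lambda w: w[0],                  # f1 = w1
         lambda w: w[1],                  # f2 = w2
         lambda w: (w[0] + w[1]) % 2]     # f3 = w1 + w2 (mod 2)

def union_edges(functions):
    """Edge set of the union characteristic graph for the given functions."""
    return {(a, b) for a, b in combinations(V, 2)
            if any(f(a) != f(b) for f in functions)}
```

With f_1 alone the graph has 4 edges, while f_1 and f_2 together already force all 6 vertex pairs apart (a complete graph), so every realization needs its own color and the problem reduces to distributed source compression; adding f_3 changes nothing further.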
2) Distributed functional compression: The fundamental limit of functional compression was given by Körner [76]. Given X_i ∈ F_q^{|Z_i|×n} for server i ∈ Ω, the encoding function e_{X_i} specifies the MISs given by the valid colorings c_{G^n_{X_i}}(X_i). Let T_i be the number of symbols in Z_i = g_i(X_i) = e_{X_i}(c_{G^n_{X_i}}(X_i)) for server i ∈ Ω. Hence, the communication cost of server i, as n → ∞, is given by (5).
Defining G_{X_S} = [G_{X_i}]_{i∈S} for a given subset S ⊆ Ω chosen to guarantee distributed computation of F(X_Ω), i.e., |S| ≥ N_r, the sum-rate of the servers for distributed lossless functional compression for computing F(X_Ω) is characterized by H_{G_{X_S}}(X_S), the joint graph entropy of S ⊆ Ω, defined as in [83, Definition 30] via the entropy rate of the valid colorings c_{G^n_{X_i}}(X_i), i ∈ S, of the n-th power graphs G^n_{X_i} that each server i ∈ Ω builds for computing f(X_Ω) [83]. Similarly, exploiting [83, Definition 31], the conditional graph entropy of the servers is defined analogously. Using (36), we jointly capture the structures of the set of demanded functions. Hence, this enables us to provide a refined communication cost model in (5), versus characterizations as a function of Kc; see, e.g., [40], [59], [69].

C. Proof of Theorem 1
Consider the general topology T(N, K, Kc, M, Nr), under a general placement of the datasets, for a set of Kc general functions {f_j(W_K)}_{j∈[Kc]} requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across the datasets.
We note that server i ∈ Ω builds a characteristic graph G_{X_i,j} for distributed lossless computing of f_j(W_K), j ∈ [Kc]. Similarly, server i ∈ Ω builds a union characteristic graph for computing {f_j(W_K)}_{j∈[Kc]}; we denote by G^∪_{X_i} = ∪_{j∈[Kc]} G_{X_i,j} the union characteristic graph, given as in (36). In the description of G^∪_{X_i}, the set V_{X_i} is the support set of X_i, i.e., V_{X_i} = X_i, and E^∪_{X_i} is the union of the edge sets, i.e., E^∪_{X_i} = ∪_{j∈[Kc]} E_{X_i,j}, where E_{X_i,j} denotes the set of edges of G_{X_i,j}, the characteristic graph the server builds for distributed lossless computing of f_j(W_K) for a given function j ∈ [Kc]. To compute the set of demanded functions {f_j(W_K)}_{j∈[Kc]}, we assume that server i ∈ Ω can use a codebook of functions denoted by C_i, with C_i ∋ g_i, such that the user can compute its demanded functions using the set of transmitted information {g_i(X_i)}_{i∈S} provided by any set of |S| = N_r servers. More specifically, server i ∈ Ω chooses a function g_i ∈ C_i to encode X_i. Note that g_i represents, in the context of encoding characteristic graphs, the mapping from X_i to a valid coloring c_{G_{X_i}}(X_i). We denote by c_{G^n_{X_i}}(X_i) the color encoding performed by server i ∈ Ω for the length-n realization X_i of X_i. For convenience, we use the shorthand notation Z_i = g_i(X_i) = e_{X_i}(c_{G^n_{X_i}}(X_i)) to represent the transmitted information from the server. Combining the notion of the union graph in (36) and the encodings of the individual servers given in (40), the rate R_i needed from server i ∈ Ω to meet the user demand is upper bounded by the cost of the best encoding, i.e., the one that minimizes the rate of information transmission from the respective server; this bound is given in (41), where equality is achievable. Because the user can recover the desired functions using any set of N_r servers, the achievable sum rate is upper bounded as in (42).

D. Proof of Proposition 1
For the multi-server, multi-function distributed computing architecture, this proposition restricts the demand to a set of linearly separable functions, given as in (4). Given the recovery threshold N_r, the chain of bounds (43)-(45) holds, where in (43) we used the identity H_{G^∪_{X_i}}(X_i) = min_{X_i ∈ U_i ∈ S(G^∪_{X_i})} I(X_i; U_i). Furthermore, if the codebook C_i is restricted to linear combinations of the subfunctions, Z_i is given by the set of linear equations in (46). In other words, Z_i, i ∈ [N_r], is a vector-valued function. Note that each server contributes to determining the set of linearly separable functions {f_j(W_K), j ∈ [Kc]} of the datasets, given as in (4), in a distributed manner. Hence, each independent set U_i ∈ S(G^∪_{X_i}), with S(G^∪_{X_i}) denoting the set of MISs of G^∪_{X_i}, is captured by the linear functions of {W_k}_{k∈[(i−1)∆+1:(i−1)∆+M]}, i.e., each U_i ∈ S(G^∪_{X_i}) is determined by (46). Hence, the user can recover the requested functions by linearly combining the transmissions of the N_r servers. In (44), we use the definition of mutual information, I(X_i; U_i) = H(X_i) − H(X_i | U_i), where, given i ∈ [N_r] and ∆ = K/N, it holds under cyclic placement that X_i = W^{(i−1)∆+M}_{(i−1)∆+1} = (W_{(i−1)∆+1}, W_{(i−1)∆+2}, ..., W_{(i−1)∆+M}), and γ_{lk} are the coefficients for computing function l ∈ [Kc]. In (45), we used that W_k is uniform over F_q and i.i.d. across k ∈ [K], and rewrote the conditional entropy H(W^{(i−1)∆+M}_{(i−1)∆+1} | Z_i), where (a) follows from the fact that Z_i is a function of W^{(i−1)∆+M}_{(i−1)∆+1}. For a given l ∈ [Kc] and field size q, the linear relation Σ_k γ_{lk} W_k ensures that G_{X_i} has q independent sets, where each such set U_i contains q^{M−1} different values of X_i. Exploiting that W_k is i.i.d.
and uniform over $\mathbb{F}_q$, each element of $Z_i$ is uniform over $\mathbb{F}_q$. Hence, the achievable sum rate is upper bounded as given in (50). Exploiting the cyclic placement model, we can tighten the bound in (50). Note that server $i = 1$ can help recover at most $M$ subfunctions (i.e., $M$ transmissions are needed to recover $M$ subfunctions), and each of the servers $i \in [2 : N_r]$ can help recover at most an additional $\Delta$ subfunctions (i.e., $\Delta$ transmissions are needed to recover $\Delta$ subfunctions). Hence, the set of servers $[N_r]$ suffices to provide $M + (N_r - 1)\Delta = N\Delta = K$ subfunctions and thus to reconstruct any desired function of $W_K$. Due to the cyclic placement, each $W_k$ is stored in exactly $N - N_r + 1$ servers. Now, let us consider the following scenarios: (i) When $1 \le K_c < \Delta$, it is sufficient for each server to transmit $K_c$ linearly independent combinations of its subfunctions. This leads to resolving $K_c N_r$ linear combinations of the $K$ subfunctions from the $N_r$ servers, which are sufficient to derive the demanded $K_c$ linear functions. Because $K_c N_r < \Delta N_r$, there remain $K - K_c N_r > \Delta(N - N_r) = M - \Delta$ unresolved linear combinations of the $K$ subfunctions.
(ii) When $\Delta \le K_c \le \Delta N_r$, it is sufficient for each server to transmit at most $\Delta$ linearly independent combinations of its subfunctions. This leads to resolving $\Delta N_r$ linear combinations of the $K$ subfunctions, leaving $\Delta(N - N_r) = M - \Delta$ linear combinations of the $K$ subfunctions unresolved.
(iii) When $\Delta N_r < K_c \le K$, each server needs to transmit at a rate of $\frac{K_c}{N_r}$, where $\frac{K_c}{N_r} > \Delta$ and $\frac{K_c}{N_r} \le \frac{K}{N_r} = \Delta\,\frac{N_r + N - N_r}{N_r} = \Delta + \Delta\,\frac{N - N_r}{N_r}$, which gives the number of linearly independent combinations needed to meet the demand. This yields a sum rate of $K_c$. The subset of servers may need to provide up to an additional $\Delta\,\frac{N - N_r}{N_r}$ linearly independent combinations each.

To evaluate the first term in (12), we choose a total of $N^*$ servers with disjoint sets of subfunctions. We denote the selected set of servers by $\mathcal{N}^* \subseteq \Omega$; the collective computation rate of these $N^*$ servers, as a function of their conditional graph entropies, is given in (54), where (a) follows from assuming $S = \{i_1, i_2, \ldots, i_{N^*}\}$ with no loss of generality, and (b) from the fact that the rate of server $i_l \in S$ is positive only when $\prod_{i \in [i_{l-1}]} \prod_{k \in Z_i} W_k = 1$, which holds with probability $(\epsilon^M)^{l-1}$. Finally, (c) follows from the sum of the terms of the geometric series, i.e., $\sum_{l=0}^{N^*-1} (\epsilon^M)^l = \frac{1 - (\epsilon^M)^{N^*}}{1 - \epsilon^M}$. In the case of $\Delta_N = N - N^* (N - N_r + 1) > 0$, the product of the $K$ subfunctions cannot be determined by the $N^*$ servers, and we need additional servers in $\Omega$ to aid the computation and determine the outcome of $f(W_K)$ by computing the product of the remaining $\xi_N$ subfunctions. In other words, if $\Delta_N > 0$ and $\prod_{i \in S} \prod_{k \in Z_i} W_k = 1$, the $(N^*+1)$-th server determines the outcome of $f(W_K)$ by computing the product of the subfunctions $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [N - \xi_N + 1 : N]$, that are not captured by the previous $N^*$ servers. Hence, the additional rate, given by the second term in (12), is the product of the indicator $\mathbb{1}_{\Delta_N > 0}$ and $h(\epsilon^{\xi_N})$. Combining this rate term with (54) proves the statement of the proposition.
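As a concrete illustration of the cyclic placement accounting above, the following sketch uses small illustrative parameters of our own choosing ($N = 4$ servers, $N_r = 3$, $K = 4$ subfunctions over $\mathbb{F}_2$, so $\Delta = 1$ and $M = 2$) and verifies by exhaustive search that any $N_r$ servers can jointly reproduce a linearly separable demand, and that each subfunction is replicated on exactly $N - N_r + 1$ servers. Function names are ours, not the paper's.

```python
from itertools import combinations, product

# Illustrative parameters (assumptions, not the paper's general setting).
N, N_r, K = 4, 3, 4
M = (K // N) * (N - N_r + 1)   # storage per server under cyclic placement

# Cyclic placement: server i stores Z_i = {i, i+1, ..., i+M-1} (mod K).
Z = [[(i + t) % K for t in range(M)] for i in range(N)]

# Demand: the linearly separable function f(W) = W_0 + W_1 + W_2 + W_3 over GF(2),
# represented by its coefficient vector.
target = (1, 1, 1, 1)

def recoverable(servers):
    # Each server sends one GF(2) combination of its stored subfunctions;
    # search for per-server coefficients whose sum equals `target`.
    for coeffs in product(*([list(product([0, 1], repeat=M))] * len(servers))):
        total = [0] * K
        for srv, a in zip(servers, coeffs):
            for t, k in enumerate(Z[srv]):
                total[k] ^= a[t]
        if tuple(total) == target:
            return True
    return False

# Any N_r = 3 servers suffice, matching the recovery threshold.
assert all(recoverable(S) for S in combinations(range(N), N_r))
# Each subfunction is stored on exactly N - N_r + 1 = 2 servers.
assert all(sum(k in Z[i] for i in range(N)) == N - N_r + 1 for k in range(K))
```

For instance, servers $\{0, 1, 2\}$ can send $W_0 + W_1$, nothing, and $W_2 + W_3$, respectively, and the user sums the transmissions to obtain the demand.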
under a specific correlation model across subfunctions. More specifically, when the subfunctions $W_k \sim \mathrm{Bern}(\epsilon)$ are identically distributed and correlated across $k \in [K]$, and $\Delta \in \mathbb{Z}^+$, we model the correlation across datasets (a) by exploiting the joint PMF model in [91, Theorem 1], and (b) for a joint PMF described in Table II.
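The geometric-series step used above to bound the collective rate of the $N^*$ servers can be sanity-checked numerically. The sketch below uses illustrative parameter values of our own; it verifies the closed form $\sum_{l=0}^{N^*-1}(\epsilon^M)^l = \frac{1-(\epsilon^M)^{N^*}}{1-\epsilon^M}$ and, by Monte Carlo, that a server's product of $M$ i.i.d. $\mathrm{Bern}(\epsilon)$ subfunctions equals $1$ with probability $\epsilon^M$.

```python
import random

# Illustrative parameters (assumptions): epsilon, M subfunctions per
# server, and N* servers holding disjoint sets of subfunctions.
eps, M, N_star = 0.9, 3, 4

# Server i_l transmits only if the previous l-1 servers all have
# product 1, i.e., with probability (eps^M)^(l-1).
q = eps ** M
partial_sum = sum(q ** l for l in range(N_star))
closed_form = (1 - q ** N_star) / (1 - q)
assert abs(partial_sum - closed_form) < 1e-12  # geometric-series identity

# Monte Carlo check: P(product of M i.i.d. Bern(eps) values is 1) = eps^M.
random.seed(0)
trials = 200_000
hits = sum(all(random.random() < eps for _ in range(M)) for _ in range(trials))
assert abs(hits / trials - q) < 0.01
```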

Fig. 2: Colorings of the graphs in Subsection IV-A (Scenario II). (Top, left to right) Characteristic graphs $G_{X_1}$ and $G_{X_2}$, respectively. (Bottom, left to right) The minimum conditional entropy colorings of $G_{X_1}$ given $c_{G_{X_2}}$, and of $G_{X_2}$ given $c_{G_{X_1}}$, respectively.

Fig. 5: $\eta_{\mathrm{lin}}$ on a logarithmic scale versus $\epsilon$ for $K_c$ demanded functions, for various values of $K_c$, with $\rho = 0$, for different topologies, as detailed in Subsection IV-A (Scenario III).

Fig. 6: Gain $10\log_{10}(\eta_{\mathrm{SW}})$ versus $\epsilon$ for computing (11), where $K_c = 1$, $\rho = 0$, and $N_r = N - 1$. (Left) The parameters $N$, $K$, and $M$ are indicated for each configuration. (Right) $10\log_{10}(\eta_{\mathrm{SW}})$ versus $\epsilon$, showing the effect of $N$ for a fixed total cache size $MN$ and a fixed $K$.

Example 1. (Characteristic graph entropy of ternary random variables [75, Examples 1-2].) In this example, we first investigate the characteristic graph entropy $H_{G_{X_1}}(X_1)$ and the conditional graph entropy $H_{G_{X_1}}(X_1 \mid X_2)$.
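Körner's graph entropy can be evaluated numerically for small instances by minimizing $I(X;U)$ over conditional distributions of independent sets covering $X$. The sketch below uses an illustrative instance of our own (not necessarily the exact graphs of [75]): $X_1$ uniform over a ternary alphabet with a single distinguishable pair $\{0, 1\}$, so the maximal independent sets are $\{0, 2\}$ and $\{1, 2\}$ and only one conditional probability is free to optimize.

```python
from math import log2

# Körner's graph entropy: H_G(X) = min I(X; U), minimized over p(u|x)
# with U ranging over independent sets of G that contain X.
# Illustrative instance (our assumption): X uniform over {0, 1, 2},
# single edge {0, 1}; MISs are U0 = {0, 2} and U1 = {1, 2}.
p_x = [1 / 3, 1 / 3, 1 / 3]

def mutual_information(t):
    # x = 0 must map to U0 and x = 1 must map to U1 (the only MISs
    # containing them); x = 2 maps to U0 w.p. t, the free parameter.
    cond = [[1.0, 0.0], [0.0, 1.0], [t, 1.0 - t]]
    p_u = [sum(p_x[x] * cond[x][u] for x in range(3)) for u in range(2)]
    mi = 0.0
    for x in range(3):
        for u in range(2):
            joint = p_x[x] * cond[x][u]
            if joint > 0:
                mi += joint * log2(joint / (p_x[x] * p_u[u]))
    return mi

# Grid search over t approximates H_G(X); the minimum, 2/3 bit, is
# attained at t = 1/2 and lies strictly below H(X) = log2(3).
h_graph = min(mutual_information(t / 1000) for t in range(1001))
assert abs(h_graph - 2 / 3) < 1e-9
```

The gap between $H_G(X) = 2/3$ bit and $H(X) = \log_2 3 \approx 1.585$ bits is exactly the saving that characteristic-graph coding offers over compressing the source itself.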

TABLE I: Notation.
$Z_i \subseteq [K]$: set of indices of the datasets assigned to server $i \in \Omega$, such that $|Z_i| \le M$.
$X_S = \{X_i : i \in S\}$: set of subfunctions corresponding to a subset of servers with indices $i \in S$, for $S \subseteq \Omega$.

TABLE II: Joint PMF $P_{W_2, W_3}$ of $W_2$ and $W_3$ with a crossover parameter $p$, in Subsection IV-A (Scenario II).