Article

Multi-Function Computation over a Directed Acyclic Network

1 School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China
2 Institute of Network Coding, The Chinese University of Hong Kong, Hong Kong SAR, China
3 School of Science, Tianjin University of Technology, Tianjin 300384, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(12), 1225; https://doi.org/10.3390/e27121225
Submission received: 9 November 2025 / Revised: 27 November 2025 / Accepted: 27 November 2025 / Published: 3 December 2025

Abstract

The problem of multi-function computation over a directed acyclic network is investigated in this paper. In such a network, a sink node is required to compute with zero error multiple vector-linear functions, where each vector-linear function has distinct inputs generated by multiple source nodes. The computing rate tuple of an admissible code is defined as a tuple consisting of the average number of zero-error computations for each vector-linear function when the network is used once jointly. From the information theoretic point of view, we are interested in characterizing the rate region, which is defined as the closed set of all achievable computing rate tuples. In particular, when the sink node is required to compute a single vector-linear function, the network multi-function computation problem degenerates to the network function computation problem. We prove an outer bound on the rate region by developing the approach of the cut-set strong partition. We also illustrate that the obtained outer bound is tight for a typical model of computing two vector-linear functions over the diamond network. Furthermore, we establish the relationship between the network multi-function computation rate region and the network function computation rate region. Also, we show that the best known outer bound on the rate region for computing an arbitrary vector-linear function over an arbitrary network is a straightforward consequence of our outer bound.

1. Introduction

In this paper, we consider the problem of multi-function computation over a directed acyclic network, called network multi-function computation. A directed acyclic graph is used to model the network, where some nodes are referred to as source nodes, and another node is referred to as the sink node. The sink node is required to compute with zero error multiple vector-linear functions, where each vector-linear function has distinct inputs generated by the source nodes. The computing rate tuple of an admissible code is defined as a tuple consisting of the average number of zero-error computations for each vector-linear function when the network is used once jointly. From the information theoretic point of view, we are interested in characterizing the rate region, which is defined as the closed set of all achievable computing rate tuples. This rate region measures the efficiency of computing the vector-linear functions over the network. We note that when the sink node is required to compute a single vector-linear function, the network multi-function computation problem degenerates to the network function computation problem (cf. [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]). The multi-function computation problems have been extensively studied in the literature, e.g., [17,18,19,20,21,22,23,24,25,26,27], as multiple tasks often need to be jointly performed on a single device or within a shared communication infrastructure.
The model of network function computation, in which a target function is computed over a directed acyclic network, has been investigated persistently in the literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. Appuswamy et al. [1] investigated the fundamental computing capacity, i.e., the maximum average number of times that the target function can be computed with zero error for one use of the network, and gave a cut-set based upper bound that is valid under certain constraints on either the network topology or the target function. Huang et al. [2] enhanced Appuswamy et al.'s upper bound so that it applies to arbitrary functions and arbitrary network topologies. Furthermore, for the case of computing an arbitrary function over a multi-edge tree network and the case of computing the identity function or the algebraic sum over an arbitrary network topology, the above two upper bounds coincide and are tight (see [1,2]). Appuswamy and Franceschetti [3] investigated the solvability (rate-1 achievability) of linear (function-computing) network codes for computing a vector-linear function over a directed acyclic network. Subsequently, Guang et al. [4] proved an improved general upper bound by using a novel approach of the cut-set strong partition, which is not only a strict improvement over the previous upper bounds but also tight for all the network function computation problems considered prior to [4] whose computing capacities are known. In particular, the improved upper bound was used to enhance the results in [4] for computing a vector-linear function over a directed acyclic network. Based on the improved upper bound, Li and Xu [12] characterized the computing capacity for computing an arbitrary vector-linear function over the diamond network.
The main contributions and organization of the paper are given as follows.
  • In Section 2, we formally present the model of network multi-function computation, and define the network multi-function computing codes and the rate region.
  • In Section 3, we prove an outer bound on the rate region by developing the approach of the cut-set strong partition introduced by Guang et al. [4], which is applicable to arbitrary network topologies and arbitrary vector-linear functions. We also illustrate that the obtained outer bound is tight for a typical model of computing two vector-linear functions over the diamond network.
  • In Section 4, we compare network multi-function computation and network function computation. We first establish the relationship between the network multi-function computation rate region and the network function computation rate region. By this relationship, we show that the best known outer bound in [4] on the network function computation rate region can induce an outer bound on the network multi-function computation rate region. However, this induced outer bound is not as tight as our outer bound. Further, we show that the best known outer bound in [4] on the rate region for computing an arbitrary vector-linear function over an arbitrary network is a straightforward consequence of our outer bound.
  • Finally, we conclude in Section 5 with a summary of our results.

2. Preliminaries

2.1. Model of Network Multi-Function Computation

Let $G = (V, E)$ be a directed acyclic graph, where $V$ is a finite node set and $E$ is a finite edge set. For an edge $e \in E$ connecting a node $u \in V$ to another node $v \in V$, we use $\mathrm{tail}(e)$ and $\mathrm{head}(e)$ to denote the tail node and the head node of $e$, respectively, i.e., $u = \mathrm{tail}(e)$ and $v = \mathrm{head}(e)$. Accordingly, for a node $v \in V$, let
$$\mathrm{In}(v) = \{e \in E : \mathrm{head}(e) = v\} \quad\text{and}\quad \mathrm{Out}(v) = \{e \in E : \mathrm{tail}(e) = v\}$$
be the set of input edges of $v$ and the set of output edges of $v$, respectively. In the graph $G$, there is a set of source nodes $S = \{\sigma_1, \sigma_2, \ldots, \sigma_s\} \subseteq V$ with $|S| = s$, and a single sink node $\rho \in V \setminus S$, where each source node has no input edges and the sink node $\rho$ has no output edges, i.e., $\mathrm{In}(\sigma_i) = \mathrm{Out}(\rho) = \emptyset$ for $i = 1, 2, \ldots, s$. Without loss of generality, we assume that for each $v \in V \setminus \{\rho\}$, there always exists a directed path from $v$ to $\rho$ in $G$. Let $\mathbb{F}_q$ be a finite field of size $q$. We allow multiple edges between two nodes and assume that a symbol taken from the finite field $\mathbb{F}_q$ can be transmitted with zero error on each edge for each use. The graph $G$, together with $S$ and $\rho$, forms a network $\mathcal{N}$, i.e., $\mathcal{N} = (G, S, \rho)$.
Consider $t$ nonnegative integers $k_1, k_2, \ldots, k_t$. We assume that each source node $\sigma_i \in S$ generates $t$ sequences $x_{i,1}, x_{i,2}, \ldots, x_{i,t}$. Specifically, for each index $j$ with $1 \le j \le t$, the sequence $x_{i,j}$ is defined as an $\mathbb{F}_q$-valued column $k_j$-vector, i.e., $x_{i,j} \in \mathbb{F}_q^{k_j}$, called the $j$-th source sub-vector generated by $\sigma_i$. Let $k = \sum_{j=1}^{t} k_j$. Then, all the $t$ sequences generated by the source node $\sigma_i \in S$ can be further written as a column $k$-vector
$$x_i \triangleq \begin{pmatrix} x_{i,1} \\ x_{i,2} \\ \vdots \\ x_{i,t} \end{pmatrix} \in \mathbb{F}_q^{\sum_{j=1}^{t} k_j} = \mathbb{F}_q^{k},$$
called the source vector generated by $\sigma_i$.
For a subset of source nodes $I \subseteq S$, we let
$$x_{I,j} \triangleq (x_{i,j} : \sigma_i \in I) \in \mathbb{F}_q^{k_j \times |I|}, \quad 1 \le j \le t,$$
where $x_{I,j}$ is called the $j$-th source submatrix generated by the source nodes in $I$. In the rest of the paper, we use $\mathbb{F}_q^{k_j \times I}$ (instead of $\mathbb{F}_q^{k_j \times |I|}$, for notational simplicity) to denote the set of all possible $k_j \times |I|$ matrices taken by $x_{I,j}$. (When $k_j = 1$, we write $x_{I,j} \in \mathbb{F}_q^{k_j \times I}$ as $x_{I,j} \in \mathbb{F}_q^{1 \times I}$ for notational simplicity.) Further, we let
$$x_I \triangleq \begin{pmatrix} x_{I,1} \\ x_{I,2} \\ \vdots \\ x_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times |I|},$$
called the source matrix generated by the source nodes in $I$. Clearly, we have $x_I = (x_i : \sigma_i \in I)$. Similarly, we use $\mathbb{F}_q^{k \times I}$ (instead of $\mathbb{F}_q^{k \times |I|}$, for notational simplicity) to denote the set of all possible $k \times |I|$ matrices taken by $x_I$. In particular, when $I = S$, we have
$$x_{S,j} = (x_{1,j}, x_{2,j}, \ldots, x_{s,j}) \in \mathbb{F}_q^{k_j \times S}, \quad 1 \le j \le t,$$
and
$$x_S = (x_1, x_2, \ldots, x_s) \in \mathbb{F}_q^{k \times S}.$$
Let $f_1, f_2, \ldots, f_t$ be $t$ vector-linear functions over a finite field $\mathbb{F}_q$, called target functions. More precisely, for each $1 \le j \le t$, we let
$$f_j(x_{1,j}, x_{2,j}, \ldots, x_{s,j}) = (x_{1,j}, x_{2,j}, \ldots, x_{s,j}) \cdot M_j, \quad x_{i,j} \in \mathbb{F}_q,\ 1 \le i \le s,$$
where $M_j$ is an $\mathbb{F}_q$-valued full-column-rank matrix of size $s \times r_j$, i.e., $\mathrm{Rank}(M_j) = r_j$ (which implies $r_j \le s$). In our model, the sink node $\rho$ is required to compute with zero error the target functions $f_j(x_{S,j})$ for all $1 \le j \le t$, where $x_{S,j}$ is the $j$-th source submatrix generated by all the source nodes and
$$f_j(x_{S,j}) \triangleq x_{S,j} \cdot M_j = (x_{1,j}, x_{2,j}, \ldots, x_{s,j}) \cdot M_j \in \mathbb{F}_q^{k_j \times r_j}, \quad 1 \le j \le t.$$
In other words, the sink node $\rho$ is required to compute the target function $f_j$ $k_j$ times with zero error for all $1 \le j \le t$. This specifies the network multi-function computation model, which is denoted by $(\mathcal{N}, M_j : 1 \le j \le t)$.
In particular, when the sink node ρ is required to compute only a single target function with zero error (i.e., t = 1 ), the above model degenerates to the network function computation model [1,2,3,4].
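The matrix form of the target functions above can be illustrated numerically. The following minimal sketch evaluates $f_j(x_{S,j}) = x_{S,j} \cdot M_j$ over $\mathbb{F}_q$; the field size $q = 2$, the number of sources $s = 3$, and the all-ones matrix $M_j$ (the algebraic sum) are illustrative assumptions, not values fixed by the model.

```python
# Minimal sketch of the target-function evaluation f_j(x_{S,j}) = x_{S,j} * M_j
# over F_q, with the illustrative choices q = 2, s = 3 sources, and k_j = 2.

q = 2  # assumed field size (prime, so arithmetic mod q realizes F_q)

def evaluate_target(x_Sj, M_j):
    """Multiply the k_j x s source submatrix by the s x r_j matrix M_j over F_q."""
    k_j, s = len(x_Sj), len(x_Sj[0])
    r_j = len(M_j[0])
    return [[sum(x_Sj[a][i] * M_j[i][b] for i in range(s)) % q
             for b in range(r_j)] for a in range(k_j)]

# k_j = 2 computations of the algebraic sum over s = 3 sources:
x_Sj = [[1, 0, 1],
        [1, 1, 1]]           # row a holds the a-th symbol of each source
M_sum = [[1], [1], [1]]      # M_j for the algebraic sum, Rank(M_j) = r_j = 1
print(evaluate_target(x_Sj, M_sum))  # -> [[0], [1]]  (row sums mod 2)
```

With $M_j$ the $s \times s$ identity matrix, the same routine returns $x_{S,j}$ itself, i.e., the identity target function.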

2.2. Network Multi-Function Computing Coding

In this subsection, we define a $(k_j : 1 \le j \le t;\, n)$ (network multi-function computing) code for the model $(\mathcal{N}, M_j : 1 \le j \le t)$. The purpose of such a code is to enable the sink node $\rho$ to compute the target function $f_j$ $k_j$ times with zero error for all $j = 1, 2, \ldots, t$. To be specific, a $(k_j : 1 \le j \le t;\, n)$ code for $(\mathcal{N}, M_j : 1 \le j \le t)$ consists of
  • a local encoding function for each edge $e \in E$:
    $$\varphi_e : \begin{cases} \mathbb{F}_q^{k} \to \mathbb{F}_q^{n} & \text{if } e \in \mathrm{Out}(\sigma_i) \text{ for some } 1 \le i \le s, \\ \prod_{d \in \mathrm{In}(\mathrm{tail}(e))} \mathbb{F}_q^{n} \to \mathbb{F}_q^{n} & \text{otherwise}, \end{cases} \tag{1}$$
    where $k = \sum_{j=1}^{t} k_j$;
  • $t$ decoding functions $\psi_j$, $1 \le j \le t$, at the sink node $\rho$:
    $$\psi_j : \prod_{e \in \mathrm{In}(\rho)} \mathbb{F}_q^{n} \to \mathbb{F}_q^{k_j \times r_j}, \quad j = 1, 2, \ldots, t.$$
With the encoding mechanism as described, the local encoding functions $\varphi_e$, $e \in E$, recursively determine the symbols transmitted over all edges $e$, denoted by $g_e(x_S)$, which can be regarded as vectors in $\mathbb{F}_q^{n}$. Specifically, $g_e$ can be written as
$$g_e(x_S) = \begin{cases} \varphi_e(x_i) & \text{if } e \in \mathrm{Out}(\sigma_i) \text{ for some } 1 \le i \le s, \\ \varphi_e\big(g_{\mathrm{In}(u)}(x_S)\big) & \text{otherwise}, \end{cases} \tag{2}$$
where $u = \mathrm{tail}(e)$ and $g_{E'}(x_S) \triangleq \big(g_e(x_S) : e \in E'\big)$ for an edge set $E' \subseteq E$. We call $g_e$ the global encoding function for the edge $e$.
We say that such a $(k_j : 1 \le j \le t;\, n)$ code $\mathcal{C} = \{\varphi_e : e \in E;\ \psi_j : 1 \le j \le t\}$ is admissible if for all $1 \le j \le t$, the target function $f_j$ can be computed with zero error $k_j$ times at $\rho$ by using $\mathcal{C}$, i.e.,
$$\psi_j\big(g_{\mathrm{In}(\rho)}(x_S)\big) = f_j(x_{S,j}) = x_{S,j} \cdot M_j, \quad \forall\, x_S \in \mathbb{F}_q^{k \times S}.$$
To measure the performance of codes, we further define the computing rate for each target function $f_j$, $1 \le j \le t$, by
$$R_j(\mathcal{C}) = \frac{k_j}{n}, \tag{3}$$
which describes the average number of times the target function $f_j$ can be computed with zero error at $\rho$ for one use of the network $\mathcal{N}$.
A $t$-tuple of nonnegative real numbers $(R_j : 1 \le j \le t) \in \mathbb{R}_{+}^{t}$ is called achievable if for every $\epsilon > 0$, there exists an admissible $(k_j : 1 \le j \le t;\, n)$ code $\mathcal{C}$ for the model $(\mathcal{N}, M_j : 1 \le j \le t)$ such that
$$R_j < R_j(\mathcal{C}) + \epsilon, \quad 1 \le j \le t.$$
Consequently, the rate region for the model $(\mathcal{N}, M_j : 1 \le j \le t)$ is defined as
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \triangleq \big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : (R_1, R_2, \ldots, R_t) \text{ is achievable}\big\},$$
which is evidently closed and bounded.

3. Outer Bound on the Rate Region $\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t)$

In this section, we present a general outer bound on the rate region $\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t)$, where "general" means that the outer bound is applicable to arbitrary network topologies and arbitrary vector-linear functions. Following [4], we first present some graph-theoretic notations and definitions. Consider a network $\mathcal{N} = (G, S, \rho)$, where we recall that $G = (V, E)$ is a directed acyclic graph. For two nodes $u, v \in V$, if there exists a directed path from $u$ to $v$ in $\mathcal{N}$, we denote this relation by $u \to v$. If there is no directed path from $u$ to $v$ in $\mathcal{N}$, we say that $v$ is separated from $u$ and denote this relation by $u \not\to v$. Given a set of edges $C \subseteq E$, define $I_C$ as the set of source nodes from which the sink node $\rho$ is separated if $C$ is deleted from $E$, i.e.,
$$I_C \triangleq \{\sigma \in S : \rho \text{ is separated from } \sigma \text{ upon deleting the edges in } C \text{ from } E\}.$$
Equivalently, $I_C$ is the set of source nodes from which all directed paths to the sink node $\rho$ pass through $C$. An edge set $C$ is said to be a cut set if $I_C \neq \emptyset$. Further, we call $C$ a global cut if $I_C = S$. We let $\Lambda(\mathcal{N})$ denote the family of all cut sets in the network $\mathcal{N}$, i.e.,
$$\Lambda(\mathcal{N}) \triangleq \{C \subseteq E : I_C \neq \emptyset\}.$$
Define a set $K_C$ for a cut set $C \in \Lambda(\mathcal{N})$ as
$$K_C \triangleq \{\sigma \in S : \exists\, e \in C \text{ s.t. } \sigma \to \mathrm{tail}(e)\}.$$
Then we can readily see that $K_C$ is the set of source nodes from which there exists a directed path to the sink node $\rho$ that passes through $C$. It is clear that $I_C \subseteq K_C$. Further, we let $J_C = K_C \setminus I_C$, and hence $K_C = I_C \cup J_C$ and $I_C \cap J_C = \emptyset$. We note that once the edge set $C$ is given, $I_C$, $J_C$ and $K_C$ are determined.
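The sets $I_C$ and $K_C$ can be computed directly from their definitions by graph reachability. The sketch below does so on a small hypothetical DAG; the edge list and node names are illustrative assumptions, not the topology of any figure in the paper.

```python
# Sketch of computing I_C and K_C for a cut set C on a small hypothetical DAG.
# Edges are (tail, head) pairs; the topology below is illustrative only.

def reaches(edges, u, v):
    """True iff there is a directed path from u to v (u -> v), with u == v allowed."""
    if u == v:
        return True
    frontier, seen = [u], {u}
    while frontier:
        w = frontier.pop()
        for t, h in edges:
            if t == w and h not in seen:
                if h == v:
                    return True
                seen.add(h)
                frontier.append(h)
    return False

def I_C(edges, sources, sink, C):
    """Sources separated from the sink once the edges in C are deleted."""
    remaining = [e for e in edges if e not in C]
    return {s for s in sources if not reaches(remaining, s, sink)}

def K_C(edges, sources, C):
    """Sources with a directed path to the tail of some edge in C."""
    return {s for s in sources if any(reaches(edges, s, t) for (t, h) in C)}

# Hypothetical network: s1 and s3 funnel through intermediate nodes u, v.
edges = [("s1", "u"), ("s2", "rho"), ("s3", "v"), ("u", "v"), ("v", "rho")]
C = [("v", "rho")]
print(I_C(edges, {"s1", "s2", "s3"}, "rho", C))  # {'s1', 's3'}
print(K_C(edges, {"s1", "s2", "s3"}, C))         # {'s1', 's3'}
```

In this toy instance $I_C = K_C$, so $J_C = \emptyset$; on other topologies a source may reach $\rho$ both through and around $C$, which is exactly the case $J_C \neq \emptyset$.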
Definition 1
([4], Definition 2; [14], Definition 3). Let $C \in \Lambda(\mathcal{N})$ be a cut set and $\mathcal{P}_C = \{C_1, C_2, \ldots, C_m\}$ be a partition of the cut set $C$. The partition $\mathcal{P}_C$ is said to be a strong partition of $C$ if the following two conditions are satisfied:
  • $I_{C_\ell} \neq \emptyset$, $\ell = 1, 2, \ldots, m$;
  • $I_{C_i} \cap K_{C_j} = \emptyset$, $1 \le i, j \le m$ and $i \neq j$. (There is a typo in the original definition of strong partition in [4], Definition 2: in condition 2), "$I_{C_i} \cap I_{C_j} = \emptyset$" should be "$I_{C_i} \cap K_{C_j} = \emptyset$", as stated in [14], Definition 3.)
For a cut set $C$ in $\Lambda(\mathcal{N})$, the partition $\{C\}$ is called the trivial strong partition of $C$. Let $C \in \Lambda(\mathcal{N})$ be an arbitrary cut set and $\mathcal{P}_C = \{C_1, C_2, \ldots, C_m\}$ be an arbitrary strong partition of $C$. For an $\mathbb{F}_q$-valued matrix $M$ of size $s \times r$, we further define the $\mathcal{P}_C$-rank of $M$ by
$$\mathrm{rank}_{\mathcal{P}_C}(M) \triangleq \mathrm{Rank}\big(M[I_C]\big) + \sum_{\ell=1}^{m} \mathrm{Rank}\big(M[I_{C_\ell}]\big) - \mathrm{Rank}\big(M[\textstyle\bigcup_{\ell=1}^{m} I_{C_\ell}]\big), \tag{4}$$
where $M[U]$ for a subset $U \subseteq S$ stands for the submatrix of $M$ consisting of the $i$-th row for each $\sigma_i \in U$. In particular, when $\mathcal{P}_C = \{C\}$, the trivial strong partition of $C$, we have
$$\mathrm{rank}_{\{C\}}(M) = \mathrm{Rank}\big(M[I_C]\big).$$
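The $\mathcal{P}_C$-rank is a sum of ordinary matrix ranks over $\mathbb{F}_q$, so it is easy to compute. The sketch below assumes $q$ is prime (so that arithmetic mod $q$ realizes $\mathbb{F}_q$) and uses 0-based source indices; the example matrix and index sets at the end are illustrative.

```python
# Sketch of rank_{P_C}(M) = Rank(M[I_C]) + sum_l Rank(M[I_{C_l}])
#                           - Rank(M[union_l I_{C_l}]) over F_q (q prime).

def rank_mod_q(M, q):
    """Rank of a matrix over F_q (q prime) by Gaussian elimination mod q."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if M[r][col] % q), None)
        if pivot is None:
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, q)          # multiplicative inverse mod q
        M[rank] = [(x * inv) % q for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col] % q:
                f = M[r][col]
                M[r] = [(a - f * b) % q for a, b in zip(M[r], M[rank])]
        rank, col = rank + 1, col + 1
    return rank

def submatrix(M, U):
    """M[U]: rows of M indexed by the 0-based source-index set U."""
    return [M[i] for i in sorted(U)]

def rank_PC(M, I_C, I_parts, q):
    union = set().union(*I_parts)
    return (rank_mod_q(submatrix(M, I_C), q)
            + sum(rank_mod_q(submatrix(M, I), q) for I in I_parts)
            - rank_mod_q(submatrix(M, union), q))

# Illustration: the all-ones 3 x 1 matrix with I_C = {0,1,2},
# partition blocks I_{C_1} = {0} and I_{C_2} = {2}, over F_2:
print(rank_PC([[1], [1], [1]], {0, 1, 2}, [{0}, {2}], 2))  # -> 1 + 1 + 1 - 1 = 2
```

Passing the single block $[I_C]$ as the partition reproduces the trivial-strong-partition case, where the formula collapses to $\mathrm{Rank}(M[I_C])$.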
With the definition of the $\mathcal{P}_C$-rank, we present in the following a general outer bound on the rate region, which is applicable to arbitrary network topologies and arbitrary vector-linear functions. The proof of the outer bound is deferred to the end of this section.
Theorem 1.
Consider a model of network multi-function computation $(\mathcal{N}, M_j : 1 \le j \le t)$. Then,
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \subseteq \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : \sum_{j=1}^{t} \mathrm{rank}_{\mathcal{P}_C}(M_j) \cdot R_j \le |C| \ \text{for all } (C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C\Big\},$$
where $\mathfrak{P}_C$ denotes the collection of all the strong partitions of a cut set $C \in \Lambda(\mathcal{N})$.
The following example is given to illustrate the general outer bound obtained in Theorem 1.
Example 1.
Consider a network two-function computation model $(\widetilde{\mathcal{N}}, M_1, M_2)$ as depicted in Figure 1, where in the diamond network $\widetilde{\mathcal{N}}$ there are three source nodes $\sigma_1, \sigma_2, \sigma_3$ and a single sink node $\rho$; the two target functions $f_1$ and $f_2$ are specified below:
$$f_1(x_{1,1}, x_{2,1}, x_{3,1}) = x_{1,1} + x_{2,1} + x_{3,1}, \quad x_{1,1}, x_{2,1}, x_{3,1} \in \mathbb{F}_q,$$
$$f_2(x_{1,2}, x_{2,2}, x_{3,2}) = (x_{1,2}, x_{2,2}, x_{3,2}), \quad x_{1,2}, x_{2,2}, x_{3,2} \in \mathbb{F}_q.$$
We readily see that the corresponding matrices of $f_1$ and $f_2$ are as follows:
$$M_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \quad\text{and}\quad M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
We consider the global cut set $C = \{e_5, e_6\}$, which has a unique nontrivial strong partition $\mathcal{P}_C = \{C_1 = \{e_5\},\, C_2 = \{e_6\}\}$. We readily see that $I_C = S = \{\sigma_1, \sigma_2, \sigma_3\}$, $I_{C_1} = \{\sigma_1\}$, and $I_{C_2} = \{\sigma_3\}$. Then, we calculate that
$$\mathrm{rank}_{\mathcal{P}_C}(M_1) = \mathrm{Rank}\big(M_1[I_C]\big) + \mathrm{Rank}\big(M_1[I_{C_1}]\big) + \mathrm{Rank}\big(M_1[I_{C_2}]\big) - \mathrm{Rank}\big(M_1[I_{C_1} \cup I_{C_2}]\big) = 1 + 1 + 1 - 1 = 2.$$
Similarly, we calculate that
$$\mathrm{rank}_{\mathcal{P}_C}(M_2) = \mathrm{Rank}\big(M_2[I_C]\big) + \mathrm{Rank}\big(M_2[I_{C_1}]\big) + \mathrm{Rank}\big(M_2[I_{C_2}]\big) - \mathrm{Rank}\big(M_2[I_{C_1} \cup I_{C_2}]\big) = 3 + 1 + 1 - 2 = 3.$$
Then by Theorem 1, we obtain that
$$\mathcal{R}(\widetilde{\mathcal{N}}, M_1, M_2) \subseteq \{(R_1, R_2) \in \mathbb{R}_{+}^{2} : \mathrm{rank}_{\mathcal{P}_C}(M_1) \cdot R_1 + \mathrm{rank}_{\mathcal{P}_C}(M_2) \cdot R_2 \le |C|\} = \{(R_1, R_2) \in \mathbb{R}_{+}^{2} : 2R_1 + 3R_2 \le 2\}. \tag{5}$$
In fact, the outer bound in (5) is already tight, i.e.,
$$\mathcal{R}(\widetilde{\mathcal{N}}, M_1, M_2) = \{(R_1, R_2) \in \mathbb{R}_{+}^{2} : 2R_1 + 3R_2 \le 2\},$$
which is depicted in Figure 2. To establish this, it suffices to show that
$$\mathcal{R}(\widetilde{\mathcal{N}}, M_1, M_2) \supseteq \{(R_1, R_2) \in \mathbb{R}_{+}^{2} : 2R_1 + 3R_2 \le 2\}. \tag{6}$$
To be specific, we present in Figure 3 an admissible $(k_1 = 1, k_2 = 0;\, n = 1)$ code $\mathcal{C}$, and hence the computing rates $R_1(\mathcal{C}) = 1$ and $R_2(\mathcal{C}) = 0$. This implies that $(1, 0)$ is achievable. Similarly, we present in Figure 4 an admissible $(k_1 = 0, k_2 = 2;\, n = 3)$ code $\mathcal{C}$, and hence the computing rates $R_1(\mathcal{C}) = 0$ and $R_2(\mathcal{C}) = 2/3$. This implies that $(0, 2/3)$ is achievable. By applying the time-sharing scheme, we thus have proved (6).
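The time-sharing step above can be checked arithmetically. The quick sketch below verifies that every convex mixture of the two corner points $(1, 0)$ and $(0, 2/3)$ from Example 1 lies exactly on the boundary $2R_1 + 3R_2 = 2$ of the rate region.

```python
# Check that time-sharing the two admissible codes of Example 1 stays on the
# boundary 2*R1 + 3*R2 = 2 of the rate region.
from fractions import Fraction

corner_a = (Fraction(1), Fraction(0))       # from the (k1=1, k2=0; n=1) code
corner_b = (Fraction(0), Fraction(2, 3))    # from the (k1=0, k2=2; n=3) code

for lam in (Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1)):
    R1 = lam * corner_a[0] + (1 - lam) * corner_b[0]
    R2 = lam * corner_a[1] + (1 - lam) * corner_b[1]
    assert 2 * R1 + 3 * R2 == 2             # every mixture meets the bound with equality
print("all time-sharing points lie on 2*R1 + 3*R2 = 2")
```

Since the boundary segment between the two corner points is attained, the whole region below it is achievable, which is exactly the time-sharing argument in the example.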

Proof of Theorem 1

In this subsection, we prove the outer bound in Theorem 1. First, we let $k_1, k_2, \ldots, k_t$ be arbitrary $t$ nonnegative integers, and accordingly define $k = \sum_{p=1}^{t} k_p$. (The index $p$ is used here, instead of the original index $j$, to avoid potential symbol confusion, and this index notation will be adopted consistently in all subsequent proofs.) We then present the following equivalence relation, which will be used in the subsequent proof.
Definition 2.
Consider a subset of source nodes $I \subseteq S$. Let
$$x_I = \begin{pmatrix} x_{I,1} \\ x_{I,2} \\ \vdots \\ x_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I} \quad\text{and}\quad x_I' = \begin{pmatrix} x_{I,1}' \\ x_{I,2}' \\ \vdots \\ x_{I,t}' \end{pmatrix} \in \mathbb{F}_q^{k \times I}$$
be any two source matrices, where $x_{I,p},\, x_{I,p}' \in \mathbb{F}_q^{k_p \times I}$ for $1 \le p \le t$. We say that $x_I$ and $x_I'$ are I-equivalent if for each $1 \le p \le t$,
$$x_{I,p} \cdot M_p[I] = x_{I,p}' \cdot M_p[I].$$
The $I$-equivalence relation induces a partition of $\mathbb{F}_q^{k \times I}$, and the blocks in the partition are called $I$-equivalence classes. We use $\mathrm{Cl}_I$ to denote an $I$-equivalence class. The following lemma establishes that all $I$-equivalence classes have the same size, and provides an explicit formula for this size.
Lemma 1.
Consider a subset of source nodes $I \subseteq S$. All $I$-equivalence classes have the same size
$$\exp\Big[\sum_{p=1}^{t} k_p \cdot \big(|I| - \mathrm{Rank}(M_p[I])\big)\Big],$$
where $\exp[\cdot]$ denotes the exponential function with base $q$, i.e., $\exp[z] = q^{z}$.
Proof. 
See Appendix A. □
Immediately, we obtain the following consequence of Lemma 1, which explicitly gives the number of all I-equivalence classes.
Corollary 1.
Consider a subset of source nodes $I \subseteq S$. The number of all $I$-equivalence classes is given by
$$\exp\Big[\sum_{p=1}^{t} k_p \cdot \mathrm{Rank}(M_p[I])\Big].$$
Proof. 
We note that the $I$-equivalence relation induces a partition of $\mathbb{F}_q^{k \times I}$, where $k = \sum_{p=1}^{t} k_p$. In addition, by Lemma 1, all $I$-equivalence classes have the same size $\exp\big[\sum_{p=1}^{t} k_p \cdot (|I| - \mathrm{Rank}(M_p[I]))\big]$. Then, the number of all $I$-equivalence classes is calculated to be
$$\frac{|\mathbb{F}_q^{k \times I}|}{\exp\big[\sum_{p=1}^{t} k_p \cdot (|I| - \mathrm{Rank}(M_p[I]))\big]} = \frac{\exp\big[\sum_{p=1}^{t} k_p \cdot |I|\big]}{\exp\big[\sum_{p=1}^{t} k_p \cdot (|I| - \mathrm{Rank}(M_p[I]))\big]} = \exp\Big[\sum_{p=1}^{t} k_p \cdot \mathrm{Rank}(M_p[I])\Big].$$
The corollary is proved. □
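Lemma 1 and Corollary 1 can be verified by brute force in a tiny instance. The parameters below ($q = 2$, $t = 1$, $k_1 = 1$, $|I| = 2$, and $M_1[I]$ the all-ones $2 \times 1$ column, so that $x_{I,1} \cdot M_1[I]$ is the mod-2 sum of the two entries) are illustrative assumptions.

```python
# Brute-force check of Lemma 1 and Corollary 1 in a tiny instance:
# q = 2, t = 1, k_1 = 1, |I| = 2, M_1[I] = (1, 1)^T with Rank(M_1[I]) = 1.
from itertools import product
from collections import defaultdict

q, k1, I_size = 2, 1, 2
rank_M1_I = 1                    # rank of the all-ones 2 x 1 column over F_2

classes = defaultdict(list)
for x in product(range(q), repeat=I_size):   # all x_{I,1} in F_2^{1 x 2}
    image = sum(x) % q                       # x_{I,1} * M_1[I]
    classes[image].append(x)

sizes = {len(c) for c in classes.values()}
assert sizes == {q ** (k1 * (I_size - rank_M1_I))}   # Lemma 1: each class has size 2
assert len(classes) == q ** (k1 * rank_M1_I)         # Corollary 1: 2 classes in total
print(len(classes), "classes, each of size", sizes.pop())
```

Here the two classes are $\{(0,0), (1,1)\}$ and $\{(0,1), (1,0)\}$, matching both formulas.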
Now, we are ready to prove Theorem 1. Let $\mathcal{C}$ be an arbitrary admissible $(k_p : 1 \le p \le t;\, n)$ code for the model $(\mathcal{N}, M_p : 1 \le p \le t)$, of which the global encoding functions are $g_e$, $e \in E$. Consider a cut set $C \in \Lambda(\mathcal{N})$, where we let $I = I_C$ and $J = J_C$. Since no directed path exists from any source node in $S \setminus (I \cup J)$ to any node in $\{\mathrm{tail}(e) : e \in C\}$, the source matrix $x_{S \setminus (I \cup J)}$ does not contribute to the value of $g_C(x_S)$. Hence, we can write $g_C(x_I, x_J, x_{S \setminus (I \cup J)})$ as $g_C(x_I, x_J)$. Next, we present the following lemma, which plays a crucial role in proving our general outer bound in Theorem 1.
Lemma 2.
Let $\{g_e : e \in E\}$ be the set of all global encoding functions of a given admissible $(k_p : 1 \le p \le t;\, n)$ code for the model $(\mathcal{N}, M_p : 1 \le p \le t)$, and let $k = \sum_{p=1}^{t} k_p$. For a cut set $C \in \Lambda(\mathcal{N})$, let $I = I_C$ and $J = J_C$. Consider any two source matrices $x_I$ and $x_I'$ in $\mathbb{F}_q^{k \times I}$. If $x_I$ and $x_I'$ are not $I$-equivalent, then
$$g_C(x_I, a_J) \neq g_C(x_I', a_J), \quad \forall\, a_J \in \mathbb{F}_q^{k \times J}.$$
Proof. 
See Appendix B. □
We recall the $(k_p : 1 \le p \le t;\, n)$ code $\mathcal{C}$, of which the global encoding functions are $g_e$, $e \in E$. We now consider a cut set $C \in \Lambda(\mathcal{N})$. Let $I = I_C$ and $J = J_C$ for notational simplicity. By (1) and (2), we can readily see that
$$q^{n|C|} \ \ge\ \#\{g_C(x_I, x_J) : (x_I, x_J) \in \mathbb{F}_q^{k \times (I \cup J)}\} \ \ge\ \#\{g_C(x_I, a_J) : x_I \in \mathbb{F}_q^{k \times I}\} \ =\ \#\Big\{\bigcup_{\text{all } \mathrm{Cl}_I} \{g_C(x_I, a_J) : x_I \in \mathrm{Cl}_I\}\Big\} \tag{7}$$
$$=\ \sum_{\text{all } \mathrm{Cl}_I} \#\{g_C(x_I, a_J) : x_I \in \mathrm{Cl}_I\}, \tag{8}$$
where $\#\{\cdot\}$ stands for the size of the set; $a_J \in \mathbb{F}_q^{k \times J}$ is an arbitrarily fixed source matrix; the equality (7) follows from the fact that all $I$-equivalence classes $\mathrm{Cl}_I$ form a partition of $\mathbb{F}_q^{k \times I}$; and the equality (8) follows from Lemma 2.
For each $I$-equivalence class $\mathrm{Cl}_I$, we continue to consider
$$\#\{g_C(x_I, a_J) : x_I \in \mathrm{Cl}_I\}. \tag{9}$$
For the cut set $C \in \Lambda(\mathcal{N})$, let $\mathcal{P}_C = \{C_1, C_2, \ldots, C_m\}$ be an arbitrary strong partition of $C$. We further let $I_\ell = I_{C_\ell}$ for each $1 \le \ell \le m$, and accordingly $L = I \setminus (\bigcup_{\ell=1}^{m} I_\ell)$. By Definition 1, we can see that
$$K_{C_\ell} \subseteq I_\ell \cup L \cup J, \quad 1 \le \ell \le m.$$
Then, we can rewrite (9) as follows:
$$\#\{g_C(x_I, a_J) : x_I \in \mathrm{Cl}_I\} = \#\big\{g_C\big((x_{I_\ell} : 1 \le \ell \le m), x_L, a_J\big) : x_I = (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, x_L) \in \mathrm{Cl}_I\big\}$$
$$\ge \#\big\{g_C\big((x_{I_\ell} : 1 \le \ell \le m), a_L, a_J\big) : x_{I_\ell} \in \mathbb{F}_q^{k \times I_\ell},\ 1 \le \ell \le m, \text{ and } (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, a_L) \in \mathrm{Cl}_I\big\}, \tag{10}$$
where we take $a_L \in \mathbb{F}_q^{k \times L}$ to be an arbitrary source matrix such that there exists a source matrix $(y_{I_1}, y_{I_2}, \ldots, y_{I_m}, y_L) \in \mathrm{Cl}_I$ with $y_L = a_L$.
We note that for each subset $I_\ell \subseteq I$ (where $1 \le \ell \le m$), the $I_\ell$-equivalence relation similarly induces a partition of $\mathbb{F}_q^{k \times I_\ell}$, and the blocks in the partition are called $I_\ell$-equivalence classes. For notational distinction, we use $\mathrm{cl}_{I_\ell}$ (instead of $\mathrm{Cl}_I$) to denote an $I_\ell$-equivalence class. For $I_\ell$-equivalence classes $\mathrm{cl}_{I_\ell}$, $1 \le \ell \le m$, we define the set
$$\big\langle \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big\rangle \triangleq \big\{(x_{I_1}, x_{I_2}, \ldots, x_{I_m}, a_L) : x_{I_\ell} \in \mathrm{cl}_{I_\ell} \text{ for } 1 \le \ell \le m\big\}.$$
Continuing from (10), we consider
$$\#\big\{g_C\big((x_{I_\ell} : 1 \le \ell \le m), a_L, a_J\big) : x_{I_\ell} \in \mathbb{F}_q^{k \times I_\ell},\ 1 \le \ell \le m, \text{ and } (x_{I_1}, \ldots, x_{I_m}, a_L) \in \mathrm{Cl}_I\big\}$$
$$= \#\big\{g_C\big((x_{I_\ell} : 1 \le \ell \le m), a_L, a_J\big) : x_{I_\ell} \in \mathrm{cl}_{I_\ell}, \text{ an } I_\ell\text{-equivalence class},\ 1 \le \ell \le m, \text{ and } \langle \mathrm{cl}_{I_1}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle \subseteq \mathrm{Cl}_I\big\}$$
$$\ge \#\big\{\langle \mathrm{cl}_{I_1}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle : \mathrm{cl}_{I_\ell} \text{ is an } I_\ell\text{-equivalence class},\ 1 \le \ell \le m, \text{ and } \langle \mathrm{cl}_{I_1}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle \subseteq \mathrm{Cl}_I\big\}, \tag{11}$$
where the inequality (11) is elaborated as follows. To be specific, let
$$\mathrm{cl} \triangleq \langle \mathrm{cl}_{I_1}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle \subseteq \mathrm{Cl}_I \quad\text{and}\quad \mathrm{cl}' \triangleq \langle \mathrm{cl}_{I_1}', \ldots, \mathrm{cl}_{I_m}', a_L \rangle \subseteq \mathrm{Cl}_I$$
be any two distinct such sets. We assume without loss of generality that $\mathrm{cl}_{I_1} \neq \mathrm{cl}_{I_1}'$. Consider any two source matrices
$$(x_{I_1}, x_{I_2}, \ldots, x_{I_m}, a_L) \in \mathrm{cl} \quad\text{and}\quad (x_{I_1}', x_{I_2}', \ldots, x_{I_m}', a_L) \in \mathrm{cl}',$$
which implies that $x_{I_\ell} \in \mathrm{cl}_{I_\ell}$ and $x_{I_\ell}' \in \mathrm{cl}_{I_\ell}'$ for $1 \le \ell \le m$. This, together with $\mathrm{cl}_{I_1} \neq \mathrm{cl}_{I_1}'$, immediately implies that $x_{I_1}$ and $x_{I_1}'$ are not $I_1$-equivalent. By the same argument used to prove Lemma 2, we have
$$g_{C_1}(x_{I_1}, a_L, a_J) \neq g_{C_1}(x_{I_1}', a_L, a_J),$$
and thus
$$\big(g_{C_\ell}(x_{I_\ell}, a_L, a_J) : 1 \le \ell \le m\big) \neq \big(g_{C_\ell}(x_{I_\ell}', a_L, a_J) : 1 \le \ell \le m\big).$$
Based on the above, the inequality (11) is proved.
Before discussing (11) further, we need the lemma below, whose proof is deferred to Appendix C.
Lemma 3.
Consider a subset of source nodes $I \subseteq S$. Let $I_\ell$, $1 \le \ell \le m$, be $m$ disjoint subsets of $I$ and let $L = I \setminus (\bigcup_{\ell=1}^{m} I_\ell)$. Fix an arbitrary $I$-equivalence class $\mathrm{Cl}_I$ and an arbitrary source matrix $a_L \in \mathbb{F}_q^{k \times L}$ such that there exists a source matrix $y_I = (y_{I_1}, y_{I_2}, \ldots, y_{I_m}, y_L) \in \mathrm{Cl}_I$ with $y_L = a_L$. Let
$$\mathcal{P} \triangleq \big\{\langle \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle : \mathrm{cl}_{I_\ell} \text{ is an } I_\ell\text{-equivalence class},\ 1 \le \ell \le m, \text{ and } \langle \mathrm{cl}_{I_1}, \ldots, \mathrm{cl}_{I_m}, a_L \rangle \subseteq \mathrm{Cl}_I\big\}.$$
Then, the size of $\mathcal{P}$ is given by
$$|\mathcal{P}| = \exp\Big[\sum_{p=1}^{t} k_p \cdot \Big(\sum_{\ell=1}^{m} \mathrm{Rank}(M_p[I_\ell]) - \mathrm{Rank}\big(M_p[\textstyle\bigcup_{\ell=1}^{m} I_\ell]\big)\Big)\Big]. \tag{12}$$
With (10) and (11), by Lemma 3 we immediately obtain that
$$\#\{g_C(x_I, a_J) : x_I \in \mathrm{Cl}_I\} \ \ge\ \exp\Big[\sum_{p=1}^{t} k_p \cdot \Big(\sum_{\ell=1}^{m} \mathrm{Rank}(M_p[I_\ell]) - \mathrm{Rank}\big(M_p[\textstyle\bigcup_{\ell=1}^{m} I_\ell]\big)\Big)\Big]. \tag{13}$$
Combining (8) and (13), we thus have
$$q^{n|C|} \ \ge\ \sum_{\text{all } \mathrm{Cl}_I} \exp\Big[\sum_{p=1}^{t} k_p \cdot \Big(\sum_{\ell=1}^{m} \mathrm{Rank}(M_p[I_\ell]) - \mathrm{Rank}\big(M_p[\textstyle\bigcup_{\ell=1}^{m} I_\ell]\big)\Big)\Big]$$
$$= \exp\Big[\sum_{p=1}^{t} k_p \cdot \mathrm{Rank}(M_p[I])\Big] \cdot \exp\Big[\sum_{p=1}^{t} k_p \cdot \Big(\sum_{\ell=1}^{m} \mathrm{Rank}(M_p[I_\ell]) - \mathrm{Rank}\big(M_p[\textstyle\bigcup_{\ell=1}^{m} I_\ell]\big)\Big)\Big] \tag{14}$$
$$= \exp\Big[\sum_{p=1}^{t} k_p \cdot \Big(\mathrm{Rank}(M_p[I]) + \sum_{\ell=1}^{m} \mathrm{Rank}(M_p[I_\ell]) - \mathrm{Rank}\big(M_p[\textstyle\bigcup_{\ell=1}^{m} I_\ell]\big)\Big)\Big]$$
$$= \exp\Big[\sum_{p=1}^{t} k_p \cdot \mathrm{rank}_{\mathcal{P}_C}(M_p)\Big], \tag{15}$$
where the equality (14) follows from Corollary 1 (the sum has $\exp\big[\sum_{p=1}^{t} k_p \cdot \mathrm{Rank}(M_p[I])\big]$ identical terms), and the equality (15) follows from the definition of $\mathrm{rank}_{\mathcal{P}_C}(M_p)$ for $1 \le p \le t$ (cf. (4)). Equivalently, we have
$$n|C| \ \ge\ \sum_{p=1}^{t} k_p \cdot \mathrm{rank}_{\mathcal{P}_C}(M_p). \tag{16}$$
It further follows from (16) that
$$|C| \ \ge\ \sum_{p=1}^{t} \frac{k_p}{n} \cdot \mathrm{rank}_{\mathcal{P}_C}(M_p) = \sum_{p=1}^{t} R_p(\mathcal{C}) \cdot \mathrm{rank}_{\mathcal{P}_C}(M_p), \tag{17}$$
where we recall from (3) that the computing rate for the target function $f_p$ is defined by $R_p(\mathcal{C}) = k_p / n$ for each $1 \le p \le t$. We note that the inequality (17) holds for all pairs $(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C$, and thus
$$\sum_{p=1}^{t} \mathrm{rank}_{\mathcal{P}_C}(M_p) \cdot R_p(\mathcal{C}) \le |C|, \quad \forall\, (C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C.$$
Further, the above bound is valid for any $t$ nonnegative integers $k_1, k_2, \ldots, k_t$ and any admissible $(k_p : 1 \le p \le t;\, n)$ code. Hence, we have proved that
$$\mathcal{R}(\mathcal{N}, M_p : 1 \le p \le t) \subseteq \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : \sum_{p=1}^{t} \mathrm{rank}_{\mathcal{P}_C}(M_p) \cdot R_p \le |C| \ \text{for all } (C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C\Big\}.$$
The theorem is proved.

4. Comparison on Network Function Computation

The model of network function computation $(\mathcal{N}, f)$ for computing an arbitrary target function $f$ over a directed acyclic network $\mathcal{N}$ has been investigated persistently in the literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. In the same way, we can define the rate region $\mathcal{R}(\mathcal{N}, f)$ for the model $(\mathcal{N}, f)$. Further, the computing capacity of the model $(\mathcal{N}, f)$ is defined as
$$\mathcal{C}(\mathcal{N}, f) \triangleq \max \mathcal{R}(\mathcal{N}, f),$$
namely, the maximum average number of times that the target function $f$ can be computed with zero error for one use of the network. We first present the following theorem, which reveals the relationship between the network multi-function computation rate region and the network function computation rate region.
Theorem 2.
Consider the model of network multi-function computation $(\mathcal{N}, M_j : 1 \le j \le t)$ and the models of network function computation $(\mathcal{N}, M_j)$ for $1 \le j \le t$. On the one hand,
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \subseteq \big\{(R_1, R_2, \ldots, R_t) : R_j \in \mathcal{R}(\mathcal{N}, M_j) \text{ for } 1 \le j \le t\big\}. \tag{18}$$
On the other hand, the following inclusion holds:
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \supseteq \Big\{\sum_{j=1}^{t} \lambda_j R_j\, e_j = (\lambda_1 R_1, \lambda_2 R_2, \ldots, \lambda_t R_t) : R_j \in \mathcal{R}(\mathcal{N}, M_j) \text{ and } \lambda_j \ge 0 \text{ for } 1 \le j \le t, \text{ and } \sum_{j=1}^{t} \lambda_j \le 1\Big\}, \tag{19}$$
where $e_j$ is a $t$-dimensional row vector whose $j$-th component is $1$, and all other components are $0$.
Proof. 
See Appendix D. □
Remark 1.
We note that the inclusions (18) and (19) in Theorem 2 are in general not tight. To prove this claim, we will use a specific model presented in Example 2.
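One way to see why the time-sharing inclusion (19) can be strict is to test a concrete point. The quick check below assumes, as computed in Example 2 later, component rate regions $[0, 1]$ and $[0, 2/3]$ for the two target functions, and shows that the achievable point $(1/2, 1/2)$ cannot be written as a time-sharing combination.

```python
# Check that (1/2, 1/2) is NOT in the time-sharing region (19) built from the
# assumed component rate regions [0, 1] and [0, 2/3]: we need
# lambda1 * R1 = 1/2 with R1 <= 1 and lambda2 * R2 = 1/2 with R2 <= 2/3,
# forcing lambda1 >= 1/2 and lambda2 >= 3/4, so lambda1 + lambda2 > 1.
from fractions import Fraction

target = (Fraction(1, 2), Fraction(1, 2))
R1_max, R2_max = Fraction(1), Fraction(2, 3)

lambda1_min = target[0] / R1_max   # smallest lambda1 achieving the first coordinate
lambda2_min = target[1] / R2_max   # smallest lambda2 achieving the second coordinate
assert lambda1_min + lambda2_min > 1   # the time-sharing budget is violated
print("minimum lambda1 + lambda2 =", lambda1_min + lambda2_min)  # -> 5/4
```

Since the required fractions of network uses sum to $5/4 > 1$, no time-sharing schedule produces $(1/2, 1/2)$, even though that point is achievable by a joint code.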
In network function computation, several "general" upper bounds on the computing capacity have been obtained [1,2,4], where "general" means that the upper bounds are applicable to arbitrary networks and arbitrary target functions. The best known among them is the one proved by Guang et al. [4] using the approach of the cut-set strong partition. Notably, this best-known upper bound also equivalently provides the best known outer bound on the rate region. In particular, for an arbitrary network and an arbitrary vector-linear function, the best-known upper bound on the computing capacity ([4], Theorem 2) takes the following form, whose equivalent version explicitly characterizes the best known outer bound on the rate region.
Theorem 3.
Consider a model of network function computation $(\mathcal{N}, M)$. Then,
$$\mathcal{C}(\mathcal{N}, M) \le \min_{(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C} \frac{|C|}{\mathrm{rank}_{\mathcal{P}_C}(M)},$$
or equivalently,
$$\mathcal{R}(\mathcal{N}, M) \subseteq \Big\{R \in \mathbb{R}_{+} : R \le \min_{(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C} \frac{|C|}{\mathrm{rank}_{\mathcal{P}_C}(M)}\Big\}.$$
We note that Theorem 3 is a straightforward consequence of Theorem 1 (obtained by taking $t = 1$). On the other hand, we show in the following that Theorem 3, which gives the outer bound on the network function computation rate region, can induce an outer bound on the network multi-function computation rate region. However, this induced outer bound is not as tight as the one given by Theorem 1.
By Theorem 3 and the inclusion (18) in Theorem 2, for the model of network multi-function computation we obtain that
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \subseteq \big\{(R_1, R_2, \ldots, R_t) : R_j \in \mathcal{R}(\mathcal{N}, M_j) \text{ for } 1 \le j \le t\big\} \subseteq \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : R_j \le \min_{(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C} \frac{|C|}{\mathrm{rank}_{\mathcal{P}_C}(M_j)} \text{ for } 1 \le j \le t\Big\}.$$
We formally state this outer bound on the rate region for the model of network multi-function computation ( N , M j : 1 j t ) in the following theorem.
Theorem 4.
Consider a model of network multi-function computation $(\mathcal{N}, M_j : 1 \le j \le t)$. Then,
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \subseteq \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : R_j \le \min_{(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C} \frac{|C|}{\mathrm{rank}_{\mathcal{P}_C}(M_j)} \text{ for } 1 \le j \le t\Big\}.$$
We first note that our outer bound in Theorem 1 is tighter than the outer bound in Theorem 4 induced by Theorem 3. To be specific, we have
$$\Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : \sum_{j=1}^{t} \mathrm{rank}_{\mathcal{P}_C}(M_j) \cdot R_j \le |C| \ \text{for all } (C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C\Big\}$$
$$\subseteq \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : \mathrm{rank}_{\mathcal{P}_C}(M_j) \cdot R_j \le |C| \ \text{for all } (C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C \text{ and } 1 \le j \le t\Big\}$$
$$= \Big\{(R_1, R_2, \ldots, R_t) \in \mathbb{R}_{+}^{t} : R_j \le \min_{(C, \mathcal{P}_C) \in \Lambda(\mathcal{N}) \times \mathfrak{P}_C} \frac{|C|}{\mathrm{rank}_{\mathcal{P}_C}(M_j)} \text{ for } 1 \le j \le t\Big\}.$$
Further, we use the specific example below to illustrate that our outer bound is a strict enhancement of the outer bound in Theorem 4.
Example 2.
Consider a network two-function computation model $(\widehat{\mathcal{N}}, M_1, M_2)$ as depicted in Figure 5, where in the asymmetric diamond network $\widehat{\mathcal{N}}$ there are three source nodes $\sigma_1, \sigma_2, \sigma_3$ and a single sink node $\rho$; the two target functions $f_1$ and $f_2$ are specified below:
$$f_1(x_{1,1}, x_{2,1}, x_{3,1}) = x_{1,1} + x_{2,1} + x_{3,1}, \quad x_{1,1}, x_{2,1}, x_{3,1} \in \mathbb{F}_q,$$
$$f_2(x_{1,2}, x_{2,2}, x_{3,2}) = (x_{1,2}, x_{2,2}, x_{3,2}), \quad x_{1,2}, x_{2,2}, x_{3,2} \in \mathbb{F}_q.$$
We readily see that the corresponding matrices of $f_1$ and $f_2$ are as follows:
$$M_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \quad\text{and}\quad M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
We will calculate the outer bound in Theorem 1 by considering two typical cut sets and their strong partitions below.
  • We first consider the cut set C = { e 5 } and its trivial strong partition P C = { C } = { e 5 } . We can see that I C = { σ 3 } , and thus
    rank P C ( M j ) = R a n k M j [ I C ] = 1 , j = 1 , 2 .
    By Theorem 1, we have
    R ( N ^ , M 1 , M 2 ) ( R 1 , R 2 ) R + 2 : rank P C ( M 1 ) · R 1 + rank P C ( M 2 ) · R 2 | C | = ( R 1 , R 2 ) R + 2 : R 1 + R 2 1 .
  • In the following, we consider the global cut set $C = \{e_6, e_7\}$ and its trivial strong partition $\mathcal{P}_C = \{C\} = \{\{e_6, e_7\}\}$. We can see that $I_C = S$, and thus
$$\operatorname{rank}_{\mathcal{P}_C}(M_1) = \operatorname{Rank}(M_1[I_C]) = 1 \quad \text{and} \quad \operatorname{rank}_{\mathcal{P}_C}(M_2) = \operatorname{Rank}(M_2[I_C]) = 3. \tag{22}$$
    By Theorem 1, we also have
$$\mathcal{R}(\hat{\mathcal{N}}, M_1, M_2) \subseteq \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : \operatorname{rank}_{\mathcal{P}_C}(M_1) \cdot R_1 + \operatorname{rank}_{\mathcal{P}_C}(M_2) \cdot R_2 \le |C| \right\} = \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 + 3 R_2 \le 2 \right\}. \tag{23}$$
Combining (21) and (23), we thus obtain that
$$\mathcal{R}(\hat{\mathcal{N}}, M_1, M_2) \subseteq \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 + R_2 \le 1,\ R_1 + 3 R_2 \le 2 \right\}. \tag{24}$$
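The two cut-set computations above are easy to reproduce numerically. The sketch below is only an illustration (using numpy; the ranks are taken over the rationals, which for these particular 0/1 matrices coincide with the ranks over $\mathbb{F}_q$), and it prints the inequality induced by each cut set:

```python
import numpy as np

# Matrices of the two target functions in Example 2 (entries in {0, 1}).
M1 = np.array([[1], [1], [1]])   # f1: the sum x_{1,1} + x_{2,1} + x_{3,1}
M2 = np.eye(3, dtype=int)        # f2: the identity (x_{1,2}, x_{2,2}, x_{3,2})

# Each cut set C gives I_C (rows of M_j to keep) and its size |C|:
# C = {e5} has I_C = {sigma_3}; C = {e6, e7} is a global cut set with I_C = S.
cuts = {"{e5}": ([2], 1), "{e6,e7}": ([0, 1, 2], 2)}

bounds = {}
for name, (rows, size) in cuts.items():
    r1 = int(np.linalg.matrix_rank(M1[rows, :]))
    r2 = int(np.linalg.matrix_rank(M2[rows, :]))
    bounds[name] = (r1, r2, size)
    print(f"C = {name}: {r1}*R1 + {r2}*R2 <= {size}")
```

Running it recovers exactly the inequalities $R_1 + R_2 \le 1$ and $R_1 + 3R_2 \le 2$ used above.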
In fact, the outer bound in (24) is already tight, i.e.,
$$\mathcal{R}(\hat{\mathcal{N}}, M_1, M_2) = \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 + R_2 \le 1,\ R_1 + 3 R_2 \le 2 \right\}, \tag{25}$$
which is depicted in Figure 6. To establish this, it suffices to show that
$$\mathcal{R}(\hat{\mathcal{N}}, M_1, M_2) \supseteq \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 + R_2 \le 1,\ R_1 + 3 R_2 \le 2 \right\}. \tag{26}$$
To be specific, we present in Figure 7 an admissible $(k_1 = 1, k_2 = 0; n = 1)$ code C, which achieves the computing rates $R_1(\text{C}) = 1$ and $R_2(\text{C}) = 0$; hence, $(1, 0)$ is achievable. Further, we present in Figure 8 an admissible $(k_1 = 1, k_2 = 1; n = 2)$ code C, which achieves $R_1(\text{C}) = 1/2$ and $R_2(\text{C}) = 1/2$; hence, $(1/2, 1/2)$ is achievable. Similarly, we present in Figure 9 an admissible $(k_1 = 0, k_2 = 2; n = 3)$ code C, which achieves $R_1(\text{C}) = 0$ and $R_2(\text{C}) = 2/3$; hence, $(0, 2/3)$ is achievable. By applying the time-sharing scheme to these three codes, we see that (26) holds. Together with (24), we have thus proved (25).
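The time-sharing argument can be checked mechanically. The following sketch (illustrative only; exact arithmetic via `fractions`) verifies that the three rate pairs achieved by the codes of Figures 7–9 satisfy both inequalities of the region, and that a convex combination of two of them lands in the region again:

```python
from fractions import Fraction as F

# Rate pairs achieved by the three admissible codes (Figures 7, 8, 9).
points = [(F(1), F(0)), (F(1, 2), F(1, 2)), (F(0), F(2, 3))]

def in_region(R1, R2):
    # The region of (25): R1 + R2 <= 1 and R1 + 3*R2 <= 2.
    return R1 + R2 <= 1 and R1 + 3 * R2 <= 2

assert all(in_region(R1, R2) for R1, R2 in points)

# Time-sharing the codes of Figures 7 and 9 with weights (lam, 1 - lam)
# achieves every point on the segment between (1, 0) and (0, 2/3).
lam = F(1, 4)
R1 = lam * points[0][0] + (1 - lam) * points[2][0]   # = 1/4
R2 = lam * points[0][1] + (1 - lam) * points[2][1]   # = 1/2
assert in_region(R1, R2)
```

The three pairs are precisely the nonzero corner points of the region, so time-sharing them traces out its entire boundary.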
In the following, we calculate the outer bound in Theorem 4. First, we have
$$1 \le C(\hat{\mathcal{N}}, M_1) \le \min_{(C, \mathcal{P}_C) \in \Lambda(\hat{\mathcal{N}}) \times \mathscr{P}_C} \frac{|C|}{\operatorname{rank}_{\mathcal{P}_C}(M_1)} \le \frac{|\{e_5\}|}{\operatorname{rank}_{\{\{e_5\}\}}(M_1)} = 1,$$
where the first inequality holds because there exists an admissible ( 1 ; 1 ) code for the model ( N ^ , M 1 ) as depicted in Figure 7; the second inequality follows from Theorem 3; and the last equality follows from rank { { e 5 } } ( M 1 ) = 1 by (20). This implies that
$$C(\hat{\mathcal{N}}, M_1) = \min_{(C, \mathcal{P}_C) \in \Lambda(\hat{\mathcal{N}}) \times \mathscr{P}_C} \frac{|C|}{\operatorname{rank}_{\mathcal{P}_C}(M_1)} = 1. \tag{27}$$
Similarly, we also have
$$\frac{2}{3} \le C(\hat{\mathcal{N}}, M_2) \le \min_{(C, \mathcal{P}_C) \in \Lambda(\hat{\mathcal{N}}) \times \mathscr{P}_C} \frac{|C|}{\operatorname{rank}_{\mathcal{P}_C}(M_2)} \le \frac{|\{e_6, e_7\}|}{\operatorname{rank}_{\{\{e_6, e_7\}\}}(M_2)} = \frac{2}{3},$$
where the first inequality holds because there exists an admissible ( 2 ; 3 ) code for the model ( N ^ , M 2 ) as depicted in Figure 9; and the last equality follows from rank { { e 6 , e 7 } } ( M 2 ) = 3 by (22). This implies that
$$C(\hat{\mathcal{N}}, M_2) = \min_{(C, \mathcal{P}_C) \in \Lambda(\hat{\mathcal{N}}) \times \mathscr{P}_C} \frac{|C|}{\operatorname{rank}_{\mathcal{P}_C}(M_2)} = \frac{2}{3}. \tag{28}$$
By (27) and (28), we obtain the outer bound in Theorem 4 as follows:
$$\mathcal{R}(\hat{\mathcal{N}}, M_1, M_2) \subseteq \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 \le 1,\ R_2 \le \frac{2}{3} \right\}. \tag{29}$$
We can readily see that the outer bound in (25) is strictly tighter than the outer bound in (29). This thus shows that our outer bound in Theorem 1 is a strict enhancement of the outer bound in Theorem 4.
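The strict improvement can be seen on a single point. In the small illustrative check below, the witness point $(1, 2/3)$ is our choice for this note and does not appear in the text:

```python
from fractions import Fraction as F

R1, R2 = F(1), F(2, 3)
# (1, 2/3) lies in the induced outer bound (R1 <= 1, R2 <= 2/3) ...
assert R1 <= 1 and R2 <= F(2, 3)
# ... yet violates both inequalities of the tight region,
# so the induced outer bound is strictly weaker.
assert R1 + R2 > 1
assert R1 + 3 * R2 > 2
```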
To conclude the example, we prove the claim in Remark 1, i.e., that the inclusions (18) and (19) in Theorem 2 are in general not tight. First, it follows from (27) and (28) that
$$\left\{ (R_1, R_2) : R_1 \in \mathcal{R}(\hat{\mathcal{N}}, M_1) \ \text{and}\ R_2 \in \mathcal{R}(\hat{\mathcal{N}}, M_2) \right\} = \left\{ (R_1, R_2) \in \mathbb{R}_+^2 : R_1 \le 1,\ R_2 \le \frac{2}{3} \right\} \supsetneq \mathcal{R}(\hat{\mathcal{N}}, M_1, M_2),$$
where the strict inclusion follows from (25). This thus shows that the inclusion (18) in Theorem 2 is in general not tight.
By (27) and (28), we also have
$$\begin{aligned}
&\left\{ \lambda_1 \cdot (R_1, 0) + \lambda_2 \cdot (0, R_2) : R_j \in \mathcal{R}(\hat{\mathcal{N}}, M_j) \ \text{and}\ \lambda_j \ge 0 \ \text{for } j = 1, 2, \ \text{and}\ \lambda_1 + \lambda_2 \le 1 \right\} \\
&\quad = \left\{ (\lambda_1 R_1, \lambda_2 R_2) : 0 \le R_1 \le 1,\ 0 \le R_2 \le \tfrac{2}{3},\ \lambda_1 \ge 0,\ \lambda_2 \ge 0,\ \text{and}\ \lambda_1 + \lambda_2 \le 1 \right\} \\
&\quad = \left\{ (\bar{R}_1, \bar{R}_2) \in \mathbb{R}_+^2 : \bar{R}_1 + \tfrac{3}{2} \bar{R}_2 \le 1 \right\} \subsetneq \mathcal{R}(\hat{\mathcal{N}}, M_1, M_2),
\end{aligned}$$
which implies that the inclusion (19) in Theorem 2 is in general not tight.
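A one-line check of this non-tightness (illustrative only): the achievable pair $(1/2, 1/2)$ from the code of Figure 8 falls outside the time-sharing region of (19).

```python
from fractions import Fraction as F

R1, R2 = F(1, 2), F(1, 2)        # achievable by the code of Figure 8
# But R1 + (3/2) * R2 = 5/4 > 1, so (1/2, 1/2) lies outside the
# time-sharing region, which is bounded by R1 + (3/2) R2 <= 1.
assert R1 + F(3, 2) * R2 > 1
```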
We note that Example 2 plays a key role. On the one hand, it demonstrates that the outer bound in Theorem 1 is strictly tighter than the induced outer bound in Theorem 4. On the other hand, it illustrates that the network multi-function computation rate region cannot be derived directly from the network function computation rate regions by the time-sharing scheme. This underscores that studying network multi-function computation offers advantages over separately investigating the individual network function computations.

5. Conclusions

In this paper, we put forward the problem of multi-function computation over a directed acyclic network. We proved an outer bound on the rate region by developing the approach of the cut-set strong partition introduced by Guang et al. We also illustrated that the obtained outer bound is tight for a typical model of computing two vector-linear functions over the diamond network. Furthermore, we compared network multi-function computation with network function computation. We first established the relationship between the network multi-function computation rate region and the network function computation rate region. By this relationship, we showed that the best known outer bound on the network function computation rate region induces an outer bound on the network multi-function computation rate region; however, this induced outer bound is not as tight as ours. Finally, we showed that the best known outer bound on the rate region for computing an arbitrary vector-linear function over an arbitrary network is a straightforward consequence of our outer bound.
For the network multi-function computation model considered in this paper, several interesting problems remain open, such as whether the presented outer bound on the rate region is tight in general, and whether the method developed here can be generalized to obtain outer bounds on the rate region for more general classes of target functions.

Author Contributions

All authors contributed to the main conceptual ideas and mathematical techniques in this work. The first draft of the manuscript was written by X.S. All authors commented on previous versions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2023YFA1009604), the Natural Science Foundation of China (Grant Nos. 62171238, 62461160306), and the Fundamental Research Funds for the Central Universities (Grant No. 050-63253087).

Data Availability Statement

The original contributions presented in this study are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

For a subset of source nodes $I \subseteq S$, we let $\mathrm{Cl}_I \subseteq \mathbb{F}_q^{k \times I}$ be an arbitrary $I$-equivalence class, where we recall that $k = \sum_{p=1}^{t} k_p$. For any source matrix $x_I \in \mathbb{F}_q^{k \times I}$, we write
$$x_I = \begin{pmatrix} x_{I,1} \\ x_{I,2} \\ \vdots \\ x_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I}$$
with $x_{I,p} \in \mathbb{F}_q^{k_p \times I}$ for $1 \le p \le t$. Further, we take an arbitrarily fixed source matrix in $\mathrm{Cl}_I$:
$$a_I = \begin{pmatrix} a_{I,1} \\ a_{I,2} \\ \vdots \\ a_{I,t} \end{pmatrix} \in \mathrm{Cl}_I.$$
Then, by Definition 2, the $I$-equivalence class $\mathrm{Cl}_I$ can be written as
$$\mathrm{Cl}_I = \left\{ x_I \in \mathbb{F}_q^{k \times I} : x_{I,p} \cdot M_p[I] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \right\}.$$
We thus obtain that
$$\begin{aligned}
|\mathrm{Cl}_I| &= \#\left\{ x_I \in \mathbb{F}_q^{k \times I} : x_{I,p} \cdot M_p[I] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \right\} \\
&= \prod_{p=1}^{t} \#\left\{ x_{I,p} \in \mathbb{F}_q^{k_p \times I} : x_{I,p} \cdot M_p[I] = a_{I,p} \cdot M_p[I] \right\} \\
&= \prod_{p=1}^{t} \exp\left\{ k_p \cdot \big( |I| - \operatorname{Rank}(M_p[I]) \big) \right\} \\
&= \exp\left\{ \sum_{p=1}^{t} k_p \cdot \big( |I| - \operatorname{Rank}(M_p[I]) \big) \right\}.
\end{aligned}$$
The lemma is proved.
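The counting in the last display can be sanity-checked by brute force in a toy case. The sketch below is an illustration with hypothetical small parameters of our choosing ($q = 2$, $t = 2$, $k_1 = k_2 = 1$, $|I| = 2$): it enumerates an $I$-equivalence class directly and compares its size with the formula of Lemma 1.

```python
import itertools
import numpy as np

q = 2
# Toy target-function matrices restricted to I (|I| = 2), with k1 = k2 = 1.
M1_I = np.array([[1], [1]])          # Rank(M1[I]) = 1 over F_2
M2_I = np.array([[1, 0], [0, 1]])    # Rank(M2[I]) = 2 over F_2
a1 = np.array([1, 0])                # fixed row block a_{I,1}
a2 = np.array([0, 1])                # fixed row block a_{I,2}

def block_count(M, a):
    # number of rows x in F_2^{1 x |I|} with x * M = a * M (mod q)
    target = (a @ M) % q
    return sum(np.array_equal((np.array(x) @ M) % q, target)
               for x in itertools.product(range(q), repeat=len(a)))

# |Cl_I| factors over the t row blocks, exactly as in the second equality.
size = block_count(M1_I, a1) * block_count(M2_I, a2)
# Formula: q^{k1*(|I| - Rank(M1[I])) + k2*(|I| - Rank(M2[I]))} = 2^{1 + 0} = 2
assert size == q ** ((2 - 1) + (2 - 2)) == 2
```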

Appendix B. Proof of Lemma 2

Let $\{ g_e : e \in E \}$ be the set of all global encoding functions of a given admissible $(k_p : 1 \le p \le t;\ n)$ code C for the model $(\mathcal{N}, M_p : 1 \le p \le t)$. We further let $C \in \Lambda(\mathcal{N})$ be a cut set. To simplify the notation, we let $I = I_C$ and $J = J_C$. Consider any two source matrices that are not $I$-equivalent:
$$x_I = \begin{pmatrix} x_{I,1} \\ x_{I,2} \\ \vdots \\ x_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I} \quad \text{and} \quad x'_I = \begin{pmatrix} x'_{I,1} \\ x'_{I,2} \\ \vdots \\ x'_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I},$$
where $x_{I,p}, x'_{I,p} \in \mathbb{F}_q^{k_p \times I}$ for $1 \le p \le t$ are the row blocks of $x_I$ and $x'_I$, respectively. By Definition 2, we assume without loss of generality that
$$x_{I,1} \cdot M_1[I] \ne x'_{I,1} \cdot M_1[I]. \tag{A1}$$
In the following, we take
$$a_J = \begin{pmatrix} a_{J,1} \\ a_{J,2} \\ \vdots \\ a_{J,t} \end{pmatrix} \in \mathbb{F}_q^{k \times J} \quad \text{and} \quad d = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_t \end{pmatrix} \in \mathbb{F}_q^{k \times (S \setminus (I \cup J))}$$
such that all $a_{J,p} \in \mathbb{F}_q^{k_p \times J}$ and $d_p \in \mathbb{F}_q^{k_p \times (S \setminus (I \cup J))}$ for $1 \le p \le t$ are arbitrary. Then, the inequality (A1) implies that
$$x_{I,1} \cdot M_1[I] + a_{J,1} \cdot M_1[J] + d_1 \cdot M_1[S \setminus (I \cup J)] \ne x'_{I,1} \cdot M_1[I] + a_{J,1} \cdot M_1[J] + d_1 \cdot M_1[S \setminus (I \cup J)]. \tag{A2}$$
Let $D = \bigcup_{\sigma \in S \setminus I} \mathrm{Out}(\sigma)$, an edge subset of $E$. Then $\hat{C} \triangleq C \cup D$ is a global cut set, i.e., $I_{\hat{C}} = S$. Since $g_{\mathrm{In}(\rho)}$ is a function of $g_{\hat{C}}$ and the code C can compute all target functions with zero error, by (A2), we obtain that
$$g_{\hat{C}}(x_I, a_J, d) \ne g_{\hat{C}}(x'_I, a_J, d). \tag{A3}$$
We further note that $\hat{C} = C \cup D$, $K_C = I \cup J$ and $K_D = S \setminus I$. Then, we can rewrite (A3) as follows:
$$\big( g_C(x_I, a_J),\ g_D(a_J, d) \big) = g_{\hat{C}}(x_I, a_J, d) \ne g_{\hat{C}}(x'_I, a_J, d) = \big( g_C(x'_I, a_J),\ g_D(a_J, d) \big).$$
This immediately implies that $g_C(x_I, a_J) \ne g_C(x'_I, a_J)$. The lemma is proved.

Appendix C. Proof of Lemma 3

Consider a subset of source nodes $I \subseteq S$. Let $I_\ell$, $1 \le \ell \le m$, be $m$ disjoint subsets of $I$ and let $L = I \setminus \big( \bigcup_{\ell=1}^{m} I_\ell \big)$. We let $\mathrm{Cl}_I \subseteq \mathbb{F}_q^{k \times I}$ be an arbitrary $I$-equivalence class and $a_L \in \mathbb{F}_q^{k \times L}$ be an arbitrary source matrix such that there exists a source matrix $y_I = (y_{I_1}, y_{I_2}, \ldots, y_{I_m}, y_L) \in \mathrm{Cl}_I$ with $y_L = a_L$, where we recall that $k = \sum_{p=1}^{t} k_p$. We further let $\mathrm{Cl}_I(a_L)$ be the set formed by all source matrices $x_I = (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, x_L) \in \mathrm{Cl}_I$ with $x_L = a_L$, namely that
$$\mathrm{Cl}_I(a_L) \triangleq \left\{ x_I = (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, x_L) \in \mathrm{Cl}_I : x_L = a_L \right\}. \tag{A4}$$
Before discussing further, we need the lemma below, whose proof is deferred to the end of the appendix.
Lemma A1.
Consider an $I$-equivalence class $\mathrm{Cl}_I$. Then, $P$ (cf. (12)) is a partition of $\mathrm{Cl}_I(a_L)$, namely that all the sets $\big( \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big)$, with $\mathrm{cl}_{I_\ell}$ being an $I_\ell$-equivalence class for $1 \le \ell \le m$, that satisfy $\big( \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big) \subseteq \mathrm{Cl}_I$ constitute a partition of $\mathrm{Cl}_I(a_L)$.
In the following, we focus on the size of the set $\mathrm{Cl}_I(a_L)$. For any source matrices $x_I \in \mathbb{F}_q^{k \times I}$, $x_{I_\ell} \in \mathbb{F}_q^{k \times I_\ell}$ (where $1 \le \ell \le m$), and $x_L \in \mathbb{F}_q^{k \times L}$, we write
$$x_I = \begin{pmatrix} x_{I,1} \\ \vdots \\ x_{I,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I}, \quad x_{I_\ell} = \begin{pmatrix} x_{I_\ell,1} \\ \vdots \\ x_{I_\ell,t} \end{pmatrix} \in \mathbb{F}_q^{k \times I_\ell}, \quad \text{and} \quad x_L = \begin{pmatrix} x_{L,1} \\ \vdots \\ x_{L,t} \end{pmatrix} \in \mathbb{F}_q^{k \times L}$$
with $x_{I,p} \in \mathbb{F}_q^{k_p \times I}$, $x_{I_\ell,p} \in \mathbb{F}_q^{k_p \times I_\ell}$, $x_{L,p} \in \mathbb{F}_q^{k_p \times L}$ for $1 \le p \le t$. Further, we take an arbitrarily fixed source matrix in $\mathrm{Cl}_I$:
$$a_I = \begin{pmatrix} a_{I,1} \\ a_{I,2} \\ \vdots \\ a_{I,t} \end{pmatrix} \in \mathrm{Cl}_I.$$
Then by Definition 2, the $I$-equivalence class $\mathrm{Cl}_I$ can be written as
$$\mathrm{Cl}_I = \left\{ x_I \in \mathbb{F}_q^{k \times I} : x_{I,p} \cdot M_p[I] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \right\}. \tag{A5}$$
Combining (A4) and (A5), we further have
$$\begin{aligned}
\mathrm{Cl}_I(a_L) &= \left\{ x_I = (x_{I_1}, \ldots, x_{I_m}, x_L) \in \mathbb{F}_q^{k \times I} : x_L = a_L \ \text{and}\ x_{I,p} \cdot M_p[I] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \right\} \\
&= \Big\{ x_I = (x_{I_1}, \ldots, x_{I_m}, x_L) \in \mathbb{F}_q^{k \times I} : x_L = a_L \ \text{and}\ (x_{I_1,p}, \ldots, x_{I_m,p}) \cdot M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big] + x_{L,p} \cdot M_p[L] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \Big\} \\
&= \Big\{ (x_{I_1}, \ldots, x_{I_m}) \in \mathbb{F}_q^{k \times (\bigcup_{\ell=1}^{m} I_\ell)} : (x_{I_1,p}, \ldots, x_{I_m,p}) \cdot M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big] + a_{L,p} \cdot M_p[L] = a_{I,p} \cdot M_p[I] \ \text{for all } 1 \le p \le t \Big\}.
\end{aligned} \tag{A6}$$
By (A6), we can compute the size of the set Cl I ( a L ) as follows:
$$\begin{aligned}
|\mathrm{Cl}_I(a_L)| &= \prod_{p=1}^{t} \#\Big\{ (x_{I_1,p}, \ldots, x_{I_m,p}) \in \mathbb{F}_q^{k_p \times (\bigcup_{\ell=1}^{m} I_\ell)} : (x_{I_1,p}, \ldots, x_{I_m,p}) \cdot M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big] + a_{L,p} \cdot M_p[L] = a_{I,p} \cdot M_p[I] \Big\} \\
&= \prod_{p=1}^{t} \exp\Big\{ k_p \cdot \Big( \sum_{\ell=1}^{m} |I_\ell| - \operatorname{Rank}\big(M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big]\big) \Big) \Big\} \\
&= \exp\Big\{ \sum_{p=1}^{t} k_p \cdot \Big( \sum_{\ell=1}^{m} |I_\ell| - \operatorname{Rank}\big(M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big]\big) \Big) \Big\}.
\end{aligned} \tag{A7}$$
In addition, by Lemma A1, $P$ is a partition of $\mathrm{Cl}_I(a_L)$. We also note that by Lemma 1, all $I_\ell$-equivalence classes ($1 \le \ell \le m$) have the same size
$$\exp\Big\{ \sum_{p=1}^{t} k_p \cdot \big( |I_\ell| - \operatorname{Rank}(M_p[I_\ell]) \big) \Big\}.$$
Then, for any set $\mathrm{cl} = \big( \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big) \in P$ with $\mathrm{cl}_{I_\ell}$ being an $I_\ell$-equivalence class for $1 \le \ell \le m$, its size is given by
$$|\mathrm{cl}| = \prod_{\ell=1}^{m} \exp\Big\{ \sum_{p=1}^{t} k_p \cdot \big( |I_\ell| - \operatorname{Rank}(M_p[I_\ell]) \big) \Big\} = \exp\Big\{ \sum_{p=1}^{t} k_p \cdot \sum_{\ell=1}^{m} \big( |I_\ell| - \operatorname{Rank}(M_p[I_\ell]) \big) \Big\}. \tag{A8}$$
By (A7) and (A8), the size of P is calculated to be
$$|P| = \frac{\exp\Big\{ \sum_{p=1}^{t} k_p \cdot \Big( \sum_{\ell=1}^{m} |I_\ell| - \operatorname{Rank}\big(M_p\big[\bigcup_{\ell=1}^{m} I_\ell\big]\big) \Big) \Big\}}{\exp\Big\{ \sum_{p=1}^{t} k_p \cdot \sum_{\ell=1}^{m} \big( |I_\ell| - \operatorname{Rank}(M_p[I_\ell]) \big) \Big\}} = \exp\Big\{ \sum_{p=1}^{t} k_p \cdot \Big( \sum_{\ell=1}^{m} \operatorname{Rank}(M_p[I_\ell]) - \operatorname{Rank}\big(M_p\big[\textstyle\bigcup_{\ell=1}^{m} I_\ell\big]\big) \Big) \Big\}.$$
To complete the proof of Lemma 3, it remains to prove Lemma A1.
Proof of Lemma A1.
Consider the $I$-equivalence class $\mathrm{Cl}_I$ and an arbitrary source matrix $a_L \in \mathbb{F}_q^{k \times L}$ such that there exists a source matrix $y_I = (y_{I_1}, y_{I_2}, \ldots, y_{I_m}, y_L) \in \mathrm{Cl}_I$ with $y_L = a_L$. First, we can readily see that all the sets in $P$ (cf. (12)) are disjoint and $\bigcup_{\mathrm{cl} \in P} \mathrm{cl} \subseteq \mathrm{Cl}_I(a_L)$. To complete the proof, it suffices to prove that $\bigcup_{\mathrm{cl} \in P} \mathrm{cl} \supseteq \mathrm{Cl}_I(a_L)$. We consider an arbitrary source matrix $x_I = (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, a_L)$ in $\mathrm{Cl}_I(a_L)$. Clearly, $x_I \in \mathrm{Cl}_I$. Next, we will prove $x_I \in \mathrm{cl}$ for some $\mathrm{cl} \in P$. First, we recall that for each $1 \le \ell \le m$, $\mathbb{F}_q^{k \times I_\ell}$ can be partitioned into $I_\ell$-equivalence classes. Hence, $x_{I_\ell}$ is in some $I_\ell$-equivalence class, say $\mathrm{cl}_{I_\ell}$, and then
$$x_I = (x_{I_1}, x_{I_2}, \ldots, x_{I_m}, a_L) \in \mathrm{cl} \triangleq \big( \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big). \tag{A9}$$
We can verify that all source matrices in $\mathrm{cl}$ are $I$-equivalent. As such, there always exists an $I$-equivalence class $\mathrm{Cl}'_I$ such that
$$\mathrm{cl} = \big( \mathrm{cl}_{I_1}, \mathrm{cl}_{I_2}, \ldots, \mathrm{cl}_{I_m}, a_L \big) \subseteq \mathrm{Cl}'_I, \tag{A10}$$
and further $\mathrm{cl} \subseteq \mathrm{Cl}'_I(a_L)$. Combining (A9) and (A10), we have $x_I \in \mathrm{Cl}'_I(a_L) \subseteq \mathrm{Cl}'_I$. Together with $x_I \in \mathrm{Cl}_I(a_L) \subseteq \mathrm{Cl}_I$, we immediately obtain that $\mathrm{Cl}'_I = \mathrm{Cl}_I$ and also $\mathrm{Cl}'_I(a_L) = \mathrm{Cl}_I(a_L)$. This thus implies that $\mathrm{cl} \subseteq \mathrm{Cl}_I(a_L)$, i.e., $\mathrm{cl} \in P$, and hence $x_I \in \bigcup_{\mathrm{cl} \in P} \mathrm{cl}$. As such, we have proved $\mathrm{Cl}_I(a_L) \subseteq \bigcup_{\mathrm{cl} \in P} \mathrm{cl}$ and the lemma is proved. □
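The partition count of Lemma 3 can likewise be spot-checked by enumeration. The minimal sketch below uses hypothetical parameters of our choosing ($q = 2$, a single target function $t = 1$ with $k_1 = 1$, $I$ split into two singletons $I_1, I_2$, and $L$ empty):

```python
import itertools
import numpy as np

q = 2
M = np.array([[1], [1]])   # M[I] for the single target function; Rank(M[I]) = 1
a = np.array([0, 0])       # representative of the I-equivalence class Cl_I

# Cl_I: all rows x in F_2^{1x2} with x * M = a * M (mod 2).
Cl = {x for x in itertools.product(range(q), repeat=2)
      if ((np.array(x) @ M) % q == (a @ M) % q).all()}

# Since Rank(M[I_l]) = 1, every I_l-equivalence class is a singleton, so the
# sets of P are exactly the products {x1} x {x2} contained in Cl_I.
P = [(x1, x2) for x1 in range(q) for x2 in range(q) if (x1, x2) in Cl]

# Lemma 3: |P| = q^{k1 * (Rank(M[I_1]) + Rank(M[I_2]) - Rank(M[I]))} = 2^{1+1-1}
assert len(P) == q ** (1 + 1 - 1) == 2
```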

Appendix D. Proof of Theorem 2

We first note that an admissible code for the model ( N , M j : 1 j t ) can also be regarded as an admissible code for the network function computation model ( N , M j ) , where 1 j t . This immediately shows that
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \subseteq \left\{ (R_1, R_2, \ldots, R_t) : R_j \in \mathcal{R}(\mathcal{N}, M_j) \ \text{for } 1 \le j \le t \right\}.$$
In the following, we prove the inclusion (19). For each index $1 \le j \le t$, we let $R_j \in \mathcal{R}(\mathcal{N}, M_j)$, namely that the target function $f_j$ can be computed with zero error $R_j$ times on average for one use of the network $\mathcal{N}$. This immediately implies that $R_j e_j \in \mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t)$. We further let $\lambda_j \ge 0$ for $1 \le j \le t$ with $\sum_{j=1}^{t} \lambda_j \le 1$. By the time-sharing scheme, we have
$$\sum_{j=1}^{t} \lambda_j \cdot R_j e_j \in \mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t), \tag{A11}$$
namely that all the target functions $f_j$ ($1 \le j \le t$) can simultaneously be computed with zero error $\lambda_j R_j$ times on average for one use of the network $\mathcal{N}$. We note that (A11) holds for any $R_j \in \mathcal{R}(\mathcal{N}, M_j)$ and $\lambda_j \ge 0$ for $1 \le j \le t$ with $\sum_{j=1}^{t} \lambda_j \le 1$. This immediately implies that
$$\mathcal{R}(\mathcal{N}, M_j : 1 \le j \le t) \supseteq \left\{ \sum_{j=1}^{t} \lambda_j \cdot R_j e_j : R_j \in \mathcal{R}(\mathcal{N}, M_j) \ \text{and}\ \lambda_j \ge 0 \ \text{for } 1 \le j \le t, \ \text{and}\ \sum_{j=1}^{t} \lambda_j \le 1 \right\}.$$
The theorem is proved.

References

1. Appuswamy, R.; Franceschetti, M.; Karamchandani, N.; Zeger, K. Network coding for computing: Cut-set bounds. IEEE Trans. Inf. Theory 2011, 57, 1015–1030.
2. Huang, C.; Tan, Z.; Yang, S.; Guang, X. Comments on cut-set bounds on network function computation. IEEE Trans. Inf. Theory 2018, 64, 6454–6459.
3. Appuswamy, R.; Franceschetti, M. Computing linear functions by linear coding over networks. IEEE Trans. Inf. Theory 2014, 60, 422–431.
4. Guang, X.; Yeung, R.W.; Yang, S.; Li, C. Improved upper bound on the network function computing capacity. IEEE Trans. Inf. Theory 2019, 65, 3790–3811.
5. Appuswamy, R.; Franceschetti, M.; Karamchandani, N.; Zeger, K. Linear codes, target function classes, and network computing capacity. IEEE Trans. Inf. Theory 2013, 59, 5741–5753.
6. Ramamoorthy, A.; Langberg, M. Communicating the sum of sources over a network. IEEE J. Sel. Areas Commun. 2013, 31, 655–665.
7. Rai, B.; Dey, B. On network coding for sum-networks. IEEE Trans. Inf. Theory 2012, 58, 50–63.
8. Rai, B.; Das, N. Sum-networks: Min-cut = 2 does not guarantee solvability. IEEE Commun. Lett. 2013, 17, 2144–2147.
9. Tripathy, A.; Ramamoorthy, A. Sum-networks from incidence structures: Construction and capacity analysis. IEEE Trans. Inf. Theory 2018, 64, 3461–3480.
10. Kowshik, H.; Kumar, P. Optimal function computation in directed and undirected graphs. IEEE Trans. Inf. Theory 2012, 58, 3407–3418.
11. Giridhar, A.; Kumar, P. Computing and communicating functions over sensor networks. IEEE J. Sel. Areas Commun. 2005, 23, 755–764.
12. Li, D.; Xu, Y. Computing vector-linear functions on diamond network. IEEE Commun. Lett. 2022, 26, 1519–1523.
13. Zhang, R.; Guang, X.; Yang, S.; Niu, X.; Bai, B. Computation of binary arithmetic sum over an asymmetric diamond network. IEEE J. Sel. Areas Inf. Theory 2024, 5, 585–596.
14. Guang, X.; Zhang, R. Zero-error distributed compression of binary arithmetic sum. IEEE Trans. Inf. Theory 2024, 70, 1111–1120.
15. Tripathy, A.; Ramamoorthy, A. On computation rates for arithmetic sum. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016.
16. Tripathy, A.; Ramamoorthy, A. Zero-error function computation on a directed acyclic network. In Proceedings of the 2018 IEEE Information Theory Workshop (ITW), Guangzhou, China, 25–29 November 2018.
17. Feizi, S.; Médard, M. Multi-functional compression with side information. In Proceedings of the IEEE Global Communications Conference, Honolulu, HI, USA, 30 November–4 December 2009.
18. Kannan, S.; Viswanath, P. Multi-session function computation and multicasting in undirected graphs. IEEE J. Sel. Areas Commun. 2013, 31, 702–713.
19. Günlü, O.; Bloch, M.; Schaefer, R.F. Private remote sources for secure multi-function computation. IEEE Trans. Inf. Theory 2022, 68, 6826–6841.
20. Kim, W.; Kruglik, S.; Kiah, H.M. Coded computation of multiple functions. In Proceedings of the IEEE Information Theory Workshop, Sundsvall, Sweden, 23–28 April 2023.
21. Kim, W.; Kruglik, S.; Kiah, H.M. Verifiable coded computation of multiple functions. IEEE Trans. Inf. Forensics Secur. 2024, 19, 8009–8022.
22. Malak, D.; Deylam Salehi, M.; Serbetci, B.; Elia, P. Multi-server multi-function distributed computation. Entropy 2024, 26, 448.
23. Huang, L.; Feng, X.; Zhang, L.; Qian, L.; Wu, Y. Multi-server multi-user multi-task computation offloading for mobile edge computing networks. Sensors 2019, 19, 1446.
24. Chi, K.; Shen, J.; Li, Y.; Wang, S. Multi-function radar signal sorting based on complex network. IEEE Signal Process. Lett. 2021, 28, 91–95.
25. Reeder, J.; Georgiopoulos, M. Generative neural networks for multi-task life-long learning. Comput. J. 2014, 57, 427–450.
26. Iwai, H.; Kobayashi, I. A study on developmental artificial neural networks that integrate multiple functions using variational autoencoder. In Proceedings of the International Conference on Soft Computing and Machine Intelligence, Mexico City, Mexico, 25–26 November 2023.
27. Malak, D.; Salehi, M.R.D.; Serbetci, B.; Elia, P. Multi-functional distributed computing. In Proceedings of the 60th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 24–27 September 2024.
Figure 1. The network two-function computation model ( N ˜ ,   M 1 ,   M 2 ) .
Figure 2. The rate region R ( N ˜ ,   M 1 ,   M 2 ) .
Figure 3. An admissible ( k 1 = 1 ,   k 2 = 0 ;   n = 1 ) code for ( N ˜ ,   M 1 ,   M 2 ) .
Figure 4. An admissible ( k 1 = 0 ,   k 2 = 2 ;   n = 3 ) code for ( N ˜ ,   M 1 ,   M 2 ) .
Figure 5. The network two-function computation model ( N ^ ,   M 1 ,   M 2 ) .
Figure 6. The rate region R ( N ^ ,   M 1 ,   M 2 ) .
Figure 7. An admissible ( k 1 = 1 ,   k 2 = 0 ;   n = 1 ) code for ( N ^ ,   M 1 ,   M 2 ) .
Figure 8. An admissible ( k 1 = 1 ,   k 2 = 1 ;   n = 2 ) code for ( N ^ ,   M 1 ,   M 2 ) .
Figure 9. An admissible ( k 1 = 0 ,   k 2 = 2 ;   n = 3 ) code for ( N ^ ,   M 1 ,   M 2 ) .