Article

Adaptive Privacy-Preserving Coded Computing with Hierarchical Task Partitioning

by Qicheng Zeng, Zhaojun Nan * and Sheng Zhou
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Entropy 2024, 26(10), 881; https://doi.org/10.3390/e26100881
Submission received: 25 August 2024 / Revised: 11 October 2024 / Accepted: 15 October 2024 / Published: 21 October 2024
(This article belongs to the Special Issue Intelligent Information Processing and Coding for B5G Communications)

Abstract: Coded computing is recognized as a promising solution to address the privacy leakage problem and the straggling effect in distributed computing. This technique leverages coding theory to recover computation tasks using results from a subset of workers. In this paper, we propose the adaptive privacy-preserving coded computing (APCC) strategy, designed to be applicable to various types of computation tasks, including polynomial and non-polynomial functions, and to adaptively provide accurate or approximated results. We prove the optimality of APCC in terms of encoding rate, defined as the ratio between the computation loads of tasks before and after encoding, based on the optimal recovery threshold of Lagrange Coded Computing. We demonstrate that APCC guarantees information-theoretical data privacy preservation. Mitigation of the straggling effect in APCC is achieved through hierarchical task partitioning and task cancellation, which further reduce computation delays by enabling straggling workers to return partial results of assigned tasks, compared to conventional coded computing strategies. The hierarchical task partitioning problems are formulated as mixed-integer nonlinear programming (MINLP) problems with the objective of minimizing task completion delay. We propose a low-complexity maximum value descent (MVD) algorithm to optimally solve these problems. The simulation results show that APCC can reduce the task completion delay by 20.3% to 47.5% when compared to other state-of-the-art benchmarks.

1. Introduction

Under the vision of “Internet of Everything”, intelligence-enabled applications are essential, leading to a variety of crucial computation tasks, such as the training and inference of complex machine learning models based on extensive datasets [1,2,3]. However, executing these computation-intensive tasks on a single device with limited computation capability and power resources presents significant challenges. To this end, distributed computing emerges as a practical solution, where a central node, referred to as the master, manages task division, assignment, and result collection, while multiple distributed computing nodes, called workers, process the assigned partial computation tasks in parallel [4].
Nevertheless, while distributed computing accelerates the computation process by employing multiple workers for parallel processing, the total delay is dominated by the slowest worker, as the master must wait for all workers to complete their assigned tasks [5]. As demonstrated in the experimental results of [6], the delay of the slowest worker can exceed five times that of the others, which significantly prolongs the total delay. Moreover, due to the randomness of delays, identifying slow workers in advance is challenging. To tackle this so-called straggling effect, coded computing has emerged as a promising solution [6,7,8,9,10,11,12]. As Figure 1 shows, this approach combines coding theory with distributed computing and reduces delays by introducing structured computational redundancies. Through the incorporation of redundancy during the encoding process, computation tasks can be completed using results from a subset of workers, thereby reducing total delays.
In coded computing, workers are tasked with processing input data and returning results, but the involved tasks may contain sensitive information, such as patient medical data, customer personal information, and proprietary company data [13,14]. Consequently, it is essential to maintain data privacy against colluding workers, i.e., workers that return correct results but may communicate with one another to share the input data received from the master and thereby infer its private information. Recent research has aimed to develop coded computing strategies that address not only the straggling effect but also privacy concerns, for example by combining additional random data insertion with prevalent polynomial coded computing methods [15,16,17,18,19,20,21,22,23,24,25]. This approach enhances the robustness of the system against straggling workers while also improving privacy and security by obscuring the original data.
In the majority of existing studies, matrix multiplication is treated as the primary application of coded computing, and its performance has been extensively validated. However, real-world computation tasks are often more diverse than mere matrix multiplication. For instance, in a linear regression task, the iterative process of solving for the weights involves multiplying the previous weights by a quadratic function of the input data (namely $D^T D$). This implies that coded computing schemes for matrix multiplication must be executed twice in each step, and the computation becomes considerably more complex for other tasks, such as the inference of neural networks.
In terms of extending the applicability of coded computing, one state-of-the-art approach is Lagrange Coded Computing (LCC) [15]. LCC employs Lagrange polynomial interpolation to transform input data before and after encoding into interpolation points on the computation function. This allows the recovery of desired results through the reconstruction of the interpolation function. LCC is compatible with various computation tasks, ranging from matrix multiplication to polynomial functions, and offers an optimal recovery threshold concerning the degree of polynomial functions. In [21,25,26], the problem of using matrix data as input and polynomial functions as computation tasks is also explored.
However, LCC still suffers from several shortcomings [27]. First, its recovery threshold is proportional to the degree of the polynomial function, which can be prohibitively large for complex tasks and thereby make successful recovery difficult. Second, Lagrange polynomial interpolation can be ill-conditioned, making it challenging to ensure numerical stability unless the computation is embedded into a finite field. In [27], Berrut’s Approximated Coded Computing (BACC) is proposed to address these shortcomings and further expand the scope of computation tasks to arbitrary functions. However, BACC only yields approximated computing results and does not guarantee privacy preservation. Other related works [28,29,30,31,32] also focus on approximated results while attempting to maintain the numerical stability of coded computing. To the best of our knowledge, there is still a lack of a versatile coded computing strategy suitable for various computation tasks, one capable of achieving privacy preservation while providing accurate or approximated results based on specific demands.
On the other hand, opportunities exist to enhance mitigation of the straggling effect and further reduce delays. This is because prior studies commonly discard results from straggling workers, leading to the inefficient utilization of computational resources. In [11], a hierarchical task partitioning structure is proposed, where divided tasks are further partitioned into multiple layers, and workers process their assigned tasks in the order of layer indices. Consequently, straggling workers can return results from lower layers instead of none, while fast workers can reach higher layers and return more results. Similar performance improvements are achieved through multi-message communications (MMC) [33,34,35], where workers are permitted to return partial results of assigned tasks in each time slot, enabling straggling workers to contribute to the system.
Essentially, three ways exist to alleviate the straggling effect, given the total number of workers. First, minimize the recovery threshold of coded computing schemes, as a smaller recovery threshold implies fewer workers to wait for [9,10,15,16,36,37,38,39]. As a result, the master can recover desired computing results even with more straggling workers. Second, the computation loads for each worker should be carefully designed to allow them to complete varying amounts of computation based on their capabilities, which is formulated as optimization problems in [4,40,41,42,43]. This approach narrows the gap between the delays of fast and slow workers. Third, workers should be capable of returning partial results of assigned tasks, rather than the scenario where fast workers complete all assigned tasks, leaving straggling workers to contribute virtually nothing. The third point aligns with the idea of a hierarchical task partitioning structure and MMC.
In this work, we consider a distributed system with one master and multiple workers, and propose an adaptive privacy-preserving coded computing (APCC) strategy. The strategy primarily focuses on the applicability for diverse computation tasks, the privacy preservation of input data, and the mitigation of the straggling effect. Moreover, based on the hierarchical task partitioning structure in APCC, we propose an operation called cancellation to prevent slower workers from processing completed tasks, reducing resource waste and improving delay performance. Specifically, the main contributions are summarized as follows:
  • We propose the APCC framework, which effectively mitigates the straggling effect and fully preserves data privacy. APCC is applicable to various computation tasks, including polynomial and non-polynomial functions, and can adaptively provide accurate results or approximated results with controllable error.
  • We rigorously prove the information-theoretical privacy preservation of the input data in APCC, as well as the optimality of APCC in terms of the encoding rate based on the optimal recovery threshold of LCC. The encoding rate is defined as the ratio between the computation loads of tasks before and after encoding, serving as an indicator of the performance of coded computing schemes in mitigating the straggling effect.
  • Considering the randomness of task completion delay, we formulate hierarchical task partitioning problems in APCC, with or without cancellation, as mixed-integer nonlinear programming (MINLP) problems with the objective of minimizing task completion delay. We propose a maximum value descent (MVD) algorithm to optimally solve the problems with linear complexity.
  • Extensive simulations demonstrate improvements in delay performance offered by APCC when compared to other state-of-the-art coded computing benchmarks. Notably, APCC achieves a reduction in task completion delay ranging from 20.3% to 47.5% compared to LCC [15] and BACC [27]. Simulations also explore the trade-off between task completion delay and the level of privacy preservation.
The remainder of the paper is structured as follows. Section 2 presents the system model. In Section 3, we propose the adaptive privacy-preserving coded computing strategy, namely APCC. In Section 4, the performance of APCC is further analyzed in terms of encoding rate, privacy preservation, approximation error, numerical stability, communication costs, and encoding and decoding complexity. In Section 5, we propose the MVD algorithm to solve the hierarchical task partitioning optimization problem with or without cancellation. The simulation results are provided in Section 6, and conclusions are drawn in Section 7.

2. System Model

As shown in Figure 2, we consider the distributed computing system consisting of one master and $N$ workers. The goal is to complete a computation task on the master with the help of the $N$ workers. The task is represented by a function $f$, operating over an equally pre-divided input dataset $\mathcal{D} = \{ D_k \in \mathbb{R}^{p \times q} \mid k \in [0:K-1] \}$. The master aims to evaluate the results $\{ f(D_k) \}_{k=0}^{K-1}$, whose dimensions are decided by the task function $f$. To achieve this, we employ the proposed APCC strategy. Note that we consider the computation of $\{ f(D_k) \}_{k=0}^{K-1}$ as the entire task and the computation of $f(D_k)$ as a subtask.
In APCC, the $K$ equally pre-divided input data $\{ D_k \}_{k=0}^{K-1}$ are not directly encoded as in conventional coded computing strategies. Instead, they are first partitioned into $r$ sets. Subsequently, the input data in each set are encoded into $N$ parts, which are then assigned to $N$ workers for parallel computation. This hierarchical task partitioning structure enables workers to return partial results of assigned subtasks, further mitigating the straggling effect and reducing delays. After the task assignment, the master leverages the results obtained from a subset of workers in each set and employs interpolation methods to reconstruct the original function $f$, thereby achieving the recovery of $\{ f(D_k) \}_{k=0}^{K-1}$. A comprehensive description of the APCC strategy is presented in Section 3.
Taking into account the unreliable channels and uncertain computation capabilities of workers, some of them may fail to return results to the master in time. These straggling workers are referred to as stragglers. Additionally, we assume that workers are honest but curious. This means they will send back correct computation results, but there could be up to $L$ ($L < N$) colluding workers who can communicate with each other and attempt to learn information about the input data $\{ D_k \}_{k=0}^{K-1}$. These workers are called colluders.

3. Adaptive Privacy-Preserving Coded Computing

In this section, we propose the adaptive privacy-preserving coded computing (APCC) strategy, which is suitable for diverse computation tasks, including polynomial and non-polynomial functions, and can adaptively provide accurate or approximated results. We begin with a general description of how APCC works and then provide an illustrative example for the accurate-results case, without loss of generality. Lastly, we introduce the hierarchical task partitioning structure of APCC and the cancellation of completed subtasks based on this hierarchical structure.

3.1. General Description

In this subsection, we provide a general description of the proposed APCC strategy. As shown in Figure 2, the inputs of the function $f$ are first equally pre-divided into $K$ parts $\{ D_k \}_{k=0}^{K-1}$, and $K$ corresponding subtasks $\{ f(D_k) \}_{k=0}^{K-1}$ are formed. The APCC strategy then follows three steps: (1) Encoding; (2) Assignment; (3) Decoding, and obtains accurate or approximated computing results of $\{ f(D_k) \}_{k=0}^{K-1}$.

3.1.1. Encoding

In the initialization step, the $K$ subtasks are further partitioned into $r$ sets, with set $i$ ($i \in [0:r-1]$) containing $K_i$ subtasks $\{ f(D_{i,j}) \}_{j=0}^{K_i-1}$, where $D_{i,j} \in \{ D_k \mid k \in [0:K-1] \}$. Consequently, the desired results of the master are

$$\{ f(D_k) \}_{k=0}^{K-1} = \left\{ \{ f(D_{i,j}) \}_{j=0}^{K_i-1} \;\middle|\; i \in [0:r-1] \right\}, \tag{1}$$

where the $K_i$ should satisfy $\sum_{i=0}^{r-1} K_i = K$. The specific values of $\{ K_i \}_{i=0}^{r-1}$ will be determined by the optimization problems formulated in Section 5. We refer to the partition into these sets as hierarchical task partitioning.
Inspired by Barycentric polynomial interpolation [27,44], the input data $\{ D_{i,j} \}_{j=0}^{K_i-1}$ for set $i$ are linearly encoded through the function $g_i(x)$ as:

$$g_i(x) = \sum_{j=0}^{K_i-1} \frac{ w_{i,j} \prod_{k=0, k \neq j}^{K_i+L-1} (x - \alpha_{i,k}) }{ \sum_{k=0}^{K_i+L-1} w_{i,k} \prod_{l=0, l \neq k}^{K_i+L-1} (x - \alpha_{i,l}) } D_{i,j} + \sum_{j=K_i}^{K_i+L-1} \frac{ w_{i,j} \prod_{k=0, k \neq j}^{K_i+L-1} (x - \alpha_{i,k}) }{ \sum_{k=0}^{K_i+L-1} w_{i,k} \prod_{l=0, l \neq k}^{K_i+L-1} (x - \alpha_{i,l}) } Z_{i,j}, \tag{2}$$

where $\{ Z_{i,j} \in \mathbb{V} \mid j \in [K_i : K_i+L-1] \}$ are $L$ random matrices added to preserve privacy, each element in $Z_{i,j}$ follows a uniform distribution, and $x \in \mathbb{R}$ is the encoding parameter. The $\{ \alpha_{i,j} \}_{j=0}^{K_i+L-1}$ are distinct values selected as Chebyshev points of the first kind:

$$\alpha_{i,j} = \cos\frac{(2j+1)\pi}{2(K_i+L)}, \quad j \in [0:K_i+L-1]. \tag{3}$$

$w_{i,j}$ is a constant related to $\alpha_{i,j}$ and calculated as:

$$w_{i,j} = \frac{1}{\prod_{k=0, k \neq j}^{K_i+L-1} (\alpha_{i,j} - \alpha_{i,k})}, \quad j \in [0:K_i+L-1]. \tag{4}$$
Note that the form of $g_i(x)$ is a Barycentric polynomial [27,44], which avoids overflows and underflows in floating-point arithmetic and requires lower computation complexity compared to its similar Lagrange polynomial form in LCC [15]. Furthermore, Equation (2) guarantees that

$$g_i(\alpha_{i,j}) = D_{i,j}, \quad j \in [0:K_i-1], \tag{5}$$

because the coefficient term before $D_{i,j}$ and $Z_{i,j}$ satisfies

$$\frac{ w_{i,j} \prod_{k=0, k \neq j}^{K_i+L-1} (x - \alpha_{i,k}) }{ \sum_{k=0}^{K_i+L-1} w_{i,k} \prod_{l=0, l \neq k}^{K_i+L-1} (x - \alpha_{i,l}) } = \begin{cases} 1, & \text{if } x = \alpha_{i,j}, \\ 0, & \text{if } x = \alpha_{i,k},\ k \neq j. \end{cases} \tag{6}$$

The encoded input data $\{ \tilde{D}_{i,n} \}_{n=0}^{N-1}$ are obtained as:

$$\tilde{D}_{i,n} = g_i(\beta_n), \quad n \in [0:N-1], \tag{7}$$

where $\{ \beta_n \}_{n=0}^{N-1}$ are selected as Chebyshev points of the second kind:

$$\beta_n = \cos\frac{n\pi}{N-1}, \quad n \in [0:N-1]. \tag{8}$$
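To make the encoding step concrete, the following Python sketch (our illustration, not the authors' released code) implements $g_i(x)$ for one set following Equations (2)-(8); the helper name `encode_set` and the uniform mask range are our assumptions.

```python
import numpy as np

def encode_set(data, L, N, rng=None):
    """Encode K_i input blocks of one set into N coded blocks, cf. Eqs. (2)-(8).

    data: list of K_i equally sized numpy arrays (the D_{i,j}).
    L:    number of tolerated colluders; L uniform random masks are appended.
    N:    number of workers, i.e., coded blocks (N >= 2).
    """
    rng = rng or np.random.default_rng(0)
    K_i, M = len(data), len(data) + L
    # Chebyshev points of the first kind (Eq. (3)): interpolation nodes.
    alpha = np.cos((2 * np.arange(M) + 1) * np.pi / (2 * M))
    # Barycentric weights (Eq. (4)).
    w = np.array([1.0 / np.prod([alpha[j] - alpha[k] for k in range(M) if k != j])
                  for j in range(M)])
    # Append L random masks Z_{i,j} for privacy preservation.
    blocks = list(data) + [rng.uniform(-1, 1, size=data[0].shape) for _ in range(L)]
    # Chebyshev points of the second kind (Eq. (8)): evaluation points.
    beta = np.cos(np.arange(N) * np.pi / (N - 1))
    encoded = []
    for x in beta:
        diff = x - alpha
        if np.any(np.isclose(diff, 0.0)):      # x hits a node: g_i(alpha_j) = block j
            encoded.append(blocks[int(np.argmin(np.abs(diff)))])
            continue
        coeff = (w / diff) / np.sum(w / diff)  # Barycentric form of Eq. (2)
        encoded.append(sum(c * b for c, b in zip(coeff, blocks)))
    return encoded, alpha, beta
```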

3.1.2. Assignment

For set $i$, the encoded data $\tilde{D}_{i,n} = g_i(\beta_n)$ is assigned to worker $n$. Consequently, as Figure 2 shows, each worker receives $r$ encoded subtasks $\{ f(\tilde{D}_{i,n}) \}_{i=0}^{r-1}$ and executes them in the order of set indices. Once completed, the result of each encoded subtask $f(\tilde{D}_{i,n})$ is returned to the master. In other words, after the original $K$ subtasks are partitioned into multiple sets, each set is transformed into $N$ encoded subtasks assigned to the $N$ workers for processing.

3.1.3. Decoding

For set $i$, the master decodes using the function $r_i(x)$, which is constructed by interpolation [27,44] as:

$$r_i(x) = \sum_{n=0}^{R_i-1} \frac{ \frac{\tilde{w}_n}{x - \tilde{x}_n} }{ \sum_{m=0}^{R_i-1} \frac{\tilde{w}_m}{x - \tilde{x}_m} } f(g_i(\tilde{x}_n)), \tag{9}$$

where $\{ f(g_i(\tilde{x}_n)) \mid n \in [0:R_i-1] \}$ are the first $R_i$ received results in $\{ f(\tilde{D}_{i,n}) \}_{n=0}^{N-1}$ for set $i$, $\tilde{x}_n$ is the corresponding encoding parameter belonging to $\{ \beta_n \mid n \in [0:N-1] \}$, and the parameter $\tilde{w}_n$ is adapted to the different cases, as follows.
Case 1: Accurate results. If $f$ is a polynomial function of degree $d$, where the degree $d$ of a polynomial function is defined as the maximum order of its monomials, the adaptive parameter $\tilde{w}_n$ is determined as:

$$\tilde{w}_n = \frac{1}{\prod_{m=0, m \neq n}^{R_i-1} (\tilde{x}_n - \tilde{x}_m)}, \quad n \in [0:R_i-1]. \tag{10}$$
In this case, $r_i(x)$ is a Barycentric polynomial interpolation function [44] for $f(g_i(x))$. The degree of $g_i(x)$ equals $(K_i + L - 1)$, so $f(g_i(x))$ remains a polynomial function, and its degree satisfies $\deg f(g_i(x)) \le d(K_i+L-1)$. Consequently, if the number of received results $R_i$ for set $i$ satisfies:

$$R_i = d(K_i + L - 1) + 1, \tag{11}$$

it implies that sufficient interpolation points have been obtained to precisely recover $f(g_i(x))$ through $r_i(x)$, and the entire computation process is completed with

$$f(D_{i,j}) = f(g_i(\alpha_{i,j})) = r_i(\alpha_{i,j}), \tag{12}$$

for any $i \in [0:r-1]$, $j \in [0:K_i-1]$.
Note that Equation (11) means that the accurate-results case of APCC has the same recovery threshold as LCC [15]. Furthermore, similar to LCC, when there is no need for privacy preservation, i.e., $L = 0$, we can also provide an uncoded version of APCC by selecting the values of $\{ \beta_n \}$ from $\{ \alpha_{i,j} \}$. The recovery threshold then becomes:

$$R_i = N - \left\lfloor \frac{N}{K_i} \right\rfloor + 1. \tag{13}$$
Case 2: Approximated results. If $f$ is an arbitrary function, the adaptive parameter $\tilde{w}_n$ is calculated as:

$$\tilde{w}_n = (-1)^n, \quad n \in [0:R_i-1]. \tag{14}$$

In this case, $r_i(x)$ is a Berrut's rational interpolation function for $f(g_i(x))$, as discussed in [27,45]. The computed results $f(g_i(\tilde{x}_n))$ serve as interpolation points of $f(g_i(x))$, and they satisfy $r_i(\tilde{x}_n) = f(g_i(\tilde{x}_n))$ due to the property of Berrut's rational interpolation [45]. Therefore, the master can regard $r_i(x)$ as an approximation of $f(g_i(x))$, which means that

$$f(D_{i,j}) = f(g_i(\alpha_{i,j})) \approx r_i(\alpha_{i,j}), \tag{15}$$

for any $i \in [0:r-1]$, $j \in [0:K_i-1]$. In addition, the approximation using $r_i(x)$ becomes more accurate as $R_i$ increases. Thus, if the master desires more accurate computations, it simply needs to wait for more results.
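A corresponding decoding sketch (again ours) covers both cases via the weight choice in Equations (10) and (14); it assumes the recovery points $\alpha_{i,j}$ do not coincide with the received evaluation points.

```python
import numpy as np

def decode_set(results, x_tilde, alpha_recover, accurate, d=None, K_i=None, L=None):
    """Evaluate r_i(x) of Eq. (9) at the recovery points alpha_recover.

    results:  first R_i returned values f(g_i(x_tilde[n])).
    x_tilde:  the corresponding encoding points (a subset of the beta_n).
    accurate: True  -> Case 1, Barycentric weights (Eq. (10)), exact recovery;
              False -> Case 2, Berrut weights (Eq. (14)), approximation.
    """
    R_i, x_tilde = len(results), np.asarray(x_tilde)
    if accurate:
        assert R_i >= d * (K_i + L - 1) + 1, "below recovery threshold, Eq. (11)"
        w = np.array([1.0 / np.prod([x_tilde[n] - x_tilde[m]
                                     for m in range(R_i) if m != n])
                      for n in range(R_i)])
    else:
        w = (-1.0) ** np.arange(R_i)          # Berrut's rational interpolation
    recovered = []
    for x in alpha_recover:                   # assumed distinct from x_tilde
        lam = w / (x - x_tilde)
        coeff = lam / lam.sum()
        recovered.append(sum(c * y for c, y in zip(coeff, results)))
    return recovered
```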

3.2. An Illustrative Example

In this subsection, we present an illustrative example for the case of accurate results, without loss of generality. Specifically, we consider a linear regression problem. The feature data $D \in \mathbb{R}^{12000 \times 10}$ contains 12,000 data samples with 10 features, and the label vector is denoted by $y \in \mathbb{R}^{12000 \times 1}$. The objective is to find the weighting vector $w \in \mathbb{R}^{10 \times 1}$ that minimizes the loss $\| Dw - y \|^2$. To solve this problem, the gradient descent method updates the weights iteratively along the negative gradient direction as follows:

$$w^{(t+1)} = w^{(t)} - \frac{2\eta}{p} D^T (D w^{(t)} - y), \tag{16}$$

where $\eta$ is the learning rate and $t$ represents the iteration index.
In order to apply the aforementioned update process to a distributed system with one master and $N = 10$ workers, for instance, the feature data $D$ is first equally divided into $K = 12$ sub-matrices $(D_0, D_1, \ldots, D_{11})^T$, $D_k \in \mathbb{R}^{1000 \times 10}$, $k \in [0:11]$. As $w^{(t)}$ is known by the workers and $D^T y$ can be precomputed by the master, the computation function (subtask) of the master in each iteration can be expressed as $f(D_k) = D_k^T D_k w \in \mathbb{R}^{10 \times 1}$, $k \in [0:11]$. After obtaining the results of the entire task $\{ f(D_k) \}_{k=0}^{11}$, the gradient update is computed as $D^T D w = \sum_{k=0}^{11} D_k^T D_k w$.
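The block decomposition of the gradient term is easy to verify numerically; the snippet below (our illustration) checks that $D^T D w = \sum_{k=0}^{11} D_k^T D_k w$ for random data of the stated dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((12000, 10))           # feature matrix
w = rng.standard_normal((10, 1))               # current weight vector

blocks = np.split(D, 12, axis=0)               # K = 12 row blocks of 1000 samples
partial = sum(Dk.T @ Dk @ w for Dk in blocks)  # sum of subtask results f(D_k)
assert np.allclose(partial, D.T @ D @ w)       # equals the full gradient term
```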
We now illustrate how APCC can be applied to the above problem to obtain $f(D_k) = D_k^T D_k w$, $k \in [0:11]$.

3.2.1. Encoding

As depicted in Figure 3a, since there are 12 subtasks $f(D_k)$, $k \in [0:11]$, the master further partitions them into $r = 3$ sets before encoding the input data, with set $i$ ($i = 0, 1, 2$) containing $K_i$ subtasks. Here, for instance, we assume that $K_0 = 5$, $K_1 = 4$, and $K_2 = 3$, which satisfy $K_0 + K_1 + K_2 = K = 12$. After this hierarchical task partitioning, the input of the $j$-th subtask in set $i$ is denoted as $D_{i,j} \in \mathbb{R}^{1000 \times 10}$ instead of the previous $D_k$.
Next, the $K_i$ input data $\{ D_{i,j} \in \mathbb{R}^{1000 \times 10} \mid j = 0, \ldots, K_i-1 \}$ in set $i$ are encoded into $N = 10$ parts $\{ \tilde{D}_{i,n} \in \mathbb{R}^{1000 \times 10} \mid n = 0, \ldots, 9 \}$ through $g_i(x)$, where $\tilde{D}_{i,n} = g_i(\beta_n)$, $n \in [0:9]$. Moreover, $g_i(x)$ is a polynomial function of degree $(K_i + L - 1)$, and its form ensures that the parameters $\{ \alpha_{i,j} \}$ satisfy $g_i(\alpha_{i,j}) = D_{i,j}$.

3.2.2. Assignment

As Figure 3b shows, for each set, the 10 encoded input data $\{ \tilde{D}_{i,n} \}_{n=0}^{9}$ are assigned to the 10 workers. Subsequently, each worker applies the function $f$ to compute and return the results to the master. As can be observed, the $K_i$ original subtasks in set $i$ are transformed into 10 subtasks performed on the 10 workers in parallel. Since there are 3 sets, each worker is assigned 3 subtasks. These subtasks are executed in the order of set indices, which implies $f(\tilde{D}_{0,n})$ is computed first, followed by $f(\tilde{D}_{1,n})$, and so on.

3.2.3. Decoding

As illustrated in Figure 3b, following the assignment of the encoded input to workers, the master continuously awaits the subtask results from workers and creates a decoding function $r_i(x)$ for set $i$. This decoding function is constructed using interpolation to recover the original function values $f(D_{i,j}) = f(g_i(\alpha_{i,j}))$. Consequently, each received result $f(g_i(\beta_n))$ can be regarded as an interpolation point of $f(g_i(x))$, and $r_i(x)$ is precisely the interpolation function of $f(g_i(x))$.
Here, $f(D_{i,j}) = D_{i,j}^T D_{i,j} w$ is a polynomial function of degree $d = 2$, where the degree $d$ of a polynomial function $f$ is defined as the maximum order of its monomials. We have illustrated how to complete the decoding process in Section 3.1. When the number of received results reaches $R_i = d(K_i + L - 1) + 1$, sufficient interpolation points are obtained to accurately recover $f(g_i(x))$ through $r_i(x)$, i.e., $f(D_{i,j}) = f(g_i(\alpha_{i,j})) = r_i(\alpha_{i,j})$, for any $i \in [0:2]$ and $j \in [0:K_i-1]$. To further illustrate APCC, we also provide the corresponding pseudo-code in Algorithm 1.
Algorithm 1: APCC
[Algorithm 1 is provided as an image in the original article.]
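Since the pseudo-code is reproduced above only as an image, the following end-to-end sketch (ours, reusing the hypothetical `encode_set` and `decode_set` helpers defined earlier) outlines the accurate-results flow of APCC with the workers simulated sequentially; it assumes $d(K_i + L - 1) + 1 \le N$ for every set and a polynomial task function $f$ of degree $d$.

```python
def apcc_accurate(data_sets, f, L, N, d):
    """Run APCC (Case 1) over r sets of input blocks and recover all f(D_{i,j})."""
    outputs = []
    for data in data_sets:                          # sets processed in index order
        K_i = len(data)
        encoded, alpha, beta = encode_set(data, L, N)
        results = [f(block) for block in encoded]   # worker computations (simulated)
        R_i = d * (K_i + L - 1) + 1                 # recovery threshold, Eq. (11)
        # Decode from R_i results; here simply the first R_i stand in for the fastest.
        outputs.append(decode_set(results[:R_i], beta[:R_i], alpha[:K_i],
                                  accurate=True, d=d, K_i=K_i, L=L))
    return outputs

# Example: f = lambda Dk: Dk.T @ Dk @ w is a degree-2 polynomial in the input block.
```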

3.3. Hierarchical Task Partitioning and Cancellation

As shown in Figure 3, the hierarchical task partitioning in APCC aims to maximize the utility of the computing results from straggling workers. This is achieved through a well-designed structure and an appropriate choice of the $K_i$ values. Although the same number of encoded subtasks is assigned to all workers, the number of successfully returned results from each worker can differ due to varying processing speeds. As a result, straggling workers may return fewer computing results than faster workers, but they can still make valuable contributions to task completion instead of being completely discarded.
Furthermore, the illustration in Figure 3 suggests that $K_{i-1}$ should exceed $K_i$ [11]. This can be explained as follows. The completion time of set $i$ is defined as the moment when a sufficient number of encoded subtask results within set $i$ have been obtained. The overarching objective is to minimize the delay in completing the entire task, which must necessarily exceed the completion time of any set, since the entire task remains incomplete until all $r$ sets are recovered. Given that subtasks are executed in order of set indices, when the last set is recovered, the master must already have acquired at least as many results for each smaller-index set. Opting for smaller values of $K_i$ for the smaller-index sets would therefore leave more workers effectively straggling, a situation that should be averted. Further details are expounded in Section 5.
Based on the hierarchical structure, we propose an additional method to further accelerate the coded computing process. As depicted in Figure 4, the subtasks $\{ f(\tilde{D}_{i,n}) \}_{i=0}^{r-1}$ to be computed on each worker form an execution sequence. Once enough results for set $i$ are obtained, the master can instruct workers that have not completed the computation of $f(\tilde{D}_{i,n})$ to terminate or skip this part of the computation and proceed to the next subtask $f(\tilde{D}_{i+1,n})$ of the subsequent set. This operation, called “Cancellation”, prevents computation resources from being wasted on completed sets. Considering the presence of non-persistent stragglers, cancellation increases the probability that they overcome the previous straggling effect and avoid becoming stragglers again.

4. Performance Analysis

In this section, we first define a metric called the encoding rate to evaluate the efficiency of coded computing schemes, i.e., how efficiently they utilize the computation resources of workers. Then, based on the optimal recovery threshold of LCC [15], we rigorously prove that APCC with accurate results is also an optimal polynomial coding scheme in terms of the encoding rate. Furthermore, an information-theoretic guarantee that APCC completely preserves the privacy of the input data $\{ D_k \}_{k=0}^{K-1}$ is proved. Subsequently, we present an analysis of the approximation error for Case 2 of APCC, along with a discussion of numerical stability. At the end of this section, we analyze the encoding and decoding complexity of APCC and compare it with other state-of-the-art strategies.

4.1. Optimality of APCC in Terms of Encoding Rate

To evaluate the performance of various coded computing schemes, a metric known as the encoding rate $R_{\mathrm{encode}}$ is used. This metric is defined as:

$$R_{\mathrm{encode}} = \frac{K}{N - S}, \tag{17}$$

where $K$ is the number of subtasks before encoding, $N$ is the number of subtasks after encoding (which is equivalent to the number of workers), and $S$ represents the number of straggling workers that failed to return results before the task was completed. Similar metrics, such as those found in [17,20,46], have also been developed.
Furthermore, since the recovery threshold, denoted by $H$, is defined as the minimum number of results needed to guarantee decodability, we have $H = N - S$ and thus $R_{\mathrm{encode}} = \frac{K}{H}$. It is important to note that the encoding rate only applies when decodability is guaranteed.
The physical significance of the encoding rate is the ratio between the computation load of tasks before encoding and that required after encoding. For instance, given a task with a computation load of $O(\gamma)$, each subtask has a corresponding load of $O(\frac{\gamma}{K})$. As $(N - S)$ subtasks are successfully completed, the required computation load is $O(\frac{\gamma (N-S)}{K})$. Since coded computing essentially trades computation redundancy for reduced delay to mitigate the straggling effect, it is reasonable to use this metric to evaluate the efficiency of different schemes.
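As a numerical illustration (our example), consider $N = 10$ workers, $S = 2$ stragglers, and $K = 4$ divisions. Then

$$R_{\mathrm{encode}} = \frac{K}{N - S} = \frac{4}{8} = 0.5,$$

i.e., the eight completed coded subtasks carry twice the computation load of the original task; a higher encoding rate therefore means less redundant computation for the same straggler tolerance.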
Before demonstrating the optimality of APCC in terms of encoding rate, we present the definitions of capacity and linear coded computing schemes.
Definition 1. 
A linear coded computing scheme is one in which the encoded data is a linear combination of the original input data as follows:

$$\tilde{D}_n = \sum_{j=0}^{K-1} G_{n,j} D_j + \tilde{Z}_n, \quad n \in [0:N-1], \tag{18}$$

where $\mathbf{G} = \{ G_{n,j} \} \in \mathbb{R}^{N \times K}$ is the encoding matrix and the $\tilde{Z}_n$ are additive random real matrices.
For example, according to Equation (2) in APCC, $G_{n,j} = \frac{ w_{i,j} \prod_{k=0, k \neq j}^{K_i+L-1} (\beta_n - \alpha_{i,k}) }{ \sum_{k=0}^{K_i+L-1} w_{i,k} \prod_{l=0, l \neq k}^{K_i+L-1} (\beta_n - \alpha_{i,l}) }$ is the coefficient term before $D_{i,j}$, and $\tilde{Z}_n = \sum_{j=K_i}^{K_i+L-1} \frac{ w_{i,j} \prod_{k=0, k \neq j}^{K_i+L-1} (\beta_n - \alpha_{i,k}) }{ \sum_{k=0}^{K_i+L-1} w_{i,k} \prod_{l=0, l \neq k}^{K_i+L-1} (\beta_n - \alpha_{i,l}) } Z_{i,j}$ represents the sum of the added random matrices in $g_i(x)$. The index $i$ corresponds to the set index of the hierarchical task partitioning structure of APCC and can be discarded in other coded computing strategies.
Definition 2. 
For a coded computing problem $(N, S, L, f)$, where $N$ is the number of workers, $S$ and $L$ denote the number of stragglers and colluders, respectively, and the computation function $f$ on the master is a polynomial function of degree $d$, the capacity $C$ is defined as the supremum of the encoding rate:

$$C = \sup R_{\mathrm{encode}}(N, S, L, d), \tag{19}$$

over all feasible linear coded computing schemes that can tolerate up to $L$ colluders and $S$ stragglers.
As illustrated in Section 3, APCC is a linear coded computing scheme whose hierarchical structure results in different $K_i$ and $S_i$ for each set, with $K_i$ and $S_i$ representing the number of subtasks before encoding and the number of straggling workers, respectively. For set $i$, $R_i$ represents the number of workers that have successfully returned results in time, implying that the number of stragglers is $S_i = N - R_i$. Moreover, according to Equation (11), set $i$ is considered complete when $R_i = d(K_i + L - 1) + 1$. Hence, the encoding rate of APCC can be calculated as:

$$R_{\mathrm{encode}}^{[\mathrm{APCC}]} = \frac{K_i}{N - S_i} = \frac{N - S_i - d(L-1) - 1}{d(N - S_i)}, \tag{20}$$

or, for the uncoded version with $L = 0$:

$$R_{\mathrm{encode}}^{[\mathrm{APCC}]} = \frac{K_i}{N - S_i} \ge \frac{N}{(N - S_i)(S_i + 1)}, \tag{21}$$

where the equality holds when $N$ is divisible by $K_i$.
The following theorem shows that the encoding rate of APCC achieves the capacity, thereby establishing its optimality. In fact, the optimality of APCC in encoding rate is attributed to its identical polynomial coding structure when compared to LCC [15], despite having different function expressions. Specifically, for the accurate results case of APCC, the encoding and decoding processes are achieved through Barycentric polynomial interpolation; for LCC, the processes are achieved through Lagrange polynomial interpolation. Although these two formats can be transformed into each other, the Barycentric polynomial format requires less computational complexity and has stronger numerical stability [27,44]. For the sake of clarity, we omit the set index i in APCC and focus on a specific set, without loss of generality.
Theorem 1. 
For a coded computing problem $(N, S, L, f)$, where $N$ is the number of workers, $S$ and $L$ denote the number of stragglers and colluders, respectively, and the computation function $f$ on the master is an arbitrary polynomial function of degree $d$, the capacity $C$ is given by:

$$C = \begin{cases} \dfrac{N - S - d(L-1) - 1}{d(N - S)}, & \text{if } L > 0, \\[2mm] \max\left\{ \dfrac{N - S + d - 1}{d(N - S)},\ \dfrac{N}{(N - S)(S + 1)} \right\}, & \text{if } L = 0. \end{cases} \tag{22}$$
Proof. 
To prove Theorem 1, a lower bound on the capacity $C$ is first established, which follows from the encoding rate of APCC in (20) and (21). To establish the upper bound, we leverage the optimality statement of LCC, as given in Theorems 1 and 2 of [15], which shows that polynomial coded computing strategies are able to successfully decode the returned computing results only if the following condition is met:

$$N \ge \begin{cases} d(K + L - 1) + 1 + S, & \text{if } L > 0, \\ \min\{ d(K - 1) + 1 + S,\ K(S + 1) \}, & \text{if } L = 0. \end{cases} \tag{23}$$
Therefore, we have:

$$K \le \begin{cases} \dfrac{N - S - 1}{d} - L + 1, & \text{if } L > 0, \\[2mm] \max\left\{ \dfrac{N - S + d - 1}{d},\ \dfrac{N}{S + 1} \right\}, & \text{if } L = 0. \end{cases} \tag{24}$$

Equation (24) gives the maximum number of task divisions permissible to ensure decodability, given the numbers of workers $N$, stragglers $S$, and colluders $L$. The reason is that the more divisions there are, the more results are needed from workers, whereas there are at most $N$ workers, including $S$ stragglers, to return results. Based on (24), an upper bound on the encoding rate can be derived as:

$$R_{\mathrm{encode}} = \frac{K}{N - S} \le \begin{cases} \dfrac{N - S - d(L-1) - 1}{d(N - S)}, & \text{if } L > 0, \\[2mm] \max\left\{ \dfrac{N - S + d - 1}{d(N - S)},\ \dfrac{N}{(N - S)(S + 1)} \right\}, & \text{if } L = 0. \end{cases} \tag{25}$$
Since the capacity C is the supremum of R encode , it also has the same upper bound. With the lower bound provided previously, we can conclude that APCC is an optimal coded computing strategy that can reach the capacity in (22).    □
To enhance clarity, the fundamental proof for the derivation of (23) is briefly introduced in Appendix A, following the same steps as outlined in [15].
Please note that the conclusion presented in this subsection pertains only to accurate coded computing. For approximated coded computing, different approximation methods lead to different errors, making it challenging to qualitatively compare and analyze their impact on the encoding rate and capacity.

4.2. Guarantee of the Privacy Preservation

Recall that colluders are workers who can communicate with each other and attempt to learn something about the original input data. Since the system can tolerate at most $L$ colluders, we assume that there are $L$ colluders and that the master does not know which workers are colluding. We use the index set $\mathcal{L} = \{ l_0, l_1, \ldots, l_{L-1} \} \subseteq \{ 0, \ldots, N-1 \}$ to denote the colluding workers, where $|\mathcal{L}| = L$.
Assuming that the input data $\{ D_{i,j} \}_{j=0}^{K_i-1}$ are independent of each other, we denote the encoded input data sent to the workers in the colluding set $\mathcal{L}$ for set $i$ as:

$$\tilde{D}_{i,\mathcal{L}} \triangleq ( \tilde{D}_{i,l_0}, \tilde{D}_{i,l_1}, \ldots, \tilde{D}_{i,l_{L-1}} ). \tag{26}$$

Therefore, the information-theoretic privacy-preserving constraint can be expressed as:

$$I( D_{i,0}, D_{i,1}, \ldots, D_{i,K_i-1};\ \tilde{D}_{i,\mathcal{L}} ) = 0, \quad \forall i \in [0:r-1], \tag{27}$$

where $I(\cdot\,;\cdot)$ represents the mutual information function.
Under the assumption of finite-precision floating-point arithmetic, the values of the elements in the data matrices such as $D_{i,j}$, $\tilde{D}_{i,n}$, and $Z_{i,j}$ come from a sufficiently large finite field $\mathbb{F}$. Assuming that the size of these data matrices is $m \times m$, we have

$$\begin{aligned} I( D_{i,0}, \ldots, D_{i,K_i-1};\ \tilde{D}_{i,\mathcal{L}} ) &= H( \tilde{D}_{i,l_0}, \ldots, \tilde{D}_{i,l_{L-1}} ) - H( \tilde{D}_{i,l_0}, \ldots, \tilde{D}_{i,l_{L-1}} \mid D_{i,0}, \ldots, D_{i,K_i-1} ) \\ &\overset{(a)}{=} H( \tilde{D}_{i,l_0}, \ldots, \tilde{D}_{i,l_{L-1}} ) - H( Z_{i,K_i}, \ldots, Z_{i,K_i+L-1} ) \\ &\overset{(b)}{=} H( \tilde{D}_{i,l_0}, \ldots, \tilde{D}_{i,l_{L-1}} ) - L m^2 \log|\mathbb{F}| \\ &\le H( \tilde{D}_{i,l_0} ) + \cdots + H( \tilde{D}_{i,l_{L-1}} ) - L m^2 \log|\mathbb{F}| \\ &\overset{(c)}{\le} L m^2 \log|\mathbb{F}| - L m^2 \log|\mathbb{F}| = 0, \quad \forall i \in [0:r-1], \end{aligned} \tag{28}$$

where $(a)$ is due to the fact that all random matrices $\{ Z_{i,j} \}_{j=K_i}^{K_i+L-1}$ are independent of the input data $\{ D_{i,j} \}_{j=0}^{K_i-1}$, $(b)$ holds because the entropy of each element in the random matrices equals $\log|\mathbb{F}|$, and $(c)$ follows from the entropy of each element in $\tilde{D}_{i,l_{(\cdot)}}$ being upper bounded by $\log|\mathbb{F}|$. Since mutual information is non-negative, it must equal 0, which guarantees complete privacy preservation.
Note that the analysis in this subsection is applicable to both accurate and approximated cases. This is because the analysis only involves the encoding and assignment steps of APCC, and both cases require the same two initial steps. The key difference between the two aforementioned cases is reflected in the decoding functions with distinct adaptive parameters w ˜ n , which correspond to Barycentric polynomial interpolation and Berrut’s rational interpolation, respectively.
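The one-time-pad flavor of step $(b)$ can be seen in a toy experiment (ours, with an arbitrary small prime field): adding an independent uniform mask makes the symbol observed by a colluder uniform, regardless of the secret data.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 257                 # toy prime field F_p (our arbitrary choice)
data = 42               # any fixed secret symbol
# A colluder observes data + uniform mask (mod p), as in the masked terms of Eq. (2).
obs = (data + rng.integers(0, p, size=200_000)) % p
freq = np.bincount(obs, minlength=p) / obs.size
print(freq.min(), freq.max())   # both close to 1/p ~ 0.0039: observation is uniform
```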

4.3. Analysis of Approximation Error for Case 2

According to the discussion in [27], the approximation error of Berrut's rational interpolation used for Case 2 of APCC is given by the following theorem:
Theorem 2 
([27]). Let the interpolating objective function $h_i(x) = f(g_i(x))$ have a continuous second derivative on $[-1, 1]$, and let the number of received results satisfy $R_i > 3$. The approximation error is upper bounded as:

$$\| r_i(x) - h_i(x) \| \le 2 (1 + \Gamma) \sin\left( \frac{(N - R_i + 1)\pi}{2(N - 1)} \right) \| h_i'(x) \|, \tag{29}$$

if $R_i$ is even, and

$$\| r_i(x) - h_i(x) \| \le 2 (1 + \Gamma) \sin\left( \frac{(N - R_i + 1)\pi}{2(N - 1)} \right) \left( \| h_i'(x) \| + \| h_i(x) \| \right), \tag{30}$$

if $R_i$ is odd, where $\Gamma \triangleq \frac{(N - R_i + 1)(N - R_i + 3)\pi^2}{4}$.
Consequently, for set i and a fixed total number of workers N, the approximation using r i ( x ) becomes more accurate as the number of received results R i increases.

4.4. Numerical Stability

In coded computing, the issue of numerical stability typically arises from the decoding part, which is based on solving a system of linear equations involving a Vandermonde matrix. As previously discussed, Cases 1 and 2 of APCC employ Barycentric polynomial interpolation and Berrut’s rational interpolation as decoding methods, respectively. For Case 1, Barycentric polynomial interpolation demonstrates good performance in addressing errors caused by floating-point arithmetic [44]. Regarding Case 2, it has been shown in [27] that the Lebesgue Constant of Berrut’s rational interpolation grows logarithmically with the number of received results from workers, rendering it both forward and backward stable.

4.5. Encoding and Decoding Complexity

In this subsection, we provide the analysis of encoding and decoding complexity. Intuitively, APCC utilizes the hierarchical task partitioning structure to enhance delay performance. However, it does so at the cost of requiring multiple encoding and decoding operations, specifically r times for the r sets, when compared to LCC [15] and BACC [27].
In LCC and BACC, encoding involves $N$ operations, corresponding to the number of workers, while decoding involves $K'$ operations, where $K'$ is their task division number. In APCC, which features $r$ partitioned sets, encoding and decoding entail $Nr$ and $\sum_{i=0}^{r-1} K_i = K$ operations, respectively. When the computation loads per worker in all strategies are equal, i.e., $K' = \frac{K}{r}$, it follows that the numbers of encoding and decoding operations in APCC are $r$ times those of LCC and BACC.

5. Hierarchical Task Partitioning

In this section, the hierarchical task partitioning is formulated as an optimization problem with the objective of minimizing the task completion delay. The problem is considered in two cases: with and without cancellation. Through derivations, two mixed-integer nonlinear programming problems are obtained, and we propose a maximum value descent (MVD) algorithm to obtain the optimal solutions with low computational complexity. Moreover, the analysis shows that the MVD algorithm can be executed quickly by selecting an appropriate initial input. Detailed explanations are provided as follows.

5.1. Problem Formulation

Assuming negligible encoding and decoding delays, so that the computation delays of workers are the dominant component, the delay for a worker to complete a single subtask, denoted as $T$, can be represented by a shifted exponential distribution [4,7,11,12,40,41], whose cumulative distribution function (CDF) is given by:

$$F_T(t) = \mathbb{P}[T \le t] = \begin{cases} 1 - e^{-\mu(t - a)}, & \text{if } t \ge a, \\ 0, & \text{otherwise}, \end{cases} \tag{31}$$

where $a > 0$ is a parameter indicating the minimum processing time and $\mu > 0$ is a parameter modeling the computing performance of workers. The delays of all $N$ workers independently follow the common distribution defined in (31).
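Sampling from this shifted exponential is straightforward; the helper below (ours) is reused in the Monte Carlo sketch of Section 6.

```python
import numpy as np

def sample_subtask_delay(n_workers, mu, a, rng):
    """Draw i.i.d. single-subtask delays T with CDF (31): shift a, rate mu.
    Under the model of Section 5.2, completing m subtasks takes T_m = m * T."""
    return a + rng.exponential(1.0 / mu, size=n_workers)
```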
Recall that in the hierarchical structure, the completion of a particular set depends on the successful reception of a sufficient number of results of its encoded subtasks. The entire task is completed only when all $r$ sets have been completed. Notably, $H_i$ is defined as the threshold number of successful results needed to ensure the completion of set $i$.
Following the discussion in Section 3 and assuming that privacy preservation is required, i.e., $L > 0$, the threshold for Case 1 of APCC can be expressed as $H_i = d(K_i + L - 1) + 1$ according to (11). For Case 2 of APCC, the threshold $H_i$ can be determined based on the desired approximation precision, with higher values of $H_i$ leading to more accurate approximations.
The completion times of the sets are defined as $\mathbf{t} \triangleq \{ t_i, i \in [0:r-1] \}$, where $t_i$ denotes the time interval from the initial moment $0$ of the entire task to the recovery moment of set $i$. The entire task is considered completed when all $r$ sets have been recovered. Therefore, we denote the entire task completion delay as

$$T^{[e]} = \max_{i \in [0:r-1]} t_i. \tag{32}$$
Note that while each worker executes the assigned subtasks in the order of set indices, the order in which these sets are recovered may differ. The completion times of the sets are influenced not only by the set indices but also by the recovery thresholds $H_i$ determined by $K_i$.
Due to the randomness of the delay, our objective is to minimize the entire task completion delay $T^{[e]} = \max_{i \in [0:r-1]} t_i$ subject to the probability of the master recovering the desired results for all sets being higher than a given threshold $\rho_s$, as expressed by the following inequality:

$$\mathbb{P}[ R_0(t_0) \ge H_0, \ldots, R_{r-1}(t_{r-1}) \ge H_{r-1} ] \ge \rho_s, \tag{33}$$

where $R_i(t)$ is defined as the number of returned results for set $i$ until time $t$.
However, to derive (33), we first need to obtain the distribution of the delay required to receive the last non-straggling result in each set and then derive their joint probability distribution, which is intractable, especially when considering the cancellation of completed sets. As a result, the problem with the constraint (33) is hard to solve.
In the following, we consider substituting (33) with an expectation constraint (34d) and formulate the problem as:

$$\mathcal{P}1\text{-}1: \min_{\mathbf{K}}\ \max_{i \in [0:r-1]} t_i \tag{34a}$$
$$\text{s.t.} \quad \sum_{i=0}^{r-1} K_i = K, \tag{34b}$$
$$H_i \le N, \quad i \in [0:r-1], \tag{34c}$$
$$\mathbb{E}[ R_i(t_i) ] \ge H_i, \quad i \in [0:r-1], \tag{34d}$$
$$K_i, H_i \in \mathbb{Z}^+, \quad i \in [0:r-1], \tag{34e}$$

where $\mathbf{K} \triangleq \{ K_i \mid i \in [0:r-1] \}$ is the partitioning scheme.
Constraint (34b) corresponds to the hierarchical task partitioning, and (34c) indicates that the threshold for each set should not exceed the number of workers. In constraint (34e), $\mathbb{Z}^+$ represents the set of positive integers. Constraint (34d) states that the master is expected to receive sufficient results of encoded subtasks from workers to recover $\{ f(D_{i,j}) \}_{j=0}^{K_i-1}$ in set $i$. Similar approximation approaches are also used in [4,12,40,41], and the performance gap can be bounded [12].
As previously shown, $H_i = d(K_i + L - 1) + 1$ for Case 1 of APCC. Additionally, the maximum of $t_i$ over all sets can be replaced with an optimization variable $z$ by adding an extra constraint. Consequently, for Case 1 of APCC, $\mathcal{P}1\text{-}1$ can be equivalently written as:

$$\mathcal{P}1\text{-}2: \min_{\{\mathbf{K}, z\}}\ z \tag{35a}$$
$$\text{s.t.} \quad t_i - z \le 0, \quad i \in [0:r-1], \tag{35b}$$
$$d(K_i + L - 1) + 1 - \mathbb{E}[ R_i(t_i) ] \le 0, \quad i \in [0:r-1], \tag{35c}$$
$$d(K_i + L - 1) + 1 - N \le 0, \quad i \in [0:r-1], \tag{35d}$$
$$K_i \in \mathbb{Z}^+, \quad i \in [0:r-1], \tag{35e}$$
$$\text{Constraint (34b)}.$$
For Case 2 of APCC, one only needs to adjust constraints (35c) and (35d) according to the relationship between $K_i$ and $H_i$, which does not affect the subsequent methods employed. Consequently, for convenience of exposition, we focus on Case 1 of APCC in the remainder of this section, without loss of generality.

5.2. APCC without Cancellation

If the cancellation of completed sets is not considered, we first denote the delay for one worker to continuously complete $m$ subtasks as $T_m$, and derive its CDF from (31) as:

$$\mathbb{P}[ T_m \le t ] = \begin{cases} 1 - e^{-\mu \left( \frac{t}{m} - a \right)}, & \text{if } t \ge m a, \\ 0, & \text{otherwise}. \end{cases} \tag{36}$$

Since the computations on workers are independent, $\mathbb{E}[ R_i(t_i) ]$ can be written as:

$$\mathbb{E}[ R_i(t_i) ] = \sum_{n=0}^{N-1} \mathbb{E}[ \mathbb{1}\{ T_{i+1} \le t_i \} ] = N \cdot \mathbb{P}[ T_{i+1} \le t_i ], \tag{37}$$

where $\mathbb{1}\{ x \}$ denotes the indicator function that equals 1 if event $x$ is true and 0 otherwise, and $\mathbb{P}[ T_{i+1} \le t_i ]$ is given by (36).
Substituting (37) into $\mathcal{P}1\text{-}2$, we find that (35d) is covered by (35c) and obtain the following optimization problem:

$$\mathcal{P}2\text{-}1: \min_{\{\mathbf{K}, z\}}\ z \tag{38a}$$
$$\text{s.t.} \quad d(K_i + L - 1) + 1 - N \left[ 1 - e^{-\mu \left( \frac{t_i}{i+1} - a \right)} \right] \le 0, \quad i \in [0:r-1], \tag{38b}$$
$$\text{Constraints (34b), (35b), (35e)}.$$
As $\mathcal{P}2\text{-}1$ shows, it is a mixed-integer nonlinear programming (MINLP) problem, which is usually NP-hard. Although its optimal solution can be found by the Branch and Bound (B&B) algorithm [47], the computational complexity is up to $O\left( \left( \frac{N}{d} \right)^r \right)$, which means the B&B algorithm becomes extremely time-consuming when either $N$ or $r$ is large.
Accordingly, to efficiently obtain an optimal solution, we propose the maximum value descent (MVD) algorithm shown in Algorithm 2. The key idea of the MVD algorithm is to iteratively update the input solution $\mathbf{K} = \{ K_i, i \in [0:r-1] \}$ by adjusting $K_i$ for the set that yields the maximum descent of the objective function $z$. In the MVD algorithm, each do-while loop can be regarded as one update, and $K_j$ in Step 7 constantly approaches the optimal $K_j^*$. Once reduced in an update, $K_j$ will not increase, because the objective function $z$ must decrease in each update. When the updating process terminates, the optimal solution $\mathbf{K}^*$ is exactly the $\mathbf{K}$ obtained in the last update. Furthermore, the MVD algorithm has a computational complexity of $O\left( \frac{N r}{d} \right)$, as the number of do-while loops is determined by constraint (35d).
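Algorithm 2 is reproduced below only as an image, so the following sketch is our hedged reconstruction of MVD from the description above: in each loop it moves one subtask out of the set attaining the current maximum completion time into the set whose resulting maximum grows the least, and it stops when no move decreases $z$; the helper names and the exact move rule are our assumptions.

```python
import numpy as np

def completion_time(K_i, i, N, L, d, mu, a):
    """t_i obtained by making constraint (38b) tight for set i (no cancellation)."""
    frac = (d * (K_i + L - 1) + 1) / N          # must stay below 1, cf. (35d)
    return (i + 1) * (a - np.log(1.0 - frac) / mu)

def mvd(K, N, L, d, mu, a):
    """Greedy maximum-value-descent sketch for P2-1; K is a feasible start."""
    K, r = list(K), len(K)
    z_of = lambda K: max(completion_time(K[i], i, N, L, d, mu, a) for i in range(r))
    while True:
        t = [completion_time(K[i], i, N, L, d, mu, a) for i in range(r)]
        z, j = max(t), int(np.argmax(t))        # set j currently defines z
        best = None
        for m in range(r):                      # candidate set to take one subtask
            if m == j or K[j] <= 1:
                continue
            if d * (K[m] + L) + 1 >= N:         # keep (35d) strictly feasible
                continue
            K_new = list(K)
            K_new[j] -= 1
            K_new[m] += 1
            z_new = z_of(K_new)
            if z_new < z and (best is None or z_new < best[0]):
                best = (z_new, K_new)
        if best is None:
            return K, z                         # no improving move: terminate
        K = best[1]
```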
Furthermore, the MVD algorithm can be executed quickly by selecting a sufficiently good partitioning solution as its input. Note that after relaxing the integer constraint (35e), $\mathcal{P}2\text{-}1$ can be transformed into the following convex problem:

$$\mathcal{P}2\text{-}2: \min_{\{\mathbf{K}, z\}}\ z \tag{39a}$$
$$\text{s.t.} \quad \text{Constraints (38b), (34b), (35b)}, \tag{39b}$$
$$K_i > 0, \quad i \in [0:r-1], \tag{39c}$$
and the optimal solution is given in Proposition 1 according to the Karush–Kuhn–Tucker (KKT) conditions.
Algorithm 2: MVD
[Algorithm 2 is provided as an image in the original article.]
Proposition 1. 
For given $(N, K, L, d, r, \mu, a)$, the optimal solution $\mathbf{K}^{[\mathrm{Prop1}]}$ and the corresponding delay $\mathbf{t}^{[\mathrm{Prop1}]}$ to $\mathcal{P}2\text{-}2$ are

$$\sum_{i=0}^{r-1} e^{-\mu \left( \frac{z^*}{i+1} - a \right)} = r - \frac{d(K + rL - r) + r}{N}, \quad t_i^{[\mathrm{Prop1}]} = z^*, \quad K_i^{[\mathrm{Prop1}]} = \frac{N}{d} \left[ 1 - e^{-\mu \left( \frac{z^*}{i+1} - a \right)} \right] - \frac{1}{d} - L + 1. \tag{40}$$
Due to the convexity of $\mathcal{P}2\text{-}2$, the Euclidean distance between $\mathbf{K}^{[\mathrm{Prop1}]}$ and the optimal solution $\mathbf{K}^*$ of $\mathcal{P}2\text{-}1$ is small. Therefore, it is recommended to use a rounded version of $\mathbf{K}^{[\mathrm{Prop1}]}$ as the input to the MVD algorithm.
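Proposition 1 determines $z^*$ through a one-dimensional root-finding problem, since the left-hand side of (40) is decreasing in $z^*$; the bisection sketch below (ours) computes $z^*$ and the fractional $K_i^{[\mathrm{Prop1}]}$ to be rounded as the MVD input, assuming the parameters are feasible (the right-hand side of (40) is positive).

```python
import numpy as np

def solve_prop1(N, K, L, d, r, mu, a, tol=1e-10):
    """Bisection solver for z* in Proposition 1 (our numerical sketch)."""
    target = r - (d * (K + r * L - r) + r) / N          # RHS of (40), assumed > 0
    g = lambda z: sum(np.exp(-mu * (z / (i + 1) - a)) for i in range(r))
    lo, hi = 1e-9, 1.0
    while g(hi) > target:            # g decreases in z: grow the bracket as needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > target else (lo, mid)
    z = 0.5 * (lo + hi)
    K_frac = [N / d * (1 - np.exp(-mu * (z / (i + 1) - a))) - 1 / d - L + 1
              for i in range(r)]
    return z, K_frac
```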

5.3. APCC with Cancellation

If the cancellation of completed sets is considered, a worker may be cancelled in a certain set but successfully return results in time for subsequent sets. For example, worker $n$ may be a straggler for set $i$ but, due to cancellation, completes its assigned subtask and returns the result in time for the next set $(i+1)$. Such situations make it quite difficult to derive and analyze the expectation of $R_i(t)$ as in Section 5.2, because the impact of the cancellation of a previous set on the delays of non-straggling workers in subsequent sets must be considered. Therefore, we adopt the following alternative perspective to simplify the problem.
Note that if set $i$ is the last completed one, the entire task is completed when the last needed result for this set is received. Thus, we define the delay of set $i$ as $T_i^{[e]}$ and aim to minimize $\max_{i \in [0:r-1]} \mathbb{E}[ T_i^{[e]} ]$. To derive $\mathbb{E}[ T_i^{[e]} ]$, consider that there are still $N - H_i + 1 = N - d(K_i + L - 1)$ workers computing the last result for set $i$ when the other sets are finished. Once any one of these workers returns the first result, this set and the entire task will be completed. Accordingly, the CDF of $T_i^{[e]}$ can be written as follows:

$$\mathbb{P}\left[ T_i^{[e]} \le t \right] = 1 - \left( 1 - \mathbb{P}[ T_{i+1} \le t ] \right)^{N - d(K_i + L - 1)} = \begin{cases} 1 - e^{-\mu (N - d(K_i + L - 1)) \left( \frac{t}{i+1} - a \right)}, & \text{if } t \ge (i+1) a, \\ 0, & \text{otherwise}, \end{cases} \tag{41}$$

where $T_{i+1}$ is the delay needed for one worker to complete $(i+1)$ subtasks, as given in (36). Then we have

$$\mathbb{E}[ T_i^{[e]} ] = \frac{i+1}{\mu [ N - d(K_i + L - 1) ]} + a(i+1). \tag{42}$$
By further adding an extra optimization variable $z$ to substitute $\max_{i \in [0:r-1]} \mathbb{E}[ T_i^{[e]} ]$, the optimization problem can be formulated as:

$$\mathcal{P}3\text{-}1: \min_{\{\mathbf{K}, z\}}\ z \tag{43a}$$
$$\text{s.t.} \quad \frac{i+1}{\mu [ N - d(K_i + L - 1) ]} + a(i+1) - z \le 0, \quad i \in [0:r-1], \tag{43b}$$
$$\text{Constraints (34b), (35d), (35e)}.$$
Note that $\mathcal{P}3\text{-}1$ is an MINLP problem similar to $\mathcal{P}2\text{-}1$ and incurs an $O\left( \left( \frac{N}{d} \right)^r \right)$ computational complexity if solved by the B&B algorithm. However, after relaxing the integer constraint (35e), $\mathcal{P}3\text{-}1$ can also be transformed into a convex problem as:
$$\mathcal{P}3\text{-}2: \min_{\{\mathbf{K}, z\}}\ z \tag{44a}$$
$$\text{s.t.} \quad \text{Constraints (43b), (34b), (35d)}, \tag{44b}$$
$$K_i > 0, \quad i \in [0:r-1], \tag{44c}$$

and the optimal solution is given in Proposition 2 according to the KKT conditions.
Proposition 2. 
For given $(N, K, L, d, r, \mu, a)$, the closed-form expression of the optimal solution $\mathbf{K}^{[\mathrm{Prop2}]}$ to $\mathcal{P}3\text{-}2$ is

$$\sum_{i=0}^{r-1} \frac{i+1}{z^* - a(i+1)} = \mu \left[ rN - d(K + rL - r) \right], \qquad K_i^{[\mathrm{Prop2}]} = \frac{N}{d} - \frac{i+1}{d \mu [ z^* - a(i+1) ]} - L + 1. \tag{45}$$
Consequently, the MVD algorithm is used again to solve $\mathcal{P}3\text{-}1$, with a computational complexity of $O\left( \frac{N r}{d} \right)$, and the rounded result of $\mathbf{K}^{[\mathrm{Prop2}]}$ is recommended as the input.

6. Simulation Results

In this section, we leverage simulation results to evaluate the performance of APCC in terms of task completion delay and compare it with other state-of-the-art coded computing strategies, including LCC [15], LCC with multi-message communications (LCC-MMC) [35], and BACC [27]. Additionally, we analyze the impact of the number of partitioned sets r and the number of colluding workers L on the task completion delay of APCC.
In the simulations, the entire task is given, leading to a constant computation load for the entire task. In this scenario, we aim to compare the entire task completion delay across various task divisions and coded computing strategies, illustrating the delay performance improvements introduced by APCC. We assume that the computation delay $T_0$ of a single worker completing the entire task follows a shifted exponential distribution, modeled as:

$$\mathbb{P}[ T_0 \le t ] = \begin{cases} 1 - e^{-\mu_0 (t - a_0)}, & \text{if } t \ge a_0, \\ 0, & \text{otherwise}; \end{cases} \tag{46}$$

then the computation delay $T$ of a single worker completing one subtask follows:

$$\mathbb{P}[ T \le t ] = \begin{cases} 1 - e^{-\mu_0 (K t - a_0)}, & \text{if } t \ge \frac{a_0}{K}, \\ 0, & \text{otherwise}, \end{cases} \tag{47}$$

where $K$ denotes the task division number, which may vary depending on the chosen coded computing strategy. The parameter $a_0$ is set to 0.5 s, and $\mu_0$ is set to $\frac{1}{10 a_0}$. In APCC, $\{ K_i \}_{i=0}^{r-1}$ corresponds to the numbers of subtasks in each set before encoding, and their values are obtained using the MVD algorithm. Then, $5 \times 10^4$ Monte Carlo realizations are run to obtain the average completion delay of the entire task; the simulation code is shared here (code link: https://github.com/Zemiser/APCC, accessed on 24 August 2024). Note that by comparing (47) with (31), we have $\mu = K \mu_0$ and $a = \frac{a_0}{K}$, and can further derive the distribution of $T_m$ in (36).
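For reference, a minimal Monte Carlo sketch of this setup (ours; it covers APCC without cancellation and uses example parameters from this section) is given below; under the model $T_m = m \cdot T$, set $i$ completes once the $H_i$-th smallest cumulative delay $(i+1) T_n$ is reached.

```python
import numpy as np

def simulate_apcc_delay(K_sets, N, L, d, mu0, a0, runs=10_000, seed=0):
    """Average entire-task completion delay of APCC without cancellation."""
    rng = np.random.default_rng(seed)
    K = sum(K_sets)
    mu, a = K * mu0, a0 / K                    # per-subtask parameters, cf. (47)/(31)
    total = 0.0
    for _ in range(runs):
        T = a + rng.exponential(1.0 / mu, size=N)       # one-subtask delay per worker
        t_task = 0.0
        for i, K_i in enumerate(K_sets):
            H_i = d * (K_i + L - 1) + 1                 # recovery threshold of set i
            t_i = np.sort((i + 1) * T)[H_i - 1]         # H_i-th fastest cumulative delay
            t_task = max(t_task, t_i)
        total += t_task
    return total / runs

# Example (our parameters): N = 100 workers, r = 4 sets, L = 10, d = 2.
print(simulate_apcc_delay([4, 3, 3, 2], N=100, L=10, d=2, mu0=0.2, a0=0.5))
```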
The benchmarks involved in this section are as follows:
(1) APCC: APCC is the coded computing strategy proposed in this paper. It first divides the entire task into $K$ subtasks and then partitions them into $r$ sets of different sizes. The number of subtasks in set $i$, $i \in [0:r-1]$, is denoted as $K_i$, which satisfies $\sum_{i=0}^{r-1} K_i = K$. After that, each set is encoded into $N$ subtasks assigned to the $N$ workers. Consequently, each worker is assigned $r$ subtasks. For Case 1 of APCC, set $i$ is recovered when the master has received $d(K_i + L - 1) + 1$ results, and the entire task is completed when all sets are recovered.
(2) LCC: LCC proposed in [15] divides the entire task into $K'$ subtasks and then encodes them into $N$ subtasks assigned to the $N$ workers. Each worker in LCC is assigned one subtask. Therefore, the entire task is completed when the master has received $d(K' + L - 1) + 1$ results. $L = 0$ means the absence of a requirement for privacy preservation. We assume that the number of workers $N$ is greater than $d(K' - 1)$ to facilitate our analysis. Consequently, when $L = 0$, the recovery threshold is defined as $d(K' - 1) + 1$ instead of $N - \lfloor N / K' \rfloor + 1$ according to [15].
(3) LCC-MMC: MMC proposed in [35] is another approach, besides the hierarchical structure, to utilize the computing results of straggling workers. It also achieves a partial return of results from workers through a more granular task division. Specifically, LCC-MMC divides the entire task into $K_{LM}$ subtasks and then encodes them into $N r$ subtasks. Each worker in LCC-MMC is assigned $r$ subtasks, and the entire task is completed when the master has received $d(K_{LM} - 1) + 1$ results. However, LCC-MMC cannot preserve the privacy of the input data, because multiple encoded data from the same encoding function are sent to a worker, which is different from APCC, where the $r$ subtasks assigned to the same worker are generated by $r$ different encoding functions $\{ g_i(x) \}_{i=0}^{r-1}$.
(4) BACC: The BACC strategy, introduced in [27], offers approximated results whose precision improves as the number of returned results from workers increases. It shares a task division structure identical to LCC, partitioning the task into $K'$ subtasks and then encoding them into $N$ subtasks. Each worker in BACC is assigned one such subtask.
To ensure fairness, all strategies employ an identical number of workers and an equivalent computation load for a single worker. Assuming that the computation load of the entire task is $O(\gamma)$, each subtask $f(D_k)$ in APCC has a computation load of $O(\frac{\gamma}{K})$, and the computation load of each worker in APCC is $O(\frac{\gamma r}{K})$ because there are $r$ partitioned sets. Similarly, we can derive that the computation loads of each worker in LCC, BACC, and LCC-MMC are $O(\frac{\gamma}{K'})$, $O(\frac{\gamma}{K'})$, and $O(\frac{\gamma r}{K_{LM}})$, respectively. In order to ensure that each worker in these schemes performs an identical fraction of the entire task as in APCC, we have

$$K' = \frac{K_{LM}}{r} = \frac{K}{r}. \tag{48}$$
Due to the different applicability of various coded computing strategies, we will first conduct a comprehensive analysis and comparison of APCC alongside other strategies within the following three scenarios: (1) Accurate results with L colluding workers ( L > 0 ); (2) Accurate results without colluding workers ( L = 0 ); (3) Approximated results. Finally, we study the impact of the parameters r and L on the delay performance of APCC.

6.1. Accurate Results with L Colluding Workers ( L > 0 )

In this scenario, we consider the following three benchmarks: LCC, APCC without cancellation, and APCC with cancellation. For a fair comparison, the computation load of workers should be the same, so we have $K' = \frac{K}{r}$.
As shown in Figure 5, the average completion delay of the entire task $\{ f(D_k) \}_{k=0}^{K-1}$ first decreases and then increases with the task division number $K$, indicating the existence of an optimal division that minimizes the delay. This trade-off arises from balancing the computation load of each worker against the minimum number of workers needed to recover $\{ f(D_k) \}_{k=0}^{K-1}$. On the one hand, as the division number decreases, the computation load of each subtask increases, which leads to longer computation delays for each worker due to the increased workload. Although the number of workers waiting for results decreases, the increase in load negates this advantage. On the other hand, as the division number approaches the maximum given by inequality (24), the number of workers that the master needs to wait for approaches $N$, making the straggling effect a performance bottleneck and increasing the delay. The zigzag fluctuations in the curve are mainly due to the integer values of the partitioning numbers.
Note that the primary metric for evaluating different schemes in our study is the minimum task completion delay over all division numbers, as depicted in Figure 5. This is because the division number $K' = Kr$ corresponds to the division of the computation function inputs, which are typically high-dimensional matrices, so $K'$ can be adjusted flexibly in most cases. Therefore, the minimum achieved task completion delay is the main focus of our analysis.
Figure 6 compares APCC and LCC in terms of the minimum task completion delay. Among the benchmarks, ‘Brute-Force’ refers to the partitioning strategy obtained by an exhaustive search over all possible values of $\{K_i\}$. Owing to the high complexity of this exhaustive search, brute-force results are provided only for scenarios with a small number of sets ($r = 4$). Figure 6 illustrates that APCC, both with and without cancellation, yields substantial reductions in task completion delay compared to LCC. For instance, when $N = 100$, $L = 10$, $d = 2$, $r = 16$, and the partitioning strategy obtained from the MVD algorithm is used, APCC with and without cancellation achieves delay reductions of 47.5% and 41.4%, respectively, compared to LCC. Moreover, the comparison with the ‘Brute-Force’ benchmark shows that the partitioning strategy $\{K_i\}$ obtained through the MVD algorithm is near-optimal.
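For completeness, the ‘Brute-Force’ baseline can be reproduced along the lines of the sketch below, which enumerates all $\binom{K'-1}{r-1}$ compositions of $K'$ into r positive set sizes and keeps the one minimizing a user-supplied delay oracle. The function names and the toy_delay objective are our own placeholders, since the true objective is the MINLP delay model formulated earlier:

from itertools import combinations

def compositions(total, parts):
    """Yield every way to write `total` as an ordered sum of `parts` positive integers."""
    for cuts in combinations(range(1, total), parts - 1):
        bounds = (0,) + cuts + (total,)
        yield tuple(bounds[i + 1] - bounds[i] for i in range(parts))

def brute_force_partition(K_prime, r, delay_of):
    """Exhaustive search over all set-size vectors {K_i} summing to K_prime."""
    best, best_delay = None, float("inf")
    for K_sets in compositions(K_prime, r):
        delay = delay_of(K_sets)
        if delay < best_delay:
            best, best_delay = K_sets, delay
    return best, best_delay

# `toy_delay` is a hypothetical placeholder objective, not the paper's delay model.
toy_delay = lambda Ks: max(Ks) + 0.1 * sum(abs(a - b) for a, b in zip(Ks, Ks[1:]))
print(brute_force_partition(12, 4, toy_delay))

The number of compositions grows as $\binom{K'-1}{r-1}$, which is why this search is tractable only for small r, such as the $r = 4$ case reported in Figure 6.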

6.2. Accurate Results without Colluding Workers ( L = 0 )

In this scenario, we evaluate four benchmarks: LCC, LCC-MMC, and APCC with and without cancellation. Among these, only LCC does not exploit partial results from straggling workers. As in Section 6.1, we set $K = K_{LM}/r = K'/r$, with $K_{LM}$ denoting the task division number of LCC-MMC.
In Figure 7, both LCC-MMC and APCC effectively reduce the task completion delay compared to LCC. Specifically, when r is sufficiently large, APCC with cancellation closely approaches the performance of LCC-MMC. This similarity arises because, in both APCC and LCC-MMC, the master utilizes nearly all of the workers' computing results when the divided subtasks are sufficiently small. Figure 7 also illustrates that when privacy is not a concern, MMC is a viable method for reducing delay in coded computing.
Compared to Figure 6, we observe that the absence of colluding workers limits the potential for delay optimization. For instance, with parameters N = 100 , L = 0 , d = 2 , and r = 16 , APCC with cancellation achieves only a 20.3 % delay reduction compared to LCC.

6.3. Approximated Results

In this subsection, we compare the task completion delays of BACC and Case 2 of APCC, both of which can provide approximated results using fewer workers than the recovery threshold requires. To ensure a uniform worker computation load, we again set $K = K'/r$, as in the previous analysis. Furthermore, since BACC shares an identical task division structure with LCC, we employ a smaller recovery threshold of the same form as that of LCC to evaluate its delay performance. For instance, when the recovery threshold $d(K+L-1)+1$ exceeds N, a reduced uniform recovery threshold $\frac{d}{2}(K+L-1)+1$ below N can be employed for both BACC and APCC.
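A small helper can illustrate this fallback rule. The sketch below (our own naming, and only one possible realization of the rule described above) halves the effective degree until a threshold of the LCC form fits within the N available workers:

import math

def approx_recovery_threshold(N, K, L, d):
    # Keep halving the effective degree until the LCC-form threshold fits in N,
    # e.g., fall back from d(K+L-1)+1 to (d/2)(K+L-1)+1.
    c = 1
    while d / c * (K + L - 1) + 1 > N:
        c *= 2
    return math.ceil(d / c * (K + L - 1) + 1)

# With N = 100, K = 30, L = 0, d = 4: 4*29+1 = 117 > 100,
# so the halved threshold 2*29+1 = 59 is used instead.
print(approx_recovery_threshold(100, 30, 0, 4))  # -> 59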
As shown in Figure 8, the hierarchical task partitioning and the cancellation of completed sets in APCC yield substantial improvements in delay performance. Compared to BACC, the proposed MVD algorithm for APCC achieves up to a 42.9% delay reduction. Note that in this scenario, both APCC and BACC can obtain approximated results from fewer returned results, whereas LCC, which targets accurate computation, fails to work when K exceeds 20 in the two cases of Figure 8, because its recovery threshold $d(K+L-1)+1$ then exceeds N.

6.4. Impact of r and L on the Performance of APCC

The impact of the number of hierarchical partitioned sets r on the task completion delay of APCC is illustrated in Figure 9a. A larger r results in a smaller computation delay, consistent with the results shown in the previous figures. This reduction arises because a larger r implies a smaller computation load for each subtask in the hierarchical structure, so the difference in computation progress between fast and slow workers can be captured more precisely, and the proposed MVD algorithm can better exploit the computing results of straggling workers. Furthermore, Figure 9a indicates that the benefit of increasing r saturates, corresponding to an upper bound on the gain obtainable by refining the granularity of the task division.
Recall that L denotes the maximum number of colluding workers that a coded computing scheme can tolerate; its value thus serves as an indirect indicator of the level of privacy preservation the scheme offers. Specifically, a larger L corresponds to more stringent privacy protection and a higher tolerance for colluders. As demonstrated in Section 4.2, complete data privacy is achieved as long as the number of colluders does not exceed L.
Figure 9b illustrates the impact of the number of colluding workers L on the trade-off between delay and privacy preservation. Note that, for a fixed $K'$, increasing L leads to a larger recovery threshold H for the original subtasks, which results in a longer task completion delay. Moreover, as shown in (24), choosing a larger L restricts the maximum number of task divisions. Consequently, the range of $K'$ values covered by the curves plotted in Figure 9b varies with L.

7. Conclusions

In this paper, we have investigated a distributed computing system consisting of one master and multiple workers. We have first proposed the adaptive privacy-preserving coded computing (APCC) strategy, which is suitable for diverse task scenarios and computation functions. APCC adaptively provides accurate or approximated results with controllable error according to the form of the computation function, and its computation process remains numerically stable. We have rigorously proved the optimality of APCC in terms of encoding rate based on the optimal recovery threshold of LCC, as well as the complete privacy preservation of the input data.
We have further proposed a low-complexity maximum value descent (MVD) algorithm to optimally solve the hierarchical task partitioning problem in APCC, with and without cancellation, aiming to minimize the task completion delay. Cancellation is our proposed operation that further accelerates computation by promptly canceling subtasks whose sets have already been completed. Extensive simulations have demonstrated that APCC outperforms state-of-the-art coded computing strategies by 20.3% to 47.5% in terms of task completion delay.

Author Contributions

Conceptualization, Q.Z. and S.Z.; Methodology, Q.Z. and Z.N.; Software, Q.Z.; Validation, Z.N.; Formal analysis, Z.N.; Resources, S.Z.; Writing—original draft, Q.Z.; Writing—review & editing, S.Z.; Project administration, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62341108; in part by the China Postdoctoral Science Foundation under Grant 2023M742011; and in part by the Fundamental Research Funds for the Central Universities under Grant 2242022k60006.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of the Inequality (23)

In this appendix, the proof of the optimal recovery threshold of LCC [15], which guarantees decodability, is briefly introduced to clarify inequality (23). A weakened result is first derived under a multilinearity condition; then, to extend it to general polynomial functions, a construction of multilinear functions from polynomial functions is provided.
The definition of the multilinear function is as follows:
Definition A1. 
For a multilinear function $f(D_1, D_2, \ldots, D_d)$ with degree d, $D_1, D_2, \ldots, D_d$ are its d input variables, and f is linear with respect to each of them.
Under the assumption of the multilinearity of f, the optimal recovery threshold is provided in Lemma 1 of [15] as:
Lemma A1 
([15]). Consider an $(N, S, L, f)$ coded computing problem, where N is the number of workers, and S and L are the maximum numbers of stragglers and colluding workers that can be tolerated, respectively; f is a multilinear function of degree d, and the input data are equally divided into K parts. The optimal recovery threshold for linear coded computing schemes, denoted by $H^*$, is given by
$$H^* = \begin{cases} d(K+L-1)+1, & \text{if } L > 0, \\ \min\left\{ d(K-1)+1,\; N - \left\lfloor \frac{N}{K} \right\rfloor + 1 \right\}, & \text{if } L = 0. \end{cases}$$
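For reference, Lemma A1 translates directly into a small utility (a sketch; the function name is ours):

def optimal_recovery_threshold(N, K, L, d):
    """H* from Lemma A1: LCC with a multilinear f of degree d, K input
    blocks, N workers, and up to L colluding workers."""
    if L > 0:
        return d * (K + L - 1) + 1
    return min(d * (K - 1) + 1, N - N // K + 1)

# Example: with N = 200, K = 20, d = 4,
# L = 0  gives min(4*19+1, 200-10+1) = min(77, 191) = 77;
# L = 20 gives 4*(20+20-1)+1 = 157.
print(optimal_recovery_threshold(200, 20, 0, 4))   # -> 77
print(optimal_recovery_threshold(200, 20, 20, 4))  # -> 157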
In order to generalize to the case of polynomial functions, a construction method of multilinear functions is given in Lemma 4 of [15] as follows:
Lemma A2 
([15]). For a general polynomial function f of degree d, let $\bar{f}$ be the function constructed from f as
$$\bar{f}(D_1, D_2, \ldots, D_d) = \sum_{T \subseteq [1:d]} (-1)^{|T|} f\Big( \sum_{k \in T} D_k \Big).$$
Then, $\bar{f}$ is multilinear with respect to its d inputs. Here, T ranges over all subsets of $[1:d]$, and the degree of $\bar{f}$ equals d because $\bar{f}$ is a linear combination of evaluations of f.
Based on the above two lemmas, Lemma A1 can be extended to general polynomial functions [15]. Moreover, the actual number of results returned by the workers equals $N - S$, which must be at least the recovery threshold. Consequently, to guarantee decodability for general polynomial coded computing, $N - S \geq H^*$ must hold, from which inequality (23) follows.
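As a quick numerical illustration of Lemma A2, the sketch below builds $\bar{f}$ for a toy cubic polynomial and checks linearity in the first argument; the toy function, variable names, and tolerance are our own choices:

import itertools, random

def f(x):                    # a toy degree-3 polynomial
    return x ** 3 + 2 * x

def f_bar(*D):               # the construction of Lemma A2
    d = len(D)
    total = 0.0
    for size in range(d + 1):
        for T in itertools.combinations(range(d), size):
            total += (-1) ** len(T) * f(sum(D[k] for k in T))
    return total

# Multilinearity check in the first argument:
# f_bar(a*x + b*y, D2, D3) == a*f_bar(x, D2, D3) + b*f_bar(y, D2, D3).
x, y, D2, D3 = (random.random() for _ in range(4))
a, b = 2.0, -3.0
lhs = f_bar(a * x + b * y, D2, D3)
rhs = a * f_bar(x, D2, D3) + b * f_bar(y, D2, D3)
assert abs(lhs - rhs) < 1e-9

For this cubic, the construction collapses to $\bar{f}(D_1, D_2, D_3) = -6 D_1 D_2 D_3$: the lower-degree term $2x$ cancels across subsets, leaving only the degree-d multilinear part, which is exactly what Lemma A2 asserts.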

References

  1. Dean, J.; Corrado, G.; Monga, R.; Chen, K.; Devin, M.; Mao, M.; Ranzato, M.; Senior, A.; Tucker, P.; Yang, K.; et al. Large scale distributed deep networks. In Proceedings of the NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 1, p. 25. [Google Scholar]
  2. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  3. Nguyen, G.; Dlugolinsky, S.; Bobák, M.; Tran, V.; López García, Á.; Heredia, I.; Malík, P.; Hluchý, L. Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artif. Intell. Rev. 2019, 52, 77–124. [Google Scholar] [CrossRef]
  4. Sun, Y.; Zhang, F.; Zhao, J.; Zhou, S.; Niu, Z.; Gündüz, D. Coded computation across shared heterogeneous workers with communication delay. IEEE Trans. Signal Process. 2022, 70, 3371–3385. [Google Scholar] [CrossRef]
  5. Dean, J.; Barroso, L.A. The tail at scale. Commun. ACM 2013, 56, 74–80. [Google Scholar] [CrossRef]
  6. Tandon, R.; Lei, Q.; Dimakis, A.G.; Karampatziakis, N. Gradient coding: Avoiding stragglers in distributed learning. In Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; pp. 3368–3376. [Google Scholar]
  7. Lee, K.; Lam, M.; Pedarsani, R.; Papailiopoulos, D.; Ramchandran, K. Speeding Up Distributed Machine Learning Using Codes. IEEE Trans. Inf. Theory 2018, 64, 1514–1529. [Google Scholar] [CrossRef]
  8. Li, S.; Maddah-Ali, M.A.; Yu, Q.; Avestimehr, A.S. A Fundamental Tradeoff Between Computation and Communication in Distributed Computing. IEEE Trans. Inf. Theory 2018, 64, 109–128. [Google Scholar] [CrossRef]
  9. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Polynomial codes: An optimal design for high-dimensional coded matrix multiplication. In Proceedings of the NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; p. 30. [Google Scholar]
  10. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Straggler mitigation in distributed matrix multiplication: Fundamental limits and optimal coding. IEEE Trans. Inf. Theory 2020, 66, 1920–1933. [Google Scholar] [CrossRef]
  11. Ferdinand, N.; Draper, S.C. Hierarchical coded computation. In Proceedings of the 2018 IEEE International Symposium on Information Theory, Vail, CO, USA, 17–22 June 2018; pp. 1620–1624. [Google Scholar]
  12. Reisizadeh, A.; Prakash, S.; Pedarsani, R.; Avestimehr, A.S. Coded computation over heterogeneous clusters. IEEE Trans. Inf. Theory 2019, 65, 4227–4242. [Google Scholar] [CrossRef]
  13. Raghupathi, W.; Raghupathi, V. Big data analytics in healthcare: Promise and potential. Health Inf. Sci. Syst. 2014, 2, 1–10. [Google Scholar] [CrossRef]
  14. McAfee, A.; Brynjolfsson, E.; Davenport, T.H.; Patil, D.; Barton, D. Big data: The management revolution. Harv. Bus. Rev. 2012, 90, 60–68. [Google Scholar]
  15. Yu, Q.; Li, S.; Raviv, N.; Kalan, S.M.M.; Soltanolkotabi, M.; Avestimehr, S.A. Lagrange coded computing: Optimal design for resiliency, security, and privacy. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, Naha, Japan, 16–18 April 2019; pp. 1215–1225. [Google Scholar]
  16. Yang, H.; Lee, J. Secure Distributed Computing With Straggling Servers Using Polynomial Codes. IEEE Trans. Inf. Forensics Secur. 2019, 14, 141–150. [Google Scholar] [CrossRef]
  17. Chang, W.T.; Tandon, R. On the capacity of secure distributed matrix multiplication. In Proceedings of the 2018 IEEE Global Communications Conference, Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6. [Google Scholar]
  18. Aliasgari, M.; Simeone, O.; Kliewer, J. Private and secure distributed matrix multiplication with flexible communication load. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2722–2734. [Google Scholar] [CrossRef]
  19. Kim, M.; Lee, J. Private secure coded computation. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1097–1101. [Google Scholar]
  20. Kakar, J.; Ebadifar, S.; Sezgin, A. On the capacity and straggler-robustness of distributed secure matrix multiplication. IEEE Access 2019, 7, 45783–45799. [Google Scholar] [CrossRef]
  21. Nodehi, H.A.; Najarkolaei, S.R.H.; Maddah-Ali, M.A. Entangled polynomial coding in limited-sharing multi-party computation. In Proceedings of the 2018 IEEE Information Theory Workshop (ITW), Guangzhou, China, 25–29 November 2018; pp. 1–5. [Google Scholar]
  22. Yu, Q.; Avestimehr, A.S. Entangled polynomial codes for secure, private, and batch distributed matrix multiplication: Breaking the “cubic” barrier. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 245–250. [Google Scholar]
  23. Chang, W.T.; Tandon, R. On the upload versus download cost for secure and private matrix multiplication. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5. [Google Scholar]
  24. D’Oliveira, R.G.; El Rouayheb, S.; Karpuk, D. GASP codes for secure distributed matrix multiplication. IEEE Trans. Inf. Theory 2020, 66, 4038–4050. [Google Scholar] [CrossRef]
  25. Akbari-Nodehi, H.; Maddah-Ali, M.A. Secure Coded Multi-Party Computation for Massive Matrix Operations. IEEE Trans. Inf. Theory 2021, 67, 2379–2398. [Google Scholar] [CrossRef]
  26. Tahmasebi, B.; Maddah-Ali, M.A. Private Function Computation. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 1118–1123. [Google Scholar]
  27. Jahani-Nezhad, T.; Maddah-Ali, M.A. Berrut Approximated Coded Computing: Straggler Resistance Beyond Polynomial Computing. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 111–122. [Google Scholar] [CrossRef]
  28. Jahani-Nezhad, T.; Maddah-Ali, M.A. CodedSketch: A coding scheme for distributed computation of approximated matrix multiplication. IEEE Trans. Inf. Theory 2021, 67, 4185–4196. [Google Scholar] [CrossRef]
  29. Soleymani, M.; Ali, R.E.; Mahdavifar, H.; Avestimehr, A.S. ApproxIFER: A model-agnostic approach to resilient and robust prediction serving systems. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Virtual Event, 22 February–1 March 2022; Volume 36, pp. 8342–8350. [Google Scholar]
  30. Fahim, M.; Cadambe, V.R. Numerically stable polynomially coded computing. IEEE Trans. Inf. Theory 2021, 67, 2758–2785. [Google Scholar] [CrossRef]
  31. Ramamoorthy, A.; Tang, L. Numerically Stable Coded Matrix Computations via Circulant and Rotation Matrix Embeddings. IEEE Trans. Inf. Theory 2022, 68, 2684–2703. [Google Scholar] [CrossRef]
  32. Charalambides, N.; Mahdavifar, H.; Hero, A.O. Numerically stable binary gradient coding. In Proceedings of the 2020 IEEE International Symposium on Information Theory, Los Angeles, CA, USA, 21–26 June 2020; pp. 2622–2627. [Google Scholar]
  33. Buyukates, B.; Ulukus, S. Timely distributed computation with stragglers. IEEE Trans. Commun. 2020, 68, 5273–5282. [Google Scholar] [CrossRef]
  34. Hasırcıoğlu, B.; Gómez-Vilardebó, J.; Gündüz, D. Bivariate polynomial coding for efficient distributed matrix multiplication. IEEE J. Sel. Areas Inf. Theory 2021, 2, 814–829. [Google Scholar] [CrossRef]
  35. Ozfatura, E.; Ulukus, S.; Gündüz, D. Straggler-aware distributed learning: Communication–computation latency trade-off. Entropy 2020, 22, 544. [Google Scholar] [CrossRef] [PubMed]
  36. Dutta, S.; Fahim, M.; Haddadpour, F.; Jeong, H.; Cadambe, V.; Grover, P. On the Optimal Recovery Threshold of Coded Matrix Multiplication. IEEE Trans. Inf. Theory 2020, 66, 278–301. [Google Scholar] [CrossRef]
  37. Yang, C.S.; Avestimehr, A.S. Coded computing for secure Boolean computations. IEEE J. Sel. Areas Inf. Theory 2021, 2, 326–337. [Google Scholar] [CrossRef]
  38. Tang, T.; Ali, R.E.; Hashemi, H.; Gangwani, T.; Avestimehr, S.; Annavaram, M. Adaptive verifiable coded computing: Towards fast, secure and private distributed machine learning. In Proceedings of the 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Lyon, France, 30 May–3 June 2022; pp. 628–638. [Google Scholar]
  39. Soleymani, M.; Ali, R.E.; Mahdavifar, H.; Avestimehr, A.S. List-decodable coded computing: Breaking the adversarial toleration barrier. IEEE J. Sel. Areas Inf. Theory 2021, 2, 867–878. [Google Scholar] [CrossRef]
  40. Zhang, F.; Sun, Y.; Zhou, S. Coded computation over heterogeneous workers with random task arrivals. IEEE Commun. Lett. 2021, 25, 2338–2342. [Google Scholar] [CrossRef]
  41. Wu, F.; Chen, L. Latency optimization for coded computation straggled by wireless transmission. IEEE Wirel. Commun. Lett. 2020, 9, 1124–1128. [Google Scholar] [CrossRef]
  42. Van Huynh, N.; Hoang, D.T.; Nguyen, D.N.; Dutkiewicz, E. Joint Coding and Scheduling Optimization for Distributed Learning Over Wireless Edge Networks. IEEE J. Sel. Areas Commun. 2022, 40, 484–498. [Google Scholar] [CrossRef]
  43. Kim, D.; Park, H.; Choi, J.K. Optimal Load Allocation for Coded Distributed Computation in Heterogeneous Clusters. IEEE Trans. Commun. 2021, 69, 44–58. [Google Scholar] [CrossRef]
  44. Berrut, J.P.; Trefethen, L.N. Barycentric lagrange interpolation. SIAM Rev. 2004, 46, 501–517. [Google Scholar] [CrossRef]
  45. Berrut, J.P. Rational functions for guaranteed and experimentally well-conditioned global interpolation. Comput. Math. Appl. 1988, 15, 1–16. [Google Scholar] [CrossRef]
  46. Zeng, Q.; Zhou, S. On the Capacity of Privacy-Preserving and Straggler-Robust Distributed Coded Computing. In Proceedings of the 2021 IEEE/CIC International Conference on Communications in China (ICCC), Xiamen, China, 28–30 July 2021; pp. 664–669. [Google Scholar]
  47. Lawler, E.L.; Wood, D.E. Branch-and-bound methods: A survey. Oper. Res. 1966, 14, 699–719. [Google Scholar] [CrossRef]
Figure 1. The concept of coded computing.
Figure 2. System model and the proposed Adaptive Privacy-preserving Coded Computing (APCC).
Figure 3. The three-step process of APCC.
Figure 4. Hierarchical structure and the cancellation operation.
Figure 5. Delay performance comparison between APCC and LCC for accurate results with L colluding workers ($L > 0$). Settings: $N = 200$, $L = 20$, $d = 4$. The partitioning strategy $\{K_i\}$ of APCC is obtained by the proposed MVD algorithm; r is the number of partitioned sets.
Figure 6. APCC vs. LCC: minimum task completion delay achieved over all possible task divisions $K = K'/r$, for accurate results with L colluding workers ($L > 0$).
Figure 7. APCC vs. LCC and LCC-MMC: minimum task completion delay achieved over all possible task divisions $K = K_{LM}/r = K'/r$, for accurate results without colluding workers ($L = 0$).
Figure 8. APCC vs. BACC: minimum task completion delay achieved over all possible task divisions $K = K'/r$, for approximated results.
Figure 9. Delay performance of APCC with different r and L.