Article
Peer-Review Record

Efficient Secure Multi-Party Computation for Multi-Dimensional Arithmetics and Its Applications†

Cryptography 2025, 9(3), 50; https://doi.org/10.3390/cryptography9030050
by Dongyu Wu 1,*, Bei Liang 1, Zijie Lu 1 and Jintai Ding 2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 April 2025 / Revised: 1 July 2025 / Accepted: 2 July 2025 / Published: 3 July 2025
(This article belongs to the Special Issue Cryptography and Network Security—CANS 2024)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors present a nice synthesis and application of earlier ideas. However, there are serious presentational issues which must be resolved before the paper can be published.

1. Theoretical formalisations are overly complex. There are two ways to define concepts: descriptive (used by the authors) and functional. A functional definition describes only the desired end result, while a descriptive definition describes the process.

I strongly encourage the authors to embrace functional definitions. For instance, the RCOT_m^n definition would be as follows:

The idealised functionality for RCOT generates random u, \Delta and two random shares v_1, v_2 such that v_1 + v_2 = u \cdot \Delta. The values v_1, \Delta are sent to S and u, v_2 are sent to R.

You can still keep the descriptive style of definitions as long as you name the outputs in such a way that you can state the desired functional relation. In the example above, v_1 + v_2 = u \cdot \Delta.
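
For concreteness, a minimal executable sketch of such a functional definition (the modulus, the scalar formulation, and the reading of u as a single choice bit are mine and purely illustrative; the paper's RCOT_m^n would batch n instances over the appropriate field):

```python
import secrets

Q = 2**61 - 1  # illustrative modulus, standing in for the paper's field/ring

def ideal_rcot():
    """Functional definition of the idealised RCOT: sample random u, Delta
    and random additive shares satisfying v_1 + v_2 = u * Delta, then hand
    (v_1, Delta) to S and (u, v_2) to R."""
    u = secrets.randbelow(2)      # random choice bit, as in correlated OT
    delta = secrets.randbelow(Q)  # global correlation Delta
    v1 = secrets.randbelow(Q)     # uniformly random share
    v2 = (u * delta - v1) % Q     # fixed so that v_1 + v_2 = u * Delta
    return (v1, delta), (u, v2)   # (output to S, output to R)

(v1, delta), (u, v2) = ideal_rcot()
assert (v1 + v2) % Q == (u * delta) % Q
```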

Secondly, the authors flip-flop between additive and subtractive secret sharing. Some definitions are given in the additive formalisation (e.g. v_1 + v_2 = u \cdot \Delta) while others are given in subtractive sharing (e.g. RSVOLE v_2 - v_1 = xu). This discrepancy makes validation of some claims slightly more complex. I suggest the additive formalisation, as it generalises naturally to the multiparty setting.

Thirdly, the definitions of the different ideal functionalities are incompatible. Some of them contain elements that are used in formal frameworks for defining universally composable protocols; some are just simple two-party functionalities. Since the authors do not provide any formal UC proofs, these extra elements are redundant and confusing. Remove the (sended, sid) and (receiver, sid) tuples from all protocols.

2. Unnecessary notions. The authors introduce the notion of (s,t)-secret sharing, which is used nowhere in this paper. The authors use only additive and subtractive secret sharing (mostly for two parties). So define only additive and subtractive secret sharing (if you really want to flip-flop between different but equivalent formalisations).

3. Inconsistent notation for matrix operations. The authors use three operations: pointwise multiplication A \cdot B, matrix multiplication AB and tensor multiplication A \otimes B. While these notations are rather standard, they should be defined in Section 2. Also, at some places the authors deviate from this convention and use A \cdot B for matrix multiplication. Please remove these inconsistencies.

4. Halfway correct understanding of Beaver triples. First of all, Beaver triples (A, B, C) can be defined and used over any ring. In particular, we can use Beaver triples that are matrices: tensor triples are the case where A is an m x 1 and B is a 1 x n matrix. In the literature the authors compare their work with, people use matrices of slightly different dimensions.

The main question in this area is the convertibility of Beaver triples. The authors correctly observe that k tensor triples can be converted to Beaver triples of dimensions m x k for A and k x n for B. This can be done with local operations after the offline triple generation phase is completed. As the authors note, this allows reduced storage. However, tensor triples are not special: given triples with m x r A and r x n B, it is straightforward to create triples with m x (kr) for A and (kr) x n for B. Moreover, it is possible to combine Beaver triples with different values of r to get the best representation of the target triple.

The authors also observe that a tensor triple can be pruned back by forgetting some rows and columns. This also holds for other matrix Beaver triples, so this is not a unique property.

The authors do not mention that Beaver triples cannot be locally extended to larger m and n values. Of course, we can represent A = (A_1 A_2)^t and B = (B_1 B_2) and use Beaver triples for the shapes A_i, B_i, but then we need to non-locally compute A_1 B_2 and A_2 B_1. This can be done with more Beaver triples, but it requires one round of interaction.

In other words, tensor triples are not special and do not solve non-trivial Beaver triple conversions better than others.
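
For concreteness, a small numpy sketch of the local amalgamation and pruning described above (computed in the clear, ignoring the secret sharing; the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4, 5, 3

# k tensor triples: a_i is m x 1, b_i is 1 x n, C_i = a_i b_i (an outer product).
a = [rng.integers(0, 100, size=(m, 1)) for _ in range(k)]
b = [rng.integers(0, 100, size=(1, n)) for _ in range(k)]
C = [ai @ bi for ai, bi in zip(a, b)]

# Local amalgamation into one matrix Beaver triple (A, B, AB):
# A is m x k (columns a_1..a_k), B is k x n (rows b_1..b_k), and
# A B = sum_i a_i b_i, so the C-part is just the sum of the C_i.
A = np.hstack(a)
B = np.vstack(b)
assert np.array_equal(A @ B, sum(C))

# Pruning is also local: dropping rows of A and columns of B just
# drops the corresponding rows and columns of the product.
assert np.array_equal(A[1:, :] @ B[:, :2], sum(C)[1:, :2])
```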

5. Issues around COT-based RsVOLE (Fig. 5)

The signature of RRG is undefined. It is not clear whether the PRG(seed_{i,0}) are rows or columns (one can guess). Define vectors v_i = PRG(seed_{i,0}) and place them correctly into V.

It is not clear whether the protocol in Fig. 5 is original or already presented and analysed in [21]. The authors refer to [21] to claim that the protocol is secure. However, the notation in [21] is radically different, so it is hard to trace which proof secures the claim and which protocol is the analogue of Fig. 5. Please reference the exact protocol and the exact theorem, with some clarification of the notation discrepancies.

6. Incorrectly defined triple amalgamation algorithm (Fig. 10)

The protocol here tries to use the Beaver triple conversions detailed in point 4 but does something wrong. You cannot add the vectors u_i and v_i there, since you need to reveal masked matrices to complete the multiplication. So something is wrong here. Write down explicitly how you use these amalgamated triples.

7. Annoying repetitions in Section 4

Two protocols in this section rely on the matrix reformulation of ||x_i - y_j||^2. There is no need to repeat this protocol several times. State a protocol for squared distances once and later use it explicitly in Eigenfaces and FaceNet.
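
For reference, the reformulation can be stated once in matrix form: ||x_i - y_j||^2 = ||x_i||^2 - 2<x_i, y_j> + ||y_j||^2, so if the x_i are stacked as the rows of X and the y_j as the rows of Y (my notation, not necessarily the paper's), the whole matrix of squared distances is D = diag(XX^t) 1^t + 1 diag(YY^t)^t - 2 XY^t, with diag(.) read as the column vector of diagonal entries. Only the cross term XY^t needs triples.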

8. Partial results in Section 5.

Table 3 contains only the communication complexity for the generation of various tensor triples. I would really want to see the total running time for the entire offline phase and a comparison with the Beaver matrix triples from the literature. As explained above, tensor triples can be merged into Beaver matrix triples (locally and at low computational cost), so it should be possible to compare results with the state of the art directly.

You should also compare results with [26], which uses the fastest way to generate Beaver matrix triples [26, Algorithm 1], where a trusted party P2 locally generates Beaver triples and sends them to P0 and P1. No other protocol for Beaver triple generation, including tensor triple generation, can be faster, so it gives a lower time bound for any protocol. As you use computational assumptions (COT, SOT) you will get higher running times, but this fairly shows the slowdown factor.

Table 4. The tensor and the matrix method are equivalent. In both methods we need to open randomly masked columns of the first matrix and randomly masked rows of the second matrix, so there can be no difference in communication complexity. Please clarify what is meant by the matrix method; the only way it can be different is if you use Beaver triples to do all element multiplications separately.

Table 5. You should include [26] in the table as the absolute lower bound.

Minor comments

page 2, line 42: What is a "big model"?

page 6 line 196: K is undefined!

page 6, Definition 4: Why do you need computational security for tensor triples? Explain that briefly. Is this a property of the triples or a property of the generation process? Since you work in the two-party setting, triple generation is only computationally secure. However, the triples themselves can be perfectly secure.

page 6 line 220: Undefined symbol \lambda!

page 8, lines 267-272: Tensor triple generation requires communication complexity Theta(n^2), while naive Beaver triple generation requires communication complexity Theta(n). Thus, the usage of tensor triples for computing a \cdot b is really wasteful if you consider the overall resource consumption, unless you show that under the same security assumptions Beaver triple computation has an additional O(n) overhead anyway. Please comment on this.

page 9, Figure 8: A \cdot B --> AB

page 12, line 367: the <x,x> notation is undefined; use standard matrix notation.

page 13, Section 4.2.5:

Explain briefly why one needs to form a joint vector instead of running pairwise protocols separately and later converting the distance shares to a joint share space between all parties S, P_1, ..., P_k.

page 13, Section 4.3.1:

The entire comparison is unfair; see issue 4 above.

page 16 Table 5 caption: sqaure --> square

Tables 3-8: Please right-align the measurement results, with a proper number of digits, so that the results are visually comparable.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper is well written, and I recommend accepting it.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This work proposes a new approach for boosting the performance of secure multi-party computation (MPC) when multi-dimensional operations are involved. The authors consider a correlation called tensor triples that is an alternative to Beaver triples, in the sense that it maintains pre-computability while achieving better efficiency. They propose a method for generating tensor triples via subfield vector oblivious linear evaluation (VOLE) that can be applied to support Multi-Dimensional Arithmetic Evaluations (including outer product and matrix multiplication). Then, they discuss applications that involve such operations from the areas of Biometric Identification and Machine Learning. Finally, they implement tensor triple generation and matrix multiplication and evaluate the performance gain of the proposed approach in comparison with existing Privacy-Preserving Biometric Identification techniques.

From a practical perspective, this work makes considerable contributions to the implementation of MPC for several timely applications in privacy-preserving data processing. There is a good level of presentation and the paper was relatively easy to follow. I appreciate the fact that the evaluation section includes a comparison with prominent Biometric Identification algorithms.

From a pure scientific perspective though, there is room for improvement. Please see my detailed comments below:

  • The security argumentation is very brief for the standards of a cryptographic journal and more mathematical details are needed. For instance,
    • Theorem 6: why does uniform distribution imply that "this hybrid is indistinguishable from the original interaction"?
    • Theorem 7: can you elaborate on "We emphasize the matrix share... hence not affecting the protocol security."?
    • Theorem 8: why does repetitive application guarantee realization of the matrix product? Is some level of composability implied here? This statement is not straightforward.
  • Since the "definition of outer product triple in [17] is equivalent to the notion of tensor triple" I do not see why introducing new terminology is needed. It seems that the concept of [17] could be applied directly.
  • I believe that the comparison in Table 5 is unfair. The experiments have been run in different settings and probably at very different times; for instance, [43] is a 13-year-old paper. Even if "the statistics in the table already implies the high efficiency of the tensor triple method", Table 5 does not give sufficient evidence of how much is actually achieved.
  • It is not clear to me what the "Offline" results refer to. Does it refer to the preprocessing of the tensor triples? As this is the dominating overhead in the evaluation, the reference should be clear.
  • In Tables 6 and 7, the communication cost of the proposed methodology is higher. Moreover, as the parameter grows, the gap seems to become larger (e.g., in Table 6, the cost for n=1024 is 4x higher). I did not find any discussion of this aspect, and I believe that reflection in the evaluation section should not be restricted only to the positive side.

Other comments:

  • Figure 1: if m,n are parameters of the functionality, why do they need to be given as inputs?
  • Figure 3: Since the realisations are for two parties, I do not see how a generalized TTGen functionality helps readability.
  • line 214: some citation for the AHE-based generation method is needed.
  • Figure 4: I think there is some notation inconsistency in the "outputs". Namely, it should be [u]_0, [v]_0, [W]_0,... instead of u_0,v_0,W_0,...
  • Silent OT should be mentioned first in the OT preliminaries section.
  • p.13: "determined by P2. The parties can announce" -> "determined by P2, the parties can announce"
  • line 408: what is the typical size of a tensor triple?
  • line 456: I do not understand the reference to the RLWE method. I could not find any such method for generating tensor triples in the paper.
  • Table 5: "sqaure matrix multiplication" -> "square matrix multiplication"
  • The conclusion section could include more details on the key findings of the experiments.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The only substantial thing where I disagree with the authors is the treatment of Beaver triples. Just to reiterate: let X and Y be the matrices to be multiplied and let A and B be matrices of the same dimensions, i.e. dim(X) = dim(A) and dim(Y) = dim(B). Assume that the matrix elements come from a ring; then the matrices U = X + A and V = Y + B reveal nothing about X and Y. Let C = AB. Then clearly

UV - AV - UB + C = XY

As U and V are public, shares of the matrix products AV and UB can be computed as linear combinations of shares. Consequently, a Beaver triple of the right dimensions allows computing shares of the matrix product.
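
A short numpy check of this identity, with illustrative dimensions (computed in the clear, without the sharing):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 3, 4, 5

X = rng.integers(0, 100, size=(m, k))  # inputs to be multiplied
Y = rng.integers(0, 100, size=(k, n))
A = rng.integers(0, 100, size=(m, k))  # masks with dim(A) = dim(X)
B = rng.integers(0, 100, size=(k, n))  # and dim(B) = dim(Y)
C = A @ B

U = X + A  # the opened masked matrices
V = Y + B

# U and V are public; AV and UB are linear in the shares of A, B and C,
# so each party obtains its share of XY locally.
assert np.array_equal(U @ V - A @ V - U @ B + C, X @ Y)
```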

The only question is how to assemble a Beaver triple of the right size. In the case of tensor triples the assembly is straightforward, as

(a_1 a_2 ... a_k)(b_1 b_2 ... b_k)^t

is just the sum of the tensors a_i b_i^t.

However, this is just one way to assemble. Instead, we can express them as outer products of two-column matrices: (a_1 a_2)(b_1 b_2)^t, (a_3 a_4)(b_3 b_4)^t, ...

Thus, there are lots of ways to define relevant Beaver triples. All of them are equally valid as long as you can concatenate them into matrices of the right dimensions.

Note that all of these triples can be cropped: from A you can crop rows and from B you can crop columns. It is not possible to crop columns of A and rows of B. The tensor triple is special in this setting, as there are no columns to crop in A and no rows to crop in B. But this is not something really special.

Also, the tensor triples proposed in this article are the most wasteful in terms of storage space. By using a 2-column A and a 2-row B we get a 25% storage reduction.

Note that it is not possible to add rows to A and columns to B without the need for additional multiplication operations. That is, tensor triples do not scale when you need to go beyond the original dimensions of a and b.

==

The authors should acknowledge this fact and discuss it explicitly. I find that the most appropriate place is in Sect. 3.2.4, where the authors actually do exactly this Beaver triple assembly from l-column matrices and raw tensor triples.

Another fact that the authors should explicitly acknowledge is the impossibility of upscaling. You cannot extend tensor triples by increasing the vectors a and b.

Adding these paragraphs would significantly improve the readability.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf
