Article

Divisions and Square Roots with Tight Error Analysis from Newton–Raphson Iteration in Secure Fixed-Point Arithmetic

Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
*
Author to whom correspondence should be addressed.
Cryptography 2023, 7(3), 43; https://doi.org/10.3390/cryptography7030043
Submission received: 11 July 2023 / Revised: 7 September 2023 / Accepted: 8 September 2023 / Published: 12 September 2023
(This article belongs to the Special Issue Cyber Security, Cryptology and Machine Learning)

Abstract

In this paper, we present new variants of Newton–Raphson-based protocols for the secure computation of the reciprocal and the (reciprocal) square root. The protocols rely on secure fixed-point arithmetic with arbitrary precision parameterized by the total bit length of the fixed-point numbers and the bit length of the fractional part. We perform a rigorous error analysis aiming for tight accuracy claims while minimizing the overall cost of the protocols. Due to the nature of secure fixed-point arithmetic, we perform the analysis in terms of absolute errors. Whenever possible, we allow for stochastic (or probabilistic) rounding as an efficient alternative to deterministic rounding. We also present a new protocol for secure integer division based on our protocol for secure fixed-point reciprocals. The resulting protocol is parameterized by the bit length of the inputs and yields exact results for the integral quotient and remainder. The protocol is very efficient, minimizing the number of secure comparisons. Similarly, we present a new protocol for integer square roots based on our protocol for secure fixed-point square roots. The quadratic convergence of the Newton–Raphson method implies a logarithmic number of iterations as a function of the required precision (independent of the input value). The standard error analysis of the Newton–Raphson method focuses on the termination condition for attaining the required precision, assuming sufficiently precise floating-point arithmetic. We perform an intricate error analysis assuming fixed-point arithmetic of minimal precision throughout and minimizing the number of iterations in the worst case.

1. Introduction

In this paper, we design and analyze protocols for secure fixed-point arithmetic as a practical alternative to secure floating-point arithmetic. From a numerical analysis perspective, floating-point arithmetic is very useful as floating-point numbers scale dynamically and relative errors can be controlled appropriately. Performance-wise, however, secure floating-point arithmetic is very demanding. Compared to secure integer arithmetic, for example, full support for secure floating-point numbers is usually orders of magnitude more costly. This holds across many frameworks for secure computation, ranging from all flavors of multiparty computation to fully homomorphic encryption and indistinguishability obfuscation.
Secure fixed-point arithmetic strikes a balance between performance and usability. Addition/subtraction is as efficient as for integers, whereas multiplication is costlier but relatively straightforward. Our focus in this paper is on more advanced operations such as division (via the reciprocal) and taking square roots. We present new protocols based on Newton–Raphson iteration, along with a detailed error analysis for strict accuracy guarantees at minimal cost. Moreover, turning the tables around, we show how to obtain efficient protocols for secure integer division and integer square roots from their secure fixed-point counterparts.
The Newton–Raphson method has been studied extensively in the literature on secure fixed-point arithmetic, starting with the paper by Algesheimer et al. [1]. This important paper contained the groundwork for the secure computation of the reciprocal, including a thorough error analysis, in fact aimed at a direct application to secure integer division. The works by Catrina et al. [2,3] presented the basic foundation for secure fixed-point arithmetic, also introducing probabilistic rounding as an efficient alternative to deterministic rounding. In this paper, we will closely follow the Newton–Raphson-based protocol for the reciprocal from [2]. However, we will fine-tune the use of probabilistic vs. deterministic rounding, limiting the number of truncated bits as much as possible, to guarantee an absolute error below $2^{-f}$ for any desired precision of f fractional bits.
In this paper, we will also use the Newton–Raphson method for the secure computation of the reciprocal square root. Prior work by Liedel [4] and follow-up work by Aly and Smart [5] used Goldschmidt's method for the reciprocal square root. However, these papers lacked a complete error analysis and did not guarantee an absolute error below $2^{-f}$ for any desired precision of f fractional bits. In this paper, we will present a fine-tuned secure protocol for the reciprocal square root and a detailed error analysis, following the same approach as for the reciprocal. We will also extend this protocol to compute the square root, with the same guarantee for the absolute error.
We note that the error analysis of applications of the Newton–Raphson method commonly focuses on bounds for the relative error assuming floating-point arithmetic. Research into the accuracy of fixed-point arithmetic is in general rather limited. Sources like [6,7,8] (Section 4.2, in particular) have treated some basic aspects. For instance, Wilkinson [7] already covered the basic idea that the inner product of two vectors $x, y$ can be computed accurately by accumulating the terms $x_i\,y_i$ exactly and only rounding the final sum to the desired precision; this idea carries over to the setting of secure computation, see Table 2 in [2]. A further aspect of the secure use of the Newton–Raphson method is that it should always be run for the same (worst-case) number of iterations to avoid leaking information about the input. In this paper, we present the first detailed analysis taking all these aspects into account.
We present our solutions in a generic way, assuming secure integer and fixed-point arithmetic with a small set of basic operations. Each basic operation needs to be implemented by means of a secure protocol, operating on either secret-shared, encrypted, or encoded values, depending on the underlying framework for secure computation. Although the performance of these protocols varies across frameworks, the relative performance behaves similarly between operations like secure addition, multiplication, or comparison, as well as the secure generation of random bits. For concreteness, we will focus on secure multiparty computation (MPC) as the underlying framework. Specifically, we consider the use of probabilistic rounding (versus deterministic rounding) to limit the cost of secure fixed-point multiplications. Many results, however, carry over to related areas in cryptography such as (fully) homomorphic encryption.
The paper is organized as follows. Above, we elaborated on the state of the art for the secure fixed-point computation of the reciprocal and the reciprocal square root, emphasizing the lack of detailed error analyses. Section 2 explains some basic aspects of MPC and provides a brief introduction to secure fixed-point arithmetic; in particular, some details about the use of probabilistic rounding are presented, and the basics of the Newton–Raphson method are highlighted. Section 3 presents our solution for the secure computation of the reciprocal, together with a tight error analysis achieving a given fixed-point precision while minimizing the computational cost. In Section 4, we demonstrate a direct application of the secure fixed-point reciprocal, namely for efficient secure integer division (with the remainder). Section 5 then presents our solution for the secure computation of the reciprocal square root, essentially following the same approach as for the reciprocal, although the details are more intricate. In Section 6 we demonstrate a direct application of the secure fixed-point reciprocal square root, namely for the efficient secure computation of fixed-point square roots with precise error bounds, which we use in turn for efficient secure integer square roots. We conclude in Section 7 and mention some applications and concurrent work on a related problem. Finally, Appendix A collects all lemmas and proofs left out of the main text; all theorems and proofs are included in the main text.

2. Preliminaries

Below, we provide the background on secure fixed-point arithmetic underlying all protocols in this paper. We also discuss the concept of probabilistic rounding and briefly review the Newton–Raphson method.

2.1. Secure Computation

We present our protocols for the secure computation of the reciprocal and the (reciprocal) square root in terms of an arithmetic black box (following, e.g., [9,10,11]). The protocols are specified in pseudocode, using a limited set of operations commonly supported in many MPC frameworks. The parties executing these operations are suppressed from the notation.
We use $[[a]]$ to denote a secure representation of value a. That is, $[[a]]$ can be thought of as either a secret-shared value a or a public-key (homomorphic) encryption of a value a. We let $\mathsf{Open}([[a]])$ denote the pooling of (decryption) shares to reveal the value of a in the clear. Secure arithmetic over a finite field (or finite ring) using $+, -, \cdot, /$ is assumed to be available. The common representation of integral and fixed-point numbers as $\ell$-bit integers in a bounded range $[-2^{\ell-1}, 2^{\ell-1}) \subset \mathbb{Z}_N$ is assumed, where $2^{\ell+\kappa} < N$ for security parameter κ. This allows for efficient secure comparisons $<, \le, >, \ge, =, \ne$. To denote a uniformly random secure bit b, we write $[[b]] \in_R \{0, 1\}$. Similarly, we write $[[r]] \in_R \{0, 1, \ldots, 2^{\ell+\kappa} - 1\}$ to denote a secure integer r distributed sufficiently randomly such that the statistical distance $\Delta(r;\, 2^\ell + r) < 2^{-\kappa}$, which is negligible as a function of κ.
As a more advanced primitive, we assume the availability of the operation $[[v]] \leftarrow \mathsf{Scale}([[a]])$, for $a \ne 0$. Here, $v = \pm 2^k$, for some $k \in \mathbb{Z}$, is uniquely determined by the constraint $\tfrac12 \le a\,v < 1$. Similarly, we use $[[v]], [[v^{1/2}]] \leftarrow \mathsf{Scale}([[a]])$ to denote the same operation with the additional constraint that k is even.
Efficient implementations for these operations are assumed. The round complexity is typically either constant or logarithmic. To ensure logarithmic round complexity of $O(\log \ell)$ for our protocols operating on $\ell$-bit fixed-point numbers, it suffices that basic secure arithmetic $+, -, *, /$ takes O(1) rounds and that secure comparison < and $\mathsf{Scale}([[a]])$ take $O(\log \ell)$ rounds.

2.2. Secure Fixed-Point Arithmetic

We follow the model for secure fixed-point arithmetic put forth by Catrina et al. [2,3]. For $\ell > f \ge 0$, the set $Q_{\ell,f}$ of $\ell$-bit fixed-point numbers with f fractional bits is defined as
$$Q_{\ell,f} = \big\{\, \bar{x}\,2^{-f} : \bar{x} \in \mathbb{Z},\; -2^{\ell-1} \le \bar{x} < 2^{\ell-1} \,\big\}.$$
The integer part of a fixed-point number thus consists of $e = \ell - f$ bits, of which the most significant bit represents the sign. Phrased differently, we use two's complement for the binary representation of fixed-point numbers $x \in Q_{\ell,f}$:
$$x = (d_{e-1} \cdots d_0\,.\,d_{-1} \cdots d_{-f})_2 = -d_{e-1}\,2^{e-1} + \sum_{i=-f}^{e-2} d_i\,2^i, \quad \text{with } d_i \in \{0, 1\}.$$
The value $\delta_f = 2^{-f}$ corresponding to the least significant bit of x is also loosely referred to as the precision.
For the implementation of fixed-point arithmetic, a number $x = \bar{x}\,2^{-f} \in Q_{\ell,f}$ is simply represented by the integer $\bar{x}$. This integer representation is particularly convenient for the implementation of secure fixed-point arithmetic, e.g., when all computation is carried out with secret-shared numbers over a prime field. The factor $2^{-f}$ is publicly known and is only used when the results are output as fixed-point numbers. The actual calculations are performed with integer values only.
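As a concrete illustration of this representation, consider the following minimal Python sketch (the names encode and decode are ours, not from the paper); it shows that addition acts directly on the integer representations, as derived next.

```python
# Integer representation of Q_{l,f}: x = x_bar * 2**-f is stored as the
# l-bit integer x_bar. Illustrative sketch; parameter names are ours.
L, F = 16, 8    # l = 16 total bits, f = 8 fractional bits

def encode(x: float) -> int:
    x_bar = round(x * 2**F)
    assert -2**(L - 1) <= x_bar < 2**(L - 1)
    return x_bar

def decode(x_bar: int) -> float:
    return x_bar * 2**-F

# Addition acts directly on representations: enc(x) + enc(y) represents x + y.
assert decode(encode(1.5) + encode(-0.25)) == 1.25
```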
The sum of two fixed-point numbers x and y is obtained by adding their integer representations. That is, setting $\overline{x+y} = \bar{x} + \bar{y}$ gives the correct result:
$$\overline{x+y}\;2^{-f} = (\bar{x} + \bar{y})\,2^{-f} = \bar{x}\,2^{-f} + \bar{y}\,2^{-f} = x + y.$$
For the product of a fixed-point number x and an integer t, we set $\overline{t\,x} = t\,\bar{x}$ to obtain the desired result:
$$\overline{t\,x}\;2^{-f} = (t\,\bar{x})\,2^{-f} = t\,(\bar{x}\,2^{-f}) = t\,x.$$
Computing the product of two fixed-point numbers, however, is slightly more involved. Simply multiplying the integer representations $\bar{x}$ and $\bar{y}$ does not yield a useful result for $\overline{x\,y}$:
$$\bar{x}\,\bar{y}\;2^{-f} - x\,y = (x\,2^f)(y\,2^f)\,2^{-f} - x\,y = x\,y\,2^f - x\,y \ne 0.$$
We therefore divide $\bar{x}\,\bar{y}$ by $2^f$ and apply some form of rounding to obtain an integral result. For instance, we may use $\lfloor \bar{x}\,\bar{y}\,2^{-f} \rceil$ as a close approximation of $\overline{x\,y}$, where $\lfloor\cdot\rceil$ denotes rounding to the nearest integer:
$$\big|\,\lfloor \bar{x}\,\bar{y}\,2^{-f} \rceil\,2^{-f} - x\,y\,\big| = \big|\,\lfloor x\,y\,2^f \rceil\,2^{-f} - x\,y\,\big| = \big|\,\lfloor x\,y\,2^f \rceil - x\,y\,2^f\,\big|\,2^{-f} \le \tfrac12\,2^{-f} = \tfrac12\,\delta_f.$$
By (deterministically) rounding to the nearest integer, the absolute error is limited to $\tfrac12\,\delta_f$ in the worst case. For reasons of efficiency, however, we will often allow a slightly larger error of $\delta_f$ in the worst case by using probabilistic rounding.
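The following Python sketch (ours, in the clear) mirrors this deterministically rounded product; the function name fxp_mul_det is illustrative.

```python
# Fixed-point product with deterministic rounding to the nearest integer:
# the absolute error relative to the exact product is at most (1/2)*delta_f.
F = 16

def fxp_mul_det(x_bar: int, y_bar: int) -> int:
    # floor((x_bar*y_bar + 2**(F-1)) / 2**F) rounds x_bar*y_bar/2**F to nearest
    return (x_bar * y_bar + 2**(F - 1)) >> F

x_bar, y_bar = round(1.25 * 2**F), round(0.3 * 2**F)
exact = x_bar * y_bar / 2**(2 * F)        # exact product of the represented values
approx = fxp_mul_det(x_bar, y_bar) / 2**F
assert abs(approx - exact) <= 2**-(F + 1)
```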
Remark 1.
In the remainder of this paper, we will use the integer representation of fixed-point numbers in the pseudocode of the algorithms. For a better intuitive understanding, however, we consider the actual fixed-point numbers in the error analyses. Concretely, if we write x, this means x in the analyses but $\bar{x}$ in the algorithms.

2.3. Probabilistic Rounding

Apart from the primitives for secure computation introduced above, we will use two specific methods for rounding secure fixed-point numbers. Algorithm 1 covers both methods, referred to as deterministic and probabilistic rounding, respectively. Deterministic rounding is the common method of rounding a to the nearest integer $\lfloor a \rceil$. Probabilistic rounding [2,3] yields either $\lfloor a \rfloor$ or $\lceil a \rceil$ as a result, where the value closest to a tends to be more likely.
Remark 2.
Probabilistic (or stochastic) rounding is applied in various research fields, including machine learning, ODEs and PDEs, quantum computing, and digital signal processing, usually in combination with a severe limitation on numerical precision (see, for instance, [12,13,14,15]). The latter condition makes probabilistic rounding desirable in these cases, because it ensures zero-mean rounding errors and avoids the problem of stagnation, where small values are lost to rounding when they are added to an increasingly large accumulator [16]. However, the use of a randomness source may be expensive, as the number of random bits (entropy) varies with the probability distribution required for the rounding errors.
To make the distinction between deterministic rounding and probabilistic rounding more concrete, consider the following equation for the exact result of the product $x\,y$:
$$x\,y\;2^f = \lfloor x\,y\;2^f \rfloor + r.$$
The first term on the right-hand side captures the integer part of $x\,y$ together with the first f fractional bits, while r contains the remaining f fractional bits; hence, $r \in [0, 1)$. The probabilistically rounded result $\lfloor x\,y \rceil_\$$ then yields
$$\lfloor x\,y \rceil_\$ = \begin{cases} x\,y - r\,\delta_f & \text{with probability } 1 - r, \\ x\,y + (1 - r)\,\delta_f & \text{with probability } r. \end{cases}$$
The maximum difference $\delta_f$ between $x\,y$ and $\lfloor x\,y \rceil_\$$ would occur when $\lfloor x\,y\,2^f \rfloor = x\,y\,2^f$ and $\lfloor x\,y \rceil_\$ = x\,y + \delta_f$, hence only when r = 0. This happens with probability 0, so for the probabilistic rounding error e after a single multiplication, we have $|e| < \delta_f$. As always, for the deterministic rounding error e, we have $|e| \le \tfrac12\,\delta_f$.
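A minimal cleartext sketch of this distribution (ours; the real protocol operates on secret-shared values, see Algorithm 1 below):

```python
# Probabilistic rounding of x_bar*y_bar / 2**F: returns the floor with
# probability 1-r and the floor plus one with probability r, so the
# rounding error has zero mean. Cleartext sketch only.
import random

F = 16

def fxp_mul_prob(x_bar: int, y_bar: int) -> int:
    p = x_bar * y_bar
    q, rem = p >> F, p & (2**F - 1)            # p = q*2**F + rem, r = rem/2**F
    return q + (random.randrange(2**F) < rem)  # q+1 with probability r

x_bar, y_bar = 3 * 2**(F - 2), 21845           # 0.75 and roughly 1/3
exact = x_bar * y_bar / 2**(2 * F)
avg = sum(fxp_mul_prob(x_bar, y_bar) for _ in range(10000)) / 10000 / 2**F
assert abs(avg - exact) < 2**-F                # holds with overwhelming probability
```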
As can be seen from Algorithm 1, deterministic rounding in MPC is significantly more expensive than probabilistic rounding due to the use of the secure comparison $c < [[r]]$ in line 10. Given the bits $[[r_0]], \ldots, [[r_{\nu-1}]]$ of $[[r]]$ and the bits of c, a common implementation of $c < [[r]]$ takes approximately ν secure multiplications in $\log_2 \nu$ rounds, whereas the other parts of the algorithm commonly take O(1) rounds (the asymptotic round complexity for secure comparison can be limited to O(1) rounds following [10], but the hidden constant is too large for practical purposes). For the deterministic rounding of $a/2^\nu$ to the nearest integer, we first add $2^{\nu-1}$ to a and then truncate the ν least significant bits. The comparison $c < [[r]]$ is needed to obtain the correct output. For probabilistic rounding, we omit the corrections in lines 2 and 10, saving the work for a secure comparison.
Algorithm 1 $\mathsf{Round}_\nu([[a]], mode = probabilistic)$ ▹ $-2^{\ell+\nu-1} \le a < 2^{\ell+\nu-1}$
1: if mode = deterministic then
2:    $[[a]] \leftarrow [[a]] + 2^{\nu-1}$
3: $[[r_0]], \ldots, [[r_{\nu-1}]] \in_R \{0, 1\}$ ▹ ν random bits
4: $[[r]] \leftarrow \sum_{i=0}^{\nu-1} [[r_i]]\,2^i$
5: $[[r']] \in_R \{0, 1, \ldots, 2^{\ell+\kappa} - 1\}$ ▹ security parameter κ
6: $c \leftarrow \mathsf{Open}(2^{\ell-1+\nu} + [[a]] + [[r]] + 2^\nu\,[[r']])$
7: $c \leftarrow c \bmod 2^\nu$
8: $[[b]] \leftarrow ([[a]] + [[r]] - c)/2^\nu$ ▹ $b = \lfloor a/2^\nu \rceil_\$$
9: if mode = deterministic then
10:    $[[b]] \leftarrow [[b]] - (c < [[r]])$ ▹ $b = \lfloor a/2^\nu \rceil$
11: return $[[b]]$ ▹ $-2^{\ell-1} \le b < 2^{\ell-1}$
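To clarify the arithmetic in Algorithm 1, here is a cleartext mock-up (ours): all secret values are plain integers and Open is the identity, so it exercises the protocol's arithmetic but offers no security.

```python
# Cleartext mock-up of Algorithm 1 (Round_nu); comments refer to its lines.
import random

def round_nu(a: int, nu: int, ell: int, kappa: int, deterministic: bool = False) -> int:
    if deterministic:                                # lines 1-2
        a = a + 2**(nu - 1)
    r = random.randrange(2**nu)                      # lines 3-4: nu random bits
    r_prime = random.randrange(2**(ell + kappa))     # line 5: statistical mask
    c = 2**(ell - 1 + nu) + a + r + 2**nu * r_prime  # line 6: the opened value
    c = c % 2**nu                                    # line 7
    b = (a + r - c) // 2**nu                         # line 8: floor(a/2**nu) or +1
    if deterministic:
        b = b - (c < r)                              # line 10: round-to-nearest
    return b

assert round_nu(101, 3, ell=16, kappa=40, deterministic=True) == 13  # 101/8 = 12.625
assert round_nu(101, 3, ell=16, kappa=40) in (12, 13)                # probabilistic
```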

2.4. Newton–Raphson Method

The Newton–Raphson method (also known as Newton's method) is a numerical procedure to find roots of functions. The method has been known for centuries and extensively studied and analyzed in the literature (see, e.g., ref. [17] for a general description of the method and [18] for a historical overview of its convergence properties). Without providing further details on the derivation, we simply state that, given an approximation $c_i$ to the root of a function $f = f(c)$, better approximations may be found in an iterative fashion using the update formula:
$$c_{i+1} = c_i - \frac{f(c_i)}{f'(c_i)}. \qquad (1)$$
There are a few conditions that must be satisfied for the Newton–Raphson method to work. For the moment, it suffices to say that an important aspect of the method is that it requires an initial approximation, which needs to be of sufficient accuracy.
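For intuition, a short float example of the update formula (ours), applied to $f(c) = c^2 - 2$ with root $\sqrt{2}$:

```python
# Newton-Raphson iteration c <- c - f(c)/f'(c); quadratic convergence roughly
# doubles the number of correct digits per step. Plain float demo.
c = 1.5                                  # initial approximation of sqrt(2)
for _ in range(5):
    c = c - (c * c - 2) / (2 * c)
assert abs(c - 2**0.5) < 1e-12
```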

3. Reciprocal

In this section, we consider the secure computation of the reciprocal using the Newton–Raphson method. This approximation of the reciprocal will also serve as a basis for secure integer division in Section 4.
We perform a tight error analysis to guarantee an absolute error not exceeding $\delta_f = 2^{-f}$ while minimizing the additional precision used during the computation.

3.1. Secure Computation

The reciprocal function evaluates $[[1/a]]$ for a secret-shared value $[[a]]$, $a \ne 0$; see Algorithm 2. As a first step towards a good initial approximation, $[[a]]$ is scaled to $[[b]] = [[a]]\,[[v]]$, where $v = \pm 2^k$ for some $k \in \mathbb{Z}$. The scaling factor v is chosen such that $b \in [0.5, 1)$. On this interval, the line $3 - 2b$ is a good approximation of $1/b$, with equality at the endpoints b = 0.5 and b = 1, and the maximum error occurring at $b = 1/\sqrt{2}$. Shifting this line downward by half the maximum error halves the maximum (absolute) error and results in the initial approximation (as in [3], which in turn relies on [19]):
$$[[c_0]] = 3 - \alpha - 2\,[[b]], \qquad (2)$$
with $\alpha = 3/2 - \sqrt{2} \approx 0.085786$. Compared to 1/b, $c_0$ has a maximum error of α (at b = 0.5, $b = 1/\sqrt{2}$, b = 1). The constant term $3 - \alpha$ may be truncated to whatever precision is used in the computations. The multiplication of $[[b]]$ by 2 is essentially free, as it can be performed locally by the parties without truncation.
Algorithm 2 $\mathsf{Reciprocal}([[a]], n = 0)$ ▹ $-2^{\ell-1} \le a < 2^{\ell-1}$
1: $[[v]] \leftarrow \mathsf{Scale}([[a]])$ ▹ $v = \pm 2^k$, $k \in \mathbb{Z}$
2: $[[b]] \leftarrow \mathsf{Round}_{f-n}([[a]]\,[[v]])$ ▹ $2^{f+n-1} \le b < 2^{f+n}$
3: $\alpha \leftarrow 3/2 - \sqrt{2}$
4: $\theta \leftarrow \lceil \log_2 \log_\alpha 2^{-(f+n)} \rceil$
5: $[[c_0]] \leftarrow (3 - \alpha)\,2^{f+n} - 2\,[[b]]$
6: for i = 1 to θ do
7:    $[[z]] \leftarrow 2 - \mathsf{Round}_{f+n}([[c_{i-1}]]\,[[b]])$
8:    $[[c_i]] \leftarrow \mathsf{Round}_{f+n}([[c_{i-1}]]\,[[z]])$
9: $[[d_\theta]] \leftarrow \mathsf{Round}_{f+n}([[c_\theta]]\,[[v]], \mathit{deterministic})$
10: return $[[d_\theta]]$ ▹ $-2^{\ell-1} \le d_\theta < 2^{\ell-1}$
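The following cleartext Python simulation of Algorithm 2 (ours) works on integer representations and replaces probabilistic rounding by round-to-nearest, whose error is no larger; it checks the arithmetic only and provides no security.

```python
# Cleartext simulation of Algorithm 2 (Reciprocal); comments refer to its lines.
import math

def shift_round(x: int, s: int) -> int:
    # x * 2**s, rounding to nearest when s < 0 (deterministic stand-in for Round)
    return x << s if s >= 0 else (x + (1 << (-s - 1))) >> -s

def reciprocal_fxp(a_bar: int, f: int, n: int = 4) -> int:
    F = f + n                                    # working precision of f+n bits
    sign = 1 if a_bar > 0 else -1
    k = f - abs(a_bar).bit_length()              # line 1: v = sign*2**k, a*v in [0.5,1)
    b_bar = shift_round(sign * a_bar, k + n)     # line 2: b with F fractional bits
    alpha = 3 / 2 - math.sqrt(2)                 # line 3
    theta = math.ceil(math.log2(F / math.log2(1 / alpha)))   # line 4
    c_bar = round((3 - alpha) * 2**F) - 2 * b_bar            # line 5
    for _ in range(theta):                       # lines 6-8
        z_bar = 2 * 2**F - shift_round(c_bar * b_bar, -F)
        c_bar = shift_round(c_bar * z_bar, -F)
    return shift_round(sign * c_bar, k - n)      # lines 9-10: d = c_theta * v

f = 16
d = reciprocal_fxp(3 << f, f) / 2**f             # 1/3 with a = 3
assert abs(d - 1 / 3) <= 2**-f
```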
Given the initial approximation, successive approximations are then computed using
$$[[c_{i+1}]] = [[c_i]]\,\big(2 - [[c_i]]\,[[b]]\big), \qquad (3)$$
which is obtained by instantiating the Newton–Raphson method in (1) with $f(c) = b - 1/c$.
After θ iterations, with θ independent of the input value a, the final approximation for 1/a is obtained from $c_\theta \approx 1/(a\,v)$ as follows:
$$[[d_\theta]] = [[c_\theta]]\,[[v]]. \qquad (4)$$
The required number of iterations θ will be determined below such that the final error for $c_\theta$ does not exceed $\delta_f = 2^{-f}$, assuming exact arithmetic. Subsequently, we will determine the required number of additional bits n for Algorithm 2, taking into account all (rounding) errors. For better readability, we will drop the secret-shared brackets in the remainder of this section.
Under the right circumstances, the Newton–Raphson method converges quadratically to the (nearest) root of a given function, as we will show next. First, with $c = 1/b$, define $\epsilon_i = c - c_i$ as the iteration error. Then, applying (3) and assuming exact arithmetic, we find
$$\epsilon_{i+1} = c - c_i\,(2 - c_i\,b) = c - (c - \epsilon_i)\big(2 - (c - \epsilon_i)\,b\big) = \epsilon_i^2\,b.$$
Since $b \in [0.5, 1)$ and $|\epsilon_0| \le \alpha < 1$, we see that quadratic convergence is guaranteed from the start:
$$|\epsilon_i| = b^{2^i - 1}\,|\epsilon_0|^{2^i} \le \alpha^{2^i}. \qquad (5)$$
To achieve $|\epsilon_\theta| \le \delta_f$, we thus set
$$\theta = \big\lceil \log_2 \log_\alpha \delta_f \big\rceil. \qquad (6)$$
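A quick numeric check of (6) (ours): the iteration count grows only logarithmically in f.

```python
# theta = ceil(log2(log_alpha(delta_f))) = ceil(log2(f / log2(1/alpha))).
import math

alpha = 3 / 2 - math.sqrt(2)
for f in (8, 16, 32, 64, 112):
    theta = math.ceil(math.log2(f / math.log2(1 / alpha)))
    print(f, theta)     # prints 8->2, 16->3, 32->4, 64->5, 112->5
```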
Remark 3.
The behavior at b = 1 determines the number of iterations. This observation motivates changing the slope and offset of the linear initial approximation such that the error is slightly smaller at b = 1 than it is at b = 0.5. If the difference is not too large, the solution at b = 0.5 will “catch up” with the solution at b = 1 within a certain number of iterations. In some cases, this may save an iteration. For instance, the initial approximation
$$[[\varsigma_0]] = 2.8312530517578125 - 1.890625\,[[b]]$$
saves an iteration for various values of $f \ge 29$, including the most common choices for f in this range, namely $f = 2^n$ with $n \in [5, 10]$. For these larger values of f, rounding is most expensive, and thus saving iterations is most valuable. The approximation $\varsigma_0$ comes with two disadvantages. Firstly, b is multiplied by a number with six fractional digits, instead of an integer; still, the approximation is more efficient in those cases where an iteration is saved. Secondly, the required number of iterations is not as straightforward to compute as it is for $c_0$, because it is no longer determined by the situation at a single point. With $\varsigma_0$, the largest error is generally attained at a point close to the middle of [0.5, 1), which slowly shifts to the right for larger values of f.
We further note that quadratic polynomials are also an option. For instance, the following approximation is quite accurate and behaves well during the Newton–Raphson process:
$$[[\omega_0]] = 3\,[[b]]^2 - 6.5\,[[b]] + 4.51425.$$
Quadratic polynomials are more expensive due to the computation of [ [ b ] ] 2 , which cannot be performed locally. This makes using quadratic polynomials only worthwhile when it saves iterations—and thus multiplications—in the computations that follow. Unfortunately, this is only true for relatively low values of f, in which cases we save exactly one multiplication in the entire computation. For higher values of f, despite leading to more accurate intermediate approximations, the same number of iterations is required, and hence there is no gain. Because of this, we will not study quadratic polynomials any further and stick to the simpler linear functions. Moreover, to keep things simple in our algorithms and analyses, we will stick to the approximation in (2).
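The claimed accuracies of these initial approximations are easy to check numerically; the sketch below (ours) evaluates the maximum absolute deviation from 1/b on a fine grid.

```python
# Maximum absolute error of the three initial approximations on [0.5, 1).
grid = [0.5 + i / 10**5 for i in range(50000)]
max_err = lambda g: max(abs(g(b) - 1 / b) for b in grid)

alpha = 3 / 2 - 2**0.5
print(max_err(lambda b: 3 - alpha - 2 * b))                  # ~0.0858 = alpha
print(max_err(lambda b: 2.8312530517578125 - 1.890625 * b))  # ~0.114, at b = 0.5
print(max_err(lambda b: 3 * b * b - 6.5 * b + 4.51425))      # ~0.030 (quadratic)
```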

3.2. Tight Error Analysis without Scaling

In this section, we analyze the error $\epsilon_\theta = c - c_\theta$ in the computation of $c = 1/b$ for $b \in [0.5, 1)$. We determine a tight bound for $|\epsilon_\theta|$ taking into account all (rounding) errors, assuming fixed-point arithmetic with f fractional bits in Algorithm 2 (i.e., with n = 0). In Section 3.3, we will use this bound to determine the minimal number of additional bits n needed to guarantee that the absolute error for 1/a is limited to $\delta_f = 2^{-f}$, also taking into account the errors due to scaling.
Because we use probabilistic rounding in Algorithm 2, each iteration adds a rounding error of $3\,\delta_f$ in the worst case; see Lemma A2. Due to the quadratic convergence, however, the influence of these rounding errors is limited for subsequent iterations. With the help of Lemma A1, which bounds the error $|\epsilon_{\theta-1}|$ for the penultimate iteration, we are able to give a tight bound for the total error after θ iterations.
Theorem 1.
If the Newton–Raphson method is used to compute 1/b for $b \in [0.5, 1)$, employing initial approximation (2), and computing the number of iterations θ with (6), then $|\epsilon_\theta| < \rho\,\delta_f$, where ρ = 3.05.
Proof. 
Clearly, if θ = 0, the initial error is already below $\delta_f$. Because no iterations are performed, no further errors are introduced, and the final error remains below $\delta_f$.
For the cases in which θ = 1 or θ = 2, we exhaustively compute the error for all possible inputs b, considering all rounding possibilities. This covers the values $4 \le f \le 14$ and yields a maximum value for $|\epsilon_\theta|$ of approximately $2.88\,\delta_f$.
For larger values of f, we follow a different approach: firstly, we derive an expression that bounds the absolute error as a function of f and θ. Secondly, we compute the value of the error bound for f = 15 (θ = 3), which will be below $3.05\,\delta_f$. Thirdly, we show that for larger values of f, the value of the error bound will always be smaller than in the case f = 15.
From Lemma A1, we know that in the case of exact arithmetic, the error at the start of the final iteration is bounded by $b^{2^{\theta-1}-1}\sqrt{\delta_f}$. Lemma A2 tells us that in the first iteration, the rounding error is bounded by $(c_0 + 1)\,\delta_f$, while in every subsequent iteration it is bounded by $(1/b + 1)\,\delta_f$. Thus, for $\theta \ge 3$, we obtain the following bound for the total error at the start of the final iteration:
$$|\epsilon_{\theta-1}| < b^{2^{\theta-1}-1}\sqrt{\delta_f} + (c_0 + 1)\,\delta_f + (\theta - 2)\left(\frac{1}{b} + 1\right)\delta_f.$$
Let $T_\theta = T_\theta(b) = (c_0 + 1) + (\theta - 2)(1/b + 1)$. Applying (A2) with $i = \theta - 1$ gives
$$|\epsilon_\theta| < \epsilon_{\theta-1}^2\,b + \left(\frac{1}{b} + 1\right)\delta_f.$$
Hence, as an upper bound for $|\epsilon_\theta|$ we get
$$E_{\theta,f}(b)\,\delta_f \;\stackrel{\mathrm{def}}{=}\; \left(b^{2^\theta - 1} + 2\,b^{2^{\theta-1}}\,T_\theta\,\sqrt{\delta_f} + b\,T_\theta^2\,\delta_f + \frac{1}{b} + 1\right)\delta_f.$$
For the case f = 15, where θ = 3, this yields
$$E_{3,15}(b)\,\delta_{15} = \left(b^7 + 2\,b^4\,T_3\,\sqrt{\delta_{15}} + b\,T_3^2\,\delta_{15} + \frac{1}{b} + 1\right)\delta_{15},$$
for which a simple numerical analysis shows that the maximum value is slightly below $3.05\,\delta_{15}$.
We complete the proof by showing that $E_{\theta,f}(b) < E_{3,15}(b)$ for f > 15, with θ defined by (6). Since θ is increasing as a function of f, let $f_\theta$ be the lowest value of f such that $\theta = \lceil \log_2 \log_\alpha \delta_f \rceil$. Then, $f_\theta$ is also increasing as a function of θ.
Since it is clear that $E_{\theta,f_\theta} > E_{\theta,f}$ for all $f > f_\theta$, it suffices to bound $E_{\theta,f_\theta}$. To that end, we will consider the three terms in the definition of $E_{\theta,f}$ that depend on θ and f separately. The first term $b^{2^\theta - 1}$ needs no complicated assessment. Clearly, with $0.5 \le b < 1$, this term decreases rapidly with θ.
The second term is $2\,b^{2^{\theta-1}}\,T_\theta\,\sqrt{\delta_f}$. Using the definition of θ, we can rewrite $\sqrt{\delta_f}$ as
$$\log_2 \log_\alpha \delta_f + \gamma = \theta \quad\Longrightarrow\quad \sqrt{\delta_f} = \alpha^{2^{\theta-\gamma-1}},$$
where the value of γ depends on f and is determined by the ceiling operation; in any case, $0 \le \gamma < 1$. Taking the derivative,
$$\frac{d\;2\,b^{2^{\theta-1}}\,T_\theta\,\alpha^{2^{\theta-\gamma-1}}}{d\theta} = 2\,b^{2^{\theta-1}}\,\alpha^{2^{\theta-\gamma-1}}\left(\frac{1}{b} + 1 + T_\theta\,2^{\theta-1}\,\ln 2\left(\ln b + 2^{-\gamma}\ln\alpha\right)\right) < 6\,b^{2^{\theta-1}}\,\alpha^{2^{\theta-\gamma-1}}\left(1 + (\theta - 1)\,2^{\theta-2}\,\ln 2\,\ln\alpha\right),$$
where we used $c_0(b) < 2$ and $dT_\theta/d\theta < 3$, so that $T_\theta < 3(\theta - 1)$. The factor before the outer parentheses is positive for any valid b and $\theta \ge 3$. With initial approximation (2), $\alpha = 3/2 - \sqrt{2} \approx 0.085786$, and it is easy to verify that the factor between the outer parentheses is negative for θ = 3. Moreover, the negative part will only increase in (absolute) size with θ. Therefore, the derivative is, and will remain, negative. This shows that the original term is decreasing as a function of θ.
The third term is $b\,T_\theta^2\,\delta_f$. Similarly, writing $\delta_f$ as a function of θ and taking the derivative, we find
$$\frac{d\;b\,T_\theta^2\,\alpha^{2^{\theta-\gamma}}}{d\theta} = b\,T_\theta\,\alpha^{2^{\theta-\gamma}}\left(2\left(\frac{1}{b} + 1\right) + T_\theta\,2^{\theta-\gamma}\,\ln 2\,\ln\alpha\right) < 6\,b\,T_\theta\,\alpha^{2^{\theta-\gamma}}\left(1 + (\theta - 1)\,2^{\theta-2}\,\ln 2\,\ln\alpha\right).$$
This resembles the bound for the second term. Indeed, the term before the outer parentheses is again positive for any valid b and $\theta \ge 3$, and with the known value for α it is easy to verify that the factor between the outer parentheses is negative for θ = 3. Moreover, the negative part will only increase in (absolute) size with θ. Therefore, the derivative is always negative, which shows that the original term is decreasing as a function of θ.
Combining these results shows that $E_{\theta,f}(b) < E_{3,15}(b) < 3.05$ for all $b \in [0.5, 1)$ and f > 15, which proves the statement. Note that we could tighten the bound even more by computing $E_{\theta,f_\theta}$ for an arbitrary θ > 3. □
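The numerical step in this proof is easy to reproduce; the sketch below (ours) evaluates $E_{3,15}(b)$ on a grid and confirms that its maximum stays below 3.05.

```python
# Grid evaluation of E_{3,15}(b) from the proof of Theorem 1.
import math

f, theta = 15, 3
delta = 2.0**-f
alpha = 3 / 2 - math.sqrt(2)

def E(b: float) -> float:
    c0 = 3 - alpha - 2 * b
    T = (c0 + 1) + (theta - 2) * (1 / b + 1)
    return (b**(2**theta - 1) + 2 * b**(2**(theta - 1)) * T * math.sqrt(delta)
            + b * T * T * delta + 1 / b + 1)

m = max(E(0.5 + i / 10**5) for i in range(50000))
print(m)            # ~3.044, slightly below rho = 3.05
assert m < 3.05
```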
To limit the absolute error for the computation of 1/a to $\delta_f = 2^{-f}$, we apply Algorithm 2 using n additional bits of precision. That is, we use fixed-point arithmetic with f + n fractional bits in the core of Algorithm 2. The downside of using extra bits is that more bits need to be truncated after every multiplication, and secure truncation is a relatively expensive procedure. Notice that we may still directly apply Theorem 1 to find that $|\epsilon_{\theta,n}| < \rho\,\delta_{f+n}$. After finishing the Newton–Raphson iterations, the result should be rounded to the original precision, which may introduce more errors. We will evaluate these errors in the next section, together with the errors introduced in the scaling steps.

3.3. Tight Error Analysis

In this section, we analyze the errors due to the scaling steps in Algorithm 2. The input a is scaled to $b = a\,v \in [0.5, 1)$, and the output is obtained by scaling $c_\theta \approx 1/b$ to $d_\theta = c_\theta\,v \approx 1/a$. The scaling steps introduce additional errors or magnify existing errors. Up until this point, we silently assumed that the scaling $b = a\,v$ was exact. This, however, may not be true if $|a| > 1$. In this case, the radix point shifts to the left, and because we are working with fixed-point numbers, the least significant bits are lost. So, instead of $b = a\,v$, we obtain
$$b^* = \lfloor a\,v \rceil_\$ = a\,v + \eta_1,$$
where $\eta_1$ is the error induced by the scaling from a to b. An important observation is that $|\eta_1|$ is smaller than the precision used in the computation. Moreover, $b^*$ can be computed with the same number of fractional bits as the intermediate results in the Newton–Raphson iterations (it would be a waste to scale down to f fractional bits if the computation is performed with f + n fractional bits). Consequently, we know that
$$|\eta_1| < \delta_{f+n} = \delta_f\,2^{-n}.$$
After scaling, the reciprocal of $b^*$ is computed. As explained at the end of Section 3.2, these computations are performed with extra bits. However, at this point we do not yet reduce the precision back to $\delta_f$, but instead use
$$c_\theta^* = \frac{1}{b^*} - \epsilon_{\theta,n},$$
where $|\epsilon_{\theta,n}| < \rho\,\delta_{f+n}$. Recall that ρ = 3.05, according to Theorem 1. This result is scaled back through another multiplication by v and subsequently rounded deterministically to the original precision:
$$d_\theta^* = c_\theta^*\,v + \eta_2,$$
where $|\eta_2| \le \tfrac12\,\delta_f$. The absolute error then reads as:
$$d_\theta^* - d = \left(\frac{1}{a\,v + \eta_1} - \epsilon_{\theta,n}\right)v + \eta_2 - \frac{1}{a} = \frac{1}{a}\cdot\frac{1}{1 + \frac{\eta_1}{a v}} - \epsilon_{\theta,n}\,v + \eta_2 - \frac{1}{a} = \frac{1}{a}\left(1 - \frac{\eta_1}{a v} + \left(\frac{\eta_1}{a v}\right)^{\!2} - \cdots\right) - \epsilon_{\theta,n}\,v + \eta_2 - \frac{1}{a} = -\frac{1}{a}\cdot\frac{\eta_1}{b + \eta_1} - \epsilon_{\theta,n}\,v + \eta_2. \qquad (8)$$
A careful analysis, partly covered by Lemmas A3 and A4, leads to the following result:
Theorem 2.
If the Newton–Raphson method is used to compute 1/a for $a \in Q_{2f,f}$, employing the approach in Algorithm 2, with $n \le f$, then the absolute error (8) is bounded by $2^{-n}$.
Proof. 
We distinguish three cases: (i) $a < 2^{-n}$, (ii) $2^{-n} \le a < 2^n$, and (iii) $a \ge 2^n$. In case (i), $v \ge 2^n$ and, consequently, both scaling steps introduce no rounding errors: $\eta_1 = \eta_2 = 0$. In case (ii), $2^{-n} \le v \le 2^{n-1}$. Due to the extra precision that is used, there is still no rounding error in the initial scaling step ($\eta_1 = 0$), but there might be an error when the result is truncated to the original precision. In case (iii), $v \le 2^{-(n+1)}$, and both scaling steps may introduce errors.
In case (i), the absolute error simplifies to $|\epsilon_{\theta,n}\,v|$. For such small values of a, the scaling factor v is large, and the absolute error is bounded by $2^{f-1}\,\rho\,\delta_{f+n}$. This bound can be tightened, however, by noting that $v = 2^{f-1}$ occurs only when $a = \delta_f$. For this value of a, according to Lemma A3, ρ may be replaced by 2. For the remaining values of a in case (i), $\epsilon_{\theta,n}$ may be approximately 1.5 times as large (it is still bounded by $\rho\,\delta_{f+n}$), but the value of v is at most $2^{f-2}$, making the product $|\epsilon_{\theta,n}\,v|$ strictly smaller. Thus, the exact bound for case (i) is $2^{f-1}\cdot 2\,\delta_{f+n} = 2^{-n}$.
In case (ii), the error simplifies to $|\epsilon_{\theta,n}\,v + \eta_2|$, which is bounded by $2^{n-1}\,\rho\,\delta_{f+n} + \tfrac12\,\delta_f = \tfrac12(\rho + 1)\,\delta_f$. Using the value for ρ given in Theorem 1, it is straightforward to deduce that the bound for case (i) exceeds that of case (ii) if $n \le f - 2$:
$$2^{-n} \ge 2^{-(f-2)} = 4\,\delta_f > \frac{\rho + 1}{2}\,\delta_f.$$
The cases n = f 1 and n = f are less straightforward and will be considered separately.
If n = f − 1, case (i) contains only $a = \delta_f$. As already derived, the error in this case is bounded by $2^{-n} = 2\,\delta_f$. The first value in case (ii) is $2\,\delta_f$, for which we may replace $\rho\,\delta_{f+n}$ by $2\,\delta_{f+n}$, according to Lemma A3. The maximal error before applying $\eta_2$ then reads $2^{n-1}\cdot 2\,\delta_{f+n} = \delta_f$. With $a = 2\,\delta_f$, $1/a = 2^{f-1}$, which is a multiple of $\delta_f$. Consequently, the error cannot become larger than $\delta_f < 2^{-n}$. The next value in case (ii) is $3\,\delta_f$, for which we may replace $\rho\,\delta_{f+n}$ by $\tfrac73\,\delta_{f+n}$, according to Lemma A4. The maximal error before applying $\eta_2$ then reads $2^{n-1}\cdot\tfrac73\,\delta_{f+n} = \tfrac76\,\delta_f$. With $a = 3\,\delta_f$, $1/a = \tfrac13\,2^f$, which is a multiple of $\tfrac13\,\delta_f$ (but not an integer multiple of $\delta_f$). Combining these results shows that the total error is bounded by $\tfrac53\,\delta_f < 2^{-n}$. For larger values of a, the error is simply bounded by $2^{n-2}\,\rho\,\delta_{f+n} + \tfrac12\,\delta_f = 1.2625\,\delta_f < 2^{-n}$.
If n = f, case (i) ceases to exist. The first value in case (ii) is $\delta_f$. Similar to the case $a = 2\,\delta_f$ when n = f − 1, we know that in this case the maximal error is $\delta_f = 2^{-n}$. The next value in case (ii) is $2\,\delta_f$, for which a similar derivation shows that the error is bounded by $\tfrac12\,\delta_f < 2^{-n}$. The third value in case (ii) is $3\,\delta_f$. Analogous to the situation with n = f − 1, we may replace $\rho\,\delta_{f+n}$ by $\tfrac73\,\delta_{f+n}$ and find that the error before applying $\eta_2$ is bounded by $2^{n-2}\cdot\tfrac73\,\delta_{f+n} = \tfrac{7}{12}\,\delta_f$. Knowing that the exact solution is a multiple of $\tfrac13\,\delta_f$ (but not an integer multiple of $\delta_f$), we conclude that the total error, after applying $\eta_2$ (deterministically), is at most $\tfrac23\,\delta_f < \delta_f = 2^{-n}$. Again, for larger values of a, the error is bounded by $2^{n-3}\,\rho\,\delta_{f+n} + \tfrac12\,\delta_f = 0.88125\,\delta_f < 2^{-n}$.
Thus, for all $n \le f$, the errors in cases (i) and (ii) are bounded by $2^{-n}$. For even larger values of a, the error bound decreases rapidly, despite $\eta_1$ coming into play. In case (iii), the error is approximately bounded by $\tfrac12\big(2^{-2n+2} + 2^{-2n}\rho + 1\big)\,\delta_f$ (ignoring the $\eta_1$ term in the denominator, which is small compared to b), which is significantly smaller than $2^{-n}$. □
Remark 4.
Concerning the relative error, given by the expression
$$\frac{d_\theta^* - d}{d} = -\frac{\eta_1}{b + \eta_1} - \epsilon_{\theta,n}\,b + a\,\eta_2,$$
we see that the tables have turned. For small values of a, the relative error is also small, while for larger values of a the error increases. If $a < 2^{-n}$, the error is bounded by $2^{-n}\,\rho\,\delta_f$, while the bound increases to $\big(2^{-n}\rho + 2^{n-1}\big)\,\delta_f$ for $2^{-n} \le a < 2^n$. For larger values of a, the last term on the right-hand side starts to dominate. In this domain, the error is bounded by $\big(1/(2^n b) + 2^{-n}\rho + \tfrac12(2^f - 1)\big)\,\delta_f < \big(2^{-n+1} + 2^{-n}\rho + 2^{f-1}\big)\,\delta_f \approx 0.5$. Based on the results from numerical experiments, we suspect that the actual bound for the relative error lies at approximately 1/3, due to the relation between a and $\eta_2$ (they do not attain their maximal values at the same time). Though this error may seem large, it is not an effect of the specific computational algorithms, but merely a behavior inherent to the use of fixed-point numbers.
Corollary 1.
If the Newton–Raphson method is used to compute 1/a for $a \in Q_{2f,f}$, using the approach in Algorithm 2, then computing with n = f additional bits guarantees that the absolute error (8) is bounded by $\delta_f$, while using n = f + 1 bits guarantees that the absolute error is strictly smaller than $\delta_f$.
Proof. 
According to Theorem 2, if $n \le f$, the absolute error is bounded by $2^{-n}$. It follows directly that if n = f, the bound equals $2^{-f} = \delta_f$. Note that from the proof of Theorem 2, it follows that this bound can only be attained when $a = \delta_f$.
If n = f + 1, then a proof similar to that of Theorem 2 shows that the error is strictly smaller than $\delta_f$. Recall that for small values of a, the error reads as $|\epsilon_{\theta,n}\,v + \eta_2|$, and therefore the absolute error before applying $\eta_2$ is bounded by $2^{n-2}\,\rho\,\delta_{f+n}$. For $a = \delta_f$, however, we may replace $\rho\,\delta_{f+n}$ by $\delta_{f+n}$ or $2\,\delta_{f+n}$, according to Lemma A3. This leads to errors $2^{n-2}\,\delta_{f+n} = \tfrac14\,\delta_f$ and $2^{n-2}\cdot 2\,\delta_{f+n} = \tfrac12\,\delta_f$, respectively. Because $\eta_2$ is applied deterministically, both will be rounded to the analytical solution, and hence $\epsilon_\theta = 0$.
For larger values of a, the error is bounded by $2^{n-3}\,\rho\,\delta_{f+n} + \tfrac12\,\delta_f = 0.88125\,\delta_f < \delta_f$, which completes the proof. By considering the cases $a = 2\,\delta_f$ and $a = 3\,\delta_f$ separately from even larger values of a, it is possible to show that the error is actually strictly smaller than $0.7\,\delta_f$, but we will omit the proof here. □

4. Integer Division

Secure integer division is an important primitive and appears in many applications. For integer inputs $[[g]], [[a]]$, performing integer division yields integers $[[q]], [[r]]$ such that $g = q\,a + r$ and $0 \le r < a$. Formulated differently, we have $q = \lfloor g/a \rfloor$ and $r = g - q\,a$.
One possible way of computing $[[q]]$ is by applying the Newton–Raphson algorithm described in the previous section. To that end, $[[a]]$ needs to be converted from an integer to a fixed-point number. Subsequently, the reciprocal of $[[a]]$ is computed and multiplied by $[[g]]$. It turns out, however, that it is advantageous to perform the multiplication by $[[g]]$ before finalizing the computation of $1/[[a]]$. The resulting value $[[\tilde q]]$ is a good approximation of $[[q]]$ and can be used to compute the final, correct value of $[[q]]$. In the remainder of this section, we will omit the secret-shared brackets for better readability.

4.1. Error for Integer Division

In the case of integer division, the error analysis from Section 3.3 can be simplified. With a being an integer, there can only be nonzero bits to the left of the radix point. This means—assuming that there are an equal number of bits before and after the radix point, i.e., $\ell = 2f$—that no information is lost in the initial scaling step: $\eta_1 = 0$. In other words, in the case of integer division, we have $b^* = b$. The error after computing the reciprocal of b (before rescaling and truncating to the original precision) is still given by $\epsilon_{\theta,n}$, such that we now have $c_\theta = 1/(a\,v) - \epsilon_{\theta,n}$.
At this point, we first multiply g and v. Since we have assumed that $\ell = 2f$, there is no rounding error for this multiplication. The result is multiplied by $c_\theta$, after which we truncate to the original precision. This results in a generally nonintegral estimate of g/a, which we call $\tilde q$:
$$\tilde q = \left(\frac{1}{a\,v} - \epsilon_{\theta,n}\right) g\,v + \eta_2 = \frac{g}{a} - \epsilon_{\theta,n}\,g\,v + \eta_2.$$
The resulting approach is summarized in Algorithm 3. In what follows, we will denote the error of $\tilde q$ (with respect to g/a) by $E_{\tilde q}$.
Algorithm 3 $\mathsf{IntDivFxp}([[g]], [[a]], n = 1)$ ▹ $-2^{\ell-1} \le g, a < 2^{\ell-1}$, with $g, a \in 2^f\,\mathbb{Z}$
 Lines 1–8 of Algorithm 2
 Line 2 simplifies to $[[b]] \leftarrow 2^{-f+n}\,[[a]]\,[[v]]$
9: $[[w]] \leftarrow 2^{-f}\,[[g]]\,[[v]]$
10: $[[\tilde q]] \leftarrow \mathsf{Round}_{f+n}([[c_\theta]]\,[[w]])$
11: return $[[\tilde q]]$ ▹ $-2^{\ell-1} \le \tilde q < 2^{\ell-1}$
Theorem 3.
If the Newton–Raphson method is used to compute g/a, with g and a being integers, using the approach in Algorithm 3 and n < f, then $|E_{\tilde q}| \le 2^{-n}$.
Proof. 
To derive the error bound, we consider the error before the final truncation $\eta_2$ is applied: $\epsilon_{\theta,n}\,g\,v$. Because a is an integer, we have $v \le 0.5$, with equality only when a = 1. In the latter case, b = 0.5, and according to Lemma A3 we have $|\epsilon_{\theta,n}(0.5)| \le 2\,\delta_{f+n}$. It follows that $|\epsilon_{\theta,n}\,v| \le \delta_{f+n}$. Furthermore, we know that $g \le 2^f - 1$. Combining all this gives
$$|\epsilon_{\theta,n}\,g\,v| \le \delta_{f+n}\,(2^f - 1) = (1 - 2^{-f})\,2^{-n} < 2^{-n}.$$
Obviously, when a = 1, g/a is an integer and therefore a multiple of $\delta_f$. Since n < f, $2^{-n}$ is also a multiple of $\delta_f$. From these observations, it follows that the final rounding $\eta_2$ cannot push the error beyond $2^{-n}$.
The case v = 0.25 occurs only for a = 2 and a = 3, leading to b = 0.5 and b = 0.75, respectively. Clearly, for a = 2, we again have $|\epsilon_{\theta,n}(0.5)| \le 2\,\delta_{f+n}$, leading to $|\epsilon_{\theta,n}\,g\,v| < \tfrac12\,2^{-n}$. Because $|\eta_2| < \delta_f \le \tfrac12\,2^{-n}$, we know that $|\epsilon_{\theta,n}\,g\,v + \eta_2| < 2^{-n}$. For the case a = 3, let us write $n = f - \gamma$, so that $2^{-n} = 2^\gamma\,\delta_f$ ($\gamma = 1, 2, 3, \ldots$). According to Lemma A4, we have $|\epsilon_{\theta,n}(0.75)| \le \tfrac73\,\delta_{f+n}$ when n + f = 6, leading to $|\epsilon_{\theta,n}\,g\,v| < \tfrac{7}{12}\,2^{-n}$. The final error can only be larger than $2^{-n}$ when $|\eta_2| > \tfrac{5}{12}\,2^{-n} = \tfrac{5}{12}\,2^\gamma\,\delta_f$, and because $|\eta_2| < \delta_f$, this is only possible if γ = 1. However, the system n + f = 6 and n = f − 1 has no integer solutions, and therefore this scenario will never occur. Lemma A4 tells us that in all other cases $|\epsilon_{\theta,n}(0.75)| \le \tfrac53\,\delta_{f+n}$, leading to $|\epsilon_{\theta,n}\,g\,v| < \tfrac{5}{12}\,2^{-n}$. Now, the final error can only be larger than $2^{-n}$ when $|\eta_2| > \tfrac{7}{12}\,2^{-n} = \tfrac{7}{12}\,2^\gamma\,\delta_f$. Because $|\eta_2| < \delta_f$ and $\gamma \ge 1$, this is impossible.
For even larger values of a, $v \le 0.125$, and we have $|\epsilon_{\theta,n}| \le \rho\,\delta_{f+n}$, leading to $|\epsilon_{\theta,n}\,g\,v| < \tfrac18\,\rho\,2^{-n}$. The final error can only be larger than $2^{-n}$ if $|\eta_2| > \big(1 - \tfrac18\rho\big)\,2^{-n} = \big(1 - \tfrac18\rho\big)\,2^\gamma\,\delta_f$. Again, because $|\eta_2| < \delta_f$, there are no solutions with $\gamma \ge 1$. □
We emphasize that the above result holds even when η 2 is determined probabilistically, whereas throughout Section 3 it was assumed that the final rounding—to the original precision—was performed deterministically (which was especially relevant for the cases n = f and n = f + 1 ) .

4.2. From Fixed-Point Approximation to Integer Solution

The fixed-point value $\tilde q$ now needs to be rounded to an integer value $\bar q$. This can be achieved either deterministically or probabilistically.
Corollary 2.
Suppose $\tilde q$ is computed with Algorithm 3 using n = 1. If $\tilde q$ is rounded to an integer $\bar q$ deterministically, then $\bar q \in \{q, q+1\}$. If $\tilde q$ is rounded to an integer $\bar q$ probabilistically, then $\bar q \in \{q-1, q, q+1\}$.
Proof. 
We apply Theorem 3 to find that the error on $\tilde q$ is bounded by $2^{-1}$. Since $q \le g/a < q + 1$, this gives $q - 0.5 \le \tilde q < (q + 1) + 0.5$. It follows directly that for deterministic rounding, $\bar q \in \{q, q+1\}$. It also follows that for probabilistic rounding, $\bar q \in \{q-1, q, q+1, q+2\}$. It remains to be shown that $\bar q = q + 2$ is not possible. To that end, first note that from $q \le g/a < q + 1$, it follows that $q \le g/a \le q + 1 - 1/a$. Therefore, for $\bar q = q + 2$ to occur, it should be possible that $E_{\tilde q} > 1/a$.
Suppose that $2^{m-1} \le a < 2^m$ for some integer m. Then, $v = 2^{-m}$ and $1/a = v/b$. At this point, we are only interested in solutions $\tilde q > g/a$, i.e., in negative errors $\epsilon_{\theta,n}$, for which we have $|\epsilon_{\theta,n}| < (1/b + 1)\,\delta_{f+n}$. Maximizing the error $|\epsilon_{\theta,n}\,g\,v|$ with n = 1 then gives
$$|\epsilon_{\theta,1}\,g\,v| \le \left(\frac1b + 1\right)\delta_{f+1}\,(2^f - 1)\,v = \frac12\left(\frac1b + 1\right)\left(1 - 2^{-f}\right)v < \frac12\left(\frac1b + 1\right)v \le \frac{v}{b} = \frac1a.$$
Thus, the approximation before applying $\eta_2$ is still below q + 1. Since q + 1 is an integer and therefore a multiple of $\delta_f$, it follows that $|E_{\tilde q}| \le 1/a$. In other words, the rounding $\eta_2$ cannot push the error beyond q + 1. Consequently, $\tilde q$ will never be rounded to q + 2. □
Remark 5.
If we were to calculate $\tilde q$ with Algorithm 2 instead of Algorithm 3 and multiply the result by g, we would find that
$$\tilde q = \left(\left(\frac{1}{a\,v} - \epsilon_{\theta,n}\right)v + \eta_2\right)g = \frac{g}{a} - \epsilon_{\theta,n}\,g\,v + \eta_2\,g.$$
Numerical simulations suggest that we would then find the same values for $\bar q$. That is, $\bar q \in \{q, q+1\}$ in the case of deterministic rounding and $\bar q \in \{q-1, q, q+1\}$ in the case of probabilistic rounding. However, this approach would require an extra secure comparison in the deterministic rounding step in line 9 of Algorithm 2.
If we were to replace this deterministic rounding with probabilistic rounding, then $|\eta_2| < \delta_f$ (instead of $|\eta_2| \le \tfrac12\,\delta_f$ with deterministic rounding). Numerical simulations show that in this case, $\bar q \in \{q-1, q, q+1, q+2\}$, independent of whether rounding to an integer is performed deterministically or probabilistically. Hence, in this approach, at least one extra secure comparison is also required to find the correct value q. This proves that it is indeed advantageous to incorporate the multiplication by g into the computation of 1/a, as we did in Algorithm 3.
So far, we have computed $\tilde q$ using only probabilistic rounding. We found that $\bar q \in \{q, q+1\}$ if the rounding (to the nearest integer) is performed deterministically and $\bar q \in \{q-1, q, q+1\}$ if $\tilde q$ is rounded to $\bar q$ probabilistically. The final step is to recover the correct solution q.
This is achieved by one or two comparisons, depending on how $\tilde q$ is rounded to $\bar q$. According to Corollary 2, if $\tilde q$ is rounded deterministically, then $\bar q \in \{q, q+1\}$. Hence, we can compute $\bar q\,a - g$ and check the sign. If $\bar q\,a - g > 0$, then $q = \bar q - 1$; otherwise, $q = \bar q$. If $\tilde q$ is rounded probabilistically, then $\bar q \in \{q-1, q, q+1\}$. This time, we not only check the sign of $\bar q\,a - g$, but also that of $(\bar q + 1)\,a - g$. If $\bar q\,a - g > 0$, then $q = \bar q - 1$. Otherwise, $q = \bar q$ if $(\bar q + 1)\,a - g > 0$, and $q = \bar q + 1$ if not.
At first sight, it might not seem to matter whether $\tilde q$ is rounded to $\bar q$ deterministically or probabilistically: even though deterministic rounding requires an extra secure comparison, it saves a secure comparison in the computation of q. Rounding probabilistically to $\bar q$ does not require any secure comparisons, but two secure comparisons are needed to find the correct value of q. Hence, in both cases, we need exactly two secure comparisons. However, the secure comparison in Algorithm 1 is cheaper than a regular secure comparison, because the bits of the numbers that are compared are already available. Therefore, it is computationally advantageous to choose the option with deterministic rounding to $\bar q$ and only one comparison to find q. The complete procedure is summarized in Algorithm 4.
Algorithm 4 $\mathsf{IntDiv}([[g]], [[a]])$ ▹ $-2^{f-1} \le g, a < 2^{f-1}$, with $g, a \in \mathbb{Z}$
1: $[[\tilde q]] \leftarrow \mathsf{IntDivFxp}([[g\,2^f]], [[a\,2^f]])$ ▹ $-2^{\ell-1} \le \tilde q < 2^{\ell-1}$
2: $[[\bar q]] \leftarrow \mathsf{Round}_f([[\tilde q]], \mathit{deterministic})$
3: $[[q]] \leftarrow [[\bar q]] - ([[\bar q]]\,[[a]] > [[g]])$
4: return $[[q]]$ ▹ $-2^{f-1} \le q < 2^{f-1}$
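The correction logic of Algorithm 4 can be traced in the clear (ours); here a float quotient stands in for the fixed-point approximation $\tilde q$, which by Corollary 2 yields $\bar q \in \{q, q+1\}$ after deterministic rounding.

```python
# Cleartext walk-through of Algorithm 4: one comparison corrects q_bar to q.
def int_div(g: int, a: int):
    q_tilde = g / a               # stands in for IntDivFxp (|error| <= 1/2)
    q_bar = round(q_tilde)        # line 2: deterministic rounding
    q = q_bar - (q_bar * a > g)   # line 3: subtract 1 iff q_bar = q + 1
    return q, g - q * a

for g, a in [(17, 5), (20, 4), (7, 9), (123456, 789)]:
    q, r = int_div(g, a)
    assert q == g // a and r == g % a and 0 <= r < a
```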

5. Reciprocal Square Root

To compute the reciprocal (or inverse) square root securely, we follow the same approach as in Section 3 for the reciprocal. The overall goal is to guarantee an absolute error not exceeding $\delta_f = 2^{-f}$ while minimizing the additional precision used during the computation. In Section 6, we will use this result for the secure computation of the square root with the same accuracy.

5.1. Secure Computation

The reciprocal square root function evaluates $[[1/\sqrt{a}]]$ for a secret-shared value $[[a]]$, a > 0; see Algorithm 5. Upon initialization, $[[a]]$ is scaled to $[[b]] = [[a]]\,[[v]]$ such that $b \in [0.5, 2)$. The interval for b is taken twice as large as that for the reciprocal, so that the scaling factor $v = 2^k$, $k \in \mathbb{Z}$, can be chosen with k even. This ensures that scaling back by $[[v^{1/2}]]$ at the end introduces no additional rounding errors.
Algorithm 5 $\mathsf{RecSqrt}([[a]], n = 0)$ ▹ $-2^{\ell-1} \le a < 2^{\ell-1}$
1: $[[v]], [[v^{1/2}]] \leftarrow \mathsf{Scale}([[a]])$ ▹ $v = \pm 2^k$, $k \in \mathbb{Z}$, k even
2: $[[b]] \leftarrow \mathsf{Round}_{f-n}([[a]]\,[[v]])$ ▹ $2^{f+n-1} \le b < 2^{f+n+1}$
3: $\beta \leftarrow (\sqrt{2} - 1)/4$
4: $\tau \leftarrow 3/\sqrt{2}$
5: $\theta \leftarrow \lceil \log_2 \log_{\tau\beta}(\tau\,2^{-(f+n)}) \rceil$
6: $[[c_0]] \leftarrow 3/2 + \beta - \mathsf{Round}_1([[b]]/2, \mathit{deterministic})$
7: for i = 1 to θ do
8:    $[[z_1]] \leftarrow \mathsf{Round}_{f+n}([[c_{i-1}]]\,[[b]])$
9:    $[[z_2]] \leftarrow 3 - \mathsf{Round}_{f+n}([[c_{i-1}]]\,[[z_1]])$
10:    $[[c_i]] \leftarrow \mathsf{Round}_{f+n+1}(\tfrac12\,[[c_{i-1}]]\,[[z_2]])$
11: $[[d_\theta]] \leftarrow \mathsf{Round}_{f+n}([[c_\theta]]\,[[v^{1/2}]], \mathit{deterministic})$
12: return $[[d_\theta]]$ ▹ $-2^{\ell-1} \le d_\theta < 2^{\ell-1}$
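Analogous to the sketch for Algorithm 2, the following cleartext Python simulation of Algorithm 5 (ours) uses round-to-nearest in place of probabilistic rounding and checks only the arithmetic, for inputs of moderate size.

```python
# Cleartext simulation of Algorithm 5 (RecSqrt); comments refer to its lines.
import math

def shift_round(x: int, s: int) -> int:
    return x << s if s >= 0 else (x + (1 << (-s - 1))) >> -s

def rec_sqrt_fxp(a_bar: int, f: int, n: int = 4) -> int:
    assert a_bar > 0
    F = f + n
    k = f - a_bar.bit_length()                  # line 1: choose v = 2**k ...
    if k % 2:                                   # ... with k even, b = a*v in [0.5, 2)
        k += 1
    b_bar = shift_round(a_bar, k + n)           # line 2: b with F fractional bits
    beta = (math.sqrt(2) - 1) / 4               # line 3
    tau = 3 / math.sqrt(2)                      # line 4
    theta = math.ceil(math.log2(math.log(tau * 2.0**-F, tau * beta)))  # line 5
    c_bar = round((1.5 + beta) * 2**F) - shift_round(b_bar, -1)        # line 6
    for _ in range(theta):                      # lines 7-10
        z1 = shift_round(c_bar * b_bar, -F)
        z2 = 3 * 2**F - shift_round(c_bar * z1, -F)
        c_bar = shift_round(c_bar * z2, -(F + 1))
    return shift_round(c_bar, k // 2 - n)       # lines 11-12: d = c_theta * sqrt(v)

f = 16
d = rec_sqrt_fxp(5 << f, f) / 2**f              # 1/sqrt(5) with a = 5
assert abs(d - 1 / math.sqrt(5)) <= 2**-f
```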
To find an initial approximation, following the same approach that led to (2) would give
$$[[c_0]] = \frac{\sqrt{2}}{6}\,\big(7 - 2\,[[b]]\big) - \alpha^*,$$
where $\alpha^* = (7 - 3\sqrt[3]{9})/(6\sqrt{2})$. This initial approximation has a maximal absolute error of $\alpha^* \approx 0.089537$ (at b = 0.5, $b = \sqrt[3]{9}/2$, and b = 2). An integer factor in front of b—like in (2)—would be more efficient, but this is not really an option here. A factor $\tfrac12$ is possible, essentially reducing the cost of truncation by a factor of f. Therefore, another good initial approximation is
$$[[c_0]] = \tfrac14\,\big(5 + \sqrt{2}\big) - \tfrac12\,[[b]], \qquad (9)$$
which has a maximal absolute error of $\beta = \tfrac14(\sqrt{2} - 1) \approx 0.103553$ at b = 1 and b = 2, and an error of only $\tfrac14(3\sqrt{2} - 4) \approx 0.060660$ at b = 0.5. Obviously, this slightly higher initial error may lead to an extra iteration in some cases, but it turns out this is not the case for the most common values of f, namely $f = 2^n$ with $n \in \{2, \ldots, 10\}$. Compared to the initial approximation by Liedel [4], our approximation is slightly less accurate. This may be attributed to the fact that the approximation by Liedel was derived for the interval [0.5, 1), while ours is defined for [0.5, 2). Due to the quadratic convergence behavior of the Newton–Raphson method, however, the effect of the lower initial accuracy is rather small. On the other hand, our approximation is more efficient in terms of truncation, because (a) we only need to truncate a single bit to compute $c_0$, whereas Liedel needed many more, and (b) Liedel assumed that the input is scaled to [0.5, 1), so it is possible that the square root of the scaling factor is not an integral power of two. In these cases, another multiplication by $\sqrt{2}$ needs to be performed, leading to another expensive truncation. Because our approximation and method rely on the assumption that the input is scaled to [0.5, 2) in such a way that the scaling factor is always an even power of two, no such correcting multiplication is necessary. Aly and Smart [5] used an even cruder initial approximation. It requires the position of the most significant bit, say t, which is then used to compute $2^{t/2}$, a rough approximation of the square root. Finding the most significant bit, however, is equivalent to our scaling step to find b, and once b is known, computing the more accurate approximation in (9) is basically free.
Given the initial approximation $[[c_0]]$, successive approximations are computed using
$$[[c_{i+1}]] = \tfrac12\,[[c_i]]\,\big(3 - [[c_i]]^2\,[[b]]\big), \qquad (10)$$
which corresponds to the Newton–Raphson method in (1) applied to $f(c) = b - 1/c^2$. After θ iterations, with θ independent of the input value, the scaling is inverted. The final approximation for $1/\sqrt{a}$ then reads
$$[[d_\theta]] = [[c_\theta]]\,[[v^{1/2}]].$$
Now, we clearly see why the scaling factor v was chosen to be an even power of two: it makes the inverse scaling factor $v^{1/2}$ an integral power of two as well.
The required number of iterations θ will be determined below such that the final error for $c_\theta$ does not exceed $\delta_f = 2^{-f}$, assuming exact arithmetic. Subsequently, we will determine the required number of additional bits n for Algorithm 5, taking into account all (rounding) errors. For better readability, we will drop the secret-shared brackets in the remainder of this section.
Using $c = 1/\sqrt{b}$ to denote the analytical solution and $\epsilon_i = c - c_i$ to denote the iteration error, applying (10) gives
$$\epsilon_{i+1} = c - \tfrac12\,c_i\,(3 - c_i^2\,b) = c - \tfrac12\,(c - \epsilon_i)\big(3 - (c - \epsilon_i)^2\,b\big) = \tfrac32\sqrt{b}\,\epsilon_i^2 - \tfrac12\,b\,\epsilon_i^3. \qquad (11)$$
Since $b \in [0.5, 2)$ and $|\epsilon_0| \le \beta < 1$, we see that quadratic convergence is guaranteed right from the start. From (11), it follows directly that for those values of b where $\epsilon_0 \ge 0$, it holds that
$$|\epsilon_i| \le \big(\tfrac32\sqrt{b}\big)^{2^i - 1}\,\epsilon_0^{2^i} \le (\tau\beta)^{2^i}/\tau,$$
where $\tau = 3/\sqrt{2}$ and the convergence is slowest for b = 2. To achieve $|\epsilon_\theta| \le \delta_f$, we thus set
$$\theta = \Big\lceil \log_2 \log_{\tau\beta}(\tau\,\delta_f) \Big\rceil. \qquad (12)$$
For those values of b where $\epsilon_0 < 0$, the third-order term in (11) cannot simply be ignored. However, we will not update (12) accordingly. Instead, we will handle these cases in the appropriate places in the proofs that follow.
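A direct float check of the recursion (11) (ours), which is exact in real arithmetic:

```python
# Verify the error recursion (11) for a sample b: the observed iteration errors
# match (3/2)*sqrt(b)*eps**2 - (1/2)*b*eps**3 up to float roundoff.
import math

b = 1.7
c_star = 1 / math.sqrt(b)
c = (5 + math.sqrt(2)) / 4 - b / 2          # initial approximation (9)
eps = c_star - c
for _ in range(3):
    predicted = 1.5 * math.sqrt(b) * eps**2 - 0.5 * b * eps**3
    c = 0.5 * c * (3 - c * c * b)           # update (10)
    eps = c_star - c
    assert math.isclose(eps, predicted, rel_tol=1e-2, abs_tol=1e-12)
```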

5.2. Tight Error Analysis without Scaling

In this section, we analyze the error $\epsilon_\theta = c - c_\theta$ in the computation of $c = 1/\sqrt{b}$ for $b \in [0.5, 2)$. Analogous to the analysis for the reciprocal, we determine a tight bound for $|\epsilon_\theta|$ taking into account all (rounding) errors, assuming fixed-point arithmetic with f fractional bits in Algorithm 5 (thus with n = 0). In Section 5.3, we will use this bound to determine the minimal number of additional bits n needed to guarantee that the absolute error for $1/\sqrt{a}$ is limited to $\delta_f = 2^{-f}$, also taking into account the errors due to scaling.
With the help of Lemmas A5 and A6, we are able to give a bound on the total error for the reciprocal square root after θ iterations.
Theorem 4.
If the Newton–Raphson method is used to compute $1/\sqrt{b}$ for some $b \in [0.5, 2)$, employing initial approximation (9) and with the number of iterations θ computed via (12), then $|\epsilon_\theta| < \sigma\,\delta_f$, where σ = 2.71.
Proof. 
Clearly, if θ = 0, the initial error is already below $\delta_f$. Because no iterations are performed, no further errors are introduced, and the final error remains below $\delta_f$.
For the cases in which $\theta \in \{1, 2, 3\}$, we exhaustively compute the error for all possible inputs b, taking into account all rounding possibilities. This covers the values $4 \le f \le 18$ and yields a maximum value for $|\epsilon_\theta|$ of approximately $2.60\,\delta_f$.
For larger values of f, we follow an approach analogous to that for the reciprocal: firstly, we derive an expression that bounds the absolute error as a function of f and θ. Secondly, we compute the value of the error bound for f = 19 (θ = 4), which will be below $2.71\,\delta_f$. Thirdly, we show that for larger values of f, the value of the error bound will always be smaller than in the case f = 19.
From Lemma A5, we know that in the case of exact arithmetic, the error at the start of the final iteration is bounded by $\xi^{2^{\theta-2}}\big(\sqrt{b/2}\big)^{2^{\theta-1}-1}\sqrt{\delta_f/\tau}$. Lemma A6 tells us that in the first iteration, the rounding error is bounded by $\big(c_0^2/2 + c_0/2 + 1\big)\,\delta_f$, while in every subsequent iteration it is bounded by $\big(1/(2b) + 1/(2\sqrt{b}) + 1\big)\,\delta_f$. Thus, for $\theta \ge 4$, we obtain the following bound for the total error at the start of the final iteration:
$$|\epsilon_{\theta-1}| < \xi^{2^{\theta-2}}\big(\sqrt{b/2}\big)^{2^{\theta-1}-1}\sqrt{\delta_f/\tau} + \left(\frac{c_0^2}{2} + \frac{c_0}{2} + 1\right)\delta_f + (\theta - 2)\left(\frac{1}{2b} + \frac{1}{2\sqrt{b}} + 1\right)\delta_f.$$
Let $T_\theta = T_\theta(b) = \big(c_0^2/2 + c_0/2 + 1\big) + (\theta - 2)\big(1/(2b) + 1/(2\sqrt{b}) + 1\big)$. Applying (A4) with $i = \theta - 1$, and without the third-order term (since $\epsilon_{\theta-1} > 0$), gives
$$|\epsilon_\theta| < \tfrac32\sqrt{b}\,\epsilon_{\theta-1}^2 + \left(\frac{1}{2b} + \frac{1}{2\sqrt{b}} + 1\right)\delta_f < \tfrac32\sqrt{b}\left(\xi^{2^{\theta-2}}\big(\sqrt{b/2}\big)^{2^{\theta-1}-1}\sqrt{\delta_f/\tau} + T_\theta\,\delta_f\right)^{\!2} + \left(\frac{1}{2b} + \frac{1}{2\sqrt{b}} + 1\right)\delta_f \le \left(\xi\,\big(\sqrt{b\,\xi/2}\big)^{2^\theta - 1} + 2\,\big(\sqrt{b\,\xi/2}\big)^{2^{\theta-1}}\,T_\theta\,\sqrt{\tau\,\delta_f} + \tfrac32\sqrt{b}\,T_\theta^2\,\delta_f + \frac{1}{2b} + \frac{1}{2\sqrt{b}} + 1\right)\delta_f \;\stackrel{\mathrm{def}}{=}\; E_{\theta,f}(b)\,\delta_f.$$
For the case f = 19, where θ = 4, this yields
$$E_{4,19}(b)\,\delta_{19} = \left(\xi\,\big(\sqrt{b\,\xi/2}\big)^{15} + 2\,\big(\sqrt{b\,\xi/2}\big)^{8}\,T_4\,\sqrt{\tau\,\delta_{19}} + \tfrac32\sqrt{b}\,T_4^2\,\delta_{19} + \frac{1}{2b} + \frac{1}{2\sqrt{b}} + 1\right)\delta_{19},$$
for which a simple numerical analysis shows that the maximum value is slightly below $2.71\,\delta_{19}$.
We complete the proof by showing that $E_{\theta,f}(b) < E_{4,19}(b)$ for f > 19, with θ defined by (12). Since θ is increasing as a function of f, let $f_\theta$ be the lowest value of f such that $\theta = \lceil \log_2 \log_{\tau\beta}(\tau\,\delta_f) \rceil$. Then, $f_\theta$ is also increasing as a function of θ.
Since it is clear that $E_{\theta,f_\theta} > E_{\theta,f}$ for all $f > f_\theta$, it suffices to bound $E_{\theta,f_\theta}$. To that end, we will consider the three terms in the definition of $E_{\theta,f}$ that depend on θ and f separately.
To evaluate the first term $\xi\,\big(\sqrt{b\,\xi/2}\big)^{2^\theta - 1}$, we note that ξ is defined to have the value 1.045 for $b_1 < b < b_2$, with $b_2 \approx 1.65$, while ξ = 1 for $b > b_2$. It thus follows that $\sqrt{b\,\xi/2} < 1$, and as a result the entire term decreases rapidly with θ.
For convenience, we use b ˜ = b ξ / 2 and α ˜ = τ β in the analysis below. The second term may then be written as 2 b ˜ 2 θ 2 T θ τ δ f . Using the definition of θ , we get τ δ f = α ˜ 2 θ γ 1 , where γ satisfies log 2 log α ˜ ( τ δ f ) + γ = θ and 0 γ < 1 . Taking the derivative thus yields for the second term
d 2 b ˜ 2 θ 2 T θ α ˜ 2 θ γ 1 d θ = 2 b ˜ 2 θ 2 α ˜ 2 θ γ 1 1 2 b + 1 2 b + 1 + T θ 2 θ 1 ln 2 ( 1 2 ln b ˜ + 2 γ ln α ˜ ) < 6 b ˜ 2 θ 2 α ˜ 2 θ γ 1 1 + ( θ 1 ) 2 θ 2 ln 2 ln α ˜ ,
where c 0 ( b ) < 3 / 2 and d T θ / d θ < 3 , so that T θ < 3 ( θ 1 ) . This bound is almost identical to the bound we found in the analysis for the reciprocal. The factor before the outer parentheses is now positive for any valid b and θ 4 . Additionally, it is easy to verify that the factor within the outer parentheses is negative for θ = 4 . Because the negative part will only increase in (absolute) size with θ , the derivative is, and will remain, negative. This shows that the original term is decreasing as a function of θ .
The third term is 3 2 b T θ 2 δ f . Writing τ δ f as a function of θ , we may rewrite this term to b / 2 T θ 2 α ˜ 2 θ γ . Taking the derivative, we find:
d b / 2 T θ 2 α ˜ 2 θ γ d θ = 2 b / 2 T θ α ˜ 2 θ γ 1 2 b + 1 2 b + 1 + T θ 2 θ γ 1 ln 2 ln α ˜ < 6 b / 2 T θ α 2 θ γ 1 + ( θ 1 ) 2 θ 2 ln 2 ln α ˜ .
Again, this bound is very similar to the bound in the analysis for the reciprocal. The term before the outer parentheses is positive for any valid b and θ 4 , and with the known value for α ˜ it is easy to verify that the term between the outer parentheses is negative for θ = 4 . Additionally, the negative part will only increase in (absolute) size with θ . Therefore, the derivative is always negative, which shows that the original term is decreasing as a function of θ .
Combining these results shows that E θ , f ( b ) < E 4 , 19 ( b ) < 2.71 for all b [ 0.5 ,   2 ) and f > 19 , which proves the statement. Note that we could tighten the bound even more by computing E θ , f θ for an arbitrary θ > 4 .    □
Similar to the reciprocal, we perform our computations with extra precision to control the effect of rounding. Therefore, in the following, we assume a total of $f+n$ fractional bits. Then, we apply Theorem 4 to find that $|\epsilon_{\theta,n}| < \sigma\,\delta_{f+n}$.
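To make the role of the three rounded multiplications per iteration concrete, the following plain (and insecure) Python sketch mimics the iteration on integer representations with f fractional bits. The seed c here is a hypothetical linear approximation, standing in for the paper's initial approximation (9), and the rounding helper supports both rounding modes:

```python
from random import randrange

def rnd(x, k, deterministic=False):
    """Drop k fractional bits from the integer representation x, rounding
    half up (deterministic) or stochastically (probabilistic)."""
    q, r = divmod(x, 1 << k)
    if deterministic:
        return q + (r >= 1 << (k - 1))
    return q + (randrange(1 << k) < r)

def recip_sqrt_iter(b, f, theta):
    """Approximate 1/sqrt(b) for b in [0.5, 2), where b and the result are
    integers scaled by 2**f. Each multiplication is rounded back to f
    fractional bits, giving the rounding terms e_{i+1,1}, e_{i+1,2}, e_{i+1,3}."""
    one = 1 << f
    c = 3 * one // 2 - b // 2                  # hypothetical seed, not (9)
    for _ in range(theta):
        t = rnd(c * b, f)                      # c_i * b            -> e_{i+1,1}
        t = rnd(c * t, f)                      # c_i * (c_i b)      -> e_{i+1,2}
        c = rnd(c * (3 * one - t), f + 1)      # c_i(3 - c_i^2 b)/2 -> e_{i+1,3}
    return c

f = 24
print(recip_sqrt_iter(3 << (f - 1), f, 4) / 2 ** f)  # ~0.8165 = 1/sqrt(1.5)
```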

5.3. Tight Error Analysis with Scaling

The analysis of the scaling errors for the reciprocal square root mirrors that for the reciprocal. Again, we have $b^* = a v + \eta_1$, with $|\eta_1| < \delta_{f+n}$. This time, we have

$$ c_\theta^* = \frac{1}{\sqrt{b^*}} + \epsilon_{\theta,n}, $$

where $|\epsilon_{\theta,n}| < \sigma\,\delta_{f+n}$ with $\sigma = 2.71$, according to Theorem 4. Finally, $c_\theta^*$ is scaled back through multiplication by $\sqrt{v}$, rounded (deterministically) to the original precision:

$$ d_\theta^* = c_\theta^*\sqrt{v} + \eta_2, $$

where $|\eta_2| \le \frac{1}{2}\delta_f$. The absolute error for $d = 1/\sqrt{a}$ then reads as

$$ d_\theta^* - d = \left(\frac{1}{\sqrt{a v + \eta_1}} + \epsilon_{\theta,n}\right)\sqrt{v} + \eta_2 - \frac{1}{\sqrt{a}} = \frac{1}{\sqrt{a}}\,\frac{1}{\sqrt{1 + \frac{\eta_1}{a v}}} + \epsilon_{\theta,n}\sqrt{v} + \eta_2 - \frac{1}{\sqrt{a}} = \frac{1}{\sqrt{a}}\left(1 - \frac{\eta_1}{2 a v} + \frac{3}{8}\left(\frac{\eta_1}{a v}\right)^2 - \cdots\right) + \epsilon_{\theta,n}\sqrt{v} + \eta_2 - \frac{1}{\sqrt{a}} \doteq -\frac{\eta_1}{2\sqrt{a}\, b} + \epsilon_{\theta,n}\sqrt{v} + \eta_2. \qquad (13) $$
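As a quick numerical sanity check of this first-order expansion, one can compare the exact left-hand side of (13) with the approximation for concrete, made-up perturbations (all values below are illustrative only):

```python
from math import sqrt, isclose

a, v = 0.0003, 4096.0                 # so that b = a*v = 1.2288 lies in [0.5, 2)
eta1, eps, eta2 = 2e-7, 3e-7, 1e-6    # made-up rounding/iteration errors

exact = (1 / sqrt(a * v + eta1) + eps) * sqrt(v) + eta2 - 1 / sqrt(a)
approx = -eta1 / (2 * sqrt(a) * (a * v)) + eps * sqrt(v) + eta2
assert isclose(exact, approx, rel_tol=1e-3)   # agreement up to higher order
```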
We are able to bound the overall error for $1/\sqrt{a}$ as follows, using Lemma A7 to bound the error for the cases in which a is a specific power of 2.
Theorem 5.
If the Newton–Raphson method is used to compute $1/\sqrt{a}$ for $a \in \mathbb{Q}_{\langle 2f,f \rangle}$, employing the approach in Algorithm 5, with $\frac{1}{2}f \le n \le f-2$, then the absolute error (13) is bounded by $\left(2^{(f-1)/2-n}\sigma + \frac{1}{2}\right)\delta_f$.
Proof. 
Similar to Theorem 2, we can distinguish three cases: (i) $a < 2^{-2n}$, in which $\eta_1 = \eta_2 = 0$; (ii) $2^{-2n} \le a < 2^{n+1}$, in which generally only $\eta_1 = 0$; and (iii) $a \ge 2^{n+1}$. However, since $\frac{1}{2}f \le n$, or equivalently $2^{-2n} \le 2^{-f} = \delta_f$, there exists no $a \in \mathbb{Q}_{\langle 2f,f \rangle}$ such that $a < 2^{-2n}$. Therefore, we only consider cases (ii) and (iii).
In case (ii), the error simplifies to $|\epsilon_{\theta,n}\sqrt{v} + \eta_2|$. For the smallest value $a = \delta_f$, we find that the error is bounded by $2^{f/2}\sigma\,\delta_{f+n} + \frac{1}{2}\delta_f$ in the case that f is even, and by $2^{(f-1)/2}\sigma\,\delta_{f+n} + \frac{1}{2}\delta_f$ if f is odd. However, if f is even and $a = \delta_f$, we may replace $\sigma\,\delta_{f+n}$ by $\delta_{f+n}$, according to Lemma A7. Therefore, a tighter bound for even f is found by considering the next value, $a = 2\delta_f$, for which the absolute error is bounded by $2^{f/2-1}\sigma\,\delta_{f+n} + \frac{1}{2}\delta_f$. The largest bound is thus found for odd f and reads $\left(2^{(f-1)/2-n}\sigma + \frac{1}{2}\right)\delta_f$.
In case (iii), we need to take into account all error terms in (13). Without going into further detail, we state that the error is maximized by taking the lowest value of a in this range ($a = 2^{n+1}$), which also has the largest value for $\sqrt{v}$, with $n+1$ even, because this maximizes the combined value of the first and second error terms. The absolute error then reads $\left(2^{-(n+1)/2}\left(\frac{1}{2} + \sigma\right) + \frac{1}{2}\right)\delta_f$.
It can easily be verified that with $n \le f-2$, the bound in case (iii) always stays below the (lowest) bound in case (ii). Therefore, for the given range of n, $\left(2^{(f-1)/2-n}\sigma + \frac{1}{2}\right)\delta_f$ bounds the absolute error for all values of a.    □
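The final comparison between the two cases is easy to reproduce numerically by evaluating the two closed-form bounds over the admissible range of n (illustrative values of f and σ as in the theorem):

```python
sigma = 2.71
f = 16
for n in range(f // 2, f - 1):                         # f/2 <= n <= f-2
    case_ii = 2 ** ((f - 1) / 2 - n) * sigma + 0.5
    case_iii = 2 ** (-(n + 1) / 2) * (0.5 + sigma) + 0.5
    assert case_iii < case_ii                          # case (ii) dominates
```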
Remark 6.
The relative error is found by dividing the absolute error by $d = 1/\sqrt{a}$:

$$ \frac{d_\theta^* - d}{d} \doteq -\frac{\eta_1}{2b} + \epsilon_{\theta,n}\sqrt{b} + \sqrt{a}\,\eta_2. $$

Here, ≐ means equality up to higher-order terms. Note that this only applies to the term involving $\eta_1$, for which quadratic and higher-order terms are ignored. These terms have no significant effect in the cases with the largest absolute errors.
The relative error is small for small values of a, while for larger values of a the error increases. If $a < 2^{-2n}$, the error is bounded by $2^{1/2-n}\sigma\,\delta_f$, increasing to $\left(2^{1/2-n}\sigma + 2^{(n-1)/2}\right)\delta_f$ for $2^{-2n} \le a < 2^{n+1}$. For $a \ge 2^{n+1}$, the error is (approximately) bounded by $\left(1 + 2^{1/2-n}\sigma + 2^{f/2-1}\right)\delta_f$.
Corollary 3.
If the Newton–Raphson method is used to compute $1/\sqrt{a}$ for $a \in \mathbb{Q}_{\langle 2f,f \rangle}$, employing the approach in Algorithm 5, then computing with $n = \lfloor(f+5)/2\rfloor$ additional bits guarantees that the absolute error (13) is strictly smaller than $\delta_f$.
Proof. 
According to Theorem 5, if $\frac{1}{2}f \le n \le f-2$, the absolute error is bounded by $\left(2^{(f-1)/2-n}\sigma + \frac{1}{2}\right)\delta_f$. Thus, for the final absolute error to be smaller than $\delta_f$, we need

$$ \left(2^{(f-1)/2-n}\sigma + \frac{1}{2}\right)\delta_f < \delta_f, $$

which gives (approximately) $n > \frac{1}{2}f + 1.94$. From this, it follows that taking $n = \frac{1}{2}f + 2$ for even f and $n = \frac{1}{2}f + \frac{5}{2}$ for odd f guarantees that the absolute error is smaller than $\delta_f$. Bearing in mind the range of n for which Theorem 5 is valid, this result holds for $f \ge 8$.
What remains to be shown is that (i) smaller values of n do not suffice to guarantee an error smaller than $\delta_f$, and (ii) the result also holds for $f < 8$. Both are achieved by exhaustively checking all rounding combinations for increasing values of f. For (ii), this shows that the result holds for all $f \ge 4$ (all f that require at least one iteration). To prove (i), we use $n = \frac{1}{2}f + 1$ for even f and $n = \frac{1}{2}f + \frac{3}{2}$ for odd f, until a final absolute error larger than $\delta_f$ is found. All $f \le 300$ are checked, and several cases where the final error exceeds $\delta_f$ are identified, the first of which are $f = 20$ and $f = 223$. This proves that $n = \lfloor(f+5)/2\rfloor$ is not only a sufficient condition but (in general) also a necessary one.    □
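In the same spirit, evaluating the bound of Theorem 5 at $n = \lfloor(f+5)/2\rfloor$, and at one bit less, illustrates both directions of the corollary (keeping in mind that the bound exceeding 1 does not by itself prove failure; that part relies on the exhaustive search):

```python
sigma = 2.71
for f in range(8, 32):
    n = (f + 5) // 2
    ok = 2 ** ((f - 1) / 2 - n) * sigma + 0.5              # always < 1
    marginal = 2 ** ((f - 1) / 2 - (n - 1)) * sigma + 0.5  # may exceed 1
    print(f, n, round(ok, 3), round(marginal, 3))
```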

6. Square Root

Besides being a result in itself, the reciprocal square root can be used to compute the square root by multiplying it with the original input $[[a]]$. In fact, this seems the most efficient approach, as it computes the square root with multiplications and additions only, whereas applying the Newton–Raphson method to the square root directly would lead to an algorithm that requires the computation of a reciprocal (and hence a full Newton–Raphson computation) in every iteration.

6.1. Error for Square Root

Looking back at the computation of the reciprocal square root: after computing $c_\theta^*$, we have several options. We could finish the computation of the reciprocal square root as before, multiplying $c_\theta^*$ by $\sqrt{v}$ and subsequently performing a rounding step. Because the multiplication by a will follow, it makes sense not to round to the original precision at this stage but to keep the extra n bits for higher accuracy. Still, rounding to $f+n$ fractional bits induces an error that is then multiplied by a, which for large values of a leads to a large error.
Instead, it is significantly better to multiply $c_\theta^*$ by a first, then perform a rounding step, and only then multiply by $\sqrt{v}$. Even though the largest error is still attained for large values of a, we at least avoid multiplying the intermediate rounding error by this large a.
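The difference between the two orderings is one of orders of magnitude: one unit η of intermediate rounding error is amplified by a factor a in the first ordering, but only by $\sqrt{v} \le 1$ in the second. With toy numbers (chosen for illustration only):

```python
f, n = 16, 8
eta = 2.0 ** -(f + n)     # one unit of intermediate rounding error
a = 2.0 ** 15             # a large input
v = 2.0 ** -16            # scaling such that a*v = 0.5 lies in [0.5, 2)
print(a * eta)            # first ordering:  error ~ 2**-9, far above delta_f
print(eta * v ** 0.5)     # second ordering: error ~ 2**-32, negligible
```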
A third option, with an accuracy practically equal to that of the second, is to multiply a and $\sqrt{v}$ separately (similar to multiplying g and $\sqrt{v}$ in Algorithm 3):

$$ w = a\sqrt{v} + \tilde{\eta}_1. $$

Here, $|\tilde{\eta}_1| < \delta_{f+n}$. We then multiply $c_\theta^*$ by w and (deterministically) round the result to the original precision:

$$ d_\theta^* = c_\theta^*\, w + \eta_2. $$

Again, $|\eta_2| < \frac{1}{2}\delta_f$. Subtracting the exact solution gives the absolute error:

$$ d_\theta^* - \sqrt{a} = \left(\frac{1}{\sqrt{a v + \eta_1}} + \epsilon_{\theta,n}\right)\left(a\sqrt{v} + \tilde{\eta}_1\right) + \eta_2 - \sqrt{a} = -\frac{\sqrt{a}\,\eta_1}{2 a v}\left(1 - \frac{3}{4}\,\frac{\eta_1}{a v} + \frac{5}{8}\left(\frac{\eta_1}{a v}\right)^2 - \cdots\right) + \epsilon_{\theta,n}\, a\sqrt{v} + c^*\tilde{\eta}_1 + \eta_2 \qquad (14) $$

$$ \left|d_\theta^* - \sqrt{a}\right| < \frac{\sqrt{a}\,|\eta_1|}{2b}\left(1 + \frac{|\eta_1|}{b}\right) + \frac{|\epsilon_{\theta,n}|\, b}{\sqrt{v}} + \left|c^*\tilde{\eta}_1\right| + |\eta_2|. \qquad (15) $$
The algorithm is summarized in Algorithm 6.
Algorithm 6 $\mathrm{Sqrt}([[a]], n = 0)$ ▹ $2^{-1} \le a < 2^{1}$
 Lines 1–10 of Algorithm 5
11: $[[w]] \leftarrow \mathrm{Round}_{f+n}([[a]]\,[[v^{1/2}]])$
12: $[[d_\theta]] \leftarrow \mathrm{Round}_{f+2n}([[c_\theta]]\,[[w]], \mathit{deterministic})$
13: return $[[d_\theta]]$ ▹ $2^{-1} \le d_\theta < 2^{1}$
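In representation terms (integers scaled by $2^{f+n}$), lines 11 and 12 amount to the following toy, non-secure sketch, where c_rep stands in for the output of lines 1–10 and trunc is a deterministic round-half-up truncation:

```python
import math

def trunc(x, k):
    """Drop k bits from x, rounding half up (deterministic rounding)."""
    return (x + (1 << (k - 1))) >> k

f, n = 8, 6
a_rep = 3 << (f + n)                             # a = 3
sqrt_v_rep = 1 << (f + n - 1)                    # sqrt(v) = 1/2, v = 1/4, b = 0.75
c_rep = round(2 ** (f + n) / math.sqrt(0.75))    # stand-in for lines 1-10
w_rep = trunc(a_rep * sqrt_v_rep, f + n)         # line 11: w = a*sqrt(v)
d_rep = trunc(c_rep * w_rep, f + 2 * n)          # line 12: f fractional bits
print(d_rep / 2 ** f, math.sqrt(3))              # ~1.7305 vs 1.7320...
```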
Theorem 6.
If Algorithm 6 is used to compute $\sqrt{a}$ for $a \in \mathbb{Q}_{\langle 2f,f \rangle}$, with $n \ge \frac{1}{2}f$, then the absolute error (14) is bounded by $\left(2^{f/2-n}\left(\frac{1}{2}(1 + \delta_{f+n}) + \sigma\right) + \frac{1}{2}\right)\delta_f$ for even f and by $\left(2^{f/2-n}\left(\frac{1}{4}(1 + \frac{1}{2}\delta_{f+n}) + \sqrt{2}\,\sigma\right) + \frac{1}{2}\right)\delta_f$ for odd f, where $\sigma = 2.71$.
Proof. 
For $a < 2^{n}$, we have $\eta_1 = \tilde{\eta}_1 = 0$, and the error simplifies to $|\epsilon_{\theta,n}\, b/\sqrt{v} + \eta_2|$. This is bounded by $\left(2^{(2-n)/2}\sigma + \frac{1}{2}\right)\delta_f$ for even n, and by $\left(2^{(1-n)/2}\sigma + \frac{1}{2}\right)\delta_f$ for odd n. However, the error increases for larger a. For $a \ge 2^{n}$, the first term in (15) comes into play and increases with a. At the same time, a larger a generally leads to a smaller v, which increases the second term as well. Since $n \ge \frac{1}{2}f$, we still have $\tilde{\eta}_1 = 0$. Thus, the error bound simplifies to $E_{d_\theta^*}$, where

$$ E_{d_\theta^*} = \frac{\sqrt{a}\,|\eta_1|}{2b}\left(1 + \frac{|\eta_1|}{b}\right) + \frac{|\epsilon_{\theta,n}|\, b}{\sqrt{v}} + |\eta_2|. $$

For $a \ge 2^{f-1}$ and even f, we have $v = 2^{-f}$, and we can write $a = 2^f b$, with $0.5 \le b < 1$. Substituting this into the above equation gives

$$ E_{d_\theta^*} < \frac{\sqrt{2^f b}\,\delta_{f+n}}{2b}\left(1 + \frac{\delta_{f+n}}{b}\right) + \sigma\, b\, 2^{f/2}\,\delta_{f+n} + \frac{1}{2}\delta_f = 2^{f/2-n}\left(\frac{1}{2\sqrt{b}}\left(1 + \frac{\delta_{f+n}}{b}\right) + \sigma b\right)\delta_f + \frac{1}{2}\delta_f. $$

The factor within the outer parentheses increases with b and can be bounded by choosing $b = 1$, which leads to the bound for even f. Notice that if $2^{f-2} \le a < 2^{f-1}$, then indeed $1 \le b < 2$. In this case, however, the $2^{f/2-n}$ factor becomes $2^{f/2-n-1}$, making it twice as small, while the factor between parentheses becomes less than twice as large. Thus, this case decreases the overall error.
For $a \ge 2^{f-1}$ and odd f, we have $v = 2^{-(f-1)}$, and we can write $a = 2^{f-1} b$, with $1 \le b < 2$. Substituting this into the same equation gives

$$ E_{d_\theta^*} < 2^{(f-1)/2-n}\left(\frac{1}{2\sqrt{b}}\left(1 + \frac{\delta_{f+n}}{b}\right) + \sigma b\right)\delta_f + \frac{1}{2}\delta_f. $$

Substituting $b = 2$ then yields the bound stated for odd f. Note that in this case choosing a smaller a would lead to a lower value for the factor in front of the parentheses as well as a lower value for the factor in parentheses, and therefore it need not be considered.    □
Corollary 4.
If Algorithm 6 is used to compute $\sqrt{a}$ for $a \in \mathbb{Q}_{\langle 2f,f \rangle}$, then computing with $n = \lfloor(f+7)/2\rfloor$ additional bits guarantees that the absolute error (14) is strictly smaller than $\delta_f$.
Proof. 
Using the result of Theorem 6 for the case that f is even, the absolute error is certainly smaller than $\delta_f$ if the following holds:

$$ 2^{f/2-n}\left(\frac{1}{2}(1 + \delta_{f+n}) + \sigma\right) + \frac{1}{2} < 1. $$

To get rid of the $\delta_{f+n}$ term, we assign it a fairly large value, which, as we will see, does not matter much for the outcome. We choose $\delta_{f+n} = \frac{1}{16}$, which would be the correct value if $f+n = 4$. It then follows that (approximately) $n > \frac{1}{2}f + 2.69$, from which we conclude that $n = \frac{1}{2}f + 3$ is sufficient to guarantee an error smaller than $\delta_f$. Based on simulation results, we suspect that $n = \frac{1}{2}f + 2$ would already be sufficient, as we were unable to find a case in which the error exceeded $\delta_f$. However, since the largest error does not always occur for the same value of a (as it does, for example, in the case of the reciprocal square root), simulations are very costly and could not be performed for large values of f. There are cases for which $n = \frac{1}{2}f + 1$ leads to errors larger than $\delta_f$, so this value of n clearly does not guarantee an error smaller than $\delta_f$.
For odd values of f, an analogous derivation shows that using $n = \frac{1}{2}f + \frac{7}{2}$ is guaranteed to keep the final absolute error below $\delta_f$. In this case too, using one bit less seems to be sufficient, since no counterexample could be found. Cases in which the error exceeds $\delta_f$ were found for $n = \frac{1}{2}f + \frac{3}{2}$, which shows that this value of n is certainly not sufficient.    □

6.2. Integer Square Root

A related primitive is the integer square root. For a given integer $[[a]]$, this function computes the integer $[[q]]$ such that $q \le \sqrt{a} < q+1$; hence, $q = \lfloor\sqrt{a}\rfloor$. For this purpose, we may exploit the algorithm derived in the previous section.
Corollary 5.
Suppose Algorithm 6 is used to compute $\sqrt{a}$ for an integer $a \in \mathbb{Q}_{\langle 2f,f \rangle}$, with $f \ge 3$. If $\tilde{q}$ is rounded to an integer $\bar{q}$ deterministically, then no additional bits are required to guarantee that $\bar{q} \in \{q, q+1\}$.
Proof. 
With the input a now an integer, we have $\eta_1 = 0$. Also, $\tilde{\eta}_1 = 0$. Thus, the error bound (15) simplifies to $|\epsilon_{\theta,n}\, b/\sqrt{v} + \eta_2|$. For even f, this is bounded by

$$ \left|\epsilon_{\theta,n}\, b/\sqrt{v} + \eta_2\right| < \frac{\sigma\,\delta_{f+n}\, b}{\sqrt{v}} + \frac{1}{2}\delta_f \le \left(2^{f/2-n}\sigma + \frac{1}{2}\right)\delta_f. $$

Even without any extra bits ($n = 0$), this bound stays below 0.5 for $f \ge 3$, so that $q - 0.5 < \tilde{q} < q + 1.5$. Therefore, if $\tilde{q}$ is rounded deterministically to an integer $\bar{q}$, then $\bar{q} \in \{q, q+1\}$. Analogously, the same result can be shown to hold for odd f.    □
Remark 7.
The result of Corollary 5 holds even if the rounding in line 12 of Algorithm 6 is performed probabilistically.
After computing the square root of a and rounding to an integer, the correct solution q is recovered by a single secure comparison. The complete procedure is summarized in Algorithm 7.
Algorithm 7 $\mathrm{IntSqrt}([[a]])$ ▹ $-2^{f-1} \le a < 2^{f-1}$, with $a \in \mathbb{Z}$
 Line 2 in Algorithm 6 simplifies to $[[b]] \leftarrow 2^{-(f+n)}\,[[a]]\,[[v]]$
1: $[[\tilde{q}]] \leftarrow \mathrm{Sqrt}([[a\, 2^{-f}]], n = 0)$ ▹ $2^{-1} \le \tilde{q} < 2^{1}$
2: $[[\bar{q}]] \leftarrow \mathrm{Round}_{f}([[\tilde{q}]], \mathit{deterministic})$
3: $[[q]] \leftarrow [[\bar{q}]] - \left([[\bar{q}]]^2 > [[a]]\right)$
4: return $[[q]]$ ▹ $-2^{f-1} \le q < 2^{f-1}$
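The correction in line 3 only relies on $\bar{q} \in \{q, q+1\}$; the following plain-Python rendition of lines 2 and 3 (with math.isqrt as the reference) exercises the extremes allowed by Corollary 5:

```python
import math

def correct(a, q_tilde):
    """Recover q = floor(sqrt(a)) from an approximation q_tilde that lies
    within (q - 0.5, q + 1.5), as in lines 2-3 of Algorithm 7."""
    q_bar = round(q_tilde)               # line 2: deterministic rounding
    return q_bar - (q_bar * q_bar > a)   # line 3: subtract 1 iff q_bar = q+1

for a in range(1, 10000):
    q = math.isqrt(a)
    for q_tilde in (q - 0.4, q + 0.4, q + 1.4):
        assert correct(a, q_tilde) == q
```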

7. Conclusions

Basic secure fixed-point arithmetic allows for efficient $+$, $-$, $*$, $<$ operations and often extends easily to efficient dot products and matrix multiplications. The availability of efficient solutions for secure reciprocals and square roots opens up a much broader scope of applications, such as efficient solutions for secure Gaussian elimination, secure linear programming, and secure Cholesky decomposition with appropriately scaled input matrices.
As announced at the end of Section 2.1, our protocols achieve logarithmic round complexity: the round complexity is dominated by the $\theta = O(\log f)$ rounds for the for loops in Algorithms 2 and 5, as each iteration takes $O(1)$ rounds due to the use of probabilistic rounding. In concurrent work, we achieved similar results for the secure computation of the sine and cosine in secure fixed-point arithmetic, relying on an iterative method very different from Newton–Raphson iteration, but also supporting any desired precision [20].
The use of secure fixed-point arithmetic is essential in many secure computation frameworks. As part of ongoing work, we are integrating all of these solutions into the Python package MPyC [21], where the overall goal is to support all fixed-point arithmetic and functions with arbitrary (parameterized) precision, expressed as the number of fractional bits f. Our solutions for secure integer division (see Section 4) and secure integer square roots (see Section 6.2) therefore apply to secure integer arithmetic over arbitrarily large ranges. In fact, we use this form of secure integer division as a building block for the implementation of secure class groups in MPyC; see [22].
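As an impression of the intended usage, a minimal sketch assuming MPyC's SecFxp secure fixed-point type and its overloaded arithmetic operators (exact API details may differ between MPyC versions):

```python
from mpyc.runtime import mpc

secfxp = mpc.SecFxp(32, 16)   # 32-bit secure fixed point, f = 16 fractional bits

async def main():
    async with mpc:
        a = secfxp(2.25)
        r = 1 / a                       # secure reciprocal
        print(await mpc.output(r))      # ~0.4444, up to fixed-point error

mpc.run(main())
```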

Author Contributions

Secure computation protocols, S.K. and B.S.; numerical analysis, S.K.; writing, S.K. and B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Lemmas

This appendix collects all lemmas and proofs left out of the main text.

Appendix A.1. Lemmas for the Reciprocal

Lemma A1.
If the Newton–Raphson method is used to compute 1/b for some $b \in [0.5, 1)$ with exact arithmetic, employing initial approximation (2), and the number of iterations $\theta \ge 1$ is computed with (6), then $|\epsilon_{\theta-1}(b)| < b^{2^{\theta-1}-1}\sqrt{\delta_f}$.
Proof. 
The value of θ given by (6) is chosen such that $|\epsilon_\theta| < \delta_f$ holds for all b, and specifically for $b = 1$. From (5), it then follows that $\epsilon_0(1) < \delta_f^{2^{-\theta}}$. With initial approximation (2), we have $|\epsilon_0(b)| \le \epsilon_0(1)$, and therefore $|\epsilon_0(b)| < \delta_f^{2^{-\theta}}$, for all b. Then, applying (5) to $\epsilon_0(b)$ with $i = \theta-1$ gives the result. □
Lemma A2.
If the Newton–Raphson method is used to compute 1/b for some $b \in [0.5, 1)$, employing initial approximation (2), then the rounding error in the first iteration is bounded by $(c_0(b) + 1)\,\delta_f$, while in any subsequent iteration it is bounded by $(1/b + 1)\,\delta_f$.
Proof. 
Recall that for the reciprocal the iterative rule reads as

$$ c_{i+1} = c_i(2 - c_i b), $$

in which the subtraction is carried out without rounding.
The first multiplication $c_i b = (c - \epsilon_i)b = 1 - \epsilon_i b$ yields after rounding

$$ (c_i b)^{\$} = 1 - \epsilon_i b + e_{i+1,1}, \qquad (\mathrm{A1}) $$

where $e_{i+1,1}$ is a probabilistic rounding term with $|e_{i+1,1}| < \delta_f$ (see Section 2.3). The second multiplication gives

$$ c_i\left(2 - (c_i b)^{\$}\right) = (c - \epsilon_i)(1 + \epsilon_i b - e_{i+1,1}) = c - c\, e_{i+1,1} - \epsilon_i^2 b + \epsilon_i e_{i+1,1} = c - \epsilon_i^2 b - c_i e_{i+1,1}, $$

which is rounded to

$$ c_{i+1} = \left(c_i\left(2 - (c_i b)^{\$}\right)\right)^{\$} = c - \epsilon_i^2 b - c_i e_{i+1,1} + e_{i+1,2}, $$

where $|e_{i+1,2}| < \delta_f$. Thus, instead of the “exact” result in (4), we now find

$$ \epsilon_{i+1} = \epsilon_i^2 b + e_{i+1}, $$

where $e_{i+1}$ is the total rounding error resulting from iteration $i+1$:

$$ e_{i+1} = c_i e_{i+1,1} - e_{i+1,2}. $$

Because $|e_{i+1,1}|$ and $|e_{i+1,2}|$ are strictly smaller than $\delta_f$, it follows that $|e_1| < (c_0 + 1)\,\delta_f$. Moreover, using initial approximation (2), it holds that $0 < c_i \le c$ for $i \ge 1$, and thus $|e_{i+1}| < (1/b + 1)\,\delta_f$ for these values of i. □
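The exact part of this error recurrence is easily verified symbolically with rational arithmetic (a standalone check, not part of any protocol):

```python
from fractions import Fraction as F

b = F(3, 4)                 # some b in [0.5, 1)
c = 1 / b                   # exact reciprocal c = 4/3
for eps in (F(1, 10), F(-1, 50)):
    c_i = c - eps
    eps_next = c - c_i * (2 - c_i * b)
    assert eps_next == b * eps ** 2     # error recurrence without rounding
```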
Lemma A3.
If the Newton–Raphson method is used to compute 1/a for some $a = \pm 2^{\lambda}\delta_f$, $\lambda \in \{0, 1, 2, \ldots, 2f-2\}$, employing initial approximation (2), and the number of iterations θ is computed with (6), then $|\epsilon_{\theta,n}| \le 2\delta_{f+n}$. In fact, $\epsilon_{\theta,n} \in \{-\delta_{f+n}, 0, \delta_{f+n}, 2\delta_{f+n}\}$.
Proof. 
When $a = \pm 2^{\lambda}\delta_f$, with $\lambda \in \{0, 1, 2, \ldots, 2f-2\}$, a has exactly one nonzero bit. Consequently, a will be scaled to $b = 0.5$, for which $c = 2$ is an exact multiple of $\delta_{f+n}$. Since the intermediate approximations $c_i$ are also multiples of $\delta_{f+n}$, the error terms $\epsilon_i = c - c_i$ will also be multiples of $\delta_{f+n}$. As a result, the $\epsilon_i b$ term in (A1) is a multiple of $\frac{1}{2}\delta_{f+n}$, and it follows that the first rounding term in every iteration, $|e_{i+1,1}|$, is either zero or $\frac{1}{2}\delta_{f+n}$. If, for the moment, we omit the second rounding of the final iteration, then combining this knowledge with the analysis in the proof of Theorem 1 gives the maximal error:

$$ E^{\diamond}_{3,15}(0.5)\,\delta_{f+n} = \left((0.5)^{7} + 2\,(0.5)^{4}\, T_3 \sqrt{\delta_{15}} + 0.5\, T_3^2\, \delta_{15} + 1\right)\delta_{f+n}, $$

where $T_3(0.5) = (2\sqrt{2} + 13)/4$. The diamond superscript indicates that another (probabilistic) rounding step is still to be performed. Computing the above value shows that it is only slightly above $\delta_{f+n}$. Because the correct solution is an exact multiple of $\delta_{f+n}$, the second rounding in the final iteration can only take the error as far as the next multiple of $\delta_{f+n}$, which is $2\delta_{f+n}$.
By exhaustively checking all rounding combinations, this bound was found to also hold for the cases $f+n \le 14$ with at least one iteration. Clearly, when $\theta = 0$, the error is already below $\delta_{f+n}$ to begin with and, since no iterations are performed, does not change.
So far, we have assumed that all errors point in the positive direction. The situation is different if all rounding errors go in the negative direction. In that case, the worst-case scenario is that the iteration error after $\theta-1$ iterations is zero, while all rounding errors in the final iteration are maximally negative. Again assuming that $|e_{i+1,1}|$ is either zero or $\frac{1}{2}\delta_{f+n}$, it then follows from (7) that

$$ \min \epsilon^{\diamond}_\theta(0.5) = -\delta_{f+n}. $$

As before, the diamond superscript indicates that a final (probabilistic) rounding is still to be performed. Since the exact solution is still a multiple of $\delta_{f+n}$, the second rounding in the final iteration cannot take the error any further than $-\delta_{f+n}$. This result is independent of the values of f and θ.
Combining $-\delta_{f+n} \le \epsilon_\theta \le 2\delta_{f+n}$ with the knowledge that the error is a multiple of $\delta_{f+n}$, we find that $\epsilon_{\theta,n} \in \{-\delta_{f+n}, 0, \delta_{f+n}, 2\delta_{f+n}\}$. □
Lemma A4.
If the Newton–Raphson method is used to compute 1/a for $a = \pm 3\delta_f$, employing initial approximation (2), and the number of iterations θ is computed with (6), then $|\epsilon_{\theta,n}| \le \frac{7}{3}\delta_{f+n}$ for $f+n = 6$ and $|\epsilon_{\theta,n}| \le \frac{5}{3}\delta_{f+n}$ for all other values of f and n.
Proof. 
When $a = \pm 3\delta_f$, a will be scaled to $b = 0.75$. With $b = 0.75$, $c = 4/3$, which is a multiple of $\frac{1}{3}\delta_{f+n}$. Since the intermediate approximations $c_i$ are multiples of $\delta_{f+n}$, the error terms $\epsilon_i = c - c_i$ will also be multiples of $\frac{1}{3}\delta_{f+n}$. As a result, the $\epsilon_i b$ term in (A1) is a multiple of $\frac{1}{4}\delta_{f+n}$, and it follows that the first rounding term in every iteration, $|e_{i+1,1}|$, can be at most $\frac{3}{4}\delta_{f+n}$. If, for the moment, we omit the second rounding of the final iteration, then combining this knowledge with the analysis in the proof of Theorem 1 gives

$$ E^{\diamond}_{3,15}(0.75)\,\delta_{f+n} = \left((0.75)^{7} + 2\,(0.75)^{4}\, T_3 \sqrt{\delta_{15}} + 0.75\, T_3^2\, \delta_{15} + 1\right)\delta_{f+n}, $$

where $T_3 = (c_0(0.75) + 1) + (3-2)(0.75/0.75 + 1) = 2 + \sqrt{3}$, and the diamond superscript indicates that a final rounding is still to be performed. Computing the above value shows that it is slightly below $1.15\,\delta_{f+n}$. Because the correct solution c is a multiple of $\frac{1}{3}\delta_{f+n}$, the second rounding in the final iteration can only take the error as far as $\frac{5}{3}\delta_{f+n}$.
By exhaustively checking all rounding combinations, this bound is found to also hold for the cases $f+n \le 14$ with at least one iteration, except for $f+n = 6$, in which case $|\epsilon_{\theta,n}| = \frac{7}{3}\delta_{f+n}$ is the largest possible error. Here too, when $\theta = 0$, the error is already below $\delta_{f+n}$ to begin with and does not change, since no iterations are performed. □

Appendix A.2. Lemmas for the Reciprocal Square Root

In the following lemma, we make use of the two points where $c_0(b) = c(b)$, which we hereby define as $b_1 \approx 0.58$ and $b_2 \approx 1.65$. Between these points, $\epsilon_0(b) < 0$, while outside this interval $\epsilon_0(b) > 0$.
Lemma A5.
If the Newton–Raphson method is used to compute $1/\sqrt{b}$ for some $b \in [0.5, 2)$ with exact arithmetic, employing initial approximation (9), and the number of iterations $\theta \ge 1$ is computed with (12), then $|\epsilon_{\theta-1}(b)| < \xi^{2^{\theta-2}}\left(\sqrt{b}/2\right)^{2^{\theta-1}-1}\sqrt{\delta_f/\tau}$, where $\tau = 3/2$. The factor ξ may be taken equal to 1.045 for $b_1 < b < b_2$ and unity elsewhere.
Proof. 
The formula for θ in (12) is constructed in such a way that $\epsilon_\theta < \delta_f$ for $b = 2$. This is based on the assumption that $\epsilon_i = \left(\frac{3}{2}\sqrt{b}\right)^{2^i-1}\epsilon_0^{2^i}$, which for $b = 2$ is a safe assumption. From this, it follows that $|\epsilon_0(2)| < \tau^{2^{-\theta}-1}\,\delta_f^{2^{-\theta}}$. With the initial approximation (9), we have $|\epsilon_0(b)| \le \epsilon_0(2)$, and therefore $|\epsilon_0(b)| < \tau^{2^{-\theta}-1}\,\delta_f^{2^{-\theta}}$, for all b.
Next, consider the first iteration. Since there are inputs for which $\epsilon_0 < 0$, we cannot simply ignore the third-order term in (11). Instead, we have

$$ |\epsilon_1| = \left|\frac{3}{2}\sqrt{b}\,\epsilon_0^2 - \frac{1}{2}b\,\epsilon_0^3\right| \le \frac{3}{2}\sqrt{b}\,\epsilon_0^2\left(1 + \frac{1}{3}\sqrt{b}\,|\epsilon_0|\right). $$

The third-order term in the first expression only takes negative values for $b_1 < b < b_2$. For other values of b it is positive, meaning that it only decreases the error, and it can thus be safely ignored. Consequently, the largest value that the $\sqrt{b}$ term between parentheses in the second expression might attain is $\sqrt{b_2}$. Combining this value with the largest initial error β (even though these two maxima do not coincide), we find that the term between parentheses is bounded by 1.045, and we obtain

$$ |\epsilon_1| \le \frac{3}{2}\,\xi\,\sqrt{b}\left(\tau^{2^{-\theta}-1}\,\delta_f^{2^{-\theta}}\right)^2, $$

where $\xi = 1.045$ for $b_1 < b < b_2$ and unity elsewhere.
We know that after the first iteration, $\epsilon_i > 0$. Therefore, the third-order term in (11) will be larger than zero, and we have $\epsilon_{i+1} < \frac{3}{2}\sqrt{b}\,\epsilon_i^2$. Applying this for the remaining $\theta-2$ iterations gives

$$ |\epsilon_{\theta-1}| < \left(\frac{3}{2}\sqrt{b}\right)^{2^{\theta-2}-1}\epsilon_1^{2^{\theta-2}} \le \left(\frac{3}{2}\sqrt{b}\right)^{2^{\theta-2}-1}\left(\frac{3}{2}\,\xi\,\sqrt{b}\left(\tau^{2^{-\theta}-1}\,\delta_f^{2^{-\theta}}\right)^2\right)^{2^{\theta-2}} = \xi^{2^{\theta-2}}\left(\frac{\sqrt{b}}{2}\right)^{2^{\theta-1}-1}\sqrt{\delta_f/\tau}, $$

which proves the statement. □
which proves the statement. □
Next, we consider the case in which the result of every multiplication is rounded. For the reciprocal square root, there are three multiplications per iteration.
Lemma A6.
If the Newton–Raphson method is used to compute $1/\sqrt{b}$ for some $b \in [0.5, 2)$, employing initial approximation (9), then the rounding error in the first iteration is bounded by $(c_0^2/2 + c_0/2 + 1)\,\delta_f$, while in any subsequent iteration it is bounded by $(1/(2b) + 1/(2\sqrt{b}) + 1)\,\delta_f$.
Proof. 
Recall that for the reciprocal square root, the iterative rule reads as

$$ c_{i+1} = \frac{1}{2} c_i\left(3 - c_i^2 b\right). $$

The first multiplication gives

$$ c_i b = (c - \epsilon_i)b = \sqrt{b} - \epsilon_i b, $$

which is subsequently rounded to

$$ (c_i b)^{\$} = \sqrt{b} - \epsilon_i b + e_{i+1,1}, \qquad (\mathrm{A3}) $$

with probabilistic rounding error $|e_{i+1,1}| < \delta_f$. Next, we perform the second multiplication:

$$ c_i\,(c_i b)^{\$} = (c - \epsilon_i)\left(\sqrt{b} - \epsilon_i b + e_{i+1,1}\right) = 1 - 2\sqrt{b}\,\epsilon_i + \epsilon_i^2 b + c_i e_{i+1,1}, $$

which is then rounded to

$$ \left(c_i\,(c_i b)^{\$}\right)^{\$} = 1 - 2\sqrt{b}\,\epsilon_i + \epsilon_i^2 b + c_i e_{i+1,1} + e_{i+1,2}, $$

where $|e_{i+1,2}| < \delta_f$. The subtraction that follows is carried out without rounding. The third and final multiplication gives

$$ \frac{1}{2} c_i\left(3 - \left(c_i\,(c_i b)^{\$}\right)^{\$}\right) = \frac{1}{2}(c - \epsilon_i)\left(2 + 2\sqrt{b}\,\epsilon_i - \epsilon_i^2 b - c_i e_{i+1,1} - e_{i+1,2}\right) = c - \frac{3}{2}\sqrt{b}\,\epsilon_i^2 + \frac{1}{2}b\,\epsilon_i^3 - \frac{1}{2}c_i^2 e_{i+1,1} - \frac{1}{2}c_i e_{i+1,2}, $$

which is rounded to

$$ c_{i+1} = \left(\frac{1}{2} c_i\left(3 - \left(c_i\,(c_i b)^{\$}\right)^{\$}\right)\right)^{\$} = c - \frac{3}{2}\sqrt{b}\,\epsilon_i^2 + \frac{1}{2}b\,\epsilon_i^3 - \frac{1}{2}c_i^2 e_{i+1,1} - \frac{1}{2}c_i e_{i+1,2} + e_{i+1,3}. $$

Again, $|e_{i+1,3}| < \delta_f$. Thus, instead of the “exact” result in (11), we now find

$$ \epsilon_{i+1} = \frac{3}{2}\sqrt{b}\,\epsilon_i^2 - \frac{1}{2}b\,\epsilon_i^3 + e_{i+1}, \qquad (\mathrm{A4}) $$

with

$$ e_{i+1} = \frac{1}{2}c_i^2 e_{i+1,1} + \frac{1}{2}c_i e_{i+1,2} - e_{i+1,3}. $$

Because all rounding terms are strictly smaller than $\delta_f$ in absolute value (see Section 2.3), it directly follows that $|e_1| < (c_0^2/2 + c_0/2 + 1)\,\delta_f$. Moreover, with the approximation (9), we have $0 < c_i \le c$, and therefore $|e_{i+1}| < (1/(2b) + 1/(2\sqrt{b}) + 1)\,\delta_f$, for $i \ge 1$. □
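The exact part of this recurrence can likewise be verified with rational arithmetic; taking $b = 1/4$ keeps $\sqrt{b} = 1/2$ rational (a standalone check, not part of any protocol):

```python
from fractions import Fraction as F

b, sqrt_b = F(1, 4), F(1, 2)    # b = 1/4, so sqrt(b) = 1/2 is rational
c = 1 / sqrt_b                  # exact reciprocal square root c = 2
for eps in (F(1, 10), F(-1, 50)):
    c_i = c - eps
    eps_next = c - F(1, 2) * c_i * (3 - c_i ** 2 * b)
    assert eps_next == F(3, 2) * sqrt_b * eps ** 2 - F(1, 2) * b * eps ** 3
```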
Lemma A7.
If the Newton–Raphson method is used to compute $1/\sqrt{a}$ for some $a = 2^{\lambda}\delta_f$, with $\lambda \in \{0, 2, 4, \ldots, 2f-2\}$ and even f, or with $\lambda \in \{1, 3, 5, \ldots, 2f-3\}$ and odd f, employing initial approximation (9), and the number of iterations θ is computed with (12), then $|\epsilon_{\theta,n}| \le \delta_{f+n}$. In particular, $\epsilon_{\theta,n} \in \{-\delta_{f+n}, 0, \delta_{f+n}\}$.
Proof. 
When $a = 2^{\lambda}\delta_f$, with $\lambda \in \{0, 2, \ldots, 2f-2\}$ and even f, or with $\lambda \in \{1, 3, \ldots, 2f-3\}$ and odd f, a will be scaled to $b = 1$. Then, $c = 1$, which is an exact multiple of $\delta_{f+n}$. Since the intermediate approximations $c_i$ are also multiples of $\delta_{f+n}$, the error terms $\epsilon_i = c - c_i$ will also be multiples of $\delta_{f+n}$. As a result, the $\epsilon_i b$ term in (A3) is a multiple of $\delta_{f+n}$, and it follows that the first rounding term in every iteration, $|e_{i+1,1}|$, is zero. If, for the moment, we omit the third rounding term of the final iteration, then combining this knowledge with the analysis in the proof of Theorem 4 gives the maximal error:

$$ E^{\diamond}_{4,19}(1)\,\delta_{f+n} = \left(\xi\left(\xi/2\right)^{15} + 2\left(\xi/2\right)^{8} T_4 \sqrt{\tau\,\delta_{19}} + \frac{3}{2} T_4^2\,\delta_{19} + \frac{1}{2}\right)\delta_{f+n}, $$

where $\xi = 1.045$, $T_4 = (c_0(1)/2 + 1) + (4-2)\left(\frac{1}{2} + 1\right) = \frac{1}{8}\left(35 + \sqrt{2}\right)$, and the diamond superscript indicates that another rounding step is still to be performed. A simple numerical evaluation of the above expression shows that its value is slightly below $0.51\,\delta_{f+n}$. Because the correct solution c is an exact multiple of $\delta_{f+n}$, the third rounding in the final iteration can only take the error as far as the next multiple of $\delta_{f+n}$, which is $\delta_{f+n}$. By exhaustively checking all rounding combinations, this bound is found to also hold for the cases $f+n \le 18$ with at least one iteration. Clearly, when $\theta = 0$, the error is already below $\delta_{f+n}$ to begin with and, since no iterations are performed, does not change.
Combining $-\delta_{f+n} \le \epsilon_\theta \le \delta_{f+n}$ with the knowledge that the error is a multiple of $\delta_{f+n}$, we find that $\epsilon_{\theta,n} \in \{-\delta_{f+n}, 0, \delta_{f+n}\}$. Numerical experiments further suggest that actually $\epsilon_{\theta,n} \in \{0, \delta_{f+n}\}$, but we will not prove this here. □

References

  1. Algesheimer, J.; Camenisch, J.; Shoup, V. Efficient computation modulo a shared secret with application to the generation of shared safe-prime products. In Advances in Cryptology—CRYPTO 2002; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2442, pp. 417–432. [Google Scholar]
  2. Catrina, O.; de Hoogh, S. Secure multiparty linear programming using fixed-point arithmetic. In Computer Security—ESORICS 2010; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6345, pp. 134–150. [Google Scholar]
  3. Catrina, O.; Saxena, A. Secure computation with fixed-point numbers. In Financial Cryptography and Data Security—FC 2010; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6052, pp. 35–50. [Google Scholar]
  4. Liedel, M. Secure distributed computation of the square root and applications. In Information Security Practice and Experience—ISPEC 2012; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7232, pp. 277–288. [Google Scholar]
  5. Aly, A.; Smart, N.P. Benchmarking privacy preserved scientific operations. In Applied Cryptography and Network Security—ACNS 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11464, pp. 509–529. [Google Scholar]
  6. Knuth, D.E. The Art of Computer Programming (Vol. 2: Seminumerical Algorithms), 3rd ed.; Addison Wesley: Reading, MA, USA, 1997. [Google Scholar]
  7. Wilkinson, J.H. Rounding Errors in Algebraic Processes; Prentice Hall: Englewood Cliffs, NJ, USA, 1963. [Google Scholar]
  8. Wilkinson, J.H. The algebraic eigenvalue problem. In Monographs on Numerical Analysis; Clarendon Press: Oxford, UK, 1965. [Google Scholar]
  9. Aly, A.; Nawaz, K.; Salazar, E.; Sucasas, V. Through the looking-glass: Benchmarking secure multi-party computation comparisons for ReLU’s. In Cryptology and Network Security—CANS 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13641, pp. 44–67. [Google Scholar]
  10. Damgård, I.; Fitzi, M.; Kiltz, E.; Nielsen, J.B.; Toft, T. Unconditionally secure constant-rounds multi-party computation for equality, comparison, bits and exponentiation. In Theory of Cryptography Conference—TCC 2006; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3876, pp. 285–304. [Google Scholar]
  11. Damgård, I.; Nielsen, J.B. Universally composable efficient multiparty computation from threshold homomorphic encryption. In Advances in Cryptology—CRYPTO 2003; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2729, pp. 247–264. [Google Scholar]
  12. Croci, M.; Giles, M.B. Effects of round-to-nearest and stochastic rounding in the numerical solution of the heat equation in low precision. IMA J. Numer. Anal. 2022, 43, 1358–1390. [Google Scholar] [CrossRef]
  13. Na, T.; Ko, J.H.; Kung, J.; Mukhopadhyay, S. On-chip training of recurrent neural networks with limited numerical precision. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 3716–3723. [Google Scholar]
  14. Paxton, E.A.; Chantry, M.; Klöwer, M.; Saffin, L.; Palmer, T. Climate modeling in low precision: Effects of both deterministic and stochastic rounding. J. Clim. 2022, 35, 1215–1229. [Google Scholar] [CrossRef]
  15. Wang, N.; Choi, J.; Brand, D.; Chen, C.; Gopalakrishnan, K. Training deep neural networks with 8-bit floating point numbers. In Proceedings of the 32nd International Conference on Neural Information Processing Systems—NIPS 2018, Montréal, QC, Canada, 2–8 December 2018; Curran Associates, Inc.: Red Hook, NY, USA, 2018; pp. 7686–7695. [Google Scholar]
  16. Croci, M.; Fasi, M.; Higham, N.J.; Mary, T.; Mikaitis, M. Stochastic rounding: Implementation, error analysis and applications. R. Soc. Open Sci. 2022, 9, 211631. [Google Scholar] [CrossRef]
  17. Ryaben’kii, V.S.; Tsynkov, S.V. A Theoretical Introduction to Numerical Analysis; Chapman and Hall/CRC: New York, NY, USA, 2006. [Google Scholar]
  18. Yamamoto, T. Historical developments in convergence analysis for Newton’s and Newton-like methods. J. Comput. Appl. Math. 2000, 124, 1–23. [Google Scholar] [CrossRef]
  19. Ercegovac, M.; Lang, T. Digital Arithmetic; Morgan Kaufmann: San Francisco, CA, USA, 2004. [Google Scholar]
  20. Korzilius, S.; Schoenmakers, B. New approach for sine and cosine in secure fixed-point arithmetic. In Cyber Security, Cryptology, and Machine Learning—CSCML 2023; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13914, pp. 307–319. [Google Scholar]
  21. Schoenmakers, B. MPyC Package for Secure Multiparty Computation in Python. GitHub. 2018. Available online: github.com/lschoe/mpyc (accessed on 7 September 2023).
  22. Schoenmakers, B.; Segers, T. Efficient Extended GCD and Class Groups from Secure Integer Arithmetic. In Cyber Security, Cryptology, and Machine Learning—CSCML 2023; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13914, pp. 32–48. [Google Scholar]