
Algorithms 2010, 3(3), 224-243; https://doi.org/10.3390/a3030224

Article
Segment LLL Reduction of Lattice Bases Using Modular Arithmetic
Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL 60208, USA
Received: 28 May 2010 / Accepted: 29 June 2010 / Published: 12 July 2010

## Abstract

The algorithm of Lenstra, Lenstra, and Lovász (LLL) transforms a given integer lattice basis into a reduced basis. Storjohann improved the worst case complexity of the LLL algorithm by a factor of $O(n)$ using modular arithmetic. Koy and Schnorr developed a segment-LLL basis reduction algorithm that generates a lattice basis satisfying a weaker condition than the LLL reduced basis, with an $O(n)$ improvement over the LLL algorithm. In this paper we combine Storjohann’s modular arithmetic approach with the segment-LLL approach to further improve the worst case complexity of the segment-LLL algorithm by a factor of $n^{0.5}$.
Keywords:
Lattice; LLL basis reduction; reduced basis; successive minima; segments; modular arithmetic; fast matrix multiplication

## 1. Introduction

Given row vectors $b_1, \dots, b_n \in \mathbb{Z}^d$, an integer lattice L (for short, lattice) is defined as
$L := \left\{ v \in \mathbb{Z}^d \;\middle|\; v = \sum_{i=1}^n z_i b_i,\ z_i \in \mathbb{Z} \right\}$
Several important theoretical and practical problems benefit from studying lattices, including problems in geometry, cryptography, and integer programming. An important problem, whose study dates back to the 18th century, is that of finding the i-th successive minimum of a lattice, $i = 1, \dots, n$. This problem involves finding the smallest number $\lambda_i$ (and possibly an associated lattice element) such that there are i linearly independent elements in L of length at most $\lambda_i$ [1, Chapter 8]. The shortest lattice vector problem is the special case $i = 1$ of finding a shortest lattice vector only. This is a difficult problem to solve. For example, Ajtai showed that the problem of finding the shortest non-zero lattice vector under the $\ell_2$ norm is NP-hard under randomized reduction. Micciancio showed that an α-approximate version of this problem (under randomized reduction) remains NP-hard for any $\alpha < \sqrt{2}$. The problem of finding the shortest lattice vector under the $\ell_\infty$ norm was shown to be NP-complete by van Emde Boas.
Knowing that finding the exact shortest lattice vector is difficult in the worst case, the problem of finding approximate successive minima has been addressed by many researchers. In this context various notions of reduced bases have been proposed. In particular, the notions of LLL-reduced, semi-reduced, Korkine-Zolotarev reduced, block 2k-reduced, semi block 2k-reduced, and segment reduced bases were introduced by Lenstra, Lenstra, and Lovász, Schönhage, Kannan, Schnorr, and Koy and Schnorr, respectively. We define these and additional concepts below.

#### 1.1. Definitions of Reduced Lattice Bases

Without loss of generality we assume that $b_1, \dots, b_n$ are linearly independent. The superscript t denotes the transpose of a vector or a matrix. The $\ell_2$ norm is given by $\|y\| = (y y^t)^{0.5}$. $[x]$ denotes the nearest integer to a real number x (if non-unique, the candidate with smallest magnitude is chosen), $\lceil x \rceil$ denotes the smallest integer greater than or equal to x, and $\lfloor x \rfloor$ denotes the largest integer less than or equal to x. $T_{i,j}$ is the entry in the i-th row and j-th column of a matrix T. We use I to represent an identity matrix, and $e_i$ to represent its i-th column.
Let $B \in \mathbb{Z}^{n \times d}$ be such that the i-th row of B is $b_i$ for $1 \le i \le n$. For a given lattice basis $b_1, \dots, b_n$ the Gram-Schmidt algorithm determines the associated orthogonal vectors $b_1^*, \dots, b_n^*$ together with coefficients $\Gamma_{j,i}$ ($1 \le j < i \le n$) defined inductively by
$b_i^* = b_i - \sum_{j=1}^{i-1} \Gamma_{j,i} b_j^*, \quad \text{where } \Gamma_{j,i} = b_j^* b_i^t / \|b_j^*\|^2$
This can be rewritten as $B = \Gamma^t B^*$, where $B^*$ denotes the matrix whose i-th row is $b_i^*$, and Γ is an upper triangular matrix with $\Gamma_{i,i} = 1$ and $\Gamma_{j,i}$ ($j < i$) as given in (1.1). Let $D_{i,\dots,j} := \|b_i^*\|^2 \cdots \|b_j^*\|^2$. We denote $D_{1,\dots,l}$ by $d_l$. Note that $d_n$ is the Gramian determinant of B. When we consider k segments of B and $B^*$, $D_{k(l-1)+1,\dots,kl} := \|b_{k(l-1)+1}^*\|^2 \cdots \|b_{kl}^*\|^2$ is the segment Gramian determinant, which for simplicity we denote by $D(l)$, where k is fixed.
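The recurrence (1.1) translates directly into code. The following Python sketch (our illustration, not from the paper; function and variable names are ours) computes Γ, $B^*$, and the squared norms $\|b_i^*\|^2$ in exact rational arithmetic, so the results are not perturbed by rounding:

```python
from fractions import Fraction

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization for the rows of B.

    Returns Gamma with Gamma[j][i] = b_j* b_i^t / ||b_j*||^2 for j < i,
    the orthogonal vectors b_i* (rows of Bstar), and norms[i] = ||b_i*||^2.
    """
    n = len(B)
    Bstar = [[Fraction(x) for x in row] for row in B]
    Gamma = [[Fraction(0)] * n for _ in range(n)]
    norms = [Fraction(0)] * n
    for i in range(n):
        for j in range(i):
            # Gamma[j][i] = <b_j*, b_i> / ||b_j*||^2, as in (1.1)
            Gamma[j][i] = sum(a * b for a, b in zip(Bstar[j], B[i])) / norms[j]
            # subtract the component of b_i along b_j*
            Bstar[i] = [x - Gamma[j][i] * y for x, y in zip(Bstar[i], Bstar[j])]
        Gamma[i][i] = Fraction(1)
        norms[i] = sum(x * x for x in Bstar[i])
    return Gamma, Bstar, norms
```

For example, for the basis $b_1 = (1,0)$, $b_2 = (1,1)$ one obtains $\Gamma_{1,2} = 1$, $b_2^* = (0,1)$, and $\|b_1^*\|^2 = \|b_2^*\|^2 = 1$.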
D1.
A basis is called size-reduced if $|\Gamma_{j,i}| \le 1/2$ for $1 \le j < i \le n$. The notion of a size reduced basis goes back to Hermite.
D2.
A basis is called (δ,η)-reduced if $(\delta - \Gamma_{i,i+1}^2)\|b_i^*\|^2 \le \|b_{i+1}^*\|^2$ for $i = 1, \dots, n-1$, $\delta \in (\frac{1}{4}, 1]$, and $|\Gamma_{j,i}| \le \eta$ with $\eta \in [1/2, \sqrt{\delta})$. For $\delta = \frac{3}{4}$ and $|\Gamma_{j,i}| \le 1/2$ it is called 2-reduced because the above inequality becomes $\|b_i^*\|^2 \le 2\|b_{i+1}^*\|^2$. A basis is called δ-LLL reduced if it is size-reduced and δ-reduced. It is simply called LLL reduced if it is size-reduced and 2-reduced. The LLL reduced basis was introduced by Lenstra, Lenstra, and Lovász.
D3.
A basis is called semi-reduced if it is size-reduced and satisfies the weaker conditions $\|b_i^*\|^2 \le 2^n \|b_{i+1}^*\|^2$ for $i = 1, \dots, n-1$.
D4.
A basis is called a Korkine-Zolotarev basis if it is size-reduced and if $\|b_i^*\| = \lambda_1(L_i)$ for $i = 1, \dots, n$, where $L_i$ is the orthogonal projection of L on the orthogonal complement of $\mathrm{span}\{b_1, \dots, b_{i-1}\}$.
The concepts of block reduced and segment reduced bases are defined by dividing a basis into m blocks or segments of k vectors each, i.e., $n = mk$, and then specifying appropriate conditions on the basis vectors within each block and among blocks.
D5.
A basis $b_1, \dots, b_{mk}$ is called a block KZ reduced basis if it is size-reduced and if the projections of all $2k$-blocks $b_{ik+1}, \dots, b_{(i+2)k}$ on the orthogonal complement of $\mathrm{span}\{b_1, \dots, b_{ik}\}$ for $i = 0, \dots, m-2$ are Korkine-Zolotarev reduced.
D6.
A basis $b_1, \dots, b_{mk}$ is called k-segment LLL reduced if the following conditions hold.
C1.
It is size-reduced.
C2.
$(\delta - \Gamma_{i,i+1}^2)\|b_i^*\|^2 \le \|b_{i+1}^*\|^2$ for $i \ne kl$, $l \in \mathbb{Z}$, i.e., vectors within each segment of the basis are δ-reduced, and
C3.
Letting $\alpha := 1/(\delta - \frac{1}{4})$, two successive segments of the basis are connected by the following two conditions.
C3.1.
$D(l) \le (\alpha/\delta)^{k^2} D(l+1)$ for $l = 1, \dots, m-1$.
C3.2.
$\delta^{k^2}\|b_{kl}^*\|^2 \le \alpha\|b_{kl+1}^*\|^2$ for $l = 1, \dots, m-1$.
The case where $k = \Theta(\sqrt{n})$ is of special interest.
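Conditions C1–C3 can be checked mechanically from the Gram-Schmidt data. The following Python sketch is an illustrative checker of ours (not part of any algorithm in the paper); it takes the exponents $k^2$ from C3 as stated, and the parameter names `delta` and `eta` are our assumptions:

```python
def is_k_segment_reduced(Gamma, norms, k, delta=0.99, eta=0.5):
    """Check D6 for a basis with Gram-Schmidt data (Gamma, norms),
    where norms[i] = ||b_{i+1}*||^2, split into segments of size k."""
    n = len(norms)
    assert n % k == 0
    m = n // k
    alpha = 1.0 / (delta - 0.25)
    # C1: size-reduced
    for i in range(n):
        for j in range(i):
            if abs(Gamma[j][i]) > eta:
                return False
    # C2: delta-reduction inside each segment (skip segment boundaries i = kl)
    for i in range(n - 1):
        if (i + 1) % k != 0:
            if (delta - Gamma[i][i + 1] ** 2) * norms[i] > norms[i + 1]:
                return False
    # segment Gramian determinants D(l)
    D = [1.0] * m
    for l in range(m):
        for i in range(l * k, (l + 1) * k):
            D[l] *= norms[i]
    # C3.1 and C3.2: conditions connecting successive segments
    for l in range(m - 1):
        if D[l] > (alpha / delta) ** (k * k) * D[l + 1]:
            return False
        if delta ** (k * k) * norms[l * k + k - 1] > alpha * norms[l * k + k]:
            return False
    return True
```

For instance, the identity basis passes for any segment size, while a basis whose Gram-Schmidt norms drop sharply across a segment boundary violates C3.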

#### 1.2. Discussion on Various Reduced Bases

The ratios $\|b_i\|^2/\lambda_i^2$, $i = 1, \dots, n$, are used to measure the quality of the various reduced bases defined above. We call these approximation ratios. Known bounds on approximation ratios for various reduced bases, known algorithms for generating them, the worst case running times of these algorithms, and the bit-precision used in performing the computations (addition, subtraction, multiplication and division) in these algorithms are summarized in Table 1. The bounds in this table assume $k = \Theta(\sqrt{n})$ and $d = O(n)$. Following [7,8] we use $M_{sc} := \max\{2^n, M_0, d_1, \dots, d_n\}$, where $M_0 := \max_{i=1,\dots,n} \|b_i\|^2$, to measure the complexity of these algorithms. Note that $M_{sc} = 2^{O(n^2)}$ when $\|b_i\| = 2^{O(n)}$.
The work of Lenstra, Lenstra, and Lovász is seminal on finding a reduced lattice basis, and on its implications for the problem of finding successive minima. Their algorithm for finding an LLL reduced basis runs in polynomial time. In particular, for $M_{sc} = 2^{O(n^2)}$ it requires $O(n^5)$ arithmetic operations on $O(n^2)$ bit numbers in the worst case. Since the development of the LLL algorithm, significant effort has been directed towards developing methods for finding an improved quality basis in polynomial time, and for finding a worse quality basis with a better worst case computational complexity. Research has also progressed towards generalizing the LLL algorithm to arbitrary norms [18,19].
Table 1. Summary of various LLL-related Algorithms.

| Algorithm | Lower Bounds on $\Vert b_i\Vert^2/\lambda_i^2$ | Upper Bounds on $\Vert b_i\Vert^2/\lambda_i^2$ | Arithmetic Steps | Precision |
|---|---|---|---|---|
| LLL reduced | $\alpha^{1-i}\delta^n$ | $\alpha^{n-1}\delta^{-n}$ | $O(n^3 \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| LLL reduced | $\alpha^{1-i}\delta^n$ | $\alpha^{n-1}\delta^{-n}$ | $O(n^3 \log_{1/\delta} M_{sc})$ | $O(n + \ln M_0)$ |
| Modular LLL | $\alpha^{1-i}\delta^n$ | $\alpha^{n-1}\delta^{-n}$ | $O(n^2 \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| Semi-reduced | $\alpha^{1-i-2n}$ | $\alpha^{2n-1}$ | $O(n^2 \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| Kannan | $\frac{4}{i+3}$ | $\frac{i+3}{4}$ | $n^{O(n)} \ln M_0$ | $O(n^2 \ln M_0)$ |
| Block KZ [10,15] ¹ | $\frac{4}{i+3}\gamma_{2k}^{-2\frac{i-1}{2k-1}}$ | $\gamma_{2k}^{2\frac{n-i}{2k-1}}\frac{i+3}{4}$ | $O(n^{n/2+o(n)} + n^4 \ln M_0)$ | $O(n \ln M_0)$ |
| Segment LLL | $\alpha^{1-i}\delta^{3n}$ | $\alpha^{n-1}\delta^{-3n}$ | $O(n^2 \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| Mod-Seg LLL | $\alpha^{1-i}\delta^{3n}$ | $\alpha^{n-1}\delta^{-3n}$ | $O(n^{1.5} \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| Mod-Seg LLL FMM | $\alpha^{1-i}\delta^{3n}$ | $\alpha^{n-1}\delta^{-3n}$ | $O(n^{1.382} \log_{1/\delta} M_{sc})$ | $O(\ln M_{sc})$ |
| Nguyen and Stehlé $L^2$ | $\alpha^{1-i}\delta^n$ | $\alpha^{i-1}\delta^{-n}$ | $O(n^2(n + \ln M_0)\log_{1/\delta} M_{sc})$ | $1.58n$ fl |
| Schnorr SLL | $\alpha^{1-i}\delta^{7n}$ | $\alpha^{n-1}\delta^{-7n}$ | $O(n^2 \ln n \log_{1/\delta} M_{sc})$ | $3n + d$ fl |

¹ $\gamma_n$ is the Hermite constant, defined as $\gamma_n := \sup_L \lambda_1(L)^2/(\det L)^{2/n}$, the supremum taken over all n-dimensional lattices L.
The algorithm by Schönhage finds a semi-reduced basis. It requires a factor of $O(n)$ less time than the LLL algorithm. However, the bounds on the approximation ratios for a semi-reduced basis are of significantly lower quality. A better complexity for finding a semi-reduced basis was also proved by Storjohann.
Kannan proposed an algorithm for finding a Korkine-Zolotarev (KZ) basis that runs in $n^{O(n)} \ln M_0$ arithmetic operations on $O(n^2 \ln M_0)$ bit integers. Kannan’s algorithm uses the LLL algorithm as a black box. This bound for finding a KZ basis was improved by Schnorr to $O(n^{n/2+o(n)} + n^4 \ln M_0)$ arithmetic operations using $O(n \ln M_0)$ bit integers. The bound for Schnorr’s algorithm in Table 1 is given for performing a KZ reduction of a block of size $2k$. Schnorr further introduced the notion of a semi block 2k reduced basis, and used this concept to show that an $O(k^{n/k})$-approximate shortest vector can be found in $O(n^2(k^{k/2+o(k)} + n^2)\ln M_0)$ arithmetic operations using $O(n \ln M_0)$ bit integers. This leads to a hierarchy of algorithms for finding the shortest lattice vector and a semi block 2k reduced basis. The complexity in Table 1 is the special case $k = \lfloor \sqrt{2n} \rfloor$.
Koy and Schnorr proposed the concept of a segment reduced basis, and gave an algorithm for finding such a basis. Similar to the semi-reduction algorithm of Schönhage, the segment reduction algorithm works with a subset of the basis vectors at a time. However, it worsens the approximation ratios only slightly, and in a controllable fashion. Moreover, it also achieves an $O(n)$ reduction in the worst case complexity over the LLL algorithm. Since the writing of the original draft of this paper, improvements in the computational complexity of the LLL and segment LLL algorithms have also been achieved by showing that the methods can be modified to perform computations using $O(n)$ bit floating point numbers. In particular, Nguyen and Stehlé rearranged the computations in the Cholesky factorization algorithm and used Babai’s nearest plane algorithm to update the Cholesky factor coefficients, showing that the LLL algorithm can be correctly implemented with $O(n)$ bit floating point precision. By making use of results from numerical analysis on Householder transformations in floating point arithmetic and a rearrangement of the computations in the Gram-Schmidt algorithm, Schnorr gave an improved segment reduction algorithm that performs $O(n^{5+\epsilon})$ bit operations for input bases of length $2^{O(n)}$.

#### 1.3. Paper Contribution and Organization

In this paper we show that Storjohann’s modular arithmetic computation approach can be combined with the segment concept of Koy and Schnorr to develop a modular segment reduction algorithm. The novelty of Storjohann’s approach lies in rearranging the computations in LLL and delaying certain updates, which results in a computational saving by a factor of $O(n)$. The $O(n)$ saving of Koy and Schnorr results from localizing the updates. We show that by combining the strength of the modular arithmetic approach with the segment LLL algorithm, a further saving of $O(n^{0.5})$ is possible in the worst case when the initial integer basis vectors have magnitude $2^{O(n)}$ and $d = O(n)$. We also show that it is possible to further improve this complexity by using fast matrix multiplication.
This paper is organized as follows. In the next section we review the LLL basis reduction algorithm of Lenstra, Lenstra, and Lovász. In addition we explain Storjohann’s basic computational observations in this section. In Section 3 we give Storjohann’s modular LLL reduction algorithm along with its essential results. Additional notation and concepts needed to describe the modular approach are also given in this section. In Section 4 we give the segment basis reduction algorithm. In Section 5 we describe the modular segment reduction algorithm proposed in this paper, and give its worst case complexity result.

## 2. Methods for LLL-Reduced Lattice Bases

#### 2.1. The LLL Basis Reduction Algorithm

The LLL algorithm performs two essential computational steps: (i) size reduction of B by ensuring that $|\Gamma_{j,i}| \le 1/2$, $1 \le j < i \le n$; (ii) swap of two adjacent rows of B, and subsequent restoration of Γ. We now explain these two steps.

#### Size Reduction of B

Let $[\hat{b}_1, \dots, \hat{b}_n] = [b_1, \dots, b_{k-1}, b_k - [\Gamma_{j,k}]b_j, \dots, b_n]$ ($j < k$) be a basis obtained from $b_1, \dots, b_n$. It can be rewritten as $\hat{B} = U^{(j,k)}B$, where $U^{(j,k)} = I - [\Gamma_{j,k}]e_k e_j^t$ is an elementary unimodular matrix. It is easy to see that $\hat{B} = \hat{\Gamma}^t B^*$, where $\hat{\Gamma} = \Gamma - [\Gamma_{j,k}]\Gamma e_j e_k^t$. Note that $B^*$ is unchanged by this operation. The operation results in $|\hat{\Gamma}_{j,k}| \le 1/2$. This computation is called the size reduction of $b_k$ against $b_j$, $j < k$. Note that $\hat{\Gamma}$ is obtained from Γ (i.e., Γ is updated) in $O(n)$ arithmetic operations. After the initial Γ is computed, we can size reduce the entire basis by recursively applying this step in the order $(k,j) = (n,n-1), (n,n-2), \dots, (n,1), (n-1,n-2), \dots, (2,1)$. This is summarized in the methods SizeReduceVector and SizeReduceBasis. The method SizeReduceBasis is presented in a more general setting to allow for size reduction of a limited number of vectors in B. Also note that B need not be updated, since all the information required to reduce B is contained in Γ. The update of B can be stored in a sequence of elementary unimodular matrices or their product. We represent this matrix by U.
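The column update $\hat{\Gamma} = \Gamma - [\Gamma_{j,k}]\Gamma e_j e_k^t$ can be sketched as follows (our illustration, not the paper's SizeReduceVector; 0-based indexing, and the `nearest_int` helper implements the tie rule $[x]$ of Section 1.1):

```python
from fractions import Fraction

def nearest_int(x):
    """[x]: nearest integer to x; a tie is broken toward the candidate
    of smaller magnitude, as defined in Section 1.1."""
    f = Fraction(x)
    q, r = divmod(f.numerator, f.denominator)  # f = q + r/den, 0 <= r < den
    if 2 * r > f.denominator or (2 * r == f.denominator and q < 0):
        return q + 1
    return q

def size_reduce_vector(B, Gamma, k):
    """Size reduce b_k against b_{k-1}, ..., b_1 (columns of Gamma hold
    the coefficients): b_k <- b_k - [Gamma[j][k]] b_j for j = k-1, ..., 0."""
    for j in range(k - 1, -1, -1):
        t = nearest_int(Gamma[j][k])
        if t != 0:
            B[k] = [x - t * y for x, y in zip(B[k], B[j])]
            # column k of Gamma loses t times column j (Gamma e_j e_k^t)
            for i in range(j + 1):
                Gamma[i][k] -= t * Gamma[i][j]
    return B, Gamma
```

After the call, $|\Gamma_{j,k}| \le 1/2$ for all $j < k$, while $B^*$ (not stored here) is unchanged.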
Figure 1. Size Reduction of a Basis Vector.
Figure 2. Size Reduction of a Basis.

#### Swap of Two Adjacent Rows of B

Let $[\hat{b}_1, \dots, \hat{b}_n] = [b_1, \dots, b_k, b_{k-1}, \dots, b_n]$ be a basis obtained from $b_1, \dots, b_n$. It can be rewritten as $\hat{B} = U^{(k-1,k)}B$, where $U^{(k-1,k)}$ is a permutation matrix that permutes the $(k-1)$-th row with the k-th row of B. This operation requires updating $\|b_{k-1}^*\|^2$ and $\|b_k^*\|^2$ of $B^*$ and the coefficients of columns/rows $k-1$ and k of Γ. This can be done by the following recurrence using $\mu := \Gamma_{k-1,k}$ and $\nu := \|b_k^*\|^2 + \mu^2\|b_{k-1}^*\|^2$:
$\Gamma_{k-1,k} \leftarrow \mu\|b_{k-1}^*\|^2/\nu, \quad \|b_k^*\|^2 \leftarrow \|b_{k-1}^*\|^2\|b_k^*\|^2/\nu, \quad \|b_{k-1}^*\|^2 \leftarrow \nu$
We refer to the procedure implementing the above recurrence as Swap(B (or U), $\Gamma, k, k-1$).
Since the absolute values of the coefficients in the $(k-1)$-th and k-th rows of Γ obtained after the swap can become larger than $1/2$, a further size reduction step is performed to ensure that these coefficients are at most $1/2$. Note that while the restoration of Γ resulting from a swap requires $O(n)$ arithmetic operations, the size reduction step requires $O(n^2)$ operations. Hence, the worst case effort resulting from a swap of two adjacent rows is $O(n^2)$.
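The recurrence above covers the three quantities it displays; the remaining entries of rows and columns $k-1$ and k of Γ follow the standard LLL swap formulas. A sketch of ours (0-based indexing; the extra-row updates are the textbook LLL ones, not spelled out in the paper):

```python
from fractions import Fraction

def swap_update(Gamma, norms, k):
    """Update Gamma and norms[i] = ||b_{i+1}*||^2 after exchanging
    rows k-1 and k of B (0-based)."""
    mu = Gamma[k - 1][k]
    nu = norms[k] + mu * mu * norms[k - 1]
    # coefficients of b_j* (j < k-1) in b_{k-1}, b_k simply swap with the rows
    for j in range(k - 1):
        Gamma[j][k - 1], Gamma[j][k] = Gamma[j][k], Gamma[j][k - 1]
    # standard LLL rotation of the coefficients in rows k-1 and k, columns i > k
    for i in range(k + 1, len(norms)):
        a, b = Gamma[k - 1][i], Gamma[k][i]
        Gamma[k - 1][i] = (mu * norms[k - 1] * a + norms[k] * b) / nu
        Gamma[k][i] = a - mu * b
    # the displayed recurrence
    Gamma[k - 1][k] = mu * norms[k - 1] / nu
    norms[k] = norms[k - 1] * norms[k] / nu
    norms[k - 1] = nu
    return Gamma, norms
```

For $b_1 = (0,1)$, $b_2 = (1,1)$ (so $\mu = 1$, $\nu = 2$) the update yields $\Gamma_{1,2} = 1/2$, $\|b_1^*\|^2 = 2$, $\|b_2^*\|^2 = 1/2$, which matches a direct Gram-Schmidt computation on the swapped basis.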
The Lenstra, Lenstra, and Lovász algorithm for finding an LLL-reduced basis is summarized in Figure 3. The number of swaps and the effort needed to restore the size reduced property of B determine the worst case complexity of the LLL algorithm.
Lenstra, Lenstra, and Lovász maintain the size reduced property of B for two reasons. The first is in checking the condition in the IF statement of the LLL algorithm; this allows the algorithm to produce an LLL-reduced basis upon termination. Second, the size reduced property of B is used to bound the size of the intermediate numbers generated in the algorithm, which is necessary to establish its polynomial time complexity.
Figure 3. The LLL Basis Reduction Algorithm.
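Putting the two subroutines together, the main loop of the algorithm in Figure 3 can be sketched as follows. This is a didactic version of ours: it recomputes the Gram-Schmidt data from scratch after every change instead of performing the $O(n)$ updates described above, and it uses Python's `round` for $[x]$, which differs from the stated tie rule only when a coefficient is exactly half-integral:

```python
from fractions import Fraction

def lll_reduce(B, delta=Fraction(3, 4)):
    """delta-LLL reduction in exact rational arithmetic: size reduce b_k,
    then swap b_k, b_{k-1} while (delta - Gamma[k-1][k]^2)||b_{k-1}*||^2
    exceeds ||b_k*||^2, as in the IF condition of Figure 3."""
    def gso(B):
        n = len(B)
        Bs = [[Fraction(x) for x in row] for row in B]
        G = [[Fraction(0)] * n for _ in range(n)]
        N = [Fraction(0)] * n
        for i in range(n):
            for j in range(i):
                G[j][i] = sum(a * b for a, b in zip(Bs[j], B[i])) / N[j]
                Bs[i] = [x - G[j][i] * y for x, y in zip(Bs[i], Bs[j])]
            N[i] = sum(x * x for x in Bs[i])
        return G, N

    n = len(B)
    k = 1
    while k < n:
        G, N = gso(B)
        for j in range(k - 1, -1, -1):          # size reduce b_k
            t = round(G[j][k])
            if t:
                B[k] = [x - t * y for x, y in zip(B[k], B[j])]
                G, N = gso(B)
        if (delta - G[k - 1][k] ** 2) * N[k - 1] > N[k]:
            B[k - 1], B[k] = B[k], B[k - 1]     # swap and step back
            k = max(k - 1, 1)
        else:
            k += 1
    return B
```

On the classical example $b_1 = (1,1,1)$, $b_2 = (-1,0,2)$, $b_3 = (3,5,6)$ (lattice determinant 3), the LLL guarantee $\|b_1\|^2 \le 2^{(n-1)/2}(\det L)^{2/n}$ gives $\|b_1\|^2 \le 4$ for the returned basis.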
Figure 4 rearranges the computations in the LLL algorithm of Figure 3 without changing the algorithm; for the moment we are not concerned with the issue of the size of intermediate numbers. In particular, the algorithm in Figure 4 produces the same basis as the algorithm in Figure 3. In fact, if the computations are performed in infinite precision, then the step indicated by ♠ is not even necessary. If this step is deleted, then the cost of restoring Γ after each swap drops from $O(n^2)$ to $O(n)$ arithmetic operations. Storjohann achieves this while maintaining finite precision, with computations on integers of appropriate length, by using modular arithmetic.
Figure 4. The LLL Basis Reduction Algorithm with Rearranged Computations.

## 3. Storjohann’s Improvements

We now describe Storjohann’s modifications. The LLL algorithm is first described as a fraction free algorithm that performs all computations on integer (not rational) numbers. The modular arithmetic modification that allows one to maintain finite precision is given subsequently.

#### 3.1. The LLL-Reduction with Fraction Free Computations

For the matrix $BB^t$ there exist an integral lower triangular matrix F and an integral upper triangular matrix T such that $T = F(BB^t)$ (see Geddes, Czapor, and Labahn). F and T are called the fraction free factors of $BB^t$. The fraction free factors of a matrix can be computed in $O(n^3)$ arithmetic operations using standard matrix multiplication. It is known that
$T = F(BB^t) = \begin{pmatrix} d_1 & \cdots & T_{i,j} \\ & \ddots & \vdots \\ & & d_n \end{pmatrix}$
where $T_{i,j} = d_i \Gamma_{i,j}$ for $i < j$. Recall that $BB^t$ is positive definite since the row vectors of B are linearly independent. Hence T and F are unique. Also, $F = \mathrm{diag}\{1, d_1, \dots, d_{n-1}\}\Gamma^{-t}$, $T = \mathrm{diag}\{d_1, \dots, d_n\}\Gamma$, $\|b_i^*\|^2 = d_i/d_{i-1}$ with $d_0 = 1$, and $d_1, \dots, d_n$ are integers because $b_1, \dots, b_n$ are in $\mathbb{Z}^d$. Note also that $TF^t = \mathrm{diag}\{d_1, d_1d_2, d_2d_3, \dots, d_{n-1}d_n\}$.
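The factor T can be computed entirely over the integers by Bareiss (fraction-free Gaussian) elimination on $BB^t$; the diagonal of the result carries the Gramian determinants $d_1, \dots, d_n$. A sketch of ours (Storjohann's fast matrix multiplication algorithm, referenced below, is a different and faster method):

```python
def fraction_free_T(B):
    """Fraction-free factor T of B B^t via Bareiss elimination.
    All intermediate values stay integral; T[i][i] = d_{i+1} (0-based)."""
    n = len(B)
    G = [[sum(a * b for a, b in zip(B[i], B[j])) for j in range(n)]
         for i in range(n)]
    T = [row[:] for row in G]
    prev = 1  # previous pivot d_{k-1}, with d_0 = 1
    for k in range(n):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update: the division by the previous pivot is exact
                T[i][j] = (T[k][k] * T[i][j] - T[i][k] * T[k][j]) // prev
            T[i][k] = 0  # eliminated entry below the diagonal
        prev = T[k][k]
    return T
```

For the basis $b_1 = (1,1,1)$, $b_2 = (-1,0,2)$, $b_3 = (3,5,6)$, the Gramians are $d_1 = 3$, $d_2 = 14$, $d_3 = 9$ (note $d_3 = (\det B)^2 = 9$), and the off-diagonal entries satisfy $T_{i,j} = d_i\Gamma_{i,j}$.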
Storjohann gave a fast matrix multiplication algorithm for computing F and T. It requires $O(n^\theta \ln(n)(\ln M_{sc})^{1+\epsilon})$ bit operations on integers of bit length $O(\ln M_{sc})$, where $\theta < 2.376$ and ϵ is a positive constant when the fast matrix multiplication algorithm of Coppersmith and Winograd is used; $\theta = 3$ and $\epsilon = 1$ when standard matrix multiplication is used.
In Figure 5 we give Storjohann’s rearrangement of the computations of Figure 4 using fraction free computation. The ModifiedLLL algorithm performs two types of unimodular operations: (i) FFReduce, subtracting a multiple of a row of B from another row of B, and (ii) FFSwap, swapping a row of B with an adjacent row of B. The ModifiedLLL algorithm works by recording the unimodular row operations on B in a unimodular matrix U, initially set to the identity matrix, and updating the entries of T. There is no need to update B or $B^*$ in the algorithm, except in a post processing step; it is sufficient to update the matrices U and T during the algorithm’s iterations. The fraction free updates of U and T corresponding to these unimodular operations are given in Figure 6 and Figure 7, respectively. Note that one execution of FFReduce or FFSwap is performed in $O(n)$ arithmetic operations.
Figure 5. Modified LLL Basis Reduction Algorithm.
Figure 6. Fraction Free Subtract Subroutine.
Figure 7. Fraction Free Swap Subroutine.
The LLL and ModifiedLLL algorithms use $\Delta := \prod_{i=1}^{n-1} d_i$ to measure progress. The FFSwap step reduces Δ by a factor of δ. This is because when $b_k$ and $b_{k-1}$ are swapped, $\|b_{k-1}^*\|^2\|b_k^*\|^2$ remains constant, and the new value of $\|b_{k-1}^*\|^2$ is reduced by at least a factor of δ. As a consequence $d_{k-1}$ is reduced by a factor of δ, while all other $d_i$ do not change. The value of Δ is unchanged in the FFReduce step because $B^*$ does not change in this step. Since $1 \le \Delta \le M_{sc}^{n-1}$, Case 1 in the ModifiedLLL algorithm occurs only $O(n \log_{1/\delta} M_{sc})$ times. Hence this part of the algorithm is executed in $O(n^2 \log_{1/\delta} M_{sc})$ arithmetic operations. Case 2 of the algorithm can also occur at most $O(n \log_{1/\delta} M_{sc})$ times, each occurrence requiring $O(n^2)$ arithmetic operations. Hence, this part of the algorithm is executed in $O(n^3 \log_{1/\delta} M_{sc})$ arithmetic operations. Finally, a δ-LLL reduced basis is generated as $UB$, which is computed in $O(n^2 d)$ operations under standard matrix multiplication, and in $O(n^{1.376} d)$ using the algorithm of Coppersmith and Winograd [14,22]. Lenstra, Lenstra, and Lovász showed that the bit length of the numbers on which the arithmetic operations are performed is bounded by $O(\log_2 M_{sc})$. This gives the complexity result in Table 1, where $d = O(n)$ for simplicity.
The following lemma gives bounds on the size of the intermediate lattice bases generated during the LLL and ModifiedLLL algorithms. This property is used when the computations are performed with modular arithmetic.
Lemma 1.
Let B be an input basis to the LLL and ModifiedLLL algorithms. The quantities $\max_i\{\|b_i^*\|\}$ and $\max_i\{d_i\}$ are non-increasing in the LLL and ModifiedLLL algorithms. Furthermore, upon termination
$\|b_i\|^2 \le nM_0 \quad \text{for } 1 \le i \le n$
Proof:
Recall that size reduction/subtract does not change $B^*$; consequently, for all i, $\|b_i^*\|$ is unchanged in this step. Swapping $b_i$ and $b_{i-1}$ decreases $\|b_{i-1}^*\|$ by at least a factor of δ, and the updated $\|b_i^*\|$ is bounded by the old $\|b_{i-1}^*\|$. Hence the non-increasing property is established. We have $\|b_i\|^2 = \|b_i^*\|^2 + \sum_{j=1}^{i-1}\Gamma_{j,i}^2\|b_j^*\|^2 \le nM_0$, since $\|b_i^*\|^2 \le \|b_i\|^2 \le M_0$ in the beginning and, by the non-increasing property, $\|b_j^*\|^2 \le M_0$ throughout the LLL and ModifiedLLL algorithms, while $|\Gamma_{j,i}| \le 1/2$ for a size reduced basis. The bounds therefore hold at termination. ☐

#### 3.2. The Modified LLL Algorithm with Modular Arithmetic

Storjohann uses modular arithmetic to keep the intermediate numbers bounded during the algorithm’s iterations. Given an integer a and an integer $M > 0$, we write $a \pmod M$ for the unique integer r congruent to a modulo M in the symmetric range, that is, with $-\lfloor(M-1)/2\rfloor \le r \le \lfloor M/2 \rfloor$. Similarly, $U \pmod M$ stands for the same operation applied to all entries of the matrix U.
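The symmetric-range representative is easy to compute from the usual non-negative remainder; a small Python sketch (our illustration):

```python
def symmod(a, M):
    """a (mod M) in the symmetric range -floor((M-1)/2) <= r <= floor(M/2)."""
    r = a % M          # Python guarantees 0 <= r < M for M > 0
    if r > M // 2:
        r -= M         # shift the upper half of [0, M) down by M
    return r
```

For example, with $M = 5$ the representatives are $-2, \dots, 2$, and with $M = 4$ they are $-1, \dots, 2$.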
The modular basis reduction algorithm of Storjohann  is given in Figure 8 and Figure 9. Its worst case computational complexity is given in Table 1. The notable difference of this algorithm from the ModifiedLLL algorithm is in the modular arithmetic operation that is performed in the methods ModReduce and ModSwap.
Let $M = 2\lceil(nM_0)^{1/2}\rceil + 1$, so that by Lemma 1 the entries of the reduced basis matrix upon termination of the ModifiedLLL algorithm are bounded in magnitude by $(M-1)/2$. The modular approach hinges on the observation that $UB \pmod M = \bar{U}B \pmod M$, where $\bar{U} = U \pmod M$. Note that in the “infinite” precision version of the ModifiedLLL algorithm, where the ♠ step is not performed, one allows U to grow. In the modular arithmetic version, however, the elements of U and T remain bounded.
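The observation $UB \pmod M = \bar{U}B \pmod M$ is elementary: each entry of $UB$ is an integer combination of entries of U, so reducing U modulo M changes the product only by multiples of M. A small numerical sketch (the particular values of U, B, and M below are arbitrary illustrations of ours):

```python
def matmul(A, B):
    """Plain integer matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

U = [[1000003, -999999], [123456, 789013]]   # "grown" transformation matrix
B = [[2, 1], [1, 3]]                         # lattice basis rows
M = 11
Ubar = [[x % M for x in row] for row in U]   # any representatives mod M work

left = matmul(U, B)
right = matmul(Ubar, B)
# every entry of U B agrees with the corresponding entry of Ubar B modulo M
congruent = all((l - r) % M == 0
                for lrow, rrow in zip(left, right)
                for l, r in zip(lrow, rrow))
```

The products themselves differ wildly in magnitude, but their residues modulo M coincide, which is exactly what lets the algorithm keep U reduced.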
Figure 8. The Modular LLL Basis Reduction Algorithm.
Figure 9. ModSubtract and ModSwap subroutines.
We have shown above how to bound the entries of U by $M = O(M_0)$ during the course of the algorithm. Lemma 1 already bounds the diagonal entries $d_i$ of T throughout the algorithm. The following lemma gives a way to keep the off-diagonal entries of T bounded.
Lemma 2.
Let T be the matrix of (3.5), M a positive integer, and i and j indices with $1 \le i < j \le n$. There exists a unit upper triangular integral matrix V such that $TV$ is identical to T except in the (i,j)-th entry, which is reduced modulo $d_i d_{i-1} M$. Furthermore, V can be chosen so that $\bar{V} = V \pmod M$ is the identity matrix.
Storjohann constructed the matrix V in Lemma 2 as follows. Let $V_0$ be the $n \times n$ strictly upper triangular matrix whose column j equals column i of $F^t$ and whose other entries are zero, let $q = [T_{i,j}/(d_i d_{i-1} M)]$, and take $V = I - qMV_0$. Note that $V^tB$ is also a basis for L. Since the matrix $V^tB$ is not calculated, the corresponding operation should be recorded in U. However, U remains unchanged, because $U = \bar{V}^t U \pmod M$ and $\bar{V} = I_n$. The entries of T corresponding to this row transformation on B are updated by multiplying T with V, which has the desired effect of reducing $T_{i,j}$ modulo $d_i d_{i-1} M$. This modular reduction is performed in the ModReduce and ModSwap calculations. We remark that because of the above operation the intermediate lattice bases B that correspond to the matrix T may no longer be polynomially bounded in the size of the starting B; however, this is not a concern because an intermediate B is never recorded.

## 4. The Segment LLL Reduction of Lattice Bases

Recently Koy and Schnorr introduced the concept of a segment LLL reduced basis (see Definition D6), and gave an algorithm for finding such a basis. The segment LLL reduced basis satisfies a slightly weaker condition; however, it is computed by Koy and Schnorr in a factor of $O(n)$ fewer arithmetic operations. The algorithm of Koy and Schnorr works on two segments of B, i.e., $[B_{l-1}, B_l] = [b_{k(l-1)+1}, \dots, b_{k(l+1)}]$, at a time. This algorithm is outlined in Figure 10. The work in the SegmentLLL algorithm comes from the calls to the subroutine Loc-LLL(l) given in Figure 11. Subroutine Loc-LLL(l) performs a local LLL basis reduction on the segment pair $[B_{l-1}, B_l]$ and records the operations in a unimodular matrix $U_l \in \mathbb{Z}^{2k \times 2k}$, as explained below.
Figure 10. The Segment LLL Basis Reduction Algorithm.
The Local-LLL reduction (subroutine Loc-LLL(l)) works on $\hat{B} := [B_{l-1}, B_l]^t$ and Γ. The matrix Γ in (4.6) is partitioned around the current pair of segments, which together contain $2k$ basis vectors:
$\Gamma = \begin{pmatrix} 1 & \cdots & \Gamma_{1,k(l-1)} & \Gamma_{1,k(l-1)+1} & \cdots & \Gamma_{1,k(l+1)} & \Gamma_{1,k(l+1)+1} & \cdots & \Gamma_{1,n} \\ & \ddots & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ & & 1 & \Gamma_{k(l-1),k(l-1)+1} & \cdots & \Gamma_{k(l-1),k(l+1)} & \Gamma_{k(l-1),k(l+1)+1} & \cdots & \Gamma_{k(l-1),n} \\ & & & 1 & \cdots & \Gamma_{k(l-1)+1,k(l+1)} & \Gamma_{k(l-1)+1,k(l+1)+1} & \cdots & \Gamma_{k(l-1)+1,n} \\ & & & & \ddots & \vdots & \vdots & & \vdots \\ & & & & & 1 & \Gamma_{k(l+1),k(l+1)+1} & \cdots & \Gamma_{k(l+1),n} \\ & & & & & & 1 & \cdots & \Gamma_{k(l+1)+1,n} \\ & & & & & & & \ddots & \vdots \\ & & & & & & & & 1 \end{pmatrix}$
We denote by $\Gamma_A$ the block of rows $1, \dots, k(l-1)$ and columns $k(l-1)+1, \dots, k(l+1)$, by $\Gamma_C$ the $2k \times 2k$ unit upper triangular diagonal block with rows and columns $k(l-1)+1, \dots, k(l+1)$, and by $\Gamma_E$ the block of rows $k(l-1)+1, \dots, k(l+1)$ and columns $k(l+1)+1, \dots, n$.
Figure 11. The Local LLL Iterations.
Figure 11. The Local LLL Iterations.
When working in Loc-LLL(l), all LLL swaps and size reductions are restricted to the input $2k$ segment. Only the matrix $\Gamma_C$ is updated while performing the segment LLL swaps and size reductions. The unimodular operations updating $\Gamma_A$, and the operations required to update $\Gamma_E$, are stored in the matrix $U_l$. The updates of $\Gamma_A$ and $\Gamma_E$ are performed only after it is no longer possible to perform an LLL swap based on the information in $\Gamma_C$. $\Gamma_A$ and $\Gamma_E$ are updated as follows:
$\Gamma_A \leftarrow \Gamma_A U_l, \quad \Gamma_E \leftarrow (\Gamma_C)_{end} U_l^{-1} (\Gamma_C)_{beg}^{-1} \Gamma_E$
Here $(\Gamma_C)_{beg}$ and $(\Gamma_C)_{end}$ are the $\Gamma_C$ matrices recorded at the beginning and end of the Local LLL-reduction step in Loc-LLL(l). Since only the matrix $\Gamma_C$ is updated during the LLL unimodular operations in this segment, the corresponding updates of $\Gamma_C$ and $U_l$ are performed using $O(k^2)$ arithmetic operations per swap. The total number of swaps in all calls to Loc-LLL(l) is bounded by $O(n \log_{1/\delta} M_{sc})$; hence the total work in the Local LLL-reduction step is bounded by $O(nk^2 \log_{1/\delta} M_{sc})$ arithmetic operations. The cost of updating $\Gamma_A$ and $\Gamma_E$, and of performing the Segment Size Reduction step, in each execution of Loc-LLL(l) is $O(ndk)$ arithmetic operations.
Let decr denote the number of times that the condition
$D(l-1) > (\alpha/\delta)^{k^2} D(l) \quad \text{or} \quad \delta^{k^2}\|b_{k(l-1)}^*\|^2 > \alpha\|b_{k(l-1)+1}^*\|^2$
holds and l is decreased. The number of times Loc-LLL(l) is called is $m - 1 + 2 \cdot decr$. Koy and Schnorr showed that $decr \le \frac{2(m-1)}{k^2}\log_{1/\delta} M_{sc} < \frac{2n}{k^3}\log_{1/\delta} M_{sc}$. Hence the total work in the Segment Size Reduction step of Loc-LLL(l) is $O(\frac{n^3}{k^2}\log_{1/\delta} M_{sc})$ arithmetic operations when $d = O(n)$. This leads to the computational complexity result in Table 1 when $k = \sqrt{n}$ and $d = O(n)$. We have omitted the details of the bounds on the length of the elements of $U_l$ and Γ (see Koy and Schnorr for details).

## 5. The Segment LLL Reduction with Modular Arithmetic

#### 5.1. Algorithm and Its Complexity

We are now in a position to give our segment LLL reduction algorithm with modular arithmetic. It finds a segment LLL reduced basis with an $O(n^{0.5})$ improvement in computational complexity when $M_{sc} = 2^{O(n^2)}$. The algorithm is given in Figure 12. The major difference between the ModSegmentLLL and SegmentLLL algorithms is in the ModLocSegmentLLL step presented in Figure 13. In this subroutine we perform updates using modular arithmetic while working with $\hat{B}$. The subroutines ModReduce and ModSwap require $O(k)$ operations, in comparison to the $O(k^2)$ worst case operations in the algorithm of Koy and Schnorr described in the previous section.
Figure 12. The Modular Segment LLL Basis Reduction.
Figure 13. The Modular Local Segment LLL Basis Reduction.
Figure 14. Size Reduction of a Segment Using Modular Arithmetic.
We now explain the steps in ModLocSegmentLLL. While working with the matrix $\hat{B}$, let us partition T into blocks A, C, and E,
$T \equiv \begin{pmatrix} \ast & A & \ast \\ & C & E \\ & & \ast \end{pmatrix}$
similar to the partitioning of Γ in (4.6). We perform two types of unimodular operations on $\hat{B}$ in the ModLocSegmentLLL algorithm. The Preprocess C and Postprocess C steps are performed to ensure that the lattice basis vectors corresponding to C are size reduced before and after performing the Local δ-Reduction step. This allows us to bound the size of the matrix Q needed to update E after completing the Local δ-Reduction step.
The calls to ModReduce and ModSwap are as in the case of the ModularLLL algorithm, with the important difference that they are now performed on a segment. ModReduce subtracts a multiple of a row (column) from another row (column). This unimodular operation is recorded by updating $U_l$ modulo β. The constant β used in the ModSegmentLLL algorithm is taken to be a multiple of M. A choice of β is specified below in Lemma 4. This larger modulus is used in the intermediate computations because during the algorithm we do not have a bound on the elements of $\hat{B}$. However, the fact that the initial and terminating $\hat{B}$ are size reduced ensures that a proper bound on β is still possible. The subroutine ModSwap performs all necessary computations to update C and $U_l$ when two rows of $\hat{B}$ are swapped. The elements of C are recorded modulo $d_i d_{i-1} \beta$. As in the case of Storjohann's modification of the LLL algorithm, there is no need to record the modulo operations in $U_l$.
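The row-operation-plus-recording pattern described above can be sketched as follows. This is an illustrative fragment, not the authors' exact ModReduce; the function name, array layout, and toy sizes are our own assumptions:

```python
import numpy as np

def mod_reduce(B, U, i, j, q, beta):
    """Subtract q times row j from row i of the segment basis B, and record
    the same unimodular row operation in the transform U modulo beta.
    Touching a single row costs O(row length) arithmetic operations,
    which is the source of the O(k) cost per call noted in the text."""
    B[i] -= q * B[j]                    # unimodular operation on the basis
    U[i] = (U[i] - q * U[j]) % beta     # operation recorded modulo beta
    return B, U

B = np.array([[4, 2], [1, 1]])
U = np.eye(2, dtype=int)
mod_reduce(B, U, 0, 1, q=2, beta=100)
```

Because only one row of each matrix is touched, the recorded transform stays small modulo β while the basis row is reduced exactly.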
The matrix $U_l$ is further updated in the Postprocess C step by incorporating all the unimodular transformations recorded in W while working on the size reduction of the basis vectors corresponding to C. Here the elements of $U_l$ are recorded modulo β. Note that while $U_l$ is recorded modulo β, U is recorded modulo M. Updating A and U is straightforward. In Section 5.2 we show that the computations involving $U_l$ and A can be performed with integers of $O(\ln M_{sc})$ bit length. To this end we use the results from Storjohann  for his analysis of the semi-reduction algorithm.
The total computational effort in Steps 1, 3, 4, and 5 of the ModLocSegmentLLL algorithm is $O(nk^2)$ arithmetic operations. Following  and [14, Theorem 18], there are at most $n(\log_{1/\delta} M_{sc})$ swaps in all the executions of the ModLocSegmentLLL algorithm, each swap requiring $O(k)$ arithmetic operations. Hence, we improve the total computational effort in Step 2 [Modular Segment Iterations] of the ModSegmentLLL algorithm to $O(nk \log_{1/\delta} M_{sc})$ arithmetic operations. Since there are a total of $O(\frac{n}{k^3} \log_{1/\delta} M_{sc})$ calls to the ModLocSegmentLLL algorithm, we are led to the following theorem.
Theorem 1
Using standard matrix multiplication, for $k = O(\sqrt{n})$ and $d = O(n)$, Step 2 of Algorithm ModSegmentLLL performs $O(n^{1.5} \log_{1/\delta} M_{sc})$ arithmetic operations. We can perform these computations using integers of bit length $O(\ln M_{sc})$.
The proof of the first statement in Theorem 1 is already complete. The second statement on the bit length needed for computations is proved in Section 5.2. We note that Step 1 of the ModSegmentLLL algorithm computes F and T, and Step 3 performs a global size reduction. Step 1 is performed in $O(n^3)$ arithmetic operations on integers of bit length $O(\ln M_{sc})$. Step 3 is also performed in $O(n^3)$ arithmetic operations on integers of bit length $O(\ln M_{sc})$. Therefore, we have the following corollary.
Corollary 1
For a basis $b_1, \ldots, b_n \in \mathbb{Z}^d$ and $d = O(n)$, the running time of Algorithm ModSegmentLLL is bounded by $O(n^{1.5} \log_{1/\delta} M_{sc})$ arithmetic operations using integers of bit length $O(\ln M_{sc})$.
The bound in Corollary 1 is $n^{0.5}$ better than the bound in Algorithm SegmentLLL when $M_{sc} = 2^{O(n^2)}$, which is possible in the worst case. Section 5.2 is devoted to showing the correctness of Algorithm ModSegmentLLL and proving Theorem 1.
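As a back-of-envelope check of the counts behind Theorem 1, the two contributions to Step 2 can be tallied. This is a sketch with our own function name; constants and lower-order terms are dropped:

```python
def step2_ops(n, k, logM):
    """Rough operation count for Step 2 of ModSegmentLLL: about n*logM
    swaps at O(k) each, plus about n/k**3 * logM calls to
    ModLocSegmentLLL at O(n*k**2) each. Integer arithmetic keeps the
    toy tally exact (floor division stands in for the fraction)."""
    swap_work = n * logM * k
    call_work = (n * logM * n * k ** 2) // k ** 3
    return swap_work + call_work

# With k near sqrt(n), both terms are of order n**1.5 * logM,
# which is the bound stated in Theorem 1.
```

Taking n = 10000 and k = 100 = sqrt(n), each term contributes n^{1.5} = 10^6 units of work per unit of logM.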

#### 5.2. Correctness of the ModSegmentLLL Algorithm

The following lemma allows us to compute U modulo M, and T modulo $d_i d_{i-1} M$, during the ModSegmentLLL algorithm.
Lemma 3
The bases maintained by the SegmentLLL and ModSegmentLLL algorithms satisfy the upper bounds
$\|b_i\|^2 \le n M_0 \ \text{for } 1 \le i \le n, \quad \text{and} \quad \|b_i^*\| \le M_0$
throughout the algorithm, and in particular upon termination.
Proof:
Follow the proof of Lemma 2, observing that size reduction and modular reduction of the elements in T leave $\|b_i^*\|$ unchanged. ☐
The following lemma of Schönhage allows us to give a proper value of β, which is used to reduce the entries of $U_l$ and C modulo β. We now show that $U_l$, A, C, and E are correctly updated using integers of $O(\ln M_{sc})$ bits.
Lemma 4
 Let $\hat{B}_{beg}, \hat{B}_{end} \in \mathbb{Z}^{2k \times d}$ be size-reduced bases. The unimodular matrix $\hat{U}$ that transforms $\hat{B}_{beg}$ to $\hat{B}_{end}$ satisfies
$\|\hat{U}\|_1 \le (2k)^2 (3/2)^{2k-1} M_{sc} \le M_{sc}^2$
where $\|\hat{U}\|_1 = \max_j \{\|\hat{U}_j^t\|_1\}$ and $\hat{U}_j^t$ is the j-th column of $\hat{U}$.
Lemma 4 allows us to take $\beta = q_\beta M$, where $q_\beta = \lfloor (2\lceil M_{sc}^2 \rceil + 1)/M \rfloor + 1$, while reducing the entries of $U_l$ modulo β. Note that taking β as a multiple of M is important because $U_l$ is used to update U, whose elements are computed modulo M.
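Reading the bracket as the floor function (our assumption about the intended rounding), the modulus choice can be sketched as follows; the function name is ours:

```python
from math import ceil

def choose_beta(M, Msc):
    """beta = q_beta * M with q_beta = floor((2*ceil(Msc**2) + 1)/M) + 1.
    By construction beta exceeds 2*Msc**2 and is a multiple of M, so
    reductions of U_l modulo beta stay compatible with updating U modulo M."""
    q_beta = (2 * ceil(Msc ** 2) + 1) // M + 1
    return q_beta * M

beta = choose_beta(M=10, Msc=7)   # a multiple of 10 exceeding 2 * 49
```

The "+1" guarantees strict excess over $2 M_{sc}^2$ even when the division is exact.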

#### Updating E

Let R be the $2k \times 2k$ diagonal matrix with i-th diagonal entry $d_{k(l-1)+i} d_{k(l-1)+i-1}$ for $1 \le i \le 2k$, and H the $2k \times 2k$ diagonal matrix with $H_{1,1} = (C_{new})_{1,1} d_{k(l-1)}$ and $H_{i,i} = (C_{new})_{i,i} (C_{new})_{i-1,i-1}$ for $2 \le i \le 2k$, where $d_{k(l-1)+i}$, $1 \le i \le 2k$, are the diagonal entries of $C_{beg}$. Following Storjohann's development of his algorithm for finding a semi-reduced basis in [14, Equation (29)], we can show that the matrix E is updated by
$\tilde{E} = QE, \quad \text{where} \quad Q = \frac{1}{d_{k(l-1)}} H (C_{new}^{-1})^t \, U_l \, d_{k(l-1)} C_{beg}^t R^{-1}$
These computations are performed in a specific order to maintain integrality of operations: (i) backtrack fraction-free Gaussian elimination by pre-multiplying E by $d_{k(l-1)} C_{beg}^t R^{-1}$; (ii) pre-multiply by the basis modular transformation matrix $U_l$; (iii) forwardtrack fraction-free Gaussian elimination by pre-multiplying the result from (ii) by $(1/d_{k(l-1)}) H (C_{new}^{-1})^t$.
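The right-to-left evaluation order can be sketched as below. This is an illustrative fragment with our own names (`update_E`, `d_prev`), and it uses floating point for brevity, whereas the paper performs each stage exactly over the integers:

```python
import numpy as np

def update_E(E, U_l, C_beg, C_new, R, H, d_prev):
    """Apply Q = (1/d_prev) H (C_new^{-1})^t U_l d_prev C_beg^t R^{-1}
    to E one factor at a time, right to left, mirroring steps (i)-(iii)."""
    X = d_prev * (C_beg.T @ np.linalg.solve(R, E))   # (i) backtrack
    X = U_l @ X                                      # (ii) unimodular transform
    X = (H @ np.linalg.solve(C_new.T, X)) / d_prev   # (iii) forwardtrack
    return X
```

Evaluating the product in this order means Q itself is never formed, and in the exact integer version every intermediate matrix remains fraction free.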
To establish a bound on the magnitudes of the integers in $\tilde{E}$, we need to bound $\|C_{new}^{-1}\|_\infty$. Let S be the $2k \times 2k$ diagonal matrix with i-th diagonal entry $(C_{new})_{i,i}$ for $1 \le i \le 2k$, so that $S^{-1} C_{new}$ is unit upper triangular with all off-diagonal entries at most $1/2$ in magnitude (recall that the basis vectors corresponding to $C_{new}$ are size-reduced). The entries in $(S^{-1} C_{new})^{-1}$ are (up to sign) minors of $S^{-1} C_{new}$, each of which is bounded by $(2k)^k$ using Hadamard's inequality. It follows that the entries in $C_{new}^{-1} = (S^{-1} C_{new})^{-1} S^{-1}$ are bounded by $(2k)^k$ because $d_i \ge 1$, $1 \le i \le n$. We get
$\|\tilde{E}\|_\infty = \|QE\|_\infty \le (2k)^3 \|H\|_\infty \|C_{new}^{-1}\|_\infty \|U_l\|_\infty \|C_{beg}^t\|_\infty \beta$
$\le (2k)^3 M_{sc}^2 \, (2k)^k \, (2k)^2 (3/2)^{2k-1} M_{sc} \, M_{sc} \, \beta$
$\le 2 (2k)^{k+5} (3/2)^{2k-1} M_{sc}^6$
The above inequality shows that the entries of $\tilde{E}$ have bit length $O(\ln M_{sc} + k \ln k)$. Furthermore, if $\tilde{E}$ is computed by multiplying E with the factors of Q from right to left, then all intermediate matrices are fraction free, and the computations are performed on integers of size $O(\ln M_{sc})$. This completes the proof of the correctness of the algorithm.
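The $(2k)^k$ bound on $\|C_{new}^{-1}\|_\infty$ used above can be spot-checked numerically. This is a sketch of ours; the random-trial setup and function name are not from the paper:

```python
import numpy as np

def inverse_entry_bound_holds(k, trials=20, seed=0):
    """For random 2k x 2k unit upper triangular V with off-diagonal
    magnitudes <= 1/2 (the size-reduced case), check that every entry
    of V^{-1} is at most (2k)**k in magnitude."""
    rng = np.random.default_rng(seed)
    n = 2 * k
    for _ in range(trials):
        V = np.eye(n)
        rows, cols = np.triu_indices(n, 1)           # strict upper triangle
        V[rows, cols] = rng.uniform(-0.5, 0.5, size=rows.size)
        if np.abs(np.linalg.inv(V)).max() > (2 * k) ** k:
            return False
    return True
```

In the worst case the inverse entries grow like $(3/2)^{2k-2}$, which is well inside $(2k)^k$ for every $k \ge 1$, so random trials comfortably confirm the bound.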

#### 5.3. The Modular Segment LLL using Fast Matrix Multiplication

When fast matrix multiplication is used, the complexity of Step 2 of the ModSegmentLLL algorithm is bounded by the following theorem.
Theorem 2
If $d = O(n)$ and $k = \lceil n^{1/(5-\theta)} \rceil$, then using fast matrix multiplication Step 2 of the ModSegmentLLL algorithm can be performed in $O(n^{1 + \frac{1}{5-\theta}} (\log_{1/\delta} M_{sc}))$ operations using integers of bit length $O(\log_2 M_{sc})$.
Proof:
As discussed above, there are at most $n(\log_{1/\delta} M_{sc})$ LLL-exchanges, each requiring $O(k)$ arithmetic operations for a local δ-reduction. According to [20, Theorem 3], there are $decr \le \frac{2n}{k^3} \log_{1/\delta} M_{sc}$ calls of the ModLocSegmentLLL algorithm. Each call requires $O(nk^{\theta-1} + nk + k^2 \ln k)$ arithmetic operations for updating matrices A and T. The complexity of Step 2 of the ModSegmentLLL algorithm is bounded by
$O(nk(\log_{1/\delta} M_{sc})) + O\!\left(\frac{2n}{k^3}(\log_{1/\delta} M_{sc})(nk^{\theta-1} + nk + k^2 \log_2 k)\right) \le O(nk(\log_{1/\delta} M_{sc})) + O\!\left(\frac{n^2}{k^{4-\theta}}(\log_{1/\delta} M_{sc})\right) = O(n^{1+\frac{1}{5-\theta}}(\log_{1/\delta} M_{sc}))$
when $k = \lceil n^{1/(5-\theta)} \rceil$. ☐
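The exponent arithmetic in the last step of the proof can be verified directly; this is a small check of our own:

```python
def step2_exponent(theta):
    """The exponent of n in Theorem 2: with k = n**(1/(5-theta)), the
    dominant term n**2 / k**(4-theta) has exponent 2 - (4-theta)/(5-theta),
    which simplifies to 1 + 1/(5-theta)."""
    return 1 + 1 / (5 - theta)

# theta = 2.376 (Coppersmith-Winograd) gives roughly 1.381, i.e., the
# O(n**1.382) bound quoted after the theorem; theta = 3 (standard
# multiplication) recovers the n**1.5 exponent of Theorem 1.
```

The consistency with Theorem 1 at $\theta = 3$ is a useful sanity check on the choice $k = \lceil n^{1/(5-\theta)} \rceil$.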
Storjohann  showed that the fraction-free Gaussian elimination and Step 3 of the algorithm can be performed in $O(n^\theta \log n)$ arithmetic operations for $\theta = 2.376$ with integers of bit length $O(\ln M_{sc})$. The bound in Theorem 2 is $O(n^{1.382}(\log_{1/\delta} M_{sc}))$ where $M_{sc} > 2^n$. Hence Step 2 of Algorithm ModSegmentLLL dominates the overall effort, giving the following corollary.
Corollary 2
For $d = O(n)$ and $k = \lceil n^{1/(5-\theta)} \rceil$, the running time of Algorithm ModSegmentLLL is bounded by $O(n^{1.382} \log_{1/\delta} M_{sc})$ operations using integers of bit length $O(\ln M_{sc})$ when fast matrix multiplication is used.

## 6. Concluding Remarks

Schnorr [17, Section 6] remarked that it is possible to further improve the running time of the iterated subsegment algorithm in  using modular arithmetic. This is possible since the iterated subsegment algorithm runs in $O(n^3 \ln n)$ operations by recursively transporting local transforms from a segment level to the next higher segment. Note that by comparison the basic segment-LLL algorithm analyzed in this paper requires $O(n^{3.5})$ operations while using standard arithmetic, and $O(n^{3 + \frac{1}{5-\theta}})$ operations while using fast matrix multiplication. In all cases the modular arithmetic computations are performed on numbers of length $O(n^2)$. Unfortunately the worst-case $O(n^2)$ bit length required for the modular arithmetic is large, and floating point arithmetic is more practical. Numerical experience using implementations based on floating point arithmetic was reported in  for the LLL algorithm and in  for the segment-LLL reduction algorithm. The possibility of combining modular arithmetic with floating point computations remains a topic of future research.

## Acknowledgement

The research of both authors was funded by NSF grants DMI-0200151 and DMI-0522765, and ONR grants N00014-01-1-0048/P00002 and N00014-09-10518.

## References

1. Cassels, J.W.S. An Introduction to the Geometry of Numbers; Springer-Verlag: Berlin, Germany, 1971. [Google Scholar]
2. Dwork, C. Lattices and their application to cryptography. Available online: http://www.dim.uchile.cl/~mkiwi/topicos/00/dwork-lattice-lectures.ps (accessed on 15 June 2010).
3. Lenstra, H.W. Integer programming with a fixed number of variables. Math. Operat. Res. 1983, 8, 538–548. [Google Scholar] [CrossRef]
4. Ajtai, M. The shortest vector problem in L2 is NP-hard for randomized reductions. In Proceedings of the 30th ACM Symposium on Theory of Computing, Dallas, TX, USA, May 1998; pp. 10–19.
5. Micciancio, D. The shortest vector in a lattice is hard to approximate to within some constant. SIAM J. Comput. 2001, 30, 2008–2035. [Google Scholar] [CrossRef]
6. van Emde Boas, P. Another NP-complete partition problem and the complexity of computing short vectors in lattices; Technical report MI-UvA-81-04; University of Amsterdam: Amsterdam, The Netherlands, 1981. [Google Scholar]
7. Lenstra, A.K.; Lenstra, H.W.; Lovász, L. Factoring polynomials with rational coefficients. Math. Ann. 1982, 261, 515–534. [Google Scholar] [CrossRef]
8. Schönhage, A. Factorization of univariate integer polynomials by diophantine approximation and improved lattice basis reduction algorithm. In Proceedings of 11th Colloquium Automata, Languages and Programming; Springer-Verlag: Antwerpen, Belgium, 1984; LNCS 172, pp. 436–447. [Google Scholar]
9. Kannan, R. Improved algorithms for integer programming and related lattice problems. In Proceedings of the 15th Annual ACM Symposium On Theory of Computing, Boston, MA, USA, May 1983; pp. 193–206.
10. Schnorr, C.P. A hierarchy of polynomial time lattice basis reduction algorithms. Theor. Comput. Sci. 1987, 53, 201–224. [Google Scholar] [CrossRef]
11. Koy, H.; Schnorr, C.P. Segment LLL-reduction with floating point orthogonalization. LNCS 2001, 2146, 81–96. [Google Scholar]
12. Hermite, C. Second letter to Jacobi. Crelle J. 1850, 40, 279–290. [Google Scholar] [CrossRef]
13. Schnorr, C.P. A more efficient algorithm for lattice basis reduction. J. Algorithms 1988, 9, 47–62. [Google Scholar] [CrossRef]
14. Storjohann, A. Faster Algorithms for Integer Lattice Basis Reduction; Technical Report 249; Swiss Federal Institute of Technology: Zurich, Switzerland, 1996. [Google Scholar]
15. Schnorr, C.P. Block Korkin-Zolotarev Bases and Successive Minima; Technical Report 92-063; University of California at Berkeley: Berkeley, CA, USA, 1992. [Google Scholar]
16. Nguyen, P.Q.; Stehlé, D. Floating-point LLL revisited. LNCS 2005, 3494, 215–233. [Google Scholar]
17. Schnorr, C.P. Fast LLL-type lattice reduction. Inf. Comput. 2006, 204, 1–25. [Google Scholar] [CrossRef]
18. Kaib, M.; Ritter, H. Block Reduction for Arbitrary Norms. Available online: http://www.mi.informatik.uni-frankfurt.de/research/papers.html (accessed on 15 June 2010).
19. Lovász, L.; Scarf, H. The generalized basis reduction algorithm. Math. Operat. Res. 1992, 17, 754–764. [Google Scholar] [CrossRef]
20. Koy, H.; Schnorr, C.P. Segment LLL-reduction of lattice bases. LNCS 2001, 2146, 67–80. [Google Scholar]
21. Geddes, K.O.; Czapor, S.R.; Labahn, G. Algorithms for Computer Algebra; Kluwer: Boston, MA, USA, 1992. [Google Scholar]
22. Coppersmith, D.; Winograd, S. Matrix multiplication via arithmetic progressions. J. Symbol. Comput. 1990, 9, 251–280. [Google Scholar] [CrossRef]
23. Stehlé, D. Floating-point LLL: Theoretical and practical aspects. In The LLL Algorithm; Springer-verlag: New York, NY, USA, 2009; Chapter 5. [Google Scholar]
24. Schönhage, A.; Strassen, V. Schnelle Multiplikation grosser Zahlen. Computing 1971, 7, 281–292. [Google Scholar] [CrossRef]