Gradual Sub-Lattice Reduction (now with more applications!)
Andy Novocin
andy@novocin.com
LIRMM, Montpellier
June 22nd
The (gimmicky) Road Map: Gradual Sub-Lattice Reduction∗
- The Old Stuff: Lattice Reduction
- The New Concepts: Gradual, Sub-
- The Bottom Line: The Complexity Result; New Complexities for Factoring Polynomials
Why give this talk?
- I want my work to be as useful as possible.
- This began as a new complexity bound for factoring polynomials.
- The result is actually much more about lattice reduction.
- Lattice reduction is used for more than just factoring.
- So I want to show you how this result might be applied, in the hope that you will find it useful.
Introducing Lattices

[Figure: one lattice, L, drawn twice with two different bases.]

Definition
A lattice, L, is the set of all integer combinations of some set of vectors in R^n. Any minimal spanning set of L is called a basis of L. Every lattice has many bases... and we want to find a good basis!
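The definition can be made concrete in a few lines. A minimal sketch with a hypothetical 2-D basis, enumerating the integer combinations with small coefficients:

```python
# A lattice is the set of all integer combinations of its basis vectors.
# Hypothetical 2-D basis; we enumerate combinations with |a|, |b| <= 2.
basis = [(2, 0), (1, 2)]

points = set()
for a in range(-2, 3):
    for b in range(-2, 3):
        x = a * basis[0][0] + b * basis[1][0]
        y = a * basis[0][1] + b * basis[1][1]
        points.add((x, y))

# The basis {(2, 0), (3, 2)} generates the same lattice, since
# (3, 2) = (2, 0) + (1, 2): different bases, one lattice.
print(sorted(points))
```

Replacing a basis vector by itself plus an integer multiple of another never changes the lattice, which is why "many bases" exist.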
The Most Common Lattice Question

The Shortest Vector Problem
Given a lattice, L, find the shortest vector in L.
- The Shortest Vector Problem (SVP) is NP-hard to even approximate to within a constant.
- There are many interesting research areas which can be connected to the SVP.
- One of the primary uses of lattice reduction algorithms is to approximately solve the SVP in polynomial time.
- The algorithm in this talk is well suited for approximating the SVP (in some specific lattices).
- Sometimes approximating can be enough.
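To make the problem concrete, here is a brute-force sketch on a hypothetical 2-D lattice. It only works because the dimension and coefficient box are tiny, which is exactly why polynomial-time approximation is the practical goal:

```python
from itertools import product

# Exact SVP is hard in general, but in a tiny fixed dimension we can
# brute-force it over a small box of integer coefficients.
# Hypothetical basis; coefficients in [-10, 10] suffice here.
basis = [(201, 100), (402, 201)]

best = None
for a, b in product(range(-10, 11), repeat=2):
    if (a, b) == (0, 0):
        continue
    v = (a * basis[0][0] + b * basis[1][0],
         a * basis[0][1] + b * basis[1][1])
    norm2 = v[0] ** 2 + v[1] ** 2
    if best is None or norm2 < best[0]:
        best = (norm2, v)

print(best)  # (squared length, vector) of the shortest vector found
```

Despite the large basis entries, the lattice hides a very short vector, the typical situation lattice reduction is used to expose.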
An Example: Algebraic Number Reconstruction

Finding a minpoly: given an approximation α̃ = Re(α̃) + i · Im(α̃), make a lattice, L, like this:

    [ 1             C · Re(α̃^0)   C · Im(α̃^0) ]
    [    1          C · Re(α̃^1)   C · Im(α̃^1) ]
    [       1       C · Re(α̃^2)   C · Im(α̃^2) ]
    [          1    C · Re(α̃^3)   C · Im(α̃^3) ]

where C is a very large constant. Let minpoly(α) =: c0 + c1·x + c2·x^2 + c3·x^3. Then (c0, c1, c2, c3, 0, 0) ∈ L and is smaller in size than the other vectors.
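The construction can be sketched as follows. To keep the sketch exact, it uses α = i (whose minpoly is x^2 + 1) in place of a numerical approximation; C and the degree bound 3 follow the slide:

```python
# Sketch of the slide's lattice for a degree-3 bound, using the exact
# algebraic number alpha = i.  In practice alpha is only known
# approximately and C is chosen very large.
C = 10**6
alpha = complex(0, 1)

rows = []
for j in range(4):                      # powers alpha^0 .. alpha^3
    row = [0] * 4
    row[j] = 1                          # identity part records c_j
    p = alpha ** j
    row += [round(C * p.real), round(C * p.imag)]
    rows.append(row)

# The coefficient vector of x^2 + 1 is (1, 0, 1, 0); the lattice vector
# rows[0] + rows[2] has vanishing trailing entries because
# C * (alpha^0 + alpha^2) = C * (1 - 1) = 0.
target = [a + b for a, b in zip(rows[0], rows[2])]
print(target)
```

A lattice reduction routine run on `rows` would find this short vector, recovering the minpoly coefficients from the approximation.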
First we need to recall Gram-Schmidt Orthogonalization

Given a set of vectors b_1, ..., b_d ∈ R^n, the Gram-Schmidt (G-S) process returns a set of orthogonal vectors b_1*, ..., b_d* with the following properties:
- b_1 = b_1*
- span_R{b_1, ..., b_i} = span_R{b_1*, ..., b_i*}

Intuition of GSO
My favorite way to think of G-S vectors is that b_i* is b_i modded out by b_1, ..., b_{i-1} over R.
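A minimal exact implementation of the G-S process, using rationals to avoid rounding; the basis here is a hypothetical example:

```python
from fractions import Fraction

def gso(basis):
    """Gram-Schmidt orthogonalization without normalization:
    b_i* is b_i minus its projections onto b_1*, ..., b_{i-1}*."""
    ortho = []
    for b in basis:
        b = [Fraction(x) for x in b]
        for u in ortho:
            mu = sum(x * y for x, y in zip(b, u)) / sum(y * y for y in u)
            b = [x - mu * y for x, y in zip(b, u)]
        ortho.append(b)
    return ortho

basis = [(3, 1), (2, 2)]
o = gso(basis)
print(o)  # b_1* = b_1, and b_2* is orthogonal to b_1*
```

Note the two slide properties hold by construction: the first vector is untouched, and each prefix spans the same subspace before and after.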
Introducing A Reduced Basis

The goal of lattice reduction is to find a 'nice' basis for a given lattice.

A Reduced Basis
Let b_1, ..., b_d be a basis for a lattice, L, and let b_j* be the jth G-S vector. Then we call the basis (δ, η)-reduced, for δ ∈ (1/4, 1], η ∈ [1/2, √δ), when:

    ||b_i*||^2 ≤ (1 / (δ − η^2)) · ||b_{i+1}*||^2   for all i < d

In the original LLL paper the values (δ, η) := (3/4, 1/2) were chosen, so that ||b_i*||^2 ≤ 2 ||b_{i+1}*||^2.

A reduced basis cannot be too far from orthogonal. In particular, the G-S lengths do not drop 'too' fast.
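The drop condition on the G-S lengths can be checked directly from the squared norms. This sketch assumes the classical LLL parameters and hypothetical lists of lengths:

```python
from fractions import Fraction

# Check the slide's condition
#   ||b_i*||^2 <= (1/(delta - eta^2)) * ||b_{i+1}*||^2  for all i < d
# given the squared Gram-Schmidt lengths.  With the classical choice
# (delta, eta) = (3/4, 1/2) the factor is exactly 2.
def drop_condition(gs_norms2, delta=Fraction(3, 4), eta=Fraction(1, 2)):
    factor = 1 / (delta - eta ** 2)
    return all(gs_norms2[i] <= factor * gs_norms2[i + 1]
               for i in range(len(gs_norms2) - 1))

assert 1 / (Fraction(3, 4) - Fraction(1, 2) ** 2) == 2
print(drop_condition([4, 3, 2]))   # lengths drop slowly enough
print(drop_condition([8, 3, 2]))   # 8 > 2 * 3, so not reduced
```

The point is that the condition constrains only the *rate* of decrease, which is exactly what makes the later guarantees possible.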
Reduced is near-Orthogonal

[Figure: two pictures. In the first, v_1 and v_2 are far from orthogonal, so v_2* has small G-S length. In the second, the vectors are closer to orthogonal and v_2* has larger G-S length.]

- LLL searches for a nearly orthogonal basis.
- It does this by rearranging basis vectors so that latter vectors have long G-S lengths, and by 'modding out' by previous vectors over Z.
A Reduced Basis is a Nice Basis

Nice traits of a reduced basis:
- The first vector is not far from the shortest vector in the lattice. For every v ∈ L we have:

      ||b_1|| ≤ 2^{(d−1)/2} ||v||

- The later vectors have longer Gram-Schmidt length than when LLL began. This is useful because of the following property, which is true for any basis b_1, ..., b_d: for every v ∈ L with ||v||^2 ≤ B, if ||b_d*||^2 > B then v ∈ span_Z(b_1, ..., b_{d−1}).
- The basic idea is that LLL can separate the small vectors from the large vectors, if we can create a large enough gap in their sizes.
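The separation property can be seen on a tiny hypothetical basis whose last G-S vector is deliberately long:

```python
# The second Gram-Schmidt vector of the basis [(1, 0), (0, 100)] has
# squared length 10000, so any lattice vector with squared length
# <= B = 9999 must have a zero coefficient on b_2, i.e. it lies in
# span_Z(b_1).  We verify this by enumeration over a coefficient box.
B = 9999
short = []
for a in range(-200, 201):
    for b in range(-2, 3):
        v = (a, 100 * b)               # a * b_1 + b * b_2
        if 0 < v[0] ** 2 + v[1] ** 2 <= B:
            short.append((v, b))

assert all(coeff == 0 for _, coeff in short)
print(len(short), "short vectors, all in span_Z(b_1)")
```

This is the "gap" trick in miniature: once the last G-S length is pushed above the target bound B, the small vectors of interest are confined to the sub-lattice spanned by the earlier basis vectors.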
A Lattice Reduction Algorithm

Most variants of LLL perform the following steps in one form or another:
1. (Gram-Schmidt over Z). Subtract suitable Z-linear combinations of b_1, ..., b_{i−1} from b_i.
2. (LLL Switch). If there is a k such that interchanging b_{k−1} and b_k will increase ||b_k*||^2 by a factor 1/δ, then do so.
3. (Repeat). If there was no such k in Step 2, then the algorithm stops. Otherwise go back to Step 1.

The cost of this algorithm has been roughly approximated as 'the number of switches' times 'the cost per switch'.
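The three steps above can be sketched as a deliberately naive, exact LLL. The example basis is hypothetical; real implementations update the G-S data incrementally and use floating point, rather than recomputing rationals from scratch at every step:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Naive exact sketch of the three LLL steps above."""
    b = [[Fraction(x) for x in row] for row in basis]

    def gso(b):
        ortho = []
        mu = [[Fraction(0)] * len(b) for _ in b]
        for i in range(len(b)):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = (sum(x * y for x, y in zip(b[i], ortho[j]))
                            / sum(y * y for y in ortho[j]))
                v = [x - mu[i][j] * y for x, y in zip(v, ortho[j])]
            ortho.append(v)
        return ortho, mu

    k = 1
    while k < len(b):
        # Step 1: Gram-Schmidt over Z (size-reduce b_k).
        for j in range(k - 1, -1, -1):
            _, mu = gso(b)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        ortho, mu = gso(b)
        # Step 2: an LLL switch if the Lovasz condition fails at k.
        lhs = sum(x * x for x in ortho[k])
        rhs = (delta - mu[k][k - 1] ** 2) * sum(x * x for x in ortho[k - 1])
        if lhs >= rhs:
            k += 1                     # Step 3: condition holds, move on
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

print(lll([(201, 37), (1648, 297)]))   # a short, near-orthogonal basis
```

Counting the swap branch of Step 2 is exactly the 'number of switches' that the cost estimate multiplies by the cost per switch.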
A Tightly Packed Example

[Figure: an animated matrix example with entries 1, 5, 10, and 20, stepped through several LLL switches on a tightly packed basis.]
Today's Complexity Goal (kind of...)

Parameters
Given a lattice basis b_1, ..., b_d ∈ R^n with ||b_i||^2 ≤ X for all i, return a reduced basis.
- The 1982 LLL paper does this in O(d^5 n log^3(X)).
- The 2005 Nguyen and Stehlé paper does this in O(d^4 n (d + log(X)) log(X)).
- We will try to do something like this, on some types of input, in something like O(d^7 + d^5 log(X)).
- It's actually O((r + N) r^3 (r + log(B)) (log(X) + (r + N)(r + log(B)))) for a reduced basis of a sub-lattice.
- My goal today is to explain this result, and why/how to use it in applications.
Knapsack Lattices

The asterisk
So far our algorithm only operates on the following types of lattices:

    [ 1             x_{1,1}  x_{1,2}  ...  x_{1,N} ]
    [    1          x_{2,1}  x_{2,2}  ...  x_{2,N} ]
    [       ...       ...      ...    ...    ...   ]
    [          1    x_{r,1}  x_{r,2}  ...  x_{r,N} ]

or the same basis augmented with N extra rows, each carrying a single modulus P_j (for P_1, ..., P_N) in its data column. Many interesting problems can, however, be fit into these formats.
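Building the first of these basis shapes is mechanical; a sketch with hypothetical data columns:

```python
# Build the first lattice shape above: an r x r identity block
# augmented with N data columns x[i][j].  The data values here are
# hypothetical placeholders.
def knapsack_basis(x):
    r = len(x)
    return [[1 if i == j else 0 for j in range(r)] + list(x[i])
            for i in range(r)]

x = [[7, 2], [3, 9], [5, 4]]          # r = 3 rows, N = 2 data columns
for row in knapsack_basis(x):
    print(row)
```

The algebraic-number lattice from earlier is exactly this shape with r = 4 and N = 2 (the scaled real and imaginary columns).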
The Switch Picture

LLL[82] counts switches:

    O(d^2 log(X)) = log(X) + 2 log(X) + ... + (d − 1) log(X)

[Figure: an animated grid with axes d − 1 and log(X), filling in one switch at a time; position i can absorb up to i · log(X) switches.]
It's a Better Picture with a Sub-Lattice

In problems where we want vectors of length ≤ B, we can prove a 'better' bound for the number of switches:

    ≤ O(d^2 (d + log(B)))

[Figure: the same grid, now with axes d and d + log(B); only a band of height log(B) inside the full log(X) range matters.]
An idea from Karim Belabas

Belabas showed that:
- Using many calls to LLL, each on truncated/scaled entries, was faster than a single call at full precision.
- The CPU's work was not distributed evenly between the many calls to LLL.
- This process must be done with the most significant digits first.
An Example
The lattice built from the entries 200001, 90102, 90403, 90904 has a vector of length √102.
[Worked animation: truncating the entries to 200, 90, 90, 90 and running LLL takes 7 swaps; scaling the entries back up and re-reducing takes 2 more swaps.]
A single call to LLL at full precision uses 24 swaps.
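The truncation step in the example is just integer division by a power of the base; a one-liner reproduces the scaled-down entries from the slide:

```python
entries = [200001, 90102, 90403, 90904]   # the example's lattice entries
scale = 1000                              # keep the most significant digits

truncated = [e // scale for e in entries]
print(truncated)  # [200, 90, 90, 90]
```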
A sketch of the Algorithm
Input:
[ 1              x_{1,1}  x_{1,2}  ···  x_{1,N} ]
[    1           x_{2,1}  x_{2,2}  ···  x_{2,N} ]
[       ⋱          ⋮        ⋮      ⋱      ⋮    ]
[          1     x_{r,1}  x_{r,2}  ···  x_{r,N} ]
- for j = 1 . . . N do:
  - Compute the new column.
  - Scale the new column down.
  - while the column is still scaled down do:
    - Run LLL (removing large final vectors).
    - Scale the new column up.
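The per-column loop above can be sketched in code. This is a minimal sketch under several assumptions: the LLL routine is a slow textbook version (not the L² variant the complexity statements assume), the function names and the `chunk_bits` step are invented for illustration, and the “removing large final vectors” step is omitted:

```python
from fractions import Fraction

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL on integer row vectors. Slow on purpose: the
    Gram-Schmidt data is recomputed after every row operation."""
    B = [list(row) for row in B]

    def gso():
        # Gram-Schmidt coefficients mu and squared norms of the GS vectors
        mu, star, norms = [], [], []
        for i, row in enumerate(B):
            v = [Fraction(x) for x in row]
            coeffs = []
            for j in range(i):
                m = sum(Fraction(row[t]) * star[j][t]
                        for t in range(len(v))) / norms[j]
                coeffs.append(m)
                v = [v[t] - m * star[j][t] for t in range(len(v))]
            mu.append(coeffs)
            star.append(v)
            norms.append(sum(x * x for x in v))
        return mu, norms

    mu, norms = gso()
    k = 1
    while k < len(B):
        for j in range(k - 1, -1, -1):           # size-reduce row k
            q = round(mu[k][j])
            if q:
                B[k] = [B[k][t] - q * B[j][t] for t in range(len(B[k]))]
                mu, norms = gso()
        if norms[k] >= (delta - mu[k][k - 1] ** 2) * norms[k - 1]:
            k += 1                               # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # one "switch"
            mu, norms = gso()
            k = max(k - 1, 1)
    return B

def gradual_reduce(r, cols, chunk_bits=8):
    """Append columns one at a time, revealing each column's bits from
    the most significant end, as in the per-column loop above."""
    # The left r x r block starts as the identity, so each row always
    # records the integer combination of input rows it represents.
    B = [[1 if i == j else 0 for j in range(r)] for i in range(r)]
    for col in cols:                             # for j = 1 .. N
        exact = [sum(row[t] * col[t] for t in range(r)) for row in B]
        for row, e in zip(B, exact):
            row.append(e)                        # compute the new column
        shift = max(max(abs(e) for e in exact).bit_length() - chunk_bits, 0)
        while True:                              # while scaled down
            work = [row[:-1] + [row[-1] >> shift] for row in B]  # scale down
            work = lll(work)
            # refresh the new column exactly, via the recorded transform
            B = [row[:-1] + [sum(row[t] * col[t] for t in range(r))]
                 for row in work]
            if shift == 0:
                break
            shift = max(shift - chunk_bits, 0)   # scale up: reveal more bits
    return B
```

A small run on a made-up knapsack-style instance; the invariant checked is that each row’s new entry matches its recorded transform:

```python
col = [90102, 90403, 90904]          # hypothetical small instance
B = gradual_reduce(3, [col])
assert all(row[3] == sum(row[t] * col[t] for t in range(3)) for row in B)
```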
Some features of the Proof
- The size of the vectors remains O(r + log(B)).
- The number of scalings is O(r + N).
- The total number of switches is O((r + N) · r · (r + log(B))).
- The overall complexity is roughly O(r³ N log(B) [log(X) + N log(B)]).
When this result is interesting
- For problems where you can prove a bound on the size of ‘interesting’ vectors.
- In L² there is a log²(X) term.
- In this algorithm that term is ‘replaced’ by log(B) log(X)/r + log²(B).
- In factoring polynomials we can prove a bound of log(r).
- In algebraic number reconstruction we can prove log(X)/r.
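To get a feel for the scale of that replacement, here is a throwaway comparison of the two quality terms; the values of log(X) and r are made up, and log(B) ≈ log(r) is the factoring-case bound from the slide:

```python
from math import log2

logX, r = 10_000, 50                     # illustrative sizes only
logB = log2(r)                           # factoring case: log(B) ~ log(r)

l2_term = logX ** 2                      # the log^2(X) term in L^2
new_term = logB * logX / r + logB ** 2   # the replacement term here

print(new_term < l2_term)  # True for these values
```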
A Recent Result Which Inspired Us
Belabas, Klüners, van Hoeij, and Steel showed that reducing the following basis will factor a polynomial:
[ p^a/B_N                                ]
[          ⋱                             ]
[               p^a/B_1                  ]
[ 1                       ∗   · · ·   ∗  ]
[    ⋱                    ⋮   ⋱      ⋮  ]
[         1               ∗   · · ·   ∗  ]
Any vector which corresponds with a factor has size ≤ r + 1, so we choose B = r + 1.
Comparing with Schönhage
If we apply our algorithm to the [BHKS] result, then we can factor a polynomial of degree N and height H with complexity O(N² r⁴ (N + log(H))). This is the first improvement since 1984, when Schönhage gave O(N⁸ + N⁵ log³(H)).
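As a rough sanity check on the comparison, one can tabulate both bounds with the constants dropped; the sizes below are invented for illustration (r ≤ N, and r is often much smaller than N):

```python
def new_bound(N, logH, r):
    return N**2 * r**4 * (N + logH)   # O(N^2 r^4 (N + log H))

def schonhage_bound(N, logH):
    return N**8 + N**5 * logH**3      # O(N^8 + N^5 log^3 H)

N, logH, r = 100, 1000, 10            # illustrative sizes only
print(new_bound(N, logH, r) < schonhage_bound(N, logH))  # True
```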