Gradual Sub-Lattice Reduction ∗

(now with more applications!)

Andy Novocin
andy@novocin.com

LIRMM, Montpellier

June 22nd


The (gimmicky) Road Map

The Old Stuff
  • Lattice Reduction
The New Concepts
  • Sub-Lattice Reduction
  • Gradual Reduction
The Bottom Line
  • The Complexity Result
  • New Complexities for Factoring Polynomials


Why give this talk?

  • I want my work to be as useful as possible.
  • This began as a new complexity bound for factoring polynomials.
  • The result is actually much more about lattice reduction.
  • Lattice reduction is used for more than just factoring.
  • So I want to show you how this result might be applied

. . . in the hope that you will find it useful.


Introducing Lattices

[figure: a lattice, L, and the same lattice, L, drawn with two different bases]

Definition

A lattice, L, is the set of all integer combinations of some set of vectors in R^n. Any minimal spanning set of L is called a basis of L. Every lattice has many bases . . . and we want to find a good basis!
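The definition can be made concrete in a few lines of code; this is a small illustrative sketch, with basis vectors b1, b2 made up for the example:

```python
# Lattice points are exactly the integer combinations of the basis.
# Enumerate a small patch of the lattice spanned by b1 = (2, 1), b2 = (0, 3).
b1, b2 = (2, 1), (0, 3)

points = {
    (a * b1[0] + c * b2[0], a * b1[1] + c * b2[1])
    for a in range(-3, 4)      # integer coefficients only
    for c in range(-3, 4)
}

print((2, 4) in points)    # 1·b1 + 1·b2 = (2, 4), so True
print((1, 0) in points)    # not an integer combination of b1 and b2: False
```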


The Most Common Lattice Question

The Shortest Vector Problem

Given a lattice, L, find the shortest vector in L.

  • The Shortest Vector Problem (SVP) is NP-hard to even approximate to within a constant.
  • There are many interesting research areas which can be connected to the SVP.
  • One of the primary uses of lattice reduction algorithms is to approximately solve the SVP in polynomial time.
  • The algorithm in this talk is well suited for approximating the SVP (in some specific lattices).
  • Sometimes approximating can be enough.
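To make the SVP concrete, here is a brute-force sketch on a tiny 2-dimensional lattice (the basis is made up for the example). The search is exponential in general, which is exactly why polynomial-time approximation via lattice reduction matters:

```python
# Brute-force SVP: try all small integer combinations of a 2D basis and
# keep the shortest nonzero vector. Only feasible in tiny dimensions.
def shortest_vector(b1, b2, bound=5):
    best, best_sq = None, float("inf")
    for a in range(-bound, bound + 1):
        for c in range(-bound, bound + 1):
            if a == 0 and c == 0:
                continue                     # skip the zero vector
            v = (a * b1[0] + c * b2[0], a * b1[1] + c * b2[1])
            sq = v[0] ** 2 + v[1] ** 2
            if sq < best_sq:
                best, best_sq = v, sq
    return best, best_sq

v, sq = shortest_vector((5, 1), (4, 1))
print(v, sq)    # a shortest vector in this lattice has squared length 1
```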

An Example: Algebraic Number Reconstruction

Finding a minpoly: Given an approximation α̃ = Re(α̃) + i · Im(α̃), make a lattice, L, like this:

    [ 1          C · Re(α̃^0)  C · Im(α̃^0) ]
    [    1       C · Re(α̃^1)  C · Im(α̃^1) ]
    [       1    C · Re(α̃^2)  C · Im(α̃^2) ]
    [          1 C · Re(α̃^3)  C · Im(α̃^3) ]

where C is a very large constant. Let minpoly(α) =: c0 + c1 x + c2 x^2 + c3 x^3. Then (c0, c1, c2, c3, 0, 0) ∈ L (up to the approximation error) and is smaller in size than the other vectors.
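As a sanity check of this construction, here is a small sketch for α = √2 (so the minpoly is x² − 2, with coefficient vector (−2, 0, 1, 0)); the value of C is an arbitrary choice for the example:

```python
# Build the minpoly-reconstruction lattice for α ≈ sqrt(2). Since α is
# real, every Im(α^i) is 0. C is the "very large constant" from the slide.
alpha = 2 ** 0.5
C = 10 ** 6

# Rows: identity block | round(C·Re(α^i)), round(C·Im(α^i)) for i = 0..3
rows = []
for i in range(4):
    row = [0] * 4
    row[i] = 1
    row += [round(C * alpha ** i), 0]
    rows.append(row)

# The integer combination given by the minpoly coefficients (-2, 0, 1, 0)
coeffs = [-2, 0, 1, 0]
v = [sum(c * r[j] for c, r in zip(coeffs, rows)) for j in range(6)]
print(v)   # first four entries are the coefficients; the last two are tiny
```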

First we need to recall Gram-Schmidt Orthogonalization

Given a set of vectors b1, . . . , bd ∈ R^n the Gram-Schmidt (G-S) process returns a set of orthogonal vectors b*_1, . . . , b*_d with the following properties:

  • b1 = b*_1
  • SPAN_R{b1, . . . , bi} = SPAN_R{b*_1, . . . , b*_i}

Intuition of GSO

My favorite way to think of G-S vectors is that b*_i is bi modded out by b1, . . . , bi−1 over R.
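A minimal sketch of the G-S process, following the "modded out over R" intuition (plain Python, no external libraries):

```python
# Gram-Schmidt: b*_i is b_i with its components along b_1..b_{i-1}
# removed, i.e. "modded out over R".
def gram_schmidt(basis):
    ortho = []
    for b in basis:
        v = list(b)
        for u in ortho:
            # subtract the projection of b onto the earlier G-S vector u
            mu = sum(x * y for x, y in zip(b, u)) / sum(y * y for y in u)
            v = [x - mu * y for x, y in zip(v, u)]
        ortho.append(v)
    return ortho

bs = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(bs)   # b*_1 equals b_1, and b*_2 is orthogonal to b*_1
```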

Introducing A Reduced Basis

The goal of lattice reduction is to find a ‘nice’ basis for a given lattice.

A Reduced Basis

Let b1, . . . , bd be a basis for a lattice, L, and let b*_j be the jth G-S vector. Then we call the basis (δ, η)-reduced, for δ ∈ (1/4, 1], η ∈ [1/2, √δ), when:

    ‖b*_i‖^2 ≤ (1 / (δ − η^2)) · ‖b*_{i+1}‖^2   for all i < d

In the original LLL paper the values (δ, η) := (3/4, 1/2) were chosen so that ‖b*_i‖^2 ≤ 2 ‖b*_{i+1}‖^2.

A reduced basis cannot be too far from orthogonal. In particular the G-S lengths do not drop ‘too’ fast.
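The length-drop inequality can be checked mechanically. The sketch below tests only this G-S length condition (not the size-reduction part of full (δ, η)-reduction), with the classic choice (δ, η) = (3/4, 1/2):

```python
# Squared Gram-Schmidt lengths of a basis (plain Python floats).
def gs_sq_norms(basis):
    ortho, norms = [], []
    for b in basis:
        v = list(b)
        for u in ortho:
            mu = sum(x * y for x, y in zip(b, u)) / sum(y * y for y in u)
            v = [x - mu * y for x, y in zip(v, u)]
        ortho.append(v)
        norms.append(sum(x * x for x in v))
    return norms

# Check ‖b*_i‖² ≤ (1/(δ−η²)) · ‖b*_{i+1}‖² for all i < d.
def gs_lengths_ok(basis, delta=0.75, eta=0.5):
    c = 1.0 / (delta - eta * eta)          # = 2 for (3/4, 1/2)
    n = gs_sq_norms(basis)
    return all(n[i] <= c * n[i + 1] for i in range(len(n) - 1))

print(gs_lengths_ok([[1.0, 0.0], [0.0, 1.0]]))    # orthonormal: True
print(gs_lengths_ok([[10.0, 0.0], [0.0, 1.0]]))   # lengths drop 100x: False
```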


Reduced is near-Orthogonal

[figure: v*_1 := v1 with v2 nearly parallel to it] In this picture there are two vectors which are far from orthogonal, so v*_2 has a small G-S length.

[figure: v*_1 := v1 with v2 at a wide angle to it] In this one the vectors are closer to orthogonal, so v*_2 has a larger G-S length.

  • LLL searches for a nearly orthogonal basis.
  • It does this by rearranging basis vectors so that later vectors have long G-S lengths, and by ‘modding out’ by previous vectors over Z.


A Reduced Basis is a Nice Basis

Nice traits of a reduced basis:

  • The first vector is not far from the shortest vector in the lattice. For every v ∈ L we have: ‖b1‖ ≤ 2^((d−1)/2) ‖v‖.
  • The later vectors have longer Gram-Schmidt length than when LLL began. This is useful because of the following property, which is true for any basis b1, . . . , bd: for every v ∈ L with ‖v‖^2 ≤ B, if ‖b*_d‖^2 > B then v ∈ SPAN_Z(b1, . . . , bd−1).
  • The basic idea is that LLL can separate the small vectors from the large vectors, if we can create a large enough gap in their sizes.
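The separation property can be verified by brute force on a toy basis, made up for the demo so that ‖b*_3‖² = 40² = 1600 far exceeds the bound B:

```python
# Check: if ‖b*_d‖² > B, every lattice vector v with ‖v‖² ≤ B has a zero
# coefficient on b_d, i.e. v ∈ SPAN_Z(b1, ..., b_{d-1}). Here d = 3.
def all_short_in_subspan(basis, B, rng=10):
    cols = list(zip(*basis))                 # columns of the basis matrix
    for a in range(-rng, rng + 1):
        for c in range(-rng, rng + 1):
            for e in range(-rng, rng + 1):
                v = [a * x + c * y + e * z for x, y, z in cols]
                if sum(t * t for t in v) <= B and e != 0:
                    return False             # a short vector escaped the sub-span
    return True

# b*_3 = (0, 0, 40), so ‖b*_3‖² = 1600 > B = 100
print(all_short_in_subspan([(1, 0, 0), (0, 1, 0), (3, 5, 40)], B=100))  # True
```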


A Lattice Reduction Algorithm

Most variants of LLL perform the following steps in one form or another:

  1. (Gram-Schmidt over Z). Size-reduce bi by subtracting suitable Z-linear combinations of b1, . . . , bi−1 from bi.
  2. (LLL Switch). If there is a k such that interchanging bk−1 and bk will increase ‖b*_k‖^2 by a factor 1/δ, then do so.
  3. (Repeat). If there was no such k in Step 2, then the algorithm stops. Otherwise go back to Step 1.

The cost of this algorithm has been roughly approximated as: ‘the number of switches’ times ‘the cost per switch’.
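The three steps above can be sketched as a compact and deliberately naive LLL in plain floating point; it recomputes the G-S data from scratch on each pass, so it is for illustration only, not a serious implementation:

```python
# Textbook LLL sketch (δ = 3/4): size-reduce over Z, test the Lovász
# condition, switch and step back when it fails.
def lll_reduce(basis, delta=0.75):
    b = [list(v) for v in basis]

    def gso(vs):
        ortho, mu = [], [[0.0] * len(vs) for _ in vs]
        for i, v in enumerate(vs):
            w = list(v)
            for j, u in enumerate(ortho):
                mu[i][j] = sum(x * y for x, y in zip(v, u)) / sum(y * y for y in u)
                w = [x - mu[i][j] * y for x, y in zip(w, u)]
            ortho.append(w)
        return ortho, mu

    k = 1
    while k < len(b):
        ortho, mu = gso(b)
        # Step 1 (Gram-Schmidt over Z): size-reduce b_k
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                ortho, mu = gso(b)
        # Step 2 (LLL switch), via the Lovász condition
        nk = sum(x * x for x in ortho[k])
        nk1 = sum(x * x for x in ortho[k - 1])
        if nk >= (delta - mu[k][k - 1] ** 2) * nk1:
            k += 1                            # Step 3: no switch needed here
        else:
            b[k - 1], b[k] = b[k], b[k - 1]   # switch and step back
            k = max(k - 1, 1)
    return b

# reduces to a shorter, near-orthogonal basis: [[1.0, 1.0], [1.0, -1.0]]
print(lll_reduce([[2.0, 0.0], [1.0, 1.0]]))
```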


A Tightly Packed Example

[animation: a sequence of small integer matrices with entries 1, 5, 10, and 20; each frame performs one more LLL switch, gradually moving the short vectors toward the front of the basis]


Today’s Complexity Goal (kind of. . .)

Parameters

Given a lattice basis, b1, . . . , bd ∈ R^n with ‖bi‖^2 ≤ X for all i, return a reduced basis.

  • The 1982 LLL paper does this in O(d^5 n log^3(X)).
  • The 2005 Nguyen and Stehlé paper does this in O(d^4 n (d + log(X)) log(X)).
  • We will try to do something like this on some types of input in something like O(d^7 + d^5 log(X)).
  • It’s actually O((r + N) r^3 (r + log(B)) (log(X) + (r + N)(r + log(B)))) for a reduced basis of a sub-lattice.
  • My goal today is to explain this result, and why/how to use it in applications.


Knapsack Lattices

The asterisk

So far our algorithm only operates on the following types of lattices:

    [ 1             x1,1  x1,2  · · ·  x1,N ]
    [    1          x2,1  x2,2  · · ·  x2,N ]
    [       . . .   . . .              . . . ]
    [           1   xr,1  xr,2  · · ·  xr,N ]

slide-66
SLIDE 66

Knapsack Lattices

The asterisk: so far our algorithm only operates on the following types of lattices, here augmented with modulus rows P_N, . . . , P_1:

\begin{pmatrix}
  &        &   & P_N     &        &         \\
  &        &   &         & \ddots &         \\
  &        &   &         &        & P_1     \\
1 &        &   & x_{1,1} & \cdots & x_{1,N} \\
  & \ddots &   & \vdots  &        & \vdots  \\
  &        & 1 & x_{r,1} & \cdots & x_{r,N}
\end{pmatrix}

Although many interesting problems can fit these formats.
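To make the shape concrete, here is a small helper that builds such a knapsack-with-moduli basis as integer rows. This is a sketch with made-up numbers, not code from the talk, and the row order is immaterial for the lattice generated:

```python
def knapsack_basis(x, moduli=None):
    """Build a knapsack-style lattice basis as integer rows.

    x is an r x N matrix of data columns; data row i becomes (e_i | x_i).
    If moduli P_1..P_N are given, a row P_j * e_{r+j} is added per column,
    as in the augmented matrix above.
    """
    r, n = len(x), len(x[0])
    rows = []
    if moduli is not None:
        for j, p in enumerate(moduli):
            row = [0] * (r + n)
            row[r + j] = p          # modulus row: P_j in data column j
            rows.append(row)
    for i in range(r):
        row = [0] * (r + n)
        row[i] = 1                  # identity part
        row[r:] = x[i]              # data part
        rows.append(row)
    return rows

# Example with hypothetical data: r = 2 rows, N = 2 columns, moduli 100.
B = knapsack_basis([[7, 11], [5, 13]], moduli=[100, 100])
```

Reducing `B` with any LLL variant then searches for short integer combinations of the data rows modulo the P_j.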



slide-69
SLIDES 69–76 (animation frames of one slide)

The Switch Picture

LLL[82] counts switches:

O(d^2 log(X)) = log(X) + 2 log(X) + · · · + (d − 1) log(X)

[Figure: a row of card suits (♣/♠) standing for basis vectors over d − 1 positions; successive frames advance the switch counter through 0, 1, 2, . . . , log(X), log(X) + 1, . . . switches.]
slide-77
SLIDE 77

It’s a Better Picture with a Sub-Lattice

In problems where we want vectors of length ≤ B, we can prove a ‘better’ bound for the number of switches:

≤ O(d^2 (d + log(B)))

[Figure: the switch diagram again, with axis labels d, log(B), log(X) and column height d + log(B) in place of log(X).]


slide-79
SLIDE 79

An idea from Karim Belabas

Belabas showed that:

  • Using many calls to LLL, each on truncated/scaled entries, was faster than a single call at full precision.
  • The CPU’s work was not distributed evenly between the many calls to LLL.
  • This process must be done with the most significant digits first.


slide-82
SLIDE 82

An Example

The knapsack basis

\begin{pmatrix}
  &   &   & 200001 \\
1 &   &   & 90102  \\
  & 1 &   & 90403  \\
  &   & 1 & 90904
\end{pmatrix}

has a vector of length √102. Truncating the data column to its most significant digits gives

\begin{pmatrix}
  &   &   & 200 \\
1 &   &   & 90  \\
  & 1 &   & 90  \\
  &   & 1 & 90
\end{pmatrix}

Reducing the truncated basis takes 7 swaps; scaling back up and reducing again takes 2 more swaps. A single call to LLL at full precision uses 24 swaps.

[The intermediate reduced bases shown on the slide are extraction residue and are omitted.]
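For readers who want to count swaps themselves, here is a minimal textbook LLL in exact rational arithmetic. It is a sketch, not the tuned L² algorithm from the talk; the `swaps` counter is exactly the quantity the switch pictures are tallying:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of integer rows; returns (basis, swap count)."""
    B = [[Fraction(v) for v in row] for row in basis]
    n = len(B)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gso():
        # Gram-Schmidt orthogonalization: B*[i] and coefficients mu[i][j]
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(B[i])
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    swaps, k = 0, 1
    while k < n:
        Bs, mu = gso()
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gso()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # the 'switch' being counted
            swaps += 1
            k = max(k - 1, 1)
    return [[int(v) for v in row] for row in B], swaps

# A classic 3-dimensional example basis.
R, swaps = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

This lattice contains a vector of length 1, so LLL’s guarantee ‖b_1‖² ≤ 2^{d−1} λ_1² means the first reduced vector has squared length at most 4 here. Recomputing the full Gram–Schmidt data after every operation is wasteful but keeps the sketch obviously correct.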



slide-88
SLIDE 88

A sketch of the Algorithm

Input:

\begin{pmatrix}
1 &        &   & x_{1,1} & x_{1,2} & \cdots & x_{1,N} \\
  & \ddots &   & \vdots  & \vdots  &        & \vdots  \\
  &        & 1 & x_{r,1} & x_{r,2} & \cdots & x_{r,N}
\end{pmatrix}

  • for j = 1 . . . N do:
      • Compute the new column.
      • Scale the new column down.
      • while scaled do:
          • Run LLL (removing large final vectors).
          • Scale the new column up.
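The loop above can be illustrated on a toy rank-2 instance. All numbers below are hypothetical, and the inner reducer is plain Lagrange/Gauss reduction standing in for LLL. The point of the knapsack shape is that each row carries its identity part, so the unimodular transform found at low precision can be read off the first r coordinates and re-applied to the full-precision column:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gauss_reduce(u, v):
    """Lagrange/Gauss reduction of a rank-2 integer basis (stand-in for LLL)."""
    while True:
        if dot(u, u) < dot(v, v):
            u, v = v, u
        q = round(Fraction(dot(u, v), dot(v, v)))
        u = [a - q * b for a, b in zip(u, v)]
        if dot(u, u) >= dot(v, v):
            return v, u

x = [93184022117, 60913170527]   # hypothetical large data column
rows = [[1, 0], [0, 1]]          # identity part = accumulated transform

for s in (10**8, 10**4, 1):      # most significant digits first
    # recompute the data column from the transform, scaled down by s
    full = [r + [(r[0] * x[0] + r[1] * x[1]) // s] for r in rows]
    a, b = gauss_reduce(full[0], full[1])
    rows = [a[:2], b[:2]]

# only swaps and integer row operations were used, so the transform
# stays unimodular: 'rows' encodes a basis change, not a projection
det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
```

The final pass (s = 1) is an exact reduction of a true basis of the lattice, so the end result is genuinely reduced; the earlier truncated passes only cheapen the arithmetic, which is Belabas’s observation.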
slide-89
SLIDE 89

Some features of the Proof

  • The size of the vectors remains O(r + log(B)).
  • The number of scalings is O(r + N).
  • The total number of switches is O((r + N) r (r + log(B))).
  • The overall complexity is roughly O(r^3 N log(B) [log(X) + N log(B)]).


slide-93
SLIDE 93

When this result is interesting

  • For problems where you can prove a bound on the size of ‘interesting’ vectors.
  • In L² there is a log^2(X) term.
  • In this algorithm the term is ‘replaced’ by log(B) log(X)/r + log^2(B).
  • In factoring polynomials we can prove a bound of log(r).
  • In algebraic number reconstruction we can prove log(X)/r.
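A back-of-the-envelope check of that replacement, with made-up parameters (entries around 2^1000, target bound around 2^10, rank 50), shows how lopsided the two terms can be:

```python
# Hypothetical sizes: huge entries X, small target bound B, rank r.
logX, logB, r = 1000.0, 10.0, 50   # log2(X), log2(B), lattice rank

l2_term = logX ** 2                      # the log^2(X) term in L^2
ours = logB * logX / r + logB ** 2       # its replacement here
# ours = 10*1000/50 + 100 = 300, versus l2_term = 1,000,000
```

The gap widens as X grows with B fixed, which is precisely the regime of the applications on the previous slides.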

slide-97
SLIDE 97

A Recent Result Which Inspired Us

Belabas, Klüners, van Hoeij, and Steel showed that reducing the following basis will factor a polynomial:

\begin{pmatrix}
  &        &   & p^a/B_N &        &         \\
  &        &   &         & \ddots &         \\
  &        &   &         &        & p^a/B_1 \\
1 &        &   & \ast    & \cdots & \ast    \\
  & \ddots &   & \vdots  &        & \vdots  \\
  &        & 1 & \ast    & \cdots & \ast
\end{pmatrix}

Any vector which corresponds to a factor has size ≤ r + 1, so we choose B = r + 1.
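A shape-only sketch of that basis with placeholder numbers: the ∗ entries are problem-specific data from [BHKS] and are left as stand-ins here, and the scaled moduli p^a/B_j are taken as plain integers m_j. Ordering the identity rows first makes the matrix upper triangular, so its determinant is just the product of the moduli:

```python
def bhks_shaped_basis(star, m):
    """Shape-only model of the [BHKS] factoring basis.

    star: r x N matrix of placeholder integers (the talk's * entries);
    m:    the N scaled moduli (the talk's p^a/B_j), as integers here.
    Rows: (e_i | star_i) for each coefficient row, then m_j * e_{r+j};
    row order does not change the lattice generated.
    """
    r, n = len(star), len(m)
    rows = [[0] * (r + n) for _ in range(r + n)]
    for i in range(r):
        rows[i][i] = 1              # identity part
        rows[i][r:] = star[i]       # placeholder data part
    for j in range(n):
        rows[r + j][r + j] = m[j]   # modulus row
    return rows

# Tiny instance with made-up data: r = 2, N = 2, both moduli 25.
M = bhks_shaped_basis([[3, 1], [4, 1]], [25, 25])
```

Upper triangularity makes the lattice determinant (here 25 · 25 = 625) immediate, which is what the size bound B = r + 1 is played against.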

slide-98
SLIDE 98

Comparing with Schönhage

If we apply our algorithm to the [BHKS] result, then we can factor a polynomial of degree N and height H with complexity O(N^2 r^4 (N + log(H))). This is the first improvement since 1984, when Schönhage gave O(N^8 + N^5 log^3(H)).

slide-99
SLIDE 99


Thanks!

Thank You for Having Me!