NTRU Prime: Can we predict future attacks? — Daniel J. Bernstein (PowerPoint presentation)

1

NTRU Prime

Daniel J. Bernstein, University of Illinois at Chicago & Technische Universiteit Eindhoven

cr.yp.to/papers.html#ntruprime is joint work with: Chitchanok Chuengsatiansup, Tanja Lange, Christine van Vredendaal (Technische Universiteit Eindhoven).

Focus of this talk: motivation.

2

Can we predict future attacks?

1996 Dobbertin–Bosselaers–Preneel, “RIPEMD-160: a strengthened version of RIPEMD”: “It is anticipated that these techniques can be used to produce collisions for MD5 and perhaps also for RIPEMD. This will probably require an additional effort, but it no longer seems as far away as it was a year ago.”

1996 Robshaw: Collisions “should be expected”; upgrade “when practical and convenient”.

3

Imagine someone responding: “This is completely out of line. The attack by Dobbertin does not break any normal usage of MD5, so what exactly is the point of preventing it? This speculation about MD5 collisions is controversial and non-scientific, and creates confusion on the state of the art. Recommending alternative hash functions is at the very least quite premature.” Clearly not a real cryptographer. Maybe a standards organization.

4

Now imagine a religious fanatic saying that all of these functions are worse than “provably secure” cryptographic hash functions.

1991 “provably secure” example, Chaum–van Heijst–Pfitzmann: Choose p sensibly. Define C(x, y) = 4^x · 9^y mod p for suitable ranges of x and y. Simple, beautiful, structured. Very easy security reduction: finding a C collision implies computing a discrete logarithm.
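As a toy illustration of the construction above (the prime p = 1019 below is a small illustrative value chosen for this sketch, not from the slides; a real instance would use a prime of many hundreds of digits), the CvHP compression function fits in a few lines of Python:

```python
# Toy sketch of the 1991 Chaum-van Heijst-Pfitzmann "provably secure"
# compression function C(x, y) = 4^x * 9^y mod p.
# p = 1019 is an illustrative safe prime ((p - 1)/2 = 509 is also prime);
# it is far too small for any real security.
p = 1019

def C(x, y):
    """Compress the pair (x, y) to a single value mod p."""
    return (pow(4, x, p) * pow(9, y, p)) % p

# The security reduction in one line: a collision C(x1, y1) == C(x2, y2)
# with (x1, y1) != (x2, y2) gives 4^(x1 - x2) = 9^(y2 - y1) (mod p),
# which reveals the discrete logarithm of 9 to the base 4 -- so any
# collision-finder yields a discrete-log solver.
```

The reduction is what makes the scheme “provably secure”: the proof is easy precisely because the function is so structured, and that same structure is what the discrete-log attacks on the next slide exploit.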

5

CvHP is very bad cryptography. Horrible security for its speed. Far worse security record than standard “unstructured” compression-function designs.

Security losses in C include: 1922 Kraitchik (index calculus); 1986 Coppersmith–Odlyzko–Schroeppel (NFS predecessor); 1993 Gordon (general DL NFS); 1993 Schirokauer (faster NFS); 1994 Shor (quantum poly time).

Imagine someone in 1991 saying “DL security is well understood”.

6

We still use discrete logs for pre-quantum public-key crypto. Which DL groups are best?

1986 Miller proposes ECC. Gives detailed arguments that index calculus “is not likely to work on elliptic curves.”

1997 Rivest: “Over time, this may change, but for now trying to get an evaluation of the security of an elliptic-curve cryptosystem is a bit like trying to get an evaluation of some recently discovered Chaldean poetry.”

7

Are RSA, DSA, etc. less scary? These systems have structure enabling attacks such as NFS. Many optimization avenues. Attacks keep getting better. >100 scientific papers. Still many unexplored avenues. How many people understand the state of the art?

Recurring themes in attacks: factorizations of ring elements; ring automorphisms; subfields; extending applicability (even to some curves!) via group maps.

8

Which ECC fields do we use?

2005 Bernstein: prime fields “have the virtue of minimizing the number of security concerns for elliptic-curve cryptography.”

2005 ECRYPT key-sizes report: “Some general concerns exist about possible future attacks … As a first choice, we recommend curves over prime fields.” No extra automorphisms.

Imagine a response: “That’s premature! E(F_2^n) isn’t broken!”

9

Last example: 2013 Garg–Gentry–Halevi–Raykova–Sahai–Waters, “Candidate indistinguishability obfuscation and functional encryption for all circuits”.

UCLA press release: “According to Sahai, previously developed techniques for obfuscation presented only a ‘speed bump,’ forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an ‘iron wall’ … a game-changer in the field of cryptography.”

slide-37
SLIDE 37

8

ECC fields do we use? Bernstein: prime fields the virtue of minimizing number of security concerns lliptic-curve cryptography.” ECRYPT key-sizes report: general concerns about possible future attacks : : : As a first choice, we recommend curves over prime No extra automorphisms. Imagine a response: “That’s remature! E(F2n) isn’t broken!”

9

Last example: 2013 Garg–Gentry– Halevi–Raykova–Sahai–Waters “Candidate indistinguishability

  • bfuscation and functional

encryption for all circuits”. UCLA press release: “According to Sahai, previously developed techniques for obfuscation presented only a ‘speed bump,’ forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an ‘iron wall’ : : : a game-change in the field of cryptography.” 2013 Bernstein: cryptographic

  • f this so

the best has against Security


SLIDE 40

10

2013 Bernstein: “The flagship cryptographic conferences are full of this sort of shit, and, if this is the best defense that the world has against the U.S. National Security Agency, we’re screwed.”

SLIDE 41

2016 Miles–Sahai–Zhandry: “We exhibit two simple programs that are functionally equivalent, and show how to efficiently distinguish between the obfuscations of these two programs.”

So Sahai’s claimed “iron wall” is just another “speed bump”.




SLIDE 53

11

Classic NTRU

Standardize prime p; e.g. 743. Also standardize q; e.g. 2048. Define R = Z[x]/(x^p − 1). Receiver chooses small f, g ∈ R. (Some invertibility requirements.) Public key h = 3g/f mod q. Sender chooses small m, r ∈ R. Ciphertext c = m + hr mod q. Multiply by f mod q: fc mod q. Use smallness: fm + 3gr. Reduce mod 3: fm mod 3. Divide by f mod 3: m.

12

1998 Hoffstein–Pipher–Silverman introduced this system. Many subsequent NTRU papers: meet-in-the-middle attacks, lattice attacks, hybrid attacks; chosen-ciphertext attacks; decryption-failure attacks; complicated padding systems; variations for efficiency; parameter selection. Also many ideas that in retrospect were small tweaks of NTRU: e.g., homomorphic encryption.
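A toy version of this encrypt/decrypt cycle can be sketched in Python. All parameters here are illustrative assumptions: p = 7 and a prime q = 257 stand in for the slide’s p = 743, q = 2048 (a prime modulus lets f be inverted with a plain extended Euclidean algorithm over a field), and f, g, m, r are fixed small polynomials rather than randomly generated ones.

```python
# Toy sketch of Classic NTRU. Assumptions: tiny p = 7 and a PRIME q = 257
# (the slide uses q = 2048; a prime modulus makes inversion easy).

p, q = 7, 257

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def pmul(a, b, mod):
    # ordinary polynomial multiplication in (Z/mod)[x]
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % mod
    return trim(c)

def psub(a, b, mod):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0)
                  - (b[i] if i < len(b) else 0)) % mod for i in range(n)])

def pdivmod(a, b, mod):
    # division with remainder in (Z/mod)[x], mod prime
    a, b = trim([x % mod for x in a]), trim([x % mod for x in b])
    quo = [0] * max(1, len(a) - len(b) + 1)
    binv = pow(b[-1], -1, mod)
    while len(a) >= len(b) and a != [0]:
        d, coef = len(a) - len(b), a[-1] * binv % mod
        quo[d] = coef
        a = trim([(a[i] - coef * (b[i - d] if 0 <= i - d < len(b) else 0)) % mod
                  for i in range(len(a))])
    return quo, a

def polyinv(f, mod):
    # inverse of f in (Z/mod)[x]/(x^p - 1) via extended Euclid, mod prime
    ring = trim([(-1) % mod] + [0] * (p - 1) + [1])      # x^p - 1
    r0, r1, s0, s1 = ring, trim([x % mod for x in f]), [0], [1]
    while r1 != [0]:
        qq, r = pdivmod(r0, r1, mod)
        r0, r1, s0, s1 = r1, r, s1, psub(s0, pmul(qq, s1, mod), mod)
    assert len(r0) == 1          # the slide's "invertibility requirements"
    c = pow(r0[0], -1, mod)
    inv = [x * c % mod for x in s0]
    return [(inv[i] if i < len(inv) else 0) for i in range(p)]

def cyclic_mul(a, b, mod):
    # multiplication in (Z/mod)[x]/(x^p - 1): cyclic convolution
    c = [0] * p
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % p] = (c[(i + j) % p] + ai * bj) % mod
    return c

def center(a, mod):
    # lift coefficients to the centered range around 0
    return [((x + mod // 2) % mod) - mod // 2 for x in a]

# receiver: small f, g; public key h = 3g/f mod q
f = [1, -1, 0, 1, 0, 0, 1]
g = [0, 1, 1, 0, -1, 0, -1]
h = cyclic_mul([3 * x % q for x in g], polyinv(f, q), q)

# sender: small m, r; ciphertext c = m + hr mod q
m = [1, 0, -1, 0, 1, 0, 0]
r = [-1, 0, 1, 1, 0, 0, -1]
c = [(mi + x) % q for mi, x in zip(m, cyclic_mul(h, r, q))]

# receiver: f*c mod q = f*m + 3*g*r (smallness), reduce mod 3, divide by f
a = center(cyclic_mul(f, c, q), q)
m_rec = center(cyclic_mul([x % 3 for x in a], polyinv(f, 3), 3), 3)
assert m_rec == m
```

The final assert checks the decryption chain exactly as on the slide: multiply by f mod q, use smallness, reduce mod 3, divide by f mod 3.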


SLIDE 58

13

Unnecessary structures in NTRU

Attacker can evaluate public polynomials h, c at 1. Compatible with addition and multiplication mod x^p − 1: f(1)h(1) = 3g(1) in Z/q; c(1) = m(1) + h(1)r(1) in Z/q. One way to exploit this: c(1), h(1) are visible; r(1) is guessable, sometimes standard. Attacker scans many ciphertexts to find some with large m(1). Uses this to speed up m search.


SLIDE 65

14

NTRU complicates m selection so that m(1) is never large. Limits impact of the attack.

Better: replace NTRU’s Z[x]/(x^p − 1) with Z[x]/Φ_p. Recall Φ_p = (x^p − 1)/(x − 1). Can view poly m mod x^p − 1 as two parts: m(1), m mod Φ_p. Compatible with add, mult. Why include m(1) here? Doesn’t seem to help security. Or use other irreds. Ring-LWE typically uses Φ_2048 = x^1024 + 1.
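The two-part view of m mod x^p − 1 can be checked numerically. A minimal sketch with toy p = 5 and made-up polynomials a, b: both “evaluate at 1” and “reduce mod Φ_p” respect multiplication, as the slide states.

```python
# Over Z[x]/(x^p - 1), the pair (m(1), m mod Phi_p) tracks
# addition and multiplication, since both components are ring maps.

p = 5

def cyclic_mul(a, b):
    # multiplication mod x^p - 1, integer coefficients
    c = [0] * p
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % p] += ai * bj
    return c

def mod_phi(a):
    # reduce a (degree < p) mod Phi_p = 1 + x + ... + x^(p-1),
    # using x^(p-1) = -(1 + x + ... + x^(p-2))
    top = a[p - 1]
    return [a[i] - top for i in range(p - 1)]

def mul_phi(u, v):
    # multiplication in Z[x]/Phi_p (inputs of degree < p - 1)
    c = [0] * (2 * (p - 1) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            c[i + j] += ui * vj
    while len(c) > p - 1:               # fold high terms back down
        t = c.pop()
        d = len(c)
        for k in range(d - p + 1, d):
            c[k] -= t
    return c

a = [1, -1, 0, 2, 1]
b = [0, 1, 1, -1, 0]
ab = cyclic_mul(a, b)

assert sum(ab) == sum(a) * sum(b)                      # the m(1) part
assert mod_phi(ab) == mul_phi(mod_phi(a), mod_phi(b))  # the m mod Phi_p part
```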


SLIDE 70

15

More generally: Attacker applies any ring map (Z/q)[x]/P → T to the equations h = 3g/f and c = m + hr in (Z/q)[x]/P.

e.g. typically q = 2048 in NTRU. Have natural ring maps from (Z/2048)[x]/(x^p − 1) to (Z/2)[x]/(x^p − 1), (Z/4)[x]/(x^p − 1), (Z/8)[x]/(x^p − 1), etc. Can attacker exploit these? Maybe. Complicated. See 2004 Smart–Vercauteren–Silverman.
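The natural maps to smaller rings can be demonstrated on the ciphertext equation. A sketch with toy p = 7 and a made-up stand-in public key: reducing all coefficients mod 2, 4, or 8 preserves c = m + hr.

```python
# With q = 2048 = 2^11, reducing every coefficient mod 2^k is a ring map
# (Z/2048)[x]/(x^p - 1) -> (Z/2^k)[x]/(x^p - 1), so the ciphertext
# equation survives the reduction.

p, q = 7, 2048

def cyclic_mul(a, b, mod):
    # multiplication in (Z/mod)[x]/(x^p - 1)
    c = [0] * p
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % p] = (c[(i + j) % p] + ai * bj) % mod
    return c

h = [417, 1030, 5, 2047, 812, 33, 1999]   # stand-in public key
m = [1, 0, -1, 0, 0, 1, 0]
r = [0, 1, -1, 0, 1, 0, 1]
c = [(mi + x) % q for mi, x in zip(m, cyclic_mul(h, r, q))]

for k in (2, 4, 8):   # the smaller rings listed on the slide
    reduced_c = [x % k for x in c]
    recomputed = [(mi + x) % k for mi, x in
                  zip(m, cyclic_mul([x % k for x in h], [x % k for x in r], k))]
    assert reduced_c == recomputed
```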


SLIDE 75

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in (Z/q)[x]; i.e., have n different ring maps (Z/q)[x]/P → Z/q.

Do these maps damage security? Fast attacks in some cases: 2014 Eisenträger–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko–Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases.
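The “splits completely” condition is easy to see on a toy example. Assuming stand-in parameters P = x^4 + 1 and prime q = 17 (so q ≡ 1 mod 8 and P has 4 roots mod q), evaluation at each root gives one of the n ring maps (Z/q)[x]/P → Z/q:

```python
# P = x^4 + 1 splits completely mod q = 17: it has n = 4 roots,
# and evaluating at each root is a ring map (Z/q)[x]/P -> Z/q.

q, n = 17, 4
roots = [w for w in range(q) if (pow(w, n, q) + 1) % q == 0]

def mul_modP(a, b):
    # multiplication in (Z/q)[x]/(x^n + 1): negacyclic convolution
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if i + j >= n else 1
            c[(i + j) % n] = (c[(i + j) % n] + sign * ai * bj) % q
    return c

def ev(a, w):
    # evaluate a at w mod q
    return sum(ai * pow(w, i, q) for i, ai in enumerate(a)) % q

a, b = [3, 0, 5, 1], [2, 7, 0, 4]   # made-up ring elements
assert len(roots) == n              # "splits completely"
for w in roots:
    # evaluation at a root of P respects multiplication mod P
    assert ev(mul_modP(a, b), w) == ev(a, w) * ev(b, w) % q
```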

slide-76
SLIDE 76

15

generally: Attacker applies ring map (Z=q)[x]=P → T equations h = 3g=f = m + hr in (Z=q)[x]=P. ypically q = 2048 in NTRU. natural ring maps from 2048)[x]=(xp − 1) to 2)[x]=(xp − 1), 4)[x]=(xp − 1), 8)[x]=(xp − 1), etc. attacker exploit these?

  • e. Complicated. See 2004

rt–Vercauteren–Silverman.

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases. Ring-LWE (2012 Langlois–Stehl prove that

  • f the mo

to the computational

  • f LWE and
slide-77
SLIDE 77

15

Attacker applies =q)[x]=P → T h = 3g=f in (Z=q)[x]=P. 2048 in NTRU. ing maps from − 1) to 1), 1), 1), etc. exploit these?

  • Complicated. See 2004

ercauteren–Silverman.

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases. Ring-LWE religion, (2012 Langlois–Stehl prove that the arithmetic

  • f the modulus q is

to the computational

  • f LWE and RLWE.”
slide-78
SLIDE 78

15

applies → T x]=P. NTRU. from these? 2004 ercauteren–Silverman.

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases. Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”
slide-79
SLIDE 79

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”
slide-80
SLIDE 80

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds.

slide-81
SLIDE 81

16

Ring-LWE religion, version 1: For “provable security”, take prime q so that P splits completely in Z[x]=q; i.e., have n different ring maps (Z=q)[x]=P → Z=q. Do these maps damage security? Fast attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, 2016 Chen–Lauter–Stange. Fast non-q-dependent attack by 2016 Castryck–Iliashenko– Vercauteren breaks 2015 ELOS cases but not 2016 CLS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it.

slide-82
SLIDE 82

16

WE religion, version 1: For rovable security”, take prime that P splits completely in ; i.e., have n different ring (Z=q)[x]=P → Z=q. these maps damage security? attacks in some cases: 2014 Eisentr¨ ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, Chen–Lauter–Stange. non-q-dependent attack 2016 Castryck–Iliashenko– ercauteren breaks 2015 ELOS but not 2016 CLS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it. We recommend: that remains i.e., choose Field (Z=q to any smaller

slide-83
SLIDE 83

16

religion, version 1: For ity”, take prime splits completely in e n different ring =P → Z=q. damage security? some cases: 2014 ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, Chen–Lauter–Stange. endent attack Castryck–Iliashenko– reaks 2015 ELOS 2016 CLS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it. We recommend: T that remains irred i.e., choose inert mo Field (Z=q)[x]=P. to any smaller nonzero

slide-84
SLIDE 84

16

1: For rime completely in different ring security? cases: 2014 ager–Hallgren–Lauter, 2015 Elias–Lauter–Ozman–Stange, Chen–Lauter–Stange. attack Castryck–Iliashenko– ELOS cases.

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it. We recommend: Take irred P that remains irred in (Z=q)[x i.e., choose inert modulus q Field (Z=q)[x]=P. No ring map to any smaller nonzero ring.

slide-85
SLIDE 85

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it.

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring.

slide-86
SLIDE 86

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it.

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2.

slide-87
SLIDE 87

17

Ring-LWE religion, version 2 (2012 Langlois–Stehl´ e): “We prove that the arithmetic form

  • f the modulus q is irrelevant

to the computational hardness

  • f LWE and RLWE.”

Basic idea: “modulus switching” from Z=q to Z=q′. Attacker multiplies by q′=q and rounds. But rounding adds noise, making attacks harder! The proof limits security gap but does not eliminate it.

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2. But we also recommend heresy: take P with prime degree p and with large Galois group, specifically Sp, size p!. Good example: P = xp − x − 1.

slide-88
SLIDE 88

17

WE religion, version 2 Langlois–Stehl´ e): “We that the arithmetic form modulus q is irrelevant computational hardness WE and RLWE.” idea: “modulus switching” =q to Z=q′. Attacker multiplies by q′=q and rounds. rounding adds noise, making attacks harder! roof limits security gap es not eliminate it.

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2. But we also recommend heresy: take P with prime degree p and with large Galois group, specifically Sp, size p!. Good example: P = xp − x − 1. 2014.02, To eliminate structures,

  • f prime

subfield is polynomial very large the numb having automo

slide-89
SLIDE 89

17

religion, version 2 Langlois–Stehl´ e): “We rithmetic form q is irrelevant computational hardness E.” dulus switching” =q′. Attacker =q and rounds. adds noise, harder! security gap eliminate it.

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2. But we also recommend heresy: take P with prime degree p and with large Galois group, specifically Sp, size p!. Good example: P = xp − x − 1. 2014.02, our 2nd announcement: To eliminate “worrisome” structures, use “a

  • f prime degree, so

subfield is Q” and polynomial xp − x very large Galois group, the number field is having automorphisms”.

slide-90
SLIDE 90

17

2 “We form irrelevant rdness switching” er rounds. gap

18

We recommend: Take irred P that remains irred in (Z=q)[x]; i.e., choose inert modulus q. Field (Z=q)[x]=P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2. But we also recommend heresy: take P with prime degree p and with large Galois group, specifically Sp, size p!. Good example: P = xp − x − 1. 2014.02, our 2nd announcement: To eliminate “worrisome” structures, use “a number field

  • f prime degree, so that the

subfield is Q” and “an irreducible polynomial xp − x − 1 with very large Galois group, so that the number field is very far from having automorphisms”.

slide-91
SLIDE 91

18

We recommend: Take an irreducible P that remains irreducible in (Z/q)[x]; i.e., choose an inert modulus q. Field (Z/q)[x]/P. No ring map to any smaller nonzero ring. So far this is compatible with Ring-LWE religion, version 2. But we also recommend heresy: take P with prime degree p and with a large Galois group, specifically S_p, of size p!. Good example: P = x^p − x − 1.
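The recommendation can be checked on toy parameters (p = 7 here for speed; actual NTRU Prime degrees are much larger) by searching for a prime q at which x^p − x − 1 stays irreducible:

```python
# Find the smallest inert prime q for P = x^7 - x - 1 (toy degree):
# P irreducible mod q means (Z/q)[x]/P is a field with no ring map
# to any smaller nonzero ring.
from sympy import symbols, factor_list, nextprime

x = symbols("x")
p = 7
P = x**p - x - 1

q = 2
while True:
    _, factors = factor_list(P, x, modulus=q)
    if len(factors) == 1 and factors[0][1] == 1:
        break                       # irreducible mod q: q is inert
    q = nextprime(q)

print("smallest inert prime for x^7 - x - 1:", q)
assert factors[0][0].as_poly(x).degree() == p
```

Since the Galois group is S_7, Chebotarev says a fraction 1/7 of primes are inert, so the loop stops quickly.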

19


2014.02, our 2nd announcement: To eliminate “worrisome” structures, use “a number field of prime degree, so that the only subfield is Q” and “an irreducible polynomial x^p − x − 1 with a very large Galois group, so that the number field is very far from having automorphisms”. Subsequent attacks against several lattice-based systems have exploited these structures and have not been extended to our recommended rings.

20

2014.10 Campbell–Groves–Shepherd describe an ideal-lattice-based system “Soliloquy”; claim quantum poly-time key recovery. The 2010 Smart–Vercauteren system is practically identical to Soliloquy. The 2009 Gentry system (simpler version described at STOC) has the same key-recovery problem. The 2012 Garg–Gentry–Halevi multilinear maps have the same key-recovery problem (and many other security issues).

21

SV/Soliloquy parameter: k ≥ 1. Define R = Z[x]/Φ_{2^k}. Public key: prime q and c ∈ Z/q. Secret key: short element g ∈ R with gR = qR + (x − c)R; i.e., a short generator of the ideal qR + (x − c)R.

But wait, isn’t it known how to compute a generator of an ideal? See, e.g., 1993 Cohen textbook “A course in computational algebraic number theory”.
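A toy key of this shape (my own miniature parameters: k = 2, so R = Z[x]/(x^4 + 1); real systems use far larger k) can be generated by choosing a short g, taking q = |Res(x^4 + 1, g)| and retrying until it is prime, then solving for c:

```python
# Miniature SV/Soliloquy-style key generation (assumed toy sizes).
from random import randrange, seed
from sympy import symbols, resultant, isprime

x = symbols("x")
seed(2)

while True:
    g = sum(randrange(-2, 3) * x**i for i in range(4))  # short g
    if not g.has(x):
        continue
    q = abs(resultant(x**4 + 1, g, x))    # |norm of g|
    if isprime(q):
        break

# Prime norm forces a common linear factor: a root c shared by g
# and x^4 + 1 modulo q, so gR = qR + (x - c)R.
c = next(t for t in range(q)
         if g.subs(x, t) % q == 0 and (t**4 + 1) % q == 0)
print("g =", g, " q =", q, " c =", c)
assert g.subs(x, c) % q == 0 and (c**4 + 1) % q == 0
```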

22

Smart–Vercauteren dismiss this as taking exponential time. It actually takes subexponential time. Same basic idea as NFS. Campbell–Groves–Shepherd claim quantum poly time. Claim disputed by Biasse, not defended by CGS. 2016 Biasse–Song, building on 2014 Eisenträger–Hallgren–Kitaev–Song: a different algorithm that takes quantum poly time.

23

Smart–Vercauteren also dismiss this generator as not being short. Have an ideal I of R. Want short g with gR = I. Have g′ with g′R = I. Know g′ = ug for some u ∈ R∗. But how do we find u? Log g′ = Log u + Log g, where Log is Dirichlet’s log map. Dirichlet’s unit theorem: Log R∗ is a lattice of known dimension. Finding Log u is a closest-vector problem in this lattice.
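The closest-vector step can be pictured with a generic toy (a 2-D lattice of my choosing and Babai's round-off heuristic, standing in for whatever CVP routine an attacker would actually run on the unit lattice):

```python
# Babai round-off for approximate CVP (toy 2-D lattice, assumed data):
# write the target in basis coordinates, round, and map back.
import numpy as np

B = np.array([[5.0, 1.0],
              [1.0, 4.0]])              # rows are basis vectors
target = np.array([12.3, 8.9])

coords = np.linalg.solve(B.T, target)   # target = B.T @ coords
closest = B.T @ np.round(coords)        # round in basis coordinates
print("approximate closest lattice vector:", closest)  # [12. 10.]

# Rounding each coordinate by at most 1/2 bounds the error by
# half the sum of the basis-vector lengths -- good only if the
# basis is short, which is exactly the point on the next slide.
assert np.linalg.norm(closest - target) <= np.linalg.norm(B, axis=1).sum() / 2
```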

24

Campbell–Groves–Shepherd: “A simple generating set for the cyclotomic units is of course known. The image of O× [i.e., R∗] under the logarithm map forms a lattice. The determinant of this lattice turns out to be much bigger than the typical log-length of a private key α [i.e., g], so it is easy to recover the causally short private key given any generator of αO [i.e., I], e.g. via the LLL lattice reduction algorithm.”

25

x → x^3, x → x^5, x → x^7, etc. are automorphisms of R = Z[x]/Φ_{2^k}. Easy to see (1−x^3)/(1−x) ∈ R∗. “Cyclotomic units” are defined as R∗ ∩ {±x^{e_0} ∏_i (1 − x^i)^{e_i}}. Weber’s conjecture: all elements of R∗ are cyclotomic units.

Experiments confirm that SV is quickly broken by LLL using, e.g., 1997 Washington textbook basis for cyclotomic units. Shortness of basis is critical; missing from bogus CGS analysis.
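The example unit can be checked by hand for k = 3, i.e. R = Z[x]/(x^4 + 1): (1 − x^3)/(1 − x) = 1 + x + x^2, and solving a small linear system gives the inverse x^3 − x^2 + 1, also with integer coefficients:

```python
# Verify that 1 + x + x^2 is a unit in R = Z[x]/(x^4 + 1): its
# inverse x^3 - x^2 + 1 also lies in R (integer coefficients).
def mul_mod_phi8(f, g):
    """Multiply length-4 coefficient lists modulo x^4 + 1."""
    out = [0] * 4
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < 4:
                out[i + j] += fi * gj
            else:
                out[i + j - 4] -= fi * gj   # reduce via x^4 = -1
    return out

u = [1, 1, 1, 0]        # 1 + x + x^2 = (1 - x^3)/(1 - x)
u_inv = [1, 0, -1, 1]   # 1 - x^2 + x^3
print(mul_mod_phi8(u, u_inv))   # [1, 0, 0, 0], i.e. the product is 1
assert mul_mod_phi8(u, u_inv) == [1, 0, 0, 0]
```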

Experiments confirm that SV is quickly broken by LLL using, e.g., 1997 Washington textbook basis for cyclotomic units. Shortness of basis is critical; missing from bogus CGS analysis. Attackers can also automorphisms in 2016 Albrecht–Bai–Duc “A subfield lattice

  • verstretched NTR

Cryptanalysis of some Graded Encoding Schemes” norms gff(g), and 2016 Cheon–Jeong–Lee main technique of is the reduction of a field to one in a traces g + ff(g), where an order-2 automo

slide-129
SLIDE 129

24

ell–Groves–Shepherd: for the course [i.e., map determinant be ypical log- [i.e., the given I], reduction

25

x → x3, x → x5, x → x7, etc. are automorphisms of R = Z[x]=Φ2k . Easy to see (1−x3)=(1−x) ∈ R∗. “Cyclotomic units” are defined as R∗ ∩ ˘ ±xe0 Q

i(1 − xi)ei ¯

. Weber’s conjecture: all elements

  • f R∗ are cyclotomic units.

Experiments confirm that SV is quickly broken by LLL using, e.g., 1997 Washington textbook basis for cyclotomic units. Shortness of basis is critical; missing from bogus CGS analysis. Attackers can also use automorphisms in more ways. 2016 Albrecht–Bai–Ducas “A subfield lattice attack on

  • verstretched NTRU assumptions:

Cryptanalysis of some FHE and Graded Encoding Schemes” norms gff(g), and independently 2016 Cheon–Jeong–Lee (“The main technique of our algorithm is the reduction of a problem a field to one in a subfield”) traces g + ff(g), where ff is an order-2 automorphism.

slide-130
SLIDE 130

25

x → x3, x → x5, x → x7, etc. are automorphisms of R = Z[x]=Φ2k . Easy to see (1−x3)=(1−x) ∈ R∗. “Cyclotomic units” are defined as R∗ ∩ ˘ ±xe0 Q

i(1 − xi)ei ¯

. Weber’s conjecture: all elements

  • f R∗ are cyclotomic units.

Experiments confirm that SV is quickly broken by LLL using, e.g., 1997 Washington textbook basis for cyclotomic units. Shortness of basis is critical; missing from bogus CGS analysis.

26

Attackers can also use automorphisms in more ways. 2016 Albrecht–Bai–Ducas “A subfield lattice attack on overstretched NTRU assumptions: Cryptanalysis of some FHE and Graded Encoding Schemes” use norms g·σ(g), and independently 2016 Cheon–Jeong–Lee (“The main technique of our algorithm is the reduction of a problem on a field to one in a subfield”) use traces g + σ(g), where σ is an order-2 automorphism.
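A minimal sketch of the subring structure these attacks exploit (assuming the power-of-2 cyclotomic R = Z[x]/(x^8 + 1) and the automorphism σ: x ↦ −x; both are illustrative choices, not the papers' parameters): traces g + σ(g) and norms g·σ(g) are fixed by σ, so they land in the half-dimension subring generated by x^2.

```python
# In R = Z[x]/(x^8 + 1), sigma: x -> -x is an order-2 automorphism.
# Anything fixed by sigma has only even-degree terms, i.e. lives in
# Z[x^2]/(x^8 + 1) -- half the dimension, which subfield attacks exploit.
# (Illustrative sketch; g is an arbitrary example element.)

N = 8

def mul_mod(a, b):
    """Multiply coefficient lists in Z[x]/(x^N + 1) (negacyclic)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
            else:
                c[i + j - N] -= ai * bj  # x^N = -1
    return c

def sigma(g):
    """Apply x -> -x: negate the odd-degree coefficients."""
    return [(-c if i % 2 else c) for i, c in enumerate(g)]

g = [3, -1, 4, 1, -5, 9, -2, 6]
trace = [a + b for a, b in zip(g, sigma(g))]  # g + sigma(g)
norm = mul_mod(g, sigma(g))                   # g * sigma(g)

# Odd-degree coefficients vanish: both lie in the subring Z[x^2]/(x^8 + 1).
assert all(trace[i] == 0 for i in range(1, N, 2))
assert all(norm[i] == 0 for i in range(1, N, 2))
```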


27

We recommend changing the choice of rings in ideal-lattice-based cryptography. Requiring prime degree p minimizes the number of subfields. Requiring Galois group S_p maximizes the difficulty of automorphism computations: e.g., the smallest field containing all roots of P has degree p!. All available evidence is that this rescues some systems and never hurts security.
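The subfield count follows from the fact that subfield degrees divide the field degree. A tiny sketch (p = 761 is used here as an NTRU-Prime-style prime; the specific numbers are an assumption of this illustration, with 512 standing in for the degree of a power-of-2 cyclotomic field):

```python
# Subfields of a degree-n number field have degrees dividing n.
# A power-of-2 cyclotomic field of degree 512 = 2^9 admits a whole tower
# of candidate subfield degrees; a prime degree such as p = 761 leaves
# only the trivial divisors 1 and p. (Illustrative parameter choices.)

def divisors(n):
    return sorted(d for d in range(1, n + 1) if n % d == 0)

print(divisors(512))  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
print(divisors(761))  # [1, 761] -- 761 is prime: no intermediate subfields
```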

28

The importance of efficiency

“If you’re so worried about structure, why are you tolerating visible polynomial structure? Use LWE, or classic McEliece!” Maybe better security, yes, but huge costs in network traffic. Is this affordable? If it is, would we gain more security from larger polynomials? Larger impact on known attacks, maybe also on unknown attacks. Not clear what to recommend.


29

Conventional wisdom: Rings (Z/q)[x]/Φ_{2^k} with q mod 2^{k+1} = 1 allow extremely fast FFT-based mults. NTRU Prime rings will be several times slower. Is this affordable? etc.

But we have shown that an optimized combination of Karatsuba and Toom is also extremely fast at crypto sizes. Hard to find any applications that will notice the differences. And we improve network traffic.
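The building block behind such multiplication is easy to sketch. Below is textbook recursive Karatsuba for polynomial coefficient lists (an illustration, not the authors' optimized Karatsuba/Toom combination): three half-size products replace four, cutting the cost from n^2 toward n^1.585.

```python
# Textbook Karatsuba for polynomials given as coefficient lists.
# One level replaces 4 half-size products with 3; recursion gives
# O(n^log2(3)) coefficient multiplications. (Illustrative sketch only.)
import random

def poly_mul(a, b):
    """Schoolbook product of two coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def karatsuba(a, b):
    """Product of equal-length coefficient lists; len is a power of 2."""
    n = len(a)
    if n <= 16:
        return poly_mul(a, b)
    h = n // 2
    a0, a1 = a[:h], a[h:]
    b0, b1 = b[:h], b[h:]
    z0 = karatsuba(a0, b0)  # low product
    z2 = karatsuba(a1, b1)  # high product
    zm = karatsuba([x + y for x, y in zip(a0, a1)],
                   [x + y for x, y in zip(b0, b1)])  # middle product
    # a*b = z0 + (zm - z0 - z2)*x^h + z2*x^(2h)
    c = [0] * (2 * n - 1)
    for i, v in enumerate(z0):
        c[i] += v
        c[i + h] -= v
    for i, v in enumerate(z2):
        c[i + n] += v
        c[i + h] -= v
    for i, v in enumerate(zm):
        c[i + h] += v
    return c

a = [random.randrange(-3, 4) for _ in range(64)]
b = [random.randrange(-3, 4) for _ in range(64)]
assert karatsuba(a, b) == poly_mul(a, b)  # agrees with schoolbook
```

A full NTRU Prime multiplication would additionally reduce the product modulo x^p − x − 1 and modulo q; that step is omitted here.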


30

What you find in the paper:

  • Streamlined NTRU Prime: an optimized cryptosystem.
  • The design space of lattice-based encryption.
  • Security of Streamlined NTRU Prime: meet-in-the-middle attacks, lattice attacks, etc.
  • Parameters.
  • Public-key encryption vs. unauthenticated key exchange.
  • And more!