All Along the Ring Tower
Algebraic Structures for Fun and Profit
Thomas Prest joint work w/ {Léo Ducas} ∪ {Thomas Pornin} ∪ {Léo Ducas, Steven Galbraith, Yang Yu} RISC × PROMETHEUS Seminar, 03/05/2019
I Introduction
II Three Case Studies
i Generalized Bézout Equations
ii Generalized Four Square Theorem
iii Efficient Lattice Decoding
III Conclusion
It is typical in lattice-based cryptography to use matrices with coefficients in Zq[x]/(x^d + 1) rather than Zq:
1 Communication costs typically go O(d^2) ⇒ O(d)
2 Computation costs typically go O(d^2) ⇒ O(d log d)
But in some situations this additional structure seems ineffective:
1 Matrix decomposition (Cholesky, Gram-Schmidt, etc.)
2 Solving equations in a ring which is not a field (e.g. Z[x]/(x^d + 1))
Algorithms can take time up to Θ(d^2) or Θ(d^3).
What naïve solutions do:
1 View Q[x]/(x^d + 1) as either a Q-linear space of dimension d, an extension field of Q of degree d, etc.
2 This ignores the rich structure of cyclotomic rings and fields.
What happens when we open the black box?
For d a power of two, we note:
➳ Qd = Q[x]/(x^d + 1) the d-th cyclotomic field
➳ Zd = Z[x]/(x^d + 1) the d-th cyclotomic ring
We have this tower of fields:
Q ⊊ Q2 ⊊ · · · ⊊ Qd/2 ⊊ Qd
As well as this chain of isomorphisms:
Q^d ≅ (Q2)^{d/2} ≅ · · · ≅ (Qd/2)^2 ≅ Qd
At a high level:
➳ The field norm and field trace allow us to move in the tower of fields
➳ Ring isomorphisms allow us to move in the chain of isomorphisms
Definition: For a (finite) field extension L/K:
➳ The field trace is: Tr_{L/K} : L → K, f ↦ ∑_{σ∈Gal(L/K)} σ(f)
➳ The field norm is: N_{L/K} : L → K, f ↦ ∏_{σ∈Gal(L/K)} σ(f)
Concretely: if f(x) = fe(x^2) + x · fo(x^2) ∈ Qd, then f×(x) = f(−x) and:
➳ Tr_{Qd/Qd/2}(f) = f + f× = 2 · fe(x^2)
➳ N_{Qd/Qd/2}(f) = f · f× = fe^2(x^2) − x^2 · fo^2(x^2)
Composition properties:
➳ Tr_{L/K} ∘ Tr_{M/L} = Tr_{M/K}
➳ N_{L/K} ∘ N_{M/L} = N_{M/K}
Homomorphic properties:
➳ Tr_{L/K}(a + b) = Tr_{L/K}(a) + Tr_{L/K}(b)
➳ N_{L/K}(a · b) = N_{L/K}(a) · N_{L/K}(b)
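These formulas are easy to sanity-check numerically. Below is a small Python sketch (throwaway helpers of mine, not from the talk) that verifies, in Z[x]/(x^8 + 1), that Tr(f) = f + f× and N(f) = f · f× both land in the even-power subring, i.e. in Zd/2:

```python
d = 8  # work in Z[x]/(x^d + 1), d a power of two

def mul(a, b):
    # schoolbook product in Z[x]/(x^d + 1): x^d wraps around to -1
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < d:
                res[i + j] += ai * bj
            else:
                res[i + j - d] -= ai * bj
    return res

def conj(a):
    # the Galois conjugate f^x(x) = f(-x): negate odd coefficients
    return [(-c if i % 2 else c) for i, c in enumerate(a)]

f = [7, -1, -3, -4, 0, -2, -1, 1]          # x^7 - x^6 - 2x^5 - 4x^3 - 3x^2 - x + 7
tr = [a + b for a, b in zip(f, conj(f))]   # Tr(f) = f + f^x
nm = mul(f, conj(f))                       # N(f)  = f * f^x

# both live in the subring of even powers: all odd coefficients vanish
assert all(tr[i] == 0 for i in range(1, d, 2))
assert all(nm[i] == 0 for i in range(1, d, 2))
assert tr[0::2] == [2 * c for c in f[0::2]]  # Tr(f) = 2 * f_e(x^2)
```

The even-indexed halves tr[0::2] and nm[0::2] are exactly the images in Zd/2, which is what the recursive algorithms on the next slides exploit.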
NTRU lattices:
➳ Prevalent in lattice-based crypto
➳ Public key is A = [1 h], for h = g × f^{−1} mod (φ, q).
➳ Private key is B such that B × A^t = 0 mod (φ, q)
Some schemes only require a partial trapdoor B = [g −f]:
➳ Fiat-Shamir [ZCHW17], encryption [SHRS17], FHE [LTV12, BLLN13]
However, some schemes require a full trapdoor B = [g −f; G −F]:
➳ Hash-then-sign [PFH+17], IBE [DLP14], HIBE [CG17]
➳ More generally, anything based on trapdoor sampling [GPV08]
Problem: Given f, g ∈ Z[x]/(x^d + 1), find F, G ∈ Z[x]/(x^d + 1) such that:
f · G − g · F = q
If we can solve the problem projected over Zd/2, i.e.:
N_{Zd/Zd/2}(f) · G′ − N_{Zd/Zd/2}(g) · F′ = 1
for some F′, G′, then we have this relationship over Zd:
f · (f× G′) − g · (g× F′) = 1
This leads to a simple algorithm:
1 Project
2 Solve
3 Lift
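The project/solve/lift recursion fits in a few lines of Python. This is only a sketch (schoolbook arithmetic, no Babai-style size reduction of F and G, and q is pushed down to the base case instead of solving for 1 and rescaling); the helper names are mine:

```python
def xgcd(a, b):
    # extended Euclid: returns (w, u, v) with u*a + v*b == w = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    w, u, v = xgcd(b, a % b)
    return (w, v, u - (a // b) * v)

def mul(a, b):
    # schoolbook product in Z[x]/(x^d + 1), d = len(a)
    d = len(a)
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < d:
                res[i + j] += ai * bj
            else:
                res[i + j - d] -= ai * bj
    return res

def galois(a):
    # a(-x)
    return [(-c if i % 2 else c) for i, c in enumerate(a)]

def ntru_solve(f, g, q):
    # find F, G in Z[x]/(x^d + 1) with f*G - g*F = q (coefficients as lists)
    d = len(f)
    if d == 1:  # base case: a Bezout equation over Z
        w, u, v = xgcd(f[0], g[0])
        if q % w != 0:
            raise ValueError("gcd(f, g) does not divide q")
        return [-v * q // w], [u * q // w]   # (F, G)
    # project: N(f)(x^2) = f(x) * f(-x) has only even coefficients
    fp = mul(f, galois(f))[0::2]
    gp = mul(g, galois(g))[0::2]
    Fp, Gp = ntru_solve(fp, gp, q)           # solve in the half-size ring
    # lift: F(x) = F'(x^2) * g(-x),  G(x) = G'(x^2) * f(-x)
    up, vp = [0] * d, [0] * d
    up[0::2], vp[0::2] = Fp, Gp
    return mul(up, galois(g)), mul(vp, galois(f))

# toy example in Z[x]/(x^2 + 1): f = 2 + x, g = 1 + x, q = 12
F, G = ntru_solve([2, 1], [1, 1], 12)
fg = [a - b for a, b in zip(mul([2, 1], G), mul([1, 1], F))]
assert fg == [12, 0]  # f*G - g*F = q
```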
Zd   ∋ f, g                               → F, G
 ⊊ ↓                                         ↑
Zd/2 ∋ N_{Zd/Zd/2}(f), N_{Zd/Zd/2}(g)     → F[1], G[1]
 ⊊ ↓                                         ↑
Zd/4 ∋ N_{Zd/Zd/4}(f), N_{Zd/Zd/4}(g)     → F[2], G[2]
 ⊊ ↓                                         ↑
 ...                                        ...
 ⊊ ↓                                         ↑
Z    ∋ N_{Zd/Z}(f), N_{Zd/Z}(g)           → F[ℓ], G[ℓ]
At each lower level:
➳ The coefficients grow (in bitsize) by a factor 2...
➳ ... but the number of coefficients is divided by 2.
Space-saving trick: lazily recompute Ni(f), Ni(g) at each step
➳ Allows a linear time-memory trade-off by a factor ℓ = log d
sage: f8, g8
x^7 - x^6 - 2*x^5 - 4*x^3 - 3*x^2 - x + 7
sage: f4, g4
sage: f2, g2
sage: f1, g1
14412817, 42616001
sage: F1, G1
5126443, 15157932
sage: F2, G2
2495*x - 399, 3844*x - 2025
sage: F4, G4
sage: F8, G8
2*x^7 - x^6 - x^5 - x^4 - 3*x^3 + x^2 + x - 4
Method | Time complexity^1 | Space complexity^1
Resultant [HHGP+03] | Õ(d(d^2 + B)) | O(d^2 B)
HNF [SS11] | Õ(d^3 B) | O(d^2 B)
This work (Fast) | O((dB)^{log2 3} log d) [Kara], Õ(dB) [SchöStr] | O(d(B + log d) log d)
This work (Compact) | O((dB)^{log2 3} log^2 d) [Kara], Õ(dB) [SchöStr] | O(d(B + log d))
We gain in practice:
➳ a factor 100 in memory (3 MB → 30 kB)
➳ a factor 100 in time (2 sec. → 20 msec.)
^1 B = log2 ||(f, g)||
Problem: Given A ∈ R^{n×n}, compute B1, . . . , Bk ∈ R^{n×n} such that:
A A⋆ + ∑_i Bi Bi⋆ = C · In
Algorithmic solutions:
➳ R = ℝ, k = 1: Cholesky [Pei10]
➳ R = ℝ[x]/(φ), k = 1: Babylonian method [DN12]
➳ R = ℤ, k = O(1): ia.cr/2019/320
➳ R = ℤ[x]/(x^d + 1), k = O(log d): This talk + ia.cr/2019/320
Simplified problem: Given a ∈ Z[x]/(x^d + 1), compute polynomials b1, . . . , b_{log2(d)} ∈ Z[x]/(x^d + 1) such that for some constant C:
a a⋆ + ∑_i bi bi⋆ = C,
where ⋆ denotes the Hermitian adjoint (in our case, a⋆(x) = a(x^{−1})).
Attempt 1: Galois conjugation and the Hermitian adjoint compose nicely:
Tr_{Qd/Qd/2}(a a⋆) = a a⋆ + (a a⋆)× = a a⋆ + a× (a×)⋆ ∈ Zd/2
We have projected the problem over Zd/2. Unfortunately, repeating this trick doesn't scale well.
Attempt 2: Let a a⋆ = g; g is self-adjoint, so we can write g = glow + glow⋆. Let b(x) = 1 − x · go,low(x^2); then:
g + b b⋆ = ge(x^2) + x · go,low(x^2) + (x · go,low(x^2))⋆ + (1 − x · go,low(x^2)) · (1 − x · go,low(x^2))⋆
         = (1 + ge + go,low · go,low⋆)(x^2)
We have projected the problem over Zd/2. This trick scales well with repetition. It incurs a growth of the coefficients' sizes...
... but composes nicely with gadget decomposition:
➵ We write g = g0 + 2 · g1 + · · · + 2^k · gk,
➵ Then we apply this trick to each gi.
This effectively mitigates the size growth.
Consequence: We can compute b1, . . . , bk in Zd such that a a⋆ + ∑_i bi bi⋆ = C, with k = Õ(log ‖g‖∞ + log d).
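The cancellation in Attempt 2 can be checked mechanically. A minimal Python sketch (helper names mine; g is built as a·a⋆ so it is self-adjoint by construction) verifies that g + b·b⋆ has only even-degree coefficients, i.e. has been projected into Zd/2:

```python
d = 8  # ring Z[x]/(x^d + 1), d a power of two

def mul(a, b):
    # schoolbook product in Z[x]/(x^d + 1)
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < d:
                res[i + j] += ai * bj
            else:
                res[i + j - d] -= ai * bj
    return res

def adjoint(a):
    # Hermitian adjoint a*(x) = a(1/x): x^{-i} = -x^{d-i} in this ring
    return [a[0]] + [-a[d - i] for i in range(1, d)]

a = [1, 2, 0, -1, 3, 1, 0, 2]   # arbitrary test element
g = mul(a, adjoint(a))          # g = a a*, self-adjoint by construction

# u = x * g_{o,low}(x^2): the "low half" of the odd part of g; b = 1 - u
u = [g[i] if (i % 2 == 1 and i < d // 2) else 0 for i in range(d)]
b = [1] + [-u[i] for i in range(1, d)]

res = [gi + ci for gi, ci in zip(g, mul(b, adjoint(b)))]
assert all(res[i] == 0 for i in range(1, d, 2))  # only even powers survive
```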
Problem: Given B ∈ Zd^{n×n} and c ∈ Span_{Qd}(B), compute v ∈ Λ(B) such that ||v − c|| is small.
Equivalent: Given B ∈ Zd^{n×n} and t ∈ Qd^n, compute z ∈ Zd^n such that ||(z − t) · B|| is small.
Algorithmic solutions:
➳ High quality, O((nd)^2) operations: (randomized) nearest plane [Bab85, GPV08]
➳ Lower quality, O(n^2 d log d) operations: (randomized) round-off [Bab85, Pei10]
➳ High quality, O(n^2 d log d) operations: fast Fourier orthogonalization ia.cr/2015/1014
Round-Off Algorithm:
1 t ← c · B^{−1}
2 z ← ⌊t⌉
3 Output v ← z · B
Nearest Plane Algorithm:^1
1 t ← c · B^{−1}
2 For j = n down to 1:
  1 t̂j ← tj + ∑_{i>j} (ti − zi) · Li,j
  2 zj ← ⌊t̂j⌉
3 Output v ← z · B
^1 Requires precomputing the Gram-Schmidt orthogonalization (GSO) of B: B = L · B̃.
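Both algorithms above translate into a few lines of NumPy. A toy 2-dimensional sketch (gso is my own helper computing B = L·B̃; the two algorithms are otherwise transcribed from the slide):

```python
import numpy as np

def gso(B):
    # Gram-Schmidt orthogonalization: B = L @ Bt, L unit lower triangular
    n = B.shape[0]
    Bt = B.astype(float).copy()
    L = np.eye(n)
    for i in range(n):
        for j in range(i):
            L[i, j] = (B[i] @ Bt[j]) / (Bt[j] @ Bt[j])
            Bt[i] = Bt[i] - L[i, j] * Bt[j]
    return L, Bt

def round_off(B, c):
    # Babai round-off: z = round(c B^-1), output v = z B
    z = np.rint(c @ np.linalg.inv(B))
    return z @ B

def nearest_plane(B, c):
    # Babai nearest plane, following the slide's loop
    n = B.shape[0]
    L, _ = gso(B)
    t = c @ np.linalg.inv(B)
    z = np.zeros(n)
    for j in range(n - 1, -1, -1):
        tj_hat = t[j] + sum((t[i] - z[i]) * L[i, j] for i in range(j + 1, n))
        z[j] = round(tj_hat)
    return z @ B

B = np.array([[7.0, 1.0], [3.0, 5.0]])
c = np.array([10.3, 4.9])
v1, v2 = round_off(B, c), nearest_plane(B, c)
assert np.allclose(np.rint(v1 @ np.linalg.inv(B)) @ B, v1)  # v1 is a lattice point
assert np.allclose(np.rint(v2 @ np.linalg.inv(B)) @ B, v2)  # so is v2
```

On well-conditioned bases the two outputs often coincide; nearest plane's quality advantage shows up on skewed bases, where rounding coordinate-wise in the B basis is a poor approximation.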
Consider the simplified case where we want this to be small:
(z − t) · b
Using the ring isomorphism Qd ≅ (Qd/2)^2, this is equivalent to:
[ze − te  zo − to] · [be  bo; x·bo  be]
Why this is nice:
➳ We can orthogonalize the second row of B w.r.t. the first one: b̃2 ← b2 − (⟨b2, b1⟩ / ⟨b1, b1⟩) · b1
➳ We can apply this "break and orthogonalize" trick recursively.
➳ This structured decomposition then allows a faster nearest plane algorithm.
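The 2×2 block structure above can be verified coefficient-by-coefficient: multiplying f by b in Zd is the same as multiplying the split vector (fe, fo) by the matrix [be, bo; x·bo, be] over Zd/2. A short Python check (helpers mine, test values arbitrary):

```python
d = 8  # Z[x]/(x^d + 1)

def mul(a, b, n):
    # schoolbook product in Z[x]/(x^n + 1)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] += ai * bj
            else:
                res[i + j - n] -= ai * bj
    return res

def mulx(a, n):
    # multiply by the variable x in Z[x]/(x^n + 1)
    return [-a[-1]] + a[:-1]

f = [7, -1, -3, -4, 0, -2, -1, 1]
b = [2, 1, 0, -1, 3, 0, 1, 2]
c = mul(f, b, d)                      # product in the full ring Zd

h = d // 2                            # split into even/odd parts over Zd/2
fe, fo = f[0::2], f[1::2]
be, bo = b[0::2], b[1::2]
ce = [s + t for s, t in zip(mul(fe, be, h), mulx(mul(fo, bo, h), h))]
co = [s + t for s, t in zip(mul(fe, bo, h), mul(fo, be, h))]

# (fe, fo) * [be, bo; x*bo, be] reproduces the even/odd parts of f*b
assert c[0::2] == ce and c[1::2] == co
```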
Additional tricks:
➳ Equivalent decomposition: (B = L · B̃) ⟺ (B · B⋆ = L · (B̃ B̃⋆) · L⋆). The LDL decomposition is more amenable to a recursive application of this trick.
➳ Working only in the FFT domain: discarding useless conversions further reduces the total complexity to O(d log d).
Speed-ups in the presence of a ring:
➳ Most of efficient lattice-based cryptography
Speed-ups in the presence of a tower of rings (this talk):
➳ Using ring isomorphisms: ia.cr/2015/1014
➳ Using the field norm: ia.cr/2019/015
➳ Using trace-like properties: ia.cr/2019/230
Exploiting automorphisms:
➳ Homomorphic encryption
➳ Zero-knowledge proofs [dPLS18]
L. Babai. On Lovász' lattice reduction and the nearest lattice point problem. In Proceedings of STACS 85, 2nd Annual Symposium on Theoretical Aspects of Computer Science. Springer-Verlag, New York, NY, USA, 1985.
Joppe W. Bos, Kristin Lauter, Jake Loftus, and Michael Naehrig. Improved security for a ring-based fully homomorphic encryption scheme. In Martijn Stam, editor, 14th IMA International Conference on Cryptography and Coding, volume 8308 of LNCS, pages 45-64. Springer, Heidelberg, December 2013.
Peter Campbell and Michael Groves. Practical post-quantum hierarchical identity-based encryption. 16th IMA International Conference on Cryptography and Coding, 2017. http://www.qub.ac.uk/sites/CSIT/FileStore/Filetoupload,785752,en.pdf.
Léo Ducas, Vadim Lyubashevsky, and Thomas Prest. Efficient identity-based encryption over NTRU lattices. In Palash Sarkar and Tetsu Iwata, editors, ASIACRYPT 2014, Part II, volume 8874 of LNCS, pages 22-41. Springer, Heidelberg, December 2014.
Léo Ducas and Phong Q. Nguyen. Faster Gaussian lattice sampling using lazy floating-point arithmetic. In Wang and Sako [WS12], pages 415-432.
Rafaël del Pino, Vadim Lyubashevsky, and Gregor Seiler. Lattice-based group signatures and zero-knowledge proofs of automorphism stability. In David Lie, Mohammad Mannan, Michael Backes, and XiaoFeng Wang, editors, ACM CCS 18, pages 574-591. ACM Press, October 2018.
Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In Richard E. Ladner and Cynthia Dwork, editors, 40th ACM STOC, pages 197-206. ACM Press, May 2008.
Jeffrey Hoffstein, Nick Howgrave-Graham, Jill Pipher, Joseph H. Silverman, and William Whyte. NTRUSIGN: Digital signatures using the NTRU lattice. In Marc Joye, editor, CT-RSA 2003, volume 2612 of LNCS, pages 122-140. Springer, Heidelberg, April 2003.
Adriana López-Alt, Eran Tromer, and Vinod Vaikuntanathan. On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption. In Howard J. Karloff and Toniann Pitassi, editors, 44th ACM STOC, pages 1219-1234. ACM Press, May 2012.
Chris Peikert. An efficient and parallel Gaussian sampler for lattices. In Tal Rabin, editor, CRYPTO 2010, volume 6223 of LNCS, pages 80-97. Springer, Heidelberg, August 2010.
Thomas Prest, Pierre-Alain Fouque, Jeffrey Hoffstein, Paul Kirchner, Vadim Lyubashevsky, Thomas Pornin, Thomas Ricosset, Gregor Seiler, William Whyte, and Zhenfei Zhang. Falcon. Technical report, National Institute of Standards and Technology, 2017. Available at https://csrc.nist.gov/projects/post-quantum-cryptography/round-1-submissions.
John M. Schanck, Andreas Hülsing, Joost Rijneveld, and Peter Schwabe. NTRU-HRSS-KEM. Technical report, National Institute of Standards and Technology, 2017. Available at https://csrc.nist.gov/projects/post-quantum-cryptography/round-1-submissions.
Damien Stehlé and Ron Steinfeld. Making NTRU as secure as worst-case problems over ideal lattices. In Kenneth G. Paterson, editor, EUROCRYPT 2011, volume 6632 of LNCS, pages 27-47. Springer, Heidelberg, May 2011.
Xiaoyun Wang and Kazue Sako, editors. ASIACRYPT 2012, volume 7658 of LNCS. Springer, Heidelberg, December 2012.
Zhenfei Zhang, Cong Chen, Jeffrey Hoffstein, and William Whyte. pqNTRUSign. Technical report, National Institute of Standards and Technology, 2017. Available at https://csrc.nist.gov/projects/post-quantum-cryptography/round-1-submissions.