Solving SVP and CVP in 2^n Time Using Discrete Gaussian Sampling
Divesh Aggarwal
National University of Singapore (NUS)
Daniel Dadush
Centrum Wiskunde en Informatica (CWI)
Oded Regev Noah Stephens-Davidowitz
New York University (NYU)
A lattice L ⊂ R^n is the set of all integral combinations of some basis b_1, …, b_n. L(B) denotes the lattice generated by B.
[Figure: a two-dimensional lattice L generated by basis vectors b_1, b_2.]
Given: Lattice basis B ∈ Q^{n×n}. Goal: Compute the shortest non-zero vector in L(B).
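To make the problem statement concrete, here is a toy brute-force SVP solver (my own illustration, not one of the algorithms discussed in this talk): it enumerates integer coefficient vectors with small entries, which is only feasible in tiny dimension. The basis and the coefficient `bound` are illustrative assumptions.

```python
# Toy brute-force SVP (illustrative only, not from the talk): enumerate
# integer coefficient vectors with entries in [-bound, bound] and return
# the shortest resulting non-zero lattice vector.
import itertools

def shortest_vector(basis, bound=3):
    best, best_norm2 = None, None
    n = len(basis)
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis))
             for j in range(len(basis[0]))]
        norm2 = sum(x * x for x in v)
        if best_norm2 is None or norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2

# For the basis (2, 0), (1, 2), the shortest non-zero vectors have
# squared length 4 (e.g. (2, 0)):
v, norm2 = shortest_vector([[2, 0], [1, 2]])
assert norm2 == 4
```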
λ1(L) = length of the shortest non-zero vector.
[Figure: the lattice with a shortest non-zero vector marked.]
                                             Time               Space
[Kan86,HS07,MW15] (Enumeration)              n^O(n)             poly(n)
[AKS01] (Sieving)                            2^O(n)             2^O(n)
[NV08, PS09, MV10a, …]                       2^{2.465n+o(n)}    2^{1.233n+o(n)}
[MV10b] (Voronoi cell, deterministic, CVP)   2^{2n+o(n)}        2^{n+o(n)}
[ADRS15]                                     2^{n+o(n)}         2^{n+o(n)}
If we can obtain "enough" samples from the discrete Gaussian D_{L,s} with the "right" (small) parameter s, then the shortest non-zero sample is a shortest vector: we can solve SVP.
We need at most 1.38^n vectors with s ≈ λ1(L)/√n [KL78] (uses bounds on the kissing number). D_{L,s} is very well-studied for very high parameters, i.e. s above the "smoothing parameter" of the lattice. [Kle00, GPV08] show how to sample in this regime in polynomial time. (Previously, one could not do much better, even in exponential time.)
Can we use samples at the high parameter (which [Kle00, GPV08] can produce) to get samples at the low parameter (our goal)?
What if we average pairs of samples and condition on the result being in the lattice? Progress! Unfortunately, this requires us to throw out a lot of vectors: we only keep one of every ≈ 2^n pairs each time we do this, leading to a very slow algorithm!
What about the average of two discrete Gaussian vectors, conditioned on the result being in the lattice?

We have (z1 + z2)/2 ∈ L if and only if z1 and z2 are in the same coset of 2L. (Note that there are 2^n cosets.)

Let L⊕ = { (z1, z2) ∈ L × L : z1 ≡ z2 (mod 2L) }, and define avg(z1, z2) = ((z1 + z2)/2, (z1 − z2)/2). Then avg(L⊕) = L × L, and

    (z1, z2) ~ D_{L⊕, s}  ⟹  avg(z1, z2) ~ D_{L×L, s/√2}.

Progress!
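In coefficient coordinates (z = Ba with a ∈ Z^n), the coset condition above is just a parity check on a. A quick sanity check of the equivalence (my own toy, with hypothetical helper names), exhaustive over small coefficient vectors in dimension 2:

```python
# Sanity check (not from the talk): with z = B a, the average (z1 + z2)/2
# is a lattice vector iff the coefficient vectors a1, a2 agree mod 2,
# i.e. iff z1 and z2 lie in the same coset of 2L. There are 2^n cosets.
import itertools

def same_coset_mod_2L(a1, a2):
    # same coset of 2L  <=>  coefficient vectors have equal parities
    return all(x % 2 == y % 2 for x, y in zip(a1, a2))

def avg_is_lattice_vector(a1, a2):
    # (z1 + z2)/2 = B (a1 + a2)/2 is in L  <=>  a1 + a2 is even coordinatewise
    return all((x + y) % 2 == 0 for x, y in zip(a1, a2))

for a1 in itertools.product(range(-2, 3), repeat=2):
    for a2 in itertools.product(range(-2, 3), repeat=2):
        assert avg_is_lattice_vector(a1, a2) == same_coset_mod_2L(a1, a2)
```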
If we sample z1, z2 ~ D_{L,s} and condition on their average being in the lattice, then the average is distributed as D_{L, s/√2}:

    Pr_{z1,z2 ~ D_{L,s}}[ (z1+z2)/2 = y | (z1+z2)/2 ∈ L ]
        ∝ Σ_{c ∈ L (mod 2L)} Pr[D_{L,s} ∈ c]^2 · Pr_{z1,z2 ~ D_{2L+c,s}}[ (z1+z2)/2 = y ].
Generating a single D_{L, s/√2} sample:
1. Choose a coset c of 2L with probability proportional to Pr[D_{L,s} ∈ c]^2.
2. Output (X_1 + X_2)/2, where X_1, X_2 ~ D_{2L+c, s}.

From samples. Input: X_1, …, X_M iid D_{L,s} samples (M ≈ 2^n):
1. Find a pair X_i ≡ X_j (mod 2L), i.e. X_i, X_j in the same coset c.
2. Output (X_i + X_j)/2 and remove X_i, X_j from the list.
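The pairing step can be sketched in a few lines (a simplification of mine, working directly with coefficient vectors; the bucket bookkeeping and the statistical analysis from the next slides are omitted):

```python
# Sketch of the pairing combiner (simplified; not the paper's exact
# procedure): bucket samples by their coset mod 2L, i.e. by the parity
# of their coefficient vectors, then average disjoint pairs per bucket.
from collections import defaultdict

def combine(samples):
    buckets = defaultdict(list)
    for a in samples:
        buckets[tuple(x % 2 for x in a)].append(a)  # coset of 2L
    out = []
    for group in buckets.values():
        while len(group) >= 2:
            a1, a2 = group.pop(), group.pop()
            # a1 + a2 is even coordinatewise, so the average is in L
            out.append([(x + y) // 2 for x, y in zip(a1, a2)])
    return out

# Two pairs, one per coset; each average is again a lattice vector:
assert sorted(combine([[1, 1], [3, 5], [2, 0], [0, 2]])) == [[1, 1], [2, 3]]
```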
Donโt have access to this distribution!
Achieving c ~ Pr[D_{L,s} ∈ c]^2:

First Pass: Sample c ~ D_{L,s} (mod 2L). Accept c with probability Pr[D_{L,s} ∈ c]; otherwise reject.
Implementation: Sample X_1 ~ D_{L,s} and let c be X_1 (mod 2L). Sample X_2 ~ D_{L,s}. Output c if X_1 ≡ X_2 (mod 2L).
Same as the trivial strategy!
Achieving c ~ Pr[D_{L,s} ∈ c]^2:

Second Try: Sample c ~ D_{L,s} (mod 2L). Accept c with probability Pr[D_{L,s} ∈ c] / p_max, where p_max = max_{c ∈ L (mod 2L)} Pr[D_{L,s} ∈ c].
Implementation: ???
Input: X_1, …, X_M iid D_{L,s} samples (M ≈ 2^n):
1. Use the first M/6 samples to estimate p_max; set M ← M·p_max/3.
2. Bucket the remaining samples by their coset in L (mod 2L) (2^n buckets), processing them in batches of 1/p_max samples each (the first 1/p_max samples, …, the last 1/p_max samples) and recording the number κ_c of samples in each bucket.
3. For each sufficiently heavy bucket c, output pairs (X_i + X_j)/2, where X_i is any sample contributing to κ_c.

The number of samples may drop sharply after a single step!
M ← # input vectors. Then

    # output vectors ≈ M · ( Σ_c Pr[D_{L,s} ∈ c]^2 ) / max_c Pr[D_{L,s} ∈ c].

Worst-case bound: the maximal coset probability is at least 1/|support| = 2^{−n}.
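A quick numeric check (my addition) of this worst-case bound: for any distribution p over N cosets, Σ_c p_c^2 ≥ (max_c p_c)^2 and max_c p_c ≥ 1/N, so the survival ratio Σ p^2 / max p is at least 1/N.

```python
# Check (my addition): for any distribution p over N cosets, the per-step
# survival ratio  sum_c p_c^2 / max_c p_c  is at least 1/N, since
# sum p^2 >= (max p)^2 and max p >= 1/N.
import random

def survival_ratio(p):
    return sum(x * x for x in p) / max(p)

random.seed(1)
N = 2 ** 10  # stand-in for the 2^n cosets of 2L
for _ in range(100):
    w = [random.random() for _ in range(N)]
    total = sum(w)
    p = [x / total for x in w]
    assert survival_ratio(p) >= 1.0 / N
```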
Define the Gaussian mass

    ρ_s(L) := Σ_{z∈L} e^{−π‖z‖²/s²}.

Then

    max_c ρ_s(2L + c) = ρ_s(2L).
Setting M ≈ 2^n gives:

    ρ_s(L) ≤ 2^{n/2} · ρ_{s/√2}(L).

Recall that we only need 1.38^n samples to solve SVP!
Poisson summation formula: for a "nice" function f,

    Σ_{z∈L} f(z + t) = (1/det(L)) · Σ_{y∈L*} f̂(y) · e^{2πi⟨y,t⟩}.

Plug in f(x) = e^{−π‖x‖²/s²}, with f̂(y) = s^n · e^{−π‖sy‖²}:

    ρ_s(L + t) = (s^n / det(L)) · Σ_{y∈L*} e^{−π‖sy‖²} · e^{2πi⟨y,t⟩}

    ρ_s(L) = (s^n / det(L)) · ρ_{1/s}(L*)
Corollary 1: max_t ρ_s(L + t) = ρ_s(L).
Corollary 2: ρ_{βs}(L) ≤ β^n · ρ_s(L) for β ≥ 1.
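Both corollaries can be sanity-checked numerically on the one-dimensional lattice Z with truncated sums (my addition; the function `rho` and the cutoff are illustrative choices, not from the talk):

```python
# Truncated-sum check of the corollaries on the lattice Z (my addition):
# rho_s(Z + u) = sum_z exp(-pi (z + u)^2 / s^2), truncated at |z| <= cutoff.
import math

def rho(s, shift=0.0, cutoff=60):
    return sum(math.exp(-math.pi * ((z + shift) / s) ** 2)
               for z in range(-cutoff, cutoff + 1))

s = 1.3
# Corollary 1: the zero shift maximizes the Gaussian mass.
for u in (0.1, 0.25, 0.5):
    assert rho(s, shift=u) <= rho(s) + 1e-12
# Corollary 2 with n = 1: rho_{beta*s}(Z) <= beta * rho_s(Z) for beta >= 1.
for beta in (1.0, 1.5, 2.0, 3.0):
    assert rho(beta * s) <= beta * rho(s) + 1e-12
```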
SVPSolver(L):
1. Use [Kle00, GPV08] to get ≈ 2^n samples from D_{L,s} with s above the smoothing parameter.
2. Apply the combiner repeatedly, lowering the parameter by a factor √2 each step, until s ≈ λ1(L)/√n.
3. Output the shortest non-zero vector among the samples.
Given: Lattice basis B ∈ Q^{n×n}, target t ∈ Q^n. Goal: Compute y ∈ L(B) minimizing ‖t − y‖; dist(t, L) denotes this minimum distance.
[Figure: the target t and its distance dist(t, L) to the lattice.]
CVP seems to be the harder problem: there is a dimension-preserving reduction from SVP to CVP [GMSS99].
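That reduction is simple enough to sketch end-to-end (my toy version, with a brute-force CVP solver standing in for the oracle; function names and the coefficient bound are illustrative): for each i, double b_i to get an index-2 sublattice L_i, ask for the vector of L_i closest to b_i, and keep the shortest difference.

```python
# Sketch of the [GMSS99] SVP-to-CVP reduction (my toy version; brute-force
# CVP stands in for the oracle, so only tiny dimensions are feasible).
import itertools

def cvp_bruteforce(basis, target, bound=4):
    best, best_d2 = None, float("inf")
    n = len(basis)
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        v = [sum(c * b[j] for c, b in zip(coeffs, basis))
             for j in range(len(target))]
        d2 = sum((x - t) ** 2 for x, t in zip(v, target))
        if d2 < best_d2:
            best, best_d2 = v, d2
    return best

def svp_via_cvp(basis):
    best, best_n2 = None, float("inf")
    for i, bi in enumerate(basis):
        # Double b_i: an index-2 sublattice L_i that does not contain b_i.
        doubled = [[2 * x for x in b] if j == i else b
                   for j, b in enumerate(basis)]
        w = cvp_bruteforce(doubled, bi)
        cand = [x - y for x, y in zip(bi, w)]  # non-zero lattice vector
        n2 = sum(x * x for x in cand)
        if 0 < n2 < best_n2:
            best, best_n2 = cand, n2
    return best, best_n2

v, n2 = svp_via_cvp([[2, 0], [1, 2]])
assert n2 == 4  # matches lambda_1^2 of this toy lattice
```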
                                     Time           CVP?          Deterministic?
[Kan86,HS07,MW15] (Enumeration)      n^O(n)         Yes           Yes
[AKS02, BN09, HPS11, …] (Sieving)    2^O(n)         Approximate   No
[MV10b] (Voronoi cell)               2^{2n+o(n)}    Yes           Yes
[ADRS15] (Discrete Gaussian)         2^{n+o(n)}     Approximate   No
[ADS15]                              2^{n+o(n)}     Yes           No
The algorithm is quite complicated, so the following is an over-simplified, high-level sketch.
CVP trivially reduces to sampling from the shifted discrete Gaussian D_{L,t,s} for a small enough parameter s: at a small parameter, essentially all of the mass sits on the closest vectors.
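A one-dimensional illustration of why this works (my addition, on the lattice Z with illustrative parameter values): as s shrinks, D_{Z,t,s} puts essentially all of its mass on the lattice point closest to t.

```python
# Illustration (my addition): the shifted discrete Gaussian on Z, with
# mass proportional to exp(-pi (z - t)^2 / s^2), concentrates on the
# closest lattice point as the parameter s shrinks.
import math

def mass_on_closest(t, s, cutoff=50):
    weights = {z: math.exp(-math.pi * ((z - t) / s) ** 2)
               for z in range(-cutoff, cutoff + 1)}
    return weights[round(t)] / sum(weights.values())

t = 0.3  # closest lattice point is 0
assert mass_on_closest(t, s=2.0) < 0.9    # high parameter: spread out
assert mass_on_closest(t, s=0.3) > 0.99   # low parameter: concentrated
```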
Great! So, we just need to run the squaring combiner and we're done! Right!?
[Kle00, GPV08] also sample from the shifted Gaussian, but given a basis b_1, …, b_n for L, they require s ≳ max_i ‖b_i‖. Directions with length ≫ s carry almost no mass, effectively setting these coordinates to 0.
Even if we apply the combiner k times, the analysis from the SVP case breaks: the relevant ratio now compares ρ_{s/√2}(L) · ρ_{s/√2}(L − t) with Σ_{c ∈ L/2L} ρ_s(c − t)², and there is no obvious "magical cancelation". In the worst case, a combining step keeps only a 2^{−n} · max_{c ∈ L/2L} Pr[D_{L,t,s} ∈ c] fraction of the samples.
If we start with 2^{n+o(n)} samples, we always "see" the heaviest coset at each stage.
[Figure: a sphere containing the closest vectors, inside a shell of approximate closest vectors.]
There can be arbitrarily many δ-approximate closest vectors for any δ > 1!!
The reduction from CVP to DGS needs s ≪ λ1(L), but we can only handle s ≈ 2^{−n} · dist(t, L).
For such parameters, we obtain approximate solutions with an unreasonably good approximation factor δ ≈ 1 + 2^{−n}, but not exact solutions.
For δ ⪅ 1 + 1/n, δ-approximate closest vectors lie in lower-dimensional lattice subspaces.
To apply recursion, we need to identify these subspaces and show that there are not too many: with a 2^{n+o(n)} time budget, we can afford at most 2 sub-problems per dimension!
Claim: There are at most 2^n exact closest vectors, since they must lie in different cosets of L/2L. (If two distinct closest vectors shared a coset, their average would be a lattice vector strictly closer to the target.)
[Figure: sphere containing the closest vectors.]
Claim: The approximate closest vectors are contained in 2^n "clusters" of small diameter.
[Figure: a shell containing the approximate closest vectors, around the sphere containing the closest vectors.]
Claim: Let v and w be (1 + ε²)-approximate CVP solutions with w − v ∈ 2L. Then ‖w − v‖ ≤ 2ε · dist(t, L).
Proof: (w + v)/2 ∈ L (since w − v ∈ 2L), so ‖(w + v)/2 − t‖ ≥ dist(t, L), and

    ‖w − v‖² = 2‖w − t‖² + 2‖v − t‖² − 4‖(w + v)/2 − t‖²
             ≤ 4(1 + ε²) · dist(t, L)² − 4 · dist(t, L)²
             = 4ε² · dist(t, L)².
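The first line of the computation is the parallelogram identity; a quick numeric check (my addition) on random points:

```python
# Numeric check (my addition) of the parallelogram identity used above:
# ||w - v||^2 = 2||w - t||^2 + 2||v - t||^2 - 4||(w + v)/2 - t||^2.
import random

def d2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

random.seed(2)
for _ in range(100):
    v, w, t = ([random.uniform(-5, 5) for _ in range(4)] for _ in range(3))
    mid = [(a + b) / 2 for a, b in zip(v, w)]
    lhs = d2(w, v)
    rhs = 2 * d2(w, t) + 2 * d2(v, t) - 4 * d2(mid, t)
    assert abs(lhs - rhs) < 1e-9
```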
Take a "nearly orthogonal" basis b_1, …, b_n of L (lengths in approximately non-decreasing order) and (1 + 2^{−n})-approximate CVP solutions y_1, …, y_N for t, with y_i = Σ_j a_{j,i} b_j.
Theorem: There exists a d such that the last d coefficient vectors { (a_{n−d+1,i}, …, a_{n,i}) : i ≤ N } come from a set of size ≈ 2^d.
Recurse on these!
Assume an orthogonal lattice: L = { (y_1 q_1, …, y_n q_n) : y ∈ Z^n } with 0 ≤ q_1 ≤ ⋯ ≤ q_n, and (1 + ε²)-approximate CVP solutions y_1, …, y_N for t, with y_i = (a_{1,i} q_1, …, a_{n,i} q_n).
Claim: If y_i − y_j ∈ 2L and q_{n−d+1} > ε·n·q_n, then (a_{n−d+1,i}, …, a_{n,i}) = (a_{n−d+1,j}, …, a_{n,j}).
For ε = 2^{−n}, all coordinates are fixed by parity unless there are exponential gaps in the basis vector lengths. But such gaps can exist….
Proof of the claim: Suppose y_i − y_j ∈ 2L and q_{n−d+1} > ε·n·q_n. Since dist(t, L) ≤ (1/2)·√(Σ_k q_k²) ≤ (√n/2)·q_n, the previous claim gives

    ‖y_i − y_j‖ ≤ 2ε · dist(t, L) ≤ ε·n·q_n < q_{n−d+1}.

But if y_i and y_j differed on any coordinate k ∈ {n−d+1, …, n}, their difference would have norm at least q_{n−d+1}.
This shows we have at most 2^n clusters, each of which is (n − d)-dimensional, but we need only 2^d clusters!!!
Idea: Only match parity on the "high order bits".

    q_1  q_2  q_3  q_4  …  [Gap]  …  q_{n−d+1}  …  q_n

If there is a large gap before the last d lengths, the parity of the last d coefficients determines these coefficients exactly, and we can round the first n − d coefficients to the desired parity without increasing the distance to t by much.

What if there are no large gaps (q_1 q_2 q_3 … q_{n−1} q_n)? Again only match parity on the last d coefficients: below the gap the coefficients are mostly determined, beyond it completely determined, and we can guarantee that d is large in this case.
Input: n-dimensional lattice L and target t. Output: the closest lattice vectors in L to t.
1. Compute a short basis B of L, and the number d of "high order coordinates".
2. Sample many approximate closest vectors via DGS.
3. Group them according to their last d coordinates with respect to B and recurse on each group.

Initialization (one shot, 2^{n+o(n)} time): compute the short basis B of L and the number d of "high order coordinates".
Per-level work (2^{n+o(n)} time): sample many approximate closest vectors via DGS.
Recursion (≈ 2^d subproblems of dimension n − d): group according to the last d coordinates with respect to B and recurse.
Total runtime: 2^{n+o(n)}.
Runtime: getting many DGS samples at low parameters (discussed in this talk).
Correctness: show that we hit the last d coefficients of an exact closest vector with high probability.
Additional results from this work:
- Sampling 2^{n/2} vectors from D_{L,s} for any s in time 2^{n+o(n)}.
- A 2^{n/2+o(n)}-time algorithm for sampling 2^{n/2} vectors above smoothing.
Recent work: a reduction from DGS to SVP (not an equivalence, because the reduction in the other direction requires 1.38^n D_{L,s} samples).

Open questions: Can we sample at lower parameters? Can we derandomize the algorithm (or do something similar)?