SLIDE 1

Simple and Efficient Pseudorandom Generators from Gaussian Processes

Eshan Chattopadhyay (Cornell), Anindya De (U Penn), Rocco Servedio (Columbia)

SLIDE 2

Halfspaces (aka LTFs)

[Figure: points in the plane labeled + and --, separated by a single halfspace boundary]

SLIDE 3

Halfspaces (and their intersections)

[Figure: points labeled + and --, with the + region carved out by an intersection of several halfspaces]

SLIDE 4

Intersections of k-halfspaces

  • Fundamental for several areas of mathematics and theoretical CS.
  • Well investigated in terms of:

1. Learning – [Vempala '10, Klivans-O'Donnell-Servedio '08]
2. Derandomization – [Harsha-Klivans-Meka '10, Servedio-Tan '17]
3. Noise sensitivity – [Nazarov '03, Kane '14]
4. Sampling – [Dyer-Frieze-Kannan '89, Lovász-Vempala '04, …]

SLIDE 5

Pseudorandom Generator (PRG)

Let F be a class of Boolean functions on {0,1}^n. A generator G : {0,1}^r → {0,1}^n ε-fools F if ∀ f ∈ F, |E[f(U_n)] – E[f(G(U_r))]| < ε.
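As a concrete companion to the definition (not from the talk), here is a minimal Python sketch that checks the fooling condition by exhaustive enumeration; the toy generator G, the test function f, and the sizes n, r are hypothetical choices made only for illustration.

```python
import itertools
import random

n, r = 6, 3  # hypothetical input length and seed length
random.seed(0)
# Toy linear generator: each output bit is a parity of a random subset of seed bits.
A = [[random.randint(0, 1) for _ in range(r)] for _ in range(n)]

def G(seed):
    return tuple(sum(a * s for a, s in zip(row, seed)) % 2 for row in A)

def f(x):
    # Example test function from the class F: a single halfspace (majority).
    return 1 if sum(x) >= n / 2 else 0

# E[f(U_n)]: average of f over the uniform distribution on {0,1}^n
true_mean = sum(f(x) for x in itertools.product([0, 1], repeat=n)) / 2**n
# E[f(G(U_r))]: average of f over the generator's outputs
prg_mean = sum(f(G(s)) for s in itertools.product([0, 1], repeat=r)) / 2**r

print(abs(true_mean - prg_mean))  # G eps-fools f iff this is < eps
```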

SLIDE 6

BPP

  • Languages L that admit an efficient randomized algorithm A:

 x ∈ L: Pr[A(x) = 1] > 2/3

 x ∉ L: Pr[A(x) = 0] > 2/3

SLIDE 7

Derandomization via PRGs
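A minimal sketch of the standard recipe behind this slide's title: run the BPP algorithm on every output of the PRG and take a majority vote. The names A, G, and r below are placeholders, not objects from the talk.

```python
import itertools

def derandomize(A, G, r, x):
    """Standard derandomization via a PRG: replace A's random bits by G(seed)
    and enumerate all 2^r seeds, taking a majority vote.  If G fools the class
    of tests induced by A on input x, the majority vote agrees with A's
    high-probability answer.  Deterministic, at a 2^r overhead."""
    votes = sum(A(x, G(seed)) for seed in itertools.product([0, 1], repeat=r))
    return 1 if 2 * votes > 2**r else 0
```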

SLIDE 8

Our focus: derandomization

  • This talk: focus on derandomization in the Gaussian space.
  • Setup: ℝⁿ endowed with the standard normal measure.
  • Task: Produce a small and explicit set of points that fools every intersection of k LTFs under this measure (formalized below).
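A plausible formalization of the task, with notation (w_i, θ_i for the halfspaces, S for the point set) introduced here only for concreteness:

```latex
\left|\;\Pr_{x \sim N(0, I_n)}\bigl[f(x) = 1\bigr]
      \;-\; \Pr_{x \sim S}\bigl[f(x) = 1\bigr]\;\right| \;\le\; \varepsilon
\qquad \text{for every } f(x) = \bigwedge_{i=1}^{k}
      \mathbf{1}\bigl\{\langle w_i, x\rangle \ge \theta_i\bigr\},
```

where Pr over x ∼ S denotes a uniformly random point of the explicit set S.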

SLIDE 9

Our focus: derandomization

Task: Produce a small and explicit set of points that fools every intersection of k LTFs under the Gaussian measure.
  • Non-constructively: a set of the required size is known to exist.
  • Best known explicit construction: due to Harsha-Klivans-Meka.
  • O'Donnell-Servedio-Tan 2019: a matching construction w.r.t. the uniform distribution on the Boolean cube.

SLIDE 10

Our main result

An explicit construction for fooling intersections of k halfspaces under the Gaussian measure.

➢ Our construction has polynomial size for a natural range of parameters.
➢ Arguably a much simpler construction.

SLIDE 11

Connection to Gaussian processes

  • "Connection" is an overstatement -- it's a simple rephrasing.
  • Instead of looking at the AND of halfspaces, let us look at the OR of halfspaces.
  • The OR of k halfspaces is exactly the event that the max/sup of a Gaussian process clears the thresholds (spelled out below).
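Spelling out the rephrasing, with notation w_i, θ_i introduced here for illustration: an OR of k halfspaces accepts exactly when the maximum of the corresponding collection of Gaussians is nonnegative,

```latex
\bigvee_{i=1}^{k} \mathbf{1}\bigl\{\langle w_i, x\rangle \ge \theta_i\bigr\} = 1
\quad\Longleftrightarrow\quad
\max_{1 \le i \le k}\bigl(\langle w_i, x\rangle - \theta_i\bigr) \;\ge\; 0,
```

and for x ∼ N(0, I_n) the variables X_i = ⟨w_i, x⟩ form a (finite) Gaussian process.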

SLIDE 12

Main idea

  • We are interested in studying a non-smooth function of the supremum of a Gaussian process.
  • We are interested in producing a small set whose empirical expectation of this function matches its Gaussian expectation up to ε.

SLIDE 13

Setting sights lower

  • What if we only want to produce a small set that approximately preserves the expected supremum?
  • Recall: the statistics of a Gaussian process are governed by its means and covariances -- and these are determined by the inner products of the defining vectors.
  • Johnson-Lindenstrauss can preserve the covariances approximately by projecting onto a random low-dimensional subspace (see below).
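Concretely (notation as above, introduced for illustration): for x ∼ N(0, I_n) and X_i = ⟨w_i, x⟩,

```latex
\mathbb{E}[X_i] = 0,
\qquad
\operatorname{Cov}(X_i, X_j) = \langle w_i, w_j \rangle ,
```

so preserving the pairwise inner products of the w_i -- which a Johnson-Lindenstrauss projection does up to a small additive error -- preserves the covariance structure of the process.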

SLIDE 14

Johnson-Lindenstrauss

  • Strategy: Sample a random low-dimensional subspace H.
  • Sample a standard Gaussian from H; call this the projected process.

Question: (i) By JL, the means and covariances of the two processes approximately match. (ii) Does this imply that the expected maxima are close? (A numerical sketch follows.)
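A minimal numerical sketch of the strategy (not the talk's construction; the dimensions, sample sizes, and the dense Gaussian projection matrix are assumptions made only for this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, N = 200, 20, 40, 20000   # ambient dim, #halfspaces, projected dim, #samples
W = rng.normal(size=(k, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)     # unit-norm normal vectors w_i

# Original process: X_i = <w_i, x> with x ~ N(0, I_n); estimate E[max_i X_i].
x = rng.normal(size=(N, n))
orig = (x @ W.T).max(axis=1).mean()

# Projected process: JL-style random projection P of the w_i to m dimensions,
# then sample a standard Gaussian y in m dimensions and look at <P w_i, y>.
P = rng.normal(size=(m, n)) / np.sqrt(m)
y = rng.normal(size=(N, m))
proj = (y @ (W @ P.T).T).max(axis=1).mean()

print(orig, proj)  # the two empirical expected maxima should be close
```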

SLIDE 15

Preserving expected maxima

  • Yes – by the Sudakov-Fernique lemma (quantitative version by Sourav Chatterjee).
  • What is the randomness complexity of sampling from a random low-dimensional subspace H?
  • JL can be derandomized (Kane, Meka, Nelson – 2011): in particular, the random projection from n to m dimensions can be replaced by an explicit small set of projections.

SLIDE 16

Preserving expected maxima

Lemma: Let (X_1, …, X_k) and (Y_1, …, Y_k) be two sets of normal random variables with

  • a. matching means,
  • b. nearly matching covariances.

Then the expected maxima of the two collections are close. In a nutshell: to get non-trivial approximations, we only need the covariances to match up to a small additive error. This can be achieved by random projections to a modest number of dimensions. (A hedged quantitative form is given below.)
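A hedged quantitative form, in the shape usually attributed to Chatterjee's error bound for the Sudakov-Fernique inequality (the exact constants and normalization used in the talk may differ):

```latex
\text{If } \mathbb{E}[X_i] = \mathbb{E}[Y_i] \text{ for all } i
\ \text{ and }\
\gamma = \max_{i,j}\bigl|\operatorname{Cov}(X_i,X_j) - \operatorname{Cov}(Y_i,Y_j)\bigr|,
\ \text{ then }\
\Bigl|\,\mathbb{E}\bigl[\max_i X_i\bigr] - \mathbb{E}\bigl[\max_i Y_i\bigr]\Bigr|
\;\le\; O\!\bigl(\sqrt{\gamma \log k}\,\bigr).
```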

SLIDE 17

Preserving expected maxima

Lemma: Let (X_1, …, X_k) and (Y_1, …, Y_k) be two sets of normal random variables with

  • a. matching means,
  • b. nearly matching covariances.

Then the expected maxima are close. Main thing we need to do: prove the same kind of comparison for Pr[max_i X_i ≥ 0] vis-à-vis Pr[max_i Y_i ≥ 0] -- a non-smooth function of the maximum.

SLIDE 18

Quick proof sketch

Main trick: Consider a smooth maximum function instead of the maximum. Define a smooth surrogate for the max (a standard choice is sketched below). Fact: it is much easier to work with the smooth function.
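A standard smooth maximum (the talk's exact choice may differ) is the log-sum-exp, which over-approximates the true maximum by at most (log k)/λ:

```latex
F_\lambda(x_1, \dots, x_k) \;=\; \frac{1}{\lambda}\log \sum_{i=1}^{k} e^{\lambda x_i},
\qquad
\max_i x_i \;\le\; F_\lambda(x_1, \dots, x_k) \;\le\; \max_i x_i + \frac{\log k}{\lambda}.
```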

SLIDE 19

Stein’s interpolation method

  • Comparing the quantities E[F(X)] and E[F(Y)] for a smooth test function F:
  • Condition: X and Y have matching means and nearly matching covariances.
  • For t ∈ [0, 1], define an interpolating process (the standard choice is sketched below).
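The standard Stein/Slepian interpolation between independent Gaussian vectors X and Y (assumed here; the talk's parametrization may differ) is

```latex
Z(t) \;=\; \sqrt{t}\,X \;+\; \sqrt{1-t}\,Y, \qquad t \in [0,1],
```

so that Z(1) = X, Z(0) = Y, and E[F(X)] − E[F(Y)] is the integral of (d/dt) E[F(Z(t))] over [0,1].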
SLIDE 20

Key statement

Lemma (key comparison statement): the proof is based on Stein's formula (integration by parts) and some algebraic manipulations. One useful fact is the Gaussian integration-by-parts identity sketched below.
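The "useful fact" is presumably Gaussian integration by parts (Stein's identity): for a centered jointly Gaussian vector X = (X_1, …, X_k) and a smooth F of moderate growth,

```latex
\mathbb{E}\bigl[X_i\,F(X)\bigr]
\;=\; \sum_{j=1}^{k} \mathbb{E}[X_i X_j]\;\mathbb{E}\bigl[\partial_j F(X)\bigr].
```

Differentiating E[F(Z(t))] in t and applying this identity is what brings out the covariance differences (multiplied by second derivatives of F) in the comparison bound.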

SLIDE 21

Putting things together

SLIDE 22

Our goal

  • Recall: We want to prove that the projected process gives nearly the same acceptance probability as the original one.
  • Two-step procedure:
  1. Prove the comparison for smooth test functions F.
  2. Pass from smooth F to the non-smooth indicator test function.

The error bound in step 1 depends on the derivatives of F.

SLIDE 23

Going from smooth to non-smooth

  • To go from smooth test functions to non-smooth test functions, the random variable (here, the maximum) should not be very concentrated.


SLIDE 24

Going from smooth to non-smooth

  • Suppose X_1, …, X_k are (potentially correlated) normal random variables, each with variance 1.
  • How concentrated can max_i X_i be?
  • Easy to show: a weak bound via the union bound.
  • Much harder [Nazarov]: a much stronger bound. (Hedged forms of both are given below.)
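Plausible forms of the two anti-concentration bounds (the exact constants and statements in the talk may differ): the easy bound is a union bound over the k unit-variance Gaussians, while the Nazarov-type bound replaces the factor k by roughly the square root of log k:

```latex
\Pr\Bigl[\max_i X_i \in [t, t+\varepsilon]\Bigr] \;\le\; \frac{k\,\varepsilon}{\sqrt{2\pi}}
\quad\text{(union bound)},
\qquad
\Pr\Bigl[\max_i X_i \in [t, t+\varepsilon]\Bigr] \;\le\; O\!\bigl(\varepsilon\sqrt{\log k}\,\bigr)
\quad\text{(Nazarov-type)}.
```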
SLIDE 25

Putting it together

  • The anti-concentration bound allows us to transfer bounds from the smooth test function to the non-smooth 0/1 test function.
  • This proves that the (pseudo)randomly projected process fools intersections of k halfspaces, as required.
SLIDE 26

Summary

  • If we start with a set of jointly Gaussian random variables and do a (pseudo)random projection to a low-dimensional subspace, we obtain a new set of jointly Gaussian random variables.

=> JL implies the means and covariances are (approximately) preserved.

  • Sudakov-Fernique: hence the expected maximum is (approximately) preserved.
  • This work, we exploit: the probability that the maximum crosses its threshold -- i.e., the acceptance probability of the intersection of halfspaces -- is also (approximately) preserved.
SLIDE 27

Other results

  • What other statistics of Gaussians can be preserved by using random projections?
  • If (X_1, …, X_k) and (Y_1, …, Y_k) have approximately matching covariances, the two distributions are close in a stronger sense than expected maxima alone.
  • Proof: closeness in covariance ➔ closeness in Wasserstein distance ➔ closeness in union-of-orthants distance (Chen-Servedio-Tan).
  • PRG for arbitrary functions of LTFs on Gaussian space with small seed length.

SLIDE 28

Other results

  • Deterministic Approximate Counting:

– A poly(n) · 2^poly(log k, 1/ε) time algorithm for counting the fraction of Boolean points in a polytope with k faces, up to additive error ε.
– A poly(n) · 2^poly(k, 1/ε) time algorithm for counting the fraction of Boolean points satisfied by an arbitrary function of k halfspaces, up to additive error ε.

  • Technique based on invariance principles and regularity lemmas.

– Beats the vanilla use of a PRG that brute-forces over all seeds!

SLIDE 29

Open questions

  • PRGs for fooling DNFs of halfspaces using similar techniques?
  • Extending the techniques to the Boolean setting?