
SLIDE 1

Quantum Chebyshev’s Inequality and Applications

Yassine Hamoudi, Frédéric Magniez

IRIF, Université Paris Diderot, CNRS. QUDATA 2019. arXiv: 1807.06456

SLIDE 2

Buffon’s needle

Buffon, G., Essai d'arithmétique morale, 1777.

A needle dropped randomly on a floor with equally spaced parallel lines will cross one of the lines with probability 2/π.
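The 2/π estimate can be reproduced with exactly the repeated-sampling approach the next slides formalize (a minimal Python sketch; the unit needle length, equal to the line spacing, is the assumption under which the 2/π formula holds):

```python
import math
import random

def buffon_crossing_probability(n_drops: int, seed: int = 0) -> float:
    """Estimate the probability that a needle of length 1, dropped on a floor
    ruled with parallel lines at unit spacing, crosses a line.
    The exact value is 2/pi (Buffon, 1777)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        # Distance from the needle's center to the nearest line (in [0, 1/2])
        # and the needle's angle relative to the lines (in [0, pi/2]).
        y = rng.uniform(0.0, 0.5)
        theta = rng.uniform(0.0, math.pi / 2)
        if y <= 0.5 * math.sin(theta):
            crossings += 1
    return crossings / n_drops

estimate = buffon_crossing_probability(200_000)
print(estimate, 2 / math.pi)  # the two values agree to about two decimal places
```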


SLIDE 5

Monte Carlo algorithms: use repeated random sampling and statistical analysis to estimate parameters of interest.

Empirical mean:

1/ Repeat the experiment n times: n i.i.d. samples x1, …, xn ∼ X
2/ Output: (x1 + … + xn)/n

Law of large numbers: (x1 + … + xn)/n → E(X) as n → ∞

SLIDE 6

Empirical mean: μ̃ = (x1 + … + xn)/n with x1, …, xn ∼ X

How fast does it converge to E(X)?


SLIDE 10

Empirical mean: μ̃ = (x1 + … + xn)/n with x1, …, xn ∼ X

How fast does it converge to E(X)?

Objective: |μ̃ − E(X)| ≤ ε·E(X) with high probability, for a multiplicative error 0 < ε < 1 (assuming E(X) ≠ 0 and Var(X) finite).

Chebyshev’s Inequality. Number of samples needed:

O( Var(X)/(ε²·E(X)²) ) = O( (1/ε²)·(E(X²)/E(X)² − 1) )   (in fact O( E(X²)/(ε²·E(X)²) ))

The quantity E(X²)/E(X)² is the relative second moment.

In practice: given an upper bound Δ² ≥ E(X²)/E(X)², take n = Ω(Δ²/ε²) samples.

SLIDE 11

Applications

Data stream model: Frequency moments, Collision probability [Alon, Matias, Szegedy’99] [Monemizadeh, Woodruff’10] [Andoni et al.’11] [Crouch et al.’16]

Testing properties of distributions: Closeness [Goldreich, Ron’11] [Batu et al.’13] [Chan et al.’14], Conditional independence [Canonne et al.’18]

Estimating graph parameters: Number of connected components, Minimum spanning tree weight [Chazelle, Rubinfeld, Trevisan’05], Average distance [Goldreich, Ron’08], Number of triangles [Eden et al.’17]

Counting with Markov chain Monte Carlo methods: Counting vs. sampling [Jerrum, Sinclair’96] [Štefankovič et al.’09], Volume of convex bodies [Dyer, Frieze’91], Permanent [Jerrum, Sinclair, Vigoda’04]

etc.


SLIDE 13

Random variable X over sample space Ω ⊂ R+

Classical sample: one value x ∈ Ω, sampled with probability px

Quantum sample: one (controlled-)execution of a quantum sampler SX or SX⁻¹, where

SX|0⟩ = ∑x∈Ω √px |ψx⟩|x⟩,  with ψx an arbitrary unit vector
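Ignoring the garbage registers |ψx⟩, the state SX|0⟩ is just the entrywise square root of the distribution (a minimal numpy sketch):

```python
import numpy as np

def sampler_state(p):
    """Amplitudes of S_X|0> = sum_x sqrt(p_x)|x> for a distribution p
    (the garbage registers |psi_x> are omitted in this sketch)."""
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0), "p must be a probability distribution"
    return np.sqrt(p)

p = [0.5, 0.3, 0.2]
state = sampler_state(p)
print(np.isclose(np.linalg.norm(state), 1.0))  # True: the state is a unit vector
print(np.allclose(state**2, p))                # True: measuring |x> has probability p_x
```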

SLIDE 14

Can we use quadratically fewer samples in the quantum setting?

SLIDE 15

Can we use quadratically fewer samples in the quantum setting?

| Method | Number of samples | Conditions |
| Classical samples (Chebyshev’s inequality) | Δ²/ε² | Δ² ≥ E(X²)/E(X)² |
| [Brassard et al.’11] [Wocjan et al.’09] [Montanaro’15] | √(B/E(X)) / ε | Sample space Ω ⊂ [0,B] |
| [Montanaro’15] | Δ²/ε | Δ² ≥ E(X²)/E(X)² |
| [Li, Wu’17] | (Δ/ε)·(H/L) | Δ² ≥ E(X²)/E(X)², L ≤ E(X) ≤ H |
| Our result | (Δ/ε)·log³(H/E(X)) | Δ² ≥ E(X²)/E(X)², E(X) ≤ H |


SLIDE 20

Our Approach

SLIDE 21

Amplitude Estimation Algorithm [Brassard et al.’02] [Brassard et al.’11] [Wocjan et al.’09] [Montanaro’15]

Input: random variable X on sample space Ω ⊂ [0,B]

Ampl-Est: O( √(B/E(X)) / ε ) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X)

SLIDE 23

Amplitude Estimation Algorithm [Brassard et al.’02] [Brassard et al.’11] [Wocjan et al.’09] [Montanaro’15]

Input: random variable X on sample space Ω ⊂ [0,B]

Ampl-Est: O( √(B/E(X)) / ε ) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X)

If B ≤ E(X²)/E(X): the number of samples is O( √(E(X²)) / (ε·E(X)) ).

If B ≫ E(X²)/E(X)?

SLIDE 24

[Figure: the distribution px over the outcomes x of the random variable X; the largest outcome is B.]

SLIDE 25

[Figure: the distribution of the truncated random variable Xb; the new largest outcome is b ≈ E(X²)/E(X), which can be much smaller than B.]

SLIDE 28

Ampl-Est: O( √(B/E(X)) / ε ) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X). Input: random variable X on sample space Ω ⊂ [0,B].

If B ≤ E(X²)/E(X): the number of samples is O( √(E(X²)) / (ε·E(X)) ).

If B ≫ E(X²)/E(X): map the outcomes larger than E(X²)/E(X) to 0.

Lemma: If b ≥ E(X²)/(ε·E(X)) then (1 − ε)·E(X) ≤ E(Xb) ≤ E(X).

Problem: given Δ² ≥ E(X²)/E(X)², how to find a threshold b ≈ E(X)·Δ²?
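The truncation lemma is easy to check numerically on a heavy-tailed example (a sketch; the four-outcome distribution and ε = 0.9 are assumptions):

```python
import numpy as np

def truncated_mean(values, probs, b):
    """E(X_b): outcomes larger than b are mapped to 0."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(np.where(values <= b, values, 0.0) * probs))

# Heavy-tailed example distribution (an assumption for illustration).
values = np.array([1.0, 2.0, 5.0, 1000.0])
probs = np.array([0.5, 0.3, 0.19, 0.01])
mean = float(values @ probs)       # E(X)   = 12.05
second = float((values**2) @ probs)  # E(X^2)

eps = 0.9
b = second / (eps * mean)          # threshold from the lemma; here b < 1000
mb = truncated_mean(values, probs, b)
print((1 - eps) * mean <= mb <= mean)  # True, as the lemma guarantees
```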

SLIDE 29

Problem: given Δ² ≥ E(X²)/E(X)², how to find a threshold b ≈ E(X)·Δ²?

Solution: use the Amplitude Estimation algorithm to do a logarithmic search on b (given an upper bound H ≥ E(X)).


SLIDE 31

Solution: use the Amplitude Estimation algorithm to do a logarithmic search on b (given an upper bound H ≥ E(X)).

| Threshold | Input r.v. | Number of samples | Estimate |
| b0 = H·Δ² | Xb0 | Δ | μ̃0 |
| b1 = (H/2)·Δ² | Xb1 | Δ | μ̃1 |
| b2 = (H/4)·Δ² | Xb2 | Δ | μ̃2 |
| … | … | … | … |

Stopping rule: μ̃i ≠ 0. Output: bi.

Theorem: the first non-zero μ̃i is obtained w.h.p. when 2·E(X)·Δ² ≤ bi ≤ 10·E(X)·Δ²
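The search can be sketched classically by replacing Amplitude-Estimation with its idealized behavior from the analysis: with Δ samples it returns 0 exactly when E(Xb)/b < 1/Δ². The constants differ from the theorem's window, so this is only an illustration of the search logic (distribution and H are assumptions):

```python
import numpy as np

def log_search_threshold(values, probs, H, delta_sq, max_iters=60):
    """Halve b = (H / 2^i) * Delta^2 until the (idealized) estimate is
    non-zero: Amplitude-Estimation with Delta samples is modeled as
    returning 0 exactly when E(X_b)/b < 1/Delta^2."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    b = H * delta_sq
    for _ in range(max_iters):
        mean_b = float(np.sum(np.where(values <= b, values, 0.0) * probs))
        if mean_b / b >= 1.0 / delta_sq:  # non-zero estimate: stop
            return b
        b /= 2                            # halve the threshold and retry
    raise RuntimeError("no non-zero estimate found")

values = [1.0, 10.0]
probs = [0.9, 0.1]
mean = sum(v * p for v, p in zip(values, probs))        # E(X)   = 1.9
second = sum(v * v * p for v, p in zip(values, probs))  # E(X^2) = 10.9
delta_sq = second / mean**2
b = log_search_threshold(values, probs, H=10.0, delta_sq=delta_sq)
print(b, mean * delta_sq)  # the returned b is within a small constant factor of E(X)*Delta^2
```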

SLIDE 35

Analysis

Ingredient 1 [Brassard et al.’02]: the output of Amplitude-Estimation is 0 w.h.p. if and only if the normalized mean E(Xb)/b is below the inverse-square of the number of samples, here 1/Δ².

Ingredient 2: if b ≥ 10·E(X)·Δ² then E(Xb)/b ≤ E(X)/b ≤ 1/(10·Δ²).

Ingredient 3: if b ≈ E(X)·Δ² then E(Xb)/b ≈ E(X)/b ≈ 1/Δ².

Theorem: the first non-zero μ̃i is obtained w.h.p. when 2·E(X)·Δ² ≤ bi ≤ 10·E(X)·Δ²

SLIDE 36

Applications

SLIDE 39

Application 1: approximating graph parameters

Input: graph G = (V,E) with n vertices, m edges, t triangles

Query access: unitaries
Odeg|v⟩|0⟩ = |v⟩|deg(v)⟩ (degree query)
Opair|v⟩|w⟩|0⟩ = |v⟩|w⟩|(v,w) ∈ E ?⟩ (pair query)
Ongh|v⟩|i⟩|0⟩ = |v⟩|i⟩|vi⟩, where vi is the i-th neighbor of v (neighbor query)

Result:
Θ̃( √n/m^(1/4) ) degree/neighbor quantum queries to approximate m (vs. Θ̃( n/√m ) classical degree/neighbor queries)
Θ̃( √n/t^(1/6) + m^(3/4)/√t ) degree/pair/neighbor quantum queries to approximate t (vs. Θ̃( n/t^(1/3) + m^(3/2)/t ) classical degree/pair/neighbor queries)

[Goldreich, Ron’08] [Seshadhri’15] [Eden, Levi, Ron’15] [Eden, Levi, Ron, Seshadhri’17]
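A classical stand-in for this query model can clarify what the three oracles answer (a sketch; the quantum unitaries apply the same maps in superposition, and the example graph is an assumption):

```python
class GraphOracle:
    """Classical stand-in for the degree/pair/neighbor query access
    (O_deg, O_pair, O_ngh apply these maps in superposition)."""

    def __init__(self, adj):
        # adjacency lists, sorted so that "i-th neighbor" is well defined
        self.adj = {v: sorted(ws) for v, ws in adj.items()}

    def deg(self, v):       # degree query
        return len(self.adj[v])

    def pair(self, v, w):   # pair query: is (v, w) an edge?
        return w in self.adj[v]

    def ngh(self, v, i):    # neighbor query: i-th neighbor of v
        return self.adj[v][i]

# Triangle {0,1,2} plus a pendant vertex 3 (example graph is an assumption).
g = GraphOracle({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]})
print(g.deg(2), g.pair(0, 1), g.ngh(2, 0))  # 3 True 0
```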

SLIDE 42

Application 2: frequency moments in the streaming model

Input: (finite) stream of updates xi ← xi + δ on a vector x = (0,…,0) of dimension n
Output: (at the end of the stream) an approximation of Fk = ∑i=1..n |xi|^k (moment of order k ≥ 3)

Algorithm with the smallest possible memory M using P passes over the same stream?

Result: M = Õ( n^(1−2/k)/P² ) qubits of memory (vs. M = Θ̃( n^(1−2/k)/P ) classical bits of memory) [Monemizadeh, Woodruff’10] [Andoni, Krauthgamer, Onak’11]
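The quantity being approximated is simple to compute exactly when memory is no concern; the streaming algorithms approximate it in sublinear memory (a sketch; the example stream is an assumption):

```python
from collections import defaultdict

def frequency_moment(updates, k):
    """Process a stream of updates (i, delta) to x = (0, ..., 0) and
    return F_k = sum_i |x_i|^k exactly (memory-hungry baseline)."""
    x = defaultdict(int)
    for i, delta in updates:
        x[i] += delta
    return sum(abs(v) ** k for v in x.values())

# Example stream: (3,+5) ; (2,-6) ; (3,-1), so x_3 = 4 and x_2 = -6.
print(frequency_moment([(3, +5), (2, -6), (3, -1)], k=3))  # |4|^3 + |-6|^3 = 280
```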

SLIDE 43

Conclusion


SLIDE 46

The mean of a random variable X can be estimated with multiplicative error ε using Õ( (Δ/ε)·log³(H/E(X)) ) quantum samples, given Δ² ≥ E(X²)/E(X)² and H ≥ E(X).

Lower bounds:
Ω( (Δ − 1)/ε ) quantum samples
Ω( (Δ² − 1)/ε² ) copies of the state SX|0⟩ = ∑x∈Ω √px |ψx⟩|x⟩

arXiv: 1807.06456

SLIDE 47

Extra slides


SLIDE 49

Subroutine: the Amplitude Estimation algorithm

Sampler: SX|0⟩ = ∑x∈Ω √px |ψx⟩|x⟩ on sample space Ω ⊂ [0,B]

Result: O( √(B/E(X)) / ε ) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X)

Reduction to a Bernoulli sampler [Brassard et al.’11] [Wocjan et al.’09] [Montanaro’15]:

∑x∈Ω √px |ψx⟩|x⟩|0⟩
→ (controlled rotation) ∑x∈Ω √px |ψx⟩|x⟩( √(1 − x/B) |0⟩ + √(x/B) |1⟩ )
→ (reordering) √(1 − E(X)/B) |φ0⟩|0⟩ + √(E(X)/B) |φ1⟩|1⟩ = SY|0⟩
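The reduction can be checked numerically: after the controlled rotation, the total squared amplitude on the |1⟩-branch equals E(X)/B (a sketch with an assumed three-outcome distribution):

```python
import numpy as np

# Verify the Bernoulli reduction (distribution, outcomes, and B are assumptions).
p = np.array([0.2, 0.5, 0.3])   # p_x
x = np.array([1.0, 2.0, 4.0])   # outcomes in [0, B]
B = 4.0

amp1 = np.sqrt(p) * np.sqrt(x / B)  # amplitude of |x>|1> for each outcome x
prob1 = float(np.sum(amp1**2))      # squared norm of the |1> branch
mean = float(p @ x)                 # E(X) = 2.4
print(np.isclose(prob1, mean / B))  # True: the |1> branch carries E(X)/B = 0.6
```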

SLIDE 51

Subroutine: the Amplitude Estimation algorithm

Sampler: SX|0⟩ = ∑x∈Ω √px |ψx⟩|x⟩ on sample space Ω ⊂ [0,B]

Result: O( √(B/E(X)) / ε ) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X)

Expectation of a Bernoulli sampler [Brassard et al.’02]: SY|0⟩ = √(1 − E(X)/B) |φ0⟩|0⟩ + √(E(X)/B) |φ1⟩|1⟩

Step 0: the Grover operator G = SY⁻¹ (I − 2|0⟩⟨0|) SY (I − 2·I ⊗ |1⟩⟨1|) has eigenvalues e^(±2iθ), where θ = sin⁻¹( √(E(X)/B) ).

Step 1: use the Phase Estimation algorithm on G for t ≥ Ω( √(B/E(X)) / ε ) steps (i.e. using t quantum samples) to get an estimate θ̃ of ±θ.

Step 2: output sin²(θ̃) as an estimate of E(X)/B (i.e. μ̃ = B·sin²(θ̃)).
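Step 0 can be checked numerically: in the 2-dimensional subspace spanned by the |0⟩- and |1⟩-branches, G acts as a rotation by angle 2θ, whose eigenvalues are e^(±2iθ) (a sketch; the value p = E(X)/B = 0.3 is an assumption):

```python
import numpy as np

p = 0.3                                  # E(X)/B, an assumed example value
theta = np.arcsin(np.sqrt(p))

# G restricted to the 2D subspace: a rotation by 2*theta.
G = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
              [np.sin(2 * theta),  np.cos(2 * theta)]])

eigs = np.linalg.eigvals(G)
expected = np.array([np.exp(-2j * theta), np.exp(2j * theta)])
ok = (np.allclose(sorted(eigs.imag), sorted(expected.imag))
      and np.allclose(eigs.real, np.cos(2 * theta)))
print(ok)  # True: the eigenvalues are e^{+-2i*theta}

# Step 2 sanity check: sin^2 recovers the amplitude exactly.
print(np.isclose(np.sin(theta) ** 2, p))  # True
```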
SLIDE 52

No a priori information on E(X²)/E(X)²

Result: there is an optimal algorithm that approximates the mean of any quantum sampler SX over Ω ⊂ [0,B] with Θ̃( √(B/(ε·E(X))) + √(E(X²))/(ε·E(X)) ) quantum samples, when there is no a priori information on X.

→ Quantization of [Dagum, Karp, Luby, Ross’00]


SLIDE 54

Lemma: If b ≥ E(X²)/(ε·E(X)) then (1 − ε)·E(X) ≤ E(X<b) ≤ E(X).

Proof:
∙ E(X≥b) ≤ E(X²)/b ≤ ε·E(X)  (using X·1[X ≥ b] ≤ X²/b pointwise)
∙ E(X<b) = E(X) − E(X≥b) ≥ (1 − ε)·E(X)

Lemma: If b ≥ 10⁴·E(X)·Δ² then E(X<b)/b ≤ 1/(10⁴·Δ²).

Proof: E(X<b)/b ≤ E(X)/(10⁴·E(X)·Δ²) = 1/(10⁴·Δ²)

SLIDE 55

Example

[Figure: an example distribution px over outcomes x ∈ [0,B].]


SLIDE 57

Example

[Figure: the curve E(X<b)/b as a function of the threshold b; it crosses the level 1/Δ² around b = E(X)·Δ².]

SLIDE 58

Final algorithm:

Step 1: Logarithmic search on b until Amplitude-Estimation(SX<b, Δ) ≠ 0
→ with high probability, get 2·E(X)·Δ² ≤ b ≤ 10⁴·E(X)·Δ²   [Δ·log³(H/E(X)) samples]

Step 2: Set the threshold d = b/ε and output Amplitude-Estimation(SX<d, Δ/ε^(3/2))
→ with high probability, get |μ̃ − E(X)| ≤ ε·E(X)   [Δ/ε^(3/2) samples]

Step 2bis: Slightly refined algorithm, adapted from [Heinrich’01, Montanaro’15]   [Δ/ε samples]

SLIDE 62

Application 1: counting the number of edges in a graph

Estimator X :=
  1. Sample a vertex v ∈ V uniformly at random
  2. Sample a neighbor w of v uniformly at random
  3. If deg(v) < deg(w) (or deg(v) = deg(w) and v <lex w): output λ(v,w) = n·deg(v). Else: output 0.

Lemma: E(X) = m and E(X²)/E(X)² ≤ O(√n) (when m ≥ Ω(n)). [Goldreich, Ron’08] [Seshadhri’15]

Quantum sampler: SX|0⟩ = ∑v∈V ∑w∈N(v) √(1/(n·deg(v))) |v⟩|w⟩|λ(v,w)⟩

Result: O(n^(1/4)/ε) quantum samples (= quantum queries) to approximate m (when m ≥ Ω(n)).
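The estimator itself is classical and easy to check exactly: E(X) = m because each edge contributes through exactly one of its two endpoints (a sketch; the example graph is an assumption):

```python
import random

def edge_estimator(adj, rng):
    """One classical sample of the estimator X for the edge count."""
    n = len(adj)
    v = rng.randrange(n)                      # uniform vertex
    w = rng.choice(adj[v])                    # uniform neighbor of v
    if (len(adj[v]), v) < (len(adj[w]), w):   # deg(v) < deg(w), ties by label
        return n * len(adj[v])
    return 0

# Exact expectation, summing over all (v, w) choices (example graph assumed).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
n = len(adj)
exact = sum((1 / n) * (1 / len(adj[v])) * (n * len(adj[v]))
            for v in adj for w in adj[v]
            if (len(adj[v]), v) < (len(adj[w]), w))
m = sum(len(ws) for ws in adj.values()) // 2
print(exact, m)  # 4.0 4: the estimator is unbiased, E(X) = m
```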


SLIDE 64

Application 1: counting the number of edges in a graph

Estimator X :=
  1. Sample a vertex v ∈ V uniformly at random
  2. Sample a neighbor w of v uniformly at random
  3. If deg(v) < deg(w) (or deg(v) = deg(w) and v <lex w): output λ(v,w) = n·deg(v). Else: output 0.

Lemma: E(X) = m and E(X²)/E(X)² ≤ O(n/√m), but we don’t know n/√m… [Goldreich, Ron’08] [Seshadhri’15]

Quantum sampler: SX|0⟩ = ∑v∈V ∑w∈N(v) √(1/(n·deg(v))) |v⟩|w⟩|λ(v,w)⟩

Result: Θ(n^(1/2)/m^(1/4)) quantum samples (= quantum queries) to approximate m.

SLIDE 72

Application 2: frequency moments in the streaming model

Stream of updates to x ∈ Z^n: (3,+5) ; (2,-6) ; (3,-1) ; …

Frequency moment of order k ≥ 3: Fk = ∑i=1..n |xi|^k

Best P-pass algorithm with memory M approximating Fk?

Classically: P·M = Θ(n^(1−2/k)) [Monemizadeh, Woodruff’10] [Andoni, Krauthgamer, Onak’11]
1 pass + memory M = n^(1−2/k)/P gives 1 sample from a random variable X with E(X) ≈ Fk and E(X²)/E(X)² ≤ P.

Quantumly: P²·M = O(n^(1−2/k))
1 pass + memory M = n^(1−2/k)/P² gives 1 quantum sample* SX from a r.v. X with E(X) ≈ Fk and E(X²)/E(X)² ≤ P².

* SX⁻¹ can be done in one pass also

SLIDE 76

Application 3: counting the number of triangles in a graph

More complicated than edges… [Eden, Levi, Ron’15] [Eden, Levi, Ron, Seshadhri’17]

Main subroutine: estimator X for the number of triangles adjacent to any vertex v

1 classical sample = O(1) queries in expectation, but O(√m) in the worst case

Variable-time Amplitude Estimation: estimate the amplitude when some “branches” of the computation stop earlier than the others

Result: Θ̃( √n/t^(1/6) + m^(3/4)/√t ) quantum queries for triangle counting, vs. Θ̃( n/t^(1/3) + m^(3/2)/t ) classical queries