Quantum Chebyshev’s Inequality and Applications
Yassine Hamoudi, Frédéric Magniez
IRIF , Université de Paris, CNRS
Mean Estimation Problem

How many i.i.d. samples x1, x2, … from some unknown bounded r.v. X ∈ [0,B] do we need to compute an estimate μ̃ such that |μ̃ − E(X)| ≤ ε·E(X) with probability 2/3?

Sample mean: μ̃ = (x1 + … + xn)/n

Number of samples needed:
- Chernoff's bound: B/(ε²·E(X))
- Bernstein's inequality: Var(X)/(ε²·E(X)²) + B/(ε·E(X))
- Chebyshev's inequality: Var(X)/(ε²·E(X)²)
(note that Var(X) ≤ B·E(X))

In practice, we often know a bound Δ² ≥ E(X²)/E(X)² = Var(X)/E(X)² + 1, in which case Δ²/ε² samples suffice.
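The Chebyshev-style bound can be sketched as a classical baseline. The code below is a toy illustration (the uniform distribution, the constant factor 3, and all parameter values are our assumptions, not from the talk): it averages on the order of Δ²/ε² i.i.d. samples of a r.v. on [0,B].

```python
import random

def estimate_mean(sample, eps, delta2):
    """Average n ~ delta2 / eps^2 i.i.d. samples (the Chebyshev-style
    count from the slide; the constant 3 is an arbitrary choice)."""
    n = int(3 * delta2 / eps**2) + 1
    return sum(sample() for _ in range(n)) / n

# Toy example: X uniform on [0, B] with B = 10, so E(X) = 5 and
# E(X^2)/E(X)^2 = 4/3, hence delta2 = 2 is a valid upper bound.
random.seed(0)
mu = estimate_mean(lambda: random.uniform(0, 10), eps=0.1, delta2=2.0)
```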
Applications

Data stream model: Frequency moments, Collision probability [Alon, Matias, Szegedy’99] [Monemizadeh, Woodruff’10] [Andoni et al.’11] [Crouch et al.’16]

Testing properties of distributions: Closeness [Goldreich, Ron’11] [Batu et al.’13] [Chan et al.’14], Conditional independence [Canonne et al.’18]

Estimating graph parameters: Number of connected components, Minimum spanning tree weight [Chazelle, Rubinfeld, Trevisan’05], Average distance [Goldreich, Ron’08], Number of triangles

Counting with Markov chain Monte Carlo methods: Counting vs. sampling [Jerrum, Sinclair’96] [Štefankovič et al.’09], Volume of convex bodies [Dyer, Frieze’91], Permanent [Jerrum, Sinclair, Vigoda’04]

etc.
Quantum Mean Estimation Problem

Random variable X on finite sample space Ω ⊂ [0,B].

Classical sample: one value x ∈ Ω, sampled with probability px.

Quantum sample: one use of a unitary operator SX or its inverse SX⁻¹ satisfying
SX|0⟩ = Σx∈Ω √px |ψx⟩|x⟩, where each |ψx⟩ is an arbitrary unit vector.

Question: can we estimate E(X) with fewer samples in the quantum setting?
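As a sanity check on this definition, here is a small linear-algebra sketch (our toy example, not from the talk) of a state of the form SX|0⟩ = Σx √px |ψx⟩|x⟩, confirming that measuring the |x⟩ register yields x with probability px.

```python
import numpy as np

# Toy distribution over Omega = {0, 1, 2}: p = (0.5, 0.3, 0.2).
p = np.array([0.5, 0.3, 0.2])

# Choose |psi_x> = |0> for every x (any unit vectors would do); the
# state S_X|0> = sum_x sqrt(p_x) |psi_x>|x> then lives in C^2 (x) C^3.
psi = np.array([1.0, 0.0])
state = np.kron(psi, np.sqrt(p))

# Born rule on the |x> register: the marginal probabilities recover
# p_x, whatever the auxiliary vectors |psi_x> are.
probs = (np.abs(state.reshape(2, 3)) ** 2).sum(axis=0)
```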
Quantum Mean Estimation Problem

Given Δ² ≥ E(X²)/E(X)²:

Classical samples:
- B/(ε²·E(X)) (Chernoff)
- Δ²/ε² (Chebyshev)

Quantum samples:
- √(B/E(X))/ε (Amplitude Estimation)
- Δ²/ε [Montanaro’15]
- Δ/ε · log³(B/E(X)) (our contribution)
Amplitude Estimation: O(√(B/E(X))/ε) quantum samples to estimate E(X).

If B ≤ E(X²)/E(X): the number of samples is O(√E(X²)/(ε·E(X))) ≤ O(Δ/ε).

If B ≫ E(X²)/E(X): map the outcomes larger than E(X²)/E(X) to 0.

(Figure: the distribution (px) over outcomes x ∈ [0,B]; after truncation, the new largest outcome is b ≈ E(X²)/E(X).)
Lemma: If b ≥ E(X²)/(ε·E(X)) then (1 − ε)·E(X) ≤ E(Xb) ≤ E(X).

⇒ We can equivalently estimate the mean of the truncated r.v. Xb for any b ≥ E(X²)/(ε·E(X)).

Problem: E(X²)/E(X) is unknown… but we know Δ² ≥ E(X²)/E(X)². Can we take b ≈ E(X)·Δ²?
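The truncation lemma can be checked numerically. The sketch below (a hypothetical heavy-tailed toy distribution of our own, not from the talk) verifies that with b ≥ E(X²)/(ε·E(X)), the truncated mean E(Xb) stays within a (1 − ε) factor of E(X).

```python
# Toy heavy-tailed distribution on Omega = {1, 2, 100}, a subset of [0, B].
eps = 0.5
outcomes = [1.0, 2.0, 100.0]
probs = [0.6, 0.39, 0.01]

EX = sum(p * x for p, x in zip(probs, outcomes))       # E(X) = 2.38
EX2 = sum(p * x * x for p, x in zip(probs, outcomes))  # E(X^2)
b = EX2 / (eps * EX)                                   # lemma threshold

# X_b: outcomes strictly larger than b are mapped to 0.
EXb = sum(p * (x if x <= b else 0.0) for p, x in zip(probs, outcomes))
```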
Objective: given Δ² ≥ E(X²)/E(X)², how to find a threshold b ≈ E(X)·Δ²?

Solution: use the Amplitude Estimation algorithm (again) to do a logarithmic search on b:

Threshold:            b0 = B·Δ²   b1 = (B/2)·Δ²   b2 = (B/4)·Δ²   …
Input r.v.:           Xb0         Xb1             Xb2             …
Amplitude Estimation: μ̃0          μ̃1              μ̃2              …

Theorem: the first non-zero μ̃i is obtained w.h.p. when bi ≈ E(X)·Δ².
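This search can be mimicked classically. In the sketch below, a stand-in for Amplitude Estimation returns 0 exactly when the amplitude E(Xb)/b drops below a 1/Δ² cutoff (the cutoff, constants, and toy mean are all our assumptions, not the talk's); the loop then halves b until the first non-zero estimate, which lands near E(X)·Δ².

```python
B, delta2 = 1024.0, 4.0
EX = 3.0  # true mean, hidden from the search in the real algorithm

def fake_amplitude_estimation(b):
    """Stand-in: output 0 when the amplitude E(X_b)/b is below the
    cutoff 1/delta2, otherwise return the amplitude itself."""
    amplitude = min(EX, b) / b  # E(X_b)/b, roughly E(X)/b for large b
    return amplitude if amplitude >= 1.0 / delta2 else 0.0

b = B * delta2  # b_0 = B * delta2, then halve: b_i = (B / 2^i) * delta2
while fake_amplitude_estimation(b) == 0.0:
    b /= 2.0
# The loop stops at the first threshold within a factor 2 of E(X) * delta2.
```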
Analysis

Theorem: the first non-zero μ̃i is obtained w.h.p. when bi ≈ E(X)·Δ².

Ingredient 1: the output of Amplitude Estimation is 0 w.h.p. if and only if the estimated amplitude is below the inverse number of samples. [Brassard et al.’02]

Ingredient 2: if b ≥ 10·E(X)·Δ² then E(Xb)/b ≤ E(X)/b ≤ 1/(10·Δ²).

Ingredient 3: if b ≈ E(X)·Δ² then E(Xb)/b ≈ E(X)/b ≈ 1/Δ².
13
Input: graph G=(V,E) with n vertices, m edges, t triangles Query access: unitaries Odeg|v⟩|0⟩ = |v⟩|deg(v)⟩
Opair|v⟩|w⟩|0⟩ = |v⟩|w⟩|(v, w) ∈ E ?⟩ Ongh|v⟩|i⟩|0⟩ = |v⟩|i⟩|vi⟩
ith neighbor of v
(degree query) (pair query) (neighbor query)
Application 1: approximating graph parameters
13
Input: graph G=(V,E) with n vertices, m edges, t triangles
˜ Θ ( n t1/6 + m3/4 t ) quantum queries to triangle estimation
Result: Query access: unitaries Odeg|v⟩|0⟩ = |v⟩|deg(v)⟩
Opair|v⟩|w⟩|0⟩ = |v⟩|w⟩|(v, w) ∈ E ?⟩ Ongh|v⟩|i⟩|0⟩ = |v⟩|i⟩|vi⟩
ith neighbor of v
(degree query) (pair query) (neighbor query)
Application 1: approximating graph parameters
˜ Θ ( n m1/4) quantum queries for edge estimation
classical queries)
13
Input: graph G=(V,E) with n vertices, m edges, t triangles
˜ Θ ( n t1/6 + m3/4 t ) quantum queries to triangle estimation
Result:
(vs. ˜
Θ ( n t1/3 + m3/2 t ) classical queries)
Query access: unitaries Odeg|v⟩|0⟩ = |v⟩|deg(v)⟩
Opair|v⟩|w⟩|0⟩ = |v⟩|w⟩|(v, w) ∈ E ?⟩ Ongh|v⟩|i⟩|0⟩ = |v⟩|i⟩|vi⟩
ith neighbor of v
(degree query) (pair query) (neighbor query)
Application 1: approximating graph parameters
˜ Θ ( n m1/4) quantum queries for edge estimation
(vs. ˜
Θ ( n m )
[Goldreich, Ron’08] [Seshadhri’15] [Eden, Levi, Ron’15] [Eden, Levi, Ron, Seshadhri’17]
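For intuition on edge estimation, here is the classical warm-up behind these query models (a toy sketch of our own, not the quantum algorithm): since Σv deg(v) = 2m, averaging sampled degrees gives an unbiased estimate of m from degree queries alone.

```python
import random

random.seed(1)

# Hypothetical toy graph on n = 6 vertices, given as adjacency lists.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
n = len(adj)
m = sum(len(nbrs) for nbrs in adj.values()) // 2  # true edge count

# Simulate s degree queries at uniformly random vertices.
s = 1000
total = sum(len(adj[random.randrange(n)]) for _ in range(s))
m_hat = n * total / (2 * s)  # unbiased: E[m_hat] = m
```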
Application 2: frequency moments in the streaming model

Fk = Σi=1..n |xi|^k (moment of order k ≥ 3)

Input: stream of updates xi ← xi + δ to a vector x, initially x = (0,…,0) of dimension n.
Output: at the end of the stream, an estimate of Fk.

Question: what is the smallest memory size M needed to estimate Fk using P passes over the same stream?

Result: M = Õ(n^(1−2/k)/P²) qubits of memory (vs. M = Θ̃(n^(1−2/k)/P) classical bits of memory)
[Monemizadeh, Woodruff’10] [Andoni, Krauthgamer, Onak’11]
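For reference, the quantity being estimated is straightforward to compute without memory constraints. The sketch below (toy stream of our own) replays the updates xi ← xi + δ and evaluates Fk exactly; the streaming question is how to approximate this with sublinear memory.

```python
from collections import defaultdict

def exact_Fk(stream, k):
    """Replay updates (i, delta), i.e. x_i <- x_i + delta starting from
    the all-zero vector, then return F_k = sum_i |x_i|^k exactly."""
    x = defaultdict(int)
    for i, delta in stream:
        x[i] += delta
    return sum(abs(v) ** k for v in x.values())

# Toy stream: x ends up as x_1 = 3, x_2 = -2, so F_3 = 27 + 8 = 35.
stream = [(1, 1), (2, -2), (1, 2)]
f3 = exact_Fk(stream, k=3)
```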
Conclusion

The mean of a random variable X can be estimated with multiplicative error ε using Õ(Δ/ε · log³(MΩ/E(X))) quantum samples, given Δ² ≥ E(X²)/E(X)².

Open questions:
- Lower bound: Ω((Δ − 1)/ε) quantum samples, Ω((Δ² − 1)/ε²) copies of the state SX|0⟩ = Σx∈Ω √px |ψx⟩|x⟩