Quantum Chebyshev’s Inequality and Applications


  1. Quantum Chebyshev’s Inequality and Applications. Yassine Hamoudi, Frédéric Magniez. IRIF, Université Paris Diderot, CNRS. CQT 2019. arXiv: 1807.06456

  2. Buffon’s needle. A needle dropped randomly on a floor with equally spaced parallel lines will cross one of the lines with probability 2/π. Buffon, G., Essai d'arithmétique morale, 1777.
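
For illustration, a minimal Python sketch of the needle-dropping experiment, assuming the needle length equals the line spacing (in which case the true crossing probability is exactly 2/π):

```python
import math
import random

# Monte Carlo sketch of Buffon's needle, assuming needle length == line spacing,
# so the true crossing probability is 2/pi ~= 0.6366.
def buffon_crossing_rate(trials: int, needle_len: float = 1.0, spacing: float = 1.0) -> float:
    crossings = 0
    for _ in range(trials):
        y = random.uniform(0.0, spacing / 2)      # distance from the needle's center to the nearest line
        theta = random.uniform(0.0, math.pi / 2)  # needle angle relative to the lines
        if (needle_len / 2) * math.sin(theta) >= y:
            crossings += 1
    return crossings / trials

print(buffon_crossing_rate(200_000))  # empirical crossing rate
print(2 / math.pi)                    # true probability for comparison
```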

  3. Monte Carlo algorithms: use repeated random sampling and statistical analysis to estimate parameters of interest.

  4. Empirical mean: 1/ Repeat the experiment n times: n i.i.d. samples x_1, …, x_n ~ X. 2/ Output (x_1 + … + x_n)/n.

  5. Law of large numbers: (x_1 + … + x_n)/n → E(X) as n → ∞.
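
For illustration, a tiny Python sketch of the law of large numbers on a toy non-negative random variable (a fair die roll, so E(X) = 3.5, an example not taken from the deck):

```python
import random
import statistics

# The empirical mean of n i.i.d. samples drifts toward E(X) = 3.5 as n grows.
for n in (10, 1_000, 100_000):
    samples = [random.randint(1, 6) for _ in range(n)]
    print(n, statistics.mean(samples))
```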

  6. Empirical mean: μ̃ = (x_1 + … + x_n)/n with x_1, …, x_n ~ X. How fast does it converge to E(X)?

  7. Chebyshev’s inequality. Objective: |μ̃ − E(X)| ≤ ε E(X) with high probability, for a multiplicative error 0 < ε < 1 (assuming E(X), Var(X) are nonzero and finite).

  8. Number of samples needed: O(E(X²)/(ε² E(X)²)) (in fact O(Var(X)/(ε² E(X)²)) = O((1/ε²)·(E(X²)/E(X)² − 1))).

  9. The ratio E(X²)/E(X)² is the relative second moment.

  10. In practice: given an upper bound Δ² ≥ E(X²)/E(X)², take n = Ω(Δ²/ε²) samples.
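
A short Python sketch of this recipe for a toy random variable. The constant c (controlling the failure probability 1/c via Chebyshev's inequality) and the bound Δ² = 2 are arbitrary choices made here for illustration:

```python
import math
import random
import statistics

# With Delta^2 >= E(X^2)/E(X)^2, taking n = ceil(c * Delta^2 / eps^2) i.i.d. samples
# gives |mu~ - E(X)| <= eps * E(X) with probability >= 1 - 1/c (by Chebyshev's inequality).
def estimate_mean(sample_x, delta_sq: float, eps: float, c: float = 6.0) -> float:
    n = math.ceil(c * delta_sq / eps ** 2)
    return statistics.mean(sample_x() for _ in range(n))

# Toy X: a fair die roll, so E(X) = 3.5 and E(X^2)/E(X)^2 = (91/6)/3.5^2 ~= 1.24 <= 2.
print(estimate_mean(lambda: random.randint(1, 6), delta_sq=2.0, eps=0.05))
```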

  11. Example: edge counting. Problem: approximate the number m of edges in an n-vertex graph G.

  12. Estimator X := 1. Sample a vertex v ∈ V uniformly at random. 2. Sample a neighbor w of v uniformly at random. 3. If deg(v) < deg(w) (or deg(v) = deg(w) and v <_lex w), output n·deg(v); else output 0.

  13. Lemma: E(X) = m and E(X²)/E(X)² ≤ O(√n) (when m ≥ Ω(n)). [Goldreich, Ron’08] [Seshadhri’15]

  14. Consequence: O(√n/ε²) samples suffice to approximate m with multiplicative error ε.
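
To make the estimator concrete, here is a small Python sketch of one sample of X on an adjacency-list graph. Integer vertex labels stand in for the lexicographic tie-break, and isolated vertices (not discussed on the slide) are simply mapped to 0; the toy graph is an assumption of this sketch:

```python
import random
from typing import Dict, List

def edge_estimator_sample(adj: Dict[int, List[int]]) -> float:
    """One sample of the estimator X for the edge count m of an undirected graph (E(X) = m)."""
    n = len(adj)
    v = random.choice(list(adj))          # step 1: uniformly random vertex
    if not adj[v]:                         # isolated vertex: output 0 (edge case not on the slide)
        return 0.0
    w = random.choice(adj[v])              # step 2: uniformly random neighbor of v
    deg_v, deg_w = len(adj[v]), len(adj[w])
    # Step 3: each edge is charged to its endpoint of smaller degree (ties broken by label order).
    if deg_v < deg_w or (deg_v == deg_w and v < w):
        return n * deg_v
    return 0.0

# Toy graph: a triangle plus one pendant vertex, so m = 4 edges.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
estimates = [edge_estimator_sample(graph) for _ in range(200_000)]
print(sum(estimates) / len(estimates))    # ~= 4
```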

  15. Other applications. Counting with Markov chain Monte Carlo methods: Counting vs. sampling [Jerrum, Sinclair’96] [Štefankovič et al.’09], Volume of convex bodies [Dyer, Frieze’91], Permanent [Jerrum, Sinclair, Vigoda’04]. Data stream model: Frequency moments, Collision probability [Alon, Matias, Szegedy’99] [Monemizadeh, Woodruff] [Andoni et al.’11] [Crouch et al.’16]. Testing properties of distributions: Closeness [Goldreich, Ron’11] [Batu et al.’13] [Chan et al.’14], Conditional independence [Canonne et al.’18]. Estimating graph parameters: Number of connected components, Minimum spanning tree weight [Chazelle, Rubinfeld, Trevisan’05], Average distance [Goldreich, Ron’08], Number of triangles [Eden et al.’17], etc.

  16. Random variable X over sample space Ω ⊂ ℝ₊. Classical sample: one value x ∈ Ω, sampled with probability p_x.

  17. Quantum sample: one (controlled-) execution of a quantum sampler S_X or its inverse S_X⁻¹, where S_X|0⟩ = ∑_{x∈Ω} √p_x |ψ_x⟩|x⟩ and each |ψ_x⟩ is an arbitrary unit vector.

  18. Question: can we estimate E(X) with fewer samples in the quantum setting?
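
As a toy illustration of the definition (not the paper's sampler), the following numpy snippet writes down the state S_X|0⟩ for a small distribution, taking every garbage state |ψ_x⟩ to be trivial:

```python
import numpy as np

# Quantum-sample state with trivial garbage registers: S_X|0> = sum_x sqrt(p_x) |x>.
# Measuring the |x> register reproduces the classical distribution p.
p = np.array([0.1, 0.2, 0.3, 0.4])   # toy distribution over Omega = {0, 1, 2, 3}
state = np.sqrt(p)                   # amplitudes sqrt(p_x)
assert np.isclose(np.linalg.norm(state), 1.0)
print(np.abs(state) ** 2)            # measurement probabilities: back to p
```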

  19. Previous Works

  20. The Amplitude Estimation algorithm [Brassard et al.’02] [Brassard et al.’11] [Wocjan et al.’09]. Given S_X|0⟩ = ∑_{x∈Ω} √p_x |ψ_x⟩|x⟩, one can obtain (with 1 ancillary qubit + a controlled rotation):

  21. S_Y|0⟩ = ∑_{x∈Ω} √p_x |ψ_x⟩|x⟩ (√(1 − x/M_Ω)|0⟩ + √(x/M_Ω)|1⟩), where M_Ω = max{x ∈ Ω}.

  22. Equivalently, S_Y|0⟩ = √(1 − E(X)/M_Ω) |φ₀⟩|0⟩ + √(E(X)/M_Ω) |φ₁⟩|1⟩, where |φ₀⟩, |φ₁⟩ are some unit vectors.

  23. Observation: the Grover operator G = S_Y (I − 2|0⟩⟨0|) S_Y⁻¹ (I − 2 I⊗|1⟩⟨1|) has eigenvalues e^{±2iθ}, where θ = sin⁻¹(√(E(X)/M_Ω)).

  24. Algorithm: 1/ Apply Phase Estimation to G for t ≥ Ω(√(M_Ω/E(X))/ε) steps to get an estimate θ̃ such that |θ̃ − |θ|| ≤ 1/t. 2/ Output μ̃ = M_Ω · sin²(θ̃) as an estimate of E(X).

  25. Result: O(√(M_Ω/E(X))/ε) quantum samples suffice to obtain |μ̃ − E(X)| ≤ ε E(X). [Brassard et al.’02] [Brassard et al.’11] [Wocjan et al.’09]
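
As a sanity check of the observation on slide 23, here is a small numpy sketch (a classical simulation on a toy distribution, with trivial garbage states |ψ_x⟩, not the algorithm itself). It builds the state S_Y|0⟩ directly, forms the Grover operator under the sign convention Q = −A S₀ A⁻¹ S_χ of Brassard et al. (an assumption made here so that the eigenphases are exactly ±2θ), and recovers E(X) = M_Ω·sin²(θ) from the spectrum; no actual phase estimation is performed:

```python
import numpy as np

# Toy X: values and probabilities over Omega = {0, 1, 2, 4}, so M_Omega = 4 and E(X) = 2.4.
values = np.array([0.0, 1.0, 2.0, 4.0])
p      = np.array([0.1, 0.2, 0.3, 0.4])
M      = values.max()
a      = float(np.dot(p, values) / M)                      # E(X)/M_Omega

# |Psi> = S_Y|0> on registers |x>|flag>, with flag amplitudes (sqrt(1 - x/M), sqrt(x/M)).
flags = np.stack([np.sqrt(1 - values / M), np.sqrt(values / M)], axis=1)
psi   = (np.sqrt(p)[:, None] * flags).reshape(-1)          # dimension 4 * 2 = 8

proj_good = np.kron(np.eye(len(values)), np.diag([0.0, 1.0]))  # I (x) |1><1|
refl_psi  = np.eye(8) - 2 * np.outer(psi, psi)                 # equals S_Y (I - 2|0><0|) S_Y^-1
G = -refl_psi @ (np.eye(8) - 2 * proj_good)                    # Grover operator, Brassard et al. sign

# The nontrivial eigenphases of G are +-2*theta with theta = arcsin(sqrt(E(X)/M_Omega)).
phases = np.angle(np.linalg.eigvals(G))
nontrivial = [ph for ph in phases if min(abs(ph), abs(abs(ph) - np.pi)) > 1e-6]
theta_est = abs(nontrivial[0]) / 2
print(np.arcsin(np.sqrt(a)), theta_est)    # the two angles agree
print(M * np.sin(theta_est) ** 2)          # mu~ = M_Omega * sin^2(theta) ~= E(X) = 2.4
```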
