
The Multiplicative Quantum Adversary
Robert Špalek

Quantum query complexity
• Given a function f: {0,1}ⁿ → {0,1}ᵐ (the output is not necessarily Boolean), the quantum query complexity of f is the number of queries to the input bits that a quantum algorithm needs in order to compute f.


Pros and cons of the additive adversary

Pros:
• universal method: works for all functions
• often gives optimal bounds (e.g., search, sorting, graph problems)
• Γ, δ are intuitive: a hard distribution on input pairs and on inputs
• easy to compute
• composes optimally with respect to function composition

Cons:
• gives a trivial bound for low success probability
• no direct product theorem

In this work we overcome the cons, at the price of losing some of these pros (in particular, the intuitive semantics and the easy computability).

Origin of our method

Problem: search for k ones in an n-bit input.

[Ambainis ’05]: a new method based on an analysis of the eigenspaces of the reduced density matrix of the input register. It shows that Ω(√(kn)) queries are needed even for success probability 2^(−O(k)), reproving a result of [Klauck, Špalek & de Wolf ’04] that was based on the polynomial method.

• Pros: a tight bound that does not rely on polynomial approximation theory.
• Cons: tailored to one specific problem; a technical, complicated, non-modular proof without much intuition.

We improve Ambainis’s method as follows:
• put it into the well-studied adversary framework
• generalize it to all functions
• provide additional intuition, modularize the proof, and separate the quantum and combinatorial parts

However, the underlying combinatorial analysis stays the same, and we cannot omit any single detail.

Multiplicative adversary

A new type of adversary bound; this is where our method gets its name. Differences from the additive adversary:
• the adversary matrix Γ has different semantics than before
• we upper-bound the ratio W_{t+1}/W_t, not the difference
• the bound looks similar; however, it requires a common block-diagonalization of Γ and the input oracle O_i, and is therefore extremely hard to compute

additive:        ||Γ|| · min_i 1/||Γ_i||
multiplicative:  log(||Γ||) · min_{i,k} λ_min(Γ^(k)) / ||Γ_i^(k)||

Here Γ_i is the sub-matrix of Γ with zeroes on entries where x_i = y_i, Γ^(k) is the k-th block on the diagonal, Γ_i^(k) is the corresponding sub-matrix of Γ^(k), and λ_min(M) is the smallest eigenvalue of M.
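To make these quantities concrete, here is a minimal numpy sketch (not from the talk): it builds a toy matrix Γ indexed by inputs x, y ∈ {0,1}ⁿ and evaluates the additive quantity ||Γ|| · min_i 1/||Γ_i||. The matrix is random and rescaled so that λ_min(Γ) = 1; it is not a real adversary matrix for any function. The multiplicative quantity needs the block structure discussed later.

```python
import numpy as np
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))            # all 2^n inputs, as bit tuples
N = len(inputs)

# A toy positive definite Gamma; a real adversary matrix would be designed
# around a hard input distribution, which we do not attempt here.
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))
Gamma = A @ A.T + np.eye(N)
Gamma /= np.linalg.eigvalsh(Gamma)[0]               # rescale so lambda_min = 1

def sub_i(G, i, rows):
    """Sub-matrix of G with zeroes on entries (x, y) where x_i == y_i."""
    H = G.copy()
    for a, x in enumerate(rows):
        for b, y in enumerate(rows):
            if x[i] == y[i]:
                H[a, b] = 0.0
    return H

spectral = lambda M: np.linalg.norm(M, 2)           # spectral norm ||M||

# The additive quantity ||Gamma|| * min_i 1/||Gamma_i||:
additive = spectral(Gamma) * min(1.0 / spectral(sub_i(Gamma, i, inputs))
                                 for i in range(n))
print("additive adversary quantity:", additive)
```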

Multiplicative adversary matrix

• Consider a function f: {0,1}ⁿ → {0,1}ᵐ, a positive definite matrix Γ with minimal eigenvalue 1, and 1 < λ ≤ ||Γ||.

[Figure: the eigenvalues of Γ, sorted increasingly from 1 up to ||Γ||; the eigenspaces with eigenvalues smaller than the threshold λ form the bad subspace.]

• Π_bad is the projector onto the bad subspace, which is the direct sum of all eigenspaces corresponding to eigenvalues smaller than λ.
• F_z is the diagonal projector onto inputs evaluating to z.
• (Γ, λ) is a multiplicative adversary for success probability η iff for every z ∈ {0,1}ᵐ, ||F_z Π_bad|| ≤ η.

The condition "for every z ∈ {0,1}ᵐ, ||F_z Π_bad|| ≤ η" says that each vector (= superposition of inputs) from the bad subspace has a short projection onto each F_z.

If the final state of the input register lies in the bad subspace, then the algorithm has success probability at most η regardless of the outcome it outputs. Typically, η is the trivial success probability of a random choice.
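As a sketch of what the condition looks like computationally, the snippet below continues the toy numpy example from above, taking f to be the n-bit OR. Π_bad and F_z are built exactly as defined; the resulting η is simply whatever the random Γ happens to give, not a designed bound.

```python
# Continues the toy example: Gamma, inputs, N, n as defined above.
lam = 2.0                                           # a threshold with 1 < lam <= ||Gamma||
w, V = np.linalg.eigh(Gamma)                        # eigenvalues (ascending) and vectors
Vbad = V[:, w < lam]                                # eigenvectors below the threshold
Pi_bad = Vbad @ Vbad.T                              # projector onto the bad subspace

def F(z, f):
    """Diagonal projector onto inputs x with f(x) = z."""
    return np.diag([1.0 if f(x) == z else 0.0 for x in inputs])

f_or = lambda x: max(x)                             # f = OR of the n bits
eta = max(np.linalg.norm(F(z, f_or) @ Pi_bad, 2) for z in (0, 1))
print("(Gamma, lam) is a multiplicative adversary for success prob.", eta)
```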

Evolution of the progress function

• Consider an algorithm A running in time T, computing a function f with success probability at least η + ζ, and a multiplicative adversary (Γ, λ).
• We run A on the input superposition δ with Γδ = δ. Then:
  1. W_0 = 1 (trivial)
  2. each W_{t+1}/W_t ≤ max_i ||O_i Γ O_i Γ⁻¹||
  3. W_T ≥ λζ²/16

Proof:
• Step 2 is very simple: W_t is an average of scalar products of the states |φ_x^t⟩, and W_{t+1} is an average of scalar products of U_{t+1} O |φ_x^t⟩. The unitaries cancel, and the oracle calls can be absorbed into Γ, forming O_i Γ O_i, where O_i: |x⟩ → (−1)^(x_i) |x⟩.
• Step 3: [Figure: the probability distribution of ρ_T^I over the eigenspaces of Γ, with P[good] the probability mass on the good subspace, i.e., on eigenvalues ≥ λ.] Lower-bound the area under the curve: ⟨Γ, ρ_T^I⟩ ≥ λ · P[good]. In the bad subspace the success probability is at most η, and in the good subspace it is at most 1; by [Bernstein & Vazirani ’93], A can succeed with probability at most η + 4√(P[good]). Since A succeeds with probability at least η + ζ, this forces P[good] ≥ ζ²/16, hence W_T ≥ λζ²/16. q.e.d.
• We get the lower bound T ≥ MAdv_{η,ζ}(f) with

  MAdv_{η,ζ}(f) = max_{(Γ,λ)} log(λζ²/16) / log(max_i ||O_i Γ O_i Γ⁻¹||)
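Once max_i ||O_i Γ O_i Γ⁻¹|| is known, the statement turns directly into a lower-bound calculation. A sketch, again on the toy Γ from above (so the resulting number carries no meaning; a designed Γ with a large threshold λ is needed for a nontrivial bound):

```python
# Continues the toy example. O_i flips the sign of |x> whenever x_i = 1.
def O(i):
    return np.diag([(-1.0) ** x[i] for x in inputs])

ratio = max(np.linalg.norm(O(i) @ Gamma @ O(i) @ np.linalg.inv(Gamma), 2)
            for i in range(n))

zeta = 0.4                                          # extra success over eta
# T >= log(lam * zeta^2 / 16) / log(ratio); this is positive only when
# lam > 16 / zeta^2, so a useful adversary matrix needs a large threshold.
print("MAdv-style bound on T:", np.log(lam * zeta**2 / 16) / np.log(ratio))
```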

Block-diagonalization of Γ and O_i

• How can we efficiently upper-bound ||O_i Γ O_i · Γ⁻¹||?
• The eigenspaces of the conjugated matrix O_i Γ O_i overlap different eigenspaces of Γ, and we want them to cancel as much as possible so that the norm above is small.

[Figure: the spectra of Γ and of O_i Γ O_i side by side, and the spectrum of the product O_i Γ O_i · Γ⁻¹, whose eigenvalues stay close to 1 when the cancellation is good.]

• We still need the condition on the bad subspace.
• This makes multiplicative adversary matrices hard to design.

• By block-diagonalizing Γ and O_i together, we can bound each block separately.
• Since the eigenvalues within one block do not differ as much as in the whole matrix, we can use bounds such as λ_min(M) ≤ λ ≤ ||M|| (for every eigenvalue λ of M) without losing too much.
• This gives the bound

  ||O_i Γ O_i · Γ⁻¹|| ≤ 1 + 2 max_k ||Γ_i^(k)|| / λ_min(Γ^(k))

where Γ^(k) is the k-th block on the diagonal and Γ_i^(k) is the sub-matrix of Γ^(k) with zeroes on entries where x_i = y_i.
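Evaluating the right-hand side for a given block structure is mechanical; here is a sketch, continuing the toy example. The `blocks` argument lists index ranges of the diagonal blocks; below it is the trivial one-block structure, since finding the right common block-diagonalization depends on the particular Γ and is the hard part.

```python
# Continues the toy example; sub_i() is defined in the first sketch.
def blockwise_bound(Gamma, i, blocks):
    """1 + 2 max_k ||Gamma_i^(k)|| / lambda_min(Gamma^(k)) over the blocks."""
    worst = 0.0
    for lo, hi in blocks:                           # each block is an index range
        Gk = Gamma[lo:hi, lo:hi]                    # k-th diagonal block
        Gki = sub_i(Gk, i, inputs[lo:hi])           # zero entries with x_i == y_i
        worst = max(worst, np.linalg.norm(Gki, 2) / np.linalg.eigvalsh(Gk)[0])
    return 1.0 + 2.0 * worst

# With the whole space as a single block (always valid, but usually weak):
print(max(blockwise_bound(Gamma, i, [(0, N)]) for i in range(n)))
```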

• The final multiplicative adversary bound is

  MAdv_{η,ζ}(f) ≥ max_{Γ,λ} log(ζ²λ/16) · min_{i,k} λ_min(Γ^(k)) / (2 ||Γ_i^(k)||)

Reading the formula: you pick the success probability η of a random choice and the additional success ζ; the maximum is over all multiplicative adversaries (Γ, λ); λ is proportional to ||Γ|| and has to cancel the ζ²; and the minimum is over the input bits i = 1, ..., n and the blocks on the diagonal.
• You don't have to use the finest block-diagonalization. Any one is good, including using the whole space as a single block, but then the obtained lower bound need not be very strong.
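For completeness, a sketch one could use to evaluate the displayed bound for a candidate (Γ, λ) and block structure, with the same toy caveats as before (random Γ, trivial block structure, placeholder ζ):

```python
# Continues the toy example; sub_i(), lam, zeta as defined above.
def madv_lower_bound(Gamma, lam, zeta, blocks):
    """log(zeta^2 lam / 16) * min_{i,k} lambda_min(G^(k)) / (2 ||G_i^(k)||)."""
    worst = min(np.linalg.eigvalsh(Gamma[lo:hi, lo:hi])[0] /
                (2.0 * np.linalg.norm(sub_i(Gamma[lo:hi, lo:hi], i,
                                            inputs[lo:hi]), 2))
                for i in range(n) for lo, hi in blocks)
    return np.log(zeta**2 * lam / 16.0) * worst

print(madv_lower_bound(Gamma, lam, zeta, [(0, N)]))
```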

Example: Lower bound for search

• Given an n-bit string with exactly one 1. Task: find it.
• MAdv_{1/n, ζ}(Search_n) = Ω(ζ² √n)
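A hedged numeric reading of the bound (the constant hidden in the Ω is not stated in the talk; c below is a made-up placeholder):

```python
import math

n, zeta, c = 10**6, 0.5, 1.0                        # c is a placeholder constant
# Any algorithm finding the unique 1 with success probability 1/n + zeta
# needs on the order of zeta^2 * sqrt(n) queries:
print("queries >= ~", c * zeta**2 * math.sqrt(n))   # ~250 for these values
```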
