Span programs and other algorithmic tools
Ashwin Nayak, University of Waterloo
Query algorithms
- Wish to compute a Boolean function f : {0,1}n ⟶ {0,1}
- Input : x ∈ {0,1}n, given as a black box
- Output : f(x)
- With how few queries t can we compute f ?
[Circuit diagram: the oracle Ox maps |i, b⟩ ↦ |i, b ⊕ xi⟩; the algorithm interleaves unitaries U0, U1, U2, …, Ut with queries to Ox, starting from |0…0⟩]
- Output : 0/1
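As a classical analogue of this query model, here is a minimal sketch in Python: the input is hidden behind a black box that counts how many positions are read (the `Oracle` class and `classical_or` are illustrative names, not from the slides).

```python
class Oracle:
    """Black-box access to the input x; counts queries (illustrative)."""
    def __init__(self, x):
        self._x = x
        self.queries = 0

    def __call__(self, i):
        self.queries += 1
        return self._x[i]

def classical_or(oracle, n):
    # A deterministic classical algorithm may need all n queries for OR_n;
    # Grover search needs only about sqrt(n) quantum queries.
    return int(any(oracle(i) for i in range(n)))

ox = Oracle([0, 0, 1, 0])
assert classical_or(ox, 4) == 1
assert ox.queries == 3  # any() stops at the first 1, found at index 2
```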
Query algorithms
- Most known quantum algorithms fit this framework
- e.g., Unordered search, Integer factoring
- Model for most expensive operation
- e.g., comparisons in Sorting
- Often captures time complexity
- Can show the limitation of heuristics
- e.g., quantum parallelism in solving NP-hard problems
Quantum algorithm design
- We have seen several recipes, and their applications
- Amplitude amplification : Unordered search
- Fourier sampling : Factoring, Discrete logarithms
- Quantum walks : Element distinctness, Triangle finding
- Each speeds up a classical technique: brute force search,
finding symmetries, search by random walk
Is there a universal recipe?
[Cartoon: Little boy to his math teacher, “Rather than learning how to solve that, shouldn’t we be learning how to operate software that can solve that problem?”]
Development of span programs
- Adversary method (basic) [Ambainis, 2000]
- Adversary method [Høyer, Lee, Špalek, 2007]
- Algorithm for the NAND tree [Farhi, Goldstone, Gutmann, 2007]
- Span programs [Reichardt, Špalek, 2008]
- Near optimality of span programs [Reichardt, 2009]
- Optimality of span programs [Lee, Mittal, Reichardt, Špalek, Szegedy, 2011]
Span programs
- Boolean function f : {0,1}n ⟶ {0,1}
- Span program = a sequence of vectors vx ∈ Rn ⨂ Rd, one for each input x, for some d
- Each vx is of the form vx = ∑j ej ⨂ vx,j
- Constraint : ∑j ∈ N ⟨vx,j|vy,j⟩ = 1 for every pair x, y s.t.
f(x) ≠ f(y), N = { j : xj ≠ yj }
- Complexity = maxx ||vx||²
Example : ORn
- f(x1, x2, …, xn) = (x1 ∨ x2 ∨ … ∨ xn)
- vx ∈ Rn ⨂ C, for every input x
- ∑j ∈ N ⟨vx,j|vy,j⟩ = 1 when x = 0n, and y ≠ 0n
- Complexity
- ||v0…0||² = α²n, ||vy||² = 1/α² (y ≠ 0n)
- pick α² = 1/√n, so the maximum is √n
[Diagram: a single ∨ gate over x1, x2, x3, …, xn]
- v0n = α(1, 1, 1, …, 1)
- vy = (1/α)(0, 0, 0, …, 1, …, 0) for y ≠ 0n, with the 1 in the i-th coordinate for some i with yi = 1
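The ORn construction above can be checked numerically; a minimal sketch (the function names are mine, not from the slides):

```python
import itertools
import math

def or_span_program(n):
    """Span program vectors for OR_n with d = 1, following the slides."""
    alpha = n ** -0.25          # so that alpha^2 = 1/sqrt(n)
    v0 = [alpha] * n            # v_{0^n} = alpha * (1, 1, ..., 1)
    def v(y):                   # for y != 0^n: (1/alpha) * e_i, one i with y_i = 1
        i = y.index(1)
        return [1 / alpha if j == i else 0.0 for j in range(n)]
    return v0, v

n = 4
v0, v = or_span_program(n)
for y in itertools.product([0, 1], repeat=n):
    if any(y):
        N = [j for j in range(n) if y[j] != 0]      # where x = 0^n and y differ
        inner = sum(v0[j] * v(y)[j] for j in N)
        assert abs(inner - 1.0) < 1e-9              # constraint holds
        assert abs(sum(c * c for c in v(y)) - math.sqrt(n)) < 1e-9
assert abs(sum(c * c for c in v0) - math.sqrt(n)) < 1e-9  # complexity = sqrt(n)
```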
Properties of span programs
- 1. Bounded error query complexity of f asymptotically equals
its span program complexity
- Complexity of ORn = √n (Grover search, optimal)
- 2. Span program for f is also a span program for ¬f
- ANDn(x) = ¬ ORn(¬x)
- define wx = v¬x , complexity = √n
- 3. Span program complexities of f, g = Cf , Cg , respectively ⇒
span program complexity of f(g(x1), g(x2), …, g(xn)) ≤ Cf × Cg
Example : ANDn-ORn
- Naive algorithm: Nested Grover search
- whenever we need the OR of n bits,
recursively invoke search algorithm
- Need to control accumulation of error
- error ≤ 1/√n in recursive call suffices
- cost of error reduction = log n factor
- query complexity ≤ √n × √n × log n
= n log n
- Composition property of span programs
- query complexity ≤ √n × √n = n
reproduces [Høyer, Mosca, de Wolf, 2003], optimal
[Diagram: ∧ gate over n ∨ gates, each over n variables x1, x2, x3, …, xn]
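The two bounds above can be compared numerically; a trivial check (the value of n is illustrative):

```python
import math

n = 1 << 20  # number of bits per OR, and number of ORs in the AND

# Nested Grover: sqrt(n) outer search * sqrt(n) inner search * log n error reduction
nested_grover = math.sqrt(n) * math.sqrt(n) * math.log2(n)

# Span-program composition: sqrt(n) * sqrt(n), no log factor
composed = math.sqrt(n) * math.sqrt(n)

assert composed == n
assert composed < nested_grover  # composition saves the log n factor
```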
Properties of span programs
- 4. Span program complexity of each fi = Ci ⇒ span program complexity of
f1(x1) ∨ f2(x2) ∨ … ∨ fn(xn) ≤ (C1² + … + Cn²)1/2
- Naive composition of algorithms gives √n × maxi Ci × log n
- Naive composition of span programs gives √n × maxi Ci
- Improvement due to weighted composition of span programs
- Subsumes variable time search [Ambainis, 2008]
[Diagram: ∨ gate over f1, f2, f3, …, fn, with inputs x1, x2, x3, …, xn]
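Property 4's gain over naive composition is just Cauchy-Schwarz: (∑i Ci²)1/2 ≤ √n · maxi Ci, and the gap grows as the Ci become unbalanced. A quick numeric check (the sample complexities are hypothetical):

```python
import math

C = [1.0, 1.0, 1.0, 8.0]   # hypothetical span-program complexities C_i
n = len(C)

weighted = math.sqrt(sum(c * c for c in C))   # (C_1^2 + ... + C_n^2)^{1/2}
naive = math.sqrt(n) * max(C)                 # sqrt(n) * max_i C_i (log n dropped)

# Weighted composition never loses, and wins when one C_i dominates.
assert weighted <= naive
```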
A consequence
- Evaluating read-once formulae with n variables over {∨, ∧, ¬}
- Naive composition only effective for balanced
formulae
- Complexity of f1(x) ∨ f2(y) is (C1² + C2²)1/2
- Use DeMorgan rule to convert ANDs to ORs
- Apply composition recursively to subtrees of
OR gates
- Net complexity is √n (optimal)
- Subsumes previous NAND-tree algorithms [Farhi,
Goldstone, Gutmann, 2007; Ambainis, Childs, Reichardt, Špalek, Zhang, 2007]
[Diagram: read-once formula tree with ∨ and ¬ gates over variables x1, … and subformulae f1, f2]
Is this the end of the road?
[Cartoon: Junior computer programmer to supervisor, “If Facebook is already replacing e-mail, then we should get started on a replacement for Facebook.”]
1-certificate
- Function f : Dn ⟶ {0,1}, D = {0,1} or D = {0,…, n-1}
- Suppose f(x) = 1. A 1-certificate for x is a subset S of indices
such that yS = xS ⇒ f(y) = 1.
- (S, xS) is a witness for f(x) = 1.
- if f = ORn , ({i}, 1) for any i with xi = 1 is a witness
- if f = Triangle, any subgraph with a triangle is a 1-
certificate
- if f = Element Distinctness, any subset of indices with
a repeated element is a 1-certificate
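For Element Distinctness, a 1-certificate is easy to extract classically; a small sketch (`ed_certificate` is an illustrative name):

```python
def ed_certificate(x):
    """Return a 1-certificate (i, j) with i < j and x[i] == x[j], or None."""
    seen = {}                      # value -> first index where it appeared
    for j, v in enumerate(x):
        if v in seen:
            return (seen[v], j)    # witness: these two indices collide
        seen[v] = j
    return None                    # all distinct: f(x) = 0, no 1-certificate

assert ed_certificate([3, 1, 4, 1, 5]) == (1, 3)
assert ed_certificate([0, 1, 2, 3]) is None
```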
Learning graphs [Belovs]
- A schema for constructing span programs
- A (non-adaptive) learning graph for f is a directed graph on subsets of indices
- All the arcs have the form (S, S⋃{i} ) for some i ∉ S
(associated with the query to index i )
- The graph “computes” f if for every x with f(x) = 1,
there is a path from ∅ to a 1-certificate for x
- Span program has the same complexity as the learning graph
- A learning graph for ORn :
[Diagram: arcs from ∅ to the singletons {1}, {2}, {3}, …, {n}]
Complexity of a learning graph
- Each arc e is assigned a weight we
- 0-complexity C0 = ∑e we
- For each 1-input x , we have a flow pe(x) on the arcs
- ∅ is the only source, with out-flow 1
- only 1-certificates for x may be sinks
- incoming flow equals outgoing flow for all other nodes
- 1-complexity C1 = max1-inputs x ∑e pe(x)² / we
- Learning graph complexity = (C0 C1)1/2
- For ORn , C0 = n, C1 = 1
Complexity = √n
[Diagram: arcs from ∅ to the singletons, each with weight 1; for y ≠ 0n, flow pe = 1 on a single arc e = (∅, {i}) with yi = 1]
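The C0, C1 computation for the ORn learning graph can be written out directly; a sketch under the slide's unit weights (the function name is mine):

```python
import math

def or_learning_graph_complexity(n):
    """Learning-graph complexity for OR_n: one arc (∅, {i}) per index, weight 1."""
    w = [1.0] * n                          # arc weights
    C0 = sum(w)                            # 0-complexity: total weight = n
    # For a 1-input y, route unit flow on one arc (∅, {i}) with y_i = 1,
    # so C1 = max over 1-inputs of sum_e p_e^2 / w_e = 1 / 1 = 1.
    C1 = max(1.0 ** 2 / w_i for w_i in w)
    return math.sqrt(C0 * C1)              # (C0 * C1)^{1/2} = sqrt(n)

assert abs(or_learning_graph_complexity(16) - 4.0) < 1e-9
```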
Example: Element Distinctness
- input x ∈ {0,1,…, n-1}n
- f(x) = 0 iff all xi are distinct
- 1-certificate for a 1-input x : a pair (i,j) with xi = xj
[Diagram: learning graph with levels ∅ → all (r − 2)-subsets → all (r − 1)-subsets → all r-subsets; arc weights we = r − 2, we = 1, we = 1; out-degrees n − r + 2 and n − r + 1 at the last two stages]
Complexity of Element Distinctness
- For a 1-input x : fix a pair (i,j) with xi = xj , i < j
- Send equal flow to all (r-2)-subsets disjoint from { i , j }
- From an (r-2)-subset S , send all flow to S ⋃ { i }, then
to S ⋃ { i , j }
- Complexity = r + √n + n/√r = O(n2/3) choosing r = n2/3 ; reproduces [Ambainis, 2004]
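The trade-off r + √n + n/√r can be checked numerically; choosing r near n2/3 keeps every term at O(n2/3) (a sketch, with the cost expression taken from the slide):

```python
def ed_cost(n, r):
    # Cost terms from the slide: r + sqrt(n) + n / sqrt(r)
    return r + n ** 0.5 + n / r ** 0.5

n = 10 ** 6
r = round(n ** (2 / 3))                     # r = n^{2/3} = 10^4 here
assert ed_cost(n, r) <= 3 * n ** (2 / 3)    # total stays O(n^{2/3})

# A much smaller or much larger r is strictly worse:
assert ed_cost(n, r) < ed_cost(n, int(n ** 0.5))
assert ed_cost(n, r) < ed_cost(n, int(0.9 * n))
```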
Remarks
- Span programs and learning graphs provide a method for
designing quantum query algorithms through “static” constructs
- Have led to new, more efficient algorithms
- Triangle finding and k-Distinctness
- Some algorithms have been reproduced with quantum walks
- Improvements to some algorithms entail designing more
sophisticated (adaptive) learning graphs
- Optimum constructs are given by SDPs, can be computed
explicitly for a small number of variables
- SDP for the optimal span program is dual to the adversary bound SDP