Stoquastic Hamiltonians, a Leisurely Introduction


SLIDE 1

Stoquastic Hamiltonians, a Leisurely Introduction

The motivation for stoquastic Hamiltonians comes from the success of quantum Monte Carlo methods. There are many variants of quantum Monte Carlo, but all of them have a similar character, which I will outline here.

SLIDE 2

We often want to evaluate expressions like this:

\[\frac{\mathrm{Tr}[O e^{-\beta H}]}{\mathrm{Tr}[e^{-\beta H}]}\]

But accessing $e^{-\beta H}$ can be nasty. We can break up the exponential into small steps,

\[\mathrm{Tr}[O e^{-\beta H}] = \mathrm{Tr}\!\left[O \left(e^{-\frac{\beta}{L}H}\right)^{L}\right],\]

and insert identities:

\[= \mathrm{Tr}\!\left[O \sum_{x_1} |x_1\rangle\langle x_1|\; e^{-\frac{\beta}{L}H} \sum_{x_2} |x_2\rangle\langle x_2|\; e^{-\frac{\beta}{L}H} \sum_{x_3} |x_3\rangle\langle x_3| \cdots \right]\]
SLIDE 3

\[\mathrm{Tr}[O e^{-\beta H}] = \mathrm{Tr}\!\left[O \sum_{x_1} |x_1\rangle\langle x_1|\; e^{-\frac{\beta}{L}H} \sum_{x_2} |x_2\rangle\langle x_2|\; e^{-\frac{\beta}{L}H} \sum_{x_3} |x_3\rangle\langle x_3| \cdots \right]\]

If $L$ is large enough, and $H$ is efficiently expressible, then

\[e^{-\frac{\beta}{L}H} \approx I - \frac{\beta}{L}H = G\]

is easier to handle. We can factor out the sums to transform the expression into a sum over "paths" $x = (x_0, x_1, x_2, \ldots)$:

\[\mathrm{Tr}[O e^{-\beta H}] = \sum_{x \,|\, x_0 = x_L} \langle x_0|O|x_1\rangle \prod_{i=1}^{L-1} \langle x_i|G|x_{i+1}\rangle\]

\[\mathrm{Tr}[O e^{-\beta H}] = \sum_{x} O(x)\, w(x)\]

We don't want to sum over all of these paths.
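The path-sum factorization above is an exact algebraic identity, and can be sanity-checked numerically. Below is a minimal pure-Python sketch (my own illustrative code, not from the talk; the values of H, O, β, and L are arbitrary) that enumerates all closed paths for a tiny 2x2 example and compares against the equivalent matrix-product trace:

```python
import itertools

# Tiny 2x2 example (illustrative values). H is real with non-positive
# off-diagonals, O is a diagonal observable.
beta, L = 0.5, 4
H = [[1.0, -0.3], [-0.3, 2.0]]
O = [[1.0, 0.0], [0.0, -1.0]]
G = [[(1.0 if i == j else 0.0) - (beta / L) * H[i][j] for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def trace(A):
    return sum(A[i][i] for i in range(2))

# Path sum: closed paths x = (x0, ..., xL) with xL = x0, one O matrix element
# followed by L-1 G matrix elements, exactly as in the factorized expression.
path_sum = 0.0
for x in itertools.product(range(2), repeat=L):  # x = (x0, ..., x_{L-1}); xL := x0
    xs = list(x) + [x[0]]
    w = O[xs[0]][xs[1]]
    for i in range(1, L):
        w *= G[xs[i]][xs[i + 1]]
    path_sum += w

# The factorization says the path sum equals Tr[O G^{L-1}].
M = O
for _ in range(L - 1):
    M = matmul(M, G)
print(abs(path_sum - trace(M)) < 1e-12)  # True
```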

SLIDE 4

\[\mathrm{Tr}[O e^{-\beta H}] = \sum_{x} O(x)\, w(x)\]

The strategy of quantum Monte Carlo is to sample from these paths in a way that is faithful to this weighting.

\[\mathrm{Tr}[O e^{-\beta H}] = \sum_{x \,|\, x_0 = x_L} \langle x_0|O|x_1\rangle \prod_{i=1}^{L-1} \langle x_i|G|x_{i+1}\rangle\]

For example [Sorella, Capriotti (2013)], one might have a walker $(w, x)$: start at a random $x$ and perform a random walk informed by the matrix elements $\langle x|G|y\rangle = G_{xy}$:

\[x \to y \;\text{ with probability }\; \frac{|G_{xy}|}{\sum_{y'} |G_{xy'}|}, \qquad w \to w \cdot \mathrm{sign}(G_{xy}) \cdot \sum_{y'} |G_{xy'}|\]
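The walker update rule can be sketched in a few lines. This is my own illustrative rendering of the rule on the slide (function name, matrix values, and step count are all hypothetical), not the authors' code:

```python
import random

# One walker update (w, x) -> (w', y): hop with probability |G_xy| / sum_y' |G_xy'|,
# and fold the sign and the normalization into the weight, as on the slide.
def walker_step(G, w, x, rng=random):
    row = G[x]
    norm = sum(abs(g) for g in row)
    r = rng.random() * norm
    acc = 0.0
    for y, g in enumerate(row):
        acc += abs(g)
        if r <= acc:
            return w * (1 if g >= 0 else -1) * norm, y
    return w, x  # unreachable when norm > 0

G = [[0.9, 0.3], [0.3, 0.8]]  # all entries positive, so no sign problem here
w, x = 1.0, 0
for _ in range(10):
    w, x = walker_step(G, w, x)
print(x in (0, 1) and w > 0)  # True: with positive G the weight stays positive
```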
SLIDE 5

Sampling in this way will reproduce the distribution, but there is a problem.

\[\langle O \rangle = \frac{\sum_x O(x)\, w(x)}{\sum_x w(x)} = \frac{\sum_x O(x)\, \mathrm{sign}(w(x))\, |w(x)| \,\big/\, \sum_x |w(x)|}{\sum_x \mathrm{sign}(w(x))\, |w(x)| \,\big/\, \sum_x |w(x)|}\]

Define:

\[P(x) := \frac{|w(x)|}{\sum_x |w(x)|}, \qquad \delta(x) := \mathrm{sign}(w(x))\]

\[\langle O \rangle = \frac{\sum_x O(x)\, \delta(x)\, P(x)}{\sum_x \delta(x)\, P(x)} = \frac{\langle O\delta \rangle}{\langle \delta \rangle}\]

If there are negative signs in $G$, then for long path lengths the average sign will tend to zero, and relative errors can blow up.
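The sign-reweighting identity above is exact, which is easy to confirm numerically. A minimal check with made-up values of O(x) and w(x) (illustrative, not from the talk):

```python
# Check <O> = <O*delta>_P / <delta>_P, where P(x) = |w(x)| / sum|w| and
# delta(x) = sign(w(x)), on a small hand-picked set of paths.
O_vals = [1.0, -2.0, 0.5, 3.0]
w_vals = [0.2, -0.1, 0.4, -0.3]

direct = sum(o * w for o, w in zip(O_vals, w_vals)) / sum(w_vals)

Z = sum(abs(w) for w in w_vals)
P = [abs(w) / Z for w in w_vals]
delta = [1.0 if w >= 0 else -1.0 for w in w_vals]
reweighted = (sum(o * d * p for o, d, p in zip(O_vals, delta, P))
              / sum(d * p for d, p in zip(delta, P)))

print(abs(direct - reweighted) < 1e-12)  # True: the identity is exact
```

The trouble the slide points to is not the identity itself but its statistics: when the δ(x) cancel, the denominator ⟨δ⟩ is a small difference of large samples, so the relative error explodes.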

SLIDE 6

How do we avoid this "sign problem"? If $H$ is real and has non-positive entries in its off-diagonals, then for some sufficiently large $L$, $G = I - \frac{\beta}{L}H$ is entrywise non-negative and real, so all path weights are positive and real: $\langle \delta \rangle = 1$. We call such an $H$ globally stoquastic (in the standard basis).

In fact, $e^{-\beta H}$ is an entrywise non-negative matrix for all $\beta \geq 0$ if and only if $H$ is globally stoquastic.

— If $H$ is stoquastic, $G$ is non-negative for large $L$, and therefore $e^{-\beta H} = \lim_{L \to \infty} G^L$ is non-negative.
— If $e^{-\beta H}$ is non-negative for all $\beta$, then choose a sufficiently small $\beta$: $e^{-\beta H} = I - \beta H + O(\beta^2 \|H\|^2)$, and so $H$ must have non-positive off-diagonals.
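Both directions of this equivalence are easy to poke at numerically. A sketch (my own illustrative code; the matrix H and the values of β and L are arbitrary) checking that a globally stoquastic H gives an entrywise non-negative G:

```python
# Check global stoquasticity of H in the standard basis, and observe that
# G = I - (beta/L) H is entrywise non-negative once L is large enough,
# so G^L (which approximates e^{-beta H}) is entrywise non-negative too.
def is_globally_stoquastic(H, tol=1e-12):
    n = len(H)
    return all(H[i][j] <= tol for i in range(n) for j in range(n) if i != j)

H = [[2.0, -0.5, 0.0],
     [-0.5, 1.0, -0.7],
     [0.0, -0.7, 3.0]]
beta, L = 1.0, 64
G = [[(1.0 if i == j else 0.0) - (beta / L) * H[i][j] for j in range(3)] for i in range(3)]

print(is_globally_stoquastic(H))              # True
print(all(g >= 0 for row in G for g in row))  # True
```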

SLIDE 7

If $e^{-\beta H}$ is positive and real, then the Perron-Frobenius theorem tells us that the ground state of $H$ is a vector with all positive and real weights.

\[\langle\psi|H|\psi\rangle = \sum_x |\psi(x)|^2 \langle x|H|x\rangle + \sum_{x \neq y} \psi(x)\psi^*(y)\,\langle x|H|y\rangle\]

\[\langle\psi|H|\psi\rangle = \sum_x |\psi(x)|^2 \langle x|H|x\rangle - \sum_{x \neq y} \psi(x)\psi^*(y)\,|\langle x|H|y\rangle|\]

\[\langle|\psi|\,|H|\,|\psi|\rangle = \sum_x |\psi(x)|^2 \langle x|H|x\rangle - \sum_{x \neq y} |\psi(x)|\,|\psi^*(y)|\,|\langle x|H|y\rangle|\]

\[\langle|\psi|\,|H|\,|\psi|\rangle \leq \langle\psi|H|\psi\rangle\]

(Thanks to Alex for that proof.) We can think of the ground state as the stationary probability distribution of a quantum Monte Carlo process.
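The key inequality in that Perron-Frobenius step can be checked numerically: for a real H with non-positive off-diagonals, replacing ψ by its entrywise absolute value never increases the quadratic form. A sketch with random trial vectors (my own code; H and the trial count are illustrative):

```python
import random

# For each term x != y, H_xy <= 0 and |psi_x||psi_y| >= psi_x psi_y,
# so H_xy(|psi_x||psi_y| - psi_x psi_y) <= 0 and the energy cannot go up.
def quad_form(H, v):
    n = len(H)
    return sum(v[i] * H[i][j] * v[j] for i in range(n) for j in range(n))

random.seed(0)
H = [[2.0, -0.5, 0.0], [-0.5, 1.0, -0.7], [0.0, -0.7, 3.0]]
ok = True
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in range(3)]
    norm = sum(x * x for x in v) ** 0.5
    v = [x / norm for x in v]
    ok = ok and quad_form(H, [abs(x) for x in v]) <= quad_form(H, v) + 1e-12
print(ok)  # True
```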

SLIDE 8

Some comments on terminology:

◮ Globally stoquastic in the standard basis: $\langle x|H|y\rangle \leq 0$ for $x \neq y$.
◮ (Termwise) stoquastic in the standard basis: $H = \sum_k H_k$ with $\langle x|H_k|y\rangle \leq 0$ for $x \neq y$.
◮ Globally stoquastic ≠ termwise stoquastic in general.
◮ For 2-local multi-qubit Hamiltonians they are the same (but not for 3-local).
◮ Computer scientists seem to care about termwise stoquasticity.
◮ For the Monte Carlo community it is not so clear to me; it seems like it might depend on the method.
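The global/termwise distinction is easy to see on toy matrices of my own construction (these are abstract matrices, not 2-local qubit terms, so they do not contradict the equivalence stated above for the 2-local case): global stoquasticity depends only on the summed matrix, while termwise stoquasticity depends on the grouping into terms.

```python
# H = H1 + H2 is globally stoquastic even though the term H1 on its own
# has a positive off-diagonal entry, so this particular grouping is not
# termwise stoquastic.
def off_diag_nonpositive(M, tol=1e-12):
    n = len(M)
    return all(M[i][j] <= tol for i in range(n) for j in range(n) if i != j)

H1 = [[0.0, 1.0], [1.0, 0.0]]
H2 = [[0.0, -2.0], [-2.0, 0.0]]
H = [[H1[i][j] + H2[i][j] for j in range(2)] for i in range(2)]

print(off_diag_nonpositive(H))   # True  (globally stoquastic)
print(off_diag_nonpositive(H1))  # False (this grouping is not termwise stoquastic)
```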

SLIDE 9

Stoquastic Hamiltonians give rise to a distinct complexity class called Stoq-MA.

[Diagram: containments among the classes NP, MA, Stoq-MA, AM, and QMA]

The decision problem is the ground energy of a stoquastic Hamiltonian. The transverse field Ising model is complete for this class. Stoquastic k-SAT ($H = \sum_a H_a$; does there exist a $|\psi\rangle$ such that $H_a|\psi\rangle = 0$ for all $a$?) is MA-complete.

SLIDE 10

The idea behind our research program is that stoquasticity is basis dependent. So for which Hamiltonians can we find a basis that makes them stoquastic?

SLIDE 11

Deciding Stoquasticity of 2-Local Hamiltonians

Joel Klassen 2018

SLIDE 12

Hello, my name is Joel. I am a postdoc at QuTech, working with Barbara Terhal. Thanks for hosting me.

SLIDE 13

Introduction

◮ This is joint work largely done by myself and Milad Marvian. Our collaborators are Barbara Terhal, Marios Iannous, Itay Hen and Daniel Lidar.
◮ Our research has focused on stoquastic Hamiltonians.
◮ In particular I have been trying to develop algorithms for deciding when a Hamiltonian is stoquastic.
◮ I will explain what a stoquastic Hamiltonian is.
◮ I will give some motivation for why it is interesting.
◮ I will present a polynomial time algorithm for deciding if a two-local multiqubit Hamiltonian, with no one-local terms, can be made stoquastic in some local basis.

SLIDE 14

Yeah but like, what even is a stoquastic Hamiltonian anyways?

SLIDE 15

Did you say stochastic?

◮ A stochastic process is a random process evolving in time.
◮ A quantum process is not a stochastic process in general.
◮ Indeed, unitary time evolution is deterministic.
◮ However, some quantum systems can be modelled by stochastic processes.

Stochastic + Quantum = Stoquastic¹

¹ [Bravyi et al. (2006)]

SLIDE 16

We often want to evaluate expressions like this: $\mathrm{Tr}[O e^{-\beta H}]$. One way is to break up the exponential into small steps, and insert identities:

\[\mathrm{Tr}[O e^{-\beta H}] = \sum_{p \,|\, p_0 = p_L} \langle p_0|O|p_1\rangle \prod_{i=1}^{L-1} \langle p_i|I - \tfrac{\beta}{L}H|p_{i+1}\rangle\]

Terms can be evaluated if $O$ and $H$ are local in the basis $|p_i\rangle$. Morally, this is path integration, with $p$ representing a particular path:

\[\langle \text{end}|O|\text{beginning}\rangle \times \text{amplitudes of paths}\]

SLIDE 17

Boy, it sure would be nice if we didn’t have to evaluate all of those paths...

SLIDE 18

◮ What if we just sample from these paths according to their weights? Will our answers be faithful?
◮ Not if our amplitudes interfere! Random sampling can obscure important coherence effects.
◮ This is called the sign problem.
◮ It's very much like the difference between burnished metal and a polished mirror.
◮ However, if all of our amplitudes are positive and real... then we don't have this problem.

SLIDE 19

Enter Stoquastic Hamiltonians

Consider a Hamiltonian $H$ such that all of its off-diagonal elements are non-positive and real in some basis $\{|i\rangle\}$.

◮ For all values of $\beta \geq 0$: $\langle i|e^{-\beta H}|j\rangle \geq 0$.
◮ Path amplitudes will be positive and real, and we can perform stochastic sampling of our path integrals.
◮ Such a matrix $H$ is an instance of a "Z-matrix".
◮ Matrices of this type are also employed in the study of economics, control theory, and population dynamics.

SLIDE 20

A Z-matrix in any other basis would smell as sweet.

◮ Critically, being a Z-matrix is basis dependent!
◮ Generally one wants to say that when a Hamiltonian can be efficiently transformed into a Z-matrix while preserving sparsity (i.e. local structure), then it is "stoquastic" (quantum stochastic) under that transformation. Stoquastic ≃ Z-matrix in some efficient representation.
◮ There is a subtlety here. We can ask that each k-local term be a Z-matrix, or we can ask that the whole Hamiltonian be a Z-matrix.
◮ These two questions are distinct! But for two-local qubit Hamiltonians they are the same.

SLIDE 21

Okay but I mean who cares?

SLIDE 22

Motivation

The Quantum Monte Carlo Community: Stoquastic Hamiltonians avoid the sign problem and thus are more amenable to quantum Monte Carlo methods.

Computational Complexity Theorists: Stoquastic Hamiltonians constitute a distinct and interesting computational complexity class: Stoq-MA [Bravyi et al. (2006), (2008)] [Aharanov, Grillo (2019)]

[Diagram: containments among the classes NP, MA, Stoq-MA, AM, and QMA]

Adiabatic evolution of frustration-free stoquastic Hamiltonians can be simulated efficiently. [Bravyi, Terhal (2008)]

SLIDE 23

Motivation

Experimentalists and Engineers

◮ It seems as though finding ground states and ground energies of stoquastic Hamiltonians is easier than for generic Hamiltonians.
◮ Perhaps in adiabatic quantum computation we want to build devices that are not stoquastic. (e.g. the TFIM is stoquastic)

Theoretical Physicists

◮ Many natural systems are manifestly stoquastic in what is considered a natural basis. (spinless distinguishable particles, hopping bosons)
◮ Is there something deep behind this?

SLIDE 24

General Problem Statement & Prior Work

Stoquasticity of 2-local multiqubit Hamiltonians: when is a 2-local Hamiltonian acting on n qubits stoquastic in some local basis, and what is that basis?

There is lots of work in the QMC community on avoiding the sign problem, but few systematic approaches in this stoquastic picture. Some limited strategies for choosing the right basis:

◮ in certain regimes of the antiferromagnetic XXZ model on a triangular lattice [Hatano, Suzuki 1992]
◮ in certain regimes of the generalized XYZ Heisenberg Hamiltonian on a bipartite lattice, the transverse field Ising model, and the single-ion anisotropy model (the transformations are called Marshall-Peierls sign rules in this context) [Bishop, Farnell 2001]
◮ Any Hamiltonian which is tridiagonal in some local basis is stoquastic [Bausch, Crosson 2018]

SLIDE 25

Prior Complexity Results

◮ Deciding if a 3-local Hamiltonian is stoquastic under one-local Cliffords is NP-complete.
◮ Deciding if a 6-local Hamiltonian is stoquastic under one-local orthogonal transformations is NP-complete. [Marvian, Lidar, Hen 2018]
◮ There is an efficient algorithm for deciding if a generalized XYZ-Hamiltonian ($\sum aXX + bYY + cZZ$) is stoquastic under one-local unitaries. [Klassen, Terhal 2018]
◮ The XYZ-algorithm ← will use this later.

Watch our talk on YouTube! Search: QIP19 Marvian Klassen

Can we more clearly delineate the boundary between hardness and easiness?

SLIDE 26

Can we efficiently decide beyond the XYZ algorithm?

So we can decide if an XYZ-Hamiltonian is stoquastic in some basis... “But I want a Hamiltonian that has an XY term in it!”

SLIDE 27

Main Result: Going Beyond the XYZ Hamiltonian

Exactly 2-local Hamiltonian on n qubits:

\[H = \sum_{uv} \sum_{i,j \in \{1,2,3\}} \beta^{uv}_{ij} P_i P_j, \qquad P_1 = X,\; P_2 = Y,\; P_3 = Z,\; \beta^{uv}_{ij} \in \mathbb{R}\]

Theorem. There is an efficient algorithm, running in time $O(n^3)$, that decides whether or not $H$ is stoquastic in some local basis. The algorithm finds the local basis, or decides that no such local basis exists.

SLIDE 28

Graph Representation

\[H = \sum_{uv} \sum_{i,j \in \{1,2,3\}} \beta^{uv}_{ij} P_i P_j\]

◮ $\beta^{uv}$ is a matrix.
◮ SU(2) ↔ SO(3): $H \to (U_1 \otimes U_2) H (U_1 \otimes U_2)^\dagger$ corresponds to $\beta \to O_1^T \beta O_2$.
◮ Our Hamiltonian looks like a matrix-weighted graph, and we are applying SO(3) rotations at the vertices:

\[(\beta^{uv})' = O_u^T \beta^{uv} O_v, \qquad (\beta^{uw})' = O_u^T \beta^{uw} O_w, \qquad \text{etc.}\]
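The transformation rule β' = Ouᵀ β Ov can be sanity-checked numerically. A sketch of my own (the diagonal couplings and rotation angles are arbitrary illustrative values): build a β that is secretly a rotated diagonal matrix, then recover the diagonal with the right local rotations.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rot_z(t):  # SO(3) rotation about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

D = [[-2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.5]]  # diagonal XX/YY/ZZ couplings
Ou, Ov = rot_z(0.3), rot_z(-1.1)
beta = matmul(matmul(Ou, D), transpose(Ov))            # beta = Ou D Ov^T

beta_prime = matmul(matmul(transpose(Ou), beta), Ov)   # beta' = Ou^T beta Ov = D
off = sum(abs(beta_prime[i][j]) for i in range(3) for j in range(3) if i != j)
print(off < 1e-12)  # True: the right local rotations diagonalize beta
```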

slide-29
SLIDE 29

Conditions

H2 =    

  • βZX

βXZ βXX − βYY

  • βXX + βYY

−βXZ

  • −βZX

   (1) +i    

  • −βZY

−βYZ −βXY − βYX

  • βXY − βYX

βYZ

  • βZY

   (2)

◮ terms associated with XY , ZY , YX, YZ must be zero, to keep

things real.

◮ terms associated with XZ, ZX must be zero, and βXX ± βYY

must be less than zero, to keep things negative.

◮ So to be stoquastic, all β′ = OT 1 βO2 must be diagonal, and

β′

11 ≤ −|β′ 22|

29 / 48

SLIDE 30

Roadblock

◮ If all β are diagonal, then their corresponding interactions are of the form $(\beta_{11}XX + \beta_{22}YY + \beta_{33}ZZ)$.
◮ Therefore the XYZ-algorithm solves the problem.
◮ Naively, we just need to find a simultaneous diagonalization of the matrix weights.
◮ Problem: simultaneously diagonalizing all β in a graph is an NP-hard problem. [Klassen, Terhal 2018]
◮ Workaround: ignore some cases that have no chance of being stoquastic; then it is no longer NP-hard!

SLIDE 31

Problem Statement

Find a set of orthogonal rotations $\{O_u\}$ such that:

1. $O_u^T \beta^{uv} O_v$ is diagonal for all $u, v$.
2. $[O_u^T \beta^{uv} O_v]_{22} = 0$ for all $\beta^{uv}$ with $\mathrm{rank}(\beta^{uv}) = 1$.

Or show that none exists.

◮ Both conditions are necessary. ($\beta'_{11} \leq -|\beta'_{22}|$)
◮ If you find such a set, pass the solution to the XYZ algorithm.
◮ It turns out it suffices to consider O(3) rotations instead of SO(3) rotations (we can dump signs into the ZZ terms).
◮ !! If $\{O_u\}$ is a solution, then so is $\{O_u\, \mathrm{diag}(\pm 1, \pm 1, \pm 1)\}$ !!

SLIDE 32

Illustrative Subcase

◮ What if every edge in our graph has a rank-1 β?

[Figure: an edge between vertices u and v carrying a matrix weight βuv with rank(βuv) = 1]

◮ For every vertex u, find an $O_u$ that simultaneously diagonalizes $\beta^{uv}(\beta^{uv})^T$ for all v adjacent to u. This is not hard.
◮ If you can't find such an $O_u$, then H is not stoquastic.
◮ Otherwise we can apply them, and then $O_u^T \beta^{uv} O_v$ must have a single non-zero entry, but maybe not in the right place, for example:

\[O_u^T \beta^{uv} O_v = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 0 \end{pmatrix}\]

◮ We want to move that non-zero entry into the first or third position!

SLIDE 33

\[O_u^T \beta^{uv} O_v = \begin{pmatrix} 0 & 0 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\]

◮ The 3rd column of $O_v$ and the 1st column of $O_u$ are the right and left singular vectors not in the null space of $\beta^{uv}$.
◮ To move the non-zero entry, permute the columns of $O_u$ and $O_v$.
◮ But this has an effect on the other edges!
◮ We can bi-colour each edge in terms of the positions of the columns which have non-zero singular values.
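The rank-1 mechanism above can be sketched directly: for β = s·abᵀ, taking Ou with a as a column and Ov with b as a column leaves a single non-zero entry in Ouᵀ β Ov, which column permutations can then relocate. This is my own illustrative code (the vectors a, b and the scale are arbitrary), not the paper's implementation:

```python
def outer(a, b, s=1.0):
    return [[s * ai * bj for bj in b] for ai in a]

def complete_basis(v):
    # Gram-Schmidt: extend the unit vector v to an orthonormal basis of R^3.
    basis = [v]
    for e in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]):
        w = list(e)
        for u in basis:
            d = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - d * ui for wi, ui in zip(w, u)]
        n = sum(wi * wi for wi in w) ** 0.5
        if n > 1e-9:
            basis.append([wi / n for wi in w])
    return basis[:3]  # orthonormal vectors, used as the columns of O

a = [0.6, 0.8, 0.0]          # left singular vector
b = [0.0, 0.0, 1.0]          # right singular vector
beta = outer(a, b, s=4.0)    # rank-1 edge weight

Ou_cols, Ov_cols = complete_basis(a), complete_basis(b)
# entry (i, j) of Ou^T beta Ov is  Ou_col_i . beta . Ov_col_j
entries = [[sum(Ou_cols[i][r] * beta[r][c] * Ov_cols[j][c]
                for r in range(3) for c in range(3))
            for j in range(3)] for i in range(3)]
nonzero = [(i, j) for i in range(3) for j in range(3) if abs(entries[i][j]) > 1e-9]
print(nonzero)  # [(0, 0)] -- a single entry; permuting columns relocates it
```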

SLIDE 34

[Figure: a graph whose rank-1 edges are coloured by the position of the non-zero singular value]

◮ Now we have a bicoloured graph.
◮ We can permute the colours at each vertex (permuting the columns of $O_u$).
◮ We want every edge uniform, and to remove all blue edges (second position).
◮ Just making edges uniform is NP-complete.
◮ But if we also want to remove all blue edges, then it becomes easy.

SLIDE 35

◮ Flip all blue edges to whichever colour is not present at the vertex.

[Figure: blue edges flipped to the colour absent at each vertex; a vertex already touching both other colours is marked "Auto-Failure"]

◮ At each vertex we can either permute green and red (−), or not (+). (Think Ising model.)
◮ Choose (+) or (−) at one vertex, then propagate: + − + − +
◮ If you run into a contradiction, then no other choice would solve it.
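The ± propagation step is a standard constraint-propagation pass, which I sketch below in my own notation (not the paper's code): each vertex gets a choice in {+1, −1}, each edge (u, v, r) demands sign[u]·sign[v] = r, and a BFS propagates one seed choice. A conflict is final, because flipping the seed flips every sign globally and the same conflicts reappear.

```python
from collections import deque

def propagate_signs(n, edges):
    adj = [[] for _ in range(n)]
    for u, v, r in edges:
        adj[u].append((v, r))
        adj[v].append((u, r))
    sign = [0] * n  # 0 = unassigned
    for start in range(n):
        if sign[start]:
            continue
        sign[start] = 1  # arbitrary seed; the other seed just flips everything
        q = deque([start])
        while q:
            u = q.popleft()
            for v, r in adj[u]:
                want = sign[u] * r
                if sign[v] == 0:
                    sign[v] = want
                    q.append(v)
                elif sign[v] != want:
                    return None  # contradiction: no choice of seeds fixes it
    return sign

# A 5-vertex chain with alternating constraints, like the + - + - + figure:
chain = [(0, 1, -1), (1, 2, -1), (2, 3, -1), (3, 4, -1)]
print(propagate_signs(5, chain))                               # [1, -1, 1, -1, 1]
print(propagate_signs(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))  # None (frustrated)
```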


SLIDE 38

Incorporating Rank > 1 Edges

◮ Let us now consider the most general case.
◮ Continue thinking of $O_u$ in terms of orthonormal column vectors.
◮ Recall that we don't care about the signs of these vectors, so let's represent them projectively (represent the columns as 1-d subspaces).
◮ We still need to care about the ordering of the columns!

SLIDE 39

If $O_u^T \beta^{uv} O_v$ is diagonal, then the ith column $e^i_v$ of $O_v$ is either in the kernel of $\beta^{uv}$, or $\beta^{uv} e^i_v \propto e^i_u$, the ith column of $O_u$.

So if $O_v$ diagonalizes $(\beta^{uv})^T \beta^{uv}$, then we can think of $\beta^{uv}$ as a rotation on some of these 1-d subspaces. And this rotation partially specifies what $O_u$ must be in order for $O_u$, $O_v$ to diagonalize $\beta^{uv}$.

SLIDE 40

Rank-1 Edges and Rank > 1 Edges

If $\mathrm{rank}(\beta^{uv}) > 1$, then the choice of $O_v$ completely specifies $O_u$ via the mapping $\beta^{uv}$.

But if $\mathrm{rank}(\beta^{uv}) = 1$, then $O_u$ is underspecified.

SLIDE 41

Rank > 1 Connected Components

Returning to our graph picture, this important distinction between rank > 1 and rank-1 matrix weights adds structure to the graph:

[Figure: a graph whose rank > 1 edges group vertices into a "Rank > 1 Connected Component" (RCC), joined to the rest of the graph by rank-1 edges]

If we wisely choose the rotation at some starting vertex in an RCC, then that choice specifies the rotations at all other vertices in the RCC (the choice $O_u$ propagates to $O_v$, $O_w$, $O_x$, $O_y$, ...).

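Identifying the RCCs is an ordinary connected-components computation where only edges with rank > 1 weights glue vertices together. A self-contained sketch of my own (the rank test uses plain Gaussian elimination on the 3x3 weights; all names and example weights are illustrative):

```python
def rank3(M, tol=1e-9):
    # Gaussian-elimination rank of a 3x3 matrix.
    A = [row[:] for row in M]
    r = 0
    for col in range(3):
        piv = next((i for i in range(r, 3) if abs(A[i][col]) > tol), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(3):
            if i != r and abs(A[i][col]) > tol:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def rccs(n, weighted_edges):
    # Union-find over vertices, merging only across rank > 1 edges.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, beta in weighted_edges:
        if rank3(beta) > 1:
            parent[find(u)] = find(v)
    groups = {}
    for x in range(n):
        groups.setdefault(find(x), []).append(x)
    return sorted(groups.values())

rank1 = [[1.0, 2.0, 0.0], [2.0, 4.0, 0.0], [0.0, 0.0, 0.0]]   # an outer product
rank2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
edges = [(0, 1, rank2), (1, 2, rank1), (2, 3, rank2)]
print(rccs(4, edges))  # [[0, 1], [2, 3]]: the rank-1 edge does not glue
```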

SLIDE 45

◮ Thus we can think of an RCC as a single site on which to apply an O(3) rotation.
◮ All the edges in the RCC will be diagonalized.
◮ And we pick up the same edge bi-colouring property for rank-1 edges.
◮ Permuting the ordering of the columns of the starting choice propagates through the RCC in such a way that the colourings of the rank-1 edges permute in a similar fashion.
◮ So if we can choose a good starting rotation for each RCC, then our problem reduces to the previous case.

SLIDE 46

Boy, it sure would be nice to know what you mean by a good starting rotation...

SLIDE 47

Choosing a Good Starting Rotation

◮ The notion of $O_u$ transporting along a rank > 1 edge...
◮ and the notion of $O_u$ inducing a bicolouring on its neighbouring rank-1 edges rely on... $O_u$ diagonalizing $\beta^{uv}(\beta^{uv})^T$.
◮ We need $O_u$ to have this property not only for all of its neighbouring edges: all of the transported rotations must have this property too!


SLIDE 49

Choosing a Good Starting Rotation

For each vertex, compute the simultaneous eigenspaces (the common eigenspaces of $\beta^{uv}(\beta^{uv})^T$ over every adjacent edge). Transport the eigenspaces to neighbouring vertices and take intersections. Repeat until you reach a fixed point. A good starting rotation must be drawn from this final set.

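The transport-and-intersect loop above can be sketched in a heavily simplified toy form. In this sketch of my own, each vertex's candidate eigenspace decomposition is a set of coordinate axes and transport is a permutation of axes; the real algorithm intersects genuine linear subspaces, so the sets of axes here only stand in for the special axis-aligned case.

```python
def fixed_point(axes, edges):
    # axes: {vertex: frozenset of allowed axes}
    # edges: (u, v, perm) with perm a dict mapping axes at u to axes at v
    # (the "transport" along the edge).  Intersect until nothing changes.
    changed = True
    while changed:
        changed = False
        for u, v, perm in edges:
            moved = frozenset(perm[a] for a in axes[u] if a in perm)
            if not axes[v] <= moved:
                axes[v] = axes[v] & moved
                changed = True
            back = frozenset(a for a in perm if perm[a] in axes[v])
            if not axes[u] <= back:
                axes[u] = axes[u] & back
                changed = True
    return axes

axes = {0: frozenset({"x", "y", "z"}), 1: frozenset({"x", "y"}), 2: frozenset({"y", "z"})}
swap_xy = {"x": "y", "y": "x", "z": "z"}
identity = {"x": "x", "y": "y", "z": "z"}
result = fixed_point(axes, [(0, 1, swap_xy), (1, 2, identity)])
print(sorted(result[0]))  # ['x']: the only surviving starting choice at vertex 0
```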

SLIDE 54

One Last Consideration: Topological Frustrations

◮ The previous illustration suggested a trivial choice for the starting rotation.
◮ Sometimes you have bigger subspaces to choose from.

[Figure: transporting a choice around a cycle — rotate 45°, flip along the diagonal, rotate −45° — returns a different frame]

◮ There are cases where you can still make a bad starting choice!


SLIDE 59

One Last Consideration: Topological Frustrations

So for the final step:

◮ Identify all the fundamental cycles in the RCC.
◮ A good choice must also be drawn from the eigenspaces of the transport operators associated with those cycles.
◮ Let's try the previous example again:

[Figure: the same cycle — rotate 45°, flip along the diagonal, rotate −45° — now starting from an eigenvector of the cycle's transport operator]

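The cycle condition can be made concrete in a toy 2-D version of the figure's loop (my own sketch, with the same three transports the figure names: rotate 45°, flip along the diagonal, rotate −45°): compose the transports into a single "holonomy" matrix and check which axes it fixes. A good starting vector must be an eigenvector of this composite; for a reflection, those are the mirror axis (eigenvalue +1) and its normal (eigenvalue −1).

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def apply(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

flip_diag = [[0.0, 1.0], [1.0, 0.0]]  # reflection across the line y = x
t = math.pi / 4
# Transport around the cycle: rotate 45°, then flip, then rotate -45°.
holonomy = mul(rot(-t), mul(flip_diag, rot(t)))

# The composite is the reflection across the x-axis, so the good starting
# choices are the x-axis (fixed, eigenvalue +1) and the y-axis (eigenvalue -1).
for v in ([1.0, 0.0], [0.0, 1.0]):
    w = apply(holonomy, v)
    print([round(x, 6) + 0.0 for x in w])  # [1.0, 0.0] then [0.0, -1.0]
```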

SLIDE 63

Recap

◮ Identify rank > 1 connected components (RCCs).
◮ Find a good initial choice of rotation for each RCC by:
  ◮ computing the intersections of the transported eigenspaces in the RCC,
  ◮ computing the real eigenspaces of the fundamental cycles, centered at a vertex,
  ◮ selecting basis elements from the intersection of these sets, and making those the columns of your initial rotation choice,
  ◮ propagating the choice through the RCC.
◮ Treat each RCC as a single vertex in a graph with only rank-1 edges, and solve the bicoloured graph problem.
◮ The solution specifies how the columns of the orthogonal rotations on each RCC should be permuted.

If at any point a step fails, then no solution exists (this is non-trivial).

SLIDE 64

Can we add 1-local terms?

Can we generalize this algorithm to the generic 2-local Hamiltonian, which includes 1-local terms? The answer is no: we show that adding 1-local terms makes the problem NP-hard, via a reduction from 3-SAT. This is possible thanks in part to the freedom in how 1-local terms can be grouped with 2-local terms. So we have found a nice delineation! The presence or absence of 1-local terms determines whether deciding stoquasticity is hard or easy (for 2-local qubit Hamiltonians).

SLIDE 65

Future Directions

◮ Recently D-Wave announced that they had engineered a non-stoquastic interaction. [arxiv:1903.06139]
◮ In the low-energy space they do indeed get a 2-qubit interaction that is not stoquastic in any local basis.
◮ But the full circuit Hamiltonian can always be made stoquastic by a canonical transformation.
◮ Under what circumstances can we lift non-stoquastic Hamiltonians to stoquastic ones? What are the consequences of this?

SLIDE 66

Future Directions

◮ Global and termwise stoquasticity are different.
◮ What can we say about this distinction?
◮ When is it important?
◮ Masters student Marios Iannous is thinking about this.