

SLIDE 1

Quantum Algorithms for Estimating Physical Quantities using Block-Encodings

Patrick Rall
Quantum Information Center, University of Texas at Austin

A review of modern techniques and new results from arXiv:2004.06832

May 2020

Patrick Rall Algorithms from Block Encodings May 2020 1 / 22


SLIDE 5

Quantum Primitives

Circuit: prepare registers in |0⟩, apply a unitary U, measure, and postselect on the outcome |0⟩.

- Initialization
- Unitary evolution
- Measurement
- Postselection


SLIDE 12

Why postselection as a primitive?

Postselection captures common operations:

- Estimate the probability of postselection success.
  Classical: to reach precision ε, need O(1/ε²) samples.
  Quantum: amplitude estimation has circuit size O(1/ε).
- Condition an experiment on postselection success.
  Classical: if the success probability is p, must try O(1/p) times.
  Quantum: amplitude amplification has circuit size O(1/√p).

Quadratic speedups for both of these common operations.
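A quick sanity check of the quantum side of this comparison: the sketch below simulates standard Grover-style amplitude amplification with plain linear algebra. The dimension (64) and the marked index are arbitrary toy choices.

```python
import numpy as np

d, good = 64, 3                            # search space of size 64, one marked item
psi = np.ones(d) / np.sqrt(d)              # uniform superposition, success prob p = 1/64

theta = np.arcsin(psi[good])               # success-amplitude angle, sin(theta) = 1/8

# One amplification round: reflect about the marked state, then about |psi>
R_mark = np.eye(d); R_mark[good, good] = -1.0
R_psi = 2.0 * np.outer(psi, psi) - np.eye(d)
Q = R_psi @ R_mark

k = int(np.pi / (4 * theta))               # O(1/sqrt(p)) rounds; here k = 6
state = psi.copy()
for _ in range(k):
    state = Q @ state

p0, pk = psi[good] ** 2, state[good] ** 2  # success probability before and after
```

After k = 6 rounds the success probability rises from 1/64 ≈ 0.016 to sin²((2k+1)θ) ≈ 0.997, matching the O(1/√p) circuit-size claim, whereas classically ~64 repetitions would be needed.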


SLIDE 18

Example: Expectation of a Pauli Matrix

Given a state |ψ⟩ and a Pauli matrix P, estimate ⟨ψ| P |ψ⟩. Let U |0^n⟩ = |ψ⟩.

Standard method: let V P V† = I ⊗ σZ ⊗ σZ ⊗ I. Circuit: |0^n⟩ → U → V† → measure.

Alternate method: exploit that P is unitary. Circuit: |0^n⟩ → U → P → U† → postselect on |0^n⟩.

Postselection probability: |⟨ψ| P |ψ⟩|². Almost what we want.


SLIDE 22

Example: Expectation of a Pauli Matrix

The smallest eigenvalue of P is −1, so (I + P)/2 is positive semidefinite and

|⟨ψ| (I + P)/2 |ψ⟩| = ⟨ψ| (I + P)/2 |ψ⟩ = (⟨ψ| P |ψ⟩ + 1)/2

If only we could multiply by (I + P)/2 rather than P...

Circuit: an ancilla in |+⟩ controls P between U and U†: |+⟩ |0^n⟩ → (I ⊗ U) → CTRL-P → (I ⊗ U†) → postselect on |+⟩ |0^n⟩.

Observe that (⟨+| ⊗ I) CTRL-P (|+⟩ ⊗ I) = (I + P)/2.
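The final identity is a two-line computation, and is easy to confirm numerically. The sketch below checks it for one arbitrary two-qubit Pauli (the choice P = σZ ⊗ σX is mine, for illustration).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P = np.kron(sz, sx)                        # an arbitrary two-qubit Pauli
n = P.shape[0]

# CTRL-P on (ancilla ⊗ system): apply P only when the ancilla is |1>
ctrl_P = np.kron(np.diag([1.0, 0.0]), np.eye(n)) + np.kron(np.diag([0.0, 1.0]), P)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
bra = np.kron(plus.reshape(1, 2), np.eye(n))   # (<+| ⊗ I)
ket = np.kron(plus.reshape(2, 1), np.eye(n))   # (|+> ⊗ I)
block = bra @ ctrl_P @ ket                     # should equal (I + P)/2
```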


SLIDE 26

Probabilistic mixtures of unitaries

|ψ⟩ → A |ψ⟩ where A = Σ_i p_i U_i

Circuit: prepare an ancilla register in Σ_i √p_i |i⟩, apply SELECT(U) to the ancilla and |ψ⟩, then unprepare the ancilla and postselect on |0⟩, where

SELECT(U) = Σ_i |i⟩⟨i| ⊗ U_i

Can 'perform' non-unitary operations!

Berry, Childs, Kothari, Somma - arXiv:1501.01715, arXiv:1511.02306
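A minimal numerical sketch of this circuit: the state-preparation unitary is built here as a Householder reflection whose first column holds the √p_i amplitudes, and SELECT(U) is assembled directly from the definition. The particular mixture of single-qubit Paulis is a toy choice.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Us = [np.eye(2, dtype=complex), X, Z, X @ Z]   # the unitaries U_i
ps = np.array([0.4, 0.3, 0.2, 0.1])            # probabilities p_i, summing to 1
A = sum(p * U for p, U in zip(ps, Us))         # target A = sum_i p_i U_i

m, d = len(Us), 2
v = np.sqrt(ps)
# PREP: a real unitary with PREP |0> = sum_i sqrt(p_i) |i>  (Householder reflection)
w = np.eye(m)[:, 0] - v
w /= np.linalg.norm(w)
PREP = np.eye(m) - 2.0 * np.outer(w, w)

# SELECT(U) = sum_i |i><i| ⊗ U_i
SEL = sum(np.kron(np.diag(np.eye(m)[i]), U) for i, U in enumerate(Us))

# prepare, select, unprepare; the |0>-ancilla block of the result is A
W = np.kron(PREP.conj().T, np.eye(d)) @ SEL @ np.kron(PREP, np.eye(d))
block = W[:d, :d]                              # (<0| ⊗ I) W (|0> ⊗ I)
```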


SLIDE 30

Block-encodings

Even more general form of circuit: ancillas |0⟩ and a state |ψ⟩ enter a unitary U, and the ancillas are postselected on |0⟩.

U is a block-encoding of A if A = (⟨0| ⊗ I) U (|0⟩ ⊗ I), i.e. A is the top-left block of U:

U = [ A  ·
      ·  · ]

Limitation: the spectral norm of A is at most 1. Add a notion of scale:

U is an α-scaled block-encoding of A if A/α = (⟨0| ⊗ I) U (|0⟩ ⊗ I).
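The norm limitation is just the statement that any sub-block of a unitary is a contraction. A quick numerical sanity check, with a random unitary built by QR decomposition (the dimensions, 8 total and a 4-dimensional block, are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 8, 4                                    # total dim = 2 (ancilla) x 4 (system)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)                         # a random unitary

A = U[:d, :d]                                  # A = (<0| ⊗ I) U (|0> ⊗ I): top-left block
norm = np.linalg.norm(A, 2)                    # spectral norm, always <= 1
```

Any target matrix with spectral norm above 1 therefore needs the α-scaled variant, with A/α stored in the block instead.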


SLIDE 33

New primitives

If you have an α-scaled block-encoding of A, you can:

- perform |ψ⟩ → A |ψ⟩ / ‖A |ψ⟩‖ with O(α / ‖A |ψ⟩‖) applications (amplitude amplification)
- estimate ‖A |ψ⟩‖ to precision ε with O(α / ε) applications (amplitude estimation)


SLIDE 39

Building block-encodings

- Unitary matrices are 'trivial' block-encodings of themselves → Pauli matrices!
- Linear combinations: A = Σ_i α_i U_i gives a (Σ_i |α_i|)-scaled block-encoding
- Multiplication AB and tensor products A ⊗ B
- Sparsity: block-encodings from 'sparse-access' oracles

In practice, efficient block-encodings exist for:

- Any observable you might care about
- Any Hamiltonian you might care about


SLIDE 43

Hamiltonian simulation via polynomials

Given: an α-scaled block-encoding of a Hamiltonian H
Goal: build a block-encoding of e^{iHt} = cos(Ht) + i sin(Ht)
Approximate cos(Ht) and sin(Ht) via polynomials of H.

Jacobi-Anger expansion → polynomial in H/α:

sin(tH) = sin(tα · (H/α)) = Σ_{k=0}^{∞} 2(−1)^k J_{2k+1}(αt) T_{2k+1}(H/α)

J_m(x) is the Bessel function of the first kind. T_m(x) is the m'th Chebyshev polynomial.

Truncate the sum ∞ → K. Can make (H/α)^k via multiplication, and build the polynomial via linear combination.
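The truncated series converges extremely fast once K exceeds αt. The sketch below checks this numerically for a small random Hermitian matrix; J_m is computed from its power series so the check stays NumPy-only.

```python
import numpy as np
from math import factorial

def bessel_j(m, x, terms=40):
    # Bessel function of the first kind, J_m(x), from its power series
    return sum((-1) ** s * (x / 2) ** (2 * s + m) / (factorial(s) * factorial(s + m))
               for s in range(terms))

rng = np.random.default_rng(8)
M = rng.normal(size=(5, 5))
H = (M + M.T) / 2                              # toy Hermitian "Hamiltonian"
alpha = np.linalg.norm(H, 2)                   # spectral norm, so ||H/alpha|| <= 1
t, K = 1.0, 12

# Chebyshev polynomials of H/alpha via T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)
x = H / alpha
T = [np.eye(5), x]
for _ in range(2 * K):
    T.append(2 * x @ T[-1] - T[-2])

approx = sum(2 * (-1) ** k * bessel_j(2 * k + 1, alpha * t) * T[2 * k + 1]
             for k in range(K + 1))

evals, V = np.linalg.eigh(H)
exact = V @ np.diag(np.sin(t * evals)) @ V.T   # sin(tH) by diagonalization
err = np.max(np.abs(approx - exact))
```

With K = 12 and αt of order a few, the truncation error is far below floating-point noise.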


SLIDE 50

Hamiltonian simulation via polynomials

Building polynomials via multiplication and linear combination:

- Complicated circuit. Requires O(log(degree)) additional ancillas.
- Scale factor is O(K²) - still pretty large.

Can do much, much better: quantum singular value transformation / qubitization.

Low, Chuang - arXiv:1606.02685, arXiv:1610.06536, arXiv:1707.05391
Gilyén, Su, Low, Wiebe - arXiv:1806.01838

Circuit: |0⟩ → U → e^{iθ_1 |0⟩⟨0|} → U† → e^{iθ_2 |0⟩⟨0|} → U → ... → postselect on |0⟩

Only O(1) additional ancillas, and the resulting block-encoding is O(1)-scaled. The polynomial coefficients are encoded into θ_1, θ_2, ...; see arXiv:2003.02831.


SLIDE 56

Halfway-point - Summary

U is an α-scaled block-encoding: A/α sits in the top-left block of U, accessed by preparing and postselecting ancillas in |0⟩.

- Block-encodings can express any matrix
- New primitive operations: |ψ⟩ → A |ψ⟩ / ‖A |ψ⟩‖, and estimating ‖A |ψ⟩‖
- Construct e.g. via linear combinations of Pauli matrices
- Singular value transformation: given H, construct poly(H)
- Yields a Hamiltonian simulation algorithm with complexity O(α|t|) → better than naive Trotter!

Good reference: Gilyén, Su, Low, Wiebe - arXiv:1806.01838


SLIDE 63

Preparing ground states, via e.g. Lin, Tong - arXiv:2002.12508

Say we have a Hamiltonian H with:

- Spectral gap ∆
- Known ground-state energy E_0
- Non-degenerate ground space

Construct a polynomial approximation of the Heaviside step function:

p(x) ≈ 1 − Θ(x − E_0 − ∆/2) = { 0 if x ≥ E_0 + ∆/2;  1 if x < E_0 + ∆/2 }

Example construction: Chebyshev expansion of erf(kx) ≈ Θ(x). To be accurate within ±∆/2, need degree O(1/∆) (arXiv:1707.05391).

Then p(H) ≈ |ψ_0⟩⟨ψ_0|.

Algorithm based on a trial state |φ⟩, with cost 1/⟨φ|ψ_0⟩:

|φ⟩ → p(H) |φ⟩ / ‖p(H) |φ⟩‖ ≈ |ψ_0⟩
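The filtering step can be emulated classically by applying p(x) to the eigenvalues directly (a quantum implementation would use the polynomial block-encoding instead). The Hamiltonian and trial state below are arbitrary toy choices.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(9)
M = rng.normal(size=(8, 8))
H = (M + M.T) / 2                              # toy Hamiltonian
evals, V = np.linalg.eigh(H)
E0, gap = evals[0], evals[1] - evals[0]

# Smooth step p(x) = (1 - erf(k (x - E0 - gap/2))) / 2: ~1 below the gap, ~0 above it
k = 50.0 / gap
p = np.array([(1 - erf(k * (e - E0 - gap / 2))) / 2 for e in evals])
pH = V @ np.diag(p) @ V.T                      # p(H), by exact diagonalization

phi = rng.normal(size=8)
phi /= np.linalg.norm(phi)                     # generic trial state |phi>
out = pH @ phi
out /= np.linalg.norm(out)                     # p(H)|phi> / ||p(H)|phi>||
fidelity = abs(out @ V[:, 0])                  # overlap with the true ground state
```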


SLIDE 68

Preparing thermal states, via Chowdhury, Somma - arXiv:1603.02940

Goal: prepare ρ_β = e^{−βH}/Z where Z = Tr(e^{−βH})

Hubbard-Stratonovich transformation:

e^{−βH/2} = (1/√(2π)) ∫_{−∞}^{∞} dy · e^{−y²/2} e^{−iy√(βH)}

Chain of polynomial approximations: βH → √(βH) → e^{−iy√(βH)} → e^{−βH/2}

Multiply the maximally mixed state I/D:

I/D → e^{−βH/2} (I/D) e^{−βH/2} / Tr(e^{−βH/2} (I/D) e^{−βH/2}) = (e^{−βH}/D) / (Z/D) = e^{−βH}/Z

Complexity: O(√(β · D/Z))
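The last equality, i.e. that postselection renormalizes e^{−βH/2} (I/D) e^{−βH/2} into the Gibbs state, can be verified directly by diagonalization; the Hamiltonian and β below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2                              # toy Hamiltonian
beta, D = 1.7, 6

evals, V = np.linalg.eigh(H)
f = lambda g: V @ np.diag(g(evals)) @ V.T      # matrix function by diagonalization

half = f(lambda e: np.exp(-beta * e / 2))      # e^{-beta H / 2}
rho = half @ (np.eye(D) / D) @ half            # applied to the maximally mixed state
rho /= np.trace(rho)                           # renormalize (postselection)

gibbs = f(lambda e: np.exp(-beta * e))
gibbs /= np.trace(gibbs)                       # e^{-beta H} / Z
err = np.max(np.abs(rho - gibbs))
```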


SLIDE 72

Expectations of mixed states (this work)

How to get a quadratic speed-up for estimating Tr(ρO)?

Say |ψ⟩ is a purification of ρ: ρ = Tr_P(|ψ⟩⟨ψ|)

Say U prepares |ψ⟩. Circuit: |0^n⟩ |0^k⟩ → U → (O ⊗ I) → U† → postselect on |0^n⟩ |0^k⟩.

This estimates |⟨ψ| (O ⊗ I) |ψ⟩| = |Tr(|ψ⟩⟨ψ| (O ⊗ I))| = |Tr(ρO)|
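The identity behind the circuit, Tr(ρO) = ⟨ψ|(O ⊗ I)|ψ⟩ for any purification |ψ⟩, is easy to confirm numerically (the dimensions are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 4, 3                                    # system and purifying dimensions
psi = rng.normal(size=d * k) + 1j * rng.normal(size=d * k)
psi /= np.linalg.norm(psi)                     # a random purification |psi>

Psi = psi.reshape(d, k)                        # system index first
rho = Psi @ Psi.conj().T                       # rho = Tr_P(|psi><psi|)

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O = (M + M.conj().T) / 2                       # Hermitian observable

lhs = np.trace(rho @ O).real                   # Tr(rho O)
rhs = (psi.conj() @ (np.kron(O, np.eye(k)) @ psi)).real   # <psi| (O ⊗ I) |psi>
```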


SLIDE 76

n-time correlation functions (this work)

Observable in the Heisenberg picture: O_i(t_i) = e^{iHt_i} O_i e^{−iHt_i}

To estimate ⟨O_1(t_1) O_2(t_2) ... O_n(t_n)⟩, construct a block-encoding of

Γ = Π_i O_i(t_i) = e^{iHt_1} O_1 e^{iH(t_2−t_1)} O_2 e^{iH(t_3−t_2)} ... O_n e^{−iHt_n}

Γ is not Hermitian, so its expectation is complex.
Real part: Tr(ρ (Γ + Γ†)/2). Imaginary part: Tr(ρ (Γ − Γ†)/(2i)).

Improves over Pedernales et al. - arXiv:1401.2430
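Both the telescoped product defining Γ and the real/imaginary-part split can be checked with small random matrices (toy operators and arbitrary times; n = 3 here):

```python
import numpy as np

rng = np.random.default_rng(11)
d = 4
def rand_herm():
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H, O1, O2, O3 = rand_herm(), rand_herm(), rand_herm(), rand_herm()
evals, V = np.linalg.eigh(H)
U = lambda t: V @ np.diag(np.exp(1j * evals * t)) @ V.conj().T   # e^{iHt}

t1, t2, t3 = 0.3, 0.7, 1.1
heis = lambda O, t: U(t) @ O @ U(-t)           # Heisenberg-picture observable O(t)
direct = heis(O1, t1) @ heis(O2, t2) @ heis(O3, t3)
Gamma = U(t1) @ O1 @ U(t2 - t1) @ O2 @ U(t3 - t2) @ O3 @ U(-t3)  # telescoped form

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)                           # a random mixed state

expect = np.trace(rho @ Gamma)                 # complex in general
re = np.trace(rho @ (Gamma + Gamma.conj().T) / 2)
im = np.trace(rho @ (Gamma - Gamma.conj().T) / (2j))
```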


SLIDE 81

Density of states (this work)

Say H has eigenvalues E_i. Then

ρ(E) = (1/D) Σ_i δ(E_i − E)

Problems with evaluating ρ(E):

- Has a non-smooth 'spikey' shape at high resolutions
- #P-complete to compute exactly

Usually interested in histograms or 'sketches' of ρ(E).


SLIDE 85

Density of states (this work)

Histogram bin: ∫_{E_a}^{E_b} ρ(E) dE

w(x) ≈ rect_{E_a,E_b}(x) = { 0 if x < E_a;  1 if E_a ≤ x ≤ E_b;  0 if E_b < x }

Construct e.g. by adding two erf(x) approximations, or using Jackson's theorem and amplifying polynomials. Then Tr((I/D) w(H)) estimates the bin value.

Roggero - arXiv:2004.04889 - point estimates:

ρ(E) = Tr((I/D) δ(H − E)) ≈ Tr((I/D) e^{−(H−E)²/∆}) ≈ Tr((I/D) poly(H))

Both improve upon the phase-estimation method.
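A classical sketch of the erf-window bin estimator, evaluating Tr(w(H))/D through the eigenvalues of a random symmetric matrix (the bin edges and the sharpness parameter k are arbitrary choices):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(12)
D = 200
M = rng.normal(size=(D, D))
H = (M + M.T) / (2 * np.sqrt(D))               # spectrum roughly within [-1.5, 1.5]
evals = np.linalg.eigvalsh(H)

Ea, Eb, k = -0.5, 0.5, 200.0
# Difference of two erf steps approximates rect_{Ea,Eb}
w = lambda x: (erf(k * (x - Ea)) - erf(k * (x - Eb))) / 2
bin_est = np.mean([w(e) for e in evals])       # = Tr(w(H)) / D
bin_true = np.mean((evals >= Ea) & (evals <= Eb))   # exact bin value
```

Only eigenvalues within the ~1/k-wide transition regions around the bin edges contribute error, so the estimate tracks the exact histogram closely.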


SLIDE 88

Kernel Polynomial Method (this work)

Method for sketching ρ(E): Chebyshev decomposition

μ_n = ∫_{−1}^{1} T_n(E) ρ(E) dE

ρ(E) ≈ (1/(π√(1 − E²))) [ g_0 μ_0 + 2 Σ_{n=1}^{N} g_n μ_n T_n(E) ]

Scale H so that all energies fall into [−1, 1]. The g_n are independent of ρ. See arXiv:cond-mat/0504627.

Quantum singular value transformation makes T_n(H) very easy:

|0⟩ → U → e^{iπ|0⟩⟨0|} → U† → e^{iπ|0⟩⟨0|} → U → ... → postselect on |0⟩
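A classical sketch of the kernel polynomial method with the Jackson damping factors g_n (the moments are computed from exact eigenvalues here; the quantum version would obtain each μ_n by estimating Tr(T_n(H))/D with the circuit above). The random-matrix instance is a toy choice.

```python
import numpy as np

rng = np.random.default_rng(13)
D, N = 400, 60
M = rng.normal(size=(D, D))
H = (M + M.T) / (2 * np.sqrt(D))
evals = np.linalg.eigvalsh(H)
evals = evals / (np.abs(evals).max() * 1.2)    # rescale spectrum safely into (-1, 1)

# Chebyshev moments mu_n = (1/D) sum_i T_n(E_i), via the recurrence
mu = np.zeros(N)
Tprev, Tcur = np.ones(D), evals.copy()
mu[0], mu[1] = 1.0, Tcur.mean()
for n in range(2, N):
    Tprev, Tcur = Tcur, 2 * evals * Tcur - Tprev
    mu[n] = Tcur.mean()

# Jackson damping factors g_n suppress Gibbs oscillations
n = np.arange(N)
g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
     + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

E = np.linspace(-0.99, 0.99, 801)
series = g[0] * mu[0] + 2 * sum(g[j] * mu[j] * np.cos(j * np.arccos(E)) for j in range(1, N))
dos = series / (np.pi * np.sqrt(1 - E ** 2))   # smoothed density of states
mass = dos.sum() * (E[1] - E[0])               # should integrate to ~1
```

The Jackson kernel keeps the reconstruction nonnegative and the sketch integrates to (approximately) the full spectral weight.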


SLIDE 91

Other quantities (this work)

Similar strategies work for:

- Local density of states. ψ(r) is the wavefunction of a particle at r; |ψ_i⟩ are the eigenstates of H.

  ρ_r(E) = Σ_i δ(E_i − E) |⟨ψ(r)|ψ_i⟩|²

- Correlation functions in linear response theory. For observables B, C:

  A(E) = B δ(H − E − E_0) C

SLIDE 92

Thank you for your attention!

Special thanks to: Scott Aaronson, András Gilyén, Andrew Potter, Justin Thaler, Chunhao Wang, Alexander Weisse and Alessandro Roggero

SLIDE 93

Oblivious amplitude amplification

Primitive operation: |ψ⟩ → A |ψ⟩ / ‖A |ψ⟩‖. Requires O(1/‖A |ψ⟩‖) applications of A and preparations of |ψ⟩.

What if |ψ⟩ is very expensive to prepare? Oblivious amplitude amplification (arXiv:1312.1414): if A is (approximately) unitary, exactly one copy of |ψ⟩ suffices.