Interlacing Families II: Mixed Characteristic Polynomials and the Kadison-Singer Problem



SLIDE 1

Interlacing Families II: Mixed Characteristic Polynomials and the Kadison-Singer Problem

Nikhil Srivastava (Microsoft Research), Adam Marcus (Yale), Daniel Spielman (Yale)

SLIDE 2

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₘ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue.

SLIDE 7

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₘ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue.

disc = 1

SLIDE 8

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₘ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue. How well can you do in general?

SLIDE 9

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₘ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue. In general, a random coloring gives

disc := maxᵢ | |Sᵢ ∩ R| − |Sᵢ|/2 | ≤ O(√(n log m)).
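The random-coloring bound is easy to sanity-check numerically. Below is a minimal illustrative sketch (the set family, sizes, and seed are arbitrary choices, not from the talk):

```python
import random

def discrepancy(sets, red):
    # disc = max_i | |S_i ∩ R| - |S_i| / 2 |
    return max(abs(len(s & red) - len(s) / 2) for s in sets)

random.seed(0)
n, m = 64, 64
universe = list(range(n))
sets = [set(random.sample(universe, 32)) for _ in range(m)]
red = {x for x in universe if random.random() < 0.5}

d = discrepancy(sets, red)
print(d)  # with high probability this is O(sqrt(n log m)), far below n/2
```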

SLIDE 10

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₘ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue. Spencer: there exists a coloring with

disc := maxᵢ | |Sᵢ ∩ R| − |Sᵢ|/2 | ≤ 3√(n log(m/n)).

SLIDE 11

Discrepancy Theory

“How well can you approximate a discrete object by a continuous one?”

e.g. Given a set family S₁, …, Sₙ ⊂ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue. Spencer: there exists a coloring with

disc := maxᵢ | |Sᵢ ∩ R| − |Sᵢ|/2 | ≤ 3√n.

SLIDE 12

A Spectral Discrepancy Theorem

SLIDE 13

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form

Q(u) := Σᵢ ⟨u, vᵢ⟩²

SLIDE 14

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form Q(u) := Σᵢ ⟨u, vᵢ⟩².

Example. v₁, v₂, v₃, v₄ ∈ ℝ²

SLIDE 15

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form Q(u) := Σᵢ ⟨u, vᵢ⟩².

Example. u = (1, 0)

SLIDE 16

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form Q(u) := Σᵢ ⟨u, vᵢ⟩².

Example. u = (1, 0):

⟨u, v₁⟩² + ⟨u, v₂⟩² + ⟨u, v₃⟩² + ⟨u, v₄⟩² = 1 + 0 + ¼ + ¼ = 1.5

SLIDE 17

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form Q(u) := Σᵢ ⟨u, vᵢ⟩².

Example. u = (0, 1):

⟨u, v₁⟩² + ⟨u, v₂⟩² + ⟨u, v₃⟩² + ⟨u, v₄⟩² = 0 + 1 + ¾ + ¾ = 2.5

SLIDE 18

Quadratic Forms and Energy

Given vectors v₁, …, vₘ ∈ ℝⁿ, their energy in a test direction u ∈ ℝⁿ, ||u|| = 1, is the quadratic form Q(u) := Σᵢ ⟨u, vᵢ⟩².

Example. u = (1/√2, 1/√2):

⟨u, v₁⟩² + ⟨u, v₂⟩² + ⟨u, v₃⟩² + ⟨u, v₄⟩² = ½ + ½ + ¾ + ¼ = 2

SLIDE 20

Splitting A Quadratic Form in Half

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ|| ≤ ε and energy one in each direction:

Σᵢ ⟨u, vᵢ⟩² = 1 for all ||u|| = 1   (“isotropic”)

SLIDE 25

Splitting A Quadratic Form in Half

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ|| ≤ ε and energy one in each direction: Σᵢ ⟨u, vᵢ⟩² = 1 for all ||u|| = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half in each direction:

Σ_{i∈T_k} ⟨u, vᵢ⟩² = ½ ± 5ε for all ||u|| = 1

SLIDE 27

Splitting A Quadratic Form in Half

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ|| ≤ ε and energy one in each direction: Σᵢ ⟨u, vᵢ⟩² = 1 for all ||u|| = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half in each direction: Σ_{i∈T_k} ⟨u, vᵢ⟩² = ½ ± 5ε for all ||u|| = 1.

“Each part approximates the whole.”

SLIDE 28

Many Possible Partitions

SLIDE 31

Many Possible Partitions

Theorem: a good partition always exists.

SLIDE 32

The norm condition

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ|| ≤ ε and energy one in each direction: Σᵢ ⟨u, vᵢ⟩² = 1 for all ||u|| = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half in each direction: Σ_{i∈T_k} ⟨u, vᵢ⟩² = ½ ± 5ε for all ||u|| = 1.

SLIDE 35

In One Dimension

Main Theorem. Suppose v₁, …, vₘ ∈ ℝ are numbers with |vᵢ|² ≤ ε² and energy one: Σᵢ vᵢ² = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half:

Σ_{i∈T_k} vᵢ² = ½ ± 5ε

SLIDE 36

In One Dimension

Main Theorem. Suppose v₁, …, vₘ ∈ ℝ are numbers with |vᵢ|² ≤ ε² and energy one: Σᵢ vᵢ² = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half:

Σ_{i∈T_k} vᵢ² = ½ ± ε²/2   (greedy bin packing)
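The one-dimensional statement can be proved by greedy bin packing: process the energies in decreasing order, always adding to the lighter half; the final imbalance is at most the largest single energy. A sketch (the values below are illustrative):

```python
def greedy_split(energies):
    # Process energies in decreasing order, always adding to the lighter bin;
    # the final gap between bins is at most the largest single energy.
    bins = [0.0, 0.0]
    for e in sorted(energies, reverse=True):
        bins[bins.index(min(bins))] += e
    return bins

energies = [0.09, 0.25, 0.16, 0.3, 0.2]  # the squares v_i^2, summing to 1
b1, b2 = greedy_split(energies)
print(b1, b2)  # each bin holds 1/2, up to max_i v_i^2
```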

SLIDE 39

In Higher Dimensions

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ|| ≤ ε and energy one in each direction: Σᵢ ⟨u, vᵢ⟩² = 1 for all ||u|| = 1. Then there is a partition [m] = T₁ ∪ T₂ such that each part has energy close to half in each direction: Σ_{i∈T_k} ⟨u, vᵢ⟩² = ½ ± 5ε for all ||u|| = 1.

Optimal in high dimension.

SLIDE 42

Matrix Notation

Given vectors v₁, …, vₘ, write the quadratic form as

Q(u) = Σᵢ ⟨vᵢ, u⟩² = uᵀ (Σᵢ vᵢvᵢᵀ) u

Isotropy: Σᵢ vᵢvᵢᵀ = Iₙ

Comparison: A ≼ B ⟺ uᵀAu ≤ uᵀBu for all u

SLIDE 44

Matrix Notation

Main Theorem. Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ||² ≤ ε and Σᵢ vᵢvᵢᵀ = Iₙ. Then there is a partition [m] = T₁ ∪ T₂ such that

(½ − 5√ε) I ≼ Σ_{i∈T_k} vᵢvᵢᵀ ≼ (½ + 5√ε) I

SLIDE 45

Unnormalized Version

Suppose I get some vectors w₁, …, wₘ which are not isotropic:

Σᵢ wᵢwᵢᵀ = W ≽ 0

Consider vᵢ := W^(−1/2) wᵢ and apply the theorem to the vᵢ. The normalized vectors have ||vᵢ||² = ||W^(−1/2) wᵢ||² ≤ ε.

The theorem gives: (½ − 5√ε) I ≼ Σ_{i∈T_k} vᵢvᵢᵀ ≼ (½ + 5√ε) I
SLIDE 48

Unnormalized Version

Suppose I get some vectors w₁, …, wₘ which are not isotropic: Σᵢ wᵢwᵢᵀ = W ≽ 0. Consider vᵢ := W^(−1/2) wᵢ and apply the theorem to the vᵢ. The normalized vectors have ||vᵢ||² = ||W^(−1/2) wᵢ||² ≤ ε. The theorem gives:

(½ − 5√ε) I ≼ W^(−1/2) (Σ_{i∈T_k} wᵢwᵢᵀ) W^(−1/2) ≼ (½ + 5√ε) I

Fact: A ≼ B ⟺ CAC ≼ CBC for invertible symmetric C

SLIDE 49

Unnormalized Version

Suppose I get some vectors w₁, …, wₘ which are not isotropic: Σᵢ wᵢwᵢᵀ = W ≽ 0. Consider vᵢ := W^(−1/2) wᵢ and apply the theorem to the vᵢ. The normalized vectors have ||vᵢ||² = ||W^(−1/2) wᵢ||² ≤ ε. Conjugating by W^(1/2):

(½ − 5√ε) W ≼ Σ_{i∈T_k} wᵢwᵢᵀ ≼ (½ + 5√ε) W

SLIDE 51

Unnormalized Theorem

Given arbitrary vectors w₁, …, wₘ ∈ ℝⁿ there is a partition [m] = T₁ ∪ T₂ with

(½ − 5√ε) Σᵢ wᵢwᵢᵀ ≼ Σ_{i∈T_k} wᵢwᵢᵀ ≼ (½ + 5√ε) Σᵢ wᵢwᵢᵀ

where ε := maxᵢ ||W^(−1/2) wᵢ||².

Any quadratic form in which no vector has too much influence can be split into two representative pieces.
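The normalization step can be sketched in numpy (random Gaussian vectors, illustrative only): whitening by W^(−1/2) makes the vectors isotropic, and ε is the largest normalized squared norm (a leverage score).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 40
Wv = rng.standard_normal((m, n))            # rows are the w_i
W = Wv.T @ Wv                               # sum_i w_i w_i^T
evals, evecs = np.linalg.eigh(W)
W_inv_half = (evecs * evals**-0.5) @ evecs.T
V = Wv @ W_inv_half                         # rows are v_i = W^{-1/2} w_i

assert np.allclose(V.T @ V, np.eye(n))      # isotropic after normalization
eps = np.sum(V**2, axis=1).max()            # eps = max_i ||W^{-1/2} w_i||^2
print(0 < eps < 1)  # True: here no single vector has too much influence
```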

SLIDE 52

Applications

SLIDE 53

1. Graph Theory

Given an undirected graph G = (V, E), consider its Laplacian matrix:

L_G = Σ_{ij∈E} (eᵢ − eⱼ)(eᵢ − eⱼ)ᵀ

SLIDE 56

1. Graph Theory

Given an undirected graph G = (V, E), consider its Laplacian matrix:

L_G = Σ_{ij∈E} (eᵢ − eⱼ)(eᵢ − eⱼ)ᵀ = D − A

Quadratic form: xᵀ L_G x = Σ_{ij∈E} (xᵢ − xⱼ)² for x ∈ ℝⁿ
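A minimal sketch of the construction (the 4-cycle is an arbitrary example, not from the slides): build L_G from rank-one terms and check the quadratic form identity.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]  # a 4-cycle
L = np.zeros((n, n))
for i, j in edges:
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    L += np.outer(e, e)                   # (e_i - e_j)(e_i - e_j)^T

x = np.array([2.0, 1.0, 1.0, 3.0])
lhs = x @ L @ x
rhs = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(lhs, rhs)  # 6.0 6.0
```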

SLIDE 59

The Laplacian Quadratic Form

An example: xᵀLx = Σ_{ij∈E} (x(i) − x(j))² = 28

[Figure: a small graph with vertex values +2, +1, +1, +3 and squared edge differences 1², 3², 2², 1², 2², 3².]

SLIDE 61

The Laplacian Quadratic Form

Another example: xᵀ L_G x = 3

[Figure: the same graph with 0/1 vertex values; squared edge differences 0², 0², 1², 1², 1², 0².]

SLIDE 63

Cuts and the Quadratic Form

For the characteristic vector x = 1_S of a vertex set S, xᵀ L_G x counts the edges crossing the cut (S, V ∖ S).

The Laplacian quadratic form encodes the entire cut structure of the graph.
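The cut fact is easy to verify on a toy graph (the vertices, edges, and set S below are illustrative choices):

```python
import numpy as np

n = 5
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
L = np.zeros((n, n))
for i, j in edges:
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    L += np.outer(e, e)

S = {0, 1}
x = np.array([1.0 if v in S else 0.0 for v in range(n)])  # characteristic vector
cut = sum(1 for i, j in edges if (i in S) != (j in S))    # edges crossing the cut
print(x @ L @ x, cut)  # 2.0 2
```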

SLIDE 68

Application to Graphs

Theorem applied to G: split the edges into H₁ and H₂, with Laplacians L_{H₁}, L_{H₂} satisfying

xᵀ L_{H₁} x ≈ ½ xᵀ L_G x

SLIDE 69

Recursive Application Gives:

1. Graph Sparsification Theorem [Batson-Spielman-S’09]: Every graph G has a weighted O(1)-cut approximation H with O(n) edges.

G: O(n²) edges (unweighted) → H: O(n) edges (weighted)

SLIDE 75

Approximating One Graph by Another

Cut Approximation [Benczur-Karger’96]: for every cut, the weight of edges cut in G ≈ the weight of edges cut in H.

G and H have the same cuts. Equivalent for min cut, max cut, sparsest cut…

SLIDE 76

Recursive Application Gives:

2. Unweighted Graph Sparsification: Every transitive graph G has an unweighted O(1)-cut approximation H with O(n) edges.

Kₙ → H (expander graph)

SLIDE 78

Recursive Application Gives:

2. Unweighted Graph Sparsification: Every transitive graph G can be partitioned into O(1)-cut approximations with O(n) edges.

Kₙ → H₁, …, Hₙ (expander graphs)

Generalizes [Frieze-Molloy].

SLIDE 79

Recursive Application Gives:

2. Unweighted Graph Sparsification: Every transitive graph G can be partitioned into O(1)-cut approximations with O(n) edges.

Same cut structure: G → H₁, H₂

SLIDE 80

2. Uncertainty Principles

Signal x ∈ ℂⁿ. Discrete Fourier Transform:

x̂(j) = ⟨x, (exp(−2πi·jl/n))_{l≤n}⟩

SLIDE 82

2. Uncertainty Principles

Signal x ∈ ℂⁿ. Discrete Fourier Transform: x̂(j) = ⟨x, (exp(−2πi·jl/n))_{l≤n}⟩.

Uncertainty Principle: x and x̂ cannot be simultaneously localized:

supp(x) × supp(x̂) ≥ n

If x is supported on √n coordinates, then supp(x̂) ≥ √n.
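The support uncertainty principle can be checked with numpy's FFT; the spike train below attains it with equality (the example is illustrative, and numpy's unnormalized DFT convention is assumed):

```python
import numpy as np

n = 16
x = np.zeros(n)
x[::4] = 1.0                      # supported on sqrt(n) = 4 evenly spaced coords
xhat = np.fft.fft(x)
supp_x = np.count_nonzero(np.abs(x) > 1e-9)
supp_xhat = np.count_nonzero(np.abs(xhat) > 1e-9)
print(supp_x, supp_xhat)          # 4 4, so supp(x) * supp(xhat) = 16 = n exactly
```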

SLIDE 83

2. Uncertainty Principles

Signal x ∈ ℂⁿ. Discrete Fourier Transform: x̂(j) = ⟨x, (exp(−2πi·jl/n))_{l≤n}⟩.

Stronger Uncertainty Principle: for every subset S with |S| = √n, there is a partition [n] = T₁ ∪ ⋯ ∪ T_{√n} with

||x|_S||² ≈ (1/√n) ||x̂|_{T_j}||² for all x and T_j

SLIDE 86

2. Uncertainty Principles

Proof. Let f_j = (exp(−2πi·jl/n))_{l≤n} be the Fourier basis. Fix a subset S ⊂ [n] of √n coordinates. The restricted norm is

||x|_S||² = Σ_j ⟨x|_S, f_j⟩²

a quadratic form in √n dimensions. Apply the theorem.

Applications in analytic number theory, harmonic analysis.

SLIDE 90

3. The Kadison-Singer Problem

Dirac 1930s: Bra-Ket formalism for quantum states: ⟨ψ|, |φ⟩, … What are bras and kets? NOT vectors. Von Neumann 1936: theory of C*-algebras. Kadison-Singer 1959: Does this lead to a satisfactory notion of measurement? Conjecture: about ∞-dimensional matrices.

SLIDE 94

3. The Kadison-Singer Problem

Kadison-Singer 1959: Does this lead to a satisfactory notion of measurement? Conjecture: about ∞-dimensional matrices. Anderson 1979: reduced to a question about finite matrices (the “Paving Conjecture”). Akemann-Anderson 1991: reduced to a question about finite projection matrices. Weaver 2002: discrepancy-theoretic formulation of the same question.
SLIDE 95

3. The Kadison-Singer Problem

This work: proof of Weaver’s conjecture.

SLIDE 97

In General

Anything that can be encoded as a quadratic form can be split into pieces while preserving certain properties. Many different things can be gainfully encoded this way.

SLIDE 98

Proof

SLIDE 99

Main Theorem

Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ||² ≤ ε and Σᵢ vᵢvᵢᵀ = Iₙ. Then there is a partition [m] = T₁ ∪ T₂ such that

(½ − 5√ε) I ≼ Σ_{i∈T_k} vᵢvᵢᵀ ≼ (½ + 5√ε) I

SLIDE 101

Equivalent Theorem

Suppose v₁, …, vₘ ∈ ℝⁿ are vectors with ||vᵢ||² ≤ ε and Σᵢ vᵢvᵢᵀ = Iₙ. Then there is a partition [m] = T₁ ∪ T₂ such that

Σ_{i∈T_k} vᵢvᵢᵀ ≼ (½ + 5√ε) I

(The lower bound comes for free: Σ_{i∈T₁} vᵢvᵢᵀ = I − Σ_{i∈T₂} vᵢvᵢᵀ.)

SLIDE 103

Idea 1: Random Partition

Choose [m] = T₁ ∪ T₂ randomly.

Want: Σ_{i∈T₁} vᵢvᵢᵀ ≼ (½ + 5√ε) I and Σ_{i∈T₂} vᵢvᵢᵀ ≼ (½ + 5√ε) I

SLIDE 106

Idea 1: Random Partition

Choose [m] = T₁ ∪ T₂ randomly.

Want: ||Σ_{i∈T₁} vᵢvᵢᵀ|| ≤ ½ + 5√ε and ||Σ_{i∈T₂} vᵢvᵢᵀ|| ≤ ½ + 5√ε

Trick: embed the two parts in diagonal blocks of a 2n × 2n matrix:

|| diag( Σ_{i∈T₁} vᵢvᵢᵀ , Σ_{i∈T₂} vᵢvᵢᵀ ) || = max_k || Σ_{i∈T_k} vᵢvᵢᵀ ||
SLIDE 107

Idea 1: Random Partition

Define independent random vectors w₁, …, wₘ ∈ ℝ²ⁿ:

wᵢ = (vᵢ, 0) with prob. 0.5;  wᵢ = (0, vᵢ) with prob. 0.5

Then 𝔼 || Σᵢ wᵢwᵢᵀ || = 𝔼_T max_k || Σ_{i∈T_k} vᵢvᵢᵀ ||

SLIDE 108

The Matrix Chernoff Bound

Suppose w₁, …, wₘ ∈ ℝ²ⁿ are independent, 𝔼 Σᵢ wᵢwᵢᵀ = I/2 and ||wᵢ||² ≤ ε.

Tropp 2011: 𝔼 || Σᵢ wᵢwᵢᵀ || ≤ ½ + O(√(ε log n))

Analogous to the scalar Chernoff bound for sums of independent bounded random numbers.

SLIDE 110

Main Theorem

If w₁, …, wₘ ∈ ℝ²ⁿ are independent, 𝔼 Σᵢ wᵢwᵢᵀ = I/2 and ||wᵢ||² ≤ ε, then

ℙ[ || Σᵢ wᵢwᵢᵀ || ≤ ½ + O(√ε) ] > 0

SLIDE 113

Main Theorem

If w₁, …, wₘ ∈ ℝⁿ are independent, 𝔼 Σᵢ wᵢwᵢᵀ = I and ||wᵢ||² ≤ ε, then

ℙ[ || Σᵢ wᵢwᵢᵀ || ≤ 1 + O(√ε) ] > 0

∃ w₁, …, wₘ such that || Σᵢ wᵢwᵢᵀ || ≤ 1 + O(√ε)

SLIDE 114

Idea 2: The Expected Polynomial

Just like yesterday,

|| Σᵢ wᵢwᵢᵀ || = λmax det( xI − Σᵢ wᵢwᵢᵀ )   (the largest root of the characteristic polynomial)

Consider μ(x) := 𝔼 det( xI − Σᵢ wᵢwᵢᵀ )

SLIDE 118

3-Step Plan

1. Show that there exist w₁, …, wₘ with λmax χ[Σᵢ wᵢwᵢᵀ] ≤ λmax 𝔼 χ[Σᵢ wᵢwᵢᵀ]

{ χ[Σᵢ wᵢwᵢᵀ](x) }, over the choices of the wᵢ, is an interlacing family.

𝔼 χ[Σᵢ wᵢwᵢᵀ](x) is real-rooted for all product distributions on signings.

𝔼 χ[Σᵢ wᵢwᵢᵀ] = Πᵢ₌₁ᵐ (1 − ∂_{zᵢ}) det( xI + Σᵢ zᵢAᵢ ) |_{z₁=⋯=zₘ=0}

SLIDE 121

Central Identity

Suppose w₁, …, wₘ are independent random vectors with Aᵢ := 𝔼 wᵢwᵢᵀ. Then

𝔼 det( xI − Σᵢ wᵢwᵢᵀ ) = Πᵢ₌₁ᵐ (1 − ∂_{zᵢ}) det( xI + Σᵢ zᵢAᵢ ) |_{z₁=⋯=zₘ=0}
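The identity can be verified symbolically on a toy case (two random vectors in ℝ², each uniform on two hand-picked values; sympy is used for the derivatives):

```python
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2')
w1_vals = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]   # w_1 uniform on these
w2_vals = [sp.Matrix([1, 1]), sp.Matrix([1, -1])]  # w_2 uniform on these
I2 = sp.eye(2)

# Left side: E det(xI - w1 w1^T - w2 w2^T), averaging the 4 outcomes.
lhs = sp.Rational(1, 4) * sum(
    sp.det(x * I2 - a * a.T - b * b.T) for a in w1_vals for b in w2_vals)

# Right side: (1 - d/dz1)(1 - d/dz2) det(xI + z1 A1 + z2 A2) at z = 0.
A1 = sp.Rational(1, 2) * (w1_vals[0] * w1_vals[0].T + w1_vals[1] * w1_vals[1].T)
A2 = sp.Rational(1, 2) * (w2_vals[0] * w2_vals[0].T + w2_vals[1] * w2_vals[1].T)
d = sp.det(x * I2 + z1 * A1 + z2 * A2)
rhs = (d - sp.diff(d, z1) - sp.diff(d, z2) + sp.diff(d, z1, z2)).subs({z1: 0, z2: 0})

print(sp.expand(lhs), sp.expand(rhs))  # both sides equal x**2 - 3*x + 1
```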

SLIDE 124

3-Step Plan

1. Show that there exist w₁, …, wₘ with λmax χ[Σᵢ wᵢwᵢᵀ] ≤ λmax 𝔼 χ[Σᵢ wᵢwᵢᵀ] ✓

2. Calculate 𝔼 χ =: μ(x) = Πᵢ₌₁ᵐ (1 − ∂_{zᵢ}) det( xI + Σᵢ zᵢAᵢ ) |_{z=0} ✓

3. Bound the largest root: λmax μ(x) ≤ 1 + O(√ε), assuming 𝔼 Σᵢ wᵢwᵢᵀ = I and ||wᵢ||² ≤ ε ?

SLIDE 128

Bounding the Roots

Need to bound the roots of

Πᵢ₌₁ᵐ (1 − ∂_{zᵢ}) det( xI + Σᵢ zᵢAᵢ ) |_{z₁=⋯=zₘ=0}

as a function of the Aᵢ. Basic question: what does (1 − ∂) do to roots?

Quantitative version of the fact that it preserves real stability.

SLIDE 132

The Univariate Case

Basic question: what does (1 − ∂) do to roots? Answer: interlacing.

Consider p(x) = (x − λ₁)⋯(x − λₙ) with distinct roots, so p′(x) = Σ_k Π_{j≠k} (x − λⱼ). Then

p′(x)/p(x) = Σⱼ 1/(x − λⱼ)

is a rational function with n poles.

SLIDE 138

A Rational Function

p(x) = (x − λ₁)⋯(x − λₙ),  p′(x)/p(x) = Σⱼ 1/(x − λⱼ)

[Figure: graph of p′/p with poles at the λⱼ; the roots of p′ lie between them, and the points where p′/p = 1 are the roots of p − p′!]

SLIDE 141

The Barrier Function

Define: Φ_p(x) = p′(x)/p(x). To bound the roots of p − p′, find a point b above the roots of p with Φ_p(b) < 1. The level set {Φ_p < 1} contains no zeros of p − p′.

slide-142
SLIDE 142

The Barrier Method [BSS’09]

  • Theorem. If b is above the roots of p and

Ξ¦π‘ž 𝑐 ≀ 1 βˆ’ 1/πœ€ then Ξ¦π‘žβˆ’π‘žβ€² 𝑐 + πœ€ ≀ 1 βˆ’ 1/πœ€

slide-143
SLIDE 143

The Barrier Method [BSS’09]

  • Theorem. If b is above the roots of p and

Ξ¦π‘ž 𝑐 ≀ 1 βˆ’ 1/πœ€ then Ξ¦π‘žβˆ’π‘žβ€² 𝑐 + πœ€ ≀ 1 βˆ’ 1/πœ€ Relates level sets of Ξ¦π‘ž to level sets of Ξ¦ 1βˆ’πœ– π‘ž

slide-144
SLIDE 144

The Barrier Method [BSS’09]

  • Theorem. If b is above the roots of p and

Ξ¦π‘ž 𝑐 ≀ 1 βˆ’ 1/πœ€ then Ξ¦π‘žβˆ’π‘žβ€² 𝑐 + πœ€ ≀ 1 βˆ’ 1/πœ€ Relates level sets of Ξ¦π‘ž to level sets of Ξ¦ 1βˆ’πœ– π‘ž

Robust version of {Ξ¦π‘ž < 1} argument – can be iterated.
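The theorem is easy to spot-check numerically. In the sketch below, the roots, the value of Ξ΄, and the starting point for b are arbitrary choices; b is pushed right until the hypothesis Ξ¦_p(b) ≀ 1 βˆ’ 1/Ξ΄ holds, and the conclusion is then tested at b + Ξ΄.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def barrier(poly, x):
    """Phi_poly(x) = poly'(x) / poly(x)."""
    return P.polyval(x, P.polyder(poly)) / P.polyval(x, poly)

# Toy instance (roots and delta chosen arbitrarily for illustration)
lam = np.array([0.0, 0.5, 1.0, 1.5])
p = P.polyfromroots(lam)
delta = 4.0

# Find a point b above the roots of p with Phi_p(b) <= 1 - 1/delta
b = lam[-1] + 1.0
while barrier(p, b) > 1 - 1/delta:
    b += 0.1

q = P.polysub(p, P.polyder(p))     # q = p - p' = (1 - d/dx) p
# Conclusion of the theorem: the shifted point b + delta works for p - p'
assert barrier(q, b + delta) <= 1 - 1/delta
```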

slide-145
SLIDE 145

Evolution of level sets of Ξ¦_p

slide-150
SLIDE 150

Evolution of level sets of Ξ¦_p

Gives bounds on Ξ»max( (1 βˆ’ βˆ‚)^k p(x) )

slide-151
SLIDE 151

The Bivariate Case

Given π‘ž 𝑦, 𝑧 , I want to bound the roots of 1 βˆ’ πœ–π‘¦ 1 βˆ’ πœ–π‘§ π‘ž(𝑦, 𝑧)

slide-152
SLIDE 152

The Bivariate Case

Example: roots of π‘ž 𝑦, 𝑧 real stable.

slide-154
SLIDE 154

The Bivariate Case

Example: roots of (1 βˆ’ βˆ‚_y) p(x, y)

How to track these?

slide-155
SLIDE 155

The Bivariate Case

Given π‘ž 𝑦, 𝑧 , I want to bound the roots of 1 βˆ’ πœ–π‘¦ 1 βˆ’ πœ–π‘§ π‘ž(𝑦, 𝑧) Define bivariate barrier function Ξ¦π‘ž 𝑦, 𝑧 = πœ–π‘¦π‘ž 𝑦, 𝑧 π‘ž 𝑦, 𝑧 , πœ–π‘§π‘ž 𝑦, 𝑧 π‘ž 𝑦, 𝑧

slide-156
SLIDE 156

Evolution of Bivariate Ξ¦ Level Sets

π‘ž 𝑦, 𝑧 real stable.

slide-158
SLIDE 158

The Bivariate Case

1 βˆ’ πœ–π‘§ π‘ž 𝑦, 𝑧

slide-160
SLIDE 160

Several Perturbations

slide-167
SLIDE 167

Key Ingredient

Helton-Vinnikov’92: All bivariate real stable polynomials are determinants: p(x, y) = det(xA + yB + C) with A ≽ 0, B ≽ 0. This implies that the bivariate barrier has the same properties as the univariate one, and the old proof goes through.
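A numerical sketch of the converse direction, which is the elementary half: a determinant det(xA + yB + C) with A ≻ 0, B ≽ 0 is real-rooted in x for every fixed real y. The matrix sizes and the random seed are arbitrary; the similarity argument is in the comments.

```python
import numpy as np

# Random instance of a determinantal polynomial (sizes/seed arbitrary)
rng = np.random.default_rng(0)
n = 4
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = G1 @ G1.T, G2 @ G2.T                    # A, B positive semidefinite
C = rng.standard_normal((n, n)); C = C + C.T   # C symmetric

# For each fixed real y, det(xA + yB + C) = 0 means x is an eigenvalue of
# -A^{-1}(yB + C), which is similar to a symmetric matrix when A > 0,
# so every such x is real: real stability in x.
for y in np.linspace(-3.0, 3.0, 13):
    x = np.linalg.eigvals(np.linalg.solve(A, -(y * B + C)))
    assert np.max(np.abs(x.imag)) < 1e-6
```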

slide-168
SLIDE 168

Basic Principle

Can track the evolution of multivariate zeros under (1 βˆ’ βˆ‚_z) operators by studying related rational functions. A quantitative version of the stability-preserving property.

slide-169
SLIDE 169

End Result

If π‘ˆπ‘  𝐡𝑗 ≀ πœ— and 𝑗 𝐡𝑗 = 𝐽 then

𝑗=1 𝑛

1 βˆ’ πœ–π‘¨π‘— det 𝑦𝐽 +

𝑗

𝑨𝑗𝐡𝑗

𝑨1=…𝑨𝑛=0

Has roots bounded by 1 + πœ— 2
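A tiny symbolic instance of this bound, computed with sympy. The four vectors are an arbitrary toy choice with Ξ£_j A_j = I and Tr A_j = 1/2 = Ξ΅; the mixed characteristic polynomial is built exactly as in the formula above and its roots are compared with (1 + √Ρ)Β².

```python
import sympy as sp

x = sp.symbols('x')
z = sp.symbols('z1:5')
# Four rank-one matrices A_j = v_j v_j^T summing to I, Tr A_j = 1/2 (= eps);
# the vectors are an arbitrary toy choice
vs = [sp.Matrix(v) / sp.sqrt(2) for v in ([1, 0], [0, 1], [1, 0], [0, 1])]
As = [v * v.T for v in vs]
eps = sp.Rational(1, 2)

# mu(x) = prod_j (1 - d/dz_j) det(xI + sum_j z_j A_j), evaluated at z = 0
f = (x * sp.eye(2) + sum((zj * Aj for zj, Aj in zip(z, As)), sp.zeros(2))).det()
for zj in z:
    f = f - sp.diff(f, zj)
mu = sp.expand(f.subs({zj: 0 for zj in z}))

bound = float((1 + sp.sqrt(eps)) ** 2)      # (1 + sqrt(eps))^2 ~ 2.914
roots = sp.Poly(mu, x).nroots()
assert all(abs(sp.im(r)) < 1e-9 and sp.re(r) <= bound + 1e-9 for r in roots)
```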

slide-170
SLIDE 170

3-Step Plan

  • 1. Show that there exist 𝑀1, … , 𝑀𝑛 with

πœ‡π‘›π‘π‘¦ πœ“

𝑗

𝑀𝑗𝑀𝑗

π‘ˆ

≀ πœ‡π‘›π‘π‘¦π”½πœ“

𝑗

𝑀𝑗𝑀𝑗

π‘ˆ

  • 2. Calculate
  • 3. Bound the largest root πœ‡π‘›π‘π‘¦πœˆ 𝑦 ≀ 1 +

πœ— π”½πœ“ =: 𝜈 𝑦 =

𝑗=1 𝑛

1 βˆ’ πœ–π‘¨π‘— det 𝑦𝐽 +

𝑗

𝑨𝑗𝐡𝑗

𝑨1=β‹―0

Assuming 𝔽 𝑗 𝑀𝑗𝑀𝑗

π‘ˆ = 𝐽 and ||𝑀𝑗||2 ≀ πœ—

οƒ– οƒ– οƒ–

slide-171
SLIDE 171

Main Theorem

If v_1, …, v_n ∈ ℝ^n are independent random vectors, 𝔼 Ξ£_j v_j v_j^T = I, and ||v_j||Β² ≀ Ξ΅, then

β„™[ || Ξ£_j v_j v_j^T || ≀ 1 + O(√Ρ) ] > 0
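The existence step behind this theorem (some outcome does at least as well as the expected characteristic polynomial) can be illustrated by brute force on a toy model. The two-valued random vectors below are arbitrary and not normalized to sum to the identity; the interlacing-families guarantee still applies.

```python
import itertools
import numpy as np

# Each v_j is an independent random vector taking one of two values with
# probability 1/2 (a toy model; the values are arbitrary)
rng = np.random.default_rng(1)
n, d = 6, 3
choices = [(rng.standard_normal(d), rng.standard_normal(d)) for _ in range(n)]

coeffs = np.zeros(d + 1)   # E chi[sum_j v_j v_j^T], highest degree first
best = np.inf
for pick in itertools.product([0, 1], repeat=n):
    A = sum(np.outer(c[i], c[i]) for c, i in zip(choices, pick))
    coeffs += np.poly(A) / 2 ** n          # char poly of this outcome
    best = min(best, np.max(np.linalg.eigvalsh(A)))

exp_maxroot = np.max(np.roots(coeffs).real)
# Some outcome has max root at most the max root of the expected char poly
assert best <= exp_maxroot + 1e-9
```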

slide-172
SLIDE 172

Spectral Discrepancy Theorem

Suppose v_1, …, v_n ∈ ℝ^n are vectors with ||v_j||Β² ≀ Ξ΅ and Ξ£_j v_j v_j^T = I_n. Then there is a partition T_1 βˆͺ T_2 such that

(1/2 βˆ’ 5√Ρ) I β‰Ό Ξ£_{j∈T_k} v_j v_j^T β‰Ό (1/2 + 5√Ρ) I
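A brute-force illustration on a small isotropic set. The equally spaced planar vectors are a toy choice (at this size the 5√Ρ constant is very generous); every partition is enumerated and the best one is compared against the theorem's bound. Checking one half against I/2 suffices, since the other half's eigenvalues are 1 minus those of the first.

```python
import itertools
import numpy as np

# Toy isotropic set: v_j = sqrt(2/n)(cos t_j, sin t_j), equally spaced angles,
# so that sum_j v_j v_j^T = I in R^2 and ||v_j||^2 = 2/n =: eps
n, d = 8, 2
eps = 2.0 / n
ts = np.pi * np.arange(n) / n
vs = np.sqrt(2.0 / n) * np.column_stack([np.cos(ts), np.sin(ts)])
assert np.allclose(vs.T @ vs, np.eye(d))

# Brute-force the partition T_1 u T_2 that best balances the two halves
best_dev = np.inf
for mask in itertools.product([True, False], repeat=n):
    A = vs[np.array(mask)].T @ vs[np.array(mask)]
    dev = np.max(np.abs(np.linalg.eigvalsh(A - np.eye(d) / 2)))
    best_dev = min(best_dev, dev)

# Theorem: some partition keeps both halves within 5*sqrt(eps) of I/2
# (here the best partition is essentially exact, so the bound holds easily)
assert best_dev <= 5 * np.sqrt(eps)
```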

slide-173
SLIDE 173

Open Questions

Quantitative analysis of other stability-preserving operators. More applications of the discrepancy theorem. Algorithms.

slide-174
SLIDE 174

Two Tools

Matrix-Determinant Lemma: det(M + vv^T) = det(M) det(I + M^{-1} vv^T) = det(M)(1 + v^T M^{-1} v)

slide-175
SLIDE 175

Two Tools

Matrix-Determinant Lemma: det(M + vv^T) = det(M) det(I + M^{-1} vv^T) = det(M)(1 + v^T M^{-1} v). Jacobi’s Formula: βˆ‚_t det(M + tA) |_{t=0} = det(M) βˆ‚_t det(I + t M^{-1}A) |_{t=0} = det(M) Tr(M^{-1}A)
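Both tools are easy to verify numerically; the test matrices below are random and arbitrary, and Jacobi's formula is checked against a central difference.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
v = rng.standard_normal(n)
A = rng.standard_normal((n, n))

# Matrix-determinant lemma: det(M + vv^T) = det(M) (1 + v^T M^{-1} v)
lhs = np.linalg.det(M + np.outer(v, v))
rhs = np.linalg.det(M) * (1 + v @ np.linalg.solve(M, v))
assert np.isclose(lhs, rhs)

# Jacobi's formula at t = 0: d/dt det(M + tA) = det(M) Tr(M^{-1} A)
t = 1e-6
numeric = (np.linalg.det(M + t * A) - np.linalg.det(M - t * A)) / (2 * t)
exact = np.linalg.det(M) * np.trace(np.linalg.solve(M, A))
assert np.isclose(numeric, exact, rtol=1e-4)
```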

slide-176
SLIDE 176

(1 βˆ’ πœ–π‘¨) operators = rank-1 updates

𝔼 det(M βˆ’ vv^T) = 𝔼 det(M)(1 βˆ’ v^T M^{-1} v)
= det(M)(1 βˆ’ 𝔼 Tr(M^{-1} vv^T))
= det(M)(1 βˆ’ Tr(M^{-1} 𝔼[vv^T]))
= det(M) βˆ’ det(M) Tr(M^{-1} 𝔼[vv^T])
= (1 βˆ’ βˆ‚_t) det(M + t 𝔼[vv^T]) |_{t=0}
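The first and last expressions of this chain can be compared directly. The sketch uses a toy distribution (v uniform over five fixed random vectors) so the expectation is an exact finite average.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n)) + n * np.eye(n)
# v uniform over a small finite set, so E[.] is an exact average
outcomes = [rng.standard_normal(n) for _ in range(5)]

# Left side: E det(M - v v^T)
lhs = np.mean([np.linalg.det(M - np.outer(v, v)) for v in outcomes])

Evv = np.mean([np.outer(v, v) for v in outcomes], axis=0)   # E[v v^T]
# Right side: (1 - d/dt) det(M + t E[vv^T]) at t = 0
#           = det(M) - det(M) Tr(M^{-1} E[vv^T])
rhs = np.linalg.det(M) * (1 - np.trace(np.linalg.solve(M, Evv)))
assert np.isclose(lhs, rhs)
```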

slide-181
SLIDE 181

Proof of Central Identity

𝔼 det(M βˆ’ vv^T) = (1 βˆ’ βˆ‚_z) det(M + z 𝔼[vv^T]) |_{z=0}

𝔼 det(xI βˆ’ v_1v_1^T βˆ’ v_2v_2^T)

= (1 βˆ’ βˆ‚_{z_1}) 𝔼 det(xI βˆ’ v_2v_2^T + z_1 A_1) |_{z_1=0}

= (1 βˆ’ βˆ‚_{z_1})(1 βˆ’ βˆ‚_{z_2}) det(xI + z_2 A_2 + z_1 A_1) |_{z_1=z_2=0}

where A_j = 𝔼[v_j v_j^T]
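The two-vector identity can be checked symbolically with sympy. The outcome vectors below are an arbitrary toy choice; each v_j is uniform over its two outcomes, so both sides are exact rational polynomials in x.

```python
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2')
# Two independent vectors, each uniform over two outcomes (arbitrary values)
V1 = [sp.Matrix([1, 0]), sp.Matrix([1, 1])]
V2 = [sp.Matrix([0, 1]), sp.Matrix([1, -1])]
A1 = sum((v * v.T for v in V1), sp.zeros(2)) / 2    # A_j = E[v_j v_j^T]
A2 = sum((v * v.T for v in V2), sp.zeros(2)) / 2

# Left side: E det(xI - v1 v1^T - v2 v2^T), averaged over the four outcomes
lhs = sp.expand(sp.Rational(1, 4) * sum(
    (x * sp.eye(2) - v1 * v1.T - v2 * v2.T).det() for v1 in V1 for v2 in V2))

# Right side: (1 - d/dz1)(1 - d/dz2) det(xI + z1 A1 + z2 A2) at z1 = z2 = 0
f = (x * sp.eye(2) + z1 * A1 + z2 * A2).det()
f = f - sp.diff(f, z1)
f = f - sp.diff(f, z2)
rhs = sp.expand(f.subs({z1: 0, z2: 0}))

assert lhs == rhs   # the central identity, exactly
```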
