Interlacing Families II: Mixed Characteristic Polynomials and the Kadison-Singer Problem
Nikhil Srivastava (Microsoft Research), Adam Marcus (Yale), Daniel Spielman (Yale)
Discrepancy Theory

"How well can you approximate a discrete object by …?"

e.g. Given a set family T1, …, Tm ⊆ [n], find a red-blue coloring [n] = R ∪ B such that every set is half red and half blue. Define the discrepancy

disc := max_j | |Tj ∩ R| − |Tj| / 2 |.

(In the pictured example, disc = 1.)

How well can you do in general? A random coloring gives disc ≤ O(√(n log m)).

Spencer: There exists a coloring with disc ≤ O(√(n log(m/n))); in particular disc ≤ O(√n) when m = O(n).
Quadratic Forms and Energy

Given vectors w1, …, wm ∈ ℝ^n, their energy in a test direction v ∈ ℝ^n, ||v|| = 1, is the quadratic form

Q(v) := Σ_i ⟨v, wi⟩².

Example. w1, w2, w3, w4 ∈ ℝ² (as pictured):

v = (1, 0):        ⟨v, w1⟩² + ⟨v, w2⟩² + ⟨v, w3⟩² + ⟨v, w4⟩² = 1 + 0 + ¼ + ¼ = 1.5
v = (0, 1):        ⟨v, w1⟩² + ⟨v, w2⟩² + ⟨v, w3⟩² + ⟨v, w4⟩² = 0 + 1 + ¾ + ¾ = 2.5
v = (1/√2, 1/√2):  ⟨v, w1⟩² + ⟨v, w2⟩² + ⟨v, w3⟩² + ⟨v, w4⟩² = ½ + ½ + ¾ + ¼ = 2
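The energy computation above is just a sum of squared inner products, and can be sketched in a few lines of numpy. The vectors below are my own choice (the slide's w1, …, w4 come from a picture and are not fully pinned down); they reproduce the three totals 1.5, 2.5, 2, though the individual terms in the diagonal direction depend on the exact picture.

```python
import numpy as np

# Four vectors in R^2, chosen for illustration so the totals match the slide.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, np.sqrt(3) / 2],
              [0.5, -np.sqrt(3) / 2]])

def energy(v, W):
    """Q(v) = sum_i <v, w_i>^2 for a unit test direction v."""
    return float(np.sum((W @ v) ** 2))

e1 = energy(np.array([1.0, 0.0]), W)                 # 1.5
e2 = energy(np.array([0.0, 1.0]), W)                 # 2.5
e3 = energy(np.array([1.0, 1.0]) / np.sqrt(2), W)    # 2.0
```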
Splitting A Quadratic Form in Half

Main Theorem. Suppose w1, …, wm ∈ ℝ^n are vectors with ||wi||² ≤ α and energy one in each direction ("isotropic"):

∀ ||v|| = 1:   Σ_i ⟨v, wi⟩² = 1.

Then there is a partition S1 ∪ S2 such that each part has energy close to half in each direction:

∀ ||v|| = 1:   Σ_{i ∈ Sj} ⟨v, wi⟩² = ½ ± 5√α.

"Each part approximates the whole."

Many Possible Partitions

There are 2^m possible partitions, and most of them need not be balanced in every direction.

Theorem: A good partition always exists.
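For a small isotropic family the existence claim can be checked by brute force over all 2^m partitions. The construction below (rows of a QR factor, which automatically satisfy Σ_i wi wiᵀ = I) is my own illustration, not from the talk.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 2
# Rows of Q (a matrix with orthonormal columns) give vectors w_i
# with sum_i w_i w_i^T = W^T W = I_n, i.e. an isotropic family.
W, _ = np.linalg.qr(rng.standard_normal((m, n)))
alpha = float(max(np.sum(w * w) for w in W))

best = np.inf
for mask in itertools.product([0, 1], repeat=m):
    S1 = W[np.array(mask, dtype=bool)]
    A = S1.T @ S1 if len(S1) else np.zeros((n, n))   # sum over S1 of w_i w_i^T
    # deviation of this half from I/2, in spectral norm (worst direction v)
    dev = np.linalg.norm(A - 0.5 * np.eye(n), 2)
    best = min(best, dev)
```

Some partition splits the form to within 5√α in every direction, as the theorem promises.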
The norm condition

The hypothesis ||wi||² ≤ α cannot be dropped: if a single vector carries all the energy in some direction (e.g. wi = ei with ||wi|| = 1), that direction's energy lands entirely in one part, and no partition can split it in half. Small norms are exactly what makes a balanced split possible.
In One Dimension

Main Theorem. Suppose w1, …, wm ∈ ℝ¹ are numbers with |wi|² ≤ α and energy one:

Σ_i wi² = 1.

Then there is a partition S1 ∪ S2 such that each part has energy close to half:

Σ_{i ∈ Sj} wi² = ½ ± α/2.

This follows from greedy bin packing: add each term to the currently lighter bin; at the end the two bins differ by at most the largest single term α.
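The greedy bin-packing argument is short enough to run. A minimal sketch, with random energies of my own choosing:

```python
import numpy as np

def greedy_split(energies):
    """Bin packing: add each term to the currently lighter bin.

    The heavier bin last grew while it was (weakly) lighter, so the final
    gap between the bins is at most the largest single term.
    """
    bins = [0.0, 0.0]
    for e in energies:
        j = 0 if bins[0] <= bins[1] else 1
        bins[j] += e
    return bins

rng = np.random.default_rng(0)
w2 = rng.random(50)
w2 /= w2.sum()            # total energy one
alpha = float(w2.max())   # largest single term
b1, b2 = greedy_split(w2)
```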
In Higher Dimensions

Main Theorem (as above). For vectors in ℝ^n, each part has energy ½ ± 5√α in every direction. Unlike the greedy α/2 bound in one dimension, the √α dependence is optimal in high dimensions.
Matrix Notation

Given vectors w1, …, wm, write the quadratic form as

Q(v) = Σ_i ⟨wi, v⟩² = vᵀ ( Σ_i wi wiᵀ ) v.

Isotropy:      Σ_i wi wiᵀ = I.

Comparison:    B ≼ C  ⟺  vᵀ B v ≤ vᵀ C v  ∀v.

Main Theorem. Suppose w1, …, wm ∈ ℝ^n are vectors with ||wi||² ≤ α and Σ_i wi wiᵀ = I. Then there is a partition S1 ∪ S2 such that

(½ − 5√α) I  ≼  Σ_{i ∈ Sj} wi wiᵀ  ≼  (½ + 5√α) I.
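Both identities above are easy to sanity-check numerically: the quadratic form equals vᵀ(Σ_i wi wiᵀ)v, and the Loewner comparison B ≼ C is equivalent to C − B being positive semidefinite. A small sketch with random vectors of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
X = rng.standard_normal((m, n))   # rows are the vectors w_i
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

# Quadratic form two ways: sum of squared inner products,
# versus v^T (sum_i w_i w_i^T) v.
A = X.T @ X
q1 = float(np.sum((X @ v) ** 2))
q2 = float(v @ A @ v)

def loewner_leq(B, C, tol=1e-9):
    """B <= C in the Loewner order iff C - B is positive semidefinite."""
    return bool(np.linalg.eigvalsh(C - B).min() >= -tol)
```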
Unnormalized Version

Suppose I get some vectors x1, …, xm which are not isotropic:

Σ_i xi xiᵀ = M ≻ 0.

Consider wi := M^(−1/2) xi and apply the theorem to the wi. The normalized vectors are isotropic, with ||wi||² = ||M^(−1/2) xi||² = xiᵀ M⁻¹ xi, so the conclusion reads

(½ − 5√α) I  ≼  M^(−1/2) ( Σ_{i ∈ Sj} xi xiᵀ ) M^(−1/2)  ≼  (½ + 5√α) I.

Fact: B ≼ C ⟺ DᵀBD ≼ DᵀCD for invertible D.

Multiplying through by M^(1/2) on both sides:

(½ − 5√α) M  ≼  Σ_{i ∈ Sj} xi xiᵀ  ≼  (½ + 5√α) M.
Unnormalized Theorem

Given arbitrary vectors x1, …, xm ∈ ℂ^n with Σ_i xi xiᵀ = M ≻ 0, there is a partition [m] = S1 ∪ S2 with

(½ − ε) Σ_i xi xiᵀ  ≼  Σ_{i ∈ Sj} xi xiᵀ  ≼  (½ + ε) Σ_i xi xiᵀ,

where ε := 5√α and α := max_i || M^(−1/2) xi ||².

Any quadratic form in which no vector has too much influence can be split into two representative pieces.
The Laplacian Quadratic Form

Given an undirected graph G = (V, E), consider its Laplacian matrix:

L_G = Σ_{ij ∈ E} (ei − ej)(ei − ej)ᵀ = D − A,

where D is the diagonal degree matrix and A is the adjacency matrix. Its quadratic form is

xᵀ L_G x = Σ_{ij ∈ E} ( x(i) − x(j) )²   for x ∈ ℝ^V.

An example (edge differences 1, 3, 2, 1, 2, 3 on the pictured graph):

xᵀ L x = 1² + 3² + 2² + 1² + 2² + 3² = 28.

Another example (x is 0/1 on the pictured graph):

xᵀ L_G x = 0² + 0² + 1² + 1² + 1² + 0² = 3.

Cuts and the Quadratic Form

For the characteristic vector x = 1_S of a vertex set S, xᵀ L_G x is the weight of the edges crossing the cut (S, V∖S). The Laplacian quadratic form encodes the entire cut structure of the graph.
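Both facts, that the rank-one sum equals D − A and that xᵀLx sums squared edge differences (and counts cut edges for 0/1 vectors), can be checked on a toy graph. The graph and vertex values below are hypothetical; the slides' pictures are not fully recoverable.

```python
import numpy as np

def laplacian(n, edges):
    """L = sum over edges of (e_i - e_j)(e_i - e_j)^T, i.e. D - A."""
    L = np.zeros((n, n))
    for i, j in edges:
        d = np.zeros(n)
        d[i], d[j] = 1.0, -1.0
        L += np.outer(d, d)
    return L

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a hypothetical graph
L = laplacian(4, edges)
x = np.array([2.0, 1.0, 1.0, 3.0])      # arbitrary vertex values
q = float(x @ L @ x)                    # sum over edges of (x_i - x_j)^2
cut = np.array([1.0, 1.0, 0.0, 0.0])    # characteristic vector of S = {0, 1}
crossing = float(cut @ L @ cut)         # number of edges leaving S
```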
Application to Graphs

Apply the theorem to the edge vectors (ei − ej), ij ∈ E, so that

L_G = Σ_{ij ∈ E} (ei − ej)(ei − ej)ᵀ.

Theorem: the edges of G can be partitioned into two graphs H1 and H2 such that each half carries about half the quadratic form:

yᵀ L_{H1} y ≈ ½ yᵀ L_G y   (and likewise for H2).

Recursive Application Gives:

[Spielman-S'09]: Every graph G has a weighted O(1)-cut approximation H with O(n) edges.
  G: O(n²) edges, unweighted  →  H: O(n) edges, weighted.
Approximating One Graph by Another

Cut Approximation [Benczur-Karger'96]: For every cut, the weight of the edges crossing it in G ≈ the weight crossing it in H.

G and H then have the same cut structure, so they are interchangeable for min cut, max cut, sparsest cut, …
Recursive Application Gives:

Every …transitive graph G has an unweighted O(1)-cut approximation H with O(n) edges, and in fact G can be partitioned into O(1)-cut approximations with O(n) edges each:

L_G  ≈  H1, …, Hk   (expander graphs, each with the same cut structure as G).

Generalizes [Frieze-Molloy].
Signal x ∈ ℂ^n. Discrete Fourier Transform:

x̂(a) = ⟨ x, exp(−i 2π a b / n)_{b ≤ n} ⟩.

Uncertainty Principle: x and x̂ cannot be simultaneously localized:

supp(x) × supp(x̂) ≥ n.

In particular, if x is supported on k = √n coordinates, then supp(x̂) ≥ √n.

Stronger Uncertainty Principle (roughly): For every subset S of √n coordinates, there is a partition [n] = S1 ∪ ⋯ ∪ S_√n such that

||x̂|_{Sj}||² ≈ (1/√n) ||x̂||²   for all x supported on S and every Sj.

Proof. Let χ_a = exp(−i 2π a b / n)_{b ≤ n} be the Fourier basis. Fix a subset S ⊆ [n] of √n coordinates. The restricted norm

||x|_S||² = (1/n) Σ_a |⟨ x|_S, χ_a ⟩|²

is a quadratic form in √n dimensions; apply the theorem.

Applications in analytic number theory, harmonic analysis.
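The key fact behind the proof sketch, that the restricted characters χ_a|_S form an isotropic family (up to the factor n), can be verified directly. A small sketch with n = 16 and an arbitrary set S of √n = 4 coordinates of my own choosing:

```python
import numpy as np

n, k = 16, 4                   # |S| = sqrt(n)
a = np.arange(n)
# Fourier basis: chi_a(b) = exp(-i 2 pi a b / n), one row per frequency a
F = np.exp(-2j * np.pi * np.outer(a, a) / n)
S = [0, 3, 7, 12]              # an arbitrary set of sqrt(n) coordinates
R = F[:, S]                    # row a is chi_a restricted to S

# Tight-frame / isotropy identity: sum_a (chi_a|_S)(chi_a|_S)^* = n I_k,
# so the vectors chi_a|_S / sqrt(n) satisfy the theorem's hypothesis
# with alpha = k / n.
G = R.conj().T @ R
```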
Dirac 1930's: Bra-Ket formalism for quantum states: |ψ⟩, ⟨ψ|, …

What are Bras and Kets? NOT vectors.

Von Neumann 1936: Theory of C*-algebras.

Kadison-Singer 1959: Does this lead to a satisfactory notion of measurement? Conjecture about ∞-dimensional matrices.

Anderson 1979: Reduced to a question about finite matrices ("Paving Conjecture").

Akemann-Anderson 1991: Reduced to a question about finite projection matrices.

Weaver 2002: Discrepancy-theoretic formulation.

This work: Proof of Weaver's conjecture.

In General

Anything that can be encoded as a quadratic form can be split into pieces while preserving certain properties. Many different things can be gainfully encoded this way.
Main Theorem

Suppose w1, …, wm ∈ ℝ^n are vectors with ||wi||² ≤ α and Σ_i wi wiᵀ = I. Then there is a partition S1 ∪ S2 such that

(½ − 5√α) I  ≼  Σ_{i ∈ Sj} wi wiᵀ  ≼  (½ + 5√α) I.

Equivalent Theorem

Suppose w1, …, wm ∈ ℝ^n are vectors with ||wi||² ≤ α and Σ_i wi wiᵀ = I. Then there is a partition S1 ∪ S2 such that

Σ_{i ∈ Sj} wi wiᵀ  ≼  (½ + 5√α) I   for j = 1, 2.

The one-sided bound suffices because the two halves sum to the identity:

Σ_{i ∈ S1} wi wiᵀ = I − Σ_{i ∈ S2} wi wiᵀ,

so an upper bound on one part is a lower bound on the other.
Idea 1: Random Partition

Choose S1 ∪ S2 = [m] randomly. Want:

|| Σ_{i ∈ S1} wi wiᵀ || ≤ ½ + 5√α   and   || Σ_{i ∈ S2} wi wiᵀ || ≤ ½ + 5√α.

Trick: embed the two sums as blocks of a 2n × 2n block-diagonal matrix, so that

|| diag( Σ_{i ∈ S1} wi wiᵀ ,  Σ_{i ∈ S2} wi wiᵀ ) || = max_j || Σ_{i ∈ Sj} wi wiᵀ ||.

Define independent random vectors ŵ1, …, ŵm ∈ ℝ^{2n}:

ŵi = (wi, 0)  with prob. 0.5,
ŵi = (0, wi)  with prob. 0.5.

Then

E || Σ_i ŵi ŵiᵀ || = E max_j || Σ_{i ∈ Sj} wi wiᵀ ||.
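The block-diagonal trick rests on one linear-algebra fact: the spectral norm of a block-diagonal matrix is the maximum of the block norms, so a single norm bound controls both halves of the partition at once. A quick check on random PSD blocks of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T   # PSD block for S1
B = rng.standard_normal((n, n)); B = B @ B.T   # PSD block for S2
D = np.block([[A, np.zeros((n, n))],
              [np.zeros((n, n)), B]])

# spectral norm of the embedding vs the max of the block norms
lhs = np.linalg.norm(D, 2)
rhs = max(np.linalg.norm(A, 2), np.linalg.norm(B, 2))
```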
The Matrix Chernoff Bound

Suppose ŵ1, …, ŵm ∈ ℝ^{2n} are independent, E Σ_i ŵi ŵiᵀ = I/2 and ||ŵi||² ≤ α.

Tropp 2011:

E || Σ_i ŵi ŵiᵀ || ≤ ½ + O( √(α log n) ).

This is the matrix analogue of the scalar Chernoff bound for sums of independent bounded random numbers.
Main Theorem

If ŵ1, …, ŵm ∈ ℝ^{2n} are independent, E Σ_i ŵi ŵiᵀ = I/2 and ||ŵi||² ≤ α, then

|| Σ_i ŵi ŵiᵀ || ≤ ½ + O(√α)   with positive probability (no log n loss).

Rescaling, the general statement is: if w1, …, wm ∈ ℝ^n are independent, E Σ_i wi wiᵀ = I and ||wi||² ≤ α, then

Pr[ || Σ_i wi wiᵀ || ≤ (1 + √α)² ] > 0,

i.e. ∃ an outcome w1, …, wm such that || Σ_i wi wiᵀ || ≤ (1 + √α)².
Idea 2: The Expected Polynomial

Just like yesterday, since Σ_i wi wiᵀ is positive semidefinite,

|| Σ_i wi wiᵀ || = maxroot det( xI − Σ_i wi wiᵀ ).

Consider the expected characteristic polynomial

μ(x) := E det( xI − Σ_i wi wiᵀ ).
3-Step Plan

Write χ(A)(x) := det(xI − A) for the characteristic polynomial. Goal:

maxroot χ( Σ_i wi wiᵀ ) ≤ maxroot E χ( Σ_i wi wiᵀ )   for some outcome of the wi.

Step 1. { χ( Σ_i wi wiᵀ )(x) : wi ~ ŵi } is an interlacing family, so some outcome has largest root at most the largest root of the expected polynomial.

Step 2. E χ( Σ_i wi wiᵀ )(x) is real-rooted for all product distributions on signings. This rests on the

Central Identity. Suppose w1, …, wm are independent random vectors with Ai := E wi wiᵀ. Then

E det( xI − Σ_i wi wiᵀ ) = Π_{i=1}^m (1 − ∂_{zi}) det( xI + Σ_i zi Ai ) |_{z1 = ⋯ = zm = 0}.

Step 3. Bound the largest root of

μ(x) := E χ( Σ_i wi wiᵀ )(x) = Π_{i=1}^m (1 − ∂_{zi}) det( xI + Σ_i zi Ai ) |_{z1 = ⋯ = zm = 0},

assuming E Σ_i wi wiᵀ = I and ||wi||² ≤ α.
Bounding the Roots

Need to bound the roots of

Π_{i=1}^m (1 − ∂_{zi}) det( xI + Σ_i zi Ai ) |_{z1 = ⋯ = zm = 0}

as a function of the Ai.

Basic Question: What does (1 − ∂) do to roots? We need a quantitative version of the fact that it preserves real stability.
The Univariate Case

Basic Question: What does (1 − ∂) do to roots? Answer: interlacing.

Consider p(x) = (x − λ1) ⋯ (x − λn) with distinct roots, so that p′(x) = Σ_i Π_{j ≠ i} (x − λj). Then

p′(x) / p(x) = Σ_i 1 / (x − λi)

is a rational function with n poles.

A Rational Function

Between consecutive poles, p′/p decreases from +∞ to −∞, so it crosses every horizontal level exactly once. Its zero crossings are the roots of p′; the points where p′(x)/p(x) = 1 are the roots of p − p′. There is one such solution in each gap between consecutive poles, and one more above the largest pole λn: the largest root of p − p′ lies to the right of all the roots of p.
The Barrier Function

Define: Φ_p(x) = p′(x) / p(x).

To bound the roots of p − p′, find a point b above the roots of p with Φ_p(b) < 1: the level set { Φ_p < 1 } contains no zeros of p − p′.

The Barrier Method [BSS'09]

If Φ_p(b) ≤ 1 − 1/δ, then Φ_{p − p′}(b + δ) ≤ 1 − 1/δ.

This relates level sets of Φ_p to level sets of Φ_{(1 − ∂)p}; it is a robust version of the { Φ_p < 1 } argument, so it can be iterated. Iterating it gives bounds on the roots of Π_i (1 − ∂) applied to p(x).
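The barrier step can be seen numerically on a concrete cubic. The polynomial p, the barrier point b = 8, and the step δ = 2 below are my own choices for illustration.

```python
import numpy as np

p = np.poly([0.0, 1.0, 2.0])    # p(x) = x(x-1)(x-2)
q = np.polysub(p, np.polyder(p))  # q = p - p'

def barrier(poly, x):
    """Phi_p(x) = p'(x) / p(x), finite and decreasing above all real roots."""
    return np.polyval(np.polyder(poly), x) / np.polyval(poly, x)

delta, b = 2.0, 8.0
# hypothesis of the lemma: Phi_p(b) <= 1 - 1/delta
lhs = barrier(p, b)              # 1/8 + 1/7 + 1/6, about 0.43
# conclusion: Phi_{p - p'}(b + delta) <= 1 - 1/delta,
# so b + delta is still strictly above the roots of p - p'
rhs = barrier(q, b + delta)
```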
The Bivariate Case

Given p(x, z), I want to bound the roots of (1 − ∂x)(1 − ∂z) p(x, z).

Example: the zero set of a real stable p(x, z) is a family of non-crossing curves in the plane, and applying (1 − ∂z) moves these curves. How to track them?

Define the bivariate barrier function

Φ_p(x, z) = ( ∂x p(x, z) / p(x, z) ,  ∂z p(x, z) / p(x, z) )

and track the evolution of its level sets under the (1 − ∂) operators, as in the univariate case.
Key Ingredient

Helton-Vinnikov '92: All bivariate real stable polynomials are determinantal:

p(x, z) = det( xA + zB + C )   with A ≽ 0, B ≽ 0.

This implies that the bivariate barrier has the same properties as the univariate one, and the old proof goes through.

Basic Principle

One can track the evolution of multivariate zeros under (1 − ∂) operators by studying related rational functions: a quantitative version of the stability-preserving property.
End Result

If Tr(Ai) ≤ α for each i and Σ_i Ai = I, then

Π_{i=1}^m (1 − ∂_{zi}) det( xI + Σ_i zi Ai ) |_{z1 = ⋯ = zm = 0}

has roots bounded by (1 + √α)².
Main Theorem

If w1, …, wm ∈ ℝ^n are independent, E Σ_i wi wiᵀ = I and ||wi||² ≤ α, then

Pr[ || Σ_i wi wiᵀ || ≤ (1 + √α)² ] > 0.
Spectral Discrepancy Theorem

Suppose w1, …, wm ∈ ℝ^n are vectors with ||wi||² ≤ α and Σ_i wi wiᵀ = I. Then there is a partition S1 ∪ S2 such that

(½ − 5√α) I  ≼  Σ_{i ∈ Sj} wi wiᵀ  ≼  (½ + 5√α) I.
Open Questions

Quantitative analysis of other stability-preserving operators. More applications of the discrepancy theorem. Algorithms.
Two Tools

Matrix-Determinant Lemma:

det( M + wwᵀ ) = det(M) det( I + M⁻¹wwᵀ ) = det(M) ( 1 + wᵀM⁻¹w ).

Jacobi's Formula:

∂t det( M + tA ) |_{t=0} = det(M) ∂t det( I + t M⁻¹A ) |_{t=0} = det(M) Tr( M⁻¹A ).

(1 − ∂) operators = rank-1 updates:

E det( M − wwᵀ )
  = E det(M) ( 1 − wᵀM⁻¹w )
  = det(M) ( 1 − E Tr( M⁻¹wwᵀ ) )
  = det(M) ( 1 − Tr( M⁻¹ E[wwᵀ] ) )
  = det(M) − det(M) Tr( M⁻¹ E[wwᵀ] )
  = (1 − ∂t) det( M + t E[wwᵀ] ) |_{t=0}.
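Both tools are one-line numerical checks: the matrix-determinant lemma exactly, and Jacobi's formula via a finite difference. The matrices below are random choices of my own.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)  # well-conditioned
w = rng.standard_normal(n)
A = rng.standard_normal((n, n)); A = A @ A.T

# Matrix-determinant lemma: det(M + w w^T) = det(M) (1 + w^T M^{-1} w)
lhs1 = np.linalg.det(M + np.outer(w, w))
rhs1 = np.linalg.det(M) * (1 + w @ np.linalg.solve(M, w))

# Jacobi's formula: d/dt det(M + t A) |_{t=0} = det(M) tr(M^{-1} A),
# checked with a forward finite difference at small t
t = 1e-6
lhs2 = (np.linalg.det(M + t * A) - np.linalg.det(M)) / t
rhs2 = np.linalg.det(M) * np.trace(np.linalg.solve(M, A))
```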
Proof of Central Identity

By the rank-1 update computation,

E det( M − wwᵀ ) = (1 − ∂z) det( M + z E[wwᵀ] ) |_{z=0}.

Apply this once per vector. For two vectors, with Ai := E wi wiᵀ:

E det( xI − w1w1ᵀ − w2w2ᵀ )
  = (1 − ∂z1) E det( xI − w2w2ᵀ + z1A1 ) |_{z1=0}
  = (1 − ∂z1)(1 − ∂z2) det( xI + z2A2 + z1A1 ) |_{z1 = z2 = 0}.