Lecture 4: Goemans-Williamson Algorithm for MAX-CUT
Lecture Outline
- Part I: Analyzing semidefinite programs
- Part II: Analyzing Goemans-Williamson
- Part III: Tight examples for Goemans-Williamson
- Part IV: Impressiveness of Goemans-Williamson and open problems
Part I: Analyzing semidefinite programs
Goemans-Williamson Program
- Recall the Goemans-Williamson program: Maximize Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2 subject to M ⪰ 0 and ∀i, M_ii = 1
- Theorem: Goemans-Williamson gives a .878-approximation for MAX-CUT
- How do we analyze Goemans-Williamson and other semidefinite programs?
Vector Solutions
- Want: a matrix M such that M_ij = y_i y_j, where the y_i are the problem variables.
- Semidefinite program: assigns a vector w_i to each y_i, giving the matrix M with M_ij = w_i · w_j
- Note: this is a relaxation of the problem. To obtain an actual solution, we need a rounding algorithm that rounds this vector solution into an actual solution.
Vector Solution Justification
- Theorem: M ⪰ 0 if and only if there are vectors w_i such that M_ij = w_i · w_j
- Example: M = [[1, -1, 1], [-1, 2, -1], [1, -1, 2]], w_1 = (1, 0, 0), w_2 = (-1, 1, 0), w_3 = (1, 0, 1)
- One way to see this: take a "square root" of M
- Second way to see this: Cholesky decomposition
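The example above can be checked directly in numpy (a quick sketch, not part of the lecture): stacking the vectors w_i as the rows of a matrix W, the claim M_ij = w_i · w_j is exactly M = W W^T.

```python
import numpy as np

# Slide's example: M_ij = w_i · w_j, i.e. M = W W^T where the rows of W
# are w_1, w_2, w_3.
M = np.array([[1.0, -1.0, 1.0],
              [-1.0, 2.0, -1.0],
              [1.0, -1.0, 2.0]])
W = np.array([[1.0, 0.0, 0.0],    # w_1
              [-1.0, 1.0, 0.0],   # w_2
              [1.0, 0.0, 1.0]])   # w_3
# M equals W W^T, and all eigenvalues of M are nonnegative (M is PSD).
```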
Square Root of a PSD Matrix
- If there are vectors {w_i} such that M_ij = w_i · w_j, take P to be the matrix with rows w_1, ..., w_n. Then M = PP^T ⪰ 0.
- Conversely, if M ⪰ 0 then M = Σ_{i=1}^n λ_i v_i v_i^T where λ_i ≥ 0 for all i. Taking P to be the matrix with columns √λ_i · v_i, PP^T = M. Taking w_i to be the ith row of P, M_ij = w_i · w_j.
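The converse direction can be sketched numerically (assuming numpy; this is an illustration, not the lecture's construction): compute the eigendecomposition and scale each eigenvector column by the square root of its eigenvalue.

```python
import numpy as np

# "Square root" of a PSD matrix via its eigendecomposition:
# M = sum_i lam_i v_i v_i^T with lam_i >= 0; P has columns sqrt(lam_i) v_i.
M = np.array([[1.0, -1.0, 1.0],
              [-1.0, 2.0, -1.0],
              [1.0, -1.0, 2.0]])
lam, V = np.linalg.eigh(M)                 # eigenvalues (ascending) and eigenvectors
P = V * np.sqrt(np.clip(lam, 0.0, None))   # scales column i by sqrt(lam_i)
# Then P P^T = M, and the rows of P are vectors w_i with M_ij = w_i · w_j.
```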
Cholesky Decomposition
- Cholesky decomposition: M = DD^T where D is a lower triangular matrix.
- w_i = Σ_j D_ij e_j is the ith row of D
- We can find the entries of D one by one.
Cholesky Decomposition Example
- Example: M = [[1, -1, 1], [-1, 2, -1], [1, -1, 2]]
- w_1 = (1, 0, 0)
- Need D_21 = -1 so that w_2 · w_1 = -1. w_2 = (-1, D_22, 0)
- Taking D_22 = 1, w_2 · w_2 = 2. w_2 = (-1, 1, 0)
- Need D_31 = 1 and D_32 = 0 so that w_3 · w_1 = 1 and w_3 · w_2 = -1. w_3 = (1, 0, D_33)
- Taking D_33 = 1, w_3 · w_3 = 2. w_3 = (1, 0, 1)
Cholesky Decomposition Example
- M = DD^T: [[1, -1, 1], [-1, 2, -1], [1, -1, 2]] = [[1, 0, 0], [-1, 1, 0], [1, 0, 1]] · [[1, -1, 1], [0, 1, 0], [0, 0, 1]]
- w_1 = (1, 0, 0), w_2 = (-1, 1, 0), w_3 = (1, 0, 1)
Cholesky Decomposition Formulas
- ∀j < i, take D_ij = (M_ij - Σ_{k=1}^{j-1} D_ik D_jk) / D_jj
- Take D_ij = 0 if M_ij - Σ_{k=1}^{j-1} D_ik D_jk = D_jj = 0
- Note that w_i · w_j = Σ_{k=1}^{j-1} D_ik D_jk + D_ij D_jj = M_ij
- ∀i, take D_ii = √(M_ii - Σ_{k=1}^{i-1} D_ik²)
- These formulas are the basis for the Cholesky-Banachiewicz algorithm and the Cholesky-Crout algorithm (these algorithms differ only in the order in which the entries are evaluated)
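The formulas above can be sketched as a short Cholesky-Banachiewicz routine (an illustration, not optimized; it also returns None on the failure cases described next):

```python
import math

def cholesky_vectors(M, tol=1e-12):
    """Cholesky-Banachiewicz sketch: return a lower-triangular D (as a list of
    rows w_i) with M = D D^T, or None if the formulas fail (M not PSD)."""
    n = len(M)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(D[i][k] * D[j][k] for k in range(j))
            if i == j:
                if s < -tol:
                    return None          # failure: M_ii - sum_k D_ik^2 < 0
                D[i][i] = math.sqrt(max(s, 0.0))
            elif abs(D[j][j]) > tol:
                D[i][j] = s / D[j][j]
            elif abs(s) > tol:
                return None              # failure: D_jj = 0 but numerator != 0
    return D

M = [[1, -1, 1], [-1, 2, -1], [1, -1, 2]]
D = cholesky_vectors(M)   # rows recover (1,0,0), (-1,1,0), (1,0,1)
```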
Cholesky Decomposition Failure
1. ∀j < i, D_ij = (M_ij - Σ_{k=1}^{j-1} D_ik D_jk) / D_jj
2. ∀i, D_ii = √(M_ii - Σ_{k=1}^{i-1} D_ik²)
- If the Cholesky decomposition succeeds, it gives us vectors w_i such that M_ij = w_i · w_j
- The formulas can fail in two ways:
1. M_ii - Σ_{k=1}^{i-1} D_ik² < 0 for some i
2. D_jj = 0 and M_ij - Σ_{k=1}^{j-1} D_ik D_jk ≠ 0 for some i, j
- Failure implies M is not PSD (see problem set)
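numpy's built-in routine illustrates failure as a certificate of non-PSD-ness (note it is slightly stricter than the slide's variant: it requires strictly positive definite input, so it also rejects singular PSD matrices):

```python
import numpy as np

# A non-PSD matrix: the eigenvalues of [[1, 2], [2, 1]] are 3 and -1.
M_bad = np.array([[1.0, 2.0], [2.0, 1.0]])
try:
    np.linalg.cholesky(M_bad)
    failed = False
except np.linalg.LinAlgError:
    failed = True
# The failure of the decomposition certifies that M_bad is not PSD.
```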
Part II: Analyzing Goemans-Williamson
Vectors for Goemans-Williamson
- Goemans-Williamson: Maximize Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2 subject to M ⪰ 0 and ∀i, M_ii = 1
- The semidefinite program gives us vectors w_i where w_i · w_j = M_ij
[Figure: a 5-vertex graph G with a vector w_i attached to each vertex]
Rounding Vectors
- Beautiful idea: map each vector w_i to ±1 by taking a random vector x and setting y_i = 1 if x · w_i > 0 and y_i = -1 if x · w_i < 0
- Example: [Figure: the random vector x splits the vectors of the 5-vertex graph G, giving y_1 = y_4 = 1 and y_2 = y_3 = y_5 = -1]
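Hyperplane rounding is a few lines of numpy (a sketch: the vectors below are hypothetical random unit vectors standing in for an actual SDP solution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unit vectors for 5 vertices; the rows of W play the role
# of w_1, ..., w_5 from an SDP solution.
W = rng.standard_normal((5, 3))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Random-hyperplane rounding: draw a random direction x and sign each w_i.
x = rng.standard_normal(3)
y = np.where(W @ x > 0, 1, -1)   # y_i in {+1, -1}: the rounded cut
```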
Expected Cut Value
- Consider E[Σ_{i,j: i<j, (i,j)∈E(G)} (1 - y_i y_j)/2]
- For each i, j such that i < j and (i,j) ∈ E(G), E[(1 - y_i y_j)/2] = θ/π, where θ ∈ [0, π] is the angle between w_i and w_j (a random hyperplane separates w_i and w_j with probability θ/π)
- On the other hand, (1 - M_ij)/2 = (1 - cos θ)/2
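The θ/π claim can be checked by Monte Carlo (a sketch; θ = 2.0 is an arbitrary illustrative angle, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check E[(1 - y_i y_j)/2] = theta/pi for two unit vectors at angle theta.
theta = 2.0
w_i = np.array([1.0, 0.0])
w_j = np.array([np.cos(theta), np.sin(theta)])

# Sample random directions x; the pair is cut when x separates w_i from w_j.
X = rng.standard_normal((200_000, 2))
cut = (X @ w_i > 0) != (X @ w_j > 0)
estimate = cut.mean()   # close to theta/pi ≈ 0.6366
```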
Approximation Factor
- Goemans-Williamson gives a cut with expected value at least [min_θ (θ/π) / ((1 - cos θ)/2)] · Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2
- The first term is ≈ .878, attained at θ_crit ≈ 134°
- Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2 is an upper bound on the max cut size, so we have a .878-approximation.
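The constant can be recovered by a simple grid search over θ (a sketch using only the standard library):

```python
import math

# Grid search for min over theta of (theta/pi) / ((1 - cos theta)/2),
# the worst-case ratio between the rounded cut and the SDP value.
def ratio(theta):
    return (theta / math.pi) / ((1 - math.cos(theta)) / 2)

thetas = [math.pi * k / 100_000 for k in range(1, 100_000)]
best_theta = min(thetas, key=ratio)
alpha = ratio(best_theta)
# alpha ≈ 0.87856 at best_theta ≈ 2.3311 rad (≈ 133.6 degrees)
```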
Part III: Tight Examples
Showing Tightness
- How can we show this analysis is tight?
- We give two examples where we obtain a cut of value ≈ .878 · Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2
- In one example, Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2 is the value of the maximum cut. In the other example, .878 · Σ_{i,j: i<j, (i,j)∈E(G)} (1 - M_ij)/2 is the value of the maximum cut.
Example 1: Hypercube
- Have one vertex for each point y_i ∈ {±1}^n
- We have an edge between y_i and y_j in G if |cos⁻¹((y_i · y_j)/n) - θ_crit| < ε for an arbitrarily small ε > 0
- Goemans-Williamson value ≈ ((1 - cos θ_crit)/2) · |E(G)|
- This is achieved by the coordinate cuts.
- Goemans-Williamson rounds to a random cut, which gives value ≈ (θ_crit/π) · |E(G)|
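To see which pairs of hypercube points sit near the critical angle, note that points at Hamming distance d satisfy (y_i · y_j)/n = 1 - 2d/n. A small sketch (θ_crit ≈ 2.3311 rad is assumed from the analysis above):

```python
import math

# Hypercube points y_i, y_j in {±1}^n at Hamming distance d satisfy
# (y_i · y_j)/n = 1 - 2d/n, so their angle is arccos(1 - 2d/n).
theta_crit = 2.3311   # assumed critical angle (~133.6 degrees)
n = 1000
d_best = min(range(n + 1),
             key=lambda d: abs(math.acos(1 - 2 * d / n) - theta_crit))
frac = d_best / n   # ≈ 0.845: edges join points differing in ~84.5% of coordinates
```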
Example 2: Sphere
- Take a large number of random points {y_i} on the unit sphere
- We have an edge between y_i and y_j in G if |cos⁻¹(y_i · y_j) - θ_crit| < ε for an arbitrarily small ε > 0
- Goemans-Williamson value ≈ ((1 - cos θ_crit)/2) · |E(G)|
- A random hyperplane cut gives value ≈ (θ_crit/π) · |E(G)|, and this is essentially optimal.
Proof Requirements
- How can we prove the above examples behave
as claimed?
- For the hypercube, have to upper bound the
value of the Goemans-Williamson program.
- This can be done by determining the
eigenvalues of the hypercube graph and using this to analyze the dual (see problem set)
- For the sphere, have to prove that no cut does
better than a random hyperplane cut (this is hard, see Feige-Schechtman [FS02])
Part IV: Impressiveness of Goemans-Williamson and Open Problems
Failure of Linear Programming
- Trivial algorithm: Randomly guess which side of
the cut each vertex is on.
- Gives approximation factor 1/2
- Linear programming doesn't do any better, not even polynomial-sized linear programming extensions [CLRS13]!
Hardness of beating GW
- Only know NP-hardness for a 16/17-approximation [Hås01], [TSSW00]
- It is Unique-Games-hard to beat Goemans-Williamson on MAX-CUT [KKMO07]
Open problems
- Can we find a subexponential-time algorithm beating Goemans-Williamson on MAX-CUT?
- Can we prove constant degree SOS lower
bounds for obtaining a better approximation than Goemans-Williamson?
References
- [CLRS13] S. Chan, J. Lee, P. Raghavendra, and D. Steurer. Approximate constraint
satisfaction requires large lp relaxations. FOCS 2013.
- [FS02] U. Feige and G. Schechtman. On the optimality of the random hyperplane rounding technique for max cut. Random Structures & Algorithms, 20(3), p. 403-440, 2002.
- [GW95] M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms
for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. J. ACM, 42(6):1115-1145, 1995.
- [HΓ₯s01] J. HΓ₯stad. Some optimal inapproximability results. JACM 48: p.798-869,
2001.
- [KKMO07] S. Khot, G. Kindler, E. Mossel, R. O'Donnell. Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs? SIAM Journal on Computing, 37(1): p. 319-357, 2007.
- [TSSW00] L. Trevisan, G. Sorkin, M. Sudan, and D. Williamson. Gadgets,
approximation, and linear programming. SIAM Journal on Computing, 29(6): p. 2074-2097, 2000.