Subsection 2 NP-hardness
◮ Do hard problems exist? Depends on whether P = NP
◮ Next best thing: define the hardest problems in NP
◮ A problem P is NP-hard if every problem Q in NP can be solved in this way:
   1. given an instance q of Q, construct (in polynomial time) an instance ρ(q) of P
   2. solve ρ(q): q is a YES instance of Q iff ρ(q) is a YES instance of P
◮ If P is in NP and is NP-hard, it is called NP-complete
◮ Every problem in NP reduces to sat [Cook 1971]
◮ Boolean decision variables store the TM dynamics
◮ Definition of the TM dynamics in CNF
◮ Why it works: suppose P were easier than Q; then we could solve Q by calling ρ ∘ Alg_P and conclude that Q is as easy as P, contradiction
◮ stable: given G = (V, E) and k ∈ N, does G contain a stable set of size k?
◮ We know k-clique is NP-complete; reduce from it
◮ Given an instance (G, k) of clique, consider the complement graph Ḡ = (V, Ē), where Ē = {{u, v} : {u, v} ∉ E}
◮ Thm.: G has a clique of size k iff Ḡ has a stable set of size k
◮ ρ(G) = Ḡ is computable in polynomial time
◮ ⇒ stable is NP-hard
◮ stable is also in NP: a stable set of size k is a certificate checkable in polynomial time
◮ ⇒ stable is NP-complete
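The complement-graph reduction above can be checked by brute force on a small example. This sketch (not from the slides; the graph is a hypothetical 4-cycle) verifies that G has a k-clique iff Ḡ has a stable set of size k, for every k:

```python
# Sketch: brute-force check that G has a k-clique iff its complement
# has a stable set of size k, on a small example graph (a 4-cycle).
from itertools import combinations

def has_clique(V, E, k):
    """True iff some k-subset of V is pairwise adjacent."""
    return any(all(frozenset(p) in E for p in combinations(S, 2))
               for S in combinations(V, k))

def has_stable(V, E, k):
    """True iff some k-subset of V is pairwise non-adjacent."""
    return any(all(frozenset(p) not in E for p in combinations(S, 2))
               for S in combinations(V, k))

def complement(V, E):
    """The reduction rho: all vertex pairs not in E."""
    return {frozenset(p) for p in combinations(V, 2)} - E

# Example: the 4-cycle 1-2-3-4-1 (hypothetical instance)
V = [1, 2, 3, 4]
E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]}
for k in range(1, 5):
    assert has_clique(V, E, k) == has_stable(V, complement(V, E), k)
```

The 4-cycle has cliques only up to size 2, and its complement (two disjoint edges) likewise has stable sets only up to size 2, so the equivalence holds for each k.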
◮ sat is NP-hard by Cook’s theorem; reduce from sat in order to show MILP is NP-hard
◮ Polynomial reduction ρ: each clause becomes a linear constraint over binary variables (each positive literal xj contributes xj, each negative literal x̄j contributes 1 − xj, and the sum must be ≥ 1)
◮ E.g. ρ maps (x1 ∨ x2) ∧ (x̄1 ∨ x̄2) to x1 + x2 ≥ 1, (1 − x1) + (1 − x2) ≥ 1, x ∈ {0, 1}²
◮ sat is YES iff MILP is feasible
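The clause-to-constraint mapping ρ can be sketched and sanity-checked by brute force over the binary points. Here a clause is a hypothetical pair (P, N) of index sets for its positive and negative literals:

```python
# Sketch: encode a clause (P, N) -- index sets of positive and negative
# literals -- as  sum_{j in P} x_j + sum_{j in N} (1 - x_j) >= 1,
# then brute-force over {0,1}^n: sat is YES iff the MILP is feasible.
from itertools import product

def clause_constraint(P, N, x):
    """The linear constraint rho produces for one clause."""
    return sum(x[j] for j in P) + sum(1 - x[j] for j in N) >= 1

def milp_feasible(n, clauses):
    """Brute-force feasibility check over all binary points."""
    return any(all(clause_constraint(P, N, x) for P, N in clauses)
               for x in product([0, 1], repeat=n))

# Example from the slides: (x1 or x2) and (not x1 or not x2)
clauses = [({0, 1}, set()), (set(), {0, 1})]
print(milp_feasible(2, clauses))  # True: x = (1, 0) satisfies both
```

An unsatisfiable formula such as x1 ∧ x̄1 gives an infeasible constraint set, matching the "YES iff feasible" claim.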
◮ Quadratic objective, linear constraints, continuous variables
◮ Many applications (e.g. portfolio selection)
◮ If Q is PSD, then the objective is convex and the problem is in P
◮ If Q has at least one negative eigenvalue, QP is NP-hard
◮ Decision problem: “is the min. obj. fun. value = 0?”
◮ By reduction from sat: let σ be an instance, and let Âx ≥ b̂ be its sat→MILP constraints
◮ Consider the QP
      min { Σ_{j≤n} xj(1 − xj) : Âx ≥ b̂, x ∈ [0, 1]^n }   (†)
◮ Claim: σ is YES iff val(†) = 0
◮ Proof:
   ◮ assume σ is YES with soln. x*; then x* ∈ {0, 1}^n is feasible in (†) with objective value 0; since xj(1 − xj) ≥ 0 on [0, 1], val(†) = 0
   ◮ assume σ is NO, and suppose val(†) = 0; then (†) is feasible with an optimum x* s.t. Σ_{j≤n} xj*(1 − xj*) = 0, so x* ∈ {0, 1}^n; but then x* satisfies the sat→MILP constraints, so σ is YES, contradiction
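The key property behind the claim is that the QP objective vanishes exactly at the binary points of [0, 1]^n. A minimal numeric sketch (instance values are hypothetical):

```python
# Sketch: the relaxed QP objective sum_j x_j * (1 - x_j) is zero exactly
# on binary points of [0,1]^n, so a satisfying binary assignment
# certifies val = 0, while fractional points have positive objective.
def qp_obj(x):
    return sum(xj * (1 - xj) for xj in x)

x_sat = [1, 0]   # satisfies (x1 or x2) and (not x1 or not x2)
assert qp_obj(x_sat) == 0
assert qp_obj([0.5, 0.5]) > 0   # interior points are penalized
```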
◮ Add surplus vars v ≥ 0 to the sat→MILP constraints:
      ∀i ≤ m  (aᵢx − bᵢ − vᵢ = 0)
◮ Now sum their squares into the objective:
      min Σ_{j≤n} xj(1 − xj) + Σ_{i≤m} (aᵢx − bᵢ − vᵢ)²
◮ Reduce from 3sat, so each clause has ≤ 3 literals and each surplus variable vᵢ is bounded by a constant
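The surplus-variable construction can be checked numerically: at a satisfying binary x, choosing vᵢ = aᵢx − bᵢ (nonnegative, since the clause constraints hold) zeroes both terms of the penalized objective. A sketch on the slides' two-clause example (the matrix encoding below is an assumption):

```python
# Sketch: penalized objective  sum_j x_j(1-x_j) + sum_i (a_i.x - b_i - v_i)^2.
# At a satisfying binary x with v_i = a_i . x - b_i >= 0, both terms vanish.
def penalty_obj(A, b, x, v):
    box = sum(xj * (1 - xj) for xj in x)
    lin = sum((sum(aij * xj for aij, xj in zip(Ai, x)) - bi - vi) ** 2
              for Ai, bi, vi in zip(A, b, v))
    return box + lin

# Constraints of (x1 or x2) and (not x1 or not x2), written as a.x >= b:
#   x1 + x2 >= 1    ->  a = (1, 1),   b = 1
#   -x1 - x2 >= -1  ->  a = (-1, -1), b = -1
A, b = [(1, 1), (-1, -1)], [1, -1]
x = [1, 0]
v = [sum(aij * xj for aij, xj in zip(Ai, x)) - bi for Ai, bi in zip(A, b)]
assert all(vi >= 0 for vi in v)   # surplus values are feasible
assert penalty_obj(A, b, x, v) == 0
```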
◮ continuous Quadratic Knapsack Problem (cQKP):
      min { f(x) = Σ_{j≤n} xj(1 − xj) : Σ_{j≤n} aj xj = γ, x ∈ [0, 1]^n }
◮ Reduction from subset-sum: given a list a ∈ Q^n and γ ∈ Q, is there J ⊆ {1, . . . , n} s.t. Σ_{j∈J} aj = γ?
◮ σ is a YES instance of subset-sum:
   ◮ let xj* = 1 iff j ∈ J, xj* = 0 otherwise
   ◮ x* is feasible by construction
   ◮ f is non-negative on [0, 1]^n and f(x*) = 0: x* is an optimum
◮ σ is a NO instance of subset-sum:
   ◮ suppose opt(cQKP) = x* s.t. f(x*) = 0
   ◮ then x* ∈ {0, 1}^n because f(x*) = 0
   ◮ feasibility of x* ⇒ supp(x*) solves σ, contradiction; hence f(x*) > 0
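The YES direction of the subset-sum reduction is easy to verify on a toy instance (the list a and target γ below are hypothetical):

```python
# Sketch: for a YES instance (a, gamma) of subset-sum with witness J, the
# indicator vector x* of J is feasible for the cQKP and f(x*) = 0, which
# is optimal since f >= 0 on [0,1]^n.
def f(x):
    return sum(xj * (1 - xj) for xj in x)

a, gamma = [3, 5, 2, 7], 9      # hypothetical instance
J = {2, 3}                      # witness: a[2] + a[3] = 2 + 7 = 9
x = [1 if j in J else 0 for j in range(len(a))]

assert sum(aj * xj for aj, xj in zip(a, x)) == gamma   # knapsack-feasible
assert f(x) == 0                                       # hence optimal
```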
◮ Reduce max clique to the subclass
      max { f(x) = Σ_{{i,j}∈E} xi xj : Σ_{j≤n} xj = 1, x ∈ [0, 1]^n }
◮ Theorem [Motzkin & Straus 1965]
   Let C be a maximum clique of the instance G = (V, E) of max clique; then
   max f(x) = (1/2)(1 − 1/ω(G)), attained at xj* = 1/ω(G) for j ∈ C and xj* = 0 for j ∉ C
◮ Proof: among the optima of max { f(x) : Σ_{j≤n} xj = 1, x ∈ [0, 1]^n }, choose x* with |supp(x*)| = |{j : xj* > 0}| smallest, and let C = supp(x*)   (‡)
◮ Suppose 1, 2 ∈ C but {1, 2} ∉ E; then x1*, x2* > 0, and we can perturb by a small
  ε ∈ [−x1*, x2*]: x_ε = (x1* + ε, x2* − ε, x3*, . . . , xn*) is feasible w.r.t. simplex and bounds
◮ {1, 2} ∉ E ⇒ x1x2 does not appear in f(x) ⇒ f(x_ε) depends linearly on ε; by
  optimality of x* it must be constant in ε
◮ setting ε = −x1* or ε = x2* then yields global optima with more zero components than x*,
  against assumption (‡); hence {1, 2} ∈ E, and by relabeling every pair in C is an edge, so C is a clique
◮ Recall (‡): x* is an optimum with |supp(x*)| = |{j : xj* > 0}| smallest, and C = supp(x*) is a clique
◮ Square the simplex constraint Σ_{j≤n} xj = 1 to get
      ψ(x) = Σ_{j≤n} xj² + 2 Σ_{i<j} xi xj = 1
◮ By construction xj* = 0 for j ∉ C, and C is a clique, so every surviving cross term is an edge term of f:
      ψ(x*) = Σ_{j∈C} (xj*)² + 2 Σ_{i<j∈C} xi* xj* = Σ_{j∈C} (xj*)² + 2f(x*) = 1
◮ ψ(x) = 1 for all feasible x, so f(x) achieves its maximum when Σ_{j∈C} (xj*)² is
  minimum, i.e. xj* = 1/|C| for all j ∈ C
◮ again by the squared simplex constraint,
      f(x*) = (1/2)(1 − Σ_{j∈C} (xj*)²) = (1/2)(1 − |C| · 1/|C|²) = (1/2)(1 − 1/|C|) ≤ (1/2)(1 − 1/ω(G)),
  so f(x*) attains the maximum (1/2)(1 − 1/ω(G)) when |C| = ω(G)
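The Motzkin–Straus bound can be checked numerically on a small graph. This sketch uses a hypothetical instance, a triangle with one pendant vertex (so ω(G) = 3), and evaluates f at the uniform vector on the maximum clique:

```python
# Sketch: on a triangle {0,1,2} plus pendant vertex 3 (omega = 3), the
# uniform vector on the max clique attains the Motzkin-Straus value
# (1/2) * (1 - 1/omega).
from itertools import combinations

E = {frozenset(p) for p in [(0, 1), (0, 2), (1, 2), (2, 3)]}

def f(x):
    """Sum of x_i * x_j over edges {i, j} of the graph."""
    return sum(x[i] * x[j] for i, j in combinations(range(len(x)), 2)
               if frozenset((i, j)) in E)

omega = 3
x_star = [1 / omega, 1 / omega, 1 / omega, 0]   # uniform on clique {0, 1, 2}
assert abs(sum(x_star) - 1) < 1e-12             # simplex-feasible
assert abs(f(x_star) - 0.5 * (1 - 1 / omega)) < 1e-12
```

Here f(x*) = 3 · (1/3)² = 1/3 = ½(1 − 1/3), matching the theorem.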
[Hint: what are the decision variables, objective, constraints? What data are missing?]