Submodular Maximization, Seffi Naor, Lecture 2, 4th Cargese Workshop on Combinatorial Optimization



SLIDE 1

Submodular Maximization
Seffi Naor
Lecture 2, 4th Cargese Workshop on Combinatorial Optimization

Seffi Naor Submodular Maximization

SLIDE 2

Submodular Maximization

Constrained Submodular Maximization
Family of allowed subsets M ⊆ 2^N.
max f(S) s.t. S ∈ M
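For concreteness (an illustrative example, not from the slides): coverage functions are the canonical monotone submodular functions, and the snippet below verifies the submodularity inequality f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) exhaustively on a toy instance.

```python
from itertools import combinations

# Toy coverage function (hypothetical instance): element i contributes the
# set SETS[i] to the union, and f(S) is the number of points covered by S.
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}

def f(S):
    return len(set().union(*(SETS[i] for i in S)))  # union() of nothing is {}

def powerset(ground):
    g = sorted(ground)
    return [frozenset(c) for r in range(len(g) + 1) for c in combinations(g, r)]

# Submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
subsets = powerset(SETS)
assert all(f(A) + f(B) >= f(A | B) + f(A & B) for A in subsets for B in subsets)
print("f is submodular (and monotone) on this instance")
```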

SLIDE 3

Constrained Maximization - Problem I

SLIDE 5

Constrained Maximization - Problem I (Cont.)

Problem I - Submodular Welfare
Input:
1. Collection Q of unsplittable items.
2. Monotone submodular utilities f_i : 2^Q → R+, 1 ≤ i ≤ k (one per player).
Goal: Assign all items, partitioning Q into Q_1, ..., Q_k, so as to maximize the social welfare ∑_{i=1}^k f_i(Q_i).

Arises in the context of combinatorial auctions. [Lehman-Lehman-Nisan-01]
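As a sketch of how Problem I is attacked algorithmically (not part of the slides): the item-by-item greedy rule, handing each item to the player with the largest marginal gain, is known to give a 1/2-approximation for monotone submodular utilities. The instance data below is hypothetical.

```python
# Greedy assignment for Submodular Welfare: give each item to the player
# whose monotone submodular utility increases the most.

def greedy_welfare(items, utilities):
    """utilities[i] is a value oracle for f_i, taking a set of items."""
    bundles = [set() for _ in utilities]
    for q in items:
        gains = [f(B | {q}) - f(B) for f, B in zip(utilities, bundles)]
        bundles[gains.index(max(gains))].add(q)  # ties go to the first player
    return bundles

def make_coverage(pts):
    # pts: item -> set of points it covers; f(S) = |union of covered points|
    return lambda S: len(set().union(*(pts[q] for q in S)))

# Two players with coverage utilities (hypothetical data).
f1 = make_coverage({"a": {1, 2}, "b": {2, 3}, "c": {3}})
f2 = make_coverage({"a": {1}, "b": {4, 5}, "c": {6, 7}})
bundles = greedy_welfare(["a", "b", "c"], [f1, f2])
print(bundles, f1(bundles[0]) + f2(bundles[1]))  # welfare 6
```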

SLIDE 7

Constrained Maximization - Problem II

Problem II - Submodular Maximization Over a Matroid
Input: Matroid M = (N, I) and submodular f : 2^N → R+.
Goal: Find S ∈ I maximizing f(S).
The case of monotone f captures: Submodular Welfare, Max-k-Coverage, Generalized Assignment, ...

Combinatorial Approach: greedy and local search techniques. For some cases this provides the best-known/tight approximations:
- Knapsack constraint [Sviridenko-04]
- Intersection of k matroids [Lee-Sviridenko-Vondrák-09], [Ward-12]
- k-exchange systems [Feldman-Naor-S-Ward-11]

SLIDE 10

The Greedy Approach

[Nemhauser-Wolsey-Fisher-78] Greedy is a (1/2)-approximation for maximizing a monotone submodular f over a matroid.

Uniform Matroid: Greedy is a (1 − 1/e)-approximation [Nemhauser-Wolsey-Fisher-78]. Captures Max-k-Coverage. Tight for coverage functions [Feige-98].

Non-monotone f over a matroid:
- ≈ 0.309-approximation (fractional local search) [Vondrák-09]
- ≈ 0.325-approximation (simulated annealing) [Gharan-Vondrák-11]
- ≈ 0.478 hardness, absolute! [Gharan-Vondrák-11]

SLIDE 13

Uniform Matroid

Notation: f_S(u) = f(S ∪ {u}) − f(S)

Greedy Algorithm
1. S_0 ← ∅.
2. for i = 1 to k do: u_i ← argmax_{u ∉ S_{i-1}} { f_{S_{i-1}}(u) }; S_i ← S_{i-1} ∪ {u_i}.
3. Return S_k.

Theorem [Nemhauser-Wolsey-Fisher-78]
For monotone submodular f,
f(S_k) ≥ (1 − (1 − 1/k)^k) · f(OPT) ≥ (1 − 1/e) · f(OPT)

Non-Monotone Submodular Functions: 1/e is the best factor (continuous approach via the multilinear extension).
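The greedy algorithm above can be sketched in a few lines, assuming value-oracle access to f (illustrative code, not from the lecture):

```python
# Greedy for a uniform matroid: repeatedly add the element with the largest
# marginal value f_S(u) until k elements are chosen.

def greedy(ground, f, k):
    S = frozenset()
    for _ in range(k):
        u = max(ground - S, key=lambda v: f(S | {v}) - f(S))  # argmax f_S(u)
        S = S | {u}
    return S

# Max-k-Coverage instance (hypothetical): pick k=2 sets covering most points.
SETS = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5}}
cover = lambda S: len(set().union(*(SETS[e] for e in S)))
S = greedy(frozenset(SETS), cover, 2)
print(S, cover(S))  # "a" is picked first; final coverage is 4
```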

SLIDE 15

Uniform Matroid

Randomized Greedy Algorithm
1. S_0 ← ∅.
2. for i = 1 to k do: u_i ← an element chosen uniformly at random from M_i; S_i ← S_{i-1} ∪ {u_i}.
3. Return S_k.

How is M_i defined? M_i ⊆ N \ S_{i-1} maximizing ∑_{u ∈ M_i} f_{S_{i-1}}(u) s.t. |M_i| = k.

Assumptions (w.l.o.g., by adding dummy elements):
- |N \ S_{i-1}| ≥ k
- ∀u ∈ N \ S_{i-1}, f_{S_{i-1}}(u) ≥ 0

Comment: the iteration is "empty" if a dummy element is chosen.
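A sketch of Randomized Greedy under the same value-oracle assumption (illustrative, not the authors' code); dummy elements are modeled as None slots with marginal value 0:

```python
import random

# Randomized Greedy: M_i holds the k remaining elements with the largest
# nonnegative marginals (padded with dummies so |M_i| = k always), and
# u_i is a uniformly random member of M_i.

def randomized_greedy(ground, f, k, rng=random):
    S = frozenset()
    for _ in range(k):
        marg = {u: f(S | {u}) - f(S) for u in ground - S}
        top = sorted((u for u in marg if marg[u] >= 0),
                     key=lambda u: marg[u], reverse=True)[:k]
        M = top + [None] * (k - len(top))  # None = dummy element
        u = rng.choice(M)
        if u is not None:  # a dummy pick is an "empty" iteration
            S = S | {u}
    return S

# Smoke run on a small coverage function (hypothetical instance).
SETS = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
cover = lambda S: len(set().union(*(SETS[e] for e in S)))
S = randomized_greedy(frozenset(SETS), cover, 2, rng=random.Random(0))
print(S, cover(S))
```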

SLIDE 17

Performance of Randomized Greedy

Theorem [Buchbinder-Feldman-N-Schwartz-14]
For monotone submodular f,
E[f(S_k)] ≥ (1 − (1 − 1/k)^k) · f(OPT) ≥ (1 − 1/e) · f(OPT)

Theorem [Buchbinder-Feldman-N-Schwartz-14]
For non-monotone submodular f,
E[f(S_k)] ≥ (1 − 1/k)^{k-1} · f(OPT) ≥ (1/e) · f(OPT)

SLIDE 20

Monotone Submodular Functions

Condition on the first i − 1 steps. The expected gain at the ith step:

E[f_{S_{i-1}}(u_i)] = (1/k) · ∑_{u ∈ M_i} f_{S_{i-1}}(u)
                    ≥ (1/k) · ∑_{u ∈ OPT \ S_{i-1}} f_{S_{i-1}}(u)
                    ≥ (f(OPT ∪ S_{i-1}) − f(S_{i-1})) / k
                    ≥ (f(OPT) − f(S_{i-1})) / k

(The first inequality holds because M_i maximizes the sum of k marginals and |OPT \ S_{i-1}| ≤ k; the second by submodularity; the third by monotonicity.)

Taking expectations over all outcomes:
E[f_{S_{i-1}}(u_i)] ≥ (f(OPT) − E[f(S_{i-1})]) / k

Rearranging (using E[f(S_i)] = E[f(S_{i-1})] + E[f_{S_{i-1}}(u_i)]):
f(OPT) − E[f(S_i)] ≤ (1 − 1/k) · (f(OPT) − E[f(S_{i-1})])

SLIDE 22

Monotone Submodular Functions

Implying, by induction over i:
f(OPT) − E[f(S_i)] ≤ (1 − 1/k)^i · (f(OPT) − E[f(S_0)]) ≤ (1 − 1/k)^i · f(OPT)

Thus:
E[f(S_k)] ≥ (1 − (1 − 1/k)^k) · f(OPT) ≥ (1 − 1/e) · f(OPT)

completing the proof.
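As a quick numeric sanity check of the final bound (not in the slides), the factor 1 − (1 − 1/k)^k decreases with k but never drops below 1 − 1/e:

```python
import math

# The approximation factor 1 - (1 - 1/k)^k for several values of k.
limit = 1 - 1 / math.e  # the limiting guarantee, about 0.632
factors = {k: 1 - (1 - 1 / k) ** k for k in (1, 2, 5, 10, 100, 10**6)}
for k, v in sorted(factors.items()):
    assert v >= limit  # the guarantee never falls below 1 - 1/e
    print(f"k={k:>7}: 1-(1-1/k)^k = {v:.6f}  (limit {limit:.6f})")
```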

SLIDE 25

Non-Monotone Submodular Functions

Condition on the first i − 1 steps. The expected gain at the ith step:

E[f_{S_{i-1}}(u_i)] = (1/k) · ∑_{u ∈ M_i} f_{S_{i-1}}(u)
                    ≥ (1/k) · ∑_{u ∈ OPT \ S_{i-1}} f_{S_{i-1}}(u)
                    ≥ (f(OPT ∪ S_{i-1}) − f(S_{i-1})) / k

But what is f(OPT ∪ S_{i-1}) for non-monotone f?

Lemma
For all 0 ≤ i ≤ k, E[f(OPT ∪ S_i)] ≥ (1 − 1/k)^i · f(OPT).

Proof deferred for now ...

SLIDE 28

Non-Monotone Submodular Functions

Taking expectations over all outcomes:
E[f_{S_{i-1}}(u_i)] ≥ E[f(OPT ∪ S_{i-1}) − f(S_{i-1})] / k ≥ ((1 − 1/k)^{i-1} · f(OPT) − E[f(S_{i-1})]) / k

It can be proved by induction that:
E[f(S_i)] ≥ (i/k) · (1 − 1/k)^{i-1} · f(OPT)

Setting i = k:
E[f(S_k)] ≥ (k/k) · (1 − 1/k)^{k-1} · f(OPT) ≥ (1/e) · f(OPT)

completing the proof.

SLIDE 30

Non-Monotone Submodular Functions

We first prove:

Lemma [closely related to Feige-Mirrokni-Vondrák-11]
Let N(p) be a random subset of N in which each element is chosen with probability at most p (not necessarily independently). Then,
E[f(N(p))] ≥ (1 − p) · f(∅)

Proof: Sort N in non-increasing order of inclusion probability in N(p):
∀i ≤ j : Pr[u_i ∈ N(p)] ≥ Pr[u_j ∈ N(p)]

Terminology:
- N_i = {u_1, ..., u_i}
- p_i: the probability that u_i is chosen
- X_i: the indicator for the event that u_i is chosen
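The lemma can be checked exactly on a tiny instance (hypothetical, with independent sampling, which is a special case of "chosen with probability at most p"): take f(S) = h(|S|) for a concave h, which makes f nonnegative, non-monotone, and submodular.

```python
from itertools import combinations

# Exact check of E[f(N(p))] >= (1 - p) * f(empty set) when every element is
# kept independently with probability p. f is concave in |S|, hence
# submodular; h(0..4) = 5, 7, 7, 5, 1, so f is nonnegative and non-monotone.
n, p = 4, 0.5
h = lambda j: 5 + 2 * j - j * (j - 1)
f = lambda S: h(len(S))

# Sum f(S) over all subsets, weighted by the independent-sampling law.
expected = sum(
    p ** r * (1 - p) ** (n - r) * f(S)
    for r in range(n + 1)
    for S in combinations(range(n), r)
)
print(expected, ">=", (1 - p) * f(()))  # 6.0 >= 2.5
```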

SLIDE 31

Non-Monotone Submodular Functions

Thus:
E[f(N(p))] = E[ f(∅) + ∑_{i=1}^n X_i · f_{N_{i-1} ∩ N(p)}(u_i) ]
           ≥ E[ f(∅) + ∑_{i=1}^n X_i · f_{N_{i-1}}(u_i) ]   (submodularity)
           = f(∅) + ∑_{i=1}^n E[X_i] · f_{N_{i-1}}(u_i)   (the marginals f_{N_{i-1}}(u_i) are deterministic)
           = f(∅) + ∑_{i=1}^n p_i · f_{N_{i-1}}(u_i)
           = (1 − p_1) · f(∅) + ∑_{i=1}^{n-1} (p_i − p_{i+1}) · f(N_i) + p_n · f(N_n)
           ≥ (1 − p) · f(∅)   (since p ≥ p_1 ≥ p_2 ≥ ... ≥ p_n and f ≥ 0)

SLIDE 33

Non-Monotone Submodular Functions

Lemma
For all 0 ≤ i ≤ k, E[f(OPT ∪ S_i)] ≥ (1 − 1/k)^i · f(OPT).

Observations:
- g(S) = f(S ∪ OPT) is a submodular function.
- In iteration i, each element of N \ S_{i-1} is not chosen into S_i with probability at least 1 − 1/k.
- Hence every element belongs to S_i with probability at most 1 − (1 − 1/k)^i.
- Reminder: E[g(N(p))] ≥ (1 − p) · g(∅).

Completing the proof, apply the reminder to g with p = 1 − (1 − 1/k)^i:
E[f(OPT ∪ S_i)] = E[g(S_i \ OPT)] ≥ (1 − 1/k)^i · g(∅) = (1 − 1/k)^i · f(OPT)

SLIDE 35

Non-Monotone Functions: Beyond 1/e

Main Ideas
Random greedy with |M_i| of variable size:
- If the marginal values of the additional elements are significant, the performance improves.
- Otherwise, OPT is "mostly" contained in M_i, and then a continuous version of the double greedy algorithm can be used, since |M_i| is O(k).

Theorem [Buchbinder-Feldman-N-Schwartz-14]
There is an efficient algorithm that achieves an approximation factor of 1/e + 0.004 for non-monotone submodular function maximization over a uniform matroid.
