Submodular Maximization
Seffi Naor
Lecture 3, 4th Cargese Workshop on Combinatorial Optimization



Continuous Relaxation

Recap: a continuous relaxation for maximization.

Multilinear Extension:

    F(x) = ∑_{R⊆N} f(R) · ∏_{u_i∈R} x_i · ∏_{u_i∉R} (1 − x_i),   ∀x ∈ [0, 1]^N

Simple probabilistic interpretation: F(x) = E[f(R(x))], where the random set R(x) contains each element u_i independently with probability x_i. In particular, x integral ⇒ F(x) = f(x).

Multilinear Relaxation: relax the discrete problem by maximizing F. What are the properties of F? It is neither convex nor concave.
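The definition can be checked directly for small ground sets. A minimal sketch, using a toy coverage function of our own choosing (the names are illustrative, not from the lecture):

```python
def multilinear_extension(f, n, x):
    """F(x) = sum over R of f(R) * prod_{i in R} x_i * prod_{i not in R} (1 - x_i),
    computed by brute force over all 2^n subsets (fine for small n)."""
    total = 0.0
    for mask in range(1 << n):
        R = frozenset(i for i in range(n) if mask >> i & 1)
        p = 1.0
        for i in range(n):
            p *= x[i] if i in R else 1.0 - x[i]
        total += f(R) * p
    return total

# Toy coverage function (monotone submodular): f(R) = size of the union of the chosen sets.
sets = [{0, 1}, {1, 2}, {2, 3}]
f = lambda R: len(set().union(*(sets[i] for i in R))) if R else 0

print(multilinear_extension(f, 3, [1, 0, 1]))    # 4.0: integral x, equals f({0, 2})
print(multilinear_extension(f, 3, [0.5, 0, 0]))  # 1.0: equals 0.5 * f({0})
```

At integral points the sum collapses to a single subset, which is the F(x) = f(x) claim above.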


Properties of the Multilinear Extension

Lemma. The multilinear extension F satisfies:
    If f is non-decreasing, then ∂F/∂x_i ≥ 0 everywhere in the cube, for all i.
    If f is submodular, then ∂²F/∂x_i∂x_j ≤ 0 everywhere in the cube, for all i, j.

Useful for proving:

Theorem. The multilinear extension F satisfies:
    If f is non-decreasing, then F is non-decreasing in every direction d ≥ 0.
    If f is submodular, then F is concave in every direction d ≥ 0.
    If f is submodular, then F is convex in every direction e_i − e_j, for all i, j ∈ N.

Properties of the Multilinear Extension

Summarizing:

    f⁺(x)  ≥  F(x)  ≥  f⁻(x) = f^L(x)
    (concave closure)   (multilinear ext.)   (convex closure = Lovász ext.)

Any extension can be described as E[f(R)], where R is chosen from a distribution that preserves the x_i values (marginals). The concave closure maximizes this expectation but is hard to compute. The convex closure minimizes it and has a nice characterization (the Lovász extension). The multilinear extension is somewhere in the "middle".


Continuous Relaxation

Constrained submodular maximization: given a family of allowed subsets M ⊆ 2^N,

    max f(S)   s.t.   S ∈ M

Following the paradigm for relaxing linear maximization problems, let P_M be the convex hull of the feasible sets (their characteristic vectors):

    max F(x)   s.t.   x ∈ P_M

Comparing the linear and submodular relaxations:
    Optimizing a fractional solution: linear: easy; submodular: not clear ...
    Rounding a fractional solution: linear: hard (problem dependent); submodular: easy (pipage rounding for matroids)


Pipage Rounding on Matroids

Work of [Ageev-Sviridenko-04], [Călinescu-Chekuri-Pál-Vondrák-08].

For a matroid M, the associated matroid polytope is

    P_M = { x ∈ [0, 1]^N : ∑_{i∈S} x_i ≤ r_M(S)  ∀S ⊆ N }

where r_M(·) is the rank function of M. The extreme points of P_M are exactly the characteristic vectors of the independent sets of M.

Observation: if f is linear, a point x can be rounded by writing it as a convex combination of extreme points.

Question: what do we do if f is (general) submodular?


Pipage Rounding on Matroids

Rounding general submodular function f: if x is non-integral, there are i, j ∈ N for which 0 < xi, xj < 1. recall, F is convex in every direction ei − ej. hence, F is non-decreasing in one of the directions ±(ei − ej) Rounding Algorithm: suppose direction ei − ej is non-decreasing δ - max change (due to a tight set A) if either xi + δ or xj − δ are integral - progress else there exists a tight set A′ ⊂ A, i ∈ A′, j / ∈ A′ (|A′| < |A|) recurse on A′ - progress eventually: minimal tight set (contained in all tight sets) in which any pair

  • f coordinates can be increased/decreased - progress

Seffi Naor Submodular Maximization
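For the special case of a uniform (cardinality) matroid the tight-set bookkeeping disappears and a pipage step is easy to sketch. The following is our own simplified illustration of the idea, not the general matroid algorithm:

```python
def pipage_round_uniform(F, x, eps=1e-9):
    """Pipage rounding over the uniform-matroid polytope { x : sum(x) <= k }.
    While two coordinates are fractional, move along e_i - e_j; since F is
    convex on that line, one of the two endpoints does not decrease F.
    Each step makes x_i or x_j integral, and sum(x) is preserved."""
    x = list(x)
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1.0 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        d_plus = min(1.0 - x[i], x[j])     # largest step along  e_i - e_j
        d_minus = min(x[i], 1.0 - x[j])    # largest step along -(e_i - e_j)
        y_plus = list(x);  y_plus[i] += d_plus;   y_plus[j] -= d_plus
        y_minus = list(x); y_minus[i] -= d_minus; y_minus[j] += d_minus
        x = y_plus if F(y_plus) >= F(y_minus) else y_minus
    return [int(round(v)) for v in x]

# With a linear F, the rounded set keeps at least the fractional value.
w = [3.0, 1.0, 2.0, 5.0]
F = lambda x: sum(wi * xi for wi, xi in zip(w, x))
print(pipage_round_uniform(F, [0.5, 0.5, 1.0, 1.0]))  # [1, 0, 1, 1]
```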

Continuous Greedy

The Continuous Greedy Algorithm [Călinescu-Chekuri-Pál-Vondrák-08] computes an approximate fractional solution, assuming:
    f is monotone (for now ...)
    P_M is downward closed (so 0 ∈ P_M)

Continuous Greedy

The algorithm, as a continuous process:

    x(0) = 0
    y*(t) = argmax { ∑_{i=1}^n ∂F(x(t))/∂x_i · y_i : y ∈ P_M }
    ∂x_i(t)/∂t = y*_i(t)
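In implementations the ODE is discretized into small steps; a sketch, where the gradient oracle `grad` and the linear-maximization oracle `lp_max` are assumptions supplied by the caller:

```python
def continuous_greedy(n, grad, lp_max, T=1.0, steps=200):
    """Discretized continuous greedy: x(0) = 0; in each step, move x by
    (T/steps) * y, where y maximizes the linearization grad(x) . y over P_M."""
    x = [0.0] * n
    dt = T / steps
    for _ in range(steps):
        w = grad(x)       # w_i ~ dF/dx_i at the current point
        y = lp_max(w)     # y*(t): linear maximization over P_M
        x = [xi + dt * yi for xi, yi in zip(x, y)]
    return x

# Toy run over the uniform-matroid polytope sum(x) <= 2 with a linear F:
w = [3.0, 1.0, 2.0, 5.0]

def lp_max_top2(weights):
    top = sorted(range(len(weights)), key=lambda i: -weights[i])[:2]
    return [1.0 if i in top else 0.0 for i in range(len(weights))]

x = continuous_greedy(4, grad=lambda x: w, lp_max=lp_max_top2)
print([round(v, 6) for v in x])  # ~ [1, 0, 0, 1]: mass accumulates on the top-2 weights
```

For a genuinely submodular F the gradient oracle would typically estimate ∂F/∂x_i by sampling the multilinear extension.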



Continuous Greedy - Analysis

[Călinescu-Chekuri-Pál-Vondrák-08]:

    F(x(t)) ≥ (1 − e^{−t}) · F(1_OPT)

When to stop the algorithm? At t = 1:
    x(1) is feasible (a convex combination of feasible vectors), and
    F(x(1)) ≥ (1 − 1/e) · F(1_OPT).


Continuous Greedy - Analysis (Cont.)

Proof:

    ∂F(x(t))/∂t = ∑_{i=1}^n ∂F(x(t))/∂x_i · ∂x_i(t)/∂t
                = ∑_{i=1}^n ∂F(x(t))/∂x_i · y*_i(t)
                ≥ ∑_{i∈OPT} ∂F(x(t))/∂x_i                        (1_OPT ∈ P_M, and y*(t) maximizes the linearization)
                = ∑_{i∈OPT} [F(x(t) ∨ 1_{i}) − F(x(t))] / (1 − x_i(t))
                ≥ ∑_{i∈OPT} [F(x(t) ∨ 1_{i}) − F(x(t))]
                ≥ F(x(t) ∨ 1_OPT) − F(x(t))                      (submodularity)
                ≥ F(1_OPT) − F(x(t))                             (monotonicity)

We obtain a differential equation:

    ∂F(x(t))/∂t ≥ F(1_OPT) − F(x(t)),   F(x(0)) ≥ 0

The solution is:

    F(x(t)) ≥ (1 − e^{−t}) · F(1_OPT)
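The step from the differential inequality to the closed-form bound is the standard integrating-factor argument; spelled out in the notation above:

```latex
\text{Let } g(t) = F(x(t)) \text{ and } V = F(1_{\mathrm{OPT}}), \text{ so } g'(t) \ge V - g(t),\quad g(0) \ge 0.

\frac{d}{dt}\Bigl[e^{t}\bigl(g(t)-V\bigr)\Bigr] \;=\; e^{t}\bigl(g'(t)+g(t)-V\bigr) \;\ge\; 0,

\text{hence}\quad e^{t}\bigl(g(t)-V\bigr) \;\ge\; g(0)-V \;\ge\; -V
\quad\Longrightarrow\quad g(t) \;\ge\; \bigl(1-e^{-t}\bigr)\,V .
```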

slide-58
SLIDE 58

Continuous Greedy - Tight?

[Nemhauser-Wolsey-78] Maximizing a monotone submodular f over a matroid is

  • 1 − 1

e

  • hard.

Are we done?

1

Submodular Welfare: 1 − 1

e [C˘ alinescu-Chekuri-P´ al-Vondr´ ak-08] k 2k−1 [Dobzinski-Schapira-06]

  • 1 −
  • 1 − 1

k

k

  • hard
  • [Khot-Lipton-Markakis-Mehta-05]

[Mirrokni-Schapira-Vondr´ ak-07]

Is the case of two players special?

2

Greedy and Continuous Greedy fail for non-monotone f.

Seffi Naor Submodular Maximization


Measured Continuous Greedy

Continuous Greedy:
    y*(t) = argmax { ∑_{i=1}^n ∂F(x(t))/∂x_i · y_i : y ∈ P_M }
    ∂x_i(t)/∂t = y*_i(t)

Measured Continuous Greedy:
    y*(t) = argmax { ∑_{i=1}^n ∂F(x(t))/∂x_i · (1 − x_i(t)) · y_i : y ∈ P_M }
    ∂x_i(t)/∂t = (1 − x_i(t)) · y*_i(t)

Intuition:

    ∂F(x(t))/∂x_i = [F(x(t) ∨ 1_{i}) − F(x(t))] / (1 − x_i(t))

Continuous greedy ignores the current position x_i(t); the measured variant scales by 1 − x_i(t), so it steps by marginal values rather than raw partial derivatives.
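Discretized, the only change is the (1 − x_i) damping of both the weights and the step; a sketch, with the gradient and linear-maximization oracles assumed to be supplied by the caller:

```python
import math

def measured_continuous_greedy(n, grad, lp_max, T=1.0, steps=2000):
    """Discretized measured continuous greedy: the step in coordinate i is
    damped by (1 - x_i); in the continuous limit x_i(t) <= 1 - e^{-t}."""
    x = [0.0] * n
    dt = T / steps
    for _ in range(steps):
        w = [g * (1.0 - xi) for g, xi in zip(grad(x), x)]  # weight by remaining room
        y = lp_max(w)
        x = [xi + dt * (1.0 - xi) * yi for xi, yi in zip(x, y)]
    return x

# Sanity check: if y*(t) is always the all-ones vector, x_i(t) tracks 1 - e^{-t}.
x = measured_continuous_greedy(3, grad=lambda x: [1.0] * 3,
                               lp_max=lambda w: [1.0] * len(w))
print(round(x[0], 3), round(1 - math.exp(-1), 3))  # both ~ 0.632
```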

SLIDE 63

Measured Continuous Greedy

[Feldman-N-Schwartz-11] The measured continuous greedy algorithm achieves:

1

monotone f: F ( x(t))

  • 1 − e−t

F(1OPT).

2

non-monotone f: F ( x(t)) te−t · F(1OPT). Non-Monotone f: Stopping at t = 1 ⇒ (1/e)-approximation. All known rounding procedures work for non-monotone f as well. (matroid and O(1) knapsack) Greedy methods fail in the discrete setting.

Seffi Naor Submodular Maximization

slide-74
SLIDE 74

Measured Continuous Greedy - Non-Monotone f

Proof: ∂F ( x(t)) ∂t =

n

i=1

∂F ( x(t)) ∂xi · ∂xi(t) ∂t =

n

i=1

∂F ( x(t)) ∂xi · (1 − xi(t)) · y∗

i (t)

i∈OPT

∂F ( x(t)) ∂xi · (1 − xi(t)) = ∑

i∈OPT

F

  • x(t) ∨ 1{i}
  • − F (

x(t)) 1 − xi(t) · (1 − xi(t)) ∑

i∈OPT

  • F
  • x(t) ∨ 1{i}
  • − F (

x(t))

  • F (

x(t) ∨ 1OPT) − F ( x(t)) monotone: F (1OPT) − F ( x(t)) yielding same factor as continuous greedy non-monotone: how to lower bound F ( x(t) ∨ 1OPT)?

Seffi Naor Submodular Maximization


Measured Continuous Greedy - Non-Monotone f (Cont.)

1. x_i(t) cannot be too large:

       ∂x_i(t)/∂t ≤ 1 − x_i(t),   x_i(0) = 0
       ⇓
       x_i(t) ≤ 1 − e^{−t}.

2. ∀S ⊆ N and y ∈ [0, 1]^N s.t. max_{i∈N} {y_i} = y_max:

       F(1_S ∨ y) ≥ (1 − y_max) · F(1_S).

   Intuition: by submodularity (decreasing marginals), when the "0" coordinates are increased to y_max, the loss relative to F(1_S) is at most a y_max-fraction.

Combining the two:

       F(x(t) ∨ 1_OPT) ≥ e^{−t} · F(1_OPT).
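Property 2 holds for any submodular f; it can be sanity-checked by brute force on a small non-monotone example such as a graph cut function (our toy instance):

```python
from itertools import product

def brute_F(f, n, x):
    """Multilinear extension by brute force over all 2^n subsets."""
    total = 0.0
    for mask in range(1 << n):
        R = {i for i in range(n) if mask >> i & 1}
        p = 1.0
        for i in range(n):
            p *= x[i] if i in R else 1.0 - x[i]
        total += f(R) * p
    return total

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)  # submodular, non-monotone

n, S = 4, {0, 1}
for ys in product([0.0, 0.3, 0.6], repeat=n):
    y_max = max(ys)
    x = [max(1.0 if i in S else 0.0, ys[i]) for i in range(n)]  # 1_S v y
    assert brute_F(cut, n, x) >= (1 - y_max) * cut(S) - 1e-9
print("F(1_S v y) >= (1 - y_max) * f(S) on all grid points")
```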


Measured Continuous Greedy - Non-Monotone f (Cont.)

    ∂F(x(t))/∂t ≥ F(x(t) ∨ 1_OPT) − F(x(t)) ≥ e^{−t} · F(1_OPT) − F(x(t))

Solving the differential equation with F(x(0)) ≥ 0 gives:

    F(x(t)) ≥ t·e^{−t} · F(1_OPT).

Non-Monotone f Guarantee:

    F(x(1)) ≥ (1/e) · F(1_OPT)


Measured Continuous Greedy - Monotone f

Monotone f Guarantee:

    F(x(t)) ≥ (1 − e^{−t}) · F(1_OPT)

Note: x(t) gains the same value but advances less: x_i(t) ≤ 1 − e^{−t} ⇒ one might possibly stop at times t > 1 and still be feasible!


Measured Continuous Greedy - Monotone f

Submodular MAX-SAT: given a CNF formula and a monotone submodular function f defined over the clauses. Goal: find an assignment φ maximizing f (over the clauses satisfied by φ).

Optimizing over a Partition Matroid:
    Each variable x is replaced by a group {(x, 0), (x, 1)}.
    Only one element can be chosen from a group (this guarantees feasibility).
    C_{x,v} - the clauses satisfied by setting x ← v.
    For a set S of (variable, value) pairs: g(S) = f(∪_{(x,v)∈S} C_{x,v}).
    g is submodular (by the submodularity of f).

Hence submodular MAX-SAT can be represented as a monotone submodular maximization problem over a matroid.
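The reduction translates directly to code; a sketch on a hypothetical two-variable formula (the clause indexing and the choice f = number of covered clauses are ours):

```python
def make_g(f, satisfied_by):
    """g(S) = f( union of C_{x,v} over (x, v) in S ), where
    satisfied_by[(x, v)] is the set of clauses satisfied by setting x <- v."""
    def g(S):
        covered = set()
        for pair in S:
            covered |= satisfied_by[pair]
        return f(covered)
    return g

# Formula (a or b) and (not a or b): clauses 0 and 1.
satisfied_by = {('a', 1): {0}, ('a', 0): {1}, ('b', 1): {0, 1}, ('b', 0): set()}
g = make_g(len, satisfied_by)  # f = number of satisfied clauses (monotone submodular)

print(g({('b', 1)}))             # 2: b = 1 already satisfies both clauses
print(g({('a', 1), ('b', 0)}))   # 1: only clause 0
```

The partition-matroid constraint, which this sketch does not enforce, is what forbids picking both (x, 0) and (x, 1) from the same group.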


Measured Continuous Greedy - Monotone f

Question: for which T > 1 can we run the measured continuous greedy algorithm and stay feasible?

For a variable x:
    T_0 - total time during which (x, 0) is increased
    T_1 - total time during which (x, 1) is increased
Clearly, T_0 + T_1 ≤ T.

By the properties of measured greedy: (x, 0) ≤ 1 − e^{−T_0} and (x, 1) ≤ 1 − e^{−T_1}, so

    (x, 0) + (x, 1) ≤ (1 − e^{−T_0}) + (1 − e^{−T_1}) ≤ 2 − e^{−T_0} − e^{T_0−T}   (using T_0 + T_1 ≤ T)

When is 2 − e^{−T_0} − e^{T_0−T} maximized? At T_0 = T/2.

The matroid constraint must be satisfied:

    2 − e^{−T_0} − e^{T_0−T} ≤ 2(1 − e^{−T/2}) ≤ 1

yielding T ≤ 2 ln 2.

The approximation factor is 1 − e^{−T}: 3/4 for T = 2 ln 2.
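The two numbers at T = 2 ln 2 can be checked numerically:

```python
import math

T = 2 * math.log(2)
print(round(2 * (1 - math.exp(-T / 2)), 9))  # 1.0: the group constraint (x,0) + (x,1) <= 1 is tight
print(round(1 - math.exp(-T), 9))            # 0.75: the approximation factor
```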

Measured Continuous Greedy - Monotone f (Cont.)

More generally, for a packing polytope

    P = { x : ∑_{i∈N} a_{i,j} x_i ≤ b_j for 1 ≤ j ≤ m,  0 ≤ x_i ≤ 1 ∀i ∈ N }

define the density d(P) = min_{1≤j≤m} b_j / ∑_{i∈N} a_{i,j}.

[Feldman-N-Schwartz-11]: x(t) ∈ P if

    t ≤ ln(1 / (1 − d(P))) / d(P)   (∗)

Note: (∗) ≥ 1 since d(P) > 0.
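A quick numeric check that the stopping time (∗) is indeed at least 1 for any density d(P) ∈ (0, 1):

```python
import math

for d in [0.05, 0.25, 0.5, 0.9]:
    t_max = math.log(1 / (1 - d)) / d   # (*): allowed running time at density d
    print(round(d, 2), round(t_max, 3))
    assert t_max >= 1                   # since ln(1/(1-d)) = d + d^2/2 + ... >= d
```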

Measured Continuous Greedy - Results

    Problem                           Result              Previous                   Hardness
    Submodular Welfare (k players)    1 − (1 − 1/k)^k     max{1 − 1/e, k/(2k−1)}     1 − (1 − 1/k)^k
    Submodular MAX-SAT                3/4                 2/3                        3/4
    Non-monotone f, matroid           1/e                 ≈ 0.325                    ≈ 0.478
    Non-monotone f, O(1) knapsacks    1/e                 ≈ 0.325                    ≈ 0.491