SLIDE 1

Multicriteria Optimization Some continuous and discrete dynamics

Guillaume Garrigos

Institut de Mathématiques et de Modélisation de Montpellier / Universidad Tecnica Federico Santa Maria

Sestri-Levante: Franco/Italian workshop 8-12 September 2014

SLIDE 2

Context

H is a Hilbert space, the fi : H → R are Lipschitz continuous on bounded sets, K ⊂ H is a closed convex nonempty set of constraints, and one of the objective functions is bounded from below.

SLIDE 3

Context

H is a Hilbert space, the fi : H → R are Lipschitz continuous on bounded sets, K ⊂ H is a closed convex nonempty set of constraints, and one of the objective functions is bounded from below.

One approach, the scalarization method: choose 0 ≤ θi ≤ 1 with ∑_{i=1}^q θi = 1, and minimize ∑_{i=1}^q θi fi.

SLIDE 4

Context

H is a Hilbert space, the fi : H → R are Lipschitz continuous on bounded sets, K ⊂ H is a closed convex nonempty set of constraints, and one of the objective functions is bounded from below.

One approach, the scalarization method: choose 0 ≤ θi ≤ 1 with ∑_{i=1}^q θi = 1, and minimize ∑_{i=1}^q θi fi.

We are looking for the simultaneous minimization of the fi's.
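As an illustration of the difference (a worked example added here, using the two objectives that reappear in example 1 later in the deck), scalarizing two quadratic objectives picks out one compromise point per choice of weights:

```latex
f_1(x) = \|x-a\|^2,\quad f_2(x) = \|x-b\|^2:\qquad
\nabla\big(\theta_1 f_1 + \theta_2 f_2\big)(x) = 2\theta_1(x-a) + 2\theta_2(x-b) = 0
\;\Longrightarrow\; x^\star = \theta_1 a + \theta_2 b .
```

So the scalarized minimizer sweeps the segment [a, b] as (θ1, θ2) varies; the multiobjective approach below instead aims at decreasing all the fi simultaneously, without fixing the θi in advance.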

SLIDE 5

Contents

1. Multicriteria analysis

2. Continuous steepest descent dynamic

SLIDE 6

Contents

1. Multicriteria analysis

2. Continuous steepest descent dynamic

SLIDE 7

Nonsmooth analysis tools

Directional derivative (of Clarke): df(x; d) := limsup_{t↓0, x′→x} [f(x′ + td) − f(x′)] / t.

Subdifferential (of Clarke): ∂f(x) := {p ∈ H | ⟨p, d⟩ ≤ df(x; d) ∀d ∈ H}.

Remark: If f is of class C¹, then ∂f(x) = {∇f(x)} and df(x; d) = ⟨∇f(x), d⟩.
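For instance, for the absolute value on H = R (a standard example added here for illustration), these objects can be computed directly:

```latex
f(x) = |x|:\qquad
df(0; d) = \limsup_{t \downarrow 0,\ x' \to 0} \frac{|x' + t d| - |x'|}{t} = |d|,
\qquad
\partial f(0) = \{\, p \in \mathbb{R} \mid p\,d \le |d| \ \forall d \,\} = [-1, 1].
```

At any x ≠ 0, f is C¹ near x, so the remark applies and ∂f(x) = {sign(x)}.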

SLIDE 8

Nonsmooth analysis tools

Tangent and normal cones: TK(x) := cl {d ∈ H | ∃ε > 0, ∀t ∈ ]0, ε[, x + td ∈ K}. NK(x) := {p ∈ H | ⟨p, d⟩ ≤ 0 ∀d ∈ TK(x)}.

[Figure: a set K with the tangent cone TK(x) and the normal cone NK(x) at a boundary point x.]
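A concrete case (an illustration added here, not drawn from the slide): for the nonnegative orthant K = R²₊ and a boundary point x = (0, α) with α > 0,

```latex
T_K(x) = \mathbb{R}_+ \times \mathbb{R},
\qquad
N_K(x) = \{\, p \in \mathbb{R}^2 \mid \langle p, d \rangle \le 0 \ \forall d \in T_K(x) \,\}
        = \mathbb{R}_- \times \{0\},
```

i.e. the tangent cone only constrains the direction along the active constraint x₁ ≥ 0, and the normal cone points outward along that constraint.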

SLIDE 9

Multicriteria analysis

Descent direction We say that d ∈ H is a descent direction at x if dfi(x; d) < 0 holds for all i = 1..q. We say that it is an admissible descent direction if moreover d ∈ TK(x).

SLIDE 10

Example

[Figure: a point x and the gradient ∇f1(x).]

SLIDE 11

Example

[Figure: a point x and the gradients ∇f1(x) and ∇f2(x).]

SLIDE 12

Multicriteria analysis

Armijo direction: We say that a descent direction d ∈ H is an Armijo direction if ∃ε > 0 s.t. for all t ∈ ]0, ε[: ∀i, fi(x + td) < fi(x) + (t/2) dfi(x; d). We say that it is an admissible Armijo direction if moreover x + td ∈ K.

SLIDE 13

Multicriteria analysis

Pareto equilibrium(s)

SLIDE 14

Multicriteria analysis

Pareto equilibrium(s) We say that x ∈ K is a Pareto if there is no y ∈ K such that ∀i fi(y) ≤ fi(x) and ∃I fI(y) < fI(x).

SLIDE 15

Multicriteria analysis

Pareto equilibrium(s) We say that x ∈ K is a Pareto if there is no y ∈ K such that ∀i fi(y) ≤ fi(x) and ∃I fI(y) < fI(x). We say that x ∈ K is a weak Pareto if there is no y ∈ K s.t. ∀i fi(y) < fi(x).

SLIDE 16

Example

[Figure: a point x and the gradients ∇f1(x) and ∇f2(x).]

SLIDE 17

Multicriteria analysis

Pareto equilibrium(s) We say that x ∈ K is a Pareto if there is no y ∈ K such that ∀i fi(y) ≤ fi(x) and ∃I fI(y) < fI(x). We say that x ∈ K is a weak Pareto if there is no y ∈ K s.t. ∀i fi(y) < fi(x). We say that x ∈ K is a critical Pareto if 0 ∈ NK(x) + Conv{∂fi(x)}.

SLIDE 18

Example

[Figure: a point x and the convex hull Conv{∇fi(x)} of the gradients.]

SLIDE 19

Multicriteria analysis

Pareto equilibrium(s) We say that x ∈ K is a Pareto if there is no y ∈ K such that ∀i fi(y) ≤ fi(x) and ∃I fI(y) < fI(x). We say that x ∈ K is a weak Pareto if there is no y ∈ K s.t. ∀i fi(y) < fi(x). We say that x ∈ K is a critical Pareto if 0 ∈ NK(x) + Conv{∂fi(x)}.

SLIDE 20

Multicriteria analysis

Pareto equilibrium(s)

We say that x ∈ K is a Pareto if there is no y ∈ K such that ∀i, fi(y) ≤ fi(x) and ∃I, fI(y) < fI(x).
We say that x ∈ K is a weak Pareto if there is no y ∈ K s.t. ∀i, fi(y) < fi(x).
We say that x ∈ K is a critical Pareto if 0 ∈ NK(x) + Conv{∂fi(x)}.

Properties: Pareto ⇒ weak Pareto ⇒ critical Pareto. If the fi are convex, then weak Pareto ⇔ critical Pareto. If the fi are strictly convex, then the three notions all coincide.
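A small example separating these notions (added for illustration; it is consistent with the properties above): on H = R with K = [−1, 1], take a constant objective together with a strictly convex one.

```latex
f_1 \equiv 0,\quad f_2(x) = x^2 \ \text{ on } K = [-1,1]:\qquad
\text{every } x \in K \text{ is weak Pareto (no $y$ makes $f_1$ strictly smaller),}
\ \text{but only } x = 0 \text{ is Pareto.}
```

Since both functions are convex, every point of K is also Pareto critical; the gap between Pareto and weak Pareto is possible here because f1 is not strictly convex.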

SLIDE 21

Link between descent direction and Pareto equilibrium

Proposition. The following statements are equivalent:
(i) x is a critical Pareto point;
(ii) there is no admissible descent direction at x;
(iii) there is no admissible Armijo direction at x.

SLIDE 22

Objective

We will consider:

1. a continuous dynamic u̇(t) = s(u(t)), where s : K → H satisfies: s(u) = 0 if u is a critical Pareto point, and s(u) is an admissible descent direction otherwise;

2. an algorithm u_{n+1} = u_n + t_n d_n, where d_n is an admissible Armijo direction.
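A minimal sketch of one step of the second scheme, assuming smooth unconstrained objectives (K = H = Rⁿ) and an already chosen descent direction d; the function name, the initial step t0 and the shrinking factor are illustrative choices, not fixed by the slides.

```python
import numpy as np

def armijo_step(fs, grads, x, d, t0=1.0, shrink=0.5, max_iter=50):
    """One step u_{n+1} = u_n + t_n d_n of the discrete scheme.

    fs    : list of objectives fi : R^n -> R
    grads : list of their gradients
    x     : current iterate u_n
    d     : a descent direction at x, i.e. <grad fi(x), d> < 0 for all i

    Backtracks on t until the multiobjective Armijo condition
        fi(x + t d) < fi(x) + (t/2) * <grad fi(x), d>   for all i
    (slide 12) holds, then returns the new iterate.
    """
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    slopes = [float(np.dot(g(x), d)) for g in grads]   # dfi(x; d) for smooth fi
    t = t0
    for _ in range(max_iter):
        y = x + t * d
        if all(f(y) < f(x) + 0.5 * t * s for f, s in zip(fs, slopes)):
            return y                                   # admissible Armijo step found
        t *= shrink                                    # shrink the step and retry
    return x                                           # give up: d was not an Armijo direction
```

With d chosen as the steepest descent direction s(x) of the next section, this is the kind of scheme studied in the Fliege-Svaiter line of work cited in the bibliography.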

SLIDE 23

Contents

1. Multicriteria analysis

2. Continuous steepest descent dynamic

SLIDE 24

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰ (here C⁰ denotes the element of minimal norm of C).

SLIDE 25

Example

[Figure: a point x, the vector −s(x), and the convex hull Conv{∇fi(x)}.]

SLIDE 26

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰.

SLIDE 27

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰. Obviously, x is Pareto critical iff s(x) = 0.

SLIDE 28

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰. Obviously, x is Pareto critical iff s(x) = 0. In a sense, s(x) itself selects a different convex combination of the objective functions at each x.

SLIDE 29

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰. Obviously, x is Pareto critical iff s(x) = 0. In a sense, s(x) itself selects a different convex combination of the objective functions at each x. Example: If q = 1, then s(x) = proj_{TK(x)}(−∇f(x)).

SLIDE 30

The multiobjective steepest descent direction

Definition: Given x ∈ K, the multiobjective steepest descent direction is s(x) := −(NK(x) + Conv{∂fi(x)})⁰. Obviously, x is Pareto critical iff s(x) = 0. In a sense, s(x) itself selects a different convex combination of the objective functions at each x. Example: If q = 1, then s(x) = proj_{TK(x)}(−∇f(x)). Property: s(x) is an admissible descent direction at x whenever s(x) ≠ 0.
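For intuition, here is a sketch of s(x) in the simplest setting: K = H = Rⁿ and two C¹ objectives, so NK(x) = {0} and s(x) is minus the minimal-norm point of the segment joining ∇f1(x) and ∇f2(x). The function name and the restriction to q = 2 are choices made for this illustration only.

```python
import numpy as np

def steepest_descent_direction(g1, g2):
    """s(x) for two smooth objectives, unconstrained case (K = H).

    g1, g2 : gradients of f1, f2 at the current point x.
    Returns minus the minimal-norm element of the segment [g1, g2],
    i.e. of Conv{grad f1(x), grad f2(x)}.
    """
    g1, g2 = np.asarray(g1, dtype=float), np.asarray(g2, dtype=float)
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:
        theta = 0.0                      # identical gradients: the segment is a point
    else:
        # minimize ||(1 - theta) g1 + theta g2||^2 over theta in [0, 1]
        theta = float(np.clip(np.dot(g1, diff) / denom, 0.0, 1.0))
    g = (1.0 - theta) * g1 + theta * g2  # minimal-norm point of the segment
    return -g                            # s(x); it vanishes iff x is Pareto critical

# e.g. for f1 = ||x - a||^2, f2 = ||x - b||^2: g1 = 2*(x - a), g2 = 2*(x - b)
```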

SLIDE 31

Example

[Figure: a point x, the vector −s(x), and the convex hull Conv{∇fi(x)}.]

SLIDE 32

The multiobjective steepest descent direction

Why is s(x) called the steepest descent direction?

SLIDE 33

The multiobjective steepest descent direction

Why is s(x) called the steepest descent direction? Recall that (one objective function, no constraint):

−∇f(x)/‖∇f(x)‖ = argmin_{‖d‖≤1} df(x; d).

SLIDE 34

The multiobjective steepest descent direction

Why is s(x) called the steepest descent direction? Recall that (one objective function, no constraint):

−∇f(x)/‖∇f(x)‖ = argmin_{‖d‖≤1} df(x; d).

The multiobjective steepest descent direction generalizes this fact:

Theorem (Attouch, Garrigos, Goudou, 2014): s(x)/‖s(x)‖ = argmin_{‖d‖≤1, d∈TK(x)} max_i dfi(x; d).
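For q = 1 and K = H this indeed recovers the classical statement above: when ∇f(x) ≠ 0, the Cauchy-Schwarz inequality gives

```latex
\min_{\|d\| \le 1} df(x; d)
 = \min_{\|d\| \le 1} \langle \nabla f(x), d \rangle
 = -\|\nabla f(x)\|,
\qquad \text{attained only at } d = -\frac{\nabla f(x)}{\|\nabla f(x)\|} = \frac{s(x)}{\|s(x)\|}.
```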

SLIDE 35

A continuous dynamic

The Multi-Objective Gradient dynamic:

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

SLIDE 36

A continuous dynamic : example 1

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

f1(x) = ‖x − a‖² and f2(x) = ‖x − b‖²

[Figure: the points a and b.]
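A possible numerical companion to this example (an explicit Euler discretization of (MOG) written for illustration; the step size, horizon and tolerance are arbitrary choices): for this pair of strongly convex objectives the Pareto set is the whole segment [a, b], so the discretized trajectory should stop near some point of that segment, with the particular limit depending on the starting point u0.

```python
import numpy as np

def mog_euler(a, b, u0, step=0.05, n_steps=2000, tol=1e-8):
    """Explicit Euler discretization u <- u + step * s(u) of (MOG), with
    K = H = R^n, f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2."""
    a, b, u = (np.asarray(v, dtype=float) for v in (a, b, u0))
    for _ in range(n_steps):
        g1, g2 = 2.0 * (u - a), 2.0 * (u - b)          # gradients of f1, f2 at u
        diff = g1 - g2
        denom = float(np.dot(diff, diff))
        theta = 0.0 if denom == 0.0 else float(np.clip(np.dot(g1, diff) / denom, 0.0, 1.0))
        s = -((1.0 - theta) * g1 + theta * g2)         # minus the min-norm point of [g1, g2]
        if np.linalg.norm(s) < tol:                    # (numerically) Pareto critical: stop
            break
        u = u + step * s
    return u

# With a = (0, 0), b = (1, 0) and u0 = (0.3, 1.0), the returned point should lie
# close to the segment [a, b], the Pareto set of this pair of objectives.
print(mog_euler([0.0, 0.0], [1.0, 0.0], [0.3, 1.0]))
```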

SLIDE 37

A continuous dynamic : example 2

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

f1(x) = (1/2)‖x‖² and f2(x) = ⟨a, x⟩

[Figure: a trajectory of the dynamic, with the point (1, 0) marked.]

SLIDE 38

A continuous dynamic : Existence and uniqueness

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that H is finite-dimensional, and that the functions are convex and bounded from below. Then for any u0 ∈ K, there exists a strong solution u : [0, +∞[ → K of (MOG) such that u(0) = u0.

Strong solution essentially means an absolutely continuous trajectory u satisfying (MOG) for a.e. t > 0.

SLIDE 39

A continuous dynamic : example 2

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

f1(x) = (1/2)‖x‖² and f2(x) = ⟨a, x⟩

[Figure: a trajectory of the dynamic, with the point (1, 0) marked.]

SLIDE 40

A continuous dynamic : Existence and uniqueness

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that H is finite-dimensional, and that the functions are convex and bounded from below. Then for any u0 ∈ K, there exists a strong solution u : [0, +∞[ → K of (MOG) such that u(0) = u0.

The proof cannot rely on Cauchy-Lipschitz because of the lack of Lipschitz regularity.
→ Use the Moreau-Yosida regularization of the fi's and of the indicator function.
→ Use Peano's existence theorem on the regularized system: it requires only continuity but does not guarantee uniqueness.
→ Pass to the limit. Hard.

SLIDE 41

About uniqueness

The problem of uniqueness is still open. Can we find hypotheses ensuring Lipschitz continuity of s(u)?

Local Lipschitz property: Suppose K = H and that the functions are of class C^{1,1}. The vector field s is Lipschitz continuous at u if:
- q = 2 and ∇f1(u) ≠ ∇f2(u), or
- the vectors ∇fi(u) are linearly independent.

SLIDE 42

A continuous dynamic : example 2

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

f1(x) = (1/2)‖x‖² and f2(x) = ⟨a, x⟩

[Figure: a trajectory of the dynamic, with the point (1, 0) marked.]

SLIDE 43

A continuous dynamic : Qualitative behaviour

SLIDE 44

A continuous dynamic : Qualitative behaviour

Theorem (Attouch, Garrigos, Goudou, 2014) Suppose that the objective functions are lower regular (convex, or continuously differentiable ...). Then for all i = 1..q, the function t → fi(u(t)) is decreasing.
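In the smooth unconstrained case, the decrease can be seen in one line (a sketch of the easy case only, not of the theorem itself): along a solution of (MOG), by the chain rule and since s(u) is a descent direction whenever it is nonzero,

```latex
\frac{d}{dt} f_i(u(t))
 = \big\langle \nabla f_i(u(t)),\, \dot u(t) \big\rangle
 = df_i\big(u(t);\, s(u(t))\big) \;\le\; 0
\qquad \text{for every } i = 1, \dots, q .
```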

SLIDE 45

A continuous dynamic : Qualitative behaviour

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are lower regular (convex, or continuously differentiable ...). Then for all i = 1..q, the function t → fi(u(t)) is decreasing.

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are quasi-convex. Then any bounded trajectory is weakly convergent. The limit point is a weak Pareto if the functions are convex. The limit point is a critical Pareto if the functions are C¹ or convex, and under a compactness assumption on u.

SLIDE 46

A continuous dynamic : Qualitative behaviour

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are lower regular (convex, or continuously differentiable ...). Then for all i = 1..q, the function t → fi(u(t)) is decreasing.

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are quasi-convex. Then any bounded trajectory is weakly convergent. The limit point is a weak Pareto if the functions are convex. The limit point is a critical Pareto if the functions are C¹ or convex, and under a compactness assumption on u.

We recover classical results by taking q = 1.

SLIDE 47

A continuous dynamic : Qualitative behaviour

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are lower regular (convex, or continuously differentiable ...). Then for all i = 1..q, the function t → fi(u(t)) is decreasing.

Theorem (Attouch, Garrigos, Goudou, 2014): Suppose that the objective functions are quasi-convex. Then any bounded trajectory is weakly convergent. The limit point is a weak Pareto if the functions are convex. The limit point is a critical Pareto if the functions are C¹ or convex, and under a compactness assumption on u.

We recover classical results by taking q = 1. Can we have strong convergence under stronger assumptions?

SLIDE 48

Continuous case : What (MOG) is not

A descent method associated to some scalarization ∑_{i=1}^q θi fi. In (MOG), the θi are chosen and modified automatically over time, and ALL the functions decrease.

SLIDE 49

Continuous case : What (MOG) is not

A descent method associated to some scalarization ∑_{i=1}^q θi fi. In (MOG), the θi are chosen and modified automatically over time, and ALL the functions decrease.

A descent method associated to max_i fi.

SLIDE 50

A continuous dynamic : example 1

(MOG)  u̇(t) = s(u(t)),  i.e.  u̇(t) + (NK(u(t)) + Conv{∂fi(u(t))})⁰ = 0

f1(x) = ‖x − a‖² and f2(x) = ‖x − b‖²

[Figure: the points a and b.]

SLIDE 51

Bibliography

The Multi-Objective Gradient dynamic

A Continuous Gradient-like Dynamical Approach to Pareto-Optimization in Hilbert Spaces. Attouch, Goudou, 2014.
A Dynamic Gradient Approach to Pareto Optimization with Nonsmooth (...). Attouch, Garrigos, Goudou, Submitted.

Multi-Objective Gradient algorithm

Steepest descent methods for multicriteria optimization. Fliege, Svaiter, 2000.
A steepest descent method for vector optimization. Drummond, Svaiter, 2005.

Newton’s method

Newton's Method for Multiobjective Optimization. Fliege, Drummond, Svaiter, 2009.
A quadratically convergent Newton method for vector optimization. Drummond, Raupp, Svaiter, 2014.
Quasi-Newton's method for multiobjective optimization. Povalej, 2014.

Proximal method

Proximal Methods in Vector Optimization. Bonnel, Iusem, Svaiter, 2005.

SLIDE 52

Bibliography

The Multi-Objective Gradient dynamic

A Continuous Gradient-like Dynamical Approach to Pareto-Optimization in Hilbert Spaces. Attouch, Goudou, 2014.
A Dynamic Gradient Approach to Pareto Optimization with Nonsmooth (...). Attouch, Garrigos, Goudou, Submitted.

Multi-Objective Gradient algorithm

Steepest descent methods for multicriteria optimization. Fliege, Svaiter, 2000.
A steepest descent method for vector optimization. Drummond, Svaiter, 2005.

Newton’s method

Newton's Method for Multiobjective Optimization. Fliege, Drummond, Svaiter, 2009.
A quadratically convergent Newton method for vector optimization. Drummond, Raupp, Svaiter, 2014.
Quasi-Newton's method for multiobjective optimization. Povalej, 2014.

Proximal method

Proximal Methods in Vector Optimization. Bonnel, Iusem, Svaiter, 2005.

Thank you for your attention!