SLIDE 1

ENE 2XX: Renewable Energy Systems and Control LEC 07 : Convex Relaxations for Large-Scale MIQPs

Professor Scott Moura University of California, Berkeley

Summer 2018

  • Prof. Moura | Tsinghua-Berkeley Shenzhen Institute

ENE 2XX | LEC 07 - Large-Scale MIQPs Slide 1

SLIDE 2

Collaborators

Bertrand Travacca, eCAL Ph.D. Student @ UC Berkeley

SLIDES 3-5

Problem Statement

Consider the general mixed integer quadratically constrained quadratic program (MIQCQP):

minimize    f(x)                                   (1)
subject to: gj(x) ≤ 0,      j = 1, · · · , m       (2)
            0 ≤ x ≤ 1                              (3)
            xi ∈ {0, 1},    i = 1, · · · , p < n   (4)

x ∈ Rn is the optimization variable; the first p < n variables must be binary. f(·) : Rn → R is quadratic and Lf-smooth; gj(·) : Rn → R are quadratic and Lj-smooth.

Challenge

Solve LARGE-SCALE MIQCQPs, e.g. n = 10^3, 10^4, 10^5, · · · (P vs NP, a Millennium Prize Problem)

SLIDE 6

Outline

1. Existing Convex Relaxation Methods
2. Hopfield Methods
3. Dual Hopfield Methods
4. Simulations: Solving random MIQPs
5. Summary

SLIDES 7-8

Existing Methods

Meta-heuristics

Meta-heuristics for mixed-integer problems:
  • simulated annealing
  • tabu search
  • genetic algorithms
  • particle swarm optimization

Advantages
  + “Black-box”
  + Open-source codes exist

Disadvantages
  – Do not exploit structure
  – Convergence results do not exist, in general
  – Do not scale well, in general

SLIDE 9

Existing Methods

Convex Relaxation #1: Binary Relaxation

If f(x), gj(x) are convex, then relax the binary constraints to xi ∈ [0, 1]:

minimize    f(x)                                   (5)
subject to: gj(x) ≤ 0,      j = 1, · · · , m       (6)
            0 ≤ x ≤ 1                              (7)
            xi ∈ [0, 1],    i = 1, · · · , p < n   (8)

then use interior-point methods.

SLIDES 10-11

Existing Methods

Convex Relaxation #1: Binary Relaxation

Stochastic approach to recover the integer constraint: let xr be the solution to the binary relaxation. A feasible x can be drawn randomly from {0, 1} following the Bernoulli distribution B(xr). This can be sub-optimal.

Example

minimize_{x∈{0,1}} (x − 1/4)^2 = 1/16    (x⋆ = 0 is the optimal solution)

If we apply binary relaxation, we get xr = 1/4 and E_{x∼B(xr)}[(x − 1/4)^2] = 3/16 > 1/16 !

Other ideas: Branch & Bound, Branch & Cut
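The slide's example can be checked numerically. Below is a small Monte Carlo sketch of Bernoulli rounding; the function name and trial count are illustrative, not from the lecture.

```python
import random

def bernoulli_rounding_cost(xr, trials=100_000, seed=1):
    """Monte Carlo estimate of E[(x - 1/4)^2] when x ~ Bernoulli(xr)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = 1.0 if rng.random() < xr else 0.0
        total += (x - 0.25) ** 2
    return total / trials

# Relaxed solution xr = 1/4; the exact expectation is
# (3/4)(0 - 1/4)^2 + (1/4)(1 - 1/4)^2 = 3/16 = 0.1875 > 1/16
est = bernoulli_rounding_cost(0.25)
```

The estimate lands near 3/16, three times worse than the true binary optimum 1/16, illustrating why naive randomized rounding can be sub-optimal.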

SLIDES 12-14

Existing Methods

Convex Relaxation #2: Lagrangian Relaxation

Notice that xi ∈ {0, 1} is equivalent to satisfying xi(1 − xi) = 0:

minimize    f(x)                                       (9)
subject to: gj(x) ≤ 0,          j = 1, · · · , m       (10)
            0 ≤ x ≤ 1                                  (11)
            xi(1 − xi) = 0,     i = 1, · · · , p < n   (12)

Form the Lagrangian, with multipliers µ for (10), µ−, µ+ for the bounds (11), and λ for (12):

L(x, µ, µ−, µ+, λ) = f(x) + Σ_{j=1..m} µj gj(x) + Σ_{i=1..n} [ µ−,i (−xi) + µ+,i (xi − 1) ] + Σ_{i=1..p} λi xi(1 − xi)   (13)

Define the (concave) dual function of Λ = [µ, µ−, µ+, λ]:

D(Λ) = min_{x∈Rn} L(x, µ, µ−, µ+, λ)   (14)

Weak duality approach: solve the convex program max_Λ D(Λ).

SLIDES 15-17

Existing Methods

Convex Relaxation #3: Semi-definite Relaxation

Introduce a new variable X = xxT. This is called “lifting”. We can re-write the MIQCQP:

minimize    (1/2) Tr(QX) + RTx + S                     (15)
subject to: (1/2) Tr(QjX) + RjTx + Sj ≤ 0,  j = 1, · · · , m   (16)
            0 ≤ x ≤ 1                                  (17)
            Xii = xi,    i = 1, · · · , p < n          (18)
            X = xxT                                    (19)

If Q, Qj are positive semi-definite, then only X = xxT makes this non-convex. Relax it into the convex inequality X ⪰ xxT. Using the Schur complement:

X ⪰ xxT  ⇔  [ X   x ]
             [ xT  1 ]  ⪰ 0   (20)

This can be cast as a semi-definite program (SDP).
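The Schur-complement equivalence (20) can be sanity-checked numerically. The sketch below tests positive semi-definiteness of the lifted matrix with a pivoted-elimination (Cholesky-style) routine; the helper names and tolerances are illustrative assumptions, not from the lecture.

```python
import random

def is_psd(M, tol=1e-9):
    """Cholesky-style elimination test: is the symmetric matrix M PSD?"""
    A = [row[:] for row in M]
    n = len(A)
    for k in range(n):
        if A[k][k] < -tol:
            return False            # negative pivot: not PSD
        if A[k][k] <= tol:
            # zero pivot: for a PSD matrix the rest of the row must vanish
            if any(abs(A[k][j]) > 1e-6 for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):   # eliminate, i.e. form the Schur complement
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j] / A[k][k]
    return True

def lifted(X, x):
    """Build the block matrix [[X, x], [x^T, 1]] from (20)."""
    n = len(x)
    return [X[i] + [x[i]] for i in range(n)] + [x + [1.0]]

rng = random.Random(0)
x = [rng.random() for _ in range(4)]
XX = [[x[i] * x[j] for j in range(4)] for i in range(4)]   # X = x x^T
eps = [[1e-3 if i == j else 0.0 for j in range(4)] for i in range(4)]

assert is_psd(lifted(XX, x))        # X = x x^T: feasible (rank-1 boundary case)
assert is_psd(lifted([[XX[i][j] + eps[i][j] for j in range(4)]
                      for i in range(4)], x))              # X > x x^T: feasible
assert not is_psd(lifted([[XX[i][j] - eps[i][j] for j in range(4)]
                          for i in range(4)], x))          # X < x x^T: infeasible
```

The three cases match (20): the lifted matrix is PSD exactly when X ⪰ xxT.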

SLIDE 18

Outline

1. Existing Convex Relaxation Methods
2. Hopfield Methods
3. Dual Hopfield Methods
4. Simulations: Solving random MIQPs
5. Summary

SLIDE 19

A short history

  • In 1982, J. J. Hopfield used neural nets to model agents that perform collaborative computations
  • In 1985, J. J. Hopfield showed that neural nets can be used to solve optimization problems
  • In the 1990s, Hopfield methods became very popular for solving MIQPs in power systems optimization
  • In the literature, power system engineers admit they did not fully understand why Hopfield methods work well

SLIDES 20-21

The Hopfield Method

Consider the MIQP:

minimize    f(x) = (1/2) xTQx                  (21)
subject to: 0 ≤ xi ≤ 1,     i = 1, · · · , n   (22)
            xi ∈ {0, 1},    i = 1, · · · , p < n   (23)

The Hopfield method follows the dynamical system:

d/dt xH(t) = −Q x(t);   xH(0) = x(0) ∈ (0, 1)^n   (24)
x(t) = σ(xH(t))                                   (25)

where σ(·) : Rn → [0, 1]^n is an “activation function” defined element-wise as:

σ(x) : x → [σ1(x1), · · · , σn(xn)]

SLIDE 22

What is the activation function σ(x)?

  • strictly increasing
  • σi(·) ∈ C1 with Lipschitz constant Lσi

Example

σi(x) = (1/2) tanh(βi (x − 1/2)) + 1/2, parameterized by βi > 0

Think of it as a “soft projection operator” from R to {0, 1}.
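The example activation is a one-liner. A minimal sketch (the default β value is an illustrative choice):

```python
import math

def sigma(x, beta=5.0):
    """Hopfield activation: a soft projection of R onto [0, 1].
    As beta -> infinity it approaches the 0/1 step function."""
    return 0.5 * math.tanh(beta * (x - 0.5)) + 0.5

# sigma is strictly increasing and fixes the midpoint:
assert sigma(0.5) == 0.5
assert sigma(0.2) < sigma(0.3) < sigma(0.7)
# a steeper beta pushes values harder toward {0, 1}:
assert sigma(0.9, beta=50.0) > sigma(0.9, beta=5.0)
```

The β parameter trades off smoothness (small β, nearly linear) against how aggressively the method drives variables toward binary values (large β, nearly a step).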

SLIDES 23-25

Discretize time dynamics

Forward Euler time discretization of the Hopfield dynamics:

xH^{k+1} = xH^k − αk Q x^k;      xH^0 = x^0 ∈ (0, 1)^n   (26)
x^k = σ(xH^k)                                            (27)

For f(x) not quadratic, a generalization of the Hopfield dynamics is:

xH^{k+1} = xH^k − αk ∇f(x^k);    xH^0 = x^0 ∈ (0, 1)^n   (28)
x^k = σ(xH^k)                                            (29)

This resembles projected gradient descent!
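The discretized dynamics (26)-(27) fit in a few lines. Below is a minimal sketch on a tiny all-binary MIQP; the step size, activation steepness, iteration count, and the toy Q are all illustrative assumptions.

```python
import math

def hopfield_miqp(Q, x0, beta=10.0, alpha=0.05, iters=500):
    """Forward-Euler Hopfield iteration (26)-(27) for
    min (1/2) x^T Q x over [0,1]^n with binary variables."""
    n = len(x0)
    sigma = lambda v: 0.5 * math.tanh(beta * (v - 0.5)) + 0.5
    xH = list(x0)                      # internal states
    x = [sigma(v) for v in xH]         # external states
    for _ in range(iters):
        Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
        xH = [xH[i] - alpha * Qx[i] for i in range(n)]   # Euler step (26)
        x = [sigma(v) for v in xH]                       # activation (27)
    return x

# Toy MIQP: the negative diagonal makes x = (1, 1) the binary optimum
Q = [[-2.0, 0.5],
     [0.5, -2.0]]
x = hopfield_miqp(Q, [0.6, 0.6])
# both coordinates are driven to the boundary value 1
```

Because the gradient −Qx stays negative along the trajectory, the internal states grow and the steep activation saturates both variables at 1, the binary minimizer of this toy instance.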

SLIDE 26

Hopfield vs Projected Gradient Descent

Hopfield

xH^{k+1} = xH^k − αk ∇f(x^k)        (30)
x^{k+1} = σ(xH^{k+1})               (31)

Projected Gradient Descent

xH^{k+1} = x^k − αk ∇f(x^k)         (32)
x^{k+1} = Proj_[0,1](xH^{k+1})      (33)

SLIDE 27

Graphical Interpretation of Hopfield Method

Forward Simulation of a Hopfield Neural Net!

  • Undirected weighted graph with n nodes, one for each xi
  • Each node has internal (xH,i ∈ R) and external (xi ∈ R) states
  • Weights [P0]ij are elements of gradients of the obj fcn

SLIDES 28-29

Understanding the heuristic

Main idea of the heuristic to solve the MIQP:
  • Absorbing frontier on the boundary of the hypercube [0, 1]^n
  • Push binary variables to the boundary by stretching the phase portrait with the Ti’s

SLIDE 30

Continuous Improvement to a Fixed Point

Theorem: Continuous Improvement

The Hopfield method yields f(x^{k+1}) ≤ f(x^k), ∀ k for an appropriate step size αk. Specifically, the incremental improvement is bounded by:

0 ≤ f(x^k) − f(x^{k+1}) ≤ 0.5 αk · ∇f(x^k)T Σk ∇f(x^k)

Corollary: Convergence to a fixed point (may not be a minimizer)

There exists an f† such that f(x^k) → f† as k → ∞, and x^k converges to the (non-empty) set

X = { x ∈ [0, 1]^n | xi ∈ {0, 1} ∨ ∇if(x) = 0, i = 1, · · · , p }

SLIDE 31

Convergence Rates

Two convergence rate results, depending on the structure of the obj. fcn. f(x)

Theorem: Sub-linear convergence in general

If f(x) is convex, then f(x^k) − f† = O(1 / k^{1/3}).

Remark: Slower than gradient descent, for which convergence is guaranteed at a rate O(1/k).

Theorem: Linear convergence when f(x) is strongly convex

If f(x) is strongly convex, then convergence is linear.

SLIDE 32

Outline

1. Existing Convex Relaxation Methods
2. Hopfield Methods
3. Dual Hopfield Methods
4. Simulations: Solving random MIQPs
5. Summary

SLIDES 33-34

Dual Hopfield Method

So far, we have considered Hopfield methods to approximately solve:

minimize    f(x)                                   (34)
subject to: 0 ≤ xi ≤ 1,     i = 1, · · · , n       (35)
            xi ∈ {0, 1},    i = 1, · · · , p < n   (36)

We now consider inequality constraints:

minimize    f(x)                                   (37)
subject to: gj(x) ≤ 0,      j = 1, · · · , m       (38)
            0 ≤ xi ≤ 1,     i = 1, · · · , n       (39)
            xi ∈ {0, 1},    i = 1, · · · , p < n   (40)

SLIDES 35-36

Dual Hopfield Method

Apply Lagrangian relaxation

Idea: Instead of considering the “full” Lagrangian relaxation, consider

L(x, µ) = f(x) + Σ_{j=1..m} µj gj(x)   (41)

Then the dual function is

D(µ) = min_x L(x, µ)                               (42)
subject to: 0 ≤ xi ≤ 1,     i = 1, · · · , n       (43)
            xi ∈ {0, 1},    i = 1, · · · , p < n   (44)

which is amenable to the Hopfield method, given µ.

SLIDES 37-38

Dual Ascent via Hopfield

Then solve the Dual Problem:

max_{µ≥0} D(µ)                                                 (45)
D(µ) = min_x L(x, µ) = min_x [ f(x) + Σ_{j=1..m} µj gj(x) ]    (46)

Run the Hopfield method to approximately solve D(µ) = min_x L(x, µ). Suppose x⋆(µ) = arg min_x L(x, µ). The subgradient of D(µ) along dimension j:

gj(x⋆(µ)) ∈ ∂jD(µ)

SLIDE 39

Dual Hopfield Method

The Algorithm

Algorithm 1: Dual (sub)-gradient Ascent via Hopfield Method
  Initialize µ0 ≥ 0; choose β > 0
  for k = 0, 1, · · · , kmax
      (1) use the Hopfield method to approximately compute the dual function
      for ℓ = 0, · · · , ℓmax
          xH^{ℓ+1} = xH^ℓ − αℓ ∇xL(x^ℓ, µ^k)
          x^{ℓ+1} = σ(xH^{ℓ+1})
      until stopping criterion is met
      x^k_hop ← x^ℓ
      (2) update the dual variable µ via (sub)-gradient ascent
      µ^{k+1} = µ^k + βk [ gj(x^k_hop(µ^k)) ]_{j=1,···,m}
  end for
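Algorithm 1 can be sketched compactly: an inner Hopfield loop approximately minimizes the partial Lagrangian for the current µ, then an outer step ascends the dual. In the sketch below, all step sizes and iteration counts are illustrative, and the update projects µ onto µ ≥ 0 (which the pseudocode's maximization over µ ≥ 0 requires but leaves implicit); the toy problem is continuous, with no binary variables active.

```python
import math

def dual_hopfield(grad_f, g_list, grad_g_list, n, beta_step=0.1,
                  alpha=0.05, act_beta=10.0, outer=50, inner=200):
    """Sketch of Algorithm 1: inner Hopfield descent on
    L(x, mu) = f(x) + sum_j mu_j g_j(x), outer dual ascent on mu >= 0."""
    sigma = lambda v: 0.5 * math.tanh(act_beta * (v - 0.5)) + 0.5
    mu = [0.0] * len(g_list)
    x = [0.5] * n
    for _ in range(outer):
        # (1) Hopfield method on the partial Lagrangian, given mu
        xH = [0.5] * n
        x = [sigma(v) for v in xH]
        for _ in range(inner):
            gL = grad_f(x)
            for mj, grad_gj in zip(mu, grad_g_list):
                dg = grad_gj(x)
                gL = [a + mj * b for a, b in zip(gL, dg)]
            xH = [h - alpha * d for h, d in zip(xH, gL)]
            x = [sigma(v) for v in xH]
        # (2) dual step: g_j(x*(mu)) is a subgradient of D along dimension j
        mu = [max(0.0, mj + beta_step * gj(x)) for mj, gj in zip(mu, g_list)]
    return x, mu

# Toy problem: f(x) = ||x||^2 / 2 with one constraint x1 + x2 >= 1,
# i.e. g(x) = 1 - x1 - x2 <= 0; the solution is x = (1/2, 1/2), mu = 1/2
x, mu = dual_hopfield(grad_f=lambda x: list(x),
                      g_list=[lambda x: 1.0 - x[0] - x[1]],
                      grad_g_list=[lambda x: [-1.0, -1.0]],
                      n=2)
```

On this toy instance the iterates settle near x = (0.5, 0.5) with µ ≈ 0.5, where the constraint holds with equality and the dual subgradient vanishes.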

SLIDE 40

Dual Hopfield Method

The Algorithm

Algorithm 2: Accelerated Dual (sub)-gradient Ascent via Hopfield Method
  Initialize µ0 ≥ 0; choose β > 0
  for k = 0, 1, · · · , kmax
      (1) use the Hopfield method to approximately compute the dual function
      for ℓ = 0, · · · , ℓmax
          xH^{ℓ+1} = xH^ℓ − αℓ ∇xL(x^ℓ, µ^k)
          x^{ℓ+1} = σ(xH^{ℓ+1})
      until stopping criterion is met
      x^k_hop ← x^ℓ
      (2) update the dual variable µ via accelerated (sub)-gradient ascent
      µ^{k+1} = µ^k + ((k − 1)/(k + 2)) (µ^k − µ^{k−1}) + βk [ gj(x_hop(µ^k)) ]_{j=1,···,m}
  end for

SLIDE 41

Outline

1. Existing Convex Relaxation Methods
2. Hopfield Methods
3. Dual Hopfield Methods
4. Simulations: Solving random MIQPs
5. Summary

SLIDE 42

Problem Formulation

Consider solving the MIQP w.r.t. x ∈ Rn:

minimize    (1/2) xTQx + RTx                       (47)
subject to: Ax ≤ b                                 (48)
            Aeq x = beq                            (49)
            lb ≤ x ≤ ub                            (50)
            xi ∈ {0, 1},    i = 1, · · · , p       (51)

  • Parameters Q, R, A, b, Aeq, beq, lb, ub generated randomly for each n
  • Number of constraints also randomized
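A random instance of (47)-(51) is easy to generate. The sketch below is illustrative only: the distributions and scalings are assumptions, not the ones used in the lecture's simulations, and the first p variables are simply declared binary by the caller.

```python
import random

def random_miqp(n, m, meq, seed=0):
    """Generate a random convex MIQP instance of the form (47)-(51)."""
    rng = random.Random(seed)
    B = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    # Q = B^T B is symmetric positive semi-definite, so f is convex
    Q = [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    R = [rng.gauss(0.0, 1.0) for _ in range(n)]
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
    b = [float(n) for _ in range(m)]          # loose bound keeps feasibility likely
    Aeq = [[1.0] * n for _ in range(meq)]     # e.g. a budget-style constraint
    beq = [n / 2.0 for _ in range(meq)]
    lb, ub = [0.0] * n, [1.0] * n
    return Q, R, A, b, Aeq, beq, lb, ub

Q, R, A, b, Aeq, beq, lb, ub = random_miqp(n=10, m=3, meq=1)
```

Building Q as B^T B is the standard way to guarantee a positive semi-definite Hessian, so every generated instance has a convex objective.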

SLIDE 43

Comparative Analysis

The contenders

All problems solved in Matlab:
  • CPLEX MIQP: using function cplexmiqp developed by IBM
  • Binary Relaxation via CPLEX QP: using function cplexqp
  • Semi-definite relaxation (SDR): corresponding SDP solved using CVX
  • Hopfield: Dual Ascent Hopfield Method uses dual variables from cplexqp

SLIDE 44

Comparative Analysis

The scoring metrics

For each method, we compute:
  • computer running time [sec]
  • constraint violations (CV):
      binary CV:     (1/p)   Σ_{i=1..p}   d(xi, {0, 1})
      inequality CV: (1/m)   Σ_{j=1..m}   |[Ax − b]j|
      equality CV:   (1/meq) Σ_{k=1..meq} |[Aeq x − beq]k|
  • objective function value
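The three violation scores translate directly into code. A minimal sketch, assuming dense list-of-lists matrices; "meq" stands in for the number of equality constraints, which the slide leaves unspecified.

```python
def constraint_violations(x, A, b, Aeq, beq, p):
    """Compute the three constraint-violation scores from the slide."""
    m, meq = len(A), len(Aeq)
    # binary CV: mean distance of the first p entries to the set {0, 1}
    binary_cv = sum(min(abs(xi), abs(xi - 1.0)) for xi in x[:p]) / p
    # inequality CV: mean |[Ax - b]_j|
    ineq_cv = sum(abs(sum(Aj[i] * x[i] for i in range(len(x))) - bj)
                  for Aj, bj in zip(A, b)) / m
    # equality CV: mean |[Aeq x - beq]_k|
    eq_cv = sum(abs(sum(Ak[i] * x[i] for i in range(len(x))) - bk)
                for Ak, bk in zip(Aeq, beq)) / meq
    return binary_cv, ineq_cv, eq_cv

x = [1.0, 0.0, 0.5]
b_cv, i_cv, e_cv = constraint_violations(
    x, A=[[1.0, 1.0, 1.0]], b=[1.5], Aeq=[[1.0, 0.0, 0.0]], beq=[1.0], p=2)
# the first two entries are exactly binary and both constraints are tight,
# so all three scores are zero here
```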

SLIDE 45

[Results figure: image not captured in this transcript]

SLIDE 46

[Results figure: image not captured in this transcript]

SLIDE 47

Outline

1. Existing Convex Relaxation Methods
2. Hopfield Methods
3. Dual Hopfield Methods
4. Simulations: Solving random MIQPs
5. Summary

SLIDES 48-50

Summary

Today we discussed LARGE-SCALE MIQPs:
  • Some convex relaxations: binary relaxation, Lagrangian relaxation, semi-definite relaxation
  • Hopfield Method, including new convergence results
  • Dual Hopfield Method, to incorporate inequalities
  • Simulations that illustrate excellent scalability for large-scale MIQPs
  • Many interesting extensions are possible, e.g. dual splitting, Newton’s method, ADMM, chance constraints, etc.

QUESTIONS?

  • Prof. Moura | Tsinghua-Berkeley Shenzhen Institute

ENE 2XX | LEC 07 - Large-Scale MIQPs