
DNN-based Branch-and-bound for the Quadratic Assignment Problem - PowerPoint PPT Presentation



  1. DNN-based Branch-and-bound for the Quadratic Assignment Problem. *Koichi Fujii, Naoki Ito, Yuji Shinano. NTT DATA Mathematical Systems Inc., FAST RETAILING CO., LTD., Zuse Institute Berlin. 2019/03/29

  2. Introduction of NTT DATA Mathematical Systems Inc.: the company will be introduced in the next talk, Takahito Tanabe, "Implementation issues of Interior-Point Method for real-world NLP problems".

  3. Summary: DNN-based Branch-and-bound for the Quadratic Assignment Problem
  Motivation
  • Quadratic assignment problems remain among the most difficult combinatorial optimization problems.
  • The recent DNN conic relaxation technique improves the known lower bounds for quadratic assignment problems.
  Goal: improve the branch-and-bound method for quadratic assignment problems.
  Our results: the first implementation of a DNN-based branch-and-bound.

  4. Agenda
  1. DNN Relaxation of Quadratic Assignment Problem
  2. DNN Optimization
  3. DNN-based Branch-and-bound

  5. Quadratic Assignment Problem

  min { x^T (B ⊗ A) x | x ∈ {0,1}^n, (I ⊗ e^T) x = (e^T ⊗ I) x = e, x_i x_j = 0 ((i, j) ∈ Γ) },

  where B ⊗ A denotes the Kronecker product of the matrices A and B. The QAP is known for having weak LP/QP relaxations.
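To make the Kronecker-product form concrete, here is a small numpy sketch (not from the slides; A, B and their flow/distance roles are illustrative assumptions) checking that, for a permutation matrix X with x = vec(X), the quadratic form x^T (B ⊗ A) x equals the familiar trace form of the QAP objective, and that the assignment constraints hold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n)); A = (A + A.T) / 2   # e.g. a (symmetrized) flow matrix
B = rng.random((n, n)); B = (B + B.T) / 2   # e.g. a (symmetrized) distance matrix

X = np.eye(n)[rng.permutation(n)]           # a permutation matrix
x = X.flatten(order="F")                    # column-major vectorization, x in {0,1}^{n^2}

# Quadratic form with the Kronecker product, as on the slide ...
q_kron = x @ np.kron(B, A) @ x
# ... equals the classical trace form trace(X^T A X B^T) of the QAP objective
q_trace = np.trace(X.T @ A @ X @ B.T)
print(np.isclose(q_kron, q_trace))          # True

# The assignment constraints (I ⊗ e^T) x = (e^T ⊗ I) x = e hold for permutation matrices
e, I = np.ones(n), np.eye(n)
print(np.allclose(np.kron(I, e) @ x, e), np.allclose(np.kron(e, I) @ x, e))  # True True
```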

  6. Quadratic Assignment Problem as a Polynomial Optimization Problem
  Relax the linear (assignment) constraints with a Lagrangian multiplier λ:

  min { x^T (B ⊗ A) x + λ (‖B ⊗ A‖ / ‖D‖) x̃^T D x̃ | x ∈ [0,1]^n, x_i x_j = 0, x̃ = [1; x] },

  where
    D := [ d^T d   −d^T C ;  −C^T d   C^T C ]   (3)
    C := [ I ⊗ e^T ;  e^T ⊗ I ]                 (4)
    d := [ e; e ]                               (5)
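The matrices in (3)-(5) fold the penalty ‖Cx − d‖^2 for the relaxed assignment constraints into a quadratic form in x̃ = [1; x]. A minimal numpy check of that identity (variable names are ours, not from the slides):

```python
import numpy as np

n = 3
e = np.ones(n)
I = np.eye(n)

# C stacks the assignment constraints (I ⊗ e^T) x = e and (e^T ⊗ I) x = e, i.e. C x = d
C = np.vstack([np.kron(I, e), np.kron(e, I)])   # shape (2n, n^2)
d = np.concatenate([e, e])                       # d = [e; e]

# D as in (3): x̃^T D x̃ = ||C x − d||^2 for x̃ = [1; x]
D = np.block([
    [np.ones((1, 1)) * (d @ d), -(d @ C)[None, :]],
    [-(C.T @ d)[:, None],        C.T @ C         ],
])

rng = np.random.default_rng(1)
x = rng.random(n * n)                  # an arbitrary point in [0,1]^{n^2}
x_tilde = np.concatenate([[1.0], x])
print(np.isclose(x_tilde @ D @ x_tilde, np.linalg.norm(C @ x - d) ** 2))  # True
```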

  7. Quadratic Assignment Problem and DNN Relaxation
  Polynomial optimization problem (POP) with non-negative variables:

  min_x { f_0(x) | f_i(x) = 0 (i = 1, 2, ..., m), x ≥ 0 }

  • 0-1 binary quadratic optimization problems
  • optimal power flow, sensor network localization, ...

  Doubly non-negative (DNN) relaxation: SDP relaxation + non-negativity constraints
  • better lower bounds than SDP
  • a very large number, O(n^2), of non-negativity constraints

  [Diagram: POP → DNN relaxation → BP method → lower bound.]
  BBCPOP improved the lower bounds for QAPLIB instances.

  8. Quadratic Assignment Problem and DNN Relaxation
  DNN optimization problem:

  min_Z { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, Z ∈ K_1 ∩ K_2 }

  where
  • F_0 ∈ S^n and H_0 ∈ S^n_+
  • K_1 = S^n_+ and K_2 ⊆ S^n_{≥0} are convex cones
  • S^n: the space of symmetric matrices
  • S^n_+: the cone of symmetric positive semidefinite matrices
  • S^n_{≥0}: the cone of symmetric nonnegative matrices
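Since K_1 = S^n_+ is self-dual, the projection onto K_1 (equivalently K_1^*) that the later slides rely on can be computed from an eigenvalue decomposition by clipping negative eigenvalues. A small sketch in plain numpy (the function name is ours):

```python
import numpy as np

def proj_psd(M: np.ndarray) -> np.ndarray:
    """Projection of a symmetric matrix onto the PSD cone S^n_+
    (keep the eigenvectors, clip negative eigenvalues to zero)."""
    S = (M + M.T) / 2                       # symmetrize against round-off
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T

# The projection is PSD and acts as the identity on matrices that are already PSD
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5)); M = (M + M.T) / 2
P = proj_psd(M)
print(np.min(np.linalg.eigvalsh(P)) >= -1e-10)   # True: P is PSD
print(np.allclose(proj_psd(P), P))               # True: idempotent on S^n_+
```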

  9. Quadratic Assignment Problem and DNN Relaxation

  min_x { x̃^T Q x̃ | x ∈ [0,1]^n, x_i x_j = 0 ((i, j) ∈ Γ), x̃ = [1; x] }   (6)

  ⇓ DNN relaxation

  min_Z { ⟨Q, Z⟩ | Z_00 = 1, Z ∈ K_1 ∩ K_2 }   (7)

  K_2 := { Z ∈ S^{n+1} | Z_αβ ≥ 0 (nonnegativity), Z_0α = Z_α0 ≥ Z_αα, Z_αβ = 0 if (α, β) ∈ Γ }   (8)
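Why (7) is a relaxation of (6): for any x feasible for (6), the rank-one lift Z = x̃ x̃^T satisfies Z_00 = 1 and Z ∈ K_1 ∩ K_2, so the optimal value of (7) can only be lower. A tiny numerical illustration with a 2×2 assignment (purely for demonstration):

```python
import numpy as np

X = np.eye(2)[[1, 0]]                  # a 2x2 permutation matrix, feasible for the QAP
x = X.flatten(order="F")               # binary vector; x_α x_β = 0 on the pairs in Γ
x_tilde = np.concatenate([[1.0], x])
Z = np.outer(x_tilde, x_tilde)         # rank-one lift Z = x̃ x̃^T

print(np.isclose(Z[0, 0], 1.0))                        # Z_00 = 1
print(np.min(np.linalg.eigvalsh(Z)) >= -1e-12)         # Z is PSD, i.e. Z ∈ K_1
print(bool(np.all(Z >= 0)))                            # nonnegativity part of K_2
# For binary x, Z_0α = Z_α0 = x_α = x_α^2 = Z_αα, so Z_0α = Z_α0 ≥ Z_αα holds
print(np.allclose(Z[0, 1:], np.diag(Z)[1:]))           # True
```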

  10. DNN Optimization: BP Method [Kim, Kojima & Toh, '16]

  min_Z { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, Z ∈ K_1 ∩ K_2 }

  ⇕ Strong duality

  max_{y_0} { y_0 | G(y_0) := F_0 − y_0 H_0 ∈ K_1^* + K_2^* }

  [Figure: the y_0 axis split into a feasible part and an infeasible part, meeting at the optimum y_0^*.]

  BP method: a bisection method that judges the feasibility of a point y_0.
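The outer loop of the BP method is an ordinary bisection on y_0 against a feasibility oracle. A hedged sketch of that loop (the oracle passed in below is a toy stand-in; on the slides it is the APG-based test of the next two slides):

```python
from typing import Callable

def bp_bisection(is_feasible: Callable[[float], bool],
                 y_lo: float, y_hi: float, tol: float = 1e-6) -> float:
    """Bisection on y_0 for max{ y_0 : G(y_0) in K_1^* + K_2^* }.

    `is_feasible(y0)` is the feasibility oracle; y_lo must be feasible and
    y_hi infeasible.  Returns a feasible y_0 within `tol` of the boundary.
    """
    assert is_feasible(y_lo) and not is_feasible(y_hi)
    while y_hi - y_lo > tol:
        mid = (y_lo + y_hi) / 2
        if is_feasible(mid):
            y_lo = mid          # mid is still feasible: move the lower end up
        else:
            y_hi = mid          # mid is infeasible: move the upper end down
    return y_lo

# Toy usage with a made-up feasibility boundary at y_0 = 3.7:
print(round(bp_bisection(lambda y0: y0 <= 3.7, 0.0, 10.0), 4))   # ≈ 3.7
```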

  11. DNN Optimization: BP Method [Kim, Kojima, & Toh, '16]
  How to judge whether G(y_0) ∈ K_1^* + K_2^*?  ⇒ Solve the regression model

  f^* = min_{Y_1, Y_2} { ‖G − (Y_1 + Y_2)‖^2 | Y_1 ∈ K_1^*, Y_2 ∈ K_2^* }
      = min_{Y_1} { min_{Y_2} { ‖(G − Y_1) − Y_2‖^2 | Y_2 ∈ K_2^* } | Y_1 ∈ K_1^* }
      = min_{Y_1} { ‖(G − Y_1) − Π_{K_2^*}(G − Y_1)‖^2 | Y_1 ∈ K_1^* }
      = min_{Y_1} { ‖Π_{K_2}(Y_1 − G)‖^2 | Y_1 ∈ K_1^* }   (where Y_2 = Π_{K_2^*}(G − Y_1))

  • Obviously, f^* = 0 ⇔ G ∈ K_1^* + K_2^*.
  • Apply the accelerated proximal gradient (APG) method to check whether f^* = 0.
  → [Assumption 1] Π_{K_2} and Π_{K_1^*} can be computed efficiently.

  12. DNN Optimization: APG Method
  Constrained optimization: min_{α ∈ S} f(α)

  Gradient projection method (e.g., [Goldstein, '64]):
    Step 1: α_{k+1} = Π_S( α_k − (1/L_k) ∇f(α_k) )

  APG method [Beck and Teboulle, '09]:
    Step 1: α_k = Π_S( β_k − (1/L_k) ∇f(β_k) )
    Step 2: t_{k+1} = (1 + √(1 + 4 t_k^2)) / 2
    Step 3: β_{k+1} = α_k + ((t_k − 1) / t_{k+1}) (α_k − α_{k−1})   (momentum term)
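A compact sketch of the three APG steps above, under the simplifying assumption of a fixed step size 1/L instead of the per-iteration L_k; all names are illustrative:

```python
import numpy as np
from typing import Callable

def apg(grad: Callable[[np.ndarray], np.ndarray],
        proj: Callable[[np.ndarray], np.ndarray],
        x0: np.ndarray, L: float, iters: int = 500) -> np.ndarray:
    """Accelerated projected gradient for min_{α in S} f(α), following the
    three steps on the slide with a fixed step size 1/L."""
    alpha_prev = alpha = x0.copy()
    beta = x0.copy()
    t = 1.0
    for _ in range(iters):
        alpha = proj(beta - grad(beta) / L)                        # Step 1: gradient + projection
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0          # Step 2
        beta = alpha + (t - 1.0) / t_next * (alpha - alpha_prev)   # Step 3: momentum
        alpha_prev, t = alpha, t_next
    return alpha

# Toy usage: min ||x − c||^2 over the nonnegative orthant (projection = clip at 0)
c = np.array([1.0, -2.0, 3.0])
sol = apg(grad=lambda x: 2 * (x - c), proj=lambda x: np.maximum(x, 0.0),
          x0=np.zeros(3), L=2.0)
print(np.round(sol, 4))   # ≈ [1. 0. 3.]
```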

  13. DNN Optimization: APG Method
  Gradient projection method: f(α_k) − f(α^*) ≤ O(1/k)   (current value minus optimum)
  Accelerated proximal gradient (APG) method: f(α_k) − f(α^*) ≤ O(1/k^2)
  e.g., [Beck and Teboulle, '09], [Nesterov, '03]

  14. DNN Optimization: Computing a Valid Lower Bound
  • The BP method may output an upper bound of the optimal value (an infeasible solution), because APG can fail to judge feasibility due to numerical error.
  • Can we compute a valid lower bound y_0^ℓ of the DNN relaxation?
  [Figure: feasible vs. infeasible y_0 with the optimum y_0^*.]

  15. DNN Optimization: Computing a Valid Lower Bound [Arima, Kim, Kojima & Toh, '17]

  min_Z { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, Z ∈ K_1 ∩ K_2 }

  ⇕ with I ∈ K_1 and a large enough ρ ≥ 0

  min_Z { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, ⟨I, Z⟩ ≤ ρ, Z ∈ K_1 ∩ K_2 }

  ⇕ Strong duality

  max_{y_0, μ} { y_0 + ρμ | G(y_0) − μI ∈ K_1^* + K_2^*, μ ≤ 0 }

  ⇕

  max_{y_0, μ, Y_2} { y_0 + ρμ | G(y_0) − Y_2 − μI ∈ K_1^* (= S^n_+), Y_2 ∈ K_2^*, μ ≤ 0 }

  16. DNN Optimization: Summary

  min_Z { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, Z ∈ K_1 ∩ K_2 }

  Dual of the Lagrangian relaxation with parameter ρ ≥ 0:

  max_{y_0, μ, Y_2} { y_0 + ρμ | G(y_0) − Y_2 − μI ∈ K_1^*, Y_2 ∈ K_2^*, μ ≤ 0 }

  We search for
  • y_0 by the bisection method,
  • Y_1 ∈ K_1^* and Y_2 ∈ K_2^* by APG (to judge the feasibility of y_0),
  • μ: the minimal eigenvalue of G(y_0) − Y_2 → always gives a valid lower bound (see the sketch below).

  [Assumption 1] Π_{K_2} and Π_{K_1^*} can be computed efficiently.
  [Assumption 2] We have a tight ρ ≥ 0.
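The last bullet is what makes the bound valid: whatever Y_2 ∈ K_2^* the APG run returns, shifting by the (negative part of the) minimum eigenvalue of G(y_0) − Y_2 restores feasibility of the dual above, at a cost of ρμ in the objective. A minimal numpy sketch of that formula (G_y0 and Y2 below are placeholder matrices, not real QAP data):

```python
import numpy as np

def valid_lower_bound(G_y0: np.ndarray, Y2: np.ndarray,
                      y0: float, rho: float) -> float:
    """Valid lower bound from a possibly inexact APG output: take mu as the
    negative part of lambda_min(G(y_0) - Y_2), so that G(y_0) - Y_2 - mu*I is
    PSD and (y_0, mu, Y_2) is dual feasible; return y_0 + rho*mu."""
    mu = min(np.linalg.eigvalsh(G_y0 - Y2).min(), 0.0)
    return y0 + rho * mu

# Toy usage with small symmetric stand-ins for G(y_0) and Y_2 (illustrative only):
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)); G_y0 = (M + M.T) / 2
Y2 = np.zeros((4, 4))
print(valid_lower_bound(G_y0, Y2, y0=10.0, rho=5.0) <= 10.0)   # True: never above y_0
```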

  17. BBCPOP: a MATLAB implementation [Naoki Ito, Kim, Kojima, Takeda and Toh, 2018]

  18. DNN Optimization: the Case of the Quadratic Assignment Problem
  DNN formulation:

  min { ⟨F_0, Z⟩ | ⟨H_0, Z⟩ = 1, Z ∈ K_1 ∩ K_2 }   (9)

  K_1 := S^{n+1}_+   (10)

  K_2 := { Z ∈ S^{n+1} | Z_αβ ≥ 0 (nonnegativity), Z_0α = Z_α0 ≥ Z_αα, Z_αβ = 0 if (α, β) ∈ Γ }   (11)

  19. DNN Optimization: the Case of the Quadratic Assignment Problem
  [Assumption 1] Π_{K_2} and Π_{K_1^*} can be computed efficiently.
  • Π_{K_1^*} is the projection onto the positive semidefinite cone (a symmetric cone).
  • Π_{K_2} is computed entrywise:

    Π_{K_2}(Z)_αβ := 0                             if (α, β) ∈ Γ
    Π_{K_2}(Z)_αβ := max(0, Z_αβ)                  for all other off-diagonal entries (nonnegativity)
    Π_{K_2}(Z)_αα = Π_{K_2}(Z)_α0 = Π_{K_2}(Z)_0α := avg(Z_αα, Z_α0, Z_0α)   if Z_α0 < Z_αα
    Π_{K_2}(Z)_αβ := Z_αβ                          otherwise   (12)
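A short numpy sketch of the map (12) as described on this slide; it reproduces the numerical example on the next slide. (We only follow the slide's componentwise description and do not claim it is the exact metric projection onto K_2; names are ours.)

```python
import numpy as np

def proj_K2(Z: np.ndarray, Gamma: set[tuple[int, int]]) -> np.ndarray:
    """Map onto K_2 as in (12); row/column 0 is the coordinate of the constant 1.
    Zero out the entries indexed by Γ, clip the remaining entries at 0, and,
    whenever Z_α0 < Z_αα, replace Z_αα, Z_α0 and Z_0α by their average."""
    W = np.maximum(Z, 0.0)                 # nonnegativity
    for (a, b) in Gamma:                   # complementarity: Z_αβ = 0 on Γ (both triangles)
        W[a, b] = W[b, a] = 0.0
    n1 = Z.shape[0]
    for a in range(1, n1):                 # diagonal vs. linking entries
        if Z[a, 0] < Z[a, a]:
            avg = (Z[a, a] + Z[a, 0] + Z[0, a]) / 3.0
            W[a, a] = W[a, 0] = W[0, a] = avg
    return W
```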

  20. DNN Optimization: the Case of the Quadratic Assignment Problem
  Example:

  Z =
    [ 9.51  4.10  5.23  5.30  3.96 ]
    [ 4.10  1.76  2.25  2.28  1.70 ]
    [ 5.23  2.25  2.88  2.91  2.18 ]   (13)
    [ 5.30  2.28  2.91  2.95  2.20 ]
    [ 3.96  1.70  2.18  2.20  8.65 ]

  Π_{K_2}(Z) =
    [ 9.51  4.10  5.23  5.30  5.52 ]
    [ 4.10  1.76  0     0     1.70 ]
    [ 5.23  0     2.88  2.91  0    ]   (14)
    [ 5.30  0     2.91  2.95  0    ]
    [ 5.52  1.70  0     0     5.52 ]
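A quick arithmetic check of the diagonal rule against this example; the zero pattern in (14) marks the pairs that belong to Γ here.

```python
# Diagonal rule at index α = 4: Z_40 = 3.96 < Z_44 = 8.65, so Z_44, Z_40 and Z_04
# are all replaced by their average.
print(round((8.65 + 3.96 + 3.96) / 3, 2))   # 5.52, the (0,4), (4,0), (4,4) entries of (14)
# The entries set to 0 in (14) are those indexed by Γ: the pairs (1,2), (1,3),
# (2,4), (3,4) and their transposes.
```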
