

  1. CPAIOR 2019. Deep Inverse Optimization. Yingcong Tan 1, Andrew Delong 2, Daria Terekhov 1. 1 Department of Mechanical, Industrial and Aerospace Engineering, Concordia University; 2 Department of Computer Science and Software Engineering, Concordia University. Thursday, June 6th, 2019.

  2. Agenda: i. Motivation; ii. Methodology; iii. Experiments; iv. Summary

  3–6. What is Inverse Optimization (IO)?
Forward Optimization Problem: given a cost vector c, find an optimal decision x*:
    min_x c′x   s.t. Ax ≤ b
Inverse Optimization Problem: given an observed target decision x^target, find a cost vector c whose optimal solution x* reproduces it:
    min_c Loss(x^target, x*)   s.t. x* ∈ argmin_x { c′x | Ax ≤ b }
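As a concrete illustration of the forward problem only (the talk's method uses a differentiable interior-point solver instead), here is a minimal SciPy sketch with a made-up c, A and b:

```python
# Solve the forward problem min c'x s.t. Ax <= b for one illustrative instance.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                               # cost vector
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])   # encodes x >= 0, x1 + x2 <= 4
b = np.array([0.0, 0.0, 4.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))  # unrestricted variables
print(res.x)                                           # optimal decision x*, here [0, 0]
```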

  7. Motivation. Routing problem (e.g., least-cost path): learn the arc costs. Production planning problem: estimate the backorder cost. Customer behavior: estimate the customer utility function.

  8–9. Contribution.
Existing algorithms (Chan et al. [2–4], Troutt et al. [6, 7], Aswani et al. [1], Saez-Gallego and Morales [5]). Highlights: optimization formulations based on optimality conditions; guarantee an optimal solution. Limitation: the algorithms are tailored to special cases of IO problems.
Deep Inverse Optimization. Highlights: the first deep-learning-based approach; learns parameters through backpropagation; generally applicable to different IO problems. Limitation: does not guarantee an optimal solution.

  10. Agenda: i. Motivation; ii. Methodology; iii. Experiments; iv. Summary

  11. Methodology (IO): find a cost vector consistent with the target. [Diagram: an initial cost vector c, the observed target, and the discrepancy that will drive the update ∆c]

  12. Methodology: solve the FOP using an interior-point method (IPM). [Diagram: the current c yields an optimal solution x*; a discrepancy between x* and the target remains]
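For the forward solve to be useful here, each interior-point iteration must itself be differentiable. Below is a minimal PyTorch sketch of damped Newton steps on a log-barrier objective. It is an illustration under stated assumptions (fixed barrier weight t, fixed step size, no line search or feasibility safeguards), not the authors' exact implementation:

```python
# Differentiable log-barrier solve: every operation is autograd-friendly,
# so gradients can flow from x* back to c, A and b.
import torch

def barrier_solve(c, A, b, x0, t=10.0, n_newton=20, step=0.25):
    """Approximately solve min c'x s.t. Ax <= b by Newton steps on the
    barrier objective t*c'x - sum(log(b - Ax)), from strictly feasible x0."""
    x = x0
    for _ in range(n_newton):
        r = b - A @ x                            # slacks; must stay positive
        g = t * c + A.T @ (1.0 / r)              # gradient of barrier objective
        H = A.T @ torch.diag(1.0 / r ** 2) @ A   # Hessian of barrier objective
        dx = torch.linalg.solve(H, -g)           # Newton direction
        x = x + step * dx                        # damped step (sketch only: a real
                                                 # IPM line-searches and increases t)
    return x
```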

  13–14. Methodology: observe the discrepancy and compute gradients. [Diagram: the discrepancy between x* and the target is backpropagated through the solver, producing a cost update ∆c]
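In code this step is ordinary autograd. The sketch below treats c itself as the learnable parameter and takes one gradient step on the squared discrepancy; it assumes the barrier_solve helper above, plus some A, b and a strictly feasible x0:

```python
import torch

# Assumed setup: A, b define the feasible region; x0 is strictly feasible;
# barrier_solve is the differentiable solver sketched earlier.
c = torch.randn(2, requires_grad=True)       # current cost estimate
x_target = torch.tensor([1.0, 2.0])          # observed target decision

x_star = barrier_solve(c, A, b, x0)          # differentiable forward solve
loss = ((x_star - x_target) ** 2).sum()      # observed discrepancy
loss.backward()                              # dLoss/dc through the unrolled IPM
with torch.no_grad():
    c -= 0.1 * c.grad                        # the update step "delta c"
    c.grad.zero_()
```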

  15. Methodology: termination. [Diagram: under the learned c, the optimal solution x* coincides with the target]

  16. Methodology: Deep Inverse Optimization. Unroll the solver the way a deep RNN is unrolled (dynamic number of steps):
    x(0) = features
    x(1) = RNN(x(0), weights)
    x(2) = RNN(x(1), weights)
    ...
    x(n) = RNN(x(n−1), weights)
    min_weights Loss(Target, x(n)), trained by backpropagation of ∂Loss/∂weights
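A minimal PyTorch sketch of this unrolling pattern (the shapes and the tanh cell are arbitrary choices for illustration):

```python
import torch

n_steps, dim = 10, 4
weights = torch.randn(dim, 2 * dim, requires_grad=True)  # shared across all steps
features = torch.randn(dim)
target = torch.randn(dim)

x = features                                  # x(0) = features
for _ in range(n_steps):                      # x(i) = RNN(x(i-1), weights)
    x = torch.tanh(weights @ torch.cat([x, features]))

loss = ((x - target) ** 2).sum()              # Loss(Target, x(n))
loss.backward()                               # fills weights.grad with dLoss/dweights
```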

  17. Methodology: Deep Inverse Optimization. The unrolled IPM has the same structure as an unrolled deep RNN.
Unroll a deep RNN:
    x(0) = features
    x(1) = RNN(x(0), weights)
    x(2) = RNN(x(1), weights)
    x(n) = RNN(x(n−1), weights)
    min_weights Loss(Target, x(n))
Unroll the IPM (dynamic number of steps):
    c, A, b = DefineLP(features, weights)
    x(0) = FindFeasible(c, A, b)
    x(1) = Newton(x(0), c, A, b)
    x(2) = Newton(x(1), c, A, b)
    x(n) = Newton(x(n−1), c, A, b)
    min_weights Loss(Target, x(n))
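Putting the pieces together for the parametric case, a hypothetical end-to-end training loop could look as follows. Here define_lp, find_feasible and barrier_solve are assumed helpers in the spirit of the slide, not the released package's API, and dataset holds (features, target) pairs:

```python
# End-to-end sketch: weights -> LP -> unrolled IPM -> loss -> backprop.
import torch

weights = torch.randn(2, requires_grad=True)    # parameters of the LP model

def define_lp(features, weights):
    """Hypothetical parametric model: the cost depends on the features."""
    c = weights * features                      # elementwise; shape (2,)
    return c, A, b                              # A, b assumed fixed for simplicity

opt = torch.optim.Adam([weights], lr=0.05)
for epoch in range(200):
    total = 0.0
    for features, x_target in dataset:          # multiple observed conditions
        c, A_c, b_c = define_lp(features, weights)
        x0 = find_feasible(c, A_c, b_c)         # strictly feasible start (assumed)
        x_star = barrier_solve(c, A_c, b_c, x0) # unrolled, differentiable solve
        total = total + ((x_star - x_target) ** 2).sum()
    opt.zero_grad()
    total.backward()                            # gradients flow through every Newton step
    opt.step()
```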

  18. Agenda: i. Motivation; ii. Methodology; iii. Experiments; iv. Summary

  19–21. Experiments on Three Learning Tasks.
Task 1: single-point, non-parametric LP. Goal: learn the cost vector. (A closed-form solution was proposed by Chan et al. [2, 4].)
Task 2: single-point, non-parametric LP. Goal: learn the cost vector and constraints jointly; see the sketch after this list. (A maximum-likelihood estimation approach was proposed by Troutt et al. [6].)
Task 3: multi-point, parametric LP, i.e., c, A, b = f(features, weights). Goal: learn the weights. (Not addressed in the literature.)
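For Task 2 the same machinery applies; assuming the barrier_solve helper sketched earlier, the only change is that the constraint parameters are made learnable as well:

```python
import torch

# Learn cost and constraints jointly: mark all LP parameters as learnable.
# (x0 and x_target assumed as before; A, b chosen so that x0 stays feasible.)
c = torch.randn(2, requires_grad=True)
A = torch.randn(4, 2, requires_grad=True)
b = torch.ones(4, requires_grad=True)

x_star = barrier_solve(c, A, b, x0)          # gradients now reach c, A and b
loss = ((x_star - x_target) ** 2).sum()
loss.backward()                              # c.grad, A.grad, b.grad all populated
```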

  22–23. Experiment on Task 1. Goal: learn a cost vector consistent with a single observed target. Tested on 300 random LP instances. [Diagrams: target, cost vector c and optimal set X* before and after learning]

  24. Experiment on Task 1: before vs. after learning. [Diagrams: target, c and X* before and after learning, on an instance with N = 10 variables and M = 20 constraints]

  25. Squared Error (Learned vs. Target). [Box plot: loss (squared error) on a logarithmic scale, from about 10^1 down to 10^-11, across random LP instances of varying size (N variables, M constraints)]

  26–27. Experiment on Task 2. Goal: learn a cost vector and constraints consistent with a single observed target. Tested on 300 random LP instances. [Diagrams: target, c and feasible region X before and after learning]

  28. Squared Error (Learned vs. Target). [Box plot: loss (squared error) on a logarithmic scale, from about 10^1 down to 10^-11, across random LP instances of varying size (N variables, M constraints)]

  29–30. Experiment on Task 3. Goal: learn weights such that decisions are consistent with observed targets across multiple conditions. [Diagram: observed targets and the discrepancies of the current decisions]

  31–32. Squared Error (Learned vs. Target). [Plots: squared error between learned decisions and observed targets for the parametric task]

  33. Agenda: i. Motivation; ii. Methodology; iii. Experiments; iv. Summary

  34. Summary. A general-purpose framework for solving IO problems: it handles parametric and non-parametric problems, learns all parameters individually or jointly, and extends easily to non-linear problems. The Deep-Inv-Opt package is available at https://github.com/tankconcordia/deep_inv_opt

  35. Thank you!
