

  1. Scalable Global Optimization via Local Bayesian Optimization
     David Eriksson (Uber AI, eriksson@uber.com), Matthias Poloczek, Michael Pearce, Jake Gardner, Ryan Turner

  2. Global Optimization
     Find 𝑦* ∈ Ω such that 𝑔(𝑦*) ≤ 𝑔(𝑦) for all 𝑦 ∈ Ω
     • 𝑔 is a continuous, computationally expensive, black-box function
     • Ω ⊂ ℝᵈ is a hyper-rectangle
     Applications: planning and control, design of aerodynamic structures

  3.–5. Bayesian Optimization (BO)
     Common restrictions:
     • A few hundred evaluations
     • Fewer than 10 tunable parameters
     (Slides 3–5 animate one BO step, showing the true function, a posterior sample, the observed points, and the next point to evaluate.)
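The BO loop sketched on slides 3–5 (fit a surrogate to the observed points, pick the next point via an acquisition function, evaluate it) can be illustrated in pure Python. Everything below is a stand-in: the toy 1-D objective, the nearest-neighbour "surrogate" in place of a real GP, and the lower-confidence-bound acquisition are illustrative choices, not the models from the talk.

```python
import random

def objective(x):
    # Toy 1-D black-box function standing in for the expensive g(y).
    return (x - 0.3) ** 2

def surrogate(x, observed):
    # Placeholder surrogate: nearest-neighbour mean plus a distance-based
    # uncertainty. A real implementation would use a GP posterior instead.
    nearest_x, nearest_y = min(observed, key=lambda p: abs(p[0] - x))
    mean = nearest_y
    std = abs(x - nearest_x)          # uncertainty grows away from the data
    return mean, std

def lower_confidence_bound(x, observed, kappa=2.0):
    # For minimisation: prefer points with a low predicted value or
    # a large uncertainty (exploration/exploitation trade-off).
    mean, std = surrogate(x, observed)
    return mean - kappa * std

def bo_loop(n_iters=20, seed=0):
    rng = random.Random(seed)
    observed = [(x, objective(x)) for x in (0.0, 1.0)]   # initial design
    for _ in range(n_iters):
        candidates = [rng.random() for _ in range(100)]
        x_next = min(candidates,
                     key=lambda x: lower_confidence_bound(x, observed))
        observed.append((x_next, objective(x_next)))     # expensive evaluation
    return min(observed, key=lambda p: p[1])             # best point found

best_x, best_y = bo_loop()
```

The loop only ever queries the cheap surrogate to decide where to spend the next expensive evaluation, which is what makes BO attractive under the few-hundred-evaluation budget mentioned above.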

  6. High-dimensional BO is challenging
     Challenges:
     1. The search space grows exponentially with the dimensionality
     2. A global GP model may not fit the data everywhere
     3. Large areas of uncertainty lead to over-exploration
     Previous work makes strong assumptions:
     • Additive structure
     • Low-dimensional structure

  7. Trust-region methods
     Main idea:
     • Optimize a (simple) model in a local region
     • Expand/shrink this region based on progress
     Model choices: linear (e.g. COBYLA), quadratic (e.g. BOBYQA), GP (TuRBO, this paper)
     • Only requires a locally accurate model
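The expand/shrink rule above can be sketched as a small state update: grow the region after a streak of successes (improvements), shrink it after a streak of failures. The counters and constants below (success/failure tolerances, minimum and maximum side length) are illustrative defaults in the spirit of the paper, not a definitive implementation.

```python
def update_trust_region(length, success, n_success, n_fail,
                        succ_tol=3, fail_tol=3,
                        length_min=0.5 ** 7, length_max=1.6):
    # Trust-region maintenance: `length` is the side length of the region.
    # Double it after `succ_tol` consecutive successes (capped at
    # `length_max`); halve it after `fail_tol` consecutive failures.
    # A region shrunk below `length_min` would typically trigger a restart.
    if success:
        n_success, n_fail = n_success + 1, 0
    else:
        n_success, n_fail = 0, n_fail + 1
    if n_success >= succ_tol:
        length, n_success = min(2.0 * length, length_max), 0
    elif n_fail >= fail_tol:
        length, n_fail = length / 2.0, 0
    return length, n_success, n_fail
```

Because the model only has to be accurate inside this region, a simple local surrogate (linear, quadratic, or a local GP) suffices, which is the point of the slide.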

  8. Trust-region BO (TuRBO)
     1. Avoids over-exploration by using a trust-region framework
     2. Balances exploration/exploitation by using BO inside the trust region
     3. Uses Thompson sampling to scale to large batch sizes
     (Figure: GP model vs. true function, and the trust-region update.)
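Thompson sampling is what makes the batch selection embarrassingly parallel: each batch slot draws an independent realisation of the surrogate posterior at the candidate points and takes that realisation's minimiser, so the draws naturally spread the batch out. A minimal sketch, where `posterior_sample` is a hypothetical stand-in for a draw from the GP posterior (here: the toy mean plus Gaussian noise):

```python
import random

def select_batch_thompson(candidates, posterior_sample, batch_size, rng):
    # One independent posterior realisation per batch slot; each slot
    # contributes the minimiser of its own realisation. This is how a
    # Thompson-sampling acquisition scales to large batch sizes.
    batch = []
    for _ in range(batch_size):
        sample = {x: posterior_sample(x, rng) for x in candidates}
        batch.append(min(sample, key=sample.get))
    return batch

rng = random.Random(0)
# Hypothetical posterior: mean (x - 0.3)^2 with constant noise 0.05.
post = lambda x, rng: (x - 0.3) ** 2 + 0.05 * rng.gauss(0.0, 1.0)
cands = [i / 50.0 for i in range(51)]
batch = select_batch_thompson(cands, post, batch_size=5, rng=rng)
```

In TuRBO the candidates would be drawn inside the current trust region, so the batch both exploits the local model and respects the region's bounds.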

  9. Experimental results
     • Robot pushing: 10,000 evaluations, batch size 50
     • Rover trajectory planning: 20,000 evaluations, batch size 100

  10. Experimental results
     200D Ackley function: 10,000 evaluations, batch size 100
     (Plot: value vs. number of evaluations for TuRBO-1, Thompson, BOCK, Bohamiann, HeSBO, CMA-ES, BOBYQA, Nelder-Mead, BFGS, and Random.)

  11. Summary
     TuRBO:
     • Achieves excellent results on high-dimensional problems
     • Combines BO with trust regions to avoid over-exploration
     • Makes no assumptions about low-dimensional structure
     Paper: https://arxiv.org/abs/1910.01739
     Code: https://github.com/uber-research/TuRBO
     Poster #9
