Constraint Satisfaction Problems: Local Search (Alice Gao, Lecture 8)

  1. 1/27 Constraint Satisfaction Problems: Local Search
     Alice Gao, Lecture 8
     Based on work by K. Leyton-Brown, K. Larson, and P. van Beek

  2. 2/27 Outline
     ▶ Learning Goals
     ▶ Local Search Algorithms
        ▶ Hill climbing
        ▶ Hill climbing with random restarts
        ▶ Simulated Annealing
        ▶ Genetic Algorithms
     ▶ Revisiting the Learning Goals

  3. 3/27 Learning Goals
     By the end of the lecture, you should be able to:
     ▶ Describe/trace/implement the local search algorithms (hill climbing, hill climbing with random restarts, simulated annealing, and genetic algorithms).
     ▶ Describe strategies for escaping local optima.
     ▶ Compare and contrast the properties of local search algorithms.

  4. 4/27 Learning Goals Local Search Algorithms Hill climbing Hill climbing with random restarts Simulated Annealing Genetic Algorithms Revisiting the Learning goals

  5. 5/27 Questions
     The problem formulation:
     ▶ What is the neighbour relation?
     ▶ What is the cost function?
     Executing the algorithm:
     ▶ Where do we start?
     ▶ Which neighbour do we move to?
     ▶ When do we stop?
     Properties and performance of the algorithm:
     ▶ Given enough time, will the algorithm find the global optimum solution?
     ▶ How much memory does it require?
     ▶ How does the algorithm perform in practice?

  6. 6/27 Hill climbing
     ▶ Where do we start? Start with a random or good solution.
     ▶ Which neighbour do we move to? Move to a neighbour with the lowest cost. Break ties randomly. Greedy: does not look ahead beyond one step.
     ▶ When do we stop? Stop when no neighbour has a lower cost.
     ▶ How much memory does it require? Only need to remember the current node. No memory of where we've been.

  7. 7/27 Hill climbing in one sentence Climbing Mount Everest in a thick fog with amnesia

  8. 8/27 CQ: Will hill climbing find the global optimum?
     Will hill climbing find the global optimal solution given enough time?
     (A) Yes. Given enough time, hill climbing will find the global optimal solution for every problem.
     (B) No. There are problems where hill climbing will NOT find the global optimal solution.

  9. 9/27 Algorithm 1 Hill Climbing
     1: current ← a random state
     2: while true do
     3:     next ← get-best-neighbour(current)
     4:     if cost(current) ≤ cost(next) then
     5:         break
     6:     end if
     7:     current ← next
     8: end while
     9: return current
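
     A minimal Python sketch of this loop, assuming the caller supplies cost and neighbours functions for the problem (those names, and get-best-neighbour realized as a minimum over the neighbour list, are assumptions, not from the slides):

     import random

     def hill_climb(initial_state, cost, neighbours):
         """Greedy hill climbing: repeatedly move to the best neighbour,
         breaking ties randomly, until no neighbour is strictly better."""
         current = initial_state
         while True:
             nbrs = neighbours(current)
             if not nbrs:
                 return current
             best_cost = min(cost(n) for n in nbrs)
             best = [n for n in nbrs if cost(n) == best_cost]
             nxt = random.choice(best)          # break ties randomly
             if cost(current) <= cost(nxt):     # no strictly better neighbour
                 return current
             current = nxt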

  10. 10/27 Algorithm 2 Hill Climbing with Sideway Moves
      1:  current ← a random state
      2:  while true do
      3:      next ← get-best-neighbour(current)
      4:      if cost(current) < cost(next) then
      5:          break
      6:      end if
      7:      if cost(current) == cost(next) then
      8:          record a sideway move
      9:          if too many consecutive sideway moves then
      10:             break
      11:         end if
      12:     end if
      13:     current ← next
      14: end while
      15: return current
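
      A similar Python sketch with a cap on consecutive sideway moves; the max_sideways parameter and helper names are assumptions (the default of 100 echoes the limit quoted on slide 12):

      import random

      def hill_climb_sideways(initial_state, cost, neighbours, max_sideways=100):
          """Hill climbing that also accepts equal-cost (sideway) moves,
          up to max_sideways consecutive ones, to walk across plateaus."""
          current = initial_state
          sideways = 0
          while True:
              nbrs = neighbours(current)
              if not nbrs:
                  return current
              nxt = min(nbrs, key=cost)
              if cost(current) < cost(nxt):      # every neighbour is worse
                  return current
              if cost(current) == cost(nxt):     # sideway move
                  sideways += 1
                  if sideways > max_sideways:
                      return current
              else:                              # strictly better move
                  sideways = 0
              current = nxt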

  11. 11/27 Hill Climbing with Tabu List ▶ How do you keep track of the most recent nodes visited? ▶ How would you update the list?
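
      One possible answer to both questions, sketched below: keep the k most recently visited states in a fixed-length deque (collections.deque with maxlen), so appending a new state automatically evicts the oldest one. The parameter names and the step cap are assumptions, not from the slides:

      from collections import deque

      def hill_climb_tabu(initial_state, cost, neighbours, k=10, max_steps=1000):
          """Hill climbing with a tabu list: the k most recent states are
          kept in a bounded deque and excluded from the candidate moves."""
          current = initial_state
          tabu = deque(maxlen=k)      # oldest entry is dropped automatically
          best = current
          for _ in range(max_steps):
              tabu.append(current)
              candidates = [n for n in neighbours(current) if n not in tabu]
              if not candidates:
                  break
              current = min(candidates, key=cost)
              if cost(current) < cost(best):
                  best = current
          return best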

  12. 12/27 Performance of hill climbing
      ▶ Performs quite well in practice.
      ▶ Makes rapid progress towards a solution. Easy to improve a bad state.
      8-queens problem: ≈ 17 million states.
      ▶ Basic hill climbing: solves 14% of instances; 3-4 steps on average until success or failure.
      ▶ Basic hill climbing + ≤ 100 consecutive sideway moves: solves 94% of instances; 21 steps until success and 64 steps until failure.
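
      For concreteness, a common 8-queens formulation that could be plugged into the sketches above (an assumption here, not necessarily the exact formulation behind the quoted numbers): one queen per column, a neighbour moves a single queen within its column, and the cost is the number of attacking pairs.

      import itertools
      import random

      def random_state(n=8):
          """One queen per column; state[c] is the row of the queen in column c."""
          return tuple(random.randrange(n) for _ in range(n))

      def cost(state):
          """Number of pairs of queens attacking each other (same row or diagonal)."""
          return sum(1 for c1, c2 in itertools.combinations(range(len(state)), 2)
                     if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1)

      def neighbours(state):
          """All states reachable by moving one queen to another row in its column."""
          n = len(state)
          return [state[:c] + (r,) + state[c + 1:]
                  for c in range(n) for r in range(n) if r != state[c]]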

  13. 13/27 Dealing with local optima
      Hill climbing can get stuck at a local optimum. What can we do?
      ▶ Hill climbing with random restarts: restart search in a different part of the state space.
      ▶ Simulated annealing: move to a state with a higher cost occasionally.

  14. 14/27 Learning Goals Local Search Algorithms Hill climbing Hill climbing with random restarts Simulated Annealing Genetic Algorithms Revisiting the Learning goals

  15. 15/27 Hill climbing with random restarts
      If at first you don't succeed, try, try again.
      Restart the search with a randomly generated initial state when
      ▶ we found a local optimum, or
      ▶ we've found a plateau and made too many consecutive sideway moves, or
      ▶ we've made too many moves.
      Choose the best solution out of all the local optima found.
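
      A sketch of the restart wrapper, reusing the hill_climb sketch from above and a hypothetical random_state generator; the number of restarts is an arbitrary choice:

      def random_restart_hill_climb(cost, neighbours, random_state, num_restarts=25):
          """Run hill climbing from several random initial states and keep the
          best local optimum found across all runs."""
          best = None
          for _ in range(num_restarts):
              result = hill_climb(random_state(), cost, neighbours)
              if best is None or cost(result) < cost(best):
                  best = result
          return best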

  16. 16/27 CQ: Will hill climbing + random restarts find the global optimum?
      Will hill climbing with random restarts find the global optimal solution given enough time?
      (A) Yes.
      (B) No. There are problems where hill climbing with random restarts will NOT find the global optimal solution.

  17. 17/27 Learning Goals Local Search Algorithms Hill climbing Hill climbing with random restarts Simulated Annealing Genetic Algorithms Revisiting the Learning goals

  18. 18/27 Simulated Annealing
      ▶ Where do we start? Start with a random solution and a large T.
      ▶ Which neighbour do we move to? Choose a random neighbour. If the neighbour is better than current, move to the neighbour. If the neighbour is not better than the current state, move to the neighbour with probability p = e^(ΔE/T).
      ▶ When do we stop? Stop when T = 0.

  19. 19/27 Algorithm 3 Simulated Annealing
      1:  current ← initial-state
      2:  T ← a large positive value
      3:  while T > 0 do
      4:      next ← a random neighbour of current
      5:      ΔE ← current.cost - next.cost
      6:      if ΔE > 0 then
      7:          current ← next
      8:      else
      9:          current ← next with probability p = e^(ΔE/T)
      10:     end if
      11:     decrease T
      12: end while
      13: return current
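
      A Python sketch of the same loop; the geometric cooling step and the starting/stopping temperatures are assumed choices, since the slides leave the annealing schedule open until slide 22:

      import math
      import random

      def simulated_annealing(initial_state, cost, neighbours,
                              T_start=100.0, cooling=0.99, T_min=1e-3):
          """Simulated annealing with an assumed geometric cooling schedule."""
          current = initial_state
          T = T_start
          while T > T_min:
              nxt = random.choice(neighbours(current))
              delta_E = cost(current) - cost(nxt)   # positive if nxt is better
              if delta_E > 0:
                  current = nxt                     # always take improving moves
              elif random.random() < math.exp(delta_E / T):
                  current = nxt                     # occasionally accept worse moves
              T *= cooling                          # decrease the temperature
          return current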

  20. 20/27 CQ: How does T affect p = e^(ΔE/T)?
      Consider a neighbour with a higher cost than the current node (ΔE < 0). As T decreases, how does p = e^(ΔE/T) change? (p = e^(ΔE/T) is the probability of moving to this neighbour.)
      (A) As T decreases, p = e^(ΔE/T) increases.
      (B) As T decreases, p = e^(ΔE/T) decreases.

  21. 21/27 CQ: How does ΔE affect p = e^(ΔE/T)?
      Assume that T is fixed. Consider a neighbour where ΔE < 0. As ΔE decreases (becomes more negative), how does p = e^(ΔE/T) change? (p = e^(ΔE/T) is the probability of moving to this neighbour.)
      (A) As ΔE decreases, p = e^(ΔE/T) increases.
      (B) As ΔE decreases, p = e^(ΔE/T) decreases.
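
      A quick numerical check of both clicker questions; the specific ΔE and T values below are illustrative only:

      import math

      # Fix ΔE = -2 and shrink T: the acceptance probability drops.
      for T in (10.0, 2.0, 0.5):
          print(f"T={T:5.1f}  p={math.exp(-2 / T):.3f}")
      # T= 10.0  p=0.819
      # T=  2.0  p=0.368
      # T=  0.5  p=0.018

      # Fix T = 2 and make ΔE more negative: the probability also drops.
      for dE in (-1, -2, -4):
          print(f"dE={dE:3d}  p={math.exp(dE / 2):.3f}")
      # dE= -1  p=0.607
      # dE= -2  p=0.368
      # dE= -4  p=0.135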

  22. 22/27 Annealing Schedule
      How should we decrease T?
      ▶ Linear
      ▶ Logarithmic
      ▶ Exponential
      If the temperature decreases slowly enough, simulated annealing is guaranteed to find the global optimum with probability approaching 1.
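
      Typical functional forms for these three schedules, as one possible reading of the bullets above (the constants are placeholders, not values from the slides):

      import math

      def linear_schedule(T0, k, rate=0.1):
          """T decreases by a fixed amount per step (floored at 0)."""
          return max(T0 - rate * k, 0.0)

      def logarithmic_schedule(T0, k):
          """Very slow decay, T0 / log(k + 2); the kind of slowly decreasing
          schedule behind the convergence guarantee above."""
          return T0 / math.log(k + 2)

      def exponential_schedule(T0, k, alpha=0.99):
          """Geometric decay: T0 * alpha^k."""
          return T0 * alpha ** k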

  23. 23/27 Examples of Simulated Annealing ▶ Example: getting a tennis ball into the deepest hole. ▶ Exploration versus exploitation

  24. 24/27 Learning Goals Local Search Algorithms Hill climbing Hill climbing with random restarts Simulated Annealing Genetic Algorithms Revisiting the Learning goals

  25. 25/27 Genetic algorithm
      1. Keep track of a set of states. Each state has a fitness.
      2. Randomly select two states to reproduce. The fitter a state, the more likely it is chosen to reproduce.
      3. Two parent states crossover to produce a child state.
      4. The child state mutates with a small independent probability.
      5. Add the child state to the new population.
      6. Repeat steps 2 to 5 until we produce a new population. Replace the old population with the new one.
      7. Repeat until one state in the population has a high enough fitness.

  26. 26/27 Algorithm 4 Genetic Algorithm
      1:  i = 0
      2:  create initial population pop(i) = {X1, ..., Xn}
      3:  while true do
      4:      if ∃ x ∈ pop(i) with high enough f(x) then
      5:          break
      6:      end if
      7:      for each Xi ∈ pop(i), calculate pr(Xi) = f(Xi) / Σi f(Xi)
      8:      for j from 1 to n do
      9:          choose a randomly based on pr(Xi)
      10:         choose b randomly based on pr(Xi)
      11:         child ← crossover(a, b)
      12:         child mutates with small probability
      13:         add child to pop(i + 1)
      14:     end for
      15:     i = i + 1
      16: end while
      17: return x ∈ pop(i) with highest fitness
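
      A compact Python sketch of this loop, assuming the caller supplies fitness, crossover, and mutate functions and an initial population (all hypothetical names); selection is fitness-proportionate as in line 7:

      import random

      def genetic_algorithm(population, fitness, crossover, mutate,
                            good_enough, mutation_rate=0.05, max_generations=1000):
          """Fitness-proportionate selection, crossover, and occasional mutation,
          repeated until some state is fit enough or a generation cap is hit."""
          for _ in range(max_generations):
              if any(fitness(x) >= good_enough for x in population):
                  break
              total = sum(fitness(x) for x in population)
              weights = [fitness(x) / total for x in population]
              new_population = []
              for _ in range(len(population)):
                  a, b = random.choices(population, weights=weights, k=2)
                  child = crossover(a, b)
                  if random.random() < mutation_rate:
                      child = mutate(child)
                  new_population.append(child)
              population = new_population
          return max(population, key=fitness)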

  27. 27/27 Revisiting the Learning Goals
      By the end of the lecture, you should be able to:
      ▶ Describe/trace/implement the local search algorithms (hill climbing, hill climbing with random restarts, simulated annealing, and genetic algorithms).
      ▶ Describe strategies for escaping local optima.
      ▶ Compare and contrast the properties of local search algorithms.
