
SLIDE 1

Announcement

"A note taker is being recruited for this class. No extra time outside of class is required. If you take clear, well-organized notes, this is a good opportunity for you to assist a fellow student and also gain volunteer hours, units or pay. If you are interested, please go to the Disability Services Center's website at www.disability.uci.edu and fill out the online notetaker application. If you have any questions you can contact them at (949) 824-7494."

SLIDE 2

From last class meeting – a "non-distance" heuristic

(Figure: an example instance with N = 3 lights and M = 4 colors.)

  • The "N Colored Lights" search problem.
    – You have N lights that can change colors.
      • Each light is one of M different colors.
    – Initial state: Each light is a given color.
    – Actions: Change the color of a specific light.
      • You don't know what action changes which light.
      • You don't know to what color the light changes.
      • Not all actions are available in all states.
    – Transition Model: RESULT(s,a) = s', where s' differs from s by exactly one light's color.
    – Goal test: A desired color for each light.
  • Find: Shortest action sequence to goal.
SLIDE 3

From last class meeting – a "non-distance" heuristic

(Figure: the same example instance with N = 3 lights and M = 4 colors.)

  • The "N Colored Lights" search problem.
    – Find: Shortest action sequence to goal.
  • h(n) = number of lights the wrong color
  • f(n) = g(n) + h(n)
    – f(n) = (under-)estimate of total path cost
    – g(n) = path cost so far = number of actions so far
  • Is h(n) admissible?
    – Admissible = never overestimates the cost to the goal.
    – Yes, because: (a) each light that is the wrong color must change; and (b) only one light changes at each action.
  • Is h(n) consistent?
    – Consistent = h(n) ≤ c(n,a,n') + h(n'), for n' a successor of n.
    – Yes, because: (a) c(n,a,n') = 1; and (b) h(n) ≤ h(n') + 1.
  • Is A* search with heuristic h(n) optimal?
    – Yes, because h(n) is admissible (and consistent).
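The admissibility claim can be checked empirically. Below is a minimal Python sketch that compares h against the true cost-to-goal (computed by BFS) on a small instance; it assumes, for the demo only, that every (light, color) change is always an available action, which the problem statement says is not generally true:

```python
from collections import deque
from itertools import product

N, M = 3, 3  # small demo instance (the slide's figure uses N = 3, M = 4)

def h(state, goal):
    """Heuristic from the slide: number of lights showing the wrong color."""
    return sum(s != g for s, g in zip(state, goal))

def successors(state):
    """Demo assumption: every (light, color) change is an available action."""
    for i in range(N):
        for c in range(M):
            if c != state[i]:
                yield state[:i] + (c,) + state[i + 1:]

def true_distances(goal):
    """Exact cost-to-goal by BFS; moves are reversible, so BFS from the goal works."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

goal = (0, 1, 2)
dist = true_distances(goal)
# Admissible: h never overestimates the true cost, for every state.
assert all(h(s, goal) <= dist[s] for s in product(range(M), repeat=N))
```

With the full action set, the true cost equals h exactly (each action fixes one wrong light), so the heuristic is tight here; with restricted actions the true cost can only grow, so h remains a lower bound.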
SLIDE 4

Local Search Algorithms

Chapter 4

SLIDE 5

Outline

  • Hill-climbing search
    – Gradient Descent in continuous spaces
  • Simulated annealing search
  • Tabu search
  • Local beam search
  • Genetic algorithms
  • Linear Programming
SLIDE 6

Local search algorithms

  • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
  • State space = set of "complete" configurations.
  • Find a configuration satisfying constraints, e.g., n-queens.
  • In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.
  • Very memory efficient (only remember the current state).
SLIDE 7

Example: n-queens

  • Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
  • Note that a state cannot be an incomplete configuration with m < n queens.

SLIDE 8

Hill-climbing search

  • "Like climbing Everest in thick fog with amnesia"

SLIDE 9

Hill-climbing search: 8-queens problem

(Figure: an 8-queens board; each number shows the value of h obtained by moving the queen in its corresponding column to that square.)

  • h = number of pairs of queens that are attacking each other, either directly or indirectly (h = 17 for the state shown above).
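The h used here is easy to compute directly. A minimal Python sketch, assuming the usual one-queen-per-column encoding (state[i] = row of the queen in column i), which is not spelled out on the slide:

```python
def attacking_pairs(state):
    """h from the slide: number of pairs of queens attacking each other,
    directly or indirectly. state[i] is the row of the queen in column i."""
    n = len(state)
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            if state[i] == state[j]:                 # same row
                pairs += 1
            elif abs(state[i] - state[j]) == j - i:  # same diagonal
                pairs += 1
    return pairs

print(attacking_pairs([0] * 8))                 # all queens on one row -> 28
print(attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))  # a known solution -> 0
```

With one queen per column, column attacks are impossible by construction, so only rows and diagonals need checking; the maximum is 8 × 7/2 = 28 pairs.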

SLIDE 10

Hill-climbing search: 8-queens problem

  • A local minimum with h = 1.
  • (What can you do to get out of this local minimum?)

SLIDE 11

Hill-climbing Difficulties

  • Problem: depending on initial state, can get stuck in local maxima.
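One standard remedy is to restart from a fresh random state whenever the climb gets stuck. A minimal steepest-descent sketch for n-queens (the conflict-count objective is minimized, so "maxima" here become minima; the 100-restart budget is an arbitrary choice):

```python
import random

def conflicts(state):
    """Pairs of queens attacking each other; state[i] = row of queen in column i."""
    return sum(
        state[i] == state[j] or abs(state[i] - state[j]) == j - i
        for i in range(len(state)) for j in range(i + 1, len(state))
    )

def hill_climb(n, rng):
    """One steepest-descent run; stops at a local (possibly global) minimum."""
    state = [rng.randrange(n) for _ in range(n)]
    while True:
        score, col, row = min(
            (conflicts(state[:c] + [r] + state[c + 1:]), c, r)
            for c in range(n) for r in range(n) if r != state[c]
        )
        if score >= conflicts(state):
            return state  # no single-queen move improves: stuck
        state[col] = row

def random_restart(n, rng, tries=100):
    """Restart from fresh random boards until one run ends conflict-free."""
    for _ in range(tries):
        state = hill_climb(n, rng)
        if conflicts(state) == 0:
            return state
    return None

solution = random_restart(8, random.Random(0))
```

Each individual run succeeds only a fraction of the time, but independent restarts make overall failure exponentially unlikely.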
SLIDE 12

Gradient Descent

  • Assume we have some cost function C(x1, ..., xn) that we want to minimize over continuous variables x1, x2, ..., xn.
  • 1. Compute the gradient: ∂C(x1, ..., xn)/∂xi for all i.
  • 2. Take a small step downhill in the direction of the gradient: xi → xi' = xi − λ ∂C(x1, ..., xn)/∂xi, for all i.
  • 3. Check if C(x1, ..., xi', ..., xn) < C(x1, ..., xi, ..., xn).
  • 4. If true then accept the move; if not, reject it.
  • 5. Repeat.
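Steps 1-5 can be sketched directly. The quadratic cost, the step size λ = 0.1, and the rule of halving λ on a rejected move are illustrative assumptions, not from the slide:

```python
def gradient_descent(C, grad_C, x, lam=0.1, steps=200):
    """Steps 1-5 from the slide; shrinking lam on rejection is an added tweak."""
    for _ in range(steps):
        g = grad_C(x)                                    # 1. compute the gradient
        x_new = [xi - lam * gi for xi, gi in zip(x, g)]  # 2. small step downhill
        if C(x_new) < C(x):                              # 3. did the cost drop?
            x = x_new                                    # 4. accept the move ...
        else:
            lam /= 2                                     # ... or reject and shrink
    return x                                             # 5. (repeat)

# Illustrative cost: C(x1, x2) = (x1 - 3)^2 + (x2 + 1)^2, minimum at (3, -1).
C = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
grad_C = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_min = gradient_descent(C, grad_C, [0.0, 0.0])
```

On this convex cost every step is accepted and the iterates contract geometrically toward (3, −1).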
SLIDE 13

Line Search

  • In GD you need to choose a step-size.
  • Line search picks a direction, v (say, the gradient direction), and searches along that direction for the optimal step:
    λ* = argmin_λ C(x_t + λ v_t)
  • Repeated doubling can be used to search effectively for the optimal step: λ → 2λ → 4λ → 8λ → ... (until the cost increases).
  • There are many methods to pick the search direction v. A very good method is "conjugate gradients".
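The repeated-doubling idea can be sketched as follows; the starting step of 1e-3 and the 1-D cost are illustrative assumptions, and the result is a coarse stand-in for the exact argmin:

```python
def doubling_line_search(C, x, v, lam0=1e-3):
    """Double lam along direction v until the cost increases, then return
    the best step seen so far."""
    best_lam, best_cost = 0.0, C(x)
    lam = lam0
    while True:
        cand = [xi + lam * vi for xi, vi in zip(x, v)]
        if C(cand) >= best_cost:
            return best_lam            # cost stopped improving: stop doubling
        best_lam, best_cost = lam, C(cand)
        lam *= 2

# Illustrative 1-D cost; at x = 0 the descent direction is v = +1 and the
# exact optimal step would be 1.0.
C = lambda x: (x[0] - 1.0) ** 2
lam_star = doubling_line_search(C, [0.0], [1.0])
```

Doubling overshoots the exact optimum by at most a factor of two (here it returns 1.024, the last step before the cost rose), which is usually accurate enough to make progress.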

SLIDE 14

Newton’s Method

  • Want to find the roots of f(x).
  • To do that, we compute the tangent at x_n and compute where it crosses the x-axis:
    f(x_n) + f'(x_n) (x_{n+1} − x_n) = 0  ⇒  x_{n+1} = x_n − f(x_n) / f'(x_n)
  • Optimization: find the roots of f'(x), giving:
    f'(x_n) + f''(x_n) (x_{n+1} − x_n) = 0  ⇒  x_{n+1} = x_n − f'(x_n) / f''(x_n)
  • Does not always converge & is sometimes unstable.
  • If it converges, it converges very fast.

(Figure: basins of attraction for x^5 − 1 = 0; darker means more iterations to converge.)
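The update rule can be sketched for root finding; f(x) = x² − 2 is an illustrative choice (its positive root is √2), not from the slide:

```python
def newton(f, f_prime, x, steps=20):
    """x_{n+1} = x_n - f(x_n)/f'(x_n): follow the tangent to its x-intercept."""
    for _ in range(steps):
        x = x - f(x) / f_prime(x)
    return x

# Root of f(x) = x^2 - 2 starting from x = 1.0; quadratic convergence near the root.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

From this starting point the error roughly squares each iteration, reaching machine precision in a handful of steps; a poor starting point or a vanishing derivative is where the instability mentioned above shows up.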
SLIDE 15

Simulated annealing search

  • Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency.
  • This is like smoothing the cost landscape.
SLIDE 16

Simulated annealing search

  • Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency.
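A generic sketch of the idea, phrased as minimization; the geometric cooling schedule, the acceptance rule exp(−dE/T), and the toy 1-D landscape are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, x, T0=1.0, alpha=0.995, steps=5000, seed=0):
    """Always accept downhill moves; accept uphill ("bad") moves with
    probability exp(-dE / T), where T decays geometrically."""
    rng = random.Random(seed)
    T, best = T0, x
    for _ in range(steps):
        y = neighbor(x, rng)
        dE = cost(y) - cost(x)
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = y                      # bad moves slip through while T is high
        if cost(x) < cost(best):
            best = x                   # remember the best state ever visited
        T *= alpha                     # cooling: bad moves become rarer
    return best

# Toy 1-D landscape with several local minima (an illustrative assumption).
cost = lambda x: x * x + 3 * math.sin(5 * x)
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best = simulated_annealing(cost, neighbor, 4.0)
```

Because the best-so-far state is tracked separately, the returned state is never worse than the starting point, even if the walk later wanders uphill.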

SLIDE 17

Properties of simulated annealing search

  • One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 (however, this may take VERY long).
    – However, in any finite search space, RANDOM GUESSING will also find a global optimum with probability approaching 1.
  • Widely used in VLSI layout, airline scheduling, etc.
SLIDE 18

Tabu Search

  • A simple local search but with a memory.
  • Recently visited states are added to a tabu list and are temporarily excluded from being visited again.
  • This way, the solver moves away from already explored regions and (in principle) avoids getting stuck in local minima.
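A minimal sketch of the mechanism; the fixed-length tabu list, the tenure of 5, and the toy integer landscape are illustrative assumptions:

```python
from collections import deque

def tabu_search(cost, neighbors, start, tenure=5, steps=50):
    """Move to the best non-tabu neighbor each step, even if it is uphill;
    states visited within the last `tenure` moves are excluded (the memory)."""
    tabu = deque([start], maxlen=tenure)
    x, best = start, start
    for _ in range(steps):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break                      # everything nearby is tabu
        x = min(candidates, key=cost)  # may be worse than the current state
        tabu.append(x)
        if cost(x) < cost(best):
            best = x
    return best

# Toy 1-D integer landscape: a local minimum at 2, the global minimum at 7.
landscape = {0: 5, 1: 3, 2: 1, 3: 4, 4: 6, 5: 4, 6: 2, 7: 0, 8: 3, 9: 5}
cost = landscape.__getitem__
neighbors = lambda x: [y for y in (x - 1, x + 1) if y in landscape]
best_state = tabu_search(cost, neighbors, 0)
```

Plain hill climbing from 0 would stop in the dip at 2; because 2 becomes tabu, the search is forced uphill through 3, 4, and 5 and eventually reaches the global minimum at 7.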

SLIDE 19

Local beam search

  • Keep track of k states rather than just one.
  • Start with k randomly generated states.
  • At each iteration, all the successors of all k states are generated.
  • If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
  • Concentrates search effort in areas believed to be fruitful.
    – May lose diversity as search progresses, resulting in wasted effort.
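A minimal sketch of the loop; using zero cost as the goal test and the toy "find 42 on the integer line" problem are illustrative assumptions:

```python
import heapq
import random

def local_beam_search(cost, neighbors, starts, k=3, steps=100):
    """Keep the k best states; pool all successors plus the current beam,
    then keep the k best from the pool. Zero cost serves as the goal test."""
    beam = sorted(starts, key=cost)[:k]
    for _ in range(steps):
        pool = {y for x in beam for y in neighbors(x)} | set(beam)
        beam = heapq.nsmallest(k, pool, key=cost)
        if cost(beam[0]) == 0:
            break  # goal reached
    return beam[0]

# Toy problem: reach 42 on the integer line, with small and large moves.
rng = random.Random(0)
cost = lambda x: abs(x - 42)
neighbors = lambda x: [x - 10, x - 1, x + 1, x + 10]
found = local_beam_search(cost, neighbors, [rng.randrange(100) for _ in range(6)])
```

Keeping the current beam in the pool makes the best cost non-increasing; the diversity problem mentioned above shows up here as all k states collapsing onto the same neighborhood within a few iterations.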

SLIDE 20

Genetic algorithms

  • A successor state is generated by combining two parent states.
  • Start with k randomly generated states (the population).
  • A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
  • Evaluation function (fitness function): higher values for better states.
  • Produce the next generation of states by selection, crossover, and mutation.

SLIDE 21

(Figure: four 8-queens boards with fitness values 24, 23, 20, and 11, and each board's probability of being regenerated in the next generation.)

  • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
  • P(child) = 24/(24+23+20+11) = 31%
  • P(child) = 23/(24+23+20+11) = 29%, etc.
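The selection/crossover/mutation loop can be sketched on bit strings. The OneMax fitness (count of 1 bits) stands in for the 28-pair queens fitness, and the elitism line (carrying the current best into the next generation) is an added tweak not mentioned on the slide:

```python
import random

def genetic_algorithm(fitness, population, rng, generations=100, p_mut=0.02):
    """Fitness-proportional selection, single-point crossover, bit-flip mutation."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = [max(population, key=fitness)]  # elitism: keep the best (added tweak)
        while len(next_gen) < len(population):
            mom, dad = rng.choices(population, weights=weights, k=2)  # selection
            cut = rng.randrange(1, len(mom))                          # crossover point
            child = mom[:cut] + dad[cut:]
            next_gen.append([b ^ 1 if rng.random() < p_mut else b
                             for b in child])                         # mutation
        population = next_gen
    return max(population, key=fitness)

# OneMax: fitness = number of 1 bits; 20-bit strings, population of 30.
rng = random.Random(0)
onemax = lambda bits: sum(bits)
pop = [[rng.randrange(2) for _ in range(20)] for _ in range(30)]
best_individual = genetic_algorithm(onemax, pop, rng)
```

The `weights=` argument of `random.choices` implements exactly the P(child) proportions on this slide: an individual with fitness 24 out of a total of 78 is chosen as a parent 31% of the time.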
SLIDE 22

Linear Programming

  • Problems of the sort: maximize c^T x, subject to: Ax ≤ b; Bx = c.
  • Very efficient "off-the-shelf" solvers are available for LPs.
  • They can solve large problems with thousands of variables.
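For intuition only: in two dimensions, a bounded LP's optimum lies at a vertex where two constraint lines meet, so a toy solver can simply enumerate those intersections. Real off-the-shelf solvers use the simplex or interior-point methods instead; the specific objective and constraints below are illustrative assumptions:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b (2 variables) by vertex enumeration:
    if the feasible region is bounded, the optimum sits where two constraints meet."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraint lines: no unique intersection
        x = (b1 * a2[1] - b2 * a1[1]) / det  # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

# maximize 3x + 5y  s.t.  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
A = [[1, 1], [1, 3], [-1, 0], [0, -1]]  # x >= 0 is written as -x <= 0, etc.
b = [4, 6, 0, 0]
point, value = solve_lp_2d([3, 5], A, b)
```

Here the optimum is the vertex (3, 1) where the two resource constraints intersect, with objective value 14; enumerating all vertex pairs is exponential in general, which is exactly why practical solvers walk the vertices (simplex) or cut through the interior instead.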