
Today: Iterative improvement algorithms (see Russell and Norvig, chapters 4 & 5). Alan Smaill, Fundamentals of Artificial Intelligence, Oct 15, 2007.



1. Today

   • Local search and optimisation
   • Constraint satisfaction problems (CSPs)
   • CSP examples
   • Backtracking search for CSPs

   Iterative improvement algorithms (see Russell and Norvig, chapters 4 & 5)

   In many optimization problems the path is irrelevant; the goal state itself is the solution. The state space is then a set of "complete" configurations, and the task is either to find an optimal configuration (e.g., TSP) or to find a configuration satisfying constraints (e.g., a timetable). In such cases we can use iterative improvement algorithms: keep a single "current" state and try to improve it. Typically these algorithms run in constant space and are suitable for online as well as offline search.

   Example: Travelling Salesperson Problem

   Start with any complete tour and repeatedly perform pairwise exchanges of edges.

   Example: n-queens

   Put n queens on an n × n board with no two queens on the same row, column, or diagonal. Move a queen to reduce the number of conflicts (a minimal sketch of this repair loop follows this item).
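The following is a minimal Python sketch of the "move a queen to reduce conflicts" idea for n-queens, in the style of min-conflicts local search. The function names, the random choice of a conflicted row, and the step limit are our own illustrative choices, not from the slides.

```python
import random

def conflicts(cols, row, col):
    """Number of other queens attacking square (row, col); cols[r] is the column of the queen in row r."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts_queens(n, max_steps=10000):
    """Place one queen per row at random, then repeatedly move the queen of a
    conflicted row to the column that minimises its conflicts."""
    cols = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                  # no two queens attack each other
        r = random.choice(conflicted)
        cols[r] = min(range(n), key=lambda c: conflicts(cols, r, c))
    return None                          # no solution found within max_steps

print(min_conflicts_queens(8))
```

As on the slide, only a single "current" placement is kept, so the memory used is independent of how long the search runs.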

2. Hill-climbing (or gradient ascent/descent)

   "Like climbing Everest in thick fog with amnesia"

   function Hill-Climbing(problem) returns a state that is a local maximum
     inputs: problem, a problem
     local variables: current, a node
                      neighbour, a node
     current ← Make-Node(Initial-State[problem])
     loop do
       neighbour ← a highest-valued successor of current
       if Value[neighbour] < Value[current] then return State[current]
       current ← neighbour
     end

   Hill-climbing contd.

   Problem: depending on the initial state, hill-climbing can get stuck on local maxima. [Figure: objective value plotted over the state space, marking a global maximum and a local maximum.] In continuous spaces there are also problems with choosing the step size, and convergence can be slow.

   Simulated annealing

   Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their size and frequency. The name comes from the process used to harden metals and glass by heating them to a high temperature and then letting them cool slowly, to reach a low-energy crystalline state.

   function Simulated-Annealing(problem, schedule) returns a solution state
     inputs: problem, a problem
             schedule, a mapping from time to "temperature"
     local variables: current, a node
                      next, a node
                      T, a "temperature" controlling the probability of downward steps
     current ← Make-Node(Initial-State[problem])
     for t ← 1 to ∞ do
       T ← schedule[t]
       if T = 0 then return current
       next ← a randomly selected successor of current
       ∆E ← Value[next] − Value[current]
       if ∆E > 0 then current ← next
       else current ← next only with probability e^(∆E/T)

   A runnable sketch of this loop appears after this item.
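Here is a minimal runnable Python sketch of the simulated-annealing pseudocode above. The `neighbours`, `value`, and `schedule` parameters and the toy objective at the end are our own illustrative choices standing in for the slide's abstract Make-Node/Value notation.

```python
import math
import random

def simulated_annealing(initial, neighbours, value, schedule):
    """neighbours(s) returns the successors of s, value(s) is the objective to
    maximise, and schedule(t) maps time step t to a temperature (0 means stop)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(neighbours(current))
        delta = value(nxt) - value(current)
        # Always accept uphill moves; accept downhill moves with probability e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy usage: maximise -(x - 3)^2 over the integers by moving one step left or right.
best = simulated_annealing(
    initial=0,
    neighbours=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
    schedule=lambda t: max(0, 2.0 * (1 - t / 500)),   # linear cooling, stops at t = 500
)
print(best)
```

The linear cooling schedule is only one possibility; any mapping from time to temperature that eventually reaches zero fits the pseudocode.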

3. Properties of simulated annealing

   In the inner loop, this picks a random move:
   – if it improves the state, it is accepted;
   – if not, it is accepted with a probability that decreases with how much worse the state is and with the time elapsed.

   It can be shown that, if T is decreased slowly enough, the algorithm always reaches the best state. Is this necessarily an interesting guarantee??

   Devised by Metropolis et al., 1953, for physical process modelling; now widely used in VLSI layout, airline scheduling, etc.

   Constraint satisfaction problems (CSPs)

   Standard search problem: the state is a "black box", any old data structure that supports goal test, eval and successor.

   CSP: the state is defined by variables Xi with values from domains Di; the goal test is a set of constraints specifying allowable combinations of values for subsets of variables.

   This is a simple example of a formal representation language. It allows useful general-purpose algorithms with more power than standard search algorithms.

   Example: Map-Colouring

   Colour the map of Australia (Western Australia, Northern Territory, South Australia, Queensland, New South Wales, Victoria, Tasmania) with three colours so that no two adjacent states have the same colour.

   Map colouring as constraint problem

   Variables: WA, NT, Q, NSW, V, SA, T
   Domains: Di = {red, green, blue}
   Constraints: WA ≠ NT, WA ≠ SA, . . . (if the language allows this), or
     (WA, NT) ∈ {(red, green), (red, blue), (green, red), . . .}
     (WA, Q) ∈ {(red, green), (red, blue), (green, red), . . .}
     . . .
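A minimal sketch of this map-colouring CSP as data: variables, domains, and binary not-equal constraints over adjacent states. The dictionary-based encoding and the `consistent` helper are ours; the adjacency pairs follow the map of Australia described on the slide.

```python
# Australia map-colouring as a CSP: variables, domains, binary not-equal constraints.
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
               ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def consistent(assignment):
    """True if no adjacency constraint is violated by the (possibly partial) assignment."""
    return all(assignment[a] != assignment[b]
               for a, b in constraints
               if a in assignment and b in assignment)

print(consistent({"WA": "red", "NT": "green", "SA": "blue"}))   # True
print(consistent({"WA": "red", "NT": "red"}))                   # False: WA and NT are adjacent
```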

4. Example: Map-Colouring contd.

   Solutions satisfy all constraints, e.g. {WA = red, NT = green, SA = blue, . . .}.

   Constraint graph

   Binary CSP: each constraint relates at most two variables.
   Constraint graph: nodes are variables, arcs show constraints. [Figure: the constraint graph with nodes WA, NT, SA, Q, NSW, V and T, and arcs between adjacent states.]
   General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!

   Varieties of CSPs

   Discrete variables:
   • finite domains; size d ⇒ O(d^n) complete assignments
     ♦ e.g., Boolean CSPs, incl. Boolean satisfiability
   • infinite domains (integers, strings, etc.)
     ♦ e.g., job scheduling, where variables are start/end days for each job
     ♦ need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
     ♦ linear constraints solvable, nonlinear undecidable

   Continuous variables:
     ♦ e.g., start/end times for Hubble Telescope observations
     ♦ linear constraints solvable in poly time by LP methods

   Varieties of constraints

   • Unary constraints involve a single variable, e.g., SA ≠ green
   • Binary constraints involve pairs of variables, e.g., SA ≠ WA
   • Higher-order constraints involve 3 or more variables, e.g., cryptarithmetic column constraints
   • Preferences (soft constraints), e.g., red is better than green; often representable by a cost for each variable assignment → constrained optimization problems

   (These varieties are illustrated as predicates in the sketch after this item.)
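For illustration only, the first three constraint varieties can be written as Python predicates over an assignment dictionary; the encoding and the sample values below are ours, not from the slides.

```python
# Each constraint is a predicate over an assignment dict.
unary        = lambda a: a["SA"] != "green"                        # one variable
binary       = lambda a: a["SA"] != a["WA"]                        # two variables
higher_order = lambda a: a["O"] + a["O"] == a["R"] + 10 * a["X1"]  # 3+ variables (a cryptarithmetic column)

colours = {"SA": "blue", "WA": "red"}
digits = {"O": 4, "R": 8, "X1": 0}
print(unary(colours), binary(colours), higher_order(digits))   # True True True
```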

5. Example: Cryptarithmetic

         T W O
       + T W O
       -------
       F O U R

   (with carries X1, X2, X3 over the ones, tens and hundreds columns)

   Variables: ?   Domains: ?   Constraints: ?

   Example: Cryptarithmetic contd.

   Variables: F, T, U, W, R, O, X1, X2, X3
   Domains: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
   Constraints: alldiff(F, T, U, W, R, O), O + O = R + 10 · X1, . . .
   (A brute-force check of these constraints is sketched after this item.)

   Real-world CSPs

   • Assignment problems, e.g., who teaches what class
   • Timetabling problems, e.g., which class is offered when and where?
   • Hardware configuration
   • Spreadsheets
   • Transportation scheduling
   • Factory scheduling
   • Floorplanning

   Notice that many real-world problems involve real-valued variables.

   Standard search formulation (incremental)

   Let's start with the straightforward, dumb approach, then fix it. States are defined by the values assigned so far.
   • Initial state: the empty assignment, { }
   • Successor function: assign a value to an unassigned variable that does not conflict with the current assignment ⇒ fail if no legal assignments (not fixable!)
   • Goal test: the current assignment is complete

   – This is the same for all CSPs!
   – Every solution appears at depth n with n variables ⇒ use depth-first search
   – Path is irrelevant, so can also use a complete-state formulation
   – b = (n − ℓ)d at depth ℓ, hence n!·d^n leaves!
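A brute-force check of the cryptarithmetic CSP above, spelling out what its constraints mean. The column equations with carries extend the slide's first constraint in the obvious way; the leading-digit conditions T ≠ 0 and F ≠ 0 are the usual implicit assumption and are ours.

```python
from itertools import permutations

def solve_two_plus_two():
    """Enumerate assignments of distinct digits to F, T, U, W, R, O and keep
    those satisfying the column constraints of TWO + TWO = FOUR."""
    solutions = []
    for F, T, U, W, R, O in permutations(range(10), 6):   # alldiff(F, T, U, W, R, O)
        if T == 0 or F == 0:                               # leading digits non-zero (assumed)
            continue
        X1, r = divmod(O + O, 10)        # ones column:     O + O      = R + 10*X1
        X2, u = divmod(W + W + X1, 10)   # tens column:     X1 + W + W = U + 10*X2
        X3, o = divmod(T + T + X2, 10)   # hundreds column: X2 + T + T = O + 10*X3
        if (r, u, o, X3) == (R, U, O, F):                  # thousands column: X3 = F
            solutions.append({"F": F, "T": T, "U": U, "W": W, "R": R, "O": O})
    return solutions

print(solve_two_plus_two()[0])   # prints one valid assignment
```

This generate-and-test loop is exactly the "dumb" approach the next slide warns about; backtracking search, below, avoids enumerating complete assignments.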

6. Backtracking search

   Variable assignments are commutative, i.e., [WA = red then NT = green] is the same as [NT = green then WA = red]. So we only need to consider assignments to a single variable at each node ⇒ b = d and there are d^n leaves.

   Depth-first search for CSPs with single-variable assignments is called backtracking search. Backtracking search is the basic uninformed algorithm for CSPs; it can solve n-queens for n ≈ 25.

   Backtracking search contd.

   function Backtracking-Search(csp) returns solution/failure
     return Recursive-Backtracking([ ], csp)

   function Recursive-Backtracking(assigned, csp) returns solution/failure
     if assigned is complete then return assigned
     var ← Select-Unassigned-Variable(Variables[csp], assigned, csp)
     for each value in Order-Domain-Values(var, assigned, csp) do
       if value is consistent with assigned according to Constraints[csp] then
         result ← Recursive-Backtracking([var = value | assigned], csp)
         if result ≠ failure then return result
     end
     return failure

   (A runnable Python version of this search, applied to map colouring, follows this item.)

   Backtracking example

   [Figures only; the worked example is not recoverable from the transcript.]
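The sketch below is a runnable Python version of Recursive-Backtracking, specialised to binary not-equal constraints and demonstrated on the Australia map-colouring CSP. The naive "first unassigned variable, domain order" choices stand in for the slide's Select-Unassigned-Variable and Order-Domain-Values placeholders, and the helper names are ours.

```python
def backtracking_search(variables, domains, neighbours):
    """Depth-first search with single-variable assignments over not-equal constraints."""
    def consistent(var, value, assigned):
        # value is consistent if no already-assigned neighbour has the same value
        return all(assigned.get(other) != value for other in neighbours[var])

    def recurse(assigned):
        if len(assigned) == len(variables):          # assignment is complete
            return assigned
        var = next(v for v in variables if v not in assigned)   # Select-Unassigned-Variable
        for value in domains[var]:                              # Order-Domain-Values
            if consistent(var, value, assigned):
                result = recurse({**assigned, var: value})
                if result is not None:
                    return result
        return None                                             # failure: backtrack

    return recurse({})

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
neighbours = {v: [b for a, b in edges if a == v] + [a for a, b in edges if b == v]
              for v in variables}

print(backtracking_search(variables, domains, neighbours))
```

Because Tasmania has no arcs in the constraint graph, any colour works for T, which is the "independent subproblem" observation from the constraint-graph slide.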
