  1. Using CP When You Don't Know CP Christian Bessiere LIRMM (CNRS/U. Montpellier)

  2. An illustrative example • A 5-room flat (bedroom, bath, kitchen, sitting, dining) on a 6-room pattern • The pattern is a 2×3 grid of positions: nw, n, ne on the north side; sw, s, se on the south side • Constraints of the builder: – Kitchen and dining must be linked – Bath and kitchen must have a common wall – Bath must be far from sitting – Sitting and dining form a single room

  3. Problem • How to propose all possible plans? ⇒ a constraint network that encodes the constraints of the builder

  4. Library of constraints • Available constraints over positions: – X ≠ Y, X = Y – Next(X,Y) = { (nw,n),(nw,sw),(n,nw),(n,ne),(n,s),(ne,n),(ne,se),(sw,nw),(sw,s),(s,sw),(s,n),(s,se),(se,s),(se,ne) } – Far(X,Y) = { (nw,ne),(nw,s),(nw,se),(n,sw),(n,se),(ne,nw),(ne,sw),(ne,s),(sw,n),(sw,ne),(sw,se),(s,nw),(s,ne),(se,nw),(se,n),(se,sw) }
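The Next and Far tables need not be typed by hand: reading the pattern as a 2×3 grid, Next is orthogonal adjacency (sharing a wall) and Far is its complement over distinct positions. A minimal sketch, assuming that grid layout:

```python
from itertools import product

# 2x3 pattern: (row, col) coordinates per position (assumed layout of the slide)
COORDS = {"nw": (0, 0), "n": (0, 1), "ne": (0, 2),
          "sw": (1, 0), "s": (1, 1), "se": (1, 2)}

# Next(X, Y): the two positions share a wall (orthogonal adjacency)
NEXT = {(a, b) for a, b in product(COORDS, COORDS)
        if abs(COORDS[a][0] - COORDS[b][0]) + abs(COORDS[a][1] - COORDS[b][1]) == 1}

# Far(X, Y): distinct positions that do not share a wall
FAR = {(a, b) for a, b in product(COORDS, COORDS)
       if a != b and (a, b) not in NEXT}

print(len(NEXT), len(FAR))  # 14 and 16 pairs, matching the tables above
```

Generating both relations from one geometric definition also gives a consistency check: every ordered pair of distinct positions is in exactly one of Next and Far.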

  5. A possible viewpoint (variables, domains) • Variables: – B (bedroom) – W (washroom) – K (kitchen) – S (sitting) – D (dining) • Domains: {nw,n,ne,sw,s,se}

  6. A constraint network • Variables K, W, D, S, B, each with domain {nw,n,ne,sw,s,se} • Constraints: next(K,D), next(K,W), next(S,D), far(W,S), Alldiff(K,W,D,S,B) • Client wishes: Bedroom: east; Sitting: south ⇒ a solution satisfies both the builder constraints and the client wishes
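The whole network is small enough to solve by brute force over room placements. A sketch, assuming next = shares a wall on the 2×3 grid, far = distinct and not next, and interpreting the client wishes as "east" = {ne, se} and "south" = {sw, s, se} (these interpretations, and the exact set of builder constraints, are read off the slides, not stated formally):

```python
from itertools import permutations

POS = ["nw", "n", "ne", "sw", "s", "se"]
COORDS = {p: divmod(i, 3) for i, p in enumerate(POS)}  # 2x3 grid (assumed layout)

def nxt(a, b):  # the two positions share a wall
    (r1, c1), (r2, c2) = COORDS[a], COORDS[b]
    return abs(r1 - r2) + abs(c1 - c2) == 1

def far(a, b):
    return a != b and not nxt(a, b)

solutions = []
# assign the 5 rooms K, W, D, S, B to 5 of the 6 positions (alldiff is implicit)
for K, W, D, S, B in permutations(POS, 5):
    if (nxt(K, D) and nxt(K, W) and nxt(S, D) and far(W, S)  # builder constraints
            and B in ("ne", "se") and S in ("sw", "s", "se")):  # client wishes
        solutions.append((K, W, D, S, B))

print(solutions)
```

Under these assumptions the solver enumerates exactly the plans acceptable to both builder and client; dropping the two client-wish tests yields all plans the builder allows.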

  7. Constraint Programming • Modelling: from the Problem to (Variables, Domains, Constraints) • Solving: from the constraint network to a Solution

  8. Modelling (“it’s an art, not a science”) • In the 80s it was considered trivial – Zebra problem (Lewis Carroll) or random problems • But on “real” problems: – Which variables? Which domains? – Which constraints for encoding the problem? • And efficiency? – Which constraints for speeding up the solver? • Global constraints, symmetries… ⇒ Everything rests on the expertise of the user

  9. If you’re not an expert? 1. Choice of variables/domains 2. Constraint acquisition 3. Improve a basic model

  10. Choice of variables/domains (viewpoints) • From historical data (former solutions) • Solutions described in tables (flat data): each former solution is a table of (Room, Position) rows, e.g. Wash nw, Kitchen n, Bedroom sw, Dining s, Sitting se

  11. Extract viewpoints • From the (Room, Position) table of a former solution — Wash nw, Kitchen n, Bedroom sw, Dining s, Sitting se — extract one variable per room: X_Wash ∈ {nw,n,ne,sw,s,se}, …, X_Sitting ∈ {nw,n,ne,sw,s,se}

  12. Extract viewpoints • Two viewpoints: – X_B,…,X_S ∈ {nw,n,ne,sw,s,se} (one variable per room) – X_nw,…,X_se ∈ {W,B,K,D,S,∅} (one variable per position) • Trivial viewpoints: – X_1,…,X_5 ∈ {B-nw,B-n,B-sw,…,S-s,S-se} – X_B-nw,…,X_S-se ∈ {0,1}

  13. Connect viewpoints • VP1: X_B,…,X_S ∈ {nw,n,ne,sw,s,se} • VP2: X_nw,…,X_se ∈ {B,W,K,D,S,∅} • Channelling constraints: – X_B = nw ↔ X_nw = B ⇒ “nw” is taken at most once in VP1 ⇒ alldiff(X_B,…,X_S) is a constraint in VP1 [like in Law, Lee, Smith 07]
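The channelling argument can be made concrete: translating a VP1 assignment (room → position) into VP2 (position → room) fails exactly when two rooms claim one position, which is exactly when alldiff fails in VP1. A minimal sketch with illustrative variable names:

```python
def channel(vp1):
    """Build the VP2 assignment (position -> room) from a VP1 assignment
    (room -> position), enforcing X_room = pos <-> X_pos = room."""
    vp2 = {}
    for room, pos in vp1.items():
        if pos in vp2:  # two rooms on one position: channelling is violated,
            raise ValueError(f"position {pos} taken twice")  # i.e. alldiff fails
        vp2[pos] = room
    return vp2

vp1 = {"B": "se", "W": "nw", "K": "n", "D": "s", "S": "sw"}
print(channel(vp1))  # every position used at most once => alldiff holds for free
```

The positions not mentioned in the returned VP2 assignment are the ones that take the value ∅.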

  14. Application: Sudoku • Solutions as flat tables with columns L (line), C (column), V (value): (L1,C4,3), (L1,C5,1), (L2,C1,3), (L2,C3,4), (L3,C4,2), (L3,C9,8), … • Three viewpoints: X_LC = V, X_LV = C, X_CV = L ⇒ Alldiffs learned for free

  15. Connect viewpoints • We can derive more than just alldiff: cardinality constraints can be detected • Example: a timetabling problem in which 3 math courses are given ⇒ one of the viewpoints will contain 3 variables representing these 3 courses ⇒ in all other viewpoints, we can put a cardinality constraint forcing value “math” to be taken 3 times

  16. If you’re not an expert? • Choice of variables/domains • Constraint Acquisition – Space of networks – Redundancy – Queries • Improve a basic model

  17. Acquire constraints • The user doesn’t know how to specify constraints • She knows how to discriminate solutions from non-solutions – Ex: valid flat vs. invalid flat ⇒ Use of machine learning techniques – Interaction by examples (positive e+ or negative e−) – Acquisition of a network describing the problem

  18. Space of possible networks • Variables X_K, X_D, X_B, X_W, X_S, with an unknown constraint on each pair • Language: ? → { =, ≠, next, far } • Bias (candidate constraints): X_S=X_W; next(X_S,X_B); …; X_K≠X_D; far(X_K,X_D) • A candidate network may accept some negative examples or reject some positive examples

  19. Compact SAT encoding • A SAT formula K representing all possible networks: – each constraint c_i → a literal b_i – Models(K) = version space – an example e− rejected by {c_i, c_j, c_k} → a clause (b_i ∨ b_j ∨ b_k) – an example e+ rejected by c_i → a unit clause (¬b_i) • m ∈ models(K) ⇒ φ(m) = {c_i | m(b_i)=1} accepts all positive examples and rejects all negative examples
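The encoding can be sketched end-to-end on a toy bias over one pair of variables (the bias and examples below are illustrative, not from the slides): each classified example becomes clauses over the b_i, and the version space is recovered by enumerating models.

```python
from itertools import product

NEXT = {("nw","n"),("n","nw"),("n","ne"),("ne","n"),("nw","sw"),("sw","nw"),
        ("n","s"),("s","n"),("ne","se"),("se","ne"),("sw","s"),("s","sw"),
        ("s","se"),("se","s")}

# toy bias: three candidate constraints on (K, D), each mapped to a literal b_i
BIAS = {
    "next(K,D)": lambda K, D: (K, D) in NEXT,
    "far(K,D)":  lambda K, D: K != D and (K, D) not in NEXT,
    "K!=D":      lambda K, D: K != D,
}

clauses = []  # the CNF formula K, literals written as (constraint, polarity)

def add_example(K, D, positive):
    violated = [c for c, check in BIAS.items() if not check(K, D)]
    if positive:
        clauses.extend([(c, False)] for c in violated)  # unit clauses (not b_i)
    else:
        clauses.append([(c, True) for c in violated])   # (b_i or b_j or ...)

add_example("nw", "n", True)    # positive example: K next to D
add_example("nw", "se", False)  # negative example: K far from D

def models():  # version space = Models(K), by brute force over truth assignments
    for bits in product([False, True], repeat=len(BIAS)):
        m = dict(zip(BIAS, bits))
        if all(any(m[c] == pol for c, pol in cl) for cl in clauses):
            yield {c for c, on in m.items() if on}  # phi(m)

print(list(models()))
```

After the two examples, every model contains next(K,D) and excludes far(K,D); whether K≠D belongs to the target network is still undecided, which is exactly the remaining version space.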

  20. Reduce the space • Each classified example prunes the candidate sets C(X_K,X_D), C(X_D,X_S), C(X_K,X_S), each initially {≠, =, far, next} • Positive examples e1+, e3+ discard every candidate constraint they violate • Negative examples e2−, e4− each yield a clause over the constraints they violate, e.g. e4− gives next(X_D,X_S) ∨ far(X_K,X_S)

  21. Redundancy • Constraints are not independent: “next(X_K,X_D) ∧ next(X_D,X_S) ⇒ far(X_K,X_S)” • See local consistencies • This is different from attribute-value learning

  22. Redundancy • Redundancy prevents convergence ⇒ add a set R of redundancy rules: – alldiff(X_1,…,X_n) ⇒ X_i≠X_j, ∀i,j – next(X_K,X_D) ∧ next(X_D,X_S) ⇒ far(X_K,X_S) • In K we already have: – next(X_D,X_S) ∨ far(X_K,X_S) – next(X_K,X_D) • So, from K + R we deduce far(X_K,X_S) • Version space = Models(K + R) – Good properties when R is complete
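The deduction on this slide is plain clausal reasoning over the b_i, so it can be checked mechanically. A sketch with a brute-force entailment test (constraint names illustrative; the rule is encoded as the clause ¬next(K,D) ∨ ¬next(D,S) ∨ far(K,S)):

```python
from itertools import product

VARS = ["next(K,D)", "next(D,S)", "far(K,S)"]

# K: what the examples established so far
K = [
    [("next(D,S)", True), ("far(K,S)", True)],  # next(D,S) or far(K,S)
    [("next(K,D)", True)],                      # next(K,D)
]
# R: the redundancy rule next(K,D) and next(D,S) => far(K,S), as a clause
R = [[("next(K,D)", False), ("next(D,S)", False), ("far(K,S)", True)]]

def entails(cnf, lit):
    """cnf |= lit iff no model of cnf falsifies lit (brute force over 2^3 maps)."""
    for bits in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, bits))
        if all(any(m[v] == pol for v, pol in cl) for cl in cnf):
            if m[lit[0]] != lit[1]:
                return False
    return True

print(entails(K, ("far(K,S)", True)))      # K alone does not decide far(K,S)
print(entails(K + R, ("far(K,S)", True)))  # K + R deduces far(K,S)
```

This mirrors the slide exactly: whether next(D,S) holds or not, K + R forces far(K,S), so that constraint is pinned down without asking the user any further question.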

  23. Queries (active learning) • Examples often bring little new information (e.g., a negative plan with the kitchen far from the dining room) • The system proposes examples (queries) to speed up convergence • Example e rejected by k constraints from the space: – e positive ⇒ k constraints discarded from the space – e negative ⇒ a clause of size k • Good query = an example that reduces the space as much as possible whatever the answer
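"Reduces the space as much as possible whatever the answer" is a worst-case criterion, and on a small space it can be computed directly: a positive answer discards the candidate networks that reject the example, a negative answer discards those that accept it. A sketch on the toy one-pair bias (all names and candidate examples are illustrative):

```python
from itertools import chain, combinations

NEXT = {("nw","n"),("n","nw"),("n","ne"),("ne","n"),("nw","sw"),("sw","nw"),
        ("n","s"),("s","n"),("ne","se"),("se","ne"),("sw","s"),("s","sw"),
        ("s","se"),("se","s")}
CHECK = {  # toy bias over a single pair of variables (K, D)
    "next(K,D)": lambda K, D: (K, D) in NEXT,
    "far(K,D)":  lambda K, D: K != D and (K, D) not in NEXT,
    "K!=D":      lambda K, D: K != D,
}

# current version space: every subset of the bias is still a candidate network
names = list(CHECK)
version_space = [frozenset(s) for s in chain.from_iterable(
    combinations(names, r) for r in range(len(names) + 1))]

def rejects(net, e):  # a network rejects e iff e violates one of its constraints
    return any(not CHECK[c](*e) for c in net)

def best_query(space, candidates, rejects):
    """Pick the candidate example that shrinks the space most in the worst case:
    a positive answer discards the rejecting networks, a negative one the rest."""
    def worst_case_gain(e):
        out = sum(1 for net in space if rejects(net, e))
        return min(out, len(space) - out)
    return max(candidates, key=worst_case_gain)

candidates = [("nw", "n"), ("nw", "se"), ("nw", "nw")]
print(best_query(version_space, candidates, rejects))
```

The example ("nw","nw") violates all three candidate constraints, so a positive answer would be very informative but a negative one nearly useless; the worst-case criterion therefore prefers an example that splits the space more evenly.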

  24. Queries • Negative example e1 ⇒ cl_e1 = b_1 ∨ … ∨ b_k ∈ K + R – find m ∈ models(K + R) such that a single literal b_i in cl_e1 is false (m contains ¬b_i) – find e2 ∈ sol(φ(m)): e2 violates only constraint c_i ⇒ whatever the answer to the query “e2?”, b_i or ¬b_i goes in K • If sol(φ(m)) = ∅: any conflict-set is a new redundancy rule ⇒ quick convergence

  25. An example of constraint acquisition in robotics (by Mathias Paulin) • The goal is to automate the burden of implementing the elementary actions of a robot • Elementary actions are usually implemented by hand by engineers (complex physics laws, kinetic momentum, differential equations, etc.)

  26. No need for a user • Instead of interacting with a user, examples are classified by running the robot with given values of its sensorimotor actuators • If the action was performed correctly, the example is positive • With expensive humanoid robots, a simulator allows easy classification without actually running the robot

  27. Elementary actions • Each action has variables representing: – the observed world before the action – the power applied to each actuator – the world after the action • Constraint acquisition learns a constraint network on these variables such that its solutions are valid actions

  28. Planning a task • The overall goal is to build a plan composed of elementary actions • The planning problem is solved by a CP solver • It is convenient to encode actions as sub-CSPs

  29. Tribot Mindstorms NXT • 3 motors • 4 sensors • 5 elementary actions to combine • Discretization of variables

  30. Experiment • Modelling by CONACQ • CONACQ generates a CHOCO model used by CSP-Plan [Lopez 2003] ⇒ Objective: catch the mug!

  31. If you’re not an expert? • Choice of variables/domains • Constraint acquisition • Improve the basic model

  32. Improve the model • Basic model M1: BT-search + propagation, but solve(M1) ≈ ∞ • Experts add implicit constraints that increase constraint propagation (“the globalest is the best”) • An implicit constraint doesn’t change the set of solutions ⇒ We will learn implicit global constraints

  33. Implicit global constraints • Model M1 on X_1 … X_n; sol(M1) contains e.g. 112345, 332223, 551554, 124135, … with at most two 1’s per solution • M1 + {card[#1 ≤ 2](X_1..X_n)}: same solutions as M1 • But solve(M1 + card) is faster than solve(M1) • card[..] + card[..] + card[..] = gcc[P]; gcc = propagation with a flow
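The card parameters can be read off historical solutions by counting value occurrences. A minimal sketch using the sample solutions from the slide (whether the learned bounds are truly implicit for M1, rather than just valid on the sample, is the question the relaxation technique addresses):

```python
from collections import Counter

# historical solutions of M1 over X_1..X_6 (the sample tuples from the slide)
solutions = ["112345", "332223", "551554", "124135"]

# for each value, the largest number of occurrences seen in any single solution
upper = Counter()
for sol in solutions:
    for value, count in Counter(sol).items():
        upper[value] = max(upper[value], count)

# card[#v <= upper[v]](X_1..X_n) holds on every known solution; the conjunction
# of these cardinality constraints over all values is a gcc[P](X_1..X_n)
print(dict(upper))
```

For value 1 this recovers the bound card[#1 ≤ 2] stated on the slide.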

  34. Learn parameters P of gcc[P](X_1..X_n) • Learning the tightest gcc[P](X_1..X_n) from Sol(M1) directly is very hard • Relax M1 into an easier model M2: learning gcc[P'](X_1..X_n) from Sol(M2) is very easy • Adding gcc[P'](X_1..X_n) to M1 is easy and preserves Sol(M1)

  35. Example: task allocation • Projects to be assigned to students while minimising disappointment • Model M1 designed by some of the students (after 2h of courses on CP) • optimize(M1) > 12h
