Incorporating Learning in BDI Agents


  1. Incorporating Learning in BDI Agents
     Stéphane Airiau (1), Lin Padgham (2), Sebastian Sardina (2), Sandip Sen (3)
     1 ILLC, University of Amsterdam; 2 RMIT University, Melbourne, Australia; 3 University of Tulsa, OK, USA
     ALAMS+ALAg 2008 Workshop at AAMAS, Estoril, Portugal, 2008

  2. BDI agents in a nutshell
     Beliefs, Desires, Intentions.
     Beliefs: knowledge about the world and the agent's own internal state.
     Desires (or goals): what the agent has decided to work towards achieving.
     Intentions: how the agent has decided to tackle these goals.
     No planning from first principles: agents use a plan library (a library of partially instantiated plans to be used to achieve the goals).
     Practical reasoning agents: they quickly reason about and react to asynchronous events.

  3. Definition (Plan): e : ψ ← P, where
     e is the event that triggers the plan,
     ψ is the context condition under which the plan can be applied,
     P is the body of the plan (a sequence of actions and/or subgoals).
     Goal-plan tree: the P_i are plans, the G_i goals, the SG_i subgoals; alternative plans for a goal are OR-branches, and the subgoals within a plan body are AND-branches.
     [Figure: goal-plan tree with goal G, alternative plans P1 and P2 (OR), subgoals SG1, SG2, SG3 under P1 (AND), and plans P3, P4, P5 handling SG1, SG2, SG3 respectively.]
     Failure recovery: when a step fails, causing a plan to fail, an alternative plan is tried. Example: if both P1 and P2 are applicable, then when P4 fails, P2 can be tried.
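To make the OR/AND structure and the failure-recovery rule concrete, here is a minimal Python sketch (not from the talk; the achieve/execute names and the encoding of subgoals as strings are illustrative assumptions):

    class Plan:
        """A library plan  e : psi <- P."""
        def __init__(self, trigger, context, body):
            self.trigger = trigger   # event e that triggers the plan
            self.context = context   # context condition psi over the beliefs
            self.body = body         # P: sequence of actions and/or subgoals

    def achieve(goal, library, beliefs):
        """OR-node: try each applicable plan for `goal` in turn."""
        for plan in (p for p in library[goal] if p.context(beliefs)):
            # AND-node: every step of the body must succeed.
            if all(execute(step, library, beliefs) for step in plan.body):
                return True
            # A step failed, so this plan failed: fall through and try the
            # next applicable alternative (failure recovery).
        return False

    def execute(step, library, beliefs):
        """A step is either a subgoal (given by name) or a primitive action."""
        if isinstance(step, str):
            return achieve(step, library, beliefs)   # recurse into the subtree
        return step(beliefs)   # action: a callable returning success/failure

With a library where G maps to [P1, P2] and P1's body is ["SG1", "SG2", "SG3"], a failure of P4 (the only plan for SG2) makes SG2, and hence P1, fail, after which P2 is tried, exactly as in the slide's example.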

  4. BDI execution algorithm
     [Figure: BDI architecture. Input events feed an event queue; the reasoning/deliberation component consults the beliefs, the (static) plan library and the (dynamic) intentions, and produces actions.]
     1. Take the next event (internal or external).
     2. Modify any goals, beliefs, intentions (a new event may cause an update of the beliefs, causing a modification of the goals and/or intentions).
     3. Select an applicable plan to respond to this event.
     4. Place this plan in the intention base.
     5. Take the next step on a selected intention (this may execute an action or generate a new event).
     BDI agents are well suited for complex applications with soft real-time reasoning and control requirements.
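A minimal sketch of one pass of this loop (the class shape and the dict-based events and plans are illustrative assumptions, not any particular BDI platform's API):

    from collections import deque

    class BDIAgent:
        def __init__(self, plan_library):
            self.beliefs = {}                 # dynamic
            self.event_queue = deque()        # input: internal/external events
            self.intentions = []              # dynamic: partially executed plans
            self.plan_library = plan_library  # static

        def step(self):
            if self.event_queue:
                # 1. Take the next event (internal or external).
                event = self.event_queue.popleft()
                # 2. Revise beliefs (and hence goals/intentions) accordingly.
                self.beliefs.update(event.get("belief_updates", {}))
                # 3. Select an applicable plan to respond to the event ...
                applicable = [p for p in self.plan_library
                              if p["trigger"] == event["name"]
                              and p["context"](self.beliefs)]
                if applicable:
                    # 4. ... and place it in the intention base.
                    self.intentions.append({"plan": applicable[0], "pc": 0})
            if self.intentions:
                # 5. Take the next step on a selected intention: it may
                #    execute an action or generate a new event.
                intention = self.intentions[-1]
                body = intention["plan"]["body"]
                if intention["pc"] < len(body):
                    body[intention["pc"]](self)
                    intention["pc"] += 1
                else:
                    self.intentions.pop()   # plan body finished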

  5. Issues
     BDI agents lack learning capabilities to modify their behavior (e.g. in case of frequent failures).
     Plans and context conditions are programmed by hand. In a complex environment, context conditions may be hard to capture precisely:
     too loose: the plan is deemed applicable when it is not → failures;
     too tight: the plan is deemed not applicable when it actually is → a goal may not appear achievable when it is.
     Research goal: add learning capabilities to adapt and refine the context conditions of plans.
     A first step: use a decision tree (DT) in addition to the context condition; each plan has a decision tree telling whether it is applicable.
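One way to realise "a DT in addition to the context condition" — a sketch using scikit-learn's DecisionTreeClassifier, where the min_samples threshold and the refit-on-demand strategy are assumptions made to keep the code short:

    from sklearn.tree import DecisionTreeClassifier

    class LearningPlan:
        """A plan whose hand-written context condition is augmented by a
        decision tree learnt from past execution outcomes."""
        def __init__(self, trigger, context, body, min_samples=10):
            self.trigger = trigger
            self.context = context        # programmer-supplied condition psi
            self.body = body
            self.outcomes = []            # (world-state vector, succeeded?)
            self.min_samples = min_samples

        def record(self, state, succeeded):
            self.outcomes.append((state, succeeded))

        def is_applicable(self, beliefs, state):
            if not self.context(beliefs):
                return False              # psi still rules the plan out
            if len(self.outcomes) < self.min_samples:
                return True               # too little data: trust psi alone
            X = [s for s, _ in self.outcomes]
            y = [ok for _, ok in self.outcomes]
            dt = DecisionTreeClassifier().fit(X, y)  # refit on demand: fine
            return bool(dt.predict([state])[0])      # for a sketch, not speed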

  6. Example of a DT
     The environment is described by three boolean attributes a, b and c.
     [Figure: decision tree splitting on b at the root, on a when b is true, and on c when b is false, with further a-splits below c; each leaf carries counts of positive and negative training examples, e.g. 110+, 0- and 1+, 50-.]
     Context condition converted from the decision tree: (a ∧ b) ∨ (a ∧ ¬b ∧ c) ∨ (a ∧ ¬b ∧ ¬c) (note that this simplifies to just a).
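The conversion is mechanical: collect the path condition of every leaf where positive examples outnumber negative ones, and take the disjunction. A sketch (the dict encoding is mine, and the placement of the slide's leaf counts is reconstructed, so treat the numbers as illustrative):

    def tree_to_dnf(node, path=()):
        """One conjunction per leaf whose positives outnumber its negatives."""
        if "attr" not in node:           # leaf: counts = (positives, negatives)
            pos, neg = node["counts"]
            return [" ∧ ".join(path)] if pos > neg else []
        attr = node["attr"]
        return (tree_to_dnf(node["true"], path + (attr,)) +
                tree_to_dnf(node["false"], path + ("¬" + attr,)))

    tree = {"attr": "b",
            "true":  {"attr": "a",
                      "true":  {"counts": (40, 10)},
                      "false": {"counts": (4, 25)}},
            "false": {"attr": "c",
                      "true":  {"attr": "a",
                                "true":  {"counts": (110, 0)},
                                "false": {"counts": (1, 50)}},
                      "false": {"attr": "a",
                                "true":  {"counts": (20, 5)},
                                "false": {"counts": (1, 35)}}}}

    print(" ∨ ".join(f"({c})" for c in tree_to_dnf(tree)))
    # (b ∧ a) ∨ (¬b ∧ c ∧ a) ∨ (¬b ∧ ¬c ∧ a)

which is the slide's condition up to the ordering of the literals.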

  7. Learning issues
     When to collect data? In case of failure:
     did the failure occur because the current plan was not applicable? → correct data;
     or did it fail because other plans below were mistakenly chosen? → incorrect data.
     [Figure: goal-plan tree with top goal G0, alternative plans P01 and P02 (OR), subgoals SG1-SG5 (AND), and plans P11, P12, P21, P22, P31, P41, P51 below them (OR).]
     When to start to use the decision tree?
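One conservative recording rule in that spirit — a sketch, where stable() is a hypothetical stand-in for "the decision trees below are already considered reliable", and plan is assumed to expose record() and outcomes as in the earlier LearningPlan sketch:

    def record_outcome(plan, state, succeeded, plans_below):
        """Record a training instance for `plan` only when we can trust it."""
        if succeeded:
            plan.record(state, True)       # successes are always informative
        elif all(stable(p) for p in plans_below):
            # Every choice below was made by a reliable DT, so the failure
            # is attributable to `plan` itself: correct data.
            plan.record(state, False)
        # Otherwise drop the instance: the failure may only reflect a bad
        # choice further down the goal-plan tree (incorrect data).

    def stable(plan, min_samples=100):
        """Hypothetical reliability test: enough observations recorded."""
        return len(plan.outcomes) >= min_samples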

  8. Initial experiments
     Three mechanisms for plan selection:
     CL: all trees are learnt at the same time; all data is used.
     BU: bottom-up learning; DTs higher in the hierarchy wait for the DTs below to be formed.
     PS: probabilistic selection; plans are selected according to the frequency of success given by their decision tree.
     The DT is used after k observed instances for CL and BU (k large), and after a few instances for PS (5-10, enough to build an initial DT).
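A sketch of PS (the leaf_counts accessor and the Laplace smoothing are assumptions; the talk only specifies selection according to the DT's success frequency):

    import random

    def success_estimate(plan, state):
        """Laplace-smoothed success frequency at the DT leaf matching `state`."""
        pos, neg = plan.leaf_counts(state)   # hypothetical accessor
        return (pos + 1) / (pos + neg + 2)

    def select_plan(applicable, state):
        """Pick among applicable plans with probability proportional to
        their estimated success frequency."""
        weights = [success_estimate(p, state) for p in applicable]
        return random.choices(applicable, weights=weights, k=1)[0]

Sampling, rather than always taking the best-looking plan, keeps exploring plans whose trees are still immature, which is presumably why PS can start using its DTs after only 5-10 instances.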

  9. Initial results
     Setup: 17 plans; the world state is defined by six boolean attributes; the depth of the goal-plan tree is 4. All context conditions are set to true; k = 100.
     Non-deterministic world: each action may fail with probability 0.1.
     [Plot: frequency of success (0 to 1) against number of instances (0 to 1400) for probabilistic selection, BU and CL.]

  10. Conclusion
     Although in theory one must wait for the DTs below to be accurate before collecting data for the DTs higher up, in practice the DTs handle the spurious data as noise.
     Using PS, the context conditions are learnt faster and are accurate.
     Future work: test on larger goal-plan trees; try better criteria for deciding when to start using the DTs.

  11. Contacts
     Stéphane Airiau: stephane@illc.uva.nl
     Lin Padgham: lin.padgham@rmit.edu.au
     Sebastian Sardina: sebastian.sardina@rmit.edu.au
     Sandip Sen: sandip@utulsa.edu
