

  1. Multiagent Supervised Training with Agent Hierarchies and Manual Behavior Decomposition
     Keith Sullivan and Sean Luke
     Department of Computer Science, George Mason University, Fairfax, VA 22030 USA

  2. RoboCup Motivation

  3. Motivation for Training
     ◮ Programming agent behaviors is tedious
       ◮ Code, test, debug cycles
     ◮ Changing agent behavior is desirable
       ◮ Non-programmers (consumers, animators, etc.)
       ◮ Future tasks, possibly greatly different from the original task
     ◮ Learning from Demonstration (LfD)
       ◮ Iteratively builds a policy from examples (state/action pairs)
       ◮ Supervised learning

  4. Hierarchical Training of Agent Behaviors (HiTAB)
     ◮ Motivation: rapidly train complex behaviors with very few examples
     ◮ Behaviors are automata
     ◮ Expandable behavior library
       ◮ Start with atomic behaviors
       ◮ Iteratively build more complex behaviors via scaffolding
     ◮ Features describe internal and world conditions
       ◮ Continuous, toroidal, categorical (boolean)
     ◮ Behaviors and features are parameterizable
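
As an illustration of why toroidal features get their own type: a heading or bearing wraps around, so comparing two values means measuring distance on a circle rather than on a line. A minimal sketch, not HiTAB's code, with an invented function name:

```python
import math

def toroidal_difference(a, b, period=2 * math.pi):
    """Smallest signed difference between two toroidal values, e.g. bearings.

    Illustrative only: a plain (a - b) would treat 0.1 and 6.2 radians as far
    apart, even though they are nearly the same heading.
    """
    d = (a - b) % period
    return d - period if d > period / 2 else d
```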

  5. HiTAB (cont.)
     ◮ Gathering examples is expensive
       ◮ Each example is an experiment conducted in real time
     ◮ Admission: closer to programming by example than to machine learning
     ◮ Limited number of samples, but a high-dimensional problem!
       ◮ Behavior decomposition via hierarchical finite automata (HFA)
       ◮ Per-behavior feature reduction
     ◮ Learning transition functions → a supervised classification task
       ◮ C4.5 with probabilistic leaf nodes
       ◮ Different types of features
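
A rough sketch of the learning step: for each source state of a behavior, the gathered examples form a small (feature vector → next state) classification problem. Here scikit-learn's DecisionTreeClassifier stands in for C4.5 with probabilistic leaves, and the per-behavior feature reduction is just a trainer-chosen column mask; names and structure are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def learn_transition(examples, selected_features):
    """Learn T for one source state from (feature_vector, next_state) examples.

    `selected_features` is the per-behavior feature reduction: indices of the
    features the trainer judged relevant for this behavior.
    """
    X = np.array([[f[i] for i in selected_features] for f, _ in examples])
    y = np.array([next_state for _, next_state in examples])
    clf = DecisionTreeClassifier().fit(X, y)   # stand-in for C4.5
    return lambda f: clf.predict([[f[i] for i in selected_features]])[0]
```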

  6. Example Behavior
     [Figure: (a) a flat Moore machine and (b) the equivalent HFA for a can-collecting robot. States shown include Start, Looking (turn), Spinning (turn), Acquiring (forward), and Hide Under Bed, plus the macro states Collect Cans and Run Away; transitions fire on conditions such as "can see a can", "can't see can", "next to can", "lost can", "at desired angle", "human present", and "human absent".]
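
The point of the HFA panel is that previously trained behaviors become single states of a higher-level machine. A heavily hedged sketch of just the top level, using the state and feature names from the figure (the exact transition structure is an assumption):

```python
# Sketch only: the trained "Collect Cans" and "Run Away" macros appear as
# single states of the top-level HFA; transition details are assumed.
top_level = {
    "start_state": "Collect Cans",                      # macro trained earlier
    "transitions": {
        ("Collect Cans", "human present"): "Run Away",  # macro containing Hide Under Bed
        ("Run Away", "human absent"): "Collect Cans",
    },
}
```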

  7. Formal Model
     ◮ S = {S_1, ..., S_n} is the set of states in the automaton. Among other states, there is one start state S_1 and zero or more flag states.
     ◮ B = {B_1, ..., B_k} is the set of basic (hard-coded) behaviors.
     ◮ F = {F_1, ..., F_m} is the set of observable features in the environment.
     ◮ T : F_1 × ... × F_m × S → S is the transition function, which maps the current state S_t and the current feature vector f_t to a new state S_{t+1}.
     ◮ We generalize the model with free variables (parameters) G_1, ..., G_n for basic behaviors and features.
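
A minimal data-structure sketch of this model, assuming Python and invented names (it is not the authors' implementation): each state carries an attached behavior, which may itself be a previously trained HFA, and the transition function is queried once per tick.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Sequence


@dataclass
class HFA:
    states: List[str]                          # S = {S_1, ..., S_n}; states[0] is the start state
    behaviors: Dict[str, Callable[[], None]]   # behavior run in each state (basic, or a wrapped HFA)
    features: List[str]                        # F = {F_1, ..., F_m}
    transition: Callable[[Sequence[float], str], str]        # T : F_1 x ... x F_m x S -> S
    params: Dict[str, object] = field(default_factory=dict)  # free variables G_1, ..., G_n

    def step(self, feature_vector: Sequence[float], current: str) -> str:
        """One tick: query T, transition, and perform the new state's behavior."""
        nxt = self.transition(feature_vector, current)
        self.behaviors[nxt]()
        return nxt
```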

  8. Using HiTAB
     ◮ Running HiTAB
       ◮ Begin in the start state
       ◮ Query the transition function, transition, and perform the associated behavior
     ◮ Training with HiTAB
       ◮ Alternate between training mode and testing mode
       ◮ Build an example database, adding corrections as needed
       ◮ Trim unused behaviors and features before saving
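
The run/train alternation might look like the following sketch, reusing the HFA structure above; `sense` and `demonstrator` are hypothetical callbacks, not HiTAB's API.

```python
def run_episode(hfa, sense, max_ticks=1000, demonstrator=None, examples=None):
    """Execute an HFA; if a demonstrator is supplied, log training examples."""
    state = hfa.states[0]                      # begin in the start state
    for _ in range(max_ticks):
        f = sense()                            # current feature vector
        if demonstrator is not None:
            # Training mode: the demonstrator names the state to move to, and the
            # (features, current state, target) triple is stored as an example.
            target = demonstrator(f, state)
            examples.append((f, state, target))
            hfa.behaviors[target]()            # still perform the chosen behavior
            state = target
        else:
            state = hfa.step(f, state)         # testing mode: query T and transition
    return state
```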

  9. Homogeneous Agent Hierarchy
     ◮ Problem
       ◮ Size of the learning space grows → number of samples grows
       ◮ Inverse problem between micro- and macro-level behaviors
     ◮ Agent hierarchy: a tree with coordinator agents as non-leaves and regular agents as leaves
     ◮ Coordinator agent features: statistical information about subsidiary agents
     ◮ Agents at the same level run the same HFA, but may be in different states
     ◮ Train agents bottom-up
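
For instance, a coordinator's feature vector could be built from simple statistics over its subordinates, as in this illustrative sketch (the attribute names and the particular statistics are assumptions, not from the paper):

```python
from statistics import mean


def coordinator_features(subordinates, target):
    """Hypothetical coordinator features: statistics over subsidiary agents."""
    n_carrying = sum(1 for a in subordinates if a.state == "Carrying")
    avg_dist = mean(a.distance_to(target) for a in subordinates)
    return [n_carrying / len(subordinates),   # fraction of subordinates carrying
            avg_dist]                         # mean distance to the target
```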

  10. Notions of Homogeneity
      [Figure: example coordinator trees illustrating homogeneity. One tree has a "Collective Patrol" coordinator over agents running "Disperse" and "Patrol"; a larger tree has a "Save Humanity" coordinator over "Collective Patrol" and "Attack" subtrees, with "Disperse" at the leaves.]

  11. Experiments
      ◮ Simulated box foraging
        ◮ Known deposit location
        ◮ Randomly placed boxes
        ◮ 10 boxes in all experiments
      ◮ 50 agents: two levels of hierarchy
        ◮ Teams of 5 agents
        ◮ Teams grouped into groups of 5
      ◮ Boxes require either 5 or 25 agents to pull back
      ◮ 100 iterations of 100,000 timesteps each
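
Summarized as a hypothetical configuration (parameter names invented for illustration):

```python
EXPERIMENT = {
    "n_agents": 50,
    "team_size": 5,                    # leaf-level teams of 5 agents
    "teams_per_group": 5,              # teams grouped into groups of 5
    "n_boxes": 10,
    "agents_to_pull_box": (5, 25),     # small vs. large boxes
    "n_iterations": 100,
    "timesteps_per_iteration": 100_000,
}
```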

  12. Simulation

  13. Results
      [Plot: mean collected boxes (0 to 140) vs. timestep (0 to 100,000) for four conditions: Trained Swarm, Trained Groups, Hand-Coded Swarm, and Hand-Coded Groups.]

  14. Preliminary Multirobot Work

  15. Future Work
      ◮ Training Multiple Agents
      ◮ Behavior Bootstrapping
      ◮ Heterogeneous Groups
        ◮ Behavior and Capability
      ◮ Dynamic Hierarchies
      ◮ Correction of Demonstrator Error
