
Foundations of Artificial Intelligence
8. Machine Learning: Learning from Observations
Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller
Albert-Ludwigs-Universität Freiburg, July 1, 2011

Learning

What is learning? An agent learns when it improves its performance w.r.t. a specific task with experience, e.g., game programs.

Why learn?
→ Engineering, philosophy, cognitive science
→ Data mining (discovery of new knowledge through data analysis)

No intelligence without learning!

Contents

1. The learning agent
2. Types of learning
3. Decision trees

The Learning Agent

So far an agent's percepts have only served to help the agent choose its actions. Now they will also serve to improve future behavior.

[Figure: architecture of the learning agent. The critic compares sensor input against an external performance standard and sends feedback to the learning element; the learning element makes changes to the performance element, drawing on its knowledge and passing learning goals to the problem generator; the performance element maps percepts from the sensors to actions on the actuators in the environment.]
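
To make the architecture concrete, here is a minimal Python sketch of the control loop the figure implies. All class and method names (Critic.evaluate, LearningElement.update, and so on) are illustrative assumptions, not part of the lecture.

    class LearningAgent:
        """Sketch of the learning-agent architecture (hypothetical API)."""

        def __init__(self, performance_element, learning_element, critic, problem_generator):
            self.performance_element = performance_element  # maps percepts to actions
            self.learning_element = learning_element        # improves the performance element
            self.critic = critic                            # judges behavior against a fixed standard
            self.problem_generator = problem_generator      # proposes exploratory actions

        def step(self, percept):
            # The critic evaluates the percept w.r.t. the external performance standard ...
            feedback = self.critic.evaluate(percept)
            # ... and the learning element uses that feedback to change the performance element.
            self.learning_element.update(self.performance_element, feedback)
            # Sometimes take an exploratory action suggested by the problem generator;
            # otherwise act as the (current) performance element dictates.
            suggestion = self.problem_generator.suggest()
            return suggestion if suggestion is not None else self.performance_element.choose_action(percept)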

Building Blocks of the Learning Agent

Performance element: processes percepts and chooses actions.
→ Corresponds to the agent model we have studied so far.
Learning element: carries out improvements.
→ Requires self-knowledge and feedback on how the agent is doing in the environment.
Critic: evaluates the agent's behavior based on a given external behavioral measure.
→ Feedback.
Problem generator: suggests explorative actions that lead the agent to new experiences.

The Learning Element

Its design is affected by four major issues:
- Which components of the performance element are to be learned?
- What representation should be chosen?
- What form of feedback is available?
- Which prior information is available?

Types of Feedback During Learning

The type of feedback available for learning is usually the most important factor in determining the nature of the learning problem.
- Supervised learning: involves learning a function from examples of its inputs and outputs.
- Unsupervised learning: the agent has to learn patterns in the input when no specific output values are given.
- Reinforcement learning: the most general form of learning, in which the agent is not told what to do by a teacher. Rather, it must learn from a reinforcement or reward. It typically involves learning how the environment works.

Inductive Learning

An example is a pair (x, f(x)). The complete set of examples is called the training set.
Pure inductive inference: for a collection of examples for f, return a function h (hypothesis) that approximates f.
The function h typically is a member of a hypothesis space H.
A good hypothesis should generalize the data well, i.e., it will predict unseen examples correctly.
A hypothesis is consistent with the data set if it agrees with all the data.
How do we choose from among multiple consistent hypotheses?
Ockham's razor: prefer the simplest hypothesis consistent with the data.
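
The last two points can be made concrete with a tiny experiment. The sketch below assumes a hypothesis space H of polynomials up to a fixed degree, takes "simplest" to mean lowest degree, and treats a hypothesis as consistent if it reproduces every training output within a small tolerance (a necessity with floating-point arithmetic); max_degree and tol are illustrative parameters.

    import numpy as np

    def simplest_consistent_polynomial(xs, ys, max_degree=10, tol=1e-6):
        """Ockham's razor over H = polynomials: return the lowest-degree
        polynomial that agrees with all examples (x_i, y_i)."""
        for degree in range(max_degree + 1):      # try the simplest hypotheses first
            coeffs = np.polyfit(xs, ys, degree)   # least-squares fit of this degree
            h = np.poly1d(coeffs)
            if np.all(np.abs(h(xs) - ys) < tol):  # consistent with all the data?
                return h
        return None                               # no consistent hypothesis in H

    # Three collinear training examples are already explained by degree 1,
    # so the degree-1 line is preferred over any higher-degree consistent fit.
    h = simplest_consistent_polynomial(np.array([0.0, 1.0, 2.0]), np.array([1.0, 3.0, 5.0]))
    print(h)  # the line 2 x + 1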

Example: Fitting a Function to a Data Set

[Figure: four plots of f(x) over x, panels (a)-(d).]
(a) consistent hypothesis that agrees with all the data
(b) degree-7 polynomial that is also consistent with the data set
(c) data set that can be approximated consistently with a degree-6 polynomial
(d) sinusoidal exact fit to the same data

Decision Trees

Input: description of an object or a situation through a set of attributes.
Output: a decision, i.e., the predicted output value for the input.
Both input and output can be discrete or continuous.
Discrete-valued functions lead to classification problems; learning a continuous function is called regression.

Boolean Decision Tree

Input: a set of vectors of input attributes X and a single Boolean output value y (goal predicate).
Output: yes/no decision based on the goal predicate.
Goal of the learning process: a definition of the goal predicate in the form of a decision tree.
Boolean decision trees represent Boolean functions.
Properties of (Boolean) decision trees:
- An internal node of the decision tree represents a test of a property.
- Branches are labeled with the possible values of the test.
- Each leaf node specifies the Boolean value to be returned if that leaf is reached.

When to Wait for Available Seats at a Restaurant

Goal predicate: WillWait
Test predicates:
- Patrons: How many guests are there? (none, some, full)
- WaitEstimate: How long do we have to wait? (0-10, 10-30, 30-60, >60 minutes)
- Alternate: Is there an alternative? (T/F)
- Hungry: Am I hungry? (T/F)
- Reservation: Have I made a reservation? (T/F)
- Bar: Does the restaurant have a bar to wait in? (T/F)
- Fri/Sat: Is it Friday or Saturday? (T/F)
- Raining: Is it raining outside? (T/F)
- Price: How expensive is the food? ($, $$, $$$)
- Type: What kind of restaurant is it? (French, Italian, Thai, Burger)
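
The node/branch/leaf structure above is easy to state in code. A minimal Python sketch, where the Leaf and Node classes and the hand-built WillWait fragment are illustrative assumptions (the lecture's full tree appears on the next slide):

    class Leaf:
        """Leaf node: the Boolean value returned if this leaf is reached."""
        def __init__(self, value):
            self.value = value

    class Node:
        """Internal node: a test of one attribute, one branch per possible value."""
        def __init__(self, attribute, branches):
            self.attribute = attribute  # attribute to test at this node
            self.branches = branches    # dict: attribute value -> subtree

    def classify(tree, example):
        """Follow the branch matching the example's attribute value until a leaf."""
        while isinstance(tree, Node):
            tree = tree.branches[example[tree.attribute]]
        return tree.value

    # Hand-built fragment of a WillWait tree (not the lecture's full tree):
    fragment = Node("Patrons", {
        "none": Leaf(False),
        "some": Leaf(True),
        "full": Node("Hungry", {"F": Leaf(False), "T": Leaf(True)}),
    })
    print(classify(fragment, {"Patrons": "full", "Hungry": "T"}))  # True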

Restaurant Example (Decision Tree)

[Figure: decision tree for WillWait. The root tests Patrons? (none → No, some → Yes, full → WaitEstimate?); the WaitEstimate? branches (>60, 30-60, 10-30, 0-10) lead to further tests on Alternate?, Hungry?, Reservation?, Fri/Sat?, Bar?, and Raining?, with leaves labeled Yes or No.]

Expressiveness of Decision Trees

Each decision tree hypothesis for the WillWait goal predicate can be seen as an assertion of the form

∀s WillWait(s) ⇔ (P₁(s) ∨ P₂(s) ∨ … ∨ Pₙ(s))

where each Pᵢ(s) is the conjunction of tests along a path from the root of the tree to a leaf with a positive outcome.

Any Boolean function can be represented by a decision tree.

Limitation: all tests always involve only one object, and the language of traditional decision trees is inherently propositional. The first-order condition

∃r₂ NearBy(r₂, s) ∧ Price(r, p) ∧ Price(r₂, p₂) ∧ Cheaper(p₂, p)

cannot be represented as a test. We could always add another test called CheaperRestaurantNearby, but a decision tree with all such attributes would grow exponentially.

Compact Representations

For every Boolean function we can construct a decision tree by translating every row of a truth table to a path in the tree. This can lead to a tree whose size is exponential in the number of attributes.

Although decision trees can represent many functions with smaller trees, there are functions that require an exponentially large decision tree:

Parity function: p(x) = 1 if an even number of the inputs are 1, and 0 otherwise.
Majority function: m(x) = 1 if more than half of the inputs are 1, and 0 otherwise.

There is no consistent representation that is compact for all possible Boolean functions.

The Training Set of the Restaurant Example

Classification of an example = value of the goal predicate:
true → positive example
false → negative example

[Table: the training set of restaurant examples, listing each example's attribute values and its WillWait classification.]
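
Both functions are one-liners in Python, which makes the contrast with their tree size vivid: the sketch below (the tuple-of-bits input representation is an assumption) is constant-size as code, while a decision tree for parity must distinguish all 2^n input rows.

    from itertools import product

    def parity(x):
        """1 if an even number of the inputs are 1, else 0."""
        return 1 if sum(x) % 2 == 0 else 0

    def majority(x):
        """1 if more than half of the inputs are 1, else 0."""
        return 1 if sum(x) > len(x) / 2 else 0

    # Enumerate the full truth table for n = 3: a decision tree for parity
    # needs a distinct path for each of these 2^3 = 8 rows.
    for x in product((0, 1), repeat=3):
        print(x, parity(x), majority(x))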

Inducing Decision Trees from Examples

Naïve solution: we simply construct a tree with one path to a leaf for each example, testing all the attributes along the path and attaching the classification of the example to the leaf. Whereas the resulting tree will correctly classify all given examples, it will not say much about other cases: it just memorizes the observations and does not generalize.

Inducing Decision Trees from Examples (2)

Smallest solution: applying Ockham's razor, we should instead find the smallest decision tree that is consistent with the training set. Unfortunately, for any reasonable definition of "smallest", finding the smallest tree is intractable.

Dilemma: the smallest tree is intractable to find, while the simplest-to-construct (naïve) tree does no learning. We can, however, give a decision tree learning algorithm that generates "smallish" trees.

Idea of Decision Tree Learning

Divide-and-conquer approach (sketched in code after this section):
- Choose an (or better: the best) attribute.
- Split the training set into subsets, each corresponding to a particular value of that attribute.
- Now that we have divided the training set into several smaller training sets, we can recursively apply this process to the smaller training sets.

Splitting Examples (1)

[Figure: the training examples split by the Type attribute into French, Italian, Thai, and Burger subsets.]

Type is a poor attribute, since it leaves us with four subsets, each of them containing the same number of positive and negative examples. It does not reduce the problem complexity.
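
The divide-and-conquer idea maps directly onto a short recursive procedure. The following Python sketch is a simplified version of the usual decision-tree induction scheme; the attribute-selection heuristic is left as a plug-in parameter, since a concrete measure of attribute quality has not been introduced at this point in the lecture, and all names here are illustrative.

    from collections import Counter

    def learn_tree(examples, attributes, choose_attribute):
        """Divide and conquer: examples is a list of (attribute_dict, bool) pairs,
        attributes maps each attribute name to its list of possible values."""
        labels = [label for _, label in examples]
        if len(set(labels)) == 1:                 # all examples agree: return a leaf
            return labels[0]
        if not attributes:                        # no tests left: majority vote
            return Counter(labels).most_common(1)[0][0]
        best = choose_attribute(examples, attributes)   # pick the (best) attribute
        remaining = {a: v for a, v in attributes.items() if a != best}
        tree = {}
        for value in attributes[best]:            # one subtree per value of best ...
            subset = [(x, y) for x, y in examples if x[best] == value]
            if subset:                            # ... learned recursively from the subset
                tree[value] = learn_tree(subset, remaining, choose_attribute)
            else:                                 # empty subset: fall back to majority
                tree[value] = Counter(labels).most_common(1)[0][0]
        return (best, tree)

    # Placeholder heuristic: just take the first attribute. A good heuristic
    # prefers attributes (unlike Type above) whose subsets are not evenly
    # mixed between positive and negative examples.
    def first_attribute(examples, attributes):
        return next(iter(attributes))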

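Why Type is a poor split can also be checked numerically: count the positive and negative examples in each subset a split produces. A sketch, assuming the (attribute_dict, bool) example format used above:

    def split_counts(examples, attribute):
        """Positive/negative counts for each subset produced by splitting on attribute."""
        counts = {}
        for x, label in examples:
            pos, neg = counts.get(x[attribute], (0, 0))
            counts[x[attribute]] = (pos + int(label), neg + int(not label))
        return counts

    # For the slides' restaurant data, splitting on Type yields four subsets that
    # are evenly mixed between positive and negative examples, whereas splitting
    # on Patrons yields pure 'none' (all negative) and 'some' (all positive) subsets.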