Learning From Data, Lecture 2: The Perceptron


  1. Learning From Data, Lecture 2: The Perceptron
     • The Learning Setup
     • A Simple Learning Algorithm: PLA
     • Other Views of Learning
     • Is Learning Feasible: A Puzzle
     M. Magdon-Ismail, CSCI 4100/6100

  2. recap: The Plan
     1. What is Learning?
     2. Can we do it?
     3. How to do it? (concepts)
     4. How to do it well? (theory, practice)
     5. General principles?
     6. Advanced techniques.
     7. Other learning paradigms.
     Our language will be mathematics; our sword will be computer algorithms.

  3. recap: The Key Players
     • Input x ∈ R^d = X. (Salary, debt, years in residence, . . . )
     • Output y ∈ {−1, +1} = Y. (Approve credit or not.)
     • Target function f : X → Y, the true relationship between x and y. (The target f is unknown.)
     • Data set D = (x_1, y_1), . . . , (x_N, y_N), with y_n = f(x_n). (Data on customers.)
     X, Y and D are given by the learning problem; the target f is fixed but unknown. We learn the function f from the data D.

  4. recap: Summary of the Learning Setup
     UNKNOWN TARGET FUNCTION f : X → Y (ideal credit approval formula), with y_n = f(x_n)
           ↓
     TRAINING EXAMPLES (x_1, y_1), (x_2, y_2), . . . , (x_N, y_N) (historical records of credit customers)
           ↓
     LEARNING ALGORITHM A  ←  HYPOTHESIS SET H (set of candidate formulas)
           ↓
     FINAL HYPOTHESIS g ≈ f (learned credit approval formula)

  5. A Simple Learning Model
     • Input vector x = [x_1, . . . , x_d]^t.
     • Give importance weights to the different inputs and compute a "Credit Score":
           Credit Score = Σ_{i=1}^{d} w_i x_i.
     • Approve credit if the Credit Score is acceptable:
           Approve credit if Σ_{i=1}^{d} w_i x_i > threshold   (Credit Score is good),
           Deny credit if    Σ_{i=1}^{d} w_i x_i < threshold   (Credit Score is bad).
     • How to choose the importance weights w_i?
           input x_i is important           ⟹ large weight |w_i|
           input x_i beneficial for credit  ⟹ positive weight w_i > 0
           input x_i detrimental for credit ⟹ negative weight w_i < 0
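A minimal sketch of this scoring rule in Python; the weights, inputs, and threshold below are invented for illustration, not values from the lecture:

```python
# Hedged sketch of the credit-score model: the weight vector w, the input x,
# and the threshold are made-up illustrative values.
def credit_decision(x, w, threshold):
    score = sum(w_i * x_i for w_i, x_i in zip(w, x))  # Credit Score = sum_i w_i x_i
    return +1 if score > threshold else -1            # +1: approve, -1: deny

x = [65_000, 12_000, 4]    # e.g. salary, debt, years in residence
w = [0.001, -0.002, 1.5]   # beneficial input: w_i > 0; detrimental input: w_i < 0
print(credit_decision(x, w, threshold=40))  # +1 (score = 47 > 40)
```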

  6. A Simple Learning Model
     Approve credit if Σ_{i=1}^{d} w_i x_i > threshold,
     Deny credit if    Σ_{i=1}^{d} w_i x_i < threshold,
     can be written formally as
           h(x) = sign( ( Σ_{i=1}^{d} w_i x_i ) + w_0 ).
     The "bias weight" w_0 corresponds to the threshold. (How?)

  7. The Perceptron Hypothesis Set
     We have defined a hypothesis set H:
           H = { h(x) = sign(w^t x) }   ← uncountably infinite
     where
           w = [w_0, w_1, . . . , w_d]^t ∈ R^{d+1},   x = [1, x_1, . . . , x_d]^t ∈ {1} × R^d.
     This hypothesis set is called the perceptron or linear separator.
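A small sketch of this augmented form, showing one way the bias weight can absorb the threshold (setting w_0 = −threshold is my assumption answering the "(How?)" on slide 6; the numeric values are again invented):

```python
import numpy as np

def h(w, x):
    """Perceptron hypothesis h(x) = sign(w^t x) on the augmented input."""
    x_aug = np.concatenate(([1.0], x))  # x = [1, x_1, ..., x_d]^t in {1} x R^d
    return np.sign(w @ x_aug)           # w = [w_0, w_1, ..., w_d]^t in R^(d+1)

threshold = 40.0
w = np.array([-threshold, 0.001, -0.002, 1.5])  # w_0 = -threshold absorbs the threshold
x = np.array([65_000.0, 12_000.0, 4.0])
print(h(w, x))  # 1.0: the same approval as the threshold form on slide 5
```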

  8. Geometry of The Perceptron
     h(x) = sign(w^t x)   (Problem 1.2 in LFD)
     [Two figures: the credit data plotted as Income vs. Age, with two different lines that both separate the data.]
     Which one should we pick?

  9. Use the Data to Pick a Line
     [Two figures: Income vs. Age data, before and after a separating line is drawn.]
     A perceptron fits the data by using a line to separate the +1 from the −1 data.
     Fitting the data: how do we find a hyperplane that separates the data? ("It's obvious, just look at the data and draw the line," is not a valid solution.)

  10. How to Learn a Final Hypothesis g from H
     We want to select g ∈ H so that g ≈ f. We certainly want g ≈ f on the data set D; ideally, g(x_n) = y_n.
     How do we find such a g in the infinite hypothesis set H, if it exists?
     Idea! Start with some weight vector and try to improve it.
     [Figure: Income vs. Age data with a candidate line to be improved.]

  11. The Perceptron Learning Algorithm (PLA)
     A simple iterative method:
        1: w(1) = 0
        2: for iteration t = 1, 2, 3, . . .
        3:    the weight vector is w(t)
        4:    from (x_1, y_1), . . . , (x_N, y_N), pick any misclassified example
        5:    call the misclassified example (x*, y*), so sign(w(t) · x*) ≠ y*
        6:    update the weight: w(t+1) = w(t) + y* x*
        7:    t ← t + 1
     [Two figures: the update w(t+1) = w(t) + y* x* rotates w(t) toward x* when y* = +1, and away from x* when y* = −1.]
     PLA implements our idea: start at some weights and try to improve. "Incremental learning", on a single example at a time.
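A runnable sketch of PLA as stated above, assuming the inputs already carry the constant coordinate x_0 = 1; the function name and the max_iter safeguard are my additions, not part of the lecture's pseudocode:

```python
import numpy as np

def pla(X, y, max_iter=10_000):
    """Perceptron Learning Algorithm. X is N x (d+1) with first column all 1s."""
    w = np.zeros(X.shape[1])                  # 1: w(1) = 0
    for _ in range(max_iter):                 # 2: for t = 1, 2, 3, ...
        pred = np.sign(X @ w)                 # classify every example with w(t)
        misclassified = np.flatnonzero(pred != y)
        if misclassified.size == 0:
            return w                          # no mistakes left: w separates the data
        i = misclassified[0]                  # 4-5: pick a misclassified (x*, y*)
        w = w + y[i] * X[i]                   # 6: w(t+1) = w(t) + y* x*
    return w                                  # reached only if no separator was found in time
```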

  12.–20. Does PLA Work?
     Theorem. If the data can be fit by a linear separator, then after some finite number of steps, PLA will find one.
     After how long?
     [These slides repeat the theorem over a sequence of figures: the Income vs. Age data with the PLA line at iterations 1 through 6, improving with each update until the data is separated; a final figure shows data that no line can separate.]
     What if the data cannot be fit by a perceptron?
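To make the theorem concrete, here is a hedged usage sketch of the pla function from slide 11 on synthetic, linearly separable toy data; the target weights and sampling scheme are invented for illustration:

```python
import numpy as np  # assumes the pla() sketch from slide 11 is also in scope

rng = np.random.default_rng(0)

# Invented toy data: label random points with a hidden target line, then
# check that PLA recovers some separator (not necessarily the same line).
N, d = 100, 2
X = np.column_stack([np.ones(N), rng.uniform(-1, 1, size=(N, d))])  # x_0 = 1
w_target = np.array([0.1, -0.6, 0.8])
y = np.sign(X @ w_target)

w = pla(X, y)
print(np.all(np.sign(X @ w) == y))  # True: a separating line was found
```

If the data cannot be fit by a perceptron, the set of misclassified examples never empties and the loop would run forever, which is why the sketch carries a max_iter safeguard.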

  21. We Can Fit the Data
     • We can find an h that works, from among infinitely many (for the perceptron). So computationally, things seem good.
     • Ultimately, remember that we want to predict. We don't care about the data; we care about "outside the data".
     Can a limited data set reveal enough information to pin down an entire target function, so that we can predict outside the data?

  22. Other Views of Learning
     • Design: learning is from data; design is from specs and a model.
     • Statistics, function approximation.
     • Data mining: find patterns in massive data (typically unsupervised).
     • Three learning paradigms:
        – Supervised: the data is (x_n, f(x_n)); you are told the answer.
        – Reinforcement: you get feedback on potential answers you try: x → try something → get feedback.
        – Unsupervised: only given x_n; learn to "organize" the data.

  23. Supervised Learning - Classifying Coins
     [Two figures: coins of denominations 1, 5, 10, 25 plotted by Size vs. Mass; the labeled data form clusters, from which regions are learned to classify a new coin.]

  24. Unsupervised Learning - Categorizing Coins
     [Two figures: the same Size vs. Mass coin data without labels; the points still group into four clusters, marked type 1 through type 4.]

  25. Outside the Data Set - A Puzzle
     [Figures: example images labeled Dogs (f = −1) and Trees (f = +1), followed by a new image asking: Tree or Dog? (f = ?)]
