

  1. Pushpak Bhattacharyya, CSE Dept., IIT Bombay. Lecture 38: PAC Learning, VC Dimension; Self Organization

  2. VC-dimension: gives a necessary and sufficient condition for PAC learnability.

  3. Def: Let C be a concept class, i.e., it has members c1, c2, c3, … as concepts in it. (Figure: concepts c1, c2, c3 inside the class C.)

  4. Let S be a subset of U (the universe). Now if all the subsets of S can be produced by intersecting S with the ci's, then we say C shatters S.

  5. The highest-cardinality set S that can be shattered gives the VC-dimension of C: VC-dim(C) = |S|. VC-dim: Vapnik-Chervonenkis dimension.
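For a small finite universe, the definition of shattering can be checked directly by brute force. A minimal sketch in Python (our own illustration, not from the lecture; the names shatters and vc_dimension are ours): concepts is a list of Python sets, and the test is exactly the intersection condition of slide 4.

    from itertools import combinations

    def shatters(concepts, S):
        # C shatters S iff every subset of S equals S & c for some concept c.
        produced = {frozenset(S & c) for c in concepts}
        return len(produced) == 2 ** len(S)

    def vc_dimension(concepts, universe):
        # Largest |S|, over subsets S of the universe, that C shatters.
        best = 0
        for k in range(1, len(universe) + 1):
            if any(shatters(concepts, set(S)) for S in combinations(universe, k)):
                best = k
        return best

This brute-force check is only feasible for finite concept classes; the half-plane class of the next slides is infinite, so there the argument is geometric.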

  6. A 2-dimensional surface; C = {half-planes}. (Figure: the x-y plane.)

  7. S1 = {a}: the subsets {a} and Ø can both be produced, so |S| = 1 can be shattered. (Figure: point a in the plane.)

  8. S2 = {a, b}: the subsets {a, b}, {a}, {b}, and Ø can all be produced, so |S| = 2 can be shattered. (Figure: points a and b in the plane.)

  9. S3 = {a, b, c}: |S| = 3 can be shattered. (Figure: points a, b, and c in the plane.)

  10. (Figure-only slide.)

  11. S4 = {a, b, c, d}: |S| = 4 cannot be shattered; e.g., for four points in convex position, no half-plane can contain the two diagonally opposite points while excluding the other two. (Figure: four points a, b, c, d in the plane.)

  12. • A concept class C is learnable for all probability distributions and all concepts in C if and only if the VC-dimension of C is finite. • If the VC-dimension of C is d, then… (next slide)

  13. (a) For 0 < ε < 1 and a sample size of at least max[(4/ε) log(2/δ), (8d/ε) log(13/ε)], any consistent function A: S_c → C is a learning function for C. (b) For 0 < ε < 1/2 and a sample size less than max[((1−ε)/ε) ln(1/δ), d(1 − 2(ε(1−δ) + δ))], no function A: S_c → H, for any hypothesis space H, is a learning function for C.
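To get a feel for bound (a), here is a small worked calculation (our own illustration; natural logarithms are assumed, since the slide does not fix the base):

    import math

    def sufficient_sample_size(eps, delta, d):
        # Bound (a) of slide 13, with log taken as the natural logarithm.
        return max((4 / eps) * math.log(2 / delta),
                   (8 * d / eps) * math.log(13 / eps))

    # Half-planes in the plane have VC-dimension 3 (slides 6-11), so for
    # eps = 0.1 and delta = 0.05 roughly 1169 examples suffice:
    print(sufficient_sample_size(0.1, 0.05, 3))  # ~1168.2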

  14. Book: 1. M. Anthony and N. Biggs, Computational Learning Theory, Cambridge Tracts in Theoretical Computer Science, 1997. Papers: 1. L. G. Valiant, A theory of the learnable, Communications of the ACM 27(11):1134-1142, 1984. 2. A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth, Learnability and the VC-dimension, Journal of the ACM, 1989.

  15. Biological Motivation: the brain.

  16. The brain has 3 layers: Cerebrum, Cerebellum, Higher brain.

  17. (Figure: a hierarchy of needs, from food and rest for survival, through achievement and recognition, to the search for meaning and contributing to humanity.)

  18. 3 layers: Cerebrum (crucial for survival), Cerebellum, Higher brain (responsible for higher needs).

  19. Back of brain: vision. Side areas: auditory information processing. Lot of resilience: the visual and auditory areas can do each other's job.

  20. Left Brain and Right Brain Dichotomy. (Figure: left brain vs. right brain.)

  21. Left brain: logic, reasoning, verbal ability. Right brain: emotion, creativity. Words: left brain; music and tune: right brain. There are maps in the brain: the limbs are mapped to regions of the brain.

  22. Character recognition: a grid of o/p neurons fed by i/p neurons. (Figure: i/p neurons connected to an o/p grid.)

  23. • Self Organization, or a Kohonen network, fires a group of neurons instead of a single one. • The group "somehow" produces a "picture" of the cluster. • Fundamentally, SOM is competitive learning. • But weight changes are incorporated over a neighborhood. • Find the winner neuron; apply the weight change to the winner and its "neighbors".

  24. Winner: neurons on the contour around the winner are the "neighborhood" neurons. (Figure: winner neuron with a contour of neighbors.)

  25. Weight change rule for SOM: W_{p±δ(n)}(n+1) = W_{p±δ(n)}(n) + η(n) (I(n) − W_{p±δ(n)}(n)), where p is the winner neuron. The neighborhood is a function of n and the learning rate is a function of n: δ(n) is a decreasing function of n, and η(n), the learning rate, is also a decreasing function of n, with 0 < η(n) < η(n−1) ≤ 1.
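A minimal sketch of this update rule in Python for a 1-D output layer (our own illustration; the exponential decay schedules and the names eta0, delta0, tau are assumptions, since the lecture only requires that η(n) and δ(n) decrease with n):

    import numpy as np

    def som_step(W, x, n, eta0=0.5, delta0=3, tau=20.0):
        # W: (p, d) weight matrix, one row per output neuron.
        # x: (d,) input vector I(n); n: iteration number.
        eta = eta0 * np.exp(-n / tau)                   # eta(n): decreasing learning rate
        delta = int(round(delta0 * np.exp(-n / tau)))   # delta(n): shrinking radius

        # Find the winner: the neuron whose weights are closest to the input.
        p = int(np.argmin(np.linalg.norm(W - x, axis=1)))

        # Update the winner and its neighbors p - delta(n) .. p + delta(n).
        lo, hi = max(0, p - delta), min(len(W) - 1, p + delta)
        W[lo:hi + 1] += eta * (x - W[lo:hi + 1])
        return W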

  26. Pictorially: the winner sits at the centre of a shrinking neighborhood δ(n). Convergence for the Kohonen network has not been proved except in one dimension.

  27. (Figure: an o/p layer of p neurons A … P, an i/p layer of n neurons, connected by weights W. After training, the output neurons correspond to clusters: A: …, B: …, C: ….)
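Continuing the sketch after slide 25, a hypothetical training loop for such an architecture might look like this (the layer sizes and data are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((200, 2))   # 200 input vectors, i.e. an i/p layer with n = 2
    W = rng.random((10, 2))    # an o/p layer with p = 10 neurons
    for n, x in enumerate(X):
        W = som_step(W, x, n)
    # Each output neuron's weight row has drifted toward one region of the
    # input space, giving the A/B/C-style clusters sketched on slide 27.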
