Review, Catch-up, Question & Answer



SLIDE 1

Review, Catch-up, Question & Answer

SLIDE 2

Outline

  • Dear Prof. Lathrop, Would you mind explain more about the statistic learning (chapter 20) while you are going through the material on today's class. It is very difficult for me to understand and a few of my classmates have the same concern. Thank you

– Reading assigned was Chapters 14.1, 14.2, plus:
– 20.1-20.3.2 (3rd ed.)
– 20.1-20.7, but not the part of 20.3 after “Learning Bayesian Networks” (2nd ed.)

  • Machine Learning
  • Probability and Uncertainty
  • Question & Answer

– If time: Viola & Jones, 2004

SLIDE 3

Computing with Probabilities: Law of Total Probability

Law of Total Probability (aka “summing out” or marginalization)

P(a) = Σ_b P(a, b) = Σ_b P(a | b) P(b)

where B is any random variable

Why is this useful? Given a joint distribution (e.g., P(a,b,c,d)) we can obtain any “marginal” probability (e.g., P(b)) by summing out the other variables, e.g.,

P(b) = Σ_a Σ_c Σ_d P(a, b, c, d)

Less obvious: we can also compute any conditional probability of interest given a joint distribution, e.g.,

P(c | b) = Σ_a Σ_d P(a, c, d | b) = (1 / P(b)) Σ_a Σ_d P(a, c, d, b)

where (1 / P(b)) is just a normalization constant.

Thus, the joint distribution contains the information we need to compute any probability of interest.
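As a concrete illustration, here is a minimal Python sketch of summing out and conditioning on a joint table; the 4-variable binary joint is randomly generated for illustration and is not an example from the slides.

import numpy as np

# Illustrative joint P(A, B, C, D) over four binary variables,
# normalized so the whole table sums to 1.
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2, 2))
joint /= joint.sum()

# Marginal P(b): sum out A, C, D (law of total probability).
p_b = joint.sum(axis=(0, 2, 3))

# Conditional P(c | b): sum out A and D, then normalize by P(b).
p_bc = joint.sum(axis=(0, 3))                    # table over (B, C)
p_c_given_b = p_bc / p_bc.sum(axis=1, keepdims=True)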

SLIDE 4

Computing with Probabilities: The Chain Rule or Factoring

We can always write
P(a, b, c, …, z) = P(a | b, c, …, z) P(b, c, …, z)
(by definition of joint probability)

Repeatedly applying this idea, we can write
P(a, b, c, …, z) = P(a | b, c, …, z) P(b | c, …, z) P(c | …, z) … P(z)

This factorization holds for any ordering of the variables.
This is the chain rule for probabilities.

SLIDE 5

Conditional Independence

  • 2 random variables A and B are conditionally independent given C iff

P(a, b | c) = P(a | c) P(b | c) for all values a, b, c

  • More intuitive (equivalent) conditional formulation

– A and B are conditionally independent given C iff P(a | b, c) = P(a | c) OR P(b | a, c) = P(b | c), for all values a, b, c
– Intuitive interpretation: P(a | b, c) = P(a | c) tells us that learning about b, given that we already know c, provides no change in our probability for a, i.e., b contains no information about a beyond what c provides

  • Can generalize to more than 2 random variables

– E.g., K different symptom variables X1, X2, …, XK, and C = disease
– P(X1, X2, …, XK | C) = Π_i P(Xi | C)
– Also known as the naïve Bayes assumption
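A small sketch that builds a joint from this naïve Bayes factorization and checks the conditional independence definition numerically; all numbers are made up for illustration.

import numpy as np

p_c = np.array([0.9, 0.1])                       # P(C): healthy, diseased
p_x1_c = np.array([[0.8, 0.2], [0.3, 0.7]])      # P(X1 | C), rows indexed by C
p_x2_c = np.array([[0.9, 0.1], [0.4, 0.6]])      # P(X2 | C)

# Joint P(C, X1, X2) built from the factorization P(C) P(X1|C) P(X2|C).
joint = p_c[:, None, None] * p_x1_c[:, :, None] * p_x2_c[:, None, :]

# Check the definition: P(x1, x2 | c) == P(x1 | c) P(x2 | c) for all values.
p_x1x2_given_c = joint / joint.sum(axis=(1, 2), keepdims=True)
assert np.allclose(p_x1x2_given_c, p_x1_c[:, :, None] * p_x2_c[:, None, :])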

SLIDE 6

“…probability theory is more fundamentally concerned with the structure of reasoning and causation than with numbers.”

Glenn Shafer and Judea Pearl, Introduction to Readings in Uncertain Reasoning, Morgan Kaufmann, 1990

SLIDE 7

Bayesian Networks

  • A Bayesian network specifies a joint distribution in a structured form
  • Represent dependence/independence via a directed graph

– Nodes = random variables
– Edges = direct dependence

  • Structure of the graph => conditional independence relations

In general,
p(X1, X2, …, XN) = Π_i p(Xi | parents(Xi))

(The left-hand side is the full joint distribution; the right-hand side is the graph-structured approximation.)

  • Requires that the graph is acyclic (no directed cycles)
  • 2 components to a Bayesian network

– The graph structure (conditional independence assumptions)
– The numerical probabilities (for each variable given its parents)

SLIDE 8

Example of a simple Bayesian network

(Graph: A → C ← B.)  p(A,B,C) = p(C | A,B) p(A) p(B)

  • Probability model has simple factored form
  • Directed edges => direct dependence
  • Absence of an edge => conditional independence
  • Also known as belief networks, graphical models, causal networks
  • Other formulations, e.g., undirected graphical models
SLIDE 9

Examples of 3-way Bayesian Networks

(Graph: A, B, C with no edges.)
Marginal independence: p(A,B,C) = p(A) p(B) p(C)

SLIDE 10

Examples of 3-way Bayesian Networks

(Graph: B ← A → C.)
Conditionally independent effects: p(A,B,C) = p(B|A) p(C|A) p(A)
B and C are conditionally independent given A
e.g., A is a disease, and we model B and C as conditionally independent symptoms given A

SLIDE 11

Examples of 3-way Bayesian Networks

(Graph: A → C ← B.)
Independent causes: p(A,B,C) = p(C|A,B) p(A) p(B)
“Explaining away” effect: given C, observing A makes B less likely
e.g., earthquake/burglary/alarm example
A and B are (marginally) independent but become dependent once C is known

SLIDE 12

Examples of 3-way Bayesian Networks

(Graph: A → B → C.)
Markov dependence: p(A,B,C) = p(C|B) p(B|A) p(A)

SLIDE 13

Example

  • Consider the following 5 binary variables:

– B = a burglary occurs at your house
– E = an earthquake occurs at your house
– A = the alarm goes off
– J = John calls to report the alarm
– M = Mary calls to report the alarm
– What is P(B | M, J)? (for example)
– We can use the full joint distribution to answer this question

  • Requires 2^5 = 32 probabilities
  • Can we use prior domain knowledge to come up with a Bayesian network that requires fewer probabilities?

SLIDE 14

Constructing a Bayesian Network: Step 1

  • Order the variables in terms of causality (may be a partial order)

e.g., {E, B} -> {A} -> {J, M}

  • P(J, M, A, E, B) = P(J, M | A, E, B) P(A | E, B) P(E, B)
≈ P(J, M | A) P(A | E, B) P(E) P(B)
≈ P(J | A) P(M | A) P(A | E, B) P(E) P(B)

These CI assumptions are reflected in the graph structure of the Bayesian network.

SLIDE 15

The Resulting Bayesian Network

SLIDE 16

Constructing this Bayesian Network: Step 2

  • P(J, M, A, E, B) =

P(J | A) P(M | A) P(A | E, B) P(E) P(B)

  • There are 3 conditional probability tables (CPTs) to be determined:

P(J | A), P(M | A), P(A | E, B)

– Requiring 2 + 2 + 4 = 8 probabilities

  • And 2 marginal probabilities P(E), P(B) -> 2 more probabilities
  • Where do these probabilities come from?

– Expert knowledge
– From data (relative frequency estimates)
– Or a combination of both - see discussion in Sections 20.1 and 20.2 (optional)
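To make Step 2 concrete, here is a sketch of these CPTs and the factored joint in Python. The numbers are the standard textbook values for this example; substitute the values from your slides' figures if they differ.

P_B = {True: 0.001, False: 0.999}                 # P(B)
P_E = {True: 0.002, False: 0.998}                 # P(E)
P_A = {(True, True): 0.95, (True, False): 0.94,   # P(A=true | B, E)
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}                   # P(J=true | A)
P_M = {True: 0.70, False: 0.01}                   # P(M=true | A)

def joint(b, e, a, j, m):
    """P(J, M, A, E, B) = P(J|A) P(M|A) P(A|B,E) P(E) P(B)."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pj * pm * pa * P_E[e] * P_B[b]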

SLIDE 17

The Bayesian network

SLIDE 18

Number of Probabilities in Bayesian Networks

  • Consider n binary variables
  • Unconstrained joint distribution requires O(2^n) probabilities

  • If we have a Bayesian network with a maximum of k parents for any node, then we need O(n 2^k) probabilities

  • Example

– Full unconstrained joint distribution

  • n = 30: need 2^30 ≈ 10^9 probabilities for the full joint distribution

– Bayesian network

  • n = 30, k = 4: need 480 probabilities
SLIDE 19

The Bayesian Network from a different Variable Ordering

SLIDE 20

The Bayesian Network from a different Variable Ordering

SLIDE 21

Given a graph, can we “read off” conditional independencies?

The “Markov Blanket” of X (the gray area in the figure)

X is conditionally independent of everything else, GIVEN the values of:
* X’s parents
* X’s children
* X’s children’s parents

X is conditionally independent of its non-descendants, GIVEN the values of its parents.

SLIDE 22

General Strategy for inference

  • Want to compute P(q | e)

Step 1:

P(q | e) = P(q, e) / P(e) = α P(q, e), since P(e) is a constant w.r.t. Q

Step 2:

P(q, e) = Σ_{a..z} P(q, e, a, b, …, z), by the law of total probability

Step 3:

Σ_{a..z} P(q, e, a, b, …, z) = Σ_{a..z} Π_i P(variable i | parents(variable i))

(using Bayesian network factoring)

Step 4:

Distribute summations across product terms for efficient computation
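A brute-force sketch of this strategy for the burglary network, reusing the joint() function sketched after Step 2 above. It carries out Steps 1-3 only; no clever distribution of the sums (Step 4) is attempted.

from itertools import product

def prob_B_given(j, m):
    """P(B=true | J=j, M=m): sum out E and A, then normalize over B."""
    p = {b: sum(joint(b, e, a, j, m)
                for e, a in product((True, False), repeat=2))
         for b in (True, False)}
    return p[True] / (p[True] + p[False])   # alpha-normalization over B

print(prob_B_given(True, True))             # P(burglary | John and Mary call)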

SLIDE 23

Naïve Bayes Model

(Graph: class C with children X1, X2, X3, …, Xn.)

P(C | X1, …, Xn) ∝ Π_i P(Xi | C) P(C)

Features Xi are conditionally independent given the class variable C.
Widely used in machine learning,
e.g., spam email classification: X’s = counts of words in emails.
Probabilities P(C) and P(Xi | C) can easily be estimated from labeled data.

SLIDE 24

Naïve Bayes Model (2)

P(C | X1, …, Xn) ∝ Π_i P(Xi | C) P(C)

Probabilities P(C) and P(Xi | C) can easily be estimated from labeled data:

P(C = cj) ≈ #(Examples with class label cj) / #(Examples)

P(Xi = xik | C = cj) ≈ #(Examples with Xi value xik and class label cj) / #(Examples with class label cj)

Usually easiest to work with logs:

log [ P(C | X1, …, Xn) ] = log α + Σ_i log P(Xi | C) + log P(C)

DANGER: Suppose there are ZERO examples with Xi value xik and class label cj?
Then an unseen example with Xi value xik will NEVER predict class label cj!

Practical solutions: pseudocounts, e.g., add 1 to every #(), etc.
Theoretical solutions: Bayesian inference, beta distribution, etc.
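A sketch of these estimates in Python, with add-1 pseudocounts to avoid the zero-count danger; the array names and shapes are illustrative assumptions.

import numpy as np

def fit_naive_bayes(X, y, n_classes, n_values):
    # X: (n_examples, n_features) integers in 0..n_values-1;
    # y: integer class labels in 0..n_classes-1.
    n, d = X.shape
    log_prior = np.log(np.bincount(y, minlength=n_classes) / n)
    counts = np.ones((n_classes, d, n_values))      # add-1 pseudocounts
    for x, c in zip(X, y):
        counts[c, np.arange(d), x] += 1
    log_lik = np.log(counts / counts.sum(axis=2, keepdims=True))
    return log_prior, log_lik

def nb_predict(x, log_prior, log_lik):
    # log P(C) + sum_i log P(Xi = x_i | C), maximized over classes.
    scores = log_prior + log_lik[:, np.arange(len(x)), x].sum(axis=1)
    return int(np.argmax(scores))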

SLIDE 25

Hidden Markov Model (HMM)

(Figure: hidden chain S1 → S2 → S3 → … → Sn, each St emitting an observed Yt.)

Two key assumptions:
1. The hidden state sequence is Markov
2. Observation Yt is conditionally independent of all other variables given St

Widely used in speech recognition, protein sequence models.
Since this is a Bayesian network polytree, inference is linear in n.
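For reference, a sketch of the linear-time (forward) computation of the observation likelihood; the matrix layout is an assumption, not from the slides.

import numpy as np

def forward(p0, T, E, obs):
    # p0[s] = P(S1 = s); T[s, s2] = P(S_{t+1} = s2 | S_t = s);
    # E[s, y] = P(Y_t = y | S_t = s); obs = observed symbols y1..yn.
    alpha = p0 * E[:, obs[0]]                 # alpha_1(s) = P(S1 = s, y1)
    for y in obs[1:]:
        alpha = (alpha @ T) * E[:, y]         # one O(k^2) update per step
    return alpha.sum()                        # P(y1, ..., yn)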

SLIDE 26

Summary

  • Bayesian networks represent a joint distribution using a graph
  • The graph encodes a set of conditional independence assumptions

  • Answering queries (or inference, or reasoning) in a Bayesian network amounts to efficient computation of appropriate conditional probabilities

  • Probabilistic inference is intractable in the general case

– But can be carried out in linear time for certain classes of Bayesian networks

SLIDE 27

Outline

  • Dear Prof. Lathrop, Would you mind explain more about the statistic learning (chapter 20) while you are going through the material on today's class. It is very difficult for me to understand and a few of my classmates have the same concern. Thank you

– Reading assigned was Chapters 14.1, 14.2, plus:
– 20.1-20.3.2 (3rd ed.)
– 20.1-20.7, but not the part of 20.3 after “Learning Bayesian Networks” (2nd ed.)

  • Machine Learning
  • Probability and Uncertainty
  • Question & Answer

– If time: Viola & Jones, 2004

SLIDE 28

Terminology

  • Attributes

– Also known as features, variables, independent variables, covariates

  • Target Variable

– Also known as goal predicate, dependent variable, …

  • Classification

– Also known as discrimination, supervised classification, …

  • Error function

– Also known as objective function, loss function, …

SLIDE 29

Inductive learning

  • Let x represent the input vector of attributes
  • Let f(x) represent the value of the target variable for x

– The implicit mapping from x to f(x) is unknown to us – We just have training data pairs, D = {x, f(x)} available

  • We want to learn a mapping from x to f, i.e.,

h(x; ) is “close” to f(x) for all training data points x  are the parameters of our predictor h(..)

  • Examples:

– h(x; θ) = sign(w1 x1 + w2 x2 + w3)
– hk(x) = (x1 OR x2) AND (x3 OR NOT(x4))

SLIDE 30

Decision Tree Representations

  • Decision trees are fully expressive

– Can represent any Boolean function
– Every path in the tree could represent 1 row in the truth table
– Yields an exponentially large tree

  • Truth table is of size 2^d, where d is the number of attributes
SLIDE 31

Pseudocode for Decision tree learning

SLIDE 32

Information Gain

  • H(p) = entropy of class distribution at a particular node
  • H(p | A) = conditional entropy = average entropy of the conditional class distribution, after we have partitioned the data according to the values in A

  • Gain(A) = H(p) – H(p | A)
  • Simple rule in decision tree learning:

– At each internal node, split on the attribute with the largest information gain (or equivalently, with the smallest H(p|A))

  • Note that by definition, conditional entropy can’t be greater than the entropy
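A short sketch of Gain(A) for one discrete attribute, matching the definitions above; array names are illustrative.

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(attr, labels):
    h_cond = 0.0
    for v in np.unique(attr):                 # partition the data by A's values
        mask = attr == v
        h_cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_cond           # Gain(A) = H(p) - H(p | A)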

SLIDE 33

How Overfitting affects Prediction

(Figure: predictive error vs. model complexity. The error on training data decreases as model complexity grows, while the error on test data falls and then rises again: underfitting on the left, overfitting on the right, and an ideal range for model complexity in between.)

SLIDE 34

Disjoint Validation Data Sets

(Figure: the full data set split into training data and validation data; 1st partition.)

SLIDE 35

Disjoint Validation Data Sets

(Figure: the full data set split two different ways into training and validation data; 1st and 2nd partitions.)

SLIDE 36

Classification in Euclidean Space

  • A classifier is a partition of the space x into disjoint decision

regions

– Each region has a label attached
– Regions with the same label need not be contiguous
– For a new test point, find what decision region it is in, and predict the corresponding label

  • Decision boundaries = boundaries between decision regions

– The “dual representation” of decision regions

  • We can characterize a classifier by the equations for its

decision boundaries

  • Learning a classifier <=> searching for the decision boundaries

that optimize our objective function

SLIDE 37

Decision Tree Example

(Figure: a decision tree testing Income > t1, then Debt > t2, then Income > t3, shown alongside the corresponding decision regions in the (Income, Debt) plane with boundaries at t1, t2, t3.)

Note: tree boundaries are linear and axis-parallel.

SLIDE 38

Another Example: Nearest Neighbor Classifier

  • The nearest-neighbor classifier

– Given a test point x’, compute the distance between x’ and each input data point
– Find the closest neighbor in the training data
– Assign x’ the class label of this neighbor
– (sort of generalizes minimum distance classifier to exemplars)

  • If Euclidean distance is used as the distance measure (the most common choice), the nearest neighbor classifier results in piecewise linear decision boundaries

  • Many extensions

– e.g., kNN: vote based on the k nearest neighbors
– k can be chosen by cross-validation
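A brute-force sketch of the classifier just described; it assumes integer class labels and uses Euclidean distance.

import numpy as np

def knn_predict(X_train, y_train, x_test, k=1):
    dists = np.linalg.norm(X_train - x_test, axis=1)    # distance to each example
    nearest = np.argsort(dists)[:k]                     # k closest training points
    return int(np.bincount(y_train[nearest]).argmax())  # majority vote

With k=1 this is exactly the nearest-neighbor classifier from the slide.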

SLIDE 39

SLIDE 40

SLIDE 41
SLIDE 42

Linear Classifiers

  • Linear classifier <=> single linear decision boundary

(for 2-class case)

  • We can always represent a linear decision boundary by a linear equation:

w1 x1 + w2 x2 + … + wd xd = Σ_j wj xj = w^t x = 0

  • In d dimensions, this defines a (d-1) dimensional hyperplane

– d=3, we get a plane; d=2, we get a line

  • For prediction we simply see if Σ_j wj xj > 0
  • The wi are the weights (parameters)

– Learning consists of searching in the d-dimensional weight space for the set of weights (the linear boundary) that minimizes an error measure
– A threshold can be introduced by a “dummy” feature that is always one; its weight corresponds to (the negative of) the threshold

  • Note that a minimum distance classifier is a special (restricted) case of a linear

classifier

SLIDE 43

(Figure: minimum error decision boundary between two classes in a 2-D feature space; axes FEATURE 1 and FEATURE 2.)

SLIDE 44

The Perceptron Classifier (pages 740-743 in text)

  • The perceptron classifier is just another name for a linear

classifier for 2-class data, i.e.,

  • Output(x) = sign( Σ_j wj xj )
  • Loosely motivated by a simple model of how neurons fire
  • For mathematical convenience, class labels are +1 for one

class and -1 for the other

  • Two major types of algorithms for training perceptrons

– Objective function = classification accuracy (“error correcting”)
– Objective function = squared error (use gradient descent)
– Gradient descent is generally faster and more efficient – but there is a problem! No gradient!

SLIDE 45

Two different types of perceptron output

x-axis below is f(x) = f = weighted sum of inputs; y-axis is the perceptron output σ(f)

Thresholded output: takes values +1 or -1

Sigmoid output: takes real values between -1 and +1
The sigmoid is in effect an approximation to the threshold function above, but has a gradient that we can use for learning

SLIDE 46

Gradient Descent Update Equation

  • From basic calculus, for perceptron with sigmoid, and squared

error objective function, gradient for a single input x(i) is

∇E[w] = – ( y(i) – σ[f(i)] ) σ′[f(i)] xj(i)

  • Gradient descent weight update rule:

wj = wj + α ( y(i) – σ[f(i)] ) σ′[f(i)] xj(i)
– can rewrite as:
wj = wj + α * error * c * xj(i)

SLIDE 47

Pseudo-code for Perceptron Training

Initialize each wj (e.g., randomly)
While (termination condition not satisfied)
    for i = 1 : N        % loop over data points (an iteration)
        for j = 1 : d    % loop over weights
            deltawj = α ( y(i) – σ[f(i)] ) σ′[f(i)] xj(i)
            wj = wj + deltawj
        end
    end
    calculate termination condition
end

  • Inputs: N features, N targets (class labels), learning rate α
  • Outputs: a set of learned weights
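The same procedure as runnable Python, under stated assumptions: tanh as the sigmoid with range (-1, +1), labels in {-1, +1}, and a fixed epoch budget as the termination condition.

import numpy as np

def train_perceptron(X, y, alpha=0.1, epochs=100):
    # X: (N, d) inputs; y: labels in {-1, +1}; alpha: learning rate.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                   # termination: fixed epoch budget
        for x_i, y_i in zip(X, y):
            f = w @ x_i                       # weighted sum of inputs
            out = np.tanh(f)                  # sigmoid output sigma(f)
            w += alpha * (y_i - out) * (1 - out**2) * x_i   # (1 - out^2) = sigma'(f)
    return w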
SLIDE 48

Multi-Layer Perceptrons (pp. 744-747 in text)

  • What if we took K perceptrons and trained them in parallel and

then took a weighted sum of their sigmoidal outputs?

– This is a multi-layer neural network with a single “hidden” layer (the outputs of the first set of perceptrons)
– If we train them jointly in parallel, then intuitively different perceptrons could learn different parts of the solution

  • Mathematically, they define different local decision boundaries

in the input space, giving us a more powerful model

  • How would we train such a model?

– Backpropagation algorithm = clever way to do gradient descent – Bad news: many local minima and many parameters

  • training is hard and slow

– Neural networks generated much excitement in AI research in the late 1980’s and 1990’s

  • But now techniques like boosting and support vector machines are often preferred

SLIDE 49

Boosting Example

SLIDE 50

First classifier

SLIDE 51

First 2 classifiers

SLIDE 52

First 3 classifiers

SLIDE 53

Final Classifier learned by Boosting

SLIDE 54

Final Classifier learned by Boosting

SLIDE 55

Outline

  • Dear Prof. Lathrop, Would you mind explain more about the statistic learning (chapter 20) while you are going through the material on today's class. It is very difficult for me to understand and a few of my classmates have the same concern. Thank you

– Reading assigned was Chapters 14.1, 14.2, plus:
– 20.1-20.3.2 (3rd ed.)
– 20.1-20.7, but not the part of 20.3 after “Learning Bayesian Networks” (2nd ed.)

  • Machine Learning
  • Probability and Uncertainty
  • Question & Answer

– If time: Viola & Jones, 2004

SLIDE 56

Syntax

  • Basic element: random variable
  • Similar to propositional logic: possible worlds defined by assignment of values to random variables.

  • Boolean random variables

e.g., Cavity (= do I have a cavity?)

  • Discrete random variables

e.g., Weather is one of <sunny, rainy, cloudy, snow>

  • Domain values must be exhaustive and mutually exclusive
  • Elementary proposition is an assignment of a value to a random variable:
e.g., Weather = sunny; Cavity = false (abbreviated as ¬cavity)

  • Complex propositions formed from elementary propositions and standard logical connectives:
e.g., Weather = sunny ∧ Cavity = false

SLIDE 57

Probability

  • P(a) is the probability of proposition “a”

– E.g., P(it will rain in London tomorrow) – The proposition a is actually true or false in the real-world – P(a) = “prior” or marginal or unconditional probability – Assumes no other information is available

  • Axioms:

– 0 <= P(a) <= 1
– P(NOT(a)) = 1 – P(a)
– P(true) = 1
– P(false) = 0
– P(A OR B) = P(A) + P(B) – P(A AND B)

  • An agent that holds degrees of beliefs that contradict these axioms

will act sub-optimally in some cases

– e.g., de Finetti proved that there will be some combination of bets that forces such an unhappy agent to lose money every time.
– No rational agent can have axioms that violate probability theory.

SLIDE 58

Conditional Probability

  • P(a|b) is the conditional probability of proposition a, conditioned on knowing that b is true

– E.g., P(rain in London tomorrow | raining in London today)
– P(a|b) is a “posterior” or conditional probability
– The updated probability that a is true, now that we know b
– P(a|b) = P(a AND b) / P(b)
– Syntax: P(a | b) is the probability of a given that b is true

  • a and b can be any propositional sentences
  • e.g., p( John wins OR Mary wins | Bob wins AND Jack loses)
  • P(a|b) obeys the same rules as probabilities,

– E.g., P(a | b) + P(NOT(a) | b) = 1
– All probabilities in effect are conditional probabilities

  • E.g., P(a) = P(a | our background knowledge)
SLIDE 59

Random Variables

  • A is a random variable taking values a1, a2, … am

– Events are A= a1, A= a2, …. – We will focus on discrete random variables

  • Mutual exclusion

P(A = ai AND A = aj) = 0, for i ≠ j

  • Exhaustive

Σ_i P(ai) = 1

The MEE (Mutually Exclusive and Exhaustive) assumption is often useful
(but not always appropriate, e.g., disease-state for a patient)

For finite m, we can represent P(A) as a table of m probabilities
For infinite m (e.g., number of tosses before “heads”) we can represent P(A) by a function (e.g., geometric)

SLIDE 60

Joint Distributions

  • Consider 2 random variables: A, B

– P(a, b) is shorthand for P(A = a AND B = b)
– Σ_a Σ_b P(a, b) = 1
– Can represent P(A, B) as a table of m^2 numbers

  • Generalize to more than 2 random variables

– E.g., A, B, C, …, Z
– Σ_a Σ_b … Σ_z P(a, b, …, z) = 1
– P(A, B, …, Z) is a table of m^K numbers, K = # variables

  • This is a potential problem in practice, e.g., m = 2, K = 20 already means 2^20 ≈ 10^6 numbers
SLIDE 61

Linking Joint and Conditional Probabilities

  • Basic fact:

P(a, b) = P(a | b) P(b)
– Why? Probability of a and b occurring is the same as probability of a occurring given b is true, times the probability of b occurring
  • Bayes rule:

P(a, b) = P(a | b) P(b) = P(b | a) P(a) by definition
=> P(b | a) = P(a | b) P(b) / P(a) [Bayes rule]

Why is this useful?
Often much more natural to express knowledge in a particular “direction”, e.g., in the causal direction
e.g., b = disease, a = symptoms
More natural to encode knowledge as P(a|b) than as P(b|a)

SLIDE 62

Sequential Bayesian Reasoning

  • h = hypothesis, e1, e2, .. en = evidence
  • P(h) = prior
  • P(h | e1) proportional to P(e1 | h) P(h)

= likelihood of e1 x prior(h)

  • P(h | e1, e2) proportional to P(e1, e2 | h) P(h)

in turn can be written as P(e2| h, e1) P(e1|h) P(h) ~ likelihood of e2 x “prior”(h given e1)

  • Bayes rule supports sequential reasoning

– Start with prior P(h)
– New belief (posterior) = P(h | e1)
– This becomes the “new prior”
– Can use this to update to P(h | e1, e2), and so on…
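A tiny numeric sketch of this loop for a binary hypothesis h, assuming each piece of evidence is conditionally independent of the others given h; the likelihood numbers are made up.

p_h = 0.01                                            # prior P(h)
for lik_h, lik_not_h in [(0.9, 0.2), (0.8, 0.3)]:     # made-up P(e|h), P(e|not h)
    num = lik_h * p_h
    p_h = num / (num + lik_not_h * (1 - p_h))         # posterior becomes new prior
print(p_h)                                            # P(h | e1, e2)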

SLIDE 63

Computing with Probabilities: Law of Total Probability

Law of Total Probability (aka “summing out” or marginalization)

P(a) = Σ_b P(a, b) = Σ_b P(a | b) P(b)

where B is any random variable

Why is this useful?

Given a joint distribution (e.g., P(a,b,c,d)) we can obtain any “marginal” probability (e.g., P(b)) by summing out the other variables, e.g.,

P(b) = a c d P(a, b, c, d) P(b) a c d P(a, b, c, d) We can compute any conditional probability given a joint distribution, e.g., P(c | b) = a d P(a, c, d | b) = a d P(a, c, d, b) / P(b)

SLIDE 64

Computing with Probabilities: The Chain Rule or Factoring

We can always write
P(a, b, c, …, z) = P(a | b, c, …, z) P(b, c, …, z)
(by definition of joint probability)

Repeatedly applying this idea, we can write
P(a, b, c, …, z) = P(a | b, c, …, z) P(b | c, …, z) P(c | …, z) … P(z)

This factorization holds for any ordering of the variables.
This is the chain rule for probabilities.

SLIDE 65

Independence

  • 2 random variables A and B are independent iff

P(a, b) = P(a) P(b) for all values a, b

  • More intuitive (equivalent) conditional formulation

– A and B are independent iff P(a | b) = P(a) OR P(b | a) = P(b), for all values a, b
– Intuitive interpretation: P(a | b) = P(a) tells us that knowing b provides no change in our probability for a, i.e., b contains no information about a

  • Can generalize to more than 2 random variables
  • In practice true independence is very rare

– “butterfly in China” effect
– Weather and dental example in the text
– Conditional independence is much more common and useful

  • Note: independence is an assumption we impose on our model of the world - it does not follow from basic axioms

SLIDE 66

Conditional Independence

  • 2 random variables A and B are conditionally independent given C iff

P(a, b | c) = P(a | c) P(b | c) for all values a, b, c

  • More intuitive (equivalent) conditional formulation

– A and B are conditionally independent given C iff P(a | b, c) = P(a | c) OR P(b | a, c) = P(b | c), for all values a, b, c
– Intuitive interpretation: P(a | b, c) = P(a | c) tells us that learning about b, given that we already know c, provides no change in our probability for a, i.e., b contains no information about a beyond what c provides

  • Can generalize to more than 2 random variables

– E.g., K different symptom variables X1, X2, …, XK, and C = disease
– P(X1, X2, …, XK | C) = Π_i P(Xi | C)
– Also known as the naïve Bayes assumption

SLIDE 67

Outline

  • Dear Prof. Lathrop, Would you mind explain more about the statistic learning (chapter 20) while you are going through the material on today's class. It is very difficult for me to understand and a few of my classmates have the same concern. Thank you

– Reading assigned was Chapters 14.1, 14.2, plus:
– 20.1-20.3.2 (3rd ed.)
– 20.1-20.7, but not the part of 20.3 after “Learning Bayesian Networks” (2nd ed.)

  • Machine Learning
  • Probability and Uncertainty
  • Question & Answer

– If time: Viola & Jones, 2004

SLIDE 68

Learning to Detect Faces: A Large-Scale Application of Machine Learning

(This material is not in the text; for further information see the paper by P. Viola and M. Jones, International Journal of Computer Vision, 2004.)
SLIDE 69

Viola-Jones Face Detection Algorithm

  • Overview:

– Viola-Jones technique overview
– Features
– Integral images
– Feature extraction
– Weak classifiers
– Boosting and classifier evaluation
– Cascade of boosted classifiers
– Example results

SLIDE 70

Viola Jones Technique Overview

  • Three major contributions/phases of the algorithm:

– Feature extraction
– Learning using boosting and decision stumps
– Multi-scale detection algorithm

  • Feature extraction and feature evaluation
– Rectangular features are used; with a new image representation, their calculation is very fast.

  • Classifier learning using a method called boosting
– A combination of simple classifiers is very effective

SLIDE 71

Features

  • Four basic types.

– They are easy to calculate.
– The white areas are subtracted from the black ones.
– A special representation of the sample called the integral image makes feature extraction faster.

SLIDE 72

Integral images

  • Summed area tables
  • A representation that means any rectangle’s values can be

calculated in four accesses of the integral image calculated in four accesses of the integral image.

SLIDE 73

Fast Computation of Pixel Sums
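A sketch of the summed area table and the four-access rectangle sum; the zero padding row/column is an implementation convenience, not something the slides specify.

import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r, :c]; padded with a zero row and column.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of the pixels in a rectangle, using exactly four table accesses."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

A two-rectangle feature is then just the difference of two rect_sum calls (white area minus black area).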

SLIDE 74

Feature Extraction

  • Features are extracted from sub-windows of a sample image.

– The base size for a sub-window is 24 by 24 pixels.
– Each of the four feature types is scaled and shifted across all possible combinations.

  • In a 24 pixel by 24 pixel sub-window there are ~160,000 possible features to be calculated.

SLIDE 75

Learning with many features

  • We have 160,000 features – how can we learn a classifier with only a few hundred training examples without overfitting?
  • Idea:

– Learn a single very simple classifier (a “weak classifier”)
– Classify the data
– Look at where it makes errors
– Reweight the data so that the inputs where we made errors get higher weight in the learning process
– Now learn a 2nd simple classifier on the weighted data
– Combine the 1st and 2nd classifiers and weight the data according to where they make errors
– Learn a 3rd classifier on the weighted data
– … and so on until we learn T simple classifiers
– Final classifier is the combination of all T classifiers
– This procedure is called “Boosting” – works very well in practice.

SLIDE 76

“Decision Stumps”

  • Decision stumps = decision tree with only a single root node

– Certainly a very weak learner!
– Say the attributes are real-valued
– Decision stump algorithm looks at all possible thresholds for each attribute
– Selects the one with the max information gain
– Resulting classifier is a simple threshold on a single feature

  • Outputs a +1 if the attribute is above a certain threshold
  • Outputs a -1 if the attribute is below the threshold

– Note: can restrict the search to the n-1 “midpoint” locations between a sorted list of attribute values for each feature. So complexity is n log n per attribute.
– Note: this is exactly equivalent to learning a perceptron with a single intercept term (so we could also learn these stumps via gradient descent and mean squared error)
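A sketch of stump fitting on one real-valued attribute. It minimizes weighted error (the quantity boosting needs) rather than information gain, and the function names are illustrative; labels are assumed to be in {-1, +1}.

import numpy as np

def fit_stump(values, labels, weights):
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]
    best = (np.inf, None, 1)                  # (error, threshold, polarity)
    for t in (v[1:] + v[:-1]) / 2:            # the n-1 midpoint thresholds
        for polarity in (+1, -1):
            pred = np.where(v > t, polarity, -polarity)   # +/-1 outputs
            err = w[pred != y].sum()          # weighted training error
            if err < best[0]:
                best = (err, t, polarity)
    return best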

SLIDE 77

Boosting Example

SLIDE 78

First classifier

SLIDE 79

First 2 classifiers

SLIDE 80

First 3 classifiers

SLIDE 81

Final Classifier learned by Boosting

SLIDE 82

Final Classifier learned by Boosting

SLIDE 83

Boosting with Decision Stumps

  • Viola-Jones algorithm

– With K attributes (e.g., K = 160,000) we have 160,000 different decision stumps to choose from
– At each stage of boosting:
  • given reweighted data from the previous stage
  • Train all K (160,000) single-feature perceptrons
  • Select the single best classifier at this stage
  • Combine it with the other previously selected classifiers
  • Reweight the data
  • Learn all K classifiers again, select the best, combine, reweight
  • Repeat until you have T classifiers selected

– Very computationally intensive

  • Learning K decision stumps T times
  • E.g., K = 160,000 and T = 1000
SLIDE 84

How is classifier combining done?

  • At each stage we select the best classifier on the current iteration and combine it with the set of classifiers learned so far

  • How are the classifiers combined?

– Take the weight*feature for each classifier, sum these up, and compare to a threshold (very simple)
– The boosting algorithm automatically provides the appropriate weight for each classifier and the threshold
– This version of boosting is known as the AdaBoost algorithm
– Some nice mathematical theory shows that it is in fact a very powerful machine learning technique
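A compact sketch of this AdaBoost loop. The classifier weight a = ½ ln((1-err)/err) is the standard AdaBoost formula; the sketch assumes each round's best weighted error is strictly between 0 and ½, and that labels are in {-1, +1}.

import numpy as np

def adaboost(X, y, weak_learners, T):
    # weak_learners: list of h(x) -> -1/+1 functions (e.g., fitted stumps).
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform data weights to start
    ensemble = []
    for _ in range(T):
        preds = [np.array([h(x) for x in X]) for h in weak_learners]
        errs = [w[p != y].sum() for p in preds]
        best = int(np.argmin(errs))          # best weak classifier this round
        err, p = errs[best], preds[best]
        a = 0.5 * np.log((1.0 - err) / err)  # classifier weight
        w *= np.exp(-a * y * p)              # upweight the examples it got wrong
        w /= w.sum()
        ensemble.append((a, weak_learners[best]))
    return ensemble

def ensemble_predict(ensemble, x):
    # Weighted sum of the weak outputs, compared to a threshold of 0.
    return int(np.sign(sum(a * h(x) for a, h in ensemble)))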

SLIDE 85

Reduction in Error as Boosting adds Classifiers

SLIDE 86

Useful Features Learned by Boosting

SLIDE 87

A Cascade of Classifiers

SLIDE 88

Detection in Real Images

  • Basic classifier operates on 24 x 24 subwindows
  • Scaling:

– Scale the detector (rather than the images)
– Features can easily be evaluated at any scale
– Scale by factors of 1.25

  • Location:

– Move detector around the image (e.g., 1 pixel increments)

  • Final Detections

– A real face may result in multiple nearby detections
– Postprocess detected subwindows to combine overlapping detections into a single detection

SLIDE 89

Training

  • Examples of 24x24 images with faces
SLIDE 90

Small set of 111 Training Images

SLIDE 91

Sample results using the Viola-Jones Detector

  • Notice detection at multiple scales
SLIDE 92

More Detection Examples

SLIDE 93

Practical implementation

  • Details discussed in Viola-Jones paper
  • Training time = weeks (with 5k faces and 9.5k non-faces)
  • Final detector has 38 layers in the cascade, 6060 features
  • 700 MHz processor:

– Can process a 384 x 288 image in 0.067 seconds (in 2003, when the paper was written)