
Bayesian Decision Theory - PowerPoint PPT Presentation



  1. Bayesian Decision Theory, Chapter 2 (Jan 11, 18, 23, 25) • Bayes decision theory is a fundamental statistical approach to pattern classification • Assumption: the decision problem is posed in probabilistic terms and all relevant probability values are known

  2. Decision Making: overview of approaches
     • Probabilistic model known → Bayes Decision Theory (Chapter 2) → "Optimal" rules
     • Probabilistic model unknown:
       • Supervised learning
         • Parametric approach (Chapter 3) → Plug-in rules
         • Nonparametric approach (Chapters 4, 6) → Density estimation, k-NN, neural networks
       • Unsupervised learning
         • Parametric approach (Chapter 10) → Mixture models
         • Nonparametric approach (Chapter 10) → Cluster analysis

  3. Sea bass vs. Salmon Classification • Each fish appearing on the conveyor belt is either sea bass or salmon; two "states of nature" • Let ω denote the state of nature: ω1 = sea bass and ω2 = salmon; ω is a random variable that must be described probabilistically • a priori (prior) probability: P(ω1) and P(ω2); P(ω1) is the probability that the next fish observed is a sea bass • If no other types of fish are present then P(ω1) + P(ω2) = 1 (exclusivity and exhaustivity) • If we have no reason to favor one class over the other, we may assume P(ω1) = P(ω2) (uniform priors) • The prior reflects our knowledge of how likely we are to observe a sea bass or a salmon before seeing the fish; it may depend on the time of year or the fishing area!

  4. • Case 1: Suppose we must make a decision without observing the fish; we only have the prior information • Bayes decision rule given only prior information: decide ω1 if P(ω1) > P(ω2), otherwise decide ω2 • Error rate = min{P(ω1), P(ω2)} • Suppose now we are allowed to measure a feature of the fish, say its lightness value x • Define the class-conditional probability density function (pdf) of feature x; x is a random variable • p(x | ωi) is the density of x given class ωi, i = 1, 2; p(x | ωi) ≥ 0 and the area under each pdf is 1.
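To make Case 1 concrete, here is a minimal Python sketch of the prior-only decision rule; the prior values (0.6 and 0.4) are illustrative assumptions, not values from the slides.

```python
# Prior-only Bayes decision rule (Case 1), with assumed priors.
P_w1 = 0.6   # P(omega_1): assumed prior probability of sea bass
P_w2 = 0.4   # P(omega_2): assumed prior probability of salmon

# Decide the class with the larger prior; every fish gets the same label.
decision = "omega_1 (sea bass)" if P_w1 > P_w2 else "omega_2 (salmon)"
error_rate = min(P_w1, P_w2)   # Error rate = min{P(omega_1), P(omega_2)}

print(decision, error_rate)    # -> omega_1 (sea bass) 0.4
```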

  5. The less the class-conditional densities overlap, the better the feature

  6. • Case 2: Suppose we have only the class-conditional densities and no prior information • Maximum likelihood decision rule: assign input pattern x to class ω1 if p(x | ω1) > p(x | ω2), otherwise to ω2 • p(x | ω1) is also the likelihood of class ω1 given the feature value x • Case 3: We have both the prior probabilities and the class-conditional densities • How does the feature x influence our attitude (prior) concerning the true state of nature? • Bayes decision rule

  7. • The posterior probability is a function of the likelihood and the prior • Joint density: p(ωj, x) = P(ωj | x) p(x) = p(x | ωj) P(ωj) • Bayes rule: P(ωj | x) = p(x | ωj) P(ωj) / p(x), j = 1, 2, where p(x) = Σj p(x | ωj) P(ωj), summed over j = 1, 2 • Posterior = (Likelihood × Prior) / Evidence • The evidence p(x) can be viewed as a scale factor that guarantees the posterior probabilities sum to 1 • p(x) is also called the unconditional density of feature x
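A small Python sketch of Bayes rule for the two-class case; the Gaussian class-conditional densities and the priors below are assumed purely for illustration.

```python
# Bayes rule at a single feature value x, with assumed priors and densities.
from scipy.stats import norm

P = [0.6, 0.4]                           # priors P(omega_1), P(omega_2) (assumed)
likelihood = [norm(4.0, 1.0).pdf,        # p(x | omega_1), assumed N(4, 1)
              norm(6.0, 1.5).pdf]        # p(x | omega_2), assumed N(6, 1.5^2)

x = 5.0
joint = [likelihood[j](x) * P[j] for j in range(2)]  # p(x | omega_j) P(omega_j)
evidence = sum(joint)                                 # p(x) = sum_j p(x|omega_j) P(omega_j)
posterior = [jp / evidence for jp in joint]           # P(omega_j | x)

print(posterior, sum(posterior))   # the two posteriors sum to 1
```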

  8. • P(ω1 | x) is the probability of the state of nature being ω1 given that feature value x has been observed • A decision based on the posterior probabilities is called the "optimal" Bayes decision rule. What does optimal mean? For a given observation (feature value) x: if P(ω1 | x) > P(ω2 | x) decide ω1; if P(ω1 | x) < P(ω2 | x) decide ω2 • To justify this rule, consider the probability of error: P(error | x) = P(ω1 | x) if we decide ω2; P(error | x) = P(ω2 | x) if we decide ω1

  9. • So, for a given x, we minimize the probability of error by deciding ω1 if P(ω1 | x) > P(ω2 | x) and ω2 otherwise. Therefore: P(error | x) = min[P(ω1 | x), P(ω2 | x)] • For each observation x, the Bayes decision rule minimizes the probability of error • The unconditional error P(error) is obtained by integrating P(error | x) over all possible values of x, weighted by p(x)
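The unconditional error can be approximated numerically. The sketch below integrates min[P(ω1 | x), P(ω2 | x)] weighted by p(x) on a grid; the densities and priors are the same illustrative assumptions used in the previous sketch.

```python
# Approximate P(error) = integral of min[P(omega_1|x), P(omega_2|x)] p(x) dx
# on a grid, with assumed priors and Gaussian class-conditional densities.
import numpy as np
from scipy.stats import norm

P = [0.6, 0.4]
pdf = [norm(4.0, 1.0).pdf, norm(6.0, 1.5).pdf]

x = np.linspace(-5.0, 15.0, 2001)
joint = np.array([pdf[j](x) * P[j] for j in range(2)])  # p(x|omega_j) P(omega_j)
evidence = joint.sum(axis=0)                            # p(x)
posterior = joint / evidence                            # P(omega_j | x)

p_error_given_x = posterior.min(axis=0)                 # min of the two posteriors
dx = x[1] - x[0]
P_error = float((p_error_given_x * evidence).sum() * dx)  # Riemann-sum integration
print(P_error)
```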

  10. • Optimal Bayes decision rule: decide ω1 if P(ω1 | x) > P(ω2 | x); otherwise decide ω2 • Special cases: (i) P(ω1) = P(ω2): decide ω1 if p(x | ω1) > p(x | ω2), otherwise ω2 (ii) p(x | ω1) = p(x | ω2): decide ω1 if P(ω1) > P(ω2), otherwise ω2

  11. Bayesian Decision Theory – Continuous Features • Generalization of the preceding formulation • Use of more than one feature (d features) • Use of more than two states of nature (c classes) • Allowing actions other than deciding on the state of nature • Introduce a “loss function”; minimizing the “risk” is more general than minimizing the probability of error

  12. • Allowing actions other than classification primarily allows the possibility of “rejection” • Rejection: Input pattern is rejected when it is difficult to decide between two classes or the pattern is too noisy! • The loss function specifies the cost of each action

  13. • Let {ω1, ω2, …, ωc} be the set of c states of nature (or "categories" or "classes") • Let {α1, α2, …, αa} be the set of a possible actions that can be taken for an input pattern x • Let λ(αi | ωj) be the loss incurred for taking action αi when the true state of nature is ωj • Decision rule: α(x) specifies which action to take for every possible observation x

  14. Conditional risk: R(αi | x) = Σj λ(αi | ωj) P(ωj | x), summed over j = 1, …, c • For a given x, suppose we take the action αi: if the true state is ωj, we incur the loss λ(αi | ωj) • P(ωj | x) is the probability that the true state is ωj • Any one of the c states may be the true one for a given x, hence the sum • Overall risk R = expected value of the conditional risk R(α(x) | x) with respect to p(x) • Minimizing R: for each x, minimize the conditional risk R(αi | x) over i = 1, …, a
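A minimal sketch of minimum conditional risk action selection for a single observation; the loss matrix and posterior values are assumed for illustration (rows index actions αi, columns index states ωj).

```python
# Minimum-risk action selection for one observation x, with assumed values.
import numpy as np

loss = np.array([[0.0, 2.0],      # lambda(alpha_1|omega_1), lambda(alpha_1|omega_2)
                 [1.0, 0.0]])     # lambda(alpha_2|omega_1), lambda(alpha_2|omega_2)
posterior = np.array([0.7, 0.3])  # P(omega_1 | x), P(omega_2 | x), assumed

# R(alpha_i | x) = sum_j lambda(alpha_i | omega_j) P(omega_j | x)
cond_risk = loss @ posterior
best_action = int(np.argmin(cond_risk))   # take the action with minimum conditional risk
print(cond_risk, best_action)             # -> [0.6 0.7] 0  (take alpha_1)
```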

  15. Select the action αi for which R(αi | x) is minimum • This action minimizes the overall risk • The resulting minimum overall risk is called the Bayes risk • It is the best classification performance that can be achieved given the priors, the class-conditional densities, and the loss function!

  16. • Two-category classification: α1 = decide ω1, α2 = decide ω2 • λij = λ(αi | ωj) is the loss incurred in deciding ωi when the true state of nature is ωj • Conditional risks: R(α1 | x) = λ11 P(ω1 | x) + λ12 P(ω2 | x); R(α2 | x) = λ21 P(ω1 | x) + λ22 P(ω2 | x)

  17. The Bayes decision rule is stated as: if R(α1 | x) < R(α2 | x), take action α1 ("decide ω1") • This rule is equivalent to: decide ω1 if (λ21 - λ11) p(x | ω1) P(ω1) > (λ12 - λ22) p(x | ω2) P(ω2); decide ω2 otherwise

  18. In terms of the likelihood ratio (LR), the preceding rule is equivalent to: if p(x | ω1) / p(x | ω2) > [(λ12 - λ22) / (λ21 - λ11)] · [P(ω2) / P(ω1)], then take action α1 (decide ω1); otherwise take action α2 (decide ω2) • The "threshold" term on the right-hand side now involves the priors and the loss function
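The likelihood-ratio form of the rule translates directly into code. In this sketch the losses, priors, and class-conditional densities are illustrative assumptions.

```python
# Two-category minimum-risk rule written as a likelihood-ratio test (assumed values).
from scipy.stats import norm

lam = [[0.0, 2.0],
       [1.0, 0.0]]                 # lam[i][j] = lambda(alpha_{i+1} | omega_{j+1}), assumed
P = [0.6, 0.4]                     # priors, assumed
p1, p2 = norm(4.0, 1.0).pdf, norm(6.0, 1.5).pdf   # class-conditional densities, assumed

x = 5.0
lr = p1(x) / p2(x)                 # likelihood ratio p(x|omega_1) / p(x|omega_2)
threshold = (lam[0][1] - lam[1][1]) / (lam[1][0] - lam[0][0]) * P[1] / P[0]

decision = "omega_1" if lr > threshold else "omega_2"
print(lr, threshold, decision)
```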

  19. Interpretation of the Bayes decision rule: if the likelihood ratio of class ω1 to class ω2 exceeds a threshold value (independent of the input pattern x), the optimal action is to decide ω1 • The maximum likelihood decision rule is the special case of the minimum-risk decision rule with threshold value = 1, which arises from the 0-1 loss function together with equal class prior probabilities

  20. Bayesian Decision Theory (Sections 2.3-2.5) • Minimum Error Rate Classification • Classifiers, Discriminant Functions and Decision Surfaces • Multivariate Normal (Gaussian) Density

  21. Minimum Error Rate Classification • Actions are decisions on classes: if action αi is taken and the true state of nature is ωj, the decision is correct if i = j and in error if i ≠ j • We seek a decision rule that minimizes the probability of error (the error rate)

  22. • Zero-one (0-1) loss function: no loss for a correct decision and unit loss for an incorrect decision: λ(αi, ωj) = 0 if i = j and 1 if i ≠ j, for i, j = 1, …, c • The conditional risk then simplifies to: R(αi | x) = Σj λ(αi | ωj) P(ωj | x) = Σ(j ≠ i) P(ωj | x) = 1 - P(ωi | x) • "The risk corresponding to the 0-1 loss function is the average probability of error"
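A quick numerical check that under the 0-1 loss the conditional risk equals 1 - P(ωi | x), so the minimum-risk action coincides with the maximum-posterior class; the posterior values below are assumed.

```python
# Under the 0-1 loss, R(alpha_i | x) = 1 - P(omega_i | x). Assumed posteriors.
import numpy as np

c = 3
loss01 = np.ones((c, c)) - np.eye(c)        # lambda(alpha_i, omega_j) = 0 if i == j else 1
posterior = np.array([0.2, 0.5, 0.3])       # P(omega_j | x), assumed; sums to 1

risk = loss01 @ posterior                   # R(alpha_i | x)
print(risk)                                 # equals 1 - posterior since posteriors sum to 1
print(np.argmin(risk) == np.argmax(posterior))   # True: min risk = max posterior
```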

  23. • Minimizing the risk under the 0-1 loss function requires maximizing the posterior probability P(ωi | x), since R(αi | x) = 1 - P(ωi | x) • Minimum error rate rule: decide ωi if P(ωi | x) > P(ωj | x) for all j ≠ i

  24. • Decision boundaries and decision regions • Let θλ = [(λ12 - λ22) / (λ21 - λ11)] · [P(ω2) / P(ω1)]; then decide ω1 if p(x | ω1) / p(x | ω2) > θλ • If λ is the 0-1 loss function, the threshold involves only the priors: λ = (0 1; 1 0) gives θa = P(ω2) / P(ω1) • If λ = (0 2; 1 0), i.e., misclassifying ω2 is twice as costly, the threshold becomes θb = 2 P(ω2) / P(ω1)
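For reference, the two thresholds on this slide can be computed directly; the priors below are assumed.

```python
# Thresholds from slide 24, with assumed priors P(omega_1)=0.6, P(omega_2)=0.4.
P1, P2 = 0.6, 0.4

# 0-1 loss, lambda = (0 1; 1 0): the threshold depends only on the priors.
theta_a = P2 / P1

# Loss lambda = (0 2; 1 0): misclassifying omega_2 costs twice as much,
# which doubles the threshold.
theta_b = 2 * P2 / P1

# Decide omega_1 when p(x | omega_1) / p(x | omega_2) exceeds the threshold.
print(theta_a, theta_b)   # ~0.667, ~1.333
```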
