Quantitative Evaluation
Adapted in part from:
http://www.cs.cornell.edu/Courses/cs578/2003fa/performance_measures.pdf
Accuracy
– Target: 0/1, -1/+1, True/False
– Prediction = f(inputs) = f(x): 0/1 or real-valued
– Threshold: predict 1 if f(x) > threshold, else 0
accuracy = (1/N) · Σ_{i=1}^{N} (1 − (target(x_i) − threshold(f(x_i)))²)
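As a minimal sketch (not from the original slides; the 0.5 threshold and the toy scores/targets are assumptions), accuracy of a thresholded real-valued predictor can be computed like this. For 0/1 values, (target − prediction)² is 1 exactly on the errors, so accuracy equals 1 minus the mean squared difference:

```python
def threshold(score, t=0.5):
    """Turn a real-valued score f(x) into a 0/1 prediction."""
    return 1 if score > t else 0

def accuracy(scores, targets, t=0.5):
    # 1 - mean squared difference: for 0/1 values this is the
    # fraction of predictions that match the targets.
    preds = [threshold(s, t) for s in scores]
    n = len(targets)
    return 1 - sum((y - p) ** 2 for y, p in zip(targets, preds)) / n

print(accuracy([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 1]))  # 2 of 4 correct -> 0.5
```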
– cost(b-type error) = cost(c-type error): the two kinds of misclassification (the off-diagonal confusion-matrix cells b and c) are assumed equally costly
F1 score: the harmonic average of precision and recall,
F1 = 2 · Precision · Recall / (Precision + Recall)
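The harmonic average of precision and recall can be sketched directly (the example values are made up):

```python
def f1_score(precision, recall):
    """Harmonic average of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean punishes imbalance: it sits closer to the
# smaller of the two values than the arithmetic mean does.
print(f1_score(0.5, 1.0))    # -> 0.666..., not 0.75
print(f1_score(0.75, 0.75))  # equal P and R -> 0.75
```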
[Figure: example curves labeled "better performance" and "worse performance"]
ROC curves plot:
– TPR vs. FPR
– Sensitivity vs. 1 − Specificity
– P(true|true) vs. P(true|false)
The diagonal line corresponds to random prediction.
[Lift chart: cumulative % respondents (y-axis) vs. % prospects mailed (x-axis), both from 10% to 100%]
Example: Lift(25%) = CR(25%) / 25% = 62% / 25% = 2.5
If we send to 25% of our prospects using the model, they are 2.5 times as likely to respond as prospects selected randomly.
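A sketch of computing CR(c) and Lift(c) from model-ranked respondent flags (the example data is made up; the 62% example above comes from a larger dataset):

```python
def lift(ranked_labels, c):
    """ranked_labels: 0/1 respondent flags sorted by model score, best first."""
    total_resp = sum(ranked_labels)
    k = round(c * len(ranked_labels))          # size of the mailed top slice
    cr = sum(ranked_labels[:k]) / total_resp   # CR(c): respondents captured
    return cr / c

# 8 prospects, 4 respondents; mailing the top quarter (2 mailings)
# captures 2 of the 4 respondents, so CR(0.25) = 0.5 and Lift = 2.0.
print(lift([1, 1, 1, 0, 1, 0, 0, 0], 0.25))  # -> 2.0
```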
– T = total number of prospects
– H = total number of respondents
– n = cost per mailing
– p = profit per response
– Profit(c) = CR(c).H.p − c.T.n + (1−c).T.n − (1−CR(c)).H.p, where:
  – CR(c).H.p = revenue generated by respondents
  – c.T.n = cost of sending the mailings
  – (1−c).T.n = saving from not sending mailings
  – (1−CR(c)).H.p = cost of missed revenue
– This simplifies to Profit(c) = 2.(CR(c).H.p − c.T.n) − (H.p − T.n), where:
  – 2 is a constant (scaling)
  – H.p − T.n is a constant (translation)
– So for ranking mailing cutoffs: Profit(c) ~ CR(c).H.p − c.T.n
– E = H / T (the response rate)
– Dividing through by T: Profit(c) ~ CR(c).E.p − c.n
– Lift(c) = CR(c)/c
– Lift would be maximum if we could send to only the respondents: at c = E we would have CR(E) = 1
– The maximum value for lift is thus: 1/E
– Case 1: p < n
– Case 2: p = n
– Case 3: p > n
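The three cases can be explored numerically with the per-prospect profit Profit(c) ~ CR(c).E.p − c.n derived above. The response rate E = 0.2, the piecewise-linear CR curve, and the (p, n) pairs below are all assumptions chosen for illustration:

```python
E = 0.2  # assumed response rate H / T

def cr(c):
    # Assumed cumulative-response curve: the model finds every
    # respondent within the top 40% of ranked prospects.
    return min(c / 0.4, 1.0)

def profit_per_prospect(c, p, n):
    # Profit(c) ~ CR(c).E.p - c.n
    return cr(c) * E * p - c * n

grid = [c / 100 for c in range(0, 101)]
for p, n in [(2, 4), (4, 4), (16, 4)]:  # p < n, p = n, p > n
    best_c = max(grid, key=lambda c: profit_per_prospect(c, p, n))
    print(f"p={p} n={n}: best c={best_c:.2f}, "
          f"profit={profit_per_prospect(best_c, p, n):.3f}")
```

Under these assumed numbers, p well above n makes it optimal to mail exactly the slice where the model still finds respondents (c = 0.4), while p ≤ n drives the best fraction to c = 0, i.e., do not mail at all.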