Bayesian Method for Repeated Threshold Estimation
Alexander Petrov
Department of Cognitive Sciences, University of California, Irvine
Supported by NIMH and NSF grants to Prof. Barbara Dosher
http://www.socsci.uci.edu/~apetrov/
Motivation: Perceptual Learning
- Non-stationary thresholds: the dynamics of learning is important
- Must use naïve observers; low motivation means high lapsing rates
- Slow learning means many sessions
- Large volume of low-quality binary data
- Up/down (Levitt, 1971)
- PEST (Taylor & Creelman, 1967)
- Best PEST (Pentland, 1980)
- QUEST (Watson & Pelli, 1983)
- ML-TEST (Harvey, 1986)
- Ideal (Pelli, 1987)
- YAAP (Treutwein, 1989)
- and many others…
Standard methods:
- Adaptive stimulus placement
- Stopping criterion
- Threshold estimation

Our method:
- Threshold estimation
- Integrates information across blocks
Psychometric function parameters:
- Threshold log α
- Slope β
- Guessing rate γ
- Lapsing rate λ
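These four parameters pin down the psychometric function. A minimal sketch in Python (the talk's own software is MATLAB), assuming the standard Weibull form ψ(x) = γ + (1 − γ − λ)(1 − e^(−(x/α)^β)):

```python
import numpy as np

def weibull(x, alpha, beta, gamma=0.5, lam=0.0):
    """Weibull psychometric function: P(correct) at intensity x.

    alpha: threshold (the talk parameterizes it as log alpha),
    beta: slope, gamma: guessing rate (0.5 for 2AFC), lam: lapsing rate.
    """
    return gamma + (1.0 - gamma - lam) * (1.0 - np.exp(-(x / alpha) ** beta))

# At x = alpha the curve passes through gamma + (1 - gamma - lam)*(1 - 1/e),
# independent of beta.
p = weibull(1.0, alpha=1.0, beta=3.0, gamma=0.5, lam=0.02)
```

With γ = 0.5 and λ = 0, the curve rises from 50% correct (pure guessing) toward 100%; a nonzero λ caps performance at 1 − λ.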
Posterior for block k. Let θ_k = log α_k be the block-specific threshold and φ = (β, λ) the parameters shared across all n blocks:

p(θ_k, φ | D_1, …, D_n) ∝ p(D_k | θ_k, φ) · p(θ_k) · p(φ) · ∏_{i≠k} ∫ p(D_i | θ_i, φ) p(θ_i) dθ_i

- p(D_k | θ_k, φ): likelihood of the current data
- p(θ_k) and p(φ): priors
- ∏_{i≠k} ∫ p(D_i | θ_i, φ) p(θ_i) dθ_i: information about φ extracted from the other data sets
- the product of p(φ) with that term: modified prior for the current block
Two-pass algorithm:
- Pass 1: for each block i, calculate the marginal likelihood of its data as a function of the shared parameters: L_i(φ) = ∫ p(D_i | θ_i, φ) p(θ_i) dθ_i
- Pass 2: for each block k, calculate the posterior p(θ_k, φ | D) ∝ p(D_k | θ_k, φ) · p(θ_k) · p(φ) · ∏_{i≠k} L_i(φ)
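The two passes can be sketched on a small grid. This is an illustrative Python reimplementation under stated assumptions (flat priors, a coarse hypothetical grid, θ_k = log α_k, φ = (β, λ), Weibull likelihood), not the released MATLAB toolbox:

```python
import numpy as np

def weibull(x, log_alpha, beta, lam, gamma=0.5):
    p = gamma + (1 - gamma - lam) * (1 - np.exp(-(x / np.exp(log_alpha)) ** beta))
    return np.clip(p, 1e-12, 1 - 1e-12)   # keep logs finite

# Hypothetical coarse grids over log-alpha x beta x lambda.
theta = np.linspace(-2.0, 0.0, 41)     # block-specific log alpha
beta  = np.linspace(1.0, 5.0, 9)       # shared slope
lam   = np.linspace(0.0, 0.1, 6)       # shared lapsing rate
TH, B, LM = np.meshgrid(theta, beta, lam, indexing="ij")

p_theta = np.full(theta.size, 1.0 / theta.size)   # flat prior on theta
p_phi   = np.full((beta.size, lam.size), 1.0)
p_phi  /= p_phi.sum()                             # flat prior on phi

def block_likelihood(data):
    """p(D | theta, phi) on the grid; data = [(intensity, correct), ...]."""
    ll = np.zeros(TH.shape)
    for x, correct in data:
        p = weibull(x, TH, B, LM)
        ll += np.log(p) if correct else np.log(1 - p)
    return np.exp(ll - ll.max())   # rescaled to avoid underflow

def repeated_posteriors(blocks):
    lik = [block_likelihood(d) for d in blocks]
    # Pass 1: marginal likelihood of each block as a function of phi only
    L_phi = [np.tensordot(p_theta, lk, axes=(0, 0)) for lk in lik]
    posts = []
    # Pass 2: posterior for block k with a phi-prior sharpened by blocks i != k
    for k in range(len(blocks)):
        prior_phi = p_phi.copy()
        for i, Li in enumerate(L_phi):
            if i != k:
                prior_phi *= Li
                prior_phi /= prior_phi.sum()   # renormalize for stability
        post = lik[k] * p_theta[:, None, None] * prior_phi[None, :, :]
        posts.append(post / post.sum())
    return posts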
The threshold estimate for block k comes from the marginal posterior, integrating out the shared parameters:

p(θ_k | D) ∝ ∫ p(θ_k, φ | D) dφ

[Figure: posterior density of the 75% threshold for one block, with its normal approximation]
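Given parameter estimates, the 75% threshold follows by inverting the psychometric function. A hypothetical helper, assuming the Weibull form ψ(x) = γ + (1 − γ − λ)(1 − e^(−(x/α)^β)):

```python
import numpy as np

def threshold_at(p, alpha, beta, lam, gamma=0.5):
    """Intensity where the Weibull psychometric function reaches P(correct) = p.

    Inverts p = gamma + (1 - gamma - lam) * (1 - exp(-(x/alpha)**beta));
    requires gamma < p < 1 - lam.
    """
    frac = (p - gamma) / (1.0 - gamma - lam)
    return alpha * (-np.log(1.0 - frac)) ** (1.0 / beta)

# e.g. the 75% threshold for a 2AFC observer with a 2% lapsing rate:
t75 = threshold_at(0.75, alpha=1.0, beta=3.0, lam=0.02)
```

Applying this transformation to every grid point converts the posterior over (log α, β, λ) into a posterior over the 75% threshold itself.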
Implementation:
- Vaguely informative priors on log α, β, and λ
- Implemented on a grid: log α × β × λ
- Assume γ = .5 for 2AFC data
- MATLAB software available at http://www.socsci.uci.edu/~apetrov/
[Figure: staircase track, log intensity vs. trial number (trials 1–100), with the 75% threshold marked]

Simulation design:
- 2 interleaved staircases, 100 trials/block
- 10 catch trials, 40 trials of 3-down/1-up, 50 trials of 2-down/1-up
- 100 runs of 12 blocks each
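The staircase schedule can be simulated with a generic n-down/1-up rule. A sketch with hypothetical step size and starting point (interleaving and catch trials omitted):

```python
import random

def staircase(p_correct, n_trials, down, x0=0.0, step=0.1):
    """n-down/1-up staircase on log intensity: 'down' consecutive correct
    responses lower the intensity one step; any error raises it one step."""
    x, streak, track = x0, 0, []
    for _ in range(n_trials):
        track.append(x)
        if random.random() < p_correct(x):
            streak += 1
            if streak == down:
                x, streak = x - step, 0
        else:
            x, streak = x + step, 0
    return track, x

# One block as on the slide: 40 trials of 3-down/1-up continued
# by 50 trials of 2-down/1-up from where the first rule left off.
def one_block(p_correct):
    t1, x = staircase(p_correct, 40, down=3)
    t2, _ = staircase(p_correct, 50, down=2, x0=x)
    return t1 + t2
```

The 3-down/1-up rule converges near 79% correct and 2-down/1-up near 71%, so the two segments bracket the 75% threshold.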
Stationary case: 1200 Monte Carlo estimates (100 runs × 12 blocks); true 75% threshold = −1.217.

[Figure: histogram of estimated thresholds for the ML, posterior-median, and posterior-mean estimators, with the true value marked]

[Table: mean and standard deviation of the ML, median, and mean estimators across the 1200 estimates]
[Figure: psychometric functions, P(correct) vs. log intensity, comparing the true and ML-estimated slope β and lapsing rate λ]
Same simulation with no catch trials presented: 1200 Monte Carlo estimates; true 75% threshold = −1.217.

[Figure: histogram of estimated thresholds for the ML, posterior-median, and posterior-mean estimators]

[Table: mean and standard deviation of the ML, median, and mean estimators without catch trials]
Non-stationary case: the simulated true threshold decays exponentially with practice, T75(t) ∝ e^(−t/800).

[Figure: true 75% threshold vs. trial number (1–6000)]
[Figure: reconstructed learning curve: ML threshold vs. block number (1–60), true learning curve, and reconstruction ± 95% CI]
Non-stationary case: 6000 Monte Carlo estimates of estimated minus true threshold.

[Figure: histogram of estimation errors for the ML, posterior-median, and posterior-mean estimators]

- Accuracy similar to the stationary case
- No systematic bias over time
Application to real data:

[Figure: 75% thresholds vs. block number (1–16), in high noise and in no noise; Jeter, Dosher, Petrov, & Lu (2005)]
Open questions:
- Sensitivity to priors?
- Comparison with standard ML methods
- Individual differences
- Estimating the slope in addition to the threshold
- Non-stationary β and λ?
- Recommended stimulus placement?
- Hierarchical models