Bayesian Method for Repeated Threshold Estimation. Alexander Petrov. PowerPoint PPT Presentation.





SLIDE 1

Bayesian Method for Repeated Threshold Estimation

Alexander Petrov

Department of Cognitive Sciences, University of California, Irvine

Supported by NIMH and NSF grants to Prof. Barbara Dosher

SLIDE 2

http://www.socsci.uci.edu/~apetrov/

Motivation: Perceptual Learning

- Non-stationary thresholds: the dynamics of learning is important
- Must use naïve observers
- Low motivation, hence high lapsing rates
- Slow learning, hence many sessions
- Large volume of low-quality binary data

SLIDE 3

Objective: Data Reduction

SLIDE 4

Isn’t This a Solved Problem?

- Up/down (Levitt, 1971)
- PEST (Taylor & Creelman, 1967)
- Best PEST (Pentland, 1980)
- QUEST (Watson & Pelli, 1983)
- ML-Test (Harvey, 1986)
- Ideal (Pelli, 1987)
- YAAP (Treutwein, 1989)
- and many others…

SLIDE 5

We Solve a Different Problem

Standard methods:

- Adaptive stimulus placement
- Stopping criterion
- Threshold estimation

Our method:

- Threshold estimation only
- Integrates information across blocks

SLIDE 6

Weibull Psychometric Function

- Threshold log α
- Slope β
- Guessing rate γ
- Lapsing rate λ

W(x; α, β) = 1 − exp(−exp(β (log x − log α)))

P(x; α, β, γ, λ) = γ + (1 − γ − λ) W(x; α, β)

[Plot: P rises from the guessing rate γ toward the upper asymptote 1 − λ as log x increases, with the threshold at log α]
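The two equations can be sketched in Python (the talk's own implementation is in MATLAB; the default γ and λ below are the 2AFC values used later in the deck, chosen here only for illustration):

```python
import math

def weibull_p(x, alpha, beta, gamma=0.5, lam=0.10):
    """P(correct) = gamma + (1 - gamma - lam) * W(x; alpha, beta),
    where W(x) = 1 - exp(-exp(beta * (log x - log alpha)))."""
    w = 1.0 - math.exp(-math.exp(beta * (math.log(x) - math.log(alpha))))
    return gamma + (1.0 - gamma - lam) * w

# At x = alpha, W = 1 - 1/e, so P = gamma + (1 - gamma - lam) * (1 - 1/e)
p_at_alpha = weibull_p(1.0, 1.0, 1.5)   # ~0.753 for gamma = .5, lam = .10
```

Note that P runs from γ (pure guessing at low intensity) up to 1 − λ (lapses cap performance at high intensity), exactly the two asymptotes in the slide's figure.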

SLIDE 7

Two Kinds of Parameters

- Threshold log α: parameter of interest θ
- Slope β, guessing rate γ, lapsing rate λ: nuisance parameters φ

The nuisance parameters are harder to estimate but change more slowly than the threshold parameter.

SLIDE 8

Get the Best of Both Worlds

Use long data sequences to constrain the nuisance parameters; use short sequences to estimate the thresholds.

SLIDE 9

Joint Posterior of θ_k, φ

p(θ_k, φ | y_1, …, y_n) ∝ p(y_k | θ_k, φ) p(θ_k) p(φ) ∏_{i≠k} ∫ p(y_i | θ_i, φ) p(θ_i) dθ_i

- p(y_k | θ_k, φ): likelihood of the current data
- p(θ_k), p(φ): priors
- ∏_{i≠k} ∫ p(y_i | θ_i, φ) p(θ_i) dθ_i: information about φ extracted from the other data sets; together with p(φ) it acts as a modified prior for the current block

SLIDE 10

Two-Pass Algorithm

Pass 1: for each block i, calculate

p(y_i | φ) = ∫ p(y_i | θ, φ) p(θ) dθ

Pass 2: for each block k, calculate

p(θ_k, φ | y) ∝ p(y_k | θ_k, φ) p(θ_k) p(φ) ∏_{i≠k} p(y_i | φ)
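The two passes can be sketched numerically in Python on a coarse parameter grid with synthetic 2AFC data. The grid ranges, trial counts, priors (flat on the grid), and generating values below are illustrative assumptions, not the talk's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grids (the talk uses a log-alpha x beta x lambda grid, gamma = .5)
TH = np.linspace(-2.0, 0.0, 41)          # theta: threshold log-alpha
BE = np.linspace(0.5, 4.5, 9)            # beta: slope (nuisance)
LA = np.linspace(0.0, 0.2, 5)            # lambda: lapsing rate (nuisance)
TH3 = TH[:, None, None]                  # broadcastable (theta, beta, lambda)
BE3 = BE[None, :, None]
LA3 = LA[None, None, :]

def pcorrect(logx, th, be, la, gamma=0.5):
    w = 1.0 - np.exp(-np.exp(be * (logx - th)))
    return gamma + (1.0 - gamma - la) * w

def block_loglik(logx, correct):
    """Bernoulli log-likelihood of one block on the full parameter grid."""
    ll = np.zeros((TH.size, BE.size, LA.size))
    for x, c in zip(logx, correct):
        p = np.clip(pcorrect(x, TH3, BE3, LA3), 1e-10, 1.0 - 1e-10)
        ll += np.log(p if c else 1.0 - p)
    return ll

# Synthetic data: 6 blocks of 100 trials from a slowly learning observer
blocks = []
for k in range(6):
    true_th = -0.7 - 0.1 * k
    logx = true_th + rng.normal(0.0, 0.3, size=100)
    correct = rng.random(100) < pcorrect(logx, true_th, 1.5, 0.10)
    blocks.append((logx, correct))
LL = [block_loglik(x, c) for x, c in blocks]

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(a - m).sum(axis=axis))

# Pass 1: p(y_i | phi) = sum over theta of p(y_i | theta, phi) p(theta)
evid = [logsumexp(ll, 0) - np.log(TH.size) for ll in LL]   # each (beta, lambda)

# Pass 2: p(theta_k, phi | y) ~ p(y_k|theta_k,phi) p(theta_k) p(phi) prod_{i!=k} p(y_i|phi)
def posterior(k):
    lp = LL[k] + sum(e for i, e in enumerate(evid) if i != k)[None, :, :]
    post = np.exp(lp - lp.max())
    return post / post.sum()

post0 = posterior(0)
th_hat = (post0.sum(axis=(1, 2)) * TH).sum()   # posterior-mean threshold, block 0
```

Pass 1 touches each block once to summarize what it says about φ; Pass 2 then reuses those summaries for every block, instead of refitting the full joint model once per block.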

SLIDE 11

Posterior Thresholds

The 75% threshold for block k is T_k = P⁻¹(.75; θ_k, φ); its posterior density follows by propagating the joint posterior through this mapping:

p(T_k | y) = ∫∫ δ(T_k − P⁻¹(.75; θ_k, φ)) p(θ_k, φ | y) dθ_k dφ

[Plot: posterior density of the 75% threshold (log units, roughly −3 to −0.5), with a normal approximation overlaid]
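For the Weibull model with known parameters, the 75% point can be obtained in closed form by inverting P; a Python sketch (γ = .5 as for 2AFC; the default λ = .10 matches the simulations):

```python
import math

def t75(log_alpha, beta, gamma=0.5, lam=0.10):
    """Log intensity at which P(x) = .75, found by inverting
    P = gamma + (1 - gamma - lam) * W and then W(x; alpha, beta)."""
    w = (0.75 - gamma) / (1.0 - gamma - lam)      # required Weibull value
    return log_alpha + math.log(-math.log(1.0 - w)) / beta

# With the Simulation 1 values (log alpha = -1.204, beta = 1.5, lambda = .10)
# this reproduces the deck's quoted true 75% threshold of about -1.217.
```

Note that with lapsing (λ > 0) the 75% point is not at W = .625 of the raw Weibull unless γ and λ are accounted for, which is why T75 differs slightly from log α.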

SLIDE 12

Some Details

- Vaguely informative priors:
  p(log α) ∝ N(μ_α, σ_α)
  p(β) ∝ N(μ_β, σ_β)
  p(λ) ∝ Beta(a_λ, b_λ)
- Implemented on a grid: log α × β × λ
- Assume γ = .5 for 2AFC data
- MATLAB software available at http://www.socsci.uci.edu/~apetrov/
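A Python sketch of evaluating such priors on the log α × β × λ grid; the grid ranges and all hyperparameter values below are placeholders, not the ones used in the talk:

```python
import numpy as np

# Illustrative grid (the talk's MATLAB code defines its own grid and priors)
log_alpha = np.linspace(-3.0, 0.0, 61)
beta = np.linspace(0.5, 5.0, 46)
lam = np.linspace(0.0, 0.2, 21)

p_alpha = np.exp(-0.5 * ((log_alpha + 1.0) / 1.0) ** 2)   # N(mu_a, sigma_a)
p_beta = np.exp(-0.5 * ((beta - 2.0) / 1.5) ** 2)         # N(mu_b, sigma_b)
p_lam = lam ** (2 - 1) * (1.0 - lam) ** (18 - 1)          # Beta(a, b), unnormalized

# Normalize on the grid so each factor sums to 1
p_alpha, p_beta, p_lam = (p / p.sum() for p in (p_alpha, p_beta, p_lam))

# Joint prior over the log-alpha x beta x lambda grid (independent factors)
prior = p_alpha[:, None, None] * p_beta[None, :, None] * p_lam[None, None, :]
```

Working on a grid means normalization constants never need to be computed analytically: each factor is renormalized after evaluation, which is all the "∝" in the slide requires.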

SLIDE 13

Simulation 1: Stationary

β = 1.5
log α = -1.204 (const)
λ = .10

True 75% threshold: T75 = -1.217

SLIDE 14

Stimulus Placement

- 2 interleaved staircases
- 100 trials/block: 10 catch trials, 40 at 3-down/1-up, 50 at 2-down/1-up
- 100 runs of 12 blocks each

[Plot: log intensity vs. trial number (1–100) for the two interleaved staircases]
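The transformed up/down rules behind this placement scheme are simple to state in code; a generic Python sketch (the starting level and step size here are made up):

```python
def run_staircase(responses, n_down, start=0.0, step=0.1):
    """n-down/1-up staircase: intensity drops one step after n_down
    consecutive correct responses and rises one step after any error."""
    x, streak, levels = start, 0, []
    for correct in responses:
        levels.append(x)
        if correct:
            streak += 1
            if streak == n_down:
                x, streak = x - step, 0
        else:
            x, streak = x + step, 0
    return levels

# 2-down/1-up converges near 70.7% correct; 3-down/1-up near 79.4%
levels = run_staircase([True, True, False, True], n_down=2)
# levels: [0.0, 0.0, -0.1, 0.0]
```

Interleaving two such staircases, as on this slide, decorrelates consecutive stimulus levels and samples the psychometric function at two convergence points at once.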

SLIDE 15

Threshold Estimators

1200 Monte Carlo estimates; true 75% threshold = -1.217.

Estimator   Mean    Med     Std. dev.
ML          -1.24   -1.23   .27
Mean        -1.30   -1.27   .31
Median      -1.26   -1.23   .28

[Histogram of estimated thresholds for the ML, posterior-mean, and posterior-median estimators, with the true value marked]
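Given a gridded posterior over the threshold, the three estimators compared here are one-liners; a Python sketch with a toy posterior (the grid and the posterior shape are illustrative):

```python
import numpy as np

def point_estimates(grid, post):
    """Grid-ML (mode), posterior mean, and posterior median of a 1-D posterior."""
    post = post / post.sum()
    ml = grid[np.argmax(post)]
    mean = (grid * post).sum()
    median = grid[np.searchsorted(np.cumsum(post), 0.5)]
    return ml, mean, median

grid = np.linspace(-2.0, 0.0, 201)
post = np.exp(-0.5 * ((grid + 1.2) / 0.1) ** 2)   # toy posterior centered at -1.2
ml, mean, median = point_estimates(grid, post)    # all three near -1.2
```

For a symmetric posterior the three coincide; the table above shows how they diverge once the posterior is skewed by real binary data.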

SLIDE 16

β × λ Distribution from Pass 1

[Left panel: P(correct) vs. log intensity for the true vs. the ML values of β and λ. Right panel: joint distribution over slope β (1–5) and lapsing rate λ (0.05–0.2) from Pass 1]

SLIDE 17

Catch Trials Are Worthwhile

1200 Monte Carlo estimates, no catch trials presented; true 75% threshold = -1.217.

Estimator   Mean    Med     Std. dev.
ML          -1.24   -1.22   .31
Mean        -1.36   -1.33   .57
Median      -1.29   -1.26   .58

Without catch trials the posterior-mean and posterior-median estimators become markedly more variable.

[Histogram of estimated thresholds for the three estimators, with the true value marked]

SLIDE 18

Simulation 2: With Learning

β = 1.5
log α(t) = -0.693 (2 - e^(-t/800))
λ = .10

[Plot: true 75% threshold T75 vs. trial number (1000–6000), decreasing from about -0.7 toward the asymptote at -1.386]
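A Python sketch of an exponential learning curve of this form (the constants are read off a garbled slide, so treat them as approximate):

```python
import math

def log_alpha(t, a=0.693, tau=800.0):
    """Exponential learning curve: log alpha(t) = -a * (2 - exp(-t / tau)),
    falling from -a at t = 0 toward the asymptote -2a as t grows."""
    return -a * (2.0 - math.exp(-t / tau))

start = log_alpha(0)       # -0.693
late = log_alpha(6000)     # close to the asymptote -1.386
```

The time constant τ = 800 trials means the threshold covers most of its total improvement within the first few thousand trials, matching the plotted range.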

SLIDE 19

Group Learning Curve, N=100

[Plot: ML threshold estimate vs. block number (1–60): the true learning curve and the reconstruction ± 95% CI]

SLIDE 20

More Realistic Sample, N=10

[Plot: ML threshold estimate vs. block number (1–60): the true learning curve and the reconstruction ± 95% CI]

SLIDE 21

Individual Runs

[Six panels: reconstructed threshold (log units) vs. block number for six individual runs]
SLIDE 22

The Method Performs Well

6000 Monte Carlo estimates of the error (estimated threshold minus true threshold):

Estimator   Mean    Med     Std. dev.
ML          -0.03   -0.02   .28
Mean        -0.08   -0.05   .32
Median      -0.05   -0.03   .29

- Similar to the stationary case
- No systematic bias over time

[Histogram of estimation errors for the three estimators, with zero error marked]

SLIDE 23

Example: Actual Data, N=8

[Plot: 75% thresholds vs. block number (1–16) for training in high noise and in no noise]

Jeter, Dosher, Petrov, & Lu (2005)

SLIDE 24

Future Work

- Sensitivity to priors?
- Comparison with standard ML methods
- Individual differences
- Estimate slope in addition to threshold
- Non-stationary β and λ?
- Recommended stimulus placement?
- Hierarchical models

SLIDE 25

The End