Microbead Rheology: Theory and Applications. Ian Seim. PowerPoint PPT Presentation.




SLIDE 1

Microbead Rheology

Theory and Applications. Ian Seim, iseim@live.unc.edu, 2/2/17

SLIDE 2 (no text captured)

SLIDE 3

Path Data

  • Each path is a time series with x and y positions recorded at regular time intervals
    – The x-positions can be denoted: X(t0 = 0), X(t1), X(t2), …, X(ti), …, X(tN = T)
  • tN = T = total recording time of the path data
    – Typically T = 30 seconds for experiments in the Hill lab
  • τ = lag time (inter-observation time)
    – The first lag time is t_{i+1} − t_i = 1/(frame rate) (typically 60 fps, so τ = 1/60 s), but we are often interested in multiples of this first lag time (2/60, 3/60, etc.)
    – Number of data points: N = T/τ

SLIDE 4

Path Data

  • We are interested in the properties of the increments, ΔXi = X(ti + τ) − X(ti)
    – ti for i = 0, 1, 2, …, N − τ
    – sometimes for multiples of τ, i.e. 2/frame rate, 3/frame rate, 4/frame rate, etc.
  • For a given path and τ = 1/frame rate, we then have N − 1 increments in each coordinate
  • What does the distribution of the x-increments look like for this salt water path…?
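Extracting the increments at a given lag amounts to a simple array shift. A minimal sketch in Python/NumPy; the path below is simulated stand-in data (not real bead-tracking output), and the function name is mine:

```python
import numpy as np

def increments(x, lag=1):
    """Return the increments dX_i = X(t_{i+lag}) - X(t_i) of a 1-d path."""
    x = np.asarray(x, dtype=float)
    return x[lag:] - x[:-lag]

# Stand-in path: 30 s at 60 fps -> 1801 positions (hypothetical values)
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 1.0, size=1801))

dx1 = increments(x, lag=1)   # tau = 1/60 s -> 1800 increments
dx2 = increments(x, lag=2)   # tau = 2/60 s -> 1799 increments
print(len(dx1), len(dx2))    # 1800 1799
```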

SLIDES 5–6 (no text captured)

SLIDE 7

Mean-Squared-Displacement (MSD)

  • For a time series of positions, X(ti) for i = 0, 1, 2, …, N, at a given lag time τ, the MSD is defined as:

$$\left\langle \Delta r^2(\tau) \right\rangle = \frac{1}{N-\tau+1}\sum_{i=0}^{N-\tau}\left[ X(t_i+\tau) - X(t_i) \right]^2$$

  • At each lag time, τ, the MSD is the variance of the corresponding van Hove correlation function
SLIDE 8 (no text captured)

SLIDE 9

Brownian Motion

  • A simple continuous-time stochastic process
  • In our applications, we are interested in Brownian motion as the random walk of a particle
    – Think of a drunken sailor who stumbles out of the bar with nowhere to go: he takes a sequence of steps, but randomly chooses an angle for each one. How far away from the bar is he, on average, after some amount of time?

SLIDE 10

Brownian Motion

  • Regard each increment, ΔXi = X(ti + τ) − X(ti), as a random variable
    – ΔXi ~ N(0, 2Dτ)
    – The increments are independent (the particle doesn’t “remember” where it was)
  • The MSD is linear in the lag time (Einstein, 1905), and the diffusivity is given by the Stokes–Einstein relation:

$$\mathrm{MSD}(\tau) = 2dD\tau, \qquad d = \text{dimensionality}$$

$$D = \frac{k_B T}{6\pi\eta r}, \qquad k_B = \text{Boltzmann's constant},\ \eta = \text{viscosity},\ r = \text{bead radius}$$

  • Fluctuation–Dissipation: allows us to infer dissipation (macro) properties from observed fluctuations (micro) and vice versa
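As a worked example of the Stokes–Einstein relation, here is the diffusivity of a small bead in water; the temperature, viscosity, and bead radius are illustrative assumptions, not values from the Hill lab:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # temperature, K (assumed room temperature)
eta = 1.0e-3        # viscosity of water, Pa*s (assumed)
r = 0.5e-6          # bead radius, m (assumed 1 um diameter bead)

# Stokes-Einstein: D = kB * T / (6 * pi * eta * r)
D = kB * T / (6 * math.pi * eta * r)
print(D)  # ~4.4e-13 m^2/s, i.e. ~0.44 um^2/s
```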

SLIDES 11–18 (no text captured)

SLIDE 19

Fractional Brownian Motion

  • A generalization of Brownian motion
    – The distribution of increments is still Gaussian with mean 0, but they are no longer independent, i.e. they are correlated and have “memory”
  • We introduce a parameter, α, that captures power-law scaling of the MSD that is not necessarily linear (α = 1):

$$\mathrm{MSD}(\tau) = 2dD\tau^{\alpha}$$

  • If α < 1, this is called sub-diffusion
  • If α > 1, this is called super-diffusion

SLIDE 20

Properties of Variance

  • For N random variables Xi we have:

$$\mathrm{Var}\!\left(\sum_{i=1}^{N} X_i\right) = \sum_{i=1}^{N}\mathrm{Var}(X_i) + \sum_{i\neq j}\mathrm{Cov}(X_i, X_j)$$

  • In our setting, the N (+1) random variables are increments, ΔXi, in each coordinate, and their sum is our observed path
  • Let’s look at what happens to the variance of this sum in two cases…
SLIDE 21

Brownian Motion

  • Since the variance of the sum of the smallest increments equals the variance of the single largest increment (the end-to-end displacement), the increments are consistent with independence, i.e. covariance = 0

$$\Delta X_i = X(t_i+\tau) - X(t_i) \sim \mathcal{N}(0,\, 2D\tau)$$

$$\mathrm{Var}\!\left(\sum_{i=1}^{N}\Delta X_i\right) = \sum_{i=1}^{N}\mathrm{Var}(\Delta X_i) + \sum_{i\neq j}\mathrm{Cov}(\Delta X_i, \Delta X_j) = \sum_{i=1}^{N} 2D\tau = 2ND\tau$$

$$\mathrm{Var}\big(X(t_N) - X(t_0)\big) = 2DT = 2ND\tau$$
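This identity can be checked numerically. A sketch that simulates many Brownian paths with independent increments and compares the end-to-end variance to 2NDτ; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
D, tau, N = 0.25, 1.0 / 60.0, 1800       # T = N * tau = 30 s (illustrative)
paths = 20000                            # number of simulated paths

# Independent increments dX_i ~ N(0, 2*D*tau), one row per path
dX = rng.normal(0.0, np.sqrt(2 * D * tau), size=(paths, N))
end_to_end = dX.sum(axis=1)              # X(t_N) - X(t_0) for each path

print(end_to_end.var(), 2 * N * D * tau)  # both ~15.0
```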

SLIDE 22

Fractional Brownian Motion

  • For fBm, the increments are jointly Gaussian, $\Delta X_i = \sqrt{2D\tau^{\alpha}}\,\mathcal{N}_i(0, V_{ij})$, where $V_{ij}$ is a covariance matrix (defined later)
  • Here we see that the two variances are no longer equal, so the increments are now correlated and the covariance is non-zero:

$$\mathrm{Var}\!\left(\sum_{i=1}^{N}\Delta X_i\right) = \sum_{i=1}^{N}\mathrm{Var}(\Delta X_i) + \sum_{i\neq j}\mathrm{Cov}(\Delta X_i, \Delta X_j), \qquad \sum_{i=1}^{N}\mathrm{Var}(\Delta X_i) = 2ND\tau^{\alpha} = 2DT^{\alpha}N^{1-\alpha}$$

$$\mathrm{Var}\big(X(t_N) - X(t_0)\big) = 2DT^{\alpha}$$

SLIDE 23

Covariance for fBm

$$\Delta X_i = \sqrt{2D\tau^{\alpha}}\,\mathcal{N}_i(0, V_{ij})$$

$$V_{ij} = \frac{1}{2}\left[\, \big|i-j+1\big|^{\alpha} + \big|i-j-1\big|^{\alpha} - 2\big|i-j\big|^{\alpha} \,\right]$$
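The covariance formula above is straightforward to vectorize. A sketch (the function name is mine); note that α = 1 recovers the identity matrix, i.e. independent Brownian increments:

```python
import numpy as np

def fbm_cov(N, alpha):
    """V_ij = 0.5 * (|i-j+1|^a + |i-j-1|^a - 2*|i-j|^a) for fBm increments."""
    i = np.arange(N)
    k = np.abs(i[:, None] - i[None, :]).astype(float)  # |i - j|
    return 0.5 * (np.abs(k + 1) ** alpha
                  + np.abs(k - 1) ** alpha
                  - 2.0 * k ** alpha)

V = fbm_cov(5, alpha=0.5)
print(np.diag(V))                               # unit variances on the diagonal
print(np.allclose(fbm_cov(5, 1.0), np.eye(5)))  # True: alpha = 1 is Brownian
```

For α < 1 the off-diagonal entries are negative (anti-correlated increments), which is the signature of sub-diffusion.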

SLIDE 24

Experimental Drift?

We should probably look at the particle paths, per movie…

SLIDES 25–27 (no text captured)

SLIDE 28

Drift

  • Due to fluid flow, microscope movement, other external forces, etc.
  • For now, we assume it is linear (constant in time), and we update our fBm model:

$$\Delta X_i = \mu\tau + \sqrt{2D\tau^{\alpha}}\,\mathcal{N}_i(0, V_{ij})$$

  • So we see that paths are a “superposition” of a stochastic process, fBm, and a deterministic, linear drift with velocity μ
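A sketch of generating increments from this drift + noise model (the α = 1 Brownian case shown for simplicity; μ, D, and the other values are illustrative). The recovered end-to-end average velocity should be close to μ:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, D, tau, N = 2.0, 0.1, 1.0 / 60.0, 1800   # illustrative parameters

# Each increment: deterministic drift mu*tau plus a stochastic Brownian part
dX = mu * tau + rng.normal(0.0, np.sqrt(2 * D * tau), size=N)
path = np.concatenate(([0.0], np.cumsum(dX)))

v_est = path[-1] / (N * tau)   # end-to-end average velocity
print(v_est)                   # ~2.0 (close to mu)
```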

SLIDE 29

Fitting fBm to data using MLE

  • Using our model of fBm + drift, we can simultaneously fit the parameters D, α, and μ to each experimental path using Maximum Likelihood Estimation
  • This allows us to accurately recover the diffusive parameters despite drift (if it is linear), effectively characterize the viscoelastic fluid, and reconstruct a “true” MSD

SLIDE 30

Maximum Likelihood Estimation

  • Suppose there are N i.i.d. observations, X1, …, XN, from an unknown probability density function with parameters θ
    – We want to find an estimate of θ
  • We define the joint density function for these observations as

$$f(X_1, \ldots, X_N \mid \theta) = \prod_{i=1}^{N} f(X_i \mid \theta)$$

  • Now, we regard the observations as parameters of this function and θ as a variable (free to vary), and we end up with the likelihood

$$L(\theta; X_1, \ldots, X_N) = f(X_1, \ldots, X_N \mid \theta)$$

SLIDE 31

Maximum Likelihood Estimation

  • In practice, we mostly use the log-likelihood, which is the natural logarithm of the likelihood function:

$$\ln L(\theta; X_1, \ldots, X_N) = \sum_{i=1}^{N} \ln f(X_i \mid \theta)$$

  • Using calculus (if we are lucky), we can take partial derivatives of the log-likelihood function and solve for the maximizing parameter values. Otherwise, we can run a minimization routine (fminsearch() in MATLAB) on the negative log-likelihood function over the parameters of interest
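For plain Brownian motion the MLE of D has a closed form, which makes a good sanity check on the numerical route. A sketch using a simple grid search over the negative log-likelihood in place of fminsearch; the data and parameter values are simulated, not experimental:

```python
import numpy as np

rng = np.random.default_rng(7)
D_true, tau = 0.4, 1.0 / 60.0
dX = rng.normal(0.0, np.sqrt(2 * D_true * tau), size=5000)  # simulated increments

def neg_log_like(D):
    """-ln L for iid increments dX_i ~ N(0, 2*D*tau)."""
    var = 2.0 * D * tau
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + dX ** 2 / var)

# Closed form: setting d/dD (-ln L) = 0 gives D_hat = mean(dX^2) / (2*tau)
D_closed = np.mean(dX ** 2) / (2.0 * tau)

# Numerical minimization (a grid search stands in for fminsearch here)
grid = np.linspace(0.01, 2.0, 2000)
D_fit = grid[np.argmin([neg_log_like(D) for D in grid])]

print(D_closed, D_fit)  # both ~0.4
```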

SLIDES 32–33 (no text captured)

SLIDE 34

Simulating fBm Paths

  • Form the covariance matrix, $C_{ij} = \tfrac{1}{2}\left( t_i^{\alpha} + t_j^{\alpha} - |t_i - t_j|^{\alpha} \right)$
  • Compute the square root matrix, Σ (eigenvalue decomposition is what I use)
  • Generate a vector, v, of numbers drawn from a standard Gaussian distribution
  • Let $u = \sqrt{2D}\,\Sigma v$; this is our 1-dimensional fBm path
  • Can easily add a linear drift by choosing a velocity, μ, multiplying it by the corresponding time vector, and adding that element-wise to u
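The recipe above can be sketched as follows (the function name is mine; drift is added as described in the last bullet):

```python
import numpy as np

def simulate_fbm(N, tau, D, alpha, rng, mu=0.0):
    """1-d fBm path via the square root of the position covariance matrix."""
    t = tau * np.arange(1, N + 1)
    # C_ij = 0.5 * (t_i^a + t_j^a - |t_i - t_j|^a)
    C = 0.5 * (t[:, None] ** alpha + t[None, :] ** alpha
               - np.abs(t[:, None] - t[None, :]) ** alpha)
    w, Q = np.linalg.eigh(C)                                   # eigendecomposition
    Sigma = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T  # square root matrix
    v = rng.normal(size=N)                                     # standard Gaussians
    return np.sqrt(2.0 * D) * (Sigma @ v) + mu * t  # u = sqrt(2D)*Sigma*v + drift

rng = np.random.default_rng(3)
u = simulate_fbm(N=256, tau=1.0 / 60.0, D=0.2, alpha=0.6, rng=rng)
print(u.shape)  # (256,)
```

By construction, the variance of the path position at time t is 2D·t^α, matching MSD(τ) = 2dDτ^α in one dimension.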

SLIDES 35–36 (no text captured)

SLIDE 37

Assignments

  • 1) Write code that simulates Brownian paths in 2-d (it will have 3 inputs (D, τ, T) and should output a time series of positions in x and y)
  • 2) Write code to calculate the MSD (input is a 2-d path; output is a vector of lag times and a vector of corresponding mean-squared displacements)
  • 3) Write code that finds the maximum likelihood estimate of D for a Brownian path (input is a 2-d Brownian path and τ; output is an estimate of D)