Statistical modeling and analysis of neural data (NEU 560), Spring 2018



SLIDE 1

Statistical modeling and analysis of neural data (NEU 560), Spring 2018
Linear Algebra Review (Lecture 2)
Jonathan Pillow, Princeton University

SLIDE 2

Linear algebra

“Linear algebra has become as basic and as applicable as calculus, and fortunately it is easier.”

  • Gilbert Strang, Linear Algebra and Its Applications

SLIDE 3

vectors

  • a vector stacks its components into a column:

    v = [v1, v2, ..., vN]^T

[figures: the vector (v1, v2) drawn as an arrow in 2D, and (v1, v2, v3) in 3D]

SLIDE 4

column vector

  • v = [v1, v2, ..., vN]^T

in python:

    import numpy as np

    # make a 3-component column vector
    v = np.array([[3], [1], [-7]])

    # transpose: turns the column vector into a row vector
    v.T

    # create a row vector directly
    v = np.array([[3, 1, -7]])   # row vector
    # or
    v = np.array([3, 1, -7])     # 1D vector

SLIDE 5

addition of vectors

[figure: v + w formed by translating w to the tip of v (“tip-to-tail”)]
SLIDE 6

scalar multiplication

[figure: v and 2v: scaling stretches the vector along its own direction]
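A quick numpy sketch of these two operations (the vectors below are hypothetical examples, not taken from the slides):

```python
import numpy as np

# two hypothetical 2D vectors
v = np.array([2.0, 1.0])
w = np.array([1.0, 3.0])

# vector addition: elementwise sum ("tip-to-tail" translation)
print(v + w)    # [3. 4.]

# scalar multiplication: stretches (or flips) the vector
print(2 * v)    # [4. 2.]
print(-1 * v)   # [-2. -1.]
```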

SLIDE 7

vector norm (“L2 norm”)

  • vector length in Euclidean space

In 2-D: ||v|| = sqrt(v1^2 + v2^2)
In n-D: ||v|| = sqrt(v1^2 + v2^2 + ... + vn^2)

SLIDE 8

vector norm (“L2 norm”)

in python:

    import numpy as np

    # make a vector
    v = np.array([1, 7, 3, 0, 1])

    # many equivalent ways to compute the norm
    np.linalg.norm(v)       # built-in function
    np.sqrt(np.dot(v, v))   # sqrt of dot product
    np.sqrt(v.T @ v)        # sqrt of v-transpose times v
    np.sqrt(sum(v * v))     # sqrt of sum of elementwise product
    np.sqrt(sum(v ** 2))    # sqrt of sum of v elementwise-squared

    # note use of @, *, and **:
    # @  - matrix multiply
    # *  - elementwise multiply
    # ** - exponentiation (“raising to a power”)

SLIDE 9

unit vector

  • vector such that ||v|| = 1
  • in 2 dimensions: sits on the unit circle

SLIDE 10

unit vector

  • in n dimensions: vector such that ||v|| = 1
  • sits on the surface of an n-dimensional hypersphere

SLIDE 11

unit vector

  • vector such that ||v|| = 1
  • make any vector into a unit vector via v̂ = v / ||v||
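A minimal numpy sketch of normalizing a vector this way (the example vector is a hypothetical choice):

```python
import numpy as np

# a hypothetical nonzero vector
v = np.array([3.0, 4.0])

# divide by the norm to get a unit vector pointing the same way
u = v / np.linalg.norm(v)

print(u)                    # [0.6 0.8]
print(np.linalg.norm(u))    # 1.0
```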

SLIDE 12

inner product (aka “dot product”)

  • produces a scalar from two vectors:

    v · w = v1 w1 + v2 w2 + ... + vn wn = ||v|| ||w|| cos(φvw)

[figure: vectors v and w separated by the angle φvw]
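A small numpy sketch relating the dot product to the angle φvw between two vectors (the vectors here are hypothetical examples):

```python
import numpy as np

# two hypothetical vectors at a 45-degree angle
v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])

# dot product: sum of elementwise products
dot = v @ w                      # 1.0

# the same quantity via norms and the angle:  v . w = ||v|| ||w|| cos(phi)
cos_phi = dot / (np.linalg.norm(v) * np.linalg.norm(w))
phi = np.arccos(cos_phi)
print(np.degrees(phi))           # ~45.0
```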

SLIDE 13

linear projection

  • intuitively, dropping a vector down onto a linear surface at a right angle
  • if û is a unit vector, the length of the projection is v · û, and the projection itself is (v · û) û
  • for a non-unit vector u, length of projection = (v · u) / ||u||

[figure: v projected onto the line spanned by û]

SLIDE 14

linear projection

  • if û is a unit vector, the projection (v · û) û is the component of v in the direction of u
  • for a non-unit vector u, length of projection = (v · u) / ||u||

[figure: v projected onto the line spanned by û]
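The projection formulas above can be sketched in numpy (the vectors are hypothetical, and `u_hat`, `proj` are illustration names):

```python
import numpy as np

# hypothetical vectors: project v onto the direction of u
v = np.array([2.0, 3.0])
u = np.array([1.0, 0.0])

u_hat = u / np.linalg.norm(u)   # unit vector in the direction of u
length = v @ u_hat              # length of projection (component of v along u)
proj = length * u_hat           # the projection itself

print(length)                   # 2.0
print(proj)                     # [2. 0.]

# the residual v - proj is orthogonal to u
print((v - proj) @ u_hat)       # 0.0
```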

SLIDE 15

orthogonality

  • two vectors are orthogonal (or “perpendicular”) if their dot product is zero: v · w = 0
  • any v splits into the component (v · û) û in the direction of u, plus a component of v orthogonal to u

[figure: v decomposed into its projection onto û and the orthogonal residual]

SLIDE 16

linear combination

  • scaling and summing applied to a group of vectors, e.g. a1 v1 + a2 v2 + a3 v3
  • a group of vectors is linearly dependent if one can be written as a linear combination of the others
  • otherwise, linearly independent
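One common way to test linear dependence numerically is via matrix rank; a sketch with hypothetical vectors (the rank-based test is a standard technique, not something the slides prescribe):

```python
import numpy as np

# hypothetical set of vectors, stacked as columns of a matrix
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                  # deliberately a linear combination of v1, v2

A = np.column_stack([v1, v2, v3])

# rank < number of vectors  =>  the set is linearly dependent
print(np.linalg.matrix_rank(A))   # 2, so these three vectors are dependent
```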

SLIDE 17

matrices

An n × m matrix can be thought of in two ways:
  • as m column vectors c1, ..., cm side by side
  • as n row vectors r1, ..., rn stacked on top of each other

SLIDE 18

matrix multiplication

One perspective: dot product with each row: the i-th entry of Av is ri · v

SLIDE 19

matrix multiplication

Other perspective: linear combination of columns:

    Av = v1 c1 + v2 c2 + … + vm cm

where c1, ..., cm are the columns of A and v1, ..., vm are the entries of v.
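Both perspectives can be checked in numpy (the matrix and vector below are hypothetical examples):

```python
import numpy as np

# hypothetical matrix and vector
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
v = np.array([10.0, 1.0])

# view 1: each output entry is the dot product of a row of A with v
rows_view = np.array([A[i] @ v for i in range(A.shape[0])])

# view 2: the output is a linear combination of the columns of A
cols_view = v[0] * A[:, 0] + v[1] * A[:, 1]

print(A @ v)        # [12. 34. 56.]
print(rows_view)    # same
print(cols_view)    # same
```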

SLIDE 20

transpose

  • flipping around the diagonal

square matrix:

    [1 4 7]T    [1 2 3]
    [2 5 8]  =  [4 5 6]
    [3 6 9]     [7 8 9]

non-square:

    [1 4]T
    [2 5]  =  [1 2 3]
    [3 6]     [4 5 6]

SLIDE 21

inverse

  • If A is a square matrix, its inverse A^-1 (if it exists) satisfies:

    A A^-1 = A^-1 A = I

SLIDE 22

The identity matrix

  • “the identity” I (e.g., for 4 × 4) has ones on the diagonal and zeros elsewhere
  • I v = v for any vector v

SLIDE 23

two useful facts

  • inverse of a product: (AB)^-1 = B^-1 A^-1
  • transpose of a product: (AB)^T = B^T A^T
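Both identities are easy to spot-check numerically (the matrices below are hypothetical, chosen to be invertible):

```python
import numpy as np

# hypothetical invertible matrices
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 4.0], [2.0, 1.0]])

# transpose of a product: (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))                 # True

# inverse of a product: (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))  # True
```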

SLIDE 24

(Square) Matrix Equation

    A v = b

  • assume (for now) A is square and invertible
  • left-multiply both sides by the inverse of A:  v = A^-1 b
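A sketch of solving the square system in numpy (the example system is hypothetical); note that `np.linalg.solve` is generally preferred over forming the inverse explicitly, for speed and numerical stability:

```python
import numpy as np

# hypothetical square, invertible system  A v = b
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# v = A^-1 b, computed explicitly with the inverse...
v_inv = np.linalg.inv(A) @ b

# ...or (preferred) with a linear solver
v = np.linalg.solve(A, b)

print(v)                      # [2. 3.]
print(np.allclose(A @ v, b))  # True
```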

SLIDE 25

vector space & basis

  • vector space - set of all points that can be obtained by linear combinations of some set of vectors
  • basis - a set of linearly independent vectors that generate (through linear combinations) all points in a vector space

[figures: two different bases {v1, v2} for the same 2D vector space (subspace of R2); a single vector v1 spanning a 1D vector space]

SLIDE 26

span - to generate via linear combination

  • vector space - set of all points that can be spanned by some set of vectors
  • basis - a set of vectors that can span a vector space

[figures: two different bases for the same 2D vector space (subspace of R2); a 1D vector space]

SLIDE 27

orthonormal basis

  • basis composed of orthogonal unit vectors

[figure: two different orthonormal bases {v1, v2} for the same vector space]

SLIDE 28

Orthogonal matrix

  • Square matrix whose columns (and rows) form an orthonormal basis (i.e., are orthogonal unit vectors)

Properties:
  • A^T A = A A^T = I, so A^-1 = A^T
  • length-preserving: ||A v|| = ||v|| for any v
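These properties can be checked on a 2D rotation matrix, one standard example of an orthogonal matrix (the 30-degree angle and test vector below are arbitrary choices):

```python
import numpy as np

# a hypothetical orthogonal matrix: rotation by 30 degrees
theta = np.radians(30)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# columns are orthogonal unit vectors:  A^T A = I, so A^-1 = A^T
print(np.allclose(A.T @ A, np.eye(2)))                       # True

# length-preserving:  ||A v|| = ||v|| for any v
v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v)))  # True
```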