

SLIDE 1

GNR607 Principles of Satellite Image Processing

Instructor: Prof. B. Krishna Mohan CSRE, IIT Bombay bkmohan@csre.iitb.ac.in

Slot 2 Lecture 06 Mathematical Preliminaries - 1

  • Aug. 04, 2014 9.30 AM – 10.25 AM
SLIDE 2

Contents of the Lecture

  • Mathematical Preliminaries

– Matrix Operations
– Vectors
– Eigenanalysis of matrices

GNR607 Lecture 06: Math. Preliminaries - 1 | B. Krishna Mohan | IIT Bombay | Aug 04, 2014

SLIDE 3

Review of Mathematical Preliminaries

  • All remote sensing related data processing and analysis require knowledge of mathematics
  • ALL areas of mathematics are relevant in processing images acquired by remote sensing
  • Topics considered for a brief review:

– Matrix and vector operations and the eigenvalue problem
– Probability and Statistics
– Linear System Principles

SLIDE 4

Mathematical Preliminaries


SLIDE 5

Language of Image Processing

The language of digital image processing is mathematics:

– Linear Algebra
– Optimization
– Probability Theory and Statistics
– Matrices and vectors
– Geometry
– Integral and differential equations
– Fuzzy Sets

SLIDE 6

Matrices and vectors

  • Any matrix A of size M x N is an array of entities or elements (symbols, real numbers, integers, complex numbers, …) having M rows and N columns

A = [ a11  a12  ...  a1N
      a21  a22  ...  a2N
      ...
      aM1  aM2  ...  aMN ]

Examples:

10 20 30 40        20 30
15 10 20 15        15 10
30 20 40 20        11 19
33 40 51 98        10 12
4x4 Square         4x2 Rectangular

Red: elements on the main diagonal of the square matrix
Trace = 10 + 10 + 40 + 98 = 158
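As a quick check, the trace of the 4x4 example above can be computed with a short pure-Python sketch:

```python
# 4x4 square matrix from the slide, stored as a list of rows
A = [[10, 20, 30, 40],
     [15, 10, 20, 15],
     [30, 20, 40, 20],
     [33, 40, 51, 98]]

# Trace = sum of the main-diagonal elements a_mm
trace = sum(A[m][m] for m in range(len(A)))
print(trace)  # 158
```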

SLIDE 7

Definitions

  • Matrix A is rectangular if M ≠ N; else it is a square matrix
  • An element amn is said to be on the main diagonal of a square matrix A if m = n, i.e., its row and column indices are the same
  • If all elements amn of a square matrix A are zero for m ≠ n, then A is called a diagonal matrix
  • A is a null matrix if amn = 0 for 1 ≤ m ≤ M, 1 ≤ n ≤ N
  • Trace(A) = Σ (m = 1 to M) amm, where A is a square matrix of size M
  • Matrix A = Matrix B iff amn = bmn, 1 ≤ m ≤ M, 1 ≤ n ≤ N

Diagonal matrix:    Null matrix:
10  0  0            0 0 0
 0 22  0            0 0 0
 0  0 44

SLIDE 8

Definitions

  • The transpose of a matrix A, denoted by AT, is produced by interchanging the rows and columns of A. If A has M rows and N columns, AT will have N rows and M columns
  • If A = AT, then A is called a symmetric matrix
  • If we multiply every element amn of matrix A by a scalar k, then k.A is called the scalar multiple of A, where k may be real or complex. If k = -1, the resultant is known as the negative of A

Symmetric matrix:
12 30 41
30 23 50
41 50 06

Matrix and its transpose:
A = 15 41 30        AT = 15 22
    22 39 50             41 39
                         30 50

Scalar multiples:
A = 15 22 31        2A = 30 44 62        -A = -15 -22 -31
    44 10 27             88 20 54             -44 -10 -27
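The transpose and scalar-multiple examples can be reproduced in a few lines of Python, using the 2x3 matrix A from the slide:

```python
A = [[15, 22, 31],
     [44, 10, 27]]   # 2x3 matrix from the slide

# Transpose: interchange rows and columns (result is 3x2)
AT = [[A[m][n] for m in range(2)] for n in range(3)]

# Scalar multiples: k = 2 gives 2A, k = -1 gives the negative of A
twoA = [[2 * x for x in row] for row in A]
negA = [[-x for x in row] for row in A]

print(AT)    # [[15, 44], [22, 10], [31, 27]]
print(twoA)  # [[30, 44, 62], [88, 20, 54]]
print(negA)  # [[-15, -22, -31], [-44, -10, -27]]
```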

slide-9
SLIDE 9

Definitions

  • A matrix A of size M x N is called a column vector if the number of columns N = 1
  • A matrix B of size M x N is called a row vector if the number of rows M = 1

a = [a1 a2 ... aM]T is a column vector, so aT = [a1 a2 ... aM]; b = [b1 b2 ... bN] is a row vector

A column vector is a matrix with only one column; a row vector is a matrix with only one row.
SLIDE 10

Basic Operations

  • Matrix A obtained by the sum of two matrices B and C is defined as an array of elements amn = bmn + cmn
  • Matrix summation is possible only if both matrices B and C have the same number of rows and columns M and N
  • If A is the difference of matrices B and C, then amn = bmn - cmn
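A minimal sketch of the elementwise sum and difference (illustrative 2x2 values, not from the slides):

```python
B = [[1, 2], [3, 4]]
C = [[10, 20], [30, 40]]

# a_mn = b_mn + c_mn and a_mn = b_mn - c_mn, defined only when
# B and C have the same number of rows and columns
A_sum  = [[B[m][n] + C[m][n] for n in range(2)] for m in range(2)]
A_diff = [[B[m][n] - C[m][n] for n in range(2)] for m in range(2)]

print(A_sum)   # [[11, 22], [33, 44]]
print(A_diff)  # [[-9, -18], [-27, -36]]
```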

SLIDE 11

Basic Operations

  • Matrix A obtained by the product of matrices B (MxN) and C (NxP) is of size MxP, whose (i,j)th element aij is defined by
  • aij = Σ (n = 1 to N) bin cnj
  • Matrix multiplication is only possible if the number of columns of the first matrix = the number of rows of the second matrix.

If B.C is feasible to calculate, C.B can be calculated only if the number of columns of C equals the number of rows of B (P = M); B.C and C.B have the same size only when B and C are square matrices

12 15 22     09 10 30 00     462 460 1009 59
16 20 14  x  06 08 11 01  =  ...
09 12 10     12 10 22 02
13 15 17
4x3 matrix   3x4 matrix      4x4 matrix
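The product rule aij = Σ bin cnj can be verified against the 4x3 by 3x4 example above:

```python
B = [[12, 15, 22],
     [16, 20, 14],
     [ 9, 12, 10],
     [13, 15, 17]]        # 4x3
C = [[ 9, 10, 30, 0],
     [ 6,  8, 11, 1],
     [12, 10, 22, 2]]     # 3x4

# a_ij = sum over n of b_in * c_nj; the result is 4x4
A = [[sum(B[i][n] * C[n][j] for n in range(3)) for j in range(4)]
     for i in range(4)]

print(A[0])  # first row of the product: [462, 460, 1009, 59]
```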

SLIDE 12

Basic Operations

  • The inverse of a square matrix A, denoted by X, satisfies the property A.X = X.A = I, where I is the unit matrix

If the inverse does not exist for a matrix A, then A is non-invertible. In such a case A is called a singular matrix.

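A minimal sketch of the inverse property A.X = X.A = I for a 2x2 matrix (illustrative values; the 2x2 adjugate formula is used here rather than a general inversion algorithm):

```python
A = [[4.0, 7.0],
     [2.0, 6.0]]

# 2x2 inverse via the adjugate formula; det != 0 means A is non-singular
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 10.0
X = [[ A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det,  A[0][0] / det]]

# Check A.X: it should equal the unit matrix I (up to rounding)
I = [[sum(A[i][n] * X[n][j] for n in range(2)) for j in range(2)]
     for i in range(2)]
print(I)  # approximately [[1.0, 0.0], [0.0, 1.0]]
```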

SLIDE 13

Basic Operations

  • If A is a rectangular matrix, a conventional inverse of A does not exist. Instead, one computes a pseudo-inverse of A.

For a rectangular matrix A there are two such pseudo-inverse matrices. If A+ denotes the pseudo-inverse of a rectangular matrix A of size MxN, then there will be a left pseudo-inverse resulting in (A+)L.A = I of size NxN, and a right pseudo-inverse resulting in A.(A+)R = I of size MxM

SLIDE 14

Basic Operations

Pseudo-Inverse

Usually such situations are encountered when it is required to solve systems of equations with:
a) Number of equations more than number of unknowns, e.g., A.p = q where A is of size 10x3, p is the unknown vector of size 3x1, and q is of size 10x1. Number of equations = 10 and number of unknowns = 3
b) Number of equations less than number of unknowns, e.g., B.r = s where B is of size 3x5, r is the unknown vector of size 5x1, and s is of size 3x1. Number of equations = 3 and number of unknowns = 5
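Case (a) can be sketched for a small overdetermined system; this uses the left pseudo-inverse in its normal-equations form A+ = (ATA)^-1 AT (an illustrative 3x2 system, not from the slides):

```python
# Overdetermined system A.p = q: 3 equations, 2 unknowns
A = [[1.0, 1.0],
     [1.0, 2.0],
     [1.0, 3.0]]
q = [6.0, 8.0, 10.0]      # consistent with p = [4, 2]

# Normal equations: (A^T A) p = A^T q
AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
       for r in range(2)]
Atq = [sum(A[i][r] * q[i] for i in range(3)) for r in range(2)]

# Invert the 2x2 matrix A^T A and recover p
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
inv = [[ AtA[1][1] / det, -AtA[0][1] / det],
       [-AtA[1][0] / det,  AtA[0][0] / det]]
p = [sum(inv[r][c] * Atq[c] for c in range(2)) for r in range(2)]

print(p)  # approximately [4.0, 2.0]
```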

SLIDE 15

Vectors and Vector Spaces

  • A vector space is defined as a non-empty set of vectors V that satisfies the properties shown below:
  • Vector addition

– x + y = y + x
– x + (y + z) = (x + y) + z
– 0 + x = x + 0 = x (0 is known as the zero vector)
– -x + x = x + (-x) = 0 (-x is called the negative of x)


SLIDE 16

Vectors and Vector Spaces

Multiplication condition

  • p(qy) = (pq)y given scalars p and q, and vector y.
  • p(x + y) = px + py given scalar p and vectors x and y.
  • (p + q)x = px + qx given scalars p and q, and vector x.

Multiplicative identity

  • 1x = x1 = x for all vectors x: multiplying a vector by unity does not alter it.


SLIDE 17

Linear Combination of Vectors

  • Given two vector spaces V1 and V2, if all elements of V2 are also elements of V1, then V2 is said to be a subspace of V1
  • A linear combination of vectors vk, k = 1, 2, …, n is defined as v = w1v1 + w2v2 + … + wnvn, where the weights w1, w2, … are scalars


SLIDE 18

Inner Product

  • The inner product of two column vectors a and b of size n is a scalar, given by
  • a.b = aTb = bTa = a1b1 + a2b2 + … + anbn
  • This can also be written as a.b = Σ (i = 1 to n) ai bi
  • The inner product is also referred to as the vector dot product.
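A one-line check of the dot product formula, with illustrative vectors:

```python
a = [1, 3, -5]
b = [4, -2, -1]

# a.b = a1*b1 + a2*b2 + ... + an*bn
dot = sum(ai * bi for ai, bi in zip(a, b))
print(dot)  # 1*4 + 3*(-2) + (-5)*(-1) = 3
```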

SLIDE 19

Linear Independence

  • If a vector v can be written as a linear combination of vectors v1, v2, …, vn, then v is said to be linearly dependent on the vectors v1, v2, …, vn.
  • Otherwise v is said to be linearly independent of v1, v2, …, vn


SLIDE 20

Basis Vectors

  • A set S of vectors v1, v2, …, vn is said to span a subspace Vo of a vector space V if any vector v in Vo can be expressed as a linear combination of v1, v2, …, vn.
  • If all vectors of a vector space V can be expressed as linear combinations of vectors v1, v2, …, vn, then this set of vectors forms a basis set for V. The number n of vectors is called the dimension of the vector space


SLIDE 21

Example

  • Any vector in the x-y plane can be generated as the linear combination of the basis vectors [1 0]T and [0 1]T
  • Any point [xo yo]T can be written as xo[1 0]T + yo[0 1]T
  • Therefore the dimensionality of the x-y plane is 2.

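The decomposition of a point into the two basis vectors can be sketched as follows (xo and yo chosen arbitrarily):

```python
e1, e2 = [1, 0], [0, 1]   # basis vectors of the x-y plane
xo, yo = 7, -3            # arbitrary point coordinates

# [xo, yo] = xo*[1, 0] + yo*[0, 1]
point = [xo * e1[i] + yo * e2[i] for i in range(2)]
print(point)  # [7, -3]
```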

SLIDE 22

Example

  • Likewise, a vector in 3-dimensional space can be expressed as a linear combination of the basis vectors [1 0 0]T, [0 1 0]T and [0 0 1]T


SLIDE 23

Vector Norms

  • The vector norm defined on a vector space V is a function that assigns to each vector v in V a unique non-negative real number called the norm of v, denoted by ||v||. The norm satisfies
  • ||v|| > 0 if v ≠ 0; ||0|| = 0
  • ||kv|| = |k|.||v|| where k is a scalar, and v is a vector
  • ||v + u|| ≤ ||v|| + ||u|| (the triangle inequality)


SLIDE 24

Vector Norms

  • The most commonly employed norm is the 2-norm, defined as
  • ||v|| = sqrt(v1^2 + v2^2 + … + vn^2)
  • This can also be viewed as the magnitude of vector v, and as the Euclidean distance of v from the origin of the n-dimensional Euclidean space, denoted by 0. It can also be written as
  • ||v|| = sqrt(vTv)

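A minimal sketch of the 2-norm, using the classic 3-4-5 example:

```python
import math

v = [3.0, 4.0]

# 2-norm: sqrt(v1^2 + v2^2 + ... + vn^2) = sqrt(v.v)
norm = math.sqrt(sum(x * x for x in v))
print(norm)  # 5.0
```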

SLIDE 25

Vector Norms

  • A famous inequality known as the Cauchy-Schwarz inequality states that
  • |uTv| ≤ ||u||.||v||
  • One of the most widely used properties of vectors is the angle between vectors
  • The angle θ between two vectors u and v can be expressed in terms of cos θ as
  • cos θ = u.v / (||u|| ||v||), or u.v = ||u|| ||v|| cos θ
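The angle formula can be checked with two illustrative vectors whose angle is known to be 45 degrees:

```python
import math

u = [1.0, 0.0]
v = [1.0, 1.0]

dot = sum(a * b for a, b in zip(u, v))
norm_u = math.sqrt(sum(a * a for a in u))
norm_v = math.sqrt(sum(b * b for b in v))

# cos(theta) = u.v / (||u|| ||v||)
theta = math.degrees(math.acos(dot / (norm_u * norm_v)))
print(round(theta, 1))  # 45.0
```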

SLIDE 26

Vector Norms

  • If two vectors in the n-dimensional real space ℜn are orthogonal (mutually perpendicular), θ = 90° and cos θ = 0.
  • If cos θ = u.v / (||u|| ||v||) = 0, then u.v = 0. In other words, if two vectors are orthogonal, their dot product must be zero
  • If u and v are of unit magnitude, and they are orthogonal, then they are said to be orthonormal.
  • Any vector p can be reduced to be of unit magnitude by defining pn = p/||p||. Obviously, ||pn|| = 1


SLIDE 27

Orthogonal Vectors

  • A set of vectors is said to be orthogonal if each pair of vectors in that set is orthogonal
  • A set of vectors is said to be orthonormal if each pair of vectors in that set is orthonormal


SLIDE 28

Application of Orthogonality

  • If a vector v is a linear combination of a set of orthogonal vectors v1, v2, …, vn, i.e.,
  • v = w1v1 + w2v2 + … + wnvn

the coefficients w1, w2, … can be computed as:

  • wi = v.vi / (vi.vi) = v.vi / ||vi||^2
  • If the vi are a set of orthonormal vectors, then wi = v.vi
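A sketch of recovering the weights wi = v.vi / ||vi||^2 with an orthogonal (here also orthonormal) pair of basis vectors, using illustrative values:

```python
v1, v2 = [1.0, 0.0], [0.0, 1.0]   # orthogonal basis vectors
v = [3.0, -2.0]                   # vector to decompose

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# w_i = v.v_i / (v_i.v_i); for orthonormal v_i this reduces to v.v_i
w1 = dot(v, v1) / dot(v1, v1)
w2 = dot(v, v2) / dot(v2, v2)

# Reconstruct v = w1*v1 + w2*v2
recon = [w1 * v1[i] + w2 * v2[i] for i in range(2)]
print(w1, w2, recon)  # 3.0 -2.0 [3.0, -2.0]
```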

SLIDE 29

Eigenvalues and Eigenvectors

  • The eigenvalues of a real square matrix C are the real numbers λ for which there is a nonzero vector v such that C v = λ v
  • The vector v (where v ≠ 0) is known as the eigenvector of C corresponding to the eigenvalue λ, and vice versa.
  • Computing the eigenvalues and eigenvectors of a matrix is known as eigenanalysis of the matrix.

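For a 2x2 symmetric matrix the eigenvalues follow directly from the characteristic equation det(C - λI) = λ^2 - trace(C)λ + det(C) = 0; a sketch with illustrative values:

```python
import math

C = [[2.0, 1.0],
     [1.0, 2.0]]   # 2x2 symmetric matrix

# Characteristic equation: lambda^2 - tr*lambda + det = 0
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)  # 3.0 1.0

# Verify C v = lambda v for the eigenvector v = [1, 1] of lam1
v = [1.0, 1.0]
Cv = [C[0][0] * v[0] + C[0][1] * v[1],
      C[1][0] * v[0] + C[1][1] * v[1]]
print(Cv)  # [3.0, 3.0] = lam1 * [1, 1]
```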