  1. MATRICES AND LINEAR ALGEBRA

  2. Linear Algebra • Matrix manipulation is the original essence of Matlab; hence the name MATrix LABoratory. • In this section we will cover the basics of linear algebra, the ways of using Matlab in this context, and also look into a few applications.

  3. Vectors, Matrices and ND-arrays

  • We have already seen how vectors and matrices can be defined; either by tabulating the values or by using functions such as ones, zeros, rand, eye.
  • Matlab also supports tables of higher dimensions; ND-arrays.
  • In the example, we define a 3-dimensional table using cat, which concatenates the matrices along the specified dimension (here 3).

  >> x = [1,2,3] % vector
  x =
       1     2     3
  >> X = [1,2,3; 4,5,6] % matrix
  X =
       1     2     3
       4     5     6
  >> X3 = cat(3, X, X) % 3D array
  X3(:,:,1) =
       1     2     3
       4     5     6
  X3(:,:,2) =
       1     2     3
       4     5     6
  >> size(X3)
  ans =
       2     3     2
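The same constructions can be sketched in Python with NumPy for readers following along without Matlab (a translation, not part of the original slides; np.stack plays the role of cat along a new third dimension):

```python
import numpy as np

# Row vector and 2x3 matrix, as in the Matlab example.
x = np.array([1, 2, 3])
X = np.array([[1, 2, 3],
              [4, 5, 6]])

# Matlab's cat(3, X, X) yields an array of size [2, 3, 2];
# with NumPy we stack along a new last axis to get the same shape.
X3 = np.stack([X, X], axis=2)
print(X3.shape)  # (2, 3, 2)
```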

  4. Matrix Product

  • Matrix multiplication is the basic building block of all linear algebra.
  • The matrix product is able to model different kinds of geometric transformations: affine, projective, rotation, shearing, etc.
  • Also a multitude of algebraic transformations are most conveniently represented via matrices, e.g., Fourier and Haar.

  >> x = [1, 2, 3, 4]'; % 4-D vector
  >> T = [1, 0, -1, 0;...
          0, 1, 0, 2]; % Projection matrix
  >> y = T * x
  y =
      -2
      10

  [Figure: the projected point drawn in the (x1, x2) plane.]
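The projection in the example is easy to check numerically; a NumPy translation of the snippet above (a sketch, not from the slides):

```python
import numpy as np

x = np.array([1, 2, 3, 4])        # 4-D vector
T = np.array([[1, 0, -1, 0],
              [0, 1,  0, 2]])     # projection matrix, maps R^4 -> R^2
y = T @ x                         # matrix product
print(y)  # [-2 10]
```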

  5. Example: Fourier Transform

  • The Fourier transform is used for detection of the frequencies contained in a signal (e.g., audio).
  • The transform is defined as a matrix product Fx, where x is the signal and the matrix F is

    F = \begin{pmatrix}
          w_N^0  & w_N^0        & w_N^0         & \cdots & w_N^0 \\
          w_N^0  & w_N^{-1}     & w_N^{-2}      & \cdots & w_N^{-(N-1)} \\
          w_N^0  & w_N^{-2}     & w_N^{-4}      & \cdots & w_N^{-2(N-1)} \\
          \vdots & \vdots       & \vdots        & \ddots & \vdots \\
          w_N^0  & w_N^{-(N-1)} & w_N^{-2(N-1)} & \cdots & w_N^{-(N-1)^2}
        \end{pmatrix}

    with w_N = e^{2\pi i/N}.

  6. Example: Fourier Transform

  • For example, a length N = 4 Fourier transform matrix becomes:

    \begin{pmatrix}
      e^0 & e^0           & e^0            & e^0 \\
      e^0 & e^{-2\pi i/4} & e^{-4\pi i/4}  & e^{-6\pi i/4} \\
      e^0 & e^{-4\pi i/4} & e^{-8\pi i/4}  & e^{-12\pi i/4} \\
      e^0 & e^{-6\pi i/4} & e^{-12\pi i/4} & e^{-18\pi i/4}
    \end{pmatrix}
    =
    \begin{pmatrix}
      1 & 1  & 1  & 1 \\
      1 & -i & -1 & i \\
      1 & -1 & 1  & -1 \\
      1 & i  & -1 & -i
    \end{pmatrix}

  • The transform of x = (1, 2, 3, 4) is computed by

    \begin{pmatrix}
      1 & 1  & 1  & 1 \\
      1 & -i & -1 & i \\
      1 & -1 & 1  & -1 \\
      1 & i  & -1 & -i
    \end{pmatrix}
    \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}
    =
    \begin{pmatrix} 10 \\ -2+2i \\ -2 \\ -2-2i \end{pmatrix}
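The 4-point example can be verified numerically; a NumPy sketch (not part of the slides) builds the matrix from its defining exponents and checks the result against the built-in FFT:

```python
import numpy as np

# 4x4 DFT matrix with entries exp(-2*pi*i*j*k/N).
N = 4
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N)

x = np.array([1, 2, 3, 4])
X = F @ x                              # should be (10, -2+2i, -2, -2-2i)
print(np.allclose(X, np.fft.fft(x)))   # True
```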

  7. Fourier Transform in Matlab

  • The attached code prepares the Fourier transform matrix and computes the multiplication.
  • The changing part in the matrix is the exponent, so we first create a matrix of exponents, and then pass it through the exp function.
  • The frequency of oscillation is easily measured. The FT in fact produces a shadow component on the right, which can be ignored.
  • Note: The command fft is the proper way to do the FT.

  % Prepare FT matrix:
  >> x = cos(2*pi*0.1*(1:N)');
  >> powers = (0:N-1)' * (0:N-1);
  >> F = exp(-2*pi*i * powers / N);
  % Transform and plot:
  >> X = F*x;
  >> plot(abs(X))

  [Figure: an oscillating signal x (top) and the magnitude of its Fourier transform (bottom), which peaks at the signal frequency.]
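The same construction in NumPy, checked against the built-in FFT (a translation sketch; the signal length N = 100 matches the plots in the slide):

```python
import numpy as np

N = 100
n = np.arange(1, N + 1)
x = np.cos(2 * np.pi * 0.1 * n)        # oscillating test signal

# Matrix of exponents j*k, then the transform matrix itself.
powers = np.outer(np.arange(N), np.arange(N))
F = np.exp(-2j * np.pi * powers / N)

X = F @ x
# Sanity check: the matrix product matches the built-in FFT.
print(np.allclose(X, np.fft.fft(x)))   # True
# The spectrum peaks at the signal frequency 0.1 = 10/100,
# i.e., at bin 10 (ignoring the mirrored half).
print(np.argmax(np.abs(X[:N // 2])))   # 10
```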

  8. Matrix Inversion

  • Matrix inversion attempts to find the matrix that multiplies the original matrix to identity.
  • "Given matrix X, find Y such that XY = I."

  >> X = rand(3)
  X =
      0.2348    0.8669    0.3692
      0.7350    0.0862    0.6850
      0.9706    0.3664    0.5979
  >> iX = inv(X)
  iX =
     -0.9186   -1.7644    2.5886
      1.0383   -1.0037    0.5089
      0.8549    3.4791   -2.8413
  >> X*iX
  ans =
      1.0000         0   -0.0000
           0    1.0000         0
     -0.0000         0    1.0000
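A NumPy equivalent of the transcript above (a sketch; np.linalg.inv plays the role of inv, and a fixed random seed replaces rand):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((3, 3))            # random 3x3 matrix, as with rand(3)
iX = np.linalg.inv(X)

# X times its inverse should be the identity (up to rounding).
print(np.allclose(X @ iX, np.eye(3)))  # True
```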

  9. Eigenvalues and -vectors

  • Eigenvalue decomposition is an advanced matrix algebra topic that appears in many applications.
  • Although the detailed description is not a topic of this course, the figure illustrates how eigenvalues are used to characterize high-dimensional data.
  • The eigenvectors describe the directions that have the highest variation.

  [Figure: a 2-D point cloud with its principal axes drawn through the mean.]

  10. Eigenvalues and -vectors

  • The direction of maximum variation can be calculated as shown here.
  • Note that the mean of the points has to be subtracted first.
  • After that, the axes are solved by finding the eigenvectors of the covariance matrix of the data.
  • This algorithm is called the principal component analysis (PCA).

  % Suppose we have data points in X with
  % size = [200, 2].
  % Compute the mean of the point cloud:
  >> m = mean(X, 1)
  m =
     10.3992    5.7508
  % Subtract the mean. We need to extend
  % the vector m into a 200x2 matrix first
  % (i.e., copy it 200 times vertically).
  >> M = repmat(m, [200,1]);
  % Compute the direction of max. variance
  % using eig
  >> [V,D] = eig(cov(X - M))
  V =
     -0.6969    0.7172
      0.7172    0.6969
  % The max variation is now along
  % direction (x,y) = (0.7172, 0.6969).
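The same PCA steps can be sketched in NumPy on synthetic data (an illustration, not the slide's data set; note that np.cov expects variables in columns only when rowvar=False is given, and that the point cloud here is generated to stretch along a known direction):

```python
import numpy as np

# Synthetic 200x2 point cloud, elongated along a known direction.
rng = np.random.default_rng(1)
direction = np.array([0.7172, 0.6969])
t = rng.normal(size=200)
X = 10.0 + np.outer(t, direction) + 0.05 * rng.normal(size=(200, 2))

m = X.mean(axis=0)               # mean of the point cloud
C = np.cov(X - m, rowvar=False)  # covariance matrix (np.cov also centers,
                                 # but we subtract the mean as on the slide)
w, V = np.linalg.eigh(C)         # eigenvalues ascending, eigenvectors in columns
principal = V[:, np.argmax(w)]   # direction of maximum variance
print(np.abs(principal))         # close to the generating direction
```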

  11. Example: Least Squares • As an example of a typical linear algebra procedure in Matlab, let us consider least squares fitting of data. • The Least Squares method is a curve fitting method that originates from 1795, when Gauss invented the method (at the time he was only 18 years old). • The method became widely known as Gauss was the only one able to describe the orbit of Ceres , a minor planet in the asteroid belt between Mars and Jupiter that was discovered in 1801.

  12. Example: Least Squares

  • Least squares minimizes the distance between the data and the model.
  • For example: Find the line that passes closest to these data points.
  • More specifically: Which line has the smallest average distance to these points?

  [Figure: LS minimizes the total length of the vertical distances between the data points y(n) and the fitted line.]

  13. Example: LS Theory (very briefly)

  • Suppose we would like to fit the function f(x) = ax + b into the data points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N).
  • The task is to find the best a and b.
  • This is done by minimizing the sum of squared distances:

    minimize \sum_{n=1}^{N} (y_n - f(x_n))^2

  • In other words,

    minimize \sum_{n=1}^{N} (y_n - a x_n - b)^2
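The objective above is just a sum of squared residuals; a tiny NumPy sketch (the function name sse and the sample points are illustrative only) shows it vanishes exactly when the line passes through all the points:

```python
import numpy as np

def sse(a, b, x, y):
    """Sum of squared vertical distances between y and the line a*x + b."""
    return np.sum((y - a * x - b) ** 2)

# Points lying exactly on y = 2x + 1: the objective is zero there
# and strictly positive for any other line.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1
print(sse(2, 1, x, y))      # 0.0
print(sse(2, 0, x, y) > 0)  # True
```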

  14. Example: LS Theory (very briefly)

  • In Matlab, we represent the problem with matrices as follows.
  • Minimize the distance between these two vectors:

    \begin{pmatrix} a x_1 + b \\ a x_2 + b \\ a x_3 + b \\ \vdots \\ a x_N + b \end{pmatrix}
    and
    \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_N \end{pmatrix}

  15. Example: LS Theory (very briefly)

  • The coefficients a and b repeat at each row, so we can write the task more compactly as follows.
  • Minimize the distance between these two vectors:

    \underbrace{\begin{pmatrix} x_1 & 1 \\ x_2 & 1 \\ x_3 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{pmatrix}}_{X}
    \underbrace{\begin{pmatrix} a \\ b \end{pmatrix}}_{c}
    and
    \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_N \end{pmatrix}}_{y}

  16. LS in Matlab

  • It can be shown that the best coefficients are given by

    c = (X^T X)^{-1} X^T y

  • This appears so often that there is a shortcut in Matlab: c = X \ y.
  • In fact, there are also other shortcuts for this:
    • pinv(X) * y
    • y' / X'
  • One can also add higher order terms to fit other polynomials:
    X = [x.^3, x.^2, x, x.^0];

  % Prepare matrix X (add the constant term)
  >> X = [x, ones(size(x))];
  % Compute coefficients a and b
  >> c = inv(X' * X) * X' * y
  c =
      0.5372
      0.1311
  % Compute coefficients a and b
  % using the shortcut
  >> c = X \ y
  c =
      0.5372
      0.1311

  [Figure: data points with the best linear and quadratic fits.]
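In NumPy the normal equations and the shortcut look like this (a sketch on synthetic data; np.linalg.lstsq plays the role of the backslash operator, and the true coefficients 0.5 and 0.1 are chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(50)
y = 0.5 * x + 0.1 + 0.01 * rng.normal(size=50)   # noisy line

# Prepare matrix X (add the constant term).
X = np.column_stack([x, np.ones_like(x)])

# Normal equations c = (X^T X)^{-1} X^T y ...
c_normal = np.linalg.inv(X.T @ X) @ X.T @ y
# ... and the least-squares shortcut (the analogue of X \ y).
c_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(c_normal, c_lstsq))  # True
```

In practice the lstsq / backslash route is preferred over forming the inverse explicitly, since it is numerically more stable.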

  17. Application: Remove Uneven Illumination

  • As an application example of the LS, consider the attached microscope images.
  • The illumination is uneven, which prevents automatic extraction of cells.
  • What we do is model the illumination component using an LS model.
  • The model attempts to predict the brightness from the X-Y coordinates.
  • The model can only predict large-scale variations (slowly changing illumination), and not the small details (cells).
  • We choose a 2nd order model because the illumination looks like a paraboloid.

  [Figures: the original image, the estimated illumination effect, and the image with the illumination effect removed.]

  18. Application: Remove Uneven Illumination

  • The target is to create a model f to predict brightness z from location (x, y); i.e., z ≈ f(x, y).
  • The complete model is now

    \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix}
    =
    \begin{pmatrix}
      x_1^2  & y_1^2  & x_1 y_1 & x_1    & y_1    & 1 \\
      x_2^2  & y_2^2  & x_2 y_2 & x_2    & y_2    & 1 \\
      \vdots & \vdots & \vdots  & \vdots & \vdots & \vdots \\
      x_N^2  & y_N^2  & x_N y_N & x_N    & y_N    & 1
    \end{pmatrix}
    \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_6 \end{pmatrix}

  • Above, z_1, z_2, ..., z_N denote the brightness at coordinates (x_1, y_1), (x_2, y_2), ..., (x_N, y_N).
  • In total there are 1.3 million points in each of the vectors x, y and z.
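The 2nd order model is fitted with exactly the LS machinery from the previous slides; a NumPy sketch on a synthetic paraboloid (the grid size, coefficient values, and variable names here are illustrative, not the microscope data) recovers the six coefficients and removes the trend:

```python
import numpy as np

# Synthetic paraboloid-like "illumination" on a 50x50 grid.
xx, yy = np.meshgrid(np.arange(50.0), np.arange(50.0))
x, y = xx.ravel(), yy.ravel()
true_c = np.array([0.01, 0.02, -0.005, 0.3, -0.1, 5.0])

# Design matrix with columns x^2, y^2, xy, x, y, 1 (as on the slide).
X = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
z = X @ true_c

# Fit the 6 coefficients by least squares and subtract the trend.
c, *_ = np.linalg.lstsq(X, z, rcond=None)
residual = z - X @ c
print(np.allclose(c, true_c))           # True
print(np.max(np.abs(residual)) < 1e-6)  # True
```

On a real image, residual would be the cell details left over once the slowly varying illumination surface X @ c is removed.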
