  1. Statistical Geometry Processing, Winter Semester 2011/2012: Linear Algebra, Function Spaces & Inverse Problems

  2. Vector and Function Spaces

  3. Vectors. Vectors are arrows in space; classically, in 2- or 3-dimensional Euclidean space ℝ³.

  4. Vector Operations. “Adding” vectors means concatenation: placing w at the tip of v yields the sum v + w.

  5. Vector Operations. Scalar multiplication scales vectors (e.g., 2.0 · v, 1.5 · v, −1.0 · v), including mirroring for negative scalars.

  6. You can combine them: linear combinations (e.g., 2w + v) are basically all you can do:
     ∑ᵢ₌₁ⁿ λᵢ vᵢ
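
A quick illustration of a linear combination in coordinates with numpy; the vectors and coefficients below are made up for the example:

```python
# A minimal sketch: the linear combination 2·w + 1·v from the slide,
# with illustrative coordinate vectors.
import numpy as np

v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

u = 2.0 * w + 1.0 * v   # a linear combination sum_i lambda_i * v_i
print(u)                # [1. 2.]
```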

  7. Vector Spaces. A vector space consists of:
     • a set of vectors V
     • a base field F (we use only F = ℝ)
     • two operations:
       – adding vectors: u = v + w (u, v, w ∈ V)
       – scaling vectors: w = λv (v ∈ V, λ ∈ F)
     • and the vector space axioms must hold.

  8. Additional Tools. More concepts:
     • Subspaces, linear spans, bases
     • Scalar product
       – angle, length, orthogonality
       – Gram-Schmidt orthogonalization
     • Cross product (ℝ³)
     • Linear maps
       – matrices
     • Eigenvalues & eigenvectors
     • Quadratic forms
     (Check your old math books.)

  9. Example Spaces. Function spaces:
     • the space of all functions f: ℝ → ℝ
     • the space of all smooth Cᵏ functions f: ℝ → ℝ
     • the space of all functions f: [0..1]⁵ → ℝ⁸
     • etc.
     (Figure: two functions on [0, 1] added pointwise.)

  10. Function Spaces. Intuition:
      • start with a finite-dimensional vector of function samples
      • increase the sampling density towards infinity
      • over the real numbers: an uncountable number of dimensions
      (Figure: [f₁, f₂, ..., f₉]ᵀ with d = 9; [f₁, f₂, ..., f₁₈]ᵀ with d = 18; the function f(x) on [0, 1] with d = ∞.)

  11. Dot Product on Function Spaces. Scalar products:
      • For square-integrable functions f, g: ℝⁿ → ℝ, the standard scalar product is defined as
        ⟨f, g⟩ := ∫_ℝⁿ f(x) g(x) dx
      • It measures an abstract norm and “angle” between functions (not in a geometric sense).
      • Orthogonal functions:
        – do not influence each other in linear combinations
        – adding one to the other does not change the value in the other one’s direction
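
A brief numerical sketch of this scalar product, discretized on [0, 1] with a midpoint rule; the interval, grid size, and test functions are illustrative assumptions:

```python
# Approximate the L2 scalar product <f, g> = ∫ f(x) g(x) dx by a finite sum.
import numpy as np

def scalar_product(f, g, a=0.0, b=1.0, n=1000):
    """Approximate <f, g> on [a, b] with the midpoint rule."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)  # midpoints
    return (b - a) / n * np.sum(f(x) * g(x))

# sin(2πx) and cos(2πx) are orthogonal on [0, 1]: the result is ~0.
print(scalar_product(lambda x: np.sin(2 * np.pi * x),
                     lambda x: np.cos(2 * np.pi * x)))
```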

  12. Approximation of Function Spaces. Finite-dimensional subspaces:
      • Function spaces of infinite dimension are hard to represent on a computer.
      • For numerical purposes, finite-dimensional subspaces are used to approximate the larger space.
      • There are two basic approaches.

  13. Approximation of Function Spaces. Task:
      • Given: an infinite-dimensional function space V.
      • Task: find f ∈ V with a certain property.
      Recipe: “Finite Differences” (sketched below)
      • Sample the function f on a discrete grid.
      • Approximate the property discretely:
        – derivatives → finite differences
        – integrals → finite sums
      • Optimization: find the best discrete function.
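
A minimal sketch of the finite-difference recipe; the sample function and grid size are illustrative assumptions:

```python
# Sample f on a grid, replace the derivative by divided differences
# and the integral by a finite sum.
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(2.0 * np.pi * x)          # sampled function values

df = (f[1:] - f[:-1]) / h            # forward differences ~ f'(x)
integral = h * np.sum(f)             # finite sum ~ ∫ f(x) dx

print(df[0], integral)               # ~ 2π and ~ 0
```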

  14. Approximation of Function Spaces. Recipe: “Finite Elements”
      • Choose basis functions b₁, ..., b_d ∈ V.
      • Find f̃ = ∑ᵢ₌₁ᵈ λᵢ bᵢ that matches the property best.
      • f̃ is described by its coefficients (λ₁, ..., λ_d).
      (Figure: actual solution in V versus the approximate solution in the span of the function space basis.)
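
A minimal sketch of this representation: the approximate solution is fully described by its coefficient vector. A Gaussian basis (one of the choices on the next slide) is assumed here; the centers and width are illustrative:

```python
# Represent f~ = sum_i lambda_i * b_i by the coefficients (lambda_1, ..., lambda_d).
import numpy as np

d = 10
centers = np.linspace(0.0, 1.0, d)   # basis function centers
sigma = 0.1

def basis(x):
    """Evaluate all d Gaussian basis functions b_i at the points x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma**2))

lam = np.ones(d)                     # coefficients (lambda_1, ..., lambda_d)
x = np.linspace(0.0, 1.0, 200)
f_tilde = basis(x) @ lam             # f~(x) = sum_i lambda_i * b_i(x)
```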

  15. Examples. Monomial basis, Fourier basis, B-spline basis, Gaussian basis. (Figures: plots of the basis functions on [0, 2].)

  16. “Best Match”. Which linear combination matches best?
      • Solution 1: least-squares minimization
        ∫_ℝ ( f(x) − ∑ᵢ₌₁ⁿ λᵢ bᵢ(x) )² dx → min
      • Solution 2: Galerkin method
        ∀k = 1..n: ⟨ f − ∑ᵢ₌₁ⁿ λᵢ bᵢ, bₖ ⟩ = 0
      • Both are equivalent.

  17. Optimality Criterion. Given:
      • a subspace W ⊆ V
      • an element v ∈ V
      Then: x ∈ W minimizes the quadratic error ‖x − v‖² (i.e., the Euclidean distance) if and only if the residual x − v is orthogonal to W.
      Least squares = minimal Euclidean distance.

  18. Formal Derivation. Least squares:
      E(λ) = ∫_ℝ ( f(x) − ∑ᵢ₌₁ⁿ λᵢ bᵢ(x) )² dx
           = ∫_ℝ ( f²(x) − 2 ∑ᵢ₌₁ⁿ λᵢ f(x) bᵢ(x) + ∑ᵢ₌₁ⁿ ∑ⱼ₌₁ⁿ λᵢ λⱼ bᵢ(x) bⱼ(x) ) dx
      Setting the derivatives to zero:
      ∇E(λ) = −2 (⟨f, b₁⟩, ..., ⟨f, bₙ⟩)ᵀ + 2 G λ = 0,
      where G = (⟨bᵢ, bⱼ⟩)ᵢⱼ is the Gram matrix of the basis and λ = (λ₁, ..., λₙ)ᵀ.
      Result:
      ∀k = 1..n: ⟨ f − ∑ᵢ₌₁ⁿ λᵢ bᵢ, bₖ ⟩ = 0
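
A compact numerical sketch of the resulting normal equations, discretizing the scalar products on a grid; the Gaussian basis, grid, and target function are illustrative assumptions:

```python
# Build the Gram matrix G_ij = <b_i, b_j> and right-hand side r_k = <f, b_k>,
# then solve G·lambda = r; this is the system derived above.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
dx = x[1] - x[0]

centers = np.linspace(0.0, 1.0, 10)
B = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * 0.1**2))  # B[:, i] = b_i(x)
f = np.sin(2.0 * np.pi * x)          # function to approximate

G = B.T @ B * dx                     # Gram matrix of the basis
r = B.T @ f * dx                     # projections <f, b_k>
lam = np.linalg.solve(G, r)

f_tilde = B @ lam                    # best match: residual is orthogonal to the basis
```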

  19. Linear Maps

  20. Linear Maps. A function f: V → W between vector spaces V, W is linear if and only if:
      • ∀v₁, v₂ ∈ V: f(v₁ + v₂) = f(v₁) + f(v₂)
      • ∀v ∈ V, ∀λ ∈ F: f(λv) = λ f(v)
      Constructing linear maps: a linear map is uniquely determined if we specify a mapping value for each basis vector of V.

  21. Matrix Representation. Finite-dimensional spaces:
      • Linear maps can be represented as matrices.
      • For each basis vector vᵢ of V, we specify the mapped vector wᵢ.
      • Then, the map f is given by:
        f(v) = f( (v₁, ..., vₙ)ᵀ ) = v₁ w₁ + ... + vₙ wₙ

  22. Matrix Representation. This can be written as a matrix-vector product:
      f(v) = [ w₁ | ⋯ | wₙ ] (v₁, ..., vₙ)ᵀ
      The columns are the images of the basis vectors (with respect to which the coordinates of v are given).
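
A minimal sketch of this construction; the images w₁, w₂ below are illustrative values:

```python
# The matrix of a linear map has the images of the basis vectors as columns.
import numpy as np

w1 = np.array([2.0, 0.0])            # image of the first basis vector
w2 = np.array([0.0, 0.5])            # image of the second basis vector
M = np.column_stack([w1, w2])        # columns = images of the basis vectors

v = np.array([3.0, 4.0])             # coordinates of v w.r.t. the basis
print(M @ v)                         # f(v) = 3·w1 + 4·w2 = [6. 2.]
```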

  23. Linear Systems of Equations. Problem: invert an affine map.
      • Given: Mx = b
      • We know M and b; we are looking for x.
      Solution:
      • The set of solutions is always an affine subspace of ℝⁿ, or the empty set (a point, line, plane, hyperplane, ...).
      • There are numerous algorithms for solving linear systems.

  24. Solvers for Linear Systems. Algorithms for solving linear systems of equations:
      • Gaussian elimination: O(n³) operations for n × n matrices.
      • We can do better, in particular for special cases:
        – band matrices: constant bandwidth
        – sparse matrices: constant number of non-zero entries per row
          · store only the non-zero entries
          · instead of (3.5, 0, 0, 0, 7, 0, 0), store [(1: 3.5), (5: 7)]
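
A minimal sketch of the sparse-row storage scheme from the slide, in plain Python:

```python
# Keep only (index, value) pairs for the non-zero entries of a row.
row = (3.5, 0, 0, 0, 7, 0, 0)
sparse_row = [(i, v) for i, v in enumerate(row, start=1) if v != 0]
print(sparse_row)                    # [(1, 3.5), (5, 7)], as on the slide

def sparse_dot(sparse_row, x):
    """Dot product of a sparse row with a dense vector x (1-based indices)."""
    return sum(v * x[i - 1] for i, v in sparse_row)
```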

  25. Solvers for Linear Systems. Algorithms for solving linear systems of n equations:
      • Band matrices with O(1) bandwidth: a modified O(n) elimination algorithm.
      • Iterative Gauss-Seidel solver (sketched below):
        – converges for diagonally dominant matrices
        – typically O(n) iterations, each costing O(n) for a sparse matrix
      • Conjugate gradient solver:
        – applies only to symmetric, positive definite matrices
        – guaranteed: O(n) iterations
        – typically a good solution already after O(n) iterations
      More details on iterative solvers: J. R. Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, 1994.
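
A minimal sketch of the Gauss-Seidel iteration; the example matrix is diagonally dominant, so the iteration converges:

```python
# Sweep over the rows, updating each unknown in place with the latest values.
import numpy as np

def gauss_seidel(M, b, iterations=100):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / M[i, i]
    return x

M = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(M, b))            # matches np.linalg.solve(M, b)
```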

  26. Eigenvectors & Eigenvalues. Definition:
      • For a linear map M and a non-zero vector x with Mx = λx,
      • λ is an eigenvalue of M,
      • x is the corresponding eigenvector.

  27. Example. Intuition:
      • In the direction of an eigenvector, the linear map acts like a scaling by λ.
      • Example: two eigenvalues (0.5 and 2), two eigenvectors.
      • The standard basis contains no eigenvectors in this example.
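
A minimal sketch matching this example: a matrix with eigenvalues 0.5 and 2 whose eigenvectors are not the standard basis vectors; the eigenvector directions chosen here are illustrative:

```python
import numpy as np

T = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # columns: the two eigenvector directions
D = np.diag([0.5, 2.0])
M = T @ D @ np.linalg.inv(T)

eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)                   # 0.5 and 2.0 (possibly in another order)
print(eigenvectors)                  # columns are the eigenvectors
```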

  28. Eigenvectors & Eigenvalues. Diagonalization: if an n × n matrix M has n linearly independent eigenvectors, we can diagonalize M by transforming to this coordinate system: M = T D T⁻¹.

  29. Spectral Theorem. If M is a symmetric n × n matrix of real numbers (i.e., M = Mᵀ), there exists an orthogonal set of n eigenvectors. This means that every real symmetric matrix can be diagonalized as M = T D Tᵀ with an orthogonal matrix T.
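
A minimal numerical check of the spectral theorem, using numpy's eigh for symmetric matrices:

```python
# For a real symmetric matrix, eigh returns an orthogonal T and real
# eigenvalues with M = T D Tᵀ.
import numpy as np

A = np.random.rand(4, 4)
M = A + A.T                              # symmetrize a random matrix

eigenvalues, T = np.linalg.eigh(M)
D = np.diag(eigenvalues)

print(np.allclose(M, T @ D @ T.T))       # True: M = T D Tᵀ
print(np.allclose(T.T @ T, np.eye(4)))   # True: T is orthogonal
```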

  30. Computation. A simple algorithm:
      • “Power iteration” for symmetric matrices.
      • Computes the largest eigenvalue, even for large matrices.
      • Algorithm (see the sketch below):
        – start with a random vector (maybe multiple tries)
        – repeatedly multiply with the matrix
        – normalize the vector after each step
        – repeat until the ratio before/after normalization converges (this ratio is the eigenvalue)
      • Intuition: the largest eigenvalue is the “dominant” component/direction.
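
A minimal sketch of this power iteration; the example matrix and step count are illustrative:

```python
# Repeatedly multiply by M and normalize; the normalization factor
# converges to the eigenvalue of largest magnitude.
import numpy as np

def power_iteration(M, steps=100):
    x = np.random.rand(M.shape[0])       # random start vector
    eigenvalue = 0.0
    for _ in range(steps):
        y = M @ x                        # repeatedly multiply with the matrix
        eigenvalue = np.linalg.norm(y)   # ratio before/after normalization
        x = y / eigenvalue               # normalize after each step
    return eigenvalue, x

M = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric example matrix
print(power_iteration(M))                # dominant eigenvalue ~ 3.618
```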

  31. Powers of Matrices. What happens:
      • A symmetric matrix can be written as
        M = T D Tᵀ,  D = diag(λ₁, ..., λₙ)
      • Taking it to the k-th power yields
        Mᵏ = (T D Tᵀ)(T D Tᵀ) ⋯ (T D Tᵀ) = T Dᵏ Tᵀ,  Dᵏ = diag(λ₁ᵏ, ..., λₙᵏ)
      • Bottom line: eigenvalue analysis is key to understanding powers of matrices.
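
A minimal sketch of this identity, verified numerically:

```python
# Compute M^k through the eigendecomposition M^k = T D^k Tᵀ
# instead of k-1 matrix multiplications.
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric example matrix
eigenvalues, T = np.linalg.eigh(M)

k = 5
M_k = T @ np.diag(eigenvalues**k) @ T.T  # T D^k Tᵀ

print(np.allclose(M_k, np.linalg.matrix_power(M, k)))  # True
```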

  32. Improvements. Improvements to the power method:
      • Find the smallest eigenvalue? Use the inverse matrix.
      • Find all eigenvalues (for a symmetric matrix)? Run repeatedly, orthogonalizing the current estimate against the already known eigenvectors in each iteration (Gram-Schmidt), as sketched below.
      • How long does it take? The convergence rate depends on the ratio to the next-smaller eigenvalue; the gap between the components grows exponentially with the number of iterations.
      There are more sophisticated algorithms based on this idea.
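
A minimal sketch of the deflation idea, assuming the dominant eigenvector is already known:

```python
# Find the next eigenvector of a symmetric matrix by orthogonalizing the
# estimate against the known eigenvectors in every iteration (Gram-Schmidt).
import numpy as np

def deflated_power_iteration(M, known, steps=100):
    x = np.random.rand(M.shape[0])
    for _ in range(steps):
        x = M @ x
        for v in known:                  # project out known eigenvectors
            x -= (x @ v) * v
        x /= np.linalg.norm(x)
    return x

M = np.array([[2.0, 1.0], [1.0, 3.0]])
_, T = np.linalg.eigh(M)
v1 = T[:, 1]                             # dominant eigenvector (largest eigenvalue)
print(deflated_power_iteration(M, [v1]))  # ~ T[:, 0], up to sign
```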

  33. Generalization: SVD. Singular value decomposition:
      • Let M be an arbitrary real matrix (it may be rectangular).
      • Then M can be written as M = U D Vᵀ, where
        – the matrices U, V are orthogonal,
        – D is a diagonal matrix (it might contain zeros),
        – the diagonal entries are called singular values.
      • U and V are different in general. For symmetric positive semi-definite matrices they coincide, and the singular values are the eigenvalues.
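
A minimal sketch of the decomposition with numpy, on a random rectangular matrix:

```python
# Compute the SVD and check M = U D Vᵀ.
import numpy as np

M = np.random.rand(4, 3)                 # arbitrary rectangular matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)
D = np.diag(s)

print(np.allclose(M, U @ D @ Vt))        # True: M = U D Vᵀ
print(s)                                 # singular values: non-negative, descending
```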
