

SLIDE 1

Matrices A brief introduction

Basilio Bona

DAUIN – Politecnico di Torino

Semester 1, 2014-15

  • B. Bona (DAUIN)

Matrices Semester 1, 2014-15 1 / 44

SLIDE 2

Definitions

Definition

A matrix is a set of $N$ real or complex numbers organized in $m$ rows and $n$ columns, with $N = mn$:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & a_{ij} & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \equiv [a_{ij}], \qquad i = 1, \ldots, m, \quad j = 1, \ldots, n$$

A matrix is always written as a boldface capital letter, as in $A$. To indicate matrix dimensions we use the following symbols: $A_{m \times n}$ or $A \in \mathbb{F}^{m \times n}$, where $\mathbb{F} = \mathbb{R}$ for real elements and $\mathbb{F} = \mathbb{C}$ for complex elements.

SLIDE 3

Transpose matrix

Given a matrix $A_{m \times n}$, we define its transpose as the matrix obtained by exchanging rows and columns:

$$A^T_{n \times m} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}$$

The following property holds: $(A^T)^T = A$.

SLIDE 4

Square matrix

A matrix is said to be square when $m = n$. A square $n \times n$ matrix is upper triangular when $a_{ij} = 0, \ \forall i > j$:

$$A_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$

If a square matrix is upper triangular, its transpose is lower triangular, and vice versa:

$$A^T_{n \times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{12} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{bmatrix}$$

SLIDE 5

Symmetric matrix

A real square matrix is said to be symmetric if $A = A^T$, i.e., $A - A^T = O$. A real symmetric matrix has at most $\frac{n(n+1)}{2}$ independent elements.

If a matrix $K$ has complex elements $k_{ij} = a_{ij} + jb_{ij}$ (where $j = \sqrt{-1}$), its conjugate is $\overline{K}$, with elements $\overline{k}_{ij} = a_{ij} - jb_{ij}$. Given a complex matrix $K$, the adjoint matrix $K^*$ is defined as the conjugate transpose:

$$K^* = \overline{K}^T = \overline{K^T}$$

A complex matrix is called self-adjoint or Hermitian when $K = K^*$. Some textbooks indicate this matrix as $K^\dagger$ or $K^H$.

SLIDE 6

Diagonal matrix

A square matrix is diagonal if $a_{ij} = 0$ for $i \neq j$:

$$A_{n \times n} = \mathrm{diag}(a_i) = \begin{bmatrix} a_1 & & & \\ & a_2 & & \\ & & \ddots & \\ & & & a_n \end{bmatrix}$$

A diagonal matrix is always symmetric.

SLIDE 7

Skew-symmetric matrix


A square matrix is skew-symmetric or antisymmetric if $A + A^T = O$, i.e., $A = -A^T$. Given the constraints of the above relation, a generic skew-symmetric matrix has the following structure:

$$A_{n \times n} = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ -a_{12} & 0 & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ -a_{1n} & -a_{2n} & \cdots & 0 \end{bmatrix}$$

In a skew-symmetric matrix there are at most $\frac{n(n-1)}{2}$ non-zero independent elements. We will see in the following some important properties of the skew-symmetric $3 \times 3$ matrices.

SLIDE 8

Block matrix

It is possible to represent a matrix with blocks as

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & A_{ij} & \vdots \\ A_{m1} & \cdots & A_{mn} \end{bmatrix}$$

where the blocks $A_{ij}$ have suitable dimensions. Given the following matrices

$$A_1 = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ O & A_{ij} & \vdots \\ O & O & A_{mn} \end{bmatrix} \qquad A_2 = \begin{bmatrix} A_{11} & O & O \\ \vdots & A_{ij} & O \\ A_{m1} & \cdots & A_{mn} \end{bmatrix} \qquad A_3 = \begin{bmatrix} A_{11} & O & O \\ O & A_{ij} & O \\ O & O & A_{mn} \end{bmatrix}$$

$A_1$ is upper block triangular, $A_2$ is lower block triangular, and $A_3$ is block diagonal.

SLIDE 9

Matrix algebra

Matrices are elements of an algebra, i.e., a vector space together with a product operator. The main operations of this algebra are: product by a scalar, sum, and matrix product

Product by a scalar

$$\alpha A = \alpha \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} \alpha a_{11} & \alpha a_{12} & \cdots & \alpha a_{1n} \\ \alpha a_{21} & \alpha a_{22} & \cdots & \alpha a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha a_{m1} & \alpha a_{m2} & \cdots & \alpha a_{mn} \end{bmatrix}$$

Sum

$$A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{bmatrix}$$

SLIDE 10

Matrix sum

Sum properties

- $A + O = A$
- $A + B = B + A$
- $(A + B) + C = A + (B + C)$
- $(A + B)^T = A^T + B^T$

The null (neutral, zero) element $O$ takes the name of null matrix. The subtraction (difference) operation is defined using the scalar $\alpha = -1$: $A - B = A + (-1)B$.
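As a quick numerical sanity check, the sum properties above can be verified directly; this is only an illustrative sketch assuming NumPy is available, with arbitrarily chosen matrices A, B, C:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 4)) for _ in range(3))
O = np.zeros((3, 4))  # the null matrix

assert np.allclose(A + O, A)                  # neutral element
assert np.allclose(A + B, B + A)              # commutativity
assert np.allclose((A + B) + C, A + (B + C))  # associativity
assert np.allclose((A + B).T, A.T + B.T)      # transpose of a sum
assert np.allclose(A - B, A + (-1) * B)       # difference via alpha = -1
```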

SLIDE 11

Matrix product

Matrix product

The operation is performed using the well-known "rows by columns" rule: the generic element $c_{ij}$ of the matrix product $C_{m \times p} = A_{m \times n} \cdot B_{n \times p}$ is

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

The bi-linearity of the matrix product is guaranteed, since it is immediate to verify that, given a generic scalar $\alpha$, the following identity holds:

$$\alpha(A \cdot B) = (\alpha A) \cdot B = A \cdot (\alpha B)$$
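The "rows by columns" rule can be spelled out as a deliberately naive triple loop and compared against the built-in product; `matprod` is an illustrative helper, not part of the slides (in practice one would use `@` directly):

```python
import numpy as np

def matprod(A, B):
    """c_ij = sum_k a_ik * b_kj, written out explicitly."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1., 2., 3.], [4., 5., 6.]])    # 2x3
B = np.array([[1., 0.], [0., 1.], [1., 1.]])  # 3x2
assert np.allclose(matprod(A, B), A @ B)

# bilinearity: alpha*(A B) = (alpha A) B = A (alpha B)
alpha = 2.5
assert np.allclose(alpha * (A @ B), (alpha * A) @ B)
assert np.allclose(alpha * (A @ B), A @ (alpha * B))
```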

SLIDE 12

Product

Product properties

- $A \cdot B \cdot C = (A \cdot B) \cdot C = A \cdot (B \cdot C)$
- $A \cdot (B + C) = A \cdot B + A \cdot C$
- $(A + B) \cdot C = A \cdot C + B \cdot C$
- $(A \cdot B)^T = B^T \cdot A^T$

In general:

- the matrix product is non-commutative: $A \cdot B \neq B \cdot A$, apart from particular cases;
- $A \cdot B = A \cdot C$ does not imply $B = C$, apart from particular cases;
- $A \cdot B = O$ does not imply $A = O$ or $B = O$, apart from particular cases.

SLIDE 13

Identity matrix

A neutral element wrt the product exists and is called the identity matrix, written $I_n$, or simply $I$ when no ambiguity arises:

$$I = \begin{bmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{bmatrix}$$

Given a rectangular matrix $A_{m \times n}$, the following identities hold:

$$A_{m \times n} = I_m A_{m \times n} = A_{m \times n} I_n$$

SLIDE 14

Trace

Trace

The trace of a square matrix $A_{n \times n}$ is the sum of its diagonal elements:

$$\mathrm{tr}(A) = \sum_{k=1}^{n} a_{kk}$$

The matrix trace satisfies the following properties:

- $\mathrm{tr}(\alpha A + \beta B) = \alpha\,\mathrm{tr}(A) + \beta\,\mathrm{tr}(B)$
- $\mathrm{tr}(AB) = \mathrm{tr}(BA)$
- $\mathrm{tr}(A) = \mathrm{tr}(A^T)$
- $\mathrm{tr}(A) = \mathrm{tr}(T^{-1}AT)$ for non-singular $T$ (see below)
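The trace properties listed above can be confirmed numerically; a small sketch assuming NumPy, with T built to be safely non-singular:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # diagonally dominated, non-singular

assert np.isclose(np.trace(2*A + 3*B), 2*np.trace(A) + 3*np.trace(B))  # linearity
assert np.isclose(np.trace(A @ B), np.trace(B @ A))                    # cyclic property
assert np.isclose(np.trace(A), np.trace(A.T))
assert np.isclose(np.trace(A), np.trace(np.linalg.inv(T) @ A @ T))     # similarity invariance
```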

SLIDE 15

Determinant

Once the cofactor is defined, the determinant of a square matrix $A$ can be defined "by row", i.e., choosing a generic row $i$:

$$\det(A) = \sum_{k=1}^{n} a_{ik}(-1)^{i+k}\det(A_{(ik)}) = \sum_{k=1}^{n} a_{ik}A_{ik}$$

or, choosing a generic column $j$, we have the definition "by column":

$$\det(A) = \sum_{k=1}^{n} a_{kj}(-1)^{k+j}\det(A_{(kj)}) = \sum_{k=1}^{n} a_{kj}A_{kj}$$

Since these definitions are recursive and assume the computation of determinants of smaller-order minors, it is necessary to define the determinant of a $1 \times 1$ matrix (a scalar), which is simply $\det(a_{ij}) = a_{ij}$.
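The recursive definition above translates almost literally into code; `det_cofactor` is an illustrative helper (expansion along the first row, so $i = 0$ in zero-based indexing), checked against NumPy's determinant:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]  # base case: det of a 1x1 matrix is the scalar itself
    total = 0.0
    for k in range(n):
        # minor A_(0k): A with row 0 and column k removed
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += A[0, k] * (-1) ** k * det_cofactor(minor)  # (-1)^(i+k) with i = 0
    return total

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))
```

The recursion has factorial cost, which is exactly why practical determinants are computed via factorization instead.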

SLIDE 16

Properties of determinant

- $\det(A \cdot B) = \det(A)\det(B)$
- $\det(A^T) = \det(A)$
- $\det(kA) = k^n \det(A)$
- if one makes a number $s$ of exchanges between rows or columns of $A$, obtaining a new matrix $A_s$, we have $\det(A_s) = (-1)^s \det(A)$
- if $A$ has two equal or proportional rows/columns, we have $\det(A) = 0$
- if $A$ has a row or a column that is a linear combination of other rows or columns, we have $\det(A) = 0$
- if $A$ is upper or lower triangular, we have $\det(A) = \prod_{i=1}^{n} a_{ii}$
- if $A$ is block triangular, with $p$ blocks $A_{ii}$ on the diagonal, we have $\det(A) = \prod_{i=1}^{p} \det A_{ii}$

SLIDE 17

Singular matrix and rank

A matrix $A$ is singular if $\det(A) = 0$. We define the rank of matrix $A_{m \times n}$, written $\rho(A_{m \times n})$, as the maximum integer $p$ such that at least one non-zero minor $D_p$ of order $p$ exists. The following properties hold:

- $\rho(A) \le \min\{m, n\}$
- if $\rho(A) = \min\{m, n\}$, $A$ is said to have full rank
- if $\rho(A) < \min\{m, n\}$, the matrix does not have full rank and one says it is rank-deficient
- $\rho(AB) \le \min\{\rho(A), \rho(B)\}$
- $\rho(A) = \rho(A^T)$
- $\rho(AA^T) = \rho(A^TA) = \rho(A)$
- if $A_{n \times n}$ and $\det A = 0$, then $\rho(A) < n$, i.e., $A$ does not have full rank
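The rank properties can be probed numerically; a sketch assuming NumPy, where the generic random matrix has full rank with probability one and the outer product is a deliberately rank-deficient example:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))  # generic 3x5 matrix: full rank 3

r = np.linalg.matrix_rank(A)
assert r == min(3, 5)                       # full rank
assert r == np.linalg.matrix_rank(A.T)      # rho(A) = rho(A^T)
assert np.linalg.matrix_rank(A @ A.T) == r  # rho(A A^T) = rho(A)
assert np.linalg.matrix_rank(A.T @ A) == r  # rho(A^T A) = rho(A)

C = np.outer([1., 2., 3.], [4., 5.])        # product of two vectors: rank 1
assert np.linalg.matrix_rank(C) == 1
```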

SLIDE 18

Invertible matrix

Given a square matrix $A \in \mathbb{R}^{n \times n}$, it is invertible or nonsingular if an inverse matrix $A^{-1}_{n \times n}$ exists, such that

$$AA^{-1} = A^{-1}A = I_n$$

The matrix is invertible iff $\rho(A) = n$, i.e., it has full rank; this implies $\det(A) \neq 0$. The inverse matrix can be computed as

$$A^{-1} = \frac{1}{\det(A)}\mathrm{Adj}(A)$$

The following properties hold: $(A^{-1})^{-1} = A$; $(A^T)^{-1} = (A^{-1})^T$.

The inverse matrix, if it exists, allows one to solve the matrix equation $y = Ax$, obtaining the unknown $x$ as $x = A^{-1}y$.
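A minimal sketch of these identities, assuming NumPy; note that for solving $y = Ax$ one normally calls `solve` rather than forming $A^{-1}$ explicitly:

```python
import numpy as np

A = np.array([[4., 7.], [2., 6.]])
Ainv = np.linalg.inv(A)
assert np.allclose(A @ Ainv, np.eye(2))         # A A^-1 = I
assert np.allclose(Ainv @ A, np.eye(2))         # A^-1 A = I
assert np.allclose(np.linalg.inv(Ainv), A)      # (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(A.T), Ainv.T)  # (A^T)^-1 = (A^-1)^T

# Solving y = A x for x:
y = np.array([1., 0.])
x = np.linalg.solve(A, y)    # preferred over Ainv @ y numerically
assert np.allclose(A @ x, y)
assert np.allclose(x, Ainv @ y)
```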

SLIDE 19

Orthonormal matrix

A square matrix is orthonormal if $A^{-1} = A^T$; the following identity holds:

$$A^TA = AA^T = I$$

Given two square matrices $A$ and $B$ of equal dimension $n \times n$, the following identity holds:

$$(AB)^{-1} = B^{-1}A^{-1}$$

An important result, called the Inversion Lemma, establishes the following: if $A$, $C$ are square invertible matrices and $B$, $D$ are matrices of suitable dimensions, then

$$(A + BCD)^{-1} = A^{-1} - A^{-1}B(DA^{-1}B + C^{-1})^{-1}DA^{-1}$$

provided the matrix $(DA^{-1}B + C^{-1})$ is invertible. The inversion lemma is useful to compute the inverse of a sum of matrices $A_1 + A_2$ when $A_2$ is decomposable into the product $BCD$ and $C$ is easily invertible, for instance diagonal or triangular.
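The inversion lemma can be checked numerically on matrices built so that every inverse involved exists; the sizes and scalings here are illustrative choices, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
A = 3 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # comfortably invertible
B = 0.3 * rng.standard_normal((n, k))
C = np.diag([2.0, 5.0])                                # diagonal, easily invertible
D = 0.3 * rng.standard_normal((k, n))

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ai - Ai @ B @ np.linalg.inv(D @ Ai @ B + np.linalg.inv(C)) @ D @ Ai
assert np.allclose(lhs, rhs)
```

The payoff is that the right-hand side only inverts $k \times k$ matrices (here $2 \times 2$) once $A^{-1}$ is known, instead of a fresh $n \times n$ inverse.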

SLIDE 20

Matrix derivative

If a matrix $A(t)$ is composed of elements $a_{ij}(t)$ that are all differentiable wrt $t$, then the matrix derivative is

$$\frac{d}{dt}A(t) = \dot{A}(t) = \left[\frac{d}{dt}a_{ij}(t)\right] = [\dot{a}_{ij}(t)]$$

If a square matrix $A(t)$ has rank $\rho(A(t)) = n$ for any time $t$, then the derivative of its inverse is

$$\frac{d}{dt}A(t)^{-1} = -A^{-1}(t)\,\dot{A}(t)\,A^{-1}(t)$$

Since the inverse operator is non-linear, in general it results

$$\left(\frac{dA(t)}{dt}\right)^{-1} \neq \frac{d}{dt}\left(A(t)^{-1}\right)$$
SLIDE 21

Symmetric Skew-symmetric decomposition

Given a real matrix $A \in \mathbb{R}^{m \times n}$, the two matrices

$$A^TA \in \mathbb{R}^{n \times n} \qquad AA^T \in \mathbb{R}^{m \times m}$$

are both symmetric. Given a square matrix $A$, it is always possible to factor it into a sum of two matrices, as follows:

$$A = A_s + A_a, \qquad A_s = \tfrac{1}{2}(A + A^T) \ \text{(symmetric)}, \qquad A_a = \tfrac{1}{2}(A - A^T) \ \text{(skew-symmetric)}$$
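Both facts are easy to verify numerically; a sketch assuming NumPy, with arbitrary test matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

As = 0.5 * (A + A.T)  # symmetric part
Aa = 0.5 * (A - A.T)  # skew-symmetric part
assert np.allclose(As, As.T)      # A_s is symmetric
assert np.allclose(Aa, -Aa.T)     # A_a is skew-symmetric
assert np.allclose(A, As + Aa)    # the decomposition reconstructs A

M = rng.standard_normal((3, 5))
assert np.allclose(M.T @ M, (M.T @ M).T)  # A^T A is symmetric
assert np.allclose(M @ M.T, (M @ M.T).T)  # A A^T is symmetric
```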

SLIDE 22

Similarity transformation

Given a square matrix $A \in \mathbb{R}^{n \times n}$ and a non-singular square matrix $T \in \mathbb{R}^{n \times n}$, the new matrix $B \in \mathbb{R}^{n \times n}$, obtained as

$$B = T^{-1}AT \qquad \text{or} \qquad B = TAT^{-1}$$

is said to be similar to $A$, and the transformation $T$ is called a similarity transformation.

SLIDE 23

Eigenvalues and eigenvectors

Consider the similarity transformation between $A$ and $\Lambda$, where the latter is diagonal, $\Lambda = \mathrm{diag}(\lambda_i)$:

$$A = U\Lambda U^{-1}, \qquad U = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}$$

Multiplying $A$ on the right by $U$, one obtains

$$AU = U\Lambda \qquad \text{and then} \qquad Au_i = \lambda_i u_i$$

This identity is the well-known formula that relates the matrix eigenvalues to the eigenvectors; the constants $\lambda_i$ are the eigenvalues of $A$, while the vectors $u_i$ are the eigenvectors of $A$, usually with non-unit norm.

SLIDE 24

Eigenvalues and eigenvectors

Given a square matrix $A_{n \times n}$, the solutions $\lambda_i$ (real or complex) of the characteristic equation

$$\det(\lambda I - A) = 0$$

are the eigenvalues of $A$. $\det(\lambda I - A)$ is a polynomial in $\lambda$, called the characteristic polynomial. If the eigenvalues are all distinct, the vectors $u_i$ that satisfy the identity $Au_i = \lambda_i u_i$ are the eigenvectors of $A$.
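The eigenvalue/eigenvector relation and the diagonalization of the previous slide can be checked together; a sketch assuming NumPy, on a matrix chosen to have three distinct eigenvalues:

```python
import numpy as np

A = np.array([[2., 0., 0.],
              [0., 3., 4.],
              [0., 4., 9.]])
lam, U = np.linalg.eig(A)  # columns of U are the eigenvectors

for i in range(3):
    # A u_i = lambda_i u_i, element by element
    assert np.allclose(A @ U[:, i], lam[i] * U[:, i])

# distinct eigenvalues -> U invertible, so A = U Lambda U^-1
assert np.allclose(A, U @ np.diag(lam) @ np.linalg.inv(U))
```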

SLIDE 25

Skew-symmetric matrices

Skew-symmetric matrix

A square matrix $S$ is called skew-symmetric or antisymmetric when

$$S + S^T = O \qquad \text{or} \qquad S = -S^T$$

A skew-symmetric matrix has the following structure:

$$S_{n \times n} = \begin{bmatrix} 0 & s_{12} & \cdots & s_{1n} \\ -s_{12} & 0 & \cdots & s_{2n} \\ \vdots & & \ddots & \vdots \\ -s_{1n} & -s_{2n} & \cdots & 0 \end{bmatrix}$$

Therefore it has at most $\frac{n(n-1)}{2}$ independent elements.

SLIDE 26

Skew-symmetric matrices

For $n = 3$ it results $\frac{n(n-1)}{2} = 3$, hence a skew-symmetric matrix has as many independent elements as a 3D vector $v$. Given a vector $v = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T$ it is possible to build $S$, and given a matrix $S$ it is possible to extract the associated vector $v$. We indicate this fact using the symbol $S(v)$, where, by convention,

$$S(v) = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}$$
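The construction is a three-line function; this sketch (assuming NumPy) builds $S(v)$ with the convention above and anticipates the cross-product identity $S(u)v = u \times v$ of the next slide:

```python
import numpy as np

def S(v):
    """Skew-symmetric matrix associated with a 3D vector v (convention above)."""
    v1, v2, v3 = v
    return np.array([[0., -v3,  v2],
                     [v3,  0., -v1],
                     [-v2, v1,  0.]])

u = np.array([1., 2., 3.])
v = np.array([4., 5., 6.])
assert np.allclose(S(u), -S(u).T)             # skew-symmetry
assert np.allclose(S(u) @ v, np.cross(u, v))  # S(u) v = u x v
```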

SLIDE 27

Skew-symmetric matrices

Some properties:

- Given any vector $v \in \mathbb{R}^3$: $S^T(v) = -S(v) = S(-v)$
- Given two scalars $\lambda_1, \lambda_2 \in \mathbb{R}$: $S(\lambda_1 u + \lambda_2 v) = \lambda_1 S(u) + \lambda_2 S(v)$
- Given any two vectors $v, u \in \mathbb{R}^3$: $S(u)v = u \times v = -v \times u = S(-v)u = S^T(v)u$

Therefore $S(u)$ is the matrix representation of the operator $(u \times)$, and vice versa.

SLIDE 28

Skew-symmetric matrices

The matrix $S(u)S(u) = S^2(u)$ is symmetric, and

$$S^2(u) = uu^T - \|u\|^2 I$$

Hence the dyadic product

$$D(u, u) = uu^T = S^2(u) + \|u\|^2 I$$

SLIDE 29

Eigenvalues and eigenvectors of skew-symmetric matrices

Given a skew-symmetric matrix $S(v)$, its eigenvalues are imaginary or zero:

$$\lambda_1 = 0, \qquad \lambda_{2,3} = \pm j\|v\|$$

The eigenvector related to the eigenvalue $\lambda_1 = 0$ is $v$ itself; the other two eigenvalues are complex conjugate. The set of skew-symmetric matrices is a vector space, denoted as $so(3)$. Given two skew-symmetric matrices $S_1$ and $S_2$, we call commutator or Lie bracket the following operator:

$$[S_1, S_2] \stackrel{\text{def}}{=} S_1S_2 - S_2S_1$$

which is itself skew-symmetric. Skew-symmetric matrices form a Lie algebra, which is related to the Lie group of orthogonal matrices.
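Both claims — the eigenvalue pattern $\{0, \pm j\|v\|\}$ and closure under the Lie bracket — can be verified numerically; a sketch assuming NumPy, with $v$ chosen so that $\|v\| = 3$:

```python
import numpy as np

def S(v):
    v1, v2, v3 = v
    return np.array([[0., -v3, v2], [v3, 0., -v1], [-v2, v1, 0.]])

v = np.array([1., -2., 2.])  # ||v|| = 3
lam = np.linalg.eigvals(S(v))
assert np.allclose(lam.real, 0.0, atol=1e-9)         # purely imaginary or zero
assert np.isclose(max(lam.imag), np.linalg.norm(v))  # +j ||v|| present

S1 = S(np.array([1., 0., 0.]))
S2 = S(np.array([0., 1., 0.]))
comm = S1 @ S2 - S2 @ S1      # Lie bracket [S1, S2]
assert np.allclose(comm, -comm.T)  # the bracket is again skew-symmetric
```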

SLIDE 30

Orthogonal matrices

A square matrix $A \in \mathbb{R}^{n \times n}$ is called orthogonal when

$$A^TA = \begin{bmatrix} \alpha_1 & & \\ & \alpha_2 & \\ & & \ddots \\ & & & \alpha_n \end{bmatrix}$$

with $\alpha_i \neq 0$. A square orthogonal matrix $U \in \mathbb{R}^{n \times n}$ is called orthonormal when all the constants $\alpha_i$ are 1:

$$U^TU = UU^T = I$$

Therefore $U^{-1} = U^T$.

SLIDE 31

Orthonormal matrices

Other properties:

- The columns, as well as the rows, of $U$ are orthogonal to each other and have unit norm: $\|u_i\| = 1$.
- The determinant of $U$ has unit modulus: $|\det(U)| = 1$, therefore it can be $+1$ or $-1$.
- Given a vector $x$, its orthonormal transformation is $y = Ux$.

SLIDE 32

Orthonormal matrices

If $U$ is an orthonormal matrix, then $\|AU\| = \|UA\| = \|A\|$. This property in general holds also for unitary matrices, i.e., those with $U^*U = I$. When $U \in \mathbb{R}^{3 \times 3}$, only 3 out of 9 elements are independent.

The scalar product is invariant to orthonormal transformations:

$$(Ux) \cdot (Uy) = (Ux)^T(Uy) = x^TU^TUy = x^Ty = x \cdot y$$

This means that vector lengths are invariant wrt orthonormal transformations:

$$\|Ux\|^2 = (Ux)^T(Ux) = x^TU^TUx = x^TIx = x^Tx = \|x\|^2$$
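These invariances can be demonstrated on a concrete orthonormal matrix; this sketch (assuming NumPy) uses an illustrative planar rotation about the z-axis:

```python
import numpy as np

t = 0.7  # rotation angle, arbitrary
U = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])
assert np.allclose(U.T @ U, np.eye(3))       # orthonormality
assert np.isclose(np.linalg.det(U), 1.0)     # a proper rotation: det = +1

x = np.array([1., 2., 3.])
y = np.array([-1., 0., 2.])
assert np.isclose((U @ x) @ (U @ y), x @ y)                  # scalar product invariant
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))  # length invariant
```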

SLIDE 33

Orthonormal matrices

When considering orthonormal transformations, it is important to distinguish two cases: when $\det(U) = +1$, $U$ represents a proper rotation, or simply a rotation; when $\det(U) = -1$, $U$ represents an improper rotation or reflection. The set of rotations forms a continuous non-commutative (wrt product) group; the set of reflections does not have this "quality". Intuitively, this means that infinitesimal rotations exist, while infinitesimal reflections do not have any meaning. Reflections are the most basic transformation in 3D spaces, in the sense that translations, rotations and roto-reflections (slidings) are obtained from the composition of two or three reflections.

SLIDE 34

Figure: Reflections.

SLIDE 35

Orthonormal matrices

If $U$ is an orthonormal matrix, the distributive property wrt the cross product holds:

$$U(x \times y) = (Ux) \times (Uy)$$

(with general matrices $A$ this is not true). For any proper rotation matrix $U$ and a generic vector $x$ the following holds:

$$US(x)U^Ty = U\left(x \times (U^Ty)\right) = (Ux) \times (UU^Ty) = (Ux) \times y = S(Ux)y$$

where $S(x)$ is the skew-symmetric matrix associated with $x$; therefore:

$$US(x)U^T = S(Ux) \qquad US(x) = S(Ux)U$$

SLIDE 36

Bilinear and quadratic forms

A bilinear form associated with the matrix $A \in \mathbb{R}^{m \times n}$ is the scalar quantity defined as

$$b(x, y) \stackrel{\text{def}}{=} x^TAy = y^TA^Tx$$

A quadratic form associated with the square matrix $A \in \mathbb{R}^{n \times n}$ is the scalar quantity defined as

$$q(x) \stackrel{\text{def}}{=} x^TAx = x^TA^Tx$$

Every quadratic form associated with a skew-symmetric matrix $S(y)$ is identically zero:

$$x^TS(y)x \equiv 0 \quad \forall x$$

Indeed, setting $w = S(y)x = y \times x$, one obtains $x^TS(y)x = x^Tw$; but since, by definition, $w$ is orthogonal to both $y$ and $x$, the scalar product $x^Tw$ is always zero, and so is the quadratic form on the left-hand side.

SLIDE 37

Definite positive matrices – 1

Recalling the standard decomposition of a generic square matrix $A$ into a symmetric term $A_s$ and a skew-symmetric one $A_a$, one concludes that the quadratic form depends only on the symmetric part of the matrix:

$$q(x) = x^TAx = x^T(A_s + A_a)x = x^TA_sx$$

A square matrix $A$ is said to be positive definite if the associated quadratic form $x^TAx$ satisfies the following conditions:

$$x^TAx > 0 \quad \forall x \neq 0, \qquad x^TAx = 0 \iff x = 0$$

A square matrix $A$ is said to be positive semidefinite if the associated quadratic form satisfies

$$x^TAx \ge 0 \quad \forall x$$

A square matrix $A$ is said to be negative definite if $-A$ is positive definite; similarly, a square matrix $A$ is negative semidefinite if $-A$ is positive semidefinite.

SLIDE 38

Definite positive matrices – 2

Often we use the following notations:

- positive definite matrix: $A \succ 0$
- positive semidefinite matrix: $A \succeq 0$
- negative definite matrix: $A \prec 0$
- negative semidefinite matrix: $A \preceq 0$

A necessary but not sufficient condition for a square matrix $A$ to be positive definite is that the elements on its diagonal are all strictly positive. A necessary and sufficient condition for a square matrix $A$ to be positive definite is that all its eigenvalues are strictly positive.

SLIDE 39

Sylvester criterion

The Sylvester criterion states that a square matrix $A$ is positive definite iff all its leading principal minors are strictly positive. A positive definite matrix has full rank and is always invertible. The associated quadratic form $x^TAx$ satisfies the following identity:

$$\lambda_{\min}(A)\|x\|^2 \le x^TAx \le \lambda_{\max}(A)\|x\|^2$$

where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ are, respectively, the minimum and the maximum eigenvalues.
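The Sylvester criterion, the eigenvalue characterization, and the Rayleigh-type bounds can all be checked on one example; a sketch assuming NumPy, using a classic tridiagonal positive definite matrix:

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])  # symmetric, known to be positive definite

# Sylvester criterion: all leading principal minors strictly positive
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
assert all(m > 0 for m in minors)

# Equivalent characterization: all eigenvalues strictly positive
lam = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, sorted
assert np.all(lam > 0)

# lambda_min ||x||^2 <= x^T A x <= lambda_max ||x||^2
x = np.array([0.3, -1.2, 0.7])
q = x @ A @ x
assert lam[0] * (x @ x) <= q <= lam[-1] * (x @ x)
```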

SLIDE 40

Semidefinite matrix and rank

A positive semidefinite matrix $A_{n \times n}$ with rank $\rho(A) = r < n$ has $r$ strictly positive eigenvalues and $n - r$ zero eigenvalues. The quadratic form goes to zero for every vector $x \in N(A)$, the null space of $A$.

Given a real matrix of generic dimensions $A_{m \times n}$, we have seen that both $A^TA$ and $AA^T$ are symmetric; in addition we know that

$$\rho(A^TA) = \rho(AA^T) = \rho(A)$$

These matrices have all real, non-negative eigenvalues, and therefore they are positive definite or semidefinite; in particular, if $A_{m \times n}$ has full rank, then:

- if $m < n$: $A^TA \succeq 0$ and $AA^T \succ 0$
- if $m = n$: $A^TA \succ 0$ and $AA^T \succ 0$
- if $m > n$: $A^TA \succ 0$ and $AA^T \succeq 0$

SLIDE 41

Matrix derivatives – 1

If the elements of matrix $A$ are functions of a quantity $x$, one can define the matrix derivative wrt $x$ as

$$\frac{d}{dx}A(x) := \left[\frac{da_{ij}(x)}{dx}\right]$$

If $x$ is the time $t$, one writes

$$\frac{d}{dt}A(t) \equiv \dot{A}(t) := \left[\frac{da_{ij}(t)}{dt}\right] = [\dot{a}_{ij}]$$

If $A$ is a time function through the variable $x(t)$, then

$$\frac{d}{dt}A(x(t)) \equiv \dot{A}(x(t)) := \left[\frac{\partial a_{ij}(x)}{\partial x}\frac{dx(t)}{dt}\right] = \left[\frac{\partial a_{ij}(x)}{\partial x}\,\dot{x}(t)\right]$$

SLIDE 42

Matrix derivatives – 2

Given a scalar function $\varphi(x)$ defined as $\varphi(\cdot) : \mathbb{R}^n \to \mathbb{R}$, the gradient of the function $\varphi$ wrt $x$ is a column vector:

$$\nabla_x \varphi = \frac{\partial \varphi}{\partial x} := \begin{bmatrix} \dfrac{\partial \varphi(x)}{\partial x_1} \\ \vdots \\ \dfrac{\partial \varphi(x)}{\partial x_n} \end{bmatrix} \qquad \text{i.e.,} \qquad \nabla_x := \begin{bmatrix} \dfrac{\partial}{\partial x_1} \\ \vdots \\ \dfrac{\partial}{\partial x_n} \end{bmatrix} = \mathrm{grad}_x$$

If $x(t)$ is a differentiable time function, then

$$\frac{d\varphi(x)}{dt} \equiv \dot{\varphi}(x) = \nabla_x^T \varphi(x)\,\dot{x}$$

(Notice the convention: the gradient for us is a column vector, although many textbooks assume it is a row vector.)

SLIDE 43

Jacobian matrix

Given an $m \times 1$ vector function $f(x) = \begin{bmatrix} f_1(x) & \cdots & f_m(x) \end{bmatrix}^T$, $x \in \mathbb{R}^n$, the Jacobian matrix (or simply the Jacobian) is an $m \times n$ matrix defined as

$$J_f(x) = \begin{bmatrix} \left(\dfrac{\partial f_1(x)}{\partial x}\right)^T \\ \vdots \\ \left(\dfrac{\partial f_m(x)}{\partial x}\right)^T \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1(x)}{\partial x_1} & \cdots & \dfrac{\partial f_1(x)}{\partial x_n} \\ \vdots & \dfrac{\partial f_i(x)}{\partial x_j} & \vdots \\ \dfrac{\partial f_m(x)}{\partial x_1} & \cdots & \dfrac{\partial f_m(x)}{\partial x_n} \end{bmatrix} = \begin{bmatrix} (\mathrm{grad}_x f_1)^T \\ \vdots \\ (\mathrm{grad}_x f_m)^T \end{bmatrix}$$

and if $x(t)$ is a differentiable time function, then

$$\dot{f}(x) \equiv \frac{df(x)}{dt} = \frac{df(x)}{dx}\,\dot{x}(t) = J_f(x)\,\dot{x}(t)$$

Notice that the rows of $J_f$ are the transposes of the gradients of the component functions.
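An analytic Jacobian can be validated against central finite differences; the map `f` below is a hypothetical $\mathbb{R}^2 \to \mathbb{R}^2$ example chosen only for illustration:

```python
import numpy as np

def f(x):
    # hypothetical vector function, not from the slides
    return np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])

def Jf(x):
    # analytic Jacobian: row i is the transposed gradient of f_i
    return np.array([[2 * x[0],             1.0         ],
                     [np.cos(x[0]) * x[1],  np.sin(x[0])]])

x = np.array([0.5, 1.5])
eps = 1e-6
# finite-difference Jacobian, built column by column
J_num = np.column_stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
assert np.allclose(Jf(x), J_num, atol=1e-6)
```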

SLIDE 44

Gradient

Given a bilinear form $b(x, y) = x^TAy$, we call gradients the following vectors:

- gradient wrt $x$: $\mathrm{grad}_x\, b(x, y) \stackrel{\text{def}}{=} \dfrac{\partial b(x, y)}{\partial x} = Ay$
- gradient wrt $y$: $\mathrm{grad}_y\, b(x, y) \stackrel{\text{def}}{=} \dfrac{\partial b(x, y)}{\partial y} = A^Tx$

Given the quadratic form $q(x) = x^TAx$, we call gradient wrt $x$ the following vector:

$$\nabla_x q(x) \equiv \mathrm{grad}_x\, q(x) \stackrel{\text{def}}{=} \frac{\partial q(x)}{\partial x} = 2Ax$$

(the last equality holds for symmetric $A$; for a generic $A$ the gradient is $(A + A^T)x$).
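These gradient formulas can be verified against numerical differentiation; a sketch assuming NumPy, with `num_grad` as an illustrative central-difference helper and $A$ symmetrized so that $\nabla_x q = 2Ax$ applies:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
A = 0.5 * (A + A.T)  # symmetric, so grad of x^T A x is exactly 2 A x
x = rng.standard_normal(3)
y = rng.standard_normal(3)

eps = 1e-6
def num_grad(g, z):
    """Central-difference gradient of scalar function g at point z."""
    return np.array([(g(z + eps * e) - g(z - eps * e)) / (2 * eps)
                     for e in np.eye(3)])

assert np.allclose(num_grad(lambda xv: xv @ A @ y, x), A @ y, atol=1e-5)    # grad_x b
assert np.allclose(num_grad(lambda yv: x @ A @ yv, y), A.T @ x, atol=1e-5)  # grad_y b
assert np.allclose(num_grad(lambda xv: xv @ A @ xv, x), 2 * A @ x, atol=1e-5)
```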
