13 MATH FACTS

13.1 Vectors

13.1.1 Definition

We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms of its components with respect to a reference system as
$$\vec{a} = \left\{ \begin{array}{c} 2 \\ 1 \\ 7 \end{array} \right\}.$$
The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

1. Vector addition: $\vec{a} + \vec{b} = \vec{c}$,
$$\left\{ \begin{array}{c} 2 \\ 1 \\ 7 \end{array} \right\} + \left\{ \begin{array}{c} 3 \\ 3 \\ 2 \end{array} \right\} = \left\{ \begin{array}{c} 5 \\ 4 \\ 9 \end{array} \right\}.$$
Graphically, addition is stringing the vectors together head to tail.

2. Scalar multiplication:
$$-2 \times \left\{ \begin{array}{c} 2 \\ 1 \\ 7 \end{array} \right\} = \left\{ \begin{array}{c} -4 \\ -2 \\ -14 \end{array} \right\}.$$

13.1.2 Vector Magnitude

The total length of a vector of dimension $m$, its Euclidean norm, is given by
$$||\vec{x}|| = \sqrt{\sum_{i=1}^{m} x_i^2}.$$
This scalar is commonly used to normalize a vector to length one.

13.1.3 Vector Dot or Inner Product

The dot product of two vectors is a scalar equal to the sum of the products of the corresponding components:
$$\vec{x} \cdot \vec{y} = \vec{x}^{\,T} \vec{y} = \sum_{i=1}^{m} x_i y_i.$$
The dot product also satisfies
$$\vec{x} \cdot \vec{y} = ||\vec{x}||\,||\vec{y}|| \cos\theta,$$
where $\theta$ is the angle between the vectors.

13.1.4 Vector Cross Product

The cross product of two three-dimensional vectors $\vec{x}$ and $\vec{y}$ is another vector $\vec{z}$, $\vec{x} \times \vec{y} = \vec{z}$, whose

1. direction is normal to the plane formed by the other two vectors,
2. direction is given by the right-hand rule, rotating from $\vec{x}$ to $\vec{y}$,
3. magnitude is the area of the parallelogram formed by the two vectors – the cross product of two parallel vectors is zero – and
4. (signed) magnitude is equal to $||\vec{x}||\,||\vec{y}|| \sin\theta$, where $\theta$ is the angle between the two vectors, measured from $\vec{x}$ to $\vec{y}$.

In terms of their components,
$$\vec{x} \times \vec{y} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{vmatrix} = \left\{ \begin{array}{c} (x_2 y_3 - x_3 y_2)\,\hat{i} \\ (x_3 y_1 - x_1 y_3)\,\hat{j} \\ (x_1 y_2 - x_2 y_1)\,\hat{k} \end{array} \right\}.$$

13.2 Matrices

13.2.1 Definition

A matrix, or array, is equivalent to a set of column vectors of the same dimension, arranged side by side, say
$$A = [\vec{a}\ \ \vec{b}\,] = \begin{bmatrix} 2 & 3 \\ 1 & 3 \\ 7 & 2 \end{bmatrix}.$$
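The vector operations above, and the side-by-side construction of the example matrix $A = [\vec{a}\ \vec{b}\,]$, can be sketched in plain Python. This is only an illustration; the function names (`add`, `scale`, `norm`, `dot`, `cross`) are ours, not from the text.

```python
import math

def add(x, y):
    # vector addition: c_i = x_i + y_i
    return [xi + yi for xi, yi in zip(x, y)]

def scale(c, x):
    # scalar multiplication: (c x)_i = c * x_i
    return [c * xi for xi in x]

def norm(x):
    # Euclidean norm: sqrt of the sum of squared components
    return math.sqrt(sum(xi ** 2 for xi in x))

def dot(x, y):
    # dot product: sum of products of corresponding components
    return sum(xi * yi for xi, yi in zip(x, y))

def cross(x, y):
    # cross product of two three-dimensional vectors, per the
    # component formula in the text
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

a, b = [2, 1, 7], [3, 3, 2]
print(add(a, b))       # the worked example: [5, 4, 9]
print(scale(-2, a))    # [-4, -2, -14]
# angle between a and b, recovered from x . y = ||x|| ||y|| cos(theta)
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
# the example matrix A = [a b]: columns a and b arranged side by side
A = [[ai, bi] for ai, bi in zip(a, b)]
print(A)               # [[2, 3], [1, 3], [7, 2]]
```

Note that `cross(a, b)` is orthogonal to both inputs, as the "normal to the plane" property requires.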
This matrix has three rows ($m = 3$) and two columns ($n = 2$); a vector is a special case of a matrix with one column. Matrices, like vectors, permit addition and scalar multiplication. We usually use an upper-case symbol to denote a matrix.

13.2.2 Multiplying a Vector by a Matrix

If $A_{ij}$ denotes the element of matrix $A$ in the $i$'th row and the $j$'th column, then the multiplication $\vec{c} = A\vec{v}$ is constructed as
$$c_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n = \sum_{j=1}^{n} A_{ij} v_j,$$
where $n$ is the number of columns in $A$. $\vec{c}$ will have as many rows as $A$ has rows ($m$). Note that this multiplication is defined only if $\vec{v}$ has as many rows as $A$ has columns; they have consistent inner dimension $n$. The product $\vec{v}A$ would be well-posed only if $A$ had one row, and the proper number of columns.

There is another important interpretation of this vector multiplication. Let the subscript $:$ indicate all rows, so that each $A_{:j}$ is the $j$'th column vector. Then
$$\vec{c} = A\vec{v} = A_{:1} v_1 + A_{:2} v_2 + \cdots + A_{:n} v_n.$$
We are multiplying column vectors of $A$ by the scalar elements of $\vec{v}$.

13.2.3 Multiplying a Matrix by a Matrix

The multiplication $C = AB$ is equivalent to a side-by-side arrangement of column vectors
$$C_{:j} = A B_{:j},$$
so that
$$C = AB = [A B_{:1}\ \ A B_{:2}\ \cdots\ A B_{:k}],$$
where $k$ is the number of columns in matrix $B$. The same inner dimension condition applies as noted above: the number of columns in $A$ must equal the number of rows in $B$. Matrix multiplication is:

1. Associative: $(AB)C = A(BC)$.
2. Distributive: $A(B + C) = AB + AC$, $(B + C)A = BA + CA$.
3. NOT commutative: $AB \neq BA$, except in special cases.

13.2.4 Common Matrices

Identity. The identity matrix is usually denoted $I$, and comprises a square matrix with ones on the diagonal, and zeros elsewhere, e.g.,
$$I_{3\times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
The identity always satisfies $A I_{n\times n} = I_{m\times m} A = A$.

Diagonal Matrices. A diagonal matrix is square, and has all zeros off the diagonal. For instance, the following is a diagonal matrix:
$$A = \begin{bmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.$$
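The two interpretations of multiplication above can be sketched in plain Python: `matvec` sums over the inner dimension, and `matmul` builds $C = AB$ one column at a time, exactly as $C_{:j} = AB_{:j}$. The helper names are illustrative, not from the text.

```python
def matvec(A, v):
    # c_i = sum_j A[i][j] * v[j]; defined only when len(v) equals the
    # number of columns of A (consistent inner dimension n)
    assert all(len(row) == len(v) for row in A)
    return [sum(Aij * vj for Aij, vj in zip(row, v)) for row in A]

def matmul(A, B):
    # C = A B, built column by column: C[:, j] = A @ B[:, j]
    cols = [matvec(A, [row[j] for row in B]) for j in range(len(B[0]))]
    # re-assemble the computed columns side by side into rows
    return [[col[i] for col in cols] for i in range(len(A))]

A = [[2, 3],
     [1, 3],
     [7, 2]]           # the 3x2 example matrix from the text
# A v with v = [1, 1] is the column combination A[:,0]*1 + A[:,1]*1,
# i.e. a + b from the vector-addition example
print(matvec(A, [1, 1]))   # [5, 4, 9]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(I3, A))       # identity leaves A unchanged
```

A quick check with two diagonal matrices confirms the special case where multiplication *does* commute, while a generic pair does not.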
The product of a diagonal matrix with another diagonal matrix is diagonal, and in this case the operation is commutative.

13.2.5 Transpose

The transpose of a vector or matrix, indicated by a $T$ superscript, results from simply swapping the row-column indices of each entry; it is equivalent to "flipping" the vector or matrix around the diagonal line. For example,
$$\vec{a} = \left\{ \begin{array}{c} 1 \\ 2 \\ 3 \end{array} \right\} \longrightarrow \vec{a}^{\,T} = \{ 1\ \ 2\ \ 3 \},$$
$$A = \begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 8 & 9 \end{bmatrix} \longrightarrow A^T = \begin{bmatrix} 1 & 4 & 8 \\ 2 & 5 & 9 \end{bmatrix}.$$
A very useful property of the transpose is
$$(AB)^T = B^T A^T.$$

13.2.6 Determinant

The determinant of a square matrix $A$ is a scalar equal to the volume of the parallelepiped enclosed by the constituent vectors. The two-dimensional case is particularly easy to remember, and illustrates the principle of volume:
$$\det(A) = A_{11}A_{22} - A_{21}A_{12},$$
$$\det \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} = 1 + 1 = 2.$$
[Figure: the parallelogram spanned by the two column vectors, drawn in the $(x_1, x_2)$ plane.]

In higher dimensions, the determinant is more complicated to compute. The general formula allows one to pick a row $k$, perhaps the one containing the most zeros, and apply
$$\det(A) = \sum_{j=1}^{n} A_{kj} (-1)^{k+j} \Delta_{kj},$$
where $\Delta_{kj}$ is the determinant of the sub-matrix formed by neglecting the $k$'th row and the $j$'th column. The formula is symmetric, in the sense that one could also target the $k$'th column:
$$\det(A) = \sum_{j=1}^{n} A_{jk} (-1)^{k+j} \Delta_{jk}.$$
If the determinant of a matrix is zero, then the matrix is said to be singular – there is no volume, and this results from the fact that the constituent vectors do not span the matrix dimension. For instance, in two dimensions, a singular matrix has its two vectors colinear; in three dimensions, a singular matrix has all its vectors lying in a (two-dimensional) plane. Note also that $\det(A) = \det(A^T)$. If $\det(A) \neq 0$, then the matrix is said to be nonsingular.

13.2.7 Inverse

The inverse of a square matrix $A$, denoted $A^{-1}$, satisfies $AA^{-1} = A^{-1}A = I$. Its computation requires the determinant above, and
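The transpose and the cofactor expansion for the determinant can be checked with a short sketch in plain Python; the recursive `det` below follows the row-expansion formula above with $k$ fixed to the first row (function names are ours, for illustration only).

```python
def transpose(A):
    # swap row and column indices: (A^T)[j][i] = A[i][j]
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def det(A):
    # cofactor expansion along the first row:
    # det(A) = sum_j A[0][j] * (-1)^j * Delta_{0j}
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # sub-matrix with row 0 and column j deleted
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det(minor)
    return total

print(det([[1, -1], [1, 1]]))   # the 2-D example: 1 + 1 = 2
print(det([[1, 2], [2, 4]]))    # colinear columns -> singular, det = 0
```

Evaluating `det` on a matrix and on its transpose gives the same value, consistent with $\det(A) = \det(A^T)$.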