Mathematics Department, Stanford University

Summary of Math 51H Linear Algebra Material

The following is a brief summary of the main results covered in the linear algebra part of 51H; you should of course know all these results and their proofs and be able to apply them in the manner required, e.g. as in the homework problems.

Vectors in $\mathbb{R}^n$; meaning of linear combinations, l.i., l.d., span, subspace. Dot product, Cauchy-Schwarz inequality and its proof. Angle between non-zero vectors.

Gaussian elimination, Underdetermined Systems Lemma. The Linear Dependence Lemma and its consequences: Basis Theorem and definition of dimension, and the facts that (a) $k$ l.i. vectors in a $k$-dimensional subspace automatically span, (b) $k$ vectors which span a $k$-dimensional subspace are automatically l.i.

For an $m \times n$ matrix $A$: definition of $N(A)$, $C(A)$. Rank/Nullity Theorem, basic matrix algebra (products and sums) and the fact that $\operatorname{rank}(AB) \le \min\{\operatorname{rank}(A), \operatorname{rank}(B)\}$. The transpose $A^T$ of $A$ and the formula $(AB)^T = B^T A^T$. Reduction of $A$ to reduced row echelon form $\operatorname{rref} A$ and its consequences, including the alternate proof of the Rank/Nullity Theorem and the fact that $C(A)$ is the span of the columns of $A$ with column numbers equal to the column numbers of the pivot columns of $\operatorname{rref} A$.

For each subspace $V$ of $\mathbb{R}^n$, the definition of the orthogonal complement $V^\perp$ of $V$, and the facts that $V \cap V^\perp = \{0\}$, $V + V^\perp = \mathbb{R}^n$ and that this is a direct sum (i.e. for each $z \in \mathbb{R}^n$ there are unique $x \in V$, $y \in V^\perp$ with $z = x + y$), $\dim V + \dim V^\perp = n$, and $(V^\perp)^\perp = V$. Existence of a unique orthogonal projection $P : \mathbb{R}^n \to \mathbb{R}^n$ with the properties (a) $P(x) \in V$ and (b) $x - P(x) \in V^\perp$ for all $x \in \mathbb{R}^n$. Proof that such a $P$ automatically has the additional properties: (i) it is linear, (ii) $P(x) = x$ for all $x \in V$, (iii) it is symmetric (i.e. $x \cdot P(y) = y \cdot P(x)$ for all $x, y \in \mathbb{R}^n$), (iv) $P(V^\perp) = \{0\}$, and (v) $\|x - P(x)\|$ gives the distance of a point $x$ from $V$ (i.e.
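As an illustrative sketch (not part of the course material, and using NumPy rather than the hand computations expected in 51H), the defining properties (a) and (b) of the orthogonal projection can be checked numerically. The formula $P = A(A^TA)^{-1}A^T$ for projecting onto the column space of a full-column-rank matrix $A$ is a standard least-squares identity assumed here, not one quoted from the summary; the matrix $A$ and vector $x$ below are my own examples.

```python
import numpy as np

# Columns of A span a 2-dimensional subspace V of R^3 (hypothetical example).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
# Assumed standard formula for the matrix of the orthogonal projection onto C(A).
P = A @ np.linalg.inv(A.T @ A) @ A.T

x = np.array([3.0, -1.0, 2.0])
Px = P @ x

# (b) x - P(x) is in V-perp, i.e. orthogonal to every column of A.
assert np.allclose(A.T @ (x - Px), 0)

# (iii) P is symmetric, and (ii) P fixes vectors already in V.
assert np.allclose(P, P.T)
assert np.allclose(P @ A, A)

# (v) P(x) is the nearest point of V to x: any perturbation within V is farther.
for coeffs in ([0.1, 0.0], [0.0, -0.2], [0.3, 0.3]):
    y = Px + A @ np.array(coeffs)
    assert np.linalg.norm(x - Px) <= np.linalg.norm(x - y)
```

The same checks work for any full-column-rank $A$; only the inverse of the small $k \times k$ matrix $A^TA$ is ever needed.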
$P(x)$ is the nearest point of the subspace $V$ to the vector $x$). (Terminology: $P$ is called "the orthogonal projection onto $V$.")

The fact that $C(A^T) = (N(A))^\perp$ for any $m \times n$ matrix, and the consequence that $\operatorname{row\ rank}(A) = \operatorname{rank}(A) = \operatorname{rank}(A^T)$.

Affine spaces $x_0 + V$ in $\mathbb{R}^n$ (where $V$ is a subspace) and the fact that the nearest point of $x_0 + V$ to $0$ is given by $x_0 - P(x_0)$, where $P$ is the orthogonal projection onto $V$.

The Main Theorem of Inhomogeneous Systems: if $A$ is $m \times n$ and $y \in \mathbb{R}^m$ is given, then (i) if $Ax = y$ has at least one solution $x_0$, then the whole solution set is precisely the affine space $x_0 + N(A)$; (ii) $Ax = y$ has a solution $\iff y \in C(A) \iff y \in (N(A^T))^\perp$.

Permutations and the definition of even/odd permutations; the inverse permutation of a given permutation and the fact that a permutation and its inverse have the same parity.

Definition of the determinant of an $n \times n$ matrix (in terms of the function $D$ and the formula for $\det A$ as a sum of $n!$ terms, each $\pm$ a product of $n$ factors, each factor taken from a distinct row and column of $A$); the properties that (a) $\det(A)$ is linear in each row, (b) $\det(\tilde{A}) = -\det(A)$ if $\tilde{A}$ is obtained by interchanging two distinct rows of $A$, and (c) $\det(A) = \det(A^T)$. Computation of $\det A$ by elementary row operations, and the fact that $\det A \ne 0 \iff \operatorname{rref} A = I \iff \operatorname{rref} A$ has no zero rows. The formulae for the expansion of $\det A$ along the $i$'th row and $j$'th column of $A$, and the corresponding formulae $\sum_{j=1}^n (-1)^{i+j} a_{kj} \det(A_{ij}) = \det A\,\delta_{ik}$ for each $i, k = 1, \dots, n$, and $\sum_{i=1}^n (-1)^{i+j} a_{ik} \det(A_{ij}) = \det A\,\delta_{jk}$ for each $j, k = 1, \dots, n$, where $\delta_{ij}$ is the $i,j$'th entry of the identity matrix (i.e. $1$ if $i = j$ and $0$ if $i \ne j$). The formula $A^{-1} = (\det A)^{-1}\bigl((-1)^{i+j}\det(A_{ji})\bigr)$ if $\det(A) \ne 0$. Computation of $A^{-1}$ via elementary row operations.

For an $n \times n$ matrix $A$: $A^{-1}$ exists $\iff \det A \ne 0 \iff N(A) = \{0\} \iff \operatorname{rank}(A) = n \iff \operatorname{rref}(A) = I \iff$ the map $x \mapsto Ax$ is 1:1 $\iff$ the map $x \mapsto Ax$ is onto.
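The row-expansion formula for the determinant can be sketched directly as code. This is only an illustration (exponential-time, unlike the row-operation method the course emphasizes); the function name `det_expand` and the test matrix are my own.

```python
import numpy as np

def det_expand(A, i=0):
    """Determinant by cofactor expansion along row i (0-indexed).

    Sketch of the Laplace expansion: det A = sum_j (-1)^(i+j) a_ij det(A_ij),
    where A_ij is A with row i and column j deleted.  Exponential time,
    for illustration only.
    """
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        total += (-1) ** (i + j) * A[i, j] * det_expand(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# Expansion along any row gives the same answer, which here is 8,
# matching the elementary-row-operation computation.
assert np.isclose(det_expand(A, 0), det_expand(A, 2))
assert np.isclose(det_expand(A), 8.0)
assert np.isclose(det_expand(A), np.linalg.det(A))
```

Since $\det A = 8 \ne 0$, this $A$ also illustrates the chain of equivalences above: it is invertible, $N(A) = \{0\}$, and $\operatorname{rref}(A) = I$.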
The formula $(AB)^{-1} = B^{-1}A^{-1}$ if $A^{-1}$, $B^{-1}$ exist and $B$ is $n \times n$.

Gram-Schmidt orthogonalization and the existence of an orthonormal basis for each non-trivial subspace $V$ of $\mathbb{R}^n$; the explicit formula for the orthogonal projection $P$ onto $V$: $P(x) = \sum_{j=1}^k (x \cdot w_j)\,w_j$, where $w_1, \dots, w_k$ is any orthonormal basis for the non-trivial $k$-dimensional subspace $V$, and the formula (matrix of $P$) $= W W^T$, where $W$ is the $n \times k$ matrix with $j$'th column $= w_j$.

Definition of eigenvalues/eigenvectors of an $n \times n$ matrix.

The Spectral Theorem: if $A$ is a symmetric matrix, then there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$, and if $Q$ is the matrix with columns given by such an orthonormal basis, then $Q$ is orthogonal (i.e. $Q^T Q = I$) and $Q^T A Q$ is a diagonal matrix with the eigenvalues of $A$ along the leading diagonal.

Mathematics Department, Stanford University

Summary of Math 51H Multivariable Calculus/Real Analysis Material

The following is a brief summary of the main results covered in the multivariable calculus and real analysis part of 51H; you should of course know all these results and their proofs and be able to apply them in the manner required, e.g. in the homework assignments.

Open and closed sets in $\mathbb{R}^n$. Theorem that a set $C$ is closed if and only if its complement $\mathbb{R}^n \setminus C$ is open. (Equivalently, since $\mathbb{R}^n \setminus (\mathbb{R}^n \setminus C) = C$, a set $U$ is open if and only if its complement $\mathbb{R}^n \setminus U$ is closed.) Bolzano-Weierstrass Theorem for bounded sequences in $\mathbb{R}^n$. Theorem that a continuous real-valued function on a compact set attains both its maximum and minimum values.

Definition of differentiability, and the fact that differentiability of $f$ implies all partial and all directional derivatives exist, with $D_v f(a) = \sum_{j=1}^n v_j D_j f(a)$ if $f$ is differentiable at $a$.

The chain rule for the composite of differentiable functions.
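The Gram-Schmidt procedure and the projection formula $P = WW^T$ lend themselves to a short computational sketch. The code below is an illustration only (the function name and the two spanning vectors are my own choices); it orthonormalizes two l.i. vectors and checks that both forms of the projection agree.

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt: turn a l.i. list of vectors into an orthonormal list.

    Sketch of the procedure: subtract from each vector its projection onto
    the span of the previously produced vectors, then normalize.
    """
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

# Two l.i. vectors spanning a 2-dimensional subspace V of R^3 (my example).
w = gram_schmidt([np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])])
W = np.column_stack(w)                  # n x k matrix with j'th column w_j

assert np.allclose(W.T @ W, np.eye(2))  # the w_j are orthonormal

# Matrix of the orthogonal projection onto V is W W^T, and the explicit
# formula P(x) = sum_j (x . w_j) w_j agrees with it.
P = W @ W.T
x = np.array([2.0, -1.0, 3.0])
assert np.allclose(P @ x, sum((x @ wj) * wj for wj in w))
assert np.allclose(P, P.T)              # P is symmetric
assert np.allclose(P @ P, P)            # P is a projection (P fixes V)
```

Note that $W^TW = I_k$ ($k \times k$) while $WW^T$ is the $n \times n$ projection matrix; the two products are different objects.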
Theorem that differentiability at a point $a$ implies continuity at $a$.

Theorem that $f$ of class $C^1$ on $U$ implies $f$ is differentiable at each point of $U$, and $f$ of class $C^2$ on $U$ implies $D_i D_j f = D_j D_i f$ at each point of $U$. The gradient $\nabla f$ of a real-valued $C^1$ function $f$ and the fact that the gradient gives the direction of fastest increase of $f$ at points where $\nabla f \ne 0$.

Quadratic forms $Q(\xi)$ on $\mathbb{R}^n$ and the definition of positive definite and negative definite. The fact that $Q$ positive definite implies that there is an $m > 0$ such that $Q(\xi) \ge m\|\xi\|^2$ for all $\xi \in \mathbb{R}^n$.

For a $C^2$ function on the ball $B_\rho(x_0)$, the second derivative identity $f(x) = f(x_0) + (x - x_0) \cdot \nabla f(x_0) + \frac{1}{2} Q_{x_0}(x - x_0) + E(x)$, with $\lim_{x \to x_0} \|x - x_0\|^{-2} E(x) = 0$, where $Q_{x_0}(\xi)$ is the Hessian quadratic form.