MATH 304 Linear Algebra
Lecture 10: Linear independence. Basis of a vector space.

Linear independence

Definition. Let $V$ be a vector space. Vectors $v_1, v_2, \dots, v_k \in V$ are called linearly dependent if they satisfy a relation
$$r_1 v_1 + r_2 v_2 + \cdots + r_k v_k = 0,$$
where the coefficients $r_1, \dots, r_k \in \mathbb{R}$ are not all equal to zero. Otherwise the vectors $v_1, v_2, \dots, v_k$ are called linearly independent. That is,
$$r_1 v_1 + r_2 v_2 + \cdots + r_k v_k = 0 \implies r_1 = \cdots = r_k = 0.$$

An infinite set $S \subset V$ is linearly dependent if it contains some linearly dependent vectors $v_1, \dots, v_k \in S$. Otherwise $S$ is linearly independent.

Examples of linear independence

• Vectors $e_1 = (1,0,0)$, $e_2 = (0,1,0)$, and $e_3 = (0,0,1)$ in $\mathbb{R}^3$:
$$x e_1 + y e_2 + z e_3 = 0 \implies (x, y, z) = 0 \implies x = y = z = 0.$$

• Matrices $E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, $E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, and $E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$:
$$a E_{11} + b E_{12} + c E_{21} + d E_{22} = O \implies \begin{pmatrix} a & b \\ c & d \end{pmatrix} = O \implies a = b = c = d = 0.$$

• Polynomials $1, x, x^2, \dots, x^n$:
$$a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = 0 \text{ identically} \implies a_i = 0 \text{ for } 0 \le i \le n.$$

• The infinite set $\{1, x, x^2, \dots, x^n, \dots\}$.

• Polynomials $p_1(x) = 1$, $p_2(x) = x - 1$, and $p_3(x) = (x - 1)^2$:
$$a_1 p_1(x) + a_2 p_2(x) + a_3 p_3(x) = a_1 + a_2(x - 1) + a_3(x - 1)^2 = (a_1 - a_2 + a_3) + (a_2 - 2a_3)x + a_3 x^2.$$
Hence $a_1 p_1(x) + a_2 p_2(x) + a_3 p_3(x) = 0$ identically
$$\implies a_1 - a_2 + a_3 = a_2 - 2a_3 = a_3 = 0 \implies a_1 = a_2 = a_3 = 0.$$

Problem. Let $v_1 = (1, 2, 0)$, $v_2 = (3, 1, 1)$, and $v_3 = (4, -7, 3)$. Determine whether the vectors $v_1, v_2, v_3$ are linearly independent.

We have to check whether there exist $r_1, r_2, r_3 \in \mathbb{R}$, not all zero, such that $r_1 v_1 + r_2 v_2 + r_3 v_3 = 0$. This vector equation is equivalent to the system
$$\begin{cases} r_1 + 3r_2 + 4r_3 = 0, \\ 2r_1 + r_2 - 7r_3 = 0, \\ 0r_1 + r_2 + 3r_3 = 0, \end{cases} \qquad \left( \begin{array}{ccc|c} 1 & 3 & 4 & 0 \\ 2 & 1 & -7 & 0 \\ 0 & 1 & 3 & 0 \end{array} \right)$$
The vectors $v_1, v_2, v_3$ are linearly dependent if and only if the matrix $A = (v_1, v_2, v_3)$ is singular. We obtain that $\det A = 0$, hence the vectors $v_1, v_2, v_3$ are linearly dependent.

Theorem. The following conditions are equivalent:
(i) the vectors $v_1, \dots, v_k$ are linearly dependent;
(ii) one of the vectors $v_1, \dots, v_k$ is a linear combination of the other $k - 1$ vectors.

Proof: (i) $\implies$ (ii) Suppose that
$$r_1 v_1 + r_2 v_2 + \cdots + r_k v_k = 0,$$
where $r_i \ne 0$ for some $1 \le i \le k$. Then
$$v_i = -\tfrac{r_1}{r_i} v_1 - \cdots - \tfrac{r_{i-1}}{r_i} v_{i-1} - \tfrac{r_{i+1}}{r_i} v_{i+1} - \cdots - \tfrac{r_k}{r_i} v_k.$$
(ii) $\implies$ (i) Suppose that
$$v_i = s_1 v_1 + \cdots + s_{i-1} v_{i-1} + s_{i+1} v_{i+1} + \cdots + s_k v_k$$
for some scalars $s_j$. Then
$$s_1 v_1 + \cdots + s_{i-1} v_{i-1} - v_i + s_{i+1} v_{i+1} + \cdots + s_k v_k = 0.$$

Theorem. Vectors $v_1, v_2, \dots, v_m \in \mathbb{R}^n$ are linearly dependent whenever $m > n$ (i.e., whenever there are more vectors than coordinates).

Proof: Let $v_j = (a_{1j}, a_{2j}, \dots, a_{nj})$ for $j = 1, 2, \dots, m$. Then the vector equality $t_1 v_1 + t_2 v_2 + \cdots + t_m v_m = 0$ is equivalent to the system
$$\begin{cases} a_{11} t_1 + a_{12} t_2 + \cdots + a_{1m} t_m = 0, \\ a_{21} t_1 + a_{22} t_2 + \cdots + a_{2m} t_m = 0, \\ \qquad\qquad \cdots \\ a_{n1} t_1 + a_{n2} t_2 + \cdots + a_{nm} t_m = 0. \end{cases}$$
Note that the vectors $v_1, v_2, \dots, v_m$ are the columns of the matrix $(a_{ij})$. The number of leading entries in its row echelon form is at most $n$. If $m > n$, then there are free variables; therefore the zero solution is not unique.

Example. Consider vectors $v_1 = (1, -1, 1)$, $v_2 = (1, 0, 0)$, $v_3 = (1, 1, 1)$, and $v_4 = (1, 2, 4)$ in $\mathbb{R}^3$.

Two vectors are linearly dependent if and only if they are parallel. Hence $v_1$ and $v_2$ are linearly independent.

Vectors $v_1, v_2, v_3$ are linearly independent if and only if the matrix $A = (v_1, v_2, v_3)$ is invertible:
$$\det A = \begin{vmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 1 & 0 & 1 \end{vmatrix} = -\begin{vmatrix} -1 & 1 \\ 1 & 1 \end{vmatrix} = 2 \ne 0.$$
Therefore $v_1, v_2, v_3$ are linearly independent.

Four vectors in $\mathbb{R}^3$ are always linearly dependent. Thus $v_1, v_2, v_3, v_4$ are linearly dependent.
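The two determinant checks above lend themselves to numeric verification. The following is a minimal sketch (not part of the original lecture) using NumPy, with `np.linalg.matrix_rank` standing in for hand row reduction as an independence test:

```python
import numpy as np

# First problem: v1 = (1,2,0), v2 = (3,1,1), v3 = (4,-7,3) as matrix columns.
A = np.column_stack([(1, 2, 0), (3, 1, 1), (4, -7, 3)])
print(np.linalg.det(A))          # ~0, so A is singular: v1, v2, v3 are dependent
print(np.linalg.matrix_rank(A))  # 2 < 3 confirms the dependence

# Second example: v1 = (1,-1,1), v2 = (1,0,0), v3 = (1,1,1).
B = np.column_stack([(1, -1, 1), (1, 0, 0), (1, 1, 1)])
print(np.linalg.det(B))          # 2.0, so these three vectors are independent

# Appending v4 = (1,2,4) gives four vectors in R^3; the rank of a 3x4 matrix
# is at most 3 < 4, so its columns must be linearly dependent.
C = np.column_stack([(1, -1, 1), (1, 0, 0), (1, 1, 1), (1, 2, 4)])
print(np.linalg.matrix_rank(C))  # 3 < 4
```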
Problem. Show that the functions $e^x$, $e^{2x}$, and $e^{3x}$ are linearly independent in $C^\infty(\mathbb{R})$.

Suppose that $a e^x + b e^{2x} + c e^{3x} = 0$ for all $x \in \mathbb{R}$, where $a, b, c$ are constants. We have to show that $a = b = c = 0$. Differentiate this identity twice:
$$a e^x + b e^{2x} + c e^{3x} = 0,$$
$$a e^x + 2b e^{2x} + 3c e^{3x} = 0,$$
$$a e^x + 4b e^{2x} + 9c e^{3x} = 0.$$
It follows that $A(x)\mathbf{v} = 0$, where
$$A(x) = \begin{pmatrix} e^x & e^{2x} & e^{3x} \\ e^x & 2e^{2x} & 3e^{3x} \\ e^x & 4e^{2x} & 9e^{3x} \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}.$$

$$\det A(x) = e^x \begin{vmatrix} 1 & e^{2x} & e^{3x} \\ 1 & 2e^{2x} & 3e^{3x} \\ 1 & 4e^{2x} & 9e^{3x} \end{vmatrix} = e^x e^{2x} \begin{vmatrix} 1 & 1 & e^{3x} \\ 1 & 2 & 3e^{3x} \\ 1 & 4 & 9e^{3x} \end{vmatrix} = e^x e^{2x} e^{3x} \begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{vmatrix}$$
$$= e^{6x} \begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{vmatrix} = e^{6x} \begin{vmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 1 & 4 & 9 \end{vmatrix} = e^{6x} \begin{vmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 3 & 8 \end{vmatrix} = e^{6x} \begin{vmatrix} 1 & 2 \\ 3 & 8 \end{vmatrix} = 2e^{6x} \ne 0.$$

Since the matrix $A(x)$ is invertible, we obtain
$$A(x)\mathbf{v} = 0 \implies \mathbf{v} = 0 \implies a = b = c = 0.$$

Wronskian

Let $f_1, f_2, \dots, f_n$ be smooth functions on an interval $[a, b]$. The Wronskian $W[f_1, f_2, \dots, f_n]$ is a function on $[a, b]$ defined by
$$W[f_1, f_2, \dots, f_n](x) = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix}.$$

Theorem. If $W[f_1, f_2, \dots, f_n](x_0) \ne 0$ for some $x_0 \in [a, b]$, then the functions $f_1, f_2, \dots, f_n$ are linearly independent in $C[a, b]$.

Theorem 1. Let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct real numbers. Then the functions $e^{\lambda_1 x}, e^{\lambda_2 x}, \dots, e^{\lambda_k x}$ are linearly independent.

Theorem 2. The set of functions $\{x^m e^{\lambda x} \mid \lambda \in \mathbb{R},\ m = 0, 1, 2, \dots\}$ is linearly independent.

Spanning set

Let $S$ be a subset of a vector space $V$.

Definition. The span of the set $S$ is the smallest subspace $W \subset V$ that contains $S$. If $S$ is not empty, then $W = \mathrm{Span}(S)$ consists of all linear combinations
$$r_1 v_1 + r_2 v_2 + \cdots + r_k v_k$$
such that $v_1, \dots, v_k \in S$ and $r_1, \dots, r_k \in \mathbb{R}$.

We say that the set $S$ spans the subspace $W$, or that $S$ is a spanning set for $W$.

Remark. If $S_1$ is a spanning set for a vector space $V$ and $S_1 \subset S_2 \subset V$, then $S_2$ is also a spanning set for $V$.

Basis

Definition. Let $V$ be a vector space. A linearly independent spanning set for $V$ is called a basis.

Suppose that a set $S \subset V$ is a basis for $V$. "Spanning set" means that any vector $v \in V$ can be represented as a linear combination
$$v = r_1 v_1 + r_2 v_2 + \cdots + r_k v_k,$$
where $v_1, \dots, v_k$ are distinct vectors from $S$ and $r_1, \dots, r_k \in \mathbb{R}$. "Linearly independent" implies that this representation is unique:
$$v = r_1 v_1 + r_2 v_2 + \cdots + r_k v_k = r_1' v_1 + r_2' v_2 + \cdots + r_k' v_k$$
$$\implies (r_1 - r_1') v_1 + (r_2 - r_2') v_2 + \cdots + (r_k - r_k') v_k = 0$$
$$\implies r_1 - r_1' = r_2 - r_2' = \cdots = r_k - r_k' = 0.$$

Examples. • Standard basis for $\mathbb{R}^n$:
$e_1 = (1, 0, 0, \dots, 0, 0)$, $e_2 = (0, 1, 0, \dots, 0, 0)$, ..., $e_n = (0, 0, 0, \dots, 0, 1)$.
Indeed, $(x_1, x_2, \dots, x_n) = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n$.
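As an aside (not in the lecture), the Wronskian computation for $e^x$, $e^{2x}$, $e^{3x}$ can be reproduced symbolically. A minimal SymPy sketch, with the matrix built directly from the definition above:

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)]
n = len(fs)

# Row i holds the i-th derivatives of f1, ..., fn (row 0 is the functions themselves).
W = sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], x, i))
print(sp.simplify(W.det()))  # 2*exp(6*x), nonzero for every x
```

SymPy also provides a ready-made `sp.wronskian(fs, x)` that returns the same determinant.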
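To close with an illustration of the uniqueness argument (my addition, not from the lecture): finding the coefficients of $v$ in a basis of $\mathbb{R}^3$ amounts to solving $A\mathbf{r} = v$, where the basis vectors are the columns of $A$; since $A$ is invertible, the solution, and hence the representation, is unique. A sketch reusing $v_1, v_2, v_3$ from the earlier example and an arbitrary sample vector:

```python
import numpy as np

# Columns are the basis v1 = (1,-1,1), v2 = (1,0,0), v3 = (1,1,1) of R^3.
A = np.column_stack([(1, -1, 1), (1, 0, 0), (1, 1, 1)])
v = np.array([2.0, 3.0, 5.0])  # arbitrary sample vector (my choice)

r = np.linalg.solve(A, v)      # the unique coordinates of v in this basis
print(r)
print(A @ r)                   # reconstructs v: v = r[0]*v1 + r[1]*v2 + r[2]*v3
```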