
Math 110 - Fall 05 - Lecture notes # 11 - Sep 23 (Friday)

The next goal is to make explicit the connection between matrices, familiar from Math 54, and linear transformations T: V -> W between finite dimensional vector spaces. They are not quite the same, because the matrix that represents T depends on the bases you choose to span V and W, and on the order of these bases:

Def: Let V be a finite dimensional vector space. An ordered basis of V is a basis for V with an order: {v_1,...,v_n}, where n = dim(V).

Ex: Let e_i = the i-th standard basis vector (1 in the i-th entry, 0 elsewhere). Then the bases {e_1,e_2,e_3} and {e_2,e_3,e_1} are the same (order in a set does not matter), but the ordered bases {e_1,e_2,e_3} and {e_2,e_3,e_1} are different.

Def: For V = F^n, {e_1,...,e_n} is the standard ordered basis. For P_n(F), {1,x,x^2,...,x^n} is the standard ordered basis.

Given ordered bases for V and W, we can express vectors in V and W, and linear transformations T: V -> W, as vectors and matrices with respect to these ordered bases:

Def: Let beta = {v_1,...,v_n} be an ordered basis for V. For any x in V, let x = sum_{i=1 to n} a_i*v_i be the unique linear combination representing x. The coordinate vector of x relative to beta, denoted [x]_beta, is

    [x]_beta = [ a_1 ]
               [ a_2 ]
               [ ... ]
               [ a_n ]

ASK & WAIT: What is [v_i]_beta?
ASK & WAIT: Let V = P_5(F), beta = {1,x,x^2,x^3,x^4,x^5}, and v = 3-6x+x^3. What is [v]_beta?
ASK & WAIT: If beta = {x^5, x^4, x^3, x^2, x, 1}?

Lemma: The mapping Beta: V -> F^n that maps x to [x]_beta is linear.

Proof: If x = sum_i a_i*v_i and y = sum_i b_i*v_i, then [x]_beta = [a_1;...;a_n] and [y]_beta = [b_1;...;b_n], and

    [x+y]_beta = [sum_i (a_i+b_i)*v_i]_beta             ... by def of x+y
               = [a_1+b_1 ; ... ; a_n+b_n]              ... by def of []_beta
               = [a_1 ; ... ; a_n] + [b_1 ; ... ; b_n]
               = [x]_beta + [y]_beta                    ... by def of []_beta

Similarly, [c*x]_beta = c*[x]_beta.

We need this representation of vectors in V as coordinate vectors (columns of scalars) in order to apply T: V -> W as multiplication by a matrix.
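As a quick numerical check of the coordinate-vector definition (and of the ASK & WAIT questions above), here is a small sketch in Python with NumPy; the code and the helper name `coords` are illustrations added to these notes, not part of the original lecture.

```python
import numpy as np

# beta = {1, x, x^2, x^3, x^4, x^5}: the polynomial a_0 + a_1*x + ... + a_5*x^5
# has coordinate vector [a_0; a_1; ...; a_5] relative to beta.
def coords(poly_coeffs, n=6):
    """Coordinate vector in P_5 relative to the standard ordered basis."""
    v = np.zeros(n)
    v[:len(poly_coeffs)] = poly_coeffs
    return v

v = coords([3, -6, 0, 1])   # v = 3 - 6x + x^3
print(v)                    # [ 3. -6.  0.  1.  0.  0.]

# Reversing the ordered basis to {x^5, x^4, ..., x, 1} reverses the coordinates:
print(v[::-1])              # [ 0.  0.  1.  0. -6.  3.]

# The coordinate map is linear: [x+y]_beta = [x]_beta + [y]_beta, [c*x]_beta = c*[x]_beta
w = coords([0, 2, 5])       # w = 2x + 5x^2
assert np.allclose(coords([3, -4, 5, 1]), v + w)   # v + w = 3 - 4x + 5x^2 + x^3
assert np.allclose(coords([6, -12, 0, 2]), 2 * v)  # 2*v = 6 - 12x + 2x^3
```

The same ordered basis always produces the same coordinates, which is why the order matters: the basis {x^5,...,1} is the same set, but gives a different coordinate vector.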
We will also need to represent vectors in W the same way.

Let beta = {v_1,...,v_n} and gamma = {w_1,...,w_m} be ordered bases of V and W, resp. Let T: V -> W be linear. Then there are unique scalars a_{ij} such that

    T(v_j) = sum_{i=1 to m} a_{ij}*w_i

These scalars will be the entries of the matrix representing T:

Def: Let T: V -> W be linear, with V and W finite dimensional. Using the above notation, the m x n matrix A with entries a_{ij} is the matrix representation of T in the ordered bases beta and gamma. We write A = [T]_beta^gamma. If V = W and beta = gamma, we write simply A = [T]_beta.

Note that column j of A is [a_{1j};...;a_{mj}] = [T(v_j)]_gamma.

To see why we call A the matrix representation of T, let us use it to compute y = T(x). Suppose x = sum_{j=1 to n} x_j*v_j, so [x]_beta = [x_1;...;x_n] is the coordinate vector for x. We claim the coordinate vector for y is gotten just by multiplying by A:

    [y]_gamma = A * [x]_beta

To confirm this we compute:

    y = T(x) = T(sum_{j=1 to n} x_j*v_j)                          ... by def of x
             = sum_{j=1 to n} x_j*T(v_j)                          ... since T is linear
             = sum_{j=1 to n} x_j*(sum_{i=1 to m} a_{ij}*w_i)     ... by def of T(v_j)
             = sum_{j=1 to n} sum_{i=1 to m} a_{ij}*x_j*w_i       ... move x_j into sum
             = sum_{i=1 to m} sum_{j=1 to n} a_{ij}*x_j*w_i       ... reverse order of sums
             = sum_{i=1 to m} w_i * (sum_{j=1 to n} a_{ij}*x_j)   ... pull w_i out of inner sum

so

    [y]_gamma = [ sum_{j=1 to n} a_{1j}*x_j ]   = A * [ x_1 ] = A*[x]_beta
                [ sum_{j=1 to n} a_{2j}*x_j ]         [ x_2 ]   as desired
                [ ...                       ]         [ ... ]
                [ sum_{j=1 to n} a_{mj}*x_j ]         [ x_n ]

Ex: T: R^2 -> R^4, T((x,y)) = (x-y, 3*x+2*y, -2*x, 7*y), with beta = standard basis for R^2 and gamma = standard basis for R^4. So T((1,0)) = (1;3;-2;0) and T((0,1)) = (-1;2;0;7), so

    A = [ 1 -1 ]   (for brevity in these notes, we will sometimes use
        [ 3  2 ]    "Matlab notation": A = [ 1 -1 ; 3 2 ; -2 0 ; 0 7 ])
        [-2  0 ]
        [ 0  7 ]

ASK & WAIT: What if beta = {e2, e1} and gamma = {e3, e4, e1, e2}?

Ex (continued): Suppose x = 3*e1 - e2; what is T(x)? What is [T(x)]_gamma, using standard bases?

    T(x) = T(3,-1) = (4,7,-6,-7)
    [T(x)]_gamma = A * [3;-1] = [4;7;-6;-7]

Ex: T: P_3(R) -> P_2(R), T(f(x)) = f'(x), beta = {1, 1+x, x^2, x^3}, gamma = {2, x, x^2}. Then T(1) = 0, T(1+x) = 1 = (1/2)*2, T(x^2) = 2*x, T(x^3) = 3*x^2. So

    T = [ 0 1/2 0 0 ]
        [ 0  0  2 0 ]
        [ 0  0  0 3 ]

ASK & WAIT: What is T if beta = {1, x, x^2, x^3}? If gamma = {1, x, x^2}?

Having identified matrices with linear transformations between two finite dimensional spaces with ordered bases, and recalling that m x n matrices form a vector space, we will not be surprised that the set of all linear transformations between any two vector spaces is also a vector space:

Def: Let T and U be linear transformations from V -> W. Then we define the new function T+U: V -> W by (T+U)(v) = T(v)+U(v), and the new function c*T: V -> W by (c*T)(v) = c*T(v).

Thm: Using this notation, we have that
(1) For all scalars c, c*T+U is a linear transformation.
(2) The set of all linear transformations from V -> W is itself a vector space, using the above definitions of addition and multiplication by scalars.

Proof:
(1) (c*T+U)(sum_i a_i*v_i)
      = (c*T)(sum_i a_i*v_i) + U(sum_i a_i*v_i)   ... by def of c*T+U
      = c*(T(sum_i a_i*v_i)) + U(sum_i a_i*v_i)   ... by def of c*T
      = c*(sum_i a_i*T(v_i)) + sum_i a_i*U(v_i)   ... since T, U linear
      = sum_i a_i*c*T(v_i) + sum_i a_i*U(v_i)
      = sum_i a_i*(c*T(v_i)+U(v_i))
      = sum_i a_i*(c*T+U)(v_i)                    ... by def of c*T+U
(2) We let T_0, defined by T_0(v) = 0_W for all v, be the "zero vector" in L(V,W). It is easy to see that all the axioms of a vector space are satisfied.
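Both example matrices above can be rebuilt column by column as [T(v_j)]_gamma. Here is a sketch in Python with NumPy (added here for illustration, not part of the original notes); the only step beyond the text is that, for a non-standard gamma, finding [.]_gamma means solving a small linear system.

```python
import numpy as np

# Ex 1: T: R^2 -> R^4, T(x, y) = (x - y, 3x + 2y, -2x, 7y), standard bases.
def T(v):
    x, y = v
    return np.array([x - y, 3*x + 2*y, -2*x, 7*y])

# Column j of A is [T(v_j)]_gamma; with standard bases, [.]_gamma does nothing.
A = np.column_stack([T(e) for e in np.eye(2)])
print(A)                       # rows: [1 -1], [3 2], [-2 0], [0 7]

x_beta = np.array([3, -1])     # x = 3*e1 - e2
print(A @ x_beta)              # [ 4  7 -6 -7] = [T(x)]_gamma

# Ex 2: derivative map P_3(R) -> P_2(R), beta = {1, 1+x, x^2, x^3},
# gamma = {2, x, x^2}.  Polynomials are held as monomial coefficients.
G = np.column_stack([[2, 0, 0],    # gamma basis vectors, written in
                     [0, 1, 0],    # the monomial basis {1, x, x^2}
                     [0, 0, 1]])
derivs = [np.array([0, 0, 0]),     # d/dx 1     = 0
          np.array([1, 0, 0]),     # d/dx (1+x) = 1
          np.array([0, 2, 0]),     # d/dx x^2   = 2x
          np.array([0, 0, 3])]     # d/dx x^3   = 3x^2
# [T(v_j)]_gamma solves G * coords = monomial coefficients of T(v_j)
M = np.column_stack([np.linalg.solve(G, d) for d in derivs])
print(M)                       # rows: [0 0.5 0 0], [0 0 2 0], [0 0 0 3]
```

Note how the 1/2 in the first example column for T(1+x) = 1 = (1/2)*2 falls out of the solve against the basis vector 2.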
(homework!)

Def: L(V,W) is the vector space of all linear transformations from V -> W. If V = W, we write L(V) for short.

Given ordered bases for finite dimensional V and W, we get a matrix [T]_beta^gamma for every T in L(V,W). It is natural to expect that the operation of adding vectors in L(V,W) (adding linear transformations) should be the same as adding their matrices, and that multiplying a vector in L(V,W) by a scalar should be the same as multiplying its matrix by a scalar:

Thm: Let V and W be finite dimensional vector spaces with ordered bases beta and gamma, resp. Let T and U be in L(V,W). Then
(1) [T+U]_beta^gamma = [T]_beta^gamma + [U]_beta^gamma
(2) [c*T]_beta^gamma = c*[T]_beta^gamma
In other words, the function []_beta^gamma: L(V,W) -> M_{m x n}(F) is a linear transformation.

Proof: (1) We compute column j of the matrices on both sides and confirm they are the same. Let beta = {v_1,...,v_n} and gamma = {w_1,...,w_m}. Then (T+U)(v_j) = T(v_j) + U(v_j), so

    [(T+U)(v_j)]_gamma = [T(v_j)]_gamma + [U(v_j)]_gamma

by the above Lemma, which shows that the mapping x -> [x]_gamma is linear.
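The theorem can be checked numerically: building the matrix of T+U column by column gives the same result as adding the two matrices. A sketch in Python with NumPy (the particular maps T and U are made up for illustration; they are not from the notes), using standard bases so that [.]_gamma is the identity:

```python
import numpy as np

# Two linear maps R^2 -> R^3, with beta, gamma the standard ordered bases.
def T(v):
    x, y = v
    return np.array([x + y, x - y, 2*x])

def U(v):
    x, y = v
    return np.array([y, 3*x, x + 4*y])

def matrix_of(L):
    """Matrix representation w.r.t. standard ordered bases:
    column j is [L(e_j)]_gamma = L(e_j)."""
    return np.column_stack([L(e) for e in np.eye(2)])

A_T, A_U = matrix_of(T), matrix_of(U)
A_sum = matrix_of(lambda v: T(v) + U(v))   # matrix of the map T + U
A_scaled = matrix_of(lambda v: 5 * T(v))   # matrix of the map 5*T

assert np.allclose(A_sum, A_T + A_U)       # [T+U] = [T] + [U]
assert np.allclose(A_scaled, 5 * A_T)      # [5*T] = 5*[T]
```

This is exactly the column-by-column argument of the proof: column j of [T+U]_beta^gamma is [(T+U)(v_j)]_gamma = [T(v_j)]_gamma + [U(v_j)]_gamma.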

