ISU EE 524 - LINEAR ALGEBRA
Appendix A in Stoica & Moses; Ch. 2.3 in Hayes. Here, we follow Ch. 2.3 in Hayes.
(EE 524, Fall 2004, #6)

2.3.1 Vectors

Let a signal be represented by the scalar values x_1, x_2, ..., x_N. Then the vector notation is

    x = [x_1, x_2, ..., x_N]^T    (a column vector).

Vector transpose:

    x^T = [x_1, x_2, ..., x_N].

Hermitian transpose:

    x^H = (x^T)^* = [x_1^*, x_2^*, ..., x_N^*].

Sometimes it is convenient to consider sets of vectors, for example

    x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T.

Note: Stoica and Moses use "*" to denote the Hermitian transpose.

Magnitude of a vector? The vector Euclidean norm:

    ||x|| = ( sum_{i=1}^{N} |x_i|^2 )^{1/2} = sqrt(x^H x).

Scalar (inner) product of two complex vectors a = [a_1, ..., a_N]^T and b = [b_1, ..., b_N]^T:

    a^H b = sum_{i=1}^{N} a_i^* b_i.

Cauchy-Schwarz inequality:

    |a^H b| ≤ ||a|| · ||b||,

where equality holds iff a and b are collinear (i.e., a = αb for some scalar α).

Orthogonal vectors:

    a^H b = b^H a = 0.

Example: Consider the output of a linear time-invariant (LTI) system (filter):

    y(n) = sum_{k=0}^{N-1} h(k) x(n-k) = h^T x(n),

where

    h = [h(0), h(1), ..., h(N-1)]^T,    x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T.

2.3.2 Linear Independence, Vector Spaces, and Basis Vectors

A set of vectors x_1, x_2, ..., x_n is said to be linearly independent if

    α_1 x_1 + α_2 x_2 + ... + α_n x_n = 0    (1)

implies that α_i = 0 for all i. If a set of nonzero α_i can be found so that (1) holds, the vectors are linearly dependent. For example, for nonzero α_1,

    x_1 = β_2 x_2 + ... + β_n x_n.

Example 2.3.2:

    x_1 = [1, 2, 1]^T,  x_2 = [1, 0, 1]^T                          linearly independent;
    x_1 = [1, 2, 1]^T,  x_2 = [1, 0, 1]^T,  x_3 = [0, 1, 0]^T      linearly dependent,

because x_1 = x_2 + 2 x_3.

Given N vectors v_1, v_2, ..., v_N, consider the set V of all vectors that may be formed as a linear combination of the v_i:

    v = sum_{i=1}^{N} α_i v_i.

This set forms a vector space, and the vectors v_i are said to span the space V. If the v_i are linearly independent, they are said to form a basis for V, and the number of basis vectors N is referred to as the dimension of the space.
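The norm, inner product, Cauchy-Schwarz inequality, and the filter-output identity y(n) = h^T x(n) can be illustrated numerically. A minimal sketch using NumPy; the vectors, filter taps, and signal samples below are made-up test data, not values from the notes:

```python
import numpy as np

# Made-up complex test vectors (assumptions for illustration only)
a = np.array([1 + 2j, 3 - 1j, 0.5j])
b = np.array([2 - 1j, 1j, 4.0 + 0j])

# Euclidean norm ||a|| = sqrt(a^H a)
norm_a = np.sqrt((np.conj(a) @ a).real)
print(np.isclose(norm_a, np.linalg.norm(a)))   # True

# Inner product a^H b and the Cauchy-Schwarz inequality
ip = np.conj(a) @ b
print(abs(ip) <= np.linalg.norm(a) * np.linalg.norm(b))   # True

# Equality in Cauchy-Schwarz iff the vectors are collinear: b_col = alpha * a
b_col = (2 - 3j) * a
print(np.isclose(abs(np.conj(a) @ b_col),
                 np.linalg.norm(a) * np.linalg.norm(b_col)))   # True

# LTI filter output: y(n) = sum_k h(k) x(n-k) = h^T x(n),
# where x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
h = np.array([1.0, 2.0, 0.5])               # filter taps h(0), h(1), h(2)
x = np.array([3.0, 4.0, 5.0, 6.0, 7.0])     # signal samples x(0), ..., x(4)
n = 3
x_vec = x[n - len(h) + 1 : n + 1][::-1]     # [x(n), x(n-1), x(n-2)]
print(np.isclose(h @ x_vec, np.convolve(x, h)[n]))   # True
```

The last check compares the inner-product form h^T x(n) against NumPy's convolution of the full sequences, which computes the same sum sum_k h(k) x(n-k).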
The basis for a vector space is not unique!

2.3.3 Matrices

An n × m matrix:

    A = {a_ij} = [ a_11  a_12  a_13  ...  a_1m ]
                 [ a_21  a_22  a_23  ...  a_2m ]
                 [ a_31  a_32  a_33  ...  a_3m ]
                 [  ...   ...   ...  ...   ... ]
                 [ a_n1  a_n2  a_n3  ...  a_nm ].

If n = m, then A is a square matrix.

Symmetric matrix: A^T = A.
Hermitian matrix: A^H = A.

Some properties [they apply to the transpose ^T as well]:

    (A + B)^H = A^H + B^H,    (A^H)^H = A,    (AB)^H = B^H A^H.

Column and row representations of an n × m matrix:

    A = [c_1, c_2, ..., c_m] = [r_1^H; r_2^H; ...; r_n^H]    (2)

2.3.4 Matrix Inverse

Rank discussion: the rank of A ≡ the number of linearly independent columns in (2) ≡ the number of linearly independent rows in (2) [i.e., the number of linearly independent vectors in {r_1, r_2, ..., r_n}].

Important property:

    rank(A) = rank(AA^H) = rank(A^H A).

For any n × m matrix A: rank(A) ≤ min{n, m}. If rank(A) = min{n, m}, A is said to be of full rank.

If A is a square matrix of full rank, then there exists a unique matrix A^{-1}, called the inverse of A:

    A^{-1} A = A A^{-1} = I,

where the identity matrix I has ones on the diagonal and zeros elsewhere.

If a square matrix A (of size n × n) is not of full rank [i.e., rank(A) < n], then it is said to be singular.

Properties:

    (AB)^{-1} = B^{-1} A^{-1},    (A^H)^{-1} = (A^{-1})^H.

Useful Matrix Inversion Identities

Matrix Inversion Lemma:

    (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}

for arbitrary square nonsingular A and C. Consider the special case where C is a scalar, B = b is a column vector, and D = d^H is a row vector.
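The rank identity, the inverse product rule, and the Matrix Inversion Lemma above can all be checked numerically. A minimal sketch with NumPy, using random complex matrices as assumed test data (a random Gaussian matrix is nonsingular with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
cplx = lambda shape: rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# rank(A) = rank(A A^H) = rank(A^H A), and rank(A) <= min{n, m}
A = cplx((5, 3))
r = np.linalg.matrix_rank(A)
assert r == np.linalg.matrix_rank(A @ A.conj().T)
assert r == np.linalg.matrix_rank(A.conj().T @ A)
assert r <= min(A.shape)

# (AB)^{-1} = B^{-1} A^{-1} for square nonsingular A, B
A, B = cplx((4, 4)), cplx((4, 4))
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# Matrix Inversion Lemma:
# (A + B C D)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
C = cplx((2, 2))
Bm, D = cplx((4, 2)), cplx((2, 4))
Ai, Ci = np.linalg.inv(A), np.linalg.inv(C)
lhs = np.linalg.inv(A + Bm @ C @ D)
rhs = Ai - Ai @ Bm @ np.linalg.inv(Ci + D @ Ai @ Bm) @ D @ Ai
assert np.allclose(lhs, rhs)
```

The lemma is useful in adaptive filtering because it lets a low-rank update of A be inverted by working only with the small (here 2 × 2) inner matrix.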
For C = 1, we obtain the Woodbury identity:

    (A + b d^H)^{-1} = A^{-1} - (A^{-1} b d^H A^{-1}) / (1 + d^H A^{-1} b).

2.3.5 The Determinant and the Trace

The determinant of an n × n matrix A (for any i):

    det(A) = sum_{j=1}^{n} (-1)^{i+j} a_ij det(A_ij),

where A_ij is the (n-1) × (n-1) matrix formed by deleting the ith row and the jth column of A.

Example: for

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ],

    det(A) = a_11 det[ a_22  a_23 ] - a_12 det[ a_21  a_23 ] + a_13 det[ a_21  a_22 ]
                     [ a_32  a_33 ]          [ a_31  a_33 ]          [ a_31  a_32 ].

Property: an n × n matrix A is invertible (nonsingular) iff its determinant is nonzero,

    det(A) ≠ 0.

Properties of the determinant (for n × n matrices A and B):

    det(AB) = det(A) det(B),    det(αA) = α^n det(A),
    det(A^{-1}) = 1 / det(A),   det(A^T) = det(A).

Another important function of a matrix is the trace:

    tr(A) = sum_{i=1}^{n} a_ii.

2.3.6 Linear Equations

Many practical DSP problems [e.g., signal modeling, Wiener filtering, etc.] require solving a set of linear equations:

    a_11 x_1 + a_12 x_2 + ... + a_1m x_m = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2m x_m = b_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = b_n

In matrix notation:

    Ax = b,    (3)

where
• A is an n × m matrix with entries a_ij,
• x is an m-dimensional vector of x_i's, and
• b is an n-dimensional vector of b_i's.

For A = [a_1, a_2, ..., a_m], we can view (3) as an expansion of b:

    b = sum_{i=1}^{m} x_i a_i.

Square matrix A (m = n): the nature of the solution depends upon whether or not A is singular.

Nonsingular case: x = A^{-1} b.

Singular case: no solution or many solutions. Example:

    x_1 + x_2 = 1,  x_1 + x_2 = 2    — no solution;
    x_1 + x_2 = 1,  x_1 + x_2 = 1    — many solutions.

Rectangular matrix A (m < n): more equations than unknowns and, in general, no solution exists. The system is called overdetermined. When A is a full-rank matrix and, therefore, A^H A is nonsingular, a common approach is to find the least-squares solution, i.e.,
the vector x that minimizes the Euclidean norm of the error vector:

    ||e||^2 = ||b - Ax||^2
            = (b - Ax)^H (b - Ax)
            = b^H b - x^H A^H b - b^H A x + x^H A^H A x
            = [x - (A^H A)^{-1} A^H b]^H A^H A [x - (A^H A)^{-1} A^H b]
              + [b^H b - b^H A (A^H A)^{-1} A^H b].

The second term is independent of x. Therefore, the LS solution is

    x_LS = (A^H A)^{-1} A^H b.

The best (LS) approximation b̂ of b is given by

    b̂ = A x_LS = A (A^H A)^{-1} A^H b = P_A b,

where

    P_A = A (A^H A)^{-1} A^H

is called the projection matrix, with properties:

    P_A a = a    if the vector a belongs to the column space of A, and
    P_A a = 0    if a is orthogonal to the column space of A.

The minimum LS error:

    min ||e||^2 = ||b - A x_LS||^2
                = ||(I - A (A^H A)^{-1} A^H) b||^2
                = ||(I - P_A) b||^2
                = ||P_A^⊥ b||^2
                = b^H P_A^⊥ b,

where P_A^⊥ = I - P_A is the projection matrix onto the subspace orthogonal to the column space of A.

Alternatively, the LS solution is found from the normal equations:

    A^H A x = A^H b,

which follow from the orthogonality principle:

    A^H e = 0.

[Figure: illustration of the LS solution.]
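The least-squares solution, the normal equations, and the projection-matrix properties can be verified numerically. A sketch with NumPy on a random overdetermined system (the matrix sizes and data are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Overdetermined system: n = 6 equations, m = 3 unknowns (m < n)
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
b = rng.standard_normal(6) + 1j * rng.standard_normal(6)
AH = A.conj().T

# LS solution x_LS = (A^H A)^{-1} A^H b, computed via the normal equations
x_ls = np.linalg.solve(AH @ A, AH @ b)
# Agrees with NumPy's library least-squares solver
assert np.allclose(x_ls, np.linalg.lstsq(A, b, rcond=None)[0])

# Orthogonality principle: A^H e = 0 for the error e = b - A x_LS
e = b - A @ x_ls
assert np.allclose(AH @ e, 0)

# Projection matrix P_A = A (A^H A)^{-1} A^H
P = A @ np.linalg.solve(AH @ A, AH)
assert np.allclose(P @ P, P)    # projections are idempotent
assert np.allclose(P @ A, A)    # P_A fixes the column space of A

# Minimum LS error: ||e||^2 = b^H P_A_perp b with P_A_perp = I - P_A
Pp = np.eye(6) - P
assert np.isclose(np.linalg.norm(e) ** 2, (b.conj() @ Pp @ b).real)
```

Solving the normal equations directly, as here, matches the derivation above; in finite precision a QR- or SVD-based solver such as `np.linalg.lstsq` is generally preferred because forming A^H A squares the condition number.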