UW-Madison ECE 734 - A Recursive Method for the Solution of the Linear Least Squares Formulation


ECE 734 VLSI Array Structures for Digital Signal Processing
Project Report

Title: A Recursive Method for the Solution of the Linear Least Squares Formulation – algorithm and performance using the PLX instruction set

Authors: Claus Benjaminsen, Shyam Bharat

Table of Contents

Introduction
Algorithm
Theoretical Analysis
Numerical Analysis in Matlab
Implementation Using the PLX instruction set
Implementation of different dimensional multiplications using the PLX instruction set
Problems with the PLX instruction set
Conclusion
The PLX assembly code implementation
References

Introduction

The least squares method of parameter estimation seeks to minimize the squared error between the
observed data sequence and the assumed signal model, which is some function of the parameter to be estimated. In particular, the linear least squares formulation represents the case where the signal model is a linear function of the parameter to be estimated. The salient feature of the least squares method is that no probabilistic assumptions are made about the data: only a signal model is assumed. This method is generally used in situations where a precise statistical characterization of the data is unknown.

The linear least squares estimation (LLSE) method ultimately boils down to solving a set of linear equations. As will be seen from the equations that follow, the solution to the LLSE involves the computation of the inverse of a matrix. This matrix happens to be the autocorrelation matrix of the assumed signal model matrix. The dimensions of this autocorrelation matrix are P x P, where P is the number of samples in the observation vector. The computation of the matrix inverse is straightforward for small values of P, but for large values of P it becomes increasingly computationally intensive. In digital signal processing (DSP) applications, the computation is generally required to be done in real time for a continuous succession of samples; in other words, for DSP applications one matrix inversion is required for every new input vector. Since it is very time consuming to calculate the inverse of a matrix, it is essential to find an alternate way of solving the system of linear equations to yield the unknown parameters. This problem is fairly well researched, and an alternate method exists: it involves updating a so-called weight vector in time, based on its value at the previous time instant.
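The direct (non-recursive) solution described above can be sketched as follows. This is a minimal illustration using NumPy; the function and variable names are ours, not from the report:

```python
import numpy as np

# A minimal sketch of the direct linear least squares solution:
# form the P x P autocorrelation matrix of the signal model and
# solve one linear system per observation vector. This is the
# per-sample cost the recursive method below avoids.

def direct_lls(A, d):
    """A: (N, P) signal model matrix; d: (N,) observed data.
    Returns the parameter vector w minimizing ||d - A w||^2."""
    R = A.T @ A                   # P x P autocorrelation matrix of the model
    r = A.T @ d                   # P x 1 cross-correlation with the data
    return np.linalg.solve(R, r)  # the costly step, repeated per new vector
```

For real-time DSP use, this solve would have to be repeated for every new input vector, which motivates the recursive reformulation that follows.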
The meaning of this statement will become clearer with the aid of the equations presented below.

The function to be minimized with respect to the parameter at the present instant is given by:

    J(w(n)) = \sum_{k=1}^{n} \lambda^{n-k} |e_n(k)|^2,    0 < \lambda < 1

where e_n(k) = d(k) - w^H(n) u(k).

In matrix form, this function can be written as:

    J(w(n)) = [d - u^H w(n)]^H \Lambda [d - u^H w(n)]

where \Lambda = diag\{\lambda^{n-1}, \lambda^{n-2}, \ldots, \lambda, 1\}.

The solution to this equation is given by:

    w(n) = (u \Lambda u^H)^{-1} u \Lambda d

or w(n) = \Phi^{-1}(n) \theta(n), where:

    \Phi(n) = \sum_{k=1}^{n} \lambda^{n-k} u(k) u^H(k)
    \theta(n) = \sum_{k=1}^{n} \lambda^{n-k} u(k) d^*(k)

As can be observed from the above equations, the path to the solution involves the computation of the inverse of a matrix for every value of n. The method of recursive least squares seeks to circumvent this computationally intensive step by instead calculating w(n) from w(n-1), its value at the previous time instant.

\Phi(n) and \theta(n) can be rewritten in recursive form as follows:

    \Phi(n) = \lambda \Phi(n-1) + u(n) u^H(n)
    \theta(n) = \lambda \theta(n-1) + u(n) d^*(n)

The matrix inversion lemma shown below is applied to \Phi(n):

    (A + U V^H)^{-1} = A^{-1} - \frac{A^{-1} U V^H A^{-1}}{1 + V^H A^{-1} U}

Therefore,

    \Phi^{-1}(n) = \lambda^{-1} \Phi^{-1}(n-1) - \frac{\lambda^{-2} \Phi^{-1}(n-1) u(n) u^H(n) \Phi^{-1}(n-1)}{1 + \lambda^{-1} u^H(n) \Phi^{-1}(n-1) u(n)}

At this point, the following two new terms are defined:

    P(n) = \Phi^{-1}(n), where P(n) has dimensions M x M

    k(n) = \frac{\lambda^{-1} P(n-1) u(n)}{1 + \lambda^{-1} u^H(n) P(n-1) u(n)}, where k(n) is an M x 1 gain vector

It can be seen that k(n) = P(n) u(n). Proceeding with the algebra, we ultimately arrive at the following recursion formula for w(n):

    w(n) = w(n-1) + k(n) [d^*(n) - u^H(n) w(n-1)]

The last term (in square brackets) is denoted \alpha^*(n), where \alpha(n) = d(n) - w^H(n-1) u(n) is called the 'innovation' or 'a priori estimation error'. This differs from e(n), which is the 'a posteriori estimation error'.
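The rank-one matrix inversion lemma applied above can be spot-checked numerically. The sketch below assumes NumPy and real-valued data, so the Hermitian transpose reduces to a plain transpose; all names are illustrative:

```python
import numpy as np

# Numerical spot-check of the matrix inversion lemma used in the derivation:
#   (A + U V^H)^{-1} = A^{-1} - (A^{-1} U V^H A^{-1}) / (1 + V^H A^{-1} U)
# for a rank-one update, with U and V column vectors.

def rank_one_inverse_update(A_inv, U, V):
    """Given A^{-1} and column vectors U, V, return (A + U V^T)^{-1}
    without inverting the updated matrix from scratch."""
    num = A_inv @ U @ V.T @ A_inv       # A^{-1} U V^T A^{-1}, an M x M matrix
    den = 1.0 + float(V.T @ A_inv @ U)  # scalar 1 + V^T A^{-1} U
    return A_inv - num / den
```

This is exactly the step that turns the O(M^3) inversion of \Phi(n) into an O(M^2) update of P(n).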
To update w(n-1) to w(n), we therefore need k(n) and \alpha(n).

Algorithm

Based on the above formulation, the least squares equation can be solved recursively as follows:

Initialization: P(0) = \delta^{-1} I, where \delta is small and positive; w(0) = 0

For n = 1, 2, 3, ...:
    x(n) = \lambda^{-1} P(n-1) u(n)
    k(n) = [1 + u^H(n) x(n)]^{-1} x(n)
    \alpha(n) = d(n) - w^H(n-1) u(n)
    w(n) = w(n-1) + k(n) \alpha^*(n)
    P(n) = \lambda^{-1} P(n-1) - k(n) x^H(n)

Theoretical Analysis

A first view into the algorithm is to look at the sizes of the different quantities. Doing so, one finds that choosing the sizes of the input vector u and of the target values d determines the sizes of all the other quantities in the algorithm. Therefore, if u is (K x 1), and
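The recursive algorithm above can be sketched end to end as follows. This is a NumPy sketch for real-valued data, so every u^H becomes u^T; the function name and the default values chosen for \lambda and \delta are ours, not the report's:

```python
import numpy as np

# A minimal sketch of the recursive least squares loop given above.
# U: (n, K) matrix whose row t is u(t)^T; d: (n,) target values.

def rls(U, d, lam=0.99, delta=1e-2):
    """Run the RLS recursion over all samples; return the final weights w(n)."""
    n, K = U.shape
    P = np.eye(K) / delta                # P(0) = delta^{-1} I
    w = np.zeros(K)                      # w(0) = 0
    for t in range(n):
        u = U[t]
        x = P @ u / lam                  # x(n) = lambda^{-1} P(n-1) u(n)
        k = x / (1.0 + u @ x)            # k(n) = [1 + u^H(n) x(n)]^{-1} x(n)
        alpha = d[t] - w @ u             # a priori error alpha(n)
        w = w + k * alpha                # w(n) = w(n-1) + k(n) alpha*(n)
        P = P / lam - np.outer(k, x)     # P(n) = lambda^{-1} P(n-1) - k(n) x^H(n)
    return w
```

Note that no matrix is ever inverted inside the loop: each iteration costs O(K^2) operations, versus O(K^3) for the direct solve per sample.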

