UW-Madison ECE 734 - A Recursive Method for the Solution of the Linear Least Squares Formulation
Project Proposal

A Recursive Method for the Solution of the Linear Least Squares Formulation – Algorithm and Performance in Connection with the PLX Instruction Set

Shyam Bharat, Claus Benjaminsen

The least squares method of parameter estimation seeks to minimize the squared error between the observed data sequence and the assumed signal model, which is some function of the parameter to be estimated. In particular, the linear least squares formulation represents the case where the signal model is a linear function of the parameter to be estimated. The salient feature of the least squares method is that no probabilistic assumptions are made about the data; only a signal model is assumed. The method is therefore generally used in situations where a precise statistical characterization of the data is unavailable.

The linear least squares estimation (LLSE) method ultimately boils down to solving a set of linear equations. As will be seen from the equations that follow, the solution to the LLSE involves computing the inverse of a matrix: the autocorrelation matrix of the assumed signal model. The dimensions of this autocorrelation matrix are P x P, where P is the number of samples in the observation vector. Computing the matrix inverse is straightforward for small values of P, but it becomes increasingly computationally intensive as P grows. In digital signal processing (DSP) applications, the computation must generally be done in real time for a continuous succession of samples; in other words, P → ∞. It is practically impossible to compute the inverse of a matrix with infinitely many entries. Hence, it is essential to find an alternative way of solving the system of linear equations for the unknown parameters. This problem is fairly well researched, and an alternative method of solving for the unknown parameters exists.
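As a concrete illustration of the batch solution discussed above, the following Python sketch (our own illustration, not part of the proposal; the model matrix, noise level, and variable names are assumptions) forms and solves the normal equations for a small linear model:

```python
# Illustration only (assumed data and names): the batch linear least squares
# solution via the normal equations.
import numpy as np

rng = np.random.default_rng(0)
P, M = 200, 3                               # number of samples and parameters (assumed)
theta_true = np.array([0.5, -1.0, 2.0])     # parameter vector to be estimated
H = rng.standard_normal((P, M))             # linear signal model matrix
x = H @ theta_true + 0.01 * rng.standard_normal(P)   # observed data sequence

# Normal equations: (H^T H) theta = H^T x, where H^T H is the autocorrelation
# matrix of the signal model.  np.linalg.solve factorizes instead of forming
# an explicit inverse, but the cost still grows with the problem size.
theta_hat = np.linalg.solve(H.T @ H, H.T @ x)
```

Even with a solver that avoids the explicit inverse, redoing this computation from scratch every time a new sample arrives is exactly the real-time bottleneck the proposal describes.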
This alternative method involves updating a so-called weight vector in time, based on its value at the previous time instant. The meaning of this statement will become clearer with the aid of the equations presented below.

The function to be minimized with respect to the parameter at the present instant is

    J(w(n)) = \sum_{k=1}^{n} \lambda^{n-k} |e_n(k)|^2, \qquad 0 < \lambda < 1

where e_n(k) = d(k) - w^H(n) u(k). In matrix form, with U = [u(1), ..., u(n)] and d = [d(1), ..., d(n)]^T, this function can be written as

    J(w(n)) = [d^* - U^H w(n)]^H \Lambda [d^* - U^H w(n)]

where \Lambda = diag\{\lambda^{n-1}, \lambda^{n-2}, ..., \lambda, 1\}. The solution to this minimization is

    w(n) = (U \Lambda U^H)^{-1} U \Lambda d^*

or w(n) = \Phi^{-1}(n) \theta(n), where

    \Phi(n) = \sum_{k=1}^{n} \lambda^{n-k} u(k) u^H(k)
    \theta(n) = \sum_{k=1}^{n} \lambda^{n-k} u(k) d^*(k)

As can be observed from the above equations, the path to the solution involves computing the inverse of a matrix for every value of n. The method of recursive least squares circumvents this computationally intensive step by instead calculating w(n) from w(n-1), its value at the previous time instant. \Phi(n) and \theta(n) can be rewritten in recursive form as follows:

    \Phi(n) = \lambda \Phi(n-1) + u(n) u^H(n)
    \theta(n) = \lambda \theta(n-1) + u(n) d^*(n)

The matrix inversion lemma,

    (A + U V^H)^{-1} = A^{-1} - \frac{A^{-1} U V^H A^{-1}}{1 + V^H A^{-1} U},

is applied to \Phi(n). Therefore,

    \Phi^{-1}(n) = \lambda^{-1} \Phi^{-1}(n-1) - \frac{\lambda^{-2} \Phi^{-1}(n-1) u(n) u^H(n) \Phi^{-1}(n-1)}{1 + \lambda^{-1} u^H(n) \Phi^{-1}(n-1) u(n)}

At this point, two new terms are defined:

    P(n) = \Phi^{-1}(n), which has dimensions M x M, and

    k(n) = \frac{\lambda^{-1} P(n-1) u(n)}{1 + \lambda^{-1} u^H(n) P(n-1) u(n)},

an M x 1 gain vector. It can be seen that k(n) = P(n) u(n). Proceeding with the algebra, we ultimately arrive at the following recursion formula for w(n):

    w(n) = w(n-1) + k(n) [d^*(n) - u^H(n) w(n-1)]

The last term (in square brackets) is denoted \alpha^*(n), where \alpha(n) = d(n) - w^H(n-1) u(n) is called the 'innovation' or 'a priori estimation error'.
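The rank-one update of \Phi^{-1}(n) produced by the matrix inversion lemma can be verified numerically. The sketch below (illustrative only; the positive definite \Phi(n-1) and the input vector are generated arbitrarily) compares the lemma-based update against a direct inverse for one time step:

```python
# Sketch (assumed setup): one step of the Phi-inverse recursion from the
# matrix inversion lemma, checked against a direct matrix inverse.
import numpy as np

rng = np.random.default_rng(1)
M, lam = 3, 0.98
A = rng.standard_normal((M, 2 * M))
Phi_prev = A @ A.T                      # a generic positive definite Phi(n-1)
u = rng.standard_normal((M, 1))         # current input vector u(n)

Phi = lam * Phi_prev + u @ u.T          # Phi(n) = lambda Phi(n-1) + u(n) u(n)^H

P_prev = np.linalg.inv(Phi_prev)        # P(n-1) = Phi^{-1}(n-1)
num = lam**-2 * P_prev @ u @ u.T @ P_prev
den = 1.0 + lam**-1 * (u.T @ P_prev @ u).item()
P = lam**-1 * P_prev - num / den        # lemma-based update of P(n)
```

The point of the lemma is visible in the operation counts: the update costs O(M^2) per step, versus O(M^3) for recomputing the inverse from scratch.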
This differs from e(n), which is the 'a posteriori estimation error'. To update w(n-1) to w(n), we need k(n) and \alpha(n). Based on the above formulation, the least squares equation can be solved recursively with the following algorithm:

Initialization: P(0) = \delta^{-1} I, where \delta is small and positive; w(0) = 0.

For n = 1, 2, 3, ...:

    x(n) = \lambda^{-1} P(n-1) u(n)
    k(n) = [1 + u^H(n) x(n)]^{-1} x(n)
    \alpha(n) = d(n) - w^H(n-1) u(n)
    w(n) = w(n-1) + k(n) \alpha^*(n)
    P(n) = \lambda^{-1} P(n-1) - k(n) x^H(n)

To implement this algorithm successfully, the expressions in the loop above need to be analyzed. In this project we will therefore investigate what kind of computation the algorithm requires and whether anything can be simplified. What are the dependencies among the variables and the sequential statements? Can any of the dependencies be removed? Is there anything to gain by rewriting the loop in single-assignment form, and can the computations be done in parallel?

The next thing we will look into is the numerical properties of the algorithm. Is it stable and bounded, or can it become unstable and unbounded? What requirements ensure that it works correctly? How do the values grow during the calculations, and what word lengths should be used for inputs, intermediate results, and outputs? Is it helpful to scale any of the variables used in the computations, and what kind of arithmetic should be used: signed or unsigned, saturation or wrap-around?

Finally, we will look into how to implement the algorithm using the PLX instruction set. How should the assembly code be written, and what kinds of instructions are required? Are all necessary instructions available, or would other, currently unsupported instructions be helpful? How should the data be stored and updated? What is the speed of the implementation, and how many inputs can be processed per second?
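The recursion above can be sketched end to end in a few lines. The following Python loop follows the proposal's variable names; the data generation is our own assumption for illustration, and the data is real-valued, so the conjugates in d^*(n) and \alpha^*(n) drop out:

```python
# Minimal sketch of the recursive least squares loop above (real-valued data,
# so conjugates are omitted).  Data generation is assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
M, N, lam, delta = 3, 500, 0.99, 1e-2
w_true = np.array([0.5, -1.0, 2.0])     # underlying parameters (assumed)

P = np.eye(M) / delta                   # P(0) = delta^{-1} I
w = np.zeros(M)                         # w(0) = 0
for _ in range(N):
    u = rng.standard_normal(M)          # input vector u(n)
    d = u @ w_true + 0.01 * rng.standard_normal()   # desired response d(n)
    x = P @ u / lam                     # x(n) = lambda^{-1} P(n-1) u(n)
    k = x / (1.0 + u @ x)               # gain vector k(n)
    alpha = d - w @ u                   # a priori error alpha(n)
    w = w + k * alpha                   # weight update
    P = P / lam - np.outer(k, x)        # P(n) = lambda^{-1} P(n-1) - k(n) x(n)^H
```

Note that no matrix inverse appears anywhere in the loop; each iteration costs O(M^2), which is what makes the real-time questions below worth asking.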
Can the algorithm be executed in real time?

References:

Simon Haykin, Adaptive Filter Theory, Prentice Hall, 2002.

Kalavai J. Raghunath and Keshab K. Parhi, "Fixed and Floating Point Error Analysis of QRD-RLS and STAR-RLS Adaptive Filters," Proc. IEEE ICASSP-94, vol. III, 19-22 April 1994, pp. III/81-III/84.

