High-Resolution Parametric Subspace Methods

The first parametric subspace-based method was the Pisarenko method, which was further modified, leading to the MUltiple SIgnal Classification (MUSIC) method.

MUSIC Method: Recall the model

    x(t) = A s(t) + n(t),

where A is a Vandermonde matrix:

    A = \begin{bmatrix}
        1 & 1 & \cdots & 1 \\
        e^{-j\omega_1} & e^{-j\omega_2} & \cdots & e^{-j\omega_L} \\
        \vdots & \vdots & \ddots & \vdots \\
        e^{-j\omega_1(N-1)} & e^{-j\omega_2(N-1)} & \cdots & e^{-j\omega_L(N-1)}
        \end{bmatrix}.

The covariance matrix is

    R = E\{x(t) x^H(t)\} = A E\{s(t) s^H(t)\} A^H + \sigma^2 I = A S A^H + \sigma^2 I,

where S = E\{s(t) s^H(t)\} is a full-rank diagonal matrix.

EE 524, Fall 2003, # 9

From the eigenanalysis of R, we see that

• the eigenvalues of R are

      \lambda_k > \sigma^2,  k = 1, ..., L       (signal subspace),
      \lambda_k = \sigma^2,  k = L+1, ..., N     (noise subspace);

• the noise-subspace eigenvectors are orthogonal to the column space of A.

Let the signal- and noise-subspace eigenvectors be given by

    E = [u_1, u_2, ..., u_L]            (signal subspace),
    G = [u_{L+1}, u_{L+2}, ..., u_N]    (noise subspace).

Then

    R G = \sigma^2 G.

On the other hand,

    R G = (A S A^H + \sigma^2 I) G = A S A^H G + \sigma^2 G.

Comparing the above two equations, we conclude

    A^H G = 0.

The noise-subspace eigenvectors of R are orthogonal to the columns of A.
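This eigenstructure (noise eigenvalues equal to \sigma^2, and A^H G = 0) can be checked with a short simulation. The following is a minimal sketch; the values of N, L, the frequencies, and the signal powers are arbitrary choices for illustration:

```python
import numpy as np

# Sketch: build R = A S A^H + sigma^2 I for two complex sinusoids and
# verify the eigenstructure claimed above. All parameters are arbitrary.
N, L = 8, 2
omega = np.array([0.6, 1.9])             # true frequencies (rad/sample)
n = np.arange(N)[:, None]
A = np.exp(-1j * n * omega[None, :])     # N x L Vandermonde matrix
S = np.diag([2.0, 1.0])                  # full-rank diagonal signal covariance
sigma2 = 0.1
R = A @ S @ A.conj().T + sigma2 * np.eye(N)

# eigh returns eigenvalues in ascending order, so the noise subspace
# corresponds to the first N - L eigenvectors.
lam, U = np.linalg.eigh(R)
G = U[:, : N - L]                        # noise-subspace eigenvectors

print(np.allclose(lam[: N - L], sigma2))         # noise eigenvalues = sigma^2
print(np.linalg.norm(A.conj().T @ G) < 1e-8)     # A^H G = 0
```

Both checks hold because the L signal eigenvalues exceed \sigma^2 (S is full rank and the steering vectors are linearly independent), while the remaining N - L eigenvectors span the orthogonal complement of the column space of A.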
In turn, the signal-subspace eigenvectors span the same subspace as the column space of A.

The true frequencies \{\omega_l\}_{l=1}^{L} are the solutions of

    a^H(\omega) G G^H a(\omega) = N - a^H(\omega) E E^H a(\omega) = 0,

where

    G G^H = P_A^\perp,    E E^H = I - G G^H = P_A.

MUSIC Spectral Estimate:

    P_{MUSIC}(\omega) = \frac{1}{a^H(\omega) G G^H a(\omega)} = \frac{1}{N - a^H(\omega) E E^H a(\omega)}.

In practice, we do not know R, so

    \hat{P}_{MUSIC}(\omega) = \frac{1}{a^H(\omega) \hat{G}\hat{G}^H a(\omega)} = \frac{1}{N - a^H(\omega) \hat{E}\hat{E}^H a(\omega)},

where

    \hat{R} = \frac{1}{K} \sum_{k=1}^{K} x(k) x^H(k),    \hat{G}\hat{G}^H = \hat{P}_A^\perp,    \hat{E}\hat{E}^H = I - \hat{G}\hat{G}^H = \hat{P}_A.

Remarks:

• the number of signals L must be known or estimated,
• MUSIC involves an eigendecomposition,
• the computational cost can be quite high if the MUSIC estimate is evaluated on a fine frequency grid.

Root-MUSIC Algorithm

Instead of computing the spectral MUSIC estimate, root the polynomial

    a^T(1/z) \hat{G}\hat{G}^H a(z) = 0,

where

    a(z) = [1, z^{-1}, ..., z^{-N+1}]^T.

Remarks:

• the MUSIC polynomial has order 2N - 2 and, therefore, 2N - 2 roots,
• the roots form N - 1 pairs, where one root is the conjugate reciprocal of the other, i.e., if z is a root, then 1/z^* is a root as well:

    0 = a^T(1/z) \hat{G}\hat{G}^H a(z) = [a^T(1/z) \hat{G}\hat{G}^H a(z)]^H
      = a^T(z^*) \hat{G}\hat{G}^H a(1/z^*)
      = a^T(1/\tilde{z}) \hat{G}\hat{G}^H a(\tilde{z}),    with \tilde{z} = 1/z^*.

If z is a root, then 1/z^* is a root too!

Example: Let z = 0.8 e^{j\pi/4} be a root of the MUSIC polynomial. Then the conjugate-reciprocal root is

    1/z^* = \frac{1}{0.8 e^{-j\pi/4}} = 1.25 e^{j\pi/4}.

Thus, the angle of the root does not change, but the root lies on the opposite side of the unit circle.

Select only the roots that lie inside the unit circle. Estimates of the signal frequencies:

    \hat{\omega}_l = \angle z_l,    l = 1, 2, ..., L,

using the L roots closest to the unit circle (the so-called signal roots) with |z| \le 1.

Root-MUSIC performs better than spectral MUSIC at low SNR or with a small number of samples because it is insensitive to radial errors (more precisely, it is sensitive only to errors that cause subspace swapping).

Root-MUSIC also has a much simpler implementation than spectral MUSIC because it does not require any search over frequency.

Minimum-Norm Method

    \hat{P}_{MN}(\omega) = \frac{1}{|a^H(\omega) w|^2},

where the vector w

• has its first element equal to 1 and minimum norm,
• belongs to the sample noise subspace.

Optimization problem:

    \min_w w^H w    subject to    w^H e_1 = 1,    \hat{G}\hat{G}^H w = w.

Substituting the second constraint into the objective function and the first constraint, and using \hat{G}^H \hat{G} = I, yields

    w^H w = w^H \hat{G} (\hat{G}^H \hat{G}) \hat{G}^H w = w^H \hat{G}\hat{G}^H w,
    w^H e_1 = w^H \hat{G}\hat{G}^H e_1 = 1.

With these expressions, our optimization problem transforms to

    \min_w w^H \hat{G}\hat{G}^H w    subject to    w^H \hat{G}\hat{G}^H e_1 = 1.

Hence, form the Lagrangian

    Q(w) = w^H \hat{G}\hat{G}^H w + \lambda (1 - w^H \hat{G}\hat{G}^H e_1) + \lambda^* (1 - e_1^H \hat{G}\hat{G}^H w),

    (\nabla Q)^* = \hat{G}\hat{G}^H w - \lambda \hat{G}\hat{G}^H e_1 = 0    ⟹    \hat{G}\hat{G}^H w = \lambda \hat{G}\hat{G}^H e_1.

Substituting the solution \hat{G}\hat{G}^H w = \lambda \hat{G}\hat{G}^H e_1 into the constraint equation w^H \hat{G}\hat{G}^H e_1 = 1, we get

    \lambda^* e_1^H \hat{G}\hat{G}^H e_1 = 1    ⟹    \lambda = \frac{1}{e_1^H \hat{G}\hat{G}^H e_1}

(note that e_1^H \hat{G}\hat{G}^H e_1 is real). Finally,

    w = \lambda \hat{G}\hat{G}^H e_1 = \frac{\hat{G}\hat{G}^H e_1}{e_1^H \hat{G}\hat{G}^H e_1},

where we used the second constraint. Substituting this solution into the expression for \hat{P}_{MN}(\omega), we get

    \hat{P}_{MN}(\omega) = \frac{1}{|a^H(\omega) w|^2} = \frac{(e_1^H \hat{G}\hat{G}^H e_1)^2}{|a^H(\omega) \hat{G}\hat{G}^H e_1|^2}.

The constant in the numerator does not alter the shape of the spectrum and, as a rule, is omitted:

    \hat{P}_{MN}(\omega) = \frac{1}{|a^H(\omega) \hat{G}\hat{G}^H e_1|^2} = \frac{1}{|1 - a^H(\omega) \hat{E}\hat{E}^H e_1|^2}.

ESPRIT Method

Consider again

    A = \begin{bmatrix}
        1 & 1 & \cdots & 1 \\
        e^{-j\omega_1} & e^{-j\omega_2} & \cdots & e^{-j\omega_L} \\
        \vdots & \vdots & \ddots & \vdots \\
        e^{-j\omega_1(N-1)} & e^{-j\omega_2(N-1)} & \cdots & e^{-j\omega_L(N-1)}
        \end{bmatrix}.

Let \underline{A} and \overline{A} be the matrices obtained from A by eliminating its first and last row, respectively. It can be readily shown that

    \underline{A} D = \overline{A},    D = diag\{e^{j\omega_1}, ..., e^{j\omega_L}\}.

Let \underline{E} and \overline{E} be formed from the signal eigenvector matrix E in the same way as \underline{A} and \overline{A} from A.

Recall that E and A span the same (signal) subspace.
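This shift-invariance relation \underline{A} D = \overline{A} is easy to check numerically. A minimal sketch (N, L, and the frequencies are arbitrary choices):

```python
import numpy as np

# Sketch: verify the ESPRIT shift invariance  A_under @ D = A_over,
# where A_under drops the first row of A and A_over drops the last.
N, L = 8, 2
omega = np.array([0.6, 1.9])           # arbitrary frequencies
n = np.arange(N)[:, None]
A = np.exp(-1j * n * omega[None, :])   # N x L Vandermonde matrix

A_under = A[1:, :]                     # first row eliminated
A_over = A[:-1, :]                     # last row eliminated
D = np.diag(np.exp(1j * omega))        # rotation operator diag{e^{j w_l}}

print(np.allclose(A_under @ D, A_over))
```

The check holds because multiplying column l by e^{j\omega_l} shifts every entry e^{-j\omega_l m} to e^{-j\omega_l (m-1)}, i.e., moves the column up by one row.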
Therefore E = A C for some nonsingular L × L matrix C, so

    \underline{E} = \underline{A} C,    \overline{E} = \overline{A} C = \underline{A} D C = \underline{E} C^{-1} D C,

where we used \underline{A} = \underline{E} C^{-1}. Equivalently,

    \overline{E} = \underline{E} \Psi,    where    \Psi = C^{-1} D C.    (*)

In practice, both C and D are unknown!

LS solution for \Psi:

    \Psi = (\underline{E}^H \underline{E})^{-1} \underline{E}^H \overline{E}.

From (*), \Psi and D are related by a similarity transform, so the diagonal elements of D are the eigenvalues of \Psi!

ESPRIT:

Step 1: Compute the eigendecomposition of the sample covariance matrix \hat{R} and obtain the sample signal subspace \hat{E}.

Step 2: Form the matrices \underline{\hat{E}} and \overline{\hat{E}}.

Step 3: Compute \hat{\Psi} = (\underline{\hat{E}}^H \underline{\hat{E}})^{-1} \underline{\hat{E}}^H \overline{\hat{E}}.

Step 4: Compute the eigenvalues \hat{\psi}_l, l = 1, 2, ..., L of \hat{\Psi} and obtain the frequency estimates as

    \hat{\omega}_l = \angle \hat{\psi}_l.

Model-Fitting-Based Parametric Spectral Analysis

Nonlinear LS: Recall the low-rank model and obtain the frequencies by minimizing

    \min_{S,\omega} \left\{ \sum_{t=1}^{K} \| x(t) - A(\omega) s(t) \|^2 \right\} = \min_{S,\omega} \| X - A(\omega) S \|^2
    ⟹    \min_\omega tr\{P_{A(\omega)}^\perp \hat{R}\}    ⟺    \max_\omega tr\{P_{A(\omega)} \hat{R}\}.

Nonparametric and Parametric Methods: Relationship

Matrix Inversion Lemma:

    (H + B C D)^{-1} = H^{-1} - H^{-1} B (C^{-1} + D H^{-1} B)^{-1} D H^{-1}

for arbitrary square nonsingular H and C.

Consider the familiar expression for the covariance matrix

    R = A S A^H + \sigma^2 I

and apply the matrix inversion lemma to obtain

    R^{-1} = (\sigma^2 I + A S A^H)^{-1} = \frac{1}{\sigma^2} \left( I + \frac{1}{\sigma^2} A S A^H \right)^{-1}
           = \frac{1}{\sigma^2} \left[ I - A (\sigma^2 S^{-1} + A^H A)^{-1} A^H \right],

which implies

    \lim_{\sigma^2 \to 0} \{\sigma^2 R^{-1}\} = \lim_{\sigma^2 \to 0} \{ I - A (\sigma^2 S^{-1} + A^H A)^{-1} A^H \} = I - A (A^H A)^{-1} A^H = P_A^\perp.

Compare the Capon and MUSIC spectra:

    P_{CAPON}(\omega) = \frac{1}{a^H(\omega) R^{-1} a(\omega)},    P_{MUSIC}(\omega) = \frac{1}{a^H(\omega) P_A^\perp a(\omega)},

as well as the AR (maximum entropy) and min-norm spectra:

    P_{AR}(\omega) = \frac{1}{|a^H(\omega) R^{-1} e_1|^2},    P_{MN}(\omega) = \frac{1}{|a^H(\omega) P_A^\perp e_1|^2}.

Clearly, for high SNR (\sigma^2 \to 0), we obtain

    P_{CAPON}(\omega) \sim P_{MUSIC}(\omega),    P_{AR}(\omega) \sim P_{MN}(\omega).

Matlab Example
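The original Matlab listing is not reproduced here; the following Python sketch illustrates the four ESPRIT steps above on simulated data. All scenario parameters (N, L, K, noise level, frequencies) are arbitrary choices:

```python
import numpy as np

# Sketch: estimate two sinusoid frequencies from noisy snapshots
# using ESPRIT Steps 1-4. Parameters are arbitrary illustration values.
rng = np.random.default_rng(0)
N, L, K = 8, 2, 200
omega = np.array([0.6, 1.9])            # true frequencies (rad/sample)
n = np.arange(N)[:, None]
A = np.exp(-1j * n * omega[None, :])

# K snapshots: x(k) = A s(k) + n(k), unit-power signals, low noise
S = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)
Noise = 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
X = A @ S + Noise

# Step 1: sample covariance and sample signal subspace
Rhat = X @ X.conj().T / K
lam, U = np.linalg.eigh(Rhat)           # eigenvalues in ascending order
Ehat = U[:, -L:]                        # L dominant eigenvectors

# Step 2: form the row-shifted matrices
E_under = Ehat[1:, :]                   # first row eliminated
E_over = Ehat[:-1, :]                   # last row eliminated

# Step 3: LS solution Psi = (E_under^H E_under)^{-1} E_under^H E_over
Psi = np.linalg.lstsq(E_under, E_over, rcond=None)[0]

# Step 4: frequency estimates from the eigenvalue angles
psi = np.linalg.eigvals(Psi)
omega_hat = np.sort(np.angle(psi))
print(omega_hat)                        # close to the true [0.6, 1.9]
```

Note that no frequency grid or search is involved: the estimates come directly from the angles of the L eigenvalues of the 2 × 2 matrix \hat{\Psi}.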