  The Cooper Union Department of Electrical Engineering

ECE416 Adaptive Filters
Problem Set IV: RLS, QRD-RLS and Order Recursive Algorithms

1. Orthogonality Principle: Referencing the Kalman filter notation we used in the
course:
x(n + 1) = Ax(n) + v(n)
y(n) = Cx(n) + w(n)
where v, w are 0-mean white, uncorrelated with each other and with the initial state
x(0), with covariance matrices Qv, Qw, respectively. Although not indicated, in general
A, C, Qv, Qw may depend on time. With Yn = span{y(k), 1 ≤ k ≤ n}, the notation
û(m|n) means the projection of u(m) onto Yn. The predicted state-error vector is:
ε(n, n−1) = x(n) − x̂(n|n−1)
Prove that ε(n, n−1) ⊥ v(n) and ε(n, n−1) ⊥ w(n) using a BRIEF argument in
each case. Your argument must be compact!
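[A hedged sketch of one compact route, which may or may not be the intended one:
x(n) is determined by x(0) and v(0), …, v(n−1), while each y(k) with k ≤ n−1 is
determined by x(0), v(0), …, v(k−1), and w(k). Whiteness and the stated
uncorrelatedness therefore give v(n) ⊥ x(n) and v(n) ⊥ Yn−1, and since
x̂(n|n−1) ∈ Yn−1,
ε(n, n−1) = x(n) − x̂(n|n−1) ⊥ v(n).
The w(n) case is parallel: w(n) ⊥ x(n) because w is uncorrelated with v and x(0),
and w(n) ⊥ y(k) for k ≤ n−1 by whiteness of w, so w(n) ⊥ Yn−1 as well.]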
2. RLS: In the LMS problem set for this course, you implemented LMS for adaptive
equalization and adaptive MVDR. Now repeat each using RLS. Use λ = 0.95 in each
case; to select δ you may need some trial and error, but as an initial attempt try
δ = 0.005. Basically you need to run the adaptation long enough so the impact of
δ is minimal. [This is similar to Haykin 10.10, 10.11 in 5th ed., 9.11, 9.12 in 4th ed.,
except the adaptive equalization and MVDR problems you did for LMS were somewhat
different from how they appear in the textbook.]
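[For reference, a minimal MATLAB sketch of the exponentially weighted RLS
recursion from Haykin's table; the function name, signature, and variable names
are mine, so treat it as a starting point rather than the required implementation:

    function [w, e] = rls_sketch(U, d, lambda, delta)
    % Exponentially weighted RLS. Assumes:
    %   U      : M-by-N matrix whose n-th column is the data vector u(n)
    %   d      : 1-by-N desired response
    %   lambda : forgetting factor (0.95 here)
    %   delta  : regularization, P(0) = delta^{-1}*I (try 0.005)
    [M, N] = size(U);
    w = zeros(M, 1);                       % w(0) = 0
    P = eye(M) / delta;                    % P(0) = delta^{-1} I
    e = zeros(1, N);
    for n = 1:N
        u    = U(:, n);
        k    = (P*u) / (lambda + u'*P*u);  % gain vector k(n)
        e(n) = d(n) - w'*u;                % a priori error xi(n)
        w    = w + k*conj(e(n));           % tap-weight update
        P    = (P - k*(u'*P)) / lambda;    % Riccati update of P(n)
    end
    end

For the equalizer, u(n) is the delay-line vector of received samples and d(n) the
suitably delayed training symbol. For MVDR there is no desired response; there the
P(n) recursion, which tracks a (regularized) inverse correlation matrix, is the
relevant piece.]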
3. QRD-RLS: First, write MATLAB code to implement the QRD-RLS and inverse
QRD-RLS algorithms (as provided in appropriate tables in Haykin). In this core code,
do not assume the data vector u(n) has any particular structural form (i.e., it may not
be a time series). Also write "wrapper code" that calls these inner, general algorithms
for the special case where the data vectors are obtained from a time series, i.e., have
the form uM(n) = [u(n), u(n−1), …, u(n−M+1)]^T. Normally a prewindowing
approach is used (i.e., the assumption is u(i) = 0 for i ≤ 0); however, your code should
take an OPTIONAL input vector that prescribes initial conditions u(0), u(−1), …
(you figure out how far back it needs to go). Also, one algorithm does not directly
yield the tap-weight vector ŵ, and the other does not directly yield the output error
signal e(n). Write code to compute each (don't run this at each time step; what I
mean is, if you stop the algorithm at some fixed time N, then write code so you can
find ŵ(N) at that time, or e(N) at that time, respectively).
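[To make the array structure concrete, here is a hedged sketch of one way the
inner QRD-RLS update can be organized for real-valued data: a single
Givens-rotation pass that annihilates the last column of Haykin's pre-array.
Names and packaging are mine; the complex case needs conjugates in the rotations.

    function [S, p, xi, gam] = qrdrls_step(S, p, u, d, lambda)
    % One QRD-RLS time update (real data). Assumes:
    %   S : M-by-M lower-triangular factor Phi^{1/2}(n-1), init sqrt(delta)*eye(M)
    %   p : M-by-1 vector p(n-1), init zeros(M,1)
    %   u : M-by-1 data vector u(n) (no structure assumed)
    %   d : desired response d(n);  lambda : forgetting factor
    M = length(u);
    A = [sqrt(lambda)*S,   u;     % pre-array
         sqrt(lambda)*p.', d;
         zeros(1, M),      1];
    for k = 1:M                   % zero the last column, top to bottom
        r = hypot(A(k, k), A(k, M+1));
        if r == 0, continue; end
        c = A(k, k)/r;  s = A(k, M+1)/r;
        tmp       =  c*A(:, k) + s*A(:, M+1);
        A(:, M+1) = -s*A(:, k) + c*A(:, M+1);
        A(:, k)   =  tmp;
    end
    S   = A(1:M, 1:M);             % Phi^{1/2}(n), still lower triangular
    p   = A(M+1, 1:M).';           % p(n)
    gam = A(M+2, M+1)^2;           % conversion factor gamma(n)
    xi  = A(M+1, M+1)/A(M+2, M+1); % a priori error; a posteriori e(n) = xi*gam
    end

At a stopping time N, ŵ(N) then comes from one back-substitution, w = S' \ p,
since Φ^{H/2}(N) ŵ(N) = p(N); the inverse QRD-RLS algorithm conversely propagates
a factor of P(n) and yields ŵ(n) directly but not e(n), which is the asymmetry the
problem asks you to bridge.]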
Apply this to the same equalization problem you have worked on before. Recall that
adaptation happens during a training sequence period, when the ideal transmitted
sequence is known. Therefore, we don't want to initialize the data matrix with zeros.
Instead, consider an initial "prefix" in your training sequence: we transmit the
