The Cooper Union Department of Electrical Engineering
ECE416 Adaptive Filters
Problem Set III: LMS and NLMS
Note: Problems refer to Haykin, 5th or 4th ed. (problem numbers are the same).
1. The purpose of this problem is to examine the effects of various parameters on the performance of the LMS and NLMS algorithms. The white noise signals that drive the models here are complex Gaussian. For the signal u[n], assume an AR model:

u[n] = v1[n] − a1 u[n−1] − a2 u[n−2]

where v1 is unit-variance white noise and a1, a2 are chosen to achieve prescribed model poles. Reference the handout on rational PSD. We will take two cases: poles at 0.3 and 0.5, and poles at 0.3 and 0.95. For model orders, take M = 3, M = 6, and M = 10.
For the underlying desired signal, assume a linear regression model:

d[n] = θ^H u[n] + v2[n]

where v2 is unit-variance white noise, and θ is a vector of length 6, given by:

θ_k = 1/k^2,  1 ≤ k ≤ 6
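The two signals above can be generated as follows (a minimal sketch in Python/NumPy; the helper name gen_signals and the use of numpy.random.default_rng are my own choices, not part of the assignment):

```python
import numpy as np

def gen_signals(poles, N, rng, theta=None):
    """Generate the AR(2) input u[n] and the regression desired signal d[n].

    AR model: u[n] = v1[n] - a1*u[n-1] - a2*u[n-2], with a1, a2 chosen
    so that 1 + a1 z^-1 + a2 z^-2 has the prescribed (real) poles.
    """
    p1, p2 = poles
    a1 = -(p1 + p2)                      # coefficients from pole locations
    a2 = p1 * p2
    if theta is None:
        theta = 1.0 / np.arange(1, 7) ** 2   # theta_k = 1/k^2, k = 1..6
    # unit-variance complex Gaussian white noise (real and imaginary parts
    # each carry half the power)
    v1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    v2 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    u = np.zeros(N, dtype=complex)
    for n in range(N):
        u[n] = v1[n]
        if n >= 1:
            u[n] -= a1 * u[n - 1]
        if n >= 2:
            u[n] -= a2 * u[n - 2]
    # d[n] = theta^H u_vec[n] + v2[n], with u_vec[n] = [u[n], ..., u[n-5]]^T
    d = np.zeros(N, dtype=complex)
    for n in range(N):
        uv = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(6)])
        d[n] = np.vdot(theta, uv) + v2[n]   # vdot conjugates its first argument
    return u, d
```

Passing a seeded generator (e.g. np.random.default_rng(seed)) makes each of the 100 Monte Carlo runs reproducible.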
(a) For each model order, compute R_M, p_M, the Wiener filter w0_M, and ||w0_M − θ|| (to compute this for vectors of different length, extend by 0 as necessary). Compute the exact values from the specified coefficients in this problem, not values obtained by time-averaging the random signals. Also calculate the eigenvalue spread, and the bound on μ for a stable LMS algorithm (call it μmax), and similarly the bound on μ̃ for stable NLMS (call it μ̃max). Comment on the relation between eigenvalue spread and the locations of the poles. Also, theory suggests what should happen when the order of the Wiener filter matches or exceeds that of the underlying model for d: check that.
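One way to obtain the exact quantities in part (a) is from the closed-form AR(2) autocorrelation (a sketch under my own assumptions: the helper names are invented, and I use the simple bound μmax = 2/λmax, which may differ from the bound derived in class):

```python
import numpy as np

def ar2_autocorr(poles, K):
    """Exact autocorrelation r(0..K-1) of an AR(2) process driven by
    unit-variance white noise, via the Yule-Walker equations."""
    p1, p2 = poles
    a1, a2 = -(p1 + p2), p1 * p2
    r = np.zeros(K)
    r[0] = (1 + a2) / ((1 - a2) * ((1 + a2) ** 2 - a1 ** 2))
    if K > 1:
        r[1] = -a1 / (1 + a2) * r[0]
    for k in range(2, K):
        r[k] = -a1 * r[k - 1] - a2 * r[k - 2]
    return r

def wiener(poles, M, theta=None):
    """Exact R_M, p_M, Wiener filter w0_M, eigenvalue spread, mu_max,
    and ||w0_M - theta|| (shorter vector zero-padded)."""
    if theta is None:
        theta = 1.0 / np.arange(1, 7) ** 2
    K = max(M, 6)
    r = ar2_autocorr(poles, K)
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    # p_M = E[u_M d*] = E[u_M u_6^H] theta; entry (i, j) is r(j - i)
    C = np.array([[r[abs(j - i)] for j in range(6)] for i in range(M)])
    p = C @ theta
    w0 = np.linalg.solve(R, p)
    lam = np.linalg.eigvalsh(R)          # ascending eigenvalues
    chi = lam[-1] / lam[0]               # eigenvalue spread
    mu_max = 2.0 / lam[-1]               # simple LMS stability bound (assumed form)
    L = max(M, 6)
    diff = np.pad(w0, (0, L - M)) - np.pad(theta, (0, L - 6))
    return R, p, w0, chi, mu_max, np.linalg.norm(diff)
```

A useful sanity check: for M ≥ 6 the Wiener filter should reproduce θ exactly (zero-padded), since the regression noise v2 is independent of u.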
(b) You have two sets of possible poles, and three model orders. That is 6 cases total. For each case, run 4 different adaptive filters: LMS with step sizes 0.05μmax, 0.5μmax, and 0.8μmax, and NLMS with step size 0.2μ̃max. Obtain learning curves by averaging results over 100 runs. Ideally, all runs should be over the same number of iterations, and you should do enough iterations that all seem to converge to the steady-state condition (assuming they are stable). Generate graphs of the learning curve J(n) and the mean-square deviation curve D(n) (remember, D(n) represents the error of the adaptive filter from the corresponding Wiener filter, not from the model vector θ). In some cases, zooming in on a smaller number of iterations may make more sense.
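A single-run core for both algorithms might look like the following (a sketch; the function name, the NLMS regularizer eps, and the curve conventions are my own choices — J(n) and D(n) come from averaging the returned |e|² and deviation curves over the 100 runs):

```python
import numpy as np

def run_lms(u, d, M, mu, w_ref, nlms=False, eps=1e-6):
    """One run of LMS (or NLMS if nlms=True) on data (u, d).

    Returns the instantaneous squared-error curve |e(n)|^2 and the
    squared-deviation curve ||w(n) - w_ref||^2, where w_ref is the
    Wiener filter for this model order.
    """
    N = len(u)
    w = np.zeros(M, dtype=complex)
    e2 = np.zeros(N)
    dev = np.zeros(N)
    for n in range(N):
        uv = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        e = d[n] - np.vdot(w, uv)            # a priori error, d - w^H u
        if nlms:
            step = mu / (eps + np.vdot(uv, uv).real)   # normalize by ||u||^2
        else:
            step = mu
        w = w + step * uv * np.conj(e)       # complex LMS/NLMS update
        e2[n] = abs(e) ** 2
        dev[n] = np.linalg.norm(w - w_ref) ** 2
    return e2, dev
```

Averaging e2 and dev elementwise over 100 independent runs gives the J(n) and D(n) estimates for each of the 24 filter/case combinations.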
(c) Comment on: stability, rate of convergence versus misadjustment, effect of eigenvalue spread, effect of model order, and the relation between J(n) and D(n) (theory says the excess mean-square error J(n) − Jmin is bounded between λmin·D(n) and λmax·D(n)).