COMP 540 Matrix Computations
Assignment #2: Solving Linear Systems.

1. (5 points) (Cholesky factorization with pivoting). In some problems we have singular or nearly singular symmetric nonnegative definite matrices (A ∈ R^{n×n} is said to be symmetric nonnegative definite if A^T = A and x^T A x ≥ 0 for any x ∈ R^n). In this case, we need to incorporate the so-called symmetric pivoting (or diagonal pivoting) strategy into the Cholesky factorization (outer-product formulation). At each step k = 1, 2, . . . , r, where r = rank(A), we search the remaining (altered) diagonal (i.e., a^{(k)}_{kk}, . . . , a^{(k)}_{nn}) for the largest element, say a^{(k)}_{mm}, and exchange rows (and columns) k and m. Briefly outline the algorithm and show that the algorithm produces a permutation matrix P ∈ R^{n×n} and a lower triangular matrix L ∈ R^{n×r} (i.e., l_{ij} = 0 for i < j) with positive diagonal entries such that P^T A P = L L^T. In practice, when max_{k≤i≤n} a^{(k)}_{ii} ≤ ε max_{1≤i≤n} a^{(1)}_{ii}, where ε = nu, the algorithm terminates and the rank r is determined to be k − 1.
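For orientation only, a minimal MATLAB sketch of the outer-product step with diagonal pivoting is given below; it does not replace the required outline and proof. The function name chol_piv is illustrative, and the threshold tol is only a stand-in for ε max_{1≤i≤n} a^{(1)}_{ii}.

function [L, p, r] = chol_piv(A)
% Sketch: outer-product Cholesky with symmetric (diagonal) pivoting,
% so that A(p,p) is approximately L*L' with L n-by-r and positive diagonal.
n   = size(A,1);
p   = 1:n;                                % permutation recorded as an index vector
tol = n * eps(class(A)) * max(diag(A));   % stand-in for eps * max a_ii^(1)
L   = zeros(n);
r   = 0;
for k = 1:n
    [piv, m] = max(diag(A(k:n,k:n)));     % largest remaining diagonal entry
    if piv <= tol, break; end             % numerical rank reached
    m = m + k - 1;
    A([k m],:) = A([m k],:);   A(:,[k m]) = A(:,[m k]);   % symmetric swap
    L([k m],:) = L([m k],:);   p([k m])   = p([m k]);
    L(k,k)     = sqrt(A(k,k));
    L(k+1:n,k) = A(k+1:n,k) / L(k,k);
    % rank-one (outer-product) update of the trailing submatrix
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - L(k+1:n,k) * L(k+1:n,k)';
    r = k;
end
L = L(:,1:r);
end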
2. (15 points + 5 bonus points) (Ill-conditioned problem). Program in single precision the Cholesky factorization A = LL^T and the solution of equations for a symmetric positive definite matrix A. If your program is about to divide by zero, or take the square root of a negative number, it should instead output an appropriate message and stop. Write each component in subroutine or procedure form and use them in the solution of the systems Ax = b, where A is the n × n Hilbert matrix

A = [ 1/(i + j − 1) ],   b = Ae (computed),

for n = 2, 4, 6, 8. Note that e^T = [1, 1, . . . , 1]. Let the computed solution be denoted by x_c, and compute

i. The relative error in the solution, ‖x_c − e‖_2/‖x_c‖_2.
ii. The relative residual, ‖b − Ax_c‖_2/(‖A‖_F ‖x_c‖_2).
iii. The relative matrix residual, ‖A − L_c L_c^T‖_F/‖A‖_F.
Comment on and justify these results (hint: use the rounding error analysis and perturbation analysis results to justify i, and use the rounding error analysis results to justify iii). Print error and residual norms in floating point format so you can see what is happening.
Note that James Wilkinson (1968) showed that Cholesky factorization runs to completion if 20n^{3/2} u κ_2(A) ≤ 1 (an improved condition was given by James Demmel (1989)). The condition numbers κ_2(A) = ‖A‖_2 ‖A^{−1}‖_2 for the Hilbert matrices with n = 2, 4, 6, 8 are about 19, 16,000, 15,000,000 and 15,000,000,000, respectively.
If you use MATLAB, find out how to use single precision. If you use a high level language
rather than a MATLAB-like language, you can get up to 5 bonus points.
Please submit your code with a readme file through myCourses, which explains how to run
your code.
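One possible MATLAB driver for this experiment is sketched below, assuming MATLAB is used; single() and hilb() are built-ins, while chol and the backslash triangular solves stand in for the subroutines you are asked to write yourself.

% Sketch of the problem 2 experiment (MATLAB, single precision).
% chol and the triangular solves below are placeholders for your own routines.
for n = [2 4 6 8]
    A  = single(hilb(n));             % n-by-n Hilbert matrix in single precision
    e  = ones(n, 1, 'single');
    b  = A * e;                       % right-hand side computed in single
    Lc = chol(A, 'lower');            % placeholder for your Cholesky factorization
    xc = Lc' \ (Lc \ b);              % forward then back substitution
    rel_err    = norm(xc - e) / norm(xc);
    rel_res    = norm(b - A*xc) / (norm(A, 'fro') * norm(xc));
    rel_matres = norm(A - Lc*Lc', 'fro') / norm(A, 'fro');
    fprintf('n = %d: err = %.3e, res = %.3e, matres = %.3e\n', ...
            n, rel_err, rel_res, rel_matres);
end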
3. (a) (Bonus, 5 points) Show that in GEPP the growth factor ρ ≤ 2^{n−1} and that the upper bound can be reached for the following n × n matrix
A0 =
⎡  1                     1 ⎤
⎢ −1    1                1 ⎥
⎢  ⋮          ⋱          ⋮ ⎥
⎢ −1   −1   · · ·   1    1 ⎥
⎣ −1   −1   · · ·  −1    1 ⎦
(b) (5 points) Write a MATLAB program to solve a general linear system Ax = b by the LU
factorization with partial pivoting. Your program should include the computation of the
growth factor.
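One possible shape for such a program is sketched here; the name gepp_solve and its interface are illustrative only, and the growth factor is tracked as the largest magnitude appearing in any intermediate matrix A^(k) divided by the largest magnitude in the original A.

function [x, rho] = gepp_solve(A, b)
% Sketch: solve Ax = b by LU with partial pivoting, returning the growth
% factor rho = max_{k,i,j} |a_ij^(k)| / max_{i,j} |a_ij|.
n     = size(A,1);
p     = 1:n;
amax0 = max(abs(A(:)));              % largest magnitude in the original A
amax  = amax0;                       % running maximum over all A^(k)
for k = 1:n-1
    [~, m] = max(abs(A(k:n,k)));     % partial pivoting on column k
    m = m + k - 1;
    A([k m],:) = A([m k],:);   p([k m]) = p([m k]);
    A(k+1:n,k) = A(k+1:n,k) / A(k,k);                        % multipliers
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k) * A(k,k+1:n);
    amax = max(amax, max(max(abs(A(k+1:n,k+1:n)))));
end
rho = amax / amax0;                  % growth factor
y = b(p);  y = y(:);                 % permuted right-hand side, as a column
for k = 2:n                          % forward substitution (unit lower L)
    y(k) = y(k) - A(k,1:k-1) * y(1:k-1);
end
x = y;
for k = n:-1:1                       % back substitution with U
    x(k) = (x(k) - A(k,k+1:n) * x(k+1:n)) / A(k,k);
end
end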
(c) (10 points) Use your code to solve the n × n system Ax = b, where A = A0 + 10^{−8} I with A0 defined in 3(a), b = Ae, e ≡ [1, 1, . . . , 1]^T, and n = 30. So x = e is the exact solution. Suppose your computed solution is x_c; compute ‖x − x_c‖_∞/‖x_c‖_∞.
Note: The reason we consider solving Ax = b is that if we solve A0x = b, we may not see any rounding errors because the elements of A0 are integers, so we cannot see how the growth factor affects the accuracy of the computed solution.
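With the hypothetical gepp_solve sketched after 3(b), the experiment in (c) could be run as follows.

% Sketch for 3(c): build A0 from 3(a), perturb it, solve, and measure the error.
n  = 30;
A0 = eye(n) - tril(ones(n), -1);     % 1 on the diagonal, -1 strictly below
A0(:,n) = 1;                         % last column of ones
A  = A0 + 1e-8 * eye(n);
e  = ones(n,1);
b  = A * e;                          % so x = e is the exact solution
[xc, rho] = gepp_solve(A, b);
fprintf('growth factor = %.3e, relative error = %.3e\n', ...
        rho, norm(e - xc, inf) / norm(xc, inf));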
(d) (5 points) Iterative refinement is an established technique for improving a computed solution x_c to a linear system Ax = b. The process consists of 3 steps:
1. Compute r = b − Ax_c.
2. Solve Ad = r.
3. Update y = x_c + d.
(Repeat from step 1 if necessary with x_c replaced by y.)
If there were no rounding errors in the computation of r, d and y, then y would be the exact solution to the system. The idea behind iterative refinement is that if r and d are computed accurately enough, then some improvement in the accuracy of the solution will be obtained. Notice that when we solve Ad = r we can reuse the LU factorization which was used in computing x_c, so iterative refinement is economical if only a small number of iterative steps is required.
Now use the iterative refinement technique to refine your computed solution x_c obtained in 3(c). Your iteration should stop when ‖d‖_2/‖x_c‖_2 ≤ τ, where τ is a tolerance (try τ = 10^{−15} in your computation) and x_c is the latest computed solution. Compute the relative error ‖x_c − x‖_∞/‖x_c‖_∞ for your final solution, and report the number of iterative steps.
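A minimal sketch of the refinement loop follows; MATLAB's built-in lu is used here only as a placeholder so that the stored factors can be reused across steps, where your own code would reuse the factorization computed in 3(c), and the cap of 10 steps is just an added safeguard.

% Sketch for 3(d): iterative refinement reusing one LU factorization.
n  = 30;
A0 = eye(n) - tril(ones(n), -1);  A0(:,n) = 1;
A  = A0 + 1e-8 * eye(n);
e  = ones(n,1);   b = A * e;
[L, U, P] = lu(A);                % factor once: P*A = L*U (placeholder for your GEPP)
xc  = U \ (L \ (P*b));            % initial computed solution, as in 3(c)
tau = 1e-15;
steps = 0;
while true
    r  = b - A*xc;                % step 1: residual
    d  = U \ (L \ (P*r));         % step 2: correction, reusing the stored factors
    xc = xc + d;                  % step 3: update
    steps = steps + 1;
    if norm(d)/norm(xc) <= tau || steps >= 10   % stopping test (10-step safeguard)
        break;
    end
end
fprintf('steps = %d, relative error = %.3e\n', ...
        steps, norm(xc - e, inf) / norm(xc, inf));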