Oorsig / Overview
Die module fokus op numeriese metodes vir matriksbewerkings. Ons kyk na die effektiewe oplos van vierkantige lineêre stelsels, kleinste-kwadrate probleme, en die eiewaarde probleem. Direkte sowel as iteratiewe metodes word behandel, met die klem op yl matrikse en matrikse met struktuur. Slaggate soos numeriese onstabiliteit en sleg-geaardheid word uitgewys. Ons beskou modelprobleme uit parsiële differensiaalvergelykings en beeldverwerking. Teorie, algoritmiese aspekte en toepassings word in gelyke mate beklemtoon.

The module focuses on numerical methods for matrix computations. We look at the efficient solution of square linear systems, least squares problems, and the eigenvalue problem. We consider direct as well as iterative methods, with special attention to sparse matrices and structured matrices. Pitfalls such as numerical instability and ill-conditioning are pointed out. Model problems are drawn from partial differential equations and image processing. Theory, algorithmic aspects, and applications are emphasised in equal measure.

Test information


Dosente / Lecturers

Dosent / Lecturer      | Kantoor / Office | E-pos / Email
Prof. André Weideman   | A315             | weideman at sun
Dr Stéfan van der Walt | A314             | stefan at sun

Lectures

Week | Topic (all section numbers refer to the book of Ascher & Greif)
1  | Review of undergraduate linear algebra (4.1).
2  | Vector and matrix norms (4.2), special classes of matrices (4.3), singular values (4.4).
3  | Gaussian elimination and back substitution (5.1), LU decomposition (5.2).
4  | Pivoting strategies (5.3), Cholesky decomposition (5.5).
5  | Sparse matrices (5.6), permutations and ordering strategies (5.7), estimating errors and the condition number (5.8).
6  | Least squares and the normal equations (6.1), orthogonal transformations and QR (6.2).
7  | Test week.
8  | Vacation.
9  | Householder transformations and Gram-Schmidt orthogonalization (6.3).
10 | Stationary iterative methods: Jacobi, Gauss-Seidel, SOR (7.1-7.3).
11 | Method of steepest descent (7.4).
12 | Methods of conjugate directions and conjugate gradients (7.4 + Shewchuk). See also the paper by Hestenes & Stiefel.
13 | Convergence of conjugate gradients.
14 | Other Krylov iterations (Arnoldi, Lanczos).
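The conjugate gradients method of weeks 12-13 can be summarised in a few lines of code. The following is a minimal sketch of the textbook algorithm for a symmetric positive definite system, with an illustrative tridiagonal test matrix; it is not code from the course materials.

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, maxiter=1000):
    """Minimal CG sketch for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    p = r.copy()                        # first search direction
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length minimising along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # keep directions A-conjugate
        rs_old = rs_new
    return x

# Illustrative SPD system: the 1D discrete Laplacian
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradients(A, b)
```

In exact arithmetic CG terminates in at most n iterations; the convergence analysis in week 13 explains why far fewer iterations usually suffice for well-conditioned or clustered spectra.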

Assignments

Nr | Topic | Hand-in
1 | Properties of the eigenvalues & eigenvectors of square matrices; vector and matrix norms. (Solutions) | 17 February
2 | Gaussian elimination; properties of matrices arising from the discretisation of PDEs. (Solutions, sptools.py) Please note that there are two 9-point discretisations of the Laplacian; both are outlined in the textbook by Saad and shown here, and you can use either one. | 1 March
3 | Cholesky factorisation; instability in Gaussian elimination; re-ordering strategies; ill-conditioning. LU with complete pivoting: (MATLAB, Python). The MATLAB code is provided by Nick Henderson, Stanford University. | 28 March (originally 18 March)
Test | Memo |
4 | Least squares; Gram-Schmidt; Householder. (Solutions) | 7 April
5 | Stationary iterative methods; optimal SOR parameter; preconditioners. | 21 April
6 | Conjugate gradients; preconditioned conjugate gradients; SSOR. | 16 May (hand in at A314)
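Several assignments above involve discretisations of the Laplacian. As a hedged sketch (the standard 5-point stencil, not either of the 9-point variants from Saad mentioned in assignment 2), the 2D matrix can be assembled from the 1D operator with a tensor-product construction:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_5pt(n):
    """5-point discrete Laplacian on an n-by-n interior grid (Dirichlet BCs).
    Sketch only; the 9-point variants differ in the stencil weights."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))  # 1D second difference
    I = sp.identity(n)
    return sp.kron(I, T) + sp.kron(T, I)                 # 2D via Kronecker products

A = laplacian_5pt(3)   # 9-by-9 sparse matrix
```

The Kronecker-product form makes the sparsity structure explicit: each row has at most five nonzeros, which is exactly the regime where the ordering strategies of week 5 pay off.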

The following Python code may come in handy for specifying a preconditioner, P, since the M argument to GMRES, PCG, etc. must apply the inverse of the preconditioner (the operator M below):

      import scipy.sparse.linalg as spla

      # M applies the action of P^{-1} to a vector, without forming the inverse
      P_inv = lambda x: spla.spsolve(P, x)
      M = spla.LinearOperator((n, n), matvec=P_inv)
    
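A hedged end-to-end sketch of how such an operator is used, assuming an illustrative 1D-Laplacian test system and a Jacobi (diagonal) preconditioner, neither of which comes from the course materials:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Illustrative SPD system: 1D discrete Laplacian
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Jacobi preconditioner: just the diagonal of A (an assumed, simple choice)
P = sp.diags(A.diagonal(), format="csc")
M = spla.LinearOperator((n, n), matvec=lambda v: spla.spsolve(P, v))

# Pass M to the Krylov solver; info == 0 signals convergence
x, info = spla.cg(A, b, M=M)
```

The same M keyword works for spla.gmres; the solver never sees P itself, only the action of its inverse.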

Golden Rules of Numerical Linear Algebra

Tentative marks for the second test can be found here. Please note that these marks are subject to an external moderation process and are therefore not yet official. We expect to post the complete list of assignment marks here on 08/06/2011.

Handboek / Textbook

A First Course in Numerical Methods by Uri Ascher & Chen Greif.

Further reading:
Iterative methods for sparse linear systems, Yousef Saad, 2nd ed.

Lesings / Lectures (A409)

Mon 09:00-09:50
Tue 12:00-12:50
Thu 12:00-12:50

Buiteskakels / External links

OCW Lectures

Gilbert Strang's video lectures are available for download from the local network.