High-Performance Computing on Clusters

NUMBER: n.n.
CODE: HPerfClus
MODULE COORDINATOR: Prof. Dr. Andreas Vogel
LECTURER: Jun.-Prof. Dr. Andreas Vogel
FACULTY: Fakultät für Bau- und Umweltingenieurwissenschaften
LANGUAGE: English
SEMESTER HOURS: 4 SWS
CREDITS: 6 CP
WORKLOAD: 180 h
OFFERED IN: every winter semester

COMPONENTS AND COURSE TYPE

a) Lecture: High-Performance Computing on Clusters
b) Exercises

EXAMINATIONS

FORM: written
REGISTRATION:
DATE: 0000-00-00
START: 00:00:00
DURATION: 120 min
ROOM:

TEACHING FORMAT

Projector-based lecture, computer lab, numerical experiments

LEARNING OBJECTIVES

In this module, students acquire professional skills in programming and using parallel computing clusters. Theoretical properties of distributed-memory systems and the corresponding programming patterns are conveyed, as well as their practical implementation.

CONTENT

The lecture deals with parallelization on cluster computers. Distributed-memory programming concepts (MPI) are introduced, and best-practice implementations are presented based on applications from scientific computing, including the finite element method and machine learning. Special attention is paid to scalable solvers for systems of equations on distributed-memory systems, focusing on iterative schemes such as simple splitting methods (Richardson, Jacobi, Gauß-Seidel, SOR), Krylov methods (gradient descent, CG, BiCGStab) and, in particular, the multigrid method. The mathematical foundations of iterative solvers are reviewed, suitable object-oriented interface structures are designed, and an implementation of these solvers for modern parallel computer architectures is developed. Numerical experiments and self-developed software implementations are used to discuss and illustrate the theoretical results.
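To give a flavor of the splitting methods named above, the following is a minimal serial sketch of the Jacobi iteration in Python/NumPy; it is an illustration only, not course material, and the model matrix is a hypothetical diagonally dominant example chosen so that the iteration converges.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi splitting: x_{k+1} = D^{-1} (b - (A - D) x_k),
    where D is the diagonal part of A."""
    D = np.diag(A)          # diagonal entries of A as a vector
    R = A - np.diagflat(D)  # off-diagonal remainder A - D
    x = x0.astype(float)
    for k in range(max_iter):
        x_new = (b - R @ x) / D   # elementwise division by the diagonal
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Small diagonally dominant model problem (1D Poisson-like stencil);
# the exact solution of this system is x = (1, 1, 1).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([3.0, 2.0, 3.0])
x, iters = jacobi(A, b, np.zeros(3))
```

In the lecture's setting, the matrix-vector product `R @ x` is the operation that is distributed across cluster nodes with MPI, with each rank holding a block of rows and exchanging boundary entries with its neighbors.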

PREREQUISITES

None

PREREQUISITES FOR CREDITS

Passed final module examination

RECOMMENDED PRIOR KNOWLEDGE

None

LITERATURE

∙ W. Hackbusch, Iterative Solution of Large Sparse Systems of Equations, Springer, 1994
∙ Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2003
∙ MPI: A Message-Passing Interface Standard, Version 3.1 (2015), www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf
∙ W. Gropp, E. Lusk, A. Skjellum, Using MPI, MIT Press, 2014
∙ M. Snir, S. Otto, S. Huss-Lederman, D. Walker, J. Dongarra, MPI – The Complete Reference, MIT Press, 1998
∙ T. Rauber, G. Rünger, Parallel Programming: for Multicore and Cluster Systems, Springer, 2013
∙ C. Douglas, G. Haase, U. Langer, A Tutorial on Elliptic PDE Solvers and Their Parallelization, SIAM, 2003
∙ Additional literature will be announced in the lecture

CURRENT INFORMATION

OTHER INFORMATION