Theoretical computer science deals with the theoretical foundations of computer science. Two core areas are complexity theory and algorithmics. Researchers in complexity theory ask which problems can be solved efficiently, and by what means. Based on a formal computational model, this question gives rise to classes of problems of different complexities. Complexity theory investigates how these classes relate to each other.

Algorithmics considers methods for the design and analysis of efficient algorithms. The two fields, algorithmics and complexity theory, are closely related. Algorithms are in turn closely related to data structures, since some efficient algorithms require clever data structures; data structures, on the other hand, require clever algorithms for their construction, modification, and querying.

## Challenges

Many problems include a geometric component, e.g. querying numerical data in a database or searching for a shortest path. Algorithms for such problems are called geometric algorithms for short. In addition to general techniques for efficient algorithms, techniques based on the geometric properties of the problem are used here. Geometric algorithms therefore lie at the intersection of mathematics and computer science. Many of the questions considered are motivated by applications. The more complex a problem, the less likely it is to admit an efficient solution. The challenge is to find efficient solutions nevertheless.

## Focus areas

The group Theoretical Computer Science, led by Prof. Dr. Maike Buchin at Ruhr University Bochum, considers in particular problems on polygonal curves, embedded graphs, and simple polygons. One subject of the research is how these objects can be compared, summarized, or decomposed into homogeneous parts.

**Comparison of geometric objects**: A fundamental question is to determine the similarity of geometric objects, such as point sets, polygonal curves, or triangulated surfaces. Different similarity measures exist for this purpose, and the cost of computing them efficiently depends on the complexity of the measure and of the objects in question.
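One widely used similarity measure for polygonal curves is the discrete Fréchet distance. As an illustration only (not a method specific to the group's work), here is a minimal Python sketch of the classic dynamic-programming computation, assuming curves are given as lists of point tuples:

```python
from functools import lru_cache
from math import dist

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polygonal curves p and q,
    given as non-empty lists of point tuples, e.g. [(x, y), ...]."""

    @lru_cache(maxsize=None)
    def coupling(i, j):
        # Cost of the best coupling of prefixes p[0..i] and q[0..j]:
        # the current pair's distance, combined with the cheapest way
        # to have reached (i, j) by advancing along p, q, or both.
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(coupling(0, j - 1), d)
        if j == 0:
            return max(coupling(i - 1, 0), d)
        return max(min(coupling(i - 1, j),
                       coupling(i - 1, j - 1),
                       coupling(i, j - 1)), d)

    return coupling(len(p) - 1, len(q) - 1)
```

This quadratic-time recurrence illustrates why the complexity of the measure matters: even the basic computation touches every pair of vertices, which motivates the search for faster exact or approximate algorithms.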

**Analysis of movement data**: Nowadays, more and more movement data, i.e. discrete samples of continuous movements, is being collected. Geometrically, such data can be regarded as polygonal curves in space and time, which typically contain noise and are embedded in a geographic context. Analyzing large amounts of such data requires suitable, efficient algorithms.
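A common preprocessing step for such noisy polygonal curves is simplification. As a hedged illustration of the general idea (not the group's specific method), a minimal sketch of the well-known Douglas-Peucker algorithm for 2D curves given as lists of (x, y) tuples:

```python
from math import hypot

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(curve, eps):
    """Simplify a polygonal curve: drop vertices that deviate at most
    eps from the simplified curve, keeping both endpoints."""
    if len(curve) < 3:
        return list(curve)
    a, b = curve[0], curve[-1]
    # Find the vertex farthest from the segment between the endpoints.
    i, dmax = max(((i, point_segment_dist(curve[i], a, b))
                   for i in range(1, len(curve) - 1)), key=lambda t: t[1])
    if dmax <= eps:
        return [a, b]
    # Recurse on both halves around the farthest vertex.
    left = douglas_peucker(curve[:i + 1], eps)
    right = douglas_peucker(curve[i:], eps)
    return left[:-1] + right
```

Simplification reduces noise and data volume before further analysis; choosing the tolerance `eps` trades fidelity to the original movement against the size of the output.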

**Analysis of geometric networks**: Networks, for instance street or river networks, occur in many contexts. Their analysis raises different questions, for example, how to compare the quality of two networks, or how to reconstruct a network from a large amount of movement data. Here, too, the computational complexity varies greatly with the problem considered.