Prof. Dr. Klaus Meyer-Wegener

Chair of Computer Science

I work on query processing in databases and on the use of artificial intelligence to boost database performance. On the application side, I investigate the potential of collecting the queries first and then deriving the database design from them.

Research projects

  • Query-driven database design, integration, and optimization
  • Using modern hardware (FPGAs) to speed up database-query processing
  • Investigating AI methods for the improvement of database technology (e.g., autoencoders)
  • Building a repository of neural-ne

Current projects

  • SKYSHARK – Benchmarking Data Processing Systems Using Real-Time Flight Data

    (Own Funds)

    Term: since May 16, 2023

    To test and evaluate a heterogeneous stream-processing system consisting of an FPGA-based system-on-chip and a host, we develop a benchmark called SKYSHARK. It uses publicly available real-world data from air-traffic control. These data are augmented for the purposes of the benchmark without changing their characteristics, and are further enriched with aircraft and airport data. We define 14 queries with respect to the particular requirements of our system; they should be useful for other hardware-accelerated platforms as well. A first evaluation has been done using Apache Flink. We see great potential in the flexibility of the approach.
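To give a flavour of the kind of streaming computation such a benchmark exercises, here is a minimal Python sketch of a tumbling-window aggregation over flight position reports. This is plain Python rather than Flink, and the field names (timestamp, ICAO24 aircraft identifier) and window size are illustrative, not the actual SKYSHARK schema or queries.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Count position reports per aircraft in non-overlapping
    (tumbling) time windows of `window_s` seconds.

    `events` is an iterable of (timestamp_s, icao24) pairs; these
    field names are assumptions, not the SKYSHARK schema.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, icao24 in events:
        windows[ts // window_s][icao24] += 1
    # Key each window by its start time, with plain dicts inside.
    return {w * window_s: dict(per_ac) for w, per_ac in windows.items()}

events = [(0, "3c6444"), (10, "3c6444"), (65, "4ca7b0"), (70, "3c6444")]
print(tumbling_window_counts(events))
# → {0: {'3c6444': 2}, 60: {'4ca7b0': 1, '3c6444': 1}}
```

In a stream-processing engine such as Flink, the same logic would be expressed with a key-by on the aircraft identifier followed by a tumbling-window count.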

  • Generation of Symbol Tables for String Compression with Frequent-Substring Trees

    (Own Funds)

    Term: since September 19, 2022

    With the ongoing rise in global data volumes, database compression is becoming increasingly relevant. While the compression of numeric data types has been extensively researched, the compression of strings has only recently received renewed scientific attention.

    A promising approach to string compression is the use of symbol tables, where recurring substrings within a database are substituted with short codes. A corresponding table enables the smooth reconstruction of the original data. This method is distinguished by short compression and decompression times, although the compression rate heavily depends on the quality of the symbol table.
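The substitution scheme described above can be sketched in a few lines. This is a deliberately simplified illustration (single-byte codes, a zero-byte escape for uncovered ASCII characters); production symbol-table compressors use larger code spaces and are engineered for speed.

```python
def compress(text, table):
    """Greedy longest-match substitution of substrings with one-byte
    codes. `table` maps substring -> code byte (codes must be > 0;
    0 is used as an escape for characters not covered by the table).
    ASCII input is assumed in this sketch."""
    entries = sorted(table, key=len, reverse=True)  # prefer long matches
    out, i = bytearray(), 0
    while i < len(text):
        for s in entries:
            if text.startswith(s, i):
                out.append(table[s])
                i += len(s)
                break
        else:
            out.append(0)            # escape marker
            out.append(ord(text[i])) # raw character
            i += 1
    return bytes(out)

def decompress(data, table):
    """Invert the table and expand codes back to substrings."""
    inv = {code: s for s, code in table.items()}
    out, i = [], 0
    while i < len(data):
        if data[i] == 0:
            out.append(chr(data[i + 1])); i += 2
        else:
            out.append(inv[data[i]]); i += 1
    return "".join(out)

table = {"http://": 1, "www.": 2, ".com": 3}
enc = compress("http://www.example.com", table)
print(len(enc), len("http://www.example.com"))  # → 17 22
```

The example also shows why the compression rate depends entirely on the table: only the substrings it covers shrink, while everything else pays the escape overhead.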

    The research project FST focuses on the creation of optimized symbol tables that maximize the compression rate. For this purpose, the eponymous Frequent-Substring Trees are constructed: a trie-like data structure that holds all potential table entries and, through the use of metadata, enables the identification of optimal entries.

    The primary objective of the research project is to increase the compression rate of string compression methods without significantly affecting the compression and decompression times.
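A rough sketch of the candidate-selection idea: count how often each substring occurs and rank candidates by estimated savings. The sketch uses a plain counter where the Frequent-Substring Tree would store the same statistics prefix-shared in a trie, and a simple count * (length - 1) savings estimate that ignores overlaps; it is an illustration of the idea, not the project's actual algorithm.

```python
from collections import Counter

def substring_counts(corpus, max_len=8):
    """Count all substrings of length 2..max_len across the corpus.
    (An FST stores these trie-style; a Counter yields the same
    statistics for this illustration.)"""
    counts = Counter()
    for s in corpus:
        for i in range(len(s)):
            for j in range(i + 2, min(i + max_len, len(s)) + 1):
                counts[s[i:j]] += 1
    return counts

def build_symbol_table(corpus, size=255, max_len=8):
    """Rank candidates by estimated savings: replacing `count`
    occurrences of an n-character substring with a one-byte code
    saves roughly count * (n - 1) bytes (overlaps ignored here)."""
    counts = substring_counts(corpus, max_len)
    ranked = sorted(counts, key=lambda s: counts[s] * (len(s) - 1),
                    reverse=True)
    return ranked[:size]

corpus = ["http://a.com", "http://b.com", "http://c.org"]
print(build_symbol_table(corpus, size=1))  # → ['http://']
```

The hard part, which the metadata in the tree addresses, is that chosen entries interact: once one substring is replaced, the counts of its overlapping competitors change.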

  • Query Optimisation and Near-Data Processing on Reconfigurable SoCs for Big Data Analysis (Phase II)

    (Third Party Funds Single)

    Term: August 1, 2021 – July 31, 2024
    Funding source: Deutsche Forschungsgemeinschaft (DFG)
    Analysing petabytes of data within affordable time and energy budgets requires massively parallel processing of data at their source. Active research is therefore directed towards emerging hardware architectures to reduce the volume of data close to their source and towards sound query analysis and optimisation techniques to exploit such novel architectures. The goal of the ReProVide project is to investigate FPGA-based solutions for smart storage and near-data processing together with novel query-optimisation techniques that exploit the speed and reconfigurability of FPGA hardware for a scalable and powerful (pre-)filtering of Big Data.
    In the first funding phase, we have laid the foundations for this endeavour. In particular, we have designed an FPGA-based PSoC architecture of so-called Reconfigurable Data Provider Units (RPUs). For data processing and filtering, an RPU exploits the capabilities of dynamic (run-time) hardware reconfiguration of modern FPGAs to load pre-designed hardware accelerators on the fly. An RPU is able to process SQL queries or parts of them in hardware, in combination with CPU cores also available on the PSoC. For the integration of RPUs into a DBMS, new cost models had to be developed that take the capabilities and characteristics of an RPU into account. Here, we have elaborated a novel hierarchical (multi-level) query optimisation to determine which operations are worth assigning to an RPU (query partitioning) and how to deploy and execute the assigned (sub-)queries or database operators on the RPU (query placement). The implemented query optimiser shares the work between the global optimiser of the DBMS (in our case Apache Calcite) and an architecture-specific local optimiser running on the RPU.
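The query-partitioning decision can be illustrated with a minimal cost-based sketch: offload a leading chain of operators to the RPU as long as the estimated RPU cost plus the cost of shipping the intermediate result beats host-only execution. The cost numbers and operator names below are invented for illustration; this is not the ReProVide cost model.

```python
def partition_query(operators, rpu_cost, host_cost, transfer_cost):
    """Pick the split point k that minimises estimated total cost:
    operators[:k] run on the RPU, the intermediate result is shipped,
    and operators[k:] finish on the host. k = 0 means host-only."""
    best_split = 0
    best_cost = sum(host_cost(op) for op in operators)  # host-only plan
    for k in range(1, len(operators) + 1):
        cost = (sum(rpu_cost(op) for op in operators[:k])      # run on RPU
                + transfer_cost(operators[k - 1])              # ship result
                + sum(host_cost(op) for op in operators[k:]))  # finish on host
        if cost < best_cost:
            best_split, best_cost = k, cost
    return operators[:best_split], operators[best_split:]

# Invented example costs: scan and filter are cheap on the RPU and the
# filter shrinks the data, so shipping after it is cheap; the join is not.
rpu = {"scan": 1, "filter": 1, "project": 2, "join": 50}
host = {"scan": 10, "filter": 5, "project": 2, "join": 8}
ship = {"scan": 8, "filter": 2, "project": 2, "join": 1}
plan = partition_query(["scan", "filter", "project", "join"],
                       rpu.get, host.get, ship.get)
print(plan)  # → (['scan', 'filter'], ['project', 'join'])
```

The real optimiser explores a richer plan space, but the sketch captures the intuition: (pre-)filtering near the data pays off exactly when it reduces what must be transferred.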
    In the second funding phase, our major research goals will be: 
    1.) Stream processing: RPUs could be equally beneficial for the filtering of streams. Here, fundamentally new module functionality will have to be investigated to support non-standard operators, leading to RPUs applicable to a much more diverse class of tasks, including window operations and data-preparation functionality. 
    2.) Scalability: User interaction with modern databases usually involves not just one query, but a sequence of queries. At the same time, multiple applications run concurrently. Here, we will design an eight-node RPU cluster attached to storage and network to enable the distributed and parallel processing of large databases and data streams. Also required are concepts for data partitioning and novel query-optimisation techniques that make use of query-sequence information. 
    3.) Demonstrator & Evaluation: As a testbed and a proof of the benefits of the ReProVide approach in general and an FPGA-based RPU cluster in particular, we want to analytically as well as experimentally evaluate the margins of energy reductions that become possible through near-data processing.
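One simple way query-sequence information (point 2 above) could be exploited is sketched below: find the predicate shared by the most upcoming queries, a candidate for keeping its (pre-)filtered result cached on the RPU. The predicate strings and the caching policy are hypothetical illustrations, not the project's actual technique.

```python
from collections import Counter

def pick_cached_filter(query_sequence):
    """Given a sequence of queries, each represented as a set of
    predicate strings (an assumption for this sketch), return the
    predicate shared by the most queries, or None if nothing repeats."""
    counts = Counter(p for preds in query_sequence for p in set(preds))
    pred, n = counts.most_common(1)[0]
    return pred if n > 1 else None

seq = [{"year=2023", "lang=en"}, {"year=2023"}, {"year=2023", "region=EU"}]
print(pick_cached_filter(seq))  # → year=2023
```

A real policy would also weigh result sizes and reconfiguration costs, but the sketch shows why looking beyond a single query opens optimisation choices a per-query optimiser cannot see.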
