Prof. Dr. Andreas Maier

Chair of Pattern Recognition

Our research interests focus on medical imaging, image and audio processing, digital humanities, interpretable machine learning, and the use of known operators.

Research projects

  • Image Analysis and Fusion
  • Learning Algorithms for Medical Big Data Analysis (LAMBDA)
  • Magnetic Resonance Imaging (MRI)
  • Speech Processing and Understanding
  • Development of a guideline for the three-dimensional non-destructive acquisition of manuscripts
  • Intelligent MR Diagnosis of the Liver by Linking Model and Data-driven Processes (iDELIVER)
  • Molecular Assessment of Signatures ChAracterizing the Remission of Arthritis
  • Improved dual energy imaging using machine learning

  • ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research

    (Third Party Funds Group – Sub project)

    Overall project: ODEUROPA
    Term: 1 January 2021 - 31 December 2022
    Funding source: EU 8th Framework Programme (Horizon 2020)
    URL: https://odeuropa.eu/

    Our senses are gateways to the past. Although museums are slowly discovering the power of multi-sensory presentations, we lack the scientific standards, tools and data to identify, consolidate, and promote the wide-ranging role of scents and smelling in our cultural heritage. In recent years, European cultural heritage institutions have invested heavily in large-scale digitization. A wealth of object, text and image data that can be analysed using computer science techniques now exists. However, the potential olfactory descriptions, experiences, and memories that they contain remain unexplored. We recognize this as both a challenge and an opportunity.

    Odeuropa will apply state-of-the-art AI techniques to text and image datasets that span four centuries of European history. It will identify the vocabularies, spaces, events, practices, and emotions associated with smells and smelling. The project will curate this multi-modal information, following semantic web standards, and store the enriched data in a ‘European Olfactory Knowledge Graph’ (EOKG).

    We will use this data to identify ‘storylines’, informed by cultural history and heritage research, and share these with different audiences in different formats: through demonstrators, an online catalogue, toolkits and training documentation describing best practices in olfactory museology. New, evidence-based methodologies will quantify the impact of multisensory visitor engagement. This data will support the implementation of policy recommendations for recognising, promoting, presenting and digitally preserving olfactory heritage. These activities will realize Odeuropa’s main goal: to show that smells and smelling are important and viable means for consolidating and promoting Europe’s tangible and intangible cultural heritage.
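
    As a minimal sketch of what an entry in such a knowledge graph could look like, the Python snippet below records one hypothetical ‘smell event’ as RDF triples with rdflib. The eokg namespace and all property names are illustrative placeholders, not the project's actual ontology.

    ```python
    # Minimal sketch: storing one extracted "smell event" as RDF triples with rdflib.
    # The namespace and property names are illustrative placeholders, not the
    # actual Odeuropa ontology.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EOKG = Namespace("http://example.org/eokg/")  # hypothetical namespace

    g = Graph()
    g.bind("eokg", EOKG)

    event = EOKG["smell-event-001"]
    g.add((event, RDF.type, EOKG.SmellEvent))
    g.add((event, EOKG.smellSource, Literal("tobacco")))
    g.add((event, EOKG.perceivedQuality, Literal("pungent")))
    g.add((event, EOKG.attestedIn, Literal("travel diary, Amsterdam")))
    g.add((event, EOKG.year, Literal(1687, datatype=XSD.integer)))

    print(g.serialize(format="turtle"))
    ```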

  • Intelligent MR Diagnosis of the Liver by Linking Model and Data-driven Processes (iDELIVER)

    (Third Party Funds Single)

    Term: 3 August 2020 - 31 March 2023
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    The project examines the use and further development of machine learning methods for MR image reconstruction and for the classification of liver lesions. Starting from a comparison of model-driven and data-driven image reconstruction methods, the two approaches are to be systematically linked in order to enable high acceleration without sacrificing diagnostic value. In addition to the design of suitable networks, the project will also investigate whether metadata (e.g. patient age) can be incorporated into the reconstruction. Furthermore, suitable image-based classification algorithms are to be developed and the potential of direct classification on the raw data explored. In the long term, intelligent MR diagnostics can significantly increase the efficiency of MR hardware use, ensure better patient care and provide new impetus in medical technology.
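
    One common way to link model-driven and data-driven reconstruction is an unrolled network that alternates a physics-based data-consistency step with a learned regularizer. The PyTorch sketch below shows a single such iteration under simplifying assumptions (a masked 2-D FFT as the forward operator, a toy CNN regularizer); it is not the project's actual architecture.

    ```python
    # Sketch of one unrolled step linking a model-driven data-consistency update
    # (masked FFT forward operator) with a data-driven CNN regularizer (PyTorch).
    import torch
    import torch.nn as nn


    class UnrolledStep(nn.Module):
        def __init__(self):
            super().__init__()
            # small CNN as learned regularizer on the image (2 channels: re/im)
            self.regularizer = nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1),
            )
            self.step_size = nn.Parameter(torch.tensor(0.5))

        def forward(self, x, kspace, mask):
            # x: (B, 2, H, W) current estimate; kspace: measured data; mask: sampling pattern
            xc = torch.complex(x[:, 0], x[:, 1])
            residual = mask * (torch.fft.fft2(xc) - kspace)
            grad = torch.fft.ifft2(residual)  # model-driven: enforce data consistency
            grad = torch.stack([grad.real, grad.imag], dim=1)
            x = x - self.step_size * grad
            return x - self.regularizer(x)    # data-driven: learned correction


    x = torch.zeros(1, 2, 64, 64)                 # initial estimate
    kspace = torch.fft.fft2(torch.randn(1, 64, 64, dtype=torch.complex64))
    mask = (torch.rand(1, 64, 64) < 0.3).float()  # 30% random sampling
    print(UnrolledStep()(x, kspace, mask).shape)  # (1, 2, 64, 64)
    ```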

  • Provision of an infrastructure for training students on an IBM z/OS operating system

    (FAU Funds)

    Term: 2 April 2020 - 31 March 2025
    Funding source: Friedrich-Alexander-Universität Erlangen-Nürnberg
  • Molecular Assessment of Signatures ChAracterizing the Remission of Arthritis

    (Third Party Funds Single)

    Term: 1 April 2020 - 30 September 2022
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    MASCARA aims at a detailed molecular characterization of remission in arthritis. The project builds on the combined clinical and technical expertise of rheumatologists, radiologists, medical physicists, nuclear medicine specialists, gastroenterologists, basic-science biologists and computer scientists, and connects five academic centres in Germany. It addresses 1) the growing number of arthritis patients in remission, 2) the challenge of distinguishing effective suppression of inflammation from a true cure, and 3) the limited knowledge about tissue changes in the joints of arthritis patients. Based on preliminary data, MASCARA will investigate four key mechanistic areas (immunometabolic changes, mesenchymal tissue responses, resident immune cells and the protective function of the gut) that jointly determine the molecular state of remission. The project will collect synovial biopsies from patients with active arthritis and patients in remission and perform subsequent tissue analyses. These analyses comprise (single-cell) mRNA sequencing, mass cytometry and the measurement of immune metabolites, complemented by molecular imaging techniques such as CEST MRI and FAPI PET. All data generated in the project will be merged with the data of the other partners and stored in an existing database system. With the help of machine learning, the merged data will be used to identify disease-specific patterns and patterns associated with disease activity.
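
    As a rough illustration of that final analysis step, the snippet below fuses per-modality feature blocks and cross-validates a linear classifier with scikit-learn. All data, feature counts and the choice of model are synthetic placeholders, not the project's pipeline.

    ```python
    # Illustrative sketch only: early fusion of per-modality feature blocks
    # (mRNA-seq summaries, mass cytometry, imaging readouts) and cross-validation
    # of a classifier for remission vs. active arthritis. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_patients = 60
    rna = rng.normal(size=(n_patients, 200))      # mRNA-seq summary features
    cytof = rng.normal(size=(n_patients, 40))     # mass cytometry features
    imaging = rng.normal(size=(n_patients, 10))   # CEST-MRI / FAPI-PET readouts
    X = np.hstack([rna, cytof, imaging])          # simple early fusion
    y = rng.integers(0, 2, size=n_patients)       # 1 = remission, 0 = active

    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l2", max_iter=1000))
    print(cross_val_score(model, X, y, cv=5).mean())
    ```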

  • Deep-learning-based segmentation and landmark detection in X-ray images for trauma surgery procedures

    (Third Party Funds Single)

    Term: since 6 May 2019
    Funding source: Siemens AG
  • Improving multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information

    (Non-FAU Project)

    Term: 1 April 2019 - 30 April 2022

    This project aims to improve multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information. Such improvements include noise reduction and artifact removal from data acquired in SPECT.
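
    A minimal sketch of such a denoiser, assuming a residual CNN operating on reconstructed 2-D slices (PyTorch); depth, channel counts and input size are illustrative, not the project's actual model.

    ```python
    # Minimal sketch of a residual CNN denoiser for reconstructed SPECT slices.
    import torch
    import torch.nn as nn


    class ResidualDenoiser(nn.Module):
        def __init__(self, channels=32, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            # predict the noise component and subtract it (residual learning)
            return x - self.net(x)


    noisy = torch.randn(1, 1, 128, 128)  # one synthetic noisy slice
    print(ResidualDenoiser()(noisy).shape)
    ```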

  • Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope

    (Third Party Funds Single)

    Term: 1 April 2019 - 31 March 2025
    Funding source: European Research Council (ERC)
    URL: https://cordis.europa.eu/project/id/810316

    Due to Europe's ageing society, there has been a dramatic increase in the occurrence of osteoporosis (OP) and related diseases. Sufferers have an impaired quality of life, and there is a considerable cost to society associated with the consequent loss of productivity and injuries. The current understanding of this disease needs to be revolutionized, but study has been hampered by a lack of means to properly characterize bone structure, remodeling dynamics and vascular activity. This project, 4D nanoSCOPE, will develop tools and techniques to permit time-resolved imaging and characterization of bone in three spatial dimensions (both in vitro and in vivo), thereby permitting monitoring of bone remodeling and revolutionizing the understanding of bone morphology and its function.

    To advance the field, in vivo high-resolution studies of living bone are essential, but existing techniques are not capable of this. By combining state-of-the-art image processing software with innovative 'precision learning' software methods to compensate for artefacts (due e.g. to the subject breathing or twitching), and innovative X-ray microscope hardware which together will greatly speed up image acquisition (the aim is a factor of 100), the project will enable in vivo X-ray microscopy studies of small animals (mice) for the first time. The time series of three-dimensional X-ray images will be complemented by correlative microscopy and spectroscopy techniques (with new software) to thoroughly characterize (serial) bone sections ex vivo.
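
    The 'precision learning' idea keeps known operators fixed inside the network and learns only the unknown components. The PyTorch sketch below illustrates this on a simplified example: the FFT along the detector axis stays fixed, while a 1-D reconstruction filter (initialized as a ramp) is the learnable part. Sizes and the CT-style setting are illustrative, not the project's implementation.

    ```python
    # Sketch of the precision-learning / known-operator idea: fixed FFTs stay in
    # the network; only the 1-D frequency-domain filter is learned.
    import torch
    import torch.nn as nn


    class LearnableSinogramFilter(nn.Module):
        def __init__(self, n_detector=256):
            super().__init__()
            # ramp (Ram-Lak) initialization; the filter is the learnable unknown
            freqs = torch.fft.fftfreq(n_detector)
            self.filter = nn.Parameter(freqs.abs())

        def forward(self, sinogram):
            # sinogram: (B, n_angles, n_detector)
            spectrum = torch.fft.fft(sinogram, dim=-1)    # known operator, fixed
            filtered = spectrum * self.filter             # learned component
            return torch.fft.ifft(filtered, dim=-1).real  # known operator, fixed


    layer = LearnableSinogramFilter()
    print(layer(torch.randn(2, 180, 256)).shape)
    ```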

    The resulting three-dimensional datasets combining structure, chemical composition, transport velocities and local strength will be used by the PIs and international collaborators to study the dynamics of bone microstructure. This will be the first time that this has been possible in living creatures, enabling an assessment of the effects on bone of age, hormones, inflammation and treatment.

  • Deep Learning based Noise Reduction for Hearing Aids

    (Third Party Funds Single)

    Term: 1 February 2019 - 31 January 2022
    Funding source: Industry

    Reduction of unwanted environmental noises is an important feature of today’s hearing aids, which is why noise reduction is nowadays included in almost every commercially available device. The majority of these algorithms, however, are restricted to the reduction of stationary noises. Due to the large number of different background noises in daily situations, it is hard to heuristically cover the complete solution space of noise reduction schemes. Deep learning-based algorithms pose a possible solution to this dilemma; however, they sometimes lack robustness and applicability in the strict context of hearing aids.
    In this project we investigate several deep learning-based methods for noise reduction under the constraints of modern hearing aids. This involves low-latency processing as well as the use of a hearing-instrument-grade filter bank. Another important aim is the robustness of the developed methods. Therefore, the methods will be applied to real-world noise signals recorded with hearing instruments.
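
    A common low-latency design estimates a per-band gain mask frame by frame with a causal recurrent network. The PyTorch sketch below illustrates this; the STFT stands in for a hearing-instrument-grade filter bank, and all sizes are illustrative.

    ```python
    # Sketch of causal, frame-wise mask-based noise reduction under low-latency
    # constraints. Frame length, hop and network size are illustrative.
    import torch
    import torch.nn as nn


    class CausalMaskEstimator(nn.Module):
        def __init__(self, n_bands=129, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_bands, hidden, batch_first=True)  # causal by construction
            self.out = nn.Linear(hidden, n_bands)

        def forward(self, mag):
            # mag: (B, frames, n_bands) magnitude spectrogram, frame by frame
            h, _ = self.rnn(mag)
            return torch.sigmoid(self.out(h))  # per-band gain in [0, 1]


    x = torch.randn(1, 16000)  # 1 s of audio at 16 kHz
    spec = torch.stft(x, n_fft=256, hop_length=128,
                      window=torch.hann_window(256), return_complex=True)
    mag = spec.abs().transpose(1, 2)        # (B, frames, bands)
    gain = CausalMaskEstimator()(mag)
    enhanced = spec * gain.transpose(1, 2)  # apply mask to complex spectrogram
    ```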

  • Magnetic Resonance Imaging Contrast Synthesis

    (Non-FAU Project)

    Term: since 1 January 2019

    Research project in cooperation with Siemens Healthineers, Erlangen

    A Magnetic Resonance Imaging (MRI) exam typically consists of several MR pulse sequences that yield different image contrasts. Each pulse sequence is parameterized through multiple acquisition parameters that influence MR image contrast, signal-to-noise ratio, acquisition time, and/or resolution.
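
    How acquisition parameters shape contrast can be made concrete with the textbook spin-echo signal model S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The snippet below evaluates it for two tissues under a T1-weighted protocol; the tissue values are typical textbook numbers, for illustration only.

    ```python
    # Worked example: the standard spin-echo signal equation links acquisition
    # parameters (TR, TE) and tissue parameters (PD, T1, T2) to image contrast.
    import math

    def spin_echo_signal(pd, t1, t2, tr, te):
        return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

    # white matter vs. CSF under a T1-weighted protocol (short TR/TE, in ms)
    wm = spin_echo_signal(pd=0.7, t1=600, t2=80, tr=500, te=15)
    csf = spin_echo_signal(pd=1.0, t1=4000, t2=2000, tr=500, te=15)
    print(f"white matter: {wm:.3f}, CSF: {csf:.3f}")  # WM brighter: T1 contrast
    ```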

    Depending on the clinical indication, different contrasts are required by the radiologist to make a reliable diagnosis. This complexity leads to high variation in sequence parameterization across different sites and scanners, impacting MR protocoling, AI training, and image acquisition.

    MR Image Synthesis

    The aim of this project is to develop a deep learning-based approach to generate synthetic MR images conditioned on various acquisition parameters (repetition time, echo time, image orientation). This work can support radiologists and technologists during the parameterization of MR sequences by previewing the resulting MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
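
    A minimal sketch of such conditioning, assuming the acquisition parameters are embedded and concatenated to a latent code before decoding (PyTorch); the architecture and parameter encoding are illustrative, not the project's model.

    ```python
    # Sketch of conditioning a generator on acquisition parameters: normalized
    # TR, TE and a one-hot orientation are embedded and injected into the decoder.
    import torch
    import torch.nn as nn


    class ConditionalGenerator(nn.Module):
        def __init__(self, latent=64, cond_dim=5):  # cond: [TR, TE, orientation(3)]
            super().__init__()
            self.embed = nn.Sequential(nn.Linear(cond_dim, 32), nn.ReLU())
            self.decode = nn.Sequential(
                nn.Linear(latent + 32, 128 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            )

        def forward(self, z, cond):
            return self.decode(torch.cat([z, self.embed(cond)], dim=1))


    cond = torch.tensor([[0.5, 0.1, 1.0, 0.0, 0.0]])  # normalized TR, TE, axial
    print(ConditionalGenerator()(torch.randn(1, 64), cond).shape)  # (1, 1, 32, 32)
    ```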

    MR Image-to-Image Translations

    As MR acquisition time is expensive, and re-scans may be necessary due to motion corruption or a prematurely ended scan for claustrophobic patients, a method to synthesize missing or corrupted MR image contrasts from existing MR images is required. Thus, this project aims to develop an MR contrast-aware image-to-image translation method, enabling us to synthesize missing or corrupted MR images with adjustable image contrast. Additionally, it can be used as an advanced data augmentation technique to synthesize different contrasts for the training of AI applications in MRI.
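
    One plausible way to make the translation contrast-aware is feature-wise linear modulation (FiLM), where the target acquisition parameters scale and shift intermediate features. The sketch below is illustrative only; the conditioning scheme and architecture are assumptions, not the project's method.

    ```python
    # Sketch of a contrast-aware image-to-image translator: FiLM conditioning on
    # target acquisition parameters (here normalized [TR, TE]).
    import torch
    import torch.nn as nn


    class FiLMTranslator(nn.Module):
        def __init__(self, channels=32, cond_dim=2):
            super().__init__()
            self.encode = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
            self.film = nn.Linear(cond_dim, 2 * channels)  # per-channel scale/shift
            self.decode = nn.Conv2d(channels, 1, 3, padding=1)

        def forward(self, image, cond):
            h = self.encode(image)
            gamma, beta = self.film(cond).chunk(2, dim=1)
            h = gamma[:, :, None, None] * h + beta[:, :, None, None]
            return self.decode(h)


    out = FiLMTranslator()(torch.randn(1, 1, 64, 64), torch.tensor([[0.5, 0.1]]))
    print(out.shape)  # (1, 1, 64, 64) image translated toward the target contrast
    ```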

  • Deep Learning Applied to Animal Linguistics

    (FAU Funds)

    Term: 1 April 2018 - 1 April 2022

    Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):

    For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, conclusions can be drawn about the behaviour, communication and social interactions of marine animals. Despite a large number of biological studies on orca vocalizations, it is still difficult to recognize structure or semantic/syntactic significance in orca signals in order to derive language and/or behavioural patterns. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still verified manually by marine biologists in order to detect potential orca vocalizations. In a post-processing step, the identified orca signals are analyzed and categorized. One of the main goals is to provide a robust method which automatically detects orca calls within underwater audio recordings. Robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. Combined with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), these patterns can yield information about communication (a kind of language model) and behaviours (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to give researchers in the field a more efficient way of finding the animals, as well as supporting individual recognition.
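
    A typical detection front end scores fixed-length audio windows from a spectrogram representation. The PyTorch/torchaudio sketch below illustrates this with a small CNN binary classifier; window length, sample rate and network size are illustrative and not DeepAL's actual pipeline.

    ```python
    # Sketch of a call detector: fixed-length windows are turned into log-mel
    # spectrograms and scored by a small CNN as "orca call" vs. "noise".
    import torch
    import torch.nn as nn
    import torchaudio


    class CallDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.spec = torchaudio.transforms.MelSpectrogram(
                sample_rate=44100, n_fft=2048, hop_length=512, n_mels=64)
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
                nn.Flatten(), nn.Linear(16 * 8 * 8, 1),
            )

        def forward(self, audio):
            # audio: (B, samples); returns a call probability per window
            spec = self.spec(audio).unsqueeze(1).clamp(min=1e-5).log()
            return torch.sigmoid(self.cnn(spec))


    window = torch.randn(1, 44100)  # one second of (synthetic) underwater audio
    print(CallDetector()(window))
    ```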

    For more information about the DeepAL project please contact christian.bergler@fau.de.
