Prof. Dr. Bernhard Kainz

Chair for Data, Sensors and Devices / Department Artificial Intelligence in Biomedical Engineering (AIBE)

My research focuses on intelligent algorithms in healthcare, especially medical imaging. I work on self-driving medical image acquisition that can guide human operators in real time during diagnostics; "Artificial Intelligence" is currently used as a blanket term for research in these areas.
We aim to democratize rare healthcare expertise through machine learning, providing real-time guidance during acquisition and second-reader expertise in retrospective analysis. We develop normative learning algorithms for large populations, integrating imaging, patient records, and omics, leading to data analysis that mimics human decision making.

Research projects

  • MAVEHA: Automated Fetal and Neonatal Movement Assessment for Very Early Health Assessment — a project analysing motion patterns of neonates to identify normal or pathological neurological development.
  • iFIND: Intelligent Fetal Imaging and Diagnosis — a project that aims to democratize healthcare expertise for prenatal fetal health screening with ultrasound imaging (and some magnetic resonance imaging).
  • KIKALU: KI-geführte Kartografie und Lokalisierung für Ultraschallbildgebung — guidance through AI agents for ultrasound imaging.
  • SENTINEL: Sensitive Evaluation of New Distribution Input with Normative Learning — development of normative learning algorithms for anomaly detection in medical image analysis.
  • CADDI: Computer-Assisted Disease Detection in Images — translation of AI-based medical image analysis into clinical practice, including federated and privacy-preserving learning.
  • RHD-Nepal: Low-cost portable AI-assisted echocardiography of Rheumatic Heart Disease by non-experts — AI support for healthcare professionals in developing countries.

Current projects

  • A Platform for Dynamic Exploration of the Cooperative Health Research in South Tyrol Study Data via Multi-Level Network Medicine

    (Third Party Funds Single)

    Term: 1. December 2023 - 30. November 2026
    Funding source: Deutsche Forschungsgemeinschaft (DFG)
    URL: https://www.dyhealthnet.ai/

    The Cooperative Health Research in South Tyrol (CHRIS) study offers a comprehensive overview of the health state of >13,000 adults in the middle and upper Val Venosta. It is the largest population-based molecular study in Italy, with a longitudinal design to investigate the genetic and molecular basis of age-related common chronic conditions and their interaction with lifestyle and environment in the general population. In CHRIS, the combination of molecular profiling data, such as genomics and metabolomics, with important baseline clinical and lifestyle data offers vast opportunities for understanding physiological changes that could lead to clinical complications, or indicate the prevalence or early onset of diseases together with their molecular underpinnings.

    Where disease-focused studies often have a clear hypothesis that dictates the necessary statistical analyses, population-based cohorts such as CHRIS are more versatile and allow both testing existing hypotheses and generating new hypotheses from statistically significant associations in the available data. Ideally, this type of explorative analysis is open to biomedical researchers who do not necessarily have experience with data analysis or machine learning. Network-based approaches are ideally suited for studying heterogeneous biomedical data, giving rise to the field of network medicine. However, network medicine techniques have so far mainly been used in studies focusing on individual diseases. Network-based platforms for the explorative analysis of population-based cohort data do not exist.

    In DyHealthNet, we will close this gap and develop a network-based data analysis platform that integrates heterogeneous data and supports explorative data analytics over dynamically generated subsets of the CHRIS study data. To fully leverage the potential of the available multi-level data, the DyHealthNet platform combines (1) data integration using standardized medical information models (HL7 FHIR), (2) innovative index structures for scalable dynamic analysis, (3) machine learning, and (4) visual analytics. DyHealthNet will render the CHRIS population cohort data accessible for state-of-the-art privacy-preserving, network-based data analysis. It will hence enable mining of context-specific pathomechanisms for precision medicine and serve as a blueprint for dynamic explorative analysis of multi-level cohort data worldwide. To achieve these objectives, the DyHealthNet consortium combines expertise in population-based cohort studies (Fuchsberger), the development of complex algorithms for the analysis of molecular networks (Blumenthal), applied biomedical AI and software systems (List), and customized index structures for scalable data management (Gamper).

  • Entwicklung einer innovativen Neurobandage mit integriertem Brain-Computer-Interface zur Überprüfung der Handfunktion

    (Third Party Funds Single)

    Term: 1. October 2023 - 31. October 2026
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)
  • Smart Wound Dressing incorporating Dye-based Sensors Monitoring of O2, pH and CO2 under the wound dressing and smart algorithms to assess the wound healing process

    (Third Party Funds Single)

    Term: 1. September 2023 - 31. July 2026
    Funding source: Bayerische Forschungsstiftung

    In Germany alone, the number of patients with chronic wound healing disorders is estimated at around 2.7 million. According to projections, the treatment of chronic wounds accounts for €23-36 billion per year, of which €4.6-7.2 billion is spent on the associated cost-intensive dressing materials alone. The aim of the SWODDYS project is to research the fundamentals of a new type of intelligent wound dressing for the treatment of acute and chronic wounds, which can monitor the energy-metabolic tissue and wound-healing status individually for each patient and online, by integrating fluorescent dye-based oxygen, pH, and CO2 sensors.

  • Teilprojekt A2

    (Third Party Funds Group – Sub project)

    Overall project: Quantitative diffusionsgewichtete MRT und Suszeptibilitätskartierung zur Charakterisierung der Gewebemikrostruktur
    Term: 1. September 2023 - 31. August 2027
    Funding source: DFG / Forschungsgruppe (FOR)
  • Quantitative diffusionsgewichtete MRT und Suszeptibilitätskartierung zur Charakterisierung der Gewebemikrostruktur

    (Third Party Funds Group – Sub project)

    Overall project: FOR 5534: Schnelle Kartierung von quantitativen MR bio-Signaturen bei ultra-hohen Magnetfeldstärken
    Term: 1. September 2023 - 31. August 2027
    Funding source: DFG / Forschungsgruppe (FOR)

    This project is part of the Research Unit (FOR) "Schnelle Kartierung von quantitativen MR bio-Signaturen bei ultra-hohen Magnetfeldstärken" (rapid mapping of quantitative MR bio-signatures at ultra-high magnetic field strengths). It focuses on extending, accelerating, and improving diffusion and quantitative susceptibility magnetic resonance imaging. The work programme is split into two parts. In the first part, an accelerated protocol is prepared for the clinical projects of the FOR. In the second part, further acceleration and quality improvements are targeted. Specifically, we will implement a locally low-rank regularized echo-planar imaging sequence for diffusion-weighted imaging. It exploits data redundancies across acquisitions with multiple diffusion encodings to effectively increase the signal-to-noise ratio and thereby accelerate the acquisition process. The sequence will support essentially arbitrary diffusion encodings (e.g. b-tensor encoding). In a second step, we will develop an interleaved multi-shot version of this sequence to reduce the image distortions that are troublesome in echo-planar imaging at 7 Tesla.

    For quantitative susceptibility mapping (QSM), we will implement a sequence with a stack-of-stars acquisition trajectory. Since the magnitude images of gradient-echo sequences acquired at different echo times exhibit data redundancies comparable to those of diffusion-encoded images, we will likewise use locally low-rank regularization in the image reconstruction. The radial trajectories of this sequence should be well suited for undersampled and thus accelerated image reconstruction. In a second step, we will extend the capabilities of our sequence with quasi-continuous echo-time sampling, in which each spoke has its own optimized echo time. This will enable improved QSM quality when fat is present in the image, as is often the case in muscle examinations and breast imaging. Regarding QSM reconstruction, we will develop deep learning methods to enable high-quality reconstruction from less image data than conventional reconstruction approaches require. We will adapt existing neural networks from lower field strengths to 7 T and extend their capabilities so that we can also integrate respiration-dependent field maps and quasi-continuous echo times into the reconstruction. This project will receive parallel transmit methods (pTx) from the FOR's pTx project. We will deliver the developed sequences to the clinical projects of the FOR after the first year. In addition, we will transfer essential evaluation and image reconstruction methods to the other projects of the FOR.
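The locally low-rank regularization used here can be illustrated with a small toy example (an illustrative denoising sketch, not the actual reconstruction pipeline; the patch size and threshold are arbitrary choices): each spatial patch is stacked across all contrasts into a Casorati matrix, whose singular values are soft-thresholded to exploit the redundancy between diffusion encodings or echoes.

```python
import numpy as np

def llr_denoise(images, patch=8, lam=2.5):
    """Locally low-rank regularization: for each spatial patch, stack the
    corresponding pixels of all contrasts (diffusion encodings or echoes)
    into a Casorati matrix and soft-threshold its singular values."""
    n_contrasts, h, w = images.shape
    out = np.zeros_like(images)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            sub = images[:, i:i + patch, j:j + patch]
            block = sub.reshape(n_contrasts, -1)
            u, s, vt = np.linalg.svd(block, full_matrices=False)
            s = np.maximum(s - lam, 0.0)  # singular-value soft threshold
            out[:, i:i + patch, j:j + patch] = ((u * s) @ vt).reshape(sub.shape)
    return out

# Toy multi-contrast stack: a shared spatial pattern with contrast-dependent
# scaling (rank 1 across contrasts), corrupted by noise.
rng = np.random.default_rng(1)
signal = np.outer(np.linspace(1.0, 2.0, 6),
                  rng.normal(size=16 * 16)).reshape(6, 16, 16)
noisy = signal + 0.3 * rng.normal(size=signal.shape)
denoised = llr_denoise(noisy)
```

Because the true contrast stack is low-rank within each patch, thresholding suppresses the noise while largely preserving the shared structure.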

  • Medical Image Analysis with Normative Machine Learning

    (Third Party Funds Single)

    Term: 1. September 2023 - 30. September 2028
    Funding source: Europäische Union (EU)

    As one of the most important aspects of diagnosis, treatment planning, treatment delivery, and follow-up, medical imaging provides an unmatched ability to identify disease with high accuracy. As a result of its success, referrals for imaging examinations have increased significantly. However, medical imaging depends on interpretation by highly specialised clinical experts and is thus rarely available at the front line of care, for patient triage, or for frequent follow-ups. Very often, excluding certain conditions or confirming physiological normality would be essential at many stages of the patient journey, to streamline referrals and relieve pressure on human experts who have limited capacity. Hence, there is a strong need for increased imaging with automated diagnostic support for clinicians, healthcare professionals, and caregivers.

    Machine learning is expected to be an algorithmic panacea for diagnostic automation. However, despite significant advances such as Deep Learning with notable impact on real-world applications, robust confirmation of normality is still an unsolved problem, which cannot be addressed with established approaches.

    Like clinical experts, machines should also be able to verify the absence of pathology by contrasting new images with their knowledge about healthy anatomy and expected physiological variability. Thus, the aim of this proposal is to develop normative representation learning as a new machine learning paradigm for medical imaging, providing patient-specific computational tools for robust confirmation of normality, image quality control, health screening, and prevention of disease before onset. We will do this by developing novel Deep Learning approaches that can learn without manual labels from healthy patient data only, applicable to cross-sectional, sequential, and multi-modal data. Resulting models will be able to extract clinically useful and actionable information as early and as frequently as possible during patient journeys.
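The core idea of normative modelling can be sketched in a few lines (an illustrative linear toy, not the deep learning methods proposed here; all data are synthetic): fit a model of healthy variability only, then flag new samples that the model cannot explain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Healthy training data: 500 samples, 32 image-derived features.
healthy = rng.normal(0.0, 1.0, size=(500, 32))

# Fit a linear normative model (PCA) on healthy data only.
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = vt[:8]  # keep 8 principal components of healthy variability

def reconstruction_error(x):
    """Distance between a sample and its projection onto the normative subspace."""
    centered = x - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=-1)

# Calibrate a decision threshold on healthy data (99th percentile of errors).
threshold = np.percentile(reconstruction_error(healthy), 99)

# A sample far outside normal variability is flagged as anomalous.
anomalous = rng.normal(5.0, 1.0, size=(1, 32))
is_anomaly = reconstruction_error(anomalous)[0] > threshold
```

No labelled pathology is needed at any point; the model only ever sees healthy data, which is what makes the paradigm attractive for screening.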

  • Human Impedance control for Tailored Rehabilitation

    (Third Party Funds Single)

    Term: 3. July 2023 - 30. June 2026
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • AI4MDD: AI-Powered Prognosis of Treatment Response in Major Depression Disorder

    (Third Party Funds Single)

    Term: 1. July 2023 - 30. September 2026
    Funding source: Industrie
  • Dimensionality reduction for molecular data based on explanatory power of differential regulatory networks

    (Third Party Funds Group – Overall project)

    Term: 1. March 2023 - 28. February 2026
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
    URL: https://www.netmap.ai/

    Rapid advances in single-cell RNA sequencing (scRNA-seq) technology are leading to ever-increasing dimensions of the generated molecular data, which complicates data analyses. In NetMap, new scalable and robust dimensionality reduction approaches for scRNA-seq data will be developed. To this end, dimensionality reduction will be integrated into a central task of the systems medicine analysis of scRNA-seq data: inference of gene regulatory networks (GRNs) and driver transcription factors based on cell expression profiles. Each resulting dimension will correspond to a driver GRN, and the coordinate of a cell in this low-dimensional representation will quantify the extent to which the particular driver GRN explains the cell's gene expression profile. These new methods will be implemented as a user-friendly software platform for exploratory expert-in-the-loop analysis and in silico prediction of drug repurposing candidates.

    As a case study, we will investigate CD4 helper T cell exhaustion, a potential limiting factor in immunotherapy. NetMap's strategy consists of (1) analyzing the phenotypic heterogeneity of exhausted CD4 T cells, (2) identifying transcriptional mechanisms that control this heterogeneity, and (3) amplifying/eliminating specific subsets and testing their functional impact. This will allow the development of an atlas of the gene regulatory landscape of exhausted CD4 T cells, while the in vivo testing of key regulatory transcription factors will help demonstrate the power of the developed methods and allow evaluation and improvement of predictions.
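The GRN-based embedding idea can be sketched as a toy (all gene, TF, and GRN names are made up; here each driver GRN is simply summarized by the mean z-scored expression of its target genes, a much cruder score than the planned inference):

```python
import numpy as np

# Toy expression matrix (5 cells x 6 genes); all names are illustrative.
genes = ["TF1", "A", "B", "TF2", "C", "D"]
expr = np.array([
    [2.0, 1.8, 1.9, 0.1, 0.0, 0.2],
    [1.9, 2.1, 1.7, 0.0, 0.2, 0.1],
    [2.2, 1.9, 2.0, 0.2, 0.1, 0.0],
    [0.1, 0.0, 0.2, 2.0, 1.9, 2.1],
    [0.0, 0.2, 0.1, 2.1, 2.2, 1.8],
])

# Hypothetical driver GRNs: transcription factor -> target genes.
grns = {"TF1": ["A", "B"], "TF2": ["C", "D"]}

def grn_embedding(expr, genes, grns):
    """One coordinate per driver GRN: the mean z-scored expression of the
    GRN's target genes, quantifying how well it explains each cell."""
    z = (expr - expr.mean(axis=0)) / (expr.std(axis=0) + 1e-9)
    idx = {g: k for k, g in enumerate(genes)}
    cols = [z[:, [idx[t] for t in targets]].mean(axis=1)
            for targets in grns.values()]
    return np.stack(cols, axis=1)  # shape: (n_cells, n_grns)

emb = grn_embedding(expr, genes, grns)
```

The result is a low-dimensional representation whose axes are interpretable regulatory programs rather than abstract components, which is the point of the NetMap approach.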

  • Testing and Experimentation Facility for Health AI and Robotics

    (Third Party Funds Group – Sub project)

    Overall project: Testing and Experimentation Facility for Health AI and Robotics
    Term: 1. January 2023 - 31. December 2027
    Funding source: Europäische Union (EU)
    URL: https://www.tefhealth.eu/
    The EU project TEF-Health aims to test and validate innovative artificial intelligence (AI) and robotics solutions for the healthcare sector and accelerate their path to market. It is led by Prof. Petra Ritter, who heads the Brain Simulation Section at the Berlin Institute of Health at Charité (BIH) and at the Department of Neurology and Experimental Neurology of Charité – Universitätsmedizin Berlin. The MaD Lab of the FAU is one of the 51 participating project partners from nine European countries.
  • Maschinelles Lernen und Datenanalyse für heterogene, artübergreifende Daten (X02)

    (Third Party Funds Group – Sub project)

    Overall project: SFB 1540: Erforschung der Mechanik des Gehirns (EBM): Verständnis, Engineering und Nutzung mechanischer Eigenschaften und Signale in der Entwicklung, Physiologie und Pathologie des zentralen Nervensystems
    Term: 1. January 2023 - 31. December 2026
    Funding source: DFG / Sonderforschungsbereich (SFB)

    X02 uses the image data and mechanical measurements generated within EBM to develop deep learning methods that transfer knowledge across species. In silico and in vitro analyses deliver considerably more specific data than in vivo experiments, in particular for human tissue. To exploit insights from data-rich experiments, we will develop transfer learning algorithms for heterogeneous data. In this way, machine learning can be made usable even under strongly data-limited conditions. The goal is to enable a holistic understanding of image data and mechanical measurements across species boundaries.
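A minimal caricature of such transfer under data-limited conditions (a linear toy with synthetic data, not the project's actual deep learning methods): a model fitted on an abundant "source species" regularizes the fit on a tiny "target species" dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data-rich "source species": 300 samples, 10 features.
Xs = rng.normal(size=(300, 10))
w_true = rng.normal(size=10)
ys = Xs @ w_true + 0.1 * rng.normal(size=300)

# Data-poor "target species": only 8 samples of a related, shifted task.
Xt = rng.normal(size=(8, 10))
yt = Xt @ (w_true + 0.2) + 0.1 * rng.normal(size=8)

# Source model fitted on abundant data.
w_src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Transfer: ridge-regularize the target fit towards the source weights,
# instead of fitting 10 parameters to 8 samples from scratch.
lam = 5.0
w_tgt = np.linalg.solve(Xt.T @ Xt + lam * np.eye(10),
                        Xt.T @ yt + lam * w_src)
```

The prior supplied by the source model fills in the directions the 8 target samples cannot identify, which is exactly the regime the project targets.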

  • End-to-End Deep Learning Image Reconstruction and Pathology Detection

    (Third Party Funds Single)

    Term: 1. January 2023 - 31. December 2025
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    The majority of diagnostic medical imaging pipelines follow the same principles: raw measurement data is acquired by scanner hardware, processed by image reconstruction algorithms, and then evaluated for pathology by human radiology experts. Under this paradigm, every step has traditionally been optimized to generate images that are visually pleasing and easy to interpret for human experts. However, raw sensor information that could maximize patient-specific diagnostic information may get lost in this process. This problem is amplified by recent developments in machine learning for medical imaging. Machine learning has been used successfully in all steps of the diagnostic imaging pipeline: from the design of data acquisition to image reconstruction, to computer-aided diagnosis. So far, these developments have been disjointed from each other. In this project, we will fuse machine learning for image reconstruction and for image-based disease localization, thus providing an end-to-end learnable image reconstruction and joint pathology detection approach that operates directly on raw measurement data. Our hypothesis is that this combination can maximize diagnostic accuracy while providing optimal images for both human experts and diagnostic machine learning models.
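The joint objective can be caricatured in a few lines (a toy linear sketch with synthetic data, not the actual networks): one loss term drives the reconstruction, the other the detection head, and both depend on the same learnable reconstruction parameters, so gradients from the detection task flow all the way back to the raw data.

```python
import numpy as np

def joint_loss(raw, target_image, label, recon_weights, clf_weights, lam=1.0):
    """End-to-end objective: reconstruction error plus pathology-detection
    error, both driven by the same raw measurement data."""
    image = raw @ recon_weights          # learned reconstruction (toy: linear)
    recon_loss = np.mean((image - target_image) ** 2)
    logit = image @ clf_weights          # detection head on the reconstruction
    prob = 1.0 / (1.0 + np.exp(-logit))
    detect_loss = -(label * np.log(prob + 1e-9)
                    + (1 - label) * np.log(1 - prob + 1e-9))
    return recon_loss + lam * detect_loss

rng = np.random.default_rng(0)
raw = rng.normal(size=8)                 # stand-in for raw k-space samples
loss = joint_loss(raw, rng.normal(size=16), 1,
                  rng.normal(size=(8, 16)), rng.normal(size=16))
```

The weighting `lam` is the knob between visually faithful images and diagnostically optimal ones, which is the trade-off the project's hypothesis addresses.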

  • Digital health application for the therapy of incontinence patients

    (Third Party Funds Single)

    Term: since 1. January 2023
    Funding source: Bundesministerium für Wirtschaft und Klimaschutz (BMWK)

    The goal of this project is the development of an application for supporting the physical rehabilitation therapy of prostatectomy and incontinence patients in planning and execution. An AI-driven algorithm for automatic planning will be developed and extended by a machine learning approach for live exercise execution feedback. The developed application will be clinically evaluated regarding effectiveness and therapy benefit. 

  • Erarbeitung der Studienkonzeption, Medizinisch wissenschaftlich beratende Funktion

    (Third Party Funds Group – Sub project)

    Overall project: Digitale Gesundheitsanwendung zur Therapie von Inkontinenzpatienten
    Term: 1. January 2023 - 31. December 2024
    Funding source: Bundesministerium für Wirtschaft und Klimaschutz (BMWK)

    The goal of this project is the development of an application for supporting physical rehabilitation therapy in planning and execution. An AI-driven algorithm for automatic planning will be developed and extended by a machine learning approach for live exercise execution feedback. The developed application will be clinically evaluated regarding effectiveness and therapy benefit.

  • Applied Data Science in Digital Psychology

    (Third Party Funds Single)

    Term: 1. September 2022 - 31. August 2026
    Funding source: Bayerisches Staatsministerium für Wissenschaft und Kunst (StMWK) (seit 2018)

    University education in psychology, medical technology, and computer science currently focuses on teaching basic methods and knowledge with little involvement of other disciplines. Increasing digitalization and the ever more rapid spread of digital technologies such as wearable sensors, smartphone apps, and artificial intelligence, also in the health sector, offer a wide range of opportunities to address psychological questions from new and interdisciplinary perspectives. However, this requires close cooperation between psychology and technical disciplines such as medical technology and computer science to enable the necessary knowledge transfer. In these disciplines in particular, there is a considerable need for innovative, interdisciplinary teaching concepts and research projects that teach the adequate use of digital technologies and explore their application to relevant questions, in order to enable better care in the treatment of people with mental disorders.

  • AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics

    (Third Party Funds Group – Sub project)

    Overall project: AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics
    Term: 1. September 2022 - 28. February 2026
    Funding source: Europäische Union (EU)
    URL: https://intelliman-project.eu/
  • Multimodal Machine Learning for Decision Support Systems

    (Third Party Funds Single)

    Term: since 1. June 2022
    Funding source: Siemens AG

    The project aims to identify areas where advanced data analysis and processing methods can be applied to aspects of computed tomography (CT) technology, and includes the implementation and validation of these methods.

    In this project, we analyze machine and customer data sent by thousands of high-end medical devices every day. 

    Since potentially relevant information is often present in different modalities, the optimal application of fusion techniques is a key factor when extracting insights.

  • Biomechanical Assessment of Big Wave Surfing

    (Third Party Funds Single)

    Term: 1. June 2022 - 31. May 2025
    Funding source: Siemens AG

    The goal of this project is to develop experimental approaches and simulation methods for biomechanical assessment of big wave surfing. This goal will be addressed in collaboration with Sebastian Steudtner and Siemens Healthineers. The methods include, but are not limited to, biomechanical movement analysis, musculoskeletal simulation, and sensor fusion.

    The focus of the research activities will be centered on:

    • Development of a measurement approach for biomechanical assessment of big wave surfing
    • Development of efficient and accurate data processing combining inputs from several sensor systems
    • Design of a biomechanical simulation model that reflects the situation during surfing
    • Analysis of biomechanical measurements and simulation outcomes to provide advice for big wave surfers to improve performance.
  • Biomechanical Assessment of Big Wave Surfing

    (Third Party Funds Single)

    Term: since 1. June 2022
    Funding source: Siemens AG
  • Machine Learning for CT-Detector Production

    (Third Party Funds Single)

    Term: since 1. April 2022
    Funding source: Industrie

    The main goal of this project is to improve detector manufacturing for computed tomography (CT). To this end, data is gathered during the production of a CT detector. This data is analysed and used to develop and train a machine learning system that should find the best composition of a CT detector. In the future, the system will be integrated into the CT detector manufacturing process, which should further improve the image quality and the production process of CT devices. In particular, warehouse utilization and the first-pass yield should be enhanced. The project is realized in cooperation with Siemens Healthineers in Forchheim.
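A toy sketch of the idea (all numbers and the quality function are synthetic stand-ins for real production-line data): fit a surrogate model of quality as a function of process parameters, then search it for the best composition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy production records: 200 detector builds, 3 process parameters each,
# plus a measured quality score (hypothetical stand-in for real line data).
params = rng.uniform(0.0, 1.0, size=(200, 3))
quality = 1.0 - ((params - 0.6) ** 2).sum(axis=1) + 0.05 * rng.normal(size=200)

# Fit a simple quadratic surrogate model of quality vs. composition.
features = np.hstack([np.ones((200, 1)), params, params ** 2])
coef, *_ = np.linalg.lstsq(features, quality, rcond=None)

def predict(p):
    """Predicted quality for candidate parameter settings."""
    p = np.atleast_2d(p)
    f = np.hstack([np.ones((len(p), 1)), p, p ** 2])
    return f @ coef

# Search a candidate grid for the composition with the best predicted quality.
grid = rng.uniform(0.0, 1.0, size=(5000, 3))
best = grid[np.argmax(predict(grid))]
```

In production, the surrogate would be retrained as new builds arrive, closing the loop between manufacturing data and parameter choice.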

  • Individual Performance Prediction Using Musculoskeletal Modeling

    (Third Party Funds Single)

    Term: 1. February 2022 - 31. January 2025
    Funding source: Industrie

    Biomechanical modeling and simulation are performed to analyze and understand human motion and performance. One objective is to reconstruct human motion from measurement data, e.g. to assess the individual performance of athletes and customers. Another objective is to synthesize realistic human motion to study human-product interaction. Both the reconstruction (a) and the synthesis (b) of human motion will be addressed in this research position. New algorithms using biomechanical simulation of musculoskeletal models will be developed to enable innovative applications and services for Adidas. Moreover, predictive biomechanical simulation will be combined with wearable sensor technology to build a product recommendation application.

  • Unsupervised Network Medicine for Longitudinal Omics Data

    (FAU Funds)

    Term: since 15. January 2022

    Over the last years, large amounts of molecular profiling data (also called “omics data”) have become available. This has raised hopes to identify so-called disease modules, i.e., sets of functionally related molecules constituting candidate disease mechanisms. However, omics data tend to be high-dimensional and noisy, and modules identified via purely statistical means are hence often unstable and functionally uninformative. Therefore, network-based disease module mining methods (DMMMs) project omics data onto biological networks such as protein-protein interaction (PPI) networks, gene regulatory networks (GRNs), or microbial interaction networks (MINs). Subsequently, network algorithms are used to identify disease modules consisting of small subnetworks. This dramatically decreases the size of the search space and prioritizes disease modules consisting of functionally related molecules, positively affecting both the stability and the functional relevance of the discovered modules.

    However, to the best of our knowledge, all existing DMMMs are subject to at least one of the following two limitations: Firstly, existing DMMMs are typically supervised, in the sense that they try to find subnetworks explaining differences in the omics data between predefined case and control patients or pre-defined disease subtypes. This is potentially problematic, because it implies that existing DMMMs are biased by our current disease ontologies, which are mostly symptom- or organ-based and therefore often too coarse-grained. For instance, around 95 % of all patients with hypertension are diagnosed with so-called “essential hypertension” (code BA00.Z in the ICD-11 disease ontology), meaning that the cause of the hypertension is unknown. In fact, there are probably several disjoint molecular mechanisms causing “essential hypertension”, and the same holds true for many other complex diseases such as Alzheimer’s disease, multiple sclerosis, and Crohn’s disease. Supervised DMMMs which take existing disease definitions for granted hence risk overlooking the molecular mechanisms causing mechanistically distinct subtypes.

    Secondly, most existing DMMMs are designed for static omics data and do not support longitudinal data where the patients’ molecular profiles are observed over time. Existing analysis frameworks for longitudinal omics data largely use purely statistical means. Consequently, network medicine approaches for time series data are needed.

    To the best of our knowledge, there are only three DMMMs which, in part, overcome these limitations: BiCoN and GrandForest allow unsupervised disease module mining but do not support longitudinal omics data. TiCoNE supports longitudinal data but requires predefined case vs. control or subtype annotations as input. There is hence an unmet need for unsupervised DMMMs for longitudinal omics data. Developing such methods is the main objective of the proposed project.
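The basic mechanics of network-based module mining can be sketched as a toy (an illustrative greedy heuristic on a made-up network, not any of the methods named above; all gene names and scores are invented): grow a candidate module from a seed gene by repeatedly absorbing the highest-scoring neighbour.

```python
# Toy PPI network as adjacency sets, with per-gene scores (e.g. how strongly
# a gene's expression varies across the cohort). All names are illustrative.
ppi = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C", "E"}, "E": {"D"}, "F": {"G"}, "G": {"F"},
}
score = {"A": 2.0, "B": 1.5, "C": 1.8, "D": 0.1, "E": 0.05, "F": 0.2, "G": 0.1}

def greedy_module(seed, min_gain=0.5):
    """Grow a candidate disease module from a seed gene: repeatedly add the
    neighbouring gene with the highest score until no neighbour is worth adding."""
    module = {seed}
    while True:
        frontier = {n for g in module for n in ppi[g]} - module
        if not frontier:
            break
        best = max(frontier, key=lambda g: score[g])
        if score[best] < min_gain:
            break
        module.add(best)
    return module

print(sorted(greedy_module("A")))  # ['A', 'B', 'C']
```

Restricting the search to connected subnetworks is what shrinks the search space and keeps the resulting module functionally coherent; the project's contribution lies in making this unsupervised and longitudinal, which this toy does not attempt.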

  • Trusted Ecosystem of Applied Medical Data eXchange; Teilvorhaben: FAU@TEAM-X

    (Third Party Funds Group – Sub project)

    Overall project: Trusted Ecosystem of Applied Medical Data eXchange (TEAM-X)
    Term: 1. January 2022 - 31. December 2024
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
  • Adaptive AI Systems in Sport

    (Third Party Funds Single)

    Term: 1. December 2021 - 31. May 2024
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    The digitization of the sports sector leads to the individualization of products and services for everyday athletes. To be able to ensure this, artificial intelligence approaches are needed to know how to create personalized value for the athlete/consumer from large heterogeneous data sets.

    An application example for this is the home training sector, which is gaining importance especially due to the effects of the Corona pandemic. Commercial platforms offer initial approaches to the use of immersive media but fail to individualize content by analyzing heterogeneous data sources for the user.

    The project will therefore investigate mechanisms for user engagement and motivation. Based on this, a comprehensive adaptive AI system for predicting individual goal achievement will be developed and fused with additional data sources. Based on the predictions, a framework for designing stimulus-driven real-time systems for individualizing immersive user interfaces will be defined. The integration of the resulting subsystems into a high-fidelity prototype enables the transfer to further application domains.

  • dhip campus-bavarian aim

    (Third Party Funds Group – Overall project)

    Term: 1. October 2021 - 30. September 2027
    Funding source: Industrie
  • Novel Methods for Remote Acute Stress Induction

    (Own Funds)

    Term: since 1. September 2021

    Repeated exposure to acute psychosocial stress and the associated stimulation of biological stress pathways over a period of time can promote the transition from acute to chronic stress. Unfortunately, established laboratory stress protocols are limited for repeated use due to high personnel and resource demand, creating the need for novel approaches that can be conducted at a larger scale, and, possibly, remotely.

    Therefore, this project aims to develop and test novel methods for inducing acute stress without requiring extensive personnel and resource demands. The project will explore the use of digital technologies, such as virtual reality and mobile apps, to create stress-inducing scenarios that can be experienced (remotely) by study participants. The project will also investigate the use of physiological and behavioral measures to validate the effectiveness of the stress induction methods.

  • Personalized prediction of medications responses in patients with rheumatoid arthritis using Machine Learning algorithms

    (Third Party Funds Group – Sub project)

    Overall project: dhip campus-bavarian aim
    Term: 1. September 2021 - 30. August 2024
    Funding source: Industrie

    There is a wide range of medications for RA patients. Clinical trials and real-world experience show that these treatments can have adverse effects; to maximize benefit and minimize harm, we should predict the response for each person.

    This project aims to collect medical data on rheumatoid arthritis, select the most informative factors, identify important clinical features associated with remission, and then create a model that predicts remission, treatment response, and the course of disease activity for each patient using machine learning methods. This could help prevent unsuitable prescriptions and save time before disease progression.

    We want to reach this aim using medical data collected and recorded by rheumatologists: patient characteristics, disease courses, laboratory data, and medication data. Our partners on the medical side help collect and provide access to existing data; the partners in the MaD Lab apply machine learning and data analytics to develop the prognostic model for remission.

  • Digital Twin of the Musculoskeletal System

    (Third Party Funds Group – Sub project)

    Overall project: dhip campus-bavarian aim
    Term: 1. September 2021 - 31. August 2024
    Funding source: Industrie

    Musculoskeletal (MSK) models represent the dynamics of the human body and can output many different variables, e.g. joint angles, joint moments, and muscle forces. Personalised movement predictions provide more accurate outcome variables than generic predictions. We would therefore like to develop a digital twin of the MSK system, which can then be used for personalised movement predictions. Image-based personalisation is the state of the art: anthropometric variables, such as bone geometry and muscle attachment points, can be derived from magnetic resonance imaging (MRI), while muscle parameters require diffusion tensor imaging (DTI) to visualise the alignment of fibres, which is important for deriving muscle size as well as fibre length. The goal of this project is to develop a personalised digital twin of the MSK system using DTI measurements and to investigate whether such a digital twin can improve the accuracy of movement predictions. A further aim is to investigate to what extent the image processing can be automated. Furthermore, the identification of groups using the personalised models, e.g. to detect MSK diseases such as rheumatoid arthritis, will be investigated.

  • BioPsyKit – An Open-Source Python Package for the Analysis of Biopsychological Data

    (Own Funds)

    Biopsychology is a field of psychology that analyzes how biological processes interact with behaviour, emotion, cognition, and other mental processes. Biopsychology covers, among others, the topics of sensation and perception, emotion regulation, movement (and control of such), sleep and biological rhythms, as well as acute and chronic stress.

    While some software packages exist that allow for the analysis of single data modalities, such as electrophysiological data, or sleep, activity, and movement data, no packages are available for the analysis of other modalities, such as neuroendocrine and inflammatory biomarkers, and self-reports. In order to fill this gap, and, simultaneously, combine all required tools for analyzing biopsychological data from beginning to end into one single Python package, we developed BioPsyKit.
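
    As a small example of the kind of computation such a package bundles (generic NumPy code, not BioPsyKit's actual API), the standard area-under-the-curve summary measures for a salivary cortisol time series (Pruessner et al., 2003) can be computed with the trapezoidal rule; times and values below are made up:

```python
# Cortisol summary measures via the trapezoidal rule; example data only.
import numpy as np

times = np.array([0.0, 15.0, 30.0, 45.0, 60.0])   # minutes after awakening
cortisol = np.array([5.0, 9.0, 12.0, 10.0, 7.0])  # nmol/l

# Area under the curve with respect to ground (AUC_G)
auc_g = float(np.sum((cortisol[:-1] + cortisol[1:]) / 2.0 * np.diff(times)))

# Area under the curve with respect to increase (AUC_I):
# subtract the rectangle spanned by the baseline value
auc_i = auc_g - cortisol[0] * (times[-1] - times[0])
```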

  • TR&D 1: Reimagining the Future of Scanning: Intelligent image acquisition, reconstruction, and analysis

    (Third Party Funds Single)

    Term: since 1. August 2021
    Funding source: National Institutes of Health (NIH)
    URL: https://grantome.com/grant/NIH/P41-EB017183-07-6366

    The broad mission of our Center for Advanced Imaging Innovation and Research (CAI2R) is to bring together collaborative translational research teams for the development of high-impact biomedical imaging technologies, with the ultimate goal of changing day-to-day clinical practice. Technology Research and Development (TR&D) Project 1 aims to replace traditional complex and inefficient imaging protocols with simple, comprehensive acquisitions that also yield quantitative parameters sensitive to specific disease processes. In the first funding period of this P41 Center, our project team led the way in establishing rapid, continuous, comprehensive imaging methods, which are now available on a growing number of commercial magnetic resonance imaging (MRI) scanners worldwide. This foundation will allow us, in the proposed research plan for the next period, to enrich our data streams, to advance the extraction of actionable information from those data streams, and to feed the resulting information back into the design of our acquisition software and hardware. Thanks to developments during our first funding period, we are now in a position to question long-established assumptions about scanner design, originating from the classical imaging pipeline of human radiologists interpreting multiple series of qualitative images. We will reimagine the process of MR scanning, leveraging our core expertise in pulse-sequence design, parallel imaging, compressed sensing, model-based image reconstruction and machine learning. We will also extend our methods to complex multifaceted data streams, arising not only from MRI but also from Positron Emission Tomography (PET) and other imaging modalities, as well as from diverse arrays of complementary sensors.

  • Empatho-Kinaesthetic Sensor Technology

    (Third Party Funds Group – Overall project)

    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich / Transregio (SFB / TRR)
    URL: https://empkins.de/
    The proposed CRC “Empathokinaesthetic Sensor Technology” (EmpkinS) will investigate novel radar, wireless, depth-camera, and photonics-based sensor technologies as well as body function models and algorithms. The primary objective of EmpkinS is to capture human motion parameters remotely with wave-based sensors to enable the identification and analysis of physiological and behavioural states and body functions. To this end, EmpkinS aims to develop sensor technologies and facilitate the collection of motion data for the human body. Based on this data of hitherto unknown quantity and quality, EmpkinS will lead to unprecedented new insights regarding biomechanical, medical, and psychophysiological body function models and mechanisms of action as well as their interdependencies.

    The main focus of EmpkinS is on capturing human motion parameters at the macroscopic level (the human body or segments thereof and the cardiopulmonary function) and at the microscopic level (facial expressions and fasciculations). The acquired data are captured remotely in a minimally disturbing and non-invasive manner and with very high resolution. The physiological and behavioural states underlying the motion patterns are then reconstructed algorithmically from this data, using biomechanical, neuromotor, and psychomotor body function models. The sensors, body function models, and the inversion of mechanisms of action establish a link between the internal biomedical body layers and the outer biomedical technology layers. Research into this link is highly innovative and extraordinarily complex, and many of its facets have not been investigated so far.

    To address the numerous and multifaceted research challenges, the EmpkinS CRC is designed as an interdisciplinary research programme. The research programme is coherently aligned along the sensor chain, from the primary sensor technology (Research Area A), over signal and data processing (Research Areas B and C) and the associated modelling of the internal body functions and processes (Research Areas C and D), to the psychological and medical interpretation of the sensor data (Research Area D). Ethics research (Research Area E) is an integral part of the research programme to ensure responsible research and ethical use of EmpkinS technology.

    The proposed twelve-year EmpkinS research programme will develop novel methodologies and technologies that will generate cutting-edge knowledge to link biomedical processes inside the human body with the information captured outside the body by wireless and microwave sensor technology. With this quantum leap in medical technology, EmpkinS will pave the way for completely new "digital", patient-centred diagnostic and therapeutic options in medicine and psychology. Medical technology is a research focus with flagship character in the greater Erlangen-Nürnberg area. This outstanding background, along with the extensive preparatory work of the involved researchers, forms the basis and backbone of EmpkinS.

  • Machine Learning Methods for the Personalisation of Musculoskeletal Human Models, Movement Analysis

    (Third Party Funds Group – Sub project)

    Overall project: Empathokinaesthetic Sensor Technology – Sensor techniques and data analysis methods for empathokinaesthetic modelling and state assessment
    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich (SFB)
    URL: https://www.empkins.de/
    The extent to which a neural network can be used to effectively personalise gait simulations using motion data is explored. We first investigate the influence of body parameters on gait simulation. An initial version of the personalisation is trained with simulated motion data, since ground truth data is known for this purpose. We then explore gradient-free methods to fit the network for experimental motion data. The resulting network is validated with magnetic resonance imaging, electromyography and intra-body variables.
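
    The gradient-free fitting idea can be sketched as follows (assumptions: a toy two-parameter forward model stands in for the gait simulation, and plain random search stands in for the project's actual optimiser):

```python
# Gradient-free personalisation sketch: fit model parameters so that the
# simulated motion matches a "measured" target, using only function
# evaluations (no gradients through the simulator).
import numpy as np

def simulate_gait(params):
    # Toy forward model: body parameters -> "motion features"
    return np.array([params[0] * 2.0, params[0] + params[1]])

target = simulate_gait(np.array([0.8, 1.2]))  # stand-in for measured data

rng = np.random.default_rng(1)
best = rng.normal(size=2)
best_err = np.linalg.norm(simulate_gait(best) - target)
for _ in range(2000):
    cand = best + rng.normal(scale=0.1, size=2)   # random perturbation
    err = np.linalg.norm(simulate_gait(cand) - target)
    if err < best_err:                            # keep only improving moves
        best, best_err = cand, err
```

    In the project, the "simulator" is a full musculoskeletal gait simulation and the fitted object is a neural network, but the principle of evaluating candidates without gradients is the same.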
  • EmpkinS iRTG - EmpkinS integrated Research Training Group

    (Third Party Funds Group – Sub project)

    Overall project: Empathokinaesthetic Sensor Technology
    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich / Integriertes Graduiertenkolleg (SFB / GRK)
    URL: https://www.empkins.de/

    The integrated Research Training Group (iRTG) offers all young researchers a structured framework programme and supports them in their scientific profile and competence development. The diverse measures provided enable the young researchers to work on their respective academic qualifications in a structured and targeted manner. Particular attention is paid to their networking and their ability to communicate intensively and to take responsibility for their own scientific work. The doctoral researchers are supervised by two project leaders.

  • Sensor-Based Movement and Sleep Analysis in Parkinson's Syndrome

    (Third Party Funds Group – Sub project)

    Overall project: Empathokinaesthetic Sensor Technology
    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich (SFB)
    URL: https://www.empkins.de/

    In D04, innovative, non-contact EmpkinS sensor technology using machine learning algorithms and multimodal reference diagnostics is evaluated using the example of Parkinson's-associated sleep disorder patterns. For this purpose, body function parameters of sleep are technically validated with wearable sensor technology and non-contact EmpkinS sensor technology against classical polysomnography and correlated with clinical scales. In an algorithmic approach, multiparametric sleep parameters and sleep patterns are then evaluated in correlation with movement, cardiovascular, and sleep-phase regulation disorders.

  • Empathokinaesthetic Sensing for Biofeedback in Depressive Patients

    (Third Party Funds Group – Sub project)

    Overall project: Empathokinaesthetic Sensor Technology
    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich (SFB)
    URL: https://www.empkins.de/

    The aim of project D02 is to establish empathokinaesthetic sensor technology and machine learning methods as a means for the automatic detection and modification of depression-associated facial expressions, posture, and movement, and to clarify to what extent kinaesthesia-related modifications influence depressogenic information processing and/or depressive symptoms. First, we will record facial expressions, body posture, and movement relevant to depression with the help of currently available technologies (e.g., RGB and depth cameras, wired EMG, established emotion recognition software) and use them as input parameters for new machine learning models that automatically detect depression-associated affect expressions. Second, a fully automated biofeedback paradigm will be implemented and validated using the project results available by then, and further ways of providing real-time feedback of depression-relevant kinaesthesia will be investigated. Third, we will research possibilities for mobile use of the biofeedback approach developed up to that point.

  • Investigating Postural Control Based on Sensorimotor-Extended Musculoskeletal Human Models

    (Third Party Funds Group – Sub project)

    Overall project: Empathokinaesthetic Sensor Technology
    Term: 1. July 2021 - 30. June 2025
    Funding source: DFG / Sonderforschungsbereich (SFB)
    URL: https://www.empkins.de/

    A novel postural control model of walking is explored to characterise the components of dynamic balance control. For this purpose, clinically annotated gait movements are used as input data, and muscle-actuated multi-body models are extended by a sensorimotor level. Neuromotor and control-model parameters of (patho-)physiological movement are identified with the help of machine learning methods. Technical and clinical validation of the models will be performed, and new EmpkinS measurement techniques are to be transferred to the developed models as soon as possible.

  • Learning an Optimized Variational Network for Medical Image Reconstruction

    (Third Party Funds Single)

    Term: since 1. June 2021
    Funding source: National Institutes of Health (NIH)
    URL: https://grantome.com/grant/NIH/R01-EB024532-03

    We propose a novel way of reconstructing medical images, rooted in deep learning and computer vision, that models how human radiologists use years of experience from reading thousands of cases to recognize anatomical structures, pathologies, and image artifacts. Our approach is based on the novel idea of a variational network, which embeds a generalized compressed sensing concept within a deep learning framework. We propose to learn a complete reconstruction procedure: the filter kernels and penalty functions that separate true image content from artifacts, all parameters that normally have to be tuned manually, as well as the associated numerical algorithm described by the variational network. The training step is decoupled from the time-critical image reconstruction step, which can then be performed in near real time without interrupting the clinical workflow. Our preliminary patient data from accelerated magnetic resonance imaging (MRI) acquisitions suggest that our learning approach outperforms currently existing state-of-the-art image reconstruction methods and is robust with respect to the variations that arise in daily clinical imaging. In our first aim, we will test the hypothesis that learning can be performed such that it is robust against changes in data acquisition. In the second aim, we will answer the question whether it is possible to learn a single reconstruction procedure for multiple MR imaging applications. Finally, we will perform a clinical reader study of 300 patients undergoing imaging for internal derangement of the knee and compare our proposed approach to a clinical standard reconstruction. Our hypothesis is that our approach will lead to the same clinical diagnoses and patient management decisions when using a 5-minute exam.
The immediate benefit of the project is to bring accelerated imaging to an application with wide public-health impact, thereby improving clinical outcomes and reducing health-care costs. Additionally, the insights gained from the developments in this project will answer the currently most important open questions in the emerging field of machine learning for medical image reconstruction. Finally, given the recent increase of activities in this field, there is a significant demand for a publicly available data repository for raw k-space data that can be used for training and validation. Since all data that will be acquired in this project will be made available to the research community, this project will be a first step to meet this demand.
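
    A heavily simplified sketch of the unrolled-gradient idea behind such a reconstruction (assumptions: a 1-D denoising toy problem stands in for MRI reconstruction, and a hand-set quadratic smoothness penalty stands in for the learned filter kernels and penalty functions):

```python
# Unrolled gradient iterations: alternate a data-consistency step with a
# regularisation step. In a variational network the regulariser (and the
# step sizes) are learned; here a fixed smoothness penalty is used.
import numpy as np

n = 64
x_true = np.sin(np.linspace(0, 3, n))        # smooth ground-truth signal
rng = np.random.default_rng(0)
y = x_true + rng.normal(scale=0.3, size=n)   # noisy "measurement"

def reg_grad(x, w):
    # Gradient of the quadratic penalty w * sum_i (x[i+1] - x[i])**2
    g = np.zeros_like(x)
    d = np.diff(x)
    g[:-1] -= 2 * w * d
    g[1:] += 2 * w * d
    return g

x = y.copy()
for _ in range(200):                         # unrolled iterations
    x = x - 0.1 * ((x - y) + reg_grad(x, w=2.0))
```

    After the iterations, x lies closer to the noise-free signal than the raw measurement y; in the actual variational network, filters, penalties, and step sizes are all learned from data rather than set by hand.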

    Public Health Relevance

    The overarching goal of the proposal is to develop a novel machine learning-based image reconstruction approach and validate it for accelerated magnetic resonance imaging (MRI). The approach is able to learn the characteristic appearance of clinical imaging datasets, as well as suppression of artifacts that arise during data acquisition. We will test the hypotheses that learning can be performed such that it is robust against changes in data acquisition, answer the question if it is possible to learn a single reconstruction procedure for multiple MR imaging applications, and validate our approach in a clinical reader study for 300 patients undergoing imaging for internal derangement of the knee.

  • Holistic customer-oriented service optimization for fleet availability

    (Third Party Funds Single)

    Term: 1. June 2021 - 31. May 2024
    Funding source: Industry, other funding organisation
  • Symptom detection and prediction using inertial sensor-based gait analysis

    (Own Funds)

    Term: since 1. May 2021

    Parkinson's disease is, after Alzheimer's disease, the second most common neurodegenerative disease; it mainly affects the patient's mobility and produces gait insecurity and impairment. As patients experience varied, asymmetrical, and heterogeneous gait characteristics, personalized medication should be at the center of attention in controlling motor complications in Parkinson's patients. Inertial measurement units (IMUs) can potentially be utilized for long-term observation of disease progression and for estimating gait parameters. This project is dedicated to detecting and, where possible, predicting the motor symptoms of Parkinson's disease, such as bradykinesia, dyskinesia, and freezing of gait. This also includes improving the existing gait analysis algorithms to fit parkinsonian gait more accurately, which is the basis of symptom detection.
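
    One established IMU-based marker that such algorithms build on is the "freeze index" (Moore et al. / Bächlin et al.): the ratio of accelerometer power in the 3-8 Hz "freeze" band to the 0.5-3 Hz locomotion band. A minimal sketch on synthetic signals (the project's detectors are, of course, more elaborate):

```python
# Freeze-index sketch on synthetic accelerometer signals: normal gait
# oscillates at stepping frequency (~1.5 Hz), freezing shows a
# higher-frequency trembling component (~6 Hz).
import numpy as np

fs = 100.0                                   # sampling rate in Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
normal_gait = np.sin(2 * np.pi * 1.5 * t)    # ~1.5 Hz stepping
freezing = np.sin(2 * np.pi * 6.0 * t)       # ~6 Hz trembling

def freeze_index(acc, fs):
    spec = np.abs(np.fft.rfft(acc)) ** 2
    freqs = np.fft.rfftfreq(len(acc), 1.0 / fs)
    freeze_band = spec[(freqs >= 3.0) & (freqs < 8.0)].sum()
    loco_band = spec[(freqs >= 0.5) & (freqs < 3.0)].sum()
    return freeze_band / loco_band

fi_normal = freeze_index(normal_gait, fs)    # low: power in locomotion band
fi_freeze = freeze_index(freezing, fs)       # high: power in freeze band
```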

  • A comprehensive deep learning framework for MRI reconstruction

    (Third Party Funds Single)

    Term: 1. April 2021 - 31. March 2025
    Funding source: National Institutes of Health (NIH)
    URL: https://govtribe.com/award/federal-grant-award/project-grant-r01eb029957
  • An Integrative Concept for Personalised Precision Medicine in Prevention, Early Detection, Therapy, and Relapse Avoidance, Using Breast Cancer as an Example

    (Third Party Funds Single)

    Term: 1. October 2020 - 30. September 2024
    Funding source: Bayerisches Staatsministerium für Gesundheit und Pflege, StMGP (seit 2018)

    Breast cancer is one of the leading causes of death in the field of oncology in Germany. For the successful care and treatment of patients with breast cancer, a high level of information for those affected is essential in order to achieve a high level of compliance with the established structures and therapies. On the one hand, the digitalisation of medicine offers the opportunity to develop new technologies that increase the efficiency of medical care. On the other hand, it can also strengthen patient compliance by improving information and patient integration through electronic health applications. Thus, a reduction in mortality and an improvement in quality of life can be achieved. Within the framework of this project, digital health programmes will be created that support and complement health care. The project aims to provide better and faster access to new diagnostic and therapeutic procedures in mainstream oncology care, to implement eHealth models for more efficient and effective cancer care, and to improve capacity for patients in oncological therapy in times of crisis (such as the SARS-CoV-2 pandemic). The Chair of Health Management is conducting the health economic evaluation and analysing the extent to which digitalisation can contribute to a reduction in the costs of treatment and care as well as to an improvement in the quality of life of breast cancer patients.

  • Activity Recognition using IMU Sensors integrated in Hearing Aids

    (Third Party Funds Single)

    Term: 1. October 2020 - 31. March 2024
    Funding source: Industry

    The hearing aid of the future will be more than just an amplifying device. It may be used as a fitness tracker that captures the user's movements and activity level, or as a home monitoring device that assesses the user's vital parameters, tracks the user's activity status, or detects falls. Hearing aids are becoming more complex, and most modern devices are already equipped with additional sensors such as inertial sensors. Acceleration signals are analyzed with signal processing algorithms to enhance speech intelligibility and audio quality. Furthermore, inertial sensors may be used to analyze the user's movements and physical activity, and hearing aid amplification settings may be adapted to the current activity. Moreover, given the user's explicit consent, activity recognition enables long-term tracking of the user's daily activity status.

    The objective of this project is to investigate automatic activity recognition based on inertial sensor data. To this end, data from different activities will be recorded using the IMU sensor integrated in the hearing aids, which are provided by the cooperation partner WS Audiology. Machine learning algorithms will be developed to automatically classify different activity patterns.
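
    The basic windowing-and-classification pipeline can be sketched as follows (synthetic single-axis signals; the real system uses hearing-aid IMU recordings and richer features and models):

```python
# Activity-recognition sketch: cut a signal into fixed windows, extract
# simple statistical features per window, and train a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

fs = 50                                          # sampling rate in Hz
rng = np.random.default_rng(0)

def make_signal(activity, seconds=60):
    t = np.arange(0, seconds, 1 / fs)
    if activity == "walking":                    # ~2 Hz oscillation + noise
        return np.sin(2 * np.pi * 2 * t) + rng.normal(0, 0.2, t.size)
    return rng.normal(0, 0.05, t.size)           # resting: low-level noise

def windows(sig, win=2 * fs):
    # Non-overlapping 2 s windows; features: mean, std, mean abs difference
    sig = sig[: len(sig) // win * win].reshape(-1, win)
    return np.column_stack(
        [sig.mean(1), sig.std(1), np.abs(np.diff(sig)).mean(1)]
    )

X = np.vstack([windows(make_signal("walking")), windows(make_signal("resting"))])
y = np.array([1] * 30 + [0] * 30)                # 30 two-second windows each
clf = RandomForestClassifier(random_state=0).fit(X, y)
```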

  • Connecting digital mobility assessment to clinical outcomes for regulatory and clinical endorsement

    (Third Party Funds Single)

    Term: 1. April 2019 - 31. March 2024
    Funding source: Europäische Union (EU)
    URL: http://www.mobilise-d.eu/

    Optimal treatment of the impaired mobility resulting from ageing and chronic disease is one of the 21st century's greatest challenges facing patients, society, governments, healthcare services, and science. New interventions are a key focus. However, to accelerate their development, we need better ways to detect and measure mobility loss. Digital technology, including body worn sensors, has the potential to revolutionise mobility assessment. The overarching objectives of MOBILISE-D are threefold: to deliver a valid solution (consisting of sensor, algorithms, data analytics, outcomes) for real-world digital mobility assessment; to validate digital outcomes in predicting clinical outcome in chronic obstructive pulmonary disease, Parkinson’s disease, multiple sclerosis, proximal femoral fracture recovery and congestive heart failure; and, to obtain key regulatory and health stakeholder approval for digital mobility assessment. The objectives address the call directly by linking digital assessment of mobility to clinical endpoints to support regulatory acceptance and clinical practice. MOBILISE-D consists of 35 partners from 13 countries with long, successful collaboration, combining the requisite expertise to address the technical and clinical challenges. To achieve the objectives, partners will jointly develop and implement a digital mobility assessment solution to demonstrate that real-world digital mobility outcomes can successfully predict relevant clinical outcomes and provide a better, safer and quicker way to arrive at the development of innovative medicines. MOBILISE-D's results will directly facilitate drug development, and establish the roadmap for clinical implementation of new, complementary tools to identify, stratify and monitor disability, so enabling widespread, cost-effective access to optimal clinical mobility management through personalised healthcare.
