Maximilian Dreyer
PhD candidate @Fraunhofer HHI in Berlin
Explainable AI, Deep Learning, Computer Vision
recent
latest work
Prototypical Concept-based Explanations (PCX)
Concept-based prototypes summarize model behavior in condensed form, enabling an understanding of model (sub-)strategies. PCX thus makes it possible to quickly uncover spurious model behavior or data quality issues. Importantly, PCX also allows individual model predictions to be validated quantitatively and qualitatively, taking an important step towards more objective and applicable XAI. A rough sketch of the core idea follows below.
December 2023
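To make the prototype idea concrete, here is a minimal, hypothetical sketch: per-sample concept relevance vectors (e.g., obtained with CRP) are summarized by a Gaussian mixture per class, and a prediction is scored by its likelihood under that model. The function names, the diagonal-covariance choice, and the number of prototypes are illustrative assumptions, not the PCX implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_prototypes(relevance_vectors: np.ndarray, n_prototypes: int = 4) -> GaussianMixture:
    """Summarize a class's concept relevance vectors (shape [n_samples, n_concepts])
    into a few prototypical (sub-)strategies via a Gaussian mixture (assumed setup)."""
    return GaussianMixture(n_components=n_prototypes, covariance_type="diag",
                           random_state=0).fit(relevance_vectors)

def prediction_score(gmm: GaussianMixture, sample_relevance: np.ndarray) -> float:
    """Log-likelihood of one sample's relevance vector under the prototype model;
    unusually low values flag out-of-strategy (possibly spurious) predictions."""
    return float(gmm.score(sample_relevance[None, :]))
```

Samples close to a prototype follow an "ordinary" strategy; low-likelihood outliers are the "extra-ordinary" cases worth inspecting.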
news
  • April 2024 // PCX accepted for SAIAD Workshop at CVPR-24.
  • April 2024 // PURE accepted for XAI for Computer Vision Workshop at CVPR-24.
  • Feb. 2024 // Talk on Reveal2Revise at Stanford MedAI.
work
PURE: Turning Polysemantic Neurons Into Pure Features
Co-authors: Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin
PURE purifies representations by turning polysemantic units into monosemantic ones, thereby increasing latent interpretability.
accepted:
CVPRW 2024
Paper
Code
April 2024
Understanding the (Extra-)Ordinary: Validating DNN Decisions with Prototypical Concept-based Explanations
Co-authors: Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin
Prototypes enable an understanding of model (sub-)strategies and further allow model predictions to be validated.
accepted:
CVPRW 2024
Paper
Code
December 2023
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space
Co-authors: Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
Introducing a novel method that enforces the unlearning of spurious concepts picked up by AI models; the core idea is sketched below.
published:
AAAI 2024
Paper
Code
September 2023
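The title names the mechanism: penalizing, during training, the component of the latent-space gradient that aligns with a spurious concept's direction. The sketch below is a minimal, hypothetical rendering of that idea, not the paper's implementation; `features`, `head`, `cav` (a precomputed concept activation vector), and `lam` are illustrative names.

```python
import torch
import torch.nn.functional as F

def debiased_loss(features, head, x, y, cav, lam=1.0):
    """Task loss plus a gradient penalty in latent space (assumed formulation)."""
    z = features(x)                      # latent activations
    logits = head(z)
    task_loss = F.cross_entropy(logits, y)
    # gradient of the true-class logits w.r.t. the latent activations
    grad_z = torch.autograd.grad(
        logits[torch.arange(len(y)), y].sum(), z, create_graph=True
    )[0]
    # penalize alignment between the latent gradient and the bias direction
    penalty = (grad_z.flatten(1) @ cav).pow(2).mean()
    return task_loss + lam * penalty
```

Driving the penalty to zero makes the prediction locally insensitive to the bias direction, which is what "unlearning" the spurious concept amounts to here.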
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models
Co-authors: Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin
Introducing Reveal to Revise (R2R), an Explainable AI life cycle to identify and correct model bias.
published:
MICCAI 2023
Paper
Code
March 2023
Revealing Hidden Context Bias of Localization Models
Co-authors: Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
By extending CRP to localization tasks, we are able to precisely identify background bias concepts used by AI models.
published:
CVPRW 2023
Paper
Code
December 2022
From attribution maps to human-understandable explanations through Concept Relevance Propagation
Co-authors: Reduan Achtibat, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
Introducing Concept Relevance Propagation (CRP) as a local and global XAI method to understand the hidden concepts used by AI models.
published:
Nature Machine Intelligence
Paper
Code
June 2022
Explainability-Driven Quantization for Low-Bit and Sparse DNNs
Co-authors: Daniel Becking, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin
XAI-driven quantization to generate sparse neural networks while maintaining or even improving model performance.
published:
xxAI - Beyond Explainable AI
Paper
September 2021
Comparison of Explainable AI Methods for Object Detection
Comparison of the feature relevance methods LRP, DeepLIFT, SHAP and LIME.
Paper
September 2021
Deep Learning Lane Detection with LiDAR and Camera
Predicting road lanes using LiDAR and camera data. Ground truth is generated via localization in a 2D HD map.
Paper
September 2020
cv
Education
Master of Science: Computational Science
2020 - 2022
University of Potsdam, Potsdam
Advanced studies in computational science with a focus on statistical data analysis and artificial intelligence.
Bachelor of Science: Physics
2015 - 2019
Humboldt University, Berlin
Undergraduate studies in physics.
2018
Uppsala University, Uppsala
One semester of study in Uppsala as part of the Erasmus exchange programme.
Working Experience
Research Associate / PhD Student
since July 2022
Fraunhofer HHI, Berlin
Research Associate in the Machine Learning Group, working on Explainable AI (XAI). Advancing and applying concept-based XAI methods, with touch points in AI robustness.
Research Assistant
November 2020 - July 2022
Fraunhofer HHI, Berlin
Research Assistant in the Machine Learning Group, working on Explainable AI (XAI). Application and adaptation of XAI methods to segmentation and object detection models. Application of XAI to make deep neural networks more efficient via quantization. Development of concept-based explainability methods.
Working Student
June 2019 - November 2020
IAV, Berlin
Tool development, modeling of car motion and behavior, and simulations to analyze safety margins of driving functions. Visualization and interpretation of high-dimensional experimental results.
Working Student, Bachelor Thesis
November 2018 - June 2019
Baumer Hübner, Berlin
General research on magnetic pole rings and magnetization processes. Planning and performing measurements and analyzing the resulting data. Analyzing the magnetic behavior of small-diameter magnetic pole rings. Comparing an analytical model, FEM simulations, and experimental measurements.
Working Student
July 2016 - September 2018
DESY, Zeuthen
Supervising school classes visiting the vacuum lab at DESY in Zeuthen. Part of the organization team for the TeVPA 2018 conference.

contact

Welcome to my homepage!

I live in Berlin and work at Fraunhofer HHI on increasing AI transparency. Analyzing and using data to create helpful and interesting results is what made me fall in love with data analytics and programming. Alongside my work and studies in these areas, I like to dive deeper into web development.

Please feel free to contact me via max.dreyer[@]outlook.com or on LinkedIn.