Niels Mündler

Profile
I'm a PhD student in Computer Science at ETH Zurich, currently working on AI safety and reliability with a focus on large language models. Previously, I did research at TU Munich with Prof. Tobias Nipkow on the formal verification of data structures.

My work centers on understanding and improving the safety and reliability of large language models. I have developed methods for detecting and mitigating self-contradictory hallucinations in LLMs and analyzed security vulnerabilities in code completion engines. My research aims to make AI systems more trustworthy.

I have a strong background in formal methods and verification, having built verified implementations of fundamental data structures such as B-trees and B+-trees. This experience with rigorous formal reasoning now informs my approach to making AI systems more reliable and secure.

Publications

Practical Attacks against Black-box Code Completion Engines
Slobodan Jenko, Jingxuan He, Niels Mündler, Mark Vero, Martin T. Vechev

arXiv 2024

SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents
Niels Mündler, Mark Niklas Müller, Jingxuan He, Martin T. Vechev

Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler, Jingxuan He, Slobodan Jenko, Martin T. Vechev

International Conference on Learning Representations 2023

A Verified Implementation of B+-Trees in Isabelle/HOL
Niels Mündler, Tobias Nipkow

International Colloquium on Theoretical Aspects of Computing 2022

A Verified Imperative Implementation of B-Trees

Niels Mündler

Archive of Formal Proofs 2021