Mert Yuksekgonul

I'm a fourth-year PhD student in Computer Science at Stanford University. I am lucky to be advised by James Zou and Carlos Guestrin.

I work on self-improving and self-moderating AI systems.

My goal is to build systems that learn from their experience and recognize their limits, so they can safely tackle previously unsolvable problems.

Email  /  Github  /  Twitter  /  Google Scholar


Selected Publications

TextGrad: Automatic "Differentiation" via Text
Mert Yuksekgonul*, Federico Bianchi*, Joseph Boen*, Sheng Liu*, Zhi Huang*, Carlos Guestrin, James Zou
Preprint
[Preprint, Package and tutorials]
When and why vision-language models behave like bags-of-words, and what to do about it?
Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou
Oral @ ICLR 2023 (Top 5% of all accepted papers)
[Paper, Code]
Beyond Confidence: Reliable Models Should Also Quantify Atypicality
Mert Yuksekgonul, Linjun Zhang, James Zou, Carlos Guestrin
NeurIPS 2023, Contributed Talk @ ICLR 2023 Trustworthy ML
[Paper, Code]
Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi
ICLR 2024
[Paper, Code]
Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou
Spotlight @ ICLR 2023 (Top 25% of all accepted papers)
[Paper, Code]
Leveraging medical Twitter to build a visual–language foundation model for pathology AI
Zhi Huang*, Federico Bianchi*, Mert Yuksekgonul, Thomas Montine, James Zou
Nature Medicine
[Preprint, Demo]
Meaningfully debugging model mistakes using conceptual counterfactual explanations
Abubakar Abid*, Mert Yuksekgonul*, James Zou
ICML 2022
[Paper, Code]
Template from this website.