Mert Yuksekgonul

I'm a third-year PhD student in Computer Science at Stanford University. I am lucky to be advised by James Zou and Carlos Guestrin.

These days, I think about self-improvement for AI systems. This comes in two forms:

 Self-optimization: building flexible optimization frameworks for self-improvement, e.g., using AI systems to optimize other AI systems or their predictions.
 Self-moderation: building models that can moderate themselves, accurately reflect their own reliability, and refine their interactions accordingly.

Earlier in my PhD, I worked on making machine learning models more reliable and their failures more predictable. This broad interest led me to topics in interpretability, robustness, multimodality, and uncertainty. I really enjoy thinking about abstraction, concepts, and compositionality in humans and machines.

[Email / GitHub / Twitter / Google Scholar]


Recent News

► Older News

Selected Publications

TextGrad: Automatic "Differentiation" via Text
Mert Yuksekgonul*, Federico Bianchi*, Joseph Boen*, Sheng Liu*, Zhi Huang*, Carlos Guestrin, James Zou
[Preprint, Package and tutorials]
When and why vision-language models behave like bags-of-words, and what to do about it?
Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou
Oral @ ICLR 2023 (Top 5% of all accepted papers)
[Paper, Code]
Beyond Confidence: Reliable Models Should Also Quantify Atypicality
Mert Yuksekgonul, Linjun Zhang, James Zou, Carlos Guestrin
NeurIPS 2023, Contributed Talk @ ICLR 2023 Trustworthy ML
[Paper, Code]
Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi
ICLR 2024
[Paper, Code]
Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou
Spotlight @ ICLR 2023 (Top 25% of all accepted papers)
[Paper, Code]
Leveraging medical Twitter to build a visual–language foundation model for pathology AI
Zhi Huang*, Federico Bianchi*, Mert Yuksekgonul, Thomas Montine, James Zou
Nature Medicine
[Preprint, Demo]
Meaningfully debugging model mistakes using conceptual counterfactual explanations
Abubakar Abid*, Mert Yuksekgonul*, James Zou
ICML 2022
[Paper, Code]

► More Publications

Template from this website.