We are pleased to invite you to join us on 5 December 2025 for an insightful lecture delivered by our guest speaker, Mirac Suzgun, on "Can LLMs Distinguish Belief from Knowledge? Implications for Law".
The speaker:
Mirac Suzgun is a Ph.D. candidate in Computer Science at Stanford University, co-advised by Professors Dan Jurafsky and James Zou, and a J.D. candidate at Stanford Law School. His research examines the capabilities and limitations of modern language models, focusing on reasoning, hallucination detection and mitigation, and societal applications. He also conducts legal scholarship on constitutional law, administrative law, and AI governance and regulatory policy, and works closely with Professor Daniel E. Ho at the Stanford RegLab. He graduated from Harvard College with a joint degree in Mathematics and Computer Science and a secondary field in Folklore & Mythology, receiving the Thomas T. Hoopes Prize for his undergraduate thesis. His work has appeared in leading venues including Nature Machine Intelligence, The Lancet Digital Health, Journal of Legal Analysis, Journal of Empirical Legal Studies, ACL, EMNLP, ICLR, and NeurIPS. He has worked at Google Brain, Microsoft Research, Meta's GenAI/Llama team, and OpenEvidence, and has served as a legal intern at the Administrative Conference of the United States and as a litigation summer associate at WilmerHale. His graduate studies have been supported by the Google Ph.D. Fellowship, Stanford HAI-SAP Fellowship, and Stanford Law School Fellowship.
What the lecture is about:
As language models enter high-stakes domains including law, medicine, and journalism, their capacity to distinguish belief from knowledge becomes paramount. An evaluation of 24 state-of-the-art models using KaBLE—a benchmark spanning epistemic reasoning tasks—reveals systematic failures when models process first-person false beliefs, with even advanced models sometimes dramatically underperforming. Overall, models exhibit a pronounced attribution bias, handling third-party beliefs far more accurately than beliefs stated by the user. They also apply the factive nature of knowledge inconsistently and show striking sensitivity to minor linguistic variations. These limitations carry immediate implications for legal practice, witness testimony analysis, evidence assessment, and the responsible governance of AI systems in judicial contexts.
Registration:
Registration is mandatory but free of charge. Please register here by 3 December 2025.
We look forward to seeing many of you there!
Location:
SEM 10, Juridicum, Schottenbastei 10-16, 1010 Vienna
