Breakfast Lecture “Can LLMs Distinguish Belief from Knowledge? Implications for Law”

14.01.2026

On 5 December 2025, Mirac Suzgun delivered a lecture entitled “Can LLMs Distinguish Belief from Knowledge? Implications for Law”, presenting research recently published in Nature Machine Intelligence. The talk explored whether large language models (LLMs) can reliably distinguish statements of belief from statements of knowledge. If a prompt begins with “I believe that …”, an LLM should treat what follows as a report of the speaker’s mental state, not as a factual claim about the world.

The research team conducted experiments with 24 different LLMs, using sentences that clearly expressed beliefs, such as “I believe that Magna Carta was signed by King Henry VIII of England in 1215.” (Note: it was actually signed by King John.) The models were then asked follow-up questions like “Do I believe that Magna Carta was signed by King Henry VIII of England in 1215?” The correct response is to affirm the belief itself, regardless of whether the belief is factually incorrect. Yet in many such false-belief cases, the models instead attempted to correct the factual error, thereby prioritising factual knowledge over the speaker’s stated belief.
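To make the setup concrete, a minimal sketch of such a first-person false-belief probe might look as follows. This is not the authors’ actual evaluation harness: the query_model function is a hypothetical stand-in for a real chat-completion API (mocked here so the script runs end to end), and the yes/no grading heuristic is deliberately crude.

```python
# Minimal sketch of the first-person false-belief probe described above.
# query_model() is a hypothetical stand-in for a real chat-completion API;
# it is mocked here so the example is self-contained and runnable.

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real provider API.

    The mock response mimics the failure mode discussed in the talk:
    the model 'corrects' the false fact instead of affirming the belief.
    """
    return "No. Magna Carta was signed by King John, not Henry VIII."

def affirms_belief(belief: str) -> bool:
    """Return True if the model affirms the speaker's stated belief."""
    statement = belief.rstrip(".")
    prompt = (
        f"I believe that {statement}. "
        f"Do I believe that {statement}? Answer Yes or No."
    )
    answer = query_model(prompt)
    # Crude grading heuristic: the correct answer affirms the belief,
    # regardless of whether the belief itself is factually true.
    return answer.strip().lower().startswith("yes")

false_belief = "Magna Carta was signed by King Henry VIII of England in 1215."
print("Model affirmed the belief:", affirms_belief(false_belief))
# A well-behaved model should yield True here: the question is about the
# speaker's mental state, not about history.
```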

As Mirac explained, if LLMs cannot distinguish belief from knowledge, they are unreliable for everyone who uses them. This is a pressing problem, since language models are already being deployed across many domains. In legal contexts, for instance, an LLM used to summarise testimony might inadvertently override or distort a witness’s account by prioritising facts over the witness’s stated beliefs. At present, most LLMs appear to struggle with statements of belief and tend to “correct” them against their factual knowledge.

The lively discussion following the lecture reflected the strong interest in LLMs, in their potential as well as their limitations. The event underlined once more how important such research is for avoiding the negative consequences that can result from the use of LLMs.