LLMs aren’t world models

A friend who plays better chess than me (and knows more math & CS than me) said that he had played some moves against a newly released LLM, and that it must be at least as good as him. I said, no way, I'm going to cRRRush it, in my best Russian accent. I make a few moves, but unlike him, I don't make good moves1, which would be opening book moves it has seen a million times; I make weak moves, which it hasn't2. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that, until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I've just tried this again, and it lost track of the board state by move 9.

When I say that LLMs have no world model, I don't mean that they haven't seen enough photos of chess knights, or held a knight in their greasy fingers; I don't necessarily mean the physical world. And I obviously don't mean that a machine can't learn a model of chess, given that all leading chess engines use machine learning. I only mean that, having read a trillion chess games, LLMs specifically have not learned that to make legal moves, you need to know where the pieces are on the board. Why would they? For predicting the moves or commentary in chess games, which is what they're optimized for, this would help very marginally, if at all.
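
The board-state point is easy to check mechanically, if you want to repeat the experiment. Below is a minimal sketch of such a test, assuming the python-chess library and a hypothetical ask_llm_for_move() function wired up to whatever chat model you want to poke: the script keeps the explicit board state that the LLM doesn't, plays your (deliberately weak) moves, and stops at the first reply that isn't legal on the actual board.

```python
# Sketch of the experiment, assuming python-chess (pip install chess) and a
# hypothetical ask_llm_for_move() that sends the move history to a chat model
# and returns its reply in standard algebraic notation (e.g. "Nf3").
import chess

def ask_llm_for_move(history_san: list[str]) -> str:
    """Hypothetical: query an LLM with the game so far, return its move as SAN."""
    raise NotImplementedError("wire this up to your chat API of choice")

def play_weak_moves(my_moves_san: list[str]) -> None:
    board = chess.Board()          # the explicit world model the LLM doesn't keep
    history: list[str] = []
    for my_move in my_moves_san:
        board.push_san(my_move)    # my (deliberately weak) move
        history.append(my_move)
        reply = ask_llm_for_move(history)
        try:
            board.push_san(reply)  # raises ValueError if illegal on this board
        except ValueError:
            print(f"Illegal move at ply {len(history) + 1}: {reply!r}")
            print(board)           # e.g. it just tried to move a knight that isn't there
            return
        history.append(reply)
    print("Survived all moves:", board.fen())
```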

When people say things like "Why did ChatGPT lie about XYZ?", I usually hold my tongue. But when I don't, it's to say: ChatGPT isn't lying; it truly, literally doesn't know anything. It can't lie because it has no intentions and no concept of truth or fiction, or of you, or of anything. It doesn't even have concepts. It's an amazingly complex auto-complete, so complex that it creates a fairly convincing illusion of intelligence. But it doesn't actually know anything about the world. It only knows the structure of the things it has read.

It does sometimes look like it understands the world, or aspects of it. But these are Potemkin understandings.

And what about asking LLMs why they said what they said?

The first problem is conceptual: You're not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that's an illusion created by the conversational interface. What you're actually doing is guiding a statistical text generator to produce outputs based on your prompts.

There is no consistent "ChatGPT" to interrogate about its mistakes, no singular "Grok" entity that can tell you why it failed, no fixed "Replit" persona that knows whether database rollbacks are possible. You're interacting with a system that generates plausible-sounding text based on patterns in its training data (usually trained months or years ago), not an entity with genuine self-awareness or system knowledge that has been reading everything about itself and somehow remembering it.

...

A lifetime of hearing humans explain their actions and thought processes has led us to believe that these kinds of written explanations must have some level of self-knowledge behind them. That's just not true with LLMs that are merely mimicking those kinds of text patterns to guess at their own capabilities and flaws.