Monday, March 2, 2026

Confabulated memory and AI

 

Geoffrey Hinton just reframed the biggest supposed flaw in artificial intelligence. And it changes everything. Hinton: “They shouldn’t be called hallucinations. They should be called confabulations.” One word swap. Entire paradigm shift.

When the legacy tech industry calls AI hallucinations a bug, it reveals a fundamental misunderstanding of what intelligence actually is. They’re expecting the machine to behave like a database. Store a fact. Retrieve the fact. Return the exact same fact every time. That’s not how intelligence works. Not artificial. Not biological.

Hinton: “It’s not that there’s a file stored somewhere in your brain, like in a filing cabinet or in a computer memory.” Your brain doesn’t store memories. It reconstructs them. Every time you recall something, your neural network uses connection strengths shaped by past experience to build the most plausible version of what happened. It fills the gaps. Smooths the inconsistencies. Constructs a coherent story from incomplete signal. And then presents that story to you as fact.

Hinton: “If I ask you to remember something that happened a few years ago, you’ll construct something that seems very plausible to you. And some of the details will be right and some will be wrong.” Here’s the part that should stop you cold. You will be equally confident about the wrong details as the right ones.

Think about that. Really think about it. Every argument you’ve had about who said what. Every memory you’ve defended as certain. Every time you told a story about your own life with complete conviction. Some of those details weren’t real. You constructed them. Confidently. Fluently. And you had no idea.

This isn’t a flaw unique to people with bad memories. Eyewitness testimony is the most confabulated evidence in the human justice system. Innocent people have spent decades in prison because someone remembered something that felt absolutely certain and was absolutely wrong. Your brain didn’t lie to you. It did exactly what brains do. It built the most plausible story it could from the signal it had.

AI does the exact same thing, because it was built on the same principle. The mechanism that makes an AI invent a plausible but wrong answer is the same mechanism that makes it brilliant. You cannot have one without the other. The ability to reason creatively, synthesize across domains, construct explanations for things it has never been told. All of it runs on the same engine as the confabulation.

Hinton: “Psychologists have been studying confabulation in people since at least the 1930s.” This isn’t a new phenomenon. It isn’t a software bug. It isn’t something to be patched in the next model update. It is the price of dynamic intelligence. The shadow cast by the same light that makes these systems remarkable.

We aren’t building better search engines. We are building synthetic minds that think the way minds actually think. Messy. Confident. Occasionally wrong. And for exactly that reason, capable of something no database ever was.
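The database-versus-reconstruction contrast can be sketched in a few lines of toy Python. This is purely illustrative, not a model of brains or of any real AI system: the dictionary stands in for archival storage, and the crude word-overlap score stands in for the "connection strengths" the text describes. The point it demonstrates is the one above: a database can admit ignorance, while a reconstructive process always produces the most plausible answer it can, even when no correct answer exists.

```python
# Toy contrast (illustrative only): archival retrieval vs. generative reconstruction.

def database_recall(store: dict, key: str):
    """Archival memory: exact key in, exact fact out. On a miss, it
    returns None rather than inventing anything."""
    return store.get(key)

def reconstructive_recall(experiences: list[str], cue: str) -> str:
    """Generative memory: always returns the past experience that best
    matches the cue. It never refuses to answer, so on a cue with no
    true match it confidently returns something plausible but wrong."""
    def plausibility(memory: str) -> int:
        # Crude stand-in for connection strengths: words shared with the cue.
        return len(set(memory.lower().split()) & set(cue.lower().split()))
    return max(experiences, key=plausibility)

facts = {"meeting": "Tuesday at 3pm"}
experiences = [
    "the meeting was on Tuesday at 3pm",
    "we met for coffee on Friday morning",
]

print(database_recall(facts, "dentist"))
# -> None: the database admits it has no such fact.

print(reconstructive_recall(experiences, "when was the dentist appointment"))
# -> a fluent, confident answer drawn from unrelated experience,
#    even though no dentist appointment was ever stored.
```

The design choice worth noticing: `reconstructive_recall` has no failure branch. Removing the ability to say "I don't know" is exactly what buys the ability to generalize from partial cues, which is the trade-off the post describes.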



This reframing feels essential. The word “confabulation” carries a different weight—it’s not a malfunction, it’s a process. It acknowledges that intelligence, whether biological or synthetic, is fundamentally generative, not archival. The confidence in wrong details is something I’ve felt in my own memory—the certainty that fades when you realize you’ve woven a coherent story from fragments. It makes me wonder if the real risk isn’t inaccuracy, but our own expectation of infallibility. We ask machines to be more perfect than we are, then call it a flaw when they mirror us.








