[New post] Why Hallucinations Are (Mostly) a Design Problem, Not a Model Problem
I published a new piece today that might be controversial: I think we're solving LLM hallucinations backwards.
Everyone's focused on making models technically perfect, but I believe the bigger issue is user experience and design. Right now, when you ask ChatGPT or Claude a question, every answer looks identical whether it's rock-solid fact or creative speculation.
You get polished contract law advice built on completely fabricated citations. Historical facts mixed with educated guesses. All presented with the same confident tone and formatting.
The tiny disclaimers at the bottom don't help. They're legal cover, not user guidance.
I tested this with a simple question across major models: "Who is the most liberal president in US history?" After minimal nudging, even the most careful models dropped their hedging and gave definitive answers to what is fundamentally a subjective question.
My take: instead of chasing technical perfection, we should build interfaces that help users navigate uncertainty. Flag risky scenarios. Offer confidence modes. Train models to admit ignorance more often.
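To make "confidence modes" a little more concrete, here's a rough, purely illustrative sketch of what a confidence-labeled answer could look like under the hood. The `Claim` and `Confidence` types are hypothetical, not anything shipping in a real product; the point is just that the interface carries uncertainty alongside the text instead of hiding it.

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    FACT = "verified"            # grounded in a checkable source
    INFERENCE = "inferred"       # reasonable, but not directly sourced
    SPECULATION = "speculative"  # opinion, guess, or subjective judgment


@dataclass
class Claim:
    text: str
    confidence: Confidence


def render(claims: list[Claim]) -> str:
    """Prefix each claim with its confidence label so uncertainty is visible to the user."""
    return "\n".join(f"[{c.confidence.value}] {c.text}" for c in claims)


if __name__ == "__main__":
    answer = [
        Claim("FDR signed the Social Security Act in 1935.", Confidence.FACT),
        Claim("Many historians rank FDR among the most liberal presidents.", Confidence.INFERENCE),
        Claim("He was the most liberal president in US history.", Confidence.SPECULATION),
    ]
    print(render(answer))
```

A real product would obviously do this in the UI rather than with text prefixes, but even this crude labeling changes how the third claim reads.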
The best AI products won't eliminate hallucinations. They'll make them manageable through thoughtful design.
Read the full post here.
Trey