“But ChatGPT Said…”
Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts.
These models predict which words are most likely to come next in a sequence.
They can produce convincing-sounding information, but that information may not be accurate or reliable.
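To make the "predicting the next word" idea concrete, here's a tiny toy sketch in Python. The prompt, the candidate words, and the probabilities are all invented for illustration; a real model works over a vast vocabulary with learned probabilities, not a hand-written table, but the basic point is the same: it picks likely words, it doesn't check facts.

```python
import random

# A toy "language model": for a given prompt, it only knows how often
# certain words tend to follow it, not whether any continuation is true.
# The prompt, words, and probabilities below are made up for illustration.
next_word_odds = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # shows up a lot in text, but wrong
        "Canberra": 0.40,  # correct, but less common in casual writing
        "Melbourne": 0.05,
    }
}

def predict_next_word(prompt: str) -> str:
    """Pick the next word in proportion to how often it follows the prompt."""
    options = next_word_odds[prompt]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("The capital of Australia is"))
# Often prints "Sydney" — a likely word, not a checked fact.
```

Notice that nothing in that sketch ever asks "is this true?" It only asks "is this likely?"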
Imagine someone who has read thousands of books, but doesn’t remember where they read what.
What kinds of things might they be good at?
What kinds of things might they be bad at?
Sure, you might get an answer that’s right or advice that's good… but what “books” is it “remembering” when it gives that answer? That answer or advice is a common combination of words, not a fact.
Don’t copy-paste something that a chatbot said and send it to someone as if that’s authoritative.
When you do that, you’re basically saying “here are a bunch of words that often go together in a sentence.”
Sometimes that can be helpful or insightful. But it’s not a truth, and it’s certainly not the final say in a matter.
Further reading:
- OpenAI: Why language models hallucinate
- Oxford University: Large Language Models pose risk to science with false answers, says Oxford study
- New York Times: A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse (Archived Version)
- MIT Media Lab: People Overtrust AI-Generated Medical Advice despite Low Accuracy
- Business Insider: Why AI chatbots hallucinate, according to OpenAI researchers
- Reuters: AI 'hallucinations' in court papers spell trouble for lawyers
- Nature: AI chatbots are sycophants — researchers say it’s harming science
- CNN: Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide
- Financial Times: The ‘hallucinations’ that haunt AI: why chatbots struggle to tell the truth (Archived Version)
- The Guardian: ‘Sycophantic’ AI chatbots tell users what they want to hear, study shows