Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts.
These models work by predicting which words are most likely to come next in a sequence.
They can produce convincing-sounding information, but that information may not be accurate or reliable.
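To make "predicting the next word" concrete, here's a minimal sketch in Python: a toy counter that, for each word, tracks which words tend to follow it and picks the most frequent one. This is nothing like a real LLM's internals, and the tiny corpus is an invented stand-in for training data, but it captures the core intuition: the output is driven by statistical patterns in text, not by stored facts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (an assumption for
# illustration; real models train on vastly more text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words tend to follow it.
next_words = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_words[word][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = next_words[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- the most frequent follower, not a fact
```

Ask this toy what follows "the" and it answers "cat", not because that's true of anything in the world, but because that pairing shows up most often in its data. Scaled up enormously, that's the flavor of what a chatbot is doing.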
Imagine someone who has read thousands of books but doesn’t remember which book anything came from.
What kinds of things might they be good at?
What kinds of things might they be bad at?
Sure, you might get an answer that’s right or advice that’s good… but which “books” is it “remembering” when it gives that answer? That answer or advice is a common combination of words, not a fact.
Don’t copy-paste something that a chatbot said and send it to someone as if that’s authoritative.
When you do that, you’re basically saying “here are a bunch of words that often go together in a sentence.”
Sometimes that can be helpful or insightful. But it isn’t established truth, and it’s certainly not the final word on a matter.
