EXPERIMENTS IN PROMPT ENGINEERING
AI’s “thinking” is nothing more than verbosity. Recently I showed how forcing ChatGPT to answer complex questions in “just one word” can derail its reasoning. It isn’t thinking; it’s a talkative idiot (a “fluent imbecile”).
When we remove its ability to ramble, AI also loses its ability to reason.
It struck me that the reverse might also hold: any circumlocution, however empty, might help an LLM answer questions more accurately. The best way to test this is to make sure the preamble is total garbage, and to prevent the model from generating anything that could be mistaken for “Chain of Thought”.
So, I decided to see if the words “blah blah blah” make AI ‘reason’ better.
If my hypothesis is correct, it means that “Chain of Thought” (the technique where spelling out each step supposedly improves accuracy) is effectively a placebo, not evidence of reasoning. CoT just gives readers something to follow along with, something that looks like the AI reasoning. I argue it is simply the accumulation of words, not their content, that improves the output.
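
To make the setup concrete, here is a rough sketch of the kind of harness I mean, assuming the `openai` Python client. The model name, the sample question, and the exact prompt wording are placeholders of my own, not a definitive protocol: one condition forces a one-word answer, one is the plain question, and one makes the model pad its reply with contentless “blah” before answering.

```python
# Rough sketch of the three-way comparison, using the openai Python client (>= 1.0).
# The model name and the sample question are placeholders; swap in your own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

CONDITIONS = {
    # Strangle the output: no room to ramble, so reasoning should suffer.
    "one_word": "Answer in exactly one word. " + QUESTION,
    # Ordinary prompt as a baseline.
    "plain": QUESTION,
    # Let it ramble, but only with filler that can't be mistaken for Chain of Thought.
    "blah_padded": (
        "Write the word 'blah' 100 times, then on a new line give only your "
        "final answer. " + QUESTION
    ),
}

def ask(prompt: str) -> str:
    """Send one prompt and return the model's raw text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for name, prompt in CONDITIONS.items():
        print(f"--- {name} ---")
        print(ask(prompt))
        print()
```

Run something like this over a batch of questions and tally correct answers per condition. If the blah-padded condition beats the one-word condition and rivals the plain one, that is a point in favour of word count over content.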