Ask HN: Is GPT-5 a regression, or is it just me?


Context: I have been using GPT-5 since its release over a month ago, within my Plus subscription. Before this release, I relied heavily on o3 for most complex tasks, with 4o for simple questions. I use it for a mix of scientific literature web search (e.g. understanding health-related topics), occasional coding assistance, and help with *nix sysadmin tasks. Note that I have not used its API or IDE integrations.

Based on a month of GPT-5 usage, this model feels primarily like a regression:

1. It's slow: thinking mode can take ages, and sometimes gets downright stuck. Its auto-assessment of whether it needs to think feels poorly tuned for most tasks, defaulting too easily to deep reasoning mode.

2. Hallucinations are in overdrive: I would estimate that in 7 out of 10 tasks, hallucinations clutter the responses and require corrections, careful monitoring, and steering back on track. It hallucinates list items that weren't in my prompt, software package functionalities/capabilities, CLI parameters, etc. Even thorough prompting with explicit links to sources, including within deep research, frequently goes off the rails.

3. Not self-critical: even in thinking mode, it frequently produces incorrect output that a blunt "this is not correct, check your answer" can immediately fix.

Note: I am not a super-advanced prompt engineer, and the above assessment is mainly relative to the previous generation of models. I would expect that as model capabilities progress, the need for users to apply careful prompt engineering goes down, not up.

I am very curious to hear your experiences.
