OpenAI’s Sebastien Bubeck (first author, earlier, on the oversold paper Sparks of Artificial General Intelligence, which dubiously alleged that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system”) dropped a HUGE claim on Friday:
Solving a whole bunch of unsolved Erdös problems (a famous set of mathematical conjectures) would indeed be a big deal.
A lot of people were excited; 100,000 people viewed his post on X.
Alas, “found a solution” didn’t mean what people thought it did. People imagined that the system had discovered original solutions to open problems. All that really happened is that GPT-5 (oversimplifying slightly) crawled the web for solutions to already-solved problems.
Within hours, the math and AI communities revolted:
Sir Demis Hassabis called it “embarrassing”.
The next day, Bubeck tried to backtrack, deleting the original tweet and claiming he was misunderstood:
Yeah, right. I don’t know anybody who believes his walk-back.
A friend emailed me, “It’s sort of like when you tell your girlfriend that you’ve ‘figured out’ a problem when you just googled it.”
§
I would hope that the whole thing would be seen as a kind of teachable moment. Some people (I won’t name the guilty) were extremely quick to take Bubeck at his word. But why? The claim would have been extraordinary, and should have been vetted closely. I smell a really big dose of people believing what they want to believe.
All of this gave me a bad case of déjà vu, back to 2019, when OpenAI claimed that they had a robot that had “solved” the Rubik’s cube. That was kind of the beginning of the end of my relationship with them, because when I probed, I found that the claim of a “solution” was pretty misleading, as I summed up in a tweet, and they refused to correct their misleading presentation:
Some things never change.