OpenAI CEO Sam Altman this week speculated that “most people” would have assumed Artificial General Intelligence had arrived if they'd seen ChatGPT in action before GPT-3's arrival in 2020.
The go-to keynote guest star made the claim at Snowflake’s annual jamboree in San Francisco this week, as he took part in an amicable chinwag with the data warehouse and analytics company’s CEO, Sridhar Ramaswamy.
Host Sarah Guo, an investor in AI companies, encouraged Altman to speculate on when the tech industry would achieve the much-cited benchmark of artificial general intelligence. It turns out he did not need much encouragement to overlook the fact that the concept itself is much debated.
"Just before we launched GPT-3 [in June 2020], the world had not yet seen a good language model. If you could go back to that moment and show someone ChatGPT today — to say nothing of Codex or anything else — but just ChatGPT. I think most people would say that's AGI for sure," he said.
"We're great at adjusting our expectations, which I think is a wonderful thing about humanity. Mostly, the question of what AGI is doesn't matter. It is a term that people define differently. The thing that matters is the rate of progress that we have seen year for the last five years should continue for at least the next five, probably well beyond that, but hard to say," he said.
The controversial CEO seemed to take the view that a scientific conclusion around AGI could be a matter of popular opinion, of what "most people think." One might speculate, then, that if most people think scientists have cured cancer or understood dark matter, then that must be true, leaving aside the fact that in his native USA, credence in science is in perilously short supply.
Nonetheless, Altman said, “A system that can either autonomously discover new science or be such an incredible tool to people that our rate of scientific discovery in the world like quadruples or something: that would satisfy any test I could imagine for an AGI.”
Guo then encouraged the OpenAI CEO to speculate on what he might do with a thousand times the compute power currently available to his organization.
“I would ask it to work super hard on AI research, figure out how to build much better models, and then ask that much better model what we should do with all the compute,” he said.
Ramaswamy said that 1000x more compute power would be useful to Snowflake, but outside tech, it might be good for the RNome project, which researches the building blocks inside human cells.
“It's like the DNA sequencing project that we did 20-odd years ago, but it's about figuring out RNA expressions. Turns out they control how proteins work in our body. A breakthrough there, in knowing how RNA controls DNA expression, is likely to solve a ton of diseases and put humanity forward so much more,” he said.
Others might be concerned about whether our current energy supplies could meet the demands of 1000x compute, or what that might do to carbon emissions. Of course, super-intelligent machines will have solved the problem of climate change by then. Or, at least, so long as most people think the problem is solved, that will be fine.