Ask HN: What would convince you to take AI seriously?


OpenAI recently announced that an AI system it developed achieved a gold-medal score at the International Mathematical Olympiad (IMO). The IMO is a very difficult exam: only the best high schoolers in the world even qualify, let alone win gold, and those who do often go on to cutting-edge mathematical research, like Terence Tao, who won the Fields Medal in 2006. It has also been rumored that DeepMind achieved the same result with a yet-to-be-released model.

Now, success on a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not reach easily. Even so, many are claiming it isn't really a big deal, and that humans will remain far smarter than AIs for the foreseeable future.

My question is: if you are in the aforementioned camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive output within a single-digit number of years"?

Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8 scale automobile model, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "OK, this is really real" is.
