When the Nobel Committee Lost the Plot


When it finally happened, I could not believe it. Yup, they gave him the Nobel Prize.

Yes, that Geoffrey Hinton. The man who once predicted the death of radiologists, declared convolutional neural networks obsolete, and dismissed large language models as glorified parrots—before reality awkwardly tapped him on the shoulder and said, “Dude, really?”

In 2024, the Nobel Committee pulled off one of the most misguided, category-bending PR stunts in recent history, awarding Hinton the world’s most prestigious science prize for what amounts to being the Godfather of Overhype.

As usual, grab your popcorn. Let’s unpack why this prize was not just unnecessary, but spectacularly tone-deaf, hilariously miscategorized, and historically revisionist.

Let’s start with the basics. Hinton helped popularize backpropagation, a technique with roots in the 1960s that was refined and popularized in the 1980s, and which lets neural networks learn by propagating error gradients backward through their layers to minimize a loss.

Now don’t get me wrong, backprop is cool. It’s the backbone of modern deep learning. But should we treat it like Einstein’s theory of relativity? Should we slap a Nobel on a guy who repackaged a half-century-old idea, ignored scaling laws, scoffed at convolutional nets and transformers, and then flip-flopped his way through the AI hype cycle like… eh … everybody else?
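For the uninitiated, the idea in question fits in a screenful of code. Here is a minimal sketch of backpropagation: a tiny one-hidden-layer network trained by gradient descent. The network size, learning rate, and toy dataset are my own illustrative choices, not anything from Hinton's papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression task: predict the product of two inputs (nonlinear,
# so a plain linear model can't do it).
X = rng.normal(size=(64, 2))
y = X[:, :1] * X[:, 1:2]

# One hidden layer of 8 sigmoid units, linear output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

losses = []
for _ in range(2000):
    # Forward pass: compute predictions and the mean squared error.
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))

    # Backward pass: propagate the error gradient layer by layer
    # (this is the chain rule, nothing more exotic).
    d_pred = 2.0 * err / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * h * (1.0 - h)  # chain rule through the sigmoid
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent step on every parameter.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Run it and the mean squared error drops steadily. That “propagate the error backward, nudge the weights” loop is the whole trick.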

Apparently, the Nobel Committee thinks so.

Let’s talk about one of Hinton’s greatest hits: "We should stop training radiologists now."

That gem dropped in 2016. Hinton said that within five years (i.e., 2021), AI would outperform radiologists at reading medical images. The world held its breath. Medical schools began sweating. X-ray technicians feared for their jobs.

Cut to today: radiologists are still very much employed (yes, my bones confirm), and AI is barely competent without a human expert babysitting its outputs like a fragile toddler. In fact, AI tools assist radiologists, they don’t replace them. And the only thing that’s become obsolete in medical imaging is Hinton’s prophecy.

Remember capsule networks? No? Don’t feel bad. No one does.

In 2017, Hinton dramatically proclaimed that convolutional neural networks (CNNs) had peaked, and his new capsule networks would blow them out of the water. Cue applause, papers, keynote slots, and wild speculation.
Long story short, capsule nets flopped. CNNs remained dominant. Transformers took over everything. And capsule nets became just a weird footnote you awkwardly skip over.

Yet the Nobel came anyway. Why not award someone who literally said the engine that powers modern computer vision was done, just before it exploded in relevance?

For years, Hinton was also among those skeptical about simply scaling up neural nets. He believed there were limits. That there had to be something more clever, more “biologically inspired”.

Fast forward to the rise of GPT-3, GPT-4, Claude, and LLaMA. Scaling laws didn't just work; they shattered expectations. Language models became shockingly fluent, creative, and powerful not because of any radical new algorithm, but because we made them bigger and fed them (a fxk ton) more data.

Hinton, ever the philosophical switch-hitter, later said he was "shocked" at how well it worked. Imagine that, one of the field’s “founding fathers” blindsided by its most visible success story. LoL.

But don’t worry: we still gave him the Nobel. Yeeii 🏅

Another recurring Hinton hobbyhorse is his crusade against the "biological implausibility" of backpropagation. For decades, he’s advocated alternatives—brain-inspired mechanisms that supposedly reflect how the human cortex really learns.

Spoiler alert: none of those alternatives have worked. Ever.

Backprop, while maybe not what your brain uses to learn how to ride a bike, still gets the job done better than any of Hinton’s brain-flavored alternatives. Guess what? We’re training computers, not brains. Duh.
But hey, when you’ve spent 30 years saying “any day now” about an idea, why stop?

Let’s not forget Hinton’s dramatic 2023 exit from Google, when he quit his job so he could “speak freely” about how AI might end humanity.

Noble? Perhaps. Convenient? Very.

After spending decades helping build and accelerate the very technology he now fears, Hinton finally hit the brakes when the system was well past the off-ramp.

Some called him brave. Others saw a classic Frankenstein moment—“Oh no, I have created a monster!” — except this time, the monster had already IPO’d, raised a Series B, and deployed across Fortune 500 companies.

Why indeed.

Here's my theory. Opinions my own. Who else?
First, the Nobel is a joke.
Second, the Nobel Committee wanted to stay relevant. They saw the global obsession with AI, felt the winds of public fascination, and said, “We better slap a medal on someone before OpenAI gives them stock options instead.”

So they took a computer science problem, mashed it into the Physics category (seriously?), and handed it to a man whose best predictions never came true, and whose worst ones we’re still trying to forget.

If we’re giving Nobel Prizes to AI, why stop at Hinton?

Why not award the actual engineers and researchers who scaled GPTs, built AlphaFold, or turned LLMs into usable software? Oh, right—those are computer scientists, and we don’t hand Nobels to those nerds, do we?

Why not recognize the deeply under-credited AI pioneers from the Soviet Union, Japan, or early cybernetics who really laid the groundwork?

Instead, we gave it to the man who:

  • Missed the rise of transformers,

  • Bet big on a dead-end (capsule nets, wtf),

  • Predicted a medical apocalypse that never came (wtff),

  • And warned about AI risk after building the bomb (just lol).

Brilliant.

Geoffrey Hinton is undoubtedly influential. But influence and correctness are not the same. The Nobel Prize is supposed to recognize transformative, verified, lasting contributions to human knowledge.

Instead, we got a lifetime achievement award for being very loudly wrong.

Congratulations, Professor Hinton. May your next prediction be less of a punchline.

And to the Nobel Committee: next time, maybe wait until the field stops laughing before you pull out the gold medal.
