AI researcher burnout: the greatest existential threat to humanity?

updated priors

Drawing on multiple interviews within the community of AI researchers and bolstered by mathematical modelling, we find that the future of humanity hinges on whether we can take sufficiently urgent action to safeguard AI researchers from potential burnout.

October 9, 2025

The field of long-termism has previously understood the greatest existential risks to humanity to include all-powerful superintelligence, nuclear proliferation, bioweapons, nanotechnology, etc.

Our new report, based on extensive interviews and rigorous mathematical modelling, reveals a far greater risk that has been hiding in plain sight: AI researcher burnout.

Our calculation is simple:

  • A burnout rate of 0.001% of AI alignment researchers per year
  • Leads to a 0.002% increase in the likelihood of AGI being misaligned
  • Thereby increasing the existential risk of AGI to humanity by 0.003%
  • Which, given how many humans we expect to live until the end of the universe, means:

1,000,000,000,000,000,000,000,000 potential future humans are murdered every time an AI alignment researcher has a bad day.
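For readers who would like to check the arithmetic themselves, here is a minimal sketch of the chain above. The only figure not stated in the report is the total number of future humans, which we have assumed to be roughly 3.3 × 10^28 so that the headline number comes out right.

    # Sketch of the report's burnout-to-extinction arithmetic.
    # The future-population figure is an assumption chosen so the chain
    # reproduces the headline number; the other values are as stated above.

    burnout_rate = 0.00001            # 0.001% of alignment researchers per year
    misalignment_increase = 0.00002   # 0.002% increase in AGI misalignment likelihood
    xrisk_increase = 0.00003          # 0.003% increase in existential risk

    future_humans = 3.3e28            # assumed humans to live until the end of the universe

    lives_lost_per_bad_day = xrisk_increase * future_humans
    print(f"{lives_lost_per_bad_day:.2e} potential future humans per bad day")
    # prints 9.90e+23, i.e. roughly 1,000,000,000,000,000,000,000,000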

Digging into this further, our report finds that:

  • The quality of the first coffee consumed in the morning by any given AI alignment researcher has outsize consequences, with a poor brew leading to a population the size of China being wiped out.
  • Each painful romantic break-up between AI alignment researchers causes a rupture to humanity equivalent to 1,000 Hiroshimas. All AI alignment researchers should therefore be enrolled in couples therapy, irrespective of whether they are in an active relationship — or, if they are polyamorous, in pre-emptive thruples, fourples or fiveples therapy.
  • There is an opportunity to safeguard the future by immediately building new theme parks close to AI alignment hubs. Assuming the parks grant AI researchers maximum-priority queue jumps, the contentment they generate for people working in the field will save trillions of potential future human lives, as well as quite a few chickens.
  • Crucially, our research finds that these theme parks should focus on water rides. Potentially nausea-inducing rollercoasters could set off a devastating butterfly effect that would cause AGI to spiral into a doom loop. Magic carpet-style rides are just about acceptable, provided AI alignment researchers are seated towards the middle.

Do you know an unhappy AI alignment researcher? Please let us know as soon as possible — we will arrange for a crisis team to be dispatched.

We will shortly release the full report. Sign up here for updates.
