What you can do about AI 2027


One of AI 2027's two endings is a race ending in which humanity loses control of its destiny in 2027 and is extinct by 2030. How can we avoid this?

Below we share what we see as the best ways for you to improve humanity’s chances.

We depicted AGI in 2027 because we think it’s a plausible outcome and society isn’t acting with anywhere near the appropriate urgency. We may only have 2 years left before humanity’s fate is sealed!

Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.

If you can’t contribute now, keep in mind that AGI timelines are uncertain: our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then. Consider preparing to contribute if AGI arrives post-2027.

Let’s imagine the world was up to the task of handling an intelligence explosion. What might that look like?

  1. Governments and the public understand that AGIs will dictate humanity’s future and might arrive soon. There’s high-quality online discussion about AGI, companies disclose their internal AI capabilities, and governments have invested tens of billions into AGI preparedness. A world in which the public is informed about risks from superintelligence would be a safer world.

  2. As companies automate AI R&D, governments are on high alert and take action. Government agencies and nonprofits conduct regular interviews with top researchers at the companies. Companies report their estimates of AI R&D speedups based on surveys and uplift studies.

  3. Companies publish detailed safety cases justifying why their AIs won’t cause catastrophic harm. These are treated with far more seriousness than safety cases in industries that don’t pose an existential threat, such as the auto industry. The safety cases argue that either (a) their AGIs aren’t adversarially misaligned or (b) even if they were, they wouldn’t be able to put us on a catastrophic path. The government, external organizations, academia, and the public engage deeply with these safety cases. If the safety cases aren’t strong enough, companies refrain from developing or deploying better AIs.

  4. Well-resourced teams inside and outside of AI companies do alignment research to better control AIs’ goals. Alignment research is seen as a top priority with respect to attention and resourcing.

  5. It’s practically impossible for the CEO or POTUS to use aligned AGIs to seize control of humanity’s future. All of their queries to the models are logged and monitored. The model spec and system prompt are public and red-teamed to prevent coups.

  6. The US and China coordinate to reduce competitive pressures, ensuring models aren’t developed without strong safety cases. If necessary for safety, development is slowed. On-chip verification and inspectors allow for trustless enforcement of an international deal.

The above is not an exhaustive list, but it covers some of our top priorities.

Our next project will have detailed recommendations for governments and AGI companies. In the meantime, we encourage focusing on steering toward the world described above.

You might start by learning more about AGI-relevant topics. Along with AI 2027, we recommend the following regarding AGI forecasting and strategy (more in footnote):

  1. The AGI-relevant episodes of the Dwarkesh podcast and the 80,000 Hours podcast

  2. Situational Awareness, though we think it underemphasizes international coordination and AGI alignment difficulty

  3. Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover

  4. AI could defeat all of us combined

  5. AI-Enabled Coups: How a Small Group Could Use AI to Seize Power

Consider also going through this technical AI alignment or AI governance course with a friend, or registering for the facilitated version, with a focus on the portions relevant to existential risks.

Many sorts of work can help. Below we list some of the most common ones along with specific opportunities:

  1. Governance/policy/forecasting research and advocacy. Policy research focuses on determining which AI policies are both impactful and tractable, in the near term and during AI takeoff. Policy advocacy focuses on getting these policies implemented.

    1. Opportunities designed for entering the field include the Horizon Fellowship, IAPS AI Policy Fellowship, Pivotal Fellowship, and ERA Fellowship. We’ll also highlight RAND’s Technology and Security Policy Fellowship, GovAI, and our very own AI Futures Project.

  2. Technical research, evaluations, and demonstrations. Research focuses on developing techniques to align and control increasingly capable AIs. Demonstrations and evaluations of AIs’ capabilities and goals help inform decision-makers and the public.

    1. The MATS Program is for entering the field. We’ll also highlight Redwood Research, METR, and Apollo Research. See also this video with technical safety career advice.

  3. Beneficial AI applications. Some AI applications are especially helpful for achieving positive AGI outcomes, e.g. AI for decision-making and AI for coordination. This blog post details some promising applications.

  4. Communications and journalism. Help the public understand when AGI might come and the impact it will have.

    1. The Tarbell Fellowship is for entering AI journalism.

  5. Infosecurity. Securing AI model weights and algorithmic secrets is important for nonproliferation.

  6. Operations / other. AI safety organizations, like others, also need a variety of other skillsets, such as generalist operations and management.

80,000 Hours and AISafety.com have more comprehensive job boards, and 80,000 Hours gives career advice.

There are also things you can do without working full-time on AI safety, or in addition to doing so.

  1. Contribute to public discourse. As AI improves, the amount of AI discourse will increase and the stakes will rise. Having reasonable voices on blogs, social media, podcasts, etc. will help improve societal decision-making. Organized public advocacy may also play an important role.

  2. Private discourse and informing others. Having open conversations with friends, family, etc. about AGI may have significant effects. If you’re a college student, consider joining your college’s AI safety club or founding one.

  3. Donate. Many AI safety organizations are funding-constrained. Manifund hosts information on many of these projects (our information is here), or you can donate to an organization that we listed in the previous section. If you’re interested in donating >$200k, email us and we may be able to advise you.
