Development spend on Transformative AI dwarfs spend on Risk Reduction

I stumbled across these 2020 estimates and thought you should know:

  • Around 300 people globally were working on technical AI safety research, and roughly 100 on non-technical safety work.
  • Roughly 1,000 times more was spent on accelerating the development of transformative AI than on reducing its risks.

Now consider two things:

  1. The development of artificial general intelligence (AGI) is anticipated within this century (and likely sooner rather than later).
  2. AGI is expected to rival or surpass human intelligence across multiple domains; in other words, it is the next existential risk after nuclear weapons.

Around $50 million was spent on reducing catastrophic risks from AI in 2020, while billions were spent advancing AI capabilities. While concern from AI experts is growing, the estimates suggest there are still only around 400 people working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging from 200 to 1,000). Of these, roughly three quarters work on technical AI safety research, with the rest split between strategy and governance research and advocacy.
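
To make the scale concrete, here is a minimal back-of-envelope sketch. It only restates the figures above: the ~$50 million safety spend, the ~1,000x ratio, the ~400 researchers, and the three-quarters technical split come from the estimates; the implied ~$50 billion capability spend is an assumption derived from those numbers, not a figure from the source.

```python
# Back-of-envelope check of the figures cited above.
# The ~$50M safety spend and ~1,000x ratio come from the 2020 estimates;
# the implied capability spend is an assumption derived from them.
safety_spend_2020 = 50e6                              # ~$50 million on AI risk reduction
implied_capability_spend = safety_spend_2020 * 1_000  # ~$50 billion ("billions")

safety_researchers = 400   # central estimate (90% CI: 200-1,000)
technical_share = 0.75     # roughly three quarters on technical safety

print(f"Implied capability spend: ${implied_capability_spend / 1e9:.0f}B")
print(f"Technical safety researchers: ~{safety_researchers * technical_share:.0f}")
print(f"Governance/advocacy researchers: ~{safety_researchers * (1 - technical_share):.0f}")
```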

I have no clue what the numbers will be for 2023, but if you are looking for something meaningful to work on, you just found it.
