Institute That Pioneered AI ‘Existential Risk’ Research Shuts Down

The Future of Humanity Institute, which along with philosopher Nick Bostrom taught the world to fear existential AI risks, is finished.
Toby Ord, Andrew Snyder-Beattie, Nick Bostrom, Cecilia Tilli, Anders Sandberg and Stuart Armstrong in FHI's James Martin seminar room.

The Future of Humanity Institute (FHI), a nearly two-decade-old organization focused on researching and mitigating the existential risks posed by artificial intelligence, has shut down.

“Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home [at Oxford University]),” a post on the organization's website announcing its closure says. “Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.”

FHI was established at Oxford University in 2005 by Swedish philosopher Nick Bostrom, and received funding from Elon Musk, the European Research Council, the Future of Life Institute, and others.

“During its 19-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms,” the post on The Future of Humanity Institute website says. “FHI was involved in the germination of a wide range of ideas including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty.”
