Existential Risk Observatory News
ERO's Policy Proposals, Dutch Political Parties' Stances on AI, and much more
Policy Proposals in the aftermath of the AI Safety Summit
The Existential Risk Observatory proposes the following policy measures to be implemented by as many countries as possible, but especially by the US and UK, following the UK AI Safety Summit of 1-2 November 2023.
The 15 proposals are divided into three categories: Safety, Democracy & Openness, and Governance. These measures aim first and foremost to reduce human extinction risk, and second to promote the democratic development of AGI and superintelligence, should AI safety be assured in the future.
We would like to acknowledge earlier proposals by FLI, StopAI, PauseAI, Jaan Tallinn, and David Dalrymple that have partially inspired ours. For a more elaborate explanation of the proposals, click on the list below.
Publicly fund AI Safety research (but do not purchase hardware)
Recognize AI extinction risk and communicate this to the public
Make the AI Safety Summit the start of a democratic and inclusive process
Demand a reversibility guarantee for increases in frontier AI capabilities
Dutch General Elections Voting Guide
The Dutch General Elections are upon us! During the next governmental cycle, AI is likely to become much more capable than it is right now. This marks a historic, fundamental change for society, with many possible risk scenarios to take into account.
Want to know which political parties are most mindful of AI x-risk, and which less so? The Existential Risk Observatory investigated the election manifestos for you! Check out our "stemwijzer" (voting guide, in Dutch) here!
Other news
Hiring - Want to make an impact in a developing policy field? Check out PourDemain's vacancy for an Advocate for Safe Artificial Intelligence in the EU.
Media - ERO’s Ruben Dieleman appeared on Type One Planet’s podcast to talk about AI X-risk and our organisation’s mission.
News - The EU AI Act is still under negotiation, and recent developments do not bode well for AI Safety.
News - You have probably heard about what happened at OpenAI in recent days. Here is an explainer by NRC journalist Stijn Bronzwaer (in Dutch). Here is our take on the developments:
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organize more and better events, and do more research investigating the effects of our interventions. We are sincerely happy with all support, both large and small! You can either directly contact us or donate through this link.