Existential Risk Observatory Newsletter #6
ERO in TIME, De Zwijger Special on AI x-risk, and much more
Media - TIME: "An AI Pause Is Humanity's Best Bet For Preventing Extinction"
Last month, for the second time, Otto Barten represented the Existential Risk Observatory in TIME Magazine, arguing that the proposed AI Pause may be the most radical and most effective way to combat existential risks from artificial intelligence:
“When the full weight of our situation sinks in, measures that may appear unrealistic at present, could rapidly gain support. (…) And, sooner than we think, we will implement an AI Pause.”
Find some comments from the readers here.
Event: De Zwijger Special
As you may have noticed, we organised another event on AI x-risk recently, in Pakhuis de Zwijger in Amsterdam. There was some lively discussion among the speakers and the audience on what it is, what it is not, and what can be done about it - also by people who are far outside the corridors of power.
Couldn’t make it to the event? Please find the full recording on YouTube.
Media - Podcasts: Rudi&Freddie Show and AI Verkenners (Dutch)
Existential Risk Observatory was recently hosted by two different podcasts: De Rudi & Freddie Show and AI Verkenners, both in Dutch.
The hosts of the former are Jesse Frederik (“Freddie”) and Rutger Bregman (“Rudi”), the latter of whom people outside the Netherlands may know from his book Humankind. They had some tough, legitimate questions for Otto. Find the recording of the episode here!
Next up is AI Verkenners, run by Peter van Aalderen and Kevin Ike. Together they explore artificial intelligence in all its aspects, including its less-than-pretty sides, as discussed by our campaign manager Ruben Dieleman. Listen to the episode in its entirety here.
PauseAI Protest, August 11 in The Hague
Are you concerned about the development of powerful AI? Do you want to do something about it, but don’t know what? Here’s an idea: join the PauseAI protest on August 11 in The Hague.
For more details, check out the website and register. It is the very first protest of its kind in the Netherlands - you could be part of history!
AI x-risk: Reactions from Antonio Guterres and Mark Rutte
A significant voice joins the discussion: recently, United Nations Secretary-General António Guterres recognized that AI could lead to human extinction. Earlier this year, in a similar vein, UK Prime Minister Rishi Sunak dedicated a Twitter thread to AI risks. By contrast, in a reaction to Guterres’ statement, Dutch Prime Minister Mark Rutte admitted he was not aware of these risks: “We are going to look into it”.
All the more reason to inform and put pressure on our leaders to work together in order to combat AI x-risk!
Other news
Hiring - Training For Good has an exciting vacancy for an AI Programme Lead! Learn more through this link.
Media - Author Anja Sicking published a piece in Parool arguing that “the arts could help explore a not (yet!) existing reality and help us understand AI better”. It can be read here.
Media - Mark Thiessen and Kees Verhoeven, with whom ERO previously teamed up for Control AI, wrote an exciting piece in Dutch newspaper NRC.
Petition - ERO is known for its focus on AI x-risk, but Fossielvrij NL is doing great work in ending fossil fuel subsidies, an obvious low-hanging-fruit policy measure. We therefore support their open letter.
Media - Thrilling news from the US: the Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI.
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organize more and better events, and do more research investigating the effects of our interventions. We sincerely appreciate all support, large and small! You can either contact us directly or donate through this link.