AI Safety Meetup II: PauseAI’s Joep Meindertsma - register now!
A few weeks ago, we organised our first AI Safety Meetup at Pakhuis de Zwijger, featuring researcher Koen Holtman. The big turnout made for some great engagement!
Now, we are happy to announce that next week, on the 18th of March, at the same place and time, the second edition of our AI Safety Meetup will host PauseAI initiator Joep Meindertsma! Are you ready for an evening all about AI Safety and activism? Register through this link, and be quick: only a few tickets left!
Media (TV) - “We Gaan Er Allemaal Aan”
Presenter Valerio Zeno took a deep dive into the world of AI. According to scientists, world leaders and tech experts, there is currently no greater or more urgent threat to humanity than advanced artificial intelligence. But AI is also a great and helpful part of our daily lives: it helps us work, analyse, observe and select more smartly and efficiently. What if AI becomes too smart? Can it become so dangerous that we have to fear for the fate of the human race?
For the very first episode, ERO founder Otto Barten was interviewed by Zeno for his public television program “We Gaan Er Allemaal Aan” (Dutch for “We’re All Gonna Die”). The result is an accessible and informative introduction to what has come to be known as AI X-risk. Find the episode here (in Dutch)!
Media - Two more podcasts!
In an interview livestreamed on Twitch, presenters Daniel Lippens and Igmar Felicia put the question “Will AI destroy us all?” to Otto Barten and Joep Meindertsma. What followed was a great, in-depth conversation about AI existential risk. Reducing existential risk by informing the public debate! 🎢 The entire talk show can be rewatched here (in Dutch).
In another interview, Otto joined Sanjay Puri on the Regulating AI podcast to discuss the critical issue of artificial general intelligence (AGI) and its potential to pose existential risks to humanity.
In a world racing toward the development of AGI, the balance between innovation and existential risk becomes a pivotal conversation. Otto shared valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.
A take on the recent Gladstone Action Plan
The Gladstone AI Action Plan on AI development is an amazingly well-researched report. It is great to see that it is informing not only policy, but also the public! This is exactly the open, informed debate we need to successfully reduce existential risk.
The report, among other things, argues for increased funding for technical ASI alignment, in line with the solutions proposed by Yudkowsky and Bostrom. However, successful ASI alignment could lead to a single, godlike AI doing whatever its developers have programmed into it, without the rest of the world having any say over their future. Developing ASI, even aligned ASI, without first having a thorough societal debate about whether we want godlike AI at all, and if so, who or what it should be aligned to, is unwise. This is exactly the debate we think should be held in public. If we could be certain that many aligned ASIs would successfully defend us against unaligned ASIs, this picture might change; research should therefore aim to shed light on questions such as this one.
Other news
Hiring - The newly formed EU AI Office published its first round of vacancies. Be sure to check them out and contribute to AI Safety!
Media - The Diplomat recently published a piece arguing that, in order to prevent an AI apocalypse, the world needs to work with China, which is said to have the desire, foundation, and expertise to work with the global community on mitigating catastrophic risks from advanced AI.
Media - Some very worthwhile pieces on AI Safety and recent developments by Jacobin, the New Yorker, and Time.
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organise more and better events, and do more research into the effects of our interventions. We are sincerely grateful for all support, large and small! You can either contact us directly or donate through this link.