Event - AI Safety Summit Talks with Yoshua Bengio and panel
The second edition of the AI Safety Summit took place in South Korea last month. The importance of continuing this platform cannot be overstated: many leading scientists are worried that AI could pose an existential risk to humanity. Not for nothing did the participating state representatives release a joint statement.
Unfortunately, these summits still take place behind closed doors, meaning citizens cannot verify how the existential risks that AI imposes upon them are being reduced. In contrast, our AI Safety Summit Talks are open to the general public, policymakers, and journalists. At our events, we discuss the largest risks posed by future AI and how to reduce them.
The latest edition on the 21st of May featured “godfather of AI” Yoshua Bengio, as well as several other reputable speakers, moderated by the amazing David Wood. Find the full recording of our latest AI Safety Summit Talks here! You can also read Otto Barten’s poignant closing remarks here.
Event - De Zwijger Special - “The future of AI: too much to handle?” featuring Roman Yampolskiy
The next event, following shortly after the AI Safety Summit Talks, was a Pakhuis de Zwijger Special on the 6th of June. The future of AI will be a determining factor of our century. For anyone wanting to understand future AI's enormous consequences for the Netherlands and the world, Pakhuis de Zwijger was the place to be!
None other than Roman Yampolskiy (University of Louisville) discussed whether superhuman AI can be controlled. The implications of his findings for AI development, AI governance, and society were then taken up by a panel consisting of philosopher Simon Friederich (Rijksuniversiteit Groningen), parliamentarians Jesse Six Dijkstra (NSC), Queeny Rajkowski (VVD), and Marieke Koekkoek (Volt), as well as policy officer Lisa Gotoh (Ministry of Foreign Affairs) and AI PhD researcher Tim Bakker (UvA). The discussion was moderated by Maarten Gehem.
News - Merger with CAIS
Recently, the Existential Risk Observatory merged with Campaign for AI Safety, an organisation geared towards boosting public understanding of AI safety and advocating strong laws to stop big tech from building dangerous and overly powerful AI. Its founder, Nik Samoylov, will remain part of the organisation. We look forward to working closely together to prevent AI doom.
News - CFO Day
Otto Barten recently spoke at CFO Day about AI's existential risks, to an audience of CFOs of major Dutch companies. Great to be able to spread an important message among such a nice group of people!
Other news
Media - A great piece by Émile P. Torres on #TeamHuman vs #TeamPosthuman can be read here.
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organize more and better events, and do more research into the effects of our interventions. We are sincerely grateful for all support, large and small! You can either contact us directly or donate through this link.