Existential Risk Observatory News
AI Safety Meetup featuring Koen Holtman, Funding, and much more
Announcing: AI Safety Meetup featuring AI Safety researcher Koen Holtman
20 February, 19.30-21.30, Pakhuis de Zwijger
What can we expect from the EU AI Act? (How) will it help us contain existential risks from AI? Join us for an exciting in-person event where we dive deep into the fascinating world of AI safety, together with systems architect and independent AI safety researcher Koen Holtman!
The Existential Risk Observatory brings together experts, enthusiasts, and curious minds to discuss the potential risks and challenges associated with artificial intelligence.
Engage in thought-provoking conversations exploring the development and deployment of AI technologies with us! Don't miss this opportunity to be part of the AI safety community and contribute to shaping a secure future with artificial intelligence.
This event will consist of about one hour of talks, discussions, and a Q&A, and about one hour of social drinks afterwards.
Sign up through the link: https://www.eventbrite.com/e/ai-safety-meetup-ft-koen-holtman-hosted-by-existential-risk-observatory-tickets-814087787487
Post-AI Safety Summit Networking Event featuring FLI's Risto Uuk
In late December, we organised our last event of the year, hosting Risto Uuk from the Future of Life Institute at Dudok Den Haag.
Our conversations spanned the EU AI Act and the Bletchley Declaration, and of course what could and should happen next for the benefit of #AISafety. Some very lively discussions will have to be continued the next time we meet, which is fortunately very soon!
Thanks for being with us in 2023, and be sure to join us again this year!
New funds for Existential Risk Observatory!
Existential Risk Observatory is honoured to have received funding from the Long-Term Future Fund! Alongside this grant, the Observatory has also received an extremely generous contribution from an individual donor.
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, it seeks to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Existential Risk Observatory would like to express our immense gratitude for this support, and we will continue our work to reduce existential risk worldwide with even greater determination!
Other news
After the discovery last week of deepfakes of singer Taylor Swift, supercharged by AI, the call for regulation has become all the more prominent.
The aim of Mark Zuckerberg's Meta to build AGI was not met with unanimous applause. Instead, a few experts pointed out that it would be "irresponsible" to open-source tools on par with human intelligence.
Is the potentially existential threat posed by advanced AI breaking into the general consciousness? The Bulletin of the Atomic Scientists has set its Doomsday Clock at 90 seconds to midnight, the closest humanity has ever come.
A little droplet of positive news in an otherwise rather ominous series of developments: The White House signaled that global co-operation on AI Safety is the way to go.
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is unfortunately still funding-constrained. With additional funding, we could operate in more countries, organise more and better events, and do more research investigating the effects of our interventions. We are sincerely grateful for all support, large and small! You can either contact us directly or donate through this link.