Existential Risk Observatory Newsletter #7
UK AI Safety Summit Events: Save the Dates - 10th and 31st of October! And much more
Save these dates - October 10th and October 31st!
AI Summit Talks: Navigating Existential Risk, Conway Hall London, 10 October, 19.30-22.00
As the world grapples with the challenges and opportunities presented by the rapid advancement of Artificial Intelligence, the UK will host the first major global summit on AI safety. It will bring together key countries, leading tech companies and researchers to agree on safety measures to evaluate and monitor the most significant risks from AI.
What is decided there may directly shape our common future. Much is at stake! A fitting occasion for two (!) events on what this summit on the development of artificial intelligence should produce: what are the best scenarios with regard to existential risks from AI, and what are the worst, for humanity?
Join ERO and Conjecture at London’s historic Conway Hall on October 10th at 19.30 for the first event: an evening featuring a talk by Associate Professor Roman Yampolskiy (University of Louisville) and a panel discussion with Conjecture's Connor Leahy and Andrea Miotti, as well as leading voices from the societal debate, industry, and politics, including investor Jaan Tallinn (co-founder of CSER), economist Alexandra Mousavizadeh, journalist Tom Ough, and ICFG policy analyst Eva Behrens, along with a special message by Sir Robert Buckland MP! The evening will be moderated by David Wood, chair of the London Futurists.
Be quick and book your spot here - only 300 in-person spots are available. Can’t make it? We are working on a livestream and will make recordings available afterwards.
On the 31st of October at 14.00, in the historic surroundings of Bletchley Park, our second AI Safety event will see the return of none other than Prof. Stuart Russell, with several more exciting panellists to be announced soon. Right outside the venue where the actual AI Safety Summit takes place the next day, we will discuss how to safeguard humanity from AI disaster. Stay tuned!
Position Paper: Veilige & Menselijke AI (Dutch)
Dutch General Election season is upon us. Existential Risk Observatory submitted its most recent position paper “Veilige & Menselijke AI” (“Safe & Humane AI”, in Dutch) to all parties willing to take AI x-risk seriously.
Reading the Dutch party programmes available so far, we are hopeful but nowhere near satisfied - AI x-risk should be a much higher priority!
“Dutch Government, take control over AI!”
AI Onder Controle (“AI Under Control”), the petition calling on the Dutch government to be far more proactive about the consequences of applying artificial intelligence, was received by the Commissie Digitale Zaken (Digital Affairs Committee) of the Dutch House of Representatives.
Thank you all for signing and sharing our concerns! We are hopeful that the suggestions from this petition will be taken seriously by the next Dutch Government as well!
EAGxBerlin meetup: Advocacy & Communications
At EAGxBerlin 2023, ERO’s Ruben Dieleman hosted a meetup for professionals in advocacy and communications to exchange ideas and learn from each other. The turnout was much higher than expected: nearly 40 people shared their perspectives on how best to lobby for the most pressing causes. Thank you for attending, and stay tuned for more!
Other news
Opportunity - You can now apply for a placement through Training For Good’s EU Tech Fellowship - an exceptional chance to make a positive impact!
Opportunity - Calling all writers! Pour Demain and the organisers of the Swiss Cyber Security Days launched the AI Safety Prize. Submit your demonstration of vulnerabilities in large language models or specialised ML models. The authors of winning submissions will be invited to the Swiss Cyber Security Days in February 2024.
Media - ERO’s Ruben Dieleman published a letter about why the AI Pause would be a good idea in Dutch newspaper NRC, in reaction to an op-ed by Wim Naudé.
Protest - On October 21st, PauseAI will organise a global wave of protests! You can find out where to join them from your location here.
Donations - Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organise more and better events, and do more research into the effects of our interventions. We sincerely appreciate all support, large and small! You can contact us directly or donate through this link.