Existential Risk Observatory Newsletter #4
ERO in Time Magazine, an op-ed made with ChatGPT, the next ERO event, and more!
Media - Why Uncontrollable AI Looks More Likely Than Ever (Time Magazine)
Otto Barten and Roman Yampolskiy published a piece in Time Magazine on why uncontrollable AI looks more likely than ever. In recent weeks, many jaws dropped as people witnessed the transformation of AI from a handy but decidedly unscary recommender algorithm into something that at times seemed to act worryingly humanlike. What can be done to reduce the misalignment risks of AGI? A sensible place to start would be for AI tech companies to increase the number of researchers investigating the topic beyond the roughly 100 people available today. Ways to make the technology safe, or to reliably and internationally regulate it, should both be looked into thoroughly and urgently by AI safety researchers, AI governance scholars, and other experts.
Existential Risk Observatory on Tour
One of our very first organised events took place on the 24th of January! In a sold-out Dudok in The Hague, ERO director Otto Barten discussed the risks of human-level AI with journalist and popular science author Bennie Mols.
Both the audience and the speakers were divided on the question of whether AI will outsmart us within 10 years. The one thing everyone seemed to agree on is that we, laypeople and experts alike, should focus on AI risks that are materialising right now instead of worrying only about ones in the far future.
And, of course, that the debate was far too short and that we should take more time to dive into the topic. That is where you are in luck! Please read on below ↓
The next Existential Risk Observatory event
Mark it down in your agendas: on the 12th of April, from 20.00 to 21.30, the next (English-language) debate organised by the Existential Risk Observatory will take place in Pakhuis de Zwijger, Piet Heinkade 179, Amsterdam!
British computer scientist Stuart Russell, known for his widely used textbook Artificial Intelligence: A Modern Approach, is our first confirmed speaker. Central to the debate will be the question: what is the current status of artificial intelligence as an existential risk? Policymakers, academics, and other experts will discuss the best way forward. Keep an eye out for more details to follow on our website!
Media - An AI-generated op-ed (NRC)
Dutch newspaper NRC published another Existential Risk Observatory piece (in Dutch). ChatGPT, the now-famous AI-based chatbot, generated this op-ed, as tasked by and source-fed with the opinions of our founder Otto Barten. The main takeaway? Beware of the blurring boundary between what is real and what is AI-generated. It is essential to remain aware of the limitations of this technology, and to remain in control.
New Research Paper
Our colleague Alexia de Roode Torres Georgiadis recently published a paper on The Effectiveness of AI Existential Risk Communication to the American and Dutch Public. The paper finds that awareness of AI existential risk can be raised successfully among members of the general public using mass media. Raising awareness was significantly more successful for female participants and participants with a bachelor's degree, and video items raised awareness more successfully than newspaper articles. Find a PDF here!
Profile - Sam Bogerd
Hi, I'm Sam Bogerd, a volunteer policy advocate at ERO. For now, I am mostly focused on AI policy in the Netherlands. I hope to help policymakers understand the potential long-term harms and benefits of AI. I have been getting more interested in existential risks since reading The Precipice by Toby Ord, which argues that these risks are both urgent to address and not getting the attention they need.
Other news
The Existential Risk Observatory welcomes the statement by Dutch Minister of Economic Affairs and Climate Micky Adriaansens (full video) as confirmation that she will take AI alignment into account in future EU policymaking, as suggested by Queeny Rajkowski and Lammert van Raan during the first Dutch parliamentary debate on AI.
The New York Times published a significant piece on Bing Chat, with the memorable, ominous quote: “If future AIs gain the ability to rapidly improve themselves without human guidance or intervention, they could potentially wipe out humanity.”