Existential Risk Observatory Newsletter #5
ERO in the media, PauseAI, research, AI x-risk hitting mainstream discourse, and another event coming soon!
Media - Is AI a threat to humanity? (De Telegraaf)
Earlier in May, Otto Barten and Kritika Maheshwari took opposite sides to debate this statement in the special column De Kwestie (in Dutch). While both recognized the potential dangers of runaway AI, the former argued these dangers are generally underestimated, while the latter deemed them exaggerated. The reactions to the piece are also noteworthy, and give an impression of the general sentiment among Telegraaf readers towards AI as an existential risk.
ERO on EO and the radio
Otto Barten explained why and how AI could pose an existential risk on Dutch television. Find the episode with the interview from the programme Dit Is De Kwestie (EO) here. He was also a guest on the Dutch radio programme Dit Is De Dag, to talk about the same topic. It’s great to see that interest is increasing (and ERO’s work seems to be paying off!).
PauseAI-protest in Brussels
In front of Microsoft’s headquarters in Brussels, under police supervision, a few representatives of the PauseAI movement took to the streets to protest the continued training of ever-larger AI models. PoliticoEU dedicated an in-depth article to it, as one of the first in a series of organised demonstrations worldwide. Ruben Dieleman represented the Existential Risk Observatory. This Twitter thread from initiator Joep Meindertsma explains why protesting could be useful, and why it is timely.
Research: How existential risk from AI has shifted in people’s perception
Did you see our former intern Alexia’s work on the perception of AI x-risk? Since then, new findings have been published. Learn more about how recent media items have changed people’s minds about AI x-risk, or what the American public thinks of the proposed AI moratorium.
Statements and Open Letters, but what happens next?
Various people and organisations of stature have sounded the alarm about the direction in which AI is developing.
Recently, the Center for AI Safety published a statement signed by OpenAI’s Sam Altman and AI pioneer Geoffrey Hinton, formerly of Google, among many representatives from universities and industry worldwide. It made headlines globally, a sign that AI existential risk may be entering the discursive mainstream. Similarly, the open letter put out by the Future of Life Institute earlier this year, endorsed by the likes of Elon Musk and Steve Wozniak among many others, took the idea of an AGI moratorium out of the realm of fantasy and into the more feasible range of the Overton window.
The question remains: will these efforts translate into increased governance and bona fide policy aimed at safeguarding humanity from harmful kinds of artificial intelligence? And if so, will they come in time, and will they be enough?
The EU’s AI Act
National and intergovernmental bodies worldwide are working on legislation for artificial intelligence. The European Union's AI Act could be the world's first comprehensive legislation governing the technology, with new rules on facial recognition and biometric surveillance, but EU governments and lawmakers still need to agree on a common text. Ahead of an agreement, possible voluntary pacts with companies such as Google and OpenAI are being discussed.
Teaser: ERO’s Next Event, 10th of July
Mark the 10th of July in your calendar: the Existential Risk Observatory will organise a new event in Amsterdam, in the evening. More information will follow soon, but we can already say that it is going to be special!
It was great that so many people showed up to our last event. The lecture by Stuart Russell and the subsequent debate can be found below - the recording has already garnered 10,000 views!
Other news
Our volunteer Anja Sicking published an interesting opinion piece on AI in Dagblad van het Noorden (Dutch).
Opportunity: Co-found an incubator for independent AI Safety researchers together with Alexandra Bos. More details and an application link can be found here.
An interesting short clip from marketer Nik Samoylov about existing narratives surrounding AI safety. More research by him can be found here.
Have you considered donating to the Existential Risk Observatory? Existential risk awareness building is funding-constrained. With additional funding, we could operate in more countries, organize more and better events, and do more research investigating the effects of our interventions. We sincerely appreciate all support, large and small! You can either contact us directly or donate through this link.