Existential Risk Observatory Newsletter #2
4th Quarter 2022 - New Campaign Lead, Funding from the Dreamery Foundation, And More!
Profile: Eefje van Esch
I am the new Campaign Lead for the Netherlands at the Existential Risk Observatory. Personally, I am quite a layman when it comes to risks of human extinction, but I am excited to discover all the information there is and to help others understand the issue in an accessible way, especially since these matters can be somewhat complicated. Informing others, raising awareness about these issues, and thereby starting a discussion is genuinely (and maybe slightly weird!) what I like doing, especially since this will impact everybody, regardless of who you are. I am super excited to introduce our campaigns to you in a later newsletter, but in the meantime, don't hesitate to reach out to us with your ideas or questions!
Funding from Steven Schuurman’s Dreamery Foundation
Billionaire philanthropist Steven Schuurman, known for his sizable donations to Dutch political parties D66 and GroenLinks, is the chair of the Dreamery Foundation, which aims to promote the preservation of the Earth’s livability for future generations of humans and animals. The Existential Risk Observatory is thrilled to announce it has received funding from this foundation to continue its current mission and expand its work!
Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs?
There are currently 72 million- or billion-dollar projects around the world focused on developing human-level AI, also known as AGI: an AI that can perform any cognitive task at least as well as humans can. A democratic society should not let tech CEOs determine the future of humanity without regard for ethics or safety.
We have to reach a consensus on whether human-level AI indeed poses an existential threat to humanity, as most AI safety and existential risk academics say. And we have to find out what to do about it; some form of regulation seems inevitable. The fact that we don't yet know what manner of regulation would effectively reduce risk should not be a reason for regulators to ignore the issue, but rather a reason to develop effective regulation with the highest priority.
Not doing anything, and thus letting CEOs like Zuckerberg determine the future for all of us, could very well lead to disaster. Read our first piece on this worrisome development, published on the media platform Salon.
ERO’s Position Paper for the AI Roundtable Meeting (Dutch)
On the 13th of September, the Dutch Parliament gathered for a Roundtable Meeting on the dangers of artificial intelligence. The development of new AI is taking off. Existential Risk Observatory's Otto Barten and Sam Bogerd worry about the negative effects of powerful future artificial intelligence, and they published a position paper (in Dutch) to set the stage and inform the participants of the aforementioned meeting about the three main points of attention for a to-be-established Committee for Digital Affairs. The Dutch Government should:
Account for risks with a small chance of happening but a very significant impact in its national risk analysis, and communicate these risks to the Dutch public,
Prepare to implement solutions when these become available,
Finance research into existential risks and AI alignment, for example by establishing a Research Institute for Existential Risks.
Other news
Last call! WE ARE STILL LOOKING FOR AN INTERN!
EAGxRotterdam, Effective Altruism’s global conference, will take place in Rotterdam on the 4th, 5th, and 6th of November. Apply now!
Otto Barten made an appearance on the podcast La Prospective by the media platform The Flares.
ERO’s Ben Bucknall published a new paper on near-term AI as a potential existential risk factor.
A new piece on our website about the development of AI, with or without sentience.
Our talk at the Koninklijke Industrieele Groote Club led to some new insights.
Lastly, some food for thought:
My current response to 'existential risk' comes from a different point of view, namely semantic 'categorisation' and behaviour. Human communication has its representation in speech, and speech technology is experiencing a boost with GPT/LLM development. On my 'resurrected' website you'll find my train of thought on the matter and a recent presentation, combined with an earlier, pre-ChatGPT one. See: https://divorytaur.com/presentation-lithme-2023/. You can contact me at faye@divorytaur.com for further communication.