
Is AI really an existential threat?



"The risks from advanced AI could be more challenging than any other global threat that humanity has encountered.”



Guest contributor: Harlan Stewart



Can the same technology that writes our emails and powers our chatbots really pose an existential threat? And as AI systems grow more powerful by the day, are we doing enough to steer them in the right direction?


To shed light on these pressing questions, we spoke with Harlan Stewart, a spokesperson for the Machine Intelligence Research Institute (MIRI).


Harlan gave us some fascinating insights into the current state of AI development, its potential trajectory, and the existential risks keeping some researchers up at night. From the technical challenges of controlling superintelligent systems to the urgent need for global cooperation, our conversation highlighted both the risks a smarter-than-human AI could pose and the crucial steps we can take to harness this technology responsibly, ensuring it becomes a powerful force for good.



1) What do you view as the biggest risks posed by AI?


Machine learning techniques have allowed the AI industry to make rapid progress towards smarter-than-human AI systems while making very slow progress towards a robust scientific understanding of these systems and how they work internally. Humanity is on a collision course towards building a powerful technology that it can’t control, and human extinction is a likely outcome.


Reinforcement learning from human feedback (RLHF) is the main tool used to steer the behaviour of today’s systems, and it involves large amounts of trial and error under human supervision. But if AI progress continues, then someday AI systems will make decisions that are too complex for humans to supervise, and that are too consequential for a trial-and-error approach. There is no known method for safely steering the behaviour of smarter-than-human AI systems.
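For readers who haven’t met RLHF before, here is a deliberately minimal toy sketch in Python of the loop Harlan describes. It is an illustration only, not MIRI’s code or any production pipeline: the responses, their hidden qualities, the simulated labeller, and all learning rates are invented. A reward model is fitted to noisy pairwise human preferences (a Bradley-Terry-style objective), and a softmax policy is then nudged toward the learned reward.

```python
# Toy sketch of the RLHF loop: fit a reward model to noisy pairwise human
# preferences, then push a policy toward the learned reward. Everything here
# (responses, qualities, learning rates) is invented for illustration.
import math
import random

random.seed(0)

# Hypothetical responses, each with a hidden quality only the labeller perceives.
RESPONSES = ["helpful", "verbose", "evasive", "harmful"]
TRUE_QUALITY = {"helpful": 2.0, "verbose": 0.5, "evasive": -0.5, "harmful": -2.0}

def human_prefers(a: str, b: str) -> bool:
    """Simulated labeller: prefers a over b with logistic noise."""
    gap = TRUE_QUALITY[a] - TRUE_QUALITY[b]
    return random.random() < 1 / (1 + math.exp(-gap))

# Step 1: fit one reward score per response to pairwise preferences,
# ascending the log-likelihood of a Bradley-Terry model.
reward = {r: 0.0 for r in RESPONSES}
for _ in range(2000):
    a, b = random.sample(RESPONSES, 2)
    if not human_prefers(a, b):
        a, b = b, a                    # make `a` the preferred response
    p = 1 / (1 + math.exp(-(reward[a] - reward[b])))
    reward[a] += 0.05 * (1 - p)        # gradient of log sigmoid(r_a - r_b)
    reward[b] -= 0.05 * (1 - p)

# Step 2: nudge a softmax policy toward the learned reward
# (a stand-in for the RL fine-tuning step in real pipelines).
logits = {r: 0.0 for r in RESPONSES}
for _ in range(200):
    z = sum(math.exp(v) for v in logits.values())
    probs = {r: math.exp(v) / z for r, v in logits.items()}
    baseline = sum(probs[r] * reward[r] for r in RESPONSES)
    for r in RESPONSES:
        logits[r] += 0.1 * probs[r] * (reward[r] - baseline)

z = sum(math.exp(v) for v in logits.values())
for r in RESPONSES:
    print(f"{r:8s} reward={reward[r]:+.2f} policy_prob={math.exp(logits[r]) / z:.2f}")
```

The point of the toy is the shape of the loop: every signal the policy receives is filtered through human judgments, which is exactly the supervision bottleneck Harlan highlights.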


What will happen if we build an AI system that is smarter than us but not aligned with our interests? There are limits to how well we can predict the behaviour of such a system, but there are two things that we can be fairly sure of. The first is that it will probably consider power and resources to be useful for its goals, because power and resources are both useful for almost any goal a mind could have. The second is that if it is competing with humanity for power and resources, and it is considerably more intelligent than humanity, it will probably win that competition.
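Harlan’s first point is often called instrumental convergence. The toy Python sketch below, again purely illustrative and built entirely on invented numbers, samples thousands of random goals, each reduced to a baseline probability of success, and checks how often a plan that first acquires extra resources beats pursuing the goal directly.

```python
# Toy illustration of instrumental convergence: every number here is invented.
# A "goal" is reduced to a baseline success probability; acquiring resources
# first costs a little but raises the odds of success for any goal.
import random

random.seed(1)

RESOURCE_BOOST = 0.4  # assumed: resources raise any goal's success probability
ACQUIRE_COST = 0.05   # assumed: small upfront cost of gathering resources

trials = 10_000
convergent = 0
for _ in range(trials):
    p_direct = random.random()                      # random goal difficulty
    p_resourced = min(1.0, p_direct + RESOURCE_BOOST)
    # Expected value of each plan, where achieving the goal pays 1.
    if p_resourced - ACQUIRE_COST > p_direct:
        convergent += 1

print(f"{convergent / trials:.0%} of random goals favour acquiring resources first")
```

On these assumptions, roughly 95% of the sampled goals favour grabbing resources first; the share only falls for goals that are already nearly certain to succeed, which is the intuition behind Harlan’s first claim.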



2) How does the potential risk of AI compare to other major global threats like climate change, nuclear war, or global pandemics?


When contending with the extinction risk posed by smarter-than-human AI, it might be useful to look at other examples of global threats and how humanity has responded to them. For example, maybe we can learn lessons about international coordination to prevent global threats by looking at the successes or failures of past efforts. How did humanity come together to pull off the Nuclear Non-Proliferation Treaty and the Montreal Protocol? Why was the Kyoto Protocol not more successful?



But the risks from smarter-than-human AI are unique, and playbooks from the past won’t always work for mitigating them. Nuclear weapons would be significantly more dangerous if they could outsmart their operators, escape containment, and make copies of themselves. It would be harder to prevent a pandemic if humanity were encountering viruses for the very first time. Persuading society to take action on climate change might be even more difficult if the harms it causes were not visible until it was too late to do anything. The risks from advanced AI could be more challenging than any other global threat that humanity has encountered.



3) Are big tech companies moving too fast with AI development, or is their speed essential for staying ahead of potential threats?


The leading AI companies are moving recklessly fast with AI development. Every year they pour increasing amounts of funding into making AI more powerful, but no one knows when the technology will become powerful enough to cause serious harm, and no one knows how to prevent it from causing serious harm. The AI industry is playing an extremely dangerous game that is profitable in the short term but ultimately lethal.



4) How can research institutions help us proactively address and mitigate AI risks?


The problem of how to steer the behaviour of a smarter-than-human AI system is a technical challenge that could probably be solved eventually, with enough time and effort. But given the current trajectory, it seems unlikely that researchers will make the scientific breakthroughs needed to solve this problem before it is too late.


To make things worse, the world does not currently have the capacity to change this trajectory quickly, even in the face of an emergency. The situation is not hopeless, though. There might still be time to create the physical and regulatory infrastructure that would be needed to enact a moratorium on frontier AI development. By building this “off switch,” governments may be able to avoid a catastrophe in the future and buy enough time for researchers to develop the technical solutions needed to safely build and reap the benefits of a smarter-than-human AI.


Popular Articles

1. EU's AI Act: A Landmark Regulation Reshaping the Future of Artificial Intelligence

2. Are AI’s energy demands spiralling out of control?

3. Big Tech is prioritising speed over AI safety

4. Who are the AI power users, and how to become one

5. Unmasking the coded gaze: Dr. Joy Buolamwini's fight for fair AI


Related Articles

OpenAI's o1 model has been hailed as a breakthrough in AI

Issue 35

Just days ago, the AI world was buzzing with the announcement of OpenAI's secretive "Strawberry" project. Now known as the o1 model, this AI powerhouse has shattered benchmarks. But does it live up to the hype?

Will AI take over: A conversation with Jaan Tallinn

Issue 34

AI pioneer Jaan Tallinn, founding engineer of Skype and co-founder of the Future of Life Institute, shares his insights on AI's potential dangers — and how we can mitigate them.

DeepMind's AlphaProteo AI is outpacing years of scientific research

Issue 33

Designing proteins from scratch has long been a scientific puzzle. Now, Google DeepMind believes it's one step closer to solving this problem.

1X Technologies’ new humanoid robot NEO is seriously impressive

Issue 32

1X Technologies just unveiled NEO Beta, a humanoid robot designed to be your personal home assistant, and they're gearing up for pilot deployments in select homes this year.

Can AI-generated images be copyrighted?

Issue 31

Content creation with the help of computers is nothing new. What has changed is the extent to which machines can now contribute to our creative process.

California's push to regulate AI has Silicon Valley on edge

Issue 30

“Move fast and break things” has long been Silicon Valley's mantra. But when it comes to AI regulation, California lawmakers are hitting the brakes — and not everyone's happy about it.