

Is AI really an existential threat?



"The risks from advanced AI could be more challenging than any other global threat that humanity has encountered.”



Guest contributor: Harlan Stewart



Can the same technology that writes our emails and powers our chatbots really pose an existential threat? And as AI systems grow more powerful by the day, are we doing enough to steer them in the right direction?


To shed light on these pressing questions, we spoke with Harlan Stewart, a spokesperson for the Machine Intelligence Research Institute (MIRI).


Harlan gave us some fascinating insights into the current state of AI development, its potential trajectory, and the existential risks keeping some researchers up at night. From the technical challenges of controlling superintelligent systems to the urgent need for global cooperation, our conversation highlighted both the risks a smarter-than-human AI could pose — and the crucial steps we can take to harness this technology responsibly, ensuring it becomes a powerful force for good.



1) What do you view as the biggest risks posed by AI?


Machine learning techniques have allowed the AI industry to make rapid progress towards smarter-than-human AI systems while making very slow progress towards a robust scientific understanding of these systems and how they work internally. Humanity is on a collision course towards building a powerful technology that it can’t control, and human extinction is a likely outcome.


Reinforcement learning from human feedback (RLHF) is the main tool used to steer the behaviour of today’s systems, and it involves large amounts of trial-and-error with human supervision. But if AI progress continues, then someday AI systems will make decisions that are too complex for humans to supervise, and that are too consequential for a trial-and-error approach. There is no known method for safely steering the behaviour of smarter-than-human AI systems.
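For readers who want to see the shape of that trial-and-error loop, here is a minimal, hypothetical sketch of RLHF's first stage: fitting a reward model to pairwise human preferences. This is an illustration only, not any lab's actual pipeline; the embeddings and preference data are random stand-ins so the snippet runs end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 16  # toy size; real systems use a language model's hidden states

# Toy reward model: maps a response embedding to a single scalar score.
reward_model = nn.Sequential(nn.Linear(EMBED_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-in "human feedback": each pair holds (embedding of the response the
# annotator preferred, embedding of the one they rejected). Random here,
# purely so the sketch is self-contained.
pairs = [(torch.randn(EMBED_DIM), torch.randn(EMBED_DIM)) for _ in range(64)]

for epoch in range(5):
    for chosen, rejected in pairs:
        # Bradley-Terry preference loss: push the model to score the
        # human-preferred response above the rejected one.
        margin = reward_model(chosen) - reward_model(rejected)
        loss = -F.logsigmoid(margin).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# In full RLHF, the learned reward model then steers the language model
# (for example via PPO). Every step rests on humans being able to judge
# which response is better: the supervision bottleneck described above.
```

The key point of the sketch is that the whole signal comes from human judgments of which output is better; once a system's decisions outgrow what human annotators can reliably evaluate, this training signal breaks down.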


What will happen if we build an AI system that is smarter than us but not aligned with our interests? There are limits to how well we can predict the behaviour of such a system, but there are two things that we can be fairly sure of. The first is that it will probably consider power and resources to be useful for its goals, because power and resources are both useful for almost any goal a mind could have. The second is that if it is competing with humanity for power and resources, and it is considerably more intelligent than humanity, it will probably win that competition.



2) How does the potential risk of AI compare to other major global threats like climate change, nuclear war, or global pandemics?


When contending with the extinction risk posed by smarter-than-human AI, it might be useful to look at other examples of global threats and how humanity has responded to them. For example, maybe we can learn lessons about international coordination to prevent global threats by looking at the successes or failures of past efforts. How did humanity come together to pull off the Nuclear Non-Proliferation Treaty and the Montreal Protocol? Why was the Kyoto Protocol not more successful?



But the risks from smarter-than-human AI are unique, and playbooks from the past won't always work for mitigating them. Nuclear weapons would be significantly more dangerous if they could outsmart their operators, escape containment, and make copies of themselves. It would be harder to prevent a pandemic if humanity were encountering viruses for the very first time. Persuading society to take action on climate change might be even more difficult if the harms it caused were not visible until it was too late to do anything. The risks from advanced AI could be more challenging than any other global threat that humanity has encountered.



3) Are big tech companies moving too fast with AI development, or is their speed essential for staying ahead of potential threats?


The leading AI companies are moving recklessly fast with AI development. Every year they pour increasing amounts of funding into making AI more powerful, but no one knows when the technology will become powerful enough to cause serious harm, and no one knows how to prevent it from causing serious harm. The AI industry is playing an extremely dangerous game that is profitable in the short term but ultimately lethal.



4) How can research institutions help us proactively address and mitigate AI risks?


The problem of how to steer the behaviour of a smarter-than-human AI system is a technical challenge that could probably be solved eventually, with enough time and effort. But given the current trajectory, it seems unlikely that researchers will make the scientific breakthroughs needed to solve this problem before it is too late.


To make things worse, the world currently lacks the capacity to change this trajectory quickly in the face of an emergency. The situation is not hopeless, though. There might still be time to create the physical and regulatory infrastructure that would be needed to enact a moratorium on frontier AI development. By building this “off switch,” governments may be able to avoid a catastrophe in the future and buy enough time for researchers to develop the technical solutions needed to safely build, and reap the benefits of, a smarter-than-human AI.

