
Will AI take over? A conversation with Jaan Tallinn



In the rapidly evolving landscape of artificial intelligence (AI), few voices carry as much weight as that of Jaan Tallinn. A founding engineer of Skype and a co-founder of the Future of Life Institute (FLI), Tallinn has since pivoted his focus to what he considers one of the most pressing issues of our time: the existential risks posed by advanced AI systems.



Guest contributor: Jaan Tallinn



AI, as we are repeatedly told, is poised to revolutionise our world. However, alongside promises of innovation and technological development, concerns about its potential risks are also mounting.


Lawmakers worldwide caution about potential misuse of AI systems by malicious actors, while AI lab employees voice safety concerns publicly. Even some of the world’s leading AI experts believe that preventing AI-related risks should be a global imperative.


The rapid progress in AI has ignited concerns that extend far beyond the spectre of job losses. We are now grappling with the potential for AI to become a tool for spreading misinformation at an unprecedented scale, influencing our democratic processes, and even outpacing human intelligence in ways we can scarcely imagine. At its most extreme, this anxiety extends to the possibility of advanced superintelligent AI systems posing a fundamental threat to human existence itself.


To shed light on these pressing issues, we engaged in a conversation with Jaan Tallinn about the current state of AI development, its trajectory, and the steps needed to ensure it becomes a force for good rather than an existential threat.


From the technical challenges of aligning superintelligent systems with human values to the urgent need for global cooperation in AI governance, our discussion with Tallinn unveils a roadmap for steering AI's immense potential away from catastrophe and towards a future that benefits all of humanity.



1) Last year, you and several other tech leaders, including Elon Musk and Steve Wozniak, called for a pause in AI development. Given the accelerated pace of AI development since then, what is your assessment of the current situation? Do you believe a pause or slowdown in development is still necessary?


I believe we are racing towards a dangerous precipice with AI development, and part of the danger is that we do not know exactly where the cliff edge is. Companies are racing ahead in developing ever more powerful models, and the rate of advancement in AI capabilities is outpacing both the rate of progress on technical AI safety and our ability to govern this powerful, evolving technology.


In my view, we are rapidly approaching the point where it will be essential for governments both to have oversight of frontier AI development and to have the capability to slow or even shut down certain AI experiments where significant risk exists; for example, a training run larger than any carried out to date. Some proposals I believe have merit include:


  • Requiring proofs of safety for high-risk training runs, using formal mathematics or fault-tree analysis (a rough sketch of how such a check might look follows this list).

  • Having an oversight body or committee broadly representative of the global public that would need to approve an AI project carrying a significant risk.

  • Ensuring that we maintain the ability to gracefully shut down AI technology at a global scale, in case of emergencies caused by AI. 
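
To make the flavour of these proposals concrete, here is a minimal, purely illustrative sketch (in Python, and not a mechanism Tallinn himself specifies) of how a compute-threshold trigger and a simple fault-tree risk bound might be expressed. The FLOP review threshold, the acceptable-risk bound, and the rough "6 × parameters × tokens" training-compute heuristic are all assumptions chosen for the example; a real safety case would be far more involved.

```python
# Purely illustrative sketch: a hypothetical pre-approval gate combining a
# compute threshold with a simple fault-tree risk bound. The threshold, the
# risk bound, and the ~6*N*D FLOP heuristic are assumptions for this example.
from typing import List

FLOP_REVIEW_THRESHOLD = 1e26   # assumed compute level that triggers mandatory review
MAX_ACCEPTABLE_RISK = 1e-4     # assumed upper bound on the estimated failure probability


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * parameters * tokens heuristic."""
    return 6.0 * n_parameters * n_tokens


def fault_tree_or(failure_probabilities: List[float]) -> float:
    """Top-event probability for independent failure modes combined by an OR gate."""
    p_no_failure = 1.0
    for p in failure_probabilities:
        p_no_failure *= (1.0 - p)
    return 1.0 - p_no_failure


def training_run_cleared(n_parameters: float, n_tokens: float,
                         failure_probabilities: List[float]) -> bool:
    """Clear small runs automatically; large runs must show an acceptable risk estimate."""
    if estimated_training_flops(n_parameters, n_tokens) < FLOP_REVIEW_THRESHOLD:
        return True
    return fault_tree_or(failure_probabilities) <= MAX_ACCEPTABLE_RISK


# Hypothetical example: a 1-trillion-parameter model trained on 30 trillion tokens.
if __name__ == "__main__":
    cleared = training_run_cleared(1e12, 3e13, [2e-5, 5e-5, 1e-5])
    print("cleared" if cleared else "requires a stronger safety case")
```

In this sketch, runs below the assumed compute threshold pass automatically, while larger runs must show that the combined probability of their identified failure modes stays under the assumed bound.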



2) You've called for greater public awareness of AI risks. What is the main message or discussion you want to kick-start?


To state it plainly, I believe there is a substantial chance that humanity might destroy itself with AI over the next decade or two, if we do not begin approaching AI development with a great deal more care and caution. I would love to see more people across society and across the world become aware of this issue and develop more informed opinions about it.


It is an issue that will likely affect everyone under the age of 60, and yet these risks are being imposed on all of humanity by a very small number of people and companies who stand to benefit most from the technology. It is tempting to think that there are adults in the room and that these risks will be managed; but at the moment I think it is more the case that our decision-makers are asleep at the wheel. If more people demand the right to understand this risk and speak up about it, then we and our children will all stand a better chance of surviving – and hopefully flourishing – with AI technology.



3) Many new technologies have been developed without any democratic consent, like smartphones or social media. Why are AI technologies different? Do you believe there should be more democratic consent around AI development?


My main argument relates to the severity of the consequences – that unconstrained AI development might lead to human extinction. It should not be the case that every human faces this risk without having any opportunity to have a say in whether they want to take that risk.


Nuclear weapons and fossil fuel technologies were also developed without democratic consent, and have imposed risks and harms on the world at large, but powerful public and civil society movements have built up around them and have influenced governance in meaningful ways. The leaders of the companies aiming to develop AGI speak openly about the transformational impact this technology would have on humanity, and acknowledge the risk of human extinction. If AGI is set to affect humanity to such a great extent, then it is only right that humanity should have a say.



4) You've recently outlined what you believe are 'restrictive' and 'constructive' approaches to AI development. In your view, what distinguishes a good from a bad AI project, and what harmful patterns are you seeing emerge in the industry right now?


I think a good AGI project would have clearly defined risk thresholds, and would not advance beyond a certain point until it was satisfied that risk concerns had been addressed. It would engage actively with external assessors such as the AI Safety Institutes, sharing all the information they need to make meaningful assessments of its processes and models throughout the development process. It would have robust whistleblower channels and an empowered board with external membership, tasked with ensuring that good safety practices are upheld.


However, I think the right focus is not on which specific AI projects are good or bad. Rather, it should be on what a good governance model needs to include. Unless we have governance that meaningfully constrains all projects and ensures that only ‘good’ AI projects can push the frontier of capability development, the chances of catastrophe remain.



5) Focusing on the positive examples, what specific AI applications do you find most promising for the future? What key principles should guide their development to ensure they benefit society?


I am concerned about the aim to achieve artificial general intelligence specifically, but I am optimistic about the application of AI in many areas of life. Two areas I am especially excited about are AI in healthcare and collective intelligence.


AI has tremendous potential to support and improve healthcare in everything from diagnostic support to drug discovery. Moreover, improving healthcare benefits everyone, and thus is less likely to contribute to the sorts of adversarial geopolitical dynamics that AI might engender in other industries. To this end, I’ve recently joined HealthcareAgents as a co-founder.


I’m also excited about how AI can help us work together better and make better decisions – in effect, become collectively superintelligent ourselves. I see a lot of promise in the use of LLMs as tools to help us reach positive-sum agreements and mediate group decision-making. 

The applications of AI I’m most excited about support humans in caring for each other, helping each other, and reaching cooperative goals together. Other principles that I think are important include:


  • Labelling: we should always know when we are interacting with a machine rather than a human being; AI systems should not be able to deceive people into thinking they are interacting with a human (a toy sketch of this idea follows the list).

  • Liability: both the users and developers of AI should be held to account for harms caused by AI, and there should be clear lines of accountability.

  • Guaranteed safety: rigorous safety specifications are a requirement for us to rely on many technologies, such as electricity, food, pharmaceuticals, and vehicles; AI should be no different. As my colleague Stuart Russell says, AI safety should be as integral to AI as bridge safety is to bridge construction.
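
As a toy illustration of the labelling principle above (not something Tallinn prescribes), one way to make machine provenance unmissable is to attach it to every generated message and have the interface disclose it whenever the flag is set. The field names and disclosure wording below are assumptions made for this sketch.

```python
# Purely illustrative sketch of the "labelling" principle: every AI-generated
# message carries explicit provenance, and the rendering layer always discloses it.
# Field names and the disclosure wording are assumptions made for this example.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Message:
    text: str
    ai_generated: bool               # provenance flag set where the text is produced
    model_name: Optional[str] = None


def render(message: Message) -> str:
    """Prepend a visible disclosure whenever the message came from a machine."""
    if message.ai_generated:
        label = "[AI assistant: " + message.model_name + "]" if message.model_name else "[AI assistant]"
        return label + " " + message.text
    return message.text


print(render(Message("Here is a summary of your test results.", ai_generated=True, model_name="example-model")))
```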
