

EU's AI Act: A Landmark Regulation Reshaping the Future of Artificial Intelligence



 By the CogX R&I team

July 22, 2024



On 12 July 2024, the EU published its landmark Artificial Intelligence Act (AI Act) in the Official Journal. This legislation, the first of its kind globally, sets the wheels in motion for a comprehensive framework governing AI development, deployment, and use across the bloc.


The AI Act, hailed as the world's first horizontal AI law, comes into force on August 1, 2024, marking the beginning of a new chapter in the global tech landscape. However, the real countdown for tech companies and AI developers begins today, with a series of staggered deadlines that will reshape the AI industry in the coming years.



"The AI Act is a highly impactful piece of legislation that businesses need to respond to now," warns Nils Rauer, an AI regulation expert at Pinsent Masons. The urgency is palpable, with the first set of prohibitions set to take effect on February 2, 2025.


These initial bans target AI applications deemed to pose "unacceptable risk", including systems that use subliminal techniques to manipulate behaviour, exploit vulnerabilities of specific communities or engage in social scoring. Notably, the use of real-time biometric identification systems in public spaces faces severe restrictions, with limited exceptions for law enforcement under strict conditions.


At its core, the AI Act takes a risk-based approach to regulating AI, categorising systems based on their potential impact on society and individual rights. The regulation's scope is also vast, applying to providers placing AI systems on the EU market, deployers within the EU, and even those outside the EU whose AI outputs are used within the bloc.


This extraterritorial reach means that global tech giants and startups alike must align their AI strategies with EU standards or risk being shut out of one of the world's largest markets.


Perhaps most controversially, the Act introduces a special regime for general-purpose AI (GPAI) models, which includes large language models powering popular chatbots. Providers of these models face obligations ranging from technical documentation to cooperation with authorities and respect for copyright laws. Those classified as posing "systemic risk" – a designation that includes models with computational power exceeding 10^25 floating point operations – face additional scrutiny and requirements.
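The systemic-risk designation hinges on a single numeric cutoff. As an illustrative sketch only (the function name and example figures are ours, not part of the Act or any official guidance), the 10^25 FLOP presumption amounts to:

```python
# The AI Act presumes "systemic risk" for general-purpose AI models whose
# cumulative training compute exceeds 10^25 floating point operations.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the Act's
    10^25 FLOP systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# A frontier model trained with ~2 x 10^25 FLOPs would face the extra obligations;
# a smaller model at 5 x 10^24 FLOPs would not be presumed systemic.
print(presumed_systemic_risk(2e25))  # True
print(presumed_systemic_risk(5e24))  # False
```

In practice the Commission can also designate models as systemic on other grounds, so the compute threshold is a presumption rather than the whole test.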




Can the EU still find its place in the AI race?


The newly enacted EU AI Act has sparked fierce debate within the tech industry. Despite planned accommodations for startups and SMEs, notably through regulatory sandboxes, some fear the Act could stifle European AI innovation in the long term.


Compliance costs are also a central concern. Andreas Cleve, CEO of Danish healthcare startup Corti, echoes anxieties shared by many entrepreneurs: "This legislation could become a significant burden for small companies like mine." With compliance costs potentially reaching six figures for companies with 50 employees, questions about global competitiveness are justified, the Financial Times reports.


Adding to these concerns, critics warn that overly regulating foundation models - the powerful AI systems that underpin applications like ChatGPT - could stifle beneficial innovation in this core technology, rather than just addressing the risks associated with specific AI applications.


Despite the pushback, the Act also has strong advocates who believe clear regulations can actually propel innovation. Alex Combessie, CEO of French open-source AI company Giskard, sees the EU AI Act as "a historic moment and a relief". He argues that clear rules, even for complex areas like foundation models, can foster trust and ensure responsible development.




Finding the right balance


For startups and SMEs, the picture is less rosy. Marianne Tordeux Bitker of France Digitale acknowledges the Act's focus on ethics but worries about "substantial obligations" despite planned adjustments. She fears these "regulatory barriers" could benefit competitors and hinder European AI leadership.


The stakes are indeed high. Non-compliance could result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. These figures dwarf even the GDPR's much-publicised fines and highlight the EU's unwavering commitment to enforcing its vision of ethical AI.
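Because the cap is "whichever is higher", the effective maximum fine scales with company size. A back-of-the-envelope sketch (illustrative arithmetic only, not legal advice; the turnover figures are hypothetical):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Maximum penalty for the most serious (prohibited-practice) violations:
    the greater of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion turnover, 7% (EUR 140M) exceeds the flat cap:
print(max_fine_eur(2_000_000_000))  # 140000000.0
# For a small firm, the EUR 35 million floor is the binding figure:
print(max_fine_eur(10_000_000))  # 35000000
```

So for any company with global turnover above EUR 500 million, the percentage-based cap, not the flat EUR 35 million figure, sets the ceiling.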


As the clock ticks towards the various compliance deadlines – with the core provisions taking full effect on August 2, 2026 – the global tech community watches with bated breath. Will the EU's gambit position it as a leader in ethical AI, or will it cede ground to less regulated markets?


The coming years will be crucial in answering this question. As companies scramble to adapt, and EU member states work to implement the necessary oversight mechanisms, the shape of the future AI landscape hangs in the balance.


One thing is clear: the EU AI Act is not just a European affair. Its ripple effects will be felt across the global tech ecosystem, potentially setting a new standard for AI governance worldwide.
