
7 OCTOBER | LONDON 2024

SEPTEMBER 12TH - 14TH
The O2, LONDON

California's push to regulate AI has Silicon Valley on edge.



"Move fast and break things" has long been Silicon Valley's mantra. But when it comes to AI regulation, California lawmakers are hitting the brakes — and not everyone's happy about it.



 By the CogX R&I team

August 29, 2024



Two bills sit at the centre of the fight: AB 3211, which would require "watermarking" of AI-generated content, and SB 1047, which mandates safety testing for large AI models. Tech heavyweights like OpenAI, Anthropic and even Elon Musk have taken unexpected sides – and their stances are only adding fuel to an already heated debate.

 

Is OpenAI playing both sides of the fence?

  • The ChatGPT developer is backing AB 3211 for AI content watermarking, a measure that could help combat the spread of harmful AI content, including deepfakes used for political misinformation.

  • However, in a statement given last week, OpenAI opposed SB 1047's safety-testing mandate for large models, claiming it would “stifle innovation and drive talent from California” – a view that bill author Senator Wiener dismissed as "nonsensical".


The company faces internal criticism: Former OpenAI researchers Daniel Kokotajlo and William Saunders, who resigned earlier this year due to safety concerns, have publicly criticised their former company's stance on AI regulation. The pair accused OpenAI of hypocrisy, pointing out that CEO Sam Altman has repeatedly called for AI regulation, only to oppose it when it's on the table.

 

But the landscape is more complex than it seems: Anthropic, an OpenAI rival, has taken a nuanced stance on SB 1047. After suggesting amendments, some of which were incorporated, CEO Dario Amodei now believes the bill's "benefits likely outweigh its costs". Meanwhile, Elon Musk has thrown his support behind SB 1047, despite his own AI company potentially being affected.





The stakes are high: 65 AI-related bills have already been introduced in California this legislative session. With major elections looming and AI-generated content already making waves, the outcome of these bills could shape the future of AI regulation far beyond California's borders.


 

Now read the rest of the CogX Newsletter


Will AI become a $1 trillion bubble?


 

As tech giants pour billions into AI development, investors are questioning whether this massive bet will pay off.



By the CogX R&I team

 

In tech, there's a long history of boom and bust. From national railways to telecom networks, the story often unfolds in a similar fashion: massive capital investments, promises of transformative change, and the potential for both immense rewards and significant risks for investors. The current AI frenzy is no exception.


While plenty of technologies claim to be 'world-changing', there's no denying that AI seems to be in a league of its own – at least according to most tech CEOs and evangelists, who view its development as every bit as revolutionary as the invention of fire or the steam engine. Whatever you make of the hype, there's no escaping the fact that investor FOMO is at an all-time high. And big money is certainly flowing into the sector.


OpenAI, the poster child of this world, is now valued at $80 billion, triple its worth from a year ago. Rivals like Anthropic and Mistral are seeing similar growth. Even Elon Musk's AI venture, xAI, has secured $6 billion to create advanced chatbots.


But as investments soar, so do the costs. Bigger AI models are more expensive to create and run, eating up vast amounts of computing power and energy.


So, how much money will the tech sector need to keep up with the hype? Recent industry estimates suggest a staggering $1 trillion investment in the coming years.


This eye-popping sum is what big tech firms, corporations, and even utility companies plan to spend to support AI. Tech behemoths like Microsoft and Google have already poured hundreds of billions into building this AI infrastructure. And while that spending has driven much of the S&P 500’s gains in recent months, the money eventually has to be recouped. Failure to do so could lead to a significant market correction or even a broader economic downturn.


… want to keep reading? Check out the full OpEd here on the CogX Blog



How to use Cursor, the ‘Google Doc’ for programmers




Sometimes, an AI tool emerges from obscurity and quickly captures the attention of users and developers alike. This week, the app dominating social media is Cursor, an AI coding tool that harnesses the power of models like Claude 3.5 Sonnet and GPT-4o to simplify the app development process. 


This is how to build an app in minutes using Cursor:


  1. Create a New Project: Start by creating a new project in Cursor. You can choose from various templates or start from scratch.

  2. Describe Your App: Use natural language to describe the app you want to build. For example, "Create a to-do list app with a user-friendly interface."

  3. Generate Code: Cursor will generate the initial code based on your description.

  4. Refine and Customise: Review the generated code and make any necessary adjustments. You can also ask Cursor to explain specific parts of the code or suggest improvements.

  5. Test and Iterate: Run your app to test its functionality and make any needed changes. You can continue to refine your app by providing additional instructions or asking Cursor for suggestions.
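What Cursor hands back in step 3 depends entirely on your prompt. As a rough illustration only — this is a hypothetical sketch of the kind of starting point an AI assistant might produce for the "to-do list app" prompt in step 2, not Cursor's actual output — the core logic could look something like this:

```python
# Minimal to-do list logic -- a hypothetical sketch of generated starter code,
# before any of the step-4 refinements (UI, persistence, etc.).

class TodoList:
    def __init__(self):
        self.tasks = []  # each task is {"title": str, "done": bool}

    def add(self, title):
        """Append a new, not-yet-completed task."""
        self.tasks.append({"title": title, "done": False})

    def complete(self, title):
        """Mark the first task with a matching title as done.

        Returns True if a task was found, False otherwise.
        """
        for task in self.tasks:
            if task["title"] == title:
                task["done"] = True
                return True
        return False

    def pending(self):
        """Titles of tasks that are still open."""
        return [t["title"] for t in self.tasks if not t["done"]]


if __name__ == "__main__":
    todo = TodoList()
    todo.add("Buy milk")
    todo.add("Write newsletter")
    todo.complete("Buy milk")
    print(todo.pending())  # -> ['Write newsletter']
```

From a skeleton like this, steps 4 and 5 are where Cursor earns its keep: you can ask it in natural language to explain a method, add persistence, or wire the class to an interface, then re-run and iterate.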



An AI dilemma: 



Forget Hollywood's dystopian visions of robot uprisings. The real danger of AI, according to historian Yuval Noah Harari, lies in less dramatic threats. In an exclusive excerpt from his new book, Harari warns of the potential for AI to manipulate and control human behaviour on a massive scale.


 

Also in the news


In the world of coding, time is a precious resource: Amazon CEO Andy Jassy claims AI coding assistant Q has saved the company $260 million and 4,500 developer-years (yes, years). This was reportedly achieved by significantly reducing software upgrade times, allowing programmers to complete in hours tasks that once took an average of 50 workdays.


Authors file lawsuit against Anthropic: The lawsuit alleges Anthropic used a dataset called ‘Books3’, containing nearly 200,000 pirated ebooks, to train Claude without permission. The authors claim Anthropic's actions were illegal and harmful to their livelihoods. Anthropic has not commented on the allegations.

 

Perplexity AI to start running ads in Q4: Despite plagiarism controversies, Perplexity has raised over $1 billion in funding and seen significant user growth. The company has launched a revenue-sharing model with publishers and is now expanding into advertising.


DeepMind workers protest Google's military contracts: A letter, dated May 16 and signed by over 200 DeepMind employees, expressed concern about Google's involvement with military organisations – particularly its contracts with the Israeli military, Time reports.



 

In case you missed it


A creative director used Luma AI's Dream Machine 1.5 to create a remarkably realistic timelapse of a 1-year-old child ageing into an elderly woman.





Popular Articles

1. EU's AI Act: A Landmark Regulation Reshaping the Future of Artificial Intelligence

2. Are AI’s energy demands spiralling out of control?

3. Big Tech is prioritising speed over AI safety

4. Who are the AI power users, and how to become one

5. Unmasking the coded gaze: Dr. Joy Buolamwini's fight for fair AI


Related Articles

Machine learning is not “just” statistics (Issue 24)

In this OpEd, Anil Ananthaswamy, an award-winning science writer and acclaimed author of “Why Machines Learn: The Elegant Maths Behind Modern AI”, challenges the oversimplified narrative that reduces ML to "glorified statistics".

EU's AI Act: A Landmark Regulation Reshaping the Future of Artificial Intelligence (Issue 22)

On 12 July 2024, the EU published its landmark Artificial Intelligence Act (AI Act), ushering in a comprehensive framework for AI safety across the bloc.

Are AI’s energy demands spiralling out of control? (Issue 21)

You've likely seen the alarming headlines... AI guzzling electricity like entire countries and warnings about overloaded power grids.

AI can revolutionise the way we play, watch, and understand sports (Issue 20)

As artificial intelligence infiltrates sports arenas and analytics, a new breed of tech is giving glimpses into a data-driven future.

Who are the AI power users, and how to become one (Issue 19)

2024 is the year AI in the office becomes a reality, and a new breed of employee is leading the charge.

Silicon Valley Sets its Sights on Curing Diseases with AI (Issue 18)

Bringing a new drug to market can take a decade and cost billions. But recent AI advancements promise to slash this timeline and cost dramatically.