How Big Tech is cutting corners on safety in the race to develop the next cutting-edge LLM
By the CogX R&I team
June 21, 2024
With AI development advancing at warp speed, is the dream of ethical AI more distant than ever?
In the past few weeks, the race for the next generation of AI has hit a rough patch. Amidst a string of botched tech rollouts and an exodus of safety experts, public trust is eroding faster than ever before.
Last month, Ilya Sutskever and Jan Leike, long-serving OpenAI researchers and leaders of the company's superalignment team, the group responsible for keeping powerful AI systems in check, announced their departures under circumstances that remain murky.
Within days, OpenAI announced a replacement: a significantly smaller safety committee led by CEO Sam Altman himself, alongside board members Adam D'Angelo and Nicole Seligman.
The departures capped a tumultuous period at OpenAI. Since the board's botched attempt to oust Altman last November, at least five other safety-focused employees have left the company, either voluntarily or under pressure.
Leike, in a recent X post, alluded to a "breaking point" with OpenAI's leadership, accusing them of prioritising "shiny products" over safety.
However, the issues surrounding AI safety are not confined to OpenAI’s internal turmoil. The pressure to prioritise short-term gains over long-term safety seems to be industry-wide.
Early last week, social media giant Meta sparked privacy concerns with a new data collection policy, notifying its Facebook and Instagram users that it would soon begin using their information (including photos and posts) to train its AI systems.
But beyond data privacy, a recent wave of fumbled AI rollouts from major tech players including Google, OpenAI, and Meta paints a worrying picture.
Big Tech is after your data
For Meta, data privacy has long been a pain point. The company is burning through cash as it pivots towards AI, and its handling of user data remains under scrutiny.
Its recent plan to train AI tools on public posts and images scraped from Facebook and Instagram sparked outrage among digital rights groups in Europe. Users were understandably concerned about how their information would be used, particularly given what critics describe as Meta's lack of transparency.
Following pushback from regulators, including Ireland's Data Protection Commission (DPC) and the UK's Information Commissioner's Office (ICO), Meta confirmed it would pause plans to train its AI models on data from its users in the EU and UK.
But this isn't an isolated incident. Tech giants are in a mad dash for fresh, multi-format data to fuel their AI projects, including chatbots, image generators, and other flashy AI products.
Mark Zuckerberg, Meta's CEO, even highlighted the importance of their "unique data" as a key element in their AI strategy during a February earnings call.
Similar concerns erupted earlier this week, when Adobe's updated terms of service sparked outrage among artists and content creators. The new terms appeared to grant Adobe free access to user projects for "product innovation", raising fears that the company might use this data for AI development.
Google hasn't been immune to controversy either.
Last summer, reports surfaced that Google's legal department was pushing to expand the scope of user data collection, potentially including content from free versions of Google Docs, Sheets, Slides, and even restaurant reviews on Google Maps.
The quest for bigger, better AI requires vast troves of user data — data that these companies have not always handled with the utmost care. From high-profile breaches to controversial data-sharing practices, Big Tech's track record on privacy is far from spotless.
As these companies push the boundaries of what's possible with AI, they'll need to find a way to balance innovation with ethical data practices, or risk jeopardising both their reputation and the future of AI itself.
A cutthroat race for AI dominance might be behind the recent spate of botched tech rollouts.
Take Google's much-anticipated "AI Overviews", a feature that promised to revolutionise search by answering users' questions directly instead of simply linking to relevant websites. Unfortunately, some of its responses veered wildly off course, citing "geologists" to recommend eating a rock a day and suggesting glue as a pizza topping (yeah, probably not a good idea…).
OpenAI found itself in a similar predicament, reeling from the public outcry over its "Her"-inspired AI assistant that mimicked Scarlett Johansson's character and voice. Creators across industries, already wary of AI's potential to mimic and replace them, were understandably enraged by this blatant appropriation.
Does this highlight a deeper issue?
At their core, popular large language models like ChatGPT and Gemini are designed to generate coherent-sounding answers, not necessarily factual ones: they predict statistically likely sequences of words rather than consult a source of truth.
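To make that concrete, here is a deliberately stripped-down sketch of the generation loop. It is a toy bigram model, not how any production system works, and every word and probability below is invented for illustration; real LLMs condition on far more context with large neural networks, but the basic move, sampling a plausible next token, is the same in spirit.

```python
import random

# A toy bigram "language model": for each word, a probability distribution
# over the next word, learned purely from how often words follow each other
# in training text. Nothing in this table records whether a continuation is
# true, only how likely it is. (All words and probabilities are invented.)
NEXT_TOKEN_PROBS = {
    "glue":     [("is", 0.8), ("sticks", 0.2)],
    "is":       [("a", 0.9), ("not", 0.1)],
    "not":      [("a", 1.0)],
    "a":        [("great", 0.6), ("common", 0.4)],
    "great":    [("pizza", 0.7), ("idea", 0.3)],
    "common":   [("adhesive", 0.6), ("pizza", 0.4)],
    "pizza":    [("topping", 1.0)],
    "topping":  [(".", 1.0)],
    "idea":     [(".", 1.0)],
    "adhesive": [(".", 1.0)],
    "sticks":   [(".", 1.0)],
}

def generate(start, max_tokens=10):
    """Sample one token at a time from the learned distribution.

    This loop is the model's entire "reasoning": pick a plausible next word
    given the previous one. No step checks the output against a source of
    facts, which is why fluent text can still be flatly wrong.
    """
    out = [start]
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(out[-1])
        if not choices:  # "." has no successors, so generation stops there
            break
        words, weights = zip(*choices)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("glue"))  # e.g. "glue is a great pizza topping ." (fluent, not factual)
```

Nothing in that loop checks the output against reality. Fluency falls out of the statistics for free; factuality has to be engineered in afterwards, and that is precisely the step that suffers when products are rushed out of the door.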
We've seen this play out with recent stumbles by Google, OpenAI, and Meta. From Google's AI Overviews to OpenAI's Scarlett Johansson-inspired misstep, these examples show how rushing unreliable AI models to market can backfire spectacularly.
As we teeter on the precipice of an AI-driven future, Big Tech must recalibrate its priorities. The relentless pursuit of AI supremacy shouldn't come at the expense of user trust and ethical principles. Companies need to embrace transparent data practices, fortify their safety measures, and establish genuine channels of communication with both their users and the public at large.
The current course raises unsettling questions: Is the breakneck race for AI advancement worth the potential societal fallout? Can these tech giants pivot towards more ethical and sustainable practices without stifling innovation?
The answers to these existential questions will define the future of AI and its impact on humanity. The onus lies on industry leaders, policymakers, and society as a whole to steer us towards a future grounded in ethical principles.
Public trust in AI hinges on trustworthy tools. However, early attempts have left us wanting.