
The Future of AI Cannot Be a Race to the Bottom


Guest contributor: Wendy Gonzalez, CEO, Sama



It’s no secret that AI is already a major part of the global economy—the question isn’t if AI will grow, it’s how much AI will grow. One projection has the market reaching over $800 billion by 2030. With so much money to be made, it can be tempting for companies to pursue the lowest-cost options for their AI supply chains—maximizing their own profits. 


But AI itself is a world-changing technology that presents an opportunity. It can completely reshape our world economy and allow capital to flow to places that have been traditionally left behind by tech booms. My company, Sama, was founded on the principle that talent is equally distributed around the world, but opportunity isn’t, and we’ve focused our efforts in Kenya and Uganda.


AI is still relatively young, and companies have more power than they realize to shape this technology’s future for the better. The time is now to build a healthy, thriving industry that values every worker in the supply chain. 



CSR 2.0: Acting with Real Purpose


Most companies today have a corporate social responsibility (CSR) policy or something similar. In fact, 93% of the world's 250 largest companies publish some kind of CSR report every year. But not every company always operates to those standards, and make no mistake, it's not easy. Research has found that over half of executives don't believe their organizations act with purpose, even as 64% of CEOs say that CSR is a vital part of their business.


Clearly, there’s a disconnect here. A focus only on pleasing shareholders, satisfying the financial bottom line, or growing at all costs can be detrimental to everything else. As just one example, the commonly used reverse-bidding auction process for vendor selection doesn’t encourage companies to select suppliers with positive social impacts: it encourages undercutting and choosing the lowest bidder. There will always be companies willing to go as low as they can to secure contracts, and that often comes with a trickle-down cost to their workers that isn’t always accounted for. Some vendors may not even employ workers directly, preferring to crowdsource instead, leaving people in precarious positions.


Companies must operate to the CSR standards and policies they espouse. More succinctly: they need to put their money where their mouth is. It may seem like only big movers such as Google, Meta, or Walmart have the power to create change. Yet all buyers can positively influence the broader ecosystem, with many small effects piling up to the same impact. Buyers also have the advantage of sitting in the very middle of the supply chain, with the ability to hold entities both above and below them in the chain to account.


In practice, this means that buyers have to be choosy. Start the selection process with your principles in mind, and stick to them. Ask about data governance policies. Ask to visit a supplier’s office, if you can. Make the extra effort to strike a balance between managing costs and doing good for people and the planet. Have a framework in place under which every decision is made.



Resources for Better Practices


Not sure where to begin in developing that framework? Surprisingly, governments might be a good place to start. For example, the Scottish government has a Q&A tool specifically for SMEs to evaluate and refine their labour practices. The EU’s AI Act is starting to come into effect, and in the US, states like Colorado are leading the way in setting their own policies. With so many regulations at play, it may be better to evaluate vendors’ policies as they relate to globally developed AI guidelines.


To go a step further, there are neutral third parties investigating working conditions. The Fair Labor Association provides accreditation and conducts ongoing evaluations to keep accredited companies honest, as well as offering knowledge-building programs to a network of 2,500 affiliates. Global Labor Justice (formerly the International Labor Rights Forum) has a 40-year history of advocating for safe and dignified work. If a supplier is accredited by or associated with these kinds of organizations, you can and should ask to see its certifications.


The big companies, such as Amazon, Meta, and Google, are already shifting how they approach AI’s development. Thirteen other companies joined those big three in committing to new safety guidelines at a summit co-hosted by the British and Republic of Korea governments in May. Amazon has put out a free responsible AI course. Meta has its five pillars of responsible AI. These companies, with their vast resources, can shift the industry with even the smallest of moves. Yet even their power only stretches so far. Other companies must step up and make the effort to do better, too.



Public Influence on Private Companies


Some of that stepping up may be driven by government regulation rather than a desire to be more socially responsible. For example, when California mandated that corporate boards had to be more diverse, the change was noticeable. Make no mistake: AI regulation is coming. The EU AI Act, which is finally coming into effect, is only the beginning. The big players have started moving towards compliance, looking to avoid massive fines later. However, even the smallest companies face the same choice: comply now, or deal with the consequences later. The end result of either choice is the same: more responsible AI that adheres to the law.


While governments may not all be as strict as the EU, they are still moving forward, and quickly. Gartner predicts that 50% of all governments will require responsible AI usage within just two years. That may sound distant, but given the speed at which AI is developing and the difficulty of creating full documentation and oversight in the middle of the development process, it will come faster than you think. Last year’s executive order for US agencies set 90-day and 180-day timelines for creating policies, and agencies have met those deadlines.


But regulations aren’t the only way governments can put their thumb on the scale. The 2023 executive order also created guidelines for selecting suppliers. If you think government contracts aren’t that lucrative, consider this: Apple’s market cap is around $3.5 trillion, while the US federal budget last fiscal year was $6.1 trillion. Government dollars flowing to certain suppliers over others rewards them for responsibility. And that’s just one country. What would it look like if other governments adopted similar policies? A domino effect.


When it comes to AI, no responsible or ethical decision is too small to matter. Every decision, from companies of every size, adds up, and everyone can use AI more responsibly. And when something has this much potential, making the effort now can prevent us from falling into familiar patterns, where the lowest bidder wins, where profit remains concentrated in only a few parts of the world, and where life-changing technology helps only a few of us, not all of us.


We are at an inflexion point with AI. Are you ready to make the change?
