

Guest Author: Preparing for AI

Dr. Joy Buolamwini


We asked Dr. Joy Buolamwini 7 questions about her work in uncovering and combating biases in AI systems. In this discussion, Dr. Buolamwini shares her thoughts on the coded gaze, the societal impacts of algorithmic discrimination, and the steps we can take to ensure a more inclusive technological future — for all.


Unmasking the coded gaze: Dr. Joy Buolamwini's fight for fair AI


“We do not have to accept digital humiliation as the tax for innovation.”


Dr. Joy Buolamwini, acclaimed computer scientist and author, has made it her mission to unmask the hidden biases lurking within AI systems. Through her research at the MIT Media Lab, Buolamwini discovered that facial recognition software often struggles to accurately detect faces with darker skin tones, including her own.


This startling realisation led Buolamwini to coin the term "coded gaze" to describe how the preferences, priorities, and prejudices of those who create AI systems can become embedded in the technology itself. As the founder of the Algorithmic Justice League, she now works to raise awareness about the potential harms of biased AI and advocates for greater accountability and fairness in the development of these powerful tools.


In her new book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines", Buolamwini explores the far-reaching social implications of AI bias. She warns that without proper safeguards, flawed algorithms risk perpetuating and even amplifying existing inequities and stereotypes, with devastating consequences for marginalised communities. She also offers real-world examples of advocates pushing for positive change and of companies taking commendable steps.


Her work serves as an urgent wake-up call about the need to centre human values and ethics in the development of AI, before it's too late.


 

The following is a discussion between us and Dr. Joy Buolamwini.


1. The book begins with an encounter with the coded gaze. Can you explain what that is?


The coded gaze refers to the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure. It highlights how the machines we build reflect the biases and values of their creators, impacting individuals' opportunities and liberties. While the coded gaze is not always explicit, it is ingrained in the fabric of society, similar to systemic forms of oppression like patriarchy and white supremacy. I encountered what I call the coded gaze when I had to put a white mask over my dark-skinned face to be detected.
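
To make that encounter concrete, here is a minimal sketch of the kind of detection test described above, using OpenCV's stock Haar-cascade face detector as a stand-in for the systems she tested; the detector choice and image filenames are illustrative assumptions, not her actual setup.

```python
import cv2

# Stock Haar-cascade frontal-face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(image_path: str) -> int:
    """Return how many faces the detector finds in the image."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    return len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

# Hypothetical photos: the same face, without and with a white mask.
# A detector exhibiting the coded gaze returns 0 for the first image
# and 1 for the second.
for path in ["face_unmasked.jpg", "face_white_mask.jpg"]:
    print(path, "->", count_faces(path), "face(s) detected")
```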


2. What does it mean to be excoded?


To be excoded means to be an individual or part of a community that is harmed by algorithmic systems. This harm can manifest in various ways, such as receiving unfair rankings or treatment from automated systems, being denied opportunities or services based on algorithmic decisions, or facing exclusion and discrimination due to the use of AI systems. 


Being excoded highlights the risks and realities of how AI systems can negatively impact people's lives, especially those who are already marginalized or vulnerable.


3. What are some of the ways algorithmic bias and discrimination are impacting everyday people?


One significant impact is the perpetuation of bias and discrimination in AI systems, leading to harmful outcomes for individuals from marginalized communities. For example, algorithmic audits and evocative audits reveal how systemic discrimination is reflected in algorithmic harms, affecting individuals on a personal level. These biases can result in unfair treatment, misidentification, and targeting of certain groups, such as Black individuals and Brown communities, in areas like criminal justice and facial recognition technologies.
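
As a rough illustration of what such an audit measures, the sketch below disaggregates the false-match rate of a face matcher by demographic group; the `match` function and the data layout are hypothetical stand-ins for illustration, not any vendor's API.

```python
from collections import defaultdict

def false_match_rates(pairs, match):
    """pairs: iterable of (group, photo_a, photo_b) where the two photos
    show DIFFERENT people; match: any face matcher returning True when it
    judges two photos to show the same person. Both are hypothetical."""
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, photo_a, photo_b in pairs:
        trials[group] += 1
        if match(photo_a, photo_b):  # different people judged "same"
            false_matches[group] += 1
    return {g: false_matches[g] / trials[g] for g in trials}

# An audit reports these rates side by side: a system with strong overall
# accuracy can still falsely match one group several times more often,
# which is the failure mode behind wrongful arrests.
```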


Take, for example, Porcha Woodruff, who was eight months pregnant when she was arrested for robbery and carjacking after police relied on faulty face surveillance tools. While in custody, she started experiencing contractions in her holding cell and had to be rushed to the hospital upon her release. AI-powered facial recognition failed her, putting her and her unborn child at risk in a country where Black maternal mortality rates are double to triple those of white women.


Sadly, Porcha is far from the last to be impacted by the implications of AI. Last year, Louise Stivers, a graduate student at the University of California, Davis, was accused of using generative AI to cheat even though she hadn't. Despite the algorithm being wrong and her innocence ultimately being proven, the investigation remains on her record, and she will have to self-report it to law schools and state bar associations. The algorithm got it wrong, and she won't be the last victim of its errors.


Deepfakes, AI-generated photorealistic images and videos, are another technology that could have disastrous consequences. Aside from potentially skewing election results through the spread of false information, deepfakes are often used to superimpose the faces of celebrities onto the bodies of individuals performing sexual acts, without any regard for consent. The most recent and prominent examples include Taylor Swift and Bobbi Althoff.


Ultimately, no one is immune from AI harms. The need for biometric rights becomes ever more apparent as we see how easily a person's likeness can be taken and transformed, or their face surveilled for nefarious uses. It is crucial that we use our voices to speak out against harmful uses of AI. If we remain silent, an AI backlash will result in pushback against beneficial applications of this technology. We do not have to accept digital humiliation as the tax for innovation.


4. What can be done to prevent harmful AI?


The following measures can be taken to prevent harmful AI:


  1. Companies and governments should prioritize addressing existing AI systems with demonstrated harms rather than focusing solely on hypothetical existential risks posed by superintelligent AI agents. Resources and legislative attention should be directed towards minimizing the real dangers posed by current AI technologies.


  2. Companies claiming to fear existential risks from AI should demonstrate a genuine commitment to safeguarding humanity by refraining from releasing AI tools that could potentially have catastrophic consequences. Governments concerned about the lethal use of AI systems can adopt protections advocated by organizations like the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization.


  3. Structural violence perpetrated by AI systems, such as denial of access to healthcare, housing, and employment, should be addressed to prevent individual harms and generational scars. It is essential to recognize that AI systems can cause harm slowly over time, not just through immediate physical violence.


  4. Immediate problems and emerging vulnerabilities with AI, particularly concerning algorithmic bias and false arrests, need to be addressed promptly. Efforts should be made to ensure that the burdens of AI do not disproportionately affect marginalized and vulnerable populations.


  5. Responsibility for preventing harms from AI lies with the companies that create these systems, the organizations that adopt them, and the elected officials tasked with serving the public interest. Communities can contribute by sharing their experiences, documenting harms, and advocating for their dignity to be prioritized in AI development. Supporting organizations that pressure companies and policymakers to prevent AI harms is also crucial in mitigating potential risks associated with AI technologies.


5. Do you support banning any types of AI-powered technologies?


I support banning lethal autonomous systems and digital dehumanization in order to prevent the creation of fatal AI systems. The Campaign to Stop Killer Robots has been championing protections against potentially fatal uses of AI, without making the leap to hypothetical sentient systems that could pose existential risks.


The focus is on addressing existing and emerging AI harms that have demonstrated dangers, such as AI systems falsely classifying individuals, robots used for policing, and self-driving cars with faulty tracking systems, rather than solely prioritizing hypothetical existential risk.


6. You’ve appeared in the documentary Coded Bias and been the face of the Olay Decode the Bias ad campaign. From those experiences, what role do you see media playing in conversations around artificial intelligence?


Unsurprisingly, these campaigns go a lot further than any of my research papers. My involvement in the documentary "Coded Bias" and being the face of the Olay Decode the Bias ad campaign have allowed me to bring attention to important issues related to AI to millions of people. Leveraging these platforms has helped create a more informed public discourse on AI and its implications for society. 


With the Olay campaign, for example, I was brought on as a creative partner and collaborator. The campaign initially focused on the idea of inclusion and collecting face photos to improve Olay's Skin Advisor system. Eventually, I became the face of Olay's #DecodeTheBias campaign in 2021, with complete control over its messaging, to increase the number of women in STEM and raise awareness about algorithmic bias. The campaign aimed to create just, responsible, and inclusive consumer AI products. Olay committed to taking actions based on the campaign's recommendations, including the Consented Data Promise to use only data collected with explicit user agreement. This ultimately led to changes in how models and spokespeople were portrayed, with no post-production blemish-reducing techniques used.


7. You describe yourself as a poet of code, what does that mean and how does that inform your approach to artificial intelligence?


In doing my Gender Shades research, the results for AI systems from IBM, Microsoft, and later Amazon showed that these systems worked better on men's faces than on women's faces, and on lighter faces than on darker faces. When we did an intersectional analysis, we saw that they didn't work as well on the faces of dark-skinned women like me. After observing that data, I wanted to move from performance metrics to performance arts to actually humanize what it means to receive those types of labels. That's what led to my poem "AI, Ain't I a Woman?". At first, I thought it would be an explainer video like the ones I've done for other projects. When I was talking to a friend, he asked me to describe what it felt like. After my response, he said, "That sounds like a poem."
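
A minimal sketch of that intersectional breakdown, assuming benchmark results tabulated as one row per face with gender, skin-type, and correctness columns (the file and column names are illustrative, not the actual Gender Shades schema):

```python
import pandas as pd

# Hypothetical results file: one row per benchmark face, with columns
# "gender", "skin_type", and "correct" (1 if classified correctly).
df = pd.read_csv("benchmark_results.csv")

# Single-axis breakdowns can each look tolerable on their own...
print(df.groupby("gender")["correct"].mean())
print(df.groupby("skin_type")["correct"].mean())

# ...while the intersectional view exposes the worst-served subgroup
# (e.g. darker-skinned women), which single-axis averages smooth over.
print(df.groupby(["gender", "skin_type"])["correct"].mean().unstack())
```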


As a poet of code, I use words, performance, video, and technical research to highlight the contradictions between the promises made about technology, such as artificial intelligence advancing humanity, and the reality of how technology can oppress rather than liberate. This approach to art and technology allows me to convey complex ideas and implications in a visceral and creative manner.


 

If you enjoyed this post, you’ll love our weekly briefings on Preparing for AI. Check out some previous editions here, or just cut straight to the chase and subscribe to our newsletters exploring AI, net zero, the future of work, investing, cinema, and deeptech. 
