
Moral AI and How Organisational Leaders Can Get There



  • AI organisations face a trust crisis: only 15-35% of Americans trust AI companies to act responsibly.

  • Companies are losing revenue and customers, and facing legal challenges, due to AI bias.

  • Trust starts at the top. By actively engaging in ethical AI practices, leaders can rebuild trust and benefit both society and their organisations.


Guest contributor: Dr Jana Schaich Borg



Organisations that create and use artificial intelligence (AI) have a trust problem. At the start of the year, only 35% of Americans said they trusted businesses in the AI sector to do what is right. When another survey asked, only 15% said they trusted companies to develop AI systems responsibly. Poll after poll tells us people do not believe AI technology organisations have consumers’ interests at heart or will protect them from the meaningful harm AI development may cause.


This distrust is more than a reputational problem.  Just using the term “artificial intelligence” in a product description can make people less likely to buy it.  In 2022, 36% of organisations interviewed had already faced legal challenges or lost revenue, customers, or employees due to AI bias, and this number is likely even higher now.  People want their countries to reap the economic and social benefits of leading AI development, but won’t trust an industry that does not effectively protect consumer interests.  How can organisational leaders walk this tightrope?


Despite common assumptions, delegating these issues to compliance and technology specialists won’t solve the problem. Consider Google’s costly Gemini chatbot controversy from earlier this year. Most people feel AI should be fair and have just consequences, and Google has an elite legal compliance team with deep expertise in the laws and federal guidelines that could make them liable to lawsuits and fines for AI bias. There are also a variety of technical “Fair AI” tools that help make training data more representative, make individual algorithms fairer, and audit AI outcomes for bias. Google created many of them, in fact! Even with these resources, Google came under worldwide scrutiny when Gemini’s image generator was deemed unfairly “woke” because it depicted racial identities in historically inaccurate ways, presumably as an over-correction for other kinds of bias detected in Gemini’s model. The episode “upset a lot of people” and cost Google’s parent company Alphabet more than $70 billion in market value. Moral trust matters, and leaders need to go beyond legal compliance to earn it.
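To make the idea of “auditing AI outcomes for bias” concrete, here is a minimal sketch of one common check: comparing selection rates (the share of favourable decisions) across demographic groups. The group labels, sample data, and the 0.8 review threshold are illustrative assumptions on my part, not a description of any specific vendor’s tool.

```python
# Minimal sketch of a selection-rate (demographic parity) audit.
# Group names, sample decisions, and the 0.8 threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable) pairs -> favourable-outcome rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest group rate; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(sample)
    ratio = disparate_impact_ratio(rates)
    print(rates, round(ratio, 2))
    if ratio < 0.8:  # a common rule-of-thumb trigger for human review
        print("Flag for human review: favourable outcomes differ substantially across groups.")
```

Checks like this are easy to run; as the Gemini episode shows, the harder question of which kind of fairness to aim for is a judgment call that technical tooling cannot make on leaders’ behalf.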


To be clear, as a Moral AI technology creator myself (with co-authors Vincent Conitzer, Walter Sinnott-Armstrong, and others), I do think that technology will be an important piece of the puzzle. There has already been great progress in creating tools that make AI more fair, private, and safe, and our team has been working on ways AI can learn to make decisions that align with human moral values more generally. I am also excited about AI tools we are developing to assist humans in learning moral skills and becoming more consistent and informed moral decision-makers. While there is more to be done to make these tools sufficiently usable and effective, I am optimistic the research community will generate many technical tools to help organisations of all sizes and resources reduce AI’s harms while enhancing its benefits. Compliance teams, too, have a lot of relevant experience that will guide companies in navigating legal regulations related to AI. But given that AI regulation still lags behind what the majority of American citizens want, technical tools and compliance alone will not be enough to earn society’s trust that a company will shepherd AI technology ethically. What’s missing?


Many of AI’s unethical outcomes can be tied back to organisational practices and cultures that are not well-suited to the combination of the AI field’s competitiveness, its global reach, and the unknown nature of AI products’ impacts on societal dynamics. If organisational leaders figuratively put their ears to the ground, they would likely learn that their employees are looking to them to make, and take responsibility for, some of the tough ethical calls employees have thus far been largely left to make on their own, like what kind of fairness to aspire to. Employees also need leaders to provide more explanation and documentation about what values should guide team members’ decisions and strategies when priorities conflict. Leaders would also learn that employees need them to admit openly when work processes and incentives are not well-aligned with the type of effort needed to make their AI ethical, and to show evidence of their commitment to improving that alignment. Further, AI contributors are searching for well-designed on-the-job opportunities to develop ethical problem-solving skills relevant to their daily work. More broadly, most AI team members need their leaders to communicate more convincingly that ethical values are a real priority not only for the organisation, but also for the leaders themselves. This is not easy stuff, especially when the pressure to show shareholders your organisation is embracing AI is so high. Moreover, making the necessary changes to address these issues will likely require substantial investment. But here are some ways to get started.


Begin by sending out surveys to your employees asking for candid feedback about organisational barriers and supports they encounter when trying to translate moral AI goals into practice, and allow employees to respond anonymously.  Take this feedback seriously, and use it to guide your change strategy.  Next, ensure you have a system for contributors at all seniority levels to flag ethical concerns with AI products, encourage contributors to use the system, and transparently report back to employees how each raised issue is addressed.   Simultaneously, work with your technical teams to develop agile-compatible work processes that incorporate ethical reviews early in the AI product lifecycle, ideally as soon as an AI project is even considered, and at consistent and frequent intervals throughout the project’s progress.  Collectively, these efforts will help formalise processes that allow your organisation to address moral problems raised by your AI products before they are launched.  
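As one hypothetical illustration of how those ethical review points could be built into the product lifecycle rather than left to memory, a team might encode the review artefacts each stage requires and block progress until they exist. The stage names and artefacts below are my own illustrative assumptions, not a prescribed standard:

```python
# Hypothetical gate check: each lifecycle stage lists the ethics-review
# artefacts it requires before the project may move forward.
REQUIRED_REVIEWS = {
    "proposal":   ["intended-use statement", "initial harm brainstorm"],
    "prototype":  ["training-data provenance review", "fairness audit plan"],
    "pre-launch": ["fairness audit results", "red-team findings", "ethics reviewer sign-off"],
}

def outstanding_reviews(stage, completed):
    """Return the review artefacts still missing for a given stage."""
    return [item for item in REQUIRED_REVIEWS.get(stage, []) if item not in completed]

missing = outstanding_reviews("pre-launch", {"fairness audit results"})
if missing:
    print("Launch blocked; outstanding reviews:", ", ".join(missing))
```

The value of encoding the gates is less the code itself than the transparency it creates: everyone can see which reviews a product has and has not passed.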


Your next focus should be designing a thoughtful strategy to get everyone in your organisation up to speed not only on the main ethical issues AI products raise, but also on the skills needed to problem-solve those issues in real-life organisational contexts characterised by competing pressures, uncertainties, and team members with diverse moral outlooks. Most of us were not given the opportunity to develop the kind of “moral systems thinking” and interpersonal acumen needed to navigate problems in these settings, so AI-using organisations need to provide a reliable way for contributors and leaders to learn these capacities on the job. Like many skills, Moral AI systems thinking takes practice, feedback, and experience with failure to develop. Each organisation will need to create processes that facilitate this type of moral experiential learning without letting the mistakes that are part of that learning impact consumers. In most cases, isolated online training modules are not going to cut it here.


If your organisation has a chief learning officer, designing your Moral AI learning strategy would be a good challenge to assign to them. If you don’t yet have a chief learning officer, consider adding one with Moral AI expertise to your C-suite, even if you need to fund them to go back to graduate school to get that expertise. While you are at it, just as many organisations encourage their employees to get MBAs, consider funding technical contributors to get additional graduate training in ethics or moral thinking, and funding compliance or social science team members to get additional technical training that will allow them to understand the engineering problems associated with implementing ethical concepts in deployed systems. Focus your initial educational efforts on AI product managers and engineering managers, since they are typically the ones who determine how AI product development unfolds on a day-to-day basis.


Finally, leaders need to take a hard, honest look at how much they prioritise moral values and concerns, and at what they are conveying about that priority to their organisations. Consumers and community members aren’t the only ones who doubt whether leaders care about the ethical consequences of AI technology; your own employees share those doubts too. Contributors report time and time again that even when their leaders say they care about AI ethics, financial goals take precedence and determine how contributions are evaluated, and few organisations are working actively to mitigate ethical issues related to their AI systems. One of the most efficient ways for leaders to address this perception is to work with AI product managers and engineering managers to develop Moral AI KPIs that are used alongside existing KPIs to evaluate product success and to inform contributors’ compensation and promotion. Importantly, these Moral AI KPIs should be tied to leaders’ evaluations and compensation, too. Doing so will signal to stakeholders and shareholders that the organisation is serious about the moral impacts of the AI technology it is deploying, and will ensure your leaders’ incentives are aligned with delivering products that achieve moral impacts.
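As a sketch of what “Moral AI KPIs used alongside existing KPIs” could look like in practice, a product scorecard might simply report both families of metrics side by side, so a shortfall on one cannot be hidden by success on the other. The KPI names, targets, and scoring rule below are illustrative assumptions, not recommended values:

```python
# Hypothetical joint scorecard: business KPIs and Moral AI KPIs reported together.
BUSINESS_KPIS = {"monthly_active_users": 1_000_000, "revenue_growth_pct": 10.0}
MORAL_AI_KPIS = {
    "fairness_audit_pass_rate_pct": 100.0,
    "ethics_flags_resolved_within_30_days_pct": 95.0,
}

def kpi_score(actual, target):
    """Fraction of the target achieved, capped at 1.0 (1.0 means the target is fully met)."""
    return min(actual / target, 1.0)

def scorecard(actuals):
    targets = {**BUSINESS_KPIS, **MORAL_AI_KPIS}
    return {name: round(kpi_score(actuals.get(name, 0.0), target), 2)
            for name, target in targets.items()}

example = scorecard({
    "monthly_active_users": 1_200_000,
    "revenue_growth_pct": 8.0,
    "fairness_audit_pass_rate_pct": 90.0,
    "ethics_flags_resolved_within_30_days_pct": 97.0,
})
print(example)  # moral shortfalls appear next to business results
```

Tying part of leaders’ own evaluations to the same scorecard is what turns it from reporting into an incentive.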


Of course, there is much more to do to earn society’s trust in the AI industry, and my co-authors and I offer more comprehensive actions you can take in our book Moral AI and How We Get There.  The steps I have outlined here will get you off on the right foot, though, and will help you identify other hidden areas that need your attention within your own organisation.


As you begin your work on AI trust, I want to offer one more, perhaps controversial, piece of advice: organisational leaders should not feel that they need to be morally perfect. Nobody knows how to “solve” every ethical challenge AI technology poses, especially when so much remains unknown about AI technology anyway. We should expect the industry as a whole, and all the contributors in it, to make many moral mistakes. Society isn’t looking for perfection; it is looking for evidence that AI leaders and contributors genuinely want to prevent those mistakes, take responsibility for them when they are made, and are investing all the grit, commitment, and intelligence that earned them their positions in the AI industry in the first place into ensuring that AI benefits society with minimal harms.


As a leader, you may find that you have some introspection and additional learning to do to be clear on your own personal moral values and to have the skills to integrate those values into your leadership style and choices. Don’t be ashamed if this is the case – most of us need more development in this area. The key is to commit to doing this personal work. If you don’t, at best your consumers and organisation will be unsure how much priority you give to society’s interests. More likely, they simply will not trust you to protect them from AI’s harms, and their purchasing and voting choices will eventually reflect that distrust. On the other hand, if you embrace a moral growth mindset and provide a public example of how others can do the same, your organisation’s bottom line and reputation will be more resilient, you will help build trust in the AI industry as a whole, and you will likely feel more inspired and fulfilled by the work you are doing. The bottom line is this: leadership can’t take a back seat when it comes to Moral AI. You have to get involved, understand what is at stake and why, and get in the trenches along with your AI developers and compliance teams. When you do, you, your organisation, and society will reap the benefits.

