Guest contributor: Future of Capital Allocation
A few weeks ago, AI expert and investor Roger Spitz shared with us his 5 Strategies to Activate Your Agency and Stay Relevant in the Age of AI, a thought-leadership piece on navigating the complex AI future.
Inspired by his insights, we sat down to ask him six questions on how businesses can apply this knowledge to stay relevant in the age of AI.
‘My focus on AI is not about trends or the latest technology fads. I try to prioritise signals over noise, investigating the fundamentals that propel change.’
Roger Spitz is the bestselling author of Disrupt With Impact: Achieve Business Success in an Unpredictable World and the four-volume collection The Definitive Guide to Thriving on Disruption. President of Techistential (Strategic Foresight), and founder of Disruptive Futures Institute (Think Tank) in San Francisco, Spitz is a leading expert and investor in Artificial Intelligence, and is known for coining the term “Techistentialism”. He publishes extensively on the future of strategic decision-making and AI. Spitz is also a partner of Vektor Partners (Palo Alto, London), a VC fund investing in the future of mobility. As former Global Head of Technology M&A with BNP Paribas, Spitz advised on over 50 transactions with a deal value of $25bn.
In this exclusive interview, Spitz sheds light on the core drivers of change, the overlooked risks facing businesses today, and the philosophical underpinnings essential for anticipating and mitigating future disruptions. From his pioneering concept of ‘Techistentialism’ to the imperative of adopting adaptive strategies, Spitz offers pragmatic frameworks and thought-provoking perspectives to steer us towards a more resilient and inclusive technological future.
1. With your background in tech and investment, what drew you to focus on AI?
As former global head of Technology Mergers & Acquisitions (M&A) at a leading investment bank, and today partner and LP investor in venture capital funds, my insights have been shaped by a relentless pursuit of understanding the core drivers of change to create sustainable value. Drawing on my experiences advising CEOs, founders, boards, and policymakers on every continent, I focus on developing a deep comprehension of decision-making amid uncertainty, anticipating disruptions, and sustaining competitiveness.
My focus on AI is not about trends or the latest technology fads. I try to prioritise signals over noise, investigating the fundamentals that propel change. I view disruption systemically, transcending the passing technological hypes that emanate from Silicon Valley. Among these fundamentals, AI is undeniably a key underlying driver of change, hence my interest.
The more I investigate the essence of our world, the more I realise that our systems are not just fragile, but outright ineffective. Everything in our world is constantly evolving - except our organisations, strategies, and governance structures. Too often, we act on flawed assumptions, believing in a world that is predictable, linear, stable, and controllable. But the cost and missed opportunities from these misconceptions are on the rise.
My work explores opportunities, risks, strategies, and tactics to remedy this lack of resiliency. It offers practitioner frameworks and real-world insights to enhance foresight and decision-making in addressing major disruptions, from sustainability and artificial intelligence to geopolitical shifts and cybersecurity risks.
2. How are businesses failing to acknowledge AI’s ‘Complex Five’?
Our "Complex Five" — Gray Rhinos, Black Jellyfish, Black Elephants, Black Swans, and the Butterfly Effect — represent critical, high-impact scenarios that businesses often overlook.
Gray Rhinos are extremely likely, visible and high-impact threats that businesses frequently ignore, like the rapid spread of AI disinformation.
Black Jellyfish symbolise obscure, low-probability but high-impact events that start off seeming normal but escalate into complex crises, such as the amplification of societal biases by AI technologies.
Black Elephants are apparent, probable threats that are commonly disregarded, akin to ignored cybersecurity risks in critical infrastructures.
Black Swans are unpredictable, rare, yet crucial events that can lead to severe consequences, challenging us to prepare for what we cannot foresee.
The Butterfly Effect brings these all together, illustrating how minor changes can trigger major, unforeseen impacts.
For a deeper look into The Complex Five and how to manage them, please see my CogX OpEd piece.
If you read annual reports, press releases, and corporate messaging, they give the impression of addressing these. But do they understand how to build the resiliency and capabilities to effectively address these “Complex Five” systemically? Anticipatory governance, regulation, and disclosure requirements can help foresee, monitor, and mitigate our Complex Five.
For instance, consider the integrity of infrastructure, systems, and products - new standards for these must integrate security by design, from IoT devices to smart cities.
In 2024, the FBI formally warned that state-sponsored hackers have infiltrated US technology such as routers, potentially compromising critical infrastructure. Lack of accountability for software developers is one factor contributing to poor safeguards. Facing the rapid evolution of cybersecurity threats, we need ongoing education and agility at all levels, from schoolchildren to CEOs. Dedicating resources, leveraging dynamic learning loops, creating training programs (throughout the entire organisation), and continuously running simulations all help us adapt when actual breaches arise.
While not much can be done to combat the ascent of cheap and user-friendly AI disinformation tools, their effects can be mitigated through anticipatory capacity building.
3. Given businesses and investors are always looking to mitigate risk, why do you think there are still 'known-knowns' that remain unaddressed?
In Michele Wucker’s The Gray Rhino, the focus is on threats that are high-impact, extremely probable, and obvious. Unlike the “invisible” Black Elephant, which for now may be sitting quietly, the Gray Rhino is difficult not to see, as it is already charging towards us. While we have the choice to respond to the Gray Rhino - or not - inaction comes at a cost. Examples include the Covid pandemic, the Greek and Argentine credit crises, and the Challenger Space Shuttle accident.
Despite knowledge of the risks and clarity of the situation, responses often fall short because decisions are made too late. The rhino charges in plain sight, yet we turn a blind eye to it. When taking action before getting trampled, consider the speed of development, the degree of consensus on root causes and possible solutions, and the size and complexity of the problems.
Let’s look at a few reasons why “known-knowns” remain unaddressed.
The first reason relates to flawed definitions of what risk actually is. Relying on decisions made by modelling uncertainties won’t deliver certainty. Probabilities are not always measurable, and risk cannot be isolated in a systemic world. Even low-probability events can be tremendously dangerous in our complex, nonlinear world.
The second reason is that we wrongly assume the world to be predictable, changing linearly, navigable through standard playbooks, accurately represented by our models, and safe to assess on short timescales alone. In our real, interdependent, hyperconnected, and systemic world, no risk can be sealed off or contained.
Third, as Douglas Adams’ Ford Prefect describes the idea of Someone Else’s Problem: “An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem. That’s what SEP means. Somebody Else’s Problem. The brain just edits it out, it’s like a blind spot.” In our hyperconnected and accelerating complex world, someone else’s problem can rapidly morph into everyone’s problem.
So what should be done?
First, always think in terms of complex connected systems because small inputs can result in disproportionate effects with large impacts.
Second, you can easily heighten your anticipatory senses for the more obvious rhinos. Ask the right questions and challenge them; detect and interpret the obvious signals - don’t wait for the rhino to start charging.
Third, statistical analysis won’t always help - it could even be dangerous, as it can provide false reassurance. It’s often impossible to accurately calculate the risks of these highly consequential rare events. Even if you could, we operate in a nonlinear world, so small inputs can result in disproportionate outcomes.
Finally, combinations amplify - in complex systems, factors can combine to precipitate runaway chain reactions that eliminate any predictability.
4. As the originator of ‘Techistentialism’, are humans already beginning to act like ‘idle machines’ as a result of increasing reliance on AI? If so, can we reverse/mitigate this trend?
So indeed, we are entering an era of “Techistentialism,” a term we use to describe the nature of human existence in our AI-driven world. My foresight practice is called Techistential, a play on the terms technology and existential that seems appropriate as technology further defines our current and future existence. Today, we face both technological and existential conditions that are inseparable, and we define this as Techistentialism.
Algorithms are already making decisions that determine insurance costs, mortgages, creditworthiness, employee performances, recruitment, and recidivism predictions. Over the next decades, this scope could evolve even further, with algorithms making life-altering decisions on our behalf involving brain-computer interfaces, education, healthcare interventions, and driving. All affect our privacy, responsibility, agency, and nearly every other aspect of our lives, including what it means to be human. Trustworthiness is paramount when AI is operating independently of human control - even with “benevolent” AI.
As algorithms become the most important decision-makers in our lives, the question is not only whether we can trust AI, but whether we can trust that we understand AI well enough.
Even in complex areas, cognitive computing may provide a way of sifting through near-infinite information to seek optimised decisions for society. In such a third-horizon future, AI-optimised parameters could allocate resources such as healthcare, nutrition, education, and energy to seek more equitable outcomes within society as a whole. But who determines those parameters - AI, humans, or some combination of both?
It may not stretch the imagination to consider a transformation scenario where society is forced to discard its role of driving decision-making in an overwhelmingly complex world. If we are not anticipatory, this future scenario could see human agency replaced, with decisions made by algorithms instead of humans.
Ultimately, acting like an idle machine may be akin to relying on AI without a solid understanding of the consequences. But how do we reverse/mitigate this trend?
Well, updating our education system should now become an absolute priority. Education that teaches effective problem-solving can help humanity become relevant and future-ready for our complex 21st century. We should inspire passion, nurture curiosity, emphasise uncertainty, develop range, and foster critical thinking to examine assumptions. Most importantly, we need to form a new relationship with inquiry, experimentation, and failure (which goes hand in hand with creativity). We must harness curiosity and diverse perspectives, because today’s standard knowledge will never solve tomorrow’s surprises. These features should help us problem-solve out of the most complex, systemic, and existential risks.
5. Businesses operate using robust risk-mitigation frameworks, but numbers can only take them so far. How can businesses use philosophy as a tool to help them see beyond the numbers?
In our work, we focus on two types of philosophy to anticipate disruptions ahead, rethink sustainable value creation, and drive system innovation:
First is existential philosophy. Necessity is the mother of invention, not least in the invention of our own existence and flourishing. Probably the most empowering aspect of the open futures ahead of us is the opportunity to create our own selves - our “beingness.” We have the intentionality to constantly invent ourselves, create our reality and our essence, a freedom only possible by virtue of the indeterminacy and uncertainty of life.
From Sartre’s famous lecture “Existentialism is a Humanism,” it is worth spending a moment understanding our human condition and its connection to invention. For Sartre, existentialism is the power (and responsibility) of humans to make free choices; it is what allows human value to be self-created (as opposed to predetermined). That opportunity literally allows us to invent ourselves and change our lives. You are free; you make choices; you invent. That process of invention is freedom. In that respect, despite the anguish of unpredictability, uncertainty is empowering and optimistic. It is a philosophy of action wherein we literally define and make ourselves what we are, building our way forward.
Our choices create our future. Our freedom exists in our ability to emerge in each and every uncertain moment rather than through predetermined responses. Uncertainty is a prerequisite for freedom.
Second, in an effort to understand, survive, and thrive in our disruptive world, it’s useful to consider the timeless teachings of Eastern philosophy and Zen Buddhism. When we aim to improve our comfort with impermanence, transformation, and change, we can learn from a set of tactics developed and refined over millennia. There are a few essential concepts here, especially when we realize the biggest risks are the flawed assumptions we make and our lack of imagination.
Shoshin is the Japanese concept of beginner’s mind, which articulates the value of approaching each situation with an open, accepting, curious mind. While a beginner’s mind most obviously connects to intuition and invention, it also aids its practitioners in improvisation by being in the moment, and in achieving the impossible.
The beginner’s mind enables first-principles thinking - a problem-solving technique that teaches us to deepen our understanding by decomposing a problem or a thing into its most foundational elements. As individuals, and collectively as humankind, greater practice of shoshin would improve our intuition, imagination, and capacity for invention.
6. How can organisations implement adaptive strategies to ‘expect the unexpected’ in an AI-driven future?
These considerations typically revolve around building resilience with anticipatory governance capabilities:
● Agency: By aligning leadership and decision-making across stakeholders, values, and actions, we gain the agency to make impactful changes in our deeply complex and uncertain world.
● Futures intelligence capacity: Learn to scan and qualify weak signals, interpret next-order impacts, and connect the shifting dots with action triggers. Embrace foresight tools and visioning, and map out plausible futures.
● Existential risks: We must not ignore the omnipresent drivers of change, today within our control, that may grow into irreversible outcomes. Our governance systems and underlying incentives must be restructured to foster systems-level change.
● Organisational resilience: For businesses to survive, they need to build strategic resilience for complex risks. The cost of being prepared pales in comparison with the costs of lacking anticipation. Resilient governance requires anticipatory governance.
● Systemic approaches: Understand the features of the entire system, given the unpredictability and interdependencies of moving pieces, where the whole is different from the sum of the parts.
● Bridging: Cross-fertilise with T-shaped profiles that couple deep expertise with broad experience, creating new combinations in a world where patterns are hard to interpret and generalists flourish.
Adaptive strategies can support sense-making and decision-making that enable us to emerge with relevance in our present, complex world. We must harness curiosity, creativity, and diverse perspectives. We must create lean, nimble cells that attack problems independently. Inspired by nature itself, these agile strategies have risen in all sorts of areas.
Finally, we should foster “Technology Foresight”: using cognitive and mental tools to consider the future of technology and its impacts on society and the environment. Given the high stakes, scale, sophistication, and irreversibility of technology, technology foresight should be compulsory:
Anticipate: At the outset, we must thoroughly consider the implications of developing technologies.
Monitor: Ethically questionable products and services that are highly profitable are tough to curtail. Legal standards, governance, compensation metrics, and incentives should be in place for all stakeholders. When serious issues materialise, there should be accountability. Additionally, monitoring innovations day-to-day is critical.
Mitigate: Steps 1 (Anticipate) and 2 (Monitor) allow the consequences of technology to be qualified at the earliest opportunity. Effective mitigation of the unpredictable can only be achieved by carrying out the first two steps.
Author:
Roger Spitz, whose full biography appears above, is the author of The Definitive Guide to Thriving on Disruption, from which this article is derived.
Did you enjoy this post? Then you’ll love our weekly briefings on The Future of Capital Allocation. Check out some previous editions here, or just cut straight to the chase and subscribe to our newsletters exploring AI, net zero, investing, cinema, and deeptech.