The Trust Crisis: Why Outsourcing Trust To AI May Be Our Biggest Mistake

9 July 2025

Listening to Yuval Noah Harari, the historian, philosopher and bestselling author known for exploring humanity’s future, speak about AI and trust made me pause and reflect. Harari highlights what he calls a paradox at the heart of our relationship with technology today:

Humans don’t fully trust each other; we know we’re flawed, biased and emotional. Yet many of us are ready to trust AI.

If we hesitate to trust people, how can we so easily trust the systems those people create? As Harari warns:

AI doesn’t transcend human bias. It encodes and amplifies it, often in ways we don’t even see.

And yet, here we are, rushing to hand over our trust to AI as if it can finally provide the certainty we’ve been longing for.

But what began as a paradox, in my view, is rapidly becoming a crisis of trust. Every day, I see people turning to AI not as a tool, but as a source of truth.

What I’m Noticing: When AI Feels Safer Than a Friend

This trust shift is everywhere. Someone recently shared with me:

When I pitch a business idea to a friend, they hesitate. They ask hard questions, point out risks. But when I ask AI, it immediately builds me a business plan. The AI sounds supportive, confident and encouraging.

And now, I see this playing out across social media:

  • People ask AI a question.
  • They post its response, unexamined, as if it’s a definitive source of truth.

This pattern is spreading, fuelled by the role modelling of tech leaders. Despite knowing the limitations and risks of AI, they keep accelerating its development. They act as if AI can be trusted, so naturally, people follow.

This isn’t just happening; it’s being modelled, reinforced and amplified.

Why AI Feels So Tempting

The human brain craves certainty. Uncertainty triggers discomfort, activating stress-related regions like the anterior cingulate cortex and amygdala. Certainty lights up reward circuits.

So when a friend challenges us, we feel uneasy. When AI offers confident guidance, we feel relieved.

But this relief is dangerous when:

  • We mistake confidence for wisdom.
  • We bypass the hard work of reflection, dialogue and critical thinking.
  • We start outsourcing not just tasks but judgment itself.

Why Humans Need Trust and Why Wisdom Cannot Be Lost

Trust isn’t optional. It’s essential for human survival.

From our earliest days, we thrived because we trusted, not blindly, but wisely. Trust allowed us to cooperate, form alliances, raise children and build societies. Those who couldn’t trust, or couldn’t be trusted, risked isolation.

Human wisdom emerged from this dance of trust, doubt and discernment. It’s what allowed us to navigate risk, forgive, adapt and evolve.

That wisdom, messy, hard-won and irreplaceable, must not be lost now.

AI may offer speed and confidence, but only humans bring moral reasoning, empathy and context.

The Illusion of AI as a New Source of Trust

We’re tempted to believe AI solves the trust problem. But as Harari reminds us:

Super-intelligent AI is, at its core, an alien agent, unpredictable and ultimately untrustworthy.

AI doesn’t eliminate the need for trust. It makes the need for human wisdom greater than ever.

What’s at Stake

  • We reduce relationships to transactions judged by algorithms.
  • We risk losing nuance, forgiveness and second chances, the very fabric of meaningful human trust.
  • We weaken the very faculties that define human wisdom: empathy, reflection and responsibility.

How We Can Do Better

To navigate this trust crisis, I use a framework I call ‘The Trust Triangle’, a simple visual that reminds us what’s needed for balanced, thoughtful decision-making. No single element is enough on its own; true wisdom comes from holding all three in balance:

  • Human Wisdom:
    Our moral reasoning, empathy and context

  • AI Support:
    Technology as a tool, not a master

  • Critical Reflection:
    The ongoing practice of questioning and discernment


Final Thoughts 

If we want to navigate this new landscape with integrity and clarity, we must stay conscious of how we relate to AI. The choices we make today will shape not only our technologies but also the kind of leaders, communities and societies we become.

Here are guiding principles I believe can help us protect what matters most:

  1. Let AI support, not replace, human judgment.

  2. Remain accountable for decisions made with AI’s input.

  3. Embrace uncertainty; that’s where human wisdom grows.

  4. Choose relationships that challenge and stretch us, not just voices (or systems) that comfort us.

AI can help us process complexity. But it can’t make us wise. Human wisdom cannot and must not be lost, because it is wisdom, not certainty, that will guide us through the challenges ahead.

Trust is still ours to build. In this age of artificial agents, we need it more than ever.

Leading Wisely in the Age of AI

If the ideas in this article speak to you, and you’re exploring how to lead with more empathy, reflection and wisdom in a rapidly changing world, the Brain-Friendly Leadership Certification may offer a helpful next step.

It’s a space to deepen the human capacities, like trust-building, discernment and relational awareness, that remain essential, especially as we integrate AI into our work and lives.

You can learn more about the program here.


Sources:

  • Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity. Free Press.
  • Harari, Y. N. (2018). 21 lessons for the 21st century. Spiegel & Grau.
  • Hirsh, J. B., Mar, R. A., & Peterson, J. B. (2012). Psychological entropy: A framework for understanding uncertainty-related anxiety. Psychological Review.
  • Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., & Camerer, C. F. (2005). Neural systems respond to degrees of uncertainty in human decision-making. Science.
