Would you hire an AI coach?

  • Writer: Helen Barnes
  • 1 day ago
  • 5 min read

Updated: 18 hours ago

Trust and leadership judgement in the age of AI


AI coaches are starting to filter into the mainstream. On the surface, it’s easy to see the appeal. They’re always available (hello 3am overthinking spirals). They don’t get tired of your rants and they don’t judge (except of course to judge that you’re right, all the time). They’re quick, obliging, and reassuringly calm. For people under pressure, they promise support without friction.


If your days are already packed with decisions, trade-offs and conversations that don’t quite land, the idea of something that will patiently listen while you empty your head can feel like a relief. No diary juggling. No awkward silences. No feeling that you should already have this all figured out.

It feels harmless. Private. Safe.


But there’s a question hiding underneath all that convenience:


Can you trust an AI coach with your thinking?


So far, most of the discussion about AI coaching and leadership judgement has focused on safety at the extremes. Will it spot when someone is in crisis? Will it respond appropriately if things tip into mental health territory? Are there guardrails in place to redirect people to human support if needed?

Those questions matter. But they also set the bar surprisingly low.


Because coaching - real coaching - isn’t just something you turn to when you’re falling apart. It lives in the messy middle. In everyday decision-making, problem-solving, second-guessing, option-weighing, identity-questioning, brain-dumping. The stuff you don’t put in an email. The thoughts you haven’t yet finished. The things you’re not ready to say out loud to the people around you.


And that raises a much more ordinary, and much more uncomfortable, question.


Can you trust an AI system to hold your stuff?


Not just to respond politely or ethically in the moment, but to respect your rights, your privacy, and the deeply personal material you’re sharing as you think out loud.


Most of us, if we’re honest, don’t really know where our information goes once we hand it over. Privacy and consent have been headline concerns for years, particularly since GDPR, but the reality is that the world has moved faster than our understanding. The terms are long, the language is dense, and in the rush of a working day, “accept” is usually just another button we click so we can get on with things.


The problem is that coaching and therapy are not neutral conversations. They involve big themes: identity, doubt, fear, ethical tension, complex relationship dynamics and unfinished thinking. If we start opening up those spaces to AI, we need to ask a harder question than whether the technology is technically compliant.


How safe can we really feel sharing this material with an unknown entity?

And how might it be used, interpreted or repurposed in the future, in ways we can’t yet see coming?


And that’s only part of the story.


When AI support turns into substitution of judgement


I work with people who are already operating at the edge of their capacity. Senior leaders in fast-paced, uncertain environments that rarely let up. People who carry responsibility all day, absorb complexity for a living, and are expected to have a view even when the ground is shifting underneath them.


For them, AI can look like a very tempting support system: something that can summarise the noise, reduce the cognitive load, help to organise their thinking.


But there’s a point where support quietly turns into substitution. And that’s where things get interesting.


There’s a well-documented quirk in human behaviour that shows up whenever we work alongside automated systems. When we’re under pressure, tired, or juggling too much at once, we have a tendency to trust the system more than ourselves. Not because it’s always right, but because it feels authoritative. Calm. Certain. In fields like aviation and healthcare, this has a name: automation bias.


The same thing happens, more quietly, with thinking tools.


We already outsource parts of our cognition without much thought. We offload memory to calendars and navigation to GPS. Now we’re offloading deeper parts of thinking - organising ideas, summarising complexity, generating options - to AI tools. And early evidence suggests that this kind of cognitive offloading is associated with lower levels of critical thinking over time.

This doesn’t mean AI is inherently dangerous, or that people suddenly become incapable of thought. The risk is much more subtle and cumulative. Over time, confidence can grow even as judgement quietly weakens.


You see it when big decisions are made in a heartbeat. When a neat explanation slides through without scrutiny. When a difficult edge is smoothed off too quickly. It shows up in meetings where options are presented confidently but thinly held. In strategies that sound plausible but haven’t been properly stress-tested. In moments where progress feels reassuringly fast, yet something important has quietly been skipped. This isn’t because anyone is careless. It’s the subtle disconnect that appears when the thinking itself has been outsourced.


This is what can happen when we hand over parts of thinking that were never meant to be delegated in the first place. Tools that reduce cognitive load are undeniably powerful, but without intention and boundaries they can encourage us to stop exercising the very mental muscles that make judgement reliable.


Coaching isn’t advice. It’s a trust function.


One of the reasons this conversation gets so tangled is that coaching is often misunderstood as advice, motivation or fixing. In reality, its value sits somewhere else entirely.


Coaching operates as a trust function.


It’s a structured human relationship where uncertainty, doubt and unfinished thinking can be explored without being rushed, exploited or prematurely resolved. Where blind spots can be gently exposed. Where comfortable stories can be challenged. Where responsibility isn’t smoothed over, but held.


In high-pressure roles, failures of judgement tend to happen when context gets distorted, assumptions go untested, or the weight of responsibility becomes too heavy to carry alone.

This is where the idea of a human witness matters.


Good judgement doesn’t emerge in isolation. It’s shaped in relationship. We calibrate ourselves through another person who can notice when our thinking narrows, when we rationalise, or when we quietly avoid the harder truth. And crucially, that person is ethically accountable for the influence they have - for the questions they ask, and for the impact those conversations create over time.


AI, by contrast, has no stake in the consequences.


It can offer prompts. It can reflect language back. It can suggest options. But it cannot carry responsibility for how its responses shape a person’s sense of self, their decisions, or the downstream effects of those decisions on others. The accountability stops at the interface.


Leadership, AI and the future of thinking quality


This isn’t just a philosophical concern. It’s a practical one.


A report from the McKinsey Health Institute talks about the growing importance of “brain capital” - the combination of brain health and brain skills needed to drive resilience, productivity and growth in the age of AI.


In other words, our capacity to think well is becoming a strategic asset.

That changes the stakes.


If thinking quality matters, then where and how we develop it matters too. And so does who - or what - we trust to shape it.


AI can absolutely support human thinking. It can widen perspective, reduce noise, and free up mental space. But judgement, ethical responsibility and self-awareness require something slower, more relational and more accountable.


If we don’t get clearer about that boundary now, habits will form that are difficult to unwind later. Leaders will become more efficient, but potentially less reflective. More certain, but not necessarily more accurate.


In a world full of tools designed to agree with us, the most valuable investment may still be a human who doesn’t.


----



P.S. This piece reflects the level I work at with leaders - not giving answers, but protecting the quality of thinking and judgement in complex, high-pressure roles. If you’re curious, you can find out more about how I work here.
