When Leaders Think Like Machines

REFLECTIONS

There is something seductive about a confident answer. AI delivers confidence fluently; ask it to summarise a document, analyse a dataset or identify themes in a body of research, and the output arrives clean, structured and assured. No hesitation, no loose ends, no visible uncertainty. This is genuinely useful - but it is also quietly dangerous.

The danger is not that AI gets things wrong, though it sometimes does, but that it produces answers which feel resolved - and resolved answers close down the questions that strategy often depends on.

Real decisions rarely turn on what is obvious. They turn on what is unresolved: the tension a customer can't articulate, the cultural shift not yet visible in the data, the single remark in a two-hour conversation that reframes everything. These signals are weak by nature; they sit in contradiction, in hesitation, in the gap between what people say and what they mean.

Real human behaviour is messy, contradictory and often inarticulate. That friction isn't noise - it's where meaning sits. AI is not built to preserve this friction; it's built to smooth it. Language models are optimised for coherence, producing outputs that are internally consistent, logically structured and linguistically confident. The result is a version of reality that feels stable, even when the underlying situation isn't. This is not a flaw - it is simply what the technology does. The real question is what happens to us when we start to rely on it.

The risk is that we inherit AI's certainties without noticing. When the first draft comes back tidy, we're less likely to ask what might be missing. When the summary is well-structured, we're less likely to return to the original material. When the themes are clearly labelled, we're less likely to wonder whether a different framing would reveal something more useful. The machine does not replace our judgement, but it can make us forget that judgement was required.

This is the quiet - and perhaps most dangerous - cost. Not hallucination; that's obvious when it happens. The subtler problem is premature closure: the feeling that a question has been answered and an insight found, when in fact the material has only been organised.

In my experience, real interrogation rarely feels tidy. The best work comes from continuing to ask the why behind the why - from staying with the question longer than feels comfortable, revisiting what seemed settled, allowing for the possibility that the first framing was wrong. That discipline is what separates insight from organisation. And it's precisely what a clean, confident AI output can quietly discourage.

None of this is an argument against using AI. The technology is powerful, and used well it sharpens the work. But there is a difference between using AI to prepare for insight and expecting it to deliver insight. Data can describe the world, but it can't decide what matters. AI can surface patterns, but it can't tell you which ones change the decision. That still requires a person - someone willing to sit with ambiguity, to notice what doesn't fit, to ask whether the confident answer is actually the right one.

The human advantage is not that we are faster or more consistent. It is that we can hold uncertainty without rushing to resolve it.

The real danger isn't a single bad decision made on a confident summary. It's what happens over time - when leaders grow accustomed to answers that arrive resolved and stop expecting to sit with ambiguity. When the instinct to ask "what's missing?" or "why does this feel too neat?" begins to atrophy. AI doesn't just produce tidy outputs; it can quietly reshape what we think rigour looks like, until interrogation feels like inefficiency and doubt feels like weakness. That's the long cost. Not wrong answers, but a slow forgetting of what real questions require.

In a world where AI offers resolution at speed, the discipline of staying with the question is becoming rare. And valuable.

These reflections draw on work with language models and on study undertaken at MIT.