In our recent More Bang for Your Buck session, Morag was joined by Hugh (founder of Live Minds) to talk about one of the biggest questions in research right now (check out the podcast here):
Where does AI genuinely add value in qualitative research and where do you still need a human in the room?
Spoiler: it’s both!
And if you get that balance wrong, you could end up confidently presenting something that sounds brilliant…but is completely wrong.

When AI sounds right (but isn’t)
The session kicked off with us reflecting on how a UK police force reportedly relied on Microsoft Copilot alone to generate intelligence reports, and got it badly wrong. Why is that both funny and slightly terrifying?
Because anyone who has worked hands-on with LLMs knows this truth: AI can generate findings that sound authoritative, confident and entirely plausible, yet be deeply flawed.
We’ve tested models using real project data. Sometimes what comes back reads beautifully. Clear themes. Strong statements. Convincing conclusions. But when you know the data inside out, you can see it’s wrong. It’s over-emphasised some points, ignored others, generalised, and averaged out any tensions.
But that doesn’t mean AI is useless. It means you need to understand what it’s actually good at.

Where AI really shines in qual
Used properly, AI is remarkable. Here’s where we’ve seen it add immediate, tangible value:
1. The Blank Canvas Problem
Starting a discussion guide or screener from scratch? Trying to frame early hypotheses? AI is exceptional at getting you moving. It’s brilliant at:
- Generating first drafts
- Suggesting angles you may not have considered
- Stress-testing your logic
- Acting as a critique partner
It won’t replace your thinking, but it can accelerate it dramatically. And with squeezed timelines and budgets, that’s a win.
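As a rough illustration (not a prescription), here’s what that first-draft-plus-critique loop might look like in Python with the OpenAI SDK. The brief, the prompts and the model name are all placeholders for whatever your own stack uses.

```python
# A minimal sketch of using an LLM as a first-draft and critique partner
# for a discussion guide. The brief, prompts and model name below are
# illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

brief = (
    "Objective: understand how small-business owners choose accounting "
    "software. Audience: UK owners with 1-20 employees. "
    "Method: 60-minute online depth interviews."
)

# Pass 1: a first draft to react to, not a finished document.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Draft a 60-minute discussion guide for this brief, "
                   "with timings per section:\n\n" + brief,
    }],
).choices[0].message.content

# Pass 2: turn the model on its own output to stress-test the logic.
critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Act as a critical qual moderator. Where is this guide "
                   "leading, repetitive, or missing an angle?\n\n" + draft,
    }],
).choices[0].message.content

print(critique)
```

Either way, the output is a starting point for a researcher to edit, never a guide to field as-is.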

2. Transcription and Translation
This is the no-brainer win: instant transcription and immediate translation, helping you complete multi-market projects without days of waiting, and waiting. Even better, you can analyse responses in participants’ native language before translating, which preserves nuance far more effectively than translating first and analysing second.
For global qual, this has been transformative. What used to be slow and expensive is now fast and much more accessible.
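For the technically minded, here’s a minimal sketch of that native-language-first ordering, assuming the OpenAI Python SDK. The file name, prompts and model names are illustrative, and any other transcription or translation tooling could stand in.

```python
# Native-language-first pipeline: transcribe in the source language,
# analyse in the source language, and only then translate the findings.
# Model names, prompts and the file name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# 1. Transcribe the interview in the participant's own language (French here).
with open("interview_fr.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio, language="fr"
    )

# 2. Identify themes in French, so idiom and nuance survive the analysis.
themes = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Identifie les thèmes clés de cet entretien, avec des "
                   "citations à l'appui :\n\n" + transcript.text,
    }],
).choices[0].message.content

# 3. Translate the findings (not the raw data) for the wider team.
summary_en = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Translate this thematic summary into English, keeping "
                   "the supporting quotes in the original French:\n\n" + themes,
    }],
).choices[0].message.content

print(summary_en)
```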

3. Focused, Tactical Projects
AI performs best when the objective is clear, the scope is tight, the sample is focused, and the question is well defined. Think messaging and tone checks, or quick directional concept feedback. When the task is tight and specific, AI-led approaches can be not just useful but ‘good enough’.

Where AI still falls short
Now for the uncomfortable part. Where does AI struggle? In our view, it’s when the objectives are broad or ambiguous, the stakes are high (i.e. real decisions and actions are riding on the findings), you’re trying to uncover deep emotional drivers, or context matters more than surface patterns.
In these circumstances, there are three big watchouts.
1. Hallucinated Authority
AI doesn’t know what it doesn’t know. So it will confidently present:
- Misinterpretations
- Over-simplifications
- Fabricated nuance
The output sounds polished. The language is persuasive. But polished and persuasive are not the same as true. And if you don’t know the data inside out, you won’t catch the flaws.
2. The Averaging Problem
Humans pick up tension, contradiction, intensity, silence. What wasn’t said. But AI tends to smooth things out. It averages responses and equalises strength of opinion. It can struggle to distinguish between a passing comment and a deeply held belief. Yet in strategic work, those differences are everything.
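A toy example makes the point. Score two passionately opposed participants and one throwaway comment on a sentiment scale, then average:

```python
# The "averaging problem" in miniature: strongly opposed views and a
# passing comment collapse into an apparently neutral mean. The scores
# are hypothetical, on a -5 (strongly negative) to +5 (strongly
# positive) scale.
from statistics import mean

scores = {
    "Participant A (passionate advocate)": 5,
    "Participant B (vehemently opposed)": -5,
    "Participant C (passing comment)": 1,
}

print(f"Mean sentiment: {mean(scores.values()):.2f}")
# Mean sentiment: 0.33, i.e. "broadly neutral", which misrepresents
# every single participant.
```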
3. Context Blindness
Qualitative research is rarely just about what was said. It’s about who said it, why they said it, what else was happening, what else they said, what they didn’t articulate...
AI can summarise. It can cluster. It can extract quotes. But ‘reading between the lines’? That remains a very human skill.

So what’s the right model?
The answer isn’t AI or humans. It’s both, used with intention. Think of AI as an accelerator, a thought partner, a first-pass analyst. Luckily, it doesn’t get tired, and it doesn’t get offended if you don’t use its ideas!
And think of humans as the strategic interpreters, the nuance detectors, the context holders, the decision stewards.
When the decision is small, fast, and tactical, AI may be sufficient. But when the decision is strategic, high-stakes, business-defining, then human oversight isn’t optional.

The real question isn’t “Can AI do it?”
It’s “Should AI do it?”
Technically, AI can do lots of things: design projects, write screeners, moderate conversations, analyse transcripts, generate reports, make recommendations. But just because some claim AI can, that doesn’t mean it should.
You need researchers who know the data well enough to spot when something feels ‘off’. Because the most dangerous output isn’t the one that’s obviously wrong. It’s the one that sounds right.
Challenge Assumptions. Talk to Hummingbird Insights