
January 8, 2026

Why humanity still matters in qualitative research

A new year, a fresh notebook, and a world that feels… let’s say challenging.
Imagine building what you believe is a beautifully efficient recruitment system. Smart software, clear criteria, automated screening and fast decisions.
Then discovering it’s quietly rejecting some of your best candidates.
That’s not dystopian fiction. It’s exactly what the Harvard Business School study Hidden Workers: Untapped Talent found was happening across the US, UK and Germany. Surveying over 2,000 executives, the researchers reported that 88% believed qualified, high-skilled candidates were being filtered out because they didn’t match the algorithm precisely. For middle-skilled roles, that figure rose to 94%.
Not because the candidates weren’t capable, but because the system didn’t recognise them.
And here’s the important bit: this wasn’t a rogue AI problem. It was a human design problem. The algorithms were enforcing rules that people had created. Rigid criteria applied at speed and scale, without judgement, curiosity or context.
If you work in research, that should feel uncomfortably familiar.
The comfort of systems
There’s something reassuring about structure: criteria, scoring frameworks and dashboards. Applicant Tracking Systems became standard because they solved a real operational issue: volume. When hundreds of CVs land in hours, you can’t manually read each one in depth. So software filters on keywords, experience thresholds and formatting compliance, keeping evaluation logical and efficient.
But efficiency is not the same as accuracy. The moment you reduce a human being to a checklist, you lose everything that doesn’t sit neatly in a predefined box. The career break, the unconventional path, the experience that translates perfectly but uses different language.
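To make that concrete, here’s a minimal sketch of the kind of rigid screener the Harvard study describes. Every criterion, keyword and candidate detail below is invented for illustration; it’s not taken from any real ATS.

```python
# A deliberately rigid screener: exact keyword matches, hard thresholds.
# All criteria and data below are hypothetical, invented for illustration.
REQUIRED_KEYWORDS = {"stakeholder management", "crm"}
MIN_YEARS_EXPERIENCE = 5
MAX_EMPLOYMENT_GAP_MONTHS = 6

def screen(candidate: dict) -> bool:
    """Return True only if every rule passes. No judgement, no context."""
    text = candidate["cv_text"].lower()
    if not all(kw in text for kw in REQUIRED_KEYWORDS):
        return False  # different wording for the same skill fails here
    if candidate["years_experience"] < MIN_YEARS_EXPERIENCE:
        return False
    if candidate["longest_gap_months"] > MAX_EMPLOYMENT_GAP_MONTHS:
        return False  # a career break is an automatic rejection
    return True

# A capable candidate who writes "client relationship systems", not "CRM"
candidate = {
    "cv_text": "Led client relationship systems rollout; ten years of stakeholder management.",
    "years_experience": 10,
    "longest_gap_months": 9,  # a career break
}
print(screen(candidate))  # False: the system didn't recognise them
```

The candidate fails twice: once for describing the same skill in different words, once for a career break.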
The algorithm didn’t “fail”; it did exactly what it was designed to do. The flaw sat in the assumptions embedded within it.
That distinction matters enormously for our industry.

The same temptation exists in research
In market research, we face a quieter version of this every day.
It’s tempting to believe that if we ask enough structured questions, capture enough data points and build a clean enough dashboard, we’ll have the truth. It feels robust, scalable, presentable.
But quantitative research can only measure what you thought to include.
You can tell a client that 67% of visitors rated their experience positively. You cannot tell them why the remaining 33% felt underwhelmed unless you’ve actually listened to them.
You can show that satisfaction dipped among first-time attendees. You can’t understand whether that’s overwhelm, poor wayfinding, mismatched expectations or language barriers without conversation.
Data confirms patterns. Qualitative research explains them, and explanation requires judgement.
Where AI genuinely helps
Let’s be clear: AI is not the villain here. Used well, it’s transformative. It can process thousands of open-text responses in minutes, it can cluster themes, it can translate at scale, it can provide a first-pass coding structure that would once have taken days.
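As a rough sketch of that first-pass step, here’s what clustering open-text responses into candidate themes can look like. The tool choice (scikit-learn), the sample responses and the cluster count are all assumptions for illustration, not a description of any specific product:

```python
# First-pass theme clustering for open-text survey responses.
# Sample responses and cluster count are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Couldn't find the seminar theatre, signage was confusing",
    "Got lost twice trying to reach the main hall",
    "Great energy on the show floor, met brilliant contacts",
    "Networking was the highlight, so many useful conversations",
    "Queues at registration took forty minutes",
    "Waited ages just to collect my badge",
]

# Convert text to TF-IDF vectors, then group into rough themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for label, response in sorted(zip(kmeans.labels_, responses)):
    print(label, "|", response)
# The output is a starting structure, not insight: a researcher still has
# to read each cluster, name the theme, and judge what it actually means.
```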
That’s augmentation, and it’s valuable. But there’s a line between assistance and autonomy. AI can group comments by sentiment but it can’t reliably detect irony.
AI can identify repeated phrases but it can’t weigh emotional intensity.
AI can summarise what was said but it can’t interpret what was meant.
That final step, interpretation, is where experience lives. It’s where context, sector knowledge and instinct come into play. And that’s not a processing task; it’s a thinking task.
Particularly in events
In the events and exhibitions sector, this distinction becomes even more pronounced.
These are human experiences. They’re about energy, connection, decision-making confidence, reassurance. They’re about the moment someone decides to renew, or not. To return, or quietly drift away.
Those decisions rarely hinge on a single metric. A Net Promoter Score won’t tell you why a visitor felt overwhelmed. A post-show survey won’t reveal that an exhibitor’s frustration was actually about mismatched expectations set months earlier.
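For context, NPS collapses every answer to the “would you recommend us?” question into a single number: the percentage of promoters (scores of 9–10) minus the percentage of detractors (0–6). A quick sketch with invented scores shows how much that discards:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The scores below are invented for illustration.
scores = [10, 9, 9, 8, 7, 6, 5, 3, 10, 9]

promoters = sum(s >= 9 for s in scores) / len(scores)
detractors = sum(s <= 6 for s in scores) / len(scores)
nps = round((promoters - detractors) * 100)

print(nps)  # 20 - a single number, with no trace of why anyone scored low
```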
Those insights emerge in conversation, when someone says something slightly contradictory, pauses before answering, or uses a word that signals something deeper is going on.
If your methodology strips out nuance in favour of speed, you risk building conclusions that are tidy but incomplete.
And incomplete insight is dangerous, because it looks authoritative.
Human oversight isn’t optional
The Harvard researchers didn’t argue that technology should be abandoned. They argued that consequential decisions require human oversight, especially where automated systems are excluding or ranking people.
The parallel for research is direct.
Use technology where it accelerates processing, use human expertise where interpretation is required, and be explicit about which stage is which.
The real risk isn’t AI itself. It’s the quiet confidence that comes from algorithmic outputs presented as if they are finished insight.
Clean charts, crisp summaries and blind spots no one has thought to question.
The real objective
Research was never about data collection. It was about understanding people.
– What they value.
– What frustrates them.
– What would bring them back.
– What would push them away.
Understanding requires curiosity. It requires someone willing to ask a follow-up question. Someone comfortable sitting with ambiguity rather than rushing to resolution.
AI will continue to improve. It will get faster, sharper, more sophisticated. But curiosity, empathy and interpretive judgement are not technical bugs waiting to be solved. They’re human capabilities.
And in qualitative research, they’re not a nice extra.
They’re the methodology.
Posted by: Lisa Holt
Sources: Fuller, J., Raman, M., et al. (2021). Hidden Workers: Untapped Talent. Harvard Business School & Accenture.
Brookings Institution (2024). AI hiring tools and discrimination risk.
Gartner (2025). Candidate acceptance rate data, 2023–2025.




