
When respondents aren’t really respondents
August 5, 2025
AI is everywhere — from writing emails and generating headlines to pretending to be your next best friend on social media. But now it’s creeping into research panels and online surveys too, and that’s where things start to get a little… murky.
Let’s be clear: there’s a world of difference between using AI to analyse research and being tricked by AI pretending to participate in it. The former is a powerful tool; the latter is a threat to the very foundation of insight — real human understanding.
The Rise of the Artificial Respondent
As more research shifts online, fraudsters and opportunists have found new ways to game the system. AI language models can now complete surveys faster and more plausibly than any bored teenager in a darkened bedroom ever could. They can mimic human emotion, fabricate believable opinions, and even “learn” from previous questions to sound consistent.
Sounds impressive, right? Until you realise you might be building business strategies on a dataset full of digital ghosts.
What’s at Risk
AI-generated responses don’t feel. They don’t have lived experience, frustrations, or needs. They can’t tell you why they chose one brand over another or what made them smile, cry or rage at your latest campaign.
Using them contaminates more than just data quality — it distorts reality. You might think your audience loves something they’ve never even seen. You could make commercial or creative decisions based on the voice of something that doesn’t exist.
And because AI respondents can be trained to “sound” demographically diverse, they’re harder than ever to spot — which means traditional data-cleaning methods aren’t enough anymore.
Spotting the Fakes
A few tell-tale signs still exist:
- Overly articulate answers that read like a press release, not a person.
- Zero variance in response time — perfect pacing across a long survey.
- Contradictory logic — ticking “no children” then waxing lyrical about bedtime routines.
- Hyper-rational language — “I believe Brand X demonstrates superior alignment with my personal values.” (No one talks like that in real life.)
But as AI evolves, detection needs to evolve too. Techniques like digital fingerprinting, behavioural analysis, and open-ended text scoring are becoming vital.
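As a rough illustration, two of the tell-tale signs above, perfect pacing and contradictory logic, can be turned into automated checks. The sketch below is hypothetical (the field names, keyword list, and threshold are assumptions, not any panel provider's actual method):

```python
from statistics import pstdev

def flag_suspect(respondent, min_time_sd=2.0):
    """Return a list of reasons a survey respondent looks automated.

    `respondent` is a dict with (hypothetical) keys:
      - 'page_times': seconds spent on each survey page
      - 'answers': closed-question answers keyed by question id
      - 'open_text': free-text answer to an open-ended question
    """
    reasons = []

    # Zero (or near-zero) variance in response time: perfect pacing
    # across a long survey is a classic bot signature.
    times = respondent["page_times"]
    if len(times) >= 5 and pstdev(times) < min_time_sd:
        reasons.append("uniform pacing")

    # Contradictory logic: screener says "no children" but the open
    # text waxes lyrical about bedtime routines.
    text = respondent["open_text"].lower()
    if respondent["answers"].get("has_children") == "no" and any(
        phrase in text for phrase in ("my kids", "my children", "bedtime routine")
    ):
        reasons.append("screener contradiction")

    return reasons
```

In practice these rules would sit alongside the heavier techniques mentioned above (digital fingerprinting, behavioural analysis, open-ended text scoring); simple heuristics like these catch the lazy fakes, not the sophisticated ones.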
Why It Matters
Good research has always been about understanding people. When AI fills the seats meant for humans, we lose the beating heart of insight — the texture, nuance and imperfection that makes data meaningful.
As an industry, we’ve got to tighten recruitment, challenge sample providers, and demand transparency about data validation methods. Because if we don’t, “AI contamination” will quietly erode trust in our findings — and in research itself.
The Bottom Line
AI is brilliant when it’s our tool, not our respondent.
Used right, it helps researchers work faster, spot themes, visualise insight, and get to the “why” quicker. But when AI is in the sample rather than behind the scenes, it stops being a helper and starts being a hazard.
It’s time to get forensic about data quality — because the future of credible insight depends on keeping it human.
Posted by Lisa Holt