AEO Marketing Forums – Measuring Customer Satisfaction

It was great to catch up with some partners and clients at last week's AEO Forums; it's always a good focus for the start of another year.

With a loose customer experience theme running through the day, the Marketing Forum was not short of statistical references; after all, no one knows how well they're delivering customer experience unless they measure it. In fact, all but one of the audience were using NPS, or the Net Promoter Score® to give it its full moniker.

Of course, NPS is a very simplistic measure of customer loyalty, and simple is good, right? Well, yes and no (get comfortable, this isn't a quick explanation!).

You see, the problem I found while listening to various sessions was how many times I heard "because of X and Y, our NPS increased by…". Unfortunately, because NPS is a simplistic measure, it doesn't register your specific changes; it simply tells you whether the audience who respond to your survey criticise, remain impassive about, or promote their experience. It can (if the right follow-up questions are asked) highlight improvement opportunities, it can tell you what benefits you should be communicating to prospects, and it can provide invaluable marketing guidance. What it can't do is tell you which aspects of delivery are responsible for an upturn or improvement in your score; you can hypothesise, but ultimately it's a guess.

NPS, as an overall loyalty measure, can be heavily influenced by sample, and since the majority of event NPS measures come from self-selected online surveys these days, year-on-year score tracking is often subject to sample bias, which inherently impacts the NPS.

Essentially, you can't trust the score unless you check, balance and validate the sample – unfortunately, few do. 'So what,' you say, 'so long as our NPS is improving.' Well, what if you're missing a critical segment of your audience? What if you're only hearing from the people who can be bothered, i.e. those with polarised experiences – detractors and promoters? How useful is your customer experience metric now?

More concerning, and something we've heard a lot recently, is when NPS is aligned to appraisal objectives and bonuses. You see, the problem is that NPS, as a simplistic measure of loyalty, really isn't that simple.

Because two percentages are involved in the NPS calculation (the percentage of detractors is subtracted from the percentage of promoters), both percentages (even in a validated sample) are subject to margins of error, meaning the overall range of potential scores in any given sample can be wide.
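To make that concrete, here is a minimal sketch of the calculation, assuming the commonly used variance approximation for a promoter-minus-detractor difference (the article itself doesn't specify a formula). The 2016 figures from Figure 2 below (39% promoters, 35% detractors, 115 responses) are used as the worked example:

```python
import math

def nps(promoters: float, detractors: float) -> float:
    """NPS in points: % promoters minus % detractors."""
    return (promoters - detractors) * 100

def nps_margin_of_error(promoters: float, detractors: float,
                        n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an NPS estimate, in points.

    Uses the standard variance of the difference of the two shares:
    var = (P + D - (P - D)^2) / n, with P and D as proportions.
    """
    score = promoters - detractors
    variance = (promoters + detractors - score ** 2) / n
    return z * math.sqrt(variance) * 100

score = nps(0.39, 0.35)                       # +4 points
moe = nps_margin_of_error(0.39, 0.35, 115)    # roughly +/-15.7 points
print(f"NPS {score:+.0f}, 95% margin of error +/-{moe:.1f} points")
```

Note how wide the interval is at survey sample sizes typical for events: a headline score of +4 could plausibly sit anywhere from the low negatives to around +20.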

As a rough guide, even if you have a thousand responses (which many of you won't have, especially for exhibitor NPS), you only really know your score to within 10 points (+/-5 each way).

I’ve illustrated an example of this below:

(Figure 1)

Show X                                                    2015      2016
Survey sample                                              110       115
Audience size (visitors/exhibitors)                        500       510
Response rate                                              22%       23%
Confidence interval (unvalidated sample), 95% level    +/-9.3%   +/-9.1%
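The intervals quoted in Figure 1 look consistent with the textbook worst-case formula for a single proportion (p = 0.5, 95% level, no finite-population correction); a quick sketch, assuming that formula, reproduces them:

```python
import math

def proportion_moe(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """95% margin of error for a sample proportion, in percent.

    Worst case p = 0.5 maximises p*(1-p), giving the widest interval.
    """
    return z * math.sqrt(p * (1 - p) / n) * 100

for year, n in (("2015", 110), ("2016", 115)):
    print(f"{year}: n={n}, margin of error +/-{proportion_moe(n):.1f}%")
# 2015 gives roughly +/-9.3%, 2016 roughly +/-9.1%, matching Figure 1
```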

(Figure 2)

              2015    2016
Promoters      36%     39%
Passives        9%     26%
Detractors     55%     35%
NPS           -19      +4

So looking at Figure 2, on the face of it you have a 23-point improvement in NPS from 2015 to 2016 (from -19 to +4). Congratulations, right?! Well, yes, but not for the reason you might imagine. With samples of this size the scores are within their margins of error, meaning the NPS upturn is not statistically significant. What is significant is the reduction in detractors year-on-year (down 20 percentage points, or 36% in relative terms).
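One way to sanity-check this, assuming standard normal-approximation tests (the article doesn't state which method was used): test the NPS change with the NPS variance formula, and the detractor change with a pooled two-proportion z-test. On the Figure 2 numbers, the NPS change falls just short of the 1.96 threshold while the detractor drop clears it comfortably:

```python
import math

def nps_change_z(p1: float, d1: float, n1: int,
                 p2: float, d2: float, n2: int) -> float:
    """z statistic for a change in NPS between two independent samples."""
    s1, s2 = p1 - d1, p2 - d2
    var = (p1 + d1 - s1 ** 2) / n1 + (p2 + d2 - s2 ** 2) / n2
    return (s2 - s1) / math.sqrt(var)

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Pooled two-proportion z-test, e.g. detractor share year-on-year."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Figure 2 values: 2015 (n=110) vs 2016 (n=115)
z_nps = nps_change_z(0.36, 0.55, 110, 0.39, 0.35, 115)
z_det = two_proportion_z(0.55, 110, 0.35, 115)
print(f"NPS change:       z = {z_nps:.2f} (|z| < 1.96: not significant)")
print(f"Detractor change: z = {z_det:.2f} (|z| > 1.96: significant)")
```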

While it's still positive in this instance, reverse it and how do you feel? Your score looks like it's gone down significantly, or it's just remained static – but has it really? You've worked your backside off on this event, you've listened to audience feedback, you're confident you understand their needs, and you've delivered what you truly believe is a fantastic event. There doesn't seem to be any rhyme or reason to the difference in scores, so how do you justify it?

That's why you need to understand more than the very 'simple' NPS calculation: you need a robust and reliable sample, you need to minimise sample bias wherever possible, and you need to set targets and objectives that are cognisant of the error margins.

And ultimately, while NPS is a very good 'simple' measure of customer loyalty, it's simply not enough on its own to provide a reliable understanding of the customer experience – what works and what doesn't.

If you want to know how best to do that, get in touch – we'd love to share our experience.

Posted by Lisa
