AEO Marketing Forums – Measuring Customer Satisfaction

It was great to catch up with some partners and clients at last week's AEO Forums; it's always a good way to focus at the start of another year.

With a loose customer experience theme running through the day, the Marketing Forum was not short of statistical references. After all, no one knows how well they're delivering customer experience unless they measure it. In fact, all but one of the audience were using NPS, or the Net Promoter Score® to give it its full moniker.

Of course, NPS is a very simplistic measure of customer loyalty, and simple is good, right? Well, yes and no (get comfortable, this isn't a quick explanation!).

You see, the problem I found while listening to various sessions was how many times I heard "because of X and Y, our NPS increased by…". Unfortunately, because NPS is a simplistic measure, it doesn't register your specific changes. It simply tells you whether the audience who respond to your survey criticise, remain impassive about, or promote their experience. It can (if the right follow-up questions are asked) highlight improvement opportunities, it can tell you what benefits you should be communicating to prospects, and it can provide invaluable marketing guidance. What it can't do is tell you which aspects of delivery are responsible for an upturn or improvement in your score; you can hypothesise, but ultimately it's a guess.

NPS, as an overall loyalty measure, can be heavily influenced by the sample, and since the majority of event NPS measures now come from self-selected online surveys, year-on-year score tracking is often subject to sample bias, which inherently distorts the NPS.

Essentially, you can't trust the score unless you check, balance and validate the sample – unfortunately, few do. 'So what,' you say, 'so long as our NPS is improving.' Well, what if you're missing a critical segment of your audience? What if you're only hearing from the people who can be bothered, i.e. those with polarised experiences – detractors and promoters? How useful is your customer experience metric now?

More concerning, and something we've heard a lot recently, is when NPS is tied to appraisal objectives and bonuses. You see, the problem is that NPS, as a simplistic measure of loyalty, really isn't that simple.

Because two percentages are involved in the NPS calculation (detractors subtracted from promoters), both percentages (even in a validated sample) are subject to margins of error, meaning the overall range of potential scores in any given sample can be wide.

As a rough guide, even if you have a thousand responses (which many of you won't have, especially for exhibitor NPS), you only really know your score to within 10 points (+/-5 each way).
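That rough guide can be sanity-checked with a few lines of code. This is a minimal sketch, assuming each response is scored +1 (promoter), 0 (passive) or -1 (detractor) so the standard multinomial variance formula applies; the function name and the example split of 1,000 responses are illustrative, not from the original post.

```python
import math

def nps_margin_of_error(promoters, passives, detractors, z=1.96):
    """95% margin of error (in NPS points) for a sample of responses.

    Each response scores +1 (promoter), 0 (passive) or -1 (detractor);
    NPS is the mean of those scores x 100, so the per-response variance
    is (p + d) - (p - d)^2, where p and d are proportions.
    """
    n = promoters + passives + detractors
    p, d = promoters / n, detractors / n
    variance = (p + d) - (p - d) ** 2
    return z * math.sqrt(variance / n) * 100

# With 1,000 responses split fairly evenly, the margin is roughly +/-5 points:
moe = nps_margin_of_error(promoters=350, passives=350, detractors=300)
print(round(moe, 1))  # -> 5.0
```

With fewer passives the variance (and so the margin) is larger still, which is why event-sized samples are so noisy.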

I’ve illustrated an example of this below:

(Figure 1)

Show X                                                    2015     2016
Survey sample                                              110      115
Audience size (visitors/exhibitors)                        500      510
Response rate %                                            22%      23%
Confidence interval (unvalidated sample) at 95% level  +/-9.3%  +/-9.1%
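The confidence intervals in Figure 1 are consistent with the standard worst-case formula for a single proportion (p = 0.5, 95% level, no finite-population correction). A short sketch, purely to show where the +/-9.3% and +/-9.1% figures come from:

```python
import math

def confidence_interval(sample_size, z=1.96):
    """Worst-case 95% confidence interval (in %) for a proportion, p = 0.5."""
    return z * math.sqrt(0.25 / sample_size) * 100

print(round(confidence_interval(110), 1))  # 2015 sample -> 9.3
print(round(confidence_interval(115), 1))  # 2016 sample -> 9.1
```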


(Figure 2)

            2015   2016
Promoters    36%    39%
Passives      9%    26%
Detractors   55%    35%
NPS          -19     +4


So, looking at Figure 2, on the face of it you have a +23-point improvement in NPS from 2015 to 2016 (-19 to +4). Congratulations, right?! Well, yes, but not as you might imagine. With samples of this size the scores are within their margin of error, meaning the NPS upturn is not statistically significant. What is significant is the reduction in detractors year-on-year (down 20 percentage points, or 36% in relative terms).
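Both halves of that claim can be checked numerically. The sketch below assumes responses are scored +1/0/-1 as before and uses the Figure 2 proportions with the Figure 1 sample sizes; the pooled two-proportion z-test for the detractor drop is a standard technique, not something specified in the original post.

```python
import math

Z95 = 1.96  # two-sided 95% level

def nps_standard_error(promoters_pct, detractors_pct, n):
    """Standard error of NPS (in points), treating responses as +1/0/-1."""
    p, d = promoters_pct / 100, detractors_pct / 100
    variance = (p + d) - (p - d) ** 2
    return math.sqrt(variance / n) * 100

# Figure 2 proportions with Figure 1 sample sizes
se_2015 = nps_standard_error(36, 55, 110)
se_2016 = nps_standard_error(39, 35, 115)

# Margin of error on the year-on-year NPS difference (+23 points observed)
diff_moe = Z95 * math.sqrt(se_2015 ** 2 + se_2016 ** 2)
print(round(diff_moe, 1))  # -> 23.5, so the +23 swing sits within the noise

# Pooled two-proportion z-test on detractors: 55% of 110 vs 35% of 115
pooled = (0.55 * 110 + 0.35 * 115) / (110 + 115)
se_det = math.sqrt(pooled * (1 - pooled) * (1 / 110 + 1 / 115))
z = (0.55 - 0.35) / se_det
print(round(z, 2))  # -> 3.02, well beyond 1.96: the detractor drop IS significant
```

In other words, the headline score moved by less than its own error margin, while the underlying detractor shift comfortably clears the significance bar.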

While it's still positive in this instance, reverse it and how do you feel? Your score looks like it's gone down significantly, or it's just remained static – but has it really? You've worked your backside off on this event, you've listened to audience feedback, you're confident you understand their needs, and you've delivered what you truly believe is a fantastic event. There doesn't seem to be any rhyme or reason to the difference in scores, so how do you justify it?

That's why you need to understand more than the very 'simple' NPS calculation: you need a robust and reliable sample, you need to minimise sample bias wherever possible, and you need to set targets and objectives that are cognisant of the error margins.

And ultimately, while NPS is a very good 'simple' measure of customer loyalty, it simply isn't enough on its own to provide a reliable understanding of the customer experience – what works and what doesn't.

If you want to know how best to do that, get in touch, we’d love to share our experience.

Posted by Lisa
