Time to junk the satisfaction survey

Why your customer strategy should focus on complaints

I was half listening to the news the other day when I heard a story about a couple who were selling their new-build house for less than they paid for it “because it was sh*t”. Apparently, the poor people had got so fed up with the number of things wrong, and so wearied by their constant complaints to the builders, that they decided to put it on the market. Presumably they’d felt that the loss they’d realise on the sale would be less than the stress of continuing to live there.

I can’t find the reference to the story – there are only so many times you can type four-letter words into Google – so I don’t know who the builders in question were, although they did issue a statement to say that the customers had received compensation and repairs in line with the contract they had signed. Whoever they were, though, they missed a massive opportunity. In fact, they should have paid the couple handsomely for taking the time to complain, because

complaints are the best customer feedback you can get.

So wrong, it’s right

Most companies I know invest time and money in finding out what their customers think of them. The problem with this “voice of the customer” approach is that, when applied to all your customers, it just produces an average view – i.e. what keeps most of them satisfied. Even the sainted Net Promoter Score approach frequently fails to ask the net promoters why they are so enthusiastic about the company.

I think you could just junk most customer satisfaction surveys, because the surefire way to get meaningful feedback is to make it easy for the customer to provide it, positive or negative.

Think about it: customers who love you or who have just had amazing service will want to tell you about it, and those who have had a terrible experience will be just as keen to do the same.

“That’s all very well,” I hear you say, “but what about the ones who don’t complain and just leave?”

To which my reply is: too bad – you probably didn’t act on the feedback from your complainers earlier – and those middling-dissatisfied customers would be unlikely to respond to your customer-wide survey.

Inconvenient truths

I’ve noticed that organisations generally don’t give as much attention to complaints and negative feedback as they should, and I think the reason for this is that there’s a bias – reassuringly, it’s a human trait – towards good news. We’re hard-wired to create a story around the way we want things to be rather than how they actually are, or indeed, how they appear to someone else.

Customers who fail to fully appreciate the products and services we’ve spent a massive effort perfecting are an inconvenience, a distraction from the narrative we’re trying to create, even when their lack of appreciation is down to something we’ve failed to deliver.

Our bias, therefore, is towards those customers who fit the norm. Unfortunately, they are not the source of innovation and improvement.

Just about managing

Dealing with customers who don’t fit the norm – i.e. the massively dissatisfied ones – is the domain of complaints management, which is a frequently neglected and under-resourced area. The overriding attitude is to get the complaint dealt with as efficiently as possible, making sure the customer isn’t over-compensated along the way. Sure, you need to provide redress and to put things right, but the opportunity is often missed to find out what the customer’s desired outcome was and to do what you could to deliver that outcome, not just the initial product or service.

For example, the hapless couple in the new-build house probably wanted something more than a non-leaking roof over their heads (although apparently even this was beyond their builder’s capability), they wanted a home. And whilst this isn’t an unusual outcome, I bet the building firm didn’t have the nous to sit down with their customers and find out what it was about a home that they wanted – their unspoken needs if you like. Finding and delivering against these, rather than ineptly repairing an initial botched job, would have created delighted customers.

Complaints at the core

Building a customer strategy around complaints is the efficient, if counter-intuitive, route to increasing the number of delighted customers. The elements of such a strategy should answer the following questions:

  • How easy do we make it for customers to provide feedback? (Answer: it should be very easy, and through multiple channels, including social media.)
  • How do we resource those channels?
  • How far do we empower front line people to sort out complaints focusing on customer outcomes?
  • How do we learn from feedback?
  • How do we use the learning to be able to anticipate and deal with complaints before they happen?
  • Who is responsible for driving through the improvements that result from root cause analysis?

Focus your strategy on answering these questions and you can drop your customer satisfaction surveys.

Net Promoter Score – what’s the point?

It all depends on the context

An unwanted set of medical visits last week resulted in an equally unwanted set of follow-up texts.

My local hospital trust “would like me to think about your recent experience in the Emergency Department. How likely are you to recommend us to your friends and family if they needed similar care or treatment? Reply 1 for Extremely likely, 2 for Likely, 3 for Neither likely nor unlikely, 4 for Extremely unlikely or 6 for Don’t know. Please reply today, your feedback is anonymous and important to us and helps us to improve our service…”

There was no follow-up question in their survey – clearly they just wanted a number.

My GP’s surgery then did exactly the same.

Yes, the much-touted and widely-discussed Net Promoter Score (NPS) was at work again!

Well, actually, the experience and the care in both cases were great, but I didn’t reply, because the context of the experience means that NPS has no significance in isolation. If I had responded with an 8 but had no opportunity for follow-on comment, how could they react? If the hospital looks at its scores, how can it do anything meaningful unless it has some view of which aspect of my experience was not great in my opinion?

No choice

It got me thinking: what do people use NPS for? Picture the scene if you can: someone close to you is suddenly taken ill. The LAST thing you are going to do is say “Hmm, let’s take you to XYZ Hospital, they have a really great patient experience and I’d heartily recommend it!”

If you lived in my neck of the woods you would only have one choice in an emergency, assuming it didn’t require an ambulance: the nearest hospital. And that’s in London – an area not short of “competitor” hospitals; elsewhere you most likely wouldn’t have a choice.

Similarly, signing up for a GP is not like having a bank account or a phone service: you tend to sign up long-term and don’t like to switch unless you move house. You might recommend individual doctors within a practice to your nearest and dearest depending on your experience but that’s not the question.

I asked a GP friend of mine who’d moved from my surgery to another practice whether they were using a similar measure. “Oh yes,” she said, “we do the scoring as specified and then we have to send the results to the Department of Health.” As far as I can tell there is no follow-up or any expectation to do anything differently. The score was being used as little more than a traffic light to gauge whether the surgery was performing above a minimum threshold.

So, what’s the point of NPS?

Despite its misapplication in parts of the National Health Service, the measure is partially useful, but it does not deliver quite the impact it claims (a sketch of the underlying arithmetic follows this list):

  • If I have a great experience from Supplier A where various competitors are readily available, I’ll form an emotional attachment to the supplier that provided it. I might quite like Supplier A, but part of the attachment is based on confidence that they can do the job and trust that this will happen consistently.
  • I’ll be much more likely to tell someone that I recommend Supplier A, and much of the time I will give them a 9 or 10. In this scenario, NPS is working accurately.
  • I might be using NPS after a visit to a retailer. If I got what I wanted, and the assistant smiled at me nicely, then I would be more than happy to give a nice round 10. Then I would most likely forget I had ever been there and never raise it in conversation again. Here the scoring system is not working so well.

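For anyone unfamiliar with how the headline number is produced, here is a minimal sketch of the standard NPS arithmetic: respondents scoring 9–10 count as promoters, 7–8 as passives, and 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The function name and sample scores below are invented for illustration.

    def net_promoter_score(scores):
        # Standard NPS buckets: 9-10 promoters, 7-8 passives, 0-6 detractors.
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        # NPS = % promoters minus % detractors, so it ranges from -100 to +100.
        return 100 * (promoters - detractors) / len(scores)

    # Four promoters and two detractors out of eight responses gives 25.0.
    # Note that a forgettable "nice round 10" counts exactly the same as
    # a 10 from a genuinely attached customer.
    print(net_promoter_score([10, 10, 9, 8, 7, 6, 10, 3]))  # 25.0
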
Because it’s focused on measuring my reaction to specific events, NPS is not a complete picture. The experience I have had needs to be seen as part of a journey towards a particular outcome. To use my recent healthcare example, that journey won’t be complete until I have had a follow-up appointment and further treatment, if needed – a process involving referral and booking into the appropriate clinic. My satisfaction (not likelihood to recommend) won’t be determined until my desired outcome – good health, reassurance about future health concerns – is achieved.

And it still won’t involve me recommending any form of medical treatment, no matter how great the experience is.

Building on success

At NextTen we find it’s much more helpful to talk about customer success, which we define as a combination of fulfilling the customer’s desired outcome and providing a good experience. Using these two dimensions we can build a customer advocacy matrix. High-advocacy companies combine a great customer experience with great outcome delivery, although it’s possible to achieve business success with an OK or even below-average experience as long as you deliver the customers’ desired outcomes, as low-cost airlines continue to prove:

  • Ryanair and Spirit: poor customer experience but great profitability.
  • Kingfisher Airlines: great experience but went bust!

Context is everything

NPS can certainly tell you if you’re in the high-advocacy quadrant of the matrix, but you’ll need additional qualitative data to understand why you’re there – or, if you’re not, where you need to improve. And if your market context isn’t one with high levels of customer choice or switching, you would be better off measuring something meaningful, like the number of, and reasons for, customer complaints.

Gender pay gap: a blunt instrument is better than no instrument

Obsessing about what’s right obscures the real issue

Measuring the “right” things is talked about in almost every company. Unless we agree with what has been measured and how, we howl “this is too subjective” and then dispute some of the core findings. The rush to meet the 4th April deadline for UK companies to report on their gender pay gap is a case in point.

Criticism has come in from many quarters that the measures created an unnecessary burden on employers and failed to tell us much beyond what we already knew – that gender pay inequality exists.

If you look a little deeper, though, there are findings which are valuable and actionable:

  • The headline is that men are paid on average 9.9 per cent more than women, but with significant disparity between companies and industries. Some companies, like Google, were quick to claim that no gap existed, while others, like HSBC, which claims equal opportunity as a core value, emerged as among the worst offenders.
  • The trend in closing the gap is slow: at the current rate, pay parity will not arrive until 2048 unless something changes.
  • Interestingly, some reports demonstrated a poor understanding of statistics, with 38 companies reporting no difference between median and average pay – something that is statistically highly unlikely, as the sketch after this list illustrates.

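To see why identical median and mean figures are a red flag, here is a minimal sketch with invented salary figures. Pay distributions are typically right-skewed – a few high earners pull the mean above the median – so the two values almost never coincide; the reported figures are percentage gaps rather than raw salaries, but the same logic applies.

    import statistics

    # Invented salaries for a small firm, illustration only.
    # A handful of high earners skews the distribution to the right.
    salaries = [22_000, 24_000, 26_000, 28_000, 31_000, 45_000, 120_000]

    print(statistics.median(salaries))  # 28000 - the middle earner
    print(statistics.mean(salaries))    # ~42285.71 - pulled up by the top earner
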
It has to be said the figures are pretty depressing, but what they will do is put pressure on companies to act on change in a way that has not happened before. There is a very good chance that a significant improvement in pay parity will be achieved long before 2048 – so even if, on reflection, the initiative could have been delivered better, it has gone further to establish change than many.

So wrong it’s right

The debate illustrates one of the key problems with measurement in business: almost all measures, other than those on the balance sheet, can be subject to debate and accusations of subjectivity.

This is particularly acute in customer experience. I’ve used all sorts of measures in my time – various types of customer satisfaction scale and, of course, Net Promoter Score – and the only conclusion I can come to is that none of them predict with absolute certainty how customers will behave.

But even if accurate prediction is impossible, correlation can give you some clues about likely customer behaviour. This is both the flaw and the virtue of Net Promoter Score: Frederick Reichheld’s research correlated high NPS with high financial performance, but that doesn’t mean that in all circumstances you can generate the same increase in performance simply by raising NPS. However, the correlation indicates a general tendency for high-NPS companies to do well, so if your NPS is lower than your peer group’s, there’s something you need to pay attention to, and you need to drill down into the root causes of customers’ reluctance to promote your company.

Accountability

This is like gender pay gap reporting. Any company with a gender pay gap will appear to be under-rewarding female employees, but a drill-down into the reasons why will expose the factors that cause this. Some – such as the likelihood of women taking more career breaks for childcare – may be seen to be outside the company’s control, but the link between this factor and the disparity in pay should force a debate about what the company could do, for example, to make it easier for anyone returning from a career break to make the same, or an improved, contribution as they made before the break, and to ensure that it is rewarded fairly.

Whether you are talking about gender pay gaps or the gap in your customers’ experience, the essential thing is to have people accountable for the changes in what the organisation does and the improvement in the associated measures.

In the customer experience world, I have seen too many organisations where the customer experience leads have the responsibility for the measures, but insufficient accountability – whether shared or individual – exists for their improvement.

It’s wrong that women should be paid less than men for the same job. It’s wrong that people should not be seeing year-on-year improvements in customer experience. Both are fixed by adopting measures and clear accountability.


Dog days: when brands bite

I like the occasional beer, and I like brands that position themselves as something a bit different, so it was disappointing to read of the contortions that self-styled punk brewers Brewdog went through when their solicitors asked Birmingham pub The Wolf to change its original name – The Lone Wolf – as it conflicted with the brewer’s new spirits brand of the same name.

Brewdog’s actions sit uncomfortably
