I’ve just had a great service experience with BT and, since more than 10 years have passed since I was responsible for their customer service strategy, I’m not blowing my own trumpet by praising them. It made me realise that when you get great service it’s sometimes unremarkable. In this case, a better understanding of customer outcomes could have moved it from great to outstanding.
My broadband connection has been dropping out quite a bit recently, and it’s sometimes not easy to tell – the three lights on the router remain optimistically blue whenever this happens – so I decided to see what BT could do about it. A visit to the site showed that there were some faults in the network, but not in my area, so I ran the offered diagnostic tests. The site cheerily advised me that I should ‘make a cuppa’ while the tests were running, but they finished before I could decide whether I actually wanted one. The (rather opaque) tests told me that I should contact the helpdesk, and an online chat then ensued. After the usual slightly annoying request for account details (I’m already logged in…) and some details of my router model, more mysterious diagnostics were run, which showed I have a faulty router. A new one was duly ordered and should be with me in a couple of days.
Assuming this solves the problem, this was a great experience but it illustrates that great service is sometimes rather unremarkable – in this case I raised an issue and it was dealt with.
But, thinking about it, how good was BT’s service in this instance, and could it have been better?
It all depends on what question you ask…
If you were to ask me whether I was satisfied with the service I received, I would answer very satisfied (amazingly I haven’t been asked: most other sites don’t let you move without suggesting you might have 10 minutes spare to give them your opinions). If you asked me, on a scale of 1 to 10, how likely I would be to recommend BT to my friends, family etc, I would be unlikely to give the magic 9 or 10 that would make me an advocate. The reasons why NPS can be a dubious measure of success are something I will cover in my next post but, in this case, it’s a fair reflection: I’m not that keen on a broadband service that’s been less than reliable for the last few weeks.
And here’s the issue: we need to focus on desired customer outcomes to really understand how to provide an outstanding service. This is the approach I use in my customer strategy work and my own example shows how it can throw up opportunities that more convergent problem solving might miss.
If my desired outcome was ‘fix my faulty router’ then BT has performed well. Arguably it could have rushed a new box to me immediately, but the fault is intermittent and I can survive for a couple of days, so my threshold of satisfaction is pretty low and I have no need for outstanding service. And if I’m being nit-picky, the time it took me to resolve the issue was the best part of an hour, although I could have had several cuppas and multi-tasked while the chat was going on.
However, my real customer outcome is 100% available broadband or, more specifically, the services – email, web, video, etc – that ride on the back of that, some of which are critical to my and my wife’s businesses. On the basis of that outcome – or at least something very close to it – BT has demonstrably failed to deliver.
I’m wondering, in this instance, whether the diagnostics could run automatically every few days: it would have been outstanding service if BT had contacted me out of the blue to say that it had detected my router was underperforming and a new one was on its way to me. Whilst this would cost something to develop and run, it would save both customer and adviser effort and drive up satisfaction and net promoter scores.
Once my broadband is back up to speed I’ll see what they think of this idea. Meanwhile I’m left thinking that more businesses need to get a better understanding of their customers’ outcomes to drive a superior experience.