Summary: Customer satisfaction is a central pursuit for any company, and yet we have few clear tools to manage it. In this article we look at the usual suspects from the perspective of our web-empowered customers. While these tools do help, we urgently need to expand their scope, cut through the consultant-speak and look for specific opportunities.
Customer satisfaction has rightly been established as a necessary competitive advantage that helps us retain customers, and it has been supported by concepts like customer satisfaction surveys, Net Promoter Score and Customer Effort, which we can summarize in the following table:
| | Customer Satisfaction Survey | Net Promoter Score | Customer Effort |
|---|---|---|---|
| What it does | Post-sales customer survey. | Subtracts customer "promoters" from "detractors". | Solves specific customer service problems, focusing on resolving post-sales problems efficiently. |
| Advantage | Gives us a service benchmark against which to improve. | Only requires one or two questions. | Lets us focus on improving specific problems. |
| Disadvantage | Questions may not cover what the customer really cares about; answers might be biased. | Needs to be complemented with more questions; doesn't tell us where to improve. | Applies only to customers who have contacted customer service, not the entire universe of clients. |
When we talk about customer satisfaction we usually mean surveys we ask customers to fill in after a sale, which we can then aggregate and track over time to see how we're doing. The problem is that the surveys themselves can be a nuisance for customers (for instance, if they're long); customers may give meaningless answers just to get away as quickly as they can; questions can be interpreted differently from what we meant (a customer might not have the heart to give the overall experience a bad mark for fear of harming the sales staff, or on the contrary might give them poor marks for something beyond their control); or the questions might simply not make sense to them:
- Customer rates his overall satisfaction 8 out of 10.
- Next question in the survey: Why did you not rate us 10 / excellent?
- Customer thinks to himself: "Because perfection is the purview of the gods alone, so no one ever gets a 10." We could normalize such scores, but other customers might be perfectly willing to give a 10.
The first lesson is that we need to adapt our customer satisfaction questions, and how and when they are asked, because they are themselves part of our customer's experience: do customers feel that we really care, or that we are just going through the motions? Do the questions capture the aspects they care about? Is responding an additional hassle?
From our customers’ perspective their experience has expanded thanks to the web, so it refers to aspects beyond the scope of the traditional satisfaction survey and the control of our sales staff. Our customers now find answers to their questions through other consumers on the web, while ignoring our advertising and marginally remembering periodic sales and promotions. When asking a question (does the color wear off quickly?) they get answers from their peers, which are not necessarily right but point to a much bigger experience through which our customer is discovering our product’s attributes, quality and value. As I have argued before, it is in our own interest not just to respond to our customer, but to lead them through this wider and deeper experience, so they can achieve an overall better result and a productive and even fun experience at every step of the way.
The second lesson is that we need to respond to these new expectations through the web, become another trusted peer in our customers’ network, and help our sales staff be in sync. With this expanded definition of our customers’ experience we then need to adapt our notion of their satisfaction.
The Net Promoter Score (NPS) introduces the concept of a much shorter survey, usually consisting of one or two questions, such as "would you recommend us?", which seek to summarize our customer's overall experience; it has the advantage of addressing some of the concerns with longer surveys. We can then choose to focus on our "promoters" and/or "detractors", which seems to play well with our customers' interaction with our product through the web, particularly the idea of our brand's fans.
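The arithmetic behind NPS is simple enough to sketch in a few lines of Python. Note that the 9–10 "promoter" and 0–6 "detractor" cutoffs below are the standard NPS convention, not something this article specifies:

```python
# Minimal sketch of the Net Promoter Score calculation.
# Assumed convention: on a 0-10 "would you recommend us?" scale,
# ratings of 9-10 are promoters, 0-6 are detractors, 7-8 are passives.

def net_promoter_score(ratings):
    """Return NPS as a percentage: % promoters minus % detractors."""
    if not ratings:
        raise ValueError("no ratings given")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

Scores range from -100 (all detractors) to +100 (all promoters); note that the passives drop out of the numerator entirely, which is one reason the single number hides so much.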
The first problem with NPS is that it doesn’t tell us where to improve, which leads us to ask more questions, which in turn takes away its advantage over normal surveys.
The second problem is that while it sounds nice to focus on our “promoters”, “detractors” are a lot more vociferous on the web, so unless we have a plan to deal with them we might find ourselves putting out many, many fires.
The third problem is that satisfied customers are not necessarily promoters, much less fans of our brand. Brands with real fans (as opposed to fake Facebook "likes") took years to cultivate them, and that relationship might not be as profitable as we would like: while Apple is a prime example of hard-core fans with its Mac computers, their numbers were so low compared to other brands that the company almost died several times, and resurrection only came thanks to a much wider audience reached through iPods, iPhones and iPads: people who are generally fans of the brand, if not quite as rabid as the Mac crowd.
The third lesson is that while a simple, easy to grasp indicator such as NPS can help us see which way the wind is blowing, it doesn’t tell us where to invest, and may even derail our efforts.
Customer Effort introduces an interesting concept with respect to Customer Service: instead of trying to exceed our customers' expectations, we should deal with the specific issues they struggle with and spend the most effort trying to solve, quickly and efficiently. This has the advantage of telling us where to invest to improve customer satisfaction.
The problem with the Customer Effort approach is that its power as a predictor of customer loyalty only applies to post-sales Customer Service, not to the customer's entire experience: as a general rule we do want to exceed our customers' expectations when they are looking for our product, which is a different mindset from solving post-purchase problems efficiently; meeting or exceeding those expectations might indeed prevent problems from occurring in the first place.
The fourth lesson is that if we expand the concept of customer effort to the entire customer experience as seen through the web we might find something even more useful: how much effort does our customer have to spend to find out which options are best for him? What works best for his budget? Which product attributes are more important? It’s not so much a series of problems as a series of opportunities to improve our customer’s overall result, his experience at every step of the way, and his satisfaction; this would have the added advantage of letting us know precisely where to invest.
To improve our customers’ satisfaction and hence attract and retain more of them we need to understand that their expectations are changing as their experience is now wider and deeper thanks to the web; that is to say, thanks to their interaction with other consumers, through which they are discovering our product’s attributes and what is best for them; and it is through the web itself that we need to engage our customers to improve their satisfaction. The web can help us complement our sales staff’s efforts and help them be in sync with our customer’s new expectations.