Calculating the Value of Improving CX in a Product Business
This is an essential step in justifying CX improvement projects
You’re tearing your hair out. It should be screamingly obvious to everyone that the company must invest in CX. What is wrong with people?
But it is sometimes difficult to produce the figures. You’ll need them to justify the company’s investments in projects that improve customer experience.
We have a way of simplifying at least part of that discussion, namely the calculation of how much it is worth to improve customer happiness. It is not all that is needed, but it is a very good start.
The method should work for most CX measurement systems; we use NPS in this example.
So relax a bit.
What sort of NPS are we talking about?
For the calculation that follows it is critical to know the identity of the customers that we are measuring. We don’t mean their name; any unique identifier will do. This is one of the rare situations where double-blind customer research is not useful.
The reason is that the calculation method requires us to know what Customer X, who provides particular satisfaction ratings, actually does in terms of purchasing. In product businesses, and especially in B2C situations, you should be able to find this information relatively easily.
Why is this procedure not used more often? We don’t know.
The NPS numbers that work for the calculation are those that represent a significant proportion of the overall customer experience. In e-commerce, for example, customer feedback given just after order confirmation would work. So would feedback obtained several weeks after the order was delivered.
NPS ratings from contact centers are not useful, as most customers probably never need support. (If all of your customers need to phone for help, you probably have deeper issues.)
Unique customer ID needed
You need to be able to match customers between your survey system and your ordering system. You must be able to see whether a particular customer has ordered only once or multiple times. If you also have data on order value, that is helpful but not essential.
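As a concrete illustration, here is a minimal sketch in Python with pandas of how the matching could be done. The file and column names (survey_responses.csv, orders.csv, customer_id, nps_score, order_id, order_value) are assumptions for the example, not the names your own systems will use.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
surveys = pd.read_csv("survey_responses.csv")   # columns: customer_id, nps_score
orders = pd.read_csv("orders.csv")              # columns: customer_id, order_id, order_value

# Count orders (and total spend) per customer, then flag repeat customers.
order_history = (
    orders.groupby("customer_id")
    .agg(order_count=("order_id", "nunique"), total_value=("order_value", "sum"))
    .reset_index()
)
order_history["is_repeat"] = order_history["order_count"] > 1

# Match survey respondents to their ordering history on the shared customer ID.
matched = surveys.merge(order_history, on="customer_id", how="inner")
```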
The premise of the calculation is simple: unhappy customers are less likely to order multiple times. Unless your measurement system has been biased in some way, your own results should confirm this logic right away.
Segment by NPS category
To make the results easy to communicate we suggest doing the calculations by NPS category. The number we are looking for is the proportion of customers who place repeat orders, broken down by Promoter, Passive and Detractor.
Here is an example we adapted from a real-world e-commerce case. The company sent the feedback request just after order confirmation. They had a 32% response rate, with 4,196 survey responses.
| | New | % New | Repeat | % Repeat | Total |
| --- | --- | --- | --- | --- | --- |
| Promoters | 1,079 | 37% | 1,869 | 63% | 2,948 |
| Passives | 326 | 58% | 238 | 42% | 564 |
| Detractors | 279 | 63% | 167 | 37% | 446 |
| Total | 1,684 | | 2,274 | | 3,958 |
To clarify, the figures mean, for example, that 2,948 customers gave a 9 or 10 rating to the “How likely are you to recommend…” question. Of these, 63% were repeat customers and 37% had ordered for the first time.
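Continuing the sketch above, and assuming the same hypothetical column names, this breakdown could be produced along these lines:

```python
# Classify each matched respondent into the standard NPS categories.
def nps_category(score: int) -> str:
    if score >= 9:
        return "Promoter"
    if score >= 7:
        return "Passive"
    return "Detractor"

matched["category"] = matched["nps_score"].apply(nps_category)

# Proportion of repeat customers within each NPS category.
breakdown = matched.groupby("category")["is_repeat"].agg(["sum", "count"])
breakdown["repeat_share"] = breakdown["sum"] / breakdown["count"]
print(breakdown)
```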
Calculate
In the real-life case, the values of repeat orders and first orders were similar. Furthermore, the value of repeat orders did not vary significantly by NPS category. Your situation may vary, and you may need to adjust your sums.
Here are the calculations:
| Label | Type of information | Value | Source |
| --- | --- | --- | --- |
| A | Value of one order | $100 | From your ordering system |
| B | Average orders per repeat customer per year | 2.2 | From your ordering system |
| C | Value of moving one customer from one-time to repeat customer; that is, the value of the additional orders | $120 | (B – 1) * A |
| D | Proportion of Promoters who are repeat customers | 63% | From table above |
| E | Proportion of Passives who are repeat customers | 42% | From table above |
| F | Proportion of Detractors who are repeat customers | 37% | From table above |
| G | Average for Passives and Detractors | 40% | (E + F) / 2, rounded up |
| H | Difference in probability of repeat business by Promoters | 23% | D – G |
| I | Value per thousand customers moved to Promoter | $27,600 | C * H * 1,000 |
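If you prefer to script it, here is the same arithmetic as a short Python sketch using the example values from the table; substitute your own figures for A through F.

```python
import math

# Example values from the table above; replace them with your own.
A = 100                       # value of one order
B = 2.2                       # average orders per repeat customer per year
C = (B - 1) * A               # value of the additional orders per repeat customer -> 120
D = 63                        # % of Promoters who are repeat customers
E = 42                        # % of Passives who are repeat customers
F = 37                        # % of Detractors who are repeat customers
G = math.ceil((E + F) / 2)    # average for Passives and Detractors, rounded up -> 40
H = D - G                     # extra probability (in points) of repeat business by Promoters -> 23
I = C * (H / 100) * 1000      # value per thousand customers moved to Promoter -> 27,600
print(f"Value per 1,000 customers moved to Promoter: ${I:,.0f}")
```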
Possible imperfections in the calculation
If you have particularly low response rates, say less than 10%, the results become biased. The proportions of Promoters and Detractors in your sample will be greater than those in your general customer population, because in a low-response situation it is the people with extreme feelings who are most likely to respond.
There is another obvious imperfection in the calculation, and it makes the results conservative. A customer who has only ordered once could simply be a new customer who has not yet had time to reorder, so counting them as a one-time buyer understates the true repeat rates. If yours is a new company with low response rates, we suggest you explicitly assume that the low response rates and the newness of your company balance each other out.
The table above gives you a number for a period of 12 months. Hopefully your customers will stay with you for longer.
Your company may have a formal ‘qualifying period’ for ROI justifications. If, for example, you are required to have a positive ROI within 18 months, we suggest using 18 months instead of the annual number. It’s as easy as that.
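Using the example figures above, that would mean $27,600 × 18 / 12 = $41,400 per thousand customers moved to Promoter.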
We cover a more sophisticated proposal for how to calculate customer lifetime value in a separate article, How Much is a Customer Worth? Good Question.
Applying the calculation to contract renewals
We believe the same principles can be applied to businesses that depend on contract renewals. There are some not-so-subtle differences. In product businesses the costs of serving happy and unhappy customers tend to be similar. Not so in contract businesses, where your efforts to recover the customer may even make retaining them unprofitable.
Furthermore, in contract businesses, do not be surprised to discover that Passives are less likely to renew than Detractors. This pattern can also arise in a product business, though it is rare. The reasons behind the phenomenon may be surprising. They will be the subject of another article.
Conclusion
If you have a common customer ID that is shared between your feedback system and your ordering system, you may be in luck. At the very least you should be able to determine the relationship between survey responses and actual customer buying behavior.
If there is no particular relationship, your feedback system has major issues. If the relationship is as expected, you should be able to use the resulting calculations to justify improvement investments.
Over time you will build knowledge and a track record. These will enable you to predict more accurately how a given CX project will affect real-world customer behavior.
Soon enough, you can stop tearing your hair out.