
Your NPS Data Is Exactly as Bad as the Sales Team Thinks It Is

In the battle against instinct-driven decision making, only high-quality data wins.

“People … operate with beliefs and biases. To the extent you can eliminate both and replace them with data, you gain a clear advantage.”

– Michael Lewis, Moneyball: The Art of Winning an Unfair Game

There is a story, and I have no way of knowing whether it’s true, that when the legendary software pioneer Tom Siebel was pitching his eponymous CRM business to CEOs, he would take them to play golf. The pitch was then a simple one: do you believe the forecasts your sales team is giving you?

True or not, I was thinking of the contemporary version of that story. If it were me on the golf course, I’d have different questions. The first would be “did you see where that ball went?” but the second would ask not about prospects but about customers. Do you trust the reports you get from your team on the health of your customers? Do you believe they provide you with an accurate assessment of the risk of loss? Or even of opportunity?

We ask leaders that question a lot. The answer is rarely positive; often it is worse than a simple no. Many executives suggest, not flippantly, that they would wager against their team’s judgement.

Now I’m not trying to pick on any particular function. Call it customer success, account management, sales, or whatever: human judgement of risk and opportunity around customers is deeply flawed, and at one level the teams involved are blameless. We all bring so many cognitive biases to our communications that our evaluation of any given conversation is faulty by nature. Add to that customers’ natural reluctance to provide candid feedback, especially in conversation with another person whose relationship they may wish to preserve, and you have a formula for “unreliable witnesses”.

Furthermore, the different mental models we use to make decisions mean that we are all, at the very least, inconsistent. Those of you who are fans of Daniel Kahneman, the Nobel Prize-winning author of Thinking, Fast and Slow, will recognize the difference between system 1 and system 2 thinkers: the former making rapid, instinctive decisions, the latter being more considered. The point is that the latter group is far less susceptible to cognitive bias. In any group of people, unless they are all the same type, we will get very different judgement quality. And if they are all system 1, deciding by instinct: look out below! You might as well play “spin the bottle” to figure out your account status.

So relying on the testimony of staff as a basis for account risk evaluation could prove worse than tossing a coin. Data scientists tune their answers to reduce the risk of the most damaging errors, at the expense of the less damaging ones; most humans do not. In other words, a false negative, the report that “the account is doing fine” when it is actually at risk, carries far more cost than a false positive born of too much pessimism. There’s a cost to working on saving an account that wasn’t at risk after all, in wasted effort, but it’s rarely as expensive as losing a customer.
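To make that asymmetry concrete, here is a minimal sketch in Python of a cost-weighted flagging rule. The dollar figures are assumptions chosen purely for illustration: the point is that when a missed at-risk account costs twenty times more than a wasted save effort, the rational threshold for raising a flag sits far below a 50% probability of churn.

```python
# Hypothetical cost-weighted flagging rule for account risk.
# The dollar figures below are assumptions for illustration only.

COST_OF_MISS = 100_000       # revenue lost if we ignore an account that churns
COST_OF_FALSE_ALARM = 5_000  # effort wasted "saving" an account that was fine

def should_flag(churn_probability: float) -> bool:
    """Flag the account when the expected loss from ignoring it exceeds
    the expected cost of intervening unnecessarily."""
    expected_loss_if_ignored = churn_probability * COST_OF_MISS
    expected_cost_if_flagged = (1 - churn_probability) * COST_OF_FALSE_ALARM
    return expected_loss_if_ignored > expected_cost_if_flagged

# With these (assumed) costs the break-even point is roughly a 5% churn
# probability, not 50%:
for p in (0.03, 0.05, 0.20, 0.50):
    print(f"churn probability {p:.0%}: flag = {should_flag(p)}")
```

An account manager answering from instinct applies no such weighting, and, as the next paragraph suggests, the incentives often push in the opposite direction.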

People’s answers may be as much political as rational: I don’t want to get fired today by saying my account is in trouble; I’ll take my chances on a better future. Maybe things will improve. Maybe I’ll get lucky. All those over-optimistic assessments have the effect of deflecting management attention away from customers who are “doing great” and onto those who everyone can agree are in trouble. Distraction can be a real problem in account portfolio management.

I started by describing teams as blameless; looked at differently, however, some can be part of the problem.

“The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.”

– Daniel Kahneman, Thinking, Fast and Slow

There is a scene in the movie “Moneyball” where the innovative, data-driven general manager of the Oakland A’s, Billy Beane, confronts a group of old-school baseball scouts who are, to say the least, less than impressed by the new techniques. They even go on to suggest that the appearance of one player’s girlfriend should be a selection factor, because it’s an indication of his lack of confidence – and they aren’t referring to confidence intervals! There are terms for this divide: algorithmic aversion versus algorithmic appreciation. It’s a classic scene: an argument between “experience-based human judgement” (prejudice, in this instance) and data. With hindsight, we already know who won that battle; sports science is now widely adopted by just about every major team, and few top professional teams don’t crunch the stats. Data won. Algorithms won. System 2 thinking won.

Some customer facing teams, even leadership teams, more closely resemble the philosophy of the baseball scouts. They survived – even thrived – over the last few decades of their careers by playing the game under the old rules. Now, in an era of customer lifetime economics, they run the risk of replicating their legacy approach by thinking about customers through the lens of personal judgement.

Here comes the customer experience data

In many ways, customer experience data was the first effort to change that approach. Getting direct feedback from customers through surveys created an opportunity to at least provide data to identify challenges and genuine loyalty. Like many CX practitioners, I assumed that this data would be welcomed as a valuable insight by customer facing teams. After all, any data that shines a light on my customer is useful, right? Better still, an opportunity for a conversation, and one that is based around a topic the customer actually cares about: their success, their challenges.

It hasn’t turned out that way, for two very important reasons. One is a miscalculation around human behavior. The other is a significant limitation around our traditional approaches to customer experience measurement.

 

Homo Economicus at work and play

It’s not just that people aren’t the entirely rational, system 2 thinkers that economic theory assumes. It’s equal parts that and the fact that we implement customer experience so badly. Here’s an all-too-frequent failure formula for sales teams.

    1. Establish NPS performance measures that are based on such poor data they don’t represent your customers anyway. Wrong respondents, B2C methodology applied to B2B, poor response rates. Basically, not a score that reflects actual performance.
    2. Establish performance improvement goals that make no sense. Grab some data from a textbook (or worse, a weak benchmark you googled), decide that “if Apple can do 80 so can we,” and generate impossible targets. Add an arbitrary timeline (12 months should do it).
    3. Take the above plan, and tie compensation to it. Sales guys actually read compensation plans and collectively understand far more about how to optimize compensation than the plan author typically does. They will hate this plan; too many variables are out of their control.

The outcome from this is either:

a. The results are gamed. Everyone is happy, everyone knows the data is junk. The company and its shareholders lose.

b. Nobody gets paid. Everyone is unhappy. The CX program is hated. Probably killed.

Politically, customer experience data like this has no value in the organization and may even be a negative. In many instances, only persistent top-down leadership pressure keeps the efforts going, and without the support of the sales team, any change in leadership or financial stress on the company kills the program. Right now, customer experience is an easy target for cuts: too much effort, too little reward, limited political sponsorship.

 

Low-value CX data

If this all sounds like the worst kind of politics, or just backwards thinking, the other reason for non-support is completely rational, not political. The customer experience program data is so poor that customer facing teams see no value in it.

First, it provides a flawed view of account status. Part of this is the natural limitation of surveying: not everyone you want to hear from responds, and responses are so infrequent that the data is, most of the time, too old to be valuable. In both regards, it gives opponents of the approach a largely justified argument against the validity of the data.
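A rough way to see the response-rate problem is to put error bars on the score itself. The sketch below, in Python with made-up response counts, bootstraps a confidence interval around an NPS calculated from a single B2B survey wave of 40 responses; the plausible range is wider than most annual improvement targets.

```python
import random

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def bootstrap_interval(scores, draws=10_000, seed=1):
    """Approximate 95% confidence interval for NPS via resampling."""
    rng = random.Random(seed)
    samples = sorted(
        nps([rng.choice(scores) for _ in scores]) for _ in range(draws)
    )
    return samples[int(draws * 0.025)], samples[int(draws * 0.975)]

# A hypothetical survey wave: 40 responses from several hundred contacts.
responses = [10] * 14 + [9] * 6 + [8] * 12 + [6] * 5 + [3] * 3
print("Point estimate:", round(nps(responses)))        # 30
print("95% interval:", bootstrap_interval(responses))  # roughly +5 to +55
```

Even before asking who those 40 respondents were, the arithmetic alone makes single-wave movements in the score hard to read.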

The second problem is that the customer experience universe thinks in terms of individuals, not accounts, even though for B2B companies the atomic economic unit is the account, not the individual. It’s accounts that buy things, renew, upsell, attrit. Yes, of course, accounts are composed of individuals, but when they transact they are doing so as a single economic entity. And of course people come and go; not just from our customers, but also from their roles and levels of influence. While we may be interested in how people influence account decisions, it’s still the account that’s the atomic unit.
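If the account is the atomic unit, individual feedback has to be rolled up to that level before it says anything about renewal risk. Here is a minimal sketch that assumes each contact can be weighted by their influence on the buying decision; the roles and weights are illustrative, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class ContactFeedback:
    account: str
    role: str      # e.g. "economic buyer", "admin", "end user"
    score: int     # 0-10 likelihood-to-recommend answer
    weight: float  # assumed influence on the renewal decision

def account_health(feedback: list[ContactFeedback]) -> dict[str, float]:
    """Collapse individual responses into one weighted score per account."""
    totals: dict[str, list[float]] = {}
    for f in feedback:
        running = totals.setdefault(f.account, [0.0, 0.0])
        running[0] += f.score * f.weight   # weighted score sum
        running[1] += f.weight             # weight sum
    return {acct: round(s / w, 1) for acct, (s, w) in totals.items()}

feedback = [
    ContactFeedback("Acme", "economic buyer", 6, weight=0.6),
    ContactFeedback("Acme", "end user", 9, weight=0.2),
    ContactFeedback("Acme", "end user", 10, weight=0.2),
    ContactFeedback("Globex", "admin", 9, weight=0.3),
]
print(account_health(feedback))  # {'Acme': 7.4, 'Globex': 9.0}
```

The point of the toy example: two delighted end users don’t offset an unhappy economic buyer, which is exactly the signal an individual-level view averages away.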

Underlying this is the limitation of surveying itself. Even the best-executed, survey-driven B2B program can’t escape the quality black hole that is low-response-rate, low-frequency data. And the gap between desired and actual quality is only growing. Companies have invested in better sales automation systems and more sophisticated B2B marketing automation solutions, so the modern sales organization is – gradually – learning to live in a more data-driven, data-accountable universe. Pipeline analytics are getting more sophisticated, and predictive models are even starting to substitute for guesswork in revenue forecasts – the dream of the 1990s.

Meanwhile, the customer experience universe has focused on areas of technological innovation that really don’t impact the core problems. Applying natural language processing (NLP) to open text answers is really just squeezing more signal out of an ever-diminishing survey data set. Using NLP in the contact center can be transformative for the contact center team, but is often guilty of focusing on what is easy to measure (contact center interactions) vs. what needs to be measured (the overall performance of accounts). And finally, better surveying technology does little to overcome the fundamental challenges of wrong respondents and low frequency. Strapping a Panasonic lithium-ion battery to a buggy whip doesn’t make it a Tesla.

 

Where next?

We have talked about two fundamental problems here. First, organizations are stuck in outdated models of sales decision making and, like the scouts in baseball, won’t change until they lose. In this instance, change is brought about by shifts in leadership, either from forward-thinking new management or through failure and crisis forcing a reckoning. Right now, a pandemic is doing more to create urgency around digital transformation than 30 years of gradual change managed to accomplish. Case studies of success will inspire others, for sure, but the 10% who lead will likely reap the rewards. Whether it’s the sales organization or the customer experience team that’s stuck, history suggests that it’s external events that actually change approaches.

For that burning platform to mean anything, we need to move to better solutions. There are now better models for B2B methodology, albeit not as simplistic as some of our traditional B2C models. The math is harder, but that’s not an excuse; this isn’t rocket science. The biggest obstacle is more often an invisible cultural wall between customer teams, especially sales, and customer experience teams.

Transitioning away from survey dependency is proving harder. Not because the technology and data science don’t exist, but because customer experience teams too often define themselves as “the survey team” and the rest of the organization finds it easy to pigeonhole them that way. It becomes self-reinforcing, and both groups need a significant reset in thinking to break free. The new skills and methodologies that will transform customer experience, like the application of machine learning, are already in use and will be commonplace 2-3 years from now as we shift from survey (mechanism) thinking to data asset and business outcome thinking. Too often it’s the customer experience team making the case against change and sticking with the old ways.
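What does that shift look like in practice? Purely as a sketch, and assuming operational signals such as product usage trends, support load, and invoice behavior are already captured for every account, a simple model can score account health continuously rather than waiting for a survey wave. The feature set, data, and model choice below are illustrative assumptions, not a description of any particular vendor’s method.

```python
# Illustrative only: scoring account health from operational signals
# instead of waiting for survey responses. Features and data are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row is one account: [usage trend, support tickets per seat,
# days late on last invoice, executive sponsor meetings attended]
X = np.array([
    [ 0.10, 0.2,  0, 4],
    [-0.30, 1.5, 20, 1],
    [ 0.05, 0.4,  5, 3],
    [-0.15, 0.9, 35, 2],
    [ 0.20, 0.1,  0, 5],
    [-0.25, 1.2, 10, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned at renewal

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score any account whenever its operational data updates,
# not once a year when a survey response happens to arrive.
new_account = np.array([[-0.05, 0.8, 12, 2]])
print(f"Churn risk: {model.predict_proba(new_account)[0, 1]:.2f}")
```

The value isn’t the particular algorithm; it’s that the inputs arrive continuously for every account, which is exactly what the survey mechanism can never deliver.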

ABOUT RICHARD OWEN

As CEO, Richard’s singular professional focus: Delivering financial value through CX. He co-founded OCX Cognition to combine technology and programmatic consulting in pursuit of that goal, and now leads the company’s coordinated efforts to deliver the right solutions for its clients.

Richard’s 30-year career has centered on transforming business operations with technology, and he is one of the best-known CX thought leaders. While CEO at Satmetrix, his team led the development of the Net Promoter Score® methodology with Fred Reichheld, creating the world’s most widely used CX measurement approach. With Laura Brooks, he co-authored Answering the Ultimate Question, the best-selling “how to” guide for NPS practitioners.

Richard transformed the supply chain and built what was then the world’s largest e-commerce business at Dell, and has led two software companies, AvantGo and Satmetrix, to successful exits. With an MBA from MIT Sloan Management School, he has served on several boards and committees at public and private companies and is an active venture investor and international business thinker. Richard has lived on three continents; he and his family now divide their time between Arizona and London.

ABOUT OCX COGNITION

OCX Cognition delivers the future of NPS. We ensure customer experience success by combining technology and data science with programmatic consulting. In our Insights section, we present a comprehensive and evolving collection of resources based on our research and expertise, collected for CX leaders committed to delivering business outcomes.