NPS doesn’t say anything particularly useful (cranberryblog.substack.com)
109 points by boraginaceae on Dec 30, 2021 | 81 comments


I see proposals against NPS like this from time to time, but they always tend to ignore the underlying science that went into making NPS in the first place, primarily that there was a very strong correlation between the calculated NPS score and business performance.

Also, I find that many of these articles have somewhat of a caricaturish view of how NPS is used in the real world. It's not "oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"; instead, NPS can be like a canary in a coal mine - an unexpected drop in NPS is something to investigate and get more information on what went wrong, usually from more detailed/free-form questions in the NPS questionnaire.

As a software engineer I love reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.


> As a software engineer I love reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.

What you love is the customer feedback--this can be accomplished in multiple ways and you don't need NPS for it.


NPS has the advantage of being concrete and simple, producing evenly sampled feedback. This is hard to get any other way.

A proper NPS asks only one question and requires only a single click to answer -- e.g. "On a scale of 0-10, how likely are you to recommend <product> to a friend?" > click [8] > done.

There's a free-text "Tell us more" field that is totally optional.

This means users actually answer it. NPS feedback is the closest I've ever seen to being a clean sample.
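For reference, the score itself comes from a fixed bucketing of the 0-10 answers: 9-10 are promoters, 7-8 passives, 0-6 detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the sample scores below are made up):

    # Standard NPS bucketing: promoters (9-10) minus detractors (0-6),
    # as a percentage of all responses; passives (7-8) count as zero.
    def nps(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    print(nps([10, 9, 8, 7, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0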

By contrast:

- Everyone hates an n-question "would you like to fill out our survey" feedback form. Any data from those is going to be skewed, because only a certain kind of person fills those out, and usually only if they're angry enough to want to vent.

- Feedback from a passive "Submit feedback" button in-app is also skewed. Maybe useful as a bug report mechanism, but it won't tell you what's making your happy users happy.

- Proactively reaching out and talking to/observing users is obviously good, and nothing ever substitutes for that. But you want bulk data, not just anecdotal evidence.

- Most importantly, NPS really shines at telling you what's making your almost-happy users almost-happy. When someone clicks [10] and writes "I love it", that feels great. But when someone clicks [7] and tells you what's annoying them, that's often extremely useful.

- NPS does a great job priming people to give useful feedback. Forcing you to pick a single number out of 10 makes you do a quick mental accounting: "9? Nahh, <reason>." Then you click 8 and write the reason.

--

NPS was popularized by that famous HBR study where they found it correlates well with product growth.

I wonder how much of that result is simply because it's one of the least obnoxious ways of sampling user feedback, and therefore produces clean data.


In an almost Goodhart’s Law case, now that I know how NPS is used, I answer these questions differently, being far more prone to use 6, 7, or 8 to express my opinion.


Eh this is a niche issue. I bet less than 1% of people answering NPS have ever heard of "NPS".

The only way this would be a meaningful effect is for a dev tools or similar product where your audience is the HN audience.


It probably depends on the survey. If I just want to get out the door of the service department at the car dealership I'll probably give a perfect score unless they really screwed something up. I don't want to explain a non-perfect score and probably shouldn't give one so long as the experience was "OK."

On the other hand, like many, I probably tend to follow something like the XKCD star rating levels (https://xkcd.com/1098/) for products. And for employee surveys, I generally answer somewhere in the mildly positive range to most questions.

I'm not sure I've ever answered an NPS survey but, for most companies, I'd probably be somewhere in a similar range, even for companies I'm perfectly fine with most of the time.

The more complex the transactions/products the more reliably you'll have areas of gripe-age. A book or movie is rarely 5 stars for me. A USB cable pretty much works or it doesn't.


The "science" is quite dubious: https://articles.uie.com/net-promoter-score-considered-harmf...

The reason why NPS exists is because it's an easy number to calculate that sounds convincing, and manager-types love numbers without having to think too hard about how they got them.


That article just links to the Wikipedia page. The main criticism paper[1] fairly successfully debunks the claim that NPS is uniquely valuable, but more or less shows it to be of comparable value to other measures of customer satisfaction.

1: https://web.archive.org/web/20200716065914/https://pdfs.sema...


Yes, that's correct. The broader point is that it's really hard to boil down something as complex as user satisfaction to a number. My time spent with UX professionals has taught me that all of these measures are lossy, and my time with data scientists has taught me that once you start depending too much on lossy measures, you're going to be led astray.

But management-types love little numbers like NPS, so it usually gets done anyway, especially in big orgs. And then it goes downhill, because PMs and leads are incentivized to optimize for the particular number their management chain tells them matters, and they game it because that's how people work. Later on, frustrated engineers and PMs who aren't part of that game wonder if they're the crazy ones, because they see very real customer frustrations brushed aside by an org structure that doesn't seem to care much about what users actually tell them anymore. Or they say they do, but never incentivize the rest of the org to address issues coming in through verbatim feedback.

Maybe someone with the title of VP eventually wonders why a competitor is doing really well, looks at the verbatim feedback themselves (it was usually given to them already but they forgot about it), realizes they've been steering a big ship in the wrong direction, and causes all kinds of chaos in the process. The sycophants in the org all line up to agree with them and declare they were right all along (and get rewarded at the end of the year), line managers and their reports are left confused by the jolting priority shift, and the people who might have felt vindicated by the direction change are wondering whether their leadership is actually fit to lead.

Is that all the fault of NPS or other numbers? Of course not! But they're an easy and generally accepted way for bad leaders to hide their bad leadership.


Like sibling comment said, I'd love to see actual science behind it.

> a caricaturish view of how NPS is used in the real world. "It's not 'oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"

That sounds exactly like what the response at many companies would be. There's often much nuanced discussion about how/when/why to adopt certain metrics, but once they're adopted they become sacrosanct, and any deviation from "good" results is a crisis needing immediate action.


Agreed. The first line of the article specifically calls for "companies and their investors to stop using Net Promoter Score (NPS) as a KPI", and NPS doesn't provide a measurement that directly supports specific actionable steps; KPIs are supposed to be about planning actionable, measurable steps toward a clear business outcome.

(And I have plenty of opinions on how useless KPIs have been in practice for me and the places/teams I've worked. They're hard to get right unless everyone in management is on the same page about what a clearly defined business goal is, and what an actionable step is, and what a strong metric for determining whether the action met the goal is.)


> they always tend to ignore the underlying science that went into making NPS in the first place, primarily that there was a very strong correlation between the calculated NPS score and business performance.

There is no underlying science behind NPS. It was an arbitrary measure with no research or model behind it. Also, it doesn't work very well, in large part because there is no strong correlation between NPS and anything else: as this article makes clear, NPS is an extremely noisy number. Look at the graphs -- you can have enormous variations in NPS with zero change in underlying customer sentiment. Obviously that can't correlate with anything useful.
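To make the noise point concrete, here's a quick, purely illustrative simulation: the sentiment mix is invented and held fixed, only the sampling changes, yet NPS swings by whole points while the mean barely moves.

    # Illustrative only: resample from one fixed sentiment distribution
    # and watch NPS swing while the mean stays put.
    import random
    random.seed(0)
    population = [7] * 40 + [8] * 30 + [9] * 20 + [6] * 10  # hypothetical mix

    for _ in range(5):
        sample = random.choices(population, k=100)
        promoters = sum(s >= 9 for s in sample)
        detractors = sum(s <= 6 for s in sample)
        print(f"NPS={promoters - detractors:+d}  mean={sum(sample) / len(sample):.2f}")

Because so many respondents sit right at the 6/7 and 8/9 cut lines, small sampling wobble shifts whole percentage points between buckets.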

I think what you're doing is conflating the idea of "asking how likely people are to recommend your product" with NPS. The former is a decent idea, and it does have some (limited) science behind it! There really was a study done that really did find that question was an excellent predictor of many key results. (...and some others that found it was not, but hey, it's something.)

So yes, asking people the question and reading the results is probably a good idea. ...but that's not what NPS is.

> an unexpected drop in NPS is something to investigate

It really, really isn't. And I would challenge you to point to any evidence to the contrary.


I would think an unexpected and sustained drop in NPS is something to investigate. A single daily measure wiggling low? Of course not.


Honest question: what science? Any good papers you could recommend?

After a quick search it seems that NPS originated in a Harvard Business Review article, which I don't consider a credible source of scientific results. The scientific papers I'm seeing mostly seem pretty skeptical, judging from the abstracts.


They're using the popular meaning of "science": A claim from an impressive or impressive-sounding person or institution.


I generally agree, but I think one problem is when improving the NPS score becomes the sole objective and not improving the underlying problems that caused it to go down. In other words, calling a doctor to treat the canary does not improve the conditions in the coal mine that made the canary sick.


IDK, in my experience NPS prioritization trends towards "hiring more people to focus on customer happiness."

Source: Worked in multiple startups that IPO'd using this as one of their 'growth strategies'


> primarily that there was a very strong correlation between the calculated NPS score and business performance.

There is absolutely no science linking performance and NPS. Just pseudoscience.


> As a software engineer I love reading our NPS reports, because they always have great tidbits of info

My only experience of NPS is when I worked retail/sales in my first job, and it absolutely sucked in that context.

Less “great tidbits” and more “I just lost 30% of my commission this month because of something 100% outside of my control”.


> "It's not 'oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"

This is precisely what happened at my company, hah.


The point of NPS vs free cash flow or operational metrics is that it is in theory a leading indicator instead of a trailing indicator.

It's one thing (and a bit too late) to know that your business is screwed because your customers left; it's another to know that your business is about to be screwed because your customers are on the verge of leaving if an opportunity appears.

The criticisms of NPS pointed out (that it's a measure of a high-variance metric) are fair, but the conclusion is not.


One of the most common contexts for NPS scoring is giving it to direct retail customers as a phone/email follow-up survey. This is the most dangerous context for it, because a customer who isn't familiar with the system is sort of primed to have the wrong expectations.

They're thinking in terms of "how satisfied was I", which is an entirely different scale from "would you recommend the product". But the NPS question, even when explicitly explained, reads close enough that they'll answer it with the "how satisfied am I" answer.

NPS might have worked in the context of a professionally proctored focus group where everyone understood the question and discussed it, but I worry you're losing a lot of information when you turn it into one digit on a keypad.


What exactly would you suggest it's a leading indicator of? Churn?

Ultimately, if it matters, it should be an input to the cash flow time series -- mediated by churn -- no? (The article means to frame free cash flow as a series rather than a point-in-time value.)

If you're interested in forecasting, as a company, you should know the things that mean your customer is not doing well within the specific context of your specific business. E.g. in software, you should have a customer health score that's built up from product data. Or you could ask simple questions that are easily interpreted, e.g. asking "how would you rate X's value for money," or "how satisfied are you with X," or even "have you recommended X in the last Y months." These things have a cleaner relationship to future business metrics and a tidier interpretation.
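For illustration, a minimal sketch of such a product-data health score; the signal names and weights below are invented for illustration, not any standard formula:

    # Hypothetical customer health score built from product data.
    def health_score(account):
        weights = {
            "weekly_active_ratio": 0.4,  # active seats / licensed seats
            "feature_breadth":     0.3,  # fraction of key features in use
            "support_sentiment":   0.2,  # 0..1, from ticket outcomes
            "invoice_timeliness":  0.1,  # 0..1, paid on time
        }
        return sum(w * account[k] for k, w in weights.items())

    acct = {"weekly_active_ratio": 0.8, "feature_breadth": 0.5,
            "support_sentiment": 0.9, "invoice_timeliness": 1.0}
    print(f"health = {health_score(acct):.2f}")  # 0.75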


Hopefully, it should be a leading indicator for negative churn -- per-account and new account growth / virality -- and for more complicated products, a way to slice that across different personas and features. ('bakers at high-end restaurants have high NPS for the new precision thermometer tracker while those at regular shops ignore them and have low NPS.')

Esp in the early days of B2B products, it's hard to get that because you don't have the volume and velocity of B2C, nor a good way to detect and attribute viral activation. If you are in a startup and not riding the channel of some megacorp, even more so. Alternatives like signups or other activation checkpoints, or say qualitative interviews, are also interesting, but even more spotty. (Ex: startups raising based on GitHub stars.) We don't do NPS, as we have our plate full with known funnel holes found through less annoying data collection methods, but as soon as we are happy with the baseline funnel, that's the simple next step.


My own understanding is that it's supposed to be a prediction input for organic growth and nothing more. All it's really telling you is whether a customer is likely to provide you with another customer, which is why only the really high answers matter: "hell yeahs" are all that count.

I’m skeptical as to its usefulness even for this, but most companies I’ve worked for use it as a general KPI which is even more aggravating.


If you go to the trouble of collecting it, it's reasonable to use it as a KPI, as it carries useful information.

I imagine you are annoyed that they optimize for it. And yes, it's not reasonable to optimize for it. Why do people optimize every KPI?


>Why do people optimize every KPI?

Because if it's not worth optimizing around, how key can it be?


You can do a lot of things with a number that are not optimizing.

For example, you can treat it as a threshold to satisfy, you can alert on its behavior, or you can use it as a control (though OK, maybe that last one makes it a two-number KPI instead of two KPIs).


I'd expect it to be a leading indicator of user growth / churn, yes.


Why does everyone seem to think that management doesn't have genuinely good intentions when measuring NPS as a KPI? The companies I've worked with have all had genuinely good intentions of making the best possible experience for the user, because that is what ultimately wins in the end. If someone scores 1-6, you ask them to provide additional feedback so you can learn from it; problem solved, and everyone has all the info they need to go about their work.

There seems to be this idea that KPIs are evil and interpreted in vacuum. It's rarely like that.


> There seems to be this idea that KPIs are evil and interpreted in vacuum.

Not just KPIs, but the Hackernews crowd seems to apply the same to management and corporations as a whole.


20% of companies are knocking it out of the park, and 50% are nowhere near a leaderboard. I expect such folks are correct in their observations, but work for the 50%. Perhaps you work for one that is earning a promoter-level score.

My advice to friends looking for a job: ask the company what its NPS is. The leaders in NPS are leaders in employee engagement. If they don't know, or it's bad and your role isn't to make it better, look elsewhere.


What’s more interesting to me than the NPS scores are the free form comment section below the 0-10 rating.

Never mind that anything below X is considered neutral or a detractor. You asked a user/customer/etc. to rate you on a scale of 0-10 and then tell you why.

Want to know what you’re truly screwing up on? Take the feedback on your 1-6 scores seriously and you can find the low hanging fruit to take a product from mediocre to a great user experience.

That’s the true NPS value. It’s all about how you handle the feedback.


Background: I have designed and implemented NPS feedback systems in several companies.

I've noticed that people are concerned about NPS when it was not implemented correctly, i.e. as a supplementary question in a long questionnaire or as a follow-up question at the end of the journey.

The NPS collection mechanic has many advantages:

1. The mental entry point for the user is very low - just 1 question.

2. It is very useful to track user feedback over their lifetime (ask every 6 months) and actively resolve issues if NPS shows it.

3. Another great thing is not the numerical feedback but the written answer, which gives a lot of really useful ideas for improvement.

4. And finally, you don't need any specific context to show the question, so it can be embedded in almost any stage of the user experience.


Agreed: the NPS aggregation is mostly BS, but getting a signal from the median/mean, e.g. by comparing trends across some dimension, and reading the accompanying free-text feedback is quite useful.
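As an illustration of the median/mean point, a sketch with made-up months and scores:

    # Sketch: track mean/median per month instead of only the NPS cut.
    from statistics import mean, median

    monthly = {
        "2021-10": [7, 7, 7, 8, 7, 8],
        "2021-11": [8, 7, 8, 8, 8, 8],
        "2021-12": [8, 8, 8, 8, 8, 8],
    }
    for month, scores in monthly.items():
        print(month, f"mean={mean(scores):.2f}", f"median={median(scores)}")

Here the mean climbs steadily (7.33 -> 7.83 -> 8.00) while NPS stays flat at 0 for all three months, since no response ever crosses the 8/9 or 6/7 cut lines.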


This post misses that the primary benefit of an NPS program is not the metric—that's a nice byproduct—it's the dialogue with customers.

As a single metric, Net Promoter Score is OK. It's extremely easy for customers to answer, so there's volume in the number of answers. You can learn more by segmenting different ways—Users vs Admins, Enterprise vs SMB, verticals, etc.
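Per-segment cuts like those are cheap to compute; a minimal sketch with invented data:

    # Sketch: per-segment NPS from (segment, score) pairs; data invented.
    from collections import defaultdict

    responses = [("Enterprise", 9), ("Enterprise", 6), ("Enterprise", 10),
                 ("SMB", 10), ("SMB", 8), ("SMB", 3)]

    by_segment = defaultdict(list)
    for segment, score in responses:
        by_segment[segment].append(score)

    for segment, scores in by_segment.items():
        result = 100 * (sum(s >= 9 for s in scores)
                        - sum(s <= 6 for s in scores)) / len(scores)
        print(f"{segment}: NPS {result:+.0f} (n={len(scores)})")

Output here: Enterprise +33 (2 promoters, 1 detractor out of 3), SMB +0 (1 of each).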

The company I currently work for puts a lot into NPS, not necessarily the metric, but the process. Everyone that leaves feedback gets a follow-up email and interview from someone in Product or User Research. That feedback is then organized in Productboard where it is grouped with similar requests/issues. Overall those interviews heavily weigh what features we refine and build.

For some background, I previously designed an NPS tool and have written about NPS pretty extensively.

https://solomon.io/understanding-net-promoter-score/


I suspect the concept of NPS is, if not the source of, a major driver of the whole "anything less than 10/10, 5 stars, etc. is basically a fail" meme. There seem to be significant toxic effects from this on customer-facing employees, in my experience.


This is absolutely a massive issue with it. Apple did this with retail employees back in the day and it was terrible. Employee reviews were based on it, managers had to call customers and apologize if the customer didn't leave a 9 or 10 rating, it was absolutely awful.

One of the biggest problems with it is that an average person, unaware of what NPS is, doesn't understand that giving any rating less than a 9 is essentially giving a rating of zero.

If I have what I would consider to be an average interaction with a business, e.g. I just buy something and leave, no need for support, no problems to deal with, etc., that seems average to me. Based on a non-NPS understanding of a 0-10 scale, I'd say that's what, a 7? But the business now considers this as a failure on the part of the person that helped me at that store.
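That cliff is easy to see in the standard formula (scores invented):

    # All "pretty satisfied" 7s score exactly like a 50/50 split of 9s and 6s.
    def nps(scores):
        return 100 * (sum(s >= 9 for s in scores)
                      - sum(s <= 6 for s in scores)) / len(scores)

    print(nps([7] * 10))           # 0.0 - ten mildly happy customers count for nothing
    print(nps([9] * 5 + [6] * 5))  # 0.0 - same score, very different story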

This is why sales people and phone reps are constantly now asking you to give them a 10/10 rating if you receive a survey, because even if they literally just took your money and handed you change, their jobs depend on you acting as if you had a heart attack and they saved your life by performing CPR or something.

It's honestly a terrible system that produces no meaningful feedback for the company and causes employees to do whatever they can to game the numbers. All you're measuring with NPS is how good your employees are at juking the stats, nothing more.


I remember when I worked my first job as tech support for an ISP, our entire performance was measured by NPS. The survey asked both "how would you rate the rep" and "how would you rate the company". But the rep question was a decoy; the company question was what actually counted in my personal NPS stats.

I would get so many 10s for the rep (which does not count at all) and 1s for the company which would obliterate my stats. And, only the last person who spoke to the customer was rated, so this encouraged meaninglessly transferring people around like a hot potato.

Countless times I would get a customer who had issues for weeks; I'd take a look at their issue, resolve it in 15 minutes, they'd be ecstatic on the call, and then the NPS survey would come back: 10 for me and 1 for the company. Then I would get a tap on my shoulder from the supervisor asking me to explain the detractor.

I almost got fired after about 6 months due to my NPS being too low, until I made friends with one of the vets, and one evening in the pub, with a couple beers too many in him, he explained to me that the only way to survive is to game the system. He told me how to crash the call client to prevent it from sending surveys to angry callers, how to transfer people to an infinite hold queue that does not trigger a survey when they hang up, and how to trick the system into thinking that I had an inbound call when I did not (which gave me time to actually do my work, like performing relocations and resolving complex provisioning issues, since any time not spent on an inbound call was considered not adhering to schedule). I went from less than +40 NPS to +90 NPS in one month, so most of the NPS feedback was fake anyway.


Damn, this sounds like the definition of "evil".


This, to me, is a general problem with a purely quantified, metrics driven approach to management, not just limited to NPS. I'm not demonizing these approaches, just saying that you have to balance humanity with data.

I like to look at this kind of thing as one of the dark sides/dark patterns of using data for decision-making.

Qualitative measures are important, as is maintaining as much humanity as possible, to have a balanced and healthy culture. Being solely metrics/data driven can lead to cold, heartless, damaging culture (might be efficient or make profit, but very dehumanizing).


https://en.wikipedia.org/wiki/McNamara_fallacy

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.”


Related is Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

https://en.wikipedia.org/wiki/Goodhart%27s_law

My favorite story related to this, because the managers thought LoC would be a good measure of productivity, is "-2000 lines of code": https://www.folklore.org/StoryView.py?story=Negative_2000_Li...


Thank you for this very interesting share, I had not heard of this before!


Most internal surveys at my company use a scale of 1 to 6, where for scoring purposes a 1 counts the same as a 4 and a 5 the same as a 6. Our managers can't even see an improvement from "terrible" to "fairly decent" in the underlying distribution.

Sometimes they require writing something to justify a bad score (but not a good score!)


> giving any rating less than a 9 is essentially giving a rating of zero.

Well, at least it's not a negative number!


If you convert it to IEEE 754, it could be -0. ;)


Companies with a promoter-level score during the pandemic grew their business. There is a direct correlation between a promoter-level NPS and revenue growth. The percentage of companies that now provide promoter-level service is over 20%, up from 13% before the pandemic. The percentage rated at a detractor-level score also grew, to nearly 50%; those orgs' revenues either shrank or stagnated. This is an existential issue for organizations, not a nuisance KPI.

Companies that are leaders in NPS are leaders in employee engagement. This post sounded pretty frustrated with an employer/contract…

NPS is a measure of past experiences; it isn't a GPS navigation system to improvement. It is also one question, so it is the simplest thing to collect. Yet even this gets screwed up all the time: interfaces such as a telephone keypad, where a 10 can be recorded as its first digit, 1, or PIN pads, which are a nuisance to a customer in a hurry to pay and get missed entirely when paying with NFC.

Understanding negative NPS is like understanding a traffic ticket with no other data. Why did I get a traffic ticket? Well, what did or didn’t you do and how fast were you doing it? Were you a jerk to the cop? Was the cop stressed by something else that day?

To be a leader in NPS is an organizational effort, not the result of divining an insight from a few survey answers. Ideally, customer journey maps have been developed and problem areas are self-evident before an NPS score arrives.


My "problem" with NPS is when essential services, which I have next to no choice over using, ask me if i would recommend them to others.

No, IRS, I would never recommend your service to friends or family. That's not really indicative of anything.


Ah, like the good old

> How likely are you to recommend Windows 10 to a friend or colleague? Please explain why you gave this score.

> I need you to understand that people don't have conversations where they randomly recommend operating systems to one another


But they do - people ask each other if it's worth updating macOS for example - I've literally been asked if I'd recommend updating many times - I've asked people if they'd recommend a Linux distribution - etc.


I think people recommend operating systems more than any other software. Windows vs Mac vs whatever Linux. Android vs iOS. My mother lived in the era when Bill Gates was the villain and refused to update to Win 10 because she expected it to have microtransactions or something.


No PatientCo, I did not enjoy paying my medical bills through your system. I have no intention of recommending your bill payment system to my family. Why is this even asked?


I switched doctors when their billing became impossible to deal with.

Assuming they gave a shit, that survey response would have been useful to the practice.


It's still useless. You don't choose the payment processor of your doctor, and you certainly do not recommend it to anybody.

They should be asking doctors. And if they want your point of view, they should ask something minimally reasonable (like, did you have any problem using it?), not some crazy hypothetical.


You need to pay taxes. It's conceivable that the IRS could have a system that at least made it easy and painless. Compared to alternative tax filing methods, you might recommend a really easy way to file taxes to a friend.


One company I worked at used NPS internally to measure employee satisfaction and holy hell did they not enjoy seeing what that looked like.


NPS = Net Promoter Score

> measures customer experience and predicts business growth


Yes, NPS is an imperfect measure, as all are. What I’ve never seen is a great argument for what one should use instead.


A company I was working with did an internal study on this; I believe CSAT or a simple average were better if your goal is to improve revenue.


It's not fair to compare how "important" different metrics are to the whole business. A business is composed of and built on top of many sectors; that's why we have a Finance team, a Sales team, a Marketing team, a Product and Development team..., and not just one "we're a company" team! Free cash flow is important for business operations. Churn is important for predicting sales. NPS, though not directly affecting the "money", might be a good indicator for the marketing team. There are a lot of marketing metrics more important than NPS, but for a big corporation's branding team, especially in the consumer goods industry, it can be useful to look at the trend of NPS over a long period of time. NPS is noisy simply because it is collected with surveys; like any questionnaire or survey, if you design it well, you can reduce the noise.


I hate NPS. You know what makes it shit? It requires your customers to fill it out, so then you're bugging them to complete it.


I love giving feedback though. If I can't give it directly, it's going through Google Maps or something.


How would you (reliably) measure user satisfaction without asking users?


This post is correct about two things: first, that NPS does not give you direction on how to act; second, that statistical realities make sampling approaches like NPS very noisy. I don't think either discounts the concept of NPS itself. "Bad statistical work" is going to be bad regardless of what metric we're talking about. And a metric being too abstract to act on doesn't make it useless - at least not any more so than something like free cash flow. Yes, connecting the dots to some outcome is difficult. But it's a pulse on the business, and done right, that has some inherent value.


You know what would be vastly better than NPS?

Ask people every so often how they're feeling about the product. Let them pick a thumbs up or a thumbs down. If they pick thumbs down, say "We're sorry to hear that. Please let us know what's bothering you", and display a little text box. Carefully read every comment you receive.

Asking unhappy users for feedback is a great idea. The 0 to 10 scale and - especially - the bizarre calculation NPS does with it are statistical malpractice, mathematically incapable of doing what NPS supporters claim.


> Asking unhappy users for feedback is a great idea. The 0 to 10 scale and - especially - the bizarre calculation NPS does with it are statistical malpractice

I disagree. On the contrary, it is good survey practice to ask yes/no questions with a scale, to get information on the intensity of opinion. (Often this is a 1-7 scale ranging from "strongly disagree" through "neutral" to "strongly agree", but there are other alternatives, of which NPS is one.)

Then you might think only the bottom half of the range would formally be considered "detractors", and that would (as far as the evidence goes) be accurate. However, we're not looking for how many people are detractors today -- we want to know how many people are at risk of becoming detractors in the near future. That's what makes NPS a somewhat leading metric. And that's also why you see the sleight of hand that counts answers up to 6 as detractors.

I don't know what "mathematically capable" means in your context, nor what "NPS supporters claim" in your experience, but the number absolutely makes sense.


Be careful how you do this. When an app interrupts my normal workflow to ask me to rate the app or ask me how I'm enjoying the app, the answer is always negative. I'm using the app to get work done, not satisfy some VP's KPIs.


I was unfamiliar with NPS until about 12 hours ago (but not because of this post).

NPS has been a strong predictor of stock price (I learned this morning).

https://www.marketwatch.com/story/this-surprising-investing-...?

Of course, it’s just opinion. But maybe there’s some use to NPS after all.


I didn't see discussed here the main reasons you might want NPS vs. outcome-specific (and company-specific) metrics:

1) It's harder to game and/or bullshit. This is a massive issue when surveying customers and then using their responses to feed back into internal performance goals & rewards.

2) Because it has grown to be such a common customer-experience metric, it's possible to benchmark against others, which isn't really possible with question types that are very specific to your business.

I'll agree that NPS is overrated, mostly because the Net Promoter organization and survey companies have spent large amounts of time and money pushing it as The One True Metric for decades, but it still has its merits and its place.


> it's harder to game and/or bullshit. ... it's possible to benchmark against others

I don't have access to A/B testing at a large business to understand how things affect responses. But I certainly intuit that when and how you ask someone for feedback matters. E.g. if you ask your question directly after the checkout screen, you'll get a significantly different response than if you survey people who visit your website at random, or email customers a set period after purchase / upon transaction completion.

I've seen all of these, which brings me to the second part: how do you benchmark against a competitor's data, and why would you trust a competitor's data?


Because if you're a big company you don't run your own surveys - you pay one of the big CX companies like Maritz, Medallia, InMoment, etc. to run them for you, or, for smaller feedback not requiring a full management system, you hire one of the 100+ research firms to do it. Because these firms run surveys for tons of clients, they generally have standard practices for how & when feedback is collected and are willing to share industry-standard benchmarks for things like NPS. I worked with two of the names above during a 2-year stint with a major auto manufacturer that sent over a million surveys a year, and NPS comparisons were one of many tools used to get meaning out of responses.



> (The most irritating — and by far the most common — reason companies seem to measure NPS is that it’s standardized, but more importantly that everyone else does it. Which is fine for one-time-use benchmarking, but not a good basis for internal KPIs.)

Except the alternative isn't to stop tracking NPS; the alternative is to use unstandardized customer survey metrics, which is worse because you have no true industry comparison & it can be gamed more easily since they define the formula!


https://www.qualtrics.com/experience-management/customer/wha... is another standardized alternative, that I suspect would suffer less from the problems in the article.


CSAT is not standardized and there are many ways to calculate it, one of which you have linked. I've seen firsthand many of the largest brands calculate it differently.


Yeah, I agree it doesn't have the universal agreement on implementation that NPS does. Here's an alternative that I believe does, and which is referenced in some of the NPS-skeptical research: https://en.wikipedia.org/wiki/American_Customer_Satisfaction...


That "measures the satisfaction of consumers across the U.S. economy", very different from Brand's Customer Experience Surveys. Having worked as a software vendor in the CX field, I saw a lot of different implementations of the "same" survey metrics. NPS was only standardized one (I think because its from b school in the 80's).


Nevertheless it seems to be roughly as predictive of growth - https://web.archive.org/web/20200716065914/https://pdfs.sema...


The real problem with NPS is when a certain score becomes a goal; after all, Goodhart's law states that's the case with any good measure.


Yep. I had an advertising-funded app that asked me to fill out a survey about why I rated it poorly, in a modal I couldn't skip. So I always gave it a good score, until I stopped using it because it interrupted me so often to ask. They lost me and they'll never know why.



