Do Americans REALLY score more positively than Europeans?

In a previous blog, I wrote that Europeans were more stingy than Americans when it came to customer feedback. Or words to that effect.

Since then, people have been asking whether this is REALLY true, and what the evidence for the claim is.

Well, yes it IS true and while I’m not an expert in the area, I do know somebody who is: Professor Anne-Wil Harzing, Research Professor and Research Development Advisor at ESCP Europe.

In 2006, Professor Anne-Wil Harzing conducted an analysis of response styles across 26 countries.

We recently sat down with Anne-Wil Harzing to discuss these differences.


John: Professor Harzing, if I look at our own clients – which are mainly headquartered in Europe, the USA and Australia – their customers can be based anywhere in the world. When we report results back by country, we often identify differences from country to country in Customer Relationship Quality (CRQ) or Net Promoter Score (NPS). How should we interpret those differences?

Anne-Wil: Good question – let me answer that in two ways. First, there are characteristics at a country level such as power distance, collectivism, uncertainty avoidance and extraversion, which all have a major influence on the way people respond to questionnaires and surveys. This is particularly true when you use Likert scales – you know, the 1-7 scales that you use, or the 0-10 scale that’s used in Net Promoter Score surveys. Second, there are differences based on whether the respondent is replying to a questionnaire in his or her native tongue. English language competence, for example, is positively related to extreme response styles and negatively related to middle response styles.

John: Can you explain the different response styles?

Anne-Wil: The main styles that people talk about are Acquiescent Response Style (ARS), where respondents are more likely to agree or give a positive response to a question; Extreme Response Style (ERS), where responses are more likely to be highly positive or highly negative; and Middle Response Style (MRS), where there is a greater tendency to go for an ‘average’ response. High ARS implies better/higher scores, while ERS gives you more varied or extreme (and possibly higher) scores than MRS.

John: Can you give us a few examples of those country differences?

Anne-Wil: Sure. Respondents from Spanish-speaking countries show higher ERS and ARS, while Japanese and Chinese respondents tend to be far less extreme in their response styles. Across Europe, the Greeks stand out as having the highest levels of acquiescence and ERS. Countries across Northern and Western Europe – where many of Deep-Insight’s clients are based – tend to exhibit fairly similar response patterns.

John: And Americans?

Anne-Wil: High ERS and high ARS – you’ll generally get a more positive response from an American audience than from a Western or Northern European audience.

John: That’s very much in line with our own findings. We also see it in a lot of discussions around Net Promoter Scores (NPS). On some American websites, you will read that the average NPS for B2B companies is between 25% and 30%, yet our experience at Deep-Insight is that the average NPS is closer to 10%. This may well be related to the fact that the majority of our customers (or more importantly, their clients) are European or Australian rather than American.

Anne-Wil: It just goes to show that you need to take great care when interpreting cross-country scores. When people complete a survey, their answers should be based on the substantive meaning of the questions. However, we know that people’s responses are also influenced by their response style, so differences between a company’s geographically-based divisions might simply reflect differences in the way clients respond to surveys, rather than picking up real differences in the ways those divisions are going to market.
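A quick aside on the mechanics for readers less familiar with them: NPS is conventionally calculated as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6), with passives (7-8) ignored. A minimal Python sketch, using made-up scores purely for illustration:

```python
def nps(scores):
    """NPS = % promoters (scores of 9-10) minus % detractors (scores of 0-6),
    calculated from 0-10 'likelihood to recommend' responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten illustrative responses: four promoters, three passives, three detractors
responses = [10, 9, 9, 9, 8, 7, 7, 6, 5, 3]
print(nps(responses))  # 40% promoters - 30% detractors = an NPS of 10.0
```

Note that the passives in the middle of the scale contribute nothing to the score, which is one reason NPS is so sensitive to the response styles discussed above.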


Our own research – although more anecdotal than Professor Harzing’s – backs up her results. Apart from the higher NPS scores I mentioned in the discussion, I also see Americans give higher Customer Relationship Quality (CRQ) scores than Europeans. We pick this up in the standard deviation figures from our results as well. This often results in fewer “Rationals” in the customer base of American clients. (Rationals are good, but not extremely loyal, customers who make up around 50% of a typical customer base for any of our clients.) In contrast, American clients tend to have more “Ambassadors” and sometimes more “Opponents”, which reflects the ERS and ARS styles that Professor Harzing describes.

In her paper, Harzing concludes that:

“Regardless of what remedy is used to eliminate or alleviate response bias, the first step towards finding a solution is acknowledging that response bias can be a serious threat to valid comparisons across countries. We hope this article has provided a step in that direction and that in future response bias will receive the attention it deserves from researchers in the area of international and cross cultural management.”

Good advice!

* Net Promoter® and NPS® are registered trademarks and Net Promoter SystemSM and Net Promoter ScoreSM are trademarks of Bain & Company, Satmetrix Systems and Fred Reichheld

Satisfaction or ‘Statisfaction’?

One of my esteemed colleagues recently sent a draft document to me that had a typo – satisfaction had been spelt with an extra ‘t’, making up a new word ‘statisfaction’.

That got me thinking!

I have been involved in numerous movements and initiatives to drive customer-focused business improvement for over 25 years – from Total Quality & Customer Satisfaction (CSat) through to Net Promoter Score (NPS) and Customer Relationship Quality (CRQ).

One thing that I have learned working with hundreds of companies across the world is that:

IT’S NOT ABOUT THE SCORE – IT’S ABOUT THE CUSTOMERS

Businesses like things quantified (there’s a perception that companies are run by accountants nowadays), and on the whole I go along with the “what gets measured gets managed” mantra (see below), so I fully endorse customer experience and internal capability measurement.

I also like statistics! I’m intrigued by the fact that (as happened recently in a client) the average score of the Net Promoter question can go up but the NPS itself goes down! I love exploring how ‘the same’ internal capability score can be made up of completely different profiles of strength, weakness, consistency and impact across the organisation.
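The average-up/NPS-down effect is purely arithmetic, because passives (scores of 7-8) count for nothing in the NPS calculation. A minimal sketch, using the standard 9-10 promoter / 0-6 detractor cut-offs and made-up scores:

```python
def nps(scores):
    """NPS = % promoters (scores of 9-10) minus % detractors (scores of 0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

before = [9, 9, 9, 3]   # mean 7.5, NPS = 75% - 25% = 50
after = [8, 8, 8, 8]    # mean 8.0, but everyone is now a passive, so NPS = 0

assert sum(after) / len(after) > sum(before) / len(before)   # the average rose...
assert nps(after) < nps(before)                              # ...yet the NPS fell
```

Moving everyone into the middle of the scale lifts the mean while flattening the NPS to zero – the two numbers can genuinely move in opposite directions.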

The first trouble with ‘the numbers’ (scores, averages, top-box, etc.) is that they DE-HUMANISE their source – our customers and how we manage their experience and value.

Yes, verbatims are often included in the appendices of research reports and summarised into frequency graphs of positive & negative sentiment (quantification again!), but I really wonder how many executives actually read every customer comment.

My point here is that customers are on a JOURNEY, and have a STORY to tell, but organisationally we’re only interested in a number.

My second problem with ‘the numbers’ is that hitting the score target can easily become the objective in itself rather than improving organisational capabilities. I have seen this lead to many counter-cultural, and indeed downright destructive, behaviours:

- Deselection of unhappy or difficult customers from surveys

- Writing new strategies instead of implementing the one you’ve got

- NPS begging – “please give me a 9 or 10 or I’ll get fired”

- Only ever addressing ‘quick wins’ – never the underlying systemic issues

- Blaming sample sizes and methodologies as an excuse for inactivity

- Blatant attempts to fix the scores (e.g. fabricated questionnaire completions, ‘evidence’ of capability that is in fact just a PowerPoint slide)

- Corporate tolerance of low scorers – many companies seem content with the fact that large proportions of their customers hate them!

- Putting metrics into performance scorecards, but with such a low weighting (vs. sales) that nobody really cares

- Improving “the process” instead of “the journey”

- No follow-up at a personal level because of research anonymity; or inconsistent follow-up if anonymity is waived, often only of low scorers, who are treated as complainants – what about thanking those who compliment you, and asking advocates for referrals?

I could go on, but I hope the point is made – beware of “what gets measured gets managed” becoming:

“WHAT GETS MEASURED GETS MANIPULATED”

So instead of targeting statistical scores, seek ways of improving your systemic capabilities to cost-effectively manage the customer experience – and then listen to what your customers are saying about how satisfying it is.

By the way, your scores will improve too!


Peter Lavers is Deep-Insight’s UK MD. If you’d like to find out more about how Deep-Relationship-NPS overcomes these issues, please contact Peter here.