Susan and Bill have Relationship Problems!

The Susan & Bill Trilogy

Susan and Bill have relationship problems.

When we updated our Customer Relationship Quality (CRQ™) methodology in 2014, we created a storyline around two fictitious characters. The first was Bill, a thoughtful but somewhat introverted Marketing Director. His counterpart was Susan, a more aggressive but low-attention-span Sales Director. They may be fictitious but they bear more than a passing resemblance to some sales and marketing directors we have met in the past.

Episode 1 finds Susan and Bill having relationship problems. Well, their problems are primarily related to understanding the relationship their company has with its main corporate clients. However, there is also some evidence of tension between Susan and Bill themselves. This is the sort of natural tension that exists between Sales and Marketing in any large organisation.

EPISODE 1: Susan and Bill have Relationship Problems!

Susan – Sales Director

“I need real customer feedback. Something that helps my sales teams manage their key accounts. For the long-term. All Marketing are interested in is some box-ticking exercise for the folks in HQ.”

Bill – Marketing Director

“I need to provide HQ with Net Promoter Score (NPS®) metrics. It’s our corporate policy. For some reason, Sales just don’t seem to get it. NPS is a useful tool if they could only figure out how to use it properly.”

Net Promoter Score

Net Promoter Score (NPS) is a simple, easy-to-use metric for measuring customer loyalty. Many large, well-known companies now use it as a key business metric. The concept behind NPS is simple: loyal customers are more willing to recommend you to a friend or colleague. To find out how loyal your customer base is, measure their willingness to recommend. The higher your NPS, the more loyal your customer base.

NPS is easy to calculate. It’s based on a single question: “Would you recommend Company XYZ to a friend or colleague?” The problem is that Sales Directors find it hard to turn the answer to that question into a clear set of actions – actions that can be used to improve a complex web of relationships in a large corporate account, or across an entire customer portfolio.
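As a quick illustration, here is a minimal sketch in Python of how an NPS figure is typically derived from the 0-10 responses, assuming the widely used convention that 9-10 count as Promoters, 7-8 as Passives and 0-6 as Detractors:

# Minimal sketch of the usual NPS calculation: % Promoters minus % Detractors.
# Assumes the standard 0-10 scale (9-10 Promoter, 7-8 Passive, 0-6 Detractor).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 4 Promoters, 3 Passives and 3 Detractors in 10 responses gives an NPS of 10.
print(net_promoter_score([10, 9, 9, 9, 8, 7, 7, 6, 5, 3]))  # 10.0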

Does NPS work for B2B Organisations?

Yes, but!

On its own, NPS is not sufficient for understanding complex B2B relationships. It does provide a good starting point but in complex B2B relationships it must be supplemented by other metrics. These metrics must help account managers take action at an INDIVIDUAL account level, as well as helping senior executives focus on a small number of strategic initiatives across ALL accounts.

Deep-Insight’s unique Customer Relationship Quality (CRQ™) methodology helps Sales Directors do exactly that. CRQ identifies which accounts are a company’s greatest Ambassadors, and which are on the point of defection (Stalkers and Opponents). More importantly, the CRQ methodology identifies – for each account manager – what needs to be done to transform an Opponent into an Ambassador.

Relationship Segmentation

NPS tells you if you have a problem but not how to fix it. CRQ tells you what the problem is and exactly how to address it.

Back to Susan and Bill

Bill needs NPS data in a comparable format to data from other parts of the organisation, with feedback on brand, image, product and pricing. With Customer Relationship Quality (CRQ™), Bill gets his NPS data in exactly the way he needs it. That keeps Bill and his Marketing team happy.

On the other hand, Susan gets detailed account-level customer relationship feedback for her sales teams. Each account manager gets an account report for every client. They can look at levels of Trust and Commitment for each client so that they, and Susan, can avoid any surprises when contracts come up for renewal. That keeps Susan and her Sales team happy.

Join us next week for EPISODE 2.

Do Americans REALLY score more positively than Europeans?

In a previous blog, I wrote that Europeans were more stingy than Americans when it came to customer feedback. Or words to that effect. So do Americans REALLY score more positively than Europeans?


Since then, people have been asking if this is REALLY true. In other words, where is the evidence for this claim?

Well, yes it IS true. While I’m not an expert in the area, I do know somebody who is: Anne-Wil Harzing, Professor of International Management at Middlesex University, London.

In 2006, Professor Anne-Wil Harzing conducted an analysis of different response styles across 26 countries.

We recently sat down with Anne-Wil Harzing to discuss these differences.

Interview with Anne-Wil Harzing

John: Professor Harzing, if I look at our own clients – which are mainly headquartered in Europe, the USA and Australia – their customers can be based anywhere in the world. When we report results back by country, we often identify differences from country to country in Customer Relationship Quality (CRQ) or Net Promoter Score (NPS). How should we interpret those differences?

Anne-Wil: Good question – let me answer that in two ways. First, there are characteristics at a country level such as power distance, collectivism, uncertainty avoidance and extraversion. These all have a major influence on the way people respond to questionnaires and surveys. This is particularly true when you use Likert scales – you know, the 1-7 scales that you use, or the 0-10 scale that’s used in Net Promoter Score surveys. Second, there are differences based on whether the respondent is replying to a questionnaire in his or her native tongue. Also, English language competence is positively related to extreme response styles and negatively related to middle response styles.

John: Can you explain the different response styles?

Anne-Wil: The main styles that people talk about are Acquiescent Response Style (ARS) and Extreme Response Style (ERS). ARS is where respondents are more likely to agree or give a positive response to a question. ERS is where responses are more likely to be highly positive or highly negative, in contrast to Middle Response Style (MRS), where there is a greater tendency to go for an ‘average’ response. High ARS implies better/higher scores. ERS gives you more varied or extreme (and possibly higher) scores than MRS.

John: Can you give us a few examples of those country differences?

Anne-Wil: Sure. Respondents from Spanish-speaking countries show higher ERS and ARS, while Japanese and Chinese respondents tend to be far less extreme in their response styles. Across Europe, the Greeks stand out as having the highest levels of acquiescence and ERS. Countries across Northern and Western Europe – where many of Deep-Insight’s clients are based – tend to exhibit fairly similar response patterns.

John: And Americans?

Anne-Wil: High ERS and high ARS – you’ll generally get a more positive response from an American audience than from a Western or Northern European audience.

John: That’s very much in line with our own findings. We also see it in a lot of discussions around Net Promoter Scores (NPS). On some American websites, you will read that the average NPS for B2B companies is between 25% and 30%. And yet our experience at Deep-Insight is that the average NPS is closer to 10%. This may well be related to the fact that the majority of our customers (or more importantly, their clients) are European or Australian, rather than American.

Anne-Wil: It just goes to show that you need to take great care when interpreting cross-country scores. When people complete a survey, their answers should be based on the substantive meaning of the questions. However, we know that people’s responses are also influenced by their response style, so differences between a company’s geographically-based divisions might simply reflect differences in the way clients respond to surveys, rather than picking up real differences in the ways those divisions are going to market.

Americans v Europeans

So Europeans ARE more stingy than Americans! Or to put it more kindly, Americans REALLY score more positively than Europeans.

Our own research – although more anecdotal than Professor Harzing’s – backs up her results. Apart from the higher NPS scores I mentioned in the discussion, we also see Americans give higher Customer Relationship Quality (CRQ) scores than Europeans. We pick this up in the standard deviation figures from our results as well. This often results in fewer “Rationals” in the customer base of American clients. (Rationals are good, but not extremely loyal, customers who make up about 50% of a typical customer base.) In contrast, American clients tend to have more “Ambassadors” and sometimes more “Opponents”, which reflects the ERS and ARS styles that Professor Harzing describes.

In her paper, Harzing concludes that:

“Regardless of what remedy is used to eliminate or alleviate response bias, the first step towards finding a solution is acknowledging that response bias can be a serious threat to valid comparisons across countries. We hope this article has provided a step in that direction and that in future response bias will receive the attention it deserves from researchers in the area of international and cross cultural management.”

Good advice!

* Net Promoter® and NPS® are registered trademarks, and Net Promoter System℠ and Net Promoter Score℠ are trademarks, of Bain & Company, Satmetrix Systems and Fred Reichheld

Satisfaction or ‘Statisfaction’?

One of my esteemed colleagues recently sent a draft document to me that had a typo – satisfaction had been spelt with an extra ‘t’, making up a new word ‘statisfaction’.

That got me thinking!

I have been involved in numerous movements and initiatives to drive customer-focused business improvement for over 25 years – from Total Quality & Customer Satisfaction (CSat) through to Net Promoter Score (NPS) and Customer Relationship Quality (CRQ).

One thing that I have learned working with hundreds of companies across the world is that:

IT’S NOT ABOUT THE SCORE – IT’S ABOUT THE CUSTOMERS

Businesses like things quantified (there’s a perception that companies are run by accountants nowadays), and on the whole I go along with the “what gets measured gets managed” mantra (see below), so I fully endorse customer experience and internal capability measurement.

I also like statistics! I’m intrigued by the fact that (as happened recently with a client) the average score of the Net Promoter question can go up while the NPS itself goes down! I love exploring how ‘the same’ internal capability score can be made up of completely different profiles of strength, weakness, consistency and impact across the organisation.
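To see how that can happen, here’s a small hypothetical example (illustrative numbers only, not the client’s data): if one Detractor and three Promoters all move to scores of 8, the mean rises but the NPS collapses, because NPS only counts the percentage of Promoters (9-10) minus the percentage of Detractors (0-6):

# Hypothetical illustration: the mean rating rises while the NPS falls.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

before = [10, 9, 9, 0]  # mean 7.0, NPS = 75% Promoters - 25% Detractors = 50
after = [8, 8, 8, 8]    # mean 8.0, NPS = 0% Promoters - 0% Detractors = 0
print(sum(before) / len(before), nps(before))  # 7.0 50.0
print(sum(after) / len(after), nps(after))     # 8.0 0.0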

The first trouble with ‘the numbers’ (scores, averages, top-box, etc.) is that they DE-HUMANISE their source – our customers and how we manage their experience and value.

Yes, verbatims are often included in the appendices of research reports and summarised into frequency graphs of positive & negative sentiment (quantification again!), but I really wonder how many executives actually read every customer comment.

My point here is that customers are on a JOURNEY, and have a STORY to tell, but organisationally we’re only interested in a number.

My second problem with ‘the numbers’ is that hitting the score target can easily become the objective in itself rather than improving organisational capabilities. I have seen this lead to many counter-cultural, and indeed downright destructive, behaviours:

- Deselection of unhappy or difficult customers from surveys

- Writing new strategies instead of implementing the one you’ve got

- NPS begging – “please give me a 9 or 10 or I’ll get fired”

- Only ever addressing ‘quick wins’ – never the underlying systemic issues

- Blaming sample sizes and methodologies as an excuse for inactivity

- Blatant attempts to fix the scores (e.g. fabricated questionnaire completions, ‘evidence’ of capability that is in fact just a PowerPoint slide)

- Corporate tolerance of low-scorers – many companies seem content with the fact that large proportions of their customers hate them!

- Putting metrics into performance scorecards but with such a low weighting (vs. sales) that nobody really cares

- Improving “the process” instead of “the journey”

- No follow-up at a personal level because of research anonymity; or inconsistent follow-up if anonymity is waived, often only of low scorers treated as complainants. What about thanking those who compliment and asking for referrals from advocates?

I could go on, but I hope the point is made – beware of “what gets measured gets managed” becoming:

“WHAT GETS MEASURED GETS MANIPULATED”

So instead of targeting statistical scores, seek to find ways of improving your systemic capabilities to cost-effectively manage your customer experience – and then listen to what your customers are saying about how satisfying it is.

By the way, your scores will improve too!

Peter Lavers is Deep-Insight’s UK MD. If you’d like to find out more about how to overcome these issues, please contact Peter here.