Deep-Insight seen as ‘Unique’ in 2019 CRQ assessment

Our 2019 Customer Relationship Quality Results

A big Thank You to all of our clients and channel partners who completed our CRQ assessment this year! It provided us with a wealth of feedback. We are humbled that you have given us such positive scores and we are thrilled with both the overall results and the detailed responses that we received.

In summary: we had an overall completion rate of 49%, a CRQ score of 6.0 and a Net Promoter Score of +53%. This is the strongest set of scores that we have ever received.
[Figure: 2019 CRQ score and Net Promoter Score]

Regaining our Unique Status

Our scores this year mean that we are back in 'Unique' territory, a status we just missed out on last year. Uniqueness requires a combination of a winning 'Solution' and a great 'Experience'. Last year, our 'Solution' scores had slipped, and over the past 12 months we have been working hard to regain our 'Unique' status. It's very gratifying to see all of that hard work paying off.

[Figure: 'Unique Solution' and 'Unique Experience' scores]

Deep-Dive: New and Improved

We have reflected on last year's journey and on why we regained our 'Unique' status. Taking your feedback on board, we put together a plan to upgrade our Deep-Dive platform, and we have recently rolled out Deep-Dive v1.1, which is faster, has more features and allows our clients to access individual account reports at the click of a button. The work that the development team put into Deep-Dive has paid off: our 'Solution' scores increased from 4.9 to 5.7 this year, and we have already received some very positive feedback on the upgraded platform.

Our Plans for 2019

Even though we received an amazing set of scores this year and we are thrilled with the results, that does not mean we will take a break. We have put our heads together and come up with the following action points for the upcoming months.

No. 1 – Share Results with All Clients – and Create Joint Action Plans

We tell our clients to share their results with their customers as this is a very effective way of building strong relationships. In the past we have sometimes been guilty of not taking our own advice but this year we plan on doing exactly that with all of our clients. Expect us to reach out to you in the very near future so that we can review your feedback together and see how Deep-Insight can be more effective this year at helping you achieve your 2019 objectives.

No. 2 – More Support with Account Management 

This year we asked you what you use our services for. The answer? You use Deep-Insight primarily as an Account Management/Customer Retention tool, followed closely by Customer Experience feedback.

[Figure: customer portfolio]
The message for us at Deep-Insight is that we need to spend more time with our clients at the start of any assessment to understand how you segment your client base, how you allocate account managers and service teams to those accounts, and how we can help you get more account-based insights from using the Deep-Dive platform. We also need to be more supportive in helping you use the results to manage those accounts more effectively. This is one area we would specifically like to explore with each of you in the coming weeks.

No. 3 – Aim for a Higher Response Rate in 2020

This is more of an internal action for us at Deep-Insight. A 49% response rate is not bad but we know that some of you set targets of 60% or higher, and you achieve them. We will be aiming for a 60% completion rate in 2020. We always advise our clients to work with their account teams to achieve the highest response rate possible, so for next year we will definitely put a stronger focus on this for ourselves as well.
Thank you again for your time and input into this year’s customer assessment. We will be in touch shortly.

John O’Connor

5 Things To Remember To Get Your Completion Rates Up

One of the questions we get asked a lot is: “What sort of completion rates do you guys normally get on an assessment?”

Well, the answer is that it depends on what sort of assessment you’re talking about – we provide feedback on relationships with customers, channel partners and suppliers, and the completion rates differ from one type of assessment to the next:

-For employee assessments, our typical completion rate is in excess of 90%.

-For corporate customer and channel partner assessments, it’s typically 35-40%.

-For supplier assessments, the average completion rate is somewhere in the middle: 60-70%.

The next question we get asked is “Is it really that high?”

Well, we mainly get asked that question in connection with customer assessments, as some of our clients think 35-40% sounds impressive. This is particularly the case when people compare our figures to the ones you might get on a typical consumer survey, where sometimes as few as 2% of consumers will bother to complete a questionnaire (Petchenik & Watermolen, 2011).

Remember that we are talking about existing, often long-standing, business-to-business (B2B) relationships – that's what we do at Deep-Insight. We're not a consumer research company. In fact, we're not even a market research company, although we are often compared to firms like TNS or Gallup. We're different. We look at – and assess – the quality of the relationships that large companies have with their biggest B2B clients. And if you think about it, why would good customers NOT want to provide feedback on their relationship with you, particularly if their account manager has convinced them that it's an important part of their ongoing customer feedback process, and that their input is genuinely used to help improve the service given not just to them but to all clients?

The 5 pieces of advice I give to our clients are:

1. Spend Time Getting A Good Contact List Ready.

Most of our clients tell us they can pull together a list of key client contacts in a week. Two at the most. Our experience tells us that it takes at least 4-6 weeks to come up with a really good clean list of customer contacts who have a strong view of their relationship with our client. If the list isn’t compiled properly, we end up polling the views of people who really don’t have a strong view on the company, and who won’t be interested in responding.

2. Pre-Sell The Assessment To Customers.

One of our clients has been achieving customer completion rates in excess of 70% on a consistent basis for a number of years now. It does this because the CEO – together with the account managers – has managed to convince his key accounts that the 10-15 minutes they invest in providing feedback WILL result in a better service. "Tell me what's wrong, and I promise we'll do our best to fix it."

3. Make Sure to Contact Customers While The Assessment Is Live.

We normally hold our assessments open for two weeks and we know from experience that if account managers have been properly briefed to mention the assessment in every conversation they have with a client during those two weeks, the completion rates will improve dramatically.

4. Manage The Campaign Smartly.

This is not rocket science, but you would be amazed at the number of companies that want to run assessments over school holiday periods, or during particular times of the year that may coincide with the busiest time of the year for their customers. Plan your launch dates in advance, and think about the timing for issuing reminders. We usually recommend launching a customer assessment on a Tuesday morning, with the final reminder going out on the Tuesday two weeks later. That means that even if somebody is out of the office for two weeks, they'll still have an opportunity to provide feedback.

5. Don’t Panic At The End of Week 1.

We normally see a flurry of activity during the first six or eight hours of a B2B campaign and typically the completion rate after Day 1 is about 8%. At the end of the first week (before we send out a first reminder) it’s often the case that the response rate hasn’t broken through the 10% barrier. This is not unusual. Completion rates will increase and a message in the final reminder that “This assessment is closing today” usually elicits a final flurry of responses!

As I said, a lot of this isn’t rocket science but it does require a bit of advance planning. If you do put the effort in up-front, you’ll see it rewarded in significantly higher completion rates.

What is a ‘Good’ B2B Net Promoter Score?

So what’s a good Net Promoter Score* for a B2B company?

It’s a question we get asked a lot. Sometimes the question comes in a slightly different form: “What NPS target should we set for the company? 25% seems low, so maybe 50%? Or should we push the boat out and aim for 70%?”

Well, it all depends. On a number of different factors. As we mentioned in an earlier blog, it can even depend on factors such as whether your customers are American or European.

We can't state often enough how crucial it is to understand how these factors (we'll discuss them in detail below) affect the overall Net Promoter Score you receive: the way the NPS is calculated makes it incredibly sensitive to small changes in individual customer scores. Be aware of these factors when deciding on a realistic NPS figure to aim for.

HOW IS THE NET PROMOTER SCORE CALCULATED?

For the uninitiated, a company’s Net Promoter Score is based on the answers its customers give to a single question: “On a scale of 0 to 10, how likely are you to recommend Company X to a friend or colleague?” Customers who score 9 or 10 are called ‘Promoters’. Those who score 7 or 8 are ‘Passives’ while any customer who gives you a score of 6 or below is a ‘Detractor’. The actual NPS calculation is:

Net Promoter Score = The % of Promoters minus the % of Detractors

Theoretically, companies can have a Net Promoter Score ranging from -100% to +100%.

Most Europeans consider a score of 8 out of 10 as a pretty positive endorsement of any B2B product or service provider, but in the NPS world, a person who scores you 8 is a 'Passive' and therefore gets ignored when calculating the Net Promoter Score (see the calculation above).

Here’s the thing. If you can persuade a few of your better customers to give you 9 instead of 8, then suddenly you’ve boosted your Promoter numbers significantly. We know more than a handful of account managers who carefully explain to their clients that 8/10 is of no value to them whatsoever and that if they appreciate the service they are getting they really do need to score 9 or 10. Sure, there’s always a little ‘gaming’ that goes on in client feedback forms, particularly when performance-related bonuses are dependent on the scores. However, we find it intriguing to see the level of ‘client education’ that account managers engage in when the annual NPS survey gets sent out!
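For anyone who wants to see the arithmetic in action, here's a minimal sketch of the calculation in Python. The ten scores are invented purely for illustration (they're not Deep-Insight data), and the example shows just how much a couple of 8s becoming 9s can move the score:

```python
def nps(scores):
    """Net Promoter Score: % of Promoters (9-10) minus % of Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten hypothetical customers. The four 8s are 'Passives' and contribute
# nothing to the score.
before = [9, 8, 8, 8, 8, 7, 7, 6, 5, 9]
print(nps(before))  # (2 promoters - 2 detractors) / 10 -> 0.0

# Persuade two of those 8s to give a 9 instead: the NPS jumps 20 points,
# even though the average score has barely moved (7.5 to 7.7).
after = [9, 9, 9, 8, 8, 7, 7, 6, 5, 9]
print(nps(after))   # (4 promoters - 2 detractors) / 10 -> 20.0
```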

What Factors Impact Your Net Promoter Score?

We said at the outset that the Net Promoter Score you achieve is dependent on a number of factors. So what are they?

1. Which geographical region do your customers come from?
We’ve covered this point in an earlier discussion with Professor Anne-Wil Harzing – Americans will score higher than Europeans – probably 10% higher and possibly even more.

2. Do you conduct NPS surveys by telephone or face-to-face or by email?
In the UK and Ireland, we don’t like giving bad news – certainly not in a face-to-face (F2F) discussion. Even if we’re talking over the phone, we tend to modify our answers to soften the blow if the feedback is negative. Result: scores are often inflated. In our experience, online assessments give more honest feedback but can result in scores that are at least 10% lower than in telephone or F2F surveys. This gap can be smaller in countries like the Netherlands and Australia where conversations and customer feedback can be more robust. It’s a cultural thing.

3. Is the survey confidential?
Back to the point about culture – it’s easier to give honest feedback if you have the choice of doing so confidentially, particularly if the customer experience has been negative and you have a harsh message to deliver to your service or product provider. Surveys that are not confidential tend to give a rosier picture of the relationship than those that are confidential.

4. Is there a governance structure in place to determine which clients (and which individuals in those client companies) are included in the survey?
At Deep-Insight, we advocate a census approach when it comes to customer feedback: every B2B customer above a certain size MUST be included in the assessment. No ifs or buts. Yet we are often amazed by the number of companies that allow exceptions such as "We're at a very sensitive stage of discussions with Client X so we're not going to include them on the list this year" or "We've just had a major delivery problem at Client Y – they certainly won't appreciate us asking them now what they think of us". In many cases, it's more blatant – customers are excluded simply because everybody knows they are going to give poor feedback and pull down the overall scores. In some cases, it's a little more subtle, particularly where it's left to the account manager to decide which individuals to survey in a particular account. A proper governance structure is required to ensure 'gaming' is kept to a minimum and that the assessment process has credibility. If a company surveys its Top 100 accounts annually, senior management must be given the final say over which clients are added to or taken off the list. It's not feasible to have the MD approve every single client, but at least make sure the MD understands which of the major accounts – and which individuals in those accounts – are to be included on the list.

5. Is the survey carried out by an independent third party, or is it an in-house survey?
In-house surveys can be cost-effective but suffer from a number of drawbacks that generally tend to inflate the scores. For starters, in-house surveys are rarely seen as confidential, and are more prone to ‘gaming’ than surveys that are run by an independent third party. We have seen cases where in-house surveys have been replaced by external providers and the NPS scores have dropped by a whopping 30% or more. Seriously, the differences are that significant.

So What Is a Good Score?

Now, coming back to the question of what constitutes a good Net Promoter Score in a B2B environment, here’s our take on it.

Despite the claims one hears at conferences and around the water cooler that "we achieved 52% in our last NPS survey" or "we should be setting the bar higher – the NPS target for 2015 is going to be 60%", such scores are rarely, if ever, achieved. We've been collecting NPS data for B2B clients since 2006 and we have customer feedback from clients across 86 different countries. Our experience is that in a well-run, properly governed, independent and confidential assessment, a Net Promoter Score of 50% or more is almost impossible to achieve. Think about it. To get to 50%, you need a profile like the one sketched below, where a significant majority of responses are 9 or 10 and most of the others are pretty close to that level. In Europe, that simply doesn't happen.
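To put a hypothetical number on it (the profile below is invented, not taken from our benchmark data): even when six out of ten respondents score 9 or 10, a single detractor means the score only just reaches +50%.

```python
def nps(scores):
    """Net Promoter Score: % of Promoters (9-10) minus % of Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Six Promoters, three Passives and one Detractor: an exceptionally
# strong profile that still only just reaches +50%.
profile = [10, 10, 9, 9, 9, 9, 8, 8, 7, 6]
print(nps(profile))  # (6 - 1) / 10 -> 50.0
```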

Our experience of B2B assessments is that a Net Promoter Score of +30% is truly excellent, and that means you are seen as ‘Unique’ by your customers. A Net Promoter Score of around +10% is par for the course – consider that an average score. A negative NPS is not unusual – approximately one third of our B2B customers are in negative territory and one in ten of our clients score -30% or even lower.

In fairness, Deep-Insight's customer base is predominantly European or Australian, so we also need to be careful about how we benchmark different divisions within the same company that are in different regions or markets.

In our opinion, the best benchmark – for a company, business unit or division – is last year’s score. If your NPS is higher this year than it was last year, and nothing else has changed, then you’re moving in the right direction. And if your NPS was positive last year, and is even more positive this year, happy days!

* Net Promoter® and NPS® are registered trademarks, and Net Promoter System℠ and Net Promoter Score℠ are service marks, of Bain & Company, Satmetrix Systems and Fred Reichheld

Satisfaction or ‘Statisfaction’?

One of my esteemed colleagues recently sent a draft document to me that had a typo – satisfaction had been spelt with an extra ‘t’, making up a new word ‘statisfaction’.

That got me thinking!

I have been involved in numerous movements and initiatives to drive customer-focused business improvement for over 25 years – from Total Quality & Customer Satisfaction (CSat) through to Net Promoter Score (NPS) and Customer Relationship Quality (CRQ).

One thing that I have learned working with hundreds of companies across the world is that:

IT’S NOT ABOUT THE SCORE – IT’S ABOUT THE CUSTOMERS

Businesses like things quantified (there's a perception that companies are run by accountants nowadays), and on the whole I go along with the "what gets measured gets managed" mantra (see below), so I fully endorse customer experience and internal capability measurement.

I also like statistics! I'm intrigued by the fact that (as happened recently at one of our clients) the average score on the Net Promoter question can go up while the NPS itself goes down! I love exploring how 'the same' internal capability score can be made up of completely different profiles of strength, weakness, consistency and impact across the organisation.
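Here's a minimal sketch of that apparent paradox, using three invented scores (not real client data): the average rises, but because the NPS only cares about which band each score falls into, the NPS itself falls.

```python
def nps(scores):
    """Net Promoter Score: % of Promoters (9-10) minus % of Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def average(scores):
    return sum(scores) / len(scores)

# Year 1: two Promoters and one Detractor.
year1 = [9, 9, 5]
print(average(year1), nps(year1))  # ~7.67 average, NPS ~+33.3

# Year 2: everyone now scores 8. The average is up, but all three
# responses are 'Passives', so the NPS falls to zero.
year2 = [8, 8, 8]
print(average(year2), nps(year2))  # 8.0 average, NPS 0.0
```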

The first trouble with ‘the numbers’ (scores, averages, top-box, etc.) is that they DE-HUMANISE their source – our customers and how we manage their experience and value.

Yes, verbatims are often included in the appendices of research reports and summarised into frequency graphs of positive and negative sentiment (quantification again!), but I really wonder how many executives actually read every customer comment.

My point here is that customers are on a JOURNEY, and have a STORY to tell, but organisationally we’re only interested in a number.

My second problem with ‘the numbers’ is that hitting the score target can easily become the objective in itself rather than improving organisational capabilities. I have seen this lead to many counter-cultural, and indeed downright destructive, behaviours:

-Deselection of unhappy or difficult customers from surveys

-Writing new strategies instead of implementing the one you’ve got

-NPS begging – “please give me a 9 or 10 or I’ll get fired”

-Only ever addressing ‘quick wins’ – never the underlying systemic issues

-Blaming sample sizes and methodologies as an excuse for inactivity

-Blatant attempts to fix the scores (e.g. fabricated questionnaire completions, 'evidence' of capability that is in fact just a PowerPoint slide)

-Corporate tolerance of low-scorers – many companies seem content with the fact that large proportions of their customers hate them!

-Putting metrics into performance scorecards but with such a low weighting (vs. sales) that nobody really cares

-Improving “the process” instead of “the journey”

-No follow-up at a personal level because of research anonymity; or inconsistent follow-up if anonymity is waived, often only of low scorers, who are treated as complainants. What about thanking those who compliment you, and asking advocates for referrals?

I could go on, but I hope the point is made – beware of “what gets measured gets managed” becoming:

“WHAT GETS MEASURED GETS MANIPULATED”

So instead of targeting statistical scores, seek out ways of improving your systemic capabilities to cost-effectively manage your customer experience – and then listen to what your customers are saying about how satisfying it is.

By the way, your scores will improve too!

 

Peter Lavers is Deep-Insight's UK MD. If you'd like to find out more about how to overcome these issues, please contact Peter here.