5 Things To Remember To Get Your Completion Rates Up

One of the questions we get asked a lot is: “What sort of completion rates do you guys normally get on an assessment?”

Well, the answer is that it depends on what sort of assessment you’re talking about – we provide feedback on relationships with customers, channel partners and suppliers, and the completion rates differ from one type of assessment to the next:

- For employee assessments, our typical completion rate is in excess of 90%.

- For corporate customer and channel partner assessments, it’s typically 35-40%.

- For supplier assessments, the average completion rate is somewhere in the middle: 60-70%.

The next question we get asked is “Is it really that high?”

Well, we mainly get asked that question in connection with customer assessments, as some of our clients think 35-40% sounds impressive. This is particularly the case when people compare our figures to the ones you might get on typical consumer surveys, where sometimes as few as 2% of consumers will bother to complete a questionnaire (Petchenik & Watermolen, 2011).

Remember that we are talking about existing, often long-standing, business-to-business (B2B) relationships – that’s what we do at Deep-Insight. We’re not a consumer research company. In fact, we’re not even a market research company, although we are often compared to firms like TNS or Gallup. We’re different. We look at – and assess – the quality of the relationships that large companies have with their biggest B2B clients. And if you think about it, why would good customers NOT want to provide feedback on their relationship with you, particularly if their account manager has convinced them that it’s an important part of their ongoing customer feedback process, and that their input is genuinely used to help improve the service given not just to them but to all clients?

The 5 pieces of advice I give to our clients are:

1. Spend Time Getting A Good Contact List Ready.

Most of our clients tell us they can pull together a list of key client contacts in a week. Two at the most. Our experience tells us that it takes at least 4-6 weeks to come up with a really good clean list of customer contacts who have a strong view of their relationship with our client. If the list isn’t compiled properly, we end up polling the views of people who really don’t have a strong view on the company, and who won’t be interested in responding.

2. Pre-Sell The Assessment To Customers.

One of our clients has consistently achieved customer completion rates in excess of 70% for a number of years now. It does this because the CEO – together with the account managers – has convinced his key accounts that the 10-15 minutes they invest in providing feedback WILL result in a better service. “Tell me what’s wrong, and I promise we’ll do our best to fix it.”

3. Make Sure to Contact Customers While The Assessment Is Live.

We normally hold our assessments open for two weeks and we know from experience that if account managers have been properly briefed to mention the assessment in every conversation they have with a client during those two weeks, the completion rates will improve dramatically.

4. Manage The Campaign Smartly.

This is not rocket science, but you would be amazed at the number of companies that want to run assessments over school holiday periods, or at times of the year that coincide with their customers’ busiest periods. Plan your launch dates in advance, and think about the timing for issuing reminders. We usually recommend launching a customer assessment on a Tuesday morning, with the final reminder going out on the Tuesday two weeks later. That means that even if somebody is out of the office for two weeks, they’ll still have an opportunity to provide feedback.
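
To make that timing concrete, here is a minimal sketch in Python (a hypothetical helper, purely illustrative and not part of any assessment platform) that picks the next Tuesday as the launch date and schedules the final reminder for the Tuesday two weeks later:

```python
from datetime import date, timedelta

def campaign_dates(earliest_start: date):
    """Return (launch, final_reminder) dates for a two-week assessment."""
    # weekday(): Monday == 0, so Tuesday == 1
    days_until_tuesday = (1 - earliest_start.weekday()) % 7
    launch = earliest_start + timedelta(days=days_until_tuesday)
    final_reminder = launch + timedelta(weeks=2)  # the Tuesday two weeks later
    return launch, final_reminder

launch, final = campaign_dates(date(2015, 9, 1))
print(launch, final)  # 2015-09-01 2015-09-15 (both Tuesdays)
```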

5. Don’t Panic At The End of Week 1.

We normally see a flurry of activity during the first six or eight hours of a B2B campaign, and typically the completion rate after Day 1 is about 8%. At the end of the first week (before we send out a first reminder) it’s often the case that the completion rate hasn’t broken through the 10% barrier. This is not unusual. Completion rates will increase, and a message in the final reminder that “This assessment is closing today” usually elicits a final flurry of responses!

As I said, a lot of this isn’t rocket science but it does require a bit of advance planning. If you do put the effort in up-front, you’ll see it rewarded in significantly higher completion rates.

What is a ‘Good’ B2B Net Promoter Score?

So what’s a good Net Promoter Score* for a B2B company?

It’s a question we get asked a lot. Sometimes the question comes in a slightly different form: “What NPS target should we set for the company? 25% seems low, so maybe 50%? Or should we push the boat out and aim for 70%?”

Well, it all depends. On a number of different factors. As we mentioned in an earlier blog posting, it can even depend on factors such as whether your customers are American or European.

We can’t state often enough how crucial it is to understand how these various factors (we’ll discuss them in detail below) affect the overall Net Promoter Score you receive, because the way NPS is calculated makes it incredibly sensitive to small changes in individual customer scores. Keep these factors in mind when deciding on a realistic NPS figure to aim for.

HOW IS THE NET PROMOTER SCORE CALCULATED?

For the uninitiated, a company’s Net Promoter Score is based on the answers its customers give to a single question: “On a scale of 0 to 10, how likely are you to recommend Company X to a friend or colleague?” Customers who score 9 or 10 are called ‘Promoters’. Those who score 7 or 8 are ‘Passives’ while any customer who gives you a score of 6 or below is a ‘Detractor’. The actual NPS calculation is:

Net Promoter Score = The % of Promoters minus the % of Detractors

Theoretically, companies can have a Net Promoter Score ranging from -100% to +100%.
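
For readers who want to see the arithmetic spelt out, here is a minimal sketch in Python (illustrative only, not Deep-Insight’s own tooling) of that calculation, assuming each response is an integer score from 0 to 10:

```python
def net_promoter_score(scores):
    """Return the Net Promoter Score (in %) for a list of 0-10 ratings."""
    if not scores:
        raise ValueError("at least one response is required")
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0 to 6
    # Passives (7 or 8) count towards the total but not the score itself
    return 100 * (promoters - detractors) / len(scores)

# 4 Promoters, 3 Passives and 3 Detractors out of 10 responses -> NPS of +10%
print(net_promoter_score([10, 9, 9, 9, 8, 7, 7, 6, 5, 3]))  # 10.0
```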

Most Europeans consider a score of 8 out of 10 a pretty positive endorsement of any B2B product or service provider, but in the NPS world, a person who scores you 8 is a ‘Passive’ and therefore gets ignored when calculating the Net Promoter Score (see the calculation above).

Here’s the thing. If you can persuade a few of your better customers to give you 9 instead of 8, then suddenly you’ve boosted your Promoter numbers significantly. We know more than a handful of account managers who carefully explain to their clients that 8/10 is of no value to them whatsoever and that if they appreciate the service they are getting they really do need to score 9 or 10. Sure, there’s always a little ‘gaming’ that goes on in client feedback forms, particularly when performance-related bonuses are dependent on the scores, but we find it intriguing to see the level of ‘client education’ that account managers engage in when the annual NPS survey gets sent out!

What Factors Impact Your Net Promoter Score?

We said at the outset that the Net Promoter Score you achieve is dependent on a number of factors. So what are they?

1. Which geographical region do your customers come from?
We’ve covered this point in an earlier discussion with Professor Anne-Wil Harzing – Americans will score higher than Europeans – probably 10% higher and possibly even more.

2. Do you conduct NPS surveys by telephone or face-to-face or by email?
In the UK and Ireland, we don’t like giving bad news – certainly not in a face-to-face (F2F) discussion. Even if we’re talking over the phone, we tend to modify our answers to soften the blow if the feedback is negative. Result: scores are often inflated. In our experience, online assessments give more honest feedback but can result in scores that are at least 10% lower than in telephone or F2F surveys. This gap can be smaller in countries like the Netherlands and Australia where conversations and customer feedback can be more robust. It’s a cultural thing.

3. Is the survey confidential?
Back to the point about culture – it’s easier to give honest feedback if you have the choice of doing so confidentially, particularly if the customer experience has been negative and you have a harsh message to deliver to your service or product provider. Surveys that are not confidential tend to give a rosier picture of the relationship than those that are confidential.

4. Is there a governance structure in place to determine which clients (and which individuals in those client companies) are included in the survey?
At Deep-Insight, we advocate a census approach when it comes to customer feedback – every B2B customer above a certain size must be included in the assessment. No ifs or buts. Yet we are constantly amazed by the number of companies that allow exceptions such as “We’re at a very sensitive stage of discussions with Client X so we’re not going to include them on the list this year” or “We’ve just had a major delivery problem at Client Y – they certainly won’t appreciate us asking them now what they think of us”. In many cases, it’s more blatant – customers are excluded simply because everybody knows they are going to give poor feedback and pull down the overall scores. In some cases, it’s a little more subtle, particularly where it’s left to the account manager to decide which individuals to survey in a particular account. A proper governance structure is required to ensure ‘gaming’ is kept to a minimum and that the assessment process has credibility. If a company surveys its Top 100 accounts annually, senior management must be given the final say over which clients are added to, or taken off, the list. It’s not feasible to have the MD approve every single client, but at least make sure the MD understands which of the major accounts – and which individuals in those accounts – are to be included on the list.

5. Is the survey carried out by an independent third party, or is it an in-house survey?
In-house surveys can be cost-effective but suffer from a number of drawbacks that generally tend to inflate the scores. For starters, in-house surveys are rarely seen as confidential, and are more prone to ‘gaming’ than surveys that are run by an independent third party. We have seen cases where in-house surveys have been replaced by external providers and the NPS scores have dropped by a whopping 30% or more. Seriously, the differences are that significant.

So What Is a Good Score?

Now, coming back to the question of what constitutes a good Net Promoter Score in a B2B environment, here’s our take on it.

Despite the claims one hears at conferences and around the water cooler that “we achieved 52% in our last NPS survey” or “we should be setting the bar higher – the NPS target for 2015 is going to be 60%”, scores like these are rarely, if ever, achieved. We’ve been collecting NPS data for B2B clients since 2006 and we have customer feedback from clients across 86 different countries. Our experience is that in a well-run, properly-governed, independent and confidential assessment, a Net Promoter Score of 50% or more is almost impossible to achieve. Think about it. To get to 50%, you need a response profile in which a significant majority of responses are 9 or 10 and most of the others are pretty close to that level. In Europe, that simply doesn’t happen.
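
To put purely illustrative numbers on it: with 100 respondents, even if only 10 score you 6 or below, you would still need 60 people to score you 9 or 10 to reach 50% (60% Promoters minus 10% Detractors equals +50%), and every additional Passive or Detractor pushes that bar higher still.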

Our experience of B2B assessments is that a Net Promoter Score of +30% is truly excellent, and that means you are seen as ‘Unique’ by your customers. A Net Promoter Score of around +10% is par for the course – consider that an average score. A negative NPS is not unusual – approximately one third of our B2B customers are in negative territory and one in ten of our clients score -30% or even lower.

In fairness, Deep-Insight’s customer base is predominantly European or Australian, so we also need to be careful about how we benchmark different divisions within the same company that are in different regions or markets.

In our opinion, the best benchmark – for a company, business unit or division – is last year’s score. If your NPS is higher this year than it was last year, and nothing else has changed, then you’re moving in the right direction. And if your NPS was positive last year, and is even more positive this year, happy days!

* Net Promoter® and NPS® are registered trademarks and Net Promoter System℠ and Net Promoter Score℠ are trademarks of Bain & Company, Satmetrix Systems and Fred Reichheld

Susan and Bill have Relationship Problems! (Part III)

The last time we met Susan and Bill, they were discussing survival tactics. Thankfully, they have managed to get the company back on an even keel – excuse the boating pun – over the past few months and now have a new challenge to face.

At the last board meeting, the CEO asked them to prepare a strategy that would transform the company from an ‘Even Keel’ company to becoming a ‘Leading Edge’ company.

“I don’t want us to be competing on price. I want us to be seen by our clients as unique, innovative, really easy to do business with. Now it’s up to you two to make that happen. Get back to me by 23 September with a strategy. And it better be good.”

Unfortunately, Susan and Bill are at loggerheads trying to plot a course towards the Leading Edge organisation that their CEO so desperately wants the company to become.

Different Views from Sales and Marketing

Susan: “Leading Edge is a simple sales concept. Leading Edge = More Sales. It really is as simple as that. We can become Leading Edge if Bill provides me with market-beating products. That’s the thing he can’t seem to grasp.”

Bill: “Leading Edge is a complex brand concept. It’s how you are seen vis-à-vis the competition. We’re a services business and the differentiating factor is the quality of our service and account teams, not the products. That’s what Susan fails to grasp.”

Susan’s view is (as usual) plain and easy to grasp: “Give me decent products/services and I’ll sell them. If the products/services are Leading Edge, we’ll sell more of them. It’s not really my job to DESIGN them, so don’t go asking me about transforming this company into a leading edge organisation.”

Bill has a slightly more nuanced view. He accepts that it’s his job to translate customer needs into the sorts of products and services that the clients will love and buy, but he also makes the valid point that he and Susan are in a B2B services business, and that Susan’s account teams (as well as the Service/Delivery teams) have a key role in making the service a Leading Edge one in the client’s mind.

Bridging the Gap

As usual, Bill is half-right. And so is Susan.

But let’s start by bringing a little clarity on the terms we are using. Let’s begin with a definition of what a ‘unique’ brand is in the business-to-business world.

In the B2B world, the uniqueness of your brand is dependent on a combination of whether you provide a unique Solution for your clients and whether they find the Experience of working with you to be uniquely satisfying.

Deep-Insight defines Solution as a combination of innovation, leading edge and value-for-money. These are three related but slightly different concepts, but if you score well on all three, the chances are that you have an offering that can help your clients improve their standing in the marketplace in a way that none of your competitors can provide. When we talk about ‘solutions’ we’re not just talking ‘product’. As Bill says, it’s as much about how the account managers, sales and delivery teams position your company’s product or service as it is about the product/service itself.

Experience is a measure of how easy you are to do business with and whether you are seen as a trusted partner. You can have the best products or services in the world, but if your clients can’t work with you and don’t see your people as trusted partners, your brand is going to suffer.

So when Bill and Susan’s CEO talks about wanting to be a unique, innovative, leading edge company, he’s really talking about building a B2B brand that excels at all the different elements we group under the headings Solution and Experience. And that means Bill and Susan need to work together to get all those elements right. But as this methodology shows, you can’t build a unique B2B brand without having an excellent service to underpin it. So Bill and Susan are going to have to rope in the Operations Director as well. We wish them well on their journey.

Ultimately, the real definition of Leading Edge will be dictated by your customers. But you’ll never know if you don’t ask them.

Susan and Bill have Relationship Problems!

The Susan & Bill Trilogy

When we launched Deep-Relationship-NPS in early-2014, we created a storyline around two fictitious characters called Bill (a thoughtful but somewhat introverted Marketing Director) and Susan (a more aggressive low-attention-span Sales Director). They may be fictitious but they bear more than a passing resemblance to some sales and marketing directors we have met in client organisations in the past.

Episode 1 finds Susan and Bill having relationship problems. Well, their problems are primarily about understanding the relationship their company has with its main corporate clients, but there is also some evidence of tension between Susan and Bill themselves – the sort of natural tension that exists between Sales and Marketing in any large organisation.

EPISODE 1: Susan and Bill have Relationship Problems!

Susan. Sales Director.

“I want some real customer feedback that helps my sales managers manage their key accounts for the long-term. All Marketing are interested in is some box-ticking exercise for the folks in HQ.”

Bill. Marketing Director.

“I need to provide HQ with Net Promoter Score (NPS®) metrics. It’s our corporate policy. For some reason, Sales just don’t seem to get it. NPS is a useful tool if they would only figure out how to use it properly.”

Net Promoter Score (NPS) is a simple, easy-to-use metric for measuring customer loyalty. Many large, well-known companies now use it as a key business metric. The concept behind NPS is simple: loyal customers are more willing to recommend you to a friend or colleague. To find out how loyal your customer base is, measure their willingness to recommend; the higher your NPS (the % of Promoters less the % of Detractors), the more loyal your customer base is.

The problem is that while NPS is easy to calculate, many sales directors find it hard to turn the answer to a single question “Would you recommend Company XYZ to a friend or colleague?” into a clear set of actions that can be used to improve a complex web of relationships in a large corporate account – or across your full customer portfolio.

Does NPS work for B2B Organisations?

Yes!

NPS provides a good starting point for understanding complex B2B relationships but it must be supplemented by other metrics that help account managers take action at an INDIVIDUAL account level, as well as helping senior executives focus on a small number of strategic initiatives across ALL accounts.

Deep-Insight already has a unique B2B methodology – Customer Relationship Quality (CRQ™) – that helps Sales Directors identify which of their major accounts are their greatest Ambassadors, and which are on the point of defection (Ambivalents, Stalkers and Opponents).

More important, the CRQ methodology identifies – for each account manager – what needs to be done to transform an Opponent to an Ambivalent, and a Rational to an Ambassador.

Deep-Relationship-NPS combines the power of our CRQ methodology with the internationally-recognised NPS benchmark. NPS tells you if you have a problem; CRQ tells you what the problem is and how to address it.

Back to Susan and Bill

Bill needs NPS data in a comparable format to data from other parts of the organisation, with feedback on brand, image, product and pricing. With Deep-Relationship-NPS, Bill gets his NPS data in exactly the way he needs it. That keeps Bill and his Marketing team happy.

On the other hand, Susan gets detailed account-level customer relationship feedback for her sales teams, and by looking at levels of trust and commitment, Susan can avoid any surprises when contracts come up for renewal. That keeps Susan and her Sales team happy.

Join us next week for Episode 2.