Why Sample Sizes are Nonsense (in the B2B World)

Most of Deep-Insight’s work is based on helping large international B2B organisations run effective Customer Experience (CX) programmes.

The key to running a good CX programme is understanding how to change the culture of an organisation to make it truly customer-centric, and that has to be based on regular high-quality conversations – both formal and informal – with your B2B clients.

Without regular client feedback, sales directors and account teams will not be in a position to address small issues before they escalate to a point where they damage or destroy the client relationship.

When we plan Customer Relationship Quality (CRQ™) assessments for our clients, one of the questions I regularly get asked is “How many of our clients should we sample?” The stock answer that I’ve been using for the past decade is “Think Census, Not Sample”. In other words, get feedback from your entire client base – every single one – and it’s the answer I still use.

It’s not meant to be a glib response, but there are a few subtleties underpinning the answer that are worth exploring.

TRADITIONAL APPROACH TO SAMPLING

Many market research and customer insight people – even in B2B organisations – tend to approach the subject of customer feedback from a consumer perspective, where there are tried and trusted approaches for surveying large customer bases, or “populations” to use the technical term. If you’re not that familiar with these approaches or terminology like random sampling, margins of error and confidence levels, have a look at the Box below.

MARGINS OF ERROR, CONFIDENCE LEVELS AND SAMPLE SIZES – TRADITIONAL APPROACH

If you’re not a market researcher or statistician, don’t worry – there are plenty of good primers on the Internet explaining the basics of sampling techniques and associated terms – here’s one from YouGov.

You’ll also find several handy little calculators on the Internet (here’s a link to one) which tell you how many respondents are required from a particular population (customer base) to achieve a given confidence level and margin of error. From this, it’s easy to calculate the number of individuals you need to invite to participate in a survey in order to get a robust answer.

Most opinion polls are conducted with a random sample of at least 1,000 people, and here’s the reason why: pollsters like to be confident that their results are within a margin of error of 3% or less. Suppose the voting population in a country is 10 million people. Plug that number into our online calculator and we see that a 3% margin of error and a confidence level of 95% requires a sample of 1,067.
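If you’re curious what these calculators are doing behind the scenes, here’s a minimal Python sketch of the standard sample-size calculation. It assumes a z-score of 1.96 for 95% confidence, the most conservative proportion p = 0.5, and a finite population correction; the function name is purely illustrative.

```python
import math

def sample_size(population: int, margin_of_error: float,
                z: float = 1.96, p: float = 0.5) -> int:
    """Required number of responses for a given population and margin of error.

    Standard formula: n0 = z^2 * p * (1 - p) / e^2,
    followed by the finite population correction n = n0 / (1 + (n0 - 1) / N).
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Opinion poll example: 10 million voters, 3% margin of error, 95% confidence
print(sample_size(10_000_000, 0.03))  # -> 1,067
```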

All that is fine if you’re working in a consumer environment or if you have tens (or hundreds) of thousands of SME customers. However, traditional sampling techniques have less value when you are a B2B organisation and the vast majority of revenues is generated by a handful of large clients. There may be a “long tail” of smaller customers but in most cases the Pareto Principle applies, whereby 80% of revenues are generated by 20% of clients. In some cases, the ratio can be 90/10 rather than 80/20. In such cases, the traditional sampling approach needs to be chucked out of the nearest window and a different set of principles applied.

THE DEEP-INSIGHT APPROACH – FOLLOW THE MONEY

Our approach is to be pragmatic and follow the money – concentrate on those clients that generate the majority of the revenues, and do a ‘deep dive’ into those relationships.

It’s probably easier to explain using an example.

Case Study – Large UK Services Company
Revenues: Over £1 billion
Key Clients: 100
One of Deep-Insight’s UK clients has over 10,000 employees and generates annual revenues in excess of £1 billion. However, its customer base is actually quite small and the contracts it has with these key clients are extremely large. The company has several hundred clients in total, but the vast majority of its revenues come from the ‘Top 100’, and even among the ‘Top 100’ the revenues are skewed heavily towards the 10 largest clients.

So how do you run a CX programme when your client base looks like this? In that particular case, the company has chosen to focus exclusively on its ‘Top 100’ clients. Purists might argue that this is not representative of the full customer base. This may well be true, but it definitely is representative of the full revenue base, and that’s the commercial perspective of “following the money.”

From a pragmatic perspective, it makes little sense to take a sample of the Top 100 clients. You should attempt to get feedback from every single one, and ideally you want a wide representation of views from across each of those 100 clients.

Even from a statistical perspective, it makes little sense to sample – if you need convincing, have a look at the second Box below.

APPLYING THE DEEP-INSIGHT APPROACH

Suppose there are 10 key individuals (at most) in each ‘Top 100’ client whose feedback is really “important” – in other words, the decision-makers who will renew the current contract when it’s up for renewal. That’s still only a population of 1,000 individuals across your Top 100 clients. Run the numbers and you’ll see that you need to include them all in order to get a statistically robust result.

Let’s plug those figures from our Case Study into the online calculator and see what happens.

For a population of 1,000 decision-makers, we need 278 responses to get a robust score (robust being a margin of error of 5% and a confidence level of 95%). Deep-Insight typically achieves completion rates of 35-40% from its online B2B assessments, so that means we need to invite 700-800 of those 1,000 key individuals to participate.

If you think a margin of error of 5% is too high, then plug 3% into the online calculator. Now the number of responses jumps to 517 out of 1,000. This means you DEFINITELY need to invite all 1,000 to participate to get anywhere near your target margin of error.
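For anyone who prefers to check these figures rather than take the calculator’s word for it, the same sketch from the earlier Box (same assumptions: z = 1.96, p = 0.5, finite population correction) reproduces the numbers for a population of 1,000 decision-makers:

```python
import math

def sample_size(population: int, margin_of_error: float,
                z: float = 1.96, p: float = 0.5) -> int:
    # n0 = z^2 * p * (1 - p) / e^2, then the finite population correction
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

responses_5pct = sample_size(1_000, 0.05)   # -> 278 responses needed
responses_3pct = sample_size(1_000, 0.03)   # -> 517 responses needed

# Work backwards from a 35-40% completion rate to the number of invitations
invites_low = math.ceil(responses_5pct / 0.40)   # -> 695
invites_high = math.ceil(responses_5pct / 0.35)  # -> 795

print(responses_5pct, responses_3pct, invites_low, invites_high)
```

At a 3% margin of error you need more than half of all 1,000 decision-makers to respond, which at a 35-40% completion rate is impossible without inviting every single one of them – the maths and “Think Census, Not Sample” end up in the same place.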

Successful CX programmes in B2B companies are not built around statistics. They are built around empowering staff and providing account managers with all the customer feedback they need to manage client relationships more effectively. That means getting feedback from ALL individuals in those key clients and working really hard with the account teams to get participation and completion rates as high as possible.

So when you’re planning your next Customer Relationship Quality (CRQ™) assessment, remember to get the account managers involved and “Think Census, Not Sample”.

5 Generic Actions to Drive up your Relationship NPS Scores

We run B2B customer assessments for large corporate clients in the Netherlands and elsewhere. Very often I get asked questions like this after we deliver the customer feedback to senior management:

“OK, you’ve told us what our customers think of us, but what do we do about it now?”

“Tell us what we do in the next few weeks so that we don’t lose momentum.”

So here’s what I tell them:

1. BRIEF YOUR PEOPLE.

Typically, you need to brief at two levels and both are equally important:

– Executive Team. The senior executives in your organisation need to ‘own’ the overall results. If the customer feedback is negative or requires fundamental change, only the executive team can decide on the appropriate actions.

– Account Managers. These are the people who need to ‘own’ the results at account level. They are also the people who need to set up and run the account feedback sessions (see below).

Make sure the senior executives read all the verbatim comments and check whether their summary of the top issues agrees with the Deep-Insight analysis (they will understand the context better than we can). The most effective way of increasing Customer Relationship Quality (CRQ) and NPS scores is to have the programme driven by the executive team. Without drive and passion at this level, the programme will fail.

2. DISCUSS THE RESULTS WITH YOUR CLIENTS.

Typically this happens in two stages:

– Stage 1. Within 2 weeks of the assessment results being delivered, you should send a general communication to all customers that you invited to give feedback. This should include a general ‘Thank You’ message for participating in the assessment (OK, not everybody completed it – our completion rates are typically 35-40% – but let’s not be mean!) as well as a message that their account manager will be in touch to arrange a feedback session to discuss the results (the overall results plus the specific results for their account).

– Stage 2. Within 8 weeks (or whatever target you set – but you must set a target), the account managers should have completed face-to-face meetings with all key customers. Ideally, they will have used that discussion to create a ‘Joint Action Plan’ setting out what both parties need to do in order to address any issues unearthed in the assessment. Remember that the actions are on both sides – the account manager may need the client to change certain things as well.

3. FEED THE RESULTS INTO THE ANNUAL BONUS SCHEME.

If you haven’t done so already, think about incorporating the CRQ or NPS scores into account managers’ annual targets and bonus plans. If you have done so already, pay out the bonuses! This is probably the second most important driver of success in any customer experience programme. What gets measured and rewarded, gets done.

4. PLAN THE ACTIONS.

Again, do this at two levels:

– At corporate level. These are typically the key themes mentioned in the verbatim comments. Choose one or two big initiatives to work on during 2015. Keep it focused – any more than one or two initiatives will result in a dilution of effort.

– At client level. Each client will have its own specific set of issues including some ‘quick wins’ that can be addressed immediately.

5. MOST IMPORTANT, MONITOR PROGRESS.

If the results are poor, consider an interim assessment in 6 months (we call this a ‘Healthcheck’) but definitely repeat the feedback assessment after 12 months. If you don’t do this, you won’t know if you have achieved success.

None of the above five items are rocket science, but it strikes me as odd that some clients fail to take these actions. From 10 years of delivering Deep-Insight customer feedback to clients in the UK and the Netherlands, I know that these actions will result in better customer experience and improved feedback scores.