Why Sample Sizes are Nonsense (in the B2B World)
26.01.2016, by John O'Connor

Most of Deep-Insight’s work is based on helping large international B2B organisations run effective Customer Experience (CX) programmes.

The key to running a good CX programme is understanding how to change the culture of an organisation to make it truly customer-centric, and that has to be based on regular high-quality conversations – both formal and informal – with your B2B clients.

Without regular client feedback, sales directors and account teams will not be in a position to address small issues before they escalate to a point where they damage or destroy the client relationship.

When we plan Customer Relationship Quality (CRQ™) assessments for our clients, one of the questions I regularly get asked is “How many of our clients should we sample?” The stock answer I’ve been using for the past decade – and the one I still use – is “Think Census, Not Sample”. In other words, get feedback from your entire client base – every single one.

It’s not meant to be a glib response but there are a few subtleties underpinning the answer that are worth exploring.


Many market research and customer insight people – even in B2B organisations – tend to approach the subject of customer feedback from a consumer perspective, where there are tried and trusted approaches for surveying large customer bases, or “populations” to use the technical term. If you’re not that familiar with these approaches or terminology like random sampling, margins of error and confidence levels, have a look at the Box below.


If you’re not a market researcher or statistician, don’t worry – there are plenty of good primers on the Internet explaining the basics of sampling techniques and associated terms – here’s one from YouGov.

You’ll also find several handy little calculators on the Internet (here’s a link to one) that tell you how many respondents are required for a particular population (customer base) at a given confidence level and margin of error. From there, it’s easy to work out how many individuals you need to invite to participate in a survey in order to get a robust answer.

Most opinion polls are conducted with a random sample of at least 1,000 people, and here’s the reason why: pollsters like to be confident that their results are within a margin of error of 3% or less. Suppose the voting population in a country is 10 million people. Plug that number into our online calculator and we see that a 3% margin of error at a 95% confidence level requires a sample of 1,067.
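For the curious, the arithmetic behind that 1,067 figure is the standard sample-size formula with a finite population correction. Here is a minimal Python sketch, assuming the usual conventions (a z-score of 1.96 for 95% confidence and the conservative proportion p = 0.5):

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Required sample size for a given population and margin of error.

    Uses the standard formula n0 = z^2 * p * (1-p) / e^2, then applies
    the finite population correction n = n0 / (1 + (n0 - 1) / N).
    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative assumption (it maximises the required sample).
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A voting population of 10 million, 3% margin of error, 95% confidence:
print(sample_size(10_000_000, 0.03))  # → 1067
```

Note that for a population this large the finite population correction barely matters – which is exactly why pollsters can get away with roughly the same 1,000-person sample whether the country has 5 million voters or 50 million.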

All that is fine if you’re working in a consumer environment or if you have tens (or hundreds) of thousands of SME customers. However, traditional sampling techniques have far less value when you are a B2B organisation and the vast proportion of your revenues is generated by a handful of large clients. There may be a “long tail” of smaller customers, but in most cases the Pareto Principle applies: 80% of revenues are generated by 20% of clients. In some cases, the ratio is closer to 90/10 than 80/20. Here, the traditional sampling approach needs to be chucked out of the nearest window and a different set of principles applied.


Our approach is to be pragmatic and follow the money – concentrate on those clients that generate the majority of the revenues, and do a ‘deep dive’ into those relationships.

It’s probably easier to explain using an example.

Case Study – Large UK Services Company
Revenues: Over £1 billion
Key Clients: 100
One of Deep-Insight’s UK clients has over 10,000 employees and generates annual revenues in excess of £1 billion. However, its customer base is actually quite small and the contracts it has with these key clients are extremely large. The company has several hundred clients in total but the vast majority of its revenues come from the ‘Top 100’ and even among the ‘Top 100’ the revenues are skewed heavily towards the 10 largest clients.

So how do you run a CX programme when your client base looks like this? In that particular case, the company has chosen to focus exclusively on its ‘Top 100’ clients. Purists might argue that this is not representative of the full customer base. This may well be true but it definitely is representative of the full revenue base, and that’s the commercial perspective of “following the money.”

From a pragmatic perspective, it makes little sense to take a sample of the Top 100 clients. You should attempt to get feedback from every single one and ideally you want to get a wide representation of views from across each of those 100 clients.

Even from a statistical perspective, it makes little sense to sample – if you need convincing, have a look at the second Box below.


Suppose there are (at most) 10 key individuals in each ‘Top 100’ client whose feedback is really “important” – in other words, the decision-makers who will renew the current contract when it comes up for renewal. That’s still a population of only 1,000 individuals across your Top 100 clients. Run the numbers and you’ll see that you need to include virtually all of them in order to get a statistically robust result.

Let’s plug those figures from our Case Study into the online calculator and see what happens.

For a population of 1,000 decision-makers, we need 278 responses to get a robust score (robust here meaning a margin of error of 5% at a confidence level of 95%). Deep-Insight will typically achieve completion rates of 35-40% from its online B2B assessments, so that means we need to invite 700-800 of those 1,000 key individuals to participate.

If you think a margin of error of 5% is too high, then plug 3% into the online calculator instead. Now the number of responses required jumps to 517 out of 1,000. This means you DEFINITELY need to invite all 1,000 to participate to get anywhere near your target margin of error.
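The case-study numbers can be checked with the same finite-population sample-size formula used for the polling example. A quick sketch (same assumptions: 95% confidence, conservative p = 0.5; the 35-40% completion rate is the Deep-Insight figure quoted above):

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Standard sample-size formula with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

needed_5pct = sample_size(1000, 0.05)  # responses needed at 5% margin of error
needed_3pct = sample_size(1000, 0.03)  # responses needed at 3% margin of error

# Invites required for the 5% target at 40% and 35% completion rates:
invites = (math.ceil(needed_5pct / 0.40), math.ceil(needed_5pct / 0.35))

print(needed_5pct, needed_3pct, invites)  # → 278 517 (695, 795)
```

At a 3% margin of error you need 517 responses – and at a 35-40% completion rate there is simply no way to get there without inviting the entire population of 1,000. With a small B2B population, the “sample” and the “census” converge.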

Successful CX programmes in B2B companies are not built around statistics. They are built around empowering staff and providing account managers with all the customer feedback they need to manage client relationships more effectively. That means getting feedback from ALL individuals in those key clients and working really hard with the account teams to get participation and completion rates as high as possible.

So when you’re planning your next Customer Relationship Quality (CRQ™) assessment, remember to get the account managers involved and “Think Census, Not Sample”.