Your Views on Deep-Insight

In January, we asked you what you thought of your relationship with Deep-Insight, so let me start by saying THANK YOU to everybody who completed our own Customer Relationship Quality (CRQ) assessment.

Two years ago we had a CRQ score of 5.7 and a Net Promoter Score (NPS) of +37%. Our clients across Europe and in Australia told us that they saw us as “Unique” and that our solution was essential to their business.

I was delighted by these customer scores and more than a little humbled by the positive comments our clients shared with us. Since then, we have all been asking ourselves: Could we replicate those results? Could we remain Unique? Let’s get straight to the results and find out.


Bottom line: this year our scores are good but not as good as in our previous assessment.

We had a 45% completion rate, a CRQ of 5.4 and an NPS result of +23%.

These are good scores but, to our disappointment, our Uniqueness “prize” has been taken away from us. When we first saw the headline results, we felt like a hard-working team in a top restaurant that has just lost its Michelin star. The great Irish chef and restaurateur Kevin Thornton described that experience as akin to “a stab in the heart”. I don’t think we took the news as badly as Kevin did, but we certainly felt a little deflated.

On the plus side, we were only just outside the Unique zone, and the encouraging message for me was that the overall feedback was still very positive:

- Top quartile performance (even if it wasn’t top decile)
- Half of our customers are Ambassadors – the same as in our previous assessment
- No Ambivalents, Stalkers or Opponents in our customer portfolio – better than last time around
- A very positive reaction to the introduction of our Deep-Dive online platform



Being Unique was great. It was a validation of everything we had been striving to achieve. It meant we were on a par with the Top 10% of all companies globally. But it’s a challenge to maintain that Unique status, so it was important for us to understand what messages lay behind the headline results and address any common issues across our client base.

When we went through the results in more detail, we discovered that the number of clients who gave us higher scores this year was the same as the number who gave us lower scores. Our new clients also gave us very good scores, so it wasn’t an on-boarding issue – something we do see in some of our clients’ assessments. We did notice that one of our larger clients had scored us badly for not being flexible enough in our dealings with them, and this became very clear when we went through the verbatim comments. There was no shortage of suggestions on how we could improve our offering and regain our Unique status.

We want our Unique status back! What are we going to do about it? Based on your feedback, we are planning three sets of activities in 2018:

#1. Implement a benchmarking programme

While reading your comments, we noticed one word that kept getting mentioned: ‘benchmarking’. In previous years, we would tell our clients that the two best benchmarks you can have are against your own previous performance (have you made significant improvements since last year?) and against a Best in Class standard (are you seen as “Unique”?). Most of our clients buy into those messages, but there is still a strong desire among senior executives to understand how their organisations fare against their industry peers. Deep-Insight has over 15 years’ worth of historic CRQ and NPS data across a variety of B2B industry sectors, and this year we will start implementing benchmarking comparisons in our reporting.

#2. Bring more innovation to the table

In late 2016 we started to roll out a new online reporting and analysis tool called Deep-Dive. It has turned out to be a very popular addition, as it allows additional analysis and insights to be gleaned from the underlying data. This year, we will be making minor improvements to reporting and more intuitive navigation. In 2019, our aim is to give Deep-Dive a complete makeover with a new interface and features. We have also taken on board a number of comments regarding our product offering – for example, questionnaire length (“Can we make it shorter?”), anonymity (“Do customer assessments always have to be anonymous?”) and frequency (“Can you run monthly/quarterly NPS surveys for us?”) – and we will have announcements on these items later in 2018.

#3. Review our processes and be more flexible

We are perceived as rigid and inflexible by some clients. Now, there are times when we need to be rigid: our clients trust us to keep their details secure, and their customers trust us to keep their feedback anonymous. On the other hand, we are aware that there is still work to do on our processes, in particular on the automation of tasks and activities, and we have already started reviewing them to see which can be upgraded and automated. We are also looking at how we can better integrate the Deep-Insight offering with other CX technologies in the marketplace, as well as examining how to make the core CRQ question set shorter.

In the coming weeks, we will be discussing these results with you in order to figure out how we can better serve your needs in 2018. On a personal level, I’m looking forward to those discussions and welcome the opportunity to let you shape Deep-Insight’s future.

John O’Connor
CEO, Deep-Insight


Why Sample Sizes are Nonsense (in the B2B World)

Most of Deep-Insight’s work is based on helping large international B2B organisations run effective Customer Experience (CX) programmes.

The key to running a good CX programme is understanding how to change the culture of an organisation to make it truly customer-centric, and that has to be based on regular high-quality conversations – both formal and informal – with your B2B clients.

Without regular client feedback, sales directors and account teams will not be in a position to address small issues before they escalate to a point where they damage or destroy the client relationship.

When we plan Customer Relationship Quality (CRQ™) assessments for our clients, one of the questions I regularly get asked is “How many of our clients should we sample?”  The stock answer that I’ve been using for the past decade is “Think Census, Not Sample”. In other words, get feedback from your entire client base – every single one – and it’s the answer I still use.

It’s not meant to be a glib response but there are a few subtleties underpinning the answer that are worth exploring.


Many market research and customer insight people – even in B2B organisations – tend to approach the subject of customer feedback from a consumer perspective, where there are tried and trusted approaches for surveying large customer bases, or “populations” to use the technical term. If you’re not that familiar with these approaches or terminology like random sampling, margins of error and confidence levels, have a look at the Box below.


If you’re not a market researcher or statistician, don’t worry – there are plenty of good primers on the Internet explaining the basics of sampling techniques and associated terms – here’s one from YouGov.

You’ll also find several handy little calculators on the Internet (here’s a link to one) which tell you how many respondents are required from a particular population (customer base) to achieve a given confidence level and margin of error. From this, it’s easy to calculate the number of individuals you need to invite to participate in a survey in order to get a robust answer.

Most opinion polls are conducted with a random sample of at least 1,000 people, and here’s the reason why: pollsters like to be confident that their results are within a margin of error of 3% or less. Suppose the voting population in a country is 10 million people. Plug that number into our online calculator and we see that a 3% margin of error and a confidence level of 95% requires a sample of 1,067.

All that is fine if you’re working in a consumer environment or if you have tens (or hundreds) of thousands of SME customers. However, traditional sampling techniques have less value when you are a B2B organisation and the vast majority of revenues is generated by a handful of large clients. There may be a “long tail” of smaller customers, but in most cases the Pareto Principle applies, whereby 80% of revenues are generated by 20% of clients. In some cases, the ratio can be 90/10 rather than 80/20. In such cases, the traditional sampling approach needs to be chucked out of the nearest window and a different set of principles applied.


Our approach is to be pragmatic and follow the money – concentrate on those clients that generate the majority of the revenues, and do a ‘deep dive’ into those relationships.

It’s probably easier to explain using an example.

Case Study – Large UK Services Company
Revenues: Over £1 billion
Key Clients: 100
One of Deep-Insight’s UK clients has over 10,000 employees and generates annual revenues in excess of £1 billion. However, its customer base is actually quite small and the contracts it has with these key clients are extremely large. The company has several hundred clients in total but the vast majority of its revenues come from the ‘Top 100’ and even among the ‘Top 100’ the revenues are skewed heavily towards the 10 largest clients.

So how do you run a CX programme when your client base looks like this? In that particular case, the company has chosen to focus exclusively on its ‘Top 100’ clients. Purists might argue that this is not representative of the full customer base. This may well be true, but it definitely is representative of the full revenue base, and that’s the commercial perspective of “following the money.”

From a pragmatic perspective, it makes little sense to take a sample of the Top 100 clients. You should attempt to get feedback from every single one and ideally you want to get a wide representation of views from across each of those 100 clients.

Even from a statistical perspective, it makes little sense to sample – if you need convincing, have a look at the second Box below.


Suppose there are at most 10 key individuals in each ‘Top 100’ client whose feedback really matters – in other words, the decision-makers who will renew the current contract when it comes up for renewal. That’s still only a population of 1,000 individuals across your Top 100 clients, and if you run the numbers you’ll see that you need to include all of them to get a statistically robust result.

Let’s plug those figures from our Case Study into the online calculator and see what happens.

For a population of 1,000 decision-makers, we need 278 responses to get a robust score (robust being a margin of error of 5% and a confidence level of 95%). Deep-Insight will typically achieve completion rates of 35-40% from its online B2B assessments so that means we need to invite 700-800 of those 1,000 key individuals to participate.

If you think a margin of error of 5% is too high, then plug 3% into the online calculator. Now the number of responses jumps to 517 out of 1,000. This means you DEFINITELY need to invite all 1,000 to participate to get anywhere near your target margin of error.
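Plugging the case-study numbers into the same standard sample-size formula reproduces both figures, and adds the completion-rate arithmetic behind the 700–800 invitations. Again, this is an illustrative sketch rather than the calculator’s actual code:

```python
import math

def sample_size(population, margin_of_error, confidence=0.95):
    """Minimum sample size for a proportion (worst case p = 0.5),
    with the finite population correction."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n0 = (z ** 2) * 0.25 / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(1000, 0.05))  # → 278 responses at a 5% margin of error
print(sample_size(1000, 0.03))  # → 517 responses at a 3% margin of error

# At a 35-40% completion rate, 278 responses means inviting roughly:
print(math.ceil(278 / 0.40), "to", math.ceil(278 / 0.35), "people")
```

Notice how much the finite population correction matters here: for a population of only 1,000, the 3% target drags in more than half the population, which is why sampling makes so little practical sense at this scale.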

Successful CX programmes in B2B companies are not built around statistics. They are built around empowering staff and providing account managers with all the customer feedback they need to manage client relationships more effectively. That means getting feedback from ALL individuals in those key clients and working really hard with the account teams to get participation and completion rates as high as possible.

So when you’re planning your next Customer Relationship Quality (CRQ™) assessment, remember to get the account managers involved and “Think Census, Not Sample”.

Are you going to NPS me? Yes, I am!

This is the topic of a talk I’m giving this week at a conference in Melbourne. It is in response to another talk entitled “Are you going to NPS me? No I’m not” in which Dr Dave Stewart of Marketing Decision Analysis will be presenting the case that Net Promoter is a deeply flawed concept, and should be discarded by organisations that espouse customer advocacy.

To be honest, Dave’s position is probably close to what I thought of the Net Promoter Score concept after it was first introduced by a pretty smart academic and business consultant called Fred Reichheld back in 2003. Reichheld’s basic premise is that you only need to ask one question in order to understand if a customer is going to stay loyal to you or not:

“How likely are you to recommend us to a friend or colleague?”

Fred, being the excellent marketeer that he is, proclaimed the benefits of the Net Promoter Score (NPS) concept in respected publications like the Harvard Business Review, and then in his own book The Ultimate Question, which came out in 2006, shortly after I took on the CEO role here at Deep-Insight. Since then, NPS has become very popular as a customer loyalty metric, but it has also attracted some heavy criticism – in particular from the researcher Tim Keiningham, who gave NPS a particularly scathing review, saying that he and his research team could find no evidence for the claims made by Reichheld. (It should be said that Keiningham worked for the market research company Ipsos, so his views may not be completely unbiased.)
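For readers who haven’t seen the mechanics, the metric itself is simple: responses to the recommendation question on a 0–10 scale are bucketed so that 9s and 10s count as Promoters, 0 through 6 as Detractors, and NPS is the percentage of Promoters minus the percentage of Detractors. A minimal sketch (the function and sample data are mine, purely for illustration):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 'likelihood to recommend' ratings."""
    promoters = sum(1 for s in scores if s >= 9)    # ratings of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)   # ratings of 0 through 6
    return round(100 * (promoters - detractors) / len(scores))

# 5 Promoters, 3 Passives (7-8) and 2 Detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 6, 4]))  # → 30, i.e. an NPS of +30%
```

The bucketing is deliberately harsh – a 7 or 8 earns you nothing – which is part of what makes the single number so easy to report and so easy to criticise.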

At that time, my own view was that NPS was probably too simplistic a metric for business-to-business (B2B) companies, and that Deep-Insight’s own customer methodology – which incidentally included a ‘would you recommend’ question – was a much better fit for complex business relationships. And if I’m honest, there was an element of ‘Not Invented Here’ going on in our own organisation as well.

So we decided to ignore NPS.

But here’s the thing: our customers didn’t. When we ran customer feedback programmes for customers like Reed Elsevier and Atos in the UK, ABN AMRO in the Netherlands, Santander in Poland, and the Toll Group in Australia, they would all say to us: “Can you add in the NPS question for us – we have to report the numbers back to headquarters?” Of course, being the good marketeers that we were, we duly obliged. However, we always gave the results back in a separate spreadsheet, so that it wouldn’t contaminate our own reports and our own wonderful methodology!

Roll the clock forward to 2013. NPS still hadn’t gone away. In fact, it had become even more popular, particularly with large international companies where a simple, understandable metric was needed to compare results across different divisions and geographical areas. And when I finally looked into it, I discovered that Deep-Insight had actually been gathering NPS data from customers across 86 different countries since 2006. That little fact was an eye-opener, and we now even use it on the front page of our website.

Around the same time, we also did some research into our own database to find out what really drove loyalty and profitability in our clients. Now, this is not an easy thing to do, as many of you who have tried will know, but where we had several years of customer feedback data, it was relatively straightforward to analyse how many of our clients’ B2B customers were still with them. For those who had deliberately defected, we investigated whether that defection could have been predicted by a poor Net Promoter Score or by any of the metrics in our own customer methodology.

I have to say that the results were quite interesting. It transpired that while a low ‘willingness to recommend’ was not the BEST predictor of customer defection, it was actually a pretty good one. Deep-Insight’s overall Customer Relationship Quality (CRQ) metric was a slightly better predictor, while a low Commitment score – one of the key components of our CRQ methodology – was the best predictor of whether a B2B client was going to defect to the competition or not.

So there we had it: NPS did actually work.

It worked not because it’s the BEST predictor of whether a client is going to defect, but because it’s a GOOD predictor, coupled with the fact that NPS has been embraced by some of the world’s leading organisations as an easy-to-use and internationally accepted customer benchmark. At Deep-Insight, we may have come a little late to the party – we only incorporated the Net Promoter Score into our customer methodology in early 2014 – but we have found the combination of NPS and our own CRQ metrics works really well for our clients.

Now let’s go back to the cartoon at the top of the blog (and thank you, Tom Fishburne, for allowing us to use it). Surely, if there is a statistically purer methodology than NPS, why not use that instead?

The answer is simple: most senior executives aren’t interested in re-inventing the wheel. They are much more interested in taking the feedback from their clients and acting on it, so that they can protect and enhance the revenues they get from those clients.

So for those B2B executives who are wondering if NPS is the right customer metric for them or not, I would suggest that you’re asking the wrong question. What good CEOs and Sales Directors are asking these days is:

“If my Net Promoter Score is low or if I have a lot of Opponents and Stalkers as clients, what do I do?”

In fact, the really successful CEOs and Sales Directors are spending the time thinking about the challenges of putting a really effective customer change programme in place, rather than worrying about the purity of the metrics. That’s what you should be doing too.