We are genuinely delighted to announce that our NPS (Net Promoter Score) and CRQ™ (Customer Relationship Quality) scores are fantastic, again! Stifle that yawn – I promise it gets more interesting.
We are very proud of this picture. Most of our customers are promoters – they love our products and services and are willing to tell the world about it. Time to celebrate and shout this from the rafters – right?
Call it intuition or call it 20+ years’ experience in understanding client feedback, but far more digging into the feedback would be needed before we were ready to celebrate.
The result of this digging is the creation of a new role – Product Manager – and their first objective will be to validate if we have the correct product strategy.
So, how is this logical given the amazing feedback that I have just shared, especially around product? The answer to that is a lesson on why you should never just rely on NPS to tell you how your customers are feeling or what their future intentions might be.
Let’s start digging…
This is where CRQ™ really helps us to get under the bonnet of even the rosiest feedback, forcing us to listen to the murmurs of bubbling discontent.
The first red flag: when we asked all respondents what our greatest weakness is, not only did we have a new winner – we had an entirely new topic, and it was mentioned by 15% of respondents.
Is this really a problem?
Immediately the internal arguments came that this was a blip and not that important. Arguments we used to try and convince ourselves were:
15% is still not that many!
Price (often a key indicator of competitiveness) is not raised by even one respondent.
We have been in the CX business for over 20 years (long before CX was even a thing) – you will find it difficult to find a competitor in the B2B space with more global, cross-industry experience than us.
Just look at that promoter graph again, our customers love us!
The only way to answer these arguments is to establish whether there are further data insights that support this feedback. (Keep digging.)
We started by segmenting the feedback into the respondents who know us best – the CX Teams we work with every day and Key Decision Makers who repeatedly choose us as their CX Partner.
It turns out that an even higher percentage of the individuals who know us best believe this to be a weakness for us.
Further investigation of CRQ™ scores only confirmed that we need to listen. Focusing again on those individuals who know us best, scores that link closely to this type of verbatim feedback have slipped from top-decile to second-quartile.
🙁 Our customers want more CX services than we currently offer, and they perceive that there are other suppliers in the CX space now who can give them what they want.
😐 Some of our customers also believe that other CX suppliers are better at promoting themselves in the market and raising brand awareness.
The exciting part of all of this
🙂 Our customers do not want to use those other suppliers; they trust us and believe in our integrity as their CX Partner. They want us to provide these additional services, and they want us to tell the world how great we are.
The Action - A new position in Deep-Insight: PRODUCT MANAGER
FIRST OBJECTIVE: Validate if our current product strategy is correct, needs to be tweaked or needs a massive overhaul.
FIRST STEP: Ask many customers, previous customers, industry contacts and friends for input – I will be extremely grateful to anyone who can give us the time to help.
PURPOSE: Change, even if that is in a way that neither we nor our customers can predict just yet.
P.S. I am not ignoring the brand promotion and awareness feedback; our CEO John O’Connor is going to take personal ownership of addressing this. Watch this space – his thoughts will follow shortly.
Back to one of my pet topics: What is a ‘Good’ Net Promoter Score?
More specifically, what’s a good Net Promoter Score for a football team? OK, I know NPS wasn’t designed as a footballing metric but bear with me as I try to illustrate a point about how Net Promoter Score works and how NPS scores are significantly lower than most people think.
Let’s have a look at two fabulous teams that battled it out in the 2023 Women’s World Cup Final in Sydney last Sunday: England and Spain.
How did the Lionesses score in NPS terms?
I first looked at NPS for football teams in 2020. That year Liverpool won the Premiership for the first and only time and did so with a Net Promoter Score of -45, based on the player ratings of the Liverpool team.
Based on their performance in the Women’s World Cup Final last weekend, England’s Lionesses have a Net Promoter Score of -82. If you include the three English substitutes, their NPS drops to -86.
That’s not my view. It’s based on the player ratings for the starting 11, as compiled by The Guardian’s Louise Taylor in the aftermath of their 1-0 defeat to Spain. And Louise Taylor is not being particularly harsh. Most English – or European – sports commentators adopt a similar approach to scoring.
It’s not that England were poor last Sunday. They played well in the Final and had a great tournament. They were unfortunate to be matched against a Spanish team that was simply brilliant. Spain controlled the game superbly from beginning to end. Their passing was sublime. Olga Carmona’s goal was inch perfect. It had to be to beat Mary Earps.
By the way, Mary Queen of Stops was only rated 8, despite a penalty save late in the game. That’s a ‘Passive’ in Net Promoter terminology.
A quick recap on the scoring system: 9s and 10s are Promoters. 7s and 8s are Passives. 6 and below are Detractors. The Net Promoter Score itself is the percentage of Promoters MINUS the percentage of Detractors.
Have a look at Louise Taylor’s player scores for the starting 11 below. 0% Promoters; 18% Passives (that’s two players: Earps and Hemp); 82% Detractors. That’s how the -82% NPS result is calculated. 0 – 82 = -82.
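That arithmetic is easy to reproduce. Here’s a minimal Python sketch, with the ratings list transcribed from Louise Taylor’s scores below:

```python
def nps(ratings):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * promoters / len(ratings)) - round(100 * detractors / len(ratings))

# Louise Taylor's ratings for England's starting 11
england_xi = [8, 6, 6, 6, 4, 5, 4, 5, 4, 5, 7]

print(nps(england_xi))              # -82
print(nps(england_xi + [6, 6, 6]))  # -86 once the three substitutes are included
```

Note that adding three Detractors to the pool can only drag the score further down – which is exactly what happens when the substitutes are included.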
So did the Lionesses deserve an NPS of -82? Of course not. But at least the victorious Spanish team scored well in the NPS stakes. Or did they?
ENGLAND PLAYER RATINGS
Mary Earps. Mary Queen of Stops made vital saves from Paralluelo at the near post and Caldentey before denying Hermoso from the penalty spot. Had no hope of saving Carmona’s goal. Sometimes furious with her defence. 8
Jess Carter. Her great late block from Hermoso epitomised a fine tournament contribution. Coped well with second-half switch from right-sided central defender in a back three to left back and generally held her own. 6
Millie Bright. England’s captain advanced from central defence to join the attack in the closing stages. Sometimes struggled to cope with Paralluelo’s pace and movement but her decent positional play almost certainly kept the score down. 6
Alex Greenwood. Required a Terry Butcher-style head bandage after being caught by Paralleulo’s knee late in the second half. Showed flashes of her classy distribution but, for once, it was not enough from the elegant defender. 6
Lucy Bronze. Targeted by Spain and at fault in the preamble to Carmona’s opener, losing concentration and possession after taking one touch too many. A big mistake and too reckless at times. 4
Georgia Stanway. Did not see as much of the ball as she would have wanted and proved wasteful in possession but worked hard to help protect her defence. 5
Keira Walsh. Not at her best in the final – or the tournament – and still looks slightly uncomfortable in a midfield five. Struggled to retain possession. Conceded a handball penalty awarded after a lengthy VAR review. 4
Rachel Daly. Like Bronze, targeted by Spain and often pinned back by advances from Batlle, Bonmati and Redondo. Replaced by Chloe Kelly as England switched to a back four at half-time. 5
Ella Toone. Preferred to Lauren James in the starting XI but her poor positioning and slow reaction exacerbated Bronze’s error before Carmona’s opener. Replaced by Beth England in the 86th minute. 4
Alessia Russo. Worked as hard as ever but largely isolated as Spain dominated possession and was replaced by Lauren James at half-time. 5
Lauren Hemp. Reborn in a central attacking role, Hemp scared Spain, hitting the bar with an accomplished first-half shot. Should have scored after connection with a Kelly cross. Harshly booked. 7
Substitutes
Lauren James. Back for the second half after a two-match suspension but initially struggled to get on the ball before later growing into the game. 6
Chloe Kelly. Ran at Spain with menace and crossed brilliantly for Hemp but failed to supply sufficient killer final balls. 6
Bethany England. 6
The Spanish performance: NPS = +9 (or maybe -7)
As I said, the Spanish team performance in last Sunday’s Final was magnificent. So what was their Net Promoter Score? Surely it was +50 or higher?
Actually, Spain’s NPS was a paltry +9. If you include the three Spanish subs, its NPS was -7. Yes, that’s right – a negative NPS for the 2023 World Cup Winners.
How can that be?
The answer is pretty simple. Net Promoter is an American scoring system that rates advocacy on a 0 to 10 scale and only recognises scores of 9 or 10 as excellent. Americans tend to score more positively than Europeans. Northern Europeans are particularly tough in the way they score. 9s and 10s are generally reserved for extra-special performances.
It’s a culture thing.
Look at the player scores below. Only 2 Promoters in the Spanish side: the goal scorer Olga Carmona and midfielder Aitana Bonmati. That’s 18% of the starting 11. Subtract 9% for the one Detractor (Jennifer Hermoso, and no, she didn’t have a bad game). NPS = 18 – 9 = +9.
If the three substitutes are included, that’s an additional 7 (a Passive) and two 6s (Detractors). Now the NPS becomes 14% (2 Promoters out of 14 players) minus 21% (3 Detractors out of 14). 14 – 21 = -7.
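Run the same arithmetic in code (a quick sketch, with the ratings transcribed from the list below) and both Spanish figures drop out:

```python
def nps(ratings):
    # % of Promoters (9-10) minus % of Detractors (0-6), each rounded
    pct = lambda n: round(100 * n / len(ratings))
    return pct(sum(r >= 9 for r in ratings)) - pct(sum(r <= 6 for r in ratings))

# Spain's starting 11, plus the three substitutes
spain_xi = [7, 8, 8, 7, 9, 9, 7, 6, 7, 8, 7]
subs = [7, 6, 6]

print(nps(spain_xi))         # 18 - 9 = +9
print(nps(spain_xi + subs))  # 14 - 21 = -7
```

Two substitutes rated 6 are enough to flip the World Cup winners from a positive to a negative score.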
SPAIN: PLAYER RATINGS
Cata Coll. Spain’s inexperienced goalkeeper saved smartly from Lauren Hemp early on and Lauren James late on. Well protected by her defence and not really threatened by England but did not put a foot wrong in only her fourth senior appearance. 7
Ona Batlle. Stretched England when advancing from right back and combined well with Aitana Bonmatí. Gave Rachel Daly quite a workout and defended extremely well too. 8
Irene Paredes. Sent a first-half chance flying wide and repeatedly second-guessed England’s attacking intentions superbly. 8
Laia Codina. Defended well before limping off injured to be replaced by Ivana Andrés in the 73rd minute but, unsportingly, got Lauren Hemp booked. 7
Olga Carmona. Her third goal for Spain was a piece of left-footed technical perfection directed low into the bottom corner. It deserved to win a World Cup. Defended well, too. 9
Aitana Bonmatí. Brilliant and utterly irrepressible. Showed off a wonderful change of pace as she regularly fazed England. Fractionally off-target with fine second-half shot. Too hot for the Lionesses to handle. 9
Teresa Abelleira. Helped Spain hog possession and ensured that England’s midfield were left chasing shadows. 7
Jennifer Hermoso. Had a second-half penalty saved but that was about the only moment nerves got the better of the midfielder. 6
Alba Redondo. An important outlet for Spain down the right. Her low crosses created some good chances but faded slightly in the second half and was replaced by Oihane Hernández after 59 minutes. 7
Mariona Caldentey. Saw a shot brilliantly saved by Mary Earps after nutmegging Lucy Bronze early in the second half. Replaced by two-time Ballon d’Or winner Alexia Putellas after 89 minutes. 8
Salma Paralluelo. The 19-year-old Barcelona winger has been a star of the tournament and her quick, clever feet stretched England. Should have scored. Reminded England she was once a star 400m runner. Hit a post. Booked and very lucky not to receive a second yellow late on. 7
Substitutes
Oihane Hernández. The right-back arrived on the wing to help maintain Spain’s lead. 7
Ivana Andrés. 6
Alexia Putellas. 6
The average B2B Net Promoter Score in Europe is not much above zero
I came across two posts on LinkedIn recently where two separate business-to-business (B2B) companies – one professional services company and one IT services provider – announced the exact same Net Promoter Score results from their clients: +91. The spokesman for the professional services company was particularly chuffed: “We were delighted with the results of the survey resulting in an NPS of 91.”
You get the picture. To achieve a Net Promoter Score of +91, almost everybody has to love you. Not just LIKE you, but LOVE you. And I mean REALLY, REALLY love you!
+91 is an astonishingly good score in the B2B world.
A bit more context: In Northern Europe we generally think that a score of 8 out of 10 is pretty good. 9s and 10s are reserved for experiences that are truly special. I’ve written about this before. It’s conditioned into us in school and at university not to give 9s and 10s when we rate somebody or some service that we have received. Think about it. If you have a college education and graduated with a First Class Honours degree, you scored 70% (or maybe a little higher) in your final year exams. That’s 7 out of 10.
If you’re a Premier League footballer and score a couple of goals in a Cup Final, you might be lucky enough to get a player score of 8 from the sports writers commenting on the game. When Liverpool won the Premiership for the first and only time in 2020, they did so with a Net Promoter Score of MINUS 45.
We’re a difficult bunch in Europe. A dour lot. And the further north you go, the harsher we score. Other countries are different. In America (both north and south), you can get 10/10 if you do a good job or provide an excellent service. There are major attitudinal differences from country to country when it comes to scoring – you can read about it here.
An 'average' B2B Net Promoter Score is slightly above zero
So what happens in real life? How many B2B companies score +91 on the NPS metric?
At Deep-Insight, we have been running large NPS programmes for nearly two decades – mainly in Europe – and the reality is that there is a surprisingly wide spread of scores ranging from -50 to +50.
So am I saying that the professional services and IT firms claiming Net Promoter Scores of +91 are lying?
Not necessarily. Theoretically, it is possible to get an NPS result of +100 from your customer experience (CX) programme but in nearly 20 years we have never seen this happen. In fact, we’ve never seen any B2B company get close to +75.
In practical terms, the only way you can get an NPS result of +91 is as follows:
First, you really do have to be excellent at what you do – particularly when it comes to delivering excellent service every time
But that’s not enough. You also need to ‘frig the system’ by selecting a small number of clients who are Ambassadors for you and your service
You also need to select only those individuals in those client organisations who you believe will score you 9/10 or 10/10
You need to carefully deselect any client that is likely to give you a poor score – you can use the excuse: “Now is not the right time to ask them their views” or “We’ll only antagonise them if we approach them now”
Never send a survey to somebody who doesn’t know you really well, even if it’s a senior decision maker that you’d love to have a conversation with – as we’ve seen already, the chances of them giving you 9 or 10 are very slim indeed
You might think I’m being cynical. Surely B2B companies don’t act in such a manner? Surely the leadership and CX teams will prevent this happening by putting an appropriate governance process in place?
Even if companies aren’t that cynical – and in our experience most are not – subtle biases always creep in to soften any hard messages, inflate the true Net Promoter Scores, and water down the recommended actions. Sometimes these biases are blatant. But they always exist.
What’s worse is that leadership teams often compound the problem by setting inappropriate targets (“We’re expecting a completion rate of 75%”) or by incentivising a completely biased result by paying bonuses if certain NPS targets are reached. We all know that if you give good sales managers a target and an incentive plan, they will do their best to achieve it.
Don’t fall into that trap with your CX programme. Work hard at getting what we refer to as ‘unvarnished truth’ about what your customers really think.
Things that never happened: an NPS of +91
Back to our professional services and IT companies and their +91 NPS results.
I don’t believe they deliberately set out to ‘frig the system’ in order to achieve a score of +91. I also suspect they genuinely do deliver a really good service. But even without knowing the full details behind the surveys, I know in my heart that they were administered to a small sample of hand-picked clients. The individuals administering the survey were probably not even aware that they were ‘frigging the system’. After all, they had to ask the account managers to nominate the people to be contacted as they don’t manage the client relationships themselves. They weren’t to know that the leadership teams had (unwittingly) conveyed to the account teams that a high NPS result would be good to promote their company on LinkedIn and other social media. They didn’t tell the CEO that she needed to put a robust governance process in place.
With a good governance process in place to elicit the ‘unvarnished truth’ from clients, European B2B companies will never achieve Net Promoter Scores of +91. That’s simply a fact. It never happened.
Some weeks ago, I met Nick Lee, Professor of Marketing at Warwick Business School to discuss his views about Net Promoter Score (NPS). I specifically wanted to get Nick’s views on NPS as a measurement tool. Does it work? Is it linked to sales growth? What does Net Promoter Score even measure?
Nick is more than just an academic. He holds strategic advisor positions for a number of innovative sales and leadership development companies, and he was part of the All Party Parliamentary Group inquiry into professional sales in 2019. His work has been featured in The Times, the Financial Times and Forbes, and he has appeared on BBC Radio 4, BBC Radio 5Live, and BBC Breakfast television.
Over the past two decades, NPS has divided opinions. While it has been embraced enthusiastically by many businesses, it has been shunned by others. The academic world has questioned what Net Promoter Score actually measures.
I think you’ll find Nick’s comments on Net Promoter Score and what it really measures quite fascinating because he doesn’t hold back from his criticism of NPS but also points out that the flaws don’t invalidate its usefulness as a measurement system, as long as it’s used in the right way: tracking changes over time, rather than simply chasing a number.
Good morning Nick. To start, could I ask you to tell me a little about your own academic background. What was your first interest in the field of marketing?
Well, I began my academic career as a doctoral student in marketing strategy. It seemed to me that the connection between sales and psychology was quite important. And there was a lot of work in management that was related to psychology, but very little of that research had been focused on the sales force.
A lot of sales research is actually about things like incentive structures and territory design. I call that ‘technical management’ but what I was more interested in was not so much the decisions that managers made, but how they implemented those decisions. Sales management is more about psychology than a mathematical or technical thing. More recently, we’ve seen how digital transformation has led to a merging of the ‘technical’ things with the more psychological things, and that’s really the space I operate in now.
Is NPS a Fundamentally Flawed Metric?
So let’s talk about the psychology of Net Promoter Score. It’s clearly a sales and marketing concept. It’s also a performance metric. When did you start getting involved with Net Promoter Score and is it a good sales and marketing measurement tool?
My interest in NPS really came from Sven Bähre and that paper we wrote called The use of Net Promoter Score (NPS) to predict sales growth. Sven drove that project while my role was to use that project to address something that was important to the marketing literature. And I think it is very interesting that academia’s gone down one road with Net Promoter Score, the very simple road which says “NPS is useless and a load of rubbish”.
At the same time, business practice has completely ignored that academic view. Net Promoter Score has become the dominant customer metric in business. It feels like someone has to be right and someone has to be wrong here. But the interesting thing is it turns out that both sides are right. They’re just talking about different things. And that’s what fascinated me.
So tell me about those different things. When I looked at Net Promoter Score many years ago, a guy called Tim Keiningham – one of the people you refer to in your paper – was very critical about NPS as a metric. At the time he worked at Ipsos so I wondered if he was bringing his own biases to the table. But at the same time, he was saying that the data did not show any link between NPS and sales growth.
Oh, that’s interesting about Keiningham, I didn’t know that he was at Ipsos then. So there are a couple of issues that lead to this disconnect. One is that we generally don’t like it when a publication like the Harvard Business Review tells us there’s a single number that every company needs to look at. That automatically gets people’s interest and it actually didn’t make sense to me.
The other issue is, and I have every sympathy with this view, is that Net Promoter Score doesn’t really measure what it claims it measures. There are so many potential flaws in the idea that this one number could be a valid measurement of anything real. I spend a lot of time trying to develop measures around attitudes and psychological concepts. And this is a classic example of a metric that doesn’t seem to actually ‘measure’ anything. So on that basis, it is quite flawed.
When you add in the idea that you have to subtract the bottom scores (Detractors) from the top scores (Promoters), you're torturing the measure to within an inch of its life.
What does NPS Actually Measure?
Surely Net Promoter Score is a measure of advocacy, if nothing else?
To some extent it taps into advocacy, sure. However, it’s a number in response to a single question that doesn’t take account of all kinds of other factors that might be relevant. And then you have this weird calculation for subtracting ‘Detractors’ from ‘Promoters’. As a mathematical construct, that’s not great. But the real issue is that advocacy is a much more complicated idea and can’t really be accurately captured by a response to a single question.
So there’s no real evidence that that answer to the NPS question is a reliable measure of advocacy. And then when you add in the idea that you have to subtract the bottom scores (Detractors) from the top scores (Promoters), you’re torturing the measure to within an inch of its life. At that point, it ceases to become a measure anymore, even if it was at the beginning. It becomes a number which is divorced from the underlying concept.
I’m with you. But does that invalidate it completely as a measure?
Well, here we get to the bigger question: rather than “is NPS a measure”, we need to ask “is NPS actually useful?” In academia, we’ve said nothing about NPS for the last 20 years apart from “it’s crap”. But when I see a whole bunch of senior executives in large companies saying “well, I’m finding a use in it”, then academia needs to look at that.
What’s my conclusion? I think that is a problem for academia insofar as we tend to talk past each other in a lot of areas. We have to provide some insight into what practitioners are doing in this field. Of course, it’s not our sole driving force as a discipline to find out what practice is doing and study it. But at the same time, it is worth studying when the entire business community is using something that 20 years ago we in academia said was wrong.
Have you come to any conclusions as to why senior leaders use Net Promoter Score, or how they can use it more effectively?
How Should NPS be Used?
One reason why it’s used so much is partly a self-fulfilling prophecy. It’s used because everyone uses it and therefore nobody wants to not have that information. That’s an important factor.
But then the other aspect is that it’s used because it’s simple. It’s easy to collect and it’s simple to use. Whether it’s easy to interpret is actually a more challenging question. I don’t think it is that easy to interpret. For starters, what does the NPS number actually mean at any given point in time?
Now when you start tracking NPS over time, those questions fall away because what you’re looking at is trend data. What was it last year? Is it up or is it down? You have to operationalise NPS in the right way – by tracking the change in NPS score from one period to the next, not the absolute score. That’s important.
Net Promoter Score is also influenced by a lot of transient factors. For example, it’s very easy to manipulate and there’s a big selection bias. Who is asked to complete the NPS survey? Also, every surveyor cannot help but lead the customer on towards a higher NPS score. So at any given point in time, the net promoter score doesn’t mean much because of that selection bias. But if you assume that those forces are broadly the same over time, you can extract that little bit of signal from that noise with the time series a little bit more effectively than at a single point in time.
I’m with you. But in our experience, the level of bias can increase over time. So you need to have a governance structure to ensure consistency. Or indeed, you may need to break the system apart and start again if the ‘gaming’ gets too deeply ingrained.
We should track trends, not individual time points. And the more data we have, the better. More bad data isn't better than less good data. But more flawed data is probably better than less flawed data.
Yeah, I think you’ve got it right there. It is important not to be naïve that over time there might be an ‘instrument effect’ or a ‘history effect’ where people learn how to better game the system. I think with something like Net Promoter Score, that’s less of a thing because it’s pretty easy to know how to game the system straight away. And the only thing you can do really is try to say to your customers: “this is really important to me, can you please leave me a good score?” And there’s only so convincing you can be there. It’s not like you’re going to get better at doing that after a certain point in time. So I would be less worried about that.
But of course there are always ways to game the system. But the point is we should track trends and not individual time points. And the more data we have the better. Of course more bad data isn’t better than less good data. But more flawed data is probably better than less flawed data. So given we assume the data is flawed all the time, the most important thing is to know how that data is flawed. And while you can never perfectly extract the signal from the noise, the signal is there if you have enough data points gathered over a sufficiently long period of time.
Where Do We Go From Here?
So where do we go from here, and where should academics be focusing their efforts?
A few things for us to work on. First is international comparability. Big multinationals use Net Promoter Score across their different national areas. And I would imagine they’re comparing EMEA with America with Australasia. Is NPS really able to support that comparison? That’s a challenge so that’s the first thing I would look at.
Second is to move away from a single question. We really need multiple measures in order to compare them statistically across different cultures. So Net Promoter Score is one item. You would like it to be three or four items. And then you could compare those items across countries.
A third area is to get a wider industry perspective. We looked at NPS in a branded consumer goods context: sportswear. Is it equally useful across all kinds of different industry sectors? Particularly if you look at the service sector and front line services, which are linked to business to business (B2B) personal selling. Is NPS a useful metric for these interpersonal interactions? How well does it work in a B2B setting?
Nick, I really appreciate your time today. I’m looking forward to seeing more research into Net Promoter Score. From a selfish perspective, I’d particularly like to see some B2B research done as there’s very little out there that I can find on the topic. Thanks again, Nick.
There’s a lot of talk at law firms about client relationships. For many clients these can still seem hollow words based on one-way relationships.
Robert Millard and John O’Connor explore how firms that are trying to embrace true client centricity are setting themselves apart.
* * * * * * * * * * * * * * * * * * *
The CX Factor
Much has been written over the years about how difficult it is for clients to differentiate between one law firm and the next. From a client perspective, law firms all look remarkably similar. Trust, reputation and brand generally play an unusually important role in buying professional services.
Appearing in directories such as Chambers & Partners, Legal 500 and International Financial Law Review is also important, as are word-of-mouth recommendations. These are recognised to be among the most compelling means of winning new clients.
But what keeps clients loyal? What drives client relationship longevity? Except for the most complex or unique of matters, a range of firms exist from which clients can choose. Those firms are all staffed by highly competent, capable lawyers.
Making the Transition from Client Listening to Customer-Centricity
Within ranges, all charge roughly similar fees for similar matters. All are highly attentive to service quality. Most engage in at least some form of client listening. They claim to mould their services and service delivery channels around the needs of clients. But have they?
In our opinion, few have transitioned from client listening to becoming truly customer-centric.
This article is aimed at helping law firms to make that transition. The content is based on client-centricity work that John O’Connor has done with many large corporates and financial institutions, including DWF Group plc. It is also based on Robert Millard’s unparalleled understanding of modern law firms.
It was informed by interviews with Baker McKenzie LLP (Ana-Maria Norbury and Deanna Gilbert), DWF Group plc (Zelinda Bennett), Shoosmiths LLP (Peter Duff and Gaius Powell) and Travers Smith LLP (Julie Stott and Charlie Rogers) about their CX journeys. All of these were exceedingly generous with their time and insights. We thank them most sincerely.
Clients’ Demands are Shifting
Across many industry sectors and geographies, customers are shifting the ways in which they choose suppliers and service providers. Current research in the United States shows that the percentage of clients recommending law firms is at an all-time high of 69%. That’s up from 49% in 2020 and from 47% in 2019.
This increase is remarkable. But those results are not from superb skill in solving legal problems alone – the focus on service quality has given way to one of client experience (CX). For all but the most complex and difficult of services, service quality is no longer a source of sustained competitive advantage. It is a prerequisite to be even considered.
Clients now demand that their experience with the firm advising them be hassle free, transparent and even emotionally uplifting. They also expect law firms to look further than the legal advice. They expect them to help solve business problems.
Law ﬁrms are changing their business models in line with these shifting client requirements. But too slowly, in our view. The time has come to accelerate. Bluntly, modern law ﬁrms must move from client listening to more detailed conversations, and act decisively on what they discover.
No UK law ﬁrm has what a leading corporate or ﬁnancial service client would acknowledge to be a world-class CX programme, or true customer-centricity. Pockets of excellence do exist though, and some of these can be seen in the case studies at the end of this article.
CX is Different to Service Quality
The concept of ‘quality’ emerged from the total quality management (TQM) movement of the 1950s. In the early days, the focus was on product quality. The emphasis moved in the 70s and 80s to service quality as economies in the western world became more services-based. ‘Client satisfaction’ became a prominent metric.
Client experience (CX) is diﬀerent. It means that a ﬁrm’s core focus is on its entire relationship with its clients – not just on satisfaction. Contemporary research shows that CX is generated through a long process of interaction between a ﬁrm and its clients, across multiple channels and through generating both functional and emotional eﬀects.
To achieve this requires ‘client-centricity’ which, in simple terms, means putting clients at the very heart of the ﬁrm. This transcends quality, to mean all the ﬁrm’s lawyers and business services professionals viewing every aspect of the ﬁrm from the client perspective. In this article, we use the terms ‘client centricity’ and ‘CX’ interchangeably.
For clients, quality assurance is difficult in legal and other professional services. Lawyers and other professionals frequently have more knowledge of the topic in hand than do their clients. This creates a ‘power asymmetry’. Work product is frequently co-created with clients, or at least based heavily on client inputs. Even so, consistently poor performance leads inevitably to reputational damage, sanctions for professional negligence and, ultimately, failure.
The Intangibility of CX in the Legal World
That much is clear. How, though, does a client assess whether services rendered in a speciﬁc matter were merely ‘good’, or ‘excellent’?
It turns out that it is far easier for clients to assess how they feel about the services and about their experience than to assess the objective quality of the service received. Clients must trust the professionals that they instruct to be technically competent and diligent. No such trust is needed for clients to assess their reaction – their ‘gut reaction’ – to dealing with the firm and the way in which the firm deals with them.
At an event held at White & Case’s offices in London some time ago, the former chairman of Allen & Overy (A&O), David Morley, spoke of a very complex, challenging transaction for which A&O was pitching against the usual range of premium London law firms. A&O won the engagement and, he said, the client’s general counsel later told him why: the client believed that, late at night in the midst of the deal when pressures were immense, A&O’s lawyers would be the easiest to deal with.
This is an excellent example of how intangible CX can be.
Professional Services are Different
Professional services have always been recognised as being distinct from products, and from other types of services. More than two decades ago, professional services were deﬁned as:
highly knowledge intensive, delivered by highly educated people, frequently linked to cutting-edge knowledge;
involving a high degree of customisation;
involving a high degree of discretionary eﬀort and personal judgement on the part of the professional creating and delivering the service;
requiring substantial interaction with the client; and
being delivered within constraints of professional norms of conduct, including setting client needs above proﬁt and respecting the limits of professional expertise.
For much of the past century, this has been an accurate description of the services delivered to clients by lawyers. Ask any lawyer whether they care about their clients, and about the quality of the services they deliver to them, and the answer will almost always be: “of course I do!” And that response would be sincere and truthful – to the extent even that the question might be regarded as facile.
Yet the statistics for clients defecting to rival ﬁrms in recent years have been alarming. Legal services are also changing. On the one hand, the complexity of legal issues increases continually and exponentially.
On the other, it is becoming diﬃcult to justify including the more process-driven ‘commoditised’ services under the umbrella of professional services. This does not mean that law ﬁrms need to discard these services. They form an important part of the business of many law ﬁrms.
The term for services that are not ‘professional’ is not ‘unprofessional’. It’s ‘technical’. The fact is that clients view technical legal services through a diﬀerent lens, and the proﬁt drivers of these services are diﬀerent to those of professional services. The ﬁrm’s business model needs to be more granular if the tensions between these client expectations and proﬁt drivers are to be managed.
As the ‘4th Industrial Revolution’ unfolds, more of the services now delivered by people will be better delivered by technology. Some lawyers will focus on using ever-more complex technological tools to advise clients on meeting their own increasingly diﬃcult, complex needs. The business of law is also being disrupted by emerging digital technologies and the geo-economic impacts that they spawn. Some firms will build highly proﬁtable legal service platforms (LegalZoom being a good current example) to focus on more mainstream legal needs. Best CX practice will evolve diﬀerently for each.
These tensions can and must be managed. CX has proved a valuable tool for banks, retail organisations, airlines and others to improve levels of customer satisfaction. It is now gaining rapid traction with law ﬁrms and might even be a new frontier on which law ﬁrms are competing. Many ﬁrms, however, appear to be struggling to separate the concept from similar ones such as ‘service quality’ and ‘client relationships’ and ‘client listening’.
What to Measure?
Metrics are obviously crucial. One of the best-known CX metrics is Net Promoter Score (NPS), created by Fred Reichheld based on his work at the consulting company Bain & Co. In his book The Loyalty Effect, Reichheld stated that clients should be valued according to the net present value (NPV) of the future revenues to be earned from them. This has given rise to the notion of client lifetime value (CLV).
NPS is based on the proven premise that client relationship longevity can be predicted by a client’s response to a single question: “how likely would you be to recommend our ﬁrm to a friend or colleague?”
Reichheld’s research showed that surprisingly high NPS scores are required to indicate long-term client loyalty. The NPS of a ﬁrm overall is calculated by subtracting the percentage of clients who allocated a score of 6 or less (Detractors) from the percentage who allocated a score of 9 or 10 (Promoters).
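As a brief illustrative sketch (ours, not from Reichheld’s book), the NPS arithmetic just described can be expressed in a few lines of Python:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9 or 10; detractors score 6 or less; passives
    (7 or 8) count in the denominator but not in either group.
    Returns a value between -100 and +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 6 promoters, 2 passives and 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 9, 8, 7, 6, 3]))  # → 40.0
```

Note that passives dilute the score without adding to it, which is one reason genuinely high NPS values are hard to achieve.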
But is NPS the best metric for law firms? We mentioned earlier how A&O won an engagement based on the general counsel’s level of trust in the firm’s ability to deliver when the going got tough. Few companies measure trust explicitly – yet it is the fundamental building block of any client relationship.
Customer Relationship Quality (CRQ)
An alternative to NPS is to view the client relationship more holistically. Client relationship quality can be visualised as a pyramid comprised of three diﬀerent levels (see Figure 1).
Figure 1. The Customer Relationship Quality (CRQ) model
Three levels of customer relationship quality
The ﬁrst and most fundamental is the Relationship level. Do your clients trust you, are they committed to a long-term relationship with you, and are they satisﬁed with that relationship?
The second is the Uniqueness level. Do your clients view the experience of working with you, and the solutions you oﬀer, as truly diﬀerentiated and unique?
At the top of the pyramid is the Service level. Are you seen as reliable, responsive and caring?
If law ﬁrms score well on all six elements of customer relationship quality (CRQ), their clients will act as ambassadors, generating a high NPS.
NPS and CRQ scores are highly correlated. Law firms should track their NPS, but in order to understand what that score is really telling them – and what they must do to improve it – they also need to measure and understand all six elements of the CRQ model.
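To make the pyramid concrete, here is a minimal, hypothetical Python sketch of grouping per-client survey scores by the three CRQ levels. The element names are our paraphrase of the questions above, not Deep-Insight’s official schema, and the 0–10 scale is an assumption:

```python
from statistics import mean

# Hypothetical mapping of CRQ pyramid levels to survey elements.
# Element names paraphrase the questions in the text; they are an
# illustration, not Deep-Insight's official terminology.
LEVELS = {
    "relationship": ["trust", "commitment", "satisfaction"],
    "uniqueness": ["uniqueness"],
    "service": ["reliability", "responsiveness", "care"],
}

def level_scores(elements: dict[str, float]) -> dict[str, float]:
    """Average the (assumed 0-10) element scores within each level."""
    return {level: mean(elements[name] for name in names)
            for level, names in LEVELS.items()}

# Example client record: strong on relationship and service,
# weaker on uniqueness - a common pattern in professional services.
client = {"trust": 9, "commitment": 8, "satisfaction": 9,
          "uniqueness": 6, "reliability": 9, "responsiveness": 8,
          "care": 9}
print(level_scores(client))
```

Even a simple roll-up like this makes it visible which level of the pyramid is dragging down the overall relationship, which a single NPS number cannot show.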
Turning ‘Client Listening’ into an Effective CX Programme
Client listening is obviously more than just the score and the verbatim feedback that is captured. A fully-ﬂedged CX programme is also far more than a client listening survey. It includes what we refer to as ‘hard side’ and ‘soft side’ activities (see Figure 2).
Figure 2. Deep-Insight CX framework
The four quadrants are:
LEADERSHIP. The most important quadrant. Good CX programmes are always led from the top.
STRATEGY. Good CX programmes link customer, product, operational and organisational strategy explicitly to customer needs.
EXECUTION. Success requires properly resourced teams that are brilliant at executing the strategy.
CULTURE. Finally, client-centricity must become integral to the DNA of the organisation: “it’s how we do things around here”.
The hard side activities of Strategy and Execution are important. These include setting up the CX programme, determining what to measure, executing the survey process, and using the client feedback to update company strategy. However, one of the key lessons from interviews with corporate leaders is that successful CX programmes require heavy investment in ‘soft side’ activities if they are to generate real long-lasting results. This means spending signiﬁcant amounts of time with law partners and client teams planning for success.
All four quadrants are necessary for a successful CX programme. Many law ﬁrms start at the execution quadrant and are often disappointed when their client-listening programme produces no meaningful result or change. In our experience, the soft side is often overlooked and almost always under-resourced. Leadership is the most important quadrant while culture is the most challenging.
Step 1. Drive change from the leadership level
Client relationship longevity is a crucial building block of the firm’s client value proposition (CVP). It deserves the attention of the firm’s most senior leaders. Without active and highly visible senior leadership support, a firm is unlikely to achieve the CX results it needs to build sustained competitive advantage. It is crucial that the firm’s leaders themselves be truly client-centric. They must:
Be genuinely passionate advocates for the ﬁrm’s clients and their interests;
Take personal ownership of enhancing client-centricity in the firm;
Have an intuitive understanding that client satisfaction drives ﬁnancial success;
Use client-centricity as a lever to eﬀect organisational change; and
Be relentless about execution.
This list might appear daunting, but it is crucial. Too often, a ﬁrm’s CX initiatives founder because the task is delegated to mid-level teams who have no more than lukewarm support from senior leadership. The result? They are unable to drive the degree of change that can really make a diﬀerence. The need for active and visible senior leadership support is evident in the comments of Peter Duﬀ, chairperson of Shoosmiths, in Case Study 1.
Step 2. Link Strategy Explicitly to Actual Client Needs
Once the leadership for the CX programme has been secured, the law ﬁrm must use the voice of the customer to drive all aspects of the ﬁrm’s strategy. This can, and often will, involve major organisational and operational change. It will also require changes to the ﬁrm’s business model (CVPs, resources and proﬁt model). O’Connor and Whitelaw devote an entire chapter of their book Customer at the Heart to the strategy of client-centricity.
In Case Study 2, Zelinda Bennett speaks of some of the major strategic changes that DWF Group have made in order to serve their global clients more eﬀectively. Reorganising the business into global divisions and acquiring an alternative legal services provider (ALSP) were bold and decisive actions taken precisely because DWF wanted to become more client-centric.
Strategy must involve all aspects of the law ﬁrm’s business. It includes HR (hiring, training and promoting the most client-centric lawyers) as well as ﬁnance (investing only in initiatives that will have a demonstrable impact on clients). It must pervade the entire organisation. Every department in the law ﬁrm must see its role through the lens of the client.
Step 3. Build a CX Execution Capability
Besides strong leadership, a successful CX initiative also requires an ‘execution’ capability to ensure that the voice of the client is both captured correctly and acted upon. Execution is more than setting up a client listening post. It involves turning the outputs from those client conversations and collaborative explorations into tangible actions that solve real client problems.
In today’s world, the client personnel involved in buying and consuming legal services extend far beyond the legal department. The client’s voice needs to extend beyond just the GC and her or his legal team. Law firms must think about the ‘inﬂuencers’ who are telling those decision makers that “We have to work with Firm X” or “Firm Y really aren’t delivering value for money – we should be looking elsewhere”.
One of the better examples of a good execution capability is Baker McKenzie’s Reinvent programme (Case Study 3). Reinvent started by using client listening to map existing client interactions with the law firm – ‘journey mapping’, as it is often called – but then moved to the next logical level. Baker McKenzie started working with clients to re-engineer processes and even to co-create new services and solutions. The Reinvent programme was developed to establish the governance, skills and infrastructure required to support better client outcomes. It focuses both on re-engineering specific processes and services with clients and on developing teams across the firm – empowering execution at a grassroots level. Such an approach is a highly effective way to build engagement with the CX process and commitment to its success.
Step 4. Embed Client-Centricity into the DNA of the Organisation
Lawyers are consummate professionals. But are they truly client-centric? Most legal professionals entered the legal industry to practise law. They wanted to advise clients and to mitigate risk. They didn’t join to help CFOs and procurement professionals to cut costs. However, that’s what partners in law ﬁrms are being asked to do these days.
Embedding behaviour changes and aligning the firm’s culture with the ‘voice of the client’ takes patience, persistence and continuous effort over a long time. Engagement with clients must be ongoing. Building and sustaining the momentum required to be truly client-centric needs a constant stream of input from clients. It also requires constant conversations within the firm about what that input means, and how clients can be better served.
In Case Study 4, we look at Travers Smith’s ability to embed the culture of client-centricity into the DNA of the ﬁrm. Silos have been broken down. Close collaboration between lawyers and business services has been achieved. International clients are serviced almost seamlessly. The ﬁrm’s senior leadership takes a very active lead in this.
The reason why most law firms are lagging behind might not be that they are inattentive to clients (that is usually patently not the case). It is more likely that they simply do not have the systems and processes in place to get input of the quality and detail that can drive continuous improvement. A properly designed CX programme delivers exactly that. Over time, measurable results emerge both in terms of client loyalty (NPS and CRQ scores) and, more importantly, economic performance.
Earlier, we said that many companies start with Execution. We strongly believe that the ﬁrst step in a successful CX programme is gaining the right Leadership commitment to putting the client at the heart of everything a law ﬁrm does.
Once that leadership is in place, it becomes easier to get the law ﬁrm’s strategy aligned to what clients actually need and the CX execution tasks become much easier. With leadership, strategy and execution in place, culture change automatically follows.
As David Morley’s earlier anecdote reveals, the moments that most shape CX arise when things go wrong. Clients report four major areas in which the law firms that advise them are inconsistent: keeping them informed; dealing with unexpected changes; handling problems; and meeting scope. Feel free to work on these immediately, of course.
But if you want to achieve a step change, that starts at the top.