When it comes to connecting CX and business value, there are a lot of traps, misconceptions, and bad advice out there. I love calling BS and, upon encouragement from a great boss, I wrote down all my thoughts on the topic. It was long enough that I split it into seven parts (all links below).
Since publishing the series last year, I've discussed the concepts on a podcast, presented them at my company's annual conference, gone on tour to train my company's field teams on them, and further road-tested them in client workshops… For my first post on Substack, I'd like to revisit the series with learnings from the past 10k miles.
Note: The bullets below are additions to the original articles. If you want the whole message, click the links to the articles.
While written for CX, the core message also applies to employee experience (EX). I had originally planned to write a series dedicated to EX, but the principles are the same so I probably won’t.
I’ve been learning about applications of Karl Popper’s theory of falsification recently, specifically the concept that science advances by eliminating what is false, not by proving what is true. The nine reasons covered in this article that CX scores and business value can move in opposite directions are examples of that principle in action.
CX advocates sometimes want to take credit for creating business value when CX scores go up. But what about when scores go down? Do they say that CX destroyed business value?
In response to my case against hunting for a magical correlation between CX scores and business value, a few people have said something like, “Well, it can’t hurt.” Well, I think it can. In addition to wasting company resources and damaging credibility, it incentivizes bad decisions (AKA score chasing). Also, shame on people who say this. Bad models cause harm all the time.
I have an open invite for people to share arguments or evidence I may have missed. A couple of people shared research papers that I hadn’t seen before (link, link), arguing that we can in fact use scores as a proxy for customer/employee behavior or business value. Both papers failed to address the point. I plan to cover why in an upcoming post. The offer remains open.
Earlier this year I wrote an article on creating value for the business vs. for customers with my friend, Elizabeth. The core message is that it really helps to ask, “value to whom?” The answer helps clarify what is and isn’t valuable. For example, while business value is always a function of cash flow, non-businesses (e.g. governments) tend to value other things (like a reduction in unemployment or an increase in trust).
The EX folks out there might consider the following business value metrics:
Talent acquisition: # of talent referrals per employee, candidate offer conversion rate, etc.
Employee productivity: Lost productivity per employee per week, new hire ramp time, etc.
Employee retention: Voluntary employee attrition rate
Cost to serve workforce: Employee safety incident rate, employee support requests, etc.
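The rate-style metrics above are simple ratios, but it's worth being explicit about numerators and denominators before putting them on a dashboard. Here's a minimal sketch of how a few of them might be computed; the field names and figures are hypothetical, not from any real HR system.

```python
# Hedged sketch: computing a few of the EX metrics above from illustrative
# quarterly HR data. All field names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class QuarterlyHRSnapshot:
    avg_headcount: float      # average employees on payroll in the quarter
    voluntary_exits: int      # employees who chose to leave
    referrals_submitted: int  # candidate referrals made by employees
    offers_extended: int
    offers_accepted: int

def voluntary_attrition_rate(s: QuarterlyHRSnapshot) -> float:
    """Voluntary exits as a share of average headcount (employee retention)."""
    return s.voluntary_exits / s.avg_headcount

def referrals_per_employee(s: QuarterlyHRSnapshot) -> float:
    """Talent referrals per employee (talent acquisition)."""
    return s.referrals_submitted / s.avg_headcount

def offer_conversion_rate(s: QuarterlyHRSnapshot) -> float:
    """Share of extended offers that candidates accept (talent acquisition)."""
    return s.offers_accepted / s.offers_extended

q = QuarterlyHRSnapshot(avg_headcount=400, voluntary_exits=12,
                        referrals_submitted=60, offers_extended=25,
                        offers_accepted=20)
print(f"Voluntary attrition rate: {voluntary_attrition_rate(q):.1%}")  # 3.0%
print(f"Referrals per employee:   {referrals_per_employee(q):.2f}")    # 0.15
print(f"Offer conversion rate:    {offer_conversion_rate(q):.1%}")     # 80.0%
```

Pinning down the denominator (average headcount vs. end-of-quarter headcount, voluntary vs. all exits) up front saves a lot of arguments later about whether the number moved.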
If you’re struggling to identify good business value metrics, start with the problem you are trying to solve. Meaningful business problems impact cash flow. If your problem doesn’t, you might not get enough support from the organization to solve it.
I have spent more time hammering on the importance of actions over the past year than on all of the other topics combined. I'm pleased to say that in response, many people at my company are changing the way they communicate our capabilities, serve customers, talk about value, and tell customer stories.
A lot of people think value realization is an analysis problem. At least for CX programs, I think it’s more of an action problem. There’s no amount of data that can tell me a program has realized value if there is no action. And driving real, meaningful action is hard to do.
Earlier this year I published a guide on our value chain framework with my friend, Isabelle. This framework is the foundation of all of the value work I do, and I packed the guide full of my best advice.
I would include “negative actions” within my definition of actions; e.g. eliminating manual effort through an automation, deciding to cancel an initiative because of an insight, etc.
This chart helps communicate the types of actions that a CX program might drive as it matures. It’s not meant to be perfect (e.g. some programs make process improvements before 1:1 issue resolution).
Getting to rung 3 in the ladder of causation (counterfactuals) requires a reliable model of how the world works. Experiments are a great way to improve your understanding of how the world works, as opposed to just basing your model on correlations (AKA rung 1).
Ask yourself how defensible your claims of business impact are. Remember that you aren’t trying to persuade people who want to believe you — you are trying to persuade people who are skeptical or who actively don’t want to believe you. Think like a scientist and make an effort to falsify your claims (related: see point #2 about Karl Popper). Then ask ChatGPT to play the role of a skeptical executive to get another set of eyes.
PICO: Problem, Intervention, Control, Outcome. When designing an experiment, make sure you have a clear definition of the problem you are trying to solve (Problem), the change you are going to make (Intervention), the control group to compare against your treatment group (Control), and how you will measure the results (Outcome). Then ask ChatGPT to play the role of a skeptical data scientist and poke holes in your approach.
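To make the Control and Outcome pieces concrete, here is a minimal sketch of how the results of a PICO-style experiment might be compared, assuming the Outcome is a per-customer yes/no flag (e.g. renewed or not). The scenario, group sizes, and counts are all illustrative assumptions, not real data.

```python
# Hedged sketch: comparing treatment vs. control on a binary outcome with a
# two-proportion z-test. All numbers below are made up for illustration.
import math

def two_proportion_z(successes_t: int, n_t: int,
                     successes_c: int, n_c: int) -> float:
    """Z-statistic for the difference between treatment and control rates."""
    p_t, p_c = successes_t / n_t, successes_c / n_c
    p_pool = (successes_t + successes_c) / (n_t + n_c)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Problem: low renewal rate. Intervention: proactive outreach to the treatment
# group. Control: no outreach. Outcome: renewals observed in each group.
z = two_proportion_z(successes_t=130, n_t=500, successes_c=100, n_c=500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

Even this back-of-the-envelope version forces you to write down all four PICO elements before the experiment runs, which is most of the battle.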
Even imperfect experiments can significantly reduce your uncertainty as to whether a given initiative will cause significant harm or significant benefit. If you want precise estimates of smaller impacts, ask a data scientist for help.