You know that moment in a meeting when someone throws up a graph and someone else feels compelled to make a comment, and there goes the next 30 minutes? Mark Graban hates that. Well, we all hate that.
When we overreact to random blips in our metrics, everything looks like a free-for-all. It's not just wasted meeting time. You can spend years chasing the wrong metrics and trying to fix the wrong things.
As a lean management expert, Mark Graban explains how to filter the signal from the noise and offers excellent advice to leaders whose organizations don't hit their aggressive goals.

What we learned from this episode
-Practices from one industry are often transferable and can be adopted or adapted to suit another.
-More often than not, fluctuations in business metrics occur around an average and can be treated as noise. Use upper and lower process limits to identify "signals": the fluctuations that actually need to be investigated.
-It's important to understand the cause-and-effect relationship between what we're doing to try to improve our business and what the metrics are showing us. Not all fluctuations deserve your attention and action. Notice the difference between the signal and the noise, and put your problem-solving skills where it matters.

What you can do right now
-Don't overreact to the next fluctuation you see. Instead, take the long view: gather data over time and look for broader trends.
-If you're a leader who sets aggressive goals, be open to the possibility that the goal might not be achieved. Instead of threatening, blaming, and punishing, work together with the entire organization to hit that goal.

Key Quotes
"One of the challenges is trying to help people be open-minded, that we can learn from other industries without losing what's special about healthcare. People use phrases like 'cookbook medicine' or 'assembly line medicine,' and, well, not all assembly lines are bad. And a cookbook can be helpful, but it doesn't mean a cookbook can turn me into a Michelin-star chef."
"When we have a metric that's in that realm of noise, there is no easy answer or root cause for every up or down."
"If we're desperate to paint a picture of success, people will abuse the data: they'll cherry-pick data points, they'll massage the data, if not torture it, to prove the point they need to show, because politically it might be really unsafe to either not know, or to try something that fails."

Links mentioned
https://www.amazon.com/Measures-Success-React-Better-Improve-ebook/dp/B07DV8ZC4P - Measures of Success: React Less, Lead Better, Improve More
https://www.amazon.com/Measure-What-Matters-Google-Foundation/dp/0525536221 - Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs
Today, our guest is Mark Graban. He’s an author, speaker, and consultant. And this episode is Work Minus Overreacting to Metrics. Hi, Mark. How are you?
I’m good, Neil. Thanks. How are you?
I’m doing excellent. I’m really excited about this topic. I think it’s very important for a lot of people listening in, why don’t you start off just giving us a little bit of your background?
I started off my career as an industrial engineer. I grew up in Michigan around the auto industry, so my first job out of college was at General Motors, during an era when GM and other automakers were desperately trying to learn from Toyota, in some ways copy them, and catch up to them. As my career has progressed, I've worked in a couple of different manufacturing companies and two different software startups, and I've worked a lot over the last 13 or 14 years in healthcare, applying lessons from industrial engineering and what's called the Toyota Production System, or lean manufacturing, trying to help people in healthcare improve the way they deliver patient care.
Has that been a pretty seamless transition industry-wise, with healthcare adapting to a more metrics-driven lean approach?
It's interesting. I think for people who have worked in manufacturing and healthcare, you start seeing connections of systems and workflows and people, and there's an awful lot of similarity. There are obviously key differences. What's challenging, though, is that people who have only worked in healthcare find it harder to connect those dots. And I think it's understandable when people in healthcare cringe when they're thrown scenarios or practices that would be transferable from aviation or from manufacturing or from Disney, and they say, "Hey, we're different." I'm like, well, of course you are. And there are special things about healthcare. So, one of the challenges is trying to help people be open-minded, that we can learn from other industries without losing what's special about healthcare. People use phrases like "cookbook medicine" or "assembly line medicine," and, well, not all assembly lines are bad. And a cookbook can be helpful, but it doesn't mean a cookbook can turn me into a Michelin-star chef.
This is going to be great. I actually want to start off with this topic of signal versus noise. So, give us just the baseline understanding of how you define the difference between the two.
I've been fortunate to learn some statistical methods that date back almost 100 years, and I don't think that means they're outdated. I think it's just good, proven math and methodology. One way to think of noise is as the typical, routine fluctuation that you see in a metric. A lot of times, if we look at a business metric, and hopefully we're not just looking at a two-data-point comparison of this month versus last month, or this quarter versus a year before, we can take our metric and create what Excel would call a line chart: plot the data and look at 12 to 20 data points. Sometimes we can see that the metric is just fluctuating around an average, so we can calculate and draw the average on the chart. Then, to help filter out the noise, we put two other lines on the chart. We think of these as guardrails above and below the average. There's a way of calculating them. You can think of each as a guardrail, or sometimes it's called a process limit, and it says that if this metric has been fluctuating predictably around an average, it's going to continue fluctuating predictably within this range. And then we apply three rules to this chart. If we see a data point outside of those guardrails, we'd conclude that something has changed significantly in our business or our operations. That would be a signal.
The second rule is looking for eight consecutive data points above or below the average; that's unlikely to be random. And then there's a third rule, a little harder to explain verbally, where we're looking for a clustering of three out of four data points that are close to one of those limits. So, one of the things we can do through this methodology, called process behavior charts, is learn to recognize when there's noise, or just routine fluctuation, stop reacting to all of the noise, stop asking people to explain the noise, and save our attention and problem-solving skills for times when we see a signal in a chart. That signal could be in a bad direction, which means it's time to start investigating. And if we see a signal in a positive direction, then we make sure we understand why that happened, make sure we understand our business. If we've seen, for example, an increase, a signal, in revenue or customer conversion rates, that can help us prove cause and effect between some new initiative we've put in place and a meaningful difference in our metric. The alternative is, let's say, launching a new marketing campaign when the honest reality is that revenue is still just fluctuating around a predictable average. This approach hopefully helps us better connect the dots and understand the cause-and-effect relationships between what we're doing to try to improve our business and what the metrics are showing us.
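The process behavior chart Mark describes (sometimes called an XmR chart, after Donald Wheeler's work that his book draws on) can be sketched in a few lines of code. This is a minimal illustration with made-up data, not something from the episode; 2.66 is the standard scaling constant for charts of individual values, and only the first two of the three rules he mentions are implemented here.

```python
def xmr_limits(data):
    """Center line and natural process limits for an XmR
    (process behavior) chart; 2.66 is the standard scaling
    constant for charts of individual values."""
    avg = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return avg, avg - 2.66 * avg_mr, avg + 2.66 * avg_mr

def find_signals(data):
    """Indices that are signals under Rule 1 (a point outside the
    limits) or Rule 2 (eight or more consecutive points on the
    same side of the average)."""
    avg, lower, upper = xmr_limits(data)
    signals = set()
    for i, x in enumerate(data):                  # Rule 1
        if x < lower or x > upper:
            signals.add(i)
    run_start, run_side = 0, 0
    for i, x in enumerate(data):                  # Rule 2
        side = (x > avg) - (x < avg)
        if side == 0 or side != run_side:
            run_start, run_side = i, side
        if side != 0 and i - run_start + 1 >= 8:
            signals.update(range(run_start, i + 1))
    return avg, lower, upper, sorted(signals)

# Nine periods of routine fluctuation around ~50, then a jump:
avg, lo, hi, sigs = find_signals([52, 48, 51, 49, 53, 47, 50, 52, 48, 72])
# Only the last point lands outside the limits, so sigs == [9];
# every other up and down is noise that needs no explanation.
```

The nine routine fluctuations here would each be "down 4%" or "up 6%" headlines in a two-data-point comparison, but the chart flags only the final point as worth investigating.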
So, here's the situation I think we've all found ourselves in. You're in a meeting, a bunch of reports are coming through, and you put up a chart or some kind of graph that you know is just a basic status thing. But then somebody stops the meeting and asks you to explain these two data points, or to explain something that you feel is fairly insignificant, or just noise. Let's look at this from two sides. First, if you're the one presenting, and it's your chart up there, and somebody stops and asks you a question, is there any way you can prevent that from happening or structure your data in a different way?
Well, there are times when I've helped organizations and their leaders develop new habits, because you're right, it's a very common tendency. Let's say somebody reports a metric: the number of visitors to our website is down 20%. Someone might say, "Well, you need to understand what happened and find a root cause and keep asking why." The reality, and part of the context, might be: well, you know what? The number of web visitors had gone up 22% the month before, and it had gone down 17% before that. The number of visitors to our website fluctuates because of Google and algorithms and all kinds of other factors. Trying to find a root cause for that routine variation, or that noise, just ends up being a waste of time. There's a story I've blogged about, and I include it in the book, about a software company I've been involved in, where I've coached the CEO and the marketing director to overreact less. There were times when the CEO would ask this well-intended, inquisitive question of, why did the number of sales-qualified leads drop? And Maggie would go spend an hour investigating and never really come up with a convincing answer. That's not her fault. When we have a metric that's in that realm of noise, there is no easy answer or root cause for every up or down. So, some of this is developing new habits, and helping them realize that if Maggie could spend that hour actually improving the marketing system in some way, instead of just trying to explain the noise, we'd actually start making a lot more progress. I think it's better to try to figure out how to improve the metric instead of just explaining the metric. But the problem is, back to your other point, this executive habit, this leadership habit of asking people to explain every up and down, it feels good. "I'm not tolerating poor performance. I'm taking action." But not all action is helpful. And that can be hard. It's difficult to try to open people's eyes.
And I've tried to do this in workshops through some hands-on simulations and adult learning exercises. But sometimes it's a challenge even to get people to the table, to get them to look in the mirror and ask themselves: maybe some of my behaviors as a leader, although well-intended, sometimes get in the way of our improvement.
And if you're in that leadership position, what are some of those other habits you need to watch out for? You want to stop overreacting, but you also don't want to fail to act on something that needs it, and if you feel like somebody's trying to hide something in a report, you want to be able to pull that out. So, what are those habits we can build?
One of those habits shows up in organizations that use these process behavior charts on a dashboard. Let's say you have six key organizational metrics, and maybe three of them have gotten worse in the last month. A lot of organizations would say, well, those three have gotten worse, we need people to go figure it out and work on it, when maybe only one of those three shows a statistical signal. We should be focusing and prioritizing our resources on metrics or scenarios where there is something worth explaining, when we see a signal. Sometimes organizations just spread themselves too thin by reacting equally to everything that's gotten worse. So, that's one habit: using the charts to help prioritize when we react. Because, like you said, there is a time and a place where you need to react pretty quickly and investigate while the situation is fresh. But there's also a time when you have a metric that's fluctuating around an average and you're not happy with where that average is. We can certainly try to improve business performance, but that usually happens in a decidedly non-reactive way: taking a step back and understanding your business, understanding the systems and processes and workflows. You can improve a stable, predictable system, but that requires taking a step back. So you're trying to learn habits of when do I react, and when do I take a breath and figure out how to improve the system. That thought process is something that can be developed. It's a new habit. And these charts can help us so we're not guessing, well, do I think that's a signal there? There are some pretty hard-and-fast statistical rules that are easy to learn and easy to implement that can help us tell the difference between those situations.
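The prioritization habit could look something like this sketch. The dashboard, metric names, and numbers are all hypothetical, and only the simplest rule (latest point outside the process limits) is checked for brevity:

```python
# Hypothetical dashboard: metric name -> last 12 monthly values.
dashboard = {
    "revenue":     [100, 104, 97, 102, 99, 103, 98, 101, 96, 100, 103, 95],
    "defect_rate": [2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.9, 4.8],
    "on_time_pct": [94, 96, 93, 95, 94, 97, 95, 93, 96, 94, 95, 92],
}

def latest_is_signal(series):
    """Rule 1 only: is the newest point outside the natural process limits?"""
    avg = sum(series) / len(series)
    mrs = [abs(b - a) for a, b in zip(series, series[1:])]
    avg_mr = sum(mrs) / len(mrs)
    lower, upper = avg - 2.66 * avg_mr, avg + 2.66 * avg_mr
    return not (lower <= series[-1] <= upper)

# Investigate only the metrics whose latest move is a signal,
# not everything that merely got worse this month.
to_review = [name for name, pts in dashboard.items() if latest_is_signal(pts)]
# Here only "defect_rate" shows a signal; revenue's dip is noise.
```

All three metrics "got worse" in the final month, but only one of the moves is distinguishable from routine fluctuation, which is where the investigation time should go.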
As someone who has spent most of your career with metrics, with numbers, where are you in terms of being comfortable with not knowing the answer? There are situations where you can't explain why a number changed. What are the limits of those metrics? What kind of philosophical thoughts have you had on that?
I think it's interesting. In the lean methodology and the Toyota Production System, there's a big emphasis on being honest about the real reality. And there's an aspect of this lean culture that says it's okay to not know the answer to a question. It's better to go take some time and investigate the real situation instead of just making up an answer that sounds good. For listeners who might know Eric Ries and the Lean Startup methodology, there are similar questions that Eric raises around what you could call integrity. If we're fooling ourselves as an organization, if we've spent a lot of money on some initiative and we're desperate to paint a picture of success, or what Eric calls success theater, people will abuse the data: they'll cherry-pick data points, they'll massage the data, if not torture it, to prove the point they need to show, because politically it might be really unsafe to either not know, or to try something that fails. And the issue of integrity comes down to this: if we're fooling ourselves, we're only hurting ourselves long-term. So, whether I'm working with leaders in healthcare or the CEO of a startup, you want to create an environment where people are being honest about the real reality, honest about challenges or gaps in performance, because as soon as you get a culture where it's more important to make things look good, that gets corrosive really quickly.
I want to go deeper into this idea of painting a picture, the idea of a narrative. When we're looking at numbers, we feel very scientific, like we're looking at the truth that's out there. But we're always telling a story based on those numbers. So, what's the spectrum you've seen in how organizations blend that narrative with the honest data, and what's a good approach?
There was a workshop I taught a couple years ago, and I incorporated this story into my book. A nursing executive from England, part of their National Health Service, came in with an important goal, a challenge to reduce patient falls by 50%. And like you said, it was data-driven: she came into the class with data and said, "I need to show a 50% reduction." My thought process was, well, wait a minute, let's look at the data. There's a phrase that we use, the voice of the process. You might have a voice of the customer that says we should reduce falls by 50%, or the customer might really say we should eliminate falls and patient harm altogether, but you've got this goal of a 50% reduction. Now, we could have gone and cherry-picked two data points and shown an overly simplistic before-and-after picture, but that wasn't the point of the workshop or the methodologies I teach, and it's not the real reality. What we saw from creating a line chart and turning it into a process behavior chart was a signal showing a downward shift in the average number of falls. The number of falls had been fluctuating around an average that they thought was too high, and then, because of some changes they made in improving the way the work was done, it was now fluctuating around a lower number. So, I would tend to say the most honest, valid way of looking at this is to look at the difference between the averages. The average number of falls was down 30%, which is great progress. But now it comes back to the politics and the management question of, well, you've set this goal of a 50% reduction. What happens if someone doesn't get there? In a lot of organizations, there's a mindset that's threatening. It's like, well, I've set an aggressive target, you need to hit it, or else.
Well, then when that’s the mindset, when there’s a lot of fear, people will then maybe find it easier to distort the data, like let’s start covering up falls instead of reporting them honestly. And then we get back into the realm of maybe we’re just fooling ourselves. And maybe we’re just hurting ourselves.
So, I think that's another dimension, too. As leaders, it's good to set aggressive goals, but you have to be careful that that doesn't lead to situations that are really dysfunctional. Remember what happened with Wells Fargo a couple years ago: the headlines said that bank branches around the country had what Wells Fargo described as unethical tellers and unethical managers who were boosting their performance by opening unauthorized accounts. I look at that and say, well, there's clearly a systemic problem here. And part of the root of that problem was that the then-CEO of Wells Fargo set this completely unrealistic goal that every customer should have eight accounts. It was easier for people to start gaming the system, fudging the numbers, distorting the system. As a leader, when you set an aggressive goal, you have to leave open the possibility that the organization might not be able to hit it. And instead of blaming and threatening and punishing, I think more effective leaders act like servant leaders who say, well, let's work together to hit that aggressive goal. The hospital executives might say, well, the goal was a 50% reduction in falls, and you got a 30% reduction. That's worth celebrating. Now, what else can we do to help further improve the system and get closer to our goal? I think some of those reactions are more helpful, and some of those reactions from leaders just create a lot of chaos and dysfunction.
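The "difference between the averages" comparison from the falls story is simple arithmetic. This is a sketch with made-up numbers, since the real data isn't given in the episode:

```python
# Hypothetical monthly fall counts, before and after the improvement
# work; on a process behavior chart the shift would show up as a
# signal (a sustained run below the old average), which is what
# justifies comparing the two averages at all.
before = [10, 11, 9, 10, 12, 8]
after = [7, 8, 6, 7, 8, 6]

avg_before = sum(before) / len(before)   # 10.0
avg_after = sum(after) / len(after)      # 7.0
reduction = (avg_before - avg_after) / avg_before
print(f"Falls down {reduction:.0%} against a 50% goal")
```

Comparing averages of the stable periods is more honest than cherry-picking the worst pre-change month against the best post-change month, which could manufacture almost any "reduction" you like.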
Is there a difference between the statements "I need to show a 50% drop in falls" and "I want people to fall less in my hospital"?
I think there are subtle differences. Sometimes people get too wrapped up in the metric. Another thing I'll hear leaders say relates to patient satisfaction surveys: they'll get fixated on the metric and say things like, "We need those scores to go up." Well, another way of saying that is, "We need to improve the experience for patients, which will then lead to better scores." One thing we learned from the Toyota approach and lean manufacturing is that the right process brings the right results. Yes, we care about results, but when all you do is promise rewards or threaten punishment for not hitting a result, people might take all kinds of shortcuts or make bad decisions that don't really improve the organization. So, we've got to have a balance. We've got to improve the process in a way that leads to better results. And hopefully we're working together, instead of just judging and ranking and firing. I heard from somebody who had recently left a hospital in the U.S. that the culture had gotten really bad; some new executives were basically threatening leaders like him: "If you don't hit the targets, we'll fire you and find someone else who can." I think in a lot of cases that's a real mistake. There might be very systemic reasons that are out of that leader's control, so firing that leader and replacing them with another one probably isn't going to make… I wouldn't necessarily buy the hypothesis that replacing the leader is the only thing that can make a difference. It might make no difference.
You get into this situation of asking: what is the actual metric that you want? Is it that number, or is it something different?
Yeah, and sometimes a patient satisfaction survey is just the best estimate you have of actual patient satisfaction. They mail these surveys out to people, it's voluntary, it's self-selecting. So it's not an equal sampling of the patient population: the people who are the most upset, or the most happy, may be the ones most likely to fill out the survey, and patient populations that have more time to fill out these lengthy surveys might be more willing to fill them out. One thing I find fascinating about hospitals is that, unlike a lot of other businesses, the customer is right there. When I go to the doctor's office, I can't think of a single time a doctor or a dentist has asked, "So, how was your visit today? Is there anything we could do better?" That's a far richer opportunity for a discussion about satisfaction and improvement compared to sending out a survey that comes weeks later and might arrive when people have relatively fuzzy memories. You go to the grocery store and it's almost a default question: "Did you find everything you needed today?" Doctors' offices don't ask the equivalent of that, and I think that's a lost opportunity. So, sometimes it's not just the metrics; sometimes it's getting the actual voice of the customer instead of looking at a metric that at best approximates the voice of the customer.
Close us out with this scenario. Let's say you're a young manager coming into a new role, or you've inherited a new team, and they have a set of metrics they've been working with. What's the best way to come in and evaluate those, decide what you want to keep pushing for and what you want to pull back from?
There are two key questions. One is deciding and evaluating what we measure and how we measure it. Do we have too many metrics? Do we have a good balanced scorecard of metrics that looks at quality, cost, customer satisfaction, and employee engagement? So, I think you can look at what to measure, and there are methodologies for that. Toyota and the lean approach would bring a methodology called strategy deployment. A lot of people, especially in the tech sector, are really into an approach called OKRs, objectives and key results; there's a good book out there, a bestseller, called "Measure What Matters" by John Doerr. So, I think all of that is an important discussion. But where my book, "Measures of Success," builds upon those different methodologies is in answering the question of, now, what do we do with that metric over time? So, I would come in as a leader and first evaluate: are we measuring the right things? Are we measuring too much? Do we have a good balance of measures? I've worked in manufacturing where all of the metrics were related to production volumes and production speed, and that ended up causing a lot of problems related to quality.
So, I would have that discussion around what we are measuring. Then I would take a look at the metrics, create process behavior charts, and see whether the metrics are changing over time. We could look for signals that indicate change, or we might find that the reality is these metrics are just fluctuating around an average. And as I've done with other organizations, I would try to teach some of these new habits around learning to ignore some of the noise so that we've got the bandwidth to respond to signals. The subtitle of my book tries to summarize the idea here: when we react less and lead better, we can improve more. So, back to what you said earlier, just as a final thought: some of these old habits are just that. We react to everything, we demand explanations, and that feels good as a leader. But I've seen cases where people step back and reflect, and through different learning exercises their eyes are opened, and they say, well, it's not that I'm a bad leader, but I can learn a few tactics that help me become a more effective leader, use some methods that waste less time for the organization. And that allows us to focus more on improving the business and actually improving performance. That's what I'm trying to help people with.
Awesome. I love it. Mark, it's been really interesting for me; I've learned a lot from this conversation. Tell us where we can go to find the book and learn more about you.
The website for the book is measuresofsuccessbook.com. It can be found on Amazon, as a print book and as a Kindle book, and it's available through Apple Books for people who prefer that platform. So, it's available now. And thankfully, I've gotten a good response from people. It's great to see people reading the book and good to see positive reviews. But I think the most satisfying thing is the emails that come from people who have applied this methodology and seen it be really helpful to them as leaders, or helpful to the business. So, again, that's measuresofsuccessbook.com.
Awesome. Well, Mark, thanks so much for being on the show and sharing everything with us. We appreciate it.
Alright, Neil. Thank you.