Sometimes teams have trouble grasping how a leading indicator can drive outcome-based goals. How do you take an outcome you want to achieve, turn it into input metrics, and then use those metrics to track progress toward your goal?
Imagine you want to lose weight, lower cholesterol, and become healthier over the next 6 months. You could set key results of "lose 40 pounds" and "drop cholesterol to 125," but those are hard to track every week. If you do a bunch of things every week to try to accomplish them, it could be several weeks or months before you see any results.
Here’s where input metrics come into play. You know that to lose weight and improve your health, you need to exercise more and eat better. Doing those things will lead to the weight loss and other health improvements you’re after. So your input metrics might look like this (this is totally made up and you obviously should not use this as health advice):

- Exercise at least 4 times a week
- Cook at least 10 meals at home each week
- Eat a vegetable with every dinner
These are things you can measure every single week, knowing that they’re advancing you toward your ultimate goals. By measuring these input metrics, you can know whether you’re likely on track for your outcome of losing weight and lowering your cholesterol. By adding those measurements as additional key results, you’ll have a well-rounded scorecard that gives you early insight into progress and measures the actual outcome you want to achieve.
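If it helps to see that scorecard in code, here’s a toy sketch in Python. Every metric name and target below is invented for illustration, just like the made-up metrics above:

```python
# Toy weekly scorecard - made-up targets, not health advice.
# Input metrics get checked every week; the outcome key results
# (weight, cholesterol) live on the same card but move slowly.

WEEKLY_TARGETS = {
    "workouts": 4,            # exercise sessions per week
    "home_cooked_meals": 10,  # meals cooked at home per week
}

def score_week(actuals: dict) -> dict:
    """Return each input metric's progress against its weekly target."""
    return {
        metric: actuals.get(metric, 0) / target
        for metric, target in WEEKLY_TARGETS.items()
    }

# Week 3: hit the workout target, fell short on cooking.
print(score_week({"workouts": 4, "home_cooked_meals": 7}))
# {'workouts': 1.0, 'home_cooked_meals': 0.7}
```

A week of scores below 1.0 is an early warning, long before the scale or the lab results would tell you anything.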
Julie’s product team wanted to become more customer-centric. But they were having a hard time creating a measurable outcome for this goal. Even if they could define a way to measure "customer-centric," they knew it would take multiple release cycles over a year before they’d see meaningful change.
A common issue with outcome goals is that they’re lagging indicators. The outcome you’re after might be a long way down the road. It might take several iterations to get right. These outcomes often get left off a team’s goals because it’s hard to figure out how to measure something meaningful in the timespan the goal covers. This especially happens with quarterly goals or OKRs.
You need a leading indicator. A great form of a leading indicator is an input metric. Measure something that shows you you’re doing the right activities that are likely to eventually lead to the long-term outcome you’re after.
Sales teams do this all the time. It’s how they build sales forecasts and execute them. Not every customer you call will buy, but if you’re closing deals with one-third of qualified leads, then talking to 30 qualified leads should lead to 10 sales.
Those sales might take months. Every sale depends on factors far outside the sales team’s control. So how do they know if a change in pricing, process, or positioning is effective? They use the leading indicator of conversations with qualified leads. If the process change caused sales reps to have 50 weekly conversations instead of 30, chances are the change worked. With 50 weekly conversations as the input, you’re on track to close 16 deals instead of 10.
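That forecast is simple enough to write down. Here’s a minimal sketch in Python, assuming the one-deal-per-three-conversations close rate from the example:

```python
# Leading-indicator forecast: the example above assumes one closed
# deal for every three conversations with qualified leads.

def forecast_deals(weekly_conversations: int, leads_per_deal: int = 3) -> int:
    """Project eventual deals from this week's qualified-lead conversations."""
    return weekly_conversations // leads_per_deal

print(forecast_deals(30))  # 10 - the baseline
print(forecast_deals(50))  # 16 - after the process change
```

The deals close months from now, but the input metric tells you today whether the change is working.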
To become more customer-centric, Julie’s team decided the first step would be to improve how they react to customer needs. They defined a leading indicator: the number of customer touchpoints the product managers had every week. They knew that by talking to customers more often, they would recognize customer needs faster.
They also decided that getting early feedback from customers on new features would increase the adoption rate of those features. Setting a goal of an increased adoption rate on those features would be easy, but Julie was concerned that this metric could be met even if the team didn’t develop a feedback habit. She opted for a proxy metric as a leading indicator. The team set a goal to run two experiments per week and iterate based on feedback for each experiment. This input metric might not create an increased adoption rate, but it would change the team’s habits in a way that was aligned with improving feature adoption. Julie knew that creating an experimentation culture was key to driving product adoption over time.
The key to input metrics is to figure out what fuel drives your business and turn that fuel into a metric. For the sales team, the fuel is conversations with qualified leads. The more they have, the faster they go. For Julie, it was experiments. The higher the experiment velocity, the more customer adoption the team saw.
These input metrics measure a behavior change. That makes them outcomes in their own right. Your product managers were running only one experiment a quarter before; the outcome you created is that they now run one every week.
Using input metrics can expand the range of what you can use as outcomes. This can make it easier to bring long-term thinking into your goal setting.
"Outcomes over outputs."
So often we talk about measuring work by the impact it creates, not whether we did it. A goal of "ship Foo in Q3" isn’t nearly as good as "35 new users are using Foo" or "retention increased 12% after releasing Foo."
This extreme focus on outcomes can have an undesired effect on product teams. The team might stop doing anything that doesn’t directly drive a business metric. They can start choosing only metrics they know how to move - which tend to belong to the easy, well-understood problems. For teams with quarterly goals, it can lead to focusing on short-term metrics that can move in a few weeks.
It is tempting to use output goals as a solution for these problems. "Establish a baseline for retention" or "learn why users aren’t adopting new features."
But a better way is to re-frame these outputs as an outcome. An outcome is simply a behavior change in a person. "Increase retention by 12%" is a behavior change in your customer - more of them stick around.
This is a great way to handle goals when you don’t really know yet what the business outcome looks like. It can help teams address long-term, ill-defined problems.
If you want to "learn why users aren’t adopting new features," think about why you don’t already know this. Why is it hard for you to get this answer? Is it because you don’t have product analytics? Then your outcome could be "answer two new questions about product usage each week." If it’s because you’re not talking weekly with customers, then perhaps your goal could be "product teams have spoken to 20 customers." Those are outcomes. They are measuring a behavior change in your team. Before, the world looked one way, but after you made changes, it looks a different way.
If your goal is "ship the new feature in time for the September marketing event" then ask yourself why shipping itself is the goal. Perhaps you’ve had trouble hitting deadlines before and you want to fix that with iterations and continuous delivery. The behavior you want to change is all about how you ship, so frame your goal that way. This might turn into "Demonstrate continuous delivery by shipping 6 releases this quarter that each improve the new feature needed for the September marketing event."
When a customer requests a custom one-off feature, it can be hard to say no. It can be even harder when a new customer prospect promises to buy your product… but only if you’ll make a change for them. In the best case, the sales team has identified an opportunity and asks Product to evaluate it. In the worst case, Sales signed a contract with a promise to deliver a vague, poorly scoped feature.
This will come up from time to time in every company. When it happens too often, your product becomes sales-led, at the mercy of the whims of potential customers. In a healthy company, these requests are expansions to the product’s current capabilities. For example, the prospect requests an integration with a popular business intelligence tool you don’t yet support. When it happens this way it can be good. You’ve known for a while that you needed that other BI integration. Now you have a reason to prioritize your integration and your customer is a built-in beta tester.
In an unhealthy company, these requests from sales prospects aren’t something you’d planned on doing soon, if ever. They represent large shifts from how your product works today, they’ll take large amounts of investment to pull off, and they might even make the product worse for your other customers. Still, there’s pressure to “just get it done” because you need this sale to hit your financial targets.
The CEO sends you an email. "Skyhook Industries wants to know if we can add some features so they can use us for their water treatment division. They’re one of the largest customers of our highway construction management product. Can we add their water treatment features and keep them happy?"
You reply that the product team has already seen this request and learned that none of your other construction customers would find the feature useful. The company has no plans to start selling to the water treatment industry. The feature would be used by Skyhook and only Skyhook. You’ve already told the customer that you won’t be adding the requested feature.
The CEO comes back to you later. Skyhook’s management is willing to pay your company $50,000 to develop the feature. It’s not complicated, so a junior developer could build it. $50k is half a junior developer’s salary, and the work certainly won’t take six months. Sounds like pure profit, and you make a top customer happy.
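On paper, that back-of-envelope math checks out. Here’s a sketch of the naive calculation, with invented salary and schedule numbers:

```python
# The CEO's naive profit calculation, with invented numbers.
# It treats the developer's salary as the only cost - which is
# exactly the trap this story is about.

offer = 50_000           # what Skyhook will pay for the feature
junior_salary = 100_000  # assumed annual cost of a junior developer
months_to_build = 3      # assumed: "certainly won't take six months"

cost = junior_salary * (months_to_build / 12)
print(f"naive profit: ${offer - cost:,.0f}")  # naive profit: $25,000
```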
This is a common story at software companies all over. A customer wants a pet feature. Something no one else will ever use, something that doesn’t fit your product vision. Usually, this is a very large customer and you’re afraid they’ll leave if you don’t say yes. Sometimes this is a smaller customer that’s promising to become a huge one if only you could do this one thing. Sometimes they’re not a customer at all but promise to buy the product if you take care of their special need. Often, the customer is willing to pay you to build it.
Sounds like a win all the way around. But it’s not.
Many software product management job advertisements request or require expertise in the problem domain the company competes in. With rare exceptions, a good product manager does not need to be an expert in the problem. Often, it’s detrimental for a product manager to be a domain expert.
Products are about solving problems for customers. When a product manager is a domain expert, it is hard for them to avoid injecting their own views and desires into the product. They act as if they were the customer and solve problems they imagine a customer might have instead of ones that the customer actually has.
What’s important is that a product manager possesses good technique. That they are able to execute a repeatable product discovery process. That they can analyze customer needs to identify the problems the customer needs solved. That they can use data to determine if the product is solving those problems for their customers. These are the skills and knowledge a product manager needs.
Hiring product managers based on their expertise with a specific market or customer type can lead to a product that’s led by the product manager’s understanding and biases instead of what the customer actually wants.
For this reason, we spend time coaching product managers to approach problems with a beginner’s mind. To discover what your customer needs, you must throw out your notions of what problems exist or how to solve them.
It’s strange to ask someone to think with a beginner’s mind but also require them to come with expertise.
This isn’t to say a product manager should remain completely ignorant of the domain. A good product manager can learn what they need to know about the problem space in a few months. By interacting with your customers the product manager will continue to grow that knowledge over time.
This applies to customer types and markets as well. Managing consumer products isn’t a mystical thing that an enterprise software product manager can’t figure out. The tactics used to get product feedback can be different across customer segments. The go-to-market mechanisms will vary. But that’s also true of different market categories or geographies or buyer types or product stages. If a product manager can be effective with pricing strategies for a mid-market SaaS accounting product, they can design effective pricing for mobile games or enterprise healthcare or consumer finance tools.
Someone does not need to be a mother to build software products for moms. They need to know how to talk to moms and elicit problems from them.
What are the rare exceptions? If the problem domain has deep tribal knowledge that isn’t written down anywhere or requires extensive training to understand, a product manager won’t be able to learn enough, fast enough, to understand customer needs well. The quality of their decisions will suffer, and if there aren’t sufficient checks on those decisions, problems will arise. A product manager for firmware in implanted medical devices likely needs more than superficial medical knowledge. A product that manages compliance for a highly regulated environment would benefit from someone who knows the laws well.
Another exception is the creation of a new product category. If you are solving problems that customers don’t know they have, it’s hard to talk to customers about those problems. This state is much rarer than most entrepreneurs and product people think. You probably aren’t Steve Jobs. Steve Jobs probably wasn’t Steve Jobs.
This all applies to software products only, since that’s what I know. Building performance racing carburetors for vintage motorcycles or designing complex financial investment vehicles might need domain expertise. Or might not. I have no idea.
When you’re hiring software product managers, hire for product management skills. Looking for domain experts will reduce the pool of people you can hire and might just be worse for your product.