Product Development
Marty Cagan

Planning Product Discovery

Much of product discovery work doesn’t actually require a lot of planning. We need to come up with a solution to a particular problem, and often this is straightforward, and we can proceed quickly to delivery work. But for certain efforts, this is decidedly not the case, and some planning and true problem solving becomes critically important. Big projects, and especially initiatives (projects spanning multiple teams), are common examples.

Discovery Sprints have some planning built into the start of the week, but we don’t run discovery sprints that often, as they’re a special tool reserved for an intense effort, especially when we need to make a big decision in a short period of time.

For the other cases where some planning is called for but not with the time-boxed structure of a discovery sprint, I wanted to talk about how we frame our discovery work to ensure alignment and identify key risks.

There are really two goals here:

The first is to ensure the team is all on the same page in terms of clarity of purpose and alignment. In particular, we need to agree on the specific problem we are intending to solve (also referred to as the “job to be done,” if you prefer that nomenclature), which users or customers we are solving that problem for, and how we will know if we’ve succeeded. Not accidentally, these should align directly with your OKRs.

The second purpose is to identify the big risks that will need to be tackled during the discovery work. I find that most teams tend to gravitate towards the particular type of risk they’re most comfortable with. Two common examples: a team will immediately proceed to tackling technology risks – especially performance or scale. Or, the team may zero in on usability risks – they know the change involves a complex workflow, they’re nervous about it, and so they want to dive in there.

Those are both legitimate risks, but they’re far from the only risks, and at least in my experience, those are often the easier risks to tackle.

We must also consider value risk – do the customers actually want this particular problem solved, and is our proposed solution good enough to get people to switch from what they have now?

And then there’s the often messy stakeholder risk where we have to make sure that the solution we come up with in discovery actually works for the different parts of the company.  Here are some common examples of that:

– Financial risk – can we afford this solution?

– Business development risk – does this solution work for our partners?

– Marketing risk – is this solution consistent with our brand?

– Sales risk – is this solution compatible with our go-to-market strategy?

– Legal risk – is this solution something we can legally actually do?

– Ethical risk – is this solution something we should do?

Again, for many things we won’t have concerns along these dimensions, but when we do, it’s something that we need to tackle aggressively.

If the product manager, designer, and tech lead do not feel there’s a significant risk in any of these areas, then normally we would just proceed to delivery, fully realizing there’s a chance the team will occasionally be proven wrong. However, this is preferable to the alternative of having the team be extremely conservative and test every assumption. We like to reserve our discovery time and validation techniques for those situations where we know there’s a significant risk, or where members of the team disagree.

There is a very rich example of this in the news of late. You’ve no doubt all heard of the fake news problem on Facebook. Imagine you’re on a product team tasked with tackling this very difficult problem. Certainly there are very promising technologies, such as natural language processing, and machine learning more generally, that may be able to help. And that’s what most people are talking about right now, but there are some broader issues as well: Who gets to define truth? Is it even appropriate for Facebook to take on that role? How does all of this mesh with Mark Zuckerberg’s product vision? Are there freedom of speech concerns (real or perceived)? How does this get reconciled with different cultural norms around the world, and even censorship? What are the financial implications of restricting monetization on news stories? What are the sales channel implications?

These are all very real risks that will substantially impact any proposed solution.  This gets to the heart of what makes product difficult, and why tackling these risks in discovery is so critical to coming up with solutions that work not just for your customers, but for your company as well.