Ask people in the startup or innovation community about Eric Ries’ The Lean Startup, and I bet “innovation accounting” won’t come up in 9 out of 10 of those conversations. Concepts such as minimum viable product (MVP) and pivoting have become firmly embedded in the startup lexicon thanks in part to Ries’ book. Innovation accounting has not. (According to Google AdWords, monthly search volume for “minimum viable product” is around 27x higher than for “innovation accounting.”) That’s unfortunate, because without it, Ries’ Build, Measure, Learn model falls apart. At best, measurement ends up being based largely on a few qualitative assessments that may not rigorously test your hypothesis. It’s also unfortunate because Ries’ innovation accounting methodology offers a simple, intuitive way of measuring learning within an active innovation project.
In this post, the latest installment of our Innovation Book Series (see our first post on Christensen’s The Innovator’s Dilemma), I explore what Eric Ries’ influential book has to say about measuring innovation. While the book is primarily oriented to entrepreneurial startups, the author points out the methodology’s applicability in other innovation contexts. Some of his key principles have certainly been adopted more broadly.
I won’t cover Ries’ full Lean Startup methodology here, just his concept of innovation accounting. This is a somewhat artificial separation, so if you haven’t already read the book, I’d encourage you to do so.
There are three main aspects of Ries’ approach to innovation accounting:
- Build a financial model that enables quantitative testing of hypotheses
- Establish a target and a baseline
- Measure progress from baseline to target
Model Your Value and Growth Hypotheses
Innovation accounting requires a financial model. This model should quantitatively link the intended outcome of an innovation project to the underlying assumptions to be tested. Examples of intended outcomes include future revenue, EBIT, and risk-adjusted NPV. If the project is a new product or service, the model should enable testing of assumptions about how it will create value for customers and how its adoption will grow. Ries refers to these as your “value hypothesis” and “growth hypothesis,” respectively.
This model doesn’t need to be mathematically complex. But it does require careful thought about the underlying growth and/or value drivers. A good model will:
- Identify the hypotheses you already know you need to test AND shine a light on other important assumptions.
- Identify the complex inter-relationships and feedback loops between different aspects of your model.
- Help you develop a strong intuition for which assumptions to test first, based on their relative importance and uncertainty.
System dynamics approaches may be a useful tool here—more on this in a future post.
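To make this concrete, here is a minimal sketch of such a model in Python. The funnel drivers (traffic, sign-up rate, paid conversion, churn, revenue per customer) and all numbers are invented for illustration—the book prescribes no particular model structure:

```python
# Minimal sketch of a growth/value model. The drivers below (traffic,
# sign-up rate, paid conversion, churn, ARPU) are illustrative assumptions,
# not a prescription from The Lean Startup.

def annual_revenue(visitors_per_month, signup_rate, paid_conversion,
                   monthly_churn, arpu, months=12):
    """Project revenue by simulating a simple customer funnel month by month."""
    customers = 0.0
    total_revenue = 0.0
    for _ in range(months):
        new_customers = visitors_per_month * signup_rate * paid_conversion
        customers = customers * (1 - monthly_churn) + new_customers
        total_revenue += customers * arpu
    return total_revenue
```

Even a toy model like this makes it easy to see which assumptions dominate the outcome (try varying churn versus traffic), which is exactly the intuition you need when deciding what to test first.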
Define Target and Baseline Scenarios
Once a model has been developed, establish both an ideal scenario and a baseline scenario. The ideal scenario represents what you’re striving to achieve, while the baseline scenario represents where you are today. Establishing an ideal scenario is relatively straightforward. What is the objective you have been set or want to aim for? Revenue of $100 million per year in 5 years? Cost savings of $40 million per year? Create an ideal scenario in your model that achieves (mathematically!) that objective. The scenario should draw on real data, making assumptions as appropriate.
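In other words, an ideal scenario is just a set of driver values under which the model hits the objective. A sketch with invented numbers, using a deliberately tiny model:

```python
# Sketch of an "ideal scenario": driver values under which a toy model hits
# the stated objective. All figures are invented for illustration.

TARGET_ANNUAL_REVENUE = 100_000_000  # the $100M/year objective

def model_revenue(scenario):
    """Toy model: revenue = customers x annual revenue per customer."""
    return scenario["customers"] * scenario["annual_revenue_per_customer"]

ideal = {"customers": 125_000, "annual_revenue_per_customer": 800}

# The scenario achieves the objective mathematically:
assert model_revenue(ideal) >= TARGET_ANNUAL_REVENUE
```

A real model would have more drivers, but the discipline is the same: the ideal scenario must actually produce the target number when run through the model, not just assert it.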
Creating the baseline scenario requires an MVP. By pushing an MVP out into the world, you will start to test key assumptions and gather associated data. That data can then be used to populate your model – establishing baseline performance levels for individual metrics. If your MVP is designed to test most of your assumptions, or you’ve utilized several MVPs in parallel, you will be able to establish a baseline performance level for your overall objective. Newsflash: it will very likely look terrible! But that’s not the point; the point is that you now have a measure of the current state.
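As a sketch of what that might look like, here the same scenario structure is populated with hypothetical MVP measurements and each driver is expressed as a fraction of its ideal level (metric names and numbers are invented):

```python
# Sketch: populate a baseline scenario from (hypothetical) MVP measurements
# and express each driver as a fraction of its ideal value. Metric names and
# numbers are invented for illustration.

ideal = {"signup_rate": 0.10, "paid_conversion": 0.05, "monthly_retention": 0.95}
baseline = {"signup_rate": 0.012, "paid_conversion": 0.008, "monthly_retention": 0.40}

progress = {metric: baseline[metric] / ideal[metric] for metric in ideal}
# progress["signup_rate"] is ~0.12, i.e. roughly 12% of the ideal level --
# baseline numbers usually look this bad, and that's expected.
```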
Measure Your Experiments
With this measure of baseline performance, you are well equipped to track progress towards the ideal state—to measure learning. As you undertake initiatives, large or small, to test hypotheses, you can measure the progress of your learning. Each initiative should be assessed by measuring the impact it has on a parameter within your model (on a growth or value driver). The model can then be updated to establish the impact on the overall target outcome. If you can’t model the impact of a particular initiative, it may be because the initiative did not affect a growth or value driver—in which case it was probably not worth undertaking. There could also be a gap in your model that needs to be addressed.
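The mechanics are simple: measure the change in one driver, then propagate that change through the model to see what it means for the target outcome. A sketch, with an invented one-line model and invented numbers:

```python
# Sketch: assess an experiment by the change it produces in one model driver,
# then propagate that change to the target outcome. The one-line model and
# all numbers are invented for illustration.

def annual_revenue(signup_rate, arpu, visitors_per_month=50_000):
    return visitors_per_month * signup_rate * arpu * 12

before = annual_revenue(signup_rate=0.010, arpu=20)
after = annual_revenue(signup_rate=0.013, arpu=20)  # suppose the experiment lifted sign-ups 30%
impact = after - before  # learning, expressed in terms of the target outcome
```

Expressing the result as a change in the target outcome, rather than as a raw metric movement, is what lets you compare very different initiatives on a common scale.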
If your original strategic hypothesis was correct (and, to be honest, a lot of other “ifs” are true – e.g., smart experiments, hard work, good timing, stars aligning)* you will see progress towards something that at least approximates your ideal scenario. If, on the other hand, your strategic hypothesis was wrong, you will not see much progress beyond the baseline and may need to consider pivoting. For more on what Ries has to say about pivoting, see the book!
Could Innovation Accounting Help Measure Learning?
At the beginning of The Lean Startup, Eric Ries describes how, early in his career, he would measure progress on a new product or service using classic project management tools. Is the project proceeding according to the planned scope, on time / budget, and being executed to a high level of quality? He found himself plagued by the questions: “…What if we found ourselves building something that nobody wanted? … [would] it matter if we did it on time and on budget?”
I regularly hear the same concern from innovation and R&D practitioners – it’s easy to track budget, but how do they communicate the value they are creating, via learning, to their leadership? Ries’ innovation accounting methodology could help measure learning by, for example, showing whether your efforts are closing the gap between current state and the ideal scenario.
This approach obviously won’t work in every project or for every organization. It probably only makes sense when you’re utilizing other aspects of the Lean Startup methodology. And it doesn’t work for your earliest exploratory efforts, before you’ve learned enough to formulate a product/service concept or the hypotheses to build the model. But, beyond those constraints, innovation accounting seems worth experimenting with.
We’re trying it out at Commodore – if you’re interested in learning more about what this looks like, send me an email (phil [at] commodoreinnovation.co) and I’ll share it with you.
* Remember, that litany of uncertainty is why some fall into the trap of not measuring, or over-focusing on activity metrics. But it’s the uncertainty associated with innovation that calls for more rigor, more measurement, not less.