In managing disruptive innovation, handling uncertainty is a key factor, and I think prototyping is one of the ways to embrace it. However, the arguments supporting prototyping as part of the innovation process tend to merely emphasise how the prototyping process differs from the linear processes of project (or product development) management. This produces an impression that iteration automatically facilitates innovation. Some of these arguments suggest that an iterative process gradually refines products, services or other outcomes. That may be true for incremental innovation or for solving well-defined problems, but it does not apply to disruptive innovation.
Arguments around the Lean Startup methodology address this point much more deeply. After iterating on tests of your insight, you will eventually face the moment when you have to decide whether to keep going (persevere) or take a different path (pivot). Eric Ries, an advocate of the Lean Startup methodology, suggests that there is no formula for properly choosing between “pivot” and “persevere”. Without clear criteria for judging the result of your trial, you will not be able to decide which way to take or, worse, you will always read the result as confirming that you are right.
Even though he admits there is no formula, he recommends using a kind of performance measurement called innovation accounting, which differs from general accounting. He argues that the way you measure the performance of a startup differs from the way you measure an enterprise. This can be explained by the concept of ‘wicked problems’: the problems startups tackle are different from the ones enterprises tackle. The same can be said of innovation management. The market for a startup usually does not exist yet, so how to build that market, or enter an invisible one, is as ill-defined a problem as making innovation itself.
So what is innovation accounting? One of its key elements is metrics. Ries distinguishes two types: vanity metrics and actionable metrics. Vanity metrics use the collected data merely to justify your current direction. They may or may not prompt action to change your behaviour and improve your performance, but usually they only strengthen your confidence, even when the direction is wrong. Actionable metrics, on the other hand, lead to action. Ries lists three characteristics of good metrics, namely that they should be actionable, accessible and auditable:
Actionable: the metric must lead to action. For that, cause and effect should be clear; otherwise it is unclear (or at least hard to guess) which action was right or wrong, and it is hard to decide on the next action.
Accessible: accessibility here means two things: access to the meaning of the metrics and data, and access to the data itself. Even if everyone can get access to the data, if they cannot understand its meaning, it is the same as having no access at all. Ries recommends keeping reports simple and using tangible, concrete units.
Auditable: even if the data is easy to access and easy to understand, the metric will not help you take action if the data is not credible. In other words, if the source of the data seems wrong, your conclusion may not be reliable enough to drive your team to act, even if the inference drawn from the data seems right. Ries suggests two ways to avoid this. One is to talk to customers regularly to confirm that the data is right. The other is to report directly from the master data, reducing the complexity of producing a report, since that complexity increases the chance of drawing a wrong conclusion.
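To make the vanity/actionable distinction concrete, here is a minimal sketch with invented numbers (the cohorts and figures are hypothetical, not taken from Ries). It contrasts a cumulative sign-up count, which can only grow and therefore always looks like progress, with per-cohort conversion rates in the spirit of the cohort analysis the Lean Startup literature recommends:

```python
# Hypothetical weekly cohorts: (label, new sign-ups, paying converts).
cohorts = [
    ("week 1", 1000, 50),
    ("week 2", 1200, 48),
    ("week 3", 1500, 45),
]

# Vanity metric: cumulative sign-ups. This number can only go up,
# so it looks like progress whether or not the product is improving.
total_signups = 0
for label, signups, converts in cohorts:
    total_signups += signups
    print(f"{label}: {total_signups} total sign-ups")

# Actionable metric: conversion rate per cohort. Comparing cohorts
# side by side shows whether recent changes actually changed behaviour.
# Here conversion falls from 5.0% to 3.0% even as the total grows.
for label, signups, converts in cohorts:
    print(f"{label}: {converts / signups:.1%} conversion")
```

The point of the second loop is that each cohort is judged on its own, so a decline is visible immediately instead of being buried inside an ever-growing total.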
Here lies a risk of iterative processes. When the metric to be improved is clear, iteration works very well as validation. But when the metric is unclear, the “improvement” achieved through iteration may be heading in the wrong direction. Even worse, the result of iteration or prototyping can be controlled or distorted to support a conclusion that benefits specific people. Over the course of an iterative process, we should reflect not only on the result of prototyping but also on how we evaluate it.