It’s wrong to stare at someone else’s MVP and compare sizes

November 16, 2011

Whenever a new term enters the Internet lexicon, arguments inevitably follow about its meaning and scope. Engineers argued about what was or wasn’t technically ‘Ajax’ technology or ‘Web 2.0’ design, and content strategists are facing a similar problem pinning down the scope of their relatively new industry.

Most recently, the concept of a Minimum Viable Product (MVP) has been debated – does it mean a small first release, a barely-usable prototype, or even just a Google AdWord?

An MVP may or may not be any of those things – depending on how you use it – but the exact definition doesn’t really matter. Like many good ideas, the MVP is more valuable as a concept than as a rigid definition. Rather than trying to replicate other people’s tactical implementations of the concept, it’s better to focus on the strategy behind the MVP approach: what it is trying to achieve.

Validated learning lies at the heart of the MVP. This means creating something that enhances your knowledge of the market or the problem space using real data from customers or potential customers. Moreover, you must endeavour to extract the maximum amount of learning from the least amount of effort (money and time).

This is why the Google AdWord is a compelling implementation of the MVP: spend a few hours and a few dollars creating adverts, and you can learn about the market reaction to individual features, price–demand curves, and even – if you’ve configured decent analytics on your landing page – the demographics of your potential customers and trends in their daily usage patterns.
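As a rough sketch of what that learning looks like in practice, the snippet below tallies click-through and landing-page conversion rates per ad variant. The file name and CSV columns are assumptions for illustration – not an actual AdWords export format – but most analytics exports can be reduced to these figures.

    import csv

    # Hypothetical export: one row per ad variant, e.g.
    # variant,impressions,clicks,signups
    # feature: offline sync,1200,36,9
    with open("adwords_test.csv", newline="") as f:
        for row in csv.DictReader(f):
            impressions = int(row["impressions"])
            clicks = int(row["clicks"])
            signups = int(row["signups"])
            # Click-through rate: how appealing the advert's text is.
            ctr = clicks / impressions if impressions else 0.0
            # Conversion rate: how well the landing page closes the deal.
            conversion = signups / clicks if clicks else 0.0
            print(f"{row['variant']}: CTR {ctr:.1%}, conversion {conversion:.1%}")

A variant with a high click-through rate but poor conversion tells you the advert’s promise resonated while the landing page (or the price) didn’t – exactly the kind of per-feature signal a few dollars of adverts can buy.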

However, AdWords aren’t suitable for every case. If your app is in a highly competitive market, you’ll either need to spend a lot more money to test it (at which point an alternative MVP implementation may require less effort), or else your advert will languish on the second or third page of results, where it won’t receive enough attention for you to learn anything with confidence.

Similarly, if your app is in a new market, you may need more than the limited space of an AdWord to persuade early adopters to try it. If Twitter had tested an MVP AdWord, they would have received relatively few click-throughs and might have decided to abandon the idea. Even a sophisticated mock-up or internal prototype may not have been a suitable MVP for Twitter, who would have needed a basic functioning system with the ability to post and read messages before they could discover anything about how early adopters used the product.

Don’t limit yourself to one specific MVP implementation if you can quickly and easily learn more through multiple sources; the MVP is an ongoing, iterative process, not a single definitive test of an idea. Depending on your market, you may decide to create an advert, send out a survey email and also present an annotated wireframe to the board of an interested corporation. When we first started to think about Clickdensity, we published early data on an O’Reilly blog to get feedback from alpha geeks, prototyped with a client to see how they used the results, and emailed our customers to assess their reaction to the proposition.

Which leads me to my final point: you need to carefully consider and perfect your app proposition before you test an MVP. It’s all too easy to get caught up in the idea of an app and assume that people will know what it does and why they should use it. For many forms of MVP, it is the text of the proposition rather than the interactive functionality of an app that’s being tested, so be sure to run your text past a couple of relevant people before testing it with a larger audience.
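For instance, if you show two versions of your proposition text to similar visitors, a simple two-proportion z-test can tell you whether the difference in sign-ups is real or just noise. The sketch below uses made-up numbers and a hand-rolled helper; it’s one reasonable way to sanity-check the comparison, not the only one.

    from math import erf, sqrt

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test: do two proposition texts convert at different rates?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Normal approximation to the two-sided p-value.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical results: proposition A vs proposition B on the same landing page.
    z, p = two_proportion_z_test(conv_a=18, n_a=400, conv_b=34, n_b=410)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the texts really differ

If the p-value is large, you haven’t learned which proposition is better – only that you need more visitors, or a bigger difference between the texts, before drawing conclusions.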
