
Features and feedback cycles - can you split test without a product?

Ryan Carlson

Customer Development is a framework for systematically separating things that you believe about your startup from things that you know about your startup. The more things you can move from the believe category to the know category, the faster you'll get to a repeatable business (or to the realization that you're not going to make it on the path you're going down).

With internet startups (especially consumer-facing ones) this has become much easier. The most convincingly argued predictions can now be countered with easy-to-obtain data. It's often faster to put several options in front of customers and measure which performs better. This is well-trodden ground, and understandably so.
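For illustration, here's a minimal sketch of how such a comparison might be scored once you've counted conversions per variant; the counts and the two_proportion_z_test helper are hypothetical, not from any particular testing tool:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do variants A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal CDF
    return p_a, p_b, z, p_value

# Hypothetical counts: variant A converted 48 of 500 visitors, variant B 72 of 510.
rate_a, rate_b, z, p = two_proportion_z_test(48, 500, 72, 510)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z={z:.2f}  p={p:.3f}")
```

The point isn't the statistics; it's that a question two smart people could argue about for a week gets settled by a few hundred visitors and a dozen lines of code.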

But what about a startup with custom-designed hardware? Or an enterprise software product with a months-long sales cycle? These are saddled with longer feedback loops that can preclude a quick A/B-testing approach. But there are still several things you can do to turn opinions into facts. All of these assume you already have a product vision that you're looking to confirm, deny, or refine. Remember: you don't get your product vision from your customers.

Create product strawmen, but create several versions

A common first step is to take a product strawman (a PowerPoint description of your product) out to target customers for feedback before development starts. If you're doing it well, you're asking questions more like "would you buy this product for $X?" and less like "what features would you add to this list?" But you can go further: create multiple product concepts and either show different versions to different groups of customers (a split test of sorts), or show multiple versions to each customer. Vary the features about which you're most uncertain. You can also vary the schedule for each feature or option, as another lever to gauge true demand.
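For illustration, a minimal sketch of tallying "would you buy at $X?" answers across strawman variants; the variant names and responses below are hypothetical:

```python
from collections import defaultdict

# Hypothetical interview log: (strawman variant shown, said yes to "would you buy at $X?")
responses = [
    ("concept_A", True), ("concept_A", False), ("concept_A", True),
    ("concept_B", True), ("concept_B", True), ("concept_B", False),
    ("concept_B", True), ("concept_A", False),
]

tally = defaultdict(lambda: [0, 0])  # variant -> [yes answers, times shown]
for variant, would_buy in responses:
    tally[variant][0] += would_buy
    tally[variant][1] += 1

for variant, (yes, total) in sorted(tally.items()):
    print(f"{variant}: {yes}/{total} would buy ({yes / total:.0%})")
```

With a handful of interviews the sample sizes are tiny, so treat the output as a direction-finder for the next round of conversations, not a verdict.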

Use the "Roadmap bucket" trick

Getting to the minimum feature set that your *early* customers will buy is your goal. A tactic that can help you determine this set is to use a Roadmap bucket to distinguish between features needed now and features that can wait (perhaps forever). Intentionally put some of the features you're not sure are must-haves into a Roadmap category that is planned but not precisely scheduled. Features that customers forcefully pull out of that Roadmap bucket into v1.0 are more likely to be part of your MVP. This tactic also lends itself to split testing - you can show different breakouts between v1.0 and the Roadmap to different customers.
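For illustration, a minimal sketch of ranking MVP candidates by how often customers pull a feature out of the Roadmap bucket; the feature names and pull data are hypothetical:

```python
from collections import Counter

# Hypothetical notes: features each customer insisted on pulling from Roadmap into v1.0
pulls_by_customer = [
    ["sso", "audit_log"],
    ["audit_log"],
    ["audit_log", "api_access"],
    [],  # this customer was happy with the proposed v1.0 as-is
    ["sso", "audit_log"],
]

pull_counts = Counter(f for features in pulls_by_customer for f in features)
n = len(pulls_by_customer)
for feature, count in pull_counts.most_common():
    print(f"{feature}: pulled into v1.0 by {count}/{n} customers")
# Features pulled by the most customers are the strongest MVP candidates.
```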

Ask for real orders before you have a product

It's not easy to ask for money for something you haven't built yet. Many companies instead secure beta sites or trial agreements, where the quid pro quo is free product in exchange for detailed feedback. But giving away the first versions of your product squanders an opportunity to really test its viability. Talk is cheap - customers reveal their preferences with their actions, and their checkbook is the surest metric you can use. Use it early, by asking for LOIs or (cancelable?) POs in advance of your first products. Create scarcity by offering up only X "early customer" slots, but make them true customers by charging them money.

Caveat Emptor:

Some of the tactics above can make you appear, to the uninitiated within your company, disorganized or sloppy. "We change our feature set weekly - WTF?" "You told the last customer we'd have X on our 12-month roadmap, but you told this customer we'll have it in our first beta - are you insane?" The entire team needs to agree beforehand that learning is the goal and experimentation is a way to get there, and they must be tolerant of frequent changes. Whether a startup team can be convinced of this, or whether it has to be in their DNA, is a topic for a separate post.

That's All You Got?

Ryan Carlson

There are many names for an early version of a product or feature intended to test a market: Minimum Viable Product, "thin edge of the wedge", etc. Building products this way is an efficient route to real, validated feedback from customers, which is why MVPs are the new hotness.

But beware the challenges awaiting anyone who puts forth an MVP feature set for consideration. MVP proposals often elicit reactions that amount to: "that's not bad, but you could also do this one more feature, or you could add this other great thing, and it would be a lot better." When commenting on proposed features, *everyone* becomes a designer or PM and wants to jump in and add more.

Worse (although less common) are reactions like: "That's all you got? That's what you've come up with for our next killer feature? Dude, we need to get a new PM in here - this guy is not aiming high enough." These reactions can come not only from sales but from the development team too.

Two tactics help with this. First: when anyone is reviewing the MVP proposal, challenge commenters to improve the product by *removing* functionality. Anyone can make something "better" by adding functionality, but that's not the goal of an MVP.

Second, and more important: make absolutely sure the team is bought into a prototype / test / iterate philosophy, where they are 100% behind releasing early versions to paying customers. Don't just take their word for it, either - look for actions that demonstrate this philosophy. Very few people argue with an iterative approach in principle, but many have difficulty actually putting something minimal out there for all to see.