How to observe your way to insight

When we were starting out at my last startup, even before we had heard of customer development, we had a method for learning from users. It wasn't great, but it was better than what most startups were doing.

This is what we did (after we had exhausted our friends as beta testers):
– We took our buggy early version out to a cafe or college campus
– We got people to stop and test it by asking, “Do you want to check out our new startup?” (about a 20% acceptance rate)
– We observed each person as they used our service
– When they finished, we interviewed them to learn more about what they were thinking
– We did this over 100 times, at about 20–30 minutes per tester.

When you observe people use your service, you’ll learn a lot. For us, most of the takeaways at that stage were design-related (we were at that point oblivious to the question “should we even build this service?”). On the design side, for example, after watching a high percentage of testers start to sign up and then stop and sit there, we realized that our sign-up process was not intuitive. The words on the screen went unread. The design had to tell the story visually as well as in text.

Here’s what we missed:

– One-off tests gave us no insight into what repeat user behavior might look like. I think of this as the “Pepsi Challenge” problem. If you remember, Pepsi invited people to take a sip from two unmarked cups of soda, and most people chose the cup that was Pepsi. But in the real world, people preferred Coke. Why the difference? In a one-off sip, people prefer the sweeter drink; when they have to drink a whole can, they prefer less sweetness.
Poor Pepsi. And poor us. We learned only afterward that one-off user behavior (long interaction times, for example) looked nothing like what people did when they got home (low repeat usage).

– We missed the ability to iterate quickly on designs based on what we learned. While design alone doesn't make a good product, we could at least have acted on the feedback we were getting rather than sitting on it for larger, infrequent updates.

– Instead of focusing on building the service, we should have mocked up more of it to make testing easier. Then, based on what we learned, we could have decided what to build.

– We lost good data when no one collected it. You have to know what to look for, write it down, and compare results, or you'll lose what you learn.

Still, we did some things right.

– Our later data analysis revealed what we had missed in the one-off trials: people happily engaged for long stretches but didn't return often. Cohort analysis was key here. We were then able to redesign the service around themed, timed events, where it worked much better.
– And in retrospect, we beat a competitor that had 15 people and $500K in seed money (versus our 3 people and $4K launch cost). As far as I could tell, they did no customer development or even field tests. Instead, they went for PR, which produced nothing lasting for them.
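For readers unfamiliar with the cohort analysis mentioned above, the core idea fits in a few lines: group users by when they first showed up, then track what fraction of each group returns in later periods. This is a minimal illustrative sketch, not our actual analysis; the data and function names are hypothetical.

```python
# Minimal cohort-retention sketch. Groups users by their signup week,
# then computes what fraction of each cohort returned in each later week.
# This is how you separate "long sessions" from "frequent return visits".
from collections import defaultdict

def retention_by_cohort(events):
    """events: list of (user_id, week_number) visit records.
    A user's cohort is the week of their first visit.
    Returns {cohort_week: {weeks_since_signup: fraction_returning}}."""
    first_week = {}                      # user -> week of first visit
    visits = defaultdict(set)            # user -> set of weeks visited
    for user, week in sorted(events, key=lambda e: e[1]):
        first_week.setdefault(user, week)
        visits[user].add(week)

    cohorts = defaultdict(set)           # cohort week -> users in it
    for user, week in first_week.items():
        cohorts[week].add(user)

    table = {}
    for cohort, users in cohorts.items():
        max_offset = max(max(visits[u]) for u in users) - cohort
        table[cohort] = {
            offset: sum(1 for u in users if cohort + offset in visits[u]) / len(users)
            for offset in range(max_offset + 1)
        }
    return table

# Toy data: users "a" and "b" sign up in week 0, "c" in week 1;
# only "a" comes back in week 1.
events = [("a", 0), ("a", 1), ("b", 0), ("c", 1)]
table = retention_by_cohort(events)
print(table[0])   # {0: 1.0, 1: 0.5} -- half the week-0 cohort returned
```

Laid out as a table (cohorts as rows, weeks-since-signup as columns), numbers like these make the engagement-versus-retention gap visible at a glance.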

People often can’t tell you what they want, but you can learn about their problems and how they solve them today, and then devise ways to solve those problems.

Filed in: lean startup