Decision Speed and Feedback Loops
Depending on your work and life, you will face different situations in which to make decisions and to learn from them. Some of this comes down to the speed at which you make decisions (lots of small ones versus a few large ones) and to the quality and speed of the feedback you receive.
In my roles with three startup accelerators and incubators over the past five or six years, I've been fortunate to make roughly five times as many decisions as I would in a VC role, and to receive feedback far more rapidly than is typical. Here's a breakdown of what created the opportunities for those decisions and that feedback.
Create Opportunities to Make Many Trackable Small Decisions
Hong Kong / AcceleratorHK. Chose 12 startups across two cohorts for the first funded startup accelerator in Hong Kong. Acceptance rate approximately 6%, which means 200 companies were evaluated to find the 12. $20K – $50K invested per startup.
Rome/Vatican / Laudato Si’ Challenge. Chose nine startups in one cohort for an environmental-tech focused startup accelerator affiliated with the Vatican. Acceptance rate a little under 3%, which means over 300 companies were evaluated to find the nine. $100K – $200K invested per startup.
University of Southern California in Los Angeles / USC Incubator / Marshall Greif Incubator. Over three years, 82 startups have participated across ten cohorts. I started the program three years ago, and after the first year the acceptance rate has hovered around 10%, which means approximately 700 companies were evaluated to find the 82. This program differs from the two above in that it is restricted to founders with a connection to USC, there is no financial investment in the companies, and I sometimes get to know the founders before they apply and see the progress they make. I have declined to offer spots to founders who were too early, only to accept them when they reapplied later on. The program runs non-stop with three intakes per year (unusual even for fast-paced accelerators).
If I were at a “typical” VC firm, I’d be involved in decisions to accept (invest in) perhaps five companies per year. True, I’d look at more companies per year (perhaps 1,000 to get to that five) and the due diligence would be much more intense. Judging whether a decision was correct would also be more difficult. Where VCs eventually need to exit their investment positions, in my case at the Incubator I really just need to see signs of progress: anything from the company receiving its first investment, increasing its rate of growth, growing revenue, or building a team, to problems that were prevented, such as co-founder conflicts. Over the past three years I have averaged 30 companies accepted per year, with 300 to 600 evaluated per year. The feedback loop cycle time for the metrics I care about is also much shorter: around one year rather than the several years it might take at a VC.
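To make that volume-and-speed comparison concrete, here is a rough back-of-the-envelope sketch in Python. The accepted and evaluated counts come from the figures above; the 450-per-year evaluation midpoint, the four-year VC feedback delay ("several years"), and the six-year horizon are my own assumptions for illustration, not precise data.

```python
# Back-of-the-envelope comparison of decision volume and feedback speed.
# Figures are approximate; the comparison is illustrative, not a model.

settings = {
    "Incubator (my case)": {
        "accepted_per_year": 30,     # ~30 companies accepted per year
        "evaluated_per_year": 450,   # assumed midpoint of the 300-600 range
        "feedback_delay_years": 1,   # signs of progress within ~1 year
    },
    "Typical VC role": {
        "accepted_per_year": 5,      # ~5 investments per year
        "evaluated_per_year": 1000,  # ~1,000 companies reviewed to get there
        "feedback_delay_years": 4,   # assumed stand-in for "several years"
    },
}

horizon_years = 6  # roughly the period described above

for name, s in settings.items():
    acceptance_rate = s["accepted_per_year"] / s["evaluated_per_year"]
    # Accept decisions whose outcome you have actually seen by the end
    # of the horizon: decisions made early enough for feedback to arrive.
    observed_years = max(horizon_years - s["feedback_delay_years"], 0)
    decisions_with_feedback = s["accepted_per_year"] * observed_years
    print(f"{name}:")
    print(f"  acceptance rate ~{acceptance_rate:.1%}")
    print(f"  accept decisions with observed outcomes over "
          f"{horizon_years} years: ~{decisions_with_feedback}")
```

Under those assumptions, the incubator path yields on the order of 150 accept decisions with observed outcomes over six years versus roughly 10 in the VC case, which is the gap in learning opportunities I'm describing.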
As a result, I’ve improved my accuracy in founder and company evaluation.
Feedback Loops in Other Places
I never thought I’d teach at a university, but it has changed the way I think about learning and getting feedback. In my first year of teaching I had a casual conversation with a colleague who admitted that all of his students had failed a recent pop quiz based on what the class had discussed over the past month. I remember thinking “wow, you must be terrible…” I decided to try the same thing with a short quiz that included a couple of formulas, a definition, and some basic business math. Result from that pop quiz in a class of 47 people: an average grade of 48%.
I couldn’t get this out of my head.
I provided the answer key and reviewed the questions with them. I told the class to expect another quiz like that one. One month later I gave the exact same quiz. Result: average grade of 70%.
Again, I couldn’t figure this out. Why such low grades? I wanted to get to 100%.
I provided the answer key and reviewed the questions with them. I told the class to expect another quiz like that one. In the last week of class I started by reviewing the same information. I gave them the opportunity to ask questions. I cold-called students for answers. I put the calculations on the board. I reviewed it all one more time.
And I gave the exact same quiz.
Result: an average of 95%. That meant I still hadn’t reached a couple of people in the class.
If you’re not used to receiving feedback that you don’t know something and must improve in some way, and that feedback is inescapable (a repeated quiz rather than a single unlucky one), it can be terribly upsetting. It was for a few students.
Mostly we do not seek out opportunities to test our proficiency in subject matter and in different situations. When we do, it’s often a one-off test. We don’t revisit our proficiency again and again, so we forget and fall into bad habits. Some professions, like medicine, require regular learning throughout a career, not only to review and test retention of what was learned, but also to learn new techniques. We rarely do this in other fields. In some areas of our lives we may even actively fight the opportunity to learn how good we are.
If you end up thinking about this and implementing ways of making more decisions and developing feedback loops, let me know what techniques you use.