There’s a lot written about startup accelerators. At least, there’s a lot written by casual outside observers; very little is written about them from the inside. There are reasons for that. In my experience running an accelerator, I wouldn’t share much about the startups until after demo day (more on why in a later post).
Before we go any further, let’s define “startup accelerator”: a program designed to speed up the development of a new business (usually tech), accepting early-stage companies for a defined length of time and granting a seed investment in exchange for equity. Accelerators usually also include mentorship from an internal team and external subject matter experts, and they conclude with a demo day where the companies show what they have built and discuss their future plans. Startups that want additional funding often plan to raise it at or after demo day.
Major themes for the last couple of years have been that there are too many accelerators and that they don’t produce results.
This raises the question: “What makes a successful accelerator?”
Success factors usually mentioned for accelerators
There are many different types of accelerator programs. They are usually measured the same way, but in reality many of them have different goals and different definitions of success. The two most common ways accelerators are measured are:
- Funding raised by, or shortly after, demo day. This is a problematic measurement, especially for programs that operate in small local investment communities or that focus on very early-stage startups. I’ve also seen startups that are not even looking for funding post demo day. When did the amount of money raised become a measure of a startup’s success?
- Survival rate after demo day. The question is who reports these numbers and where they get the data. It is very easy for a startup to appear alive when the founders have given up, even if it hasn’t formally closed down. Plus, almost no one does the work to check the numbers. My two-year-old review of TechStars’ originally published success rates (which seemed amazingly high when first published in late 2011) led me to lots of “zombie startups”. And those were just the ones I found. The great thing is that TechStars periodically updates its data. Now that I have revisited that old post and compared it to TechStars’ results page, most of the startups I listed as possible zombies are marked as failed. At a minimum, the results from their most recent year of startups should be removed from the statistics; those startups haven’t yet had time to fail or succeed. Of the 25 startups in AcceleratorHK and Startups Unplugged, only one has shut down, so my “success” rate is 96% if I measure myself the same way.
These factors have something in common: they are seemingly easy to measure, so they become the standard of measurement. But they don’t really tell us what’s most meaningful.
Instead, the focus should be on other things that are harder to measure:
- Time and money saved by not building something no one wants, which would otherwise have kept the would-be entrepreneurs busy for months or years. I’ve seen a lot of startups that are really just a couple of guys with a hobby and a devotion to writing code to serve that hobby. How much do the companies develop between the start and the end of the program?
- Strength and involvement of the mentors. Are the mentors celebrities too busy to spend time with the startups, or are they perhaps not famous but dedicated to the startups’ success?
- Involvement of the accelerator with the startups after the program ends. Are there follow-ups? How active is the alumni network? What do the entrepreneurs go on to do five or ten years later?
- The accelerator’s success in exiting portfolio companies years later. Again, hard to measure because of the time lag.