Breaking my own rule
I wrote about a related issue a few weeks ago, but now find myself pulled back to this topic sooner than I thought. Yesterday, in a TechCrunch article called “The Startup Accelerator Trend Is Finally Slowing Down,” the author argues that the overcrowded market for early-stage funding destines most accelerators to fail. Those of you who know me well will note that I was breaking my own rule not to read TechCrunch, but the article showed up in my Twitter feed and I had a moment of weakness.
I ran one of the 170 accelerators that the article’s author, Mark Lennon, says there are. Actually, that 170 number is really low. There are probably several times that many when you look beyond the CrunchBase data cited — and that’s why you can’t rely on a single data source. CrunchBase is good, but certainly incomplete, as I found last year when I was scrambling to use their data at the TechCrunch Disrupt Hackathon. But increasing the count from 170 accelerators to something like 500 would only make the situation in the article look even worse.
The way accelerators are viewed is bizarre
The accelerator I ran with Steve Forte, AcceleratorHK, was never thought of as an unchanging program that would last forever, or one that would instantly turn Hong Kong into a tech mecca. Instead, we proved that we could attract talent from around the world and get the startups a lot further along their own paths than if they had continued to work independently, while showing that an accelerator can work in Hong Kong (we were the first program there). Some of the startups went on to raise money, generate revenue and win grants; team members changed, company direction changed, and I’m sure more changes are to come. For a program, you need to build something that works in your market, which can mean not being too influenced by what comes out of the Valley and what is published in the mainstream tech press. I expect that there will be other accelerator programs in Hong Kong (and other non-tech hubs) in the future, and that they will do things their own way based on what is needed.
But this search for the appropriate number of accelerators, or for all-inclusive ways to judge all of them based on public data snippets, is part of the problem. For example, typically:
- Accelerators are all viewed the same way. One in the middle of nowhere (in the startup world) is judged the same way as one in the Valley,
- Accelerators are judged as though they exist to serve investors. Even when it comes to serving the startups, they are judged only on what matters directly to investors, such as the percentage of startups that raise money after the program or the number of exits, even though most programs haven’t been around long enough to have many, or any. Programs are not judged on factors specific to their markets, or on harder-to-measure qualities like the progress the startups made during and after the program,
- Accelerators are judged on whatever is visible with minimal research, rather than on what actually matters: mentor lists (regardless of mentor involvement), where the startup founders went to school, or acceptance rates,
- Because qualities that are not listed publicly on things like CrunchBase are harder to measure, they are ignored.
There is also a lack of willingness to think about ways this world could change. How many other things could happen that change “the number”? For example, will crowdfunding significantly change the situation for early-stage startups? Will corporate accelerators become more commonplace? Will programs first validate ideas themselves and then find teams to execute on them? If you take a static view of the world, it’s hard to think about anything new while reading that article.
The startup world is big — bigger than you probably think. Even at hundreds of accelerators, with the number of startups they “graduate” a year, there is no way these programs have enough impact to sway the entire startup world. They might get a disproportionate amount of attention, but they are not drastically changing the environment, either by creating startups that otherwise wouldn’t exist or by disappointing the startups that don’t raise money — accelerators only touch a tiny fraction of all the startups out there. As for the argument that there just aren’t enough good startups to invest in, and therefore there can only be a certain number of accelerators (and the number is 170 or 500 or whatever), I think that is a lack of imagination.
Two sides of the story
When I speak to tech investors in tech hubs, they usually tell me there are too many accelerators. Too many programs chasing too few good investments, which after all is how they view the world.
When I speak to startups, the feeling is much more positive. With the exception of those that are too far along to go to an accelerator, they usually express interest in the programs. I even had one funded startup in LA tell me that, given the option, you should always go to an accelerator because there’s almost no way you could end up worse for it. As Paul Graham put it, in terms of the value you get for the equity you give up: “If we take 6 percent, we have to improve a startup’s outcome by 6.4 percent for them to end up net ahead. That’s a ridiculously low bar.” In visiting and talking to lots of people involved with accelerators, I have seen situations where programs do damage, but I think mostly they do good.
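Graham’s 6.4 percent figure is just the arithmetic of dilution: if founders give up a slice of the company, the whole pie has to grow by enough that their smaller slice is worth at least what it was before. A minimal sketch of that break-even math (the 6 percent figure is from the quote; the function name is mine):

```python
def required_uplift(equity_taken: float) -> float:
    """Minimum fractional improvement in outcome for founders to break
    even after giving up `equity_taken` of the company."""
    # Founders keep (1 - equity_taken) of the pie, so the whole pie must
    # grow by a factor of 1 / (1 - equity_taken) for their slice to hold
    # its original value.
    return 1.0 / (1.0 - equity_taken) - 1.0

# Graham's example: give up 6 percent, need ~6.4 percent improvement.
print(round(required_uplift(0.06) * 100, 1))  # → 6.4
```

Which is why he calls it a ridiculously low bar: the required uplift barely exceeds the equity surrendered.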
But only a tiny fraction of startups ever go to accelerators. Acceptance rates average in the single digits, and run below one percent for the most popular programs. Given that only a fraction of startups even apply, that leaves only a tiny fraction of one percent ever going to an accelerator. Again, as Paul Graham wrote:
“[T]he one thing you can measure is dangerously misleading. The one thing we can track precisely is how well the startups in each batch do at fundraising after Demo Day. But we know that’s the wrong metric… [F]undraising is not merely a useless metric, but positively misleading. We’re in a business where we need to pick unpromising-looking outliers, and the huge scale of the successes means we can afford to spread our net very widely. The big winners could generate 10,000x returns. That means for each big winner we could pick a thousand companies that returned nothing and still end up 10x ahead… I can now look at a group we’re interviewing through Demo Day investors’ eyes. But those are the wrong eyes to look through! We can afford to take at least 10x as much risk as Demo Day investors… [E]ven if we’re generous to ourselves and assume that YC can on average triple a startup’s expected value, we’d be taking the right amount of risk if only 30% of the startups were able to raise significant funding after Demo Day. I don’t know what fraction of them currently raise more after Demo Day. I deliberately avoid calculating that number, because if you start measuring something you start optimizing it, and I know it’s the wrong thing to optimize.” (excerpt from Black Swan farming)
How else could we measure accelerators?
The vast majority of accelerators out there are generalist programs. An application, review of applicants, some seed capital in exchange for a little equity, three months in a group location, mentorship and maybe workshops, ending with a demo day. And then often nothing at all afterward.
Here are some other ways to measure accelerators. These are difficult to assess without a lengthy round of interviews and visits, so I don’t expect a good global view of this to be written by someone outside this world. Some ideas of what else to look at:
- How much progress did the startups make? Did they avoid wasting a year working on something that won’t work? (Tough to measure)
- How many people did they save from less productive work? (Tough to measure)
- What does the program do to follow up with its startups afterward? (Easy to measure if you put the time in, but not listed publicly online)
- Is there an alumni network? Is there really an alumni network? (Easy to measure)
- How involved are the mentors? Is there real domain expertise in the areas that the startups are focused on? (Tough to measure)
- Can the program or its mentors get their startups in doors of places they couldn’t by themselves, giving their companies a huge advantage? (Tougher to measure)
- Are the programs doing something entirely different that works better in their markets? (Easy to measure if you can get to know the people running the program)
Until then, these articles on accelerator metrics offer more distraction than discernment.