As an entrepreneur who has now dealt with VCs for close to ten years, one phrase repeated more times than I care to recount has been, "The time for paying tuition is over; it’s time to show revenue multiples."
The first few times I heard this mantra, I accepted it without question. I know, as does everyone involved in a start-up, that revenue is goodness and messing around ("paying tuition") is badness. I think, in general, that shareholder and investor impatience for a quick return on capital is a proper and laudable expectation. If you’re in the big leagues, you need to either hit, field or pitch, or better still, multiples of these.
But neither technology nor markets are predictable. Another statement frequently heard is "if you need to educate the market, your business model is wrong." Another is "show me the way to $20 million annual revenues within the next XX months."
I ain’t a kid anymore, and I appreciate the demands for performance and results. Starting up a business and spending other people’s money (not to mention my own and my family’s) to achieve returns is not for the fainthearted. Fair enough. And understood.
But the real disconnect is how to balance multiple factors. I think I appreciate the pressures on VCs for returns. I also understand their win some/lose some mentality. (Actually, what I don’t understand is the acceptance of such high rates of individual investment failure; something systemic is wrong here; but I digress.)
But what I truly don’t understand is the application of mantras vs. a careful balance of positive and negative factors for a venture. Excellent and innovative technology is often in search of proper applications and markets. Excellent and innovative technology is often not initially mature for market acceptance. Excellent and innovative technology is often misdirected by its founders until engagement with the market and customers helps refine features and product expressions. Excellent and innovative technology is sometimes tasty cookie dough that needs more time in the oven.
Presumably, as has been the case for my own ventures, the basis for investment has been excellent and innovative technology. We all know the standard recipe of market-technology-management that sprinkles every high-tech VC Web site. But, of course, and honestly and realistically, not all of these factors are in play when venture financing is sought. And, let’s face it, if they were in play, there would not be an interest by the entrepreneurs to dilute their ownership.
I suppose, then, that all players in a venture-financed start-up are subject to various forms of willful or self-deception. Entrepreneurs and VCs alike believe they have all the answers. And, of course, neither does.
What I have come to learn is that it is the market that has the answers, and sometimes that takes time to figure out. Good diligence at the front end is warranted — after all, there needs to be the basis of some excellent foundations — as are mechanisms for "feeding out the line" of venture dollars, clawbacks, and other ways to lay off risk, because bad choices are often made. But what should not be acceptable, should not be perpetuated, is the fixed expectation as to WHEN these returns will be achieved.
There is simply no avoiding that new, innovative and sexy technology may not be able to be precisely timed. Rather than railing about not paying more tuition, every VC that has done diligence and made a venture commitment should be cheering for more learning and more refinement. Begin with good partial foundations (be they technology-management-market) and applaud the tuition of learning and refinement. In the end, we never graduate; we hopefully progress to life-long learning.
"Longing gazes and worn out phrases won’t get you where you want to go. No!" - Mamas and Papas
Next up: "The Myth of Superman"
There was an interesting exchange between Martin Nisenholtz and Tim O’Reilly at a recent Union Square Session on the topic of peer production and open data architectures. Martin was questioning how prominent “winners” like Wikipedia may prejudice our view of the likelihood of Web winners in general. Here’s the exchange:
NISENHOLTZ: I sort of call it the lottery syndrome. There was a Powerball lottery yesterday. Tons of people entered it. We know that someone won in Oregon . . . we also know that the chances of winning were one in 164 million . . . .I guess what I’m struggling with is how we measure the number of peer production efforts that get started versus Wikipedia, which has become the poster child, the lottery, the one in 164 million actually works. Now it may not be one in 164 million. It may be one in 10. It may be one in 50, but I think that groups of people like [prominent Web thinkers] tend to create the lottery winner and hold the lottery winner up as the norm.
O’REILLY: Look at Source Forge, there’s something like 104,000 projects on Source Forge. You can actually do a long tail distribution and figure out how many of them — but … I would guess that one in like … 154 million are probably out of those 100,000 projects, there are probably, you know, at least 5,000 who have made significant reputation gains as a result of their work. Maybe more. But, again, somebody should go out and measure that.
It just so happens that I did that SourceForge project analysis in June, and since it is only a few months old it remains largely relevant. That info is reproduced below.
Strong Growth for Open Source Projects
In open source there are some big visibility winners and lots of activity. (For an excellent overview of the leading and successful open source projects, see Uhlman.) The number of these projects has grown rapidly, increasing by about 30% to 100,000 projects in the past year alone. However, like virtually everything else, the relative importance or use of open source projects tends to follow standard power curve distributions.
The truly influential projects only number in the hundreds, as figures from SourceForge, a clearinghouse solely devoted to open source projects, indicate. There is a high degree of fluctuation, but as of May 2005 there were on the order of perhaps 13 million total software code downloads per week from SourceForge (A). Though SourceForge statistics indicate it has some 100,000 open source projects within its database, in fact fewer than half of those have any software downloads, only 1.7% of the listed projects are deemed mature, and only about 15,000 projects are classified as production or stable.
But power curve distributions indicate an even smaller number of projects account for most activity. For example, the top 100 SourceForge projects account for 60% of total downloads, with the top two, Azureus and eMule, alone accounting for about one-quarter of all downloads. Indeed, to achieve even 1,000 downloads per day, a SourceForge open source project must rank within the top 150 projects, just 0.2% of those active or 0.1% of total projects listed.
Similar trends are shown for cumulative downloads. Since its formation in 2000, software code downloads from SourceForge have totalled nearly one billion (actually, an estimated 892 million as of May 2005) (B, logarithmic scale). Again, however, a relatively small number of projects has dominated.
For example, 60% of all downloads throughout the history of SourceForge have occurred for the 100 most active projects. It can be reasonably defended that the number of open source projects with sufficient reach and use to warrant commercial attention probably total fewer than 1,000.
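The concentration described above is what a Zipf-like rank-frequency distribution predicts. The sketch below is purely illustrative: the project count and exponent are hypothetical parameters chosen to mimic the SourceForge pattern, not values fitted to the actual data.

```python
# Illustrative sketch: a Zipf-like rank-frequency distribution
# concentrates downloads in the top-ranked projects. The exponent
# and project count are hypothetical, not fitted SourceForge data.

def zipf_weights(n_projects, exponent):
    """Per-rank download weights for a Zipf distribution."""
    return [1.0 / (rank ** exponent) for rank in range(1, n_projects + 1)]

def top_k_share(weights, k):
    """Fraction of total downloads captured by the top k ranks."""
    return sum(weights[:k]) / sum(weights)

weights = zipf_weights(100_000, 1.1)
print(f"Top 100 share: {top_k_share(weights, 100):.0%}")
print(f"Top   2 share: {top_k_share(weights, 2):.0%}")
```

With an exponent near 1.1, the top 100 of 100,000 ranks capture a share broadly consistent with the observed 60%, which is why such small elites dominate download totals.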
Open Source is Not the Same as Linux
Some observers, such as the Open reSource site, tend to equate open source with the Linux operating system and all aspects around it. While it is true that Linux was one of the first groundbreakers in open source and is the operating system with the largest open source market share, that is still only about one-half of all projects according to SourceForge statistics:
Windows projects have been growing in importance, along with Apple. In terms of programming languages, various flavors of C, followed by the ‘P’ languages (PHP, Python, Perl) and Java, are the most popular. Note, however, that many projects combine languages, such as C for core engines and PHP for interfaces. Also note that many projects have multiple implementations, such as support for both Linux and Windows installations and perhaps PHP and Perl versions. Finally, the popularity of Linux, Apache, MySQL and the P languages has earned many open source projects the LAMP moniker. When Linux is replaced by Windows, this is sometimes known as WAMP; with Java in place of the P languages, it’s known as LAMJ:
Because of the diversity of users, larger and more successful projects tend to have multiple versions.
Few Active Developers Support Most Projects
Despite source code being open and developers invited for participation, most mature open source projects in fact receive little actual development attention and effort from outsiders. Entities that touch and get involved in an open source project tend to form a pyramid of types. This pyramid, and the types of entities that become involved from the foundation upward, can be characterized as:
Most effort around successful open source projects is geared to extending the environments or interoperability of those projects with others — both laudable objectives — rather than fundamental base code progression.
Mature Projects are Stable, Scalable, Reliable and Functional
David Wheeler has maintained the major summary site for open source performance statistics and studies for many years. In compiling literally hundreds of independent studies, Wheeler observes that “OSS/FS [open source software/free software] . . . is often the most reliable software, and in many cases has the best performance. OSS/FS scales, both in problem size and project size. OSS/FS software often has far better security, perhaps due to the possibility of worldwide review. Total cost of ownership for OSS/FS is often far less than proprietary software, especially as the number of platforms increases.” However, while obviously an advocate, Wheeler is also careful to not claim these advantages across the board or for all open source projects.
Indeed, most of the studies cited by Wheeler obviously deal with that small subset of mature open source projects, and often surrounding Linux and not necessarily some of the new open source projects moving towards applications.
Probably the key point is that even though there may be ideological differences between advocates for or against open source, there is nothing inherent in open-source software that makes it inferior or superior to proprietary software. As elsewhere, the quality of the team behind an initiative, not whether the code is open or closed, is the driving force for quality.
I’d like to thank Matt Asay for pointing the way to digging into SourceForge statistics. It is further worth recommending his “Open Source and the Commodity Urge: Disruptive Models for a Disruptive Development Process,” November 8, 2004, 17 pp., which may be found at: http://www.open-bar.org/docs/matt_asay_open_source_chapter_11-2004.pdf
Of course, downloads may occur at sites other than SourceForge, and there are other proxies for project importance or activity, such as pageviews, the measure that SourceForge itself uses. However, as the largest compilation point on the Web for open source projects, the SourceForge data are nonetheless indicative of these power curve distributions.
D.A. Wheeler, Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers!, version updated May 5, 2005. See http://www.dwheeler.com/oss_fs_why.html. The paper also has useful summaries of market information and other open source statistics.
From Broadcast Newsroom, BBN Technologies just released version 2.0 of its AVOKE STX speech-to-text software. According to BBN, this new version improves the relevance of multimedia search results by transforming audio into searchable text with unprecedented accuracy. Applications include enterprise search, business and government intelligence, consumer search, audio mining, video search, broadcast monitoring, and multimedia asset management.
BBN says AVOKE STX 2.0 separates speech from non-speech, such as music or laughter, and then processes the speech to identify additional characteristics. This information is captured, tagged with metadata, and indexed in an XML format for use by standard search engines or technology. Because each word in the metadata is time-stamped, users can navigate easily to any point in the transcript, listen to the original audio, or watch the corresponding video.
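A word-level, time-stamped XML index of the kind described makes jump-to-audio navigation a simple lookup. The sketch below is hypothetical: the element and attribute names are illustrative, not BBN’s actual AVOKE STX schema.

```python
# Hypothetical sketch of a time-stamped, word-level transcript index.
# Element and attribute names are illustrative only; this is NOT
# BBN's actual AVOKE STX output format.
import xml.etree.ElementTree as ET

snippet = """
<transcript source="briefing.wav">
  <word start="12.40" end="12.71">quarterly</word>
  <word start="12.71" end="13.02">results</word>
  <word start="13.10" end="13.55">exceeded</word>
</transcript>
"""

def find_word_time(xml_text, term):
    """Return the start time (in seconds) of the first matching word, or None."""
    root = ET.fromstring(xml_text)
    for word in root.iter("word"):
        if word.text == term:
            return float(word.get("start"))
    return None

print(find_word_time(snippet, "results"))  # 12.71
```

Because every word carries its own offsets, a search hit can seek the player directly to the matching moment in the original audio or video.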
BBN’s legacy includes a pioneering role in the development of the ARPANET, the forerunner of the Internet. BBN supports both commercial and government clients. Its AVOKE speech technology translates Arabic and Chinese, with additional foreign languages planned.
I’ve just finished reading a fascinating 228-page transcript on the topic of peer production and open data architectures. This discussion, the first of the so-called Union Square Sessions, involved more than 40 prominent Web and other thinkers with a heavy sprinkling of VCs.
That being said, I was disappointed that neither interoperability nor extensibility directly entered into any of the discussions. I suspect this may be due to conjoining the important singular topic of open data architectures with the lens of peer production or social networks. For me, the quote closest to my interests among this disparate group was from Dick Costolo, who stated “the bottom line implication is that an open data architecture will be one that is purely API based and not destination based.”
Nonetheless, this is an interesting start. I’d like to humbly suggest open and extensible data architectures (including, importantly, database engines in addition to extensible exchange formats such as XML) for a future discussion topic.
Here is the link to the Union Square Sessions Transcript.
I just came across a VC blog pondering the value to a start-up of operating in "Stealth Mode" or not. I’ve amusingly come to the conclusion that all of this — particularly the "stealth" giveaway — is so much marketing hype. When a start-up claims they’re coming out of stealth mode, grab your wallet.
The most interesting and telling example I have of this is Rearden Commerce, which was announced in a breathy cover story in InfoWorld in February 2005 about the company and its founder/CEO Patrick Grady. The company has an obvious "in" with the magazine; in 2001 InfoWorld also carried a similar piece on the predecessor company to Rearden, Talaris Corporation.
According to a recent Business Week article, Rearden Commerce and its predecessors, reaching back to an earlier company called Gazoo founded in 1999, have raised $67 million in venture capital. While it is laudable that the founder has reportedly put his own money into the venture, this venture, with its massive funding and high-water mark of some 80 employees, hardly qualifies as "stealth."
As early as 2001 with the same technology and business model, this same firm was pushing the "stealth" moniker. According to an October 2001 press release:
"The company, under its stealth name Gazoo, was selected by Red Herring magazine as one of its ‘Ten to Watch’ in 2001." [emphasis added]
Even today, though no longer the active name, Talaris Corporation has close to 115,000 citations on Yahoo! Notable VCs such as Charter Ventures, Foundation Capital, JAFCo and Empire Capital have backed it through its multiple incubations.
Holmes Report, a marketing company, provides some insight into how the earlier Talaris was spun in 2001:
"The goal of the Talaris launch was to gain mindshare among key business and IT trade press and position Talaris as a ‘different kind of start-up’ with a multi-tiered business model, seasoned executive team and tested product offering."
The Holmes Report documents the analyst firms and leading journals and newspapers to which Talaris made outreach. Actually, this outreach is pretty impressive. Good companies do the same all of the time, and that is to be lauded. What is to be questioned, however, is how many "stealths" a cat can have. Methinks this one is one too many.
"Stealth" thus appears to be code for an existing company of some duration that has had disappointing traction and now has new financing, a new name, new positioning, or all of the above. So, interested in a start-up that just came out of stealth mode? Let me humbly suggest standard due diligence.