Posted: November 13, 2005

Venture capitalists, when the straw gets short or the proverbial hits the fan, are famous for calling for new managerial blood. After all, we did our due diligence on this company, yet it is not profitable — perhaps even bleeding excessively — so what went wrong?

Actually, to be fair, perhaps the founding entrepreneurs are having the same thoughts. We wrote the business plan, we beat the odds to even get angel and (“Isn’t that special,” says the Church Lady) VC financing, thus we have had affirmation about our markets, technology, team and other aspects from the “smart” money, so why is it not working? Why aren’t we profitable? What went wrong?

Getting external financing from professional VCs is non-trivial and itself puts a company in the “less-than-0.1% club.” And, of course, getting any financing is hard to do, be it from an angel, your own checking account, your spouse or your friends and family. Forsaking Janie’s college education for a chance on a start-up requires tremendous belief and suspension of disbelief for any early investor.

But, the initial financing hurdle has been met. Some time has passed. Neither profits nor the plan are fulfilling themselves. What do we — obviously the smart ones since we put up the money or had the ideas — do about our belief while return is not being fulfilled?

Time for Superman?

In nearly two decades of mentoring various ventures I’ve observed one possible reaction is to look for Superman. If only the company had the right missing individual in a CEO or senior manager position, then many of the current problems would go away. But as my Mom used to say, nothing is easy. Easy answers can lead to uneasy situations. And, I think, the myth of Superman more often than not fits into such a facile error.

When things go wrong (or, at least, are not going as desired), things are tough for all of those with a stake in success. Is the source of discomfort that money was put up and is now at risk of loss? Is it that individuals were supported but are not yet achieving success? Is it ego that due diligence was made but success is looking tenuous? And, if things are going wrong or progress is disappointing, what is the root cause? Is the market needful or ready? Is the technology or product responsive or ready? Is the business model correct? Are other pieces such as partners, advisors, infrastructure, collateral, or whatever in place?

New people do not need to be hired to pose these questions nor to spend purposeful and thoughtful time addressing them. And, even if new people and skills are deemed critical to supplement the skills presently available, expectations set too high or too superhuman are likely not to be fulfilled, to take too long to do so if even achievable, and to cost too much in focus and precious resources.

The Kryptonite

In fact, pursuing the myth of Superman can actually worsen a current situation for the following reasons:

  • Supermen Are Rare — there are thousands of new startups formed each year, hundreds of which receive significant venture funding from VCs, angels, or small business R&D efforts or grants. Only a very small percentage achieve high returns and only a small percentage of those can be ascribed to the “superstar” performance of a specific individual. Sure, names are known and the business and trade press love to lionize these individuals. But the statistical occurrence of a clearly superior manager or executive is measured in the tenths of a single percent or less
  • Supermen Are Not Infallible — even that small minority of individuals that do receive recognition as “superstars” may have achieved that lofty status as much due to luck or circumstance. Serially successful entrepreneurs are rarer still than one-off “superstars.” And, for those few individuals that have shown repeated success, they are more often interested in pursuing their own loves and interests and are not for hire for someone else’s venture
  • Supermen Are Not Obvious — perhaps because of serendipity and some of the reasons above, “superstars” also defy characterization by sex, background, age, appearance, personality, education or other discernible metric. So, if a Superman is not reliably a Superman in his next engagement, nor if there is a way to reliably identify Supermen-in-waiting, then why is so much time spent on finding the unfindable?
  • Supermen Are Expensive — both in terms of equity and compensation, any individual brought in as a savior will cost the startup plenty. Resources are always most precious and constrained for startups. Perhaps, if the identification of the “superstar” could be reliably assured, then this expense could be justified. But since that reliability is not there, the hiring may only drain limited cash and resources and create resistance by the key founders who don’t receive the Superman rents
  • They Can Screw Up Dynamics — by the time the Superman option is considered the company has likely already achieved some success, visibility and funding. Founders and key employees, not to mention early financial backers, have worked hard to bring things to their current point. Raising the Superman spectre not only affects the morale of existing players and sends a negative message but, if an individual is then subsequently hired, existing dynamics can be challenged and irreparably harmed. Of course, outside money that controls such decisions may have reached the conclusion that dynamics were already broken and needed fixing, but the likelihood of a new player augmenting and bolstering existing positive interactions is less than the opposite prospect
  • Finding Superman Diverts Attention — a Superman initiative poses a huge opportunity cost to the limited bandwidth of existing executive and director attention within a startup. Defining the qualifications, collecting the names, conducting the recruiting, interviewing the prospects, and then deciding to offer and negotiating the compensation package are extremely time consuming activities. All time spent on this stuff is time not spent on building the company, its products and pursuing sales
  • In Fact, Superman May Not Exist — this is actually the most interesting observation. It is seductive, and a statistical error, to look at an instance of managerial or entrepreneurial success and conclude it is repeatable. After all, haven’t some individuals beat the track, the stock market, or the start-up venture odds? Let me first say there are perhaps spectacular individuals — say a Warren Buffett — who consistently outperform the norm. But Howard Hughes did the same and still ended up with a Spruce Goose that barely flew and fingernails inches long, and there are compellingly few billionaires for the millions of existing businesses. At the statistically low numbers here, we can safely say that for practical purposes Superman does not exist.

Change the Perspective, Change the Mindset

Raising the Superman option only occurs when a company is in trouble and needs help. The key individuals associated with a startup — Board and management alike — are better advised to concentrate on business model, strategy, execution and maintaining focus than searching for the impossible or (at least) statistically highly unlikely.

When problems arise, look to problem identification and problem-solving approaches before copping out with easy Superman answers.

Efforts should be focused; business models should be clear; execution should be emphasized; resources should be zealously protected and stewarded; questions should be constantly asked; and team efforts and building should be fostered. Patience is not a four-letter word, especially if progress is steady and being accomplished in a cost-effective manner.

Nurturing and training of initial founders and staff are important. Financing would not have been achieved initially without some belief in these individuals. Not performing exactly to plan is, in fact, an expected outcome, not one warranting excoriation.

These positive mindsets are hard to keep when the venture’s performance or sales are not meeting plan. And, of course, some of these instances will warrant abandonment of the venture rather than throwing good money after bad. There are no guarantees. And mistakes get made.

But make the choice. Commit to the venture and improving its prospects through hard work and engagement, or walk away. Superman is a false middle ground.

Don’t Get Me Wrong

Please, don’t get me wrong. Without a doubt some people are better managers, some are better salespeople, some are better intellects, some are better strategists, some are better marketers and some are better networkers than others. Anyone who is superior, committed and a believer in the cause of your venture will likely bring some value. And there are indeed rare individuals and rare circumstances when hiring the right new executive could and should make all the difference toward success.

The more important point, however, is that startups are more often than not constrained in their team and resources. Be smart about where to spend limited time and focus. Hiring good and even great people is a good focus. Searching for Superman is not. Rather than seeking the impossible combination in a single person, look to a collective team that embodies the traits deemed important for your venture’s success.

Posted by AI3's author, Mike Bergman Posted on November 13, 2005 at 6:03 pm in Software and Venture Capital | Comments (0)
Posted: November 6, 2005

Nova Spivack has announced the imminent release of a semantic Web tool for the desktop, Open IRIS:

Following in the footsteps of Douglas Engelbart’s pioneering work, SRI has announced the upcoming open-source (LGPL) release of Open IRIS — an experimental Semantic Web personal information manager that runs on the desktop. IRIS was developed for the DARPA CALO project and makes use of code libraries and ontology components developed at SRI, and my own startup, Radar Networks, as well as other participating research organizations.

IRIS is designed to help users make better sense of their information. It can run on its own, or can be connected to the CALO system, which provides advanced machine learning capabilities to it. I am very proud to see IRIS go open source — I think it has potential to become a major platform for learning applications on the desktop.

IRIS is still in its early stages of evolution, and much work will be done this year to add further functionality, improve the GUI and make IRIS even more user-friendly. But already it is perhaps the most sophisticated and comprehensive semantic desktop PIM ever created. If you would like to read more about IRIS, this paper provides a good overview.

Posted by AI3's author, Mike Bergman Posted on November 6, 2005 at 8:26 pm in Searching, Semantic Web, Semantic Web Tools | Comments (0)
Posted: November 4, 2005

For some years now the enterprise search market has been sick and in search of relief.  In a frankly shocking development, Autonomy announced today it was acquiring the veritable Verity search company for $500 million.  See this Bloomberg story ….

This acquisition amount is itself a signal of how poorly this market has been going.  Enterprise search has averaged about $500 million annually in revenues over the past few years with little or slow growth.  Autonomy has been the small cousin from England, but has now gobbled up the old dinosaur in Verity.

BTW, if someone mentions synergies or complementarity to you about this acquisition, don’t believe it.  This is totally an indication of how poor the entire enterprise search market has become.  With unstructured data representing 60-80% of all data available to an enterprise, these valuations are indicative of how piss-poor enterprise document technology is at present.

Bon voyage!  I expect all of these dinosaurs to find their final resting place in the sun.  RIP

Posted by AI3's author, Mike Bergman Posted on November 4, 2005 at 6:47 pm in Searching, Software and Venture Capital | Comments (0)
Posted: November 1, 2005

The first recorded mentions of “semi-structured data” occurred in two academic papers from Quass et al.[1] and Tresch et al.[2] in 1995. However, the real popularization of the term “semi-structured data” occurred through the seminal 1997 papers from Abiteboul, “Querying semi-structured data,” [3] and Buneman, “Semistructured data.” [4] Of course, semi-structured data had existed well before this time, only it had not been named as such.

What is Semi-structured Data?

Peter Wood, a professor of computer science at Birkbeck College at the University of London, provides succinct definitions of the “structure” of various types of data:[5]

  • Structured Data — entities are grouped together into relations or classes. Entities in the same group have the same descriptions (or attributes), while descriptions for all entities in a group (or schema): a) have the same defined format; b) have a predefined length; c) are all present; and d) follow the same order. Structured data are what is normally associated with conventional databases, such as relational transactional ones, where information is organized into rows and columns within tables. Spreadsheets are another example. Nearly all conventional database management systems (DBMSs) are designed for structured data
  • Unstructured Data — in this form, data can be of any type and do not necessarily follow any format or sequence, do not follow any rules, are not predictable, and can generally be described as “free form.” Examples of unstructured data include text, images, video or sound (the latter two also known as “streaming media”). Generally, “search engines” are used for retrieval of unstructured data via querying on keywords or tokens that are indexed at time of the data ingest, and
  • Semi-structured Data — the idea of semi-structured data predates XML but not HTML (with the actual genesis better associated with SGML, see below). Semi-structured data are intermediate between the two forms above wherein “tags” or “structure” are associated or embedded within unstructured data. Semi-structured data are organized in semantic entities, similar entities are grouped together, entities in the same group may not have the same attributes, the order of attributes is not necessarily important, not all attributes may be required, and the size or type of the same attributes in a group may differ. To be organized and searched, semi-structured data should be provided electronically from database systems, file systems (e.g., bibliographic data, Web data) or via data exchange formats (e.g., EDI, scientific data, XML).
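
To make the distinction concrete, here is a small Python sketch (the records and field names are invented for illustration) contrasting a structured table, where every entity shares the same ordered attributes, with a semi-structured group, where attributes may vary per entity:

```python
# Structured form: fixed schema, every field present, same order for all rows.
structured_rows = [
    # schema: (name, city, age)
    ("John A. Smith", "Salem, MA", 67),
    ("Jane N. Smith", "Salem, MA", 64),
]

# Semi-structured form: entities in the same group, but attributes differ
# per entity -- some are missing, some appear only once.
semi_structured = [
    {"name": "John A. Smith", "city": "Salem, MA", "age": 67,
     "survived_by": ["Jane N.", "John A., Jr.", "Lily C."]},
    {"name": "Jane N. Smith", "church": "St. Mary's"},  # no age, extra field
]

# A structured query can assume position and presence; a semi-structured
# query must guard against absent attributes:
ages = [row[2] for row in structured_rows]
ages_semi = [e["age"] for e in semi_structured if "age" in e]
print(ages, ages_semi)
```

The guard (`if "age" in e`) is exactly the overhead that a fixed schema lets a conventional DBMS avoid.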

Unlike structured or unstructured data, there is no accepted database engine specific to semi-structured data. Some systems attempt to use relational DBMS approaches from the structured end of the spectrum; other systems attempt to add some structure to standard unstructured search engines. (This topic is discussed in a later section.)

Semi-structured data models are sometimes called “self-describing” (or schema-less). These data models are often represented as labelled graphs, or sometimes labelled trees with the data stored at the leaves. The schema information is contained in the edge labels of the graph. Semi-structured representations also lend themselves well to data exchange or the integration of heterogeneous data sources.
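
A minimal sketch of this self-describing, labelled-tree view (using a nested Python dict as the tree; the labels and values are my own illustration): the schema lives in the edge labels, while the data sit at the leaves:

```python
# "Self-describing" labelled tree: dict keys act as edge labels (the schema),
# and the data are stored at the leaves.
doc = {
    "person": {
        "name": "John A. Smith",
        "address": {"city": "Salem", "state": "MA"},
        "age": 67,
    }
}

def leaves(node, path=()):
    """Yield (edge-label-path, leaf-value) pairs from a labelled tree."""
    if isinstance(node, dict):
        for label, child in node.items():
            yield from leaves(child, path + (label,))
    else:
        yield path, node

for path, value in leaves(doc):
    print("/".join(path), "=", value)
```

Note that no schema was declared anywhere: the traversal recovers the structure entirely from the edge labels, which is what makes such representations convenient for exchanging heterogeneous data.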

A nice description by David Loshin[6] on Simple Semi-structured Data notes that structured data can be easily modeled, organized, formed and formatted in ways that are easy for us to manipulate and manage. In contrast, though we are all familiar with the unstructured text in documents, such as articles, slide presentations or the message components of emails, its lack of structure prevents the advantages of structured data management. Loshin goes on to describe the intermediate nature of semi-structured data:

There [are] sets of data in which there is some implicit structure that is generally followed, but not enough of a regular structure to “qualify” for the kinds of management and automation usually applied to structured data. We are bombarded by semi-structured data on a daily basis, both in technical and non-technical environments. For example, web pages follow certain typical forms, and content embedded within HTML often have some degree of metadata within the tags. This automatically implies certain details about the data being presented. A non-technical example would be traffic signs posted along highways. While different areas use their own local protocols, you will probably figure out which exit is yours after reviewing a few highway signs.

This is what makes semi-structured data interesting–while there is no strict formatting rule, there is enough regularity that some interesting information can be extracted. Often, the interesting knowledge involves entity identification and entity relationships. For example, consider this piece of semi-structured text (adapted from a real example):

John A. Smith of Salem, MA died Friday at Deaconess Medical Center in Boston after a bout with cancer. He was 67.

Born in Revere, he was raised and educated in Salem, MA. He was a member of St. Mary’s Church in Salem, MA, and is survived by his wife, Jane N., and two children, John A., Jr., and Lily C., both of Winchester, MA.

A memorial service will be held at 10:00 AM at St. Mary’s Church in Salem.

This death notice contains a great deal of information–names of people, names of places, relationships between people, affiliations between people and places, affiliations between people and organizations and timing of events related to those people. Realize that not only is this death notice much like others from the same newspaper, but that it is reasonably similar to death notices in any newspaper in the US.

Note in Loshin’s example that the “structure” added to the unstructured text (highlighted in yellow in the original; my emphasis) to make this “semi-structured” data arises from adding informational attributes that further elaborate or describe the document. These attributes can be automatically found using “entity extraction” tools or similar information extraction (IE) techniques, or manually identified. [7] These attributes can be assigned to pre-defined record types for manipulation separate from a full-text search of the document text. Generally, when such attributes are added to the core unstructured data it is done through “metatags” that a parser can structurally recognize, such as by using the common open and close angle brackets. For example:

<author=John Smith>

In semi-structured HTML, the tags that provide the semi-structure serve a different purpose in terms of either formatting instructions to a browser or providing reference links to internal anchors or external documents or pages. Note that HTML also uses the open and close angle brackets as the convention to convey the structural information in the document.
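
As a toy illustration of parsing such embedded metatags (the `<attribute=value>` convention follows the example above and is illustrative, not a real standard), a few lines of Python can split the added structure from the remaining free text:

```python
import re

# Toy parser for an "<attribute=value>" metatag convention (illustrative only).
text = ("<author=John Smith><city=Salem, MA>John A. Smith of Salem, MA "
        "died Friday at Deaconess Medical Center in Boston. He was 67.")

TAG = re.compile(r"<(\w+)=([^>]*)>")

def split_semi_structured(raw):
    """Separate embedded metatags from the free-text remainder."""
    attrs = dict(TAG.findall(raw))   # structured part: attribute/value pairs
    free_text = TAG.sub("", raw).strip()  # unstructured part: the prose
    return attrs, free_text

attrs, body = split_semi_structured(text)
print(attrs["author"])
```

The point of the sketch is that one pass over the text yields both a record-like object (for fielded queries) and the original prose (for full-text search).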

The Birth of the Semi-structured Data Construct

One could argue that the emergence of the “semi-structured data” construct arose from the confluence of a number of factors:

  • The emergence of the Web
  • The desire for extremely flexible formats for data exchange between disparate databases (and therefore useful for data federation)
  • The usefulness of expressing structured data in a semi-structured way for the purposes of browsing
  • The growth of certain scientific databases, especially in biology (esp., ACeDB), where annotations, attribute extensibility resulting from new discoveries, or a broader mix of structural and text data was desired.[8]

These issues first arose and received serious computer science study in the late 1970s and early 1980s. In the early years of trying to find standards and conventions for representing semi-structured data (though not yet called that), the major emphasis was on data transfer protocols.

In the financial realm, one proposed standard was electronic data interchange (EDI). In science, there were literally tens of exchange formats proposed with varying degrees of acceptance, notably abstract syntax notation (ASN.1), TeX (a typesetting system created by Donald Knuth) and its variants such as LaTeX, hierarchical data format (HDF), common data format (CDF), and the like, as well as commercial formats such as PostScript, PDF (portable document format) and RTF (rich text format).

One of these proposed standards was the “standard generalized markup language” (SGML), first published in 1986. SGML was flexible enough to represent either formatting or data exchange. However, with its flexibility came complexity. Only when two simpler forms arose, namely HTML (HyperText Markup Language) for describing Web pages and XML (eXtensible Markup Language) for data exchange, did variants of the SGML form emerge as widely used common standards.[9]

The XML standard was first published by the W3C in February 1998, rather late in this history and after the semi-structured data term had achieved some impact.[10] Dan Suciu was the first to publish on the linkage of XML to semi-structured data in 1998,[11] a reference that remains worth reading to this day.

In addition, the OEM (Object Exchange Model) has become the de facto model for semi-structured data. OEM is a graph-based, self-describing object instance model. It was originally introduced for the Tsimmis data integration project,[12] and provides the intellectual basis for object representation in a graph structure with objects either being atomic or complex.
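
A rough Python sketch of the OEM idea (labels and structure invented for illustration): every object is either atomic, carrying a value, or complex, carrying labelled references to other objects, and navigation follows labels rather than a fixed schema:

```python
# Sketch of an OEM-style object: atomic objects carry a value; complex
# objects carry labelled references to subobjects.
class OemObject:
    def __init__(self, label, value=None, children=None):
        self.label = label
        self.value = value              # set for atomic objects
        self.children = children or []  # non-empty for complex objects

    def is_atomic(self):
        return not self.children

person = OemObject("person", children=[
    OemObject("name", value="John A. Smith"),
    OemObject("address", children=[
        OemObject("city", value="Salem"),
        OemObject("state", value="MA"),
    ]),
])

def find(obj, label):
    """Follow labelled edges; no schema consulted, only labels."""
    return [c for c in obj.children if c.label == label]

print(find(person, "name")[0].value)
```

Because the labels travel with the instance, two sources can exchange such objects without first agreeing on a shared schema, which is why OEM suited the Tsimmis integration setting.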

How the attribute “metadata” is described and associated has itself been the focus of much standards work. Hundreds of description standards have been proposed, from specific instances in medical terminology such as MeSH, to law, physics and engineering, to cross-discipline proposals such as the Dublin Core. (Google these for a myriad of references.)

Challenges in Semi-structured Data

Semi-structured data, as with all other data structures, need to be represented, transferred, stored, manipulated or analyzed, all possibly at scale and with efficiency. It is often easy to confuse data representation with data use and manipulation. XML provides an excellent starting basis for representing semi-structured data. But XML says little or nothing about these other challenges in semi-structured data use:

  • Data heterogeneity — the subject of data heterogeneity in federated systems is extremely complex, and involves such areas as unit or semantic mismatches, grouping mismatches, non-uniform overlap of sets, etc. “Glad” may mean the same as “happy,” while the same measurement may be expressed in metric units in one source and English units in another. This area is complex and warrants its own topic
  • Type inference — related to the above is the data type requiring resolution, for example, numeric data being written as text
  • Query language — actually, besides transfer standards, probably more attention has been given to query languages supporting semi-structured data, such as XQuery, than other topics. Remember, however, that a query language is the outgrowth of a data storage framework, not a determinant, and this distinction seems to be frequently lost in the semi-structured literature
  • Extensibility — inherent in the link to XML is the concept of extensibility with semi-structured data. However, it is important to realize that extensibility as used to date refers to data representation and not data processing. Further, data processing should occur without the need for database updates. Indeed, it is these latter points that provide a key rationale for BrightPlanet’s XSDM system
  • Storage — XML and other transfer formats are universally in text or Unicode, excellent for transferability but shitty for data storage. How these representations actually get stored (and searched, see next) is fundamental to scalable systems that support these standards
  • Retrieval — many have proposed, and are proposing, native XML retrieval systems, and others have attempted to clone RDBMSs or search (text) engines for these purposes. Retrieval is closely linked to query language but, more fundamentally, also needs to be speedy and scalable. As long as semi-structured retrieval mechanisms are poor-cousin add-ons to systems optimized for either structured or unstructured data, they will be poor performers
  • Distributed evaluation (scalability) — most semi-structured or XML engines work OK at the scale of small and limited numbers of files. However, once these systems attempt to scale to an enterprise level (of perhaps tens of thousands to millions of documents) or, god forbid, Internet scales of billions of documents, they choke and die. Again, data exchange does not equal efficient data processing. The latter deserves specific attention in its own right, which has been lacking to date
  • Order — consider a semi-structured data file transferred in the standard way (which is necessary) as text. Each transferred file will contain a number of fields and specifications. What is the efficient order of processing this file? Can efficiencies be gained through a “structural” view of its semi-structure? Realize that any transition from text to binary (necessary for engine purposes, see above) also requires “smart” transformation and load (TL) approaches. There is virtually NO discussion of this problem in the semi-structured data literature
  • Standards — while XML and its variants provide standard transfer protocols, the use of back-end engines for efficient semi-structured data processing also requires prescribed transfer standards in order to gain those efficiencies. Because the engines are still totally lacking, this next level of prescribed formats is lacking as well.
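
The type-inference challenge noted above can be illustrated with a naive sketch: values arrive as text in XML-style transfer formats, and an engine must coerce them to usable types (the coercion order here is my own simplification):

```python
# Naive illustration of the type-inference problem: all transferred values
# are text, and the engine guesses a usable type per field.
def infer(value: str):
    """Best-effort coercion of a text field to int, float, or str."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value  # fall back to plain text

fields = {"age": "67", "weight": "81.5", "city": "Salem"}
typed = {k: infer(v) for k, v in fields.items()}
print(typed)  # {'age': 67, 'weight': 81.5, 'city': 'Salem'}
```

Real engines face much harder cases (dates, locale-dependent numbers, mixed types within one attribute across entities), which is why the text notes this resolution is non-trivial.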

Generally, most academic, open source, or other attention to these problems has been at the superficial level of resolving schema or definitions or units. Totally lacking in the entire thrust for a semi-structured data paradigm has been the creation of adequate processing engines for efficient and scalable storage and retrieval of semi-structured data. [13]

You know, it is very strange. Tremendous effort goes into data representations like XML, but when it comes to positing or designing engines for manipulating those data, the approach is to clone kludgy workarounds onto existing relational DBMSs or text search engines. Neither meets the test.

Thus, as the semantic Web and its association to semi-structured data looks forward, two impediments stand like gatekeepers blocking forward progress: 1) efficient processing engines and 2) scalable systems and architectures.

[1] D. Quass, A. Rajaraman, Y. Sagiv, J. Ullman and J. Widom, “Querying Semistructured Heterogeneous Information,” presented at Deductive and Object-Oriented Databases (DOOD ’95), LNCS, No. 1013, pp. 319-344, Springer, 1995.

[2] M. Tresch, N. Palmer, and A. Luniewski, “Type Classification of Semi-structured Data,” in Proceedings of the International Conference on Very Large Data Bases (VLDB), 1995.

[3] Serge Abiteboul, “Querying Semi-structured Data,” in International Conference on Database Theory (ICDT), pp. 1-18, Delphi, Greece, 1997.

[4] Peter Buneman, “Semistructured Data,” in ACM Symposium on Principles of Database Systems (PODS), pp. 117-121, Tucson, Arizona, May 1997.

[5] Peter Wood, School of Computer Science and Information Systems, Birkbeck College, University of London.

[6] David Loshin, “Simple Semi-structured Data,” Business Intelligence Network, October 17, 2005.

[7] This example is actually quite complex and demonstrates the challenges facing “entity extraction” software. Extracted entities most often relate to the nouns or “things” within a document. Note also, for example, how many of the entities involve internal “co-referencing,” or the relation of subjects such as “he” to times such as “10:00 AM” to specific dates. A good entity extraction engine helps resolve these so-called “within document co-references.”

[8] Peter Buneman, “Semistructured Data,” in ACM Symposium on Principles of Database Systems (PODS), pp. 117-121, Tucson, Arizona, May 1997.

[9] A common distinction is to call HTML “human readable” while XML is “machine readable” data.

[10] W3C, XML Development History.

[11] Dan Suciu, “Semistructured Data and XML,” in International Conference on Foundations of Data Organization (FODO), Kobe, Japan, November 1998.

[12] Y. Papakonstantinou, H. Garcia-Molina and J. Widom, “Object Exchange Across Heterogeneous Information Sources,” in IEEE International Conference on Data Engineering, pp. 251-260, March 1995.

[13] Matteo Magnani and Danilo Montesi, “A Unified Approach to Structured, Semistructured and Unstructured Data,” Technical Report UBLCS-2004-9, Department of Computer Science, University of Bologna, 29 pp., May 29, 2004.

NOTE: This posting is part of an occasional series looking at a new category that I and BrightPlanet are terming the eXtensible Semi-structured Data Model (XSDM). Topics in this series cover all information related to extensible data models and engines applicable to documents, metadata, attributes, semi-structured data, or the processing, storing and indexing of XML, RDF, OWL, or SKOS data. A major white paper will be produced at the conclusion of the series. Stay tuned!
Posted: October 31, 2005

As an entrepreneur who has now dealt with VCs for close to ten years, one phrase repeated more times than I care to recount has been, "The time for paying tuition is over; it’s time to show revenue multiples."

The first times I heard this mantra it went without question.  I know, as does everyone involved in a start-up, that revenue is goodness and messing around ("paying tuition") is badness.  I think, in general, that shareholder and investor impatience for a quick return on capital is a proper and laudable expectation.  If you’re in the big leagues, you need to either hit, field or pitch, or better still, multiples of these.

But neither technology nor markets are predictable.  Another statement frequently heard is "if you need to educate the market, your business model is wrong."  Another is "show me the way to $20 million annual revenues within the next XX months."

I ain’t a kid anymore, and I appreciate the demands for performance and results.  Starting up a business and spending other people’s money (not to mention my own and my family’s) to achieve returns is not for the fainthearted.  Fair enough.  And understood.

But the real disconnect is how to balance multiple factors.  I think I appreciate the pressures on VCs for returns.  I also understand their win some/lose some mentality.  (Actually, what I don’t understand is the acceptance of such high rates of individual investment failure; something systematic is wrong here; but I digress.)

But what I truly don’t understand is the application of mantras vs. a careful balance of positive and negative factors for a venture.  Excellent and innovative technology is often in search of proper applications and markets.  Excellent and innovative technology is often not initially mature for market acceptance.  Excellent and innovative technology is often misdirected by its founders until engagement with the market and customers helps refine features and product expressions.  Excellent and innovative technology is sometimes tasty cookie dough that needs more time in the oven.

Presumably, as has been the case for my own ventures, the basis for investment has been excellent and innovative technology.  We all know the standard recipe of market-technology-management that sprinkles every high-tech VC Web site.  But, of course, and honestly and realistically, not all of these factors are in play when venture financing is sought.  And, let’s face it, if they were in play, there would not be an interest by the entrepreneurs to dilute their ownership.

I suppose, then, that all players in a venture-financed start-up are subject to various forms of willful deception or self-deception.  Entrepreneurs and VCs alike believe they have all the answers.  And, of course, neither do.

What I have come to learn is that it is the market that has the answers, and sometimes that takes time to figure out.  Good diligence at the front end is warranted — after all, there needs to be the basis of some excellent foundations — as are mechanisms for "feeding out the line" of venture dollars, clawbacks and other egregious ways to lay off risk, because bad choices are often made.  But what should not be acceptable, should not be perpetuated, is the expectation as to WHEN these returns will be achieved.

There is simply no avoiding that new, innovative and sexy technology may not be able to be precisely timed.  Rather than railing about not paying more tuition, every VC that has done diligence and made a venture commitment should be cheering for more learning and more refinement.  Begin with good partial foundations (be they technology-management-market) and applaud the tuition of learning and refinement.  In the end, we never graduate; we hopefully progress to life-long learning.

"Longing gazes and worn out phrases won’t get you where you want to go.  No!"   -  Mamas and Papas   

Next up:  "The Myth of Superman"  

Posted by AI3's author, Mike Bergman Posted on October 31, 2005 at 9:46 pm in Software and Venture Capital | Comments (0)