Posted: January 28, 2013

The Semantic Enterprise: Part 4 in the Enterprise-scale Semantic Systems Series

Text, text everywhere, but no information to link!

For at least a quarter of a century, the share of enterprise information embedded in text documents has been understood to be on the order of 80%; more recent estimates put that contribution at 90%. But whatever the number, and however you slice it, documents hold the overwhelming share of information in enterprises.

The first document management systems, with Documentum a notable pioneer, helped keep track of versions and characterized their document stores with some rather crude metadata. As document management systems evolved — and enterprise search became a go-to application in its own right — full-text indexing and search were added to characterize the document store. Search allowed better access and retrieval of those documents, but still kept documents as an information store separate from the true first citizens of information in enterprises — structured databases.

That is now changing — and fast. Particularly with semantic technologies, it is now possible to “tag” or characterize documents not only in terms of administrative and manually assigned tags, but with concepts and terminology appropriate to the enterprise domain.

Early systems tagged with taxonomies or thesauri of controlled vocabulary specific to the domain. Larger enterprises also often employ MDM (master data management) to help ensure that these vocabularies are germane across the enterprise. Yet even so, such systems rarely interoperate with the enterprises’ structured data assets.

Semantic technologies offer a huge leverage point to bridge these gaps. Being able to incorporate text as a first-class citizen into the enterprise’s knowledge base is a major rationale for semantic technologies.

Explaining the Basis

Let’s start with a couple of semantic givens. First, as I have explained many times on this blog, ontologies — that is, knowledge graphs — can capture the rich relationships between things for any given domain. Second, this structure can be more fully expressed via expanded synonyms, acronyms, alternative terms, alternative spellings and misspellings, all in multiple languages, to describe the concepts and things represented in this graph (a construct we have called “semsets”). That means that different people talking about the same thing with different terminology can still communicate. This capability is an outcome of following SKOS-based best practices in ontology construction.

Then, we take these two semantic givens and stir in two further ingredients from NLP. We first prepare the unstructured document text with parsing and other standard text processing. These steps are also a precursor to search; they provide the means for natural language processing to obtain the “chunks” of information in documents as structured data. Then, using the ontologies with their expanded SKOS labels, we add the next ingredient of OBIE (ontology-based information extraction) to automatically “tag” candidate items in the source text.
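As a rough illustration, dictionary-based concept tagging against a semset can be sketched in a few lines of Python. The concept URIs, labels and function below are invented for illustration only; they are not Structured Dynamics’ actual implementation, which of course does far more (candidate scoring, named entity recognition, and so on):

```python
import re

# Illustrative semset: each concept maps to a set of labels (preferred
# label plus synonyms, acronyms, alternative spellings), in the manner
# of SKOS prefLabel/altLabel.
SEMSETS = {
    "ex:SemanticTechnology": ["semantic technology", "semtech",
                              "semantic technologies"],
    "ex:KnowledgeGraph": ["knowledge graph", "ontology", "domain graph"],
}

def tag_candidates(text):
    """Return (concept URI, matched label, offset) candidates found in text."""
    candidates = []
    lowered = text.lower()
    for concept, labels in SEMSETS.items():
        for label in labels:
            for match in re.finditer(re.escape(label), lowered):
                candidates.append((concept, label, match.start()))
    return sorted(candidates, key=lambda c: c[2])

doc = "Our knowledge graph is built with semantic technologies."
for concept, label, offset in tag_candidates(doc):
    print(concept, label, offset)
```

Each hit is only a candidate; as described next, humans remain in the loop to accept or reject the machine’s suggestions.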

Editors are presented with these candidates in review interfaces as part of the workflow, where they can accept or reject them, or add others. The result is the final assignment of subject “tags.” Because it is important to tag both subject concepts and named entities in the candidate text, Structured Dynamics calls this approach “scones.” We use reusable structures and a common terminology and syntax (irON) as canonical representations of these objects.

Add Conventional Metadata

Of course, not all of the descriptive information you would want to assign to a document concerns what it is about. Much other structural information describing the document goes beyond its subject matter.

Some of this information relates to what the document is: its size, its format, its encoding. Some relates to provenance: who wrote it? who published it? when? when was it revised? And some relates to other descriptive relationships: where to download it, a picture of it, other formats of it. Indeed, any additional information useful for describing the document can also be tagged on at this point.

This latter category is quite familiar to enterprise information architects. Such metadata characterizations have been standard in document management systems for three decades or more.

So, naturally, this information has stood the test of time and must also have a pathway for getting assigned to documents. What is different now is that all of this information can be linked into a coherent knowledge graph of the domain.
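A minimal sketch of such a metadata record follows; the field names are my assumptions for illustration (a real deployment would likely draw on established vocabularies such as Dublin Core):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DocumentMetadata:
    # What the document *is*: size, format, encoding
    size_bytes: int
    mime_type: str
    encoding: str = "utf-8"
    # Provenance: who wrote and published it, and when
    author: Optional[str] = None
    publisher: Optional[str] = None
    created: Optional[str] = None       # ISO 8601 date
    revised: Optional[str] = None
    # Other descriptive relationships
    download_url: Optional[str] = None
    thumbnail_url: Optional[str] = None
    alternate_formats: list = field(default_factory=list)
    # Subject tags linking the document into the domain graph
    subject_tags: list = field(default_factory=list)

meta = DocumentMetadata(size_bytes=18432, mime_type="application/pdf",
                        author="J. Smith", created="2013-01-28")
meta.subject_tags.append("ex:SemanticTechnology")
```

The point of the sketch is the last line: the conventional descriptive fields and the subject tags live side by side, so the whole record can be linked into the domain graph.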

Some Interface and Workflow Considerations

What we are seeking is a framework and workflow that naturally allows all existing and new documents to be fed through a pipeline extending from authoring and review to metadata assignments. This workflow and the user interface screens associated with it are the more difficult aspects of the challenge. It is relatively straightforward to configure and set up a tagger (though, of course, better accuracy and suitability of the candidate tags can speed overall processing time). Making final assignments for subject tags from the candidates, and then ensuring all other metadata are properly assigned, can be either eased or impeded by the actual workflows and interfaces.

The trick to such semi-automatic processes is to get these steps right. There is a need for manual overrides when the suggested, candidate tags are not right. Sometimes new terms and semset entries are found when reviewing the processed documents; these need to be entered and then placed into the overall domain graph structure as discovered. The process of working through the tag processing screens should be natural and logical. Some activities benefit from very focused, bespoke functionality, rather than calling up a complicated or comprehensive app.

In enterprise settings these steps need to be recorded, subject to reviews and approvals, and with auditing capabilities should anything go awry. This means there needs to be a workflow engine underneath the entire system, recording steps and approvals and enabling things to be picked up at any intermediate, suspended point. These support requirements tend to be unique to each enterprise; thus, an underlying workflow system that can be readily modified and tailored — perhaps through scripting or configuration interfaces — is favored. Since Drupal is our standard content and user interface framework, we tend to favor workflow engines like State Machine over more narrow, out-of-the-box setups such as the Workflow module.
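To make the workflow idea concrete, here is a minimal state-machine sketch with an audit trail. The states, transitions and class are illustrative assumptions on my part, not the design of the Drupal State Machine or Workflow modules:

```python
from datetime import datetime, timezone

# Allowed transitions for a simple tagging workflow; easily modified or
# extended per enterprise, which is the point of a configurable engine.
TRANSITIONS = {
    ("draft", "submit"): "tagged",
    ("tagged", "review"): "reviewed",
    ("reviewed", "approve"): "approved",
    ("reviewed", "reject"): "draft",
}

class DocumentWorkflow:
    def __init__(self, doc_id):
        self.doc_id = doc_id
        self.state = "draft"
        self.audit_log = []          # every step is recorded for auditing

    def apply(self, action, user):
        new_state = TRANSITIONS.get((self.state, action))
        if new_state is None:
            raise ValueError(f"'{action}' not allowed from '{self.state}'")
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user, "action": action,
            "from": self.state, "to": new_state,
        })
        self.state = new_state       # can be suspended/resumed at any state

wf = DocumentWorkflow("doc-42")
wf.apply("submit", "alice")
wf.apply("review", "bob")
wf.apply("approve", "carol")
print(wf.state)  # approved
```

Because every transition is recorded with user and timestamp, reviews and approvals can be reconstructed after the fact, and work can be picked up at any intermediate, suspended point.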

These screens and workflows are not integral to the actual semantic framework that governs tagging, but are essential complements to it. It is but another example of how the semantic technologies in an enterprise need to be embedded and integrated into a non-semantic environment (see the prior architecture piece in this series).

But, Also Some Caveats

What we have described above is the technology and process of assigning structured information to documents so that they can interoperate with other data in the enterprise. Once linked into the domain’s knowledge graph, and once characterized by the standard descriptive metadata, text content can be searched, sliced, filtered, navigated or discovered just as if it were structured data. The semantic graph is the enabler of this integration.

Thus, the entire ability of this system to work derives from the graph structure itself. Creating, populating and maintaining these graph structures can be accomplished by users and subject matter experts from within the enterprise, but that requires new training and new skills. It is impossible to realize the benefits of semantic technologies without knowledgeable editors to maintain these structures. Because of its importance, a later part in this series deals directly with ontology management.

While ontology development and management are activities that do not require programming skills or any particular degrees, they do not happen by magic. Concepts need to be taught; tools need to be mastered; and responsibilities need to be assigned and overseen to ensure the enterprise’s needs are being met. It is exciting to see text become a first-class information citizen in the enterprise, but like any purposeful human activity, success ultimately depends on the people involved.

NOTE: This is part of an ongoing series on enterprise-scale semantic systems (ESSS), which has its own category on this blog. Simply click on that category link to see other articles in this series.
Posted: March 12, 2010

Friday Brown Bag Lunch

Today, in the advanced knowledge economy of the United States, the information contained within documents represents about a third of total gross domestic product, or an amount of about $3.3 trillion annually.

Yet our understanding of the value of documents and the means to manage them is abysmal. These failures impact enterprises of all sizes from the standpoints of revenues, profitability and reputation. Continued national productivity growth — and thus the wealth of all citizens — depends critically on understanding and managing these document values.

As this white paper describes, the lack of a compelling and demonstrable common understanding of the importance of documents is itself a major factor limiting available productivity benefits. There is an old Chinese saying that, roughly translated, goes: “what cannot be measured, cannot be improved.” Many corporate officers may believe this to be the case for document creation and productivity but, as this paper shows, many of these document issues can in fact be measured.

This Friday brown bag leftover was first placed into the AI3 refrigerator on July 20, 2005. No changes have been made to the original posting.

I’d like to thank David Siegel for recently highlighting this post from 5 years ago with nice kudos on his PowerOfPull blog. That reference is what caused me to dust off the cobwebs from this older piece.

To wit, some 25% of the trillions of dollars spent annually on document creation lends itself to actionable improvements:


[Table values, in $ millions, were not preserved in this copy. Rows: Cost to Create Documents; Benefits of Finding Missed or Overlooked Documents; Benefits of Improved Document Access; Benefits of Re-finding Web Documents; Benefits of Proposal Preparation and Wins; Benefits of Paperwork Requirements and Compliance; Benefits of Reducing Unauthorized Disclosures; Total Annual Benefits.]

Table 1. Mid-range Estimates for the Annual Value of Documents, U.S. Firms, 2002[1]

The total benefit from improved document access and use to the U.S economy is on the order of $800 billion annually, or about 8% of GDP. For the 1,000 largest U.S. firms, benefits from these improvements can approach nearly $250 million annually per firm. About three-quarters of these benefits arise from not re-creating the intellectual capital already invested in prior document creation. About one-quarter of the benefits are due to reduced regulatory non-compliance or paperwork, or better competitiveness in obtaining solicited grants and contracts.
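These figures can be given a quick back-of-the-envelope check, using the $10.5 trillion 2002 U.S. GDP cited later in this paper:

```python
gdp = 10.5e12          # U.S. GDP, 2002
benefit = 800e9        # estimated annual benefit of improved document access
print(round(benefit / gdp * 100, 1))   # ~7.6, i.e., "about 8% of GDP"

# The 1,000 largest firms at ~$250M each account for ~$250B,
# a bit under a third of the total benefit
top_1000 = 1000 * 250e6
print(round(top_1000 / benefit, 2))
```

The two headline numbers are thus internally consistent: the largest firms capture a disproportionate, but not dominant, share of the estimated benefit.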

Indeed, even these figures likely severely underestimate the benefits to enterprises from an improved leverage of document assets. It has always been the case that the best and most successful companies have been able to make better advantage of their intellectual assets than their competitors. The competitiveness advantage from better document access and use alone may exceed the huge benefits in the table above.

Documents — that is, unstructured and semi-structured data — are now at the point where structured data was 15 years ago. At that time, companies realized that consolidating information from multiple numeric databases would be a key source of competitive advantage. That realization led to the development and growth of the data warehousing or business intelligence markets, now representing about $3.9 billion in annual software sales.

Search and enterprise content management software today only represents a fraction of that amount — perhaps on the order of $500 million annually. But given that intellectual content in documents represents three to four times the amount in numeric structured data, it is clear that document software capabilities are not being well utilized, reaching only a small fraction of their market potential.

The estimates provided in this white paper are drawn from numerous sources and are extremely fragmented, perhaps even inconsistent. One hope in preparing this document was to stimulate more research attention and data gathering around the critical issues of document value to the enterprise and the economy at large.



  • Documents: The Drivers of a Knowledge Economy
  • Documents: The Linchpin of Corporate Intellectual Assets
  • Documents: Unknown Value, Huge Implications
  • Documents: The Next Generation of Data Warehousing?
  • Connecting the Dots: A Pointillistic Approach

  • Number of ‘Valuable’ Documents Produced per Firm
  • Total Annual U.S. ‘Costs’ to Create Documents
  • ‘Cost’ of Creating a ‘Typical’ Document
  • ‘Cost’ of a Missed or Overlooked Document
  • Other Document Total ‘Cost’ Factors and Summary
  • Archival Lifetime of ‘Valuable’ Documents

  • Estimate of Time and Effort Devoted to Document Search
  • Effect of Non-persistent Search Efforts
  • ‘Cost’ of Creating and Maintaining a Document Category Portal
  • ‘Cost’ of Inaccessible or Hidden Intranet Sites

  • ‘Costs’ and Opportunity Costs of Winning Proposals
  • ‘Costs’ of Regulation and Regulatory Non-compliance
  • ‘Cost’ of an Unauthorized Posted Document



How many documents does your organization create each year? What effort does this represent in terms of total staffing costs? What does it cost to create a ‘typical’ document? Of documents created, how much of the value in them is readily sharable throughout your organization? How long do you need to keep valuable documents and how can you access them? How much existing document content is re-created simply because prior work cannot be found? When prior information is missed, what do these prior investments in documents represent in terms of loss of market share, revenue or reputation? Indeed, what does the term, “document” represent in your organization’s context?

If you have difficulty answering these questions, you are not alone. Depending on the survey, from 90% to 97% of enterprises cannot answer these questions — in whole or in part. The purpose of this white paper is to provide the first comprehensive assessment ever of these document values.

Enterprises and the analyst community have historically overlooked the impact of document creation as opposed to document handling. First, document creation is about two to three times more important, from an embedded cost standpoint, than document handling. Second, all aspects of document creation, and later access and use, assume a much greater role in the overall economics of enterprises than has been realized previously.

Documents: The Drivers of a Knowledge Economy

Put your index finger one inch from your nose. That is how close — and unfocused — document importance is to an organization. Documents are the salient reality of a knowledge economy, but like your finger, documents are often too close, ubiquitous and commonplace to appreciate.

How do your employees earn their livings? Writing proposals? Marketing or selling? Evaluating competitors or opportunities? Persuading? Analyzing? Communicating? Teaching? Of course, in some sectors, many make their living from growing things or making things. These are essential jobs — indeed, until the last few decades were the predominant drivers of economies — but are now being supplanted in advanced economies by knowledge work. Perhaps up to 35% of all company employees in the U.S. can be classified as knowledge workers.

And knowledge work means documents. The fact is that knowledge is produced and communicated through the written word. When we search, when we write, when we persuade, we may often do so verbally but make it persistent through the written word.

Documents: The Linchpin of Corporate Intellectual Assets

IBM estimates that corporate data doubles every six to eight months, 85% of which are documents.[2] At least 10% of an enterprise’s information changes on a monthly basis.[3] Year-on-year office document growth rates are on the order of 22%.[4] As later analysis indicates, there are perhaps on the order of 10 billion documents created annually in the U.S. with a mid-range “asset” value of $3.3 trillion per year. Documents are a huge contributor to the United States’ gross domestic product of $10.5 trillion (2002).

According to a Coopers & Lybrand study in 1993:[5]

  • Ninety percent of corporate memory exists on paper
  • Ninety percent of the papers handled each day are merely shuffled
  • Professionals spend 5-15 percent of their time reading information, but up to 50 percent looking for it
  • On average, 19 copies are made of each paper document.

A Xerox Corporation study commissioned in 2003 and conducted by IDC surveyed 1,000 of the largest European companies and had similar findings:[6],[7]

  • On average 45% of an executive’s time was spent dealing with documents
  • 82% believe that documents were crucial to the successful operation of their organizations
  • A further 70% claimed that poor document processes could impact the operational agility of their organizations
  • While 83%, 78% and 76% consider faxes, email and electronic files as documents, respectively, only 48% and 46% categorize web pages and multimedia content as such.

Documents: Unknown Value, Huge Implications

But if defining what constitutes a document is hard, identifying the costs associated with all document activities is almost impossible for many organizations. Between 90 and 97 percent of the corporate respondents to the Coopers & Lybrand and Xerox studies, respectively, could not estimate how much they spend on producing documents each year. Almost three-quarters of them admit that the information is unavailable or unknown to them.

An A.T. Kearney study sponsored by Adobe, EDS, Hewlett-Packard, Mayfield and Nokia, published in 2001, estimated that workforce inefficiencies related to content publishing cost organizations globally about $750 billion. The study further estimated that knowledge workers waste between 15% and 25% of their time in non-productive document activities.[8]


Figure 1. The Situation of Poor Enterprise Document Use Leads to Real Implications

But the situation is much broader, resulting in part from the inability to quantify the importance of both internal and external document assets to all aspects of the enterprise’s bottom line. For examples drawn from the main body of this white paper:

  • early adopters of enterprise content software typically capture less than 1% of valuable internal documents available;
  • large enterprises are witnessing the proliferation of internal and external Web sites, sometimes exceeding thousands;
  • use of external content is presently limited to Internet search engines, producing non-persistent results and no capture of the investment in discovery or results; and
  • “deep” content in searchable databases, which is common to large organizations and represents 90% of external Internet content, is completely untapped.

A USC study reported that typically only 32% of employees in knowledge organizations have access to good information about technical developments relevant to their work, and 79% claim they have inadequate information about what their competitors are doing.[9]

The enterprise content integration software market is fragmented and confused, with only a few established companies providing partial solutions. Content integration is still a small market with annual revenues of less than $50 million worldwide.[10] Vendor offerings fail to satisfy customer needs because of a lack of functionality and a lack of scalability to enterprise volumes. Sales in the market remain distinctly lower than those projected by industry analysts, even as the magnitude of “information overload” continues to grow at a dramatic rate.

Documents: The Next Generation of Data Warehousing?

Documents — that is, unstructured and semi-structured data — are now at the point where structured data was 15 years ago. At that time, companies realized that consolidating information from multiple numeric databases would be a key source of competitive advantage. That realization led to the development and growth of the data warehousing or business intelligence markets, now representing about $3.9 billion in annual software sales.[11]

Certain categories of businesses have been leaders in content integration, especially those that have recently had mergers and acquisitions activity, those that need to integrate business applications with content, and those for which the reuse of marketing assets across the organization is critical.[10]

Stonebraker and Hellerstein have provided an insightful roadmap for how enterprise data integration or “federation” has trended over time: Data warehousing → Enterprise application integration → Enterprise content integration → Enterprise information integration.[12] There are two threads to this trend. First, there has been a growing recognition of the importance of document (unstructured) content to contribute to actionable information. Second, increasingly unified and integrated means are being applied to all data sources to allow single-access retrievals.

Connecting the Dots: A Pointillistic Approach

The state of information regarding the value and cost of documents is extremely poor. Lack of defensible and vetted estimates for this information undercuts the ability to properly estimate the intellectual assets tied up in documents or the impacts of overlooked or misused documents.

Only three large document studies — the Coopers & Lybrand, Xerox and A.T. Kearney studies noted above — have been conducted in the past ten years regarding the use and importance of documents within enterprises, and then solely from the standpoint of executive perceptions.

The quantified picture presented in this white paper regarding the costs and benefits of document creation, access and use is a paint-by-the-numbers assemblage of disparate data. The paper draws upon about 80 different data sources, many fragmented. The analysis approach by necessity has needed to conjoin assumptions and data from many diverse sources.

This approach leads to both uncertainty regarding “true” values and likely inaccuracies or mis-estimates in some areas. To make the assessment as consistent as possible, a base year of 2002 was used, the common year reference for most of the available data sources. To bracket uncertainties, most estimates are provided in low, medium and high estimates.

Thus, this study should be viewed as preliminary, but strongly indicative of the value of documents. Further research and data collection will surely refine these estimates. Clearly, though, by any measure, the value of documents to the enterprise is huge, and should not continue to be overlooked.


Though valuable content resides everywhere, the first challenge to enterprises is getting a handle on their own internal document content.

Number of ‘Valuable’ Documents Produced per Firm

A recent UC Berkeley study on “How Much Information?” estimated that more than 4 billion pages of internal office documents with archival value are generated annually in the U.S. (Note: this is not the amount created, only those documents deemed worthy of retaining for more than one year).

[Table values were not preserved in this copy. Row headings: Firm Size (employees); Knowledge Workers; Number of Pages (Low, High); Number of Docs (Low, High); Docs/Firm (Low, High); Docs/Firm (3-yr Low, 5-yr High); Content Management Workers.]

Table 2. Document Projections for U.S. Firms by Size, 2002 Basis

Sources: UC Berkeley[13], U.S. Commerce Department[14], U.S. Bureau of Labor Statistics[15], U.S. Census Bureau[16]

Table 2 and Table 3 attempt to summarize the scale of this challenge for U.S. firms (for internal enterprise documents only). (See[17] for a description of methodology regarding document scales, note[18] for estimating the numbers of enterprise knowledge workers, and note[19] for estimating content workers. A rough multiplier of 3x to 4x can be applied to extrapolate globally.[20]) Breakouts are provided by size of firm; these include estimates for the number of knowledge and content workers within U.S. firms.







[Table values were not preserved in this copy. Row headings: Knowledge Workers; Annual Number of Docs (Low, High); Annual Docs/Firm (Low, High); Total Docs/Firm (3-yr Low, 5-yr High); Content Management Workers.]

Table 3. Total Annual Document Projections for U.S. Firms, 2002 Basis

Table 4 takes this information and breaks out distribution of document production for a ‘typical’ knowledge worker according to major document types. The data from this table is based on analysis of dozens of BrightPlanet customers averaged across about 10 million documents in various repositories.

[Table values were not preserved in this copy. Row headings: % Based On; Archival Documents (3 yrs); Current Documents (1 yr); Total per Employee, broken out by major document type.]

Table 4. Document Production for a ‘Typical’ Knowledge Worker

Note that word processed documents account for about 50% of typical production and storage demands. However, also note that documents of the highest archival value, as converted to PDFs for sharing and deployment, also represent about a third to two-fifths of stored documents.

Total Annual U.S. ‘Costs’ to Create Documents

Based on the information in Table 2 to Table 4 above, all updated to a common year 2002 basis, we can now estimate the total annual costs in the U.S. for creating all internal enterprise documents. The analysis draws on the UC Berkeley information and the Coopers & Lybrand studies; the “bottom up” case uses the number of annual U.S. documents estimated in Table 2. These results are shown in the table below:

[Table values were not preserved in this copy. Columns: Number (M); Total $ (B). Rows: “Bottom Up” – Low; “Bottom Up” – High; Coopers & Lybrand; C&L – “Bottom Up”.]

Table 5. Annual U.S. Office Document Cost Estimates[21]

The average numbers above represent the average of the unique values in each column. The Table 5 analysis suggests there may be on the order of 10 billion documents created annually in the U.S., with a total “asset” value on the order of $3.3 trillion per year.
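A simple ratio of these two totals gives an average of roughly $330 per document, consistent in magnitude with the $380 per-document figure derived in the next section from averaging the table’s column values:

```python
total_value = 3.3e12    # annual "asset" value of U.S. office documents
num_docs = 10e9         # documents created annually in the U.S.
print(total_value / num_docs)   # 330.0, i.e., ~$330 per document
```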

‘Cost’ of Creating a ‘Typical’ Document

Based on the averages in the table above, a ‘typical’ document may cost on the order of $380 to create.[22] Of course, a “document” can vary widely in size, complexity and time to create, and therefore its individual cost and value will vary widely. An invoice generated from an automated accounting system could be a single page produced automatically in the thousands; proposals for very large contracts can cost tens of thousands to millions of dollars to create. For example, here are some ‘typical’ costs for a variety of documents:

[Table values were not preserved in this copy. The table listed ‘typical’ document types (e.g., Mortgage Application; ‘Typical’ Proposal) against average creation cost.]

Table 6. ‘Typical’ per Document Creation Costs

Depending on document mix and activities, individual enterprises may want to vary the average document creation costs used in their cost-benefit estimates.

‘Cost’ of a Missed or Overlooked Document

The Coopers & Lybrand study suggests that 7.5 percent of all documents are lost forever, and that it costs $120 in labor ($150 updated to 2002) to find a misfiled document;[26] other studies suggest that 5% to 6% of documents are routinely misplaced or misfiled.

In fact, the extent of this problem is largely unknown, as affirmed by the Xerox results:[27]

  • Almost three quarters of corporate respondents admit that the information is unavailable or unknown to them
  • 95% of the companies are not able to estimate the cost of wasted or unused documents
  • On average 19% of printed documents were wasted.

Other Document Total ‘Cost’ Factors and Summary

Five independent studies suggest that, on average, organizations spend from 5% to 15% of total company revenue on handling documents.[27],[28],[29],[30],[31] These seemingly innocuous percentages can translate into huge bottom-line impacts for U.S. enterprises. For example, the total GDP of the United States was on the order of $10.5 trillion at the end of 2002.[32] Translating this value into the results of Table 5 and the information in previous sections indicates the importance of document creation and handling for U.S. enterprises:




[Table values were not preserved in this copy. Rows: Total U.S. Gross Domestic Product ($B); Total Document Handling ($B), with % of total GDP; Total Document Creation ($B), with % of total GDP; Total Document Misfiled ($B), with % of total GDP; ALL U.S. Document Burdens ($B), with % of total GDP.]

Table 7. Range Estimates for Total U.S. Document Burdens in Enterprises, 2002[33]

A few observations relate to this table. First, enterprises and the analyst community have greatly overlooked the impact of document creation as opposed to document handling. Document creation is about 2-3 times more important, from an embedded cost standpoint, than document handling. Second, all aspects of document creation assume a much greater role in the overall economics of enterprises than has been realized previously.
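The handling share alone can be bracketed with simple arithmetic; this sketch assumes the 5% to 15% revenue percentages are applied to GDP as a rough aggregate proxy:

```python
gdp = 10.5e12           # U.S. GDP, 2002
low, high = 0.05, 0.15  # document handling share of revenue, per the studies

# Handling burden in $ billions, low and high brackets
print(round(gdp * low / 1e9), round(gdp * high / 1e9))   # 525 1575
```

In other words, document handling alone plausibly runs in the hundreds of billions to over a trillion dollars annually, before the larger creation burden is even counted.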

The fact that documents have received so little management attention, awareness, measurement and direct attention to improve performance is shocking.

Archival Lifetime of ‘Valuable’ Documents

The ‘low’ and ‘high’ estimates for documents in Table 2 and Table 3 assume that 2% and 5%, respectively, of internal documents have archival value. Were these percentages to be higher, the volume of documents requiring integration and access would likewise increase. The 2% value is derived from the UC Berkeley study,[34] which also refers to an unpublished European study that places archival amounts at 10%. Unfortunately, there is little empirical information to support the degree to which documents deserve to be kept for archival purposes.

Assuming that documents may retain value for three to five years, the largest firms perhaps have as many as 4 million internal documents on average with enterprise-wide value. Firms with fewer employees generally have lower document counts. Archival percentages, however, are a tricky matter, since apparently 85% of all archived documents are accessed.[35]


Various estimates by Cowles/Simba,[36] Veronis, Suhler & Associates,[37] and Outsell[38] place the current market for online business information in the $30 billion to $140 billion range, with significant projected growth. Outsell also indicates that marketing, sales, and product development professionals rely most heavily on information from the Internet for their daily decision making, based on a comparative study of Fortune 500 business professionals’ use of the open Web and fee-based desktop information content services.[39] Clearly, relevant and targeted content, much of which resides online, has extreme value to enterprises.

UC Berkeley estimates that about 500 petabytes of new information were published on the Web in 2002,[34] based on original analysis conducted by BrightPlanet.[40] The compound growth rate in Web documents has been on the order of more than 200% annually.[41] Estimates for deep Web content range from about 6-8 times larger[42] to 500 times larger[40] than standard “surface Web” content. Internet content is overwhelming in size, highly variable in quality, growing at a rapid pace, and in large part ephemeral.

Estimate of Time and Effort Devoted to Document Search

According to a recent study by iProspect, about 56% of users use search engines every day, drawn from a population of which more than 70% use the Internet more than 10 hours per week. Professionals abandon a search 38% of the time after inspecting only one results page (the listing of document result URLs), and overall 82% of users attempt another search if relevant results are not found within the first three results pages. Just 13% of users said that they use different search engines for different types of searches.[43] Only 7.5% of Internet users said they refined their searches with additional keywords when they were unable to achieve satisfactory results.[44]

The average knowledge worker spends 2.3 hours per day, or about 25% of work time, searching for critical job information.[45] IDC estimates that an enterprise employing 1,000 knowledge workers wastes well over $6 million per year searching for information that does not exist, failing to find information that does, or recreating information that could have been found but was not.[46] As that report stated, “It is simply impossible to create knowledge from information that cannot be found or retrieved.”

Vendors and customers often use time savings by knowledge workers as a key rationale for justifying a document or content initiative. This comes about because many studies over the years have noted that white collar employees spend a consistent 20% to 25% of their time seeking information; the premise is that more effective search will save time and drop these percentages. As a sample calculation, each 1% reduction in time devoted to search produces:

$50,000 (base salary) * 1.8 (burden rate) * 1.0% = $900/ employee
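The same arithmetic can be sketched for a range of reduction levels; the salary and burden figures are the illustrative ones from the calculation above:

```python
# Annual savings per employee from reducing time spent searching.
BASE_SALARY = 50_000   # illustrative knowledge-worker base salary ($)
BURDEN_RATE = 1.8      # fully-burdened cost multiplier

def search_savings_per_employee(reduction):
    """Savings for a given fractional reduction in search time (0.01 = 1%)."""
    return BASE_SALARY * BURDEN_RATE * reduction

for pct in (0.01, 0.05, 0.10):
    print(f"{pct:.0%} less search time -> ${search_savings_per_employee(pct):,.0f} per employee")
# The 1% case reproduces the $900 per employee figure above.
```

Because the relationship is linear, each additional point of reduction is worth the same $900, so a 5% reduction is worth $4,500 per employee.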

The stable percentage of effort devoted to search over time suggests it is the “satisficing” allocation. (In other words, knowledge workers are willing to devote a quarter of their time to finding relevant information.) Thus, while better discovery tools may lead to finding better information and making better decisions more productively, a far more important justification in itself, more efficient search may not produce a strict time or labor savings.[47]

Effect of Non-persistent Search Efforts

The percentage of Web page visits that are re-visits is estimated at between 58%[48] and 80%.[49] While many of these re-visits occur shortly after the first visit (e.g., during the same session using the back button), a significant number occur after considerable time has elapsed. Thus, it is not surprising that a survey of problems using the Web found that “Not being able to find a page I know is out there” and “Not being able to return to a page I once visited” accounted for 17% of the problems reported, and that the most common problem using bookmarks was “Changed content.”[50] Depending on the content type, users take either “direct” or “indirect” approaches to re-find previously discovered information:



(Content types compared: Specific Information, General Information, Specific Documents, Web Documents)

Table 8. General Approaches to Re-finding Previously Discovered Information [51]

Direct approaches require remembering or specifically noting the specific location of the information. Direct approaches include: direct entry; emailing to self; emailing to others; printing out; saving as file; pasting the URL into a document; and posting to a personal Web site.

Indirect approaches include: searching; looking through bookmarks; and recalling from a history file. All of these indirect approaches are supported by modern browsers. Note that re-finding Web pages or documents relies heavily on having a record of a previously visited URL.

As a University of Washington study supported by Microsoft discovered, all of the specific direct and indirect techniques applied to these re-discovery approaches have significant drawbacks in terms of desired functions for the recall process: [52]

(Functions evaluated: Portability, No. of Access Points, Persistence, Preservation, Currency, Context, Reminding, Ease of Integration, Communication, Ease of Maintenance. Techniques evaluated: Direct Entry, Email to Self, Email to Others, Save as File, Paste URL in Doc, Personal Web Site, and the indirect techniques of searching, bookmarks and history)

Table 9. Strengths and Weaknesses of Existing Techniques to Re-use Web Information

The general observation is that no present technique alone can keep search results persistent and current or maintain context. These combined inadequacies mean that previously found information is not easily found again, or re-discovered, as the following table shows:


(Measures reported: Information No Longer Available, Re-tracing Path Fails, Time Length Since Last Find, Other Failure Reasons, Total Information Lost, Success Finding Lost Information)

Table 10. Success in Finding Important Earlier Found Web Information [53]

This table supports a number of important observations. First, some 37% of previously found information disappears from the Web, consistent with other findings estimating that about 40% of all Web content, some of it of historical or archival value, disappears annually.[54]

Second, and most importantly, nearly 70% of previously found valuable information cannot be rediscovered. More than half of this problem arises because the information is no longer available on the Web; the remainder relates to the inadequacies of recall techniques for finding previously discovered information.

These observations can translate into some relatively huge costs on a per employee and per enterprise basis, as the table below shows:

(Columns: Per Knowledge Worker – Per Doc and All Docs; Per ‘Large’ Enterprise ($000); All Enterprises ($M). Rows: Re-finding Documents, Re-creating Documents)

Table 11. ‘Cost’ of Not Readily Re-finding Valuable Web Information

This analysis assumes that some previously found information of value is again re-found (60%), but some is also not re-found and must be re-created (40%).[55] The ‘large’ enterprise is identical to the definition in Table 2 (which is also nearly equivalent to a Fortune 1000 company).[56]

The analysis indicates that poor methods to recall previously found and valuable Web documents may cost $1,600 per knowledge worker per year. This translates into nearly a $10 million productivity loss for the largest enterprises, or nearly $33 billion across all U.S. industries.
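As a rough sketch of how these roll-ups scale, the knowledge-worker counts below are assumptions chosen to reconcile the per-worker figure with the per-firm and economy-wide totals, not numbers from the underlying study:

```python
# Roll-up of annual re-finding losses from the $1,600 per-worker estimate.
COST_PER_WORKER = 1_600  # $ per knowledge worker per year (from the analysis)

def refinding_loss(knowledge_workers):
    """Annual productivity loss for a given knowledge-worker population."""
    return COST_PER_WORKER * knowledge_workers

largest_firm = refinding_loss(6_250)          # assumed workers in a largest-tier firm
all_industries = refinding_loss(20_600_000)   # assumed U.S. knowledge workforce
print(f"largest firm: ${largest_firm:,.0f}")           # ~ $10 million
print(f"all U.S. industries: ${all_industries:,.0f}")  # ~ $33 billion
```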

In relation to the total document costs noted in Table 7 above, these may seem to be comparatively small numbers. However, when viewed in the context of unproductive standard Web search, they indicate important failings in the ability to recall previously found valuable results from searches and their attendant productivity losses.

‘Cost’ of Creating and Maintaining a Document Category Portal

Users, administrators and industry analysts alike recognize the importance of placing content into logical, intuitive and hierarchically organized categories. About 60% of knowledge workers note that search is a difficult process, made all the more difficult without a logical organization to content.[57] While technical distinctions exist, these logical structures organized into a hierarchical presentation are most often referred to as “taxonomies,” though other terms such as ontology, subject directory, subject tree, directory structure or classification schema may be used.

Delphi Group’s research with corporate Web sites points to the lack of organized information as the number one problem in the opinion of business professionals. More than three-quarters of the surveyed corporations indicated that a taxonomy or classification system for documents is imperative or somewhat important to their business strategy; more than one-third of firms that classify documents still use manual techniques.[57] Hierarchical arrangements of categorized subjects trigger associations and relationships that are not obvious when simply searching keywords. Other advantages cited for the taxonomic presentation of documents are the greater likelihood of discovery, ease-of-use, overcoming the difficulty of formulating effective search queries, being able to search only within related documents, discovery of relationships among similar terminology and concepts, and user satisfaction.[58],[59]

From the user standpoint, knowledge workers want to impose taxonomic order on document chaos, but only if the taxonomy models their domain accurately. They also want software to assist with categorizing, as long as it respects the taxonomy they created. Finally, the results of these category placements should be presented via a portal. Thus, as the common concern across all requirements, the taxonomy takes on tremendous importance for an application’s success.[60]


Figure 2. Typical Large Firm Documents, Thousands

Enterprises that have adopted directory structures for content management are not yet achieving enterprise-wide relevance, presenting on average 1% of all relevant documents in an organized portal view. These limitations appear to be driven by weaknesses in the technology and high costs associated with conventional approaches:

  • Comprehensiveness and Scale – according to a market report published by Plumtree in 2003, the average document portal contains about 37,000 documents.[61] This was an increase from a 2002 Plumtree survey that indicated average document counts of 18,000.[62] However, about 60% of respondents to a Delphi Group survey said they had more than 50,000 internal documents in their portal environment (generally the department level),[3] and as Table 2 indicates above, most of the largest firms likely have millions or more internal documents deserving of common access and archiving.
  • The left-hand bar in Figure 2 indicates current averages for documents in existing content portals. The right-hand (yellow and orange) bar indicates potential based on high and low estimates. The ‘Archive’ case (middle bar) shows the same values as provided in Table 2, and represents a conservative view of “archival-likely” documents. The right bar is a more representative view of actual current internal content that enterprises may want to make available to their employees.[63] Two observations have merit: 1) under current practice, enterprises are at most making 10% of their useful documents available, and more likely only slightly over 1%; 2) the documents being made available are solely internal, neglecting potentially important external sources that would increase document counts considerably.
  • Implementation Times – though the average time to stand up a new content installation is about 6 months, there is a 22% risk that deployment exceeds that time and an 8% risk that it takes longer than one year. Furthermore, the internal staff necessary for initial stand-up averages nearly 14 people (6 of whom are strictly devoted to content development), with the potential for much larger head counts.[64]
  • Ongoing Maintenance and Staffing Costs – ongoing maintenance and staffing costs typically exceed the initial deployment effort. This trend is perhaps not surprising in that once a valuable content portal has been created there will be demands to expand its scope and coverage. Based on these various factors, Table 12 summarizes set-up, ongoing maintenance and key metrics for today’s conventional approaches versus what BrightPlanet can do (the BrightPlanet document count is based on a ‘typical’ installation; there are no practical scale limits).










(Columns compare Current Practice against BrightPlanet; ‘BP Advantage’ ratios: 6.8x and up, 6.2x, 6.7x, 280.4x, 21.4x, 144.6x)

Table 12. Staff, Time and per Document Costs for Categorized Document Portals

  • The content staff level estimates in the table are consistent with anecdotal information and with a survey of 40 installations that found there were on average 14 content development staff managing each enterprise’s content portal.[65]

Though conventional approaches to content integration seem to lead to high per document set-up and maintenance costs, these should be contrasted with standard practice that suggests it may cost on average $25 to $40 per document simply for filing.[29] Indeed, labor costs can account for up to 30% of total document handling costs.[28] Nonetheless, at $5 to $11 per document for content management alone, there may be no actual cost savings if electronic access does not displace current filing practices. When multiplied across all enterprise documents, these uncertainties can translate into huge swings in costs or benefits for a content portal initiative.
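To see the swing at stake, here is a small sketch using the quoted per-document cost ranges; the 4-million-document base is an assumption borrowed from the earlier archival estimate for the largest firms, not a figure from this comparison:

```python
# Total-cost swing across an enterprise document base for the quoted
# per-document cost ranges. DOCS is an assumed count, not a measured one.
DOCS = 4_000_000

def total_cost(cost_per_doc):
    """Enterprise-wide cost at a given per-document rate."""
    return DOCS * cost_per_doc

print(f"conventional filing: ${total_cost(25)/1e6:.0f}M - ${total_cost(40)/1e6:.0f}M")
print(f"content management:  ${total_cost(5)/1e6:.0f}M - ${total_cost(11)/1e6:.0f}M")
```

Even at these illustrative counts, the gap between the two ranges spans tens of millions of dollars, which is why the displacement question matters.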

  • Software License v. Full Project Costs – according to Charles Phillips of Morgan Stanley, only 30% of the money spent on major software projects goes to the actual purchase of commercially packaged software. Another third goes to internal software development by companies. The remaining 37% goes to third-party consultants.[66] In evaluating a commitment, internal staff and consulting time should be carefully scrutinized; efficiencies in initial deployment and ongoing support are the biggest cost drivers.
  • Internal PLUS External Sources – weaknesses in scalability and high implementation costs often lead to a dismissal of the importance of integrating internal and external content. Few installations address relevant content external to the enterprise that is essential to achieving its missions. Granted, the increases in scale associated with external content are large, but for some businesses integration with external content may be essential.

While other vendors claim fast categorization times, what they fail to mention is the lengthy pre-processing times necessary for generating their categorization metatags. According to Forrester Research, some of these metatagging systems can only process five to 15 documents per hour![67]

‘Cost’ of Inaccessible or Hidden Intranet Sites

In 2003, the portal vendor Plumtree noticed a new trend that it called “Web sprawl,” by which it meant the costly proliferation of Web applications, intranets and extranets.[68] BEA has taken up this trend as a major thrust to its Web service offerings through an approach it calls “enterprise portal rationalization” (EPR).[69] According to BEA, its architectural offerings are meant to control the “metastasizing” of corporate Web sites.

How common and at what scale is the proliferation of enterprise Web sites? I have not been able to find any comprehensive studies on this topic, but I have found many anecdotal examples. The proliferation, in fact, began as soon as the Internet became popular:

  • As reported in 2000, Intel had more than 1 million URLs on its intranet with more than 100 new Web sites being introduced each month[70]
  • In 2002, IBM consolidated over 8,000 intranet sites, 680 ‘major’ sites, 11 million Web pages and 5,600 domain names into what it calls the IBM Dynamic Workplaces, or W3 to employees[71]
  • Silicon Graphics’ ‘Silicon Junction’ company-wide portal serves 7,200 employees with 144,000 Web pages consolidated from more than 800 internal Web sites[72]
  • Hewlett-Packard Co., for example, has sliced the number of internal Web sites it runs from 4,700 (1,000 for employee training, 3,000 for HR) to 2,600, and it makes them all accessible from one home, @HP [73],[74]
  • Avaya Corporation is now consolidating more than 800 internal Web sites globally[75]
  • The Wall Street Journal recently reported that AT&T has 10 information architects on staff to maintain its 3,600 intranet sites that contain 1.5 million public Web pages[76]
  • The new Department of Homeland Security is faced with the challenge of consolidating more than 3,000 databases inherited from its various constituent agencies.[77]

BrightPlanet’s customers confirm these trends, with indicators of hundreds if not thousands of internal Web sites common in the largest companies. Indeed, it is surprising how many instances there are where corporate IT does not even know the full extent of Web site proliferation. The problem is likely much greater than realized:




(Low, Medium and High estimates for: Number of Large Firms; Ave. Number of Web Sites per Firm; Ave. Number of Documents per Web Site; Total Large Firm Web Sites; Percentage of Known Web Sites; Percentage of Doc Federation for Known Sites. Site Development & Maintenance: Development Cost per Web Site; Annual Maintenance Cost per Site; Total Yr 1 Cost per Site; Total Yr 1 per Large Firm Costs ($000); Total Yr 1 Large Firm Costs ($M). ‘Cost’ of Unfound Documents: No. of Unknown Documents per Firm; Total Number of Large Firm Unknown Docs; Total Cost per Web Site; Cost of Unknown Docs per Firm ($000); Total Cost of Large Firm Unknown Docs ($M); Total Cost per Firm ($000); Total Cost all Large Firms ($M); Development as % of Total Costs; Unfound Documents as % of Total Costs)

Table 13. Development and Unfound Document ‘Costs’ for Large Firms due to Web Sprawl

Table 13 consolidates previous information to estimate what the ‘costs’ of Web sprawl might be to larger firms (analogous to the Fortune 1000). The table presents Low, Medium and High estimates for number of Web sites per firm, known and unknown documents in each, and associated costs for initial site development and first-year maintenance plus the value of unfound information. The Medium category uses the average values from previous tables. The Low and High values bracket these amounts based on distribution of known values and expert judgment.

The table indicates, as a mid-range estimate, that an individual Web site for a large enterprise may cost about $6,000 to set up and maintain in the first year and represents $24,000 in opportunity costs due to unknown or unfound documents. For the average large enterprise across all Web sites, these costs may be $4.2 million and $12.0 million, respectively. Across all large firms, total costs due to Web sprawl may be on the order of $22 billion.
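These per-firm numbers are simply per-site figures multiplied by a site count; the 700 sites below is an assumption chosen to reconcile the $6,000 per-site figure with the $4.2 million per-firm total, not a value from the table:

```python
# First-year Web-sprawl development and maintenance cost per large firm.
SITES_PER_FIRM = 700           # assumed average number of internal sites
YEAR1_COST_PER_SITE = 6_000    # $ set-up plus first-year maintenance

year1_cost_per_firm = SITES_PER_FIRM * YEAR1_COST_PER_SITE
print(f"first-year site costs per large firm: ${year1_cost_per_firm:,}")  # $4,200,000
```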

While site development and maintenance costs are not trivial, exceeding $4 billion for all large firms (and, as the previous section showed, they can also be significantly reduced), the major cost impact comes from the inability to find or federate the information that is available. Unfound documents represent well in excess of 80% of the costs associated with Web sprawl.

The Web sprawl situation is analogous to other major technology shifts. For example, in the early 1980s, IT grappled mightily with the proliferation of personal computers. Centralized control was impossible in that circumstance because individuals and departments recognized the productivity benefits to be gained by PCs. Only when enterprise-capable vendors of networking technology, such as Novell, were able to offer integration solutions was the corporation able to control and fully exploit the PC’s technology potential.

The proliferation of internal enterprise Web sites is responding to similar drivers: innovation, customer service, or superior methods of product or solutions delivery. Ambitious mid-level managers will continue to exploit these advantages through “cowboy” additions of more corporate Web sites, and that is likely to the good for most enterprises. Gaining control of, and fully realizing the value from, this Web site proliferation, while not stymieing innovation, will likely require enabling technology analogous to the networking of PCs.


The previous analysis has focused on more-or-less direct costs and drivers. These impacts are huge and deserve proper consideration. But there are other implications of the inability to access and manage relevant document information, falling into the categories of lost opportunities, liabilities and non-compliance. These often far outweigh the direct costs in their bottom-line impacts. This section presents only a few of many examples.

‘Costs’ and Opportunity Costs of Winning Proposals

Competitive proposals are an important revenue factor to hundreds of thousands of businesses. Indeed, contracts and grants from federal, state and local governments accounted for 12.1% of GDP in 2002; the amount competitively awarded equaled about 5.6% of GDP.[78] Reducing the fully-burdened costs of producing responses to competitive procurements and improving the rate of successfully obtaining them can be a huge competitive advantage to business.

Significant proportions of commercial projects and programs are likewise awarded through competitive proposals and bids. However, literature references to these are limited, and the remainder of this section relies on federal sector statistics as a proxy for the overall category.

Though the federal government is making strides in providing central clearinghouses for opportunities, and is also doing much to move toward uniform application standards and electronic submissions, these efforts are still in their nascent stages, and similar efforts at the state and local level lag severely. As a result, the magnitude of the proposal opportunity is perhaps largely unknown to many businesses. This lack of appreciation of, and attention to, the cost and success drivers behind winning proposals is a real gap in the competitiveness of many individual businesses.

Table 14 below consolidates information from many government sources to quantify the magnitude of this competitively awarded grant and contract opportunity with governments.

(Columns: Number of Awards, Amount ($000). Federal Government: Total Grants,[79],[80] Total Contract Procurements, Competitively-awarded Grants, Competitively-awarded Procurements, Total Competitive Opportunities, Ave. Competitive Opportunity. State & Local Government:[84],[85] the same six measures. Total (no B-to-B): Competitively-awarded Grants, Competitively-awarded Procurements, Total Competitive Opportunities, Ave. Competitive Opportunity)

Table 14. Federal, State & Local Contract and Grant Opportunities, 2002

This analysis suggests there is nearly $600 billion available each year for competitively awarded grants and procurements from all levels of government within the U.S., about 60% of it from the federal sector. The average competitive award is about $270 K for grants and about $220 K for contract procurements.

Aside from construction firms (which are excluded in this and prior analyses), there are on the order of 92,500 federal contract-seeking firms today.[87] In 2003, the top 200 federal contracting firms accounted for nearly $190 billion in contract outlays.[88] While it is unclear what proportion of these commitments was competitive (81% of total federal commitments) or based on all contract procurements (57% of total federal commitments), it is clear that more than 90,000 firms are competing via a classic power curve for a minor portion of available federal revenues. This power curve is shown in Figure 3 below for the 200 largest federal contractors, which obtain a disproportionately high percentage of all contract dollars.


Figure 3. Power Curve Distribution of Top 200 Federal Contractors by Revenue, 2002

The combination of these factors enables an estimate of the bottom-line proposal impacts by firm. This information is shown in the table below:


(Amounts in $000. Rows: Total Competitive Awards; State & Local; Number of Competing Firms; Number of Winning Firms; Number of Winning Proposals; Number of Submitted Proposals. Direct Proposal Preparation Costs: Winning Proposal Preparation, Losing Proposals Preparation, TOTAL Proposal Preparation. Improvement in RFP Development. Proposal Preparation: Benefits – Individual Submitters ($000), Benefits – All Submitters ($000). Proposal Success Benefits: Increase in Number of Winning Submissions, Increase in Number of Winning Firms, Benefits – Individual Submitters ($000), Benefits – All Submitters ($000), Benefits – All Submitters/All Aspects)

Table 15. Combined Preparation Costs and Opportunity Costs for Proposals

Across all entities, the annual cost of preparing proposals for competitive solicitations from government agencies at all levels is on the order of $22 billion: $5 billion for winning firms and $17 billion for losing firms. Better access to missing information and better information, assuming no change in the underlying ideas or proposal-writing skills, suggests that proposal response costs could be reduced by more than $3 billion annually. Another $3 billion annually is available from better winning of competitive proposals. The benefit to individual firms that respond to competitive solicitations is on average $1.25 million per competing firm.[95]
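The split of that $22 billion, and the share of preparation spending that goes to losing bids, can be checked directly (figures in $ billions, taken from the estimates above):

```python
# Annual proposal-preparation spending on government solicitations, $ billions.
winning_prep = 5    # spent by firms that win awards
losing_prep = 17    # spent by firms that lose

total_prep = winning_prep + losing_prep
losing_share = losing_prep / total_prep
print(f"total preparation: ${total_prep}B; spent on losing bids: {losing_share:.0%}")
# Roughly three-quarters of all preparation spending produces no award.
```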

The more significant benefit to individual firms from improved access to “missing” information and better information is an increased likelihood of winning a competitive award. Firms that embrace these practices are estimated to obtain a $1.2 million annual benefit. Given that many firms that have previously lost awards have relatively low annual revenues, the percentage impact on the bottom line from improved proposal preparation information can be quite striking.

‘Costs’ of Regulation and Regulatory Non-compliance

A December 2001 small business poll by the National Federation of Independent Business (NFIB) gauged the impacts of the regulatory workload on firms. When asked whether government regulation is “a very serious, somewhat serious, not too serious, or not at all serious problem for your business,” nearly half, or 43.6 percent, answered “very serious” or “somewhat serious.” The respondents indicated the most serious regulatory problems were at the federal (49%), state (35%) or local (13%) level of government. The biggest single regulatory problem cited was extra paperwork, followed by difficulty understanding how to comply with regulations and the dollars spent doing so.[96] A later, December 2003, NFIB survey indicates that the average cost per hour of complying with paperwork requirements was $48.72.[97]

(Columns: Type of Regulation, All Firms, <20 Employees, 20-499 Employees, 500+ Employees. Rows include All Federal Regulations and Tax Compliance)

Table 16. Per Employee Costs of Federal Regulation by Firm Size, 2002

According to a 2001 report, “The Impact of Regulatory Costs on Small Firms,” by W. Mark Crain and Thomas D. Hopkins, the total cost of federal regulations was estimated at $843 billion in 2000, or 8 percent of U.S. Gross Domestic Product. Of these costs, $497 billion fell on business and $346 billion fell on consumers or other governments. Table 16 above shows how those impacts are estimated on a per employee basis across a range of firm sizes.[98]

As of September 30, 2002, federal agencies estimated there were about 8.2 billion “burden hours” of paperwork government-wide. Almost 95 percent of those 8.2 billion hours were being collected primarily for the purpose of regulatory compliance. [99]

(Columns: Burden Hrs (million), Labor Costs ($M). Rows include Total Government, Total Gov (excl. Treasury), FAR (contracts) and Veterans Administration, among individual agencies)

Table 17. Federal Government Paperwork Burdens, 2002[100]

A December 2003 NFIB survey indicates that the average cost per hour of complying with paperwork requirements was $48.72.[101] If these costs are substituted, the total cost burden in the table above would be about $400 billion, $71 billion of which excludes Treasury and the IRS.
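That $400 billion figure is simply the government-wide burden hours priced at the NFIB compliance rate:

```python
# Implied dollar cost of federal paperwork burden hours at the NFIB
# compliance rate (both figures from the text).
BURDEN_HOURS = 8.2e9     # government-wide burden hours, FY2002
COST_PER_HOUR = 48.72    # $ per hour of paperwork compliance (NFIB, Dec. 2003)

total_cost = BURDEN_HOURS * COST_PER_HOUR
print(f"implied paperwork burden: ${total_cost / 1e9:.1f} billion")  # ~$399.5 billion
```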

Despite legislation requiring federal paperwork reduction and embracing e-government initiatives, paperwork burdens continue to increase. Total burden hours in 2002, for example, increased by 600 million hours, or about 4 percent, over the previous year. The Code of Federal Regulations (CFR) continues to expand despite efforts to curtail further growth, from 71,000 pages in 1975 to 135,000 pages in 1998. Annually, more than 4,000 regulatory changes are introduced by the federal government, which now has over 8,000 separate information collection requests authorized by OMB.[102]

(Columns: Federal Source, Fines ($000). Sources: Internal Revenue Service (Corporate Income, Employment Taxes, Excise Taxes, Other Taxes), Economic Stabilization, Labor & Immigration, Commerce & Customs (excl. SEC), Narcotics & Alcohol, Mine Safety, Environmental Protection)

Table 18. Federal Fines and Penalties to Corporations, 2002

Another source of costs to enterprises is civil penalties and fines for non-compliance with existing regulations, shown in the table above for 2002 by agency. A total of $5 billion annually is expended by U.S. businesses on civil penalties due to non-compliance with federal regulation, $1 billion of which is for non-tax purposes.

However, these estimates may undercount actual fines and penalties levied by the federal government because of the accounting basis of the OMB source. For example, the Department of Labor (DOL) collected fines and penalties totaling $175 million from employers in fiscal year 2002 for Fair Labor Standards Act (FLSA) violations.[107] According to a 2002 report, since 1990, 43 of the government’s top contractors have paid approximately $3.4 billion in fines, penalties, restitution and settlements.[108] And, according to another report, the corporations liable in the top 100 False Claims Act cases have paid more than $12 billion since 1986.[109] Since there is no central clearinghouse for this information, with both individual agency general counsels and the Department of Justice responsible for actual collections, the figures in Table 18 should be interpreted as estimates.

Table 19 below consolidates the information in Table 16 to Table 18 to estimate the overall regulatory and paperwork burdens on U.S. businesses, plus estimates of the benefits to be gained from better document access and use.

‘Cost’ of an Unauthorized Posted Document

Unauthorized information disclosures derive mainly from within an organization. The ease of electronic record duplication and dissemination  – particularly through postings on enterprise Web sites  – increases a firm’s vulnerability to this problem. Records mutate and propagate in poorly controlled environments. On average, unauthorized disclosure of confidential information costs Fortune 1000 companies about $15 million per company per year.[110]

A few privacy laws demonstrate the potential liabilities associated with disclosure of confidential information due to inadvertent mistakes or disgruntled employees. As one example, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 sets security standards protecting the confidentiality and integrity of “individually identifiable health information,” past, present or future. Failure to comply with any of the electronic data, security, or privacy standards can result in civil monetary penalties up to $25,000 per standard per year. Violation of the privacy regulations for commercial or malicious purposes can result in criminal penalties of $50,000 to $250,000 in fines and one to ten years of imprisonment.[111]

(Amounts in $000. Burdens: Total Federal Paperwork Burden (non-tax), Total Federal Other Regulatory Burden, Total Federal Fines and Penalties, Total State and Local Paperwork Burden (non-tax), Total State and Local Other Regulatory Burden, Total State and Local Fines and Penalties. Improvements Due to Better Information – for Paperwork Burdens (non-tax), Other Regulatory Burdens, Reductions in Fines and Penalties, and TOTAL – All Regulatory Burdens: Benefits per Large Firm and Benefits – All Firms)

Table 19. Regulatory Burden and Benefits to Firms from Improved Information

As another example, the Gramm-Leach-Bliley Act (GLBA) of 1999 mandates that the financial industry create guidelines for the safeguarding of customer information. GLBA includes severe civil and criminal penalties for non-compliance, with civil penalties of up to $100,000 for each violation; key officers may be fined up to $10,000 per violation. Violations can also carry hefty sanctions, including termination of FDIC insurance and fines of up to $1,000,000 for an individual or one percent of the total assets of the financial institution.[117]

Other major areas of unauthorized disclosure liability involve national security, identity theft, and commerce, tax and Social Security information. Indeed, virtually every state and federal agency related to a company’s business has policies and fines regarding unauthorized disclosures. Monitoring these requirements is thus imperative for enterprise management to avoid exposure to fines and loss of reputation.

On a less quantifiable basis, there are also risks to the clarity of the enterprise message to customers, suppliers and partners. Unmanaged Web site sprawl undermines enterprise efforts to ensure compliance with privacy and confidentiality regulations and to present a clear, accurate message to stakeholders.


Prior to the analysis in this white paper, the state of understanding about the value of document assets had been abysmal. While still preliminary and subject to much improvement, this study has nonetheless found:

  • The value of documents, in their creation, access and use, can indeed be measured
  • The information contained within U.S. enterprise documents represents about a third of gross domestic product, or about $3.3 trillion annually
  • Some 25% of these expenditures lend themselves to actionable improvements
  • On the order of 10 billion documents are created annually in the U.S.
  • Corporate data doubles every six to eight months; 85% of this data is contained in documents
  • Some 90 to 97 percent of enterprises cannot estimate how much they spend on producing documents each year
  • Document creation is about two to three times more important, from an embedded cost standpoint, than document handling
  • It costs, on average, $350 to create a ‘typical’ document
  • The total potential benefit from practical improvements in document access and use to the U.S. economy is on the order of $800 billion annually, or about 8% of GDP
  • For the 1,000 largest U.S. firms, benefits from these improvements can approach nearly $250 million annually per firm
  • About three-quarters of these benefits arise from not re-creating the intellectual capital already invested in prior document creation
  • The remaining quarter of the benefits is due to reduced regulatory non-compliance or paperwork, or better competitiveness in obtaining solicited contracts and grants
  • $33 billion is wasted each year in re-finding previously found Web documents
  • Paperwork and regulatory improvements due to documents can save U.S. enterprises $120 billion each year
  • Lack of document access due to Web sprawl costs U.S. enterprises $22 billion each year
  • $8 billion in annual benefits is available from document improvements for competitive governmental grant and contract solicitations
  • These figures likely severely underestimate the benefits to enterprises from improved competitiveness, a factor not analyzed in this study
  • Documents today are where structured data was 15 years ago, at the nascent emergence of the data warehousing market.
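As a quick back-of-envelope check, the headline figures above are internally consistent. The sketch below uses a rounded $10 trillion GDP basis and is only an illustrative reconstruction, not a calculation taken from the white paper itself:

```python
# Back-of-envelope cross-check of the bullet-point figures (amounts in $billions).
gdp = 10_000                  # rounded U.S. GDP basis, ~$10 trillion
doc_value = 3_300             # documents represent about a third of GDP
assert abs(doc_value / gdp - 1 / 3) < 0.01

actionable = 0.25 * doc_value # the ~25% that lends itself to improvement
benefit = 800                 # the rounded total benefit estimate
assert actionable >= benefit  # ~$825B, rounded down to ~$800B
assert benefit / gdp == 0.08  # "about 8% of GDP"

reuse = 0.75 * benefit        # three-quarters from re-using intellectual capital
other = benefit - reuse      # remainder: compliance, paperwork, competitiveness
print(actionable, reuse, other)  # 825.0 600.0 200.0
```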

As noted throughout, there is a considerable need for additional research and data on document creation, use, costs and benefits. Additional technical endnotes are provided in the PDF version of the full paper.

[1] All sources and assumptions are fully documented in footnotes in the main body of this white paper; general assumptions used in multiple tables are provided in the Technical Endnotes.

[2] As quoted by Armando Garcia, vice president of content management at IBM; see

[3] Delphi Group, “Taxonomy & Content Classification Market Milestone Report,” Delphi Group White Paper, 2002. See

[4] Based on the 1999 to 2001 estimate changes in reference 34, Table 2-6.

[5] As initially published in Inc Magazine in 1993. Reference to this document may be found at:

[6] J. Snowdon, Documents – The Lifeblood of Your Business?, October 2003, 12 pp. The white paper may be found at:

[7] Xerox Global Services, Documents – An Opportunity for Cost Control and Business Transformation, 28 pp., 2003. The findings may be found at:

[8] A.T. Kearney, Network Publishing: Creating Value Through Digital Content, A.T. Kearney White Paper, April 2001, 32 pp. See

[9] S.A. Mohrman and D.L. Finegold, Strategies for the Knowledge Economy: From Rhetoric to Reality, 2000, University of Southern California study as supported by Korn/Ferry International, January 2000, 43 pp. See

[10] C. Moore, The Content Integration Imperative, Forrester Research Trends Report, March 26, 2004, 14 pp.

[11] D. Vesset, Worldwide Business Intelligence Forecast and Analysis, 2003-2007, International Data Corporation, June 2003, 18 pp. See

[12] M. Stonebraker and J. Hellerstein, “Content Integration for E-Business,” in ACM SIGMOD Proceedings, Santa Barbara, CA, pp. 552-560, May 2001.

[13] P. Lyman and H. Varian, “How Much Information, 2003,” retrieved from on December 1, 2003.

[14] U.S. Department of Commerce, Digital Economy 2003, Economic Statistics Administration, U.S. Dept. of Commerce, Washington, D.C., April 2004, 155 pp. See

[15] U.S. Department of Labor, “Occupation Employment and Wages, 2002,” Bureau of Labor Statistics. See

[16] U.S. Census Bureau, “Statistics of U.S. Businesses 2001.” See–.htm.

[17] Total office document counts were obtained on a page basis from reference 13, which used a value of 2% for the share of documents deserving to be archived. This formed the ‘low’ case, with the ‘high’ case using a 5% estimate (lower still than the 10% ENST estimate cited in reference 13). Total pages were converted to numbers of documents on the basis of an average 8 pp. per document; see Technical Endnotes for further discussion.

[18] See Technical Endnotes for the derivation of knowledge worker estimates.

[19] See Technical Endnotes for the derivation of content worker estimates.

[20] Citation sources and assumptions for this analysis are presented in the BrightPlanet white paper, “A Cure to IT Indigestion: Deep Content Federation,” BrightPlanet Corporation White Paper, June 2004, 31 pp.

[21] The “bottom up” cases are built from the number of assumed knowledge workers in Table 3. The “low” and “high” variants are based on a 5% archival value or 350 annual documents created per worker, respectively, applied to worker staff costs associated with document creation. The “Coopers & Lybrand” case is a strict updating of that study to 2002. The other two “C&L” cases use the updated per document costs from the C&L study; the first variant uses the annual documents created from the UC Berkeley study without archiving; the second variant uses the average of the “low” and “high” document numbers. See further Technical Endnotes for other key assumptions.

[22] The individual values in Table 5 range from about $140 to $740 per document, with the update of the Coopers & Lybrand study being about $270. Separate Delphi analysis by BrightPlanet has shown median values of about $550 per document.

[23] See http://

[24] See

[25] See

[26] As initially published in Inc Magazine in 1993. Reference to this document may be found at:

[27] Xerox Global Services, Documents – An Opportunity for Cost Control and Business Transformation, 28 pp., 2003. The findings may be found at: and J. Snowdon, Documents  – The Lifeblood of Your Business?, October 2003, 12 pp. The white paper may be found at:

[28] Optika Corporation. See

[29] Cap Ventures information, as cited in ZyLAB Technologies B.V., “Know the Cost of Filing Your Paper Documents,” Zylab White Paper, 2001. See

[30] ALL Associates Group, Inc., EDAM Sector Summary, April 2003, 2 pp.

[31] ALL Associates Group, 2002 EDAM Metrics for Major U.S. Companies.

[32] By the second quarter of 2004, this amount was $11.6 trillion. U.S. Federal Reserve Board, Flow of Funds Accounts for the United States, Sept. 16, 2004. See

[33] The bases for this table have the following assumptions: 1) the three cases for document handling are based on 5%, 10% and 15% of total enterprise revenues, per the earlier section; 2) the three cases for document creation are based on the ‘C&L Bottom-Up’, ‘Bottom-up  – High,’ and ‘Coopers & Lybrand’ items for the Low, Medium, and High columns, respectively, in Table 5; and 3) the document misfiling case draws on the same basis but using the total document estimates and misfiled percentages of 5%, 7.5% and 9% consistent with the previous discussion section. See further the Technical Endnotes.

[34] P. Lyman and H. Varian, “How Much Information, 2003,” retrieved from on December 1, 2003.

[35] Cap Ventures information, as cited in ZyLAB Technologies B.V., “Know the Cost of Filing Your Paper Documents,” Zylab White Paper, 2001. See

[36] As reported in,2049,7_2322,00.html.

[37] See, August 2, 2000.

[38] See, June 2, 2000.

[39] See

[40] M.K. Bergman, “The Deep Web: Surfacing Hidden Value,” BrightPlanet Corporation White Paper, June 2000. The most recent version of the study was published by the University of Michigan’s Journal of Electronic Publishing in July 2001. See

[41] This analysis assumes there were 1 million documents on the Web as of mid-1994.

[42] See, for example, C. Sherman and G. Price, The Invisible Web, Information Today, Inc., Medford, NJ, 2001, 439 pp., and P. Pedley, The Invisible Web: Searching the Hidden Parts of the Internet, Aslib-IMI, London, 2001, 138 pp.

[43] iProspect Corporation, iProspect Search Engine User Attitudes, April/May 2004, 28 pp. See

[44] As reported at

[45] Delphi Group, “Taxonomy & Content Classification Market Milestone Report,” Delphi Group White Paper, 2002. See

[46] C. Sherman and S. Feldman, “The High Cost of Not Finding Information,” International Data Corporation Report #29127, 11 pp., April 2003.

[47] M.E.D. Koenig, “Time Saved  – a Misleading Justification for KM,” KMWorld Magazine, Vol 11, Issue 5, May 2002. See

[48] G. Xu, A. Cockburn and B. McKenzie, Lost on the Web: An Introduction to Web Navigation Research,

[49] A. Cockburn and B. McKenzie, What Do Web Users Do? An Empirical Analysis of Web Use, 2000. See

[50] Tenth edition of GVU’s (graphics, visualization and usability) WWW User Survey, May 14, 1999. See

[51] C. Alvarado, J. Teevan, M. S. Ackerman and D. Karger, “Surviving the Information Explosion: How People Find Their Electronic Information,” AI Memo 2003-06, April 2003, 11 pp., Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory. See

[52] W. Jones, H. Bruce and S. Dumais, “Keeping Found Things Found on the Web,” See

[53] J. Teevan, “How People Re-find Information When the Web Changes,” AI Memo 2004-014, June 2004, 10 pp., Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory. See

[54] Library of Congress, “Preserving Our Digital Heritage: Plan for the National Digital Information Infrastructure and Preservation Program”, a Report to Congress by the U.S. Library of Congress, 2002, 66 pp. See

[55] Consistent with Table 8; this analysis also assumes the 25% search time commitment by employee and previous values from earlier tables.

[56] All subsequent references to ‘Large’ firms are based on the last column in Table 2, namely the 930 U.S. firms with more than 10,000 employees.

[57] Delphi Group, “Taxonomy & Content Classification Market Milestone Report,” Delphi Group White Paper, 2002. See

[58] S. Stearns, “Realize the Value Locked in Your Content Silos Without Breaking the Bank: Automated Classification Tools to Improve Information Discovery,” Inmagic White Paper, version 1.0, 2004. 10 pp. See

[59] P. Sonderegger, “Weave Search into the Browsing Experience,” Forrester Quick Take, Forrester Research, Inc., Feb. 18, 2004, 2 pp.

[60] P. Russom, “An Eye for the Needle,” Intelligent Enterprise, January 14, 2002. See

[61] This average was estimated by interpolating figures shown on Figure 8 in reference 68.

[62] This average was estimated by interpolating figures shown on the p.14 figure in Plumtree Corporation, “The Corporate Portal Market in 2002,” Plumtree Corp. White Paper, 27 pp. See

[63] The ‘low’ case represents the archival value in the middle bars with the addition that 30% of internal documents generated in the current year have a value to be shared for one year; the ‘high’ case represents the related archival value in the middle bars but with 40% of documents generated in that year having a value to be shared for one year.

[64] Analysis based on reference 68, with interpolations from Figure 16.

[65] M. Corcoran, “When Worlds Collide: Who Really Owns the Content,” AIIM Conference, New York, NY, March 10, 2004. See

[66] C. Phillips, “Stemming the Software Spending Spree,” Optimize Magazine, April 2002, Issue 6. See

[67] C. Moore, “The Content Integration Imperative,” Forrester Research, Inc., March 26, 2004, 14 pp.

[68] Plumtree Corporation, “The Corporate Portal Market in 2003,” Plumtree Corp. White Paper, 30 pp. See

[69] BEA Corporation, “Enterprise Portal Rationalization,” BEA Technical White Paper, 23 pp., 2004. See

[70] A. Aneja, C. Rowan and B. Brooksby, “Corporate Portal Framework for Transforming Content Chaos on Intranets,” Intel Technology Journal Q1, 2000. See

[71] J. Smeaton, “IBM’s Own Intranet: Saving Big Blue Millions,” Intranet Journal, Sept. 25, 2002. See

[72] See

[73] D. Voth, “Why Enterprise Portals are the Next Big Thing,” LTI Magazine, October 1, 2002. See

[74] A. Nyberg, “Is Everybody Happy?” CFO Magazine, November 01, 2002. See

[75] See

[76] Wall Street Journal, May 4, 2004, p. B1.

[77] Pers. comm., Jonathon Houk, Director of DHS IIAP Program, November 2003.

[78] These figures are based on Table 12 and the GDP figures from reference 32. Note, the analysis in this section also ignores business-to-business opportunities, which are also likely significant.

[79] Total grant and procurement amounts are derived from the U.S. Census Bureau, Consolidated Federal Funds Report (CFFR). See

[80] The number of awards and an analysis of which line items are competitively awarded was derived from the U.S. Census Bureau, Federal Assistance Award Data System (FAADS). See

[81] Specific categories of grants were analyzed based on the U.S. General Services Administration’s Catalog of Federal Domestic Assistance (CFDA) definitions to determine degree of competitiveness. Figures from the U.S. Department of Health and Human Services clearinghouse suggest that $350 billion in federal grants is available, but many of the specific grant opportunities are geared to state governments or individuals. That is why the figures shown indicate only $100 billion in competitive opportunities available directly to enterprises.

[82] U.S. General Services Administration, Federal Procurement Data System – NG (FY 2003 data). These sources are also the reference for the number of actions or successful awards. Due to discrepancies, these amounts were adjusted to conform with the totals in reference 79.

[83] Average competitive opportunities are derived by dividing the total award amount by category by the number of awards for that category.

[84] See This is the only summary reference for state and local information found. Splits between grants and contract procurements were adjusted based on the assumption that contract amounts differ at the non-federal level. Thus, while the split between grants and contract procurements is about 58%-42% in the federal sector, it is assumed to be 38%-62% at the state and local level.

[85] There may also be some double counting of state amounts due to transfers from the federal government. For example, in 2002, $360,534 million in direct transfers was made to states and localities from the federal government. U.S. Census Bureau, State and Local Government Finances by Level of Government and by State: 2001  – 02. See

[86] This analysis assumes that individual grant and contract awards are 80% of the amount shown at the federal level.

[87] To be listed requires a minimum of $10,000 in federal contracts; see

[88] See

[89] This header information is drawn from Table 12.

[90] Number of competing firms is increased from the federal contractor baseline by a factor of 1.30 to account for new state and local government contractors.

[91] Winning and losing proposal preparation costs are based on the empirical percentages from NIST (see reference 93), namely 0.85% and 0.59%, respectively, as a percent of total award amounts.

[92] The ‘Low’ basis for improvements is based on the finding of missing information discussed in a previous section; the ‘High’ basis reflects the difference between lowest-quartile and highest-quartile efforts spent on successful proposal preparation (see reference 93). The ‘Med’ basis is an intermediate value between these two.

[93] The increase in winning submissions is calculated as the number of winning proposals times the RFP improvement factor. In fact, because, all things being equal, the pool of contract dollars does not change, this amount merely represents a shift of winning awards from existing winners to new winners. In other words, total contract amounts are a zero-sum game, with proposal improvements by previous losers taken from the pool of previous winners.

[94] The analysis in Figure 2 indicates there is a power curve distribution of awards. The number of new winning proposals was applied to this curve to estimate the actual number of new firms winning awards; see Figure 2 for the power-curve fitting equation.

[95] Of course, better probabilities of winning competitive solicitations are a zero-sum game. New winners displace old winners. The real advantage in this arena goes to individual firms that better succeed at securing the existing pool of competitive funds. For such companies, the benefits can be the difference between profitability and loss, indeed survival.

[96] NFIB, Coping with Regulation, NFIB National Small Business Poll, Vol. 1, Issue 5. See

[97] NFIB, Paperwork and Record-keeping, NFIB National Small Business Poll, Vol. 3, Issue 5. See

[98] W. M. Crain & T. D. Hopkins, “The Impact of Regulatory Costs on Small Firms”, Report to the Small Business Administration, RFP No. SBAHQ-00-R-0027 (2001). The report’s 2000 year basis was updated to 2002 based on a 4% annual inflation factor.

[99] U.S. General Accounting Office, Paperwork Reduction Act: Record Increase in Agencies’ Burden Estimates, testimony of V. S. Rezendes, before the Subcommittee on Energy, Policy, Natural Resources and Regulatory Affairs, Committee on Government Reform, House of Representatives, April 11, 2003. See

[100] Office of Management and Budget, Managing Information Collection and Dissemination, Fiscal Year 2003, 198 pp. (Table A1). See

[101] NFIB, Paperwork and Record-keeping, NFIB National Small Business Poll, Vol. 3, Issue 5. See

[102] U.S. Small Business Administration, Final Report of the Small Business Paperwork Relief Task Force, June 27, 2003, 64 pp. See

[103] IRS, Civil Penalties Assessed and Abated, by Type of Penalty and Type of Tax (Table 26), September 20, 2002. See

[104] Except as footnoted, the figures below are drawn from the OMB Public Budget Tables. Civil penalties for crime victims have been excluded from these figures. See

[105] Obtained orders in SEC judicial and administrative proceedings requiring securities law violators to disgorge illegal profits of approximately $1.293 billion. Civil penalties ordered in SEC proceedings totaled approximately $101 million. See SEC

[106] T. L. Sansonetti, U.S. Department of Justice, testimony before the House Committee on the Judiciary, Subcommittee on Commercial and Administrative Law, March 9, 2004. See

[107] Argy, Wiltse & Robinson, Business Insights, Summer 2003, 4 pp. See

[108] Project on Government Oversight, Federal Contractor Misconduct: Failures of the Suspension and Debarment System, revised May 10, 2002. See

[109] Corporate Crime Reporter, Top 100 False Claims Act Settlements, December 30, 2003, 64 pp. See

[110] According to Alchemia Corporation testimony citing a PricewaterhouseCoopers study, FDA Hearing, Jan. 17, 2002. See 00d1538/00d-1538_mm00023_01_vol7.doc.

[111] For example, see

[112] From Table 17.

[113] From Table 16 after adjusting by total number of employees for all firms as shown on Table 2, and removal of total burdens as shown in Table 17.

[114] From Table 18.

[115] All ‘State and Local’ items are based on the ratio of state and local budgets in relation to the federal budget, excluding direct federal transfers, and applied to those factors for the federal sector. This ratio is 0.563. See

[116] All ‘Large Firm’ estimates are based on the ratio of large firm documents to total firm documents; see Table 2.

[117] For example, see

Posted:February 21, 2007

It’s Taken Too Many Years to Re-visit the ‘Deep Web’ Analysis

It’s been seven years since Thane Paulsen and I first coined the term ‘deep Web‘, perhaps representing a couple of full generational cycles for the Internet. What we knew then and what “Web surfers” did then has changed markedly. And, of course, our coining of the term and BrightPlanet’s publishing of the first quantitative study on the deep Web did nothing to create the phenomenon of dynamic content itself — we merely gave it a name and helped promote a bit of understanding within the general public of some powerful subterranean forces driving the nature and tectonics of the emerging Web.

The first public release of The Deep Web: Surfacing Hidden Value (courtesy of the Internet Archive’s Wayback Machine), in July 2000, opened with a bold claim:

BrightPlanet has uncovered the "deep" Web — a vast reservoir of Internet content that is 500 times larger than the known "surface" World Wide Web. What makes the discovery of the deep Web so significant is the quality of content found within. There are literally hundreds of billions of highly valuable documents hidden in searchable databases that cannot be retrieved by conventional search engines.

The day the study was released we needed to increase our servers nine-fold to meet news demand after CNN and then 300 major news outlets eventually picked up the story. By 2001 when the University of Michigan’s Journal of Electronic Publishing and its wonderful editor, Judith A. Turner, decided to give the topic renewed thrust, we were able to clean up the presentation and language quite a bit, but did little to actually update many of the statistics. (That version, in fact, is the one mostly cited today.)

Over the years there have been some books published and other estimates put forward, more often citing lower amounts in the deep Web than my original estimates, but, with one exception (see below), none of these were backed by new analysis. I was asked numerous times to update the study, and indeed had even begun collating new analysis at a couple of points, but the effort to complete the work was substantial and the effort always took a back seat to other duties and so was never completed.

Recent Updates and Criticisms

It was thus with some surprise and pleasure that I found reference yesterday to Dirk Lewandowski and Philipp Mayr’s 2006 paper, “Exploring the Academic Invisible Web” [Library Hi Tech 24(4), 529-539], which takes direct aim at the analysis in my original paper. (Actually, they worked from the 2001 JEP version, but, as noted, that analysis is virtually identical to the original 2000 version.) The authors pretty soundly criticize some of the methodology in my original paper and, for the most part, I agree with them.

My original analysis combined a manual evaluation of the “top 60” then-extant Web databases with an estimate of the total number of searchable databases (about 200,000, which they incorrectly cite as 100,000) and assessments of the mean size of each database based on a random sampling of those databases. Lewandowski and Mayr note conceptual flaws in the analysis at these levels:

  • First, using mean database size rather than median size overestimates total size,
  • Second, databases of questionable relevance to their interest in academic content (such as weather records from NOAA or Earth survey data from satellites) skewed my estimates upward, and
  • Third, my estimates were based on database size estimates (in GBs) and not internal record counts.

On the other hand, the authors also argued that my definition of deep content was too narrow, overlooking certain content types, such as PDFs, that are now routinely indexed and retrieved on the surface Web. We have also had uncertain but tangible growth in standard search engine content, with the last cited amounts at about 20 billion documents since Google and Yahoo! ceased their war of index numbers.

Though not really offering an alternative, full-blown analysis, the authors use the Gale Directory of Databases to derive an alternative estimate of perhaps 20 billion to 100 billion documents on the deep Web of interest for academic purposes, which they later seem to imply also needs to be discounted by further percentages to get at “word-oriented” and “full-text or bibliographic” records that they deem appropriate.

My Assessment of the Criticisms

As noted, I generally agree with these criticisms. For example, since the time of original publication, we have seen the power-law distribution of most things on the Internet, including popularity and traffic. Such heavy-tailed distributions will always produce overestimates when calculations are based on means rather than medians. I also think that meaningful content types were both overused (more database-like records) and underused (PDF content that is now routinely indexed) in my original analysis.
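The mean-versus-median point is easy to illustrate. The sketch below is purely illustrative (the lognormal parameters and sample are invented, not drawn from the study); it shows how scaling a heavy-tailed sample’s mean across an estimated 200,000 databases yields a total several-fold larger than scaling its median:

```python
import random

# Illustrative only: parameters are invented, not drawn from the study.
# A heavy-tailed (lognormal) population of database sizes shows why a
# mean-based total far exceeds a median-based total.
random.seed(42)
n_databases = 200_000         # the study's estimate of searchable databases
sample = sorted(random.lognormvariate(2, 2) for _ in range(10_001))

mean_size = sum(sample) / len(sample)
median_size = sample[len(sample) // 2]   # middle element of the sorted sample

total_by_mean = n_databases * mean_size
total_by_median = n_databases * median_size

# For a lognormal, mean/median = exp(sigma**2 / 2), about 7.4 here, so the
# mean-based total overstates the median-based one several-fold.
assert total_by_mean > 3 * total_by_median
```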

However, the authors’ third criticism is patently wrong, since three different methods were used to estimate internal database record counts and the average sizes of the records they contained. I would also have preferred a more careful reading of my actual paper by the authors, since there are numerous other citation errors and mischaracterizations.

On an epistemological level, I disagree with the authors’ use of the term “invisible Web”, a label that we tried hard in the paper to overturn and that is fading as a current term of art. Internet Tutorials (initially, SUNY at Albany Library) addresses this topic head-on, preferring “deep Web” on a number of compelling grounds, including that “there is no such thing as recorded information that is invisible. Some information may be more of a challenge to find than others, but this is not the same as invisibility.”

Finally, I am not persuaded by the authors’ simplistic, alternate partial estimate based solely on the Gale database, but they readily acknowledge not doing a full-blown analysis and having different objectives in mind. I agree with the authors in calling for a full, alternative analysis. I think we all agree that is a non-trivial undertaking and could itself be subject to newer methodological pitfalls.

So, What is the Quantitative Update?

Within a couple of years after the initial publication of my paper, I suspected the “500 times” claim for the greater size of the deep Web in comparison to what is discoverable by search engines may have been too high. Indeed, in later corporate literature and Powerpoint presentations, I backed off the initial 2000-2001 claims and began speaking in ranges from a “few times” to as high as “100 times” greater for the size of the deep Web.

In the last seven years, the only other quantitative study of its kind of which I am aware is documented in the paper, “Structured Databases on the Web: Observations and Implications,” conducted by Chang et al. in April 2004 and published in ACM SIGMOD Record, which estimated 330,000 deep Web sources with over 1.2 million query forms, reflecting a fast 3-7 times increase in the four years since my original paper. Unlike the Lewandowski and Mayr partial analysis, this effort and others by that group suggest an even larger deep Web than my initial estimates!

The truth is, we didn’t know then — and we don’t know now — what the actual size of the dynamic Web truly is. (And, aside from a sound bite, does it really matter? It is huge by any measure.) Heroic efforts such as these quantitative analyses or the still-more ambitious efforts of UC Berkeley’s SIMS on How Much Information? still have a role in helping to bound our understanding of information overload. As long as such studies gain news traction, they will be pursued. So, what might today’s story look like?

First, the methodological problems in my original analysis remain and (I believe today) resulted in overestimates. Another factor leading to a potential overestimate of the deep Web versus the surface Web today is that much “deep” content is being exposed to standard search engines, be it through Google Scholar, Yahoo!’s library relationships, individual site indexing and sharing such as through search appliances, and other “gray” factors we noted in our 2000-2001 studies. These factors, and certainly more, act to narrow the difference between exposed search engine content (the “surface Web”) and what we have termed the “deep Web.”

However, countering these facts are two newer trends. First, foreign language content is growing at much higher rates and is often under-sampled. Second, blogs and other democratized sources of content are exploding. What these trends may be doing to content balances is, frankly, anyone’s guess.

So, while awareness of the qualitative nature of Web content has grown tremendously in the past near-decade, our quantitative understanding remains weak. Improvements in technology and harvesting can now overcome earlier limits.

Perhaps there is another Ph.D. candidate or three out there that may want to tackle this question in a better (and more definitive) way. According to Chang and Cho in their paper, “Accessing the Web: From Search to Integration,” presented at the 2006 ACM SIGMOD International Conference on Management of Data in Chicago:

On the other hand, for the deep Web, while the proliferation of structured sources has promised unlimited possibilities for more precise and aggregated access, it has also presented new challenges for realizing large scale and dynamic information integration. These issues are in essence related to data management, in a large scale, and thus present novel problems and interesting opportunities for our research community.

Who knows? For the right researcher with the right methodology, there may be a Science or Nature paper in prospect!

Posted by AI3's author, Mike Bergman Posted on February 21, 2007 at 1:22 pm in Deep Web, Document Assets | Comments (7)
Posted:September 8, 2006

John Newton (co-founder of Documentum, now of Alfresco) puts a telling marker on the table in his recent post on the Commoditization of ECM. Though noting that the term "enterprise content management" did not even exist prior to 1998, he goes on to observe that the expansion of the definition of what was appropriate in ECM and the consolidation of the leading players occurred rapidly. He concludes that this process has commoditized the market, with competitive differentiation now based on market size rather than functionality. The platforms from the leading vendors — IBM, Microsoft and EMC-Documentum — can all manage documents, Web content, images, forms and records via basic library services, metadata management, search and retrieval, workflow, portal integration, and development kits.

If such consolidation and standardization of functionality were Newton’s only point, one could say, “ho, hum”; such has been true in all major enterprise software markets.

But, in my reading, he goes on to make two more important and fundamental points, both of which existing enterprise software vendors ignore at their peril.

Poor Foundations and Poor Performance

Newton notes that ECM applications are never bought based on the nature of their repositories, but an inefficient repository can result in the rejection of the system. He also acknowledges that ECM installations are costly to set up and maintain, difficult to use, poorly performing and lack essential automation (such as classification). (Kind of sounds like most enterprise software initiatives, doesn’t it?)

Indeed, I have repeatedly documented these gaps for virtually all large-scale document-centric or federated applications. The root cause — besides rampant poor interface designs — has been, in my opinion, poorly suited data management foundations. Relational and IR-based systems both perform poorly, for different reasons, in managing semi-structured data. This problem will not be solved by open source per se (see below), though there are some interesting options emerging from open source that may point the way to new alternatives, as well as incipient designs from BrightPlanet and others.

The Proprietary Killers of Open Standards and Open Source

Service-oriented architectures (SOA), the various Web services standards (WS-*), certain JSRs (170 and 283 in documents, but also 168 and others), plus all of the various XML and semantic derivatives are moving rapidly, with the very real prospect of “pluggability” and the substitution of various packages, components and applications across the entire enterprise stack.

As Newton notes of his own case at Alfresco, by aggregating these existing open source components the company was able to get its ECM product ready in less than one year:

  • Spring – A framework that provides the wiring of the repository and the tools to extend capabilities without rebuilding the repository (Aspect-Oriented Programming)
  • Hibernate – An object-relational mapping tool that stores content metadata in database and handles all the idiosyncrasies of each SQL dialect
  • Lucene – An internet-scale full-text and general purpose information retrieval engine that supports federated search, taxonomic, XML and full-text search
  • EHCache – Distributed intelligent caching of content and metadata in a loosely coupled environment
  • jBPM – A full featured enterprise production workflow and business process engine that includes BPEL4WS support
  • Chiba – A complete XForms interface that can be used for the configuration and management of the repository
  • Open Office – Provides a server-based and Linux-compatible transformation of MS Office based content
  • ImageMagick – Supports transformation and watermarking of images.
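To make the Lucene bullet above a little more concrete, here is a toy sketch (in Python, purely for illustration — Lucene itself is a Java library with a far richer API) of the core data structure behind any full-text engine: an inverted index mapping each term to the set of documents that contain it, with AND-style queries answered by intersecting posting sets. The class name and sample documents are invented for the example.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index illustrating Lucene-style full-text retrieval."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids containing it
        self.docs = {}                    # doc id -> original text

    def add(self, doc_id, text):
        # Naive tokenization: lowercase whitespace split (real engines
        # add stemming, stopwords, analyzers, positional postings, etc.)
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # AND semantics: intersect the posting sets of all query terms
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

idx = InvertedIndex()
idx.add(1, "enterprise content management")
idx.add(2, "full text search for enterprise documents")
idx.add(3, "workflow and business process management")
print(idx.search("enterprise management"))  # {1}
```

The intersection step is why full-text engines scale: queries touch only the (usually short) posting lists for the query terms, never the full document store.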

Moreover, the combination of these components led to an inherent architecture including pluggable modules, rules and templating engines, workflow and business process management, security, and other enterprise-level capabilities. I estimate that in prior times no proprietary vendor could have accomplished this with ten times the effort.

Similar Trends and Challenges in the Entire Enterprise Space

Newton is obviously well placed to comment on these trends within ECM. But similar trends can be seen in every major enterprise software space. For virtually every component one can imagine, there is a very capable open source offering. Many of the newer open source ventures are indeed centered around aggregating and integrating various open source components followed by either dual-source licensing or support services as the basis of their business models. At its most extreme, this trend has expanded to the whole process of enterprise application integration (EAI) itself through offerings such as LogicBlaze FUSE with its SOA-oriented standards and open source components. Initiatives such as SCA (service component architecture) will continue to fuel this trend.

So, enterprise software vendors, listen to your wake-up call. It is as if gold doubloons, pearls and jewels are lying all over the floor. If you and your developers don’t take the time to bend over and pick them up, someone else will. As Joel Mokyr has compellingly researched, the innovation of systems — how to integrate the pieces — can be every bit as important as the ‘Aha!’ discovery. Open source is now giving a whole new breed of bakers new ingredients for baking the cake.

Posted:April 4, 2006

Author's Note: An earlier blog series by me has now been turned into a PDF white paper under the auspices of BrightPlanet Corp. The citation for this effort is:

M.K. Bergman, “Why Are $800 Billion in Document Assets Wasted Annually?” BrightPlanet Corporation White Paper, April 2006, 27 pp.

Click here to obtain a PDF copy of the full report (27 pp., 203 KB).

It is a tragedy of no small import when $800 billion in readily available savings from creating, using and sharing documents is wasted in the United States each year. How can waste of such magnitude occur right under our noses? And how can this waste occur so silently, so insidiously, and so ubiquitously that none of us can see it?

This free white paper attempts to address these questions. This report is the result of a series of posts in response to an earlier white paper I authored under BrightPlanet sponsorship entitled, Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents. [1]

This full report integrates information from earlier blog postings.

Public and enterprise expenditures to address the wasted document assets problem remain comparatively small, with growth in those expenditures flat in comparison to the rate of document production. This report attempts to bring attention and focus to the various ways that technology, people, and process can bring real document savings to our collective pocketbooks.

[1] Michael K. Bergman, “Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents,” BrightPlanet Corporation White Paper, July 2005, 42 pp. The paper contains 80 references, 150 citations, and many data tables.

Posted by AI3's author, Mike Bergman Posted on April 4, 2006 at 10:29 am in Adaptive Information, Document Assets, Information Automation | Comments (0)