Posted: October 13, 2005

I just came across a VC blog pondering whether there is value to a start-up in operating in "stealth mode."  I've come, with some amusement, to the conclusion that all of this — particularly the "stealth" giveaway — is so much marketing hype.  When a start-up claims it is coming out of stealth mode, grab your wallet.

The most interesting and telling example I have of this is Rearden Commerce, announced in a breathy February 2005 InfoWorld cover story on the company and its founder and CEO, Patrick Grady.  The company has an obvious "in" with the magazine; in 2001 InfoWorld also carried a similar piece on Rearden's predecessor, Talaris Corporation.

According to a recent Business Week article, Rearden Commerce and its predecessors, reaching back to an earlier company called Gazoo founded in 1999, have raised $67 million in venture capital.  While it is laudable that the founder has reportedly put his own money into the venture, a company with such massive funding and a high-water mark of some 80 employees hardly qualifies as "stealth."

As early as 2001, with the same technology and business model, this same firm was pushing the "stealth" moniker.  According to an October 2001 press release:

 "The company, under its stealth name Gazoo, was selected by Red Herring magazine as one of its ‘Ten to Watch’ in 2001."  [emphasis added]

Even today, though no longer the active name, Talaris Corporation has close to 115,000 citations on Yahoo!  Notable VCs such as Charter Ventures, Foundation Capital, JAFCo and Empire Capital have backed the firm through its multiple incubations.

The Holmes Report, a marketing company, provides some insight into how the earlier Talaris was spun in 2001:

"The goal of the Talaris launch was to gain mindshare among key business and IT trade press and position Talaris as a ‘different kind of start-up’ with a multi-tiered business model, seasoned executive team and tested product offering."

The Holmes Report documents the analyst firms and leading journals and newspapers to which Talaris made outreach.  Actually, this outreach is pretty impressive.  Good companies do the same all the time, and that is to be lauded.  What is to be questioned, however, is how many "stealths" a cat can have.  Methinks this one is one too many.

"Stealth" thus appears to be code for an existing company of some duration that has had disappointing traction and now has new financing, a new name, new positioning, or all of the above.  So, interested in a start-up that just came out of stealth mode?  Let me humbly suggest standard due diligence.

Posted by AI3's author, Mike Bergman, on October 13, 2005 at 9:19 am in Software and Venture Capital | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/143/stealth-mode-grab-your-wallet/
The URI to trackback this post is: https://www.mkbergman.com/143/stealth-mode-grab-your-wallet/trackback/
Posted: October 11, 2005

BrightPlanet has announced a major upgrade to its Deep Query Manager knowledge worker document platform.  According to its press release, the new version achieves extreme scalability and broad internationalization and file format support, among other enhancements.  The DQM has added the ability to harvest and process up to 140 different foreign languages in more than 370 file formats plus new content export and system administration features.  The company also claims the new distributed architecture allows scalability into hundreds or thousands of users across multiple machines with the ability to handle incremental growth and expansions.

According to the company:

The Deep Query Manager is a content discovery, harvesting, management and analysis platform used by knowledge workers to collaborate across the enterprise. It can access any document content — inside or outside the enterprise — with strengths in deep content harvesting from more than 70,000 unique searchable databases and automated techniques for the analyst to add new ones at will. The DQM’s differencing engine supports monitoring and tracking, among the product’s other powerful project management, data mining, reporting and analysis capabilities.
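The "differencing engine" idea, detecting what has changed between successive harvests of the same source, can be sketched in a few lines. This is purely an illustrative example; the function and sample data below are mine, not BrightPlanet's:

```python
import difflib

def diff_harvests(old_text: str, new_text: str) -> list[str]:
    """Return the lines added or removed between two harvests of a page."""
    changes = []
    for line in difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""
    ):
        # Skip the "---"/"+++" file headers; keep only real +/- change lines
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            changes.append(line)
    return changes

old = "Acme Corp.\nRevenue: $10M\nStaff: 50"
new = "Acme Corp.\nRevenue: $12M\nStaff: 50"
print(diff_harvests(old, new))  # ['-Revenue: $10M', '+Revenue: $12M']
```

A monitoring feature would run such a comparison on a schedule and alert the analyst only when the change list is non-empty.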

According to Paul de la Garza of the St. Petersburg Times, the Special Operations Command (SOCom) based out of MacDill Air Force Base in Tampa Bay will be opening a new Joint Intelligence Operations Center (JIOC) in St. Petersburg to process open source intelligence (OSINT) in support of the global war on terrorism.

The Center was announced by Rep. C.W. Bill Young, R-Indian Shores (FL), on October 7.  Rep. Young said that Blackbird Technologies of Virginia was awarded the $27-million contract to operate the Center, which will employ 60 people conducting OSINT.  Young, chairman of the Defense Appropriations Subcommittee, said the Center will open soon but declined to offer more details because of the classified nature of the facility.

According to de la Garza, SOCom has played a pivotal role in the war on terror since 9/11, with an increase in budget from $3.8-billion to $6.6-billion and an increase in staff from 6,000 to 51,441. In March, President Bush signed a directive that puts SOCom in charge of "synchronizing" the war on terror.

Posted by AI3's author, Mike Bergman, on October 11, 2005 at 9:51 am in OSINT (open source intel) | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/141/socom-awards-new-osint-center/
The URI to trackback this post is: https://www.mkbergman.com/141/socom-awards-new-osint-center/trackback/
Posted: October 6, 2005

Collaboration is important.  BrightPlanet‘s earlier research paper on the waste associated with enterprise document use (or lack thereof) indicated that $690 billion a year could be reclaimed by U.S. enterprises from better sharing of information alone. That represents 88% of the total $780 billion wasted annually.

The issue of poor document use within the organization is certainly not solely a technological one; it is likely due more to cultural and people issues, not to mention process. At BrightPlanet, we have been making a concerted “document as you go” commitment among our developers and support people, and have worked hard to put Wiki and other collaboration tools in place to minimize friction.

But friction remains, often stubbornly so. At heart, the waste and misuse of document assets within organizations arises from a complex set of these people, process and technology issues.

Dave Pollard, the inveterate blogger on KM and other issues, provided a listing of 16 reasons ‘Why We Don’t Share Stuff’ on September 19.[1] That thoughtful posting received a hailstorm of responses, which caused Dave to update the listing to 23 reasons on September 29 under a broader post called ‘Knowledge Sharing & Collaboration 2015’ (a later post upped the count to 24). (BTW, my own additions below bring the number to 40, though high counts are beside the point.) This is great stuff, and nearly complete grist for laying out the reasons — some major and some minor — why collaboration is often difficult.

I have taken these reasons, plus some others I’ve added of my own or from other sources, and have attempted to cluster them into the various categories below.[2] Granted, these assignments are arbitrary, but they are also telling as the concluding sections discuss.

People, Behavior and Psychology

These are possible reasons why collaboration fails due to people, behavior or psychology. They represent the majority (56%) of the reasons proffered by Pollard:

  • People find it easier and more satisfying to reinvent the wheel than re-use other people’s ‘stuff’ (*)
  • People only accept and internalize information that fits with their mental models and frames (Lakoff’s rule) (*)
  • Some modest people underestimate the value of what they know so they don’t share (*)
  • We all learn differently (some by reading, some by listening, some by writing down, some by hands-on), and people won’t internalize information that isn’t in a format attuned to how they learn (one size training doesn’t fit all) (*)
  • People grasp graphic information more easily than text, and understand information conveyed through stories better than information presented analytically (we learn by analogy, and images and stories are better analogies to our real-life experiences than analyses are) (*)
  • People cannot readily differentiate useful information from useless information (* split)
  • Most people want friends and even strangers to succeed, and enemies to fail; this has a bearing on their information-sharing behaviour (office politics bites back) (*)
  • People are averse to sharing information orally, and even more averse to sharing it in written form, if they perceive any risk of it being misused or misinterpreted (the better safe than sorry principle) (*)
  • People don’t take care of shared information resources (Tragedy of the Commons again) (*)
  • People seek out like minds who entrench their own thinking (leads to groupthink) (**)
  • Introverts are more comfortable wasting time looking for information rather than just asking (sometimes it’s just more fun spending 5 hours on secondary research, or doing the graphics for your powerpoint deck by trial and error, than getting your assistant to do it for you in 5 minutes) (**)
  • People won’t (or can’t) internalize information until they need it or recognize its value (most notably, information in e-newsletters is rarely absorbed because it rarely arrives just at the moment it’s needed) (**)
  • People don’t know what others who they meet know, that they could benefit from knowing (a variant on the old “don’t know what we don’t know” — “we don’t know what we don’t know that they do”) (**)
  • If important news is withheld or sugar-coated, people will ‘fill in the blanks’ with an ‘anti-story’ worse than the truth (**)
  • Experts often speak in jargon or “expert speak.” They don’t know they aren’t communicating, and non-experts are afraid to ask (***).

Management and Organization

These are possible reasons why collaboration fails due to managerial or organizational limits. They represent about one-fifth (20%) of the reasons proffered by Pollard:

  • Bad news rarely travels upwards in organizations (shoot the messenger, and if you do tell the boss bad news, better have a plan to fix it already in motion) (*)
  • People share information generously peer-to-peer, but begrudgingly upwards (“more paperwork for the boss”), and sparingly downwards (“need to know”) in organizational hierarchy — it’s all about trust (*)
  • Managers are generally reluctant to admit they don’t know, or don’t understand, something (leads to oversimplifying, and rash decision-making) (*)
  • Internal competition can mitigate against information sharing (if you reward individuals for outperforming peers, they won’t share what they know with peers) (*)
  • The people with the most valuable knowledge have the least time to share it (**)
  • Management does not generally appreciate its role in overcoming psychology and personal behaviors that limit collaboration (***)
  • Management does not appreciate the tremendous expense, revenue, profitability and competitiveness implications of the lack of collaboration (***)
  • Management does not know training, incentive, process, technology or other techniques to overcome limits to collaboration (***)
  • Earlier organization attempts with CIOs, CKOs, etc., have not been sustained or were the wrong model for internalizing these needs within the organization (***)
  • Organizational job titles still favor managerial roles over expertise in status and reward (***)
  • Hiring often inadequately stresses communication and collaboration skills, and in-house training is not provided where those skills are lacking (***).

Technology, Process and Training

These are possible reasons why collaboration fails due to technology, process or training. They represent about one-eighth (12%) of the reasons proffered by Pollard; recall, though, that his original premise concerned human or psychological reasons, so it is not surprising this category is less represented:

  • People know more than they can tell (some experience you just have to show) & tell more than they can write down (composing takes a lot of time) (Snowden’s rule) (*)
  • People feel overwhelmed with content volume and complex tools (info overload, and poverty of imagination) (* split)
  • People will find ways to work around imposed tools, processes and other resources that they don’t like or want to use (and then deny it if they’re called to account for it) (**)
  • Employees lack the appreciation for the importance of collaboration to the success of their employer and their job (***)
  • Most means for “recording” the raw data and information for collaboration have too much “friction” (***)
  • There needs to be clear divisions between “capturing” knowledge and information and “packaging” it for internal or external consumption (***)
  • Single-source publication techniques suck (***)
  • Testing, screening, vetting and adopting new technology or process advances are generally lacking (***).

Cost, Rewards and Incentives

These are possible reasons why collaboration fails due to cost, reward and incentive structures, again about one-eighth (12%) of the reasons proffered by Pollard. Again, given that his original premise concerned human or psychological reasons, it is not surprising this category is less represented:

  • The true cost of acquiring information (time wasted looking for it) and the cost of not knowing (Katrina, 9/11, Poultry Flu etc.) are both greatly underestimated in most organizations (*)
  • Rewards for sharing knowledge don’t work for long (*)
  • People value information they paid for more highly than that they get free from their own people (thus the existence of the consulting industry) (from James Governor) (**)
  • Reduced-cost document solutions are not sought out (***)
  • Performance pay is not linked to collaboration goals (***).

Insights and Quibbles

Dave and his blog respondents provide some 25 reasons, closer to 40 when my own are added, that represent a pretty complete compendium of “why collaboration fails.” Though I could pick out individual items to praise or criticize, that would miss the point.

The objective is neither to collect the largest number of such factors nor to worry terribly about how they are organized. But there are some interesting insights.

Clearly, human behavior and psychology provide the baseline for looking at these questions. Management’s role is to provide the organizational structure, incentives, training, pay and recognition that reward the collaborative behavior it desires and needs. Actually, management’s challenge is even greater than that, since in most cases upper-level managers don’t yet have a clue as to the importance of the underlying information or of collaboration around it.

As in years past, leadership on these questions needs to come from the top. The disappointments of earlier CIO and CKO positions need to be looked at closely. The idea behind those positions was not wrong; what was wrong was the execution and leadership commitment.

Organizations of all types and natures have figured out how to train and incentivize their employees for difficult duties ranging from war to first response to discretion. Putting in place reward and training programs to encourage collaboration, despite today’s piss-poor performance, should not be so difficult in this light.

I think Dave brings many valuable insights into such areas as people reinventing the wheel because they like creative design, collaboration repositories being at risk without some sense of ownership, people being afraid to look stupid, and some people communicating better orally than in written form. These are, in fact, truisms of human diversity and differing skills. I believe firmly that if organizations purposefully seek to understand these factors, they can still design reward, training and recognition regimens to shape the behavior they desire.

The real problem in the question of collaboration within the enterprise begins at the top. If the organization is not aware and geared to address human nature with appropriate training and rewards, it will continue to see the poor performance around collaboration that has characterized this issue for decades.

NOTE: This posting is part of a series looking at why document assets are so poorly utilized within enterprises.  The magnitude of this problem was first documented in a BrightPlanet white paper by the author titled, Untapped Assets:  The $3 Trillion Value of U.S. Enterprise Documents.  An open question in that paper was why more than $800 billion per year in the U.S. alone is wasted and available for improvements, but enterprise expenditures to address this problem remain comparatively small and with flat growth in comparison to the rate of document production.  This series is investigating the various technology, people, and process reasons for the lack of attention to this problem.

[1] There have been some other interesting treatments of barriers to collaboration including that by Carol Kinsey Goman’s Five reasons people don’t tell what they know and Jack Vinson’s Barriers to knowledge sharing.

[2] Pollard’s initial 16 reasons are shown with a single symbol (*); the next 8 additions with a double symbol (**). All remaining reasons added by me have three symbols (***).

Posted by AI3's author, Mike Bergman, on October 6, 2005 at 1:41 pm in Adaptive Information, Document Assets, Information Automation | Comments (5)
The URI link reference to this post is: https://www.mkbergman.com/135/why-are-800-billion-in-document-assets-wasted-annually-ii-barriers-to-collaboration/
The URI to trackback this post is: https://www.mkbergman.com/135/why-are-800-billion-in-document-assets-wasted-annually-ii-barriers-to-collaboration/trackback/
Posted: October 3, 2005

A recent column (Sept. 22) by David Wessel in the Wall Street Journal argues that “Better Information Isn’t Always Beneficial.” His major arguments can be summarized as follows:

  1. Having more information available is generally good
  2. Having some information available is clearly bad (to terrorists, privacy violations)
  3. However, other information is also bad because it may advance the private (profit) interest but not that of society, and
  4. Computers are worsening Argument #3 by reducing the cost of processing information.

Wessel claims that computers are removing limits to information processing, forcing society to wrestle with practical issues of inequity that seemed only theoretical a generation ago. Though this article is certainly thought-provoking, and therefore of value, it is wrong on epistemological, logical, and real-world grounds.

Epistemology

All of us at times confuse data or content with the concept of information when we describe current circumstances with terms such as “information overload” or “infoglut.” This confusion often extends to the economics literature in how it deals with the value of “information.” Most researchers or analysts in knowledge management acknowledge this hierarchy of value in the knowledge chain:

data (or content) » information » knowledge (actionable)

This progression also represents a narrowing flow or ‘staging’ of volume. The amount of total data always exceeds information; only a portion of available information is useful for knowledge or action.

Rather than provide “definitions” of these terms, which are not universally agreed, let’s use the example of searching on Google to illustrate these concepts:

  • Data — the literally billions of documents contained within Google’s search index
  • Information — subsets of this data appropriate to the need or topic at hand. While this sounds straightforward, depending on the user’s query and its precision, the “information” returned from a search may have a much lower or higher percentage of useful information value, as well as a great range in the total number of results
  • Knowledge — Google obviously does not provide knowledge per se, but, depending on user review of the information from more-or-less precise search queries and information duplication or not, knowledge may come about through inspection and learning of this information.

The concept of staging and processing is highly useful here. For example, in the context of a purposeful document repository, initial searches against Google and other content aggregation sites — even with a query or topic basis — could act to populate that repository with data, which would then need to be mined further for useful information and then evaluated for supplying knowledge. Computers always act upon data, whether global as with Google or local as with a dedicated repository, and whether useful information is produced or not.
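A minimal sketch can make this staging concrete. The documents and the naive keyword filter below are invented for illustration; a real system would use far more precise querying:

```python
# Hypothetical harvested documents (the "data" stage)
DATA = [
    "Quarterly revenue for Acme rose 12% on strong widget sales.",
    "Top ten recipes for autumn soups.",
    "Acme announces a new widget factory in Ohio.",
    "Celebrity gossip roundup for the week.",
]

def to_information(data: list[str], topic_terms: set[str]) -> list[str]:
    """Stage 1: keep only documents relevant to the topic at hand."""
    return [doc for doc in data if any(term in doc.lower() for term in topic_terms)]

def to_knowledge(information: list[str]) -> str:
    """Stage 2: a human or analyst tool distills information into an
    actionable conclusion; a placeholder summary stands in here."""
    return f"{len(information)} relevant documents support a decision on Acme."

info = to_information(DATA, {"acme", "widget"})  # 4 documents narrow to 2
print(to_knowledge(info))                        # 2 documents narrow to 1 conclusion
```

The volumes narrow at each stage, which is exactly the data » information » knowledge progression described above.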

Wessel, and indeed most economists, commingle all three terms in their arguments and logic. Missing these key distinctions leads to fuzzy thinking.

A Philosophical or Political Polemic?

First, I will not take issue with Wessel’s first two arguments above. Rather, I’d like to look at the question of Argument #3 that some information is “bad” because it delivers private vs. societal value. His two economist references in the piece are to Arrow and Hirshleifer. As Wessel cites Hirshleifer:

“The contrast between the private profitability and the social uselessness of foreknowledge may seem surprising,” the late economist Jack Hirshleifer wrote in 1971. But there are instances, he argued, where “the community as a whole obtains no benefit … from either the acquisition or the dissemination (by resale or otherwise) of private foreknowledge.”

Yet Hirshleifer had a very specific meaning of “private foreknowledge,” likely not in keeping with the Wessel arguments. The Hirshleifer[1] reference deals entirely with speculative investments and the “awareness” or not (knowledge; perfect information) of differing economic players. According to the academic reviewer Morrison[2]:

In Hirshleifer’s terms, ‘private foreknowledge’ is information used to identify pricing errors after resource allocation is fixed. Because it results in a pure wealth transfer but is costly to produce, it reduces social surplus. . . . As opposed to private foreknowledge, ‘discovery information’ is produced prior to the time resource allocation is fixed, and because it positively affects resource allocation it generally increases social surplus. But even discovery information can be overproduced because optimal expenditures on discovery information will inevitably be subject to pricing errors that can be exploited by those who gather superior information. In cases of both fixed and variable resource allocation, then, excess search has the potential to occur, and private parties will adopt institutional arrangements to avoid the associated losses.

Hmmm. What? Is this actually in keeping with the Wessel arguments?

Wessel poses a number of examples where he maintains the disconnect between private gain and societal benefit occurs. The examples he cites are:

  • Assessing judges as to how they might rule on patent infringement cases
  • Screening software for use in jury selections
  • Demographic and voting information for gerrymandering U.S. congressional districts
  • Weather insurance for crops production.

These examples are what Wessel calls “the sort of information that Nobel laureate Kenneth Arrow labeled ‘socially useless but privately valuable.’ It doesn’t help the economy produce more goods or services. It creates nothing of beauty or pleasure. It simply helps someone get a bigger slice of the pie.”

According to Oldrich Kyn, an economics professor emeritus from Boston University, Joseph Stiglitz, another Nobel laureate, took exception to Arrow’s thesis regarding information in the areas of market socialism and neoclassical economics as shown by these Stiglitz quote excerpts:

The idea of market socialism has had a strong influence over economists: it seemed to hold open the possibility that one could attain the virtues of the market system–economic efficiency (Pareto optimality)–without the seeming vices that were seen to arise from private property.

The fundamental problem with [the Arrow–Debreu model] is that it fails to take into account . . .  the absence of perfect information–and the costs of information–as well as the absence of certain key risk markets . . .

The view of economics encapsulated in the Arrow–Debreu framework . . . is what I call ‘engineering economics’ . . .  economics consisted of solving maximization problems . . . The central point is that in that model there is not a flow of new information into the economy, so that the question of the efficiency with which the new information is processed–or the incentives that individuals have for acquiring information–is never assessed. . .  the fundamental theorems of welfare economics have absolutely nothing to say about . . .  whether the expenditures on information acquisition and dissemination– is, in any sense, efficient.

Stiglitz in his own online autobiography states: “The standard competitive market equilibrium model had failed to recognize the complexity of the information problem facing the economy – just as the socialists had. Their view of decentralization was similarly oversimplified.” Grossman and Stiglitz[3] more broadly observe “that perfectly informative financial markets are impossible and . . .  the informativeness of prices is inversely related to the cost of information.”

I am no economist, but reading the original papers suggests to me a narrower and more theoretical focus than Wessel’s arguments claim. Indeed, the role of “information” is both central to and nuanced within current economic theory, the understanding of which has progressed tremendously in the thirty years since Wessel’s original citations. By framing the question as one of private (profit) versus societal good, Wessel invokes an argument based on political philosophy, one seemingly “endorsed” by Arrow as a Nobel laureate. Yet as Eli Rabett commented on the Knowledge Crumbs Web site, “[the Wessel thesis] is a communitarian argument which has sent Ayn Rand, Alan Greenspan, Newt Gingrich and Grover Norquist to spinning in their graves.”

Logical Fallacies

Even if these philosophical differences could be reconciled, there are other logical fallacies in the Wessel piece.

In the case of assessing the performance of patent judges by crunching information that can now be sold cost-effectively to all participants, Wessel asks, “But does it increase the chances that the judge will come to a just decision?” The logical fallacies here are manifest:

  • Is the only societal benefit one of having the judge come to a just decision or, also potentially, society learning about judicial prejudices singly or collectively or setting new standards in evaluating or confirming judicial candidates?
  • No new information has been created by the computer. Rich litigants could have earlier gone through expensive evaluations. Doesn’t cost-effective information democratize this information?
  • Is not broad information availability an example of desired transparency as cited by Knowledge Crumbs?

Wessel raises another case of farmers now possibly being able to buy accurate weather forecasts. But he posits a resulting case where the total amount of food available is unchanged and insurance would no longer be necessary. Yet, as Mark Bahner points out, this has the logical fallacies of:

  • The amount of food available would NOT be “unchanged” if farmers knew for certain what the weather was going to be. Social and private benefits would also accrue from, for example, applying fertilizers when needed without wasteful runoffs
  • Weather knowledge would firstly never be certain and other uncertainties (pests, global factors, etc.) would also exist. Farmers understand uncertainty and would continue to hedge through futures or other forms of insurance or risk management.

The real logical fallacy is the assumption of perfect information and complete reduction of uncertainty. No matter how much data or how fast the computers, these factors will never be fully resolved.

Practical Role of the Computer

Wessel concludes that by reducing the cost of information so much, computers intensify the information problem of private gain v. societal benefit. He uses Arrow again to pose the strawman that, “Thirty years ago, Mr. Arrow said the fundamental problem for companies trying to get and use information for profit was ‘the limitation on the ability of any individual to process information.'”

But as Knowledge Crumbs notes, computers may be able to process more data than an individual, but they are still limited and always will be. Moreover there will remain the Knowledge Problem and the SNAFU principle to make sure that humans are not augmented perfectly by their computers. Knowledge Crumbs concludes:

The issue with knowledge isn’t that there is too much, it is that we lack methods to process it in a timely fashion, and processing introduces defects that sometimes are harmful. When data is reduced or summarized something is lost as well as gained.

The speed of crunching data or computer processing power is not the issue. Use and misuse of information will continue to exist, as it has since mythologies were passed by verbal allegory by firelight.

Importance to Document Assets

So, why does such a flawed polemic get published in a reputable source like the Wall Street Journal? There are real concerns and anxieties underlying this Wessel piece and it is always useful to stimulate thought and dialog. But, like all “information” that the piece itself worries over, it must be subjected to scrutiny, testing and acceptance before it can become the basis for action. The failure of the Wessel piece to pass these thresholds itself negates its own central arguments.

Better that our pundits should focus on things that can be improved, such as why so much available information is duplicated, misused or overlooked. These problems cost the economy plenty, totally swamping Wessel’s putative private benefits even were they correct.

Let’s focus on the real benefits available today through computers and information to improve society’s welfare. Setting up false specters of computer processing serving private greed only takes our eye off the ball.

NOTE: This posting is part of a series looking at why document assets are so poorly utilized within enterprises.  The magnitude of this problem was first documented in a BrightPlanet white paper by the author titled, Untapped Assets:  The $3 Trillion Value of U.S. Enterprise Documents.  An open question in that paper was why nearly $800 billion per year in the U.S. alone is wasted and available for improvements, but enterprise expenditures to address this problem remain comparatively small and with flat growth in comparison to the rate of document production.  This series is investigating the various technology, people, and process reasons for the lack of attention to this problem.

[1] J. Hirshleifer, “The Private and Social Value of Information and the Reward to Inventive Activity,” American Economic Review, Vol. 61, pp. 561-574, 1971.

[2] A. D. Morrison, “Competition and Information Production in Market Maker Models,” forthcoming in the Journal of Business Finance and Accounting, Blackwell Publishing Ltd., Malden, MA. See the 20 pp. online version, http://users.ox.ac.uk/~bras0541/12_jbfa5709.pdf#search=’Hirshleifer%20private%20foreknowledge

[3] S.J. Grossman and J.E. Stiglitz, “On the Impossibility of Informationally Efficient Markets,” American Economic Review, Vol. 70, No. 3, pp. 393-403, June 1980.

Posted by AI3's author, Mike Bergman, on October 3, 2005 at 9:14 am in Adaptive Information, Document Assets, Information Automation | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/130/why-are-800-billion-in-document-assets-wasted-annually-i-is-private-information-bad/
The URI to trackback this post is: https://www.mkbergman.com/130/why-are-800-billion-in-document-assets-wasted-annually-i-is-private-information-bad/trackback/