Posted: September 27, 2005

Though it has been out since June, I just today came across an interview with Tim Berners-Lee on the Semantic Web, conducted by Andrew Updegrove for the Consortium Standards Bulletin.  I highly recommend this piece for anyone interested in an insider’s view of the creation and use of the semantic Web.  Here are some highlights.  All are direct quotes from Berners-Lee.

Here are some excerpts relating to the vision of the semantic Web:

The goal of the Semantic Web initiative is to create a universal medium for the exchange of data where data can be shared and processed by automated tools as well as by people. The Semantic Web is designed to smoothly interconnect personal information management, enterprise application integration, and the global sharing of commercial, scientific and cultural data.

Many large-scale benefits are, not surprisingly, evident for enterprise level applications. The benefits of being able to reuse and repurpose information inside the enterprise include both cost savings and new discoveries. And of course, more usable data brings about a new wave of software development for data analysis, visualization, smart catalogues… not to mention new applications development. The point of the Semantic Web is in the potential for new uses of data on the Web, much of which we haven’t discovered yet.

As for the status of the initiative, Berners-Lee directly addresses some critics by emphasizing the importance of automated tools rather than manual author tagging:

It’s not about people encoding web pages; it’s about applications generating machine-readable data on an entirely different scale. Were the Semantic Web to be enacted on a page-by-page basis in this era of fully functional databases and content management systems on the Web, we would never get there. What is happening is that more applications — authoring tools, database technologies, and enterprise-level applications — are using the initial W3C Semantic Web standards for description (RDF) and ontologies (OWL).

Berners-Lee goes on to say:

One of the criticisms I hear most often is, “The Semantic Web doesn’t do anything for me I can’t do with XML”. This is a typical response of someone who is very used to programming things in XML, and never has tried to integrate things across large expanses of an organization, at short notice, with no further programming. One IT professional who made that comment around four years ago said a year ago words to the effect, “After spending three years organizing my XML until I had a heap of home-made programs to keep track of the relationships between different schemas, I suddenly realized why RDF had been designed. Now I use RDF and it’s all so simple — but if I hadn’t had three years of XML hell, I wouldn’t ever have understood.”

Many of the criticisms of the Semantic Web seem (to me at least!) to be the result of not having understood the philosophy of how it works. A critical part, perhaps not obvious from the specs, is the way different communities of practice develop independently, bottom up, and then can connect link by link, like patches sewn together at the edges. So some criticize the Semantic Web for being a (clearly impossible) attempt to make a complete top-down ontology of everything.

Others criticize the Semantic Web because they think that everything in the whole Semantic Web will have to be consistent, which is of course impossible. In fact, the only things I need to be consistent are the bits of the Semantic Web I am using to solve my current problem.

The web-like nature of the Semantic Web sometimes comes under criticism. People want to treat it as a big XML document tree so that they can use XML tools on it, when in fact it is a web, not a tree. A semantic tree just doesn’t scale, because each person would have their own view of where the root would have to be, and which way the sap should flow in each branch. Only webs can be merged together in arbitrary ways. I think I agree with criticisms of the RDF/XML syntax that it isn’t very easy to read. This raises the entry threshold. That’s why we wrote N3 and the N3 tutorial, to get newcomers on board with the simplicity of the concepts, without the complexity of that serialization.
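Berners-Lee’s point about merging is easy to see if you model RDF statements as simple subject-predicate-object triples. Here is a minimal sketch in Python (the `ex:` names below are hypothetical, chosen only for illustration):

```python
# Two independently developed "patches" of RDF-like data, each a set of
# (subject, predicate, object) triples. Unlike document trees, which
# force agreement on a single root, sets of triples merge by simple
# union, with no coordination between the two authors.
patch_a = {
    ("ex:TimBL", "ex:created", "ex:WorldWideWeb"),
    ("ex:WorldWideWeb", "ex:hasStandard", "ex:RDF"),
}
patch_b = {
    ("ex:RDF", "ex:publishedBy", "ex:W3C"),
    ("ex:TimBL", "ex:directs", "ex:W3C"),
}

# Merging two webs of triples is just set union; any overlapping
# statements deduplicate automatically.
merged = patch_a | patch_b

print(len(merged))  # 4 distinct statements
```

The same union works for any number of patches, in any order, which is exactly the “sewn together at the edges” behavior a tree cannot offer.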

Among the other insights in the interview are that early adoption is likely to occur internally, by enterprises on their intranets; that there will definitely be first-mover advantages for software applications that embrace RDF and OWL; and that a more widely embraced rules-based language (think of a successor to Prolog) may well emerge.

Highly recommended reading!

Posted by AI3's author, Mike Bergman Posted on September 27, 2005 at 9:33 am in Adaptive Information, Semantic Web | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/128/tbl-on-the-semantic-web/
The URI to trackback this post is: http://www.mkbergman.com/128/tbl-on-the-semantic-web/trackback/
Posted: September 19, 2005

Author’s Note: I am pleased to offer this comprehensive guide, prepared from my “Preparing to Blog” series, which covers the first four months of learning in creating this AI3 blog site.

The citation for this effort is:

Michael K. Bergman, “Comprehensive Guide to a Professional Blog Site:  A WordPress Example,” A Guide Book from the AI3 Blog Site, September 2005, 80 pp.

Download PDF: Click here to obtain a copy of this free guide (80 pp, 1016 K)

Gone beyond Blogger? Want to really be aggressive in functionality and scope of content for your personal, professional or corporate blog? If so, this Comprehensive Guide to a Professional Blog Site may be useful to you.

This Guide is the result of 350 hours of learning and experimentation to test the boundaries of blog functionality, scope and capabilities. I myself began this process as a total newbie about six months ago — which likely shows in gaps and naïveté — but I have been aggressive in documenting as I have gone. The learning from my professional blog journey, still ongoing, is reflected in these pages.

This Guide addresses about 100 individual “how to” blogging topics and lessons, all geared to the content-focused, rather than occasional, blogger. More than 140 citations from more than 80 experts provide additional guidance.  The Guide itself occupies 80 pages. It is all free, with no sign-up required.

But there is hopefully more than one pony under the pile for those needing to join the “1% club” of purposeful, content-oriented, professional bloggers. In this Guide you will find discussion of these useful topics:

  • How to choose blogging software and add-on tools
  • Taking control of the blogging process by hosting your own site
  • Getting your blog to display and perform right
  • Effective techniques for converting existing documents to your blog site HTML
  • Being efficient in posting, organizing and work-flowing to allow your diarist activities to flow naturally and productively
  • Keeping the blog site pump primed with fresh and relevant content

I created this Guide as a discipline in learning how to be a diarist or journalist, akin to the heyday of “persons of letters” prior to the telegraph. In part, I undertook this discipline to rekindle those daily journal skills of the past. But, for the most part, I undertook the effort because I believe a fundamentally new means and mechanism for adaptive advantage is being created with social computing, of which blogging is a part.

Enjoy! And I welcome your corrections or suggestions for improvements.

Posted: September 18, 2005
This AI3 blog maintains Sweet Tools, the largest listing of semantic Web and related tools available, now numbering about 800. Most are open source. Click here to see the current listing!

My current research efforts involve the semantic Web and ontologies. By the semantic Web I include that topic, plus the related technologies and standards of metadata, ontologies, taxonomies, thesauri, controlled vocabularies, XML, RDF and OWL.

A good starting point on tools comes from Michael Denny, who has updated his 2002 ontology editor survey. Other tools surveys include a 2003 HP review from the SIMILE research program on metadata and thesaurus tools; the Semantic Web has a listing of about 245 tools on its beta Web site; and the W3C, as might be expected given its role in RDF and related standards, has an excellent starting point for developer resources, including entries for related standards and technologies.

The ONTOLOG community also lists some tools resources but, more importantly, has an excellent recommended-reading compendium. These links are essential starting points for anyone beginning their investigations into the semantic Web.

Finally, Kendall Clark, editor of XML.com, just posted a fascinating piece on SPARQL 2.0, a possible query language for the semantic Web, and a longer article on the possible convergence of Web 2.0 and the semantic Web. As he puts it, “I’m starting to catch the scent of one of those big convergence things just possibly starting to happen. It smells like money!”

Posted: September 15, 2005

Google announced its new beta blog search service this week, and I immediately went to check it out.  To my dismay, none of my AI3 blog posts were listed!  $#%&*#

My first hint of what to do came from the About Google Blog Search page, which indicated that while Google does not yet have a form for submitting pings, the new service does monitor updating services, specifically mentioning Weblogs.com.  I then tried to access this site, which was slower than molasses, and I timed out many times (I suspect many others were following the same path I was).

That got me into a whole investigation of ping and ping success in general with my WordPress installation.  (See my earlier post on Pings and Trackbacks).  I was alarmed to discover that many of my ping locations had not been updating well, for reasons that still remain somewhat murky (though others have noted sporadic miscues by WordPress in ping updates, not to mention some of the ping update sites recommended for it such as Ping-o-matic).

The WordPress dashboard suggested that Google was using Ping-o-matic as one of its update services for new listings, so I manually submitted my site again to Ping-o-matic and waited to see the results.  Voila!   After a delay of an hour or so, I found my posts and sites on the Google blog search service and other locations.

Thus, in the interim before Google expands its submission options, I recommend that WordPress bloggers who are not yet listed in the Google blog search do the following:

  1. Occasionally ping Ping-o-matic manually rather than relying on your updates being handled automatically (but DON’T do it too frequently, since that can be interpreted as spamming behavior)
  2. On a one-time basis, raise your syndication feeds limit on the Options-Reading-Syndication Feeds panel in the dashboard so that it is large enough to include all of your desired recent postings
  3. Manually submit an update at Ping-o-matic, and
  4. Return the syndication feeds number to its original value in your dashboard.
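For anyone curious what a manual ping actually transmits, updating services such as Ping-o-matic accept the standard weblogUpdates.ping XML-RPC call, which takes just the blog’s name and URL. Here is a minimal Python sketch that builds (but does not send) the request body; the blog name and URL are placeholders, and a real ping would POST this payload to the update service’s XML-RPC endpoint:

```python
import xmlrpc.client

# The weblogUpdates.ping method takes two string parameters:
# the blog's display name and its home-page URL (placeholders here).
blog_name = "AI3"
blog_url = "http://www.mkbergman.com/"

# Build the XML-RPC request body without sending it; sending would
# require an HTTP POST to the update service's endpoint.
payload = xmlrpc.client.dumps((blog_name, blog_url),
                              methodname="weblogUpdates.ping")

print("weblogUpdates.ping" in payload)  # True
```

Services that receive this call simply record that the named blog has fresh content, which is why over-pinging looks like spam: every call claims an update.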

With this simple approach, I can now happily report that all of the AI3 listings are in the new Google blog search service, and yours can be, too!

Posted by AI3's author, Mike Bergman Posted on September 15, 2005 at 5:52 pm in Blogs and Blogging, Searching | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/124/getting-listed-on-google-blog-search/
The URI to trackback this post is: http://www.mkbergman.com/124/getting-listed-on-google-blog-search/trackback/
Posted: September 14, 2005

According to iProspect, about 56 percent of users use search engines every day, based on a population of which more than 70 percent use the Internet more than 10 hours per week.[1] The average knowledge worker spends 2.3 hours per day — or about 25% of work time — searching for critical job information.[2] IDC estimates that enterprises employing 1,000 knowledge workers may waste well over $6 million per year each in searching for information that does not exist, failing to find information that does, or recreating information that could have been found but was not.[3]

Vendors and customers often use time savings by knowledge workers as a key rationale for justifying a document or content initiative. This comes about because many studies over the years have noted that white-collar employees spend a consistent 20% to 25% of their time seeking information. The premise is that more effective search will save time and drop these percentages. For example, EDS has suggested that improvements of 50 percent in the time spent searching for data can be achieved through improved consolidation and access to data.[4]

Using these premises, consultants often calculate that every 1% reduction in the total work time devoted to search translates, on a fully burdened basis, into a substantial per-employee benefit:

$50,000 (base salary) * 1.8 (burden rate) * 1.0% = $900/ employee
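The arithmetic can be checked directly, and scaled up, with a few lines of Python; the salary, burden rate and headcount are the illustrative values already used in this post:

```python
base_salary = 50_000   # illustrative base salary from the formula above
burden_rate = 1.8      # fully burdened multiplier
time_saved = 0.01      # a 1% reduction in total work time

# Per-employee "savings" the consultants' formula produces.
savings_per_employee = base_salary * burden_rate * time_saved
print(round(savings_per_employee))  # 900

# Scaled to an enterprise of 1,000 knowledge workers, as in the
# IDC example cited earlier.
print(round(savings_per_employee * 1_000))  # 900000
```

Note how quickly a tiny percentage compounds across headcount: the same 1% assumption yields $900,000 per year for a 1,000-person knowledge workforce, which is precisely why the premise behind the percentage deserves scrutiny.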

Beware such facile analysis!

The fact that many studies over the years have noted white-collar employees spend a consistent 20% to 25% of their time devoted to search suggests that this is the “satisficing” allocation of time to information search. (In other words, knowledge workers are willing to devote about a quarter of their time to finding relevant information, reserving the remainder for analysis and documentation.)

Thus, while better discovery tools may lead to finding better information and making better decisions more productively — an important justification in itself — more efficient search may not yield strict time or labor savings.[5] Be careful of justifying project expenditures based on “time savings” related to search. Search is likely to remain the “25% solution.” The more relevant question is whether the time spent on search produces better information or not.


[1] iProspect Corporation, iProspect Search Engine User Attitudes, April/May 2004, 28 pp. See http://www.iprospect.com/premiumPDFs/iProspectSurveyComplete.pdf.

[2] Delphi Group, “Taxonomy & Content Classification Market Milestone Report,” Delphi Group White Paper, 2002. See http://delphigroup.com.

[3] C. Sherman and S. Feldman, “The High Cost of Not Finding Information,” International Data Corporation Report #29127, 11 pp., April 2003.

[4] M. Doyle, S. Garmon, and T. Hoglund, “Make Your Portal Deliver: Building the Business Case and Maximizing Returns,” EDS White Paper, 10 pp., 2003.

[5] M.E.D. Koenig, “Time Saved — a Misleading Justification for KM,” KMWorld Magazine, Vol 11, Issue 5, May 2002. See http://www.kmworld.com/publications/magazine/index.cfm.

Posted by AI3's author, Mike Bergman Posted on September 14, 2005 at 12:45 pm in Information Automation, Searching | Comments (1)
The URI link reference to this post is: http://www.mkbergman.com/121/search-and-the-25-solution/
The URI to trackback this post is: http://www.mkbergman.com/121/search-and-the-25-solution/trackback/