Posted: November 14, 2008

Multi-part Federated Search Interview

Topics Range from the Deep Web to Semantic Web in this Search Luminaries Series

I’m pleased to wrap up a multi-part interview with the Federated Search Blog as part of their ongoing ‘Search Luminaries’ series. Sol Lederman, editor of the blog, does a thorough and comprehensive job! Each Friday over the past month, I have answered some 25 or so of his detailed questions.

Federated Search Blog was particularly interested in the deep Web, its discovery and size. Many of the early questions deal with those themes. However, by Part 4 things get a bit more current, with the topics shifting to the semantic Web, linked data and Zitgist.

Here are the links to the series:

To give you a flavor of the interview, here is an example of one of the questions (and probably my favorite):

20. Tim Berners-Lee, credited with inventing the World Wide Web, has been talking about the importance and value of the Semantic Web for years, yet common folks don't see much evidence of the Semantic Web gaining traction. Is there substance to the Semantic Web? What's happening with it now and what does its future look like?

Wow, in 10,000 words or less?

No, actually, this is a very good question. As things go, I am a relative newbie to the semantic Web, only having studied and followed it closely since about 2005. I'm sure my perspective, coming later to the party, may not be shared by those who were there at the beginning, which dates to the mid-1990s, when Berners-Lee's vision naturally progressed from a Web of documents, as most of us currently know the Web, to a Web of data.

I think there is indeed incredibly important substance to the semantic Web. But, as I have written elsewhere, the semantic Web is more of a vision than a discernible point in time or a milestone.

The basic idea of the semantic Web is to shift the focus from documents to data. Give data a unique Web address. Characterize that data with rich metadata. Describe how things are related to one another so that relationships and connections can be traced. Provide defined structures for what these things and relationships "mean"; this is what provides the semantics, with the structures and their defined vocabularies known as "ontologies" (which in one analog can be seen as akin to a relational database schema).
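To make that schema analogy a bit more concrete, here is a minimal sketch in Python using the open-source rdflib library (my choice purely for illustration; the example.org identifiers are hypothetical placeholders, not anything real). It defines a tiny vocabulary, a class and a property, and then types and describes a piece of data against it:

# A minimal sketch, assuming Python and the open-source rdflib library.
# All example.org names are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# The "schema" part: a class and a property, loosely akin to a table and a column.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.sees, RDF.type, RDF.Property))
g.add((EX.sees, RDFS.domain, EX.Person))
g.add((EX.sees, RDFS.range, EX.Person))

# The "data" part: a thing with its own Web address, typed and given metadata.
g.add((EX.Dick, RDF.type, EX.Person))
g.add((EX.Dick, RDFS.label, Literal("Dick")))

print(g.serialize(format="turtle"))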

As these structures and definitions get put in place, the Web itself then becomes the infrastructure for relating information from everywhere and anywhere on any given topic or subject. While this vision may sound grandiose, just think back to what the Web itself has done for us and documents over the past decade or so. This same architecture and infrastructure can and should be extended to the actual information in those documents, the data. And, oh, by the way, conventional databases can now join this party as well. The vision is very powerful and very cool.

Progress has indeed been slow. Many advocates fairly point to how long it takes to get standards in place, and for a while people spoke of the "chicken-and-egg" problem: getting over the threshold of having enough structured data available to make it worthwhile to create the tools, applications and showcases that consume that data.

From my perspective, the early visions of the semantic Web were too abstract, a bit off perhaps. First, there was the whole idea of artificial intelligence and machines using the data, as opposed to better ways for humans to draw use from the data at hand. Second, the fundamental and exciting engine underneath the semantic Web — the RDF (Resource Description Framework) data model — was not initially treated on its own. It got admixed with XML in a way that made understanding difficult and distinctions vague. There was, and remains, too much academia and not enough pragmatics driving the bus.

But that is changing and fast.

There is now an immediate and practical "flavor" of the semantic Web called linked data. It has three simple bases:

(1) RDF as the simple but adaptable data model that can represent any information — structured or unstructured — as the basic "triple" statement of subject-predicate-object. That sounds fancy, but just substitute verb for predicate and noun for subject and object. In other words: Dick sees Jane; or the ball is round. It sounds like a kindergarten reader, but that is how data can be easily represented and built up into more complex structures and stories;

(2) Give all objects a unique Web identifier. Unique identifiers are common to any database; in linked data, we just make sure those identifiers take the same form as the URIs we see constantly in the address bar of our Web browsers; and

(3) Post and expose this data so that it is accessible on the Web (namely, via HTTP). A short sketch after this list pulls these three bases together.
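Here is how those three bases might look in practice, again as a minimal sketch in Python with rdflib; the example.org URIs are hypothetical placeholders:

# A minimal sketch of the three bases, assuming Python and rdflib.
# The example.org URIs are hypothetical placeholders.
from rdflib import Graph, URIRef

# (2) Every object gets a unique Web identifier (an HTTP URI).
dick = URIRef("http://example.org/people/Dick")
sees = URIRef("http://example.org/vocab/sees")
jane = URIRef("http://example.org/people/Jane")

# (1) The data itself is a subject-predicate-object triple: Dick sees Jane.
g = Graph()
g.add((dick, sees, jane))

# (3) Serialize it; the resulting file is what gets posted and exposed over HTTP.
print(g.serialize(format="nt"))
# <http://example.org/people/Dick> <http://example.org/vocab/sees> <http://example.org/people/Jane> .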

My company adds some essential "spice" to these flavors with respect to reference structures and concepts to give the information context, but these simple bases remain the foundation.

These are really not complex steps. They are no different from the early phases of posting documents on the Web. Only now, we are exposing data.

More importantly, we can forget the chicken-and-egg problem. Each new data link we make brings value, much as adding a node to a network brings value according to Metcalfe's Law. Only with linked data, we already have the nodes — the data — and are just establishing the link connections (the verbs, predicates or relations) to flesh out the network graph. Same principle, only our focus is now to connect what is there rather than to add more nodes. (Of course, adding more linked nodes helps as well!)
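As a back-of-the-envelope illustration of that network effect (nothing more than simple arithmetic in Python), the number of possible links among n nodes grows roughly with the square of n, so each additional node or link is worth more than the last:

# Back-of-the-envelope: possible links among n nodes grow roughly as n^2 (Metcalfe's Law).
for n in (10, 100, 1000):
    possible_links = n * (n - 1) // 2
    print(n, "nodes ->", possible_links, "possible links")
# 10 nodes -> 45 possible links
# 100 nodes -> 4950 possible links
# 1000 nodes -> 499500 possible links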

The absolutely amazing thing about our current circumstance as Web users is that we truly now have simple and readily deployable mechanisms available to finally overcome the decades of enterprise stovepipes. The whole answer is so simple it can be mistaken for snake oil when first presented and not inspected a bit.

In an industry accustomed to hype and cynical about so much of this, I only ask that your readers check out these assertions for themselves and suspend their normal and expected disbelief. For me, in a career of more than 30 years focusing on information and access, I feel like we finally now have the tools, data model and architecture at hand to actually achieve data interoperability.

Thanks again to Sol and Federated Search Blog for this opportunity.
