Posted: October 28, 2008

It's UMBELievable!

UMBEL’s New Web Services Embrace a Full Web-Oriented Architecture

I recently wrote about WOA (Web-oriented architecture), a term coined by Nick Gall, and how it represented a natural marriage between RESTful Web services and RESTful linked data. There was, of course, a method behind that posting to foreshadow some pending announcements from UMBEL and Zitgist.

Well, those announcements are now at hand, and it is time to disclose some of the method behind our madness.

As Fred Giasson notes in his announcement posting, UMBEL has just released some new Web services with fully RESTful endpoints. We have been working on the design and architecture behind this for some time and, all I can say is, it’s UMBELievable!

As Fred notes, there is further background information on the UMBEL project — which is a lightweight reference structure based on about 20,000 subject concepts and their relationships for placing Web content and data in context with other data — and the API philosophy underlying these new Web services. For that background, please check out those references; that is not my main point here.

A RESTful Marriage

We discussed much in coming up with the new design for these UMBEL Web services. Most prominent was taking seriously a RESTful design and grounding all of our decisions in the HTTP 1.1 protocol. Given the shared approaches between RESTful services and linked data, this correspondence felt natural.

What was perhaps most surprising, though, was how complete and well suited HTTP was as a design and architectural basis for these services. Sure, we understood the distinctions of GET and POST and persistent URIs and the need to maintain stateless sessions with idempotent design, but what we did not fully appreciate was how content and serialization negotiation and error and status messages also were natural results of paying close attention to HTTP. For example, here is what the UMBEL Web services design now embraces:

  • An idempotent design that maintains statelessness and independence of operation
  • Language, character set, encoding, serialization and mime type enforced by header information and conformant with content negotiation
  • Error messages and status codes inherited from HTTP
  • Common and consistent terminology to aid understanding of the universal interface
  • A resulting componentization and design philosophy that is inherently scalable and interoperable
  • A seamless consistency between data and services.
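
To make the content-negotiation point above concrete, here is a minimal sketch (hypothetical, not UMBEL's actual implementation) of how a RESTful endpoint can choose a serialization by parsing the HTTP Accept header, honoring q-values per HTTP 1.1:

```python
# Minimal HTTP content-negotiation sketch (illustrative only).
# Picks the best-supported serialization from an Accept header.

SUPPORTED = {
    "application/rdf+xml": "rdfxml",
    "text/turtle": "turtle",
    "application/json": "json",
}

def negotiate(accept_header: str, default: str = "application/rdf+xml") -> str:
    """Return the serialization for the highest-q supported media type."""
    candidates = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mime = fields[0].strip()
        q = 1.0
        for f in fields[1:]:
            f = f.strip()
            if f.startswith("q="):
                try:
                    q = float(f[2:])
                except ValueError:
                    q = 0.0
        if mime in SUPPORTED:
            candidates.append((q, mime))
        elif mime == "*/*":
            candidates.append((q, default))
    if not candidates:
        # fall back to the default; replying 406 Not Acceptable is the stricter choice
        return SUPPORTED[default]
    candidates.sort(reverse=True)
    return SUPPORTED[candidates[0][1]]

print(negotiate("text/turtle;q=0.9, application/json;q=0.5"))  # → turtle
```

The same pattern extends to Accept-Language and Accept-Charset, which is why language, character set and serialization can all ride on header information rather than custom API parameters.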

There are likely other services out there that embrace this full extent of RESTful design (though we are not aware of them). What we are finding most exciting, though, is the ease with which we can extend our design into new services and to mesh up data with other existing ones. This idea of scalability and distributed interoperability is truly, truly powerful.

It is almost like, sure, we knew the words and the principles behind REST and a Web-oriented architecture, but had really not fully taken them to heart. As our mindset now embraces these ideas, we feel like we have now looked clearly into the crystal ball of data and applications. We very much like what we see. WOA is most cool.

First Layer to the Zitgist ‘Grand Vision’

For lack of a better phrase, Zitgist has an internal plan that it calls its ‘Grand Vision’ for moving forward. Though something of a living document, this reference describes how Zitgist is going about its business and development. It does not describe our markets or products (of course, other internal documents do that), but our internal development approaches and architectural principles.

Just as we have seen a natural marriage between RESTful Web services and RESTful linked data, there are other natural fits and synergies. Some involve component design and architecting for pipeline models. Some involve the natural fit of domain-specific languages (DSLs) to common terminology and design. Still others involve the use of such constructs in both GUIs and command-line interfaces (CLIs), again all built from common language and terminology that non-programmers and subject matter experts alike can readily embrace. Finally, there is a preference for Python to wrap legacy apps and to provide a productive scripting environment for DSLs.

If one can step back a bit and realize there are some common threads to the principles behind RESTful Web services and linked data, that very same mindset can be applied to many other architectural and design issues. For us, at Zitgist, these realizations have been like turning on a very bright light. We can see clearly now, and it is pretty UMBELievable. These are indeed exciting times.

BTW, I would like to thank Eric Hoffer for the very clever play on words with the UMBELievable tag line. Thanks, Eric, you rock!

Posted: October 12, 2008

Web-Oriented Architecture (and REST) is Gaining Enterprise Mindshare

Nick Gall, a VP at Gartner, first coined the TLA (three-letter acronym) WOA (Web-oriented architecture) in late 2005. Nick describes WOA as based on the architecture of the Web, which he further characterizes as “globally linked, decentralized, and [with] uniform intermediary processing of application state via self-describing messages.”

Nick positions WOA as a subset of the service-oriented (SOA) architectural style. He describes SOA as comprising discrete functions that are packaged into modular and shareable elements (“services”) that are made available in a distributed and loosely coupled manner.

Representational state transfer (REST) is an architectural style for distributed hypermedia systems such as the World Wide Web. It was named and defined in Roy Fielding‘s 2000 doctoral thesis; Roy is also one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification.

REST provides principles for how resources are defined and used and addressed with simple interfaces without additional messaging layers such as SOAP or RPC. The principles are couched within the framework of a generalized architectural style and are not limited to the Web, though they are a foundation to it.

REST and WOA stand in contrast to earlier Web service styles that are often known by the WS* acronym (such as WSDL, etc.). (Much has been written on RESTful Web services v. “big” WS*-based ones; one of my own postings goes back to an interview with Tim Bray back in November 2006.)

While there are dozens of well-known methods for connecting distributed systems together, protocols based on HTTP will be the ones that stand the test of time. And since HTTP is the fundamental protocol of the Web, those protocols most closely aligned with its essential nature will likely be the most successful.

– Dion Hinchcliffe [2]

Shortly after Nick coined the WOA acronym, REST luminaries such as Sam Ruby gave the meme some airplay [1]. From an enterprise and client perspective, Dion Hinchcliffe in particular has expanded and written extensively on WOA. Besides his own blog, Dion has also discussed WOA several times on his Enterprise Web 2.0 blog for ZDNet.

Largely due to these efforts (and — some would claim — the difficulties associated with earlier WS* Web services) enterprises are paying much greater heed to WOA. It is increasingly being blogged about and highlighted at enterprise conferences [3].

While exciting, that is not what is most important in my view. What is important is that the natural connection between WOA and linked data is now beginning to be made.

Analogies to Linked Data

Linked data is a set of best practices for publishing and deploying data on the Web using the RDF data model. The data objects are named using Web uniform resource identifiers (URIs), emphasize data interconnections, and adhere to REST principles.
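
The essence of the RDF data model can be sketched in a few lines of plain Python: everything, subjects and relations alike, is named with a URI, and data from different sites interlinks simply by sharing those names. (The people URIs below are invented for illustration; the predicates are from the real FOAF vocabulary.)

```python
# Toy triple store illustrating the linked-data pattern. Every thing and
# every relation is named by a URI; links cross site boundaries freely.

triples = set()

def add(s: str, p: str, o: str) -> None:
    triples.add((s, p, o))

# Hypothetical resources linked across two sites:
add("http://example.org/people/alice", "http://xmlns.com/foaf/0.1/knows",
    "http://other.example.net/people/bob")
add("http://example.org/people/alice", "http://xmlns.com/foaf/0.1/name", "Alice")

def objects(s: str, p: str):
    """Simple lookup: all objects for a given subject and predicate."""
    return sorted(o for (s2, p2, o) in triples if s2 == s and p2 == p)

print(objects("http://example.org/people/alice",
              "http://xmlns.com/foaf/0.1/name"))  # → ['Alice']
```

Because the identifiers are dereferenceable HTTP URIs, the same GET requests that drive RESTful services also retrieve the data, which is exactly the correspondence between WOA and linked data argued here.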

Most recently, Nick began picking up the theme of linked data on his new Gartner blog. Enterprises now appreciate the value of an emerging service aspect based on HTTP and accessible by URIs. The idea is jelling that enterprises can now process linked data architected in the same manner.

I think the similar perspectives between REST Web services and linked data become a very natural and easily digested concept for enterprise IT architects. This is a receptive audience because it is these same individuals who have experienced first-hand the challenges and failures of past hype and complexity from non-RESTful designs.

It helps immensely, of course, that we can now look at the major Web players such as Google and Amazon and others — not to mention the overall success of the Web itself — to validate the architecture and associated protocols for the Web. The Web is now understood as the largest Machine designed by humans and one that has been operational every second of its existence.

Many of the same internal enterprise arguments that are being made in support of WOA as a service architecture can be applied to linked data as a data framework. For example, look at Dion’s 12 Things You Should Know About REST and WOA and see how most of the points can be readily adopted to linked data.

So, enterprise thought leaders are moving closer to what we now see as the reality and scalability of the Web done right. They are getting close, but there is still one piece missing.

False Dichotomies

I admit that I have sometimes tended to think of enterprise systems as distinct from the public Web. And, for sure, there are real and important distinctions. But from an architecture and design perspective, enterprises have much to learn from the Web’s success.

With the Web we see the advantages of a simple design, of universal identifiers, of idempotent operations, of simple messaging, of distributed and modular services, of simple interfaces, and, frankly, of openness and decentralization. The core foundations of HTTP and adherence to REST principles have led to a system of such scale and innovation and (growing) ubiquity as to defy belief.

So, the first observation is that the Web will be the central computing touchstone and framework for all computing systems for the foreseeable future. There simply is no question that interoperating with the Web is now an enterprise imperative. This truth has been evident for some time.

But the reciprocal truth is that these realities are themselves a direct outcome of the Web’s architecture and basic protocol, HTTP. The false dichotomy of enterprise systems as being distinct from the Web arises from seeing the Web solely as a phenomenon and not as one whose basic success should be giving us lessons in architecture and design.

Thus, we first saw the emergence of Web services as an important enterprise thrust — we wanted to be on the Web. But that was not initially undertaken consistent with Web design — which is REST or WOA — but rather as another “layer” in the historical way of doing enterprise IT. We were not of the Web. As the error of that approach became evident, we began to see the trend toward “true” Web services that are now consonant with the architecture and design of the actual Web.

So, why should these same lessons and principles not apply as well to data? And, of course, they do.

If there is one area in which enterprises have been abject failures for more than 30 years, it is data interoperability. ETL and enterprise buses and all sorts of complex data warehousing and EAI and EIA mumbo jumbo have kept many vendors fat and happy, but few enterprise customers so. On almost every single dimension, these failed systems have violated the basic principles now in force on the Web based on simplicity, uniform interfaces, etc.

The Starting Foundation: HTTP 1.1

OK, so how many of you have read the HTTP specifications [4]? How many understand them? What do you think the fundamental operational and architectural and design basis of the Web is?

HTTP is often described as a communications protocol, but it really is much more. It represents the operating system of the Web as well as the embodiment of a design philosophy and architecture. Within its specification lies the secret of the Web’s success. REST and WOA quite possibly require nothing more to understand than the HTTP specification.

Of course, the HTTP specification is not the end of the story, just the essential beginning for adaptive design. Other specifications and systems layer upon this foundation. But, the key point is that if you can be cool with HTTP, you are doing it right to be a Web actor. And being a cool Web actor means you will meet many other cool actors and be around for a long, long time to come.
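
To make the "be cool with HTTP" point concrete, here is a minimal sketch (not any real server, and the UMBEL-style path is a made-up example) showing how HTTP's uniform interface of a few verbs plus standard status codes applies identically to every resource, with GET, PUT and DELETE idempotent by design:

```python
# Sketch of HTTP's uniform interface: an in-memory resource handler that
# answers the standard verbs with standard status codes. Illustrative only.

resources = {"/umbel/sc/Mammal": "a subject concept"}  # hypothetical path

def handle(method, path, body=None):
    """Return (status_code, response_body) per HTTP 1.1 semantics."""
    if method == "GET":                      # safe and idempotent
        if path in resources:
            return 200, resources[path]
        return 404, "Not Found"
    if method == "PUT":                      # idempotent: repeating it yields the same state
        resources[path] = body or ""
        return 200, "OK"
    if method == "DELETE":                   # idempotent
        existed = resources.pop(path, None) is not None
        return (204, "") if existed else (404, "Not Found")
    return 405, "Method Not Allowed"         # uniform error vocabulary

print(handle("GET", "/umbel/sc/Mammal")[0])  # → 200
```

Note that the error and status messages come for free: nothing service-specific needs to be invented, which is the "inherited from HTTP" point in the list above.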

Concept “Routers” for Information

An understanding of HTTP can provide similar insights with respect to data and data interoperability. Indeed, the fancy name of linked data is nothing more than data on the Web done right — that is, according to the HTTP specifications.

Just as packets need their routers to get to their proper location based on resolving the names of a URI to a physical device, data or information on the Web needs similar context. And, one mechanism by which such context can be provided is through some form of logical referencing framework by which information can be routed to its right “neighborhood”.

I am not speaking of routing to physical locations now, but the routing to the logical locations about what information “is about” and what it “means”. On the simple level of language, a dictionary provides such a function by giving us the definition of what a word “means”. Similar coherent and contextual frameworks can be designed for any information requirement and scope.
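
A concept "router" of this kind can be sketched as a simple lookup from surface terms to subject-concept URIs. (The terms, concepts and URIs below are invented for illustration; a real reference structure such as UMBEL is far richer, with defined relationships among concepts.)

```python
# Hypothetical concept "router": terms found in content are routed to the
# logical neighborhood of what the content "is about". URIs are made up.

CONCEPT_INDEX = {
    "lion": "http://example.org/sc/Mammal",
    "tiger": "http://example.org/sc/Mammal",
    "oak": "http://example.org/sc/Plant",
}

def route(text: str):
    """Return the set of concept URIs a piece of text is 'about'."""
    words = text.lower().split()
    return {CONCEPT_INDEX[w] for w in words if w in CONCEPT_INDEX}

print(route("The lion and the oak"))
```

Just as a physical router forwards packets without knowing their payload, this logical routing says nothing about the content itself, only which neighborhood of meaning it belongs to.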

Of course, enterprises have been doing similar things internally for years by adopting common vocabularies and the like. Relational data schema are one such framework even if they are not always codified or understood by their enterprises as such.

Over the past decade or two we have seen trade and industry associations and standards bodies, among others, extend these ideas of common vocabularies and information structures such as taxonomies and metadata to work across enterprises. This investment is meaningful and can be quite easily leveraged.

As Nick notes, efforts such as what surrounds XBRL are one vocabulary that can help provide this “routing” in the context of financial data and reporting. So, too, can UMBEL as a general reference framework of 20,000 subject concepts. Indeed, our unveiling of the recent LOD constellation points to a growing set of vocabularies and classes available for such contexts. Literally thousands and thousands of such existing structures can be converted to Web-compliant linked data to provide the information routing hubs necessary for global interoperability.

And, so now we come down to that missing piece. Once we add context as the third leg to this framework stool to provide semantic grounding, I think we are now seeing the full formula powerfully emerge for the semantic Web:

SW = WOA + linked data + coherent context

This simple formula becomes a very powerful combination.

Just as older legacy systems can be exposed as Web services, and older Web services can be turned into WOA ones compliant with the Web’s architecture, we can transition our data in similar ways.

The Web has been pointing us to adaptive design for both services and data since its inception. It is time to finally pay attention.

[1] Sam and his co-author Leonard Richardson of RESTful Web Services (O’Reilly Media Inc., 446 pp, May 2007; ISBN 0596529260) have preferred the label ROA, for Resource-oriented Architecture.
[2] D. Hinchcliffe, “A Search for REST Frameworks for Exploring WOA Patterns — And Current Speaking Schedule”, Sept. 10, 2006.
[3] The Linked Data community should pay much closer attention to existing and well-attended enterprise conferences in which the topic can be inserted as a natural complement rather than trying to start entire new venues.
[4] The current specification is RFC 2616 (June 1999), which defines HTTP/1.1.
Posted: July 16, 2008

Bringing Context through a Meta-Subject Framework for the Web

Today marks the first public release of UMBEL, a lightweight subject concept reference structure for the Web. This version 0.70 release required a full 12 months and many person-years of development effort.

UMBEL (Upper Mapping and Binding Exchange Layer) is a lightweight ontology structure for relating Web content and data to a standard set of 20,000 subject concepts. Its purpose is to provide a fixed set of common reference points in the global knowledge space. These subject concepts have defined relationships between them, and can act as semantic binding nodes for any Web content or data. The UMBEL reference structure is a large, inclusive, linked concept graph.

Connecting to the UMBEL structure gives context and coherence to Web data. In this manner, Web data can be linked, made interoperable, and more easily navigated and discovered. UMBEL is a great vehicle for interconnecting content metadata.

The UMBEL vocabulary defines some important new predicates and leverages existing semantic Web standards. The ontology is provided as Linked Data with Web services access (and pending SPARQL endpoints). Besides its 20,000 subject concepts and relationships distilled from OpenCyc, a further 1.5 million named entities are mapped to that structure. The system is easily extendable.
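
As a sketch of how such a structure might be queried once SPARQL endpoints are live, the following composes a simple query for entities mapped to a subject concept. (The namespace URI, the isAbout predicate and the concept URI are assumptions for illustration, not necessarily the released vocabulary.)

```python
# Sketch of a SPARQL query against a subject-concept structure.
# Predicate and namespace are hypothetical placeholders.

def concept_query(concept_uri: str) -> str:
    """Compose a SPARQL query for entities mapped to a subject concept."""
    return (
        "PREFIX umbel: <http://umbel.org/umbel#>\n"
        "SELECT ?entity WHERE {\n"
        f"  ?entity umbel:isAbout <{concept_uri}> .\n"
        "}"
    )

q = concept_query("http://example.org/sc/Mammal")
print(q)
```

The point is less the specific predicate than the pattern: with 1.5 million named entities mapped to 20,000 concepts, one short graph-pattern query retrieves everything "about" a concept.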

Fred Giasson, UMBEL’s co-editor, posts separately on how the UMBEL vocabulary can enrich existing semantic Web ontologies and techniques. Also, see the project’s Web site for additional background and explanatory information on the project.

UMBEL is provided as open source under the Creative Commons Attribution-Share Alike 3.0 license; the complete ontology with all subject concepts, definitions, terms and relationships can be freely downloaded. All subject concepts are Web-accessible as Linked Data URIs.

Development of UMBEL and the hosting of its Web services is provided by Zitgist LLC with support from OpenLink Software.

Access and Documentation

Five volumes of technical documentation are available. The two key volumes explaining the UMBEL project and process are UMBEL Ontology, Vol. A1: Technical Documentation (also online) and Distilling Subject Concepts from OpenCyc, Vol. B1: Overview and Methodology.

A new overview slideshow is also available.

Ontology Access and Download

UMBEL Web Services

Cytoscape Files

There are two input files for Cytoscape, the open source program used for certain large-scale UMBEL visualization and analysis:

  • umbel_cytoscape.csv — lists all the nodes and arcs to import into Cytoscape to visualize the UMBEL graph
  • umbel_cytoscape.cys — a pre-prepared input file for Cytoscape that includes a force-directed layout of the UMBEL subject concept graph; this is the file most users should choose unless they want to re-build the layout from scratch within Cytoscape.

Other Documentation

The two complete references to all current and archived files and access procedures in the UMBEL project are UMBEL Ontology, Vol. A2: Subject Concepts and Named Entities Instantiation and Distilling Subject Concepts from OpenCyc, Vol. B2: Files Documentation. Finally, the fifth documentation volume accompanying the release is Distilling Subject Concepts from OpenCyc, Vol. B3: Appendices, which provides supporting materials and detailed backup.

Current Editorial Positions

As discussed on the Web site on UMBEL’s role, the project currently has adopted two pivotal positions with respect to OpenCyc and its use:

  1. All UMBEL subject concepts are based on existing concepts in OpenCyc. This means UMBEL inherits the proven structure and relationships extant in OpenCyc
  2. No new subject concepts will be added to UMBEL that are not included in OpenCyc. This means that UMBEL’s structure will not diverge from the structural relations already in OpenCyc. This decision preserves the use of UMBEL as a sort of contextual middleware between unstructured Web content and the inferential and tools infrastructure within OpenCyc (and beyond into ResearchCyc and Cyc for commercial purposes) and back again to the Web.

For these positions to be effective, we are putting in place mechanisms for UMBEL to collect and forward community comments regarding the suitability of the subject concept structure, and for Cycorp to deliberate on that input and respond as appropriate to maintain the coherence of the knowledge base.

Fortunately, Cycorp has been supremely responsive to date and has made changes to the OpenCyc concept structure and its conversion to OWL in support of needs and observations brought forth by the UMBEL project. We expect this excellent working relationship to continue.

Setting Realistic Expectations

This version 0.70 release follows the versioning and numbering scheme presented in the supporting documentation. Releasing with a version number below 1.0 also signals the newness and relative immaturity of the system.

This release is the first in which the UMBEL subject concepts and ontology will be applied as a real vocabulary in public settings. Some areas are known to be weaker and less complete than others. Some, such as the coverage of Internet and Web topics familiar to domain experts, are relatively sparse. Others, such as the organization of science and academic disciplines, have seen much improvement, but more is necessary. Still other areas will certainly surface as warranting better subject concept coverage.

Mechanisms are being put in place for user feedback, and discussion is always welcome at the project’s discussion forum and mailing list. We anticipate rapid changes and versioning over the next six months or so, which is also roughly the forecasted horizon for the first production-grade version 1.0.

Contributions and Thanks

A number of individuals and organizations have contributed significantly to this release, for which the project offers hearty thanks.

Zitgist LLC has been the major source of staff time and hosting services to the project. Two of Zitgist’s principals, Mike Bergman and Fred Giasson, have acted as editors on the UMBEL project. Zitgist also has contributed nearly two person-years of effort to the project. Zitgist intends to continue to lead and manage the project with a substantial future commitment of time and effort.
OpenLink Software has been the major source of infrastructure, financing and software for the project. OpenLink’s Virtuoso virtual data management system is the hosting software environment for UMBEL and its Web services. Kingsley Idehen, CEO and President of OpenLink, has been a key source of inspiration for the project.
Cycorp is the developer of the Cyc knowledge base, with more than 1,000 person-years of effort behind it, from which the OpenCyc open source version is derived. Since the initial selection of OpenCyc for UMBEL, Cycorp staff have devoted many person-months of effort to help explain the underlying system and, then, most recently, to make improvements and revisions to OpenCyc and its OWL version in response to project input. Larry Lefkowitz, VP of business development, has been a very effective interface with the project.
YAGO is a project from Fabian Suchanek, Gjergji Kasneci and Gerhard Weikum of the Max-Planck-Institute for Computer Science, Saarbruecken, Germany. It is based on extracting and organizing entities from Wikipedia according to the WordNet concept structure. YAGO demonstrated the methodology for how to replace the native Wikipedia structure with alternate external structures and provided the starting set of named entities used within UMBEL. Fabian has been especially helpful in data, software and methodology support to the project.
The Cyc Foundation and its members have been devoted to Web exposure of OpenCyc and have provided great guidance to the project in learning and navigating the knowledge base. Their concepts browser and other Web services have also been extremely helpful to the project’s initial ideas and testing. Mark Baltzegar and John De Oliveira, the two lead directors of the Cyc Foundation, have been particularly helpful.
Moritz Stefaner is one of the innovators and rising stars in large-scale data visualization. Moritz has kindly contributed his cool Flash explorer implementation used in UMBEL’s Subject Concept Explorer and continues to make ongoing improvements to UMBEL’s visualization. Moritz’s Web site and separate blog are each worth perusing for neat graphics and ideas.

Thanks, all of you! This is a day we have worked long and hard to see come to reality. As Fred puts it, let the fun begin!

Posted by AI3's author, Mike Bergman Posted on July 16, 2008 at 2:19 pm in Adaptive Innovation, Linked Data, Semantic Web, UMBEL | Comments (4)
Posted: July 6, 2008

Breakthroughs in the Basis, Nature and Organization of Information Across Human History

I’m pleased to present a timeline of 100 or so of the most significant events and developments in the innovation and management of information and documents, from cave paintings (ca. 30,000 BC) to the present. Click on the link to the left or on the screen capture below to go to the actual interactive timeline.

This timeline has fast and slow scroll bands — including bubble popups with more information and pictures for each of the entries offered. (See the bottom of this posting for other usage tips.)

Note that the timeline presents only non-electronic innovations and developments, from alphabets to writing to printing to conventions for organizing information. Because so many innovations are concentrated in the last 100 years, digital and electronic communications are somewhat arbitrarily excluded from the listing.

I present below some brief comments on why I created this timeline, some caveats about its contents, and some basic use tips. I conclude with thanks to the kind contributors.

Why This Timeline?

Readers of this AI3 blog or my detailed bio know that information — whether biological, embodied in genes, or cultural, embodied in human artefacts — has been my lifelong passion. I enjoy making connections between the biological and the cultural with respect to human adaptivity and future prospects, and I like to dabble on occasion as an amateur economic or information science historian.

About 18 months ago I came across David Huynh‘s nifty Exhibit lightweight data display widget, gave it a glowing review, and then proceeded to convert my growing Sweet Tools listing of semantic Web and related tools to that format. Exhibit still powers the listing (which I just updated yesterday for the twelfth time or so).

At the time of first rolling out Exhibit I also noted that David had earlier created another lightweight timeline display widget that looked similarly cool (and which was also the first API for rendering interactive timelines in Web pages). (In fact, Exhibit and Timeline are but two of the growing roster of excellent lightweight tools from David.) Once I completed adopting Exhibit, I decided to find an appropriate set of chronological or time-series data to play next with Timeline.

I had earlier been ruminating on one of the great intellectual mysteries of human development: Why, roughly beginning in 1820 to 1850 or so, did the historical economic growth patterns of all prior history suddenly take off? I first wrote on this about two years ago in The Biggest Disruption in History: Massively Accelerated Growth Since the Industrial Revolution, with a couple of follow-ups and expansions since then.

I realized that in developing my thesis that wood pulp paper and mechanized printing were the key drivers for this major inflection change in growth (as they enabled literacy and broadscale access to written information) I already had the beginnings of a listing of various information innovations throughout history. So, a bit more than a year ago, I began adding to that list in terms of how humans learned to write, print, share, organize, collate, reproduce and distribute information and when those innovations occurred.

There are now about 100 items in this listing (I’m still looking for and researching others; please send suggestions at any time. ;) ). Here are some of the current items, in roughly chronological order:

cave paintings, ideographs, calendars, cuneiform, papyrus (paper), hieroglyphs, ink, alphabet, Phaistos Disc, logographs, maps, scrolls, manuscripts, glossaries, dictionaries, parchment (paper), bibliographies, concept of categories, library, classification system (library), zero, paper, codex, woodblock printing, tree diagram, quill pen, library catalog, movable type, almanacs, paper (rag), word spaces, registers, intaglio, printing press, advertising (poster), bookbinding, pagination, punctuation, library catalog (printed), public lending library, dictionaries (alphabetic), newspapers, information graphics, scientific journal, footnotes, copyrights, encyclopedia, capitalization, magazines, taxonomy (binomial classification), statistics, timeline, data graphs, card catalogs, lithography, punch cards, steam-powered (mechanized) papermaking, book (machine-paper), chemical symbols, mechanical pencil, chromolithography, paper (wood pulp), rotary press, mail-order catalog, fountain pen, microforms, thesaurus, pencil (mass produced), rotary perfection press, catalogues, typewriter, periodic table, chemical pulp (sulfite), classification (Dewey), linotype, mimeograph machine, kraft process (pulp), flexography, classification (LoC), classification (UDC), offset press, screenprinting, ballpoint pen, xerographic copier, hyperlink, metadata (MARC)

So, off and on, I have been working with and updating the data and display of this timeline in draft. (I may someday also post my notes about how to effectively work with the Timeline widget.)

With the listing above, the compilation was sufficiently complete to finally post this version. One of the neat things about Timeline is the ability to drive the display from a simple XML listing. I will update the timeline when I next have an opportunity to fill in some of the missing items still remaining on my innovations list, such as alphabetization, citations, and tables of contents, among many others.
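
As a sketch of the kind of simple XML listing that can drive the Timeline widget, the snippet below generates a two-event example. (The element and attribute names follow the common SIMILE Timeline event format; treat them, and the approximate dates, as assumptions for illustration rather than this timeline's exact data file.)

```python
# Generate a minimal Timeline-style XML event listing (illustrative only).
import xml.etree.ElementTree as ET

# (start date, title, popup description); dates approximate
events = [
    ("1450", "printing press", "Gutenberg's movable-type press."),
    ("1665", "scientific journal", "First scholarly periodicals appear."),
]

data = ET.Element("data")
for start, title, desc in events:
    ev = ET.SubElement(data, "event", start=start, title=title)
    ev.text = desc  # becomes the bubble popup body

xml_listing = ET.tostring(data, encoding="unicode")
print(xml_listing)
```

Keeping the display data in one flat XML file like this is what makes periodic updates to the timeline a matter of editing a list rather than touching any code.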

Some Interpretation Caveats

Of course, rarely can an innovation be traced to a single individual or a single moment in time. Historians are increasingly documenting the cultural milieu and multiple individuals that affect innovation.

In these regards, then, a timeline such as this one is simplistic and prone to much error and uncertainty. We have no real knowledge, for example, of the precise time certain historical innovations occurred, and others (the ballpoint pen being one case in point) are a matter of interpretation as to what and when constituted the first expression. For instances where the record indicated multiple dates, I chose the date when the innovation was released to the public.

Nonetheless, given the time scales here of more than 30,000 years, I do think broad trends and rough time frames can be discerned. As long as one interprets this timeline as indicative and not as definitive in any scholarly sense, I believe it can inform and provide some insight and guidance for how information has evolved over human history.

Some Use Tips

The operation of Timeline is pretty straightforward and intuitive. Here are a couple of tips to get a bit more out of playing with it:

  • The timeline has two scrolling panels, fast and slow. For rapid scrolling, use mouse down and left or right movement on the lower panel
  • The lower panel also shows small ticks for each innovation in the upper panel
  • Clicking any icon or label in the upper panel will cause a bubble popup to appear with a bit more detail and a picture for the item; click the ‘X’ to close the bubble
  • Each entry is placed in one or more categories keyed by icon. You may “filter” results by using keywords such as: alphabets, book, calendars, libraries, maps, mechanization, paper, papermaking, printing, organizing, scripts, standardization, statistics, timelines, or typography. Partial strings also match
  • Similarly, you may enter one of those same terms into one of the four color highlight boxes. Partial strings also match.

Sources, Contributions and Thanks

For the sake of consistency, nearly all entries and pictures on the timeline are drawn from the respective entries within Wikipedia. Subsequent updates may add to this listing by reference to original sources, at which time all sources will be documented.

The timeline icons are from David Vignoni’s Nuvola set, available under the LGPL license. Thanks David!

The fantastic Timeline was developed by David Huynh while he was a graduate student at MIT. Timeline and its sibling widgets were developed under funding from MIT’s Simile program. Thanks to all in the program and best wishes for continued funding and innovation.

Finally, my sincere thanks go to Professor Michael Buckland of the School of Information at the University of California, Berkeley, for his kind suggestions, input and provision of additional references and sources. Of course, any errors or omissions are mine alone. I also thank Professor Buckland for his admonitions about use and interpretation of the timeline dates.

Posted:June 23, 2008

We Offer a Definition and Some Answers to Enterprise Questions

The recent LinkedData Planet conference in NYC marked, I think, a real transition point. The conference signaled the beginning movement of the Linked Data approach from the research lab to the enterprise. As a result, the conference had something of a split personality at many different levels: business and research perspectives; realists and idealists; straight RDF and linked data RDF; even the discussions in the exhibit area versus some of the talks presented from the podium.

Like any new concept, my sense was a struggle around terminology and common language and the need to bridge different perspectives and world views. Like all human matters, communication and dialog were at the core of the attendees’ attempts to bridge gaps and find common ground. Based on what I saw, much progress was made.

The reality, of course, is that Linked Data is still very much in its infancy, and its practice within the enterprise is just beginning. Much of what was heard at the conference was theory rather than practice and use cases. That should and will change rapidly.

In an attempt to help move the dialog further, I offer a definition and Structured Dynamics’ perspective to some of the questions posed in one way or another during the conference.

Linked Data Defined

Sources such as the four principles of Linked Data in Tim Berners-Lee’s Design Issues: Linked Data and the introductory statements on the Linked Data Wikipedia entry approximate — but do not completely express — an accepted or formal or “official” definition of Linked Data per se. Building from these sources and attempting to be more precise, here is the definition of Linked Data we use internally:

Linked Data is a set of best practices for publishing and deploying instance and class data using the RDF data model, naming the data objects using uniform resource identifiers (URIs), and exposing the data for access via the HTTP protocol, while emphasizing data interconnections, interrelationships and context useful to both humans and machine agents.

All references to Linked Data below embrace this definition.
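
To make the definition concrete, here is a toy sketch of what it prescribes: data objects named with HTTP URIs and related to one another by predicate links, per the RDF model. All URIs and names below are hypothetical, invented for illustration.

```python
# A toy illustration of the definition above: data objects named with
# HTTP URIs and related by predicate links. All URIs are hypothetical.
EX = "http://example.org/"          # hypothetical namespace
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    # (subject, predicate, object) -- the RDF data model
    (EX + "alice", RDF_TYPE, EX + "Person"),           # instance-to-class link
    (EX + "alice", EX + "worksFor", EX + "acme"),      # instance-to-instance link
    (EX + "Person", EX + "subClassOf", EX + "Agent"),  # class-to-class link
]

# Because every node is an HTTP URI, each is dereferenceable in
# principle -- the hook that makes the data "linked".
def objects_of(subject, triples):
    """Return everything a given subject links to."""
    return [o for s, _, o in triples if s == subject]

print(objects_of(EX + "alice", triples))
```

Note that the sketch mixes instance and class statements deliberately; per the definition, both kinds of data participate in Linked Data.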

Some Clarifying Questions

I’m sure many other questions were raised, but listed below are some of the more prominent ones I heard in the various conference Q&A sessions and hallway discussions.

1. Does Linked Data require RDF?

Yes. Though other approaches can also model the first-order predicate logic (FOL) of subject-predicate-object assertions at the core of the Resource Description Framework data model, RDF is the one based on the open standards of the W3C. RDF and FOL are powerful because of their simplicity, their ability to express complex schema and relationships, and their suitability for modeling all extant data frameworks for unstructured, semi-structured and structured data.

2. Is publishing RDF sufficient to create Linked Data?

No. Linked Data represents a set of techniques applied to the RDF data model that names all objects as URIs and makes them accessible via the HTTP protocol (as well as other considerations; see the definition above and further discussion below).

Some vendors and data providers claim Linked Data support, but if their data is not accessible via HTTP using URIs for data object identification, it is not Linked Data. Fortunately, it is relatively straightforward to convert non-compliant RDF to Linked Data.

3. How does one publish or deploy Linked Data?

There are some excellent references for how to publish Linked Data. Examples include a tutorial, How to Publish Linked Data on the Web, and a white paper, Deploying Linked Data, using the example of OpenLink’s Virtuoso software. There are also recommended approaches and ways to use URI identifiers, such as the W3C’s working draft, Cool URIs for the Semantic Web.

However, there are not yet published guidelines for how also to meet the definition above, with its additional emphasis on class and context matching. A number of companies and consultants, including Zitgist, presently provide such assistance.

The key principles, however, are to make links aggressively between data items with appropriate semantics (properties or relations; that is, the predicate edges between the subject and object nodes of the triple) using URIs for the object identifiers, all being exposed and accessible via the HTTP Web protocol.

4. Is Linked Data just another term or branding for the Semantic Web?

Absolutely not, though this is a source of some confusion at present.

The Semantic Web is probably best understood as a vision or goal where semantically rich annotation of data is used by machine agents to make connections, find information or do things automatically in the background on behalf of humans. We are on a path toward this vision or goal, but under this interpretation the Semantic Web is more of a process than a state. By understanding that the Semantic Web is a vision or goal we can see why a label such as ‘Web 3.0’ is perhaps simplistic and incomplete.

Linked Data is a set of practices somewhere in the early middle of the spectrum from the initial Web of documents to this vision of the Semantic Web. (See my earlier post at bottom for a diagram of this spectrum.)

Linked Data is here today, doable today, and pragmatic today. Meaningful semantic connections can be made and there are many other manifest benefits (see below) with Linked Data, but automatic reasoning in the background or autonomic behavior is not yet one of them.

Strictly speaking, then, Linked Data represents doable best practices today within the context both of Web access and of this yet unrealized longer-term vision of the Semantic Web.

5. Does Linked Data only apply to instance data?

Definitely not, though early practice has been interpreted by some as such.

One of the stimulating, but controversial, keynotes of the conference was from Dr. Anant Jhingran of IBM, who made the strong and absolutely correct observation that Linked Data requires the interplay and intersection of people, instances and schema. From his vantage, early exposed Linked Data has been dominated by instance data from sources such as Wikipedia and has lacked the schema (class) relationships that enterprises are based upon. The people aspect, in terms of connections, collaboration and joint buy-in, is also the means for establishing trust in and authority for the data.

In Zitgist’s terminology, class-level mappings ‘explode the domain’ and produce information benefits similar to Metcalfe’s Law as a function of the degree of class linkages [1]. While this network effect is well known to the community, it has not yet been shown much in current Linked Data sets. As Anant pointed out, schemas define enterprise processes and knowledge structures. Demonstrating schema (class) relationships is the next appropriate task for the Linked Data community.

6. What role do “ontologies” play with Linked Data?

In an RDF context, “ontologies” are the vocabularies and structures that capture the schema structures noted above. Ontologies embody the class and instance definitions and the predicate (property) relations that enable legacy schemas and data to be transformed into Linked Data graphs.

Though many public RDF vocabularies and ontologies presently exist, and should be re-used where possible and where the semantics match the existing legacy information, enterprises will require specific ontologies reflective of their own data and information relationships.

Despite the newness or intimidation perhaps associated with the “ontology” term, ontologies are no more complex — indeed, are simpler and more powerful — than the standard relational schema familiar to enterprises. If you’d like, simply substitute schema for ontology and you will be saying the same thing in an RDF context.

7. Is Linked Data a centralized or federated approach?

Neither, really, though the rationale and justification for Linked Data is grounded in federating widely disparate sources of data that can also vary widely in existing formalism and structure.

Because Linked Data is a set of techniques and best practices for expressing, exposing and publishing data, it can easily be applied to either centralized or federated circumstances.

However, the real world where any and all potentially relevant data can be interconnected is by definition a varied, distributed, and therefore federated world. Because of its universal RDF data model and Web-based techniques for data expression and access, Linked Data is the perfect vehicle, finally, for data integration and interoperability without boundaries.

8. How does one maintain context when federating Linked Data?

The simple case is where two data sources refer to the exact same entity or instance (individual) with the same identity. The standard owl:sameAs predicate is used to assert the equivalence in such cases.
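
A minimal sketch of this simple case, with hypothetical URIs: asserted sameAs equivalences are merged with a small union-find so that statements made against any alias can be pooled under one canonical identifier.

```python
# Sketch of merging identical entities via sameAs assertions, as in the
# simple case above. Uses a union-find over hypothetical URIs.
same_as = [
    ("http://dbpedia.org/resource/Paris", "http://example.org/paris"),
    ("http://example.org/paris", "http://example.com/ville-de-paris"),
]

parent = {}

def find(x):
    """Canonical representative of x's equivalence class."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in same_as:
    union(a, b)

# All three URIs now resolve to one representative, so facts stated
# against any of them can be aggregated together.
uris = {u for pair in same_as for u in pair}
print({find(u) for u in uris})
```

The transitive, symmetric closure this computes is exactly what makes sameAs assertions so powerful, and also why they should be asserted with care.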

The more important case is where the data sources are about similar subjects or concepts, in which case a structure of well-defined reference classes is employed. Furthermore, if these classes can themselves be expressed in a graph structure capturing the relationships amongst the concepts, we now have some fixed points in the conceptual information space for relating and tying together disparate data. Still further, such a conceptual structure also provides the means to relate the people, places, things, organizations, events, etc., of the individual instances of the world to one another as well.

Any reference structure that is composed of concept classes that are properly related to each other may provide this referential “glue” or “backbone”.

One such structure provided in open source by Zitgist is the 21,000 subject concept node structure of UMBEL, itself derived from the Cyc knowledge base. In any event, such broad reference structures may often be accompanied by more specific domain conceptual ontologies to provide focused domain-specific context.

9. Does data need to be “open” to qualify as Linked Data?

No, absolutely not.

While Linked Data has, to date, been demonstrated using public Web data, and many desire to expose more through the open data movement, there is nothing preventing private, proprietary or subscription data from being Linked Data.

The Linking Open Data (LOD) group, formed about 18 months ago to showcase Linked Data techniques, began with open data. To sever the idea that the approach applies only to open data, François-Paul Servant has specifically identified the parallel concept of Linking Enterprise Data (see also the accompanying slides).

For example, with Linked Data (and not the more restrictive LOD sense), two or more enterprises or private parties can legitimately exchange private Linked Data over a private network using HTTP. As another example, Linked Data may be exchanged on an intranet between different departments, etc.

So long as the principles of URI naming, HTTP access, and linking predicates where possible are maintained, the approach qualifies as Linked Data.

10. Can legacy data be expressed as Linked Data?

Absolutely yes, without reservation. Indeed, non-transactional legacy data perhaps should be expressed as Linked Data in order to gain its manifest benefits. See #14 below.

11. Can enterprise and open or public data be intermixed as Linked Data?

Of course. Since Linked Data can be applied to any data formalism, source or schema, it is perfectly suited to integrating data from inside and outside the firewall, open or private.

12. How does one query or access Linked Data?

The basic query language for Linked Data is SPARQL (pronounced “sparkle”), which bears a close resemblance to SQL but applies to an RDF data graph. The actual datastores used for RDF may also add a fourth element to the triple for graph namespaces, which can bring access and scale efficiencies; in these cases, the system is known as a “quad store”. Additional techniques may be applied to filter the data prior to the SPARQL query for further efficiencies.

Templated SPARQL queries and other techniques can lead to very efficient and rapid deployment of various Web services and reports, two techniques often applied by Zitgist and other vendors. For example, all Zitgist DataViewer views and UMBEL Web services are expressed using such SPARQL templates.

This SPARQL templating approach may also be combined with the use of templating standards such as Fresnel to bind instance data to display templates.
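
A templated SPARQL query can be as simple as a canned query with named slots filled in per request. The template and slot names below are illustrative only, not Zitgist’s actual templates; the named-graph slot corresponds to the fourth, “quad store” element mentioned above.

```python
# A sketch of a templated SPARQL query: a canned query whose named
# slots are filled per request. Template and slot names illustrative.
from string import Template

LABEL_QUERY = Template("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label
FROM <$graph>
WHERE { <$subject> rdfs:label ?label }
LIMIT $limit
""")

query = LABEL_QUERY.substitute(
    graph="http://example.org/graph/people",   # named graph (quad store slot)
    subject="http://example.org/alice",        # hypothetical data object
    limit=10,
)
print(query)
```

The resulting string would then be posted to a SPARQL endpoint; keeping a library of such templates is what allows new report and service endpoints to be stood up so quickly.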

13. How is access control or security maintained around Linked Data?

In Zitgist’s view, access control or security occurs at the layer of the HTTP access and protocols, and not at the Linked Data layer. Thus, the same policies and procedures that have been developed for general Web access and security are applicable to Linked Data.

However, standard data level or Web server access and security can be enhanced by the choice of the system hosting the data. Zitgist, for example, uses OpenLink’s Virtuoso universal server that has proven and robust security mechanisms. Additionally, it is possible to express security and access policies using RDF ontologies as well. These potentials are largely independent of Linked Data techniques.

The key point is that there is nothing unique or inherent to Linked Data with respect to access or control or security that is not inherent with standard Web access. If a given link points to a data object from a source that has limited or controlled access, its results will not appear in the final results graph for those users subject to access restrictions.

14. What are the enterprise benefits of Linked Data? (Why adopt it?)

For more than 30 years — since the widespread adoption of electronic information systems by enterprises — the Holy Grail has been complete, integrated access to all data. With Linked Data, that promise is now at hand. Here are some of the key enterprise benefits to Linked Data, which provide the rationales for adoption:

  • Via the RDF model, equal applicability to unstructured, semi-structured, and structured data and content
  • Elimination of internal data “silos”
  • Integration of internal and external data
  • Easy interlinkage of enterprise, industry-standard, open public and public subscription data
  • Complete data modeling of any legacy schema
  • Flexible and easy updates and changes to existing schema
  • An end to the need to re-architect legacy schema resulting from changes to the business or M & A
  • Report creation and data display based on templates and queries, not IT departments
  • Data access, analysis and manipulation pushed out to the user level, and, generally
  • The ability of internal Linked Data stores to be maintained by existing DBA procedures and assets.

15. What are early applications or uses of Linked Data?

Linked Data is well suited to traditional knowledge base or knowledge management applications. Its near-term application to transactional or material process applications is less apparent.

Of special use is the value-added from connecting existing internal and external content via the network effect from the linkages [1].
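
As a rough illustration of that network effect [1], the number of possible links among n data objects grows quadratically, which is why each added connection compounds the value of the ones before it:

```python
# Illustrating the footnote's point: the number of distinct links
# possible among n nodes grows quadratically with n.
def possible_links(n):
    """Distinct undirected links among n nodes: n*(n-1)/2, i.e. O(n^2)."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, possible_links(n))
```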

A Hearty Thanks

Johnnie Linked Data is starting to grow up. Our little semantic Web toddler is moving beyond ga-ga-goo-goo to saying his first real sentences. Language acquisition will come rapidly, and, like what all of us have seen with our own children, they will grow up faster than we can imagine.

There were so many at this meeting who had impact and meaning for this exciting transition point that I won’t list specific names at the risk of leaving others off. Those of you who made so many great observations or stayed up late interacting with passion know who you are. Let me simply say: Thanks!

The LinkedData Planet conference has shown, to me, that enterprises are extremely interested in what our community has developed and now proven. They are asking hard questions and will be difficult taskmasters, but we need to listen and respond. The attendees were a selective and high-quality group, understanding of their own needs and looking for answers. We did an OK job of providing those answers, but we can do much, much better.

I reflect on these few days now knowing something I did not truly know before: the market is here and it is real. The researchers who have brought us to this point will continue to have much to research. But, those of us desirous of providing real pragmatic value and getting paid for it, can confidently move forward knowing both the markets and the value are real. Linked Data is not magic, but when done with quality and in context, it delivers value worth paying for.

To all of the fellow speakers and exhibitors, to all of the engaged attendees, and to the Jupitermedia organizers and Bob DuCharme and Ken North as conference chairs, let me add my heartfelt thanks for a job well done.

Next Steps and Next Conference

The next LinkedData Planet conference and expo will be October 16-17, 2008, at the Santa Clara Hyatt in Santa Clara, California. The agenda has not been announced, but hopefully we will see a continuing enterprise perspective and some emerging use cases.

Zitgist as a company will continue to release and describe its enterprise products and services, and I will continue to blog on Linked Data matters of specific interest to the enterprise. Pending topics include converting legacy data to Linked Data, converting relational data and schema to Linked Data, placing context to Linked Data, and many others. We think you will like the various announcements as they arise. ;)

Zitgist is also toying with the use of a distinctive icon to indicate the availability of Linked Data conforming to the principles embodied in the questions above. (The color choice is an adoption of the semantic Web logo from the W3C.) The use of a distinctive icon is similar to what RSS feeds or microformats have done to alert users to their specific formats. Drop me a line and let us know what you think of this idea.

[1] Metcalfe’s law states that the value of a telecommunications network is proportional to the square of the number of users of the system (n²), where the linkages between users (nodes) exist by definition. For information bases, the data objects are the nodes. Linked Data works to add the connections between the nodes. We can thus modify the original sense to become Zitgist’s Law: the value of a Linked Data network is proportional to the square of the number of links between the data objects.