UMBEL (Upper-level Mapping and Binding Exchange Layer) is a lightweight reference structure for placing Web content, named entities and data in context with other data. It comprises about 21,000 subject concepts and their relationships — with one another and with external vocabularies and named entities.
Each UMBEL subject concept represents a defined reference point for asserting what a given chunk of content is about. These fixed hubs enable similar content to be aggregated and then placed into context with other content. These subject context hubs also provide the aggregation points for tying in their class members, the named entities which are the people, places, events, and other specific things of the world.
The backbone to UMBEL is the relationships amongst these subject concepts. It is this backbone that provides the contextual graph for inter-relating content. UMBEL’s subject concepts and their relationships are derived from the OpenCyc version of the Cyc knowledge base.
UMBEL’s backbone is also a reference structure for more specific domains or ontologies, thereby enabling further context for inter-relating additional content. Much of the sandbox shows these external relationships.
This first set of Web services provides online demo sandboxes, descriptions of what they are about, and their API documentation. The first 11 services are:
The single service that provides the best insight to what UMBEL is all about is the Subject Concept Detailed Report. (That is probably because this service is itself an amalgam of some of the others.)
Starting from a single concept amongst the 21,000, in this case ‘Mammal’, we can get descriptions or definitions (the proper basis for making semantic relationships, not the ‘Mammal’ label), aliases and semsets, equivalent classes (in OWL terms), named entities (for leaf concepts), more general or specific external classes, and domain and range relationships with other ontologies. Here is the sample report for ‘Mammal’:
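To make the shape of such a report concrete, here is a minimal Python sketch of the fields it aggregates. The class and field names are my own illustration of the structure described above, not UMBEL’s actual API or vocabulary:

```python
from dataclasses import dataclass, field

# Illustrative sketch of what a Subject Concept Detailed Report aggregates
# for a single concept. All names here are hypothetical, chosen to mirror
# the prose description, not drawn from UMBEL's published schema.
@dataclass
class SubjectConceptReport:
    uri: str
    preferred_label: str
    definition: str                 # the proper basis for semantic relationships
    semset: list = field(default_factory=list)              # aliases and near-synonyms
    equivalent_classes: list = field(default_factory=list)  # owl:equivalentClass mappings
    broader: list = field(default_factory=list)             # more general subject concepts
    narrower: list = field(default_factory=list)            # more specific subject concepts
    external_classes: list = field(default_factory=list)    # mapped external ontology classes
    named_entities: list = field(default_factory=list)      # instances, for leaf concepts

# A hypothetical record for the 'Mammal' concept discussed above.
mammal = SubjectConceptReport(
    uri="http://umbel.org/umbel/sc/Mammal",
    preferred_label="Mammal",
    definition="A warm-blooded vertebrate animal that nurses its young.",
    semset=["mammals", "mammalian"],
    broader=["Vertebrate"],
    narrower=["Primate", "Rodent"],
)
```

Note that the matching is done on the definition, not the label: two concepts with the same label but different definitions remain distinct reference points.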
The discerning eye likely observes that while there is a rich set of relationships to the internal UMBEL subject concepts, coverage is still light for external classes and named entities. This sandbox is, after all, a first release, and we are early in the mapping process.
But, it should also start to become clear that the ability of this structure to map and tie in all forms of external concepts and class structures is phenomenal. Once such class relationships are mapped (to date, most other Linked Data only occurs at the instance level), all external relationships and properties can be inherited as well. And, vice versa.
So, for aficionados of the network effect, stand back! You ain’t seen nothing yet. If we have seen amazing emergent properties arising from the people and documents on the Web, with data we move to another quantum level, like moving from organisms to cells. The leverage of such concept and class structures to provide coherence to atomic data is literally primed to explode.
To put it mildly, trying to get one’s mind around the idea of 21,000 concepts and all of their relationships and all of their possible tie in points and mappings to still further ontologies and all of their interactions with named entities and all of their various levels of aggregation or abstraction and all of their possible translations into other languages or all of their contextual descriptions or all of their aliases or synonyms or all of their clusterings or all of their spatial relationships or all of the still more detailed relationships and instances in specific domains or, well, whew! You get the idea.
It is all pretty complex and hard to grasp.
One great way to wrap one’s mind around such scope is through interactive visualization. The first UMBEL service to provide this type of view is the Subject Concept Explorer, a screenshot of which is shown here:
But really, to gain the true feel, go to the service and explore for yourself. It feels like snorkeling through those schools of billions of tiny silver fish. Very cool!
These amazing visualizations are being brought to us by Moritz Stefaner, imho one of the best visualization and Flash gurus around. We will be showcasing more about Moritz’s unbelievable work in some forthcoming posts, where some even cooler goodies will be on display. His work is also on display at a couple of other sites that you can spend hours drooling over. Thanks, Moritz!
You should note that developer access to the actual endpoints and external exposure of the subject concepts as Linked Data are not yet available. The endpoints, Linked Data and further technical documentation will be forthcoming shortly.
The currently displayed services and demos provided on this UMBEL Web services site are a sandbox for where the project is going. Next releases will soon provide as open source under attribution license:
When we hit full stride, we expect to be releasing still further new Web services on a frequent basis.
BTW, for more technical details on this current release, see Fred Giasson’s accompanying post. Fred is the magician who has brought much of this forward.
Exactly one month ago I wrote in The Shaky Semantics of the Semantic Web, “The time is now and sorely needed to get the issues of representation, resources and reference cleaned up once and for all.”
The piece was prompted by growing rumblings on semantic Web mailing lists and elsewhere about semantic Web terminology, plus concerns that lack of clarity was opening the door for re-branding or appropriating the semantic Web ‘space.’ I observed these issues were “complex and vexing boils just ready to erupt through the surface.”
My own post was little noticed, but the essential observations, I think, were correct. In the past month the rumblings have become a distinct growl and aspects of the debate are now coming into direct focus. I think the perspective has (thankfully) shifted from a reluctance to re-open the arcanely named “httpRange-14” debate of three years past (Wikipedia offers a less technical explanation of its role in bringing the concept of “information resource” to the Web) to perhaps finally lancing the boil.
Many of us monitor multiple mailing lists; they seem to have their own ebb and flow, most often quiet, but sometimes rat-a-tat-tat furious. In and of itself, it is fascinating to see which topics and threads catch fire while others remain fallow.
One mailing list that I monitor is that of the W3C‘s Technical Architecture Group (TAG), in essence the key deliberative body for technical aspects of the Web. Key authors of the Web such as Tim Berners-Lee, Roy Fielding and many, many others of stature and knowledge either sit on the TAG or participate in its deliberations. The TAG’s public mailing list is immensely helpful for learning about technical aspects of the Web and for getting a bit of early warning regarding upcoming issues. The W3C and its TAG are exemplars of open community process and governance in the Internet era.
I assume many hundreds monitor the TAG list; most, like me, comment rarely or not at all. The matters can indeed be quite technical and there is much history and well-thought rationale behind the architecture of the Web.
Xiaoshu Wang has recently been a quite active participant. English is not Xiaoshu’s native language, but because of his passion he has nonetheless been a determined protagonist to probe the basis and rationale behind the use of resources, representations and descriptions on the Web. These are difficult concepts under the best of circumstances, made all the harder due to language differences and special technical senses that have been adopted by the TAG in its prior deliberations.
These concerns were first and most formally expressed in a technical report, URI Identity and Web Architecture Revisited, by Xiaoshu and colleagues in November 2007.
My layman’s explanation of Xiaoshu’s concerns is that the earlier httpRange-14 decision to establish a technical category of “information resources” begs and leaves open the question of the inverse — what has been called a “non-information resource” — and actually violates prior semantics and understandings of what should be better understood as representations.
This discussion arose in relation to Uniform Access to Descriptions, a thread begun by Jonathan Rees of the TAG to assemble use cases related to httpRedirections-57, a proposal to standardize the description of URI things, such as documents, by rejuvenating the link header. Because of its topic, discussion of httpRange-14 was discouraged, since putatively the core definition of “information resource” was not at issue.
However, after the introduction of a most interesting pre-print, In Defense of Ambiguity, co-author Harry Halpin perhaps inadvertently opened the door to the httpRange-14 question again. Then, Xiaoshu began submitting and commenting in earnest, and Stuart Williams of the TAG, in particular, was helpful and patient in helping to draw out and articulate the points.
My observation is that Xiaoshu was never advocating a change in the basic or current architecture of the Web, but perhaps that was not apparent or readily clear. Again, the frailty of human communications compounded by language and perspective have been much in evidence.
Pat Hayes, the editor of the excellent RDF Semantics W3C recommendation, then intervened as interlocutor for Xiaoshu’s basic positions. Many, many others, notably including Berners-Lee and Fielding, have also joined the fray. The entire thread is worth reading and studying.
Since Xiaoshu has publicly endorsed Hayes’ interpretation, here are some important snippets from Pat’s articulation:
There simply is no other word [than 'represents'] that will do. And the size, history and, I’m sorry, but scholarly and intellectual authority of the community which uses a wider sense of ‘represent’ so greatly exceeds the AWWW [W3C Web] community that I don’t think you can reasonably claim possession of such a basic and central term for such a very narrow, arcane and special (and, by the way, under-defined) sense.
If AWWW had used a technical word in a new technical way, then this would likely have been harmless. Mathematics re-used ‘field’ without getting confused with agriculture. But the AWWW/semantics clash over the meaning of ‘represent’ is harmful because the senses are not independent: the AWWW usage is a (very) special case of the original meaning, so it is inherently ambiguous every time it is used; and, still worse, we need the broader meaning in these very discussions, because the TAG has decreed that URIs can denote anything: so we are here discussing semantics in a broad sense whether we like it or not. And if the word ‘represent’ is to be co-opted to be used only in one very narrow sense, then we have no word left for the ordinary semantic sense. To adopt a usage like this is almost pathological in the way it is likely to generate confusion (as it already has, and continues to do so, in spades.)
The way we name Web pages is a special case of this picture, where the ‘storyteller’ is the same thing as the resource. Things that can be their own storytellers fit nicely within current AWWW, with its official understanding of words like ‘represent’. (In fact, capable of being ones own storyteller might be a way to define ‘information resource’.) But the nice thing about this picture [as presented by Xiaoshu] is that other kinds of resource, which do not fit at all within the AWWW – things that aren’t documents, ‘non-information resources’ – also fit within it; still, ironically, using the AWWW language, but with a semantic rather than AWWW sense of ‘represent’.
Right now, the semantic web really does not have a coherent story to tell about how it works with non-information resources, other than it should use RDF (plus whatever is sitting on it in higher levels) to describe them; which says nothing, since RDF can describe anything. URIs in RDF are just names, their Web role as http entities semantically irrelevant. Http-range-14 connects access and denotation for document-ish things, but for other things we have no account of how they should or should not be related, or what anything a URI might access via http has got to do with what it denotes.
The way that the three participants (denoted-thing, URI-name and Web-information-resource ‘storyteller’) interact must be basically different when the denoted-thing isn’t an information resource from when it is. All that being suggested here is that there is an account that we could give about this, one that works in both cases and which fits the language of AWWW quite, er, nicely.
A person exists and has properties entirely separate from the Web. Many people have nothing to do with the Web in their entire lives. People are not Web objects. And when the URI is being used in an RDF graph to refer to a person, the fact that it starts with http: is nothing more than a lexical accident, which has no bearing whatever on the role of the URI as a name denoting a person.
I think this particular shoe is on the other foot. If you can actually say, clearly enough to prevent continual trails of endless email debate, what AWWW actually means by ‘represent’, then I’d be delighted if you would use some technical word to refer to that elusive notion. But the word ‘represent’ and its cognates has been a technical word in far larger and more precisely stated forums for over a century; and since the day that Web science has included the semantic web, AWWW has taken an irrevocable step into the same academy. You are using the language of semantics now. If you want to be understood, you have to learn to use it correctly.
All it would do is move the responsibility of deciding what a URI denotes from a rather messy and widely ill-understood distinction based on http codes, to a matter of content negotiation. This would allow phenomena which violate http-range-14, but it would by no means insist on such violations in all cases. In fact, if we were to agree on some simple protocols for content negotiation which themselves referred to http codes, it could provide a uniform mechanism for implementing the http-range decision.
Moreover, this approach would put ‘information resources’ on exactly the same footing as all other things in the matter of how to choose representations of them for various purposes, a uniformity which means little at present but is likely to increase in value in the future.
But right now, for the case where a URI is understood to denote something other than an information resource, we have a completely blank slate. There is nothing which tells our software how to interoperate in this case. Our situation is not a kind of paradise of reference-determination from which Xiaoshu and I are threatening to have everyone banished. Right now for the semantic web, things are about as bad as they can get.
. . . we, as a society, can use [the conventions we decide] for whatever we decide and find convenient. The Web and the Internet are replete with mechanisms which are being used for purposes not intended by their original designers, and which are alien to their original purpose. For a pertinent example, the http-range-14 decision uses http codes in this way. That isn’t what http codes are for.
I have repeated much of this material because I believe it to be of wide import to the semantic Web’s development and future. Obviously, for better understanding, the full thread, plus its generous sprinkling of excellent prior documents and discussions, is highly recommended.
There are certainly technical aspects to this debate that go well beyond my ken. I strongly suspect there are edge cases for which more complicated technical guidance is warranted.
And, it is true, I have been selective in which sides of this debate I am highlighting and therefore supporting. This is not accidental.
While some in this debate have claimed the need to conform to existing doctrine in order to ensure interoperability or the integrity of software systems, from my different perspective as someone desiring to help build a market by extending reach into the broader public, that argument is false. Let’s take the existing architecture we have, but make our best practices recipes simple, our language clear, and our semantics correct. How can we really promote and grow the semantic Web when our own semantics are so patently challenged?
Our community faces a challenge of poor terminology and muddled concepts (or, perhaps more precisely, concepts defined in relation to the semantic Web that are not in conformance with standard understandings). My strong suspicion is that we risk at present over-specification and just plain confusion in the broader public.
This mailing list debate is hugely important, informative and thought provoking. Xiaoshu deserves thanks for his courage and tenacity in engaging this debate in a non-native language; Pat Hayes deserves thanks for trying to capture the arguments in language and terminology more easily understandable to the rest of us and for adding his own considerable experience to the debate; and many of the mailing list regulars deserve sincere thanks for being patient and engaged enough to allow the nuances of these arguments to unfold.
From my standpoint there is real pragmatic value to these arguments that would bring the terminology and semantics of the semantic Web into better understood and more easily communicated usage, all without affecting or changing the underlying architecture of the Web. (Or, so, to my naïve viewpoint, the argument seems to suggest.)
So long as the semantic Web’s practitioners still number in the hundreds, and those with nuanced understanding of these arcane matters likely only in the scores, the time is ripe to get the language and concepts right. Doing so can help our enterprise reach millions and much more quickly.
As late as 2002, no single search engine indexed the entire surface Web. Much has been written about that time, but the emergence of Google (and others; it was a key battle at the time) extended full search coverage to the Web, ending the need for so-called desktop metasearchers, then the only option for getting full Web search coverage.
Strangely, though full coverage of document indexing had been conquered for the Web, dynamic Web sites and database-backed sites fronted by search forms were also emerging. Estimates as of about 2001, made by myself and others, suggested such ‘deep Web’ content was many, many times larger than the indexable document Web and was found in literally hundreds of thousands of sites.
Standard Web crawling is a different technique and technology than “probing” the contents of searchable databases, which requires a query to be issued to a site’s search form. A company I founded, BrightPlanet, along with many others such as Copernic and Intelliseek (many of which no longer exist), was formed with the specific aim of probing these thousands of valuable content sites.
From those companies’ standpoints, mine at that time as well, there was always the threat that the major search engines would draw a bead on deep Web content and use their resources and clout to appropriate this market. Yahoo, for example, struck arrangements with some publishers of deep content to index their content directly, but that still fell short of the different technology that deep Web retrieval requires.
It was always a bit surprising that this rich storehouse of deep Web content was being neglected. In retrospect, perhaps it was understandable: there was still the standard Web document content to index and conquer.
Today, however, Google published Crawling through HTML forms on one of its developer blogs. Written by Jayant Madhavan and Alon Halevy, the noted search and semantic Web researcher, the post announces Google’s new deep Web search:
In the past few months we have been exploring some HTML forms to try to discover new web pages and URLs that we otherwise couldn’t find and index for users who search on Google. Specifically, when we encounter a <FORM> element on a high-quality site, we might choose to do a small number of queries using the form. For text boxes, our computers automatically choose words from the site that has the form; for select menus, check boxes, and radio buttons on the form, we choose from among the values of the HTML. Having chosen the values for each input, we generate and then try to crawl URLs that correspond to a possible query a user may have made. If we ascertain that the web page resulting from our query is valid, interesting, and includes content not in our index, we may include it in our index much as we would include any other web page.
To be sure, there are differences and nuances to retrieval from the deep Web. What is described here is neither truly directed nor comprehensive. But the barrier has fallen. With time, and enough servers, the more inaccessible aspects of the deep Web will fall to the services of major engines such as Google.
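The probing approach the quoted post describes can be sketched in a few lines of Python. This is my own illustration of the idea (candidate values per form input, then one crawlable URL per combination), not Google’s actual implementation; the function and parameter names are hypothetical:

```python
from itertools import product
from urllib.parse import urlencode

# Illustrative sketch of deep Web form probing: for text boxes, draw
# candidate words from the site hosting the form; for select menus, use
# the values already present in the HTML; then generate a crawlable GET
# URL for each combination of chosen values.
def probe_urls(action_url, text_inputs, select_inputs, site_words, max_urls=50):
    candidates = {}
    for name in text_inputs:
        candidates[name] = site_words[:3]          # a few words from the site
    for name, values in select_inputs.items():
        candidates[name] = values                  # values taken from the HTML
    names = list(candidates)
    urls = []
    for combo in product(*(candidates[n] for n in names)):
        urls.append(action_url + "?" + urlencode(dict(zip(names, combo))))
        if len(urls) >= max_urls:
            break
    return urls

urls = probe_urls(
    "http://example.org/search",
    text_inputs=["q"],
    select_inputs={"category": ["books", "music"]},
    site_words=["jazz", "history", "maps"],
)
print(len(urls))  # 3 words x 2 menu values = 6 candidate URLs
```

The real system adds the hard parts this sketch omits: choosing good candidate words, and testing whether each resulting page is valid, interesting, and not already indexed.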
And, this is a good thing for all consumers desiring full access to the Web of documents.
So, an era is coming to a close. And this, too, is appropriate. For we are also now transitioning into the complementary era of the Web of data.
Just as DBpedia has provided the nucleating point for linking instance data (see Part 2), UMBEL is designed to provide a similar reference structure for concepts. These concepts provide some fixed positions in space to which other sources can link and relate. And, like references for instance data, the existence of reference concepts can greatly diminish the number of links necessary in the Linked Data environment.
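A back-of-the-envelope calculation (my own, not from the UMBEL documentation) shows why such a hub diminishes the needed links: mapping N sources pairwise requires N(N-1)/2 mapping sets, while mapping each source once to a shared reference requires only N:

```python
# Pairwise linking: every source maps to every other source.
def pairwise_mappings(n):
    return n * (n - 1) // 2

# Hub linking: every source maps once to the shared reference structure.
def hub_mappings(n):
    return n

for n in (5, 50, 500):
    print(n, pairwise_mappings(n), hub_mappings(n))
```

At 50 sources the difference is already 1,225 mappings versus 50; the gap widens quadratically as the Linked Data environment grows.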
Clearly, the combination of the representativeness of UMBEL’s subject concepts (the “scope” of the ontology) and their relationships (the “structure” of the backbone) is fundamental. These factors in turn express the functional capabilities of the system.
The first fundamental point deserving emphasis is that a reference structure of almost any nature has value. We can argue later about what is the best reference structure, but the first task is to just get one in place and begin bootstrapping. Indeed, over time, it is likely that a few reference structures will emerge and compete and get supplemented by still further structures. This evolution is expected and natural and desirable in that it provides choice and options.
A reference structure of concepts has the further benefit of providing a logical reference structure for instances as well. While Wikipedia is perhaps the most comprehensive collection of humanity-wide instances, no single source can or will be complete in scope. Thus, we foresee specialty sources ranging from the companies in Wikicompany to plants and animals in the Encyclopedia of Life or thousands of other rich instance sources also acting as reference hubs.
How do each of these rich instance sources relate to one another? What is the subject concept or topical basis by which they overlap or complement? What is the framework and graph structure of knowledge to give this information context? These are the benefits brought by a structure of reference concepts, independent from the specifics of the reference structure itself.
Another key consideration is that broad-scale acceptance is important. An express purpose of UMBEL is to aid the interconnection of related content using broadly accepted foundations.
Since the Web’s inception fifteen years ago, there have been various alternatives tried or in ascendance for organizing and bringing structure to Web content. Some of these may be too static and inflexible, others perhaps too arbitrary or parochial. All approaches to date have had little collective success.
There are also new and exciting developments in social networks and user-driven content and structure arising from areas such as tagging or Wikipedia (and wikis in general). But it is not clear that bottom-up contributions suitable to individual articles or topics can lead to coherent structural frameworks; arguably, they have not yet so far. And then there are sporadic government or corporate or trade association initiatives as well.
Here is a summary of alternate approaches:
Since inception, the stated intent of the UMBEL project was to base its subject structure on extant systems. To minimize development time, the structure needed to be drawn from one of the categories above. Possible development of a de novo structure was rejected because of development time and the low probability of gaining acceptance in the face of so many competing alternatives.
The granddaddy of knowledge bases suitable to all human content and knowledge is Cyc. Because of its more than 20-year history, Cyc brings with it considerable strengths and some weaknesses.
Amongst all alternatives, Cyc rapidly emerged as the leading candidate. While its strengths warranted close attention, its weaknesses also suggested a considerable effort to overcome them. This combination compelled the need for a significant investigation and due diligence.
First, here are OpenCyc’s strengths:
Literally, after months of investigation and involvement, the practical uses to which the OpenCyc knowledge base can be applied are still revealing themselves.
But there are weaknesses and problems with Cyc.
To be sure, there are some individuals and perhaps some historical criticisms of Cyc that involved fears of Big Brother or grandiose claims about artificial intelligence or machine reasoning. These are criticisms of hype, immaturity or ignorance; they are different than the drawbacks observed by our UMBEL project and not further discussed here.
In UMBEL’s investigation of Cyc, we observed these drawbacks:
Surprisingly, for a system of its age and evolution, Cyc seems to have adhered well to naming conventions and other standards.
UMBEL’s project diligence thus found the biggest issue going forward to be the cruft in the system. There is a solid structure underneath Cyc, but one that is too often obscured and not made as shiny and clean as it deserves.
Five months of nearly full-time due diligence was devoted to this question of the suitability of Cyc as the intellectual grounding for UMBEL.
On balance, OpenCyc’s benefits significantly outweighed its weaknesses. This balance also stands considerably superior to all potential alternatives.
An important factor through this deliberation was the commitment of Cycorp and The Cyc Foundation to the aims of UMBEL, and the willingness of those organizations to lend time and effort to promote UMBEL’s aims. Twenty years of development and the investment of decades of human effort and scrutiny provides a foundation of immense solidity.
Though perhaps Wikipedia (or something like it also based on broad Web input) might emerge with the scope and completeness of Cyc, that prospect is at minimum some years away and by no means certain. No other current framework than Cyc can meet UMBEL’s immediate purposes. Moreover, as stated at the outset, UMBEL’s purpose is pragmatic. We will leave it to others to argue the philosophical nuances of ontology design and “truth” while we get on with the task of creating context of real value.
The next decision was to base all UMBEL subject concepts on existing concepts in OpenCyc.
This means that UMBEL inherits all of the structural relations already in OpenCyc. It also means that UMBEL can act as a sort of contextual middleware between unstructured Web content and the inferential and tools infrastructure within OpenCyc (and beyond into ResearchCyc and Cyc for commercial purposes) and back again to the Web. We term this “roundtripping” and the capability is available for any of the 21,000 subject concepts vetted from OpenCyc within UMBEL.
Having made these commitments, our next effort was to break out the brushes, roll up the sleeves, and plunge into a Spring session of deep cleaning. This effort to vet and clean OpenCyc will be documented in the Technical Report to accompany the first release of the UMBEL ontology. We think you’ll like its shiny new look.