Posted: September 2, 2009

The Significant Advantages to a Logically Segmented TBox

The Message Understanding Conferences (MUC) were initiated in 1987 and financed by DARPA to encourage the development of new and better methods of information extraction (IE). It was a seminal series that produced the basic measures of retrieval and semantic efficacy, recall (R) and precision (P) and the combined F-measure, as well as other core terminology and constructs used by IE today.
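(For reference, the F-measure combines the two as their harmonic mean: F = 2PR / (P + R).)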

By the sixth version in the series (MUC-6), in 1995, the task of recognition of named entities and coreference was added. That initial slate of named entities included the basic building blocks of person (PER), location (LOC), and organization (ORG); to these were added the numeric building blocks of time, percentage or quantity. The very terminology of named entity was coined for this seminal meeting, as was the idea of inline markup [1].

What is a ‘Nameable Thing’?

The intuition surrounding “named entity” and nameable “things” was that they were discrete and disjoint. A rock is not a person and is not a chemical or an event. As initially used, all “named entities” were distinct individuals. But, there also emerged the understanding that some classes of things could also be treated as more-or-less distinct nameable “things”: beetles are not the same as frogs and are not the same as rocks. While some of these “things” might be a true individual with a discrete name, such as Kermit the Frog, or The Rock at Northwestern University, most instances of such things are unnamed.

The “nameability” (or logical categorization) of things is perhaps best kept separate from other epistemological issues of distinguishing sets, collections, or classes from individuals, members or instances.

In a closed-world system it is easier to enforce clean distinctions. The Cyc knowledge base, for example, which is the basis for UMBEL (Upper Mapping and Binding Exchange Layer), makes clear the distinction between individuals and collections. In the semantic Web and RDF this can become smeared a bit, with the favored terminology shifting to instances and classes. And in pragmatic, real-world terms, we (as humans) readily distinguish John Smith as distinct from Jane Doe, but don’t generally (unless we’re entomologists!) make such distinctions for individual beetles, let alone entire genera or species of beetles.

Under precise conditions, these distinctions are important. The fact that Cyc, for example, is assiduous in its application of these distinctions is a major reason for the overall coherence of its knowledge base. But, for most circumstances, we think it is OK to accept a distinction between “nameable” things such as frogs and beetles, while also accepting that those groupings may at times contain nameable individuals, such as Kermit, that are truly individuals in that more refined sense.

This digression sets the background for a natural progression from that first MUC-6 conference. If we could cluster persons or organizations, why not other categories of distinct and disjoint things such as frogs or beetles or rocks?

From the first six entity categories of MUC-6 we begin to see an expansion to broader coverage. Readers of this blog will recall that I have been a fan for quite some time of the expanded coverage of 64 classes of entities proposed by BBN or the 200 proposed by Sekine [2] (as discussed, for example, in the April 2008 Subject Concepts and Named Entities article). Again, the intuition was that real things in the real world could be logically categorized into discrete and disjoint categories.

Thus, “named entities” inexorably moved to become a categorization system, where the degree of familiarity and distinction dictated whether it was the individual (with a unique name, such as Abraham Lincoln or Mt. Rushmore) or groupings such as animal or plant species and their common names (such as beetle or oak) that was the standard “handle” for assigning a name to the “nameable thing”.

While many can argue these individual <–> grouping distinctions and whether we are talking about true, unique, named individuals or names of convenience, I think that (at least for this blog post and discussion) such arguments miss the real, fundamental point.

The real, fundamental point is that some “things” (whether individuals, instances or classes) are distinct from other “things”. Such disjoint distinctions are a powerful concept that should not be lost amid “angels dancing on the head of a pin” epistemological arguments. A frog is not a rock, even though neither is an “individual”. How can we take advantage of that reality?

What Works for Entities, Works for Concepts

Nearly from the outset of our work with UMBEL as a ‘TBox’ [3] — that is, as a set of 20,000 or so common “subject concepts” — the natural question was how these concepts relate or correspond to the underlying “things” (entities) that they organize. As we probed the disjoint categories within the Sekine 200 entity types, for example, we began to see significant parallels and overlap. Also gnawing at our sense of order was the rather artificial and arbitrary class of concepts in UMBEL that we termed “Abstract Concepts”.

We introduced Abstract Concepts in the first release of UMBEL. When introduced, we defined “Abstract concepts [as] representing abstract or ephemeral notions such as truth, beauty, evil or justice, or [as] thought constructs useful to organizing or categorizing things but are not readily seen in the experiential world.” In pragmatic terms, Abstract Concepts in UMBEL were often pivotal nodes in the UMBEL subject graph necessary to maintain a high degree of concept interconnectivity.

In any world view that attempts to be more-or-less comprehensive, there is a gradation of concepts from the concrete and observable to the abstract and ephemeral. The recognition that some of these concepts may be more abstract, then, was not the issue. The issue was that there was no definable basis for segregating a concrete Subject Concept from the more Abstract Concept. Where was the bright line? What was the actionable distinction?

Off and on we have probed this question for more than a year, and have looked at what might constitute a more natural and logical ordering and segmentation within UMBEL. After many tests and detailed analysis, we are now releasing the first results of our investigations.

For, like nameable entities or things, we can see a logical segmentation of (mostly) disjoint concepts within the UMBEL TBox. Here are the summary percentages of these high-level splits:

Disjoint Concepts 90%
Attributes 1%
Classifications 9%
TOTAL 100%

(Because the analysis is still being refined, exact counts and percentages for the 20,000 concepts in UMBEL are not provided.)

Why a Logical Segmentation?

As we dove deeper into these ideas, not only could we see the basis for a logical segmentation within UMBEL’s concepts, but manifest benefits from doing so as well. Remember that UMBEL’s concept structure performs two main roles: 1) it provides a coherent framework for relating and “mapping” other external ontologies; and 2) it provides conceptual binding points for organizing entities and instances [4]. Via logical segmentation, we get benefits for both roles.

Here are some of the broad areas of benefit from a logical UMBEL segmentation that we have identified:

  • Template-driven — as we discuss elsewhere, Structured Dynamics also uses its ontologies to “drive applications” and the user interfaces (UI) that support them. By proper segmentation of UMBEL concepts, we are able to determine to what “cluster” of things (which we call either dimensions or superTypes; see below) a given thing belongs. This identification means we can also determine how best to display information about that “thing”, including the attributes or the display templates appropriate for it. For example, location-based things or time-based things might invoke map or calendar or timeline type displays. Moreover, because of the logical segmentation of concepts, we can also use the power of the concept graph to infer more generic display templates when specific matches are absent.
  • Computational Efficiency — as the percentages above indicate, once we identify the superType to which a given instance belongs, we can eliminate nearly all remaining UMBEL concepts from consideration (see the sketch after this list). This logical winnowing leads to computational efficiencies at all levels in the system. The fastest computational work is the work never done, and when large chunks of data are removed from consideration, many performance advantages accrue.
  • Disambiguation — via this approach we now can assess concept matches in addition to entity matches. This means we can triangulate between the two assessments to aid disambiguation. Because of these logical segmentations, we also have multiple “clusters” (that is, either the concept, type, superType or dimension) upon which to base our disambiguation evaluations, either between concepts and entities or within the various concept clusters. We can do so via either multiple semantic vectors (for statistical-based methods) or multiple features (for machine learning methods). In other words, because of logical segmentation, we have increased the informational power of our concept graph.
  • Structure and Integrity Testing — the very mindset of looking for logical segmentation has led to much learning about the UMBEL structure and the OpenCyc upon which it is based. In the process, missing nodes (concepts), erroneous assignments, and superfluous nodes are all being discovered. Further, many of these tests can be automated using basic logical and inference approaches. The net result is a constant improvement to the scope and completeness of the structure. Lastly, these same approaches can be applied when mapping external ontologies to UMBEL, providing similar consistency benefits.
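To make the winnowing idea concrete, here is a small, hypothetical sketch in Python. All names and data are invented for illustration; this is not UMBEL’s actual code, just the shape of the idea: once an instance is assigned to a superType, concepts outside that partition drop out of consideration entirely.

```python
# Hypothetical sketch of superType winnowing. Concepts are partitioned by
# superType; matching an instance only ever searches its own partition.
concepts_by_supertype = {
    "Animals": {"Frog", "Beetle", "Mammal", "Bird"},
    "NaturalSubstances": {"Rock", "Mineral", "Quartz"},
    "Products": {"Car", "Beverage", "Clothing"},
}

def candidate_concepts(supertype):
    """Return only the concepts within the given superType partition."""
    return concepts_by_supertype.get(supertype, set())

# An entity already typed as an animal is never checked against the
# (roughly 90% of) concepts living in other, disjoint superTypes.
print(candidate_concepts("Animals"))   # {'Frog', 'Beetle', 'Mammal', 'Bird'}
```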

With these benefits in mind, we have undertaken a concerted analysis of UMBEL to discern what this “logical segmentation” might be. This investigation has occurred in three concentrated periods over the past year. (Intervening priorities and other work prevented concentrating solely on this task.)

We have now completed our first full iteration of this investigation. In this post, and then with the subsequent release of UMBEL version 0.80 in the coming weeks, the fruits of this effort should be evident. However, it should also be noted that we are still learning much from this new mindset and approach; UMBEL structure refinement is likely to continue for some time to come.

UMBEL Analysis

Most things, and the concepts about them, are based on real, observable, physical things in the real world. Because most such things cannot occupy the same location in physical space at the same moment in time, a useful criterion for looking at these things and concepts is disjointedness.

In a broad sense, then, we can split our concepts of the world between those ideas that are disjoint because they pertain to separable objects or ideas and those that are cross-cutting or organizational or classificatory. Attributes, such as color (pink, for example), are often cross-cutting in that they can be used to describe quite disparate things. Inherent classification schemes such as academic fields of study or library catalog systems — while useful ways to organize the world — are not themselves in-and-of the world or discrete from other ideas. Thus, classificatory or organizational concepts are inherently not disjoint.
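In OWL terms this split is easy to state: separable things can be declared as disjoint classes, while a cross-cutting attribute such as color is modeled as a property that applies across them. Here is a minimal sketch using rdflib; the class and property names are illustrative, not actual UMBEL identifiers:

```python
# Minimal sketch of disjoint classes versus a cross-cutting attribute.
# Names under example.org are invented for illustration.
from rdflib import Graph

turtle = """
@prefix ex:   <http://example.org/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Frog a owl:Class .
ex:Rock a owl:Class ;
    owl:disjointWith ex:Frog .   # nothing can be both a frog and a rock

# Color is cross-cutting: a property, not a disjoint class.
ex:color a owl:DatatypeProperty ;
    rdfs:comment "Applies to frogs and rocks alike." .
"""

g = Graph()
g.parse(data=turtle, format="turtle")
print(len(g), "triples parsed")
```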

With the criterion of disjointedness in hand, then, we began an evaluation process of the UMBEL subject concepts. We looked to organizational schema such as the entity types of Sekine or BBN for some starting guidance. We also kept in mind that we wanted our categories to inform logical clusterings of possible data presentation, such as media types or locations or time.

For terminology, we adopted the term superType to denote the largest cluster designation upon which this disjointedness may occur. As a way to test the basic coherence of these superTypes, we also collected them into larger groups which we termed dimensions.

Our analysis process began with branch-by-branch testing of the UMBEL concept graph using automated scripts, attempting to find pivotal nodes where child instance members were disjoint from other superTypes. This we term the “top-down” method.

This automated analysis was then supplemented with a complete manual inspection of all unassigned and assigned concepts, with a “bottom-up” assignment of concepts or corrections to the automated approach. This inspection then led to new insights and identification of missing concepts that needed to be added into UMBEL.

We are still converging between these two methods. Optimally, we should be able to tease out all UMBEL superTypes with relatively few union, intersection, or complement set operations. In its current form, we are close, but there are still some rough spots.
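To make the set-operation idea concrete, here is a hypothetical sketch of the kind of disjointness test involved. The data is invented; the point is that an empty intersection between the instance memberships of two superTypes is evidence of disjointness, while a small overlap marks them as only “mostly” disjoint:

```python
# Hypothetical "top-down" disjointness check between candidate superTypes:
# an empty intersection of member concepts suggests the pair is disjoint.
animals = {"Frog", "Beetle", "Kermit"}
rocks = {"Granite", "Quartz", "TheRockAtNorthwestern"}
persons = {"AbrahamLincoln", "JaneDoe"}
music_agents = {"JaneDoe", "BerlinPhilharmonic"}   # persons or groups

def overlap(a, b):
    """Any shared members break strict disjointness between two sets."""
    return a & b

print(overlap(animals, rocks))          # set() -> fully disjoint
print(overlap(persons, music_agents))   # {'JaneDoe'} -> "mostly" disjoint
```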

Nonetheless, this analysis method has led us to identify some 33 superTypes [5], clustered into 9 dimensions. Of these, 29 superTypes and 8 dimensions are mostly disjoint. The one dimension of Classificatory includes the four cross-cutting superTypes of attributes and organizational schema that can apply to any of the 29 disjoint superTypes.

UMBEL superTypes

Here is the schema, organized by dimension, with descriptions of each superType:

Natural World
  • Natural Phenomena: This superType includes natural phenomena and natural processes such as weather, weathering, erosion, fires, lightning, earthquakes, tectonics, etc. Clouds and weather processes are specifically included. Also includes climate cycles, general natural events (such as hurricanes) that are not specifically named, and biochemical processes and pathways.
  • Natural Substances: Notable inclusions are minerals, compounds, chemicals, or physical objects that are not the outcome of purposeful human effort, but are found naturally occurring. Other natural objects (such as rock, fossil, etc.) are also found under this superType.
  • Earthscape: The Earthscape superType consists mostly of the collection of cartographic features that occur on the surface of the Earth. Positive examples include Mountain, Ocean, and Mesa. Artificial features such as canals are excluded. Most instances of these features have a fixed location in space. Underground and underwater are also explicitly contained. This superType is explicitly disjoint with Extraterrestrial (see below).
  • Extraterrestrial: This superType includes all natural things not specifically terrestrial, including celestial bodies (planets, asteroids, stars, galaxies, etc., that can be located within a sky map).

Living Things
  • Prokaryotes: The Prokaryotes include all prokaryotic organisms, including the Monera, Archaebacteria, Bacteria, and blue-green algae. Also included in this superType are viruses and prions.
  • Protists or Fungus: This is the remaining cluster of eukaryotic organisms, specifically including the fungus and the protista (protozoans and slime molds).
  • Plants: This superType includes all plant types and flora, including flowering plants, algae, non-flowering plants, gymnosperms, cycads, and plant parts and body types. Note that all Plant Parts are also included.
  • Animals: This large superType includes all animal types, including specific animal types and vertebrates, invertebrates, insects, crustaceans, fish, reptiles, amphibia, birds, mammals, and animal body parts. Animal parts are specifically included. Also, groupings of such animals are included. Humans, as an animal, are included (versus as an individual Person). Diseases are specifically excluded.
  • Diseases: Diseases are atypical or unusual or unhealthy conditions for (mostly human) living things, generally known as conditions, disorders, infections, diseases or syndromes. Diseases only affect living things and sometimes are caused by living things. This superType also includes impairments, disease vectors, wounds and injuries, and poisoning.
  • Person Types: The appropriate superType for all named, individual human beings. This superType also includes the assignment of formal, honorific or cultural titles given to specific human individuals. It further includes names given to humans who conduct specific jobs or activities (the latter case is known as an avocation). Examples include steelworker, waitress, lawyer, plumber, artisan. Ethnic groups are specifically included.

Human Activities
  • Organizations: Organization is a broad superType and includes formal collections of humans, sometimes by legal means, charter, agreement or some mode of formal understanding. Examples include geopolitical entities such as nations, municipalities or countries; or companies, institutes, governments, universities, militaries, political parties, game groups, international organizations, trade associations, etc. All institutions, for example, are organizations. Also included are informal collections of humans. Informal or less defined groupings of humans may result from ethnicity or tribes or nationality or from shared interests (such as social networks or mailing lists) or expertise (“communities of practice”). This superType also includes the notion of identifiable human groups with set members at any given point in time. Examples include music groups, cast members of a play, directors on a corporate Board, TV show members, gangs, mobs, juries, generations, minorities, etc. Finally, Organizations contain the concepts of Industries and Programs and Communities.
  • Finance & Economy: This superType pertains to all things financial and with respect to the economy, including chartable company performance, stock index entities, money, local currencies, taxes, incomes, accounts and accounting, mortgages and property.
  • Culture, Issues, Beliefs: This category includes concepts related to political systems, laws, rules or cultural mores governing societal or community behavior, or doctrinal, faith or religious bases or entities (such as gods, angels, totems) governing spiritual human matters. Culture, issues, beliefs and various activisms (most -isms) are included.
  • Activities: These are ongoing activities that result (mostly) from human effort, often conducted by organizations to assist other organizations or individuals (in which case they are known as services, such as medicine, law, printing, consulting or teaching) or individual or group efforts for leisure, fun, sports, games or personal interests (activities).

Human Works
  • Products: This is the largest superType and includes any instance offered for sale or performed as a commercial service. A product is often a physical object made by humans that is not a conceptual work or a facility, such as vehicles, cars, trains, aircraft, spaceships, ships, foods, beverages, clothes, drugs, weapons. Products also include the concept of ‘state’ (e.g., on/off).
  • Food or Drink: This superType is any edible substance grown, made or harvested by humans. The category also specifically includes the concept of cuisines.
  • Drugs: This superType is any drug, medication or addictive substance.
  • Facilities: Facilities are physical places or buildings constructed by humans, such as schools, public institutions, markets, museums, amusement parks, worship places, stations, airports, ports, carstops, lines, railroads, roads, waterways, tunnels, bridges, parks, sport facilities, monuments. All can be geospatially located. Facilities also include animal pens and enclosures and general human “activity” areas (golf courses, archeology sites, etc.). Importantly, Facilities include infrastructure systems such as roadways and physical networks. Facilities also include the component parts that go into making them (such as foundations, doors, windows, roofs, etc.).

Information
  • Chemistry (n.o.c.): This superType is a residual category (n.o.c., not otherwise categorized) for chemical bonds, chemical composition groupings, and the like. It covers what is neither a natural substance nor an organic (living thing) substance.
  • Audio Info: This superType is for any audio-only human work. Examples include live music performances, record albums, or radio shows or individual radio broadcasts.
  • Visual Info: This superType includes any still image or picture or streaming video human work, with or without audio. Examples include graphics, pictures, movies, TV shows, individual shows from a TV show, etc.
  • Written Info: This superType includes any general material written by humans, including books, blogs, articles and manuscripts; indeed, any written information conveyed via text.
  • Structured Info: This information superType is for all kinds of structured information and datasets, including computer programs, databases, files, Web pages and structured data that can be presented in tabular form.
  • Notations & References: Akin to conceptual works, these are codified means of human expression. Examples range from human languages themselves, to more domain-specific cases such as chemical symbols, genetic code (A-G-C-T), protocols, and computer languages, mathematical and set notations, etc. Identifiers (numeric or alphanumeric identifiers for objects, often in a highly patterned way, such as phone numbers, URLs, zip and postal codes, SKUs, product codes, etc.), Units (any of the various ways in which measurement, space, volume, weight, speed, intensity, temperature, calories, seismic intensity or other quantitative descriptions of phenomena can be made) and key reference types are also included in this superType.
  • Numbers: This unique superType is for any abstract representation of numbers and numerics.

Human Places
  • Geopolitical: Named places that have some informal or formal political (authorized) component. Important subcollections include Country, IndependentCountry, State_Geopolitical, City, and Province.
  • Workplaces, etc.: These are various workplaces and areas of human activities, ranging from single-person workstations to large aggregations of people (but which are not formal political entities).

Time-related
  • Events: These are nameable occasions, games, sports events, conferences, natural phenomena, natural disasters, wars, incidents, anniversaries, holidays, or notable moments or periods in time.
  • Time: This superType is for specific time or date or period (such as eras, or days, weeks, months type intervals) references in various formats.

Descriptive
  • Attributes: This general superType category is for descriptive attributes of all kinds. Think of the specific attributes in Wikipedia “infoboxes” to understand the purpose and coverage of this superType. It includes colors, shapes, sizes, or other descriptive characteristics about an object.

Classificatory
  • Abstract-level: This general superType category is largely composed of former AbstractConcepts, and represents some of the more abstract upper-level nodes for connecting the UMBEL structure together. This superType also includes theories or processes or methods for humans to do stuff or any human technology.
  • Topics/Categories: This largely subject-oriented superType is a means for using controlled vocabularies and classification schemes for characterizing what content “is about”. The key constituents of this category are Types, Classifications, Concepts, Topics, and controlled vocabularies.
  • Markets & Industries: This superType is a specialized classificatory system for markets and industries. It could be combined with the superType above, but is kept separate in order to provide a separate, economy-oriented system.

These may undergo some further refinement prior to release of UMBEL v 0.80, and some of the definitions will be tightened up.

(Note: some of these superTypes also lend themselves to further splits and analysis. The Products superType, for example, is ripe for such treatment.)

Distribution of superTypes

The following diagram shows the distribution of these 20,000 UMBEL concepts across major areas. By far the largest superType is Products, even with its further splits into Food or Drink and Drugs (pharmaceuticals). The next largest categories are the Person, Places and Events superTypes, with Organizations and Animals not far behind:

Even in its generic state, UMBEL provides a very rich vocabulary for describing things or for tying in more detailed external ontologies. There are nearly 5,000 concepts across products of all types, for example.

Possible Overlaps (non-disjoint) between superTypes

You may recall that our analysis showed 29 of the superTypes to be “mostly disjoint.”  This is because there are some concepts — say, MusicPerformingAgent — that can apply to either a person or a group (band or orchestra, for example). Thus, for this concept alone, we have a bit of overlap between the normally disjoint Person and Organization superTypes.

The following shows the resulting interaction matrix where there may be some overlap between superTypes:

This kind of interaction diagram is also useful for further analyzing the concept graph structure.

Even Where Overlaps Occur, They are Minor

Of the 29 “mostly” disjoint superTypes, only a relatively few show potential interactions, and then only in minor ways. We can illustrate this (drawn to scale) for the interaction between the Product, Food & Drink and Drug (Pharmaceuticals) superTypes, with the fully disjoint Organization superType thrown in for comparison:

(Figure: Example superTypes Overlap)

Across all 20,000 concepts, then, fully 85% are disjoint from one another (5% is lost to overlaps between the “mostly” disjoint superTypes). This is a surprisingly high percentage, which improves the likelihood of delivering the benefits previously noted.

Interim Conclusions and Observations

These are exciting findings that bode well for UMBEL’s ongoing role and usefulness. The very detailed analysis that has led to these interim findings also very much reaffirms the wisdom of basing UMBEL on Cyc. Cyc showed itself to be admirably coherent and remarkably complete. (It also appears that the first versions of UMBEL were extracted with good coverage.)

This approach now gives us an understandable and defensible basis for the logical segmentation of UMBEL. It also provides a much-desired alternative to the earlier Abstract Concepts, which will now be dropped entirely as a schema concept.

One area deserving further attention is the Attributes superType. We are in the process, for example, of analyzing attributes across Wikipedia and need to look at this superType through a slightly different lens [6]. This area is further important in its strong interaction with the Instance Record Vocabulary that is accompanying this effort on the entity side.

Another lesson for us has been to back away from the terminology of named entity, introduced at MUC-6. The expansion of that idea into other “nameable” things has caused us to embrace the “instance” nomenclature, as evidenced by our emerging IRV.

It is rewarding to prepare this next iteration release of UMBEL with its new mindset of logical segmentation and disjointedness. But — what is also clear — there are many treasures still hidden in the inherent structure of UMBEL and its Cyc parent, waiting to be mined.


[1] The original labels were ENAMEX for entity named expression and NUMEX for numeric expression. The markup format specified was also SGML. For an interesting history of this MUC-6 watershed, see Ralph Grishman and Beth Sundheim, 1996. “Message Understanding Conference – 6: A Brief History,” in Proceedings of the 16th International Conference on Computational Linguistics (COLING), Vol. I, Copenhagen, 1996, pp. 466–471.

[2] In a named entity, the word named applies to entities that have a “rigid designator,” as defined by Kripke, for the referent. For instance, the automotive company created by Henry Ford in 1903 is referred to as Ford or Ford Motor Company. Rigid designators include proper names as well as certain natural kind terms, like biological species and substances. Sekine’s extended hierarchy, proposed in 2002, is made up of 200 subtypes, with 32 larger clusters within that. Here is the top level of the Sekine type system:

Name-Other | Title | Timex | Frequency
Person | Unit | Periodx | Rank
Organization | Vocation | Numex-Other | Age
Location | Disease | Money | School Age
Facility | God | Stock Index | Latitude Longitude
Product | ID Number | Point | Measurement
Event | Color | Percent | Countx
Natural Object | Time-Other | Multiplication | Ordinal Number

Though developed separately and for different purposes, the BBN categories, also proposed in 2002, consist of 29 types and 64 subtypes. Here are the BBN types (note: BBN claims 29 types because there are double entries or considerations for the first five entries):

Person | Time | Animal
NORP (adjectival GPEs) | Percent | Substance
Facility | Money | Disease
Organization | Quantity | Work of Art
GPE (geopolitical places) | Ordinal | Law
Location | Cardinal | Language
Product | Events | Contact Info
Date | Plant | Game

Of course, other entity extraction systems have similar clusterings and approaches. Though less formal in the sense of a hierarchy or purported complete entity coverage, here for example is the listing of entity types within Calais:

Anniversary | FaxNumber | NaturalFeature | RadioProgram
City | Holiday | OperatingSystem | RadioStation
Company | IndustryTerm | Organization | Region
Continent | MarketIndex | Person | SportsEvent
Country | MedicalCondition | PhoneNumber | SportsGame
Currency | Movie | Position | SportsLeague
EmailAddress | MusicAlbum | Product | Technology
EntertainmentAwardEvent | MusicGroup | ProgrammingLanguage | TVShow
Facility | NaturalDisaster | ProvinceOrState | TVStation
PublishedMedium | URL

See further the Wikipedia entry on named entity recognition.

[3] We use the reference to “TBox” in accordance with our working definition for description logics:

“Description logics and their semantics traditionally split concepts and their relationships from the different treatment of instances and their attributes and roles, expressed as fact assertions. The concept split is known as the TBox (for terminological knowledge, the basis for T in TBox) and represents the schema or taxonomy of the domain at hand. The TBox is the structural and intensional component of conceptual relationships. The second split of instances is known as the ABox (for assertions, the basis for A in ABox) and describes the attributes of instances (and individuals), the roles between instances, and other assertions about instances regarding their class membership with the TBox concepts.”
[4] UMBEL also provides a SKOS-based vocabulary extension for describing other domains and mappings between classes and instances. This purpose, however, is outside of the scope of this current article.
[5] As a reference roadmap, UMBEL was specifically designed not to include meronymous (part of) relationships (see further this reference). Thus, all “part of” type concepts were assigned to the whole superType category of which they are a part: “animal parts” are assigned to the superType Animals; “car parts” to the superType Products.
[6] For a general discussion of attributes and their relation to entities, see Satoshi Sekine, 2008. Extended Named Entity Ontology with Attribute Information, in Proceedings of the 6th edition of the Language Resources and Evaluation Conference (LREC 2008). Marrakech, Morocco. See http://www.lrec-conf.org/proceedings/lrec2008/pdf/21_paper.pdf.
Posted: August 23, 2009

Snake Shedding

In the Future, All of us May be SysAdmins

OK, well, I just finished moving and upgrading some dozen Web sites and wikis, including this one — my main blog — over the weekend, from fixed stuff to the “clouds“. Believe you me, there were some pretty massive changes required.

For someone like me who is relatively clueless about such things, the process has been interesting (to say the least).

It seems like our modern era either involves moving digital things or converting digital things. As for moving, we all experience that laptop or hard drive dying, and then the move. (The Death of a Laptop actually happened to my wife this past week.) But it also is changing providers and venues — what caused me to move all of these Web sites.

Shedding the Snake Skin

So, the mainstream digital age has existed for what, now, some 40 years? How many data formats have we transitioned (ASCII, EBCDIC, UTF-8, an immense number)? And, how many systems and environments have we transitioned?

At the risk of dating myself, when I was in college we still used slide rules; truly the end of an era. Just a year or two later everyone transitioned to TI or HP calculators, which some wore on their hips like some PDAs and cell phones today.

I won’t bore everyone with my own transition from my first computer (an HP 9100 with 4K RAM and program listings on cash register tapes) through many others including a DEC Rainbow PC with CP/M (a beauty!). For many years, as we moved into the PC era and IBM legitimized the shift, every computer I bought seemed to cost about $3000. Each one was more capable, etc., but they all cost the same.

And, then, about the late 1990s, that changed. In fact, my last capable desktop machine cost way south of $1000.

But, I digress.

What has been the real constant across these decades has been system and data migration. Granted, many of the docs and many of the systems in my own experience from 30 yrs ago have no relevance today (god, do I miss WordPerfect with its embedded, editable codes!), but actually an important minor portion do.

For these, I need to move both apps and data (with readable formats) for each generational transition.

I know that organizations, like the Library of Congress in its NDIIPP program, need to worry about digital preservation, potentially for millennia. These are worthwhile concerns.

But, from my own more prosaic standpoint, I see this issue with my own lens and own bas relief. I am constantly moving apps and data, each transition much like a snake shedding its skin.

It makes one wonder about the effort and process by which the entire meaningful cultural history of our species continues to adapt and transition forward.

Getting Back to Real

Hmmm. All of us have seen these transitions and the loss of productivity they bring. (Some might argue that the lack of productivity gains from computers until this decade was due to such transitions; at least now, with the Web, we have a more common migration framework.)

I think we have no choice but to transition to the next latest and greatest as it emerges. Automated means at acceptable cost for doing such transitions will also be attractive.

But the real point, I think, is that such transitions are inevitable. Faster apps: Check! Better apps: Check! Easier data exchange: Check!!

Living with transition thus becomes a clear constant for all of us as we move forward. And part of that is accepting downtime to screw around moving the keepable old to the potentially useful new.

After this weekend, I’m now ready for a couple of days off before the real work week begins (yeah, right, keep dreaming).

Posted: April 21, 2009


SearchMonkey’s Recommended Vocabularies a Useful Resource

I am pleased to report that UMBEL is now included as one of the recommended vocabularies for the Yahoo! SearchMonkey service. Using SearchMonkey, developers and site owners can use structured data to enhance the value of standard Yahoo! search results and customize their presentation, including through “infobars“. SearchMonkey is integral to a concerted effort by Yahoo! to embrace structured data, RDF and the semantic Web.

SearchMonkey was first announced in February 2008 with a beta release in April and then public release in May with 28 supported vocabularies. Then, last October, an additional set of common, external vocabularies were recommended for the system including DBpedia, Freebase, GoodRelations and SIOC. At the same time, some further internal Yahoo! vocabularies and standard Web languages (e.g., OWL, XHTML) were also added.

This is the first vocabulary update since then. Besides UMBEL, the AB Meta and Semantic Tags vocabularies have also been added to this latest revision. (There have also been a few deprecations over time.)

A recommended vocabulary means that its namespace prefix is recognized by SearchMonkey; the namespaces for the recommended vocabularies are reserved. Site owners may still customize and add new SearchMonkey structure, but such additions must be explicitly defined in specific DataRSS feeds.

Structured data may be included in Yahoo! search results from these sources:

  • Yahoo! Index — the core Yahoo! search data with limited structure such as the page’s title, summary, file size, MIME type, etc. This structure is only provided by Yahoo!
  • Semantic Web Data — including microformats and RDF data embedded in the host page
  • Data Feed — A feed of Yahoo! native DataRSS provided by a third party site
  • Custom Data Service — Any data extracted from an (X)HTML page or web service and represented within SearchMonkey as DataRSS.

As a recommended vocabulary, UMBEL namespace references can now be embedded and recognized (and then presented) in Yahoo! search results.
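As a hedged illustration of what such an embedding might look like at the RDF level, here is generic rdflib code using the reserved umbel namespace from the table below. This is not SearchMonkey’s own API, and the concept name is invented:

```python
# Sketch of typing a page's subject with a UMBEL subject concept, using
# the reserved umbel namespace (http://umbel.org/umbel/sc/). The concept
# name Mammal is illustrative only.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

UMBEL = Namespace("http://umbel.org/umbel/sc/")

g = Graph()
g.bind("umbel", UMBEL)
page_subject = URIRef("http://example.org/my-page#topic")
g.add((page_subject, RDF.type, UMBEL.Mammal))

print(g.serialize(format="turtle"))
```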

The Current Vocabulary Set

Here are the 34 current vocabularies (plus five deprecated) recognized by the system:

Prefix | Name | Namespace
abmeta | AB Meta | http://www.abmeta.org/ns#
action | SearchMonkey Actions | http://search.yahoo.com/searchmonkey/action/
assert | SearchMonkey Assertions (deprecated) | http://search.yahoo.com/searchmonkey/assert/
cc | Creative Commons | http://creativecommons.org/ns#
commerce | SearchMonkey Commerce | http://search.yahoo.com/searchmonkey/commerce/
context | SearchMonkey Context (deprecated) | http://search.yahoo.com/searchmonkey/context/
country | SearchMonkey Country Datatypes | http://search.yahoo.com/searchmonkey-datatype/country/
currency | SearchMonkey Currency Datatypes | http://search.yahoo.com/searchmonkey-datatype/currency/
dbpedia | DBpedia | http://dbpedia.org/resource/
dc | Dublin Core | http://purl.org/dc/terms/
fb | Freebase | http://rdf.freebase.com/ns/
feed | SearchMonkey Feed | http://search.yahoo.com/searchmonkey/feed/
finance | SearchMonkey Finance | http://search.yahoo.com/searchmonkey/finance/
foaf | FOAF | http://xmlns.com/foaf/0.1/
geo | GeoRSS | http://www.georss.org/georss#
gr | GoodRelations | http://purl.org/goodrelations/v1#
job | SearchMonkey Jobs | http://search.yahoo.com/searchmonkey/job/
media | SearchMonkey Media | http://search.yahoo.com/searchmonkey/media/
news | SearchMonkey News | http://search.yahoo.com/searchmonkey/news/
owl | OWL ontology language | http://www.w3.org/2002/07/owl#
page | SearchMonkey Page (deprecated) | http://search.yahoo.com/searchmonkey/page/
product | SearchMonkey Product | http://search.yahoo.com/searchmonkey/product/
rdf | RDF | http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs | RDF Schema | http://www.w3.org/2000/01/rdf-schema#
reference | SearchMonkey Reference | http://search.yahoo.com/searchmonkey/reference/
rel | SearchMonkey Relations (deprecated) | http://search.yahoo.com/searchmonkey-relation/
resume | SearchMonkey Resume | http://search.yahoo.com/searchmonkey/resume/
review | Review | http://purl.org/stuff/rev#
sioc | SIOC | http://rdfs.org/sioc/ns#
social | SearchMonkey Social | http://search.yahoo.com/searchmonkey/social/
stag | Semantic Tags | http://semantictagging.org/ns#
tagspace | SearchMonkey Tagspace (deprecated) | http://search.yahoo.com/searchmonkey/tagspace/
umbel | UMBEL | http://umbel.org/umbel/sc/
use | SearchMonkey Use Datatypes | http://search.yahoo.com/searchmonkey-datatype/use/
vcal | VCalendar | http://www.w3.org/2002/12/cal/icaltzd#
vcard | VCard | http://www.w3.org/2006/vcard/ns#
xfn | XFN | http://gmpg.org/xfn/11#
xhtml | XHTML | http://www.w3.org/1999/xhtml/vocab#
xsd | XML Schema Datatypes | http://www.w3.org/2001/XMLSchema#

In addition, there are a number of standard datatypes recognized by SearchMonkey, mostly a superset of XSD (XML Schema datatypes).

What is emerging from this Yahoo! initiative is a very useful set of structured data definitions and vocabularies. These same resources can be great starting points for non-SearchMonkey applications as well.

For More Information

There is quite a bit of online material now available for SearchMonkey, with new expansions and revisions also accompanying this most recent release. As some starting points, I recommend:

Posted: April 8, 2009

A 10th Birthday Salute to RDF’s Role in Powering Data Interoperability

There has been much-welcomed visibility for the semantic Web and linked data of late. Many wonder why it has not happened earlier; and some observe progress has still been too slow. But what is often overlooked is the foundational role of RDF — the Resource Description Framework.

From my own perspective focused on the issues of data interoperability and data federation, RDF is the single most important factor in today’s advances. Sure, there have been other models and other formulations, but I think we now see the Goldilocks “just right” combination of expressiveness and simplicity to power the foreseeable future of data interoperability.

So, on this 10th anniversary of the birth of RDF [1], I’d like to re-visit and update some much dated discussions regarding the advantages of RDF, and more directly address some of the mis-perceptions and myths that have grown up around this most useful framework.

By request, this article is now available as a PDF download.

A Simple Intro to RDF

RDF is a data model that is expressed as simple subject-predicate-object “triples”. That sounds fancy, but just substitute verb for predicate and noun for subject and object. In other words: Dick sees Jane; or, the ball is round. It may sound like a kindergarten reader, but it is how data can be easily represented and built up into more complex structures and stories.

A triple is also known as a “statement” and is the basic “fact” or asserted unit of knowledge in RDF. Multiple statements get combined together by matching the subjects or objects as “nodes” to one another (the predicates act as connectors or “edges”). As these node-edge-node triple statements get aggregated, a network structure emerges, known as the RDF graph.

The referenced “resources” in RDF triples have unique identifiers, IRIs, that are Web-compatible and Web-scalable. These identifiers can point to precise definitions of predicates or refer to specific concepts or objects, leading to less ambiguity and clearer meaning or semantics.
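To make this tangible, here is the kindergarten example rendered as actual triples with rdflib; the example.org identifiers are placeholders:

```python
# "Dick sees Jane" and "the ball is round" as subject-predicate-object
# triples. Shared nodes knit separate statements into one RDF graph.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.add((EX.Dick, EX.sees, EX.Jane))             # Dick sees Jane
g.add((EX.ball, EX.shape, Literal("round")))   # the ball is round
g.add((EX.Jane, EX.holds, EX.ball))            # Jane connects to the ball

for s, p, o in g:   # each statement is a node-edge-node triple
    print(s, p, o)
```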

In my own company’s approach to RDF, basic instance data is simply represented as attribute-value pairs where the subject is the instance itself, the predicate is the attribute, and the object is the value. Such instance records are also known as the ABox. The structural relationships within RDF are defined in ontologies, also known as the TBox, which are basically equivalent to a schema in the relational data realm.

RDF triples can be applied equally to all structured, semi-structured and unstructured content. By defining new types and predicates, it is possible to create more expressive vocabularies within RDF. This expressiveness enables RDF to define controlled vocabularies with exact semantics. These features make RDF a powerful data model and language for data federation and interoperability across disparate datasets.

There are many excellent introductions or tutorials to RDF; a recommended sampling is shown in the endnotes [2].

Is RDF a Framework, Data Model or Vocabulary?

Well, the answer to the rhetorical question is, all three!

The RDF data model provides an abstract, conceptual framework for defining and using metadata and metadata vocabularies. See? We were able to use all three concepts in a single sentence!

The RDF model draws on well-established principles from various data representation communities. RDF properties may be thought of as attributes of resources and in this sense correspond to traditional attribute-value pairs. RDF properties also represent relationships between resources and an RDF model can therefore resemble an entity-relationship diagram. . . . In object-oriented design terminology, resources correspond to objects and properties correspond to instance variables. [1]

But, actually, because RDF is simultaneously a framework, data model and basis for building more complex vocabularies, it is both simple and complex at the same time.

It is perhaps best to first understand basic RDF as a data model of triples with very few (or unconstrained) semantics [3]. In its base form, it has no range or domain constraints; it has no existence or cardinality constraints; and it lacks transitive, inverse or symmetrical properties (or predicates) [4]. As such, basic RDF has limited reasoning support. It is, however, quite useful for describing static things or basic facts.

In this regard, RDF in its base state is nearly adequate for describing the simple instances and data records of the world, what is called the ABox in description logics.

RDFS (RDF Schema) is the next layer in the RDF stack designed to overcome some of these baseline limitations. RDFS introduces new predicates and classes that bound these semantics. Importantly, RDFS establishes the basic constructs necessary to create new vocabularies, principally through adding the class and subClass declarations and adding domain and range to properties (the RDF term for predicates). Many useful vocabularies have been created with RDFS and it is possible to apply limited reasoning and inference support against them.
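A small sketch shows the kind of structure RDFS adds: subclass hierarchies and domain/range bounds on properties, which even a plain SPARQL 1.1 property path can exploit for simple inference. The class names are illustrative:

```python
# RDFS adds just enough semantics for vocabularies and limited inference.
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Frog      rdfs:subClassOf ex:Amphibian .
ex:Amphibian rdfs:subClassOf ex:Animal .
ex:eats      rdfs:domain ex:Animal ;   # bounds on the property
             rdfs:range  ex:Food .
""")

# Walk the subclass chain: Frog -> Amphibian -> Animal.
for row in g.query("""
    PREFIX ex:   <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?parent WHERE { ex:Frog rdfs:subClassOf+ ?parent }
"""):
    print(row.parent)
```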

The next layer in the RDF stack is OWL, the Web Ontology Language. It, too, is based on RDF. The first versions of OWL were themselves layered from OWL Lite to OWL DL to OWL Full. OWL Lite and OWL DL are both decidable through the first-order logic basis of description logics (the basis for the acronym in OWL DL). OWL Full is not decidable, but provides an OWL counterpart to fragmented RDF and RDFS statements that are desirable in the aggregate, with reasoning applied where possible.

OWL provides sufficient expressive richness to be able to describe the relationships and structure of entire world views, or the so-called terminological (TBox) construct in description logics. Thus, we see that the complete structural spectrum of description logics can be satisfied with RDF and its schematic progeny, with a bit of an escape hatch for combining poorly defined or structural pieces via using OWL Full [5].

However, RDF is NOT a particular serialization. Though XML was the original specified serialization and still is the defined RDF MIME type (application/rdf+xml; other serializations take the form text/turtle or text/n3 or similar), it is not necessary to either write or transmit RDF in the XML syntax.

In any event, depending on its role and application, we can see that RDF is a foundation, in careful expressions based in description logics, that lends itself to a clean expression and separation of concerns. With RDF and RDFS, we have a data model and a basis for vocabularies well suited to instance data (ABox). With RDFS and OWL, we have an extended schema structure and ontologies suitable for describing and modeling the relationships in the world (TBox). Thus, RDF is a framework for modeling all forms of data, for describing that data through vocabularies, and for interoperating that data through shared conceptualizations (ontologies) and schema.

Rationale for a Canonical Data Model

In the context of data interoperability, a critical premise is that a single, canonical data model is highly desirable. Why? Simply because of 2N v. N². That is, a single reference (“canon”) structure means that fewer tool variants and converters need be developed to talk to the myriad of data formats in the wild. With a canonical data model, talking to external sources and formats (N) only requires converters to the canonical form (2N). Without a canonical model, the combinatorial explosion of required format converters approaches N² [6]. (With ten formats, for example, that is 20 converters versus roughly 100 pairwise ones.)

Note, in general, such a canonical data model merely represents the agreed-upon internal representation. It need not affect data transfer formats. Indeed, in many cases, data systems employ quite different internal data models from what is used for data exchange. Many, in fact, have two or three favored flavors of data exchange such as XML, JSON or the like.

In most enterprises and organizations, the relational data model with its supporting RDBMs is the canonical one. In some notable Web enterprises — say, Google for example — the exact details of its internal canonical data model are hidden from view, with APIs and data exchange standards such as GData being the only visible portions to outside consumers.

Generally speaking, a canonical, internal data standard should meet a few criteria:

  • Be expressive enough to capture the structure and semantics of any contributing dataset
  • Have a schema itself which is extensible
  • Be performant
  • Have a model to which it is relatively easy to develop converters for different formats and models
  • Have published and proven standards, and
  • Have sufficient acceptance so as to have many existing tools and documentation.

Other desired characteristics might be for the model and many of its tools to be free and open source, suitable to much analytic work, efficient in storage, and other factors.

Though the relational data model is numerically the most prevalent one in use, it has fallen out of favor for data federation purposes. This loss of favor is due, in part, to the fragile nature of relational schema, which increases maintenance costs for the data and their applications, and incompatibilities in standards and implementation.

Though still comparatively young with a smaller-than-desirable suite of tools and applications support [7], RDF is perhaps the ideal candidate for the canonical data model. To understand why, let’s now switch our discussion to the advantages of RDF.

Advantages of RDF

It is surprisingly difficult to find a consolidated listing of RDF’s advantages. The W3C, the developer of the specification, first published on this topic in the late 1990s, but it has not been updated for some time [8]. Graham Klyne has a better and more comprehensive presentation, but still one that has not been updated since 2004 [4].

I believe data interoperability to be RDF’s premier advantage, but there are many, many others.

Another advantage that is less understood is that RDF and its progeny can completely switch the development paradigm: data can now drive the application, and not the other way around. Frankly, we are just beginning to realize this phase, with such developments as linked data and even whole applications or application languages being written in RDF [9], but I think time will prove this advantage to be game-changing.

But, there are many perspectives that can help tease out RDF’s advantages. Some of these are discussed below, with the accompanying table attempting to list these ‘Top Sixty’ advantages in a single location.

Standard, Open and Expressive

In its ten year history, RDF has spawned many related languages and standards. The W3C has been the shepherd for this process, and there are many entry locations on the World Wide Web Consortium’s Web site to begin exploring these options [10]. These standards extend from the RDF, RDFS and OWL vocabularies and languages noted above that give RDF its range of expressiveness, to query languages (e.g., SPARQL), transformation languages (e.g., GRDDL), rule languages (e.g., RIF), and many additional constructs and standards.

The richness of this base of standards is only now being tapped; the combination of these standards and the tools they are spawning is just beginning to bear fruit. And, because RDF is so easily serialized as XML, a further suite of tools and capabilities such as XPath or XSLT or XForms may be layered onto this base.

Moreover, one is not limited in any way to XML as a serialization. RDF itself has been serialized in a number of formats including RDF/XML, N3, RDFa, Turtle, and N-Triples. Also, RDF’s simple subject-predicate-object data model can readily absorb human-readable and easily authored instance records (subject) written in the style of attribute-value pairs (predicate-object). As such, RDF is an excellent conversion target for all forms of naïve data structs [11].
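A quick sketch of this serialization independence: one graph, round-tripped through several notations with rdflib (which, in version 6 and later, returns strings from serialize):

```python
# One RDF graph, many serializations; the data model is syntax-independent.
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
ex:Dick ex:sees ex:Jane .
""")

print(g.serialize(format="nt"))      # N-Triples
print(g.serialize(format="turtle"))  # Turtle
print(g.serialize(format="xml"))     # RDF/XML, the registered MIME type
```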

Data Interoperability

Indeed, it is in data exchange and interoperability that RDF really shines. Via various processors or extractors, RDF can capture and convey the metadata or information in unstructured (say, text), semi-structured (say, HTML documents) or structured sources (say, standard databases). This makes RDF almost a “universal solvent” for representing data structure.

“The semantic Web’s real selling point is URI-based data integration.”
Harry Halpin [12]

Because of this universality, there are now more than 100 off-the-shelf ‘RDFizers’ for converting various non-RDF notations (data formats and serializations) to RDF [13]. Because of its diversity of serializations and simple data model, it is also easy to create new converters. Generalized conversion languages such as GRDDL provide framework-specific conversions, such as for microformats.

Once in a common RDF representation, it is easy to incorporate new datasets or new attributes. It is also easy to aggregate disparate data sources as if they came from a single source. This enables meaningful composition of data from different applications regardless of format or serialization.

Simple RDF structures and predicates enable synonyms or aliases to be easily mapped to the same types or concepts. This kind of semantic matching is a key capability of the semantic Web. It becomes quite easy to say that your glad is my happy, and they indeed talk about the same thing.
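A hedged sketch of that “glad is my happy” mapping: OWL supplies equivalence predicates for exactly this purpose (the example.org vocabularies here are invented):

```python
# Declaring that two independently named properties mean the same thing.
# owl:sameAs and owl:equivalentClass play the analogous role for
# individuals and classes, respectively.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

YOURS = Namespace("http://example.org/yours/")
MINE = Namespace("http://example.org/mine/")

g = Graph()
g.bind("yours", YOURS)
g.bind("mine", MINE)
g.add((YOURS.glad, OWL.equivalentProperty, MINE.happy))

print(g.serialize(format="turtle"))
```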

What this mapping flexibility points to is the immense strengths of RDF in representing diverse schema, the next major advantage.

Schema Unbound

The single failure of data integration since the inception of information technologies — for more than 30 years, now — has been schema rigidity or schema fragility. That is, once data relationships are set, they remain so and cannot easily be changed in conventional data management systems nor in the applications that use them.

Relational database management (RDBM) systems have not helped this challenge at all. While tremendously useful for transactions and enabling the addition of more data records (instances, or rows in a relational table schema), they are neither adaptive nor flexible.

Why is this so?

In part, it has to do with the structural “view” of the world. If everything is represented as a flat table of rows and columns, with keys to other flat structures, as soon as that representation changes, the tentacled connections can break. Such has been the fragility of the RDBMS model, and the hard-earned resistance of RDBMS administrators to schema growth or change.

Yet, change is inevitable. And thus, this is the source of frustration with virtually all extant data systems.

RDF has no such limitations. And, for those from a conventional data management perspective, this RDF flexibility can be one of the more unbelievable aspects of this data model.

As we have noted earlier, RDF is well suited and can provide a common framework to represent both instance data and the structures or schema that describe them, from basic data records to entire domains or world views. In fact, whatever schema or structure that characterizes the input data — from simple instance record layouts and attributes to complete vocabularies or ontologies — also embodies domain knowledge. This structure can be used at time of ingest as validity or consistency checks.

As a framework for data interoperability, RDF and its progeny can ingest all relations and terminology, with connections made via flexible predicates that assert the degree and nature of relatedness. There is no need for ingested records or data to be complete, nor to meet any prior agreement as to structure or schema.

Increment, Evolve, Extend, Adapt . . .

Indeed, the very fluidity of RDF and structures based on it is another key strength. Since a basic RDF model can be processed even in the absence of more detailed information, input data and basic inferences can proceed early and logically as a simple fact basis. This strength means that either data or schema may be ingested and then extended in an incremental or partial manner. Partial representations can be incorporated as readily as complete ones, and schema can extend and evolve as new structure is discovered or encountered.

This is revolutionary. RDF provides a data and schema representation framework that can evolve and adapt to what data exists and what structure is known. As new data with new attributes are discovered, or as new relationships are found or realized, these can be added to the existing model without any change whatsoever to the prior existing schema.

This very adaptability is what enables RDF to be viewed as data-driven design. We can deal with a partial and incomplete world; we can learn as we go; we can start small and simple and evolve to more understanding and structure; and we can preserve all structure and investments we have previously made.

And applications based on RDF work the same way: they do not need to process or account for information they don’t know or understand. We can easily query RDF models without being affected whatsoever by unreferenced or untyped data in the basic model.
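A short sketch of this indifference (rdflib, with invented data): the SPARQL query names only the patterns it cares about, and all other triples in the model are simply ignored:

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:Kermit a ex:Frog ; ex:color "green" .
        ex:Rock1  a ex:Rock .    # unrelated data the query never references
    """, format="turtle")

    q = """
    PREFIX ex: <http://example.org/>
    SELECT ?frog ?color
    WHERE { ?frog a ex:Frog ; ex:color ?color . }
    """
    for row in g.query(q):       # ex:Rock1 has no effect on the result
        print(row.frog, row.color)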

By replacing the rigid relational data model with one based on RDF, we gain robustness, flexibility, universality and structural persistence in place of fragility.

Existing technologies such as SQL and the relational model were devised without the specific requirements of disparate, uncontrolled, large-scale integration in mind. Though the relational model enabled us to build efficient data silos and transaction systems, RDF now enables us finally to federate them.

‘Top Sixty’ Benefits of RDF
  • A foundation based in description logics that lends itself to clean expression and separation of concerns regarding instance data (ABox) and schema structure (TBox)
  • RDF’s unique identifiers, IRIs, are Web-compatible and are Web-scalable
  • Potential use of inferencing to contextually broaden search, retrieval and analysis
  • Potential use of its structure to automatically drive applications and tools, including populating context-relevant dropdown lists and auto-completion
  • Based on open source, languages and standards
  • A comparatively complete suite of specifications including languages, schema and tools (e.g., RDF, RDF Schema, OWL, RIF, SPARQL, GRDDL, etc.)
  • A choice of a variety of serializations and notations, including RDF/XML, N3, RDFa, Turtle, N-triples, as well as possible expression in many non-RDF notations
  • Instance records in human-readable, easily authored attribute-value formats can be readily converted to the s-p-o RDF “triple” data model
  • Can capture metadata and structure from unstructured, semi-structured and structured data
  • More than 100 off-the-shelf ‘RDFizers’ exist for converting various non-RDF notations (data formats and serializations)
  • Easy and cost-effective incorporation of new datasets wherein only new attributes require a structure update; all others simply get mapped
  • Aggregate processing of disparate sources as if they came from a single source
  • A ready structure for synonym and alias matching when merging or matching datasets
  • In converting non-RDF data, the ability to bring a more formal class structure to the description of things
  • Common framework and vocabulary for representing instance data
  • Common framework and vocabulary for representing data structures and schema
  • Can describe everything from simple data structs to complete vocabularies/ontologies to processing and inferencing rules
  • Schema can be calculated from the ingested triples; thus one can either generate a schema from scratch or cross-check a prior schema
  • Can accept and store data with different structure in a general RDF container (e.g., all animals v a specific bird)
  • Eliminates the trade-off between good design and performance for related structure (e.g., full names v first and last names)
  • Untyped relations can still have single operations performed against them
  • More formal RDF structures (e.g., ontologies) embody domain expertise within their subject structure
  • Readily extensible with schema that are also machine readable, bringing about a high degree of automation
  • Allows data that is structured slightly differently to be stored together in the lowest common denominator of an RDF statement
  • No need for upfront schema agreement; can evolve, extend and adapt
  • Allows the schema to change independently of the data without requiring any existing data to be thrown away or padded with NULLs
  • The basic RDF model can be processed in the absence of more detailed information as a simple fact basis
  • Schema based on RDF can be extended and grown incrementally without impacting the existing datastore
  • As a corollary, development based on RDF can also be incremental, reducing the need to “design it at once” or “design it right” up front
  • RDF models and apps lend themselves to experimentation and agile development
  • Information can be gathered incrementally from multiple sources
  • Data and schema can be ingested, represented and conveyed in “partial” form
  • Structure and schema can evolve incrementally in concert with new understandings and new data
  • All prior investments in structure and schema can be maintained
  • Because of conceptual closeness to the relational data model, it is possible to represent RDF in a relational database and vice versa
  • RDF thus has the ability to take advantage of historical RDBMS and SQL query optimizations
  • Ability to create RDF “views” or wrappers over relational schema that can be queried via SPARQL
  • A common storage format based on the triple or quad; suitable for datastore hosting by relational database management systems
  • The use of untyped relations reduces the total number of relations to be handled, with operations over them only needed once
  • Relational systems can serve instance data in situ (ABox) while interoperability is provided by an RDF structural and schema layer (TBox)
  • Ability to do specialized work, such as inferencing
  • Use of a set-based semantics and queries
  • Via its SPARQL query language, easy mechanisms to drive faceted search and other browsing and viewing tools
  • Because of how RDF works it is possible to query a dataset without knowing anything about the data in advance
  • Ability to generalize selection, viewing and publishing tools driven solely from the RDF structure; as the structure changes, tools automatically reflect those changes (e.g., plug-and-play)
  • Can easily create and apply inferencing tables over RDF datastores [14]
  • The RDF graph brings all the advantages and generality of structuring information using graphs
  • A graph is, itself, a unique form of data type with unique algorithms and analytic features
  • Graphs are modular and can be readily combined or broken apart
  • Graphs can be used for scalable, parallelized information processing
  • Unique types of search and discovery can occur with RDF graphs
  • Graphs provide the ability to visualize and navigate large network structures
  • Queries are unaffected by unreferenced values in the source data
  • Emerging lingua franca of the semantic Web and ‘Web of data’
  • Strong compatibility with “linked data” based on Web access (HTTP) and IRI identifiers
  • RDF is readily adaptable to the open-world assumption (OWA)
  • Relation to the semantic Web means much global information and data can be admixed with local content
  • Across all global sources the potential for powerful data “mesh-ups” conjoining related information
  • Network effects such as shared vocabularies, shared background knowledge, collective authoring, annotating and curating, and
  • RDF is an emergent data model.

Yet, Still Kissing Cousins with the Relational Model

Despite these differences in fragility and robustness, there are in fact many logical and conceptual affinities between the relational model and the RDF model. An excellent piece on those relations was written by Andrew Newman a bit over a year ago [15].

RDF can be modeled relationally as a single table with three columns corresponding to the subject-predicate-object triple. Conversely, a relational table can be modeled in RDF with the subject IRI derived from the primary key or a blank node; the predicate from the column identifier; and the object from the cell value. Because of these affinities, it is also possible to store RDF data models in existing relational databases. (In fact, most RDF “triple stores” are RDBM systems with a tweak, sometimes as “quad stores” where the fourth member of the tuple is the graph.) Moreover, these affinities also mean that RDF stored in this manner can take advantage of the historical learnings around RDBMS and SQL query optimizations.
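As a minimal sketch of that row-to-triple mapping (Python; the customers table and namespace are hypothetical):

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")

    def row_to_triples(g, table, key, row):
        """Subject from the primary key, predicate from the column name,
        object from the cell value; one triple per non-empty cell."""
        subject = EX[f"{table}/{row[key]}"]
        for column, value in row.items():
            if value is not None:    # a missing cell simply yields no triple
                g.add((subject, EX[column], Literal(value)))

    g = Graph()
    row_to_triples(g, "customers", "id", {"id": 17, "name": "Jane Doe", "city": "Omaha"})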

Just as there are many RDFizers, as noted above, there are also convenient ways to convert relational schema to RDF automatically. OpenLink Software, for example, has its RDF “Views” system that does just that [16]. Given these overall conceptual and logical affinities, the W3C is also in the process of graduating an incubator group to an official working group, RDB2RDF [17], focused on methods and specifications for mapping relational schema to RDF.

What is emerging is one vision whereby existing RDBM systems retain and serve the instance records (ABox), while RDF and its progeny provide the flexible schema scaffolding and structure over them (TBox). Architectures such as this retain prior investments, but also provide a robust migration path for interoperating across disparate data silos in a performant way.

Data-driven Applications

As developers, one of our favorite advantages of RDF is its ability to support data-driven applications. This advantage makes even further sense when combined with a Web-oriented architecture that exposes all tools and data as RESTful Web services [18].

Two tool foundations are the RDF query language, SPARQL [19], and inferencing. SPARQL provides a generalized basis for driving reports and templated data displays, as well as standard querying. Utilizing RDF’s simple triple structure, SPARQL can also be used to query a dataset without knowing anything in advance about the data. This provides a very useful discovery mode.
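Two stock discovery queries (standard SPARQL, usable against any local model or endpoint) are enough to sketch the shape of an unknown dataset; the file name here is hypothetical:

    from rdflib import Graph

    g = Graph()
    g.parse("data.rdf")   # any RDF file; its contents need not be known in advance

    # What kinds of things does this dataset describe?
    for row in g.query("SELECT DISTINCT ?class WHERE { ?s a ?class }"):
        print(row)

    # What attributes and relationships (predicates) does it use?
    for row in g.query("SELECT DISTINCT ?p WHERE { ?s ?p ?o }"):
        print(row)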

Simple inferencing can be applied to broaden and contextualize search, retrieval and analysis. Inference tables can also be created in advance and layered over existing RDF datastores [14] for speedier use and the automatic invoking of inferencing. More complicated inferencing means that RDF models can also perform as complete conceptual views of the world, or knowledge bases. Quite complicated systems are emerging in areas such as common sense (with OpenCyc) and biological systems [20], to name two examples.
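As a naive sketch of the simplest such inference (brute-force forward chaining of one RDFS rule, not any particular engine's implementation):

    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS

    def rdfs_type_closure(g):
        """If x rdf:type C and C rdfs:subClassOf D, infer x rdf:type D.
        Repeats until no new facts are produced."""
        while True:
            inferred = set()
            for x, _, c in g.triples((None, RDF.type, None)):
                for _, _, d in g.triples((c, RDFS.subClassOf, None)):
                    if (x, RDF.type, d) not in g:
                        inferred.add((x, RDF.type, d))
            if not inferred:
                return
            for triple in inferred:
                g.add(triple)

Production systems precompute exactly this kind of closure into the inference tables noted above.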

RDF ontologies and controlled vocabularies also have some hidden power, not yet often seen in standard applications: by virtue of their structure and label properties, we can populate context-relevant dropdown lists and auto-complete entries in user interfaces solely from the input data and structure. This capability is completely generalizable on the basis of the input ontology(ies) alone.
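A sketch of that generalizable pattern (the ex:Country class and file name are invented): the interface list is populated entirely from the ontology's labels, so a changed ontology changes the interface with no new code:

    from rdflib import Graph

    g = Graph()
    g.parse("countries.ttl", format="turtle")   # hypothetical input ontology

    q = """
    PREFIX ex:   <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label
    WHERE { ?item a ex:Country ; rdfs:label ?label . }
    ORDER BY ?label
    """
    # choices can feed any dropdown or auto-complete widget directly
    choices = [str(row.label) for row in g.query(q)]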

A Graph Representation

As the intro noted, when RDF triples get combined, a graph structure emerges. (Most formally, it is a directed, labeled graph.) A graph structure has many advantages. While much is starting to emerge in the graph analysis of social networks, we could fairly argue that we are still at the early stages of plumbing the unique features of graph (“network”) structures.

Graphs are modular and can be both readily combined and broken apart. From a computational standpoint, this can lend itself to parallelized information processing (and, therefore, scalability). With specific reference to RDF it also means that graph extractions are themselves valid RDF models.
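A sketch of that modularity with rdflib: any selection of triples is itself a complete, valid RDF graph that can be serialized or recombined at will (the source file and namespace are hypothetical):

    from rdflib import Graph

    g = Graph()
    g.parse("data.ttl", format="turtle")       # hypothetical source graph

    # Extract every triple about subjects in one namespace; the result
    # is a valid, self-standing RDF model in its own right
    sub = Graph()
    for s, p, o in g:
        if str(s).startswith("http://example.org/customer/"):
            sub.add((s, p, o))

    print(sub.serialize(format="turtle"))      # bytes in older rdflib releases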

Graph algorithms are a significant field of interest within mathematics, computer science and the social sciences. Via approaches such as network theory or scale-free networks, topics such as relatedness, centrality, importance, influence, “hubs” and “domains”, link analysis, spread, diffusion and other dynamics can be analyzed and modeled.

Graphs also have some unique aspects in search and pattern matching. Besides options like finding paths between two nodes, depth-first search, breadth-first search, or finding shortest paths, emerging graph and pattern-matching approaches may offer entirely new paradigms for search.
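For instance, a breadth-first search for a path between two nodes can treat each triple as a directed edge. A hedged sketch over an rdflib graph:

    from collections import deque

    def bfs_path(g, start, goal):
        """Breadth-first search over an RDF graph, following each triple
        as a directed edge from subject to object; returns a node path."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                return path
            for _, _, nxt in g.triples((node, None, None)):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None   # no directed path exists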

Graphs also provide new approaches for visualization and navigation, useful for both seeing relationships and framing information from the local to global contexts. The interconnectedness of the graph allows data to be explored via contextual facets, which is revolutionizing data understanding in a way similar to how the basic hyperlink between documents on the Web changed the contours of our information spaces [21].

Many would argue (as do I) that graphs are the most “natural” data structure for capturing the relationships of the real world. If so, we should continue to see new algorithms and approaches based on graphs emerge to help us better understand our information. And RDF is a natural data model for such purposes.

Open World Applications and the Semantic Web

Ultimately, data interoperability implies a global context. The design of RDF began from this perspective with the semantic Web.

This perspective is firstly grounded in the open-world assumption: that is, the information at hand is understood to be incomplete and not self-contained. Missing values are to be expected and do not falsify what is there. A corollary assumption is that there is always more information that can be added to the system, and that the design should not only accommodate, but promote, that fact.
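SPARQL reflects this open-world stance directly: an OPTIONAL clause lets records with missing values participate in a result rather than being excluded or padded with NULLs. A sketch, with an invented customer vocabulary:

    q = """
    PREFIX ex: <http://example.org/>
    SELECT ?customer ?email
    WHERE {
      ?customer a ex:Customer .
      OPTIONAL { ?customer ex:email ?email }   # absent emails leave ?email unbound
    }
    """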

As the lingua franca of the semantic Web, RDF means that many new datasets, structures and vocabularies now become available to you. So, not only can RDF work to interoperate your own data, it can link in useful external data and schema as well.

Indeed, the concept of linked data now becomes prominent, whereby RDF data with IRIs as their universal identifiers are exposed explicitly to aid discovery and interlinking. Whether internal data is exposed in the linked-data manner or not, this external data can now be readily incorporated into local contexts. The Linking Open Data movement that is promoting this pattern has become highly successful, with billions of useful RDF statements now available for use and consumption [10].
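Consuming such linked data can be as simple as dereferencing an IRI. Here is a sketch with rdflib, assuming the remote server content-negotiates an RDF representation (the DBpedia IRI is merely illustrative):

    from rdflib import Graph

    g = Graph()
    # Dereference a linked-data IRI; the returned RDF merges directly
    # into the local model alongside any local triples
    g.parse("http://dbpedia.org/resource/Semantic_Web")
    print(len(g))   # statements now available locally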

The semantic Web and RDF are enabling the data federation scope to extend beyond organizational boundaries to embrace (soon) virtually all public information. That means that, say, local customer records can now be supplemented with external information about specific customers or products. We are really just at the nascent stages of such data “mesh-ups”, with many unforeseen benefits (and challenges, too, such as privacy, identity and ownership) likely to emerge.

At Web scales, we will see network effects also emerge in areas such as shared vocabularies, shared background knowledge, and collective authoring, annotating and curating. To be sure, the traditional work of trade associations and standards bodies will continue, but likely now in much more interoperable ways.

Myths of RDF

Throughout the years, a number of myths have grown up around RDF. Some, unfortunately, were based on the legacy of how RDF was first introduced and described. Other myths arise from incomplete understanding of RDF’s multiple roles as a framework, data model, and basis for vocabularies and conceptual descriptions of the world.

The accompanying table lists the “Top Ten” myths I have found to date. I welcome submissions of other pet myths. Perhaps soon we can get to the point of a clearer understanding of RDF.

‘Top Ten’ Myths of RDF
  1. RDF is equivalent to XML — perhaps the biggest PR error in RDF’s first introduction was to tie RDF so closely with XML. RDF is a data model as described herein that has no dependence on XML and exists in abstract form separate from it
  2. RDF is written or expressed in XML — in a related way, RDF can be serialized (expressed) in many forms other than XML
  3. RDF and OWL are independent — OWL is a language grounded in RDF and a natural extension of the RDF “stack”; OWL is at the full expressiveness end of the spectrum [22, 23]
  4. RDF is a serialization — no; XML is a serialization, RDF is a data model, framework and basis for constructing vocabularies
  5. Basic RDF has no semantics — though limited and purposefully free, basic RDF in fact has extremely well considered semantics; an essential document for any practitioner is [3]
  6. RDF is too complex — it depends, right? At the level of the basic triple, RDF is extremely simple and is the best place to start learning about RDF
  7. RDF is too simple — it depends, right? At the level of OWL ontologies, RDF can capture virtually any relationship and aspect of the world; see [5] for a great start
  8. RDF is useful for “large” datasets only — the real purpose of RDF is data interoperability, which is needed any time two or more datasets are combined, regardless of size
  9. (Conversely and paradoxically), RDF is not scalable — this premise is still being tested, but we now have very large-scale experience with government data and in the Billion Triples Challenge
  10. RDF is not performant — daily we keep learning more about optimizations, query and re-write strategies, and the like. Orri Erling [24] does some of the best work around in this area and writes lucid explanations on his blog. Moreover, RDF systems are easily embedded in WOA architectures, which prove themselves daily at global Web scales.

Conclusion

Emergence is the way complex systems arise out of a multiplicity of relatively simple interactions, exhibiting new and unforeseen properties in the process. RDF is an emergent model. It begins as simple “fact” statements of triples, which may then be combined and expanded into ever-more complex structures and stories.

As an internal, canonical data model, RDF has advantages over any other approach. We can represent, describe, combine, extend and adapt data and their organizational schema flexibly and at will. We can explore and analyze in ways not easily available with other models.

And, importantly, we can do all of this without the need to change what already exists. We can augment our existing relational data stores, and transfer and represent our current information as we always have.

We can truly call RDF a disruptive data model or framework. But it disrupts without disturbing what exists in the slightest. And that is a most remarkable achievement.


[1] Actually, it is just a few weeks past. The first RDF specification was published as: Ora Lassila and Ralph R. Swick, eds., 1999. “Resource Description Framework (RDF) Model and Syntax Specification,” W3C Recommendation, 22 February 1999; see http://www.w3.org/TR/1999/REC-rdf-syntax-19990222/. Of course, RDF had been in development under various names for some time. To my knowledge, the first public explanation specific to the RDF name was by Tim Bray, “RDF and Metadata,” on XML.com, June 09, 1998; see http://www.xml.com/pub/a/98/06/rdf.html. I’m measuring RDF’s birthday in relation to it being published as an official standard (recommendation) per the first reference.
[2] I first recommend an older introduction by Ian Davis, http://research.talis.com/2005/rdf-intro/. There is a more recent, shorter version by Davis and Tom Heath, The 30 Minute Guide to RDF and Linked Data, at http://www.slideshare.net/iandavis/30-minute-guide-to-rdf-and-linked-data. Also, Joshua Tauberer’s write-up at http://www.rdfabout.com/intro/? is quite excellent.
[3] Patrick Hayes, 2004. “RDF Semantics,” a W3C Recommendation, February 2004. See http://www.w3.org/TR/rdf-mt/.
[4] Graham Klyne, 2004. “Semantic Web and RDF,” on the Nine by Nine Web site (http://www.ninebynine.net/), 4 May 2004; see http://www.ninebynine.org/Presentations/20040505-KelvinInsitute.pdf.
[5] The soon-to-be-released recommendation of OWL 2 is best introduced through the recent: OWL 2 Working Group, eds., 2009. “OWL 2 Web Ontology Language: Document Overview,” W3C Working Draft, 27 March 2009; see http://www.w3.org/TR/owl2-overview/.
[6] The canonical data model is especially prevalent in enterprise application integration. An interesting animated visualization of the canonical data model may be found at: http://soa-eda.blogspot.com/2008/03/canonical-data-model-visualized.html.
[7] Still, my own Sweet Tools listing of RDF and related tools now contains nearly 800 items.
[8] The RDF Advantages Page; see http://www.w3.org/RDF/advantages.html.
[9] See, for example, Neno, the Semantic Web Programming Environment, at: http://neno.lanl.gov/Home.html; and Ripple, at http://code.google.com/p/ripple/. The developers of these systems are now combining efforts.
[10] Here are some useful starting points for RDF at the World Wide Web Consortium (W3C): Begin at the W3C’s ESW wiki. The Linking Open Data community maintains its own people and projects listings as well. Current topics are discussed on the W3C’s semantic Web mailing lists. The W3C maintains a good general listing of semantic Web tools, with specific listings of RDF triplestores.
[11] Michael Bergman, 2009. “‘Structs’: Naïve Data Formats and the ABox,” on the AI3 blog, January 22, 2009; see http://www.mkbergman.com/?p=471. See also Michael Bergman, 2009. “Making Linked Data Reasonable using Description Logics, Part 4,” on the AI3 blog, February 23, 2009; see http://www.mkbergman.com/?p=478.
[12] Harry Halpin, video interview with Marcos Caceres, “GRDDL, Bridging the Interwebs?,” August 4, 2008, on StandardsSuck.org. See http://standardssuck.org/grddl-bridging-the-interwebs.
[13] See, for example, these Virtuoso RDF cartridges (http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtSponger) or listing of RDFizers (http://simile.mit.edu/wiki/RDFizers).
[14] OpenLink Software, 2009. “17.6. Inference Rules & Reasoning,” part of the online Virtuoso User Manual; see: http://docs.openlinksw.com/virtuoso/rdfsparqlrule.html.
[15] Andrew Newman, 2007. “A Relational View of the Semantic Web,” published on XML.com, March 14, 2007; see http://www.xml.com/pub/a/2007/03/14/a-relational-view-of-the-semantic-web.html.
[16] OpenLink Software, 2009. “17.4.3. RDF Views over RDBMS Data Source,” part of the online Virtuoso User Manual; see: http://docs.openlinksw.com/virtuoso/rdfsparqlintegrationmiddleware.html#rdfviews. Also see OpenLink Software, 2007. Virtuoso RDF Views — Getting Started Guide, v1.1, June 2007; see http://virtuoso.openlinksw.com/Whitepapers/pdf/Virtuoso_SQL_to_RDF_Mapping.pdf.
[17] W3C, 2009. RDB2RDF Working Group Charter, revised February 24, 2009; see http://www.w3.org/2005/Incubator/rdb2rdf/WG-draft-charter/.
[18] See further my various blog posts on Web-oriented architecture (WOA).
[19] Especially recommended as an introductory tutorial is: Lee Feigenbaum, 2008. “SPARQL By Example: A Tutorial,” Sept. 17, 2008; see http://www.cambridgesemantics.com/2008/09/sparql-by-example.
[20] Many disciplines are embracing RDF. But, in biology, some exemplar projects are the Bio2RDF genomics project; the Linking Open Drug Data (LODD) initiative, which is a sub-project of the W3C’s broader Health Care and Life Sciences Interest Group (HCLSIG); the Neurocommons project; and the RDF branches of the Open Biomedical Ontologies (OBO) project and foundry.
[21] A very nice visualization of graph-driven structures in relation to information discovery and navigation is provided by Rama Hoetzlein, 2007. Quanta: The Organization of Human Knowledge: Systems for Interdisciplinary Research, a Master’s Thesis, University of California, Santa Barbara, June 2007; see http://www.rchoetzlein.com/quanta/index.htm.
[22] The original phrasing of this Myth used the term “distinct”, which Ted Thibodeau Jr rightly questioned. This myth goes to the heart of what I think is a false separation of the RDF and OWL “camps”. As the intro noted, I see a natural progression from RDF –> RDFS –> OWL, with the transition representing more precise semantics and expressiveness. Describing simple things simply, especially for linked data as mostly practiced, works well in RDF and RDFS. Once world views and conceptual schema are desired for inter-relating these things, RDFS and OWL become the better option. OWL Full (including OWL 2, see [23]) is fully grounded in RDF semantics. However, since OWL Full is not decidable, a subset of that, OWL DL, is still expressible as RDF but now consistent with description logics. This approach can provide more inferencing and reasoning power, at the slight cost of greater care in the semantics used and relationships asserted. In the end, the “distinction” between RDF and OWL is really a difference in use cases and intentions, imo.
[23] Michael Schneider, ed., 2009. OWL 2 Web Ontology Language RDF-Based Semantics, W3C Working Draft 21 April 2009; see http://www.w3.org/TR/owl2-rdf-based-semantics/.
[24] See Orri Erling’s Weblog at: http://www.openlinksw.com/weblogs/oerling/.
Posted:March 27, 2009


The Recent ‘The Unreasonable Effectiveness of Data‘ Provides Important Hints

To even the most casual Web searcher, it must now be evident that Google is constantly introducing new structure into its search results. This past week three world-class computer scientists, all now research directors or scientists at Google, Alon Halevy, Peter Norvig and Fernando Pereira, published an opinion piece in the March/April 2009 issue of IEEE Intelligent Systems titled, ‘The Unreasonable Effectiveness of Data.’ It provides important framing and hints for what next may emerge in semantics from the Google search engine.

I had earlier covered Halevy and Google’s work on the deep Web. In this new piece, the authors describe the use of simple models working on very large amounts of data as a means to trump fancier and more complicated algorithms.

“Unfortunately, the fact that the word ‘semantic’ appears in both ‘Semantic Web’ and ‘semantic interpretation’ means that the two problems have often been conflated, causing needless and endless consternation and confusion. The ‘semantics’ in Semantic Web services is embodied in the code that implements those services in accordance with the specifications expressed by the relevant ontologies and attached informal documentation.”

Some of the research they cite is related to WebTables [1] and similar efforts to extract structure from Web-scale data. The authors describe the use of such systems to create ‘schemata’ of attributes related to various types of instance records — in essence, figuring out the structure of ABoxes [2], for leading instance types such as companies or automobiles [3].

The authors generalize these observations, which they call the semantic interpretation problem in contrast with the Semantic Web, as being amenable to a kind of simple, brute-force, Web-scale analysis: “Relying on overt statistics of words and word co-occurrences has the further advantage that we can estimate models in an amount of time proportional to available data and can often parallelize them easily. So, learning from the Web becomes naturally scalable.”

Google had earlier posted its 1-terabyte database of n-grams, and I tend to agree that such large-scale incidence mining can lead to tremendous insights and advantages. The authors also helpfully point out that certain scale thresholds occur for doing such analysis, such that researchers need not have access to indexes at the scale of Google’s to do meaningful work or to make meaningful advances. (Good news for the rest of us!)
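To make the flavor of such overt statistics concrete, here is a toy sketch of windowed word co-occurrence counting; each sentence is processed independently, which is exactly why this style of analysis parallelizes and scales with the data:

    from collections import Counter

    def cooccurrence_counts(sentences, window=2):
        """Count ordered word pairs within a sliding window; sentences are
        independent, so the work parallelizes trivially (e.g., MapReduce)."""
        counts = Counter()
        for sentence in sentences:
            words = sentence.lower().split()
            for i, w in enumerate(words):
                for v in words[i + 1 : i + 1 + window]:
                    counts[(w, v)] += 1
        return counts

    print(cooccurrence_counts(["the frog is green", "the frog croaks"]))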

As the authors challenge:

  1. Choose a representation that can use unsupervised learning on unlabeled data
  2. Represent the data with a non-parametric model, and
  3. Trust that the important concepts will emerge from this analysis, because human language has already evolved words for them.

My very strong suspicion is that we will see — and quickly — much more structured data for instance types (the ‘ABox’) emerge from Google in the coming weeks. They have the insights and approaches down, and clearly they have the data to drive the analysis! I also suspect many of these structured additions will simply show up in the results listings with little fanfare.

The structured Web is growing all around us like stalagmites in a cave!


[1] Michael J. Cafarella, Alon Y. Halevy, Daisy Zhe Wang, Eugene Wu and Yang Zhang, 2008. “WebTables: Exploring the Power of Tables on the Web,” in the 34th International Conference on Very Large Databases (VLDB), Auckland, New Zealand, 2008. See http://web.mit.edu/y_z/www/papers/webtables-vldb08.pdf.
[2] As per our standard use:

"Description logics and their semantics traditionally split concepts and their relationships from the different treatment of instances and their attributes and roles, expressed as fact assertions. The concept split is known as the TBox (for terminological knowledge, the basis for T in TBox) and represents the schema or taxonomy of the domain at hand. The TBox is the structural and intensional component of conceptual relationships. The second split of instances is known as the ABox (for assertions, the basis for A in ABox) and describes the attributes of instances (and individuals), the roles between instances, and other assertions about instances regarding their class membership with the TBox concepts."
[3] I very much like the authors’ use of ‘schemata’ as the way to describe the attribute structure of various instance record types for the ABox, in contrast to the more appropriate ‘ontology’ applied to the TBox.