Posted: February 28, 2011

Wikipedia + UMBEL + Friends May Offer One Approach

In the first part of this series we argued for the importance of reference structures: the structures and vocabularies that guide interoperability on the semantic Web. The argument was made that these reference structures are akin to human languages, requiring a sufficient richness and terminology to enable nuanced and meaningful communications of data across the Web and within the context of their applicable domains.

While the idea of such reference structures is great — and perhaps even intuitive when likened to human languages — it raises the question: what is the basis for such structures? Just as human languages have dictionaries, thesauri, grammar and style books and encyclopedias, what are the analogous reference sources for the semantic Web?

In this piece, we tackle these questions from the perspective of the entire Web. Similar challenges and approaches occur, of course, for virtually every domain and specific community. But, by focusing on the entirety of the Web, perhaps we can discern the grain of sand at the center of the pearl.

Bootstrapping the Semantic Web

The idea of bootstrapping is common in computers, compilers or programming. Every computer action needs to start from a basic set of instructions from which further instructions or actions are derived. Even starting up a computer (“booting up”) reflects this bootstrapping basis. Bootstrapping is the answer to the classic chicken-or-egg dilemma by embedding a starting set of instructions that provides the premise at start up [1]. The embedded instruction for simple addition, for example, is the basis for building up more complete mathematical operations.

So, what is the grain of sand at the core of the semantic Web that enables it to bootstrap meaning? We start with the basic semantics and “instructions” in the core RDF, RDFS and OWL languages. These are very much akin to the basic BIOS instructions for computer boot up or the instruction sets leveraged by compilers. But, where do we go from there? What is the analog to the compiler or the operating system that gives us more than these simple start up instructions? In a semantics sense, what are the vocabularies or languages that enable us to understand more things, connect more things, relate more things?
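To make those “starting instructions” concrete, here is a minimal sketch (in Python, using the rdflib library; all example URIs are hypothetical) of the kind of base semantics RDF and RDFS provide. A couple of schema assertions plus one instance assertion are enough to license simple entailments:

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.add((EX.Mammal, RDFS.subClassOf, EX.Animal))  # schema-level assertions
g.add((EX.Dog, RDFS.subClassOf, EX.Mammal))
g.add((EX.fido, RDF.type, EX.Dog))              # instance-level assertion

# RDFS semantics entail that fido is also a Mammal and an Animal; here we
# compute the subclass closure by hand to show the inference in miniature.
def types_of(resource):
    found = set(g.objects(resource, RDF.type))
    frontier = set(found)
    while frontier:
        cls = frontier.pop()
        for parent in g.objects(cls, RDFS.subClassOf):
            if parent not in found:
                found.add(parent)
                frontier.add(parent)
    return found

print(types_of(EX.fido))  # fido is a Dog, a Mammal and an Animal
```

Like a BIOS, this is enough to get the machine running; it says nothing yet about what things exist in the world or how they relate.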

To date, the semantic Web has given us perhaps a few dozen commonly used vocabularies, most of which are quite limited, simple pidgin languages such as DC, FOAF, SKOS, SIOC, BIBO, etc. We also have an emerging catalog of “things” and concepts from Wikipedia (via DBpedia) and similar sources. (Recall, in this piece, we are trying to look Web-wide, so the many fine building blocks for domain purposes such as found in biology, medicine, finance, astronomy, etc., are excluded.) The purposes and scope of these vocabularies differ widely and attack quite different slices of the information space. SKOS, for example, deals with describing simple knowledge structures like taxonomies or thesauri; SIOC is for describing social media.
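These pidgin vocabularies are easy to combine in a single graph, which is part of their appeal. A hedged sketch (again with rdflib; the instance data is invented, while the FOAF, DC and SKOS namespaces are the standard ones bundled with the library):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import DC, FOAF, SKOS

EX = Namespace("http://example.org/")  # hypothetical instance namespace
g = Graph()
g.add((EX.jane, RDF.type, FOAF.Person))                 # FOAF: people
g.add((EX.jane, FOAF.name, Literal("Jane Doe")))
g.add((EX.post1, DC.creator, EX.jane))                  # DC: documents
g.add((EX.post1, DC.subject, EX.SemanticWeb))
g.add((EX.SemanticWeb, RDF.type, SKOS.Concept))         # SKOS: concepts
g.add((EX.SemanticWeb, SKOS.prefLabel, Literal("Semantic Web", lang="en")))

print(g.serialize(format="turtle"))
```

Note, though, that nothing in this graph says how a FOAF person, a DC document and a SKOS concept relate to the broader world of things; each vocabulary covers only its own slice.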

By virtue of adoption, each of these core languages has proved its usefulness and role. But, like skew lines in space, how do these vocabularies relate to one another? And, how can all of the specific domain vocabularies also relate to those and one another where there are points of intersection or overlap? In short, after we get beyond the starting instructions for the semantic Web, what is our language and vocabulary? How do we complete the bootstrap process?

Clearly, like human languages, we need rich enough vocabularies to describe the things in our world and a structure of the relationships amongst those things to give our communications meaning and coherence. That is precisely the role provided by reference structures.

The Use and Role of ‘Gold Standards’

To prevent reference structures from being rubber rulers, some fixity or grounding needs to establish the common understanding for their referents. Such fixed references are often called ‘gold standards’. In money, of course, this used to be a fixed weight of gold, until that basis was abandoned in the 1970s. In the metric system, there are a variety of fixed weights and measures that are employed. In the English language, the Oxford English Dictionary (OED) is the accepted basis for the lexicon. And so on.

Yet, as these examples show, none of these gold standards is absolute. Money now floats; multiple systems of measurement compete; a variety of dictionaries are used for English; most languages have their own reference sets; etc. The key point in all gold standards, however, is that there is wide acceptance for a defined reference for determining alignments and arbitrating differences.

Gold standards or reference standards play the role of referees or arbiters. What is the meaning of this? What is the definition of that? How can we tell the difference between this and that? What is the common way to refer to some thing?

Let’s provide one example in a semantic Web context. Let’s say we have a dataset and its schema A that we are aligning with another dataset with schema B. If I say two concepts align exactly across these datasets and you say differently, how do we resolve this difference? On one extreme, each of us can say our own interpretation is correct, and to heck with the other. On the other extreme, we can say both interpretations are correct, in which case both assertions are meaningless. Perhaps papering over these extremes is OK when only two competing views are in play, but what happens when real problems with many actors are at stake? Shall we propose majority rule, chaos, or rule by the strongest?
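Here is that disagreement in RDF terms, as a hedged sketch (hypothetical URIs; the OWL and SKOS mapping predicates are standard). Both claims are syntactically valid and can coexist on the Web; nothing in the data itself arbitrates between them:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, SKOS

A = Namespace("http://example.org/schemaA/")  # hypothetical schema A
B = Namespace("http://example.org/schemaB/")  # hypothetical schema B

# My claim: the two concepts are exactly equivalent (inference-bearing)
mine = Graph()
mine.add((A.Automobile, OWL.equivalentClass, B.Car))

# Your claim: they are only loosely similar (no logical commitment)
yours = Graph()
yours.add((A.Automobile, SKOS.closeMatch, B.Car))
```

A gold standard gives both parties a third, fixed referent against which such competing assertions can be tested.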

These same types of questions have governed human interaction from time immemorial. One of the reasons to liken the problem of interoperability on the semantic Web to human languages, as argued in Part I, is to seek lessons and guidance for how our languages have evolved. The importance of finding common ground in our syntax and vocabularies — and, also, critically, in how we accept changes to those — is the basis for communication. Each of these understandings needs to be codified and documented so that they can be referenced, and so that we can have some confidence of what the heck it is we are trying to convey.

For reference structures to play their role in plugging this gap — that is, to be much more than rubber rulers — they need to have such grounding. Naturally, these groundings may themselves change with new information or learning inherent to the process of human understanding, but they still should retain their character as references. Grounded references for these things — ‘gold standards’ — are key to this consensual process of communicating (interoperating).

Some ‘Gold Standards’ for the Semantic Web

The need for gold standards for the semantic Web is particularly acute. First, by definition, the scope of the semantic Web is all things and all concepts and all entities. Second, because it embraces human knowledge, it also embraces all human languages with the nuances and varieties thereof. There is an immense gulf in referenceability from the starting languages of the semantic Web in RDF, RDFS and OWL to this full scope. This gulf is chiefly one of vocabulary (or lack thereof). We know how to construct our grammars, but we have few words with understood relationships between them to put in the slots.

The types of gold standards useful to the semantic Web are similar to those useful to our analogy of human languages. We need guidance on structure (syntax and grammar), plus reference vocabularies that encompass the scope of the semantic Web (that is, everything). Like human languages, the vocabulary references should have analogs to dictionaries, thesauri and encyclopedias. We want our references to deal with the specific demands of the semantic Web in capturing the lexical basis of human languages and the connectedness (or not) of things. We also want bases by which all of this information can be related to different human languages.

To capture these criteria, then, I submit we should consider a basic starting set of gold standards:

  • RDF/RDFS/OWL — the data model and basic building blocks for the languages
  • Wikipedia — the standard reference vocabulary of things, concepts and entities, plus other structural guidance
  • WordNet — lexical language references as an aid to natural language processing, and
  • UMBEL — the structural reference for the connectedness of things for basic coherence and inference, plus a vocabulary for mapping amongst reference structures and things.

Each of these potential gold standards is next discussed in turn. The majority of discussion centers on Wikipedia and UMBEL.

RDF/RDFS/OWL: The Language

Naturally, the first suggested gold standard for the semantic Web is the set of RDF/RDFS/OWL language components. Other writings have covered their uses and roles [2]. In relation to their use as a gold standard, two documents — the RDF semantics recommendation [3] and the OWL 2 primer [4] — are great starting points. Since these languages are now in place and are accepted bases of the semantic Web, we will concentrate on the remaining members of the standard reference set.

Wikipedia: The Vocabulary (and More)

The second suggested gold standard for the semantic Web is Wikipedia, principally as a sort of canonical vocabulary base or lexicon, but also for some structural aspects. Wikipedia now contains about 3.5 million English articles, by far the largest of any knowledge base, and has more than 250 language versions. Each Wikipedia article acts more or less as a reference for the thing it represents. In addition, the size, scope and structure of Wikipedia make it an unprecedented resource for researchers engaged in natural language processing (NLP), information extraction (IE) and semantic Web-related tasks.

For some time I have been maintaining a listing called SWEETpedia of academic and research articles focused on the use of Wikipedia for these tasks. The latest version tracks some 250 articles [5], which I estimate to be half or more of all such research extant. This research shows a broad variety of potential roles and contributions from Wikipedia as a gold standard for the semantic Web, some of which is detailed in the tables below.

An excellent report by Olena Medelyan et al. from the University of Waikato in New Zealand, Mining Meaning from Wikipedia, organized this research up through 2008 and provided detailed commentary and analysis of the role of Wikipedia [6]. They noted, for example, that Wikipedia has potential use as an encyclopedia (its intended use), a corpus for testing and modeling NLP tasks, a thesaurus, a database, an ontology or a network structure. The Intelligent Wikipedia project from the University of Washington has also done much innovative work on “automatically learned systems [that] can render much of Wikipedia into high-quality semantic data, which provides a solid base to bootstrap toward the general Web” [7].

However, as we proceed through the next discussions, we’ll see that the weakest aspect of Wikipedia is its category structure. Thus, while Wikipedia is unparalleled as the gold standard for a reference vocabulary for the Web, and has other structural uses as well, we will need to look elsewhere for how that content is organized.

Major Wikipedia Initiatives

Many groups have recognized these advantages of Wikipedia and have built knowledge bases around it. Many of these groups have also recognized the category (schema) weaknesses in Wikipedia and have proposed alternatives. Some of these major initiatives, which also collectively account for a large number of the research articles in SWEETpedia, include:

  • DBpedia. Schema basis: Wikipedia infoboxes. An excellent source for URI identifiers; its structure extraction is the basis used by many other projects.
  • Freebase. Schema basis: user generated. Schemas are for domains based on types and properties; at one time had a key dependence on Wikipedia, but has since grown much from user-generated data and structure; now owned by Google.
  • Intelligent Wikipedia. Schema basis: Wikipedia infoboxes. A broad program and a general set of extractors for obtaining structure and relationships from Wikipedia; formerly known as KOG; from the Univ of Washington.
  • SIGWP. Schema basis: Wikipedia ontology. The Special Interest Group of Wikipedia (Research or Mining); a general group doing research on Wikipedia structure and mining; its schema basis is mostly from a thesaurus; the group has not published in two years.
  • UMBEL. Schema basis: UMBEL reference concepts. RefConcepts based on the Cyc knowledge base; provides a tested, coherent concept schema, but one with gaps regarding Wikipedia content; has 28,000 concepts mapped to Wikipedia.
  • WikiNet. Schema basis: extracted Wikipedia ontology. Part of a long-standing structure extraction effort from Wikipedia leading to an ontology; formerly known as WikiRelate; from the Heidelberg Institute for Theoretical Studies (HITS).
  • Wikipedia Miner. Schema basis: N/A. A generalized structure extractor; part of a wider basis of Wikipedia research at the Univ of Waikato in New Zealand.
  • Wikitology. Schema basis: Wikipedia ontology. A general RDF and ontology-oriented project utilizing Wikipedia; effort now concluded; from the Ebiquity Group at the Univ of Maryland.
  • YAGO. Schema basis: WordNet. Maps WordNet to Wikipedia, with structured extraction of relations for characterizing entities.

It is interesting to note that none of the efforts above uses the Wikipedia category structure “as is” for its schema.

Structural Sources within Wikipedia

The surface view of Wikipedia is topic articles placed into one or more categories. Some of these pages also include structured data tables (or templates) for the kind of thing the article is; these are called infoboxes. An infobox is a fixed-format table placed at the top right of articles to consistently present a summary of some unifying aspect that the articles share. For example, see the listing for my home town, Iowa City, which has a city infobox.
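Because infoboxes are regular key-value templates, they are straightforward to mine. A minimal sketch, assuming the third-party mwparserfromhell wikitext parser (the wikitext snippet is abbreviated and its field values illustrative; real infoboxes carry dozens of fields):

```python
import mwparserfromhell

wikitext = """{{Infobox settlement
| name = Iowa City
| subdivision_name = United States
| population_total = 67862
}}"""

# Parse the raw wikitext and walk any infobox templates, emitting the
# key-value pairs that projects like DBpedia turn into RDF triples.
code = mwparserfromhell.parse(wikitext)
for template in code.filter_templates():
    if str(template.name).strip().lower().startswith("infobox"):
        for param in template.params:
            print(str(param.name).strip(), "=", str(param.value).strip())
```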

However, this cursory look at Wikipedia in fact masks much additional and valuable structure. Some early researchers noted this [8]. The recognition of structure has also been a key driver for the interest in Wikipedia as a knowledge base (in addition to its global content scope). The following is a fairly complete listing of structure possibilities within Wikipedia, with potential applications for each (bracketed numbers refer to the endnotes):

  • Corpus
    • Entire Corpus: knowledge base; graph structure; corpus for n-grams, other constructions [9]
  • Categories
    • Category: category suggestion; semantic relatedness; query expansion; potential parent category
    • Contained Articles: semantically-related terms (siblings)
    • Hierarchy: hyponymic and meronymic relations between terms
    • Listing Pages/Categories: semantically-related terms (siblings)
    • Patterned Categories: functional metadata [9]
  • Infobox Templates
    • Attributes: synonyms; key-value pairs
    • Values: units of measure; fact extraction [9]
    • Items: category suggestion; entity suggestion
    • Geolocational: coordinates; places (may also appear in full article text)
  • Issue Templates
    • Multiple Types: exclusion candidates; other structural analysis; examples include Stub, Message Boxes, Multiple Issues [9]
  • Category Templates [13]
    • Category Name: disambiguation; relatedness
    • Category Links: semantic relatedness
  • Articles
    • First Paragraph: definition; abstract
    • Full Text: complete discussion; related terms; context; translations; NLP analysis basis; relationships; sentiment
    • Redirects: synonymy; spelling variations; misspellings; abbreviations; query expansion
    • Title: named entities; domain-specific terms or senses
    • Subject: category suggestion (phrase marked in bold in first paragraph)
    • Section Heading(s): category suggestion; semantic relatedness [9]
    • See Also: related concepts; query expansion [9]
    • Further Reading: related concepts [9,10]
    • External Links: related concepts; external harvest points
  • Article Links
    • Context: related terms; co-occurrences
    • Label: synonyms; spelling variations; related terms; query expansion
    • Target: link graph; related terms
    • LinksTo: category suggestion; functional metadata
    • LinkedFrom: category suggestion; functional metadata
  • References
    • Citations: external harvest points [9,10]
  • Media
    • Images: thumbnails; image recognition for disambiguation; controversy (edit/upload frequency) [11]
    • Captions: related concepts; related terms; functional metadata [9]
  • Disambiguation Pages
    • Article Links: sense inventory
  • Discussion Pages
    • Discussion Content: controversy
    • Redux for Article Structure: see Articles above for uses
  • History Pages
    • Edit Frequency: topicalness; controversy (diversity of editors, reversions)
    • Edit Basis: lexical errors [9]
  • Lists
    • Hyponyms: instances; named entity candidates
  • Alternate Language Versions
    • Redux for All Structures: see all items above; translation; multilingual alignment; entity disambiguation [12]

The potential for Wikipedia to provide structural understandings is evident from this listing. However, it should be noted that, aside from some stray research initiatives, most effort to date has focused on the major initiatives noted earlier or on analyzing linking and infoboxes. There is much additional research that could be powered by the Wikipedia structure as it presently exists.

From the standpoint of the broader semantic Web, the potential of Wikipedia in the areas of metadata enhancement and mapping to multiple human languages [12] is particularly strong. We are only now at the very beginning phases of tapping this potential.
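Much of the structure listed above is also reachable programmatically without parsing dumps. A quick, hedged illustration using the third-party `wikipedia` package (a thin wrapper around the MediaWiki API; the attribute names belong to that package, not to Wikipedia itself):

```python
import wikipedia

page = wikipedia.page("Semantic Web")
print(page.summary[:300])    # first paragraph(s): definition / abstract
print(page.categories[:5])   # category assignments
print(page.links[:5])        # article links: related terms
print(page.references[:3])   # citations: external harvest points
```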

Structural Weaknesses

The three main weaknesses of Wikipedia are its category structure [14], inconsistencies and incompleteness. The first weakness means Wikipedia is not a suitable organizational basis for the semantic Web; the latter two, artifacts of Wikipedia’s user-generated content, are steadily being remedied.

Our effort to map between UMBEL and Wikipedia, undertaken as part of the recent UMBEL v 1.00 release, spent considerable time analyzing the Wikipedia category structure [15]. Of the roughly half million categories in Wikipedia, only about 85,000 were found to be suitable candidates to participate in an actual schema structure. The breakdown resulting from our analysis is as follows:

Wikipedia Category Breakdowns

  • Removals: 20.7%
    • Administrative: 15.7%
    • Misc Cleaning: 5.0%
  • Functional (not schema): 61.8%
    • Fn Dates: 10.1%
    • Fn Nationalities: 9.6%
    • Fn Listings, related: 0.8%
    • Fn Occupations: 1.0%
    • Fn Prepositions: 40.4%
  • Candidates: 17.4%
    • SuperTypes: 1.7%
    • General Structure: 15.7%
  • TOTAL: 100.0%

Fully one-fifth of the categories are administrative or internal in nature. The large majority of categories are, in fact, not structural at all, but what we term functional categories, meaning the category contains faceting information (such as subclassifying musicians into British musicians) [16]. Functional categories can be a rich source of supplementary metadata for their assigned articles — though no one has yet processed Wikipedia in this manner — but are not a useful basis for structural conceptual relationships or inferencing.
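The telltale signs of functional categories are date spans, nationalities and prepositional constructs. A much-simplified sketch of such a screen (illustrative regexes only; the actual UMBEL analysis described in [15] was considerably more involved, and facets like nationality would need a gazetteer rather than patterns):

```python
import re

PREPOSITIONS = r"\b(of|in|by|from|for|about|at)\b"

def classify_category(name: str) -> str:
    # Internal housekeeping categories (cleanup, stubs, etc.)
    if re.match(r"(Wikipedia|Articles|Pages|Redirects)\b", name):
        return "administrative"
    # Date facets, e.g., "English cricketers of 1890 to 1918"
    if re.search(r"\b1[0-9]{3}\b|\b20[0-9]{2}\b", name):
        return "functional (dates)"
    # A-x-B faceted constructs joined by prepositions
    if re.search(PREPOSITIONS, name, flags=re.IGNORECASE):
        return "functional (faceted)"
    return "schema candidate"

for cat in ["Musicians", "Songs by artist",
            "English cricketers of 1890 to 1918",
            "Wikipedia articles needing cleanup"]:
    print(cat, "->", classify_category(cat))
```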

This weakness in the Wikipedia category system has been known for some time [17], but researchers and others still attempt to do mappings on mostly uncleaned categories. Though most researchers recognize and remove internal or administrative categories in their efforts, using the indiscriminate remainder of categories still leads to poor precision in resulting mappings. In fact, in comparison to one of the more rigorous assessments to date [18], our analysis still showed a 6.8% error rate in hand-inspected categories.

Other notable category problems include circular references, skipped intermediate categories, misassigned categories and incomplete assignments.

Nonetheless, Wikipedia categories do have a valuable use in the analysis of local relationships (one degree of relatedness) and for finding missing category candidates. And, as noted, the functional categories are also a rich and untapped source of additional article metadata.

Like any knowledge base, Wikipedia also has inconsistent and incomplete coverage of topics [19]. However, as more communities accept Wikipedia as a central resource deserving completeness, we should see these gaps continue to get filled.

The DBpedia Implementation

One of the first database versions of Wikipedia built for semantic Web purposes is DBpedia. DBpedia has an incipient ontology useful for some classification purposes. Its major structural organization is built around the Wikipedia infoboxes, which are applied to about a third of Wikipedia articles. DBpedia also has multiple language versions.

DBpedia is a core hub of Linked Open Data (LOD), which now has about 300 linked datasets; has canonical URIs used by many other applications; has extracted versions and tools very useful for further processing; and has recently moved to incorporate live updates from the source Wikipedia [20]. For these reasons, the DBpedia version of Wikipedia is the suggested implementation version.
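As a hedged sketch of working against this implementation version, here is a query to DBpedia’s public SPARQL endpoint that harvests redirect labels (the synonym and spelling-variant structure tabulated earlier) for a single concept. It assumes the SPARQLWrapper package and the dbo:wikiPageRedirects property as DBpedia currently publishes it:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# The dbo: and dbr: prefixes are predefined by the DBpedia endpoint.
sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
      ?redirect dbo:wikiPageRedirects dbr:Semantic_Web ;
                rdfs:label ?label .
      FILTER (lang(?label) = "en")
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding is one synonym or spelling variant for the concept.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])
```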

WordNet: Language Relationships

The third suggested gold standard for the semantic Web is WordNet, a lexical database for the English language. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. The purpose is twofold: to produce a combination of dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications. More than 50 languages are covered by wordnet approaches, most mapped to this English WordNet [21].
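The synset structure is easy to inspect through NLTK’s corpus reader, one common access route (the nltk package and its WordNet data download are assumed):

```python
from nltk.corpus import wordnet as wn

# Each synset groups synonymous lemmas under a short gloss and links to
# related synsets (hypernyms, hyponyms, meronyms, etc.).
for synset in wn.synsets("bank")[:3]:
    print(synset.name(), "-", synset.definition())
    print("  synonyms: ", [lemma.name() for lemma in synset.lemmas()])
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
```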

Though it has been used in many ontologies [22], WordNet is most often mapped for its natural language purposes and not used as a structure of conceptual relationships per se. This is because it is designed for words and not concepts. It contains hundreds of basic semantic inconsistencies and also lacks much domain applicability. Entities, of course, are also lacking. In those cases where WordNet has been embraced as a schema basis, much work is generally expended to transform it into an ontology suitable for knowledge representation.

Nonetheless, for word sense disambiguation and other natural language processing tasks, as well as for aiding multilingual mappings, WordNet and its various other language variants are a language reference gold standard.

UMBEL: A Coherent Structure

So, with these prior gold standards we gain a basic language and grammar; a base (canonical) vocabulary and some structure guidance; and a reference means for processing and extracting information from input text. Yet two needed standards remain.

One needed standard is a conceptual organizing structure (or schema) by which the canonical vocabulary of concepts and instances can be related. This core structure should be constructed in a coherent [23] manner and expressly designed to support inferencing and (some) reasoning. This core structure should be sufficiently large to embrace the scope of the semantic Web, but not so detailed as to make it computationally inefficient. Thus, the core structure should be a framework that allows more focused and purposeful vocabularies to be “plugged in”, depending on the domain and task at hand. Unfortunately, the candidate category structures from our other gold standards in Wikipedia and WordNet do not meet these criteria.

A second needed standard is a bit of additional vocabulary “glue” specifically designed for the purposes of the semantic Web and ontology and domain incorporation. We have multiple and disparate world views and contexts, as well as the things described by them [24]. To get them to interoperate — and to acknowledge differences in alignment or context — we need a set of relational predicates (vocabulary) that can capture a range of mappings from the exact to the approximate [25]. Unlike other reference vocabularies that attempt to capture canonical definitions within defined domains, this vocabulary is expressly required by the semantic Web and its goal to federate different data and schema.

UMBEL has been expressly designed to address both of these two main needs [26]. UMBEL is a coherent categorization structure for the semantic Web and a mapping vocabulary designed for dataset and conceptual interoperability. UMBEL’s 28,000 reference concepts (RefConcepts) are based on the Cyc knowledge base [27], which is expressly designed as a common sense representation of the world, with variations in context supported via its 1,000 or so microtheories. Cyc, and the UMBEL that is based upon it, are by no means the “correct” or “only” representations of the world, but they are coherent ones and thus internally consistent.
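The mapping half of that design is a gradient of predicates running from the exact to the approximate. A sketch of that gradient (the OWL and SKOS predicates are standard; the umbel: properties shown reflect my reading of the UMBEL vocabulary listing [25], so treat the specific property choices as illustrative):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, SKOS

UMBEL = Namespace("http://umbel.org/umbel#")
EX = Namespace("http://example.org/")  # hypothetical domain vocabulary

g = Graph()
# Strongest: full logical equivalence (inference-bearing)
g.add((EX.Car, OWL.equivalentClass, EX.Automobile))
# Middle: near-equivalence without full logical commitment
g.add((EX.Car, UMBEL.correspondsTo, EX.MotorVehicle))
# Looser: SKOS-style approximate mapping
g.add((EX.Car, SKOS.closeMatch, EX.PassengerVehicle))
# Loosest: tying an entity or dataset to a reference concept
g.add((EX.myDataset, UMBEL.isAbout, EX.Car))
```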

UMBEL’s role to allow datasets to be “plugged in” and related through some fixed referents was expressed by this early diagram [28]:

Lightweight Binding to an Upper Subject Structure Can Bring Order

The idea — which is still central to this kind of reference structure — is that a set of reference concepts can be used by multiple datasets to connect and then inter-relate. These are shown by the nested subjects (concepts) in the umbrella structure.

UMBEL, of course, is not the only coherent structure for such interoperability purposes. Other major vocabularies (such as LCSH; see below) or upper-level ontologies (such as SUMO, DOLCE, BFO or PROTON, etc.) can fulfill portions of these roles, as well. In fact, the ultimate desire is for multiple reference structures to emerge that are mapped to one another, similar to how human languages can inter-relate. Yet, even in that desired vision, there is still a need for a bootstrapped grounding. UMBEL is the first such structure expressly designed for the two needed standards.

Mappings to the Other Standards

UMBEL is already based on the central semantic Web languages of RDF, RDFS, SKOS and OWL 2. The recent version 1.00 maps 60% of UMBEL to Wikipedia, with efforts on the remainder in process. UMBEL also provides mappings to WordNet via its Cyc relationships; more of these are in process and will be published. And the mappings between UMBEL and GeoNames [29] for locational purposes are also nearly complete.

The Gold Resides in Combining These Standards

Each of these reference structures — RDF/OWL, Wikipedia, WordNet, UMBEL — is itself coherent and recognized or used by multiple parties for potential reference purposes on the semantic Web. Advocating them as standards is hardly radical.

However, the gold lies in the combination of these components. It is in this combination that we can see a grounded knowledge base emerge that is sufficient for bootstrapping the semantic Web.

The challenge in creating this reference knowledge base is in the mapping between the components. Fortunately, all of the components are already available in RDF/OWL. WordNet already has significant mappings to Wikipedia and UMBEL. And 60% of UMBEL is already mapped to Wikipedia. The remaining steps for completing these mappings are very near at hand. Other vocabularies, such as GeoNames [29], would also beneficially contribute to such a reference base.

Yet to truly achieve a role as a gold standard, these mappings should be fully vetted and accurate. Automated techniques that embed errors are unacceptable. Gold standards should not themselves be a source for the propagation of errors. Like dictionaries or thesauri, we need reference structures that are of high quality and deserving of reference. We need canonical structures and canonical vocabularies.

But, once done, these gold standards themselves become reference sources that can aid automatic and semi-automatic mappings of other vocabularies and structures. Thus, the real payoff is not that these gold standards get embedded directly in specific domain uses, but that they can act as referees for helping align and ground other structures.

Like the bootstrap condition, more and more reference structures may be brought into this system. A reference structure does not mean reliance; it need not even have more than minimal use. As new structures and vocabularies are brought into the mix, appropriate to specific domains or purposes, reference to other grounding structures will enable the structures and vocabularies to continue to expand. So, not only are reference concepts necessary for grounding the semantic Web, but we also need to pick good mapping predicates for properly linking these structures together.

In this manner, many alternative vocabularies can be bootstrapped and mapped and then used as the dominant vocabularies for specific purposes. For example, at the level of general knowledge categorization, vocabularies such as LCSH, the Dewey Decimal Classification, UDC, etc., can be preferentially chosen. Other specific vocabularies are at the ready, with many already used for domain purposes. Once grounded, these various vocabularies can also interoperate.

Grounding in gold standards enables the freedom to switch vocabularies at will. Establishing fixed reference points via such gold standards will power a virtuous circle of more vocabularies, more mappings, and, ultimately, functional interoperability no matter the need, domain or world view.

This is the last of a two-part series on the importance and choice of reference structures (Part I) and gold standards (Part II) on the semantic Web.

[1] For example, according to the Wikipedia entry on Machine code, “A machine code instruction set may have all instructions of the same length, or it may have variable-length instructions. How the patterns are organized varies strongly with the particular architecture and often also with the type of instruction. Most instructions have one or more opcode fields which specifies the basic instruction type (such as arithmetic, logical, jump, etc) and the actual operation (such as add or compare) and other fields that may give the type of the operand(s), the addressing mode(s), the addressing offset(s) or index, or the actual value itself.”
[2] See, for example, M.K. Bergman, 2009. “Advantages and Myths of RDF,” AI3:::Adaptive Information blog, April 8, 2009; see https://www.mkbergman.com/483/advantages-and-myths-of-rdf/ and M.K. Bergman, 2010. “Ontology Tutorial Series,” AI3:::Adaptive Information blog, September 27, 2010; see https://www.mkbergman.com/916/ontology-tutorial-series/.
[3] Patrick Hayes, ed., 2004. RDF Semantics, W3C Recommendation 10 February 2004. See http://www.w3.org/TR/rdf-mt/.
[4] Pascal Hitzler et al., eds., 2009. OWL 2 Web Ontology Language Primer, a W3C Recommendation, 27 October 2009; see http://www.w3.org/TR/owl2-primer/.
[5] See SWEETpedia from the AI3:::Adaptive Information blog, which currently lists about 250 articles and citations.
[6] Olena Medelyan, Catherine Legg, David Milne and Ian H. Witten, 2008. Mining Meaning from Wikipedia, Working Paper Series ISSN 1177-777X, Department of Computer Science, The University of Waikato (New Zealand), September 2008, 82 pp. See http://arxiv.org/ftp/arxiv/papers/0809/0809.4530.pdf. This paper and its findings is discussed more in M.K. Bergman, 2008. “Research Shows Natural Fit between Wikipedia and Semantic Web,” AI3:::Adaptive Information blog, October 15, 2008; see https://www.mkbergman.com/460/research-shows-natural-fit-between-wikipedia-and-semantic-web/.
[7] For a comprehensive treatment, see Fei Wu, 2010. Machine Reading: from Wikipedia to the Web, a doctoral thesis to the Department of Computer Science, University of Washington, 154 pp; see http://ai.cs.washington.edu/www/media/papers/Wu-thesis-2010.pdf. To my knowledge, this paper also was the first to use the “bootstrapping” metaphor.
[8] Quite a few research papers have characterized various aspects of the Wikipedia structure. One of the first and most comprehensive was Torsten Zesch, Iryna Gurevych and Max Mühlhäuser, 2007. Analyzing and Accessing Wikipedia as a Lexical Semantic Resource, and the longer technical report; see http://www.ukp.tu-darmstadt.de/software/JWPL. Also published in 2008 in Proceedings of the Biannual Conference of the Society for Computational Linguistics and Language Technology, pp. 213–221. For another early discussion, see Linyun Fu, Haofen Wang, Haiping Zhu, Huajie Zhang, Yang Wang and Yong Yu, 2007. Making More Wikipedians: Facilitating Semantics Reuse for Wikipedia Authoring. See http://data.semanticweb.org/pdfs/iswc-aswc/2007/ISWC2007_RT_Fu.pdf.
[9] This structural basis in Wikipedia is largely untapped.
[10] Citations and references appear to be highly selective (biased) in Wikipedia; nonetheless, those available are useful seeding points for more suitable harvests.
[11] Images have been used as thumbnails and linked references to the articles hosting them, but have not been analyzed much for semantics or file names.
[12] There are a variety of efforts underway to use Wikipedia as a multi-language cross-reference based on its 250 language versions; search, for example, on “multiple language” in SWEETpedia. Both named entity and concept matches can be used to correlate in multiple languages. This is greatly aided by inter-language links.
[13] When present, these appear at the bottom of an article and have many related categories; see this one for the semantic Web.
[14] See further http://en.wikipedia.org/wiki/Wikipedia:Category and http://en.wikipedia.org/wiki/Wikipedia:Categorization_FAQ for a discussion of use and guidelines for Wikipedia categories.
[15] For the release notice, see http://umbel.org/content/finally-umbel-v-100. Annex H to the UMBEL Specifications provides a description of the mapping methodologies and results.
[16] Functional categories combine two or more facets in order to split or provide more structured characterization of a category. For example, Category:English cricketers of 1890 to 1918 has as its core concept the idea of a cricketer, a sports person. But this is also further characterized by nationality and time period. Functional categories tend to have an A x B x C construct, with prepositions denoting the facets. From a proper characterization standpoint, the items in this category should be classified as a Person –> Sports Person –> Cricketer, with additional facets (metadata) of being English and having the period 1890 to 1918 assigned.
[17] See, for example, Massimo Poesio et al., 2008. ELERFED: Final Report, see http://www.cl.uni-heidelberg.de/~ponzetto/pubs/poesio07.pdf, wherein they state, “We discovered that in the meantime information about categories in Wikipedia had grown so much and become so unwieldy as to limit its usefulness.” Additional criticisms of the category structure may be found in S. Chernov, T. Iofciu, W. Nejdl and X. Zhou, 2006. “Extracting Semantic Relationships between Wikipedia Categories,” in Proceedings of the 1st International Workshop: SemWiki’06—From Wiki to Semantics., co-located with the 3rd Annual European Semantic Web Conference ESWC’06 in Budva, Montenegro, June 12, 2006; and L Muchnik, R. Itzhack, S. Solomon and Y. Louzon, 2007. “Self-emergence of Knowledge Trees: Extraction of the Wikipedia Hierarchies,” in Physical Review E 76(1). Also, this blog post from Bob Bater at KOnnect, “Wikipedia’s Approach to Categorization,” September 22, 2008, provides useful comments on category issues; see http://iskouk.wordpress.com/2008/09/22/wikipedias-approach-to-categorization/.
[18] Olena Medelyan and Cathy Legg, 2008. Integrating Cyc and Wikipedia: Folksonomy Meets Rigorously Defined Common-Sense, in Proceedings of the WIKI-AI: Wikipedia and AI Workshop at the AAAI08 Conference, Chicago, US. See http://www.cs.waikato.ac.nz/~olena/publications/Medelyan_Legg_Wikiai08.pdf.
[19] As two references among many, see A. Halavais and D. Lackaff, 2008. “An Analysis of Topical Coverage of Wikipedia,” in Journal of Computer-Mediated Communication 13 (2): 429–440; and A. Kittur, E. H. Chi and B. Suh, 2009. “What’s in Wikipedia? Mapping Topics and Conflict using Socially Annotated Category Structure,” in Proceedings of the 27th Annual CHI Conference on Human Factors in Computing Systems, pp 4–9.
[20] See DBpedia.org, especially DBpedia reference.
[21] See http://www.globalwordnet.org/gwa/wordnet_table.htm for a listing of known wordnets by language.
[22] For example, see this listing in Wikipedia.
[23] M.K. Bergman, 2008. “When is Content Coherent?,” AI3:::Adaptive Information blog, July 25, 2008; see https://www.mkbergman.com/450/when-is-content-coherent/.
[24] For a couple of useful references on this topic, first see this discussion regarding contexts (and the possible relation to Cyc microtheories): Ramanathan V. Guha, Rob McCool, and Richard Fikes, 2004. “Contexts for the Semantic Web,” in Sheila A. McIlraith, Dimitris Plexousakis, and Frank van Harmelen, eds., International Semantic Web Conference, volume 3298 of Lecture Notes in Computer Science, pp. 32-46. Springer, 2004. See http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.58.2368&rep=rep1&type=pdf. For another discussion about local differences and contexts and the difficulty of reliance on “common” understandings, see: Krzysztof Janowicz, 2010. “The Role of Space and Time for Knowledge Organization on the Semantic Web,” in Semantic Web 1: 25–32; see http://iospress.metapress.com/content/636610536×307213/fulltext.pdf.
[25] OWL already provides the exact predicates; see further M.K. Bergman, 2010. “The Nature of Connectedness on the Web,” AI3:::Adaptive Information blog, November 22, 2010; see https://www.mkbergman.com/935/the-nature-of-connectedness-on-the-web/ and the UMBEL mapping predicates in this vocabulary listing.
[26] UMBEL is a reference of 28,000 concepts (classes and relationships) derived from the Cyc knowledge base. The reference concepts of UMBEL are mapped to Wikipedia, DBpedia ontology classes, GeoNames and PROTON. UMBEL is designed to facilitate the organization, linkage and presentation of heterogeneous datasets and information. It is meant to lower the time, effort and complexity of developing, maintaining and using ontologies, and aligning them to other content. See further the UMBEL Specifications (including Annexes A – H), Vocabulary and RefConcepts.
[27] Cyc is an artificial intelligence project that has assembled a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of providing human-like reasoning. The OpenCyc version 3.0 contains nearly 200,000 terms and millions of relationship assertions. Started in 1984, by 2010 an estimated 1000 person-years had been invested in its development.
[28] This image and more related to the general question of interoperability in relation to a reference structure is provided in M.K. Bergman, 2007, “Where are the Road Signs for the Structured Web?,” AI3:::Adaptive Information blog, May 29, 2007; see https://www.mkbergman.com/375/where-are-the-road-signs-for-the-structured-web/.
[29] GeoNames is a geographical database available for free download under a Creative Commons Attribution license. It contains over 10 million geographical names and consists of 7.5 million unique features, of which 2.8 million are populated places. All features are categorized into one out of nine feature classes and further subcategorized into one out of 645 feature codes. Given the importance of locational information, GeoNames is a natural complement to the gold standards mentioned herein. See further its Web site, which also showcases a nifty browser of mappings to Wikipedia.

Posted: February 21, 2011

Hitting the Sweet Spot: Reference Structures Provide a Third Way

Since the first days of the Web there has been an ideal that its content could extend beyond documents and become a global, interoperating storehouse of data. This ideal has become what is known as the “semantic Web”. And within this ideal there has been a tension between two competing world views of how to achieve this vision. At the risk of being simplistic, we can describe these world views as informal v formal, sometimes expressed as “bottom up” v “top down” [1,2].

The informal view emphasizes freeform approaches and diversity, using more open tagging and a bottom-up approach to structuring data [3]. This group is not anarchic, but it does support the idea of open data, open standards and open contributions. This group tends to be oriented to RDF and is (paradoxically) often not very open to non-RDF structured data forms (as, for example, microdata or microformats). Social networks and linked data are quite central to this group. RDFa, tagging, user-generated content and folksonomies are also key emphases and contributions.

The formal view tends to support more strongly the idea of shared vocabularies with more formalized semantics and design. This group uses and contributes to open standards, but is also open to proprietary data and structures. Enterprises and industry groups with standard controlled vocabularies and interchange languages (often XML-based) more typically reside in this group. OWL and rules languages are more often typically the basis for this group’s formalisms. The formal view also tends to split further into two camps: one that is more top down and engineering oriented, with typically a more closed world approach to schema and ontology development [4]; and a second that is more adaptive and incremental and relies on an open world approach [5].

Again, at the risk of being simplistic, the informal group tends to view many OWL and structured vocabularies, especially those that are large or complex, as over-engineered, constraining or limiting of freedom. This group often correctly points to the delays and lack of adoption associated with more formal efforts. The informal group rarely speaks of ontologies, preferring the term vocabularies. In contrast, the formal group tends to view bottom-up efforts as chaotic, poorly structured and too heterogeneous to allow machine reasoning or interoperability. Some in the formal group advocate certification or prescribed training programs for ontologists.

Readers of this blog and customers of Structured Dynamics know that we more often focus on the formal world view and more specifically from an open world perspective. But, like human tribes or different cultures, there is no one true or correct way. Peaceful coexistence resides in the understanding of the importance and strength of different world views.

Shared communication is the way in which we, as humans, learn to understand and bridge cultural and tribal differences. These very same bases can be used to bridge the differences of world views for the semantic Web. Shared concepts and a way to communicate them (via a common language) — what I call reference structures [6] — are one potential “sweet spot” for bridging these views of the semantic Web [7].

Referring to Referents as Reference

According to Merriam Webster and Wikipedia, a reference is the intentional use of one thing, a point of reference or reference state, to indicate something else. When reference is intended, what the reference points to is called the referent. References are indicated by sounds (like onomatopoeia), pictures (like roadsigns), text (like bibliographies), indexes (by number) and objects (a wedding ring), but many other methods can be used intentionally as references. In language and libraries, references may include dictionaries, thesauri and encyclopedias. In computer science, references may include pointers, addresses or linked lists. In semantics, reference is generally construed as the relationships between nouns or pronouns and objects that are named by them.

The Building Blocks of Language

Structures, or syntax, enable multiple referents to be combined into more complex and meaningful (interpretable) systems. Vocabularies refer to the set of tokens or words available to act as referents in these structures. Controlled vocabularies attempt to limit and precisely define these tokens as a means of reducing ambiguity and error. Larger vocabularies increase richness and nuance of meaning for the tokens. Combined, syntax, grammar and vocabularies are the building blocks for constructing understandable human languages.

Many researchers believe that language is an inherent capability of humans, one especially evident in children. Language acquisition is expressly understood to be the combined acquisition of syntax, vocabulary and phonetics (for spoken language). Language development occurs via use and repetition, in a social setting where errors are corrected and communication is a constant. Via communication and interaction we learn and discover nuance and differences, and acquire more complex understandings of syntax structures and vocabulary. The contact sport of communication is itself a prime source for acquiring the ability to communicate. Without the structure (syntax) and vocabulary acquired through this process, our language utterances are mere babblings.

Pidgin languages emerge when two parties try to communicate, but do not share the same language. Pidgin languages have much-simplified vocabularies and structures, which lead to frequent miscommunication. Small vocabularies and limited structure share many of these same limitations.

Communicating in an Evanescent Environment

Information theory going back to Shannon defined that the “fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point” [8]. This assertion applies to all forms of communication, from the electronic to animal and human language and speech.

Every living language is undergoing constant growth and change. Current events and culture are one driver of new vocabulary and constructs. We all know the apocryphal observation that northern peoples have many more words for snow, for example. Jargon emerges because specific activities, professions, groups, or events (including technical change) often have their own ideas to communicate. Slang is local or cultural usage that provides context and communication, often outside of “formal” or accepted vocabularies. These sources of environmental and other changes cause living languages to be constantly changing in terms of vocabulary and (also, sometimes) structure.

Natural languages become rich in meaning and names for entities to describe and discern things, from plants to people. When richness is embedded in structure, contexts can emerge that greatly aid removing ambiguity (“disambiguating”). Contexts enable us to discern polysemous concepts (such as bank for river, money institution or pool shot) or similarly named entities (such as whether Jimmy Johnson is a race car driver, football coach, or a local plumber). As with vocabulary growth, contexts sometimes change in meaning and interpretation over time. It is likely the Gay ’90s would not be used again to describe a cultural decade (1890s) in American history.
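The power of context for disambiguation can be shown in miniature with the classic Lesk algorithm, which picks the WordNet sense of a word whose gloss best overlaps its surrounding words (a hedged sketch assuming the nltk package and its WordNet data; Lesk is a simple stand-in for stronger modern methods):

```python
from nltk.wsd import lesk

river_context = "I sat on the bank of the river and watched the water".split()
money_context = "I deposited my paycheck at the bank downtown".split()

# The same token resolves to different synsets depending on context.
print(lesk(river_context, "bank", "n"))  # e.g., a sloping-land sense
print(lesk(money_context, "bank", "n"))  # e.g., a financial-institution sense
```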

All this affirms what all of us know about human languages:  they are dynamic and changing. Adaptable (living) languages require an openness to changing vocabulary and changing structure. The most dynamic languages also tend to be the most open to the coining of new terminology; English, for example, is estimated to have 25,000 new words coined each year [9].

The Semantic Web as a Human Language

One could argue that similar constructs must be present within the semantic Web to enable either machine or human understanding. At first blush this may sound a bit surprising:  Isn’t one premise of the semantic Web machine-to-machine communications with “artificial intelligence” acting on our behalf in the background? Well, hmmm, OK, let’s probe that thought.

Recall there are different visions about what constitutes the semantic Web. In the most machine-oriented version, the machines are posited to replace some of what we already do and anticipate what we already want. Like Watson on Jeopardy, machines still need to know that Toronto is not an American city [10]. So, even with its most extreme interpretation — and one that is more extreme than my own view of the near-term semantic Web — machine-based communication still has these imperatives:

  • Humans, too, interact with data and need to understand it
  • Much of the data to be understood and managed is based on human text (unstructured), and needs to be adequately captured and represented
  • There is no basis to think that machine languages can be any simpler in representing the world than human languages.

These points suggest that machine languages, even in the most extreme machine-to-machine sense, still need to have a considerable capability akin to human languages.  Of course, computer programming languages and data exchange languages as artificial languages need not read like a novel. In fact, most artificial languages have more constraints and structure limitations than human languages. They need to be read by machines with fixed instruction sets (that is, they tend to have fewer exceptions and heuristics).

But, even with software or data, people write and interact with these languages, and human readability is a key desirable aspect for modern artificial languages [11]. Further, there are some parts of software or data that also get expressed as labels in user interfaces or for other human factors. The admonition to Web page developers to “view source” is a frequent one. Any communication that is text based — as are all HTTP communications on the Web, including the semantic Web — has this readability component.

Though the form (structure) and vocabulary (tokens) of languages geared to machine use and understanding most certainly differ from those used by humans, that does not mean that the imperatives for reference and structure are excused. It seems evident that small vocabularies, differing vocabularies and small and incompatible structures have the same limiting effect on communications within the semantic Web as they do for human languages.

Yet, that being said, correcting today’s relative absence of reference and structure on the nascent semantic Web should not then mean an overreaction to a solution based on a single global structure. This is a false choice and a false dichotomy, belied by the continued diversity of human languages [12]. In fact, the best analog for an effective semantic Web might be human languages with their vocabularies, references and structures. Here is where we may find the clues for how we might improve the communications (interoperability) of the semantic Web.

A Call for Vehement Moderation

Freeform tagging and informal approaches are quick and adaptive. But, they lack context, coherence and a basis for interoperability. Highly engineered ontologies capture nuance and sophistication. But, they are difficult and expensive to create, lack adoption and can prove brittle. Neither of these polar opposites is “correct” and each has its uses and importance. Strident advocacy of either extreme alone is shortsighted and unsuited to today’s realities. There is not an ineluctable choice between freedom and formalism.

An inherently open and changing world with massive growth of information volumes demands a third way. Reference structures and vocabularies sufficient to guide (but not constrain) coherent communications are needed. Structure and vocabulary in an open and adaptable language can provide the communication medium. Depending on task, this language can be informal (RDF or data struct forms convertible to RDF) or formal (OWL). The connecting glue is provided by the reference vocabularies and structures that bound that adaptable language. This is the missing “sweet spot” for the semantic Web.

Just like human languages, these reference structures must be adaptable ones that can accommodate new learning, new ideas and new terminology. Yet, they must also have sufficient internal consistency and structure to enable their role as referents. And, they need to have a richness of vocabulary (with defined references) sufficient to capture the domain at hand. Otherwise, we end up with pidgin communications.

We can thus see a pattern emerging where informal approaches are used for tagging and simple datasets; more formal approaches are used for bounded domains and the need for precise semantics; and reference structures are used when we want to get multiple, disparate sources to communicate and interoperate. So long as these reference structures are coherent and designed for vocabulary expansion and accommodation for synonyms and other means for terminology mapping, they can adapt to changing knowledge and demands.

For too long there has been a misunderstanding and mischaracterization of anything that smacks of structure and referenceability as an attempt to limit diversity, impose control, or suggest some form of “One Ring to rule them all” organization of the semantic Web. Maybe that was true of other suggestions in the past, but it is far from the enabling role of reference structures advocated herein. This reaction to structure has something of the feeling of school children averse to their writing lessons taking over the classroom and then saying No! to more lessons. Rather than Lord of the Rings we get Lord of the Flies.

To try to overcome this misunderstanding — and to embrace the idea of language and communication for the semantic Web — I and others have tried in the past to find various analogies or imagery to describe the roles of these reference structures. (Again, all of those vagaries of human language and communication!). Analogies for these reference structures have included [13]:

  • backbones, to signal their importance as dependable structures upon which we can put “meat on the bones”
  • scaffoldings, to emphasize their openness and infrastructural role
  • roadmaps, as orienting and navigational frameworks for information
  • docking ports, as connection points for diverse datasets on the Web
  • forest paths, to signal common traversals but with much to discover once we step off the paths
  • infoclines, to represent the information interface between different world views,
  • and others.

What this post has argued is the analogy of reference structures to human language and communication. In this role, reference structures should be seen as facilitating and enabling. This is hardly a vision of constraints and control. The ability to articulate positions and ideas in fact leads to more diversity and freedom, not less.

To be sure, there is extra work in using and applying reference structures. Every child comes to know there is work in learning languages and becoming articulate in them. But, as adults, we also come to learn from experience the frustration that individuals with speech or learning impairments have when trying to communicate. Knowing these things, why do we not see the same imperatives for the semantic Web? We can only get beyond incoherent babblings by making the commitment to learn and master rich languages grounded in appropriate reference structures. We are not compelled to be inchoate; nor are our machines.

Yet, because of this extra work, it is also important that we develop and put in place semi-automatic [14] ways to tag and provide linkages to such reference structures. We have the tools and information extraction techniques available that will allow us to reference and add structure to our content in quick and easy ways. Now is the time to get on with it, and stop babbling about how structure and reference vocabularies may limit our freedoms.

This is the first of a two-part series on the importance and choice of reference structures (Part I) and gold standards (Part II) on the semantic Web.

[1] This is reflected well in a presentation from the NSF Workshop on DB & IS Research for Semantic Web and Enterprises, April 3, 2002, entitled “The ‘Emergent’ Semantic Web: Top Down Design or Bottom Up Consensus?”. This report defines top down as design and committee-driven; bottom up is more decentralized and based on social processes. Also, see Ralf Klischewski, 2003. “Top Down or Bottom Up? How to Establish a Common Ground for Semantic Interoperability within e-Government Communities,” pp. 17-26, in R. Traunmüller and M. Palmirani, eds., E-Government: Modelling Norms and Concepts as Key Issues: Proceedings of 1st International Workshop on E-Government at ICAIL 2003, Bologna, Italy. Also, see David Weinberger, 2006. “The Case for Two Semantic Webs,” KM World, May 26, 2006; see http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=15809.
[2] For a discussion about formalisms and the nature of the Web, see this early report by F.M. Shipman III and C.C. Marshall, 1994. “Formality Considered Harmful: Experiences, Emerging Themes, and Directions,” Xerox PARC Technical Report ISTL-CSA-94-08-02, 1994; see http://www.csdl.tamu.edu/~shipman/formality-paper/harmful.html.
[3] Others have posited contrasting styles, most often as “top down” v. “bottom up.” However, in one interpretation of that distinction, “top down” means a layer on top of the existing Web; see further A. Iskold, 2007. “Top Down: A New Approach to the Semantic Web,” in ReadWrite Web, Sept. 20, 2007. The problem with this terminology is that it offers a completely different sense of “top down” from traditional uses. In Iskold’s argument, his “top down” is a layering on top of the existing Web. On the other hand, “top down” is more often understood in the sense of a “comprehensive, engineered” view, consistent with [1].
[4] See M.K. Bergman, 2009. “The Open World Assumption: Elephant in the Room,” AI3:::Adaptive Information blog post, December 21, 2009. The open world assumption (OWA) generally asserts that the lack of a given assertion or fact does not imply that the possible assertion is true or false: it simply is not known. In other words, lack of knowledge does not imply falsity. Another way to say it is that everything is permitted until it is prohibited. OWA lends itself to incremental and incomplete approaches to various modeling problems.
The closed world assumption (CWA) is a key underpinning of most standard relational data systems and enterprise schema and logics. CWA is the logic assumption that what is not currently known to be true is false. For semantics-related projects there is a corollary problem with the use of CWA: the need for upfront agreement on what all predicates “mean,” which is difficult if not impossible when accommodating different perspectives is the explicit purpose of the integration.
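
A trivial example, with invented data, makes the contrast concrete (N3 format, with the alternative conclusions shown as comments):

   @prefix ex: <http://example.com/ns#> .

   # the only assertion in the knowledge base:
   ex:Mary ex:hasChild ex:Bob .

   # Asked "does Mary have a daughter?":
   #   CWA: no -- absent any asserted daughter, the claim is treated as false
   #   OWA: unknown -- the knowledge base may simply be incomplete
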
[5] See M.K. Bergman, 2010. “Two Contrasting Styles for the Semantic Enterprise,” AI3:::Adaptive Information blog post, February 15, 2010. See https://www.mkbergman.com/866/two-contrasting-styles-for-the-semantic-enterprise/.
[6] I first used the term in passing in M.K. Bergman, 2007. “An Intrepid Guide to Ontologies,” AI3:::Adaptive Information blog post, May 16, 2007. See https://www.mkbergman.com/374/an-intrepid-guide-to-ontologies/, then more fully elaborated the idea in “Where are the Road Signs for the Structured Web,” AI3:::Adaptive Information blog post, May 29, 2007. See https://www.mkbergman.com/375/where-are-the-road-signs-for-the-structured-web/.
[7] See Catherine C. Marshall and Frank M. Shipman, 2003. “Which Semantic Web?,” in Proceedings of ACM Hypertext 2003, pp. 57-66, August 26-30, 2003, Nottingham, United Kingdom; http://www.csdl.tamu.edu/~marshall/ht03-sw-4.pdf, for a very different (but still accurate and useful) way to characterize the “visions” for the semantic Web. In this early paper, the authors posit three competing visions: 1) the development of standards, akin to libraries, to bring order to digital documents; this is the vision they ascribe to the W3C, and it has been largely adopted via the use of URIs as identifiers and languages such as RDF and OWL; 2) a vision of a globally distributed knowledge base (which they characterize as Tim Berners-Lee’s original vision, with examples being Cyc or Apple’s (now disbanded) Knowledge Navigator); and 3) a vision of an infrastructure for the coordinated sharing of data and knowledge.
[8] See Claude E. Shannon‘s classic paper “A Mathematical Theory of Communication” in the Bell System Technical Journal in July and October 1948.
[9] This reference is from the Wikipedia entry on the English language: Kister, Ken. “Dictionaries defined,” Library Journal, 6/15/92, Vol. 117, Issue 11, p. 43.
[10] See http://www-943.ibm.com/innovation/us/watson/related-content/toronto.html, or simply do a Web search on “watson toronto jeopardy” (no quotes).
[11] Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. It has been known for at least three decades that a few simple readability transformations can make code shorter and drastically reduce the time to understand it. See James L. Elshoff and Michael Marcotty, 1982. “Improving Computer Program Readability to Aid Modification,” Communications of the ACM, v.25 n.8, p. 512-521, Aug 1982; see http://doi.acm.org/10.1145/358589.358596. From the Wikipedia entry on Readability.
[12] According to the Wikipedia entry on Language, there are an estimated 3,000 to 6,000 active human languages presently in existence.
[13] The forest path analogy comes from Atanas Kiryakov of Ontotext. The remaining analogies come from M.K. Bergman on his AI3:::Adaptive Information blog: “There’s Not Yet Enough Backbone,” May 1, 2007 (backbone); “The Role of UMBEL: Stuck in the Middle with you …,” May 11, 2008 (infocline, scaffolding and docking port); “Structure Paves the Way to the Semantic Web,” May 3, 2007 (roadmap).
[14] Semi-automatic methods attempt to apply as much automated screening and algorithmic- or rules-based scoring as possible, and then allow the final choices to be arbitrated by humans. Fully automated systems, particularly involving natural language processing, are not yet acceptable because of (small, but) unacceptably high error rates in precision. The best semi-automated approaches handle all tasks that are rote or error-free, and then limit the final choices to those areas where unacceptable errors are still prevalent. As time goes on, more of these areas can be automated as algorithms, heuristics and methodologies improve. Eventually, of course, this may lead to fully automated approaches.

Posted by Mike Bergman on February 21, 2011 at 2:27 am in Semantic Web, Structured Web, UMBEL. Permalink: https://www.mkbergman.com/946/seeking-a-semantic-web-sweet-spot/
Posted:February 15, 2011

UMBEL Vocabulary and Reference Concept Ontology: A Seminal Release by Structured Dynamics and Ontotext; Links to Wikipedia and PROTON

Structured Dynamics and Ontotext are pleased to announce — after four years of iterative refinement — the release of version 1.00 of UMBEL (Upper Mapping and Binding Exchange Layer). This version is the first production-grade release of the system. UMBEL’s current implementation is the result of much practical experience.

UMBEL is primarily a reference ontology, which contains approximately 28,000 concepts (classes and relationships) derived from the Cyc knowledge base. The reference concepts of UMBEL are mapped to Wikipedia, DBpedia ontology classes, GeoNames and PROTON.

UMBEL is designed to facilitate the organization, linkage and presentation of heterogeneous datasets and information. It is meant to lower the time, effort and complexity of developing, maintaining and using ontologies, and aligning them to other content.

This release 1.00 builds on the five major changes in the prior UMBEL v. 0.80, announced last November. It is open source, provided under the Creative Commons Attribution 3.0 license.

Profile of the Release

In broad terms, here is what is included in the new version 1.00:

  • A core structure of 27,917 reference concepts (RCs)
  • The clustering of those concepts into 33 mostly disjoint SuperTypes (STs)
  • Direct RC mapping to 444 PROTON classes
  • Direct RC mapping to 257 DBpedia ontology classes
  • An incomplete mapping to 671 GeoNames features
  • Direct mapping of 16,884 RCs to Wikipedia (categories and pages)
  • The linking of 2,130,021 unique Wikipedia pages via 3,935,148 predicate relations; all are characterized by one or more STs
    • 876,125 are assigned a specific rdf:type (see the sketch after this list)
  • The UMBEL RefConcepts have been re-organized, with most local, geolocational entities moved to a supplementary module. 577 prior (version 0.80) UMBEL RCs and a further 3,204 new RCs have been added to this geolocational module. This module is not being released with the current version because testing is incomplete (watch for a pending version 1.0x)
  • Some vocabulary changes, including some new and some dropped predicates (see next), and
  • A new Annex H describing the version 1.00 changes and methods.
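
As a rough sketch of what one such Wikipedia linkage might look like (the specific resources and typing below are illustrative only, not excerpts from the released mapping files; N3 format):

   @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

   # illustrative only: a Wikipedia (DBpedia) page assigned a specific
   # rdf:type drawn from the UMBEL reference concept structure
   <http://dbpedia.org/resource/Mississippi_River>
      rdf:type <http://umbel.org/umbel/rc/River> .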

Vocabulary Summary

UMBEL’s basic vocabulary can also be used for constructing specific domain ontologies that can easily interoperate with other systems. This release sees a number of changes in the UMBEL vocabulary:

  • A new correspondsTo predicate has been added for nearly exact or approximate sameAs mappings (symmetric, transitive and reflexive); see the sketch after this list
  • A controlled vocabulary of qualifiers was developed for the hasMapping predicate
  • 31 new relatesToXXX predicates have been added to relate external entities or concepts to UMBEL SuperTypes
  • Some disjointness assertions between SuperTypes were added or changed.
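
As a hedged sketch of how the new predicate might be declared and used (the declaration simply restates the characteristics noted in the list above; the namespace and mapped resources are assumptions for illustration only; N3 format):

   @prefix owl:   <http://www.w3.org/2002/07/owl#> .
   @prefix umbel: <http://umbel.org/umbel#> .

   # declaration restating the symmetric, transitive and reflexive
   # characteristics noted in the list above
   umbel:correspondsTo
      a owl:ObjectProperty ,
        owl:SymmetricProperty ,
        owl:TransitiveProperty ,
        owl:ReflexiveProperty .

   # illustrative use: an approximate mapping to a DBpedia resource
   <http://umbel.org/umbel/rc/Bank>
      umbel:correspondsTo <http://dbpedia.org/resource/Bank> .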

The UMBEL Vocabulary defines three classes and a number of properties; the full listings are given in the UMBEL specifications.

The UMBEL vocabulary also has a significant reliance on SKOS, among other external vocabularies.
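
By way of illustration (the concept URI, class name and labels below are hypothetical), a reference concept typically leans on SKOS for its preferred and alternate labels, its “semset” (N3 format):

   @prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
   @prefix umbel: <http://umbel.org/umbel#> .

   # hypothetical reference concept described with SKOS labeling properties
   <http://umbel.org/umbel/rc/Mammal>
      a umbel:RefConcept ;
      skos:prefLabel "mammal"@en ;
      skos:altLabel "mammalian"@en ;
      skos:definition "A warm-blooded vertebrate animal."@en .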

Access and More Information

Here are links to various downloads, specifications, communities and assistance.

Specifications and Documentation

All documentation from the prior v 0.80 has been updated, and some new documentation has been added.

Major updates were made to the specifications and Annex G; Annex H is new. Minor changes were also made to Annexes A and B. All remaining Annexes only had minor header changes. All spec documents with minor or major changes were also versioned, with the earlier archives now date stamped.

Files and Downloads

All UMBEL files are listed on the Downloads and SVN page on the UMBEL Web site. The reference concept and mapping files may also be obtained from http://code.google.com/p/umbel/source/browse/#svn/trunk.

Additional Information

To learn more about UMBEL or to participate, additional links are available on the UMBEL Web site.

Acknowledgements

These latest improvements to UMBEL and its mappings have been undertaken by Structured Dynamics and Ontotext. Support has also been provided by the European Union research project RENDER, which aims to develop diversity-aware methods in the ways Web information is selected, ranked, aggregated, presented and used.

Next Steps

This release continues along the path of establishing a gold standard between UMBEL and Wikipedia to guide other ontological, semantic Web and disambiguation needs. For example, the number of UMBEL reference concepts was expanded by some 36% (from 20,512 to 27,917) in order to provide a more balanced superstructure for organizing Wikipedia content. And, across all mappings, 60% of all UMBEL reference concepts (16,884) are now linked directly to Wikipedia via the new umbel:correspondsTo property. A later post will describe the design and importance of this gold standard in greater detail.

Next releases will expand this linkage and coverage, and bring in other important reference structures such as GeoNames. This version of UMBEL will also be incorporated into the next version of FactForge. We will also be re-invigorating Web vocabulary access and Web services, and adding tagging services based on UMBEL.

We invite other players with an interest in reusable and broadly applicable vocabularies and reference concepts to join with us in these efforts.


[1] Note, for legacy reasons, you may still encounter reference to ‘subject concepts’ in earlier UMBEL documentation. Please consider that term as interchangeable with the current ‘reference concepts’.

Posted by Mike Bergman on February 15, 2011 at 12:39 am in Ontologies, Open Source, Semantic Web, UMBEL. Permalink: https://www.mkbergman.com/945/announcing-the-first-production-grade-umbel/
Posted:February 10, 2011

Changes Instigated by UMBEL May Benefit Others

In the semantic Web, arguably SKOS is the right vocabulary for representing simple knowledge structures [1] and OWL 2 is the right language for asserting axioms and ontological relationships. In the early days we chose a reliance on SKOS for the UMBEL reference concept ontology, because of UMBEL’s natural role as a knowledge structure. Most recently we also migrated UMBEL to OWL 2 to gain (among other reasons) the metamodeling advantages of “punning,” which allows us to treat things as either classes or instances depending on modeling needs, context and viewpoint [2].
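
As a minimal sketch of punning (all URIs invented for illustration), the same IRI can appear both as a class and as an individual, with OWL 2 treating the two uses as separate views of the name (N3 format):

   @prefix owl: <http://www.w3.org/2002/07/owl#> .
   @prefix ex:  <http://example.com/ns#> .

   # ex:Eagle used as a class, with an individual member ...
   ex:Eagle a owl:Class .
   ex:Harry a ex:Eagle .

   # ... and ex:Eagle punned as an individual in its own right,
   # here as an instance of a (likewise invented) species concept
   ex:Species a owl:Class .
   ex:Eagle a ex:Species .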

But — until today — we could not get SKOS and OWL 2 to play together nicely. This meant we could not take full advantage of each language’s respective strengths.

Happily, that gap has now been closed. Through a relatively minor change by the SKOS Work Group (WG), the addition of a simple statement, SKOS now more fully interoperates with OWL 2.

Brief Description of the Issue

SKOS was and is purposefully designed to be simple and to stick to its knowledge system scope. The authors of the language avoided many restrictions and kept axioms regarding the language to a minimum. As a result, since its inception the core SKOS has been expressed, in relation to OWL, as OWL Full (that is, undecidable and more free-form in interpretation).

However, also for some time it has been understood that, with some relatively minor changes, the core SKOS language could be modified to be OWL DL (decidable). In fact, since June 2009 there has been a DL version of SKOS, called by the editors the “DL prune” [3]. A useful discussion of the specific axioms that cause the core SKOS to require interpretation as OWL Full is provided by the Semantic Web and Interoperability Group at the University of Patras [4].

Most of the conditions in question relate to the various annotation properties in SKOS, such as skos:prefLabel, skos:altLabel and skos:hiddenLabel. These are core constructs within the use of “semsets” to describe and refer to reference concepts in UMBEL [5]. A simple explanation for one issue, among others, is that these labels are defined within SKOS as sub-properties of rdfs:label, which is not allowable in OWL 2. Where such conflicts exist, they are removed (“pruned”) from the SKOS DL version. There are only a few of these, and they are not (in our view) central to the purpose or usefulness of SKOS.
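
For instance, the core SKOS axiom at issue for the labeling properties is simply the following; per the discussion above, axioms such as this are among those removed in the DL version (N3 format):

   @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
   @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

   # core SKOS declares its labeling properties as sub-properties of
   # rdfs:label; axioms such as this are "pruned" from the DL version
   skos:prefLabel rdfs:subPropertyOf rdfs:label .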

For UMBEL, and we think other vocabularies, the transition to OWL 2 is important for many reasons, including the “punning” of individuals and classes. Other reasons include better handling of annotation properties, a better emerging set of tools, and the use of inference and reasoning engines.

Thus, to square this circle when using SKOS in OWL 2, it is important to refer to the DL version of SKOS and not to SKOS core. However, since there was no version acknowledgment tying the DL version to the core, it was not possible to make this reference while retaining general SKOS namespace (skos:XXX) compatibility.

How the SKOS WG Resolved the Issue

My UMBEL co-editor, Fred Giasson, first brought this issue to the SKOS WG’s attention in November [6]. He further suggested a relatively simple means to resolve the problem: by including a version reference in the SKOS DL version to SKOS core, consistent with the current OWL 2 specification [7], OWL 2-compliant tools now know how to discern the proper references. The fix looks as follows:

   <rdf:Description rdf:about="http://www.w3.org/2004/02/skos/core">
      <owl:versionIRI rdf:resource="http://www.w3.org/TR/skos-reference/skos-owl1-dl.rdf"/>
   </rdf:Description>

Now, in our ontology (UMBEL), we define the skos namespace (N3 format):

   @prefix skos: <http://www.w3.org/2004/02/skos/core#> .

and, then, import the SKOS DL version:

   <http://umbel.org/umbel> rdf:type owl:Ontology ;
      # ... statements ...
      owl:imports <http://www.w3.org/TR/skos-reference/skos-owl1-dl.rdf> .

Pretty simple and straightforward.

With expeditious treatment by the SKOS group [8], this proposal was floated, commented upon, and then passed and adopted today. The change was also posted as an erratum to the SKOS specification [9]. As Fred reported [10], the fix also appears to work properly in his testing with available tools (such as Protégé 4.1 and reasoners), on a 58 MB file with 28 K concepts and all of their annotations and relationships!

Some Final Notes

This is a good example of how even simple changes may make major differences. It also shows how even simple changes may take some time and effort in a standards-making environment. But, though simple, we think this change will be highly useful in keeping SKOS a central vocabulary within OWL 2-based systems. Within the context of conceptual and domain vocabularies, such as those represented by UMBEL or other ontologies based on the UMBEL vocabulary or similar approaches, we see this as a major win.

Finally, as for UMBEL itself, this change has allowed us to remove duplicate predicates in favor of those from SKOS. It has also caused a delay in our release of UMBEL v. 1.00, originally planned for today, until February 15, in order to complete testing and documentation revisions. But we’ll gladly take a change like this in trade for a minor delay any day.

Thanks, SKOS!


[1] SKOS, or the Simple Knowledge Organization System, is a family of formal languages designed for representing thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is built upon RDF and RDFS, and its main objective is to enable the easy publication of controlled structured vocabularies for the semantic Web.
[2] For a full explanation of this topic and its rationale for our work, see M. K. Bergman, 2010. “Metamodeling in Domain Ontologies,” September 20, 2010 posting on the AI3:::Adaptive Information blog; https://www.mkbergman.com/913/metamodeling-in-domain-ontologies/.
[3] The DL-compliant listing, SKOS-RDF-OWL1-DL, first appears in the June 2009 version of the latest SKOS specifications. It is described as a “pruned” subset of the SKOS specifications that conforms to OWL DL. The RDF file itself helpfully annotates the specific issues in the core language; see the next note.
[4] See http://swig.hpclab.ceid.upatras.gr/SKOS/Skos2Owl2. Also, there is other useful discussion on the SKOS mailing list by Antoine Isaac and the OWL Working Group’s comments to the SKOS WG on their last efforts.
[5] See this section in the UMBEL Specifications regarding the use of labels and “semsets.”
[8] We’d like to thank Tom Baker for leading the effort with the SKOS group, and also thank former group members Rinke Hoekstra, Uli Sattler and Bijan Parsia for their expressions of support.

Posted by Mike Bergman on February 10, 2011 at 11:01 pm in Ontologies, Semantic Web, UMBEL. Permalink: https://www.mkbergman.com/944/skos-now-interoperates-with-owl-2/
Posted:February 7, 2011

Sweet Tools Listing Now Presented as a Semantic Component; Grows to 900+ Tools

Sweet Tools, AI3‘s listing of semantic Web and related tools, has just been released with its 17th update. The listing now contains more than 900 tools, about a 10% increase over the last version. Significantly, the listing is also now presented via its own semantic tool, the structSearch sComponent, one of the growing parts of Structured Dynamics‘ open semantic framework (OSF).

So, we invite you to go ahead and try out this new Flex/Flash version with its improved search and filtering! We’re pretty sure you’ll like it.

Summary of Major Changes

(Screenshot: Sweet Tools structSearch view)

Sweet Tools now lists 919 tools, an increase of 84 (or 10.1%) over the prior version of 835 tools. The most notable trend is the continued increase in capabilities and professionalism of (some of) the new tools.

This new release of Sweet Tools — available for direct play and shown in the screenshot to the right — is the first to be presented via Structured Dynamics’ Flex-based semantic component technology. The system has greatly improved search and filtering capabilities; it also shares the superior dataset management and import/export capabilities of its structWSF brethren.

As a result, moving forward, Sweet Tools updates will now be added on a more regular basis, reducing the big burps that have tended to accompany past releases. We will also see much expanded functionality over time as other pieces of the structWSF and sComponents stack get integrated and showcased using this dataset.

This release is the first presented in WordPress, and shows the broad capability of the OSF stack to be embedded in a variety of CMS or standalone systems. We have provided some updates on Structured Dynamics’ OSF TechWiki on how to modify, embed and customize these components with various Flex development frameworks (see one, two or three), such as Flash Builder or FlashDevelop.

We should also mention that the OSF code group is seeing external parties expose these capabilities via JavaScript deployments. This recent release expands on the conStruct version, whose capabilities were described in a post about a year ago.

Retiring the Exhibit Version

However, this release does mark the retirement of the very fine Exhibit version of Sweet Tools (an archive version will be kept available until it gets too long in the tooth). I was one of the first to install a commercial Exhibit system, and the first to do so on WordPress, as I described in an article more than four years ago.

Exhibit has worked great and without a hitch, through a couple of upgrades. It still has (I think) a faceting system and sorting capabilities superior to what we presently offer with our own sComponent alternative. However, the Exhibit version is really a display technology alone, and offers no search, access control or underlying data management capabilities (such as CRUD), all of which are integral to our current system. It is also not grounded in RDF or semantic technologies, though it does have good structural genes. And Sweet Tools has about reached the limits of the dataset size that Exhibit can handle efficiently.

Exhibit has set a high bar for usability and lightweight design. As we move in a different direction, I’d like again to publicly thank David Huynh, Exhibit’s developer, and the MIT Simile program for when he was there, for putting forward one of the seminal structured data tools of the past five years.

Updated Statistics

The updated Sweet Tools listing now includes nearly 50 different tool categories. The most prevalent categories are browser tools (RDF, OWL), information extraction, ontology tools, parsers or converters, and general RDF tools. The relative share by category is shown in this diagram:

Since the last listing, the fastest growing categories have been utilities (general and RDF) and visualization. Linked data listings have also grown by 200%, but are still a relatively small percentage of the total.

These values should be taken with a couple of grains of salt. First, not all of these additions are organic or new releases. Some are the result of our own tools efforts and investigations, which often surface previously overlooked tools. Also, even with this large number of application categories, many tools defy characterization: they can reside in multiple categories at once or even point to new ones. So, the splits are illustrative, but not defining.

General language percentages have stayed fairly constant over the past couple of years. Java remains the leading language with nearly half of all applications, a share it has held steady for four years. PHP continues to grow in popularity, and actually increased the most of any language over this past census. The current language splits are shown in the next diagram:

C/C++ and C# have really not grown at all over the past year. Again, however, for the reasons noted, these trends should be interpreted with care.

Dogfood Never Tasted So Good

Tools development is hard, and the open source nature of today’s development tends to require a certain critical mass of developer interest and commitment. Some notable tools see much use and focus, and are clearly professional and industrial grade. Yet, unfortunately, too many of the tools on the Sweet Tools listing are proofs-of-concept, academic demos, or largely abandoned for lack of interest from the original developer, the community or the market as a whole.

There is a common statement within the community about how important it is for developers to “eat their own dogfood.” On the face of it, this makes some sense since it conveys a commitment to use and test applications as they are developed.

But looked at more closely, this sentiment carries with it a troublesome reflection of the state of (many) tools within the semantic Web: too much kibble that is neither attractive nor tasty. It is probably time to keep the dogfood in the closet and focus on well-cooked and attractive fare.

We at Structured Dynamics are not trying to hold ourselves up as exemplars or the best chefs of tasty food. We do, however, have a commitment to produce fare that is well prepared and professional. Let’s stop with the dogfood and get on with serving nutritious and balanced fare to the marketplace.

Posted by Mike Bergman on February 7, 2011 at 1:47 am in Open Source, Semantic Web Tools, Structured Web. Permalink: https://www.mkbergman.com/942/tasty-new-sweet-tools-release/