Posted: November 9, 2011

The OSF Browser is Now More Configurable

This continues our series on the new UMBEL portal. UMBEL, the Upper Mapping and Binding Exchange Layer, is an upper ontology of about 28,000 reference concepts and a vocabulary designed for domain ontologies and ontology mapping [1]. This third part deals with the portal’s navigational tool, the concept or relation browser [2], a favorite component of the open semantic framework (OSF).

Discovery and navigation in a graph structure — as is the basis of ontologies and the UMBEL structure — can be difficult. It is made even more difficult when the number of concepts in the object space is large. With 28,000 concepts, UMBEL is one of the largest public ontologies extant. The relation browser is designed specifically to address these difficulties.

The concept browser in UMBEL is invoked via the main menu option or by clicking on the browser icon [] shown in conjunction with a given concept. Here is an example for the concept ‘tractor’:

Note in this case that the More details … link brings you to a detailed concept view, as was covered in the previous part in this series.

With its extreme configurability and flexibility — see further below — the relation browser can be an essential foundation of an open semantic framework installation. But, the best part about the relation browser is that it is fun to use. Clicking bubbles and dynamically moving through the graph structure is like snorkeling through a massive school of silvery fish.

Origins of the Relation Browser

We have been featuring the relation browser since April 2008 when the first UMBEL sandbox was released:

The relation browser is the innovation of Moritz Stefaner, one of the best data and information visualization gurus around. He continues to innovate in large-scale information spaces, and is a frequent speaker at information visualization conferences. Moritz’s Web site and separate blog are each worth perusing for neat graphics and ideas.

Configurability

Since our first efforts with the browser, we have worked to extend its applicability and configurability. The relation browser can be downloaded separately from our semantic components code distribution site.

The relation browser is configured via an XML specification file. Separate specifications are available for the nodes (classes or concepts) and connecting edges (predicates or properties). Here are the current configuration options:

NODE PARAMETERS

  • label: the label assigned to a given node; by default, the end of the URI of the type will be used as the label
  • displayNodeLabel: a Boolean value indicating whether to display or hide the label for a specific node
  • tooltips: the tooltip to be displayed when mousing over a specific node
  • textFont: defines the font of the text label on the node; for example: “Verdana”
  • textColor: defines the color of the text label on the node; value in RGB hex format
  • textSize: defines the size of the text to display in the node
  • image: a URL to an image to display at the position of the node
  • shape: the shape of the node to display; available values are “circle”, “block”, “polygon”, “square”, “cross”, “X”, “triangleUp”, “triangleDown”, “triangleLeft”, “triangleRight”, “diamond”
  • lineWeight: defines the thickness of the border line for the node’s shape
  • lineColor: defines the color of the border line for the node’s shape; value in RGB hex format
  • fillColor: defines the color to use within the shape for the node; value in RGB hex format
  • radius: defines the radius of the node; the radius is an invisible boundary where the edges get attached
  • backgroundScaleFactor: scale factor for the node’s shape background; a scale factor of 1.25 means that it is 125% of normal size
  • textScaleFactor: scale factor for the node’s text label
  • textOffsetX: X offset at which to start displaying the text within the node’s shape
  • textOffsetY: Y offset at which to start displaying the text within the node’s shape
  • textMultilines: when enabled, each word of a label is displayed on its own line
  • textMaxWidth: maximum width of the text; if longer, it is truncated with an ellipsis (“…”) appended
  • textMaxHeight: maximum height of the text; if higher, it is truncated with an ellipsis (“…”) appended
  • selectedNodeColorOverlay: defines a color to overlay on the center (selected) node of the graph; it is defined by a series of four offsets [alpha, red, green, blue] ranging from -255 to 255 in relation to the base node’s values; it can be used, for example, to make the central node of the graph brighter
  • overNodeColorOverlay: defines a color to overlay on a moused-over node of the graph; it is defined by the same series of four offsets [alpha, red, green, blue] ranging from -255 to 255 in relation to the base node’s values; it can be used, for example, to make a moused-over node of the graph brighter

EDGE PARAMETERS

  • displayLabel: the label to display over the center of the edge
  • tooltipLabel: the tooltip to be displayed when mousing over a specific edge
  • directedArrowHead: defines the type of arrow for the edge; available values are “none”, “triangle”, “lines”
  • textFont: defines the font of the text label on the edge
  • textColor: defines the color of the text label on the edge; value in RGB hex format
  • textSize: defines the size of the text to display on the edge
  • image: a URL to an image to display over the edge at the middle of the two connected nodes
  • lineWeight: defines the thickness of the line for the edge connector
  • lineColor: defines the color of the line for the edge connector; value in RGB hex format
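
To make these options concrete, here is a minimal sketch of what one node entry in such an XML specification might look like, built and checked with Python’s standard ElementTree library. The element names, their nesting and the node type URI are illustrative assumptions patterned on the parameter list above, not the actual schema shipped with the code distribution:

```python
# A hypothetical node entry for the relation browser's XML specification.
# Element names and the type URI are assumptions for illustration only.
import xml.etree.ElementTree as ET

node_spec = """
<node type="http://umbel.org/umbel/rc/Tractor">
  <label>tractor</label>
  <displayNodeLabel>true</displayNodeLabel>
  <textFont>Verdana</textFont>
  <textColor>333333</textColor>
  <shape>circle</shape>
  <fillColor>A3C6E8</fillColor>
  <lineColor>336699</lineColor>
  <radius>40</radius>
  <backgroundScaleFactor>1.25</backgroundScaleFactor>
  <!-- [alpha, red, green, blue] offsets from -255 to 255; +40 on the color
       channels brightens the selected (center) node relative to its base -->
  <selectedNodeColorOverlay>0,40,40,40</selectedNodeColorOverlay>
</node>
"""

node = ET.fromstring(node_spec)
print(node.get("type"), node.findtext("shape"), node.findtext("fillColor"))
```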


It is also possible to specify a breadcrumb in association with the browser.

Besides these configurations, the API for the relation browser also provides for methods to:

  • Link Nodes to Objects
  • Link Nodes to Displays

Via these mechanisms, the relation browser can become a central focal point for any OSF installation. See further the specifications for additional ideas and tips.

Some Other Examples

Here are some other examples of relation browsers you can see across the Web:


This is the third of a multi-part series on the newly updated UMBEL services. Other articles in this series are:


[1] See further the general Wikipedia description of UMBEL or its specification on the official UMBEL Web site.
[2] Various clients and users have named this widget a number of things, including spider, concept explorer, relation browser and concept browser.

Posted by AI3's author, Mike Bergman Posted on November 9, 2011 at 6:10 pm in Open Semantic Framework, UMBEL | Comments (1)
The URI link reference to this post is: http://www.mkbergman.com/987/umbel-services-part-3-concept-browser/
The URI to trackback this post is: http://www.mkbergman.com/987/umbel-services-part-3-concept-browser/trackback/
Posted: November 7, 2011

OSF Integration with Solr Provides Superior Search

This continues our series on the new UMBEL portal. UMBEL, the Upper Mapping and Binding Exchange Layer, is an upper ontology of about 28,000 reference concepts and a vocabulary designed for domain ontologies and ontology mapping [1]. This part focuses on the search function within the UMBEL portal based on the native engines used by the open semantic framework (OSF).

Search uses the integration of RDF and inferencing with full-text, faceted search using Solr. This has been Structured Dynamics’ standard search function for some time, as Fred Giasson initially described in April 2009. It is a very effective way for finding new and related concepts within the UMBEL structure.

Solr, as the Web service-enabled option for its parent Lucene, has more recently become a fairly common adjunct to semantic technologies, for the very same reasons outlined herein. However, in 2008, when we first embraced the option, it was not common at all. To my knowledge, within the semantic technology community, only the SWSE (semantic Web search engine) project was using Lucene when we began to adopt it.

The reasons for embracing Solr (or Lucene) are these:

  • Full-text search with a flexible search syntax
  • Ability to add facets (which is very powerful when combined with the structure of RDF)
  • High performance
  • Extensions for locational and time searches and many additional options, and
  • Open source.

Prior to the adoption of add-ons like Solr, RDF-only search suffered from a number of limitations, especially in the lack of a searchable correspondence of labels in relation to the object URIs used in the RDF model (see some of the limitations of standard RDF search).

Because of its advantages, Solr became the first additional main engine underneath our structWSF Web services framework, complementing the central RDF triple store (Virtuoso in most cases). We have subsequently added other main engines as well, with a current total of four, which other parts in this UMBEL series will later discuss.

Being a main engine underneath structWSF means that datasets are fully indexed and cross-correlated with the capabilities of the other engines at the time of ingest. Ingest most commonly occurs via the standard import tool, but it might also occur via the system’s large-dataset import scripts or synchronization routines.

The Search Function and Syntax

The standard UMBEL search box is found at the upper left of most site pages. When searching, you may apply these operators or syntax conventions to your keywords, for example:

  • park OR city — provides the most results
  • park AND city — both terms must be present; fewer results
  • park city (no quotes) — both terms must be present, and within 5 words of one another; still fewer results, or
  • “park city” — exact phrase in quotes, with the fewest results.

(At present, Boolean operators apply to full-content search, and not to filtered search.)

Upon searching, using the default scope of title, alternative labels (synonyms) and description (“TAD”), the standard search results page is displayed:

This page provides further filtering options: searching by title only, or by all content (including the links from each concept to its super classes and sub classes, which may produce a quite inclusive results set). These filter options are helpful for sifting through the 28,000 concepts within UMBEL.

The results listing provides the UMBEL concept names, their descriptions, their alternative labels and a link [] to view each in the relation browser (to be discussed in more detail in the next part of this series). A simple pagination utility enables the results to be paged.

structWSF Basis and Options

This UMBEL search uses the structWSF Search Web service, which ties into the Solr engine to perform the full-text searches on the structured data indexed on a structWSF instance. A search query can be as simple as querying the data store for a single keyword, or as complex as one combining a series of filters.

Not all of these query syntax or filtering options are active on the UMBEL instance, given the simple concept structure of the UMBEL ontology. Turning these options on or off is a relatively straightforward matter of altering some configuration files and ensuring the right parameters are included in the queries issued by the application to the structWSF search endpoint.

Developers communicate with the Search Web service using the HTTP POST method. You may request one of the following mime types: (1) text/xml, (2) application/rdf+xml, (3) application/rdf+n3 or (4) application/json. The content returned by the Web service is serialized using the mime type requested and the data returned depends on the parameters selected.
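
As an illustration, a minimal sketch of such a request in Python follows. The instance URL is a placeholder and the query parameter name is an assumption based on the description here; consult the structWSF Search endpoint documentation for the actual parameter set:

```python
# Sketch of an HTTP POST to a structWSF Search endpoint (URL and parameter
# names are illustrative assumptions, not the documented interface).
import requests

endpoint = "http://example.org/ws/search/"   # placeholder structWSF instance

response = requests.post(
    endpoint,
    data={"query": "park AND city"},            # full-text query string
    headers={"Accept": "application/json"},     # one of the four mime types
)
response.raise_for_status()
print(response.text)   # serialized according to the requested mime type
```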

A. Optional Available Operators

Optionally, the structWSF Search function may be configured to support these operators and conventions. All operators, by the way, must be entered as ALL CAPS:

  • AND, which is the default operator if more than one key word is entered
  • OR, which needs to be specifically entered
  • NOT
  • Phrases, which are denoted by double quotes, as in “search phrase”; single quotes are not accepted
  • Wildcard searches on single characters (?) and multiple characters (*), which can be placed anywhere except the beginning of the query term
  • Field searches, whereby the field name is used in the query followed by a colon
  • Nesting, which allows complicated Boolean expressions to be formed (so long as parentheses are balanced), and many more exotic options.

See further the Lucene search engine syntax specification.
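
For a flavor of that syntax, here are a few query strings of the kinds described above. The field names used (title) are assumptions; the actual fields depend on the Solr schema of the instance:

```python
# Illustrative Lucene-syntax query strings for the operators listed above.
queries = [
    'park city',                            # AND is the default operator
    'park OR city',                         # OR must be entered explicitly
    'park NOT city',                        # exclusion
    '"park city"',                          # phrase; double quotes only
    'par? cit*',                            # ? and * wildcards (not leading)
    'title:park',                           # field search (assumed field name)
    '(park OR city) AND NOT title:urban',   # nesting, balanced parentheses
]
```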

B. Optional Available Filters

Each search query can be applied to all, or a subset of, the datasets accessible by the requester, and can be filtered by these different criteria:

  1. Type of the record(s) being requested
  2. Source dataset(s) for the record(s)
  3. Presence of an attribute describing the record(s)
  4. A specific value, for a specific attribute describing the record(s)
  5. A distance from a lat/long coordinate (for geo-enabled structWSF instances)
  6. A range of lat/long coordinates (for geo-enabled structWSF instances)

These filtering options allow subset searches to occur, as the example above for title and TAD in UMBEL shows. However, these filters can also be combined into more complete and structured selection options as well. For example, this same search utility applied to Structured Dynamics’ Citizen Dan local government sandbox shows how these additional filters may be applied:

  • Clicking on a given “kind” name causes the results display to be restricted to results only for that kind of class.
  • If so selected, the Filter by Dataset tab is also restricted to the datasets that contain results with that class.
  • Once selected, the filter remains in place. To remove it, click on the Remove filter icon [] to restore the “kinds” back to the original listing for this search.

See the example. Such filtering capabilities present all of the “kinds” (actually, classes that have similar members) that are contained within the structure of the individual results that comprise the search results. The number of records (results) returned for each class may also be shown in parentheses.
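
Pulling these pieces together, the sketch below shows how type and dataset filters might be added to the same Search request as before. The parameter names (“types”, “datasets”) are assumptions patterned on the filtering criteria listed above:

```python
# Sketch of a filtered structWSF search request; parameter names are
# illustrative assumptions patterned on the filter criteria above.
import requests

params = {
    "query": "artist",
    "types": "http://umbel.org/umbel#RefConcept",      # filter by record type
    "datasets": "http://example.org/datasets/umbel/",  # filter by source dataset
}
r = requests.post("http://example.org/ws/search/", data=params,
                  headers={"Accept": "application/rdf+xml"})
print(r.status_code)
```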

Single Result (Concept) View

Clicking on an individual instance result in the UMBEL search results view (see above) provides the single result View for that specific UMBEL concept:

This view now provides a detailed description of the UMBEL concept and its structure and relationships. I briefly describe each item denoted by a checkmark.

The concept title and link to the relation browser [] are provided, followed by the actual concept URI identifier. Then the listing shows the alternative labels (synonyms, jargon and acronyms) provided for that concept followed by its (often detailed) description.

The structured information for that concept appears below that material. First shown is the UMBEL SuperType [2] to which the concept belongs, and then its external (non-UMBEL ontology) and internal (UMBEL) super classes and subclasses. There is also the facility to retrieve named individuals (instances) for that concept (see next).

Named Individual Listing

Choosing the ‘Get Entities from Sources’ button may provide example instances for that concept, as is shown below for the ‘Artist’ concept:

Retrieving Named Individuals

This linkage is relatively new for UMBEL (see the version 1.00 release write-up) and is still being expanded. At present, these linkages are limited to only a subset of UMBEL concepts and only linkages to Wikipedia. This aspect of the system is under active development, with more sources and more linked concepts to be released in the future.

UMBEL small logo

This is the second of a multi-part series on the newly updated UMBEL services. Other articles in this series are:


[1] See further the general Wikipedia description of UMBEL or its specification on the official UMBEL Web site.
[2] SuperTypes are a collection of (mostly) similar Reference Concepts. Most of the SuperType classes have been designed to be (mostly) disjoint from the other SuperType classes. SuperTypes thus provide a higher-level of clustering and organization of Reference Concepts for use in user interfaces and for reasoning purposes. Every Reference Concept in UMBEL is assigned to a SuperType; a few are assigned to more than one where disjointedness conditions are not absolute. Each of the 32 UMBEL SuperTypes has a matching predicate for relating to external topics. See further the discussion of SuperTypes in the UMBEL specification.

Posted by AI3's author, Mike Bergman Posted on November 7, 2011 at 8:28 am in Open Semantic Framework, UMBEL | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/986/umbel-services-part-2-full-text-faceted-search/
The URI to trackback this post is: http://www.mkbergman.com/986/umbel-services-part-2-full-text-faceted-search/trackback/
Posted: October 24, 2011

New Portal Update Leverages the Open Semantic Framework

UMBEL, the Upper Mapping and Binding Exchange Layer, is an upper ontology of about 28,000 reference concepts and a vocabulary designed for domain ontologies and ontology mapping [1]. When we first released UMBEL in mid-2008 it was accompanied by a number of Web services, a SPARQL endpoint and general APIs. In fact, these were the first Web services developed for release by Structured Dynamics. They were the prototypes for what later became the structWSF Web services framework, which incorporated many lessons learned and better practices.

By the time that the structWSF framework had evolved with many additions to comprise the Open Semantic Framework (OSF), those original UMBEL Web services had become quite dated. Thus, upon the last major update of UMBEL to version 1.00 back in February of this year, we removed these dated services.

As I earlier mentioned about the cobbler’s children being the last to get new shoes, it has taken us a while to upgrade the UMBEL services. However, I am pleased to announce we have now completed the transition of UMBEL’s earlier services to use the OSF framework, and specifically the structWSF platform-independent services. As a result, there are both upgraded existing services and some exciting new ones. We will now be using UMBEL as one of our showcases for these expanding OSF features. We will be elaborating upon these features throughout this series, some parts of which will appear on Fred Giasson’s blog.

In this first part, we provide a broad overview of the new UMBEL OSF implementation. We also begin to preview some of the parts to come that will describe these features in more detail.

The Overall Portal

The new UMBEL portal is a fairly classic example of an OSF installation. The content management system hosting the portal is Drupal, supplemented with a standard set of third-party modules and our own conStruct semantic technology modules. The theme is a stripped-down modification of the popular Pixture Reloaded theme:

Like other vocabulary sites, the UMBEL portal contains specifications and links to community resources and downloads. It also has some specialty links not shown on typical standards sites.

Much Better Vocabulary Access and Management

The site now most prominently features our structOntology editing and maintenance tool. Built on the OWL API, the same as Protégé 4, structOntology provides the advantage of enabling edits and management of ontologies directly within the applications in which they are used. This is far superior to needing to fire up an external ontology manager and then to re-import the changed ontology. structOntology also has an arguably simpler interface and operation than other ontology management alternatives:

For the UMBEL site, the standard view of using structOntology is read-only. In a subsequent part we will also discuss structOntology’s full editing and maintenance mode.

Improved Discovery and Navigation

Like all standard OSF installations, there are two superior means for discovery and navigation of the information space:  search and the relation browser.

Search uses the integration of RDF and inferencing with full-text, faceted search using Solr. This has been Structured Dynamics’ standard search function for some time, as Fred initially described in April 2009. It is a very effective way for finding new and related concepts within the UMBEL structure.

The relation browser is what is used for casual navigation and discovery. Any concept found via search or other means within the system can have the browser invoked by clicking on its browser icon []. When done, the standard relation browser appears:

The relation browser is highly configurable, as shown by some of our exemplar installations. Note in this case that the More details … link brings you to a detailed concept view, such as this example:

These various tools provide great means for discovery and navigation within the 28,000 concepts in the UMBEL reference space.

Newly Released Web Services and SPARQL Endpoints

We are also now providing updated endpoints for Ontology: Read, Search, CRUD: Read, SPARQL and Scones. These will be described with access and query examples in a later part.
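
As a foretaste, here is a minimal sketch of a query one might pose to the SPARQL endpoint, asking for the super classes of a single reference concept. The endpoint URL is a placeholder, not the documented address; the concept URI follows UMBEL’s umbel.org/umbel/rc/ naming pattern:

```python
# Sketch of a SPARQL query for the super classes of an UMBEL reference
# concept; the endpoint URL is a placeholder for illustration only.
import requests

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?super
WHERE { <http://umbel.org/umbel/rc/Tractor> rdfs:subClassOf ?super }
"""

r = requests.get("http://example.org/ws/sparql",
                 params={"query": query},
                 headers={"Accept": "application/sparql-results+json"})
print(r.json())
```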

Some Cool New Sandboxes

We will also be discussing scones, our OBIE (ontology-based information extraction) and entity tagger, as well as export and ontology editing and management functions, in subsequent posts.

Looking Ahead to Remaining Parts

We anticipate eight or nine more parts in this series explaining most of these options in greater detail. We hope to post a couple per week or so over the coming month. We will conclude with a discussion of the next pending UMBEL releases.


This is the first of a multi-part series on the newly updated UMBEL services. Other articles in this series so far are:


[1] See further the general Wikipedia description of UMBEL or its specification on the official UMBEL Web site.

Posted by AI3's author, Mike Bergman Posted on October 24, 2011 at 10:35 am in Structured Dynamics, UMBEL | Comments (1)
The URI link reference to this post is: http://www.mkbergman.com/983/umbel-services-part-1-overview/
The URI to trackback this post is: http://www.mkbergman.com/983/umbel-services-part-1-overview/trackback/
Posted: August 8, 2011

Visualization + Analysis Pushes Aside Cytoscape

Though I never intended it, some posts of mine from a few years back dealing with 26 tools for large-scale graph visualization have been some of the most popular on this site. Indeed, my recommendation for Cytoscape for viewing large-scale graphs ranks within the top 5 posts all time on this site.

When that analysis was done in January 2008 my company was in the midst of needing to process the large UMBEL vocabulary, which now consists of 28,000 concepts. Like anything else, need drives research and demand, and after reviewing many graphing programs, we chose Cytoscape, then provided some ongoing guidelines in its use for semantic Web purposes. We have continued to use it productively in the intervening years.

Like for any tool, one reviews and picks the best at the time of need. Most recently, however, with growing customer usage of large ontologies and the development of our own structOntology editing and managing framework, we have begun to butt up against the limitations of large-scale graph and network analysis. With this post, we announce our new favorite tool for semantic Web network and graph analysis — Gephi — and explain its use and showcase a current example.

The Cytoscape Baseline and Limitations

Three and one-half years ago when I first wrote about Cytoscape, it was at version 2.5. Today, it is at version 2.8, and many aspects have seen improvement (including its Web site). However, in other respects, development has slowed. For example, version 3.x was first discussed more than three years ago; it is still not available today.

Though the system is open source, Cytoscape has also largely been developed with external grant funds. Like other similarly funded projects, once grant funds slow, development slows as well. While there has clearly been an active community behind Cytoscape, it is beginning to feel tired and a bit long in the tooth. From a semantic Web standpoint, some of the limitations of the current Cytoscape include:

  • Difficult conversion of existing ontologies — Cytoscape requires creating a CSV input; there was an earlier RDFscape plug-in that held great promise to bridge the software into the RDF and semantic Web sphere, but it has not remained active
  • Network analysis — one of the early and valuable generalized network analysis plug-ins was NetworkAnalyzer; however, that component has not seen active development in three years, and dynamic new generalized modules suitable for social network analysis (SNA) and small-world networks have not been apparent
  • Slow performance and too-frequent crashes — Cytoscape has always had a quirky interface and frequent crashes; later versions are a bit more stable, but usability remains a challenge
  • Largely supported by the biomedical community — from the beginning, Cytoscape was a project of the biomedical community. Most plug-ins still pertain to that space. Because of support for OBO (Open Biomedical and Biological Ontologies) formats and a lack of uptake by the broader semantic Web community, RDF- and OWL-based development has been keenly lacking
  • Aside from PDFs, poor ability to output large graphs in a viewable manner
  • Limited layout support — and poor performance for many of those included with the standard package.

Undoubtedly, were we doing semantic technologies in the biomedical space, we might well develop our own plug-ins and contribute to the Cytoscape project to help overcome some of these limitations. But, because I am a tools geek (see my Sweet Tools listing with nearly 1000 semantic Web and -related tools), I decided to check out the current state of large-scale visualization tools and see if any had made progress on some of our outstanding objectives.

Choosing Gephi and Using It

There are three classes of graph tools in the semantic technology space:

  1. Ontology navigation and discovery, to which the Relation Browser and RelFinder are notable examples
  2. Ontology structure visualization (and sometimes editing), such as the GraphViz (OWLViz) or OntoGraf tools used in Protégé (or the nice FlexViz, again used by the OBO community), and
  3. Large-scale graph visualization in order to gain a complete picture and macro relationships in the ontology.

One could argue that the first two categories have received the most current development attention. But, I would also argue that the third class is one of the most critical:  to understand where one is in a large knowledge space, much better larger-scale visualization and navigation tools are needed. Unfortunately, this third category is also the one that appears to be receiving the least development attention. (To be sure, large-scale graphs pose computational and performance challenges.)

In the nearly four years since my last major survey of 26 tools in this category, the new entrants appear quite limited. I’ve surely overlooked some, but the most notable are Gruff, NAViGaTOR, NetworkX and Gephi [1]. Gruff actually appears to belong most in Category #2; I could find no examples of graphs on the scale of thousands of nodes. NAViGaTOR is biomedical only. NetworkX has no direct semantic graph importing and — while apparently some RDF libraries can be used for manipulating imports — alternative workflows were too complex for me to tackle for initial evaluation. This leaves Gephi as the only potential new candidate.

From a clean Web site to well-designed intro tutorials, first impressions of Gephi are strongly positive. The real proof, of course, was getting it to perform against my real use case tests. For that, I used a “big” ontology for a current client that captures about 3000 different concepts and their relationships and more than 100 properties. What I recount here — from first installing the program and plug-ins and then setting up, analyzing, defining display parameters, and then publishing the results — took me less than a day from a totally cold start. The Gephi program and environment is surprisingly easy to learn, aided by some great tutorials and online info (see concluding section).

The critical enabler for being able to use Gephi for this source and for my purposes is the SemanticWebImport plug-in, recently developed by Fabien Gandon and his team at Inria as part of the Edelweiss project [2]. Once the plug-in is installed, you need only open up the SemanticWebImport tab, give it the URL of your source ontology, and pick the Start button (middle panel):

Note the SemanticWebImport tool also has the ability (middle panel) to issue queries to a SPARQL endpoint, the results of which return a (partial) results graph from the source ontology. (This feature is not further discussed herein.) This ontology load and display capability worked without error for the five or six OWL 2 ontologies I initially tested against the system.

Once loaded, an ontology (graph) can be manipulated with a conventional IDE-like interface of tabs and views. In the right-hand panels above we are selecting various network analysis routines to run, in this case Average Degrees. Once one or more of these analysis options is run, we can use the results to then cluster or visualize the graph; the upper left panel shows highlighting the Modularity Class, which is how I did the community (clustering) analysis of our big test ontology. (When run you can also assign different colors to the cluster families.) I also did some filtering of extraneous nodes and properties at this stage and also instructed the system via the ranking analysis to show nodes with more link connections as larger than those nodes with fewer links.

At this juncture, you can also set the scaling of such display options to linear or some power function. You can also select different graph layout options (lower left panel). There are many layout plug-ins for Gephi; the OpenOrd plug-in, for instance, is reported to scale to millions of nodes.

At this point I played extensively with the combination of filters, analysis, clusters, partitions and rankings (as may be separately applied to nodes and edges) to: 1) begin to understand the gross structure and characteristics of the big graph; and 2) refine the ultimate look I wanted my published graph to have.

In our example, I ultimately chose the standard Yifan Hu layout in order to get the communities (clusters) to aggregate close to one another on the graph. I then applied the Parallel Force Atlas layout to organize the nodes and make the spacings more uniform. The parallel aspect of this force-based layout allows these intense calculations to run faster. The result of these two layouts in sequence is then what was used for the results displays.

Upon completion of this analysis, I was ready to publish the graph. One of the best aspects of Gephi is its flexibility and control over outputs. Via the main Preview tab, I was able to do my final configurations for the published graph:

The graph results from the earlier worked-out filters, clusters and colors are shown in the right-hand Preview pane. On the left-hand side, many aspects of the final display are set, such as labels on or off, font sizes, colors, etc. It is worth looking at the figure above in full size to see some of the options available.

Standard output options include either SVG (vector image) or PDFs, as shown at the lower left, with output size scaling via slider bar. Also, it is possible to do standard saves under a variety of file formats or to do targeted exports.

One really excellent publication option is to create a dynamically zoomable display using the Seadragon technology via a separate Seadragon Web Export plug-in. (However, because of cross-site scripting limitations due to security concerns, I only use that option for specific sites. See next section for the Zoom It option — based on Seadragon — to workaround that limitation.)

Outputs Speak for Themselves

I am very pleased with the advances in display and analysis provided by Gephi. Using the Zoom It alternative [3] to embedded Seadragon, we can see our big ontology example with:

  • All 3000 nodes labeled, with connections shown (though you must zoom to see them), and
  • When zooming (use scroll wheel or + icon) or panning (via mouse down moves), wait a couple of seconds to get the clearest image refresh:

Note: at standard resolution, if this graph were to be rendered in actual size, it would be larger than 7 feet by 7 feet square at full zoom!!!

To compare output options, you may also:

Still, Some Improvements Would be Welcomed

It is notable that Gephi still only versions itself as an “alpha”. There is already a robust user community with promise for much more technology to come.

As an alpha, Gephi is remarkably stable and well-developed. Though clearly useful as is, I measure the state of Gephi against my complete list of desired functionality, with these items still missing:

  • Real-time and interactive navigation — the ability to move through the graph interactively and to issue queries and discover relationships
  • Huge node numbers — perhaps the OpenOrd plug-in somewhat addresses this need. We will be testing Gephi against UMBEL, which is an order of magnitude larger than our test big ontology
  • More node and edge control — Cytoscape still retains the advantage in the degree to which nodes and edges can be graphically styled
  • Full round-tripping — being able to use Gephi in an edit mode would be fantastic; the edit functionality is fairly straightforward, but the ability to round-trip in appropriate formats (OWL, RDF or otherwise) may be the greater sticking point.

Ultimately, of course, as I explained in an earlier presentation on a Normative Landscape for Ontology Tools, we would like to see a full-blown graphical program tie in directly with the OWL API. Some initial attempts toward that have been made with the non-Gephi GLOW visualization approach, but it is still in very early phases with ongoing commitments unknown. Optimally, it would be great to see a Gephi plug-in that ties directly to the OWL API.

In any event, while perhaps Cytoscape development has stalled a bit for semantic technology purposes, Gephi and its SemanticWebImport plug-in have come roaring into the lead. This is a fine toolset that promises usefulness for many years to come.

Some Further Gephi Links

To learn more about Gephi, also see the:

Also, for future developments across the graph visualization spectrum, check out the Wikipedia general visualization tools listing on a periodic basis.


[1] The R open source math and statistics package is very rich, with some apparent graph visualization capabilities, such as the dedicated network analysis and visualization project statnet. rrdf may also provide an interesting path for RDF imports. R and its family of tools may indeed be quite promising, but the commitment necessary to learn R appears quite daunting. Longer-term, R may represent a more powerful upgrade path for our general toolsets. Neo4j is also a rising star in graph databases, with its own visualization components. However, since we did not want to convert our underlying data stores, we did not test this option.
[2] Erwan Demairy is the lead developer and committer for SemanticWebImport. The first version was released in mid-April 2011.
[3] For presentations like this blog post, the Seadragon JavaScript enforces some security restrictions against cross-site scripting. To overcome that, the option I followed was to:
  • Use Gephi’s SVG export option
  • Open the SVG in Inkscape
  • Expand the size of the diagram as needed (with locked dimensions to prevent distortion)
  • Save As a PNG
  • Go to Zoom It and submit the image file
  • Choose the embed function, and
  • Embed the link provided, which is what is shown above.
(Though Zoom.it also accepts SVG files directly, I found performance to be spotty, with many graphical elements dropped in the final rendering.)

Posted by AI3's author, Mike Bergman Posted on August 8, 2011 at 3:27 am in Ontologies, Open Source, Semantic Web Tools, UMBEL | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/968/a-new-best-friend-gephi-for-large-scale-networks/
The URI to trackback this post is: http://www.mkbergman.com/968/a-new-best-friend-gephi-for-large-scale-networks/trackback/
Posted: February 28, 2011

Wikipedia + UMBEL + Friends May Offer One Approach

In the first part of this series we argued for the importance of reference structures to provide the structures and vocabularies to guide interoperability on the semantic Web. The argument was made that these reference structures are akin to human languages, requiring a sufficient richness and terminology to enable nuanced and meaningful communications of data across the Web and within the context of their applicable domains.

While the idea of such reference structures is great — and perhaps even intuitive when likened to human languages — the question arises: what is the basis for such structures? Just as in human languages we have dictionaries, thesauri, grammar and style books or encyclopedias, what are the analogous reference sources for the semantic Web?

In this piece, we tackle these questions from the perspective of the entire Web. Similar challenges and approaches occur, of course, for virtually every domain and specific community. But, by focusing on the entirety of the Web, perhaps we can discern the grain of sand at the center of the pearl.

Bootstrapping the Semantic Web

The idea of bootstrapping is common in computers, compilers or programming. Every computer action needs to start from a basic set of instructions from which further instructions or actions are derived. Even starting up a computer (“booting up”) reflects this bootstrapping basis. Bootstrapping answers the classic chicken-or-egg dilemma by embedding a starting set of instructions that provides the premise at start up [1]. The embedded operand for simple addition, for example, is the basis for building up more complete mathematical operations.

So, what is the grain of sand at the core of the semantic Web that enables it to bootstrap meaning? We start with the basic semantics and “instructions” in the core RDF, RDFS and OWL languages. These are very much akin to the basic BIOS instructions for computer boot up or the instruction sets leveraged by compilers. But, where do we go from there? What is the analog to the compiler or the operating system that gives us more than these simple start up instructions? In a semantics sense, what are the vocabularies or languages that enable us to understand more things, connect more things, relate more things?

To date, the semantic Web has given us perhaps a few dozen commonly used vocabularies, most of which are quite limited and simple pidgin languages such as DC, FOAF, SKOS, SIOC, BIBO, etc. We also have an emerging catalog of “things” and concepts from Wikipedia (via DBpedia) and similar. (Recall, in this piece, we are trying to look Web-wide, so the many fine building blocks for domain purposes such as found in biology, medicine, finance, astronomy, etc., are excluded.) The purposes and scope of these vocabularies widely differ and attack quite different slices of the information space. SKOS, for example, deals with describing simple knowledge structures like taxonomies or thesauri; SIOC is for describing social media.

By virtue of adoption, each of these core languages has proved its usefulness and role. But, like skew lines in space, how do these vocabularies relate to one another? And, how can all of the specific domain vocabularies also relate to those and to one another where there are points of intersection or overlap? In short, after we get beyond the starting instructions for the semantic Web, what is our language and vocabulary? How do we complete the bootstrap process?

Clearly, like human languages, we need rich enough vocabularies to describe the things in our world and a structure of the relationships amongst those things to give our communications meaning and coherence. That is precisely the role provided by reference structures.

The Use and Role of ‘Gold Standards’

To prevent reference structures from being rubber rulers, some fixity or grounding needs to establish the common understanding for their referents. Such fixed references are often called ‘gold standards’. In money, of course, this used to be a fixed weight of gold, until that basis was abandoned in the 1970s. In the metric system, there are a variety of fixed weights and measures that are employed. In the English language, the Oxford English Dictionary (OED) is the accepted basis for the lexicon. And so on.

Yet, as these examples show, none of these gold standards is absolute. Money now floats; multiple systems of measurement compete; a variety of dictionaries are used for English; most languages have their own reference sets; etc. The key point in all gold standards, however, is that there is wide acceptance for a defined reference for determining alignments and arbitrating differences.

Gold standards or reference standards play the role of referees or arbiters. What is the meaning of this? What is the definition of that? How can we tell the difference between this and that? What is the common way to refer to some thing?

Let’s provide one example in a semantic Web context. Let’s say we have a dataset and its schema A that we are aligning with another dataset with schema B. If I say two concepts align exactly across these datasets and you say differently, how do we resolve this difference? On one extreme, each of us can say our own interpretation is correct, and to heck with the other. On the other extreme, we can say both interpretations are correct, in which case both assertions are meaningless. Perhaps papering over these extremes is OK when only two competing views are in play, but what happens when real problems with many actors are at stake? Shall we propose majority rule, chaos, or the strongest prevails?

These same types of questions have governed human interaction from time immemorial. One of the reasons to liken the problem of operability on the semantic Web to human languages, as argued in Part I, is to seek lessons and guidance for how our languages have evolved. The importance of finding common ground in our syntax and vocabularies — and, also, critically, in how we accept changes to those — is the basis for communication. Each of these understandings needs to be codified and documented so that they can be referenced, and so that we can have some confidence of what the heck it is we are trying to convey.

For reference structures to play their role in plugging this gap — that is, to be much more than rubber rulers — they need to have such grounding. Naturally, these groundings may themselves change with new information or learning inherent to the process of human understanding, but they still should retain their character as references. Grounded references for these things — ‘gold standards’ — are key to this consensual process of communicating (interoperating).

Some ‘Gold Standards’ for the Semantic Web

The need for gold standards for the semantic Web is particularly acute. First, by definition, the scope of the semantic Web is all things and all concepts and all entities. Second, because it embraces human knowledge, it also embraces all human languages with the nuances and varieties thereof. There is an immense gulf in referenceability from the starting languages of the semantic Web in RDF, RDFS and OWL to this full scope. This gulf is chiefly one of vocabulary (or lack thereof). We know how to construct our grammars, but we have few words with understood relationships between them to put in the slots.

The types of gold standards useful to the semantic Web are similar to those useful to our analogy of human languages. We need guidance on structure (syntax and grammar), plus reference vocabularies that encompass the scope of the semantic Web (that is, everything). Like human languages, the vocabulary references should have analogs to dictionaries, thesauri and encyclopedias. We want our references to deal with the specific demands of the semantic Web in capturing the lexical basis of human languages and the connectedness (or not) of things. We also want bases by which all of this information can be related to different human languages.

To capture these criteria, then, I submit we should consider a basic starting set of gold standards:

  • RDF/RDFS/OWL — the data model and basic building blocks for the languages
  • Wikipedia — the standard reference vocabulary of things, concepts and entities, plus other structural guidances
  • WordNet — lexical language references as an aid to natural language processing, and
  • UMBEL — the structural reference for the connectedness of things for basic coherence and inference, plus a vocabulary for mapping amongst reference structures and things.

Each of these potential gold standards is next discussed in turn. The majority of discussion centers on Wikipedia and UMBEL.

RDF/RDFS/OWL: The Language

Naturally, the first suggested gold standards for the semantic Web are the RDF/RDFS/OWL language components. Other writings have covered their uses and roles [2]. In relation to their use as a gold standard, two documents, one on RDF semantics [3] and the other an OWL primer [4], are great starting points. Since these languages are now in place and are accepted bases of the semantic Web, we will concentrate on the remaining members of the standard reference set.

Wikipedia: The Vocabulary (and More)

The second suggested gold standard for the semantic Web is Wikipedia, principally as a sort of canonical vocabulary base or lexicon, but also for some structural aspects. Wikipedia now contains about 3.5 million English articles, far larger than any other knowledge base, and has more than 250 language versions. Each Wikipedia article acts as more or less a reference for the thing it represents. In addition, the size, scope and structure of Wikipedia make it an unprecedented resource for researchers engaged in natural language processing (NLP), information extraction (IE) and semantic Web-related tasks.

For some time I have been maintaining a listing called SWEETpedia of academic and research articles focused on the use of Wikipedia for these tasks. The latest version tracks some 250 articles [5], which I guess to be about one half or more of all such research extant. This research shows a broad variety of potential roles and contributions from Wikipedia as a gold standard for the semantic Web, some of which is detailed in the tables below.

An excellent report by Olena Medelyan et al. from the University of Waikato in New Zealand, Mining Meaning from Wikipedia, organized this research up through 2008 and provided detailed commentary and analysis of the role of Wikipedia [6]. They noted, for example, that Wikipedia has potential use as an encyclopedia (its intended use), a corpus for testing and modeling NLP tasks, as a thesaurus, a database, an ontology or a network structure. The Intelligent Wikipedia project from the University of Washington has also done much innovative work on “automatically learned systems [that] can render much of Wikipedia into high-quality semantic data, which provides a solid base to bootstrap toward the general Web” [7].

However, as we proceed through the next discussions, we’ll see that the weakest aspect of Wikipedia is its category structure. Thus, while Wikipedia is unparalleled as the gold standard for a reference vocabulary for the Web, and has other structural uses as well, we will need to look elsewhere for how that content is organized.

Major Wikipedia Initiatives

Many groups have recognized these advantages for Wikipedia, and have built knowledge bases around it. Also, many of these groups have also recognized the category (schema) weaknesses in Wikipedia and have proposed alternatives. Some of these major initiatives, which also collectively represent a large number of the research articles in SWEETpedia, include:

  • DBpedia (schema basis: Wikipedia infoboxes): excellent source for URI identifiers; structure extraction basis used by many other projects
  • Freebase (schema basis: user generated): schemas are for domains based on types and properties; at one time had a key dependence on Wikipedia; has since grown much from user-generated data and structure; now owned by Google
  • Intelligent Wikipedia (schema basis: Wikipedia infoboxes): a broad program and a general set of extractors for obtaining structure and relationships from Wikipedia; was formerly known as KOG; from the Univ of Washington
  • SIGWP (schema basis: Wikipedia ontology): the Special Interest Group of Wikipedia (Research or Mining); a general group doing research on Wikipedia structure and mining; schema basis is mostly from a thesaurus; the group has not published in two years
  • UMBEL (schema basis: UMBEL Reference Concepts): RefConcepts based on the Cyc knowledge base; provides a tested, coherent concept schema, but one with gaps regarding Wikipedia content; has 28,000 concepts mapped to Wikipedia
  • WikiNet (schema basis: extracted Wikipedia ontology): part of a long-standing structure extraction effort from Wikipedia leading to an ontology; formerly known as WikiRelate; from the Heidelberg Institute for Theoretical Studies (HITS)
  • Wikipedia Miner (schema basis: N/A): generalized structure extractor; part of a wider basis of Wikipedia research at the Univ of Waikato in New Zealand
  • Wikitology (schema basis: Wikipedia ontology): general RDF and ontology-oriented project utilizing Wikipedia; effort now concluded; from the Ebiquity Group at the Univ of Maryland
  • YAGO (schema basis: WordNet): maps WordNet to Wikipedia, with structured extraction of relations for characterizing entities


It is interesting to note that none of the efforts above uses the Wikipedia category structure “as is” for its schema.

Structural Sources within Wikipedia

The surface view of Wikipedia is topic articles placed into one or more categories. Some of these pages also include structured data tables (or templates) for the kind of thing the article is; these are called infoboxes. An infobox is a fixed-format table placed at the top right of articles to consistently present a summary of some unifying aspect that the articles share. For example, see the listing for my home town, Iowa City, which has a city infobox.

However, this cursory look at Wikipedia in fact masks much additional and valuable structure. Some early researchers noted this [8]. The recognition of structure has also been a key driver for the interest in Wikipedia as a knowledge base (in addition to its global content scope). The following table is a fairly complete listing of structure possibilities within Wikipedia (see Endnotes for any notes):

Wikipedia Structure and Potential Applications

  • Corpus
      - Entire Corpus: knowledge base; graph structure; corpus for n-grams, other constructions [9]
  • Categories
      - Category: category suggestion; semantic relatedness; query expansion; potential parent category
      - Contained Articles: semantically-related terms (siblings)
      - Hierarchy: hyponymic and meronymic relations between terms
      - Listing Pages/Categories: semantically-related terms (siblings)
      - Patterned Categories: functional metadata [9]
  • Infobox Templates
      - Attributes: synonyms; key-value pairs
      - Values: units of measure; fact extraction [9]
      - Items: category suggestion; entity suggestion
      - Geolocational: coordinates; places; geolocational (may also appear in full article text)
  • Issue Templates
      - Multiple Types: exclusion candidates; other structural analysis; examples include Stub, Message Boxes, Multiple Issues [9]
  • Category Templates [13]
      - Category Name: disambiguation; relatedness
      - Category Links: semantic relatedness
  • Articles
      - First Paragraph: definition; abstract
      - Full Text: complete discussion; related terms; context; translations; NLP analysis basis; relationships; sentiment
      - Redirects: synonymy; spelling variations, misspellings; abbreviations; query expansion
      - Title: named entities; domain specific terms or senses
      - Subject: category suggestion (phrase marked in bold in first paragraph)
      - Section Heading(s): category suggestion; semantic relatedness [9]
      - See Also: related concepts; query expansion [9]
      - Further Reading: related concepts [9,10]
      - External Links: related concepts; external harvest points
  • Article Links
      - Context: related terms; co-occurrences
      - Label: synonyms; spelling variations; related terms; query expansion
      - Target: link graph; related terms
      - LinksTo: category suggestion; functional metadata
      - LinkedFrom: category suggestion; functional metadata
  • References
      - Citations: external harvest points [9,10]
  • Media
      - Images: thumbnails; image recognition for disambiguation; controversy (edit/upload frequency) [11]
      - Captions: related concepts; related terms; functional metadata [9]
  • Disambiguation Pages
      - Article Links: sense inventory
  • Discussion Pages
      - Discussion Content: controversy
      - Redux for Article Structure: see Articles for uses
  • History Pages
      - Edit Frequency: topicalness; controversy (diversity of editors, reversions)
      - Edit Basis: lexical errors [9]
  • Lists
      - Hyponyms: instances; named entity candidates
  • Alternate Language Versions
      - Redux for All Structures: see all items above; translation; multilingual alignment; entity disambiguation [12]

The potential for Wikipedia to provide structural understandings is evident from this table. However, it should be noted that, aside from some stray research initiatives, most effort to date has focused on the major initiatives noted earlier or on analyzing links and infoboxes. There is much additional research that could be powered by the Wikipedia structure as it presently exists.

From the standpoint of the broader semantic Web, the potential of Wikipedia in the areas of metadata enhancement and mapping to multiple human languages [12] is particularly strong. We are only now at the very beginning phases of tapping this potential.

Structural Weaknesses

The three main weaknesses with Wikipedia are its category structure [14], inconsistencies and incompleteness. The first weakness means Wikipedia is not a suitable organizational basis for the semantic Web; the next two weaknesses, due to the nature of Wikipedia’s user-generated content, are constantly improving.

Our recent effort to map between UMBEL and Wikipedia, undertaken as part of the recent UMBEL v 1.00 release, spent considerable time analyzing the Wikipedia category structure [15]. Of the roughly half million categories in Wikipedia, only about 85,000 were found to be suitable candidates to participate in an actual schema structure. Further breakdowns are shown by this table resulting from our analysis:

Wikipedia Category Breakdowns

  • Removals: 20.7%
      - Administrative: 15.7%
      - Misc Cleaning: 5.0%
  • Functional (not schema): 61.8%
      - Fn Dates: 10.1%
      - Fn Nationalities: 9.6%
      - Fn Listings, related: 0.8%
      - Fn Occupations: 1.0%
      - Fn Prepositions: 40.4%
  • Candidates: 17.4%
      - SuperTypes: 1.7%
      - General Structure: 15.7%
  • TOTAL: 100.0%

Fully 1/5 of the categories are administrative or internal in nature. The large majority of categories are, in fact, not structural at all, but what we term functional categories, which means the category contains faceting information (such as subclassifying musicians into British musicians) [16]. Functional categories can be a rich source of supplementary metadata for their assigned articles — though no one has yet processed Wikipedia in this manner — but are not a useful basis for structural conceptual relationships or inferencing.

This weakness in the Wikipedia category system has been known for some time [17], but researchers and others still attempt to do mappings on mostly uncleaned categories. Though most researchers recognize and remove internal or administrative categories in their efforts, using the indiscriminate remainder of categories still leads to poor precision in resulting mappings. In fact, in comparison to one of the more rigorous assessments to date [18], our analysis still showed a 6.8% error rate in hand-inspected categories.

Other notable category problems include circular references, skipped intermediate categories, misassigned categories and incomplete assignments.

Nonetheless, Wikipedia categories do have a valuable use in the analysis of local relationships (one degree of relatedness) and for finding missing category candidates. And, as noted, the functional categories are also a rich and untapped source of additional article metadata.

Like any knowledge base, Wikipedia also has inconsistent and incomplete coverage of topics [19]. However, as more communities accept Wikipedia as a central resource deserving completeness, we should see these gaps continue to get filled.

The DBpedia Implementation

One of the first database versions of Wikipedia built for semantic Web purposes is DBpedia. DBpedia has an incipient ontology useful for some classification purposes. Its major structural organization is built around the Wikipedia infoboxes, which are applied to about a third of Wikipedia articles. DBpedia also has multiple language versions.

DBpedia is a core hub of Linked Open Data (LOD), which now encompasses about 300 linked datasets. DBpedia's canonical URIs are used by many other applications; its extracted versions and tools are very useful for further processing; and it has recently moved to incorporate live updates from the source Wikipedia [20]. For these reasons, the DBpedia version of Wikipedia is the suggested implementation version.
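
Because DBpedia exposes a public SPARQL endpoint, its structure can be queried directly. A minimal sketch using the Python SPARQLWrapper library, which simply lists a few classes from the DBpedia ontology namespace:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # DBpedia's public endpoint; this query lists a handful of classes from
    # the DBpedia ontology namespace.
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX owl: <http://www.w3.org/2002/07/owl#>
        SELECT DISTINCT ?class WHERE {
            ?class a owl:Class .
            FILTER (STRSTARTS(STR(?class), "http://dbpedia.org/ontology/"))
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["class"]["value"])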

WordNet: Language Relationships

The third suggested gold standard for the semantic Web is WordNet, a lexical database for the English language. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. The purpose is twofold: to produce a combination of dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications. More than 50 languages are now covered by wordnets, most of them mapped to this English WordNet [21].
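
The synset structure is easy to inspect programmatically. A minimal sketch using the WordNet interface in Python's NLTK (assuming the WordNet corpus has been downloaded with nltk.download('wordnet')):

    from nltk.corpus import wordnet as wn

    # Each synset groups synonymous words, carries a short definition, and
    # records semantic relations to other synsets.
    for synset in wn.synsets("tractor"):
        print(synset.name(), "-", synset.definition())
        # Hypernyms walk up the 'is-a' hierarchy, the closest WordNet comes
        # to a conceptual structure.
        for hypernym in synset.hypernyms():
            print("   is-a:", hypernym.name())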

Though it has been used in many ontologies [22], WordNet is most often mapped for its natural language purposes and not used as a structure of conceptual relationships per se. This is because it is designed for words and not concepts. It contains hundreds of basic semantic inconsistencies and also lacks much domain applicability. Entities, of course, are also lacking. In those cases where WordNet has been embraced as a schema basis, much work is generally expended to transform it into an ontology suitable for knowledge representation.

Nonetheless, for word sense disambiguation and other natural language processing tasks, as well as for aiding multilingual mappings, WordNet and its various other language variants are a language reference gold standard.

UMBEL: A Coherent Structure

So, with these prior gold standards we gain a basic language and grammar; a base (canonical) vocabulary and some structure guidance; and a reference means for processing and extracting information from input text. Yet two needed standards remain.

One needed standard is a conceptual organizing structure (or schema) by which the canonical vocabulary of concepts and instances can be related. This core structure should be constructed in a coherent [23] manner and expressly designed to support inferencing and (some) reasoning. This core structure should be sufficiently large to embrace the scope of the semantic Web, but not so detailed as to make it computationally inefficient. Thus, the core structure should be a framework that allows more focused and purposeful vocabularies to be “plugged in”, depending on the domain and task at hand. Unfortunately, the candidate category structures from our other gold standards in Wikipedia and WordNet do not meet these criteria.

A second needed standard is a bit of additional vocabulary “glue” specifically designed for the purposes of the semantic Web and ontology and domain incorporation. We have multiple and disparate world views and contexts, as well as the things described by them [24]. To get them to interoperate, and to acknowledge differences in alignment or context, we need a set of relational predicates (vocabulary) that can capture a range of mappings from the exact to the approximate [25]. Unlike other reference vocabularies that attempt to capture canonical definitions within defined domains, this vocabulary is expressly required by the semantic Web and its goal of federating different data and schemas.
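
The SKOS mapping properties already illustrate such a graded spectrum, with OWL supplying the exact end [25]. A minimal sketch with the Python rdflib library (the two concept URIs are hypothetical placeholders):

    from rdflib import Graph, URIRef
    from rdflib.namespace import SKOS

    g = Graph()
    # Hypothetical concepts in two different vocabularies.
    a = URIRef("http://example.org/vocabA/Tractor")
    b = URIRef("http://example.org/vocabB/FarmTractor")

    # SKOS offers a graded spectrum of mapping predicates; choose the one
    # matching the confidence and nature of the alignment.
    g.add((a, SKOS.exactMatch, b))    # the same concept
    # g.add((a, SKOS.closeMatch, b))  # interchangeable for many purposes
    # g.add((a, SKOS.broadMatch, b))  # b is broader than a

    print(g.serialize(format="turtle"))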

UMBEL has been expressly designed to address both of these needs [26]. UMBEL is a coherent categorization structure for the semantic Web and a mapping vocabulary designed for dataset and conceptual interoperability. UMBEL’s 28,000 reference concepts (RefConcepts) are based on the Cyc knowledge base [27], which is expressly designed as a common sense representation of the world, with express variations in context supported via its 1000 or so microtheories. Neither Cyc nor UMBEL, which derives from it, is the “correct” or “only” representation of the world, but both are coherent ones and thus internally consistent.

UMBEL’s role to allow datasets to be “plugged in” and related through some fixed referents was expressed by this early diagram [28]:

Lightweight Binding to an Upper Subject Structure Can Bring Order

The idea — which is still central to this kind of reference structure — is that a set of reference concepts can be used by multiple datasets to connect and then inter-relate. These are shown by the nested subjects (concepts) in the umbrella structure.

UMBEL, of course, is not the only coherent structure for such interoperability purposes. Other major vocabularies (such as LCSH; see below) or upper-level ontologies (such as SUMO, DOLCE, BFO or PROTON, etc.) can fulfill portions of these roles, as well. In fact, the ultimate desire is for multiple reference structures to emerge that are mapped to one another, similar to how human languages can inter-relate. Yet, even in that desired vision, there is still a need for a bootstrapped grounding. UMBEL is the first such structure expressly designed for the two needed standards.

Mappings to the Other Standards

UMBEL is already based on the central semantic Web languages of RDF, RDFS, SKOS, and OWL 2. The recent version 1.00 maps 60% of UMBEL to Wikipedia, with mapping of the remainder in process. UMBEL also provides mappings to WordNet via its Cyc relationships; more of these are in process and will be exposed. And the mapping between UMBEL and GeoNames [29] for locational purposes is also nearly complete.
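
These mappings are distributed as plain RDF and can be inspected directly. A minimal sketch with the Python rdflib library; the file name is a placeholder for whichever UMBEL v1.00 distribution file is downloaded from umbel.org, and umbel:correspondsTo is assumed here as the mapping predicate (see the vocabulary listing in [25]):

    from rdflib import Graph, Namespace

    UMBEL = Namespace("http://umbel.org/umbel#")

    g = Graph()
    # Placeholder file name: substitute whichever UMBEL v1.00 mapping file
    # is downloaded from umbel.org.
    g.parse("umbel_mappings.n3", format="n3")

    # Count the reference concepts that carry external mappings via the
    # umbel:correspondsTo predicate (assumed; see the vocabulary listing in [25]).
    mapped = {s for s, o in g.subject_objects(UMBEL.correspondsTo)}
    print(len(mapped), "reference concepts have external mappings")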

The Gold Resides in Combining These Standards

Each of these reference structures — RDF/OWL, Wikipedia, WordNet, UMBEL — is itself coherent and recognized or used by multiple parties for potential reference purposes on the semantic Web. The advocacy of them as standards is hardly radical.

However, the gold lies in the combination of these components. It is in this combination that we can see a grounded knowledge base emerge that is sufficient for bootstrapping the semantic Web.

The challenge in creating this reference knowledge base is in the mapping between the components. Fortunately, all of the components are already available in RDF/OWL. WordNet already has significant mappings to Wikipedia and UMBEL. And 60% of UMBEL is already mapped to Wikipedia. The remaining steps for completing these mappings are very near at hand. Other vocabularies, such as GeoNames [29], would also beneficially contribute to such a reference base.

Yet to truly achieve a role as a gold standard, these mappings should be fully vetted and accurate. Automated techniques that embed errors are unacceptable; gold standards should not themselves be a source for propagating errors. Like dictionaries or thesauri, we need reference structures that are of high quality and deserving of reference. We need canonical structures and canonical vocabularies.

But, once done, these gold standards themselves become reference sources that can aid automatic and semi-automatic mappings of other vocabularies and structures. Thus, the real payoff is not that these gold standards become embedded in specific domain uses, but that they can act as referees for helping align and ground other structures.

In keeping with the bootstrapping metaphor, more and more reference structures may be brought into this system. Reference does not mean reliance; a reference structure need not see more than minimal use. As new structures and vocabularies are brought into the mix, appropriate to specific domains or purposes, reference to other grounding structures will enable those structures and vocabularies to continue to expand. So, not only are reference concepts necessary for grounding the semantic Web, but we also need to pick good mapping predicates for properly linking these structures together.

In this manner, many alternative vocabularies can be bootstrapped and mapped and then used as the dominant vocabularies for specific purposes. For example, at the level of general knowledge categorization, vocabularies such as LCSH, the Dewey Decimal Classification, UDC, etc., can be preferentially chosen. Other specific vocabularies are at the ready, with many already used for domain purposes. Once grounded, these various vocabularies can also interoperate.

Grounding in gold standards enables the freedom to switch vocabularies at will. Establishing fixed reference points via such gold standards will power a virtuous circle of more vocabularies, more mappings, and, ultimately, functional interoperability no matter the need, domain or world view.

This is the concluding part of a two-part series on the importance and choice of reference structures (Part I) and gold standards (Part II) on the semantic Web.

[1] For example, according to the Wikipedia entry on Machine code, “A machine code instruction set may have all instructions of the same length, or it may have variable-length instructions. How the patterns are organized varies strongly with the particular architecture and often also with the type of instruction. Most instructions have one or more opcode fields which specifies the basic instruction type (such as arithmetic, logical, jump, etc) and the actual operation (such as add or compare) and other fields that may give the type of the operand(s), the addressing mode(s), the addressing offset(s) or index, or the actual value itself.”
[2] See, for example, M.K. Bergman, 2009. “Advantages and Myths of RDF,” AI3:::Adaptive Information blog, April 8, 2009; see http://www.mkbergman.com/483/advantages-and-myths-of-rdf/ and M.K. Bergman, 2010. “Ontology Tutorial Series,” AI3:::Adaptive Information blog, September 27, 2010; see http://www.mkbergman.com/916/ontology-tutorial-series/.
[3] Patrick Hayes, ed., 2004. RDF Semantics, W3C Recommendation 10 February 2004. See http://www.w3.org/TR/rdf-mt/.
[4] Pascal Hitzler et al., eds., 2009. OWL 2 Web Ontology Language Primer, a W3C Recommendation, 27 October 2009; see http://www.w3.org/TR/owl2-primer/.
[5] See SWEETpedia from the AI3:::Adaptive Information blog, which currently lists about 250 articles and citations.
[6] Olena Medelyan, Catherine Legg, David Milne and Ian H. Witten, 2008. Mining Meaning from Wikipedia, Working Paper Series ISSN 1177-777X, Department of Computer Science, The University of Waikato (New Zealand), September 2008, 82 pp. See http://arxiv.org/ftp/arxiv/papers/0809/0809.4530.pdf. This paper and its findings are discussed more in M.K. Bergman, 2008. “Research Shows Natural Fit between Wikipedia and Semantic Web,” AI3:::Adaptive Information blog, October 15, 2008; see http://www.mkbergman.com/460/research-shows-natural-fit-between-wikipedia-and-semantic-web/.
[7] For a comprehensive treatment, see Fei Wu, 2010. Machine Reading: from Wikipedia to the Web, a doctoral thesis to the Department of Computer Science, University of Washington, 154 pp; see http://ai.cs.washington.edu/www/media/papers/Wu-thesis-2010.pdf. To my knowledge, this paper also was the first to use the “bootstrapping” metaphor.
[8] Quite a few research papers have characterized various aspects of the Wikipedia structure. One of the first and most comprehensive was Torsten Zesch, Iryna Gurevych, Max Mühlhäuser, 2007. Analyzing and Accessing Wikipedia as a Lexical Semantic Resource, and the longer technical report; see http://www.ukp.tu-darmstadt.de/software/JWPL. Also, 2008, in Proceedings of the Biannual Conference of the Society for Computational Linguistics and Language Technology, pp. 213–221. Also, for another early discussion, see Linyun Fu, Haofen Wang, Haiping Zhu, Huajie Zhang, Yang Wang and Yong Yu, 2007. Making More Wikipedians: Facilitating Semantics Reuse for Wikipedia Authoring; see http://data.semanticweb.org/pdfs/iswc-aswc/2007/ISWC2007_RT_Fu.pdf.
[9] This structural basis in Wikipedia is largely untapped.
[10] Citations and references appear to be highly selective (biased) in Wikipedia; nonetheless, those available are useful seeding points for more suitable harvests.
[11] Images have been used as thumbnails and linked references to the articles they are hosted in, but have not been analyzed much for semantics or file names.
[12] There are a variety of efforts underway to use Wikipedia as a multi-language cross-reference based on its 250 language versions; search, for example, on “multiple language” in SWEETpedia. Both named entity and concept matches can be used to correlate in multiple languages. This is greatly aided by inter-language links.
[13] When present, these appear at the bottom of an article and have many related categories; see this one for the semantic Web.
[14] See further http://en.wikipedia.org/wiki/Wikipedia:Category and http://en.wikipedia.org/wiki/Wikipedia:Categorization_FAQ for a discussion of use and guidelines for Wikipedia categories.
[15] For the release notice, see http://umbel.org/content/finally-umbel-v-100. Annex H to the UMBEL Specifications provides a description of the mapping methodologies and results.
[16] Functional categories combine two or more facets in order to split or provide more structured characterization of a category. For example, Category:English cricketers of 1890 to 1918 has as its core concept the idea of a cricketer, a sports person. But this is also further characterized by nationality and time period. Functional categories tend to have an A x B x C construct, with prepositions denoting the facets. From a proper characterization standpoint, the items in this category should be classified as Person –> Sports Person –> Cricketer, with additional facets (metadata) of being English and having the period 1890 to 1918 assigned.
[17] See, for example, Massimo Poesio et al., 2008. ELERFED: Final Report, see http://www.cl.uni-heidelberg.de/~ponzetto/pubs/poesio07.pdf, wherein they state, “We discovered that in the meantime information about categories in Wikipedia had grown so much and become so unwieldy as to limit its usefulness.” Additional criticisms of the category structure may be found in S. Chernov, T. Iofciu, W. Nejdl and X. Zhou, 2006. “Extracting Semantic Relationships between Wikipedia Categories,” in Proceedings of the 1st International Workshop: SemWiki’06—From Wiki to Semantics., co-located with the 3rd Annual European Semantic Web Conference ESWC’06 in Budva, Montenegro, June 12, 2006; and L Muchnik, R. Itzhack, S. Solomon and Y. Louzon, 2007. “Self-emergence of Knowledge Trees: Extraction of the Wikipedia Hierarchies,” in Physical Review E 76(1). Also, this blog post from Bob Bater at KOnnect, “Wikipedia’s Approach to Categorization,” September 22, 2008, provides useful comments on category issues; see http://iskouk.wordpress.com/2008/09/22/wikipedias-approach-to-categorization/.
[18] Olena Medelyan and Cathy Legg, 2008. Integrating Cyc and Wikipedia: Folksonomy Meets Rigorously Defined Common-Sense, in Proceedings of the WIKI-AI: Wikipedia and AI Workshop at the AAAI08 Conference, Chicago, US. See http://www.cs.waikato.ac.nz/~olena/publications/Medelyan_Legg_Wikiai08.pdf.
[19] As two references among many, see A. Halavais and D. Lackaff, 2008. “An Analysis of Topical Coverage of Wikipedia,” in Journal of Computer-Mediated Communication 13 (2): 429–440; and A. Kittur, E. H. Chi and B. Suh, 2009. “What’s in Wikipedia? Mapping Topics and Conflict using Socially Annotated Category Structure,” in Proceedings of the 27th Annual CHI Conference on Human Factors in Computing Systems, pp 4–9.
[20] See DBpedia.org, especially DBpedia reference.
[21] See http://www.globalwordnet.org/gwa/wordnet_table.htm for a listing of known wordnets by language.
[22] For example, see this listing in Wikipedia.
[23] M.K. Bergman, 2008. “When is Content Coherent?,” AI3:::Adaptive Information blog, July 25, 2008; see http://www.mkbergman.com/450/when-is-content-coherent/.
[24] For a couple of useful references on this topic, first see this discussion regarding contexts (and the possible relation to Cyc microtheories): Ramanathan V. Guha, Rob McCool, and Richard Fikes, 2004. “Contexts for the Semantic Web,” in Sheila A. McIlraith, Dimitris Plexousakis, and Frank van Harmelen, eds., International Semantic Web Conference, volume 3298 of Lecture Notes in Computer Science, pp. 32–46, Springer, 2004. See http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.58.2368&rep=rep1&type=pdf. For another discussion about local differences and contexts and the difficulty of reliance on “common” understandings, see: Krzysztof Janowicz, 2010. “The Role of Space and Time for Knowledge Organization on the Semantic Web,” in Semantic Web 1: 25–32; see http://iospress.metapress.com/content/636610536x307213/fulltext.pdf.
[25] OWL already provides the exact predicates; see further M.K. Bergman, 2010. “The Nature of Connectedness on the Web,” AI3:::Adaptive Information blog, November 22, 2010; see http://www.mkbergman.com/935/the-nature-of-connectedness-on-the-web/ and the UMBEL mapping predicates in this vocabulary listing.
[26] UMBEL is a reference of 28,000 concepts (classes and relationships) derived from the Cyc knowledge base. The reference concepts of UMBEL are mapped to Wikipedia, DBpedia ontology classes, GeoNames and PROTON. UMBEL is designed to facilitate the organization, linkage and presentation of heterogeneous datasets and information. It is meant to lower the time, effort and complexity of developing, maintaining and using ontologies, and aligning them to other content. See further the UMBEL Specifications (including Annexes A – H), Vocabulary and RefConcepts.
[27] Cyc is an artificial intelligence project that has assembled a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of providing human-like reasoning. The OpenCyc version 3.0 contains nearly 200,000 terms and millions of relationship assertions. The project began in 1984; by 2010 an estimated 1000 person-years had been invested in its development.
[28] This image and more related to the general question of interoperability in relation to a reference structure is provided in M.K. Bergman, 2007, “Where are the Road Signs for the Structured Web?,” AI3:::Adaptive Information blog, May 29, 2007; see http://www.mkbergman.com/375/where-are-the-road-signs-for-the-structured-web/.
[29] GeoNames is a geographical database available for free download under a Creative Commons Attribution license. It contains over 10 million geographical names and consists of 7.5 million unique features, of which 2.8 million are populated places. All features are categorized into one out of nine feature classes and further subcategorized into one out of 645 feature codes. Given the importance of locational information, GeoNames is a natural complement to the gold standards mentioned herein. See further its Web site, which also showcases a nifty browser of mappings to Wikipedia.

Posted by AI3's author, Mike Bergman Posted on February 28, 2011 at 12:07 am in Semantic Web, Structured Web, UMBEL | Comments (2)
The URI link reference to this post is: http://www.mkbergman.com/947/in-search-of-gold-standards-for-the-semantic-web/
The URI to trackback this post is: http://www.mkbergman.com/947/in-search-of-gold-standards-for-the-semantic-web/trackback/