Ontotext is the developer of OWLIM, a highly scalable semantic database engine, and KIM, a popular semantic annotation and search platform. Its FactForge and LinkedLifeData services provide the largest curated and interoperable linked data platforms over which inferencing and reasoning may be applied. Some of Ontotext’s major clients include AstraZeneca, the BBC and Korea Telecom. Its major professional services center on its own technologies, plus text mining and semantic annotation. Ontotext has notable and longstanding technical partnerships, such as with the GATE team and many of the other leading technologies and companies in the semantic Web space. We are very pleased to join forces with them.
Our partnership was formed to address some of the key semantic ‘gaps’ in the semantic Web. The partnership will focus on development of the next generation of the UMBEL and PROTON ontologies, as well as tools and applications based on them.
Thanks to the efforts of the W3C (World Wide Web Consortium), we now have the techniques, languages and standards to deliver the “web” portion of the semantic Web. But, the practical “semantics” for actually effecting the semantic Web have heretofore been lacking. Early experience with linked data has exposed many poor practices. The lack of approximate linking predicates and reference concepts undercuts our ability to achieve meaningful semantic interoperability.
In forming our partnership, Ontotext and SD will focus attention on this semantics “gap”. We will also be aggressively seeking additional partners and players to join with us in this challenge. My recent outreach to DCMI (the Dublin Core Metadata Initiative) is one example of this commitment; we will be talking with others in the coming weeks.
Linked data and the prospects of the semantic Web are at a critical juncture. While we have seen much growth in the release of linked data, we are still not seeing much uptake (other than some curated pockets). Linkages between datasets are still disappointingly low, and quality of linkages is an issue. The time has come to stop simply shoveling more triples over the fence.
The combination of UMBEL and PROTON offers a powerful blend to address these weaknesses. Our partnership will first provide a logical mapping and consolidated framework based on the two core ontologies. These will be made available as standard ontologies and via open source semantic annotation tools.
UMBEL (Upper Mapping and Binding Exchange Layer) is both a vocabulary for building domain ontologies and a framework of more than 20,000 reference concepts. The UMBEL reference ontology is used to tag information and map existing schema in order to help link content and promote interoperability. UMBEL’s reference concepts and structure are a direct subset extraction of the Cyc knowledge base.
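As a brief illustration of tagging with UMBEL, consider the following Turtle sketch. It assumes the umbel:isAbout predicate and the umbel.org namespaces (both of which have varied across UMBEL versions), and the document URI and reference concept shown are hypothetical:

    @prefix umbel:   <http://umbel.org/umbel#> .
    @prefix rc:      <http://umbel.org/umbel/rc/> .
    @prefix dcterms: <http://purl.org/dc/terms/> .

    # Hypothetical document tagged with an UMBEL reference concept
    <http://example.com/doc/market-report>
        dcterms:title "Market Report" ;
        umbel:isAbout rc:Economy .    # an "aboutness" link, not an identity claim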
The PROTON ontology (PROTo ONtology) is a basic upper-level ontology that contains about 300 classes and 100 properties, providing coverage of the general concepts necessary for a wide range of tasks, including semantic annotation, indexing, and retrieval of documents. It is domain independent with coverage suitable to encompass any domain or named entity.
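The consolidated framework mentioned above amounts to logical mappings between the two ontologies. A minimal sketch of what such a mapping could look like follows; the class names, namespaces and choice of mapping properties are my own illustrative assumptions, not the partnership’s published alignments:

    @prefix ptop: <http://proton.semanticweb.org/2005/04/protont#> .
    @prefix rc:   <http://umbel.org/umbel/rc/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .

    # Illustrative alignments of PROTON upper-level classes to UMBEL
    # reference concepts; actual mappings await the joint release.
    ptop:Organization owl:equivalentClass rc:Organization .
    ptop:Person       rdfs:subClassOf     rc:Person .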
This consolidated framework will then be applied to organize and provide a coherent categorization of the Wikipedia online encyclopedia. One expression of this result will be a new version of Ontotext’s FactForge, already the largest and best performing reasoning engine leveraging linked data. This new version will allow easy access to the most central Linking Open Data (LOD) datasets such as DBpedia, Freebase, and Geonames, through the vocabularies of UMBEL and PROTON. Additional applications in linked data mining and general tagging of standard Web content are also contemplated by the partnership.
Ontotext’s proven reasoning technologies and ability to host extremely large knowledge bases with great performance are tremendous boons to the next iteration of UMBEL. We have been seeking large-scale coherency testing of UMBEL for some time and Ontotext is the perfect answer.
Ontotext’s CEO, Atanas Kiryakov, indicated their interest in UMBEL stemmed from what they saw as some stumbling blocks with linked data while developing FactForge. “The growth and maturation of linked data will require credible ways to orient and annotate the data,” said Kiryakov. “UMBEL is the right scope of comprehensiveness and size to use as one foundation for this,” he said. Ontotext is also the original developer and current maintainer of PROTON, which will also contribute in this role.
The efforts of the partnership will first be seen with release of UMBEL v. 0.80 in the next couple of weeks. This update revises many aspects of the ontology based on two years of applied experience and updates it to OWL 2. Then, this basis will be used for broader mappings and linkages to Wikipedia. Those next mappings are earmarked for UMBEL version 1.00, slated for release by the end of the year. All of these planned efforts will be released as open source.
Among other intended uses, PROTON, UMBEL and FactForge form a layered reference data structure that will be used for data integration within the European Union research project RENDER. The large-scale RENDER project aims to integrate diverse methods in the ways Web information is selected, ranked, aggregated, presented and used.
Beyond that, further relationships and partnerships are being actively sought with players serious about interoperable, high-quality data on the semantic Web. We welcome inquiries or outreach.
I was very pleased to have presented one of the keynotes at the just-concluded DC-2010, DCMI’s International Conference on Dublin Core and Metadata Applications, in Pittsburgh, PA. DCMI (Dublin Core Metadata Initiative) is an open organization engaged in the development of interoperable metadata standards that support a broad range of purposes and business models.
I had four main points to share with the audience:
You can view my presentation below:
As I had anticipated, I had a blast at the conference and walked away much impressed with the passion and intelligence of its dedicated members. Though my first, this will certainly not be my last DCMI conference. Thanks to all for the invite, the great conversations and the kind welcome!
I will be speaking this coming Friday, Oct. 22, at DC-2010, DCMI’s International Conference on Dublin Core and Metadata Applications, in Pittsburgh, PA. DCMI (Dublin Core Metadata Initiative) is an open organization engaged in the development of interoperable metadata standards that support a broad range of purposes and business models.
DCMI, the developer and maintainer of Dublin Core and many affiliated metadata initiatives, is celebrating its 15th year. The organization will be taking both a retrospective and a prospective look at its accomplishments and next initiatives.
The other keynote speaker is Dr. Stuart Weibel, a former senior research scientist at OCLC, who with OCLC was instrumental in first launching and then managing the DC initiative. I expect we will hear much from Stuart about his perspective on the initiative’s formation and the directions it needs to take next.
DC-2010, which runs from Oct. 20 to 22, is also being held in conjunction with the ASIS&T conference, which follows from Oct. 22 to Oct. 27. ASIS&T (the American Society for Information Science and Technology) is the leading society for information professionals, with more than 4,000 members.
I’m looking forward to meeting and speaking with many individuals I have admired in both of these organizations.
DCMI, in my view, is the essential complementary organization to the W3C for providing the authority and leadership for many needed aspects to make linked data and the semantic Web truly effective. I very much appreciate the Initiative’s outreach to me to share some thoughts on possibly useful contributions by DCMI over the next 15 years. It should be a blast!
Since its initial release, Structured Dynamics’ open source Open Semantic Framework (OSF) has continued to expand its capabilities and add refinements. The OSF and its various contributing open source software modules are now also fully documented and explained on the OSF TechWiki, from which this current article is drawn.
With the kind sponsorship of one of our clients, we were commissioned to create “dashboards.” Dashboards are currently all the rage. A dashboard presents a composite view of data and information, generally involving multiple widgets or individual displays. This, for example, is a dashboard in the context of our client:
But the client’s request did not end there. What they wanted was a general capability to make dashboards — a dashboard-making machine, if you will — because of their desire to provide an information portal that is constantly changing and responsive to current topics and needs.
Note: the example screen above and those that follow are illustrative. They may be:
In most instances, use of the Workbench is reserved for administrators and curators, who use it to create persistent Dashboard views that are what is ultimately shared with end users. However, that is also a matter of policy and design. There is no technical reason why the Workbench could not be exposed to standard users.
What follows, then, is part of the user manual for working with the Workbench and Dashboards. It assumes you already know much about how Drupal and its conStruct OSF modules work.
From within a Drupal instance, you access the Workbench via either the Admin or Tools links. Then, you will see the Workbench provided as a distinct option:
The Workbench is the environment (presently expressed as a conStruct Drupal module) for creating Dashboard views. As such, if used, it is one of the more complicated components in an Open Semantic Framework instance. The Workbench consists of three panels and a main menu.
The Workbench comprises three main panels: the Filter Panel (Item #1), the Record Selector Panel (Item #2) and the Dashboard Panel (Item #3):
Selections made in any one of the panels are reflected and highlighted in all other panels.
These three main panels can be moved or re-sized anywhere around the screen.
The Filter Panel (Item #1) is for making broad “slice-and-dice” selections across the structure. It has three sub-groupings within it:
The Record Selector Panel (Item #2 in the main screen above), based on the filter restrictions, is for selecting the individual attributes and records to display; it works and operates like a spreadsheet (data grid).
Based on the selections in the previous two panels, the Dashboard Panel (Item #3 in the main screen above) shows the specific data visualization component suited to the display profile of the attribute type (map, story, graph, explorer, etc.). It may also be used to display similar comparisons for identified “sticky” records (say, national or state- or province-level data).
The Workbench main menu (Item #4 on the screen shot above) has these options:
The main purpose of the Workbench, of course, is to select and filter data for display with various widgets. Each of the three main panels participates in this function.
Filtering occurs via the Filter Panel, with its possible selections of datasets, kinds or attributes:
By default, if no items are selected in one of these sub-groups, then all items are deemed to be selected. However, restricting by datasets may filter out otherwise available kinds or attributes, and restricting by kind may filter out otherwise available attributes.
Records AND display attributes are selected via the Record Selector Panel. First, let’s look at some record selections:
If there are restrictions applied via the Filter Panel, then the number of available attributes shown in the Record Selector Panel may be reduced.
Because the actual data display widgets are limited in size, there is a maximum of 50 records that can be shown in the Record Selector Panel at any given time.
Attribute selections are made by checking the column item’s checkbox; this causes a new display (sub-panel) to be spawned in the Dashboard Panel (see next).
Record selections are made by clicking anywhere on a record row. Multiple selections can be made through the standard continuous range select (via the Shift key) or discontinuous range select of multiple, individual records (via the Ctrl key). Selections as made add records to all of the sub-panel displays in the Dashboard Panel.
Selection of an attribute column in the Record Selector Panel causes a new display, or widget, to appear as a sub-panel within the Dashboard Panel. If a particular attribute or record type can be displayed with more than one display type, that is selected via the dropdown list at the lower left of each display sub-panel.
Sub-panels are created in the order of the attributes (data) selected in the Record Selector Panel, from left-to-right, top-to-bottom. In the figure above, there are three sub-panels in a 1 x 3 configuration.
But, by adding another attribute, we now add a fourth sub-panel and the overall display shifts to a 2 x 2 configuration:
Each sub-panel is auto-sized as it is added to the canvas. There is a practical limit of about six (6) sub-panels to any given Dashboard view.
Each sub-panel may be drag-and-dropped to an alternate location within the panel.
Once embedded in a Web page, the actual sub-panel and panel sizes and dimensions for a given Dashboard view may be re-set.
One of the main menu options is Record Selection Mode. By default, the standard selection mode is list select. Under this mode, all records selected in the Record Selector Panel are added to all Dashboard sub-panels. This is the best initial mode, since it is fast to create similar selections across all display widgets. This option is selected when the Workbench is first accessed, as shown by this menu item:
However, you may also invoke drag-and-drop mode, also selected by this same menu:
Under drag-and-drop, an individual record may be selected in the Record Selector Panel and then dragged to a specific sub-panel (display widget) in the Dashboard panel. This technique is useful when, say, you want to tailor a specific sub-panel view or provide a comparative baseline to various sub-panels.
Whichever selection mode is currently active is reported back in the title header of the Record Selector Panel. You may also switch back-and-forth between selection modes at any time.
The Dashboard main menu option is where you use and re-use Dashboard views. This menu option allows you to:
A Dashboard view with its multiple sub-panels and tabs (see below) may have taken some thought and time to design. For this reason, you may want to re-use it and you may want to protect your work.
When saving a Dashboard view, you are prompted for a name, shown existing views that you might overwrite, and are asked for a password (that is later required to do any modifications) as this popup screen shows:
The same dialog above shows how easy it is to also re-use Dashboard views. All existing saved views are shown in the dialog box. The first obvious use is to allow existing views to be modified or updated.
Another interesting possibility is to use this design for basic view “templates” that get set up, then re-used for specific records or types. In this manner a template baseline can be established that is then called up multiple times for specific tailoring.
Still another advantage of re-use is to create a standard name for a Dashboard view, say “Main Page”, that then gets embedded on the main page of your application (using the “embed” procedures noted below). Because the hosting Web page is configured to accept this named view, you can actually change the specifics of the view under the Workbench — conceivably including quite different records or widget displays — and then save it for automatic re-loading on the main page.
Another series of menu options from the Dashboard menu relate to “tabs”. Tabs are additional sub-panels nested under a Dashboard view. As noted before, an individual panel in a Dashboard view is practically limited to six to eight sub-panels; with tabs, this can be expanded substantially.
To begin the process of adding a tab you invoke the new tab option under the Dashboard menu:
Once named, the tab then appears as a tab button on the Dashboard view and a blank canvas is presented for adding more sub-panels (as described above):
Once saved, these tabs also get included with the persistent Dashboard view and can also be embedded in other Web pages.
Once a Dashboard view is created, there are two ways to use or embed it: generate HTML code or treat it as a Drupal node.
You invoke the generate code option from the Dashboard menu using the Get Code choice:
A “Get HTML Code to Embed” window will appear in the workbench.
You have to provide two pieces of information before you can generate the HTML code:
The Base URL is the URL where the Portable Control Application is located on your Web server. However, you can leave this field empty if the HTML page you want to generate is in the same folder as the PortableControlApplication.swf file.
Once these fields are completed, you can click the “Generate HTML Code” button to generate the HTML code to embed in your HTML page.
The HTML code generation tool will generate code in two places within this popup window:
The HTML code that appears in the first section has to be copied and pasted into the <head></head> section of your HTML file. The HTML code that appears in the second section has to be copied and pasted into the <body></body> section of your HTML file.
Once you have copied and pasted these codes into the two sections of your HTML page, save it, and then load the resulting Web page into your browser. If you have properly filled in all fields above, you will then see the persistent Dashboard view embedded in the page.
The Dashboard view is displayed within an HTML <div></div> container. This container determines the size of the actual Dashboard display within the Web page, and may also carry any other HTML code or styling you care to insert. We suggest that what is generated in the second text area above be added within such a <div></div> tag; you may then place that <div></div> anywhere you want in your Web page layout. Here is an example of such a <div></div> container:
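(The snippet below is a reconstructed sketch for illustration; the id, dimensions and styling are placeholders, and your generated embed code will differ.)

    <!-- placeholder values: adjust the id, width and height to your layout -->
    <div id="dashboard-view" style="width: 800px; height: 600px;">
      <!-- paste the generated embed code from the second text area here -->
    </div>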
One of the advantages of piggybacking on Drupal is the ability to leverage its native and extended capabilities. A core extension to Drupal is content types via CCK, which can be managed, invoked and themed separately.
We have set up a standard Drupal content (node) type called Dashboard Views. Thus, if you follow the separate set of procedures to embed your Dashboard view in this manner, you can:
We have only just begun to explore the possibilities of the combined Dashboard-content type design.
And, so, the result of the steps above is to create the same static Dashboard view that began this article:
This new capability will be released as open source after the client first presents it publicly, now scheduled for the first week of November. Besides general upgrades across the entire Open Semantic Framework stack, that same release will also include a massive update to the Concept Explorer, which we will cover in a later article.
We have to again thank Richard Cyganiak and Anja Jentzsch — as well as all of the authors and publishers of linked open datasets — for the recent update to the linked data cloud diagram. Not only have we seen admirable growth since the last update of the diagram one year ago, but the datasets themselves are now being registered and updated with standard metadata on the CKAN service. Our own UMBEL dataset of reference subject concepts is one of those listed.
The linked open data (LOD) “cloud” diagram and its supporting statistics and archived versions are also being maintained on the http://lod-cloud.net site. This resource, plus the CKAN site and the linked data site maintained by Tom Heath, provide really excellent starting points for those interested in learning more about linked open data. (Structured Dynamics also provides its own FAQ sheet with specific reference to linked data in the enterprise, including both open and proprietary data.)
As an approach deserving its own name, the practice of linked data is about three years old. The datasets now registered as contributing to this cloud are shown by this diagram, last updated about a week ago:
LOD was initially catalyzed by DBpedia and the formation of the Linked Open Data project by the W3C. In the LOD’s first listing in February 2007, four datasets were included with about 40 million total triples. The first LOD cloud diagram was published three years ago (upper left figure below), with 25 datasets consisting of over two billion RDF triples and two million RDF links. By the time of last week’s update, those figures had grown to 203 datasets (qualified from the 215 submitted) consisting of over 25 billion RDF triples and 395 million RDF links.
This growth in the LOD cloud over the past three years is shown by these archived diagrams from the LOD cloud site:
With growth has come more systematization and standard metadata. CKAN (the Comprehensive Knowledge Archive Network) is especially noteworthy for providing a central registry and descriptive metadata for the contributing datasets, under the lodcloud group name.
This growth and increase in visibility is also being backed by a growing advocacy community, which initially consisted of academics but has broadened to include open government advocates and some publishers like the NY Times and the BBC. But, with the exception of some notable sites, which I think also help us understand key success factors, there is a gnawing sense that linked data is not yet living up to its promise and advocacy. Let’s look at this from two perspectives: growth and usage.
While I find the visible growth in the LOD cloud heartening, I do have some questions:
Perhaps one of these days I will spend some time researching these questions myself. If others have benchmarks or statistics, I’d love to see them.
Such data would be helpful to put linked data and its uptake in context. My general sense is that while linked data is gaining visible traction, it is still not anywhere close to living up to its promise.
I am much more troubled by the lack of actual use of linked data. To my knowledge, despite the publication of endpoints and the availability of central access points like Openlink Software’s lod.openlinksw.com, there is no notable service with any traction that is using broad connections across the LOD cloud.
Rather, for anything beyond a single dataset (such as DBpedia), the services that do have usefulness and traction are those that are limited and curated, often with a community focus. Examples of these notable services include:
These observations lead to some questions:
We’re certainly not the first to raise these questions about linked data. Some point to a need for more tools. Recently, others have looked to more widespread use of RDFa (RDF embedded in Web pages) as a possible enabler. While these may be helpful, I personally do not see either of these factors as the root cause of the problems.
Readers of this blog well know that I have been beating the tom-toms for some time regarding what I see as key gaps in linked data practice. The update of the LOD cloud diagram and my upcoming keynote at the Dublin Core (DCMI) DC-2010 conference in Pittsburgh have caused me to try to better organize my thoughts.
I see four challenges facing the linked data practice. These four problems — the four Ps — are predicates, proximity, provision and provenance. Let me explain each of these in turn.
For some time, the quality and use of linking predicates with linked data have been simplistic and naïve. This problem is a classic expression of Maslow’s hammer: “if all you have is a hammer, everything looks like a nail.” The most abused linking property (predicate) in this regard is owl:sameAs.
In order to make links or connections with other data, it is essential to understand the nature of the subject “thing” at hand. There is much confusion within linked data about actual “things,” references to “things,” and just what the nature of a “thing” is. Quite frequently, the use or reference or characterization of “things” between different datasets should not be asserted as exact, but only as approximate to some degree.
So, we might be referring to something that is about, or similar to, or approximate with another thing, or some other qualified linkage. Yet the actual semantics of the owl:sameAs predicate are quite exact, with some of the strongest entailments (what the semantics imply) defined in OWL. For sameAs to be applied correctly, every assertion about the linked object in one dataset must be believed to be true for every assertion about that linked object in the matching dataset; in other words, the two instances are being asserted as identical resources.
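A minimal Turtle sketch makes the entailment concrete (all URIs and values here are hypothetical):

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
    @prefix dsA: <http://example.org/datasetA/> .
    @prefix dsB: <http://example.org/datasetB/> .

    dsA:Berlin geo:lat "52.52" .    # dataset A records a correct latitude
    dsB:Berlin geo:lat "52.30" .    # dataset B records a mistaken one

    # Asserting identity merges every assertion on both resources:
    dsA:Berlin owl:sameAs dsB:Berlin .
    # A reasoner now holds both latitudes as true of one merged individual;
    # the error is inherited along with everything else, and cannot be scoped.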
One of the most vocal advocates of linked data is Kingsley Idehen, and he perpetuates the misuse of this predicate in a recent mailing list thread. The question had been raised about a geographical location in one dataset that mistakenly put the target object into the middle of a lake. To address this problem, Kingsley recommended:
You have two data spaces: [AAA] and [BBB], you should make a third — yours, which I think you have via [CCC].
The point here is not to pick on Kingsley, nor even to solely single out owl:sameAs as a source of this problem of linking predicates. After all, it is reasonable to want to relate two objects to one another that are mostly (and putatively) about the same thing. So we grab the best known predicate at hand.
The real and broader issue with linked data at present is, first, that actual linking predicates are often not used at all; and, second, that when they are used, their semantics are too often wrong or misleading.
We do not, for example, have sufficient and authoritative linking predicates to deal with these “sort of” conditions. It is a key semantic gap in the linked data vocabulary at present. Just as SKOS was developed as a generalized vocabulary for modeling taxonomies and simple knowledge structures, a similar vocabulary is needed for predicates that reflect real-world usage for linking data objects and datasets with one another.
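Some qualified predicates do already exist, and hint at the style such a vocabulary needs. A hedged sketch follows; note that the SKOS mapping properties are defined for concepts rather than arbitrary instances, and umbel:isLike is UMBEL’s own proposal, so treat these as indicative rather than prescriptive:

    @prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
    @prefix umbel: <http://umbel.org/umbel#> .
    @prefix dsA:   <http://example.org/datasetA/> .
    @prefix dsB:   <http://example.org/datasetB/> .

    # Qualified links that trigger none of the owl:sameAs entailments:
    dsA:Berlin skos:closeMatch dsB:Berlin .   # close, but not asserted identical
    dsA:Berlin umbel:isLike    dsB:Berlin .   # likely alike, asserted approximately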
The idea, of course, with linked data resides in the term linked. And linkage means how we represent the relation between objects in different datasets. Done right, this is the beauty and power of linked data and offers us the prospect of federating information across disparate sources on the Web.
For this vision, then, to actually work, links need to be asserted and they need to be asserted correctly. If they are not, then all we are doing is shoveling triples over the fence.
Going back to our first efforts with UMBEL, a vocabulary of about 20,000 subject concepts based on the Cyc knowledge base, we have argued the importance of using well-defined reference concepts as a way to provide “aboutness” and reference hooks for related information on the Web. These reference points become like stars in constellations, helping to guide our navigation across the sea of human knowledge.
While we have put forward UMBEL as one means to provide these fixed references, the real point has been to have accepted references of any manner. These may use UMBEL, alternatives to UMBEL, or multiples thereof. Without some fixity, preferably of a coherent nature, it is difficult to know if we are sailing east or west. And, frankly, there can and should be multiple such reference structures, including specific ones for specific domains. Mappings can allow multiple such structures to be used in an overlapping manner depending on preference.
When one now looks at the LOD cloud and its constituent datasets, it should be clear that there are many more potential cross-dataset linkages resident in the data than the diagram shows. Reference concepts with appropriate linking predicates are the means by which the relationships and richness of these potential connections can be drawn out of the constituent data.
The use of reference vocabularies is rejected by many in the linked data community on what we believe to be misplaced ideological or philosophical grounds. Saying that something is “about” Topic A (or even Topics B and C in different reference vocabularies) does not limit freedom nor make some sort of “ontological commitment”. There is also no reason why free-form tagging systems (folksonomies) cannot be mapped over time to one or many reference structures to help promote interoperability. Like any language, our data languages can benefit from one or more dictionaries of nouns upon which we can agree.
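Such a mapping can be as simple as one triple per tag. In this hypothetical sketch (the tag URI and the reference concept are both invented for illustration), the taggers themselves committed to nothing:

    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix tag:  <http://example.org/tags/> .
    @prefix rc:   <http://umbel.org/umbel/rc/> .

    # A free-form tag mapped, after the fact, to a reference concept
    tag:big-pharma skos:closeMatch rc:PharmaceuticalCompany .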
Linked data practitioners need to decide whether their end goal is actual data interoperability and use, or simply publishing triples to run up the score.
We somewhat controversially questioned the basis of how some linked data was being published in an article late last year, When Linked Data Rules Fail. Amongst other issues raised in the article, one involved publishing large numbers of government datasets without any schema, definitions or even data labels for numerically IDed attributes. We stated in part:
Some of these problems have now been fixed in the subject datasets, but in this circumstance and others we still see far too many instances within the linked data community of terms without definitions, records without human-readable labels, and none of the other information by which a user of the data may gauge its meaning, interpretation or semantics. Shame on these publishers.
Really, in the end, the provision of useful information comes down to the need to answer a simple question: Link what?
The what is an essential component to staging linked data for actual use and interoperability. Without it, there is no link in linked data.
There are two common threads in the earlier problems. One, semantics matter, because after all that is the arena in which linked data operates. And, second, some entities need to exert the quality control, completeness and consistency that actually enables this information to be dependable.
Both of these threads intersect in the idea of provenance.
This assertion should not be surprising: the standard Web needed some consistent attention with respect to directories and search engines. It should be expected that linked data, or the Web of data, is no different, and perhaps is even more demanding.
When we look to those efforts that are presently getting traction in the linked data arena (with some examples above), we note that all of them have quality control and provenance at their core. I think we can also say that only individual datasets that themselves adhere to quality and consistency will even be considered for inclusion in these curated efforts.
The current circumstance of the semantic Web is that adequate languages and standards are now in place. We also see with linked data that techniques are now being worked out and understood for exposing usable data.
But what appears to be lacking are the semantics and reference metadata under which real use and interoperability take place. The W3C and its various projects have done an admirable job of putting the languages and standards in place and raising the awareness of the potential of linked data. We can now fortunately ask the question: What organizations have the authority to establish the actual vocabularies and semantics by which these standards can be used effectively?
When we look at the emerging and growing LOD cloud we see potential written with a capital P. If the problem areas discussed in this article — the contrasting four Ps — are not addressed, there is a real risk that the hard-earned momentum of linked data to date will dissipate. We need to see real consumption and real use of linked data for real problems in order for the momentum to be sustained.
Of the four Ps, I believe three of them require some authoritative leadership. The community of linked data needs to:
When we boil down all of the commentary above, a single question remains: Where will the semantic leadership emerge?