Posted: May 16, 2011

A Video Introduction to a New Online Ontology Editor and Manager

Structured Dynamics is pleased to unveil structOntology — its ontology manager application within the conStruct open source semantic technology suite. We are doing so via a video, which shows this exciting new app in action.

structOntology has been on our radar for more than two years. But, it was only in embracing the OWLAPI some eight months back that we finally saw our way clear to implementing the system.

The app, superbly developed by Fred Giasson, has many notable advantages — some of which are covered by the video — but two deserve specific attention: 1) the superior search function (if you have been using Protégé or similar, you will love the fact that this search indexes everything, courtesy of Solr); and 2) the availability of its functionality directly within the applications that are driven by the ontologies. Of course, there’s other cool stuff too:

(Video: an introduction to and demonstration of structOntology.)
(If you have trouble seeing this, here is the direct YouTube link or an alternate local Flash version if you cannot access YouTube.)

More information on structOntology will be forthcoming in the coming weeks. We will be releasing it as open source as part of the Open Semantic Framework by early summer.

Posted: April 25, 2011

Advances in How to Transfer Semantic Technologies to Enterprise Users

For some time, our mantra at Structured Dynamics has been, “We’re successful when we are not needed.” [1]

In support of this vision, we have been key developers of an entire stack of semantic technologies useful to the enterprise, the open semantic framework (OSF); we have formulated and contributed significant open source deployment guidance to the MIKE2.0 methodology for semantic technologies in the enterprise called Open SEAS; we have developed useful structured data standards and ontologies; and we have made massive numbers of free how-to documents and images available for download on our TechWiki. Today, we add further to these contributions with our workflows guidance. All of these pieces contribute to what we call the total open solution.

Prior documentation has described the overall architecture or layered approach of the open semantic framework (OSF). Those documents are useful, but they do not convey a practical sense of how the pieces fit together or how an OSF instance is developed and maintained.

This new summary overviews a series of seven different workflows for various aspects of developing and maintaining an OSF (based on Drupal) [2]. Each workflow section also cross-references other key documentation on the TechWiki, as well as points to possible tools that might be used for conducting each specific workflow.

Overview

Seven different workflows are described, as shown in the diagram below. Each of the workflows is color-coded and related to the other workflows. The basic interaction with an OSF instance tends to occur from left-to-right in the diagram, though the individual parts are not absolutely sequential. As each of the seven specific workflows is described below, it is keyed by the same color-coded portion of the overall workflow.

OSF Workflow

Each of the component workflows is itself described as a series of inter-relating activities or tasks.

Installation Workflow

Installation is mostly a one-time effort and proceeds on a more-or-less sequential basis. As various components of the stack are installed, they are then configured and tested for proper installation.

The installation guide is the governing document for this process, with quite detailed scripts and configuration tests to follow. The blue bubbles in the diagram represent the major open source software components of Virtuoso (RDF triple store), Solr (full-text search) and Drupal (content management system).

Install Workflow

Another portion of this workflow is to set up the tools for back-office access and management, such as PuTTY and WinSCP (among others).

Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Configure & Presentation Workflow

One of the most significant efforts in the overall OSF process is the configuration and theming of the host portal, generally based on Drupal.

The three major clusters of effort in this workflow are the design of the portal, including a determination of its intended functionality; the setting of the content structure (stubbing of the site map) for the portal; and determining user groups and access rights. Each of these, in turn, is dependent on one or more plug-in modules to the Drupal system.

Some of these modules are part of the conStruct series of OSF modules, and others are evaluated and drawn from the more than 8000 third-party plug-in modules to Drupal.

Configure Workflow

The Design aspect involves picking and then modifying a theme for the portal. These may start from one of the existing open source Drupal themes, including those more specifically recommended for OSF. If so, it will likely be necessary to make some minor layout modifications to the PHP code and some CSS (styling) changes. Theming (skinning) of the various semantic component widgets (see below) also occurs as part of this workflow.

The Content Structure aspect involves defining and then stubbing out placeholders for eventual content. Think of this step as creating a site map structure for the OSF site, including major Drupal definitions for blocks, Views and menus. Some of the entity types are derived from the named entity dictionaries used by a given project.

More complicated User assignments and groups are best handled through a module such as Drupal’s Organic Groups. In any event, defining user groups (such as anonymous, admins, curators, editors, etc.) is a necessary early step, though these may be changed or modified over time.

For site functionality, Modules must be evaluated and chosen to add to the core system. Some of these steps and their configuration settings are provided in the guidelines for setting up Drupal document.

None of the initial decisions “lock in” eventual design and functionality. These may be modified at any time moving forward.

Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Structured Data Workflow

Of course, a key aspect of any OSF instance is the access and management of structured data.

There are basically two paths for getting structured data into the system. The first, involving (generally) smaller datasets, is the manual conversion of the source data to one of the pre-configured OSF import formats of RDF, JSON, XML or CSV. These are based on the irON notation; a good case study for using spreadsheets is also available.
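To give a flavor of what this manual path can involve, here is a minimal, illustrative sketch (in Python with rdflib; it is not the irON tooling itself) that turns the rows of a spreadsheet-style CSV file into RDF triples ready for import. The file name, namespace and column names are all hypothetical.

    import csv
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/dataset/")     # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)

    # Each CSV row becomes one instance record; column headers become predicates.
    with open("indicators.csv", newline="") as f:     # hypothetical input file
        for row in csv.DictReader(f):
            record = EX[row["id"]]
            g.add((record, RDF.type, EX.Indicator))
            for column, value in row.items():
                if column != "id" and value:
                    g.add((record, EX[column], Literal(value)))

    g.serialize("indicators.rdf", format="xml")       # RDF/XML, ready for import

In practice the irON serializations carry additional structure, so consult the irON documentation for the real import formats; the point here is simply that the manual path is a modest scripting exercise.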

The second path (bottom branch) is the conversion of internal structured data, often from a relational data store. Various converters and templates are available for these transformations. One excellent tool is FME from Safe Software (used in the example shown, which draws from a spatial data infrastructure (SDI) data store), though a very large number of options exist for extract, transform and load.

In the latter case, procedures for polling for updates, triggering notice of updates, and extracting only the deltas for the specific information that has changed can help reduce network traffic and upload/conversion/indexing times.
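As a rough sketch of what such delta extraction can look like (illustrative only; not our actual converters or an FME workflow, and the database, table and column names are hypothetical):

    import sqlite3

    # Hypothetical source table with a last_modified timestamp column; any
    # DB-API driver could stand in for sqlite3 against a production RDBMS.
    def extract_deltas(db_path, since):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT * FROM indicators WHERE last_modified > ?", (since,)
        ).fetchall()
        conn.close()
        return [dict(r) for r in rows]

    # Poll on a schedule and push only the changed records downstream for
    # conversion and (re)indexing, instead of re-exporting the whole dataset.
    changed = extract_deltas("source.db", "2011-04-01T00:00:00")
    print(len(changed), "records changed since the last poll")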

Structured Data Workflow
Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Content Workflow

The structured data from the prior workflow process is then matched with the remaining necessary content for the site. This content may be of any form and media (since all are supported by various Drupal modules), but, in general, the major emphasis is on text content.

Existing text content may be imported to the portal or new content can be added via various WYSIWYG graphical editors for use within Drupal. (The excellent Wysiwyg Drupal module provides an access point to a variety of off-the-shelf, free WYSIWYG editors; we generally use TinyMCE, but multiple editors can also be installed simultaneously.)

The intent of this workflow component is to complete content entry for the stubs created earlier during the configuration phase. It is also the component used for ongoing content additions to the site.

Content Workflow

Content tagged by the scones tagger is tagged on the basis of the concepts in the domain ontology (see below) and the named entities (as contained in “dictionaries”) used by a given project. Once tagged, this information can also now be related to the other structured data in the system.
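To convey the basic idea of dictionary-based tagging (this sketch is illustrative only, not the scones code; the labels and URIs are invented), a minimal tagger simply matches ontology concept labels and named entity dictionary entries against the text:

    import re

    # Hypothetical label-to-URI lookup, as might be extracted from a domain
    # ontology plus a project's named entity "dictionaries".
    LABELS = {
        "community garden": "http://example.org/ontology/CommunityGarden",
        "riverside clinic": "http://example.org/data/riverside_clinic",
    }

    def tag(text):
        """Return (surface form, concept or entity URI) pairs found in the text."""
        found = []
        for label, uri in LABELS.items():
            if re.search(r"\b" + re.escape(label) + r"\b", text, re.IGNORECASE):
                found.append((label, uri))
        return found

    print(tag("The new community garden sits two blocks from Riverside Clinic."))

A production tagger must also deal with morphology, ambiguity and scoring, but the output is the same in spirit: text annotated with the URIs of matched concepts and entities, which is what lets it join the rest of the structured data.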

Once all of this various content is entered into the system, it is then available for access and manipulation by the various conStruct modules (see figure above) and semantic component widgets (see below).

Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Ontologies Workflow

Though the next flowchart below appears rather complicated, there are really only three tasks that most OSF administrators need worry about with respect to ontologies:

  1. Adding a concept to the domain ontology (a class) and setting its relationships to other concepts
  2. Adding a dataset attribute (data characteristic) for various dataset records, or
  3. Adding or changing an annotation for either of these things, such as the labels or descriptions of the thing.

In actuality, of course, editing, modifying or deleting existing information is also important, but these are simpler subsets of the basic add (“create”) activities and their user interfaces.

The OSF interface provides three clean user interfaces to these three basic activities [3].

These basic activities may be applied to the three major governing ontologies in any OSF installation:

  • The domain ontology, which captures the conceptual description of the instance’s domain space
  • The semantic components ontology (SCO), which sets what widgets may display what kinds of data, and
  • irON for the instance record attributes and metadata (annotations).

All of the OSF ontology tools work off of the OWLAPI as the intermediary access point. The ontologies themselves are indexed as structured data (RDF with Virtuoso) or full text (Solr) for various search, retrieval and reasoning activities.
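To make the three basic activities above concrete, here is a minimal sketch of the same kinds of edits expressed in Python with rdflib (OSF itself works through the Java OWLAPI, so this is a parallel illustration rather than the actual tool code); the domain namespace, class and property names are hypothetical.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    DOMAIN = Namespace("http://example.org/ontology/")   # hypothetical domain ontology

    g = Graph()
    g.parse("domain-ontology.owl", format="xml")          # hypothetical local copy

    # 1. Add a concept (class) and relate it to an existing concept
    g.add((DOMAIN.CommunityGarden, RDF.type, OWL.Class))
    g.add((DOMAIN.CommunityGarden, RDFS.subClassOf, DOMAIN.PublicSpace))

    # 2. Add a dataset attribute (data characteristic) for instance records
    g.add((DOMAIN.plotCount, RDF.type, OWL.DatatypeProperty))
    g.add((DOMAIN.plotCount, RDFS.domain, DOMAIN.CommunityGarden))

    # 3. Add or change annotations (labels, descriptions)
    g.add((DOMAIN.CommunityGarden, RDFS.label, Literal("Community garden", lang="en")))
    g.add((DOMAIN.CommunityGarden, RDFS.comment,
           Literal("A shared plot of land gardened collectively.", lang="en")))

    g.serialize("domain-ontology.owl", format="xml")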

Ontologies Workflow

Because of the central use of the OWLAPI, it is also possible to use the Protégé editor/IDE environment against the ontologies, which also provides reasoners and consistency checking.

Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Filter & Select Workflow

The filter and select activities are driven by user interaction, with no additional admin tools required. This workflow is actually the culmination of all of the previous sequences in that it exposes the structured data to users, enables them to slice-and-dice it, and then to view it with a choice of relevant widgets (semantic components).

For example, see this animation:

Animated Filtering and Selection Workflow

Considerably more detail and explanation is available for these semantic components.

Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Maintenance Workflow

The ongoing maintenance of an OSF instance is mostly a standard Drupal activity. Major activities that may occur include moderating comments; rotating or adding new content; managing users; and continued documentation of the site for internal tech transfer and training. If the portal embraces other aspects of community engagement (social media), these need to be handled as part of this workflow as well.

All aspects of the site and its constituent data may be changed or added to at any time.

Maintenance Workflow
Click here to see the tools associated with this workflow sequence, as described in the TechWiki desktop tools document.

Moving from Here

When first introduced in our three-part series, we noted the interlocking pieces that constituted the total open solution of the open semantic framework (OSF). We also made the point — unfortunately still true today — that the relative maturity and completeness of all of these components still does not allow us to achieve fully, “We’re successful when we are not needed.”

As a small firm that is committed to self-funding via revenues, Structured Dynamics is only able to add to its stable of open source software and to develop methodologies and provide documentation based on our client support. Yet, despite our smallness, the superb support of our clients has enabled us to aggressively and rapidly add to all four components of this total open solution. This newest series of ongoing workflow documents (plus some very significant expansions and refinements of the OSF code base) is merely the latest example of this dynamic.

Through judicious picking of clients (and vice versa), and our insistence that new work and documentation be open sourced because it itself has benefitted from prior open source, we and our client partners have been making steady progress toward this vision of enterprises being able to adopt and install semantic solutions on their own. Inch-by-inch we are getting there.

The status of our vision today is that we are still needed in most cases to help formulate the implementation plan and then guide the initial set-up and configuration of the OSF. This support typically includes ontology development, data conversion and overall component integration. While it is true that some parties have embraced the OSF code and documentation and are implementing solutions on their own, this still requires considerable commitment and knowledge and skills in semantic technologies.

The great news about today’s status is that — after initial set-up and configuration — we are now able to transfer the technology to the client and walk away. Tools, documentation, procedures and workflows are now adequate for the client to extend and maintain their OSF instance on their own. This great news includes a certification process and program for transferring the technology to client staff and assessing their proficiency in using it.

We have been completely open about our plans and our status. In our commitment to our vision of success, much work is still needed on the initial install and configure steps and on the entire area of ontology creation, extension and mapping [4]. We are working hard to bridge these gaps. We welcome additional partners that share with us the vision of complete, turnkey frameworks — including all aspects of total open solutions. Inch-by-inch we are approaching the realization of a vision that will fundamentally change how every enterprise can leverage its existing information assets to deliver competitive advantage and greater value for all stakeholders. You are welcome aboard!


[1] This has been the thematic message on Structured Dynamics‘ Web site for at least two years. The basic idea is to look at open source semantic technologies from the perspective of the enterprise customer, and then to deliver all necessary pieces to enable that customer to install, deploy and maintain the OSF stack on its own. The sentiment has infused our overall approach to technology development, documentation, technology transfer and attention to methodologies.
[2] The first version of this article appeared as Workflow Perspectives on OSF on the OpenStructs TechWiki on April 19, 2011.
[3] The current release of OSF does not yet have these components included; they will be released to the open source SVNs by early summer.
[4] The best summary of the vision for where ontology development needs to head is provided by the Normative Landscape of Ontology Tools article on the TechWiki; see especially the second figure in that document.
Posted: November 15, 2010

UMBEL Vocabulary and Reference Concept Ontology: Significant Upgrades, Changes Based on Two Years of Use

Structured Dynamics and Ontotext are pleased to announce the latest release of UMBEL, version 0.80. It has been more than a year since the last update of UMBEL, and well past earlier announced targets for this upgrade. UMBEL was first publicly released as version 0.70 on July 16, 2008.

UMBEL (Upper Mapping and Binding Exchange Layer) has two roles. It is firstly a vocabulary for building reference ontologies to guide the interoperation of domain information. It is secondly a reference ontology in its own right that contains about 21,000 general reference concepts. With more than two years of practical experience with UMBEL, much has been learned.

This learning has now been reflected in five major changes to the system, embodying numerous minor changes. I summarize these major changes below. The formal release of UMBEL v. 0.80 is also being accompanied by a complete revamping and updating of the project’s Web site. I hope you will find these changes as compelling and exciting as we do.

In the broader context, it is probably best to view this release as but the interim first step of a two-step release sequence leading to UMBEL version 1.00. We are on track to release version 1.00 by the end of this year. This second step will include a complete mapping to the PROTON upper-level ontology and the re-organization and categorization of Wikipedia content into the UMBEL structure. We anticipate the pragmatic challenges in this massive effort will also inform some further refinements to UMBEL itself, which will also lead to further changes in its specification.

Nonetheless, UMBEL v. 0.80 does embody most of the language and structural changes anticipated over this evolution. It is fully ready for use and evaluation; it will, for example, be incorporated into a next version of FactForge. But, do be aware that the major revisions discussed herein are subject to further refinements as the efforts leading to version 1.00 culminate over the next few weeks.

Let’s now overview these major changes in UMBEL v. 0.80.

Major Change #1: Clarification of Dual Role

The genesis of UMBEL more than three years ago was the recognition that data interoperability on the semantic Web depended on shared reference concepts to link related content. We spent much effort to construct such a reference structure with about 21,000 concepts. That purpose remains.

But, the way in which we created this structure — its vocabulary — has also proven to have value in its own right. We have now applied the same basic approach used to construct the original UMBEL to multiple, specific domain ontologies. With use, it has become clear that the vocabulary for creating reference ontologies is on an equal footing with the reference concepts themselves.

With this understanding has come clarity of role and description of UMBEL. With version 0.80, we now have explicitly split and defined these roles and files.

The UMBEL Vocabulary

Thus, UMBEL’s first purpose is to provide a general vocabulary (the UMBEL Vocabulary) of classes and predicates for describing domain ontologies, with the specific aim of promoting interoperability with external datasets and domains. It can be used independently of the UMBEL Reference Concept Ontology.

The UMBEL Vocabulary recognizes that different sources of information have different contexts and different structures. A meaningful vocabulary is therefore necessary, one that can express potential relationships between two information sources with respect to their differences in structure and scope. By nature, these connections are not always exact. Means for expressing the “approximateness” of relationships are essential.

The vocabulary has been greatly simplified from earlier versions (see Major Change #2 below); it now defines two classes:

  • RefConcept
  • SuperType

These are explained further below. And, the vocabulary has 10 properties:

  • isAbout
  • isRelatedTo
  • isLike
  • hasMapping
  • hasCharacteristic
  • isCharacteristicOf
  • prefLabel
  • altLabel
  • hiddenLabel
  • definition

(Note, the latter four are also in SKOS; see [1].)

In addition, UMBEL re-uses certain properties from external vocabularies. These classes and properties are used to instantiate the UMBEL Reference Concept ontology (see next), and to link Reference Concepts to external ontology classes. For more detail on the vocabulary see Part I: Vocabulary Specification in the specifications.
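As a simple illustration of how the vocabulary is used in practice, the sketch below (Python with rdflib) types and labels a hypothetical external record and then ties it to a reference concept. The record, the particular RefConcept and the exact choice of predicate are illustrative assumptions only; the specifications give the normative namespaces and semantics.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Namespace URIs shown here are illustrative; see the specification.
    UMBEL = Namespace("http://umbel.org/umbel#")
    RC = Namespace("http://umbel.org/umbel/rc/")
    EX = Namespace("http://example.org/data/")

    g = Graph()

    # A hypothetical external record, typed and labeled in the usual way ...
    g.add((EX.riverside_clinic, RDF.type, EX.Clinic))
    g.add((EX.riverside_clinic, SKOS.prefLabel, Literal("Riverside Clinic", lang="en")))

    # ... and tied to a reference concept (a hypothetical RefConcept here) so
    # that it can interoperate with other data mapped to the same concept.
    g.add((EX.riverside_clinic, UMBEL.isAbout, RC.MedicalClinic))

    print(g.serialize(format="turtle"))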

The UMBEL Reference Concept Ontology

The second purpose of UMBEL is to provide a coherent framework of broad subjects and topics, the “reference concepts” or RefConcepts, expressed as the UMBEL Reference Concept Ontology. The RefConcepts act as binding nodes for mapping relevant Web-accessible content, also with the specific aim of promoting interoperability and to reason over a coherent reference structure and its linked resources. UMBEL presently has about 21,000 of these reference concepts drawn from the Cyc knowledge base, which are organized into more than 30 mostly disjoint SuperTypes (see Major Change #3).

The UMBEL Reference Concept Ontology is, in essence, a content graph of subject nodes related to one another via broader-than and narrower-than relations. In turn, these internal UMBEL RefConcepts may be related to external classes and individuals (instances and named entities) via a set of relational, equivalent, or alignment predicates (the UMBEL Vocabulary, see above).

The actual RefConcepts used are the least changed part of UMBEL from previous versions, and still have the same identifiers as prior versions. The Reference Concept Ontology now uses a recently updated release of the OpenCyc KB v3. Cycorp also added some additional mapping predicates in this release that allow items such as fields of study to be added to the structure. (Thanks, Cycorp!)

Here is a large-graph view of the 21,000 reference concepts in the ontology (click to expand; large file):

UMBEL Reference Concept Ontology

More detail on the RefConcepts is provided in Part II: Reference Concepts Specification of the full specifications.

Major Change #2: Reference Concepts and Predicate Simplification

Another set of major changes was the simplification and streamlining of the predicates and construction of the UMBEL Vocabulary [2]. Again, the specifications detail these changes, but the significant ones include:

  • Changed the name of ‘Subject Concepts’ (SubjectConcept, or SC) to ‘Reference Concepts’ (RefConcept, or RC). The umbel:SubjectConcept class has been deprecated, and the umbel:RefConcept class added. Many practitioners had questioned the rather tortured earlier usage of “subject concepts”; the change in this new version reflects the actual reference use of the concepts and of the ontologies that employ them
  • Dropped the “SemSet” class, replacing the same idea of providing multiple tagging options with the best practice of using umbel:prefLabel plus multiple umbel:altLabels and umbel:hiddenLabels. This simplifies the language and brings usage into conformance with standard practice and reasoners
  • With the addition of SuperTypes (see next Major Change), dropped the distinction for “abstract concepts” and rolled their earlier use into the standard RefConcepts
  • The simplification due to OWL 2 metamodeling (see Major Change #4) enabled the removal of many earlier predicates and their inverse properties,
  • With experience gained through linking datasets and their attributes to ontologies [3], added predicates (hasCharacteristic and isCharacteristicOf) for relating external properties, and
  • Many other streamlining changes and improvements to property specifications.

See further Part II in the full specifications.

Major Change #3: SuperTypes

Shortly after the first public release of UMBEL, it was apparent that the 21,000 reference concepts tended to “cluster” into some natural groupings. Further, upon closer investigation, it was also apparent that most of these concepts were disjoint with one another. As subsequent analysis showed, more fully detailed in the Annex G document, fully 75% of the reference concepts in the UMBEL ontology are disjoint with one another.

Natural clusters provide a tractable way to access and manage some 21,000 items. And, large degrees of disjointness between concepts can also lead to reasoning benefits and faster processing and selection of those items.

For these reasons, a dedicated effort was undertaken to analyze and assign all UMBEL reference concepts to a new class of SuperTypes. SuperTypes are now a major enhancement to UMBEL v. 0.80. The assignment results and the SuperType specification are discussed in Part II, with full analysis results in Annex G.

In addition, all of these SuperTypes are clustered into nine “dimensions”, which are useful for aggregation and organizational purposes, but which have no direct bearing on logic assertions or disjointness testing. These nine dimensions, with their associated SuperTypes, are shown in the table below. Note that the last two dimensions (and their four SuperTypes) are by definition non-disjoint.

Dimensions and SuperTypes

  • Natural World: Natural Phenomena; Natural Substances; Earthscape; Extraterrestrial
  • Living Things: Prokaryotes; Protists & Fungus; Plants; Animals; Diseases; Person Types
  • Human Activities: Organizations; Finance & Economy; Society; Activities
  • Time-related: Events; Time
  • Human Works: Products; Food or Drink; Drugs; Facilities
  • Human Places: Geopolitical; Workplaces, etc.
  • Information: Chemistry (n.o.c); Audio Info; Visual Info; Written Info; Structured Info; Notations & References; Numbers
  • Descriptive Attributes
  • Classificatory: Abstract-level; Topics/Categories; Markets & Industries

The construct of the SuperType may be applied to any domain ontology constructed with the UMBEL Vocabulary. The UMBEL Reference Concept Ontology includes all disjoint assertions for all of its RefConcepts.
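For those curious how this plays out mechanically, here is a minimal sketch (Python with rdflib; the namespace and class names are invented for illustration, not actual UMBEL identifiers) of the kind of assertions involved:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/ontology/")   # hypothetical domain ontology

    g = Graph()

    # Two illustrative SuperType-style classes declared disjoint: no individual
    # may be a member of both.
    for st in (EX.LivingThing, EX.Facility):
        g.add((st, RDF.type, OWL.Class))
    g.add((EX.LivingThing, OWL.disjointWith, EX.Facility))

    # Reference-concept-style classes slot in under their SuperType ...
    g.add((EX.Hospital, RDFS.subClassOf, EX.Facility))
    g.add((EX.Physician, RDFS.subClassOf, EX.LivingThing))

    # ... so anything asserted to be both a Hospital and a Physician would be
    # flagged as inconsistent by an OWL reasoner.

Because most RefConcepts fall under disjoint SuperTypes, a reasoner can both catch miscategorized assignments and prune large portions of the concept space when answering queries.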

Major Change #4: OWL 2 Compliance

One of the most challenging improvements in the new UMBEL version 0.80 was to make its vocabulary and ontology compliant with the new OWL 2 Web Ontology Language. We wanted to convert to OWL 2 in order to:

  • Use OWL reasoners
  • Load the full UMBEL into the Protégé 4 ontology editor
  • Use the OWL API, consistent with many other ontology tools we are pursuing, and
  • Take advantage of a neat trick in OWL 2 called “punning”.

The latter reason is the most important given the reference role of UMBEL and ontologies based on the UMBEL Vocabulary. It is not unusual to want to treat things either as a class or an instance in an ontology. Among other aspects, this is known as metamodeling and it can be accomplished in a number of ways. “Punning” is one metamodeling technique that importantly allows us to use concepts in ontologies as either classes or instances, depending on context.

To better understand why we should metamodel, let’s look at a couple of examples, both of which combine organizing categories of things and then describing or characterizing those things. This dual need is common to most domains [4].

As one example, let’s take a categorization of apes as a kind of mammal, which is then a kind of animal. In these cases, ape is a class, which relates to other classes, and apes may also have members, be they particular kinds of apes or individual apes. Yet, at the same time, we want to assert some characteristics of apes, such as being hairy, having two legs and two arms but no tail, being capable of walking bipedally, having grasping hands, and, for some, being endangered species. These characteristics apply to the notion of apes as an instance.

As another example, we may have the category of trucks, which may further be split into truck types, brands of trucks, types of engine, and so forth. Yet, again, we may want to note that a truck is designed primarily for the transport of cargo (as opposed to automobiles, designed for the transport of people), or that trucks may have different driver’s license requirements or different license fees than autos. These descriptive properties refer to trucks as an instance.

These mixed cases combine both the organization of concepts in relation to one another and with respect to their set members, with the description and characterization of these concepts as things unto themselves. This is a natural and common way to express most any domain of interest. It is also a general requirement for a reference ontology, as we use in the sense of UMBEL.
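Expressed as triples, the ape example might look like the following sketch (Python with rdflib; the namespace and terms are invented for illustration). The key point is that the very same IRI appears once as a class and once as an individual, which is exactly what OWL 2 punning permits.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/zoo/")   # hypothetical namespace

    g = Graph()

    # Class view: Ape is a class in a hierarchy, and it has individual members.
    g.add((EX.Ape, RDF.type, OWL.Class))
    g.add((EX.Ape, RDFS.subClassOf, EX.Mammal))
    g.add((EX.Bonzo, RDF.type, EX.Ape))

    # Instance view ("punning"): the same IRI is also treated as an individual,
    # so the concept of apes can itself be described and characterized.
    g.add((EX.Species, RDF.type, OWL.Class))
    g.add((EX.Ape, RDF.type, EX.Species))
    g.add((EX.Ape, EX.conservationStatus, Literal("endangered")))

An OWL 2 tool interprets the two uses of EX.Ape separately, so the class hierarchy and the descriptive assertions coexist without tripping a reasoner.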

When we combine this “punning” aspect of OWL 2 with our standard way of relating concepts in a hierarchical manner, this general view of the predicates within UMBEL emerges (click to expand):

UMBEL Predicates - click to expand

On the left-hand side (quadrants A and C) is the “class” view of the structure; the right-hand side is the “individual” (or instance) view of the structure (quadrants B and D). These two views represent alternative perspectives for looking at the UMBEL reference concepts based on metamodeling.

The top side of the diagram (quadrants A and B) is an internal view of UMBEL reference concepts (RefConcept) and their predicates (properties). This internal view applies to the UMBEL Reference Concept Ontology or to domain ontologies based on the UMBEL Vocabulary. These relationships show how RefConcepts are clustered into SuperTypes or how hierarchical relationships are established between Reference Concepts (via the skos:narrowerTransitive and skos:broaderTransitive relations). The concept relationships and their structure constitute the “class” view (quadrant A); treating these concepts as instances in their own right and relating them to SKOS is handled by the right-hand “individual” (instance) view (quadrant B).

The bottom of the diagram (quadrants C and D) shows either classes or individuals in external ontologies. The key mapping predicates cross this boundary (the broad dotted line) between UMBEL-based ontologies and external ontologies. See further Part I in the full specification for a more detailed discussion of this figure and its relation to metamodeling.

Major Change #5: Documentation and Packaging

These changes also warranted better documentation and a better project Web site. From a documentation standpoint, the organization was simplified between the actual specifications and related annexes. And, given the more collaborative basis resulting from the new partnership with Ontotext, we established an internal wiki following TechWiki designs. Initial authoring occurs there, with final results re-purposed and published on the project Web site.

The UMBEL Web site also underwent a major upgrade. It is now based on Drupal, and therefore will be able to embrace our conStruct advances in visualization and access over time. We also posted the full Reference Concept Ontology as an OWLDoc portal.

We feel these changes have now resulted in a clean and easy-to-maintain framework for the next phase in UMBEL’s growth and maturation.

Next Steps and Version

As noted in the intro, this version is but an interim step to the pending next release of UMBEL v. 1.00. This next version will provide mappings to leading ontologies and knowledge bases, as well as the upgrade of existing Web services and other language support features. Intended production or commercial uses would best await this next version.

However, the current version 0.80 is fully consistent and OWL 2-compliant. It loads and can be reasoned over with OWL 2 reasoners (see those available with Protégé 4.1, for example). We encourage you to download, test and comment upon this version; specifics are available on the project Web site.
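For a quick programmatic sanity check that the files load, here is a minimal sketch with Python and rdflib (the file name is a placeholder for wherever you have saved the downloaded ontology, assumed here to be in RDF/XML; full reasoning is better done in Protégé 4.1 or through the OWLAPI):

    from rdflib import Graph
    from rdflib.namespace import OWL, RDF

    g = Graph()
    g.parse("umbel.owl", format="xml")   # placeholder path to the downloaded file

    classes = set(g.subjects(RDF.type, OWL.Class))
    print(len(g), "triples loaded;", len(classes), "OWL classes found")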

As co-editors, Frédérick Giasson and I are extremely enthused about the changes and cleanliness of version 0.80. It is already helping our client work. We think these improvements are a good harbinger for UMBEL version 1.00 to come by the end of the year. We hope you agree.


[1] Some relevant SKOS properties are now shown in the UMBEL namespace. This is a technical issue with regard to SKOS needing to have a separate namespace for its DL version, which has been brought up with the relevant Work Group individuals at the W3C. As soon as this oversight is rectified, the SKOS predicates now in UMBEL will be deprecated in favor of the appropriate ones in SKOS.
[2] We’d especially like to thank Jack Park for a series of critical email exchanges in November 2008 regarding terminology and purpose. We are, of course, solely responsible for the changes we did invoke.
[3] See, for example, the MyPeg.ca site, with its richness of indicator and demographic data. UMBEL co-editors Bergman and Giasson have each blogged about this site.
[4] Much of this material is drawn from M.K. Bergman, “Metamodeling in Domain Ontologies,” AI3:::Adaptive Information blog, Sept 20, 2010; see http://www.mkbergman.com/913/metamodeling-in-domain-ontologies/. In the reference ontologies that are the focus here, we often want to treat our concepts as both classes and instances of a class. This is known as “metamodeling” or “metaclassing” and is enabled by “punning” in OWL 2. For example, here is a case cited in the OWL 2 wiki entry on “punning”:
People sometimes want to have metaclasses. Imagine you want to model information about the animal kingdom. Hence, you introduce a class a:Eagle, and then you introduce instances of a:Eagle such as a:Harry.
(1) a:Eagle rdf:type owl:Class
(2) a:Harry rdf:type a:Eagle
Assume now that you want to say that “eagles are an endangered species”. You could do this by treating a:Eagle as an instance of a metaconcept a:Species, and then stating additionally that a:Eagle is an instance of a:EndangeredSpecies. Hence, you would like to say this:
(3) a:Eagle rdf:type a:Species
(4) a:Eagle rdf:type a:EndangeredSpecies.
This example comes from Boris Motik, “On the Properties of Metamodeling in OWL,” paper presented at ISWC 2005, Galway, Ireland, 2005. For some other examples, see Bernd Neumayr and Michael Schrefl, 2009. “Multi-Level Conceptual Modeling and OWL (Draft, 2 May – Including Full Example)”; see http://www.dke.jku.at/m-owl/most09_22_full.pdf.

Posted: November 8, 2010

Innovative Winnipeg Project Powered by SD Technology

This past Friday the Peg project was unveiled for the first time to an enthusiastic welcome at the Winnipeg Poverty Reduction Partnership Forum. A beta version of its website (www.mypeg.ca) was also launched. Peg is an innovative Web portal for community indicators of well-being for the city of Winnipeg, Manitoba. First conceived in 2002 and much refined since, the project has now been shared with the public thanks to its strong consortium of members from the local community and its recent backing.

Since early this year, Structured Dynamics has been the lead technical contractor on the project. But Peg is about people and involvement, not technology. Peg is an effort of community and perspectives and information and stories, all designed to coalesce around how to make Winnipeg a better community moving forward. So, while the technology underlying the site is innovative (yes, we’re proud ;) ), more so is the effort and vision of the community making it happen. Though just a beta release, the current site and the commitment behind it point to some exciting future developments.

Here is the main screen for Peg (clicking on any of the screen captures below will take you directly to the relevant part of the site):

Peg Main Page

A Community Perspective Backed by Dynamic Functionality

Winnipeg’s community indicator system (CIS) is organized around themes, cross-cutting issues that bridge across themes, and indicators and supporting data to track and measure the city’s well-being. Peg’s major themes, agreed upon after extensive community consultation, are: basic needs; health; education & learning; social vitality; governance; built environment; economy; and natural environment. In this first beta release, the emphasis has been on the cross-cutting issue of poverty and some of the indicators to track it.

The perspective being brought to bear on these questions of well-being is comprehensive and embracing. Data and demographics and quantitative indicators of well-being are matched with stories and narratives from affected parties, videos, and a variety of display and visualization options. Much of the supporting data is organized by the 236 neighborhoods in Winnipeg, or broader community areas, with comparative baselines to city, province and nation. The information is both hard and soft, and presented in engaging, exciting and dynamic ways. Using the best of current social media, Peg is meant to be a virtual meeting place and town hall for the public to share and engage one another.

This beta is but a first expression of Peg’s longer-term vision, yet it already has the backbone to take on these labors. A concept explorer allows the public to explore and navigate through the entire information space; much information is mapped and presented in its locational context; narratives and stories and videos are linked contextually to topics and issues; and many, many dashboards can be created and displayed for showing trends, comparing neighborhoods, and letting the data speak visually:

Peg Explorer Peg Map Tab
Peg Stories Tab Peg Dashboard Tab

The current beta is but a start. The Peg project, in continued consultation with stakeholders, will be developing further indicators for each of its eight major themes, providing information about past and current trends, and expanding into additional cross-cutting issues. Daily, the site will see an increase in richness and relevance.

Project Participants

Peg has been spearheaded by the United Way of Winnipeg and the International Institute for Sustainable Development (IISD), also based in Winnipeg, with the partnership of the Province of Manitoba, the City of Winnipeg, Health in Common, and a cross section of community interests and members across the city. Peg is a non-profit effort, and is embarking on a new three-year work plan to oversee further funding and expansion.

Peg is governed by a Steering Committee with budgetary and strategic responsibilities. Peg also works with an Engagement Group — a broad-based group of Winnipeggers — that serves as a testing ground for ideas, direction and policy. The site provides credits for the various entities involved and responsible for the effort.

IISD has provided overall project management for the current effort. As personal thanks, we’d especially like to recognize Connie Walker, Laszlo Pinter, Christa Rust and Charles Thrift. Tactica, also of Winnipeg, has been the lead graphics and site designer for Peg. SD has worked closely with them to ensure a smooth launch, and they’ve done a great job. Thanks to all!

Now, This is Semantics Done Right

Of course, for more on the project, go directly to the Peg site or those of its other major participants and contributors. But, in our role as implementers of the behind-the-scenes wizardry powering the site, we would be remiss if we did not mention a couple of technical items.

As lead technical developer, SD was responsible for all data access, management, development and visualization software for the site. The site was developed in Drupal, with Virtuoso as the RDF data store and Solr for faceted site search. As part of its Open Semantic Framework, based on the Citizen Dan local government appliance, SD contributed and extended major open source software for Peg. These contributions included the structWSF Web services framework, conStruct modules for linking the system into Drupal, and the Flex-based semantic Components including the explorer, map, story viewer, browse/search, dashboard, workbench and back office widgets. We also developed the adaptive ontology driving the entire site, based on the Peg framework vocabulary already hashed out by the community participants.

During the course of the project we developed an entirely new workbench capability for creating new, persistent dashboards. We extended the sRelationBrowser semantic component with complete and flexible theming and styling; virtually all aspects of nodes, edges and behavior have now been exposed for tailoring, including fonts, colors and use of images. We enhanced the irON format to make it easier for project participants to submit spreadsheet datasets to the site for new indicator data. We will be migrating these advances to our existing open source software over the coming weeks. Check Fred Giasson’s blog for release details; he has also begun a series on the technology details.

But, in my opinion, what is most remarkable about all of this is that these bloody details are completely hidden from the user. Though real geeks can get RDF and linked data via export options, standard users simply interact with and experience the site. No triples are shoved in their faces, no technology screams out for attention, and ne’er a URI is to be found. The thing simply works, all the while being flexible, contextual, attractive and fun.

And that, folks, I submit, is semantics done right!

Posted: November 1, 2010

SemanticWeb.com

Jennifer Zaino of SemanticWeb.com has just published an interview with me regarding our recently announced partnership with Ontotext and its relation to linked data. Thanks, Jenny, for a fair and accurate representation of our conversation!

Some of the questions related to reference vocabularies and linking predicates are somewhat hard to convey. Jenny did a very nice job capturing some nuanced concepts. I invite you to read the article yourself and judge.