Posted: November 22, 2016

Six Diverse Use Cases Now Available

Cognonto today published two new use cases on how to further leverage KBpedia, its knowledge structure that integrates six major knowledge bases (Wikipedia, Wikidata, OpenCyc, GeoNames, DBpedia and UMBEL), plus mappings to another 20 leading knowledge vocabularies. KBpedia provides a foundation for knowledge-based artificial intelligence (KBAI) by supporting the (nearly) automatic creation of training corpuses, positive and negative training sets, and feature sets for deep, unsupervised and supervised machine learning.

The two new use cases cover: 1) dynamic tests and refinements of machine learners, enabled by KBpedia’s fast creation of training sets, corpuses and reference (‘gold’) standards; and 2) KBpedia’s unique aspects that provide context for various entity types. With these two additions, Cognonto has now published six diverse use cases.

Each use case is summarized according to the problem, our approach to solving it, and the benefits that result. The use cases themselves present general workflows and code snippets showing how each was tackled.

We will continue to publish use cases using Cognonto’s technologies and KBpedia as they arise.

 

Posted: October 17, 2016

Extending KBpedia with Domain and Enterprise Data is a Key Theme

Cognonto, our recently announced venture in knowledge-based artificial intelligence (KBAI), has just published three use cases. Two of these use cases are based on extending KBpedia with enterprise or domain data. KBpedia is the KBAI knowledge structure at the heart of Cognonto.

The Cognonto Web site contains longer descriptions of these use cases, with statistics and results where appropriate. We intend to continue to publish more use cases. Notable ones will be broadly announced.

Use Case #1: Word Embedding Corpuses

word2vec is an artificial intelligence ‘word embedding’ model that can establish similarities between terms. These similarities can be used to cluster or classify documents by topic, or to characterize them by sentiment, or for recommendations. The rich structure and entity types within Cognonto’s KBpedia knowledge structure can be used, with one or two simple queries, to create relevant domain “slices” of tens of thousands of documents and entities upon which to train word2vec models. This approach eliminates the majority of effort normally associated with word2vec for domain purposes, enabling available effort to be spent on refining the parameters of the model for superior results.
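
As an illustration of the mechanics, here is a minimal sketch of training a word2vec model on such a domain slice using the gensim library. The corpus file name, preprocessing steps and training parameters are illustrative assumptions, not Cognonto's actual pipeline.

```python
# Minimal sketch: training a domain-specific word2vec model on a corpus
# "slice" exported from a knowledge structure such as KBpedia.
# Assumes the slice has been saved as one plain-text document per line
# in a hypothetical file named domain_slice.txt.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Tokenize each document with gensim's basic preprocessor
with open("domain_slice.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

# Train the model; these parameters are starting points to be tuned
model = Word2Vec(
    sentences,
    vector_size=200,   # dimensionality of the word embeddings
    window=5,          # context window size
    min_count=5,       # ignore rare terms
    workers=4,
    epochs=10,
)

# Example use: terms most similar to a seed concept (assuming the seed
# term actually appears in the domain corpus)
print(model.wv.most_similar("cancer", topn=10))
```

The parameter refinement mentioned above then amounts to re-running this training loop with different seeds, window sizes and vector dimensions against a fixed evaluation set.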

Some key findings are:

  • Domain-specific training corpuses work better with less ambiguity than general corpuses for these problems
  • Cognonto (through KBpedia) speeds and eases the creation of domain-specific training corpuses for word2vec (and other corpus-based models)
  • Other public and private text sources may be readily added to the KBpedia baseline in order to obtain still further domain-relevant models
  • Such domain-specific training corpuses can be used to establish similarity between local text documents or HTML web pages
  • This method can also be combined with Cognonto’s topics analyzer to first tag text documents using KBpedia reference concepts, and then inform or augment these domain-specific training corpuses, and
  • These capabilities enable rapid testing and refinement of different combinations of “seed” concepts to obtain better desired results.

Use Case #2: Integrating Private Data

KBpedia provides a rich set of 20 million entities in its standard configuration. However, by including relevant entity lists, which may already be in the possession of the enterprise or from specialty domain datasets, significant improvements can be achieved across all of the standard metrics used for entity recognition and tagging. Here is an example of the standard metrics applied by Cognonto in its efforts:

(Figure: example confusion matrix)

Cognonto’s standard methodology also includes the creation of reference, or “gold”, standards for measuring the benefits of adding more data or performing other tweaks on the entity extraction algorithms.
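
To make these metrics concrete, here is a minimal sketch of how precision, recall and F1 can be computed against such a gold standard. The documents and entity sets are hypothetical toy data, not Cognonto's actual test harness.

```python
# Minimal sketch: scoring an entity tagger against a "gold standard".
# gold and predicted map document ids to the set of entities expected
# or found; the data below is purely illustrative.
def score(gold: dict, predicted: dict) -> dict:
    tp = fp = fn = 0
    for doc_id, gold_entities in gold.items():
        found = predicted.get(doc_id, set())
        tp += len(found & gold_entities)   # correctly tagged entities
        fp += len(found - gold_entities)   # spurious tags
        fn += len(gold_entities - found)   # missed entities
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy example: two documents scored against the gold set
gold = {"doc1": {"Acme Corp", "Winnipeg"}, "doc2": {"HDA"}}
baseline_predictions = {"doc1": {"Acme Corp"}, "doc2": {"HDA", "Drupal"}}
print(score(gold, baseline_predictions))
```

The same scoring function can then be re-run after each dataset is added, which is how the incremental improvements reported below are measured.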

Some key findings from this use case in adding private data to KBpedia include:

  • In the example used, adding private enterprise data results in more than a doubling of accuracy (108%) over the standard, baseline KBpedia for identifying the publishing organization of a Web page
  • Some datasets may have a more significant impact than others, but each contributes to the overall improvement of the predictions. Generally, adding more data improves results across all measured metrics
  • “Gold standards” are an essential component for testing the value of adding specific datasets or refining machine learning parameters
  • Approx. 500 training instances are sufficient to build a useful “gold standard” for entity tagging; negative training examples are also advisable
  • Even if all specific entities are not identified, flagging potential “unknown” entities is an important means to target next efforts for adding to the current knowledge base
  • KBpedia is a very useful structure and starting point for an entity tagging effort, but adding domain data is probably essential to gain the overall accuracy desired for enterprise requirements, and
  • This use case is broadly applicable to any entity recognition and tagging initiative.

Use Case #3: The Cognonto Mapper

The Cognonto Mapper includes standard baseline capabilities found in other mappers such as string and label comparisons, attribute comparisons, and the like. But, unlike conventional mappers, the Cognonto Mapper is able to leverage both the internal knowledge graph structure and its use of typologies (most of which do not overlap with one another) to add structural comparators as well. These capabilities lead to more automation at the front end of generating good, likely mapping candidates, leading to faster acceptance by analysts of the final mappings. This approach is in keeping with Cognonto’s philosophy to emphasize “semi-automatic” mappings that combine fast final assignments with the highest quality. Maintaining mapping quality is the sine qua non of knowledge-based artificial intelligence.
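
The general idea of blending comparators can be sketched simply. The following fragment combines a basic label similarity with a structural similarity based on shared super-types; the weights, helper names and toy data are illustrative assumptions and do not reproduce the Cognonto Mapper's actual scoring.

```python
# Minimal sketch of combining a label comparator with a structural one.
# difflib is used as a stand-in string similarity measure.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Simple normalized string similarity between two labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural_similarity(ancestors_a: set, ancestors_b: set) -> float:
    """Jaccard overlap of the super-type sets (e.g., typology membership)."""
    if not ancestors_a or not ancestors_b:
        return 0.0
    return len(ancestors_a & ancestors_b) / len(ancestors_a | ancestors_b)

def mapping_score(label_a, label_b, ancestors_a, ancestors_b,
                  w_label=0.6, w_struct=0.4):
    """Weighted blend of the two comparators; the weights are illustrative."""
    return (w_label * label_similarity(label_a, label_b)
            + w_struct * structural_similarity(ancestors_a, ancestors_b))

# Toy example: a candidate concept pair with hypothetical super types
score = mapping_score("River", "Stream",
                      {"WaterBody", "NaturalFeature"},
                      {"WaterBody", "Place"})
print(round(score, 3))
```

Candidates above a chosen score threshold become the suggestions presented to analysts for the final, manual vetting step noted in the findings below.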

Some key findings from this use case are:

  • A capable mapper, including structural considerations, is essential for quickly generating mapping candidates for entities, concepts, vocabularies, schema and ontologies
  • A capable mapper, including structural considerations, is essential for accurate mapping candidates for entities, concepts, vocabularies, schema and ontologies
  • Quality mappings require manual vetting for final acceptance
  • The Cognonto Mapper generates quick and accurate candidates for linking entities, concepts, vocabularies, schema or ontologies
  • The Cognonto Mapper can generate “gold standards” in 15% of the time of standard approaches
  • The Cognonto Mapper scoring can help point to likely mappings even where the candidates are ambiguous
  • The Cognonto Mapper can identify likely, but unknown, entities for inspection and commitment to the knowledge base, and
  • Other structural aspects of KBpedia, such as aspects, relations or attributes, can inform other comparators and mappers.

See the original use case links for further details, code examples, and results and statistics. As noted, we will announce additional use cases as they are published.

Posted: January 20, 2014

New OSF Platform Leapfrogs Earlier Releases in Features and Capabilities

After nearly five years of concentrated development — including the past 20 months of quiet, background efforts — Structured Dynamics is proud to announce version 3.0 of its open-source Open Semantic Framework. OSF is a turnkey platform targeted to enterprises to bring interoperability to their information assets, achieved via a layered architecture of semantic technologies. OSF can integrate information from documents to Web pages and standard databases. Its broad functions range from information ingest and tagging to search and data management to publishing.

Until today, the version available for download was OSF version 1.x. While capable as an enterprise platform — indeed, it has been in use by a number of leading global enterprises since development first began — its capabilities were uneven and required consulting expertise to configure and set up. SD was hired by Healthdirect Australia (HDA) nearly two years ago to enhance OSF’s capabilities and integrate it more closely with the Drupal open-source content management system, among other modern enterprise requirements. The OSF from those developments — the non-public version 2.0 specific to HDA — has now been generalized for broader public use with today’s announcement of version 3.0.

A More Complete Enterprise Platform

HDA's healthinsite Portal

Not unlike many large organizations, HDA had specific enterprise requirements when it began its recent initiative. Included in these were stringent security, broad use of proven open-source applications, governance and workflow procedures, and strict content authoring and management guidelines. These requirements further needed to express themselves via a sequence of deployment and testing environments, all conducted by a multi-vendor support group following agile development practices.

These requirements placed a premium on performance, scalability and interoperability, all subject to repeatable release procedures and scripts. OSF’s initial development as a more-or-less standalone platform needed to accommodate an enterprise-wide management model involving many players, environments and applications. Prior decisions based on OSF alone now needed to consider and bridge modern enterprise development and deployment practices.

Tighter integration with Drupal was one of these requirements (see next section), but other OSF changes necessary to accommodate this environment included:

  • A new security layer — the initial OSF security model was based on IP authentication. Given the sensitivity of the health data managed by HDA, such a simplistic approach was unacceptable. The actual HDA deployment relied on a third-party security application. However, what was learned from that resulted in a key-based access and validation model in the OSF v 3.0 update
  • A new revisioning system — content authoring and governance required multiple checks in the workflow, and requirements to review prior edits and invoke possible rollbacks. The result was to add a completely new revisioning capability to OSF
  • Middleware integration and APIs — in a multi-vendor environment, OSF operates in part as a central repository for all system information, which third parties must more readily and easily be able to access. Thus, besides the security aspects, a much improved programmatic API and a generalized search API were added to the OSF platform
  • New, additional Web services — the requirements above meant that seven new OSF Web services were added to the system, bringing the total number of current Web services to 27
  • New caching layer — because of its Web-service design, information access and mediation occurs via a large number of endpoint queries, many of which are patterned and repeated. A new caching layer was therefore added to OSF, which significantly improved performance and reduced access burdens on the OSF engines (a minimal sketch of the idea follows this list)
  • Workflow integration — improved workflow sequences and screens were required to capture workflow and governance demands, and
  • Multilingual support — like most larger organizations, HDA has a diversity of native languages throughout its user base. Though OSF had initially been explicitly designed to support multilinguality, specific procedures and capabilities were put in place to more easily support multiple languages in OSF.
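
As referenced in the caching item above, here is a minimal sketch of the caching idea: repeated endpoint queries are keyed by a hash of their parameters and served from a local store when still fresh. The class, TTL and fetch function are hypothetical illustrations, not OSF's actual implementation.

```python
# Minimal sketch of caching repeated endpoint queries.
import hashlib
import json
import time

class QueryCache:
    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (timestamp, result)

    def _key(self, endpoint: str, params: dict) -> str:
        raw = endpoint + json.dumps(params, sort_keys=True)
        return hashlib.sha1(raw.encode("utf-8")).hexdigest()

    def get_or_fetch(self, endpoint: str, params: dict, fetch):
        key = self._key(endpoint, params)
        hit = self.store.get(key)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]                   # served from cache
        result = fetch(endpoint, params)    # e.g., an HTTP call to the endpoint
        self.store[key] = (time.time(), result)
        return result

# Toy usage with a stand-in fetch function
cache = QueryCache()
fake_fetch = lambda endpoint, params: {"endpoint": endpoint, "params": params}
print(cache.get_or_fetch("search", {"q": "cancer"}, fake_fetch))
```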

Tighter Integration with Drupal

When Fred Giasson and I first designed and architected the Open Semantic Framework in 2009, we made the conscious decision to loosely couple OSF with the initial user interface and content management system, Drupal. We did so thinking that perhaps other CMS frameworks would be cloned onto OSF over time.

Time has not proven this assumption correct. Client experience and HDA’s interests suggested the wisdom of a tighter coupling to Drupal. This shift arose because of the great flexibility of Drupal with its tens of thousands of add-on modules and its ecosystem of capable developers and designers. Our early decision to keep Drupal at arm’s length was making it more difficult to manage an OSF instance. Existing Drupal developers were not able to employ their Drupal expertise to manage OSF portals.

We pivoted on this error by tightening the coupling to Drupal, which involved a number of discrete activities:

  • Upgrade to Drupal 7 — earlier versions of OSF used Drupal 6. We migrated the code base to Drupal 7. That, plus the other Drupal changes noted below, resulted in re-writing about 80% of the OSF code base related to Drupal
  • Alternative Drupal data storage — Drupal’s own evolution in version 7 (and continuing with version 8) is to abstract its underlying information model around entities and fields, abstractions that are much better aligned with OSF’s RDF data model. As these entity and field changes were exposed in Drupal APIs, it was possible for us to write an entirely new information model underlying Drupal. Drupal administrators using OSF are now able to use OSF solely as the data model underneath Drupal (rather than the more standard MySQL)  or any mixed portions in between. The typical OSF for Drupal design now uses OSF for all content storage, with MySQL reserved for internal Drupal settings (à la MVC)
  • Drupal connectors — certain common or core Drupal modules, such as Fields, Entities, Search, and Views, are either common utilities for Drupal developers or are themselves core bases for third-party modules. Because of their centrality, SD developed a series of “connectors” that enable these modules to be used as is while transparently communicating with and writing to OSF. Thus, Drupal developers can use these familiar capabilities without needing particular OSF knowledge
  • Major updates to Drupal modules — because of the changes above, the existing OSF Drupal modules (called conStruct in the earlier versions) were updated to take advantage of the common terminology and tighter integration
  • Major updates to Drupal widgets — similarly, the standard OSF data and visualization widgets used with Drupal (called Semantic Components in the earlier versions) were also updated to work in this more tightly integrated environment.

Expanded Search Capabilities and Web Services

Some of the extended capabilities in OSF v 3.0 are noted above, including the expanded roster of Web services. However, the OSF Search Web service, which is by far the most used OSF endpoint, received massive improvements in this latest release.

First, OSF Search now uses a new query parser, which provides the capability to change the ranking of search results by boosting how specific query components get scored. Types, attributes, datasets or counts may be used to vary any given search result, including different occurrences on the same page. It is also now possible to add restrictions to the search queries, including restricting results to a specified set of attributes.

This flexibility is highly useful where certain structured pages contain blocks or sections with patterned search results. This structuring leads to the ability to create generic page templates, wherein search queries and results vary within the layout. An “events” block may score differently than, say, a “related topics” block, all of which in turn can respond to a given context (say, “cancer” versus “automobiles”) for a given page (and its template).

These repeated patterns lend themselves to the use of reusable “search profiles,” which are predefined queries that may include context variables. These profiles, in turn, can be named and placed on page layouts. Existing profiles may be recalled or invoked to become patterns for still further profiles. The flexibility of these search profiles is immense, and the parameters used in constructing them can be quite extensive.
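
A minimal sketch of the search-profile idea is shown below; the field names, boosts and structure are illustrative assumptions rather than the actual OSF Search API.

```python
# Minimal sketch of a reusable "search profile": a named query definition
# with boosts and attribute restrictions, into which a context variable
# (such as the page's topic) is substituted at render time.
EVENTS_PROFILE = {
    "name": "events-block",
    "types": ["Event"],                  # restrict results to these types
    "boosts": {"type:Event": 3.0,        # weight query components differently
               "attribute:startDate": 1.5},
    "restrict_attributes": ["title", "startDate", "location"],
    "query_template": "{context}",       # filled in per page
}

def build_query(profile: dict, context: str) -> dict:
    """Instantiate a profile for a given page context (e.g., 'cancer')."""
    query = dict(profile)
    query["query"] = profile["query_template"].format(context=context)
    return query

print(build_query(EVENTS_PROFILE, context="cancer"))
```

The same profile, recalled with a different context, yields a different block of results, which is what makes generic page templates possible.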

Thus, OSF version 3.0 includes the new Query Builder module. Via an intuitive selection interface, users may construct search queries of any complexity, and then save and reuse them later as search profiles.

Lastly, registering, configuring and managing OSF instances and datasets into Drupal has never been easier. The new OSF Configure module centralizes all the features and options required for these purposes, which are then managed by a new suite of tools (see next).

Automated Installation and Management Tools

Standard enterprise deployments that proceed from development to production require constant updates and versions, both in application code and content. Keeping track of and managing these changes — let alone deploying them quickly and without error — requires separate management capabilities in their own right. The new OSF thus has a number of utilities and command-line tools to aid these requirements:

  • OSF Installer — this tool installs and configures all the pieces required by the OSF stack, then runs the OSF Tests Suites to make sure that all functionality is fully operational on the new server
  • OSF Tests Suites — composed of 746 tests and 4139 assertions, these tests may be run every time an OSF instance is deployed or code is changed. The tests measure all of the input parameters of each endpoint, combinations thereof, mime types, and expected errors returned by each endpoint
  • OSF Ontologies Management Tool — (OMT) is used to manage ontologies, list ontologies, create/import new ones, delete existing ones, or to generate underlying ontological structures
  • OSF Datasets Management Tool — (DMT) is used to manage datasets of an OSF instance, enabling the user to create, delete, update, import and export datasets directly from the command line
  • OSF Permissions Management Tool — (PMT) is used to manage, list, create or delete access permissions groups and users
  • OSF Data Validator Tool — (DVT) is used to perform a series of post-indexation data validation tests and return validation errors if any are found.

Tempered via Enterprise Development and Deployments

The methods and processes by which these advances have been made all occurred within the context of state-of-the-art enterprise IT management. Experience with supporting infrastructure tools (such as Jira, Confluence, Puppet, etc.) and agile development methods are part of the ongoing documentation of OSF (see next). This experience also bolsters Structured Dynamics’ ability to work with other third-party applications at the middleware layer or in support of enterprise deployments.

Comprehensive and Completely Updated Documentation

The Open Semantic Framework has evolved considerably since its conception now five years ago. In its early development, components and pieces were sometimes developed in isolation and then brought into the framework. This jagged development path led to a cacophony of names and terms to characterize portions of the OSF stack. This terminology confusion has made it more difficult than it needed to be to understand the vision of OSF, the layers of its architecture, or the interactions between its components and parts.

In making the substantial efforts to update documentation from OSF version 1.x to the current version 3.0, terminology was made consistent and code references were cleaned up to reflect the simpler OSF branding. This clean up has led to necessary updates across multiple Web sites maintained by Structured Dynamics with some relationship to OSF.

The Web site with the most changes required has been the OSF Wiki. In its prior incarnation, called TechWiki, there were nearly 400 technical articles on OSF. That site has now been completely rewritten and re-organized. Nearly two hundred new articles have been written in support of OSF v 3.0. Terminology related to the older cacophony (see correspondence table here) has (hopefully) been updated and corrected. Most architectural and technical diagrams have been updated. Additional documentation is being posted daily, catching up with the experience of the past twenty months.

Moving Beyond the Established Foundation

Open Semantic Framework

SD is pleased that enterprise sponsors want to continue beyond the Open Semantic Framework’s present solid foundations. While we are not at liberty to discuss specific client initiatives, a number of ongoing developments can be described broadly. First, in terms of the key engines that provide the core of OSF’s data management capabilities, initiatives are underway in the areas of visualization, business analytics and workflow orchestration and management. There are also efforts underway in more automated means for direct ingest of quality Web-based information, both based on linked data and from Web APIs. We are also pleased that efforts to further extend OSF’s tight integration with Drupal are of interest, even while the integration efforts of the past months have not yet been fully exploited.

To Learn More

To learn more, make sure to check out the re-organized OSF wiki. See specifically the complete OSF overview, the list of all the OSF 3.0 features, and the list of all the new features to OSF 3.0. Also, for a complete soup-to-nuts view of what it takes to put up a new OSF installation, see the Users Guide. Lastly, for a broad overview of OSF, see its reference architecture and the overviews on its dedicated OSF Web site.

As a final note, Structured Dynamics would like to thank its corporate sponsors of the past five years for providing the development funds for OSF, and for agreeing with the open source purposes of the Open Semantic Framework.

Posted: May 21, 2013

First and Largest Local Government Site to Exclusively Embrace Semantic Technologies

The City of Winnipeg, the capital and largest city of Manitoba, Canada, just released its “NOW” portal celebrating its diverse and historical 236 neighborhoods. The NOW portal is one of the largest releases of open data by a local government to date, with some 57 varied datasets now available ranging from local neighborhood amenities such as pools and recreation centers, to detailed real estate and economic development information. Nearly one-half million individual Web pages make up the site, driven exclusively by semantic technologies. Nearly 10 million RDF triples underlie the site.

In announcing the site, Winnipeg Mayor Sam Katz said, “We want to attract new investment to the city and, at the same time, ensure that Winnipeg remains healthy and viable for existing businesses to thrive and grow.” He added, “The new web portal, Neighbourhoods of Winnipeg—or NOW—is one way that we are making it easy to do business within the City of Winnipeg.”

NOW provides a single point of access for information such as location of schools and libraries, Census and demographic information, historical data and mapping information. A new Economic Development feature included in the portal was developed in partnership with Economic Development Winnipeg Inc. (EDW) and Winnipeg REALTORS®.

Our company, Structured Dynamics, was the lead contractor for the effort. An intro to the technical details powering the Winnipeg site is provided in the complementary blog post by SD’s chief technologist, Fred Giasson. These intro announcements by SD will later be followed by more detailed discussions on relevant NOW portal topics in the coming weeks.

Background and Formal Release

But the NOW story is really one of municipal innovation and a demonstration of what a staff of city employees can accomplish when given the right tools and frameworks. SD’s real pleasure over the past two years of development and data conversion for this site has been our role as consultants and advisors as the City itself converted the data and worked the tools. The City of Winnipeg NOW (Neighbourhoods of Winnipeg) site is testament to the ability of semantic technologies to be learned and effectively used and deployed by subject matter professionals from any venue.

In announcing the site on May 13, Mayor Sam Katz also released a short four-minute introductory video about the site:

What we find most exciting about this site is how our open source Open Semantic Framework can be adapted to cutting-edge municipal open data and community-oriented portals. Without any semantic technology background at the start of the project, the City has demonstrated its ability to manage, master and then tailor the OSF framework to its specific purposes.

Key Emphases

As its name implies, the entire thrust of the Winnipeg portal is on its varied and historical neighborhoods. The NOW portal itself is divided into seven major site sections with 2,245 static pages and a further 425,000 record-oriented pages. The number of dynamic pages that may be generated from the site given various filtering or slicing-and-dicing choices is essentially infinite.

Neighborhoods

The fulcrum around which all data is organized on the NOW portal is the 236 neighborhoods within the City of Winnipeg, organized into 14 community areas, 15 political wards, and 23 neighborhood clusters. These neighborhood references link to thousands of City of Winnipeg and external sites, and have many descriptive pages of their own.

Some 57 different datasets contribute the information to the site, some authored specifically for the NOW portal and others migrated from legacy City databases. Coverage ranges from parks, schools, recreational and sports facilities, and zoning, to libraries, bus routes, police stations, day care facilities, community gardens and more. More than 1,400 attributes characterize this data, all of which may be used for filtering or slicing the data.

Property and Economic Development

A key aspect of the site is its real estate, assessment and zoning information. Every address and parcel in the city — a count nearing 190,000 in the current portal — may be looked up and related to its local and neighborhood amenities. Up to three areas of the City may be mapped and compared to one another, which is felt to be a useful tool for screening economic development potential.

Census Data

All of the neighborhood and neighborhood clusters may be investigated and compared for Census data in two time periods (2001 and 2006). Types of Census information include population, education, labor and work, transportation, languages, income, minorities and immigration, religion, marital status, and other family and household measures.

Any and all neighborhoods may be compared to one another on any or all of these measures, with results available in chart, table or export form.

Images and History

Images and history pages are provided for each Winnipeg neighborhood.

Mapping

Throughout, there are rich mapping options that can be sliced and displayed on any of these dimensions of locality or type of information or attribute.

More to Come!

The basic dataset authoring framework will enable City staff (and, perhaps, external parties or citizens) to add additional datasets to the portal over time.

Key Functionality and Statistics

The NOW site is rich in functionality and display and visualization options. Some of this functionality is described in the sections that follow.

NOW Ontology Graph

NOW Graph Structure

NOW is entirely an ontology-driven site, with both domain and administrative ontologies guiding all aspects of search, retrieval and organization. There are 12 domain ontologies governing the site, two of which are specific to NOW (the NOW ontology and a Canadian Census ontology). Ten external ontologies (such as FOAF and GeoNames) are also used.

The NOW ontology, shown to the left, has more than 2500 subject concepts within it covering all aspects of municipal governance and specific Winnipeg factors.

Relation Browser

All of the 2500 linked concepts in the NOW ontology graph can be interactively explored and navigated via the relation browser. The central “bubble” also presents related, linked information such as images, Census data, descriptive material and the like. As adjacent “bubbles” are clicked, the user can navigate or “swim through” the NOW graph.

NOW Relation Browser

NOW Web Maps

Web Map

Nearly all of the information on the NOW site — or about 420,000 records — contains geolocational information of one form or another. There are about 200,000 points of interest records, another 200,000 area or polygon records, and about 7,000 paths and routes such as bus routes in the system.

All 190,000 property addresses in Winnipeg may be looked up and mapped.

Virtually all of the 57 datasets in the system may be filtered by category or type or attribute. This information can be filtered or searched using about 1400 different facets, singly or in combination with one another.

Various map perspectives are provided from facilities (schools, parks, etc.) to economic development and history, transportation routes and bus stops, and property, real estate and zoning records.

Templates

Depending on the type of object at hand, one of more than 50 templates may be invoked to govern the display of its record information. These templates are selected contextually from the ontology and present different layouts of map, image, record attribute or other information, all keyed by the governing type.

Each template is thus geared to present relevant information for the type of object at hand, in a layout specific to that object.

Objects lacking their own specific templates default to the display type of their parent or grandparent objects such that no object type lacks a display format.

Multiple templates are displayed on search pages, depending upon the diversity of object types returned by the given search.
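
A minimal sketch of that fallback logic, with a hypothetical type hierarchy and template registry, might look like this; it illustrates the behavior described above rather than the portal's actual code.

```python
# Minimal sketch of template fallback: if an object's type has no template
# of its own, walk up its parent types until one is found, so that no
# object type ever lacks a display format.
TEMPLATES = {"Facility": "facility-template", "Thing": "generic-template"}
PARENTS = {"SwimmingPool": "Facility", "Facility": "Thing", "Thing": None}

def resolve_template(object_type: str) -> str:
    """Return the most specific template available for a type."""
    current = object_type
    while current is not None:
        if current in TEMPLATES:
            return TEMPLATES[current]
        current = PARENTS.get(current)
    return TEMPLATES["Thing"]  # ultimate fallback

print(resolve_template("SwimmingPool"))   # -> "facility-template"
```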

Example of a NOW Record Template

Example of a NOW Census Chart

Graph Statistics

The NOW site provides a rich set of Census statistics by neighborhood or community area for comparison purposes. The nearly half million data points may be compared between neighborhoods (make sure to pick more than one) in graph form (shown) or in tabular form (not shown).

Census information spans from demographics and income to health, schooling and other measures of community well-being.

Like all other displays, the selected results can also be exported as open data (see below).

Image Gallery

The NOW portal presently has about 2700 images on site organized by object type, neighborhood, and historical. These images are contextually available in multiple locations throughout the site.

The History topic section also matches these images to historical neighborhood narratives.

Example of a NOW Image Gallery

Example conStruct Tool: structOntology

conStruct Tools

A series of twenty or so back office tools are available to City of Winnipeg staff to grow, manage and otherwise maintain the portal. Some of these tools are exposed in read-only form to the general public (see Geeky Tools next).

The example at left is the structOntology tool for managing the various ontologies on the site.

Geeky Tools

As a means to show what happens behind the scenes, the Geeky Tools section presents a number of the back office tools in read-only form. These are also good ways to see the semantic technologies in action.

The Geeky Tools section provides access to Search, Browse, Ontology, and Export (see next) tools.

NOW's Geeky Tools

The NOW Export Function

Open Data Exports

On virtually any display or after any filter selection, there is an “export” button that allows the active data to be exported in a variety of formats. Under Geeky Tools it is also possible to export whole datasets or slices of them. Several key formats are supported.

Some of these are serializations that are not standard ones for RDF, but follow a notation that retains the unique RDF aspects.
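
For the standard RDF serializations, a minimal sketch using the rdflib Python library shows the general idea; the namespace and record below are hypothetical stand-ins for NOW data, and the portal's own export code is not reproduced here.

```python
# Minimal sketch of exporting RDF data in several standard serializations.
from rdflib import Graph, Literal, Namespace, RDF

NOW = Namespace("http://example.org/now/")   # hypothetical namespace
g = Graph()
g.add((NOW["neighbourhood/1"], RDF.type, NOW.Neighbourhood))
g.add((NOW["neighbourhood/1"], NOW.name, Literal("Armstrong Point")))

# Serialize the same small graph into three common RDF formats
print(g.serialize(format="turtle"))
print(g.serialize(format="xml"))
print(g.serialize(format="nt"))
```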

Some Early Lessons

Though the technical aspects of the NOW site have been ready for quite some time, with limited staff and budget it took City staff some time to convert all of its starting datasets and to learn how to develop and manage the site on its own. As a result, some of the design decisions made a couple of years back now appear a bit dated.

For example, the host content management system is Drupal 6, though Drupal 8 is getting close to its own release. Similarly, some of the display widgets are based on Flash, which Adobe announced last year it will continue to maintain, but will no longer develop. In the two years since design decisions were originally made, mobile apps, smartphones and tablets have also grown tremendously in importance.

These kinds of upgrades are a constant in the technology world, and apply to NOW as well. Fortunately, the underlying basis of the entire portal, in both its data and its stack, was architected to enable eventual upgrades.

Another key aspect of the site will be the degree to which external parties contribute additional data. It would be nice, for example, to see the site incorporate events announcements and non-City information on commercial and non-profit services and facilities.

Conclusion

Structured Dynamics is proud of the suitability of our OSF technology stack and is impressed with all the data that is being exposed. Our informal surveys suggest this is the largest open data portal by a major city worldwide to be released to date. It is certainly the first to be powered exclusively by semantic technologies.

Yet, despite those impressive claims, we submit that the real achievement of this project is something different. The fact that this entire portal is fully maintained and operated by the City’s own internal IT staff is a game changer. The IT staff of the City of Winnipeg had no prior internal semantic Web knowledge, nor any knowledge of RDF, OWL or the other underlying technologies used by the portal. What they had was a vision of their project and what they wanted. They placed significant faith in, and made a commitment to master, the OSF technology stack and the underlying semantic Web concepts and principles to make their vision a reality. Much of SD’s 430+ documents on the OSF TechWiki are a result of this collaborative technology transfer between us and the City.

We are truly grateful that the City of Winnipeg has taken open source and open data quite seriously. In our partnership with them they have been extremely supportive of what we have done to progress the technology, release it as open source, and then to document our lessons and experiences on the TechWiki for other parties to utilize. The City of Winnipeg truly shows enlightened government at its best. Thank you, especially to our project managers, Kelly Goldstrand and Don Conolly.

Structured Dynamics has long stated its philosophy as, “We are successful when we are no longer needed.” We’re extremely pleased and proud that the NOW portal and the City of Winnipeg show this objective is within realistic reach.

Posted: May 15, 2013

Thinking About the Interstices of the Journey

It actually is a dimmer memory than I would like: the decision to begin a blog eight years ago, nearly to the day ([1]). Since then, every month, and often many more times per month, I have posted the results of my investigations or ramblings, mostly like clockwork. But, in a creeping realization, I see that I have not posted any new thoughts on this venue for more than two months! Until that hiatus, I had been biologically consistent.

Maybe, as some say, and I don’t disagree, the high-water mark of the blog has passed. Certainly blog activity has dropped dramatically. The emergence of ‘snippet communications’ now appears dominant based on messages and bandwidth. I don’t loathe it, nor fear it, but I find a world dominated by 140 characters and instant babbling mostly a bore.

From a data mining perspective — similar to peristalsis or the wave in a sports stadium — there is worth in the “crowd” coherence/incoherence and spontaneity. We can see the waves, but most are transient. I actually think that broad scrutiny helps promote separating the wheat from chaff. We can expose free radicals to the sanitizing effect of sunlight. Yet these are waves, only very rarely trends, and most generally not truths. That truth stuff is some really slippery stuff.

But, that is really not what is happening for me. (Though I really live to chaw on truth.) Mostly, I just had nothing interesting to say, so there was no reason to blog about it. And, now, as I look at why I broke my disciplined approach to blogging and why it has gone on hiatus, I am still left scratching my head a bit as to why my pontifications stalled.

Two obvious reasons are that our business is doing gangbusters, and it is hard to sneak away from company good-fortune. Another is that my family and children have been joyously demanding.

Yet all of that deflects from the more relevant explanation. The real reason, I think, that I have not been writing more recently actually relates to the circumstance of semantic technologies. Yes, progress is being made, some instances are notable, but the general “semantic web” or “linked data” enterprise is stalled. The narrative for these things — let alone their expression and relevance — needs to change substantially.

I feel we are in the midst of this intense interstice, but the framing perspective for the next discussions has yet to emerge.

The strange thing about that statement is not the basis in semantic technologies, which are now understood and powerful, but the incorporation of these advantages into enterprise practices and environments. In this sense, semantic technologies are now growing up. Their logic and role are clear and explainable, but how they fit into corporate practice with acceptable maintenance costs is still being discovered.