Posted: July 16, 2014

[Image: Battle of Niemen, WWI, photo from Wikimedia]

Are We Losing the War? Was it Even the Right One?

Cinephiles will readily recognize Akira Kurosawa’s Rashomon, the 1950 film, and in the 1960s one of the most popular book series was Lawrence Durrell’s The Alexandria Quartet. Both, each in its own way, tried to get at the question of what is truth by telling the same story from the perspective of different protagonists. Whether you saw the movie or read the books, you know the punchline: the truth was very different depending on the point of view and experience — including self-interest and delusion — of each protagonist. All of us recognize this phenomenon of the blind men’s view of the elephant.

I have been making my living and working full time on the semantic Web and semantic technologies now for a full decade. So has my partner at Structured Dynamics, Fred Giasson. Others have certainly worked longer in this field. The original semantic Web article appeared in Scientific American in 2001 [1], and the foundational Resource Description Framework data model dates from 1999. Fred and I have our own views of what has gone on in the trenches of the semantic Web over this period. We thought a decade was a good point to look back, share what we’ve experienced, and discover where to point our next offensive thrusts.

What Has Gone Well?

The vision of the semantic Web in the Scientific American article painted a picture of globally interconnected data, leveraged by agents or bots designed to make our lives easier and more automated. However, by the time I got directly involved, nearly five years after standards first started to be published, Tim Berners-Lee and many leading proponents of RDF were beginning to shift focus to linked data. The agents, automation, and ontologies of the initial vision were being downplayed in favor of effective means to publish and consume data based on RDF. In many ways, linked data resembled a re-branding.

This break had been coming for a while, memorably captured by a 2008 ISWC panel session led by Peter F. Patel-Schneider [2]. This internal division of viewpoint likely split effort that would have been better spent on proselytizing and improving tools, and it diverted energy into internal squabbles. While many others have pointed to the tactical mistake of using an XML serialization for early versions of RDF as a key factor in slowing initial adoption, a factor I agree was at play, my own suspicion is that the philosophical split taking place in the community was the heavier burden.

Whatever the cause, many of the hopes from the heady days of the initial vision have not been realized over the past fifteen years, though there have been notable successes.

The biomedical community has been the shining exemplar for data interoperability across an entire discipline, with earth sciences, ecology and other science-based domains also showing interoperability success [3]. Families of ontologies accompanied by tooling and best practices have characterized many of these efforts. Sadly, though, most other domains have not followed suit, and commercial interoperability is nearly non-existent.

Almost all of the remaining success has resided in single-institution data integration and knowledge representation initiatives. IBM’s Watson and Apple’s Siri are two amazing capabilities run and managed by single institutions, as is Google’s Knowledge Graph. Also, some individual commercial and government enterprises, willing to pay for support from semantic technology experts, have shown success in data integration using RDF, SKOS and OWL.

We have seen the close kinship between natural language, text, and Q&A and the semantic Web, also demonstrated by Siri and more recent offshoots. We have seen a trend toward pairing high-performing open source text engines, notably Solr, with RDF and triple stores. Recommendation systems have shown some success. Linked data publishing has also had some notable examples, including the first of the lot, DBpedia, with certain institutional publishers (such as the Library of Congress, Eurostat, The Getty, Europeana, and the OpenGLAM community [galleries, libraries, archives, and museums]) showing leadership and committing significant vocabularies to linked data form.

On the standards front, early experience led to new and better versions of the core languages: the SPARQL query language (SPARQL 1.1 greatly improved on the original over the last decade and appears to be one capability that sells triple stores), RDF 1.1 and OWL 2. Certain open source tools have become prominent, including Protégé, Virtuoso (open source) and Jena (among unnamed others, of course). At least in the early part of this history, tool development was rapid and flourishing, though the pace of innovation has since dropped substantially, according to my Sweet Tools tracking database.
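
As a small illustration of why SPARQL 1.1 matters, features such as property paths and aggregates make queries practical that were awkward or impossible in SPARQL 1.0. The sketch below uses the Python rdflib library against a toy in-memory graph; the ex: vocabulary and data are hypothetical, not drawn from any real dataset.

```python
from rdflib import Graph

# A toy concept graph; any Turtle data would do.
data = """
@prefix ex: <http://example.org/> .
ex:Mammal ex:broader ex:Animal .
ex:Feline ex:broader ex:Mammal .
ex:Cat    ex:broader ex:Feline .
ex:Dog    ex:broader ex:Mammal .
"""

g = Graph()
g.parse(data=data, format="turtle")

# SPARQL 1.1 property paths: everything transitively broader than ex:Cat.
ancestors = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?ancestor WHERE { ex:Cat ex:broader+ ?ancestor . }
""")
for row in ancestors:
    print(row.ancestor)

# SPARQL 1.1 aggregates: count narrower concepts per broader concept.
counts = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?broader (COUNT(?narrower) AS ?n)
    WHERE { ?narrower ex:broader ?broader . }
    GROUP BY ?broader
""")
for row in counts:
    print(row.broader, row.n)
```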

What Has Disappointed?

My biggest disappointments have been, first, the complete lack of distributed data interoperability, and, second, the inability (or unwillingness) of commercial enterprises to embrace and adopt semantic technologies on their own. The near absence of discussion about instance records and their attributes helps frame the current maturity of the semantic Web: it has yet to crack the real nuts of data integration and interoperability across organizations. Again, with the exception of the biomedical community, neither in the linked data realm nor in the broader semantic Web can we point to information based on semantic Web principles being widely shared between systems and organizations.

Some in the linked data community have explicitly acknowledged this. The abstract for the upcoming COLD 2014 workshop, for example, states [4]:

. . . applications that consume Linked Data are not yet widespread. Reasons may include a lack of suitable methods for a number of open problems, including the seamless integration of Linked Data from multiple sources, dynamic discovery of available data and data sources, provenance and information quality assessment, application development environments, and appropriate end user interfaces.

We have written about many issues with linked data, ranging from the use of improper mapping predicates, to the difficulty of publishing it, to dereferencing URIs on the Web, since they are sparse and not always properly implemented [5]. But ultimately, most linked data is just instance data that can be represented in simpler attribute-value form. By shunning a knowledge representation language (namely, OWL) at the processing end, we have put too much burden on what are really just instance records. Linked data does not get the balance of labor right. It ignores the reality that data consumers want actionable information more than the ability to click from data item to data item, with overall quality reduced to the lowest common denominator. If a publisher has the interest and capability to publish quality linked data, great! It should become part of the data ingest pool, and the data becomes easy to consume. But to insist on linked data across the board creates unnecessary barriers. Linked data growth has not nearly kept pace with broader structured data growth on the Web [6].
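
To illustrate the point about instance records, here is a minimal sketch, using Python and rdflib with a made-up record, showing how a typical linked data instance boils down to simple attribute-value pairs once it has been retrieved.

```python
from collections import defaultdict
from rdflib import Graph, URIRef

# A hypothetical linked data instance record in Turtle.
data = """
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:bob a foaf:Person ;
    foaf:name "Bob Smith" ;
    foaf:mbox <mailto:bob@example.org> ;
    ex:employer ex:AcmeCorp .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Flatten the record for one subject into plain attribute-value pairs.
record = defaultdict(list)
for _, predicate, obj in g.triples((URIRef("http://example.org/bob"), None, None)):
    record[str(predicate)].append(str(obj))

for attribute, values in record.items():
    print(attribute, values)
```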

At the enterprise level, the semantic technology stack is hard for newcomers to grasp and understand. RDF and OWL awareness and understanding are nearly nil in companies without prior semantic Web experience, or 99.9% of all companies. This is not a failure of the enterprises; it is a failure of us, the advocates and suppliers. While we (Structured Dynamics) have developed and continue to refine the turnkey Open Semantic Framework stack, and have spent more effort than most in documenting and explicating its use, the systems are still too complicated. We combine complicated content management systems as user front-ends with a complicated semantic technology stack that needs to be driven by a complicated (to develop) ontology. And we think we are doing some of the best technology transfer around!

Moreover, while these systems are good at integrating concepts and schema, they are virtually silent on the question of actual data integration. It is shocking to say, but the semantic Web has no vocabularies or tools sufficient to enable data items for the same entity from two different datasets to be combined or reconciled [7]. These issues can be solved within an individual enterprise, but the system again breaks when distributed interoperability is the goal. General Web-based inconsistencies, such as in HTML coding or MIME types, impose further hurdles on distributed interoperability. These are some of the reasons why we see the successes (generally) in the context of single institutions, as opposed to anything that is yet truly Web-wide.
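
To make concrete what “combining or reconciling data items for the same entity” entails, here is a toy sketch in plain Python. It is emphatically not the missing tooling, just an illustration of the attribute mapping and conflict handling any real solution must provide; all records and attribute names below are invented.

```python
# Two hypothetical records describing the same entity, from different datasets.
record_a = {"name": "Acme Corporation", "employees": 120, "city": "Des Moines"}
record_b = {"label": "Acme Corp.", "staff_count": 118, "founded": 1985}

# The missing piece the text describes: a shared statement of which
# attributes mean the same thing. Here it is just a hand-built mapping.
attribute_map = {"label": "name", "staff_count": "employees"}

def reconcile(a, b, mapping):
    """Merge two attribute-value records, noting conflicts rather than resolving them."""
    merged, conflicts = dict(a), []
    for attr, value in b.items():
        canonical = mapping.get(attr, attr)
        if canonical in merged and merged[canonical] != value:
            conflicts.append((canonical, merged[canonical], value))
        else:
            merged.setdefault(canonical, value)
    return merged, conflicts

merged, conflicts = reconcile(record_a, record_b, attribute_map)
print(merged)
print(conflicts)  # differing names and employee counts still need rule-based or human review
```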

These points, as is often the case with software-oriented technologies, come down to a disappointing state of tooling. Markets drive developer interest, and market share has been disappointing; thus, fewer tools. Tool interest comes from commercial engagements, and not generally from grants, which are the major source of semantic Web funding, particularly in the European Union. Pragmatic tools that solve real problems in user adoption are rarely a sufficient basis for getting a Ph.D.

The weaknesses in tooling extend from basic installation, to configuration, unit and integrated tests, data conversion and lifting, and, especially, all things ontology. Weaknesses in ontology tooling include (critically) mapping, consistency and coherency checking, authoring, managing, version control, re-factoring, optimization, and workflows. All of these issues are solvable; they are standard software challenges. But it is hard to conquer markets largely with the wrong army pursuing the wrong objectives in response to the wrong incentives.

Yet, despite the weaknesses in tooling, we believe we have been fairly effective in transferring technology to our clients. It takes more documentation, more training and, often, accompanying tool development or improvement in the workflow areas critical to the project. Clients need to be told this as well: in these still early stages, successful clients are going to have to expend more staff effort. With reasonable commitment, it is demonstrable that an enterprise can take over and manage a large-scale semantic engagement on its own. Still, for semantic technologies to gain greater market penetration, it will be necessary to lower those commitments.

How Has the Environment Changed?

Of course, over the period of this history, the environment as a whole has changed markedly. The Web today is almost unrecognizable from the Web of 15 years ago. If one assumes that Web technologies tend to have a five year or so period of turnover, we have gone through at least two to three generations of change on the Web since the initial vision for the semantic Web.

The most systemic changes in this period have been cloud computing and the adoption of the smartphone. These, plus the network of workstations approach to data centers, have radically changed what is desirable in a large-scale, distributed architecture. APIs have become RESTful and database infrastructures have become flatter and more distributed. These architectures and their supporting infrastructure — such as virtual servers, MapReduce variants, and many applications — have in turn opened the door to performant management of large volumes of flat (key-value or graph) data, or big data.

On the Web side, JavaScript, just a few years older than the semantic Web, is now dominant in Web pages and is taking on server-side roles (such as through Node.js). In turn, JSON has grown in popularity as a form of data representation and transfer and is being adapted to the semantic Web (along with efforts to codify CSV). Mobile, too, affects the Web side because of the need for multiple-platform deployments, touchscreen use, and different user interface paradigms and layout designs. The app ecosystem around smartphones has become a huge source of change and innovation.
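
JSON-LD is the main vehicle for that adaptation: ordinary-looking JSON plus a context that maps its keys onto RDF terms. The sketch below parses a hypothetical JSON-LD document into triples with Python’s rdflib, assuming a version of rdflib with JSON-LD support (built in since rdflib 6; earlier versions need the rdflib-jsonld plugin).

```python
import json
from rdflib import Graph

# A hypothetical JSON-LD document: plain JSON with an @context.
doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice/",
}

g = Graph()
g.parse(data=json.dumps(doc), format="json-ld")  # requires JSON-LD support in rdflib

for subject, predicate, obj in g:
    print(subject, predicate, obj)
```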

Extremely germane to the semantic Web — indeed, for artificial intelligence overall — has been the emergence of knowledge-based AI (KBAI). The marrying of electronic Web knowledge bases — such as Wikipedia or internal ones like the Google search index or its Knowledge Graph — with improvements in machine-learning algorithms is systematically mowing down what used to be called the Grand Challenges of computing. Sensors are also now entering the picture, from our phones to our homes and our cars, which exposes the higher-order requirement for data integration combined with semantics. NLP toolkits have improved in accuracy and execution speed; many semantic tasks such as tagging, categorizing or question answering already perform at acceptable levels for most projects.

On the tooling side, nearly all the building blocks for what needs to be done next are available in open source, with some platform areas quite functional (including OSF, of course). We have also been successful in finding clients who agree to open source the development work we do for them, since they benefit from the open source development that went on before them.

What Did We Set Out to Achieve?

When Structured Dynamics entered the picture, many tools were already available and the core languages had been released. Our view of the world at that time led us to adopt two priorities for what we thought might be a five-year plan. We have achieved the objectives we set for ourselves then, though it has taken us a couple of years longer than planned.

One priority was to develop a reference structure for concepts to serve as a “grounding” basis for relating datasets, vocabularies, schema, taxonomies, or ontologies. We achieved this with our first commercial release (v 1.00) of UMBEL in February 2011. Subsequent to that we have progressed to v 1.05. In the coming months we will see two further major updates that have been under active effort for about eight months.

The other priority was to create a turnkey foundation for a semantic enterprise. This, too, has been achieved, across many releases. The Open Semantic Framework (OSF) is now in version 3.00, backed by a nearly 500-article training documentation and technical wiki. Support tooling now includes automated installation, testing, and data transfer and synchronization.

Because our corporate objectives were largely achieved, it was time to look at lessons learned and set new directions. This article, in part, is a result of that process.

How Did Our Priorities Evolve Over the Decade?

I thought it would be helpful to use the content of this AI3 blog to track how concerns and priorities changed for me and Structured Dynamics over this history. Since I started the blog quite soon after my entry into the semantic Web, the record of my perspectives is coterminous with that period and rather complete.

The fifty articles below trace my evolution in knowledge and skills, as well as a progression from structured data to the semantic Web. These 50 articles represent about 11% of all articles in my chronological archive; they were selected as being the most germane to the question of evolution of the semantic Web.

After an early ramp-up, most of the formative discussion below occurred in the early years. Posts have declined most recently as implementation has taken over. Note that most of the links below have PDFs available from their main pages.

2014

2013

2012

2011

2010

2009

2008

2007

2006

2005

The early years of this history were concentrated on gathering background information and getting educated. The release of DBpedia in 2007 showed how knowledge bases would become essential to the semantic Web. We also identified that a lack of shared reference concepts was making it difficult to “ground” different semantic Web datasets or schema to one another. Another key theme was the diversity of native data structures on the Web, but also how all of them could be readily represented in RDF.

By 2008 we had begun to study the logical underpinnings of the semantic Web as we came to understand how it should be practiced. We also began studying Web-oriented architectures as key design guidance going forward. These themes continued into 2009, now informed by clients and applications, which expanded our understanding of requirements (and, sometimes, shortcomings) in the enterprise marketplace. The importance of an open world approach to the inherently open nature of knowledge management was cementing a clear sense of the role and fit of semantic solutions in the overall information space. The general community shift to linked data was beginning to surface worries.

2010 marked a shift for us to become more of a popularizer of semantic technologies in the enterprise, useful for attracting and informing prospects. The central role of ontologies as the guiding structures for OSF (either as codified knowledge structures or as instruction sets for the platform) led to the realization that generic software could be designed to be re-used in almost any knowledge domain by simply changing the data and ontologies that guide it. This increased our efforts in ontology tooling and training, now geared more to the knowledge worker. The importance of groundings for aligning schema and data caused us to work hard on UMBEL in 2011 to get it to a commercial release state.

All of these efforts were converging on design thoughts about the nature of information and how it is signified and communicated. The basis of an overall philosophy regarding our work emerged around the teachings of Charles S. Peirce and Claude Shannon. Semantics and groundings were clearly essential to conveying accurate messages. Simple forms, so long as they are correct, are always preferred over complex ones, because message transmittal is more efficient and less subject to losses (inaccuracies). How these structures could be represented in graphs affirmed the structural correctness of the design approach. The now-obvious re-awakening of artificial intelligence helps to put the semantic Web in context: a key subpart, but still a subset, of artificial intelligence. The percentage of formative articles directly related to the semantic Web has dropped considerably over these last couple of years, as the emphasis continues to shift to tech transfer.

What Else Did We Learn?

Not all lessons learned warranted an article on their own. So, we have also reflected on what other lessons we learned over this decade. The overall theme is: Simpler is better.

Distributed data interoperability across the Web is a fundamental weakness. There are no magic tricks for integrating data. Data mapping and integration will always require massaging, and each data integration activity needs its own solution. However, it can be greatly helped with ontologies and with better tooling.

In keeping with the lesson of grounding, a reference ontology for attributes is missing. It is needed as a bridge across disparate datasets describing similar entities, or the same entities with different attributes. It is also a means to reduce the pairwise combinatorial problem of integrating multiple datasets. And, whatever is done in the data integration area, an open world approach will be essential given the open nature of knowledge.
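
A toy sketch of what such an attribute grounding buys: if each dataset is mapped once to a shared set of reference attributes, any pair of datasets can be aligned through the reference instead of mapping every pair directly. The datasets, attribute names and mapping below are all hypothetical, shown here in Python.

```python
# Hypothetical attribute names used by three independent datasets,
# keyed by the shared reference attribute they correspond to.
dataset_attrs = {
    "dataset_a": {"name": "comp_name", "employees": "num_staff"},
    "dataset_b": {"name": "label",     "employees": "staff_count"},
    "dataset_c": {"name": "org_title", "employees": "headcount"},
}

# Each dataset is mapped once to the reference (3 mappings), instead of
# mapping every pair of datasets directly (N*(N-1)/2 mappings as N grows).
def to_reference(record, source):
    """Rename a record's attributes to the shared reference attributes."""
    reverse = {local: ref for ref, local in dataset_attrs[source].items()}
    return {reverse.get(k, k): v for k, v in record.items()}

print(to_reference({"label": "Acme Corp.", "staff_count": 118}, "dataset_b"))
# -> {'name': 'Acme Corp.', 'employees': 118}
```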

There is good design and best practice for distributed architectures. The larger these installations become, the more important it is to use a lightweight, loosely coupled design. RESTful Web services and their interfaces are key. Simpler services with fewer functions can be designed to complement one another and increase overall throughput and effectiveness.
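
To make the point about small, single-purpose services concrete, here is a minimal sketch of one narrowly scoped RESTful endpoint, written in Python with Flask purely as an illustration (OSF’s own Web services are implemented differently). The endpoint name, port and lookup data are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory lookup standing in for a real concept store.
CONCEPTS = {
    "winnipeg": "http://example.org/concept/Winnipeg",
    "museum": "http://example.org/concept/Museum",
}

# A deliberately small service: it only resolves a label to a URI. Other
# equally small services (search, CRUD, export) would sit beside it and be
# composed over plain HTTP rather than bundled into one monolith.
@app.route("/resolve")
def resolve():
    label = request.args.get("label", "").lower()
    uri = CONCEPTS.get(label)
    if uri is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"label": label, "uri": uri})

if __name__ == "__main__":
    app.run(port=5000)
```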

Functional programming languages align well with the data and schema of knowledge management functions. Ontologies, as structures, also fit well with functional languages. The ability to create DSLs should continue to improve, bringing the knowledge management function directly into the hands of its users, the knowledge workers.

In a broader sense, as alluded to above, the semantic Web is but a set of concepts. There are multiple ways to use it. It can be leveraged without requiring “core” semantic Web tools such as triple stores. Solr can act as a semantic store because semantics, NLP and search are naturally married. But the semantic Web, in turn, needs to become re-embedded in artificial intelligence, now backed by knowledge bases, which are themselves creatures of the semantic Web.
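
As one example of Solr in that “semantic store” role, the sketch below issues a faceted query over documents whose fields were derived from RDF instance records. It uses Python’s requests library against a hypothetical local Solr core; the core name and field names are assumptions, not part of any particular OSF release.

```python
import requests

# Hypothetical Solr core ("entities") whose documents carry fields derived
# from RDF instance records: a type field and a full-text field, for example.
SOLR_SELECT = "http://localhost:8983/solr/entities/select"

params = {
    "q": "text:winnipeg",        # free-text search over indexed literals
    "fq": "type:Neighbourhood",  # filter by the entity's (ontology-derived) type
    "facet": "true",
    "facet.field": "type",       # facet counts double as a crude class browser
    "wt": "json",
    "rows": 10,
}

response = requests.get(SOLR_SELECT, params=params, timeout=10)
results = response.json()

for doc in results["response"]["docs"]:
    print(doc.get("id"), doc.get("type"))
print(results["facet_counts"]["facet_fields"]["type"])
```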

Design needs to move away from linked data or the semantic Web as the goals. The building blocks are there, though perhaps not yet combined or expressed well. The real improvements now to the overall knowledge function will result from knowledge bases, artificial intelligence, and the semantic Web working together. That is the next frontier.

Overall, we have perhaps been fighting the wrong war for the wrong reasons. Linked data is certainly not an end in itself, and it mostly appears to represent work rather than innovation. The semantic Web is no longer the right war either, because improvements there will not come so much from arguing semantic languages and paradigms. Learning how to master distributed data integration will teach the semantic Web much, and coupling artificial intelligence with knowledge bases will do much to improve the most labor-intensive stumbling blocks in the knowledge management workflow: mappings and transformations. Further, these same bases will extend the reach into analytical and statistical realms.

The semantic Web has always been an infrastructure play to us. On that basis, it will be hard to ever judge market penetration or dominance. So, maybe in terms of a vision from 15 years ago the growth of the semantic Web has been disappointing. But, for Fred and me, we are finally seeing the landscape clearly and in perspective, even if from a viewpoint that may be different from others’. From our vantage point, we are at the exciting cusp of a new, broader synthesis.

NOTE: This is Part I of a two-part series. Part II will appear shortly.

[1] Tim Berners-Lee, James Hendler, and Ora Lassila, “The Semantic Web,” in Scientific American 284(5): pp 34-43, 2001. See http://www.scientificamerican.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21&catID=2.
[2] For those with a spare 90 minutes or so, you may also want to view this panel session and debate that took place on “An OWL 2 Far?” at ISWC ’08 in Karlsruhe, Germany, on October 28, 2008. The panel was chaired by Peter F. Patel-Schneider (Bell Labs, Alcatel-Lucent) with panel members Stefan Decker (DERI Galway), Michel Dumontier (Carleton University), Tim Finin (University of Maryland) and Ian Horrocks (University of Oxford), with much audience participation. See http://videolectures.net/iswc08_panel_schneider_owl/
[3] Open Biomedical Ontologies (OBO) is an effort to create controlled vocabularies for shared use across different biological and medical domains. As of 2006, OBO formed part of the resources of the U.S. National Center for Biomedical Ontology (NCBO). As of the date of this article, there were 376 ontologies listed on the NCBO’s BioOntology site. Both OBO and BioOntology provide tools and best practices.
[4] Fifth International Workshop on Consuming Linked Data (COLD 2014), co-located with the 13th International Semantic Web Conference (ISWC) in Riva del Garda, Italy, October 19-20.
[7] See the thread on the W3C semantic web mailing list beginning at http://lists.w3.org/Archives/Public/semantic-web/2014Jul/0129.html.
Posted: June 30, 2014

Structured Dynamics Moves to Integrate Key Initiatives

Structured Dynamics is pleased to announce its new UMBEL Web site and set of Web services.

Our first release of the UMBEL site occurred in 2007, while UMBEL was still under development. That site used its own homegrown HTML. It was followed in 2008 by the addition of our own Web services. Those services were well received, which led Structured Dynamics to develop the more general structWSF Web services framework (most recently updated as the OSF Web Services). We subsequently migrated the earlier UMBEL Web services to this more general framework, and also migrated to Drupal as the standard content management and Web site component for OSF.

For most purposes, including all client work to date, our OSF framework (Web services + Drupal 7) has been performant and has met client site needs. However, the operation of the UMBEL Web services was often problematic after moving to the Drupal (full OSF) version. Unfortunately, we have seen both performance and stability problems, though calculations over a full 28,000-node graph are a challenge in any environment.

Since the UMBEL structure was an order of magnitude larger than our client work to date, we frankly adopted a posture of occasional monitoring and reboots to keep the UMBEL Web site up. This posture did not limit the use of UMBEL for general browsing purposes, but it did limit its usefulness as a working API.

Because the cobbler’s son is often the last to get shoes, we let the UMBEL Web site chill to a degree in the background. But now, with other imperatives underway and some dedicated time to look directly at the performance of larger-scale ontologies, we have looked at these items anew. The report card on our current evaluations is contained in the newly released UMBEL Web site and services, which I summarize and provide context for below. What emerges is an interesting story of discovery and growth.

Basis of the New Site

The new UMBEL site and its underlying 28,000-concept graph are consistent with the OSF layered architecture. However, the Web services are now written in Clojure and the Web site framework uses Bootstrap and plain ol’ HTML. These structural and foundational changes have been championed by Fred Giasson, SD’s chief technology officer, who is also writing a blog series on Clojure. He also has a current post covering the technical basis of these UMBEL site and service changes.

In essence, we have learned two important things about our prior practice with respect to making UMBEL Web services broadly available. First, for UMBEL, we do not need or want our standard configuration of having a Drupal front-end as the interface into OSF. Access to a knowledge graph does not need, and is ill-served by, a complicated interface standing atop a large-scale concept model. APIs and Web services are the most important interaction points with the UMBEL knowledge graph, not a user-oriented Web site.

Second, in the various phases of our work, we had come to embrace the idea of ontology-driven applications (what we have termed ODapps). The compelling vision behind such structures is to place the emphasis on knowledge structures and data, rather than on more software. Once one begins to unpack that vision, it becomes clear that programming languages that treat “code as data” might be one way to stay consistent with it.

Seeking a Sense of Harmony

For years I have been writing about data integration and interoperability and our company has been devoted to the topic. I have written extensively about the importance of RDF and description logics to how we organize and represent data. We were also some of the first to supplement RDF with a faceted text-search engine (Solr) to provide the most responsive query environment across structured to unstructured data. We have also adopted ontologies and the OWL 2 (plus SKOS) languages as standards to both foster and enable interoperability. We have explored native data structs to understand how wild forms of information can be efficiently pipelined into interoperable RDF and text forms.

All of this points to the ideal of the democratization of the information function in the enterprise. In other words, to the idea that how data structures get organized and represented (the ontology side of things) is something that knowledge workers can do themselves, rather than accepting the bottleneck of IT and programmers.

This is well and good except there is a critical “last mile” between data representation and data usefulness. This “last mile” deals with how actual data gets manipulated and then organized and presented (visualized). Query responses, reports, analysis and maps continue to be the choke points between knowledge workers and their IT support. And one need not frame this entirely from an enterprise perspective: these same challenges exist for the individual researcher or the small organization.

So, while one can focus on data and its organization and representation, until we address this “last mile” problem we are still not likely addressing the largest source of frustration and lost opportunities in the knowledge function.

The reason that simple data struct forms and tools like spreadsheets continue to be popular is that they are empirically the best tools for the “last mile”. Web forms and services are increasingly showing their strengths in this realm.

Once one steps back and looks at the entire cycle from basic datum to actionable knowledge, it is clear that the question of the data model is but one portion of the challenge. The remaining challenge is how (now) accessible information can be placed into context and acted upon. Further, if one premise is the democratization of the information function, then the challenge is also how to provide productive capabilities for that last mile to the knowledge worker. Productivity is enhanced when there are the fewest channels, and the least distortion, between the signal (the problem) and the user.

Fred, in his investigation of functional languages, clearly saw that bringing the language of code (programming) closer to the language of data (knowledge workers, as expressed in our RDF world view) was one means to reduce the number and lossiness of the channels between problem (signal) and solution. A world view premised on the efficient representation and interoperability of data should logically support a coding (instructional or language) basis aligned to those same problems. Moreover, since software guides the actual computer operations, a form of software that mirrors the nature of the data should also provide a more performant framework for moving forward. In technical terms, this is known as homoiconicity.

Whether one looks to the intellectual foundations of Charles S. Peirce or Claude Shannon (and we look to both), one can see that the ideas of signs and information theory mean finding both data representations and code that minimize communication losses and promote the accurate transfer of the message. Lossless data transmission is one contributor to that vision, but so too is a functional representation for how the information is to be processed and transformed that aligns most closely with the information at hand.

Ergo, a better model for data is not enough. A better model of how to manipulate that data (that is, software) is also needed, one that aligns with the idea of coherence and structure in the underlying information. For our purposes, we have chosen Clojure as the functional language basis for these new UMBEL Web services. Not only is it performant, but it aligns well with the creation of domain-specific languages (DSLs), which also promise to democratize the computing function for the knowledge worker.

Bringing the Pieces Together

Fred and I founded Structured Dynamics a bit more than five years ago. But, we had worked together much earlier on UMBEL and Zitgist. For nearly ten years now, we have episodically emphasized a few different initiatives and passions.

One of those passions has been the structure of data and information. It is this perspective that brought us to RDF and data structs (and our irON efforts) at various times. The idea of structure is a basis for our company name, and represents the belief that structure can be brought to unstructured forms (via tagging, for example). Structure is perhaps the most common notion or concept in my own writings for a decade.

Another need has been the idea of making semantic technologies operational. We have been keen researchers of the tools space and algorithms and such since the beginning. We observed early on that many innovative and open source semantic programs existed, but most were the result of EU grants or academic efforts elsewhere. Thousands of tools existed, but very few had either been evaluated or stress-tested. By bringing together the best-of-class tools and integrating them, we could begin to provide a useful semantic platform for enterprises. This motivation was the genesis for the Open Semantic Framework, and it has been the major source of our client support since SD was founded. We have finally created an enterprise-capable platform and have done much to transfer its technology. But these concepts are difficult, and much remains to be done before semantic technologies are a standard option for enterprises.

Still, in another vein, our first love and interest has been knowledge bases. We first identified the need for UMBEL years ago, when we perceived that an organizing vocabulary would become an essential glue on the Web. We pursued and studied Wikipedia and how it is informing knowledge bases. Instance data, and how it is represented, is central to how these knowledge bases (KBs) get leveraged going forward.

As a smaller consulting and development boutique, we have needed to be opportunistic about when and where we devoted effort to these pieces. So, over the months and years, we have at various times devoted ourselves to data models and ontologies (structure), the Open Semantic Framework (platform), or UMBEL and Wikipedia (knowledge bases, KBs). Depending on funding and priorities, each of these threads received episodic attention and focus. But, truth is, each piece was developed in (project-level) isolation from the whole. Such piecemeal development was essential until each component achieved an appropriate degree of maturity.

I could say we foresaw some years back that all of these pieces would eventually reinforce and bolster one another. Though there is a small bit of truth in that statement, the way things have actually unfolded has shown, as experience and sophistication have been gained, that there is a synergy in the interplay of these various pieces. The goodness is that the efforts of Structured Dynamics (and of its predecessors) were building inexorably toward the cross-fertilization of these efforts.

Once this kind of realization takes place — that data, code and semantics move hand-in-hand — it becomes logical to look at the entire knowledge ecosystem. For example, it is not surprising that artificial intelligence, now in the informed guise of KB-backed systems, has again come to the fore. It is also not surprising that the software and programming languages we bring to bear directly interact with these concerns. Just as Hadoop and non-relational database systems have become prominent, we should also investigate what kinds of programming languages and constructs may best fit into this brave new information world.

What we have seen from that investigation is that functional languages (with their DSL offspring) somehow fit into the overall equation moving forward. SD has moved from a single-focus endeavor to one explicitly looking at integration and interoperability issues. What we had earlier seen as (largely) independent pieces we now see as fitting into a broader equation of related emphases:

Structure + Platform + KBs + Functional Language = Knowledge Worker-based Interoperability

We are seeing artificial intelligence moving in these directions. As a subset of AI, I suspect we will also see the semantic Web moving in the same direction.

We clearly now have the theory, the data, the understanding of semantics, and languages and data representations that can make these democratic interoperabilities become real. This new UMBEL Web site is the first expression of how these pieces can begin to work together into a compelling, accessible whole.

We welcome you to visit and to take advantage of UMBEL’s fully accessible APIs.

Posted: April 24, 2014

Another Expansion in Documentation for the Open Semantic Framework

The Open Semantic Framework is a complete foundation to bring semantic technology capabilities to the enterprise. OSF has applications from enterprise information integration to collaboration networks and open government. It has been under development since 2009, leveraging a set of robust open source engines and connecting Web services and architecture, and is now in its third major version. OSF is fully integrated as a semantic technology extension to the Drupal content management system.

Structured Dynamics, the developer of OSF, with the generous support of SD clients, has been committed to providing excellent documentation and tech transfer support for OSF since its inception. For example, OSF now has a nearly 500-document technical support library, plus many automated means for installing and testing the OSF stack.

Yet, as we all know, written documentation is not always discovered or read. The paradigm for technology transfer is shifting to online tutorials and screencasts.

In keeping with that trend, SD has committed itself to develop a (hopefully) complete suite of online screencasts and tutorials geared to the nuts-and-bolts of how to install, configure, test, manage and use an OSF installation. Our intent is to aid users to bring semantics into the enterprise without the need for external support or cost.

We call this curriculum of tech transfer screencasts and video tutorials the OSF Academy.

Over the past week we have been releasing the first dozen screencasts in this series. With this foundation, it is now time to make a broader announcement of the OSF Academy.

So, On With it Now

Welcome!

We are on pace to release many dozens of specific screencasts on all use and management aspects of OSF. Please stay tuned over the coming weeks.

You can always see the complete contents of the YouTube channel at the Open Semantic Framework Academy.

Also, as basic grounding, know that the OSF Wiki section on screencasts is another central access point to this content.

The Series Begins

Most of the screencasts are quite specific to certain use aspects of the Open Semantic Framework. However, tutorial #1 provides a useful overview of OSF and of the series.

The Next Ten Screencasts

SD’s CTO, Fred Giasson, is the key demo jockey for most of the OSF Academy screencasts. Many of these screencasts are technical, and all are specific and focused. Access each screencast by number below. There is also a blog post associated with each screencast that provides useful background information and links.


Where Next?

We have nearly four dozen additional screencasts in our plan to round out introductory material to OSF. Please monitor our OSF channel on YouTube to stay on top of these releases.

Posted: February 4, 2014

Some Thoughts on SD’s Gestation of Civic Dynamics

Structured Dynamics (SD) announced yesterday that, in association with its partner Buzzr, it was spinning off a new software company, Civic Dynamics Inc., headquartered in Québec City, Canada. Included in the launch was the introduction of the new company’s Civic Dynamics Platform. CDP is open-source software and supporting systems that assist municipalities in publishing dynamic open government data, and that provide citizens a set of tools for viewing, searching, filtering and analyzing that data.

The announcements of those releases stand on their own; my purpose is not to duplicate them. Rather, now that the efforts needed for the new launch are behind us, I wanted to reflect on why and how such a spin-off occurred in the first place. I think these reflections offer some insight into the imperatives that face new software ventures, especially those geared to enterprise IT.

A Bit of History

It was just about five years ago that Fred Giasson and I began Structured Dynamics. (This was also after a year working together at Zitgist under the sponsorship of OpenLink Software.) Our mission at SD’s inception was to create a workable platform for bringing semantic technology capabilities to enterprises. Our specific interest was in using semantic technologies and RDF to solve the decades-old challenge of information interoperability in larger organizations. By serendipity, we were able to secure an enterprise client on virtually the first day we started SD. That forced us to grapple immediately with the then-current woeful state of semantic technologies for enterprises.

We observed a number of problems at that time. Here is a short list of some of those problems from five years ago, and brief statements of what we initiated to address them:

  • Search — native triple stores at that time were not performant in search, and none captured the full text of documents. Further, semantic search offers unique opportunities in structure and inference. As a result, we were one of the first to adopt Solr for semantic technologies
  • Portal framework — there was a (general) absence of portal front ends that met acceptance in the marketplace. We evaluated and chose Drupal; over time a design choice to have loose coupling with Drupal has transitioned to become more integrated
  • CRUD — basic database management capabilities, such as create, read, update or delete, were not often exposed at the application level. Our choice here was to decouple this access and adopt a distributed design by embracing RESTful web services, endpoints and APIs, all of which were geared to provide a universal abstraction for dealing with all data engines (collectively expressed as a “repository”)
  • Architecture — though complete frameworks had been put forward, mostly by academic researchers, most had short lives and all lacked basic enterprise capabilities. We designed an architecture that favored integration and expansion — largely through APIs — while leveraging existing components. We also at this time made a commitment to open-source for all key components of the architecture
  • Stack — there were no complete software (deployment) stacks. Creating one required assembling fragmented piece parts with gaps, and there certainly were no standard deployment or installation capabilities. Much of SD’s effort over the past five years has been devoted to addressing this gap
  • Access control (security) — virtually all enterprises need to control access to privileged information, and no security existed five years ago for semantic applications. In the early versions of the Open Semantic Framework, the foundation to Civic Dynamics’ CDP platform, we used a simple IP authorization approach based on the interaction of tool (endpoint), dataset and role. Subsequently we have established middleware integrations with third-party security and key-based permission mechanisms when OSF is used standalone
  • Version control — any enterprise content system or repository must also have ways to track revisions and enforce version control. Early semantic technologies completely lacked these considerations. OSF has made progress in integrating with the Drupal revisioning system and in establishing middleware methods for interfacing with third-party version control systems
  • Workflow support — managing enterprise content in general, and more specifically managing the semantic aspects of integration, requires formal workflow and governance procedures. However, historically, and up to and including today, there is zero workflow support in semantic technologies. In fact, there is virtually no discussion of this topic at all. We are only at the beginning stages of incorporating formal workflow methods into OSF. We have development methodologies and best practices, though, and have identified suitable workflow engines to extend the system with formal workflow methods
  • Data ingest — five years ago there was little recognition that data in the wild would not be compliant with W3C standards nor RDF, and as a result demo systems lacked ingest capabilities for legacy information, particularly enterprise database info. OpenLink Software and its Virtuoso system (one of the core engines in OSF) did, however, recognize this need. The OSF design has very much followed this approach of using “converters” or “RDFizers” for getting all wild data forms into a canonical RDF basis internal to the system (see the sketch after this list)
  • Reference vocabularies — the ultimate means of integrating enterprise information to achieve interoperability is premised on semantic approaches and technologies. Yet, apart from some minor vocabularies, there were no suitable vocabularies five years ago in many areas. We have constructed and supported a cross-domain reference vocabulary (UMBEL) and a means of representing instance data and records (irON) since then to redress these gaps
  • Tooling — the means to design, manage, and test components of a semantic enterprise stack were nearly totally lacking, since most early semantic efforts were proofs-of-concept and not production-grade systems. The areas of ontology design and maintenance were (and still somewhat are) weak. We have since developed many new tools, some geared at the user level, with administrator and developer tools including test suites, command-line utilities, and automated installers
  • Templates, widgets and visualization — the highly structured nature of RDF data lends itself well to templating records by type, page layouts by type, and widgets by type, which may be further leveraged using inheritance and inference. The recognition of the role of semantic technologies as publishing platforms did not exist five years ago. Our response has been to develop a template inheritance system and semantic data-driven widgets. Our earliest widgets were based on Flash; the libraries are now migrating to JavaScript (d3.js) and HTML5
  • Lack of documentation — lack of documentation is the bane of most open source projects, and early semantic technologies were most often developed for academic theses or as proofs-of-concept. As a result, documentation of use to practitioners and administrators was totally lacking. SD has made a concerted commitment to improved and complete documentation. Our OSF wiki, with its nearly 500 technical and methodology articles and accompanying images, is one expression of that commitment
  • Lack of enterprise rigor — across all of these fronts, early semantic technologies were clearly not designed and developed with enterprise objectives and use cases in mind. SD’s overall commitment has been to rectify this gap.
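
As a small illustration of the “RDFizer” idea mentioned in the data ingest item above, the sketch below converts a hypothetical CSV export into RDF using the Python rdflib library. The vocabulary, URIs and column names are made up; a production converter would be driven by configurable mappings rather than hard-coded columns.

```python
import csv
import io
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical legacy data: a CSV export from an enterprise database.
legacy_csv = """id,name,city
101,Acme Corporation,Des Moines
102,Globex Inc.,Springfield
"""

EX = Namespace("http://example.org/vocab/")

def rdfize(csv_text):
    """A minimal 'RDFizer': map each row to a subject URI and each column to a property."""
    g = Graph()
    g.bind("ex", EX)
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = URIRef(f"http://example.org/company/{row['id']}")
        g.add((subject, RDF.type, EX.Company))
        g.add((subject, EX.name, Literal(row["name"])))
        g.add((subject, EX.city, Literal(row["city"])))
    return g

print(rdfize(legacy_csv).serialize(format="turtle"))
```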

What we did have five years ago was a growing list of (often) unproven open standards (principally those from the W3C) and a large roster of prototype and research tools [1], most from the academic community. Still, there were some proven engines suitable to a semantic stack (most adopted as core to the Open Semantic Framework), so there were building blocks upon which a complete framework could be based. With the right design and architecture, and appropriate “glue” to tie it all together, it appeared quite feasible to create a working semantic stack suitable for enterprise use. Multi-component, open-source packages — ranging from Alfresco to Talend or Pentaho — were showing the path to such next-generation platforms.

With the development model of an integrated semantic technology stack based on open source components and consistent “glue” in mind, we could then turn our attention to the business model and strategy behind the nascent Open Semantic Framework.

The Business Philosophy

I don’t speak much about my prior ventures because, well, they are in the past. But I have financed ventures via angel funding, venture capital, grants and client revenues. I also have background in ventures ranging across many aspects of enterprise (mostly) and consumer (less so) software.

Our funding prejudice in starting SD was to be self-financed via clients. A customer focus keeps one from getting too abstract or falling in love with innovations for which there may not be a real market. Revenue financing also means that we need not alter business strategy or approach based on a financier’s perspective. Customers call the shots; not the money interests. This funding prejudice has kept us market focused and, as a consequence, profitable since day one.

Our staffing prejudice was to not hire, at least during the framework development phase. Setting the vision for a framework is not a democratic activity, and every hire means less development productivity. To fulfill client work, we have partnered with and employed consultants and sub-contractors, but have not diluted our own efforts by managing employees. We could stay focused by feeding only our own mouths and our vision.

Such narrow bandwidth also carries other implications. We could not take on too many clients at a given time. We needed to be extremely productive and leveraged, finding opportunities wherever we could to re-purpose prior writings or reusing or generalizing code. We also needed to be quite selective in what projects and what clients we chose. When attempting to make progress on a new platform, it is important to not become simply a contract fulfillment shop. Customers have many options for IT contracting or outsourcing; platform development and growth requires a certain self-selection by clients.

Our standard contract emphasizes that (most) efforts are intended to be open source, and our intellectual property clauses make that explicit. At first we did not know how the market might react to this insistence. For prospects serious enough to commit monies to us, however, we have found a good appreciation that open source leads to lower current project costs, because the client is leveraging what has already been developed before. It seems only fair that new developments should be made available to later customers as well. Some of our prior clients are now seeing the lower costs and benefits of leveraging intermediate work when upgrading to the latest versions and functionality.

Our fulfillment prejudice has been to complete work on time and under budget, document and train the customer in the work, and move on. Though we know they are profitable and bread-and-butter for most enterprise vendors, we have not sought recurring annuities from our clients in maintenance fees. By keeping our eye squarely on successful tech transfer, we are disciplining ourselves to document as we go, to provide tooling and support infrastructure as well as application software, and to find efficiencies in fulfillment. Meanwhile, we are able to progress rapidly on our overall development roadmap without getting bogged down in handholding. We would rather teach the customer how to fish than do the fishing for them.

Of course, not all enterprises understand or embrace these philosophies. That is fine under our development approach, where market understanding and refinements drive decisions, not maximizing revenues for an ever-growing staff count. We have been blessed to have new clients arise whenever they are needed, and to have them be real partners with us in furthering the vision. We have actively rejected some customer prospects because the philosophical fit was not good. We have also actively weaned ourselves from some engagements by insisting on sunsets for our support and encouraging more tech transfer and training.

These prejudices may change as we see the underlying Open Semantic Framework nearing fulfillment of its development vision. But, for an open source platform in a hurry (even considering it has been five years!), we believe these philosophies have served us and our clients well.

An Emphasis on the Open Semantic Framework

The net outcome for the Open Semantic Framework has been to emphasize a generic, enterprise-ready design that can be rapidly embraced and adopted by multiple markets. We have called OSF a platform of ontology-driven applications (ODapps). ODapps are modular, generic software applications designed to operate in accordance with the specifications contained in one or more ontologies, and each fulfills a specific generic task. Examples of current ontology-driven apps include imports and exports in various formats, dataset creation and management, data record creation and management, reporting, browsing, searching, data visualization and manipulation, user access rights and permissions, and the like. These applications provide their specific functionality in response to the specifications in the ontologies fed to them.
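
A toy sketch of the ODapp idea: a generic display function whose fields and labels come from an ontology rather than from code, so that only the ontology and data change from domain to domain. The ontology fragment, properties and record below are hypothetical, written in Python with rdflib; actual OSF ODapps are considerably richer than this.

```python
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/ontology/")

# A hypothetical ontology fragment: it declares which properties apply to a
# class and what label to show for each. The generic app reads this, not code.
ontology_ttl = """
@prefix ex:   <http://example.org/ontology/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:population rdfs:domain ex:Neighbourhood ; rdfs:label "Population" .
ex:medianAge  rdfs:domain ex:Neighbourhood ; rdfs:label "Median age" .
"""

ontology = Graph()
ontology.parse(data=ontology_ttl, format="turtle")

def render_record(record, class_uri):
    """Generic 'ontology-driven' display: fields and labels come from the ontology."""
    for prop in ontology.subjects(RDFS.domain, class_uri):
        label = ontology.value(prop, RDFS.label)
        value = record.get(str(prop), "n/a")
        print(f"{label}: {value}")

# The same function works for any domain; only the ontology and data change.
render_record(
    {str(EX.population): 5200, str(EX.medianAge): 37.4},
    EX.Neighbourhood,
)
```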

The ODapp vision underlying the design of OSF means we can leverage an architecture of generic tools to respond to virtually any knowledge application or any enterprise domain. The basic idea is shown by this diagram, which we first published about three years ago:

The Open Semantic Framework can Spawn Many Different Domain Instances


In the five years of OSF development, now at version 3.x (recently announced), we have had the good fortune to have clients and uses in publishing, tech transfer of R&D, group collaboration, health, automotive, air traffic control, sustainability, community indicators and local government. Demand in the latter two areas has been particularly strong. The strength of that market interest was the source of a dilemma for Structured Dynamics.

Unique Demands of Municipal Markets

The idea of rapid and nimble development of a new platform — especially one expressly designed to be generic across multiple domains — does not readily square with focusing on a specific market segment. This disconnect is particularly true for quite unique markets, as is the case for local governments.

In a past life I spent nearly ten years working for a trade association that represents municipally-owned electric utilities. APPA has members ranging from huge municipalities such as Los Angeles, Toronto, Seattle and San Antonio, to the smallest towns and burgs of the plains of North America. In my former role running the R&D and technical programs for this association, I personally interacted with hundreds of these wide-ranging individual communities.

In the larger communities, the electric utilities were separate departments from the local government per se, and were directed by professional utility managers. But for mid-size and smaller communities, there was often close interaction with all municipal departments.

Though sales lead times are long for all enterprise markets, they are particularly long (and often political) in government. Budgets are perennially tight; they need to be proposed, argued with councils and management, and approved before work can begin. Staff are stretched across multiple functions, so ease of use and maintenance are key factors, as are concerns about longer-term support contracts. Portals and Web sites must serve all constituencies, and content and tone need to be suitable for taxpayer-supported venues. Yet, because of the number and diversity of communities [2], across the entire market there is surprising innovation and experimentation. Finding better ways to do more with less is a key motivator in the local government market.

Specifically, in our own use of OSF in this market, we also observed some other unique aspects related to open data and Web sites. What constitutes open data, and whether and how to make it “open,” varies widely by community. Capturing local needs and perspectives often leads to comparatively high costs in theming and customizing the Web sites. The lack of dedicated and trained staff to care for and feed a new Web site is always a challenge.

Structured Dynamics, with its generic platform interests and avoidance of staffing, is clearly not the right vehicle to pursue this market. Specific focus on the unique aspects of the local government market is required, plus modifications and specializations of the platform to address government needs. Possible integration or incorporation of standard local government Web site(s) may also be required. Though we were seeing keen interest from this market, addressing it properly required a different vehicle with different venture imperatives.

Doing Justice to the Local Government Market

Early on, our good colleague and friend, Steve Ardire, helped point out some gaps in our business development. We saw that three things were missing within Structured Dynamics itself to do the local government and open government data markets justice. First, we needed a dedicated company to focus solely on this market. Second, we needed an executive familiar with the OSF platform and municipal government to head the effort. And, third, we somehow needed a way to overcome the time and costs associated with tailoring the portal for local community needs.

It was actually the last of these needs that yielded the first solution. We were approached about eighteen months ago by Ed Sussman, the CEO of Buzzr, about possibilities of partnering for the local government market. Buzzr has a one-click solution for theming and customizing individual Web portals, built around the Drupal content management system (CMS). Buzzr, a NYC-based company, has impeccable Drupal chops, having been co-founded by one of the leading Drupal shops, Lullabot. Buzzr has also proven the applicability of its approach to specific verticals, including retail and education. That Buzzr found us and saw a good fit for the municipal market made for a formative discussion. We welcomed Buzzr’s outreach because their approach squarely addresses one of the cost and effort sticking points we had been observing.

When Ed first contacted us, the OSF platform was still not sufficiently mature to be a market foundation. We needed more time to refine the platform, as well as to gain more market insight from use and use cases. Fortunately, Ed and Buzzr kept their interest strong while we refined things in the background. By the time we were able to address the other missing items, Buzzr was there to partner with us on the new venture.

Our second requirement was met by hiring Kelly Goldstrand, formerly the project manager for the NOW (Neighbourhoods of Winnipeg) portal, to head up the venture’s business development. NOW is one of the flagship installations of OSF. Kelly’s career focus has been on service planning, delivery and evaluation in the areas of community health, protection and development. She has significant management experience in local government and clearly understands OSF; her guidance was pivotal in much of the system’s functionality. She also has a proven track record of mentoring projects through local approvals and training city staff in the use and maintenance of new technologies. After taking early retirement, Kelly was ready to consider our opportunity, and she graciously agreed to join us.

The last piece of the puzzle was forming the new venture. We had been working with the Civic Dynamics name for some time, and had also played around a bit with a logo and Web site. Once the other things fell into place, we incorporated Civic Dynamics, Inc. in Québec (where it is also known as Dynamique Civique), given the strong market interest shown in Canada to date, and began preparing for the formal launch of the venture. We also needed to await the completion of OSF v 3.0.

A Report Card on SD’s Multi-year Plan

It now appears that the five-year plan we set for ourselves at the founding of Structured Dynamics will likely take six to seven years to achieve. The extension derives from the realities of our client work over that period. One reality is that client-specific needs have necessarily diverted us from our own internal development path. Not all development can contribute to fulfilling a generic platform; every client has unique needs and circumstances that are not generalizable to others. A second reality is that only through real client engagements can market requirements be truly discerned. Customer-centric development is absolutely essential to keep software grounded.

Meanwhile, Back at Civic Dynamics

We are as curious as the next person to see whether a dedicated spin-off is the right way to handle a specific vertical market. It will also be interesting to see how coordination and support can best be provided between the dynamics duo (Structured and Civic).

Nonetheless, we are excited about finally being positioned to pursue the growing market for open, local government data. We’d like to thank Kelly and Ed and all of our original sponsors for helping to gestate the venture to this point. Now that it has been birthed, we hope to nurture it and get it on its own two feet as soon as possible. Before we know it, and assuming we’ve raised it properly, Civic Dynamics will be celebrating its own life events!


[1] See our Sweet Tools listing of about 1000 semantic technology tools
[2] There is a total of about 24,000 municipal governments across the United States and Canada.
Posted:January 27, 2014


Open Government Data
It’s Time to Move Beyond Static Dataset Dumps

It would be an understatement to say that open data has been transforming how government does business. Over the past five years, from national governments such as the United States and the United Kingdom to hundreds of local governments and municipalities and all forms of government in between, a veritable revolution in opening up data to the public has been underway. The open government data (OGD) movement has spawned an entirely new cottage industry of open data advocacy and tools. Literally hundreds of government organizations are committed to open data, supported by an ecosystem of advocacy, technology and consulting groups.

Open data, of course, is not limited to governments. Open data from science, the Web and for-profit entities are legitimate focal points in their own right. But, because data generated by governments are both publicly sanctioned and developed using taxpayer monies, open government data occupies a special place in the conversation.

Now, with experience and practice, we are beginning to see a generational shift in how open data is handled by governments. The first generation, still mostly the current practice, was built around the idea of simply making the data public and open. This generation is characterized by the publishing of datasets via catalogs. The datasets are static, unconnected and dumb. Mostly, too, the data within those datasets are poorly described and documented, often lacking standard metadata. What is now exciting, however, is the emergence of what can best be called dynamic open data. What it is and how it offers advantages is the focus of this article.

The 8 Initial Principles of Open Government Data

In October 2007, 30 open government advocates met in Sebastopol, California to discuss how government could open up electronically-stored government data for public use. Up until that point, the federal and state governments had made some data available to the public, usually inconsistently and incompletely, which had whetted the advocates’ appetites for more and better data. The conference, led by Carl Malamud and Tim O’Reilly and funded by a grant from the Sunlight Foundation, resulted in eight principles that, if implemented, would empower the public’s use of government-held data. These principles, no longer online, were summarized by Joshua Tauberer in his Open Government Data book as:

  1. Data Must be Complete
    All public data are made available. Data are electronically stored information or recordings, including but not limited to documents, databases, transcripts, and audio/visual recordings. Public data are data that are not subject to valid privacy, security or privilege limitations, as governed by other statutes.
  2. Data Must be Primary
    Data are published as collected at the source, with the finest possible level of granularity, not in aggregate or modified forms.
  3. Data Must be Timely
    Data are made available as quickly as necessary to preserve the value of the data.
  4. Data Must be Accessible
    Data are available to the widest range of users for the widest range of purposes.
  5. Data Must be Machine Processable
    Data are reasonably structured to allow automated processing.
  6. Access Must be Non-Discriminatory
    Data are available to anyone, with no requirement of registration.
  7. Data Formats Must be Non-Proprietary
    Data are available in a format over which no entity has exclusive control.
  8. Data Must be License-free
    Data are not subject to any copyright, patent, trademark or trade secret regulation. Reasonable privacy, security and privilege restrictions may be allowed as governed by other statutes.

These basic principles were then updated and re-phrased by the Sunlight Foundation in August 2010 into a list of 10 principles, adding the use of open standards, the permanence of data, and usage costs kept to an absolute minimum. All of these are laudable points, though each may or may not be provided in a fully open way by any given governmental entity.

This first step in the open data process has led to systems oriented to posting and publishing downloadable datasets. Existing open government data platforms, such as Socrata or DKAN, can best be described as catalog systems. Listings of datasets with associated descriptions and metadata are presented. Users or the public may then choose among one or more machine-readable formats to download the entire dataset.
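
To make this pattern concrete, here is a minimal sketch, in Python, of what first-generation, catalog-style consumption typically looks like from the user’s side. The dataset URL and the column names are hypothetical placeholders, not the actual API of Socrata, DKAN or any specific portal; the point is simply that the entire dump must be fetched before any local filtering can occur.

    import csv
    import io
    import urllib.request

    # Hypothetical static dataset dump -- a placeholder URL, not a real
    # Socrata or DKAN endpoint.
    DATASET_URL = "https://data.example.gov/dumps/building-permits.csv"

    # First-generation pattern: download the whole dataset, every time,
    # even if only a handful of records are of interest.
    with urllib.request.urlopen(DATASET_URL) as resp:
        reader = csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8"))
        rows = list(reader)

    # All filtering happens client-side, after the full download; the
    # "neighbourhood" column is likewise a hypothetical field name.
    of_interest = [r for r in rows if r.get("neighbourhood") == "St. Boniface"]
    print(f"{len(rows)} rows downloaded, {len(of_interest)} actually needed")

Nothing about this pattern tells the consumer what the columns mean, how they relate to other datasets, or whether the dump has changed since it was last fetched.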

The 5 Added Principles of Dynamic Open Data

Of course, simply throwing data over the fence does not make it useful. Once we get past the first threshold of making data publicly accessible, we next face the challenge of making that data meaningful and relevant. Since relevance is in the eye of the user, we can no longer think about information solely in terms of static, dumb datasets. We now need to expose the underlying data dynamically, such that users may request, filter and correlate what they need, and only what they need.

Thus, there are five principles — or dimensions — by which we need to judge next-generation dynamic open data:

  1. Data Should be Filterable
    Data should be selectable by type (class), attribute or value such that only the data of interest is exposed to the user. This means the data should be structured in some way with facets that can be used dynamically to filter and make those selections.
  2. Data Should be Atomic
    Data should be exposed as individual entities or concepts with their attributes and values. The unit of manipulation thus becomes the datum, rather than the dataset.
  3. Data Should be Connected
    Because we are now collecting by datum and not dataset, connections between relevant things must be made explicit across relevant datasets. Similar things should be retrievable together. To achieve this aim, some schema or data definition framework must be layered over the data and datasets.
  4. Data Should be Expandable
    Since new data and new instances and new datasets will constantly arise, the design of the overall data management system must itself be “open”, enabling expansion of the available datastore at acceptable cost and effort.
  5. Data Should be Documented
    In order for these dynamic selections to be achievable, the data in the system must be fully documented, specifically including the full description and units used for attributes and values and the scope of entities and concepts. Only through such complete documentation can accurate connections and relevant selections per above be made.

There is no set order to the principles above. They are presented in the order shown to help remember them through the FACED mnemonic; a brief sketch of what such datum-level access might look like in practice follows.
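
By way of contrast with the catalog-download sketch above, the sketch below shows datum-level access against a hypothetical SPARQL endpoint. The endpoint URL, the ‘ex:’ vocabulary and the class and property names are all illustrative assumptions, not the actual interface of OSF or any particular platform; the point is that filtering by type, attribute and connection happens on the server, and only the requested datums come back over the wire.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical SPARQL endpoint for a municipal datastore -- a placeholder,
    # not the actual query interface of OSF or any specific platform.
    ENDPOINT = "https://data.example.gov/sparql"

    # Filterable and atomic: request only recreation facilities in a single
    # neighbourhood, returned as individual entities rather than a bulk dump.
    # Connected and documented: the query relies on a shared (here invented)
    # 'ex:' schema that links facilities to neighbourhoods.
    QUERY = """
    PREFIX ex: <http://example.gov/ontology/>
    SELECT ?facility ?name ?type
    WHERE {
      ?facility a ex:RecreationFacility ;
                ex:label ?name ;
                ex:facilityType ?type ;
                ex:locatedIn ex:Neighbourhood_StBoniface .
    }
    LIMIT 50
    """

    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
    request = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )

    with urllib.request.urlopen(request) as resp:
        results = json.load(resp)

    # Only the filtered, datum-level results cross the wire.
    for b in results["results"]["bindings"]:
        print(b["name"]["value"], "-", b["type"]["value"])

Whether the query language is SPARQL, a REST API with facet parameters, or something else entirely matters less than the shift in the unit of exchange: from the dataset to the datum.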

Parallels with Linked Data

Though the principles above do not call out linked data as a requirement, they do share many parallels with the early growth and maturation of linked data. A number of years back, Fred Giasson and I commented on When Linked Data Rules Fail. Two of the points made in that article were the absence of suitable data descriptions and the missing or incorrect connections in the data.gov and NY Times datasets. I subsequently expanded on these types of problems in Practical P-P-P-Problems with Linked Data.

Official data from governments can avoid many of the provenance issues associated with general linked data, but in other areas there are important parallels. As with any emerging practice, it takes a while to learn and formalize best practices. It is thus not surprising to see open government data needing to transition from dumb datasets to actionable information. Only when data are made actionable will government information assets finally become effective for the broader public.

Also, as with linked data, it is likely that platforms built around semantic technologies and knowledge graphs (schema) will come to the fore. Our own Open Semantic Framework is one such example, and a few others are now emerging in the linked data and semantic technology space. It will be through these different practices and newer platforms that we will see the next generation of open government data truly emerge.
