Posted: April 16, 2007

String of Pearls: Virtuoso and Related Tools String Together a Surprisingly Complete Score

NOTE: This is a lengthy post.

One of my objectives in starting the comprehensive Sweet Tools listing of 500+ semantic Web and related tools was to surface hidden gems deserving of more attention. Partly to that end, I created this AI3 blog’s Jewels and Doubloons award to recognize such treasures among the many offerings. The idea was and is that there is much of value out there; all we need to do is spend the time to find it.

I am now pleased to help better expose one of those treasures — in fact, a whole string of pearls — from OpenLink Software.

The semantic Web through its initial structured Web expression is not about to come — it has arrived. Mark your calendar. And OpenLink’s offerings are a key enabler of this milestone.

Though it has been in existence since 1992 as a provider of ODBC middleware and then virtual databases, OpenLink has been active in the semantic Web space since at least 2003 [1]. The company was an early supporter of open source, beginning with iODBC in 1999. But OpenLink’s decision to release its main product lines under dual-license open source in mid-2006 marked a truly significant milestone for semantic Web toolsets. Though generally known mostly to the technology cognoscenti and innovators, these moves now poise OpenLink for much higher visibility and prominence in the structured Web explosion now unfolding.

OpenLink’s current tools and technologies span the entire spectrum, from RDF browsing and structure extraction for creating RDF-enabled Web content, to format conversions and basic middleware operations, to back-end data storage, query and management. (It is also important to emphasize that the company’s offerings support virtual databases, procedure hosting, and general SQL and XML data systems that provide value independent of RDF or the semantic Web.) There are numerous alternatives among other semantic Web tools at specific spots across this spectrum, most of which are also open source [2].

What sets OpenLink apart is the breadth and consistency of vision — correct in my view — that governs the company’s commitment and total offerings. It is this breadth of technology and responsiveness to emerging needs in the structured Web that signals to me that OpenLink will be a deserving player in this space for many years to come [3].

An Overview of OpenLink Products [4]

OpenLink provides a combination of open source and commercial software and associated services. While most attention in the remainder of this piece is given to its open source offerings, the company’s longevity obviously stems from its commercial success. And that success was first built on its:

  • Universal Data Access Drivers — high-performance data access drivers for ODBC, JDBC, ADO.NET, and OLE DB that provide transparent access to enterprise databases. These UDA products were also the basis for OpenLink’s first foray into open source with the iODBC initiative (see below). This history of understanding relational database management systems (RDBMSs) and the needs of data transfer and interoperability is a key underpinning of the company’s present strengths.

This basis in data federation and interoperability then led to a constant evolution to more complex data integration and application needs, resulting in major expansion of the company’s product lines:

  • OpenLink Virtuoso — is a cross-platform ‘Universal Server’ for SQL, XML, and RDF data management that also includes a powerful virtual database engine, full-text indexing, native hosting of existing applications, a Web services (WS-*) deployment platform, a Web application server, and bridges to numerous existing programming languages. Now in version 5.0, Virtuoso is also offered in an open source version; it is described in focus below. The basic architecture of Virtuoso and its robust capabilities, as depicted by OpenLink, is:
Virtuoso Architecture
  • OpenLink Data Spaces — also known as ODS, these are a series of distributed collaboration components built on top of Virtuoso that provide semantic Web and Web 2.0 support. Integrated components are provided for Web blogs, wikis, bookmark managers, discussion forums, photo galleries, feed aggregators and general Web site publishing using a clean scripting language. Support for multiple data formats, query languages, feed formats, and data services such as SPARQL, GData, GRDDL, microformats, OpenSearch, SQLX and SQL is included, among others (see below). This remarkable (but little known!) platform is extensible and provided both commercially and as open source; it is described in focus below.
OpenLink Software Demos
OpenLink has a number of useful (and impressive) online demos of capabilities. Here are a few of them, with instructions (if appropriate) on how best to run them:

  • OAT components — this series of demos, at http://demo.openlinksw.com/DAV/JS/demo/index.html?about, provides live use of, and the JavaScript for, all components in the toolkit. Simply choose one of the options in the menu to the left and then run the widget or app.
  • Extract RDF from a standard Web page — in this demo, simply pasting a URL allows you to extract RDF information from any arbitrary Web page. Go to http://demo.openlinksw.com/DAV/JS/rdfbrowser/index.html (demo / demo for username and password) and enter a URL of your choice (include the http:// prefix) in the Data Source URI box, then hit Query. Alternatively, you can enter http://www.openlinksw.com/blog/~kidehen/, which will show all use options.
  • Browse DBpedia RDF — this demo uses the OpenLink iSPARQL query builder and viewer, found at http://demo3.openlinksw.com:8890/isparql/ (demo / demo for username and password). At entry, pick QBE > Open from the menu, then choose one of the listed queries and begin to explore the results. Note that clicking on a link in the lower pane pulls up the ‘explorer’, which allows you to navigate graph links between resources.
  • Construct a SPARQL query — in the same demo as above, use the graph or generate options to either create a new query in the SPARQL language directly or modify the node graph in the middle pane (using its corresponding symbol buttons).
  • OAT (OpenLink Ajax Toolkit) — is a JavaScript-based toolkit for browser-independent rich-Internet application development. It includes a complete collection of UI widgets and controls, a platform-independent data access layer called Ajax Database Connectivity, and full-blown applications including unique database and table creation utilities, an RDF browser, and a SPARQL query builder, among others. It is offered as open source and is described in focus below.
  • iODBC — is another open source initiative that provides a comprehensive data access SDK for ODBC. iODBC offers both a C-based API and cross-language hooks to C++, Java, Perl, Python, TCL and Ruby, and has been ported to most modern platforms.
  • odbc-rails — is an open-source data adapter for Ruby on Rails’ ActiveRecord that provides ODBC access to databases such as Oracle, Informix, Ingres, Virtuoso, SQL Server, Sybase, MySQL, PostgreSQL, DB2, Firebird, Progress, and others.
  • And, various other benchmarking and diagnostic utilities.

Relation to the Overall Semantic Web

One remarkable aspect of the OpenLink portfolio is its support for nearly the full spectrum of semantic Web requirements. Without the need to consider tools from any other company or entity, it is now possible to develop, test and deploy real semantic Web instantiations today, and with no direct software cost. Sure, some of these pieces are rougher than others, and some are likely not the “best” as narrowly defined, and there remain some notable and meaningful gaps. (After all, the semantic Web and its supporting tools will be a challenge for years to come.) But it is telling about the actual status of the semantic Web and its readiness for serious attention that most of the tools to construct a meaningful semantic Web deployment are available today — from a single supplier [5].

We can see the role and placement of OpenLink tools within the broader context of the overall semantic Web. The following diagram represents the W3C’s overall development roadmap (as of early 2006), with the roles played by OpenLink components shown in the opaque red ovals. While this is not necessarily the diagram I would personally draw to represent the current structured Web, it does represent the consensus of key players and a comprehensive view of the space. By this measure, OpenLink components contribute in many, many areas across the full spectrum of the semantic Web architecture (note the full-size diagram is quite large):

OpenLink in Relation to W3C Architecture

This status is no accident. It can be argued that OpenLink’s strong background in structured data, data federation, virtual databases and ODBC provided excellent foundations. The company has also been focused on semantic Web issues and RDF support since at least 2003; we are now seeing the obvious fruits of those years of effort. As well, the early roots for some of the basic technology and approaches taken by the company extend back to Lisp and various AI (artificial intelligence) interests. One could also argue that a certain mindset comes from a self-defined role in “middleware” that lends itself to connecting the dots. And, finally, a vision and a belief in the centrality of data as an organizing principle for the next-generation Web has infused this company’s efforts from the beginning.

Placing the scope of OpenLink’s offerings in this context now allows us to examine some of the company’s individual “pearls” in more depth.

In Focus: Bringing Structure to the Web

No matter how much elegance and consistency of logic some may want to bring to the idea of the semantic Web, the fact will always remain that the medium is chaotic with multiple standards, motivations and statuses of development at work at any point in time. Do you want a consistent format, a consistent schema, a consistent ontology or world view? Forget it. It isn’t going to happen unless you can force it internally or with your suppliers (if you are a hegemonic enterprise, and, likely, not even then).

This real-world context has been a challenge for some semantic Web advocates. (I find it telling and amusing that the standard phrase for real-world applications is “in the wild”.) Many come from an academic background and naturally gravitate to theory or parsimony. Actually, there is nothing wrong with this; quite the opposite. What all of us now enjoy with the Internet would not have come about without the brilliant understanding of simple and direct standards that early leaders brought to bear. My own mental image is that standards provide the navigation point on the horizon even if there is much tacking through the wind to get there.

But as theory meets practice, the next layer of innovation comes from the experienced realists. And it is here that OpenLink (plus many others who recognize such things) provides real benefit. A commitment to data, to data federation, and an understanding of real-world data formats and schemas “in the wild” naturally leads to a predilection for conversion and transformation. It may not be grand theory, but real practitioners are the ones who will lead the next phases, with a bias toward workability through such things as “pipeline” models or providing working code [6].

I’ve spoken previously about how RDF has emerged as the canonical storage form for the structured and semantic Web; I’m not sure this was self-evident to many observers up until recently. But it was evident to the folks at OpenLink, and — given their experience in heterogeneous data formats — they acted upon it.

Today, out of the box, you can translate the following formats and schemas using OpenLink’s software alone. My guess is that the breadth of the list below would be a surprise to many current semantic Web researchers:

The supported items fall into three categories (accepted data formats / schemas; query / access / transport protocols; and output formats) and include, among others:

  • RDF
  • XML
  • JSON
  • HTML

(Note, some of these items may reside in multiple categories and thus have been somewhat arbitrarily placed.)

In addition, OpenLink is developing or in beta with these additional formats, application sources or standards: Triplr, Jena, WordPress, Drupal, Zotero (with its support for major citation schemas such as CSL, COinS, etc.), Relax NG, phpBB, MediaWiki, XBRL, and eCRM.

A critical piece in these various transformations is the new Virtuoso Sponger, formally released with version 5.0 (though portions had been in use for quite some time). Depending on the file or format type detected at ingest, Sponger may apply metadata extractors to binary files (a multitude of built-in extractors for Open Office documents, images, audio, video, etc., plus an API to add new ones), “cartridges” for mapping REST-style Web services and query languages (see above), or cartridges for mapping microformats or metadata or structure embedded in HTML (basic XHTML, eRDF, RDFa, GRDDL profiles, or other sources suitable for XSLT). There is also a UI for adding your own cartridges via the Virtuoso Conductor, the system administration console for Virtuoso.

Detection occurs at the time of content negotiation instigated by the retrieval user agent. Sponger first tests for RDF (including N3 or Turtle), then scans for microformats or GRDDL. (If the page is GRDDL-based, the associated XSLT is used; otherwise Virtuoso’s built-in XSLT processors are used. If it is a microformat, Virtuoso uses its own XSLT to transform and map to RDF.) The next fallback is a scan of the HTML header for different Web 2.0 types such as RSS 1.1, RSS 2.0, Atom, etc. In these cases, the format (if supported) is transformed to RDF on the fly using other built-in XSLT processors (via an internal table that associates data sources with XSLT, similar to GRDDL patterns but without the dependency on GRDDL profiles). Failing those tests, the scan then uses standard Web 1.0 rules to search the header tags for metadata (typically Dublin Core) and transforms it to RDF; other HTTP response header data may also be transformed to RDF.
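
To make that cascade concrete, here is a minimal sketch of the kind of detection logic just described, written in plain Python rather than drawn from OpenLink’s actual implementation. The string tests and return values are illustrative assumptions only; the real Sponger works with cartridges and XSLT as described above.

```python
import urllib.request

RDF_TYPES = {"application/rdf+xml", "text/n3", "text/rdf+n3", "text/turtle"}

def sponge_hint(url):
    """Return a hint about how a resource would be lifted to RDF,
    following (roughly) the Sponger fallback order described above."""
    req = urllib.request.Request(url, headers={
        "Accept": "application/rdf+xml;q=1.0, text/n3;q=0.9, text/html;q=0.5"})
    with urllib.request.urlopen(req) as resp:
        ctype = resp.headers.get_content_type()
        body = resp.read().decode("utf-8", errors="replace")

    if ctype in RDF_TYPES:                                    # 1. already RDF
        return "native RDF (%s): pass through" % ctype
    if 'profile="http://www.w3.org/2003/g/data-view"' in body:
        return "GRDDL profile: apply its associated XSLT"      # 2. GRDDL
    if any(hook in body for hook in
           ('class="vcard"', 'class="hreview"', 'class="vevent"')):
        return "microformat markup: apply a microformat-to-RDF XSLT"   # 3.
    if ('type="application/rss+xml"' in body or
            'type="application/atom+xml"' in body):
        return "feed autodiscovery link: transform RSS/Atom to RDF"    # 4.
    if '<meta name="DC.' in body or '<meta name="dc.' in body:
        return "Dublin Core <meta> tags: lift header metadata to RDF"  # 5.
    return "no embedded structure found: fall back to HTTP response headers"
```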

“Transform” as used above includes the fact that OpenLink generates RDF based on an internal mapping table that associates the data sources with schemas and ontologies. This mapping will vary depending on whether you are using Virtuoso with or without the ODS layer (see below). If using the ODS layer, OpenLink maps further to SIOC, SKOS, FOAF, AtomOWL, Annotea bookmarks, Annotea annotations, etc., depending on the data source [7]. Getting all “scrubbed” or “sponged” data from these Web sources into RDF enables fast and easy mash-ups and, of course, an appropriate canonical form for querying and inference. Sponger is also fully extensible.

The result of these capabilities is simple: from any URL, it is now possible to get a minimum of RDF, and often quite rich RDF. The bridge is now made between the Web and the structured Web. Though we are now seeing such transformation utilities or so-called “RDFizers” rapidly emerge, none to my knowledge offer the breadth of formats, relation to ontology structure, or ease of integration and extension as do these Sponger capabilities from OpenLink.

In Focus: OAT and Tools

Though the youngest of the major product releases from OpenLink, OAT — the OpenLink Ajax Toolkit — is also one of the most accessible and certainly the flashiest of the company’s offerings. OAT is a JavaScript-based toolkit for browser-independent rich-Internet application development. It includes a robust set of standalone applications, database connectivity utilities, complete and supplemental UI (user interface) widgets and controls, an event management system, and supporting libraries. It works on a broad range of Ajax-capable Web browsers in standalone mode, and features on-demand library loading to reduce the total amount of downloaded JavaScript code.

OAT is provided as open source under the GPL license, and was first released in August 2006. It is now at version 1.2, with the most recent release including integration of the iSPARQL QBE (see below) into the OAT Form Designer application. The project homepage is found at http://sourceforge.net/projects/oat; source code may be downloaded from http://sourceforge.net/projects/oat/files.

OAT is one of the first toolkits that fully conforms to the OpenAjax Alliance standards; see the conformance test page. OAT’s development team, led by Ondrej Zara, also recently incorporated the OAT data access layer as a module to the Dojo datastore.

OpenLink Ajax Toolkit (OAT) Overview

There are many Ajax toolkits, some with robust widget sets. What really sets OAT apart is the breadth of its data access. This is very much in keeping with OpenLink’s data federation and middleware strengths. OAT’s Ajax database connectivity layer supports data binding to the following data source types (a brief sketch follows the list):

  1. RDF — via SPARQL (with support in its query language, protocol, and result set serialization in any of the RDF/XML, RDF/N3, RDF/Turtle, XML, or JSON formats)
  2. Other Web data — via GData or OpenSearch
  3. SQL — via XMLA (a somewhat forgotten SOAP protocol for SQL data access that can sit atop ODBC, ADO.NET, OLE-DB, and even JDBC), and
  4. XML — via SOAP or REST style Web services.
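
As a rough illustration of what the Ajax Database Connectivity layer does from JavaScript for the first source type above, the Python snippet below issues a query against a SPARQL protocol endpoint and asks for the JSON result serialization. The endpoint URL is the public OpenLink demo instance mentioned elsewhere in this piece, and both its availability and its acceptance of the format parameter are assumptions; any standards-compliant SPARQL endpoint should behave similarly.

```python
import json
import urllib.parse
import urllib.request

# Assumed public SPARQL protocol endpoint.
ENDPOINT = "http://demo.openlinksw.com/sparql"

query = "SELECT DISTINCT ?type WHERE { ?s a ?type } LIMIT 10"

params = urllib.parse.urlencode({
    "query": query,
    "format": "application/sparql-results+json",   # ask for the JSON result set
})
req = urllib.request.Request(ENDPOINT + "?" + params,
                             headers={"Accept": "application/sparql-results+json"})
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

# Standard SPARQL JSON results: head.vars lists the columns,
# results.bindings holds one dict per solution row.
for row in results["results"]["bindings"]:
    print(row["type"]["value"])
```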

Almost all of the OAT components are data aware, and thus have the same broad data format flexibility. The table below shows the full listing of OAT components, with live links to their online demos, selectable from an expanding menu, which is itself an example of the OAT Panelbar (menu). (Notes: use demo / demo for username and password if requested; pick “DSN=Local_Instance” if presented with a dropdown that asks for ‘Connection String’.)

Not all of these components are supported on all major browsers; please refer to the OpenLink OAT compatibility chart for any restrictions.

The following screen shot shows one of the OAT complete widgets, the pie charting component:

OAT - Example Pie Graph

One of the more complete widgets is the pivot table, which offers general Excel-level functionality including sorts, totals and subtotals, column and row switching, and the like:

OAT - Example Pivot Table

Another of the more interesting controls is the WebDAV Browser. WebDAV extends the basic HTTP protocol to allow moves, copies, file access and so forth on remote servers, in support of multiple authoring and versioning environments. OAT’s WebDAV Browser provides a virtual file navigation system across one or more Web-connected servers. Any resource that can be designated by a URI/IRI (the Internationalized Resource Identifier is a Unicode generalization of the Uniform Resource Identifier, or URI) can be organized and managed via this browser:

OAT - WebDAV Browser

Again, there are online demos for each of the other standard widgets in the OAT toolkit.
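
For a sense of what such a browser does under the hood, the sketch below issues a bare WebDAV PROPFIND request (plain Python, not OAT) to list the members of a collection. The /DAV/ path on the OpenLink demo server and the demo / demo credentials noted earlier are assumptions; any WebDAV-enabled server would respond in kind.

```python
import base64
import urllib.request

# Assumed WebDAV collection on the OpenLink demo server.
URL = "http://demo.openlinksw.com/DAV/"

# Standard WebDAV PROPFIND body asking for all properties of the
# collection's immediate members (Depth: 1).
body = b"""<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:"><D:allprop/></D:propfind>"""

req = urllib.request.Request(URL, data=body, method="PROPFIND")
req.add_header("Depth", "1")
req.add_header("Content-Type", "application/xml")
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"demo:demo").decode("ascii"))

with urllib.request.urlopen(req) as resp:
    # The reply is a 207 Multi-Status XML document listing each resource
    # (href, size, modification date, etc.) in the collection.
    print(resp.status)
    print(resp.read(400).decode("utf-8", errors="replace"))
```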

The next subsections cover each of the major standalone applications contained in OAT; most are themselves combinations of multiple OAT components and widgets.

Forms Designer

The Forms Designer is the major UI design component within OAT. The full range of OAT widgets noted above — plus more basic controls such as labels, inputs, text areas, checkboxes, lines, URLs, images, tag clouds and others — can be placed by the designer into a savable and reusable composition. Links to data sources in any accessible network location can also be specified, with a choice of SQL, SPARQL or GData (and the various data forms associated with them as noted above).

The basic operation of the Forms Designer is akin to other standard RAD-type tools; the screenshot below shows a somewhat complicated data form pop-up overlaid on a map control (there is also a screencast showing this designer’s basic operations):

OAT - Using the Forms Designer

When completed, this design produces the following result; another screencast shows the composite mash-up in action:

OAT - Mash-up Result of Using the Forms Designer

Because of the common data representation at the core of OAT (which comes from the OpenLink vision), the ease of creating these mash-ups is phenomenal; that simplicity can be missed when viewing static examples. The real benefits become apparent when you make these mash-ups on your own.

Database (DB) Designer

The OAT Database (DB) Designer (also the basis for the Data Modeller) addresses a very different problem: laying out and designing a full database, including table structures and schemas. This is one of the truly unique applications within OAT, not shared (to my knowledge) by any other Ajax toolkit.

One starts either by importing a schema using XMLA, or from scratch, or from an existing template. Tables can be easily added, named and modified. As fields are added, it is easy to select data types, which also become color-coded by type on the design palette. Key relations between the tables are shown automatically:

OAT - Database Designer

The result is a very easy, interactive data design environment. Of course, once completed, there are numerous options for saving or committing to actual data population.

SQL Query by Example

Once created and populated, the database is available for querying and display. The SQL QBE (query-by-example) application within OAT is a fairly standard QBE interface. That may be a comfort to those used to such tools, but it has a more important role as an exemplar for moving to RDF queries with SPARQL (see next):

OAT - Query-by-Example

OpenLink also provides a screencast showing how to make these table linkages and to modify the results display with a now-common SQL format.

iSPARQL Query Builder

OpenLink’s analog to SQL QBE, applied to RDF, is its visual iSPARQL Query Builder (SVG-based, which of course is also an XML format). We are now dealing with graphs, not tables, and with SPARQL and triples, not standard two-dimensional relational constructs. Some might claim that with triples we are now challenging the ability of standard users to grasp semantic Web concepts. Well, I say, yeah, OK, but I really don’t see how a relational table and schema with its lousy names is any easier than looking at a graph with URLs as the labeled nodes. Look at the following and judge for yourself:

iSPARQL Query Builder - Sample View

OK, it looks pretty foreign. (Therefore, this is the one demo you should really try.) In my opinion, the community is still groping for the constructs, visuals and paradigms to make the use of SPARQL intuitive. Though OpenLink’s presentation is as good as anyone’s, the challenge of finding a better paradigm remains.
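
For readers weighing that table-versus-graph question, here is a minimal sketch of the same request expressed both ways: the join form a SQL QBE tool might emit and the triple-pattern form an iSPARQL-style builder might emit. The table, column, class and property names (other than the FOAF terms) are illustrative assumptions, and neither query is the literal output of OpenLink’s tools.

```python
# What a SQL QBE tool might generate (hypothetical schema):
sql_query = """
SELECT p.name, c.city
FROM   persons p JOIN contacts c ON c.person_id = p.id
WHERE  c.city = 'Berlin'
"""

# The analogous graph query an iSPARQL-style builder might generate
# (ex: is a hypothetical vocabulary; foaf: is the real FOAF namespace):
sparql_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex:   <http://example.org/vocab#>
SELECT ?name ?city
WHERE {
  ?person a foaf:Person ;
          foaf:name ?name ;
          ex:city   ?city .
  FILTER (?city = "Berlin")
}
"""

# Either string could be submitted to the corresponding demo endpoints
# mentioned earlier in this piece.
```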

RDF Browser

Direct inspection of RDF is also hard to describe and even harder to present visually. What does it mean to have data represented as triples that enable such things as graph (network) traversals or inference [8]? The broad approach that OpenLink takes to viewing RDF is what it calls the RDF Browser. Actually, the browser presents a variety of RDF views, some in mash-up mode. The first view in the RDF Browser is the basic browse view:

RDF Browser - Browse View

This particular example is quite rich, with nearly 4,000 triples across many categories and structures (see the full-size view). Note the tabs in the middle of the screen; these are where the other views below are selected.

For example, that same information can also be shown in the standard RDF “triples” view (subject-predicate-object):

RDF Browser - Triples View

The basic RDF structural relationships can also be shown in a graph view, itself with many settings for manipulating and viewing the data, with mouseovers showing the content of each graph node:

RDF Browser - Graph View

The data can also be used for “mash-ups”; in this case, plotting the subject’s location on a map:

RDF Browser - Map View

Or, another mash-up shows the dates of his blog postings on a timeline:

RDF Browser - Timeline View

Since the RDF Browser uses SVG, some of these views may be incompatible with IE (6 or 7) and Safari. Firefox (1.5+), Opera 9.x, WebKit (Open Source Safari), and Camino all work well.

These are early views about how to present such “linked” information. Of course, the usability of such interfaces is important and can make or break user acceptance (again, see [8]). OpenLink — and all of us in the space — has the challenge to do better.

Yet what I find most telling about this current OpenLink story is that the data and its RDF structure now fully exist to manipulate this data in meaningful ways. The mash-ups are as easy as A connects to B. The only step necessary to get all of this data working within these various views is simply to point the RDF Browser to the starting Web site’s URL. Now that is easy!

So, we are not fully there in terms of compelling presentation. But in terms of the structured Web, we are already there in surprising measure, able to do meaningful things with Web data that began as unstructured raw grist.

In Focus: Virtuoso

OK, so we just got some sizzle; now it is time for the steak. OpenLink Virtuoso is the core component of the company’s offerings. The diagram at the top of this write-up provides an architectural overview of this product.

Virtuoso can best be described as a complete deployment environment with its own broad data storage and indexing engine; a virtual database server for interacting with all leading data types, third-party database management systems (DBMSs) and Internet “endpoints” (data-access points); and a virtual Web and application server for hosting its own and external applications written in other leading languages. To my knowledge, this “universal server” is the first cross-platform product that implements Web, file, and database server functionality in a single package.

The Virtuoso architecture exposes modular tools that can be strung together in a very versatile information-processing pipeline. Via the huge variety of structure and data format transformations that the product supports (see above), the developer need only worry about interacting with Virtuoso’s canonical formats and APIs. The messy details of real-world diversity and heterogeneity are largely abstracted from view.

Data and application interactions occur through the system’s virtual or federated database server. This core provides internal storage and application facilities, the ability to transparently expose tables and views from external databases, and the capability of exposing external application logic in a homogeneous way [9]. The variety of data sources supported by Virtuoso can be efficiently joined in any number of ways in order to provide a cohesive view of disparate data from virtually any source and in any form.

Since storage is supported for unstructured, semi-structured and structured data in their various forms, applications and users have a choice of retrieval and query constructs. Free-text searching is provided for unstructured data, conventional documents and literal objects within RDF triples; SQL is provided for conventional structured data; and SPARQL is provided for RDF and related graph data. These forms are also supplemented with a variety of Web service protocols for retrieval across the network.
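
A hedged sketch of what that choice looks like from a client: Virtuoso documents that SPARQL can be submitted through its SQL channels by prefixing the statement with the SPARQL keyword, so a single ODBC connection can serve both query languages. The DSN, credentials and table name below are assumptions for illustration, and the snippet uses the generic pyodbc library rather than anything OpenLink-specific.

```python
import pyodbc  # generic ODBC bindings; a Virtuoso DSN is assumed to exist

# Assumed local DSN and the default demo credentials.
conn = pyodbc.connect("DSN=Local_Virtuoso;UID=dba;PWD=dba")
cur = conn.cursor()

# 1. Conventional SQL against a (hypothetical) relational table.
cur.execute("SELECT TOP 5 * FROM Demo.demo.Customers")
print(cur.fetchall())

# 2. SPARQL against the RDF store, sent down the same connection by
#    prefixing the statement with the SPARQL keyword (a documented
#    Virtuoso convention; exact behavior may vary by version).
cur.execute("SPARQL SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 5")
for (graph,) in cur.fetchall():
    print(graph)

conn.close()
```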

The major functional components within Virtuoso are thus:

  1. A DBMS engine (object-relational like PostgreSQL and relational like MySQL)
  2. XML data management (with support for XQuery, XPath, XSLT, and XML Schema)
  3. An RDF triple store (or database) that supports the SPARQL query language, transport protocol, and various serialization formats
  4. A service-oriented architecture (SOA) that combines a BPEL engine with an enterprise service bus (ESB)
  5. A Web application server (supporting both HTTP and WebDAV), and
  6. An NNTP-compliant discussion server.

Virtuoso provides its own scripting language (VSP, similar to Microsoft’s ASP) and Web application scripting language (VSPX, similar to Microsoft’s ASPX or PHP) [12]. Virtuoso Conductor is an accompanying system administrator’s console.

Virtuoso currently runs on Windows (XP/2000/2003), Linux (Red Hat, SUSE), Mac OS X, FreeBSD, Solaris, and other UNIX variants (including AIX and HP-UX, on both 32- and 64-bit platforms). Exclusive of documentation, a typical install of Virtuoso application code is about 100 MB (with help, documentation and examples, about 300 MB).

The development of Virtuoso began in 1998. A major re-write adding its virtual database aspects occurred in 2001. WebDAV support was added in early 2004, and RDF support came with the release of an open source version in 2006. Additional description of the Virtuoso history is available, plus a basic product FAQ and comprehensive online documentation. The most recent version, 5.0, was released in April 2007.

RDF and SPARQL Enhancements

For the purposes of the structured Web and the semantic Web, Virtuoso’s addition of RDF and SPARQL support in early 2006 was probably the product’s most important milestone. This update required an expansion of Virtuoso’s native database design to accommodate RDF triples and a mapping and transformation of SPARQL syntax to Virtuoso’s native SQL engine.

For those more technically inclined, OpenLink offers an online paper, Implementing a SPARQL Compliant RDF Triple Store using a SQL-ORDBMS, that provides the details of how RDF and its graph IRI structure were superimposed on a conventional relational table design [10].

Virtuoso’s approach adds the IRI as a built-in and distinct data type. Virtuoso’s ANY type then allows for a single-column representation of a triple’s object (o). Since the graph (g), subject (s) and predicate (p) are always IRIs, they are declared as such. Since an ANY value is a valid key part with a well-defined collation order between non-comparable data types, indices can be built using the object (o) of the triple.
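
To make the storage idea concrete, here is a deliberately simplified analogue using Python’s built-in sqlite3 module: a single quad table keyed on graph, subject and predicate, plus a secondary, object-first index so that object-value lookups are also efficient. This is not Virtuoso’s actual schema or engine; in particular, SQLite has neither the IRI nor the ANY data types described above, so plain text columns stand in for them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rdf_quad (
  g TEXT NOT NULL,   -- graph IRI
  s TEXT NOT NULL,   -- subject IRI
  p TEXT NOT NULL,   -- predicate IRI
  o TEXT NOT NULL,   -- object: IRI or literal (Virtuoso uses its ANY type here)
  PRIMARY KEY (g, s, p, o)
);
CREATE INDEX rdf_quad_o ON rdf_quad (o, p, g, s);  -- object-first access path
""")

conn.execute("INSERT INTO rdf_quad VALUES (?, ?, ?, ?)",
             ("http://example.org/graph",
              "http://example.org/people#alice",
              "http://xmlns.com/foaf/0.1/name",
              "Alice"))

# A triple pattern such as { ?s foaf:name "Alice" } becomes a lookup
# on the object-first index:
for (subject,) in conn.execute(
        "SELECT s FROM rdf_quad WHERE o = ? AND p = ?",
        ("Alice", "http://xmlns.com/foaf/0.1/name")):
    print(subject)
```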

While text indexing is not directly supported in SPARQL, Virtuoso easily added it as an option for selected objects. Virtuoso also adds other SPARQL extensions to deal both with the mapping and transformation to native SQL and for other requirements. Though type cast rules of SPARQL and SQL differ, Virtuoso deals with this by offering a special SQL compiler directive that enables efficient SPARQL translation without introducing needless extra type tests. Virtuoso also extends SPARQL with SQL-like aggregate and group by functionality. With respect to storage, Virtuoso allows a mix of storage options for graphs in a single table or graph-specific tables; in some cases, the graph component does not have to be written in the table at all. Work on a system for declaring storage formats per graph is ongoing.
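
For a flavor of those extensions, the query below combines the free-text option with an aggregate, roughly in the shape Virtuoso’s documentation uses: bif:contains as a magic predicate on a literal object, and a count over the matches. Treat the exact syntax as version-dependent and the graph URI as an assumption; it is shown only to illustrate how the extensions read.

```python
# Illustrative Virtuoso-extended SPARQL, kept as a string; it could be
# submitted through any of the access paths shown earlier.
extended_sparql = """
SELECT ?doc (count(*) AS ?hits)
FROM <http://example.org/docs>
WHERE {
  ?doc ?p ?text .
  ?text bif:contains "semantic AND web" .
}
GROUP BY ?doc
"""
```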

OpenLink provides an entire section on its RDF and SPARQL implementation in Virtuoso. In addition, there is an interactive SPARQL demo showing these capabilities at http://demo.openlinksw.com/isparql/.

Open Source Version and Differences

OpenLink released Virtuoso in an open source edition in mid-2006. According to Jon Udell at that time, to have “Virtuoso arrive on the scene as an open source implementation of a full-fledged SQL/XML hybrid out into the wild is a huge step because there just hasn’t been anything like that.” And, of course, now with RDF support and the Sponger, the open source uniqueness and advantages (especially for the semantic Web) are even greater.

Virtuoso’s open source home page is at http://virtuoso.openlinksw.com/wiki/main/ with downloads available from http://virtuoso.openlinksw.com/wiki/main/Main/VOSDownload.

Virtually all aspects of Virtuoso as described in this paper are available as open source. The one major component found in the commercial version, but not in the open source version, is the virtual database federation at the back-end, wherein a single call can access multiple database sources. This is likely extremely important to larger enterprises, but can be overcome in a Web setting via alternative use of inbound Web services or accessing multiple Internet “endpoints.”

Latest Release

The April 2007 version 5.0 release of Virtuoso demonstrates OpenLink’s commitment to continued improvements, especially in the area of the structured Web. There was a re-factoring of the database engine resulting in significant performance improvements. In addition, RDF-related enhancements included:

  • Added the built-in Sponger (see above) for transforming non-RDF into RDF “on the fly”
  • Full-text indexing of literal objects within triples
  • Added basic inferencing (subclass and subproperty support)
  • Added SPARQL aggregate functions
  • Improved SPARQL language support (updates, inserts, deletes)
  • Improved XML Schema support (including complex types through a bif:xcontains predicate)
  • Enhanced the SPARQL to SQL compiler’s optimizer, and
  • Further performance improvements to RDF views (SQL to RDF mapping).

In Focus: OpenLink Data Spaces (ODS)

With the emergence of Web 2.0 and its many collaborative frameworks such as blogs, wikis and discussion forums, OpenLink came to realize that each of these locations was a “data space” of meaningful personal and community content, but that the systems were isolated “silos.” Each space was unconnected to other spaces, the protocols and standards for communicating between the spaces were fractured or non-existent, and each had its own organizing framework, means for interacting with it, and separate schema (if it had one at all).

Since the Virtuoso platform itself was designed to overcome such differences, it became clear that an application layer could be placed over the core Virtuoso system to break down the barriers between these siloed “data spaces,” while at the same time providing all varieties of spaces with similar functionality and standards. Thus was started what became the OpenLink Data Spaces (or ODS) collaboration application, first released in open source in late 2006.

ODS is now provided as a packaged solution (in both open source and commercial versions) for use on either the Web or behind corporate firewalls, with security and single sign-on (SSO) capabilities. Mitko Iliev is OpenLink’s ODS program manager.

ODS is highly configurable and customizable. It has ten or so basic components, which can be included or not at the deployment-wide level, by community, or by individual. Customizing the look and feel of ODS is also quite easy via CSS and the simple Virtuoso Web application scripting language, VSPX. (See here for examples of the language, which is based on XML.) A sample intro screen for ODS, largely as configured out of the box, is shown below:

OpenLink Data Space - General View

Note the listing of ODS components on the menu bar across the top. These baseline ODS components are:

  1. Blog — this component is a full-featured blogging platform with support for all standard blogging features; it supports all the major publishing protocols (Atom, Movable Type, MetaWeblog, and Blogger) and includes automatic generation of content syndication supporting RSS 1.0, RSS 2.0, Atom, SIOC, OPML, OCS (an older Microsoft format), and others
  2. Wiki — a full wiki implementation that supports the standard feeds and publishing protocols, plus Twiki, MediaWiki and Creole markup and WYSIWYG editing
  3. Briefcase — an integrated location to store and share virtually any electronic resource online; metadata is also extracted and stored
  4. Feed Aggregator — an integrated application that allows you to store, organize, read, and share content including blogs, news and other information sources; provides automatic extraction of tags and comments from feeds; supports feeds in RSS 1.0, RSS 2.0, Atom, OPML, or OCS formats
  5. Discussion — in essence, a forum platform that allows newsgroups to be monitored and posts to be viewed and responded to by thread, along with other general discussion features
  6. Image Gallery — an application to store and share photos, which can be viewed as online albums or slide shows
  7. Polls — a straightforward utility for posting polls online and viewing voted results; organized by calendar as well
  8. Bookmark Manager — a component for constructing, maintaining, and sharing vast collections of Web resource URLs; supports XBEL-based or Mozilla HTML bookmark imports; also mapped to the Annotea bookmark ontology
  9. Email platform — a robust Web-based email client with standard email management and viewing functionality
  10. Community — a general portal showing all group and community listings within the current deployment, as well as means for joining or logging into them.

Here, for example, is the “standard” screen (again, modifiable if desired) when accessing an individual ODS community:

OpenLink Data Space - Community View

And, here is another example, this time showing the “standard” photo image gallery:

OpenLink Data Space - Gallery View

You can check out for yourself these various integrated components by going to http://myopenlink.net/ods/sfront.vspx.

There are subtle but powerful consistencies underlying this suite of components that I am amazed more people don’t talk about. While each individual component performs and has functionality quite similar to its standalone brethren, it is the shared functionality and common adherence to standards across all ODS components where the suite really shines. All ODS components, for example, share these capabilities where it natively makes sense for the individual component:

  • Single sign-on with security
  • OpenID and Yadis compliance
  • Data exposed via SQL, RDF, XML or related serializations
  • Feed reading formats including RSS 1.0, RSS 2.0, Atom, OPML
  • Content syndication formats including RSS 1.0, RSS 2.0, Atom, OPML
  • Content publishing using Atom, Movable Type, MetaWeblog, or Blogger protocols
  • Full-text indexing and search
  • WebDAV attachments and management
  • Automatic tagging and structuring via numerous relevant ontologies (microformats, FOAF, SIOC, SKOS, Annotea, etc.)
  • Conversation (NNTP) awareness
  • Data management and query including SQL, RDF, XML and free text
  • Data access protocols including SQL, SPARQL, GData, Web services (SOAP or REST styles), WebDAV/HTTP
  • Use of structured Web middleware (see full alphabet soup above), and
  • OpenLink Ajax Toolkit (OAT) functionality.

This is a very powerful list, and it doesn’t end there. OpenLink has an API to extend ODS; it has not yet been published, but likely will be in the near future.

So, with the use of ODS, you can immediately become “semantic Web-ready” in all of your postings and online activities. Or, as an enterprise or community leader, you can immediately connect the dots and remove artificial format, syntax and semantic barriers between virtually all online activities of your constituents. I mean, we’re talking here about hurdles that generally appear daunting, if not insurmountable, yet the software to overcome them can be downloaded, installed and used today. Hello!?

Finally, OpenLink Data Spaces are provided as a standard inclusion with the open source Virtuoso.

Performance and Interoperability

I mean, what can one say? With all of this scope and power, why aren’t more people using OpenLink’s software? Why doesn’t the world better know about these capabilities? Is there some hidden flaw in all of this due to lack of performance or interoperability or something else?

I think it’s worth decomposing these questions from a number of perspectives because of what it may say about the state of the structured Web.

First, by way of disclaimer, I have not benchmarked OpenLink’s software against other applications. There are tremendous offerings from other entities, many on my “to do” list for detailed coverage and perhaps hosannas. Some of those alternatives may be superior or better fits in certain environments.

Second, though, OpenLink is an experienced middleware vendor with data access and interoperability as its primary business. As a relatively small player, it can only compete with the big boys based on performance, responsiveness and/or cost. OpenLink understands this better than I; indeed, the company has both a legacy and reputation for being focused on benchmarks and performance testing.

I believe it is telling that the company has been an active developer and participant in performance testing standards, and is assiduous in posting performance results for its various offerings. This speaks to intellectual and technical integrity. For example, Orri Erling, the OpenLink Virtuoso program manager and lead developer, posts on a frequent basis on both how to improve standards and how OpenLink’s current releases stand up to them. The company builds and releases benchmarking utilities in open source. OpenLink is an active participant in the THALIA data integration benchmarking initiative and the LUBM semantic Web repository benchmark. While RDF performance and benchmarks are obviously still evolving, I think we can readily dismiss performance concerns as a factor limiting early uptake [11].

Third, as for functionality or scope, I honestly cannot point to any alternative that spans as broad a spectrum of structured Web requirements as does OpenLink. Further, the sheer scope of data formats and schemas that OpenLink transforms to RDF is the broadest I know of. Indeed, breadth and scope are real technical differentiators for OpenLink and Virtuoso.

Fourth, while back-end interoperability with other triple stores such as Sesame, Jena or Mulgara/Kowari is not yet available in Virtuoso, there apparently has not yet been demand for it; the ability to add it is clearly within OpenLink’s virtual database and data federation strengths, and no other option — commercial or open source — currently provides it. We’re at the cusp of real interoperability demands, but not yet quite there.

And, last (and this is very recent), we are only now beginning to see large and meaningful RDF datasets appear with which these capabilities are naturally associated. Many have been active in these new announcements. OpenLink has taken an enlightened self-interest role through its active participation in the Open Data movement and its strong support for the Linking Open Data effort of the W3C’s SWEO (Semantic Web Education and Outreach) community.

So, via a confluence of many threads intersecting at once I think we have the answer: The time was not ripe until now.

What we are seeing at this precise moment is the very forming of the structured Web, the first wave of the semantic Web visible to the general public. Offerings such as OpenLink’s with real and meaningful capabilities are being released; large structured datastores such as DBpedia and Freebase have just been announced; and the pundit community is just beginning to evangelize this new market and new phase for the Web. Though some prognosticators earlier pointed to 2007 as the breakthrough year for the semantic Web, I actually think that moment is now. (Did you blink?)

The semantic Web through its initial structured Web expression is not about to come — it has arrived. Mark your calendar. And OpenLink’s offerings are a key enabler of this most significant Internet milestone.

A Deserving Jewels and Doubloons Winner

OpenLink, its visionary head, Kingsley Idehen, and the full team are providing real leadership, services and tools to make the semantic Web a reality. Providing such a wide range of capable tools, middleware and RDF database technology as open source means that many of the prior challenges of linking together a working production pipeline have now been met, and affordably so. For these reasons, I gladly announce OpenLink Software as a deserving winner of the (highly coveted, but still cheesy! :) ) AI3 Jewels & Doubloons award.

An AI3 Jewels & Doubloons Winner

[1] A general history of OpenLink Software and the Virtuoso product may be found at http://virtuoso.openlinksw.com/wiki/main/Main/VOSHistory.

[2] Most, if not all, of my reviews and testing in this AI3 blog focus on open source. That is because of the large percentage of such tools in the semantic Web space, plus the fact that I can install and test all comparable alternatives locally. Thus, there may indeed be better performing or more functional commercial software than what I typically cover.

[3] In terms of disclosure, I have no formal relationship with OpenLink Software, though I do informally participate with some OpenLink staff on some open source and open data groups of mutual interest. I am also actively engaged in testing the company’s products in relation to other products for my own purposes.

[4] An informative introduction to OpenLink and its CEO and founder, Kingsley Idehen, can be found in this April 28, 2006 podcast with Jon Udell, http://weblog.infoworld.com/udell/2006/04/28.html#a1437. A number of other useful OpenLink screencasts are also available.

[5] I am not suggesting that single-supplier solutions are unqualifiedly better. Some components will be missing, and other third-party components may be superior. But it is also likely the case that a first, initial deployment can be up and running faster the fewer the interfaces that need to be reconciled.

[6] I have to say that Stefano Mazzocchi comes to mind with efforts such as Apache Cocoon and “RDFizers”.

[7] Thus, with ODS you get the additional benefit of having SIOC act as a kind of “gluing” ontology that provides a generic data model of Containers, Items, Item Types, and Associations / Relationships between Items. This approach ends up using RDF (via SIOC) to produce the instance data that makes the model conceptually concrete. (According to OpenLink, this is similar to what Apple formerly used in a closed form in the days of EOF and what Microsoft is grappling with in ADO.NET; it is also a similar model to what is used by Freebase, Dabble, Ning, Google Base, eBay, Amazon, Facebook, Flickr, and others who use Web services, in combination or not with proprietary languages, to expose data. Dave Beckett’s recently announced Triplr works in a similar manner.) This similarity of approach is why OpenLink can so readily produce RDF data from such services.

[8] I think these not-yet-fully formed constructs for interacting with RDF and using SPARQL are a legacy from the early semantic Web emphasis on “machines talking to machines via data”. While that remains a goal since it will eventually lead to autonomous agents that work in the background serving our interests, it is clear that we humans need to get intimately involved in inspecting, transforming and using the data directly. It is really only in the past year or three that workable user interfaces for these issues have even become a focus of attention. Paradoxically, I believe that sufficient semantic enabling of Web data to support a vision of intelligent and autonomous agents will only occur after we have innovated fun and intuitive ways for us humans to interact and manipulate that data. Whooping it up in the data playpen will also provide tremendous emphasis to bring still-further structure to Web content. These are actually the twin challenges of the structured Web.

[9] Stored procedures first innovated for SQL have been abstracted to support applications written in most of the leading programming languages. Virtuoso presently supports stored procedures or classes for SQL, Java, .Net (Mono), PERL, Python and Ruby.

[10] Many relational databases have been used for storing RDF triples and graphs. Also dedicated non-relational approaches, such as using bitmap indices as primary storage medium for triples, have been implemented. At present, there is no industry consensus on what constitutes the optimum storage format and set of indices for RDF.

[11] The W3C has a very informative article on RDF and traditional SQL databases; also, there is a running tally of large triple store scaling.

[12] Thanks to Ted Thibodeau, Jr., of the OpenLink staff, who notes: “This is basically true, but incomplete. In its Application Server role, Virtuoso can host runtime environments for PHP, Perl, Python, JSP, CLR, and others — including ASP and ASPX. Some of these are built-in; some are handled by dynamically loading local library resources. One of the bits of magic here — IIS is not required for ASP or ASPX hosting. Depending on the functionality used in the pages in question, Windows may not be necessary, either. CLR hosting may mandate Windows and Microsoft's .NET Frameworks, because Mono remains an incomplete implementation of the CLI.” Very cool.

Posted: April 6, 2007

Pinheads Sometimes Get the Last Laugh

The idea of the ‘long tail’ was brilliant, and Chris Anderson’s meme has become part of our current lingo in record time. The long tail is the colloquial name for a common feature of some statistical distributions in which an initial high-amplitude peak in the population is followed by a rapid decline and then a relatively stable, slowly declining, low-amplitude population that “tails off” (an asymptotic curve). This sounds fancy; it really is not. It simply means that a very few things are very popular or topical, and most everything else is not.

The following graph is a typical depiction of such a statistical distribution, with the long tail shown in yellow. Such distributions often go by the names of power laws, Zipf distributions, Pareto distributions or general Lévy distributions. (Generally, such curves plotted on a log-log scale appear as straight lines, with the slope being an expression of the distribution’s “power.”)
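
In equation form, the common shape behind all of these named distributions, and the reason the slope carries the “power,” can be stated with a generic constant $C$ and exponent $k$:

$$ y = C\,x^{-k} \quad\Longrightarrow\quad \log y = \log C - k\,\log x $$

so that on doubly logarithmic axes the curve becomes a straight line with slope $-k$.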

[Image: Long tail.svg]

It is a common observation that virtually everything measurable on the Internet — site popularity, site traffic, ad revenues, tag frequencies on del.icio.us, open source downloads by title, Web sites chosen to be digg‘ed, Google search terms — follows such power laws or curves.

However, the real argument that Anderson made, first in Wired magazine and then in his 2006 book The Long Tail: Why the Future of Business is Selling Less of More, is that the Internet, with either electronic or distributed fulfillment, means that the cumulative provision of items in the long tail is now shifting the economics of some companies from “mass” commodities to “specialized” desires. Or, more simply put: there is money to be made in catering to individualized tastes.

I, too, agree with this argument, and it is a brilliant recognition of the fact that the Internet changes everything.

But Long Tails Have ‘Teeny Heads’

Yet what is amazing about this observation of long tails on the Internet has been the total lack of discussion of its natural reciprocal: namely, long tails have teeny heads. For, after all, what else is the curve above telling us? While Anderson’s point is that Amazon can carry millions of book titles and still make a profit by selling only a few of each, what is going on at the other end of the curve — the head end?

Well, if we’re thinking about book sales, we can make the natural and expected observation that the head end of the curve represents sales of the best-selling books; that is, all of those things in the old 80-20 world that are now being blown away by the long-tail economics of the Internet. Given today’s understanding, this observation is pretty prosaic, since it forms the basis of Anderson’s long tail argument. Pre-Internet limits (it’s almost like saying “before the Industrial Revolution”) kept diversity low and choices few.

Okaaaay! Now that seems to make sense. But aren’t we still missing something? Indeed we are.

Social Collaboration Depends on ‘Teeny Heads’

So, when we look at many of the aspects that make up what is known as Web 2.0, or even the emerging semantic Web, we see that collaboration and user-submitted content stand at the fore. And our general power-law curves also affirm that it is a very few who supply most of that user-generated content — namely, those at the head end, the teeny heads. If those relatively few individuals are not motivated, the engine that drives the social content stalls and stutters. Successful social collaboration sites are the ones able to marshal “large numbers of the small percentage.”

The natural question thus arises: what makes those “teeny heads” want to contribute? And what makes them want to contribute big — that is, frequently and with dedication? So, suddenly, here’s a new success factor: to be successful as a collaboration site, you must appeal to the top 1% of users. They will drive your content generation. They are the ‘teeny heads’ at the top of your power curve.

Well, things just got really more difficult. We need tools, mindshare and other intangibles to attract the “1%” that will actually generate our site’s content. But we also need easy frameworks and interfaces for the general Internet population to live comfortably within the long tail.

So, heads or tails? Naahh, that’s the wrong question. Keep flipping until you get both!

Posted: April 2, 2007

Meat and Potatoes

DBpedia Serves Up Real Meat and Potatoes (but Bring Your Own Knife and Fork!)

NOTE: Due to demand, I am pleased to provide this PDF version of this posting (1.4 MB) for downloading or printing.

DBpedia is the first and largest source of structured data on the Internet covering topics of general knowledge. You may not yet have heard of DBpedia, but you will. Its name derives from its springboard in Wikipedia. And it is free, growing rapidly and publicly available today.

With DBpedia, you can manipulate and derive facts on more than 1.6 million “things” (people, places, objects, entities). For example, you can easily retrieve a listing of prominent scientists born in the 1870s; or, with virtually no additional effort, filter further to all German theoretical physicists born in 1879 who have won a Nobel prize [1]. DBpedia is the first project chosen for showcasing by the Linking Open Data community of the Semantic Web Education and Outreach (SWEO) interest group within the W3C. That community has committed to make portions of other massive data sets — such as the US Census, Geonames, MusicBrainz, WordNet, the DBLP bibliography and many others — interoperable as well.
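
As a hedged sketch of how such a retrieval looks in practice, the snippet below sends a category-based query to the public DBpedia SPARQL endpoint. The category URIs, the use of skos:subject for category membership, and the endpoint’s acceptance of the format parameter are all assumptions for illustration; DBpedia’s actual vocabulary and category labels have shifted over time.

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://dbpedia.org/sparql"   # public DBpedia endpoint

# Illustrative query: things categorized both as German physicists and as
# Nobel laureates in Physics. The exact category URIs are assumptions.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?person
WHERE {
  ?person skos:subject <http://dbpedia.org/resource/Category:German_physicists> .
  ?person skos:subject <http://dbpedia.org/resource/Category:Nobel_laureates_in_Physics> .
}
LIMIT 25
"""

url = ENDPOINT + "?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"})
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for row in data["results"]["bindings"]:
    print(row["person"]["value"])
```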

DBpedia has unfortunately been overlooked in the buzz of the past couple of weeks surrounding Freebase. Luminaries such as Esther Dyson, Tim O’Reilly and others have been effusive about the prospects of the pending Freebase offering. And while, according to O’Reilly, Freebase may be “addictive,” or, per Dyson, “Freebase is a milestone in the journey towards representing meaning in computers,” those have been hard assertions to judge. Only a few have been invited (I’m not one) to test drive Freebase, now in alpha behind a sign-in screen, and reportedly also based heavily on Wikipedia. On the other hand, DBpedia, released in late January, is open for testing and demos and available today — to all [2].

Please don’t misunderstand. I’m not trying to pit one service against the other. Both services herald a new era in the structured Web, the next leg on the road to the semantic Web. The data from Freebase and DBpedia are being made freely available under the Creative Commons and GNU Free Documentation licenses, respectively. Free and open data access is fortunately not a zero sum game — quite the opposite. Like other truisms regarding the network effects of the Internet, the more data that can be meaningfully intermeshed, the greater the value. Let’s wish both services and all of their cousins and progeny much success!

Freebase may prove as important and revolutionary as some of these pundits predict — one never knows. Wikipedia, first released in Jan. 2001 with 270 articles and only 10 editors, had a mere 1,000 mentions a month by July 2003. Yet today it has more than 1.7 million articles (English version) and is ranked about #10 in overall Web traffic (more here). So, while today Freebase has greater visibility, marketing savvy and buzz than DBpedia, so did virtually every other entity in Jan. 2001 compared to Wikipedia in its infancy. Early buzz is no guarantee of staying power.

What I do know is that DBpedia, and the catalytic role it is playing in the open data movement, is the kind of stuff from which success on the Internet naturally springs. What I also know is that in open source content a community is required to carry a promise to its potential. Because of its promise, its open and collaborative approach, and the sheer quality of its information now, DBpedia deserves your and the Web’s attention and awareness. But only time will tell whether DBpedia is able to nurture a community and overcome current semantic Web teething problems not of its own making.

First, Some Basics

DBpedia represents data using the Resource Description Framework (RDF) model, as is true for other data sources now available or being contemplated for the semantic Web. Any data representation that uses a “triple” of subject-predicate-object can be expressed through the W3C’s standard RDF model. In such triples, subject denotes the resource, and the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object. (You can think of subjects and objects as nouns, predicates as verbs.) Resources are given a URI (as may also be given to predicates or objects that are not specified with a literal) so that there is a single, unique reference for each item. These lookups can themselves be an individual assertion, an entire specification (as is the case, for example, when referencing the RDF or XML standards), or a complete or partial ontology for some domain or world-view. While the RDF data is often stored and displayed using XML syntax, that is not a requirement. Other RDF forms may include N3 or Turtle syntax, and variants of RDF also exist including RDFa and eRDF (both for embedding RDF in HTML) or more structured representations such as RDF-S (for schema; also known as RDFS, RDFs, RDFSchema, etc.).
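To make the triple model concrete, here is a minimal sketch using the Python rdflib library (my choice for illustration; nothing in DBpedia requires it). The resource URI and property names are hypothetical placeholders, not actual DBpedia identifiers.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Hypothetical vocabulary and resource URIs, for illustration only
EX = Namespace("http://example.org/terms/")
einstein = URIRef("http://example.org/resource/Albert_Einstein")

g = Graph()

# Each add() asserts one subject-predicate-object triple
g.add((einstein, EX.birthYear, Literal(1879)))
g.add((einstein, EX.field, Literal("theoretical physics")))

# The same graph can be written out in several equivalent syntaxes
print(g.serialize(format="turtle"))  # Turtle (N3-like) form
print(g.serialize(format="xml"))     # RDF/XML form
```

The point is that the graph of triples, not any one file syntax, is the data model; Turtle, N3 and RDF/XML are just alternative serializations of the same statements.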

The absolutely great thing about RDF is how well it lends itself to mapping and mediating concepts from different sources into an unambiguous semantic representation (my ‘glad’ == your ‘happy’ OR my ‘glad’ is your ‘glad’), leading to what Kingsley Idehen calls “data meshups.” Further, with additional structure (such as through RDF-S or the various dialects of OWL), drawing inferences and machine reasoning based on the data through more formal ontologies and description logics is also within reach [3].
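And here is a minimal sketch, again with rdflib and made-up vocabularies, of how such a “my ‘glad’ is your ‘happy’” mapping can itself be stated as just another triple (owl:equivalentProperty is the OWL term for saying two properties mean the same thing):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

# Two hypothetical vocabularies that name the same idea differently
MINE = Namespace("http://example.org/mine/")
YOURS = Namespace("http://example.org/yours/")

g = Graph()

# "my 'glad' == your 'happy'": the mapping is itself just an RDF triple,
# so data expressed in either vocabulary can be merged and queried together
g.add((MINE.glad, OWL.equivalentProperty, YOURS.happy))

print(g.serialize(format="turtle"))
```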

While these nuances and distinctions are important to the developers and practitioners in the field, they are of little to no interest to actual users [4]. But, fortunately for users, and behind the scenes, practitioners have dozens of converters for getting data from whatever form it takes in the wild (XML, JSON or a myriad of other formats) into RDF, or vice versa [5]. Using such converters for structured data is becoming pretty straightforward. What is now getting more exciting are improved ways to extract structure from semi-structured data and Web pages, or to use various information extraction techniques to obtain metadata or named entity structure from within unstructured data. This is what DBpedia did: it converted all of the inherent structure within Wikipedia into RDF, which makes that structure manipulable much like a conventional database.

And, like SQL for conventional databases, SPARQL is now emerging as a leading query framework for RDF-based “triplestores” (that is, the unique form of databases — most often indexed in various ways to improve performance — geared to RDF triples). Moreover, in keeping with the distributed nature of the Internet, distributed SPARQL “endpoints” are emerging, which represent specific data query points at IP nodes, the results from which can then be federated and combined. With the emerging toolset of “RDFizers”, in combination with extraction techniques, such endpoints should soon proliferate. Thus, Web-based data integration models can follow either the data federation approach, the consolidated data warehouse approach, or any combination of the two.
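To make the “endpoint” idea tangible, here is a rough sketch of querying DBpedia’s public SPARQL endpoint over plain HTTP from Python. It assumes the endpoint honors the standard query parameter and the SPARQL JSON results format; the query itself is deliberately trivial (fetch any ten triples) just to show the round trip.

```python
import json
import urllib.parse
import urllib.request

# A deliberately trivial query: fetch any ten triples from the store
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

# A SPARQL endpoint is just an HTTP service that accepts a 'query' parameter
url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode({"query": query})
request = urllib.request.Request(
    url, headers={"Accept": "application/sparql-results+json"})

with urllib.request.urlopen(request) as response:
    results = json.load(response)

# Each binding row maps the query variables to URIs or literals
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```

The same pattern works against any other SPARQL endpoint; that uniformity is what makes federating results from several endpoints practical.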

The net effect is that the tools and standards now exist for all data on the Internet to be structured, combined and analyzed. This is huge. Let me repeat: this is HUGE. And all of us users will only benefit as practitioners continue their labors in the background. The era of the structured Web is now upon us.

A Short Intro to DBpedia [6]

The blossoming of Wikipedia has provided a serendipitous starting point to nucleate this emerging “Web of data”. Like Vonnegut’s ice-9, DBpedia’s RDF representation — freely available for download and extension — offers the prospect of catalyzing many semantic Web data sources and tools. (It is also worth studying why Freebase, DBpedia, and a related effort called YAGO from the Max Planck Institute for Computer Science [7] are all using Wikipedia as a springboard.) The evidence that the ice crystals are forming comes from the literally hundreds of millions of new “facts” that were committed to be added as part of the open data initiative within days of DBpedia’s release (see below).

Wikipedia articles consist mostly of free text, but also contain different types of structured information, such as infobox templates (an especially rich source of structure — see below), categorization information, images, geo-coordinates and links to external Web pages. The extraction methods are described in a paper from DBpedia developers Sören Auer and Jens Lehmann. This effort created the initial DBpedia datasets, including extractions in English, German, French, Spanish, Italian, Portuguese, Polish, Swedish, Dutch, Japanese and Chinese.

The current extraction set is about 1.8 GB of zipped files (about 7.5 GB unzipped), consisting of an estimated 1.6 million entities, 8,000 relations/properties/predicates, and 91 million “facts” (generally equated to RDF triples) spread over about 15 different data files. DBpedia can be downloaded as multiple datasets, with the splits and sizes (zipped form and number of triples) shown below; a short sketch of loading one of these files locally follows the list. Note that each download — available directly from the DBpedia site — may also include multiple language variants:

  • Articles (217 MB zipped, 5.4 M triples) — Descriptions of all 1.6 million concepts within the English version of Wikipedia including English titles and English short abstracts (max. 500 chars long), thumbnails, links to the corresponding articles in the English Wikipedia. This is the baseline DBpedia file
  • Extended Abstracts (447 MB, 1.6 M triples) — Additional, extended English abstracts (max. 3000 chars long)
  • External Links (75 MB, 2.8 M triples) — Links to external web pages about a concept
  • Article Categories (68 MB, 5.5 M triples) — Links from concepts to categories using the SKOS vocabulary
  • Additional Languages (147 MB, 4 M triples) — Additional titles, short abstracts and Wikipedia article links in the 10 languages listed above
  • Languages Extended Abstracts (342 MB, 1.3 M triples) — Extended abstracts in the 10 languages
  • Infoboxes (116 MB, 9.1 M triples) — Information that has been extracted from Wikipedia infoboxes, an example of which is shown below. There are about three-quarters of a million English templates with more than 8,000 properties (predicates), with music, animal and plant species, films, cities and books the most prominent types. This category is a major source of the DBpedia structure (but not total content volume):
Example Wikipedia Infobox Template
  • Categories (10 MB, 900 K triples) — Information regarding which concepts are a category and how categories are related
  • Persons (4 MB, 437 K triples) — Information on about 58,880 persons (date and place of birth etc.) represented using the FOAF vocabulary
  • Links to Geonames (2 MB, 213 K triples) — Links between geographic places in DBpedia and data about them in the Geonames database
  • Location Types (500 kb, 65 K triples) — rdf:type Statements for geographic locations
  • Links to DBLP (10 kB, 200 K triples) — Links between computer scientists in DBpedia and their publications in the DBLP database
  • Links to the RDF Book Mashup (100 kB, 9 K triples) — Links between books in DBpedia and data about them provided by the RDF Book Mashup
  • Page Links (335 MB, around 60 M triples) — Dataset containing internal links between DBpedia instances. The dataset was created from the internal pagelinks between Wikipedia articles, potentially useful for structural analysis or data mining.
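As a rough sketch of working with one of these downloads locally, the snippet below loads a dump into an in-memory rdflib graph and counts its triples. The filename is a placeholder and I am assuming an N-Triples dump; check the actual file and adjust the format argument. For the larger files a persistent triplestore, rather than an in-memory graph, would be the sensible choice.

```python
from rdflib import Graph

g = Graph()

# Placeholder filename; substitute the actual downloaded dump.
# format="nt" assumes an N-Triples file; use "xml", "turtle", etc. as needed.
g.parse("dbpedia_dump.nt", format="nt")

print(len(g), "triples loaded")

# Peek at a few raw triples to see the shape of the data
for i, (s, p, o) in enumerate(g):
    print(s, p, o)
    if i >= 4:
        break
```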

Besides direct download, DBpedia is also available via a public SPARQL endpoint at http://dbpedia.org/sparql and via various query utilities (see next).

Querying and Interacting with DBpedia

Generic RDF (semantic Web) browsers like Disco, Tabulator or the OpenLink Data Web Browser [8] can be used to navigate, filter and query data across data sources. A couple of these browsers, plus endpoints and online query services, are used below to illustrate how you can interact with the DBpedia datasets. That being said, it is likely that first-time users will have difficulty using portions of this infrastructure without further guidance. While data preparation and exposure via RDF standards is progressing nicely, the user interfaces, documentation, intuitive presentation, and general knowledge needed for standard users to use this infrastructure are lagging. Let’s show this with some examples, progressing from what might be called practitioner tools and interfaces to those geared more to standard users [9].

The first option we might try is to query the data directly through DBpedia‘s SPARQL endpoint [10]. But, since that is generally used directly by remote agents, we will instead access the endpoint via a SPARQL viewer, as shown by this example looking for “luxury cars”:

SPARQL Explorer

We observe a couple of things using this viewer. First, the query itself requires using the SPARQL syntax. Second, the results are presented as a linked listing of RDF triples.

For practitioners, this is natural and straightforward. Indeed, other standard viewers, now using Disco as the example, also present results in the same tabular triple format (in this case for “Paul McCartney”):

Disco1 Image

While OK for practitioners, these tools pose some challenges for standard users. First, one must know the SPARQL syntax, with its SELECT statement and triple pattern specifications. Second, the locations of the resources (their URIs) and the specific names of the relations (predicates) must generally be known or looked up in a separate step. And, third, of course, the tabular triples display is not terribly attractive and may not be sufficiently informative (since results are often links to a resource, rather than the actual content of that resource).
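To show concretely what those first two hurdles mean, here is a sketch of a SPARQL query of the general shape such viewers expect, run with rdflib against a tiny hand-built graph so that it is self-contained. Every URI and predicate below is a made-up stand-in; against the real DBpedia datasets you would first have to look up the actual resource and property URIs, which is precisely the second hurdle.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/terms/")      # hypothetical vocabulary
RES = Namespace("http://example.org/resource/")  # hypothetical resources

g = Graph()
g.add((RES.Albert_Einstein, EX.field, Literal("theoretical physics")))
g.add((RES.Albert_Einstein, EX.birthYear, Literal(1879)))
g.add((RES.Max_von_Laue, EX.field, Literal("theoretical physics")))
g.add((RES.Max_von_Laue, EX.birthYear, Literal(1879)))

# A SELECT clause plus triple patterns: the user must already know the
# predicate names (ex:field, ex:birthYear) and the exact values to match.
query = """
PREFIX ex: <http://example.org/terms/>
SELECT ?person
WHERE {
  ?person ex:field "theoretical physics" .
  ?person ex:birthYear 1879 .
}
"""

for row in g.query(query):
    print(row.person)
```

Friendlier front ends essentially generate this kind of query behind a form, which is what the services below do.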

These limits are well-known in the community; indeed, they are a reflection of RDF’s roots in machine-readable data. However, it is also becoming apparent that humans need to inspect and interact with this stuff as well, leading to consequent attempts to make interfaces more attractive and intuitive. One of the first to do so for DBpedia is the Universität Leipzig’s Query Wikipedia, an example of an online query service. In this first screen, we see that the user is shielded from much of the SPARQL syntax via the three-part (subject-predicate-object) query entry form:

Query Wikipedia Space Missions Query

We also see that the results can be presented in a more attractive form including image thumbnails:

Query Wikipedia Space Missions Results

Use of dropdown lists could help this design still further. Better tools and interface design are active areas of research and innovation (see below).

However, remember that the results of such data queries are themselves machine readable, which of course means that the results can be embedded into existing pages and frameworks or combined (“meshed up”) with other data in other presentation and visualization frameworks [11]. For example, here is one DBpedia results set on German state capitals presented in the familiar Wikipedia (actually, MediaWiki) format:

German Capitals in Wikipedia

Or, here is a representative example of what DBpedia data might look like when the addition of the planned Geonames dataset is completed:

Geonames Example

Some of these examples, precisely because of the value of the large-scale RDF data that DBpedia now brings, begin to glaringly expose prior weaknesses in tools and user interfaces that stayed hidden when those tools were applied only to toy datasets. With the massive expansion to still additional datasets, plus the interface innovations of Freebase and Yahoo! Pipes and elsewhere, we should see rapid improvements in presentation and usability.

Extensibility and Relation to Open Data

In addition to the Wikipedia, DBLP bibliography and RDF book mashup data already in DBpedia, significant commitments to new datasets and tools have been quick to come, including the addition of full-text search [12], with many more candidate datasets also identified. A key trigger for this interest was the acceptance of the Linking Open Data project, itself an outgrowth of DBpedia proposed by Chris Bizer and Richard Cyganiak, as one of the kick-off community projects of the W3C‘s Semantic Web Education and Outreach (SWEO) interest group. Some of the new datasets presently being integrated into the system — with many notable advocates standing behind them — include:

  • Geonames data, including its ontology and 6 million geographical places and features, including implementation of RDF data services
  • 700 million triples of U.S. Census data from Josh Tauberer
  • Revyu.com, the RDF-based reviewing and rating site, including its links to FOAF, the Review Vocab and Richard Newman’s Tag Ontology
  • The “RDFizers” from MIT Simile Project (not to mention other tools), plus 50 million triples from the MIT Libraries catalog covering about a million books
  • GEMET, the GEneral Multilingual Environmental Thesaurus of the European Environment Agency
  • And, WordNet through the YAGO project, and its potential for an improved hierarchical ontology to the Wikipedia data.

Additional candidate datasets of interest have also been identified by the SWEO interest group and can be found on this page.

Note the richness within music and genetics, two domains that have been early (among others not yet listed) to embrace semantic Web approaches. Clearly, this listing, itself only a few weeks old, will grow rapidly. And, as has been noted, the availability of a wide variety of RDFizers and other tools should cause such open data to continue to grow explosively.

There are also tremendous possibilities for different presentation formats of the results data, as the MediaWiki and Geonames examples above showed. Presentation options include calendars, timelines, lightweight publishing systems (such as Exhibit, which already has a DBpedia example), maps, data visualizations, PDAs and mobile devices, notebooks and annotation systems.

Current Back-end Infrastructure

The online version of DBpedia and its SPARQL endpoint are managed by the open source Virtuoso middleware on a server provided by OpenLink Software. This software also hosts the various third-party browsers and query utilities noted above. These OpenLink programs themselves are some of the more remarkable tools available to the semantic Web community, are open source, and are also very much “under the radar.”

Just as Wikipedia helps provide the content grist for springboarding the structured Web, such middleware and conversion tools are another part of the new magic essential to what is now being achieved via DBpedia and Freebase. I will be discussing further the impressive OpenLink Software tools and back-end infrastructure in detail in an upcoming posting.

Structured Data: Foundation to the Road of the Semantic Web

We thus have an exciting story of a new era — the structured Web — arising from the RDF exposure of large-scale structured data and its accompanying infrastructure of converters, middleware and datastores. This structured Web provides the very foundational roadbed underlying the semantic Web.

Yet, at the same time as we begin traveling portions of this road, we can also see more clearly some of the rough patches in the tools and interfaces needed by non-practitioners. I suspect much of the excitement deriving from both Yahoo! Pipes and Freebase comes from the fact that their interfaces are “fun” and “addictive.” (Excitement also comes from allowing users with community rights to mold ontologies or to add structured content of their own.) Similar innovations will be needed to smooth over other rough spots in query services and use of the SPARQL language, as well as in lightweight forms of results and dataset presentation. And still even greater challenges in mediating semantic heterogeneities lie much further down this road, which is a topic for another day.

I suspect this new era of the structured Web also signals other transitions and changes. Practitioners and the researchers who have long labored in the laboratories of the semantic Web need to get used to business types, marketers and promoters, and (even) the general public crowding into the room and jostling the elbows. Explication, documentation and popularization will become more important; artists and designers need to join the party. While we’ve only seen it in limited measure to date, venture interest and money will also begin flooding into the room, changing the dynamics and the future in unforeseeable ways. I suspect many who have worked the hardest to bring about this exciting era may look back ruefully and wonder why they ever despaired that the broader world didn’t “get it” and why it was taking so long for the self-evident truth of the semantic Web to become real. It now is.

But, like all Thermidorean reactions, this, too, shall pass. Whether we have to call it “Web 3.0” or “Web 4.0” or even “Web 98.6”, or any other such silliness for a period of time in order to broaden its appeal, we should accept this as the price of progress and do so. We really have the standards and tools at hand; we now need to get out front as quickly as possible to get the RDFized data available. We are at the tipping point for accelerating the network effects deriving from structured data. With just a few more blinks of the eye, the semantic Web will have occurred before we know it.

DBpedia Is Another Deserving Winner

The DBpedia team of Chris Bizer, Richard Cyganiak and Georgi Kobilarov (Freie Universität Berlin), of Sören Auer, Jens Lehmann and Jörg Schüppel (Universität Leipzig), and of Orri Erling and Kingsley Idehen (OpenLink Software) all deserve our thanks for showing how “RDFizing” available data and making it accessible as a SPARQL endpoint can move from toy semantic Web demos to actually useful data. Chris and Richard also deserve kudos for proposing the Linking Open Data initiative with DBpedia as its starting nucleus. I thus gladly announce DBpedia as a deserving winner of the (highly coveted, to the small handful who even know of it! :) ) AI3 Jewels & Doubloons award.

An AI3 Jewels & Doubloons Winner

[1] BTW, the answer is Albert Einstein and Max von Laue; another half dozen German physicists received Nobels within the surrounding decade.

[2] Moreover, just for historical accuracy, DBpedia was also the first released, being announced on January 23, 2007.

[3] It is this potential ultimate vision of autonomous “intelligent” agents or bots working on our behalf to gather and collate relevant information that many confusedly equate with the “Semantic Web” (both words capitalized). While the vision is useful and reachable, most semantic Web researchers understand the semantic Web to be a basic infrastructure, followed by an ongoing process to publish and harness more and more data in machine-processable ways. In this sense, the “semantic Web” can be seen more as a journey than a destination.

[4] Like plumbing, the semantic Web is best hidden and only important as a means to an end. The originators of Freebase and other entities beginning to tap into this infrastructure intuitively understand this. Some of the challenges of education and outreach around the semantic Web are to emphasize benefits and delivered results rather than describing the washers, fittings and sweated joints of this plumbing.

[5] A great listing of such “RDFizers” can be found on MIT’s Simile Web site.

[6] This introduction to DBpedia borrows liberally from its own Web site; please consult it for more up-to-date, complete, and accurate details.

[7] YAGO (“yet another great ontology”) was developed by Fabian M. Suchanek, Gjergji Kasneci and Gerhard Weikum at the Max-Planck-Institute for Computer Science in Saarbrücken. A WWW 2007 paper describing the project is available, plus there is an online query point for the YAGO service and the data can be downloaded. YAGO combines Wikipedia and WordNet to derive a rich hierarchical ontology with high precision and recall. The system contains only 14 relations (predicates), but they are more specific and not directly comparable to the DBpedia properties. YAGO also uses a slightly different data model, convertible to RDF, that has some other interesting aspects.

[8] See also this AI3 blog’s Sweet Tools and sort by browser to see additional candidates.

[9] A nice variety of other query examples in relation to various tools is available from the DBpedia site.

[10] According to the Ontoworld wiki, “A SPARQL endpoint is a conformant SPARQL protocol service as defined in the SPROT specification. A SPARQL endpoint enables users (human or other) to query a knowledge base via the SPARQL language. Results are typically returned in one or more machine-processable formats. Therefore, a SPARQL endpoint is mostly conceived as a machine-friendly interface towards a knowledge base.”

[11] Please refer to the Developers Guide to Semantic Web Toolkits or Sweet Tools to find a development toolkit in your preferred programming language to process DBpedia data.

[12] Actually, the pace of this data increase is amazing. For example, in the short couple of days I was working on this write-up, I received an email from Kingsley Idehen noting that OpenLink Software had created another access portal to the data that now included DBpedia, 2005 US Census Data, all of the data crawled by PingTheSemanticWeb, and Musicbrainz (see above), plus full-text indexing of same! BTW, this broader release can be found at http://dbpedia2.openlinksw.com:8890/isparql. Whew! It’s hard to keep up.

Posted:March 26, 2007

“Technology is accelerating evolution; it’s accelerating the way in which we search for ideas” - Kevin Kelly

TED talks are one of the highest quality sources of video podcasts available on the Web; generally all are guaranteed to stimulate thought and expand your horizons. For example, my August 2006 posting on Data Visualization at Warp Speed by Hans Rosling of Gapminder, just acquired by Google, came from an earlier TED (Technology, Entertainment, Design) conference. The other thing that is cool about TED talks is that they are a perfect expression of how the Internet is democratizing the access to ideas and innovation: you now don’t have to be one of the chosen 1000 invited to TED to hear these great thoughts. Voices of one are being brought to conversations with millions.

One of the recent TED talk releases is from Wired’s founding executive editor and technology pundit, Kevin Kelly (see especially his Technium blog). In this Feb. 2005 talk, released in late 2006, Kevin argues for the notable similarities between the evolution of biology and technology, ultimately declaring technology the “7th kingdom of life.” (An addition to the six standard recognized kingdoms of life: plants, animals, etc.; see figure below.) You can see this 20-minute video yourself by clicking on Kelly’s photo above or following this link.

Longstanding readers or those who have read my Blogasbörd backgrounder in which I define the title of this blog, Adaptive Information, know of my interest in all forms of information and its role in human adaptability:

First through verbal allegory and then through written symbols and communications, humans have broken the boundaries of genetic information to confer adaptive advantage. The knowledge of how to create and sustain fire altered our days (and nights), consciousness and habitat range. Information about how to make tools enabled productivity, the creation of wealth, and fundamentally altered culture. Unlike for other organisms, the information which humans have to adapt to future conditions is not limited to the biological package and actually is increasing (and rapidly!) over time. No organism but humans has more potentially adaptive information available to future generations than what was present in the past. Passing information to the future is no longer limited to sex. Indeed, it can be argued that the fundamental driver of human economic activity is the generation, survival and sustenance of adaptive information.

Thus, information, biological or cultural, is nothing more than an awareness about circumstances in the past that might be used to succeed or adapt to the future. Adaptive information is that which best survives into the future and acts to sustain itself or overcome environmental instability.

Adaptive information, like the beneficial mutation in genetics, is that which works to sustain and perpetuate itself into the future.

Kelly takes a similar perspective, but expressed more as the artifact of human information embodied in things and processes — that is, technology. As a way of understanding the importance and role of technology, Kelly asks, “What does technology want?” By asking this question, Kelly uses the trick Richard Dawkins took in The Selfish Gene of analyzing evolutionary imperatives.

Kelly also likens technology to life. He looks at five of the defining characteristics of life — ubiquity, diversity, complexity, specialization and socialization — and finds similar parallels in technology. However, Kelly observes that, unlike living things, technologies don’t die, even ones that might normally be considered “obsolete.” These contrasts and themes are not dissimilar from my own perspective about the cumulative and permanent nature of human, as opposed to biological, information.

In one sense, this makes Kelly appear to be a technology optimist, or whatever the opposite of a Luddite might be called. But I think, more broadly, that Kelly’s argument is that the unique human ability to overcome the biological limits of information makes its expression — technology — the uniquely defining characteristic of our humanness and culture. So, rather than optimism or pessimism, Kelly’s argument is ultimately a more important one of essence.

In this regard, Kelly is able to persuasively argue that technology brings us that unique prospect of a potentially purposeful future that we capture through such words as choice, freedom, opportunity, possibility. Technology is adaptive. What makes us human is technology, and more technology perhaps even makes us more “human.” As Kelly passionately concludes:

“Somewhere today there are millions of young children being born whose technology of self-expression has not yet been invented. We have a moral obligation to invent technology so that every person on the globe has the potential to realize their true difference.”

   
   

[BTW, today a new 1-hr podcast interview with Kevin Kelly was released by EconTalk; it is a more current complement to the more thought-provoking video above, covering topics relevant to this blog such as Web 3.0 and the semantic Web. I recommend you choose the download version, rather than putting up with the spotty real-time server.]

Posted by AI3's author, Mike Bergman Posted on March 26, 2007 at 10:04 am in Adaptive Information, Semantic Web | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/350/technology-the-seventh-kingdom-of-life/
The URI to trackback this post is: http://www.mkbergman.com/350/technology-the-seventh-kingdom-of-life/trackback/
Posted:March 19, 2007

Jewels & Doubloons

If Everyone Could Find These Tools, We’d All be Better Off

About a month ago I announced my Jewels & Doubloons awards for innovative software tools and developments, most often ones that may be somewhat obscure. In the announcement, I noted that our modern open-source software environment is:

“… literally strewn with jewels, pearls and doubloons — tremendous riches based on work that has come before — and all we have to do is take the time to look, bend over, investigate and pocket those riches.”

That entry raised the question of why this value is often overlooked in the first place. If we know it exists, why do we continue to miss it?

The answers to this simple question are surprisingly complex. The question is one I have given much thought to, since the benefits from building off of existing foundations are manifest. I think the reasons for why we so often miss these valuable, and often obscure, tools of our profession range from ones of habit and culture to weaknesses in today’s search. I will take this blog’s Sweet Tools listing of 500 semantic Web and -related tools as an illustrative endpoint of what a complete listing of such tools in a given domain might look like (not all of which are jewels, of course!), including the difficulties of finding and assembling such listings. Here are some reasons:

Search Sucks — A Clarion Call for Semantic Search

I recall late in 1998 when I abandoned my then-favorite search engine, AltaVista, for the new Google kid on the block and its powerful PageRank innovation. But that was tens of billions of documents ago, and I now find all the major search engines to again be suffering from poor results and information overload.

Using the context of Sweet Tools, let’s pose some keyword searches in an attempt to find one of the specific annotation tools in that listing, Annozilla. We’ll also assume we don’t know the name of the product (otherwise, why search?). We’ll also use multiple search terms, and since we know there are multiple tool types in our category, we will also search by sub-categories.

In a first attempt using annotation software mozilla, we do not find Annozilla in the first 100 results. We try adding more terms, such as annotation software mozilla “semantic web”, and again it is not in the first 100 results.

Of course, this is a common problem with keyword searches when specific terms or terms of art may not be known or when there are many variants. However, even if we happen to stumble upon one specific phrase used to describe Annozilla, “web annotation tool”, we only get a Google result at about position #70, and it is not for the specific home site of interest:

Google Results - 1

Now, we could have entered annozilla as our search term, assuming somehow we now knew it as a product name, which does result in getting the target home page as result #1. But, because of automatic summarization and choices made by the home site, even that description is also a bit unclear as to whether this is a tool or not:

Google Results - 2

Alternatively, had we known more, we could have searched on Annotea Mozilla and gotten pretty good results, since that is what Annozilla is, but that presumes a knowledge of the product we lack.

Standard search engines actually now work pretty well in helping to find stuff for which you already know a lot, such as the product or company name. It is when you don’t know these things that the weaknesses of conventional search are so evident.

Frankly, were our content specified by even minor amounts of structure (often referred to as “facets”) such as product and category, we could cut through this clutter quickly and get to the results we wanted. Better still, if we could also specify only listings added since some prior date, we could limit our inspections to new tools found since our last investigation (a toy sketch of this kind of faceted filtering follows below). It is this type of structure that characterizes the lightweight Exhibit database and publication framework underlying Sweet Tools itself, as its listing for Annozilla shows:

Structured Results

The limitations of current unstructured search grow daily as Internet content volumes grow.
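To make the point concrete, here is a toy sketch of faceted filtering over a hand-entered, hypothetical list of tool records; the field names and dates are invented and simply mimic the kind of structure a listing like Sweet Tools exposes.

```python
from datetime import date

# Hypothetical, hand-entered records mimicking a faceted tool listing
tools = [
    {"name": "Annozilla", "category": "annotation", "added": date(2007, 1, 15)},
    {"name": "SomeTriplestore", "category": "database", "added": date(2006, 11, 2)},
    {"name": "SomeBrowser", "category": "browser", "added": date(2007, 3, 1)},
]

# Two facet filters do what a pile of keyword queries could not:
# restrict by category, then by date added since the last check.
last_checked = date(2007, 1, 1)
hits = [t for t in tools
        if t["category"] == "annotation" and t["added"] > last_checked]

for tool in hits:
    print(tool["name"], tool["added"])
```

Two facet constraints, category and date added, retrieve in one pass what a string of keyword queries could only approximate.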

We Don’t Know Where to Look

The lack of semantic search also relates to the problem of not knowing where to look, and derives from the losing trade-offs of keywords v. semantics and popularity v. authoritativeness. If, for example, you look for Sweet Tools on Google using “semantic web” tools, you will find that the Sweet Tools listing appears only at position #11, and in a dated version, even though it is arguably the most authoritative listing available. This is because there are more popular sites than the AI3 site, Google tends to cluster multiple site results using the most popular — and generally, therefore, older (250 v. 500 tools in this case!) — page for that given site, and the blog title is used in preference to the posting title:

Google Sweet Tools

Semantics are another issue. They matter, in part, because you might enter the search term product or products or software or applications, rather than ‘tools’, which is the standard description for the Sweet Tools site. The current state of keyword search is to sometimes allow plural and singular variants, but not synonyms or semantic variants. The searcher must thus frame multiple queries to cover all reasonable prospects. (If this general problem is framed as one of the semantics for all possible keywords and all possible content, it appears quite large. But remember, with facets and structure it is really those dimensions that most need semantic relationships — a more tractable problem than the entire content.)

We Don’t Have Time

Faced with these standard search limits, it is easy to claim that repeated searches and the time involved are not worth the effort. And, even if somehow we could find those obscure candidate tools that may help us better do our jobs, we still need to evaluate them and modify them for our specific purposes. So, as many claim, these efforts are not worth our time. Just give me a clean piece of paper and let me design what we need from scratch. But this argument is total bullpucky.

Yes, search is not as efficient as it should be, but our jobs involve information, and finding it is one of our essential work skills. Learn how to search effectively.

The time spent in evaluating leading candidates is also time well spent. Studying code is one way to discern a programming answer. Absent such evaluation, how does one even craft a coded solution? No matter how you define it, anything but the most routine coding tasks requires study and evaluation. Why not use existing projects as the learning basis, in addition to books and Internet postings? If, in the process, an existing capability is found upon which to base needed efforts, so much the better.

The excuse of not enough time to look for alternatives is, in my experience, one of laziness and attitude, not a measured evaluation of the most effective use of time.

Concern Over the Viral Effects of Certain Open Source Licenses

Enterprises, in particular, have legitimate concerns in the potential “viral” effects of mixing certain open-source licenses such as GPL with licensed proprietary software or internally developed code. Enterprise developers have a professional responsibility to understand such issues.

That being said, my own impression is that many open-source projects understand these concerns and are moving to more enlightened mix-and-match licenses such as Eclipse, Mozilla or Apache. Also, in virtually any given application area, there is also a choice of open-source tools with a diversity of licensing terms. And, finally, even for licenses with commercial restrictions, many tools can still be valuable for internal, non-embedded applications or as sources for code inspection and learning.

Though the license issue is real when it comes to formal deployment and requires understanding of the issues, the fact that some open source projects may have some use limitations is no excuse to not become familiar with the current tools environment.

We Don’t Live in the Right Part of the World

Actually, I used to pooh-pooh the idea that one needed to be in one of the centers of software action — say, Silicon Valley, Boston, Austin, Seattle, Chicago, etc. — in order to be effective and on the cutting edge. But I have come to embrace a more nuanced take on this. There is more action and more innovation taking place in certain places on the globe. It is highly useful for developers to be a part of this culture. General exposure, at work and the watering hole, is a great way to keep abreast of trends and tools.

However, even if you do not work in one of these hotbeds, there are still means to keep current; you just have to work at it a bit harder. First, you can attend relevant meetings. If you live outside of the action, that likely means travel on occasion. Second, you should become involved in relevant open source projects or other dynamic forums. You will find that any time you need to research a new application or coding area, the greater familiarity you have with the general domain, the easier it will be to get current quickly.

We Have Not Been Empowered to Look

Dilbert, cubes and big bureaucracies aside, while it may be true that some supervisors are clueless and may not do anything active to support tools research, that is no excuse. Workers may wait until they are “empowered” to take initiative; professionals, in the true sense of the word, take initiative naturally.

Granted, it is easier when an employer provides the time, tools, incentives and rewards for its developers to stay current. Such enlightened management is a hallmark of adaptive and innovative organizations. And it is also the case that if your organization is not supporting research aims, it may be time to get that resume up to date and circulated.

But knowledge workers today should also recognize that responsibility for professional development and advancement rests with them. It is likely all of us will work for many employers, perhaps even ourselves, during our careers. It is really not that difficult to find occasional time in the evenings or the weekend to do research and keep current.

If It’s Important, Become an Expert

One of the attractions of software development is the constantly advancing nature of its technology, which is truer than ever today. Technology generations are measured in the range of five to ten years, meaning that throughout an expected professional lifetime of say, about 50 years, you will likely need to remake yourself many times.

The “experts” of each generation generally start from a clean slate and also re-make themselves. How do they do so and become such? Well, they embrace the concept of lifelong learning and understand that expertise is solely a function of commitment and time.

Each transition in a professional career — not to mention needs that arise in-between — requires getting familiar with the tools and techniques of the field. Even if search tools were perfect and some “expert” out there had assembled the best listing of tools available, they can all be better characterized and understood.

It’s Mindset, Melinda!

Actually, look at all of the reasons above. They all are based on the premise that we have completely within our own lights the ability and responsibility to take control of our careers.

In my professional life, which I don’t think is all that unusual, I have been involved in a wide diversity of scientific and technical fields and pursuits, most often at some point being considered an “expert” in a part of that field. The actual excitement comes from the learning and the challenges. If you are committed to what is new and exciting, there is much room for open-field running.

The real malaise to avoid in any given career is to fall into the trap of “not enough time” or “not part of my job.” The real limiter to your profession is not time, it is mindset. And, fortunately, that is totally within your control.

Gathering in the Riches

Since each new generation builds on prior ones, your time spent learning and becoming familiar with the current tools in your field will establish the basis for that next change. If more of us had this attitude, the ability for each of us to leverage whatever already exists would be greatly improved. The riches and rewards are waiting to be gathered.