Posted: May 6, 2008

Many Kinds of RDF Links Can Provide Linked Data ‘Glue’

In a recent blog post, Kingsley Idehen picked up on the UMBEL project’s mantra of “context, Context, CONTEXT!” as contained in our recent slideshow. He likened context to the real estate phrase of “location, location, location”. Just so. I like Kingsley’s association because it reinforces the idea that context places concepts and things into some form of referential road map with respect to other things and concepts.

To me, context describes the relationships and environmental proximities of what UMBEL calls subject concepts and their instance sub-concepts and named entity members, the whole of which might be visualized as a graph of reference nodes in the firmament of a global knowledge space.

Indeed, it is this very ‘cloud’ of subject concept nodes that we tried to convey in an earlier piece on what UMBEL’s backbone structure of 21,000 subject concepts might look like, shown at right. (Of course, this visualization results from the combination of UMBEL’s OpenCyc contextual framework and specific modeling algorithms; the graph would vary considerably if based on other frameworks or models.)

Yet in a comment to Kingsley’s post, Giovanni Tummarello said, “If you believe in context so much then the linking open data idea goes bananas. Why? Because ‘sameAs’ is fundamentally wrong.. an entity on DBpedia IS NOT sameAs one on GeoNames because the context is different and bla bla… so it all crumbles.” [1]

Well, hmmm. I must beg to differ.

I suspect as we now are seeing Linked Data actually enter into practice, new implications and understandings are coming to the fore. And, as we try new approaches, we also sometimes suffer from the sheer difficulty of explicating those new understandings in the context of the shaky semantics of the semantic Web.

Giovanni’s comment raises two issues:

  1. What the context or meaning of context is, and
  2. The dominant RDF link ‘glue’ for Linked Data that has been used to date, the owl:sameAs predicate.

Therefore, since UMBEL is putting forth the argument for the importance of context in Linked Data, it is appropriate to be precise about the semantics of what is meant.

Context in Context

What is context? The tenth edition of Merriam-Webster's Collegiate Dictionary (and the online version) defines it as:

context \ˈkän-ˌtekst\ n.; ME, weaving together of words, from Latin contextus connection of words, coherence, from contexere to weave together, from com- + texere to weave (ca. 1586)

1: the parts of a discourse that surround a word or passage and can throw light on its meanings

2: the interrelated conditions in which something exists or occurs: environment, setting <the historical context of the war>.

Another online source I like for visualization purposes is Visuwords, which displays the accompanying graph relationships view based on WordNet.

Both of these references, of course, base their perspective on language and language relationships. But, both also provide the useful perspective that context also conveys the senses of environment, surroundings, interrelationships, connections and coherence.

Context has itself been a focus of much research from linguistics to philosophy and computer science. Each field has its specific take on the concept, but I believe it fair to say that context is consensually used as a holistic reference structure that tries to put all worlds and views, including that of the observer and observed, into a consistent framework. Indeed, when that framework and its assertions fit and make sense, we give that a word, too: coherent.

Hu Yijun [2], for example, examines the interplay of language, semantics and the behavior and circumstances of human actors to frame context. Hu observes that an invariably applied research principle is that meaning is determined by context. Context refers to the environmental conditions that surround a discourse and the parts related to it, and provides the framework for interpreting that discourse. World views, relationships and interrelationships, and assertions by human actors combine to establish the context of those assertions and the means to interpret them.

In the concept of context, therefore, we see all of the components and building blocks of RDF itself. We have things or concepts (subjects or objects) that are related to one another (via properties or predicates) to form the basic assertions (triples). These are combined together and related in still more complex structures attempting to capture a world view or domain (ontology). These assertions have trust and credibility based on the actors (provenance) that make them.
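As a minimal sketch of this correspondence (the namespace and concept names here are hypothetical, for illustration only), a single such assertion in N3 might read:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix umbel: <http://umbel.org/umbel/sc/> .

# one triple, one assertion: subject (Jazz), predicate (subClassOf), object (Music)
umbel:Jazz rdfs:subClassOf umbel:Music .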

In short, context is the essence of the semantic Web and Linked Data, not somehow in variance or conflict with it.

Without context, there is no meaning.

One interpretation might be that the characteristics of an individual (say, Quebec City) are oriented to latitude and longitude in the GeoNames source, while the characteristics of that same individual have a different orientation (say, population or municipal government) in the DBpedia (Wikipedia) source. But we need to be very careful about what is meant by context here. The identity of the individual (Quebec City) remains the same in both sources. The context does not change the individual nor its identity, only the nature of the characteristics used to provide different coherent information about it.
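Sketched in N3 (the GeoNames identifier, the ex: property names and the literal values are all placeholders; only the pattern matters), the situation looks like this:

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dbpedia: <http://dbpedia.org/resource/> .
@prefix ex: <http://example.org/terms#> .

# the same individual, described with different characteristics in two sources
dbpedia:Quebec_City ex:municipalGovernment "mayor-council" .
<http://sws.geonames.org/0000000/> ex:latitude "46.8" .

# the identity assertion: both URIs refer to the same thing
dbpedia:Quebec_City owl:sameAs <http://sws.geonames.org/0000000/> .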

Not the Same Old sameAs

With the growth in Linked Data, we are starting to hear rumblings about possible misuse and misapplication of the sameAs predicate [3]. Frankly, this is good, because I share the view that there has been some confusion regarding the predicate and some misapplication of its semantics.

The built-in OWL property owl:sameAs links an individual to an individual [4]. Such an owl:sameAs statement indicates that two URI references actually refer to the same thing: the individuals have the same “identity”.

A link is a predicate is an assertion. It by nature ties (“glues”) two resources to one another. Such an assertion can either: (1) be helpful and “correct”; (2) be made incorrectly; (3) assert the wrong or perhaps semantically poor relationship; or (4) be used maliciously or to deceive.

(Unlike email spam, #4 above has not to my knowledge yet occurred for Linked Data. Most sadly, however, deceitful links will occur at some point. This inevitability is a contingency the community must be cognizant of as it moves forward.)

To date, almost all inter-source Linked Data links have occurred via owl:sameAs. If we liken this situation to early child language acquisition, it is like we only have one verb to describe the world. And because our vocabulary is relatively spare, we have tended to apply sameAs to situations and relations that, comparatively, have a bit of semblance to baby-talk.

So long as we have high confidence two disparate sources are referring to the same individual with the same identity, sameAs is the semantically correct RDF link. In all other cases, the use of this predicate should be suspect.

Simple string or label matches are insufficient to make a sameAs assertion. If sameAs cannot be confidently asserted, as might be the case where the relation of individual referents is perhaps likely but uncertain, we need to invoke new predicates or make no assertion at all. And, if the resources at hand are not individuals at all but classes, the need for new semantics increases still further.

As the Linked Data ‘cloud’ grows rapidly in size, we should be aware that quality, not size, may be the most important metric powering acceptance. The community has made unbelievable progress in finally putting real data behind the semantic Web promise. The challenge now is to add to our vocabulary and ensure quality assertions for the linkages we publish.

Many Predicates Can Richen the RDF Link ‘Glue’

One of UMBEL’s purposes, for example, is to broaden our relations to the class level of subject concepts. As we move beyond the early days of FOAF and other early vocabularies, we will see further richening of our predicates. We also need predicates and predicate language that reflect the open-world nature [5] of public Linked Data and the semantic Web.

So, while sameAs helps us aggregate related information about the same identifiable individual, the predicates of class relations in context to other classes help to put all information into context. And, if done right — that is, if the semantics and assertions are relatively correct — these desired contextual relations and interlinkages can blossom.

The new predicates forthcoming from the UMBEL project related to these purposes, to be published with technical documentation this month, will include (a hypothetical usage sketch follows the list):

  • isAligned — the predicate for aligning external ontology classes to UMBEL subject concepts
  • isAbout — the predicate for relating individuals and instances to their contextual subject concepts, and
  • isLikely — the predicate for likely relations between the same identifiable individual, but where there is some ambiguity or uncertainty short of a sameAs assertion.
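Here is a sketch in N3 of how these three predicates might be used (the umbel: namespace, the concept names and the external resources shown are all placeholders pending the actual technical documentation):

@prefix umbel: <http://umbel.org/umbel#> .
@prefix dbpedia: <http://dbpedia.org/resource/> .
@prefix ex: <http://example.org/ontology#> .

# class-to-class: an external ontology class aligned to an UMBEL subject concept
ex:MusicalGenre umbel:isAligned umbel:Music .

# instance-to-concept: an individual related to its contextual subject concept
dbpedia:Kind_of_Blue umbel:isAbout umbel:Jazz .

# individual-to-individual: likely identity, short of an owl:sameAs commitment
ex:MilesDavis umbel:isLikely dbpedia:Miles_Davis .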

Assertions such as these that are open to ambiguity or uncertainty, while appropriate for much of the open-world nature of the semantic Web, may also be difficult predicates for the community to achieve consensus on. Like our early experience with sameAs, these predicates — or others that can just as easily arise in their stead — will certainly prove subject to some growing pains. :)

Any Context is Better than No Context at All

Most people active in the semantic Web and Linked Data communities believe a decentralized Web environment leads to innovation and initiative. Open software, standards activities, and vigorous community participation affirm these beliefs daily.

The idea of context and global frames of reference, such as represented by UMBEL or perhaps any contextual ontology, could appear to be at odds with those ideals of decentralization. But one paradox is that without context, the basis for RDF linkages is made much poorer, and therefore the potential benefits (and thus adoption) of Linked Data lessen.

The object lesson should therefore not be a rejection of context. Indeed, any context is better than no context at all.

Of course, whether that context gets provided by UMBEL or by some other framework(s) remains to be seen. This is for the market to decide. But the ability of contextual frameworks to richen our semantics should be clear.

The past year, with the growth and acceptance of Linked Data, has affirmed that the mechanisms for linking and relating data are now largely in place. We have a simple, yet powerful and extensible data model in RDF. We have beginning vocabularies and constructs for conducting the data discourse. We have means for moving legacy data and information into this promising new environment.

Context and Linked Data are not in any way at odds, nor are context and sameAs. Indeed, context itself is an essential framework for how we can orient and grow our semantics. Human language required its referents in the real world in order to grow and blossom. Context is just as essential to derive and grow the semantics and meaning of the semantic Web.

The early innovators of the Linked Open Data community are the very individuals best placed to continue this innovation. Let’s accept sameAs for what it is — one kind of link in a growing menagerie of RDF link predicates — and get on with the mission of putting our enterprise in context. I think we’ll find our data has a lot more to say meaningfully — and with more coherence.


[1] See the original post for the full comment; shown with some minor re-formatting.
[2] Hu Yijun, 2006. On the Essence of Discourse: Context Coherence, see http://www.paper.edu.cn/en/downloadpaper.php?serial_number=200606-221&type=1.
[3] For example, two papers presented two weeks ago at the Linked Data on the Web (LDOW2008) Workshop at WWW2008 (April 22, 2008, Beijing, China) highlight this issue. In the first, Bouquet et al. (Paolo Bouquet, Heiko Stoermer, Daniele Cordioli and Giovanni Tummarello, 2008. An Entity Name System for Linking Semantic Web Data, paper presented at LDOW2008; see http://events.linkeddata.org/ldow2008/papers/23-bouquet-stoermer-entity-name-system.pdf) state, “In fact it appears that the current use of owl:sameAs in the Linked Data community is not entirely in line with this definition and uses owl:sameAs more like a Semantic Web substitute forx [sp] a hyperlink instead of realizing the full logical consequences.”
Also Jaffri et al. (Afraz Jaffri, Hugh Glaser and Ian Millard, 2008. URI Disambiguation in the Context of Linked Data, paper presented at LDOW2008, see http://events.linkeddata.org/ldow2008/papers/19-jaffri-glaser-uri-disambiguation.pdf) note that misuses of sameAs may result in a large percentage of entities being improperly conflated, entity references may be incorrect, and there is potential error propagation when mislabeled sameAs are then applied to new instances. As the authors state, “This will have a major impact on the Semantic Web when such repositories are used as data sources without any attempt to manage the inconsistencies or ‘clean’ the data.” The authors also pose thoughtful mechanisms for addressing these issues built on the hard-earned co-referencing experience gained within the information extraction (IE) community.
These observations are not in any way meant to be critical or alarmist. They simply point to the need for quality control and accurate semantics when asserting relationships. These growing pains are a natural circumstance of rapid growth.
[4] W3C, OWL Web Ontology Language Reference, W3C Recommendation, 10 February 2004. See http://www.w3.org/TR/owl-ref/. Note that in OWL Full sameAs can also be applied to classes, but that is a special case, not applicable to how Linked Data has been practiced to date, and is not further discussed here.
[5] The “open world” assumption is defined in the SKOS ontology reference documentation (W3C, SKOS Simple Knowledge Organization System Reference, W3C Working Draft, 25 January 2008; see http://www.w3.org/TR/2008/WD-skos-reference-20080125/#L881 ) as:
“RDF and OWL Full are designed for systems in which data may be widely distributed ( e.g., the Web). As such a system becomes larger, it becomes both impractical and virtually impossible to “know” where all of the data in the system is located. Therefore, one cannot generally assume that data obtained from such a system is “complete”, i.e., if some data appears to be “missing”, one has to assume, in general, that the data might exist somewhere else in the system. This assumption, roughly speaking, is known as the “open world” assumption.”

Posted: April 27, 2008

“UMBEL’s Eleven” overviews the project’s first 11 semantic Web services and online demos. The brief slideshow has been posted to Slideshare.


UMBEL (Upper-level Mapping and Binding Exchange Layer) is a lightweight reference structure for placing Web content, named entities and data in context with other data. It is comprised of about 21,000 subject concepts and their relationships — with one another and with external vocabularies and named entities.

Recent postings by Fred Giasson and by me discussed these Web services in a bit more detail.

Posted: January 28, 2008


Where Has the Biology Community Been Hiding this Gem?

I still never cease to be amazed at how wonderful and powerful tools are so often and easily overlooked. The most recent example is Cytoscape, a winner in our recent review of more than 25 tools for large-scale RDF graph visualization.

We began this review because the UMBEL subject concept “backbone” ontology will involve literally thousands of concepts. Graph visualization software suitable to very large graphs would aid UMBEL’s construction and refinement.

Cytoscape describes itself as a bioinformatics software platform for visualizing molecular interaction networks and integrating these interactions with gene expression profiles and other state data. Cytoscape is partially based on GINY and Piccolo, among other open-source toolkits. What is more important to our immediate purposes, however, is that its design also lends itself well to general network and graph manipulation.

Cytoscape was first brought to our attention by François Belleau of Bio2RDF.org. Thanks François, and also for the strong recommendation and tips. Special thanks are also due to Frédérick Giasson of Zitgist for his early testing and case examples. Thanks, Fred!

Requirements

We had a number of requirements and items on our wish list prior to beginning our review. We certainly did not expect most or all of these items to be met:

  • Large scale – the UMBEL graph will likely have about 20,000 nodes or so; we would also like to be able to scale to instance graphs of hundreds of thousands or millions of nodes. For example, here is one representation of the full UMBEL graph (with nodes in pink-orange and different colored lines representing different relationships or predicates):
Full UMBEL Graph
  • Graph filtering – the ability to filter out the graph display by attribute, topology, selected nodes or other criteria. Again, here is an example using the ‘Organic’ layout produced by selecting on the Music node in UMBEL (click for full size):
Music Sub-graph, 'Organic' Layout
  • Graph analysis – the ability to analyze edge (or relation) lengths, cyclic aspects, missing nodes, imbalances across the full graph, etc.
  • Extensibility – the ability to add new modules or plugins to the system
  • Support for RDF – the ease for direct incorporation of RDF graphs
  • Graph editing – the interactive ability to add, edit or modify nodes and relations, to select colors and display options, to move nodes to different locations, cut-and-paste operations and other standard edits, and
  • Graph visualization – the ease of creating sub-graphs and to plot the graphs with a variety of layout options.

Cytoscape met or exceeded our wish list in all areas save one: it does not support direct ingest of RDF (other than some pre-set BioPAX formats). However, that proved to be no obstacle because of the clean input format support of the tool. Simple parsing of triples into a CSV file is sufficient for input. Moreover, as described below, there are other cool attribute management functions that this clean file format supports as well.
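As a sketch of such a flattening (the three-column source / interaction / target convention follows Cytoscape’s delimited-text network import; the triples themselves are illustrative UMBEL-style relations):

source,interaction,target
umbel:Jazz,rdfs:subClassOf,umbel:Music
umbel:Blues,rdfs:subClassOf,umbel:Music
umbel:Music,rdfs:subClassOf,umbel:Art

Each row simply restates one triple as subject, predicate, object.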

Features and Attractions

The following screen shot shows the major Cytoscape screen. We will briefly walk through some of its key views (click for full size):

Cytoscape-UMBEL Main Screen

This Java tool has a fairly standard Eclipse-like interface and design. The main display window (A) shows the active portion of the current graph view. (Note that in this instance we are looking at a ‘Spring’ layout for the same Music sub-graph presented above.) Selections can easily be made in this main display (the red box) or by directly clicking on a node. The display itself represents a zoom (B) of the main UMBEL graph, which can also be easily panned (the blue box on B) or itself scaled (C). Those items that are selected in the main display window also appear as editable nodes or edges and attributes in the data editing view (D).

The appearance of the graph is fully editable via the VizMapper (E). An interesting aspect here is that every relation type in the graph (its RDF properties, or predicates) can be visually displayed in a different manner. The graphs or sub-graphs themselves can be selected, but also most importantly, the display can respond to a very robust and flexible filtering framework (F). Filters can be easily imported and can apply to nodes, edges (relations), the full graph or other aspects (depending on plugin). A really neat feature is the ability to search the graph in various flexible ways (G), which alters the display view. Any field or attribute can be indexed for faster performance.

In addition to these points, Cytoscape supports the following features:

  • Load and save previously-constructed interaction networks in GML format (Graph Markup Language)
  • Load and save networks and node/edge attributes in an XML document format called XGMML (eXtensible Graph Markup and Modeling Language)
  • Load and save arbitrary attributes on nodes and edges. For example, input a set of custom annotation terms or confidence values
  • Load and save the state of a Cytoscape session in a Cytoscape Session (.cys) file. A session file includes networks, attributes (for node/edge/network), desktop states (selected/hidden nodes and edges, window sizes), properties, and visual styles (which are namable)
  • Customize network data display using powerful visual styles
  • Map node color, label, border thickness, or border color, etc. according to user-configurable colors and visualization schemes
  • Layout networks in two dimensions. A variety of layout algorithms are available, including cyclic and spring-embedded layouts
  • Zoom in/out and pan for browsing the network
  • Use the network manager to easily organize multiple networks, with this structure savable in a session file
  • Use the bird’s eye view to easily navigate large networks
  • Easily navigate large networks (100,000+ nodes and edges) via an efficient rendering engine
  • Multiple plugins are available for areas such as subset selections, analysis, path analysis, etc. (see below).

Other Cytoscape Resources

The Cytoscape project also offers a number of other official resources on its Web site.

Unfortunately, other than these official resources, there appears to be a dearth of general community discussion and tips on the Web. Here’s hoping that situation soon changes!

Plugins

There is a broad suite of plugins available for Cytoscape, and directions to developers for developing new ones.

The master page also includes third-party plugins. The candidates useful to UMBEL and its graphing needs — also applicable to standard semantic Web applications — appear to be:

  • AgilentLiteratureSearch – creates a CyNetwork based on searching the scientific literature. Download from here
  • BubbleRouter – this plugin allows users to layout a network incrementally and in a semi-automated way. Bubble Router arranges specific nodes in user-drawn regions based on a selected attribute value. Bubble Router works with any node attribute file. Download from here
  • Cytoscape Plugin (Oracle) – enables a read/write interface between an Oracle database and the Cytoscape program. In addition, it also enables some network analysis functions from Cytoscape. The README.txt file within the zipfile has instructions for installing and using this plugin. Download from here
  • DOT – interfaces with the GraphViz package for graph layout. The plugin now supports both simple and rank-cluster layouts. This software uses the dot layout routine from the graphviz opensource software developed at AT&T labs. Download from here
  • EnhancedSearch – performs search on multiple attribute fields. Download from here
  • HyperEdgeEditor – add, remove, and modify HyperEdges in a Cytoscape Network. Download from here
  • MCODE – MCODE finds clusters (highly interconnected regions) in a network. Clusters mean different things in different types of networks. For instance, clusters in a protein-protein interaction network are often protein complexes and parts of pathways, while clusters in a protein similarity network represent protein families. Download from here
  • MONET – is a genetic interaction network inference algorithm based on Bayesian networks, which enables reliable network inference with large-scale data (e.g., microarray) and genome-scale network inference from expression data. Network inference can be finished in reasonable time using parallel processing techniques on supercomputing center resources. This option may also be applicable to generic networks. Download from here
  • NamedSelection – this plugin provides the ability to “remember” a group of selected nodes. Download from here
  • NetworkAnalyzer – computes network topology parameters such as diameter, average number of neighbors, and number of connected pairs of nodes. It also displays diagrams for the distributions of node degrees, average clustering coefficients, topological coefficients, and shortest path lengths. Download: http://med.bioinf.mpi-inf.mpg.de/netanalyzer/index.html
  • SelConNet – is used to select the connected part of a network. Actually, running this plugin is like calling Select -> Nodes -> First neighbors of selected nodes many times until all the connected part of the network containing the selected nodes is selected. Download from here
  • ShortestPath – is a plugin for Cytoscape 2.1 and later to show the shortest path between 2 selected nodes in the current network. It supports both directed and undirected networks and it gives the user the possibility to choose which node (of the selected ones) should be used as source and target (useful for directed networks). The plugin API makes it possible to use its functionality from another plugin. Download from here
  • sub-graph – is a flexible tool for sub-graph creation, node identification, cycle finding, and path finding in directed and undirected graphs. It also has a function to select the p-neighborhood of a selected node or group of nodes, which can be selected by pathway name, node type, or by a list in a file. This generates a set of plug-ins called: path and cycle finding, p-neighborhoods and sub-graph selection. Download from here.

Importantly, please note there is a wealth of biology- and molecular-specific plugins also available that are not included in the generic listing above.

Initial Use Tips

Our initial use of the tool suggests some use tips:

  • Try Cytoscape with the yFiles layouts; they are quicker to perform and give interesting results
  • Try the ‘Organic’ yFiles layout as one of the first
  • Try the search feature
  • Check the manual for examples of layouts
  • Hold the right mouse button down in the main screen: moving the cursor from the center outward zooms in; moving it from the exterior inward zooms out
  • Moving and panning nodes can be done in real time without issues
  • The “edge attribute browser” is really nice to find what node links to what other node by clicking on a link (so you don’t have to pan and check, etc)
  • Export to PDF often works best as an output display (though SVG is also supported)
  • If you select an edge and then Ctrl-left-click on the edge, an edge “handle” will appear. This handle can be used to change the shape of the line
  • Use the CSV file to make quick modifications, and then check it with the Organic layout
  • A convenient way to check the propagation of a network is to select a node, then click on Ctrl+6 again and again (Ctrl+6 selects neighborhood nodes of a selected node, so it “shows” you the network created by a node and its relationships)
  • If you want to analyze a sub-graph, search for a node, then press a couple of times on Ctrl+6, then create another graph from that selected node (File -> New -> Network from selected node)
  • If you begin to see slow performance, then save and re-load your session; there appear to be some memory leaks in the program
  • Also, for very large graphs, avoid repeated use of certain layouts (Hierarchy, Orthogonal, etc.) that take very long times to re-draw.

Concluding Observations and Comments

Cytoscape was first released in 2002 and has undergone steady development since. Most recently, the 2.x and especially 2.3 versions forward have seen a flurry of general developments that have greatly broadened the tool’s appeal and capabilities. It was perhaps only these more recent developments that have positioned Cytoscape for broader use.

I suspect another reason that this tool has been overlooked by the general semWeb community is the fact that its sponsors have positioned it mostly in the biological space. Their short descriptor for the project, for example, is: “Cytoscape is an open source bioinformatics software platform for visualizing molecular interaction networks and integrating these interactions with gene expression profiles and other state data.” That statement hardly makes it sound like a general tool!

Another reason for the lack of attention, of course, is the common tendency for different disciplines not to share enough information. Indeed, one reason for my starting the Sweet Tools listing was hopefully as a means of overcoming artificial boundaries and assembling relevant semantic Web tools in one central place.

Yet despite the product’s name and its positioning by sponsors, Cytoscape is indeed a general graph visualization tool, and arguably the most powerful one reviewed from our earlier list. Cytoscape can easily accommodate any generalized graph structure, is scalable, provides all conceivable visualization and modeling options, and has a clean extension and plugin framework for adding specialized functionality.

With just minor tweaks or new plugins, Cytoscape could directly read RDF and its various serializations, could support processing any arbitrary OWL or RDF-S ontology, and could support other specific semWeb-related tasks. As well, a tool like CPath (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1660554), which enables querying of biological databases and then storing the results in Cytoscape format, offers some tantalizing prospects for a general model for other Web query options.

For these reasons, I gladly announce Cytoscape as the next deserving winner of the (highly coveted, but cheesy! :) ) AI3 Jewels & Doubloons award.

Cytoscape’s sponsors — the U.S. National Institute of General Medical Sciences (NIGMS) of the National Institutes of Health (NIH), the U.S. National Science Foundation (NSF) and Unilever PLC — and its developers — the Institute for Systems Biology, the University of California – San Diego, the Memorial Sloan-Kettering Cancer Center, L’Institut Pasteur and Agilent Technologies – are to be heartily thanked for this excellent tool!

An AI3 Jewels & Doubloons Winner
Posted: August 23, 2007

Was the Industrial Revolution Truly the Catalyst?

Why, roughly beginning in 1820, did historical economic growth patterns skyrocket?

This is a question of no small import, and one that has occupied economic historians for many decades. We know what some of the major transitions have been in recorded history: the printing press, Renaissance, Age of Reason, Reformation, scientific method, Industrial Revolution, and so forth. But, which of these factors were outcomes, and which were causative?

This is not a new topic for me. Some of my earlier posts have discussed Paul Ormerod’s Why Most Things Fail: Evolution, Extinction and Economics, David Warsh’s Knowledge and the Wealth of Nations: A Story of Economic Discovery, David M. Levy’s Scrolling Forward: Making Sense of Documents in the Digital Age, Elizabeth Eisenstein’s classic Printing Press, Joel Mokyr’s Gifts of Athena: Historical Origins of the Knowledge Economy, Daniel R. Headrick’s When Information Came of Age: Technologies of Knowledge in the Age of Reason and Revolution, 1700-1850, and Yochai Benkler’s The Wealth of Networks: How Social Production Transforms Markets and Freedom. Thought-provoking references, all.

But, in my opinion, none of them posits the central point.

Statistical Leaps of Faith

Statistics (originally derived from the concept of information about the state) really only began to be collected in France in the 1700s. For example, the first true population census (as opposed to the enumerations of biblical times) occurred in Spain in that same century, with the United States being the first country to set forth a decennial census beginning around 1790. Pretty much everything of a quantitative historical basis prior to that point is a guesstimate, and often a lousy one to boot.

Because no data were collected — indeed, the idea of data and statistics did not exist — attempts in our modern times to re-create economic and population assessments for earlier centuries are truly a heroic — and an estimation-laden — exercise. Nonetheless, Angus Maddison, the renowned economic historian who has written a number of definitive OECD studies, and his team have prepared economic and population growth estimates for the world and various regions going back to AD 1 [1].

One summary of their results shows:

Year (AD)    Avg Per Capita GDP (1990 $)    Avg Annual Growth Rate    Years Required for Doubling
1            461
1000         450                            -0.002%                   N/A
1500         566                            0.046%                    1,504
1600         596                            0.051%                    1,365
1700         615                            0.032%                    2,167
1820         667                            0.067%                    1,036
1870         874                            0.542%                    128
1900         1,262                          1.235%                    56
1913         1,526                          1.470%                    47
1950         2,111                          0.881%                    79
1967         3,396                          2.836%                    25
1985         4,764                          1.898%                    37
2003         6,432                          1.682%                    42

Note that through at least 1000 AD economic growth per capita (as well as population growth) was approximately flat. Indeed, up to the nineteenth century, Maddison estimates that a doubling of economic well-being per capita only occurred every 3000 to 4000 years. But, by 1820 or so onward, this doubling accelerated at warp speed to every 50 years or so.

Looking at a Couple of Historical Breakpoints

The first historical shift in millennial trends occurred roughly about 1000 AD, when flat or negative growth began to accelerate slightly. The growth trend looks comparatively impressive in the figure below, but that is only because the doubling of economic per capita wealth has now dropped to about every 1000 to 2000 years (note the relatively small differences in the income scale). These are annual growth rates about 30 times lower than today’s, which, with compounding, prove anemic indeed (see the estimated rates in the table above).

At about 1000 AD, however, there is an inflection point, though a small one. It is also one that corresponds somewhat to the adoption of raw linen paper vs. skins and vellum (among other correlations that might be drawn).

When the economic growth scale gets expanded to include today, these optics change considerably. Yes, there was a bit of a growth inflection around 1000 AD, but it is almost lost in the noise over the longer historical horizon. The real discontinuity in economic growth appears to have occurred in the early 1800s compared to all previous recorded history. At this major inflection point in the early 1800s, historically flat income averages skyrocketed. Why?

The fact that this inflection point does not correspond to earlier events such as the invention of the printing press or the Reformation (or other earlier notable transitions) — and does more closely correspond to the era of the Industrial Revolution — has tended to cement in popular histories and the public’s mind the view that machinery and mechanization were the causative factors behind economic growth.

Had a notable transition occurred in the mid-1400s to 1500s, it would have been obvious to ascribe modern economic growth trends to the availability of information and the printing press. And while, indeed, the printing press had massive effects, as Elizabeth Eisenstein has shown, the empirical record of changes in economic growth is not directly linked with the adoption of the printing press. Moreover, as the graph above shows, something huge did happen in the early 1800s.

Pulp Paper and Mass Media

In its earliest incarnations, the printing press was an instrument of broader idea dissemination, but still largely to and through a relatively small and elite educated class. That is because books and printed material were still too expensive — I would submit largely due to the exorbitant cost of paper — even though somewhat more available to the wealthy classes. Ideas were fermenting, but the relative percentage of participants in that direct ferment were small. The overall situation was better than monks laboriously scribing manuscripts, but not disruptively so.

However, by the 1800s, those base conditions changed, as reflected in the figure above. The combination of mechanical presses and paper production with the innovation of cheaper “pulp” paper were the factors that truly brought information to the “masses.” Yet some have even taken “mass media” to be its own pejorative. But look closely at what that term means and its importance to bringing information to the broader populace.

Paul Starr, in his Creation of the Media, notes how in the 15 years from 1835 to 1850 the cost of setting up a mass-circulation paper increased from $10,000 to over $2 million (in 2005 dollars). True, mechanization was increasing costs, but from the standpoint of consumers, the cost of information content was dropping to zero and approaching a near-time immediacy. The concept of “news” was coined, delivered by the “press” for a now-emerging “mass media.” Hmmm.

This mass publishing and pulp paper were emerging to bring an increasing storehouse of content and information to the public at levels never before seen. Though mass media may prove to be an historical artifact, its role in bringing literacy and information to the “masses” was generally an unalloyed good and the basis for an improvement in economic well-being the likes of which had never been seen.

More recent trends show an upward blip in growth shortly after the turn of the 20th century, corresponding to electrification, but then a much larger discontinuity beginning after World War II:

In keeping with my thesis, I would posit that organizational information efforts and early electromechanical and then electronic computers resulting from the war effort, which in turn led to more efficient processing of information, were possible factors for this post-WWII growth increase.

It is silly, of course, to point to single factors or offer simplistic slogans about why this growth occurred and when. Indeed, the scientific revolution, industrial revolution, increase in literacy, electrification, printing press, Reformation, rise in democracy, and many other plausible and worthy candidates have been brought forward to explain these historical inflections in accelerated growth. For my own lights, I believe each and every one of these factors had its role to play.

But at a more fundamental level, I believe the drivers for this growth change came from the global increase in and access to prior human information. Surely, the printing press helped to increase absolute volumes. Declining paper costs (a factor I believe to be greatly overlooked, but also conterminous with the growth spurt and the transition from rag to pulp paper in the early 1800s) made information access affordable and universal. With accumulations in information volume came the need for better means to organize and present that information — title pages, tables of contents, indexes, glossaries, encyclopedias, dictionaries, journals, logs, ledgers, etc., all innovations of relatively recent times — that themselves worked to further fuel growth and development.

Of course, were I an economic historian, I would need to argue and document my thesis in a 400-page book. And, even then, my arguments would appropriately be subject to debate and scrutiny.

Information, Not Machines

Tools and physical artifacts distinguish us from other animals. When we see the lack of a direct correlation of growth changes with the invention of the printing press, or see growth changes roughly coincident with the age of machines of the Industrial Revolution, it is easy and natural for us humans to equate such things to the tangible device. Indeed, our current fixation on technology is in part due to our comfort as tool makers. But, is this association with the technology and the tangible reliable, or (hehe) “artifactual”?

Information, specifically non-biological information passed on through cultural means, is what truly distinguishes us humans from other animals. We have been easily distracted looking at the tangible, when it is the information artifacts (“symbols”) that make us the humans who we truly are.

So, the confluence of cheaper machines (steam printing presses) with cheaper paper (pulp) brought information to the masses. And, in that process, more people learned, more people shared, and more people could innovate. And, yes, folks, we innovated like hell, and continue to do so today.

If the nature of the biological organism is to contain within it genetic information from which adaptations arise that it can pass to offspring via reproduction — an information volume that is inherently limited and only transmittable by single organisms — then the nature of human cultural information is a massive shift to an entirely different plane.

With the fixity and permanence of printing and cheap paper — and now cheap electrons — all prior discovered information across the entire species can now be accumulated and passed on to subsequent generations. Our storehouse of available information is thus accreting in an exponential way, and available to all. These factors make the fitness of our species a truly quantum shift from all prior biological beings, including early humans.

What Now Internet?

The information by which we produce and disseminate information is itself changing and growing. This is an infrastructural innovation that applies multiplier benefits upon the standard multiplier benefit of information. In other words, innovation in the basis of information use and dissemination itself is disruptive. Over history, writing systems, paper, the printing press, mass paper, and electronic information have all had such multiplier effects.

The Internet is but the latest example of such innovations in the infrastructural groundings of information. The Internet will continue to support the inexorable trend to more adaptability, more wealth and more participation. The multiplier effect of information itself will continue to empower and strengthen the individual, not in spite of mass media or any other ideologically based viewpoint but due to the freeing and adaptive benefits of information itself. Information is the natural antidote to entropy and, longer term, to the concentrations of wealth and power.

If many of these arguments about the importance of the availability of information prove correct, then we should conclude that the phenomenon of the Internet and global information access promises still more benefits to come. We are truly seeing access to meaningful information leapfrog anything seen before in history, with nearly every person on Earth soon contributing to the information dialog and leverage.

Endnote: And, oh, to answer the rhetorical question of this piece: No, it is information that has been the source of economic growth. The Industrial Revolution was but a natural expression of then-current information and, through its innovations, a source of still newer information, all continuing to feed economic growth.


[1] The historical data were originally developed in three books by Angus Maddison: Monitoring the World Economy 1820-1992, OECD, Paris 1995; The World Economy: A Millennial Perspective, OECD Development Centre, Paris 2001; and The World Economy: Historical Statistics, OECD Development Centre, Paris 2003. All these contain detailed source notes. Figures for 1820 onwards are annual, wherever possible.

For earlier years, benchmark figures are shown for 1 AD, 1000 AD, 1500, 1600 and 1700. These figures have been updated to 2003 and may be downloaded by spreadsheet from the Groningen Growth and Development Centre (GGDC), a research group of economists and economic historians at the Economics Department of the University of Groningen headed by Maddison. See http://www.ggdc.net/.

Posted: August 18, 2007

RDF123

UMBC’s Ebiquity Program Creates Another Great Tool

In a strange coincidence, I encountered a new project called RDF123 from UMBC’s Ebiquity program a few days back while researching ways to more easily create RDF specifications. (I was looking in the context of easier ways to test out variations of the UMBEL ontology.) I put it on my to-do list for testing, use and a possible review.

Then, this morning, I saw that Tim Finin had posted up a more formal announcement of the project, including a demo of converting my own Sweet Tools to RDF using the very same tool! Thanks, Tim, and also for accelerating my attention on this. Folks, we have another winner!

RDF123, developed by Lushan Han with funding from NSF [1], improves upon earlier efforts from the University of Maryland’s Mindswap lab, which had developed Excel2RDF and the more flexible ConvertToRDF a number of years back. Unlike RDF123, these other tools were limited to creating an instance of a given class for each row in the spreadsheet. RDF123, on the other hand, allows users to define mappings to arbitrary graphs and different templates by row.

It is curious that so little work has been done on spreadsheets as an input and specification mechanism for RDF, given the huge use and ubiquity (pun on purpose!) of the format. According to the Ebiquity technical report [1], TopBraid Composer has a spreadsheet utility (one that I have not tested), and there is a new plug-in for Protégé version 4.0 from Jay Kola (which requires upgrading to the beta version of Protégé) that supports imports of OWL and RDF Schema; it, too, was on my to-do list for testing.

I have also been working with the Linking Open Data group at the W3C regarding converting the Sweet Tools listing to RDF, and have indeed had an RDF/XML listing available for quite some time [2]. You may want to compare this version with the N3 version produced by RDF123 [3]. The specification for creating this RDF123 file, also in N3 format, is really quite simple:

@prefix d: <http://spreadsheets.google.com/feeds/list/o15870720903820235800 etc., etc.> .
@prefix mkbm: <http://www.mkbergman.com/> .
@prefix exhibit: <http://simile.mit.edu/2006/11/exhibit#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <#> .
@prefix e: <http://spreadsheets.google.com/feeds/list/o15870720903820235800 etc., etc.> .
<Ex:e+$1>
  a exhibit:Item ;
  rdfs:label "Ex:$1" ;
  exhibit:origin "Ex:mkbm+'#'+$1^^string" ;
  d:Category "Ex:$5" ;
  d:Existing "Ex:$7" ;
  d:FOSS "Ex:$4" ;
  d:Language "Ex:$6" ;
  d:Posted "Ex:$8" ;
  d:URL "Ex:$2^^string" ;
  d:Updated "Ex:$9" ;
  d:description "Ex:$3" ;
  d:thumbnail "Ex:@If($10='';'';mkbm+@Substr($10,12,@Sub(@Length($10),4)))^^string" .
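(As we read the RDF123 map syntax, the Ex: prefix marks an expression evaluated once per spreadsheet row, with $N referring to the value in column N; d:Category above, for example, draws its value from column 5 of the Sweet Tools spreadsheet.)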

The UMBC approach is somewhat like GRDDL for converting other formats to RDF, but is more direct, bypassing the need to first convert the spreadsheet to XML and then transform it with XSLT. This means updates can be automatic, and the difficulty of writing XSLT is itself replaced with a simple notation, as above, for properly mapping label names.

RDF123 has the option of two interfaces in its four versions. The first interface, used by the application versions, is a graphical interface that allows users to create their mapping in an intuitive manner. The second is a Web service that takes as input a combined URL string to a Google spreadsheet or CSV file and an RDF123 map and output specification [3].
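As a sketch, a combined request takes the general form below (placeholders in angle brackets; see [3] for a complete working example):

http://rdf123.umbc.edu/server/?src=<spreadsheet-or-CSV-URL>&map=<RDF123-map-URL>&out=<N3-or-XML>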

The four versions of the software are:

RDF123 is a tremendous addition to the RDF tools base, and one with promise for further development for easy use by standard users (non-developers). Thanks NSF, UMBC and Lushan!

And, Thanks Josh for the Census RDF

Along with last week’s tremendous announcement by Josh Tauberer of making 2000 US Census data available as nearly 1 billion RDF triples, this dog week of August in fact has proven to be a stellar one on the RDF front! These two events should help promote an explosion of RDF in numeric data.


[1] Lushan Han, Tim Finin, Cynthia Parr, Joel Sachs, and Anupam Joshi, RDF123: A Mechanism to Translate Spreadsheets to RDF, Technical Report from the Computer Science and Electrical Engineering Dept., University of Maryland, Baltimore County, August 2007, 17 pp. See http://ebiquity.umbc.edu/paper/html/id/368/RDF123-a-mechanism-to-translate-spreadsheets-to-RDF; also, a PDF version of the report is available. The effort was supported by a grant from the National Science Foundation.

[2] This version was created using Exhibit, the lightweight data publishing framework for Sweet Tools. It allows RDF/XML to be copied from the online Exhibit, though it has a few encoding issues, which required manual adjustments to produce valid RDF/XML. A better RDF export service is apparently in the works for Exhibit version 2.0, slated for release soon.

[3] N3 stands for Notation 3 and is a more easily read serialization of RDF. For direct comparison with my native RDF/XML, you can convert the N3 file at RDFabout.com. Alternatively, you can directly create the RDF/XML output with the slightly different instructions to the online service of: http://rdf123.umbc.edu/server/?src=http://spreadsheets.google.com/pub?key=pGFSSSZMgQNxIJUCX6VO3Ww&map=http://rdf123.umbc.edu/map/demo1.N3&out=XML; note the last statement changing the output format from N3 to XML. Also note the UMBC service address, followed by the spreadsheet address, followed by the specification address (the listing of which is shown above), then ending with the output form. This RDF/XML output validates with the W3C’s RDF validation service, unlike the original RDF/XML created from Sweet Tools that had some encoding issues that required the manual fixing.