Posted: June 23, 2008

We Offer a Definition and Some Answers to Enterprise Questions

The recent LinkedData Planet conference in NYC marked, I think, a real transition point. The conference signaled the beginning of the Linked Data approach’s movement from the research lab to the enterprise. As a result, there was something of a schizophrenic aspect to the conference at many different levels: business and research perspectives; realists and idealists; straight RDF and Linked Data RDF; even the discussions in the exhibit area versus some of the talks presented from the podium.

As with any new concept, my sense was that attendees struggled with terminology and common language, and with the need to bridge different perspectives and world views. As in all human matters, communication and dialog were at the core of the attendees’ attempts to bridge gaps and find common ground. Based on what I saw, much progress occurred.

The reality, of course, is that Linked Data is still very much in its infancy, and its practice within the enterprise is just beginning. Much of what was heard at the conference was theory rather than practice and use cases. That should and will change rapidly.

In an attempt to help move the dialog further, I offer a definition and Structured Dynamics’ perspective on some of the questions posed in one way or another during the conference.

Linked Data Defined

Sources such as the four principles of Linked Data in Tim Berners-Lee’s Design Issues: Linked Data and the introductory statements on the Linked Data Wikipedia entry approximate — but do not completely express — an accepted or formal or “official” definition of Linked Data per se. Building from these sources and attempting to be more precise, here is the definition of Linked Data we use internally:

Linked Data is a set of best practices for publishing and deploying instance and class data using the RDF data model, naming the data objects using uniform resource identifiers (URIs), and exposing the data for access via the HTTP protocol, while emphasizing data interconnections, interrelationships and context useful to both humans and machine agents.

All references to Linked Data below embrace this definition.

Some Clarifying Questions

I’m sure many other questions were raised, but listed below are some of the more prominent ones I heard in the various conference Q&A sessions and hallway discussions.

1. Does Linked Data require RDF?

Yes. Though other approaches can also model the first-order predicate logic (FOL) of subject-predicate-object at the core of the Resource Description Framework data model, RDF is the one based on the open standards of the W3C. RDF and FOL are powerful because of their simplicity, their ability to express complex schema and relationships, and their suitability for modeling all extant data frameworks for unstructured, semi-structured and structured data.

2. Is publishing RDF sufficient to create Linked Data?

No. Linked Data represents a set of techniques applied to the RDF data model that names all objects as URIs and makes them accessible via the HTTP protocol (as well as other considerations; see the definition above and further discussion below).

Some vendors and data providers claim Linked Data support, but if their data is not accessible via HTTP using URIs for data object identification, it is not Linked Data. Fortunately, it is relatively straightforward to convert non-compliant RDF to Linked Data.

3. How does one publish or deploy Linked Data?

There are some excellent references for how to publish Linked Data. Examples include a tutorial, How to Publish Linked Data on the Web, and a white paper, Deploying Linked Data, using the example of OpenLink’s Virtuoso software. There are also recommended approaches and ways to use URI identifiers, such as the W3C’s working draft, Cool URIs for the Semantic Web.

However, there are not yet published guidelines for how also to meet the definition above, with its added emphasis on class and context matching. A number of companies and consultants, including Zitgist, presently provide such assistance.

The key principles, however, are to link data items aggressively with appropriate semantics (properties or relations; that is, the predicate edges between the subject and object nodes of the triple), using URIs as the object identifiers, with everything exposed and accessible via the HTTP Web protocol.

4. Is Linked Data just another term or branding for the Semantic Web?

Absolutely not, though this is a source of some confusion at present.

The Semantic Web is probably best understood as a vision or goal where semantically rich annotation of data is used by machine agents to make connections, find information or do things automatically in the background on behalf of humans. We are on a path toward this vision or goal, but under this interpretation the Semantic Web is more of a process than a state. By understanding that the Semantic Web is a vision or goal we can see why a label such as ‘Web 3.0’ is perhaps simplistic and incomplete.

Linked Data is a set of practices somewhere in the early middle of the spectrum from the initial Web of documents to this vision of the Semantic Web. (See my earlier post at bottom for a diagram of this spectrum.)

Linked Data is here today, doable today, and pragmatic today. Meaningful semantic connections can be made and there are many other manifest benefits (see below) with Linked Data, but automatic reasoning in the background or autonomic behavior is not yet one of them.

Strictly speaking, then, Linked Data represents doable best practices today within the context both of Web access and of this yet unrealized longer-term vision of the Semantic Web.

5. Does Linked Data only apply to instance data?

Definitely not, though early practice has been interpreted by some as such.

One of the stimulating, but controversial, keynotes of the conference was from Dr. Anant Jhingran of IBM, who made the strong and absolutely correct observation that Linked Data requires the interplay and intersection of people, instances and schema. From his vantage, early exposed Linked Data has been dominated by instance data from sources such as Wikipedia and has lacked the schema (class) relationships that enterprises are built upon. The people aspect, in terms of connections, collaboration and joint buy-in, is also the means for establishing trust in and authority for the data.

In Zitgist’s terminology, class-level mappings ‘explode the domain’ and produce information benefits similar to Metcalfe’s Law as a function of the degree of class linkages [1]. While this network effect is well known to the community, it has not yet been shown much in current Linked Data sets. As Anant pointed out, schemas define enterprise processes and knowledge structures. Demonstrating schema (class) relationships is the next appropriate task for the Linked Data community.

6. What role do “ontologies” play with Linked Data?

In an RDF context, “ontologies” are the vocabularies and structures that capture the schema structures noted above. Ontologies embody the class and instance definitions and the predicate (property) relations that enable legacy schemas and data to be transformed into Linked Data graphs.

Though many public RDF vocabularies and ontologies presently exist, and should be re-used where possible and where the semantics match the existing legacy information, enterprises will require specific ontologies reflective of their own data and information relationships.

Despite the newness or intimidation perhaps associated with the “ontology” term, ontologies are no more complex than the standard relational schema familiar to enterprises; indeed, they are simpler and more powerful. If you’d like, simply substitute “schema” for “ontology” and you will be saying the same thing in an RDF context.

7. Is Linked Data a centralized or federated approach?

Neither, really, though the rationale and justification for Linked Data is grounded in federating widely disparate sources of data that can also vary widely in existing formalism and structure.

Because Linked Data is a set of techniques and best practices for expressing, exposing and publishing data, it can easily be applied to either centralized or federated circumstances.

However, the real world where any and all potentially relevant data can be interconnected is by definition a varied, distributed, and therefore federated world. Because of its universal RDF data model and Web-based techniques for data expression and access, Linked Data is the perfect vehicle, finally, for data integration and interoperability without boundaries.

8. How does one maintain context when federating Linked Data?

The simple case is where two data sources refer to the exact same entity or instance (individual) with the same identity. The standard owl:sameAs predicate is used to assert the equivalence in such cases.

The more important case is where the data sources are about similar subjects or concepts, in which case a structure of well-defined reference classes is employed. Furthermore, if these classes can themselves be expressed in a graph structure capturing the relationships amongst the concepts, we now have some fixed points in the conceptual information space for relating and tying together disparate data. Still further, such a conceptual structure also provides the means to relate the people, places, things, organizations, events, etc., of the individual instances of the world to one another as well.

Any reference structure that is composed of concept classes that are properly related to each other may provide this referential “glue” or “backbone”.

One such structure provided in open source by Zitgist is the 21,000 subject concept node structure of UMBEL, itself derived from the Cyc knowledge base. In any event, such broad reference structures may often be accompanied by more specific domain conceptual ontologies to provide focused domain-specific context.

9. Does data need to be “open” to qualify as Linked Data?

No, absolutely not.

While, to date, Linked Data has been demonstrated using public Web data, and many desire to expose more of it through the open data movement, there is nothing preventing private, proprietary or subscription data from being Linked Data.

The Linking Open Data (LOD) group, formed about 18 months ago to showcase Linked Data techniques, began with open data. As a parallel concept that severs the idea that Linked Data applies only to open data, François-Paul Servant has specifically identified Linking Enterprise Data (and see also the accompanying slides).

For example, with Linked Data (and not the more restrictive LOD sense), two or more enterprises or private parties can legitimately exchange private Linked Data over a private network using HTTP. As another example, Linked Data may be exchanged on an intranet between different departments, etc.

So long as the principles of URI naming, HTTP access, and linking predicates where possible are maintained, the approach qualifies as Linked Data.

10. Can legacy data be expressed as Linked Data?

Absolutely yes, without reservation. Indeed, non-transactional legacy data perhaps should be expressed as Linked Data in order to gain its manifest benefits. See #14 below.

11. Can enterprise and open or public data be intermixed as Linked Data?

Of course. Since Linked Data can be applied to any data formalism, source or schema, it is perfectly suited to integrating data from inside and outside the firewall, open or private.

12. How does one query or access Linked Data?

The basic query language for Linked Data is SPARQL (pronounced “sparkle”), which bears a close resemblance to SQL, only applied to an RDF data graph rather than to relational tables. The datastores used for RDF may also add a fourth element to each triple to record its graph namespace, which can bring access and scale efficiencies; such systems are known as “quad stores”. Additional data filtering techniques may be applied prior to the SPARQL query for further efficiency.

Templated SPARQL queries and other techniques can lead to very efficient and rapid deployment of various Web services and reports, approaches often applied by Zitgist and other vendors. For example, all Zitgist DataViewer views and UMBEL Web services are expressed using such SPARQL templates.

This SPARQL templating approach may also be combined with the use of templating standards such as Fresnel to bind instance data to display templates.
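As a sketch of the templating idea (the concept URI and query shape are hypothetical, not Zitgist’s actual templates):

```python
from string import Template

# One query shape, reused across many concepts by filling the slot
LABEL_QUERY = Template("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label
    WHERE { <$concept> rdfs:label ?label . }
""")

query = LABEL_QUERY.substitute(
    concept="http://example.org/concepts/Mammal"
)
print(query)
```

In production use, substituted values must be validated or escaped to prevent query injection.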

13. How is access control or security maintained around Linked Data?

In Zitgist’s view, access control or security occurs at the layer of the HTTP access and protocols, and not at the Linked Data layer. Thus, the same policies and procedures that have been developed for general Web access and security are applicable to Linked Data.

However, standard data level or Web server access and security can be enhanced by the choice of the system hosting the data. Zitgist, for example, uses OpenLink’s Virtuoso universal server that has proven and robust security mechanisms. Additionally, it is possible to express security and access policies using RDF ontologies as well. These potentials are largely independent of Linked Data techniques.

The key point is that there is nothing unique or inherent to Linked Data with respect to access or control or security that is not inherent with standard Web access. If a given link points to a data object from a source that has limited or controlled access, its results will not appear in the final results graph for those users subject to access restrictions.

14. What are the enterprise benefits of Linked Data? (Why adopt it?)

For more than 30 years — since the widespread adoption of electronic information systems by enterprises — the Holy Grail has been complete, integrated access to all data. With Linked Data, that promise is now at hand. Here are some of the key enterprise benefits to Linked Data, which provide the rationales for adoption:

  • Via the RDF model, equal applicability to unstructured, semi-structured, and structured data and content
  • Elimination of internal data “silos”
  • Integration of internal and external data
  • Easy interlinkage of enterprise, industry-standard, open public and public subscription data
  • Complete data modeling of any legacy schema
  • Flexible and easy updates and changes to existing schema
  • An end to the need to re-architect legacy schema resulting from changes to the business or M & A
  • Report creation and data display based on templates and queries, not IT departments
  • Data access, analysis and manipulation pushed out to the user level, and, generally
  • The ability of internal Linked Data stores to be maintained by existing DBA procedures and assets.

15. What are early applications or uses of Linked Data?

Linked Data is well suited to traditional knowledge base or knowledge management applications. Its near-term application to transactional or material process applications is less apparent.

Of special use is the value-added from connecting existing internal and external content via the network effect from the linkages [1].

A Hearty Thanks

Johnnie Linked Data is starting to grow up. Our little semantic Web toddler is moving beyond ga-ga-goo-goo to saying his first real sentences. Language acquisition will come rapidly, and, as all of us have seen with our own children, he will grow up faster than we can imagine.

There were so many at this meeting who gave impact and meaning to this exciting transition point that I won’t list specific names at the risk of leaving others off. Those of you who made so many great observations or stayed up late interacting with passion know who you are. Let me simply say: Thanks!

The LinkedData Planet conference has shown, to me, that enterprises are extremely interested in what our community has developed and now proven. They are asking hard questions and will be difficult task masters, but we need to listen and respond. The attendees were a selective and high-quality group, understanding of their own needs and looking for answers. We did an OK job of providing those answers, but we can do much, much better.

I reflect on these few days now knowing something I did not truly know before: the market is here and it is real. The researchers who have brought us to this point will continue to have much to research. But those of us desirous of providing real pragmatic value, and getting paid for it, can confidently move forward knowing both the markets and the value are real. Linked Data is not magic, but when done with quality and in context, it delivers value worth paying for.

To all of the fellow speakers and exhibitors, to all of the engaged attendees, and to the Jupitermedia organizers and Bob DuCharme and Ken North as conference chairs, let me add my heartfelt thanks for a job well done.

Next Steps and Next Conference

The next LinkedData Planet conference and expo will be October 16-17, 2008, at the Santa Clara Hyatt in Santa Clara, California. The agenda has not been announced, but hopefully we will see a continuing enterprise perspective and some emerging use cases.

Zitgist as a company will continue to release and describe its enterprise products and services, and I will continue to blog on Linked Data matters of specific interest to the enterprise. Pending topics include converting legacy data to Linked Data, converting relational data and schema to Linked Data, placing context to Linked Data, and many others. We think you will like the various announcements as they arise. ;)

Zitgist is also toying with the use of a distinctive icon to indicate the availability of Linked Data conforming to the principles embodied in the questions above. (The color choice is an adaptation of the W3C’s semantic Web logo.) The use of a distinctive icon is similar to what RSS feeds or microformats have done to alert users to their specific formats. Drop me a line and let us know what you think of this idea.

[1] Metcalfe’s law states that the value of a telecommunications network is proportional to the square of the number of users of the system (n²), where the linkages between users (nodes) exist by definition. For information bases, the data objects are the nodes. Linked Data works to add the connections between the nodes. We can thus modify the original sense to become Zitgist’s Law: the value of a Linked Data network is proportional to the square of the number of links between the data objects.
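A back-of-the-envelope sketch of this restated law (the proportionality constant k is arbitrary):

```python
def network_value(links, k=1.0):
    """Value of a Linked Data network, per the restated law:
    proportional to the square of the link count."""
    return k * links ** 2

# Doubling the links between data objects quadruples the value
ratio = network_value(200) / network_value(100)
print(ratio)  # 4.0
```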
Posted: April 20, 2008

There’s Some Cool Tools in this Box of Crackerjacks

UMBEL is today releasing a new sandbox for its first iteration of Web services. The site is being hosted by Zitgist. All are welcomed to visit and play.

And, UMBEL is What, Again?

UMBEL (Upper-level Mapping and Binding Exchange Layer) is a lightweight reference structure for placing Web content and data in context with other data. It comprises about 21,000 subject concepts and their relationships, both with one another and with external vocabularies and named entities.

Each UMBEL subject concept represents a defined reference point for asserting what a given chunk of content is about. These fixed hubs enable similar content to be aggregated and then placed into context with other content. These subject context hubs also provide the aggregation points for tying in their class members, the named entities which are the people, places, events, and other specific things of the world.

The backbone to UMBEL is the relationships amongst these subject concepts. It is this backbone that provides the contextual graph for inter-relating content. UMBEL’s subject concepts and their relationships are derived from the OpenCyc version of the Cyc knowledge base.

The UMBEL ontology is based on RDF and written in the RDF Schema vocabulary of SKOS (Simple Knowledge Organization System) with some OWL Full constructs to aid interoperability.

UMBEL’s backbone is also a reference structure for more specific domains or ontologies, thereby enabling further context for inter-relating additional content. Much of the sandbox shows these external relationships.

UMBEL’s Eleven

This first set of Web services provides online demo sandboxes, along with descriptions of what each service is about and its API documentation. The first 11 services are:

A CLASSy Detailed Report

The single service that provides the best insight to what UMBEL is all about is the Subject Concept Detailed Report. (That is probably because this service is itself an amalgam of some of the others.)

Starting from a single concept amongst the 21,000, in this case ‘Mammal’, we can get descriptions or definitions (the proper basis for making semantic relationships, not the ‘Mammal’ label), aliases and semsets, equivalent classes (in OWL terms), named entities (for leaf concepts), more general or specific external classes, and domain and range relationships with other ontologies. Here is the sample report for ‘Mammal’:

The discerning eye likely observes that while there is a rich set of relationships to the internal UMBEL subject concepts, coverage is still light for external classes and named entities. This sandbox is, after all, a first release and we are early in the mapping process. :)

But, it should also start to become clear that the ability of this structure to map and tie in all forms of external concepts and class structures is phenomenal. Once such class relationships are mapped (to date, most other Linked Data only occurs at the instance level), all external relationships and properties can be inherited as well. And, vice versa.

So, for aficionados of the network effect, stand back! You ain’t seen nothing yet. If we have seen amazing emergent properties arising from the people and documents on the Web, with data we move to another quantum level, like moving from organisms to cells. The leverage of such concept and class structures to provide coherence to atomic data is literally primed to explode.

Bloomin’ Concepts!

To put it mildly, trying to get one’s mind around the idea of 21,000 concepts and all of their relationships and all of their possible tie in points and mappings to still further ontologies and all of their interactions with named entities and all of their various levels of aggregation or abstraction and all of their possible translations into other languages or all of their contextual descriptions or all of their aliases or synonyms or all of their clusterings or all of their spatial relationships or all of the still more detailed relationships and instances in specific domains or, well, whew! You get the idea.

It is all pretty complex and hard to grasp.

One great way to wrap one’s mind around such scope is through interactive visualization. The first UMBEL service to provide this type of view is the Subject Concept Explorer, a screenshot of which is shown here:

But really, to gain the true feel, go to the service and explore for yourself. It feels like snorkeling through those schools of billions of tiny silver fish. Very cool!

These amazing visualizations are being brought to us by Moritz Stefaner, imho one of the best visualization and Flash gurus around. We will be showcasing more about Moritz’s unbelievable work in some forthcoming posts, where some even cooler goodies will be on display. His work is also on display at a couple of other sites that you can spend hours drooling over. Thanks, Moritz!

Missing Endpoints and Next Steps

You should note that developer access to the actual endpoints and external exposure of the subject concepts as Linked Data are not yet available. The endpoints, Linked Data and further technical documentation will be forthcoming shortly.

The currently displayed services and demos provided on this UMBEL Web services site are a sandbox for where the project is going. Next releases will soon provide, as open source under an attribution license:

  • The formal UMBEL ontology written in OWL Full and SKOS
  • Technical documentation for the ontology and its use and extension
  • Freely accessible Web services according to the documentation already provided
  • Technical documentation and reports for the derivation of the subject concepts from OpenCyc and the creation and extension of semsets and named entities related to that structure.

When we hit full stride, we expect to be releasing still further new Web services on a frequent basis.

BTW, for more technical details on this current release, see Fred Giasson’s accompanying post. Fred is the magician who has brought much of this forward.

Posted: March 2, 2008

Glut: Mastering Information Through The Ages

Wright’s Book Has Strong Scope, Disappointing Delivery

When I first saw the advance blurb for Glut: Mastering Information through the Ages by Alex Wright I thought, “Wow, here is the book I have been looking for, or wanting to write myself.” As the book jacket explains:

Spanning disciplines from evolutionary theory and cultural anthropology to the history of books, libraries and computer science, Wright weaves an intriguing narrative that connects such seemingly far-flung topics as insect colonies, Stone Age jewelry, medieval monasteries, Renaissance encyclopedias, early computer networks, and the World Wide Web. Finally, he pulls these threads together to reach a surprising conclusion, suggesting that the future of the information age may lie deep in our cultural past.

Wham, bang! The PR snaps with promise and scope!

These are themes that have been my passion for decades, and I ordered the book as soon as it was announced. It was therefore with great anticipation that I cracked open the cover as soon as I received it. (BTW, the actual date of posting for this review is much later only because I left this review in draft for some months; itself an indication of how, unfortunately, I lost interest in it. :( ).

Otlet is a Gem

The best aspect of Glut is the attention it brings to Paul Otlet, quite likely one of the most original and overlooked innovators in information science in the 20th century. Frankly, I had only an inkling of who Otlet was prior to this book, and Wright provides a real service by bringing more attention to this forgotten hero.

(I have since gone on to try to learn more about Otlet and his pioneering work in faceted classification — as carried on more notably by S. R. Ranganathan with the Colon classification system — and his ideas behind the creation of the Mundaneum in Brussels in 1910. The Mundaneum and Otlet’s ideas were arguably a forerunner to some aspects of the Internet, Wikipedia and the semantic Web. Unfortunately, the Mundaneum and its 14 million ‘permanent encyclopedia’ items were taken over by German troops in World War II. The facility was ravaged and sank into obscurity, as did the reputation of Otlet, who died in 1944 before the war ended. It was not until Boyd Rayward translated many of Otlet’s seminal works into English in the late 1980s that he was rediscovered.)

Alex Wright’s own Google Tech Talk from Oct. 23, 2007, talks much about Otlet, and is a good summary of some of the other topics in Glut.

Stapled Book Reviews

The real disappointment in Glut is the lack of depth and scholarship. The basic technique seemed to be: find a prominent book on a given topic, summarize it in a popularized tone, sprinkle in a couple of extra references from the source book relied upon for that chapter to show a patina of scholarship, and move on to the next chapter. Then, add a few silly appendices to pad the book length.

So we see, for example, key dependence on a relative handful of sources for the arguments and points made. Rather than enumerate them here, one approach, if interested, is simply to peruse the expanded bibliography on Wright’s Glut Web site. That listing is actually quite a good basis for beginning your own collection.

Books are Different

It seems that today, with blogging and digital content flying everywhere, a greater standard should be set for creating a book and asking the buying public to actually pay for something. That greater standard should be effort and diligence in researching the topic at hand.

I feel like Glut is related to similar efforts where not enough homework was done. For example, see Walter Underwood, who in his review of the Everything is Miscellaneous (not!) book, chastises author David Weinberger on similar grounds. (A conclusion I had also reached after viewing this Weinberger video cast.)

In summary, I give Wright an A for scope and a C or D in execution and depth. I realize that is a pretty harsh review; but it is one occasioned by my substantially unmet high hopes and expectations.

The means by which information and document growth have come to be organized, classified and managed have been major factors in humanity’s progress and skyrocketing wealth. Glut’s skimpy hors d’œuvre merely whets the appetite: the full historical repast has yet to be served.

Posted: September 29, 2007

zLinks Kicks Out an Old Favorite

The issue of popups, thumbnails, link indicators, and other visual clues for blog content has been an interesting and difficult one. When Snap first came out with its preview popup thumbnails of referenced links (“Snap Shots”), it became all the rage until there was a backlash against ‘popupitis’.

Similarly, many of us, for styling and design considerations (perhaps not always for the best?!), have mucked around with our CSS to the point that a standard link is sometimes hard to discern. You’ve seen them, and I have myself been guilty:

  • different link colors than the original Web 1.0 link blue,
  • sometimes no underlining,
  • sometimes dotted underlines,
  • even boxes, and (horrors!)
  • even upper and lower borders!

As we get clever on this, we then need to compensate with other visual clues for the link.

In my case, about a year ago I adopted the terrific Link Indication WordPress plug-in by Michael Woehrer, which enabled me to type-by-icon the kind of link you, the reader, see. I had icons, for example, for Wikipedia, PDFs, RDF, general external links and some others. The idea, of course, is that faithful readers would learn these subtle distinctions and appreciate the visual cues. (Now for the obligatory: yeah, right!)

To avert symptoms similar to popupitis, it is important to keep these visual cues subtle and (hopefully) unobtrusive. I was actually fairly proud of my Link Indication icons in this regard.

zLinks Raises the Link to the ‘Power of Z’

I then began playing with zLinks about two weeks ago, and wrote a blog posting about it. Check that out and the update blog notice from Fred Giasson to learn more. And, if you have WordPress, you can download and install the plug-in yourself.

But now the game has changed. Instantaneously, my links became more meaningful, and my link representations on my blog more fat.

The links became more meaningful because now I had the wealth of linkages and relationships tied to every single embedded link in my writings. I have been an aggressive “linker,” and this has meant a hidden wealth of interlinkages automatically available to my postings and writings. Sure, I don’t often or always want to explore this richness (and, maybe, many if not most of my readers don’t either), but simply having it there has opened my eyes to what has been called ‘linked data.’

Further, the basis of relating a link to a MIME type or similar document-level distinction now seems primitive. The meaningful distinction is no longer whether the document is a Powerpoint or PDF, but what subjects it is about and who, what, where and when it describes. The link now becomes not a doorway to a document house, but a reference to individual rooms or objects therein.

This richness and its implications are only now becoming apparent to me (and in a still-forming way). Moreover, through such things as backlinks, directed connections, implied connections and many others, this now-emerging world of interconnectedness is still revealing itself.

The new branding of the Zitgist Browser Linker as zLinks, I think, is a nice acknowledgement by the developers that something fundamentally new is afoot. It has been exciting (and rewarding to me) that, as one of the early users of this capability, I have been sought out by the developers (Fred, especially, thanks!) for input and ideas.

The enhancements in this most recent Zitgist release tell me we have truly entered the era of the ‘Power of Z.’ Namely, zLinks makes real, today, the delivery of data interconnectedness through the humble link. This is not the future; it is here now. And it is profound and exciting.

A Diet is the Only Cure for Iconitis

So, with the breaking of document classification boundaries (such as MIME type) in favor of one attuned to atomic data, any imaginable classification scheme becomes possible. But given this open typing, how do we handle the poor, overburdened link? How do we convey its power and reach? We’d like to convey some meaning, but where does it end? Readability would never allow Dewey Decimal tags or literal metadata text or any other such construct to be appended to the standard link.

From a practical standpoint, my first challenge was including the standard zLinks “mini-Z” icon associated with the zLinks popup that is the entry point to all of this interlinked richness. (By the way, have you been mousing over these icons to see the cool zLinks popups? Let alone following those reference links to their own Zitgist template reports?) The problem was that here was another new and diverting icon on top of the ones I was already using with Link Indication — in other words, my link representations were becoming fat.

To add insult to injury, when I, as blog author, need to annotate or make other local notes on my local zLinks capabilities, I also need to call up and deal with the zLinks annotation facility. And it, too, has its own icon. So, after installing zLinks, I found I was now suffering from a new disease, iconitis, with symptoms dangerously close to those of popupitis.

Thus, here is what one of my links looked like with the standard Link Indication icon and the zLinks annotation and standard icons while in authoring mode:

Example Link Icons

My gawd, my links were getting adorned with all manner of fruits and nuts, worse than tutti frutti.

Since I am as much in authoring mode as not, this distraction is in my face about half of the time. So, my decision: get ‘link lean’ — skinny down those link icons and references to the point where things again become usable and readable.

It was time to say goodbye to Link Indication.

The Scope and Longer-term Paradigm Remain Unclear

There is really no need to make a heavy point of this except to note that the Web will continue to be ubiquitous as an access point to information, that information will devolve to the object and data level rather than remaining at the document level, and that the link (in keeping with the essence of the Web) will be the essential gateway for access.

I like the decisions Zitgist has made for zLinks: to provide a single, subtle and small icon that itself brings up a dialog showing the richness of the linked data support behind the embedded link. This popup is made available only when desired, after a mouseover with a short delay (keeping the popup hidden during standard mouse movements). But then, when invoked, a new separate world of data types and links with expandable icons and tooltips is revealed:

zLinks Popup
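This mouseover-with-a-short-delay behavior is simple to sketch. The following is my own illustrative guess at the pattern, not Zitgist’s actual code; the function and parameter names here are hypothetical:

```javascript
// Sketch of a delayed-hover popup trigger. The popup is shown only
// after the pointer has rested on the link for a short delay, so
// ordinary mouse travel across the page never flashes popups.
function attachDelayedPopup(link, showPopup, hidePopup, delayMs) {
  let timer = null;
  link.addEventListener("mouseover", function () {
    // Start the countdown; only a sustained hover triggers the popup.
    timer = setTimeout(function () {
      showPopup(link);
    }, delayMs);
  });
  link.addEventListener("mouseout", function () {
    // Leaving early cancels the pending popup and hides any open one.
    clearTimeout(timer);
    hidePopup(link);
  });
}
```

The key design point is that the show action is deferred while the hide action is immediate, which is what keeps the cue unobtrusive.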

This richness can be shown in the following example zLinks popup for the embedded link to Sweet Tools, in which all 600 tools are made available from a single link! This scrollable and extensible design is very much in keeping with the growth, potential and meaning of the once-lowly link:

zLinks Popup

So, with zLinks, I and my readers may have now given up showing links by MIME type, but we have gained the power of complete connectedness with the Web.

Let’s all raise a toast to the ‘Power of Z’ and to keeping links lean!

Posted:September 16, 2007

Sweet Tools Listing

AI3's Sweet Tools Listing Updated to Version 10

This AI3 blog maintains Sweet Tools, the largest listing of about 800 semantic Web and -related tools available. Most are open source. Click here to see the current listing!

AI3's listing of semantic Web and -related tools has just been updated to version 10. This version adds 36 new tools since the last update on June 19, bringing the new total to 578 tools.

This version 10 update of Sweet Tools also includes an upgrade to version 2 of the lightweight Exhibit display (thanks again, MIT's Simile program and David Huynh, plus congratulations on your Ph.D., David!) and is separately provided as a simple table for quick download and copying.

Background on prior listings and earlier statistics may be found on previous posts, with interim updates periodically over that period.

Because of comments expirations on prior posts, this entry is now the new location for adding a suggested new tool. Simply provide your information in the comments section, and the tool will be included in the next update.