Posted: June 21, 2006

Since Adam Smith first published the Wealth of Nations more than 200 years ago, there has been an episodic tension in the dismal science between the concept of decreasing returns to scale embodied in the image of the invisible hand and that of increasing returns to scale due to specialization and the division of labor in the pin factory.  Most of us are more steeped in the views of competition, supply and demand, and creative destruction arising from the traditional factors of production of capital, labor and land.  Only in fits and starts has the economics profession turned its attention to the importance of knowledge in explaining the readily observed increases in wealth.

David Warsh, in his new book Knowledge and the Wealth of Nations: A Story of Economic Discovery, attempts to bring Adam Smith’s story back into balance.  Using Paul Romer’s seminal paper, "Endogenous Technological Change," published in the Journal of Political Economy in 1990, as the centerpiece, Warsh first weaves a story of economic growth theories since Smith.  The contemporary economist Paul Krugman, himself a player in this story, described in his review of this book in the New York Times the struggle between the Pin Factory and the Invisible Hand:

On one side, Smith emphasized the huge increases in productivity that could be achieved through the division of labor, as illustrated by his famous example of a pin factory whose employees, by specializing on narrow tasks, produce far more than they could if each worked independently. On the other side, he was the first to recognize how a market economy can harness self-interest to the common good, leading each individual as though "by an invisible hand to promote an end which was no part of his intention."

What may not be obvious is the way these two concepts stand in opposition to each other. The parable of the pin factory says that there are increasing returns to scale — the bigger the pin factory, the more specialized its workers can be, and therefore the more pins the factory can produce per worker. But increasing returns create a natural tendency toward monopoly, because a large business can achieve larger scale and hence lower costs than a small business. So in a world of increasing returns, bigger firms tend to drive smaller firms out of business, until each industry is dominated by just a few players.

But for the invisible hand to work properly, there must be many competitors in each industry, so that nobody is in a position to exert monopoly power. Therefore, the idea that free markets always get it right depends on the assumption that returns to scale are diminishing, not increasing.

Warsh argues that the early dominance of the decreasing returns of the invisible hand is partially due to the general sense of scarcity embodied in early thinkers like Malthus, and partially because its analysis and mathematics are more tractable.  It thus required the mathematical maturation of the profession for the question of the pin factory and increasing returns (and declining costs) to find a resolution.  Yet even as the theories and the profession itself become more mathematical in the 20th century, Warsh sticks to a literary and easy-to-read narrative.  His story covers virtually every major economist, from Smith and Ricardo in the early years to the Nobel laureates from Arrow to Solow prior to Romer.  He traces the eras of neglect and the slow discovery of the factors leading to growth.  The story really begins in earnest after World War II, when the hidden X factor of technological change — what came to be expressed as total factor productivity — came to the fore to complete the economic growth equation.

Like the missing "dark matter" still being sought to explain why the universe doesn’t fly apart, this TFP "X factor" remained elusive for many years and was generally viewed as something external — or exogenous — to the standard understanding of growth in output.  What Romer was able to argue, in what came to be called New Growth Theory, was that this mysterious "dark matter" was itself knowledge and that it was an internal product — that is, endogenous — to the economic system.  As Warsh states:

Romer’s 1990 paper divided up the economic world along lines different from earlier ones.  Overnight for those who were involved in actually making the intellectual revolution, more slowly for the rest of us, the traditional "factors of production" were redefined.  The fundamental categories of economic analysis ceased to be, as they had been for two hundred years, land, labor, and capital.  This most elementary classification was supplanted by people, ideas, things. . . . Technical change and the growth of knowledge had become endogenous — within the vocabulary and province of economics to explain.

While Warsh has too great a tendency to lionize Romer, this story is indeed a fascinating yarn and very informative for the non-economist.  The theories that have emerged from Romer and other new growth theorists have special relevance in today’s Internet and information economy where increasing returns to scale and network effects so manifestly surround us.

As Krugman noted in his review:

Economic ideas play a large role in shaping the world. ‘Practical men, who believe themselves to be quite exempt from any intellectual influences,’ John Maynard Keynes said, ‘are usually the slaves of some defunct economist.’ So it’s odd how few popular books have been written describing the social and personal matrix from which economic ideas actually emerge. There have been no economics equivalents of, say, James Watson’s book ‘The Double Helix,’ or James Gleick’s biography of Richard Feynman.

Warsh fills this gap with this fine tale.  So, if you can put up with a bit of journalistic license and economist-worship, do check out this fast 400-page read.  It’s a rousing good tale and educational to boot.  Not all economists can have beautiful minds, but a few fortunate ones can formulate beautiful equations.

Posted by AI3's author, Mike Bergman Posted on June 21, 2006 at 4:16 pm in Adaptive Information, Book Reviews | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/245/knowledge-unravelling-the-x-factor-in-growth-and-wealth/
The URI to trackback this post is: https://www.mkbergman.com/245/knowledge-unravelling-the-x-factor-in-growth-and-wealth/trackback/
Posted: June 18, 2006

This is the last entry in our recent series on data federation. This post compares interoperability models and concludes with a new approach for promoting enterprise interoperability and innovation.

We are now about to conclude this mini-series on data federation. In earlier posts, I described the significant progress in climbing the data federation pyramid, today’s evolution in emphasis to the semantic Web, the 40 or so sources of semantic heterogeneity, and tools and methods for processing and mediating semantic information. We now conclude with a comparison of implementation models for semantic interoperability.

Guido Vetere, an IBM research scientist and one of the clearest writers on this subject, has said:

Despite the increasing availability of semantic-oriented standards and technologies, the problem of dealing with semantics in Web-based cooperation, taken in its generality, is very far from trivial, not only for practical reasons, but also because it involves deep and controversial philosophical aspects. Nevertheless, for relatively small communities dealing with well-founded disciplines such as biology, concrete solutions can be effectively put in place. In fact, most of the data structures will represent commonly understood natural kinds (e.g. microorganisms), well-studied processes (e.g. syntheses) and so on. Still, significant differences in the way actual data structures are used to represent these concepts might require complex mappings and transformations. [1]

Yet there are easier ways and harder ways to achieve interoperability. Paul Warren, who is cited frequently in this series, observes:[2]

I believe there will be deep semantic interoperability within organizational intranets. This is already the focus of practical implementations, such as the SEKT (Semantically Enabled Knowledge Technologies) project, and across interworking organizations, such as supply chain consortia. In the global Web, semantic interoperability will be more limited.

I agree with this observation. But why might this be the case? Before we conclude why some models for semantic interoperation may work better and earlier than others, let’s first step back and look at four model paradigms.

Four Paradigms of Semantic Interoperability

In late 2005, G. Vetere and M. Lenzerini (V&L) published an excellent synthesis description of “Models for Semantic Interoperability in Service Oriented Architectures” in the IBM Systems Journal. [3] Though obviously geared to an SOA perspective, the observations ring true for any semantic-oriented architecture. These prototypical patterns provide a solid basis for discussing the pros and cons of various implementation approaches.

Any-to-Any Centralized

The ‘Any-to-Any Centralized’ model (which Vetere calls unmodeled-centralized in the first-cited paper [1]) is “tightly-coupled” in that the mapping requires a semantic negotiation or understanding between the integrated, central parties. The integrated pieces (services, in this instance) are usually atomic, independent and self-contained. The integration takes place within a single instantiation and is not generalized.

V&L diagrammed this model as follows:

Any-to-Any Centralized Semantic Interoperability Model (reprinted from [3])

It is clear in this model that there is no ready “swap out” of components. Bilateral agreements are needed between all integration components. Semantic errors can only be determined at runtime.

This model is really the traditional one used by system integrators (SIs). It is a one-off, high level-of-effort, non-generalized and non-repeatable model of integration that only works in closed environments. No ontology is involved. It is the most costly and fragile of the interoperability models.

Any-to-One Centralized

In V&L’s ‘Any-to-One Centralized’ model (also known as modeled-centralized), while it may not be explicit, there is a single “ontology” that is a superset of all contributing systems. This “ontology” framework may often take the form of an enterprise service bus, where its internal protocols provide the unifying ontology.

This interoperability model, too, is quite costly in that all suppliers (or service providers) must conform to the single ontology.

It is often remarked that the number of mappings required for the entire system is significantly reduced in any-to-one models, decreasing (in the limit) from N × (N − 1) to N, where N is the number of services involved. However, the reduction in the number of mappings is not the striking difference here. The real difference is in the existence of a business model.
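To make the arithmetic concrete, the following small Python sketch (the service names are hypothetical; only the counting matters) compares the pairwise mappings an any-to-any design requires with the hub mappings of an any-to-one design:

```python
# Compare mapping counts: any-to-any (pairwise) vs. any-to-one (hub).
# The service names are hypothetical; only the counting logic matters.
from itertools import permutations

services = ["billing", "inventory", "crm", "shipping", "catalog"]  # N = 5

# Any-to-any: every ordered pair of distinct services needs its own mapping.
pairwise_mappings = list(permutations(services, 2))
print(len(pairwise_mappings))   # N * (N - 1) = 20

# Any-to-one: each service maps once to the central model ("hub").
hub_mappings = [(s, "central-model") for s in services]
print(len(hub_mappings))        # N = 5
```

Even for five services the difference is fourfold; as noted above, though, the more important difference lies in the business model, not the count.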

V&L diagrammed this model as follows:

Any-to-One Centralized Semantic Interoperability Model (reprinted from [3])

The extensibility of the business model is therefore the key to the success of this interoperability pattern. Generally, in this case, the enterprise makes the determination of what components to interoperate with and conducts the mapping.

Any-to-Any Decentralized

The normal condition in ‘loosely-coupled’ environments such as the Web in general is called the ‘Any-to-Any Decentralized’ model (also known as unmodeled-decentralized). In this model, the integration logic is distributed, and there are no shared ontologies. This is a peer-to-peer system, sometimes known as a P2P information integration system or ‘emergent semantics.’

The semantics are distributed in systems that are strongly isolated from one another, and though grids can help with the transaction aspects, much repeated effort occurs across the interoperating components. V&L diagrammed this model as follows:

Any-to-Any Decentralized Semantic Interoperability Model (reprinted from [3])

It is thus the responsibility of each party to perform the mapping to any other party with which it desires to interoperate. The lack of a central model greatly increases the effort needed to achieve broad interoperability at the system level.

Any-to-One Decentralized

One way to decrease the effort required is to adopt the ‘Any-to-One Decentralized’ model (also known as modeled-decentralized), wherein an ontology model provides the mapping guidance. (Note there may need to be multiple layers of ontologies, from shared “upper level” ones to those that are domain specific.)

In this model, the integration logic is distributed across the service or component implementations, based on a shared ontology. This is the model generally referred to as the semantic Web approach. (In Web services, this is accomplished via the multiple WS-* protocols.)
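As a minimal illustration of what this pattern buys (a sketch only, using the open source rdflib library; the namespace, class and instance names are invented), two decentralized components that each publish data against the same shared ontology can simply be merged and queried together, with no pairwise mediation step:

```python
# Sketch of the modeled-decentralized idea: two components commit to one shared
# ontology, so their data can be merged and queried without pairwise mappings.
# The namespace and instance data are invented for the example.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

SHARED = Namespace("http://example.org/shared#")   # hypothetical shared ontology

component_a = Graph()
component_a.add((SHARED.order17, RDF.type, SHARED.PurchaseOrder))
component_a.add((SHARED.order17, RDFS.label, Literal("Office supplies")))

component_b = Graph()
component_b.add((SHARED.order42, RDF.type, SHARED.PurchaseOrder))
component_b.add((SHARED.order42, RDFS.label, Literal("Server hardware")))

# Merge the independently published graphs and query across them.
merged = Graph()
for g in (component_a, component_b):
    for triple in g:
        merged.add(triple)

for order in merged.subjects(RDF.type, SHARED.PurchaseOrder):
    print(order, merged.value(order, RDFS.label))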

According to V & L:

. . . having business models specified in a sound and rich ontology language and having them based on a suitable foundational layer reduces the risk of misinterpretations injected by agents when mapping their own conceptualizations to such models. A suitable coverage of primitive concepts with respect to the business domain (completeness) that entails the possibility to progressively enhance conceptual schemas (extensibility) are also key factors of success in the adoption of shared ontologies within service-oriented infrastructures. But still there is the risk of inaccuracy, misunderstandings, errors, approximations, lack of knowledge, or even malicious intent that need to be considered, especially in uncontrolled, loosely coupled environments.

V&L then diagrammed this model as follows:

Any-to-One Decentralized Semantic Interoperability Model (reprinted from [3])

In this model, the ontology (or multiple ontologies) needs to comprehensively include the semantics and concepts of all participating components. Thus, while the components are “decentralized,” the ontology itself must somehow be maintained “centrally,” since it needs to expand and grow as new components are added.

A collaboration environment similar to, say, Wikipedia could accomplish this task, though the issues of authoritativeness and quality would also arise, as they have for Wikipedia.

Alternative Approaches to Semantic Web Ontologies

There are a number of ways to provide a “centralized” ontology view for this semantic Web collaboration environment. Xiaomeng Su provides three approaches for thinking about this problem:[4]

Approaches to Semantic Web Ontologies (modified from [4])

The simplest, but least tractable, approach given the anarchic nature of the Web is to adopt a single ontology (this is what is implied in the ‘Any-to-One Decentralized’ interoperability model above). A more realistic approach is one where there are multiple world views or ontologies, shown in the center of the diagram. This approach recognizes that different parties will have different world views, the reconciliation of which requires semantic mediation (see the previous post in this series). Finally, in order to help minimize the effort of mediation, some shared general ontologies may be adopted. This hybrid approach can also rely on so-called upper-level ontologies such as SUMO (Suggested Upper Merged Ontology) or the Dublin Core. While semantic mediation is still required between the local ontologies, the effort is somewhat lessened.
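A small sketch of the hybrid approach (assumptions: the two local vocabularies and their property names are invented; Dublin Core is the only real shared vocabulary used) shows local ontologies anchoring their terms to a shared upper vocabulary, which then gives mediation a common reference point:

```python
# Hybrid-ontology sketch: two local schemas anchor their "title-like" properties
# to the shared Dublin Core vocabulary, which mediates between them.
# The local namespaces and property names are invented for the example.
from rdflib import Graph, Namespace
from rdflib.namespace import DC, OWL

LOCAL_A = Namespace("http://example.org/press#")    # hypothetical local ontology A
LOCAL_B = Namespace("http://example.org/library#")  # hypothetical local ontology B

g = Graph()
g.add((LOCAL_A.headline,  OWL.equivalentProperty, DC.title))
g.add((LOCAL_B.mainTitle, OWL.equivalentProperty, DC.title))

# Mediation step: any local property equivalent to dc:title can be treated as a title.
title_properties = {s for s, _, _ in g.triples((None, OWL.equivalentProperty, DC.title))}
print(title_properties)
```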

The Best Model Depends on Circumstance

These models illustrate trade-offs and considerations depending on the circumstance where interoperability is an imperative.

For the semantic Web, which is the most difficult environment given the lack of coordination possible between contributing parties, the best model appears to be the ‘Any-to-One Decentralized’ model with a hybrid approach to the ontology model. Besides the need for ontologies to mature, the means for semantic mediation and the tools to help automate the tagging and mediation process also need to mature significantly. Isolated pockets of semantic interoperability, though, are likely to emerge sooner, as discussed next.

Comprehensive Interfaces: A Hybrid Model for Enterprise Interoperability

I have argued in previous posts that enterprises are likely to be the place where semantic interoperability first proves itself. This is because, as we have seen above, centralized models are a simpler design and easier to implement, and because enterprises can provide the economic incentive for contributing players to conform to this model.

So, given the model discussions above, how might this best work?

First, by definition, we have a “centralized” model in that the enterprise is able to call the shots. Second, we do want a “One” model wherein a single ontology governs the semantics. This means that we can eliminate requirements and tools to mediate semantic heterogeneities.

On the other hand, we also want a “loosely-coupled” system in that we don’t want a central command-and-control system that requires upfront decisions as to which components can interoperate.

In other words, how can we gain the advantages of a free market for new tools and components at minimum cost and technical difficulty?

The key to resolving this seeming dilemma is to “fix” the weaknesses of the ‘Any-to-One Centralized’ model while retaining its strengths. This hybrid shift is shown in the diagram below:

Shifting Interoperability to the Supplier (modified from [3])

The diagram indicates that the enterprise adopts a single ontology model, but exposes its interoperability framework through a complete “recipe book” for interoperating, which external parties may embrace (the green arrows).

The idea is to expand beyond single concepts such as APIs (application programming interfaces), ontologies, service buses, and broker and conversion utilities, to a complete set of community standards, specifications and conversion utilities that enables any outside party to interoperate with the enterprise’s central model. The interface thus becomes the comprehensive objective, comprehensively defined.

By definition, then, this type of interoperability is “loosely coupled” in that the third (external) party can effect the integration without any assistance or guidance (other than the standard “recipe book”) from the central authority. The central system thus becomes a “black box” as traditionally defined. This means that any aggressive potential supplier can adapt its components to the interface in order to convince the enterprise to buy its wares or services.
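To make the idea of a comprehensive, publishable interface slightly more concrete, here is a purely hypothetical sketch; the field names, artifact lists and the onboarding check are mine, not a specification from this post or from Izza et al.:

```python
# Hypothetical "recipe book" descriptor an enterprise might publish so that any
# third party can code to its interoperability interface unaided.
# All field names and values are illustrative only.
recipe_book = {
    "ontology": "http://example.org/enterprise/ontology.owl",  # the single governing ontology
    "shared_vocabularies": ["http://purl.org/dc/elements/1.1/"],
    "exchange_formats": ["RDF/XML", "OWL"],
    "service_endpoint": "http://example.org/enterprise/services",
    "conversion_utilities": ["csv-to-rdf", "xml-to-owl"],
    "conformance_tests": "http://example.org/enterprise/test-suite",
}

def supplier_can_onboard(supplier_formats):
    """A supplier qualifies if it can emit at least one accepted exchange format."""
    return bool(set(supplier_formats) & set(recipe_book["exchange_formats"]))

print(supplier_can_onboard(["OWL", "JSON"]))  # True: it can self-integrate and prove itself
print(supplier_can_onboard(["CSV"]))          # False: it must first apply a listed converter
```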

This design can suffer from the potential inefficiencies of loosely-coupled integration. However, if the new component proves itself and fits the bill, the central enterprise authority always has the option to move to a more efficient, tightly-coupled integration with that third party to overcome any performance bottlenecks.

It should thus be possible for enterprises (central authorities) to both write these comprehensive “recipe books” and to establish “proof-of-concept” interoperability labs where any potential vendor can link in and prove its stuff. This design shifts the cost of overcoming barriers to entry to the potential supplier. If that supplier believes its offerings to be superior, it can incur the time and effort of coding to the interface and then demonstrating its superiority.

There are very exciting prospects in such an entirely new procurement and adoption model that I’ll be discussing further in future postings.

A key to such a design, of course, is comprehensive and easily implemented interfaces. One comprehensive approach to a similar design is provided by Izza et al.[5] The absolutely cool thing about this new design is that today’s new standards and protocols provide easy means for third parties to comply. This new design completely overcomes the limitations of prior proprietary approaches for enterprises involving high-cost ETL (extract, transform, load) or their later enterprise service bus (ESB) cousins.

NOTE: This posting is part of an occasional series looking at a new category that I and BrightPlanet are terming the eXtensible Semantic Data Model (XSDM). Topics in this series cover all information related to extensible data models and engines applicable to documents, metadata, attributes, semi-structured data, or the processing, storing and indexing of XML, RDF, OWL, or SKOS data. A major white paper will be produced at the conclusion of the series.

[1] G. Vetere, “Semantics in Data Integration Processes,” presented at NETTAB 2005, Napoli, October 4-7, 2005. See http://www.nettab.org/2005/docs/NETTAB2005_VetereOral.pdf.

[2] Paul Warren, “Knowledge Management and the Semantic Web: From Scenario to Technology,” IEEE Intelligent Systems, vol. 21, no. 1, 2006, pp. 53-59. See http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2006/02&file=x1war.xml&xsl=article.xsl&

[3] G. Vetere and M. Lenzerini, “Models for Semantic Interoperability in Service Oriented Architectures,” IBM Systems Journal, Vol. 44, No. 4, 2005, pp. 887-904. See http://www.research.ibm.com/journal/sj/444/vetere.pdf

[4] Xiaomeng Su, “A Text Categorization Perspective for Ontology Mapping,” a position paper. See http://www.idi.ntnu.no/~xiaomeng/paper/Position.pdf.

[5] Saïd Izza, Lucien Vincent and Patrick Burlat, “A Unified Framework for Enterprise Integration: An Ontology-Driven Service-Oriented Approach,” pp. 78-89, in Pre-proceedings of the First International Conference on Interoperability of Enterprise Software and Applications (INTEROP-ESA’2005), Geneva, Switzerland, February 23 – 25, 2005, 618 pp. See http://interop-esa05.unige.ch/INTEROP/Proceedings/Interop-ESAScientific/OneFile/InteropESAproceedings.pdf.

Posted by AI3's author, Mike Bergman Posted on June 18, 2006 at 2:07 pm in Semantic Web | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/240/models-of-semantic-interoperability/
The URI to trackback this post is: https://www.mkbergman.com/240/models-of-semantic-interoperability/trackback/
Posted: June 12, 2006

Mediating semantic heterogeneities requires tools and automation (or semi-automation) at scale. But existing tools are still crude and lack across-the-board integration. This is one of the next challenges in getting more widespread acceptance of the semantic Web.

In earlier posts, I described the significant progress in climbing the data federation pyramid, today’s evolution in emphasis to the semantic Web, and the 40 or so sources of semantic heterogeneity. We now transition to an overview of how one goes about providing these semantics and resolving these heterogeneities.

Why the Need for Tools and Automation?

In an excellent recent overview of semantic Web progress, Paul Warren points out:[1]

Although knowledge workers no doubt believe in the value of annotating their documents, the pressure to create metadata isn’t present. In fact, the pressure of time will work in a counter direction. Annotation’s benefits accrue to other workers; the knowledge creator only benefits if a community of knowledge workers abides by the same rules. . . . Developing semiautomatic tools for learning ontologies and extracting metadata is a key research area . . . . Having to move out of a user’s typical working environment to ‘do knowledge management’ will act as a disincentive, whether the user is creating or retrieving knowledge.

Of course, even assuming that ontologies are created and that semantics and metadata are added to content, there still remain the nasty problems of resolving heterogeneities (semantic mediation) and of efficiently storing and retrieving the metadata and semantic relationships.

Putting all of this in place requires infrastructure in the form of tools and automation, plus proper incentives and rewards for users and suppliers to conform to it.

Areas Requiring Tools and Automation

In his paper, Warren repeatedly points to the need for “semi-automatic” methods to make the semantic Web a reality. He makes fully a dozen such references, in addition to multiple references to the need for “reasoning algorithms.” In any case, here are some of the areas noted by Warren needing “semi-automatic” methods:

  • Assign authoritativeness
  • Learn ontologies
  • Infer better search requests
  • Mediate ontologies (semantic resolution)
  • Support visualization
  • Assign collaborations
  • Infer relationships
  • Extract entities
  • Create ontologies
  • Maintain and evolve ontologies
  • Create taxonomies
  • Infer trust
  • Analyze links
  • etc.

In a different vein, SemWebCentral lists these clusters of semantic Web-related tasks, each of which also requires tools:[2]

  • Create an ontology — use a text or graphical ontology editor to create the ontology, which is then validated. The resulting ontology can then be viewed with a browser before being published
  • Disambiguate data — generate a mapping between multiple ontologies to identify where classes and properties are the same
  • Expose a relational database as OWL — an editor is first used to create the ontologies that represent the database schema, then the ontologies are validated, translated to OWL and then the generated OWL is validated
  • Intelligently query distributed data — the distributed data is brought together into a repository and is again able to be queried
  • Manually create data from an ontology — a user would use an editor to create new OWL data based on existing ontologies, which is then validated and browsable
  • Programmatically interact with OWL content — custom programs can view, create, and modify OWL content with an API (see the sketch after this list)
  • Query non-OWL data — via an annotation tool, create OWL metadata from non-OWL content
  • Visualize semantic data — view semantic data in a custom visualizer.
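As a minimal illustration of the "programmatically interact with OWL content" task (a sketch using the open source rdflib library, which is not among the tools listed below; the ontology URI, class and instance names are invented):

```python
# Sketch of programmatic interaction with OWL content via an API (here, rdflib).
# The ontology namespace, class and instance names are invented for the example.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# Create: declare a class and an instance of it.
g.add((EX.Sensor, RDF.type, OWL.Class))
g.add((EX.sensor42, RDF.type, EX.Sensor))
g.add((EX.sensor42, RDFS.label, Literal("Roof-top temperature sensor")))

# Modify: replace the label.
g.remove((EX.sensor42, RDFS.label, None))
g.add((EX.sensor42, RDFS.label, Literal("Rooftop temperature sensor")))

# View: serialize the result as RDF/XML.
print(g.serialize(format="xml"))
```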

With some ontologies approaching tens of thousands to millions of triples, viewing, annotating and reconciling them at scale can be daunting tasks, efforts that would never be undertaken without useful tools and automation.

A Workflow Perspective Helps Frame the Challenge

A 2005 paper by Izza, Vincent and Burlat (among many other excellent ones) at the first International Conference on Interoperability of Enterprise Software and Applications (INTEROP-ESA) provides a very readable overview on the role of semantics and ontologies in enterprise integration.[3] Besides proposing a fairly compelling unified framework, the authors also present a useful workflow perspective emphasizing Web services (WS), also applicable to semantics in general, that helps frame this challenge:

Generic Semantic Integration Workflow (adapted from [3])

For existing data and documents, the workflow begins with information extraction or annotation of semantics and metadata (#1) in accordance with a reference ontology. Newly harvested information must also be integrated; however, external information or services may come bearing their own ontologies, in which case some form of semantic mediation is required.
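As a toy illustration of step #1 (a sketch only; the reference-ontology namespace and its terms are invented, and a trivial regular expression stands in for real information extraction):

```python
# Toy annotation sketch for workflow step #1: extract a simple entity from text
# and record it as metadata against a (hypothetical) reference ontology.
import re
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

REF = Namespace("http://example.org/reference-ontology#")  # hypothetical reference ontology

def annotate(doc_uri, text):
    """Return annotation triples: each e-mail address found becomes a ref:ContactPoint."""
    g = Graph()
    for i, email in enumerate(re.findall(r"[\w.+-]+@[\w-]+\.\w+", text)):
        node = URIRef(f"{doc_uri}#contact{i}")
        g.add((node, RDF.type, REF.ContactPoint))
        g.add((node, REF.extractedFrom, URIRef(doc_uri)))
        g.add((node, REF.value, Literal(email)))
    return g

annotations = annotate("http://example.org/docs/memo-17",
                       "Please route questions to ops@example.org.")
print(len(annotations))  # 3 triples for the single address found
```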

Of course, this is a generic workflow, and depending on the interoperation task, different flows and steps may be required. Indeed, the overall workflow can vary by perspective and researcher, with semantic resolution workflow modeling a prime area of current investigations. (As one alternative among scores, see for example Cardoso and Sheth.[4])

Matching and Mapping Semantic Heterogeneities

Semantic mediation is a process of matching schemas and mapping attributes and values, often with intermediate transformations (such as unit or language conversions) also required. The general problem of schema integration is not new, with one prior reference going back as early as 1986. [5] According to Alon Halevy:[6]

As would be expected, people have tried building semi-automated schema-matching systems by employing a variety of heuristics. The process of reconciling semantic heterogeneity typically involves two steps. In the first, called schema matching, we find correspondences between pairs (or larger sets) of elements of the two schemas that refer to the same concepts or objects in the real world. In the second step, we build on these correspondences to create the actual schema mapping expressions.

The issues of matching and mapping have been addressed in many tools, notably commercial ones from MetaMatrix,[7] and open source and academic projects such as Piazza,[8] SIMILE,[9] and the WSMX (Web service modeling execution environment) protocol from DERI.[10] [11] A superb description of the challenges in reconciling the vocabularies of different data sources is also found in the thesis by Dr. AnHai Doan, which won the ACM’s prestigious Doctoral Dissertation Award in 2003.[12]

What all of these efforts have found is that the mediation process cannot be completely automated. The current state of the art is to reconcile automatically what is largely unambiguous, and then prompt analysts or subject matter experts to decide the questionable matches. These are known as "semi-automated" systems, in which the user interface, data presentation and workflow become as important as the underlying matching and mapping algorithms. According to the WSMX project, there is always a trade-off between how accurate these mappings are and the degree of automation that can be offered.
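A toy sketch of the semi-automated pattern (the schemas are invented, and a simple string-similarity score stands in for the far richer heuristics real systems combine) shows the two steps Halevy describes: propose correspondences automatically, then defer the uncertain ones to an analyst:

```python
# Toy semi-automated schema matcher: auto-accept high-confidence correspondences,
# queue the uncertain ones for an analyst. The similarity measure is deliberately
# simple (difflib ratio); real matchers combine many richer heuristics.
from difflib import SequenceMatcher

schema_a = ["cust_name", "cust_addr", "order_total", "ship_date"]
schema_b = ["customerName", "customerAddress", "totalAmount", "shipmentDate"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower().replace("_", ""), b.lower()).ratio()

AUTO_ACCEPT = 0.7  # threshold is arbitrary for the sketch

matches, needs_review = [], []
for a in schema_a:
    best = max(schema_b, key=lambda b: similarity(a, b))
    score = round(similarity(a, best), 2)
    (matches if score >= AUTO_ACCEPT else needs_review).append((a, best, score))

print("auto-mapped:", matches)           # step 1: schema matching
print("ask the analyst:", needs_review)  # the "semi-automated" part
# Step 2 (not shown) would turn the accepted matches into mapping expressions.
```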

Also a Need for Efficient Semantic Data Stores

Once all of these reconciliations take place there is the (often undiscussed) need to index, store and retrieve these semantics and their relationships at scale, particularly for enterprise deployments. This is a topic I have addressed many times from the standpoint of scalability, more scalability, and comparisons of database and relational technologies, but it is also not a new topic in the general community.
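For concreteness, here is a minimal store-and-query sketch (rdflib's in-memory store stands in for a persistent, indexed triple store such as Kowari or Sesame from the table below; the data is invented):

```python
# Minimal store-and-query sketch: load triples, then retrieve them with SPARQL.
# The in-memory store is a stand-in for a persistent, indexed triple store.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/kb#")
g = Graph()
g.add((EX.doc1, RDF.type, EX.Report))
g.add((EX.doc1, RDFS.label, Literal("Q2 semantic integration review")))
g.add((EX.doc2, RDF.type, EX.Report))
g.add((EX.doc2, RDFS.label, Literal("Supplier ontology mapping notes")))

query = """
    SELECT ?doc ?label WHERE {
        ?doc a ex:Report ;
             rdfs:label ?label .
    }
"""
for doc, label in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(doc, label)
```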

As Stonebraker and Hellerstein note in their retrospective covering 35 years of development in databases,[13] some of the first post-relational data models were typically called semantic data models, including those of Smith and Smith in 1977[14] and Hammer and McLeod in 1981.[15] Perhaps what is different now is our ability to address some of the fundamental issues.

At any rate, this subsection is included here because of the hidden importance of database foundations. It is therefore a topic often addressed in this series.

A Partial Listing of Semantic Web Tools

In all of these areas, there is a growing, but still spotty, set of tools for conducting these semantic tasks. SemWebCentral, the open source tools resource center, for example, lists many tools and whether or not they interact with one another (the general answer is often no).[16] Protégé also has a fairly long list of plugins, though unfortunately not a well-organized one.[17]

In the table below, I begin to compile a partial listing of semantic Web tools, with more than 50 listed. Though a few are commercial, most are open source. Also, for the open source tools, only the most prominent ones are listed (Sourceforge, for example, has about 200 projects listed with some relation to the semantic Web, though most are minor or not yet in alpha release).

NAME URL DESCRIPTION

Almo http://ontoware.org/projects/almo An ontology-based workflow engine in Java
Altova SemanticWorks http://www.altova.com/products_semanticworks.html Visual RDF and OWL editor that auto-generates RDF/XML or nTriples based on visual ontology design
Bibster http://bibster.semanticweb.org/ A semantics-based bibliographic peer-to-peer system
cwm http://www.w3.org/2000/10/swap/doc/cwm.html A general purpose data processor for the semantic Web
Deep Query Manager http://www.brightplanet.com/products/dqm_overview.asp Search federator from deep Web sources
DOSE https://sourceforge.net/projects/dose A distributed platform for semantic annotation
ekoss.org http://www.ekoss.org/ A collaborative knowledge sharing environment where model developers can submit advertisements
Endeca http://www.endeca.com Facet-based content organizer and search platform
FOAM http://ontoware.org/projects/map Framework for ontology alignment and mapping
Gnowsis http://www.gnowsis.org/ A semantic desktop environment
GrOWL http://ecoinformatics.uvm.edu/technologies/growl-knowledge-modeler.html Open source graphical ontology browser and editor
HAWK http://swat.cse.lehigh.edu/projects/index.html#hawk OWL repository framework and toolkit
HELENOS http://ontoware.org/projects/artemis A Knowledge discovery workbench for the semantic Web
Jambalaya http://www.thechiselgroup.org/jambalaya Protégé plug-in for visualizing ontologies
Jastor http://jastor.sourceforge.net/ Open source Java code generator that emits Java Beans from ontologies
Jena http://jena.sourceforge.net/ Open source ontology API written in Java
KAON http://kaon.semanticweb.org/ Open source ontology management infrastructure
Kazuki http://projects.semwebcentral.org/projects/kazuki/ Generates a java API for working with OWL instance data directly from a set of OWL ontologies
Kowari http://www.kowari.org/ Open source database for RDF and OWL
LuMriX http://www.lumrix.net/xmlsearch.php A commercial search engine using semantic Web technologies
MetaMatrix http://www.metamatrix.com/ Semantic vocabulary mediation and other tools
Metatomix http://www.metatomix.com/ Commercial semantic toolkits and editors
MindRaider http://mindraider.sourceforge.net/index.html Open source semantic Web outline editor
Model Futures OWL Editor http://www.modelfutures.com/OwlEditor.html Simple OWL tools, featuring UML (XMI), ErWin, thesaurus and imports
Net OWL http://www.netowl.com/ Entity extraction engine from SRA International
Nokia Semantic Web Server https://sourceforge.net/projects/sws-uriqa An RDF based knowledge portal for publishing both authoritative and third party descriptions of URI denoted resources
OntoEdit/OntoStudio http://ontoedit.com/ Engineering environment for ontologies
OntoMat Annotizer http://annotation.semanticweb.org/ontomat Interactive Web page OWL and semantic annotator tool
Oyster http://ontoware.org/projects/oyster Peer-to-peer system for storing and sharing ontology metadata
Piggy Bank http://simile.mit.edu/piggy-bank/ A Firefox-based semantic Web browser
Pike http://pike.ida.liu.se/ A dynamic programming (scripting) language similar to Java and C for the semantic Web
pOWL http://powl.sourceforge.net/index.php Semantic Web development platform
Protégé http://protege.stanford.edu/ Open source visual ontology editor written in Java with many plug-in tools
RACER Project https://sourceforge.net/projects/racerproject A collection of Projects and Tools to be used with the semantic reasoning engine RacerPro
RDFReactor http://rdfreactor.ontoware.org/ Access RDF from Java using inferencing
Redland http://librdf.org/ Open source software libraries supporting RDF
RelationalOWL https://sourceforge.net/projects/relational-owl Automatically extracts the semantics of virtually any relational database and transforms this information into RDF/OWL
Semantical http://semantical.org/ Open source semantic Web search engine
SemanticWorks http://www.altova.com/products_semanticworks.html SemanticWorks RDF/OWL Editor
Semantic Mediawiki https://sourceforge.net/projects/semediawiki Semantic extension to the MediaWiki wiki
Semantic Net Generator https://sourceforge.net/projects/semantag Utility for generating topic maps automatically
Sesame http://www.openrdf.org/ An open source RDF database with support for RDF Schema inferencing and querying
SMART http://web.ict.nsc.ru/smart/index.phtml?lang=en System for Managing Applications based on RDF Technology
SMORE http://www.mindswap.org/2005/SMORE/ OWL markup for HTML pages
SPARQL http://www.w3.org/TR/rdf-sparql-query/ Query language for RDF
SWCLOS http://iswc2004.semanticweb.org/demos/32/ A semantic Web processor using Lisp
Swoogle http://swoogle.umbc.edu/ A semantic Web search engine with 1.5 M resources
SWOOP http://www.mindswap.org/2004/SWOOP/ A lightweight ontology editor
Turtle http://www.ilrt.bris.ac.uk/discovery/2004/01/turtle/ Terse RDF “Triple” language
WSMO Studio https://sourceforge.net/projects/wsmostudio A semantic Web service editor compliant with WSMO as a set of Eclipse plug-ins
WSMT Toolkit https://sourceforge.net/projects/wsmt The Web Service Modeling Toolkit (WSMT) is a collection of tools for use with the Web Service Modeling Ontology (WSMO), the Web Service Modeling Language (WSML) and the Web Service Execution Environment (WSMX)
WSMX https://sourceforge.net/projects/wsmx/ Execution environment for dynamic use of semantic Web services

Tools Still Crude, Integration Not Compelling

Individually, there are some impressive and capable tools on this list. Generally, however, the interfaces are not intuitive, integration between tools is lacking, and the case for why and how standard analysts should embrace them has not been made. In the semantic Web, we have yet to see an application of the magnitude of the first Mosaic browser, which made HTML and the World Wide Web compelling.

Perhaps a similar "killer app" will not be forthcoming for the semantic Web. But it is important to remember just how entwined tools are with accelerating the acceptance and growth of new standards and protocols.

NOTE: This posting is part of an occasional series looking at a new category that I and BrightPlanet are terming the eXtensible Semantic Data Model (XSDM). Topics in this series cover all information related to extensible data models and engines applicable to documents, metadata, attributes, semi-structured data, or the processing, storing and indexing of XML, RDF, OWL, or SKOS data. A major white paper will be produced at the conclusion of the series.

[1] Paul Warren, “Knowledge Management and the Semantic Web: From Scenario to Technology,” IEEE Intelligent Systems, vol. 21, no. 1, 2006, pp. 53-59. See http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2006/02&file=x1war.xml&xsl=article.xsl&

[2] See http://www.semwebcentral.org/index.jsp?page=workflows.

[3] Said Izza, Lucien Vincent and Patrick Burlat, “A Unified Framework for Enterprise Integration: An Ontology-Driven Service-Oriented Approach,” pp. 78-89, in Pre-proceedings of the First International Conference on Interoperability of Enterprise Software and Applications (INTEROP-ESA’2005), Geneva, Switzerland, February 23 – 25, 2005, 618 pp. See http://interop-esa05.unige.ch/INTEROP/Proceedings/Interop-ESAScientific/OneFile/InteropESAproceedings.pdf.

[4] Jorge Cardoso and Amit Sheth, “Semantic Web Processes: Semantics Enabled Annotation, Discovery, Composition and Orchestration of Web Scale Processes,” in the 4th International Conference on Web Information Systems Engineering (WISE 2003), December 10-12, 2003, Rome, Italy. See http://lsdis.cs.uga.edu/lib/presentations/WISE2003-Tutorial.pdf.

[5] C. Batini, M. Lenzerini, and S.B. Navathe, “A Comparative Analysis of Methodologies for Database Schema Integration,” in ACM Computing Survey, 18(4):323-364, 1986.

[6] Alon Halevy, “Why Your Data Won’t Mix,” ACM Queue vol. 3, no. 8, October 2005. See http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=336.

[7] Chuck Moser, Semantic Interoperability: Automatically Resolving Vocabularies, presented at the 4th Semantic Interoperability Conference, February 10, 2006. See http://colab.cim3.net/file/work/SICoP/2006-02-09/Presentations/CMosher02102006.ppt.

[8] Alon Y. Halevy, Zachary G. Ives, Peter Mork and Igor Tatarinov, “Piazza: Data Management Infrastructure for Semantic Web Applications,” Journal of Web Semantics, Vol. 1 No. 2, February 2004, pp. 155-175. See http://www.cis.upenn.edu/~zives/research/piazza-www03.pdf.

[9] Stefano Mazzocchi, Stephen Garland, Ryan Lee, “SIMILE: Practical Metadata for the Semantic Web,” January 26, 2005. See http://www.xml.com/pub/a/2005/01/26/simile.html.

[10] Adrian Mocan, Ed., “WSMX Data Mediation,” in WSMX Working Draft, W3C Organization, 11 October 2005. See http://www.wsmo.org/TR/d13/d13.3/v0.2/20051011.

[11] J. Madhavan, P. A. Bernstein, P. Domingos and A. Y. Halevy, “Representing and Reasoning About Mappings Between Domain Models,” in the Eighteenth National Conference on Artificial Intelligence, pp. 80-86, Edmonton, Alberta, Canada, July 28-August 1, 2002.

[12] AnHai Doan, Learning to Map between Structured Representations of Data, Ph.D. Thesis to the Computer Science & Engineering Department, University of Washington, 2002, 133 pp. See http://anhai.cs.uiuc.edu/home/thesis/anhai-thesis.pdf.

[13] Michael Stonebraker and Joey Hellerstein, “What Goes Around Comes Around,” in Joseph M. Hellerstein and Michael Stonebraker, editors, Readings in Database Systems, Fourth Edition, pp. 2-41, The MIT Press, Cambridge, MA, 2005. See http://mitpress.mit.edu/books/chapters/0262693143chapm1.pdf.

[14] John Miles Smith and Diane C. P. Smith, “Database Abstractions: Aggregation and Generalization,” ACM Transactions on Database Systems 2(2): 105-133, 1977.

[15] Michael Hammer and Dennis McLeod, “Database Description with SDM: A Semantic Database Model,” ACM Transactions on Database Systems 6(3): 351-386, 1981.

[16] See http://www.semwebcentral.org/index.jsp?page=home.

[17] See http://protege.cim3.net/cgi-bin/wiki.pl?ProtegePluginsLibraryByType.

Posted by AI3's author, Mike Bergman Posted on June 12, 2006 at 6:04 pm in Adaptive Information, Semantic Web | Comments (5)
The URI link reference to this post is: https://www.mkbergman.com/241/methods-for-semantic-discovery-annotation-and-mediation/
The URI to trackback this post is: https://www.mkbergman.com/241/methods-for-semantic-discovery-annotation-and-mediation/trackback/
Posted: June 9, 2006

In an earlier posting I had some fun with the Website as a Graph utility, where you can enter a Web address and the system provides a visual analysis of that individual Web page (not an overall view of the site).  Friends, family, and indeed the entire Web have been having some fun with this one.

Before I let this toy go, I decided to do some comparative stuff (and some image animation, a relic of the not too distant past).  The image below shows my blog at the time of its release about a year ago (a single post), then the structure of this page about one week ago, and then yesterday.  What a difference a week (or a day) makes!  So here are these changes for my blog site:

Posted by AI3's author, Mike Bergman Posted on June 9, 2006 at 9:39 pm in Site-related | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/242/what-a-change-a-day-makes-aka-get-a-life/
The URI to trackback this post is: https://www.mkbergman.com/242/what-a-change-a-day-makes-aka-get-a-life/trackback/
Posted: June 8, 2006

Somehow, since Bloglines (via its parent Ask.com) announced its new blog and feed search, I have noticed that my standard search feeds no longer return as many results.  I’ve checked via searches on Google’s Blogsearch and Technorati (not to mention Bloglines itself) and see no mention of this problem from others.  Has anyone else been experiencing Bloglines search degradation?

I sent a ding to Bloglines tech support, which was acknowledged, but there has been no real response as yet:

I have noticed that some of my standard ‘Search feeds’ are no longer returning the number of results they did a week or so ago, by massively large amounts. Is this related to the new search function announced this week (which I like)? Or due to some other problem? (BTW, the ‘Search feed’ that most shows this behavior is "semantic web").

If anyone has an insight, I welcome your comments. 

Posted by AI3's author, Mike Bergman Posted on June 8, 2006 at 9:34 pm in Searching | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/243/problems-with-bloglines-search-feed/
The URI to trackback this post is: https://www.mkbergman.com/243/problems-with-bloglines-search-feed/trackback/