Posted: February 1, 2006

IBM has announced that it has completed the first step in making the Unstructured Information Management Architecture (UIMA) available to the open source community by publishing the UIMA source code to SourceForge.net. UIMA is an open software framework to aid the creation, development and deployment of technologies for unstructured content. IBM first unveiled UIMA in December 2004. The source code for the IBM reference implementation of UIMA can now be downloaded from http://uima-framework.sourceforge.net/. In addition, the IBM UIMA SDK, with additional facilities and components, can be downloaded for free from http://www.alphaworks.ibm.com/tech/uima.
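
To make the framework concrete, below is a minimal sketch of a UIMA annotator, the basic analysis component in the architecture: a class that scans document text and adds stand-off annotations to the common analysis structure (CAS). The imports follow the later Apache UIMA package layout (org.apache.uima); the IBM SDK of this period shipped under its own package names, so treat the specific classes here as illustrative rather than exact.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
    import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
    import org.apache.uima.jcas.JCas;
    import org.apache.uima.jcas.tcas.Annotation;

    // A toy analysis engine: finds four-digit years in the document
    // text and adds a generic Annotation over each match. Real
    // components would declare typed annotations in an XML descriptor.
    public class YearAnnotator extends JCasAnnotator_ImplBase {

        private static final Pattern YEAR = Pattern.compile("\\b(19|20)\\d{2}\\b");

        @Override
        public void process(JCas jcas) throws AnalysisEngineProcessException {
            Matcher m = YEAR.matcher(jcas.getDocumentText());
            while (m.find()) {
                // Mark the span and add it to the CAS indexes so that
                // downstream components and consumers can retrieve it.
                new Annotation(jcas, m.start(), m.end()).addToIndexes();
            }
        }
    }

Annotators like this are chained into pipelines through XML descriptors, which is what allows analysis components from different vendors to interoperate.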

UIMA has received support from the Defense Advanced Research Projects Agency (DARPA) and is currently in use as part of DARPA’s new human language technology research and development program called GALE (Global Autonomous Language Exploitation). UIMA is also embedded in various IBM products for processing unstructured information.

Posted: December 14, 2005

I just finished participating in a discussion that mirrored many others I have observed in the past: we have a complicated problem with much data before us, and we don’t know how it may evolve or trend. Can we architect a single database schema up front that handles all possible options?

Every programmer or database administrator (DBA) will recommend keeping designs to a single database, schema and vendor.  It makes life easier for them.

However, every real-world application and community points to the natural outcome of multiple schemas and databases. This reality, in fact, is what has led to the whole topic of data federation and the various needs to resolve physical, semantic, syntactic and other schema heterogeneities.
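
As a toy illustration of the kind of mapping a federation layer must perform (all names here are hypothetical), the sketch below reconciles two source schemas, one with a string key and a combined name field, the other with an integer key and split name fields, into a single common record:

    import java.util.ArrayList;
    import java.util.List;

    // Two sources expose the same real-world facts under different
    // schemas; a thin federation layer maps both into one common record.
    public class FederationSketch {

        // Common target record.
        record Customer(String id, String fullName) {}

        // Source A: string key, single combined "name" column.
        record SourceARow(String custId, String name) {}

        // Source B: integer key, split first/last columns.
        record SourceBRow(int id, String first, String last) {}

        static Customer fromA(SourceARow r) {
            return new Customer(r.custId(), r.name());
        }

        static Customer fromB(SourceBRow r) {
            // Resolve the physical (int vs. string key) and syntactic
            // (split vs. combined name) heterogeneities in the mapping.
            return new Customer(Integer.toString(r.id()), r.first() + " " + r.last());
        }

        public static void main(String[] args) {
            List<Customer> federated = new ArrayList<>();
            federated.add(fromA(new SourceARow("a-17", "Ada Lovelace")));
            federated.add(fromB(new SourceBRow(17, "Ada", "Lovelace")));
            federated.forEach(System.out::println);
        }
    }

Semantic heterogeneity, whether the two sources even mean the same thing by "customer," is the harder problem and is not captured by such simple field mappings.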

Designers can certainly be clever in anticipating growth, changes like those seen in the past, and so forth. Leaving "open slots" or "generic fields" in schemas is often posited and may allow for a little growth. Indeed, quite a bit of the mitigation needed for schema evolution can perhaps be anticipated up front.
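
Here is a minimal sketch of the "generic fields" idea (hypothetical names throughout): a fixed core schema plus an untyped key-value overflow area. This buys a little headroom for unanticipated attributes, but at the cost of typing, constraints and easy querying:

    import java.util.HashMap;
    import java.util.Map;

    // A record with a fixed, designed-up-front core plus "open slots"
    // for attributes nobody anticipated at design time.
    public class ExtensibleRecord {

        private final String id;    // fixed column
        private final String name;  // fixed column

        // The generic overflow area: no types, no constraints.
        private final Map<String, String> extraFields = new HashMap<>();

        public ExtensibleRecord(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() { return id; }

        public String getName() { return name; }

        public void setExtra(String key, String value) {
            extraFields.put(key, value);
        }

        public String getExtra(String key) {
            return extraFields.get(key);
        }
    }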

But the reality of diversity remains. The semantic Web and the proliferation of user-generated metadata will only exacerbate these challenges. Simply ask the biological or physics communities what they have seen in trying to find a single "grand schema." They haven’t been able to, they can’t, and it is a chimera.

Thus, smart design does not begin with a naive single database premise.  It recognizes that information exists in many forms in many places and in many transmutations from many viewpoints.  And what is important today will surely change tomorrow.  Explicit recognition of these realities is critical to successful upfront information management design.

Viva la multiple databases!
 

Posted by AI3's author, Mike Bergman on December 14, 2005 at 2:31 pm in Adaptive Information, Semantic Web | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/174/the-single-database-chimera/
The URI to trackback this post is: http://www.mkbergman.com/174/the-single-database-chimera/trackback/
Posted: December 13, 2005

I just came across a short, readable and accessible series on the semantic Web by Sunil Goyal of the enventure blog. The four-part series consists of:

  • Part 1 — introduction and overview of various Web services
  • Part 2 — the challenges of data integration
  • Part 3 — RDF and OWL data models and service-oriented middleware (see the sketch after this list), and
  • Part 4 — user applications, enterprise systems, research applications and themes and services.
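
For a flavor of the data models Part 3 covers, here is a minimal RDF sketch using the Jena toolkit (shown with the modern org.apache.jena package names; releases of this era lived under com.hp.hpl.jena, so treat the imports as illustrative). It asserts two statements about a hypothetical resource and writes them out as RDF/XML:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.DC;

    // Build a two-statement RDF graph about a hypothetical resource
    // and serialize it as RDF/XML.
    public class RdfSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            Resource post = model.createResource("http://example.org/posts/semantic-web-series");
            post.addProperty(DC.title, "Another Good Semantic Web Series");
            post.addProperty(DC.creator, "Sunil Goyal");
            model.write(System.out, "RDF/XML");
        }
    }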

If you are new to this topic, you may find this series an easy first introduction.

Posted by AI3's author, Mike Bergman on December 13, 2005 at 1:31 pm in Adaptive Information, Semantic Web | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/173/another-good-semantic-web-series/
The URI to trackback this post is: http://www.mkbergman.com/173/another-good-semantic-web-series/trackback/
Posted: December 5, 2005

Earlier posts have noted the federal government’s near-term importance to integrated document management and to the integration of open source software. A recent article by Darryl K. Taft of eWeek.com titled "GSA Modernizes With Open-Source Stack" indicates the lead role the General Services Administration will play, at least on the civilian side of the government. According to the article:

George Thomas, a chief architect at the General Services Administration, said the GSA is leading the effort to deliver an OSERA (Open Source eGov Reference Architecture) that will feature foundational technologies such as MDA (Model Driven Architecture), an ESB (enterprise service bus), an SOA (service-oriented architecture) and the Semantic Web, among other things.

OSERA deserves close tracking as the federal government implements these standards. GSA has set up a Web site on OSERA that is still awaiting content.

Posted: December 2, 2005

A recent article by Cheryl Gerber, "Smart Searching," in the November 21 issue of Military Information Technology online provides a useful overview of the issues, and the leading vendors, in large-scale content search and discovery. Vendors covered in the article include Endeca Technologies, Basis Technology, Inxight Software, Insightful, Attensity, Convera, SRA International (NetOwl), ClearForest and BrightPlanet.

The focus of these efforts in the Defense Intelligence Agency is characterized by Gerber as:

The unique requirements of defense intelligence analysts are refining search technology down from mass production, with its vast and sometimes trivial outcomes, to more guided, dynamic navigation able to produce results that are both inclusive and relevant.

As one of the largest collectors of information on the planet, the Defense Intelligence Agency (DIA) is responsible for amassing and analyzing all sources of human intelligence in the field from all information types in a multitude of languages.

"This forces us to deal with huge volumes of data. It's an enormous challenge," said a senior DIA official.

The task is indeed a massive one. Sources of intelligence in the field include feeds from UAVs, intelligence, surveillance and reconnaissance data from a vast array of sensors and overhead platforms, signal intelligence, satellites, film and video, not to mention all the data from the open source world. "We need to manage all that data and make it available as quickly as possible to analysts," the DIA official said.

The intel community, like others forming in the commercial sector, is also relying on community standards for metadata transfer and management. In the case of the DIA, these standards are being provided by the Intelligence Community Metadata Working Group (ICMWG), which is charged with establishing standards for the tagging of all data used by DIA systems.

In the article, BrightPlanet’s Duncan Witte commented on the importance of having the ability to "organize, manage and distribute the huge volume of information as well. You need various specialties that allow collaboration with teammates and effective distribution of information."

This article again affirms that the federal intelligence community continues to assume the lead in large-scale content discovery and evaluation.

As the article notes, the DIA maintains a steady push toward technology improvement. "We try to do the best we can with the volumes. In-house we have a lot of expertise on search algorithms and text analysis. But we need to do a better job of combing through the massive volumes of information to find that which is interesting and nontrivial in a way that leads to knowledge discovery. We need better information retrieval through machine understanding of the semantic meaning of text, regardless of language," the DIA official said.