Posted: March 21, 2011

An Overview of Freely Available, Comprehensive Icon Sets

When designing a new project, it is often important to find a consistent set of icons for user interface or mapping purposes. Full libraries or icon sets matter because mixing and matching icons from multiple sources tends to convey a sense of chaos or unprofessionalism.

Structured Dynamics monitors freely available icons for these purposes and provides listings to its clients so that they may tailor and choose their own look-and-feel. The material below is the reference listing of about 20 comprehensive sets of open source icons that may be used for the open semantic framework (OSF) or sWebMap interfaces. Links to other listings are also provided. These references are kept up-to-date on the OSF TechWiki.

General Icons

Here are some consistent families of general user interface icons. While there are thousands of free icons available from many venues (check out via search engines), there are fewer that have sufficient diversity and scope to encompass most user interface needs. Since it is noticeably jarring to mix icon styles in the same interface (or, at least to do so indiscriminately), it is important to have a consistent design image.

Here are the candidate choices we have found. Some are provided in multiple pixel sizes or in vector (generally SVG) format:

  • The Silk icons from famfamfam are a set of over 700 16-by-16 pixel icons in PNG format (144 of which are also available as GIF mini-icons, see below). This is the standard open source set used as the basis for Pastel (see below), which is used in the various conStruct tools. There are also other free icons from this site
Famfamfam.png
  • Tango is an icon library that contains a basic set of icons for the most common usage. They come in 16×16 and 22×22 sizes, and some are scalable (vector). There are also a variety of extensions for specific purposes
Tango.png
  • Pastel SVG is an icon set based on the Silk icons noted above from FamFamFam.com. Pastel uses the same style, but comes in the sizes of 16, 24, 32, 48, 64, 72, 96, 128 or 256 pixels square; a sampling is shown below
Pastel_sample.png

Pastel is the standard icon set chosen for conStruct tools.

  • The Fugue icon set by Yusuke Kamiyamane is the largest available, containing 3,000 individual icons in 16×16 PNG format. Here is a sampling:
Fugue.png

Alternatively, there is a smaller set of 400 icons called Diagona also available from the same designer

  • Nuvola is a set of 600 icons in either PNG or SVG format from David Vignoni (Icon King). The PNG versions come in standard sizes of 16, 22, 32, 48, 64 or 128 pixels square. Here is a sampling:
Nuvola.png

Vignoni also has an alternative set of icons with a similar feel called Oxygen.

  • The Crystal set of more than 1300 icons is organized into six different sizes, and is divided into the categories of actions, apps, file systems, devices and MIME types. Here is a sampling:
Crystal.png

Other Sources

According to the Open Icon Library, which has a nice gallery (but which also mixes sources), here are some other key sources of open source icons not already listed above:

See also the icon sets used within Wikipedia itself.

Lastly, and perhaps most usefully, peruse the 750+ icon sets on Icon Finder.

Map Icons

With the emergence of Web 2.0 and locational services, particularly the open API and “thumbtack” aspect of Google Maps, a new category of map markers for web mapping has emerged. This category is still new enough that an accepted terminology has not yet developed. Among other terms, here are some of the ways that these locational markers on maps have been described:

  • Places of interest (POIs)
  • Points of interest (POIs)
  • Pins
  • Pushpins
  • Placemarks
  • Thumbtacks
  • Markers
  • Location markers
  • Map pointers.

Here are some of the consolidated sources of open source markers now available:

  • This is a sampling of 120 markers or so available within the Google MyMaps API (see further this link with shadows and this full listing). All have matching shadows useful for conveying a 3D feeling:
Matt77.png

There are also about 250 standard icons provided within the Google Earth set. You can see those listed here. Also, to see the available icon libraries in Google maps (plus some others), see this link

  • Map Icons Collection is a set of more than 1000 free icons to use as placemarks for POI locations on maps (originally designed for the Google Maps API). Most of these icon markers are square in aspect with a pointer, and are organized by color-coordinated categories such as numbers, cinemas, hotels, banks, etc. Here is a sampling:
Google_map_icons.png
  • The Maki icon set consists of more than 100 black and white 15×15 map markers
  • This listing provides three different colors in the Google Map “teardrop” style for all letters and 99 numbers
  • Geosilk is an extension of the standard Silk icon set noted above. It is more applicable to UI icons relating to map functions than to map markers per se
  • Green Map contains a set of about 170 monochrome (can be colored differently) POI markers, with an orientation to nature or ecological categories. There are also local extensions
  • Map Pins provides 22 alternative map pins and flags:
Map_pins.png
  • 50 monochrome POI and map marker symbols from the US National Park Service (NPS):
Nps_markers.png
  • There is a similar (and complementary in design) set of 50 monochrome pedestrian and transportation symbols from AIGA in cooperation with the US Department of Transportation

Dynamic Markers

Some markers can be created dynamically with the Google Map API. Here are some background articles and links:
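As a purely illustrative sketch of the idea (not taken from the articles referenced above), a common pattern has been to request a marker image on the fly from Google's dynamic-icon service, so that the pin's label and color can be chosen per data point. The endpoint and the chst/chld parameter names below are assumptions based on the legacy Google Chart API and should be verified before use:

```python
# Illustrative sketch only: building a dynamically generated map-pin icon URL.
# The endpoint and the chst/chld parameters follow the legacy Google Chart API
# "dynamic icon" conventions as recalled here; verify before relying on them.
from urllib.parse import urlencode

def dynamic_pin_url(label="A", fill="FF7B00", text_color="000000"):
    """Return a URL for a teardrop-style map pin with the given label and colors."""
    base = "https://chart.apis.google.com/chart"
    params = {
        "chst": "d_map_pin_letter",              # dynamic icon type: lettered map pin
        "chld": f"{label}|{fill}|{text_color}",  # label | fill color | label color
    }
    return f"{base}?{urlencode(params)}"

if __name__ == "__main__":
    # Each generated URL can then be assigned as a marker icon in a web-mapping API.
    for i, color in enumerate(("FF0000", "00AA00", "0000FF"), start=1):
        print(dynamic_pin_url(label=str(i), fill=color))
```

The same approach generalizes to any service that renders marker images from URL parameters; the point is simply that the marker set does not have to be pre-drawn.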

Other Listings

Various other listings, many with icons but perhaps not organized into the same uniform sets, include:

Posted by AI3's author, Mike Bergman Posted on March 21, 2011 at 2:30 am in Open Semantic Framework, Open Source, Software Development | Comments (2)
The URI link reference to this post is: http://www.mkbergman.com/952/noteworthy-icon-libraries-for-projects-and-web-mapping/
Posted: February 15, 2011

UMBEL Vocabulary and Reference Concept Ontology: A Seminal Release by SD and Ontotext; Links to Wikipedia and PROTON

Structured Dynamics and Ontotext are pleased to announce — after four years of iterative refinement — the release of version 1.00 of UMBEL (Upper Mapping and Binding Exchange Layer). This version is the first production-grade release of the system. UMBEL’s current implementation is the result of much practical experience.

UMBEL is primarily a reference ontology, which contains 28,000 concepts (classes and relationships) derived from the Cyc knowledge base. The reference concepts of UMBEL are mapped to Wikipedia, DBpedia ontology classes, GeoNames and PROTON.

UMBEL is designed to facilitate the organization, linkage and presentation of heterogeneous datasets and information. It is meant to lower the time, effort and complexity of developing, maintaining and using ontologies, and aligning them to other content.

This release 1.00 builds on the prior five major changes in UMBEL v. 0.80 announced last November. It is open source, provided under the Creative Commons Attribution 3.0 license.

Profile of the Release

In broad terms, here is what is included in the new version 1.00:

  • A core structure of 27,917 reference concepts (RCs)
  • The clustering of those concepts into 33 mostly disjoint SuperTypes (STs)
  • Direct RC mapping to 444 PROTON classes
  • Direct RC mapping to 257 DBpedia ontology classes
  • An incomplete mapping to 671 GeoNames features
  • Direct mapping of 16,884 RCs to Wikipedia (categories and pages)
  • The linking of 2,130,021 unique Wikipedia pages via 3,935,148 predicate relations; all are characterized by one or more STs
    • 876,125 are assigned a specific rdf:type
  • The UMBEL RefConcepts have been re-organized, with most local, geolocational entities moved to a supplementary module. 577 prior (version 0.80) UMBEL RCs and a further 3204 new RCs have been added to this geolocational module. This module is not being released for the current version because testing is incomplete (watch for a pending version 1.0x)
  • Some vocabulary changes, including some new and some dropped predicates (see next), and
  • Added an Annex H that describes the version 1.00 changes and methods.

Vocabulary Summary

UMBEL’s basic vocabulary can also be used for constructing specific domain ontologies that can easily interoperate with other systems. This release sees a number of changes in the UMBEL vocabulary:

  • A new correspondsTo predicate has been added for near or approximate sameAs mappings (symmetric, transitive, reflexive)
  • A controlled vocabulary of qualifiers was developed for the hasMapping predicate
  • 31 new relatesToXXX predicates have been added to relate external entities or concepts to UMBEL SuperTypes
  • Some disjointedness assertions between SuperTypes were added or changed.

The UMBEL Vocabulary defines three classes:

The UMBEL Vocabulary defines these properties:

The UMBEL vocabulary also has a significant reliance on SKOS, among other external vocabularies.
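To make the vocabulary concrete, here is a minimal, hypothetical sketch (Python with rdflib) of how a domain ontology might declare a concept with the UMBEL vocabulary and assert an approximate umbel:correspondsTo mapping to an external class. The umbel.org namespace URI and the RefConcept class name are assumptions drawn from the UMBEL documentation rather than from this post; the domain namespace and the DBpedia target class are purely illustrative:

```python
# Hypothetical sketch: reusing the UMBEL vocabulary in a small domain ontology.
# The namespace URI and the RefConcept class name are assumptions; correspondsTo
# is the approximate-mapping predicate described in the release notes above.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

UMBEL = Namespace("http://umbel.org/umbel#")      # assumed UMBEL vocabulary namespace
EX = Namespace("http://example.org/my-domain#")   # hypothetical domain namespace
DBO = Namespace("http://dbpedia.org/ontology/")   # DBpedia ontology namespace

g = Graph()
for prefix, ns in (("umbel", UMBEL), ("ex", EX), ("dbo", DBO), ("skos", SKOS)):
    g.bind(prefix, ns)

peak = EX.VolcanicPeak
g.add((peak, RDF.type, UMBEL.RefConcept))         # class name assumed from UMBEL docs
g.add((peak, SKOS.prefLabel, Literal("volcanic peak", lang="en")))

# A near, but not exact, equivalence to an external class (not owl:sameAs):
g.add((peak, UMBEL.correspondsTo, DBO.Volcano))

print(g.serialize(format="turtle"))
```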

Access and More Information

Here are links to various downloads, specifications, communities and assistance.

Specifications and Documentation

All documentation from the prior v 0.80 has been updated, and some new documentation has been added:

Major updates were made to the specifications and Annex G; Annex H is new. Minor changes were also made to Annexes A and B. All remaining Annexes only had minor header changes. All spec documents with minor or major changes were also versioned, with the earlier archives now date stamped.

Files and Downloads

All UMBEL files are listed on the Downloads and SVN page on the UMBEL Web site. The reference concept and mapping files may also be obtained from http://code.google.com/p/umbel/source/browse/#svn/trunk.

Additional Information

To learn more about UMBEL or to participate, here are some additional links:

Acknowledgements

These latest improvements to UMBEL and its mappings have been undertaken by Structured Dynamics and Ontotext. Support has also been provided by the European Union research project RENDER, which aims to develop diversity-aware methods in the ways Web information is selected, ranked, aggregated, presented and used.

Next Steps

This release continues the path to establish a gold standard between UMBEL and Wikipedia to guide other ontological, semantic Web and disambiguation needs. For example, the number of UMBEL reference concepts was expanded by some 36% from 20,512 to 27,917 in order to provide a more balanced superstructure for organizing Wikipedia content. And across all mappings, 60% of all UMBEL reference concepts (or 16,884) are now linked directly to Wikipedia via the new umbel:correspondsTo property. A later post will describe the design and importance of this gold standard in greater detail.

Next releases will expand this linkage and coverage, and bring in other important reference structures such as GeoNames and others. This version of UMBEL will also be incorporated into the next version of FactForge. We will also be re-invigorating the Web vocabulary access and Web services, and adding tagging services based on UMBEL.

We invite other players with an interest in reusable and broadly applicable vocabularies and reference concepts to join with us in these efforts.


[1] Note, for legacy reasons, you may still encounter reference to ‘subject concepts’ in earlier UMBEL documentation. Please consider that term as interchangeable with the current ‘reference concepts’.

Posted by AI3's author, Mike Bergman Posted on February 15, 2011 at 12:39 am in Ontologies, Open Source, Semantic Web, UMBEL | Comments (0)
The URI link reference to this post is: http://www.mkbergman.com/945/announcing-the-first-production-grade-umbel/
Posted: February 7, 2011

Sweet Tools Listing: Now Presented as a Semantic Component; Grows to 900+ Tools

Sweet Tools, AI3's listing of semantic Web and -related tools, has just been released with its 17th update. The listing now contains more than 900 tools, about a 10% increase over the last version. Significantly, the listing is also now presented via its own semantic tool, the structSearch sComponent, which is one of the growing parts of Structured Dynamics' open semantic framework (OSF).

So, we invite you to go ahead and try out this new Flex/Flash version with its improved search and filtering! We’re pretty sure you’ll like it.

Summary of Major Changes
Sweet Tools structSearch View

Sweet Tools now lists 919 tools, an increase of 84 (or 10.1%) over the prior version of 835 tools. The most notable trend is the continued increase in capabilities and professionalism of (some of) the new tools.

This new release of Sweet Tools — available for direct play and shown in the screenshot to the right — is the first to be presented via Structured Dynamics’ Flex-based semantic component technology. The system has greatly improved search and filtering capabilities; it also shares the superior dataset management and import/export capabilities of its structWSF brethren.

As a result, moving forward, Sweet Tools updates will now be added on a more regular basis, reducing the big burps that have tended to accompany past releases. We will also see much expanded functionality over time as other pieces of the structWSF and sComponents stack get integrated and showcased using this dataset.

This release is the first in WordPress, and shows the broad capabilities of the OSF stack to be embedded in a variety of CMS or standalone systems. We have provided some updates on Structured Dynamics’ OSF TechWiki for how to modify, embed and customize these components with various Flex development frameworks (see one, two or three), such as Flash Builder or FlashDevelop.

We should mention that the OSF code group is also seeing external parties exposing these capabilities via JavaScript deployments. This recent release expands on the conStruct version, whose capabilities were described in a post about a year ago.

Retiring the Exhibit Version

However, this release does mark the retirement of the very fine Exhibit version of Sweet Tools (an archive version will be kept available until it gets too long in the tooth). I was one of the first to install an Exhibit system, and the first to do so on WordPress, as I described in an article more than four years ago.

Exhibit has worked great and without a hitch, and through a couple of upgrades. It still has (I think) a superior faceting system and sorting capabilities compared to what we presently offer with our own sComponent alternative. However, the Exhibit version is really a display technology alone, and offers no search, access control or underlying data management capabilities (such as CRUD), all of which are integral to our current system. It is also not grounded in RDF or semantic technologies, though it does have good structural genes. And, Sweet Tools has about reached the limits of the size of datasets Exhibit can handle efficiently.

Exhibit has set a high bar for usability and lightweight design. As we move in a different direction, I’d like again to publicly thank David Huynh, Exhibit’s developer, and the MIT Simile program for when he was there, for putting forward one of the seminal structured data tools of the past five years.

Updated Statistics

The updated Sweet Tools listing now includes nearly 50 different tool categories. The most prevalent categories are browser tools (RDF, OWL), information extraction, ontology tools, parsers or converters, and general RDF tools. The relative share by category is shown in this diagram (click to expand):

Since the last listing, the fastest growing categories have been utilities (general and RDF) and visualization. Linked data listings have also grown by 200%, but are still a relatively small percentage of the total.

These values should be taken with a couple of grains of salt. First, not all of these additions are organic or new releases. Some are the result of our own tools efforts and investigations, which can often surface prior overlooked tools. Also, even with this large number of application categories, many tools defy characterization, and can reside in multiple categories at once or are even pointing to new ones. So, the splits are illustrative, but not defining.

General language percentages have been keeping pretty constant over the past couple of years. Java remains the leading language with nearly half of all applications, a percentage it has held steady for four years. PHP continues to grow in popularity, and actually showed the largest percentage increase of any language over this past census. The current language splits are shown in the next diagram (click to expand):

C/C++ and C# have really not grown at all over the past year. Again, however, for the reasons noted, these trends should be interpreted with care.

Tasty Dogfood? Dogfood Never Tasted So Good

Tools development is hard and the open source nature of today’s development tends to require a certain critical mass of developer interest and commitment. There are some notable tools that have much use and focus and are clearly professional and industrial grade. Yet, unfortunately, too many of the tools on the Sweet Tools listing are either proofs-of-concept, academic demos, or largely abandoned because of lack of interest by the original developer, the community or the market as a whole.

There is a common statement within the community about how important it is for developers to “eat their own dogfood.” On the face of it, this makes some sense since it conveys a commitment to use and test applications as they are developed.

But looked at more closely, this sentiment carries with it a troublesome reflection of the state of (many) tools within the semantic Web: too much kibble that is neither attractive nor tasty. It is probably time to keep the dogfood in the closet and focus on well-cooked and attractive fare.

We at Structured Dynamics are not trying to hold ourselves up as exemplars or the best chefs of tasty food. We do, however, have a commitment to produce fare that is well prepared and professional. Let’s stop with the dogfood and get on with serving nutritious and balanced fare to the marketplace.

Posted by AI3's author, Mike Bergman Posted on February 7, 2011 at 1:47 am in Open Source, Semantic Web Tools, Structured Web | Comments (1)
The URI link reference to this post is: http://www.mkbergman.com/942/tasty-new-sweet-tools-release/
Posted: January 17, 2011

The Hollowing Out of Enterprise IT: Reasons for and Implications from Innovation Moving to Consumers

Today, the headlines and buzz for information technologies center on smartphones, social networks, cloud computing, tablets and everything Internet. Very little is now said about IT in the enterprise. This declining trend began about 15 years ago, and has been accelerating over time. Letting the air out of the enterprise IT balloon has some profound reasons and implications. It also offers some lessons and guidance related to semantic approaches and technologies and their adoption by enterprises.

A Brief Look at Sixty Years of Enterprise IT

One can probably clock the start of enterprise information technology (IT) to the first use of mainframe computers in the early 1950s [1], or sixty years ago. The earliest mainframes were huge and expensive machines that required their own specially air-conditioned rooms because of the heat they generated. The first use of “information technology” as a term occurred in a Harvard Business Review article from 1958 [2].

Until the late 1960s computers were usually supplied under lease, and were not purchased [3]. Service and all software were generally bundled into the lease amount without separate charge and with source code provided. Then, in 1969, IBM led an industry change by starting to charge separately for (mainframe) software and services, and ceasing to supply source code [3]. At about the same time, integrated circuits enabled computer sizes to be reduced, with minicomputers such as those from DEC causing a marked expansion in the number of potential customers. Enterprise apps became a huge business, with software licensing and maintenance fees reaching a peak of 70% of IT vendor total revenues by the mid-1990s [4]. However, since that peak, enterprise software as a portion of vendor revenues has been steadily eroding.

One of the earliest enterprise applications was in transaction systems and their underlying database management software. The relational database management system (RDBMS) was initially developed at IBM. Oracle, based on early work for the CIA in the late 1970s and its innovation to write in the C programming language, was able to port the RDBMS to multiple operating systems. These efforts, along with those of other notable vendors (most of which like Informix no longer exist), led to the RDBMS becoming more or less the de facto standard for data management within the enterprise by the 1980s. Today Oracle is the largest supplier of RDBMS software globally, and other earlier database system designs such as network databases or object databases fell out of favor [5].

In 1975, the Altair 8800 was introduced to electronics hobbyists as the first microcomputer, followed then by Apple II and the IBM PC in 1981, among others. Rapidly a slew of new applications became available to the individual, including spreadsheets, small databases, graphics programs and word processors. These apps were a boon to individual productivity and the IBM PC in particular brought credibility and acceptance within the enterprise (along with the growth of Microsoft). Novell and local area networks also pointed the way to a more distributed computing future. By the late 1980s virtually every knowledge worker within enterprises had some degree of computer literacy.

The apogee for enterprise software and apps occurred in the 1990s, with whole classes of new applications (most denoted by three-letter acronyms) such as enterprise resource planning (ERP), business intelligence (BI), customer relationship management (CRM), enterprise information systems (EIS) and the like coming to the fore. These systems also began as proprietary software, which resulted in the “stovepiping” or creating of information silos. In reaction and with great market acceptance, vendors such as SAP arose to provide comprehensive, enterprise-wide solutions, though often at high cost and with significant failure rates.

More significantly, the 1990s also saw the innovation of the World Wide Web with its basis in hypertext links on the Internet. Greatly facilitated by the Mosaic Web browser, the basis of the commercial Netscape browser, and the HTML markup language and HTTP transport protocol, millions began experiencing the benefit of creating Web pages and interconnecting. By the mid-1990s, enterprises were on the Web in force, bringing with them larger content volumes, dynamic databases and enterprise portals. The ability for anyone to become a publisher led to a focus and attention on the new medium that led to still further innovations in e-commerce and online advertising. New languages and uses of Web pages and applications emerged, creating a convergence of design, media, content and interactivity. Venture capital and new startups with valuations independent of revenues led to a frenzy of hype and eventually the dot com crash of 2000.

The growth companies of the past 15 years have not had the traditional focus on enterprises, but on the use and development of the Web. From search (Google) to social interactions (Facebook) to media and video (Flickr, YouTube) and to information (Wikipedia), the engines of growth have shifted away from the enterprise.

Meanwhile, the challenges of data integration and interoperability that were such a keen focus going back to initial enterprise computerization remain. Now, however, these challenges are even greater, as we see images, documents (unstructured data) and Web pages, markup and metadata (semi-structured data) become first-class information citizens. What was a challenge in integrating structured data in the 1980s and 1990s via data warehousing, has now become positively daunting for the enterprise with respect to scale and scope.

The paradox is that as these enterprise needs increased, the attractiveness of the enterprise from an IT perspective has greatly decreased. It is these factors we discuss below, with an eye to how Web architecture, design and opportunities may offer a new path through the maze of enterprise information interoperability.

The Current Landscape

Since 1995 the Gartner Group has been producing its annual Hype Cycle [6]. The clientele for this research is the enterprise, so Gartner’s presentation of what’s hot and what’s hype and what is being adopted is a good proxy for the IT state of affairs in enterprises. These graphs are reproduced below since 2006 (click to expand). Note how many of the items shown are not very specific to the enterprise:

References to architectures and content processing and related topics were somewhat prevalent in 2006, but have disappeared most recently. In comparison to the innovations noted under the History discussion, it appears that the items on Gartner’s radar are more related to consumer applications and uses. We no longer see whole new categories of enterprise-related apps or enterprise architectures.

The kinds of innovations that are being discussed as important to enterprises in the coming year [7,8] tend to mostly leverage existing innovations in other areas or to wrinkle existing approaches. One report from Constellation Research, for example, lists the five core disruptive technologies of social, mobile, cloud, analytics and unified communications [7]. Only analytics could be described as enterprise focused or driven.

And, even in analytics, the kinds of things being promoted are self-service reporting or analysis [8]. In essence, these opportunities represent the application of Web 2.0 techniques to bring reporting or analysis directly to the analyst. Though important and long overdue, such innovations are more derivative than fundamental.

Master data management (MDM) is another touted area. But, to read analysts' predictions in these areas, it feels like one has stepped into a time warp of technologies and options from a decade ago. When has XML felt like an innovation?

Of course, there is a whole industry of analysts that makes their living prognosticating to enterprises about what to expect from information technologies and how to adopt and embrace them. The general observations — across the board — seem to center on items such as smartphones and mobile, moving to the cloud for software or platforms (SaaS, PaaS), and collaboration and social networks. As I note below, there is nothing inherently wrong or unexciting per se about these trends. But, what does appear true is that the locus of innovation has shifted from the enterprise to consumers or the Internet.

Seven Reasons for a Shift in Innovation

The shift in innovation away from the enterprise has been structural, not cyclical. That means that very fundamental forces are at work to cause this change in innovation focus. It does not mean that innovation has permanently shifted away from the enterprise (organizations), but that some form of countervailing structural changes would need to occur to see a return to the IT focus on the enterprise from prior decades.

I think we can point to seven structural reasons for this shift, many of which interact with one another. While all of them are bringing benefits (some yet to be foreseen) to the enterprise, and therefore are to be lauded, they are not strictly geared to address specific enterprise challenges.

#1: The Internet

As pundits say, “The Internet changes everything” [9]. For the reasons noted under the history above, the most important cause for the shift in innovation away from the enterprise has been the Internet.

One aspect that is quite interesting is the use of Internet-based technologies to provide “outsourced” enterprise applications hosted on Web servers. Such “cloud computing” leverages the technologies and protocols inherent to the Internet. It shifts hosting, maintenance and upgrade responsibilities for conventional apps to remote providers. Initially, of course, this simply shifts locus and responsibility from in-house to a virtual party. But it is also the case that such changes will promote more subtle shifts in collaboration and interaction possibilities. Quick upgrades of underlying infrastructure and application software also become possible.

The implications for existing enterprise IT staff, traditional providers, and licensing and maintenance approaches are profound. The Internet and cloud computing will perhaps have a greater effect on governance, staffing and management than application functionality per se.

#2: Consumer Innovations

The captivating IT-related innovations at present are mobile (smartphones) and their apps, tablets and e-book readers, Internet TV and video, and social networks of a variety of stripes. Somewhat like when personal computers first appeared, many of these consumer innovations have applicability to the enterprise, though only as a side effect.

It is perhaps instructive to look back at the adoption of PCs in the enterprise to understand the possible effect of these new consumer innovations. Central IT was never able to control and manage the proliferation of personal computers, and only began to understand years later what benefits and new governance challenges they brought. Enterprise leaders will understand how to embrace and extend today’s new consumer technologies for the enterprise’s benefits; laggards will resist to no avail.

The ubiquity of computing will be enormously impactful on the enterprise. The understanding of what makes sense to do on a mobile basis with a small screen and what belongs on the desk or in the office is merely a glimmer in the current conversation. However, in the end, like most of the other innovations noted in this analysis, the enterprise will largely be a reactive player to these innovations. Yes, the implications will be profound, but their inherent basis is not grounded in unique enterprise challenges. Nonetheless, adapting to them and changing business practice will be critical to asserting enterprise leadership.

#3: Open Source

Open Source Growth

Ten years ago open source was largely dismissed in the enterprise. About five years ago VCs and others began funding new commercial open source ventures, even while there were still rear guard arguments from enterprises resisting open source. Meanwhile, as the figure to the right shows, open source projects were growing exponentially [10].

The shift to open source in the enterprise, still ongoing, has been rapid. Within five years, more than 50% of enterprise software will be open source [11]. According to an article in Fortune magazine last year [12], a Forrester Research survey found that 48% of enterprise respondents were using open source operating systems, and 57% were using open source code. A similar Accenture survey of 300 large public and private companies found that half are committed to open source software, with 38% saying they would begin using open source software for “mission-critical” applications over the next 12 months.

There are likely many reasons for this shift, including the Internet itself and its basis in open source. Many of the most successful companies of the past 15 years, including Amazon, Google and Facebook, and virtually every large Web site, have shown excellent performance and scalability by building their IT infrastructure around open source foundations. Most of the large, existing enterprise IT vendors, notably including IBM, Oracle, Nokia, Intel, Sun (prior to Oracle), Citrix, Novell (just acquired by Attachmate) and SAP, have bought open source providers or have visible support for open source initiatives. Even two of the most vocal proprietary source proponents of the past, HP and Microsoft, have begun to make moves toward open source.

The age of proprietary software based on proprietary standards is dead. The monopoly rents formerly associated with unique, proprietary platforms and large-scale enterprise apps are over. Even where software remains proprietary, it is embracing open standards for data interchange and APIs. Traditional enterprise apps such as content management, business intelligence and ETL, among others, are being penetrated by commercial open source offerings (as examples, Alfresco, Pentaho and Talend, respectively). The shift to services and new business models appears to be an inexorable force.

Declining profit margins, matched with the relatively high cost of marketing and sales to enterprises, means attention and focus have been shifting away from the enterprise. And with these shifts in focus has come a reduction in enterprise-focused innovation.

#4: Slow Development Cycles in Enterprise

It is not unusual to find deployed systems within enterprises as old as thirty years [13]. So long as they work reasonably well, systems once installed — along with their data — tend to remain in operation until their platforms or functionality become totally obsolete. This leads to rather lengthy turnover cycles, and slow development cycles.

Slow cycles in themselves slow innovation. But slow development cycles are also a disincentive to attracting the most capable developers. When development tends to focus on maintenance, scripts and more routine tasks of the same nature, the best developers tend to migrate elsewhere (see next).

Another aspect of slow development cycles is the imperative for new enterprise IT to relate to and accommodate legacy systems — again, including legacy data. This consideration is the source of one of the negative implications of a shift away from innovation in the enterprise: the orphaning of existing information assets.

#5: What’s Hot: Developers

Arguably, the emphasis on consumer and Internet technologies means that this is where the best developers gravitate. Developing apps for smartphones, working at one of the cool Internet companies, or joining a passionate community of open source developers is what now attracts the best developers. Open source and Web-based systems also lead to faster development cycles. The very best developers are often the founders of the next generation of startups and Web and software companies [14].

While, of course, huge numbers of computer programmers and IT specialists are hired by enterprises each year, the motivations tend to be higher pay, better benefits and more job security. The nature of the work and the bureaucracy and routine of many IT functions require such compensation. And, because of the other shifts noted elsewhere, even the software startups that are able to attract the most innovative developers no longer tend to develop for enterprise purposes.

The number of computer science students has been declining in industrialized countries for some time, and that is the category of slowest growth in IT [14]. Meanwhile, existing IT personnel often have expertise in older legacy systems or have been focused on bug fixes and more prosaic tasks like report writing. Narrow job descriptions and work activities also keep many existing IT personnel from getting exposed to or learning about new trends or innovations, such as the semantic Web.

Declining numbers of new talent, plus declining interest by that talent, combined with (often) narrow and legacy expertise of existing talent, creates a disappointing storm of energy and innovation to address enterprise IT challenges. Enterprises have it within their power to create more exciting career opportunities to overcome these limitations, but unfortunately IT management often also appears challenged to get on top of these structural forces.

#6: What’s Hot: Startups

Open source and Internet-based systems have reduced the capital necessary for a new startup by an order of magnitude or so over the past decade. It is now quite possible to get a new startup up and running for tens to hundreds of thousands of dollars, as opposed to the millions of dollars of years past. This is leading to more startups, more startups per innovator, and quicker startup and abandonment cycles. Ideas can be tried quickly and more easily thrown away [15].

These dynamics are acting to accelerate overall development cycles and to cause a shift in funding structures and funding amounts by VCs and angels. The kind of market and sales development typical of enterprise selling does not fit well within these dynamics; it demands more capital at a time when all trends point the other way.

In short, all of this is saying that money goes to where the returns are, and returns are not of the same basis as decades past in the enterprise sector. Again, this means a hollowing out of innovation for enterprises.

#7: Declining Software Rents and Consolidation

As an earlier reference noted [4], software revenues as a percent of IT vendor revenues peaked in about the mid-1990s. As profitability for these entities began to decline, so did the overall attractiveness of the sector.

As the next chart shows, coincident with the peak in profitability was the onset of a consolidation trend in the enterprise IT vendor sector [16]. The chart below shows that three of the largest IT vendors today — Oracle, IBM and HP — began an acquisition spree in the mid-1990s that has continued until just recently, as many of the existing major players have already been acquired:

Notable acquisitions over this period include: Oracle — PeopleSoft, Siebel Systems, MySQL, Hyperion, BEA and Sun; HP — EDS, 3Com, VeriFone, Compaq, Palm and Mercury Interactive; IBM — Lotus, Rational, Informix, Ascential, FileNet, Cognos and SPSS. Published acquisition costs exceeded $130 billion, mostly for the larger deals. But terms for 75% of the 262 transactions were not disclosed [16]. The total value of these consolidations likely approaches $200 billion to $300 billion.

Clearly, the market is now favoring large players with large service components. This consolidation trend does belie one early criticism of open source vs. proprietary software: that proprietary software is likely to be better supported. In theory this might be true, but vanishing suppliers do not bode well for support either. Over time, we may well see successful open source projects showing greater longevity than many IT vendors.

Positive Implications from the Decline

This discussion is not a boo-hoo because the heyday of enterprise IT innovation is past. Much of that innovation was expensive, often failed to achieve successful adoption, and promoted walled gardens and silos. As someone who ran companies directly involved in enterprise software sales, I personally do not miss the meetings, the travel, the suits and the 18-month sales cycles.

The enterprise has gained much from outside innovation in the past, from the personal computer to LANs and browsers and the Internet. To be sure, today's mobile phones have more computing power than the original Space Shuttle [17], and continued mashup and social engagement innovations will have unforeseen and manifest benefits for enterprises. I think this is unalloyed goodness.

We can also see innovations based on the Internet such as the semantic Web and its languages and standards to promote interoperability. Breaking these barriers is critically needed by enterprises of the future. Data models such as RDF [18] and open world mindsets that better accommodate uncertainty and breadth of information [19] can only be seen as positive. The leverage that will come from these non-enterprise innovations may in the end prove to be as important as the enterprise-specific innovations of the past.

Negative Implications from the Decline

Yet a shift to Internet and consumer IT innovation leaves some implications. These concerns have to do with the unique demands and needs of enterprises. One negative implication is that a diminishing supplier base may not lead to actual deployments that are enterprise-ready or -responsive.

The first concern relates to quality and operational integrity. There is an immense gulf between ISO 9000 or Six Sigma and, for example, the “good enough” of standard search results on the Web. Consumer apps do not impose the same thresholds for quality as demanded by paying bosses or paying customers. This is not a value judgment; simply a reality. I see it reflected in the quality of tools and code for many new innovations today on the Web.

Proofs-of-concept and “cool” demos work well for academic theses or basic intros to new concepts. The 20% that gets you 80% goes a long way to point the way to new innovation; but the 80% to get to the last 20% is where enterprises bet their money. Unfortunately, in too many instances, that gap is not being filled. The last 20% is hard work, often boring, and certainly not as exciting as the next Big Thing. And, as the trends above try to explicate, there are also diminishing rewards for living in that territory.

A similar and second concern pervades data interoperability. Data interoperability has been the central challenge of enterprise IT for at least three decades. As soon as we were able to interconnect systems and bridge differences in operating systems and data schema, the Holy Grail has been breaking information barriers and silos. The initial attempts with proprietary data warehouses or enterprise-wide ERP systems were wrongly trying to apply closed solutions to inherently open problems. But, now, finally when we have the open approaches and standards in hand for bridging these gaps, the attractiveness of doing so for the enterprise seems to have vanished.

For example, we see demos, tools and algorithms being published all over the place that show promising advances or improvements in the semantic Web or linked data (among other areas; see [20]). Some of these automated techniques sound wonderful, but real systems require the hard slog of review and manual approval. Quality matters. If Technique A, say, shows an improvement over Technique B of 5%, that is worth touting. But even at 98% accuracy, we will still find 20,000 errors in a population of 1 million items. Such errors will simply not work in having trains run on time, seats be available on airplanes, or inventory get to its required destination.
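To spell out the arithmetic behind that claim, here is a trivial back-of-the-envelope calculation (the accuracy values other than 98% are purely illustrative):

```python
# Back-of-the-envelope: residual errors at a given accuracy over a large population.
population = 1_000_000

for accuracy in (0.95, 0.98, 0.99, 0.999):
    errors = round((1 - accuracy) * population)
    print(f"{accuracy:.1%} accuracy -> {errors:,} erroneous items out of {population:,}")

# At 98% accuracy: 20,000 erroneous items out of 1,000,000 -- the figure cited above.
```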

What can work from the standpoint of linkage or interoperability on the Web according to consumer standards will simply not fly for many enterprises. But, where are the rewards for tackling that hard slog?

Another concern is security and differential access. Open Web systems, bless their hearts, do not impose the same access and need to know restrictions as information systems within enterprises. If we are to adopt Web-based approaches to the next-generation enterprise — a position we strongly advocate — then we are also going to need to figure out how to marry these two world views. Again, there appears to be an effort-reward mismatch here.

What Lessons Might be Drawn?

These observations are not meant to be a polemic, but a statement of more-or-less current circumstances. Since its widescale adoption, the major challenge — and opportunity — of enterprise IT has been how to leverage the value within the enterprise’s existing digital information assets. That challenge is augmented today with the availability of literally a whole world of external digital knowledge. Yet, the energy and emphasis for innovation to address these challenges has seemingly shifted to consumers and away from the enterprise.

Economics abhors a vacuum. I think two responses may be likely to this circumstance. The first is that new vendors will emerge to address these gaps, but with different cost structures and business models. I’d like to think my own firm, Structured Dynamics, is one of these entities. How we are addressing this opportunity and differences in our business model we will discuss at a later time. In any case, any such new player will need to take account of some of the structural changes noted above.

Another response can come from enterprises themselves, using and working the same forces of change noted earlier. Via collaboration and open source, enterprises can band together to contribute resources, expertise and people to develop open source infrastructures and standards to address the challenges of interoperability. We already see exemplars of such responses in somewhat related areas via initiatives such as Eclipse, Apache, W3C, OASIS and others. By leveraging the same tools of collaboration and open data and systems and the Internet, enterprises can band together and ensure their own self-interests are being addressed.

One advantage of this open, collaborative approach is that it is consistent with the current innovation trends in IT. But the real advantage is that it works and is needed. Without it, it is unclear how the enterprise IT challenge — especially in data interoperability — will be met.


[1] Though calculating machines and others extend back to Charles Babbage and more relevant efforts during World War II, the first UNIVAC was delivered to the US Census Bureau in 1951, and the first IBM to the US Defense Department in 1953. Many installations followed thereafter. See, for example, Lectures in the History of Computing: Mainframes.
[2] As provided by “information technology” (subscription required), Oxford English Dictionary (2 ed.), Oxford University Press, 1989, http://dictionary.oed.com/, retrieved 12 January 2011.
[3] See further the Wikipedia entry on proprietary software.
[4] M.K. Bergman, 2006. “Redux: Enterprise Software Licensing on Life Support,” AI3:::Adaptive Information blog, June 2, 2006. See http://www.mkbergman.com/111/the-death-of-enterprise-software-licensing/.
[5] The combination of distributed network systems and table-oriented designs such as Google’s BigTable and related open source Hadoop, plus many scripting languages, is leading to the resurgence of new database designs including NoSQL, columnar, etc.
[6] The Gartner Hype Cycle is a graphical representation of the maturity, adoption and application of technologies. It proceeds through five phases, beginning with a technology trigger and then, if successful, ultimately adoption. The peak of the curve represents the biggest “hype” for the innovation. The information in these charts is courtesy of Gartner. The sources for the charts are summary Gartner reports for 2010, 2009, 2008, and 2006. 2007 was skipped to provide a bit longer time horizon for comparison purposes.
[7] As summarized by Klint Finley, 2011. “How Will Technology Disrupt the Enterprise in 2011?,” ReadWriteWeb Enterprise blog, January 4, 2011.
[8] Jaikumar Vijayan, 2011. “Self-service BI, SaaS, Analytics will Dominate in 2011,” in Computerworld Online, January 3, 2011.
[9] According to Google on January 12, 2011, there were 251,000 uses of this exact phrase on the Web.
[10] Amit Deshpande and Dirk Riehle, 2008. “The Total Growth of Open Source,” in Proceedings of the Fourth Conference on Open Source Systems (OSS 2008), Springer Verlag, pp 197-209; see http://dirkriehle.com/wp-content/uploads/2008/03/oss-2008-total-growth-final-web.pdf.
[13] For example, according to James Mullarney in 2005, “How to Deal with the Legacy of Legacy Systems,” the average age of IT systems in the insurance industry was 23 years. In that same year, according to Logical Minds, a survey by HAL Knowledge Systems showed the average age of applications running core business processes to be 15 years old, with almost 30 per cent of companies maintaining software that is 25 years old or older.
[14] For general IT employment trends, see the Bureau of Labor Statistics; for example, http://www.bls.gov/oco/ocos303.htm.
[15] See, for example, Paul Graham, 2010. “The New Funding Landscape,” Blog post, October 2010.
[16] This chart was constructed from these sources: Oracle — http://en.wikipedia.org/wiki/List_of_acquisitions_by_Oracle; IBM — http://en.wikipedia.org/wiki/List_of_mergers_and_acquisitions_by_IBM; and HP — http://en.wikipedia.org/wiki/List_of_acquisitions_by_Hewlett-Packard. Of course, other acquisitions occurred by other players over this period as well.
[17] Current smartphones may have around 2 GHz in processing power and 1 GB of RAM; see for example, this Motorola press release. By comparison to the Shuttle, see http://en.wikipedia.org/wiki/Space_Shuttle#Flight_systems.
[18] M. K. Bergman, 2009. “Advantages and Myths of RDF,” AI3:::Adaptive Information blog, April 8, 2009.
[19] M. K. Bergman, 2009. “The Open World Assumption: Elephant in the Room,” AI3:::Adaptive Information blog, Dec. 21, 2009.
[20] See, for example, the Sweet Tools listing of 900 semantic Web and -related tools on this AI3:::Adaptive Information blog.
Posted: October 25, 2010

Objective is to Tackle the ‘Semantics’ Gap in the Semantic Web

I'm pleased to announce that our company, Structured Dynamics, has formed a strategic partnership with Ontotext, a leader in semantic technologies for the past 10 years.

Ontotext is the developer of OWLIM, a highly scalable semantic database engine, and KIM, a popular semantic annotation and search platform. Its FactForge and LinkedLifeData services provide the largest curated and interoperable linked data platforms over which inferencing and reasoning may be applied. Some of Ontotext’s major clients include AstraZeneca, BBC and Korea Telecom. Major professional services include its own technologies, plus text mining and semantic annotation. Ontotext has notable and longstanding technical partnerships, such as with the GATE team and many of the other leading technologies and companies in the semantic Web space. We are very pleased to join forces with them.

Semantic ‘Gap’ is Basis of Partnership

Our partnership was formed to address some of the key semantic ‘gaps’ in the semantic Web. The partnership will focus on development of the next generation of the UMBEL and PROTON ontologies, as well as tools and applications based on them.

Volumes of linked data on the Web are growing. This growth is exposing three key weaknesses:

  1. inadequate semantics for how to link disparate information together that recognizes inherently different contexts and viewpoints and (often) approximate mappings
  2. misapplication of many linking predicates, such as owl:sameAs, and
  3. a lack of coherent reference concepts by which to aggregate and organize this linkable content.

Thanks to the efforts of the W3C (World Wide Web Consortium), we now have the techniques, languages and standards to deliver the “web” portion of the semantic Web. But, the practical “semantics” for actually effecting the semantic Web have heretofore been lacking. Early experience with linked data has exposed many poor practices. The lack of approximate linking predicates and reference concepts undercuts our ability to achieve meaningful semantic interoperability.

In forming our partnership, Ontotext and SD will shine attention on this semantics “gap”. We will also be aggressively seeking additional partners and players to join with us on this challenge. My recent outreach to DCMI (the Dublin Core Metadata Initiative) is one example of this commitment; we will be talking with others in the coming weeks.

Linked data and the prospects of the semantic Web are at a critical juncture. While we have seen much growth in the release of linked data, we are still not seeing much uptake (other than some curated pockets). Linkages between datasets are still disappointingly low, and quality of linkages is an issue. The time has come to stop simply shoveling more triples over the fence.
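To ground the first two weaknesses in a small, hypothetical example, the sketch below (Python with rdflib) contrasts a strong owl:sameAs assertion, which declares full identity between two resources, with a softer approximate mapping of the kind the umbel:correspondsTo predicate (described in the UMBEL 1.00 release above) is meant to express. The resource URIs are invented and the umbel namespace URI is an assumption:

```python
# Hypothetical illustration: owl:sameAs asserts full identity, so a reasoner may
# merge every statement about the two resources -- often too strong for things
# that are only "roughly the same" across datasets. An approximate-mapping
# predicate links them without that entailment.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

UMBEL = Namespace("http://umbel.org/umbel#")    # assumed UMBEL vocabulary namespace
A = Namespace("http://example.org/datasetA/")   # hypothetical source dataset
B = Namespace("http://example.org/datasetB/")   # hypothetical target dataset

g = Graph()
for prefix, ns in (("owl", OWL), ("umbel", UMBEL), ("a", A), ("b", B)):
    g.bind(prefix, ns)

# Too strong: the city of Paris and the département of Paris are related but not identical.
g.add((A.Paris_city, OWL.sameAs, B.Paris_department))

# Softer alternative: an approximate correspondence that links without asserting identity.
g.add((A.Paris_city, UMBEL.correspondsTo, B.Paris_department))

print(g.serialize(format="turtle"))
```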

Building Blocks

The combination of UMBEL and PROTON offers a powerful blend to address these weaknesses. Our partnership will first provide a logical mapping and consolidated framework based on the two core ontologies. These will be made available as standard ontologies and via open source semantic annotation tools.

UMBEL (Upper Mapping and Binding Exchange Layer) is both a vocabulary for building domain ontologies and a framework of more than 20,000 reference concepts. The UMBEL reference ontology is used to tag information and map existing schema in order to help link content and promote interoperability. UMBEL's reference concepts and structure are a direct subset extraction of the Cyc knowledge base.

The PROTON ontology (PROTo ONtology) is a basic upper-level ontology that contains about 300 classes and 100 properties, providing coverage of the general concepts necessary for a wide range of tasks, including semantic annotation, indexing, and retrieval of documents. It is domain independent with coverage suitable to encompass any domain or named entity.

This consolidated framework will then be applied to organize and provide a coherent categorization of the Wikipedia online encyclopedia. One expression of this result will be a new version of Ontotext’s FactForge, already the largest and best performing reasoning engine leveraging linked data. This new version will allow easy access to the most central Linking Open Data (LOD) datasets such as DBpedia, Freebase, and Geonames, through the vocabularies of UMBEL and PROTON. Additional applications in linked data mining and general tagging of standard Web content are also contemplated by the partnership.

Ontotext’s proven reasoning technologies and ability to host extremely large knowledge bases with great performance are tremendous boons to the next iteration of UMBEL. We have been seeking large-scale coherency testing of UMBEL for some time and Ontotext is the perfect answer.

Ontotext’s CEO, Atanas Kiryakov, indicated their interest in UMBEL stemmed from what they saw as some stumbling blocks with linked data while developing FactForge. “The growth and maturation of linked data will require credible ways to orient and annotate the data,” said Kiryakov. “UMBEL is the right scope of comprehensiveness and size to use as one foundation for this,” he said. Ontotext is also the original developer and current maintainer of PROTON, which will also contribute in this role.

What is to Come?

The efforts of the partnership will first be seen with the release of UMBEL v. 0.80 in the next couple of weeks. This update revises many aspects of the ontology based on two years of applied experience and updates it to OWL 2. Then, this basis will be used for broader mappings and linkages to Wikipedia. Those next mappings are earmarked for UMBEL version 1.00, slated for release by the end of the year. All of these planned efforts will be released as open source.

Among other intended uses, PROTON, UMBEL and FactForge form a layered reference data structure that will be used for data integration within the European Union research project RENDER. The large-scale RENDER project aims to integrate diverse methods in the ways Web information is selected, ranked, aggregated, presented and used.

Beyond that, further relationships and partnerships are being actively sought with players serious about interoperable, high-quality data on the semantic Web. We welcome inquiries or outreach.