Posted: January 13, 2014

Balderdash and Bunkum: Consumer Trends are Manifest, but Enterprise Software Has Its Own Imperatives

The end of the year is always the silly season for technology pundits. To gain attention, it is often the “end of this”, the “death of that”, or some new paradigm or revolution. Granted, it is hard to get attention when everyone is pontificating on this or that. But hype and hyperbole do nothing to help users understand fundamental changes in the marketplace.

This year’s silliness award goes to Brian Proffitt of ReadWrite, who opined that 2014 would bring the death of the distinction between consumer and enterprise software, stating, “legacy enterprise vendors need to serve business and consumers alike, or risk becoming roadkill.” Balderdash. (And bunkum, BS and brimborion if one wants to be alliterative.)

PCs thirty years ago, local networks twenty years ago, and the Web ten years ago did not “kill” enterprise software, and cloud computing and smartphones today will not either. Consumer applications and technology will continue to point the way to important new trends. But the fundamental distinctions of enterprise software will also live on. Let’s look at five of these distinctions [1].

1. The Buying Process

Consumer software is an individual purchase; enterprise software is purchased on behalf of a group. That means enterprise sales need to involve many more decision-makers, some or all of whom may not be the actual users, as when IT acts as the de facto purchasing agent. Multiple perspectives need to be brought to bear on an enterprise acquisition, and a single negative voice is often sufficient to scuttle a sale. Consumer software, on the other hand, may be free, notably lower in cost, or acquired on a whim.

Traditionally this has led to longer decision cycles and the need to employ dedicated reps for enterprise sales. Though SaaS (software as a service) or PaaS (platform as a service) can lower initial acquisition costs and improve the fundamental business model, adoption of enterprise software is still a group decision. Enterprises know well that initial adoption carries longer-term costs in integration, interoperability, training and documentation. Software may become “legacy” in the enterprise because of these lifecycle realities and costs.

2. Enterprise “-ilities”

Enterprises, then, in representing the interests of groups or organizations, also have requirements that extend beyond what an individual consumer requires. Many of these correspond to the well-known “-ilities” — reliability, scalability, operability, interoperability, maintainability, and availability. An individual consumer is merely inconvenienced when failures occur along any of these dimensions. The enterprise experiences costs, risks or lost opportunities when they occur. In other words: money.

These “-ilities” place a premium on testing and documentation, and they often require a longer-term relationship with the software vendor or its representatives. Because of the financial impacts of failures along these dimensions, it is often necessary to have support agreements or contracts in place to insure against the risks. The “-ilities” also place additional code and testing requirements upon the software itself.

3. Security

Though often lumped in with the “-ilities”, security is an additional enterprise requirement that warrants its own distinction. Whether for-profit or non-profit, every enterprise is unique, with potentially proprietary information both internally and externally (with respect to the public or possible competitors). Though individual consumers also have requirements for privacy and confidentiality, their information flows are strictly between the individual and outside entities. In an enterprise, access may occur between many internal individuals and all of their external contacts.

The nature of individual consumer security is more like a ring or protective shell. In enterprises, security must be built fully “into the cake”, capturing distinctions between applications, databases, datasets, and access and modification rights at all levels, or it is not effective at all. Like the other “-ilities,” the enterprise security requirement leads to a much different development and coding model than consumer software. And, frankly, it leads to higher development costs.

Initially, these hurdles were among the causes of slower adoption of open source within enterprises. We are also learning better architectural designs and reliance on APIs that are helping fulfill these enterprise requirements at lower cost and with greater sustainability. But the importance of security to the enterprise remains.

4. Governance

Security, the “-ilities”, and ongoing reliance on legacy enterprise systems also mean that repeatable workflows and governance need to be at the core of enterprise software use. Are things working well? Where are they breaking down? Where do they need improvement? How can we incorporate a constant influx of new users? How can we manage actual costs and effectiveness?

Any enterprise that needs to maintain a competitive or sustainability edge must be able to address these questions. For software, this means versioning, documentation and training to go with it, and the means to track use and misuse. (Not to mention the additional workflow software to manage these processes.) Every effective enterprise understands that what is not measured cannot be managed.

These training, versioning and logging requirements are essential to effective governance of software and the information upon which it operates. These requirements, too, are different from what an individual needs or wants. They, too, add costs and requirements above normal consumer software demands.

5. Business Model

These enterprise distinctions help bound the kinds of business models that may be applied to enterprise software. Enterprise software requirements are higher and more demanding (and take longer to bring to fruition) than consumer software requirements. Support, longevity, reputation and quality are important factors for software vendors to demonstrate in order to overcome the legitimate risk questions enterprises ask when contemplating a new (and potentially legacy) enterprise software commitment.

Fortunately, as systems have become more open with a new architectural model based on the distributed Web, many older enterprise hurdles can now be more readily overcome. These advances are unalloyed goodness. But enterprise imperatives still remain.

We can hope that with less risky SaaS or PaaS much can be done to reduce initial acquisition costs and risks. Open source software is also lowering the cost of initial enterprise software development by orders of magnitude [2]. Nonetheless, higher costs with support commitments distinguish enterprise software business models from any of the consumer kind. I expect that fundamental distinction to remain.

Consumer Trends DO Affect the Enterprise

These five factors, or other splits that could reasonably be made, are not meant to deny the importance of consumer software. The point is merely that enterprise software has its own set of imperatives. Enterprise software is certainly more conservative and slower-paced, for the exact reasons that distinguish it from consumer software. Talk of convergence or the “consumerization” of enterprise software misses these distinctions and what will continue to be the fundamental differences between the two software categories.

Because of its lesser requirements, meaning in economic terms “lower barriers to entry,” consumer software and its devices will also continue to be the lodestar for innovation. In my own thirty years in this space, we have seen consumer leadership in device form factors (PCs to smartphones), architecture (the Web, APIs and distributed networks), user interfaces (browsers and HTML), data and data models (RDF, XML, JSON), programming languages (scripting, Ruby, Python), business models (open source, cloud computing), software models (apps, SaaS, PaaS), and so on. Enterprise software is, by and large, a sink for consumer innovations, not a source.

But to be successful in the enterprise, those innovations must also meet more stringent requirements. And, some of those requirements, such as interoperability, are clearly driven more from the enterprise side of things.

Thus, silly talk about consumer versus enterprise markets, framed as either “death” or “convergence,” really misses the point. Ultimately, they are different markets with different imperatives. Yes, there is a synergy and natural relationship — after all, code and devices may be shared in either realm — but the roles and contributions of each differ. Though I don’t deny that some innovations may work equally well in either the consumer or enterprise markets, most innovation will occur in the consumer sector, while higher revenues and income are to be derived from the enterprise sector.

Today’s Enterprise Picture

Despite the silly punditry noted above, major industry analysts and the venture capital community are signalling a shift from the consumer to the enterprise market. Gartner, for example, sees enterprise software growth of 5.8% in 2014, roughly double the growth of other IT expenditures. CB Insights points to a dramatic shift in venture capital support for enterprise software versus consumer over the past two years [3]:

VC Software Commitments, 2011-2013

In 2013, about 70% of VC software funding went to startups building tech for businesses. (Actually, the shift was much greater in that $450 million of the consumer total went to just two consumer companies, Uber and Pinterest.) VC funding for enterprise software has risen 65% in the past two years; meanwhile, funding of consumer software by VCs has dropped 60%.

Besides the crowded consumer space and social networking perhaps losing steam, these trends suggest that the consumer innovations of the past few years are now ripe for “enterprising” within the enterprise market. What can be taken from the consumer side must now be examined for incorporation and adoption on the enterprise side. This is not the “consumerization” of the enterprise, but the “enterprising” of consumer innovations.

This distinction is important. Adoption of prior consumer innovations will not occur via osmosis (“consumerization”), but by purposeful re-packaging and modification of those innovations to meet enterprise requirements (“enterprising”). That is, those successful in leveraging consumer innovations into the enterprise will do so purposefully by adapting to enterprise imperatives. The target-rich environment of the next couple of years will be adapting prior consumer innovations to the enterprise.


[2] My first enterprise software company from the early 1980s required more than $1 million in start-up software development funds; more recent experience has been on the order of $50,000 to $100,000.

Posted: November 16, 2012

Friday Brown Bag Lunch: The New Paradigm of ‘Substantive Marketing’ for Innovative IT

This decade has clearly marked a sea change in the move of enterprise software from proprietary to open source, as I have recently discussed [1]. It is instructive that a mere six years ago I was in heated fights with my then Board about open source; today, that seems so quaint and dated.

World's Tallest Flagpole; see ref [9]

Also during this period many have noted how open source has changed the capital required to begin a new software startup [2]. Open source both provides the tooling and the components for cobbling together specialty apps and extensions. Six and seven and even eight figure startup costs common just a decade ago have now dropped to four or five figures. When we see the explosion of hundreds of thousands of smartphone apps we are seeing the glowing residue of these additional sea changes. Dropping startup costs by one to three orders of magnitude is truly democratizing innovation.

But something else has been going on that is changing the face of enterprise software (besides consolidation, another factor I also recently commented on). And that factor is “marketing”. Much less commentary is made about this change, but it, too, is greatly lowering costs and fundamentally changing market penetration strategies. That topic — and my personal experience with it — is the focus of this article.

Friday Brown Bag Lunch: This Friday brown bag leftover was first placed into the AI3 refrigerator on August 15, 2011. This reprise is unchanged from its original posting and still describes how Structured Dynamics undertakes its marketing.

The Obsolete Recent Past

Besides the few remaining big providers of enterprise software — like IBM, Oracle, HP, SAP — most vendors have totally remade the sales practices of just a few years ago. Large sales forces with big commissions and one- to two-year sales cycles can no longer be justified when software license fees and the percentage maintenance annuities that flow from them are dropping rapidly. Today’s mantras are doing more with less and doing it faster, hardly consistent with the traditional enterprise software model. Sure, big enterprises, especially big government and big business, have large sunk costs in legacy systems that will continue to be milked by existing vendors. But the flow is constricting, with longer-term trends clear to see. The old enterprise software model is obsolete.

Even if it were not dying, it is hard to square huge investments in sales and marketing when product development has become inexpensive and agile. The proliferation of three-letter marketing acronyms for branding “new” product areas and the standard formulas for product hype of just a few years ago also feel old and dated. Cozy relationships with conventional trade press pundits and market analysts seem to be diminishing in importance, possibly because the authoritativeness of their influence is also diminishing. It is harder to justify market research firm subscription costs when priority budget items are being cut and new information outlets have emerged.

In response to this, many developers have forsaken the enterprise market for the consumer one. Indeed, enterprises themselves are looking more and more to the consumer sector and commodity apps for innovation and answers. But, still, problems unique to enterprises remain, and how to reach them effectively in this brave new world is today’s marketing problem for enterprise software vendors.

Most entities today, when opining about these challenges, tend to emphasize the need for “laser focus” and “rifle-shot” targeting of prospects. The advice takes the form of: 1) emphasize well-defined verticals; 2) know your market well; and 3) target and go after your likely prospects. Prospect data mining and targeted ad analysis are the proffered elixirs.

But, there is little evidence such refined methods for prospect identification and targeting are really working. Like politicians doing focus groups and opinion polling to capture the desired “message” of their potential electorates, these are all still “push” models of marketing. Yet we are swamped with pushed messages and marketing everywhere we turn. The model is failing.

Besides message overload, there are two issues with laser targeting. First, despite all that we try to know about ready buyers (for enterprise software), we really don’t know whether any particular individual is truly needful, is in a position to buy, has the authority to buy, or is the right advocate to make the internal sell. Second, though the idea of “laser” carries with it the image of focus and not flailing, it is in fact expensive to identify the targets and send a focused message their way. Because of these issues, decay rates for laser prospects throughout conventional sales pipelines continue to rise.

A New Marketing Paradigm

There has always been the phenomenon of the “fish jumping into the boat”; that is, the unanticipated inbound inquiry from a previously unknown prospect leading to a surprisingly swift sale. But we have seen this phenomenon increase markedly in recent years. Structured Dynamics’ current customer base — including recurring customers — comes almost exclusively from this source. As we have noted this trend in comparison with more targeted outreach, we have spent much time trying to understand why it is occurring and how we can leverage what Peter Drucker called the “unexpected success” [3].

What we are seeing, I believe, is a shift from sales to marketing, and within marketing from direct or outbound marketing to a new paradigm of marketing. Others have likened this to inbound marketing [4] or content marketing [5] or permission marketing [6]. What we are seeing at Structured Dynamics bears many resemblances to parts of what is claimed for these other approaches, but not all. And, it is also true that what we are seeing may pertain mostly to innovative IT for emerging enterprise markets, and not a generalized paradigm suitable to other products or markets.

For lack of a better term, what we are seeing we can term “substantive marketing”. By this we mean offering valuable content and solutions-oriented systems for free and without restriction. This shares aspects with content marketing. Then, in keeping with the trend for buyers doing their own research and analysis to fulfill their own needs, similar to the premises of inbound or permission marketing, potential consumers can make their own judgments as to relevance and value of our offerings.

Sometimes, of course, some prospects find our approaches and solutions lacking. Sometimes, they may grab what we have offered for free and use it on their own without compensation to us. But where the match is right — and we need to be honest with both ourselves and the customer when it is not — we can better spend the customer’s limited time and resources to tailor our generic solutions to their specific needs. In doing so, we offer higher value (tailored services) while learning better about another spectrum of consumer need that can virtuously enhance our substantive offerings for the next prospect.

So, let’s decompose these components further to see what they can tell us about this new practice of substantive marketing and how to use it as an engine for moving forward.

Substantive Marketing

The Virtuous Cycle Begins with Substantive Solutions

The premise of substantive marketing is to offer square-deal value to the marketplace in the form of solutions-based content. Like content marketing that offers “the creation or sharing of content for the purpose of engaging current and potential consumer bases” [5], substantive marketing goes even further. The whole basis and premise of the approach is to provide substantive content in one or more of these areas, preferably all:

  • Knowledge — this substantive area includes papers, commentary, survey results or listings of tools and references useful to the target market
  • Analysis — this content area includes unique analysis of market trends, data, technologies or reviews that pertain to the target market
  • Code — this area relates to the provision of open source code and tools, preferably under licenses that allow users to use the software without restriction (two examples are the Apache 2 license and the MIT license)
  • Documentation — a critical substantive area is the documentation of how to install, use, modify or customize these tools, including a bias toward APIs and tutorial information
  • Methodologies, workflows and best practices — it is important to also discuss how to properly operate and utilize these tools and information. Taking care to document lessons learned and best practices also helps the user community avoid common mistakes and to speed adoption and utility, and
  • Demos — this area involves setting up (and sharing code and procedures for same) demos that show how the code and its methods actually work. Demos also become first use cases to aid the new user in learning and setting up the code bases.

Further, this substantive content is offered without strings, restrictions or customer fill-in forms. The content is not a come-on or a teaser. We are not trying to gather leads or prospect names, because we have no intent to dun them with emails or follow-ups.

This substantive content is as complete as can be to enable new users to adopt the information and tools in their current state without further assistance. (In some cases, the information also educates the marketplace in order to prepare future customers for adoption.) Most importantly, this substantive content is offered for free, either open source (for code) or creative commons for documentation and other content. In return, it is fair to request — and we do — attribution when this material is used.

We have previously termed this complete panoply of substantive content a total open solution [7]. Some might find the provision of such robust information crazy: How can we give away the store of our proprietary knowledge and systems?

But we find this kind of thinking old school. In an open source world where so much information is now available online, with a bit of effort customers can find this information anyway. Rather, our mindset is that customers do not want to pay again for what has already been done, but are willing to pay for what can be done with that knowledge for their own specific problems. Offering the complete storehouse of our knowledge in fact signals our interest in only charging the customer for new answers, new value or new formulations. The customers we like to work with feel they are getting an honest, square deal.

Flagpole Venues Help Increase Awareness

Consider your substantive content to be your flag, a unique banner for conveying and packaging your specific brand. It is thus important to find appropriate flagpoles — in the virtual territories that your customers visit — for raising this content high for them to see. Since the role of these flagpoles is to create awareness in potential prospects — who you do not likely know individually or even by group in advance — it makes sense to raise your offerings up on many flagpoles and on the highest flagpoles. Visibility is the object of the approach.

This approach is distinctly not leafletting or cramming links or emails into as many spaces as possible. The idea of substantive marketing is to fly valuable content high enough that desirous potential customers can discover and then inspect the information on their own, and only if they so choose. In this regard, substantive marketing resembles permission marketing [6].

Being visible helps ensure that the needful, questing prospect that you would never have been able to target on your own is able to see and be aware of your offerings. And, since they are seeking information and answers, your collateral needs to be of a similar nature. Solutions and substance are what they are seeking; what you have run up the flagpole should respond to that.

The mindset here is to respect your prospective customers and to allow them to receive and inspect your offerings, but only if they so choose. If flown in the right venues with the right visibility, customers will see your flags and inspect them if they meet their requirements.

Some of the venues at which you can raise your flags include:

  • Blogs — this venue is especially helpful, since you have complete control over content, message, voice and packaging
  • Social networks — the value of social networks is now accepted, and should be a core component of any visibility strategy. However, it is also important to make sure that your contributions are driven by substance and value and do not become part of the cacophonous background noise
  • Vertical media — there are always existing outlets well-read and -respected by your customer prospects. Establishing relationships and value with these third-party outlets can extend your reach
  • Web sites — this venue includes your standard Web sites, of course. But, you should also consider setting up specific project-related sites or sites dedicated to documentation (cf. our TechWiki site of 300+ technical articles) or to methodologies (the excellent MIKE2.0 site is one great example) or to other ways by which particular content (such as tools with the Sweet Tools site) can raise another flag
  • User forums — user discussion groups and forums also become their own attractants for like-interested prospects, and
  • Conferences and tradeshows — while potentially valuable, presence at conferences and tradeshows must be carefully evaluated. Since participation and opportunity costs are high, the venues should be clearly relevant to your market space with likely decision makers in attendance.

The observant reader will have already concluded that each of these venues develops slowly, and therefore raising visibility is generally a slow-and-steady game that requires patience. Start-up vendors backed by venture firms or those looking for quick visibility and cashout will not find this approach suitable. On the other hand, customer prospects looking for answers and self-sustaining solutions are not much interested in flash-in-the-pan vendors, either.

A Model Responsive to the Changing Nature of Customer Prospects

The real drivers for this changing paradigm come from customer prospects. Sophisticated buyers of enterprise IT and instrumental change agents within organizations share most if not all of these characteristics:

  • They are inundated with marketing messages and jaded about hype and “pushed” messages
  • They are generally knowledgeable about their needs and problem spaces and about approximate technologies. They are eager and desirous of learning independently and know that their recommendations affect their personal reputations and standing within their enterprises
  • With the many volatile external and internal changes, including staff reductions and fluid assignments, leadership for new technology adoption can come from many different and unknown corners of the organization; it is extremely difficult to identify and target prospects
  • The economic and competitive environment places a premium on affordability and low-risk evaluations of new technologies
  • Lock-ins of any kind — be it to specific vendors or technologies — are understood as inherently risky. This understanding is raising the importance of open and standards-based approaches
  • Being the subject of a pushy sales effort is distasteful and a negative to an eventual sale. Education and learning, however, is respected
  • Because of all that is at stake, honesty with no bullshit is highly appreciated. If you as a vendor do not offer an appropriate solution or have fulfillment weaknesses, tell the prospect so. Further, tell them who can supply the solution. One never knows when and where the next problem may arise, and providing trustworthy advice can lead to later engagements.

More often than not we find our customers to have already installed and used our existing substantive materials for some time before they approach us about further work. They appreciate the tutorial information and have taught themselves much in advance. By the time we engage, both parties are able to cost-effectively focus on what is truly missing and needed and to deliver those answers in a quick way. Re-engagements tend to occur when a next set of gaps or challenges arise.

Though it may sound trite or even unbelievable to those who have not yet experienced such a relationship, the square deal value offered by substantive marketing can really lead to true partnerships and trust between vendor and customer. We experience it daily with our customers, and vice versa. We also think this is the adaptive approach that our new environment demands.

The Free Path to Open Source and Solutions

Once prospects learn of our substantive offerings, many may decide independently that what we have is not suitable. Others may simply download and use the information on their own, which we often never know about, let alone receive revenue from. We are completely fine with this, as the three cases below show.

First, some of these prospects need no more than what we already have. This increases our user base, increases our visibility and often results in contributions to our forums and documentation.

Then, some of these prospects come to learn they need or want more than what our current offerings provide, leading to two possible forks. In one fork, the second case, they may have sufficient skills internally or with other suppliers to extend the system on their own. Some of this flows back to an improved code base or improved installation or documentation bases.

In the other fork, the third case, they may decide to engage us in tailoring a solution for them. That case is the only one of the three that leads to a direct revenue path.

In all three cases we win, and the customer wins. Maybe enterprise software vendors of decades past rue this reality of lower margins and shared benefits; we agree that the absolute profit potential of substantive marketing is much less. But we gladly accept the more enjoyable work and steady revenue relationships resulting from these changes. We are not engaged in some pollyanna-ish altruism here, but in a steely-eyed honest brokering that best serves our own self-interest (and, fairly, that of the customer as well).

A Square Deal Baseline for Tailored Services

Great IT products do not come from idle musings or dreamed-up functionality. They come solely and directly from solving customer problems. Only via customers can software be refined and made more broadly usable.

A slipstream of those who have previously become aware and tested our offerings will choose to engage our services. This generally takes the form of an inbound call, where the prospect not only qualifies itself, but also establishes the terms and conditions for the sale. They have chosen to select us; they are fish that have jumped into the boat.

To again quote Peter Drucker, “. . . the aim of marketing is to make selling superfluous. The aim of marketing is to know and understand the customer so well that the product or service fits him and sells itself. Ideally, marketing should result in a customer who is ready to buy. All that should be needed then is to make the product or service available . . .” [8]. This is precisely what I meant earlier about the shift in emphasis from sales to marketing.

Even at this point there may be mismatches in needs and our skills and availabilities. If such is the case, we do not hesitate to say so, and attempt to point the prospect in another direction (from which we also gain invaluable market knowledge). If there is indeed a match, we then proceed to try to find common ground on schedule and budget.

Paradoxically, this square deal and honesty about the readiness and weaknesses of our offerings often leads to forgiveness from our customers. For example, for some time we have lacked automated installation scripts that would make it easier for prospects to install our open semantic framework. But, because of compensating value in other areas, such gaps can be overlooked and tackled later on (indeed, as a current customer is now funding). By not pretending to be everything to everyone, we can offer what we do have without embarrassment and get on with the job of solving problems.

For larger potential engagements, we typically suggest a fixed-price initial effort to develop an implementation plan. The interviews and research to support this typical 4- to 6-week effort (generally in the $5K to $10K range, depending) then result in a detailed fulfillment proposal, with firm tasks, budget and schedule, specific to that customer’s requirements. Just as we respect our prospects’ time and budget, we expect the same and do not conduct these detailed plans without compensation. With respect to fulfillment contracts, we cap the contract amount and limit milestone payments to pre-set percentages or time expended, whichever is lower.

This approach ensures we understand the customer’s needs and have budgeted and tasked accordingly. Capped contracts also put the onus on us the contractor to understand our own effort and tasking structures and realities, which leads to better future estimating. For the customer, this approach caps risk and potential exposure, and ensures milestones are being met no matter the time expenditures by us, the contractor. This approach extends our square-deal basis to also embrace risks and payments.

New (and Open Source) Developments Fuel the Substance Pipeline

Thus, when customers engage us, they spend almost solely on new functionality specifically tailored to their needs. In doing so, we suggest they agree to release the new developments they fund as open source. We argue — and customers predominantly agree — that they are already benefitting from lower overall costs because other customers have funded sharable, open source before them. We point out that the new customers that follow them will also be independently creating new functionality, to which they will also later benefit.

(This argument does not apply to specific customer data or ontologies, which are naturally proprietary to the customer. Also, if the customer wants to retain intellectual ownership of extensions, we charge higher development fees.)

Once these new developments are completed, they are fed back into a new baseline of valuable content and code. From this new baseline the cycle of substantive marketing can be augmented anew and perpetuated.

Three Guidelines to Leverage Substantive Marketing

All of these points can really be boiled down to three guidelines for how to make substantive marketing effective:

  • First, whatever your domain or market, provide useful and substantive content. The content you offer is indeed your marketing collateral. Prospective customers can gauge from it directly whether it meets their needs, appears sound and workable, and has value. If you have little of substance to offer, this paradigm is not for you
  • Second, plant many flagpoles and raise your flags high in territories your market prospects are likely to visit. This is a process that requires thoughtfulness and patience. Thoughtfulness, because that is how you determine where to plant your flags. If you yourself are a consumer of what you offer, it is easier to find those venues. And patience, because it takes time to stack valuable content upon valuable content in order to raise visibility
  • And, third, be honest and respectful. Help your prospect work within available budget to achieve the most possible at lowest risk. And help them find others, if need be, who might be better able than you to truly solve their problems.

What we are finding — as we continue to refine our understanding of this new paradigm — is that through substantive marketing the fish are finding us and they sometimes jump into the boat. We like our enterprise customers to pre-qualify themselves and already be “sold” once they knock on the door. One never knows when that phone might ring or the email might come in. But when it does, it often results in a collaborative customer as a partner who is a joy to work with to solve exciting new problems.


[1] M.K. Bergman, 2011. “Declining IT Innovation in the Enterprise,” AI3:::Adaptive Information blog, January 17, 2011. See http://www.mkbergman.com/940/declining-it-innovation-in-the-enterprise/.
[2] Paul Graham has been the most prominent observer of this scene; see P. Graham, 2008. “Why There Aren’t Any More Googles,” April 2008 (see http://www.paulgraham.com/googles.html) and subsequent articles.
[3] See esp. Peter F. Drucker, 1985. Innovation and Entrepreneurship: Practice and Principles, Harper & Row, New York, NY, 277 pp.
[4] Inbound marketing is a marketing strategy that focuses on getting found by customers. According to David Meerman Scott, inbound marketers “earn their way in” (via publishing helpful information on a blog, etc.), in contrast to outbound marketing, where they used to have to “buy, beg, or bug their way in” (via paid advertisements, issuing press releases in the hope they get picked up by the trade press, or paying commissioned sales people, respectively). Brian Halligan, cofounder and CEO of HubSpot, claims he first coined the term inbound marketing.
[5] Content marketing is an umbrella term encompassing all marketing formats that involve the creation or sharing of content for the purpose of engaging current and potential consumer bases. In contrast to traditional marketing methods that aim to increase sales or awareness through interruption techniques, content marketing subscribes to the notion that delivering high-quality, relevant and valuable information to prospects and customers drives profitable consumer action. See also Holger Schulze, 2011. B2B Content Marketing Trends slideshow, see http://www.slideshare.net/hschulze/b2b-content-marketing-report.
[6] Seth Godin coined the term permission marketing wherein marketers obtain permission before advancing to the next step in the purchasing process. It is mostly used by online marketers, notably email marketers and search marketers, as well as certain direct marketers who send a catalog in response to a request. Godin contrasts this approach to traditional “interruption marketing” where messages are sent without prior permission.
[7] See the three-part series, M.K. Bergman, 2010. “Listening to the Enterprise: Total Open Solutions,” “Part 1,” “Part 2” and “Part 3,” AI3:::Adaptive Information blog, May 12 – 31, 2010.
[8] Peter F. Drucker, 1974. Management: Tasks, Responsibilities, Practices. New York, NY: Harper & Row. pp. 864. ISBN 0-06-011092-9.
[9] The intro photo is of the world’s tallest flagpole (at 165 m), in Dushanbe, Tajikistan. The photo is courtesy of CentralAsiaOnline.com.

Posted: October 17, 2011

Today’s Post is a Testimony to the Value of Vacations

My partner, Fred Giasson, today posted the second part of his series on open source. Since returning from a well-earned vacation a few weeks back — after more than three years without a break — Fred has been writing and developing up a storm. As someone said to me last week, “Fred’s on fire!” I could not agree more.

I think Fred’s post speaks for itself as to why and how Structured Dynamics has made a conscious choice to embrace open source. The major reason he puts forth — to bootstrap the company without the need for external investment — is unusual in itself. But one thing he is silent about is why this is a compelling reason. I’ll comment on that.

Fred and I have both worked for others dependent on their capital for our ventures (a few more times in my case). Capital is great for expansion and operations, but it can be deadly when visions requiring patience are in play. Structured Dynamics is only now a bit more than halfway through its five-year plan. While semantic technologies are exciting, with a world of upside potential, they have also been incubated in academic labs with (as yet) a general lack of practical deployment. The promise is there, but often the delivery and maturation have been lacking. We are committed to play a visible role in correcting that.

The approach Fred outlines was not perhaps easily available to new startups a decade ago. But now, with open source and the Internet, costs of entry and ongoing development have dropped markedly. Yet, surprisingly, the idea of financing a startup via revenues is still not talked about sufficiently — let alone often used as an actual basis for building a company.

I’ve been fortunate to be able to partner with a young, world-class technologist whose maturity exceeds that of individuals many years his senior. He understands that, in order to achieve important visions, the stewardship of those ideas cannot be left to venture capitalists committed solely or mostly to gaming terms or near-term returns. We’re placing our bets on the paying customer and our own judgment.

So, it is great to see Fred continue his phenomenal development productivity since he returned from Hawaii. The benefit of his vacation is that we are also now getting his insights on his blog again.

Posted: April 4, 2011

Self-service Information Management for Knowledge Workers

Though I have alluded to it numerous times in my past writings [1], I think one of the most pervasive and important benefits from semantic technologies in the enterprise will come from the democratization of information. These benefits will arise mostly from a fundamental change in how we manage and consume information. A new “system” of semantic technologies is now largely available that can put the collection, assembly, organization, analysis and presentation of information directly in the hands of those who need it most — the consumers of information.

The idea of “democratizing information” has been around for a couple of decades, and has accelerated in incidence since the dominance of the Internet. Most commonly, the idea is associated with developments and notions in such areas as citizen journalism, crowdsourcing, the wisdom of the crowd, social bookmarking (or collaborative tagging), and the democratic (small “d”) access to publishing via new channels such as blogs, microblogs (e.g., Twitter) and wikis. To be sure, these kinds of democratic information will (and are) benefiting from the use and application of semantics.

But the trend I’m focusing on here is much different and quite new. It is the idea that enterprise knowledge workers can now take ownership and control of their knowledge management functions. In the process, prior bottlenecks due to IT can be relieved and massive new benefits can open up to the enterprise.

Decades-long Mismatches Between KM and IT

“Enterprise systems are doing it wrong. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness.”
– Tim Bray [2]

It is no secret that IT has not served the enterprise knowledge management function well for decades. Transaction systems and database systems geared to fast indexing and access to individual data have not proved well suited to information or knowledge management. KM includes such applications as business intelligence, data warehousing, data integration and federation, enterprise information integration and management, competitive intelligence, knowledge representation, and so forth. Information management is a somewhat broader category, adding such functions as document management, data management, enterprise content management, enterprise or controlled vocabularies, systems analysis, information standards and information assets management to the basic functions of KM. Since the purpose of this piece is not to get into the epistemological differences between information and knowledge, I use these terms more-or-less interchangeably herein.

Knowledge and information management is very big business. Given the breadth and differences in defining the KM and IM markets, let’s take as a proxy the business intelligence (BI) market, one of KM’s most important elements. Various estimates from IDC, Gartner and others place the current value of BI software sales somewhere in the range of $9 billion to $11 billion annually [3]. Further, BI ranked number five on the list of the top 10 technology priorities for chief information officers (CIOs) in 2011. And this pertains to the structured component of information alone.

Yet, at the same time, BI-related projects continue to have high failure rates, often cited as 65% or higher [4]. These failure rates are consistent with KM projects in general [5]. These failures are merely one expression of a constant litany of issues and concerns regarding the enterprise KM function:

Conventional KM Problem Areas and Comments
Inflexible Reports
  • reports are rarely “self-service”
  • new requests need to be placed in queue
  • 90% of stored report templates are never used
  • unlimited “slicing and dicing” not available
Inflexible Analysis
  • analysis is rarely “self-service”
  • new requests need to be placed in queue
  • many requests not accepted due to schema rigidities, cascading changes needed
  • analysis options are “pre-canned”, inflexible
Schema Bottlenecks
  • brittleness of relational data model and typical star schema
  • crossing across schema or databases difficult
  • load and re-indexing cycles can limit access, impose expensive back-end requirements
  • can not (often) accommodate new data, structures
ETL Bottlenecks
  • getting data into the system needs to be placed in queue
  • new external data requires extract, transform and load (ETL) routines to be written
  • schedule and update cycles can be a mismatch to access needs
Reliance on Intermediaries
  • all problems above work through intermediaries
  • disconnect between those with need and decision-makers and those who implement the solutions
  • inherent issues in communicating requirements to implementers
  • related time delays to implementation exacerbate the communication of requirements
Specialized Expertise Required
  • expertise and skill sets needed to implement solutions different from those of the knowledge consumer
  • inherent issues in communicating requirements to implementers
  • high costs for attracting necessary expertise
  • expertise is inherently an overhead function
Slow Response Time
  • all problems above lead to delays, slow response
  • timely communications, analysis, decisions suffer
  • delays mean knowledge management is not an active “contact sport”, becomes mired and unresponsive
  • some needs are just not requested because of these problems
Dependence on External Apps
  • new apps need to be identified, procured
  • design and configuration of apps requires external expertise, programming skills
  • multiple sourcing of apps leads to frequent incompatibilities, high costs for integration, poor interoperability
Unmet Needs
  • many KM needs are simply not requested
  • by the time responses are forthcoming, needs and imperatives have moved on
  • communications, analysis and decisions become hassles
  • the “contact sport” of active discovery and learning is unmet
High Opportunity Costs
  • many KM insights are simply not discovered
  • delays and frustration adds to costs, friction, inefficiencies
  • no way to know the opportunity costs of what is not learned — but, surely is high
High Failure Rates
  • the net impact of all of the problems above is to lead to high failure rates (~60% to 70%) and unacceptable costs
  • reliance on IT for KM has utterly and totally failed

The seeming contradiction between continued growth and expenditures for information management coupled with continued high failure rates and disappointments is really an expression of the centrality of information to the modern enterprise. The funding and growth of the IT function is itself an expression of this centrality and perceived importance. These have been abiding trends in our transition to information or knowledge economies.

Bray [2] places the fault for wasted initiatives within the culture of IT. I believe there is some truth to this — variably, of course, depending on the specific enterprise. But the real culprit, I believe, has been the past need to “intermediate” a layer of software and IT expertise between knowledge workers and their source information. A progression of tasks has been necessary — conducted over decades with advances and learning — to get paper information into electronic form, get those forms to be understood and operate in some common ways, and then to develop tools, architectures and frameworks to make sense of it. Yet, as more tasks with required specialized skills have been added to this layer, the actual gulf between worker and information has increased. For example, enterprises still require the overhead and layers of IT to write SQL to get information out and then to prepare and fix reports.

On average, IT now consumes about 4% of all enterprise expenditures and employs about 6% of enterprise workers [6]. IT has become a very thick intermediary layer, indeed! Yet, because of the advances and learning that has occurred in growing and nurturing this layer, we also now have the basis to begin to “disintermediate” the IT layer. Many, if not all, of the challenges noted in the table above can be improved by doing so.

Early Attempts at Self-service and Semantics

One current buzzword in business intelligence is “self service”. By this term is meant giving knowledge workers the tools and systems for creating reports or doing analysis on their own without needing to work through (or be frustrated by) the IT layer. Self-service software was first postulated in the 1990s as a way for information consumers and authors (typically subject-matter experts) to automate some of their knowledge management tasks. Today, it is most commonly applied to self-service reporting or self-service analytics within the BI realm.

As a general proposition, self-service BI has been more myth than reality [7]. Forrester surveys, for example, indicate that IT still develops most BI applications. Of survey respondents in 2009, 70% responded that IT develops the enterprise’s reports and dashboards [8]. However, that figure is not 100%, as it was just a decade earlier, and there has also been notable success by some open source providers such as BIRT, which address a wide range of reporting needs within a typical application, ranging from operational or enterprise reporting to multi-dimensional online analytical processing (OLAP).

James Kobielus [8] is particularly bullish on the application of Web 2.0 “mashup” applications to knowledge worker purposes. Under this approach, Web-based applications are used and accessed directly by knowledge workers for charting and mapping purposes using Ajax or Flash widgets, such as Google Maps. The conventional BI and KM vendors have begun to move more aggressively into this area. Some notable new entrants — such as Tableau, Factual or Good Data — are also showing the way to more direct access, more flexible reporting and analysis widgets, and cleaner service or platform designs.

These initiatives reside at the display or reporting level. There is another group, including James Kobielus, Neil Raden and Seth Earley, that has addressed how to get disparate information to talk together using ontologies. They refer to “semanticizing” such traditional practices as master data management (MDM), “ontologizing” taxonomies, or adding Web 2.0 mashups to business intelligence. While these thoughts are moving in the right direction, and will bring incremental benefits, they still fall far short of the potentials at hand.

Self-service Information Management

So far, in the KM realm, the application of semantics has tended to be limited to information extraction (tagging) of text documents and first attempts at using ontologies. The tagging component is essential to enable the 80% of information presently in textual documents to become first-class citizens within business intelligence or knowledge management. The ontology efforts to date appear to be more like thin veneers over traditional taxonomies. Rather than hierarchical structures, we now see graph-oriented ones, but still intended to fulfill the same tasks of enterprise metadata and vocabulary lookups.

The ontology efforts especially are just nibbling around the edges of what can be done with semantic technologies. Rather than looking upon ontologies as just another dictionary (though that role is true), if we re-orient our thinking to make ontologies central to the KM function, a wealth of new opportunities and benefits arises.

A bit more than a year ago, we formulated the Seven Pillars of the Open Semantic Enterprise, which included ontologies and related structures as some of the central components. In that article [9], we noted the particular applicability of semantic technologies to the information and knowledge management functions within enterprises. We asserted that embracing the open semantic enterprise provides the organization greater insights with lower risk, lower cost, faster deployment, and more agile responsiveness. Since that time we have been deploying such systems and documenting those benefits.

Integral to the seven pillars are those aspects that lead to the democratization of information for the knowledge worker, which in combination might be called “self-service information management”. Three of the seven pillars are essential building blocks to this capability, two pillars are further foundations to it, and the remaining two pillars are only tangentially important.

What the combination of these pieces means is a fundamental change in how knowledge work is done. Through this approach, we can largely disintermediate IT from the knowledge function, can bring knowledge management directly into the hands of those who need it in real time, and fundamentally alter how knowledge management apps are designed and deployed. The best thing is these benefits are an incremental evolution, and retain the use and value of existing information assets.

Building Block #1: Adaptive Ontologies

Rather than peripheral lookup structures or thin veneers, ontologies play the central role in the design of self-service information management. We use the plural on purpose here: what is deployed is actually a library of complementary and modular ontologies that play a variety of roles. Combined, we call these libraries with their representative functions adaptive ontologies.

This library contains the expected and conventional domain ontologies. These represent the actual knowledge space for the domain at hand, and may be comprised of multiple different ontologies representing different domain or knowledge spaces.  These standard semantic Web ontologies may range from the small and simple to the large and complex, and may perform the roles of defining relationships among concepts, integrating instance data, orienting to other knowledge and domains, or mapping to other schema.

From a best practices standpoint [10], we take special care in constructing these domain ontologies such that we provide labels and cues for user interfaces. Some of the user interface considerations that can be driven by adaptive ontologies include: attribute labels and tooltips; navigation and browsing structures and trees; menu structures; auto-completion of entered data; contextual dropdown list choices; spell checkers; online help systems; etc. We also include a variety of synonyms and aliases (the combination of which we call semsets) for referring to concepts and instances in multiple ways and for aiding information extraction and tagging functions. (In addition to organizing and helping to interoperate contributing information, these domain ontologies are also used for what is called ontology-based information extraction (OBIE) via our scones [11] system.)
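
As a rough illustration of how an interface can be driven this way, the Python sketch below (using the rdflib library) derives labels, tooltips and a semset for a concept from nothing but the ontology. The example namespace, the Invoice class, and the use of rdfs:label, rdfs:comment and skos:altLabel are illustrative assumptions, not our actual adaptive-ontology vocabulary:

```python
# A minimal sketch (not Structured Dynamics' actual code) of a user
# interface deriving its labels, tooltips and semsets from an ontology.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS, SKOS

TTL = """
@prefix ex:   <http://example.org/ontology#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:Invoice a rdfs:Class ;
    rdfs:label    "Invoice" ;
    rdfs:comment  "A billing record issued to a customer." ;
    skos:altLabel "Bill", "Statement of charges" .
"""

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.parse(data=TTL, format="turtle")

def ui_spec(concept):
    """Build user-interface strings for a concept from the ontology alone."""
    return {
        "label":   str(g.value(concept, RDFS.label)),    # field or menu label
        "tooltip": str(g.value(concept, RDFS.comment)),  # hover help text
        "semset":  sorted(str(s) for s in g.objects(concept, SKOS.altLabel)),
    }

print(ui_spec(EX.Invoice))
# -> {'label': 'Invoice',
#     'tooltip': 'A billing record issued to a customer.',
#     'semset': ['Bill', 'Statement of charges']}
```

Because these strings live in the ontology rather than in application code, a knowledge worker can change a label, tooltip or semset without touching any software; the same semset entries can also feed the tagging functions noted above.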

In addition, the library of adaptive ontologies includes some administrative ontologies that guide how instance data can be imported and inter-related (via the Instance Record Object Notation, or irON); what information types drive which widgets (via the Semantic Component Ontology, or SCO); data mapping vocabularies (the UMBEL Vocabulary); how to characterize datasets; and other potential specialty functionality.

A forthcoming article will describe the composition and modularity typically found in a library of these adaptive ontologies.

In combination, these adaptive ontologies are, in effect, the “brains” of the self-service system. The best aspect of these ontologies is that they can be understood, created and maintained by knowledge workers. They constitute the only specification (other than theming, if desired) necessary to create self-service knowledge management environments.

Building Block #2: Ontology-driven Apps

The piece of the puzzle that implements the instruction sets within these adaptive ontologies is the ontology-driven app, or ODapp. A recent article describes these structures in some detail [12].

ODapps are modular, generic software applications designed to operate in accordance with the specifications contained in the adaptive ontologies; each fulfills a specific generic task. For example, current ontology-driven apps include imports and exports in various formats, dataset creation and management, data record creation and management, reporting, browsing, searching, data visualization and manipulation (through libraries of what we call semantic components), user access rights and permissions, and the like. These applications provide their specific functionality in response to the specifications in the ontologies fed to them.

ODapps are designed more like widgets or API-based frameworks than like the dedicated software of the past, though the dedicated functionality (e.g., graphing, reporting, etc.) is obviously quite similar. The major change in these ontology-driven apps is a relatively common abstraction layer that responds to the structure and conventions of the guiding ontologies. The major advantage is that a single generic application can supply shared functionality based on any properly constructed adaptive ontology.

Generic functionality included in these ODapps covers things like filtering, setting value ranges, choosing the specific display view, and invoking or not various display templates (akin to the infoboxes on Wikipedia). Based on the data and the ontologies submitted to it, the ODapp signals to the user or consumer what displays, views, filters or slices-and-dices are available. Fed different data and different ontologies, the ODapp signals the user differently.
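As an illustration only (not the actual OSF code), here is a minimal Python sketch of this signaling pattern: one generic app consults an ontology-supplied mapping of information types to widgets and announces which views the submitted data supports. The namespace, type names and widget mapping are hypothetical stand-ins for what the Semantic Component Ontology actually specifies.

```python
# Hypothetical sketch of an ODapp signaling available views based on
# the types found in the data it is fed.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/ontology/")  # hypothetical namespace

# Stand-in for SCO-style specifications: which information types
# drive which display widgets.
WIDGETS_BY_TYPE = {
    EX.GeoLocatedThing: ["map", "table"],
    EX.TimeSeries: ["line-chart", "table"],
    EX.Document: ["text-viewer", "tag-cloud"],
}

def available_widgets(data: Graph) -> set:
    """Return the set of display widgets the submitted data supports."""
    widgets = set()
    for _, _, rdf_type in data.triples((None, RDF.type, None)):
        widgets.update(WIDGETS_BY_TYPE.get(rdf_type, []))
    return widgets or {"table"}  # generic fallback view

# Fed different data (and a different mapping), the same app would
# signal the user differently, with no new programming required.
```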

Because of their generic design, driven by the ontologies, only a relatively small number of ODapps needs to be created. Once created with appropriate generic functionality, application development is essentially over. It is through the additions and changes to the adaptive ontologies — done by knowledge workers themselves — that new capability and structure gets exposed through these ontology-driven apps. This innovation shifts the locus from software and programming to data and knowledge structures.

This democratization of IT means that everything in the knowledge management realm can become self service. Users and consumers can create their own analyses; develop their own reports; and package and disseminate what they and their colleagues need, when they need it. Through ontology-driven apps and adaptive ontologies, we turn prior software engineering practice on its head.

Building Block #3: Open World Assumption

Integral to this design is the embrace of the open world assumption [13]. Though not a specific artifact, as are adaptive ontologies or ODapps, the open-world approach is the logical underpinning that allows consumers or knowledge workers to add new information to the system as it is discovered or scoped. This nuance may sound esoteric, but traditional KM systems have a very different underpinning that leads to some nasty implications.

Because the predominant share of KM systems is based on relational database systems, they embody a closed-world design. This works well for transaction systems or environments where the information domain is known and bounded, but it does not suit knowledge and changing information. Moreover, the schema that govern closed-world designs are brittle and hard to change and manage. It is this fact that has put KM squarely in the bailiwick of IT and has often led to delays and frustrations. Re-architecting or adding new schema views to an existing closed-world system can be fiendishly difficult.

This difficulty is a major reason why IT resists casual or constant changes to underlying data schema. Unfortunately, such brittle schema are difficult to extend and therefore generally unresponsive to changing and growing knowledge. As an environment for knowledge management, the relational data system and the closed-world approach are lousy foundations.
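The contrast can be shown in a few lines. Here is a minimal sketch, with hypothetical names, of how an open-world RDF store absorbs a newly discovered fact as just another statement, where a closed-world relational design would need an ALTER TABLE migration and coordinated application changes. Note too that under the open world assumption an absent statement means "unknown", not "false".

```python
# Hypothetical sketch: new knowledge is just another triple, and a
# missing triple is "unknown" rather than "false".
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/data/")  # hypothetical namespace

g = Graph()
g.add((EX.plant42, EX.commonName, Literal("kudzu")))

# A fact discovered later: no schema migration needed, simply assert it.
g.add((EX.plant42, EX.firstObserved, Literal(1876)))

# Open world: the absence of an assertion does not imply falsity.
print((EX.plant42, EX.isInvasive, None) in g)  # False -> simply not yet known
```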

Other Building Blocks

As the self-service information management diagram above shows, RDF and Web services are two further important foundations. RDF (Resource Description Framework) is the canonical data model in which all input information is represented, which means the ODapp tools and the adaptive ontologies can work off a single model of knowledge representation. The Web service and architecture component is also helpful in that it brings Web 2.0 technologies to bear and supports distributed sources and users for the KM system. This provides scalability and distributed applicability, including on smartphones.
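To illustrate the canonical-model point, here is a minimal rdflib sketch: inputs arriving in different serializations all merge into one RDF graph that the ODapps and adaptive ontologies can treat uniformly. The file names are hypothetical.

```python
# Hypothetical sketch: heterogeneous inputs unified in one RDF graph.
from rdflib import Graph

g = Graph()
g.parse("legacy_export.rdf", format="xml")              # RDF/XML from an older system
g.parse("spreadsheet_conversion.ttl", format="turtle")  # converted tabular data
g.parse("web_feed.nt", format="nt")                     # harvested Web data

print(f"{len(g)} statements in the single canonical graph")
```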

The other two pillars of the open semantic enterprise — the layered approach and linked data — are also helpful, but not necessarily integral to the KM and self-service perspectives presented herein.

Benefits from Self-service Information Management

The benefits and flexibilities from self-service information management extend from top to bottom, from creating data and content to publishing and deploying it. Here is a listing of what self-service makes possible, compared with the current conventional approach dependent on IT:

Information Activity: Conventional Approach (IT) vs. Self-service Information Management

Creating
  Conventional approach (IT):
  • structured data only
  • not generally available directly to the knowledge worker
  Self-service information management:
  • can create own datasets
  • can extract and transform own datasets
  • can tag and integrate non-structured (text + document) information
  • able to handle unstructured, semi-structured and structured data alike

Annotating
  Conventional approach (IT):
  • not generally provided
  Self-service information management:
  • completely open, flexible
  • can define own annotation fields, annotation schema (approaches)

Analyzing
  Conventional approach (IT):
  • pre-canned functions
  • structure pre-defined
  • slow performance
  Self-service information management:
  • all structural dimensions can be filtered
  • all values and ranges thereof can be filtered
  • multiple analysis display widgets selectable depending on the type of input data
  • real-time configuration
  • fast (nearly instantaneous) performance
  • provision of (nearly) real-time analytics
  • additional capabilities in inferencing and reasoning
  • modeling and understanding of complex graph and relationship structures (e.g., social networks)

Reporting
  Conventional approach (IT):
  • pre-canned templates or report writers
  • structure pre-defined
  Self-service information management:
  • user-definable templates
  • templates automatically assignable by type of thing being reported
  • embeddable in Web pages and alternate presentation media
  • styling and theming flexibility

Visualizing
  Conventional approach (IT):
  • very little done through IT
  Self-service information management:
  • variety of visualization widgets available (e.g., maps, charts, graphs, networks)
  • large-scale systems views possible
  • visual interactions (a la Web 2.0) possible

Collaborating
  Conventional approach (IT):
  • very little done through IT
  • collaboration, if done, is via separate social media
  Self-service information management:
  • completely open
  • variable access and permission rights by user or group
  • built in to the entire infrastructure

Validating
  Conventional approach (IT):
  • not directly done by the knowledge worker
  • user input, if done, via problem tickets with delays
  Self-service information management:
  • can be integrated into the business process or workflow
  • “soft” validations and ratings/rankings can also be included
  • consistency checking
  • satisfiability checking

Publishing
  Conventional approach (IT):
  • limited to pre-canned reports
  Self-service information management:
  • any report or analysis is available for publishing
  • documents, images and widget displays are available for publishing
  • multiple export formats mean information, slices thereof, or analysis results thereof can be embedded and integrated into multiple presentation media

Re-purposing
  Conventional approach (IT):
  • none directly by the knowledge worker
  Self-service information management:
  • any report or analysis is available for re-purposing
  • documents, images and widget displays are available for re-purposing
  • canonical internal representations (RDF and XHTML) mean available information can be deployed for a variety of purposes (Web pages, reports, documents, slide shows, etc.)

New Functionality
  Conventional approach (IT):
  • none known, if not already listed
  Self-service information management:
  • semantic querying
  • data visualization
  • text mining and tagging
  • categorization
  • graph mining
  • logic checking

Developing Apps
  Conventional approach (IT):
  • none via the official systems by the knowledge worker
  • if done, via guerrilla apps
  Self-service information management:
  • only generic apps needed
  • many fewer and more flexible apps push this issue into the background

Dashboarding
  Conventional approach (IT):
  • not available in most systems
  • if available, a limited number of pre-canned options
  Self-service information management:
  • any report or analysis is available for dashboarding
  • any widget is available for dashboarding
  • complete structure (typing, values, sources) available for filtering and “slicing and dicing”
  • all dashboard objects on a given canvas are linked and interoperate (selections in one widget are reflected in other widgets)
  • dashboards may be made persistent for re-use, springboarding new dashboards (as templates)

The fact that any source (internal or external) or format (unstructured, semi-structured or structured) can be brought together with semantic technologies is a qualitative boost over existing KM approaches. Further, all information is exposed in simple text formats, which means it can be readily manipulated and managed with easy-to-understand tools and applications. Reliance on open standards and languages by semantic technologies also leads to greater use and availability of open source systems.
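As a small illustration of the simple-text-formats point, here is a hypothetical Python sketch (again assuming rdflib 6+, where serialize() returns a string): the same knowledge base dumped as N-Triples is just lines of text, so ordinary string tools, or grep and sed for that matter, can slice it.

```python
# Hypothetical sketch: an RDF graph serialized as N-Triples is plain
# text, amenable to everyday filtering tools.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/data/")  # hypothetical namespace

g = Graph()
g.add((EX.plant42, EX.commonName, Literal("kudzu")))
g.add((EX.plant7, EX.commonName, Literal("milkweed")))

ntriples = g.serialize(format="nt")
# Plain-text filtering, no database or special tooling required:
print("\n".join(line for line in ntriples.splitlines() if "kudzu" in line))
```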

In short, self-service information management approaches should be cheaper, faster, more responsive and more capable than current approaches.

Great Progress, with Ontology Management the Next Challenge

Given these perspectives, hearing someone tout data-driven applications or advocate ontologies merely for metadata matching sounds positively Neanderthal. The prospects we have with semantic technologies, ontology-driven apps, and self-service information management systems mean so much more. The prospect at hand is to remake the entire knowledge management function, bringing everything from creating knowledge products to distributing them directly into the hands of the user. This is truly the democratization of information!

The absolutely fantastic news is none of this is theoretical or in the future. All pieces are presently proven, working and in hand. This is a practical vision, ready today.

Granted, like any new innovation, especially one that is infrastructural and systems-oriented, there are some weak or less-developed parts. These current gaps and needs include:

  • Though tools exist, the state of tools for creating, editing, managing, updating, deleting, mapping and validating ontologies could be greatly improved [14]. Since adaptive ontologies are the central drivers for ODapps, these tasks need to be simplified and geared more to the knowledge worker, not the professional ontologist (see the diagram to the right for some of the needed functions). Some of these developments are underway, with more desired
  • A relatively complete starting set of about 20 ODapp widgets is presently available. However, more are needed, and for different deployment environments. BI analysis remains one weak area, as is an Ajax-based library
  • The number of infobox templates is small, and better (WYSIWYG or graphical) creation and management utilities would be most useful, and
  • User permission and authorization protocols exist, but are IP-based at present and could be beneficially expanded for different environments and use cases.

Yet, in the grand scheme of things, these gaps are relatively insignificant. The path and general architecture and design for moving forward are now clear.

Self-service information management via appropriately designed semantic technologies is now a reality. It promises to fulfill a vision of information access and control that has been frustrated for decades. We think these are exciting developments for the enterprise — and for the individual knowledge hound. We welcome your inquiries and invite you to join our open OSF group to contribute your ideas.


[1] Including going all the way back to my description of purpose for this blog back in 2005; see the AI3 Blogasbörd where I state, “One of my central arguments [in this blog] is that an inexorable trend through history has been the ‘democratization’ of information.”
[2] Tim Bray, 2010. “Doing it Wrong,” on his blog, January 5, 2010. The extensive comments are also worth a read.
[3] According to Marketwire quoting IDC, “Preliminary market sizing suggests that the business intelligence tools software market grew 2.6% in 2009 to reach $8.1 billion. Given the current market assumptions regarding the global economy and demand drivers in the BI tools software market, IDC forecasts this market to grow at a compound annual growth rate of 6.9% through 2014 to $11.3 billion.” CBR, citing Gartner, indicates the worldwide BI software market will grow 9.7 percent, reaching US$10.8 billion in 2011. Gartner also said BI platforms would continue to be one of the fastest growing software markets. For a very good background on BI, see Rochelle Shaw, 2011. “What is Business Intelligence,” posted in Database Trends and Applications, January 7, 2011.
[4] According to this article by Antone Gonsalves, Poor Use Of Data Integration Tools Can Waste $500,000 Annually: Gartner (April 27, 2009), which reports on a recent Gartner study, large Global 2000 companies using several data integration tools with overlapping features can reduce costs by more than $500,000 annually by eliminating redundant software and leveraging a shared services model. In a further report by Roman Stanek, Business Intelligence Projects are Famous for Low Success Rates, High Costs and Time Overruns (April 25, 2009), Gartner addresses a dirty little secret in the world of data integration: the data integration technology in place is based on generations of data integration technology being layered in the enterprise over the years. Thus, technology that was purchased to solve data integration problems, and to reduce costs, is actually making the data integration problem more complex and no longer cost efficient.
[5] For example, see Roger Sessions, 2009. Cost of IT Failure, September 28, 2009. This analysis suggests failure rates of 65% with a total estimated worldwide cost of $6.2 trillion in 2009. Commenters have raised questions as to what constitutes failure and have questioned some of the analysis assumptions; nonetheless, even with over-estimates, the scale of the numbers is alarming. See also Jorge Dominguez, 2009. The CHAOS Report 2009 on IT Project Failure, June 16, 2009, which indicates combined failure and challenge rates for IT projects have ranged from 65% to 84% over the period 1994 to 2009; see http://www.education.state.pa.us/portal/server.pt/gateway/PTARGS_0_2_690719_0_0_18/CHAOS%20Summary%202009.pdf. Also see Dan Galorath, 2008. Software Project Failure Costs Billions; Better Estimation & Planning Can Help, June 7, 2008. In this report, Galorath compares and combines many of the available IT failure studies and summarizes that 3 of 5 IT projects do not do what they were supposed to for the expected costs, with 49% showing budget overruns, 47% showing higher than expected maintenance costs, and 41% failing to deliver expected business value. The anecdotal failure rate claimed for years for IT projects has been 80%, with business intelligence and data warehousing particularly failure-prone areas. In 2001, a study by Mark N. Frolick and Keith Lindsey, Critical Factors for Data Warehouse Failures, for the Data Warehousing Institute noted that conventional wisdom puts the failure rate of data warehousing projects at 70 to 80 percent, with a then-recent study in the insurance industry finding a 90-percent failure rate. This report is useful for combining many historical studies.
[7] Wayne W. Eckerson, 2007. “The Myth of Self-Service Business Intelligence,” in TDWI Online, October 18, 2007; see http://tdwi.org/articles/2007/10/18/the-myth-of-selfservice-bi.aspx. “Business Intelligence projects are famous for low success rates, high costs and time overruns. The economics of BI are visibly broken, and have been for years. Yet BI remains the #1 technology priority according to Gartner.”
[8] See James G. Kobielus, 2009. Mighty Mashups: Do-It-Yourself Business Intelligence For The New Economy, July 23, 2009; see http://www.corda.com/pdfs/mighty-mashups-article.pdf. In this report, Kobielus, the lead author of a Forrester study (August 2008, Global BI And Data Management Online Survey) that surveyed 82 IT decision-makers, noted that just over 70% responded that IT develops their reports and dashboards. About 57% responded that power users did such development. Only 18.3% reported that BI development is done by end users with limited BI skills.
[9] M.K. Bergman, 2010. “Seven Pillars of the Open Semantic Enterprise,” in AI3:::Adaptive Information blog, January 12, 2010; see http://www.mkbergman.com/859/seven-pillars-of-the-open-semantic-enterprise/.
[10] There are a series of ongoing ontology best practices articles; see http://www.mkbergman.com/category/ontology-best-practices/.
[11] The scones (Subject Concept Or Named EntitieS) tagger provides information extraction of domain-specific subject concepts and entities from unstructured text. It also provides disambiguation of this information based on the context of the source information. See further http://techwiki.openstructs.org/index.php/Category:Scones.
[12] M.K. Bergman, 2011. “Ontology-Driven Apps Using Generic Applications,” in AI3:::Adaptive Information blog, March 7, 2011; see http://www.mkbergman.com/948/ontology-driven-apps-using-generic-applications/.
[13] M.K. Bergman, 2009. “The Open World Assumption: Elephant in the Room,” in AI3:::Adaptive Information blog, December 21, 2009; see http://www.mkbergman.com/852/the-open-world-assumption-elephant-in-the-room/. The open world assumption (OWA) generally asserts that the lack of a given assertion or fact being available does not imply whether that possible assertion is true or false: it simply is not known. In other words, lack of knowledge does not imply falsity. Another way to say it is that everything is permitted until it is prohibited. OWA lends itself to incremental and incomplete approaches to various modeling problems.
[14] M.K. Bergman, 2010. “A New Landscape in Ontology Development Tools,” in AI3:::Adaptive Information blog, Sept. 7, 2010; see http://www.mkbergman.com/909/a-new-landscape-in-ontology-development-tools/.

Posted:March 21, 2011

An Overview of Freely Available, Comprehensive Icon Sets

When designing a new project, it is often important to find a consistent set of icons for user interface or mapping purposes. Full libraries or icon sets matter because mixing and matching icons from multiple sources often conveys a bit of chaos or unprofessionalism.

Structured Dynamics monitors freely available icons for these purposes and provides listings to its clients so that they may tailor and choose their own look-and-feel. The material below is the reference listing of about 20 comprehensive sets of open source icons that may be used for the open semantic framework (OSF) or sWebMap interfaces. Links to other listings are also provided. These references are kept up to date on the OSF TechWiki.

General Icons

Here are some consistent families of general user interface icons. While there are thousands of free icons available from many venues (as any search engine will attest), fewer sets have sufficient diversity and scope to encompass most user interface needs. Since it is noticeably jarring to mix icon styles in the same interface (or, at least, to do so indiscriminately), it is important to have a consistent design image.

Here are the candidate choices we have found. Some are provided in multiple size formats or in vector (generally SVG) formats:

  • The Silk icons from famfamfam comprise a set of over 700 16-by-16 pixel icons in PNG format (144 of which are also available as GIF mini-icons, see below). This is the standard open source set used as the basis for Pastel (see below), which is used in the various conStruct tools. There are also other free icons available from this site
[image: sampling of the Silk icon set]
  • Tango is an icon library that contains a basic set of icons for the most common usage. They come in 16×16 and 22×22 sizes, and some are scalable (vector). There are also a variety of extensions for specific purposes
[image: sampling of the Tango icon set]
  • Pastel SVG is an icon set based on the Silk icons noted above from FamFamFam.com. Pastel uses the same style, but comes in the sizes of 16, 24, 32, 48, 64, 72, 96, 128 or 256 pixels square; a sampling is shown below
[image: sampling of the Pastel SVG icon set]

Pastel is the standard icon set chosen for conStruct tools.

  • The Fugue icons by Yusuke Kamiyamane form the largest set available, containing 3000 individual icons in 16×16 PNG format. Here is a sampling:
[image: sampling of the Fugue icon set]

Alternatively, there is a smaller set of 400 icons called Diagona available from the same designer.

  • Nuvola is a set of 600 icons in either PNG or SVG format from David Vignoni (Icon King). The PNG versions come in standard sizes of 16, 22, 32, 48, 64 or 128 pixels square. Here is a sampling:
[image: sampling of the Nuvola icon set]

Vignoni also has an alternative set of icons with a similar feel called Oxygen.

  • The Crystal set of more than 1300 icons is organized into six different sizes, and is divided into the categories of actions, apps, file systems, devices and MIME types. Here is a sampling:
[image: sampling of the Crystal icon set]

Other Sources

According to the Open Icon Library, which has a nice gallery (but which also mixes sources), there are some other key sources of open source icons beyond those listed above.

See also the icon sets used within Wikipedia itself.

Lastly, and perhaps most usefully, peruse the 750+ icon sets on Icon Finder.

Map Icons

With the rise of Web 2.0 and locational services, particularly the open API and “thumbtack” aspect of Google Maps, a new category of map markers for Web mapping has emerged. This category is still new enough that an accepted terminology has not yet developed. Among other terms, here are some of the ways these locational markers on maps have been described:

  • Places of interest (POIs)
  • Points of interest (POIs)
  • Pins
  • Pushpins
  • Placemarks
  • Thumbtacks
  • Markers
  • Location markers
  • Map pointers.

Here are some of the consolidated sources of open source markers now available:

  • Here is a sampling of some 120 markers available within the Google MyMaps API (see further this link with shadows and this full listing). All have matching shadows, useful for conveying a 3D feeling:
[image: sampling of Google MyMaps API markers]

There are also about 250 standard icons provided within the Google Earth set. You can see those listed here. Also, to see the available icon libraries in Google Maps (plus some others), see this link.

  • Map Icons Collection is a set of more than 1000 free icons to use as placemarks for POI locations on maps (originally designed for the Google Maps API). Most of these icon markers are square in aspect with a pointer, and are organized by color-coordinated categories such as numbers, cinemas, hotels, banks, etc. Here is a sampling:
[image: sampling of the Map Icons Collection]
  • The Maki icon set consists of more than 100 black and white 15×15 map markers
  • This listing provides three different colors in the Google Map “teardrop” style for all letters and 99 numbers
  • Geosilk is an extension of the standard Silk icon set noted above. It is more applicable to UI icons relating to map functions than to map markers per se
  • Green Map contains a set of about 170 monochrome (can be colored differently) POI markers, with an orientation to nature or ecological categories. There are also local extensions
  • Map Pins provides 22 alternative map pins and flags:
[image: sampling of the Map Pins markers and flags]
  • 50 monochrome POI and map marker symbols from the US National Park Service (NPS):
[image: sampling of the NPS map marker symbols]
  • There is a similar (and complementary in design) set of 50 monochrome pedestrian and transportation symbols from AIGA in cooperation with the US Department of Transportation

Dynamic Markers

Some markers can also be created dynamically with the Google Maps API.

Other Listings

Various other listings exist as well, many with icons but perhaps not organized into the same uniform sets.
