Posted: May 12, 2010

“We’re Successful When We’re Not Needed”

Structured Dynamics has been engaged in open source software development for some time. Inevitably, in each of our engagements we are asked about the viability of open source software, its longevity, and the business model behind it. Of course, I appreciate that our customers seem to be asking how we are doing and how successful we are. But I suspect there is more behind this questioning than simple good will for our prospects.

Besides the general facts that most of us know — of hundreds of thousands of open source projects, only a minuscule number get traction — I think there are broader undercurrents in these questions. Open source alone, even with good code documentation, is not enough to ensure long-term success.

When open source broke on the scene a decade or so ago [1], the first enterprise concerns were based around code quality and possible “enterprise-level” risks: security, scalability, and the fact that much open source was itself LAMP-based. As comfort grew about major open source foundations — Linux, MySQL, Apache, and the scripting languages of PHP, Perl and Python (that is, the very building blocks of the LAMP stack) — concerns shifted to licensing and the possible “viral” effects of some licenses on existing proprietary systems.

Today, of course, we see hugely successful open source projects in all conceivable venues. Granted, most open source projects get very little traction. Only a few standouts from the hundreds of thousands of open source projects on big venues like SourceForge and Google Code or their smaller brethren are used or known. But, still, in virtually every domain or application area, there are 2-3 standouts that get the lion’s share of attention, downloads and use.

Conventional Open Source

I think it fair to argue that well-documented open source code generally out-competes poorly documented code. In most circumstances, well-documented open source is a contributor to the virtuous circle of community input and effort. Indeed, it is a truism that most open source projects have very few code committers. If there is a big community, it is largely devoted to documentation and assistance to newbies on various forums.

We see some successful open source projects, many paradoxically backed by venture capital, that employ the “package and document” strategy. Here, existing open source pieces are cobbled together as more easily installed comprehensive applications with closer to professional grade documentation and support. Examples like Alfresco or Pentaho come to mind. A related strategy is the “keystone” one where platform players such as Drupal, WordPress, Joomla or the like offer plug-in architectures and established user bases to attract legions of third-party developers [2].

OK, So What Has This to Do with the Enterprise?

I think if we stand back and look at this trajectory we can see where it is pointing. And, where it is pointing also helps define what the success factors for open source may be moving forward.

Two decades ago most large software vendors made on average 75% to 80% of their revenues from software licenses and maintenance fees; quite the opposite is true today [3]. The successful vendors have moved into consulting and services. One need only look to three of the largest providers of enterprise software of the past two decades — IBM, Oracle and HP — to see evidence of this trend.

How is it that proprietary software with its 15% to 20% or more annual maintenance fees has been so smoothly and profitably replaced with services?

These suppliers are experienced hands in the enterprise and know what any seasoned IT manager knows: the total lifecycle costs of software and IT reside in maintenance, training, uptime and adaptation. Once installed and deployed, these systems assume a life of their own, with actual use lifetimes that can approach two to three decades.

This reality is, in part, behind my standard exhortation about respecting and leveraging existing IT assets, and why Structured Dynamics has such a commitment to semantic technology deployment in the enterprise that is layered onto existing systems. But, this very same truism can also bring insight into the acceptable (or not) factors facing open source.

Great code — even if well documented — is not by itself the better mousetrap that has the world beating a path to your door. Listen to the enterprise: lifecycle costs and longevity of use are the facts that matter.

But what I am saying here is not really all that earthshaking. These truths are available to anyone with some experience. What is possibly galling to enterprises are two smug positions taken by new market entrants. The first, which is really naïve, is the claimed moral superiority of open source or open data or any such silly artificial distinctions. That might work in the halls of academia, but it carries no weight with the enterprise. The second, more cynically based, is to wrap one’s business in the patina of open source while engaging in the “wink-wink” knowledge that only the developer of that open source is in a position to offer longer term support.

Enterprises are not stupid and understand this. So, what IT manager or CIO is going to bet their future software infrastructure on a start-up with immature code, generally poor code documentation or APIs, and definitely no clear clue about their business?

The Slow Squeeze

Yet, that being said, neither enterprises nor vendors nor the software innovators that want to work with them can escape the inexorable force of open source. While it has many guises, from cloud computing to social software to software as a service to a hundred other terms, the slow squeeze is happening. Big vendors know this; that is why there has been the rush to services. Start-up vendors see this; that is why most have gone to consumer apps and ad-based revenue models. And enterprises know this, which is why most are doing nothing other than treading water, because the way out of the squeeze is not apparent.

The purpose of this three-part series is to look at these issues from many angles. What might the absolute pervasiveness of open source mean to traditional IT functions? How can strategic and meaningful change be effected via these new IT realities in the enterprise? And, how can software developers and vendors desirous of engaging in large-scale initiatives with enterprises find meaningful business models?

Lead-in to the Series: a Total Open Solution

And, after we answer those questions, we will rest for a day.

But, no, seriously, these are serious questions.

There is no doubt open source is here to stay, yet its maturity demands new thinking and perspectives. Just as enterprises have known that software is only the beginning of decades-long IT commitments and (sometimes) headaches, the purveyors and users of open source should recognize the acceptance factors facing broad enterprise adoption and reliance.

Open source offers the wonderful prospect of avoiding vendor “lock-in”. But if the full spectrum of software use and adoption is not likewise covered, all we have done is unlock the initial selection and installation of the software. Where do we turn for modifications? for updates? for integration with other packages? for ongoing training and maintenance? And, whatever we do, have we done so by making bets on some ephemeral start-up? (We know how IBM will answer that question.)

The first generation of open source has been a substitute for upfront proprietary licenses. After that, support has been a roll of the dice. Sure, broadly accepted open source software provides some solace because of more players and more attention, but how does this square with the prospect of decades of need?

The perverse reality in these questions is that almost all early open source vendors are being gobbled up or co-opted by the existing big vendors. The reward of successful market entry is often a great sucking sound that perpetuates existing concentrations of market presence. In the end, how are enterprises benefiting?

Now, on the face of it, I think it neither positive nor negative whether an early open source firm with some initial traction is gobbled up by a big player or not. After all, small fish tend to be eaten by big fish.

But two real questions arise in my mind: One, how does this gobbling fix the current dysfunction of enterprise IT? And, two, what is a poor new open source vendor to do?

The answer to these questions resides in the concerns and anxieties that caused them to be raised in the first place. Enterprises don’t like “lock-in” but like even less seeing stranded investments. For open source to be successful it needs to adopt a strategy that actively extends its traditional basis in open code. It needs to embrace complete documentation, provision of the methods and systems necessary for independent maintenance, and total lifecycle commitments. In short, open source needs to transition from code to systems.

We call this approach the total open solution. It involves — in addition to the software, of course — recipes, methods, and complete documentation useful for full-life deployments. So, vendors, do you want to be an enterprise player with open source? Then, embrace the full spectrum of realities that face the enterprise.

“We’re Successful When We’re Not Needed”

The actual mantra that we use to express this challenge is, “We’re Successful When We’re Not Needed”. This simple mental image helps define gaps and tells us what we need to do moving forward.

The basic premise is that any taint of lock-in, or any inattentiveness to the enterprise customer, is a potential point of failure. If we can see and avoid those points and put in place the systems and processes to overcome them, then we have increased comfort in our open source offerings.

Like good open source software, this is ultimately a self-interested position to take. If we can increase the marketplace’s comfort that it can adopt and sustain our efforts without us, our offerings will be adopted to a greater degree. And, once they are adopted, and when extensions or new capabilities are needed, then as the initial developers with a complete grasp of the entire lifecycle’s challenges we become a natural candidate for hire. Granted, that hiring is by no means guaranteed. In fact, we benefit when there are many able players available.

In the remaining two parts of this series we will discuss all of the components that make up a total open solution and present a collaboration platform for delivering the methods and documentation portions. We’re pretty sure we don’t yet have it fully right. But, we’re also pretty sure we don’t have it wrong.


[1] Of course, stalwart open source applications such as Linux and MySQL and even the open source movement extend back about twenty years. But, it was only about a decade ago that real traction and visibility in the enterprise began.
[2] BTW, with regard to the latter, I think it notable that no semantic technology player has played or attracted third parties to any notable extent. That is possibly a topic for a later blog post!
[3] I first wrote about this five years ago (and updated it a year later), with analysis of many public vendors. See M.K. Bergman, Redux: Enterprise Software Licensing on Life Support, June 2, 2006.
Posted: March 23, 2010

Open Source, Open World, Web, and Semantics to Transform the Enterprise

Ten years ago the message was the end of obscene rents from proprietary enterprise software licenses. Five years ago the message was the arrival and fast maturing of open source. Today, the message is the open world and semantics.

These forces are conspiring to change much within enterprise IT. And, this change will undoubtedly be for the good — for the enterprise. But these forces are not necessarily good news within conventional IT departments and definitely not for traditional vendors unwilling to transform their business models.

I have been beating the tom-tom on this topic for a few months, specifically in regard to the semantic enterprise. But I have by no means been alone or unique. The last two weeks have seen an interesting confluence of reports and commentaries by others that enrich the story of the changing information technology landscape. I’ll be drawing on the observations of Thomas Wailgum (CIO magazine) [1], John Blossom [2] and Andy Mulholland, CTO of Capgemini [3].

The New Normal

“After nearly five decades of gate-keeping prominence, corporate IT is in trouble and at a crossroads like never before in its mercurial and storied history as a corporate function. You may be too big to fail, but you’re not too big to succeed. What will you do?”

– Thomas Wailgum [1]

Wailgum describes the “New Normal” and how it might kill IT [1]. He picks up on the viewpoint that sees the recent meltdowns in the financial sector as a seismic force for change in information technology. While he acknowledges many past challenges to IT, from PCs and servers to Y2K and software becoming a commodity, he puts the global recession’s impact on business — the “New Normal” — into an entirely different category.

His basic thesis is that these financial shocks are forcing companies to scrutinize IT as never before, in particular “unfavorable licensing agreements and much-too-much shelfware; ill-conceived purchasing and integration strategies; and questionable software married to entrenched business processes.”

Yet, he also argues that IT and its systems are too ingrained into the core business processes of the enterprise to be allowed to fail. IT systems are now thoroughly intertwined with:

  • ERP systems – the financial, administrative and procurement backbone of every organization
  • Business development and BI
  • Operations and forecasting
  • Customer service and call centers
  • Networking and security
  • Sales and marketing via CRM and lead generation
  • Supply chain applications in manufacturing and shipping.

But top management is disappointed and disaffected. IT systems gobble up too many limited resources. They are inflexible. They are old and require still more limited resources to modernize. They are complex. They create and impose delays. And all of these negatives lead to huge losses in opportunity costs. Wailgum notes Gartner, for instance, as saying that by 2012 perhaps 20 percent of businesses will own no IT assets at all in their desire to outsource this headache.

“Enterprise systems are doing it wrong. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness.”

– Tim Bray, as quoted in [1]

I think this devastating diagnosis is largely correct, though perhaps incomplete in that no mention is made of the flipside: what IT has failed to deliver. I think this flipside is equally damning.

Despite decades of trying, IT still has not broken down the data stovepipes in the enterprise. Rather, they have proliferated like rabbits. And, IT has failed to unlock the data in the 80% of enterprise information contained within documents (unstructured data).

Unfortunately, after largely zeroing in on and diagnosing the situation, Wailgum’s remedy comes off sounding like a tired 12-step program. He argues for new mindsets, better communications, getting in touch with customers, being willing to take risks, and being nimble. Well, duh.

So, the decades of IT failures have been accompanied by decades of criticism, hand-wringing, and hackneyed solutions. Without some more insightful thinking, this analysis can make our understanding of the New Normal look pretty old.

Not Necessarily Good News for Vendors

John Blossom [2] picks up on these arguments and looks at the issues from the vendor’s perspective. Blossom characterizes Wailgum’s piece as “outlining the enormous value gap that’s been arising in enterprise information technologies.” And, while clearly new approaches are needed and farming them out may become more prevalent, Blossom cautions this is not necessarily good news for vendors.

“. . . the trend towards agnosticism in finding solutions to information problems is only going to get stronger. Whatever platform, tool or information service can solve the job today will get used, as long as it’s affordable and helps major organizations adapt to their needs.”

“. . . many solutions oriented at first towards small to medium enterprises are likely to scale up cost-effectively as platforms from which more targeted information services can be launched to meet the needs of larger enterprises. If agility favors smaller companies that lack the legacy of failed IT investments that larger organizations must still bear, then there will be increasing pressure on large organizations to adopt similar methods.”

“. . . if you thought that your business could be segregated from the Web as a whole, increasingly you’ll be dead wrong.”

– John Blossom [2]

As Blossom puts it, “what seems to be happening is that many of the business processes through which these enterprises survived and thrived over the past several decades are shooting blanks. . . . many of the fundamental concepts of IT that have been promoted for the past few decades no longer give businesses operational advantages but they have to keep spending on them anyway.”

As he has been arguing for quite some time, one fundamental change agent has been the Web itself. “The Web has accelerated the flow of information and services that can lead to effective decision-making far more rapidly than enterprise IT managers have been able to accommodate.”

Web search engines and social media tools can begin to replace some of the dedicated expenditures and systems within the enterprise. Moreover, the extent, growth and value of external data and content are readily apparent. Without outreach to and accommodation of external data — even if it can solve its own internal data federation challenges — the individual enterprise is at risk of itself becoming a stovepipe.

Prior focuses on strategy and capturing workflows are perhaps being supplanted by the need for operational flexibility and on-the-fly aggregation and rapid service development tools. In an increasingly interconnected and rapidly changing world with massive information growth, being able to control workflows and to depend on central IT platforms may become last decade’s “Old Normal.” Floating on top of these massive forces and riding with their tides is a better survival tactic than digging fixed emplacements in the face of the tsunami.

These factors of Web, open source, agnosticism as to platform or software applications, and the need to mash up innovations from anywhere are not the traditional vendor game. Just as businesses and their IT departments must get leaner, so must vendors abandon the expectation of extracting exorbitant rents from their clients. “Fasten your seatbelts, it’s going to be a bumpy night!” [4]

So, Blossom agrees with the Wailgum diagnosis, but also helps us begin to understand parts of the cure. Blossom argues the importance of:

  • Web approaches and architectures
  • Incorporation of external data
  • Leverage of Web applications, and
  • Use of open standards and APIs to avoid vendor lock-in.

Much, if not all of this, can be provided by open source. But open source is not a sine qua non: commercial products that embrace these approaches can also be compatible components across the stack.

A Semantic Lever on An Open World Fulcrum

But — even with these components — a full cure still lacks a couple of crucial factors.

These remaining gaps are emphasized in Andy Mulholland’s recent blog post [3]. His post was occasioned by the press announcement that Structured Dynamics (my firm) had donated its Semantic Enterprise Adoption and Solutions, or SEAS, methodology to MIKE2.0 [5]. Mulholland was suggesting his audience needed to know about this Method for an Integrated Knowledge Environment because some of the major audit partnerships have decided to get behind MIKE2.0 with its explicit and open source purpose of managing knowledge environments and their data and provenance.

“In ‘closed’ — or some might say normal — IT environments where all data sources can be carefully controlled, all statements are taken to be false unless explicitly known to be true. However most ‘new’ data is from the ‘open’ environment of the web and in semantic data. If this is not specifically flagged as true it is categorised as ‘unknown’ rather then false. This single characteristic to me is in many ways the most crucial issue to understand as we go forward into using mixed data sets to support complex ‘business intelligence’ or ‘decision support’ around externally driven events, and situations.”

– Andy Mulholland, CTO, Capgemini [3]

As Mulholland notes, “. . . it’s not just more data, it’s the forms of data, and what the data is used for, all of which add to the complications. . . . Sadly the proliferation of data has mostly been in unstructured data in formats suitable for direct human use.”

So, one remaining factor is thus how to extract meaning from unstructured (text) content. It is here that semantics and various natural language processing (NLP) components come in. Implied in the incorporation of data extracted from unstructured sources is a data model expressly designed for such integration.
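
To make this concrete, here is a minimal, hypothetical sketch (Python with rdflib) of the idea of turning an assertion buried in free text into RDF statements that can sit alongside structured records. The extraction pattern and the example.org vocabulary are invented for illustration; they are not Structured Dynamics’ actual NLP pipeline or schema.

```python
# A toy illustration only: lift a simple "X acquired Y" assertion out of free
# text and restate it as RDF triples, so extracted facts and structured data
# can share one data model. The vocabulary URIs are hypothetical.
import re
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/vocab/")
g = Graph()
g.bind("ex", EX)

text = "Acme Corp acquired Widget Works in 2009."
match = re.search(r"(\w[\w ]+?) acquired (\w[\w ]+?) in (\d{4})", text)
if match:
    buyer, target, year = match.groups()
    buyer_uri = EX[buyer.replace(" ", "_")]
    target_uri = EX[target.replace(" ", "_")]
    g.add((buyer_uri, EX.acquired, target_uri))            # the extracted "fact"
    g.add((buyer_uri, EX.acquisitionYear, Literal(int(year))))

print(g.serialize(format="turtle"))
```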

Yet, without a fulcrum, the semantic lever can still not move the world. Mulholland insightfully nails this fundamental missing piece — the “most crucial issue” — as the use of the open world assumption.

From an enterprise perspective and in relation to the points of this article, an open world assumption is not merely a different way to look at the world. More fundamentally, it is a different way to do business and a very different way to do IT.

I have summarized these points before, but they deserve reiteration. Open world frameworks provide some incredibly important benefits for knowledge management applications in the enterprise:

  • Domains can be analyzed and inspected incrementally
  • Schema can be incomplete and developed and refined incrementally
  • The data and the structures within these open world frameworks can be used and expressed in a piecemeal or incomplete manner
  • We can readily combine data with partial characterizations with other data having complete characterizations
  • Systems built with open world frameworks are flexible and robust; as new information or structure is gained, it can be incorporated without negating the information already resident, and
  • Open world systems can readily bridge or embrace closed world subsystems.
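
For readers who prefer something runnable, here is a minimal sketch (Python with rdflib, using made-up example.org identifiers) of the “unknown, not false” behavior and the incremental addition these bullets describe:

```python
# Hedged, toy illustration of the open world assumption: an unasserted fact is
# simply unknown, and new triples can be layered on later without disturbing
# anything already present. All identifiers are hypothetical.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.acme, EX.isCustomerOf, EX.our_company))

# No triple states who insures Acme.
# Closed world reading: "Acme has no insurer" (treated as false).
# Open world reading: the insurer is simply unknown until asserted.
print((EX.acme, EX.insuredBy, None) in g)   # False -> unknown, not a denial

# Incremental refinement: add the fact when it becomes available.
g.add((EX.acme, EX.insuredBy, EX.some_insurer))
print((EX.acme, EX.insuredBy, None) in g)   # True
```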

Archimedes is credited with the apocryphal quote, “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” [6] I have also had lawyer friends tell me that the essence of many court cases is found in a single pivotal assertion or statement in the arguments. I think it fair to say that the open world approach plays such a central role in unlocking the adaptive way for IT to move forward.

Bringing the Factors Together via Open SEAS

As Mulholland notes, we have donated our Open SEAS methodology [7] to MIKE2.0 in the hopes of seeing greater adoption and collaboration. This is useful, and all are welcome to review, comment and contribute to the methodology, indeed as is the case for all aspects of MIKE2.0.

But the essential point of this article is that Open SEAS also embraces most — if not all — of the factors necessary to address the New Normal IT function.

Pillars of the Open Semantic Enterprise

Open SEAS is explicitly designed to facilitate becoming an open semantic enterprise. Namely, this means an organization that uses the languages and standards of the semantic Web, including RDF, RDFS, OWL, SPARQL and others to integrate existing information assets, using the best practices of linked data and the open world assumption, and targeting knowledge management applications. It does so based on Web-oriented architectures and approaches and uses ontologies as an “integration layer” across existing assets.
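
As a small, hypothetical illustration of “ontologies as an integration layer” (Python with rdflib; the CRM and ERP vocabularies below are invented for the example), two legacy systems can keep their own field names while a single SPARQL query reads across both:

```python
# Sketch only: two unchanged legacy vocabularies, one query across both via a
# SPARQL 1.1 alternative property path. URIs are illustrative.
from rdflib import Graph, Literal, Namespace

CRM = Namespace("http://example.org/crm/")
ERP = Namespace("http://example.org/erp/")
g = Graph()

# Records as exported from two existing systems
g.add((CRM.cust_101, CRM.companyName, Literal("Acme Corp")))
g.add((ERP.acct_9, ERP.accountTitle, Literal("Widget Works")))

q = """
SELECT ?org ?name WHERE {
  ?org <http://example.org/crm/companyName>|<http://example.org/erp/accountTitle> ?name .
}
"""
for org, name in g.query(q):
    print(org, name)
```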

The foundational approaches to the open semantic enterprise do not necessarily mean open data or open source (though they are suitable for these purposes, with many open source tools available). The techniques can equivalently be applied to internal, closed, proprietary data and structures. The techniques can themselves be used as a basis for bringing external information into the enterprise. ‘Open’ refers to the critical use of the open world assumption.

These practices do not require replacing current systems and assets; they can be applied equally to public or proprietary information; and they can be tested and deployed incrementally at low risk and cost. The very foundations of the practice encourage a learn-as-you-go approach and active, agile adaptation. While embracing the open semantic enterprise can lead to quite disruptive benefits and changes, the transition itself can be accomplished with minimal disruption. This is its most compelling aspect.

We believe this offers IT an exciting, incremental and low-risk path for moving forward. All existing assets can be left in place and — in essence — modernized in place. No massive shifts and no massive commitments are required. As benefits and budgets allow, the extent of the semantic interoperability layer may be extended as needed and as affordable.

The open semantic enterprise is neither magic nor a panacea. Simply consider it as bringing rationality to what has become a broken IT system. Embracing the open semantic enterprise can help the New Normal be a good and more adaptive normal.


[1] Thomas Wailgum, 2010. “Why the New Normal Could Kill IT,” March 12, 2010 online story in CIO Magazine; see http://www.cio.com/article/575563/Why_the_New_Normal_Could_Kill_IT.
[2] John Blossom, 2010. “Enterprise Publishing and the “New Normal” in I.T. – Are You Missing the Trend?,” March 15, 2010 blog post on ContentBlogger; see http://www.shore.com/commentary/weblogs/2010/03/enterprise-publishing-and-new-normal-in.html.
[3] Andy Mulholland, 2010. “Meet MIKE – Methodology for Managing Data and its Use,” March 12, 2010 ePractice.eu Blog post; see http://www.epractice.eu/en/blog/309352. Also, see Mulholland’s Capgemini CTO Blog.
[4] Bette Davis (as Margo Channing) uttered this famous line in All About Eve (1950).
[5] MIKE2.0, the Method for an Integrated Knowledge Environment, is an open source methodology for enterprise information management that provides a framework for information development. The MIKE2.0 Methodology is part of the overall Open Methodology Framework and is a collaborative effort to help organisations that have invested heavily in applications and infrastructures, but have not focused on the data and information needs of the business.
[6] As quoted in The Lever: “Archimedes, however, in writing to King Hiero, whose friend and near relation he was, had stated that given the force, any given weight might be moved, and even boasted, we are told, relying on the strength of demonstration, that if there were another earth, by going into it he could remove this.” From Plutarch (c. 45-120 AD) in the Life of Marcellus, as translated by John Dryden (1631-1700).
[7] The Open SEAS framework is part of the MIKE2.0 Semantic Enterprise Solution capability. It adds some 40 new resources to this area, importantly including reasoning for the validity of statements in ‘open’ situations.
Posted: March 9, 2010

Huzzah! for Local Government Open Data, Transparency, Community Indicators and Citizen Journalism

While the Knight News Challenge is still working its way through the screening details, Structured Dynamics’ Citizen DAN proposal remains in the hunt. Listen to this:

To date, we have been the most viewed proposal by far (2x more than the second most viewed!!! Hooray!) and are in the top five of the highest rated (we have also been at #1 or #2, depending on the day. Hooray!). Thanks to all of you for your interest and support.

There is much to recommend this KNC approach, not the least of which is its ability to attract some 2,500 proposals seeking a piece of the potential $5 million in 2010 grant awards. Our proposal extends SD’s basic structWSF and conStruct Drupal frameworks to provide a data appliance and network (DAN) to support citizen journalists with data and analysis at the local, community level.

None of our rankings, of course, guarantees anything. But, we also feel good about how the market is looking at these frameworks. We have recently been awarded some pretty exciting and related contracts. Any and all of these initiatives will continue to contribute to the open source Citizen DAN vision.

And, what might that vision be? Well, after some weeks away from it, I read again our online submission to the Knight News Challenge. I have to say: It ain’t too bad! (Plus many supporting goodies and details.)

So, I repeat in its entirety below, the KNC questions and our formal responses. This information from our original submittal is unchanged, except to add some live links where they could not be submitted as such before. (BTW, the bold headers are the KNC questions.) Eventual winners are slated to be announced around mid-June. We’re keeping our fingers crossed, but we are pursuing this initiative in any case.


Describe your project:

Citizen DAN is an open source framework to leverage relevant local data for citizen journalists. It is a:

  • Appliance for filtering and analyzing data specific to local community indicators
  • Means to visualize local data over time or by neighborhood
  • Meeting place for the public to upload and share local data and information
  • Web data portal that can be individually tailored by any local community
  • Node in a global network of communities across which to compare indicators of community well-being.

Good decisions and good journalism require good information. Starting with pre-loaded government data, Citizen DAN provides any citizen the framework to learn and compare local statistics and data with other similar communities. This helps to promote the grist for citizen journalism; it is also a vehicle for discovery and learning across the community.

Citizen DAN comes pre-packaged with all necessary deployment components and documentation, including local data from government sources. It includes facilities for direct upload of additional local data in formats from spreadsheets to standard databases. Many standard converters are included with the basic package.

Citizen DAN may be implemented by local governments or by community advocacy groups. When deployed, using its clear documentation, sponsors may choose whether or what portions of local data are exposed to the broader Citizen DAN network. Data exposed on the network is automatically available to any other network community for comparison and analysis purposes.

This data appliance and network (DAN) is multi-lingual. It will be tested in three cities in Canada and the US, showing its multi-lingual capabilities in English, Spanish and French.

How will your project improve the way news and information are delivered to geographic communities?

With Citizen DAN, anyone with Web access can now get, slice, and dice information about how their community is doing and how it compares to other communities. We have learned from Web 2.0 and user-generated content that once exposed, useful information can be taken and analyzed in valuable and unanticipated ways.

The trick is to get at information that already exists. Citizen journalists of the past may not have known:

  1. Where to find relevant information, or
  2. How to ‘slice-and-dice’ that information to extract meaningful nuggets.

By removing these hurdles, Citizen DAN improves the ways information is delivered to communities and provides the framework for sifting through it to extract meaning.

How is your idea innovative? (new or different from what already exists)

Government public data in electronic tabular form, or as published listings or tables in local newspapers, has been available for some time. While meeting strict ‘disclosure’ requirements, this information has been neither readily analyzable nor actionable.

The meaning of information lies in its interpretation and analysis.

Citizen DAN is innovative because it:

  1. Is a platform for accessing and exposing available community data
  2. Provides powerful Web-based tools for drilling down and mining data
  3. Changes the game via public-provided data, and
  4. Packages Citizen DAN in a Web framework that is available to any local citizen and requires no expertise other than clicking links.

What experience do you or your organization have to successfully develop this project?

Structured Dynamics has already developed and released as open source code structWSF and conStruct, the basic foundations of this proposal. structWSF provides the network and dataset “backbone” to this proposal; conStruct provides the Drupal portal and Web site framework.

To this foundation we add proven experience and knowledge of datasets and how to access them, as well as tools and converters for how to stage them for standard public use. A key expertise of Structured Dynamics is the conversion of virtually any legacy data format into interoperable canonical forms.

These are important challenges, which require experience in the semantics of data and mapping from varied forms into useful and common frameworks. Structured Dynamics has codified its expertise in these areas into the software underlying Citizen DAN.

Structured Dynamics’ principals are also multi-lingual, with language-neutral architectures and code. The company’s principals are also some of the most prominent bloggers and writers in the semantic Web. We are acknowledged as attentive to documentation and communication.

Finally, Structured Dynamics’ principals have more than a decade of track record in successful data access and mining, and software and venture development.

To this strong basis, we have preliminary city commitments for deploying this project in the United States (English and Spanish) and Canada (French and English).

What unmet need does your proposal answer?

ThisWeKnow offers local Census data, but no community or publishing aspects. Data sharing exists in DataSF and DataMine (NYC), but they lack collaboration, community networks and comparisons, and powerful data visualization or mapping.

Citizen DAN is a turnkey platform for any size community to create, publish, search, browse, slice-and-dice, visualize or compare indicators of community well-being. Its use makes the Web more locally focused. With it, researchers, watchdog groups, reporters, local officials and interested citizens can now discover hard data for ‘new news’ or fact-check mainstream media.

What tasks/benchmarks need to be accomplished to develop your project and by when will you complete them?

There are two releases with feedback. Each task summary, listing of task hours (hr) and duration in months (mo), in rough sequence order with overlaps, is:

  1. Dataset Prep/Staging: identify, load and stage baseline datasets; provide means for aggregating data at different levels; 420 hr; 2.5 mo
  2. Refine Data Input Facility: feature to upload other external data, incl direct from local sources; XML, spreadsheet, JSON forms; dataset metadata; 280 hr; 3 mo
  3. Add Data Visualization Component: Flex mapping/data visualization (charts, graphs) using any slice-and-dice; 390 hr; 3 mo
  4. Make Multi-linguality Changes: English, French, Spanish versions; 220 hr; 2 mo
  5. Refine User Interface: update existing interface in faceted browse; filter; search; record create, manage and update; imports; exports; and user access rights; 380 hr; 3 mo
  6. Standard Citizen DAN Ontologies: the coherent schema for the data; 140 hr; 3 mo
  7. Create Central Portal: distribution and promotion site for project; 120 hr; 2 mo
  8. Deploy/Test First Release: release by end of Mo 5 @ 3 test sites; 300 hr; 4 mo
  9. Revise Based on Feedback: bug fixing and 4 mo testing/feedback, then revision #2; 420 hr
  10. Package/Document: component packaging for easier installs; increased documentation; 310 hr; 2 mo
  11. Marketing/Awareness: see next question; 240 hr; 12 mo
  12. Project Management: standard PM/interact with test communities, partners; 220 hr; 12 mo.

See attached task details.

What will you have changed by the end of your project?

"Information is the currency of democracy." Thomas Jefferson (n.b.)

We intuitively understand that an informed citizenry is a healthy polity. At the global level and in 250 languages, we see how Wikipedia, matched with the Internet and inexpensive laptops, is bringing unforeseen information and enrichment to all. Across the board, we are seeing the democratization of information.

But very little of this revolution has percolated to the local level.

Only in the past decade or so have we seen free, electronic access to national Census data. We still see local data published only in print or not available at all, limiting awareness and, more importantly, understanding and analysis. Data locked up in municipal computers, or available but not exposed for crowdsourcing, is as good as non-existent.

Though many citizens at the local level are not numerically inclined, intuition has to tell us that the absence of empirical, local data hurts our ability to understand, reason and debate our local circumstances. Are we doing better or worse than yesterday? Than in comparison with our peers? Under what measures does this have meaning for community well-being?

The purpose of the Citizen DAN project is to create an appliance — in the same sense that refrigerators keep our food from spoiling — by which any citizen can crack open and expose relevant data at the local level. Citizen DAN is about enriching our local information and keeping our communities healthy.

How will you measure progress and ultimately success?

We will measure the progress of the project by the number of communities and local organizations that use the Citizen DAN platform to create and publish community data. Subsidiary measures include the number of:

  • Individual users across all installations
  • Users contributing uploaded datasets
  • Contributed datasets
  • Contributed applications based on the platform
  • Interconnected sites in the network
  • Different Citizen DAN networks
  • Substantive articles and blog posts on Citizen DAN
  • Mentions of ‘Citizen DAN’ (and local naming or variants, which will be tracked) in news articles
  • Contributed blog posts on the central Citizen DAN portal
  • Software package downloads, and
  • Google citations and hits on ‘Citizen DAN’ (and prominent variants).

These measures, plus active sites with profiles of each, will be monitored and tracked on the central Citizen DAN portal.

‘Ultimate success’ is related to the general growth in transparent government at the local level. Growth in Citizen DAN-related measures on a year-over-year basis or in relation to Gov2.0 would indicate success.

Do you see any risk in the development of your project?

There is no technical risk to this proposal, but there are risks in scope, awareness and acceptance. Our system has been operational for one year for relevant use cases; all components have been integrated, debugged, and put into production.

Scope risks relate to how much data the Citizen DAN platform is loaded with, and how much functionality is included. We balance the data question by using common public datasets for baseline data, then add features for localities to “crowdsource” their own supplementary data. We balance the functionality question by limiting new development to data visualization/mapping and to upload functions (per above), and then to refine what already exists.

Awareness risks arise from a crowded attention space. We can overcome this in two ways. The first is to satisfy users at our test sites. That will result in good recommendations to help seed a snowball effect. The second way is to use social media and our existing Web outlets aggressively. We have been building awareness for our own properties in steady, inch-by-inch measures. While a notable few Web efforts may go viral, the process is not predictable. Steady, constant focus is our preferred recipe.

Acceptance risk is intimately linked with awareness and use. If we can satisfy each Citizen DAN community, then new datasets, new functionality and new awareness will naturally arise. More users and more contributions through the network effect are the best way to broad acceptance.

What is your marketing plan? How will people learn about what you are doing?

Marketing and awareness efforts will include our use of social media, dedicated Web sites, support from test communities, and outreach to relevant community Web sites.

Our own blogs are popular in the semantic Web and structured data space (~3K uniques daily); we have published two posts on Citizen DAN and will continue to do so with more frequency once the effort gets underway.

We will create a central portal (http://citizen-dan.org) based on the project software (akin to our other project sites). The model for this apps and deployments clearinghouse is CrimeReports.com. Using social aspects and crowdsourcing, the site will encourage sharing and best practices amongst the growing number of Citizen DAN communities.

We will blog and post announcements for key releases and milestones on relevant external Web sites including various Gov 2.0 sites, Community Indicators Consortium, GovLoop, Knight News Challenge, the Sunlight Foundation, and so forth. In addition, we will collate and track individual community efforts (maintained on the central Citizen DAN site) and make specific outreach to community data sites (such as DataSF or DataMine at NYC.gov). We will use Twitter (#CitizenDAN, etc) and the social networks of LinkedIn, Facebook, and Meetup to promote Citizen DAN activity.

We will interact with advocates of citizen journalism, and engage civic organizations, media, and government officials (esp in our three test communities) to refine our marketing plan.

Is this a one-time experiment or do you think it will continue after the grant?

Citizen DAN is not an experiment. It is a working framework that gives any locality and its citizenry the means to assemble, share and compare measures of its community well-being with other communities. These indicators, in turn, provide substance and grist for greater advocacy and writing and blogging (“journalism”) at the local level.

Granted, there are unknowns: How many localities will adopt the Citizen DAN appliance? How essential will its data be to local advocacy and news? How active will each Citizen DAN installation be in attracting contributions and local data?

We submit that the better way to frame the question is the degree of adoption, as opposed to whether it will work.

Web-based changes in our society and social interaction are leading to the democratization of information, access to it, and channels for expression. Whether ultimately successful in the specific form proposed herein, Citizen DAN and its open source software and frameworks will surely be adopted in one form or another — to one degree or another — in the unassailable trend toward local government transparency and citizen involvement.

In short, Yes: We believe Citizen DAN will continue long after the grant.

If it is to be self-sustainable, what is the plan for making that happen?

Our plan begins with the nature of Citizen DAN as software and framework. Sustainability is a question of whether the appliance itself is useful, and how users choose to leverage it.

MediaWiki, the software behind Wikipedia, is an analog. MediaWiki is an enabling infrastructure. Some sites using it are not successful; others wildly so. Success has required the combination of a good appliance with topicality and good management. The same is true for Citizen DAN.

Our plan thus begins with Citizen DAN as a useful appliance, as free open source with great documentation and prominent initial use cases. Our plan continues with our commitment to the local citizen marketplace.

We are developing Citizen DAN because of current trends. We foresee many hundreds of communities adopting the system. Most will be able to do so on their own. Some others may require modifications or assistance. Our self-interest is to ensure a high level of adoption.

An era of citizen engagement is unfolding at the local level, fueled by Web technologies and growing comfort with crowdsourcing and social networks. Meanwhile, local government constraints and pressures for transparency are unleashing locked-up data. These forces will create new opportunities for data literacy among the public, which will itself bring new understanding and improvements in governance and budgeting. We intend Citizen DAN and its offspring to be among the catalysts for those changes.

Posted: March 1, 2010

New Release Builds on the MIKE2.0 Methodology and Deliverables


Today, Structured Dynamics is pleased to release Open SEAS, its methodology for Semantic Enterprise Adoption and Solutions. At the same time, we are donating the framework to the open source MIKE2.0 Method for an Integrated Knowledge Environment project.

Open SEAS provides a framework for the enterprise to establish a coherent, consistent and interoperable layer across its information assets. It is compliant with the MIKE2.0 Semantic Enterprise Solution Offering.

Open SEAS has been developed for enterprises desiring to initiate or extend their involvement with semantic technologies. It is inherently incremental, low-cost and low-risk.

Donation and Relation to MIKE2.0

Concurrent with this release, Structured Dynamics is also donating the methodology and all of its related intellectual assets to the MIKE2.0 project. Under a Creative Commons license and MIKE2.0’s content governance policies, the community’s current 2000+ members are now free to expand and use the Open SEAS methodology in any manner they see fit.

Last week, I began to introduce MIKE2.0 and its methodology to the readers of this blog. MIKE2.0 provides a complete delivery environment and methodology for information management projects in the enterprise. Solutions — from the specific to the composite — are described and packaged with respect to plans, management communications, products (open source and proprietary), activities, benchmarks, and deliverables. Delivery is accomplished over multiple increments, split into five phases from definition and planning to deployment. The assets associated with this framework are based first on templates and guidelines that can be applied to any information management area. The framework allows for multiple projects to be combined and inter-related, all under a common methodology. More information and a good entry point are provided on the What is MIKE2.0? page on the project’s main Web site.

MIKE2.0 presently has some 800 resources across about 40 solution areas. With Structured Dynamics’ donation, there are now about 40 resources related to the semantic enterprise, many of them major, accompanied by many images and figures. This contribution makes the Semantic Enterprise Solution Offering instantly one of the more complete within MIKE2.0. As noted below, this contribution is also just a beginning of our commitment.

Basic Overview of Open SEAS

The Open SEAS framework is Structured Dynamics’ specific implementation framework for MIKE2.0’s Semantic Enterprise Solution Offering. This section overviews some of Open SEAS’ key facets.

A Grounding in the Open World Approach

Many enterprise information systems, particularly relational ones, embody a closed world assumption that holds that any statement that is not known to be true is false. This premise works well where there is complete coverage of specific items, such as the enumeration of all customers or all products.

Yet, in most areas of the real (“open”) world there is no guarantee or likelihood of complete coverage. Under an open world assumption the lack of a given assertion or fact does not imply whether that possible assertion is true or false: it simply is not known. An open world assumption is one of the key factors that defines the open Semantic Enterprise Offering and enables it to be deployed incrementally. It is also the basis for enabling linkage to external (often incomplete) datasets.
Pillars of the Open Semantic Enterprise

Fortunately, there is no requirement for enterprises to make some philosophical commitment to either closed- or open-world systems or reasoning. It is perfectly acceptable to combine traditional closed-world relational systems with open-world reasoning. It is also not necessary to make any choices or trade-offs about using public v. private data or combinations thereof. All combinations are acceptable when the basis for integration is an open-world one.
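
As a hedged sketch of that combination (Python, with sqlite3 standing in for a relational system and rdflib for the open-world side; the table and vocabulary names are invented), a complete closed-world table and a partial external dataset can coexist in one graph:

```python
# Illustrative only: a closed-world relational table (complete by definition)
# merged with open-world external facts that cover only some records.
import sqlite3
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Closed world: the products table enumerates every product we sell.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER, name TEXT)")
db.executemany("INSERT INTO products VALUES (?, ?)",
               [(1, "Widget"), (2, "Gadget")])
for pid, name in db.execute("SELECT id, name FROM products"):
    g.add((EX[f"product/{pid}"], EX.name, Literal(name)))

# Open world: an external source adds a partial fact about just one product.
g.add((EX["product/1"], EX.reviewedBy, EX.some_review_site))

# Product 2 has no review triple; under open-world reasoning that is "unknown",
# not "never reviewed" -- while the product list itself remains complete.
print(len(g), "triples in the combined graph")
```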

Open SEAS is grounded in this “open” style. It can be employed in virtually any enterprise circumstance and at any scope, and expanded in a similar way as budget and needs allow.

Other Basic Pillars to the Framework

Open SEAS is based on seven pillars, which themselves form the basis for the MIKE2.0 Guiding Principles for the Open Semantic Enterprise. These principles cover the data model, architecture, deployment practices and approach for how an enterprise can begin and then extend its use of semantics for information interoperability.

Important aspects include linked data and Web-oriented architecture, but it is really the unique combination of the open-world approach with the RDF data model and its semantic power that provides the distinctive difference for Open SEAS. An exciting prospect — but still in its early stages of discovery and implementation — is the role of adaptive ontologies to power ontology-driven applications. These prospects, if fully realized, could totally remake how knowledge workers interact with and specify the applications that manage their information environment.

Embracing the Layered Semantic Enterprise Architecture

Open SEAS also fully embraces the Layered Semantic Enterprise Architecture of MIKE2.0’s Semantic Enterprise Offering. This architecture acts as a subsequent set of functions, or middleware, with respect to MIKE2.0’s standard SAFE Architecture. Most of the existing SAFE architecture resides in the Existing Assets layer. The specific aspects of Open SEAS reside in the layers above, namely the Access/Conversion, Ontologies and Applications layers.

Using (Mostly) Open Source to Fill Gaps in the Technology Stack

Stitching together this interoperability layer above existing information and infrastructure assets requires many diverse tools and products, and there still are gaps. The layer figure below shows the semantic enterprise architecture overlaid with some representative open source projects and tools that plug some of those gaps.

Open SEAS also maintains a comprehensive roster of open source and proprietary tools in all aspects of semantic technology, ranging from data storage and converters, to Web services and middleware, and then to ultimate user applications. A database of nearly 1,000 tools in all areas is maintained for potential applicability to the methodology.

Quick, Adaptive, Agile Increments

The inherently incremental nature of the Open SEAS framework encourages experimentation, affordable deployments, and experience gathering. Because the systems and deployments put into place with this framework are based on the open world approach and use the extensible RDF data model, expansions in scope, sophistication or domain can be incorporated at any time without adverse effects on existing assets or systems or prior Open SEAS deployments.

Quick and (virtually) risk-free increments mean that adopting semantic approaches in the enterprise can be accelerated (or not) based on empirical benefits and available budgets.

An Emphasis on Learning

The Open SEAS framework is built on a solid foundation, but it is also one that is incomplete. Deployments of semantic technologies and approaches are still quite early in the enterprise, whether measured in numbers, scope or depth. In order for the framework — and the practice of semantic adoption in general — to continue to expand and be relevant in the enterprise, active learning and documentation are essential. One of the reasons for the affiliation of Open SEAS with MIKE2.0 is to leverage these strong roots in methodological learning.

Where Do We Go From Here?

The nature of Open SEAS and its parent Semantic Enterprise Solution Offering touches most offerings within the MIKE2.0 framework. There is much to be done to integrate the semantic enterprise perspective into these other possibilities, plus much that needs to be learned and documented for the offering itself. The concept of the semantic enterprise, after all, is relatively new with few prominent case studies.

As the offering points out, there are some dozens of additional necessary resources that are available and ready to be packaged and moved into the MIKE2.0 framework. These efforts are a priority, and will continue over the coming weeks.

But, more importantly, beyond that, the experience and practitioner base needs to grow. Much is unknown regarding key aspects of the offering:

  • What are the priority application areas which promise the greatest return on investment?
  • What are best practices for adoption and technologies across the entire semantic enterprise stack?
  • Many tools and techniques are still legacies and outgrowths of the research and academic communities. How can these be adopted and modified to meet enterprise standards and expectations?
  • What are the “best” ontology and vocabulary building blocks upon which to model and help frame the enterprise’s interoperability needs?
  • What are the most cost-effective strategies for leveraging existing information and infrastructure assets, while transitioning away from them where appropriate?

Despite these questions, emergence is the way complex systems arise out of a multitude of relatively simple interactions, exhibiting new and unforeseen properties in the process. RDF is an emergent model. It begins as simple “fact” statements of triples, which may then be combined and expanded into ever-more complex structures and stories. As an internal, canonical data model, RDF has advantages for information federation and development over any other approach. It can represent, describe, combine, extend and adapt data and their organizational schema flexibly and at will. Applications built upon RDF can explore and analyze in ways not easily available with other models.
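
Here is a tiny sketch (Python with rdflib; the civic-data URIs are made up) of that building-block character, where plain “fact” triples are asserted first and a bit of schema is layered on afterwards:

```python
# Illustration only: simple triples first, schema later, same graph throughout.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:pothole_report_17 a ex:ServiceRequest ;
    ex:neighborhood ex:Westside ;
    ex:reportedOn "2010-02-12" .
""", format="turtle")

# Later, an organizational schema statement turns up; it simply stacks on top
# of the existing facts without restructuring them.
g.parse(data="""
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:ServiceRequest rdfs:subClassOf ex:CivicRecord .
""", format="turtle")

print(len(g), "triples")   # the original facts plus the new schema statement
```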

Combined with an open-world approach, new information can be brought in and incorporated to the framework step-by-step. Perhaps the greatest promise in an ongoing transition to become a semantic enterprise is how an inherently incremental and building-block approach might alter prior practices and risks across the entire information management spectrum.

We invite you to join us and to contribute to this effort. I encourage you to join MIKE2.0 if you have not already done so, and check out announcements on this blog for ongoing developments.

Posted: February 23, 2010

A Maturing Standard, Worthy of Adoption

Enterprises are hungry for guidance and assistance in learning how to embrace semantics and semantic technologies in their organization. Because of our services and products and my blog writings, we field many inquiries at Structured Dynamics about best practices and methods for transitioning to a semantic enterprise.

Until the middle of last year, we had been mostly focused on software development projects and our middleware efforts via things like conStruct, structWSF, irON and UMBEL. While we also were helping in early engagement and assessment efforts, it was becoming clear that more formalized (and documented!) methods and techniques were warranted. We needed concrete next steps to offer an organization once it became intrigued and then excited about what it might mean to become a semantic enterprise.

For decades, of course, various management and IT consultancies have focused on assisting enterprises adopt new work methods and information management approaches and technologies. These practices have resulted in a wealth of knowledge and methods, all attuned to enterprise needs and culture. Unfortunately, these methods have also been highly proprietary and hidden behind case studies and engagements often purposely kept from public view.

So, in parallel with formulating and documenting our own approaches — some of which are quite new and unique to the semantic space (with its open world flavor as we practice it) — we also have been active students of what others have done and written about information management assessment and change in the enterprise. Despite the hundreds of management books published each year and the deluge of articles and pundits, there are surprisingly few “meaty” sources of actual methods and templates around which to build concrete assessment and adoption methods.

The challenge here is not simply to present a few ideas or to spin some writings (or a full book!) around them. Rather, we need the templates, checklists, guidances, tools listings, frameworks, methods, test harnesses, codified approaches, scheduling and budgeting constructs, and so forth that take initial excitement and ideas to prototyping and then deployment. These methodological assets take tens to hundreds of person-years to develop. They must also embody the philosophies and approaches consistent with our views and innovations.

Customers like to see the methods and deliverables that assessment and planning efforts can bring to them. But traditional consultancies have been naturally reluctant to share these intellectual assets with the marketplace — unless for a fee. Like many growing small companies before us, Structured Dynamics was thus embarking on systematically building up its own assets, as engagements and time allowed.

Welcome to MIKE2.0 and A Bit of History

I first heard of MIKE2.0 from Alan Morrison of PricewaterhouseCoopers’ Center for Technology and Innovation and from Steve Ardire, a senior advisor to SD. My first reaction was pretty negative, both because I couldn’t believe anyone would name a methodology after me (hehe) and because I have been pretty cool to the proliferation of version numbers for things other than software or standards.

However, through Alan and Steve’s good offices we were then introduced to two of the leaders of MIKE2.0, Sean McClowry of PWC and then Rob Hillard of Deloitte. Along with BearingPoint, the original initiator and contributor to MIKE2.0, these three organizations and their key principals provide much of the organizational horsepower and resource support to MIKE2.0.

Based on the fantastic support of the community and the resources of MIKE2.0 itself (see the concluding section on Why We Like the Framework), we began digging deeper into the MIKE2.0 Web site and its methodology and resources. For the reasons summarized in this article, we were amazed by the scope and completeness of the framework, and very comfortable with its approach to developing working deployments consistent with our own philosophy of incremental expansion and learning.

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for enterprise information management. It provides a comprehensive methodology (747 significant articles so far) that can be applied across a number of different projects within the information management space. While initially focused around structured data, the goal of MIKE2.0 is to provide a comprehensive methodology for any type of information development.

Information development is an approach organizations can apply to treat information as a strategic asset through their complete supply chain: from how it is created, accessed, presented and used in decision-making to how it is shared, kept secure, stored and destroyed. Information development is a key concept of the MIKE2.0 methodology and a central tenet of its philosophy:

The concept of Information Development is based on the premise that due to its complexity, we currently lack the methods, technologies and skills to solve our information management challenges. Many of the techniques in use today are relatively immature and fragmented and the problems keep getting more difficult to solve. This is one of the reasons we see so many problems today and why organizations that manage information well are so successful.

MIKE2.0 is not a framework for general transactional or operational purposes regarding data or records in the enterprise. (Though it does support functions related to analyzing that information.) Rather, MIKE2.0 is geared to the knowledge management or information management environment, with a clear emphasis on enterprise-wide issues, information integration and collaboration.

The MIKE2.0 methodology was initially created by a team from BearingPoint, a leading management and technology consultancy. The project started as “MIKE2”, an internal approach to aid enterprises in improving their information management. The MIKE2 initiative was started in early 2005 and the methodology was brought through a number of release cycles until it reached a mature state in late 2005. “MIKE2.0” involved taking this approach and making it open source and more collaborative. Much of the content of the MIKE2.0 methodology was made available to the open source community in late December 2006. The actual MIKE2.0 Web site and release occurred in 2007.

Anyone can join MIKE2.0, which adheres to an open source and Creative Commons model. Governance of MIKE2.0 is based on a meritocracy model, similar to the principles followed by the Apache Software Foundation.

There is much additional background on MIKE2.0. Also, for an explanation of the rationale for the framework, see the MIKE2.0 article, A New Model for the Enterprise.

A Surprisingly Robust and Complete Framework

MIKE2.0 provides a complete delivery framework for information management projects in the enterprise. The assets associated with this framework are based, first, on templates and guidelines that can be applied to any information management area. This is a key source of our interest in the framework.

But, there is also real content behind these templates. There is a slate of “solution offerings” geared to most areas of enterprise information management. There are “solution capabilities” that describe the tools and templates by which these solutions need to be specified, planned and tracked. There are frameworks for relating specific vendor and open source tools to each offering. And, there are general strategic and other guidances for how to communicate the current state of the discipline as well as its possible future states.

The next diagram captures some of these major elements:

Perhaps the most important aspect of this framework, however, is the way it provides solid guidance for how entirely new solution areas — the semantic enterprise, for example, in Structured Dynamics’ own case — can be expressed and “codified” in ways meaningful to enterprise customers. These frameworks provide a common competency across all areas of enterprise interest in information development and management. For a relatively new and small vendor such as us, this framework provides a credible meeting ground with the market.

A Phased and Incremental Approach to Information Development

The fundamental approach to a MIKE2.0 offering is staged and incremental. This is very much in keeping with Structured Dynamics’ own philosophy, which, more importantly, also is consonant with the phased adoption and expansion of open semantic technologies within the enterprise.

Under the MIKE2.0 framework, the first two phases relate to strategy and assessment. The next three phases (of the five standard ones) produce the first meaningful implementation of the offering. Depending on the maturity of the offering, that may range from a prototype to a broader deployment. Thereafter, scale-out and expansion occur via a series of potential increments:

The incremental aspects of the later phases are not dissimilar to the “spiral” deployments common in some government procurements. The truth remains, however, that actual experience with the later increments is quite limited, and whether these methodologies hold up over long periods of time is unknown. That caution noted, most failures occur in the earliest phases of a project, and MIKE2.0 has strong framework support in these early phases.

A Broad Spectrum of Capabilities, Assets and Solutions

MIKE2.0 “solutions” are presented as offerings ranging from single solutions to a variety of clusters or groupings. These types reflect the real circumstances of applications and deployments at either the departmental or enterprise level. They may range from systematic approaches to those that address specific business and technology problems. Tools and solutions may be work-process, human or technological in nature, and proprietary or open.

An overarching purpose of the MIKE2.0 methodology is to couch these variations into a consistent and holistic framework that allows individual or multiple pieces to be combined and inter-related. This consistency is a key to the core objective of information management interoperability across whatever solution profile the enterprise may choose to adopt.

This objective is best expressed via the Overall Implementation Guide. Thus, while detailed aspects of MIKE2.0’s solution offerings may encompass very specific techniques, design patterns and process steps, in combination these pieces form meaningful wholes.

This spectrum of solution possibilities is organized according to:

  • Vendor Solution Offerings – integrated approaches to solving problems from a vendor perspective, often product-specific
  • The SAFE Architecture – an adaptive framework that is flexible and extensible, and can be deployed incrementally

These groupings are shown in the diagram below, with the “core” and composite groupings shown in the middle:

Nearly Two Score Core and Composite Offerings

These central core and composite groupings, of course, comprise more focused and specific solutions. While it is not really the purpose of this piece to describe any of these MIKE2.0 specifics in detail, the next diagram helps illustrate the scope and breadth of the current framework.

Here are the 30+ individual “core” solution offerings:

These are also accompanied by 8 or so cross-cutting “composite” solutions that reach across many of the core aspects.

Whether core or composite, there is a patterned set of resources, guidances and templates that accompanies each solution. The MIKE2.0 Web site and resources are generally organized around these various core or composite solutions.

Why We Like the Framework

MIKE2.0 is a project that walks its talk. Here are some of the reasons why we like the framework and how it is managed, and why we plan to be active participants as it moves forward:

  • Open source with a true collaboration and welcoming commitment
  • A sympatico philosophy and grounding
  • Knowledgeable and friendly leaders and governance structure
  • An incremental, adaptive framework, in keeping with our own beliefs and the reality of the marketplace for emerging practices
  • A proven core methodology, with many existing templates and tools for leveraging methodology development and extensions
  • Much available content (about 800 articles as of this date)
  • A smart, active community contributing on a constant basis
  • Backup and support from leading management and IT consultants
  • Active evangelists, with attention to community communications and care-and-feeding
  • The MIKE2.0 environment itself (based on the OmCollab collaboration platform), which is well thought out with many nice community and content aspects
  • A budget and roadmap for how to extend the methodology and achieve the vision.

We invite you to learn more about MIKE2.0 and join with us in helping it to continue to grow and mature.

And, oh, as to that aversion to the MIKE2.0 name? Well, with our recent addition of Citizen DAN, it is apparent we are adopting as many boys as we can. Welcome to the family, MIKE2.0!

Posted by AI3's author, Mike Bergman Posted on February 23, 2010 at 1:51 am in Adaptive Innovation, MIKE2.0, Open Source, Structured Dynamics | Comments (6)
The URI link reference to this post is: http://www.mkbergman.com/867/mike2-0-open-source-information-development-in-the-enterprise/