Posted: May 9, 2010

We’re Speaking on Rich Interfaces and MIKE2.0

SemTech 2010 Conference
As I reported about a year ago after my first attendance, I think the Semantic Technology Conference is the best venue going for pragmatic discussion of semantic approaches in the enterprise. I’m really pleased that I will be attending again this year. The conference (#SemTech) will be held at the Hilton Union Square in downtown San Francisco on June 21-25, 2010. Now in its sixth year and the largest of its kind, it is again projected to attract 1500 attendees or so.

I will be presenting two papers this year, covering rather dramatically different topics. Such is the business of a young company like Structured Dynamics that wears many hats!

Rich User Interfaces for Semantic Technologies

A really exciting presentation for us is Sizzle for the Steak: Rich, Visual Interfaces for Ontology-driven Apps, on Wednesday, June 23, in the 2:00 PM – 3:00 PM session.

A nagging gap in the semantic technology stack is acceptable — better still, compelling — user experiences. After our exile for a couple of years doing essential infrastructure work, we have been unshackled over the past year or so to innovate on user interfaces for semantic technologies.

Our unique approach uses adaptive ontologies to drive rich Internet applications (RIAs) through what we call “semantic components.” This framework is unbelievably flexible and powerful and can seamlessly interact with our structWSF Web services framework and its conStruct Drupal implementations.

We will be showing these rich user interfaces for the first time in this session. We will show concept explorers, “slicer-and-dicers”, dashboards, information extraction and annotation, mapping, data visualization and ontology management. Get your visualization any way you’d like, and for any slice you’d like!

While we will focus on the sizzle and demos, we will also explain a bit of the technology that is working behind Oz’s curtain.
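
To give a flavor of what “ontology-driven” means in practice, here is a minimal, hypothetical sketch in Python. It is not Structured Dynamics’ actual semantic components framework; the class names, the annotation property and the widgets are all invented for illustration. The idea is simply that the ontology, rather than hard-coded views, tells the application which display component to use for a given kind of record.

    # Hypothetical sketch: ontology annotations drive the choice of display widget.
    # Each class carries a "preferredWidget" annotation; the application dispatches
    # on the class of a record instead of hard-coding a view per data source.

    ONTOLOGY = {
        "geo:Region":     {"preferredWidget": "map"},
        "skos:Concept":   {"preferredWidget": "concept_explorer"},
        "qb:Observation": {"preferredWidget": "chart"},
    }

    WIDGETS = {
        "map":              lambda rec: f"[map centered on {rec['label']}]",
        "concept_explorer": lambda rec: f"[concept tree rooted at {rec['label']}]",
        "chart":            lambda rec: f"[bar chart of {rec['label']}]",
    }

    def render(record):
        """Pick a widget based on the ontology annotation of the record's class."""
        annotations = ONTOLOGY.get(record["type"], {})
        widget = WIDGETS.get(annotations.get("preferredWidget"),
                             lambda rec: f"[generic table view of {rec['label']}]")
        return widget(record)

    print(render({"type": "geo:Region", "label": "Iowa"}))
    print(render({"type": "qb:Observation", "label": "median income by county"}))

Add a new class with its own “preferredWidget” annotation and the same application code can display it, which is the sense in which the interface is driven by the ontology rather than by new programming.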

Methods for Becoming a Semantic Enterprise

A more informal, interactive face-to-face (F2F) discussion will be MIKE2.0 for the Semantic Enterprise, on Thursday, June 24, in the 4:45 PM – 5:45 PM slot.

MIKE2.0 (Method for an Integrated Knowledge Environment) is an open source methodology for enterprise information management that is coupled with a collaborative framework for information development. It is oriented around a variety of solution “offerings”, ranging from the comprehensive and the composite to specific practices and technologies. A couple of months back, I gave an overview of MIKE2.0 that was pretty popular.

We have been instrumental in adding a semantic enterprise component to MIKE2.0, with our specific version of it called Open SEAS. In this face-to-face session, experts and prospective practitioners will join together to discuss how to effectively leverage this framework. While I will introduce and facilitate, expect many other MIKE2.0 aficionados to participate.

This is perhaps a new concept to many, but what is exciting about MIKE2.0 is that it provides a methodology and documentation complement to technology alone. When combined with that technology, all of the pieces constitute what might be called a total open solution. I personally think it is the next logical step beyond open source.

Hope to See You There!

So, if you have not already made plans, consider adjusting your schedule today. And, contact me in advance (mike at structureddynamics dot com) if you’ll be there. We’d love to chat!

Posted: March 23, 2010

[Image: Optimus Prime transformer]
Open Source, Open World, Web, and Semantics to Transform the Enterprise

Ten years ago the message was the end of obscene rents from proprietary enterprise software licenses. Five years ago the message was the arrival and fast maturing of open source. Today, the message is the open world and semantics.

These forces are conspiring to change much within enterprise IT. And, this change will undoubtedly be for the good — for the enterprise. But these forces are not necessarily good news within conventional IT departments and definitely not for traditional vendors unwilling to transform their business models.

I have been beating the tom-tom on this topic for a few months, specifically in regard to the semantic enterprise. But I have been by no means alone or unique. The last two weeks have seen an interesting confluence of reports and commentaries by others that enrich the story of the changing information technology landscape. I’ll be drawing on the observations of Thomas Wailgum (CIO magazine) [1], John Blossom [2] and Andy Mulholland, CTO of Capgemini [3].

The New Normal

“After nearly five decades of gate-keeping prominence, corporate IT is in trouble and at a crossroads like never before in its mercurial and storied history as a corporate function. You may be too big to fail, but you’re not too big to succeed. What will you do?”

– Thomas Wailgum [1]

Wailgum describes the “New Normal” and how it might kill IT [1]. He picks up on the viewpoint that the recent meltdowns in the financial sector are a seismic force for change in information technology. While he acknowledges many past challenges to IT from PCs and servers and Y2K and software becoming a commodity, he puts the global recession’s impact on business — the “New Normal” — into an entirely different category.

His basic thesis is that these financial shocks are forcing companies to scrutinize IT as never before, in particular “unfavorable licensing agreements and much-too-much shelfware; ill-conceived purchasing and integration strategies; and questionable software married to entrenched business processes.”

Yet, he also argues that IT and its systems are too ingrained into the core business processes of the enterprise to be allowed to fail. IT systems are now thoroughly intertwined with:

  • ERP systems – the financial, administrative and procurement backbone of every organization
  • Business development and BI
  • Operations and forecasting
  • Customer service and call centers
  • Networking and security
  • Sales and marketing via CRM and lead generation
  • Supply chain applications in manufacturing and shipping.

But top management is disappointed and disaffected. IT systems gobble up too many limited resources. They are inflexible. They are old and require still more limited resources to modernize. They are complex. They create and impose delays. And all of these negatives lead to huge losses in opportunity costs. Wailgum cites Gartner, for instance, as predicting that by 2012 perhaps 20 percent of businesses will own no IT assets at all, in their desire to outsource this headache.

“Enterprise systems are doing it wrong. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness.”

– Tim Bray, as quoted in [1]

I think this devastating diagnosis is largely correct, though perhaps incomplete in that no mention is made of the flipside: what IT has failed to deliver. I think this flipside is equally damning.

Despite decades of trying, IT still has not broken down the data stovepipes in the enterprise. Rather, they have proliferated like rabbits. And, IT has failed to unlock the data in the 80% of enterprise information contained within documents (unstructured data).

Unfortunately, after largely zeroing in on and mostly diagnosing the situation, Wailgum’s remedy comes off sounding like a tired 12-step program. He argues for new mindsets, better communications, getting in touch with customers, being willing to take risks, and being nimble. Well, duh.

So, the decades of IT failures have been accompanied by decades of criticism, hand-wringing, and hackneyed solutions. Without some more insightful thinking, this analysis can make our understanding of the New Normal look pretty old.

Not Necessarily Good News for Vendors

John Blossom [2] picks up on these arguments and looks at the issues from the vendor’s perspective. Blossom characterizes Wailgum’s piece as “outlining the enormous value gap that’s been arising in enterprise information technologies.” And, while clearly new approaches are needed and farming them out may become more prevalent, Blossom cautions this is not necessarily good news for vendors.

“. . . the trend towards agnosticism in finding solutions to information problems is only going to get stronger. Whatever platform, tool or information service can solve the job today will get used, as long as it’s affordable and helps major organizations adapt to their needs.”

“. . . many solutions oriented at first towards small to medium enterprises are likely to scale up cost-effectively as platforms from which more targeted information services can be launched to meet the needs of larger enterprises. If agility favors smaller companies that lack the legacy of failed IT investments that larger organizations must still bear, then there will be increasing pressure on large organizations to adopt similar methods.”

“. . . if you thought that your business could be segregated from the Web as a whole, increasingly you’ll be dead wrong.”

– John Blossom [2]

As Blossom puts it, “what seems to be happening is that many of the business processes through which these enterprises survived and thrived over the past several decades are shooting blanks. . . . many of the fundamental concepts of IT that have been promoted for the past few decades no longer give businesses operational advantages but they have to keep spending on them anyway.”

As he has been arguing for quite some time, one fundamental change agent has been the Web itself. “The Web has accelerated the flow of information and services that can lead to effective decision-making far more rapidly than enterprise IT managers have been able to accommodate.”

Web search engines and social media tools can begin to replace some of the dedicated expenditures and systems within the enterprise. Moreover, the extent, growth and value of external data and content are readily apparent. Without outreach to and accommodation of external data — even if it can solve its own internal data federation challenges — the individual enterprise is at risk of itself becoming a stovepipe.

Prior focuses on strategy and capturing workflows are perhaps being supplanted by the need for operational flexibility and on-the-fly aggregation and rapid service development tools. In an increasingly interconnected and rapidly changing world with massive information growth, being able to control workflows and to depend on central IT platforms may become last decade’s “Old Normal.” Floating on top of these massive forces and riding with their tides is a better survival tactic than digging fixed emplacements in the face of the tsunami.

These factors of Web, open source, agnosticism as to platform or software applications, and the need to mash up innovations from anywhere are not the traditional vendor game. Just as businesses and their IT departments must get leaner, so must vendors shed the expectation of extracting exorbitant rents from their clients. “Fasten your seatbelts, it’s going to be a bumpy night!” [4]

So, Blossom agrees with the Wailgum diagnosis, but also helps us begin to understand parts of the cure. Blossom argues the importance of:

  • Web approaches and architectures
  • Incorporation of external data
  • Leverage of Web applications, and
  • Use of open standards and APIs to avoid vendor lock-in.

Much, if not all of this, can be provided by open source. But open source is not a sine qua non: commercial products that embrace these approaches can also be compatible components across the stack.

A Semantic Lever on An Open World Fulcrum

But — even with these components — a full cure still lacks a couple of crucial factors.

These remaining gaps are emphasized in Andy Mulholland’s recent blog post [3]. His post was occasioned by the press announcement that Structured Dynamics (my firm) had donated its Semantic Enterprise Adoption and Solutions, or SEAS, methodology to MIKE2.0 [5]. Mulholland was suggesting his audience needed to know about this Method for an Integrated Knowledge Environment because some of the major audit partnerships have decided to get behind MIKE2.0 with its explicit and open source purpose of managing knowledge environments and their data and provenance.

“In ‘closed’ — or some might say normal — IT environments where all data sources can be carefully controlled, all statements are taken to be false unless explicitly known to be true. However most ‘new’ data is from the ‘open’ environment of the web and in semantic data. If this is not specifically flagged as true it is categorised as ‘unknown’ rather than false. This single characteristic to me is in many ways the most crucial issue to understand as we go forward into using mixed data sets to support complex ‘business intelligence’ or ‘decision support’ around externally driven events, and situations.”

– Andy Mulholland, CTO, Capgemini [3]

As Mulholland notes, “. . . it’s not just more data, it’s the forms of data, and what the data is used for, all of which add to the complications. . . . Sadly the proliferation of data has mostly been in unstructured data in formats suitable for direct human use.”

One remaining factor, then, is how to extract meaning from unstructured (text) content. It is here that semantics and various natural language processing (NLP) components come in. Implied in the incorporation of data extracted from unstructured sources is a data model expressly designed for such integration.
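
As a rough sketch of what such extraction produces, the toy Python below pulls simple “X is a Y” assertions out of free text and records them as subject-predicate-object triples. It is only illustrative: the pattern, prefixes and URIs are hypothetical, and a real pipeline would rely on genuine NLP components (tagging, entity recognition, disambiguation) rather than one regular expression.

    import re

    # Toy extraction: turn "X is a Y" statements into RDF-style triples.
    PATTERN = re.compile(r"\b([A-Z][\w-]+) is an? ([a-z][\w-]+)\b")

    def extract_triples(text):
        triples = []
        for subject, obj in PATTERN.findall(text):
            # an rdf:type-style assertion, using illustrative "ex:" URIs
            triples.append((f"ex:{subject}", "rdf:type", f"ex:{obj.capitalize()}"))
        return triples

    doc = "Acme is a manufacturer. UMBEL is a vocabulary. The report was late."
    for triple in extract_triples(doc):
        print(triple)
    # ('ex:Acme', 'rdf:type', 'ex:Manufacturer')
    # ('ex:UMBEL', 'rdf:type', 'ex:Vocabulary')

Once assertions buried in documents are reduced to triples like these, they can sit in the same data model as structured data, which is the integration point noted above.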

Yet, without a fulcrum, the semantic lever can still not move the world. Mulholland insightfully nails this fundamental missing piece — the “most crucial issue” — as the use of the open world assumption.

From an enterprise perspective and in relation to the points of this article, an open world assumption is not merely a different way to look at the world. More fundamentally, it is a different way to do business and a very different way to do IT.

I have summarized these points before, but they deserve reiteration. Open world frameworks provide some incredibly important benefits for knowledge management applications in the enterprise (a small sketch after the list makes the distinction concrete):

  • Domains can be analyzed and inspected incrementally
  • Schema can be incomplete and developed and refined incrementally
  • The data and the structures within these open world frameworks can be used and expressed in a piecemeal or incomplete manner
  • We can readily combine data with partial characterizations with other data having complete characterizations
  • Systems built with open world frameworks are flexible and robust; as new information or structure is gained, it can be incorporated without negating the information already resident, and
  • Open world systems can readily bridge or embrace closed world subsystems.
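
To make the closed- vs. open-world distinction behind these benefits concrete, here is a deliberately tiny Python sketch; the facts and names are invented. Under a closed world assumption anything not recorded is false; under an open world assumption it is simply unknown unless asserted true or explicitly asserted false, which is exactly what allows information to be added incrementally.

    # Toy illustration of closed-world vs. open-world treatment of a missing fact.

    facts = {
        ("acme", "hasSupplier", "globex"),      # asserted true
    }
    negative_facts = {
        ("acme", "hasSupplier", "initech"),     # explicitly asserted false
    }

    def closed_world(statement):
        """Closed world: anything not known to be true is false."""
        return statement in facts

    def open_world(statement):
        """Open world: true if asserted, false only if denied, otherwise unknown."""
        if statement in facts:
            return True
        if statement in negative_facts:
            return False
        return "unknown"

    query = ("acme", "hasSupplier", "hooli")    # never mentioned anywhere
    print(closed_world(query))   # False -> only safe when coverage is complete
    print(open_world(query))     # 'unknown' -> leaves room to add facts later

The “unknown” answer is what allows a schema or dataset to remain incomplete today and be refined tomorrow without invalidating earlier conclusions.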

Archimedes is credited with the apocryphal quote, “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” [6] I have also had lawyer friends tell me that the essence of many court cases is found in a single pivotal assertion or statement in the arguments. I think it fair to say that the open world approach plays just such a pivotal role in unlocking an adaptive way for IT to move forward.

Bringing the Factors Together via Open SEAS

As Mulholland notes, we have donated our Open SEAS methodology [7] to MIKE2.0 in the hopes of seeing greater adoption and collaboration. This is useful, and all are welcome to review, comment on and contribute to the methodology, as indeed is the case for all aspects of MIKE2.0.

But the essential point of this article is that Open SEAS also embraces most — if not all — of the factors necessary to address the New Normal IT function.

Pillars of the Open Semantic Enterprise

Open SEAS is explicitly designed to facilitate becoming an open semantic enterprise. Namely, this means an organization that uses the languages and standards of the semantic Web, including RDF, RDFS, OWL, SPARQL and others to integrate existing information assets, using the best practices of linked data and the open world assumption, and targeting knowledge management applications. It does so based on Web-oriented architectures and approaches and uses ontologies as an “integration layer” across existing assets.
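
As a minimal sketch of what such an ontology “integration layer” looks like in practice, the Python below uses the open source rdflib library (one possible tool; the post itself does not prescribe it) to map two records from different internal systems onto a shared, illustrative vocabulary and then query them together with SPARQL. The namespace, record fields and property names are all assumptions made for the example.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/ontology/")   # illustrative shared vocabulary
    g = Graph()

    # A record from a CRM system and a record from a product database,
    # each re-expressed as triples against the same ontology terms.
    crm_row = {"id": "cust-17", "name": "Acme Corp", "region": "Midwest"}
    pdb_row = {"id": "prod-9", "name": "Widget X", "sold_to": "cust-17"}

    cust = EX[crm_row["id"]]
    g.add((cust, RDF.type, EX.Customer))
    g.add((cust, RDFS.label, Literal(crm_row["name"])))
    g.add((cust, EX.region, Literal(crm_row["region"])))

    prod = EX[pdb_row["id"]]
    g.add((prod, RDF.type, EX.Product))
    g.add((prod, RDFS.label, Literal(pdb_row["name"])))
    g.add((prod, EX.soldTo, EX[pdb_row["sold_to"]]))

    # One SPARQL query now spans what were two separate silos.
    q = """
    PREFIX ex: <http://example.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?product ?customer ?region WHERE {
        ?p ex:soldTo ?c .
        ?p rdfs:label ?product .
        ?c rdfs:label ?customer .
        ?c ex:region ?region .
    }
    """
    for row in g.query(q):
        print(row.product, "->", row.customer, "(", row.region, ")")

Neither source system is changed; the triples sit above the existing assets as the interoperability layer described in this section.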

The foundational approaches to the open semantic enterprise do not necessarily mean open data or open source (though they are suitable for these purposes, with many open source tools available). The techniques can equivalently be applied to internal, closed, proprietary data and structures. The techniques can themselves be used as a basis for bringing external information into the enterprise. ‘Open’ here refers to the critical use of the open world assumption.

These practices do not require replacing current systems and assets; they can be applied equally to public or proprietary information; and they can be tested and deployed incrementally at low risk and cost. The very foundations of the practice encourage a learn-as-you-go approach and active and agile adaptation. While embracing the open semantic enterprise can lead to quite disruptive benefits and changes, it can be accomplished as such with minimal disruption in itself. This is its most compelling aspect.

We believe this offers IT an exciting, incremental and low-risk path for moving forward. All existing assets can be left in place and — in essence — modernized in place. No massive shifts and no massive commitments are required. As benefits and budgets allow, the extent of the semantic interoperability layer may be extended as needed and as affordable.

The open semantic enterprise is neither magic nor a panacea. Simply consider it as bringing rationality to what has become a broken IT system. Embracing the open semantic enterprise can help the New Normal be a good and more adaptive normal.


[1] Thomas Wailgum, 2010. “Why the New Normal Could Kill IT,” March 12, 2010 online story in CIO Magazine; see http://www.cio.com/article/575563/Why_the_New_Normal_Could_Kill_IT.
[2] John Blossom, 2010. “Enterprise Publishing and the “New Normal” in I.T. – Are You Missing the Trend?,” March 15, 2010 blog post on ContentBlogger; see http://www.shore.com/commentary/weblogs/2010/03/enterprise-publishing-and-new-normal-in.html.
[3] Andy Mulholland, 2010. “Meet MIKE – Methodology for Managing Data and its Use,” March 12, 2010 ePractice.eu Blog post; see http://www.epractice.eu/en/blog/309352. Also, see Mulholland’s Capgemini CTO Blog.
[4] Bette Davis (as Margo Channing) uttered this famous line in All About Eve (1950).
[5] MIKE2.0 (Method for an Integrated Knowledge Environment) is an open source methodology for enterprise information management that provides a framework for information development. The MIKE2.0 Methodology is part of the overall Open Methodology Framework and is a collaborative effort to help organisations that have invested heavily in applications and infrastructures, but haven’t focused on the data and information needs of the business.
[6] As quoted in The Lever: “Archimedes, however, in writing to King Hiero, whose friend and near relation he was, had stated that given the force, any given weight might be moved, and even boasted, we are told, relying on the strength of demonstration, that if there were another earth, by going into it he could remove this,” from Plutarch (c. 45-120 AD) in the Life of Marcellus, as translated by John Dryden (1631-1700).
[7] The Open SEAS framework is part of the MIKE2.0 Semantic Enterprise Solution capability. It adds some 40 new resources to this area, importantly including reasoning for the validity of statements in ‘open’ situations.
Posted: March 1, 2010

New Release Builds on the MIKE2.0 Methodology and Deliverables


Today, Structured Dynamics is pleased to release Open SEAS, its methodology for Semantic Enterprise Adoption and Solutions. At the same time, we are donating the framework to the open source MIKE2.0 Method for an Integrated Knowledge Environment project.

Open SEAS provides a framework for the enterprise to establish a coherent, consistent and interoperable layer across its information assets. It is compliant with the MIKE2.0 Semantic Enterprise Solution Offering.

Open SEAS has been developed for enterprises desiring to initiate or extend their involvement with semantic technologies. It is inherently incremental, low-cost and low-risk.

Donation and Relation to MIKE2.0

Concurrent with this release, Structured Dynamics is also donating the methodology and all of its related intellectual assets to the MIKE2.0 project. Under a Creative Commons license and MIKE2.0's content governance policies, the community's current 2000+ members are now free to expand and use the Open SEAS methodology in any manner they see fit.

Last week, I began to introduce MIKE2.0 and its methodology to the readers of this blog. MIKE2.0 provides a complete delivery environment and methodology for information management projects in the enterprise. Solutions — from the specific to the composite — are described and packaged with respect to plans, management communications, products (open source and proprietary), activities, benchmarks, and deliverables. Delivery is accomplished over multiple increments, split into five phases from definition and planning to deployment. The assets associated with this framework are based first on templates and guidelines that can be applied to any information management area. The framework allows for multiple projects to be combined and inter-related, all under a common methodology. More information and a good entry point are provided on the What is MIKE2.0? page on the project’s main Web site.

MIKE2.0 presently has some 800 resources across about 40 solution areas. With Structured Dynamics’ donation, there are now about 40 resources related to the semantic enterprise, many of them major, accompanied by many images and figures. This contribution makes the Semantic Enterprise Solution Offering instantly one of the more complete within MIKE2.0. As noted below, this contribution is also just the beginning of our commitment.

Basic Overview of Open SEAS

The Open SEAS framework is Structured Dynamics’ specific implementation framework for MIKE2.0's Semantic Enterprise Solution Offering. This section overviews some of Open SEAS' key facets.

A Grounding in the Open World Approach

Many enterprise information systems, particularly relational ones, embody a closed world assumption that holds that any statement that is not known to be true is false. This premise works well where there is complete coverage of specific items, such as the enumeration of all customers or all products.

Yet, in most areas of the real (“open”) world there is no guarantee or likelihood of complete coverage. Under an open world assumption the lack of a given assertion or fact does not imply whether that possible assertion is true or false: it simply is not known. An open world assumption is one of the key factors that defines the open Semantic Enterprise Offering and enables it to be deployed incrementally. It is also the basis for enabling linkage to external (often incomplete) datasets.
[Figure: Pillars of the Open Semantic Enterprise]

Fortunately, there is no requirement for enterprises to make some philosophical commitment to either closed- or open-world systems or reasoning. It is perfectly acceptable to combine traditional closed-world relational systems with open-world reasoning. It is also not necessary to make any choices or trade-offs about using public v. private data or combinations thereof. All combinations are acceptable when the basis for integration is an open-world one.

Open SEAS is grounded in this “open” style. It can be employed in virtually any enterprise circumstance and at any scope, and expanded in a similar way as budget and needs allow.

Other Basic Pillars to the Framework

Open SEAS is based on seven pillars, which themselves form the basis for the MIKE2.0 Guiding Principles for the Open Semantic Enterprise. These principles cover the data model, architecture, deployment practices and approach for how an enterprise can begin and then extend its use of semantics for information interoperability.

Important aspects include linked data and Web-oriented architecture, but it is really the unique combination of the open-world approach with the RDF data model and its semantic power that provides the distinctive difference for Open SEAS. An exciting prospect — but still in its early stages of discovery and implementation — is the role of adaptive ontologies to power ontology-driven applications. These prospects, if fully realized, could totally remake how knowledge workers interact with and specify the applications that manage their information environment.

Embracing the Layered Semantic Enterprise Architecture

Open SEAS also fully embraces the Layered Semantic Enterprise Architecture of MIKE2.0's Semantic Enterprise Offering. This architecture acts as a subsequent set of functions or middleware with respect to MIKE2.0's standard SAFE Architecture. Most of the existing SAFE architecture resides in the Existing Assets layer. The specific aspects of Open SEAS reside in the layers above, namely the Access/Conversion, Ontologies and Applications layers.

Using (Mostly) Open Source to Fill Gaps in the Technology Stack

Stitching together this interoperability layer above existing information and infrastructure assets requires many diverse tools and products, and there still are gaps. The layer figure below shows the semantic enterprise architecture overlaid with some representative open source projects and tools that plug some of those gaps.

Open SEAS also maintains a comprehensive roster of open source and proprietary tools in all aspects of semantic technology, ranging from data storage and converters, to Web services and middleware, and then to ultimate user applications. A database of nearly 1,000 tools in all areas is maintained for potential applicability to the methodology.

Quick, Adaptive, Agile Increments

The inherently incremental nature of the Open SEAS framework encourages experimentation, affordable deployments, and experience gathering. Because the systems and deployments put into place with this framework are based on the open world approach and use the extensible RDF data model, expansions in scope, sophistication or domain can be incorporated at any time without adverse effects on existing assets or systems or prior Open SEAS deployments.

Quick and (virtually) risk-free increments mean that adopting semantic approaches in the enterprise can be accelerated (or not) based on empirical benefits and available budgets.

An Emphasis on Learning

The Open SEAS framework is built on a solid foundation, but it is also one that is incomplete. Deployments of semantic technologies and approaches are still quite early in the enterprise, whether measured in numbers, scope or depth. In order for the framework — and the practice of semantic adoption in general — to continue to expand and be relevant in the enterprise, active learning and documentation are essential. One of the reasons for the affiliation of Open SEAS with MIKE2.0 is to leverage these strong roots in methodological learning.

Where Do We Go From Here?

The nature of Open SEAS and its parent Semantic Enterprise Solution Offering touches most offerings within the MIKE2.0 framework. There is much to be done to integrate the semantic enterprise perspective into these other possibilities, plus much that needs to be learned and documented for the offering itself. The concept of the semantic enterprise, after all, is relatively new with few prominent case studies.

As the offering points out, there are some dozens of additional necessary resources that are available and ready to be packaged and moved into the MIKE2.0 framework. These efforts are a priority, and will continue over the coming weeks.

But, more importantly, beyond that, the experience and practitioner base needs to grow. Much is unknown regarding key aspects of the offering:

  • What are the priority application areas which promise the greatest return on investment?
  • What are best practices for adoption and technologies across the entire semantic enterprise stack?
  • Many tools and techniques are still legacies and outgrowths of the research and academic communities. How can these be adopted and modified to meet enterprise standards and expectations?
  • What are the “best” ontology and vocabulary building blocks upon which to model and help frame the enterprise’s interoperability needs?
  • What are the most cost-effective strategies for leveraging existing information and infrastructure assets, while transitioning away from them where appropriate?

Despite these questions, emergence is the way complex systems arise out of a multitude of relatively simple interactions, exhibiting new and unforeseen properties in the process. RDF is an emergent model. It begins as simple “fact” statements of triples, which may then be combined and expanded into ever-more complex structures and stories. As an internal, canonical data model, RDF has advantages for information federation and development over any other approach. It can represent, describe, combine, extend and adapt data and their organizational schema flexibly and at will. Applications built upon RDF can explore and analyze in ways not easily available with other models.
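
A small sketch of that emergent, incremental quality, again using the rdflib library and invented URIs (assumptions for illustration, not part of the offering itself): facts asserted today stay valid and queryable when new facts and new schema are layered on later.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Day 1: a couple of bare "fact" triples, with hardly any schema at all.
    g.add((EX.ReportQ1, RDF.type, EX.Report))
    g.add((EX.ReportQ1, EX.covers, EX.Midwest))

    # Day 30: the schema is refined incrementally; Report is now declared a kind
    # of Document, without touching or negating anything already asserted.
    g.add((EX.Report, RDFS.subClassOf, EX.Document))
    g.add((EX.MemoA, RDF.type, EX.Document))

    # The graph simply grows; the earlier statements remain in place.
    for s, p, o in sorted(g):
        print(s, p, o)
    print(len(g), "triples")

With an RDFS/OWL reasoner layered on top, the subclass statement would also let ReportQ1 be retrieved as a Document, but even without reasoning the point stands: structure accretes without rework.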

Combined with an open-world approach, new information can be brought in and incorporated into the framework step-by-step. Perhaps the greatest promise in an ongoing transition to become a semantic enterprise is how an inherently incremental and building-block approach might alter prior practices and risks across the entire information management spectrum.

We invite you to join us and to contribute to this effort. I encourage you to join MIKE2.0 if you have not already done so, and check out announcements on this blog for ongoing developments.

Posted: February 23, 2010

A Maturing Standard, Worthy of Adoption

Enterprises are hungry for guidance and assistance in learning how to embrace semantics and semantic technologies in their organization. Because of our services and products and my blog writings, we field many inquiries at Structured Dynamics about best practices and methods for transitioning to a semantic enterprise.

Until the middle of last year, we had been mostly focused on software development projects and our middleware efforts via things like conStruct, structWSF, irON and UMBEL. While we also were helping in early engagement and assessment efforts, it was becoming clear that more formalized (and documented!) methods and techniques were warranted. We needed concrete next steps to offer an organization once it became intrigued and then excited about what it might mean to become a semantic enterprise.

For decades, of course, various management and IT consultancies have focused on assisting enterprises adopt new work methods and information management approaches and technologies. These practices have resulted in a wealth of knowledge and methods, all attuned to enterprise needs and culture. Unfortunately, these methods have also been highly proprietary and hidden behind case studies and engagements often purposely kept from public view.

So, in parallel with formulating and documenting our own approaches — some of which are quite new and unique to the semantic space (with its open world flavor as we practice it) — we also have been active students of what others have done and written about information management assessment and change in the enterprise. Despite the hundreds of management books published each year and the deluge of articles and pundits, there are surprisingly few “meaty” sources of actual methods and templates around which to build concrete assessment and adoption methods.

The challenge here is not simply to present a few ideas or to spin some writings (or a full book!) around them. Rather, we need the templates, checklists, guidances, tools listings, frameworks, methods, test harnesses, codified approaches, scheduling and budgeting constructs, and so forth that take initial excitement and ideas to prototyping and then deployment. These methodological assets take tens to hundreds of person-years to develop. They must also embody the philosophies and approaches consistent with our views and innovations.

Customers like to see the methods and deliverables that assessment and planning efforts can bring to them. But traditional consultancies have been naturally reluctant to share these intellectual assets with the marketplace — unless for a fee. Like many growing small companies before us, Structured Dynamics was thus embarking on systematically building its own assets up, as engagements and time allowed.

Welcome to MIKE2.0 and A Bit of History

I first heard of MIKE2.0 from Alan Morrison of PricewaterhouseCoopers’ Center for Technology and Innovation and from Steve Ardire, a senior advisor to SD. My first reaction was pretty negative, both because I couldn’t imagine why anyone would name a methodology after me (hehe) and because I have been pretty cool to the proliferation of version numbers for things other than software or standards.

However, through Alan and Steve’s good offices we were then introduced to two of the leaders of MIKE2.0, Sean McClowry of PWC and then Rob Hillard of Deloitte. Along with BearingPoint, the original initiator and contributor to MIKE2.0, these three organizations and their key principals provide much of the organizational horsepower and resource support to MIKE2.0.

Based on the fantastic support of the community and the resources of MIKE2.0 itself (see concluding section on Why We Like the Framework), we began digging deeper into the MIKE2.0 Web site and its methodology and resources. For the reasons summarized in this article, we were amazed with the scope and completeness of the framework, and very comfortable with its approach to developing working deployments consistent with our own philosophy of incremental expansion and learning.

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for enterprise information management. It provides a comprehensive methodology (747 significant articles so far) that can be applied across a number of different projects within the information management space. While initially focused around structured data, the goal of MIKE2.0 is to provide a comprehensive methodology for any type of information development.

Information development is an approach organizations can apply to treat information as a strategic asset through their complete supply chain: from how it is created, accessed, presented and used in decision-making to how it is shared, kept secure, stored and destroyed. Information development is a key concept of the MIKE2.0 methodology and a central tenet of its philosophy:

The concept of Information Development is based on the premise that due to its complexity, we currently lack the methods, technologies and skills to solve our information management challenges. Many of the techniques in use today are relatively immature and fragmented and the problems keep getting more difficult to solve. This is one of the reasons we see so many problems today and why organizations that manage information well are so successful.

MIKE2.0 is not a framework for general transactional or operational purposes regarding data or records in the enterprise. (Though it does support functions related to analyzing that information.) Rather, MIKE2.0 is geared to the knowledge management or information management environment, with a clear emphasis on enterprise-wide issues, information integration and collaboration.

The MIKE2.0 methodology was initially created by a team from BearingPoint, a leading management and technology consultancy. The project started as “MIKE2”, an internal approach to aid enterprises in improving their information management. The MIKE2 initiative was started in early 2005 and the methodology was brought through a number of release cycles until it reached a mature state in late 2005. “MIKE2.0” involved taking this approach and making it open source and more collaborative. Much of the content of the MIKE2.0 methodology was made available to the open source community in late December 2006. The actual MIKE2.0 Web site and release occurred in 2007.

Anyone can join MIKE2.0, which adheres to an open source and Creative Commons model. Governance of MIKE2.0 is based on a meritocracy model, similar to the principles followed by the Apache Software Foundation.

There is much additional background on MIKE2.0. Also, for an explanation of the rationale for the framework, see the MIKE2.0 article, A New Model for the Enterprise.

A Surprisingly Robust and Complete Framework

MIKE2.0 provides a complete delivery framework for information management projects in the enterprise. The assets associated with this framework are based first on templates and guidelines that can be applied to any information management area. This is a key source of our interest in the framework.

But, there is also real content behind these templates. There is a slate of “solution offerings” geared to most areas of enterprise information management. There are “solution capabilities” that describe the tools and templates by which these solutions need to be specified, planned and tracked. There are frameworks for relating specific vendor and open source tools to each offering. And, there are general strategic and other guidances for how to communicate the current state of the discipline as well as its possible future states.

The next diagram captures some of these major elements:

Perhaps the most important aspects of this framework, however, are the ways by which it provides solid guidance for how entirely new solution areas — the semantic enterprise, for example, in Structured Dynamics’ own case — can be expressed and “codified” in ways meaningful to enterprise customers. These frameworks provide a common competency across all areas of enterprise interest in information development and management. For a relatively new and small vendor such as us, this framework provides a credible meeting ground with the market.

A Phased and Incremental Approach to Information Development

The fundamental approach to a MIKE2.0 offering is staged and incremental. This is very much in keeping with Structured Dynamics’ own philosophy, which, more importantly, also is consonant with the phased adoption and expansion of open semantic technologies within the enterprise.

Under the MIKE2.0 framework, the first two phases relate to strategy and assessment. The next three phases (of the five standard ones) produce the first meaningful implementation of the offering. Depending on the maturity of the offering, that may range from a prototype to a broader deployment. Thereafter, scale-out and expansion occur via a series of potential increments:

The incremental aspects of the later three phases are not dissimilar from “spiral” deployments common to some government procurements. The truth remains, however, that actual experience is quite limited in later increments, and whether these methodologies can hold over long periods of time is unknown. Despite this caution, most failures occur in the earliest phases of a project. MIKE2.0 has strong framework support in these early phases.

A Broad Spectrum of Capabilities, Assets and Solutions

MIKE2.0 “solutions” are presented as offerings ranging from single ones to a variety of clusters or groupings. These types reflect the real circumstances of applications and deployments at either the departmental or enterprise level. They may range from systematic approaches to those that address specific business and technology problems. Tools and solutions may be work process, human, or technological, proprietary or open.

An overarching purpose of the MIKE2.0 methodology is to couch these variations into a consistent and holistic framework that allows individual or multiple pieces to be combined and inter-related. This consistency is a key to the core objective of information management interoperability across whatever solution profile the enterprise may choose to adopt.

This objective is best expressed via the Overall Implementation Guide. Thus, while detailed aspects of MIKE2.0's solution offerings may encompass very specific techniques, design patterns and process steps, these pieces can be combined into meaningful wholes.

This spectrum of solution possibilities is organized according to:

  • Vendor Solution Offerings – integrated approaches to solving problems from a vendor perspective; these are often product-specific
  • The SAFE Architecture – an adaptive framework that is flexible and extensible, and can be deployed incrementally

These groupings are shown in the diagram below, with the “core” and composite groupings shown in the middle:

Nearly Two Score Core and Composite Offerings

These central core and composite groupings, of course, are comprised of more focused and specific solutions. While it is really not the purpose of this piece to describe any of these MIKE2.0 specifics in detail, the next diagram helps illustrate the scope and breadth of the current framework.

Here are some 30+ individual “core” solution offerings:

These are also accompanied by 8 or so cross-cutting “composite” solutions that reach across many of the core aspects.

Whether core or composite, there is a patterned set of resources, guidances and templates that accompanies each solution. The MIKE2.0 Web site and resources are generally organized around these various core or composite solutions.

Why We Like the Framework

MIKE2.0 is a project that walks its talk. Here are some of the reasons why we like the framework and how it is managed, and why we plan to be active participants as it moves forward:

  • Open source with a true collaboration and welcoming commitment
  • A sympatico philosophy and grounding
  • Knowledgeable and friendly leaders and governance structure
  • An incremental, adaptive framework, in keeping with our own beliefs and the reality of the marketplace for emerging practices
  • A proven core methodology, with many existing templates and tools for leveraging methodology development and extensions
  • Much available content (about 800 articles as of this date)
  • A smart, active community contributing on a constant basis
  • Backup and support from leading management and IT consultants
  • Active evangelists, with attention to community communications and care-and-feeding
  • The MIKE2.0 environment itself (based on the OmCollab collaboration platform), which is well thought out with many nice community and content aspects
  • A budget and roadmap for how to extend the methodology and achieve the vision.

We invite you to learn more about MIKE2.0 and join with us in helping it to continue to grow and mature.

And, oh, as to that aversion to the MIKE2.0 name? Well, with our recent addition of Citizen DAN, it is apparent we are adopting as many boys as we can. Welcome to the family, MIKE2.0!
