Enterprises are hungry for guidance and assistance in learning how to embrace semantics and semantic technologies in their organization. Because of our services and products and my blog writings, we field many inquiries at Structured Dynamics about best practices and methods for transitioning to a semantic enterprise.
Until the middle of last year, we had been mostly focused on software development projects and our middleware efforts via things like conStruct, structWSF, irON and UMBEL. While we were also helping in early engagement and assessment efforts, it was becoming clear that more formalized (and documented!) methods and techniques were warranted. We needed concrete next steps to offer organizations once they became intrigued, and then excited, about what it might mean to become a semantic enterprise.
For decades, of course, various management and IT consultancies have focused on helping enterprises adopt new work methods and information management approaches and technologies. These practices have resulted in a wealth of knowledge and methods, all attuned to enterprise needs and culture. Unfortunately, these methods have also been highly proprietary and hidden behind case studies and engagements often purposely kept from public view.
So, in parallel with formulating and documenting our own approaches — some of which are quite new and unique to the semantic space (with its open world flavor as we practice it) — we have also been active students of what others have done and written about information management assessment and change in the enterprise. Despite the hundreds of management books published each year and the deluge of articles and pundits, there are surprisingly few “meaty” sources of actual methods and templates around which to build concrete assessment and adoption methods.
The challenge here is not simply to present a few ideas or to spin some writings (or a full book!) around them. Rather, we need the templates, checklists, guidances, tools listings, frameworks, methods, test harnesses, codified approaches, scheduling and budgeting constructs, and so forth that take initial excitement and ideas to prototyping and then deployment. Such methodological assets take tens to hundreds of person-years to develop. They must also embody philosophies and approaches consistent with our views and innovations.
Customers like to see the methods and deliverables that assessment and planning efforts can bring to them. But traditional consultancies have naturally been reluctant to share these intellectual assets with the marketplace, except for a fee. Like many growing small companies before us, Structured Dynamics was thus embarking on systematically building up its own assets, as engagements and time allowed.
I first heard of MIKE2.0 from Alan Morrison of PricewaterhouseCoopers’ Center for Technology and Innovation and from Steve Ardire, a senior advisor to SD. My first reaction was pretty negative, both because I couldn’t fathom why anyone would name a methodology after me (hehe) and because I have been pretty cool to the proliferation of version numbers for things other than software or standards.
However, through Alan and Steve’s good offices we were then introduced to two of the leaders of MIKE2.0, Sean McClowry of PwC and then Rob Hillard of Deloitte. Along with BearingPoint, the original initiator of and contributor to MIKE2.0, these three organizations and their key principals provide much of the organizational horsepower and resource support to MIKE2.0.
Based on the fantastic support of the community and the resources of MIKE2.0 itself (see concluding section on Why We Like the Framework), we began digging deeper into the MIKE2.0 Web site and its methodology and resources. For the reasons summarized in this article, we were amazed with the scope and completeness of the framework, and very comfortable with its approach to developing working deployments consistent with our own philosophy of incremental expansion and learning.
Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for enterprise information management. It provides a comprehensive methodology (747 significant articles so far) that can be applied across a number of different projects within the information management space. While initially focused around structured data, the goal of MIKE2.0 is to provide a comprehensive methodology for any type of information development.
Information development is an approach organizations can apply to treat information as a strategic asset through its complete supply chain: from how it is created, accessed, presented and used in decision-making to how it is shared, kept secure, stored and destroyed. Information development is a key concept of the MIKE2.0 methodology and a central tenet of its philosophy.
MIKE2.0 is not a framework for general transactional or operational purposes regarding data or records in the enterprise. (Though it does support functions related to analyzing that information.) Rather, MIKE2.0 is geared to the knowledge management or information management environment, with a clear emphasis on enterprise-wide issues, information integration and collaboration.
The MIKE2.0 methodology was initially created by a team from BearingPoint, a leading management and technology consultancy. The project started as “MIKE2”, an internal approach to helping enterprises improve their information management. The MIKE2 initiative began in early 2005, and the methodology went through a number of release cycles until it reached a mature state late that year. “MIKE2.0” involved taking this approach and making it open source and more collaborative. Much of the content of the MIKE2.0 methodology was made available to the open source community in late December 2006. The actual MIKE2.0 Web site and release occurred in 2007.
Anyone can join MIKE2.0, which adheres to an open source and Creative Commons model. Governance of MIKE2.0 is based on a meritocracy model, similar to the principles followed by the Apache Software Foundation.
MIKE2.0 provides a complete delivery framework for information management projects in the enterprise. The assets associated with this framework are based, first, on templates and guidelines that can be applied to any information management area. This is a key source of our interest in the framework.
But, there is also real content behind these templates. There is a slate of “solution offerings” geared to most areas of enterprise information management. There are “solution capabilities” that describe the tools and templates by which these solutions need to be specified, planned and tracked. There are frameworks for relating specific vendor and open source tools to each offering. And, there are general strategic and other guidances for how to communicate the current state of the discipline as well as its possible future states.
The next diagram captures some of these major elements:
Perhaps the most important aspect of this framework, however, is the way it provides solid guidance for how entirely new solution areas — the semantic enterprise, for example, in Structured Dynamics’ own case — can be expressed and “codified” in ways meaningful to enterprise customers. These frameworks provide a common competency across all areas of enterprise interest in information development and management. For a relatively new and small vendor such as us, this framework provides a credible meeting ground with the market.
The fundamental approach to a MIKE2.0 offering is staged and incremental. This is very much in keeping with Structured Dynamics’ own philosophy, which, more importantly, is also consonant with the phased adoption and expansion of open semantic technologies within the enterprise.
Under the MIKE2.0 framework, the first two phases relate to strategy and assessment. The next three phases (of the five standard ones) produce the first meaningful implementation of the offering. Depending on the maturity of the offering, that implementation may range from a prototype to a broader deployment. Thereafter, scale-out and expansion occur via a series of potential increments:
The incremental aspects of the later three phases are not dissimilar from the “spiral” deployments common to some government procurements. The truth remains, however, that actual experience with the later increments is quite limited, and whether these methodologies can hold up over long periods of time is unknown. That caution aside, most failures occur in the earliest phases of a project, and MIKE2.0 has strong framework support in those early phases.
MIKE2.0 “solutions” are presented as offerings, ranging from single solutions to a variety of clusters or groupings. These types reflect the real circumstances of applications and deployments at either the departmental or enterprise level. They may range from systematic approaches to those that address specific business and technology problems. Tools and solutions may be based on work processes, people or technology, and may be proprietary or open.
An overarching purpose of the MIKE2.0 methodology is to couch these variations into a consistent and holistic framework that allows individual or multiple pieces to be combined and inter-related. This consistency is a key to the core objective of information management interoperability across whatever solution profile the enterprise may choose to adopt.
This objective is best expressed via the Overall Implementation Guide. Thus, while detailed aspects of MIKE2.0’s solution offerings may encompass very specific techniques, design patterns and process steps, these pieces can be combined into meaningful wholes.
This spectrum of solution possibilities is organized according to:
These groupings are shown in the diagram below, with the “core” and composite groupings shown in the middle:
These central core and composite groupings, of course, comprise more focused and specific solutions. While it is really not the purpose of this piece to describe any of these MIKE2.0 specifics in detail, the next diagram helps illustrate the scope and breadth of the current framework.
Here are some 30+ individual “core” solution offerings:
These are also accompanied by 8 or so cross-cutting “composite” solutions that reach across many of the core aspects.
Whether core or composite, there is a patterned set of resources, guidances and templates that accompanies each solution. The MIKE2.0 Web site and resources are generally organized around these various core or composite solutions.
MIKE2.0 is a project that walks its talk. Here are some of the reasons why we like the framework and how it is managed, and why we plan to be active participants as it moves forward:
We invite you to learn more about MIKE2.0 and join with us in helping it to continue to grow and mature.
And, oh, as to that aversion to the MIKE2.0 name? Well, with our recent addition of Citizen DAN, it is apparent we are adopting as many boys as we can. Welcome to the family, MIKE2.0!
It is gratifying to see the emergence of the term semantic enterprise, with much increased attention and commentary. But, similar to different styles and patterns in software programming, there is no single (nor, depending on circumstance, best) way to approach becoming a semantic enterprise.
In this piece I contrast two styles. The more traditional and familiar one is comprehensive, complete and “engineered” in its approach. The second, emerging style is more adaptive and incremental. While Structured Dynamics is a proponent and thought leader for the adaptive style, the use and applicability of either approach is really a function of objectives and circumstances. The choice of approach depends on use case and should not be a dogmatic one.
Any time a contrast is posed, one should be on guard about setting up a rhetorical strawman. There may perhaps be a bit of this flavor in this article; if so, it is unintended. It is probably best to realize that there is a gradient — or spectrum — of possible approaches between these contrasting styles. The real message is to understand these differences such that you can comfortably place your own organization at the right points along this spectrum.
The general idea of semantics in the enterprise precedes the use of the term, having been somewhat captured before by the ideas of enterprise application integration, enterprise information integration and other concepts even related to data federation and data warehousing stretching back to the 1980s. However, as a specific label, we can look back to the first mentions in the late 1990s and more concerted attention beginning from about 2002 or so onward. As another indicator, since 2005 the Semantic Technology Conference has given specific prominence to the enterprise.
Throughout this period, the emphasis from academic papers, many vendors, and most pundits has been on things like automated reasoning, machine-aided decision making, aspects of artificial intelligence, and so forth. The general tone is often framed as “revolution” or “massive changes” or something “entirely new.” If you are a consultant or software/implementation vendor — especially where VC money is backing the venture with hopes for big returns and home runs — it may make cynical sense to sell such large and costly change.
I believe there are circumstances where the Semantic Enterprise writ this large may make sense and be financially justified. But this kind of “big change” view has also seen relatively few visible (or successful) deployments. It has colored what it means to be a semantic enterprise. And, I believe, it has weakened market credibility by perhaps overpromising and underdelivering. The conventional view of what it is to be a semantic enterprise deserves to be balanced.
So, as we balance this understanding of the semantic enterprise to one that is more nuanced, we can contrast the characteristics of the two styles as follows:
[Table: Characteristics of the Comprehensive, ‘Engineered’ Style vs. Characteristics of the Adaptive, Incremental Style]
Note we have labeled the conventional approach as the “comprehensive, engineered” style; its contrast, and the one we position more closely to, is the “adaptive, incremental” style.
[Others have posited contrasting styles, most often as “top down” vs. “bottom up.” However, in one interpretation of that distinction, “top down” means a layer on top of the existing Web. On the other hand, “top down” is more often understood in the sense of a “comprehensive, engineered” view, consistent with my own understanding. Yet no matter which characterization, neither captures what I feel to be the more important considerations of mindset, logic and premise.]
Though the table above contrasts many points, I think there are two main distinctions to the adaptive approach. First, it firmly embraces the open world assumption (OWA). OWA is key to an incremental, “learn as you go” deployment that is also well suited to the incorporation of external information. The second main distinction is to leverage and build from existing assets.
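The open world point is worth making concrete. Here is a toy sketch (my own illustration, with hypothetical facts and predicate names — nothing from MIKE2.0 or any particular product) of how the same query over an incomplete fact base is answered under the two assumptions:

```python
# Toy illustration of closed-world (CWA) vs. open-world (OWA) query semantics.
# The fact base and predicate names are hypothetical.
facts = {
    ("acme", "ships_to", "US"),
    ("acme", "ships_to", "EU"),
}

def cwa_holds(triple):
    """Closed world: anything not recorded is taken to be false."""
    return triple in facts

def owa_holds(triple, asserted_false=frozenset()):
    """Open world: an absent fact is 'unknown' (None), not false;
    only an explicit negative assertion yields False."""
    if triple in facts:
        return True
    if triple in asserted_false:
        return False
    return None  # unknown -- newly integrated data may settle it later

query = ("supplier_x", "ships_to", "EU")
print(cwa_holds(query))  # False: the closed world treats the gap as a negative
print(owa_holds(query))  # None: the open world leaves room to learn as you go
```

The three-valued answer is what makes incremental deployment safe: adding new (possibly external) facts later merely converts “unknown” into “true” or “false”, rather than contradicting earlier negative conclusions.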
Yet as noted in the opening, which of these approaches makes better sense depends on circumstance. One aspect of circumstance is available budget and deployment times for pilots or proofs-of-concept. Another aspect, of course, is the planned use or application for the deployment.
These are by no means hard distinctions, but in general we can see these contrasting approaches applying to the following uses:
[Table: Applications and Uses for the Comprehensive, ‘Engineered’ Style (i.e., more CWA driven) vs. Applications and Uses for the Adaptive, Incremental Style (i.e., more OWA driven)]
A critical distinction is the nature of the enterprise itself. “External-facing” enterprises or functions that want or need to incorporate much external information (say, marketing or competitive intelligence) are advised to look closely at the adaptive approach. Organizations that have more complete control over their circumstances should perhaps focus on the conventional approach.
In previous writings I have pointed to the manifest benefits that can accrue to the semantic enterprise [see, esp. 10]. But we have also witnessed nearly a decade of promotion for semantics in the enterprise, with perhaps a lack of progress in some areas or unmet promises in others. This record raises questions and skepticism about the real eventual costs and benefits.
I believe some of this skepticism is inherent with anything new — the general IT fatigue with whatever the current “next great thing” might be. But I also believe that some of this skepticism results from an approach to semantics in the enterprise that is both lengthy to deploy and high cost.
The key advantage of the adaptive, incremental approach is that the whole IT game in the enterprise can change. An open world approach enables adoption as it proves itself and as budgets allow. Commitments made under this approach have, in essence, permanent value. Past fears and concerns about making “wrong” bets no longer apply. With learning, targets can be re-adjusted, structure re-defined and applications re-focused, all as new discoveries and broadening scope dictate.
This does not make the adaptive approach better than the conventional one. But, it does make it less risky and, well, more adaptive.
As we see more collaboration forums emerge, one question that naturally arises is the joint authoring or editing of images. This is particularly important as “official” slide decks or presentations come to the fore.
Like many of you, I have been creating and editing images for years. I am by no means a graphics artist, but images and diagrams have been essential for communicating my work.
Until a few years back, I was totally a bitmap man. I used Paint Shop Pro (bought by Corel in 2004 and getting long in the tooth) and did a lot of copying and pasting.
I switched to Inkscape about two years ago for the following reasons:
Once you have a working image in Inkscape, make sure all collaborators have a copy of the software. Then:
Of course, it is more often the case that not all collaborators have a copy of Inkscape, or that the image did not begin in the SVG format.
The image below began as a Windows PowerPoint clip art file, which has since gone through some modifications. Note the bearded guy’s hand holding the paper is out of register (because I screwed up in earlier editing, but I can also easily fix it because it is a vector image!). Also note we have the border from Inkscape as suggested above. This file, BTW, is people.png, and was created as a PNG after a screen capture from Inkscape:
When beginning in PowerPoint or as clip art, files in the Windows metafile (*.wmf) or enhanced metafile (*.emf) format work well. (For example, you can download and play with the native Inkscape format of people.svg, or the people.wmf or people.emf versions of the image above.) If you already have images in a PowerPoint presentation, save them in one of these two formats, with *.emf preferred. (EMF is generally better for text.)
You can open or load these files directly into Inkscape. Generally, they will come in as a group of vectors; to edit the pieces, you should “ungroup.”
After editing per the instructions in the previous section, if you need to re-insert the image into PowerPoint, please use the *.emf format (and make sure you do not save text as paths).
Note the latter option, text not saved as paths, is the far superior one. However, also note that borders are added to the figures and vertical text is rotated 90° back to horizontal. Nonetheless, the figure is fully editable, including the text. Also, if the original Inkscape figures are constructed with lines of the same color as their fills, the border conversion works well.
Frankly, especially with text, because there can be orientation and other changes going from Inkscape to PowerPoint, I recommend using Inkscape and its native SVG for all early modifications, and keeping a canonical copy of your images there. Then, prior to completion of the deck, save as EMF for import into PowerPoint and clean up. If changes later need to be made to a graphic, I recommend making them in Inkscape and then re-importing.
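If you script your deck-building steps, the SVG-to-EMF export can be automated rather than done through the GUI. A minimal sketch, assuming an Inkscape 1.x binary on the PATH (whose command line accepts `--export-filename`); the file names and the helper itself are hypothetical:

```python
# Sketch: keep the canonical image as Inkscape SVG; export an EMF copy
# just before assembling the PowerPoint deck.
# Assumes Inkscape 1.x is installed and on the PATH; paths are placeholders.
import shlex
import subprocess

def export_emf(svg_path: str, emf_path: str, dry_run: bool = True) -> str:
    # Inkscape 1.x CLI form: inkscape input.svg --export-filename=output.emf
    cmd = ["inkscape", svg_path, f"--export-filename={emf_path}"]
    if dry_run:
        # Show what would run, for environments without Inkscape installed
        return shlex.join(cmd)
    subprocess.run(cmd, check=True)
    return emf_path

print(export_emf("people.svg", "people.emf"))
# inkscape people.svg --export-filename=people.emf
```

Run with `dry_run=False` to perform the actual conversion; the dry-run default just prints the command so the script is safe to test without Inkscape present.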
I should also note there is an option in Inkscape to convert raster images to vector ones (use Path -> Trace Bitmap… and invoke the multiple scans with colors). This is doable, but involves quite a bit of image copying, manipulation and color separation to achieve workable results. You may want to see Inkscape’s documentation on tracing, or more fully this reference dealing with color.
Of course, there are likely many other ways to approach these issues of collaboration and sharing. I will leave it to others to suggest and explain those options.