We had a party this week to celebrate my daughter and her boyfriend’s career move to Seattle. It was a great time with many reminiscences of our life here in Iowa City over the past decade. Then it struck me: almost to the day, I had now been working from a home office for 20 years!
Wow. I had not really been paying attention. That realization in turn brought back its own memories, and caused me to reflect back on my two decades of working from home.
I left my position as director of energy research at the American Public Power Association (APPA) in early June 1989. We were just a month away from my son’s birth and had decided we did not want to raise our children in Washington, DC. The District at that time was totally dysfunctional and had earned the moniker of “Murder Capital of America.” While we loved our home in Barnaby Woods (Chevy Chase) DC and our neighbors, we wanted a smaller and safer community with more connectedness.
My wife, at that time a postdoc at the National Institutes of Health in Bethesda, was also committed to her profession and career. The Washington area was unlikely to offer her an immediate prospect of a permanent position. Indeed, our generation was just coming to grips with the new challenges of dual-career families: there need to be career choices and flexibility between the partners. Some professions, like lawyering or doctoring or sales or computer programming, have much locational flexibility. Others, such as bench science in biology, my wife’s field, have less. In academia, position openings occur at their own place and time.
I had also been climbing my way up in a corporate and office environment for more than a dozen years and was ready for my own career change. As a professional, I had never been my own boss and wanted to see how I could fare in the consulting and entrepreneurial worlds. By doing so, I could also bring flexibility to my wife’s locational options for her career.
At that time, without a doubt, the enabling technology for my new career shift was the fax machine. The ability to interact with clients with documents in more-or-less real time was pivotal. My first fax machine was a Sanyo thermal model (can’t recall the model number now; it is long gone and Sanyo has since gotten out of that business). I recall buying cartons of thermal fax rolls frequently, and the copies that faded in the file cabinet drawers.
Of course, the phone and plane were also pivotal. In the early years my monthly phone bills were astronomical and I flew around 100,000 miles per year. But, it was the fax that really enabled me to cut the locational knot. But how strange: from thousands upon thousands of faxed pages in the early years to only a few per year today! The Web, of course, has really proven to be the true enabler over the past decade.
Prior to shifting to a home office it took me about 45 min to commute by bus or bike to APPA. If taking public transportation, I had to walk to the local bus stop, transfer at the Metro subway station, and then walk the remaining distance. While this gave me time to read most of the morning Washington Post, I knew that by eliminating this commute I could save 90 min a day for new productivity.
What first surprised me, though, was the fact that I also no longer needed to keep my office computer and home computer synchronized. Since, like most ambitious professionals, I also worked some in the evenings, I had overlooked the time it took to keep files and documents synchronized. I was saving about another 30 min per day in digital transfers between home and office. With this new choice to work from home, I was saving 2 hrs per day!
I had a home office in DC for only a short period before we moved to Montana. Since Montana, I have had just two further home offices.
My early experience in DC suggested I wanted a more dedicated office space, so we designed one into the home we built in Montana. I was also able to set aside dedicated office space again here in Iowa City. (The DC home office and the interim one in South Dakota prior to Iowa were converted bedrooms; definitely not recommended!)
Planning office space in advance means you can tailor the space to your work habits. For me, I want lots of natural light, a view from the windows, and lots of desk and whiteboard space. I also needed room for office equipment (copiers in the early years, fax, printers and the like) and file cabinets. In Montana, I designed and had built my own office furniture suite that makes me feel as if I’m commanding the bridge of the Starship Enterprise (see pictures of my current office).
Teaching myself and the kids that office time and office space were fairly sacrosanct was important, too. Sure, it was helpful to be around for the kids for boo-boos and emergencies and dedicated kids’ time, and to be able to be there for home repairs and the like, but for the most part I tried to treat my office as a separate space and to have the kids do so, too. Frankly, since my family has grown up with no other experience than Dad working from home, it has always felt natural and been a matter of course.
A real key is to be able to shut the office door and return to normal home life in the rest of the house. And, of course, the need for the opposite is also true. It is probably the case that I spend more time in my home office than most regular professionals do in their organization’s office, but my career has always been my passion anyway and not what I consider to be a job.
In the early years I was fairly unusual, I think, for working from home. I certainly gave many local talks and was frequently invited by service organizations to speak over lunch on the experience of “telecommuting”. Today, working from home is no longer unusual and the Internet technology and support to do so makes it a breeze.
I have been able to run both consulting and software development companies from my home office over the years. I have seen the gamut of meetings, ranging from sessions with developers before massive whiteboards in my home basement to running and coordinating 20-person companies with their own office space, investors and boards.
With a willingness to travel, it seems like all organizational possibilities are now open to the home worker. For quality of life and other reasons, the fact that today many larger knowledge organizations offer remote office centers and commuting flexibility speaks volumes to how far “telecommuting” has really come.
How much difference two decades can make!
I personally could never return to a standard office setting. For me, the home office with its flexibility and productivity and ability to find contemplative time simply can not be beat.
I really welcome what is happening in online meeting software and other Web apps that are reducing the need for travel and face-to-face meetings. For while the technology and culture has improved markedly to support working from home over the past two decades, the pain and hassle of travel has only worsened.
I have transitioned from a million-miler frequent flyer to a rooted house plant. I try to choose my travel venues carefully, and when I do travel I try to do so for longer periods to absorb the shocks. It is perhaps a too-frequent refrain, but it is just a damn shame how getting a meal, being treated with pleasure and courtesy, having some legroom, and getting a drink are air travel amenities of a now bygone era.
There are now many, many more of us (you) who work from home and it really is no longer a topic of conversation. A quick search tells me perhaps 5 million or more US workers predominantly work from home, with some 15% of all workers doing so on occasion. In the predominant professional and business services, financial activities, and education and health services, this percentage can reach as high as 30% of workers now doing paid work from home to one degree or another.
Of course, personality, job requirements, and physical space may not make working from home a good choice for you. But if you have not tried it and it sounds interesting, by all means: Try it!
For twenty years, it has been a great choice for me and for my family. This is indeed a nice 20th anniversary to celebrate!
I recently wrote about WOA (Web-oriented architecture), a term coined by Nick Gall, and how it represented a natural marriage between RESTful Web services and RESTful linked data. There was, of course, a method behind that posting to foreshadow some pending announcements from UMBEL and Zitgist.
Well, those announcements are now at hand, and it is time to disclose some of the method behind our madness.
As Fred Giasson notes in his announcement posting, UMBEL has just released some new Web services with fully RESTful endpoints. We have been working on the design and architecture behind this for some time and, all I can say is, it’s UMBELievable!
As Fred notes, there is further background information on the UMBEL project — which is a lightweight reference structure based on about 20,000 subject concepts and their relationships for placing Web content and data in context with other data — and the API philosophy underlying these new Web services. For that background, please check out those references; that is not my main point here.
We discussed much in coming up with the new design for these UMBEL Web services. Most prominent was taking seriously a RESTful design and grounding all of our decisions in the HTTP 1.1 protocol. Given the shared approaches between RESTful services and linked data, this correspondence felt natural.
What was perhaps most surprising, though, was how complete and well suited HTTP was as a design and architectural basis for these services. Sure, we understood the distinctions of GET and POST and persistent URIs and the need to maintain stateless sessions with idempotent design, but what we did not fully appreciate was how content and serialization negotiation and error and status messages also were natural results of paying close attention to HTTP. For example, here is what the UMBEL Web services design now embraces:
There are likely other services out there that embrace this full extent of RESTful design (though we are not aware of them). What we are finding most exciting, though, is the ease with which we can extend our design into new services and to mesh up data with other existing ones. This idea of scalability and distributed interoperability is truly, truly powerful.
It is almost like, sure, we knew the words and the principles behind REST and a Web-oriented architecture, but had really not fully taken them to heart. As our mindset now embraces these ideas, we feel like we have now looked clearly into the crystal ball of data and applications. We very much like what we see. WOA is most cool.
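Content negotiation is easy to illustrate. The sketch below is not the UMBEL code itself, just a minimal, generic Python picture of how a RESTful endpoint can inspect an Accept header and choose among serializations, answering 406 Not Acceptable when nothing matches; the media types and the resource are invented for the example.

```python
import json

# Toy resource standing in for some Web-addressable thing; values invented.
RESOURCE = {"uri": "http://example.org/id/Mammal", "label": "Mammal"}

# Hypothetical serializations this endpoint can produce.
SERIALIZERS = {
    "application/json": lambda r: json.dumps(r),
    "text/plain": lambda r: '{} has label "{}"'.format(r["uri"], r["label"]),
}

def negotiate(accept_header, default="application/json"):
    """Pick the first acceptable media type we can serve, per the Accept header."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop q= and other parameters
        if media_type in SERIALIZERS:
            return media_type
        if media_type == "*/*":
            return default
    return None  # nothing matched; the caller should answer 406

def respond(accept_header):
    """Return an (HTTP status, content type, body) triple for the request."""
    chosen = negotiate(accept_header)
    if chosen is None:
        return 406, "text/plain", "Not Acceptable"
    return 200, chosen, SERIALIZERS[chosen](RESOURCE)

print(respond("text/plain, */*;q=0.1")[:2])  # (200, 'text/plain')
```

The point is that the response format falls out of the request headers alone; no extra messaging layer or side-channel format parameter is needed.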
For lack of a better phrase, Zitgist has an internal plan that it calls its ‘Grand Vision’ for moving forward. Though something of a living document, this reference describes how Zitgist is going about its business and development. It does not describe our markets or products (of course, other internal documents do that), but our internal development approaches and architectural principles.
Just as we have seen a natural marriage between RESTful Web services and RESTful linked data, there are other natural fits and synergies. Some involve component design and architecting for pipeline models. Some involve the natural fit of domain-specific languages (DSLs) to common terminology and design, too. Still others involve use of such constructs in both GUIs and command-line interfaces (CLIs), again all built from common language and terminology that non-programmers and subject matter experts alike can readily embrace. Finally, some involve a preference for Python to wrap legacy apps and to provide a productive scripting environment for DSLs.
If one can step back a bit and realize there are some common threads to the principles behind RESTful Web services and linked data, that very same mindset can be applied to many other architectural and design issues. For us, at Zitgist, these realizations have been like turning on a very bright light. We can see clearly now, and it is pretty UMBELievable. These are indeed exciting times.
BTW, I would like to thank Eric Hoffer for the very clever play on words with the UMBELievable tag line. Thanks, Eric, you rock!
In the longer version, Nick describes WOA as based on the architecture of the Web that he further characterizes as “globally linked, decentralized, and [with] uniform intermediary processing of application state via self-describing messages.”
WOA is a subset of the service-oriented architectural style. He describes SOA as comprising discrete functions that are packaged into modular and shareable elements (“services”) that are made available in a distributed and loosely coupled manner.
Representational state transfer (REST) is an architectural style for distributed hypermedia systems such as the World Wide Web. It was named and defined in Roy Fielding‘s 2000 doctoral thesis; Roy is also one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification.
REST provides principles for how resources are defined and used and addressed with simple interfaces without additional messaging layers such as SOAP or RPC. The principles are couched within the framework of a generalized architectural style and are not limited to the Web, though they are a foundation to it.
REST and WOA stand in contrast to earlier Web service styles that are often known by the WS* acronym (such as WSDL, etc.). (Much has been written on RESTful Web services v. “big” WS*-based ones; one of my own postings goes back to an interview with Tim Bray back in November 2006.)
Shortly after Nick coined the WOA acronym, REST luminaries such as Sam Ruby gave the meme some airplay. From an enterprise and client perspective, Dion Hinchliffe in particular has expanded and written extensively on WOA. Besides his own blog, Dion has also discussed WOA several times on his Enterprise Web 2.0 blog for ZDNet.
Largely due to these efforts (and — some would claim — the difficulties associated with earlier WS* Web services), enterprises are paying much greater heed to WOA. It is increasingly being blogged about and highlighted at enterprise conferences.
While exciting, that is not what is most important in my view. What is important is that the natural connection between WOA and linked data is now beginning to be made.
Linked data is a set of best practices for publishing and deploying data on the Web using the RDF data model. The data objects are named using Web uniform resource identifiers (URIs), emphasize data interconnections, and adhere to REST principles.
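The core of that data model can be sketched in a few lines. The following toy triple store (with invented example.org URIs, not identifiers from any real vocabulary) shows the two essential habits: name things with HTTP URIs, and record statements as (subject, predicate, object) triples so that data interconnections can be followed.

```python
# A toy triple store; all example.org URIs are invented for illustration.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
RDFS_SEEALSO = "http://www.w3.org/2000/01/rdf-schema#seeAlso"

# Name things with URIs; state facts about them as triples.
add("http://example.org/id/Mammal", RDFS_LABEL, "Mammal")
add("http://example.org/id/Mammal", RDFS_SEEALSO, "http://example.org/id/Animal")

def links_from(subject):
    """'Follow your nose': every URI reachable as an object from a subject."""
    return {o for s, p, o in triples if s == subject and o.startswith("http://")}

print(links_from("http://example.org/id/Mammal"))  # {'http://example.org/id/Animal'}
```

Because every object that is itself a URI can be dereferenced over HTTP, each triple doubles as a navigable link, which is what gives linked data its REST character.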
Most recently, Nick began picking up the theme of linked data on his new Gartner blog. Enterprises now appreciate the value of an emerging service aspect based on HTTP and accessible by URIs. The idea is jelling that enterprises can now process linked data architected in the same manner.
I think the similar perspectives between REST Web services and linked data become a very natural and easily digested concept for enterprise IT architects. This is a receptive audience because it is these same individuals who have experienced first-hand the challenges and failures of past hype and complexity from non-RESTful designs.
It helps immensely, of course, that we can now look at the major Web players such as Google and Amazon and others — not to mention the overall success of the Web itself — to validate the architecture and associated protocols for the Web. The Web is now understood as the largest Machine designed by humans and one that has been operational every second of its existence.
Many of the same internal enterprise arguments that are being made in support of WOA as a service architecture can be applied to linked data as a data framework. For example, look at Dion’s 12 Things You Should Know About REST and WOA and see how most of the points can be readily adopted to linked data.
So, enterprise thought leaders are moving closer to what we now see as the reality and scalability of the Web done right. They are getting close, but there is still one piece missing.
I admit that I have sometimes tended to think of enterprise systems as distinct from the public Web. And, for sure, there are real and important distinctions. But from an architecture and design perspective, enterprises have much to learn from the Web’s success.
With the Web we see the advantages of a simple design, of universal identifiers, of idempotent operations, of simple messaging, of distributed and modular services, of simple interfaces, and, frankly, of openness and decentralization. The core foundations of HTTP and adherence to REST principles have led to a system of such scale and innovation and (growing) ubiquity as to defy belief.
So, the first observation is that the Web will be the central computing touchstone and framework for all computing systems for the foreseeable future. There simply is no question that interoperating with the Web is now an enterprise imperative. This truth has been evident for some time.
But the reciprocal truth is that these realities are themselves a direct outcome of the Web’s architecture and basic protocol, HTTP. The false dichotomy of enterprise systems as being distinct from the Web arises from seeing the Web solely as a phenomenon and not as one whose basic success should be giving us lessons in architecture and design.
Thus, we first saw the emergence of Web services as an important enterprise thrust — we wanted to be on the Web. But that was not initially undertaken consistent with Web design — which is REST or WOA — but rather as another “layer” in the historical way of doing enterprise IT. We were not of the Web. As the error of that approach became evident, we began to see the trend toward “true” Web services that are now consonant with the architecture and design of the actual Web.
So, why should these same lessons and principles not apply as well to data? And, of course, they do.
If there is one area that enterprises have been abject failures in for more than 30 years it is data interoperability. ETL and enterprise busses and all sorts of complex data warehousing and EAI and EIA mumbo jumbo have kept many vendors fat and happy, but few enterprise customers so. On almost every single dimension, these failed systems have violated the basic principles now in force on the Web based on simplicity, uniform interfaces, etc.
OK, so how many of you have read the HTTP specifications? How many understand them? What do you think the fundamental operational and architectural and design basis of the Web is?
HTTP is often described as a communications protocol, but it really is much more. It represents the operating system of the Web as well as the embodiment of a design philosophy and architecture. Within its specification lies the secret of the Web’s success. REST and WOA quite possibly require nothing more to understand than the HTTP specification.
Of course, the HTTP specification is not the end of the story, just the essential beginning for adaptive design. Other specifications and systems layer upon this foundation. But, the key point is that if you can be cool with HTTP, you are doing it right to be a Web actor. And being a cool Web actor means you will meet many other cool actors and be around for a long, long time to come.
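One of the specification’s central lessons, method semantics, can be shown with a toy in-memory store. This is a sketch of the idea only (the paths and resources are invented): GET is safe, PUT is idempotent — repeat it and the end state is unchanged — while POST is not, minting a new resource on every call.

```python
# Toy in-memory resource store; paths are invented for the example.
store = {}
counter = 0

def get(path):
    return store.get(path)              # safe: reading changes nothing

def put(path, body):
    store[path] = body                  # idempotent: repeating it is harmless
    return path

def post(collection, body):
    global counter
    counter += 1                        # NOT idempotent: each call mints a new URI
    path = "{}/{}".format(collection, counter)
    store[path] = body
    return path

put("/concepts/mammal", "Mammal")
put("/concepts/mammal", "Mammal")       # same end state as a single PUT
a = post("/concepts", "Bird")
b = post("/concepts", "Bird")           # a second, distinct resource
print(len(store), a != b)               # 3 True
```

Retries, caches and intermediaries all depend on these guarantees, which is why honoring them is a precondition for being that “cool Web actor.”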
An understanding of HTTP can provide similar insights with respect to data and data interoperability. Indeed, the fancy name of linked data is nothing more than data on the Web done right — that is, according to the HTTP specifications.
Just as packets need their routers to get to their proper location based on resolving the host name of a URI to a physical device, data or information on the Web needs similar context. And, one mechanism by which such context can be provided is through some form of logical referencing framework by which information can be routed to its right “neighborhood”.
I am not speaking of routing to physical locations now, but the routing to the logical locations about what information “is about” and what it “means”. On the simple level of language, a dictionary provides such a function by giving us the definition of what a word “means”. Similar coherent and contextual frameworks can be designed for any information requirement and scope.
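A crude sketch of such logical routing: map surface terms to subject-concept identifiers, then climb to a broader concept for context. All of the identifiers and mappings below are invented for illustration; a real reference structure like UMBEL is vastly larger and formally defined.

```python
# Invented term-to-concept table; a real reference structure is far larger.
term_to_concept = {
    "puma": "http://example.org/concept/Cougar",
    "cougar": "http://example.org/concept/Cougar",
    "mountain lion": "http://example.org/concept/Cougar",
}

# Invented broader-concept links giving each concept its "neighborhood".
broader = {
    "http://example.org/concept/Cougar": "http://example.org/concept/Mammal",
}

def route(term):
    """Resolve a surface term to its concept and a broader context, if known."""
    concept = term_to_concept.get(term.lower())
    return concept, broader.get(concept)

print(route("Puma"))
```

Note how three different surface terms “route” to one concept, and that concept in turn sits in a broader neighborhood — the dictionary analogy from above, made mechanical.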
Of course, enterprises have been doing similar things internally for years by adopting common vocabularies and the like. Relational data schema are one such framework even if they are not always codified or understood by their enterprises as such.
Over the past decade or two we have seen trade and industry associations and standards bodies, among others, extend these ideas of common vocabularies and information structures such as taxonomies and metadata to work across enterprises. This investment is meaningful and can be quite easily leveraged.
As Nick notes, efforts such as what surrounds XBRL are one vocabulary that can help provide this “routing” in the context of financial data and reporting. So, too, can UMBEL as a general reference framework of 20,000 subject concepts. Indeed, our unveiling of the recent LOD constellation points to a growing set of vocabularies and classes available for such contexts. Literally thousands and thousands of such existing structures can be converted to Web-compliant linked data to provide the information routing hubs necessary for global interoperability.
And, so now we come down to that missing piece. Once we add context as the third leg to this framework stool to provide semantic grounding, I think we are now seeing the full formula powerfully emerge for the semantic Web:
SW = WOA + linked data + coherent context
This simple formula becomes a very powerful combination.
Just as older legacy systems can be exposed as Web services, and older Web services can be turned into WOA ones compliant with the Web’s architecture, we can transition our data in similar ways.
The Web has been pointing us to adaptive design for both services and data since its inception. It is time to finally pay attention.
Today marks the first public release of UMBEL, a lightweight subject concept reference structure for the Web. This version 0.70 release required a full 12 months and many person-years of development effort.
UMBEL (Upper Mapping and Binding Exchange Layer) is a lightweight ontology structure for relating Web content and data to a standard set of 20,000 subject concepts. Its purpose is to provide a fixed set of common reference points in the global knowledge space. These subject concepts have defined relationships between them, and can act as semantic binding nodes for any Web content or data. The UMBEL reference structure is a large, inclusive, linked concept graph.
Connecting to the UMBEL structure gives context and coherence to Web data. In this manner, Web data can be linked, made interoperable, and more easily navigated and discovered. UMBEL is a great vehicle for interconnecting content metadata.
The UMBEL vocabulary defines some important new predicates and leverages existing semantic Web standards. The ontology is provided as Linked Data with Web services access (and pending SPARQL endpoints). Besides its 20,000 subject concepts and relationships distilled from OpenCyc, a further 1.5 million named entities are mapped to that structure. The system is easily extendable.
Fred Giasson, UMBEL’s co-editor, posts separately on how the UMBEL vocabulary can enrich existing semantic Web ontologies and techniques. Also, see the project’s Web site for additional background and explanatory information on the project.
UMBEL is provided as open source under the Creative Commons 3.0 Attribution-Share Alike license; the complete ontology with all subject concepts, definitions, terms and relationships can be freely downloaded. All subject concepts are Web-accessible as Linked Data URIs.
Five volumes of technical documentation are available. The two key volumes explaining the UMBEL project and process are UMBEL Ontology, Vol. A1: Technical Documentation (also online) and Distilling Subject Concepts from OpenCyc, Vol. B1: Overview and Methodology.
A new overview slideshow is also available.
There are two input files for Cytoscape, the open source program used for certain large-scale UMBEL visualization and analysis:
The two complete references to all current and archived files and access procedures in the UMBEL project are UMBEL Ontology, Vol. A2: Subject Concepts and Named Entities Instantiation and Distilling Subject Concepts from OpenCyc, Vol. B2: Files Documentation. Finally, the fifth documentation volume accompanying the release is Distilling Subject Concepts from OpenCyc, Vol. B3: Appendices, which provides supporting materials and detailed backup.
As discussed on the Web site on UMBEL’s role, the project currently has adopted two pivotal positions with respect to OpenCyc and its use:
For these positions to be effective, we are putting in place mechanisms for UMBEL to collect and forward community comments regarding the suitability of the subject concept structure, and for Cycorp to deliberate on that input and respond as appropriate to maintain the coherence of the knowledge base.
Fortunately, Cycorp has been supremely responsive to date and has made changes to the OpenCyc concept structure and its conversion to OWL in support of needs and observations brought forth by the UMBEL project. We anticipate that this excellent working relationship will continue.
This version 0.70 release is based on the versioning and numbering presented in the supporting documentation. Releasing with a version number below 1.0 also signals the newness and relative immaturity of the system.
This release is the first one in which the UMBEL subject concepts and ontology will be applied as a real vocabulary in public settings. Some areas are known to be weaker and less complete than others. Some, such as the coverage of Internet and Web topics particular to domain experts, are relatively sparse. Others, such as the organization of science and academic disciplines, have seen much improvement, but more is necessary. Still additional areas will certainly surface as warranting better subject concept coverage.
Input mechanisms are being put in place for user feedback, and discussion is always welcome at the project’s discussion forum and mailing list. We anticipate rapid changes and versioning over the next six months or so, which is also roughly the forecasted horizon for the first production-grade version 1.0.
A number of individuals and organizations have contributed significantly to this release, for which the project offers hearty thanks.
Zitgist LLC has been the major source of staff time and hosting services to the project. Two of Zitgist’s principals, Mike Bergman and Fred Giasson, have acted as editors on the UMBEL project. Zitgist also has contributed nearly two person-years of effort to the project, and intends to continue to lead and manage it with a substantial future commitment of time and effort.
OpenLink Software has been the major source of infrastructure, financing and software for the project. OpenLink’s Virtuoso virtual data management system is the hosting software environment for UMBEL and its Web services. Kingsley Idehen, CEO and President of OpenLink, has been a key source of inspiration for the project.
Cycorp is the developer of the Cyc knowledge base, with more than 1,000 person-years of effort behind it, from which the OpenCyc open source version is derived. Since the initial selection of OpenCyc for UMBEL, Cycorp staff have devoted many person-months of effort to help explain the underlying system and, most recently, to make improvements and revisions to OpenCyc and its OWL version in response to project input. Larry Lefkowitz, VP of business development, has been a very effective interface with the project.
YAGO is a project from Fabian Suchanek, Gjergji Kasneci and Gerhard Weikum of the Max-Planck-Institute for Computer Science, Saarbruecken, Germany. It is based on extracting and organizing entities from Wikipedia according to the WordNet concept structure. YAGO demonstrated the methodology for replacing the native Wikipedia structure with alternate external structures and provided the starting set of named entities used within UMBEL. Fabian has been especially helpful in data, software and methodology support to the project.
The Cyc Foundation and its members have been devoted to Web exposure of OpenCyc and have provided great guidance to the project in learning and navigating the knowledge base. Their concepts browser and other Web services have also been extremely helpful to the project’s initial ideas and testing. Mark Baltzegar and John De Oliveira, the two lead directors of the Cyc Foundation, have been particularly helpful.
Moritz Stefaner is one of the innovators and rising stars in large-scale data visualization. Moritz has kindly contributed his cool Flash explorer implementation used in UMBEL’s Subject Concept Explorer and continues to make ongoing improvements to UMBEL’s visualization. Moritz’s Web site and separate blog are each worth perusing for neat graphics and ideas.
Thanks, all of you! This is a day we have worked long and hard to see come to reality. As Fred puts it, let the fun begin!